[00:00] <thumper> kk
[00:00]  * thumper heads out for lunch
[00:34] <anastasiamac> waigani: hi :D
[00:48] <waigani> anastasiamac: hello :)
[00:49] <mup> Bug #1498232 opened: provider/ec2: provisioning with spaces should be provider-independent <ec2-provider> <network> <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1498232>
[00:49] <mup> Bug #1498235 opened: provider/ec2: add unit and feature tests for provisioning instances with spaces in constraints <ec2-provider> <network> <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1498235>
[00:49] <anastasiamac> waigani: i was wondering if ur old PR needed reviewing but I've reviewed it anyway :P
[00:49] <anastasiamac> waigani: so all good :D
[00:49] <anastasiamac> waigani: how r u, anyway?
[00:57] <waigani> anastasiamac: sorry, internet dropped out. thanks for the review.
[00:57] <axw> lazypower: if you're around, I can field answers about the loop storage provider now
[00:57] <axw> lazypower: if not, feel free to email me
[01:58] <natefinch> thumper, davechen1y, axw, etc: I have a function that gets run by a worker that iterates over a collection and performs an action for each document.  In theory, one or more of these actions may fail.  How do I handle the failures? Do I break out of the loop at the first one and let the worker restart? Do I run as many as possible and then collate the errors somehow?
[01:59] <axw> natefinch: what's the action?
[01:59] <axw> I'm not sure there's a general answer
[02:01] <natefinch> axw: assigning units to machines.... this is the second half of a bugfix that makes adding services atomic... in this case, the user's desired units and their placement criteria are saved to a new collection when the service is created, and a worker comes along and does the assigning.
[02:03] <axw> natefinch: and what would cause individual ones to fail? bad placement?
[02:04] <axw> natefinch: I think this is a case where you'd want to try them all and collate errors (or set individual errors as status units?), so one bad placement doesn't block everything
[02:04] <natefinch> axw: in theory the service could have been destroyed, or yeah, bad placement
[02:05] <natefinch> axw: yeah, that was what I had gone to do, but I was hoping someone else had made a standard error collating thingy
[02:06] <axw> natefinch: maybe collating isn't the right thing to do though. if it may fail due to user input, then you probably want to report the error rather than bounce the worker
[02:09] <natefinch> axw: we validate the placement synchronously during service deployment, so the only thing that could really fail is if you specify some combination that doesn't match any possible machines.
[02:10]  * thumper agrees with axw
[02:10] <thumper> collate and report
[02:11] <natefinch> thoughts on how to turn N errors into 1 error?
[02:11] <axw> well I'm actually saying don't collate/combine, but return an error per unit... and then update unit status with those errors
[02:11] <natefinch> axw: the unit statuses will be updated, that code already exists and I'm just reusing it
[02:12] <axw> natefinch: ok, what're you going to do with the error then?
[02:13] <natefinch> axw: that's what I was just thinking about... it's just an error reported to the worker, it either logs it and ignores it or restarts.  There's not much else to do.  If the state code logs each error individually... the worker shouldn't really need every detail of every error.
[02:15] <axw> natefinch: so I'd probably return something like ([]UnitAssignmentResult, error) to the worker, where each result contains a unit-specific error. you can log them or not, but only bounce the worker if the top-level error result is non-nil
[02:16] <axw> natefinch: and if you need to do something with them later that isn't logging, you haven't thrown away information
[02:16] <natefinch> axw: makes sense.  Thanks for the help :)
[02:17] <axw> natefinch: nps
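The per-unit result approach axw sketches above could look roughly like this in Go. This is a minimal, self-contained sketch, not actual juju code: `UnitAssignmentResult`, `assignUnits`, and the hard-coded placement-failure check are hypothetical stand-ins for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

// UnitAssignmentResult is a hypothetical per-unit result type,
// carrying the unit-specific error (nil on success).
type UnitAssignmentResult struct {
	Unit string
	Err  error
}

// assignUnits tries every assignment rather than stopping at the first
// failure. It returns per-unit results plus a top-level error reserved
// for systemic failures (e.g. a lost DB connection) that should bounce
// the worker.
func assignUnits(units []string) ([]UnitAssignmentResult, error) {
	results := make([]UnitAssignmentResult, 0, len(units))
	for _, u := range units {
		var err error
		if u == "bad/0" { // stand-in for a placement failure
			err = errors.New("no machine matches placement")
		}
		results = append(results, UnitAssignmentResult{Unit: u, Err: err})
	}
	return results, nil // nil: no systemic failure, worker keeps running
}

func main() {
	results, err := assignUnits([]string{"good/0", "bad/0", "good/1"})
	if err != nil {
		panic(err) // only a top-level error should restart the worker
	}
	for _, r := range results {
		if r.Err != nil {
			fmt.Printf("unit %s: %v\n", r.Unit, r.Err)
		}
	}
}
```

The point of the shape is the one axw makes: per-unit errors are data (log them, set unit status), while the second return value is the only thing that decides whether the worker bounces.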
[02:48] <axw> davechen1y: in response to your review comment: https://github.com/juju/juju/pull/3346
[02:58] <davechen1y> axw: lgtm, i dunno why reviewboard hasn't picked it up
[02:58] <axw> davechen1y: ta. maybe because I was lazy and didn't add a description
[02:59] <axw> oh it's there now... just a bit slow
[06:29] <thumper> and with that epic email, I'm done for the day
[06:29] <thumper> laters
[08:08] <mup> Bug #1498349 opened: juju upgrade fails with tools upload error due to invalid series "wily" <juju-core:New> <https://launchpad.net/bugs/1498349>
[08:56] <dooferlad> mgz: do you want me to split up those merge requests some more, or are you ok with them?
[08:57] <dooferlad> mgz: https://code.launchpad.net/~dooferlad/juju-ci-tools/addressable-containers-tools/+merge/271836, https://code.launchpad.net/~dooferlad/juju-ci-tools/addressable-containers-assess/+merge/271837
[13:09] <mup> Bug #1498481 opened: HAProxy charm broken by recent commit <blocker> <charm> <ci> <haproxy> <quickstart> <regression> <juju-core:Incomplete> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1498481>
[13:27] <perrito666> mattyw: your tweet just made me get again into the painful task of getting refills for my fountain pen
[13:27] <mattyw> perrito666, you're one of those people as well are you?
[13:27] <mattyw> perrito666, I'm new to it all
[13:27] <mattyw> tasdomas, ^^
[13:28] <perrito666> mattyw: lol, I am now using a very nice staedtler I got as a prize in the last sprint charming contest because finding the refills for my pen is especially hard here
[13:28] <perrito666> but yes, I am picky about my writing materials
[13:28] <perrito666> and I write a lot
[13:35] <rick_h_> perrito666: /me looks at my staedtler fountain pen I picked up during nuremberg sprint
[13:36] <katco> perrito666: rick_h_: i only use the one true writing utensil: the apple pencil
[13:36] <rick_h_> katco: hah!
[13:36] <rick_h_> my only pencils are my old drafting pencils
[13:36] <rick_h_> solid nice things
[13:37] <wwitzel3> I use the square construction pencils for everything
[13:37] <rick_h_> wwitzel3: those are my fav in the shop
[13:37] <rick_h_> nothing like taking a break to sharpen a pencil with a knife or a bandsaw :)
[13:38] <wwitzel3> rick_h_: I bought a box for the shop and they've migrated inside as well, so now I just use them for everything
[13:39] <perrito666> I only use the square ones when I help my wife with cooking, those things will write on anything
[13:39] <wwitzel3> even got me a proper carpenter pencil sharpener
[13:39] <perrito666> wwitzel3: oh, is that a thing? I use an opinel
[13:40] <wwitzel3> perrito666: yeah, they are very cheap, around a US dollar or two, but they are worth it, extends the life of the pencil by a lot (for me anyway)
[13:44] <perrito666> well look at that, never saw one of those
[13:45] <perrito666> katco: alexisb I completely forgot about the interlock sorry
[13:46] <katco> perrito666: too busy talking about pencils
[13:46] <perrito666> katco: and compiling mongo, oh joy
[13:46] <katco> perrito666: calendar notifications ftw ;)
[13:46] <perrito666> katco: yeah, I dismissed the notification of 10 min before but I have no notif for "starting now"
[13:47] <katco> perrito666: gcal really needs a snooze button
[13:53] <perrito666> katco: +1
[14:15] <mup> Bug #1498511 opened: state server records lxbr0 address which clients attempt to use for charm downloads <juju-core:New> <https://launchpad.net/bugs/1498511>
[14:15] <mup> Bug #1498518 opened: Error when attempting to deploy service in manual env without specifying machine <juju-core:New> <https://launchpad.net/bugs/1498518>
[14:25] <rogpeppe> anyone ever seen this error before (from the state tests) ? http://paste.ubuntu.com/12521386/
[14:27] <rogpeppe> hmm, looks like it's a TLS error
[14:28] <natefinch> rogpeppe: doesn't look familiar.
[14:29] <rogpeppe> natefinch: yeah, first time i've seen it too
[14:31] <natefinch> niemeyer: you around?
[14:31] <niemeyer_> natefinch: Yes, but on a meeting
[14:32] <natefinch> niemeyer_: ok, ping me when you're out?  Trying to figure out which mgo assertion is failing
[15:04] <rogpeppe> i've been looking for a review of this for 5 days now. any takers? http://reviews.vapour.ws/r/2689/
[15:07] <rogpeppe> mgz, sinzui: i've pushed up a hopefully-working version of the use-charm.v6-unstable feature branch. any chance we could get a CI run on it, please?
[15:08] <sinzui> rogpeppe: yes. I may not need to intervene because CI has better feature branch rules
[15:08] <rogpeppe> sinzui: thanks
[15:11] <rogpeppe> natefinch: you're OCR? fancy a review? http://reviews.vapour.ws/r/2689/
[15:15] <perrito666> rogpeppe: reviewed
[15:15] <rogpeppe> perrito666: ta!
[15:17] <rogpeppe> perrito666: not sure about your "comment name is wrong" suggestion. the comment looks as correct as it was before. or are you saying that it was always wrong?
[15:18] <rogpeppe> perrito666: FWIW i wouldn't use that phrasing either, but i didn't want to make too many gratuitous changes
[15:18] <perrito666> sorry, if you look carefully the name of the function in the comment and the actual function name are not the same
[15:18] <rogpeppe> perrito666: ha, good point
[15:18] <rogpeppe> perrito666: so it was always wrong
[15:18] <perrito666> most likely, just noticed and since you were there :p
[15:20] <perrito666> apart from that looks good, I really like that change
[15:20] <perrito666> I feel this doc is lying to me http://doc.bazaar.canonical.com/latest/en/tutorials/using_bazaar_with_launchpad.html#personal-branches
[15:21] <perrito666> I branched a project and push will not do what it says there
[15:23] <abentley> perrito666: Did you do the launchpad-login step above?
[15:23] <perrito666> abentley: yup
[15:23] <abentley> perrito666: What happens?
[15:24] <natefinch> rogpeppe: yeah, sorry, forgot I was OCR
[15:25] <rick_h_>  natefinch http://reviews.vapour.ws/r/2732/ has one as well please for frankban
[15:25] <perrito666> abentley: http://pastebin.ubuntu.com/12521781/
[15:26] <natefinch> rick_h_: looking
[15:28] <abentley> perrito666: I suspect juju-mongodb is a package, not a project.
[15:30] <abentley> perrito666: The package URL is something like lp:~hduran-8/ubuntu/trusty/juju-mongodb/juju-mongodb2.6
[15:32] <wwitzel3> where does the aws-quickstart-bundle live?
[15:33] <frankban> natefinch: ty!
[15:33] <sinzui> wwitzel3: lp:juju-ci-tools/repository maybe
[15:34] <sinzui> wwitzel3: http://bazaar.launchpad.net/~juju-qa/juju-ci-tools/repository/view/head:/bundles.yaml
[15:36] <wwitzel3> sinzui: thank you
[15:47] <perrito666> abentley: ah thank you my mistake
[16:08] <mattyw> TheMue, ping?
[16:09] <natefinch> alexisb, katco: have we picked a date for the juju-core sprint?
[16:13] <frobware> mattyw, he's out on holiday this week
[16:14] <mattyw> frobware, ok no problem
[16:15] <katco> natefinch: i think dec. 7th-11th is looking like it may be the winner. nothing official.
[16:23] <natefinch> katco: thanks.  I presume no word on where yet?
[16:24] <katco> natefinch: nope. i think everyone's still focused on upcoming sprint
[16:26] <natefinch> katco: fair enough... actually, I guess it's further away than I was thinking... I forgot november exists ;)
[16:27] <katco> natefinch: haha that's sep. for me... always seems to blow by for some reason
[16:40] <wwitzel3> sinzui: trying to reproduce https://bugs.launchpad.net/juju-core/+bug/1498481 with current 1.25 against aws, is this happening all the time for you? Or just intermittent?
[16:40] <mup> Bug #1498481: HAProxy charm broken by recent commit <blocker> <charm> <ci> <haproxy> <quickstart> <regression> <juju-core:Incomplete> <juju-core 1.25:Triaged by wwitzel3> <https://launchpad.net/bugs/1498481>
[16:41] <sinzui> wwitzel3: all the time on aws, hp, joyent, and maas
[16:42] <wwitzel3> sinzui: should I be doing something other than using deployer to deploy the bundle to replicate?
[16:43] <mup> Bug #1498575 opened: Create min helper function to eliminate duplicate definitions <juju-core:New for cherylj> <https://launchpad.net/bugs/1498575>
[16:46] <sinzui> wwitzel3: after quickstart finished, the script just polled the status. It exits early if an agent is in error. If all agents are started, then success. This is a success from a previous test of 1.25 http://reports.vapour.ws/releases/3065/job/aws-quickstart-bundle/attempt/1016
[17:01] <mup> Bug #1498577 opened: juju deploy lis-test-charm hangs in pending <juju-core:New> <https://launchpad.net/bugs/1498577>
[17:26] <alexisb> natefinch, yes, week of dec 7th
[17:26] <alexisb> natefinch, venue still not locked down
[17:27] <natefinch> alexisb:  cool, thanks
[17:27] <perrito666> alexisb: well locking down the venue just for us is a bit overkill
[17:28] <alexisb> perrito666, I want to ensure proper network
[17:28] <rick_h_> perrito666: take over all the things!!!
[18:39] <natefinch> niemeyer_: ping?
[19:12] <perrito666> jamespage: thank you :D https://launchpad.net/~hduran-8/+archive/ubuntu/juju-mongodb2.6
[19:16] <niemeyer_> natefinch: Heya
[19:17] <niemeyer_> natefinch: So tell me, what's up with assertions there?
[20:05] <natefinch> niemeyer_: hey, sorry, had to step out.  Back now.  There's just a large-ish list of ops being put into a single transaction. Most of them were there before, but in separate transactions.  The transaction is returning ErrAborted, which I understand means an assertion is failing?  But I don't know how to figure out which one, as there are several.
[20:06] <niemeyer_> natefinch: There's no magic way.. you need to introspect the current state
[20:07] <natefinch> niemeyer_: ahh, boo.  I had hoped there was a magic log file somewhere or something that could tell me which one was failing.  That's fine.  I can slog it out the hard way.
[20:07] <niemeyer_> natefinch: No.. it's very hard to do that given the constraints we have in terms of db features
[20:08] <niemeyer_> natefinch: All the assertions need to match before we fail.. when we fail, we can't be sure of what is broken since things are running concurrently
[20:10] <natefinch> niemeyer_: understood.
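The "introspect the current state" approach niemeyer_ describes can be sketched as a debugging helper that re-checks each operation's assertion against the current documents after an abort. This is an illustrative, runnable toy, not mgo/txn's actual API: `Op`, `Doc`, and `findFailedAsserts` are invented names, and the predicate functions stand in for mgo assert documents. As noted above, concurrent writers mean the answer is only a hint, never a guarantee.

```go
package main

import "fmt"

// Doc is a stand-in for a document's current state.
type Doc map[string]interface{}

// Op mirrors the rough shape of an mgo/txn operation for this sketch:
// a document name plus an assertion predicate over its current state.
type Op struct {
	Name   string
	Assert func(Doc) bool
}

// findFailedAsserts re-checks each op's assertion against the current
// state after an aborted transaction and reports the ones that no
// longer hold. Because other writers may have run in the meantime,
// this only hints at which assertion aborted the transaction.
func findFailedAsserts(state map[string]Doc, ops []Op) []string {
	var failed []string
	for _, op := range ops {
		if !op.Assert(state[op.Name]) {
			failed = append(failed, op.Name)
		}
	}
	return failed
}

func main() {
	state := map[string]Doc{
		"service": {"life": "alive"},
		"machine": {"life": "dead"},
	}
	ops := []Op{
		{"service", func(d Doc) bool { return d["life"] == "alive" }},
		{"machine", func(d Doc) bool { return d["life"] == "alive" }},
	}
	fmt.Println(findFailedAsserts(state, ops))
}
```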
[20:10] <mup> Bug #1498642 opened: juju lxc template does not DHCP release <juju-core:New> <https://launchpad.net/bugs/1498642>
[20:19] <katco> wwitzel3: hey, need an update on bug 1498481 for the release meeting. how's it going?
[20:19] <mup> Bug #1498481: HAProxy charm broken by recent commit <blocker> <charm> <ci> <haproxy> <quickstart> <regression> <juju-core:Incomplete> <juju-core 1.25:Triaged by wwitzel3> <https://launchpad.net/bugs/1498481>
[20:35] <perrito666> man waiting for ppas to build is nerve-wracking
[20:37] <mup> Bug #1498642 opened: juju lxc template does not DHCP release <juju-core:New> <https://launchpad.net/bugs/1498642>
[20:38] <perrito666> you know mup I feel like you are trying to tell me something
[20:48] <katco> wwitzel3: ping
[20:49] <mwhudson> good morning
[20:50] <katco> mwhudson: o/
[20:53] <mwhudson> perrito666: someone pointed me at https://wiki.ubuntu.com/SimpleSbuild recently
[20:54] <mwhudson> perrito666: makes for fewer ppa embarrassments
[20:55] <perrito666> mwhudson: thanks
[20:56] <natefinch> you know it's bad when the simple process is 11 steps
[21:01] <perrito666> mwhudson: but this is the first build of my ppa anyway, and its mongo so it will take time :p
[21:02] <mwhudson> ah yes
[21:04] <katco> wwitzel3: ping ping ping
[21:06] <alexisb> katco, thumper can you guys cover the release call today?
[21:06] <alexisb> and ping me if you need me
[21:07] <katco> alexisb: yes; anything in particular you want us to raise?
[21:07] <alexisb> give green light on beta1 release
[21:07] <alexisb> do a first pass of priorities on beta2
[21:07] <alexisb> let me know if you have questions
[21:08] <katco> alexisb: can't give a green light on beta1, 1 crit. open. wwitzel3 is working on it
[21:08] <alexisb> katco, crap yeah ok, forgot about that
[21:08] <alexisb> thanks
[21:48] <davecheney> what's the story with https://bugs.launchpad.net/juju-core/+bug/1497297
[21:48] <mup> Bug #1497297: TestFindToolsExactInStorage fails for some archs Again <blocker> <ci> <precise> <regression> <test-failure> <unit-tests> <juju-core:Fix Committed by cherylj> <https://launchpad.net/bugs/1497297>
[21:48] <davecheney> it's been fixed released since friday
[21:49] <davecheney> sorry, fix committed
[21:50] <katco> davecheney: sinzui says we're still waiting for a blessed master
[21:54] <davecheney> katco: ok, i'll keep waiting
[21:54] <davecheney> http://reviews.vapour.ws/r/2731/
[21:54] <davecheney> review, going free
[21:54] <katco> davecheney: you might poke him for a re-run in just a bit
[22:14] <mup> Bug #1484419 changed: Local provider: fail to download charm from state server when using an isolated network <deploy> <ha> <landscape> <network> <juju-core:Triaged> <https://launchpad.net/bugs/1484419>