[01:57] <axw> wallyworld: just making a cup of tea, will be a few minutes late
[02:01] <wallyworld> sure
[03:06] <axw> thumper: what's next-release? was that just to not disrupt 1.23?
[03:06] <thumper> axw: yep
[03:06] <thumper> a feature branch
[03:07] <thumper> so we weren't holding many branches open against master
[03:07] <axw> okey dokey
[03:07] <axw> oh balls, I didn't realise trunk was unblocked
[03:07] <thumper> axw: we are trialing a number of feature branches, and will report more in Nuremberg
[03:07] <axw> thumper: cool. trunk blocking kinda screws everyone atm
[03:07] <thumper> agreed
[05:30] <tasdomas> morning
[07:58] <wallyworld_> axw: i found the correct apparmor profile to use to allow mounting loop devices http://reviews.vapour.ws/r/1154/
[08:01] <axw> wallyworld_: https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-apparmor -- still unsafe, and I still don't think we should enable this unless it's requested
[08:03] <wallyworld_> axw: that web page is out of date - ls /etc/apparmor.d/lxc/ shows the mount profile
[08:04] <axw> wallyworld_: the comments in the file line up with the commentary on the page - what is out of date?
[08:04] <wallyworld_> i guess we can only add the extra config if needed for storage. but we would then not be able to hogsmash a new unit with storage onto a container
[08:05] <wallyworld_> axw: i didn't see the lxc-container-default-with-mounting profile mentioned
[08:05] <wallyworld_> only lxc-container-default-with-nesting
[08:06] <axw> wallyworld_: "Another profile shipped with lxc allows containers to mount block filesystem types like ext4. This can be useful in some cases like maas provisioning, but is deemed generally unsafe since the superblock handlers in the kernel have not been audited for safe handling of untrusted input."
[08:08] <wallyworld_> axw: hmmm, ok, i wonder when the auditing might occur. i guess then we only enable the extra config if needed for storage
[08:08] <wallyworld_> but no hogsmash
[08:15] <axw> wallyworld_: can't do placement yet anyway
[08:15] <axw> I suppose we could with containers
[08:15] <axw> non-existing containers that is
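For reference, the profile switch being discussed is selected per container in its LXC config; a minimal sketch, assuming an LXC-1.x-era container at an illustrative path:

```shell
# /var/lib/lxc/<container>/config  (path illustrative)
# The default profile is lxc-container-default; switching to the
# mounting profile allows mounting block filesystems like ext4,
# which is deemed generally unsafe (unaudited superblock handlers):
lxc.aa_profile = lxc-container-default-with-mounting
```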
[09:00] <dooferlad> mornin' o/
[09:00] <dimitern> morning dooferlad
[09:01] <dimitern> dooferlad, I've seen your PR - am I correct to assume the order of (application of) the iptables rules do not matter?
[09:02] <dooferlad> dimitern: indeed. They are all inserted right at the top of the table, so since they are all above reject rules, order doesn't matter.
[09:03] <TheMue> morning o/
[09:03] <dimitern> dooferlad, good :) because using a map was the first red light for me - how about go1.3+ and gccgo map ordering :)
[09:03] <dimitern> TheMue, morning
[09:04] <dimitern> dooferlad, clever tests though - never using more than 1 element in the maps during tests
[09:04] <dooferlad> dimitern: that was indeed on my mind.
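dooferlad's point follows from `iptables -I` semantics: with no rule number, `-I` inserts at position 1 of the chain, so every inserted ACCEPT ends up above any trailing REJECT appended with `-A`, and the relative order of the ACCEPTs is irrelevant. A sketch with illustrative ports (needs root):

```shell
# -I CHAIN defaults to rulenum 1, i.e. insert at the top of the chain:
iptables -I INPUT -p tcp --dport 17070 -j ACCEPT   # port illustrative
iptables -I INPUT -p tcp --dport 37017 -j ACCEPT   # port illustrative
# Either insertion order works: both rules sit above any REJECT
# that was appended to the end of the chain with -A.
```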
[09:10] <dimitern> davechen1y, are you about?
[09:11] <davechen1y> what's up ?
[09:11] <dimitern> davechen1y, re this review http://reviews.vapour.ws/r/1150/
[09:12] <dimitern> davechen1y, this fixes a regression on maas after the introduction of addressable containers (lxc) for ec2 and maas
[09:12] <dimitern> davechen1y, windows support was not even on my agenda tbh
[09:13] <dimitern> davechen1y, were they running before on windows?
[09:13] <davechen1y> dimitern: no idea
[09:13] <davechen1y> it is not clear what we're supposed to do about windows
[09:14] <dimitern> davechen1y, right, but I think we should fix it on ubuntu where it's supposed to work, then we can add it to the list of things to enable on windows
[09:14] <davechen1y> dimitern: feel free to land it
[09:14] <dimitern> davechen1y, ok, I'll add a comment, thanks
[09:15] <davechen1y> ok
[09:21] <dimitern> dooferlad, you have a review
[09:21] <dooferlad> dimitern: thanks
[09:27] <dimitern> TheMue, so you have your maas cluster controller running?
[09:28] <TheMue> dimitern: yes, only getting a weird apache error message (will now set the server name explicitly) and I'm currently adding a 2nd eth for a private network
[09:29] <dimitern> TheMue, that's just a warning - you can ignore it
[09:29] <TheMue> dimitern: yeah, but I dislike it :)
[09:30] <dimitern> TheMue, yeah, you'll need to setup both DHCP and static ranges for the internal network and leave the other one unmanaged
[09:30] <dimitern> TheMue, ok :)
[09:30] <TheMue> dimitern: do you have a good doc for dhcp conf when creating a vmaas?
[09:31] <dimitern> TheMue, why do you need to change the conf directly?
[09:31] <TheMue> dimitern: my eth0 has a static address here in my net, so that I can reach it
[09:31] <TheMue> dimitern: I only interpreted your hint as if I would have to do ;)
[09:32] <dimitern> TheMue, ah, no - you can configure all via the maas web ui - Clusters - edit interfaces
[09:32] <TheMue> dimitern: already wondered, because that's how I got the maas docs too
[09:33] <dimitern> TheMue, you'll probably need to enable ip_forwarding on the cluster and add a SNAT iptables rule so the machines behind maas on the internal network can reach outside
[09:34] <dimitern> TheMue, and on the other side - e.g. your machine you'll need a static route for the internal network pointing to your maas's eth0 address (the one you can reach from your machine)
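A rough sketch of the setup described above, assuming eth0 is the externally reachable NIC on the cluster controller and the internal MAAS network is 192.168.100.0/24 behind it (all interface names and addresses are illustrative; run as root):

```shell
# On the MAAS cluster controller:
sysctl -w net.ipv4.ip_forward=1                 # enable ip_forwarding
iptables -t nat -A POSTROUTING -o eth0 \
  -j SNAT --to-source 192.168.1.50              # SNAT out via eth0

# On your own machine: a static route for the internal network,
# pointing at the controller's reachable eth0 address:
ip route add 192.168.100.0/24 via 192.168.1.50
```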
[09:34]  * fwereade out at laura's school for a bit
[09:35] <natefinch> ParamsStateServingInfoToStateStateServingInfo .... really?
[09:35] <TheMue> natefinch: Java came to Go
[09:36] <natefinch> It has state in the name THREE TIMES
[09:36] <TheMue> dimitern: oh how I like this
[09:36] <davecheney> natefinch: now you know how I feel
[09:37] <TheMue> dimitern: "virtual metal" as a service! what.is.virtual.metal? :/
[09:37] <wallyworld_> axw: i'm off to soccer, have a revised lxc config branch almost ready, uses a StorageConfig arg similar to NetworkConfig, can be extended to do what's needed for host loop etc as needed; just need to finish tests, will propose when i get home later
[09:37] <natefinch> davecheney: yep
[09:38] <TheMue> natefinch: you know it's created with the ParamsStateServingInfoToStateStateServingInfoFactory and can be simulated by the ParamsStateServingInfoToStateStateServingInfoMock
[09:38] <dimitern> TheMue, it's stuff from fairy tales :)
[09:38] <dimitern> adamantium
[09:38] <TheMue> natefinch: that's state-of-the-art *scnr*
[09:39] <axw> wallyworld_: cool, thanks. enjoy
[09:39] <TheMue> dimitern: *lol*
[09:39]  * TheMue needs another cup of coffee
[09:39] <dimitern> we could call it AaaS
[09:40] <dimitern> adamantium as a service
[09:40] <natefinch> anything you could possibly pronounce as ass is probably not a good acronym
[09:40] <dimitern> :D
[09:42]  * dimitern has 335MB of logs from yesterday's automated tests on MAAS and EC2 with containers
[09:43] <TheMue> dimitern: and that's only because you compressed them *g*
[09:44] <dimitern> TheMue, not even :) - full 5h of testing at TRACE level
[09:44] <dimitern> 16 separate environments
[09:45] <dimitern> a hefty $4.68 bill in EC2
[09:45]  * TheMue sees dimitern creating a data warehouse for log analyzing
[09:45] <TheMue> a DWaaS
[09:45] <dimitern> TheMue, yeah - running hadoop nodes doing rgrep ERROR
[09:46] <dimitern> voidspace, hey there
[09:46] <TheMue> dimitern: you need a filtered logging, only keeping stuff you're interested in and throwing away the rest
[09:46] <mattyw> TheMue, you available for a review?
[09:47] <jamespage> dimitern, do you know who's working on the systemd support in juju?
[09:47] <TheMue> mattyw: sure, it's my job today
[09:47] <dimitern> voidspace, let's have a chat after standup for the release of addresses worker
[09:47] <dimitern> jamespage, yes, ericsnow mostly
[09:47]  * fwereade out for a while at laura's school
[09:47] <jamespage> dimitern, what tz is he in?
[09:48] <dimitern> jamespage, -7 I believe
[09:48] <dimitern> jamespage, what's up?
[09:49] <jamespage> dimitern, I was wondering what state I could expect vivid support to be in in master and whether any other branches needed testing
[09:49] <jamespage> dimitern, we're busted on vivid testing right now so have a direct interest in seeing this land asap
[09:50] <dimitern> jamespage, AFAIK systemd support has landed and for vivid we no longer have an exception to run it with upstart
[09:50] <dimitern> jamespage, but it's only in 1.23 and master
[09:50] <jamespage> dimitern, hmm - yeah - still non-functional - tested yesterday
[09:50] <jamespage> the cloud-config that gets generated for instance creation still does "start jujud-XXX"
[09:50] <jamespage> which is upstart specific
[09:51] <dimitern> jamespage, hmm right - is that 1.22.0 ?
[09:51] <jamespage> no from master branch with locally built copy
[09:51] <dimitern> jamespage, ok, so it sounds like it's not as complete as I thought
[09:52] <dimitern> jamespage, I'd suggest to write a mail to ericsnow  cc alexisb about this
[09:52] <jamespage> dimitern, I'll retest early next week with a clean master (currently have leader election merged as well)
[09:52] <jamespage> and report back
[09:52] <dimitern> jamespage, cheers
[09:54] <mattyw> davecheney, you still around?
[09:54] <davecheney> whats up ?
[09:59] <voidspace> dimitern: yep
[10:00] <voidspace> dimitern: although I think the high level details are reasonably clear
[10:00] <dimitern> voidspace, that's great :) i've started adding tasks to this new feature card I assigned to you
[10:01] <voidspace> dimitern: yeah, I saw :-)
[10:01] <voidspace> dimitern: thanks
[10:02] <natefinch> rogpeppe: where's the code that adds the new mongo admin users when we run ensure-availability?
[10:03] <dimitern> dooferlad, standup?
[10:03] <rogpeppe> natefinch: i don't think any users are added, are they
[10:03] <natefinch> rogpeppe: system.users gets a user per state machine
[10:03] <natefinch> rogpeppe: brb
[10:19] <natefinch> rogpeppe: system.users has the admin user and then one user per state machine.  no big deal if you don't remember this stuff offhand, I know it's been a year since we worked on it
[10:20] <rogpeppe> natefinch: i think users are added when machines are added
[10:29] <natefinch> rogpeppe: ahh, I see what it is, I was looking for EnsureAdminUser, but most places just call SetAdminMongoPassword
[10:43] <dimitern> voidspace, ok I'm done adding tasks - I think I mentioned everything relevant in the feature card
[10:46] <voidspace> dimitern: great, thanks
[10:59] <natefinch> I think I need to alias 'exit' to 'echo "dude, you're on you're on machine already"'
[10:59] <natefinch> s/on/own
[11:00] <dimitern> natefinch, I have a custom bash prompt - not for that, but it helps in this case
[11:01] <natefinch> dimitern: i do too.. but I do exit <enter> exit <enter> ... really fast, and sometimes do one too many
[11:02] <dimitern> natefinch, :) ctrl+d is too easy
[11:03] <natefinch> dimitern: yeah, I've done that by accident before too  whose bright idea was it to make a hotkey to close a window that could easily be typoed from ctrl+c ? :/
[11:04] <dimitern> :)
[11:04] <natefinch> (and ctrl+s, ctrl+x ctrl+f etc)
[11:05] <perrito666> I used to have a terminal (I think was konsole) where you could setup different background colors depending on the host
[11:06] <dimitern> looking at the wear patterns on my keyboard, ctrl+A, S, C and lastly D are what I use most of the time - my emacs habits haven't caused X to wear off too much yet
[11:08] <perrito666> dimitern: and ctrl?
[11:08] <dimitern> perrito666, left one is still barely readable, right one a lot more
[11:09] <dimitern> but oh boy! cursor keys - all but right are long gone
[11:11] <perrito666> lol. I think, except for my thinkpad, which is my spare machine, I hardly have a computer long enough to wear out anything other than the space bar
[11:12] <perrito666> although I do use an external kb
[11:12] <perrito666> whose wear pattern makes no sense, since I use it in English but it's a Spanish kb
[11:19] <dimitern> :)
[12:54]  * TheMue steps out for a moment, bbiab
[13:19] <sinzui> natefinch, mgz: do either of you have a minute for http://reviews.vapour.ws/r/1157/
[13:20] <mgz_> sinzui: on it
[13:21] <mgz_> sinzui: lgtm
[13:21] <sinzui> thank you mgz
[13:38] <perrito666> meh, I keep writting workflow instead of worload
[13:40] <jw4> oh, yeah... everytime I need to write worload I accidentally type workflow too... what is worload?
[13:40] <perrito666> :p
[13:40] <perrito666> Workload
[13:40] <jw4> hehe
[14:00] <dimitern> jw4, I have the same issue typing attempty vs. attempt
[14:01] <natefinch> I have the same problem with serve vs. server
[14:01] <natefinch> I can't type serve without typing server and deleting the r
[14:03] <ericsnow> dooferlad: you still have questions about juju systemd support?
[14:03] <dooferlad> ericsnow: I didn't think I had any to start with
[14:04] <jw4> it's funny how our brains store patterns
[14:04] <ericsnow> dooferlad: oh, wrong person :)
[14:04] <dooferlad> :-)
[14:04] <bodie_> how do I land a bugfix for 1.23?  I have an open bug.
[14:06] <mgz_> bodie_: propose a merge against the 1.23 branch? or do you mean more, what's the overall procedure?
[14:06] <ericsnow> jamespage: you have questions about juju and systemd?
[14:06] <jamespage> hi ericsnow
[14:06] <bodie_> mgz_, derp, of course
[14:07] <jamespage> ericsnow, indeed I do - vivid has now switched to systemd by default including cloud images and I wanted to get our vivid testing restarted asap for openstack
[14:07] <jamespage> ericsnow, do you have a branch for juju that we can test with?
[14:07] <ericsnow> jamespage: master :)
[14:07] <jamespage> ericsnow, ok testing now - but I had probs two days ago :-)
[14:07] <bodie_> mgz_, but yeah, what more is needed once I get LGTM?  I'd simply $$merge$$ it, right?
[14:08] <ericsnow> jamespage: we landed the last of the systemd support Tuesday-ish
[14:08] <bodie_> then mark the bug submitted?
[14:08] <jamespage> ericsnow, awesome - we may have missed that as we are working on a branch for leadership election right now
[14:08] <mgz_> bodie_: yup
[14:08] <jamespage> I did rebase so hopefully we're good
[14:08] <ericsnow> jamespage: it's totally conceivable there are issues
[14:09] <ericsnow> jamespage: I tested juju on systemd (vivid) before landing, but I'm sure I missed something
[14:10] <ericsnow> jamespage: if you run juju (e.g. bootstrap) with --debug you should see DEBUG messages saying which init system juju discovered
[14:10] <jamespage> ericsnow, ok so it works - I think I must have tested prior to re-basing
[14:10] <ericsnow> jamespage: yay
[14:10] <ericsnow> jamespage: thanks for taking it for a spin
[14:11] <ericsnow> jamespage: the alternative is to run vivid with upstart (not hard) temporarily but that's not ideal
[14:11] <jamespage> ericsnow, nah, and that's backwards looking...
[14:11] <ericsnow> jamespage: :)
[14:15] <bodie_> mgz_, I already landed the bugfix in master.  can I just $$merge$$ the PR for 1.23?  or do I need to get LGTM on it?  it's identical to what I already got LGTM'd yesterday
[14:16] <mgz_> bodie_: no, you'll probably need to actually cherrypick
[14:16] <mgz_> it's a different branch target
[14:17] <mgz_> github may let you propose again targeting a different branch, I've not tried
[14:17] <bodie_> yeah, that's what I just did
[14:17] <mgz_> but you do need a new mp at least
[14:24] <axw_> wallyworld_: I'm too tired to review for reals, will take another look on the weekend. feel free to get others' opinion on juju-dev about lxc security
[14:26] <wallyworld_> axw_: no worries, i'm almost finished adding the loop mount config
[14:26] <wallyworld_> i'll propose tomorrow
[14:26] <wallyworld_> axw_: i explicitly allow the default loop devices, i think that will do for now as per my comments on the review
[14:28] <ericsnow> axw_, wallyworld_: yikes! still up?  have a good weekend :)
[14:28] <wallyworld_> ericsnow: yeah, about to head off, tired
[14:28] <sinzui> natefinch, dimitern, I just reported bug 1431888. I need to know if juju has a regression or a requirement change so that we get the functional-restricted-network test
[14:28] <mup> Bug #1431888: Juju cannot be deployed on a restricted network <ci> <deploy> <network> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1431888>
[14:30] <dimitern> sinzui, yeah, I was looking at that but couldn't tell offhand what the problem is
[14:30] <dimitern> sinzui, since the CA has landed we do modify a few iptables rules and routes
[14:31] <dimitern> sinzui, if the job drops or removes them it won't work with containers
[14:32] <sinzui> dimitern, we are just testing that we can bootstrap and deploy two services that don't have external requirements
[14:32] <sinzui> We know from 25 other tests that juju can bootstrap and deploy fine.
[14:33] <dimitern> sinzui, I'm looking at the prep steps the job does from the logs
[14:34] <dimitern> sinzui, and trying to understand what the issue is - so that's a local environment?
[14:34] <sinzui> dimitern, this is one of our oldest tests. it is ugly. the interesting bits are at line 100+ http://bazaar.launchpad.net/~juju-qa/juju-ci-tools/trunk/view/head:/test-restricted-network
[14:38] <dimitern> sinzui, can you perhaps add a few things to the job - "ip link", "ip route", "ip addr", before and after the prep steps - to see how the NICs, routes and addresses are configured
[14:38] <mup> Bug #1431888 was opened: Juju cannot be deployed on a restricted network <ci> <deploy> <network> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1431888>
[14:38] <sinzui> dimitern, I can. you want this just before it calls juju bootstrap?
[14:41] <dimitern> sinzui, yes - before trying to change any networking stuff (e.g. around line 100) and just before bootstrap
[14:42] <dimitern> sinzui, while you're at it - in case of an error also dump these before exit 1
[15:01] <dimitern> sinzui, it will also be really useful for debugging if you can extract the logs for the host and containers - like you do for other local env tests
[15:01] <sinzui> dimitern, that is challenging
[15:02] <dimitern> sinzui, why?
[15:02] <sinzui> dimitern, the test isn't in our slaves
[15:03] <dimitern> sinzui, you mean you can't get the logs off that machine?
[15:07] <redelmann> Hi!
[15:08] <mup> Bug #1431918 was opened: gce minDiskSize incorrect <juju-core:New> <https://launchpad.net/bugs/1431918>
[15:11] <redelmann> somebody know what happened to "juju actions"? I'm using "juju 1.22-beta4-trusty-amd64"
[15:11] <natefinch> bodie_, jw4: ^^
[15:11] <redelmann> and the "actions" option does not exist
[15:16] <bodie_> redelmann, try `JUJU_DEV_FEATURE_FLAGS=actions juju action help`
[15:17] <bodie_> sorry
[15:17] <bodie_> JUJU_DEV_FEATURE_FLAG
[15:17] <bodie_> jw4, I thought it was FLAGS?  the Juju Doc site shows it as FLAG
[15:17] <bodie_> anywho, try that
[15:18] <jw4> bodie_: I think your quotes are wrong
[15:18] <jw4> JUJU_DEV_FEATURE_FLAG='action' juju action help
[15:19] <natefinch> JUJU_DEV_FEATURE_FLAGS=action juju action help
[15:19] <natefinch> (just tried it)
[15:19] <jw4> bodie_: oh; my mistake I see what you were doing
[15:19] <sinzui> dimitern, it is in an ec2 instance, but i have a plan. put termination protection on the instance when I see the instance spin up. Then we can claim logs
[15:19] <jw4> natefinch: sounds like we need to fix the doc
[15:19] <natefinch> jw4: yep, FLAG definitely does not work
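As natefinch confirmed above, the variable is JUJU_DEV_FEATURE_FLAGS (plural) with the value `action`. The `VAR=value cmd` form exports the variable to that single command only; since trying the real juju invocation needs a juju install, the scoping is shown below with a hypothetical stand-in variable (DEMO_FLAG):

```shell
# Real invocation from the discussion above (needs juju installed):
#   JUJU_DEV_FEATURE_FLAGS=action juju action help
#
# The VAR=value cmd form sets the variable for that one command only;
# DEMO_FLAG is a hypothetical stand-in:
DEMO_FLAG=action sh -c 'echo "flag=$DEMO_FLAG"'   # prints: flag=action
echo "after: [$DEMO_FLAG]"                        # prints: after: []
```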
[15:19] <dimitern> sinzui, good plan, let me know when you have the logs please
[15:24] <wwitzel3> port of gce fix to 1.23 http://reviews.vapour.ws/r/1161/ if someone can ptal
[15:24] <redelmann> bodie_ thanks
[15:25] <bodie_> redelmann, that work for ya?
[15:25] <redelmann> bodie_, JUJU_DEV_FEATURE_FLAGS=action juju action help is working ;)
[15:25] <voidspace> dimitern: ping
[15:26] <voidspace> natefinch: you can probably help :-)
[15:26] <bodie_> redelmann, good.  :)
[15:28] <voidspace> natefinch: dimitern: cancel that ping
[15:28] <voidspace> found what I was looking for
[15:29] <natefinch> procrastination pays off again!
[15:29] <voidspace> :-)
[15:33] <bodie_> I landed my bugfix in 1.23.  Do I now mark the bug "released" rather than committed as usual?
[15:34] <perrito666> bodie_: committed I would say
[15:34] <perrito666> released is not released until we release
[15:35] <bodie_> that's what I thought.  just making sure :)
[15:38] <mup> Bug #1431612 changed: Action defaults don't work for nil params <actions> <defaults> <juju-core:Fix Released by binary132> <https://launchpad.net/bugs/1431612>
[15:44] <mup> Bug #1431612 was opened: Action defaults don't work for nil params <actions> <defaults> <juju-core:Fix Released by binary132> <https://launchpad.net/bugs/1431612>
[15:55] <voidspace> TheMue: when you have time: http://reviews.vapour.ws/r/1160/
[15:56] <TheMue> voidspace: I'm looking. just booted my first maas node on vmware *yeehaw*
[15:57] <voidspace> TheMue: awesome
[15:57] <TheMue> voidspace: only don't know how to log in *lol*
[15:57] <voidspace> TheMue: upgrade step is written, just looking at a test
[15:57] <voidspace> TheMue: hah
[15:57] <voidspace> TheMue: you have to set a secret in the config I believe and use that as password
[15:58] <voidspace> or something like that...
[15:58]  * fwereade out for a bit
[15:59] <voidspace> TheMue: this is the upgrade step, WIP, FWIW, YMMV: https://github.com/voidspace/juju/compare/address-life...voidspace:address-life-upgrade
[16:00] <TheMue> voidspace: will take a look after review
[16:00] <voidspace> TheMue: It's a WIP, it can wait until the test is done
[16:00] <voidspace> TheMue: just wanted to prove it's on the way...
[16:00] <TheMue> good
[16:04] <TheMue> voidspace: reviewed
[16:04] <voidspace> TheMue: thanks, I'll make that change
[16:04] <TheMue> voidspace: thx
[16:05] <TheMue> voidspace: the upgrade stuff looks fine so far too
[16:05] <voidspace> TheMue: I'm wondering how to test
[16:05] <voidspace> TheMue: I think I have to manually insert some ip address records without a Life field
[16:05] <voidspace> TheMue: and then check that the upgrade step adds them
[16:06] <voidspace> TheMue: (plus a test for idempotency for records with an existing Life field)
[16:06] <TheMue> voidspace: hmm, I once had an upgrade too. have to see how I've done it
[16:06] <voidspace> TheMue: that means manually constructing bson.M{...} for the address records
[16:06] <voidspace> which is easy but tedious
[16:06] <voidspace> or create some records, delete the Life field...
[16:07] <voidspace> that's probably quicker (and more future proof) but weirder
[16:07] <voidspace> (future proof against future schema changes that would also have to be made in the manual bson.M records)
[16:07] <voidspace> although we shouldn't have to future proof an upgrade step
[16:09] <TheMue> voidspace: I have to admit we only tested that it is called (the upgrade function)
[16:09] <voidspace> maybe just a manual test...
[16:09] <voidspace> :-)
[16:09] <TheMue> voidspace: see http://reviews.vapour.ws/r/253/diff/#
[16:10] <voidspace> TheMue: there are changes in state/upgrades_test.go
[16:11] <voidspace> TheMue: and that test runs MigrateJobManageNetworking
[16:12] <TheMue> voidspace: oh, eh, yeah. too quickly scrolled to the bottom
[16:13] <voidspace> TheMue: but you're adding to a set rather than adding a new field
[16:14] <voidspace> and you manually set the jobs of the machines you add before migrating
[16:14] <voidspace> whereas if I create new IPAddresses in the test they'll *have* the new field
[16:14] <voidspace> I'll think about it
[16:14] <voidspace> :-)
[16:18] <TheMue> voidspace: testing upgrade is *ugly*, indeed
[16:23] <dimitern> voidspace, since you've already sent the 1835 for merging, I'd ask you to address my review comments in a follow-up
[16:28] <voidspace> ah
[16:28] <voidspace> dimitern: ok
[16:28] <voidspace> dimitern: I can do that in the upgrade setp
[16:28] <voidspace> *step
[16:29] <voidspace> dimitern: why do we now need Refresh?
[16:29] <voidspace> dimitern: is it part of the interface?
[16:29] <voidspace> I know what it does
[16:29] <voidspace> but we only need it where we need retries
[16:29] <voidspace> the other comments are easy enough to address
[16:30] <voidspace> and if the answer is "we might need it", then can't it wait until we *do* need it? (like Destroy)
[16:30] <dimitern> voidspace, it's part of the interface
[16:30] <voidspace> heh, that was the answer I was hoping you wouldn't give
[16:31] <voidspace> ok
[16:32] <perrito666> hey, I need to be OoO for a moment, ill be back in about two hs
[16:32] <dimitern> voidspace, :)
[16:52] <fwereade> is anyone free to be hassled for a round of reviews? I'd like to land a few that all feed into one, so I can get a ~clean diff on that one without forcing the others into a sequence they don't have to be in
[16:54] <jw4> fwereade: I'll be out for a couple hours but when I get back I'd be happy to
[17:35] <dimitern> sinzui, I've commented on the restricted network bug
[17:35] <dimitern> sinzui, can you perhaps give me access to that machine after it has failed?
[17:36] <sinzui> dimitern, I can
[17:48] <marcoceppi> dimitern: thanks for the PR
[17:50] <dimitern> marcoceppi, np :) I like the tool very much
[17:57] <mup> Bug #1326355 changed: network-bridge doesn't work on trusty Ubuntu installed from scratch <addressability> <local-provider> <lxc> <network> <juju-core:Won't Fix> <https://launchpad.net/bugs/1326355>
[18:04] <sinzui> dimitern, ha ha, since the test breaks networking, we cannot get into the machine after the test starts
[18:04] <sinzui> dimitern, I will try to change the test to either revert the network on failure, or just get the logs
[18:06] <dimitern> sinzui, ok, sounds good - add the iptables-save dump to the logs as well please
[18:06] <dimitern> as commented on the bug
[18:14] <voidspace> dimitern: ping
[18:14] <dimitern> voidspace, pong
[18:14] <voidspace> dimitern: on isDeadDoc and handling the case where the address was already removed
[18:14] <voidspace> dimitern: your comment says "handle the case where the unit was removed already"
[18:14] <voidspace> dimitern: I assume you mean addres
[18:15] <voidspace> dimitern: I've been looking at unit.Remove
[18:15] <voidspace> dimitern: adding the isDeadDoc assert is trivial obviously
[18:15] <voidspace> dimitern: but it's not clear from Remove what code that handles it is
[18:16] <voidspace> dimitern: unless it's the code doing the Refresh - in which case it *looks* to me like it just ignores that error
[18:16] <voidspace> dimitern: and if this is the case I wonder how that is different from not having the assert
[18:16] <dimitern> voidspace, yeah - s/unit/address/ there in that comment
[18:16] <voidspace> dimitern: and if it's not correct I'd like to be corrected
[18:16] <dimitern> voidspace, so the isDeadDoc assert will cause the remove op to fail if the record is not dead
[18:17] <voidspace> ah, right - the Refresh handles not found, which is different
[18:17] <dimitern> voidspace, yeah
[18:17] <voidspace> we already check Life before performing the remove - so the assert *can't* fail of course
[18:18] <dimitern> voidspace, that's why refresh() is needed
[18:18] <voidspace> ah - so "handle the case where the unit was removed already" is a *separate* issue - not related
[18:18] <voidspace> dimitern: they can't go from Dead to NotDead
[18:18] <voidspace> dimitern: so I still say it can't fail
[18:18] <dimitern> voidspace, add asserts, and if they fail - refresh your local copy of the doc (which is obviously stale at this point) and retry - if needed
[18:18] <ericsnow> cmars: could you take a look at http://reviews.vapour.ws/r/1162/?
[18:18] <ericsnow> cmars: it should be pretty straight-forward
[18:19] <voidspace> dimitern: but the only assert [you want me to add] can't fail...
[18:19] <dimitern> voidspace, my point is you should *not* rely on the local copy of the doc (inside the state.IPAddress) when taking decisions whether to remove a doc or update it
[18:20] <voidspace> dimitern: we've just fetched it - and if life is Dead it can't go back
[18:20] <dimitern> voidspace, because it's possible that data is stale or somebody else changed the same doc your local copy was taken from
[18:20] <voidspace> dimitern: and if it's *not* Dead we'll bail before getting to that assert
[18:20] <voidspace> dimitern: so in this case I *don't* think that's possible
[18:21] <dimitern> voidspace, where is that you just fetched it?
[18:21] <voidspace> dimitern: prior to calling Remove
[18:21] <voidspace> dimitern: we only Remove in one place and we fetch all addresses for a container and Remove them
[18:21] <voidspace> dimitern: in the future we'll fetch all dying ones and Remove them
[18:21] <dimitern> voidspace, that's different - that's up to how the test is setup, I'm talking about the implementation
[18:21] <voidspace> dimitern: so am I
[18:21] <voidspace> dimitern: the actual use of Remove
[18:21] <voidspace> not the testing of it
[18:22] <voidspace> dimitern: and as I said, if the local copy doesn't have a Life == Dead we bail before the assert
[18:22] <voidspace> dimitern: and if the Life  of the local copy is Dead then there is no change anyone else can or will make to change that
[18:22] <dimitern> voidspace, not true
[18:23] <voidspace> dimitern: under what circumstances do you imagine an address going from Dead to NotDead?
[18:24] <dimitern> voidspace, imagine this case: i0 := st.IPAddress(x), i1 := st.IPAddress(x); i1.EnsureDead(), i1.Remove() , i0.EnsureDead()
[18:24] <dimitern> voidspace, what happens?
[18:25] <cmars> ericsnow, looking
[18:25] <voidspace> dimitern: the isAliveDoc assert in EnsureDead fails
[18:25] <dimitern> voidspace, and we're ignoring it and setting the local Life to Dead
[18:26] <voidspace> dimitern: there's an early return before setting local life to Dead
[18:27] <dimitern> voidspace, actually I'm not sure if it will even fail with ErrAborted
[18:27] <dimitern> voidspace, if the doc is gone
[18:27] <voidspace> dimitern: so you're saying asserts about a field will *succeed* if the doc doesn't exist at all
[18:28] <voidspace> dimitern: that sounds unlikely and horrible if true
[18:28] <dimitern> voidspace, I'm not saying that, I have to check if it is
[18:28] <voidspace> :-)
[18:29] <dimitern> voidspace, ok, I still think isDeadDoc should be added on Remove
[18:30] <dimitern> voidspace, but I've totally missed that now we have Life, a few other methods need to change to account for that
[18:31] <voidspace> dimitern: do you want to add more review comments?
[18:31] <voidspace> dimitern: I'm going jogging anyway
[18:31] <dimitern> voidspace, e.g. SetState and AllocateTo
[18:31] <dimitern> voidspace, I will yeah
[18:31] <voidspace> dimitern: yeah, ok
[18:32] <voidspace> dimitern: an assert on a doc that doesn't exist fails
[18:32] <voidspace> dimitern: and now you can't call Remove twice on an IPAddress - isDeadDoc now fails the second time
[18:33] <dimitern> voidspace, yeah, that's why it needs to call refresh on ErrAborted, and if it's not found - return nil, as in other cases
[18:33] <voidspace> dimitern: ok
[18:33] <dimitern> voidspace, thanks for confirming about the assert
[19:24] <redelmann> why when i try debug-hooks "JUJU_CONTEXT_ID" is not set?
[19:24] <redelmann> am I doing something wrong?
[19:25] <redelmann> ex: config-get ->>  "JUJU_CONTEXT_ID" is not set
[19:25] <redelmann> inside tmux: juju debug-hooks rabbitmq-server/0 amqp-relation-changed
[19:26] <natefinch> redelmann: note that after you do debug-hooks, you then have to do a juju resolved --retry from another terminal, to get the hook to rerun, so the debug-hooks session will catch it
[19:27] <natefinch> then your debug hook terminal will dump you into the right directory with all the right environment variables set etc
[19:46] <ericsnow> cmars: I've replied to your review comment; did the code otherwise look okay?
[19:46] <redelmann> natefinch, but the unit is not in an error state
[19:46] <redelmann> natefinch, so it says: ERROR unit "rabbitmq-server/0" is not in an error state
[19:47] <redelmann> natefinch, i just need to do a "config-get" to see what this charm is giving to my charm
[19:47] <natefinch> redelmann: ahh, debug-hooks only works if the hook is in an error state... this is usually easy to do: you can add a "return 1" or similar to the top of the hook script so that it automatically errors out
[19:48] <redelmann> natefinch, but i deployed rabbitmq from an online charm
[19:48] <redelmann> natefinch, so i have to download the charm and edit the hook?
[19:49] <redelmann> natefinch, too complicated just to know what config-get "sends"
[19:49] <natefinch> redelmann: you can still edit the hook script on the unit
[19:51] <mup> Bug #1215579 changed: Address changes should be propagated to relations <addressability> <network> <reliability> <juju-core:Fix Released> <https://launchpad.net/bugs/1215579>
[19:51] <redelmann> natefinch, that's true
[19:51] <redelmann> natefinch, second day playing with juju
[19:53] <natefinch> redelmann: so, you can deploy the normal rabbitmq charm, then ssh into the unit (do juju ssh <machine number>), then go to the service's hooks directory on that machine (/var/lib/juju/agents/unit-rabbitmq-0/charm/hooks) and edit the hooks
[19:53] <natefinch> redelmann: no problem... it's a bit of a learning curve at first
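natefinch's trick above — edit the hook on the unit so it errors out, then attach with `juju debug-hooks` and rerun it with `juju resolved --retry` — can be as small as an early non-zero exit at the top of the hook. A minimal sketch (rabbitmq's hooks happen to be Python; the `fail_fast` stub and where you'd splice it in are hypothetical):

```python
# Hypothetical stub spliced into the top of a hook script on the unit
# (e.g. under /var/lib/juju/agents/unit-rabbitmq-0/charm/hooks) so the
# hook fails, the unit enters an error state, and a `juju debug-hooks`
# session can catch the rerun triggered by `juju resolved --retry`.
import sys

def fail_fast():
    # A non-zero exit status is all juju needs to mark the hook
    # (and therefore the unit) as being in an error state.
    sys.exit(1)
```

Once you've inspected the environment inside the debug-hooks tmux session, remove the stub and run `juju resolved --retry` again to let the real hook proceed.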
[19:54] <natefinch> rick_h_: does the gui show all the config data that a charm sets?
[19:54] <redelmann> natefinch, thanks!
[19:54]  * natefinch should really use the gui more often
[19:54] <rick_h_> natefinch: it shows all defined in metadata.yaml yes
[19:54] <rick_h_> err config.yaml
[19:54] <natefinch> oh yeah, config.yaml
[19:55] <rick_h_> e.g. https://demo.jujucharms.com/precise/juju-gui-108/#configuration
[19:55] <natefinch> redelmann: the charm will have a config.yaml you can look at, which should define all the properties it sets... or you can use the gui to go look at them
[19:57] <natefinch> redelmann: https://api.jujucharms.com/v4/trusty/rabbitmq-server-26/archive/config.yaml
[19:57] <redelmann> natefinch, but i need the relation variables
[19:58] <redelmann> natefinch, when i create a relation with my charm, rabbitmq is not giving me its address
[19:59] <rick_h_> redelmann: you've fallen into a little bit of a hole there. Relations are meant to be quite flexible and aren't that well documented. The only relation currently really well documented is the mysql one, as a first stab at it. https://jujucharms.com/docs/interface-mysql
[19:59] <redelmann> natefinch, address/hostname/whatever
[19:59] <rick_h_> redelmann: it's something we're working on trying to improve because it can be frustrating when trying to relate to a new service
[19:59] <rick_h_> redelmann: the best thing we suggest at the moment is to look at other services that talk to that service or to check out what the service does in the hooks for joining the relation.
[20:01] <redelmann> rick_h_, ok, thanks, i'm going to look at other charms
[20:02] <redelmann> rick_h_, i already tried: hookenv.relation_get("host"), hookenv.relation_get("hostname")
[20:02] <rick_h_> redelmann: yea, so looking at the charm the amqp relation is in line 110 of https://api.jujucharms.com/v4/trusty/rabbitmq-server-26/archive/hooks/rabbitmq_server_relations.py
[20:03] <rick_h_> redelmann: which sets up a bunch of stuff in relation_settings. You should be able to dump out everything in that, I'd think. /me is trying to see
[20:04] <redelmann> rick_h_, from what i can see, "hostname" should do the trick!
[20:05] <rick_h_> redelmann: woot!
[20:05] <redelmann> rick_h_, nothing
[20:06] <rick_h_> boooo
[20:06] <redelmann> rick_h_, i was trying to retrieve hostname from the rabbitmq charm, but it was my problem
[20:07] <redelmann> rick_h_, hookenv.relation_get("hostname") is working as expected
[20:07] <rick_h_> redelmann: ah ok cool
[20:07] <redelmann> rick_h_, but my template was wrong
[20:08] <redelmann> rick_h_, thanks for the help!
[20:08] <rick_h_> redelmann: np, sorry for the trouble. It's definitely a weak point we've got our eye on
[20:08] <rick_h_> redelmann: thanks for pushing through it
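redelmann's dead end and fix above come down to how per-attribute relation lookups behave: `hookenv.relation_get("hostname")` returns the value the remote unit set, while a key the remote never set (like "host") quietly comes back empty, so a template using the wrong key renders blank instead of erroring. A toy stand-in (the real `charmhelpers.core.hookenv.relation_get` shells out to the `relation-get` tool inside a hook; the settings values below are hypothetical):

```python
# Toy stand-in for charmhelpers.core.hookenv.relation_get, which in a
# real hook shells out to the relation-get tool. The settings dict is
# illustrative: rabbitmq's amqp relation sets "hostname" among others.

_relation_settings = {
    "hostname": "10.0.3.17",  # hypothetical remote unit address
    "password": "s3cret",     # hypothetical
}

def relation_get(attribute=None):
    """With no attribute, return all settings from the remote unit;
    with an attribute, return its value, or None if it was never set
    (no error is raised for a wrong key)."""
    if attribute is None:
        return dict(_relation_settings)
    return _relation_settings.get(attribute)
```

So `relation_get("hostname")` yields the address, `relation_get("host")` silently yields None, and `relation_get()` with no argument is the easiest way to dump everything the remote side actually set when debugging a relation.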
[20:52] <ericsnow> cmars: thanks for the review
[21:24] <mup> Bug #1431685 was opened: juju nova-compute charm not enabling live-migration via tcp with auth set to none <juju-core:New> <https://launchpad.net/bugs/1431685>
[23:32] <fwereade> if anyone's on, I'd love a second opinion on http://reviews.vapour.ws/r/1151/
[23:37] <jw4> I recently created a bash script to display all your branches (local and remote) sorted by last commit date, and colorized to make finding recent work across remotes easy
[23:39] <jw4> now I can't get my link to paste! :)   hand typed:  https://gist.github.com/johnweldon/0a9ee3c9406fab2ac93b
[23:40] <jw4> fwereade: looking... shouldn't you be sleeping?
[23:40] <fwereade> jw4, ehh, soon
[23:41] <fwereade> jw4, I want to get a bunch of these landed so I can propose what I have that works and has hooks and proper tests and everything, but depends on too much still in review for a nice diff without hassle
[23:42] <jw4> fwereade: yeah, makes sense.  I actually wrote my little git tool (^^) so that I could add your repo as a remote and find the branches you've been working on recently :)
[23:42] <alexisb> fwereade, dude what time is it for you man?
[23:42] <jw4> fwereade: quarter til 2?
[23:43] <jw4> oh, quarter to one
[23:43] <jw4> (DST screws me up even worse now !)
[23:43] <fwereade> yeah, only quarter to 1
[23:44] <fwereade> if I can propose that last branch I'll feel like I've done the week
[23:50] <jw4> fwereade: reading that Raft doc you recommended is helping me review your changes ;)
[23:51] <alexisb> fwereade, I can put I ship it on it if that helps :)
[23:59] <jw4> fwereade: non-graduated second opinion :shipit: