[01:57] wallyworld: just making a cup of tea, will be a few minutes late
[02:01] sure
[03:06] thumper: what's next-release? was that just to not disrupt 1.23?
[03:06] axw: yep
[03:06] a feature branch
[03:07] so we weren't holding many branches open against master
[03:07] okey dokey
[03:07] oh balls, I didn't realise trunk was unblocked
[03:07] axw: we are trialing a number of feature branches, and will report more in Nuremberg
[03:07] thumper: cool. trunk blocking kinda screws everyone atm
[03:07] agreed
[05:30] morning
[07:58] axw: i found the correct apparmor profile to use to allow mounting loop devices http://reviews.vapour.ws/r/1154/
[08:01] wallyworld_: https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-apparmor -- still unsafe, and I still don't think we should enable this unless it's requested
[08:03] axw: that web page is out of date - ls /etc/apparmor.d/lxc/ shows the mount profile
[08:04] wallyworld_: the comments in the file line up with the commentary on the page - what is out of date?
[08:04] i guess we can only add the extra config if needed for storage. but, we would then not be able to hogsmash a new unit with storage onto a container
[08:05] axw: i didn't see the lxc-container-default-with-mounting profile mentioned
[08:05] only lxc-container-default-with-nesting
[08:06] wallyworld_: "Another profile shipped with lxc allows containers to mount block filesystem types like ext4. This can be useful in some cases like maas provisioning, but is deemed generally unsafe since the superblock handlers in the kernel have not been audited for safe handling of untrusted input."
[08:08] axw: hmmm, ok, i wonder when the auditing might occur. i guess then we only enable the extra config if needed for storage
[08:08] but no hogsmash
[08:15] wallyworld_: can't do placement yet anyway
[08:15] I suppose we could with containers
[08:15] non-existing containers that is
[09:00] mornin' o/
[09:00] morning dooferlad
[09:01] dooferlad, I've seen your PR - am I correct to assume the order of (application of) the iptables rules does not matter?
[09:02] dimitern: indeed. They are all inserted right at the top of the table, so since they are all above reject rules, order doesn't matter.
[09:03] morning o/
[09:03] dooferlad, good :) because using a map was the first red light for me - how about go1.3+ and gccgo map ordering :)
[09:03] TheMue, morning
[09:04] dooferlad, clever tests though - never using more than 1 element in the maps during tests
[09:04] dimitern: that was indeed on my mind.
[09:10] davechen1y, are you about?
[09:11] what's up ?
[09:11] davechen1y, re this review http://reviews.vapour.ws/r/1150/
[09:12] davechen1y, this fixes a regression on maas after the introduction of addressable containers (lxc) for ec2 and maas
[09:12] davechen1y, windows support was not even on my agenda tbh
[09:13] davechen1y, were they running before on windows?
[09:13] dimitern: no idea
[09:13] it is not clear what we're supposed to do about windows
[09:14] davechen1y, right, but I think we should fix it on ubuntu where it's supposed to work, then we can add it to the list of things to enable on windows
[09:14] dimitern: feel free to land it
[09:14] davechen1y, ok, I'll add a comment, thanks
[09:15] ok
[09:21] dooferlad, you have a review
[09:21] dimitern: thanks
[09:27] TheMue, so you have your maas cluster controller running?
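A rough sketch of how an admin could opt a single container into the mounting profile discussed above; the container name juju-trusty-machine-1 is made up for illustration, and whether juju should ever set this automatically is exactly the open question in the conversation:

    # list the apparmor profiles shipped with lxc on the host
    ls /etc/apparmor.d/lxc/
    # opt one container into the profile that permits mounting block filesystems
    # (flagged as generally unsafe by the Ubuntu server guide quoted above)
    echo "lxc.aa_profile = lxc-container-default-with-mounting" >> /var/lib/lxc/juju-trusty-machine-1/config
    # restart the container so the new profile takes effect
    lxc-stop -n juju-trusty-machine-1 && lxc-start -d -n juju-trusty-machine-1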
[09:28] dimitern: yes, only getting a weird apache error message (will now set the server name explicitly) and I'm currently adding a 2nd eth for a private network
[09:29] TheMue, that's just a warning - you can ignore it
[09:29] dimitern: yeah, but I dislike it :)
[09:30] TheMue, yeah, you'll need to set up both DHCP and static ranges for the internal network and leave the other one unmanaged
[09:30] TheMue, ok :)
[09:30] dimitern: do you have a good doc for dhcp conf when creating a vmaas?
[09:31] TheMue, why do you need to change the conf directly?
[09:31] dimitern: my eth0 has a static address here in my net, so that I can reach it
[09:31] dimitern: I only interpreted your hint as if I would have to ;)
[09:32] TheMue, ah, no - you can configure all via the maas web ui - Clusters - edit interfaces
[09:32] dimitern: already wondered, because that's how I got the maas docs too
[09:33] TheMue, you'll probably need to enable ip_forwarding on the cluster and add a SNAT iptables rule so the machines behind maas on the internal network can reach outside
[09:34] TheMue, and on the other side - e.g. your machine - you'll need a static route for the internal network pointing to your maas's eth0 address (the one you can reach from your machine)
[09:34] * fwereade out at laura's school for a bit
[09:35] ParamsStateServingInfoToStateStateServingInfo .... really?
[09:35] natefinch: Java came to Go
[09:36] It has state in the name THREE TIMES
[09:36] dimitern: oh how I like this
=== axw_ is now known as axw
[09:36] natefinch: now you know how I feel
[09:37] dimitern: "virtual metal" as a service! what.is.virtual.metal? :/
[09:37] axw: i'm off to soccer, have a revised lxc config branch almost ready, uses a StorageConfig arg similar to NetworkConfig, can be extended to do what's needed for host loop etc as needed; just need to finish tests, will propose when i get home later
[09:37] davecheney: yep
[09:38] natefinch: you know it's created with the ParamsStateServingInfoToStateStateServingInfoFactory and can be simulated by the ParamsStateServingInfoToStateStateServingInfoMock
[09:38] TheMue, it's stuff from fairy tales :)
[09:38] adamantium
[09:38] natefinch: that's state-of-the-art *scnr*
[09:39] wallyworld_: cool, thanks. enjoy
[09:39] dimitern: *lol*
[09:39] * TheMue needs another cup of coffee
[09:39] we could call it AaaS
[09:40] adamantium as a service
[09:40] anything you could possibly pronounce as ass is probably not a good acronym
[09:40] :D
[09:42] * dimitern has 335MB of logs from yesterday's automated tests on MAAS and EC2 with containers
[09:43] dimitern: and that's only because you compressed them *g*
[09:44] TheMue, not even :) - full 5h of testing at TRACE level
[09:44] 16 separate environments
[09:45] a hefty $4.68 bill in EC2
[09:45] * TheMue sees dimitern creating a data warehouse for log analyzing
[09:45] a DWaaS
[09:45] TheMue, yeah - running hadoop nodes doing rgrep ERROR
[09:46] voidspace, hey there
[09:46] dimitern: you need filtered logging, only keeping stuff you're interested in and throwing away the rest
[09:46] TheMue, you available for a review?
[09:47] dimitern, do you know who's working on the systemd support in juju?
[09:47] mattyw: sure, it's my job today
[09:47] voidspace, let's have a chat after standup for the release of addresses worker
[09:47] jamespage, yes, ericsnow mostly
[09:47] * fwereade out for a while at laura's school
[09:47] dimitern, what tz is he in?
[09:48] jamespage, -7 I believe
[09:48] jamespage, what's up?
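A minimal sketch of the forwarding/NAT setup dimitern describes above for a vMAAS cluster controller; the interface names, the 10.0.0.0/24 internal range and the 192.168.1.50 external address are assumptions for illustration only:

    # on the cluster controller: allow forwarding between the internal and external NICs
    sysctl -w net.ipv4.ip_forward=1
    # SNAT traffic from the MAAS-managed network so nodes behind it can reach outside
    iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j SNAT --to-source 192.168.1.50
    # on the workstation: a static route for the internal network via the controller's reachable address
    ip route add 10.0.0.0/24 via 192.168.1.50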
[09:49] dimitern, I was wondering what state I could expect vivid support to be in in master and whether any other branches needed testing
[09:49] dimitern, we're busted on vivid testing right now so have a direct interest in seeing this land asap
[09:50] jamespage, AFAIK systemd support has landed and for vivid we no longer have an exception to run it with upstart
[09:50] jamespage, but it's only in 1.23 and master
[09:50] dimitern, hmm - yeah - still non-functional - tested yesterday
[09:50] the cloud-config that gets generated for instance creation still does "start jujud-XXX"
[09:50] which is upstart specific
[09:51] jamespage, hmm right - is that 1.22.0 ?
[09:51] no from master branch with locally built copy
[09:51] jamespage, ok, so it sounds not as complete as I thought
[09:52] jamespage, I'd suggest to write a mail to ericsnow cc alexisb about this
[09:52] dimitern, I'll retest early next week with a clean master (currently have leader election merged as well)
[09:52] and report back
[09:52] jamespage, cheers
[09:54] davecheney, you still around?
[09:54] what's up?
[09:59] dimitern: yep
[10:00] dimitern: although I think the high level details are reasonably clear
[10:00] voidspace, that's great :) i've started adding tasks to this new feature card I assigned to you
[10:01] dimitern: yeah, I saw :-)
[10:01] dimitern: thanks
[10:02] rogpeppe: where's the code that adds the new mongo admin users when we run ensure-availability?
[10:03] dooferlad, standup?
[10:03] natefinch: i don't think any users are added, are they
[10:03] rogpeppe: system.users gets a user per state machine
[10:03] rogpeppe: brb
[10:19] rogpeppe: system.users has the admin user and then one user per state machine. no big deal if you don't remember this stuff offhand, I know it's been a year since we worked on it
[10:20] natefinch: i think users are added when machines are added
[10:29] rogpeppe: ahh, I see what it is, I was looking for EnsureAdminUser, but most places just call SetAdminMongoPassword
[10:43] voidspace, ok I'm done adding tasks - I think I mentioned everything relevant in the feature card
[10:46] dimitern: great, thanks
[10:59] I think I need to alias 'exit' to 'echo "dude, you're on you're on machine already"'
[10:59] s/on/own
[11:00] natefinch, I have a custom bash prompt - not for that, but it helps in this case
[11:01] dimitern: i do too.. but I do exit exit ... really fast, and sometimes do one too many
[11:02] natefinch, :) ctrl+d is too easy
[11:03] dimitern: yeah, I've done that by accident before too ... whose bright idea was it to make a hotkey to close a window that could easily be typoed from ctrl+c ? :/
[11:04] :)
[11:04] (and ctrl+s, ctrl+x ctrl+f etc)
[11:05] I used to have a terminal (I think it was konsole) where you could set up different background colors depending on the host
[11:06] looking at the wear patterns on my keyboard ctrl+A, S, C and lastly D I use most of the time - my emacs habits haven't caused X to wear off too much yet
[11:08] dimitern: and ctrl?
[11:08] perrito666, left one is still barely readable, right one a lot more
[11:09] but oh boy! cursor keys - all but right are long gone
[11:11] lol.
I think except for my thinkpad, which is my spare machine, I hardly have a computer long enough to wear out anything other than the space bar
[11:12] although I do use an external kb
[11:12] whose wear pattern makes no sense, since I use it in English but it is a Spanish kb
[11:19] :)
[12:54] * TheMue steps out for a moment, bbiab
[13:19] natefinch, mgz: do either of you have a minute for http://reviews.vapour.ws/r/1157/
[13:20] sinzui: on it
[13:21] sinzui: lgtm
[13:21] thank you mgz
[13:38] meh, I keep writing workflow instead of worload
[13:40] oh, yeah... every time I need to write worload I accidentally type workflow too... what is worload?
[13:40] :p
[13:40] Workload
[13:40] hehe
[14:00] jw4, I have the same issue typing attempty vs. attempt
[14:01] I have the same problem with serve vs. server
[14:01] I can't type serve without typing server and deleting the r
[14:03] dooferlad: you still have questions about juju systemd support?
[14:03] ericsnow: I didn't think I had any to start with
[14:04] it's funny how our brains store patterns
[14:04] dooferlad: oh, wrong person :)
[14:04] :-)
[14:04] how do I land a bugfix for 1.23? I have an open bug.
[14:06] bodie_: propose a merge against the 1.23 branch? or do you mean more, what's the overall procedure?
[14:06] jamespage: you have questions about juju and systemd?
[14:06] hi ericsnow
[14:06] mgz_, derp, of course
[14:07] ericsnow, indeed I do - vivid has now switched to systemd by default including cloud images and I wanted to get our vivid testing restarted asap for openstack
[14:07] ericsnow, do you have a branch for juju that we can test with?
[14:07] jamespage: master :)
[14:07] ericsnow, ok testing now - but I had probs two days ago :-)
[14:07] mgz_, but yeah, what more is needed once I get LGTM? I'd simply $$merge$$ it, right?
[14:08] jamespage: we landed the last of the systemd support Tuesday-ish
[14:08] then mark the bug submitted?
[14:08] ericsnow, awesome - we may have missed that as we are working on a branch for leadership election right now
[14:08] bodie_: yup
[14:08] I did rebase so hopefully we're good
[14:08] jamespage: it's totally conceivable there are issues
[14:09] jamespage: I tested juju on systemd (vivid) before landing, but I'm sure I missed something
[14:10] jamespage: if you run juju (e.g. bootstrap) with --debug you should see DEBUG messages saying which init system juju discovered
[14:10] ericsnow, ok so it works - I think I must have tested prior to re-basing
[14:10] jamespage: yay
[14:10] jamespage: thanks for taking it for a spin
[14:11] jamespage: the alternative is to run vivid with upstart (not hard) temporarily but that's not ideal
[14:11] ericsnow, nah and that's backwards looking...
[14:11] jamespage: :)
[14:15] mgz_, I already landed the bugfix in master. can I just $$merge$$ the PR for 1.23? or do I need to get LGTM on it? it's identical to what I already got LGTM'd yesterday
[14:16] bodie_: no, you'll probably need to actually cherrypick
[14:16] it's a different branch target
[14:17] github may let you propose again targeting a different branch, I've not tried
[14:17] yeah, that's what I just did
[14:17] but you do need a new mp at least
[14:24] wallyworld_: I'm too tired to review for reals, will take another look on the weekend.
feel free to get others' opinion on juju-dev about lxc security
[14:26] axw_: no worries, i'm almost finished adding the loop mount config
[14:26] i'll propose tomorrow
[14:26] axw_: i explicitly allow the default loop devices, i think that will do for now as per my comments on the review
[14:28] axw_, wallyworld_: yikes! still up? have a good weekend :)
[14:28] ericsnow: yeah, about to head off, tired
[14:28] natefinch, dimitern, I just reported bug 1431888. I need to know if juju has a regression or a requirement change so that we get the functional-restricted-network test
[14:28] Bug #1431888: Juju cannot be deployed on a restricted network
[14:30] sinzui, yeah, I was looking at that but couldn't tell offhand what the problem is
[14:30] sinzui, since the CA has landed we do modify a few iptables rules and routes
[14:31] sinzui, if the job drops or removes them it won't work with containers
[14:32] dimitern, we are just testing that we can bootstrap and deploy two services that don't have external requirements
[14:32] We know from 25 other tests that juju can bootstrap and deploy fine.
[14:33] sinzui, I'm looking at the prep steps the job does from the logs
[14:34] sinzui, and trying to understand what the issue is - so that's a local environment
[14:34] ?
[14:34] dimitern, this is one of our oldest tests. it is ugly. the interesting bits are at line 100+ http://bazaar.launchpad.net/~juju-qa/juju-ci-tools/trunk/view/head:/test-restricted-network
[14:38] sinzui, can you perhaps add a few things to the job - "ip link", "ip route", "ip addr", before and after the prep steps - to see how the NICs, routes and addresses
[14:38] are configured
[14:38] Bug #1431888 was opened: Juju cannot be deployed on a restricted network
[14:38] dimitern, I can. you want this just before it calls juju bootstrap?
[14:41] sinzui, yes - before trying to change any networking stuff (e.g. around line 100) and just before bootstrap
[14:42] sinzui, while you're at it - in case of an error also dump these before exit 1
[15:01] sinzui, it will also be really useful for debugging if you can extract the logs for the host and containers - like you do for other local env tests
[15:01] dimitern, that is challenging
[15:02] sinzui, why?
[15:02] dimitern, the test isn't in our slaves
[15:03] sinzui, you mean you can't get the logs off that machine?
[15:07] Hi!
[15:08] Bug #1431918 was opened: gce minDiskSize incorrect
[15:11] does somebody know what happened to "juju actions"? i'm using "juju 1.22-beta4-trusty-amd64"
[15:11] bodie_, jw4: ^^
[15:11] and the "actions" option does not exist
[15:16] redelmann, try `JUJU_DEV_FEATURE_FLAGS=actions juju action help`
[15:17] sorry
[15:17] JUJU_DEV_FEATURE_FLAG
[15:17] jw4, I thought it was FLAGS? the Juju Doc site shows it as FLAG
[15:17] anywho, try that
[15:18] bodie_: I think your quotes are wrong
[15:18] JUJU_DEV_FEATURE_FLAG='action' juju action help
[15:19] JUJU_DEV_FEATURE_FLAGS=action juju action help
[15:19] (just tried it)
[15:19] bodie_: oh; my mistake I see what you were doing
[15:19] dimitern, it is in an ec2 instance, but i have a plan. put termination protection on the instance when I see the instance spin up. Then we can claim logs
[15:19] natefinch: sounds like we need to fix the doc
[15:19] jw4: yep, FLAG definitely does not work
[15:19] sinzui, good plan, let me know when you have the logs please
[15:24] port of gce fix to 1.23 http://reviews.vapour.ws/r/1161/ if someone can ptal
[15:24] bodie_ thanks
[15:25] redelmann, that work for ya?
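A sketch of the network-state dump dimitern asks sinzui to add to the restricted-network job above (iptables-save is the extra dump he asks for a bit later in the log); the dump_net helper name is made up for illustration:

    # capture host networking state before/after the prep steps and before any 'exit 1'
    dump_net() {
        echo "=== network state: $1 ==="
        ip link
        ip addr
        ip route
        iptables-save
    }
    dump_net "before prep steps"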
[15:25] bodie_, JUJU_DEV_FEATURE_FLAGS=action juju action help is working ;)
[15:25] dimitern: ping
[15:26] natefinch: you can probably help :-)
[15:26] redelmann, good. :)
[15:28] natefinch: dimitern: cancel that ping
[15:28] found what I was looking for
[15:29] procrastination pays off again!
[15:29] :-)
[15:33] I landed my bugfix in 1.23. Do I now mark the bug "released" rather than committed as usual?
[15:34] bodie_: committed I would say
[15:34] released is not released until we release
[15:35] that's what I thought. just making sure :)
[15:38] Bug #1431612 changed: Action defaults don't work for nil params
[15:44] Bug #1431612 was opened: Action defaults don't work for nil params
[15:50] Bug #1431612 changed: Action defaults don't work for nil params
[15:55] TheMue: when you have time: http://reviews.vapour.ws/r/1160/
[15:56] voidspace: I'm looking. just booted my first maas node on vmware *yeehaw*
[15:57] TheMue: awesome
[15:57] voidspace: only don't know how to log in *lol*
[15:57] TheMue: upgrade step is written, just looking at a test
[15:57] TheMue: hah
[15:57] TheMue: you have to set a secret in the config I believe and use that as password
[15:58] or something like that...
[15:58] * fwereade out for a bit
[15:59] TheMue: this is the upgrade step, WIP, FWIW, YMMV: https://github.com/voidspace/juju/compare/address-life...voidspace:address-life-upgrade
[16:00] voidspace: will take a look after review
[16:00] TheMue: It's a WIP, it can wait until the test is done
[16:00] TheMue: just wanted to prove it's on the way...
[16:00] good
[16:04] voidspace: reviewed
[16:04] TheMue: thanks, I'll make that change
[16:04] voidspace: thx
[16:05] voidspace: the upgrade stuff looks fine so far too
[16:05] TheMue: I'm wondering how to test
[16:05] TheMue: I think I have to manually insert some ip address records without a Life field
[16:05] TheMue: and then check that the upgrade step adds it to them
[16:06] TheMue: (plus a test for idempotency for records with an existing Life field)
[16:06] voidspace: hmm, I once had an upgrade too. have to see how I've done it
[16:06] TheMue: that means manually constructing bson.M{...} for the address records
[16:06] which is easy but tedious
[16:06] or create some records, delete the Life field...
[16:07] that's probably quicker (and more future proof) but weirder
[16:07] (future proof against future schema changes that would also have to be made in the manual bson.M records)
[16:07] although we shouldn't have to future proof an upgrade step
[16:09] voidspace: I have to admit we only tested that it is called (the upgrade function)
[16:09] maybe just a manual test...
[16:09] :-)
[16:09] voidspace: see http://reviews.vapour.ws/r/253/diff/#
[16:10] TheMue: there are changes in state/upgrades_test.go
[16:11] TheMue: and that test runs MigrateJobManageNetworking
[16:12] voidspace: oh, eh, yeah. too quickly scrolled to the bottom
[16:13] TheMue: but you're adding to a set rather than adding a new field
[16:14] and you manually set the jobs of the machines you add before migrating
[16:14] whereas if I create new IPAddresses in the test they'll *have* the new field
[16:14] I'll think about it
[16:14] :-)
[16:18] voidspace: testing upgrade is *ugly*, indeed
[16:23] voidspace, since you've already sent the 1835 for merging, I'd ask you to address my review comments in a follow-up
[16:28] ah
[16:28] dimitern: ok
[16:28] dimitern: I can do that in the upgrade setp
[16:28] *step
[16:29] dimitern: why do we now need Refresh?
[16:29] dimitern: is it part of the interface?
[16:29] I know what it does
[16:29] but we only need it where we need retries
[16:29] the other comments are easy enough to address
[16:30] and if the answer is "we might need it", then can't it wait until we *do* need it? (like Destroy)
[16:30] voidspace, it's part of the interface
[16:30] heh, that was the answer I was hoping you wouldn't give
[16:31] ok
[16:32] hey, I need to be OoO for a moment, I'll be back in about two hours
[16:32] voidspace, :)
=== kadams54 is now known as kadams54-away
[16:52] is anyone free to be hassled for a round of reviews? I'd like to land a few that all feed into one, so I can get a ~clean diff on that one without forcing the others into a sequence they don't have to be in
[16:54] fwereade: I'll be out for a couple hours but when I get back I'd be happy to
=== kadams54-away is now known as kadams54
[17:35] sinzui, I've commented on the restricted network bug
[17:35] sinzui, can you perhaps give me access to that machine after it has failed?
[17:36] dimitern, I can
[17:48] dimitern: thanks for the PR
[17:50] marcoceppi, np :) I like the tool very much
[17:57] Bug #1326355 changed: network-bridge doesn't work on trusty Ubuntu installed from scratch
[18:04] dimitern, ha ha, since the test breaks networking, we cannot get into the machine after the test starts
[18:04] dimitern, I will try to change the test to either revert the network on failure, or just get the logs
[18:06] sinzui, ok, sounds good - add the iptables-save dump to the logs as well please
[18:06] as commented on the bug
[18:14] dimitern: ping
[18:14] voidspace, pong
[18:14] dimitern: on isDeadDoc and handling the case where the address was already removed
[18:14] dimitern: your comment says "handle the case where the unit was removed already"
[18:14] dimitern: I assume you mean address
[18:15] dimitern: I've been looking at unit.Remove
[18:15] dimitern: adding the isDeadDoc assert is trivial obviously
[18:15] dimitern: but it's not clear from Remove which code handles it
[18:16] dimitern: unless it's the code doing the Refresh - in which case it *looks* to me like it just ignores that error
[18:16] dimitern: and if this is the case I wonder how that is different from not having the assert
[18:16] voidspace, yeah - s/unit/address/ there in that comment
[18:16] dimitern: and if it's not correct I'd like to be corrected
[18:16] voidspace, so the isDeadDoc assert will cause the remove op to fail if the record is not dead
[18:17] ah, right - the Refresh handles not found, which is different
[18:17] voidspace, yeah
[18:17] we already check Life before performing the remove - so the assert *can't* fail of course
[18:18] voidspace, that's why refresh() is needed
[18:18] ah - so "handle the case where the unit was removed already" is a *separate* issue - not related
[18:18] dimitern: they can't go from Dead to NotDead
[18:18] dimitern: so I still say it can't fail
[18:18] voidspace, add asserts, and if they fail - refresh your local copy of the doc (which is obviously stale at this point) and retry - if needed
[18:18] cmars: could you take a look at http://reviews.vapour.ws/r/1162/?
[18:18] cmars: it should be pretty straightforward
[18:19] dimitern: but the only assert [you want me to add] can't fail...
[18:19] voidspace, my point is you should *not* rely on the local copy of the doc (inside the state.IPAddress) when deciding whether to remove a doc or update it
[18:20] dimitern: we've just fetched it - and if life is Dead it can't go back
[18:20] voidspace, because it's possible that data is stale or somebody else changed the same doc your local copy was taken from
[18:20] dimitern: and if it's *not* Dead we'll bail before getting to that assert
[18:20] dimitern: so in this case I *don't* think that's possible
[18:21] voidspace, where is it that you just fetched it?
[18:21] dimitern: prior to calling Remove
[18:21] dimitern: we only Remove in one place and we fetch all addresses for a container and Remove them
[18:21] dimitern: in the future we'll fetch all dying ones and Remove them
[18:21] voidspace, that's different - that's up to how the test is set up, I'm talking about the implementation
[18:21] dimitern: so am I
[18:21] dimitern: the actual use of Remove
[18:21] not the testing of it
[18:22] dimitern: and as I said, if the local copy doesn't have a Life == Dead we bail before the assert
[18:22] dimitern: and if the Life of the local copy is Dead then there is no change anyone else can or will make to change that
[18:22] voidspace, not true
[18:23] dimitern: under what circumstances do you imagine an address going from Dead to NotDead?
[18:24] voidspace, imagine this case: i0 := st.IPAddress(x), i1 := st.IPAddress(x); i1.EnsureDead(), i1.Remove() , i0.EnsureDead()
[18:24] voidspace, what happens?
[18:25] ericsnow, looking
[18:25] dimitern: the isAliveDoc assert in EnsureDead fails
[18:25] voidspace, and we're ignoring it and setting the local Life to Dead
[18:26] dimitern: there's an early return before setting local life to Dead
[18:27] voidspace, actually I'm not sure if it will even fail with ErrAborted
[18:27] voidspace, if the doc is gone
[18:27] dimitern: so you're saying asserts about a field will *succeed* if the doc doesn't exist at all
[18:28] dimitern: that sounds unlikely and horrible if true
[18:28] voidspace, I'm not saying that, I have to check if it is
[18:28] :-)
[18:29] voidspace, ok, I still think isDeadDoc should be added on Remove
[18:30] voidspace, but I've totally missed that now we have Life, a few other methods need to change to account for that
[18:31] dimitern: do you want to add more review comments?
[18:31] dimitern: I'm going jogging anyway
[18:31] voidspace, e.g. SetState and AllocateTo
[18:31] voidspace, I will yeah
[18:31] dimitern: yeah, ok
[18:32] dimitern: an assert on a doc that doesn't exist fails
[18:32] dimitern: and now you can't call Remove twice on an IPAddress - isDeadDoc now fails the second time
[18:33] voidspace, yeah, that's why it needs to call refresh on ErrAborted, and if it's not found - return nil, as in other cases
[18:33] dimitern: ok
[18:33] voidspace, thanks for confirming about the assert
=== kadams54 is now known as kadams54-away
[19:24] why, when i try debug-hooks, is "JUJU_CONTEXT_ID" not set?
[19:24] am i doing something wrong?
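natefinch walks redelmann through this below; as a compact sketch of the debug-hooks flow being described (unit and hook names are taken from the conversation, and the unit has to be in an error state for the retry to do anything):

    # terminal 1: open a debug-hooks session on the unit and wait for the hook to fire
    juju debug-hooks rabbitmq-server/0 amqp-relation-changed
    # terminal 2: ask juju to re-run the failed hook so the debug session catches it
    juju resolved --retry rabbitmq-server/0
    # back in terminal 1 (inside tmux) the hook context is now set up,
    # so hook tools such as config-get and relation-get work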
[19:25] ex: config-get ->> "JUJU_CONTEXT_ID" is not set
[19:25] inside tmux: juju debug-hooks rabbitmq-server/0 amqp-relation-changed
[19:26] redelmann: note that after you do debug-hooks, you then have to do a juju resolved --retry from another terminal, to get the hook to rerun, so the debug hook will fire
[19:27] then your debug hook terminal will dump you into the right directory with all the right environment variables set etc
[19:46] cmars: I've replied to your review comment; did the code otherwise look okay?
[19:46] natefinch, but the unit is not in an error state
[19:46] natefinch, so it says: ERROR unit "rabbitmq-server/0" is not in an error state
[19:47] natefinch, i just need to do a "get-config" to see what this charm is giving to my charm
[19:47] redelmann: ahh, debug-hooks only works if the hook is in an error state... this is usually easy to do, you can add a "return 1" or similar to the top of the hook script so that it automatically errors out
[19:48] natefinch, but i deployed rabbitmq from the online charm
[19:48] natefinch, so i have to download the charm and edit the hook?
[19:49] natefinch, too complicated just to know what get-config "sends"
[19:49] redelmann: you can still edit the hook script on the unit
[19:51] Bug #1215579 changed: Address changes should be propagated to relations
[19:51] natefinch, that's true
[19:51] natefinch, second day playing with juju
[19:53] redelmann: so, you can deploy the normal rabbitmq charm, then ssh into the unit (do juju ssh ), then go to the service's hooks directory on that machine (/var/lib/juju/agents/unit-rabbitmq-0/charm/hooks) and edit the hooks
[19:53] redelmann: no problem... it's a bit of a learning curve at first
[19:54] rick_h_: does the gui show all the config data that a charm sets?
[19:54] natefinch, thanks!
[19:54] * natefinch should really use the gui more often
[19:54] natefinch: it shows all defined in metadata.yaml yes
[19:54] err config.yaml
[19:54] oh yeah, config.yaml
[19:55] e.g. https://demo.jujucharms.com/precise/juju-gui-108/#configuration
[19:55] redelmann: the charm will have a config.yaml you can look at, which should define all the properties it sets... or you can use the gui to go look at them
[19:57] redelmann: https://api.jujucharms.com/v4/trusty/rabbitmq-server-26/archive/config.yaml
[19:57] natefinch, but i need relation variables
[19:58] natefinch, when i create a relation with my charm, rabbitmq is not giving me its address
[19:59] redelmann: you've fallen into a little bit of a hole there. Relations are meant to be quite flexible and aren't that well documented. The only relation currently really well documented is the mysql one as a first stab at it. https://jujucharms.com/docs/interface-mysql
[19:59] natefinch, address/hostname/whatever
[19:59] redelmann: it's something we're working on trying to improve because it can be frustrating when trying to relate to a new service
[19:59] redelmann: the best thing we suggest at the moment is to look at other services that talk to that service or to check out what the service does in the hooks for joining the relation.
[20:01] rick_h_, ok, thanks, i'm going to look at other charms
[20:02] rick_h_, i already tried: hookenv.relation_get("host"), hookenv.relation_get("hostname")
[20:02] redelmann: yea, so looking at the charm the amqp relation is in line 110 of https://api.jujucharms.com/v4/trusty/rabbitmq-server-26/archive/hooks/rabbitmq_server_relations.py
[20:03] redelmann: which sets up a bunch of stuff in relation_settings.
You should be able to dump out everything in that I'd think. /me is trying to see
[20:04] rick_h_, from what i can see "hostname" should do the trick!
[20:05] redelmann: woot!
[20:05] rick_h_, nothing
[20:06] boooo
[20:06] rick_h_, i was trying to retrieve hostname from the rabbitmq charm, but it was my problem
[20:07] rick_h_, hookenv.relation_get("hostname") is working as expected
[20:07] redelmann: ah ok cool
[20:07] rick_h_, but my template was wrong
[20:08] rick_h_, thanks for the help!
[20:08] redelmann: np, sorry for the trouble. It's definitely a weak point we've got our eye on
[20:08] redelmann: thanks for pushing through it
[20:52] cmars: thanks for the review
[21:24] Bug #1431685 was opened: juju nova-compute charm not enabling live-migration via tcp with auth set to none
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
[23:32] if anyone's on, I'd love a second opinion on http://reviews.vapour.ws/r/1151/
[23:37] I recently created a bash script to display all your branches (local and remote) sorted by last commit date, and colorized to make finding recent work across remotes easy
[23:39] now I can't get my link to paste! :) hand typed: https://gist.github.com/johnweldon/0a9ee3c9406fab2ac93b
[23:40] fwereade: looking... shouldn't you be sleeping?
[23:40] jw4, ehh, soon
[23:41] jw4, I want to get a bunch of these landed so I can propose what I have that works and has hooks and proper tests and everything, but depends on too much still in review for a nice diff without hassle
[23:42] fwereade: yeah, makes sense. I actually wrote my little git tool (^^) so that I could add your repo as a remote and find the branches you've been working on recently :)
[23:42] fwereade, dude what time is it for you man?
[23:42] fwereade: quarter til 2?
[23:43] oh, quarter to one
[23:43] (DST screws me up even worse now!)
[23:43] yeah, only quarter to 1
[23:44] if I can propose that last branch I'll feel like I've done the week
[23:50] fwereade: reading that Raft doc you recommended is helping me review your changes ;)
[23:51] fwereade, I can put I ship it on it if that helps :)
[23:59] fwereade: non-graduated second opinion :shipit:
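jw4's gist is linked above; a plain-git sketch of the same idea (local and remote branches sorted by last commit date) would be something like:

    # list local and remote branches, newest commits first
    git for-each-ref --sort=-committerdate refs/heads refs/remotes \
        --format='%(committerdate:short) %(refname:short)'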