[01:19] <thumper> waigani: http://reviews.vapour.ws/r/844/diff/#
[02:32] <thumper> axw: hello there
[02:32] <axw> thumper: ahoy
[02:33] <thumper> axw: I have a couple of small interesting reviews if you can have your rubber stamp handy :)
[02:33] <thumper> this one I'm almost ashamed to ask: https://github.com/juju/utils/pull/109/files
[02:33] <axw> sure
[02:33] <thumper> edict, remove mess
[02:33] <thumper> we're not entirely sure what we will call it
[02:33] <thumper> but SSS seems a bit weird
[02:33] <thumper> shared state server
[02:34] <thumper> I'm inclined to go with:
[02:34] <thumper> SES: Shared Environment Server
[02:34] <thumper> the other is slightly more interesting
[02:34] <axw> heh, that's State Emergency Services over here :)
[02:34] <thumper> what does that do?
[02:35] <thumper> this one fell out of a branch I was developing, http://reviews.vapour.ws/r/846/diff/#
[02:35] <thumper> and it looked useful enough to keep around
[02:35] <axw> assistance during storms, floods, etc.
[02:36] <axw> thumper: both LGTM
[02:36] <thumper> ta
[02:39] <thumper> perrito666: still alive?
[02:39] <perrito666> thumper: yup
[02:39] <thumper> perrito666: what's up with the critical bug?
[02:40] <perrito666> thumper: azure is being a PITA
[02:40] <thumper> no surprise there?
[02:40] <thumper> do you know what is wrong?
[02:42] <perrito666> I am not managing to reproduce that because I get other kinds of failures, but I am looking into the output and trying to deduce it; I'll get back to you in a moment
[02:54] <perrito666> thumper:  there you got my +1 on the mail
[02:54] <thumper> :)
[02:54] <thumper> cheers
[02:55] <perrito666> man, how can these runs take so long to run, even though I am running it on a dedicated machine
[03:00] <perrito666> abentley: are you really here?
[03:03] <waigani> thumper: http://reviews.vapour.ws/r/819/
[03:06] <perrito666> axw: are you around
[03:06] <axw> perrito666: hey. yes, I am
[03:07] <perrito666> axw: could you lend me a bit of your azure knowledge?
[03:07] <axw> perrito666: you can have it all and keep it ;)
[03:07] <axw> how can I help?
[03:08] <perrito666> I am trying to discern the email conversation you had with abentley
[03:10] <perrito666> axw: what did you suggest he do exactly? delete the whole env or just the machine?
[03:10] <axw> perrito666: neither
[03:11] <axw> perrito666: he was deleting the deployment from a cloud service
[03:11] <axw> perrito666: but leaving the cloud service intact... I suggested he delete the cloud service altogether
[03:11] <axw> (as a workaround)
[03:12] <perrito666> mmm, I wonder if there is a limitation in azure that is preventing us from determining if the "machine" for machine 0 is there
[03:17] <thumper> anyone... http://reviews.vapour.ws/r/848/
[03:18] <axw> perrito666: isn't the point that machine-0 is dead?
[03:19] <perrito666> axw: it is, but restore plugin is dumb
[03:19] <axw> perrito666: so, looking at the azure code, if there are no live state servers it returns "ErrNotBootstrapped"
[03:19] <axw> in StateServerInstances
[03:20] <perrito666> mmm, that is a bug, it is inconsistent with the other providers
[03:21] <axw> perrito666: IMO, the *other* providers are buggy. they return spurious results from StateServerInstances
[03:21] <axw> i.e. instance IDs for things which don't exist
[03:22] <axw> perrito666: it possibly shouldn't be returning ErrNotBootstrapped though
[03:22] <axw> maybe just ErrNoInstances
[03:22] <perrito666> axw: the env is bootstrapped
[03:22] <perrito666> exactly
[03:22] <axw> even still, restore would fail
[03:22] <perrito666> axw: 2 things there
[03:23] <perrito666> 1) it should be consistent, if we are going to correct that behavior we should do it in all providers at once :)
[03:23] <perrito666> 2) ErrNoInstances is waaaay less ambiguous than ErrNotBootstrapped
[03:24] <axw> perrito666: agreed on both counts
[03:32] <perrito666> axw: so it does check that the instances are alive
[03:32] <axw> perrito666: what does?
[03:32] <perrito666> axw: sorry, was rubberducking and your nick was on the prompt
[03:33] <axw> perrito666: if you mean restore, yeah I know; that's fine but StateServerInstances shouldn't be expected to return non-empty if there are no live state server instances
[03:34] <perrito666> agreed, I was realizing we have a bug, another one, where no provider is checking whether the instances are live when returning them
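(The error-semantics point axw and perrito666 settle on above — return only live state-server instances, and a sentinel "no instances" error rather than "not bootstrapped" when none are alive — can be sketched roughly like this. The function and type names here are hypothetical illustrations, not juju's actual provider API; only the error names come from the conversation.)

```go
package main

import (
	"errors"
	"fmt"
)

// ErrNoInstances mirrors the sentinel error preferred in the discussion
// above over ErrNotBootstrapped for the "no live state servers" case.
var ErrNoInstances = errors.New("no instances found")

// instance is a hypothetical stand-in for a provider instance record.
type instance struct {
	id   string
	live bool
}

// stateServerInstances is a hypothetical provider helper: it filters out
// dead instances and returns a sentinel error when none are alive,
// instead of reporting spurious IDs or claiming the env isn't bootstrapped.
func stateServerInstances(all []instance) ([]string, error) {
	var ids []string
	for _, inst := range all {
		if inst.live {
			ids = append(ids, inst.id)
		}
	}
	if len(ids) == 0 {
		return nil, ErrNoInstances
	}
	return ids, nil
}

func main() {
	ids, err := stateServerInstances([]instance{{"machine-0", false}, {"machine-1", true}})
	fmt.Println(ids, err) // only the live instance is reported
}
```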
[03:40] <perrito666> is it possible that dependencies.tsv is out of date?
[03:44] <axw> perrito666: out of date with what?
[03:44] <perrito666> axw: well, master is not building on my machine :
[03:44] <perrito666> with what seems to be an out of date golxc issue
[03:44] <axw> perrito666: the bot should fail if dependencies.tsv is out of date
[03:45] <perrito666> axw: i know, but it has been known to eff up :p
[03:46] <perrito666> also I see an odd error in hooks about storage
[03:56] <perrito666> could anyone confirm if it's just me or what?
[03:56] <perrito666> I have two machines failing with the same issue here
[04:00] <perrito666> ok, there is this https://github.com/juju/juju/pull/1524 which I am pretty sure fixes the thing, although I did not compile it, for obvious reasons
[04:02] <perrito666> I really need to go to bed before sleeping on my computer and/or being eaten by mosquitoes; if anyone (axw, I am looking at you) wants to take a peek at it and perhaps merge it I would be thankful, otherwise I'll take a look in the morning
[04:02]  * perrito666 looks at axw
[04:02]  * axw looks at perrito666
[04:02] <axw> will look :)
[04:02] <axw> good night
[04:02] <perrito666> tx a lot, good... day?
[04:03] <axw> yes, midday :)
[04:04] <perrito666> :D cheers then, enjoy the rest of the day, if it compiles, that patch is quite safe and most likely the fix
[04:04] <perrito666> cheers
[04:59] <mattyw> morning folks
[04:59] <mattyw> I just got this error on almost-tip ec2: ERROR cannot remove recorded state instance-id: Get : 301 response missing Location header
[05:00] <mattyw> anyone seen that?
[05:04] <mattyw> davecheney, still around? looks like you've seen this before: https://bugs.launchpad.net/juju-core/+bug/1083017
[05:04] <mup> Bug #1083017: Cannot bootstrap with public-tools in non us-east-1 region <ec2> <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1083017>
[09:04] <TheMue> Morning
[09:13] <jamestunnicliffe> TheMue: Morning!
[09:57] <dimitern> TheMue, jamestunnicliffe, voidspace, hey guys!
[09:57] <jamestunnicliffe> hi dimitern
[09:58] <TheMue> dimitern: heya
[09:58] <dimitern> I'll chat with you this afternoon (~2-3h) about what's on for this week, just in case you might be wondering :)
[09:59] <dimitern> didn't quite manage to prepare a mail with work items to pick up
[10:00] <TheMue> dimitern: chat is fine. how is capetown?
[10:00] <dimitern> nothing critical though
[10:00] <TheMue> dimitern: hehe
[10:00] <dimitern> TheMue, so far so good - we'll see after lunch after the demo :)
[10:00] <TheMue> dimitern: ah
[10:01] <dimitern> yeah, forgot to mention - I managed to get it working by integrating all your changes and applying some fixes I found while live testing
[10:02] <dimitern> if you're curious - here's the branch I'm using https://github.com/dimitern/juju/tree/wip-containers-demo
[10:02] <TheMue> dimitern: fine, you've got it in a branch I can branch now?
[10:02] <TheMue> dimitern: there's the answer ;)
[10:02] <voidspace> dimitern: hey, hi
[10:02] <dimitern> :)
[03:10] <TheMue> dimitern: btw, alexis and you will get a mail today. due to signal troubles at King's Cross the Piccadilly line to Heathrow was delayed, so I missed my flight (I had already checked in and there were still 40 minutes left until departure, but they said no)
[10:04] <voidspace> dimitern: TheMue: jamestunnicliffe: shall we wait for you (dimitern) for standup?
[10:04] <dimitern> TheMue, oh, sorry to hear that :/
[10:04] <jamestunnicliffe> voidspace: apparently I can't join the call anyway...
[10:04] <jamestunnicliffe> will try logging out of other accounts...
[10:04] <TheMue> dimitern: so I took the next one and stayed the night at Amsterdam airport :/
[10:04] <dimitern> voidspace, nope, please go ahead - I'm in a discussion now
[10:05] <voidspace> jamestunnicliffe: you have to use your canonical account
[10:05] <TheMue> dimitern: interesting experience :)
[10:05] <dimitern> TheMue, oh boy, well - at least I'm glad you got home alright
[10:05] <TheMue> dimitern: yep, all went well
[10:06] <dimitern> TheMue, nice, don't worry, we'll sort it out
[10:06] <TheMue> dimitern: thx
[10:07] <voidspace> dimitern: we're doing standup - speak later
[10:07] <dimitern> +1
[10:10] <axw> perrito666: you should surely be asleep still.. :)  do I need another review, or are you "senior"?
[10:25] <davecheney> axw: your comment on the issue doesn't make sense
[10:25] <davecheney> could you double check the description
[10:25] <davecheney> please
[10:25] <axw> davecheney: which issue?
[10:26] <axw> davecheney: are you talking about the critical blocker? I haven't commented on the bug... does the description on the PR not make sense?
[10:29] <davecheney> the description doesn't make sense
[10:37] <axw> davecheney: hopefully clearer now, going through your comments. thanks for the review
[10:38] <davecheney> no probs
[10:38] <davecheney> i'm off to bed
[10:38] <davecheney> it's ++bloody late here
[10:38] <axw> no worries, good night
[10:43] <perrito666> axw: I wish I was still sleeping
[11:06] <perrito666> axw I like the new description
[11:31] <wwitzel3> ericsnow: ping me when you're online
[11:34] <wwitzel3> ericsnow: I just need to know which branch of yours I should base my systemd branch off
[12:40] <voidspace> tools from https://192.168.178.177:17070/tools/1.23-alpha1.1-trusty-amd64 downloaded: HTTP 000; time 0.000s; size 0 bytes; speed 0.000 bytes/s sha256sum: /var/lib/juju/tools/1.23-alpha1.1-trusty-amd64/tools.tar.gz: No such file or directory
[12:41] <wwitzel3> voidspace: that doesn't seem very useful
[12:41] <wwitzel3> ;)
[12:42] <voidspace> wwitzel3: my state server (manually provisioned) doesn't have a tools.tar.gz
[12:42] <voidspace> wwitzel3: so I can't manually add a new machine - it tries to download tools.tar.gz
[12:45] <wwitzel3> voidspace: hrmmm that is odd
[12:46] <voidspace> wwitzel3: part of the problem is that "destroy-environment" with the manual provider doesn't seem to remove everything it puts on the machine
[12:46] <voidspace> and re-provisioning the machine then sees some things already in place
[12:48] <voidspace> wwitzel3: what do I do with a service in an error state, how do I get it to retry?
[12:48] <voidspace> so long since I've done that
[12:50] <wwitzel3> voidspace: resolved --retry
[12:50] <voidspace> wwitzel3: thanks
[12:51] <wwitzel3> voidspace: np
[12:55] <dimitern> voidspace, TheMue, jamestunnicliffe, FYI the demo went great guys
[12:55] <jamestunnicliffe> dimitern: awesome!
[12:55] <dimitern> voidspace, TheMue, jamestunnicliffe, let me thank you again for all the hard work to make this possible!
[12:56] <TheMue> bfl
[12:56] <voidspace> dimitern: hey, great
[12:56] <TheMue> dimitern: wow, great news. thanks to you for driving it forward and doing this presentation.
[12:57] <dimitern> I have some news as well - we'll switch the networking work due to deliver in April from maas to openstack (aws is still a focus though)
[12:57] <dimitern> thanks! :) that's a great beginning
[12:57] <voidspace> dimitern: so we'll complete the maas stuff we've done and add openstack as well?
[12:58] <voidspace> dimitern: and going forward focus on openstack and aws
[12:58] <dimitern> voidspace, we'll complete the addressable containers stuff for maas only, then switch to openstack
[12:58] <voidspace> dimitern: cool
[12:59] <dimitern> voidspace, it's a pretty tight schedule, but I'm sure we can make it
[12:59] <voidspace> dimitern: :-)
[13:00] <dimitern> :)
[13:01] <voidspace> dimitern: I want to go to lunch, is that ok or do you want to talk to us?
[13:01] <TheMue> openstack is fine, lots of interest here in Germany, for private clouds.
[13:03] <dimitern> voidspace, no, it can wait, go ahead
[13:03] <voidspace> dimitern: ok
[13:03] <voidspace> thanks
[13:03] <dimitern> voidspace, I'll compose my thoughts and send a mail to you guys later
[13:03] <voidspace> great
[13:03] <voidspace> I've been fighting the manual provisioner
[13:03] <voidspace> not had a clean deploy with my new code
[13:03] <voidspace> hampered by the fact that "destroy-environment" doesn't fully clean up the machine
[13:04] <voidspace> I might want to switch to docker
[13:04] <voidspace> as I really ought to re-provision these kvm images from scratch every time
[13:04] <dimitern> voidspace, oh,  too bad :/
[13:04] <dimitern> voidspace, can you use snapshots?
[13:05] <dimitern> from a clean kvm, before bootstrapping
[13:05] <voidspace> dimitern: virt-manager doesn't seem to have snapshots
[13:05] <voidspace> dimitern: it has clone, which may be the same thing
[13:05] <dimitern> voidspace, try virsh maybe?
[13:05] <voidspace> dimitern: I'll look into it
[13:06] <voidspace> dimitern: I wanted to deploy to a separate unit, also behind the proxy, but add-machine failed because tools.tar.gz didn't exist on the state server machine
[13:06] <voidspace> but anyway :-)
[13:06] <voidspace> ok, I'm still seeing an "are you connected to the internet?" error for the charmrevisionworker
[13:06] <voidspace> so my changes are *not* sufficient
[13:07] <dimitern> voidspace, :) I'm sure you'll find a way - jamestunnicliffe can help perhaps?
[13:07] <jamestunnicliffe> dimitern: voidspace: sure
[13:07] <voidspace> nothing concrete to ask for help with yet
[13:07] <voidspace> dimitern: I have a successful deploy, just with that error still in the logs
[13:07] <voidspace> so it's definitely an improvement
[13:07] <dimitern> cool +1!
[13:08] <voidspace> I'll probably propose this as is and continue to work on it
[13:08] <dimitern> voidspace, the apt-proxy format error?
[13:08] <dimitern> voidspace, or the charm revision updater
[13:09] <dimitern> voidspace, I'd suggest proposing the fix for the proxy updater to get it out of the way first
[13:09] <voidspace> dimitern: you want me to work on that too?
[13:10] <voidspace> dimitern: I can fix that. Just when we fall back actually parse the url and construct the new one correctly.
[13:10] <voidspace> dimitern: I was just going to file the issue. Happy to fix it as well.
[13:10] <voidspace> anyway, I'm taking a break
[13:10] <dimitern> voidspace, I think that's a good candidate for jamestunnicliffe perhaps? you guys can sync up on it
[13:11] <voidspace> dimitern: ok, cool
[13:11] <voidspace> jamestunnicliffe: I'll sync up with you later if that's ok
[13:11] <jamestunnicliffe> voidspace: sounds good
[13:11] <voidspace> jamestunnicliffe: I'll file the issue describing the problem and point you to the place in the code to fix it
[13:11] <jamestunnicliffe> voidspace: great
[13:12] <voidspace> jamestunnicliffe: but basically it's valid to specify http-proxy as "<ip addr>:port" but apt-proxy must be "http://<ip addr>:<port>/"
[13:12] <voidspace> jamestunnicliffe: and if http-proxy is specified but apt-proxy isn't we use http-proxy for apt-proxy
[13:12] <voidspace> jamestunnicliffe: so we have to ensure the format is correct
[13:13] <voidspace> jamestunnicliffe: environs/config/config.go I believe
[13:13] <voidspace> jamestunnicliffe: plus in proxyupdater there's a bug where an updated apt-proxy setting may not be written out to the apt config file if the file exists (but is out of date)
[13:13] <voidspace> jamestunnicliffe: I haven't looked too closely at the cause of that one yet
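(The apt-proxy fallback fix voidspace describes above — http-proxy may legitimately be "<ip addr>:port", but apt needs a full URL, so the fallback has to add a scheme — could look roughly like this. This is an illustrative sketch, not the actual code in environs/config/config.go; the function name is an assumption.)

```go
package main

import (
	"fmt"
	"strings"
)

// ensureProxyScheme is a hypothetical helper: when falling back from
// http-proxy to apt-proxy, prepend a scheme if one is missing, so apt
// always receives a well-formed URL.
func ensureProxyScheme(proxy string) string {
	if proxy == "" {
		return proxy // nothing to normalize
	}
	if !strings.Contains(proxy, "://") {
		return "http://" + proxy
	}
	return proxy
}

func main() {
	fmt.Println(ensureProxyScheme("10.0.3.1:8000"))        // http://10.0.3.1:8000
	fmt.Println(ensureProxyScheme("http://10.0.3.1:8000")) // http://10.0.3.1:8000
}
```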
[13:14] <jamestunnicliffe> voidspace: thanks. Have your lunch though!
[13:15] <TheMue> jamestunnicliffe: I have to reconfigure my chat client because of your nick, it's indeed very looooong. :D
[13:15] <jamestunnicliffe> TheMue: :p
[13:15] <TheMue> jamestunnicliffe: it breaks the alignment here
[13:18] <TheMue> jamestunnicliffe: so, fixed ;)
[13:24] <perrito666> axw: :D
[13:57] <jw4> axw: so your fix merged... which tests are we waiting for to remove the blocker?
[14:00] <perrito666> jw4: you should ask abentley about that
[14:03] <jw4> perrito666: I see - it looked like your PR changes were in axw's PR
[14:03] <perrito666> they are
[14:04] <perrito666> I had a broken env last night and was unable to test them so I left the unpleasant task to axw since he had experience with azure
[14:04] <jw4> heh
[14:04] <jw4> nice
[14:04] <jw4> what time zone is abentley?
[14:05] <jw4> are we going to have to wait a day to begin merging again?
[14:05] <perrito666> nah, he is in one of the US tzs
[14:05] <jw4> perrito666: oh, good
[14:17] <jw4> morning abentley :)
[14:17] <abentley> jw4: Morning.
[14:18] <jw4> abentley: I see axw's fix for the blocker was merged... which tests are we waiting for before we can remove the blocker?
[14:20] <abentley> jw4: I guess industrial_test.  Where was it merged?  I see it marked as "Triaged" for 1.21 and 1.22, but "In progress" (not fix committed) for master.
[14:21] <jw4> abentley: hmm - maybe axw forgot to update the bug? https://github.com/juju/juju/pull/1526
[14:23] <jw4> maybe he was expecting perrito666 to update the bug? :)
[14:24] <perrito666> abentley: jw4 there, fix committed for master
[14:27] <jw4> abentley: interesting - industrial_test and industrial_test_azure look sunny to me, but no tests have run for a couple days on either
[14:28] <abentley> jw4: industrial-test is our weekly reliability test, used for this: http://reports.vapour.ws/reliability
[14:29] <jw4> abentley: ah, I see
[14:29] <jw4> abentley: I assume we won't wait for the weekly test to unblock?
[14:29] <abentley> jw4: No, we wont.
[14:30]  * jw4 thinks he wouldn't have done well with the marshmallow test as a kid
[14:36] <sinzui> perrito666, I am trying to resurrect juju-ci3's juju env. All agent confs have apiaddress: 10.0.3.1:17070. Editing the machines doesn't help because the insane value gets restored. Is this because the state-server has the wrong value? Do we assume the state server has the private dns name for the apiaddress?
[14:36] <perrito666> sinzui: otp, brb
[14:47] <alexisb> ericsnow, wwitzel3, you guys available tomorrow morning for a call with me?
[14:48] <alexisb> I need to chat with you guys re providers
[14:49] <voidspace> jamestunnicliffe: FYI https://bugs.launchpad.net/juju-core/+bug/1417617
[14:49] <mup> Bug #1417617: apt-proxy can be incorrectly set when the fallback from http-proxy is used <juju-core:New> <https://launchpad.net/bugs/1417617>
[14:51] <ericsnow> alexisb: sure, what time?
[14:51] <alexisb> I sent an invite
[14:51] <alexisb> ericsnow, I will need you on a call with altoros on thursday too
[14:51] <alexisb> I will send an invite
[14:52] <ericsnow> alexisb: sounds good
[14:54] <wwitzel3> alexisb: yep, sounds good
[14:55] <alexisb> thanks guys
[14:58] <jamestunnicliffe> voidspace: does the URL require the trailing slash, or just the scheme?
[15:09] <perrito666> sinzui: yes, most likely state server has the wrong addr
[15:15] <sinzui> perrito666, indeed bootstrap has the insane address. I edited the agent.conf and restarted. I got juju status working by the restart, but the confs were rewritten with the bad address
[15:21] <perrito666> sinzui: we really need a command to fix that
[15:21] <aznashwan> ericsnow: I just pushed a new version which has been slightly refactored to make testing possible and am getting to updating the tests, which should go quickly
[15:21] <aznashwan> ericsnow: mind re-having a look sometimes please: http://reviews.vapour.ws/r/671/
[15:22] <ericsnow> aznashwan: I'll have a look today
[15:22] <perrito666> sinzui: so, I know you hate restore :) but in the old version of restore you can actually see what you need to change in the state server to have the right address
[15:28] <voidspace> jamestunnicliffe: the scheme
[15:42] <sinzui> perrito666, ericsnow We know what the apiaddresses were for juju-ci3. I don't know how to force juju to use them. Since editing the agent.conf fixes juju for 1 second, I think I need to edit something else to get rid of the lxcbr0 address as the apiaddress.
[15:43] <sinzui> perrito666, ericsnow Is the address in juju-db? Do I need to update the juju collection in the db with the old address?
[15:45] <perrito666> sinzui: exactly, or else peergrouper will just keep updating the agent.conf
[15:46]  * sinzui looks for collection and key
[15:46] <sinzui> perrito666, do you know the collection and key I need to edit?
[15:46] <perrito666> sinzui: indeed, gimme a sec
[15:52] <perrito666> sinzui: gimme something more like a couple of mins
[15:52] <sinzui> thank you perrito666
[15:52] <perrito666> I am fetching a person who actually had the exact same issue the other day and figured out all the collections that needed to be tweaked
[15:53] <perrito666> well, he is here. niedbalski, meet sinzui - you both break juju in similar ways :p
[15:54] <sinzui> perrito666, and possibly this person https://bugs.launchpad.net/juju-core/+bug/1416928
[15:54] <mup> Bug #1416928: juju agent using lxcbr0 address as apiaddress instead of juju-br0 breaks agents <api> <lxc> <network> <juju-core:Triaged> <juju-core 1.21:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1416928>
[15:55] <perrito666> man we really need a "update api server" command
[15:55] <niedbalski> perrito666, uhmm afaik it's not the same issue; the one we faced was related to a wrongly commissioned node in MAAS that returned an incorrect private ip address as primary. Thus the only 'fix' was to manually alter it in the machine collection.
[15:56] <niedbalski> perrito666, I am not aware of the specific collections that need to be altered when you change the apiaddresses
[15:56] <perrito666> niedbalski: ouch :( sad
[15:58] <sinzui> oh, how do I fix values in the db when the js is not available
[15:58] <sinzui> perrito666, +1 for a new command.
[16:00] <niedbalski> sinzui, I am afraid that will not be persistent after reboot
[16:03] <sinzui> niedbalski, my env has 36 jujuds, each breaks seconds after I fix a conf file. Just getting an env that stays up long enough to allow me to update the services would be a big improvement
[16:10] <abentley> perrito666, jw4: after manually re-running industrial-test against the latest master build, I've marked the issue fix released on master.  Fixes for 1.21 and 1.22 are pending, right?
[16:11] <perrito666> abentley: correct, I'll try to backport them asap
[16:11] <abentley> perrito666: Thanks!
[16:27] <sinzui> bogdanteleaga, How do I install/use that mongod.exe on the win-slave? Do I place it in the syspath?
[16:36] <bogdanteleaga> sinzui, yeah just place it somewhere in path
[16:38] <sinzui> thank you bogdanteleaga
[17:03] <voidspace> perrito666: as OCR, if you have a chance
[17:03] <voidspace> perrito666: http://reviews.vapour.ws/r/853/
[17:08] <TheMue> voidspace: *click*
[17:09] <voidspace> TheMue: thanks
[17:09] <voidspace> TheMue: note that moving the proxyupdater to be started first doesn't fix all problems when deploying behind a proxy
[17:09] <voidspace> TheMue: so I'm doing "further investigations"
[17:09] <voidspace> TheMue: all tests pass though, and it seems like a *wise* change
[17:10] <TheMue> voidspace: yes, I've seen it; I'd already thought it's a bit of a risky race, now with better chances for the proxyupdater
[17:11] <voidspace> TheMue: it might need us to wait on the first change
[17:11] <voidspace> TheMue: I'll investigate if that actually fixes the remaining issue I'm seeing or if there's something else going on
[17:11] <TheMue> voidspace: yep
[17:27] <perrito666> voidspace: I don't see how that solves anything
[17:27] <voidspace> perrito666: which bit - the whole thing?
[17:27] <perrito666> voidspace: sorry
[17:27] <voidspace> perrito666: it solves not being able to deploy charms behind a proxy for one thing...
[17:27] <perrito666> worker/proxyupdater/proxyupdater.go
[17:27] <perrito666> l 148
[17:28] <perrito666> voidspace: I believe you :) what I'm saying is I don't see how it does
[17:28] <voidspace> perrito666: we set the new proxySettings as environment variables unconditionally
[17:28] <voidspace> perrito666: essentially the logic determining first run is wrong - and there's no downside to just always setting them
[17:29] <perrito666> I mean, before that the proxy settings were updated if a given condition was met, right?
[17:29] <voidspace> perrito666: correct
[17:29] <voidspace> perrito666: that condition meant that they weren't set at all
[17:30] <perrito666> if I get it right, what it does is set the proxy settings if they are somehow different from the current ones
[17:30] <voidspace> perrito666: what do you refer to by "it"? the original code or the new code
[17:30] <perrito666> the original code
[17:31] <voidspace> perrito666: that was the idea, yes
[17:32] <perrito666> so it seems there is a different problem there, you just hid it
[17:32] <voidspace> perrito666: I solved the immediate problem
[17:32] <voidspace> of the environment variables not getting set
[17:32] <voidspace> what's the downside to that?
[17:33] <voidspace> perrito666: this approach was discussed with wallyworld and fwreade
[17:33] <voidspace> perrito666: if that helps :-)
[17:34] <voidspace> perrito666: I left the "first" logic in place as it's still used for writing system files
[17:34] <perrito666> for starters, what is inside the if is not being executed (and I am guessing it was supposed to be), and second, if the assumption made there (that, if the incoming settings are equal to the existing ones, env vars should already be set) is invalid, then something that is expected to happen somewhere else is not happening
[17:34] <voidspace> and mysteriously enough that seems to work
[17:38] <perrito666> voidspace: anyway, that is the only part which doesn't convince me. I would ship it with an issue to check why that assumption was made in the first place, at least because this immediate problem seems to be causing problems
[17:38] <voidspace> perrito666: it's not obvious why it wouldn't work from reading the code
[17:39] <voidspace> perrito666: proxyupdater.New starts the worker with first set to true. SetUp calls onChange and then sets first to false
[17:39] <perrito666> voidspace: well, if it were, the bug wouldn't be there in the first place :p
[17:39] <voidspace> perrito666: so when the worker is started "first" should be true and that code should be executed
[17:39] <voidspace> but the environment variables were *not* being set, and now they are
[17:40] <voidspace> so I'm happy to look into it further, but in the meantime I think this is a good fix we should ship
[17:40] <voidspace> as it resolves an actual problem leaving merely theoretical ones behind...
[17:40] <perrito666> voidspace: is it possible that there is a call to handleProxyValues with empty values and these are being unset?
[17:41] <voidspace> perrito666: hmmm, well onChange fetches them from api.EnvironConfig
[17:42] <voidspace> perrito666: so indeed they *could* be empty on first run (I suppose - they shouldn't be though)
[17:42] <voidspace> perrito666: but then the next change would match the first part of the clause (the actual values wouldn't equal the empty values)
[17:42] <voidspace> so that shouldn't be the cause either
[17:42] <voidspace> perrito666: I will put some tracing code in to see what values we're getting called with and what "first" is set to
[17:43] <perrito666> voidspace: I sense a race where those cases are reverted, but anyway, remember my ship it is meaningless :(
[17:43] <voidspace> still?
[17:43] <voidspace> perrito666: shouldn't you ask to graduate?
[17:46] <perrito666> voidspace: there, I explained it to you in priv
[18:03] <aznashwan> ericsnow: ping?
[18:03] <ericsnow> aznashwan: hi
[18:03] <aznashwan> ericsnow: so, just pushed the updated tests
[18:04] <aznashwan> ericsnow: apart from the InitDir constant, everything is done on my end, and I would love to finally see it merge :D
[18:05] <aznashwan> ericsnow: how are you guys doing?
[18:05] <ericsnow> aznashwan: making progress
[18:05] <aznashwan> ericsnow: if you need any help, do feel free to ask
[18:06] <ericsnow> aznashwan: we've been working on incorporating your systemd patch into the new services approach
[18:06] <ericsnow> aznashwan: will do
[18:07] <aznashwan> ericsnow: awesome, thanks. seeing as everything was very well abstracted, I suspect things should go smoothly
[18:07] <aznashwan> ericsnow: thanks again, and as always, awaiting any feedback :D
[18:08] <ericsnow> aznashwan: I should be able to take a look a few hours from now
[18:08] <ericsnow> aznashwan: we're pairing until then
[18:14] <aznashwan> ericsnow: no hurries, my workday has come to an end anyhow
[18:14] <ericsnow> aznashwan: something to look forward to then :)
[18:15] <aznashwan> ericsnow: have a nice day, I'll get back tomorrow :D
[18:15] <perrito666> so, who here has ship powers and can look at http://reviews.vapour.ws/r/854/
[18:17] <natefinch> perrito666: looking
[18:18] <perrito666> the gh ui for PRs against something other than master has become a bit confusing
[18:18] <natefinch> yeah, it's a pain in the ass
[18:20] <perrito666> and the exact same for 1.22 http://reviews.vapour.ws/r/855/
[18:26] <natefinch> sigh... this is a backport that's already been approved on trunk, right?
[18:26] <voidspace> TheMue: perrito666: thanks for the reviews
[18:26] <natefinch> So if I have issues with the code, it would need to change on trunk as well?
[18:27] <TheMue> voidspace: yw, we could talk a bit more about it tomorrow
[18:27] <natefinch> perrito666:  ^^
[18:28] <perrito666> natefinch: ?
[18:30] <natefinch> perrito666: that review you wanted me to look at, the backport... this is just a straight backport of code on trunk, right?
[18:30] <perrito666> true
[18:30] <perrito666> natefinch: the code is actually landed on trunk
[18:30] <natefinch> ok, I won't nitpick the code then...a couple very minor things just bugged me as confusing
[18:30] <perrito666> it is what unlocked this AM's CI lockdown
[18:31] <jw4> abentley: thanks!
[18:31] <voidspace> TheMue: I'm continuing to investigate
[18:31] <voidspace> TheMue: so I'm merging this branch but not closing the issue
[18:31] <TheMue> voidspace: +1
[18:31] <voidspace> TheMue: and I've added a comment to the issue about the faulty first logic and the charmrevision worker issue
[18:32] <natefinch> perrito666: I gave it a shipit
[18:32] <perrito666> natefinch: can you do that for the exact same pr for 1.22? :)
[18:33] <natefinch> perrito666: just did
[18:33]  * perrito666 wonders why his phone did not say anything
[18:34]  * perrito666 looks at landing bot 
[18:53] <voidspace> jamestunnicliffe: I added a comment to your issue
[18:53] <voidspace> jamestunnicliffe: hopefully the repro instructions are comprehensible, if not I can help you out further
[18:53] <voidspace> jamestunnicliffe: probably tomorrow now though...
[18:53] <voidspace> as I'm EOD
[18:53] <voidspace> g'night all
[18:53] <perrito666> bye
[18:58] <perrito666> ericsnow: 837 is long :)
[18:58] <ericsnow> perrito666: tell me about it :)
[19:07] <perrito666> I think we could benefit from making the "fixes-#####" expression a bit more permissive
[19:40] <natefinch>  TheMue - added bodie_ to the gojson* repos (and basically everyone else)
[19:40] <jw4> natefinch: thanks
[19:40] <TheMue> natefinch: great, thank you
[19:44] <bodie_> sweet, thanks
[19:50] <natefinch> just changed `if field, ok := myMap[foo]; !ok || field == "" {` to `if myMap[foo] == "" {` .... I love that maps return the zero value for the type in Go
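(A minimal, self-contained illustration of the simplification natefinch mentions: indexing a Go map with a missing key yields the value type's zero value, so a combined "missing or empty" check collapses to a single comparison. The map and key names are made up for the example.)

```go
package main

import "fmt"

func main() {
	myMap := map[string]string{"present": "value"}

	// Long form: the comma-ok idiom distinguishes "absent" from "empty".
	if field, ok := myMap["absent"]; !ok || field == "" {
		fmt.Println("long form: missing or empty")
	}

	// Short form: a missing key yields the zero value "", so both the
	// absent and empty cases compare equal to "".
	if myMap["absent"] == "" {
		fmt.Println("short form: missing or empty")
	}
}
```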
[20:02] <perrito666> ericsnow: I did a review of one of your patches, I could go more in depth but my head is not in shape
[20:03] <ericsnow> perrito666: no worries; thanks for the review
[20:04] <ericsnow> perrito666: regarding the year, the code was copyrighted in that year (I just copied it into a new file)
[20:04] <ericsnow> perrito666: perhaps I've misunderstood
[20:04] <natefinch> I don't really think the year on the copyright really matters
[20:05] <perrito666> ericsnow: I have been following the rule: new file, new year, but I don't think we are being consistent on it
[20:05] <perrito666> because of the copyright being on the file and its contents
[20:05] <ericsnow> perrito666: that's probably the best way to go
[20:05] <perrito666> also it's a great exercise to remember we are in 2015
[20:10] <thumper> o/
[20:12] <perrito666> ericsnow: to make it short, you could add apt-add-repository to the apt package :)
[20:12] <perrito666> thumper: hi
[20:13] <perrito666> abentley: just FYI https://bugs.launchpad.net/juju-core/1.22/+bug/1417178 is fix committed for all versions
[20:13] <mup> Bug #1417178: juju restore no longer works with Azure: error: cannot re-bootstrap environment: cannot determine state server instances: environment is not bootstrapped <azure-provider> <backup-restore> <ci> <regression> <juju-core:Fix Released by hduran-8> <juju-core 1.21:Fix Committed by hduran-8>
[20:13] <mup> <juju-core 1.22:Fix Committed by hduran-8> <https://launchpad.net/bugs/1417178>
[20:13] <thumper> I see axw's branch landed, are we unblocked now?
[20:14] <perrito666> thumper: we are for master
[20:14] <perrito666> and on our way there for 1.21 and 1.22
[20:14] <perrito666> and after a couple of hours of sleep I figured out why my master was not building
[20:36] <perrito666> bbl
[20:59] <davecheney> anyone for a simple review ? https://github.com/juju/juju/pull/1531
[21:08] <jw4> davecheney: shipit
[21:10] <perrito666> davecheney: you got a review and a recursion problem
[21:10] <perrito666> :p
[21:11] <jw4> perrito666: lol
[22:00] <davecheney> http://reviews.vapour.ws/r/857/
[22:00] <davecheney> trivial change
[22:09] <davecheney> "it's raining innovation" -- thumper
[22:28] <jw4> davecheney: we have four groups now? or is that a mistake in a couple of your files in 857?
[22:28] <jw4> davecheney: import groups
[22:32] <sinzui> thumper, natefinch: we need someone to look into bug 1417790
[22:32] <mup> Bug #1417790: manual ppc64el units cannot download agents <ci> <maas-provider> <ppc64el> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1417790>
[22:33] <thumper> sinzui: we are sprinting right now... anyone else around?
[22:34] <sinzui> thumper, I am hoping you can find people on your teams, not that you will personally fix the issue
[22:34] <thumper> sinzui: my team is all with me
[22:34] <thumper> sinzui: I'm not in cape town
[22:34] <sinzui> I suppose we need to wait for axw to wake
[22:35] <sinzui> or anastasiamac
[22:43] <jw4> sinzui: what *should* the state server ip address be for that ppc64el bug?  voidspace's PR just moved the proxy settings to before the upgrader, which seems plausibly connected to me?
[22:44] <jw4> sinzui: is it that the 10.0.3.1 address is private not public?
[22:44] <jw4> s/connected to me/connected to the issue/    (???)
[22:46] <sinzui> jw4,  10.0.3.1 is from the lxcbr0 interface. The machines are on the eth0 interface: 10.245.67.135 10.245.67.136 10.245.67.137. 135 is the bootstrap server in the test
[22:46] <jw4> sinzui: I see
[22:47] <sinzui> jw4,  and bug 1417308 and bug 1416928 are other examples of the wrong network being selected
[22:48] <mup> Bug #1417308: Juju-ci3 cannot upgrade to 1.21.1 <api> <ci> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1417308>
[22:48] <mup> Bug #1416928: juju agent using lxcbr0 address as apiaddress instead of juju-br0 breaks agents <api> <lxc> <network> <juju-core:Triaged> <juju-core 1.21:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1416928>
[22:48] <jw4> sinzui: the logs seem to indicate that the state server is listening on *both* networks?
[22:49] <jw4> sinzui: that seemed to have changed specifically with voidspace's commit
[22:50] <jw4> sinzui: maybe the unconditional proxy settings caused the state server to identify both networks
[22:50] <jw4> sinzui: and then the source and sink units are just picking the wrong one
[22:55]  * jw4 will brb
[23:15] <perrito666> and for a brief moment reviewboard forced me to communicate with pictures, I am back to the caves or ancient Egypt
[23:45] <jw4> sinzui: is 10.0.31 ever a usable IP for the state server? Only when using local provider?
[23:46] <jw4> s/10.0.31/10.0.3.1/
[23:51] <sinzui> jw4, I don't think so. Except for local-provider, the state server needs an address visible to other machines. But if the unit is deployed to a container on the state-server, like juju-gui, maybe that is the address the container needs to see....
[23:51] <sinzui> OMG
[23:52] <jw4> sinzui: OMG?
[23:52] <sinzui> LXC is an anagram for CuXuLu and I suspect both the juju-ci3 upgrade bug and the manual provider bugs are caused by recent installations of lxc on those machines...
[23:54] <sinzui> jw4, I think I can revert the manual machine. If the test passes... then the bug will be downgraded and master re-opened.
[23:54] <jw4> sinzui: woot
[23:55] <jw4> sinzui: I think there will still need to be work obviously to fix this - either filtering out 10.0.3.1 as an unusable IP (network/hostport.go:182) with some jiggery for local provider - or improving the sorting of IPs so that 10.[^0].*.* is preferred to 10.0.3.*
[23:56] <sinzui> jw4, agreed. Those other bugs imply the state-server does something stupid if lxc (lxcbr0) is installed.
[23:57] <sinzui> jw4, I am watching http://juju-ci.vapour.ws:8080/view/Juju%20Revisions/job/manual-deploy-trusty-ppc64/1076/console which is now run without lxc on the bootstrap machine
[23:57] <sinzui> and I predict quick pass
[23:58] <anastasiamac> sinzui: morning! were u after me?
[23:58] <jw4> sinzui: as I'm looking at the SetAPIHostPorts and related code I'm not finding any distinction between 10.0.3.x IP addresses and other 10.0.0.0 addresses
[23:58] <jw4> sinzui: not sure if that should change... should we assume 10.0.3.x is a 'magic' range?
[23:59] <sinzui> anastasiamac, I was but I got new glasses and jw4's question made me realise that a regression isn't the regression I thought it was