[00:08] <sinzui> hazmat, yes --fancy shows them running
[00:08] <sinzui> hazmat, yes pending status, which is all I have ever seen since 1.14.0
[00:08] <hazmat> hmm
[00:09] <hazmat> that's a good start
[00:09] <hazmat> sinzui, jorge and marco had this issue, thumper had to do surgery on them.. but tbd if it's needed here
[00:10]  * hazmat waits for cloudinit output
[00:10] <sinzui> hazmat, cloud-init: http://pastebin.ubuntu.com/6333073/
[00:10] <hazmat> cool.. the source of fail
[00:10] <sinzui> which agrees with
[00:10] <sinzui> curtis-local-machine-1  RUNNING  -     -     YES
[00:11] <hazmat> sinzui, so let's try a clean tear down..
[00:11] <sinzui> I wonder if I need to locate abentley's bug that explained his two step clear lxc cache
[00:11] <hazmat> sinzui, issue there is no cloudinit data made it to the container
[00:11] <hazmat> sinzui, juju destroy-environment
[00:12] <sinzui> done
[00:12] <hazmat> sinzui, anything in ls /etc/lxc/auto ?
[00:12] <sinzui> no
[00:13] <hazmat> sinzui, how about ls /var/lib/juju/containers/
[00:13] <sinzui> hazmat, empty
[00:13] <hazmat> sinzui, hmm.. and ls /var/lib/juju/removed-containers/
[00:14] <hazmat> sinzui, this is trunk or 1.17 ?
[00:14] <hazmat> er. 1.16
[00:14] <sinzui> hazmat, 1.16.1 release candidate...the one that is supposed to fix lxc
[00:14] <hazmat> ah.. one of those
[00:14] <sinzui> hazmat, Lots in removed containers
[00:15] <sinzui> curtis-local-machine-1  curtis-me-machine-1    curtis-me-machine-1.2  curtis-me-machine-1.4
[00:15] <sinzui> curtis-local-machine-2  curtis-me-machine-1.1  curtis-me-machine-1.3  curtis-me-machine-2
[00:15] <hazmat> sinzui go ahead and kill them.
[00:15] <hazmat> and move your $JUJU_HOME/$local_provider_name out of the way
[00:15] <sinzui> hazmat, done
[00:15] <hazmat> and that should be pristine for a new bootstrap
[00:15] <hazmat> sinzui, if you're bootstrapping.. with 1.16.1 tools you may need --upload-tools
[00:15] <hazmat> if they're not published
[00:16] <sinzui> I think that last step is key. I have not done that today
[00:16] <hazmat> sinzui, yeah.. --debug on bootstrap and pastebin for posterity if needed
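The clean-teardown sequence hazmat walks through above can be sketched as one script. This is a hedged recap of the conversation, not an official procedure; `local` stands in for whatever the local environment is named in environments.yaml.

```shell
#!/bin/sh
# Hedged recap of the teardown steps above; "local" is a placeholder
# for the local environment's name.
juju destroy-environment

# these should all come back empty once teardown finished
ls /etc/lxc/auto
ls /var/lib/juju/containers/
ls /var/lib/juju/removed-containers/   # leftovers here can be deleted

# move the provider state out of the way for a pristine bootstrap
mv "$JUJU_HOME/local" "$JUJU_HOME/local.bak"

# re-bootstrap with debug logging; --upload-tools is only needed
# if matching 1.16.1 tools aren't published yet
juju bootstrap --upload-tools --debug
```

Keeping the `--debug` output (and a pastebin of it) is what lets the later diagnosis happen.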
[00:22] <sinzui> hazmat, nothing different from my previous tests...so far http://pastebin.ubuntu.com/6333122/
[00:23] <hazmat> sinzui, can you set juju set-env  logging-config="<root>=DEBUG;juju.provisioner=TRACE"
[00:23] <hazmat> and then deploy something..
[00:25] <hazmat> sinzui, btw re logs $JUJU_HOME/$local-env-name/machine-0.log has the logs of interest
[00:26] <sinzui> yeah, that is what I am tailing
[00:26] <hazmat> oh.. cool
[00:26] <thumper> sinzui: no all-machines.log for local at this stage
[00:26] <hazmat> sinzui, so .. i get roughly the same content you did
[00:26] <hazmat> 2013-10-31 00:26:17 INFO juju.provisioner provisioner_task.go:367 started machine 1 as instance kapil-local-machine-1 with hardware <nil>
[00:27] <thumper> so, what's the summary of the current issues?
[00:28] <sinzui> hazmat, 3 running machines without net
[00:28] <hazmat> thumper, local provider seems to work on trunk.. sinzui has it failing for him 1.16.1
[00:28] <hazmat> thumper, somewhat more critical is that it doesn't seem to work on maas as a container at all atm
[00:28] <sinzui> hazmat, all three services/machines are pending
[00:29] <hazmat> sinzui, you can watch them do upstart and lxc-create via pstree
[00:29]  * thumper sighs...
[00:30] <hazmat> thumper, the containers on maas are critical for some ODS stuff
[00:30]  * thumper nods
[00:30] <thumper> let's get that working first then
[00:30] <hazmat> thumper, i'm going to give it a go with trunk on the maas i think..
[00:30] <hazmat> just to see if that helps at all. plus to have a source compile i can instrument.. since the logging here seems to be nil
[00:31] <thumper> hazmat: do you have any instructions for a virtual maas setup
[00:31] <hazmat> sinzui, cool, is cloudinit done from pstree?
[00:31] <thumper> so we can test locally?
[00:31] <sinzui> abentley reported this bug. I don't think it is related because I start my lts container every day
[00:31] <sinzui> https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1236490
[00:31] <_mup_> Bug #1236490: Container has no networking <amd64> <apparmor> <apport-bug> <saucy> <linux (Ubuntu):Fix Released> <lxc (Ubuntu):Invalid> <https://launchpad.net/bugs/1236490>
[00:31] <hazmat> thumper, you mean did i attend the maas training ;-) ?
[00:31] <hazmat> no actually..
[00:31]  * bigjools bans the maas training from existence
[00:31] <thumper> sinzui: hangout? I can talk through some things
[00:32] <hazmat> sinzui, can you pastebin lxc-ls --fancy
[00:32] <hazmat> not sure what you mean by no networking
[00:32] <sinzui> hazmat, yes it is. I can tell when my fan stops actually
[00:32] <hazmat> cloudinit is injected here via file
[00:32] <hazmat> sinzui, wow.. is that spinning rust?
[00:32] <hazmat> sinzui,  or ssd?
[00:32] <sinzui> ssd
[00:32]  * hazmat runs a dozen containers normally
[00:33] <hazmat> only 2-4 for juju though
[00:33] <hazmat> i don't even notice them
[00:33] <sinzui> I have my fan tuned to come on often...it's a mac and they run hot
[00:33] <hazmat> gotcha
[00:33] <sinzui> hazmat, no improvement http://pastebin.ubuntu.com/6333175/
[00:34] <hazmat> that's intriguing
[00:34] <hazmat> sinzui, can you go to /var/lib/juju/containers
[00:35] <thumper> there should be console.log in there for each machine
[00:35] <hazmat> and pastebin the console.log
[00:35] <thumper> that is the output of running the cloudinit
[00:35] <hazmat> although in this case i think we want the container.log
[00:35] <thumper> sinzui: also pastebin ifconfig
[00:35] <hazmat> the netns stuff should have happened pre lxc
[00:36] <hazmat> er. pre cloudinit
[00:36] <sinzui> hazmat, console.log http://pastebin.ubuntu.com/6333189/
[00:37] <sinzui> thumper, my ifconfig or one from a container?
[00:37] <thumper> sinzui: from the host
[00:38] <hazmat> sinzui, can you pastebin the container.log post ifconfig ..
[00:38] <hazmat> that should detail why the no netns
[00:38] <sinzui> thumper, ifconfig http://pastebin.ubuntu.com/6333193/
[00:38] <hazmat> huh
[00:38] <hazmat> you've got three veth devices there.
[00:38] <hazmat> for your three containers
[00:38] <hazmat> sinzui, is dnsmasq running ?
[00:39] <thumper> sinzui: how about the lxc.conf from the /var/lib/juju/containers/<firstmachine>/
[00:39] <hazmat> you should see something like... 116       1190  0.0  0.0  28184   804 ?        S    Oct15   0:04 dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/lxc/dnsmasq.pid --conf-file= --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative
[00:39] <hazmat> er.. ignoring the preamble.. that's from ps aux | grep dnsmasq
[00:39] <sinzui> hazmat, yes http://pastebin.ubuntu.com/6333198/
[00:40] <hazmat> sinzui, the container.log should have the netns come up....
[00:40] <sinzui> hazmat, the lxc.conf from -1 http://pastebin.ubuntu.com/6333199/
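The checks hazmat and thumper run above amount to a short diagnostic checklist for a local-provider container with no networking. A hedged consolidation; the container name is an example from this session:

```shell
#!/bin/sh
# Sketch of the diagnostics above; container name is an example.
C=curtis-local-machine-1

cat /var/lib/juju/containers/$C/console.log    # output of cloud-init inside the container
cat /var/lib/juju/containers/$C/container.log  # lxc setup, including netns/veth creation
cat /var/lib/juju/containers/$C/lxc.conf       # network config handed to lxc

ifconfig                                       # expect one veth device per container
ps aux | grep dnsmasq                          # dhcp server bound to lxcbr0
```

If the veth devices and dnsmasq are all present on the host (as they were here), the failure is inside the container's image rather than in juju's network setup.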
[00:41]  * hazmat leaves to thumper's capable hands
[00:41] <thumper> ok, this needs help from an lxc person
[00:41] <thumper> as it is outside the remit of juju
[00:42] <thumper> the container didn't get networking
[00:42] <thumper> prior to all the juju shit
[00:42] <sinzui> thumper, fab, that's what I think too
[00:42] <thumper> sinzui: if you bring up a machine manually, does it get an ip address?
[00:42] <sinzui> while abentley's bug is not identical, I will try the work around
[00:43] <sinzui> thumper, do you mean bring one down then bring it back up again?
[00:43] <thumper> sinzui: what do you get with "ls -l /var/cache/lxc" ?
[00:43] <thumper> actually
[00:43] <thumper> sinzui: ls -lh /var/cache/lxc/cloud-precise/
[00:43] <sinzui> thumper,
[00:43] <sinzui> drwxr-xr-x 2 root root 4096 Jul  1 18:29 cloud-precise
[00:43] <sinzui> drwxr-xr-x 3 root root 4096 Jan  9  2013 precise
[00:44] <sinzui> thumper, -rw-r--r-- 1 root root 221M Jun 24 03:07 ubuntu-12.04.2-server-cloudimg-amd64-root.tar.gz
[00:44] <sinzui> -rw-r--r-- 1 root root 206M Dec 17  2012 ubuntu-12.04-server-cloudimg-amd64-root.tar.gz
[00:44] <thumper> ok, that's it
[00:44] <thumper> ubuntu-12.04-server-cloudimg-amd64-root.tar.gz is too old
[00:44] <thumper> and the refresh mechanism isn't working
[00:44] <thumper> this is the problem marco had
[00:44] <thumper> and jorge
[00:45] <thumper> smoser is aware
[00:45] <thumper> sinzui: delete the file
[00:45] <hazmat> sinzui, can you post the console.log
[00:45] <thumper> destroy the environment
[00:45] <hazmat> er.. container.log
[00:45] <hazmat> sinzui, ^
[00:45] <thumper> and start again
[00:45] <hazmat> before destroying
[00:45] <smoser> thats a known issue.
[00:45] <smoser> yeah, remove that file.
[00:45] <thumper> lxc will download a fresh copy of the cloud image
[00:45] <hazmat> oh.. that one
[00:45] <thumper> and it "should" work
[00:45] <hazmat> i forget about that.. i ran into that as well
[00:46] <hazmat> smoser, so what causes that?
[00:46] <hazmat> smoser, with the new lxc cloud template / cloud img download there's no dot versioning around point releases..
[00:47] <hazmat> but never really understood why the old container wouldn't netns
[00:47] <thumper> sinzui: still there?
[00:47] <smoser> "new lxc cloud template" ?
[00:47] <sinzui> hazmat, I have a copy of the container log. anything else before I start purging?
[00:47] <smoser> there is no such thing as "dot"
[00:47] <smoser> ever.
[00:47] <smoser> 12.04.3 is no different than 12.04
[00:48] <smoser> the cloud images do not distinguish them in name. they did at one point and that was a bug.
[00:48] <hazmat> smoser, it is though from a cached file sense
[00:48] <smoser> that is independent of .2 verus .3
[00:48] <hazmat> previously it did.. and it worked better for caching and invalidation purposes when the point release incremented
[00:48] <smoser> previously it was broken on the server side in naming
[00:49] <smoser> which broke people who expect to 'wget http://path/to/some/known/current/release/file.img'
[00:49] <smoser> https://bugs.launchpad.net/ubuntu/+bug/1220366
[00:49] <_mup_> Bug #1220366: cloud-images have inconsistent filenames in 12.04.3 <bot-comment> <cloud-images> <cloud-images-build> <Ubuntu:Fix Released by utlemming> <https://launchpad.net/bugs/1220366>
[00:50] <sinzui> thumper, I am bootstrapping again
[00:50] <hazmat> ic
[00:50] <hazmat> that should fix sinzui's issue then
[00:50] <smoser> the cloud template needs to be smarter.
[00:50] <smoser> but we really would rather make a lxc wrapper that was smarter.
[00:51] <smoser> ie, hallyn doesn't think such knowledge should live in lxc itself.
[00:52] <hazmat> smoser, ala the kvm front end.. or ala docker front end?
[00:52] <smoser> uvtool-lxc is the plan in my head.
[00:52] <smoser> which is also ideally where the "--use-the-fastest" cloning argument goes.
[00:52] <sinzui> My fan has started
[00:52] <smoser> :)
[00:53] <hazmat> smoser, the default behavior with -s works for me generally.. ie autodetect btrfs ;-).. but i'm curious about the docker lvm thin provisioning on sparse file dev pool
[00:54] <sinzui> whoa! there it is. Installed
[00:54] <hazmat> sidnei just got that merged into lxc-clone (re lvm-thin) support
[00:54] <hazmat> sinzui, do we have tools published for 1.16.1 or tarball?
[00:54] <smoser> hazmat, i think there were issues with that default behavior.
[00:54] <smoser> are you sure it even remains ?
[00:55] <smoser> and it wouldn't select unionfs
[00:55]  * hazmat checks
[00:55] <sinzui> Thank you rocket scientists! hazmat thumper smoser. I really appreciate your patience
[00:55] <thumper> sinzui: awesome
[00:55] <thumper> that's one fixed
[00:55] <thumper> and that fault should definitely be logged on the local provider troubleshooting page <- jcastro, marcoceppi
[00:55] <thumper> now to look at maas
[00:55] <bigjools> o/
[00:56] <bigjools> I just got pinged about this
[00:56] <thumper> hazmat: what versions were you playing with?
[00:56] <bigjools> narrowed it down any yet?
[00:56] <thumper> bigjools: not started looking yet
[00:56] <hazmat> thumper, 1.16.0 for maas
[00:56] <bigjools> thumper: I guess I should fire up my maas server? :)
[00:56] <thumper> bigjools: gimmie time to shower (haven't since getting back from the gym), and we can have an initial call prior to working out a debugging plan
[00:56] <thumper> bigjools: ack
[00:56] <sinzui> hazmat, yes, my testing tools are in place in aws and hp and canonistack. I place them in the testing/ subdir and point the tools url to them: http://juju-dist.s3.amazonaws.com/
[00:56] <bigjools> thumper: roger mcdodger
[00:57] <sinzui> hazmat, and I will officially make the tarball in about 10 minutes
[00:57] <hazmat> smoser, it requires explicit lxc-clone -s  but yeah.. it works without specing btrfs directly
[00:57] <hazmat> du -hs /var/lib/lxc/container-name -> 7mb
[00:58] <sinzui> hazmat, and you can see all the scripts I am using at https://code.launchpad.net/~juju-qa/juju-core/ci-cd-scripts2
[00:58] <sinzui> you can make your own deb without any juju parts installed
[01:10] <wallyworld_> bigjools: how long does an juju azure bootstrap instance normally take to spin up?
[01:11] <smoser> wallyworld_, ~ 5 minutes i think
[01:11] <bigjools> wallyworld_: from memory not at all quick, I think it was like 5-10 m
[01:11] <wallyworld_> ok, ta. just bootstrapped an and waiting for juju status
[01:11] <wallyworld_> s/just/have been
[01:12] <marcoceppi> thumper: can you summarize what should be logged?
[01:13] <thumper> marcoceppi: the symptom was that the machines were pending, and lxc-ls --fancy showed the machines started but with no ipv4 address
[01:13] <thumper> marcoceppi: ls -lh /var/cache/lxc/cloud-precise showed ubuntu-12.04-server-cloudimg-amd64-root.tar.gz to be over 4 months old
[01:13] <thumper> marcoceppi: delete that file, destroy the environment and try again
[01:14] <thumper> a new lxc cloud image will be downloaded and all should be good
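thumper's recap above reduces to an age check on the cached rootfs tarball plus a re-bootstrap. A hedged sketch of that check; the paths are the ones discussed in the session, but the 60-day staleness cutoff is an arbitrary illustration, not anything juju or lxc enforces:

```shell
#!/bin/sh
# Sketch only: spot stale cached cloud images, per the diagnosis above.
# The 60-day cutoff is an assumption for illustration.
CACHE=${CACHE:-/var/cache/lxc/cloud-precise}

stale_images() {
    # list any cached rootfs tarballs older than 60 days
    find "$CACHE" -name '*-root.tar.gz' -mtime +60 2>/dev/null
}

if [ -n "$(stale_images)" ]; then
    echo "stale cloud image(s) in $CACHE - delete, then destroy and re-bootstrap:"
    stale_images
    # sudo rm "$CACHE"/*-root.tar.gz
    # juju destroy-environment && juju bootstrap
fi
```

On the next lxc-create, lxc downloads a fresh cloud image, which is what unblocked things here.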
[01:17] <thumper> bigjools: hangout?
[01:18] <thumper> marcoceppi: does that all make sense?
[01:19] <sinzui> thumper, marcoceppi It makes sense to me. marcoceppi  can interview me tomorrow or he/nick can invite me to try updating the docs
[01:19]  * thumper nods
[01:19] <thumper> thanks sinzui
[01:19] <bigjools> thumper: calling you....
[01:20] <thumper> which me?
[01:20] <bigjools> the head
[01:20] <bigjools> wrong one?
[01:20] <bigjools> this is why I don't have two g+ profiles
[01:48] <marcoceppi> thumper: yeah, that's what happened to me
[01:48] <marcoceppi> already recorded
[01:50] <wallyworld_> bigjools: do you know the magic command to ssh into a bootstrap node on azure? i have the ip address of the machine
[01:51] <bigjools> juju ssh 0 :)
[01:51] <wallyworld_> bigjools: doesn't work. times out
[01:51] <wallyworld_> bootstrap has failed
[01:51] <wallyworld_> but i have the ip address of the bootstrap node
[01:51] <bigjools> ummmm in that case you're probably screwed
[01:52] <bigjools> it will be firewalled
[01:52] <wallyworld_> well, when i say bootstrap failed, it completes but juju status fails
[01:52] <bigjools> ok
[01:52] <wallyworld_> and i want to find out why
[01:52] <bigjools> can you telnet port 22?
[01:52] <wallyworld_> i'll try
[01:53] <wallyworld_> yes
[01:53] <wallyworld_> when i try ssh directly, it fails with a permission denied error (publickey)
[01:53] <bigjools> so it's up but hasn't added the key
[01:53] <wallyworld_> yeah, seems so
[01:53] <sinzui> wallyworld_, with azure, I wait 30 minutes for the machine to really be up
[01:53] <bigjools> which means bootstrap was not as successful as you might think
[01:54] <wallyworld_> sinzui: really? oh joy
[01:54] <bigjools> !
[01:54] <wallyworld_> wtf takes so long
[01:54] <bigjools> oh - yes the crazy provisioner
[01:55] <bigjools> you can see in the MS dashboard
[01:55] <wallyworld_> so before when juju status timed out and i destroyed my env, i should have just waited
[01:55] <sinzui> wallyworld_, on canonistack, we get TLS timeouts while the bootstrap node is coming up, but azure is silent. I have seen the machine come up in 20 minutes, but 30 is realistic
[01:55] <wallyworld_> dashboard? you have a link?
[01:56] <wallyworld_> sinzui: ok. i think i've fixed the azure incompatibility but am trying to make sure
[01:56] <bigjools> wallyworld_: well not exactly you would need my password.  let me look at it
[01:56] <wallyworld_> ok
[01:56] <wallyworld_> bigjools: i bet i can guess it
[01:56] <wallyworld_> aussieaussieaussieoioioi
[01:56] <bigjools> wallyworld_: "ianisacocktrumpet"
[01:57] <wallyworld_> that would have been my next guess
[01:58] <bigjools> wallyworld_: in fact it was so secure I can't remember it :)
[01:58] <wallyworld_> ha ha ha
[01:59] <bigjools> wallyworld_: get your own credentials
[01:59] <wallyworld_> bigjools: they're in the mail apparently
[02:00] <wallyworld_> i only needed yours cause of this azure cockup and the time pressure to fix it
[02:00] <bigjools> yeah np
[02:00] <sinzui> I encrypted my creds and put them in the CI machine so that my team could build better testing
[02:00] <wallyworld_> i wonder what else, besides juju, msft broke with their change?
[02:01] <bigjools> wallyworld_: btw baldrick left a hidden present that the mower found
[02:01] <wallyworld_> \o/ :-D
[02:01] <bigjools> you can imagine the result
[02:01] <wallyworld_> did it splat all over you /me asks hopefully?
[02:01] <bigjools> >:(
[02:01] <sinzui> Your dog is named baldrick?
[02:01] <wallyworld_> yep :-D
[02:01] <wallyworld_> he has a cunning plan
[02:02]  * sinzui was just telling his son about Ebenezer black adder 5 minutes ago.
[02:02] <bigjools> wallyworld_: I see two VMs running
[02:02]  * wallyworld_ is finding it hard to type cause he is laughing so hard at bigjools
[02:02] <bigjools> one of them from juju the other I think for a gwacl run
[02:02] <wallyworld_> bigjools: ah the gwacl one needs to be shutdown. i thought i did
[02:03]  * bigjools considers putting baldrick's present in wallyworld_'s mailbox
[02:03] <wallyworld_> sinzui: blackadder is one of my favourite tv shows
[02:03] <wallyworld_> bigjools: how can you, it's all vapourised all over you, right?
[02:03] <bigjools> there's some left
[02:04] <wallyworld_> next time get jen to mow the lawn after baldrick has visited :-D
[02:04] <bigjools> wallyworld_: so the cpu graph shows it finishing a heavy load at 11:45
[02:04] <wallyworld_> the juju one?
[02:05] <wallyworld_> what's the ip address?
[02:05] <bigjools> 137.135.11.14
[02:06] <wallyworld_> yep, that's the one juju status keeps polling mongo on, trying to do a status
[02:06] <wallyworld_> :-(
[02:06] <wallyworld_> i really need to get the logs off that sucker
[02:06] <bigjools> wallyworld_: 17070 and 37017 ports are open
[02:06] <bigjools> that's it
[02:07] <wallyworld_> 37017 is the mgo port
[02:07]  * bigjools wonders if it's the non-ssl mgo
[02:07] <wallyworld_> and it says connection refused
[02:07] <wallyworld_> ah
[02:07] <wallyworld_> that could be it
[02:07]  * bigjools gets lunch while wallyworld_ fixes it
[02:08]  * wallyworld_ doesn't know how to fix that one
[02:08] <bigjools> deploy with saucy?
[02:08] <wallyworld_> ah ok
[02:08] <wallyworld_> good idea
[02:08] <bigjools> it might have failed to add the cloud archive
[02:09] <wallyworld_> yeah, sounds likely
[02:09] <bigjools> although doesn't explain why you can't ssh in
[02:09] <wallyworld_> hopefully saucy will work
[02:09] <bigjools> heard a lot of swearing coming from the living room earlier - jen found jake pulling keys off her laptop again... ROFL
[02:10]  * bigjools lunches
[02:10] <wallyworld_> lol
[02:11] <thumper> sinzui: still around?
[02:11] <sinzui> I am
[02:11] <thumper> sinzui: I have a critical bug
[02:11] <thumper> sinzui: for 1.16
[02:12] <thumper> sinzui: it is the lxc and maas problem
[02:12] <thumper> that they are using for ods
[02:12] <thumper> demo
[02:12] <thumper> always last minute
[02:12] <thumper> it is a regression introduced when the provisioner moved to the api
[02:12] <sinzui> I just cut 1.16.1, but plan a 1.16.2 for azure.
[02:12] <thumper> but no one noticed
[02:12] <sinzui> thumper, Do I need to take down 1.16.1?
[02:12] <thumper> no
[02:12] <thumper> I don't think so
[02:12] <thumper> because they can use a custom release
[02:12] <thumper> I think
[02:13] <thumper> but we may have a 1.16.2 real soon now
[02:13] <thumper> if they do need a release
[02:13] <thumper> what's the timeframe for 1.16.2?
[02:13] <thumper> bigjools: I have a plan
[02:13] <sinzui> I can make it tomorrow immediately after 1.16.1 is needed. I hope for an azure fix of course
[02:13]  * thumper nods
[02:13] <thumper> ok
[02:13] <thumper> let me file this bug and get to work
[02:15]  * sinzui creates milestone
[02:17] <wallyworld_> sinzui: azure fix will be committed to gwacl today i hope
[02:17] <sinzui> wallyworld_, excellent. Testing will be quick since it is limited to two providers
[02:18] <wallyworld_> i'm doing a local test now
[02:18] <sinzui> Did I mention abentley got rudimentary CI running on HP. we can run about 200 upgrade and bootstrap tests a day
[02:18] <wallyworld_> ran into the precise mongo thing i think
[02:18] <wallyworld_> trying on saucy
[02:23] <thumper> wallyworld_: are you chasing the azure issue?
[02:23] <thumper> wallyworld_: is there a bug?
[02:24] <wallyworld_> yes. the issue is with msft, not us
[02:24] <wallyworld_> we need to change gwacl cause they broke the api
[02:24] <wallyworld_> i'm testing now
[02:24] <thumper> :)
[02:24] <thumper> wallyworld_: can you target the bug to 1.16.2?
[02:24] <wallyworld_> but azure is sloooooow
[02:25] <wallyworld_> thumper: sure. there's 2 of them - a public and a private
[02:25]  * thumper nods
[02:26] <thumper> bigjools: lp:1246556 for the juju maas bug
[02:26] <thumper> which it seems mup can't understand, bug 1246556
[02:26] <_mup_> Bug #1246556: lxc containers broken with maas <api> <maas-provider> <juju-core:Triaged by thumper> <https://launchpad.net/bugs/1246556>
[02:41]  * thumper moved some code around, all tests still pass
[02:42] <thumper> I find this mildly disturbing
[03:06] <wallyworld_> bigjools: not sure how many azure instances i have running - i think 2. a bootstrap using saucy still doesn't allow juju status to connect sadly
[03:07] <wallyworld_> thumper: i have 2 small azure mp's for the 1.16.2 release if you have a moment sometime https://code.launchpad.net/~wallyworld/gwacl/azure-management-api-change https://code.launchpad.net/~wallyworld/gwacl/fix-request-eof-2/+merge/193346
[03:10]  * thumper is in deep thought
[03:10] <thumper> sorry
[03:11] <wallyworld_> np
[03:11] <wallyworld_> bigjools: what is the price for 2 small gwacl reviews?
[03:12] <wallyworld_> needed for juju 1.16.2 release tomorrow
[03:28] <thumper> bigjools: bugger...
[03:28] <thumper> bigjools: it seems like the quick fix won't work very well at all
[03:28] <thumper> may have to fix it the right way
[03:29] <bigjools> wallyworld_: I can look for you, after all you came bearing gifts yesterday
[03:29] <wallyworld_> indeed i did
[03:29] <wallyworld_> they are only small
[03:29] <wallyworld_> branches
[03:29] <bigjools> thumper: great architecture FTW
[03:30] <bigjools> 466 lines
[03:30] <bigjools> is small in juju world?
[03:30] <wallyworld_> bigjools: sadly i have a saucy node running now which also disallows connections to mongo
[03:30] <bigjools> wallyworld_: is the firewall endpoint open?
[03:30] <wallyworld_> not sure. what port is that do you know?
[03:31] <wallyworld_> i thought you said before 37017 was open?
[03:31] <wallyworld_> which is the port the client is using to contact mongo
[03:32] <bigjools> ok
[03:32] <bigjools> wallyworld_: hang on I need to make call first
[03:32] <wallyworld_> ok
[03:49] <bigjools> wallyworld_: first comment - please remove the factored version numbers
[03:49] <wallyworld_> why?
[03:50] <bigjools> I previously had someone stop doing that - each api request is separately versioned and if you factor it you will introduce subtle bugs
[03:50] <wallyworld_> it kinda sucks having them all copied and pasted in the code
[03:50] <bigjools> yes - but it needs to be done
[03:50] <wallyworld_> ok then
[03:50] <bigjools> I rooted out at least 3 bugs last time I unfactored it
[03:50] <wallyworld_> sigh
[03:51] <bigjools> I know
[03:51] <wallyworld_> how the fook do we keep track of all the apis
[03:51] <wallyworld_> sounds like a nightmare
[03:52] <bigjools> wallyworld_: fwiw the version should be associated with the request struct ideally
[03:52] <bigjools> they change versions when the format changes
[03:52] <bigjools> so they are very closely tied
[03:52] <wallyworld_> ok. for now though, i'll just go back to how it was
[03:53] <wallyworld_> and do a search and replace of string
[03:53] <bigjools> wallyworld_: for example if you update the refactored version you would change all the calls at once, and introduce potentially subtle semantic changes
[03:53] <wallyworld_> i thought they would all change together
[03:54] <wallyworld_> as a group
[03:54] <bigjools> IME they dont
[03:54] <wallyworld_> :-(
[03:54] <bigjools> it might say they do....
[03:54] <bigjools> but I call BS
[03:54] <wallyworld_> fair enough. it is msft after all
[03:56] <bigjools> you might want to mention this in the code
[04:00] <wallyworld_> ok
[04:02] <thumper> arg
[04:02] <thumper> stabby stabby
[04:02]  * thumper is trying to unpick something
[04:02] <thumper> but others are sewing things tighter together
[04:04] <bigjools> wallyworld_: I bailed on the other one
[04:04] <wallyworld_> bigjools: i wanted to avoid a random custom data so i could hard code the expected response
[04:04] <bigjools> wallyworld_: why?
[04:04] <wallyworld_> same reason i like to use strings in tests
[04:04] <bigjools> the test demonstrates that it copes with random input
[04:05] <bigjools> you should always use random data where the data itself doesn't matter
[04:05] <wallyworld_> using the same api call in a test though as the logic sort of negates the test
[04:05] <bigjools> otherwise it says to the test reader that you're crafting a particular scenario
[04:05] <bigjools> feel free to ignore it, you have my +1 already
[04:06] <wallyworld_> ok, i'll think about it
[04:06] <wallyworld_> thanks
[04:06] <wallyworld_> why bail on other one?
[04:06] <bigjools> because it's deep Go innards that I know nothing about
[04:06] <wallyworld_> ok
[04:06] <bigjools> you're the Go expert :)
[04:06] <wallyworld_> i used same technique as in goose etc
[04:06] <wallyworld_> you reckon?
[04:07] <bigjools> self-approve?
[04:07] <wallyworld_> might do
[04:07] <bigjools> do you need all you instances on my azure account?
[04:07] <bigjools> your
[04:08] <wallyworld_> bigjools: the saucy one would be good. but i need to be able to ssh in to see the cloud init log
[04:08] <bigjools> you need a custom image with the backdoor
[04:09] <wallyworld_> ugh ok
[04:09] <bigjools> you have to build one and upload it to private storage and tell the api to use it when booting
[04:09] <bigjools> I forgot how though :(
[04:09] <wallyworld_> i never knew how
[04:09] <thumper> jam3: ping
[04:09] <wallyworld_> in the first place
[04:10] <bigjools> there's a config item for a custom kernel
[04:10] <bigjools> image I mean
[04:10] <jam> thumper: pong
[04:10] <thumper> jam: hangout? I have a critical bug to discuss
[04:10] <wallyworld_> bigjools: and you can't ssh in at all and grab the cloud init log?
[04:11] <jam> thumper: sure, I have to head out in about 20min
[04:11] <thumper> jam: ack
[04:11] <thumper> jam: https://plus.google.com/hangouts/_/72cpjufs8mm0i0htkjiuch72s8?hl=en
[04:12] <jam> I'm getting "call ended because of server error" give me a sec
[04:13]  * thumper nods
[04:13] <jam> thumper: https://plus.google.com/hangouts/_/72cpjqbfsmf42rpnbt2o36cgk0
[04:13] <jam> see if this one works
[04:13] <jam> If I follow your link, I got the "new" hangout layout
[04:14] <jam> thumper: I saw you show up for 1 second
[04:14] <thumper> jam: I have new, you have old
[04:14] <thumper> they don't like each other
[04:14] <thumper> server error
[04:14] <thumper> skype?
[04:14] <jam> thumper: know of any way to trigger the right one?
[04:14] <thumper> mumble
[04:14] <thumper> I don't care
[04:14] <jam> thumper: I'm on skype, and you're in my friends list
[04:14] <jam> but not showing as online
[04:14]  * thumper starts skype
[04:16] <thumper> jam: https://bugs.launchpad.net/juju-core/+bug/1246556
[04:16] <_mup_> Bug #1246556: lxc containers broken with maas <api> <maas-provider> <juju-core:Triaged by thumper> <https://launchpad.net/bugs/1246556>
[04:16] <bigjools> wallyworld_: no, unless cloud-init finishes putting the ssh key in, you're buggered
[04:16] <wallyworld_> \o/
[04:16] <wallyworld_> maybe it should do that first up if it doesn't already
[04:16] <bigjools> wallyworld_: hence you need the backdoor.  Having said that - there is an api field for adding a user and p/w
[04:17] <bigjools> but we don't use it as it's mental
[04:17] <wallyworld_> got a hint where i look save me searching?
[04:17] <wallyworld_> i might do it to debug
[04:17] <bigjools> hang on
[04:18] <bigjools> wallyworld_: in LinuxProvisioningConfiguration you set a Username and Password.  It's in the same place Customdata goes
[04:18] <wallyworld_> ok, ta
[04:22] <bigjools> wallyworld_: line 545 in provider/azure/environ.go
[04:22] <wallyworld_> great
[04:22] <bigjools> it was a mandatory field so we randomised it
[04:22] <bigjools> yay security
[04:22] <wallyworld_> lol serious? mandatory?
[04:22] <wallyworld_> wtf
[04:23] <bigjools> this is msft
[04:23]  * wallyworld_ sighs *again*
[04:40] <thumper> jam:  lp:~thumper/juju-core/maas-lxc
[05:40] <sodre> juju-devs: is anyone still up ?
[05:41] <wallyworld_> sodre: some of us are here
[05:41] <sodre> hi wallyworld_: I traced the reason for that earlier panic on juju bootstrap
[05:41] <wallyworld_> oh cool
[05:41] <wallyworld_> admin-secret issue i think?
[05:42] <sodre> I've updated the bug report on launchpad. But I was wondering if you could explain to me something related to it.
[05:42] <sodre> yeah, it had nothing to do with the admin-secret.
[05:42] <wallyworld_> i'll read the bug
[05:42] <sodre> The issue is that jujud tries to get the provider-state through swift without authentication.
[05:44] <wallyworld_> it should use creds when it tries to get it
[05:44] <wallyworld_> i think from memory
[05:44] <wallyworld_> let me check
[05:45] <sodre> from the client side, (juju) connects to swift using goose.
[05:45] <sodre> but the bootstrap level seems to use (Go 1.1 package http)
[05:46] <jam> sodre, wallyworld_: provider-state needs to be in a container that is world readable because all of the clients download stuff from the container via wget/direct http access
[05:46] <jam> when we set up the bucket ourselves, we set the .r:.rlistings *
[05:47] <sodre> for somereason that is not being set on on my version
[05:47] <jam> sodre: you're on Havana ?
[05:47] <sodre> yes
[05:47] <wallyworld_> jam: not necessarily the container itself but the file
[05:47] <jam> wallyworld_: we need the container, we put all the charm data, tool data, etc in there
[05:47] <wallyworld_> i think
[05:47] <jam> and all needs to be downloadable from plain wget
[05:48] <jam> we don't need it on EC2 because we can generate Signed URLs, but we don't have support for that on Openstack
[05:48] <wallyworld_> and doesn't wget work so long as the file itself is readable?
[05:48] <sodre> right now, my way around it has been to put tools in the juju-dist and set that world readable.
[05:49] <sodre> but then this provider-state issue came up.
[05:50] <jam> wallyworld_: as in, we put a bunch of stuff in the container, and *each* needs to be world readable. It is a fair point that you could just have each file have the ACL, but we need all of them anyway
[05:50] <wallyworld_> jam: sure. but now in trunk, public storage instances are gone from the providers
[05:50] <wallyworld_> azure, maas never had them anyway
[05:50] <jam> wallyworld_: "private" storage *must* be public for Openstack
[05:51] <jam> wallyworld_: so if your patch which got rid of the PublicStorage api also removed setting .r:.rlistings for the other Storage, then you broke openstack deployments
[05:51] <wallyworld_> i didn't tinker with the other Storage to my knowledge
[05:52] <jam> wallyworld_: we *don't* want to give our Provider credentials to the agents running on non machine-0, so they have to have credentials-free access to download the charm blobs from storage
[05:53] <jam> wallyworld_: "rlisting" doesn't exist in the source tree
[05:53] <wallyworld_> jam: right. so i just checked, the "private" storage on openstack does have rlistings perms
[05:53] <jam> unless that is in goose
[05:53] <wallyworld_> jam: it's a const in goose
[05:53] <jam> wallyworld_: k
[05:53] <wallyworld_> swift.PublicRead
[05:53] <wallyworld_> see line 514 of provider.go
[05:54] <wallyworld_> in openstack package
[05:54] <jam> wallyworld_: yep
[05:54] <jam> I found it
[05:54] <jam> sodre: can you "swift stat" your container to see what the ACL is ?
[05:54]  * wallyworld_ has to go get kid from school
[05:55] <sodre> I need to bootstrap again, because I've set it by hand.
[05:55] <sodre> give me one sec.
[05:56] <sodre> also, I'm on 1.16, not trunk. fyi
[05:58] <sodre> .r:*
[05:58] <sodre> humm... strange....
[05:59] <sodre> i need a new bucket name... one sec.
[06:00] <sodre> jenvs from hell...
[06:03] <sodre> the read acl is empty http://paste.ubuntu.com/6334137/
[06:21] <sodre> jam: any ideas if that can be fixed in 1.16.1 ? or 1.17 ?
[07:00] <wallyworld_> sodre: did you create that control bucket by hand?
[07:00] <sodre> yes
[07:00] <sodre> I got it to work as follows
[07:01] <wallyworld_> sodre: because juju will not set read acl if it exists already
[07:01] <sodre> I first swift post juju-control-bucket --read-acl .r:*
[07:01] <sodre> no... no..
[07:01] <sodre> if the bucket does not exist, it is still not setting it to .r:*
[07:02] <wallyworld_> really? the code in that area hasn't changed. it uses PublicRead = ACL(".r:*,.rlistings")
[07:02] <sodre> maybe it is the , .rlistings ?
[07:02] <sodre> because it is not setting it to read mode right now
[07:03] <sodre> ... it could be a radosgw issue again..
[07:03] <wallyworld_> could be a havana thing maybe?
[07:03] <sodre> let me try the following.
[07:04] <sodre> I have a bucket right now with read ACL set to empty.
[07:05] <sodre> swift post pet.sodre.juju-control-02 --read-acl .rlistings
[07:05] <sodre> the read acl is still empty.
[07:05] <sodre> should it have changed ?
[07:05] <wallyworld_> um. not sure. i don't use the swift tool for that sort of thing too often
[07:05] <wallyworld_> cause juju does it for me :-)
[07:06] <sodre> are you using radosgw as well ?
[07:06] <wallyworld_> i've tested juju on hpcloud and our own internal openstack deployment using folsom and grizzly
[07:07] <sodre> two variables: It could be Havana, or it could be RadosGW
[07:08] <wallyworld_> i'm testing now on our openstack
[07:08] <wallyworld_> sodre: so a swift post does set the read acl for me
[07:09] <wallyworld_> ie it is non empty after a swift post --read-acl ....
[07:09] <sodre> can you paste the readacl line for me.
[07:09] <wallyworld_> sure
[07:09] <wallyworld_>  Read ACL: .rlistings
[07:10] <sodre> humm..
[07:10] <sodre> strange!
[07:10] <wallyworld_> and i think this is on havana
[07:11] <sodre> what is .rlistings for anyways ?
[07:11] <wallyworld_> it allows unauthed clients to list the container
[07:11] <wallyworld_> the contents thereof
[07:12] <sodre> I see.
[07:13] <sodre> It seems that Rados does not support rlistings
[07:13] <sodre> shouldn't a plain .r:* suffice ?
[07:13] <wallyworld_> well that sucks if true
[07:13] <wallyworld_> don't think so - certain operations do need to list the files in a container
[07:14] <wallyworld_> but i can't tell you the specifics off the top of my head
[07:14] <sodre> http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/Better_Swift_Compatability_for_Radosgw
[07:14] <wallyworld_> ah
[07:14] <wallyworld_> well, that sorta explains it :-(
[07:15] <wallyworld_> it does say something has been done for .rlistings
[07:17] <sodre> there was a patch in the wild, but it seems they did not want to apply it.
[07:18] <sodre> according to http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg13829.html
[07:19] <sodre> all that is needed is to set a READ permission to  see it via S3
[07:19] <sodre> I imagine unauthenticated access.
[07:40] <sodre> wallyworld_: I've updated the bug report one more time.... what is the easiest way for me to modify the juju-core code?
[07:40] <wallyworld_> what do you want to change?
[07:40] <sodre> I would like to see why it is not honoring the .r:*
[07:40] <sodre> when working with RadosGW
[07:41] <wallyworld_> right now, you need to use bzr to get a copy of the source code off launchpad
[07:41] <wallyworld_> go get works too
[07:41] <wallyworld_> have you ever done that?
[07:42] <sodre> yeah, I did it quickly today. I am not sure how to ``debug'' the code, ie.. gdb did not really help
[07:42] <sodre> is there a way to step through, or are debug statements the standard ?
[07:43] <wallyworld_> lol
[07:43] <wallyworld_> Go has very limited tooling support for debugging
[07:43] <sodre> i.c.
[07:43] <wallyworld_> in my experience
[07:43] <wallyworld_> i think you can use gdb
[07:43] <wallyworld_> but i have not done that myself
[07:44] <wallyworld_> i'm used to modern IDEs
[07:44] <sodre> sure I can use that as well, will it work in eclipse ?
[07:44] <wallyworld_> perhaps. I use Intellij where there is a decent Go plugin but it does lack debug support sadly
[07:44] <sodre> juju-core will be the first "Go" code I ever read
[07:45] <wallyworld_> it's similar syntax to C - not too hard to read
[07:45] <sodre> alright... so I don't spend too much time digging... in which file should I be looking for these rlistings ?
[07:46] <wallyworld_> for openstack, there's two main files of interest....
[07:47] <wallyworld_> provider/openstack/provider.go <--- this one sets up the storage instance for an openstack environment
[07:47] <wallyworld_> it creates it with the ACL I pasted above
[07:47] <wallyworld_> and provider/openstack/storage.go <--- this one creates the container (if required) and reads/writes/lists files therein
[07:48] <wallyworld_> your main interest will be in storage.go
[07:48] <sodre> got it
[07:48] <wallyworld_> the Storage interface is quite simple - it has List(), Put(), Get() etc methods
[07:49] <wallyworld_> i normally just use lots of fmt.Println() statements to debug
[07:49] <wallyworld_> there's also debug logging
[07:49] <wallyworld_> if you run a command like bootstrap with --debug, it will print useful info also
[07:50] <sodre> yeah, --debug is my friend :)
[07:51] <sodre> it seems the code went to goose
[07:51] <sodre> i'll spend some time with it.
[07:51] <sodre> thanks a lot man!
[07:58] <wallyworld_> np. do ask if you have more questions
[07:58] <wallyworld_> yes, goose is the lib used by juju to talk to openstack
[07:59] <sodre> thanks. I should probably go to sleep ... its 4am here
[07:59] <sodre> the good thing is that there is a work around
[07:59] <wallyworld_> wow. yes. go to bed!
[07:59] <wallyworld_> where are you?
[08:00] <sodre> tz= EST
[08:00] <sodre> I live in the DC area
[08:01] <wallyworld_> ah ok. i'm in australia
[08:01] <wallyworld_> so almost drinking time here :-)
[08:01] <sodre> nice :)
[08:02] <wallyworld_> thanks for all your hard work!
[08:02] <sodre> I'm trying to deploy OpenStack at work
[08:02] <sodre> I need to show them a demo ...
[08:03] <sodre> so... here is an issue
[08:03] <sodre> I am running my compiled version of juju
[08:03] <sodre> how do I bootstrap my system ?
[08:03] <sodre> it can't find 1.17
[08:04] <wallyworld_> you need to use --upload-tools
[08:04] <wallyworld_> if you are running from source
[08:04] <sodre> tsk tsk tsk... IU need to sleep
[08:04] <sodre> :)
[08:04] <sodre> s/IU/Im
[08:04] <wallyworld_> yeah tomorrow!
[08:05] <sodre> btw...
[08:06] <sodre> 2013-10-31 08:05:44 INFO juju.environs.sync sync.go:103 listing target bucket
[08:06] <sodre> 2013-10-31 08:05:44 DEBUG juju.environs.tools storage.go:35 reading v1.* tools
[08:06] <sodre> 2013-10-31 08:05:45 ERROR juju supercommand.go:282 failed to list contents of container: pet.sodre.juju-control-02
[08:06] <sodre> issue *listing contents*
[08:07] <wallyworld_> right, that is consistent with not having rlisting permissions
[08:07] <wallyworld_> juju needs to list the tools available to find the correct ones to use
[08:08] <sodre> this is a bad bug in RadosGW
[08:08] <sodre> alright.. ttyl
[08:10] <rogpeppe> mornin' all
[08:11] <rvba> Hi wallyworld_, did you figure out what the issue with the Azure provider was?
[08:11] <wallyworld_> rvba: hi, yes. msft screwed us over
[08:11] <rvba> wallyworld_: nice :/
[08:12] <wallyworld_> they changed the api in an incompatible way
[08:12] <rvba> Great.
[08:12] <wallyworld_> tl;dr; we needed to change the apiversion passed to various management calls
[08:12] <rvba> So much for having API versions and all that jazz.
[08:12] <wallyworld_> yeah
[08:13] <wallyworld_> and we also needed to tweak the run utility to base64 encode the userdata
[08:13] <rvba> All right, thanks for the update.
[08:16] <wallyworld_> i'm landing some stuff now. thanks for the input yesterday
[08:16] <wallyworld_> just had trick or tweaters at my door
[08:16] <wallyworld_> treaters lol
[08:57] <mgz> mornin'
[09:33]  * thumper hopes jam can run the meeting tonight as he is hacking now he's back
[09:37] <natefinch> morning all
[10:03] <jam> fwereade: welcome back
[10:05] <jam> team meeting: https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.09gvki7lhmlucq76s2d0lns804
[10:05] <jam> fwereade_: ^^
[10:05] <jam> axw: ^^
[10:05] <fwereade_> jam, I am trying
[10:05] <jam> sure, we've been having trouble with new vs old G+ interfaces
[10:08] <axw> my machine froze, joining now
[10:09] <fwereade_> jam, it's weird, it is literally *just* the hangout that's not working
[10:11] <jam> fwereade_: extra joy in that it was working with just you and I until more people joined
[10:12] <fwereade_> jam, yeah, indeed
[10:12] <fwereade_> jam, there's a crazy storm out here but I doubt its effect is selective enough to be causing this
[10:12] <jam> fwereade_: is dimiter around today?
[10:13] <fwereade_> jam, he is, but he was out earlier; he expected to make it back for the meeting though
[10:52] <rogpeppe> damn, did i miss the team meeting today?
[10:52] <rogpeppe> jam: sorry about that, i totally forgot
[10:53] <natefinch> rogpeppe: yeah but you didn't miss much
[10:53] <rogpeppe> jam: i'd have joined if you'd pinged me...
[10:53] <rogpeppe> natefinch: ok, that's good
[10:53] <natefinch> rogpeppe: thumper was just talking about how much he loves having loggo on github
[10:53] <rogpeppe> natefinch: ironically?
[10:54] <thumper> ?!?
[10:54] <rogpeppe> thumper: you really *do* love having loggo on github?
[10:54] <thumper> heh
[10:54]  * thumper goes back to work
[10:54] <natefinch> rogpeppe: I wasn't being serious :)
[10:55] <rogpeppe> natefinch: it sounded like you weren't - just checking
[10:55] <natefinch> rogpeppe: actually I think the only real complaint was having his username in the import url.  He says it feels too unprofessional
[10:55] <rogpeppe> natefinch: yeah, that's actually part of why i chose launchpad for some projects
[10:55] <rogpeppe> natefinch: the user name doesn't seem like that important a part of the project
[10:57] <natefinch> rogpeppe: yeah, I wish there was a different way to do it... you can do something like a project team and have the project under the team.
[10:57] <rogpeppe> natefinch: yeah
[11:17] <thumper> can you have a team of one?
[11:17] <sinzui> yes
[11:18] <sinzui> You can have a team of 0 in Lp
[11:18] <thumper> sinzui: I was referring to github
[11:19] <thumper> jam: I think I may have a complete fix with tests
[11:19] <natefinch> thumper: yes, you can have a team of 1 on github
[11:19] <thumper> natefinch: so I could create an org called loggo?
[11:19] <thumper> is it worth it?
[11:21] <sinzui> thumper, github is similar to Lp, no one is in the team when it is created, you add members.
[11:21] <natefinch> thumper: up to you... you'd have github.com/loggo/loggo
[11:21] <thumper> meh
[11:22] <thumper> except they call it an organisation
[11:22] <mgz> make the team ~loggo :)
[11:22] <thumper> heh
[11:22] <mgz> ACCEPT THE TILDE
[11:22] <thumper> probably an invalid character
[11:23] <natefinch> yeah it's just alphanumeric and dashes
[11:24]  * thumper twiddles fingers while the local lxc downloads and the amazon one spins up
[11:24]  * thumper proposes while he waits
[11:27] <thumper> ok... local provider still works with my change
[11:27]  * thumper waits for ec2
[11:27] <natefinch> why working so late, thumper?\
[11:27] <thumper> natefinch: critical bug around lxc and maas
[11:27] <thumper> natefinch: they want it for the ODS demo
[11:28] <thumper> and sinzui is waiting on it to cut 1.16.2
[11:28] <natefinch> thumper: ahh, bummer
[11:28] <thumper> sinzui: care to land a branch that increments the version number on lp:juju-core/1.16
[11:29] <sinzui> thumper, oops, I will do that, thank your for reminding me
[11:30] <natefinch> sinzui: please make sure you increment scripts/win-installer/setup.iss as well.  It needs to get updated for the windows installer.
[11:30] <natefinch> (it's just a text file, the version number is at the top, you'll see it)
[11:31] <sinzui> natefinch, oh. I've got no notes on that. Have you been doing it each time you run it?
[11:31] <natefinch> sinzui: yeah, I've just been modifying it when I create the installer, but obviously that's not a good habit to keep up
[11:31] <natefinch> sinzui: I'm considering just making script that'll update all the right spots.
[11:36] <thumper> \o/
[11:36] <thumper> works on ec2
[11:36] <thumper> have ubuntu deployed into 1/lxc/0
[11:42] <thumper> jam, fwereade_: https://codereview.appspot.com/20220043/
[11:42] <thumper> and dimitern if he wants
[11:42] <thumper> this is to fix lxc on maas for the demo
[11:42] <thumper> makes it so we don't get an environ config for lxc provisioners
[11:43] <dimitern> thumper, looking
[11:43] <thumper> local provider works
[11:43] <thumper> and ec2 was able to provision a container
[11:43] <thumper> and deploy the ubuntu charm into it
[11:44] <thumper> so we know it at least works on something
[11:44] <sinzui> I wish I knew how to make DeployLocalSuite.TestDeployNumUnits pass on my computer
[11:44] <thumper> I really want to hand this off as it is almost 1am
[11:50] <dimitern> thumper, reviewed
[11:52] <thumper> dimitern: ta
[11:53] <thumper> sinzui: about to land my 1.16 branch
[11:53] <thumper> sinzui: wallyworld_ landed the azure fixes already
[11:53] <sinzui> I saw
[11:53] <thumper> sinzui: once mine is in, you should be good for 1.16.2
[11:53] <sinzui> rock
[11:53] <sinzui> Maybe I won't pursue 1.16.1. It has not built yet
[11:54] <thumper> sinzui: however I'm about to go to bed
[11:54] <thumper> hopefully all lands ok
[11:54] <thumper> how long does it normally take?
[11:54] <thumper> and who can see if it is in progress?
[11:58] <sinzui> thumper, jamespage has the juju release scripted. He builds the new package and sends it off to Ubuntu and the PPA builders in one go.
[11:58] <sinzui> He emails me when he has run the script
[11:58] <thumper> sinzui: I was really referring to the landing bot
[11:58] <sinzui> 15 minutes
[11:58] <thumper> I think I'll wait just to be sure
[12:00] <sinzui> natefinch, Is this right https://codereview.appspot.com/20230043 ?
[12:04] <natefinch> sinzui: great.  LGTM'd
[12:42] <sinzui> jamespage, are you about?
[14:12] <natefinch> fwereade_: do we have a list of the client commands that need to be API-ified?  I figured I'd knock one or two out
[14:40] <fwereade_> natefinch, we have a list of the ones that don't, and there are 2 of them
[14:41] <natefinch> fwereade_: haha ok
[14:42] <natefinch> fwereade_: what uses the API now, so I can see how it should be set up?
[14:42] <natefinch> fwereade_: nevermind, looks like add unit does  (I thought I remembered that one did)
[14:43] <fwereade_> natefinch, I think "get" does too
[14:44] <natefinch> fwereade_: figured I'd start on status, unless you have another suggestion
[14:44] <fwereade_> natefinch, yeah, let me chat to you about it after meeting
[14:45] <natefinch> fwereade_: sure.  I'm going to be AFK in about a half hour (for about an hour)
[15:03] <jam> natefinch: status is going to be much bigger than some of the other ones, and might involve some checking to make sure we are getting the passwords set on first connect, etc.
[15:03] <jam> something like "set" would be much easier if you want to get your feet wet
[15:10] <natefinch> jam: that's fine, I can do set to start :)
[15:27] <mattyw> the env vars that get set in a running hook - like: CHARM_DIR and JUJU_UNIT_NAME - where abouts are they actually set in the code?
[15:36] <fwereade_> mattyw, uniter/context.go
[15:36] <mattyw> fwereade_, cool thanks, think I just found it
[15:52] <rogpeppe> fwereade_: do you know any way of configuring the local provider so that it doesn't download a whole precise image when a precise charm is started?
[15:53] <rogpeppe> fwereade_: (at least, i *think* that's what it's taking ages to do - i'm not sure if there's any way of telling at all)
[15:55] <fwereade_> rogpeppe, I don't think we grab the whole thing every time
[15:56] <rogpeppe> fwereade_: i'm not sure how to tell what it's doing
[15:56] <rogpeppe> fwereade_: will we always grab the whole thing in a newly bootstrapped local env?
[15:57] <fwereade_> rogpeppe, I believe that is currently the case, but I have not looked at the code in question myself
[15:57] <rogpeppe> fwereade_: ah, sorry, i thought you were the one that reviewed it
[16:00] <rogpeppe> fwereade_: so no way to avoid that first download then. hmm.
[16:05] <rogpeppe> fwereade_: another small question: is there a good place for a charm to stash local state that *doesn't* get managed under git? for example, a unix-domain socket or something that should not be rolled back on failure?
[16:06] <fwereade_> rogpeppe, there's no canonical location, no
[16:09] <rogpeppe> fwereade_: i guess $CHARM_DIR/../charm-state might work ok, although it's not great.
[16:09] <fwereade_> rogpeppe, eww? name it after the charm outside the juju data-dir I would think?
[16:09] <rogpeppe> fwereade_: or just make up a name in /var/lib which includes the env uuid and the unit name
[16:10] <fwereade_> rogpeppe, +1
[16:13] <mramm> hey all: thumper's blog post about his logging library for Go is now up on hacker news: https://news.ycombinator.com/item?id=6643805 please upvote it if you think it is interesting, and feel free to comment on hacker news if you think you have something to add.
[16:36] <natefinch> mramm: if I could get to hacker news, I'd upvote it
[16:41] <adam_g> what is the proper way to upgrade an environment using a custom built juju and jujud binary pair? someone showed me last week but i've forgotten
[16:47] <mgz> adam_g: the magic is simply to have the juju you run to do the upgrade have the matched jujud as a sibling in the directory
[16:47] <mgz> this is very magic, but hey, go go go
[16:47] <adam_g> thats what i thought
[16:48] <adam_g> adding the --upload-tools flag seems to skip going to s3
[16:52] <mramm> natefinch: problems with hackernews? thumper had them yesterday too. :(
[16:52] <natefinch> mramm: looking online, it's a DNS propagation issue
[17:06] <sodre_zzz>     juju-devs: has anyone seen this type of error with Juju & RadosGW
[17:06] <sodre_zzz> caused by: failed unmarshaling the response body: bootstrap-verify
[17:07] <mgz> sodre_zzz: can you manually fetch that object from the provider storage and see what it contains?
[17:07] <mgz> ...what provider even is that?
[17:07] <sodre> OpenStack, but Swift is served by ceph-radosgw
[17:08] <mgz> so, `swift list` to get CONTAINER, then `swift get CONTAINER bootstrap-verify`
[17:10] <sodre> alright... one sec.
[17:13] <hazmat> wallyworld_, wasn't there some facility you mentioned about auto upload-tools?
[17:13] <sodre> ahhh.. the error is different!
[17:14] <sodre> mgz: it says  juju was unable to list the contents of the container.
[17:14] <mgz> sodre: looking in ~/.juju/environments/ENV.jenv where ENV is the name of your environment will also tell you if you find the 'control-bucket' key at the bottom
[17:14] <sodre> Let me pastebinit
[17:15] <sodre> wasn't there a but where juju is expecting to see a .json object when listing contents in swift?
[17:15] <sodre> s/but/bug/
[17:15] <mgz> sodre: we might not be parsing the error response correctly
[17:15] <mgz> so, you're seeing a follow-on error rather than the underlying cause
[17:15] <mgz> sodre: can you file a bug against juju-core for this please?
[17:16] <sodre> will do, but can you take a look at the pastebin first ?
[17:16] <mgz> I need to transfer back home now, so would like to pick up later
[17:16] <mgz> sodre: sure, fast :)
[17:17] <sodre> http://paste.ubuntu.com/6336731/
[17:17] <sodre> this is with Havana, fyi.
[17:19] <mgz> sodre: yeah, looks like we're just expecting json and getting plain text
[17:20] <sodre> yeap, the error is in goose
[17:20] <mgz> there's a bug for this (or something very related) against goose already
[17:20] <sodre> yeap, that's what I thought.
[17:20] <mgz> if you can find that and add more details, that would be ace
[17:20] <sodre> i'll search for that.
[17:20] <sodre> thanks mgz
[17:20] <mgz> I've already got that bug on my list to look at
[17:32] <fwereade_> bbl
[17:44] <sodre> jam: are you around >?
[17:44] <sodre> I have quick goose patch/bug
[17:47] <natefinch> sodre: it's pretty late for Jam, almost 10pm.  Anything that I could help you with?
[17:53] <sodre> sure
[17:53] <sodre> the current version of goose/swift.go
[17:54] <sodre> it requests a list of entries in a container but forgets to set the format to json
[17:55] <sodre> the patch is very short,  http://paste.ubuntu.com/6336923/
[17:55] <sodre> natefinch, what do you think ?
[17:57] <natefinch> sodre: looks good, except you need to gofmt your source.  You have two spaces where the standard format is a tab.
[17:57] <sodre> got it, how is that done ? I am new to go
[17:58] <natefinch> go fmt <filename>  it'll rewrite the file
[17:59] <sodre> http://paste.ubuntu.com/6336944/, where should I send this to?
[18:00] <natefinch> so, in theory, you should branch lp:goose, make your change, and then propose the branch for merging to the lp:goose trunk
[18:00] <sodre> got it
[18:01] <natefinch> sodre: if you have credentials etc on launchpad, it's not too hard
[18:01] <sodre> I am new to Opensource collaboration...
[18:01] <sodre> I have an account on lp. but I've never contributed anything other than bug reports.
[18:02] <sodre> can you quickly walk me through it or point me to docs with the steps.
[18:03] <natefinch> sodre: I'm pretty new to the launchpad process too.  Let me see if I can find the steps somewhere
[18:03] <sodre> okay.
[18:07] <natefinch> sodre: ok, so presuming you started off by doing a bzr branch lp:goose to get your local copy.... you can do this to commit and propose your change (from the root directory of goose on your local drive):
[18:07] <natefinch> bzr commit -m 'commit comment'
[18:07] <natefinch> bzr push lp:~sodre/goose/<branchname>
[18:07] <natefinch> bzr lp-propose lp:goose
[18:09] <sodre> cool
[18:09] <sodre> I'm filing the bug report as well.
[18:09] <natefinch> awesome
[18:10] <natefinch> sodre: thanks for the bug and for the fix.  It's really a huge help even for little fixes like this.
[18:10] <natefinch> brb
[18:10] <sodre> np. I am just trying to get OS running here at work.
[18:32] <natefinch> TIL: when it looks like everyone else on freenode has quit at the same time... probably it's you that has actually quit
[18:34] <natefinch> sodre: I missed anything you might have said after you said you were just trying to get OS to work.
[18:35] <sodre> I didn't say anything :)
[18:35] <sodre> I just came across the bug because I need to get OpenStack running where I work.
[18:53] <sinzui> natefinch, will you have time today to create a juju 1.16.2 windows package and file an RT to get it signed?
[19:01] <natefinch> sinzui: sure will.   Hopefully this time the RT will actually be acted upon
[19:02] <sinzui> natefinch, send me the rt number and I will track it. I am happy for a delay since I never know when the builders will complete
[19:02] <natefinch> sinzui: ok
[19:17] <natefinch> sinzui: you probably will get an email about it, but here's the link: https://rt.admin.canonical.com/Ticket/Display.html?id=65618
[19:18] <sinzui> thank you natefinch. I will add that to my task list for 1.16.2
[19:19] <natefinch> sinzui: no, thank you.  I just want to make sure the windows installer stays available at the same version as the other platforms
[19:20] <sinzui> This one has to get to the users since they are the group that most likely will use juju on azure.
[19:43] <natefinch> morning thumper
[19:43] <thumper> morning natefinch
[19:44]  * thumper feels a little knackered
[19:44] <natefinch> seems like you were just here :)
[19:44] <thumper> :)
[19:44] <thumper> yeah
[19:44] <thumper> I know that feeling
[19:44] <thumper> however I also have three kids still in bed
[19:44] <thumper> instead of at school
[19:44] <thumper> the event we went to last night was a lot later than I expected
[19:44] <thumper> didn't get home until about 22:30
[19:44] <natefinch> thumper: doh
[19:44] <thumper> which is way late for the youngest two
[19:45] <thumper> if I knew it was going to be that late, I would have organised a babysitter
[19:45]  * thumper shrugs
[19:45] <thumper> oh well
[19:45] <natefinch> how old are your kids?
[19:45] <thumper> 8, 10 and 12
[19:45] <thumper> it was a "world of wearable arts" show for the eldest
[19:45] <natefinch> sounds like fun
[19:46] <thumper> it was quite good, but went on and on
[19:46] <thumper> you can really tell which outfits the kids did themselves, and which ones the parents helped on
[19:46] <thumper> :)
[19:46] <natefinch> haha yeah....
[19:47] <jam> thumper: I had some comments on your proposal, they aren't critical but it might be good to get them in if sinzui is going to be doing a 1.16.2 soon
[19:47] <sinzui> jam, the tarball was sent
[19:48] <jam> well then, I guess it doesn't matter :)
[19:48] <sinzui> I might be convinced to make another if there are zero downloads of it
[19:48] <jam> sinzui: I'm not a big fan of reusing numbers regardless
[19:49] <thumper> jam: I may not worry too much for the 1.16 branch, but may take them into account on the move to trunk
[19:49] <sinzui> me neither. This is a case where the one packager we are delivering to is traveling
[19:49] <thumper> sinzui: how dare he?
[19:49] <thumper> travel when we need him
[19:49] <thumper> geez
[19:50] <sinzui> SPoF
[19:50] <natefinch> busfactor violation
[19:50] <thumper> true that
[19:51] <jam> sinzui: I *thought* that when he set up the super special PPA he did give you access. At least, I listed a short list of names
[19:51] <jam> with you on it
[19:52] <sinzui> I don't see any new PPA on my list...
[19:52] <sinzui> oh, let me check teams
[19:53] <natefinch> thumper: if you get time today, I have a couple branches that could use some reviewing:  ec2 instance constraints: https://codereview.appspot.com/14523052/   modification to destroy-environment to require environment name: https://codereview.appspot.com/14502070/
[19:53] <jam> I don't know if that team actually got set up, but we did discuss it
[19:53] <thumper> natefinch: ok, I'll add to the list :)
[19:53] <sinzui> jam, thank you and your elephantine memory. I was added to a  team  8 days ago
[19:54]  * sinzui starts scripting
[19:54] <thumper> natefinch: so this is "juju destroy-environment amazon"
[19:54] <natefinch> thumper: correct
[19:54] <thumper> cool
[19:54] <thumper> juju destroy-environment production
[19:54] <thumper> oops
[19:54] <natefinch> that's basically the only change from current behavior
[19:55] <natefinch> I did add an exclamation point to the warning message, too.
[19:57] <thumper> :)
[19:57] <thumper> nice
[19:57] <natefinch> forgot there was a tweak to scanning the input... the code that was there before was actually completely borked. It happened to work, but also returned an error we were ignoring
[20:07] <arosales> bcsaller, do you have some time to join us @ https://plus.google.com/hangouts/_/calendar/YW50b25pby5yb3NhbGVzQGNhbm9uaWNhbC5jb20.tj9cngmc3p25r5nvbdthk5up8c
[20:14] <sodre> sinzui: have you already fixed 1209003
[20:18] <sinzui> sodre, no, no work has been done on it yet
[20:19] <sinzui> sodre, sorry, work has started on it and I need to target it
[20:21] <sodre> okay. I've placed the patch on a branch for review. It is one-line.
[20:22] <sodre> I found the bug through a different route, so I did not notice it was duplicate of your bug until a few minutes ago.
[20:22] <thumper> sinzui: good news, adam_g confirmed my work on maas, it makes lxc containers work (FSVO work)
[20:23] <sinzui> I hit the mysql+apparmor bug while testing local provider today. I almost stalled the releases. I am glad I remembered the bug
[20:31] <thumper> sinzui: what's that bug?
[20:34] <sinzui> thumper, https://bugs.launchpad.net/juju-core/+bug/1236994
[20:34] <_mup_> Bug #1236994: Mysql doesn't deploy on local provider due to apparmor <juju-core:Incomplete> <mysql (Juju Charms Collection):New> <https://launchpad.net/bugs/1236994>
[20:34] <thumper> wha?
[20:34] <thumper> it used to work, what broke?
[20:37] <sinzui> thumper, I think the issue is intermittent. I think I deployed that last night when I confirmed my local provider worked
[21:05] <jamespage> sinzui, I am now - I guess you need me to push 1.16.2 everywhere right?
[21:08] <natefinch> I'm outta here. night all
[21:08] <thumper> night natefinch
[21:29] <jamespage> sinzui, don't expect armhf for quantal and raring btw
[21:30] <jamespage> we are missing the dpkg magic fix in those series right now
[21:30] <sinzui> quantal is dead to me
[21:31] <sinzui> maybe we should stop releasing quantal packages. I think users have had 6 months to get to raring
[21:32] <jamespage> sinzui, not really - raring expires in 2 months
[21:32] <jamespage> :-)
[21:32] <jamespage> 9 month support these days
[21:32] <jamespage> sinzui, I'd prefer we keep publishing - I still have to test using juju-core for those series
[21:32] <sinzui> yeah, but I think juju-core has tried to give users an extra 3 months.
[21:32] <jamespage> but just not fuss about armhf
[21:33] <jamespage> sinzui, where do you want to pull trusty binaries from? I've not uploaded to ppa for that series?
[21:34] <jamespage> I can do - the distro package will supersede it for anyone actually running on trusty once it lands
[21:35] <thumper> sinzui, jamespage: FWIW +1 on not caring about quantal and raring armhf
[21:36] <sinzui> jamespage, this is the script we use for releases and CI. We pull from 3 archives, including Ubuntu: http://pastebin.ubuntu.com/6337941/
[21:37] <sinzui> jamespage, I think trusty will work. I need to switch to Lp API to get packages from the new archive.
[21:37] <jamespage> sinzui, should do
[21:37] <jamespage> I'll leave it off the backports like last cycle then
[21:37] <sinzui> sorry, line 197 is where I define the archives to search
[21:48] <jamespage> sinzui, OK - its nearly baked - everything is built - just waiting for LP to publish to ppa.launchpad.net
[21:49] <sinzui> fab. I am rushing to finish my script while getting children ready for Halloween
[21:49] <jamespage> sinzui, oh fun
[21:53] <jamespage> sinzui, bear in mind that http://archive.ubuntu.com/ubuntu/pool/universe/j/juju-core/ won't get the armhf bits
[21:54] <sinzui> ugh
[21:54] <jamespage> sinzui, trying to remember where they appear
[21:54] <jamespage> it's ports.ubuntu.com or something
[21:55] <jamespage> sinzui, http://ports.ubuntu.com/pool/universe/j/juju-core/
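The two URLs above follow the standard Debian archive pool layout, which is the same on archive.ubuntu.com and ports.ubuntu.com; only the host differs by architecture. As an illustrative sketch (this helper is hypothetical, not part of sinzui's release script), the pool path can be derived from the source package name:

```python
def pool_path(source: str, component: str = "universe") -> str:
    """Build the Debian-style pool path for a source package.

    Packages live under pool/<component>/<prefix>/<source>/, where
    <prefix> is the first letter of the source name, or "lib" plus the
    next letter for lib* packages (e.g. libfoo -> libf).
    """
    prefix = source[:4] if source.startswith("lib") else source[0]
    return f"pool/{component}/{prefix}/{source}"

print(pool_path("juju-core"))       # pool/universe/j/juju-core
print(pool_path("libfoo", "main"))  # pool/main/libf/libfoo
```

Prepending `http://archive.ubuntu.com/ubuntu/` or `http://ports.ubuntu.com/` to that path gives the two directories jamespage pasted.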
[21:55] <sinzui> jamespage, will we do this for the final release of trusty?
[21:56] <jamespage> sinzui, sorry - do what?
[21:56]  * jamespage thinks his brain is not quite working right
[21:57] <sinzui> will armhf always be in ports for trusty?
[21:58] <jamespage> sinzui, yes
[21:58] <jamespage> I think so
[21:58] <jamespage> sinzui, might be easier to just pull everything from the PPA
[21:58] <sinzui> My mind has left the building. I look forward to a walk in the dark with people dressed weirder than me
[22:02] <sinzui> Do my eyes deceive me, I think everything is in the staging ppa
[22:05] <jamespage> sinzui, yes - its all done
[22:06] <sodre> sinzui: should I change #1209003 to Fix Committed? I have placed a pull request already.
[22:06] <_mup_> Bug #1209003: juju bootstrap fails with openstack provider (failed unmarshaling the response body) <openstack> <Go OpenStack Exchange:In Progress by psodre> <juju-core:Triaged> <https://launchpad.net/bugs/1209003>
[22:19] <thumper> man...
[22:19] <thumper> I'm normally a bit drained on Fridays due to the late meetings
[22:20] <thumper> but actually coding after the meetings to fix the critical bug has knocked me out
[22:20]  * thumper turns on the coffee machine
[22:20] <jamespage> sinzui, I'm dropping for today
[22:20] <jamespage> sinzui, if you need me to do anything tomorrow AM my time, ping me a mail
[22:20] <sinzui> Thank you for your time jamespage
[22:21] <jamespage> sinzui, np - ditto
[22:24] <bigjools> thumper: a good coffee fixes everything
[22:43]  * thumper looks for wallyworld_
[22:44] <wallyworld_> yes?
[22:44] <thumper> hey
[22:44] <wallyworld_> ho
[22:44] <thumper> got time to chat?
[22:44] <wallyworld_> ok
[22:45] <thumper> https://plus.google.com/hangouts/_/72cpimmkjidp1clo6ipgc9gg90?hl=en
[22:45] <adam_g> is bootstrapping an environment /w 1.16.2-precise-amd64 expected to pull 1.16.0 tools or 1.16.2 down from S3?
[22:45] <thumper> adam_g: I believe it matches on major and minor
[22:45] <thumper> adam_g: to be sure --upload-tools
[22:46] <adam_g> thumper, seemed to have only matched major. I'll roll with --upload-tools for now
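The matching rule thumper describes can be sketched as a small check (a hypothetical helper, not juju-core's actual selection code): tools are accepted when major and minor components agree, with the patch level ignored, so a 1.16.2 client pulling published 1.16.0 tools is consistent with that rule.

```python
def tools_match(client_version: str, tools_version: str) -> bool:
    """Accept tools whose major.minor matches the client's, per
    thumper's description; the patch component is ignored."""
    return client_version.split(".")[:2] == tools_version.split(".")[:2]

print(tools_match("1.16.2", "1.16.0"))  # True: same major.minor
print(tools_match("1.16.2", "1.14.1"))  # False: minor differs
```

Under this rule adam_g's bootstrap behaved as expected; --upload-tools sidesteps the search entirely by publishing the local client's own tools.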
[23:41] <jcastro> http://askubuntu.com/questions/369263/juju-bootstrap-fails-with-temporary-failure-in-name-resolution-using-amazon-aw
[23:41] <jcastro> did we break a bucket?