[00:08] hazmat, yes --fancy shows them running [00:08] hazmat, yes pending status, which is all I have ever seen since 1.14.0 [00:08] hmm [00:09] that's a good start [00:09] sinzui, jorge and marco had this issue, thumper had to do surgery on them.. but tbd if its needed here [00:10] * hazmat waits for cloudinit output [00:10] hazmat, cloud-init: http://pastebin.ubuntu.com/6333073/ [00:10] cool.. the source of fail [00:10] which agrees with [00:10] curtis-local-machine-1 RUNNING - - YES [00:11] sinzui, so let's try a clean tear down.. [00:11] I wonder if I need to locate abentley's bug that explained his two step clear lxc cache [00:11] sinzui, issue there is no cloudinit data made it to the container [00:11] sinzui, juju destroy-environment [00:12] done [00:12] sinzui, anything in ls /etc/lxc/auto ? [00:12] no [00:13] sinzui, how about ls /var/lib/juju/containers/ [00:13] hazmat, empty [00:13] sinzui, hmm.. and ls /var/lib/juju/removed-containers/ [00:14] sinzui, this is trunk or 1.17 ? [00:14] er. 1.16 [00:14] hazmat, 1.16.1 release candidate...the one that is supposed to fix lxc [00:14] ah.. one of those [00:14] hazmat, Lots in removed containers [00:15] curtis-local-machine-1 curtis-me-machine-1 curtis-me-machine-1.2 curtis-me-machine-1.4 [00:15] curtis-local-machine-2 curtis-me-machine-1.1 curtis-me-machine-1.3 curtis-me-machine-2 [00:15] sinzui go ahead and kill them. [00:15] and move your $JUJU_HOME/$local_provider_name out of the way [00:15] hazmat, done [00:15] and that should be pristine for a new bootstrap [00:15] sinzui, if your bootstraping.. with 1.16.1 tools you may need --upload-tools [00:15] if there not published [00:16] I think that last step is key. I have not done that today [00:16] sinzui, yeah.. --debug on bootstrap and pastebin for posterity if needed [00:22] hazmat, nothing different from my previous tests...so far http://pastebin.ubuntu.com/6333122/ [00:23] sinzui, can you set juju set-env logging-config="=DEBUG;juju.provisioner=TRACE" [00:23] and then deploy something.. [00:25] sinzui, btw re logs $JUJU_HOME/$local-env-name/machine-0.log has the logs of interest [00:26] yeah, that is what I am tailing [00:26] oh.. cool [00:26] sinzui: no all machines log for local at this stage [00:26] sinzui, so .. i get roughly the same content you did [00:26] 2013-10-31 00:26:17 INFO juju.provisioner provisioner_task.go:367 started machine 1 as instance kapil-local-machine-1 with hardware [00:27] so, what's the summary of the current issues? [00:28] hazmat, 3 running machines without net [00:28] thumper, local provider seems to work on trunk.. sinzui has it failing for him 1.16.1 [00:28] thumper, somewhat more critical is that it doesn't seem to work on maas as a container at all atm [00:28] hazmat, all three services/machines are pending [00:29] sinzui, you can watch them do upstart and lxc-create via pstree [00:29] * thumper sighs... [00:30] thumper, the containers on maas are critical for some ODS stuff [00:30] * thumper nods [00:30] let's get that working first then [00:30] thumper, i'm going to give it a go with trunk on the maas i think.. [00:30] just to see if that helps at all. plus to have a source compile i can instrument.. since the logging here seems to be nil [00:31] hazmat: do you have any instructions for a virtual maas setup [00:31] sinzui, cool, is cloudinit done from pstree? [00:31] so we can test locally? [00:31] abentley reported this bug. 
I don't think it is related because I start my lts container every day [00:31] https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1236490 [00:31] <_mup_> Bug #1236490: Container has no networking [00:31] thumper, you mean did i attend the maas training ;-) ? [00:31] no actually.. [00:31] * bigjools bans the maas training from existence [00:31] sinzui: hangout? I can talk through some things [00:32] sinzui, can you pastebin lxc-ls --fancy [00:32] not sure what you mean by no networking [00:32] hazmat, yes it is. I can tell when my fan stops actually [00:32] cloudinit is injected here via file [00:32] sinzui, wow.. is that spinning rust? [00:32] sinzui, or ssd? [00:32] ssd [00:32] * hazmat runs a dozen containers normally [00:33] only 2-4 for juju though [00:33] i don't even notice them [00:33] I have my fan tuned to come on often...its a mac and they run hot [00:33] gotcha [00:33] hazmat, no improvement http://pastebin.ubuntu.com/6333175/ [00:34] that's intriguing [00:34] sinzui, can you go to /var/lib/juju/containers [00:35] there should be console.log in there for each machine [00:35] and pastebin the console.log [00:35] that is the output of running the cloudinit [00:35] although in this case i think we want the container.log [00:35] sinzui: also pastebin ifconfig [00:35] the netns stuff should have happened pre lxc [00:36] er. pre cloudinit [00:36] hazmat, console.log http://pastebin.ubuntu.com/6333189/ [00:37] thumper, my ifconfig or one from a container? [00:37] sinzui: from the host [00:38] sinzui, can you pastebin the container.log post ifconfig .. [00:38] that should detail why the no netns [00:38] thumper, ifconfig http://pastebin.ubuntu.com/6333193/ [00:38] huh [00:38] you've got three veth devices there. [00:38] for your three containers [00:38] sinzui, is dnsmasq running ? [00:39] sinzui: how about the lxc.conf from the /var/lib/juju/containers// [00:39] you should see something like... 116 1190 0.0 0.0 28184 804 ? S Oct15 0:04 dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/lxc/dnsmasq.pid --conf-file= --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative [00:39] er.. ignoring the preamble.. that's from ps aux | grep dnsmasq [00:39] hazmat, yes http://pastebin.ubuntu.com/6333198/ [00:40] sinzui, the container.log should have the netns come up.... [00:40] hazmat, the lxc.conf from -1 http://pastebin.ubuntu.com/6333199/ [00:41] * hazmat leaves to thumper's capable hands [00:41] ok, this needs help from an lxc person [00:41] as it is outside the remit of juju [00:42] the container didn't get networking [00:42] prior to all the juju shit [00:42] thumper, fab, as I think too [00:42] sinzui: if you bring up a machine manually, does it get an ip address? [00:42] while abentley's bug is not identical, I will try the work around [00:43] thumper, do you mean bring one down than bring it back up again? [00:43] sinzui: what do you get with "ls -l /var/cache/lxc" ? 
[00:43] actually [00:43] sinzui: ls -lh /var/cache/lxc/cloud-precise/ [00:43] thumper, [00:43] drwxr-xr-x 2 root root 4096 Jul 1 18:29 cloud-precise [00:43] drwxr-xr-x 3 root root 4096 Jan 9 2013 precise [00:44] thumper, -rw-r--r-- 1 root root 221M Jun 24 03:07 ubuntu-12.04.2-server-cloudimg-amd64-root.tar.gz [00:44] -rw-r--r-- 1 root root 206M Dec 17 2012 ubuntu-12.04-server-cloudimg-amd64-root.tar.gz [00:44] ok, that's it [00:44] ubuntu-12.04-server-cloudimg-amd64-root.tar.gz is too old [00:44] and the refresh mechanism isn't working [00:44] this is the problem marco had [00:44] and jorge [00:45] smoser is aware [00:45] sinzui: delete the file [00:45] sinzui, can you post the console.log [00:45] destroy the environment [00:45] er.. container.log [00:45] sinzui, ^ [00:45] and start again [00:45] before destroying [00:45] thats a known issue. [00:45] yeah, remove that file. [00:45] lxc will download a fresh copy of the cloud image [00:45] oh.. that one [00:45] and it "should" work [00:45] i forget about that.. i ran into that as well [00:46] smoser, so what causes that? [00:46] smoser, with the new lxc cloud template / cloud img download there's no dot versioning around point releases.. [00:47] but never really understood why the old container wouldn't netns [00:47] sinzui: still there? [00:47] "new lxc cloud template" ? [00:47] hazmat, I have a copy of the container log. anything else before I start purging [00:47] there is no such thing as "dot" [00:47] ever. [00:47] 12.04.3 is no different than 12.04 [00:48] the cloud images do not distinguish them in name. they did at one point and that was a bug. [00:48] smoser, it is though from a cached file sense [00:48] that is independent of .2 verus .3 [00:48] previously it did.. and it worked better for caching and invalidation purposes when the point release incremented [00:48] previously it was broken on the server side in naming [00:49] which broke people who expect to 'wget http://path/to/some/known/current/release/file.img' [00:49] https://bugs.launchpad.net/ubuntu/+bug/1220366 [00:49] <_mup_> Bug #1220366: cloud-images have inconsistent filenames in 12.04.3 [00:50] thumper, I am bootstrapping again [00:50] ic [00:50] that should fix sinzui's issue then [00:50] the cloud template needs to be smarter. [00:50] but we really would rather make a lxc wrapper that was smarter. [00:51] ie, hallyn doesn't think such knowledge should live in lxc itself. [00:52] smoser, ala the kvm front end.. or ala docker front end? [00:52] uvtool-lxc is the plan in my head. [00:52] which is also ideally where the "--use-the-fastest" cloning argument goes. [00:52] My fan has started [00:52] :) [00:53] smoser, the default behavior with -s works for me generally.. ie autodetect btrfs ;-).. but i'm curious about the docker lvm thin provisioning on sparse file dev pool [00:54] whoa! there it is. Installed [00:54] sidnei just got the that merged into lxc-clone (re lvm-thin) support [00:54] sinzui, do we have tools published for 1.16.1 or tarball? [00:54] hazmat, i think there were issues with that default behavior. [00:54] are you sure it even remains ? [00:55] and it wouldn't select unionfs [00:55] * hazmat checks [00:55] Thank you rocket scientists! hazmat thumper smoser. 
I really appreciate your patience [00:55] sinzui: awesome [00:55] that's one fixed [00:55] and that fault should definitely be logged on the local provider troubleshooting page <- jcastro, marcoceppi [00:55] now to look at maas [00:55] o/ [00:56] I just got pinged about this [00:56] hazmat: what versions were you playing with? [00:56] narrowed it down any yet? [00:56] bigjools: not started looking yet [00:56] thumper, 1.16.0 for maas [00:56] thumper: I guess I should fire up my maas server? :) [00:56] bigjools: gimmie time to shower (haven't since back from the gym yet), and we can have an initial call prior to working out debugging plan [00:56] bigjools: ack [00:56] hazmat, yes, my testing tools are in place in aws and hp and canonistack. I place them in the testing/ subdir and point the tools url to them: http://juju-dist.s3.amazonaws.com/ [00:56] thumper: roger mcdodger [00:57] hazmat, and I will officially make the tarball in about 10 minutes [00:57] smoser, it requires explicit lxc-clone -s but yeah.. it works without specing btrfs directly [00:57] du -hs /var/lib/lxc/container-name -> 7mb [00:58] hazmat, and you can see all the scripts I am using at https://code.launchpad.net/~juju-qa/juju-core/ci-cd-scripts2 [00:58] you can make your own deb without any juju parts installed [01:10] bigjools: how long does an juju azure bootstrap instance normally take to spin up? [01:11] wallyworld_, ~ 5 minutes i think [01:11] wallyworld_: from memory not at all quick, I think it was like 5-10 m [01:11] ok, ta. just bootstrapped an and waiting for juju status [01:11] s/just/have been [01:12] thumper: can you summarize what should be logged? [01:13] marcoceppi: the symptom was that the machines were pending, and lxc-ls --fancy showed the machines started but with no ipv4 address [01:13] marcoceppi: ls -lh /var/cache/lxc/cloud-precise showed ubuntu-12.04-server-cloudimg-amd64-root.tar.gz to be over 4 months old [01:13] marcoceppi: delete that file, destroy the environment and try again [01:14] a new lxc cloud image will be downloaded and all should be good [01:17] bigjools: hangout? [01:18] marcoceppi: does that all make sense? [01:19] thumper, marcoceppi It makes sense to me. marcoceppi can interview me tomorrow or he/nick can invite me to try updating the docs [01:19] * thumper nods [01:19] thanks sinzui [01:19] thumper: calling you.... [01:20] which me? [01:20] the head [01:20] wrong one? [01:20] this is why I don';t have two g+ profiles [01:48] thumper: yeah, that's what happened to me [01:48] already recorded [01:50] bigjools: do you know the magic command to ssh into a bootstrap node on azure? i have the ip address of the machine [01:51] juju ssh 0 :) [01:51] bigjools: doesn't work. times out [01:51] bootstrap has failed [01:51] but i have the ip address of the bootstrap node [01:51] ummmm in that case you're probably screwed [01:52] it will be firewalled [01:52] well, when i say bootstrap failed, it completes but juju status fails [01:52] ok [01:52] and i want to find out why [01:52] can you telnet port 22? [01:52] i'll try [01:53] yes [01:53] when i try ssh directly, it fails with a permission denied error (publickey) [01:53] so it's up but hasn't added the key [01:53] yeah, seems so [01:53] wallyworld_, with azure, I wait 30 minutes for the machine to really be up [01:53] which means bootstrap was not as successful as you might think [01:54] sinzui: really? oh joy [01:54] ! 
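(An aside pulling together the fix thumper spells out above for marcoceppi: the containers came up with no ipv4 address because the cached precise cloud-image tarball under /var/cache/lxc was over four months stale and the refresh mechanism never replaced it. A minimal sketch of the commands already quoted in the log, run on the local-provider host; the filename is the one sinzui's listing showed.)

    sudo lxc-ls --fancy                            # containers RUNNING but the ipv4 column is empty
    ls -lh /var/cache/lxc/cloud-precise/           # root tarball months old => stale cache
    juju destroy-environment
    sudo rm /var/cache/lxc/cloud-precise/ubuntu-12.04-server-cloudimg-amd64-root.tar.gz
    juju bootstrap                                 # lxc downloads a fresh cloud image; networking comes back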
[01:54] wtf takes so long [01:54] oh - yes the crazy provisioner [01:55] you can see in the MS dashboard [01:55] so before when juju status timed out and i destoyed my env, i should have just waited [01:55] wallyworld_, on canonistack, we get TLS timeouts while the bootstrap node is coming up, but azure is silent. I have seen the machine come up in 20 minutes, but 30 is realistic [01:55] dashboard? you have a link? [01:56] sinzui: ok. i think i've fixed the azure incompatibility but am trying to make sure [01:56] wallyworld_: well not exactly you would need my password. let me look at it [01:56] ok [01:56] bigjools: i bet i can guess it [01:56] aussieaussieaussieoioioi [01:56] wallyworld_: "ianisacocktrumpet" [01:57] that would have been my next guess [01:58] wallyworld_: in fact it was so secure I can't remember it :) [01:58] ha ha ha [01:59] wallyworld_: get your own credentials [01:59] bigjools: they're in the mail apparently [02:00] i only needed yours cause of this azure cockup and the time pressure to fix it [02:00] yeah np [02:00] I encrypted my creds and put them in the CI machine so that my team could build better testing [02:00] i wonder what else besides juju msft broke with their change? [02:01] wallyworld_: btw baldrick left a hidden present that the mower found [02:01] \o/ :-D [02:01] you can imagine the result [02:01] did it splat all over you /me asks hopefully? [02:01] >:( [02:01] Your dog is named baldrick? [02:01] yep :-D [02:01] he has a cunning plan [02:02] * sinzui was just telling his son about Ebenezer black adder 5 minutes ago. [02:02] wallyworld_: I see two VMs running [02:02] * wallyworld_ is finding it hard to type cause he is laughing so hard to bigjools [02:02] one of them from juju the other I think for a gwacl run [02:02] bigjools: ah the gwacl one needs to be shutdown. i thought i did [02:03] * bigjools considers putting baldrick's present in wallyworld_'s mailbox [02:03] sinzui: blackadder is one of my favourite tv shows [02:03] bigjools: how can you, it's all vapourised all over you, right? [02:03] there's some left [02:04] next time get jen to mow the lawn after baldrick has visited :-D [02:04] wallyworld_: so the cpu graph shows it finishing a heavy load at 11:45 [02:04] the juju one? [02:05] what's the ip address? [02:05] 137.135.11.14 [02:06] yep, that's the one juju status keeps polling mongo for trying to do a status [02:06] :-( [02:06] i really need to get the logs off that sucker [02:06] wallyworld_: 17070 and 37017 ports are open [02:06] that's it [02:07] 37017 is the mgo port [02:07] * bigjools wonders if it's the non-ssl mgo [02:07] and it says conection refused [02:07] ah [02:07] that could be it [02:07] * bigjools gets lunch while wallyworld_ fixes it [02:08] * wallyworld_ doesn't know how to fix that one [02:08] deploy with saucy? [02:08] ah ok [02:08] good idea [02:08] it might have failed to add the cloud archive [02:09] yeah, sounds likely [02:09] although doesn't explain why you can't ssh in [02:09] hopefully saucy will work [02:09] heard a lot of swearing coming from the living room earlier - jen found jake pulling keys off her laptop again... ROFL [02:10] * bigjools lunches [02:10] lol [02:11] sinzui: still around? 
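(A side note on the Azure debugging above: the checks wallyworld_ ran boil down to a few commands, sketched here. The instance address is a placeholder, and per sinzui the machine can take 20-30 minutes to really be up, so a hung juju status is not necessarily fatal.)

    juju status                    # keeps polling the state server on the bootstrap node
    juju ssh 0                     # times out while bootstrap is still incomplete
    telnet <instance-ip> 22        # confirms sshd is at least listening
    ssh ubuntu@<instance-ip>       # "Permission denied (publickey)" => cloud-init has not
                                   # finished installing the juju ssh key yet; wait and retry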
[02:11] I am [02:11] sinzui: I have a critical bug [02:11] sinzui: for 1.16 [02:12] sinzui: it is the lxc and maas problem [02:12] that they are using for ods [02:12] demo [02:12] always last minute [02:12] it is a regression introduced when the provisioner moved to the api [02:12] I just cut 1.16.1, but plan a 1.16.2 for azure. [02:12] but no one noticed [02:12] thumper, Do I need to take down 1.16.1 [02:12] no [02:12] I don't think so [02:12] because they can use a custom release [02:12] I think [02:13] but we may have a 1.16.2 real soon now [02:13] if they do need a release [02:13] what's the timeframe for 1.16.2? [02:13] bigjools: I have a plan [02:13] I can make it tomorrow immediately after 1.16.1 is needed. I hope for an azure fix of couse [02:13] * thumper nods [02:13] ok [02:13] let me file this bug and get to work [02:15] * sinzui creates milestone [02:17] sinzui: azure fix will be commited to gwacl today i hope [02:17] wallyworld_, excellent. Testing will be quick since it is limited to two providers [02:18] i'm doing a local test now [02:18] Did I mention abentley got rudimentary CI running on HP. we can run about 200 upgrade and bootstrap tests a day [02:18] ran into the precise mongo thing i think [02:18] trying on saucy [02:23] wallyworld_: are you chasing the azure issue? [02:23] wallyworld_: is there a bug? [02:24] yes. the issue is with msft, not us [02:24] we need to change gwacl cause they broke the api [02:24] i'm testing now [02:24] :) [02:24] wallyworld_: can you target the bug to 1.16.2? [02:24] but azure is sloooooow [02:25] thumper: sure. there's 2 of them - a public and a private [02:25] * thumper nods [02:26] bigjools: lp:1246556 for the juju maas bug [02:26] which it seems mup can't understand, bug 1246556 [02:26] <_mup_> Bug #1246556: lxc containers broken with maas [02:41] * thumper moved some code around, all tests still pass [02:42] I find this mildly disturbing [03:06] bigjools: not sure how many azure instances i have running - i think 2. a bootstrap using saucy still doesn't allow juju status to connect sadly [03:07] thumper: i have 2 small azure mp's for the 1.16.2 release if you have a moment sometime https://code.launchpad.net/~wallyworld/gwacl/azure-management-api-change https://code.launchpad.net/~wallyworld/gwacl/fix-request-eof-2/+merge/193346 [03:10] * thumper is in deep thought [03:10] sorry [03:11] np [03:11] bigjools: what is the price for 2 small gwacl reviews? [03:12] needed for juju 1.16.2 release tomorrow [03:28] bigjools: bugger... [03:28] bigjools: it seems like the quick fix won't work very well at al [03:28] may have to fix it the right way [03:29] wallyworld_: I can look for you, after all you came bearing gifts yesterday [03:29] indeed i did [03:29] they are only small [03:29] branches [03:29] thumper: great architecture FTW [03:30] 466 lines [03:30] is small in juju world? [03:30] bigjools: sadly i have a saucy node running now which also disallows connections to mongo [03:30] wallyworld_: is the firewall endpoint open? [03:30] not sure. what port is that do you know? [03:31] i thought you said before 37017 was open? [03:31] which is the port the client is sing to contact mongo [03:32] ok [03:32] wallyworld_: hang on I need to make call first [03:32] ok [03:49] wallyworld_: first comment - pease remove the factored version numbers [03:49] why? 
[03:50] I previously had someone stop doing that - each api request is separately versioned and if you factor it you will introduce subtle bugs [03:50] it kinda sucks having them all copied and pasted in the code [03:50] yes - but it needs to be done [03:50] ok then [03:50] I rooted out at least 3 bugs last time I unfactored it [03:50] sigh [03:51] I know [03:51] how the fook do we keep track of all the apis [03:51] sounds like a nightmare [03:52] wallyworld_: fwiw the version should be associated with the request struct ideally [03:52] they change versions when the format changes [03:52] so they are very closely tied [03:52] ok. for now though, i'll just go back to how it was [03:53] and do a search and replace of string [03:53] wallyworld_: for example if you update the refactored version you would change all the calls at once, and introduce potentially subtle sematic changes [03:53] i thought they would all change together [03:54] as a group [03:54] IME they dont [03:54] :-( [03:54] it might say they do.... [03:54] but I call BS [03:54] fair enough. it is msft after all [03:56] you might want to mention this in the code [04:00] ok [04:02] arg [04:02] stabby stabby [04:02] * thumper is trying to unpick something [04:02] but others are sewing things tighter together [04:04] wallyworld_: I bailed on the other one [04:04] bigjools: i wanted to avoid a random custom data so i could hard code the expected response [04:04] wallyworld_: why? [04:04] same reason i like to use strings in tests [04:04] the test demonstrates that it copes with random input [04:05] you should always use random data where the data itself doesn't matter [04:05] using the same api call in a test though as the logic sort of negates the test [04:05] otherwise it says to the test reader that you're crafting a particular scenario [04:05] feel free to ignore it, you have my +1 already [04:06] ok, i'll think about it [04:06] thanks [04:06] why bail on other one? [04:06] because it's deep Go innards that I know nothing about [04:06] ok [04:06] you're the Go expert :) [04:06] i used same technique as in goose etc [04:06] you reckon? [04:07] self-approve? [04:07] might do [04:07] do you need all you instances on my azure account? [04:07] your [04:08] bigjools: the saucy one would be good. but i need to be able to ssh in to see the cloud init log [04:08] you need a custom image with the backdoor [04:09] ugh ok [04:09] you have to build one and upload it to private storage and tell the api to use it when booting [04:09] I forgot how though :( [04:09] i never knew how [04:09] jam3: ping [04:09] in the first place === jam3 is now known as jam [04:10] there's a config item for a custom kernel [04:10] image I mean [04:10] thumper: pong [04:10] jam: hangout? I have a critical bug to discuss [04:10] bigjools: and you can't ssh in at all and grab the cloud init log? [04:11] thumper: sure, I have to head out in about 20min [04:11] jam: ack [04:11] jam: https://plus.google.com/hangouts/_/72cpjufs8mm0i0htkjiuch72s8?hl=en [04:12] I'm getting "call ended because of server error" give me a sec [04:13] * thumper nods [04:13] thumper: https://plus.google.com/hangouts/_/72cpjqbfsmf42rpnbt2o36cgk0 [04:13] see if this one works [04:13] If I follow your link, I got the "new" hangout layout [04:14] thumper: I saw you show up for 1 second [04:14] jam: I have new, you have old [04:14] they don't like each other [04:14] server error [04:14] skype? [04:14] thumper: know of any way to trigger the right one? 
[04:14] mumble [04:14] I don't care [04:14] thumper: I'm on skype, and you're in my friends list [04:14] but not showing as online [04:14] * thumper starts skype [04:16] jam: https://bugs.launchpad.net/juju-core/+bug/1246556 [04:16] <_mup_> Bug #1246556: lxc containers broken with maas [04:16] wallyworld_: no, unless cloud-init finishes putting the ssh key in your're buggered [04:16] \o/ [04:16] maybe it should do that first up if it doesn't already [04:16] wallyworld_: hence you need the backdoor. Having said that - there is an api field for adding a user and p/w [04:17] but we don't use it as it's mental [04:17] got a hint where i look save me searching? [04:17] i might do it to debug [04:17] hang on [04:18] wallyworld_: in LinuxProvisioningConfiguration you set a Username and Password. It's in the same place Customdata goes [04:18] ok, ta [04:22] wallyworld_: line 545 in provider/azure/environ.go [04:22] great [04:22] it was a mandatory field so we randomised it [04:22] yay security [04:22] lol serious? mandatory? [04:22] wtf [04:23] this is msft [04:23] * wallyworld_ sighs *again* [04:40] jam: lp:~thumper/juju-core/maas-lxc === thumper is now known as thumper-afk [05:40] juju-devs: is anyone still up ? [05:41] sodre: some of us are here [05:41] hi wallyworld_: I traced the reason for that earlier panic on juju bootstrap [05:41] oh cool [05:41] admin-secret issue i think? [05:42] I've updated the bug report on launchpad. But I was wondering if you could explain to me something related to it. [05:42] yeah, it had nothing to do with the admin-secret. [05:42] i'll read the bug [05:42] The issue is that jujud tries to get the provider-state through swift without authentication. [05:44] it should use creds when it tries to get it [05:44] i think from memory [05:44] let me check [05:45] from the client side, (juju) connects to swift using goose. [05:45] but the bootstrap level seems to use (Go 1.1 package http) [05:46] sodre, wallyworld_: provider-state needs to be in a container that is world readable because all of the clients download stuff from the container via wget/direct http access [05:46] when we set up the bucket ourselves, we set the .r:.rlistings * [05:47] for somereason that is not being set on on my version [05:47] sodre: you're on Havana ? [05:47] yes [05:47] jam: not necessarily the conatiner itself but the file [05:47] wallyworld_: we need the container, we put all the charm data, tool data, etc in there [05:47] i think [05:47] and all needs to be downloadable from plain wget [05:48] we don't need it on EC2 because we can generate Signed URLs, but we don't have support for that on Openstack [05:48] and doesn't wget work so long as the file itself is readable? [05:48] right now, my way around it has been to put tools in the juju-dist and set that world readable. [05:49] but then this provider-state issue came up. [05:50] wallyworld_: as in, we put a bunch of stuff in the container, and *each* needs to be world readable. It is a fair point that you could just have each file have the ACL, but we need all of them anyway [05:50] jam: sure. 
but now in trunk, public storage instances are gone from the providers [05:50] azure, maas never had them anyway [05:50] wallyworld_: "private" storage *must* be public for Openstack [05:51] wallyworld_: so if your patch which got rid of the PublicStorage api also removed setting .r:.rlistings for the other Storage, then you broke openstack deployments [05:51] i didn't tinker with the other Storage to my kmowledge [05:52] wallyworld_: we *don't* want to give our Provider credentials to the agents running on non machine-0, so they have to have credentials-free access to download the charm blobs from storage [05:53] wallyworld_: "rlisting" doesn't exist in the source tree [05:53] jam: right. so i just checked, the "private" storage on openstack does have rlistings perms [05:53] unless that is in goose [05:53] jam: it's a const in goose [05:53] wallyworld_: k [05:53] swift.PublicRead [05:53] see line 514 of provider.go [05:54] in openstack package [05:54] wallyworld_: yep [05:54] I found ti [05:54] sodre: can you "swift stat" your container to see what the ACL is ? [05:54] * wallyworld_ has to go get kid from school [05:55] I need to bootstrap again, because I've set it by hand. [05:55] give me one se.d [05:56] also, I'm on 1.16, not trunk. fyi [05:58] .r:* [05:58] humm... strange.... [05:59] i need a new bucket name... one sec. [06:00] jenvs from hell... [06:03] the read acl is empty http://paste.ubuntu.com/6334137/ [06:21] jam: any ideas if that can be fixed in 1.16.1 ? or 1.17 ? [07:00] sodre: did you create that control bucket by hand? [07:00] yes [07:00] I got it to work as follows [07:01] sodre: because juju will not set read acl if it exists already [07:01] I first swift post juju-control-bucket --read-acl .r:* [07:01] no... no.. [07:01] if the bucket does not exist, it is still not setting it to .r:* [07:02] really? the code in that area hasn't changed. it uses PublicRead = ACL(".r:*,.rlistings") [07:02] maybe it is the , .rlistings ? [07:02] because it is not setting it to read mode right now [07:03] ... it could be a radosgw issue again.. [07:03] could a havana thing maybe? [07:03] let me try the following. [07:04] I have a bucket right now with read ACL set to empty. [07:05] swift post pet.sodre.juju-control-02 --read-acl .rlistings [07:05] the read acl is still empty. [07:05] should it have changed ? [07:05] um. not sure. i don't use the swift tool for that sort of thing too often [07:05] cause juju does it for me :-) [07:06] are you using radosgw as well ? [07:06] i've tested juju on hpcloud and our own internal openstacj deployment using folsom and grizzly [07:07] two variables: It could be Havana, or it could be RadosGW [07:08] i'm testing now our our openstack [07:08] sodre: so a seift post does set the read acl for me [07:09] ie it is non empty after a swift post --read-acl .... [07:09] can you paste the readacl line for me. [07:09] sure [07:09] Read ACL: .rlistings [07:10] humm.. [07:10] strange! [07:10] and i think this is on havana [07:11] what is .rlistings for anyways ? [07:11] it allows unauthed clients to list the container [07:11] the contents thereof [07:12] I see. [07:13] It seems that Rados does not support rlistings [07:13] shouldn't a plain .r:* suffice ? 
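(For reference, the swift ACL checks from this exchange as a sketch. The container name is an example; '.r:*,.rlistings' is the value juju's openstack provider asks for via goose's swift.PublicRead, and the symptom on radosgw is that the Read ACL comes back empty even after a post.)

    swift stat juju-control-bucket                               # look at the "Read ACL:" line
    swift post juju-control-bucket --read-acl '.r:*,.rlistings'  # what juju sets on a fresh container
    swift stat juju-control-bucket                               # normal swift now shows the ACL;
                                                                 # radosgw reportedly drops .rlistings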
[07:13] well that sucks if true [07:13] don't think so - certain operations do need to list the files in a container [07:14] but i can't tell you the specifics of the top of my head [07:14] http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/Better_Swift_Compatability_for_Radosgw [07:14] ah [07:14] well, that sorta explains it :-( [07:15] it does say .rlisting has something done [07:17] there as a patch in the wild, but it seems they did not want to apply it. [07:18] according to http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg13829.html [07:19] all that is needed is to set a READ permission to see it via S3 [07:19] I imagine unauthenticated access. === mgz is now known as mgzh [07:40] wallyworld_: I've updated the bug report one more time.... what is the easiest way for me to modify the juju-core code? [07:40] what do you want to change? [07:40] I would like to see why it is not honoring the .r:* [07:40] when working with RadosGW [07:41] right now, you need to use bzr to get a copy of the source code off launchpad [07:41] go get works too [07:41] have you ever done that? [07:42] yeah, I did it quickly today. I am not sure how to ``debug'' the code, ie.. gdb did not really help [07:42] is there a way to step through, or are debug statements the standard ? [07:43] lol [07:43] Go has very limited tooling support for debugging [07:43] i.c. [07:43] in my experience [07:43] i think you can use gdb [07:43] but i have not done that myself [07:44] i'm used to modern IDEs [07:44] sure I can use that as well, will it work in eclipse ? [07:44] perhaps. I use Intellij where there is a decent Go plugin but it does lack debug support sadly [07:44] juju-core will be the first "Go" code I ever read [07:45] it's similar syntax to C - not too hard to read [07:45] alright... so I don't spend too much time digging... in which file should I be looking for these rlistings ? [07:46] for openstack, there's two main files of interest.... [07:47] provider/openstack/provider.go <--- this one sets up the storage instance for an openstack environment [07:47] it creates it with the ACL I pasted above [07:47] and provider/openstack/storage.go <--- this one creates the container (if required) and reads/writes/lists files therein [07:48] your main interest will be in storage.go [07:48] got it [07:48] the Storage interface is quite simple - it has List(), Put(), Get() etc methods [07:49] i normally just use lots of fmt.Println() statements to debug [07:49] thrre's also debug logging [07:49] if you run a command like bootstrap with --debug, it will print useful info also [07:50] yeah, --debug is my friend :) [07:51] it seems the code went to goose [07:51] i'll spend some time with it. [07:51] thanks a lot man! [07:58] np. do ask if you have more questions [07:58] yes, goose is the lib used by juju to talk to openstack [07:59] thanks. I should probably go to sleep ... its 4am here [07:59] the good thing is that there is a work around [07:59] wow. yes. go to bed! [07:59] where are you? [08:00] tz= EST [08:00] I live in the DC area [08:01] ah ok. i'm in australia [08:01] so almost drinking time here :-) [08:01] nice :) [08:02] thanks for all your hard work! [08:02] I'm trying to deploy OpenStack at work [08:02] I need to show them a demo ... [08:03] so... here is an issue [08:03] I am running my compiled version of juju [08:03] how do I bootstrap my system ? [08:03] it can't find 1.17 [08:04] you need to use --upload-tools [08:04] if you are running from source [08:04] tsk tsk tsk... 
IU need to sleep [08:04] :) [08:04] s/IU/Im [08:04] yeah tomorrow! [08:05] btw... [08:06] 2013-10-31 08:05:44 INFO juju.environs.sync sync.go:103 listing target bucket [08:06] 2013-10-31 08:05:44 DEBUG juju.environs.tools storage.go:35 reading v1.* tools [08:06] 2013-10-31 08:05:45 ERROR juju supercommand.go:282 failed to list contents of container: pet.sodre.juju-control-02 [08:06] issue *listing contents* [08:07] right, that is consistent with not having rlisting permissions [08:07] juju needs to list the tools available to find the correct ones to use [08:08] this is a bad bug in RadosGW [08:08] alright.. ttyl === sodre is now known as sodre_zzz === allenap_ is now known as allenap [08:10] mornin' all [08:11] Hi wallyworld_, did you figure out what the issue with the Azure provider was? [08:11] rvba: hi, yes. msft screwed us over [08:11] wallyworld_: nice :/ [08:12] they changed the api in an incompatible way [08:12] Great. [08:12] tl;dr; we needed to change the apiversion passed to various management calls [08:12] So much for having API versions and all that jazz. [08:12] yeah [08:13] and we also needed to tweak the run utility to base64 encode the userdata [08:13] All right, thanks for the update. [08:16] i'm landing some stuff now. thanks for the input yesterday [08:16] just had trick or tweaters at my door [08:16] treaters lol [08:57] mornin' === thumper-afk is now known as thumper [09:33] * thumper hopes jam can run the meeting tonight as he is hacking now he's back [09:37] morning all [10:03] fwereade: welcome back [10:05] team meeting: https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.09gvki7lhmlucq76s2d0lns804 [10:05] fwereade_: ^^ [10:05] axw: ^^ [10:05] jam, I am trying [10:05] sure, we've been having trouble with new vs old G+ interfaces [10:08] my machine froze, joining now [10:09] jam, it's weird, it is literally *just* the hangout that's not working [10:11] fwereade_: extra joy in that it was working with just you and I until more people joined [10:12] jam, yeah, indeed [10:12] jam, there's a crazy storm out here but I doubt its effect is selective enough to be causing this [10:12] fwereade_: is dimiter around today? [10:13] jam, he is, but he was out earlier; he expected to make it back for the meeting though [10:52] damn, did i miss the team meeting today? [10:52] jam: sorry about that, i totally forgot [10:53] rogpeppe: yeah but you didn't miss much [10:53] jam: i'd have joined if you'd pinged me... [10:53] natefinch: ok, that's good [10:53] rogpeppe: thumper was just talking about how much he loves having loggo on github [10:53] natefinch: ironically? [10:54] ?!? [10:54] thumper: you really *do* love having loggo on github? [10:54] heh [10:54] * thumper goes back to work [10:54] rogpeppe: I wasn't being serious :) [10:55] natefinch: it sounded like you weren't - just checking [10:55] rogpeppe: actually I think the only real complaint was having his username in the import url. He says it feels too unprofessional [10:55] natefinch: yeah, that's actually part of why i chose launchpad for some projects [10:55] natefinch: the user name doesn't seem like that important a part of the project [10:57] rogpeppe: yeah, I wish there was a different way to do it... you can do something like a project team and have the project under the team. [10:57] natefinch: yeah [11:17] can you have a team of one? 
[11:17] yes [11:18] You can have a team of 0 in Lp [11:18] sinzui: I was referring to github [11:19] jam: I think I may have a complete fix with tests [11:19] thumper: yes, you can have a team of 1 on github [11:19] natefinch: so I could create an org called loggo? [11:19] is it worth it? [11:21] thumper, github is similar to Lp, no one is in the team when it is created, you add members. [11:21] thumper: up to you... you'd have github.com/loggo/loggo [11:21] meh [11:22] except they call it an organisation [11:22] make the team ~loggo :) [11:22] heh [11:22] ACCEPT THE TILDE [11:22] probably an invalid character [11:23] yeah it's just alphanumeric and dashes [11:24] * thumper twiddles fingers while the local lxc downloads and the amazon one spins up [11:24] * thumper proposes while he waits [11:27] ok... local provider still works with my change [11:27] * thumper waits for ec2 [11:27] why working so late, thumper?\ [11:27] natefinch: critical bug around lxc and maas [11:27] natefinch: they want it for the ODS demo [11:28] and sinzui is waiting on it to cut 1.16.2 [11:28] thumper: ahh, bummer [11:28] sinzui: care to land a branch that increments the version number on lp:juju-core/1.16 [11:29] thumper, oops, I will do that, thank your for reminding me [11:30] sinzui: please make sure you increment scripts/win-installer/setup.iss as well. It needs to get updated for the windows installer. [11:30] (it's just a text file, the version number is at the top, you'll see it) [11:31] natefinch, oh. I've got no notes on that. Have you been doing it each time you run it? [11:31] sinzui: yeah, I've just been modifying it when I create the installer, but obviously that's not a good habit to keep up [11:31] sinzui: I'm considering just making script that'll update all the right spots. [11:36] \o/ [11:36] works on ec2 [11:36] have ubuntu deployed into 1/lxc/0 [11:42] jam, fwereade_: https://codereview.appspot.com/20220043/ [11:42] and dimitern if he wants [11:42] this is to fix lxc on maas for the demo [11:42] makes it so we don't get an environ config for lxc provisioners [11:43] thumper, looking [11:43] local provider works [11:43] and ec2 was able to provision a container [11:43] and deploy the ubuntu charm into it [11:44] so we know it at least works on something [11:44] I wish I knew how to make DeployLocalSuite.TestDeployNumUnits pass on my computer [11:44] I really want to hand this off as it is almost 1am [11:50] thumper, reviewed [11:52] dimitern: ta [11:53] sinzui: about to land my 1.16 branch [11:53] sinzui: wallyworld_ landed the azure fixes already [11:53] I saw [11:53] sinzui: once mine is in, you should be good for 1.16.2 [11:53] rock [11:53] Maybe I wont purse 1.16.1. It has not built yet [11:54] sinzui: however I'm about to go to bed [11:54] hopefully all lands ok [11:54] how long does it normally take? [11:54] and who can see if it is in progress? [11:58] thumper, jamespages has the juju release scripted. He builds the new package and send it off to Ubuntu and the PPA builders in on go. [11:58] He emails me when he has run the script [11:58] sinzui: I was really referring to the landing bot [11:58] 15 minutes [11:58] I think I'll wait just to be sure [12:00] natefinch, Is this right https://codereview.appspot.com/20230043 ? [12:04] sinzui: great. LGTM'd [12:42] jamespage, are you about? [14:12] fwereade_: do we have a list of the client commands that need to be API-ified? 
I figured I'd knock one or two out [14:40] natefinch, we have a list of the ones that don't, and there are 2 of them [14:41] fwereade_: haha ok [14:42] fwereade_: what uses the API now, so I can see how it should be set up? [14:42] fwereade_: nevermind, looks like add unit does (I thought I remembered that one did) [14:43] natefinch, I think "get" does too [14:44] fwereade_: figured I'd start on status, unless you have another suggestion [14:44] natefinch, yeah, let me chat to you about it after meeting [14:45] fwereade_: sure. I'm going to be AFK in about a half hour (for about an hour) [15:03] natefinch: status is going to be much bigger than some of the other ones, and might involve some checking to make sure we are getting the passwords set on first connect, etc. [15:03] something like "set" would be much easier if you want to get your feet wet [15:10] jam: that's fine, I can do set to start :) [15:27] the env vars that get set in a running hook - like: CHARM_DIR and JUJU_UNIT_NAME - where abouts are they actually set in the code? [15:36] mattyw, uniter/context.go [15:36] fwereade_, cool thanks, think I just found it [15:52] fwereade_: do you know any way of configuring the local provider so that it doesn't download a whole precise image when a precise charm is started? [15:53] fwereade_: (at least, i *think* that's what it's taking ages to do - i'm not sure if there's any way of telling at all) [15:55] rogpeppe, I in't think we grabbed the whole thing every time [15:56] fwereade_: i'm not sure how to tell what it's doing [15:56] fwereade_: will we always grab the whole thing in a newly bootstrapped local env? [15:57] rogpeppe, I believe that is currently the case, but I have not looked at the code in question myself [15:57] fwereade_: ah, sorry, i thought you were the one that reviewed it [16:00] fwereade_: so no way to avoid that first download then. hmm. [16:05] fwereade_: another small question: is there a good place for a charm to stash local state that *doesn't* get managed under git? for example, a unix-domain socket or something that should not be rolled back on failure? [16:06] rogpeppe, there's no canonical location, no [16:09] fwereade_: i guess $CHARM_DIR/../charm-state might work ok, although it's not great. [16:09] rogpeppe, eww? name it after the charm outside the juju data-dir I would think? [16:09] fwereade_: or just make up a name in /var/lib which includes the env uuid and the unit name [16:10] rogpeppe, +1 [16:13] hey all: thumper's blog post about his logging library for Go is now up on hacker news: https://news.ycombinator.com/item?id=6643805 please upvote it if you think it is interesting, and feel free to comment on hacker news if you think you have something to add. [16:36] mramm: if I could get to hacker news, I'd upvote it [16:41] what is the proper way to upgrade an environment using a custom built juju and jujud binary pair? someone showed me last week but i've forgotten [16:47] adam_g: the magic is simply to have the juju you run to do the upgrade have the matched jujud as a sibling in the directory [16:47] this is very magic, but hey go hey go go [16:47] thats what i thought [16:48] adding the --upload-tools flag seems to skip going to s3 [16:52] natefinch: problems with hackernews? thumper had them yesterday too. 
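(Relating to the exchange earlier in this stretch about where a charm can stash runtime state, such as a unix-domain socket, that should not sit under the git-managed charm dir: a hypothetical hook-side sketch. The directory name is made up; fwereade_'s suggestion was a path under /var/lib built from the environment uuid and unit name, and only JUJU_UNIT_NAME is taken here from the hook variables mentioned above.)

    # hypothetical hook snippet -- the path layout is an assumption, not juju policy
    STATE_DIR="/var/lib/my-charm-state/$JUJU_UNIT_NAME"
    mkdir -p "$STATE_DIR"
    SOCKET="$STATE_DIR/daemon.sock"   # lives outside $CHARM_DIR, so git rollbacks leave it alone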
:( [16:52] mramm: looking online, it's a DNS propagation issue [17:06] juju-devs: has anyone seen this type of error with Juju & RadosGW [17:06] caused by: failed unmarshaling the response body: bootstrap-verify [17:07] sodre_zzz: can you manually fetch that object from the provider storage and see what it contains? === sodre_zzz is now known as sodre [17:07] ...what provider even is that? [17:07] OpenStack, but Swift is served by ceph-radosgw [17:08] so, `swift list` to get CONTAINER, then `swift get CONTAINER bootstrap-verify` [17:10] alright... one sec. [17:13] wallyworld_, wasn't there some facility you mentioned about auto upload-tools? [17:13] ahhh.. the error is different! [17:14] mgz: it says juju was unable to list the contents of the container. [17:14] sodre: looking in ~/.juju/environments/ENV.jenv where ENV is the name of your environment will also tell you if you find the 'control-bucket' key at the bottom [17:14] Let me pastebinit [17:15] wasn't there a but where juju is expecting to see a .json object when listing contents in swift? [17:15] s/but/bug/ [17:15] sodre: we might not be parsing the error response correctly [17:15] so, you're seeing a follow-on error rather than the underlying cause [17:15] sodre: can you file a bug against juju-core for this please? [17:16] will do, but can you take a look at the pastebin first ? [17:16] I need to transfer back home now, so would like to pick up later [17:16] sodre: sure, fast :) [17:17] http://paste.ubuntu.com/6336731/ [17:17] this is with Havana, fyi. [17:19] sodre: yeah, looks like we're just expecting json and getting plain text [17:20] yeap, the error is in goose [17:20] there's a bug for this (or something very related) against goose already [17:20] yeap, that's what I thought. [17:20] if you can find that and add more details, that would be ace [17:20] i'll search for that. [17:20] thanks mgz [17:20] I've already got that bug on my list to look at [17:32] bbl === paraglade_ is now known as paraglade [17:44] jam: are you around >? [17:44] I have quick goose patch/bug [17:47] sodre: it's pretty late for Jam, almost 10pm. Anything that I could help you with? [17:53] sure [17:53] the current version of goose/swift.go [17:54] it requests a list of entries in a container but forgets to set the format to json [17:55] the patch is very short, http://paste.ubuntu.com/6336923/ [17:55] natefinch, what do you think ? [17:57] sodre: looks good, except you need to gofmt your source. You have two spaces where the standard format is a tab. [17:57] got it, how is that done ? I am new to go [17:58] go fmt it'll rewrite the file [17:59] http://paste.ubuntu.com/6336944/, where should I send this to? [18:00] so, in theory, you should branch lp:goose, make your change, and then propose the branch for merging to the lp:goose trunk [18:00] got it [18:01] sodre: if you have credentials etc on launchpad, it's not too hard [18:01] I am new to Opensource collaboration... [18:01] I have an account on lp. but I've never contributed anything other than bug reporrts. [18:02] can you quickly walk me through it or point me to docs with the steps. [18:03] sodre: I'm pretty new to the launchpad process too. Let me see if I can find the steps somewhere [18:03] okay. [18:07] sodre: ok, so presuming you started off by doing a bzr branch lp:goose to get your local copy.... 
you can do this to commit and propose your change (from the root directory of goose on your local drive): [18:07] bzr commit -m 'commit comment' [18:07] bzr push lp:~sodre/goose/ [18:07] bzr lp-propose lp:goose [18:09] cool [18:09] I'm filing the bug report as well. [18:09] awesome [18:10] sodre: thanks for the bug and for the fix. It's really a huge help even for little fixes like this. [18:10] brb [18:10] np. I am just trying to get OS running here at work. [18:32] TIL: when it looks like everyone else on freenode has quit at the same time... probably it's you that has actually quit [18:34] sodre: I missed anything you might have said after you said you were just trying to get OS to work. [18:35] I didn't say anything :) [18:35] I just came across the bug because I need to get OpenStack running where I work. [18:53] natefinch, will you have time today to create a a juju 1.16.2 windows package and file an Rt to get it signed? [19:01] sinzui: sure will. Hopefully this time the RT will actually be acted upon [19:02] natefinch, send me the rt number and I will track it. I am happy for a delay since I never know when the builders will complete [19:02] sinzui: ok [19:17] sinzui: you probably will get an email about it, but here's the link: https://rt.admin.canonical.com/Ticket/Display.html?id=65618 [19:18] thank you natefinch. I will add that to my task list for 1.16.2 [19:19] sinzui: no, thank you. I just want to make sure the windows installer stays available at the same version as the other platforms [19:20] This one has to get to the users since they are the group that most likely will use juju on azure. [19:43] morning thumper [19:43] morning natefinch [19:44] * thumper feels a little knackered [19:44] seems like you were just here :) [19:44] :) [19:44] yeah [19:44] I know that feeling [19:44] however I also have three kids still in bed [19:44] instead of at school [19:44] the event we went to last night was a lot later than I expected [19:44] didn't get home until about 22:30 [19:44] thumper: doh [19:44] which is way late for the youngest two [19:45] if I knew it was going to be that late, I would have organised a babysitter [19:45] * thumper shrugs [19:45] oh well [19:45] how old are your kids? [19:45] 8, 10 and 12 [19:45] it was a "world of wearable arts" show for the eldest [19:45] sounds like fun [19:46] it was quite good, but went on and on [19:46] you can really tell which outfits the kids did themselves, and which parents helped on [19:46] :) [19:46] haha yeah.... [19:47] thumper: I had some comments on your proposal, they aren't critical but it might be good to get them in if sinzui is going to be doing a 1.16.2 soon [19:47] jam, the tarball was sent [19:48] well then, I guess it doesn't matter :) [19:48] I might be convinced to make another if there are zero downloads of it [19:48] sinzui: I'm not a big fan of reusing numbers regardless [19:49] jam: I may not worry too much for the 1.16 branch, but may take them into account on the move to trunk [19:49] me neither. This is case where the one packers we are delivering too is traveling [19:49] sinzui: how dare he? [19:49] travel when we need him [19:49] geez [19:50] SPoF [19:50] busfactor violation [19:50] true that [19:51] sinzui: I *thought* that when he set up the super special PPA he did give you access. At least, I listed a short list of names [19:51] with you on it [19:52] I don't see any new PPA on my list... 
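(Putting the whole contribution flow natefinch walked sodre through in one place, as a sketch; the branch name and commit message are placeholders.)

    bzr branch lp:goose && cd goose
    # make the one-line change to swift.go, then let go fmt fix the whitespace
    go fmt ./...
    bzr commit -m 'set format=json when listing container contents'
    bzr push lp:~sodre/goose/fix-container-listing
    bzr lp-propose lp:goose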
[19:52] oh, let me check teams [19:53] thumper: if you get time today, I have a couple branches that could use some reviewing: ec2 instance constraints: https://codereview.appspot.com/14523052/ modification to destroy-environment to require environment name: https://codereview.appspot.com/14502070/ [19:53] I don't know if that team actually got set up, but we did discuss it [19:53] natefinch: ok, I'll add to the list :) [19:53] jam, thank you and your elephantine memory. I was added to a team 8 days ago [19:54] * sinzui starts scripting [19:54] natefinch: so this is "juju destroy-environment amazon" [19:54] thumper: correct [19:54] cool [19:54] juju destroy-environment production [19:54] oops [19:54] that's basically the only change from current behavior [19:55] I did add an exclamation point to the warning message, too. [19:57] :) [19:57] nice [19:57] forgot there was a tweak to scanning the input... the code that was there before was actually completely borked. It happened to work, but also returned an error we were ignoring [20:07] bcsaller, do you have some time to join us @ https://plus.google.com/hangouts/_/calendar/YW50b25pby5yb3NhbGVzQGNhbm9uaWNhbC5jb20.tj9cngmc3p25r5nvbdthk5up8c [20:14] sinzui: have you already fixed 1209003 [20:18] sodre, no, no work has been done on it yet [20:19] sodre, sorry, work has started on it and iI need to target it [20:21] okay. I've placed the patch on a branch for review. It is one-line. [20:22] I found the bug through a different route, so I did not notice it was duplicate of your bug until a few minutes ago. [20:22] sinzui: good news, adam_g confirmed my work on maas, it makes lxc containers work (FSVO work) [20:23] I hit the mysql+apparmor bug while testing local provider today. I almost stalled the releases. I am glad I remembered the bug [20:31] sinzui: what's that bug? [20:34] thumper, https://bugs.launchpad.net/juju-core/+bug/1236994 [20:34] <_mup_> Bug #1236994: Mysql doesn't deploy on local provider due to apparmor [20:34] wha? [20:34] it used to work, what broke? [20:37] thumper, I think the issue is intermittent. I think I deployed that last night when I confirmed my local provider worked [21:05] sinzui, I am now - I guess you need me to push 1.16.2 everywhere right? [21:08] I'm outta here. night all [21:08] night natefinch [21:29] sinzui, don't expect armhf for quantal and raring btw [21:30] we are missing the dpkg magic fix in those series right now [21:30] quantal is dead to me [21:31] maybe we should stop releasing quantal packages. I think users have had 6 months to get to raring [21:32] sinzui, not really - raring expires in 2 months [21:32] :-) [21:32] 9 month support these days [21:32] sinzui, I'd prefer we keep publishing - I still have to test using juju-core for those series [21:32] yeah, but I think juju-core has tried to give users an extra 3 months. [21:32] but just not fuss about armhf [21:33] sinzui, where do you want to pull trusty binaries from? I've not uploaded to ppa for that series? [21:34] I can do - the distro package will supercede anyone actually running on trusty once it lands [21:35] sinzui, jamespage: FWIW +1 on not caring about quantal and raring armhf [21:36] jamespage, this is the script we use for releases and CI. We pull from 3 archives, including Ubuntu: http://pastebin.ubuntu.com/6337941/ [21:37] jamespage, I think trusty will work. I need to switch to Lp API to get packages from the new archive. 
[21:37] sinzui, should do [21:37] I'll leave it off the backports like last cycle then [21:37] sorry lines 197 start where I define the archives to search [21:48] sinzui, OK - its nearly baked - everything is built - just waiting for LP to publish to ppa.launchpad.net [21:49] fab. I am rushing to finsh my script while getting children ready for Halloween [21:49] sinzui, oh fun [21:53] sinzui, bear in mind that http://archive.ubuntu.com/ubuntu/pool/universe/j/juju-core/ won't get the armhf bits [21:54] ugh [21:54] sinzui, trying to remember where they appear [21:54] its ports.ubuntu.com or something [21:55] sinzui, http://ports.ubuntu.com/pool/universe/j/juju-core/ [21:55] jamespage, will we do this for the final release of trust? [21:56] sinzui, sorry - do what? [21:56] * jamespage thinks his brain is not quite working right [21:57] will armhf always be in ports for trust? [21:58] sinzui, yes [21:58] I think so [21:58] sinzui, might be easier to just pull everything from the PPA [21:58] My mind has left the building. I look forward to a walk in the dark with people dressed weirder that me [22:02] Do my eyes deceive me, I think everything is int eh staging ppa [22:05] sinzui, yes - its all done [22:06] sinzui: should I change #1209003 to Fix Commited ? I have placed a pull request already. [22:06] <_mup_> Bug #1209003: juju bootstrap fails with openstack provider (failed unmarshaling the response body) [22:19] man... [22:19] I'm normally a bit drained on Fridays due to the late meetings [22:20] but actually coding after the meetings to fix the critical bug as knocked me out [22:20] * thumper turns on the coffee machine [22:20] sinzui, I'm dropping for today [22:20] sinzui, if you need me todo anything tomorrow am my time ping me a mail [22:20] Thank you for your time jamespage [22:21] sinzui, np - ditto [22:24] thumper: a good coffee fixes everything [22:43] * thumper looks for wallyworld_ [22:44] yes? [22:44] hey [22:44] ho [22:44] got time to chat? [22:44] ok [22:45] https://plus.google.com/hangouts/_/72cpimmkjidp1clo6ipgc9gg90?hl=en [22:45] is bootstrapping an environment /w 1.16.2-precise-amd64 expected to pull 1.16.0 tools or 1.16.2 down from S3? [22:45] adam_g: I believe it matches on major and minor [22:45] adam_g: to be sure --upload-tools [22:46] thumper, seemed to have only matched major. ill roll with --upload-tools for now === thumper is now known as thumper-afk [23:41] http://askubuntu.com/questions/369263/juju-bootstrap-fails-with-temporary-failure-in-name-resolution-using-amazon-aw [23:41] did we break a bucket?
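(A closing sketch of the workaround adam_g landed on above: when the published tools don't match the client, here a 1.16.2 client finding only 1.16.0 in the bucket, bootstrapping with --upload-tools uploads the jujud sitting next to the locally built juju binary instead of fetching anything from S3. The environment name is a placeholder.)

    juju bootstrap -e my-maas --upload-tools --debug
    juju status -e my-maas      # agents should now report the locally built version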