=== CyberJacob is now known as CyberJacob|Away
=== melmoth_ is now known as melmoth
=== zz_paulczar is now known as paulczar
=== paulczar is now known as zz_paulczar
=== zz_paulczar is now known as paulczar
=== paulczar is now known as zz_paulczar
=== jam3 is now known as jam
=== thumper is now known as thumper-afk
=== stub` is now known as stub
[06:07] is juju none and juju local supported in saucy
[06:07] I have the feeling it hasn't been really tested
[06:09] probably the ppa is better tested; 'none' is quite new, 'local' is better tested..
[06:14] ok doing the opposite local -> "null" works much better
[06:15] stupid question, can I install charms on the bootstrap machine ?
[06:17] ekacnet: using 'local', yes; probably 'null' can do it, but I've not tried that
[06:22] can someone explain to me what functions the bootstrap juju machine does? It seems like the VM image comes from the cloud system (like MaaS). Does the bootstrap machine serve configs? Help set up relations between charms or something?
[06:29] 2013-10-31 06:29:22 ERROR juju.tools list.go:113 cannot match tools.Filter{Released:false, Number:version.Number{Major:0, Minor:0, Patch:0, Build:0}, Series:"saucy", Arch:"amd64"}
[06:30] bootstrapping "null" didn't seem to work too well
=== CyberJacob|Away is now known as CyberJacob
=== mgz is now known as mgzh
=== sodre is now known as sodre_zzz
=== allenap_ is now known as allenap
=== thumper-afk is now known as thumper
[13:37] evilnickveitch, I found broken local examples
[13:37] MP incoming
[13:49] ok. stupid question.
[13:49] juju deploy --repository... local:my-charm
[13:49] charm sucked. want to undeploy and try again.
[13:49] can i delete ?
[13:51] (other than with juju destroy-environment)
[13:53] smoser, juju destroy-service my-charm
[13:53] smoser, juju terminate-machine whatever-machine-it-was-on
[13:53] $ juju destroy-service maas-region
[13:53] $ echo $?
[13:53] 0
[13:54] but juju status still shows the service
[13:54] smoser, it takes a while to die, juju gives the charm a chance to clean up.. notice in status the lifecycle: dying
[13:54] smoser, a while normally being a minute or less
[13:54] yeah, you're right.
[13:54] k
[13:55] smoser, juju-deployer -T automates this a bit, but terminating all services, units, machines, clearing error states waiting etc.
[13:55] but its whole hog atm rather than targeted
[13:55] right. thanks hazmat
[13:56] hazmat, in
[13:56] juju deploy --repository ~/charms/ local:maas-region
[13:56] why do i have to say "local:" ?
[13:57] i would have thought that was defined with --repository=
[13:57] smoser, i had that argument once upon a time..
[13:57] with the same perspective..
[13:57] but the notion was you could also define --repository via JUJU_REPOSITORY
[13:57] and then its rather innocuous what you might end up getting .. or define an alias with --repository
[13:58] oh well.
[13:58] ie. charm store as default, and local as explicit, since repository acquisition isn't always clear.
=== zz_paulczar is now known as paulczar
[13:59] is debug-log known broken with local provider ?
[13:59] or is it just me
[13:59] $ juju debug-log
[13:59] tail: cannot open `/var/log/juju/all-machines.log' for reading: No such file or directory
[13:59] Connection to 10.0.3.1 closed.
[13:59] ERROR exit status 1
[13:59] smoser, its broken
[14:00] smoser, there's a couple of other places files for local provider can be found .. typically in $JUJU_HOME/$ENV_NAME/log
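(A minimal sketch of the undeploy-and-retry cycle smoser and hazmat walk through above; the service name and machine number are the ones from their example, and the wait loop is only illustrative -- juju itself simply leaves the service in "dying" until its cleanup hooks finish.)

    # tear down the broken local charm deployment
    juju destroy-service maas-region
    # wait for the dying service to disappear from status (usually under a minute)
    while juju status | grep -q maas-region; do sleep 10; done
    juju terminate-machine 1
    # fix the charm, then deploy it again from the same local repository
    juju deploy --repository ~/charms/ local:maas-region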
[14:01] smoser, container/console.log go to /var/lib/juju/containers/$container-name
[14:01] the jhome log files are for the agent
[14:01] smoser: also, when you type juju deploy mysql juju-core is actually expanding that to "cs:precise/mysql" so when you don't explicitly list the protocol it'll be expanded to use the charm store URL
[14:01] marcoceppi, yeah, that is weird behavior.
[14:02] but i dont care to argue
[14:02] marcoceppi, right.. but that expansion could just as easily switch when specifying a repo..
[14:02] (lots of times i *do* care to argue, but not now)
[14:02] :-)
[14:02] smoser, i'd liked how solomon put it yesterday ... keep scott happy.
[14:03] hazmat: it could, but I feel it's a rather small point. Juju could actually warn about ambiguous source
[14:03] juju deploy --repository=. (or JUJU_REPOSITORY is set) mysql
[14:03] Ambiguous deploy statement, did you mean local:mysql?
[14:04] marcoceppi, it just feels redundant to double specify repo... i know the reasons why its not that way though.
[14:04] er.. local charm
[14:04] hazmat: right, it does seem a bit redundant
[14:04] but is more consistent the way it is
[14:04] ^it
[14:04] hazmat: maybe we should just drop the notion of protocol-less deploy statements though, just `juju deploy cs:mysql`
[14:05] marcoceppi, given a sane default, no real reason to
[14:05] hazmat: also, with this bug, it could cause a lot more confusion when not specifying local
[14:06] hazmat: https://bugs.launchpad.net/juju-core/+bug/906008
[14:06] <_mup_> Bug #906008: way to set default charm repository
[14:07] we had that in the docs for a while before we realized that doesn't actually work
[14:07] marcoceppi, interesting
[14:07] marcoceppi, JUJU_REPOSITORY is the only way to do that.. not sure how that came about otherwise..
[14:08] * hazmat contemplates bzr blame
[14:08] hazmat: I think evilnickveitch tracked it back to jorge putting that in there
[14:08] but that could have just been from information he got from someone else
[14:08] indeed
[14:08] I got all excited, and thought I could use it to run my own charmstore. I was sadly mistaken
[14:09] i forget how much i dislike html docs till i try to write some
[14:12] hazmat: I think you just made nick rage quit ;)
[14:12] marcoceppi, hah, the scalable mediawiki bundle I just made needs a minimum of 5 nodes
[14:12] go big or go home!
[14:14] ok. another question.
[14:14] i really suck, and i keep making stupid mistakes.
[14:15] and juju deploy .... service
[14:15] then install fails
[14:15] how do i iterate that quickly?
[14:15] juju destroy-service maas-region && juju terminate-machine 1
[14:15] is quite a slow iteration
[14:15] and it wont let me deploy the service again until its out of 'dying'
[14:16] https://juju.ubuntu.com/docs/authors-hook-debug.html
[14:16] juju please-die
[14:16] that helps you find the error
[14:16] and then you can rerun the hook right from tmux
[14:16] marcoceppi, I want to document the lxc-no-network-because-of-stale-cloud-init problem. Do I add a troubleshooting section to https://juju.ubuntu.com/docs/config-local.html
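(A sketch of the hook-debugging loop pointed to above and spelled out by marcoceppi a few messages further down; the unit and hook names are only examples.)

    # terminal 1: attach to the failed unit and wait for tmux to come up
    juju debug-hooks maas-region/0
    # terminal 2: re-queue the failed hook so it fires inside the tmux session
    juju resolved --retry maas-region/0
    # back in the tmux hook window: run the hook by hand, edit, repeat until it passes
    ./hooks/install
    # hook tools such as juju-log only resolve in the hook windows, not the main
    # tmux window, because only those windows carry the hook environment (and its PATH)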
[14:17] sinzui: I've got that recorded and I'm sending the notes over to jcastro to make a comprehensive local troubleshooting guide as that's one of many things that have fouled up local provider users
[14:17] sinzui: not sure where it will live
[14:18] thanks jcastro
[14:18] I usually just try to rerun the install hook again
[14:18] and see what errors out
[14:19] smoser: yeah, typically I open juju debug-hooks in one terminal window, wait until tmux pops up, then run juju resolved --retry in another terminal, that'll cat the hook that failed and you can run `hook/` in the tmux 1 window over and over, make edits to the hooks live, go nuts until its fixed
[14:19] smoser: Just make sure to copy your changes to your local machine/repo
[14:21] marcoceppi, so charmhelpers/core/hookenv.py
[14:21] wants juju-log
[14:21] command
[14:21] but not found ?
[14:23] my PATH doesn't seem set up. is that something i missed ?
[14:23] smoser: that's...odd, juju-log should definitely be a command
[14:23] /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[14:23] doesn't contain
[14:23] /var/lib/juju/tools/1.16.0.1-precise-amd64/
[14:23] smoser: wait, what window in the tmux session are you in right now?
[14:24] ah. main one.
[14:24] smoser: yeah, that doesn't have hook environment vars
[14:28] marcoceppi, thanks.
[14:28] all. thanks.
=== paulczar is now known as zz_paulczar
[15:04] jcastro: local provider troubleshooting guide draft I had: http://paste.ubuntu.com/6336111/
[15:10] ON IT
[15:10] hey
[15:11] are you done with it?
[15:11] I was thinking getting it up and marked up asap
[15:11] and then iterating fast, but in the docs themselves
[15:11] instead of pads and pastebins
=== zz_paulczar is now known as paulczar
[15:37] https://code.launchpad.net/~jorge/juju-core/troubleshooting-local-provider-docs/+merge/193444
[15:37] marcoceppi, I added mgz's tip from yesterday on blowing away those .jenv files
[15:38] jcastro: right, hopefully that bug will be fixed soon
[15:38] jcastro: but good point
=== freeflying is now known as freeflying_away
=== med_ is now known as med
=== med is now known as med_
[16:13] hey all: thumper's blog post about his logging library for Go is now up on hacker news: https://news.ycombinator.com/item?id=6643805 please upvote it if you think it is interesting, and feel free to comment on hacker news if you think you have something to add.
[16:32] sinzui: https://code.launchpad.net/~adeuring/charmworld/more-heartbeat-info-2/+merge/193454
[16:33] thank you adeuring
[16:34] marcoceppi, Do you want charms to have a saucy series so that bigjools can publish his tarmac charm
[16:34] marcoceppi, Do we want to create a trusty series this week?
[16:34] sinzui: yeah, I was under the impression that we had a series for each release so far
[16:34] sinzui: please open both
[16:35] using a custom built juju/jujud binary set to bootstrap a new environment, is there some way of confirming the resulting env is using the correct version? is it normal for it to fetch tools from s3 if using the local binary set?
[16:36] sinzui: as well, if future releases could have trusty tools, I'd like to start doing some light charm testing for the trusty series
[16:39] marcoceppi, okay, I wont promise them today as I want 1.16.2 behind me
[16:40] sinzui: understood, but we should start taking steps to open the series up now
[16:40] thanks!
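(The "blowing away those .jenv files" tip jcastro mentions above, sketched out; the path is the one mhall119 gives later in the log, and the environment name is only an example.)

    # a destroyed local environment can leave a stale cached .jenv behind,
    # which makes juju think the environment still exists
    rm ~/.juju/environments/local.jenv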
=== paulczar is now known as zz_paulczar
=== sodre_zzz is now known as sodre
[17:21] 2013-10-31 06:29:22 ERROR juju.tools list.go:113 cannot match tools.Filter{Released:false, Number:version.Number{Major:0, Minor:0, Patch:0, Build:0}, Series:"saucy", Arch:"amd64"}
[17:21] should I file a bug in launchpad for this ?
[17:21] when my juju version is obviously not 0.0.0
[17:24] ekacnet: are you trying to bootstrap the null provider and the remove machine is a saucy machine?
[17:24] s/remove/remote/
=== paraglade_ is now known as paraglade
[18:04] jcastro: marcoceppi: where is my LXC instance getting http://ubuntu-cloud.archive.canonical.com/ubuntu/ precise-updates/cloud-tools/main i386 Packages
[18:04] from?
[18:05] mhall119: cloud-init probably
[18:05] it's pulling in django 1.5, where vanilla precise only has django 1.3
[18:06] is that going to always be there when I juju deploy, even to Canonical's internal clouds?
[18:07] ugh
[18:07] that's something that shouldn't happen
[18:07] jamespage, we probably need django for ... maas?
[18:08] https://bugs.launchpad.net/cloud-archive/+bug/1240667
[18:08] <_mup_> Bug #1240667: Version of django in cloud-tools conflicts with horizon:grizzly
[18:08] jcastro: ^^
=== zz_paulczar is now known as paulczar
[18:08] probably same issue
[18:10] yep
[18:11] sinzui, how do I nominate that bug for a backport fix?
[18:11] I mean, once it's fixed how do I say "we should have that in the stable release please" vs. 1.17+ only
[18:13] jcastro, does the bug have a nominate for series link for you?
[18:13] I guess not because the project doesn't have a bug supervisor
[18:14] jcastro, which bug? and can the fix wait 2-3 weeks for 1.18
[18:14] are we backporting 1.18 to saucy and friends?
[18:14] https://bugs.launchpad.net/cloud-archive/+bug/1240667
[18:14] <_mup_> Bug #1240667: Version of django in cloud-tools conflicts with horizon:grizzly
[18:15] so in mhall's case, he's deploying django internally, and pulling in non-precise django sucks.
[18:15] jcastro, well backport is the wrong word...Ubuntu only makes dev packages. no one is using trusty. The juju project makes all packages to supported ubuntu series
[18:16] Oh that fucking bug
[18:16] yeah so I don't care about trusty
[18:16] I care about people inadvertently deploying django from the cloud archive on 12.04 instead of what they expect
[18:17] It's hard to suppress my visceral disdain for that bug. It is not possible to deploy charmworld without some really arcane hacking
[18:18] jcastro, the good news is that bug will be fixed for everyone about the same time.
[18:18] I targeted it to 17 because we need to fix the regression for everyone
[18:21] jcastro, I think the intermediate fix will be to build mongodb without libv8 or statically link it. That will prove we can create juju-db
[18:25] "build mongodb without libv8" -- what would be left?
[18:26] jcastro: sinzui: so what are my options here?
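(The workaround that comes up in the next few messages is an apt preferences pin dropped into something like /etc/apt/preferences.d/ on the unit. This is only a rough sketch of that kind of pin -- the actual file is in jamespage's comment 9 on bug #1240667, and the package name, origin, and priority here are assumptions.)

    Explanation: keep python-django coming from precise, not the cloud-tools pocket
    Package: python-django
    Pin: origin ubuntu-cloud.archive.canonical.com
    Pin-Priority: -10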
[18:26] I *think* my staging and production deployments aren't getting this, tiaz tells me they both have django 1.3.something installed
[18:26] but I have a change that needs to be made if I have 1.5, that will cause a breakage if I have 1.3
[18:28] mhall119, for my charmworld deploy, I had to ssh to the machine, add a file to pin the priority, then manually run install: https://bugs.launchpad.net/cloud-archive/+bug/1240667/comments/9
[18:28] <_mup_> Bug #1240667: Version of django in cloud-tools conflicts with horizon:grizzly
[18:28] mhall119, I don't know a graceful fix
[18:29] sinzui: unfortunately that's not an option for me, I need to give IS a charm that "just works" without additional manual intervention
[18:29] mhall119, you can fork the charm and add a step during install that adds the file described in jamespage's comment
[18:30] mhall119, IS always forks its charms, so it is not a blocker, but it is more work
[18:30] customers don't fork, though; it really looks like maas needs to be separated out from cloud-tools
[18:30] well this is my charm, so I can easily do it, I'll try jamespage's fix
[18:30] mhall119, IS never deploys from the juju-store. they deploy the versions that they know
[18:30] sinzui: I know
[18:31] private clouds do fork because they have not vetted upstream changes. they require repeatability with every deploy
[18:32] sinzui: yes and no; they stage, but they expect upstream fixes to be applied as they come, and don't like carrying deltas
[18:32] mhall119, My own charm is affected by this bug. jujucharms.com cannot be setup in production without gross hacks. Let's hope it doesn't fall over
[19:23] jcastro: I have a confession, I am starting to like juju
[19:24] mhall119: that's how we get you. We make you go through trials and tribulations so that when you come out on top after weeks of grueling pain and tears, you develop stockholm syndrome and grow to love the tool you once despised
[19:24] ;)
[19:24] lol
[19:25] "If I still hated it, all of these weeks of work would be for nothing, so I better love it"
[19:25] mhall119: but seriously, any feedback you've got about the problems you ran into, confusing things, mail jorge about them (or me, or the list). We'd love to make the developer experience wayyy better
[19:26] I think it's all been bugs you already know about, or else things specific to IS's requirements
[19:26] stuff like LXC containers not being cleaned up on destroy-environment seems to be fixed now
[19:27] I've had to delete ~/.juju/environments/* a couple of times
[19:27] not sure what the cause was or if there's a bug for that
[19:27] mhall119: yeah, that's a recent bug that's been reported now
[19:27] I've got that stuff in the LXC troubleshooting page
[19:27] it's in the doc MP queue if you wanna review and commit and can't wait for nick to get to it
[19:28] mhall119: any ideas on how to make writing a charm easier would be cool too
[19:28] I'd file a bug that I need more RAM and an SSD on my laptop, but I think that'll be won't-fix'ed
[19:29] marcoceppi: unfortunately I had to base my charm off an existing IS django charm because of their way of doing things, instead of using the juju tools
[19:29] mhall119: gotchya
[19:29] cool
[19:29] jcastro: I'll review and merge the troubleshooting when I get back from lunch
[19:29] marcoceppi: having a safe, easy way to say "call this other hook" from within a hook would be really useful
[19:30] since I find myself having very small hook scripts that call very large functions in common.sh
[19:30] mhall119: what's wrong with just calling "hooks/" from the hook? Or are you referring to being able to call relation hooks out of band?
[19:30] yeah
[19:31] so initially I was wanting to re-generate my db credentials file whenever I put a new bzr branch of my code on the instance
[19:31] rather than keeping the file in a common space and symlinking it in to new code directories
[19:32] which meant I needed to run the same code in db-relation-joined/changed in my config-changed or upgrade-charm hooks
[19:33] maybe it just needs to push the idea of putting all of the code that does stuff into common files, and having light-weight hook files that just call into it
[19:33] mhall119: I'd instead put the code in config-changed, have the db-relation hook be a very small file that records the data it needs to a file like .db-creds then have it call config-changed which has logic to look for a .db-creds file and build the config
[19:33] mhall119: that's actually a best practice for charms
[19:34] mhall119: to have hooks that are just essentially stubs that call out to shared/common functions. Then you can like unit-test those functions to add a level of sanity checking to charms
[19:34] marcoceppi: anything you guys can do to make that more obvious and easier would be nice
[19:34] mhall119: which, subbing hooks or centralized logic?
[19:35] well they kind of go hand-in-hand
[19:35] mhall119: the next version of charm-tools will generate a more opinionated template that includes that idea in it
[19:37] marcoceppi: providing tools that let you say something like "/srv/db.conf is the source for ${variablepath}/db.conf, so whenever variablepath changes automatically copy/symlink from the source to the new path"
[19:37] but that'll make juju a lot more complicated, I think
[19:38] marcoceppi: is there any way to make a custom hook?
[19:38] like "juju restart-gunicorn api-website"?
[19:38] mhall119: you can make whatever you want in the hooks/ directory. However, there's no way to trigger it unless from another hook
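(A bash sketch of the stub-hook pattern marcoceppi describes above -- the relation hook records its data and then hands off to config-changed. File, function, and relation key names are illustrative, not from any particular charm.)

    #!/bin/bash
    # hooks/db-relation-changed -- kept deliberately tiny
    set -e
    source "$(dirname "$0")/common.sh"
    # record what this relation told us, then let config-changed rebuild everything
    save_db_credentials
    exec "$(dirname "$0")/config-changed"

    # hooks/common.sh (fragment)
    save_db_credentials() {
        # CHARM_DIR is set by juju in the hook environment
        {
            echo "DB_HOST=$(relation-get host)"
            echo "DB_USER=$(relation-get user)"
            echo "DB_PASSWORD=$(relation-get password)"
        } > "$CHARM_DIR/.db-creds"
    }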
[19:39] mhall119: there's work for a `juju run` command which might allow you to do that
[19:39] that would be nice
[19:40] so I broke down and wrote an "initdb" command for api-website that can be safely run over and over, since trying to make things work with just a .db_initialized state file wasn't cutting it due to django limitations
[19:43] sinzui: jamespage's apt preferences hack did the trick
[19:44] mhall119, did you fork the charm?
[19:45] sinzui: no, it's my own charm
[19:45] okay. I think we need to blog about this. I was lucky that jamespage was in the room when my charm broke
[19:45] the IS branch is the only branch
[19:50] hmm.. I set up apache2 with juju, and I get Error 403: forbidden even from the apache2 charm "machine" and my localhost. (local juju instance)
[20:28] it is not possible to halt/stop or shutdown juju on a local ?
[20:36] b1001: describe?
[20:37] well i got it running on a laptop.. and i reckon running mysql, hadoop and other charms will tear on my battery.
[20:57] b1001: you can destroy the environment
[20:59] i would have to rebuild it then :(
[21:01] b1001: not automatically
[21:01] b1001: but manually, yes
[21:01] thumper: this smells like a plugin
[21:01] heh
[21:01] thumper: care to elaborate? or is it just lxc-stop ?
[21:02] use lxc-stop -n 
[21:02] they will auto start when the machine is rebooted
[21:02] or using lxc-start -n 
[21:02] thumper: /me considers juju local-suspend, juju local-wakeup
[21:02] although I did see a bug fly past where containers were not auto starting
[21:02] \o/
[21:02] local-resume?
[21:03] oh, yeah, that's logical
[21:03] known bug with fix coming through proposed
[21:04] sinzui, that workaround needs to not be a workaround for too much longer
[21:04] thumper: what do you mean?
[21:05] b1001: the machines of the local provider are lxc containers
[21:05] you can use the lxc commands to stop and start the machines
[21:05] any documentation laying around for utilizing bundles?
[21:06] jamespage, \o/
[21:06] sinzui, OK - I'll push it to trusty first
[21:06] and then do the PPA's
[21:07] thank you!
[21:07] thumper: nice thanks.. didnt think of that.. seems to lower the load
[21:10] sinzui: mhall119 what hack is this? sounds like it's a good docs candidate
[21:11] stokachu: not yet, we have some drafts that will land when bundle support lands
[21:11] trying to use the juju manual-provision, and its trying to connect to what looks to be mongo on 37017 but mongo is running on port 27017
[21:12] marcoceppi, add a file to pin the priority of the cloud-archive when the charm is installed: https://bugs.launchpad.net/cloud-archive/+bug/1240667/comments/9
[21:12] <_mup_> Bug #1240667: Version of django in cloud-tools conflicts with horizon:grizzly
[21:12] sinzui: if jamespage has a fix landing soon, I think it'll be okay?
[21:13] context: mongodb setup by juju should be spun up on its own port as to not collide with existing default mongo installs
[21:13] then bootstrap failed to do its job :x
[21:14] this is a fresh brand new ubuntu 12.04 server
[21:14] and now juju bootstrap thinks the machine is already provisioned
[21:14] context: maybe not, what does `initctl list | grep juju` show on the server?
[21:14] how do i get it back to a non-provisioned state
[21:15] juju-db stop/waiting
[21:15] context: can you pastebin /var/log/upstart/juju-db.log ?
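(A sketch of the checks being asked for here, run on the manually provisioned machine itself; the file names are the ones mentioned in the conversation.)

    # which juju upstart jobs exist, and are they actually running?
    initctl list | grep juju
    # juju-db stuck in stop/waiting -- its upstart log usually says why
    sudo cat /var/log/upstart/juju-db.log
    # the machine agent's own log is the other place worth a look
    sudo tail -n 50 /var/log/juju/machine-0.log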
[21:15] context: I think you're getting the wrong version of mongodb, which shouldn't happen, but that's what it smells like
[21:15] error command line: unknown option sslOnNormalPorts
[21:16] context: you have the wrong mongodb version, thumper I thought manual provisioning added the cloud archive as part of cloud-init?
[21:16] thumper: or rather, who should I bug about manual provisioning?
[21:16] context: what version of juju are you running?
[21:16] marcoceppi: there is a bug for it
[21:16] I think it may even be fix committed
[21:16] and axwalk
[21:16] axw
[21:17] one sec
[21:17] thumper: awesome, so if context manually adds the cloud-tools archive prior to attempting bootstrap it should be peachy axwalk?
[21:17] oh, he's not in the room right now
[21:17] no
[21:17] yes it should be fine
[21:17] cool
[21:17] ?
[21:17] * thumper crosses fingers
[21:18] thumper: he'll need to run destroy-environment to have it listed as "not bootstrapped" right?
[21:18] juju command not on server
[21:18] heh
[21:18] * marcoceppi fears hulk smashing everything
[21:18] that doesn't work
[21:18] \o/
[21:18] there's a bug for that too
[21:18] install juju on server before bootstrap ?
[21:18] ERROR null provider destruction is not implemented yet
[21:18] haha
[21:18] context: no, but you'll need a newer version of mongodb than what's in precise
[21:18] marcoceppi: I'm pretty sure that destroy-environment for manually bootstrapped ones should be fine
[21:18] haha, fantastic
[21:19] you have to manually remove the upstart jobs
[21:19] I think that is all it really looks for
[21:19] thumper: cool
[21:19] /etc/init/juju*
[21:19] context: okay, here's what I'm going to have you try
[21:19] marcoceppi: so... document this?
[21:19] thumper: well, for destruction, yes. For bootstrapping and the cloud-tools archive if there's a fix committed it should be landing soon
[21:20] * thumper looks for the bug
[21:20] thumper: we explicitly state manual provisioning is crazy beta at the top
[21:20] :)(
[21:20] oh. didnt see that
[21:20] https://juju.ubuntu.com/docs/config-manual.html
[21:20] yeah i see the Note now
[21:20] we should make that like a hero unit or something
[21:20] yeah
[21:20] with flashing sirens and lights
[21:20] tag
[21:21] context: sorry we lured you in to a trap
[21:21] 
[21:21] that was my next suggestion ;)
[21:21] haha
[21:21] thumper: we list the bug on the docs! https://bugs.launchpad.net/juju-core/+bug/1238934
[21:21] <_mup_> Bug #1238934: manual provider doesn't install mongo from the cloud archive
[21:21] context: well if you're still game to try this out, we can try a few things
[21:22] im game
[21:23] awesome! so log in to the machine, sudo rm -f /etc/init/juju-*
[21:23] im guessing if i just updated to 13.10 it would fix crap
[21:23] done
[21:23] context: welllllllllll, I think there's a bug with manual provisioning a bootstrap that's not precise as well
[21:23] again, still under development
[21:24] yeah
[21:24] i stopped mongodb
[21:24] as well
[21:24] okay, you'll need the cloud archive next, so sudo apt-get remove mongodb-server
[21:24] sudo apt-get install -y ubuntu-cloud-keyring
[21:24] sudo add-apt-repository cloud-archive:tools
[21:24] sudo apt-get update
[21:25] now you should be able to bootstrap the machine
[21:25] * marcoceppi crosses fingers
[21:25] marcoceppi, the add-apt-repository call will install the keyring package btw
[21:25] jamespage: ah, probably should update https://wiki.ubuntu.com/ServerTeam/CloudToolsArchive
[21:26] kk attempting bootstrap
[21:26] now will i need to do this on machines in order to add-machine them? or should only need to be on the 'controller'
[21:26] context: only for the bootstrap, juju add-machine should work without needing to do this
[21:26] kk
[21:27] context: since it's only the mongodb that needs updating. The juju tools themselves are kept independently of the ubuntu archive
[21:27] yeah
[21:28] marcoceppi, updated
[21:28] i /think/ it worked. command ended happily. waiting for juju-status
[21:29] jujud running, juju-status just sitting there
[21:29] context: it'll take a few mins to get setup
[21:29] context: /var/log/juju/machine-0.log should illuminate what's going on
[21:30] kk
[21:30] whats juju written in
[21:30] golang
[21:30] oh yeah :x i remember that
[21:31] all im seeing in log is Pinger:Ping
[21:32] context: what does initctl list | grep juju show?
[21:32] db and machine-0 running
[21:33] that's a good sign
[21:34] nothing happening really cpu wise
[21:35] juju status still hanging?
[21:35] yeah
[21:36] maybe try restarting machine for giggles ?
[21:36] ¯\_(ツ)_/¯
[21:36] haha
[21:37] context: nothing else fruitful in machine-0?
[21:37] context: can you pastebin the entire machine-0 logfile?
[21:37] I can have a quick look and see if anything leaps out
[21:38] sec
[21:39] http://pastebin.com/4BWB3ghR
[21:39] i do see something regarding storage-auth-key
[21:39] i do have storage-auth-key in my environments.yaml locally
[21:42] hmm...
[21:42] ah fark...
[21:42] this is exactly the same problem that bit me with maas and lxc
[21:42] * thumper double checks
[21:42] also i noticed bootstrap-user doesn't work. i tried setting it as root and didnt work
[21:43] yep that's it
[21:43] grr!!!!
[21:43] context: this isn't going to work until we fix this bug...
[21:43] sorry about that
[21:43] * thumper goes to crack a whip
[21:43] context: can you file a bug for this?
[21:43] or marcoceppi?
[21:43] nothing i can do for now at least?
[21:43] thumper: what's the problem?
[21:43] i can, im not exactly sure what it should say though
[21:45] context: say "status for manual provider hangs"
[21:45] first charm submitted for review today, woot!
[21:45] that is your symptom
[21:45] marcoceppi: the storage-auth-key is being stripped out at the api level
[21:45] so the thing that needs it doesn't get it
[21:45] bit of a fail there
[21:46] just miscommunication between devs, and lack of a functional test suite to catch
[21:46] we are working on the test suite
[21:46] but it's not there yet
[21:47] kk filing bug now
[21:47] thumper: thanks for the info context: thanks for playing along!
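(The recovery steps from the exchange above, collected into one place; a sketch of what marcoceppi had context run on the precise machine. Note that status for the resulting environment still hangs because of the storage-auth-key bug filed just below, #1246905.)

    # on the half-bootstrapped machine: drop the stale juju upstart jobs
    sudo rm -f /etc/init/juju-*
    # precise's mongodb doesn't understand the ssl option juju needs,
    # so replace it with the one from the cloud-tools archive
    sudo apt-get remove mongodb-server
    sudo add-apt-repository cloud-archive:tools   # also pulls in ubuntu-cloud-keyring
    sudo apt-get update
    # then, from the client, bootstrap the manual/null environment again
    juju bootstrap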
[21:47] context: ta
[21:48] ;) anytime
[21:48] I haven't seen the release notes, but I think there were some improvements in 1.16.2 for manual provider
[21:50] thumper, marcoceppi: https://bugs.launchpad.net/juju-core/+bug/1246905
[21:50] <_mup_> Bug #1246905: status for manual provider hangs
[21:51] marcoceppi, this is the list for 1.16.2 - https://launchpad.net/juju-core/1.16/1.16.2
[21:51] ta
[21:52] wasn't sure if i should have put any of that log output in there but i figured you could update the ticket with more detail on what you found
[21:52] jamespage: thanks!
[21:52] * marcoceppi wonders what happened to 1.16.1
[21:53] marcoceppi: superseded by 1.16.2
[21:53] ah
[21:54] thumper: any idea when there will be a release that fixes it
[21:54] 1.17.0 looks awesome
[21:54] i know im a little antsy ;)
[21:56] context: real soon now™
[21:58] :D
=== AlanChicken is now known as alanbell
=== alanbell is now known as AlanBell
[22:03] context: if you're not already, subscribe to the mailing list. We put the release notes for all the juju related releases there as well as more in-depth project discussion
[22:09] kk, url ? :x
[22:09] context: DUH! sorry: https://lists.ubuntu.com/mailman/listinfo/juju
[22:10] kk will sign up for that in a bit. heading out for margaritas
[22:10] context: enjoy your evening o/
=== CyberJacob is now known as CyberJacob|Away
[22:44] sinzui, ping
=== paulczar is now known as zz_paulczar
=== thumper is now known as thumper-afk
=== freeflying_away is now known as freeflying
=== freeflying is now known as freeflying_away