[05:24] <hloeung> is there a way to switch back to using a local charm?
[05:24] <hloeung> I switched to use haproxy from the charmstore but can't seem to switch back
[05:25] <hloeung> $ juju upgrade-charm --switch local:xenial/haproxy haproxy
[05:25] <hloeung> ERROR unknown schema for charm URL "local:xenial/haproxy"
[05:25] <hloeung> $ juju upgrade-charm --switch xenial/haproxy haproxy
[05:25] <hloeung> ERROR already running latest charm "cs:haproxy-38"
[05:54] <hloeung> hmm, code says local: is meant to be supported
[05:54] <hloeung> https://github.com/juju/juju/blob/staging/cmd/juju/application/upgradecharm.go#L278
[06:00] <hloeung> ok worked it out, had to specify full path, so /srv/mojo/...
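(For the record: in this juju release, upgrade-charm's --switch accepts a charm store URL or a filesystem path to an unpacked charm directory, and rejects "local:" URLs with the schema error above. A minimal sketch of the invocation that worked — the path segment is a placeholder, since the real /srv/mojo/... path is elided above:
  # point --switch at the full path of the local charm directory
  juju upgrade-charm --switch /srv/mojo/<path-to-charm>/haproxy haproxy )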
[07:01] <kjackal> good morning juju world!
[07:01] <anrah> morning!
[07:14] <khyr0n> nites over here :)
[10:58] <Ankammarao> mattyw, hi
[10:59] <mattyw> Ankammarao, morning
[11:00] <Ankammarao> mattyw, still we are not able to create terms
[11:00] <mattyw> Ankammarao, what's the error you get?
[11:00] <Ankammarao> error: cannot get discharge from "https://api.jujucharms.com/identity/v1/discharger": third party refused discharge: cannot discharge: user is not a member of required groups
[11:01] <mattyw> Ankammarao, what's the name of the term you're uploading?
[11:01] <Ankammarao> mattyw, we have tried with different users who are already members of that group
[11:01] <Ankammarao> ibm-platform-ac
[11:02] <mattyw> Ankammarao, and the group you're uploading to?
[11:02] <Ankammarao> ibmcharmers
[11:04] <Ankammarao> mattyw, /snap/bin/charm push-term /root/ibm-platform-ac.txt ibmcharmers/ibm-platform-ac is the command we are using to push terms
[11:04] <mattyw> Ankammarao, and you're a member of the ibmcharmers group?
[11:04] <urulama> Ankammarao: can you see the groups returned with "charm whoami"?
[11:05] <Ankammarao> no, i am just seeing the logged-in user-name
[11:05] <Ankammarao> sry, i am getting this: "root@islrpbeixv685:~# charm whoami User: rajith-pv Group membership: ibmcharmers"
[11:06] <urulama> mattyw: ^ this means that the group is ok ... please check access on terms side
[11:07] <Ankammarao> mattyw, i am not a member of the ibmcharmers group, but my team member tried with his userid (already a member of that group)
[11:08] <ashipika> mattyw: ping?
[11:08] <mattyw> Ankammarao, so rajith should be able to push the terms then from his machine?
[11:08] <mattyw> Ankammarao, and to get access you will need to contact one of the admins for that team
[11:08] <Ankammarao> mattyw, no, he is unable to push terms
[11:09] <mattyw> Ankammarao, what error does he get?
[11:09] <mattyw> Ankammarao, the same one?
[11:09] <Ankammarao> mattyw, the same error
[11:10] <mattyw> ashipika, can you help Ankammarao work out what's going on? they're trying to push terms but keep getting "user is not a member of required group"
[11:10] <ashipika> mattyw, Ankammarao: just trying to verify that it works for me.. one minute, please
[11:10] <mattyw> Ankammarao, how long has rajith been a member of that group?
[11:11] <mattyw> ashipika, is it possible they'd need to login again?
[11:11] <ashipika> mattyw: yes, perhaps
[11:12] <Ankammarao> mattyw, he has been a member for 6 months or more
[11:12] <Ankammarao> mattyw, root@islrpbeixv685:~# /snap/bin/charm version charm 2.2.0 charm-tools 2.1.8
[11:13] <Ankammarao> mattyw, is that version fine?
[11:13] <mattyw> Ankammarao, looks fine to me - there's some kind of issue with login, ashipika is the expert there so he'll be able to help out
[11:13] <ashipika> Ankammarao: interesting.. so which term are you trying to push? what is the term id?
[11:13] <mattyw> ashipika, ibmcharmers/ibm-platform-ac
[11:14] <mattyw> ashipika, and charm whoami shows ibmcharmers membership
[11:14] <ashipika> mattyw: that is unusual
[11:14] <mattyw> Ankammarao, when you ran charm whoami was that running /snap/bin/charm whoami?
[11:14] <mattyw> Ankammarao, would be interesting to see if they return the same information
[11:15] <ashipika> mattyw: for me "charm whoami" returns "not logged into https://api.jujucharms.com/charmstore" :)
[11:15] <Ankammarao> mattyw, showing a different user: root@islrpbeixv685:~# /snap/bin/charm whoami User: achittet
[11:16] <ashipika> Ankammarao:  could you please try removing your .go-cookies file?
[11:16] <mattyw> ashipika, be careful...
[11:16] <ashipika> mattyw: ?
[11:16] <mattyw> ashipika, there's some confusion between the go-cookies being used by the charm command and the one being used by the snap charm command
[11:16] <mattyw> ashipika, and only the snap charm command contains the commands for pushing terms
[11:16] <mattyw> ashipika, so we need to remove go-cookies for that command
[11:17] <mattyw> ashipika, which might not be ~/.go-cookies
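(The two cookie jars in question can be checked before anything is deleted — the snap location below is an assumption about snapd's per-snap home, so verify it on your system:
  ls -l ~/.go-cookies                      # jar used by the deb/classic charm command
  ls -l ~/snap/charm/current/.go-cookies   # jar the snap likely uses (assumed path)
  # remove the snap's jar, then log in again with /snap/bin/charm login )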
[11:17] <ashipika> mattyw: ah, interesting.. i wonder why charm whoami does not work for me
[11:18] <ashipika> Ankammarao: launchpad says you are not a member of any group..
[11:18] <ashipika> mattyw: ^
[11:19] <mattyw> ashipika, they're using login for rajith-pv
[11:19] <Ankammarao> ashipika: yes, i am not a member of that group, but my team member rajith is
[11:20] <ashipika> Ankammarao: could you please go to https://jujucharms.com and log in as rajith-pv? and then try again?
[11:20] <Ankammarao> ashipika: ok
[11:20] <ashipika> Ankammarao: thank you
[11:26] <Ankammarao> mattyw, ashipika: it's working now
[11:26] <ashipika> Ankammarao: \o/
[11:26] <ashipika> Ankammarao: it's because of the way our identity manager works.. it cannot get user information until you log in with jujucharms.com at least once..
[11:27] <ashipika> Ankammarao: sorry for the inconvenience
[11:28] <Ankammarao> ashipika: there is a conflict b/w charm login and /snap/bin/charm login
[11:28] <Ankammarao> ashipika : both are different users
[11:29] <Ankammarao> mattyw, ashipika: root@islrpbeixv685:~# sudo /snap/bin/charm push-term /root/ibm-platform-ac.txt ibmcharmers/ibm-platform-ac
[11:29] <Ankammarao> ashipika: getting output like "ibmcharmers/ibm-platform-ac/1"
[11:29] <ashipika> Ankammarao: did you do snap login first?
[11:30] <Ankammarao> yes
[11:30] <ashipika> Ankammarao: now you can use ibmcharmers/ibm-platform-ac/1 term in your charms..
[11:30] <ashipika> mattyw: any idea why the two users should be different?
[11:31] <Ankammarao> ashipika: but we have mentioned the term name like "ibm-platform-ac/1" in metadata.yaml
[11:31] <Ankammarao> ashipika: no, both users should be the same; earlier i got the error because the two users were different
[11:32] <Spaulding> folks, one simple question... update-status... what should it return?
[11:32] <Spaulding> it's like a health check?
[11:32] <Spaulding> so if i run curl to check that the website is working, and return an error code ... would that be fine?
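(Spaulding has the right idea: the update-status hook fires on a periodic tick and is a natural place for a health check. A minimal sketch of such a hook, with a placeholder URL and status messages:
  #!/bin/bash
  # hooks/update-status -- runs periodically; report workload health.
  if curl -sf --max-time 10 http://localhost/ > /dev/null; then
      status-set active "site is responding"
  else
      status-set blocked "site is not responding"
  fi )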
[11:32] <ashipika> Ankammarao: you need to use ibmcharmers/ibm-platform-ac/1 in metadata.yaml.. because the term id consists of the namespace (ibmcharmers) and the term name (ibm-platform-ac).
[11:33] <ashipika> Ankammarao: if you want your charm to require agreement to a specific term, you need to include the term revision number.. otherwise your charm will always require agreement to the latest revision of the term document
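(Putting ashipika's two points together, the charm's metadata.yaml would carry the namespaced, revision-pinned term id — a sketch:
  terms:
    - ibmcharmers/ibm-platform-ac/1   # namespace/name/revision; drop "/1" to track the latest revision )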
[11:36] <Ankammarao> ashipika: ok, thank you
[11:37] <ashipika> Ankammarao: you're welcome.. if you have any further issues pushing or releasing terms, please ping me
[11:41] <Ankammarao> ashipika : sure i'll ping you
[15:25] <dbuliga> hey guys. What is the status of this PR? https://code.launchpad.net/~dbuliga/charms/trusty/nagios/nagios/+merge/288614 Nothing changed on it since 2016-10-07. Is it possible to get it reviewed? Thx. Denis
[15:47] <geetha> Hi, I have installed VNC server and firefox on an ubuntu 16.04 s390x machine. When I tried to start firefox through the vnc client, it's giving me the error: Segmentation fault (core dumped).
[17:13] <bdx> hey whats up guys, I'm having issues connecting to my controller, see -> https://bugs.launchpad.net/juju/+bug/1644634
[17:13] <mup> Bug #1644634: cannot access controller <juju:New> <https://launchpad.net/bugs/1644634>
[17:15] <bdx> this has basically rendered all of the models I have provisioned on that controller inaccessable
[17:16] <bdx> any ideas on what to do here?
[17:19] <marcoceppi> bdx: so you're not able to restart the mongodb process?
[17:20] <bdx> marcoceppi: I'm not able to get any response from the controller
[17:20] <bdx> marcoceppi: my initial inclination was that the controller is just mad iowaiting
[17:21] <bdx> but my crud monitoring through aws console suggests it's not under much load
[17:21] <marcoceppi> too many file descriptors open?
[17:21] <marcoceppi> bdx: you're not able to SSH or anything?
[17:22] <bdx> omg
[17:22] <bdx> my ssh just connected
[17:22] <bdx> after like 5+ minutes
[17:22] <bdx> I was right
[17:22] <bdx> controller is slammed
[17:23] <bdx> I couldn't connect all weekend
[17:23] <bdx> it would just time out
[17:23] <bdx> I'm actually surprised it is still running
[17:23] <bdx> marcoceppi: https://s22.postimg.org/gfo71ug4h/Screen_Shot_2016_11_28_at_9_26_25_AM.png
[17:24] <marcoceppi> okay, that's a lot of mongod processes
[17:24] <bdx> cloudwatch metrics are poop
[17:24] <marcoceppi> bdx: can you clean shutdown those processes?
[17:24] <bdx> I guess ...
[17:25] <marcoceppi> I want to grab logs, it seems like systemd is just going crazy spawning mongod procs
[17:25] <marcoceppi> rick_h: ^ ?
[17:25] <bdx> like `sudo kill <proc#>`
[17:25] <marcoceppi> `killall -15 mongod`
[17:25] <marcoceppi> might be quicker
[17:25] <bdx> right lol
[17:27] <marcoceppi> bdx: service juju-db stop might be good too
[17:27] <marcoceppi> bdx: then getting logs and restarting
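(Roughly the sequence being suggested here, assuming the stock Juju 2.x paths on the controller machine:
  sudo service juju-db stop                                # cleanly stop the controller's mongod
  sudo tar czf /tmp/controller-logs.tar.gz /var/log/juju   # grab logs before anything restarts
  sudo service juju-db start                               # bring the database back up )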
[17:27] <bdx> restart the controller?
[17:28] <bdx> I must have hit a memory increase, my ssh session froze
[17:28] <bdx> eeehh, waiting for ssh again omp
[17:29] <rick_h> bdx: marcoceppi sorry was otp, looking
[17:29] <bdx> marcoceppi: ok, back on the controller
[17:30] <marcoceppi> bdx: restarting the services, though at this point, restarting the controller VM might not be a bad idea, though grabbing logs will be super helpful
[17:30] <bdx> https://s14.postimg.org/65u1rqcc1/Screen_Shot_2016_11_28_at_9_33_37_AM.png
[17:30] <rick_h> bdx: k, yea worst case reset the VM and it should come up if things are working.
[17:30] <bdx> grabbing logs now
[17:30] <rick_h> bdx: ouch, so load of 30 but cpu/memory is ok...disk thrashing to no end?
[17:31] <marcoceppi> bdx: are you running LXD workloads on here?
[17:34] <bdx> marcoceppi: no, but my controller was initially throwing a bunch of errors due to not having lxd it seemed, so I installed lxd simply to negate the noise
[17:35] <bdx> rick_h, marcoceppi: my tar of the controller logs is ~100MB, how can I get this to you guys?
[17:36] <marcoceppi> bdx: is that also gzipped?
[17:36] <bdx> ya
[17:36] <marcoceppi> dang. I don't know if LP will let you upload that much, but the bug you linked might be a good place
[17:37] <rick_h> bdx: if not, try slicing into 10 chunks perhaps? or dropbox link or whatever.
[17:37] <rick_h> bdx: get me a url and I'll get it and help get it to folks if needed
[17:40] <bdx> rick_h, marcoceppi: https://bugs.launchpad.net/juju/+bug/1644634/comments/1
[17:40] <mup> Bug #1644634: cannot access controller <juju:New> <https://launchpad.net/bugs/1644634>
[17:41] <rick_h> bdx: <3 ty
[17:41] <rick_h> bdx: are things somewhat sane post-restart?
[17:41] <rick_h> bdx: or still nuts?
[17:41] <bdx> rick_h, marcoceppi: no, the controller is iowaiting again, fully maxed
[17:41] <marcoceppi> bdx: is iotop installed on the machine?
[17:42] <bdx> rick_h, marcoceppi: I have a redis cluster deployed supporting a staging and QA env, other than that I could ditch this controller, unfortunately I need to keep that QA env up at all costs bc it's getting beat on pretty heavily right now
[17:43] <rick_h> bdx: can you restart the instance?
[17:43] <rick_h> bdx: or did you do that? /me glanced through the backlog but might have missed it
[17:44] <bdx> iotop -> https://s21.postimg.org/58l6v7787/Screen_Shot_2016_11_28_at_9_47_38_AM.png
[17:44] <bdx> rick_h: no, i didn't restart .... do you think that will send my redis cluster into a state of disarray?
[17:45] <rick_h> bdx: shouldn't affect it at all.
[17:45] <rick_h> bdx: I mean the controller is not the same machine as the redis cluster, right?
[17:45] <bdx> correct
[17:45] <rick_h> bdx: restarting the controller should just have agents on the redis machines time out for a bit while it comes back up
[17:46] <rick_h> bdx: no damage to the running redis processes at all
[17:46] <bdx> rick_h: ok, restarting controller now
[17:47] <rick_h> bdx: poking at the logs but will take a sec
[17:47] <rick_h> lol 1.7G of logs
[17:47] <bdx> rick_h: I've previously had a whole openstack go tits up after losing the controller ... only reason I ask
[17:47] <bdx> rick_h: right ...
[17:48] <rick_h> bdx: really? I'd be curious about that story. We've had folks deploy OS with juju and then go in and shutdown all jujud on each machine/etc
[17:48] <rick_h> bdx: openstack still ran and they made it a manually run openstack at that point
[17:49] <bdx> rick_h: yeah, after I shutdown all of the juju agents everywhere I was able to move forward and save it all ... but if I remember correctly my db ended up borked
[17:50] <rick_h> bdx: the controller db? or the OS mysql db?
[17:50] <bdx> rick_h: openstack mysql db
[17:50] <rick_h> bdx: can you add info to the bug about the cloud/instances running the controller/etc?
[17:51] <bdx> yea, omp
[17:54] <bdx> ahhhh shoot, wrong issue
[17:55] <bdx> rick_h: comment #3
[17:55] <rick_h> bdx: k, ty
[17:56] <rick_h> bdx: can you note the instance/disk and such of the controller nodes. So there's a storm in the logs, but based on that top output it seems like it's not ram/cpu but disk, and so wondering if we can draw some lessons on disk performance and the amount of logs/etc going on
[17:57] <rick_h> bdx: there's an openstack in here as well as the redis?
[17:57] <rick_h> how many models with what running?
[17:57] <bdx> rick_h: no the openstack was yesteryear
[17:58] <bdx> rick_h: http://paste.ubuntu.com/23549554/
[17:58] <rick_h> bdx: hmm, seeing this in the log
[17:58] <rick_h> 2016-11-28 17:32:16 DEBUG juju.apiserver request_notifier.go:140 -> [55] unit-openstack-dashboard-0 795.567548ms {"request-id":44,"error":"watcher has been stopped","error-code":"stopped","response":"'body redacted'"} NotifyWatcher["10"].Next
[17:59] <bdx> rick_h: yeah, I've been using the dashboard to help troubleshoot keystone v3 ops I'm trying to solidify
[17:59] <rick_h> bdx: oh ok, so not all of openstack but that is there ok
[17:59] <bdx> rick_h: I've deployed keystone + barbican as a standalone secrets-provisioning service
[17:59] <bdx> yea
[18:00] <rick_h> bdx: gotcha, just trying to understand the logs as I go through sorry
[18:00] <bdx> np
[18:02] <rick_h> bdx: is the restart back up?
[18:05] <bdx> rick_h: yes
[18:05] <rick_h> bdx: sane or slammed?
[18:06] <bdx> rick_h: looking much better now -> https://s22.postimg.org/gkfbj32ld/Screen_Shot_2016_11_28_at_10_09_21_AM.png
[18:06] <rick_h> bdx: ok, I've got to run to a meeting. I've got the logs and alexisb is getting someone to look at the bug/details when they come online in a bit.
[18:06] <bdx> rick_h: thanks man
[18:06] <rick_h> bdx: thanks for the added details, we're currently working on a lot of this so this is good extra data.
[18:06] <bdx> rick_h: appreciate it
[18:07] <bdx> do I get a guinea pig award?
[18:09] <alexisb> bdx you get the juju ecosystem rockstar award :)
[18:09] <alexisb> thanks for all the info in the bug
[18:09] <alexisb> we will get updates in it tonight
[18:12] <Guest34504> Is it possible to use Juju resource with a bundle? https://jujucharms.com/docs/stable/developer-resources
[18:12] <Guest34504> If so, pls let me know the way to do it
[18:14] <bdx> alexisb: awesome, thx
[19:04] <bdx> after adding and removing a machine multiple times from the manual provider, I am now not able to add the machine anymore, I'm getting this -> http://paste.ubuntu.com/23549809/
[19:04] <bdx> even though the machine doesn't exist on my controller in any models
[19:05] <bdx> I've tried ssh'ing in and `rm -rf /var/{log,lib}/juju`, and `sudo deluser --remove-all-files ubuntu`
[19:05] <bdx> but I still get the same result
[19:05] <bdx> could the machine be hanging around in the controller somewhere, even though its not shown via `juju status`?
[19:06] <bdx> rick_h, marcoceppi: ^
[19:07] <rick_h> bdx: looking, want to check what juju looks for to make that determination
[19:08] <marcoceppi> rick_h: can you use resources in a bundle?
[19:09] <rick_h> marcoceppi: not a local resource atm, it's on the 17.04 roadmap list
[19:09] <marcoceppi> rick_h: ack, ta
[19:09] <bdx> rick_h: https://bugs.launchpad.net/juju/+bug/1645446
[19:09] <mup> Bug #1645446: juju incorrectly reports machine already provisioned <juju:New> <https://launchpad.net/bugs/1645446>
[19:11] <bdx> I'm going to hit every corner case today, fyi
[19:15] <rick_h> bdx: wheeee, replied with request for as much detail on the before as we can get please
[19:15] <rick_h> bdx: only workaround I could think of atm is to manually remove the doc from the db, or change the ip address/hostname of the machine.
[19:16] <rick_h> bdx: but looks like some bug in the removal that you hit with the add/remove several times.
[19:17] <bdx> rick_h: ok, I don't have dynamic provisioning for spinning these up and down ... I could kill-controller and reprovision it all if that would be easier?
[19:17] <rick_h> bdx: yea, unfortunately that's the non-hackiest path forward.
[19:17] <rick_h> bdx: since it's in the mongodb that the doc is hanging around that is triggering that error
[19:22] <bdx> rick_h: alright, updated, thanks
[19:22] <rick_h> ty bdx
[19:27] <bdx> rick_h: ehh, even after killing my controller, and re-bootstrapping, I'm getting the same thing
[19:28] <rick_h> bdx: ?! /me goes back to the code he thought he understood
[19:29] <rick_h> bdx: when you kill-controller in manual...I wonder if it clears the db? I mean that would be nuts...but.
[19:30] <bdx> rick_h: yea, I'm wondering if I need to manually tear all that ishk outta there?
[19:30] <rick_h> bdx: sec, can you grab the logs please?
[19:31] <rick_h> bdx: looking for lines like: "Checking if %s is already provisioned"
[19:32] <rick_h> bdx: looks like there's another place that can hit this if the juju service exists on the system
[19:32] <rick_h> bdx: so that's in upstart or systemd depending on series
[19:33] <rick_h> bdx: so it's probably not the juju directory, but the services directory that's the issue.
[19:33] <bdx> rick_h: ok, yeah, it's a trusty controller
[19:34] <rick_h> bdx: but the machine is xenial?
[19:34] <bdx> rick_h: no, the machine is trusty too
[19:34] <bdx> ooh, the services dir on the machine ... my b
[19:35] <rick_h> bdx: right, the systemd script for jujud on that machine that's "already provisioned"
[19:35] <bdx> yea, I was looking around in there and didn't see anything that said juju .... oh, where is that?
[19:35]  * rick_h is looking
[19:35] <rick_h>  /etc/systemd/system ?
[19:36] <rick_h> anything juju in there?
[19:36] <bdx> rick_h: http://paste.ubuntu.com/23549930/
[19:36] <rick_h> bdx: yea, there we go: https://github.com/juju/juju/blob/6cf1bc9d917a4c56d0034fd6e0d6f394d6eddb6e/provider/manual/environ_test.go#L106
[19:36] <rick_h> bah
[19:37] <rick_h> bdx: any others from that code link hit?
[19:37] <jrwren> when did trusty get systemd?
[19:37] <rick_h> bdx: /etc/init/juju ?
[19:37] <bdx> I checked there too .. omp
[19:37] <rick_h> jrwren: no, but the controller is trusty, but the machine is xenial (the add-machine one that is)
[19:38] <jrwren> oh, ok.
[19:38] <rick_h> bdx: ok, what's omp? /me is ignorant
[19:38] <bdx> http://paste.ubuntu.com/23549940/
[19:39] <bdx> nothing
[19:39] <bdx> rick_h: no its trusty
[19:39] <rick_h> ok, so the controller was killed/restart, the machine in question has no reference to the juju service...
[19:40] <bdx> rick_h: here we go -> http://paste.ubuntu.com/23549954/
[19:40] <bdx> rick_h: exactly
[19:40] <bdx> rick_h: should I kill the controller, then ssh in and rm everything juju and mongo from the controller ?
[19:41] <rick_h> bdx: not yet, /me is chasing the code from that log output
[19:41] <rick_h> ok, that is definitely from https://github.com/juju/juju/blob/6cf1bc9d917a4c56d0034fd6e0d6f394d6eddb6e/environs/manual/init.go#L34 then
[19:42] <rick_h> at least the log lines match up
[19:42] <rick_h> so time to figure out wtf we run on systemd to get the list of services
[19:44] <rick_h> bdx: can you run this on the machine? -- /bin/systemctl list-unit-files --no-legend --no-page -t service | grep -o -P '^\w[\S]*(?=\.service)'
[19:47] <bdx> rick_h: bash: /bin/systemctl: No such file or directory
[19:47] <rick_h> bdx: this is on the xenial host?
[19:47] <bdx> rick_h: there is no xenial host
[19:47] <rick_h> bdx: on that pastebin it said xenial?
[19:47] <bdx> wha?
[19:48] <rick_h> http://paste.ubuntu.com/23549833/
[19:48] <rick_h> bdx: ok sorry, I thought you had a trusty controller with a xenial machine you were trying to add
[19:48] <rick_h> bdx: I see now that your other paste said it was --trusty
[19:48] <rick_h> bdx: sorry, my confusion. Ok, looking in the wrong place then
[19:50] <rick_h> bdx: so new command for trusty: sudo initctl list | awk '{print $1}' | sort | uniq
[19:50] <bdx> rick_h: http://paste.ubuntu.com/23549991/
[19:50] <bdx> its there
[19:51] <rick_h> bdx: there you go, so Juju thinks the machine's provisioned because of the juju entries in there
[19:51] <bdx> awesome, I'm going to rm the jujud-machine-17
[19:51] <rick_h> bdx: rgr
[19:52] <rick_h> bdx: the unit one as well if that's done/gone
[19:52] <rick_h> bdx: if any line has "juju" in it it's considered provisioned so need both gone: 	provisioned := strings.Contains(output, "juju")
[19:52] <bdx> yeah, just the machine-17 didn't do the trick
[19:53] <bdx> there we go, removing both did it
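(For anyone landing here with the same symptom, the full cleanup on a trusty/upstart manual machine, per this thread — machine 17 is just bdx's example number:
  sudo initctl list | grep juju                 # any line containing "juju" marks the host "provisioned"
  sudo rm /etc/init/jujud-machine-17.conf       # the machine agent's upstart job
  sudo rm /etc/init/jujud-unit-*.conf           # any unit agent jobs
  sudo rm -rf /var/lib/juju /var/log/juju       # agent state and logs )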
[19:53] <rick_h> bdx: <3 ok...updating the bug with these notes
[19:54] <bdx> thanks rick
[19:54] <rick_h> bdx: np, see you in an hour or so for round 3? :P
[19:57] <bdx> rick_h: I just got elasticsearch (patched with the addition of the python-yaml dep) to successfully deploy to that machine
[19:58] <bdx> rick_h: that was my last standing issue in the bunch, for the time being :-)
[19:58] <rick_h> bdx: ok, I'm going to take nap for now then :P
[20:35] <lazyPower> mbruzek  https://github.com/juju-solutions/jujubox/pull/20#pullrequestreview-10408603  - i had one minor comment. if you don't want to take any action on that i'm +1 to this as it is.
[20:40] <mbruzek> lazyPower: I still think the juju-1 packaging is not final, and I don't want to create a sym-link at this time
[20:40] <mbruzek> If you want to create a bug we can track it that way.
[20:40] <lazyPower> ack, not that serious
[20:40] <lazyPower> i'm +1 as it is, merging now
[20:45] <lazyPower> mbruzek - both jujubox PRs landed. You made mention of a charmbox pr but i dont see one on the repo. Did it land already or is it MIA?
[20:46] <lazyPower> mbruzek - i assume it was this one? https://github.com/juju-solutions/charmbox/commit/3df12bf82ce6bd16d519bb812c710831e799e6fb
[20:46] <mbruzek> yes that is it
[20:46] <lazyPower> awesome. landed that one over the weekend
[20:47] <lazyPower> so i think we're good on the box images now, just need to kick the builders and get the flavors setup
[20:47] <lazyPower> plus get this back in CI
[20:47] <lazyPower> want to batcave it and get that done real quick? it'll only take a few minutes
[20:49] <lazyPower> mbruzek ^
[20:50] <mbruzek> yes I am there?
[22:52] <kwmonroe> lazyPower: i see a juju-1 and latest tag now in jujubox.  is the 'devel' branch now deprecated?
[22:53] <lazyPower> kwmonroe - i believe mbruzek sent a mail to the list, but yes, devel is now :latest
[22:53] <lazyPower> and the devel tag is in the process of being deprecated
[22:53] <lazyPower> juju-1 will be the best-effort tag to keep a juju-1 compliant charmbox around
[22:55] <kwmonroe> word lazyPower -- i see mbruzek's note now (spelling errors and all).  thanks!
[22:55] <lazyPower> kwmonroe np happy to help
[23:25] <hallyn> 'juju bootstrap' always seems to hang (on both xenial and yakkety hosts), the console claims it's doing "apt-get upgrade", but the process list shows a
[23:25] <hallyn> 100000   15822 15821  0 22:58 ?        00:00:00 sudo /bin/bash -c /bin/bash -c  set -e tmpfile=$(mktemp) trap "rm -f $tmpfile" EXIT cat > $tmpfile /bin/bash $tmpfile
[23:25] <hallyn> which has a bash child and bash grandchild with no arguments
[23:45] <hallyn> tych0: using juju 2.0 and lxd, how can i specify that it should use an image i create?  (i need to drop open-iscsi so juju bootstrap won't hang)
[23:45] <tych0> hallyn: sec,
[23:46] <tych0> hallyn: make an image called ubuntu-$series
[23:46] <tych0> or, one that has a tag called that
[23:46] <tych0> it'll use that one
[23:46] <hallyn> oh, ok.  that'll suffice for now,
[23:46] <hallyn> but so there's no way to say "use image xenial-base" that you know of?
[23:47] <hallyn> tych0: (trying that - thanks)
[23:47] <tych0> hallyn: no, i don't think so
[23:47] <tych0> the ubuntu-$series alias is always used for exactly this reason
[23:47] <tych0> i didn't think about making the name custom at the time :)
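(Concretely, the workaround tych0 describes is to alias the custom image to the name juju resolves — "xenial-base" stands in for hallyn's prepared image:
  # juju looks up the LXD alias ubuntu-$series, so point that alias at the custom image;
  # get the fingerprint from `lxc image list`:
  lxc image alias create ubuntu-xenial <fingerprint-of-xenial-base> )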
[23:48] <hallyn> kewl - easy enough, re-bootstrapping, let's see
[23:50] <hallyn> tych0: success, thx :)
[23:51] <tych0> hallyn: cool, glad it worked!