[02:02] <marcoceppi> pmatulis: the difference is how juju models it, juju deploy ubuntu will create a machine with the ubuntu charm on it and the service ubuntu in your topology, juju add-machine will simply provision a machine with no services deployed to it, making it available for services to be deployed to it.
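For reference, the distinction marcoceppi describes, as a command sketch (assumes an already-bootstrapped environment; `ubuntu` is the charm from the example):

```shell
# Provisions a machine AND deploys the ubuntu service onto it:
juju deploy ubuntu

# Only provisions an empty machine; services can be placed on it
# later, e.g. with `juju deploy --to <machine-number>`:
juju add-machine
```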
[02:13] <pmatulis> marcoceppi: yeah i'm starting to see it better now. thanks for the description
[03:50] <firl> if anyone could help with my vivid-kilo, maas set up that would be awesome. I am running into a ceph issue http://askubuntu.com/questions/668320/ceph-cannot-reformat-because-disk-is-mounted
[12:55] <coreycb> jamespage, looks like we need something like SWIFT_CODENAMES in charm-helpers for all the projects in liberty.  I can start on that if you haven't.
[13:05] <jamespage> coreycb, that should already be covered I think
[13:05] <jamespage> but maybe not
[13:08] <coreycb> jamespage, I think get_os_version_codename() and openstack_upgrade_available() need some updates
[13:54] <jamespage> beisner, are you aware of any trusty/kilo issues?  seeing an instance boot error on an MP I'm reviewing
[13:57] <beisner> hi jamespage - t-k next mojo spec from yesterday fired up and connected to a new instance ok.  http://paste.ubuntu.com/12253397/   or   http://10.245.162.77:8080/view/Dashboards/view/Mojo/job/mojo_runner/745/console
[13:58] <jamespage> beisner, this is the MP - https://code.launchpad.net/~bbaqar/charms/trusty/nova-cloud-controller/next/+merge/269897
[14:01] <beisner> jamespage, digging into the logs collected on that run, nova-compute log shows MessagingTimeout and nova conductor woes.  http://paste.ubuntu.com/12253431/
[14:05] <beisner> jamespage, fyi Aug23:  n-c-c/next most recent time-triggered, successfully completed run.  (the Aug30 runs were sabotaged by undercloud storms.)
[14:05] <beisner> jamespage, Aug23 for reference:  http://10.245.162.77:8080/view/Dashboards/view/Amulet/job/charm_amulet_test/5981/
[14:05] <beisner> jamespage, i've re-triggered a new n-c-c/next amulet run now.
[14:11] <lazyPower> beisner: did any of those MP's come in last night?
[14:11] <lazyPower> beisner: i didn't get notice... or i did and they were filtered so i had no idea i had stuff to do
[14:12] <jamespage> gnuoy, hey - could you give https://code.launchpad.net/~james-page/charm-helpers/plumgrid/+merge/269920 a quick +1 (it's for the plumgrid SDN stuff - managed to miss they have not proposed to ch's)
[14:13] <gnuoy> jamespage, sur
[14:13] <gnuoy> * sure
[14:13] <jamespage> ta
[14:25] <beisner> hi lazyPower - apologies if i kept ya hanging - that train is delayed, but could be arriving soon.
[14:33] <firl> any openstack charmers on?
[14:34] <lazyPower> beisner: no stress. Just wanted to circle back and make sure I didn't leave you hangin last night :)
[14:34] <marcoceppi> firl: do you have a question?
[14:34] <firl> marcoceppi: Yeah, someone helped me get a config ready for vivid-kilo, and I am running into an issue with ceph, and wanted to know if it was me doing something wrong or something else
[14:35] <firl> http://askubuntu.com/questions/668320/ceph-cannot-reformat-because-disk-is-mounted
[14:35] <marcoceppi> beisner jamespage gnuoy coreycb ^ ?
[14:36] <marcoceppi> firl: thanks for opening an Ask Ubuntu question on it
[14:36] <firl> np, yeah it was early morning, and didn’t know if it should be a bug or a question
[14:36] <jamespage> firl, hmm
[14:37] <marcoceppi> firl: a question is always a good start, it gives a medium to track history and lets people who aren't always online read back; from there it might be a bug or just a fix to your environment
[14:37] <firl> marcoceppi: good to know, thanks
[14:38] <jamespage> firl, is this a re-deployment?
[14:38] <firl> nope
[14:38] <jamespage> i.e. have you managed to deploy once, and you're now re-deploying on the same hardware?
[14:39] <firl> I am redeploying on same hardware
[14:39] <firl> but I am using maas to juju destroy-environment
[14:39] <firl> I thought that the osd-reformat would take care of the designation on the newly deployed os ( on the same old hardware )
[14:40] <jamespage> firl: it's possible that something systemd-ish is causing a race on the redeployed system
[14:40] <jamespage> i.e. a udev rule fires and mounts the volume at about the same time this process is running
[14:40] <firl> of where it is mounting the drive before ceph allocates it?
[14:40] <jamespage> maybe - tricky to tell
[14:40] <firl> gotcha
[14:41] <jamespage> firl, ceph uses udev rules to mount and start OSDs on reboot - it's possible this is the cause
[14:41] <jamespage> although I've not seen this one before
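A quick way to check for the race jamespage suspects is to see whether udev has already mounted the OSD device before osd-reformat touches it. This is a sketch; `/dev/sdb` is a placeholder for whatever disk the ceph charm is configured to use:

```shell
# Report whether a block device is already mounted (e.g. by a udev
# rule firing on the redeployed node) before letting osd-reformat
# touch it.  The device path is a placeholder.
check_unmounted() {
    if findmnt --source "$1" >/dev/null 2>&1; then
        echo "mounted"      # a udev rule likely beat us to it
        udevadm settle 2>/dev/null || true
    else
        echo "not mounted"  # safe to (re)format
    fi
}
check_unmounted /dev/sdb
```

If the device shows up as mounted right after boot, that points at the udev-rule race rather than a charm bug.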
[14:41] <firl> yeah
[14:41] <firl> I have reproduced it 3 times
[14:41] <jamespage> urgh
[14:41] <firl> :)
[14:41] <firl> I try to test before asking haha
[14:41] <jamespage> firl, did you mean to deploy on vivid btw?
[14:41] <jamespage> or was kilo the main requirement?
[14:41] <firl> Kilo is the main
[14:41] <firl> vivid-kilo was for LXD possibility
[14:42] <jamespage> firl, ah!
[14:42] <jamespage> I see
[14:42] <firl> I can do trusty-kilo and see if it reproduces it
[14:42] <jamespage> makes sense
[14:42] <firl> :)
[14:42] <firl> Makes me feel a little less crazy then haha
[14:42] <jamespage> firl, I would probably bet it's fine on trusty/kilo - that gets a lot of redeployment exercise on hardware in test labs and at customer deployments
[14:43] <jamespage> vivid - less so - so this type of thing can bite.
[14:43] <firl> gotcha
[14:43] <firl> so trusty/kilo is more along the lines of trusted / stable ?
[14:43] <jamespage> firl: absolutely - but we don't provide LXD back to trusty ...
[14:44] <jamespage> so I can see why vivid was making sense for you
[14:44] <firl> yeah, yeah the LXD was just for fun testing
[14:44] <firl> not an immediate need so to speak
[14:44] <jamespage> firl, that said it's probably worth highlighting that block device support is not yet implemented in LXD/nc-lxd
[14:44] <jamespage> so cinder/ceph + LXD == ERRNOTSUPPORTED right now
[14:44] <firl> ah that is good to know
[14:44] <firl> yeah I use cinder/ceph so that makes sense
[14:45] <firl> ok off to trusty-kilo
[14:45] <jamespage> that is being worked on - but there are a number of security concerns to work through in the kernel to do with fs integrity
[14:46] <firl> very true
[14:47] <firl> Do you know of any other customer type example config.yaml for the OS world to compare my set up against by any chance?
[14:49] <jamespage> firl, it looks like you pulled yours from openstack-charm-testing right?
[14:49] <firl> ya
[14:50] <jamespage> firl, if you wanted something a little more concise to work from - https://jujucharms.com/openstack-base/
[14:50] <jamespage> that uses charm-store rather than branches and could easily be adapted
[14:50] <firl> thanks man, I appreciate it
[14:52] <jamespage> firl, that also uses the new bundle format which expresses everything about the environment including the physical machines
[14:52] <jamespage> gnuoy, sorry to nag on that review - pretty please :-)
[14:54] <gnuoy> jamespage, sorry, it's just possible I got distracted by something shiny
[14:54] <jamespage> ha
[14:55] <lazyPower> shiny? where?!
[14:57] <jamespage> ddellav, hey - I see you glance upgrade action stuff got landed - nice work
[14:57] <jamespage> gnuoy, ta
[15:00] <firl> jamespage, the format that I am using? or the openstack-base does?
[15:01] <jamespage> firl, the openstack-base bundle uses the new format
[15:01] <firl> gotcha, yeah I will convert over to the new format and try to specify it
[15:01] <firl> thanks again
[15:04] <jamespage> firl, it's worth noting that by using the charm-store for the charms, you can fix charm revisions for repeatable deployments
[15:04] <jamespage> firl, worth doing to avoid an unexpected re-deployment change
[15:04] <jamespage> say a new feature or suchlike
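In the v4 bundle format, pinning a revision as jamespage suggests means putting the revision number in the charm-store URL. A minimal fragment (service name, revision, and unit count are illustrative, not taken from openstack-base):

```yaml
services:
  ceph:
    charm: cs:trusty/ceph-42   # pinned revision, for repeatable deploys
    num_units: 3
```

Omitting the `-42` suffix would instead pull the latest published revision on every deploy.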
[15:07] <jamespage> lazyPower, plumgrid changes for openstack charms all landed now
[15:07] <jamespage> lazyPower, well into next at least
[15:07] <lazyPower> https://media.giphy.com/media/OJdyjK11k2fLi/giphy.gif
[15:08] <lazyPower> awesome, I'll clean up the technical debt i left behind before EOW and cc you when i get the bundle corrected.
[15:08] <lazyPower> thanks for the update jamespage
[15:10] <jamespage> lazyPower, thanks for the final reviews whilst i was away - appreciated!
[15:11] <lazyPower> jamespage: np, happy to see them land their charms. :)
[15:12] <jamespage> indeed
[16:10] <marcoceppi> rbasak: I've got a debian packaging question, you got a min?
[16:25] <ddellav> jamespage, thanks :) I've also landed the cinder changes as well. Working on ceilometer now.
[16:26] <ddellav> jamespage, https://code.launchpad.net/~ddellav/charms/trusty/cinder/upgrade-action/+merge/269247
[16:26] <ddellav> feel free to take a look and comment.
[16:26] <jamespage> ddellav, good-oh!
[16:33] <rbasak> marcoceppi: sure. Also you just reminded me that I should have something on my todo for you I think. Do you remember what it is?
[16:33] <marcoceppi> rbasak: I think I just found out what I did wrong, let me get back to you in a few mins
[16:34] <marcoceppi> rbasak: so I cleaned up the charm-tools packaging. We left the discussion at "I need to do a 1.6.0 release so we can propose for archive"
[16:34] <rbasak> marcoceppi: ah OK. So no action for me right now?
[16:34] <marcoceppi> however, 1.6.0 added a bunch of deps not packaged that I just wrapped up packaging
[16:34] <marcoceppi> rbasak: so I think after i finish testing/uploading to the PPA we should take another spin through the package to make sure everything is in place
[16:35] <marcoceppi> I'll ping you tomorrow to chat more about
[16:35] <rbasak> marcoceppi: OK. Let me know when you're ready.
[16:35] <rbasak> ack, thanks
[16:52] <marcoceppi> rbasak: okay, i actually do still have a packaging question
[16:52] <rbasak> marcoceppi: shoot :)
[16:52]  * rbasak EODs soon
[16:53] <marcoceppi> rbasak: so there was a source package, jujubundlelib, and it installed a jujubundlelib package. I wanted to create a python-jujubundlelib and python3-jujubundlelib package while making sure to maintain backwards compatibility with the jujubundlelib package, because other packages still depend on it
[16:53] <marcoceppi> rbasak: so I created the following control file
[16:54] <marcoceppi> https://github.com/juju/juju-bundlelib-packaging/blob/master/control
[16:54] <marcoceppi> jujubundlelib depends on python-jujubundlelib, which replaces it; python-jujubundlelib breaks jujubundlelib 0.1.9-1 (this is now 0.1.9-2) and replaces jujubundlelib
[16:55] <marcoceppi> however, if I install 0.1.9-1 and perform an upgrade I get this error
[16:55] <marcoceppi> rbasak: http://paste.ubuntu.com/12254887/
[16:55] <marcoceppi> I have to run sudo apt-get install -f after the upgrade to have the package install properly
[16:56] <marcoceppi> so I figured it was because I was doing <<; I patched it to do <= but the same error occurs
[16:56] <rbasak> 0.1.9-1 > 0.1.9-1~
[16:57] <marcoceppi> right so the control file is now
[16:57] <marcoceppi> jujubundlelib (<= 0.1.9-1)
[16:57] <marcoceppi> oh, so should i have done jujubundlelib (<< 0.1.9-2~)?
[16:57] <rbasak> Yeah, jujubundlelib (<< 0.1.9-2~) is right
[16:58] <marcoceppi> damnit, okay let me patch that and try again
[16:58] <marcoceppi> the ~ was throwing me off, I tried googling it but that's pretty hard
[16:58] <rbasak> It's fixed in 0.1.9-2, so anything older than that is what it replaces. Suffixing ~ allows for backports and PPAs
[16:59] <marcoceppi> cool
[17:00] <marcoceppi> rbasak: thanks, I'll give this a go
[17:00] <rbasak> Since if someone backports or PPAs 0.1.9-2 then it'll be in 0.1.9-2~, and that should also be treated as one where the transition has occurred.
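The version ordering rbasak is describing can be checked directly with `dpkg --compare-versions`. The commented control stanza below paraphrases the fix from the discussion rather than quoting the actual file:

```shell
# The corrected relationship, roughly, in debian/control:
#   Package: python-jujubundlelib
#   Breaks:   jujubundlelib (<< 0.1.9-2~)
#   Replaces: jujubundlelib
# The trailing "~" matters because in Debian version ordering a tilde
# sorts before everything, including the end of the string:
dpkg --compare-versions "0.1.9-2~ppa1" lt "0.1.9-2" && echo "tilde sorts before release"
dpkg --compare-versions "0.1.9-1" lt "0.1.9-2~" && echo "old revision caught by Breaks"
```

So a hypothetical backport versioned 0.1.9-2~ppa1 is newer than the Breaks bound and is treated as post-transition, exactly as rbasak says.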
[17:00] <rbasak> No problem.
[17:00]  * marcoceppi adds a little more packaging knowledge to the tree
[18:59] <skylerberg> How do I mark a failed hook as resolved? I am looking for an equivalent to `juju resolved` for a failed hook instead of a failed charm.
[19:38] <lazyPower> skylerberg: juju resolved unit/# resolves just that hook
[19:38] <lazyPower> if a follow up hook fails, the charm re-enters a failed state
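For reference, the commands lazyPower means look like this (the unit name is an example):

```shell
# Mark the failed hook on a unit as resolved, skipping it:
juju resolved wordpress/0

# Or re-run the failed hook instead of skipping it:
juju resolved --retry wordpress/0
```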
[19:40] <skylerberg> lazyPower: Thanks. I got it working a bit ago, but I had tried several things, so I still wasn't sure what actually fixed it.
[19:47] <skylerberg> wolsen: I resubmitted my patch for nova-compute with the changes we discussed: https://code.launchpad.net/~sberg-l/charms/trusty/nova-compute/tintri-interface/+merge/269972.
[19:49] <wolsen> skylerberg, ack - thanks - I'll take a look at it a bit later
[20:43] <beisner> wolsen, ping-o
[20:51] <wolsen> beisner, pong-o
[20:52] <beisner> yo!
[20:54] <wolsen> hey
[20:54] <beisner> wolsen, just touching base re: email (1c-h + 4 charm) review
[20:56] <beisner> wolsen, ah i think we left off with https://code.launchpad.net/~1chb1n/charm-helpers/amulet-svc-restart-race/+merge/269098  <-- rev pushed following review
[20:57] <wolsen> beisner, yep I hadn't gotten back to your updated push on it
[20:57]  * wolsen looks now
[20:59] <beisner> wolsen, np.  we had to let the tests run for the charms that i sync'd it into to confirm.
[20:59] <beisner> ie. those in your email consume these c-h changes
[20:59] <wolsen> beisner, yep
[21:00] <beisner> wolsen, thx man
[21:04] <wolsen> beisner, approved - added yet another comment, but it's more of a follow-up todo action
[21:09] <beisner> wolsen, replied fyi.  another great question.
[21:11] <wolsen> beisner, oh hah - my comment was misstated - heh, it's ambiguous if len(pid_list) > 1 not 0
[21:11] <wolsen> beisner, but let me look at that method
[21:11] <beisner> wolsen, yes indeed, we are ignoring cases where there are multiple pids (as did the ancestor).   which is a good to-do to further examine.
[21:12] <wolsen> beisner, right - that was the crux of my comment - my mistake on the typo there - we both agree though ;)
[21:12] <wolsen> beisner k, I don't have rights to merge that - but it's got my blessing fwiw
[21:13] <beisner> wolsen, ack & thank you sir
[21:13] <wolsen> beisner, thank you!
[21:14]  * wolsen turns towards ackk's proposal then to skylerberg's
[21:16] <firl> anyone able to help me diagnose my bundle parse errors ?
[21:27] <firl> nevermind, I needed to use “juju quickstart” instead of “juju-deployer"
[21:27] <marcoceppi> firl: huh, quickstart and deployer should both work, what were the errors?
[21:28] <firl> http://pastebin.com/vds8YYDh
[21:28] <firl> http://pastebin.com/w9UckkRi
[21:28] <firl> nevermind the quickstart just failed more gracefully
[21:28] <firl> haha
[21:28] <marcoceppi> firl: heh :D
[21:28] <marcoceppi> what version of deployer do you have installed?
[21:30] <marcoceppi> it seems to be an issue with parsing constraints
[21:30] <firl> yeah, worked on my old bundle file
[21:30] <marcoceppi> what changed in this bundle file?
[21:31] <firl> switching from v3 to v4? I think? I mimicked the openstack-base charm
[21:31] <firl> and then added in my constraints
[21:31] <marcoceppi> interesting
[21:31] <marcoceppi> let me check something
[21:31] <firl> and the quick start might have worked? still waiting for the gui to come up
[21:32] <marcoceppi> firl: it's possible
[21:32] <marcoceppi> it seems that "cannot unmarshal string into Go value of type []string" means that the API is expecting a list of strings and not just a string
[21:32] <marcoceppi> firl: lets wait for quick start to succeed or fail
[21:32] <marcoceppi> and go from there, I have some ideas
[21:33] <firl> cool
[21:33] <firl> https://api.jujucharms.com/charmstore/v4/bundle/openstack-base-36/archive/bundle.yaml
[21:33] <firl> that constraints only has 1 as well
[21:34] <marcoceppi> firl: yeah, I've not used the v4 bundle format as much as I should
[21:34] <firl> or maybe it’s thinking tags=[“tag1”,”tag2”]
[21:34] <marcoceppi> I don't think so, but it's possible
[21:34] <marcoceppi> rick_h_: ^
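For comparison, a machine entry in the v4 bundle format with MAAS tags in its constraints might look like the fragment below (a sketch; the series and tag names are placeholders, not from firl's bundle):

```yaml
machines:
  "0":
    series: trusty
    constraints: "tags=compute arch=amd64"
```

In the v4 format the whole constraints value is one space-separated string, which is one plausible source of the string-vs-[]string unmarshal mismatch above.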
[21:40] <firl> juju gui timing out to install, I will blow away and re bootstrap with dragging the yaml file to the gui. I don’t think the gui has a place for maas tags though
[21:40] <marcoceppi> firl: it may not expose it explicitly, but it should respect it
[21:41] <firl> cool
[21:43] <marcoceppi> firl: for what it's worth juju will soon be gaining the ability to just do `juju deploy <bundle>` so it'll be a lot more cohesive and we can finally drop things like deployer and quickstart for bundles
[21:43] <marcoceppi> that's why we're transitioning to the v4 bundle format
[21:43] <firl> sweet; yeah I’ve really enjoyed everything I have encountered when it comes to juju
[21:44] <firl> bringing a co worker with me to the summit too
[21:46] <marcoceppi> firl: \o/ awesome, see you there
[21:46] <ntpttr> Hi, I'm having some trouble deploying juju-gui to my bootstrapped environment. I run "juju deploy --to 0 --repository=/srv/charmstore local:trusty/juju-gui" and then "juju expose juju-gui", and for a while the service-status is maintenance and the agent-status is running the install hook and the start hook, and then eventually the service-status changes to 'unknown' and the agent-status changes to 'idle', and even though the agent-state is started, going to the
[21:47] <marcoceppi> ntpttr: hey, so since 1.24 juju introduced a new extended status. a service status of unknown and agent-state of idle is for charms not yet using extended status. So that means the juju-gui should be deployed
[21:47] <marcoceppi> ntpttr: are you having issues loading it in your browser?
[21:47] <ntpttr> marcoceppi: Yeah, the browser is showing a spinning wheel under "Connecting to the Juju environment"
[21:48] <marcoceppi> ntpttr: hum, what provider are you using? local, maas, openstack?
[21:48] <ntpttr> marcoceppi: maas
[21:48] <ntpttr> marcoceppi: I have a corporate proxy I'm working behind, but I did configure that in environments.yaml
[21:49] <marcoceppi> ntpttr: ah, that might be the issue
[21:49] <marcoceppi> ntpttr: so the juju-gui uses websockets to communicate with the juju environment
[21:49] <marcoceppi> ntpttr: what browser are you using?
[21:49] <ntpttr> marcoceppi: Firefox
[21:50] <marcoceppi> ntpttr: if you open the developers tool console in firefox and hard refresh the juju-gui I imagine you'll probably see a connection error pop up
[21:51] <ntpttr> marcoceppi: Yeah it's saying can't establish a connection to the server at wss://node0vm0.maas/ws. It was working yesterday for me, but then I updated my juju by adding the stable ppa
[21:51] <marcoceppi> ntpttr: so what version were you on before? 1.22?
[21:51] <ntpttr> marcoceppi: It's also saying the connection was interrupted while the page was loading
[21:52] <ntpttr> marcoceppi: I'm honestly not sure, whichever one comes with the orange box build
[21:52] <marcoceppi> ntpttr: are you doing this on an orangebox right now?
[21:52] <ntpttr> marcoceppi: yes
[21:53] <marcoceppi> ntpttr: what does `env | grep "proxy"` produce on the machine you're trying to access the GUI from
[21:54] <ntpttr> marcoceppi: It produces all of my proxy urls for http_proxy https_proxy ftp_proxy and no_proxy. Yesterday I was having a connection issue as well and I fixed it by adding ".maas,10.14.4.1" to no_proxy
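The fix ntpttr describes amounts to extending `no_proxy` so the MAAS domain and the bootstrap node bypass the corporate proxy, letting the GUI's websocket (wss://...) connect directly. The values below match ntpttr's environment; substitute your own:

```shell
# Append the MAAS domain and bootstrap node IP to no_proxy,
# preserving any existing entries.
export no_proxy="${no_proxy:+$no_proxy,}.maas,10.14.4.1"
echo "$no_proxy"
```

Note this only affects the current shell; it would need to go in a profile script (and, for juju itself, environments.yaml) to persist.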
[21:54] <ntpttr> marcoceppi: Is there an easy way to downgrade juju to what I had before?
[21:54] <marcoceppi> yeah, you're going to have to do that again
[21:54] <marcoceppi> ntpttr: you could remove the stable ppa
[21:55] <firl> marcoceppi: drag to juju-gui seems to work so far.
[21:55] <ntpttr> marcoceppi: and after I do that if I update it will downgrade?
[21:56] <marcoceppi> ntpttr: you'll have to either apt-get remove juju-core and re-install or do `apt-get install juju-core=1.XX*` whatever the version is
[21:56] <marcoceppi> ntpttr: I suggest the former
[21:56] <marcoceppi> firl: nice! glad to hear the drag and drop is working so far