[00:30] <sam__344> Hi. I asked this last night but maybe everyone was asleep
[00:30] <sam__344> Is there a trick to running "juju quickstart mediawiki-single"
[00:30] <sam__344> under virtualbox using Ubuntu 14.04..
[00:31] <sam__344> Both the computer and the virtualized computer are Ubuntu 14.04
[00:31] <sam__344> It just hangs on machine 1 provisioning is pending
[07:27] <BlackDex> sam__344: Did you start the second box? Is it running via maas? etc..
[07:29] <sam__344> BlackDex: I did nothing but follow the instructions here, up to step #2. https://jujucharms.com/get-started
[07:29] <sam__344> I'm running it inside virtual box, not deploying to virtualbox
[07:30] <BlackDex> Ah, so the machine you need to deploy it to is bare-metal
[07:31] <sam__344> Well I'm just playing with it for fun. But I don't want it to mess up my laptop so I'm using VirtualBox. I'd expect it to just work the same as if I was following the directions directly on my laptop
[07:32] <BlackDex> sure
[07:32] <BlackDex> if you just select the "create a local environment" option
[07:33] <BlackDex> You need to have JuJu know where to deploy stuff to. That may be virtual, lxc, bare-metal, cloud etc..
[07:34] <sam__344> okay. So what should I type instead of "juju quickstart mediawiki-single"
[07:36] <sam__344> By the way BlackDex - I just rebooted the virtual machine and tried it again. Got  INFO jujuclient:328 Juju upgrade in progress...
[07:36] <sam__344> over and over again
[07:36] <sam__344> until finally "juju-quickstart: error: cannot retrieve the environment type: upgrade in progress - Juju functionality is limited"
[08:07] <BlackDex> hmm
[08:07] <BlackDex> that is strange
[08:07] <BlackDex> never did a local deploy btw
[08:45] <BlackDex> Hello there, on my setup juju seems to timeout after a while. The bootstrap node is running on a KVM-Instance and can connect to MAAS etc.. the deployed nodes are also receiving commands from juju, but after a while it seems to stop
[08:46] <BlackDex> if i then reboot the bootstrap node, some of those machines come to life without doing anything.
[08:46] <BlackDex> Is this a problem with the juju-api or mongodb? what can i do to try and solve this?
[13:11] <shruthima> Hi, we are developing IBM HTTP Server as a subordinate charm and we want to test the charm with ubuntu-devenv. (The HTTP server is a web server which can be used by other IBM products, e.g. WAS. We are developing it as a layer and decided to use the http interface from interfaces.juju.solutions.) So could you please include the http interface in the ubuntu-devenv metadata.yaml file under provides?
[13:18] <marcoceppi> shruthima: what is ubuntu-devenv? Do you have a link to it somewhere?
[13:21] <shruthima> this is the link  https://jujucharms.com/u/kwmonroe/ubuntu-devenv/trusty/5/
[13:21] <icey> any chance of getting https://code.launchpad.net/~chris.macnaughton/charm-helpers/pids-can-be-a-list/+merge/288014 reviewed soon?
[13:22] <beisner> icey, i'll review/comment
[13:23] <icey> thanks beisner :)
[13:24] <cory_fu> shruthima: I'm not sure that ubuntu-devenv makes sense to support the http interface, as it provides a developer environment and not anything related to HTTP or web services.  Maybe it would make more sense to test the IBM HTTP server with the existing haproxy charm?
[13:25] <cory_fu> https://jujucharms.com/haproxy/
[13:29] <shruthima> ok, will explore how I can use the haproxy charm to test the http server. If I have any queries I'll get back. Thank you :)
[13:40] <beisner> coreycb, thx for the swift-* reviews.  with your +1 and the full test pass, i think we're clear to land those, yah?
[13:41] <coreycb> beisner, yes I think so
[13:43] <coreycb> beisner, I've also tested with jamespage's nova-cc patch successfully but that should probably have a more active charmer review it
[13:45] <beisner> coreycb, yep thedac is queued up on the n-c-c review.
[13:45] <coreycb> cool
[13:58] <beisner> icey, remind me, which charm test were you updating to take that list validation?
[13:59] <icey> a ceph-mon test
[13:59] <icey> looking...
[14:00] <icey> beisner: https://review.openstack.org/#/c/287446/7/tests/basic_deployment.py
[14:00] <icey> line 185, changed from {'ceph-osd': 2} to {'ceph-osd': True}
[14:01] <icey> would rather be: {'ceph-osd': [2, 3]}
[14:14] <beisner> icey, thx. see review comment.
[14:14] <icey> beisner: saw it, thanks
[14:17] <beisner> icey, i'd be +1 to rename to just e_pids in that whole method
[14:17] <beisner> as it's also a funky name for the existing bool scenario
[14:17] <beisner> icey, or just "expected"
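The pids-can-be-a-list change being discussed amounts to letting the expected value per service be a bool ("at least one pid"), an exact int count, or a list of acceptable counts (e.g. `{'ceph-osd': [2, 3]}`). A minimal sketch of that validation logic, with illustrative names rather than the actual charm-helpers `validate_unit_process_ids` code:

```python
# Hedged sketch only: function and parameter names here are hypothetical,
# not the real charm-helpers API. The point is the bool/int/list dispatch.

def validate_unit_process_ids(expected, actual_pids):
    """Return None on success, or an error string describing the mismatch.

    expected:    {process_name: bool | int | list-of-int}
    actual_pids: {process_name: [pid, ...]} as found on the unit
    """
    for proc_name, expected_count in expected.items():
        pids = actual_pids.get(proc_name, [])
        # Note: bool must be checked before int, since isinstance(True, int)
        # is True in Python.
        if isinstance(expected_count, bool):
            # True means "one or more processes must be running"
            ok = bool(pids) == expected_count
        elif isinstance(expected_count, int):
            ok = len(pids) == expected_count
        elif isinstance(expected_count, list):
            # Any of the listed counts is acceptable, e.g. [2, 3]
            ok = len(pids) in expected_count
        else:
            return 'unsupported expected value: %r' % (expected_count,)
        if not ok:
            return '%s: expected %s, found %d pids' % (
                proc_name, expected_count, len(pids))
    return None
```

With a helper shaped like this, the ceph-osd test's `{'ceph-osd': True}` and the preferred `{'ceph-osd': [2, 3]}` both validate through the same code path.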
[14:48] <narindergupta> jujucharms.com is down
[14:49] <urulama_> narindergupta: seems a company network issue, we're looking into it
[14:49] <urulama_> narindergupta: thanks
[14:50] <narindergupta> urulama: thanks
[14:54] <magicaltrout> this is why juju needs multicloud models, so you can fail over outside of the core network ;)
[14:55] <rick_h__> magicaltrout: :)
[14:56] <urulama> :D
[15:22] <jcastro> jamespage: do you have the bundle you used openstack on your laptop in brussels pushed somewhere?
[15:23] <jcastro> I'm ready to melt my PC
[15:23] <lazyPower> yessssss
[15:24] <marcoceppi> jcastro: I think jamespage is on vacay
[15:28] <lazyPower> marcoceppi - do we know if 1.25.x is capable of using the devel channel in the store or is that a 2.0+ only feature?
[15:28] <marcoceppi> lazyPower: no idea, urulama ^?
[15:29] <urulama> lazyPower: 2.0 only
[15:29] <lazyPower> ack, ta
[15:35] <gnuoy> tinwood, novaclient.v1_1.client fix landed
[15:35] <tinwood> great stuff.  Thanks gnuoy!
[15:36] <gnuoy> np
[15:55] <dames> beisner: you said jamespage's nova-cc PR has already passed full amulet tests? Or I need to just approve the workflow bit?
[15:56] <beisner> thedac, re: https://review.openstack.org/#/c/291721/   Yep, if you're not seeing it, hit the Toggle CI button to expand the history a bit.
[15:56] <beisner> thedac, my only hold-back was the self-comment on https://review.openstack.org/#/c/291721/1/hooks/nova_cc_hooks.py
[15:56] <thedac> ok
[16:05] <lazyPower> jcastro - bite sized PR if you have a moment to TAL - https://github.com/juju-solutions/bundle-logstash-core/pull/1
[16:06] <jcastro> +    juju deploy ~containers/bundle/logstash-core
[16:06] <jcastro> is that right? wouldn't it not be in the containers namespace?
[16:06] <lazyPower> thats what works today
[16:06] <lazyPower> when it's promulgated, either/or
[16:07] <lazyPower> wait, i just committed an act of heresy, i just referenced a bundle as "promulgated"
[16:07] <beisner> thedac, fyi it'll take a code-review +2 along with that workflow +1 to trigger the merge.  lmk if you don't have privs for the +2.
[16:07] <jcastro> it just seems that you would document where it will be at with a clean namespace
[16:07] <lazyPower> if it doesn't work in a copy/pasteable format, i'm hesitant to plug it in there ya know?
[16:07] <jcastro> who is going to remember ~containers/logstash-core?
[16:07] <jcastro> I just want to remember logstash-core
[16:07] <thedac> beisner: I do. I am going to respond back to jamespage about the liberty-> mitaka upgrade path.
[16:07] <lazyPower> also jcastro - should i rename it to elastic-stack?
[16:07] <beisner> thedac, ok cool, thx
[16:07] <lazyPower> or elk-stack?
[16:08] <jcastro> yeah
[16:08] <jcastro> elk-stack
[16:08] <lazyPower> ok, will rename the repository after PR feedback, already updating the README
[16:09] <jcastro> looking at the bundle, one charm is in ~containers, one's in the promulgated store, and the jdk is in kwmonroe's personal namespace
[16:11] <jcastro> nod, don't block on me
[16:12] <jcastro> I'm just saying we should be pushing for bundles using reviewed charms, not that I don't trust kev, heh
[16:12] <kwmonroe> :)
[16:12] <lazyPower> its in the revq ;)
[16:12] <lazyPower> currently at what? #7?
[16:12] <kwmonroe> lazyPower: 'zulu8' is a drop-in layered promulgated openjdk replacement if you'd rather use that.
[16:13] <jcastro> heh, he just went from reviewing a readme diff to "retest the whole thing with a new java layer"
[16:13] <lazyPower> jcastro - this is all pretext work to propose for ~charmer, things have to work when you submit for review or we tend to nack them. charms are never done after all ;)
[16:13] <lazyPower> kwmonroe - you're not helping me by adding work
[16:13] <lazyPower> <3
[16:13] <kwmonroe> lolz.
[16:13] <lazyPower> but thats easily done
[16:14] <lazyPower> actually, kwmonroe - want me to kick the tires on testing that?
[16:14] <kwmonroe> testing what?  java is java.  just use it: https://jujucharms.com/zulu8/
[16:14] <kwmonroe> AMIRITE MATT?!?!
[16:15] <lazyPower> the one day java was set up for a slam dunk, and matt is in Hawaii
[16:15] <kwmonroe> no doubt
[16:15]  * lazyPower grins at the irony
[16:16] <magicaltrout> i read ironing
[16:16] <magicaltrout> which was a little odd
[16:18] <lazyPower> indeed
[16:26] <thedac> gnuoy: on the keystone case issue, I am thinking a compromise would be to not create as lowercase, but to be case-insensitive in the checks, i.e. read-only. I see that keystone/hooks/manager.py may be partially at fault.
[16:31] <gnuoy> thedac, sorry, you're suggesting we create case sensitive but read case-insensitive ?
[16:32] <thedac> gnuoy: yes, one sec
[16:33] <thedac> gnuoy: something like http://pastebin.ubuntu.com/15393100/
[16:36] <gnuoy> thedac, that makes sense to me
[16:36] <thedac> gnuoy: ok, I'll submit a pull request
[16:37] <gnuoy> kk
[16:37] <cholcombe> beisner, looks like the tests on the ceph-osd charm were deploying the wrong services
[16:37] <stokachu> when you destroy-model the model still shows up in list-models but as 'no longer alive'
[16:37] <stokachu> is this expected?
[16:37] <cholcombe> beisner, it was deploying ceph and not ceph-mon
[16:38] <beisner> cholcombe, indeed.  the test in ceph-osd deploys the ceph charm.  https://github.com/openstack/charm-ceph-osd/blob/master/tests/basic_deployment.py#L47
[16:38] <stokachu> nm found the bug
[16:39] <cholcombe> beisner, yeah i changed it to ceph-mon which should fix the problem
[16:41] <beisner> cholcombe, ok.  be aware, that marks the beginning of the end of ceph+ceph-osd functional integration.   i'd suggest that we add ceph-osd to the ceph charm's amulet test @ https://github.com/openstack/charm-ceph/blob/master/tests/basic_deployment.py#L47 right along with these changes so we don't completely lose coverage.
[16:41] <cholcombe> beisner, yeah good point
[16:42] <cholcombe> beisner, yeah i'll cut a branch for that integration
[16:42] <beisner> cholcombe, sweet, thanks
[16:57] <thedac> gnuoy: fyi: https://review.openstack.org/293043
[17:03] <jcastro> marcoceppi: https://bugs.launchpad.net/juju-core/+bug/1557633
[17:03] <mup> Bug #1557633: status has duplicate entries for relationships <juju-core:New> <https://launchpad.net/bugs/1557633>
[17:03] <jcastro> look at how huge that list ended up being
[17:05] <thedac> marcoceppi: nuage charms:
[17:05] <thedac> https://jujucharms.com/nuage-vsc/trusty/
[17:05] <thedac> https://jujucharms.com/nuage-vsd/trusty/
[17:05] <thedac> https://jujucharms.com/nuage-vrs/trusty/
[17:05] <thedac> https://jujucharms.com/nuage-vrsg/trusty/
[17:13] <gnuoy> tinwood, +2'd your keystone change. Thanks for all your work on that
[17:14] <tinwood> gnuoy, sadly OSCI doesn't agree!  (I'm guessing keystone auth problems).
[17:15] <beisner> tinwood, gnuoy - hook failed: "install" for trusty and wily on that keystone change, inspecting unit logs now..
[17:16] <tinwood> thanks beisner
[17:16] <beisner> tinwood, gnuoy 2016-03-15 17:09:55 INFO install ImportError: cannot import name get_admin_domain_id
[17:17] <tinwood> beisner, oo, nasty - that must be me.
[17:17] <beisner> tinwood, gnuoy - http://pastebin.ubuntu.com/15393468/
[17:17] <beisner> gnuoy, gotta wait for both "Jenkins" and "Canonical CI" to +1
[17:18] <beisner> looks like patchset 3 is the culprit
[17:18] <tinwood> I will check and get back to you both.  My box is broken (until I can get an install of amulet working).
[17:20] <tinwood> yeah, definitely me with merge conflicts.  However, there's no test for get_admin_domain_id() - i.e. it passes unit tests.  I'll dig into that too.
[17:24] <cholcombe> beisner, where does amulet get ceph-mon from?
[17:25] <cholcombe> beisner, i realize this is a vague question haha.  What i'm wondering is which version of ceph-mon is it using.  There's a few different ceph-mon's under different namespaces
[17:25] <beisner> cholcombe, if it's in other_services, launchpad.   if it's the ceph-mon charm test itself, it gets it from wherever the change originated (refspec)
[17:25] <cholcombe> beisner, yeah it's in other_services
[17:25] <beisner> cholcombe, it should be pulling from ceph-mon/next
[17:26] <cholcombe> beisner, ok that's what i was wondering.  I'm failing to deploy on precise and I'm wondering if it exists
[17:26] <beisner> cholcombe, it uses the trusty charm for everything currently (precise through xenial)
[17:26] <cholcombe> beisner, hmm ok
[17:26] <cholcombe> i must've broken something then
[17:33] <bdx> whats up everyone? I'm trying to bootstrap lxd provider using juju2, does anyone know where I can find an example of a config.yaml to feed juju2?
[17:47] <icey> bdx `juju bootstrap lxd lxd --upload-tools --config="default-series=trusty"` is what I've been doing
[17:47] <icey> anybody have suggestions for subordinate charms being a mix of layered and old-school charms?
[17:48] <bdx> icey: thanks
[17:48] <marcoceppi> thedac: thanks
[17:49] <marcoceppi> thedac: these are already promulgated though?
[17:49] <thedac> marcoceppi: I suspected that. jamespage promulgated them last week. Again I am trying to get straight the review process for future changes to those charms.
[17:50] <marcoceppi> thedac: yes, so that's not 100% clear, I'll look into it to find out exactly how they were uploaded, etc
[17:50] <thedac> Sounds like http://review.juju.solutions/ will be (but is not yet) the correct location for nuage (or anyone else) to submit change promulgation requests. Is that correct?
[17:51] <thedac> marcoceppi: ok, thanks. We could surely use some documentation on this. That or I have not had enough coffee today.
[17:52] <marcoceppi> thedac: documentation definitely, we're locking down a new review queue with the new charm command due out this month
[17:52] <marcoceppi> thedac: but basically yes, nuage, or whomever, would upload charms to their development channel - since those charms are promulgated, they'll have to go through a review to get them published to the stable channel
[17:53] <marcoceppi> but development will be done wherever the code for the charm lives, where previously anyone could contribute by just opening a merge requests against an lp branch
[17:53] <marcoceppi> so if they house the code for the charms in gerrit, or github, or lp, that's where people will contribute to, and it's up to them to do uploading/submitting to the store
[17:53] <thedac> ok
[18:05] <tinwood> yay! amulet reinstalls under xenial :)
[18:08] <cory_fu> rick_h__: Hey.  Has anyone suggested adding a message param to `charm publish` so that we can indicate just what is included in the new rev?
[18:09] <urulama> cory_fu: yes, that part is still missing
[18:09] <rick_h__> cory_fu: yes, it's supposed to be updated to take one like a 'commit message'
[18:09] <cory_fu> Ok, cool, so it's already planned.  Nice
[18:10] <cory_fu> I'm finding that the lack of info about the charm revs is making my development process more difficult because I can't recall what I've released when
[18:12] <beisner> tinwood, just to confirm, we need to tackle keystone @ master as a crit.  she's bust.
[18:14] <beisner> tinwood, ie.  with keystone install hook failing, all other amulet tests for other peeps will also be blocked.
[18:14] <beisner> tinwood, two paths forward:  post a change to fix it, or post a change to revert that last one.  whaddaya think?
[18:14] <tinwood> beisner, I've sorted it out.  I'm just finishing the merge and will push again.
[18:15] <beisner> tinwood, awesome
[18:22] <bdx> charmers, core, dev: I see spaces can be assigned like so -> juju deploy mysql --bind "server=database cluster=internal"
[18:22] <bdx> charmers, core, dev: does this indicate that I should have an affinity between subnets and spaces?
[18:23] <bdx> charmers, core, dev: if my space-0 has multiple subnets e.g http://paste.ubuntu.com/15393950/
[18:23] <bdx> charmers, core, dev: how can I `bind` a specific subnet in a space? .... Is this a thing yet?
[18:24] <tinwood> beisner, it seems that my review is closed.  Do I need to branch again and push a new review?
[18:25] <lazyPower> bdx - I would redirect that at the list. The primary author of the spaces feature is in EU time, and is the most well equipped to answer those questions
[18:26] <bdx> lazyPower: totally, thanks
[18:26] <tinwood> beisner, or do I hit the revert button?
[18:56] <beisner> tinwood-dinner, since that change was merged, it'll have to be a separate proposal
[18:56] <tinwood-dinner> Okay, will do. Just finishing eating.
[18:57] <beisner> me too
[19:26] <tinwood> beisner, I'm just running some tests to make sure it is all still okay.  Be about 5-10 minutes for the review to pop up (I hope!)
[19:27] <beisner> tinwood, excellent, thank you
[19:34] <cory_fu> bcsaller: I went ahead and added the --show-report option to charm-build: https://github.com/juju/charm-tools/pull/135
[19:38] <bcsaller> cory_fu: utils.delta_signatures wasn't helpful there? (thanks btw :)
[19:38] <cory_fu> ...
[19:38] <cory_fu> It probably would have been
[19:38] <bcsaller> the validate method above this commit is similar, but not the same
[19:40] <cory_fu> bcsaller: Ah, yeah.  delta_sigs checks if the files on disk don't match the manifest, but by the time the files on disk are updated, the manifest has been overwritten as well.  I guess I could call report before write_signatures
[19:41] <cory_fu> I could do the same with clean_removed.  Hrm
[19:45] <pmatulis> should this "just work"? i get 'ERROR cmd supercommand.go:448 no registered provider for "lxd"' : 'juju bootstrap mycontroller lxd'
[19:47] <tinwood> beisner, it's gone up and failed its CI?  It failed very quickly, so I'm wondering if that's because it's blocked?
[19:49] <beisner> tinwood, fail fast ;-)  ...
[19:49]  * beisner looks
[19:58] <marcoceppi> tinwood: yeah, sorry about that, amulet is fixed again though
[19:58] <marcoceppi> pmatulis: what version of Ubuntu?
[19:58] <tinwood> beisner, hey, no that's fine.  This is, after all, my doing, sadly.
[19:59] <pmatulis> marcoceppi: 2.0-beta2-0ubuntu1~16.04.1~juju1
[19:59] <pmatulis> sorry, Xenial
[20:00] <marcoceppi> pmatulis: when do you get that error? is there a bunch of text first or is it almost immediately, if it's the former --upload-tools is needed for now, otherwise that's another issue
[20:01] <pmatulis> marcoceppi: oh yeah, it runs for a good while
[20:01] <pmatulis> marcoceppi: lemme try --upload-tools . why do i need that again? b/c it tries to use a trusty container?
[20:01] <beisner> tinwood, unit test really did fail on that
[20:02] <marcoceppi> pmatulis: because the trusty agent doesn't have lxd provider compiled in
[20:02] <tinwood> beisner, hmm, strange, everything passed on my bastion.  what was the problem?
[20:02] <pmatulis> marcoceppi: roger. trying now...
[20:03] <beisner> tinwood, see priv channel for priv link
[20:03] <icey> beisner: any chance on more review on https://code.launchpad.net/~chris.macnaughton/charm-helpers/pids-can-be-a-list/+merge/288014 ?
[20:06] <pmatulis> marcoceppi: thank you - works
[20:08] <beisner> icey, on a hot issue atm, will circle back
[20:08] <cory_fu> bcsaller: Actually, I can't use delta_signatures for clean_removed after all, because the whole point is that they don't get removed, so d_s won't pick them up.  :/
[20:08] <icey> no worries beisner
[20:09] <bcsaller> cory_fu: :-/
[20:09] <bcsaller> heh
[20:48] <lazyPower> jcastro - how much ram do you have in your rig? and what did resource usage look like with openstack in lxd on your lappy?
[20:52] <cory_fu> bcsaller: PR updated
[20:52] <cory_fu> Also, lazyPower and marcoceppi, since you both commented
[20:53] <cory_fu> marcoceppi: Your :BOAT: didn't sail.  ;)
[21:49] <roryschramm> is it possible to specify a custom apt repository when juju deploys an app to an lxc container? ie the lxc container will use http://myrepo/ubuntu in /etc/apt/sources.list
[22:40] <stub> roryschramm: Not globally. Each charm needs to add its required custom repositories. Charms that need custom repositories generally provide a config item for them (or hard code them).
[22:42] <stub> roryschramm: You might also find the apt-* environment settings in your juju environment can help (e.g. juju set-env apt-mirror=...)
[22:44] <roryschramm> aha the apt-mirror setting is what i was looking for
[22:44] <roryschramm> thanks
[22:45] <roryschramm> will that override the default repos in the trusty lxc image?