[00:30] Hi. I asked this last night but maybe everyone was asleep
[00:30] Is there a trick to running "juju quickstart mediawiki-single"
[00:30] under virtualbox using Ubuntu 14.04..
[00:31] Both the computer and the virtualized computer are Ubuntu 14.04
[00:31] It just hangs on machine 1 provisioning is pending
=== natefinch-afk is now known as natefinch
=== scuttlemonkey is now known as scuttle|afk
[07:27] sam__344: Did you start the second box? Is it running via maas? etc..
[07:29] BlackDex: I did nothing but follow the instructions here, up to step #2. https://jujucharms.com/get-started
[07:29] I'm running it inside virtual box, not deploying to virtualbox
[07:30] Ah, so the machine you need to deploy it to is bare-metal
[07:31] Well I'm just playing with it for fun. But I don't want it to mess up my laptop so I'm using VirtualBox. I'd expect it to just work the same as if I was following the directions directly on my laptop
[07:32] sure
[07:32] if you just select the create a local environment
[07:33] You need to have JuJu know where to deploy stuff to. That may be virtual, lxc, bare-metal, cloud etc..
[07:34] okay. So what should I type instead of "juju quickstart mediawiki-single"
[07:36] By the way BlackDex - I just rebooted the virtual machine and tried it again. Got INFO jujuclient:328 Juju upgrade in progress...
[07:36] over and over again
[07:36] until finally "juju-quickstart: error: cannot retrieve the environment type: upgrade in progress - Juju functionality is limited"
[08:07] hmm
[08:07] that is strange
[08:07] never did a local deploy btw
[08:45] Hello there, on my setup juju seems to timeout after a while. The bootstrap node is running on a KVM-Instance and can connect to MAAS etc.. the deployed nodes are also receiving commands from juju, but after a while it seems to stop
[08:46] if i then reboot the bootstrap node, some of those machines come to life without doing anything.
[08:46] Is this a problem with the juju-api or mongodb? what can i do to try and solve this?
=== Guest84891 is now known as CyberJacob
[13:11] Hi, we are developing IBM HTTP Server as a subordinate charm and we want to test the charm with ubuntu-devenv. (HTTP Server is a web server that can be used by other IBM products, e.g. WAS. We are developing it as a layer and decided to use the http interface from interfaces.juju.solutions.) So could you please include the http interface in the ubuntu-devenv metadata.yaml file under provides?
[13:18] shruthima: what is ubuntu-devenv? Do you have a link to it somewhere?
[13:21] this is the link https://jujucharms.com/u/kwmonroe/ubuntu-devenv/trusty/5/
[13:21] any chance of getting https://code.launchpad.net/~chris.macnaughton/charm-helpers/pids-can-be-a-list/+merge/288014 reviewed soon?
[13:22] icey, i'll review/comment
[13:23] thanks beisner :)
[13:24] shruthima: I'm not sure that ubuntu-devenv makes sense to support the http interface, as it provides a developer environment and not anything related to HTTP or web services. Maybe it would make more sense to test the IBM HTTP server with the existing haproxy charm?
[13:25] https://jujucharms.com/haproxy/
[13:29] ok, will explore how i can use the haproxy charm to test http server. if any queries i'll get back. Thank you :)
[13:40] coreycb, thx for the swift-* reviews. with your +1 and the full test pass, i think we're clear to land those, yah?
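[editor's note] For the quickstart-in-VirtualBox thread above: with Juju 1.x the usual approach was to define a "local" (LXC-based) environment in ~/.juju/environments.yaml and point quickstart at it, so Juju knows to deploy into containers on the same machine rather than waiting for a second box. A minimal sketch, assuming the 1.x local provider and juju-quickstart's -e flag:

    # ~/.juju/environments.yaml
    default: local
    environments:
      local:
        type: local
        default-series: trusty

    # then deploy the bundle into that environment
    juju quickstart -e local mediawiki-single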
[13:41] beisner, yes I think so
[13:43] beisner, I've also tested with jamespage's nova-cc patch successfully but that should probably have a more active charmer review it
[13:45] coreycb, yep thedac is queued up on the n-c-c review.
[13:45] cool
[13:58] icey, remind me, which charm test were you updating to take that list validation?
[13:59] a ceph-mon test
[13:59] looking...
[14:00] beisner: https://review.openstack.org/#/c/287446/7/tests/basic_deployment.py
[14:00] line 185, changed from {'ceph-osd': 2} to {'ceph-osd': True}
[14:01] would rather be: {'ceph-osd': [2, 3]}
=== MagicSponge is now known as UbuntuSponge
=== UbuntuSponge is now known as HaloSponge
[14:14] icey, thx. see review comment.
[14:14] beisner: saw it, thanks
[14:17] icey, i'd be +1 to rename to just e_pids in that whole method
[14:17] as it's also a funky name for the existing bool scenario
[14:17] icey, or just "expected"
=== scuttle|afk is now known as scuttlemonkey
[14:48] jujucharms.com is down
[14:49] narindergupta: seems a company network issue, we're looking into it
[14:49] narindergupta: thanks
=== urulama_ is now known as urulama
[14:50] urulama: thanks
[14:54] this is why juju needs multicloud models, so you can fail over outside of the core network ;)
[14:55] magicaltrout: :)
[14:56] :D
=== caribou_ is now known as caribou
=== sparkieg` is now known as sparkiegeek`
=== mhilton-lunch is now known as mhilton
=== fginther` is now known as fginther
[15:22] jamespage: do you have the bundle you used for openstack on your laptop in brussels pushed somewhere?
[15:23] I'm ready to melt my PC
[15:23] yessssss
[15:24] jcastro: I think jamespage is on vacay
[15:28] marcoceppi - do we know if 1.25.x is capable of using the devel channel in the store or is that a 2.0+ only feature?
[15:28] lazyPower: no idea, urulama ^?
[15:29] lazyPower: 2.0 only
[15:29] ack, ta
[15:35] tinwood, novaclient.v1_1.client fix landed
[15:35] great stuff. Thanks gnuoy!
[15:36] np
[15:55] beisner: you said jamespage's nova-cc PR has already passed full amulet tests? Or do I need to just approve the workflow bit?
=== dames is now known as thedac
[15:56] thedac, re: https://review.openstack.org/#/c/291721/ Yep, if you're not seeing it, hit the Toggle CI button to expand the history a bit.
[15:56] thedac, my only hold-back was the self-comment on https://review.openstack.org/#/c/291721/1/hooks/nova_cc_hooks.py
[15:56] ok
[16:05] jcastro - bite sized PR if you have a moment to TAL - https://github.com/juju-solutions/bundle-logstash-core/pull/1
[16:06] + juju deploy ~containers/bundle/logstash-core
[16:06] is that right? wouldn't it not be in the containers namespace?
[16:06] thats what works today
[16:06] when it's promulgated, either/or
[16:07] wait, i just committed an act of heresy, i just referenced a bundle as "promulgated"
[16:07] thedac, fyi it'll take a code-review +2 along with that workflow +1 to trigger the merge. lmk if you don't have privs for the +2.
[16:07] it just seems that you would document where it will be at with a clean namespace
[16:07] if it doesn't work in a copy/pasteable format, i'm hesitant to plug it in there ya know?
[16:07] who is going to remember ~containers/logstash-core?
[16:07] I just want to remember logstash-core
[16:07] beisner: I do. I am going to respond back to jamespage about the liberty-> mitaka upgrade path.
[16:07] also jcastro - should i rename it to elastic-stack?
[16:07] thedac, ok cool, thx
[16:07] or elk-stack?
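[editor's note] On the {'ceph-osd': [2, 3]} suggestion above: the charm-helpers change under review lets an amulet test express an expected process count as an exact int, a bool (one or more), or a list of acceptable counts. A sketch of the three forms as they might appear inside an amulet test (the get_unit_process_ids/validate_unit_process_ids helpers from charmhelpers.contrib.amulet.utils are cited from memory; self.*_sentry and u are the usual deployment fixtures in these tests):

    # expected process counts per unit, keyed by process name
    expected_processes = {
        self.ceph_osd_sentry: {
            'ceph-osd': 2,               # exactly two processes
        },
        self.ceph_mon_sentry: {
            'ceph-mon': True,            # at least one; exact count may vary
            'ceph-create-keys': [0, 1],  # any count in the list is acceptable
        },
    }
    actual_pids = u.get_unit_process_ids(expected_processes)
    ret = u.validate_unit_process_ids(expected_processes, actual_pids)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)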
[16:08] yeah
[16:08] elk-stack
[16:08] ok, will rename the repository after PR feedback, already updating the README
[16:09] looking at the bundle, one charm is in ~containers, one's in the promulgated store, and the jdk is in kwmonroe's personal namespace
[16:11] nod, don't block on me
[16:12] I'm just saying we should be pushing for bundles using reviewed charms, not that I don't trusty kev, heh
[16:12] :)
[16:12] its in the revq ;)
[16:12] currently at what? #7?
[16:12] lazyPower: 'zulu8' is a drop-in layered promulgated openjdk replacement if you'd rather use that.
[16:13] heh, he just went from reviewing a readme diff to "retest the whole thing with a new java layer"
[16:13] jcastro - this is all pretext work to propose for ~charmer, things have to work when you submit for review or we tend to nack them. charms are never done after all ;)
[16:13] kwmonroe - you're not helping me by adding work
[16:13] <3
[16:13] lolz.
[16:13] but thats easily done
[16:14] actually, kwmonroe - want me to kick the tires on testing that?
[16:14] testing what? java is java. just use it: https://jujucharms.com/zulu8/
[16:14] AMIRITE MATT?!?!
[16:15] the one day java was set up for a slam dunk, and matt is in Hawaii
[16:15] no doubt
[16:15] * lazyPower grins at the irony
[16:16] i read ironing
[16:16] which was a little odd
[16:18] indeed
[16:26] gnuoy: I am thinking about the keystone case issue: a compromise would be to not lowercase on create, but only in the checks (read only). I see that keystone/hooks/manager.py may be partially at fault.
=== natefinch is now known as natefinch-lunch
[16:31] thedac, sorry, you're suggesting we create case sensitive but read case-insensitive?
[16:32] gnuoy: yes, one sec
[16:33] gnuoy: something like http://pastebin.ubuntu.com/15393100/
[16:36] thedac, that makes sense to me
[16:36] gnuoy: ok, I'll submit a pull request
[16:37] kk
[16:37] beisner, looks like the tests on the ceph-osd charm were deploying the wrong services
[16:37] when you destroy-model the model still shows up in list-models but as 'no longer alive'
[16:37] is this expected?
[16:37] beisner, it was deploying ceph and not ceph-mon
[16:38] cholcombe, indeed. the test in ceph-osd deploys the ceph charm. https://github.com/openstack/charm-ceph-osd/blob/master/tests/basic_deployment.py#L47
[16:38] nm found the bug
[16:39] beisner, yeah i changed it to ceph-mon which should fix the problem
[16:41] cholcombe, ok. be aware, that marks the beginning of the end of ceph+ceph-osd functional integration. i'd suggest that we add ceph-osd to the ceph charm's amulet test @ https://github.com/openstack/charm-ceph/blob/master/tests/basic_deployment.py#L47 right along with these changes so we don't completely lose coverage.
[16:41] beisner, yeah good point
[16:42] beisner, yeah i'll cut a branch for that integration
[16:42] cholcombe, sweet, thanks
[16:57] gnuoy: fyi: https://review.openstack.org/293043
[17:03] marcoceppi: https://bugs.launchpad.net/juju-core/+bug/1557633
[17:03] Bug #1557633: status has duplicate entries for relationships
[17:03] look at how huge that list ended up being
[17:05] marcoceppi: nuage charms:
[17:05] https://jujucharms.com/nuage-vsc/trusty/
[17:05] https://jujucharms.com/nuage-vsd/trusty/
[17:05] https://jujucharms.com/nuage-vrs/trusty/
[17:05] https://jujucharms.com/nuage-vrsg/trusty/
[17:13] tinwood, +2'd your keystone change. Thanks for all your work on that
[17:14] gnuoy, sadly OSCI doesn't agree! (I'm guessing keystone auth problems).
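[editor's note] The pastebin link above has since expired; the compromise thedac and gnuoy settle on is "create with the caller's case, but compare case-insensitively when checking whether an entry already exists". A hypothetical sketch of that idea in Python (the helper name and surrounding calls are invented for illustration and are not the actual keystone charm code):

    def find_existing(name, existing_names):
        """Return the stored name that matches case-insensitively, else None."""
        wanted = name.lower()
        for existing in existing_names:
            if existing.lower() == wanted:
                return existing
        return None

    # creation preserves the requested case; the existence check ignores case
    if find_existing(requested_name, list_existing_names()) is None:
        create_entry(requested_name)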
[17:15] tinwood, gnuoy - hook failed: "install" for trusty and wily on that keystone change, inspecting unit logs now..
[17:16] thanks beisner
[17:16] tinwood, gnuoy 2016-03-15 17:09:55 INFO install ImportError: cannot import name get_admin_domain_id
[17:17] beisner, oo, nasty - that must be me.
[17:17] tinwood, gnuoy - http://pastebin.ubuntu.com/15393468/
[17:17] gnuoy, gotta wait for both "Jenkins" and "Canonical CI" to +1
[17:18] looks like patchset 3 is the culprit
[17:18] I will check and get back to you both. My box is broken (until I can get an install of amulet working).
[17:20] yeah, definitely me with merge conflicts. However, there's no test for get_admin_domain_id() - i.e. it passes unit tests. I'll dig into that too.
[17:24] beisner, where does amulet get ceph-mon from?
[17:25] beisner, i realize this is a vague question haha. What i'm wondering is which version of ceph-mon it is using. There are a few different ceph-mons under different namespaces
[17:25] cholcombe, if it's in other_services, launchpad. if it's the ceph-mon charm test itself, it gets it from wherever the change originated (refspec)
[17:25] beisner, yeah it's in other_services
[17:25] cholcombe, it should be pulling from ceph-mon/next
[17:26] beisner, ok that's what i was wondering. I'm failing to deploy on precise and I'm wondering if it exists
[17:26] cholcombe, it uses the trusty charm for everything currently (precise through xenial)
[17:26] beisner, hmm ok
[17:26] i must've broken something then
[17:33] whats up everyone? I'm trying to bootstrap lxd provider using juju2, does anyone know where I can find an example of a config.yaml to feed juju2?
=== natefinch-lunch is now known as natefinch
[17:47] bdx `juju bootstrap lxd lxd --upload-tools --config="default-series=trusty"` is what I've been doing
[17:47] anybody have suggestions for subordinate charms being a mix of layered and old-school charms?
[17:48] icey: thanks
[17:48] thedac: thanks
[17:49] thedac: these are already promulgated though?
[17:49] marcoceppi: I suspected that. jamespage promulgated them last week. Again I am trying to get straight the review process for future changes to those charms.
[17:50] thedac: yes, so that's not 100% clear, I'll look into it to find out exactly how they were uploaded, etc
[17:50] Sounds like http://review.juju.solutions/ will be (but is not yet) the correct location for nuage (or anyone else) to submit change promulgation requests. Is that correct?
[17:51] marcoceppi: ok, thanks. We could surely use some documentation on this. That or I have not had enough coffee today.
[17:52] thedac: documentation definitely, we're locking down a new reviewqueue with the new charm command due out this month
[17:52] thedac: but basically yes, nuage, or whomever, would upload charms to their development channel - since those charms are promulgated, they'll have to go through a review to get them published to the stable channel
[17:53] but development will be done wherever the code for the charm lives, where previously anyone could contribute by just opening a merge request against an lp branch
[17:53] so if they house the code for the charms in gerrit, or github, or lp, that's where people will contribute to, and it's up to them to do uploading/submitting to the store
[17:53] ok
[18:05] yay! amulet reinstalls under xenial :)
[18:08] rick_h__: Hey. Has anyone suggested adding a message param to `charm publish` so that we can indicate just what is included in the new rev?
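[editor's note] Expanding on icey's answer to bdx: the Juju 2.0 betas need no environments.yaml for the lxd provider; model settings are passed at bootstrap time with --config. A minimal sketch using the beta-era argument order quoted in the chat, and assuming --config also accepts a path to a YAML file (it does in later 2.0 releases):

    # config.yaml -- model settings for the new controller
    default-series: trusty

    juju bootstrap lxd lxd --upload-tools --config=config.yaml
    # inline equivalent, exactly as used above:
    # juju bootstrap lxd lxd --upload-tools --config="default-series=trusty"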
[18:09] cory_fu: yes, that part is still missing
[18:09] cory_fu: yes, it's supposed to be updated to take one like a 'commit message'
[18:09] Ok, cool, so it's already planned. Nice
[18:10] I'm finding that the lack of info about the charm revs is making my development process more difficult because I can't recall what I've released when
[18:12] tinwood, just to confirm, we need to tackle keystone @ master as a crit. she's bust.
[18:14] tinwood, ie. with keystone install hook failing, all other amulet tests for other peeps will also be blocked.
[18:14] tinwood, two paths forward: post a change to fix it, or post a change to revert that last one. whaddaya think?
[18:14] beisner, I've sorted it out. I'm just finishing the merge and will push again.
[18:15] tinwood, awesome
[18:22] charmers, core, dev: I see spaces can be assigned like so -> juju deploy mysql --bind "server=database cluster=internal"
[18:22] charmers, core, dev: does this indicate that I should have an affinity between subnets and spaces?
[18:23] charmers, core, dev: if my space-0 has multiple subnets e.g. http://paste.ubuntu.com/15393950/
[18:23] charmers, core, dev: how can I `bind` a specific subnet in a space? .... Is this a thing yet?
[18:24] beisner, it seems that my review is closed. Do I need to branch again and push a new review?
[18:25] bdx - I would redirect that at the list. The primary author of the spaces feature is in EU time, and is the most well equipped to answer those questions
[18:26] lazyPower: totally, thanks
[18:26] beisner, or do I hit the revert button?
=== tinwood is now known as tinwood-dinner
[18:56] tinwood-dinner, since that change was merged, it'll have to be a separate proposal
[18:56] Okay, will do. Just finishing eating.
[18:57] me too
=== tinwood-dinner is now known as tinwood
[19:26] beisner, I'm just running some tests to make sure it is all still okay. Be about 5-10 minutes for the review to pop up (I hope!)
[19:27] tinwood, excellent, thank you
[19:34] bcsaller: I went ahead and added the --show-report option to charm-build: https://github.com/juju/charm-tools/pull/135
[19:38] cory_fu: utils.delta_signatures wasn't helpful there? (thanks btw :)
[19:38] ...
[19:38] It probably would have been
[19:38] the validate method above this commit is similar, but not the same
[19:40] bcsaller: Ah, yeah. delta_sigs checks if the files on disk don't match the manifest, but by the time the files on disk are updated, the manifest has been overwritten as well. I guess I could call report before write_signatures
[19:41] I could do the same with clean_removed. Hrm
[19:45] should this "just work"? i get 'ERROR cmd supercommand.go:448 no registered provider for "lxd"' : 'juju bootstrap mycontroller lxd'
[19:47] beisner, it's gone up and failed its CI? It failed very quickly, so I'm wondering if that's because it's blocked?
[19:49] tinwood, fail fast ;-) ...
[19:49] * beisner looks
[19:58] tinwood: yeah, sorry about that, amulet is fixed again though
[19:58] pmatulis: what version of Ubuntu?
[19:58] beisner, hey, no that's fine. This is, after all, of my doing sadly.
[19:59] marcoceppi: 2.0-beta2-0ubuntu1~16.04.1~juju1
[19:59] sorry, Xenial
[20:00] pmatulis: when do you get that error? is there a bunch of text first or is it almost immediately, if it's the former --upload-tools is needed for now, otherwise that's another issue
[20:01] marcoceppi: oh yeah, it runs for a good while
[20:01] marcoceppi: lemme try --upload-tools . why do i need that again? b/c it tries to use a trusty container?
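[editor's note] On bdx's spaces questions above (the answer was deferred to the mailing list in the chat): --bind maps charm endpoints, or the whole application by default, to spaces rather than to individual subnets; a space is a group of subnets, and Juju picks an address from whichever of the space's subnets is reachable on the machine. A minimal sketch of the syntax, with example space names:

    # bind "server" to the "database" space, "cluster" to "internal",
    # and every other endpoint to "default-space"
    juju deploy mysql --bind "default-space server=database cluster=internal"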
[20:01] tinwood, unit test really did fail on that
[20:02] pmatulis: because the trusty agent doesn't have lxd provider compiled in
[20:02] beisner, hmm, strange, everything passed on my bastion. what was the problem?
[20:02] marcoceppi: roger. trying now...
[20:03] tinwood, see priv channel for priv link
[20:03] beisner: any chance on more review on https://code.launchpad.net/~chris.macnaughton/charm-helpers/pids-can-be-a-list/+merge/288014 ?
[20:06] marcoceppi: thank you - works
[20:08] icey, on a hot issue atm, will circle back
[20:08] bcsaller: Actually, I can't use delta_signatures for clean_removed after all, because the whole point is that they don't get removed, so d_s won't pick them up. :/
[20:08] no worries beisner
[20:09] cory_fu: :-/
[20:09] heh
=== natefinch is now known as natefinch-afk
[20:48] jcastro - how much ram do you have in your rig? and what did resource usage look like with openstack in lxd on your lappy?
[20:52] bcsaller: PR updated
[20:52] Also, lazyPower and marcoceppi, since you both commented
[20:53] marcoceppi: Your :BOAT: didn't sail. ;)
=== Guest93664 is now known as med_
[21:49] is it possible to specify a custom apt repository when juju deploys an app to an lxc container? ie the lxc container will use http://myrepo/ubuntu in /etc/apt/sources.list
[22:40] roryschramm: Not globally. Each charm needs to add its required custom repositories. Charms that need custom repositories generally provide a config item for them (or hard code them).
[22:42] roryschramm: You might also find the apt_ environment settings in your juju environment can help (eg. juju set-env apt-mirror=...)
[22:44] aha the apt-mirror setting is what i was looking for
[22:44] thanks
[22:45] will that override the default repos in the trusty lxc image?
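[editor's note] Closing out roryschramm's question: apt-mirror is applied through cloud-init when Juju provisions a new machine or container, so it should replace the default archive URLs in the instance's sources.list rather than adding an extra repository; per-charm extra repositories still come from each charm's own config. A minimal sketch using the placeholder mirror URL from the question (Juju 1.x set-env syntax, as used in the chat):

    juju set-env apt-mirror=http://myrepo/ubuntu
    # confirm the setting
    juju get-env apt-mirror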