[01:46] Hi everyone.. another newbie issue.. I seem to be stuck on this bug - https://bugs.launchpad.net/juju-core/+bug/1566420
[01:47] Bug #1566420: lxd doesn't provision instances on first bootstrap in new xenial image
[01:47] any trick to get over this issue before it's fixed?
[02:17] rohit_: I'm kinda glad it's not just me that ran into it :)
[02:17] rohit_: what I did was bootstrap another lxd controller (just give it a new controller name)
[02:17] trying it now
[02:18] rohit_: you can clean up the original controller in the following hacky way:
[02:18] 1 - get the instance ID / container name.
[02:19] (you can do a juju status -m :admin and see the ID there)
[02:19] 2 - delete the controller using the lxd commands (lxc stop, then lxc delete)
[02:20] 3 - juju kill-controller
[02:21] I actually created a new controller before killing the old one
[02:22] and now all juju commands are failing with "no space left on device"
[02:24] gah, I was running into that on a small instance I had in AWS
[02:25] I had to clean up space in /tmp, then lxc stop / lxc delete
[02:36] I'm having a weird error with swift-proxy and the latest next charms as of now - http://pastebin.ubuntu.com/15752193/. Anyone seen anything like it before?
=== urulama__ is now known as urulama
[06:51] hi everyone
[06:51] need help regarding juju... I want to know two things
[06:51] first, where do we define the pool of IPs which eventually get allocated to juju services as their public address when we deploy them?
[06:51] secondly, in my setup when I deployed mysql it got the public address node0.maas and a port was allocated to it (3306)
[06:51] but when I deployed wordpress it again got the public address node0.maas
[06:52] but no port is allocated to it
[07:35] gnuoy, morning
[07:36] morning jamespage
[07:36] jamespage, got a sec for https://review.openstack.org/#/c/301171/ this morning?
[07:39] gnuoy, lgtm
[07:49] gnuoy, I have one more network space review to stack up - I missed neutron-gateway
[08:14] gnuoy, ok so I can't land https://review.openstack.org/303329 until https://review.openstack.org/#/c/303321/ lands
[08:14] as I need nova-cc to stop doing the migrations for neutron so that neutron-api can do them...
[08:15] gnuoy, could you have a look at https://review.openstack.org/#/c/303321/ ? wolsom already +1'ed it, but it needs a full review
[08:15] ack, yep
[08:25] gnuoy, I'm looking through the queue of features still up - any particular priority from your perspective?
[08:26] jamespage, nope
[08:27] gnuoy, do you have cycles for https://review.openstack.org/#/q/topic:network-spaces ?
[08:28] jamespage, I'm looking at Bug #1518771 - I think we need that fixed asap, particularly given the dpdk work. The solution I propose is to remove the fstab entry and instead write KVM_HUGEPAGES=1 to /etc/default/qemu-kvm and restart the qemu-kvm service. Does that sound sane? (I still need to check on systemd support for that approach)
[08:28] Bug #1518771: nova-compute hugepages breaks the boot process
[08:30] gnuoy, I'm not sure exactly what your proposed solution does tbh?
[08:30] gnuoy, ok just read /etc/default/qemu-kvm
[08:30] jamespage, http://paste.ubuntu.com/15754379/
[08:31] that's what the init script runs
[08:31] gnuoy, that looks like a reasonable solution
[08:32] jamespage, I do have cycles for those reviews, although I don't know ceph at all so I usually try to leave ceph reviews to someone with more expertise.
[08:33] gnuoy, ok if you could pick up the neutron-gateway one that would be appreciated...
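For reference, a minimal sketch of what gnuoy's proposed fix for bug 1518771 might look like in charm hook code — writing KVM_HUGEPAGES=1 to /etc/default/qemu-kvm and restarting the service. This is an illustration only, not the actual charm change: the helper name is hypothetical, it assumes charmhelpers' service_restart is available, and the removal of the old fstab entry is left out.

```python
# Sketch only (not the real nova-compute charm code): enable hugepage
# mounting via the qemu-kvm init script instead of an /etc/fstab entry.
from charmhelpers.core.host import service_restart  # assumed available to the charm

QEMU_KVM_DEFAULTS = '/etc/default/qemu-kvm'


def enable_qemu_kvm_hugepages():
    """Set KVM_HUGEPAGES=1 in /etc/default/qemu-kvm and restart qemu-kvm."""
    with open(QEMU_KVM_DEFAULTS) as f:
        # Drop any existing KVM_HUGEPAGES line, keep everything else.
        lines = [line for line in f if not line.startswith('KVM_HUGEPAGES=')]
    lines.append('KVM_HUGEPAGES=1\n')
    with open(QEMU_KVM_DEFAULTS, 'w') as f:
        f.writelines(lines)
    # Removing the old hugetlbfs line from /etc/fstab would happen here (omitted).
    service_restart('qemu-kvm')
```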
[08:33] jamespage, if I was to create a branch with that solution would it be easy for you to test that dpdk is still happy?
[08:33] gnuoy, not right now as I gave the dpdk machine back to rbasak
[08:33] jamespage, ok
[08:34] gnuoy, my only concern would be that ovs might race the qemu-kvm startup on boot, resulting in a non-running ovs with dpdk enabled.
[08:37] jamespage, one option is to create an upstart script and hook into the /run mounted event, eg "start on mounted MOUNTPOINT=/run"
[08:39] that might still race though, I guess
[08:44] jamespage, those ceph changes are trivial enough for me to review
[08:44] gnuoy, ta
[08:47] Hi
[08:53] need help regarding wordpress in juju
[08:53] I have deployed mysql and wordpress
[08:54] and exposed only wordpress
[08:54] here is my juju status http://paste.ubuntu.com/15754539/
[08:54] I cannot access wordpress from the browser
[08:54] and also no port is allocated to wordpress
[09:21] jamespage: I've not used it yet, but probably will want it this afternoon or tomorrow.
[09:27] jamespage, the only solution I can think of is to amend the ovs upstart script to contain similar logic to qemu-kvm, to mount hugepages if there's a variable set in /etc/default/openvswitch-switch
[09:30] gnuoy, hmm
[09:30] * jamespage thinks
[09:32] jamespage, I've just seen mention that there is hugepage config in /etc/dpdk/dpdk.conf - I assume that's not to do with mounting them in the first place
[09:32] * gnuoy goes to check
[09:34] gnuoy, that's part of the dpdk package itself...
[09:34] gnuoy, you might not have that installed depending ...
[09:36] jamespage, right, but I think amending /etc/default/qemu-kvm covers the non-dpdk use-case. The ovs-with-dpdk use case may be fixed with /etc/dpdk/dpdk.conf
[09:36] gnuoy, agreed on the non-dpdk use-case
[09:36] gnuoy, tbh I think that most deployments will set hugepages via kernel boot options
[09:37] gnuoy, at which point systemd mounts them on /dev/hugepages* anyway
[09:37] jamespage, does upstart?
[09:47] gnuoy, no
[09:47] gnuoy, but I don't really care about dpdk on anything other than 16.04 tbh
[09:47] gnuoy, am I allowed to drop the apache2/keystone check for amulet tests?
[09:48] gnuoy, I just tried to fix up swift for 2.7.0 and both charms fail on that check now...
[09:48] jamespage, ok, then I propose that I make the change to use /etc/default/qemu-kvm as it's better than where we are now
[09:48] jamespage, yes
[09:51] gnuoy, well I fixed them anyway
[10:13] gnuoy, I nacked https://review.openstack.org/#/c/299363/ on the basis of our 'MAAS should do this' conversation...
[10:34] gnuoy, https://review.openstack.org/#/q/topic:keystone-apache+status:open - both ready to go
[10:36] gnuoy, and https://review.openstack.org/#/q/topic:swift-2.7.0 which are based on those
[10:56] gnuoy, also I think I've introduced a bug with the neutron pullout
[10:56] working that now
[10:56] omelette, eggs, something something
[11:04] gnuoy, https://review.openstack.org/#/c/304034/
[11:05] fairly minor - the NeutronAPIContext was added conditionally before, but I think it's OK to add unconditionally now
[11:25] gnuoy, tbh if that's the only thing I got wrong, then I'm pretty happy :-)
[12:58] Hi
[13:01] o/
[13:05] cory_fu: me
[13:17] A-Kaser: http://interfaces.juju.solutions/
[13:20] jamespage, got a sec for https://review.openstack.org/#/c/304097/ ?
[13:21] gnuoy, already on it
[13:21] \o/ ta
[13:22] gnuoy, do we care about disabling hugepages?
[13:22] jamespage, it'd be nice I guess, why do you ask?
[13:23] gnuoy, we only write the qemu config when hugepages is enabled...
[13:23] well, we might write it under a write_all
[13:23] but..
[13:24] gnuoy, oh wait - there is a write_all at the bottom of config-changed...
[13:24] I'll comment appropriately
[13:26] gnuoy, two nits only
[13:31] cory_fu: https://jujucharms.com/u/frbayart/
[13:32] jamespage, ta
[13:33] gnuoy, note to self - remember that the git->bzr sync is not that regular...
[13:52] Hi, all. I've got another newb question: what's the best practice when it comes to storing passwords and other config data that a charm has generated? Is there a standard place to put stuff?
[13:53] I ask because I'm writing a couch charm, and couch actually hashes admin passwords that you drop into its config files.
[13:53] ... there isn't a good way to look and see what juju has generated, which makes it hard to log in as the couch admin and test to make sure that things work :-)
[13:57] petevg: in the MySQL charm we store the root password in something like /var/lib/mysql/mysql.passwd as 600 root.root
[13:57] A-Kaser: https://pythonhosted.org/amulet/
[13:57] petevg: it's not ideal, but the idea is that only root can read it, and it's in a place that makes sense for the service so the charm hooks can see it
[13:58] marcoceppi: that sounds sensible, even if it isn't ideal. :-) I can do a similar thing in /etc/couch. Thank you.
[14:01] A-Kaser: https://github.com/juju-solutions/bundletester
[14:03] A-Kaser: https://hub.docker.com/r/jujusolutions/charmbox/
[14:05] A-Kaser: https://jujucharms.com/docs/devel/developer-getting-started
[14:07] A-Kaser: https://jujucharms.com/docs/devel/authors-charm-store
[14:08] A-Kaser: http://review.juju.solutions/
[14:09] A-Kaser: https://jujucharms.com/docs/devel/authors-charm-store#recommended-charms
[14:11] A-Kaser: https://jujucharms.com/docs/devel/reference-charm-hooks#stop
[14:15] A-Kaser: https://jujucharms.com/big-data
[14:17] petevg: Since you probably need the password to hand out to clients, you can also store it on the relation. The PostgreSQL charm stores the replication password in the leadership settings (which is how peer units retrieve it), and stores client passwords on the client relations.
[14:19] stub: nice! I will look into that. Thank you.
=== cmagina_ is now known as cmagina
[14:38] cory_fu: https://jujucharms.com/u/tads2015dataart/mesos-slave/trusty/0/
[14:40] https://code.launchpad.net/~dataart.telco/charms/trusty/mesos-slave/trunk
[14:41] A-Kaser: https://jujucharms.com/u/dataart.telco/mesos-slave/trusty/
[14:47] marcoceppi: What needs to be installed to get `charm login` and `charm publish`?
[14:50] aisrael: charm from the devel ppa
[14:51] rick_h_: thanks. I think I've got marco's ppa added and maybe that's screwing me up
[14:53] Hm. Maybe not.
[14:53] cory_fu: thx a lot
[14:56] rick_h_: Got it working, thanks!
[15:01] A-Kaser: No problem. Glad I could be of help.
[15:05] gnuoy, I think this is nearly ready for landing - https://review.openstack.org/#/c/303329/
[15:05] I've submitted a recheck-full
[15:16] cory_fu: when will that hadoop_extra_.... fix land in the charms?
[15:16] cmagina: I will cut a release when I finish this meeting
[15:18] cory_fu: awesome, thanks much
[15:19] cmagina: Actually, I'm releasing it now. 6.4.3 should be available in a few seconds
[15:19] cory_fu: cool, I'll get to testing that :)
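To make the two suggestions above concrete for a charm like petevg's couch charm, here is a minimal sketch combining marcoceppi's root-only-file approach and stub's leadership-settings approach. It assumes charmhelpers' write_file, leader_get and leader_set are available; the file path and function name are made up for illustration and are not from any real charm.

```python
# Sketch only: keep a generated admin password where hooks (and peer units)
# can find it again - a 0600 root-owned file plus the leadership settings.
import os
from base64 import b64encode

from charmhelpers.core.host import write_file                  # assumed available
from charmhelpers.core.hookenv import leader_get, leader_set   # assumed available

ADMIN_PASSWD_FILE = '/etc/couchdb/admin.passwd'  # hypothetical location


def admin_password():
    """Return the shared admin password, generating it once on the leader."""
    password = leader_get('admin-password')
    if not password:
        # Only the leader may call leader_set(); non-leader units would wait
        # for the leader-settings-changed hook instead (not shown here).
        password = b64encode(os.urandom(18)).decode('ascii')
        leader_set({'admin-password': password})
    # Keep a root-only copy on disk so operators can log in and test.
    write_file(ADMIN_PASSWD_FILE, password.encode('utf-8'),
               owner='root', group='root', perms=0o600)
    return password
```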
[15:22] aisrael: what does `charm version` say?
[15:28] charm 2:2.1.0-0~ubuntu16.04.1~ppa0
[15:28] charm-tools 2.1.2
[15:29] aisrael: huh
[15:29] aisrael: there's one more thing to try
[15:29] aisrael: ppa:marcoceppi/xenial-chopper
[15:30] aisrael: add that and try again
[15:30] * aisrael gets to the choppa
[15:32] aisrael: there's a 2.1.0-1 which is slightly newer
[15:32] aisrael: it's basically latest master
[15:34] marcoceppi: same error. I'll file a bug.
[15:37] aisrael: ack, ta, it might be a server-side issue
[15:44] cory_fu: ping
[15:44] mbruzek: What's up?
[15:46] cory_fu: getting a charm build error with the basic layer, and I checked there is no basic layer on my local filesystem. http://paste.ubuntu.com/15761546/
[15:47] mbruzek: Can you link me to your layer's metadata.yaml?
[15:47] cory_fu: I don't have 22 lines in my metadata.yaml, nor do I have an assembled charm with that many lines
[15:48] https://github.com/mbruzek/layer-k8s/blob/master/metadata.yaml
[15:52] mbruzek: I just cloned that layer and built it without error
[15:52] hrmm.
[15:53] cory_fu: can you pull the eddy-master branch and try it again?
[15:54] charm 2:2.1.0-0~ubuntu16.04.1~ppa0
[15:54] charm-tools 2.1.2
[15:54] cory_fu: that is the version of charm-tools I am using
[15:54] mbruzek: Also builds w/o error
[15:54] Same version. (I'm building this in charmbox)
[15:55] charmbox:devel
[15:55] hrmm
[15:55] I can reproduce this failure... I don't see a local layer-basic, or basic directory in my layers directory
[15:56] mbruzek: hangouts to look?
[15:56] mbruzek: rather
[15:56] mbruzek: Why are you assuming the issue is with the basic layer? Do you have local copies of docker, flannel, tls, or kubernetes?
[15:56] mbruzek: build with -L debug and paste the output
[15:57] marcoceppi: https://paste.ubuntu.com/15761546/
[15:58] It looks like it pulls in flannel and basic from remote, but you could have local versions of the other layers
[15:58] cory_fu: marcoceppi: That pastebin was with -L DEBUG
[15:58] mbruzek: it seems to be failing on layer:docker
[15:58] /home/mbruzek/workspace/layers/docker/metadata.yaml
[15:59] that layer is b0rk3d
[15:59] ah there it is!
[15:59] When I read the output I thought it was in the basic layer - where do you see the docker layer?
[16:00] mbruzek: the end of the traceback
[16:00] mbruzek: the last three lines
[16:00] I see
[16:00] sorry marcoceppi and cory_fu
[16:00] mbruzek: file a bug, we should have better error handling
[16:01] marcoceppi: well, it does actually have the file in question there
[16:01] +1, that error handling sucks and should be improved
[16:01] I just didn't see it
[16:01] mbruzek: yeah, but it should just say "There is malformed yaml in FILE"
[16:01] mbruzek: that way you don't bleed while looking at output
[16:02] mbruzek: 2.1.3 for sure
[16:02] maybe I can work on this one since I signaled a false alarm
[16:07] interesting, when the charm layer defines series in metadata, it changes the destination directory to builds/$charm ?
[16:14] lazyPower: it should be builds/output
[16:14] lazyPower: err
[16:14] yes
[16:14] mbruzek: If you do work on it, make sure you add docstrings. ;)
[16:14] interesting... looks like I need to add another volume mount to charmbox then
[16:14] cory_fu: Oh there will be a plethora of comments, trust me
[16:14] ta for the info ;)
[16:14] lazyPower: https://github.com/juju/charm-tools/issues/115
[16:15] lazyPower: ah, yeah
[16:16] stub: You around?
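As a rough illustration of the friendlier error handling marcoceppi describes — naming the offending layer file up front instead of leaving it at the bottom of a traceback — something along these lines would do. The function is hypothetical and is not the actual charm-tools implementation.

```python
# Sketch only: report which layer file contains malformed YAML up front,
# rather than letting the raw parser traceback bleed through.
import sys
import yaml


def load_layer_metadata(path):
    """Load a layer's metadata.yaml, naming the file if the YAML is malformed."""
    with open(path) as f:
        try:
            return yaml.safe_load(f)
        except yaml.YAMLError as err:
            sys.exit("There is malformed yaml in {}: {}".format(path, err))
```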
[16:38] jamespage, https://review.openstack.org/#/c/304097/ has passed a full amulet run if you have a sec
[16:44] gnuoy, ditto - https://review.openstack.org/#/c/303329/
[16:47] jamespage, amulet is still running on your mp
=== cos1 is now known as c0s
[16:51] gnuoy, oh yes - that was just the smoke from my removal of .unit-state.db
[16:53] gnuoy, hugepages: DONE
[16:53] jamespage, fantastic, thanks
[17:14] Is it easily possible to do a new install with mitaka and juju?
[17:21] firl o/
[17:21] ?
[17:21] firl I'm not sure what you're asking about "easy to do a new install"
[17:21] are you asking if you can deploy mitaka with juju?
[17:22] I haven't seen any mailings / docs on doing a mitaka install ( which maas image ), any prebuilt etc
[17:22] or if I should just stick with trusty-mitaka ( if it exists )
[17:22] ah, the /next branches of the charms target mitaka, and I'm fairly certain those are being landed as stable as we speak
[17:22] beisner ddellav thedac - am I correct in the above statement?
[17:23] lazyPower: they are being tested now but will land in stable in a couple of weeks
[17:23] ah ok, I thought that push was this week. my b.
[17:24] Gotcha, is there a place to get the latest info on this? ( just this channel in a couple of weeks? )
[17:24] firl: we will post to the juju mailing list when we release
[17:25] thedac: thanks! do you know if there will be a sample bundle, or if heat/ceilometer will easily be installed also?
[17:25] thedac, beisner would you mind keeping an eye on these https://review.openstack.org/#/q/topic:enable-xenial-mitaka-amulet and landing them if they pass?
[17:27] firl: The next bundle should already exist. Let me find that for you. I know ceilometer has had some feature split with aodh, so I am not sure that will be ready.
[17:29] firl: https://jujucharms.com/u/openstack-charmers-next/
[17:30] can I get a review on https://github.com/juju/docs/pull/975
[17:31] thedac: perfect, should I be switching to Xenial Xerus for it, or stay on trusty for a while?
[17:32] firl: mitaka will be supported on both. So it depends on your taste for being on the latest and greatest
=== natefinch is now known as natefinch-lunch
[17:54] gnuoy, sure, np
[18:33] cherylj: I am not sure if the machines are showing up in juju status
[18:33] rohit_: do you see any machines in your 'juju status' output re https://launchpad.net/bugs/1566420
[18:33] Bug #1566420: lxd doesn't provision instances on first bootstrap in new xenial image
[18:34] rohit_, arosales if you do see machines, then I'm willing to bet that it's a dup of bug 1567683
[18:34] Bug #1567683: Agents stuck in "Waiting for agent initialization to finish" with lxd provider
[18:35] yes.. machines are created.. but the juju agent doesn't initialize
[18:35] rohit_: can you paste the output of lxc list?
[18:35] I bet it's that bug above ^^
[18:35] * arosales looks at lp:1567683
[18:37] https://www.irccloud.com/pastebin/C2EsRpP7/
[18:37] yes.. it's the identical issue
[18:37] identical to 1567683
[18:37] rohit_: but I don't see a second lxd container for a service you've deployed?
[18:37] (unless you took that snapshot just after bootstrapping, but before deploy)
[18:37] I deleted it.. a sec ago
[18:38] that would explain it
[18:38] I am cleaning it all up before I switch to an older version of juju
[18:38] rohit_: you can hack around it by reconfiguring your lxdbr0 to use 10.0.3.1 as the bridge IP. Should trigger the containers to use 10.0.4.1 as their bridge
[18:38] ok
[18:49] lazyPower: charmbox:devel doesn't have make installed in it, so make lint fails during reviews
[18:49] marcoceppi - why isn't setup installing make?
[18:49] lazyPower: I don't know, report from c0s
[18:50] hmm, I suppose that's reason enough to have it in the base image
[18:50] lemme see how big it bloats it, 1 sec
[18:51] lazyPower: ack, ta
[18:53] cory_fu: you're right - 'make lint' catches a bunch of the python bugs in the layer's code
[18:53] do you think it would make sense to add 'make lint' as part of the charm build?
[18:54] marcoceppi - no noticeable change, incoming PR to add make
[19:00] that pretty much sums it up: https://twitter.com/c0sin/status/719600673466114049
[19:01] thedac, beisner: hey, can either of you +2/+1 https://review.openstack.org/#/c/303329/ - it's done a full charm-recheck and completes the remove-neutron-from-nova work
[19:02] jamespage: will do
=== redir is now known as redir_lunch
=== thomnico is now known as thomnico|Brussel
=== matthelmke is now known as matthelmke-afk
=== natefinch-lunch is now known as natefinch
=== matthelmke-afk is now known as matthelmke