rohit_ | Hi everyone.. another newbie issue.. I seem to be stuck on this bug - https://bugs.launchpad.net/juju-core/+bug/1566420 | 01:46 |
mup | Bug #1566420: lxd doesn't provision instances on first bootstrap in new xenial image <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1566420> | 01:47 |
rohit_ | any trick to get over this issue before it's fixed? | 01:47 |
cherylj | rohit_: I'm kinda glad it's not just me that ran into it :) | 02:17 |
cherylj | rohit_: what I did was bootstrap another lxd controller (just give it a new controller name) | 02:17 |
rohit_ | trying it now | 02:17 |
cherylj | rohit_: you can clean up the original controller in the following hacky way: | 02:18 |
cherylj | 1 - get the instance ID / container name. | 02:18 |
cherylj | (you can do a juju status -m <first controller name>:admin and see the ID there) | 02:19 |
cherylj | 2 - delete the controller using the lxd commands (lxc stop, then lxc delete) | 02:19 |
cherylj | 3 - juju kill-controller <first controller name> | 02:20 |
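A minimal sketch of that hacky cleanup, assuming the container name is the one shown by `juju status` (all names here are placeholders):

```sh
# 1. find the container backing the stuck controller
juju status -m <first-controller-name>:admin   # note the instance ID / container name
# 2. remove the container with the lxd client
lxc stop <container-name>
lxc delete <container-name>
# 3. tell juju to forget the controller
juju kill-controller <first-controller-name>
```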
rohit_ | I actually created a new controller before killing the old one | 02:21 |
rohit_ | and now all juju commands are failing with "no space left on device" | 02:22 |
cherylj | gah, I was running into that on a small instance I had in AWS | 02:24 |
cherylj | I had to clean up space in /tmp, then lxc stop / lxc delete | 02:25 |
bradm | I'm having a weird error with swift-proxy and the latest next charms as of now - http://pastebin.ubuntu.com/15752193/. Anyone seen anything like it before? | 02:36 |
=== urulama__ is now known as urulama | ||
freak__ | hi everyone | 06:51 |
freak__ | need help regarding juju... I want to know 2 things | 06:51 |
freak__ | first, where do we define the pool of IPs that eventually get allocated to juju services as public addresses when we deploy them? | 06:51 |
freak__ | secondly, in my setup when I deployed mysql it got the public address node0.maas and a port (3306) was allocated to it | 06:51 |
freak__ | but when I deployed wordpress it again got the public address node0.maas | 06:51 |
freak__ | but no port is allocated to it | 06:52 |
jamespage | gnuoy, morning | 07:35 |
gnuoy | morning jamespage | 07:36 |
gnuoy | jamespage, got a sec for https://review.openstack.org/#/c/301171/ this morning ? | 07:36 |
jamespage | gnuoy, lgtm | 07:39 |
jamespage | gnuoy, I have one more network space review to stack up - I missed neutron-gateway | 07:49 |
jamespage | gnuoy, ok so i can't land https://review.openstack.org/303329 until https://review.openstack.org/#/c/303321/ lands | 08:14 |
jamespage | as I need nova-cc to stop doing the migrations for neutron so that neutron-api can do them... | 08:14 |
jamespage | gnuoy, could you have a look at https://review.openstack.org/#/c/303321/ ? wolsom already +1'ed it, but it needs a full review | 08:15 |
gnuoy | ack, yep | 08:15 |
jamespage | gnuoy, I'm looking through the queue of features up still - any particular priority from your perspective? | 08:25 |
gnuoy | jamespage, nope | 08:26 |
jamespage | gnuoy, do you have cycles for https://review.openstack.org/#/q/topic:network-spaces ? | 08:27 |
gnuoy | jamespage, I'm looking at Bug #1518771, I think we need that fixed asap, particularly given the dpdk work. The solution I propose is to remove the fstab entry and instead write KVM_HUGEPAGES=1 to /etc/default/qemu-kvm and restart the qemu-kvm service. Does that sound sane? (I still need to check on systemd support for that approach) | 08:28 |
mup | Bug #1518771: nova-compute hugepages breaks the boot process <openstack> <upstart :New> <nova-compute (Juju Charms Collection):Confirmed for gnuoy> <https://launchpad.net/bugs/1518771> | 08:28 |
jamespage | gnuoy, I'm not sure exactly what your proposed solution does tbh? | 08:30 |
jamespage | gnuoy, ok just read /etc/default/qemu-kvm | 08:30 |
gnuoy | jamespage, http://paste.ubuntu.com/15754379/ | 08:30 |
gnuoy | thats what the init script runs | 08:31 |
jamespage | gnuoy, that looks like a reasonable solution | 08:31 |
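A rough shell sketch of the approach gnuoy proposes, run by hand on a compute host; this is not the actual charm change:

```sh
# drop the hugetlbfs entry the charm previously wrote to fstab
sudo sed -i '/hugetlbfs/d' /etc/fstab
# let the qemu-kvm init script take over mounting hugepages
echo 'KVM_HUGEPAGES=1' | sudo tee -a /etc/default/qemu-kvm
# restart so the init script re-runs its mount logic
sudo service qemu-kvm restart
```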
gnuoy | jamespage, I do have cycles for those reviews although I don't know ceph at all so usually try and leave ceph reviews to someone with more expertise. | 08:32 |
jamespage | gnuoy, ok if you could pickup the neutron-gateway one that would be appreciated... | 08:33 |
gnuoy | jamespage, if I was to create a branch with that solution would it be easy for you to test dpdk is still happy? | 08:33 |
jamespage | gnuoy, not right now as I gave the dpdk machine back to rbasak | 08:33 |
gnuoy | jamespage, ok | 08:33 |
jamespage | gnuoy, my only concern would be that ovs might race the qemu-kvm startup on boot, resulting in a non-running ovs with dpdk enabled. | 08:34 |
gnuoy | jamespage, the option is to create an upstart script and hook into the /run mounted event eg "start on mounted MOUNTPOINT=/run" | 08:37 |
gnuoy | that might still race though I guess | 08:39 |
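A rough illustration of the upstart idea gnuoy describes (job name and file path are assumptions; /run/hugepages/kvm is the usual qemu-kvm hugepage mount point on Ubuntu):

```sh
sudo tee /etc/init/kvm-hugepages.conf <<'EOF'
description "mount hugetlbfs for kvm once /run is available"
start on mounted MOUNTPOINT=/run
task
pre-start exec mkdir -p /run/hugepages/kvm
exec mount -t hugetlbfs hugetlbfs-kvm /run/hugepages/kvm
EOF
```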
gnuoy | jamespage, those ceph changes are trivial enough for me to review | 08:44 |
jamespage | gnuoy, ta | 08:44 |
A-Kaser | Hi | 08:47 |
freak__ | need help regarding wordpress in juju | 08:53 |
freak__ | i have deployed mysql and wordpress | 08:53 |
freak__ | and exposed only wordpress | 08:54 |
freak__ | here is my juju status http://paste.ubuntu.com/15754539/ | 08:54 |
freak__ | i cannot access wordpress from browser | 08:54 |
freak__ | and also port is not allocated to wordpress | 08:54 |
rbasak | jamespage: I've not used it yet, but probably will want it this afternoon or tomorrow. | 09:21 |
gnuoy | jamespage, the only solution I can think of is to amend the ovs upstart script to contain similar logic to qemu-kvm to mount hugepages if there's a variable set in /etc/default/openvswitch-switch | 09:27 |
jamespage | gnuoy, hmm | 09:30 |
* jamespage thinks | 09:30 | |
gnuoy | jamespage, I've just seen mention that there is hugepage config in /etc/dpdk/dpdk.conf, I assume that's not to do with mounting them in the first place | 09:32 |
* gnuoy goes to check | 09:32 | |
jamespage | gnuoy, that's part of the dpdk package itself... | 09:34 |
jamespage | gnuoy, you might not have that installed depending ... | 09:34 |
gnuoy | jamespage, right, but I think amending /etc/default/qemu-kvm covers the non-dpdk use-case. The ovs with dpdk use case may be fixed with /etc/dpdk/dpdk.conf | 09:36 |
jamespage | gnuoy, agreed on the non-dpdk use-case | 09:36 |
jamespage | gnuoy, tbh I think that most deployments will set hugepages via kernel boot opts | 09:36 |
jamespage | gnuoy, at which point systemd mounts them on /dev/hugepages* anyway | 09:37 |
gnuoy | jamespage, does upstart ? | 09:37 |
jamespage | gnuoy, no | 09:47 |
jamespage | gnuoy, but I don't really care about dpdk on anything other than 16.04 tbh | 09:47 |
jamespage | gnuoy, am I allowed to drop the apache2/keystone check for amulet tests? | 09:47 |
jamespage | gnuoy, I just tried to fixup swift for 2.7.0 and both charms fail on that check now... | 09:48 |
gnuoy | jamespage, ok, then I propose that I make the change to use /etc/default/qemu-kvm as it's better than where we are now | 09:48 |
gnuoy | jamespage, yes | 09:48 |
jamespage | gnuoy, well I fixed them anyway | 09:51 |
jamespage | gnuoy, I nacked https://review.openstack.org/#/c/299363/ on the basis of our 'MAAS should do this' conversation... | 10:13 |
jamespage | gnuoy, https://review.openstack.org/#/q/topic:keystone-apache+status:open both ready to go | 10:34 |
jamespage | gnuoy, and https://review.openstack.org/#/q/topic:swift-2.7.0 which are based on those | 10:36 |
jamespage | gnuoy, also I think I've introduced a bug with the neutron pullout | 10:56 |
jamespage | working that now | 10:56 |
gnuoy | omelette eggs something something | 10:56 |
jamespage | gnuoy, https://review.openstack.org/#/c/304034/ | 11:04 |
jamespage | fairly minor - the NeutronAPIContext was added conditionally before, but I think it's OK to add unconditionally now | 11:05 |
jamespage | gnuoy, tbh if that's the only thing I got wrong, then I'm pretty happy :-) | 11:25 |
A-Kaser | Hi | 12:58 |
marcoceppi | o/ | 13:01 |
A-Kaser | cory_fu: me | 13:05 |
cory_fu | A-Kaser: http://interfaces.juju.solutions/ | 13:17 |
gnuoy | jamespage, got a sec for https://review.openstack.org/#/c/304097/ / | 13:20 |
gnuoy | ? | 13:20 |
jamespage | gnuoy, already on it | 13:21 |
gnuoy | \o/ ta | 13:21 |
jamespage | gnuoy, do we care about disabling hugepages? | 13:22 |
gnuoy | jamespage, it'd be nice I guess, why do you ask? | 13:22 |
jamespage | gnuoy, we only write the qemu config when hugepages is enabled... | 13:23 |
jamespage | well we might write it under a write_all | 13:23 |
jamespage | but.. | 13:23 |
jamespage | gnuoy, oh wait - there is a write_all at the bottom of config-changed... | 13:24 |
jamespage | I'll comment appropriately | 13:24 |
jamespage | gnuoy, two nits only | 13:26 |
A-Kaser | cory_fu: https://jujucharms.com/u/frbayart/ | 13:31 |
gnuoy | jamespage, ta | 13:32 |
jamespage | gnuoy, note to self - remember that the git->bzr sync is not that regular... | 13:33 |
petevg | Hi, all. I've got another newb question: what's the best practice when it comes to storing passwords and other config data that a charm has generated? Is there a standard place to put stuff? | 13:52 |
petevg | I ask because I'm writing a couch charm, and couch actually hashes admin passwords that you drop into its config files. | 13:53 |
petevg | ... there isn't a good way to look and see what juju has generated, which makes it hard to login as the couch admin and test to make sure that things work :-) | 13:53 |
marcoceppi | petevg: in the MySQL charm we store the root password in something like /var/lib/mysql/mysql.passwd as 600 root:root | 13:57 |
cory_fu | A-Kaser: https://pythonhosted.org/amulet/ | 13:57 |
marcoceppi | petevg: it's not ideal, but the idea is that only root can read it and it's in a place that makes sense for the service, so the charm hooks can see it | 13:57 |
petevg | marcoceppi: that sounds sensible, even if it isn't ideal. :-) I can do a similar thing in /etc/couch. Thank you. | 13:58 |
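A sketch of that MySQL-charm-style approach adapted to couch; the /etc/couch directory follows petevg's mention, and the filename is made up:

```sh
# persist the generated admin password where only root can read it
install -d -m 0755 /etc/couch
printf '%s\n' "$ADMIN_PASSWORD" > /etc/couch/couch.passwd
chown root:root /etc/couch/couch.passwd
chmod 600 /etc/couch/couch.passwd
```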
cory_fu | A-Kaser: https://github.com/juju-solutions/bundletester | 14:01 |
cory_fu | A-Kaser: https://hub.docker.com/r/jujusolutions/charmbox/ | 14:03 |
cory_fu | A-Kaser: https://jujucharms.com/docs/devel/developer-getting-started | 14:05 |
cory_fu | A-Kaser: https://jujucharms.com/docs/devel/authors-charm-store | 14:07 |
cory_fu | A-Kaser: http://review.juju.solutions/ | 14:08 |
cory_fu | A-Kaser: https://jujucharms.com/docs/devel/authors-charm-store#recommended-charms | 14:09 |
cory_fu | A-Kaser: https://jujucharms.com/docs/devel/reference-charm-hooks#stop | 14:11 |
cory_fu | A-Kaser: https://jujucharms.com/big-data | 14:15 |
stub | petevg: Since you probably need the password to hand out to clients, you can also store it on the relation. The PostgreSQL charm stores the replication password in the leadership settings (which is how peer units retrieve it), and stores client passwords on the client relations. | 14:17 |
petevg | stub: nice! I will look into that. Thank you. | 14:19 |
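A sketch of what stub describes using Juju's hook tools (only usable inside hook context; key names are illustrative):

```sh
# on the leader unit: share the generated password with peer units
leader-set admin-password="$ADMIN_PASSWORD"     # peers read it back with leader-get
# in a client relation hook: hand a per-client password to that relation
relation-set -r "$JUJU_RELATION_ID" password="$CLIENT_PASSWORD"
```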
=== cmagina_ is now known as cmagina | ||
A-Kaser | cory_fu: https://jujucharms.com/u/tads2015dataart/mesos-slave/trusty/0/ | 14:38 |
cory_fu | https://code.launchpad.net/~dataart.telco/charms/trusty/mesos-slave/trunk | 14:40 |
cory_fu | A-Kaser: https://jujucharms.com/u/dataart.telco/mesos-slave/trusty/ | 14:41 |
aisrael | marcoceppi: What needs to be installed to get `charm login` and `charm publish`? | 14:47 |
rick_h_ | aisrael charm from the devel ppa | 14:50 |
aisrael | rick_h_: thanks. I think I've got marco's ppa added and maybe that's screwing me up | 14:51 |
aisrael | Hm. Maybe not. | 14:53 |
A-Kaser | cory_fu: thx a lot | 14:53 |
aisrael | rick_h_: Got it working, thanks! | 14:56 |
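For reference, a hedged sketch of getting a `charm` binary with login/publish support; the exact PPA name is an assumption based on "the devel ppa" mentioned above and may have changed since:

```sh
sudo add-apt-repository -y ppa:juju/devel    # assumed to be the "devel ppa" rick_h_ means
sudo apt-get update && sudo apt-get install charm
charm login
charm publish <charm-url>
```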
cory_fu | A-Kaser: No problem. Glad I could be of help. | 15:01 |
jamespage | gnuoy, I think this is nearly ready for landing - https://review.openstack.org/#/c/303329/ | 15:05 |
jamespage | I've submitted a recheck-full | 15:05 |
cmagina | cory_fu: when will that hadoop_extra_.... fix land in the charms? | 15:16 |
cory_fu | cmagina: I will cut a release when I finish this meeting | 15:16 |
cmagina | cory_fu: awesome, thanks much | 15:18 |
cory_fu | cmagina: Actually, I'm releasing it now. 6.4.3 should be available in a few seconds | 15:19 |
cmagina | cory_fu: cool, i'll get to testing that :) | 15:19 |
marcoceppi | aisrael: what does `charm version` say? | 15:22 |
aisrael | charm 2:2.1.0-0~ubuntu16.04.1~ppa0 | 15:28 |
aisrael | charm-tools 2.1.2 | 15:28 |
marcoceppi | aisrael: huh | 15:29 |
marcoceppi | aisrael: there's one more thing to try | 15:29 |
marcoceppi | aisrael: ppa:marcoceppi/xenial-chopper | 15:29 |
marcoceppi | aisrael: add that and try again | 15:30 |
* aisrael gets to the choppa | 15:30 | |
marcoceppi | aisrael: there's a 2.1.0-1 which is slightly newer | 15:32 |
marcoceppi | aisrael: it's basically latest master | 15:32 |
aisrael | marcoceppi: same error. I'll file a bug. | 15:34 |
marcoceppi | aisrael: ack, ta, it might be a server side issue | 15:37 |
mbruzek | cory_fu: ping | 15:44 |
cory_fu | mbruzek: What's up? | 15:44 |
mbruzek | cory_fu: getting a charm build error with the basic layer, and I checked there is no basic layer on my local filesystem. http://paste.ubuntu.com/15761546/ | 15:46 |
cory_fu | mbruzek: Can you link me to your layer's metadata.yaml? | 15:47 |
mbruzek | cory_fu: I don't have 22 lines in my metadata.yaml nor do I have an assembled charm with that many lines | 15:47 |
mbruzek | https://github.com/mbruzek/layer-k8s/blob/master/metadata.yaml | 15:48 |
cory_fu | mbruzek: I just cloned that layer and built it without error | 15:52 |
mbruzek | hrmm. | 15:52 |
mbruzek | cory_fu: can you pull eddy-master branch and try it again? | 15:53 |
mbruzek | charm 2:2.1.0-0~ubuntu16.04.1~ppa0 | 15:54 |
mbruzek | charm-tools 2.1.2 | 15:54 |
mbruzek | cory_fu: that is the version of charm tools I am using | 15:54 |
cory_fu | mbruzek: Also builds w/o error | 15:54 |
cory_fu | Same version. (I'm building this in charmbox) | 15:54 |
cory_fu | charmbox:devel | 15:55 |
mbruzek | hrmm | 15:55 |
mbruzek | I can reproduce this failure... I don't see a local layer-basic, or basic directory in my layers directory | 15:55 |
marcoceppi | mbruzek: hangouts to look? | 15:56 |
marcoceppi | mbruzek: rather | 15:56 |
cory_fu | mbruzek: Why are you assuming the issue is with the basic layer? Do you have local copies of docker, flannel, tls, or kubernetes? | 15:56 |
marcoceppi | mbruzek: build with -L debug and paste the output | 15:56 |
cory_fu | marcoceppi: https://paste.ubuntu.com/15761546/ | 15:57 |
cory_fu | It looks like it pulls in flannel and basic from remote, but you could have local versions of the other layers | 15:58 |
mbruzek | cory_fu: marcoceppi: That pastebin was with -L DEBUG | 15:58 |
marcoceppi | mbruzek: it seems to be failing on layer:docker | 15:58 |
marcoceppi | /home/mbruzek/workspace/layers/docker/metadata.yaml | 15:58 |
marcoceppi | that layer is b0rk3d | 15:59 |
mbruzek | ah there it is! | 15:59 |
mbruzek | When I read the output I thought it was in the basic layer, where do you see the docker layer? | 15:59 |
marcoceppi | mbruzek: the end of the traceback | 15:59 |
marcoceppi | mbruzek: the last three lines | 16:00 |
mbruzek | I see | 16:00 |
mbruzek | sorry marcoceppi and cory_fu | 16:00 |
marcoceppi | mbruzek: file a bug, we should have better error handling | 16:00 |
mbruzek | marcoceppi: well it does actually have the file in question there | 16:01 |
cory_fu | +1 that error handling sucks and should be improved | 16:01 |
mbruzek | I just didn't see it | 16:01 |
marcoceppi | mbruzek: yea, but it should just say "There is malformed yaml in FILE" | 16:01 |
marcoceppi | mbruzek: that way you don't bleed while looking at output | 16:01 |
marcoceppi | mbruzek: 2.1.3 for sure | 16:02 |
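A quick way to find which local layer has broken YAML, roughly what the traceback points at (assumes PyYAML is installed and layers live under ~/workspace/layers as in the paste):

```sh
for f in ~/workspace/layers/*/metadata.yaml; do
    python3 -c "import sys, yaml; yaml.safe_load(open(sys.argv[1]))" "$f" \
        || echo "malformed yaml in $f"
done
```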
mbruzek | maybe I can work on this one since I signaled a false alarm | 16:02 |
lazyPower | interesting, when the charm layer defines series in metadata, it changes the destination directory to builds/$charm ? | 16:07 |
marcoceppi | lazyPower: it should be builds/output | 16:14 |
marcoceppi | lazyPower: err | 16:14 |
marcoceppi | yes | 16:14 |
cory_fu | mbruzek: If you do work on it, make sure you add docstrings. ;) | 16:14 |
lazyPower | interesting... looks like i need to add another volume mount to charmbox then | 16:14 |
mbruzek | cory_fu: Oh there will be a plethora of comments, trust me | 16:14 |
lazyPower | ta for the info ;) | 16:14 |
marcoceppi | lazyPower: https://github.com/juju/charm-tools/issues/115 | 16:14 |
marcoceppi | lazyPower: ah, yeah | 16:15 |
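A sketch of the build-output behaviour being discussed; the exact directories depend on JUJU_REPOSITORY and the charm name:

```sh
charm build
# with a 'series' list in the layer's metadata.yaml, the result lands in:
ls "$JUJU_REPOSITORY"/builds/<charm-name>
# without it, the older behaviour put the build under <series>/<charm-name>
```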
aisrael | stub: You around? | 16:16 |
gnuoy | jamespage, https://review.openstack.org/#/c/304097/ has passed a full amulet if you have a sec | 16:38 |
jamespage | gnuoy, ditto - https://review.openstack.org/#/c/303329/ | 16:44 |
gnuoy | jamespage, amulet is still running on your mp | 16:47 |
=== cos1 is now known as c0s | ||
jamespage | gnuoy, oh yes - that was just the smoke from my removal of .unit-state.db | 16:51 |
jamespage | gnuoy, hugepages: DONE | 16:53 |
gnuoy | jamespage, fantastic, thanks | 16:53 |
firl | Is it easily possible to do a new install with mitaka and juju? | 17:14 |
lazyPower | firl o/ | 17:21 |
firl | ? | 17:21 |
lazyPower | firl i'm not sure what you're asking about "easy to do a new install" | 17:21 |
lazyPower | are you asking if you can deploy mitaka with juju? | 17:21 |
firl | I haven't seen any mailings/docs on doing a mitaka install (which MAAS image, anything prebuilt, etc.) | 17:22 |
firl | If I should just stick with trusty-mitaka ( if it exists ) | 17:22 |
lazyPower | ah, the /next branches of the charms target mitaka. and i'm fairly certain those are being landed as stable as we speak | 17:22 |
lazyPower | beisner ddellav thedac - am i correct in the above statement? | 17:22 |
thedac | lazyPower: they are being tested now but will land in stable in a couple weeks | 17:23 |
lazyPower | ah ok, i thought that push was this week. my b. | 17:23 |
firl | Gotcha, is there a place to get the latest info on this? ( just this channel in a couple weeks? ) | 17:24 |
thedac | firl: we will post to the juju mailing list when we release | 17:24 |
firl | thedac: thanks! do you know if there will be a sample bundle, or if heat/ceilometer will be easy to install as well? | 17:25 |
gnuoy | thedac, beisner would you mind keeping an eye on these https://review.openstack.org/#/q/topic:enable-xenial-mitaka-amulet and landing them if they pass ? | 17:25 |
thedac | firl: The next bundle should already exist. Let me find that for you. I know ceilometer has had some feature split with aodh; I am not sure that will be ready. | 17:27 |
thedac | firl: https://jujucharms.com/u/openstack-charmers-next/ | 17:29 |
marcoceppi | can I get a review on https://github.com/juju/docs/pull/975 | 17:30 |
firl | thedac: perfect, should I be switching to XenialXerus for it, or stay on trusty for a while | 17:31 |
thedac | firl: mitaka will be supported on both. So it depends on your taste for being on the latest and greatest | 17:32 |
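A hedged example of pulling mitaka bits from the next namespace; the charm name and config option are typical of the OpenStack charms but not verified against that bundle:

```sh
# deploy from the next namespace (on xenial, mitaka is the default archive)
juju deploy cs:~openstack-charmers-next/xenial/nova-cloud-controller
# on trusty, the charms typically need openstack-origin pointed at the mitaka
# cloud archive, e.g. openstack-origin=cloud:trusty-mitaka in the service config
```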
=== natefinch is now known as natefinch-lunch | ||
beisner | gnuoy, sure, np | 17:54 |
arosales | cherylj: I am not sure if the machines are showing up in juju status | 18:33 |
arosales | rohit_: do you see any machines in your 'juju status' output re https://launchpad.net/bugs/1566420 | 18:33 |
mup | Bug #1566420: lxd doesn't provision instances on first bootstrap in new xenial image <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1566420> | 18:33 |
cherylj | rohit_, arosales if you do see machines, then I'm willing to bet that it's a dup of bug 1567683 | 18:34 |
mup | Bug #1567683: Agents stuck in "Waiting for agent initialization to finish" with lxd provider <ci> <lxd> <network> <juju-core:Fix Committed by cherylj> <https://launchpad.net/bugs/1567683> | 18:34 |
rohit_ | yes ..machines are created..but juju agent doesn't initialize | 18:35 |
cherylj | rohit_: can you paste the output of lxc list? | 18:35 |
cherylj | I bet it's that bug above ^^ | 18:35 |
* arosales looks at lp:1567683 | 18:35 | |
rohit_ | https://www.irccloud.com/pastebin/C2EsRpP7/ | 18:37 |
rohit_ | yes.. It's identical issue | 18:37 |
rohit_ | identical to 1567683 | 18:37 |
cherylj | rohit_: but I don't see a second lxd container for a service you've deployed? | 18:37 |
cherylj | (unless you took that snapshot just after bootstrapping, but before deploy) | 18:37 |
rohit_ | I deleted it .. a sec ago | 18:37 |
cherylj | that would explain it | 18:38 |
rohit_ | I am cleaning it all up before I switch to an older version of juju | 18:38 |
cherylj | rohit_: you can hack around it by reconfiguring your lxdbr0 to use 10.0.3.1 as the bridge IP. Should trigger the containers to use 10.0.4.1 | 18:38 |
cherylj | as their bridge | 18:38 |
rohit_ | ok | 18:38 |
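One way to apply cherylj's workaround on xenial, assuming the lxd-bridge package layout of that era (the netmask and DHCP range entries in the same file need matching edits too):

```sh
sudo sed -i 's/^LXD_IPV4_ADDR=.*/LXD_IPV4_ADDR="10.0.3.1"/' /etc/default/lxd-bridge
sudo sed -i 's|^LXD_IPV4_NETWORK=.*|LXD_IPV4_NETWORK="10.0.3.0/24"|' /etc/default/lxd-bridge
sudo systemctl restart lxd-bridge
```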
marcoceppi | lazyPower: charmbox:devel doesn't have make installed in it, so make lint fails during reviews | 18:49 |
lazyPower | marcoceppi - why isn't setup installing make? | 18:49 |
marcoceppi | lazyPower: I don't know, report from c0s | 18:49 |
lazyPower | hmm, i suppose thats reason enough to have it in the base image | 18:50 |
lazyPower | lemme see how big it bloats it, 1 sec | 18:50 |
marcoceppi | lazyPower: ack, ta | 18:51 |
c0s | cory_fu: you're right - 'make lint' catches a bunch of the python bugs in the layer's code | 18:53 |
c0s | do you think it would make sense to add 'make lint' as part of charm build? | 18:53 |
lazyPower | marcoceppi - no noticeable change, incoming PR to add make | 18:54 |
c0s | that pretty much sums it up : https://twitter.com/c0sin/status/719600673466114049 | 19:00 |
jamespage | thedac, beisner: hey, can either of you +2/+1 https://review.openstack.org/#/c/303329/ - it's done a full charm-recheck and completes the remove-neutron-from-nova work | 19:01 |
thedac | jamespage: will do | 19:02 |
=== redir is now known as redir_lunch | ||
=== thomnico is now known as thomnico|Brussel | ||
=== matthelmke is now known as matthelmke-afk | ||
=== natefinch-lunch is now known as natefinch | ||
=== matthelmke-afk is now known as matthelmke |