/srv/irclogs.ubuntu.com/2016/04/11/#juju.txt

[01:46] <rohit_> Hi everyone.. another newbie issue.. I seem to be stuck on this bug - https://bugs.launchpad.net/juju-core/+bug/1566420
[01:47] <mup> Bug #1566420: lxd doesn't provision instances on first bootstrap in new xenial image <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1566420>
[01:47] <rohit_> any trick to get over this issue before it's fixed?
[02:17] <cherylj> rohit_: I'm kinda glad it's not just me that ran into it :)
[02:17] <cherylj> rohit_: what I did was to bootstrap another lxd controller (just give it a new controller name)
[02:17] <rohit_> trying it now
[02:18] <cherylj> rohit_: you can clean up the original controller in the following hacky way:
[02:18] <cherylj> 1 - get the instance ID / container name.
[02:19] <cherylj> (you can do a juju status -m <first controller name>:admin and see the ID there)
[02:19] <cherylj> 2 - delete the controller using the lxd commands (lxc stop, then lxc delete)
[02:20] <cherylj> 3 - juju kill-controller <first controller name>
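(A minimal sketch of the cleanup steps cherylj lists above, assuming the stuck controller is named "lxd-old" and its container shows up in juju status as "juju-xxxx-machine-0" -- both names are hypothetical placeholders:
    juju status -m lxd-old:admin      # note the instance ID / container name
    lxc stop juju-xxxx-machine-0      # stop the controller's container directly
    lxc delete juju-xxxx-machine-0    # remove it
    juju kill-controller lxd-old      # then clear the controller from juju's records
)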
[02:21] <rohit_> I actually created a new controller before killing the old one
[02:22] <rohit_> and now all juju commands are failing with "no space left on device"
[02:24] <cherylj> gah, I was running into that on a small instance I had in AWS
[02:25] <cherylj> I had to clean up space in /tmp, then lxc stop / lxc delete
[02:36] <bradm> I'm having a weird error with swift-proxy and the latest next charms as of now - http://pastebin.ubuntu.com/15752193/.  Anyone seen anything like it before?
=== urulama__ is now known as urulama
[06:51] <freak__> hi everyone
[06:51] <freak__> need help regarding juju....i want to know 2 things
[06:51] <freak__> first, where do we define the pool of IPs which eventually get allocated to juju services as public addresses when we deploy them?
[06:51] <freak__> secondly, in my setup when i deployed mysql it got public address node0.maas and a port was allocated to it, something like 3306
[06:51] <freak__> but when i deployed wordpress it again got public address node0.maas
[06:52] <freak__> but no port is allocated to it
[07:35] <jamespage> gnuoy, morning
[07:36] <gnuoy> morning jamespage
[07:36] <gnuoy> jamespage, got a sec for https://review.openstack.org/#/c/301171/ this morning?
[07:39] <jamespage> gnuoy, lgtm
[07:49] <jamespage> gnuoy, I have one more network space review to stack up - I missed neutron-gateway
[08:14] <jamespage> gnuoy, ok so i can't land https://review.openstack.org/303329 until https://review.openstack.org/#/c/303321/ lands
[08:14] <jamespage> as I need nova-cc to stop doing the migrations for neutron so that neutron-api can do them...
[08:15] <jamespage> gnuoy, could you have a look at https://review.openstack.org/#/c/303321/ wolsom already +1'ed but needs a full review
[08:15] <gnuoy> ack, yep
[08:25] <jamespage> gnuoy, I'm looking through the queue of features up still - any particular priority from your perspective?
[08:26] <gnuoy> jamespage, nope
[08:27] <jamespage> gnuoy, do you have cycles for https://review.openstack.org/#/q/topic:network-spaces ?
[08:28] <gnuoy> jamespage, I'm looking at Bug #1518771, I think we need that fixed asap, particularly given the dpdk work. The solution I propose is to remove the fstab entry and instead write KVM_HUGEPAGES=1 to /etc/default/qemu-kvm and restart the qemu-kvm service. Does that sound sane? (I still need to check on systemd support for that approach)
[08:28] <mup> Bug #1518771: nova-compute hugepages breaks the boot process <openstack> <upstart:New> <nova-compute (Juju Charms Collection):Confirmed for gnuoy> <https://launchpad.net/bugs/1518771>
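(A minimal sketch of the fix gnuoy proposes, assuming the charm currently mounts hugepages via an fstab entry -- the exact charm code that writes the entry is not shown here:
    # remove the hugepages line from /etc/fstab (however the charm added it), then:
    echo 'KVM_HUGEPAGES=1' | sudo tee -a /etc/default/qemu-kvm
    sudo service qemu-kvm restart    # the package's init script handles the hugepages mount when this flag is set
)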
[08:30] <jamespage> gnuoy, I'm not sure exactly what your proposed solution does tbh?
[08:30] <jamespage> gnuoy, ok just read /etc/default/qemu-kvm
[08:30] <gnuoy> jamespage, http://paste.ubuntu.com/15754379/
[08:31] <gnuoy> that's what the init script runs
[08:31] <jamespage> gnuoy, that looks like a reasonable solution
[08:32] <gnuoy> jamespage, I do have cycles for those reviews although I don't know ceph at all so usually try and leave ceph reviews to someone with more expertise.
[08:33] <jamespage> gnuoy, ok if you could pick up the neutron-gateway one that would be appreciated...
[08:33] <gnuoy> jamespage, if I was to create a branch with that solution would it be easy for you to test dpdk is still happy?
[08:33] <jamespage> gnuoy, not right now as I gave the dpdk machine back to rbasak
[08:33] <gnuoy> jamespage, ok
[08:34] <jamespage> gnuoy, my only concern would be that ovs might race the qemu-kvm startup on boot, resulting in a non-running ovs with dpdk enabled.
[08:37] <gnuoy> jamespage, the option is to create an upstart script and hook into the /run mounted event, eg "start on mounted MOUNTPOINT=/run"
[08:39] <gnuoy> that might still race though I guess
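(A rough sketch of the upstart job gnuoy is describing, under the assumption that the charm would ship it as something like /etc/init/charm-hugepages.conf -- the job name and mount path are illustrative:
    # /etc/init/charm-hugepages.conf
    description "mount hugepages once /run is available"
    start on mounted MOUNTPOINT=/run
    task
    exec mount -t hugetlbfs hugetlbfs /run/hugepages/kvm
)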
[08:44] <gnuoy> jamespage, those ceph changes are trivial enough for me to review
[08:44] <jamespage> gnuoy, ta
[08:47] <A-Kaser> Hi
[08:53] <freak__> need help regarding wordpress in juju
[08:53] <freak__> i have deployed mysql and wordpress
[08:54] <freak__> and exposed only wordpress
[08:54] <freak__> here is my juju status http://paste.ubuntu.com/15754539/
[08:54] <freak__> i cannot access wordpress from browser
[08:54] <freak__> and also port is not allocated to wordpress
[09:21] <rbasak> jamespage: I've not used it yet, but probably will want it this afternoon or tomorrow.
[09:27] <gnuoy> jamespage, the only solution I can think of is to amend the ovs upstart script to contain similar logic to qemu-kvm to mount hugepages if there's a variable set in /etc/default/openvswitch-switch
[09:30] <jamespage> gnuoy, hmm
[09:30]  * jamespage thinks
[09:32] <gnuoy> jamespage, I've just seen mention that there is hugepage config in /etc/dpdk/dpdk.conf, I assume that's not to do with mounting them in the first place
[09:32]  * gnuoy goes to check
[09:34] <jamespage> gnuoy, that's part of the dpdk package itself...
[09:34] <jamespage> gnuoy, you might not have that installed depending ...
[09:36] <gnuoy> jamespage, right, but I think amending /etc/default/qemu-kvm covers the non-dpdk use-case. The ovs with dpdk use case may be fixed with /etc/dpdk/dpdk.conf
[09:36] <jamespage> gnuoy, agreed on the non-dpdk use-case
[09:36] <jamespage> gnuoy, tbh I think that most deployments will set hugepages via kernel boot opts
[09:37] <jamespage> gnuoy, at which point systemd mounts them on /dev/hugepages* anyway
[09:37] <gnuoy> jamespage, does upstart?
[09:47] <jamespage> gnuoy, no
[09:47] <jamespage> gnuoy, but I don't really care about dpdk on anything other than 16.04 tbh
[09:47] <jamespage> gnuoy, am I allowed to drop the apache2/keystone check for amulet tests?
[09:48] <jamespage> gnuoy, I just tried to fix up swift for 2.7.0 and both charms fail on that check now...
[09:48] <gnuoy> jamespage, ok, then I propose that I make the change to use /etc/default/qemu-kvm as it's better than where we are now
[09:48] <gnuoy> jamespage, yes
[09:51] <jamespage> gnuoy, well I fixed them anyway
[10:13] <jamespage> gnuoy, I nacked https://review.openstack.org/#/c/299363/ on the basis of our 'MAAS should do this' conversation...
[10:34] <jamespage> gnuoy, https://review.openstack.org/#/q/topic:keystone-apache+status:open both ready to go
[10:36] <jamespage> gnuoy, and https://review.openstack.org/#/q/topic:swift-2.7.0 which are based on those
[10:56] <jamespage> gnuoy, also I think I've introduced a bug with the neutron pullout
[10:56] <jamespage> working that now
[10:56] <gnuoy> omelette, eggs, something something
[11:04] <jamespage> gnuoy, https://review.openstack.org/#/c/304034/
[11:05] <jamespage> fairly minor - the NeutronAPIContext was added conditionally before, but I think it's OK to add unconditionally now
[11:25] <jamespage> gnuoy, tbh if that's the only thing I got wrong, then I'm pretty happy :-)
[12:58] <A-Kaser> Hi
[13:01] <marcoceppi> o/
[13:05] <A-Kaser> cory_fu: me
[13:17] <cory_fu> A-Kaser: http://interfaces.juju.solutions/
[13:20] <gnuoy> jamespage, got a sec for https://review.openstack.org/#/c/304097/ /
[13:20] <gnuoy> ?
[13:21] <jamespage> gnuoy, already on it
[13:21] <gnuoy> \o/ ta
[13:22] <jamespage> gnuoy, do we care about disabling hugepages?
[13:22] <gnuoy> jamespage, it'd be nice I guess, why do you ask?
[13:23] <jamespage> gnuoy, we only write the qemu config when hugepages is enabled...
[13:23] <jamespage> well we might write it under a write_all
[13:23] <jamespage> but..
[13:24] <jamespage> gnuoy, oh wait - there is a write_all at the bottom of config-changed...
[13:24] <jamespage> I'll comment appropriately
[13:26] <jamespage> gnuoy, two nits only
[13:31] <A-Kaser> cory_fu: https://jujucharms.com/u/frbayart/
[13:32] <gnuoy> jamespage, ta
[13:33] <jamespage> gnuoy, note to self - remember that the git->bzr sync is not that regular...
[13:52] <petevg> Hi, all. I've got another newb question: what's the best practice when it comes to storing passwords and other config data that a charm has generated? Is there a standard place to put stuff?
[13:53] <petevg> I ask because I'm writing a couch charm, and couch actually hashes admin passwords that you drop into its config files.
[13:53] <petevg> ... there isn't a good way to look and see what juju has generated, which makes it hard to log in as the couch admin and test to make sure that things work :-)
[13:57] <marcoceppi> petevg: in the MySQL charm we store the root password in something like /var/lib/mysql/mysql.passwd as 600 root.root
[13:57] <cory_fu> A-Kaser: https://pythonhosted.org/amulet/
[13:57] <marcoceppi> petevg: it's not ideal, but the idea is that only root can read it and it's in a place that makes sense for the service, so the charm hooks can see them
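(A minimal sketch of the file-based approach marcoceppi describes -- the path matches his MySQL example and $ROOT_PASSWORD is a hypothetical variable holding whatever the charm generated:
    install -o root -g root -m 600 /dev/null /var/lib/mysql/mysql.passwd    # create the file root-only before writing to it
    echo "$ROOT_PASSWORD" > /var/lib/mysql/mysql.passwd
)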
[13:58] <petevg> marcoceppi: that sounds sensible, even if it isn't ideal. :-) I can do a similar thing in /etc/couch. Thank you.
[14:01] <cory_fu> A-Kaser: https://github.com/juju-solutions/bundletester
[14:03] <cory_fu> A-Kaser: https://hub.docker.com/r/jujusolutions/charmbox/
[14:05] <cory_fu> A-Kaser: https://jujucharms.com/docs/devel/developer-getting-started
[14:07] <cory_fu> A-Kaser: https://jujucharms.com/docs/devel/authors-charm-store
[14:08] <cory_fu> A-Kaser: http://review.juju.solutions/
[14:09] <cory_fu> A-Kaser: https://jujucharms.com/docs/devel/authors-charm-store#recommended-charms
[14:11] <cory_fu> A-Kaser: https://jujucharms.com/docs/devel/reference-charm-hooks#stop
[14:15] <cory_fu> A-Kaser: https://jujucharms.com/big-data
[14:17] <stub> petevg: Since you probably need the password to hand out to clients, you can also store it on the relation. The PostgreSQL charm stores the replication password in the leadership settings (which is how peer units retrieve it), and stores client passwords on the client relations.
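(A minimal sketch of the two places stub mentions, using the standard juju hook tools from inside a charm hook -- the setting names and variables are illustrative:
    leader-set admin-password="$GENERATED_PASSWORD"        # leader stores it; peer units read it back with: leader-get admin-password
    relation-set -r "$JUJU_RELATION_ID" password="$CLIENT_PASSWORD"    # hand a per-client password out on the client relation
)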
[14:19] <petevg> stub: nice! I will look into that. Thank you.
=== cmagina_ is now known as cmagina
[14:38] <A-Kaser> cory_fu: https://jujucharms.com/u/tads2015dataart/mesos-slave/trusty/0/
[14:40] <cory_fu> https://code.launchpad.net/~dataart.telco/charms/trusty/mesos-slave/trunk
[14:41] <cory_fu> A-Kaser: https://jujucharms.com/u/dataart.telco/mesos-slave/trusty/
[14:47] <aisrael> marcoceppi: What needs to be installed to get `charm login` and `charm publish`?
[14:50] <rick_h_> aisrael: charm from the devel ppa
[14:51] <aisrael> rick_h_: thanks. I think I've got marco's ppa added and maybe that's screwing me up
[14:53] <aisrael> Hm. Maybe not.
[14:53] <A-Kaser> cory_fu: thx a lot
[14:56] <aisrael> rick_h_: Got it working, thanks!
[15:01] <cory_fu> A-Kaser: No problem.  Glad I could be of help.
[15:05] <jamespage> gnuoy, I think this is nearly ready for landing - https://review.openstack.org/#/c/303329/
[15:05] <jamespage> I've submitted a recheck-full
[15:16] <cmagina> cory_fu: when will that hadoop_extra_.... fix land in the charms?
[15:16] <cory_fu> cmagina: I will cut a release when I finish this meeting
[15:18] <cmagina> cory_fu: awesome, thanks much
[15:19] <cory_fu> cmagina: Actually, I'm releasing it now.  6.4.3 should be available in a few seconds
[15:19] <cmagina> cory_fu: cool, i'll get to testing that :)
[15:22] <marcoceppi> aisrael: what does `charm version` say?
[15:28] <aisrael> charm 2:2.1.0-0~ubuntu16.04.1~ppa0
[15:28] <aisrael> charm-tools 2.1.2
[15:29] <marcoceppi> aisrael: huh
[15:29] <marcoceppi> aisrael: there's one more thing to try
[15:29] <marcoceppi> aisrael: ppa:marcoceppi/xenial-chopper
[15:30] <marcoceppi> aisrael: add that and try again
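(A minimal sketch of adding that PPA and upgrading, assuming apt on xenial:
    sudo add-apt-repository ppa:marcoceppi/xenial-chopper
    sudo apt-get update
    sudo apt-get install --only-upgrade charm charm-tools
)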
[15:30]  * aisrael gets to the choppa
[15:32] <marcoceppi> aisrael: there's a 2.1.0-1 which is slightly newer
[15:32] <marcoceppi> aisrael: it's basically latest master
[15:34] <aisrael> marcoceppi: same error. I'll file a bug.
[15:37] <marcoceppi> aisrael: ack, ta, it might be a server side issue
[15:44] <mbruzek> cory_fu: ping
[15:44] <cory_fu> mbruzek: What's up?
[15:46] <mbruzek> cory_fu: getting a charm build error with the basic layer, and I checked there is no basic layer on my local filesystem. http://paste.ubuntu.com/15761546/
[15:47] <cory_fu> mbruzek: Can you link me to your layer's metadata.yaml?
[15:47] <mbruzek> cory_fu: I don't have 22 lines in my metadata.yaml nor do I have an assembled charm with that many lines
[15:48] <mbruzek> https://github.com/mbruzek/layer-k8s/blob/master/metadata.yaml
[15:52] <cory_fu> mbruzek: I just cloned that layer and built it without error
[15:52] <mbruzek> hrmm.
[15:53] <mbruzek> cory_fu: can you pull eddy-master branch and try it again?
[15:54] <mbruzek> charm 2:2.1.0-0~ubuntu16.04.1~ppa0
[15:54] <mbruzek> charm-tools 2.1.2
[15:54] <mbruzek> cory_fu: that is the version of charm tools I am using
[15:54] <cory_fu> mbruzek: Also builds w/o error
[15:54] <cory_fu> Same version.  (I'm building this in charmbox)
[15:55] <cory_fu> charmbox:devel
[15:55] <mbruzek> hrmm
[15:55] <mbruzek> I can reproduce this failure... I don't see a local layer-basic, or basic directory in my layers directory
[15:56] <marcoceppi> mbruzek: hangouts to look?
[15:56] <marcoceppi> mbruzek: rather
[15:56] <cory_fu> mbruzek: Why are you assuming the issue is with the basic layer?  Do you have local copies of docker, flannel, tls, or kubernetes?
[15:56] <marcoceppi> mbruzek: build with -L debug and paste the output
[15:57] <cory_fu> marcoceppi: https://paste.ubuntu.com/15761546/
[15:58] <cory_fu> It looks like it pulls in flannel and basic from remote, but you could have local versions of the other layers
[15:58] <mbruzek> cory_fu: marcoceppi: That pastebin was with -L DEBUG
[15:58] <marcoceppi> mbruzek: it seems to be failing on layer:docker
[15:58] <marcoceppi> /home/mbruzek/workspace/layers/docker/metadata.yaml
[15:59] <marcoceppi> that layer is b0rk3d
[15:59] <mbruzek> ah there it is!
[15:59] <mbruzek> When I read the output I thought it was in the basic layer, where do you see the docker layer?
[15:59] <marcoceppi> mbruzek: the end of the traceback
[16:00] <marcoceppi> mbruzek: the last three lines
[16:00] <mbruzek> I see
[16:00] <mbruzek> sorry marcoceppi and cory_fu
[16:00] <marcoceppi> mbruzek: file a bug, we should have better error handling
[16:01] <mbruzek> marcoceppi: well it does actually have the file in question there
[16:01] <cory_fu> +1  that error handling sucks and should be improved
[16:01] <mbruzek> I just didn't see it
[16:01] <marcoceppi> mbruzek: yea, but it should just say "There is malformed yaml in FILE"
[16:01] <marcoceppi> mbruzek: that way you don't bleed while looking at output
[16:02] <marcoceppi> mbruzek: 2.1.3 for sure
[16:02] <mbruzek> maybe I can work on this one since I signaled a false alarm
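(A quick way to check a suspect layer's metadata.yaml for malformed YAML before re-running charm build -- the path is the one from the debug output above:
    python -c "import yaml, sys; yaml.safe_load(open(sys.argv[1]))" /home/mbruzek/workspace/layers/docker/metadata.yaml
    # a traceback points at the offending line; no output means the YAML parses cleanly
)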
[16:07] <lazyPower> interesting, when the charm layer defines series in metadata, it changes the destination directory to builds/$charm ?
[16:14] <marcoceppi> lazyPower: it should be builds/output
[16:14] <marcoceppi> lazyPower: err
[16:14] <marcoceppi> yes
[16:14] <cory_fu> mbruzek: If you do work on it, make sure you add docstrings.  ;)
[16:14] <lazyPower> interesting... looks like i need to add another volume mount to charmbox then
[16:14] <mbruzek> cory_fu: Oh there will be a plethora of comments, trust me
[16:14] <lazyPower> ta for the info ;)
[16:14] <marcoceppi> lazyPower: https://github.com/juju/charm-tools/issues/115
[16:15] <marcoceppi> lazyPower: ah, yeah
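(A minimal sketch of the behaviour being discussed, assuming charm-tools 2.1.x and a layer named "k8s" -- the layer name is illustrative:
    # with a series list declared in the layer's metadata.yaml, e.g.  series: ['trusty', 'xenial']
    charm build
    # the assembled charm is expected under $JUJU_REPOSITORY/builds/k8s
    # rather than the older per-series path such as $JUJU_REPOSITORY/trusty/k8s
)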
[16:16] <aisrael> stub: You around?
[16:38] <gnuoy> jamespage, https://review.openstack.org/#/c/304097/ has passed a full amulet if you have a sec
[16:44] <jamespage> gnuoy, ditto - https://review.openstack.org/#/c/303329/
[16:47] <gnuoy> jamespage, amulet is still running on your mp
=== cos1 is now known as c0s
[16:51] <jamespage> gnuoy, oh yes - that was just the smoke from my removal of .unit-state.db
[16:53] <jamespage> gnuoy, hugepages: DONE
[16:53] <gnuoy> jamespage, fantastic, thanks
[17:14] <firl> Is it easily possible to do a new install with mitaka and juju?
[17:21] <lazyPower> firl o/
[17:21] <firl> ?
[17:21] <lazyPower> firl i'm not sure what you're asking about "easy to do a new install"
[17:21] <lazyPower> are you asking if you can deploy mitaka with juju?
[17:22] <firl> I haven't seen any mailings / docs on doing a mitaka install ( which maas image ) any prebuilt etc
[17:22] <firl> If I should just stick with trusty-mitaka ( if it exists )
[17:22] <lazyPower> ah, the /next branches of the charms target mitaka. and i'm fairly certain those are being landed as stable as we speak
[17:22] <lazyPower> beisner ddellav thedac - am i correct in the above statement?
[17:23] <thedac> lazyPower: they are being tested now but will land in stable in a couple weeks
[17:23] <lazyPower> ah ok, i thought that push was this week. my b.
[17:24] <firl> Gotcha, is there a place to get the latest info on this? ( just this channel in a couple weeks? )
[17:24] <thedac> firl: we will post to the juju mailing list when we release
[17:25] <firl> thedac: thanks! do you know if there will be a sample bundle, or if heat/ceilometer will easily be installed also ?
[17:25] <gnuoy> thedac, beisner would you mind keeping an eye on these https://review.openstack.org/#/q/topic:enable-xenial-mitaka-amulet and landing them if they pass ?
[17:27] <thedac> firl: The next bundle should already exist. Let me find that for you. I know ceilometer has had some feature split with aodh, so I am not sure that will be ready.
[17:29] <thedac> firl: https://jujucharms.com/u/openstack-charmers-next/
[17:30] <marcoceppi> can I get a review on https://github.com/juju/docs/pull/975
[17:31] <firl> thedac: perfect, should I be switching to XenialXerus for it, or stay on trusty for a while
[17:32] <thedac> firl: mitaka will be supported on both. So it depends on your taste for being on the latest and greatest
=== natefinch is now known as natefinch-lunch
[17:54] <beisner> gnuoy, sure, np
[18:33] <arosales> cherylj: I am not sure if the machines are showing up in juju status
[18:33] <arosales> rohit_: do you see any machines in your 'juju status' output re https://launchpad.net/bugs/1566420
[18:33] <mup> Bug #1566420: lxd doesn't provision instances on first bootstrap in new xenial image <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1566420>
[18:34] <cherylj> rohit_, arosales if you do see machines, then I'm willing to bet that it's a dup of bug 1567683
[18:34] <mup> Bug #1567683: Agents stuck in "Waiting for agent initialization to finish" with lxd provider <ci> <lxd> <network> <juju-core:Fix Committed by cherylj> <https://launchpad.net/bugs/1567683>
[18:35] <rohit_> yes ..machines are created..but juju agent doesn't initialize
[18:35] <cherylj> rohit_: can you paste the output of lxc list?
[18:35] <cherylj> I bet it's that bug above ^^
[18:35]  * arosales looks at lp:1567683
[18:37] <rohit_> https://www.irccloud.com/pastebin/C2EsRpP7/
[18:37] <rohit_> yes.. It's an identical issue
[18:37] <rohit_> identical to 1567683
[18:37] <cherylj> rohit_: but I don't see a second lxd container for a service you've deployed?
[18:37] <cherylj> (unless you took that snapshot just after bootstrapping, but before deploy)
[18:37] <rohit_> I deleted it .. a sec ago
[18:38] <cherylj> that would explain it
[18:38] <rohit_> I am cleaning it all up before I switch to an older version of juju
[18:38] <cherylj> rohit_: you can hack around it by reconfiguring your lxdbr0 to use 10.0.3.1 as the bridge IP.  Should trigger the containers to use 10.0.4.1
[18:38] <cherylj> as their bridge
[18:38] <rohit_> ok
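(A sketch of the workaround cherylj describes, under the assumption that this is xenial's lxd 2.0 packaging, where the bridge is configured via /etc/default/lxd-bridge -- keys and service names may differ on other versions:
    sudo sed -i 's/^LXD_IPV4_ADDR=.*/LXD_IPV4_ADDR="10.0.3.1"/' /etc/default/lxd-bridge
    # adjust LXD_IPV4_NETWORK and LXD_IPV4_DHCP_RANGE to match the 10.0.3.0/24 subnet as well
    sudo service lxd-bridge restart
    # or run the interactive bridge setup instead: sudo dpkg-reconfigure -p medium lxd
)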
[18:49] <marcoceppi> lazyPower: charmbox:devel doesn't have make installed in it, so make lint fails during reviews
[18:49] <lazyPower> marcoceppi - why isn't setup installing make?
[18:49] <marcoceppi> lazyPower: I don't know, report from c0s
[18:50] <lazyPower> hmm, i suppose that's reason enough to have it in the base image
[18:50] <lazyPower> lemme see how big it bloats it, 1 sec
[18:51] <marcoceppi> lazyPower: ack, ta
[18:53] <c0s> cory_fu: you're right - 'make lint' catches a bunch of the python bugs in the layer's code
[18:53] <c0s> do you think it would make sense to add 'make lint' as part of the charm build?
[18:54] <lazyPower> marcoceppi - no noticeable change, incoming PR to add make
[19:00] <c0s> that pretty much sums it up: https://twitter.com/c0sin/status/719600673466114049
[19:01] <jamespage> thedac, beisner: hey can either of you +2/+1 https://review.openstack.org/#/c/303329/ - it's done a full charm-recheck and completes the remove neutron from nova work
[19:02] <thedac> jamespage: will do
=== redir is now known as redir_lunch
=== thomnico is now known as thomnico|Brussel
=== matthelmke is now known as matthelmke-afk
=== natefinch-lunch is now known as natefinch
=== matthelmke-afk is now known as matthelmke
