/srv/irclogs.ubuntu.com/2016/04/15/#juju.txt

=== scuttlemonkey is now known as scuttle|afk
=== axw_ is now known as axw
=== menn0 is now known as menn0-afk
=== thumper is now known as thumper-afk
=== menn0-afk is now known as menn0
08:41 <jamespage> tinwood, gnuoy: good morning
08:41 <tinwood> good morning :)
08:41 <gnuoy> o/
08:47 <jamespage> tinwood, mmmm
08:47 <jamespage> looking at your port up change
08:47 <tinwood> jamespage, yup ?
08:47 <freak__> hi guys
08:48 <freak__> is anybody available right now who can help me out
08:48 <jamespage> tinwood, yeah - your fix is great for a normal port
08:48 <freak__> i'm deploying the openstack-base bundle
08:48 <jamespage> but for a dpdk port, the ip link commands will fail...
08:48 <tinwood> jamespage, but broken for other things?
08:48 <freak__> through the command juju quickstart openstack-base
08:48 <freak__> but i'm facing an issue http://paste.ubuntu.com/15845425/
08:48 <tinwood> jamespage, oh
08:48 <jamespage> tinwood, yah - a smaller fix for now would be to specialize that function for the dpdk codepath and use the charmhelpers version for non-dpdk
08:49 <tinwood> jamespage, ya, I understand.  On it.
08:49 <jamespage> freak__, yeah - your lxc containers look unhappy
08:49 <tinwood> jamespage, any reading material on dpdk?
08:50 <jamespage> freak__, "failed to retrieve the template to clone"
08:50 <jamespage> tinwood, erm
08:50 <freak__> thats what i want to know - why are the containers not initializing?
08:50 <jamespage> tinwood, look at line 360 of that same file
08:50 <jamespage> that's the bit that does the dpdk port adds...
08:50 <tinwood> jamespage, kk thanks.
08:51 <jamespage> freak__, I'd drop onto one of the physical hosts and take a look in /var/log/juju/machine-*.log
08:51 <jamespage> might give you a bit more of a clue
08:51 <freak__> ok, let me check
08:51 <jamespage> the error is coming from lxc - so it might be some sort of firewall egress issue? is the environment you are deploying in network-limited in any way?
08:52 <freak__> WARNING juju.apiserver.client status.go:679 error fetching public address: public no address
08:54 <freak__> my lab subnet is different and i'm doing NATing on my router to go to the outside world
08:54 <jamespage> bbaqar, around?
08:56 <jamespage> freak__, hmm that should be ok
08:56 <jamespage> freak__, that public address message is probably ignorable as well
08:58 <freak__> jamespage how can i check whether my cluster controller is performing dhcp and dns correctly or not
08:58 <freak__> any command to check the status of these services?
08:59 <jamespage> freak__, I'd drop onto one of the deployed machines and check dns is all good that way (dig or suchlike)
08:59 <freak__> ok
09:00 <freak__> jamespage check this http://paste.ubuntu.com/15845529/
09:00 <freak__> i think its working fine as far as dns is concerned
09:00 <jamespage> freak__, yeah - that looks fine
09:01 <jamespage> hmm
09:02 <jamespage> freak__, could you pastebin the output of sudo lxc-ls -f
09:02 <freak__> ok
=== thumper-afk is now known as thumper
09:03 <freak__> http://paste.ubuntu.com/15845565/
09:13 <freak__> jamespage http://paste.ubuntu.com/15845565/
09:14 <jamespage> freak__, yeah got it thanks
09:19 <jamespage> freak__, we need to get a bit more debug output
09:19 <freak__> ok
09:19 <jamespage> freak__, the juju controller/bootstrap node caches the images for lxc - I just did a quick test with 1.25.5 locally and it worked OK
09:20 <jamespage> freak__, ok I think we can do this without re-bootstrapping
09:20 <freak__> i think the issue is with the lxc service start
09:20 <jamespage> freak__, can you do
09:21 <jamespage> juju set-env logging-config="<root>=DEBUG;unit=DEBUG"
09:21 <jamespage> that will up the debug level across the environment
09:21 <jamespage> freak__, and then ssh to one of the nodes and delete the lxc container
09:22 <jamespage> freak__,
09:22 <jamespage> sudo lxc-destroy --name juju-trusty-lxc-template
09:22 <jamespage> and then have another go at adding a lxc container to that machine
09:23 <jamespage> freak__, juju add-machine lxc:X (where X is the machine id of the server you just deleted the template from)
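
(Recap: the container debug sequence jamespage lays out above, collected into one sketch. The logging-config value and template name are quoted verbatim from the conversation; machine id 0 and the tail/grep filter at the end are illustrative.)

    # raise agent logging across the environment (juju 1.x syntax)
    juju set-env logging-config="<root>=DEBUG;unit=DEBUG"

    # on the affected physical host: remove the cached container template
    sudo lxc-destroy --name juju-trusty-lxc-template

    # ask juju to build a fresh container (substitute your machine id for 0)
    juju add-machine lxc:0

    # watch the machine agent log for lxc/container errors
    tail -f /var/log/juju/machine-0.log | grep -iE 'lxc|container'
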
09:23 <freak__> ok.. right now there is some power issue, they are doing maintenance, so the nodes are off.. i can perform these steps after 40mins..
09:23 <jamespage> freak__, okies
09:23 <jamespage> lemme know how that goes
09:23 <freak__> ok i will definitely inform you after performing these steps
09:23 <jamespage> specifically looking for messages from machine-X.log around problems with lxc and specific error messages
09:24 <freak__> jamespage ok. i will
09:25 <jamespage> gnuoy, gerrit is foobar by the way - apparently no new jobs are being processed...
09:26 <gnuoy> oh
09:38 <jamespage> gnuoy, infra team are working on it
09:38 <gnuoy> kk
10:08 <jamespage> bbaqar, when you do appear - could you confirm what testing the updates for the plumgrid charms have had; I don't intend to exercise them myself so I'm dependent on your test results :-)
10:09 <jamespage> bbaqar, you also asked me about migration of the plumgrid charms under the openstack project for development
10:10 <freak__> jamespage i executed the commands as you mentioned
10:11 <freak__> upon running juju add-machine lxc:0 it showed the msg "created container 0/lxc/3"
10:11 <freak__> but in juju status
10:12 <freak__> http://paste.ubuntu.com/15846755/
10:13 <jamespage> bbaqar, https://review.openstack.org/#/c/232705/ is the change we made to push the core charms under the openstack project
10:13 <freak__> jamespage here are the machine 0 logs http://paste.ubuntu.com/15846774/
10:14 <jamespage> bbaqar, I'd suggest that you follow the same route - I suspect that you will want the group for core-reviewers to be a little different tho
10:15 <jamespage> freak__, need a bit more log I think
10:15 <freak__> ok
10:15 <freak__> which one? tell me the command and i will share
10:21 <jamespage> freak__, the machine-0 log
10:21 <freak__> ok i will take the complete log and share
10:25 <freak__> jamespage any website like paste.ubuntu where i can share the log file i have taken using securecrt?
10:26 <freak__> coz the file is too long and paste.ubuntu got hung
10:26 <jamespage> freak__, a grep for 'container' might be helpful to filter things out
10:28 <freak__> u mean cat /var/log/juju/machine-0.log | grep container
10:29 <freak__> jamespage here is the output http://paste.ubuntu.com/15847017/
10:33 <jamespage> freak__, ok
10:34 <jamespage> freak__, apparently "Invalid argument - setting cmdline failed" is a warning only
10:34 <jamespage> freak__, lets try starting that container
10:34 <freak__> how to start?
10:35 <jamespage> freak__, lxc-start --name juju-trusty-lxc-template
10:35 <freak__> ok
10:35 <jamespage> sudo lxc-start --name juju-trusty-lxc-template
10:35 <jamespage> that will happen in the foreground - hopefully that might give us a clue
10:36 <freak__> jamespage here is the issue http://paste.ubuntu.com/15847100/
10:37 <jamespage> freak__, yeah still not much info
10:38 <jamespage> freak__, try again with "-l DEBUG"
10:38 <jamespage> that might give us more
10:38 <sparkiegeek> and -F for foreground starting?
10:38 <jamespage> all I can think is the cached image in juju is foobar
10:39 <jamespage> sparkiegeek, I thought that was the default but I might be wrong
10:39 <jamespage> freak__, add "-F" as well to be sure...
10:39 <sparkiegeek> jamespage: nah, default is daemon
10:39 <jamespage> sparkiegeek, oh
10:40 <freak__> jamespage check this http://paste.ubuntu.com/15847157/
10:40 <bbaqar> jamespage: Thanks for the help. Bundles in all the charms' tests deploy smoothly.
10:40 <jamespage> freak__, needs to be -F not -f
10:40 <freak__> ok
10:41 <jamespage> freak__, and DEBUG not debug apparently...
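
(Putting the corrections together - foreground with -F, debug logging with -l DEBUG - the command converged on above looks like the sketch below. The -o flag, which writes the lxc log to a file for pasting, is standard lxc-start but an addition here.)

    # start the template container in the foreground with debug-level lxc logging
    sudo lxc-start --name juju-trusty-lxc-template -F -l DEBUG -o /tmp/lxc-template.log
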
10:41 <bbaqar> jamespage: have not run the amulet tests though in the last two weeks .. but they run the same bundle that i deploy every day
10:41 <jamespage> bbaqar, ok
10:41 <bbaqar> jamespage: and test for the same things i test every day
10:41 <jamespage> bbaqar, do you have a bundle in the charm-store for plumgrid?
10:42 <jamespage> gnuoy, https://review.openstack.org/#/c/304668/ could you review please
10:42 <jamespage> finally passing amulet tests...
10:42 <freak__> http://paste.ubuntu.com/15847193/
10:42 <bbaqar> jamespage: yes we do .. but it has to be updated as soon as we land the charm MPs https://jujucharms.com/plumgrid-ons/bundle/9
10:43 <jamespage> bbaqar, yah - I see some of the config options are changing names - that will break upgraders but I suspect your install base is known right now :-)
10:43 <gnuoy> jamespage, will do
10:44 <freak__> jamespage while bootstrapping there is the option "disable juju network management" - i checked this option; do you think this bridge issue is appearing due to that?
10:44 <jamespage> freak__, right - that's useful
10:44 <jamespage> freak__, hmmm
10:44 <jamespage> freak__, can you do 'ip addr' on that unit please
10:44 <jamespage> freak__, yes I suspect that is breaking things badly...
10:44 <bbaqar> jamespage: no one is on those revisions .. so we are good
10:45 <jamespage> bbaqar, okay - next on my list
10:45 <freak__> so do you suggest to bootstrap again and enable that option?
10:45 <jamespage> freak__, yes
10:45 <freak__> i think that will work
10:45 <freak__> ok i will try and then get back to you
10:46 <jamespage> freak__, could you raise a bug against juju for this as well - it really should tell you that you can't have lxc containers if that option is enabled
10:46 <jamespage> https://bugs.launchpad.net/juju-core/+filebug
10:46 <jamespage> that would have saved us 2 hours of debug...
10:46 <freak__> ok i will
10:47 <jamespage> freak__, thankyou :-)
10:47 <jamespage> ping me the link once raised and I'll mark it as affects me too...
10:48 <freak__> thanks to you as well for the strong support.. ok i will share the link here with you
10:49 <bbaqar> jamespage: appreciate the help.
10:50 <jamespage> freak__, no problem - just as a heads up, we're a week off the next charm release that will support 16.04 and the openstack mitaka release
10:50 <jamespage> freak__, so if you're planning ahead, something to think about
10:50 <jamespage> freak__, you can upgrade liberty->mitaka but not in-place 14.04 -> 16.04
10:50 <jamespage> using the charms that is
10:56 <jamespage> bbaqar, for migration under the openstack project, you'll want to add tox configurations for all of your charms...
10:56 <jamespage> bbaqar, look at any of the core charms for hints on that
10:57 <jamespage> bbaqar, any general summary for the commit messages on these?
10:57 <jamespage> bbaqar, and remind me to talk to you about direct charm store publishing soon....
10:57 <bbaqar> let me add a commit message right now.
10:58 <bbaqar> jamespage: I'll add a commit message right now
10:58 <freak__> jamespage here is the bug link https://bugs.launchpad.net/juju-core/+bug/1570796
10:58 <mup> Bug #1570796: container startup issue when juju network management disabled <juju-core:New> <https://launchpad.net/bugs/1570796>
10:58 <jamespage> bbaqar, ta
10:58 <jamespage> bbaqar, here is fine - vim is open waiting for me to type...
10:58 <bbaqar> jamespage: just one min
10:59 <jamespage> gnuoy, nearly have odl-controller passing on xenial
10:59 <jamespage> soooo close
11:00 <bbaqar> jamespage
11:01 <bbaqar> jamespage: support added for plumgrid 4.1.3 and 5.0 releases | support added for configurable external interfaces | support added for separate fabric network (os-data-network)
11:01 <bbaqar> jamespage . use this for plumgrid-director, plumgrid-edge, plumgrid-gateway
11:01 <gnuoy> jamespage, excellent
11:02 <bbaqar> jamespage: for neutron-api-plumgrid: support added for plumgrid 4.1.3 and 5.0 releases
11:05 <jamespage> bbaqar, as a future improvement, you might have external-interfaces be a list of mac addresses to use across the deployment
11:05 <jamespage> bbaqar, there is code in charm-helpers to resolve mac -> interface
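
(jamespage doesn't name the charm-helpers function here; as an equivalent shell sketch, the mac -> interface resolution can be done from the kernel's /sys tree. The MAC value below is a placeholder.)

    # print the interface whose hardware address matches a given MAC
    MAC="52:54:00:12:34:56"   # placeholder value
    for iface in /sys/class/net/*; do
        [ "$(cat "$iface/address")" = "$MAC" ] && echo "${iface##*/}"
    done
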
11:09 <bbaqar> jamespage: haha ... i wish i had known this earlier ..
11:09 <jamespage> bbaqar, I suspect some of the challenges we have across SDN solutions are common
11:10 <bbaqar> jamespage: i have seen the function actually .. might be difficult for users to collect all the mac addresses for each interface on scale deployments
11:10 <jamespage> bbaqar, well maas has them all :-)
11:11 <jamespage> bbaqar, as a future feature, juju may grow support for presenting the interface directly via network spaces
11:12 <bbaqar> jamespage: You are right. and yes i saw some development on spaces .. will rethink this before pushing xenial charms
11:12 <jamespage> bbaqar, sure
11:12 <jamespage> it might be good to schedule a general MAAS 2.0/juju 2.0 update for your team
11:13 <jamespage> there will be a lot of docs being updated - prob best to wait for that
11:13 <jamespage> bbaqar, ok all landed...
11:14 <jamespage> lazyPower, ^^
11:14 <jamespage> fyi
11:17 <jamespage> gnuoy, "Controller configuration on bridge br-int incorrect: ![u'tcp:172.17.115.171:6653']! != !tcp:172.17.115.171:6633!"
11:17 <bbaqar> jamespage: thanks a lot.
11:22 <bbaqar> jamespage: For the commit into the openstack project - what upstream location should we use? I see all openstack charms are in https://github.com/openstack-charmers/ ... should i keep them in our plumgrid github space?
11:41 <jamespage> bbaqar, that was just a staging area
11:42 <jamespage> any git location for the source of the charms is fine
=== matthelmke-afk is now known as matthelmke
12:06 <bbaqar> jamespage: got it
12:38 <freak__> jamespage are you there?
12:40 <freak__> jamespage i bootstrapped the environment again and enabled the juju network management option.. this time the lxc issue is resolved
12:40 <freak__> this is the current status, some components are stuck http://paste.ubuntu.com/15848673/
12:42 <jamespage> freak__, looks like the lxc containers might still be coming up
12:42 <freak__> ok. i will wait
12:46 <jamespage> freak__, whats the disk io like on your servers?
12:53 <jamespage> tinwood, your proposed fix for n-ovs looks good to me - is that testing ok in the dvr spec for you?
12:53 <tinwood> jamespage, I'm just getting it sorted now - let you know when it's done. xenial/mitaka?
12:54 <jamespage> tinwood, okies
12:54 <freak__> jamespage its /dev/sda
12:54 <jamespage> tinwood, cause I know the amulet tests won't exercise that stuff at all
12:54 <tinwood> jamespage, indeed.
12:54 <freak__> jamespage here is the detail http://paste.ubuntu.com/15848759/
12:55 * lazyPower read backscroll
12:56 <lazyPower> jamespage niiiiceee!
12:57 <webscholar> ..
12:57 <lazyPower> jamespage - we threw down some docs about that as well https://jujucharms.com/docs/devel/authors-charm-store
12:57 <lazyPower> re: publishing
13:02 <jamespage> lazyPower, awesome
13:05 <freak__> jamespage still the status is the same ... the lxc are in allocating state
13:22 <freak__> jamespage here is the current status http://paste.ubuntu.com/15849141/
13:22 <freak__> is it normal to take so much time allocating?
13:22 <jamespage> freak__, not quite sure whats happening tbh
13:23 <jamespage> freak__, no - i suspect something else
13:23 <jamespage> freak__, can you check the status of the lxc containers on the machines?
13:23 <jamespage> sudo lxc-ls
13:23 <jamespage> sudo lxc-ls -f actually is better
13:23 <freak__> ok
13:24 <freak__> jamespage   http://paste.ubuntu.com/15849161/
13:25 <jamespage> freak__, sudo !!
13:25 <jamespage> needs root
13:26 <freak__> ohh... right   http://paste.ubuntu.com/15849174/
13:27 <jamespage> freak__, thats a good start
13:28 <freak__> yes thats alright... but have you noticed the template at the end is stopped?
13:31 <freak__> jamespage what do you think could be the issue in the allocation of lxc?
13:32 <jamespage> freak__, yeah - the template being stopped is fine
13:36 <freak__> my current status
13:36 <freak__> http://paste.ubuntu.com/15849258/
13:40 <freak__> jamespage i found this bug
13:40 <freak__> https://bugs.launchpad.net/juju/+bug/998238
13:40 <mup> Bug #998238: local provider unit agents get stuck in pending state because of host firewall blocking communication <firewall> <local-provider> <pyjuju:Triaged> <juju-core:Triaged> <https://launchpad.net/bugs/998238>
13:41 <freak__> they are saying to disable ufw
13:41 <freak__> and then destroy the environment and build again
13:41 <freak__> http://askubuntu.com/questions/134977/juju-stuck-in-pending-state-when-using-lxc
13:42 <jamespage> freak__, I doubt its enabled if the physical machines are maas deployed
13:43 <freak__> how to check ?
13:43 <freak__> whether its enabled or not
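
(The question isn't answered directly in the log; on Ubuntu the stock firewall frontend is ufw, so the usual check would be the one below.)

    # report whether the host firewall is active, and show its rules if so
    sudo ufw status verbose
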
13:44 <sparkiegeek> freak__: can you SSH on to the machines that are still pending and paste /var/log/cloud-init-output.log and /var/log/juju/*
13:44 <freak__> sparkiegeek, ok i will
13:48 <freak__> sparkiegeek, here is the output http://paste.ubuntu.com/15849429/
13:49 <sparkiegeek> freak__: that looks like it's all healthy - how about /var/log/juju/all-machines.log ?
13:50 <freak__> sparkiegeek, ok i will share
13:51 <jamespage> tinwood, recheck-full not really required for the dpdk ports fix
13:51 <tinwood> jamespage, I guess I'm just a bit paranoid.  Sorry.
13:52 <tinwood> One of the other charms broke on wily/liberty because it wasn't enabled when I did pause/resume.
13:52 <jamespage> tinwood, np
13:53 <jamespage> freak__, sorry - doing about 5 different things at once so apologies
13:53 <jamespage> freak__, thats the juju machine 0 cloud-init output
13:54 <tinwood> jamespage, xenial/mitaka works with the ovs change.  Just checking trusty/kilo now (for dvr).
13:54 <freak__> jamespage, no issue.. you guys at canonical are very supportive.. salute you
13:54 <jamespage> freak__, you should be able to ssh directly to the lxc units - say ssh ubuntu@192.168.6.164
13:54 <beisner> cholcombe, yah, ceph-radosgw amulet full is failing @ master in the same way as on your review.   http://pastebin.ubuntu.com/15849481/   got ideas?
13:54 <freak__> ok. let me ssh
13:54 <jamespage> freak__, you might not be able to
13:54 <jamespage> lets see
13:57 <cory_fu> aisrael: Thanks for charm-tools#182!  I had one review comment / request, but awesome that you took that on.  :)
13:58 <freak__> jamespage i was able to ssh to the lxc - here is the output http://paste.ubuntu.com/15849549/
13:59 <jamespage> freak__, is 192.168.11.193 the ip address of machine 0?
14:00 <freak__> jamespage yes its the ip on machine 0, vlan 11
14:00 <jamespage> freak__, oh wait - thats a different subnet to the lxc containers? can they actually access that IP ?
14:01 <freak__> jamespage, they are on the same machine
14:02 <jamespage> freak__, can you explain your networking a bit please
14:03 <aisrael> cory_fu: My pleasure. I just updated the pull request. I'm thinking I should also update the proof command to do similar wrt colourized output
14:03 <freak__> on machine 0 i have given ip 192.168.6.193 on eth0 in maas; on eth1 no ip - i have created vlans/subinterfaces
14:03 <freak__> from vlan 11-16
14:03 <freak__> for update: from the lxc on machine 0 i cannot ping 192.168.11.193
14:04 <cory_fu> aisrael: I think that would be nice, but also a somewhat larger undertaking
14:04 <freak__> but on machine 0 i also have the ip 192.168.6.193
14:04 <freak__> so the lxc should communicate through that
14:04 <freak__> i mean 192.168.6.193 on machine 0 can communicate with 192.168.6.164 on the lxc
14:05 <jamespage> freak__, yeah - I see what you mean
14:06 <jamespage> freak__, so the vlan interfaces are all trunked in over eth0 on machine 0 right?
14:06 <jamespage> and you did that via MAAS I'm guessing...
14:06 <freak__> eth1 is the trunk
14:06 <jamespage> oh right - sorry
14:06 <freak__> and the vlans are configured on eth1
14:06 <jamespage> okies...
14:07 <jamespage> dimitern, hey - need some support here ^^ how does juju decide which IP address of machine 0 to use for machine agents to communicate with?
14:07 <jamespage> context here is that the lxc containers being deployed are trying to address an IP address of machine 0 which they can't actually route to
14:08 <jamespage> despite the fact that machine 0 has an IP on the same subnet
14:08 <jamespage> dimitern, 1.25.5 juju release - freak__: MAAS 1.9?
14:08 <freak__> jamespage maas version is 1.9
14:08 <jamespage> ta
14:09 <jamespage> freak__, bear with us on this one
14:09 <freak__> jamespage, no issue i will wait
14:10 <cory_fu> aisrael: Oh, I actually missed that was proof and not lint (a separate issue I created) and thought it would just be a blanket colorization.  Hrm.  I just realized that, technically, the intention of INFO in proof vs build is different and green might not be the best color for I from proof.  But I don't see another, better alternative
14:10 <jamespage> tinwood, ok - thats good enough for me - pushing that through
14:11 <jamespage> freak__, can you check for me what 'node0.maas' resolves to please
14:12 <freak__> jamespage, here is the output http://paste.ubuntu.com/15849691/
14:13 <jamespage> freak__, ack thanks - that looks right to me
14:13 <jamespage> .6 not .11
14:15 <freak__> jamespage the ip of the region controller is 192.168.6.11 and of the cluster controller is 192.168.6.12
14:15 <freak__> and then the node0 ip is 192.168.6.193
14:15 <jamespage> freak__, +1
14:15 <freak__> if i run dig node0.maas on machine 0, the output is this http://paste.ubuntu.com/15849691/
14:16 <freak__> if you want dig on the region or cluster controller i can also share that if you say
14:16 <tinwood> jamespage, yep, kilo passed too.  Good call. :)
14:16 <cory_fu> aisrael: Merged.  :)
14:17 <aisrael> cory_fu: <3
=== cos1 is now known as c0s
=== scuttle|afk is now known as scuttlemonkey
=== rogpeppe3 is now known as rogpeppe
14:40 <freak__> jamespage the 192.168.11.193 ip should be reachable from the lxc. here is a snip of the routing table http://paste.ubuntu.com/15850132/
14:41 <jamespage> freak__, 192.168.6.1 can route between .6 and .11 ?
14:41 <freak__> i think i should change the gateway ip to 192.168.6.193
14:42 <freak__> that would be much better
14:44 <freak__> jamespage i changed the default gw from 6.1 to 6.193
14:44 <freak__> now from the lxc i can ping 11.193
14:45 <jamespage> freak__, I suspect that's not a great idea
14:45 <jamespage> they might be able to get to the bootstrap node, but I suspect the rest of the world is now inaccessible
=== hbaum_ is now known as hbaum
14:46 <freak__> jamespage you are right, but in that case the lxc should choose the 6.193 ip to go to the outside world
14:46 <jamespage> freak__, not really
14:46 <freak__> and not choose 11.193
14:46 <jamespage> a default gateway should be enough most of the time
14:47 <jamespage> the problem is the way the machine agent on the physical machine is configuring the machine agents in the lxc containers to get their tools
14:49 <icey> with juju2, how do I set up an apt-http-proxy for an openstack cloud? I have it configured in the cloud.yaml that I imported the cloud settings with, but it doesn't seem to be using it
14:51 <freak__> jamespage how can i force the lxc to resume its process of downloading tools, which was interrupted due to the inaccessible ip 11.193?
14:52 <tvansteenburgh> cory_fu: when creating a layer, when should one create a config.yaml, versus just exposing options in layer.yaml? is that documented somewhere?
14:52 <jamespage> freak__, hmm
14:52 <jamespage> freak__, restarting cloud-init might work
14:52 <jamespage> my other suggestion was to reboot the container - but that might reset your temp route
14:54 <freak__> jamespage what if we make the route permanent by modifying the network interfaces file
14:54 <freak__> and then restart the container
14:54 <cory_fu> tvansteenburgh: No, I don't think layer options are well documented yet.  Basically, layer options are a replacement for, e.g., apache.yaml from the apache-php layer, so that we don't have an explosion of .yaml files.  So, they're intended for a charm layer to influence what a base layer does, whereas config.yaml is for user-facing options
14:54 <freak__> btw how will we restart the container?
14:54 <jamespage> freak__, well maybe, but I'm concerned that will then just break something else
14:54 <jamespage> freak__, reboot inside the container...
14:55 <freak__> nice :P
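
(A sketch of the "make the route permanent, then reboot" idea discussed above, run inside the container. The addresses come from this conversation; routing only the 192.168.11.0/24 subnet via the host is a narrower alternative to moving the default gateway, and the post-up stanza is the standard ifupdown mechanism - none of this was verified in the log.)

    # append a post-up route under the container's last iface stanza
    # (assumes eth0 is the final stanza in /etc/network/interfaces)
    echo "    post-up ip route add 192.168.11.0/24 via 192.168.6.193" | \
        sudo tee -a /etc/network/interfaces

    # reboot so cloud-init retries fetching the agent tools
    sudo reboot
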
14:55 <tvansteenburgh> cory_fu: oh, right :)
14:55 <tvansteenburgh> cory_fu: i forgot that config.yaml was just part of the charm :P
14:55 <cory_fu> :)
15:03 <shruthima> Hi All, we have developed IBM-Installation-Manager in bash and i have declared an ibm_im_package option in layer.yaml but i am unable to fetch it - can you please suggest a command to fetch layer options?
15:05 <lazyPower> shruthima - layer options are documented here https://jujucharms.com/docs/devel/reference-layer-yaml
15:05 <lazyPower> ensure you've implemented them as defined in that reference guide; you can then fetch the layer options like we do in this example: https://github.com/mbruzek/layer-storage/blob/master/reactive/storage.py#L113
15:07 <lazyPower> shruthima - in addition to that guide, ensure you're also running the latest version of charm-tools (2.1.2) so you've got the required builder modifications to support options in layer.yaml
15:11 <shruthima> we have written the code in bash - so can you provide any bash example if available?
15:16 <shruthima> <lazyPower> Can you please provide any example written in bash code ?
15:17 <lazyPower> shruthima - ah good point, i'm not sure if thats exposed via the bash CLI
15:18 <lazyPower> cory_fu i'm pretty certain it is, do you happen to know if that was added to the bash helpers?
15:19 <aisrael> jcastro: ping
15:21 <cory_fu> lazyPower, shruthima: Actually, layer options aren't exposed by the bash CLI. :(  It shouldn't be too hard to add, and it would be in the base layer, so won't require much to get it released.
15:21 <lazyPower> oh, good to know!
15:21 <lazyPower> thanks cory_fu
15:21 <cory_fu> However, I'm pretty busy today, with the last day of the big data team sprint, so I'm not sure if I'll be able to get it done today
15:22 <shruthima> oh k thanku lazyPower cory_fu
15:23 <cory_fu> shruthima: I will send out an email to the juju list when I can get that done
15:24 <shruthima> cory_fu : Thanku :)
=== Spads_ is now known as Spads
15:32 <bdx> hows it going everyone? I've a few jaas/charmstore questions for anyone listening .... 1. How can I delete stale charms that I don't want to show up in the charmstore anymore (ones in the legacy charmstore that got pulled in from my lp code, and new jaas namespace charms)?  2. How can I create a channel?  3. (juju2) When I add users with write access to a model, they don't seem to be able to `juju ssh` into any deployed machines; do I need to add each new user's ssh key, and if so, which key?
15:33 <lazyPower> bdx - you were asking about channels - have a look at the docs over the charm store + channels here https://jujucharms.com/docs/devel/authors-charm-store
15:34 <bdx> lazyPower: perfect, thx
15:34 <lazyPower> bdx - regarding removing of charms - you can only remove charms that have been uploaded to the new store. The old store is like an icebox; think of those charms as being in stasis.
=== redelmann is now known as rudi_eating
15:41 <bdx> I'll offer an award of $50 and a bucket of beer to any charmstore admin who can remove some old, stale, failed charm attempts for me :-)
15:43 <magicaltrout> i'll give them some pork scratchings and a pint of ale
15:46 <lazyPower> magicaltrout - hows session planning going for apachecon?
15:46 <bdx> who could resist? someone will break .... just give them time
15:47 <lazyPower> bdx - good things come to those who are patient
15:47 <magicaltrout> bleh, in between contract work on saiku, contract work for nasa and apachecon, my life is a big mush of loads to do
15:47 <lazyPower> magicaltrout - Fair enough :) was poking as a leading question to offer eyeballs for review/help if any of that would be helpful to your goals there
15:48 <magicaltrout> its coming on, the Juju Data Management talk will be pretty straightforward (famous last words). I'm working on the tutorial content and presentation slides for all of them at the mo
15:48 <lazyPower> ack, just lmk when/if i can be of help
15:48 <magicaltrout> my plan is to get the planning and presentation stuff done between now and the 29th, which is the deadline for my real job, and after that I should have 2 weeks to get all the technical stuff done for apachecon
15:49 <magicaltrout> transpires Q1 2016 is pretty chaotic ;)
15:57 <magicaltrout> lazyPower: last year I was out at 2am playing very drunk beach volleyball, went to bed at 5, got up at 9 and wrote my presentation for 11
15:57 <magicaltrout> ever the professional.....
15:58 <magicaltrout> wont be happening this year with my tutorial load
16:04 <tinwood> gnuoy, interesting.  The amulet test for wily/liberty isn't stopping apache2
16:05 <gnuoy> oh, well that'd be it then
16:06 <tinwood> gnuoy, except now it has.
16:06 * tinwood thinks this is weird.
16:07 <tinwood> gnuoy, might be finger trouble on my side.  More investigation.
16:07 <gnuoy> good luck
16:07 <tinwood> gnuoy, and just to verify, you were just doing amulet tests with a bootstrapped juju only?
16:08 <gnuoy> tinwood, yep
16:09 <tinwood> gnuoy, kk, ta
=== redir_afk is now known as redir
16:13 <freak_> jamespage i made the updated default gw route to 6.193 permanent
16:13 <freak_> and restarted the lxc
16:13 <freak_> now when i look at the cloud-init-output.log
16:14 <freak_> here is the output http://paste.ubuntu.com/15852704/
16:14 <freak_> it updated the route info only, but did not resume the process
=== tinwood is now known as tinwood_afk
16:29 <beisner> icey, cholcombe - bug 1570960
16:29 <mup> Bug #1570960: ceph-osd stuck in a mon-relation-changed infinite loop <uosci> <ceph-osd (Juju Charms Collection):New> <https://launchpad.net/bugs/1570960>
16:31 <icey> that is not stuck... or in a mon-relation-changed loop
16:31 <icey> it's in an update-status loop, which is by design?
16:32 <beisner> icey, well it goes on and on back to mon-relation-changed, then apt installs again and again
16:34 <beisner> icey, yah so my quick assessment may not be right on the actual issue.  the symptom is solid though:  ceph-osd blocks forever on trusty-icehouse
16:37 <beisner> icey, bug updated
16:37 <icey> beisner: I can look after lunch
16:52 <cmars> icey, cholcombe question about the ceph charm.. should I be able to deploy it to a lxd controller? anything special that needs to be done to get it working in containers?
16:52 <cholcombe> cmars, it should just work.  are you running into issues?
16:52 <cmars> i've tried using directories for the osds, but they always seem to be stuck in HEALTH_WARN
16:53 <cholcombe> can you paste the warning for me?
16:53 <cmars> cholcombe, yep, i'll try again and get you an error message from a fresh deploy
16:53 <cholcombe> ok thanks
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
17:18 <cmars> cholcombe, here's what I'm seeing: https://paste.ubuntu.com/15854608/
17:19 <cholcombe> cmars, i've seen that before with instances that have disks that are too small.  your ceph osd tree will show that all the osds are weighted at 0 instead of 1 like they should be
17:21 <cmars> cholcombe, they're all running in containers, so they all think they have the same usage & free space as the host
17:21 <cmars> cholcombe, is that the problem, perhaps?
17:22 <cholcombe> could be.  what does your ceph osd tree look like?
17:22 <cmars> cholcombe, how do I show that?
17:23 <cholcombe> just type `ceph osd tree` :)
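
(A sketch of the check cholcombe describes: list the OSD tree and, if the CRUSH weights really show 0, raise them. osd.0 and the weight of 1.0 are example values; run from a node with a ceph admin keyring.)

    # show OSD placement and CRUSH weights
    sudo ceph osd tree

    # if an osd sits at weight 0, one way to raise it (repeat per osd)
    sudo ceph osd crush reweight osd.0 1.0
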
=== rudi_eating is now known as rudi
17:45 <webscholar> Hi dimiter
17:45 <webscholar> are you around?
17:45 <webscholar> james how are you?
17:46 <webscholar> jamespage need some help
17:56 <c0s> bdx: I am sorry - I just saw your PR for the puppet layer, which I have essentially implemented after talking to you here yesterday
17:57 <c0s> Wasn't trying to steal your idea, really.
17:58 <beisner> thedac, can you cr+2 wf+1 this?  neutron-api-odl unit and lint were failing @master.  fyi, our ci fails b/c this charm has no amulet test yet.  https://review.openstack.org/#/c/306552/
17:58 <thedac> beisner: I'll take a look
18:01 <beisner> thx thedac
18:02 <beisner> thedac, for context:  https://review.openstack.org/#/q/topic:pbr-reqs
18:04 <cory_fu> Is it true that the local provider doesn't work on centos?  I didn't think that was the case, but someone said that it was
18:04 <thedac> beisner: approved
=== redir is now known as redir_lunch
=== chuck__ is now known as zul
=== redir_lunch is now known as redir
19:39 <jamespage> beisner, https://review.openstack.org/#/c/305065/
19:39 <jamespage> UOSCI says go...
19:46 <freak_> hi everyone
19:46 <freak_> need help regarding lxc containers
19:52 <cholcombe> beisner, yeah it's the --yes-i-really-mean-it bs
20:10 <beisner> cholcombe, bug 1571050
20:10 <mup> Bug #1571050: remove-cache-tier action failing @mitaka <uosci> <ceph (Juju Charms Collection):New for xfactor973> <ceph-mon (Juju Charms Collection):New for xfactor973> <https://launchpad.net/bugs/1571050>
20:11 <cholcombe> beisner, yeah i'm on it :)
20:11 <beisner> sweet thx sir
20:14 <cholcombe> beisner, the --yes-i-really-mean-it flag fixes it.  i'll have a patch up soon
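
(For context, a sketch of what removing a cache tier looks like once the override flag is in play; pool names are examples, and the actual charm-helpers fix is the branch linked below. On newer Ceph releases, putting a cache pool into forward mode demands the explicit confirmation flag.)

    # put the cache pool into forward mode - newer Ceph requires the override
    sudo ceph osd tier cache-mode cache forward --yes-i-really-mean-it

    # detach the overlay and drop the tier relationship
    sudo ceph osd tier remove-overlay base
    sudo ceph osd tier remove base cache
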
20:14 <beisner> cholcombe, awesome
=== rbasak_ is now known as rbasak
20:43 <cholcombe> beisner, https://code.launchpad.net/~xfactor973/charm-helpers/ceph-jewel-flag/+merge/292049
20:44 <cholcombe> we'll have to resync the charms again :-/
20:56 <beisner> cholcombe, c-h merged.   on these 2, i'd say just do a resync as a new patchset:
20:56 <beisner> https://review.openstack.org/#/c/305922/
20:56 <beisner> https://review.openstack.org/#/c/305933/
20:56 <beisner> cholcombe, then new reviews for any others, if any
21:06 <cholcombe> beisner, good idea
21:08 <cholcombe> beisner, i think ceph-mon and ceph are the only ones that need it since they actually run the commands
21:09 <beisner> woot
21:10 <cholcombe> i have a new patchset up for ceph-mon
21:12 <cholcombe> beisner, for the future i'm pondering pulling out the cli calls and making api calls instead
21:13 <cholcombe> so we can get out of this "every iteration breaks things" loop
21:13 <beisner> but you just made your own api!  :-)
21:13 <cholcombe> hehe
21:13 <cholcombe> i mean swapping out the cli calls for librados calls
21:14 <beisner> yah i'm just giving you shenanigans
21:14 <cholcombe> it's not hard but it'll require lots of typing :)
21:14 <beisner> ha!
21:14 <cholcombe> and a massive refactor which i'll have to hold off till 16.10 for
21:14 <beisner> ok so, gonna do a charm-ceph review too cholcombe ?
21:14 <cholcombe> yeah
21:15 <cholcombe> it's up
21:15 <beisner> cool.  i think this should 'just land' as its the same result as master:    https://review.openstack.org/#/c/305922/   <- cholcombe ?
21:16 <cholcombe> yeah
21:16 <beisner> boom
21:17 <cholcombe> damn it's a merge frenzy
21:26 <beisner> thedac, dang, and we just revalidated amulet-full rmq @master.  well, you know you'll have a good baseline  ;-)
21:28 <thedac> yeah, I suspect few people were doing ssl or we would have heard the yelling on this one
=== blahdeblah_ is now known as blahdeblah
21:33 <beisner> thedac, odd though, the rmq amulet test flips ssl on and off several times then sends/checks amqp messages
21:34 <thedac> beisner: subsequent to install though.
21:34 <beisner> ah but it has first settled in a non-ssl config, so everything is installed by the time we flip it
21:34 <beisner> yeah
21:34 <thedac> With the bundle it is set before install
21:37 <cholcombe> haha beisner i love your --yes-i-really-approve-it
21:38 <beisner> lolz
=== scuttlemonkey is now known as scuttle|afk
22:07 <beisner> thedac, cholcombe - amulet-smoke is clear on rmq, ceph, ceph-mon.  imho, land-worthy
22:08 <cholcombe> sweet!
22:09 <beisner> nothin like landing after 5p on a fri, yah?
22:10 <thedac> beisner: thanks. I'll wait for a review Monday unless you are handing out +2s
22:10 <beisner> thedac, did you exercise that in the ssl mojo spec?
22:10 <thedac> beisner: a version of that, yes
22:10 <thedac> ddellav: actually this would help you. Do you want to test it out? https://review.openstack.org/#/c/306628/
22:11 <ddellav> ah nice thedac
22:12 <beisner> thedac, ddellav - yah the mojo spec is really our only mechanism to exercise that.   it lgtm though!
22:12 <beisner> thedac, i'd be inclined to land her now tbh
22:13 <thedac> ddellav: I'll leave it to you. We'll land it after you test it
22:13 <ddellav> thedac whats the best way to incorporate that fix into my mojo spec?
22:14 <beisner> i don't think a refspec is consumable in mojo, though it is consumable in juju-deployer
22:14 <thedac> oooh, good question. Because we need to grab the refspecs.
22:14 <thedac> Yeah, not sure it is doable in any easy kind of way
22:16 <beisner> ddellav, thedac - looks like it'd take a fork, refspec fetch, merge, push, and repointing the spec's collect at the fork
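
(A sketch of the fork-and-merge dance beisner outlines, using the review number from this conversation. The repo URL, fork remote, and patchset number are placeholders; gerrit exposes change 306628 under refs/changes/28/306628/N.)

    # fetch the proposed patchset from gerrit and merge it locally
    git fetch https://review.openstack.org/openstack/charm-rabbitmq-server \
        refs/changes/28/306628/1 && git merge FETCH_HEAD

    # push the merged branch to a fork, then point the mojo spec's collect at it
    git push my-fork HEAD:refs/heads/rmq-ssl-fix
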
22:17 <ddellav> beisner ew
22:17 <thedac> beisner: ddellav this is what I tested with. http://pastebin.ubuntu.com/15860862/ That both reproduced the problem and showed that it was fixed.
22:17 <thedac> honestly I think that is enough
22:17 <ddellav> thedac ok, ill run with that
22:17 <thedac> with a local copy
22:18 <beisner> thedac, me too.   the only hold-up i've got is:    is the ctl file always /usr/sbin/rabbitmqctl, from precise --> xenial?
22:19 <thedac> as far as I know. Also we use the same test elsewhere in the charm
22:19 <beisner> i'm sold
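
(The check beisner is relying on, spelled out; a sketch of confirming the path on any given series.)

    # where does the rabbitmq-server package ship its control tool?
    dpkg -L rabbitmq-server | grep 'rabbitmqctl$'

    # the existence test itself
    [ -x /usr/sbin/rabbitmqctl ] && echo "rabbitmqctl present"
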
=== \b is now known as benonsoftware
22:54 <cholcombe> i'm thinking of adding librados to charmhelpers.  Is it a bad idea to import python libraries that require an apt-get install ?
22:55 <cholcombe> looks like the README.test already contains a bunch of libraries.  Seems like it's ok
