=== scuttlemonkey is now known as scuttle|afk
=== axw_ is now known as axw
=== menn0 is now known as menn0-afk
=== thumper is now known as thumper-afk
=== menn0-afk is now known as menn0
[08:41] tinwood, gnuoy: good morning
[08:41] good morning :)
[08:41] o/
[08:47] tinwood, mmmm
[08:47] looking at your port up change
[08:47] jamespage, yup ?
[08:47] hi guys
[08:48] is anybody available right now who can help me out
[08:48] tinwood, yeah - your fix is great for a normal port
[08:48] i'm deploying the openstack base bundle
[08:48] but for a dpdk port, the ip link commands will fail...
[08:48] jamespage, but broken for other things?
[08:48] through the command juju quickstart openstack-base
[08:48] but i'm facing this issue http://paste.ubuntu.com/15845425/
[08:48] jamespage, oh
[08:48] tinwood, yah - a smaller fix for now would be to specialize that function for the dpdk codepath and use the charmhelpers version for non-dpdk
[08:49] jamespage, ya, I understand. On it.
[08:49] freak__, yeah - your lxc containers look unhappy
[08:49] jamespage, any reading material on dpdk?
[08:50] freak__, "failed to retrieve the template to clone"
[08:50] tinwood, erm
[08:50] that's what i want to know - why are the containers not initializing?
[08:50] tinwood, look at line 360 of that same file
[08:50] that's the bit that does the dpdk port adds...
[08:50] jamespage, kk thanks.
[08:51] freak__, I'd drop onto one of the physical hosts and take a look in /var/log/juju/machine-*.log
[08:51] might give you a bit more of a clue
[08:51] ok, let me check
[08:51] the error is coming from lxc - so it might be some sort of firewall egress issue? is the environment you are deploying in network-limited in any way?
[08:52] WARNING juju.apiserver.client status.go:679 error fetching public address: public no address
[08:54] my lab subnet is different and i'm doing NATing on my router to go to the outside world
[08:54] bbaqar, around?
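A minimal sketch of the log check suggested above, assuming the physical host is juju machine 1 and the usual 1.25-era layout of /var/log/juju; the grep pattern is only illustrative:

    juju ssh 1
    # on the host, look for the template clone failure and related lxc errors
    sudo grep -iE 'lxc|template|clone' /var/log/juju/machine-1.log | tail -n 50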
[08:56] freak__, hmm that should be ok
[08:56] freak__, that public address message is probably ignorable as well
[08:58] jamespage how can i check whether my cluster controller is performing dhcp and dns correctly or not
[08:58] any command to check the status of these services?
[08:59] freak__, I'd drop onto one of the deployed machines and check dns is all good that way (dig or suchlike)
[08:59] ok
[09:00] jamespage check this http://paste.ubuntu.com/15845529/
[09:00] i think it's working fine as far as dns is concerned
[09:00] freak__, yeah - that looks fine
[09:01] hmm
[09:02] freak__, could you pastebin the output of sudo lxc-ls -f
[09:02] ok
=== thumper-afk is now known as thumper
[09:03] http://paste.ubuntu.com/15845565/
[09:13] jamespage http://paste.ubuntu.com/15845565/
[09:14] freak__, yeah got it thanks
[09:19] freak__, we need to get a bit more debug output
[09:19] ok
[09:19] freak__, the juju controller/bootstrap node caches the images for lxc - I just did a quick test with 1.25.5 locally and it worked OK
[09:20] freak__, ok I think we can do this without re-bootstrapping
[09:20] i think that the issue is with lxc service start
[09:20] freak__, can you do
[09:21] juju set-env logging-config="<root>=DEBUG;unit=DEBUG"
[09:21] that will up the debug level across the environment
[09:21] freak__, and then ssh to one of the nodes and delete the lxc container
[09:22] freak__,
[09:22] sudo lxc-destroy --name juju-trusty-lxc-template
[09:22] and then have another go at adding a lxc container to that machine
[09:23] freak__, juju add-machine lxc:X (where X is the machine id of the server you just deleted the template from)
[09:23] ok.. right now there is some power issue, they are doing maintenance, so nodes are off.. i can perform these steps after 40 mins..
[09:23] freak__, okies
[09:23] lemme know how that goes
[09:23] ok i will definitely inform you after performing these steps
[09:23] specifically looking for messages from machine-X.log around problems with lxc and specific error messages
[09:24] jamespage ok. i will
[09:25] gnuoy, gerrit is foobar by the way - apparently no new jobs are being processed...
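Pulling jamespage's steps above together, a rough sketch of the re-provisioning sequence on juju 1.25 — machine 1 and the trusty template name are assumptions taken from the conversation:

    # raise the logging level across the environment
    juju set-env logging-config="<root>=DEBUG;unit=DEBUG"
    # drop the cached template container on the affected host
    juju ssh 1 sudo lxc-destroy --name juju-trusty-lxc-template
    # ask juju to create a fresh lxc container on that machine
    juju add-machine lxc:1
    # then watch /var/log/juju/machine-1.log on the host for lxc errors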
[09:26] oh
[09:38] gnuoy, infra team are working on it
[09:38] kk
[10:08] bbaqar, when you do appear - could you confirm what testing the updates for the plumgrid charms have had; I don't intend to exercise them myself so am dependent on your test results :-)
[10:09] bbaqar, you also asked me about migration of the plumgrid charms under the openstack project for development
[10:10] jamespage i executed the commands as you mentioned
[10:11] upon running juju add-machine lxc:0 it showed the msg created container 0/lxc/3
[10:11] but in juju status
[10:12] http://paste.ubuntu.com/15846755/
[10:13] bbaqar, https://review.openstack.org/#/c/232705/ is the change we made to push the core charms under the openstack project
[10:13] jamespage here are machine 0 logs http://paste.ubuntu.com/15846774/
[10:14] bbaqar, I'd suggest that you follow the same route - I suspect that you will want the group for core-reviewers to be a little different tho
[10:15] freak__, need a bit more log I think
[10:15] ok
[10:15] which one? tell me the command and i will share
[10:21] freak__, the machine-0 log
[10:21] ok i will take the complete log and share
[10:25] jamespage any website like paste.ubuntu where i can share the log file i have taken using securecrt
[10:26] coz the file is too long and paste.ubuntu got hung
[10:26] freak__, a grep for 'container' might be helpful to filter things out
[10:28] u mean cat /var/log/juju/machine-0.log | grep container
[10:29] jamespage here is the output http://paste.ubuntu.com/15847017/
[10:33] freak__, ok
[10:34] freak__, apparently "Invalid argument - setting cmdline failed" is a warning only
[10:34] freak__, let's try starting that container
[10:34] how to start
[10:35] freak__, lxc-start --name juju-trusty-lxc-template
[10:35] ok
[10:35] sudo lxc-start --name juju-trusty-lxc-template
[10:35] that will happen in the foreground - hopefully that might give us a clue
[10:36] jamespage here is the issue http://paste.ubuntu.com/15847100/
[10:37] freak__, yeah still not much info
[10:38] freak__, try again with "-l DEBUG"
[10:38] that might give us more
[10:38] and -F for foreground starting?
[10:38] all I can think is the cached image in juju is foobar
[10:39] sparkiegeek, I thought that was the default but I might be wrong
[10:39] freak__, add "-F" as well to be sure...
[10:39] jamespage: nah, default is daemon
[10:39] sparkiegeek, oh
[10:40] jamespage check this http://paste.ubuntu.com/15847157/
[10:40] jamespage: Thanks for the help. Bundles in all the charms tests deploy smoothly.
[10:40] freak__, needs to be -F not -f
[10:40] ok
[10:41] freak__, and DEBUG not debug apparently...
[10:41] jamespage: have not run the amulet tests though in the last two weeks .. but they run the same bundle that i deploy every day
[10:41] bbaqar, ok
[10:41] jamespage: and test for the same things i test every day
[10:41] bbaqar, do you have a bundle in the charm-store for plumgrid?
[10:42] gnuoy, https://review.openstack.org/#/c/304668/ could you review please
[10:42] finally passing amulet tests...
[10:42] http://paste.ubuntu.com/15847193/
[10:42] jamespage: yes we do .. but it has to be updated as soon as we land the charm MPs https://jujucharms.com/plumgrid-ons/bundle/9
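For the foreground start with debug logging worked out above, the full invocation ends up along these lines — the -o logfile path is an assumption; without it the debug output may only land in the container's own log:

    sudo lxc-start --name juju-trusty-lxc-template -F -l DEBUG -o /tmp/juju-trusty-lxc-template.log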
[10:43] bbaqar, yah - I see some of the config options are changing names - that will break upgraders but I suspect your install base is known right now :-)
[10:43] jamespage, will do
[10:44] jamespage while bootstrapping there is the option 'disable juju network management'. i checked this option - do you think this bridge issue is appearing due to that?
[10:44] freak__, right - that's useful
[10:44] freak__, hmmm
[10:44] freak__, can you do 'ip addr' on that unit please
[10:44] freak__, yes I suspect that is breaking things badly...
[10:44] jamespage: no one is on those revisions .. so we are good
[10:45] bbaqar, okay - next on my list
[10:45] so do you suggest to bootstrap again and enable that option
[10:45] freak__, yes
[10:45] i think that will work
[10:45] ok i will try and then get back to you
[10:46] freak__, could you raise a bug against juju for this as well - it really should tell you that you can't have lxc containers if that option is enabled
[10:46] https://bugs.launchpad.net/juju-core/+filebug
[10:46] that would have saved us 2 hours of debug...
[10:46] ok i will
[10:47] freak__, thank you :-)
[10:47] ping me the link once raised and I'll mark it as affects me too...
[10:48] thanks to you as well for the strong support.. ok i will share the link here with you
[10:49] jamespage: appreciate the help.
[10:50] freak__, no problem - just as a heads up we're a week off the next charm release that will support 16.04 and the openstack mitaka release
[10:50] freak__, so if you're planning ahead something to think about
[10:50] freak__, you can upgrade liberty->mitaka but not in-place 14.04 -> 16.04
[10:50] using the charms that is
[10:56] bbaqar, for migration under the openstack project, you'll want to add tox configurations for all of your charms...
[10:56] bbaqar, look at any of the core charms for hints on that
[10:57] bbaqar, any general summary for the commit messages on these?
[10:57] bbaqar, and remind me to talk to you about direct charm store publishing soon....
[10:57] #let me add a commit message right now.
[10:58] jamespage: I'll add a commit message right now
[10:58] jamespage here is the bug link https://bugs.launchpad.net/juju-core/+bug/1570796
[10:58] Bug #1570796: container startup issue when juju network management disabled
[10:58] bbaqar, ta
[10:58] bbaqar, here is fine - vim is open waiting for me to type...
[10:58] jamespage: just one min
[10:59] gnuoy, nearly have odl-controller passing on xenial
[10:59] soooo close
[11:00] jamespage
[11:01] jamespage: support added for plumgrid 4.1.3 and 5.0 releases | support added for configurable external interfaces | support added for separate fabric network (os-data-network)
[11:01] jamespage . use this for plumgrid-director, plumgrid-edge, plumgrid-gateway
[11:01] jamespage, excellent
[11:02] jamespage: for neutron-api-plumgrid: support added for plumgrid 4.1.3 and 5.0 releases
[11:05] bbaqar, as a future improvement, you might have external-interfaces be a list of mac addresses to use across the deployment
[11:05] bbaqar, there is code in charm-helpers to resolve mac -> interface
[11:09] jamespage: haha ... i wish i had known this earlier ..
[11:09] bbaqar, I suspect some of the challenges we have across SDN solutions are common
[11:10] jamespage: i have seen the function actually .. might be difficult for the users to collect all mac-addresses for each interface on scale deployments
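Not the charm-helpers function itself, but a rough shell sketch of the mac -> interface resolution idea, assuming the usual sysfs layout; the MAC value is a placeholder:

    MAC="52:54:00:aa:bb:cc"   # hypothetical address taken from charm config
    for nic in /sys/class/net/*; do
        # compare each NIC's hardware address against the configured MAC
        [ "$(cat "$nic/address")" = "$MAC" ] && echo "interface for $MAC: $(basename "$nic")"
    done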
[11:10] bbaqar, well maas has them all :-)
[11:11] bbaqar, as a future feature, juju may grow support for presenting the interface directly via network spaces
[11:12] jamespage: You are right. and yes i saw some development on spaces .. will rethink this before pushing xenial charms
[11:12] bbaqar, sure
[11:12] it might be good to schedule a general MAAS 2.0/juju 2.0 update for your team
[11:13] there will be a lot of docs being updated - prob best to wait for that
[11:13] bbaqar, ok all landed...
[11:14] lazyPower, ^^
[11:14] fyi
[11:17] gnuoy, "Controller configuration on bridge br-int incorrect: ![u'tcp:172.17.115.171:6653']! != !tcp:172.17.115.171:6633!"
[11:17] jamespage: thanks a lot.
[11:22] jamespage: For the commit into openstack project. What upstream location should we use? I see all openstack charms are in https://github.com/openstack-charmers/ ... should i keep them in our plumgrid github space?
[11:41] bbaqar, that was just a staging area
[11:42] any git location for the source of the charms is fine
=== matthelmke-afk is now known as matthelmke
[12:06] jamespage: got it
[12:38] jamespage are you there?
[12:40] jamespage i bootstrapped the environment again and enabled the option juju network management .. this time the lxc issue resolved
[12:40] this is the current status, some components are stuck http://paste.ubuntu.com/15848673/
[12:42] freak__, looks like lxc containers might be still coming up
[12:42] ok. i will wait
[12:46] freak__, what's the disk io like on your servers?
[12:53] tinwood, your proposed fix for n-ovs looks good to me - is that testing ok in the dvr spec for you?
[12:53] jamespage, I'm just getting it sorted now - let you know when it's done xenial/mitaka?
[12:54] tinwood, okies
[12:54] jamespage its /dev/sda
[12:54] tinwood, cause I know the amulet tests won't exercise that stuff at all
[12:54] jamespage, indeed.
[12:54] jamespage here is the detail http://paste.ubuntu.com/15848759/
[12:55] * lazyPower read backscroll
[12:56] jamespage niiiiceee!
[12:57] ..
[12:57] jamespage - we threw down some docs about that as well https://jujucharms.com/docs/devel/authors-charm-store
[12:57] re: publishing
[13:02] lazyPower, awesome
[13:05] jamespage still the status is the same ... lxc are in allocating state
[13:22] jamespage here is the current status http://paste.ubuntu.com/15849141/
[13:22] is it normal to take so much time allocating?
[13:22] freak__, not quite sure what's happening tbh
[13:23] freak__, no - i suspect something else
[13:23] freak__, can you check the status of the lxc containers on the machines?
[13:23] sudo lxc-ls
[13:23] sudo lxc-ls -f actually is better
[13:23] ok
[13:24] jamespage http://paste.ubuntu.com/15849161/
[13:25] freak__, sudo !!
[13:25] needs root
[13:26] ohh... right http://paste.ubuntu.com/15849174/
[13:27] freak__, that's a good start
[13:28] yes that's alright... but have you noticed the template at the end is stopped?
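To see each container's state at a glance (the stopped juju-trusty-lxc-template at the end is expected — it is only the cached clone source), something like:

    sudo lxc-ls -f
    sudo lxc-info --name juju-trusty-lxc-template -s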
[13:31] jamespage what do you think could be the issue in the allocation of lxc?
[13:32] freak__, yeah - the template being stopped is fine
[13:36] my current status
[13:36] http://paste.ubuntu.com/15849258/
[13:40] jamespage i found this bug
[13:40] https://bugs.launchpad.net/juju/+bug/998238
[13:40] Bug #998238: local provider unit agents get stuck in pending state because of host firewall blocking communication
[13:41] they are saying disable ufw
[13:41] and then destroy the environment and build again
[13:41] http://askubuntu.com/questions/134977/juju-stuck-in-pending-state-when-using-lxc
[13:42] freak__, I doubt it's enabled if the physical machines are maas deployed
[13:43] how to check ?
[13:43] whether it's enabled or not
[13:44] freak__: can you SSH on to the machines that are still pending and paste /var/log/cloud-init-output.log and /var/log/juju/*
[13:44] sparkiegeek , ok i will
[13:48] sparkiegeek , here is the output http://paste.ubuntu.com/15849429/
[13:49] freak__: that looks like it's all healthy - how about /var/log/juju/all-machines.log ?
[13:50] sparkiegeek, ok i will share
[13:51] tinwood, recheck-full not really required for the dpdk ports fix
[13:51] jamespage, I guess I'm just a bit paranoid. Sorry.
[13:52] One of the other charms broke on wily/liberty because it wasn't enabled when I did pause/resume.
[13:52] tinwood, np
[13:53] freak__, sorry - doing about 5 different things at once so apologies
[13:53] freak__, that's the juju machine 0 cloud-init output
[13:54] jamespage, xenial/mitaka works with the ovs change. Just checking trusty/kilo now. (for dvr).
[13:54] jamespage , no issue.. you guys at canonical are very supportive.. salute you
[13:54] freak__, you should be able to ssh directly to the lxc units - say ssh ubuntu@192.168.6.164
[13:54] cholcombe, yah, ceph-radosgw amulet full is failing @ master in the same way as on your review. http://pastebin.ubuntu.com/15849481/ got ideas?
[13:54] ok. let me ssh
[13:54] freak__, you might not be able to
[13:54] let's see
[13:57] aisrael: Thanks for charm-tools#182! I had one review comment / request, but awesome that you took that on. :)
[13:58] jamespage i was able to ssh to the lxc, here is the output http://paste.ubuntu.com/15849549/
[13:59] freak__, is 192.168.11.193 the ip address of machine 0?
[14:00] jamespage yes it's the ip on machine 0 vlan 11
[14:00] freak__, oh wait - that's a different subnet to the lxc containers? can they actually access that IP ?
[14:01] jamespage , they are on the same machine
[14:02] freak__, can you explain your networking a bit please
[14:03] cory_fu: My pleasure. I just updated the pull request. I'm thinking I should also update the proof command to do similar wrt colourized output
[14:03] on machine 0 i have given ip 192.168.6.193 on eth0 in maas, on eth1 no ip , i have created vlans/subinterfaces
[14:03] from vlan 11-16
[14:03] for update :: from lxc on machine 0 i cannot ping 192.168.11.193
[14:04] aisrael: I think that would be nice, but also a somewhat larger undertaking
[14:04] but on machine 0 i also have the ip 192.168.6.193
[14:04] so lxc should communicate through that
[14:04] i mean 192.168.6.193 on machine 0 can communicate to 192.168.6.164 on lxc
[14:05] freak__, yeah - I see what you mean
[14:06] freak__, so the vlan interfaces are all trunked in over eth0 on machine 0 right?
[14:06] and you did that via MAAS I'm guessing...
[14:06] eth1 is the trunk
[14:06] oh right - sorry
[14:06] and vlans are configured on eth1
[14:06] okies...
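Two quick checks for the open questions above — whether ufw is actually active on the hosts, and whether a container can reach machine 0's VLAN address (addresses are the ones from this deployment):

    sudo ufw status verbose          # on the physical host
    ip route get 192.168.11.193      # from inside the lxc container
    ping -c 3 192.168.11.193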
[14:07] dimitern, hey - need some support here ^^ how does juju decide which IP address of machine 0 to use for machine agents to communicate with?
[14:07] context here is that the lxc containers being deployed are trying to address an IP address of machine 0 which they can't actually route to
[14:08] despite the fact that machine 0 has an IP on the same subnet
[14:08] dimitern, 1.25.5 juju release - freak__: MAAS 1.9?
[14:08] jamespage maas version is 1.9
[14:08] ta
[14:09] freak__, bear with us on this one
[14:09] jamespage , no issue i will wait
[14:10] aisrael: Oh, I actually missed that was proof and not lint (a separate issue I created) and thought it would just be a blanket colorization. Hrm. I just realized that, technically, the intention of INFO in proof vs build is different and green might not be the best color for I from proof. But I don't see another, better alternative
[14:10] tinwood, ok - that's good enough for me - pushing that through
[14:11] freak__, can you check for me what 'node0.maas' resolves to please
[14:12] jamespage , here is the output http://paste.ubuntu.com/15849691/
[14:13] freak__, ack thanks - that looks right to me
[14:13] .6 not .11
[14:15] jamespage the ip of the region controller is 192.168.6.11 and of the cluster controller is 192.168.6.12
[14:15] and then the node0 ip is 192.168.6.193
[14:15] freak__, +1
[14:15] if i run dig node0.maas on machine 0, the output for that is this http://paste.ubuntu.com/15849691/
[14:16] if you want dig on the region or cluster controller i can also share that if you say
[14:16] jamespage, yep, kilo passed too. Good call. :)
[14:16] aisrael: Merged. :)
[14:17] cory_fu: <3
=== cos1 is now known as c0s
=== scuttle|afk is now known as scuttlemonkey
=== rogpeppe3 is now known as rogpeppe
[14:40] jamespage the 192.168.11.193 ip should be reachable from lxc. here is a snip of the routing table http://paste.ubuntu.com/15850132/
[14:41] freak__, 192.168.6.1 can route between .6 and .11 ?
[14:41] i think i should change the gateway ip to 192.168.6.193
[14:42] that would be much better
[14:44] jamespage i changed the default gw from 6.1 to 6.193
[14:44] now from lxc i can ping to 11.193
[14:45] freak__, I suspect that's not a great idea
[14:45] they might be able to get to the bootstrap node, but I suspect the rest of the world is now inaccessible
=== hbaum_ is now known as hbaum
[14:46] jamespage you are right but in that case lxc should choose the 6.193 ip to go to the outside world
[14:46] freak__, not really
[14:46] do not choose 11.193
[14:46] a default gateway should be enough most of the time
[14:47] the problem is the way the machine agent on the physical machine is configuring the machine agents in the lxc containers to get their tools
[14:49] with juju2, how do I set up an apt-http-proxy for an openstack cloud? I have it configured in my cloud.yaml that I imported the cloud settings with but it doesn't seem to be using it
[14:51] jamespage how can i force lxc to resume its process of downloading tools which was interrupted due to the inaccessible ip 11.193
[14:52] cory_fu: when creating a layer, when should one create a config.yaml, versus just exposing options in layer.yaml? is that documented somewhere?
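On the apt-http-proxy question above, a rough sketch — the command name changed across the juju 2.0 pre-releases, so both forms are assumptions to verify against the installed version, and the proxy URL is just an example:

    juju set-model-config apt-http-proxy=http://squid.example.com:3128   # 2.0 beta-era name
    juju model-config apt-http-proxy=http://squid.example.com:3128       # later juju 2.x name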
[14:52] freak__, hmm
[14:52] freak__, restarting cloud-init might work
[14:52] my other suggestion was to reboot the container - but that might reset your temp route
[14:54] jamespage if we make the route permanent by modifying the network config file
[14:54] then restart the container
[14:54] btw how will we restart the container?
[14:54] tvansteenburgh: No, I don't think layer options are well documented yet. Basically, layer options are a replacement for, e.g., apache.yaml from the apache-php layer, so that we don't have an explosion of .yaml files. So, they're intended for a charm layer to influence what a base layer does, whereas config.yaml is for user-facing options
[14:54] freak__, well maybe but I'm concerned that will then just break something else
[14:54] freak__, reboot inside the container...
[14:55] nice :P
[14:55] cory_fu: oh, right :)
[14:55] cory_fu: i forgot that config.yaml was just part of the charm :P
[14:55] :)
[15:03] Hi All, we have developed IBM-Installation Manager in bash and i have declared the ibm_im_package option in layer.yaml but i am unable to fetch it. can you please suggest - is there any command to fetch layer options??
[15:05] shruthima - layer options are documented here https://jujucharms.com/docs/devel/reference-layer-yaml
[15:05] ensure you've implemented them as defined in that reference guide, you can then fetch the layer options like we do in this example: https://github.com/mbruzek/layer-storage/blob/master/reactive/storage.py#L113
[15:07] shruthima - in addition to that guide, ensure you're also running the latest version of charm-tools (2.1.2) so you've got the required builder modifications to support options in layer.yaml
[15:11] we have written the code in bash - so can you provide any bash example, if available?
[15:16] Can you please provide any example written in bash code ?
[15:17] shruthima - ah good point, i'm not sure if that's exposed via the bash CLI
[15:18] cory_fu i'm pretty certain it is, do you happen to know if that was added to the bash helpers?
[15:19] jcastro: ping
[15:21] lazyPower, shruthima: Actually, layer options aren't actually exposed by the bash CLI. :( It shouldn't be too hard to add, and it would be in the base layer, so won't require much to get it released.
[15:21] oh, good to know!
[15:21] thanks cory_fu
[15:21] However, I'm pretty busy today, with the last day of the big data team sprint, so I'm not sure if I'll be able to get it done today
[15:22] oh k thanku lazyPower cory_fu
[15:23] shruthima: I will send out an email to the juju list when I can get that done
[15:24] cory_fu : Thanku :)
=== Spads_ is now known as Spads
[15:32] how's it going everyone? I've a few jaas/charmstore questions for anyone listening .... 1. How can I delete stale charms that I don't want to show up in the charmstore anymore (ones in the legacy charmstore that got pulled in from my lp code, and new jaas namespace charms)? 2. How can I create a channel ? 3. (juju2) When I add users with write access to a model, they don't seem to be able to `juju ssh` into any deployed machines; do I need to add each new user's ssh key, if so, which key?
[15:33] bdx - you were asking about channels - have a look at the docs over the charm store + channels here https://jujucharms.com/docs/devel/authors-charm-store
[15:34] bdx - regarding removing of charms - you can only remove charms that have been uploaded to the new store. The old store is like an icebox, think of those charms as being in stasis.
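Roughly what the channel workflow in the linked authors-charm-store page covers — the sub-command names, the example namespace, and the 'development' channel are from memory for charm-tools of that era, so verify them against the doc:

    charm push . cs:~myuser/mycharm                      # upload a new revision
    charm publish cs:~myuser/mycharm-0 --channel development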
=== redelmann is now known as rudi_eating
[15:41] I'll offer an award of $50 and a bucket of beer to the charmstore admin who can remove some old, stale, failed charm attempts for me :-)
[15:43] i'll give them some pork scratchings and a pint of ale
[15:46] magicaltrout - how's session planning going for apachecon?
[15:46] who could resist? someone will break .... just give them time
[15:47] bdx - good things come to those who are patient
[15:47] bleh, in between contract work on saiku, contract work for nasa and apachecon my life is a big mush of loads to do
[15:47] magicaltrout - Fair enough :) was poking as a leading question to offer eyeballs for review/help if any of that would be helpful to your goals there
[15:48] it's coming on, the Juju Data Management talk will be pretty straightforward (famous last words) I'm working on the tutorial content and presentation slides for all of them at the mo
[15:48] ack, just lmk when/if i can be of help
[15:48] my plan is to get the planning and presentation stuff done between now and the 29th which is the deadline for my real job, and after that I should have 2 weeks to get all the technical stuff done for apachecon
[15:49] transpires Q1 2016 is pretty chaotic ;)
[15:57] lazyPower: last year I was out at 2am playing very drunk beach volleyball, went to bed at 5, got up at 9 and wrote my presentation for 11
[15:57] ever the professional.....
[15:58] won't be happening this year with my tutorial load
[16:04] gnuoy, interesting. The amulet for wily/liberty isn't stopping apache2
[16:05] oh, well that'd be it then
[16:06] gnuoy, except now it has.
[16:06] * tinwood thinks this is weird.
[16:07] gnuoy, might be finger trouble on my side. More investigation.
[16:07] good luck
[16:07] gnuoy, and just to verify, you were just doing amulet tests with a bootstrapped juju only?
[16:08] tinwood, yep
[16:09] gnuoy, kk, ta
=== redir_afk is now known as redir
[16:13] jamespage i made the updated default gw route to 6.193 permanent
[16:13] and restarted the lxc
[16:13] now when i see the cloud-init-output.log
[16:14] here is the output http://paste.ubuntu.com/15852704/
[16:14] it updated the route info only but did not resume the process
=== tinwood is now known as tinwood_afk
[16:29] icey, cholcombe - bug 1570960
[16:29] Bug #1570960: ceph-osd stuck in a mon-relation-changed infinite loop
[16:31] that is not stuck... or in a mon relation changed loop
[16:31] it's in an update-status loop, which is by design?
[16:32] icey, well it goes on and on back to mon-relation-changed then apt installs again and again
[16:34] icey, yah so my quick assessment may not be right on the actual issue. the symptom is solid though: ceph-osd blocks forever on trusty-icehouse
[16:37] icey, bug updated
[16:37] beisner: I can look after lunch
[16:52] icey, cholcombe question about the ceph charm.. should I be able to deploy it to a lxd controller? anything special that needs to be done to get it working in containers?
[16:52] cmars, it should just work. are you running into issues?
[16:52] i've tried using directories for the osds, but they always seem to be stuck in HEALTH_WARN
[16:53] can you paste the warning for me?
[16:53] cholcombe, yep, i'll try again and get you an error message from a fresh deploy
[16:53] ok thanks
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
[17:18] cholcombe, here's what I'm seeing: https://paste.ubuntu.com/15854608/
[17:19] cmars, i've seen that before with instances that have disks that are too small. your ceph osd tree will show that all the osds are weighted at 0 instead of 1 like they should be
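A quick way to confirm that symptom and, if small root disks really are the cause, to bump a directory-backed OSD's crush weight by hand — osd.0 and the 1.0 weight are only examples:

    sudo ceph osd tree
    sudo ceph osd crush reweight osd.0 1.0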
[17:21] cholcombe, they're all running in containers, so they all think they have the same usage & free space as the host
[17:21] cholcombe, is that the problem, perhaps?
[17:22] could be. what does your ceph osd tree look like?
[17:22] cholcombe, how do I show that?
[17:23] just type `ceph osd tree` :)
=== rudi_eating is now known as rudi
[17:45] Hi dimiter
[17:45] are you around?
[17:45] james how are you?
[17:46] jamespage need some help
[17:56] bdx: I am sorry - I just saw your PR for the puppet layer, which I have essentially implemented after talking to you here yesterday
[17:57] Wasn't trying to steal your idea, really.
[17:58] thedac, can you cr+2 wf+1 this? neutron-api-odl unit and lint were failing @master. fyi, our ci fails b/c this charm has no amulet test yet. https://review.openstack.org/#/c/306552/
[17:58] beisner: I'll take a look
[18:01] thx thedac
[18:02] thedac, for context: https://review.openstack.org/#/q/topic:pbr-reqs
[18:04] Is it true that the local provider doesn't work on centos? I didn't think that was the case, but had someone say that it was
[18:04] beisner: approved
=== rbasak_ is now known as rbasak
=== tinwood_afk is now known as tinwood
[18:33] cholcombe, do we have a bug opened for the cache tier failure?
[18:33] beisner, not yet i think
[18:33] cholcombe, ok i'll raise
[18:42] cholcombe, is this the --yes-i-really-mean-it thing? ;-)
[18:48] jamespage: is the next branch of keystone merged into stable yet, for mitaka? I would like to get a small commit in. Am I in time?
[18:50] hi bbaqar - charm feature freeze was last week. we'll be releasing next to stable late next week. the only things we can land into next before then are critical bugfixes and test updates.
[18:51] jamespage: okay no worries.
=== redir is now known as redir_lunch
=== chuck__ is now known as zul
=== redir_lunch is now known as redir
[19:39] beisner, https://review.openstack.org/#/c/305065/
[19:39] UOSCI says go...
[19:46] hi everyone
[19:46] need help regarding lxc containers
[19:52] beisner, yeah it's the --yes-i-really-mean-it bs
[20:10] cholcombe, bug 1571050
[20:10] Bug #1571050: remove-cache-tier action failing @mitaka
[20:11] beisner, yeah i'm on it :)
[20:11] sweet thx sir
[20:14] beisner, the --yes-i-really-mean-it flag fixes it. i'll have a patch up soon
[20:14] cholcombe, awesome
=== rbasak_ is now known as rbasak
[20:43] beisner, https://code.launchpad.net/~xfactor973/charm-helpers/ceph-jewel-flag/+merge/292049
[20:44] we'll have to resync the charms again :-/
[20:56] cholcombe, c-h merged. on these 2, i'd say just do a resync as a new patchset:
[20:56] https://review.openstack.org/#/c/305922/
[20:56] https://review.openstack.org/#/c/305933/
[20:56] cholcombe, then new reviews for any others, if any
[21:06] beisner, good idea
[21:08] beisner, i think ceph-mon and ceph are the only ones that need it since they actually run the commands
[21:09] woot
[21:10] i have a new patchset up for ceph-mon
[21:12] beisner, for the future i'm pondering pulling out the cli calls and making api calls instead
[21:13] so we can get out of this 'every iteration breaks things' loop
[21:13] but you just made your own api! :-)
[21:13] hehe
[21:13] i mean swapping out the cli calls for librados calls
[21:14] yah i'm just giving you shenanigans
[21:14] it's not hard but it'll require lots of typing :)
[21:14] ha!
[21:14] and a massive refactor which i'll have to hold off till 16.10 for
[21:14] ok so, gonna do a charm-ceph review too cholcombe ?
[21:14] yeah
[21:15] it's up
[21:15] cool. i think this should 'just land' as it's the same result as master: https://review.openstack.org/#/c/305922/ <- cholcombe ?
[21:16] yeah
[21:16] boom
[21:17] damn it's a merge frenzy
[21:26] thedac, dang and we just revalidated amulet-full rmq @master. well you know you'll have a good baseline ;-)
[21:28] yeah, I suspect few people were doing ssl or we would have heard the yelling on this one
=== blahdeblah_ is now known as blahdeblah
[21:33] thedac, odd though, the rmq amulet test flips ssl on and off several times then sends/checks amqp messages
[21:34] beisner: subsequent to install though.
[21:34] ah but it has first settled in a non-ssl config, so everything is installed by the time we flip it
[21:34] yeah
[21:34] With the bundle it is set before install
[21:37] haha beisner i love your --yes-i-really-approve-it
[21:38] lolz
=== scuttlemonkey is now known as scuttle|afk
[22:07] thedac, cholcombe - amulet-smoke is clear on rmq, ceph, ceph-mon. imho, land-worthy
[22:08] sweet!
[22:09] nothin like landing after 5p on a fri, yah?
[22:10] beisner: thanks. I'll wait for a review Monday unless you are handing out +2s
[22:10] thedac, did you exercise that in the ssl mojo spec?
[22:10] beisner: a version of that, yes
[22:10] ddellav: actually this would help you. Do you want to test it out? https://review.openstack.org/#/c/306628/
[22:11] ah nice thedac
[22:12] thedac, ddellav - yah the mojo spec is really our only mechanism to exercise that. it lgtm though!
[22:12] thedac, i'd be inclined to land her now tbh
[22:13] ddellav: I'll leave it to you. We'll land it after you test it
[22:13] thedac what's the best way to incorporate that fix into my mojo spec?
[22:14] i don't think a refspec is consumable in mojo, though it is consumable in juju-deployer
[22:14] oooh, good question. Because we need to grab the ref specs.
[22:14] Yeah, not sure it is doable in any easy kind of way
[22:16] beisner, thedac - looks like it'd take a fork, refspec fetch, merge, push, and repoint the spec's collect at the fork
[22:17] beisner ew
[22:17] beisner: ddellav this is what I tested with. http://pastebin.ubuntu.com/15860862/ That both produced the problem and showed that it was fixed.
[22:17] honestly I think that is enough
[22:17] thedac ok, i'll run with that
[22:17] with a local copy
[22:18] thedac, me too. the only hold up i've got is: is the ctl file always /usr/sbin/rabbitmqctl, from precise --> xenial?
[22:19] as far as I know. Also we use the same test elsewhere in the charm
[22:19] i'm sold
=== \b is now known as benonsoftware
[22:54] i'm thinking of adding librados to charmhelpers. Is it a bad idea to import python libraries that require an apt-get install ?
[22:55] looks like the README.test already contains a bunch of libraries. Seems like it's ok
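For reference on the --yes-i-really-mean-it discussion above, tearing down a writeback cache tier on jewel usually looks roughly like this — the pool names are placeholders and the exact flags should be checked against the ceph release in use:

    # stop new writes landing in the cache pool, then flush it out
    ceph osd tier cache-mode hot-pool forward --yes-i-really-mean-it
    rados -p hot-pool cache-flush-evict-all
    # detach the overlay and remove the tier relationship
    ceph osd tier remove-overlay cold-pool
    ceph osd tier remove cold-pool hot-pool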