=== FourDollars_ is now known as FourDollars
=== bpierre_ is now known as bpierre
=== braderhart_ is now known as braderhart
[08:18] hi all, when I deploy openstack with juju, some units are always pending; how can I resolve it?
[08:22] active false local:trusty/ceph-105
[08:22] ceph-osd blocked false cs:trusty/ceph-osd-14
[08:22] ceph-radosgw blocked false cs:trusty/ceph-radosgw-19
[08:22] cinder unknown false local:trusty/cinder-136
[08:22] cinder-ceph false local:trusty/cinder-ceph-2
[08:22] glance unknown false local:trusty/glance-150
[08:22] heat unknown false local:trusty/heat-12
[08:22] juju-gui unknown false cs:trusty/juju-gui-45
[08:22] keystone unknown false local:trusty/keystone-0
[08:22] mongodb unknown false cs:trusty/mongodb-33
[08:22] mysql unknown false local:trusty/percona-cluster-45
[08:22] neutron-api unknown false local:trusty/neutron-api-1
[08:22] neutron-api-onos false local:trusty/neutron-api-onos-0
[08:22] neutron-gateway blocked false local:trusty/neutron-gateway-64
[08:22] nodes-api unknown false cs:trusty/ubuntu-5
[08:22] nodes-compute unknown false cs:trusty/ubuntu-5
[08:22] nova-cloud-controller unknown false local:trusty/nova-cloud-controller-501
[08:22] nova-compute blocked false local:trusty/nova-compute-133
[08:22] ntp false cs:trusty/ntp-14
[08:22] onos-controller unknown false local:trusty/onos-controller-0
[08:22] openstack-dashboard unknown false local:trusty/openstack-dashboard-32
[08:22] openvswitch-onos false local:trusty/openvswitch-onos-0
[08:22] opnfv-promise unknown false local:trusty/opnfv-promise-2016011201
[08:22] rabbitmq-server unknown false local:trusty/rabbitmq-server-150
[11:14] yuanyou: please use a pastebin in the future; also, this is only the services, we'd need to see the units output as well. I suggest you install pastebinit (sudo apt-get install pastebinit) then run `juju status --format tabular | pastebinit`
=== scuttlemonkey is now known as scuttle|afk
=== urulama__ is now known as urulama
[16:14] hey
[16:15] trying to use juju + maas together; managed to bootstrap, but after adding a new charm the next maas machine didn't come up from a juju perspective
[16:15] I can ssh to the machine and I see that cloud-init ran properly, however the juju agent isn't running
[16:15] how could I debug further?
=== scuttle|afk is now known as scuttlemonkey
[16:32] nagyz: what does cloud-init-output.log have in /var/log?
[18:05] funny thing is, after destroying it and re-deploying it, it worked.
[18:05] so I'll need to recreate it to dump the cloud-init-output.log
[18:06] let me boot up a new one and see if I can get a dump.
[18:51] ok, managed to reproduce
[18:53] marcoceppi, http://pastebin.com/jDCcn8vP
[19:10] anyone know of a murano charm / a guide to do the install with a juju openstack?
[19:26] firl there is no murano charm that I know of, but the juju charm store does have a pretty good catalog that you can deploy onto openstack
[19:26] ref = smile.amazon.com
[19:26] I mean https://jujucharms.com/store
[19:27] arosales - look at you supporting charity :D
[19:27] :-)
[19:27] arosales: yeah, I am very pleased with it
[19:27] firl - is there a specific application you were looking for that's in murano that we don't yet have in the store?
[19:28] lazypower|travel: some of the openstack summit videos have murano/heat walkthroughs. I was just hoping to use it.
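A minimal sketch of the two suggestions above, i.e. sharing the full status output via pastebinit and checking whether the machine agent ever started on a MAAS-deployed trusty node. The machine number, upstart job name, and log paths are the usual trusty-era defaults, not values taken from the environments discussed here:

    # from the client: share services *and* units instead of pasting into the channel
    sudo apt-get install pastebinit
    juju status --format tabular | pastebinit

    # on the stuck machine: confirm cloud-init finished, then check the machine agent
    less /var/log/cloud-init-output.log
    sudo status jujud-machine-1              # upstart job on trusty; machine "1" is illustrative
    tail -n 50 /var/log/juju/machine-1.log   # only present once the agent has started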
Right now I am trying to figure out how I want to solve a kubernetes/coreos implementation.
[19:29] firl - funny you mention that :) we have kubernetes charms
[19:29] I don’t care about autoscaling right now, but it would be nice
[19:29] yeah I know the charm is there, but I liked the idea of having coreos with the OS updates for security
[19:29] I was trying to figure out which bundle to use for now to get there
[19:29] and our latest revisions (not in the store proper just yet) support in-place upgrades
[19:29] gotcha
[19:30] if you're interested, i can get you a bundle in the next few minutes
[19:30] that’d be nice
[19:30] we'll have to wait a 20 minute cycle for the charms to ingest, but i can get you up and running on k8s
[19:30] the last thing I found is:
[19:30] ack, let me ping my main man mbarnett
[19:30] https://insights.ubuntu.com/2015/07/30/juju-kubernetes-the-power-of-components/
[19:30] er mbruzek
[19:30] hey neat, that's my article
[19:30] (this is all to solve the fact that the meteor charm is outdated)
[19:30] what where?
[19:30] firl you're in good company ;)
[19:31] :) yeah hah, I remember meeting you in september for the summit
[19:31] Nice!
[19:32] but yeah a recommended bundle for getting it up and going would be awesome
[19:32] does juju have autoscaling implemented yet (I know ceilometer / heat can)
[19:33] well, juju itself doesn't implement that, as autoscaling is subject to business intelligence / different requirements
[19:33] we've got some implementations with zabbix
[19:36] gotcha, yeah I was considering just writing some hooks with ceilometer/heat to call juju add-unit, essentially
[19:44] firl - ok, i'm running a quick test, we did some mods to the etcd interface last week that i need to ensure haven't broken the charm
[19:44] hah ok
[19:44] I'd rather be honest than give you broken software :)
[19:45] i appreciate it
[19:52] while so many devs are around, any idea based on my pastebin why my deployment doesn't pick up juju after maas starts the node? :-)
[19:54] nagyz: have you confirmed that routing / networking is all set up properly?
[19:55] all of your interfaces seem to be down, and the datasource ip is not on a private subnet
[19:56] when deploying from maas you can physically be on the host and log in via ubuntu:ubuntu, I believe, to diagnose it
[19:57] firl, yeah, if I destroy the machine and start a new one it has a 50% chance to come up properly
[19:57] and maas actually tells me the machine is deployed.
[19:57] so it can do the callback to the maas server
[19:57] is it a 50% chance on the same machine, or a 50% chance in general across multiple nodes?
[19:58] on the same machine
[19:58] 9.4.113.0/24 is actually our maas network, so that is correct
[19:59] so this only happens to a specific machine, and it works on all the other nodes with no issues?
[19:59] no, on any node, if I start it via juju (and only then) I have this issue
[19:59] gotcha
[19:59] the machine (from a maas perspective) can be deployed 100%
[20:00] and as you can see it can do an apt-get update and fetch packages, so the network bonding works
[20:00] I suspect that when juju changes the networking around to be under a bridge instead of bond0, something goes wrong
[20:00] I also have a capture from a run when it actually came up properly, if that helps?
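A hedged sketch of the approach firl mentions at 19:36, where an external alarm (ceilometer/heat, zabbix, or similar) simply shells out to the juju CLI; the service name and unit count below are illustrative:

    # fired from the alarm action: scale the service out by two more units
    juju add-unit nova-compute -n 2

    # confirm the new units come up
    juju status nova-compute --format tabular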
[20:00] it might
[20:01] let me paste it
[20:01] yeah, I wonder if it is because of the networking setup (what you have chosen for the networking in maas might be colliding with juju)
[20:01] mbruzek: http://hardening.io/
[20:01] mbruzek: interesting process on security hardening.
[20:02] firl, basically I have two 10g interfaces and I'd like to bond them together (which is done by maas)
[20:02] firl, while actually PXE booting from a 3rd, 1g interface
[20:02] and I have a 4th 1g interface that I'm not using
[20:02] gotcha, and if you use the 1g interface, or don’t do bonding for this server, does it work fine?
[20:02] no idea. all our servers are bonded. :)
[20:02] haven't tried
[20:03] here's a good run: http://pastebin.com/5mgiVVPe
[20:03] and you are using maas 1.9?
[20:03] yep
[20:03] or 1.8
[20:03] k
[20:03] 1.9
[20:03] (which actually has a horrible list of bugs, but that's a topic for the very dead #maas channel...)
[20:04] haha
[20:04] yeah, I haven’t found much help from maas vs a lot of help from the juju guys
[20:04] I am curious
[20:04] the known good paste you have
[20:05] it is able to verify the ssl cert bundle
[20:05] the bad one isn't
[20:07] firl - yeah, i've got s'more work to do in here. the refactoring i did broke flannel networking
[20:07] firl - question for you as a consumer. Would you prefer it with or without sdn bundled?
[20:07] I don’t know, to be honest
[20:07] I am using docker UCP at work
[20:08] this is for a home project
[20:08] firl - ack. Are you perchance coming to the Charmer Summit on Monday?
[20:09] I can reasonably have a fix in place by then :)
[20:09] I am not haha
[20:09] I can use an older version that is stable though, if there is a good bundle to use
[20:09] sure, let me get you that
[20:10] mbruzek - can you paste the bundle we built for scale?
[20:10] kk thanks man
[20:11] http://paste.ubuntu.com/14765250/
[20:12] firl you can get the layer from http://github.com/mbruzek/layer-k8s
[20:12] right, you can charm build that one
[20:13] you'll need to build the layer in your $JUJU_REPOSITORY, then you should be g2g with that bundle
[20:13] to create the k8s charm.
[20:13] oh, do a local deploy you mean
[20:13] from the git repo
[20:14] right, the bundle references a local charm
[20:14] and the repository linked is just a layer, you'll need to `charm build` it
[20:14] hrmm ok, i’ve deployed locally, haven’t done a charm build before
[20:15] firl: you need to install charm tools
[20:15] firl, so what could cause it not to be able to verify it?
[20:15] firl, just started using juju, honestly. :)
[20:15] nagyz: the bundle verifies it, in both places
[20:16] but the known bad one is actually having to reach out 2x to the same ip on lines 25/26
[20:16] firl - i'm off for now, headed out to get waffles.
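For anyone following along, a rough sketch of the build steps mbruzek and lazypower|travel describe above, assuming charm-tools 2.x is available (from the archive or the juju/stable PPA); the repository path and bundle filename are illustrative:

    sudo add-apt-repository ppa:juju/stable      # only if charm-tools 2.x is not already available
    sudo apt-get update && sudo apt-get install charm-tools

    export JUJU_REPOSITORY=$HOME/charms
    git clone http://github.com/mbruzek/layer-k8s
    (cd layer-k8s && charm build)                # assembles the layer into a deployable charm under $JUJU_REPOSITORY

    # deploy the pasted bundle, which references the freshly built local charm,
    # e.g. with juju-quickstart against a juju 1.x environment
    juju quickstart k8s-bundle.yaml              # bundle filename is illustrative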
If you need anything, don't hesitate to ping me and I'll get back to you when I return to the hotel
[20:16] good luck and cheers o/
[20:17] sounds good man, thanks again
[20:24] nagyz: ok, looks like it’s just a networking issue
[20:24] look at line 2336 on the known good boot
[20:24] and compare that section to 2336 on the known bad one
[20:24] you will see that the known good one is able to wait for the bonding to come up
[20:25] my suggestion would be to look into some of the properties of your LACP or link aggregation setup with the switch you are using to do the bonding with the devices
[20:25] (i could be totally off too)
[20:31] firl, the network is set to do active LACP, so the client needs to send LACP packets actually
[20:31] but it's set to portfast, so it should be fine (although I admit I have no idea how portfast+LACP work together)
[20:31] yeah, nor do i
[20:32] I haven’t done bonding via maas yet (hope to this year)
[20:32] only done it via manual config / cisco
[20:33] why are you bonding, if you don’t mind me asking?
[20:33] depending on what you deploy you could just split the net traffic into 2 (10g) networks (ceph on one vs neutron on another, for example, if you were using openstack)
[20:33] we can't have enough network bandwidth :_)
[20:34] hah
[20:34] this ceph cluster is ~2PB, and even with bonding on the storage nodes I only have 360Gbit on the storage side
[20:34] yeah, I am waiting for mellanox support
[20:34] vs 1.92Tbit on the compute side
[20:34] if I don't bond, that 360 becomes 180...
[20:34] ya
[20:34] I hear ya
[20:34] plus redundancy
[20:35] I should have really got dual 40gbit instead of dual 10
[20:35] might put in another dual 10g card next week
[20:35] yeah
[20:35] you will have more support with those than with mellanox
[20:35] yeah, I have a 54g setup but I can’t use it with charms because of the maas / node configuration
[20:36] use ethernet :p
[20:36] ib is dead :p
[20:36] (isn't it 56gbit, btw? or 52...?)
[22:56] nagyz you might be right with the rate
[23:56] mbruzek, lazypower|travel: http://pastebin.com/d44UjP9g
[23:57] looks like there are some issues with it still; I assume this is what you were talking about with the networking
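For reference, a rough sketch of an active (802.3ad) LACP bond in /etc/network/interfaces on trusty, roughly the shape the MAAS-rendered config takes before juju inserts its own bridge on top of bond0 (as nagyz describes above); the interface names and option values are illustrative, not taken from these nodes:

    auto bond0
    iface bond0 inet dhcp
        bond-slaves eth2 eth3
        bond-mode 802.3ad      # LACP; the switch ports must be configured for it as well
        bond-miimon 100
        bond-lacp-rate fast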