=== psivaa is now known as psivaa-afk
=== lazyPower changed the topic of #juju to: Welcome to Juju! || Office Hours, here 20 August 2000UTC || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
=== scuttle|afk is now known as scuttlemonkey
[15:19] lazyPower: what about the 7/9 office hours?
[15:19] jose: i don't have that on my calendar, jcastro - do we have office hours 7/9?
[15:20] we do
[15:20] let me check
[15:20] we do - i just found it >.>
[15:20] yes, indeed, we have them
[15:20] calendar search didn't return 7/9 in the search results
[15:20] afaik it's every 3 weeks :)
[15:20] I'll send a reminder
=== lazyPower changed the topic of #juju to: Welcome to Juju! || Office Hours, here July 9th 2000UTC || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
[15:21] jose: thanks for catching that :)
[15:32] One question dudes, is it possible to deploy openstack services with juju, on a lxd container?
[15:33] this lxd does something magic i would like to try. Still few docs on how to implement an openstack install lxd-based
[15:37] puzzolo: I'm curious. Why lxd instead of lxc for this use?
[15:39] lxd *seems* a better effort at integration with openstack
[15:40] and a simplification in lxc container creation (integration with glance?)
[15:41] puzzolo: oh, I misunderstood. I thought you wanted to deploy openstack.
[15:41] sure i do. That is what i was curious about. Does deploying with lxd instead of lxc make any sense? Or should i go with just lxc?
[15:43] puzzolo: deploying openstack with lxc does not preclude using the lxd nova driver.
[15:44] oook. One last question jrwren. Are there juju charms for a lxc high-availability deployment with less hardware than 28 servers?
[15:45] puzzolo: I'm sorry, I do not know.
[15:45] okok. I'll dig into it after a simple install.
=== kadams54 is now known as kadams54-away
=== scuttlemonkey is now known as scuttle|afk
[15:57] puzzolo: you're probably looking for this: https://insights.ubuntu.com/2015/05/06/introduction-to-nova-compute-lxd/
[15:57] https://github.com/lxc/nova-compute-lxd
[15:59] beisner: heya, you guys wanna check these out:
[15:59] http://askubuntu.com/questions/639655/can-juju-interact-with-already-installed-openstack-service
[15:59] http://askubuntu.com/questions/637341/how-to-prevent-juju-from-overwriting-heat-conf-file
[16:03] puzzolo: but I don't think juju can deploy to individual containers yet?
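The container placement discussed in the next few messages is the juju 1.x "--to lxc:N" syntax; a minimal sketch of how that is used, with illustrative machine numbers and charm name:

    # create an LXC container on an existing machine
    juju add-machine lxc:1
    # or place a service directly into a new container on that machine
    juju deploy mysql --to lxc:1
    # add another unit into a container on a different host
    juju add-unit mysql --to lxc:2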
[16:03] jcastro: you can do juju deploy --to lxc:
[16:03] but
[16:04] i tried to use juju with openstack and containers
[16:04] it's not very easy
[16:04] yeah I was under the impression that that area was still under heavy development
=== scuttle|afk is now known as scuttlemonkey
[16:11] well the issue is openstack doesn't play well with LXV
[16:11] LXC*
[16:12] so what you have to do is keep the hypervisor (nova-compute) out of LXC
[16:12] as well as ceph (if you use it)
[16:12] and then if you plan on having say 3 (or more) servers you need some sort of networking to get between the LXC containers on each node
[16:12] flannel works, but it's still just not quite enough to get fully running
[16:12] i spent about a week trying to get it working and i concluded that it wasn't worth it
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== med_` is now known as med_
=== liam_ is now known as Guest20028
[18:35] Destreyf: juju suggests maas for nova-compute, quantum-gateway and neutron-openvswitch
[18:35] 20:10 -!- liam_ is now known as Guest20028
[18:36] f*ck https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ProviderColocationSupport
[18:36] Yup
[18:36] that's what i had happen
[18:36] XD
[18:40] so from your experience, lxc goes well for openstack... as long as you don't mess with nova-compute, quantum-gateway and neutron-openvswitch, which need separate hardware. Still you can deploy nova-compute with neutron-openvswitch on maas, and a separated quantum-gateway.
[18:41] nova-compute hardware can support nova-compute-lxd on the same hardware... All seems a bit of a messy playground
[18:43] so a working hypothesis could be:
[18:45] lxc0: cinder (ceph backend), glance, keystone, nova-cloud-controller, neutron-api, ceilometer, ceilometer-agent, heat, ceph/ceph-osd (with directory and no blockdevice - kinda sucks), swift-proxy, openstack-dashboard
[18:47] and heat
[18:47] maas1: nova-compute, swift-storage
[18:47] maas2: quantum-gateway
[18:48] One question... can maas be used with lxc containers as well?
[18:49] like maas: nova-compute, swift-storage... and on the same hardware lxc keystone, lxc neutron-api
[18:49] ?
[18:52] ok quantum and neutron are just a renaming
[18:56] ok wrong again:
[18:56] lxc0: cinder (ceph backend), lxc1: glance, lxc2: keystone, lxc3: nova-cloud-controller, lxc4: neutron-api, lxc5: ceilometer, lxc6: ceilometer-agent, lxc7: heat, lxc8: ceph/ceph-osd (with directory and no blockdevice - kinda sucks), lxc9: swift-proxy, lxc10: openstack-dashboard
=== lukasa is now known as lukasa_away
=== lukasa_away is now known as lukasa
[20:46] puzzolo, hey - you might find https://jujucharms.com/openstack-base/ a useful reference - it's a four physical node openstack cloud that uses lxc containers for the control plane.
[21:39] I'm having an issue with juju requesting provisioning from MAAS.
[21:39] http://askubuntu.com/questions/631598/maas-juju-not-provisioning-in-parallel
[21:40] That is the URL of the question. Basic problem is that juju only requests 1 machine at a time from MAAS, and does not ask for another until MAAS has finished provisioning.
[21:41] this takes a while for 40 machines, and would be a killer for 1000 nodes.
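A rough sketch, assuming juju 1.x against a MAAS environment with nodes already allocated as machines 0-2, of how the placement hypothesis above could be expressed on the command line; the service list is abbreviated and this is not a tested recipe:

    # control-plane services go into LXC containers on node 0
    juju deploy keystone --to lxc:0
    juju deploy glance --to lxc:0
    juju deploy nova-cloud-controller --to lxc:0
    juju deploy openstack-dashboard --to lxc:0
    # hypervisor and gateway stay on bare metal, per the colocation notes above
    juju deploy nova-compute --to 1
    juju deploy quantum-gateway --to 2

The openstack-base bundle linked above expresses the same idea declaratively, using "to: lxc:N" placement directives in the bundle file.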
=== kadams54 is now known as kadams54-away
=== dmellado_ is now known as dmellado
=== jog_ is now known as jog
=== kadams54-away is now known as kadams54
[23:30] lazyPower: ping
[23:31] jamespage: ping
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[23:54] mbruzek: ping
[23:54] apuimedo: pong
[23:54] :-)
[23:54] Hey
[23:54] how are you?
[23:54] Hello!
[23:55] I am well. What can I do for you?
[23:55] how can I see what the review queue is like?
[23:55] Oh! Let me get you the link.
[23:55] http://review.juju.solutions/
[23:55] thanks
[23:56] mbruzek: I have https://code.launchpad.net/~celebdor/charms/precise/cassandra/hostname_resolve
[23:56] What bug number or charm name are you looking for? If it is NOT in the queue I can help you get it in there.
[23:56] in which I was using the launchpad "propose for merge"
[23:57] but it's already been a veeeeery long time
[23:57] and I'm wondering if that "propose for merge" thing is something the team looks at at all
[23:58] It is, but I do not see it in the queue.
[23:58] let me look
[23:58] I see "On hold for merging into lp:charms/cassandra"
=== kadams54 is now known as kadams54-away
[23:59] mbruzek: anything I can do? I've been trying to ping Charles and James, but I have had trouble finding them ;-)
[23:59] Antoni you just edited 2 files?
[23:59] I re-verified that it works
[23:59] yes
[23:59] Antoni *I* will help you with this.
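For context on the "propose for merge" flow discussed above: a charm change at that time was a bzr branch pushed to Launchpad and proposed against the charm's trunk. A minimal sketch, reusing the branch name from the link above (the commit message is illustrative and the exact Launchpad plugin command may differ between bzr versions):

    # branch the charm trunk and make the change
    bzr branch lp:charms/cassandra cassandra-hostname_resolve
    cd cassandra-hostname_resolve
    # ...edit files, then commit and push to a personal namespace
    bzr commit -m "illustrative commit message"
    bzr push lp:~celebdor/charms/precise/cassandra/hostname_resolve
    # open the merge proposal against the charm trunk
    bzr lp-propose-merge lp:charms/cassandra

Merge proposals opened this way are roughly what the review queue at http://review.juju.solutions/ tracks, which is why the missing entry above was surprising.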