[15:19] <jose> lazyPower: what about the 7/9 office hours?
[15:19] <lazyPower> jose: i dont have that on my calendar, jcastro - do we have office hours 7/9?
[15:20] <jcastro> we do
[15:20] <jcastro> let me check
[15:20] <lazyPower> we do - i just found it >.>
[15:20] <jcastro> yes, indeed, we have them
[15:20] <lazyPower> calendar search didn't return 7/9 in the search results
[15:20] <jose> afaik it's every 3 weeks :)
[15:20] <jcastro> I'll send a reminder
[15:21] <lazyPower> jose: thanks for catching that :)
[15:32] <puzzolo> One question dudes, is it possible to deploy openstack services with juju, on a lxd container?
[15:33] <puzzolo> this lxd does something magic i would like to try. There are still few docs on how to implement an lxd-based openstack install
[15:37] <jrwren> puzzolo: I'm curious. Why lxd instead of lxc for this use?
[15:39] <puzzolo> lxd *seems* a better effort at integration with openstack
[15:40] <puzzolo> and a simplification in lxc container creation (integration with glance?)
[15:41] <jrwren> puzzolo: oh, I misunderstood. I thought you wanted to deploy openstack.
[15:41] <puzzolo> sure i do. That is what i was curious about. Does deploying with lxd instead of lxc make any sense? Or should i go with just lxc?
[15:43] <jrwren> puzzolo: deploying openstack with lxc does not preclude using the lxd nova driver.
[15:44] <puzzolo> oook. One last question, jrwren. Are there juju charms for a lxc high-availability deployment with less hardware than 28 servers?
[15:45] <jrwren> puzzolo: I'm sorry, I do not know.
[15:45] <puzzolo> okok. I'll dig into it after a simple install.
[15:57] <jcastro> puzzolo: you're  probably looking for this: https://insights.ubuntu.com/2015/05/06/introduction-to-nova-compute-lxd/
[15:57] <jcastro> https://github.com/lxc/nova-compute-lxd
[15:59] <jcastro> beisner: heya, you guys wanna check these out:
[15:59] <jcastro> http://askubuntu.com/questions/639655/can-juju-interact-with-already-installed-openstack-service
[15:59] <jcastro> http://askubuntu.com/questions/637341/how-to-prevent-juju-from-overwriting-heat-conf-file
[16:03] <jcastro> puzzolo: but I don't think juju can deploy to individual containers yet?
[16:03] <Destreyf_> jcastro: you can do juju deploy --to lxc:<host> <charm>
[16:03] <Destreyf_> but
[16:04] <Destreyf_> i tried to use juju with openstack and containers
[16:04] <Destreyf_> its not very easy
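For reference, the container placement Destreyf_ describes uses juju's `--to` directive; a minimal sketch, assuming an existing machine 0 and illustrative charm names (not from this conversation):

```shell
# Deploy a charm into a fresh LXC container on an existing machine
# (machine 0 and the mysql charm are illustrative examples)
juju deploy --to lxc:0 mysql

# A second deploy with the same target spawns another, separate
# container on that host, so services stay isolated
juju deploy --to lxc:0 rabbitmq-server
```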
[16:04] <jcastro> yeah I was under the impression that that area was still under heavy development
[16:11] <Destreyf_> well the issue is openstack doesn't play well with LXC
[16:12] <Destreyf_> so what you have to do is keep the hypervisor (nova-compute) out of LXC
[16:12] <Destreyf_> as well as ceph (if you use it)
[16:12] <Destreyf_> and then if you plan on having say 3 (or more) servers you need some sort of networking to get between the LXC containers on each node
[16:12] <Destreyf_> flannel works, but it's still just not quite enough to get fully running
[16:12] <Destreyf_> i spent about a week trying to get it working and i concluded that it wasn't worth it
[18:35] <puzzolo> Destreyf: juju suggests maas for nova-compute quantum-gateway and neutron-openvswitch
[18:36] <puzzolo> f*ck https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ProviderColocationSupport
[18:36] <Destreyf> Yup
[18:36] <Destreyf> that's what i had happen
[18:36] <Destreyf> XD
[18:40] <puzzolo> so from your experience, lxc goes well for openstack ... when you don't mess with nova-compute, quantum-gateway and neutron-openvswitch... which need separate hardware. Still you can deploy nova-compute with neutron-openvswitch on maas, and a separated quantum-gateway.
[18:41] <puzzolo> nova-compute hardware can support nova-compute-lxd on the same hardware... It all seems a bit of a messy playground
[18:43] <puzzolo> so a working hypothesis could be:
[18:45] <puzzolo> lxc0: cinder (ceph backend), glance, keystone, nova-cloud-controller, neutronapi, ceilometer, ceilometer-agent, heat, ceph/ceph-osd (with directory and no blockdevice - kinda sucks), swift-proxy, openstack-dashboard
[18:47] <puzzolo> and heat
[18:47] <puzzolo> maas1: nova-compute,swift-storage
[18:47] <puzzolo> maas2: quantum-gateway
[18:48] <puzzolo> One question... can maas be used with lxc containers as well?
[18:49] <puzzolo> like maas: nova-compute, swift-storage.... and on same hardware lxc keystone, lxc neutron-api
[18:49] <puzzolo> ?
[18:52] <puzzolo> ok quantum and neutron are just a renaming
[18:56] <puzzolo> ok wrong again:
[18:56] <puzzolo> lxc0: cinder (ceph backend), lxc1: glance, lxc2: keystone, lxc3: nova-cloud-controller, lxc4: neutronapi, lxc5: ceilometer, lxc6: ceilometer-agent, lxc7: heat, lxc8: ceph/ceph-osd (with directory and no blockdevice - kinda sucks), lxc9: swift-proxy, lxc10: openstack-dashboard
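The per-container layout above maps onto repeated `juju deploy --to lxc:0` calls, since each invocation creates a new container on that machine. A sketch only; the machine number, and the assumption that these charm names resolve in the store, are illustrative and untested:

```shell
# Each "--to lxc:0" spawns a fresh LXC container on physical machine 0,
# so every control-plane service lands in its own container.
# Machine number and charm names are assumptions, not a verified recipe.
juju deploy --to lxc:0 cinder
juju deploy --to lxc:0 glance
juju deploy --to lxc:0 keystone
juju deploy --to lxc:0 nova-cloud-controller
juju deploy --to lxc:0 ceilometer
juju deploy --to lxc:0 heat
juju deploy --to lxc:0 swift-proxy
juju deploy --to lxc:0 openstack-dashboard
```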
[20:46] <jamespage> puzzolo, hey - you might find https://jujucharms.com/openstack-base/ a useful reference - it's a four physical node openstack cloud, using lxc containers for the control plane.
[21:39] <Spizmar> I'm having an issue with juju requesting provisioning from MAAS.
[21:39] <Spizmar> http://askubuntu.com/questions/631598/maas-juju-not-provisioning-in-parallel
[21:40] <Spizmar> That is the URL of the question. Basic problem is that juju only requests 1 machine at a time from MAAS, and does not ask for another until MAAS has finished provisioning.
[21:41] <Spizmar> this takes a while for 40 machines, and would be a killer for 1000 nodes.
[23:30] <apuimedo> lazyPower: ping
[23:31] <apuimedo> jamespage: ping
[23:54] <apuimedo> mbruzek: ping
[23:54] <mbruzek> apuimedo: pong
[23:54] <apuimedo> :-)
[23:54] <apuimedo> Hey
[23:54] <apuimedo> how are you?
[23:54] <mbruzek> Hello!
[23:55] <mbruzek> I am well. What can I do for you?
[23:55] <apuimedo> how can I see what's the review queue like?
[23:55] <mbruzek> Oh!  Let me get you the link.
[23:55] <mbruzek> http://review.juju.solutions/
[23:55] <apuimedo> thanks
[23:56] <apuimedo> mbruzek: I have https://code.launchpad.net/~celebdor/charms/precise/cassandra/hostname_resolve
[23:56] <mbruzek> What bug number or charm name are you looking for?  If it is NOT in the queue I can help you get it in there.
[23:56] <apuimedo> in which I was using the launchpad "propose for merge"
[23:57] <apuimedo> but it's already a veeeeery long time
[23:57] <apuimedo> and I'm wondering if that "propose for merge" thing is something the team looks at at all
[23:58] <mbruzek> It is, but I do not see it in the queue.
[23:58] <mbruzek> let me look
[23:58] <mbruzek> I see "On hold | for merging | into | lp:charms/cassandra"
[23:59] <apuimedo> mbruzek: anything I can do? I've been trying to ping Charles and James, but I have had trouble finding them ;-)
[23:59] <mbruzek> Antoni you just edited 2 files?
[23:59] <apuimedo> I re-verified that it works
[23:59] <apuimedo> yes
[23:59] <mbruzek> Antoni *I* will help you with this.