/srv/irclogs.ubuntu.com/2015/07/06/#juju.txt

=== psivaa is now known as psivaa-afk
=== lazyPower changed the topic of #juju to: Welcome to Juju! || Office Hours, here 20 August 2000UTC || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
=== scuttle|afk is now known as scuttlemonkey
15:19 <jose> lazyPower: what about the 7/9 office hours?
15:19 <lazyPower> jose: i don't have that on my calendar, jcastro - do we have office hours 7/9?
15:20 <jcastro> we do
15:20 <jcastro> let me check
15:20 <lazyPower> we do - i just found it >.>
15:20 <jcastro> yes, indeed, we have them
15:20 <lazyPower> calendar search didn't return 7/9 in the search results
15:20 <jose> afaik it's every 3 weeks :)
15:20 <jcastro> I'll send a reminder
=== lazyPower changed the topic of #juju to: Welcome to Juju! || Office Hours, here July 9th 2000UTC || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
15:21 <lazyPower> jose: thanks for catching that :)
15:32 <puzzolo> One question dudes, is it possible to deploy openstack services with juju, on a lxd container?
15:33 <puzzolo> this lxd does something magic i would like to try. Still few docs on how to implement an lxd-based openstack install
15:37 <jrwren> puzzolo: I'm curious. Why lxd instead of lxc for this use?
15:39 <puzzolo> lxd *seems* a better effort at integration with openstack
15:40 <puzzolo> and a simplification in lxc container creation (integration with glance?)
15:41 <jrwren> puzzolo: oh, I misunderstood. I thought you wanted to deploy openstack.
15:41 <puzzolo> sure i do. That is what i was curious about. Does deploying with lxd instead of lxc make any sense? Or should i go with just lxc?
15:43 <jrwren> puzzolo: deploying openstack with lxc does not preclude using the lxd nova driver.
15:44 <puzzolo> oook. One last question jrwren. Are there juju charms for a lxc high-availability deployment with less hardware than 28 servers?
15:45 <jrwren> puzzolo: I'm sorry, I do not know.
15:45 <puzzolo> okok. I'll dig into it after a simple install.
=== kadams54 is now known as kadams54-away
=== scuttlemonkey is now known as scuttle|afk
15:57 <jcastro> puzzolo: you're probably looking for this: https://insights.ubuntu.com/2015/05/06/introduction-to-nova-compute-lxd/
15:57 <jcastro> https://github.com/lxc/nova-compute-lxd
15:59 <jcastro> beisner: heya, you guys wanna check these out:
15:59 <jcastro> http://askubuntu.com/questions/639655/can-juju-interact-with-already-installed-openstack-service
15:59 <jcastro> http://askubuntu.com/questions/637341/how-to-prevent-juju-from-overwriting-heat-conf-file
16:03 <jcastro> puzzolo: but I don't think juju can deploy to individual containers yet?
16:03 <Destreyf_> jcastro: you can do juju deploy --to lxc:<host> <charm>
16:03 <Destreyf_> but
16:04 <Destreyf_> i tried to use juju with openstack and containers
16:04 <Destreyf_> it's not very easy
16:04 <jcastro> yeah I was under the impression that that area was still under heavy development
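For reference, the placement syntax Destreyf_ mentions looks roughly like the sketch below; the charm names and machine numbers are placeholders, not a recommended layout.

    # Hypothetical juju 1.x container placement examples (charm names and
    # machine numbers are placeholders):
    juju deploy mysql --to lxc:1          # new LXC container on existing machine 1
    juju deploy keystone --to lxc:2       # new LXC container on machine 2
    juju add-unit keystone --to lxc:3     # add a unit into a container on machine 3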
=== scuttle|afk is now known as scuttlemonkey
16:11 <Destreyf_> well the issue is openstack doesn't play well with LXC
16:12 <Destreyf_> so what you have to do is keep the hypervisor (nova-compute) out of LXC
16:12 <Destreyf_> as well as ceph (if you use it)
16:12 <Destreyf_> and then if you plan on having say 3 (or more) servers you need some sort of networking to get between the LXC containers on each node
16:12 <Destreyf_> flannel works, but it's still just not quite enough to get fully running
16:12 <Destreyf_> i spent about a week trying to get it working and i concluded that it wasn't worth it
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== med_` is now known as med_
=== liam_ is now known as Guest20028
18:35 <puzzolo> Destreyf: juju suggests maas for nova-compute, quantum-gateway and neutron-openvswitch
18:36 <puzzolo> f*ck https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ProviderColocationSupport
18:36 <Destreyf> Yup
18:36 <Destreyf> that's what i had happen
18:36 <Destreyf> XD
18:40 <puzzolo> so from your experience, lxc goes well for openstack... when you don't mess with nova-compute, quantum-gateway and neutron-openvswitch... which need separate hardware. Still you can deploy nova-compute with neutron-openvswitch on maas, and a separate quantum-gateway.
18:41 <puzzolo> nova-compute hardware can support nova-compute-lxd on the same hardware... It all seems a bit of a messy playground
18:43 <puzzolo> so a working hypothesis could be:
18:45 <puzzolo> lxc0: cinder (ceph backend), glance, keystone, nova-cloud-controller, neutron-api, ceilometer, ceilometer-agent, heat, ceph/ceph-osd (with directory and no blockdevice - kinda sucks), swift-proxy, openstack-dashboard
18:47 <puzzolo> and heat
18:47 <puzzolo> maas1: nova-compute, swift-storage
18:47 <puzzolo> maas2: quantum-gateway
18:48 <puzzolo> One question... can maas be used with lxc containers as well?
18:49 <puzzolo> like maas: nova-compute, swift-storage... and on the same hardware lxc keystone, lxc neutron-api
18:49 <puzzolo> ?
18:52 <puzzolo> ok quantum and neutron are just a renaming
18:56 <puzzolo> ok wrong again:
18:56 <puzzolo> lxc0: cinder (ceph backend), lxc1: glance, lxc2: keystone, lxc3: nova-cloud-controller, lxc4: neutron-api, lxc5: ceilometer, lxc6: ceilometer-agent, lxc7: heat, lxc8: ceph/ceph-osd (with directory and no blockdevice - kinda sucks), lxc9: swift-proxy, lxc10: openstack-dashboard
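Expressed as juju commands, the layout puzzolo sketches above would look roughly like the following; the service list is abbreviated and the machine numbers (0 as the container host, 1 and 2 as the two MAAS nodes) are assumptions, not a tested reference layout.

    # Sketch only - abbreviated service list, placeholder machine numbers:
    juju deploy nova-compute --to 1              # bare metal (maas1)
    juju deploy swift-storage --to 1
    juju deploy quantum-gateway --to 2           # bare metal (maas2)
    juju deploy cinder --to lxc:0                # each --to lxc:0 creates a new
    juju deploy glance --to lxc:0                # LXC container on machine 0
    juju deploy keystone --to lxc:0
    juju deploy nova-cloud-controller --to lxc:0
    juju deploy openstack-dashboard --to lxc:0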
=== lukasa is now known as lukasa_away
=== lukasa_away is now known as lukasa
20:46 <jamespage> puzzolo, hey - you might find https://jujucharms.com/openstack-base/ a useful reference - it's a four physical node openstack cloud, using lxc containers for the control plane.
21:39 <Spizmar> I'm having an issue with juju requesting provisioning from MAAS.
21:39 <Spizmar> http://askubuntu.com/questions/631598/maas-juju-not-provisioning-in-parallel
21:40 <Spizmar> That is the URL of the question. Basic problem is that juju only requests 1 machine at a time from MAAS, and does not ask for another until MAAS has finished provisioning.
21:41 <Spizmar> this takes a while for 40 machines, and would be a killer for 1000 nodes.
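A possible workaround (a sketch only, not a confirmed fix for the behaviour Spizmar describes) is to register the machines with juju up front so MAAS can commission them concurrently, and then place services onto the numbered machines afterwards with --to.

    # Sketch of a possible workaround (untested against this exact issue):
    for i in $(seq 1 40); do
        juju add-machine &        # each call asks MAAS for one node
    done
    wait
    juju status                   # note the machine numbers that come back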
=== kadams54 is now known as kadams54-away
=== dmellado_ is now known as dmellado
=== jog_ is now known as jog
=== kadams54-away is now known as kadams54
23:30 <apuimedo> lazyPower: ping
23:31 <apuimedo> jamespage: ping
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
23:54 <apuimedo> mbruzek: ping
23:54 <mbruzek> apuimedo: pong
23:54 <apuimedo> :-)
23:54 <apuimedo> Hey
23:54 <apuimedo> how are you?
23:54 <mbruzek> Hello!
23:55 <mbruzek> I am well. What can I do for you?
23:55 <apuimedo> how can I see what the review queue looks like?
23:55 <mbruzek> Oh! Let me get you the link.
23:55 <mbruzek> http://review.juju.solutions/
23:55 <apuimedo> thanks
23:56 <apuimedo> mbruzek: I have https://code.launchpad.net/~celebdor/charms/precise/cassandra/hostname_resolve
23:56 <mbruzek> What bug number or charm name are you looking for? If it is NOT in the queue I can help you get it in there.
23:56 <apuimedo> in which I was using the launchpad "propose for merge"
23:57 <apuimedo> but it's been a veeeeery long time already
23:57 <apuimedo> and I'm wondering if that "propose for merge" thing is something the team looks at at all
23:58 <mbruzek> It is, but I do not see it in the queue.
23:58 <mbruzek> let me look
23:58 <mbruzek> I see "On hold - for merging into lp:charms/cassandra"
=== kadams54 is now known as kadams54-away
23:59 <apuimedo> mbruzek: anything I can do? I've been trying to ping Charles and James, but I have had trouble finding them ;-)
23:59 <mbruzek> Antoni, you just edited 2 files?
23:59 <apuimedo> I re-verified that it works
23:59 <apuimedo> yes
23:59 <mbruzek> Antoni, *I* will help you with this.
