[07:40] <dakj> jamespage, icey: yesterday I removed everything from the VM and tried to install OpenStack with conjure-up on a single VM with 64GB of RAM and 2 x 1TB HDDs, and the result was perfect: OpenStack installed correctly. Now that I've rebuilt the earlier setup, the Ceph issue is there again. Why does the installation work with conjure-up but not manually with juju???
[07:47] <kjackal> good morning juju world
[09:28] <jamespage> dakj: conjure-up on a single vm will be all contained within the same machine - i.e. nothing goes outside of the machines to the VMware vSwitch
[09:29] <jamespage> dakj: this is a strong pointer that the problem lies in the network connectivity between machines or lxd containers on different machines
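The connectivity/MTU suspicion above can be sanity-checked by hand before (or alongside) magpie; a rough sketch, where the peer address is a placeholder and the 1500-byte MTU is an assumption:

```shell
# Manual check of inter-machine path MTU, in the spirit of what magpie automates.
# A 1500-byte MTU leaves 1472 bytes of ICMP payload (8B ICMP + 20B IP headers);
# -M do sets "don't fragment", so an MTU mismatch anywhere on the path
# (e.g. at the VMware vSwitch) makes this fail while smaller pings succeed.
ping -c 3 -M do -s 1472 <peer-address>
```

If this fails between machines but succeeds between containers on the same host, that points at the inter-machine network rather than at lxd.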
[09:34] <dakj> jamespage: In MAAS, should the subnets be configured this way: eth0 (https://pasteboard.co/7oi5M27zS.png) and eth1 (https://pasteboard.co/7oitlHdJm.png)?
[11:01] <dakj> jamespage: I've deployed magpie on another MAAS node now; how do I use it in my lab?
[11:05] <jamespage> dakj: charm README has some details on usage
[11:05] <jamespage> https://jujucharms.com/u/admcleod/magpie
[11:05] <jamespage> dakj: basically you want to replicate the unit structure you have with openstack, but using magpie
[11:06] <jamespage> dakj: so something like
[11:06] <jamespage> juju deploy -n 4 magpie
[11:06] <jamespage> juju add-unit --to lxd:0 magpie
[11:06] <jamespage> juju add-unit --to lxd:1 magpie
[11:06] <jamespage> juju add-unit --to lxd:2 magpie
[11:06] <jamespage> juju add-unit --to lxd:3 magpie
[11:06] <jamespage> which will spin up four 'physical' servers, and place a magpie lxd unit on each one
[11:07] <jamespage> once deployed they will run the benchmark and mtu tests
[11:15] <dakj> jamespage: ok, I'm doing what you suggested... 5 min and I'll be back
[11:16] <jamespage> dakj: you might need to tweak the digit in --to lxd:X to target the actual machine ids that the first step created
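The machine numbers 0-3 in the commands above are illustrative; a sketch of looking up the real ids first (ids 4-7 here are hypothetical):

```shell
# Deploy magpie to four new machines, then see which machine ids Juju allocated
juju deploy -n 4 magpie
juju status

# Suppose the new machines came up as 4, 5, 6 and 7 rather than 0-3;
# target those ids when placing the container units
juju add-unit --to lxd:4 magpie
juju add-unit --to lxd:5 magpie
juju add-unit --to lxd:6 magpie
juju add-unit --to lxd:7 magpie
```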
[13:20] <dakj> jamespage: I didn't forget you, I was on a break... anyway, could you have a look here and check whether I did it right https://paste.ubuntu.com/24598754/
[13:22] <jamespage> dakj: kinda - you need to have more than one top-level machine involved, otherwise everything is on the local bridge on machine 0
[13:22] <jamespage> dakj: wait you deployed all of them to lxd:0
[13:22] <dakj> jamespage: I know, sorry... I created the lxd containers wrongly
[13:22] <jamespage> so they are all over the loopback device only
[13:23] <jamespage> which is why you get that odd mismatch on the mtu
[13:23] <dakj> jamespage: yes, yes sorry
[13:57] <dakj> Jamespage: here it is the right way https://paste.ubuntu.com/24598905/, sorry about before.
[13:58] <jamespage> dakj: not quite - you still have a single physical machine with four containers
[13:58] <jamespage> rather than four physical machines with a container each
[13:58] <jamespage> need to get that network flow testing between physical machines :-)
[14:00] <dakj> Jamespage: I've only one physical server (IBM System x3650 R4 with 64GB of RAM and 4TB of storage) where I'm running my lab.
[14:04] <dakj> jamespage: you're saying I have to have 4 VMs, with 1 magpie on each one?
[14:16] <Budgie^Smore> morning o/ juju world
[14:25] <jamespage> dakj: ok so the pastebin you showed me the other day with the problem ceph-mons had 4 x 'physical' machines
[14:25] <jamespage> dakj: I thought those were VMware machines, right?
[14:26] <dakj> jamespage: I'm waiting for juju to finish deploying magpie on 4 machines, in lxd containers
[14:27] <dakj> jamespage: yes, it's VMware ESX
[14:27] <jamespage> dakj: ok so this one - https://paste.ubuntu.com/24592540/
[14:28] <jamespage> dakj: machines 16, 17, 18 and 19 are vmware machines connected to each other via the vSwitch in ESX, right?
[14:28] <jamespage> dakj: and then the lxd containers are on each machine
[14:28] <jamespage> dakj: you need to reproduce that topology with magpie only
[14:28] <jamespage> dakj: so four vmware machines, each with one lxd container
[14:28] <dakj> jamespage: that is now https://paste.ubuntu.com/24599034/
[14:28] <jamespage> all running magpie
[14:28] <jamespage> dakj: yup that's the one
[14:29] <jamespage> dakj: but please put magpie on the physical machines as well
[14:29] <jamespage> so you can diff between lxd and non-lxd traffic
[14:50] <dakj> jamespage: I give up :-), the situation now is this (https://paste.ubuntu.com/24599100/). I can't believe installing OpenStack is so hard; there are a lot of things to consider. Problems deploying Landscape Dense-MAAS, problems with OpenStack Base... installing VMware is easier than OpenStack with Juju...
[15:18] <jamespage> dakj: you're a bit off the beaten track here with how you are trying to test things
[15:19] <jamespage> dakj: all I can say is that getting your infrastructure right (networking, MTU, broadcast domains, etc.) before attempting an openstack deployment
[15:19] <jamespage> dakj: is a problem that is common to any openstack deployment irrespective of deployment tool
[15:21] <dakj> jamespage: that was just a joke about the time I'm spending on openstack. I'm here to understand how I can replace vmware with openstack.
[15:22] <jamespage> dakj: \o/
[15:22] <jamespage> a common goal for a lot of users
[15:23] <jamespage> dakj: getting existing vmware deployers to a point where they can try Juju/MAAS/Charms/OpenStack easily is a win for those users
[15:24] <dakj> jamespage: I know it's not easy to replace a technology, including this one. Once I understand it, the next step will be to replace esx with ubuntu lxd. But I have to do it step by step.
[15:25] <jamespage> dakj: ok so what type of scale are you looking to achieve here?
[15:27] <dakj> jamespage: the strange thing is that the host worked well with VMware ESX and vCenter Server, and nothing about the networking has changed. It's correctly connected to the firewall via the core switch.
[15:29] <dakj> jamespage: I just want to know how to resolve that issue and deploy openstack and landscape via juju. I have a problem with Landscape as well, look here https://askubuntu.com/questions/906763/haproxy-reverseproxy-relation-changed-for-landscape-serverwebsite.
[15:34] <dakj> I've been trying to build this with Juju for a month, and in both cases I have an issue; it's not easy to understand what the issue is and how to resolve it. Have a look, here is the latest paste with magpie https://paste.ubuntu.com/24599321/.
[15:39] <dakj> jamespage: I want to thank you for all your support, but it's not easy to understand why I have these issues.
[16:14] <jamespage> dakj:  hey - minor niggle with your use of magpie
[16:15] <jamespage> you've deployed four different instances of the magpie charm (-a, -b, -c)
[16:16] <jamespage> rather than eight units (four physical machines, four lxd containers) of a single instance of magpie
[16:16] <jamespage> which is why none of the units can find any peers
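A sketch of the difference jamespage is describing, with hypothetical machine ids; the commented-out lines show the mistaken shape, the live ones the intended single-application layout:

```shell
# What happened: four separate applications, each with its own peer relation,
# so no unit can see the others. Something like:
#   juju deploy magpie magpie-a
#   juju deploy magpie magpie-b
#   ...

# What magpie needs: EIGHT units of ONE application, so all units share
# a single peer relation and can test the network between each other.
juju deploy -n 4 magpie          # four 'physical' machines
juju add-unit --to lxd:0 magpie  # plus one lxd container per machine
juju add-unit --to lxd:1 magpie
juju add-unit --to lxd:2 magpie
juju add-unit --to lxd:3 magpie
```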
[16:22] <dakj> jamespage: sorry about that... it was only a joke. Tomorrow I'll make another attempt, but this time not on that host: in our datacenter, 16 compute nodes with 64GB per node and 12 switches. All compute nodes have vmware installed, and my goal is to replace it with Openstack... I'm sure there is no problem with the network, even though that host is connected to the same infrastructure. Let me try this last lab :-), and after that I don't know what the problem is...
[16:31] <carpenike> Hi all, having trouble bootstrapping juju on a node with non-managed networks as I cannot set the ipv6 to disabled within lxc/lxd even though it's disabled on the parent interface.
[16:32] <carpenike> Is there a way for juju to skip the ipv6 check when it loads up?
[16:32] <carpenike> loads up = bootstraps.
[19:27] <tvansteenburgh> stokachu_: did you find a good solution for snapping c-u with go1.8 yet?
[20:30] <kwmonroe> marcoceppi lazyPower!!! i need halp.  for a global-scoped interface provider, when do i use conf.set_state (https://github.com/juju-solutions/interface-spark/blob/master/provides.py#L25) vs self.set_state (https://github.com/juju-solutions/interface-http/blob/master/provides.py#L12)?
[20:30] <kwmonroe> s/conf/conv
[20:31] <lazyPower> kwmonroe: conv.set_state is when you want to scope it to the conversation happening, eg: scope unit.  self.set_state works in the global namespace and doesn't matter what context you're in
[20:31] <kwmonroe> lazyPower: so anytime scope == global, self.foo is the right answer?  is it redundant to do "conv = get_conversation; conv.set_state"?
[20:32] <lazyPower> kwmonroe: I think the conv.set_state on a global scoped conv is implied to work in global state
[20:32] <lazyPower> cory_fu: fact check me <3
[20:32] <kwmonroe> ack, gracias lazyPower
[20:32] <kwmonroe> cory_fu is out pyconning
[20:32] <lazyPower> kwmonroe: i'm only 80% certain thats right, but i'm gonna say it convincingly enough that you think its right.
[20:32] <lazyPower> and i myself would never use self.set_state in an interface
[20:33] <kwmonroe> s/contact=kwmonroe/contact=lazypower/ && ship
[20:33] <lazyPower> s/contact=/ignored=/g && ship
[20:33] <kwmonroe> ship || shennanigans
[20:34] <lazyPower> Shenanigans wins every time
[20:36] <lazyPower> kwmonroe: i'm pretty sure that if i'm heinously wrong stub will follow up with why. He's really good at lurking with great information when i'm wrong :)  But again, i'm mostly certain thats how it works. Otherwise the `scope.global` would be pointless.
[20:54] <kwmonroe> lazyPower: when in doubt, github!  https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/relations.py#L265 tells me self.set_state and self.conversation().set_state work the same.
[20:54] <lazyPower> kwmonroe: welp, and there you have it
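The relations.py reading above can be illustrated with a toy, stdlib-only model. This is NOT the real charms.reactive API (class and method names here only mimic it for illustration): at global scope a relation has exactly one conversation, so `self.set_state` and `self.conversation().set_state` write the same state.

```python
# Toy model of charms.reactive's global relation scope (illustration only,
# not the real library). With scope == global there is a single shared
# conversation, so setting state via the relation object or via its
# conversation is equivalent.

class Conversation:
    def __init__(self):
        self.states = set()

    def set_state(self, state):
        self.states.add(state)


class GlobalScopedRelation:
    scope = "global"

    def __init__(self):
        # Global scope: exactly one conversation for the whole relation
        self._conversations = [Conversation()]

    def conversation(self):
        return self._conversations[0]

    def set_state(self, state):
        # Mirrors the pattern kwmonroe found in relations.py:
        # delegate to the (single) conversation
        self.conversation().set_state(state)


rel = GlobalScopedRelation()
rel.set_state("http.available")                 # via the relation
rel.conversation().set_state("http.ready")      # via the conversation
assert rel.conversation().states == {"http.available", "http.ready"}
```

With a unit-scoped relation there would be one conversation per remote unit, which is when `conv.set_state` and `self.set_state` stop being interchangeable.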
[21:54] <bdx> what are CAAS models?
[22:00] <rick_h> bdx: :)
[22:00] <rick_h> bdx: experiment of juju on container platforms
[22:01] <thumper> rick_h: you took the words out of my mouth
[22:01] <thumper> :)
[22:01] <rick_h> thumper: I do my best
[22:01] <thumper> balloons, veebers: I can still hear you
[22:07] <veebers> luckily balloons and I weren't saying anything mean about you then :-)
[22:37] <bdx> rick_h: like a way to `juju deploy mydockerthing`?
[22:38] <rick_h> bdx: still a bit early to really go through how it goes.
[22:38] <rick_h> bdx: but we realize that folks like their containers and often they need to talk to things that don't fit well in containers