=== kadams54 is now known as kadams54-away
=== urulama is now known as urulama|kids
=== urulama|kids is now known as urulama
=== Murali_ is now known as Murali
[10:15] good morning guys o/
[10:16] I'm having a little issue with security groups
[10:16] when I delete the charm that I'm developing, the corresponding security group in nova isn't destroyed
[10:16] how can I do this?
=== kadams54 is now known as kadams54-away
=== zz_CyberJacob is now known as CyberJacob
=== TimNich_ is now known as TimNich
=== kadams54 is now known as kadams54-away
[15:36] hello guys
[15:37] how can I tell a juju charm which image it should run in its VM when the service is deployed?
[15:37] Muntaner: that is denoted by the series of the charm
[15:38] Muntaner: eg: juju deploy trusty/mysql - will tell juju to allocate a trusty VM, and then deploy the charm on top of trusty.
[15:41] lazyPower, thanks
[15:46] Muntaner: juju has a "default-series" option which you can set in environments.yaml, so that if you don't specify a series when deploying, and there are charms in both, it will default to this setting.
[15:59] it states here: https://news.ycombinator.com/item?id=5738252 "I tested juju a few months ago and found it to be buggy and unreliable." this was however from almost 2 years ago. does anybody still agree with that statement?
[16:02] Adding network to machine deployed by Juju | http://askubuntu.com/q/597514
[16:23] R1ck, not at all, we use it for plenty of production services in Canonical, as do many customers
[16:31] Hello, I am wondering if anyone can give any insight as to what charm or service parameters determine what interface compute nodes talk to storage on
[16:31] ??
[17:02] marcoceppi, around?
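
[Editor's note: the "default-series" option mentioned at 15:46 lives in environments.yaml; a minimal sketch of what such an entry might look like (the environment name and the maas provider type are assumptions, not from the log):]

```yaml
environments:
  my-env:                  # hypothetical environment name
    type: maas             # provider type is an assumption
    default-series: trusty # used when 'juju deploy mysql' omits a series
```
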
[17:02] yes
[17:03] so I have a card that's like
[17:03] "provide examples of what a good charm looks like"
[17:03] and I'm going through the links in the docs
[17:04] other than ones that should link to new charms, like, say, the services framework, is there anything in these example ones, like the vanilla forum ones, that needs to be fixed?
[17:05] probably
[17:14] evilnickveitch, heya
[17:14] jcastro, hi
[17:14] the review queue link is 404
[17:14] so I fixed them and pushed but I think you merged before
[17:14] http://review.juju.solutions instead of manage.blah
[17:15] okay, will take a look
[17:15] marcoceppi, jcastro: How can I configure nova-compute to use ~(os-admin-network) for storage traffic?
[17:16] I am unfamiliar with the nova-compute charm
[17:16] we have a list for those iirc?
[17:16] jcastro, done
[17:16] Ohhh really?
[17:16] thanks
[17:17] jamespage, ^^^
[17:18] evilnickveitch, we should make it so that if a 404 is detected when we build the docs or something, it yells at us
[17:18] bdx, hello
[17:18] nova-compute -> ceph?
[17:18] jamespage: Hi, how's it going??
[17:18] yes!
[17:18] so you're using the public and admin network configuration in the ceph charms right?
[17:19] jcastro, yes, well we had that lint tool before, but that was before we switched to markdown.
[17:19] jamespage: Correct
[17:19] it is on my list of things to add to the new universal build tool
[17:19] bdx, ok so the nova-compute nodes will access ceph over the public network IPs - so if you make the ceph public-network == os-admin-network that should work
[17:20] jcastro, but to be honest, it may take a while until that gets done, looking at all the stuff I have to do
[17:20] this assumes that ceph and nova-compute are both physically or logically attached to the same networks
[17:20] jamespage: That's what I currently have....
[17:20] jcastro, however, we can probably cobble together a script to do it
[17:21] bdx, what are you seeing?
[17:21] That's the problem....my admin network is 1G....I get a bottleneck on the 1G interface
[17:21] evilnickveitch, ok I'll mention it at the sprint, see if someone is willing to have a go
[17:21] bdx, do your compute and ceph nodes have 10G's or alternative 1G's that can be used?
[17:22] Yes, I have 2x 1G and 2x 10G on each node
[17:22] bdx, ok - so in that config I'd probably bond the 2x 1G's and run control plane traffic over that network
[17:22] jamespage: Here is what my 1G os-admin-network interface looks like on my compute node
[17:22] https://www.dropbox.com/s/ws3g577yjzq6v0v/Screenshot%202015-03-12%2011.55.35.png?dl=0
[17:23] and do the same for the 10G's and run os-data-network and ceph-public-network over that
[17:23] jamespage: Here is my 10G os-data-network interface on the compute node
[17:23] https://www.dropbox.com/s/vqt3z5dauyiewjj/Screenshot%202015-03-12%2011.55.14.png?dl=0
[17:24] jamespage: I now realize that os-data-network doesn't need to be 10G
[17:24] bdx, well it might, depending on how busy your tenants get
[17:25] bdx, are you using the ceph nova backend for instance storage?
[17:25] beuno: well yes, but seeing as it's Canonical that's developing it, you should say that..
I'm looking for independent opinions ;)
[17:25] jamespage: Yes
[17:25] bdx, right - so that is going to get pretty busy with all the io
[17:26] you def want that running over the 10G
[17:26] Totally, but that means I need a 10G switch for os-admin
[17:26] bdx - so you need to configure the ceph-public-network with the network CIDR for the 10G nics you have
[17:27] bdx: ceph-public-network does not have to be the same as os-admin-network
[17:27] bdx: the compute units just need to have a network connection to ceph-public-network - preferably over the 10G links :-)
[17:27] jamespage: now we are getting somewhere
[17:28] bdx, the network support across the charms is endpoint driven - the services, when related, will say 'connect to me over XXX' - the ceph public network for ceph
[17:28] clients will just use the most direct link they have
[17:29] jamespage: I understand that....but how does compute know what interface to use to talk to ceph-public-network?
[17:29] bdx, by the magic that is linux network routing
[17:30] bdx, linux will just make the best choice - 1) the interface attached to the network 2) an explicit route via a gateway 3) the default route
[17:30] bdx, netstat -rn will tell you which of those will happen
[17:30] 1) or 3) are most likely
[17:30] don't ever do storage traffic via a router - the latency will suck
[17:32] bdx, does that make sense?
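
[Editor's note: jamespage's three routing cases can be checked directly on a compute node; a sketch, assuming the 10.50.0.0 ceph public network discussed below (10.50.0.1 is a hypothetical monitor address, not from the log):]

```shell
# Show the kernel routing table.  A 10.50.0.0 entry against a 10G
# interface means the directly-attached case (1) applies.
netstat -rn

# Ask the kernel which route it would actually pick for a given
# destination - this reports whichever of cases 1-3 wins:
ip route get 10.50.0.1
```
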
[17:32] jamespage: Ok, so I create ceph-public-network: 10.50.0.0 (10G), ceph-cluster-network: 10.60.0.0 (10G), os-admin-network: 10.70.0.0 (1G), os-data-network: 10.80.0.0 (1G), os-internal-network: 10.90.0.0 (1G), os-public-network: 10.100.0.0 (1G)
[17:32] nova-compute only has params for os-data-network
[17:32] yup
[17:33] nova-compute does not have any endpoints - it just consumes them
[17:33] So how does compute know to talk to 10.50.0.0 for storage traffic?
[17:33] bdx, because it must have a 10.50.0.0 network connection
[17:33] bdx, note the charms do not set up and configure network interfaces
[17:33] R1ck, I understand, I was just commenting on the stability, given that we run our most critical services on it (SSO, the software store, payments, etc)
[17:34] bdx, they just detect and consume what's already there
[17:34] you would know fairly quickly if it wasn't stable ;)
[17:34] bdx, MAAS + Juju are developing features to support network interface configuration (discovery is already supported)
[17:34] jamespage: Ahh, ok....so nova-compute will know to talk to ceph-public-network for storage traffic even if I do not specify 10.50.0.0 anywhere?
[17:34] ok
[17:34] beuno: awesome :)
[17:35] bdx, yup - cause the ceph charm will pass it some 10.50 addresses - these get configured into /etc/ceph/ceph.conf and used that way
[17:36] jamespage: Phewwww, this is great news!
[17:36] jamespage: Thank you for taking the time to explain that.....it has been driving me crazy.
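
[Editor's note: the plan above maps onto charm config options; a hedged sketch of the juju 1.x commands (the /16 prefix lengths are assumptions - the log gives only the network addresses):]

```shell
# CIDRs from bdx's plan; prefix lengths are assumed.
juju set ceph ceph-public-network=10.50.0.0/16 ceph-cluster-network=10.60.0.0/16
juju set nova-compute os-data-network=10.80.0.0/16
```

[As jamespage notes below, ceph-public-network has to be in place before the ceph cluster is deployed; it cannot be switched afterwards.]
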
[17:37] bdx, hey - it's a little complex right now, as neither maas nor juju exposes networking in a way consumable by end-users or charms - that is coming - but the openstack charms jumped the gun on this due to a requirement to do what you're doing
[17:38] bdx, you can use a special charm to config up your boxes first - I've seen people use the 'ubuntu' charm with some extra scripts called from the config-changed hook to configure the network
[17:38] that's a stop-gap until everything hooks up between MAAS/Juju/Charms
[17:39] jamespage: Ahhh totally, that's a great idea.
[17:40] jamespage: I have a feeling what you advised is going to be exactly the fix I am looking for...for the time being.
[17:40] bdx, once everything is up and networked, you can then use the "--to" syntax to target services at particular machines
[17:40] bdx, erm so you will have to re-deploy your ceph cluster - it's not possible to switch the public network post deployment
[17:40] due to the way inter-mon communication works
[17:42] jamespage: Entirely.
[17:42] bdx, now that would be a neat trick but I feel I could waste a lot of midnight hours trying to make that work
[17:43] bdx, are you using lxc containers for any of the services?
that's particularly tricky with the network split support right now
[17:43] jamespage: Totally....I am using the openstack-installer as our means of deployment here at DarkHorseComics
[17:44] bdx, ok so the lxc containers juju creates will only get networked to eth0 via a bridge
[17:44] I am using nucs in my testlab for supporting services that aren't compute, quantum-network, or storage (ceph)
[17:44] so that does limit what you can do
[17:45] bdx, you can, if you are feeling brave, create the lxc containers with the right bridges/networking and then manually introduce them to your environment - but it's a bit fiddly
[17:46] bdx, I've also seen people use KVM machines networked up and then registered into MAAS for deployment - the power control is still manual (maas has some rudimentary virsh support - but it's not for remote machines - just testing).
[17:47] jamespage: Totally......I just need to finish defining our deployment methodology....getting storage traffic off the os-admin-network/interface is one of my last issues to resolve.
[17:48] bdx, awesome - hope this conversation unblocks you
[17:48] jamespage: I'm pretty sure you can use the "virsh" power type in maas
[17:49] quite likely
[17:49] That's what I use in my kvm labs...
[17:49] Thanks again for your support
[17:50] bdx, btw which type of tenant networks are you going to use?
[17:50] one of the overlay network types? (gre/vxlan)
[17:50] gre
[17:51] jamespage: Yea, gre...why?
[17:52] bdx, oh wait - you're using os-data-network - that helps
[17:52] bdx, packet fragmentation can be awkward - make sure you configure the DHCP server for that network (or your static network config) to use an MTU higher than 1500 - preferably 9000
[17:53] bdx, GRE carries some overhead - using a higher mtu ensures that you don't get packet fragmentation, which can impact performance and cause network issues with nofrag packets
[17:53] bdx; the ceph network would also benefit from that
[17:54] jamespage: Totally, I was thinking about opening up all interfaces to mtu 9000
[17:54] jamespage: Do you see any issue with that?
[17:54] bdx, that's a good idea
[17:54] " option interface-mtu 9000;"
[17:54] does the trick in isc-dhcp-server
[17:55] you can edit the template for that in MAAS (on the assumption you are using MAAS for DHCP)
[17:55] Entirely, I am
[17:57] jamespage: I have edited my curtin_userdata to bring up my extra interfaces...do you think this is a reasonable way of doing this?
[17:57] jamespage: http://paste.ubuntu.com/10611047/
[17:57] bdx, absolutely
[17:57] * jamespage looks at the details
[17:58] bdx, how's that working for you?
[17:59] jamespage: Excellent!
[17:59] bdx, maas curtin preseeds are not my strong point
[17:59] bdx, you could use /etc/network/interfaces.d to fragment the config a bit - but that's my only comment
[17:59] I couldn't figure out how else to bring up my extra interfaces.....that was the only thing, other than making a puppet class for them
[17:59] eth1.cfg eth2.cfg etc...
[18:00] bdx, this is where I've seen folk use a special charm to configure the networks up
[18:01] jamespage: Ok, I'll keep that in mind.
[18:01] bdx, well good luck - I'm EOD
[18:01] ttfn
[18:03] jamespage: Ok, thanks again!!
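
[Editor's note: jamespage's suggestion to split the curtin-written config into /etc/network/interfaces.d fragments might look like this for one of the 10G NICs; the filename, address, and netmask are assumptions, and the mtu matches the 9000 discussed above:]

```
# /etc/network/interfaces.d/eth2.cfg -- hypothetical 10G storage NIC
auto eth2
iface eth2 inet static
    address 10.50.0.21
    netmask 255.255.0.0
    mtu 9000
```
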
=== roadmr is now known as roadmr_afk
=== kadams54_ is now known as kadams54-away
[19:55] hey rick_h_
[19:55] http://readme.io/
=== roadmr_afk is now known as roadmr
=== kadams54 is now known as kadams54-away
[20:25] dear hatch, THANK YOU https://github.com/juju/juju-gui/pull/707
[20:26] marcoceppi: :D
[20:26] marcoceppi: it hasn't yet been QA'd by third parties so don't thank me YET ;)
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
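
[Editor's note: a footnote on the 17:38/18:00 suggestion of using the 'ubuntu' charm with a config-changed hook as a network stop-gap. A rough sketch of such a hook; the storage-cidr option and the eth2 interface are invented for illustration (config-get is a real juju hook tool):]

```shell
#!/bin/bash
# hooks/config-changed -- hypothetical stop-gap network setup.
set -e
CIDR=$(config-get storage-cidr)        # 'storage-cidr' is an invented option
if [ -n "$CIDR" ]; then
    ip addr replace "$CIDR" dev eth2   # eth2 is an assumption
    ip link set eth2 mtu 9000 up       # jumbo frames, per the MTU discussion
fi
```
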