=== kadams54 is now known as kadams54-away | ||
=== urulama is now known as urulama|kids | ||
=== urulama|kids is now known as urulama | ||
=== Murali_ is now known as Murali | ||
Muntaner | good morning guys o/ | 10:15 |
---|---|---|
Muntaner | I'm having a little issue with security groups | 10:16 |
Muntaner | when I delete the charm that I'm developing, the corresponding security group in nova isn't destroyed | 10:16 |
Muntaner | how can I do this? | 10:16 |
=== kadams54 is now known as kadams54-away | ||
=== zz_CyberJacob is now known as CyberJacob | ||
=== TimNich_ is now known as TimNich | ||
=== kadams54 is now known as kadams54-away | ||
Muntaner | hello guys | 15:36 |
Muntaner | how can I tell a juju charm which image it should run in its VM when the service is deployed? | 15:37 |
lazyPower | Muntaner: that is denoted by the series of the charm | 15:37 |
lazyPower | Muntaner: eg: juju deploy trusty/mysql - will tell juju to allocate a trusty VM, and then deploy the charm on top of trusty. | 15:38 |
Muntaner | lazyPower, thanks | 15:41 |
jrwren | Muntaner: juju has a "default-series" option which you can set in environments.yaml, so that if you don't specify a series when deploying, and there are charms for more than one series, it will default to this setting. | 15:46 |
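The two options mentioned above can be sketched as a minimal environments.yaml fragment. This assumes a MAAS provider; the environment name, server URL, and credentials are illustrative placeholders, and only `default-series` is the setting under discussion:

```yaml
# environments.yaml - illustrative fragment
environments:
  maas:
    type: maas
    maas-server: 'http://10.0.0.1/MAAS/'   # placeholder
    maas-oauth: '<api-key>'                # placeholder
    default-series: trusty   # used when "juju deploy mysql" omits a series
```

With that in place, `juju deploy mysql` behaves like `juju deploy trusty/mysql`.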
R1ck | it states here: https://news.ycombinator.com/item?id=5738252 "I tested juju a few months ago and found it to be buggy and unreliable." this was however from almost 2 years ago. does anybody agree with that statement still? | 15:59 |
AskUbuntu_ | Adding network to machine deployed by Juju | http://askubuntu.com/q/597514 | 16:02 |
beuno | R1ck, not at all, we use it for plenty of production services in Canonical, as do many customers | 16:23 |
bdx | Hello, I am wondering if anyone can give any insight as to what charm or service parameters determine what interface compute nodes talk to storage on | 16:31 |
bdx | ?? | 16:31 |
jcastro | marcoceppi, around? | 17:02 |
marcoceppi | yes | 17:02 |
jcastro | so I have a card that's like | 17:03 |
jcastro | "provide examples to what a good charm looks like" | 17:03 |
jcastro | and I'm going through the links in the docs | 17:03 |
jcastro | other than ones that should link to new charms (like, say, the services framework), is there anything in these example ones, like the vanilla forum ones, that needs to be fixed? | 17:04 |
marcoceppi | probably | 17:05 |
jcastro | evilnickveitch, heya | 17:14 |
evilnickveitch | jcastro, hi | 17:14 |
jcastro | the review queue link is 404 | 17:14 |
jcastro | so I fixed them and pushed but I think you merged before | 17:14 |
jcastro | http://review.juju.solutions instead of manage.blah | 17:14 |
evilnickveitch | okay, will take a look | 17:15 |
bdx | marcoceppi, jcastro: How can I configure nova-compute to use ~(os-admin-network) for storage traffic? | 17:15 |
jcastro | I am unfamiliar with the nova-compute charm | 17:16 |
jcastro | we have a list for those iirc? | 17:16 |
evilnickveitch | jcastro, done | 17:16 |
bdx | Ohhh really? | 17:16 |
jcastro | thanks | 17:16 |
jcastro | jamespage, ^^^ | 17:17 |
jcastro | evilnickveitch, we should make it so that if a 404 is detected when we build the docs, it yells at us | 17:18 |
jamespage | bdx, hello | 17:18 |
jamespage | nova-compute -> ceph? | 17:18 |
bdx | jamespage: Hi, hows it going?? | 17:18 |
bdx | yes! | 17:18 |
jamespage | so you're using the public and admin network configuration in the ceph charms right? | 17:18 |
evilnickveitch | jcastro, yes, well we had that lint tool before, but that was before we switched to markdown. | 17:19 |
bdx | jamespage: Correct | 17:19 |
evilnickveitch | it is on my list of things to add to the new universal build tool | 17:19 |
jamespage | bdx, ok so the nova-compute nodes will access ceph over the public network IPs - so if you make the ceph public-network == os-admin-network that should work | 17:19 |
evilnickveitch | jcastro, but to be honest, it may take a while until that gets done, looking at all the stuff I have to do | 17:20 |
jamespage | this assumes that ceph and nova-compute are both physically or logically attached to the same networks | 17:20 |
bdx | jamespage: That's what I currently have.... | 17:20 |
evilnickveitch | jcastro, however, we can probably cobble together a script to do it | 17:20 |
jamespage | bdx, what are you seeing? | 17:21 |
bdx | That's the problem....my admin network is 1G....I get a bottleneck on the 1G interface | 17:21 |
jcastro | evilnickveitch, ok I'll mention it at the sprint, see if someone is willing to have a go | 17:21 |
jamespage | bdx, do your compute and ceph nodes have 10Gs or alternative 1Gs that can be used? | 17:21 |
bdx | Yes, I have 2x 1G and 2x 10G on each node | 17:22 |
jamespage | bdx, ok - so in that config I'd probably bond the 2 x 1Gs and run control plane traffic over that network | 17:22 |
bdx | jamespage: Here is what my 1G os-admin-network interface looks like on my compute node | 17:22 |
bdx | https://www.dropbox.com/s/ws3g577yjzq6v0v/Screenshot%202015-03-12%2011.55.35.png?dl=0 | 17:22 |
jamespage | and do the same for the 10G's and run os-data-network and ceph-public-network over that | 17:23 |
bdx | jamespage: Here is my 10G os-data-network interface on compute node | 17:23 |
bdx | https://www.dropbox.com/s/vqt3z5dauyiewjj/Screenshot%202015-03-12%2011.55.14.png?dl=0 | 17:23 |
bdx | jamespage: I now realize that os-data-network doesn't need to be 10G | 17:24 |
jamespage | bdx, well it might depending on how busy your tenants get | 17:24 |
jamespage | bdx, are you using the ceph nova backend for instance storage? | 17:25 |
R1ck | beuno: well yes, but seeing as it's Canonical that's developing it, you would say that.. I'm looking for independent opinions ;) | 17:25 |
bdx | jamespage: Yes | 17:25 |
jamespage | bdx, right - so that is going to get pretty busy with all the io | 17:25 |
jamespage | you def want that running over the 10G | 17:26 |
bdx | Totally, but that means I need a 10G switch for os-admin | 17:26 |
jamespage | bdx - so you need to configure the ceph-public-network with the network CIDR for the 10G nics you have | 17:26 |
jamespage | bdx: ceph-public-network does not have to be the same as os-admin-network | 17:27 |
jamespage | bdx: the compute units just need to have a network connection to ceph-public-network - preferably over the 10G links :-) | 17:27 |
bdx | jamespage: now we are getting somewhere | 17:27 |
jamespage | bdx, the network support across the charms is endpoint driven - the services when related will say 'connect to me over XXX' - ceph public network for ceph | 17:28 |
jamespage | clients will just use the most direct link they have | 17:28 |
bdx | jamespage: I understand that....but how does compute know which interface to use to talk to ceph-public-network? | 17:29 |
jamespage | bdx, by the magic that is linux network routing | 17:29 |
jamespage | bdx, linux will just make the best choice - 1) the interface attached to the network 2) an explicit route via a gateway 3) the default route | 17:30 |
jamespage | bdx, netstat -rn will tell you which of those will happen | 17:30 |
jamespage | 1) or 3) are most likely | 17:30 |
jamespage | don't ever do storage traffic via a router - the latency will suck | 17:30 |
jamespage | bdx, does that make sense? | 17:32 |
bdx | jamespage: Ok, so I create ceph-public-network: 10.50.0.0 (10G), ceph-cluster-network: 10.60.0.0 (10G), os-admin-network: 10.70.0.0 (1G), os-data-network: 10.80.0.0 (1G), os-internal-network: 10.90.0.0 (1G), os-public-network: 10.100.0.0 (1G) | 17:32 |
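The network plan above can be sketched as per-charm config; this is a hedged illustration only - the /24 masks are assumptions (bdx gave bare prefixes), and the exact option names and which charm carries each one should be checked against each charm's config.yaml:

```yaml
# Illustrative network split - CIDRs from the plan above, /24 assumed
ceph:
  ceph-public-network: 10.50.0.0/24    # 10G: compute <-> ceph client traffic
  ceph-cluster-network: 10.60.0.0/24   # 10G: ceph replication
keystone:
  os-admin-network: 10.70.0.0/24       # 1G: control plane
  os-internal-network: 10.90.0.0/24    # 1G
  os-public-network: 10.100.0.0/24     # 1G
nova-compute:
  os-data-network: 10.80.0.0/24        # tenant overlay (gre) traffic
```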
bdx | nova-compute only has params for os-data-network | 17:32 |
jamespage | yup | 17:32 |
jamespage | nova compute does not have any endpoints - it just consumes them | 17:33 |
bdx | So how does compute know to talk to 10.50.0.0 for storage traffic? | 17:33 |
jamespage | bdx, because it must have a 10.50.0.0 network connection | 17:33 |
jamespage | bdx, note the charms do not setup and configure network interfaces | 17:33 |
beuno | R1ck, I understand, I was just commenting on the stability, given that we run our most critical services on it (SSO, the software store, payments, etc) | 17:33 |
jamespage | bdx, they just detect and consume what's already there | 17:34 |
beuno | you would know fairly quickly if it wasn't stable ;) | 17:34 |
jamespage | bdx, MAAS + Juju are developing features to support network interface configuration (discovery is already supported) | 17:34 |
bdx | jamespage: Ahh, ok....so nova-compute will know to talk to ceph-public-network for storage traffic even if I do not specify 10.50.0.0 anywhere? | 17:34 |
bdx | ok | 17:34 |
R1ck | beuno: awesome :) | 17:34 |
jamespage | bdx, yup - because the ceph charm will pass it some 10.50 addresses - these get configured into /etc/ceph/ceph.conf and used that way | 17:35 |
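On the compute node the rendered result looks roughly like the fragment below. This is a sketch only: the monitor addresses are illustrative values on the assumed 10.50.0.0/24 public network, and the real charm-rendered file carries more settings:

```ini
# /etc/ceph/ceph.conf - fragment as rendered on a nova-compute node
[global]
mon host = 10.50.0.11 10.50.0.12 10.50.0.13
# Traffic to the mons/OSDs is then routed via whichever local
# interface reaches 10.50.0.0/24 - the 10G link, in this setup.
```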
bdx | jamespage: Phewwww, this is great news! | 17:36 |
bdx | jamespage: Thank you for taking the time to explain that.....it has been driving me crazy. | 17:36 |
jamespage | bdx, hey - it's a little complex right now as neither MAAS nor Juju exposes networking in a way that's consumable by end users or charms - that is coming - but the OpenStack charms jumped the gun on this due to the requirement to do what you're doing | 17:37 |
jamespage | bdx, you can use a special charm to config up your boxes first - I've seen people use the 'ubuntu' charm with some extra scripts called from config-changed hook to configure the network | 17:38 |
jamespage | that's a stop-gap until everything hooks up between MAAS/Juju/Charms | 17:38 |
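A minimal sketch of that stop-gap, assuming a config-changed hook in something like the 'ubuntu' charm; the helper name, interface, and address are all hypothetical:

```shell
# Hypothetical helper a config-changed hook could call to drop an
# interface stanza into /etc/network/interfaces.d (sketch only).
write_iface_cfg() {
    # $1 = target dir, $2 = interface name, $3 = static address
    mkdir -p "$1"
    cat > "$1/$2.cfg" <<EOF
auto $2
iface $2 inet static
    address $3
    netmask 255.255.255.0
    mtu 9000
EOF
}

# In the real hook this would be something like:
#   write_iface_cfg /etc/network/interfaces.d eth2 10.50.0.21
#   ifup eth2
```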
bdx | jamespage: Ahhh totally, thats a great idea. | 17:39 |
bdx | jamespage: I have a feeling what you advised is going to be exactly the fix I am looking for...for the time being. | 17:40 |
jamespage | bdx, once everything is up and networked, you can then use the "--to" syntax to target services at particular machines | 17:40 |
jamespage | bdx, erm so you will have to re-deploy your ceph cluster - it's not possible to switch the public network post deployment | 17:40 |
jamespage | due to the way inter-mon communication works | 17:40 |
bdx | jamespage: Entirely. | 17:42 |
jamespage | bdx, now that would be a neat trick but I feel I could waste a lot of midnight hours trying to make that work | 17:42 |
jamespage | bdx, are you using lxc containers for any of the services? that's particularly tricky with the network split support right now | 17:43 |
bdx | jamespage: Totally....I am using the openstack-installer as our means of deployment here at DarkHorseComics | 17:43 |
jamespage | bdx, ok so the lxc containers juju creates will only get networked to eth0 via a bridge | 17:44 |
bdx | I am using NUCs in my testlab for supporting services that aren't compute, quantum-network, and storage (ceph) | 17:44 |
jamespage | so that does limit what you can do | 17:44 |
jamespage | bdx, you can if you are feeling brave create the lxc containers with the right bridges/networking and then manually introduce them to your environment - but it's a bit fiddly | 17:45 |
jamespage | bdx, I've also seen people use KVM machines networked up and then registered into MAAS for deployment - the power control is still manual (maas has some rudimentary virsh support - but it's not for remote machines - just testing). | 17:46 |
bdx | jamespage: Totally......I just need to finish defining our deployment methodology....getting storage traffic off the os-admin-network/interface is one of my last issues to resolve. | 17:47 |
jamespage | bdx, awesome - hope this conversation unblocks you | 17:48 |
bdx | jamespage: I'm pretty sure you can use the "virsh" power type in maas | 17:48 |
jamespage | quite likely | 17:49 |
bdx | That's what I use in my kvm labs... | 17:49 |
bdx | Thanks again for your support | 17:49 |
jamespage | bdx, btw which type of tenant networks are you going to use? | 17:50 |
jamespage | one of the overlay network types? (gre/vxlan) | 17:50 |
bdx | gre | 17:50 |
bdx | jamespage: Yea, gre...why? | 17:51 |
jamespage | bdx, oh wait - you're using os-data-network - that helps | 17:52 |
jamespage | bdx, packet fragmentation can be awkward - make sure you configure the DHCP server for that network (or your static network config) to use an MTU higher than 1500 - preferably 9000 | 17:52 |
jamespage | bdx, GRE carries some overhead - using a higher MTU ensures that you don't get packet fragmentation, which can impact performance and cause network issues with don't-fragment (DF) packets | 17:53 |
jamespage | bdx; the ceph network would also benefit from that | 17:53 |
bdx | jamespage: Totally, I was thinking about opening up all interfaces to MTU 9000 | 17:54 |
bdx | jamespage: Do you see any issue with that? | 17:54 |
jamespage | bdx, that's a good idea | 17:54 |
jamespage | " option interface-mtu 9000;" | 17:54 |
jamespage | does the trick in isc-dhcp-server | 17:54 |
jamespage | you can edit the template for that in MAAS (on the assumption you are using MAAS for DHCP) | 17:55 |
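In isc-dhcp-server syntax that looks roughly like the fragment below; the subnet and range values are illustrative, and in a MAAS setup this would live in the dhcpd.conf template MAAS renders:

```
# dhcpd.conf fragment (illustrative subnet/range)
subnet 10.50.0.0 netmask 255.255.255.0 {
    range 10.50.0.100 10.50.0.200;
    option interface-mtu 9000;   # jumbo frames for the storage network
}
```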
bdx | Entirely, I am | 17:55 |
bdx | jamespage: I have edited my curtin_userdata to bring up my extra interfaces...do you think this is a reasonable way of doing this? | 17:57 |
bdx | jamespage: http://paste.ubuntu.com/10611047/ | 17:57 |
jamespage | bdx, absolutely | 17:57 |
* jamespage looks at the details | 17:57 | |
jamespage | bdx, hows that working for you? | 17:58 |
bdx | jamespage: Excellent! | 17:59 |
jamespage | bdx, maas curtin preseeds are not my strong point | 17:59 |
jamespage | bdx, you could use /etc/network/interfaces.d to fragment the config a bit - but that's my only comment | 17:59 |
bdx | I couldn't figure out how else to bring up my extra interfaces.....that was the only thing other than making a puppet class for them | 17:59 |
jamespage | eth1.cfg eth2.cfg etc... | 17:59 |
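For reference, one of those per-interface files might look like this (interface name and addressing are illustrative); on Ubuntu cloud images, /etc/network/interfaces pulls these in via a `source /etc/network/interfaces.d/*.cfg` line:

```
# /etc/network/interfaces.d/eth2.cfg (illustrative)
auto eth2
iface eth2 inet static
    address 10.80.0.21
    netmask 255.255.255.0
    mtu 9000
```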
jamespage | bdx, this is where I've seen folk use a special charm to configure the networks up | 18:00 |
bdx | jamespage: Ok, I'll keep that in mind. | 18:01 |
jamespage | bdx, well good luck - I'm EOD | 18:01 |
jamespage | ttfn | 18:01 |
bdx | jamespage: Ok, thanks again!! | 18:03 |
=== roadmr is now known as roadmr_afk | ||
=== kadams54_ is now known as kadams54-away | ||
jcastro | hey rick_h_ | 19:55 |
jcastro | http://readme.io/ | 19:55 |
=== roadmr_afk is now known as roadmr | ||
=== kadams54 is now known as kadams54-away | ||
marcoceppi | dear hatch, THANK YOU https://github.com/juju/juju-gui/pull/707 | 20:25 |
hatch | marcoceppi: :D | 20:26 |
hatch | marcoceppi: it hasn't yet been QA'd by third parties so don't thank me YET ;) | 20:26 |
=== kadams54-away is now known as kadams54 | ||
=== kadams54 is now known as kadams54-away |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!