/srv/irclogs.ubuntu.com/2014/12/16/#juju.txt

=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[00:41] <designated> I don't understand the configuration of juju-br0 in relation to deploying to lxc. lxc seems to create an lxcbr0 interface that doesn't get bridged with any other interface. Is this documented somewhere? I'm unable to find it.
[01:02] <jose> designated: that's the LXC network, which is local only
=== kadams54_ is now known as kadams54-away
=== kadams54-away is now known as kadams54_
=== erkules_ is now known as erkules
[03:22] <erkules> ahoi, is there docker support for juju?
[03:49] <lazyPower> erkules: do you mean orchestrating docker with juju?
[03:49] <lazyPower> erkules: or do you mean a docker container that would serve as your workstation, that you could get moving quickly with juju?
[03:50] <sebas5384> lazyPower: we have a docker image with juju in it?
[03:50] <sebas5384> o/
[03:50] <lazyPower> sebas5384: not officially, no
[03:50] <lazyPower> I was asking for clarification
[03:51] <lazyPower> we have some preliminary experimental stuff that we've been doing with docker + juju, but nothing official that I would recommend to anyone.
[03:51] <sebas5384> lazyPower: hmmm, liked the idea
[03:52] <sebas5384> did you use juju to orchestrate docker? how well is it playing?
[03:52] <designated> lazyPower: can you explain the purpose of juju-br0? this interface seems to be causing some problems, chiefly with trying to deploy to lxc and certain charms.
[03:53] <lazyPower> designated: lxcbr0 is a bridge device - it's a virtual ethernet adapter required by the lxc configuration
[03:53] <lazyPower> you can bind it to another adapter and disable the internal DHCP stuff if that's the issue
[03:53] <lazyPower> sebas5384: nothing noteworthy to speak of at the moment
[03:54] <sebas5384> oh ok, I was excited to see activity with docker + juju :D
[03:54] <lazyPower> designated: http://blog.dasroot.net/making-juju-visible-on-your-lan/ - I wrote a blog post about making the LXC containers bridge with a physical ethernet adapter (using the local provider - but the steps should be very similar for other environments)
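[Editor's note: a minimal sketch of the bridged setup being discussed, not quoted from the linked post; the interface names eth0/lxcbr0 and the DHCP choice are assumptions — adapt them to your adapter:]

```
# /etc/network/interfaces -- put the container bridge on the physical NIC
auto eth0
iface eth0 inet manual

auto lxcbr0
iface lxcbr0 inet dhcp
    bridge_ports eth0    # the host's LAN adapter is dedicated to the bridge
    bridge_fd 0
    bridge_maxwait 0
```

With the bridge defined here, LXC's own bridge management is typically turned off (USE_LXC_BRIDGE="false" in /etc/default/lxc-net) so the containers attach to this lxcbr0 instead of an LXC-created one.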
[03:54] <designated> when deploying percona-cluster, it configures the gcomm:// portion with addresses from the juju-br0 interface instead of what I have configured in my local.yaml file for the following: ha-bindiface, vip_iface, access-network
[03:55] <lazyPower> designated: that sounds like it may be a charm bug :(
[03:55] <sebas5384> lazyPower: just the post I was needing!
[03:55] <lazyPower> sebas5384: that was written against the 1.18 provider - if anything's moved, keep an errata for me so I can update
[03:55] <sebas5384> lazyPower: did you create a vagrant box with this?
[03:56] <designated> lazyPower: I'll take a look, because I had given up on deploying to lxc due to the fact I'm trying to use an already configured bridge and it wasn't working... it kept trying to use juju-br0, but that is associated with the wrong interface
[03:56] <sebas5384> lazyPower: ok o/
[03:56] <lazyPower> sebas5384: negative, this was purely for fun - my 2U originally started its life as a local provider, then moved to MAAS, and is now decommissioned in lieu of an Intel NUC that is serving as my juju-box on the cheap.
[03:57] <lazyPower> designated: ah - well if you already have a virtual adapter, you can change the lxc adapter in those networking config files
[03:57] <sebas5384> lazyPower: sweet
[03:57] <lazyPower> designated: I have indeed done this in the past with success
[03:57] <lazyPower> sebas5384: I got sick of the fan noise and the 480W power draw
[03:57] <lazyPower> sebas5384: if you ever feel like hacking up a NUC cluster, let me know - I'm getting fairly proficient at it
[03:57] <lazyPower> and they stack so nicely
[03:58] <sebas5384> good to know lazyPower
[03:58] <sebas5384> yeah, I always wanted one of those in house
[03:58] <sebas5384> a nice lean stack
[03:58] <sebas5384> just to play
[03:59] <lazyPower> pick up some i7's and 32GB of RAM - slap in an mSATA and SSD and you've got yourself a cheap in-house server with plenty of resources
[03:59] <lazyPower> has ~25W of power draw
[03:59] <sebas5384> thanks for the tip ;)
[04:00] <lazyPower> if you want to go the MAAS route and orchestrate them - you'll want to talk to marco or dustin - as they require a special series of the NUC
[04:00] <lazyPower> I didn't go for the ones with AMT support, I actually went with an off-brand Gigabyte BRIX system
[04:00] <lazyPower> works just as well for my needs :)
[04:00] <designated> lazyPower: I'm not deploying to a local environment, I'm trying to deploy to lxc on a separate physical node. Will the last part of your blog post be necessary?
[04:00] <lazyPower> I'm thinking a pair of these pre-loaded for big data deployments would be nice demo-ware hardware.
[04:01] <lazyPower> designated: you can omit editing your environments.yaml, but the /etc/lxc/ stuff will be of interest to you
[04:01] <sebas5384> lazyPower: ;)
[04:02] <designated> lazyPower: is LXC_DHCP_RANGE required in /etc/default/lxc-net? If so... why?
[04:03] <lazyPower> designated: it's a faux DHCP configuration - if you're binding to an interface that has an attached DHCP provider, you can omit it
[04:03] <designated> I do not have a DHCP daemon running on the network I intend to use, it's all static.
[04:03] <lazyPower> you'll need to provide that DHCP range then - as the containers are not configured for static networking
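[Editor's note: the settings under discussion live in /etc/default/lxc-net, which drives LXC's built-in dnsmasq. A sketch in the style of the stock Ubuntu defaults — the exact range is an assumption; size it to the number of containers you expect:]

```
# /etc/default/lxc-net -- LXC's dnsmasq hands out addresses from this range
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
```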
[04:04] <sarnold> would the manual provider kinda let that work?
[04:04] <lazyPower> sarnold: only if you were going to create/enlist the lxc containers as a separate juju environment
[04:06] <designated> lazyPower: I don't understand why it's required. what does it actually do? Do services that come up in an lxc container pull from that pool via dhcp? will the pool need to be the size of the expected number of containers to be used?
[04:06] <lazyPower> sarnold: aiui designated is using a maas on metal provider.
[04:06] <lazyPower> designated: correct - each host will assume 249 addresses on the 10.0.3.x network by default.
[04:07] <designated> will the pool have to be different across each of the nodes?
[04:07] <lazyPower> well since you're bridging, it introduces a new mindset - and there is a chance they might collide with one another
[04:07] <lazyPower> since they don't talk to a centralized DHCP server - you're going to wind up having race conditions between hosts as you add containers
[04:08] <designated> lazyPower: more than a chance... I would say it will most certainly cause a conflict, as it is all layer 2
[04:08] <lazyPower> there's a 1/244 chance that they will pick the same ip (if you give it 245 addresses in the pool)
[04:08] <sebas5384> lazyPower: I'm planning to do a vagrant file with a bridge network created by virtualbox, and then configure lxc + juju, so that way, from the host, I could reach the charm's units
[04:09] <lazyPower> designated: there are some constraints being introduced that are definite concerns - if your lxc IPs collide, the container will never fully spin up and will be stuck in 'pending'
[04:09] <sebas5384> lazyPower: do you think this is possible?
[04:09] <lazyPower> as is the behavior of juju when unexpected things happen with containers, it just kind of sits there dumbfounded and waits for something to happen with the agent reporting in.
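[Editor's note: the collision risk described above is a birthday problem — each host's dnsmasq draws from the same pool with no coordination, so the odds of a clash grow much faster than 1/pool as containers accumulate. A back-of-the-envelope sketch, illustrative only and not part of any juju tooling:]

```python
def collision_probability(pool_size: int, containers: int) -> float:
    """P(at least two containers independently pick the same address)."""
    p_unique = 1.0
    for i in range(containers):
        # i addresses already taken; chance the next pick is still fresh
        p_unique *= (pool_size - i) / pool_size
    return 1.0 - p_unique
```

For two containers and a 245-address pool the chance is 1/245; by a couple of dozen containers spread across bridged hosts it is already roughly 70%, which is why the 'pending' failure mode shows up so reliably.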
[04:10] <lazyPower> sebas5384: not sure what you're alluding to - the virtual adapter being bridged - doesn't this already happen in vagrant? Host-only bridge networking or NAT host bridge networking?
[04:10] <lazyPower> sebas5384: it almost sounds like what you *really* want is a VPN-style connection into the vm
[04:10] <sebas5384> yeah, but you can't reach the container from the host
[04:10] <sebas5384> lazyPower: yeah
[04:10] <lazyPower> which is what sshuttle provided you prior to the yosemite update
[04:10] <lazyPower> now it just sadly dumps core
[04:11] <lazyPower> sebas5384: without fiddling with low-level virtual interfaces, my first thought is to find a lightweight and lean VPN you could add to the vagrant box - do some testing and submit a feature request with your findings to get it pressed in officially
[04:11] <sebas5384> sshuttle wasn't an elegant solution
[04:11] <sebas5384> hehe
[04:12] <lazyPower> I agree, we're solving a problem that has no good options - think about the portability of your device fix
[04:12] <lazyPower> that will work on posix systems
[04:12] <lazyPower> what about our windows counterparts?
[04:12] <lazyPower> surely there are windows devs in the drupal community - I stand by the lightweight VPN service being the route to go
[04:12] <lazyPower> as that will work on everything: ubuntu, osx, and windows
[04:12] <sebas5384> lazyPower: hmmm, isn't vagrant dealing with portability?
[04:13] <lazyPower> maybe I missed something
[04:13] <lazyPower> I had 2 conversations running at once, let me scroll back up and read
[04:13] <sebas5384> hehe, sorry to flood you
[04:14] <lazyPower> sebas5384: so if I understand correctly
[04:14] <lazyPower> > i'm planning to do a vagrant file with a bridge network created by virtual box
[04:14] <lazyPower> you're looking at doing this
[04:14] <lazyPower> host => virtualbox bridge => lxc bridge => containers
[04:14] <sebas5384> yeah!
[04:15] <lazyPower> might work, I haven't tested it
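[Editor's note: the host => virtualbox bridge => lxc bridge chain above would start from a Vagrantfile along these lines. This is a sketch: the box name and bridge device are assumptions, and the LXC side still needs the interfaces/lxc-net changes discussed earlier in the log:]

```ruby
# Vagrantfile -- give the guest a second, bridged NIC so containers
# inside it can take addresses from the parent LAN's DHCP server
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # appears as eth1 in the guest, bridged onto the host's physical adapter
  config.vm.network "public_network", bridge: "eth0"
end
```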
[04:15] <sebas5384> lazyPower: I would love to spend some time doing that
[04:15] <sebas5384> today for me is a pain of every day
[04:15] <lazyPower> sebas5384: yeah, the blog post I linked should help then
[04:15] <lazyPower> so long as you have enough addresses on your DHCP server to give out - you're going to be sharing the IPs with your parent network
[04:16] <sebas5384> lazyPower: ;)
[04:16] <lazyPower> when it sends that DHCP broadcast, it's looking @ your router and skipping the faux dhcp server given with juju
[04:18] <sebas5384> lazyPower: so in theory every container will have a new ip naturally?
[04:18] <lazyPower> correct
[04:18] <sebas5384> \o/
[04:18] <lazyPower> if you're in a large organization, it's not a very elegant solution
[04:18] <lazyPower> as you'll exhaust IPs pretty quickly in a large formation
[04:18] <sebas5384> I'm going to work on that and let's see what happens
[04:18] <sebas5384> in my local?
[04:18] <lazyPower> but for normal @ home use, unless you're running an IoT hive - you should be fine.
[04:19] <sebas5384> it's only for local
[04:19] <lazyPower> yep - if you think about it, every container is going to get an ip from your DHCP server's configured range.
[04:19] <lazyPower> so say you and 5 co-workers are on wifi
[04:19] <sebas5384> ok
[04:19] <lazyPower> and you each deploy a 10-node cluster
[04:19] <lazyPower> that's 50 IPs zapped, + the 5 for your laptops, that's 55 IPs in one swoop
[04:20] <lazyPower> so be careful about the recommendation and make sure it has a caveat attached to it - as I can see that in itself being problematic
[04:20] <sebas5384> I was thinking of a private host
[04:20] <lazyPower> with that being said - a VPN would eliminate the need for that ;)
[04:20] <sebas5384> hehe
[04:21] <sebas5384> lazyPower: yeah, but sshuttle has some caveats
[04:21] <lazyPower> sebas5384: I don't recommend sshuttle to anyone but my worst enemies @_@
[04:21] <sebas5384> like slowness, and it gives kernel panics
[04:21] <sebas5384> hehe
[04:22] <sebas5384> lazyPower: I will give it a try; if it solves the problem for me and my team
[04:22] <sebas5384> I will let you know :D
[04:22] <designated> lazyPower: the whole lxc thing is confusing me, mainly the networking portion. according to https://help.ubuntu.com/lts/serverguide/lxc.html you have a couple of recommended options: iptables with port forwarding, or bridges. it's recommended not to use the macvlan option.
[04:22] <designated> lazyPower: this part is especially confusing: "A NIC can only exist in one namespace at a time, so a physical NIC passed into the container is not usable on the host"
[04:23] <lazyPower> designated: that's correct - when you bridge the ethernet device - you're essentially dedicating it to anything consuming that bridge
[04:23] <lazyPower> I don't know what macvlan is - so that confuses me too
[04:24] <lazyPower> designated: and iptables with port-forwarding is kind of a reverse-NAT workaround - while it works, it can be super complicated to set up, and the source of many curse words and grey hairs before it's configured properly
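[Editor's note: the port-forwarding option mentioned here amounts to one DNAT rule on the host per exposed service. A sketch — the interface, address, and port are illustrative assumptions, not values from this conversation:]

```
# forward host port 3306 to a MySQL container on the NAT'd lxcbr0 network
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 3306 \
    -j DNAT --to-destination 10.0.3.20:3306
# allow the forwarded traffic through the FORWARD chain
iptables -A FORWARD -p tcp -d 10.0.3.20 --dport 3306 -j ACCEPT
```

Every service on every container needs its own rule, which is the bookkeeping burden the bridge approach avoids.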
[04:24] <designated> lazyPower: this is my scenario: multiple 10GbE bonded interfaces with vlan tags (bond0.10, bond0.20, etc...). If I'm understanding correctly, I have to create a bridge and map it to bond0.x for lxc to work properly?
[04:25] <designated> which would prevent the host from further use of bond0.x
[04:25] <lazyPower> designated: correct - whichever ethernet device is the one that will serve as your 'public' and 'private' network interface is the one you want to bind to.
[04:25] <lazyPower> and you won't want the host to have any use of that ethernet device outside of dedicating that traffic to your LXC containers - the host basically loses any control over it once it's bridged - it becomes a gateway into the network
[04:26] <lazyPower> the only things you can really do with it at that point are edit /etc/network/interfaces to change the networking config - and edit the bridge settings - outside of that, the host won't use it for network comms for any reason. it's dedicated at that point.
[04:26] <designated> lazyPower: that seems like it puts me right back in the original predicament
[04:27] <designated> the whole point of using lxc is to put more services on the same physical nodes. according to the limitations of lxc with regards to networking, I fail to see the benefit
[04:28] <lazyPower> designated: I feel like at this point it would be prudent to bring in someone like jamespage that has a background in doing openstack configuration and deployments.
[04:29] <lazyPower> designated: I'm far from an openstack expert; the reason why we blanket state that deploying to lxc on a host is good is for a few things: 1) isolated deployments 2) cleanup from a failed deployment is simple 3) density out of the box - it's possible to add more network interfaces to the lxc configuration, but that's outside my scope of knowledge.
[04:31] <designated> lazyPower: unless I'm missing something, it doesn't seem like using lxc in my situation is the best option.
[04:31] <lazyPower> designated: it's 11:30 here though and I'm about to call it in for the day - I'd be happy to resume troubleshooting and attempt to get more eyes on this when I get in ~9am
[04:31] <designated> lazyPower: that would be awesome if you have the time. I'll be around :)
[04:31] <designated> lazyPower: thank you for all of your help so far.
[04:31] <lazyPower> no worries - if I don't ping you, don't hesitate to ping me
[04:32] <lazyPower> :)
=== fuzzy_ is now known as Ponyo
=== kadams54_ is now known as kadams54-away
[06:43] <erkules> ahoi lazyPower, I would like to orchestrate docker with juju
[08:40] <Odd_Bloke> aisrael: Great! Have kicked off Trusty and Utopic builds.
=== scuttlemonkey is now known as scuttle|afk
=== ev_ is now known as ev
=== nottrobin__ is now known as nottrobin_
=== nottrobin_ is now known as nottrobin
[14:55] <HorzA> When I'm adding a node I get "curl: (7) Failed to connect to streams.canonical.com port 443: No route to host"; it's downloading everything else. Can't figure out the problem...
[14:56] <marcoceppi> HorzA: do you have a proxy setup?
[14:56] <HorzA> no
[14:56] <marcoceppi> what environment is this?
[14:56] <HorzA> maas, and running sudo juju bootstrap --debug
[14:56] <HorzA> err, remove sudo
[14:57] <HorzA> I can fetch it from the maas server, but the node isn't downloading it; running 2 network cards, one to the nodes and the other to the internet
[14:59] <HorzA> even downloaded it to /var/lib/juju/tools/1.20.14-trusty-amd64/juju-1.20.14-trusty-amd64.tgz
[14:59] <HorzA> (on the maas server)
[14:59] <marcoceppi> HorzA: well, by default, all traffic is routed through the maas server
[15:00] <HorzA> is port 443 blocked, or can't it connect with https?
[15:01] <marcoceppi> HorzA: start a node in maas, then ssh in to it and try to curl https://streams.canonical.com
[15:02] <HorzA> I'm trying to connect, but does maas/juju add a secret password on it?
[15:10] <lazyPower> HorzA: nope. It will however load the ubuntu user on the host up with whatever credentials you have placed in maas
[15:11] <lazyPower> and in terms of juju, it loads up a juju-specific ssh keypair that resides in ~/.juju/ssh
[15:16] <HorzA> just ran "juju sync-tools" and it worked :)
[15:37] <marcoceppi> oh, well that helps!
=== scuttle|afk is now known as scuttlemonkey
=== kadams54 is now known as kadams54-away
[16:44] <sebas538_> lazyPower: ping
=== sebas538_ is now known as sebas5384
[16:46] <lazyPower> sebas5384: pong
[16:47] <sebas5384> lazyPower: I tried to make the network solution in the vagrant flow
[16:47] <sebas5384> but I think I'm missing something
[16:48] <sebas5384> because the container is failing to start :(
[16:48] <lazyPower> That's not good
[16:48] <sebas5384> yep ¬¬
[16:48] <lazyPower> have something for me to look at? I don't promise I have the answer, but I can take a look
[16:48] <sebas5384> lazyPower: well
[16:48] <sebas5384> I tried the instructions of your post
[16:49] <sebas5384> but I think there are some missing steps
[16:49] <sebas5384> because after setting the bridge in the interfaces file and restarting the networking, the bridge isn't coming up
[16:50] <lazyPower> sebas5384: disclaimer - that was written against 1.18
[16:50] <sebas5384> something like ifup
[16:50] <sebas5384> I think
[16:50] <sebas5384> yeah, I remember
[16:50] <sebas5384> :P
[16:50] <sebas5384> but
[16:50] <lazyPower> but everything you need to edit is basically in /etc/lxc/
[16:51] <sebas5384> what I'm talking about is not related to juju, yet...
[16:51] <sebas5384> but who is getting the interface up?
[16:51] <lazyPower> did you add a secondary network interface to the vbox, or are you trying to hijack the default virtual eth device?
[16:51] <lazyPower> and the ifup would be the bridge device
[16:52] <sebas5384> I added a host-only private network
[16:52] <sebas5384> like an eth1 with some other ip
[16:52] <sebas5384> so I tried to create a bridge linked to the eth1, but I had no luck with that
[16:52] <sebas5384> the eth1 was losing its ip range
[16:54] <lazyPower> hmmm
[16:55] <lazyPower> I've got a meeting coming up in 5 minutes - link me to a repository that you've got running and I'll take a look later today
[16:55] <lazyPower> but without knowing what's going on, it's hard to say
[16:58] <sebas5384> lazyPower: thanks!! I will ping you later
[16:58] <sebas5384> would be nice if we could talk, and show you what I have done
[16:59] <sebas5384> lazyPower: we could do it together if you like the idea :)
[16:59] <sebas5384> I've been trying to do this for a long time now ¬¬
=== kadams54-away is now known as kadams54
[17:18] <sebas5384> lazyPower: http://containerops.org/2013/11/19/lxc-networking/ pretty neat!
[18:45] <designated> who can I talk to about the percona-cluster charm?
[19:49] <designated> Anyone want to take a stab at this one? http://pastebin.com/ptmHHpiZ  Is it possible this function is failing because I specified a bridge or bonded interface instead of a physical interface?
=== kadams54 is now known as kadams54-away
=== _thumper_ is now known as thumper
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[22:41] <designated> The procedure described here: https://wiki.ubuntu.com/ServerTeam/OpenStackHA for installing MySQL (Percona XtraDB Cluster) is inconsistent with the charm's README (http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/percona-cluster/trunk/view/head:/README.md). The service units must be deployed one at a time, not all at once.
[22:42] <designated> Is there anyone knowledgeable enough with the percona-cluster and hacluster charms that could help troubleshoot why the servers are not clustering?
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== kadams54 is now known as kadams54-away

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!