[00:41] i don't understand the configuration of juju-br0 in relation to deploying to lxc. lxc seems to create an lxcbr0 interface that doesn't get bridged with any other interface. Is this documented somewhere? I'm unable to find it.
[01:02] designated: that's the LXC network, which is local only
=== erkules_ is now known as erkules
[03:22] ahoi, is there docker support for juju?
[03:49] erkules: do you mean orchestrating docker with juju?
[03:49] erkules: or do you mean a docker container that would serve as your workstation that you could get moving quickly with juju?
[03:50] lazyPower: we have a docker image with juju in it?
[03:50] o/
[03:50] sebas5384: not officially, no
[03:50] i was asking for clarification
[03:51] we have some preliminary experimental stuff that we've been doing with docker + juju, but nothing official that I would recommend to anyone.
[03:51] lazyPower: hmmm, liked the idea
[03:52] did you use juju to orchestrate docker? how well is it playing?
[03:52] lazyPower: can you explain the purpose of juju-br0? this interface seems to be causing some problems, chiefly with trying to deploy to lxc and certain charms.
[03:53] designated: lxc-br0 is a bridge device - it's a virtual ethernet adapter required by the lxc configuration
[03:53] you can bind it to another adapter and disable the internal DHCP stuff if that's the issue
[03:53] sebas5384: nothing noteworthy to speak of at the moment
[03:54] oh ok, I was excited to see activity with docker + juju :D
[03:54] designated: http://blog.dasroot.net/making-juju-visible-on-your-lan/ - i wrote a blog post about making the LXC containers bridge with a physical ethernet adapter (using the local provider - but the steps should be very similar for other environments)
[03:54] when deploying percona-cluster, it configures the gcomm:// portion with addresses from the juju-br0 interface instead of what I have configured in my local.yaml file for the following: ha-bindiface, vip_iface, access-network
[03:55] designated: that sounds like it may be a charm bug :(
[03:55] lazyPower: just the post I was needing!
[03:55] sebas5384: that was written against the 1.18 provider - if anything's moved, keep an errata for me so i can update
[03:55] lazyPower: did you create a vagrant box with this?
[03:56] lazyPower: I'll take a look, because I had given up on deploying to lxc due to the fact I'm trying to use an already configured bridge and it wasn't working... it kept trying to use juju-br0, but that is associated with the wrong interface
[03:56] lazyPower: ok o/
[03:56] sebas5384: negative, this was purely for fun - my 2u originally started its life as a local provider, then moved to MAAS and is now decommissioned in lieu of an intel NUC that is serving as my juju-box on the cheap.
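For context on the juju-br0/lxcbr0 question above, two stock commands (assuming only the bridge-utils and iproute2 packages, nothing juju-specific) show which bridges exist on a host and what the default LXC network looks like:

    brctl show            # lists juju-br0 / lxcbr0 and any interfaces enslaved to them
    ip addr show lxcbr0   # the stock lxcbr0 is a local-only 10.0.3.0/24 network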
[03:57] designated: ah - well if you already have a virtual adapter you can change the lxc adapter in those networking config files
[03:57] lazyPower: sweet
[03:57] designated: i have indeed done this in the past with success
[03:57] sebas5384: i got sick of the fan noise and the 480w power draw
[03:57] sebas5384: if you ever feel like hacking up a nuc cluster let me know, i'm getting fairly proficient at it
[03:57] and they stack so nicely
[03:58] good to know lazyPower
[03:58] yeah, i always wanted one of those in house
[03:58] a nice lean stack
[03:58] just to play
[03:59] pick up some i7's and 32gb of ram - slap in an mSATA and an SSD and you've got yourself a cheap in-house server with plenty of resources
[03:59] it has ~25w of power draw
[03:59] thanks for the tip ;)
[04:00] if you want to go the MAAS route and orchestrate them - you'll want to talk to marco or dustin - as they require a special series of the NUC
[04:00] i didn't go for the ones with AMT support, i actually went with an off-brand gigabyte BRIX system
[04:00] works just as well for my needs :)
[04:00] lazyPower: I'm not deploying to a local environment, I'm trying to deploy to lxc on a separate physical node. Will the last part of your blog post be necessary?
[04:00] i'm thinking a pair of these pre-loaded for big data deployments would be nice demo-ware hardware.
[04:01] designated: you can omit editing your environments.yaml, but the /etc/lxc/ stuff will be of interest to you
[04:01] lazyPower: ;)
[04:02] lazyPower: is LXC_DHCP_RANGE required in /etc/default/lxc-net? If so... why?
[04:03] designated: it's a faux DHCP configuration - if you're binding to an interface that has an attached DHCP provider you can omit it
[04:03] I do not have a DHCP daemon running on the network I intend to use, it's all static.
[04:03] you'll need to provide that DHCP range then - as the machines are not configured for static networking
[04:04] s/machines/containers/
[04:04] would the manual provider kinda let that work?
[04:04] sarnold: only if you were going to create/enlist the lxc containers as a separate juju environment
[04:06] lazyPower: I don't understand why it's required. what does it actually do? Do services that come up in an lxc pull from that pool via dhcp? will the pool need to be the size of the expected number of containers to be used?
[04:06] sarnold: aiui designated is using a maas on metal provider.
[04:06] designated: correct - each host will assume 249 addresses on the 10.0.3.x network by default.
[04:07] will the pool have to be different across each of the nodes?
[04:07] well since you're bridging, it introduces a new mindset - and there is a chance they might collide with one another
[04:07] since they don't talk to a centralized DHCP server - you're going to wind up having race conditions between hosts as you add containers
[04:08] lazyPower: more than a chance... I would say it will most certainly cause a conflict as it is all layer 2
[04:08] there's a 1/244 chance that they will pick the same ip (if you give it 245 addresses in the pool)
[04:08] lazyPower: i'm planning to do a vagrant file with a bridge network created by virtualbox, and then configure lxc + juju, so that way from the host i could reach the charm's units
[04:09] designated: there are some constraints being introduced that are definite concerns - if your lxc ip's collide, the container will never fully spin up and will be stuck in 'pending'
[04:09] lazyPower: do you think this is possible?
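A minimal sketch of the /etc/default/lxc-net and /etc/lxc/default.conf changes being discussed, assuming the containers should attach to an existing host bridge named br0 (the bridge name is a placeholder, not something taken from the conversation):

    # /etc/default/lxc-net - skip the built-in lxcbr0/dnsmasq setup entirely
    USE_LXC_BRIDGE="false"
    # LXC_DHCP_RANGE / LXC_ADDR etc. only matter while USE_LXC_BRIDGE="true"

    # /etc/lxc/default.conf - attach new containers to the existing bridge instead
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up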
[04:09] as is the behavior of juju when unexpected things happen with containers, it just kind of sits there dumbfounded and waits for something to happen with the agent reporting in.
[04:10] sebas5384: not sure what you're alluding to - the virtual adapter being bridged - doesn't this already happen in vagrant? Host-only bridge networking or NAT host bridge networking?
[04:10] sebas5384: it almost sounds like what you *really* want is a VPN-style connection into the vm
[04:10] yeah but you can't reach the container from the host
[04:10] lazyPower: yeah
[04:10] which is what sshuttle provided you prior to the yosemite update
[04:10] now it just sadly dumps core
[04:11] sebas5384: without fiddling with low-level virtual interfaces, my first thought is to find a lightweight and lean VPN you could add to the vagrant box - do some testing and submit a feature request with your findings to get it pressed in officially
[04:11] sshuttle wasn't an elegant solution
[04:11] hehe
[04:12] i agree, we're solving a problem that has no good options - think about the portability of your device fix
[04:12] that will work on posix systems
[04:12] what about our windows counterparts?
[04:12] surely there are windows devs in the drupal community - I stand by the lightweight VPN service being the route to go
[04:12] as that will work on everything. ubuntu, osx, and windows
[04:12] lazyPower: hmmm, vagrant isn't dealing with portability?
[04:13] maybe i missed something
[04:13] i had 2 conversations running at once, let me scroll back up and read
[04:13] hehe sorry to flood you
[04:14] sebas5384: so if i understand correctly
[04:14] i'm planning to do a vagrant file with a bridge network created by virtual box,
[04:14] you're looking at doing this
[04:14] host => virtualbox bridge => lxc bridge => containers
[04:14] yeah!
[04:15] might work, i haven't tested it
[04:15] lazyPower: I would love to spend some time doing that
[04:15] today for me a pain of every day
[04:15] sebas5384: yeah the blog post i linked should help then
[04:15] *is a
[04:15] so long as you have enough addresses on your DHCP server to give out - you're going to be sharing the IP's with your parent network
[04:16] lazyPower: ;)
[04:16] when it sends that DHCP broadcast, it's looking at your router and skipping the faux dhcp server given with juju
[04:18] lazyPower: so in theory every container will have a new ip naturally?
[04:18] correct
[04:18] \o/
[04:18] if you're in a large organization, it's not a very elegant solution
[04:18] as you'll exhaust ip's pretty quickly in a large formation
[04:18] i'm going to work on that and let's see what happens
[04:18] in my local ?
[04:18] but for normal at-home use, unless you're running an IOT hive - you should be fine.
[04:19] it's only for local
[04:19] yep - if you think about it, every container is going to get an ip from your DHCP server's configured range.
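A sketch of the "host => virtualbox bridge" half of that chain, assuming Vagrant with the VirtualBox provider; the box name and the bridge device below are placeholders rather than anything from the conversation:

    # Vagrantfile (fragment) - put the VM straight onto the LAN so the
    # containers inside it can later be bridged out of the same interface
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"
      # public_network = a VirtualBox bridged adapter; the VM (and, once
      # bridged, its containers) get addresses from the LAN's DHCP server
      config.vm.network "public_network", bridge: "eth0"
    end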
[04:19] so say you and 5 co-workers are on wifi
[04:19] ok
[04:19] and you each deploy a 10-node cluster
[04:19] that's 50 ip's zapped, plus the 5 for your laptops - that's 55 ip's in one swoop
[04:20] so be careful about the recommendation and make sure it has a caveat attached to it - as i can see that in itself being problematic
[04:20] i was thinking of a private host
[04:20] with that being said - a VPN would eliminate the need for that ;)
[04:20] hehe
[04:21] lazyPower: yeah but sshuttle has some caveats
[04:21] sebas5384: i don't recommend sshuttle to anyone but my worst enemies @_@
[04:21] like slowness, and it gives kernel panics
[04:21] hehe
[04:22] lazyPower: I will give it a try, if it solves the problem for me and my team
[04:22] i will let you know :D
[04:22] lazyPower: the whole lxc thing is confusing me, mainly the networking portion. according to https://help.ubuntu.com/lts/serverguide/lxc.html you have a couple of recommended options, iptables with port forwarding, or bridges. it's recommended not to use the macvlan option.
[04:22] lazyPower: this part is especially confusing: "A NIC can only exist in one namespace at a time, so a physical NIC passed into the container is not usable on the host"
[04:23] designated: that's correct - when you bridge the ethernet device - you're essentially dedicating it to anything consuming that bridge
[04:23] i don't know what macvlan is - so that confuses me too
[04:24] designated: and iptables with port-forwarding is kind of a reverse-NAT workaround - while it works, it can be super complicated to set up and the source of many curse words and grey hairs before it's configured properly
[04:24] lazyPower: this is my scenario: multiple 10GbE bonded interfaces with vlan tags (bond0.10, bond0.20, etc...). If I'm understanding correctly, I have to create a bridge and map it to bond0.x for lxc to work properly?
[04:25] which would prevent the host from further use of bond0.x
[04:25] designated: correct - whichever ethernet device is the one that will serve as your 'public' and 'private' network interface is the one you want to bind to.
[04:25] and you won't want the host to have any use of that ethernet device outside of dedicating that traffic to your LXC containers - the host basically loses any control over it once it's bridged - it becomes a gateway into the network
[04:26] the only things you can really do with it at that point are edit /etc/network/interfaces to change the networking config, and edit the bridge settings - outside of that, the host won't use it for its own network comms for any reason. it's dedicated at that point.
[04:26] lazyPower: that seems like it puts me right back in the original predicament
[04:27] the whole point of using lxc is to put more services on the same physical nodes. according to the limitations of lxc with regards to networking, I fail to see the benefit
[04:28] designated: I feel like at this point it would be prudent to bring in someone like jamespage that has a background in doing openstack configuration and deployments.
[04:29] designated: i'm far from an openstack expert; the reason we blanket-recommend deploying to lxc on a host comes down to a few things: 1) isolated deployments 2) cleanup from a failed deployment is simple 3) density out of the box - it's possible to add more network interfaces to the lxc configuration but that's outside my scope of knowledge.
[04:31] lazyPower: unless I'm missing something it doesn't seem like using lxc in my situation is the best option.
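A hedged /etc/network/interfaces sketch of the bridge-over-bond0.x arrangement described above; the VLAN tag, bridge name and addressing are placeholders, and, as noted in the discussion, the host gives up direct use of bond0.10 once it is enslaved to the bridge:

    # /etc/network/interfaces (fragment) - dedicate bond0.10 to a bridge for LXC
    auto bond0.10
    iface bond0.10 inet manual          # no address on the enslaved VLAN interface

    auto br10
    iface br10 inet static
        address 10.20.0.5
        netmask 255.255.255.0
        bridge_ports bond0.10
        bridge_stp off
        bridge_fd 0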
[04:31] designated: it's 11:30 here though and i'm about to call it in for the day - I'd be happy to resume troubleshooting and attempt to get more eyes on this when i get in ~9am
[04:31] lazyPower: that would be awesome if you have the time. I'll be around :)
[04:31] lazyPower: thank you for all of your help so far.
[04:31] no worries - if i don't ping you, don't hesitate to ping me
[04:32] :)
=== fuzzy_ is now known as Ponyo
[06:43] ahoi lazyPower, I would like to orchestrate docker with juju
[08:40] aisrael: Great! Have kicked off Trusty and Utopic builds.
=== scuttlemonkey is now known as scuttle|afk
=== ev_ is now known as ev
=== nottrobin__ is now known as nottrobin_
=== nottrobin_ is now known as nottrobin
[14:55] When I'm adding a node I get "curl: (7) Failed to connect to streams.canonical.com port 443: No route to host"; it's downloading everything else. Can't figure out the problem...
[14:56] HorzA: do you have a proxy setup?
[14:56] no
[14:56] what environment is this?
[14:56] maas, and running sudo juju bootstrap --debug
[14:56] err. remove sudo
[14:57] I can fetch it from the maas server but the node isn't downloading it; running 2 network cards, one to the nodes and the other to the internet
[14:59] even downloaded to /var/lib/juju/tools/1.20.14-trusty-amd64/juju-1.20.14-trusty-amd64.tgz
[14:59] (on the maas server)
[14:59] HorzA: well, by default, all traffic is routed through the maas server
[15:00] is port 443 blocked, or can't it connect with https?
[15:01] HorzA: start a node in maas, then ssh into it and try to curl https://streams.canonical.com
[15:02] i'm trying to connect, but does maas/juju add a secret password to it?
[15:10] HorzA: nope. It will however load the ubuntu user on the host up with whatever credentials you have placed in maas
[15:11] and in terms of juju, it loads up a juju-specific ssh keypair that resides in ~/.juju/ssh
[15:16] just ran "juju sync-tools" and it worked :)
[15:37] oh, well that helps!
=== scuttle|afk is now known as scuttlemonkey
[16:44] lazyPower: ping
=== sebas538_ is now known as sebas5384
[16:46] sebas5384: pong
[16:47] lazyPower: I tried to make the network solution in the vagrant flow
[16:47] but I think I'm missing something
[16:48] because the container is failing to start :(
[16:48] That's not good
[16:48] yep ¬¬
[16:48] have something for me to look at? I don't promise i have the answer but i can take a look
[16:48] lazyPower: well
[16:48] i tried the instructions of your post
[16:49] but i think there are some missing steps
[16:49] because after setting up the bridge in the interfaces file and restarting the networking, the bridge isn't coming up
[16:50] sebas5384: disclaimer - that was written against 1.18
[16:50] something like ifup
[16:50] i think
[16:50] yeah I remember
[16:50] :P
[16:50] but
[16:50] but everything you need to edit is basically in /etc/lxc/
[16:51] what i'm talking about is not related to juju, yet...
[16:51] but who is getting the interface up?
[16:51] did you add a secondary network interface to the vbox or are you trying to hijack the default virtual eth device?
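For reference, the fix HorzA landed on at 15:16, assuming juju 1.x as used here; sync-tools mirrors the juju agent tarballs into the environment's storage, and exact behaviour can vary by provider and version:

    juju sync-tools --debug    # mirror the agent tools so nodes need not reach streams.canonical.com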
[16:51] and the ifup would be the bridge device
[16:52] I added a host-only private network
[16:52] like an eth1 with some other ip
[16:52] so I tried to create a bridge linked to eth1 but I had no luck with that
[16:52] eth1 was losing its ip range
[16:54] hmmm
[16:55] I've got a meeting coming up in 5 minutes - link me to a repository that you've got running and i'll take a look later today
[16:55] but without knowing what's going on it's hard to say
[16:58] lazyPower: thanks!! I will ping you later
[16:58] would be nice if we could talk, and show you what I have done
[16:59] lazyPower: we could do it together if you like the idea :)
[16:59] i've been trying to do this for a long time now ¬¬
[17:18] lazyPower: http://containerops.org/2013/11/19/lxc-networking/ pretty neat!
[18:45] who can I talk to about the percona-cluster charm?
[19:49] Anyone want to take a stab at this one? http://pastebin.com/ptmHHpiZ Is it possible this function is failing because I specified a bridge or bonded interface instead of a physical interface?
=== _thumper_ is now known as thumper
[22:41] The procedure described here: https://wiki.ubuntu.com/ServerTeam/OpenStackHA for installing MySQL (Percona XtraDB Cluster) is inconsistent with the charm's README (http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/percona-cluster/trunk/view/head:/README.md). The service units must be deployed one at a time, not all at once.
[22:42] Is there anyone knowledgeable enough with the percona-cluster and hacluster charms that could help troubleshoot why the servers are not clustering?
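A sketch of the one-unit-at-a-time pattern the percona-cluster README is said to call for in the 22:41 message, using juju 1.x command syntax; the service name "mysql" is illustrative, not taken from the conversation:

    juju deploy percona-cluster mysql    # first unit bootstraps the cluster
    # wait for the first unit to report 'started' before growing the cluster
    juju add-unit mysql
    juju add-unit mysql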