[04:01] rharper: marcoceppi: we don't support containers anywhere except for MaaS ATM, we're working on that code (since if you want to make a routable container you need to bridge the network but then also ask for/assign an IP address for the container, etc)
[04:06] rharper: marcoceppi: there is the "network-bridge" environment config, but I'm pretty sure that is only used in MaaS (in 1.20) and local
=== uru is now known as urulama
=== vladk|offline is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
[07:40] hey, if i deploy a service to an lxc container, how do i define the bridge interface the container should use?
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
[07:51] hey, if i deploy a service to an lxc container, how do i define the bridge interface the container should use?
[07:51] maas environment, bootstrapped, not local
=== psivaa-off is now known as psivaa
[07:52] is it enough to make the changes in /etc/lxc/default.conf and /etc/defaults/lxc-net?
=== CyberJacob is now known as CyberJacob|Away
[08:10] hi guys, this morning I've re-done all the steps from the beginning to build a vMaaS with Juju and I saw this warning on the vm node "http://imgur.com/rdRC9hm"; the error in the status of the juju environment is the same...
[08:25] schegi, the maas provider should automatically set up a br0 -> eth0
[08:26] schegi, but it's very basic right now in the context of multiple networks
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
[09:36] jamespage, is it possible to somehow force it to use an existing bridge or create one over another interface?
[09:36] schegi, trying to figure that out
[09:37] jamespage, btw how to destroy containers from juju?
[09:37] schegi, 'terminate-machine'
[09:37] use the machine-id of the container you want to destroy
[09:38] just destroying the service doesn't destroy the related container
[09:38] at least in juju status it still exists
[09:38] schegi, indeed it does not
[09:38] schegi, you can do a "juju terminate-machine --force 0/lxc/2" for example
[09:38] ah ok nice
[09:38] which will just rip the machine out of the service and terminate it, but leave the service deployed
[09:38] but without units
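To make the terminate-machine exchange above concrete, a rough sketch of the deploy-and-clean-up flow (the service name and machine/container ids are illustrative, not taken from the log):

    juju deploy --to lxc:0 mysql             # place the service in a new container on machine 0
    juju destroy-service mysql               # removes the service, but the container still shows in juju status
    juju terminate-machine --force 0/lxc/2   # rip the container out as well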
=== psivaa is now known as psivaa-lunch
[12:24] jamespage, played around a little bit with lxc. if i log in to a machine i'd like to deploy a service in an lxc container on, change the lxc settings in /etc/lxc/defaults.conf and /etc/defaults/lxc-net before deployment, and restart all necessary services
[12:26] jamespage, it seems to work but i've still got some connectivity issues i think; the container gets stuck in pending state and if i look into the container log on the node it was deployed to i see lots of these: http://pastebin.com/agr77dnK
[12:30] and if i try to ssh into the container using juju ssh it states ERROR machine "15/lxc/4" has no internal address
[12:54] schegi: what did you change?
[12:54] schegi: i've modified lxc container bridge devices and network settings successfully for the local provider
[12:55] schegi: relevant post: http://blog.dasroot.net/making-juju-visible-on-your-lan/
[12:57] ok, that's what i tried. only difference is my bridge interface is not statically defined but gets its ip via dhcp
[12:58] shouldn't make a difference so long as that's being passed off to the bridged device
[12:58] and LXC knows about it when it gets fired up
[12:59] i did however have some issues with the interfaces racing
[12:59] i forget how i solved it, i think it was the order in /etc/network/interfaces
[12:59] sometimes the bridge device would collide with the physical adapter and collide
[13:00] er, yeah. i need more coffee... can't speak english...
[13:00] btw is this your blog?
[13:01] just to mention, /etc/init.d/networking restart does not work anymore in trusty
[13:01] it is, the article was written prior to 14.04
[13:01] so it was tested on 12.04
[13:02] jam: marcoceppi: would be interested in testing something; in my case, I wouldn't need to assign an IP if the default network already runs DHCP; AFAICT, this is how it works with MaaS: bridge the host device, lxc runs a container on it, MaaS sees a new dhcp request with a new mac, gets an IP. This should work fine for networks that run DHCP. I suppose "expose" would require floating ip allocation, but I expect that the code is the same since that's needed for the hosts as well.
[13:07] lazyPower, still no connectivity. ERROR machine "15/lxc/6" has no internal address on juju ssh 15/lxc/6
[13:07] lazyPower: hi. have a look here http://imgur.com/rdRC9hm. it's the situation after rebooting the host machine!
[13:08] hmm
[13:08] schegi: this is when you're deploying to an lxc container in a host right?
[13:08] eg: deploy mediawiki --to lxc:15
[13:09] right
[13:09] that article was for the local provider - i was unaware you were working on an agent-based installation
[13:09] that changes a bit - the network bridging there still is pioneer territory for me
[13:09] There was a talk on the mailing list about this a while back, about some WIP for a subordinate to handle the networking via relations
[13:10] g0d_51gm4: hang on, i doubt i'm going to know the answer to this though; as i stated yesterday, it really appears to be a configuration issue with how your maas/juju environment is set up
[13:11] g0d_51gm4: so, there's only so many things that can be wrong here. 1) IP Address changes of the bootstrap node. 2) SSH Keys have changed. 3) the juju services aren't running
[13:11] if you cannot juju ssh 0 - and it says permission denied, pubkey - you know it's the ssh keys.
[13:14] lazyPower can you point me to the mailing list or some archives?
[13:16] https://lists.ubuntu.com/mailman/listinfo/juju
[13:16] archive link is at the top; i'd suggest a signup since there's always activity on the list about the future of juju - it's a great place to keep informed about what's coming at you in the next revision
[13:29] lazyPower: i was thinking, what if the problem is the firewall? because every time i start the vMaaS environment i have to flush the table rules and clean it on the Region Cluster to work with juju.
[13:30] g0d_51gm4: could be
[13:30] lazyPower: the command juju ssh 0 gives me the prompt of the node.
[13:30] that's good! that means juju is there and responding
[13:31] i had not considered UFW to be honest
[13:31] and it stands to reason that it would be the blocker if it's reloading your FW ruleset on reboot
[13:31] let me try to disable the ufw and
[13:31] reboot everything
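A minimal sketch of the kind of host bridge setup being discussed, in /etc/network/interfaces form (interface names are assumptions; bridge-utils must be installed, and keeping the physical NIC in manual mode helps avoid the racing mentioned above):

    auto lo
    iface lo inet loopback

    # the physical NIC carries no address of its own
    auto eth0
    iface eth0 inet manual

    # the bridge enslaves eth0 and picks up its address via DHCP
    auto br0
    iface br0 inet dhcp
        bridge_ports eth0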
[13:41] ok it's getting really strange. following situation: did a juju deploy --to lxc:15 mysql; juju status shows the container constantly in pending state. the /var/log/juju/machine-15-lxc-X.log shows plenty of these, and juju ssh 15/lxc/X returns ERROR machine "15/lxc/6" has no internal address, but using ssh ubuntu@192.168.25.158 succeeds and the machine is pingable
[13:41] is it reachable by the bootstrap node though?
[13:41] http://pastebin.com/agr77dnK --> /var/log/juju/machine-15-lxc-X.log
[13:42] pingable and ssh-able from the maas master and all other nodes in the cluster. but not via juju, only plain ssh to the ip of the container works
[13:43] interesting
[13:44] the whole cluster is also pingable from within the container
[14:01] lazyPower: I asked yesterday about the mixed deployment with manual mode. I have the problem now, when following https://juju.ubuntu.com/docs/config-manual.html, that I get a "su: authentication failure" right at the beginning
[14:01] from what I can tell from the debugging output, it thinks the user it logs in with is root
[14:01] the user you add manually will need to be a passwordless sudo user
[14:02] eg: the ubuntu user on most clouds
[14:02] it is
[14:02] http://pastebin.com/NRMH6D6N
[14:03] it is just not using sudo
=== Ursinha is now known as Ursinha-afk
[14:05] data: what am i looking at here? verbose output from what juju is doing to add the unit?
[14:05] juju bootstrap --debug
[14:06] sorry, normally I'd have pasted everything, but there are too many machine names etc. in there that I don't want in logfiles
[14:07] it's doing sudo
[14:07] if you look at the tail end of that command
[14:07] sudo "/bin/bash -c '
[14:07] I am blind, thanks
[14:07] so make sure it is indeed a passwordless sudo user
[14:08] i've had that bite me before
[14:08] it is
[14:09] better yet, "someone" created /home/ubuntu on the machine, but it is owned by root:root
[14:12] is it possible to change the name of the user and the directory it is using? Because we have an ldap on that machine for users, and I'd hate to mess with the pam config
[14:15] lazyPower: i disabled the ufw on boot and started the whole vMaaS environment. the node now is working and in juju status i see the node without error
[14:16] g0d_51gm4: awesome news!
[14:16] lazyPower: a question now is... how do i have to set up the ufw to use it
[14:16] without disabling it!!!!
[14:18] jcastro: http://askubuntu.com/questions/174171/what-are-the-ports-used-by-juju-for-its-orchestration-services - candidate for updating since we no longer use zookeeper
[14:20] a firewall appliance in front of the host machine is already present, but to implement some rules also on the host, which ports do i have to open to work with juju?
[14:20] i see your answer just now, sorry!
[14:22] do i have to permit just ssh connections from that vnetwork?
[14:23] g0d_51gm4: you'll need to expose port 22, and 17017
[14:23] juju uses only an ssh tunnel to connect to MaaS, doesn't it?
[14:23] 17017 for which service?
[14:23] 17017 is the API port for juju
[14:24] lazyPower, yeah, edit away
[14:27] jcastro: edits are in queue
=== Ursinha-afk is now known as Ursinha
[14:33] lazyPower: ok thanks, i'll make further tests.
=== psivaa-lunch is now known as psivaa
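For the manual-provider "su: authentication failure" discussed above, a rough sketch of preparing the target host (the ubuntu user name and sudoers file name are assumptions; any user with passwordless sudo and a home directory it actually owns should do):

    # run on the machine to be manually provisioned
    sudo adduser --disabled-password --gecos "" ubuntu
    echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/90-juju-ubuntu
    sudo chmod 0440 /etc/sudoers.d/90-juju-ubuntu
    # /home/ubuntu and ~/.ssh must be owned by ubuntu, not root:root
    sudo install -d -o ubuntu -g ubuntu -m 0700 /home/ubuntu/.ssh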
[14:38] is it impossible to deploy the mysql charm to an lxc container and then relate it to a ceph cluster? i've got some issues: during ceph relation-changed the charm tries to load the rbd module, which is not possible from within a container. adding the module outside to the kernel does not help, it still tries to load the module
[14:48] schegi, no
[14:50] how to get it working? in my setting it always fails because it tries to load the rbd module from within the container
[14:51] just to give a bit of feedback: it is working now. ssh auth failed due to wrong rights on .ssh (which they weren't); it had cached the wrong user id, as the ubuntu user is local the nfs server didn't know it, so i created it there, of course the first time around with the wrong user id... but all that mess is now harmonized
[14:52] http://pastebin.com/FJkSRnkY
[14:52] jamespage, this is from the /var/log/juju/unit-mysql-0.log on the node which actually runs the container
[15:14] schegi, you can modprobe the rbd module on the host first, and it will be in the container as well
[15:15] schegi, and you can also do lsmod | grep rbd first to check if it's there before modprobing it
[15:17] schegi, one sec
[15:17] schegi, for an example, look in https://github.com/juju/juju/blob/master/provider/maas/environ.go#L535
[15:17] negronjl, lazyPower: hrmm - mongodb?
[15:17] jamespage: what about it? :)
[15:17] schegi, HA for MySQL and RabbitMQ backed by ceph is not supported in LXC containers
[15:18] lazyPower, http://paste.ubuntu.com/7803902/
[15:19] lazyPower, the last commit was quite a large delta - and breaks the relation_set calls
[15:19] service('stop', 'mongod')
[15:19] that's line 900 on my local copy - how would stopping the service cause failure?
[15:20] lazyPower, that's not L900 on the charm in the charm store
[15:20] ah wait i see, i misread the stack trace
[15:20] it's down in the relation_set block trying to send the replsets
[15:21] lazyPower, database_relation_joined
[15:21] jamespage: i see this, i've got a fix
[15:22] my branch is pretty dirty atm, let me try to cherry-pick out the fixes
[15:22] it's missing a None before the dict it's sending
[15:23] lazyPower, there are a whole heap of incorrect relation_set calls
[15:23] lazyPower, suggestion - back out the last commit and test this better first....
[15:24] lazyPower, being explicit is better: relation_settings={....}
[15:25] what do you mean being explicit? setting the relationship name vs using none?
[15:25] schegi, sorry - there is a way to do this - the percona-cluster charm provides an active/active mysql option which does not rely on ceph - it's all userspace
[15:26] lazyPower, just saying don't None out relation_id, just be explicit as to which parameter you are intending to pass - in this case it's relation_settings...
[15:28] this is missing quite a bit of what i've done, what i just fetched from the store
[15:28] it's still got the gnarly retval block at the bottom
[15:28] * lazyPower sighs
[15:29] what the hell
[15:29] lazyPower, I stand by my recommendation to revert and try again
[15:34] jamespage: reverted. your mongos instances will fail though when you go to deploy this cluster.
[15:34] when you relate mongos => configsvr
[15:34] lazyPower, well right now I can't relate any clients to mongodb
[15:38] jamespage: what client are you using? i'll add one to the amulet test before i resub an MP
[15:39] lazyPower, ceilometer
[15:39] thanks
[15:42] jamespage: and you've got mongodb deployed in standalone, correct?
[15:42] lazyPower, yes
[15:42] perfect. easy enough
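Going back to schegi's rbd-in-LXC problem, a rough sketch of the workaround and the alternative suggested above (the machine number is an example):

    # load rbd on the host that runs the container; modules loaded in the
    # host kernel are visible inside the container as well
    juju ssh 15 "lsmod | grep -q rbd || sudo modprobe rbd"
    # ceph-backed HA for MySQL/RabbitMQ is not supported inside LXC, so the
    # userspace active/active option is the percona-cluster charm instead
    juju deploy --to lxc:15 percona-cluster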
[16:14] Hi - I'm looking for some help with manual provisioning. I have a working MaaS / Juju environment that I have deployed Openstack on using all the charms. I have, however, a single node that I can not add into MaaS and want to manually provision using Juju. I'm having some trouble understanding what I need to do from here: https://juju.ubuntu.com/docs/config-manual.html
[16:15] ^^ Is bootstrap-host the same host or a different host from what I actually want to provision?
[16:16] ctlaugh: the bootstrap node is what warehouses the juju api server. it's responsible for orchestrating the environment
[16:16] ctlaugh: sounds like you want to add this additional host as a unit into your environment
[16:16] jcastro: did we ever get docs published for adding units manually to an environment?
[16:17] lazyPower: Does it need to be a different host from the one already running Juju?
[16:19] ctlaugh: i don't understand the question.
[16:20] lazyPower: I used juju bootstrap (using the MaaS provider) so that node has all the Juju bits running on it. (I also deployed the juju-gui on another node but that's probably not important)
[16:21] lazyPower: So, do I need to add another bootstrap node?
[16:21] nah, you can manually add hosts into an existing environment aiui
[16:21] that's why i'm pinging jcastro to find out if we ever published docs; that functionality landed a few revisions ago
[16:22] Do I need to configure anything for the manual provider in environments.yaml, or just do an add-machine ssh:xxxx?
[16:22] ok - thank you for your help - sorry, I was mid-typing before I saw your last msg.
[16:24] np ctlaugh
[16:24] should just be add-machine ssh:xxx
[16:25] lazyPower: I'll try that and see if it works
[16:33] lazyPower: That seems to have worked. New machine added and in the process of deploying a charm to it. Thank you
[16:33] No problem :) glad it's sorted
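A minimal sketch of the add-machine flow that worked above (the user, address, and machine id are made-up examples):

    juju add-machine ssh:ubuntu@10.0.0.50   # enlist an existing, ssh-reachable host
    juju status                             # the host appears with a new machine id, e.g. 16
    juju deploy --to 16 nova-compute        # then target it directly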
[16:34] lazyPower: Well, the charm just failed to install, but I was expecting something like that to go wrong. But at least Juju can see it.
[16:34] ctlaugh: if you need help debugging, don't hesitate to reach out
[16:35] I'll go ahead and ask one question before I start digging into it myself... I just ran juju debug-log and got this:
[16:35] unit-nova-compute-1: 2014-07-16 16:32:24 DEBUG juju.worker.rsyslog worker.go:75 starting rsyslog worker mode 1 for "unit-nova-compute-1" ""
[16:35] unit-nova-compute-1: 2014-07-16 16:32:24 DEBUG juju.worker.logger logger.go:45 reconfiguring logging from "=DEBUG" to "golxc=TRACE;unit=DEBUG"
[16:35] unit-nova-compute-1: 2014-07-16 16:32:24 INFO install Traceback (most recent call last):
[16:35] unit-nova-compute-1: 2014-07-16 16:32:24 INFO install File "/var/lib/juju/agents/unit-nova-compute-1/charm/hooks/install", line 5, in
[16:35] unit-nova-compute-1: 2014-07-16 16:32:24 INFO install from charmhelpers.core.hookenv import (
[16:35] unit-nova-compute-1: 2014-07-16 16:32:24 INFO install File "/var/lib/juju/agents/unit-nova-compute-1/charm/hooks/charmhelpers/core/hookenv.py", line 9, in
[16:35] unit-nova-compute-1: 2014-07-16 16:32:24 INFO install import yaml
[16:35] unit-nova-compute-1: 2014-07-16 16:32:24 INFO install ImportError: No module named yaml
[16:35] ctlaugh: > 3 lines, pastebin it please
[16:35] unit-nova-compute-1: 2014-07-16 16:32:24 ERROR juju.worker.uniter uniter.go:486 hook failed: exit status 1
[16:35] Sorry
[16:35] which charm is this?
[16:36] nova-compute?
[16:36] nova-compute. It looks like it's just missing a dependency. I didn't have this issue when deploying using MaaS nodes.
[16:36] series precise?
[16:36] i recall on precise you had to install python-yaml
[16:36] that's not the case on trusty
[16:36] trusty-icehouse. Does the MaaS deployment process install dependencies automatically that I might need to install manually here?
=== scuttle|afk is now known as scuttlemonkey
[16:37] that shouldn't have anything to do with it
[16:37] maas is booting cloud-images
[16:37] s/booting/serving up
[16:38] lazyPower: I wasn't sure if the images (and the packages installed by default) might have dependencies already present that I wouldn't necessarily have on my manually-provisioned node.
[16:39] shouldn't be the case - is the node you manually added to your env a precise host?
[16:40] you should be able to do juju run --unit # "sudo apt-get install python-yaml" && juju resolved -r nova-compute/# -- and it'll at bare minimum get further along in the install process.
[16:40] It's running trusty
=== vladk is now known as vladk|offline
[16:42] After installing python-yaml, it's getting a lot further now.
[16:43] lazyPower: Thank you for your help. I'll reach out if I run into anything else I can't work through.
[16:45] anytime ctlaugh
[17:04] marcoceppi: howdy
[17:04] marcoceppi: I'm trying to submit my transcode-cluster bundle to the charm store
[17:04] marcoceppi: according to: https://juju.ubuntu.com/docs/charms-bundles.html#sharing-your-bundle-with-the-community
[17:04] marcoceppi: which says: juju bundle proof ../bundle-directory/ # default current working directory
[17:05] marcoceppi: however, whenever I run: kirkland@x230:~/src/transcode/transcode/precise⟫ juju bundle proof transcode-cluster
[17:05] ERROR unrecognized command: juju bundle
[17:06] kirkland: do you have the latest charm tools?
[17:06] juju charm version
[17:06] marcoceppi: sure, I'm on 14.04
[17:07] ii charm-tools 1.0.0-0ubuntu2 all Tools for maintaining Juju charms
[17:07] kirkland: archives are severely behind
[17:07] marcoceppi: well, that should be fixed ;-)
[17:07] marcoceppi: you're telling me I'm going to need to slop up my laptop with a ppa? :-)
[17:08] Well, we tried but missed the cutoff
[17:08] It's the best ppa around, ppa:juju/stable
[17:10] kirkland: the latest version requires software not yet in the trusty archives. If you know how to get around that I'm all ears
[17:10] marcoceppi: dfdt?
=== roadmr is now known as roadmr_afk
=== CyberJacob|Away is now known as CyberJacob
=== vladk|offline is now known as vladk
=== roadmr_afk is now known as roadmr
[19:22] I'm having some problems with the juju authorized-keys add command... It keeps saying my public key is invalid and can't be added... any reason? I'm sure my key is ok because it has been used in other places with no problem,
[19:29] jamespage: I am using the cinder charm and am trying to work through a problem installing on a system with only a single disk. I see your name all over the charm code and hoped I could ask you a quick question about it: what's the right way to specify using a loopback file on trusty/icehouse? I am putting block-device: "/srv/cinder.data|750G" in a config file, and I can see that the file gets created, but the loopback device and volume group don't get created.
=== alexisb is now known as alexisb_lunch
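For the out-of-date charm-tools above, a sketch of the PPA route that was suggested (PPA and package names as given in the channel):

    sudo add-apt-repository ppa:juju/stable
    sudo apt-get update
    sudo apt-get install charm-tools        # newer than what the 14.04 archive ships
    juju bundle proof ../bundle-directory/  # defaults to the current working directory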
[19:51] I'm having an issue with a service I removed not going away entirely and causing issues when trying to re-deploy it: http://pastebin.ubuntu.com/7805234/
[19:51] Any suggestions?
[19:52] On how to get rid of it
[19:55] cory_fu: is the machine it's attached to destroyed?
[19:55] Yes, it's gone
[19:55] cory_fu: also, what about the relations? are any related services in error?
[19:55] 9/10 it's a related service that's trapped in error keeping it from going away
[19:55] It has no relations at the moment
[19:55] weird
[19:55] I'm full of lies
[19:56] Yes, related services are in error
[19:56] haha
[19:56] that's why
[19:56] if you resolve the services it's related to, it'll go away
[19:56] Thanks
[19:57] Yep, that worked
[19:57] * lazyPower thumbs up
[19:57] glad we got it sorted
=== vladk is now known as vladk|offline
=== alexisb_lunch is now known as alexisb
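A minimal sketch of clearing that state (the unit name is an example; resolve whichever related units juju status reports as errored):

    juju status               # look for related units stuck in an error state
    juju resolved mysql/0     # clear each errored related unit
    juju status               # the removed service should now finish going away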