jam | rharper: marcoceppi: we don't support containers anywhere except for MaaS ATM, we're working on that code (since if you want to make a routable container you need to bridge the network but then also ask for an IP address to be assigned to the container, etc) | 04:01 |
---|---|---|
jam | rharper: marcoceppi: there is the "network-bridge" environment config, but I'm pretty sure that is only used in MaaS (in 1.20) and local | 04:06 |
=== uru is now known as urulama | ||
=== vladk|offline is now known as vladk | ||
=== CyberJacob|Away is now known as CyberJacob | ||
schegi | hey, if i deploy a service to an lxc container, how do i define the bridge interface the container should use? | 07:40 |
=== vladk is now known as vladk|offline | ||
=== vladk|offline is now known as vladk | ||
schegi | hey, if i deploy a service to an lxc container, how do i define the bridge interface the container should use? | 07:51 |
schegi | maas environment bootstraped not local | 07:51 |
=== psivaa-off is now known as psivaa | ||
schegi | is it enough to make the changes in /etc/lxc/default.conf and /etc/defaults/lxc-net? | 07:52 |
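For reference, the settings schegi is asking about usually boil down to pointing LXC at an existing bridge and disabling its own NAT'd lxcbr0. A minimal sketch, assuming the bridge is called br0 (on trusty the second file is /etc/default/lxc-net):

```bash
# Attach new containers to the existing bridge instead of lxcbr0.
sudo tee /etc/lxc/default.conf <<'EOF'
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
EOF
# Stop lxc from creating its own NAT'd lxcbr0 (the last assignment wins when the file is sourced).
echo 'USE_LXC_BRIDGE="false"' | sudo tee -a /etc/default/lxc-net
sudo service lxc-net restart
```

Note that Juju generates its own per-container config for containers it creates, so these host-level defaults may not be the whole story.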
=== CyberJacob is now known as CyberJacob|Away | ||
g0d_51gm4 | hi guys, this morning I've re-done all the steps from the beginning to build a vMaaS with Juju and I saw this warning on the vm node "http://imgur.com/rdRC9hm"; the error in the status of the juju environment is the same... | 08:10 |
jamespage | schegi, the maas provider should automatically setup a br0 -> eth0 | 08:25 |
jamespage | schegi, but it's very basic right now in the context of multiple networks | 08:26 |
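A sketch of what that typically looks like on the MAAS-deployed host (the exact stanza varies by Juju release):

```bash
# Inspect the bridge the provider set up: eth0 is left unconfigured and
# enslaved to a DHCP'd br0, roughly:
#   auto eth0
#   iface eth0 inet manual
#
#   auto br0
#   iface br0 inet dhcp
#       bridge_ports eth0
grep -A4 'br0' /etc/network/interfaces
brctl show br0     # confirm eth0 is attached to the bridge
```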
=== vladk is now known as vladk|offline | ||
=== vladk|offline is now known as vladk | ||
=== vladk is now known as vladk|offline | ||
=== vladk|offline is now known as vladk | ||
=== vladk is now known as vladk|offline | ||
=== vladk|offline is now known as vladk | ||
=== vladk is now known as vladk|offline | ||
=== vladk|offline is now known as vladk | ||
schegi | jamespage, is it possible to somehow force it to use an existing bridge or create one over another interface? | 09:36 |
jamespage | schegi, trying to figure that out | 09:36 |
schegi | jamespage, btw how to destroy containers from juju? | 09:37 |
jamespage | schegi, 'terminate-machine' | 09:37 |
jamespage | use the machine-id of the container you want to destroy | 09:37 |
schegi | just destroying the service doesn't destroy the related container | 09:38 |
schegi | at least in juju status it still exists | 09:38 |
jamespage | schegi, indeed it does not | 09:38 |
jamespage | schegi, you can do a "juju terminate-machine --force 0/lxc/2" for example | 09:38 |
schegi | ah ok nice | 09:38 |
jamespage | which will just rip the machine out of the service and terminate it, but leave the service deployed | 09:38 |
jamespage | but without units | 09:38 |
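Put together, the cleanup sequence being described looks roughly like this (machine and service ids are hypothetical):

```bash
juju destroy-service mysql                # removes the service and its units
juju terminate-machine --force 0/lxc/2    # the leftover container has to be terminated explicitly
juju status                               # confirm the lxc machine entry is gone
```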
=== psivaa is now known as psivaa-lunch | ||
schegi | jamespage, played around a little bit with lxc. i logged in to a machine i'd like to deploy a service in an lxc container to, changed the lxc settings in /etc/lxc/default.conf and /etc/defaults/lxc-net before deployment, and restarted all necessary services | 12:24 |
schegi | jamespage, it seems to work but i still have some connectivity issues i think. the container is stuck in pending state and if i look into the container log on the node it was deployed to i see lots of these http://pastebin.com/agr77dnK | 12:26 |
schegi | and if i try to ssh into container using juju ssh it states ERROR machine "15/lxc/4" has no internal address | 12:30 |
lazyPower | schegi: what did you change? | 12:54 |
lazyPower | schegi: i've modified lxc container bridge devices and network settings successfully for the local provider | 12:54 |
lazyPower | schegi: relevant post: http://blog.dasroot.net/making-juju-visible-on-your-lan/ | 12:55 |
schegi | ok, that's what i tried. the only difference is my bridge interface is not statically defined but gets its ip via dhcp | 12:57 |
lazyPower | shouldn't make a difference so long as thats being passed off to the bridged device | 12:58 |
lazyPower | and LXC knows about it when it gets fired up | 12:58 |
lazyPower | i did however have some issues with the interfaces racing | 12:59 |
lazyPower | i forget how i solved it, i think it was the order in /etc/network/interfaces | 12:59 |
lazyPower | sometimes the bridge device would collide with the physical adapter and collide | 12:59 |
lazyPower | er, yeah. i need more coffee... cant speak english... | 13:00 |
schegi | btw is this your blog? | 13:00 |
schegi | just to mention /etc/init.d/networking restart does not work anymore in trusty | 13:01 |
lazyPower | it is, the article was written prior to 14.04 | 13:01 |
lazyPower | so it was tested on 12.04 | 13:01 |
rharper | jam: marcoceppi: would be interested in testing something; in my case, I wouldn't need to assign an IP if the default network already runs DHCP; AFAICT, this is how it works with MaaS: bridge the host device, lxc runs a container on it, MaaS sees a new dhcp request with a new mac, gets an IP. This should work fine for networks that run DHCP. I suppose "expose" would require floating ip allocation, but I expect that the code is the same since that's | 13:02 |
rharper | needed for the hosts as well. | 13:02 |
schegi | lazyPower, still no connectivity ERROR machine "15/lxc/6" has no internal address on juju ssh 15/lxc/6 | 13:07 |
g0d_51gm4 | lazyPower: hi. have a look here http://imgur.com/rdRC9hm. it's the situation after rebooting the host machine! | 13:07 |
lazyPower | hmm | 13:08 |
lazyPower | schegi: this is when you're deploying to an lxc container in a host right? | 13:08 |
lazyPower | eg: deploy mediawiki --to lxc:15 | 13:08 |
schegi | right | 13:09 |
lazyPower | that article was for the local provider - i was unaware you were working on an agent-based installation | 13:09 |
lazyPower | that changes a bit - the network bridging there still is pioneer territory for me | 13:09 |
lazyPower | There was a talk on the mailing list about this a while back, about some WIP for a subordinate to handle the networking via relations | 13:09 |
lazyPower | g0d_51gm4: hang on, i doubt i'm going to know the answer to this though, as i stated yesterday, it really appears to be a configuration issue with how your maas/juju environment is set up | 13:10 |
lazyPower | g0d_51gm4: so, there's only so many things that can be wrong here. 1) IP Address changes of the bootstrap node. 2) SSH Keys have changed. 3) the juju services aren't running | 13:11 |
lazyPower | if you cannot juju ssh 0 - and it says permission denied, pubkey - you know its the ssh keys. | 13:11 |
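Rough checks for the three failure modes lazyPower lists (a sketch; the commands assume a trusty bootstrap node running upstart):

```bash
juju ssh 0 'echo ok'                        # "Permission denied (publickey)" points at the ssh keys
juju ssh 0 'sudo initctl list | grep juju'  # are the jujud upstart jobs running?
juju api-endpoints                          # has the recorded address of the bootstrap node changed?
```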
schegi | lazyPower can you point me to the mailing list or some archives? | 13:14 |
lazyPower | https://lists.ubuntu.com/mailman/listinfo/juju | 13:16 |
lazyPower | archive link is at the top, i'd suggest a signup since there's always activity on the list about the future of juju - its a great place to keep informed about whats coming at you in the next revision | 13:16 |
g0d_51gm4 | lazyPower: i was thinking, could the problem be the firewall? because every time i start the vMaaS environment i have to flush the table rules and clean them on the Region Cluster for juju to work. | 13:29 |
lazyPower | g0d_51gm4: could be | 13:30 |
g0d_51gm4 | lazyPower: the command juju ssh 0 gives me the prompt of the node. | 13:30 |
lazyPower | thats good! that means juju is there and responding | 13:30 |
lazyPower | i had not considered UFW to be honest | 13:31 |
lazyPower | and it stands to reason that it would be the blocker if its reloading your FW ruleset on reboot | 13:31 |
g0d_51gm4 | let me try to disable the ufw and | 13:31 |
g0d_51gm4 | reboot everything | 13:31 |
schegi | ok it's getting really strange. following situation: did a juju deploy --to lxc:15 mysql, juju status shows the container constantly in pending state, /var/log/juju/machine-15-lxc-X.log shows plenty of these, and juju ssh 15/lxc/X returns ERROR machine "15/lxc/6" has no internal address, but using ssh ubuntu@192.168.25.158 succeeds and the machine is pingable | 13:41 |
lazyPower | is it reachable by the bootstrap node though? | 13:41 |
schegi | http://pastebin.com/agr77dnK -->/var/log/juju/machine-15-lxc-X.log | 13:41 |
schegi | pingable and ssh-able from the maas master and all other nodes in the cluster. but not via juju only plain ssh to the ip of the container works | 13:42 |
lazyPower | interesting | 13:43 |
schegi | the whole cluster is also pingable from within the container | 13:44 |
data | lazyPower: I asked yesterday about the mixed deployment with manual mode. I have the problem now when following https://juju.ubuntu.com/docs/config-manual.html, that I get a "su: authentication failure" directly in the beginning | 14:01 |
data | from what I can tell from the debugging output, it thinks the user it logs in with is root | 14:01 |
lazyPower | the user you add manually will need to be a passwordless sudo user | 14:01 |
lazyPower | eg: the ubuntu user on most clouds | 14:02 |
data | it is | 14:02 |
data | http://pastebin.com/NRMH6D6N | 14:02 |
data | it is just not using sudo | 14:03 |
=== Ursinha is now known as Ursinha-afk | ||
lazyPower | data: what am i looking at here? verbose output from what juju is doing to add the unit? | 14:05 |
data | juju bootstrap --debug | 14:05 |
data | sorry, normally, I'd have pasted everything, but too many machine names etc. in there, that I don't want in logfiles | 14:06 |
lazyPower | its doing sudo | 14:07 |
lazyPower | if you look at the tail end of that command | 14:07 |
lazyPower | sudo "/bin/bash -c ' | 14:07 |
data | I am blind, thanks | 14:07 |
lazyPower | so make sure it is indeed a passwordless sudo user | 14:07 |
lazyPower | i've had that bite me before | 14:08 |
data | it is | 14:08 |
data | better yet, "someone" created /home/ubuntu on the machine, but it is owned by root:root | 14:09 |
data | is it possible to change the name of the user and the directory it is using? Because we have an ldap on that machine for users, and I'd hate to mess with the pam config | 14:12 |
g0d_51gm4 | lazyPower: i disabled ufw on boot and started the whole vMaaS environment. the node is working now and in juju status i see the node without errors | 14:15 |
lazyPower | g0d_51gm4: awesome news! | 14:16 |
g0d_51gm4 | lazyPower: a question now is... how do i have to set up ufw to use it | 14:16 |
g0d_51gm4 | without disabling it!!!! | 14:16 |
lazyPower | jcastro: http://askubuntu.com/questions/174171/what-are-the-ports-used-by-juju-for-its-orchestration-services - candidate for updating since we no longer use zookeeper | 14:18 |
g0d_51gm4 | a firewall appliance in front of the host machine is already present, but to implement some rules on the host as well, which ports do i have to open to work with juju? | 14:20 |
g0d_51gm4 | i just saw your answer now, sorry! | 14:20 |
g0d_51gm4 | do i just have to permit ssh connections from that vnetwork? | 14:22 |
lazyPower | g0d_51gm4: you'll need to expose port 22, and 17017 | 14:23 |
g0d_51gm4 | juju uses only an ssh tunnel to connect to MaaS, right? | 14:23 |
g0d_51gm4 | 17017 for which service? | 14:23 |
lazyPower | 17017 is the API port for juju | 14:23 |
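A sketch of keeping ufw enabled and opening those ports instead of disabling it. The chat quotes 17017; the usual Juju 1.x API default is 17070, so confirm the real port with `juju api-endpoints` and substitute it:

```bash
sudo ufw allow 22/tcp       # ssh
sudo ufw allow 17070/tcp    # Juju API port (use whatever api-endpoints reports)
sudo ufw enable
```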
jcastro | lazyPower, yeah, edit away | 14:24 |
lazyPower | jcastro: edits are in queue | 14:27 |
=== Ursinha-afk is now known as Ursinha | ||
g0d_51gm4 | lazyPower: ok thanks, i'll run some further tests. | 14:33 |
=== psivaa-lunch is now known as psivaa | ||
schegi | is it impossible to deploy the mysql charm to an lxc container and then relate it to a ceph cluster? i got some issues where during ceph relation-changed the charm tries to load the rbd module, which is not possible from within a container. adding the module to the kernel outside does not help, it still tries to load the module | 14:38 |
jamespage | schegi, no | 14:48 |
schegi | how do i get it working? in my setup it always fails because it tries to load the rbd module from within the container | 14:50 |
data | just to give a bit of feedback: it is working now. ssh auth failed due to supposedly wrong rights on .ssh (which they weren't), but it had cached the wrong user id, as the ubuntu user is local so the nfs server didn't know it; i created it there, of course first time around with the wrong user id... But all that mess is now harmonized | 14:51 |
schegi | http://pastebin.com/FJkSRnkY | 14:52 |
schegi | jamespage, this is from the /var/log/juju/unit-mysql-0.log on the node which actually runs the container | 14:52 |
dimitern | schegi, you can modprobe the rbd module on the host first, and it will be in the container as well | 15:14 |
dimitern | schegi, and you can also do lsmod | grep rbd first to check if it's there before modprobing it | 15:15 |
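The check-then-load sequence dimitern describes, run on the host rather than in the container (containers share the host kernel, so a module loaded on the host is visible inside them too):

```bash
lsmod | grep -q '^rbd' || sudo modprobe rbd
lsmod | grep rbd    # should now list the module, on the host and from inside the container
```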
jamespage | schegi, one sec | 15:17 |
dimitern | schegi, for an example, look in https://github.com/juju/juju/blob/master/provider/maas/environ.go#L535 | 15:17 |
jamespage | negronjl, lazyPower: hrmm - mongodb? | 15:17 |
lazyPower | jamespage: what about it? :) | 15:17 |
jamespage | schegi, HA for MySQL and RabbitMQ backed by ceph is not supported in LXC containers | 15:17 |
jamespage | lazyPower, http://paste.ubuntu.com/7803902/ | 15:18 |
jamespage | lazyPower, the last commit was quite a large delta - and breaks the relation_set calls | 15:19 |
lazyPower | service('stop', 'mongod') | 15:19 |
lazyPower | thats line 900 on my local copy - how would stopping the service cause failure? | 15:19 |
jamespage | lazyPower, that's not L900 on the charm in the charm store | 15:20 |
lazyPower | ah wait i see, i misread the stack trace | 15:20 |
lazyPower | it's down in the relation_set block trying to send the replsets | 15:20 |
jamespage | lazyPower, database_relation_joined | 15:21 |
lazyPower | jamespage: i see this, i've got a fix | 15:21 |
lazyPower | my branch is pretty dirty atm, let me try to cherrypick out the fixes | 15:22 |
lazyPower | its missing a None before the dict its sending | 15:22 |
jamespage | lazyPower, there are a whole heap of incorrect relation_set calls | 15:23 |
jamespage | lazyPower, suggestion - back out the last commit and test this better first.... | 15:23 |
jamespage | lazyPower, being explicit is better relation_settings={....} | 15:24 |
lazyPower | what do you mean being explicit? setting the relationship name vs using none? | 15:25 |
jamespage | schegi, sorry - there is a way to do this - the percona-cluster charm provides an active/active mysql option which does not rely on ceph - it's all userspace | 15:25 |
jamespage | lazyPower, just saying don't None out relation_id, just be explicit as to which parameter you are intending to pass - in this case its relation_settings... | 15:26 |
lazyPower | what i just fetched from the store is missing quite a bit of what i've done | 15:28 |
lazyPower | it's still got the gnarly retval block at the bottom | 15:28 |
* lazyPower sighs | 15:28 | |
lazyPower | what the hell | 15:29 |
jamespage | lazyPower, I stand by my recommendation to revert and try again | 15:29 |
lazyPower | jamespage: if reverted, your mongos instances will fail though when you go to deploy this cluster. | 15:34 |
lazyPower | when you relate mongos => configsvr | 15:34 |
jamespage | lazyPower, well right now I can't relate any clients to mongodb | 15:34 |
lazyPower | jamespage: what client are you using? i'll add one to the amulet test before i resub an MP | 15:38 |
jamespage | lazyPower, ceilometer | 15:39 |
lazyPower | thanks | 15:39 |
lazyPower | jamespage: and you've got mongodb deployed in standalone correct? | 15:42 |
jamespage | lazyPower, yes | 15:42 |
lazyPower | perfect. easy enough | 15:42 |
ctlaugh | Hi - I'm looking for some help with manual provisioning. I have a working MaaS / Juju environment that I have deployed Openstack on using all the charms. I have, however, a single node that I can not add into MaaS and want to manually provision using Juju. I'm having some trouble understanding what I need to do from here: https://juju.ubuntu.com/docs/config-manual.html | 16:14 |
ctlaugh | ^^ Is bootstrap-host the same host or a different host from what I actually want to provision? | 16:15 |
lazyPower | ctlaugh: the bootstrap node is what warehouses the juju api server. its responsible for orchestrating the environment | 16:16 |
lazyPower | ctlaugh: sounds like you want to add this additional host as a unit into your environment | 16:16 |
lazyPower | jcastro: did we ever get adding units manually to an environment docs published? | 16:16 |
ctlaugh | lazyPower: Does it need to be a different host from the one already running Juju? | 16:17 |
lazyPower | ctlaugh: i dont understand the question. | 16:19 |
ctlaugh | lazyPower: I used juju bootstrap (using the MaaS provider) so that node has all the Juju bits running on it. (I also deployed the juju-gui on another node but that's probably not important) | 16:20 |
ctlaugh | lazyPower: So, do I need to add another bootstrap node? | 16:21 |
lazyPower | nah, you can manually add hosts into an existing environment aiui | 16:21 |
lazyPower | thats why i'm pinging jcastro to find out if we ever published docs, that functionality landed a few revisions ago | 16:21 |
ctlaugh | Do I need to configure anything for the manual provider in environments.yaml, or just do an add-machine ssh:xxxx? | 16:22 |
ctlaugh | ok - thank you for your help -- sorry, mid-typing before I saw your last msg. | 16:22 |
lazyPower | np ctlaugh | 16:24 |
lazyPower | should just be add-machine ssh:xxx | 16:24 |
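A sketch of enlisting an existing server into the running environment this way; the host, user and machine number are hypothetical, and the target needs a passwordless-sudo user reachable over ssh:

```bash
juju add-machine ssh:ubuntu@10.0.0.42
juju status                        # note the new machine number, e.g. 16
juju deploy nova-compute --to 16   # place a charm on the manually added machine
```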
ctlaugh | lazyPower: I'll try that and see if it works | 16:25 |
ctlaugh | lazyPower: That seems to have worked. New machine added and in the process of deploying a charm to it. Thank you | 16:33 |
lazyPower | No problem :) glad its sorted | 16:33 |
ctlaugh | lazyPower: Well, the charm just failed to install, but I was expecting something like that to go wrong. But, at least Juju can see it. | 16:34 |
lazyPower | ctlaugh: if you need help debugging, dont hesitate to reach out | 16:34 |
ctlaugh | I'll go ahead and ask one question before I start digging into it myself... I just ran juju debug-log and got this: | 16:35 |
ctlaugh | unit-nova-compute-1: 2014-07-16 16:32:24 DEBUG juju.worker.rsyslog worker.go:75 starting rsyslog worker mode 1 for "unit-nova-compute-1" "" | 16:35 |
ctlaugh | unit-nova-compute-1: 2014-07-16 16:32:24 DEBUG juju.worker.logger logger.go:45 reconfiguring logging from "<root>=DEBUG" to "golxc=TRACE;unit=DEBUG" | 16:35 |
ctlaugh | unit-nova-compute-1: 2014-07-16 16:32:24 INFO install Traceback (most recent call last): | 16:35 |
ctlaugh | unit-nova-compute-1: 2014-07-16 16:32:24 INFO install File "/var/lib/juju/agents/unit-nova-compute-1/charm/hooks/install", line 5, in <module> | 16:35 |
ctlaugh | unit-nova-compute-1: 2014-07-16 16:32:24 INFO install from charmhelpers.core.hookenv import ( | 16:35 |
ctlaugh | unit-nova-compute-1: 2014-07-16 16:32:24 INFO install File "/var/lib/juju/agents/unit-nova-compute-1/charm/hooks/charmhelpers/core/hookenv.py", line 9, in <module> | 16:35 |
ctlaugh | unit-nova-compute-1: 2014-07-16 16:32:24 INFO install import yaml | 16:35 |
ctlaugh | unit-nova-compute-1: 2014-07-16 16:32:24 INFO install ImportError: No module named yaml | 16:35 |
lazyPower | ctlaugh: > 3 lines pastebin it please | 16:35 |
ctlaugh | unit-nova-compute-1: 2014-07-16 16:32:24 ERROR juju.worker.uniter uniter.go:486 hook failed: exit status 1 | 16:35 |
ctlaugh | Sorry | 16:35 |
lazyPower | which charm is this? | 16:35 |
lazyPower | nova-compute? | 16:36 |
ctlaugh | nova-compute. It looks like it's just missing a dependency. I didn't have this issue when deploying using MaaS nodes. | 16:36 |
lazyPower | series precise? | 16:36 |
lazyPower | i recall on precise you had to install python-yaml | 16:36 |
lazyPower | thats not the case on trusty | 16:36 |
ctlaugh | trusty-icehouse. Does the MaaS deployment process install dependencies automatically that I might need to install manually here? | 16:36 |
=== scuttle|afk is now known as scuttlemonkey | ||
lazyPower | that shouldn't have anything to do with it | 16:37 |
lazyPower | maas is booting cloud-images | 16:37 |
lazyPower | s/booting/serving up | 16:37 |
ctlaugh | lazyPower: I wasn't sure if the images (and the packages installed by default) might have dependencies already present that I wouldn't necessarily have on my manually-provisioned node. | 16:38 |
lazyPower | shouldn't be the case - is the node you manually added to your env a precise host? | 16:39 |
lazyPower | you should be able to do juju run --unit # "sudo apt-get install python-yaml" && juju resolved -r nova-compute/# -- and it'll at bare minimum get further along in the install process. | 16:40 |
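One concrete way to apply that fix, as a sketch (the unit name comes from the traceback above):

```bash
juju ssh nova-compute/1 "sudo apt-get install -y python-yaml"
juju resolved -r nova-compute/1    # retry the failed install hook
```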
ctlaugh | It's running trusty | 16:40 |
=== vladk is now known as vladk|offline | ||
ctlaugh | After installing python-yaml, it's getting a lot further now. | 16:42 |
ctlaugh | lazyPower: Thank you for your help. I'll reach out if I run into anything else I can't work through. | 16:43 |
lazyPower | anytime ctlaugh | 16:45 |
kirkland | marcoceppi: howdy | 17:04 |
kirkland | marcoceppi: I'm trying to submit my transcode-cluster bundle to the charm store | 17:04 |
kirkland | marcoceppi: according to: https://juju.ubuntu.com/docs/charms-bundles.html#sharing-your-bundle-with-the-community | 17:04 |
kirkland | marcoceppi: which says: juju bundle proof ../bundle-directory/ #default current working directory | 17:04 |
kirkland | marcoceppi: however, whenever I run: kirkland@x230:~/src/transcode/transcode/precise❯ juju bundle proof transcode-cluster | 17:05 |
kirkland | ERROR unrecognized command: juju bundle | 17:05 |
marcoceppi | kirkland: do you have the latest charm tools? | 17:06 |
marcoceppi | Juju charm version | 17:06 |
kirkland | marcoceppi: sure, I'm on 14.04 | 17:06 |
kirkland | ii charm-tools 1.0.0-0ubuntu2 all Tools for maintaining Juju charms | 17:07 |
marcoceppi | kirkland: the archives are severely behind | 17:07 |
kirkland | marcoceppi: well, that should be fixed ;-) | 17:07 |
kirkland | marcoceppi: you're telling me I'm going to need to slop up my laptop with a ppa? :-) | 17:07 |
marcoceppi | Well, we tried but missed the cut off | 17:08 |
marcoceppi | It's the best ppa around, ppa:juju/stable | 17:08 |
marcoceppi | kirkland: the latest version requires software not yet in the trusty archives. If you know how to get around that I'm all ears | 17:10 |
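A sketch of pulling a current charm-tools from that PPA and retrying the proof:

```bash
sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install charm-tools
juju charm version                    # should now report something newer than 1.0.0
juju bundle proof transcode-cluster
```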
kirkland | marcoceppi: dfdt? | 17:10 |
=== roadmr is now known as roadmr_afk | ||
=== CyberJacob|Away is now known as CyberJacob | ||
=== vladk|offline is now known as vladk | ||
=== roadmr_afk is now known as roadmr | ||
ziliu2020_ | I'm having some problems with the juju authorized-keys add command... It keeps saying my public key is invalid and can't be added... any reason? I'm sure my key is ok because it has been used in other places with no problem | 19:22 |
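A sketch of the usual invocation; the key has to arrive as a single quoted argument, and a line-wrapped or truncated key is a common cause of an "invalid key" complaint:

```bash
juju authorized-keys add "$(cat ~/.ssh/id_rsa.pub)"
juju authorized-keys list    # confirm it was recorded
```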
ctlaugh | jamespage: I am using the cinder charm and am trying to work through a problem installing on a system with only a single disk. I see your name all in the charm code and hoped I could ask you a quick question about it: what's the right way to specify using a loopback file on trusty/icehouse? I am putting block-device: "/srv/cinder.data|750G" in a config file, and I can see that the file gets created, but the loopback device and | 19:29 |
ctlaugh | volume group don't get created. | 19:29 |
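For reference, a sketch of how that option is usually passed (the file name is hypothetical; the value is the one from the question):

```bash
cat > cinder.yaml <<'EOF'
cinder:
  block-device: "/srv/cinder.data|750G"
EOF
juju deploy --config cinder.yaml cinder
# or, on an already-deployed service:
juju set cinder block-device="/srv/cinder.data|750G"
```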
=== alexisb is now known as alexisb_lunch | ||
cory_fu | I'm having an issue with a service I removed not going away entirely and causing issues when trying to re-deploy it: http://pastebin.ubuntu.com/7805234/ | 19:51 |
cory_fu | Any suggestions? | 19:51 |
cory_fu | On how to get rid of it | 19:52 |
lazyPower | cory_fu: is the machine its attached to destroyed? | 19:55 |
cory_fu | Yes, it's gone | 19:55 |
lazyPower | cory_fu: also, what about the relations? are any related services in error? | 19:55 |
lazyPower | 9/10 it's a related service that's trapped in error keeping it from going away | 19:55 |
cory_fu | It has no relations at the moment | 19:55 |
lazyPower | weird | 19:55 |
cory_fu | I'm full of lies | 19:55 |
cory_fu | Yes, related services are in error | 19:56 |
lazyPower | haha | 19:56 |
lazyPower | that's why | 19:56 |
lazyPower | if you resolve the services it's related to, it'll go away | 19:56 |
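A sketch of that sequence (the unit name is hypothetical): clear the errored related units and the half-removed service finishes dying on its own:

```bash
juju status                  # look for related units with agent-state: error
juju resolved wordpress/0    # repeat for each errored related unit
juju status                  # the removed service should now drop out
```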
cory_fu | Thanks | 19:56 |
cory_fu | Yep, that worked | 19:57 |
* lazyPower thumbs up | 19:57 | |
lazyPower | glad we got it sorted | 19:57 |
=== vladk is now known as vladk|offline | ||
=== alexisb_lunch is now known as alexisb |