thomiHas anyone heard about an issue deploying the postgres charm on trusty where it fails to read /etc/ssl/private/ssl-cert-snakeoil.key (as in http://paste.ubuntu.com/7739744/ ) ?01:55
thomiit doesn't happen for me, but alesage is hitting it, and I know others have had it happen as well...01:56
=== thumper-afk is now known as thumper
l1fequick question - if you bootstrap juju on maas, it will enlist a single node. shouldn't deploying a new service automatically enlist a new node from what's available from maas?03:33
josel1fe: it should mark another node as used, it's going to use one node per service unless you specify other stuff03:40
l1fejose: gotcha...and if you do a juju status with no services deployed03:41
l1feit would still always just show the juju core node03:41
l1feand nothing else from maas03:41
l1fei'm having a problem where it tries to deploy a new service...gets the node id from maas, and then gets stuck in pending03:42
l1fenothing in the machine-0 logs03:42
l1fethere isn't even a /var/log/juju03:42
l1feon the machine-1 or whatever node it's trying to deploy on03:42
josecan you please try destroying the service and re-deploying?03:42
l1fesure, i've destroyed the environment a few times and tried to redeploy03:43
l1fe(have to manually go in and remove juju-mongodb)03:43
l1fei'll try again03:43
joseweird thing03:43
l1feif i manually provision the boxes03:44
l1feeverything works great03:44
josemaybe someone around who has previous experience with maas will be able to help, but I'm no maas expert unfortunately :(03:44
l1fei'm on juju 1.19.403:44
l1fenot sure if it matters03:47
l1febut the last line i get in machine-0.log03:47
l1feis juju.provider.maas environ.go:304 picked arbitrary tools &{1.19.4-trusty-amd64 https://streams.canonical.com/juju/tools/releases/juju-1.19.4-trusty-amd64.tgz 181fac6e269696eb68f1ff1ff819815af5da1beafd2d963bd5165fb7befdee84 8052214}03:47
l1feand then nothing03:48
josehave you tried using 1.18? maybe it's a bug in 1.19 and we haven't noticed yet03:48
joseI have no maas environment to test in, otherwise I would be already checking03:48
l1fei'll try 1.18 :)03:48
=== vladk|offline is now known as vladk
l1fesame thing with 1.1803:58
josethen it's not a bug on juju03:59
joseor, well, it is, but I don't know how to troubleshoot it03:59
l1fehaha, i'm not sure either...unless there are some logs that i don't know about03:59
josethere *is* a log called all-machines.log in /var/log/juju for the bootstrap node03:59
josemaybe check there?04:00
l1feyeah, nothing04:01
=== vladk is now known as vladk|offline
=== urulama-away is now known as urulama
=== CyberJacob|Away is now known as CyberJacob
=== vladk|offline is now known as vladk
noodles775jamespage: Hey there. Are you able to fix up a ghost revision on the rabbitmq-server branch? If you try `bzr revno lp:~charmers/charms/trusty/rabbitmq-server/trunk` it'll error (at least does for me), the same error which also stops me from pulling that branch in a deploy.08:14
noodles775or jam1 might have the bzr foo to fix that too?08:15
jamespagenoodles775, I'll look08:15
jamespagenoodles775, wtf - I don't even know what that means08:18
noodles775jamespage: afaik, it means that the branch knows about a revision, but doesn't have the details. I don't know much more than that, other than to fix it in the past on our own branches, we've run: `bzr reconfigure --unstacked` on the branch, but I can't say whether that's OK here (I don't know if the charm branches are stacked for a reason etc.)08:20
jamespagenoodles775, I'll have to poke someone with more bzr knowledge that I have - mgz?08:22
noodles775jamespage: k, thanks. fwiw, I verified that I can reproduce and fix the error on my own branch like this: http://paste.ubuntu.com/7740886/08:24
jamespagenoodles775, it might be a side-effect of the fact that I pushed the same branch to both precise and trusty charm branches08:29
jamespagebut not 100% sure08:29
=== vila_ is now known as vila
=== urulama is now known as urulama-away
jamespagegnuoy, can you take a look at - http://bazaar.launchpad.net/~james-page/charm-helpers/network-splits/view/head:/charmhelpers/contrib/openstack/ip.py10:28
gnuoyjamespage, sure10:28
jamespagegnuoy, context is for the various combinations of https, clustered-ness and network configuration a charm might be in10:29
jamespagegnuoy, usage example - http://bazaar.launchpad.net/~james-page/charms/trusty/cinder/network-splits/view/head:/hooks/cinder_hooks.py#L18110:31
gnuoyjamespage, I thought the charms only supported one vip10:33
jamespagegnuoy, right now that is the case; this adds support for 'vip' to be a list10:33
jamespagesupporting multiple VIPs10:34
gnuoyjamespage, this makes the last one in the list the preferred one, is that what you want?10:34
jamespagegnuoy, the last one within the subnet is fine10:35
jamespagegnuoy, I'm making the assumption that people won't provide multiple VIPs on the same subnet10:36
jamespagewhich I think is OK10:36
gnuoyjamespage, lgtm10:36
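The logic under review lives in charmhelpers' contrib/openstack/ip.py; as a rough illustration of the behaviour gnuoy and jamespage agree on (iterate the VIP list, later matches within the unit's subnet win), here is a minimal sketch using Python's stdlib ipaddress module. `resolve_vip` is a hypothetical name, not the real helper:

```python
import ipaddress

def resolve_vip(vips, unit_cidr):
    """Return the VIP from the list that falls inside unit_cidr.

    Mirrors the behaviour discussed above: if several candidate VIPs
    match the subnet, the last one in the list wins; the assumption is
    that operators won't provide multiple VIPs on the same subnet.
    """
    network = ipaddress.ip_network(unit_cidr)
    chosen = None
    for vip in vips:
        if ipaddress.ip_address(vip) in network:
            chosen = vip  # later matches override earlier ones
    return chosen
```

With `vip` allowed to be a list, each unit picks the address relevant to its own network segment and ignores the rest.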
urulama-awayi've got a service (rabbitmq-server) with a status "life: dying" due to 'hook failed: "install"' and this service just can't be removed. how can i shoot it in the head?11:03
urulama-awaybtw, it's been dying for an hour now :D11:04
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== liam_ is now known as Guest45753
=== urulama-away is now known as urulama
=== psivaa is now known as psivaa-lunch
=== CyberJacob is now known as CyberJacob|Away
=== psivaa-lunch is now known as psivaa
lazyPowerurulama: a failed hook will trap all events until you resolve it13:19
lazyPoweryou can force deletion of the machine, then remove the service13:19
lazyPowerjuju destroy-machine # --force13:19
lazyPowerjuju destroy-service rabbitmq-server13:19
urulamalazyPower: tnx, did that in the end13:19
urulamalazyPower: what about removing containers ... i've got machine 1 (remote) with two containers ... is it possible to remove containers one by one? or is the same "remove-machine --force" the only option?13:24
rick_h__urulama: you should be able to remove single containers. kadams was working on doing that in the gui and you might be able to test it if you've got the gui there13:26
rick_h__urulama: though that requires a couple of extra steps13:27
l1fenot sure if anyone with maas+juju background is on now, but anyone have experience where juju bootstrap works, but when trying to deploy a service (juju-gui), it enlists a node, and gets stuck in pending state?13:27
l1feif i manually provision the node, however, it works fine13:28
l1fe(which kind of defeats the purpose of having maas)13:28
urulamarick_h__: will try it from gui. was playing too much with "oh, let's destroy the service before the container is brought up and redeploy/destroy a few more times"13:29
bacl1fe: sorry i have no maas experience.  will try to find someone who might help.13:29
urulamarick_h__: ended up with no services and a lot of containers in pending state13:29
rick_h__urulama: heh13:29
l1febac: thanks13:30
rick_h__urulama: ok, sounds like you're having fun :)13:30
urulamarick_h__: indeed :)13:30
urulamarick_h__: is quickstart the same as "create manual and then juju deploy juju-gui --to lxc:0"?13:32
bacrick_h__: aren't you on vacation?13:32
bacurulama: it does both of those things13:32
bacurulama: and it deploys a bundle if you give it one13:32
bacurulama: for LXC it does not --to the gui to 0 as it is not allowed13:33
urulamabac: does the last statement mean that it needs lxc for machine 0 because otherwise it is not allowed (as experience shows)?13:36
l1fequick question with regards to where to install juju-core to - if i'm in a maas environment, if I install juju-core on a node in the maas cluster, will that node theoretically still be able to be enlisted by juju?13:39
l1fe(that's if I ever get juju and maas to work together in the first place...)13:39
rick_h__bac: urulama yes, heading out now13:43
rick_h__bac: urulama was finishing hooking things up :)13:43
=== scuttlemonkey is now known as scuttle|afk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
Tribaalsinzui: Hi, I've hit https://bugs.launchpad.net/juju-core/+bug/1337340 this morning, and it smells like a race condition to me ("nothing" changed between it happening and a successful bootstrap) :(14:24
_mup_Bug #1337340: Juju bootstrap fails because mongodb is unreachable <landscape> <juju-core:New> <https://launchpad.net/bugs/1337340>14:24
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
sinzuiTribaal, I saw that same error in a test yesterday. I will look into your bug14:41
Tribaalsinzui: ack, thanks!14:45
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
l1feanyone on with maas+juju exp? having a hard time debugging why nodes are stuck in pending state15:52
=== vladk is now known as vladk|offline
=== roadmr is now known as roadmr_afk
lazyPowerl1fe: i've run into this before16:19
l1felazyPower: were you ever able to resolve this?16:20
lazyPowerl1fe: do you have any other enlistments running during the pxe boot phase16:20
l1feah, this isn't during the maas pxe boot phase16:21
l1fethis is while deploying charms in juju16:22
lazyPowerl1fe: what you're showing me is the node itself did not finish its enlistment phase when juju requests the machine16:23
lazyPowerl1fe: it's helpful to know where that's bailing out. These are NUCs, right? are you going straight hardware with the NUCs or are you using VMs on the NUCs?16:23
l1festraight hardware16:24
l1fethe nodes are properly "ready" in maas, and after running juju deploy, they go, correctly, to allocated to root16:24
l1feafter that, though, not sure what the heck is going on since no logs are created for juju on the new node16:24
l1feand nothing is logged on the already bootstrapped machine16:25
l1feif i manually provision the machine, though, everything is fine16:25
l1febut, that kind of defeats the purpose :)16:25
lazyPowerare you assigning the nodes before you run juju deploy?16:27
l1feno, only thing i did was bootstrap the one machine16:28
l1febased on the documentation, i kind of assumed juju+maas would handle everything else16:28
lazyPowerit should.16:28
lazyPowerso, do you see where the machine states pending in your log output in the askubuntu question?16:29
lazyPowerthat tells me something is happening during the enlistment / configuration phase that's gone awry. there is a -v flag you can pass to juju. can you run a deployment with the -v flag and pastebin the output?16:29
l1fenot sure how to get it out of "pending"16:29
l1feok, will do16:29
lazyPoweri want to isolate if its a juju issue, or a maas issue16:30
lazyPowerin the instances i've run into this problem - i had multiple deployments running and the pxe boot never finishes loading the image16:30
lazyPowerdestroying the unit and re-requesting usually solves the problem.16:30
lazyPowerit appears to be something related to my TFTP server on my host - i haven't dove real deep into it - so i'm crossing fingers this isn't the same problem :) because it's uncharted territory for me if it is.16:31
l1feit didn't look like the -v changed much in terms of output16:32
l1fesorry about that16:33
lazyPowerwhen you ssh into dynuc001.maas - is there anything in /var/log/juju?16:33
lazyPowerok so we aren't connecting then - that should be one of the first things it does, remote in and set up the scaffolding16:34
l1feif i go into auth.log on dynuc001.maas i don't even see that they try to login16:34
=== hatch__ is now known as hatch
l1fei wonder if it has anything to do with the bug where during the bootstrap process, everything bugs out unless i first go onto the node to manually create a /var/lib/juju/nonce.txt16:39
lazyPowerthat i don't know. my exposure with maas is limited to my VMAAS setup.16:40
lazyPoweri dont have enough hardware to do a proper maas16:40
l1fenot even sure what other logs i can look at16:41
l1fesince at this point, it's like all output just drops into the void16:42
jrwrenwhy does the mongodb for localdb use so much space? 1.2G on one host, 6.5G on another16:45
lazyPowerjrwren: depends on how many blocks it allocates to perform the storage.16:46
lazyPowerjrwren: typically, mongodb will allocate ~ 2gb of storage space to get started, after that growth is exponential - since it's BSON it's creating a blob on disk.16:46
lazyPowerand it doesn't take what it needs, it takes more than it needs for rapid growth so it stays snappy.16:47
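The growth pattern lazyPower describes matches MongoDB's classic MMAPv1 preallocation: data files double in size from 64 MB up to a 2 GB cap, so disk usage jumps in large steps regardless of how little data juju's state server actually holds. A rough sketch of the file sizes allocated for a given amount of data (ignoring the namespace file and journal):

```python
def mmapv1_file_sizes(total_needed_mb):
    """Return the MMAPv1-style data file sizes (in MB) preallocated to
    hold total_needed_mb of data: doubling from 64 MB, capped at 2048 MB.
    A sketch of the standard behaviour, not a measurement of juju itself."""
    sizes, size, allocated = [], 64, 0
    while allocated < total_needed_mb:
        sizes.append(size)
        allocated += size
        size = min(size * 2, 2048)
    return sizes
```

For example, storing ~1 GB of data preallocates 64+128+256+512+1024 MB, i.e. nearly 2 GB on disk - consistent with the "~2gb to get started" figure above.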
lazyPowerl1fe: it makes me wonder if juju can access the node.. have you tried ssh'ing into the node's ubuntu user using the ~/.juju/ssh/id_rsa key?16:48
jrwrenhow much does a tiny juju local need?16:48
l1feyup, juju can ssh in16:48
lazyPowerjrwren: that i don't know. I'd ping in #juju-dev as they have the document collection specifics16:48
l1felazyPower: i think maybe i have the answer - quick question re: lifecycle16:49
l1fewhen i do a juju deploy over maas, does it signal for the node to restart and then go into pxe boot to do its stuff16:49
lazyPoweri'm going to start from the top and try to split apart concerns by starting the line with who's doing the work16:54
lazyPowerjuju => Signals to maas it needs a machine16:54
lazyPowermaas => Powers on the machine and runs enlistment. The node reboots and completes enlistment by loading all of the prefs and ssh keys from the MAAS server and returns a READY signal to the juju bootstrap node (or client in terms of a bootstrap)16:54
lazyPowerjuju => Connects to the freshly provisioned server and loads scaffolding, packages, and the juju-client services16:55
lazyPowerdeployment begins.16:55
lazyPoweri may have missed something in there, but thats my understanding of the complete lifecycle during a requested deployment using juju/maas16:55
l1feok, so what happens if in maas, all the nodes are already enlisted and provisioned with something16:56
lazyPoweri haven't actually over-allocated - so i would *think* it would return an error since maas is aware of how many nodes it has16:56
lazyPowerbut it might sit in pending16:56
lazyPowerlet me try to spin up 20 mongodb nodes hang on16:57
l1fei guess i'm not sure how the lifecycle works if juju signals to maas to run enlistment, wouldn't the nodes have to already be enlisted for it to signal that it will deploy a service to it?16:57
l1feso when i do juju status, it comes back and says, i have a machine, it is called dynuc001.maas16:58
l1feand that it allocated that machine for the service juju-gui16:58
lazyPowerl1fe: well it's not looking good for juju erroring out on not having enough nodes in its pool16:58
lazyPowerits still pending though - there's time for it to fail out16:59
l1fegot it to work17:00
l1feson of a...17:00
lazyPowerl1fe: thats great news - what was the fix/workaround? and wrt what happens17:01
lazyPowerthey go forever pending.17:01
l1fealright, so basically, when you kept on saying that juju will go into pxe boot17:01
l1fesomething finally clicked17:01
l1fein order for me to do juju bootstrap properly on my boxes17:02
l1fei had to follow: https://bugs.launchpad.net/juju-core/+bug/131468217:02
_mup_Bug #1314682: juju bootstrap fails <bootstrap> <juju> <maas-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1314682>17:02
l1feand create my own nonce.txt17:02
l1fei think this is because the power controls for the NUCs in maas doesn't work as expected17:02
l1feand it never restarts the node properly, to allow it to pxe boot and provision everything properly17:03
l1feso it never really sunk in, that that was the lifecycle17:03
l1fei just figured it would ssh into the machine, and run the appropriate commands17:03
l1feso when i did juju deploy, i figured it would do the same thing17:03
l1feinstead, based on what you said17:03
l1feit's expecting to go into pxe to have everything configured there17:03
l1fethrough maas (which makes sense, since why else would you have maas)17:04
l1feso i went onto dynuc001.maas and forced a reboot17:04
l1felet pxe do its thing17:04
l1feand the status finally came back as started17:04
l1fehope that made sense...on the bright side, because of these problems, i've come to understand the whole process since i have to manually intervene where normal power management would have worked haha17:06
=== roadmr_afk is now known as roadmr
=== psivaa is now known as psivaa-afk
jrwrenit would be cool if local provider had an option to run machine 0 in an lxc17:32
lazyPowerl1fe: ah, ok :)17:47
lazyPowerjrwren: that's on the books somewhere - to treat the bootstrap node as its own LXC container instead of hulk-smashing it on the HOST17:48
jrwrenlazyPower: excellent!17:48
=== scuttle|afk is now known as scuttlemonkey
=== CyberJacob|Away is now known as CyberJacob
=== vladk|offline is now known as vladk
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
=== vladk is now known as vladk|offline
=== alexisb is now known as alexisb_vac
niedbalskithumper, you there?22:01
thumperniedbalski: yep22:02
niedbalskithumper, ok, i'll add a mock for that. not sure if this specific package has any mock library imported22:03
thumperniedbalski: I'm just looking in the juju testing library22:03
thumperthat may help22:03
thumperniedbalski: if you use "github.com/juju/testing" there is a PatchExecutable there22:05
thumperin cmd.go22:06
niedbalskithumper, cool, i'll use that then. thanks.22:09
thumperniedbalski: thanks for the patch22:09
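PatchExecutable in github.com/juju/testing is Go, but the underlying technique is language-agnostic: drop a fake executable at the front of PATH so the code under test invokes it instead of the real command. A minimal Python sketch of that idea (the function name here is hypothetical, and this is not the juju library's actual implementation):

```python
import os
import tempfile

def patch_executable(name, script_body):
    """Put a fake shell executable first on PATH so any code that shells
    out to `name` runs the fake instead of the real command."""
    tmpdir = tempfile.mkdtemp()
    path = os.path.join(tmpdir, name)
    with open(path, "w") as f:
        f.write("#!/bin/sh\n" + script_body + "\n")
    os.chmod(path, 0o755)  # mark the script executable
    # Prepend so the fake shadows any real binary of the same name.
    os.environ["PATH"] = tmpdir + os.pathsep + os.environ["PATH"]
    return path
```

In a test you would patch, say, a fake `mongod` that records its arguments to a file, run the code under test, and then assert on what was recorded - without ever touching the real binary.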
=== CyberJacob is now known as CyberJacob|Away
themonklazyPower, hi22:59
themonkhow to use juju-gui charm  locally?23:00
themonklike if i deploy juju-gui in my lxc container then how to load local charms using juju-gui?23:02

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!