/srv/irclogs.ubuntu.com/2014/09/25/#juju.txt

=== mup_ is now known as mup
00:51 <lazyPower> marcoceppi: https://github.com/juju/docs/pull/181 - updated
00:51 <marcoceppi> lazyPower: cool, I'm about to send an MP your way to fix callouts
00:52 <lazyPower> haha, sweet. cuz we just broke the callouts on this page with that refactor
00:52 <marcoceppi> lazyPower: yeah, it's been broken in that we're not doing what we document
00:53 <marcoceppi> there's a bunch of malformed callouts, I was going to patch them, but instead I'll just make the plugin better
00:53 <lazyPower> :heart:
00:56 <marcoceppi> mm, sexy
00:57 <marcoceppi> it's not perfect but it'll do
00:57 <marcoceppi> lazyPower: https://github.com/juju/docs/pull/182
00:58 <marcoceppi> lazyPower: what about arosales' feedback?
00:58 <lazyPower> That was patched in too
00:58 <marcoceppi> cool, we need to teach him to comment on the diffs :P
00:59 <lazyPower> he did
00:59 <lazyPower> his diffs were a revision behind yours, I think?
00:59 <marcoceppi> he commented directly on the file
00:59 <lazyPower> if you click on his comments they show them inline
00:59 <marcoceppi> instead of the diff for the merge request
00:59 <lazyPower> o
00:59 <marcoceppi> so it's hard to see when they've been fixed
00:59 <lazyPower> yesh
00:59 <marcoceppi> like mine were hidden because they're out of date now
00:59 <lazyPower> ok hang on, regenerating
00:59 * lazyPower drum rolls
01:00 <lazyPower> \o/
01:00 <lazyPower> works
01:00 <marcoceppi> lazyPower: also, for future reference, the bolding on the Note isn't required anymore
01:00 <marcoceppi> and it can be any word as long as you use !!!!
01:00 <marcoceppi> err, !!!
01:00 <marcoceppi> so "!!! Warning:" will work
01:00 <marcoceppi> etc
01:00 <marcoceppi> a true callout plugin
01:00 <marcoceppi> what a time to be alive
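[A sketch of the callout syntax being described above; the exact rendered output is an assumption, but the `!!!` marker followed by an arbitrary callout word (bolding no longer required) is as marcoceppi states, and resembles Python-Markdown's admonition-style callouts:]

```markdown
!!! Note: This paragraph renders as a Note callout.

!!! Warning: Any word after the !!! marker works, not just Note.
```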
01:00 <lazyPower> we are exploring the uncharted waters of writing your own generators
01:00 <lazyPower> wewt
01:01 <lazyPower> ok
01:01 <lazyPower> pull master, we're good to rock on this
01:01 <marcoceppi> both have landed, huzzah; they'll be live around 4a
01:02 <marcoceppi> 6:00 UTC, fwiw
01:03 <rick_h_> marcoceppi: lazyPower: heads up, I've got a guy looking into how to ingest the docs for use on the upcoming jujucharms.com rework.
01:03 <marcoceppi> lazyPower: while we're romping around in the docs, we should define versioned branches
01:03 <lazyPower> orly?
01:03 <marcoceppi> rick_h_: would that be helpful ^^?
01:03 <rick_h_> marcoceppi: lazyPower: so we might have some requests coming in to help us wrap that around the site. We'll be wanting to make sure we address keeping it up to date/etc
01:03 <marcoceppi> rick_h_: cool, style-wise or content-wise?
01:03 <rick_h_> marcoceppi: style-wise, really
01:04 <rick_h_> marcoceppi: and figuring out how to present it in a way that fits with nav/search/etc
01:04 <rick_h_> we'll be looking to ingest the docs into elasticsearch and building a custom docs search box
01:04 <marcoceppi> rick_h_: cool, it's pretty straightforward; there's one main template file, then it's Markdown and CSS
01:04 <rick_h_> marcoceppi: k
01:04 <rick_h_> marcoceppi: lazyPower: so if fabrice comes asking strange questions, he's researching and doing some proof-of-concept stuff
01:04 <lazyPower> ack
01:04 <marcoceppi> rick_h_: cool, sounds good
01:05 <marcoceppi> thanks for the heads up
01:09 <lazyPower> marcoceppi: want to do a hangout to talk about the doc structure + versioning?
01:09 <marcoceppi> right now?
01:09 <lazyPower> uhh
01:09 <lazyPower> when do you want to do it?
01:09 <marcoceppi> we can do it now
01:09 <marcoceppi> I was just asking
01:09 <lazyPower> i mean i can EOD whenever
01:09 <lazyPower> ye
01:09 <lazyPower> lets do it now while its fresh
01:09 <marcoceppi> join my favorite hangout url
01:10 <marcoceppi> lazyPower: https://plus.google.com/hangouts/_/canonical.com/iwonderhowlongyoucanmakethesehangouturlsseemstherereallyisnolimitatall
01:11 <aisrael> o/
01:18 <lazyPower> o/
01:45 <marcoceppi> o7
01:46 <lazyPower> o5
01:46 <lazyPower> i am quadruple jointed
01:51 <marcoceppi> it looks like a guy flexing
01:52 <lazyPower> http://i.imgur.com/4mYD13u.gif
02:13 <kadams54> rick_h_: If you're still around… how are we looking for release? I looked around at PRs and the kanban board and everything looked pretty good.
02:58 <rick_h_> kadams54: everything is going well
02:58 <kadams54> rick_h_: great!
02:58 <rick_h_> just got back from the coffee shop; right before I left, functional charm tests passed on both precise/trusty
02:58 <rick_h_> so the release is about 5 commands away
02:58 <kadams54> is `rm -rf /` one of them?
02:58 <rick_h_> hah, not quite
03:02 <kadams54> Alright, it's off to bed for me. Here's to a smooth release.
03:02 <rick_h_> kadams54: night
03:09 <rick_h_> well, the charms are up, waiting on ingest time
03:09 <rick_h_> will do one final QA, but the code looks good on LP
03:10 <rick_h_> oh heh, not the GUI channel is it
=== thumper is now known as thumper-afk
=== uru_ is now known as urulama
=== uru_ is now known as urulama
=== CyberJacob is now known as CyberJacob|Away
09:56 * bloodearnest is wondering if all the charms that use #!/bin/bash hooks need updating...
09:57 <gnuoy> jamespage, any chance you could take a look at https://code.launchpad.net/~gnuoy/charms/trusty/keystone/next-lp-1355848/+merge/231529 ?
09:58 <gnuoy> Tribaal, do you have a moment to take a look at https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-multi-console-fix/+merge/233612 ?
=== fabrice is now known as fabrice|lunch
=== thumper-afk is now known as thumper
=== fabrice|lunch is now known as fabrice
=== jacekn_ is now known as jacekn
12:16 <Tribaal> gnuoy: sorry, was out for a moment. I can look, yes
12:16 <gnuoy> Tribaal, thanks.
=== urulama is now known as urulama-afk
=== SaMnCo changed the topic of #juju to: SaMnCo
13:35 <Spads> Hi, so what do I need to do to get juju status to show me floating IPs?
13:36 <Spads> jjo says it should have visibility, but I'm confused
13:36 <Spads> use-floating-ip is the only env setting I could find that matched, but I thought that was the old behaviour that made every unit get a floating IP
13:59 <rcj> Trusty juju tools mismatch on s3... http://paste.ubuntu.com/8425671/
14:01 <rcj> https://bugs.launchpad.net/juju-core/+bug/1373954
14:01 <mup> Bug #1373954: juju-tools checksum mismatch for trusty on S3 <juju-core:New> <https://launchpad.net/bugs/1373954>
14:10 <rcj> Can someone look at the Juju tool checksum mismatches blocking bootstrap? Now seen with Trusty/S3 and Precise/Canonistack
14:13 <rcj> Checksums @ http://streams.canonical.com/juju/tools/streams/v1/com.ubuntu.juju:released:tools.json match the content in http://streams.canonical.com/juju/tools/releases/ which I assume is the source for the mirrors that have issues
14:46 <arosales> marcoceppi: lazyPower: protips for commenting on diffs instead of the files?
14:46 <arosales> marcoceppi: lazyPower: re: github
=== fabrice is now known as fabrice|family
15:28 <sebas5384> question: let's say I deploy an env on AWS from my machine; is there any way another machine could communicate with the same environment and continue deploying charms?
15:31 <lazyPower> sebas5384: the other machine would need a copy of your ~/.juju directory
15:31 <lazyPower> specifically, the ~/.juju/environment_name.jenv
15:31 <sebas5384> hmmm, specifically the .jenv, right?
15:32 <sebas5384> holy s&*T
15:32 <sebas5384> hehe
15:32 <sebas5384> thanks lazyPower!
15:32 <lazyPower> np sebas5384
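[The hand-off lazyPower describes can be sketched in shell. This is illustrative only: the /tmp layout, the `ec2` environment name, and the .jenv contents are made up, and on a real system the copy would be an scp of the .jenv from your ~/.juju to the other machine's ~/.juju:]

```shell
# Simulate two machines' home directories (hypothetical paths)
SRC=/tmp/jenv-demo/machine-a/.juju
DST=/tmp/jenv-demo/machine-b/.juju
mkdir -p "$SRC" "$DST"

# Stand-in for the .jenv that "juju bootstrap" generated on machine A
echo 'state-servers: ["10.0.0.1:17070"]' > "$SRC/ec2.jenv"

# The hand-off itself; between real machines this would be e.g.
#   scp ~/.juju/ec2.jenv otherhost:~/.juju/
cp "$SRC/ec2.jenv" "$DST/"

ls "$DST"
```

[With the .jenv in place (plus matching credentials), the second machine's juju client can address the same environment and continue deploying charms.]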
15:49 <johnmc> Hi all. Can anyone give me some advice on a problem with juju failing to deploy new LXC machines on a host?
15:50 <johnmc> For some unknown reason, all new LXC containers intended to be deployed on machine "1" simply stay in pending state forever.
15:50 <johnmc> This is what "juju stat" looks like: http://pastebin.com/5b4D3d23
15:52 <johnmc> all LXC containers from 1/lxc/22 onwards are pending. I've tried doing a "destroy-machine" and "destroy-machine --force" on them, which is how they ended up with "life: dead".
15:53 <johnmc> I've looked at log files on machine "1", but can't see anything to suggest why it's not actioning the request for a new LXC container. No errors, nothing.
15:54 <johnmc> if anyone can suggest log files etc. I should be looking at, that would help a lot; right now, I've got nothing to go on.
15:54 <johnmc> all existing LXC containers work fine, and come back after a complete physical host reboot.
15:58 <natefinch> johnmc: it sounds like an lxc problem, if there are no juju errors. Juju just tells lxc what to do. Probably a good start would be to ssh into the base machine and do an lxc-ls and see what it spits out
16:00 <johnmc> Hi nate. I've been on the base machine quite a bit, and not found anything. This is what I get from lxc-ls:
16:00 <johnmc> root@controller-cam1:~# lxc-ls
16:00 <johnmc> juju-machine-1-lxc-15 juju-machine-1-lxc-16 juju-machine-1-lxc-17 juju-machine-1-lxc-18 juju-machine-1-lxc-19 juju-machine-1-lxc-20 juju-machine-1-lxc-21
16:00 <johnmc> just the healthy LXC containers
16:03 <rick_h_> johnmc: what version of juju are you on? There were a bunch of lxc issues that have gotten fixed in recent weeks
16:03 <johnmc> I would have thought that there'd be at least some activity in /var/log/juju/ in response to a request for a new LXC. Nothing happens there at all.
16:04 <natefinch> johnmc: there definitely should be some output at least, when doing add-machine
16:04 <johnmc> juju -> 1.20.7-0ubuntu1~14.04.1~juju1
16:07 <johnmc> rick_h_: Is there a more recent version (after 1.20.7) I should try?
16:08 <rick_h_> johnmc: nope, that should have the fixes I believe, so you're good there. Wanted to double check
16:10 <johnmc> natefinch: As a test, I just did a "cp -a /var/log/juju /var/log/juju-old" on the base machine (1), followed by "juju add-machine lxc:1". I then waited a minute and ran "diff -uNr juju-old/ juju/" on the base machine. No logging had occurred.
16:10 <rick_h_> johnmc: I think the juju logs are in a .juju directory for local stuff? /me tries to double check.
16:10 <rick_h_> johnmc: so logs are in /home/rharding/.juju/local/log
16:10 <rick_h_> where rharding is your username on the host machine
16:10 <rick_h_> johnmc: and there's the all-machines.log along with per-machine/unit logs there.
16:11 <natefinch> rick_h_: it's maas, not local
16:11 <rick_h_> hatch: natefinch: oh, sorry. /me totally missed that part
16:12 * hatch pokes head in
16:12 <natefinch> johnmc: if you do "juju add-machine lxc:1 --debug --show-log"... what does it print out? It sounds like the commands aren't even making it to the server for some reason
16:12 <hatch> rick_h_: did you mean to ping someone else? :)
16:13 <marcoceppi> arosales: for github reviews
16:13 <rick_h_> hatch: heh, I was starting to ping you for a different reason
16:13 <hatch> oh haha
16:13 <johnmc> natefinch: http://pastebin.com/sJ49b6BY
16:14 <johnmc> says it's created
16:15 <johnmc> says it's falling back to 1.18!
16:21 <arosales> marcoceppi: ack, any protips?
16:22 <marcoceppi> arosales: yeah, making a screenshot
16:23 <marcoceppi> well, shutter keeps crashing
16:23 <marcoceppi> when making comments, make them on the Files Changed tab
16:23 <marcoceppi> that way they're associated with the merge request and not directly on the branch
16:23 <marcoceppi> as the merge request is iterated upon with the feedback, it'll close comments that are no longer up to date
16:24 <marcoceppi> arosales: so at the bottom of https://github.com/juju/docs/pull/181 you can see how your comments are still shown but mine are marked as outdated
16:24 <marcoceppi> even though chuck addressed all the comments in the merge
16:36 <johnmc> natefinch: I've updated to the latest juju-core on my juju-agent machine (machine 0), and tried again. I get the same log output as before ( http://pastebin.com/UbzN3mbY ). Any idea where I go from here?
16:38 <johnmc> natefinch: incidentally, I can make as many LXC containers on machine "3" as I like. Only LXC creation attempts on machine "1" fail silently.
16:48 <johnmc> Does anyone have any tips about where I should be looking for clues? Apart from the debug output I've shown ( http://pastebin.com/UbzN3mbY ), there is no logging evidence I can find that sheds light on this.
16:49 <johnmc> My request for an LXC container is disappearing into a black hole.
17:08 <natefinch> johnmc: oh crap
17:09 <natefinch> johnmc: I think we have a 1.18<->1.21 bug
17:09 <natefinch> johnmc: we just fixed it last niedbalski
17:09 <natefinch> johnmc: last night... heh, trying auto-complete mid-sentence is not exactly what I wanted
17:10 <natefinch> johnmc: although that doesn't explain why it would work on one machine and not the other... nevermind.
17:11 <natefinch> johnmc: if you do the same command with --debug --show-log on machine 3 (where it works), do you get different output?
=== BradCrittenden is now known as bac
18:21 <johnmc> natefinch: Sorry, had to go away for a bit. This is the output when creating on machine 3: http://pastebin.com/scsj1Jzb . The machine was successfully created in less than 2 minutes.
18:25 <natefinch> johnmc: hmm.... weird
18:27 <johnmc> natefinch: Looking at the two base systems, they both have /var/lib/juju/tools/1.18.4-trusty-amd64 on them. Same version on both.
18:34 <natefinch> johnmc: when you do add-machine to lxc:3, do you get output in the all-machines log that you don't see when you do it for lxc:1? That seems like the most interesting place to start right now
=== roadmr is now known as roadmr_afk
18:39 <weblife> I am using the GUI for the first time to set up an environment I created, hosted on the bootstrap node. Did the drag of a zip, committed the changes, and launched the instance. I had an install error and want to ssh in, but I am getting an error. Could this be because I am using the GUI for launching the instance?
18:41 <johnmc> natefinch: There's no all-machines.log on my workstation, but there is on my maas-agent machine. There is a block of log output *after* the new LXC machine was created on machine 3, but nothing that coincided with the actual request. This is the output: http://pastebin.com/fmX7bP9H
18:42 <johnmc> natefinch: so, I suppose the behaviour is consistent across machines 1 & 3, in that you only get log output after a successful lxc machine creation.
18:42 <natefinch> johnmc: sorry, yeah, I meant on the maas-agent machine. You should get logs about things like the API being accessed
18:44 <johnmc> natefinch: I get no logging whatsoever in response to the request.
18:44 <johnmc> natefinch: only success creates any log output
18:46 <weblife> nevermind, I figured it out
=== CyberJacob|Away is now known as CyberJacob
18:57 <natefinch> johnmc: I'm bringing up an environment of my own so I can double check some stuff, but realized my local environment was kinda messed up. One is coming up now.
19:06 <johnmc> natefinch: something interesting is going on with machine 1. Right now when I run "juju stat" it says "agent-state: down" for machine 1. I had this before and rectified it by restarting the juju daemon on machine 1. Strange thing is that it was down earlier when I posted my initial stat output ( http://pastebin.com/5b4D3d23 ).
19:07 <johnmc> natefinch: correction - it was *not* down earlier
19:08 <natefinch> johnmc: yeah, something is wonky with that machine
19:11 <johnmc> natefinch: It's the total lack of any log output that gets me. I've just restarted the jujud on machine 1, and it's up again. I then tried to create yet another lxc machine on there and see no new log output on either the maas-agent or machine 1.
19:24 <natefinch> johnmc: can you run "sudo grep 1/lxc/30 /var/log/juju/all-machines.log" on that base maas-agent machine?
19:32 <johnmc> natefinch: # sudo grep 1/lxc/30 /var/log/juju/all-machines.log -> grep: /var/log/juju/all-machines.log: No such file or directory
19:32 <johnmc> natefinch: all-machines is only on the maas-agent. Grepping the all-machines log there shows no log entries
19:33 <johnmc> root@juju-agent:~# grep 1/lxc/30 /var/log/juju/all-machines.log (no output)
19:34 <johnmc> natefinch: It's basically what I've been saying all along; the request for a machine vanishes without a trace.
19:35 <natefinch> johnmc: on the same machine, if you grep for 3/lxc/10, do you get hits?
19:42 <johnmc> natefinch: nothing for that either
19:43 <natefinch> johnmc: ok, that's weird, since that's the one that's actually working
19:44 <johnmc> As I said before, the lack of any logging pre-success is consistent.
19:44 <themonk> hello
19:47 <themonk> my juju says a machine is down; i never encountered this before. why is this happening?? and a unit on that machine says "agent-state: down, agent-state-info: (started)"
19:56 <natefinch> themonk: machines go down sometimes.... is the actual machine down, or just juju?
19:59 <natefinch> johnmc: obviously deploying to lxc containers on machine 1 used to work, since you have some working. Do you know when it stopped working?
20:01 <sinzui> natefinch, themonk: if a local lxc env failed to tear down, the cruft left behind will prevent new machines from starting
20:01 <sinzui> natefinch, themonk: I know there is a juju plugin that will clean the machine
20:01 * sinzui looks for doc about how to clean
20:02 <natefinch> sinzui: johnmc is having a problem adding new lxc containers to maas instances. Not sure about themonk's problem yet
20:02 <natefinch> sinzui: or rather... he can deploy lxc containers to one maas machine but not another
20:03 <sinzui> I have no lxc maas experience, johnmc, but there were bugs about the network the machine might give the lxc
20:04 <natefinch> johnmc: is this a production environment? Would you be willing to try upgrading the environment to 1.20.7?
20:04 <johnmc> natefinch: The first time it failed I was trying to install a haproxy charm to both machines 1 & 3 at pretty much the same time (yesterday). Machine 3 succeeded, and nothing happened on machine 1. No idea why.
20:04 <sinzui> themonk, http://pastebin.ubuntu.com/8427862/
20:04 * sinzui looks for lxc maas bugs
20:05 <johnmc> natefinch: In the end I realised I should have been using hacluster, but that realisation came long after the failure became apparent.
20:05 <johnmc> natefinch: It's not in production yet. I'll happily try anything.
20:06 <johnmc> natefinch: Is there a doc explaining what I need to do, or is there just a simple command?
20:07 <natefinch> sinzui: juju upgrade-juju should just work, if he's on 1.18.4, right?
20:08 <natefinch> sinzui: (and using a 1.20.7 client)
=== roadmr_afk is now known as roadmr
20:09 <natefinch> johnmc: in theory "juju upgrade-juju" should just work, because you're running a newer stable version of the juju client, and the server is running an older stable version of the server. But I'd wait for the go-ahead from sinzui. He's our QA head, and does about 1000x as many upgrades as I do.
20:11 <johnmc> natefinch: thanks. I'll check back in a few minutes.
20:17 <themonk> natefinch, sinzui: thanks for the response :) the machine is up; one thing is that it's in a virtualbox on a windows machine
20:18 <themonk> and it's on a laptop
20:45 <johnmc> sinzui: Am I safe to run a "juju upgrade-juju" using a 1.20.7 client with 1.18.4 servers?
20:45 <sinzui> johnmc, yes. We wouldn't release it if it wasn't safe
20:49 <johnmc> sinzui: looks like I'm trapped due to my broken (pending) lxc machines. Error message: ERROR some agents have not upgraded to the current environment version 1.18.4: machine-1-lxc-22, machine-1-lxc-23, machine-1-lxc-24, machine-1-lxc-25, machine-1-lxc-26, machine-1-lxc-27, machine-1-lxc-28, machine-1-lxc-29, machine-1-lxc-30, machine-1-lxc-31, machine-1-lxc-32
20:49 <johnmc> sinzui: Those are the broken LXCs I've been discussing with natefinch.
20:50 <johnmc> sinzui: they are the reason I'm trying the upgrade
20:51 <sinzui> natefinch, you cannot upgrade while they are broken
20:51 <sinzui> johnmc, Juju will queue the upgrade until all the machine and unit agents call home and report they are healthy
20:52 <sinzui> johnmc, I know this because I have an arm64 instance that can go down. When it comes back, the upgrades take place
20:53 <sinzui> johnmc, I am still reviewing the release. When I am done in about 30 minutes, I can return to the lxc maas bug list to find a solution to the problem
21:02 <bic2k> natefinch and sinzui: I am literally having the same problem right now. Same versions, 1.18.4 to 1.20.7
21:03 <bic2k> let me know if I can provide any details to help
21:06 <bic2k> I lied, we are on 1.18.1 in this cluster
21:13 <sinzui> bic2k, what do you see with "juju --show-log upgrade-juju"? The output will show which versions are available. We expect 1.18.4 for clouds with public access
21:13 <sinzui> bic2k, you can also be explicit about upgrade versions: "juju --show-log upgrade-juju --version=1.18.4"
21:14 <bic2k> sinzui: no matching tools available?
21:14 <sinzui> bic2k, explicit within reason. Juju has some internal rules about what it thinks you can upgrade to and will look for a match
21:15 <sinzui> bic2k, does your env use public streams such as streams.canonical.com?
=== JoseeAntonioR is now known as jose
21:15 <bic2k> sinzui: good question, that's new terminology to me. Where do I look?
21:16 <sinzui> bic2k, run "juju get-env tools-metadata-url". Empty means it defaults to streams.canonical.com
21:16 <bic2k> sinzui: empty it is
21:17 * bic2k isn't sure why Yoda said that
21:17 <sinzui> bic2k, let's ask juju to be explicit about what it is doing. Can you paste the output of
21:17 <sinzui> juju metadata validate-tools
21:18 <bic2k> sinzui: local tools are on 1.20.7 right now; the metadata command isn't around anymore, right?
21:18 <sinzui> bic2k, juju doesn't use your client unless you exploit a developer hack called --upload-tools
21:20 <bic2k> sinzui: says the command is not found
21:20 <sinzui> bic2k, are you on mac or windows?
21:20 <bic2k> sinzui: mac
21:20 <sinzui> bic2k, well, good news for you. I promised to try to make the metadata plugin for mac today when I make the 1.20.5 binaries
21:21 <sinzui> but that requires me to reboot my machine into os x
21:21 <sinzui> bic2k, are you in a private cloud?
21:21 <bic2k> sinzui: public?
21:22 <sinzui> bic2k, aws, hp, azure, joyent?
21:22 <sinzui> or your own openstack
21:22 <bic2k> sinzui: lol, this is a cluster on aws. Been active since 0.12.1
21:22 <sinzui> bic2k, okay, so I think we need to ask juju about the old urls...
21:23 <sinzui> bic2k: juju get-env tools-url
21:23 <bic2k> sinzui: empty
21:23 <bic2k> want me to gist the whole env without secrets?
21:26 <sinzui> bic2k, I am not sure. I just switched to my aws env, and it shows sensible answers
21:27 <bic2k> perhaps I got caught in some old upgrade issue with juju in the 0.18 series?
21:27 <bic2k> sinzui: been looking in the bugs/issues but nothing related so far
21:27 <sinzui> bic2k, once you run upgrade-juju, the action is queued. juju won't let us run it again to see its decision
21:35 <bic2k> sinzui saves the day. Thanks again. Solution was setting my tools-metadata-url to https://streams.canonical.com/juju/tools/
21:36 <sinzui> bic2k, okay, then I think we have learned that old envs may not get the stream url updated during upgrades. 1.18 requires it, and a bootstrap will set it
21:36 <sinzui> bic2k, I will post this issue for others
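[The workaround bic2k describes can be captured as a short command sequence; a sketch only, assuming `juju set-env` (the Juju 1.x command for changing an environment setting) and the stream URL and version values quoted above:]

```
# See what the environment currently uses; empty output means the default
juju get-env tools-metadata-url

# Point an older environment at the public streams explicitly
juju set-env tools-metadata-url=https://streams.canonical.com/juju/tools/

# Then retry the upgrade
juju --show-log upgrade-juju --version=1.18.4
```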
21:37 <sinzui> bic2k, so now I suspect "juju run" won't work for you, because it is a known failure for upgrades
21:37 * sinzui looks for bug/workaround
21:40 <sinzui> bic2k, I think you are affected by bug 1353681
21:40 <mup> Bug #1353681: juju upgrade-juju --upload-tools to 1.18.4 fails to provide juju-run <canonical-is> <run> <juju-core:Triaged> <https://launchpad.net/bugs/1353681>
21:41 <sinzui> oh, and so my juju-ci3 env it seems
21:41 <bic2k> mup: close, but this was a public cloud and not using --upload-tools. No errors related to permissions either.
21:41 <mup> bic2k: I apologize, but I'm pretty strict about only responding to known commands.
21:42 <bic2k> sinzui: having an env to reproduce on can go a long way in figuring it out.
=== mup_ is now known as mup
=== CyberJacob is now known as CyberJacob|Away

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!