=== mup_ is now known as mup
[00:51] marcoceppi: https://github.com/juju/docs/pull/181 - updated
[00:51] lazyPower: cool, I'm about to send an MP your way to fix callouts
[00:52] haha, sweet. cuz we just broke the callouts on this page with that refactor
[00:52] lazyPower: yeah, it's been broken in that we're not doing what we document
[00:53] there's a bunch of malformed callouts, i was going to patch them, but instead I'll just make the plugin better
[00:53] :heart:
[00:56] mm, sexy
[00:57] it's not perfect but it'll do
[00:57] lazyPower: https://github.com/juju/docs/pull/182
[00:58] lazyPower: what about arosales feedback?
[00:58] That was patched in too
[00:58] cool, we need to teach him to comment on the diffs :P
[00:59] he did
[00:59] his diffs were a revision behind yours i think?
[00:59] he commented directly on the file
[00:59] if you click on his comments they show them inline
[00:59] instead of the diff for the merge request
[00:59] o
[00:59] so it's hard to see when they've been fixed
[00:59] yesh
[00:59] like mine were hidden because they're out of date now
[00:59] ok hang on regenerating
[00:59] * lazyPower drum rolls
[01:00] \o/
[01:00] works
[01:00] lazyPower: also, for future reference, the bolding on the Note isn't required anymore
[01:00] and it can be any word as long as you use !!!!
[01:00] err !!!
[01:00] so "!!! Warning:" will work
[01:00] etc
[01:00] a true callout plugin
[01:00] what a time to be alive
[01:00] we are exploring the uncharted waters of writing your own generators
[01:00] wewt
[01:01] ok
[01:01] pull master, we're good to rock on this
[01:01] both have landed huzzah, they'll be live around 4a
[01:02] 6:00 UTC, fwiw
[01:03] marcoceppi: lazyPower heads up, I've got a guy looking into how to ingest the docs for use on the upcoming jujucharms.com rework.
[01:03] lazyPower: while we're romping around in the docs, we should define versioned branches
[01:03] orly?
[01:03] rick_h_: would that be helpful ^^?
[01:03] marcoceppi: lazyPower so we might have some requests coming in to help us wrap that around the site and keep it up to date. We'll be wanting to make sure we address keeping it up to date/etc
[01:03] rick_h_: cool, style wise or content wise?
[01:03] marcoceppi: style wise really
[01:04] marcoceppi: and figuring out how to present it in a way that fits with nav/search/etc
[01:04] we'll be looking to ingest the docs into elasticsearch and building a custom docs search box
[01:04] rick_h_: cool, it's pretty straightforward, there's one main template file then it's Markdown and CSS
[01:04] marcoceppi: k
[01:04] marcoceppi: lazyPower so if fabrice comes asking strange questions he's researching and doing some proof of concept stuff
[01:04] ack
[01:04] rick_h_: cool, sounds good
[01:05] thanks for the heads up
[01:09] marcoceppi: want to do a hangout to talk about the doc structure + versioning?
[01:09] right now?
[01:09] uhh
[01:09] when do you want to do it?
[01:09] we can do it now
[01:09] I was just asking
[01:09] i mean i can EOD whenever
[01:09] ye
[01:09] lets do it now while its resh
[01:09] *fresh
[01:09] join my favorite hangout url
[01:10] lazyPower: https://plus.google.com/hangouts/_/canonical.com/iwonderhowlongyoucanmakethesehangouturlsseemstherereallyisnolimitatall
[01:11] o/
[01:18] o/
[01:45] o7
[01:46] o5
[01:46] i am quadruple jointed
[01:51] it looks like a guy flexing
[01:52] http://i.imgur.com/4mYD13u.gif
[02:13] rick_h_: If you're still around… how are we looking for release?
I looked around at the PRs and the kanban board and everything looked pretty good.
[02:58] kadams54: everything is going well
[02:58] rick_h_: great!
[02:58] just got back from the coffee shop, right before I left functional charm tests passed on both precise/trusty
[02:58] so the release is about 5 commands away
[02:58] is `rm -rf /` one of them?
[02:58] hah, not quite
[03:02] Alright, it's off to bed for me. Here's to a smooth release.
[03:02] kadams54: night
[03:09] well the charms are up, waiting on ingest time
[03:09] will do one final QA, but the code looks good on LP
[03:10] oh heh, not the GUI channel is it
=== thumper is now known as thumper-afk
=== uru_ is now known as urulama
=== CyberJacob is now known as CyberJacob|Away
[09:56] * bloodearnest is wondering if all the charms that use #!/bin/bash hooks need updating...
[09:57] jamespage, any chance you could take a look at https://code.launchpad.net/~gnuoy/charms/trusty/keystone/next-lp-1355848/+merge/231529 ?
[09:58] Tribaal, do you have a moment to take a look at https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-multi-console-fix/+merge/233612 ?
=== fabrice is now known as fabrice|lunch
=== thumper-afk is now known as thumper
=== fabrice|lunch is now known as fabrice
=== jacekn_ is now known as jacekn
[12:16] gnuoy: sorry, was out for a moment. I can look, yes
[12:16] Tribaal, thanks.
=== urulama is now known as urulama-afk
=== SaMnCo changed the topic of #juju to: SaMnCo
[13:35] Hi, so what do I need to do to get juju status to show me floating IPs?
[13:36] jjo says it should have visibility, but I'm confused
[13:36] use-floating-ip is the only env setting I could find that matched, but I thought that was the old behaviour that made every unit get a floating IP
[13:59] Trusty juju tools mismatch on s3... http://paste.ubuntu.com/8425671/
[14:01] https://bugs.launchpad.net/juju-core/+bug/1373954
[14:01] Bug #1373954: juju-tools checksum mismatch for trusty on S3
[14:10] Can someone look at the Juju tool checksum mismatches blocking bootstrap? Now seen with Trusty/S3, Precise/Canonistack
[14:13] Checksums @ http://streams.canonical.com/juju/tools/streams/v1/com.ubuntu.juju:released:tools.json match content in http://streams.canonical.com/juju/tools/releases/ which I assume is the source for the mirrors that have issues
[14:46] marcoceppi: lazyPower: protips for commenting on diffs instead of the files?
[14:46] marcoceppi: lazyPower re: github
=== fabrice is now known as fabrice|family
[15:28] question: Let's say I deploy an env on AWS through my machine, is there any way another machine could communicate with the same environment and continue deploying charms?
[15:31] sebas5384: the other machine would need a copy of your ~/.juju directory
[15:31] specifically, the ~/.juju/environment_name.jenv
[15:31] hmmm specifically the .jenv right?
[15:32] holy s&*T
[15:32] hehe
[15:32] thanks lazyPower!
[15:32] np sebas5384
[15:49] Hi all. Can anyone give me some advice on a problem with juju failing to deploy new LXC machines on a host?
[15:50] For some unknown reason all new LXC containers intended to be deployed on machine "1" simply stay in pending state forever.
[15:50] This is what "juju stat" looks like http://pastebin.com/5b4D3d23
[15:52] all LXC containers from 1/lxc/22 onwards are pending. I've tried doing a "destroy-machine" and "destroy-machine --force" on them, which is how they ended up with "life: dead".
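(Editor's note: a minimal sketch of the cleanup johnmc describes above, assuming a juju 1.x client; the container ID is taken from his paste and would differ in another environment.)

    # Inspect the environment; the stuck containers show up under machine 1 as "pending".
    juju status

    # Ask juju to remove a stuck container (repeat for 1/lxc/22 onwards).
    juju destroy-machine 1/lxc/22

    # If the agent never came up, a plain destroy just waits; --force marks the
    # machine dead without waiting for the agent, which is what leaves "life: dead".
    juju destroy-machine --force 1/lxc/22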
[15:53] I've looked at log files on machine "1", but can't see anything to suggest why it's not actioning the request for a new LXC container. No errors, nothing.
[15:54] if anyone can suggest log files etc. I should be looking at, that would help a lot; right now, I've got nothing to go on.
[15:54] all existing LXC containers work fine, and come back after a complete physical host reboot.
[15:58] johnmc: it sounds like an lxc problem, if there are no juju errors. Juju just tells lxc what to do. Probably a good start would be to ssh into the base machine and do an lxc-ls and see what it spits out
[16:00] Hi nate. I've been on the base machine quite a bit, and not found anything. This is what I get from lxc-ls:
[16:00] root@controller-cam1:~# lxc-ls
[16:00] juju-machine-1-lxc-15 juju-machine-1-lxc-17 juju-machine-1-lxc-19 juju-machine-1-lxc-21 juju-machine-1-lxc-16 juju-machine-1-lxc-18 juju-machine-1-lxc-20
[16:00] just the healthy LXC containers
[16:03] johnmc: what version of juju are you on? There were a bunch of lxc issues that have gotten fixed in recent weeks
[16:03] I would have thought that there'd be at least some activity in /var/log/juju/ in response to a request for a new LXC. Nothing happens there at all.
[16:04] johnmc: there definitely should be some output at least, when doing add-machine
[16:04] juju -> 1.20.7-0ubuntu1~14.04.1~juju1
[16:07] rick_h_: Is there a more recent version (after 1.20.7) I should try?
[16:08] johnmc: nope, that should have the fixes I believe, so you're good there. Wanted to double check
[16:10] natefinch: As a test, I just did a "cp -a /var/log/juju /var/log/juju-old" on the base machine (1), followed by "juju add-machine lxc:1". I then waited a minute and ran "diff -uNr juju-old/ juju/" on the base machine. No logging had occurred.
[16:10] johnmc: I think the juju logs are in a .juju directory for local stuff? /me tries to double check.
[16:10] johnmc: so logs are in /home/rharding/.juju/local/log
[16:10] where rharding is your username on the host machine
[16:10] johnmc: and there's the all-machines.log along with per machine/unit logs there.
[16:11] rick_h_: it's maas, not local
[16:11] hatch: natefinch oh, sorry. /me totally missed that part
[16:12] * hatch pokes head in
[16:12] johnmc: if you do juju add-machine lxc:1 --debug --show-log ... what does it print out? It sounds like the commands aren't even making it to the server for some reason
[16:12] rick_h_: did you mean to ping someone else? :)
[16:13] arosales: for github reviews
[16:13] hatch: heh, I was starting to ping you for a different reason
[16:13] oh haha
[16:13] natefinch: http://pastebin.com/sJ49b6BY
[16:14] says it's created
[16:15] says it's falling back to 1.18!
[16:21] marcoceppi: ack, any protips?
[16:22] arosales: yeah, making a screenshot
[16:23] well, shutter keeps crashing
[16:23] when making comments, make them on the Files Changed tab
[16:23] that way they're associated with the merge request and not directly on the branch
[16:23] as the merge request is iterated upon with the feedback it'll close comments that are no longer up to date
[16:24] arosales: so at the bottom of https://github.com/juju/docs/pull/181 you can see how your comments are still shown but mine are marked as outdated
[16:24] even though chuck addressed all the comments in the merge
[16:36] natefinch: I've updated to the latest juju-core on my juju-agent machine (machine 0), and tried again. I get the same log output as before ( http://pastebin.com/UbzN3mbY ). Any idea where I go from here?
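(Editor's note: a sketch of the troubleshooting steps from the exchange above, assuming a juju 1.x MAAS environment; the per-machine log name follows the usual machine-<id>.log pattern and may differ in a given deployment.)

    # Retry the container request with client-side logging turned up,
    # exactly as natefinch suggests.
    juju add-machine lxc:1 --debug --show-log

    # On the host that should receive the container (machine 1 here),
    # watch the machine agent's log for any reaction to the request.
    juju ssh 1 'tail -f /var/log/juju/machine-1.log'

    # On the state server (machine 0), the aggregated all-machines.log
    # should also mention the new container once the request is acted on.
    juju ssh 0 'sudo tail -f /var/log/juju/all-machines.log'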
[16:38] natefinch: incidentally I can make as many LXC containers on machine "3" as I like. Only LXC creation attempts on machine "1" fail silently.
[16:48] Does anyone have any tips about where I should be looking for clues? Apart from the debug output I've shown ( http://pastebin.com/UbzN3mbY ), there is no logging evidence I can find that sheds light on this.
[16:49] My request for an LXC container is disappearing into a black hole.
[17:08] johnmc: oh crap
[17:09] johnmc: I think we have a 1.18<->1.21 bug
[17:09] johnmc: we just fixed it last niedbalski
[17:09] johnmc: last night... heh trying auto-complete mid-sentence is not exactly what I wanted
[17:10] johnmc: although that doesn't explain why it would work on one machine and not the other... nevermind.
[17:11] johnmc: if you do the same command with --debug --show-log on machine 3 (where it works), do you get different output
[17:11] ?
=== BradCrittenden is now known as bac
[18:21] natefinch: Sorry, had to go away for a bit. This is the output when creating on machine 3: http://pastebin.com/scsj1Jzb . The machine was successfully created in less than 2 minutes.
[18:25] johnmc: hmm.... weird
[18:27] natefinch: Looking at the two base systems, they both have /var/lib/juju/tools/1.18.4-trusty-amd64 on them. Same version on both.
[18:34] johnmc: when you do add-machine to lxc:3, do you get output in the all-machines log that you don't see when you do it for lxc:1? That seems like the most interesting place to start right now
=== roadmr is now known as roadmr_afk
[18:39] I am using the GUI for the first time to set up an environment I created, hosted on the bootstrap node. Did the drag of a zip, committed the changes and launched the instance. I had an install error and want to ssh in but I am getting an error. Could this be because I am using the GUI for launching the instance?
[18:41] natefinch: There's no all-machines.log on my workstation, but there is on my maas-agent machine. There is a block of log output *after* the new LXC machine was created on machine 3, but nothing that coincided with the actual request. This is the output: http://pastebin.com/fmX7bP9H
[18:42] natefinch: so, I suppose the behaviour is consistent across machines 1 & 3, in that you only get log output after a successful lxc machine creation.
[18:42] johnmc: sorry, yeah, I meant on the maas-agent machine. You should get logs about things like the API being accessed
[18:44] natefinch: I get no logging whatsoever in response to the request.
[18:44] natefinch: only success creates any log output
[18:46] nevermind I figured it out
=== CyberJacob|Away is now known as CyberJacob
[18:57] johnmc: I'm bringing up an environment of my own so I can double check some stuff, but realized my local environment was kinda messed up. one is coming up now.
[19:06] natefinch: something interesting is going on with machine 1. Right now when I run "juju stat" it says "agent-state: down" for machine 1. I had this before and rectified it by restarting the juju daemon on machine 1. Strange thing is that it was down earlier when I posted my initial stat output ( http://pastebin.com/5b4D3d23 ).
[19:07] natefinch: correction - it was *not* down earlier
[19:08] johnmc: yeah, something is wonky with that machine
[19:11] natefinch: It's the total lack of any log output that gets me. I've just restarted the jujud on machine 1, and it's up again. I then tried to create yet another lxc machine on there and see no new log output on either the maas-agent, or machine 1.
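(Editor's note: a small sketch of the agent restart johnmc mentions above. The upstart job name is an assumption based on the usual jujud-machine-<id> naming on trusty, so list the jobs first to confirm it.)

    # On machine 1 itself: find the machine agent's upstart job and restart it.
    sudo initctl list | grep jujud
    sudo service jujud-machine-1 restart   # hypothetical job name, confirm with the listing above

    # Back on the client, confirm machine 1 reports agent-state: started again.
    juju status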
[19:24] johnmc: can you run sudo grep 1/lxc/30 /var/log/juju/all-machines.log on that base maas-agent machine?
[19:32] natefinch: # sudo grep 1/lxc/30 /var/log/juju/all-machines.log grep: /var/log/juju/all-machines.log: No such file or directory
[19:32] natefinch: all-machines is only on maas-agent. Grepping the all-machines log there shows no log entries
[19:33] root@juju-agent:~# grep 1/lxc/30 /var/log/juju/all-machines.log \n root@juju-agent:~#
[19:34] natefinch: It's basically what I've been saying all along; the request for a machine vanishes without a trace.
[19:35] johnmc: on the same machine, if you grep for 3/lxc/10 do you get hits?
[19:42] natefinch: nothing for that either
[19:43] johnmc: ok, that's weird, since that's the one that's actually working
[19:44] As I said before, the lack of any logging pre-success is consistent.
[19:44] hello
[19:47] my juju says a machine is down, i never encountered it before, why is this happening?? and the unit on that machine says "agent-state: down , agent-state-info: (started)"
[19:56] themonk: machines go down sometimes.... is the actual machine down, or just juju?
[19:59] johnmc: obviously deploying to lxc containers on machine 1 used to work, since you have some working. Do you know when it stopped working?
[20:01] natefinch, themonk: if a local lxc env failed to tear down, the cruft left behind will prevent new machines from starting
[20:01] natefinch, themonk I know there is a juju plugin that will clean the machine
[20:01] * sinzui looks for doc about how to clean
[20:02] sinzui: johnmc is having a problem adding new lxc containers to maas instances. Not sure about themonk's problem yet
[20:02] sinzui: or rather... he can deploy lxc containers to one maas machine but not another
[20:03] I have no lxc maas experience johnmc but there were bugs about the network the machine might give the lxc
[20:04] johnmc: is this a production environment? Would you be willing to try upgrading the environment to 1.20.7?
[20:04] natefinch: The first time it failed I was trying to install a haproxy charm to both machines 1 & 3 at pretty much the same time (yesterday). Machine 3 succeeded, and nothing happened on machine 1. No idea why.
[20:04] themonk, http://pastebin.ubuntu.com/8427862/
[20:04] * sinzui looks for lxc maas bugs
[20:05] natefinch: In the end I realised I should have been using hacluster, but that realisation came long after the failure became apparent.
[20:05] natefinch: It's not in production yet. I'll happily try anything.
[20:06] natefinch: Is there a doc explaining what I need to do, or is there just a simple command?
[20:07] sinzui: juju upgrade-juju should just work, if he's on 1.18.4, right?
[20:08] sinzui: (and using a 1.20.7 client)
=== roadmr_afk is now known as roadmr
[20:09] johnmc: in theory "juju upgrade-juju" should just work, because you're running a newer stable version of the juju client, and the server is running an older stable version. But I'd wait for the go-ahead from sinzui. He's our QA head, and does about 1000x as many upgrades as I do.
[20:11] natefinch: thanks. I'll check back in a few minutes.
[20:17] natefinch, sinzui, thanks for the response :) machine is up, one thing is that it's in a virtualbox on a windows machine
[20:18] and it's on a laptop
[20:45] sinzui: Am I safe to run a "juju upgrade-juju" using a 1.20.7 client with 1.18.4 servers?
[20:45] johnmc, yes. We wouldn't release it if it wasn't safe
[20:49] sinzui: looks like I'm trapped due to my broken (pending) lxc machines.
Error message: ERROR some agents have not upgraded to the current environment version 1.18.4: machine-1-lxc-22, machine-1-lxc-23, machine-1-lxc-24, machine-1-lxc-25, machine-1-lxc-26, machine-1-lxc-27, machine-1-lxc-28, machine-1-lxc-29, machine-1-lxc-30, machine-1-lxc-31, machine-1-lxc-32
[20:49] sinzui: Those are the broken LXCs I've been discussing with natefinch.
[20:50] sinzui: they are the reason I'm trying the upgrade
[20:51] natefinch, you cannot upgrade while they are broken
[20:51] johnmc, Juju will queue the upgrade until all the machine and unit agents call home and report they are healthy
[20:52] johnmc, I know this because I have an arm64 instance that can go down. when it comes back, the upgrades take place
[20:53] johnmc, I am still reviewing the release. when I am done in about 30 minutes, I can return to the lxc maas bug list to find a solution to the problem
[21:02] natefinch and sinzui: I am literally having the same problem right now. Same versions, 1.18.4 and 1.20.7
[21:03] let me know if I can provide any details to help
[21:06] I lied, we are on 1.18.1 in this cluster
[21:13] bic2k, what do you see with "juju --show-log upgrade-juju"? The output will show which versions are available. We expect 1.18.4 for clouds with public access.
[21:13] bic2k, you can also be explicit about upgrade versions "juju --show-log upgrade-juju --version=1.18.4"
[21:14] sinzui: no matching tools available?
[21:14] bic2k, explicit within reason. Juju has some internal rules about what it thinks you can upgrade to and will look for a match
[21:15] bic2k, does your env use public streams such as streams.canonical.com?
=== JoseeAntonioR is now known as jose
[21:15] sinzui: good question, that's new terminology to me. Where do I look?
[21:16] bic2k, run "juju get-env tools-metadata-url". empty means default to streams.canonical.com
[21:16] sinzui: empty it is
[21:17] * bic2k isn't sure why Yoda said that
[21:17] bic2k, let's ask juju to be explicit about what it is doing. can you paste the output of
[21:17] juju metadata validate-tools
[21:18] sinzui: local tools are on 1.20.7 right now, the metadata command isn't around anymore right?
[21:18] bic2k, juju doesn't use your client unless you exploit a developer hack called --upload-tools
[21:20] sinzui: says the command is not found
[21:20] bic2k, are you on mac or windows?
[21:20] sinzui: mac
[21:20] bic2k, well good news for you. I promised to try to make the metadata plugin for mac today when I make the 1.20.5 binaries
[21:21] but that requires me to reboot my machine into os x
[21:21] bic2k, are you in a private cloud?
[21:21] sinzui: public?
[21:22] bic2k, aws, hp, azure, joyent?
[21:22] or your own openstack
[21:22] sinzui: lol, this is a cluster on aws. Been active since 0.12.1
[21:22] bic2k, okay, so I think we need to ask juju about the old urls...
[21:23] bic2k: juju get-env tools-url
[21:23] sinzui: empty
[21:23] want me to gist the whole env without secrets?
[21:26] bic2k, I am not sure. I just switched to my aws env and it shows sensible answers
[21:27] perhaps I got caught in some old upgrade issue with juju in the 0.18 series?
[21:27] sinzui: been looking in the bugs/issues but nothing related so far
[21:27] bic2k, once you run upgrade-juju, the action is queued. juju won't let us run it again to see its decision
[21:35] sinzui saves the day. Thanks again.
Solution was setting my tools-metadata-url to https://streams.canonical.com/juju/tools/
[21:36] bic2k, okay, then I think we have learned that old envs may not get the stream url updated during upgrades. 1.18 requires it and a bootstrap will set it
[21:36] bic2k, I will post this issue for others
[21:37] bic2k, so now I suspect "juju run" won't work for you because it is a known failure for upgrades
[21:37] * sinzui looks for bug/work around
[21:40] bic2k, I think you are affected by bug 1353681
[21:40] Bug #1353681: juju upgrade-juju --upload-tools to 1.18.4 fails to provide juju-run
[21:41] oh, and so is my juju-ci3 env it seems
[21:41] mup: close, but this was a public cloud and not using --upload-tools. No errors related to permissions either.
[21:41] bic2k: I apologize, but I'm pretty strict about only responding to known commands.
[21:42] sinzui: having an env to reproduce on can go a long way in figuring it out.
=== mup_ is now known as mup
=== CyberJacob is now known as CyberJacob|Away
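(Editor's note: to close out the bic2k thread, a minimal sketch of the fix he describes, assuming a juju 1.x client where set-env is available; the pinned version just mirrors sinzui's earlier suggestion and depends on the environment.)

    # Check whether the environment still points at an old or empty tools URL.
    juju get-env tools-metadata-url

    # Point it at the current public stream, as bic2k did.
    juju set-env tools-metadata-url=https://streams.canonical.com/juju/tools/

    # Then retry the upgrade, optionally pinning the target version.
    juju --show-log upgrade-juju --version=1.18.4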