=== jcw4 is now known as jcw4|Zzz
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== cliff-hm is now known as cliff-mtg
=== psivaa is now known as psivaa-afk
=== jcw4|Zzz is now known as jcw4
[15:01] can anyone offer some assistance with python-jujuclient?
=== psivaa-afk is now known as psivaa
[15:52] marcoceppi: howdy! can you give some attention to https://bugs.launchpad.net/charms/+bug/1342847 and https://bugs.launchpad.net/charms/+bug/1342843 ?
[15:52] <_mup_> Bug #1342847: please add transcode-cluster bundle to the charm store
[15:52] <_mup_> Bug #1342843: please include transcode charm in charm store
[15:53] kirkland: sure thing
=== marcoceppi changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: mbruzek && whit || News and stuff: http://reddit.com/r/juju
[16:00] anyone seen this and know how to work around it? http://pastebin.ubuntu.com/7856024/
[16:11] tvansteenburgh: is that the local provider in lxc?
[16:22] marcoceppi: thanks; it would be fantastic if that could land today-ish :-)
[16:22] marcoceppi: no, local provider in virtualbox
[16:26] happy sysadmin day!
[16:28] to you too, jose
[16:28] Happy Friday jose!
[16:28] thanks!
[16:33] kirkland the transcode README could be markdown; looks like you duplicated a step (step 5) as well
[16:33] kirkland, charm add readme will generate the markdown template that is recommended
[17:14] mbruzek: okay, great -- is that all that's blocking it?
[17:15] kirkland, no, it is just what I found so far
[17:15] mbruzek: Error: add is not a valid subcommand
[17:16] kirkland: try `juju charm add readme`?
[17:16] http://paste.ubuntu.com/7857501/
[17:22] mbruzek: can you just point me to your favorite README.md in the charm store, and I'll clone that?
[17:23] kirkland, in a meeting, just a second. jose has a few good ones.
[17:23] jose can you give kirkland a good example
[17:24] sure, owncloud will do, sec, grabbing the link
[17:25] thx
[17:25] kirkland: http://paste.ubuntu.com/7857553/ is the one given by 'charm add readme'
[17:26] also, I quite like the Chamilo one
[17:26] https://bazaar.launchpad.net/~charmers/charms/precise/chamilo/trunk/view/head:/README.md
[17:52] hey! o/
[17:52] :)
[17:53] is there any way that I can help to get the Drupal charm reviewed?
[17:53] :)
[17:54] we are planning to publicize how to deploy and scale Drupal using the charm
[17:55] also, being a recommended charm is really important, and the nice logo will appear on the charm too hehe
=== CyberJacob|Away is now known as CyberJacob
[18:01] mbruzek: jose: okay, the readme is now markdown; what's next?
[18:02] kirkland: which branch should I take a look at?
[18:02] sebas5384: it's definitely in the review queue, http://manage.jujucharms.com/tools/review-queue should be looked at shortly!
[18:02] * jose just joined for the readme stuff :)
[18:02] thanks for the update marcoceppi !
=== CyberJacob is now known as CyberJacob|Away
=== scuttle|afk is now known as scuttlemonkey
[19:35] jose: mbruzek: looks like both of you are deploying now?
[19:35] * jose is
[19:35] kirkland, I deployed and tried the first config example in the new README
[19:35] * kirkland is deploying now, too
[19:35] kirkland, looking at the logs on transcode/0 I see a loop
[19:36] mbruzek: yep, waiting until all worker nodes are done
[19:36] mbruzek: how many nodes?
[19:36] 4 total
[19:36] http://pastebin.ubuntu.com/7858501/
[19:36] It does not seem to exit from this repeating pattern
[19:37] looks like 0 is done.
[19:38] kirkland: I'll ping you in 5-10 when this is done :)
[19:38] mbruzek: can I see ls -alF /srv?
[19:39] http://pastebin.ubuntu.com/7858521/
[19:40] mbruzek: one moment, I'm trying to reproduce it now
[19:41] kirkland, Where is the output written to? I didn't see that in the new README. The old README told me about the web URL to hit, but not what the output file would be named.
[19:42] mbruzek: good point; I'll update that. it'll be in /srv/, which is actually served out by apache2 on every single node
[19:42] kirkland, I also copied a video from my phone to the system. that is what 20140408_*.mp4 is
[19:42] mbruzek: so you can just point a browser to any node
[19:42] mbruzek: cool, that should work
[19:42] kirkland, seems stuck on the first one.
[19:42] mbruzek: which bzr rev of the charm are you on?
[19:42] mbruzek: r8?
[19:43] kirkland, no, let me destroy and bootstrap again
[19:43] mbruzek: okay, I'm bootstrapping now
[19:49] mbruzek: deploying....
[19:49] mbruzek: total of 8 nodes
[19:50] kirkland, I have a total of 5 now
[19:50] mbarnett: k
[19:50] mbruzek: k; mbarnett: sorry :-)
[19:52] mbruzek: nearly up...
[19:52] kirkland, ran this command: $ juju set transcode input_url=http://download.blender.org/demo/old_demos/diditdoneit.mpg output_size=640x360
[19:53] jose: 25 nodes, is that right?
[19:53] kirkland: yep! 25 nodes
[19:53] mbruzek: I just ran the same
[19:54] mbruzek: downloading mpg...
[19:55] mbruzek: mine split into 8 parts, and they're each transcoding now
[19:56] kirkland, I don't see the mount on transcode/0. If I run sudo mount | grep srv I get nothing.
[19:56] mbruzek: did you add the relation?
[19:58] Yes I did. http://paste.ubuntu.com/7858678/
[19:58] mbruzek: is /srv/ mounted correctly on any of the other transcode/* units?
[19:58] mbruzek: I'm running right now against MAAS, on an Orange Box, and everything is mounted correctly
[19:59] mbruzek: perhaps a problem with NFS traffic in EC2?
[19:59] EC2 works fine
[19:59] kirkland, I am running local lxc containers
[19:59] mbruzek: oh, hmm, well, maybe nfs isn't the best shared fs for lxc?
[20:00] mbruzek: I've never tried nfs mounts inside of lxc
[20:01] * mbruzek is switching to hp cloud
[20:02] interestingly, I just had 1 out of my 8 nodes not mount NFS correctly
[20:03] 2014-07-25 19:52:48 INFO shared-fs-relation-changed + mount -t nfs -o rsize=8192,wsize=8192 10.14.100.6:/srv/data/transcode /srv
[20:03] 2014-07-25 19:52:48 INFO shared-fs-relation-changed mount.nfs: access denied by server while mounting 10.14.100.6:/srv/data/transcode
[20:03] 2014-07-25 19:52:48 INFO shared-fs-relation-changed + juju-log 'mount failed: /srv'
[20:03] looks like the nfs service wasn't up yet when I tried to make this relation
[20:05] kirkland, I saw a similar error in my log (which is now sadly deleted)
[20:05] kirkland, you have retry logic in the mount though, right?
[20:06] mbruzek: I'll need to look again
[20:06] for try in {1..3}; do
[20:07] mbruzek: okay; I hand-mounted nfs on the one bad node, did a juju set, and my transcode job finished immediately
[20:07] juju set transcode input_url=http://download.blender.org/demo/old_demos/diditdoneit.mpg
[20:07] jose: where are you stuck now?
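For reference, the retry logic kirkland quotes above ("for try in {1..3}; do") could look roughly like this inside a shared-fs-relation-changed hook -- a minimal sketch only: the mount command is taken from the log, but the relation key names, retry count, and sleep interval are illustrative assumptions, not the transcode charm's actual code:

    #!/bin/bash
    # Sketch: retry the NFS mount a few times before giving up.
    # The relation-get key names used here (private-address, mountpoint)
    # are assumptions; relation-get and juju-log are standard hook tools.
    set -eu

    nfs_server=$(relation-get private-address)
    nfs_export=$(relation-get mountpoint)
    mount_point=/srv

    for try in {1..3}; do
        if mount -t nfs -o rsize=8192,wsize=8192 \
                "${nfs_server}:${nfs_export}" "${mount_point}"; then
            juju-log "mounted ${mount_point} on attempt ${try}"
            exit 0
        fi
        juju-log "mount failed: ${mount_point} (attempt ${try}), retrying in 10s"
        sleep 10
    done

    # a non-zero exit puts the unit into an error state so the failure is visible
    juju-log "giving up on ${mount_point}"
    exit 1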
[20:07] kirkland: check the logs - it says something about invalid data
[20:08] jose: ah, okay, let me try that url
[20:08] jose: tbh, I haven't tried a .divx file yet; I've successfully done avi, mpg, mp4, mkv, ogg, mov
[20:09] so there's another thing there - if something fails on the other nodes it's gonna be stuck in a loop
[20:09] kirkland, does the basename get renamed or does the output have the same extension?
[20:09] jose: I suspect the install hook might need to add some additional codec packages
[20:09] and there's no way for me to touch and cut that loop
[20:09] mbruzek: the output will tack on the resolution, the codecs, and an .mp4 extension
[20:09] kirkland, ack
[20:10] mbruzek: so for mine: diditdoneit.mpg_640x360_x264_aac.mp4
[20:10] mbarnett: I'll add that to the readme
[20:10] mbruzek: ^
[20:11] jose: downloading .divx
[20:12] kirkland: what do you mean?
[20:12] jose: I'm trying to reproduce your failure
[20:12] oh, ok
[20:12] jose: and my run is currently wgetting your .divx example
[20:13] jose: I'm trying to figure out what additional packages might be necessary to support that
[20:13] ok, cool!
[20:14] jose: okay, same problem here
[20:16] mbruzek: I upped the retries in the shared-fs relation from 3 to 60
[20:16] jose and kirkland if the divx codec is not free it would be appropriate to list that in the known limitations.
[20:17] mbruzek: ack
[20:17] * mbruzek suspects divx is not free, remembers something about divx in the past
[20:19] so I have a unit of a service stuck in a dying state. Any way to get this guy to just die?
[20:20] automatemecolema, You could juju destroy-machine # with the one it is on
[20:20] automatemecolema, by chance were you using debug-hooks?
[20:20] jose: okay -- could you try a different input format?
[20:20] kirkland: I could, but I'll have to re-deploy - the loop cannot be cut
[20:20] jose: ?
[20:20] jose: sure you can
[20:20] just update the config
[20:21] there's a couple of killalls in there
[20:21] but the first instance of the config-changed hook would need to be terminated
[20:21] jose: this just worked fine for me: juju set transcode input_url=http://download.blender.org/demo/old_demos/diditdoneit.mpg
[20:21] jose: hmm
[20:22] kirkland, jose brings up a good point. When my local one was stuck I tried different config options and the old loop never exited.
[20:22] jose: okay, I can add a kill switch
[20:22] would be awesome
[20:22] jose: mbruzek: for now, you can just touch the DONE files
[20:22] while [ $(ls ${filename}.part*.${format}.DONE | wc -l) -lt $total_nodes ]; do
[20:23] mbruzek, I think we'll just delete the machine, I was just trying to avoid that
[20:23] kirkland: are you applying your change now? so I can branch and re-deploy
[20:23] mbruzek, my consultant was logged into debug-hooks at some point, but not sure when
[20:23] jose: yes, give me a minute
[20:23] np
[20:23] automatemecolema, In some cases the debug-hooks command will prevent a charm from completely dying
[20:24] automatemecolema, you can try juju resolved --retry charm/#
[20:24] jose: mbruzek: how would you like the kill switch to work? touch a file?
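One way the "touch a file" idea could plug into the wait loop kirkland pasted at 20:22 -- a minimal sketch; only the ls/wc test comes from the charm, the stop-file path and sleep interval are assumptions, and the config-option variant is what gets discussed next:

    # ${filename}, ${format} and ${total_nodes} are set earlier in the hook;
    # the stop-file path is illustrative.
    stop_file=/srv/.stop-transcode

    while [ "$(ls ${filename}.part*.${format}.DONE 2>/dev/null | wc -l)" -lt "$total_nodes" ]; do
        if [ -e "$stop_file" ]; then
            juju-log "stop file found, abandoning this transcode run"
            rm -f "$stop_file"
            exit 0
        fi
        sleep 5
    done

An operator could then cut a stuck run with something like juju ssh transcode/0 sudo touch /srv/.stop-transcode (path again assumed).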
[20:24] add a config option killall=true
[20:24] mbruzek, yea I tried that several times
[20:24] kirkland: probably not a kill switch
[20:24] but 'if retried for 10m then quit'
[20:25] kill_all_jobs:
[20:25]   type: boolean
[20:25]   description: "kill switch for terminating all jobs"
[20:25]   default: false
[20:26] another thingy there
[20:26] that way, you can set it to true, and then back to false, to re-run
[20:26] kirkland, the loop could check for a stop file of some kind. That would only be written if a new config-changed was trying to do something
[20:26] mbruzek: would two config-changed hooks run at the same time?
[20:26] actually, that would work
[20:26] jose, not sure... marcoceppi would know
[20:26] kirkland: the problem with that boolean option is that config-get values don't change until the new run
[20:27] mbruzek: I believe not, because juju is event based, and one event needs to finish for the other to run
[20:27] but let's wait for an answer :)
[20:27] marcoceppi, will a new config-changed run if the old one is still looping?
[20:28] kirkland, this is kind of a big change, but could you fork the process off to a script so the config-changed hook exits immediately?
[20:29] kirkland, once the charm gets all the information it needs, just call another bash script to do the dirty work? The loops in the new script could check for a stop flag
[20:29] mbruzek: is there an easy way for my charm to install a file?
[20:29] mbruzek: and a here-doc is not what I'm looking for
[20:29] yes.
[20:30] copy it from the charm directory
[20:30] kirkland, I have created a files/ directory within the charm.
[20:30] mbruzek: okay -- I'm game for that
[20:30] mbruzek: can you give me the cp rune?
[20:30] mbruzek: an example of where I'm copying it from?
[20:31] mbruzek: or, rather, how to call it? it doesn't need to be copied
[20:31] cp ${CHARM_DIR}/files/file.tar.gz /tmp/
[20:31] kirkland, you could just put another bash script in the hooks directory or the charm root
[20:32] source run_transcode_loop.sh arg1 arg2 arg3
[20:32] not source
[20:32] my mistake
[20:32] ./run_transcode_loop.sh would be in the charm dir
[20:32] ./hooks/run_transcode_loop.sh
[20:33] kirkland, is that what you are asking?
[20:34] kirkland, ${CHARM_DIR} == cwd in a hook.
[20:34] mbruzek: yeah, I have the change; testing locally now
[20:35] kirkland, I have an hp cloud bootstrapped and ready to test the next revision
[20:35] mbruzek: ack, let me put this through a quick local maas test
[20:35] kirkland, yep
[20:35] mbruzek: http://paste.ubuntu.com/7858922/
[20:35] mbruzek: maybe eyeball that for me?
[20:36] kirkland, I have lots of other stuff to work on in the meantime.
[20:38] kirkland, quick look over, I would suggest keeping the juju bits in the hook and just calling transcode with all the arguments that you compute in the hook.
[20:39] kirkland, but as the author feel free to override me on that.
[20:39] kirkland, I was thinking the transcode script could be juju-free but perhaps that is too big of a change here.
[20:41] mbruzek: more like this? http://paste.ubuntu.com/7858971/
[20:41] yep
[20:42] I am sure both will work.
[20:52] mbruzek: so I can just run ./transcode from within config-changed, if transcode is in the same dir as config-changed, right?
[20:53] kirkland, no, I believe the cwd is the charm root
[20:53] I believe the proper way is to type ./hooks/transcode
[20:53] iirc
[20:55] doh
[20:55] Did I mislead you, Homer?
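Pulling the suggestions above together, the tail end of hooks/config-changed might look roughly like this -- a sketch only, not the charm's actual code: the option names match the juju set commands earlier in the log, while the helper name follows the name floated above and the log path is illustrative:

    # compute everything juju-specific in the hook...
    input_url=$(config-get input_url)
    output_size=$(config-get output_size)

    # ...then hand off to a plain bash helper so this hook exits quickly.
    # Hooks run with the charm root as their cwd, so the relative path works.
    # A child started with a bare "&" can still go away when the hook exits,
    # so it is detached from the hook's stdio here; fully detaching from the
    # hook's process group may additionally need setsid or similar.
    nohup ./hooks/run_transcode_loop.sh "$input_url" "$output_size" \
        >> /var/log/transcode-run.log 2>&1 &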
[20:55] mbruzek: I'm just confused about the cwd
[20:56] OK
=== scuttlemonkey is now known as scuttle|afk
[21:04] Hey tvansteenburgh do you have a minute?
[21:05] yep
[21:05] kirkland: he means that the folder from which the scripts are run is your charm root
[21:05] bear in mind that your scripts are in the hooks folder
[21:06] jose: right -- I'm curious about the cwd when a hook runs
[21:06] mbruzek: yeah what's up?
[21:06] it's /var/lib/juju/agents/unit-charmname-number/charm/
[21:06] that is exported as a variable, called CHARM_DIR
[21:06] so every time you call $CHARM_DIR, it's taking you to that path
[21:06] I am trying to deploy a bundle with a local reference to the charm. Do you remember what we decided worked?
[21:07] mbruzek: set the 'branch' key for the service
[21:07] (to the local path of the charm)
[21:07] http://pastebin.ubuntu.com/7859181/
[21:08] I am getting a lot of key errors
[21:08] but that is using charm: /home/ubuntu/charms/trusty/postgresql-psql
[21:08] Will switch this to branch and try again
[21:12] tvansteenburgh, We had this problem solved, I thought it was charm: /path/to/charm but I just tested it with branch.
[21:12] http://pastebin.ubuntu.com/7859194/
[21:12] tvansteenburgh, Can you have a look and tell me where I am wrong?
[21:13] same KeyError with branch as with charm
=== sebas538_ is now known as sebas5384
[21:19] mbruzek: do you still have your local amulet changes?
[21:19] tvansteenburgh, yes I believe so, but that is a good question, how can I verify?
[21:20] cd to the amulet dir, run `git status`
[21:20] fuck
[21:21] modified: amulet/deployer.py
[21:21] it's not working anymore
[21:21] http://paste.ubuntu.com/7859278/
[21:21] kirkland, looking
[21:21] seems the script just dies now inside of that while loop
[21:21] after the sleep
[21:21] disappears from ps
[21:22] mbruzek: `git diff`
[21:22] tvansteenburgh, http://pastebin.ubuntu.com/7859281/
[21:23] ok, those have already been added to upstream
[21:23] transcode/0 is currently doing the download, and that's going, but it seems to exit right after the wget
[21:23] transcode/1-N just all die inside of that while loop
[21:26] I'm at a loss for ideas
[21:27] I'm feeling at this point the charm store is more trouble than it's worth
[21:39] kirkland, I am looking, trying to find what command is causing the hook to exit
[21:40] kirkland, could it be another config-changed event is coming by and killing this first transcode?
[21:41] mbruzek: that would be my best shot in the dark
[21:41] kirkland, what is dying, the transcode or the hook?
[22:02] is there a way to juju set a value to either a) an empty string or b) back to its default value?
[22:06] bloodearnest, does juju set charm key="" not work?
[22:06] mbruzek: nope
[22:06] nor = "''", = '""' or just =
[22:06] this is on 1.20
[22:09] I should say, juju set charm setting='""' does work, but it sets the value to ""
[22:11] bloodearnest, I was not aware that setting to an empty string threw an exception
[22:12] mbruzek: yeah, that's new with 1.20
[22:13] http://pastebin.ubuntu.com/7859627/
[22:13] bloodearnest, You see a stack trace like that?
[22:13] mbruzek: I've been seeing quite a few panics, although not always fatal (as in, juju deploy x shows a traceback, but the deploy has been kicked off)
[22:14] mbruzek: exactly
[22:14] That should not be that way.
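Going back to the local-bundle question above: the 'branch' approach tvansteenburgh describes would typically look something like this in a deployer bundle -- a sketch under assumptions: the deployment name and num_units are illustrative, the charm path is the one from the log, and the juju-deployer invocation assumes that tool is installed:

    # write a minimal bundle that points a service at a local charm checkout
    cat > local-test.yaml <<'EOF'
    local-test:
      series: trusty
      services:
        postgresql-psql:
          branch: /home/ubuntu/charms/trusty/postgresql-psql
          num_units: 1
    EOF

    juju-deployer -c local-test.yaml local-test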
[22:14] you should ping in #juju-dev
[22:14] mbruzek: kk
[22:14] bloodearnest, and open a bug against juju-core
[22:15] It should not panic like that
[22:22] mbruzek: https://bugs.launchpad.net/juju-core/+bug/1348829
[22:22] <_mup_> Bug #1348829: juju-core client panics with juju set empty string
=== sarnold_ is now known as sarnold
=== freeflying__ is now known as freeflying
=== Beret- is now known as Beret
=== Ursinha_ is now known as Ursinha
=== cjohnston_ is now known as cjohnston