=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== rvba` is now known as rvba
=== rogpeppe2 is now known as rogpeppe
[08:41] Hi all, interested in juju, but want to clarify something. Does juju provide any mechanism for running multiple units (processes) per computer? The docs read like a new EC2 instance (or equivalent) is spun up for each new unit.
[08:43] hi juju folks :)
[08:43] i was wondering if there's any tutorial online on getting some juju + linode magic going
=== ehg_ is now known as ehg
=== freeflyi1g is now known as freeflying
=== wedgwood_away is now known as wedgwood
[13:10] Just so I don't keep spinning my wheels, there's no way to get a charm's metadata programmatically without first branching the charm?
[13:20] marcoceppi: IIRC the charm store used to provide it via a simple GET
[13:20] SpamapS: thanks, I'll poke that code for a bit. See if it's still there
=== dannf` is now known as dannf
[13:33] SpamapS: looks like it exposes some high level data about the charm, but doesn't actually provide the full metadata for the charm. Thanks for the tip though
[13:34] marcoceppi: I used to run a nightly tarball of all the bzr branches ...
[13:34] marcoceppi: if you ask IS, they might still have my old crontab.. ;)
[13:34] marcoceppi: was quite useful for "wtf I need a full repo now"
[13:36] SpamapS: Yeah, I was thinking about tapping in to the tpaas service we have running for graph testing, since that has a local cache of all the charms, it'd just be nice to have it all in one central place
[13:37] hi, I was wondering if someone could help me with juju, I'm receiving the following error when trying to bootstrap to EC2: ERROR 301 Moved Permanently
[13:38] Tried recreating the environment a few times
[13:39] version is 0.7
[13:39] Hi tabibito, what version of juju are you using? (dpkg -l | grep juju)
[13:39] Beat me to it :)
[13:39] :)
[13:39] tabibito: Are you trying to deploy to a specific region? if so what's the region line look like in the environments file?
[13:40] currently it's looking like this: region: eu-west-1
[13:40] but even if I don't use a region, I get the same result
[13:40] * marcoceppi tries to bootstrap with 0.7
[13:41] Extra info: If I do a juju status, I see the following error: ERROR Cannot connect to environment: 301 Moved Permanently
[13:41] Traceback (most recent call last):
[13:41]   File "/usr/lib/python2.7/dist-packages/juju/providers/common/connect.py", line 43, in run
[13:41]     client = yield self._internal_connect(share)
[13:41] Error: 301 Moved Permanently
[13:43] tabibito: I'm not able to replicate with and without region on 0.7. Do you have an EC2_URL environment variable (or something similar) set in your shell?
[13:44] not that I know of … How can I check?
[13:45] tabibito: `env` but if you don't think you have it set then odds are you don't
[13:45] no don't see any EC2_URL
[13:45] strange
[13:46] tabibito: run "juju -v status" and paste the output to a pastebin please
[13:46] err, when you try to bootstrap too, if you haven't gotten a successful bootstrap yet
[13:48] bootstrap: http://pastebin.com/gTyH8HCg
[13:48] status: http://pastebin.com/MpDeK3vs
[13:50] tabibito: darn, I was really hoping it would show the URL it was trying to connect to
[13:50] environment (sanitized): http://pastebin.com/YDNWBdTZ
[13:50] I know right! … That would be easier to troubleshoot :)
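For reference, the fix tabibito eventually reports at [14:05] below amounts to pointing the environment at the region's own EC2 and S3 endpoints over HTTPS. A rough sketch of the relevant environments.yaml stanza for pyjuju 0.7 is below; the ec2-uri/s3-uri key names and the endpoint hostnames are recalled from the pyjuju EC2 provider docs rather than taken from this log, and the credential values are placeholders.

    environments:
      ec2-eu-west:
        type: ec2
        region: eu-west-1
        # region-specific endpoints, over HTTPS
        ec2-uri: https://ec2.eu-west-1.amazonaws.com
        s3-uri: https://s3-eu-west-1.amazonaws.com
        access-key: YOUR-AWS-ACCESS-KEY
        secret-key: YOUR-AWS-SECRET-KEY
        control-bucket: some-unique-bucket-name
        admin-secret: some-admin-secret
        default-series: precise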
[13:51] tabibito: shot-in-the-dark, what if you used precise as your default series (most all charms are written for precise anyway, and we typically recommend deploying to LTS)
[13:51] same thing
[13:52] Tried with precise, quantal ...
[13:52] I'm using 13.04 now, but I also tried with 12.04… results are the same, so it must be something I do… or :)
[13:53] tabibito: You've got me puzzled. Usually when you get a 301 from AWS it's because it's not looking at the right endpoint URL. I can't exactly replicate, but opening a bug on LP or asking on Ask Ubuntu might get better traction than what I have to offer
[13:54] marcoceppi: flagbearer charms in 5?
[13:54] I can start the hangout
[13:54] jcastro: cool, just point me at a URL
[13:54] * marcoceppi looks for a brush
[13:54] jcastro: oh btw, your adobo made my breakfast amazing today (and btw, no clumping here)
[13:55] @marcoceppi thanks … I'll keep digging and if all fails I'll post a bug or go to Ask Ubuntu
[13:55] https://plus.google.com/hangouts/_/fa6dea509240bfe96361ee99a233312bebed02b0?authuser=0&hl=en
[13:55] for those who want to join
[13:56] SpamapS: tah, I think it was a bad bottle on my end
[14:05] marcoceppi, I found it … needed to enter the EC2 and the S3 uri for the specific region and use HTTPS
[14:05] duh
[14:08] tabibito: interesting
[14:08] indeed
[14:44] exit
[14:44] bye
=== scuttlemonkey_ is now known as scuttlemonkey
=== victorp_ is now known as victorp_uds
[15:51] SpamapS, incidentally there's a new txzk for debian inclusion
[15:54] hazmat: ty, will upload ASAP
[15:58] Hi Robert, are you there? It's Ian.
[15:58] I've got the go-ahead to deploy our Cloud 1.0 on MAAS/Juju
[15:58] But now I'm hitting a major blocker
[16:02] bbcmicrocomputer, ^^
[16:05] irossi: hey, let's take this off channel
[16:14] When using juju-core, destroy-service seems to be not as "reliable" as it was in pyjuju. I.e., if there is some kind of service error, I can't destroy the service successfully. The agent state changes to "dying" and then sits there
[16:15] I'm actually not sure how to recover from this state.
[16:17] dpb1, maybe similar to #1168154 or #1168145?
[16:17] <_mup_> Bug #1168154: Destroying a service in error state fails silently
[16:17] <_mup_> Bug #1168145: Destroying a service before it reaches started or running does not destroy the machine
[16:17] Launchpad bug 1168154 in juju-core "Destroying a service in error state fails silently" [High,Confirmed]
[16:17] <_mup_> Bug #1168154: Destroying a service in error state fails silently
[16:17] Launchpad bug 1168145 in juju-core "Destroying a service before it reaches started or running does not destroy the machine" [Undecided,New]
[16:17] <_mup_> Bug #1168145: Destroying a service before it reaches started or running does not destroy the machine
[16:17] Oops..
[16:17] wow, dueling bots
[16:18] Makyo: checking those, looks similar.
[16:19] dpb1, juju resolved a few times.. or use juju-deployer -W -T
[16:19] it works around the issue by resolving the unit
[16:20] hazmat: yes, looks like resolved frees it up, thx. Makyo: 1168164 looks like the issue, thx
[17:56] hi. I am pondering a deploy of charmworld and juju-gui behind a single apache to manage the ssl endpoint. I think I can create a vhost template that defines two reverse proxy relations, because each relation has a service name, such as jc-squidrev
[18:01] hi charmers. I want to write a unittest for the mongodb charm. I have written a function that should be independent of the juju environment so it could be a simple python unit test. Are there any examples of this being done before?
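What sinzui describes here — a helper with no juju dependency, exercised by a plain stdlib test — might look like the sketch below. The module layout, function name, and config keys are illustrative, not taken from the real mongodb charm.

    # Hypothetical charm helper plus test; the helper would normally live in
    # the charm tree (e.g. hooks/helpers.py) and be imported by the hooks.
    import unittest


    def mongodb_conf(options):
        """Render a minimal mongodb.conf from a dict of config options."""
        lines = ["# generated by the charm"]
        for key in sorted(options):
            lines.append("%s = %s" % (key, options[key]))
        return "\n".join(lines) + "\n"


    class MongodbConfTest(unittest.TestCase):

        def test_options_are_rendered_sorted(self):
            conf = mongodb_conf({"port": 27017, "dbpath": "/srv/mongodb"})
            self.assertEqual(
                "# generated by the charm\n"
                "dbpath = /srv/mongodb\n"
                "port = 27017\n",
                conf)


    if __name__ == "__main__":
        unittest.main()

Dropped into a tests directory alongside the hooks (or the lib/ layout marcoceppi floats at [18:05]), something like this needs nothing more than python -m unittest to run.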
[18:02] sinzui: There's no real examples of unit testing yet. We have an idea for how it should look but nothing solid yet
[18:03] marcoceppi: hmm, would my test be rejects if I included one with instructions to run it?
[18:03] s/rejects/rejected/
[18:03] sinzui: for unit testing the charm? Probably not
[18:04] okay, I think it is worth trying at least.
[18:04] wedgwood, ^ any thoughts about 2 reverseproxy relations for a single apache charm deploy
[18:05] sinzui: we're open to see how people want to do this. We were kicking around the idea of having charms stub most of their hooks and put a lot of logic in lib/ and then tests for that in lib//tests something like that
[18:06] again, that's just a thought, we'd be interested to see how charm authors start tackling this
[18:08] sinzui: marcoceppi: that's not quite true. I believe that the haproxy and apache2 charms have unit tests.
[18:11] I don't see any tests in apache2
[18:11] * sinzui pulls haproxy
[18:14] thank you wedgwood, I see an example in haproxy
[18:14] mongodb is similar so this helps me a lot
[18:47] sinzui, i've got some in my aws-* charms re unit tests
[18:47] hazmat, fab, will I find them under ~hazmat on Lp?
[18:48] sinzui, yeah.. here's one lp:~hazmat/charms/precise/aws-elb/trunk
[18:50] I got it. Thanks hazmat
[18:51] sinzui: the juju-gui charm has some unit tests (if I gather correctly what you're looking for)
[18:52] sinzui: https://bazaar.launchpad.net/~sidnei/charms/precise/haproxy/trunk/files/head:/hooks/tests/ it's not merged yet
[18:53] wedgwood: ^ it's only merged into canonical-losas
[18:54] thanks benji, sidnei.
[19:24] hazmat: ping?
[19:25] sidnei, pong
[19:25] hazmat: https://pastebin.canonical.com/91061/
[19:26] hazmat: looks like juju 0.7 failed to build on the ppa for precise
[19:26] hazmat: the tb above is from bootstrapping with 0.6
[19:27] sidnei, yeah.. i just noticed mgz moved that ppa off trunk to a branch.. i'll trigger the build
[19:27] hmm
[19:27] sidnei, what version of txzookeeper?
[19:27] hazmat: in the paste
[19:27] sidnei, its not in the paste..
[19:27] ah, txzookeeper sorry
[19:28] hazmat: Installed: 0.9.8-0juju53~precise1
[19:28] argh..
[19:29] hazmat: actually, i bootstrapped with 0.7-0ubuntu1~0.IS.12.04, but the bootstrap node got 0.6 as that's what's in the juju ppa
[19:32] sidnei, hmm.. where's that from?
[19:32] sidnei, i queued some builds for the ~juju/pkgs ppa
[19:33] hazmat: i assume you mean 0.7: https://pastebin.canonical.com/91063/
[19:33] sidnei, yeah.. i did
[19:39] weird.. it built correctly for precise but says error on upload
[19:40] hazmat: yup, because it has the same version as the previous failed build
[19:40] need to bump to ~precise2 or something
[19:40] i'll commit something to the branch
[19:40] yeah, or that.
[19:54] yay, progress
[19:55] i'm still surprised by the error, that aspect of txzk hasn't changed in quite a while
[19:57] yo, any of you having issues with pyjuju and the local provider? Whatever service I try to deploy never gets past "pending". The logs don't tell much
[19:59] hazmat: funny you mention that: https://pastebin.canonical.com/91065/
[20:00] argh
[20:00] fcorrea: I can try
[20:01] fcorrea: bootstrap works ok?
[20:01] dpb1, cool. Found something similar up on askubuntu.com. Will try disabling the firewall
[20:01] sidnei, got a minute for a g+?
[20:01] dpb1, yep
[20:01] fcorrea: quantal?
[20:01] dpb1, raring
[20:01] k
[20:01] hazmat: let me see if it's working today or if i need to reboot *wink*
[20:07] dpb1, it gets very quiet after deploying. In the logs I see a "Started service unit mysql/0" for example and that's it... it was working last week though. I guess I should head back to Oakland
[20:07] fcorrea: what series are you trying to deploy? What charm?
[20:07] dpb1, default series is precise and charm is mysql
[20:09] lemme try to create an lxc container and check it works. I could be there maybe
[20:10] fcorrea: what does lxc list show you?
[20:10] lxc-ls correctly shows the container created by juju though
[20:10] ok
[20:10] what about lxc-ls --fancy
[20:10] you should get the ip
[20:10] which you should be able to ssh into the ubuntu account
[20:10] dpb1, yeah it's there: fcorrea-local-mysql-0 RUNNING 10.0.3.87 - NO
[20:10] doing it
[20:12] dpb1, mmm.... the unit-mysql-0-output.log shows the issue: ImportError: No module named txzookeeper.utils
[20:12] fcorrea: ok, that is what hazmat is debugging right now
[20:12] dpb1, hah! Awesome
[20:12] I think same here
[20:13] I guess it's not just quantal. :(
[20:13] somehow the debian build changed to not include the src
[20:13] for the ppa txzk
[20:13] ic
[20:13] and its python.. so no binaries.. it seems quite strange
[20:13] well, feeling better now
[20:13] pypi is seeming pretty awesome right now ;-)
[20:14] ya, dpkg -L python-txzookeeper is pretty embarrassing. :)
[20:14] hazmat, lets use buildout as a package manager ;)
[20:20] SpamapS, the txzk recipe was just using the embedded packaging?
[20:21] hazmat: probably?
[20:22] hazmat: uploading 0.9.8 to debian unstable shortly
[20:22] SpamapS, k, i don't see the recipe do anything different, and the embedded packaging hasn't changed.. but now the package being generated is empty.. which is just odd
[20:22] SpamapS, cool, thanks
[20:22] hazmat: yeah probably needs a kick, not sure why
[20:32] SpamapS: i suspect the recipe got changed to not use a nested packaging branch at some point, since the build log for 0.9.7 has --with python2 and the one for 0.9.8 doesn't, and the debian/rules in lp:txzookeeper has not changed.
[20:33] quite likely the recipe has been broken for a long time. They tend to break over time in my experience.
[20:34] SpamapS! /me waves
[20:34] m_3: howdy
=== Makyo is now known as Makyo|out
[20:35] hazmat: anyway, 0.9.8-1 is in unstable now.. should make its way through the tubes to saucy by tomorrow
[20:35] SpamapS, sweet
[20:36] SpamapS, yeah.. that seems quite likely re the recipe, it hasn't built in a while
[20:36] er.. been built
[20:37] Given how ridiculously simple it is to package.. kind of sad that it broke :-P
=== BradCrittenden is now known as bac
[20:49] SpamapS, agreed
[20:50] hazmat: can the ppa get changes faster?
[20:52] dpb1, not sure what you mean
[20:57] hazmat: sorry, I'm not sure why I typed that. I meant, can the package be uploaded to the ppa? (maybe you already have).
[20:57] dpb1, the alternative is running a non-ppa origin
[20:58] er. removing origin and relying on the distro version
[21:00] you can probably ask a web-op to kick the ppa and bump its priority
[21:01] if a new package is being uploaded, you should probably do that
[21:02] anyone else running into the python-txzookeeper thing floating around today?
[21:08] ahasenack, the problem is the recipe is foobar
[21:08] Akira1_, yes..
[21:08] hazmat: where is it? url
[21:08] dpb1, ahasenack .. anyone else if you're interested.. and have packaging knowledge.. we're hanging in https://plus.google.com/hangouts/_/a278ee33e829a90be5c6c364a6754726e6b975ee?authuser=0&hl=en
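As an aside on hazmat's suggestion at [20:57]-[20:58] (drop the PPA origin and rely on the distro version): in pyjuju the knob for this is, as far as I recall, the juju-origin setting in environments.yaml. The option name and values below are from memory, not from this log.

    # in the affected environment's stanza of ~/.juju/environments.yaml
    juju-origin: distro    # rather than "ppa"; new machines then install juju from the Ubuntu archive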
=== defunctzombie_zz is now known as defunctzombie
[21:10] ahasenack, https://code.launchpad.net/~juju/+recipe/txzookeeper
[21:11] hazmat: I'll try and join you shortly
[21:12] SpamapS, that would be awesome.. i'm digging through dh_python2 conversion docs atm
[21:12] my packaging knowledge is pretty minimal
[21:13] hazmat: we've been following your commits so coo
[21:18] hazmat: if I type "make" in the txzookeeper directory, I get the "done" target, that is confusing dh_auto_build
=== defunctzombie is now known as defunctzombie_zz
[21:19] there is an override to get it to use python build in this case, it is finding the Makefile and assuming that's how the thing is built
[21:19] ahasenack, aha!
[21:19] let me find it
[21:19] ahasenack, i'll just kill the makefile
[21:19] that would work too
[21:20] I don't remember how recipes handle 3.0 quilt packages, i.e., if they fetch the orig tarball or not
[21:21] so maybe you want to kill r55 too, or just try it without the makefile first and see what happens
[21:22] ahasenack, killing the makefile and it seems to work locally.. recipe builds requeued
[21:22] ok
[21:24] hazmat, ping
[21:24] oh wait
[21:25] fubar in progress.. crossed fingers on next recipe build
[21:26] there's also a note on the mailing list about txzookeeper in the ppa,
[21:27] so probably want to respond there when it's built and confirmed fixed
[21:48] sidnei, ahasenack, adam_g thanks for your help
[21:48] ppa should be good now
[21:48] or as soon as the index gets rebuilt
[21:49] working for us too
[21:49] was pretty neat as I was planning to demo juju to my project team about 90 minutes ago and we couldn't bootstrap
[21:50] I figured it just stopped working cause we were demoing it cause, you know, that is how things work, especially on wednesdays
[21:51] Akira1_, ouch.. sorry. the root cause seems to have been the makefile and then some flailing trying to get to that introduced another issue (source/format). its generally been pretty good, but that particular build recipe hasn't been exercised in a long while.
[21:52] yeah, it happens ;)
[21:53] I'm loving this stuff otherwise. we've cooked up saltstack integration and I'm hoping to distill 4.5 years of garbage bash deployment scripts down to some minor charms and salt grains
[21:54] so cheers even with the hiccups
[21:54] bah!
[21:54] I missed him
[21:54] would love to see some salt stack stuff
[21:59] jcastro: see the new google plus?
[21:59] man...not feeling it...but maybe it'll grow on me
[22:21] juju: deploying lamp charm on ec2 causes instances to terminate | http://askubuntu.com/q/295961
=== wedgwood is now known as wedgwood_away
=== gianr_ is now known as gianr
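A closing note on the override ahasenack mentions at [21:18]-[21:19]: when a Python source tree ships a top-level Makefile, dh_auto_build detects it and runs its default target (here the "done" target) instead of the Python build. Besides deleting the Makefile, the usual fix is to force the build system in debian/rules; a minimal sketch along those lines (not the actual lp:txzookeeper rules file) would be:

    #!/usr/bin/make -f
    # Force the Python build system so the project's own Makefile is ignored.
    # (The recipe line below must be indented with a hard tab, as make requires.)
    %:
    	dh $@ --with python2 --buildsystem=python_distutils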