[08:41] <davidbanham> Hi all, interested in juju, but want to clarify something. Does juju provide any mechanism for running multiple units (processes) per computer? The docs read like a new EC2 instance (or equivalent) is spun up for each new unit.
[08:43] <b_> hi juju folks :)
[08:43] <b_> i was wondering if there's any tutorial online on getting some juju + linode magic going
[13:10] <marcoceppi> Just so I don't keep spinning my wheels, there's no way to get a charm's metadata programmatically without first branching the charm?
[13:20] <SpamapS> marcoceppi: IIRC the charm store used to provide it via a simple GET
[13:20] <marcoceppi> SpamapS: thanks, I'll poke that code for a bit. See if it's still there
[13:33] <marcoceppi> SpamapS: looks like it exposes some high level data about the charm, but doesn't actually provide the full metadata for the charm. Thanks for the tip though
[13:34] <SpamapS> marcoceppi: I used to run a nightly tarball of all the bzr branches ...
[13:34] <SpamapS> marcoceppi: if you ask IS, they might still have my old crontab.. ;)
[13:34] <SpamapS> marcoceppi: was quite useful for "wtf I need a full repo now"
[13:36] <marcoceppi> SpamapS: Yeah, I was thinking about tapping in to the tpaas service we have running for graph testing, since that has a local cache of all the charms, it'd just be nice to have it all in one central place
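(The endpoint marcoceppi found is presumably the old charm store's charm-info API; a rough sketch from memory — the URL, query parameter, and charm id below are assumptions, not quoted from the log:)

    # returns JSON with revision, sha256 and digest per charm id --
    # high-level data only, not the charm's full metadata.yaml
    curl 'https://store.juju.ubuntu.com/charm-info?charms=cs:precise/mysql'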
[13:37] <tabibito> hi, I was wondering if someone could help me with juju, I'm receiving following error when trying to bootstrap to EC2: ERROR 301 Moved Permanently
[13:38] <tabibito> Tried recreating the environment a few times
[13:39] <tabibito> version is 0.7
[13:39] <marcoceppi> Hi tabibito, what version of juju are you using? (dpkg -l | grep juju)
[13:39] <marcoceppi> Beat me to it :)
[13:39] <tabibito> :)
[13:39] <marcoceppi> tabibito: Are you trying to deploy to a specific region? If so, what does the region line look like in the environments file?
[13:40] <tabibito> currently it's looking like this: region: eu-west-1
[13:40] <tabibito> but even if I don't use a region, I get the same result
[13:40]  * marcoceppi tries to bootstrap with 0.7
[13:41] <tabibito> Extra info: If I do a juju status, I see following error: ERROR Cannot connect to environment: 301 Moved Permanently
[13:41] <tabibito> Traceback (most recent call last):
[13:41] <tabibito>   File "/usr/lib/python2.7/dist-packages/juju/providers/common/connect.py", line 43, in run
[13:41] <tabibito>     client = yield self._internal_connect(share)
[13:41] <tabibito> Error: 301 Moved Permanently
[13:43] <marcoceppi> tabibito: I'm not able to replicate with and without region on 0.7 Do you have an EC2_URL environment variable (or something similar) set in your shell?
[13:44] <tabibito> not that I know of … How can I check?
[13:45] <marcoceppi> tabibito: `env` but if you don't think you have it set then odds are you don't
[13:45] <tabibito> no don't see any EC2_URL
[13:45] <tabibito> strange
[13:46] <marcoceppi> tabibito: run "juju -v status" and paste the output to a pastebin please
[13:46] <marcoceppi> err, and when you try to bootstrap too, if you haven't gotten a successful bootstrap yet
[13:48] <tabibito> bootstrap: http://pastebin.com/gTyH8HCg
[13:48] <tabibito> status: http://pastebin.com/MpDeK3vs
[13:50] <marcoceppi> tabibito: darn, I was really hoping it would show the URL it was trying to connect to
[13:50] <tabibito> environment(sanitized):http://pastebin.com/YDNWBdTZ
[13:50] <tabibito> I know right! … That would be easier to troubleshoot :)
[13:51] <marcoceppi> tabibito: shot in the dark: what if you used precise as your default series? (Almost all charms are written for precise anyway, and we typically recommend deploying to an LTS)
[13:51] <tabibito> same thing
[13:52] <tabibito> Tried with precise, quantal ...
[13:52] <tabibito> I'm using 13.04 now, but I also tried with 12.04… results are the same, so it must be something I do… or :)
[13:53] <marcoceppi> tabibito: You've got me puzzled. Usually when you get a 301 from AWS it's because it's not looking at the right endpoint URL. I can't replicate it exactly, but opening a bug on LP or asking on Ask Ubuntu might get better traction than what I have to offer
[13:54] <jcastro> marcoceppi: flagbearer charms in 5?
[13:54] <jcastro> I can start the hangout
[13:54] <marcoceppi> jcastro: cool, just point me at a URL
[13:54]  * marcoceppi looks for a brush
[13:54] <SpamapS> jcastro: oh btw, your adobo made my breakfast amazing today (and btw, no clumping here)
[13:55] <tabibito> @marcoceppi thanks … I'll keep digging and if all else fails I'll post a bug or ask on Ask Ubuntu
[13:55] <jcastro> https://plus.google.com/hangouts/_/fa6dea509240bfe96361ee99a233312bebed02b0?authuser=0&hl=en
[13:55] <jcastro> for those who want to join
[13:56] <jcastro> SpamapS: tah, I think it was a bad bottle on my end
[14:05] <tabibito> marcoceppi, I found it … I needed to enter the EC2 and S3 URIs for the specific region and use HTTPS
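(Roughly what tabibito's fix would look like in environments.yaml — a sketch only; the ec2-uri/s3-uri option names are recalled from pyjuju's EC2 provider config and the endpoints are the standard eu-west-1 ones, neither is quoted from the log:)

    environments:
      ec2-eu:
        type: ec2
        region: eu-west-1
        # point both the EC2 and S3 APIs at the region-specific endpoints, over HTTPS
        ec2-uri: https://ec2.eu-west-1.amazonaws.com
        s3-uri: https://s3-eu-west-1.amazonaws.com
        control-bucket: <your-bucket>
        admin-secret: <your-secret>
        default-series: precise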
[14:05] <tabibito> duh
[14:08] <marcoceppi> tabibito: interesting
[14:08] <tabibito> indeed
[14:44] <tabibito> exit
[14:44] <tabibito> bye
[15:51] <hazmat> SpamapS, incidentally there's a new txzk for debian inclusion
[15:54] <SpamapS> hazmat: ty, will upload ASAP
[15:58] <irossi> Hi Robert, are you there? It's Ian.
[15:58] <irossi> I've got the go ahead to deploy our Cloud 1.0 on MAAS/Juju
[15:58] <irossi> But now I'm hitting a major blocker
[16:02] <jamespage> bbcmicrocomputer, ^^
[16:05] <bbcmicrocomputer> irossi: hey, let's take this off channel
[16:14] <dpb1> When using juju-core, destroy-service doesn't seem to be as "reliable" as it was in pyjuju. I.e., if there is some kind of service error, I can't destroy the service successfully. The agent state changes to "dying" and then just sits there
[16:15] <dpb1> I'm actually not sure how to recover from this state.
[16:17] <Makyo> dpb1, maybe similar to #1168154 or #1168145?
[16:17] <_mup_> Bug #1168154: Destroying a service in error state fails silently <juju-core:Confirmed> <https://launchpad.net/bugs/1168154>
[16:17] <_mup_> Bug #1168145: Destroying a service before it reaches started or running does not destroy the machine <juju-core:New> <https://launchpad.net/bugs/1168145>
[16:17] <ubot5`> Launchpad bug 1168154 in juju-core "Destroying a service in error state fails silently" [High,Confirmed]
[16:17] <ubot5`> Launchpad bug 1168145 in juju-core "Destroying a service before it reaches started or running does not destroy the machine" [Undecided,New]
[16:17] <Makyo> Oops..
[16:17] <dpb1> wow, dueling bots
[16:18] <dpb1> Makyo: checking those, looks similar.
[16:19] <hazmat> dpb1, juju resolved a few times.. or use juju-deployer -W -T
[16:19] <hazmat> it works around the issue by resolving the unit
[16:20] <dpb1> hazmat: yes, looks like resolved frees it up, thx.  Makyo: 1168154 looks like the issue, thx
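(A minimal shell sketch of the workaround hazmat describes: mark the failed unit resolved, repeating if necessary, so the dying service can finish tearing down; the unit name mysql/0 is only an illustration:)

    juju destroy-service mysql   # service goes to "dying" but a unit in an error state blocks it
    juju resolved mysql/0        # clear the unit's error state; repeat if it errors again
    juju status                  # the service and its machine should now get cleaned up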
[17:56] <sinzui> hi. I am pondering a deploy of charmworld and juju-gui behind a single apache to manage the ssl endpoint. I think I can create a vhost template that defines two reverse proxy relations, because each relation has a service name, such as jc-squidrev
[18:01] <sinzui> hi charmers. I want to write a unit test for the mongodb charm. I have written a function that should be independent of the juju environment, so it could be covered by a simple Python unit test. Are there any examples of this being done before?
[18:02] <marcoceppi> sinzui: There are no real examples of unit testing yet. We have an idea of how it should look but nothing solid yet
[18:03] <sinzui> marcoceppi: hmm, would my test be rejected if I included one with instructions to run it?
[18:03] <marcoceppi> sinzui: for unit testing the charm? Probably not
[18:04] <sinzui> okay, I think it is worth trying at least.
[18:04] <sinzui> wedgwood, ^ any thoughts about 2 reverseproxy relations for a single apache charm deploy
[18:05] <marcoceppi> sinzui: we're open to seeing how people want to do this. We were kicking around the idea of having charms stub most of their hooks and put a lot of the logic in lib/<service-name>, with tests for that in lib/<service-name>/tests, something like that
[18:06] <marcoceppi> again, that's just a thought, we'd be interested to see how charm authors start tackling this
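(Roughly what the layout marcoceppi is describing could look like for the mongodb charm; the paths below are illustrative only, not an agreed convention:)

    hooks/install                     # thin stub that just calls into lib/mongodb
    lib/mongodb/__init__.py           # hook logic, importable without a running juju environment
    lib/mongodb/tests/test_hooks.py   # plain Python unit tests for that logic
    # which could then be run from the charm root with something like:
    python -m unittest discover lib/mongodb/tests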
[18:08] <wedgwood> sinzui: marcoceppi: that's not quite true. I believe that the haproxy and apache2 charms have unit tests.
[18:11] <sinzui> I don't see any tests in apache2
[18:11]  * sinzui pulls haproxy
[18:14] <sinzui> thank you wedgwood, I see an example in haproxy
[18:14] <sinzui> mongodb is similar so this helps me a lot
[18:47] <hazmat> sinzui, i've got some in my aws-* charms re unit tests
[18:47] <sinzui> hazmat, fab, will I find them under ~hazmat on Lp?
[18:48] <hazmat> sinzui, yeah.. here's one lp:~hazmat/charms/precise/aws-elb/trunk
[18:50] <sinzui> I got it. Thanks hazmat
[18:51] <benji> sinzui: the juju-gui charm has some unit tests (if I gather correctly what you're looking for)
[18:52] <sidnei> sinzui: https://bazaar.launchpad.net/~sidnei/charms/precise/haproxy/trunk/files/head:/hooks/tests/ it's not merged yet
[18:53] <sidnei> wedgwood: ^ it's only merged into canonical-losas
[18:54] <sinzui> thanks benji, sidnei.
[19:24] <sidnei> hazmat: ping?
[19:25] <hazmat> sidnei, pong
[19:25] <sidnei> hazmat: https://pastebin.canonical.com/91061/
[19:26] <sidnei> hazmat: looks like juju 0.7 failed to build on the ppa for precise
[19:26] <sidnei> hazmat: the tb above is from bootstrapping with 0.6
[19:27] <hazmat> sidnei, yeah.. i just noticed mgz moved that ppa off trunk to a branch.. i'll trigger the build
[19:27] <hazmat> hmm
[19:27] <hazmat> sidnei, what version of txzookeeper?
[19:27] <sidnei> hazmat: in the paste
[19:27] <hazmat> sidnei, its not in the paste..
[19:27] <sidnei> ah, txzookeeper sorry
[19:28] <sidnei> hazmat:   Installed: 0.9.8-0juju53~precise1
[19:28] <hazmat> argh..
[19:29] <sidnei> hazmat: actually, i bootstrapped with 0.7-0ubuntu1~0.IS.12.04, but the bootstrap node got 0.6 as that's what's in the juju ppa
[19:32] <hazmat> sidnei, hmm.. where's that from?
[19:32] <hazmat> sidnei, i queued some builds for the ~juju/pkgs ppa
[19:33] <sidnei> hazmat: i assume you mean 0.7: https://pastebin.canonical.com/91063/
[19:33] <hazmat> sidnei, yeah.. i did
[19:39] <hazmat> weird.. it built correctly for precise but says error on upload
[19:40] <sidnei> hazmat: yup, because it has the same version as the previous failed build
[19:40] <sidnei> need to bump to ~precise2 or something
[19:40] <hazmat> i'll commit something to the branch
[19:40] <sidnei> yeah, or that.
[19:54] <sidnei> yay, progress
[19:55] <hazmat> i'm still surprised by the error, that aspect of txzk hasn't changed in quite a while
[19:57] <fcorrea> yo, any of you having issues with pyjuju and the local provider? Whatever service I try to deploy never gets past "pending". The logs don't tell much
[19:59] <sidnei> hazmat: funny you mention that: https://pastebin.canonical.com/91065/
[20:00] <hazmat> argh
[20:00] <dpb1> fcorrea: I can try
[20:01] <dpb1> fcorrea: bootstrap works ok?
[20:01] <fcorrea> dpb1, cool. Found something similar up on askubuntu.com. Will try disabling the firewall
[20:01] <hazmat> sidnei, got a minute for a g+?
[20:01] <fcorrea> dpb1, yep
[20:01] <dpb1> fcorrea: quantal?
[20:01] <fcorrea> dpb1, raring
[20:01] <dpb1> k
[20:01] <sidnei> hazmat: let me see if it's working today or if i need to reboot *wink*
[20:07] <fcorrea> dpb1, it gets very quiet after deploying. In the logs I see a "Started service unit mysql/0" for example and that's it...it was working last week though. I guess I should head back to Oakland
[20:07] <dpb1> fcorrea: what series are you trying to deploy? What charm?
[20:07] <fcorrea> dpb1, default series is precise and charm is mysql
[20:09] <fcorrea> lemme try to create an lxc container and check that it works. The problem could be there, maybe
[20:10] <dpb1> fcorrea: what does lxc list show you?
[20:10] <fcorrea> lxc-ls correctly shows the container created by juju though
[20:10] <dpb1> ok
[20:10] <dpb1> what about lxc-ls --fancy
[20:10] <dpb1> you should get the ip
[20:10] <dpb1> which you should be able to ssh into using the ubuntu account
[20:10] <fcorrea> dpb1, yeah it's there: fcorrea-local-mysql-0  RUNNING  10.0.3.87  -     NO
[20:10] <fcorrea> doing it
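(dpb1's local-provider debugging steps, collected as a shell sketch; the container name and IP come from fcorrea's output above, and where exactly the unit log lives inside the container is an assumption:)

    lxc-ls --fancy                    # list containers with state and IP
    ssh ubuntu@10.0.3.87              # log into the unit's container as the ubuntu user
    # then inspect the unit's output log (e.g. unit-mysql-0-output.log) for hook/agent errors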
[20:12] <fcorrea> dpb1, mmm....the unit-mysql-0-output.log shows the issue: ImportError: No module named txzookeeper.utils
[20:12] <dpb1> fcorrea: ok, that is what hazmat is debugging right now
[20:12] <fcorrea> dpb1, hah! Awesome
[20:12] <dpb1> I think same here
[20:13] <dpb1> I guess it's not just quantal. :(
[20:13] <hazmat> somehow the debian build changed to not include the src
[20:13] <hazmat> for the ppa txzk
[20:13] <fcorrea> ic
[20:13] <hazmat> and its python.. so no binaries.. it seems quite strange
[20:13] <fcorrea> well, feeling better now
[20:13] <hazmat> pypi is seeming pretty awesome right now ;-)
[20:14] <dpb1> ya, dpkg -L python-txzookeeper is pretty embarrassing. :)
[20:14] <fcorrea> hazmat, lets use buildout as a package manager ;)
[20:20] <hazmat> SpamapS, the txzk recipe was just using the embedded packaging?
[20:21] <SpamapS> hazmat: probably?
[20:22] <SpamapS> hazmat: uploading 0.9.8 to debian unstable shortly
[20:22] <hazmat> SpamapS, k, i don't see the recipe do anything different, and the embedded packaging hasn't changed.. but now the package being generated is empty.. which is just odd
[20:22] <hazmat> SpamapS, cool, thanks
[20:22] <SpamapS> hazmat: yeah probably needs a kick, not sure why
[20:32] <sidnei_> SpamapS: i suspect the recipe got changed to not use the nested packaging branch at some point, since the build log for 0.9.7 has --with python2 and the one for 0.9.8 doesn't, and the debian/rules in lp:txzookeeper hasn't changed.
[20:33] <SpamapS> quite likely the recipe has been broken for a long time. They tend to break over time in my experience.
[20:34] <m_3> SpamapS!  /me waves
[20:34] <SpamapS> m_3: howdy
[20:35] <SpamapS> hazmat: anyway, 0.9.8-1 is in unstable now.. should make its way through the tubes to saucy by tomorrow
[20:35] <hazmat> SpamapS, sweet
[20:36] <hazmat> SpamapS, yeah.. that seems quite likely re the recipe, it hasn't built in a while
[20:36] <hazmat> er.. been built
[20:37] <SpamapS> Given how ridiculously simple it is to package.. kind of sad that it broke :-P
[20:49] <hazmat> SpamapS, agreed
[20:50] <dpb1> hazmat: can the ppa get changes faster?
[20:52] <hazmat> dpb1, not sure what you mean
[20:57] <dpb1> hazmat: sorry, I'm not sure why I typed that.  I meant, can the package be uploaded to the ppa?  (maybe you already have).
[20:57] <hazmat> dpb1, the alternative is running a non ppa origin
[20:58] <hazmat> er. removing origin and relying on distro version
[21:00] <ahasenack> you can probably ask a web-op to kick the ppa and bump its priority
[21:01] <ahasenack> if a new package is being uploaded, you should probably do that
[21:02] <Akira1_> anyone else running into the python-txzookeeper thing floating around today?
[21:08] <hazmat> ahasenack, the problem is the recipe is foobar
[21:08] <hazmat> Akira1_, yes..
[21:08] <ahasenack> hazmat: where is it? url
[21:08] <hazmat> dpb1, ahasenack .. anyone else if you're interested and have packaging knowledge.. we're hanging out in https://plus.google.com/hangouts/_/a278ee33e829a90be5c6c364a6754726e6b975ee?authuser=0&hl=en
[21:10] <hazmat> ahasenack, https://code.launchpad.net/~juju/+recipe/txzookeeper
[21:11] <SpamapS> hazmat: I'll try and join you shortly
[21:12] <hazmat> SpamapS, that would be awesome.. i'm digging through dh_python2 conversion docs atm
[21:12] <dpb1> my packaging knowledge is pretty minimal
[21:13] <Akira1_> hazmat: we've been following your commits so cool
[21:18] <ahasenack> hazmat: if I type "make" in the txzookeeper directory, I get the "done" target; that is what's confusing dh_auto_build
[21:19] <ahasenack> there is an override to get it to use the python build; in this case it is finding the Makefile and assuming that's how the thing is built
[21:19] <hazmat> ahasenack, aha!
[21:19] <ahasenack> let me find it
[21:19] <hazmat> ahasenack, i'll just kill the makefile
[21:19] <ahasenack> that would work too
[21:20] <ahasenack> I don't remember how recipes handle 3.0 quilt packages, i.e., if they fetch the orig tarball or not
[21:21] <ahasenack> so maybe you want to kill r55 too, or just try it without the makefile first and see what happens
[21:22] <hazmat> ahasenack, killing the makefile and it seems to work locally.. recipe builds requeued
[21:22] <ahasenack> ok
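(For reference, the override ahasenack mentions would look roughly like this in debian/rules — a sketch of the alternative to deleting the Makefile, forcing dh to ignore it and use the Python build system; --with python2 matches what the 0.9.7 build log used:)

    #!/usr/bin/make -f
    %:
    	dh $@ --with python2 --buildsystem=python_distutils
    # (the dh line is a make recipe, so it must be indented with a tab)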
[21:24] <adam_g> hazmat, ping
[21:24] <adam_g> oh wait
[21:25] <hazmat> fubar in progress.. crossed fingers on next recipe build
[21:26] <mgz> there's also a note on the mailing list about txzookeeper in the ppa,
[21:27] <mgz> so probably want to respond there when it's built and confirmed fixed
[21:48] <hazmat> sidnei, ahasenack, adam_g thanks for your help
[21:48] <hazmat> ppa should be good now
[21:48] <sidnei_> or as soon as the index gets rebuilt
[21:49] <Akira1_> working for us too
[21:49] <Akira1_> was pretty neat as I was planning to demo juju to my project team about 90 minutes ago and we couldn't bootstrap
[21:50] <Akira1_> I figured it just stopped working because we were demoing it because, you know, that is how things work, especially on Wednesdays
[21:51] <hazmat> Akira1_, ouch.. sorry. the root cause seems to have been the makefile, and then some flailing trying to fix that introduced another issue (source/format). it's generally been pretty good, but that particular build recipe hasn't been exercised in a long while.
[21:52] <Akira1_> yeah, it happens ;)
[21:53] <Akira1_> I'm loving this stuff otherwise. we've cooked up saltstack integration and I'm hoping to distill 4.5 years of garbage bash deployment scripts down to some minor charms and salt grains
[21:54] <Akira1_> so cheers even with the hiccups
[21:54] <jcastro> bah!
[21:54] <jcastro> I missed him
[21:54] <jcastro> would love to see some salt stack stuff
[21:59] <robbiew> jcastro: see the new google plus?
[21:59] <robbiew> man...not feeling it...but maybe it'll grow on me
[22:21] <AskUbuntu> juju: deploying lamp charm on ec2 causes instances to terminate | http://askubuntu.com/q/295961