[09:37] <pavel> Is there any way I can increase size of root partition in EC2 instance? Or at least attach extra ebs volume?
[09:41] <pavel> my problem is that by default lucid images are 8gb and it's totally not enough for my purposes
[10:27] <mgz> gah, why are we recreating the error of having 'docs' as a series of the main project?
[10:28] <mgz> juju-core/docs makes no sense
[10:52] <stub> oh noes! Where has debug-hooks gone
[10:55] <mgz> stub: bug 1027876
[10:55] <_mup_> Bug #1027876: cmdline: Support debug-hooks <cmdline> <juju-core:Confirmed> <https://launchpad.net/bugs/1027876>
[10:56] <stub> In the -broken hook, the relation is already gone... so I need to use my -changed hooks to remember state somewhere else so my -broken hook can actually clean up?
[10:58] <mgz> that sounds possible to me, hopefully someone with experience of charming something like that can give suggestions
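The workaround stub describes could be sketched as below: the -changed hook caches relation data locally so the -broken hook can still clean up once the relation data is gone. `relation-get` is stubbed (as `relation_get`) so the sketch runs standalone; the hook names, paths, and the db host value are illustrative only.

```shell
# Sketch of caching relation data in -changed for use by -broken.
STATE_DIR=$(mktemp -d)

relation_get() {
    # stub standing in for the real `relation-get private-address` hook tool
    echo "db.example.internal"
}

# --- db-relation-changed: remember what cleanup will need later ---
host=$(relation_get private-address)
echo "$host" > "$STATE_DIR/db-host"

# --- db-relation-broken: the relation data is gone, but our copy isn't ---
saved_host=$(cat "$STATE_DIR/db-host")
echo "cleaning up access for $saved_host"
```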
[11:11] <mthaddon> hi folks, I have three instances in an environment that I want to get rid of (no longer needed). I've removed them via juju destroy-service (for 2) and a juju remove-unit (for 1). I've then done a nova delete, but juju has reprovisioned the instances. Is this expected behaviour? is there some way to work around it?
[11:13] <mgz> yes, you want `juju terminate-machine`
[11:13] <mgz> mthaddon: ^
[11:13] <mthaddon> mgz: awesome, thanks
[14:04] <ehg> hi - does anyone have any idea of how to get rid of zombie services/machines in the go juju DB? i.e. services that won't die, machines that are pending
[14:41] <RAZORQ> Hi guys
[14:42] <RAZORQ> I have a question about Ubuntu for phones. I have a ZTE GXI with an Intel Atom processor, and I want to know: will there be an official Ubuntu build for my phone, or do I have to port it myself?
[16:53] <vennenno> (?)
[16:59] <jamespage> wedgwood, adam_g_: cached decoration and some hookenv normalization - https://code.launchpad.net/~james-page/charm-helpers/caching_hookenv/+merge/169160
[17:01] <wedgwood> jamespage: we can't cache relation-get. we may want to see our own relation variables
[17:02] <wedgwood> same for the other relation_* commands
[17:03] <wedgwood> er, some of them anyway. the ones that call relation-get
[17:31] <jamespage> wedgwood: not sure I understand why that is?
[17:32] <wedgwood> jamespage: if, in one part of my hook I call relation-get -r <some relation id> - <my unit name>, then relation-set -r <same relation id>, I won't see the changes if they're cached
[17:33] <jamespage> wedgwood: you can't see data in that way within a relation
[17:33] <jamespage> hmm - or maybe...
[17:33] <wedgwood> jamespage: I'm pretty sure you can
[17:33] <wedgwood> jamespage: I did a lot of experimenting when I wrote http://senselessranting.org/post/52242864437/juju-relations-how-do-they-work
[17:33] <wedgwood> jamespage: you can see your own changes immediately
[17:34] <wedgwood> they're not committed and sent to other units until you exit the hook, but you can see your own
[17:35] <wedgwood> none of the rest of charm-helpers operates with that assumption, but I wouldn't want to surprise someone
[17:36] <wedgwood> jamespage: I also have another branch that may give you similar results.
[17:36] <wedgwood> jamespage: it's not completely finished: https://code.launchpad.net/~mew/charm-helpers/persistence
[17:38] <wedgwood> jamespage: you could always call cached(relation_get(...)) in your own charm.
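The behaviour wedgwood describes can be sketched standalone: within a hook, a unit sees its own relation-set changes immediately, even though they are only sent to other units when the hook exits. The hook tools are stubbed with a local file here; the real visibility guarantee is juju's, not this script's.

```shell
# Stub relation bag backed by a temp file.
BAG=$(mktemp)

relation_set() { echo "$1" >> "$BAG"; }                          # stub for relation-set
relation_get() { grep "^$1=" "$BAG" | tail -n 1 | cut -d= -f2; } # stub for relation-get

relation_set "port=5432"
echo "readable in the same hook: $(relation_get port)"  # prints 5432
# A blanket cache over relation_get would still hold the old value at
# this point, which is why caching it unconditionally is unsafe.
```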
[17:38] <jcastro> wedgwood: mind if I syndicate that blog on juju.u.c?
[17:38] <wedgwood> jcastro: by all means
[17:41] <jamespage> wedgwood: well I never even knew you could do that
[17:41] <wedgwood> neither did I :)
[17:43] <jamespage> wedgwood: I can probably figure out how to selectively flush the cache if that happens
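The selective flush jamespage mentions might look something like this: memoize relation lookups, but drop the cached entry for a key whenever a set changes it. None of these names are the real charm-helpers API, and the relation bag is faked with a local file so the sketch runs standalone.

```shell
# Memoized relation lookups with per-key invalidation on write.
BAG=$(mktemp)
CACHE_DIR=$(mktemp -d)

raw_relation_get() { grep "^$1=" "$BAG" | tail -n 1 | cut -d= -f2; }

cached_relation_get() {
    if [ ! -f "$CACHE_DIR/$1" ]; then
        raw_relation_get "$1" > "$CACHE_DIR/$1"   # first lookup hits the bag
    fi
    cat "$CACHE_DIR/$1"
}

relation_set() {
    echo "$1" >> "$BAG"
    rm -f "$CACHE_DIR/${1%%=*}"   # flush only the key we just changed
}

before=$(cached_relation_get port)   # caches the (empty) initial value
relation_set "port=5432"             # updates the bag and flushes "port"
after=$(cached_relation_get port)    # re-reads and sees the new value
echo "before='$before' after='$after'"
```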
[17:55] <Slaytorson> How can I destroy a charm?
[17:58] <jcastro> can you specify what you mean by destroy?
[17:59] <jcastro> as in, remove the machine it's running on or destroy the service or ... ?
[17:59] <Slaytorson> Perhaps I meant service. Completely delete and terminate the service and all machines running it
[18:01] <jcastro> look at destroy-service and destroy-machine
[18:01] <Slaytorson> Thanks. I did "juju --help" and nothing about destroying came up.
[18:01] <jcastro> actually, I wonder what happens when you destroy-machine before destroying a service
[18:01] <jcastro> `juju help commands` is what you want
[18:01] <jcastro> oh, which version of juju are you on?
[18:02] <Slaytorson> Not sure. juju --version doesn't give me the version
[18:03] <jcastro> if you do "juju" and then hit enter do you get a nice list of basic commands and a link to the website?
[18:05] <Slaytorson> Sort of. I don't get anything about destroying. https://gist.github.com/bslayton/197c168321381135556e
[18:05] <jcastro> ah yeah
[18:05] <jcastro> you're on the right version then
[18:05] <jcastro> but still crappy that we don't support --version, etc.
[18:05] <jcastro> I think I filed a bug about that
[18:06] <jcastro> https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1182898
[18:06] <_mup_> Bug #1182898: Please support `juju --version` <amd64> <apport-bug> <raring> <juju-core:New> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1182898>
[19:32] <jhf> hey all - I think I'm in some weird state.. after a reboot (with all my juju running and happy), when I run 'juju status' I get 'ERROR could not connect before timeout' yet 'juju bootstrap' reports 'ERROR Environment already bootstrapped' - this is using LXC containers.
[19:32] <jhf> any ideas?
[19:33] <jhf> 'juju -v status' errors with Socket [10.0.3.1:60460] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
[19:41] <sarnold> jhf: I vaguely think I recall someone complaining about their network bridge not coming back up after a reboot recently...
[19:41] <jhf> yeah… I ended up doing a destroy-environment and starting over for now..
[19:41] <jhf> but I imagine if I reboot again it'll happen again.
[20:13] <jcastro> http://andrewsomething.wordpress.com/2013/06/13/introducing-bug-2-trello/
[20:13] <jcastro> m_3: marcoceppi: arosales ^^^^^
[20:14] <marcoceppi> jcastro: wat. awesome
[20:14] <marcoceppi> Oh, I thought it did sync stuff, it's a chrome plugin
[20:14] <marcoceppi> still, very cool
[21:35] <thumper> FunnyLookinHat: ping
[21:35] <FunnyLookinHat> thumper, howdy
[21:35] <thumper> FunnyLookinHat: hey hey
[21:35] <thumper> so
[21:35] <thumper> containers...
[21:35] <FunnyLookinHat> Ha
[21:35] <thumper> good progress is being made
[21:35] <thumper> but nothing ready to test yet
[21:36] <thumper> we have managed to get some things working with manual poking
[21:36] <thumper> but not fully automated yet
[21:36] <FunnyLookinHat> Ah interesting
[21:36] <FunnyLookinHat> Any idea on a timeline ?
[21:36] <thumper> :)
[21:36] <thumper> that's the question isn't it?
[21:36] <thumper> as fast as possible
[21:36]  * thumper considers carefully
[21:37] <thumper> I *think* we should have something very beta by the end of next week (hopefully)
[21:37] <thumper> this depends on some network refactoring
[21:37] <thumper> to make sure the containers are properly addressable from the rest of the machines
[21:38] <thumper> sorry it isn't more concrete
[21:38] <thumper> but since this is the first iteration
[21:38] <thumper> it is bound to be wrong
[21:39]  * thumper quotes kiko - the first two times will be wrong
[21:40] <thumper> FunnyLookinHat: are you on the juju-dev email list?
[21:40] <thumper> FunnyLookinHat: I'm trying to post updates there
[21:42] <thumper> FunnyLookinHat: so, completely different question - the new ultrabook - does it have a metal outer?
[21:42] <FunnyLookinHat> Sorry was AFK - had to take care of something
[21:42] <thumper> FunnyLookinHat: np
[21:44] <FunnyLookinHat> Ok so - Q#1 - at some point could you give me a high level idea of how the charms would work to deploy applications within containers?  i.e. will much have to change from a standard charm?  On top of that - I'm still not sure if this means I'd put the entire LAMP stack in the container or just LAP and then do MySQL elsewhere.
[21:44] <FunnyLookinHat> RE: mailing list - I'm not - I asked for approval and never got it I believe :
[21:45] <FunnyLookinHat> RE: Ultrabook - not metal, but a VERY strong polycarbonate.  I can't bend the one sitting next to me at all if I try to twist one way or another
[21:45] <FunnyLookinHat> The metal chassis inside is quite awesome :)
[21:46] <thumper> ?! you need approval for juju-dev list?
[21:46] <FunnyLookinHat> I guess so?  I threw in my email address and it said "pending approval" - no emails yet  :)
[21:46] <thumper> A#1 - charms shouldn't need any modification *at all* - should be entirely transparent
[21:47] <FunnyLookinHat> I used my personal, so that might be it - funnylookinhat@gmail.com - if you could approve
[21:47] <thumper> I'll take a look
[21:56] <FunnyLookinHat> thumper, ok - so charm theory follow-up.  Given that I don't care about actually "scaling" any of our deployed applications, would it make sense to have a charm wrap an entire LAMP stack?  I would _really_ like to be able to do that for security and backup purposes
[21:56] <thumper> FunnyLookinHat: I think that idea may well suit your needs very well
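The all-in-one idea discussed above could amount to a single charm whose metadata declares no relations at all, with Apache, MySQL, and PHP all installed by the same install hook. This is a hypothetical metadata.yaml sketch, not a published charm; the name, summary, and description are made up.

```yaml
name: lamp-appliance
summary: Apache, MySQL, and PHP on a single unit
description: |
  Self-contained LAMP stack for small deployments where scaling the
  tiers independently is not a goal; keeping everything on one unit
  simplifies backup and security isolation.
```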
[21:56] <FunnyLookinHat> Ok - good deal.
[22:04] <FunnyLookinHat> Gotta run - but thanks for the update thumper