[00:41] <lazypower> Excellent. Turns out it was due to bundler scoping the env, awesome.
[00:41] <lazypower> sorry - being unable to call pry in the chef cookbook during runtime. The charm skeleton was doing best practice by piping the chef-solo run through the bundle
[03:26] <lazypower> How can I tell if my unit already has a relationship with another charm? I'm seeing a way to get the relation ids, and the list, but I'm looking more to see if it has an implicitly defined relationship with mongodb
[03:26] <lazypower> since that's defined in the yaml, i assume it's got a boolean call to discover if relation-get('mongodb') exists?
[03:27] <marcoceppi> lazypower: I use dot files to track when relations are created, etc
[03:28] <lazypower> That works, thanks
[03:45] <marcoceppi> lazypower: I'll also stuff data in there, so I don't have to call the relation commands, I can just read the contents of the dot file
[05:00] <lazypower> i was thinking about that, using it as a storage unit.
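marcoceppi's dot-file trick might look something like this in a hook (a rough sketch: the state directory, file name, and cached data are all invented for illustration; a real charm would write under $CHARM_DIR and fill the file from relation-get):

```shell
# sketch: track relation state with dot files so later hooks can check for
# (and read) a relation without calling the relation-* tools
STATE_DIR="$(mktemp -d)"   # a real charm might use "$CHARM_DIR/.state"

# a mongodb-relation-joined hook would record the relation and cache its
# settings, e.g.:  relation-get --format=json > "$STATE_DIR/.mongodb-related"
# simulated here with literal data:
printf '{"hostname": "10.0.0.1", "port": "27017"}\n' > "$STATE_DIR/.mongodb-related"

# any other hook can now answer "are we related to mongodb?" with a file test,
# and read the cached settings straight from the dot file:
if [ -f "$STATE_DIR/.mongodb-related" ]; then
    echo "mongodb related"
    cat "$STATE_DIR/.mongodb-related"
fi
```

The matching `-relation-broken` hook would remove the dot file so the check stays accurate.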
[10:46] <noodles775> jam: RE the cached charms when deploying locally, is there a way I can work around that (other than --upgrade)? I want to re-run an amulet test on my bootstrapped env (which doesn't use --upgrade when deploying).
[10:47] <jam> noodles775: I don't know of a way offhand, I didn't find where they were being cached
[10:50] <noodles775> jam: when running locally, is machine-0 even an lxc? It doesn't seem to be, perhaps it's talking to a mongodb somewhere that I need to clear.
[10:50]  * noodles775 looks around.
[10:50] <jam> no
[10:50] <jam> machine-0 runs mongodb on the local system
[10:55] <mgz> noodles775: one option there is to have your setup in amulet using a juju-deployer config, rather than doing deploy() calls and so on
[10:56] <mgz> so, you can then run the same thing again (which juju-deployer copes with), and just have the assertions changed
[10:57] <noodles775> OK, thanks mgz.
[11:05] <mgz> noodles775: I'd love to see your test stuff if you can point me at it
[11:08] <noodles775> mgz: http://bazaar.launchpad.net/~michael.nelson/charms/precise/elasticsearch/trunk/files
[11:10] <mgz> noodles775: thanks!
[16:29] <mgz> marcoceppi: are you aware of any charms actually using the python-django application charm thing? I'm trying to find an example.
[16:29] <marcoceppi> mgz: cjohnston might know. I think he was using it
[16:30] <marcoceppi> but he's not in the channel
[16:31] <mgz> marcoceppi: I think I'm looking at his code, and it's just doing the use-this-launchpad-branch thing
[16:33] <mgz> marcoceppi: put up a pull request for amulet on python 2 btw
[16:34] <marcoceppi> mgz: Thank you! I've been meaning to do this for a while now
[16:34] <mgz> if you're curious about what we've been up to as well, see:
[16:34] <marcoceppi> mgz: I'll make sure it gets packaged for python and python3

[16:36] <marcoceppi> o/
[16:37] <rbasak> jackweirdy, marcoceppi: o/
[16:59] <sinzui> marcoceppi, I see intermittent failures deploying to mysql on HP. I wonder if the machine is underpowered. Do you think 2G for mysql + juju agent is low?
[17:47] <marcoceppi> sinzui: this is a problem with a specific region of HP Cloud
[17:47] <marcoceppi> I think az2
[17:48] <marcoceppi> I haven't been able to pinpoint. Works flawlessly in az1
[17:48] <marcoceppi> and az3
[17:50] <sinzui> marcoceppi, I see the problem on az-3. I just changed CI to use more ram and didn't see the mysql state error. I will let CI run a while to confirm the failure has become rare
[17:50] <marcoceppi> sinzui: thanks
[17:55] <rick_h__> sinzui: just a heads up, still would love to chat a bit if you get time this week
[17:56] <sinzui> rick_h__, sure. maybe in an hour or so.
[19:13] <sinzui> rick_h__, fate is conspiring against me. My wife's car is misbehaving. I don't have any meetings tomorrow. Can we talk in the morning?
[19:15] <rick_h__> sinzui: sure thing, no hurry
[19:23] <_bjorne> can someone explain to me why I always get this from node 2 and up?! GET /MAAS/metadata//2012-03-01/user-data HTTP/1.1" 404 217 "-" "Python-urllib/2.7
[19:23] <_bjorne> and I don't get that on node 1
[19:24] <_bjorne> what am I doing wrong?
[19:28] <_bjorne> no one here?
[19:30] <natefinch> _bjorne, try #maas you might get more luck.
[19:33] <_bjorne> natefinch: should all nodes be set up from scratch? can I see them in juju status, or do I need to do anything more?
[19:36] <natefinch> _bjorne, you will need to do juju add-machine (or juju-deploy, which will add a machine and deploy a service to it) in order to see machines in juju status.
[19:37] <_bjorne> and if I don't do add-machine? do I see only the first node/machine?
[19:38] <natefinch> _bjorne, yes, that's the correct behavior.  We only acquire machines as necessary.
[19:39] <_bjorne> ok, and after that, machine 2 runs dist-upgrade and installs lxc and mongodb?
[19:40] <_bjorne> can you give me an example of how to write add-machine? :) I'm new to this :/ but not new to computers and linux :)
[19:46] <_bjorne> is this all I need to write if I want to add a new machine?  juju add-machine lxc
[19:46] <natefinch> _bjorne, it should just be a matter of "juju add-machine"  All that really does is requisition a machine from MAAS and start it running.
[19:47] <_bjorne> okey
[19:48] <natefinch> _bjorne, add-machine lxc will start a new machine and put an empty lxc container on it.  Containers are optional, you don't need to use them unless you want to for some reason.  By default, things deploy just to the base machine
[19:50] <natefinch> _bjorne, the usual way to use juju is to set up your ~/.juju/environments.yaml with the right information for your environment.  Then do juju switch maas (to make your maas environment the default one you're working with).  Then you do juju bootstrap to start up the environment (this deploys node 0).  You can then deploy services with juju deploy <servicename>.  That'll requisition new machines, start them up, and deploy the service to them.
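natefinch's workflow, as a command sketch (the environment name "maas" comes from the conversation; the service name "mysql" is just an example):

```shell
# typical juju + MAAS workflow described above
juju switch maas     # make the maas environment the default
juju bootstrap       # provisions node 0 (the state server)
juju deploy mysql    # requisitions a node from MAAS, boots it, deploys mysql
juju add-machine     # or: just acquire an empty machine with nothing on it
juju status          # machines only appear here once they've been acquired
```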
[19:52] <_bjorne> I'm using the default config
[19:54] <_bjorne> my problem from the beginning is that I can't get user-data on the second machine; the first one worked fine.
[19:55] <_bjorne> agent-state-info: '(error: cannot run instances: gomaasapi: got error back from
[19:55] <_bjorne> what is that?
[20:01] <_bjorne> hmm, if the machine doesn't find user-data, does that mean lxc and mongodb don't get installed? or do I do that with add-machine?
[20:14] <natefinch> _bjorne, I'm not sure what it means to not find user data.  I'm better at juju than maas.  The guys on #maas will be more useful with errors coming out of maas, which is what this sounds like
[20:16] <_bjorne> okey :) i will try to ask there again.
[20:17] <_bjorne> and now time to sleep :) up and working tomorrow... :/ driving that f*cking truck in Gothenburg :)
[20:17] <thumper> o/
[20:18] <_bjorne> heavy, 60 tons, 25.25 meters long :)
[20:20] <natefinch> yikes, good luck, _bjorne
[20:21] <sarnold> wow, what a different kind of problem ;) hehe
[20:22] <_bjorne> I wish I worked inside :) today it was raining :( not fun to be out in that.
[20:26] <sarnold> :(
[20:47] <dpb1> Can juju use custom images?
[20:47] <dpb1> I guess I would need to maintain my own simplestreams map?
[20:47] <natefinch> dpb1, you mean machine images?
[20:48] <dpb1> natefinch: cloud images
[20:48] <natefinch> dpb1, ok, so, right now, no.  We always use an ubuntu cloud image
[20:49] <dpb1> natefinch: OK.  if I wanted to snapshot an assembled ubuntu cloud image and maintain a simplestreams directory, would that work?
[20:50] <dpb1> or is there more to it than that.
[20:51] <dpb1> natefinch: I'm launching ci slaves with juju, and I get tired of the 15-20 minutes spin-up time because of all the software we need to install.
[21:06] <natefinch> dpb1, understandable.  I'm actually not sure if you can usurp the regular image for another ubuntu image with stuff pre-installed.  thumper - would you know if that's something that could be done?
[21:45]  * thumper notices the name
[21:45]  * thumper reads
[21:45] <thumper> dpb1, natefinch: at this time, I think that only the standard ubuntu-cloud image is supported
[21:45] <thumper> but I may be wrong
[21:46] <thumper> dpb1: on which provider?
[21:46] <thumper> dpb1: regarding the simplestreams work ,that may well work, check with wallyworld_ when he is around
[21:47]  * wallyworld_ reads backscroll
[21:48] <wallyworld_> dpb1: juju has tools for maintaining simplestreams metadata for using your own images. see "juju help metadata"
[21:49] <wallyworld_> you basically generate the json files and put them in your cloud storage (or elsewhere and then use image-metadata-url to point to that location)
[21:49] <wallyworld_> see also the 1.16 release notes for a little more information
[21:49] <dpb1> thumper: openstack
[21:50] <wallyworld_> i'm not sure how up-to-date the online docs are yet
[21:50] <dpb1> wallyworld_: thanks, I will read that.
[21:50] <wallyworld_> if openstack, easiest to put the custom metadata in the "control bucket"
[21:50]  * dpb1 nods
[21:51] <wallyworld_> i'll be afk for 30 mins or so soon but can answer any questions when i get back
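Putting wallyworld_'s pointers together, the workflow might look roughly like this (a sketch only: the image id, region, and endpoint are placeholders, and the flags are from the 1.16-era metadata plugin, so verify against `juju help metadata`):

```shell
# generate simplestreams metadata describing a custom OpenStack image
juju metadata generate-image \
    -i <glance-image-id> \
    -s precise \
    -r <region> \
    -u https://keystone.example.com:5000/v2.0/ \
    -d ./custom-metadata

# then upload the generated json to the environment's control bucket, or host
# it elsewhere and set image-metadata-url in environments.yaml to point at it
```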
[21:53] <natefinch> wow, that was actually a lot more positive than I expected :)  That simplestreams stuff is pretty sweet :)
[21:54] <thumper> natefinch: sinzui
[21:54] <thumper> ugh
[21:54] <thumper> EWRONGCHAN
[21:55] <dpb1> thx all... I'll ping back if I run into issues.
[21:56] <natefinch> thumper: I'm just going to keep switching channels for the fun of it
[22:49] <maxcan> hey
[22:50] <maxcan> i have a question about writing a charm which will run a docker container
[22:51] <maxcan> the docker instructions say that on ubuntu precise, you need to upgrade the kernel and reboot to install docker, but doing that inside a juju install hook seems wrong
[22:51] <maxcan> is there a "right way" to do this?
[22:51] <maxcan> alternatively, is it possible/safe to have juju agents running newer versions of ubuntu than precise?
[22:58] <maxcan> marcoceppi, lazypower, if you're around :)
[22:59] <marcoceppi> maxcan: you can write your charm for any series of Ubuntu, precise, raring, saucy, and now trusty
[22:59] <marcoceppi> maxcan: we have a few charms that do reboot servers, though it's not exactly a recommended practice
[23:00] <marcoceppi> maxcan: juju agents should pick up where they left off with hook execution, though it'll attempt to re-run the install hook (since it never completed), so you'll have to have idempotency guards to make sure it doesn't loop reboots
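The idempotency guard marcoceppi mentions might look like this in an install hook (a simulated sketch: the marker path and echo messages are invented, and the echos stand in for the real apt-get and reboot calls):

```shell
# guard a kernel-upgrade reboot so the re-run install hook doesn't loop
FLAG="$(mktemp -d)/.kernel-upgraded"   # a real charm might keep this in $CHARM_DIR

install_hook() {
    if [ -f "$FLAG" ]; then
        echo "kernel already upgraded, continuing install"
    else
        echo "upgrading kernel"   # stands in for apt-get installing the new kernel
        touch "$FLAG"             # drop the marker BEFORE rebooting
        echo "rebooting"          # stands in for the actual reboot call
        return 0                  # stop here; the hook re-runs after boot
    fi
    echo "installing docker"      # only reached once the guard is satisfied
}

install_hook    # first run: upgrades, drops the marker, "reboots"
install_hook    # re-run after boot: marker present, no reboot loop
```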
[23:00] <maxcan> marcoceppi: so is there anything i need to do besides putting it in a raring/ or saucy/ directory instead of precise?
[23:00] <marcoceppi> maxcan: some of the openstack charms do reboots for kernels
[23:00] <maxcan> got it
[23:00] <maxcan> ty sir
[23:01] <maxcan> so one of my other developers was wondering if its possible to use custom AMIs with juju.
[23:14] <marcoceppi> maxcan: no, charmers are guaranteed to get the Canonical blessed image for each cloud
[23:14] <lazypower> +1 to that
[23:14] <lazypower> https://lists.ubuntu.com/archives/juju/2013-April/002389.html
[23:14] <marcoceppi> maxcan: whatever customizations you do to the AMI should just be included in the charm
[23:14] <maxcan> sounds good to me
[23:14] <lazypower> ^ this suggests it was on its way back in but the tribunal has ruled against it?
[23:15] <lazypower> marcoceppi: wait, so with that being said, does that mean it's a safe assumption that my charms are going to be deployed on ubuntu only, and if i make that assumption just plug it into the readme and go?
[23:15] <marcoceppi> lazypower: charms are currently tied to a series
[23:16] <lazypower> Oh man, i love that
[23:16] <lazypower> i don't know why i didn't think of that earlier, i was trying to get heady with supporting some really ancient stuff.
[23:16] <marcoceppi> lazypower: we may be adding additional support for OS outside of Ubuntu, but that's why charms are locked to a series branch, ie precise
[23:16] <marcoceppi> lazypower: either way, charms will always denote what platform they support
[23:16] <lazypower> code cleanup incoming in light of recent news.