[01:04] <lazyPower> thumper: pong
[01:04] <lazyPower> thumper: i assume you found your answer by now? i saw a fairly lengthy thread with whit
[01:35] <thumper> lazyPower: yeah, I have a way forward
[06:53] <lathiat> Hi folks.. is there any documentation/example charms I can look at for generic/recommended ways of supporting multiple install sources (e.g. package, install from source, install from a standard Python source bundle, etc.), a la the newer OpenStack charms?
[08:18] <jose> lathiat: hey, are you still around?
[08:21] <Odd_Bloke> marcoceppi: Will https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/fix_cron_path/+merge/260696 address https://bugs.launchpad.net/charms/+source/ubuntu-repository-cache/+bug/1455649?
[08:21] <mup> Bug #1455649: ubuntu-repository-cache: hard-coded cron path to juju-run is wrong for juju v1.24 <ubuntu-repository-cache (Juju Charms Collection):In Progress by daniel-thewatkins> <https://launchpad.net/bugs/1455649>
[08:45] <Odd_Bloke> Does Juju ensure that only one hook can be running at a time?
[08:47] <lathiat> jose: yeah
[13:10] <marcoceppi> Odd_Bloke: yes, hooks are executed serially on each machine
[13:12] <Odd_Bloke> marcoceppi: Thanks; that explains the behaviour I was seeing.
[14:06] <nunutu> hi
[14:57] <kingsman_> hi all
[14:57] <kingsman_> i can't see the video :(
[15:04] <Battaglin_> hi
[15:06] <wakawaka_> hi
[15:06] <wakawaka_> why am i on juju???
[15:06] <wakawaka_> what is the Community Team Q&A irc channel?
[15:08] <yoann54> BQ announced a convergence product too
[15:11] <wakawaka_> guys.. ubuntu on air channel is #ubuntu-on-air
[15:19] <Silviu> QUESTION: Why did you guys ditch the "Love the bottom edge" design direction from Ubuntu for phones? Ubuntu now has an edge that's not used all that much, not to mention that there is no longer an exit option for apps.
[15:44] <supereman16> Does anyone know if there is any way possible to manage juju services across environments?
[15:45] <supereman16> Our business is trying to deploy a lot of our stuff on Amazon, but some stuff is required to be on bare-metal servers in our own VPC.
[15:46] <supereman16> So I was wondering if we could manage these services with juju both over MAAS and Amazon, but so far I haven't found anything saying that's possible
[15:49] <supereman16> Anyone?
[15:51] <med_> marcoceppi, ^ arosales ^ cross env juju?
[15:52] <med_> http://curtis.hovey.name/2014/06/10/building-trans-cloud-environments-with-juju/
[15:54] <arosales> med_, hello
[15:55] <arosales> supereman16, at this time spanning services across different environments is a feature juju core is still working on
[15:55] <med_> 'k
[15:56] <arosales> supereman16, a good feature, just still being worked on.
[15:56] <supereman16> Sad, any sort of eta?
[15:58] <arosales> supereman16, to be clear, juju can of course manage envX in MAAS and envY in Amazon, but cross-environment -- meaning in one envZ, service 1 in AWS related to service 2 in MAAS -- is still an in-progress feature
[15:59] <arosales> supereman16, not sure on an eta. I hope by  16.04
[15:59] <supereman16> Ok. Yeah. That would be nice.
[15:59] <arosales> for sure
[16:06] <supereman16> med_, thanks for the link, looks interesting. I think I'll play around with that. :)
[16:10] <med_> just a google result, not an endorsement supereman16
[16:12] <supereman16> med_, but it looks interesting and I hadn't found it. Thanks anyway.
[16:12] <med_> yw, caveat emptor
[17:08] <Odd_Bloke> What's the process for getting a MP in to charm-helpers?
[17:08] <Odd_Bloke> (Specifically: https://code.launchpad.net/~daniel-thewatkins/charm-helpers/lp1370053/+merge/260864)
[17:15] <lazyPower> marcoceppi: correct me if i'm wrong, but i believe that they need to propose against charm-helpers/next right?
[17:16] <lazyPower> Odd_Bloke: ^
[17:16] <lazyPower> i'm pretty sure that's the baseline: propose against charmhelpers-next, and our devx maintainers will take a look during their review queue time slot
[17:16] <lazyPower> s/-next/\/next/
[17:26] <Odd_Bloke> lazyPower: I can't find a likely looking charm-helpers/next.
[17:26] <lazyPower> tvansteenburgh: ping
[17:27] <lazyPower> Odd_Bloke: let me ping around and see if I can't find a proper answer for you - sorry about my lack of knowledge here
[17:27] <lazyPower> Odd_Bloke: but as it stands, with that MP open it should be on the review docket regardless; this is more about knowing the process for next time around
[17:27] <Odd_Bloke> lazyPower: No worries; I'm EOD'ing anyway, so I'll read whatever is said tomorrow. :)
[17:27] <tvansteenburgh> there is no next, propose against trunk
[17:28] <Odd_Bloke> tvansteenburgh: trunk == lp:charm-helpers?
[17:28] <tvansteenburgh> yes
[17:28] <Odd_Bloke> tvansteenburgh: Thanks! :)
[17:28] <Odd_Bloke> (So much for reading it tomorrow ;)
[17:28] <tvansteenburgh> sure thing :)
[17:29] <lazyPower> tvansteenburgh: thanks for following up on that.
[17:29] <tvansteenburgh> np
[17:29] <lazyPower> tvansteenburgh: is the /next thing only an idiom for openstack then?
[17:29] <tvansteenburgh> yeah
[17:29]  * tvansteenburgh wanders off to lunch
[17:29] <lazyPower> Ok, I'm going crazy then :)
[19:55] <beisner> marcoceppi, so i'm trying to systematically compare series in a future-proof way when writing tests. i.e. tests need to do more or do less, or do things slightly differently, depending on the ubuntu release.
[19:55] <marcoceppi> beisner: so just special cases where like if series > trusty do this?
[19:55] <beisner> right
[19:56] <marcoceppi> where series is the series of the deployment?
[19:56] <beisner> right, not necessarily matching the charm series
[19:56] <beisner> before pulling in distro_info python module usage, where I know we can do version comparisons, i'm wondering if amulet already has a way to do series comparison?
[19:57] <beisner> as Wily just wrapped the alphabet back around to Warty's letter, alphabetical comparison no longer means much
[19:57] <beisner> fyi, distro_info.UbuntuDistroInfo.all() spits out the following, so that would be pretty easy to incorporate.  just don't want to reinvent a wheel.
[19:57] <beisner> ['warty', 'hoary', 'breezy', 'dapper', 'edgy', 'feisty', 'gutsy', 'hardy', 'intrepid', 'jaunty', 'karmic', 'lucid', 'maverick', 'natty', 'oneiric', 'precise', 'quantal', 'raring', 'saucy', 'trusty', 'utopic', 'vivid', 'wily']
[19:59] <marcoceppi> beisner: there's no notion of series based testing in amulet at all, but seems like a useful thing to have
[19:59] <marcoceppi> esp when we move series out of Deployment() and instead use as an environment variable
[20:09] <beisner> marcoceppi, in this case, i have a specific need for tests to behave differently >= vivid.   upstart vs systemd.
[20:10] <beisner> i.e. on < vivid, we check system service status on each unit via `status <service-name>`, but on >= vivid, that needs to be `service <service-name> status`.
[20:11] <coreycb> beisner, what about that enum in charm-helpers?
[20:11] <beisner> so it's a pretty simple pivot point, and could be resolved easily now, just want to choose an approach that survives.
[20:12] <coreycb> beisner, I was thinking about the last 2 functions here: http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/openstack/amulet/deployment.py
[20:12] <beisner> coreycb, yep, looking at extending that.
[20:13] <beisner> or rather, using the data in that dict
[20:13] <coreycb> beisner, not sure how to future proof though without knowing names, so I think it'd need an update when we find out
[20:13] <beisner> coreycb, distro_info.UbuntuDistroInfo.all() is maintained and always returns an ordered list
[20:14] <coreycb> beisner, nice
[20:15] <beisner> so for the purpose of `if series >= 'vivid':`, the existing dict data could be inspected.
[20:15] <beisner> and we'll likely have to maintain that UbuntuRelease:OpenstackRelease  table anyway
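[Editor's note: a minimal sketch of the series comparison discussed above. It hardcodes the ordered list that `distro_info.UbuntuDistroInfo.all()` was pasted as returning earlier in the log, so it stands alone without the `distro_info` package; the helper names `series_at_least` and `service_status_cmd` are hypothetical, not part of amulet or charm-helpers.]

```python
# Ordered Ubuntu series, as pasted from distro_info.UbuntuDistroInfo.all()
# above. Using release order instead of string comparison avoids the
# alphabet-wraparound problem beisner mentions (wily vs warty).
UBUNTU_SERIES = [
    'warty', 'hoary', 'breezy', 'dapper', 'edgy', 'feisty', 'gutsy',
    'hardy', 'intrepid', 'jaunty', 'karmic', 'lucid', 'maverick',
    'natty', 'oneiric', 'precise', 'quantal', 'raring', 'saucy',
    'trusty', 'utopic', 'vivid', 'wily',
]

def series_at_least(series, minimum):
    """True if `series` is the same Ubuntu release as `minimum` or newer."""
    return UBUNTU_SERIES.index(series) >= UBUNTU_SERIES.index(minimum)

def service_status_cmd(series, service):
    """Pick the status command: upstart (< vivid) vs systemd (>= vivid)."""
    if series_at_least(series, 'vivid'):
        return 'service {} status'.format(service)
    return 'status {}'.format(service)
```

In a real test the hardcoded list would be replaced by the `distro_info` module's output so new series are picked up automatically.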
[22:12] <arosales> marcoceppi, do you know if the mysql charm can do multi-master?
[22:19] <arosales> marcoceppi, I think the current charm, https://jujucharms.com/mysql/trusty/25, only supports multi-slave but thought I would check here.
[22:53] <lazyPower> arosales: it doesn't support multi-master in its current form
[22:57] <arosales> lazyPower, ack
[22:57] <arosales> and thanks
[23:00] <lazyPower> np
[23:00]  * lazyPower hattips
[23:14] <stokachu> anyone running into issues with juju failing to create containers
[23:14] <stokachu> http://paste.ubuntu.com/11530516/
[23:15] <stokachu> this is with MAAS and I've hit this with both trusty and precise
[23:15] <stokachu> it just dumps the entire script into the agent-state-info :\
[23:20] <lazyPower> whoa, and this is on -stable
[23:20] <lazyPower> stokachu: no thats a new one on me. I haven't been using with MAAS recently however.
[23:21] <stokachu> yea this is the first time I've seen it too; makes me wonder though b/c i was using the same bits yesterday with no problems
[23:21] <stokachu> wondering if images were recently updated
[23:23] <mahmoh> stokachu: everything worked for you the day before just fine?!
[23:23] <stokachu> yea i ran it yesterday all day
[23:24] <mahmoh> stokachu: it was broken for me yesterday after 4pm EST
[23:24] <mahmoh> stokachu: so is this an lxc bug or juju bug, I'm guessing start with lxc?
[23:25] <stokachu> im trying to find where the images are for lxc that juju uses
[23:25] <stokachu> see if it was recently uploaded
[23:25] <mahmoh> stokachu: since you're not using proxies in your env, and actually so more likely lxc
[23:27] <stokachu> mahmoh: https://bugs.launchpad.net/juju-core/+bug/1417594
[23:27] <mup> Bug #1417594: failure to retrieve the template to clone: lxc container with 1.22 beta2 <lxc> <oil> <juju-core:Fix Released by wallyworld> <juju-core 1.22:Fix Released by wallyworld> <https://launchpad.net/bugs/1417594>
[23:27] <stokachu> says it was fixed though
[23:28] <mahmoh> when where though?
[23:28]  * mahmoh guesses 1.24
[23:28] <stokachu> the milestone says 1.23-beta1
[23:28] <mahmoh> reported back in Feb
[23:28] <stokachu> so it should be included in the current juju
[23:32] <mahmoh> stokachu: could you check on one of your problem nodes whether cloud-init is installed? It should be; if it's not, that's the problem (a cloud-archive metadata problem)
[23:32] <mahmoh> pls ^
[23:33] <mahmoh> stokachu: my env is remote and a watch-me-type so I'm avoiding it until I have a fix
[23:33] <stokachu> mahmoh: it doesn't even get that far
[23:33] <stokachu> the template has cloud-init
[23:34] <stokachu> but the cloning of that template is failing
[23:35] <mahmoh> stokachu: but you can download the template where I cannot, right?  So maybe a slightly different problem
[23:36] <stokachu> ah
[23:36] <stokachu> yea it looks like it downloads the template but the actual lxc clone is failing
[23:37] <mahmoh> I don't know which problem I'd rather have. Are you out of disk space by any chance?
[23:37] <stokachu> nah im on 2T
[23:37] <stokachu> i did run out of ips again :(
[23:37] <stokachu> but fixed that
[23:39] <mahmoh> lol
[23:39] <mahmoh> IPs
[23:39] <mahmoh> this isn't your problem is it: https://bugs.launchpad.net/lxc/+bug/1410876
[23:39] <mup> Bug #1410876: Error executing lxc-clone: lxc_container: utils.c: mkdir_p 220 Not a directory - Could not destroy  snapshot %s - failed to allocate a pty; Insufficent
[23:39] <mup> privileges to control  juju-trusty-lxc-template <lxc> <oil> <stakeholder-critical> <trusty> <juju-core:Triaged> <lxc:New> <https://launchpad.net/bugs/1410876>
[23:40] <stokachu> doesnt look like it
[23:41] <mahmoh> stokachu: this might be my problem: https://bugs.launchpad.net/lxc/+bug/1331920
[23:41] <mup> Bug #1331920: keyserver workarounds in templates/lxc-download.in not accessible <lxc:New> <https://launchpad.net/bugs/1331920>
[23:41] <stokachu> looks promising