[01:04] thumper: pong
[01:04] thumper: i assume you found your answer by now? i saw a fairly lengthy thread with whit
[01:35] lazyPower: yeah, I have a way forward
=== scuttlemonkey is now known as scuttle|afk
=== zz_CyberJacob is now known as CyberJacob
[06:53] Hi folks, is there any documentation or example charms I can look at for generic/recommended ways of supporting multiple install sources (e.g. package, install from source, install from a standard python source bundle, etc.), a la the newer openstack charms?
=== CyberJacob is now known as zz_CyberJacob
[08:18] lathiat: hey, are you still around?
[08:21] marcoceppi: Will https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/fix_cron_path/+merge/260696 address https://bugs.launchpad.net/charms/+source/ubuntu-repository-cache/+bug/1455649?
[08:21] Bug #1455649: ubuntu-repository-cache: hard-coded cron path to juju-run is wrong for juju v1.24
[08:45] Does Juju ensure that only one hook can be running at a time?
[08:47] jose: yeah
=== mgz is now known as mgz_
=== liam_ is now known as Guest92132
=== Johncr1 is now known as Syed_
=== Syed_ is now known as Syed_A
=== mimi is now known as merkurus
=== scuttle|afk is now known as scuttlemonkey
=== f2_ is now known as f2
[13:10] Odd_Bloke: yes, hooks are executed serially on each machine
[13:12] marcoceppi: Thanks; that explains the behaviour I was seeing.
[14:06] hi
=== brandon is now known as web
[14:57] hi all
[14:57] i can't see the video :(
[15:04] hi
[15:06] hi
[15:06] why am i on juju???
[15:06] what is the Community Team Q&A irc channel?
[15:08] BQ announced a convergence product too
[15:11] guys.. the ubuntu on air channel is #ubuntu-on-air
=== dang is now known as Guest19219
[15:19] QUESTION: Why did you guys ditch the "Love the bottom edge" design direction from Ubuntu for phones? Ubuntu now has an edge that is not used all that much, not to mention that there is no longer an exit option for apps.
=== zz_CyberJacob is now known as CyberJacob
[15:44] Does anyone know if there is any way to manage juju services across environments?
[15:45] Our business is trying to deploy a lot of our stuff on Amazon, but some stuff is required to be on bare metal servers in our own vpc.
[15:46] So I was wondering if we could manage these services with juju both over maas and amazon, but so far I haven't found anything saying that's possible.
[15:49] Anyone?
[15:51] marcoceppi, ^ arosales ^ cross env juju?
[15:52] http://curtis.hovey.name/2014/06/10/building-trans-cloud-environments-with-juju/
=== kadams54 is now known as kadams54-away
[15:54] med_, hello
[15:55] supereman16, at this time spanning services across different environments is a feature juju core is still working on
[15:55] 'k
[15:56] supereman16, a good feature, just still being worked on.
[15:56] Sad, any sort of ETA?
[15:58] supereman16, to be clear, juju can of course manage envX in maas and envY in amazon, but cross-environment -- meaning, in one envZ, service 1 in AWS related to service 2 in maas -- is still a feature in progress
[15:59] supereman16, not sure on an ETA. I hope by 16.04
[15:59] Ok. Yeah. That would be nice.
[15:59] for sure
=== kadams54-away is now known as kadams54
[16:06] med_, thanks for the link, looks interesting. I think I'll play around with that. :)
[16:10] just a google result, not an endorsement, supereman16
[16:12] med_, but it looks interesting and I hadn't found it. Thanks anyway.
[16:12] yw, caveat emptor
=== merkurus_ is now known as merkurus
[17:08] What's the process for getting an MP into charm-helpers?
[17:08] (Specifically: https://code.launchpad.net/~daniel-thewatkins/charm-helpers/lp1370053/+merge/260864)
[17:15] marcoceppi: correct me if i'm wrong, but i believe that they need to propose against charm-helpers/next, right?
[17:16] Odd_Bloke: ^
[17:16] i'm pretty sure that's the baseline: propose against charmhelpers-next, and our devx maintainers will take a look during their review queue time slot
[17:16] s/-next/\/next/
[17:26] lazyPower: I can't find a likely looking charm-helpers/next.
[17:26] tvansteenburgh: ping
[17:27] Odd_Bloke: let me ping and see if i can't find a proper answer for you - sorry about my lack of knowledge here
[17:27] Odd_Bloke: but as it stands w/ that MP open, it should be on the docket regardless; looking more in terms of next time around
[17:27] lazyPower: No worries; I'm EOD'ing anyway, so I'll read whatever is said tomorrow. :)
[17:27] there is no next, propose against trunk
[17:28] tvansteenburgh: trunk == lp:charm-helpers?
[17:28] yes
[17:28] tvansteenburgh: Thanks! :)
[17:28] (So much for reading it tomorrow ;)
[17:28] sure thing :)
[17:29] tvansteenburgh: thanks for following up on that.
[17:29] np
[17:29] tvansteenburgh: is the /next thing only an idiom for openstack then?
[17:29] yeah
[17:29] * tvansteenburgh wanders off to lunch
[17:29] Ok, I'm going crazy then :)
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== liam_ is now known as Guest52471
=== kadams54 is now known as kadams54-away
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
[19:55] marcoceppi, so i'm trying to systematically compare series in a future-proof way in test writing. i.e. tests need to do more or do less, or do things slightly differently, depending on ubuntu release.
[19:55] beisner: so just special cases, like "if series > trusty, do this"?
[19:55] right
[19:56] where series is the series of the deployment?
[19:56] right, not necessarily matching the charm series
[19:56] before pulling in distro_info python module usage, where I know we can do version comparisons, i'm wondering if amulet already has a way to do series comparison?
[19:57] as Wily just overlapped with Warty, alphabetical comparisons no longer mean much
[19:57] fyi, distro_info.UbuntuDistroInfo.all() spits out the following, so that would be pretty easy to incorporate. just don't want to reinvent a wheel.
[19:57] ['warty', 'hoary', 'breezy', 'dapper', 'edgy', 'feisty', 'gutsy', 'hardy', 'intrepid', 'jaunty', 'karmic', 'lucid', 'maverick', 'natty', 'oneiric', 'precise', 'quantal', 'raring', 'saucy', 'trusty', 'utopic', 'vivid', 'wily']
[19:59] beisner: there's no notion of series-based testing in amulet at all, but it seems like a useful thing to have
[19:59] esp. when we move series out of Deployment() and instead use it as an environment variable
[20:09] marcoceppi, in this case, i have a specific need for tests to behave differently >= vivid. upstart vs systemd.
[20:10] i.e. on < vivid, we check system service status on each unit via `status <service>`, but on >= vivid, that needs to be `service <service> status`.
[20:11] beisner, what about that enum in charm-helpers?
[20:11] so it's a pretty simple pivot point, and could be resolved easily now; i just want to choose an approach that survives.
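
(A minimal sketch of the index-based series comparison discussed above, in Python. It assumes the python3-distro-info package; `series_ge` and the `mysql` service name are illustrative, not part of amulet or charm-helpers, and `all` is a property on UbuntuDistroInfo in current releases of the module, though some older versions exposed it as a method. Comparing codename strings alphabetically breaks once wily wraps past warty, which is why the position in the ordered release list is compared instead.)

    from distro_info import UbuntuDistroInfo

    def series_ge(series, baseline):
        """True if `series` is the same Ubuntu release as `baseline` or newer."""
        releases = UbuntuDistroInfo().all  # codenames ordered oldest -> newest
        return releases.index(series) >= releases.index(baseline)

    # The pivot point from the discussion: vivid and later boot with systemd,
    # earlier releases with upstart, so the service-status command differs.
    series = 'wily'  # in a real test this would come from the deployment
    if series_ge(series, 'vivid'):
        status_cmd = 'service mysql status'
    else:
        status_cmd = 'status mysql'
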
[20:12] beisner, I was thinking about the last 2 functions here: http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/openstack/amulet/deployment.py
[20:12] coreycb, yep, looking at extending that.
[20:13] or rather, using the data in that dict
[20:13] beisner, not sure how to future-proof though without knowing names, so I think it'd need an update when we find out
[20:13] coreycb, distro_info.UbuntuDistroInfo.all() is maintained and always returns an ordered list
[20:14] beisner, nice
[20:15] so for the purpose of "if series >= 'vivid':", the existing dict data could be inspected.
[20:15] and we'll likely have to maintain that UbuntuRelease:OpenstackRelease table anyway
=== kadams54-away is now known as kadams54
=== natefinch is now known as natefinch-afk
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
[22:12] marcoceppi, do you know if the mysql charm can do multi-master?
[22:19] marcoceppi, I think the current charm, https://jujucharms.com/mysql/trusty/25, only supports multi-slave, but thought I would check here.
[22:53] arosales: it doesn't support multi-master in its current form
[22:57] lazyPower, ack
[22:57] and thanks
[23:00] np
[23:00] * lazyPower hattips
[23:14] anyone running into issues with juju failing to create containers?
[23:14] http://paste.ubuntu.com/11530516/
[23:15] this is with MAAS, and i've hit this with both trusty and precise
[23:15] it just dumps the entire script into the agent-state-info :\
[23:20] whoa, and this is on -stable
[23:20] stokachu: no, that's a new one on me. I haven't been using it with MAAS recently, however.
[23:21] yea, this is the first i've seen it too; makes me wonder, though, because i was using the same bits yesterday with no problems
[23:21] wondering if images were recently updated
[23:23] stokachu: everything worked for you the day before just fine?!
[23:23] yea, i ran it yesterday all day
[23:24] stokachu: it was broken for me yesterday after 4pm EST
[23:24] stokachu: so is this an lxc bug or a juju bug? I'm guessing start with lxc?
[23:25] i'm trying to find where the lxc images that juju uses live
[23:25] see if it was recently uploaded
[23:25] stokachu: since you're not using proxies in your env, it's actually more likely lxc
[23:27] mahmoh: https://bugs.launchpad.net/juju-core/+bug/1417594
[23:27] Bug #1417594: failure to retrieve the template to clone: lxc container with 1.22 beta2
[23:27] says it was fixed though
[23:28] when, where though?
[23:28] * mahmoh guesses 1.24
[23:28] the milestone says 1.23-beta1
[23:28] reported back in Feb
[23:28] so it should be included in the current juju
[23:32] stokachu: could you check on one of your problem nodes if "cloud-init" is installed? it should be; if not, that's the problem (cloud-archive meta problem)
[23:32] pls ^
[23:33] stokachu: my env is remote and a watch-me type, so I'm avoiding it until I have a fix
[23:33] mahmoh: it doesn't even get that far
[23:33] the template has cloud-init
[23:34] but the cloning of that template is failing
[23:35] stokachu: but you can download the template where I cannot, right? So maybe a slightly different problem
[23:36] ah
[23:36] yea, it looks like it downloads the template but the actual lxc-clone is failing
[23:37] I don't know which problem I'd rather have; are you out of disk space by any chance?
[23:37] nah, i'm on 2T
[23:37] i did run out of ips again :(
[23:37] but fixed that
[23:39] lol
[23:39] IPs
[23:39] this isn't your problem, is it: https://bugs.launchpad.net/lxc/+bug/1410876
[23:39] Bug #1410876: Error executing lxc-clone: lxc_container: utils.c: mkdir_p 220 Not a directory - Could not destroy snapshot %s - failed to allocate a pty; Insufficent
[23:39] privileges to control juju-trusty-lxc-template
[23:40] doesn't look like it
[23:41] stokachu: this might be my problem: https://bugs.launchpad.net/lxc/+bug/1331920
[23:41] Bug #1331920: keyserver workarounds in templates/lxc-download.in not accessible
[23:41] looks promising