=== negronjl_ is now known as negronjl
=== ejat- is now known as ejat
[09:59] niemeyer, g'morning
[09:59] hazmat: Hey!
[09:59] * hazmat is up early to take out the afternoon for a dentist
[10:00] fwereade, rog greetings
[10:03] morning hazmat
[10:05] mornings :)
[10:11] hazmat, fwereade, niemeyer: hey!
[10:12] i saw previous exchange but unity had crashed so i couldn't type anything
[10:12] guess what, i've installed oneiric
[10:13] (probably my fault though for trying to tweak things with ccsm)
[11:54] fwereade, thanks for having a look
[11:54] at the txzk stuff
[12:03] <_mup_> txzookeeper/session-and-conn-fail r62 committed by kapil.foss@gmail.com
[12:03] <_mup_> save the chaos monkey via s/loose/lose ;-)
[12:15] <_mup_> txzookeeper/session-and-conn-fail r63 committed by kapil.foss@gmail.com
[12:15] <_mup_> simplify pass through property, reenable retry function test, use min() instead of condition, per review comments
[12:24] <_mup_> txzookeeper/session-and-conn-fail r64 committed by kapil.foss@gmail.com
[12:24] <_mup_> verify session event sequence per review comment
[12:40] <_mup_> juju/session-expiration r408 committed by kapil.thangavelu@canonical.com
[12:40] <_mup_> incorporate session expiration in process handling and retry client
[14:30] <_mup_> juju/local-repo-log-broken-charm r413 committed by kapil.thangavelu@canonical.com
[14:30] <_mup_> metadata parse enriches yamlerror with path info
[14:46] * hazmat wishes lp bug search would handle comments
[15:02] hey guys - I'm looking to work out how best to organise a system for managing my ubuntu installations.
[15:03] What resource should I look at to find out more about running a management system on my own private infrastructure?
[15:09] andylockran, that sounds like a question better suited to #ubuntu-server ..
it depends on what you're looking for re 'management'; if you mean deployment automation, orchestra is nice; if you mean automated machine management, there are a number of closed and open-source tools that might fit the bill, from landscape to puppet.. juju itself is focused at a higher level of service management and orchestration
[16:10] SpamapS: hey don't forget to ask that scale-buddy of yours about Charm School
[16:11] yes, charm school.
[16:13] :)
[16:29] <_mup_> Bug #887644 was filed: juju/go: fixes for new error interface < https://launchpad.net/bugs/887644 >
[16:34] oooh, golang now in juju - I'd not realised it had started already :)
[16:35] noodles775: early stages yet!
[16:39] noodles775: ...and not targeted for production release in 12.04 ;-)...but should be a sweet tech preview for sure
[16:41] m_3: ping
[17:00] * hazmat yawns
[17:00] fwereade, reply sent
[17:01] * hazmat heads off to discover the tender mercy of a dentist
[17:16] hey folks
[17:17] using lxc provider, juju ssh is giving me an error: "PTY allocation request failed on channel 0"
[17:18] it worked fine while at UDS last week
[17:20] bloodearnest, this seems to be a common problem out there, in terms of googling on that error. i haven't seen it myself however
[17:20] eg, http://blog.asteriosk.gr/2009/02/20/pty-allocation-request-failed-on-channel-0/ discusses this issue
[17:21] jimbaker, hey, yeah googling suggests something incorrect with /dev/tty setup
[17:21] jimbaker: strange thing is it worked all last week fine :(
[17:21] bloodearnest, precisely. i don't think it has anything to do with our lxc stuff, other than it seems like a possible resource exhaustion issue
[17:22] jimbaker: k, I'll try a reboot and see if that clears it up. Been a while anyway... :)
[17:23] bloodearnest, ok, well that can fix some issues like this, but not ideal of course
[17:23] but it will tell us something
[17:25] jimbaker: any other route you'd suggest?
I double-checked running processes, but nothing seemed awry
[17:25] bloodearnest, from what i read on the problem, that would not be an issue
[17:31] jimbaker: lsof | grep lxc tells me that a single lxc-start has /dev/pts/[2-6] open
[17:34] bloodearnest, but that's a very minimal number of ptys to have open
[17:34] jimbaker: yep
[17:43] <_mup_> juju/support-num-units r412 committed by jim.baker@canonical.com
[17:43] <_mup_> Support in juju add-units
[17:52] jimbaker: fwiw, reboot didn't help. Fresh environment, trivial charm, still no ssh :(
[17:52] jimbaker: gotta EOD, but thanks for your help!
[17:57] niemeyer: here's a first skeleton of the juju provider-independent tests interface.
[17:57] http://paste.ubuntu.com/732246/
[17:57] bloodearnest, just pointing out some possibilities. easier if i had the problem myself :(
[17:57] comments welcome
[17:58] rog: I don't understand what the interface is doing there
[17:58] jimbaker: sure - thanks anyway :)
[17:58] niemeyer: it's to enable a given environ provider to run a set of standard juju tests against itself
[17:59] niemeyer: as we discussed AFAIR
[17:59] rog: what about testProvider(provider)?
[17:59] niemeyer: i wanted to do that, but gocheck makes it hard.
[17:59] rog: It's a function..?
[18:00] because gocheck registration is global
[18:00] maybe we could fix that, but i thought i'd try to work within existing constraints first
[18:00] rog: It's even easier than that..
[18:00] rog: Create a suite, and simply register it for each provider
[18:01] rog: Suite(&ProviderSuite{theProvider})
[18:01] not so easy. but i've no time to explain now, gotta go.
[18:01] rog: and all the tests will be run independently with the provider set
[18:01] rog: Duh
[18:01] two places need to call TestingT
[18:01] rog: Nope
[18:02] because we've got two totally independent sets of tests
[18:02] rog: They're simply two suites
[18:02] yes, that's what i want
[18:02] rog: Suite(&ProviderSuite{provider1})
[18:02] but who calls TestingT
[18:02] rog: Suite(&ProviderSuite{provider2})
[18:02] ?
[18:02] rog: A single independent function, like all uses of gocheck
[18:03] should jujutest call TestingT?
[18:03] i didn't think it should, but perhaps that's the way to go
[18:03] rog: Not sure what jujutest is
[18:03] jujutest is called by a provider
[18:03] rog: The provider package should do it
[18:04] rog: I'd have to know more about the structure to help in that case
[18:04] ok, then how does jujutest register its test suite?
[18:04] rog: It's feeling more complex than it ought to be..
[18:04] rog: I don't know what jujutest is
[18:05] niemeyer: jujutest is the platform-independent testing part of juju
[18:05] rog: All the tests of juju are platform independent
[18:05] niemeyer: i don't want it to be part of juju itself because it's only about testing
[18:05] rog: Let's talk about that when you have more time
[18:05] yeah, speak tomorrow
[18:06] rog: Have a good one
[18:30] niemeyer: will do. and you!
[18:44] hey again, I have an lxc specific question, but maybe you can help me out anyway?
I have created an lxc container, which I was able to log in to successfully; however, after applying updates (it was a fresh lucid install) it now no longer completes booting, and I have no idea how to find out why
[18:45] I managed to get some log output from it, but it's not very enlightening
[18:45] it looks as if init never spawns the getty processes
[18:46] and I see this:
[18:46] init: ureadahead-other main process (31) terminated with status 4
[18:46] init: console-setup main process (32) terminated with status 1
[18:46] init: ureadahead-other main process (37) terminated with status 4
[19:19] pindonga: I'm not sure lucid is going to work... LXC was pretty new then.
[19:19] SpamapS, I just found out something interesting
[19:20] I configured the network manually with a static address
[19:20] ssh comes up ok
[19:20] but the console does not
[19:20] which even if it's broken works for me
[19:20] I use the lxc container as a glorified chroot :)
[19:20] pin/whois pindonga
[19:20] hahahaha doh
[19:20] \o
[19:20] * pindonga waves
[19:21] wondered if we met last week ;)
[19:24] pindonga: so are you running juju inside an LXC container, spawning more LXC containers?
[19:25] no, not *that* crazy :)
[19:25] in the short term I just tried to use juju to manage my lxc instances
[19:25] so I can do development in an isolated environment
[19:26] however, juju+lxc doesn't survive reboots yet
[19:26] so I moved on to just using plain lxc
[19:26] in the long term I want to move all of our infrastructure to juju (lxc locally and openstack for deployments)
[20:01] * hazmat catches up
[20:04] bloodearnest, so completely anecdotal, but i had some issues with pty allocation for ssh on lxc; reboots would help.. but once it failed it would never work. they did effectively disappear though once i yanked my virtualbox install and kernel modules..
i'd be curious to look at your lsmod output
[20:10] jimbaker, ping
[20:12] pindonga, this isn't really juju related, but you can start the container with a debug log and pass cli options to /sbin/init (i.e. upstart) to get verbose logging as well; that might help... imo, it's probably just better to focus on 11.10 with an eye to deploying 12.04 lts ... but if you really want it ;-)
[20:13] hazmat, yes, I was looking at that... thx
[20:27] hazmat, hi
[20:33] coffee time, biab
[20:46] jimbaker, priv msg
[21:31] <_mup_> juju/support-num-units r413 committed by jim.baker@canonical.com
[21:31] <_mup_> Support in juju deploy
[22:15] jimbaker, nice
[22:43] can we make the bot also tell us when trunk is committed to?
[23:23] <_mup_> juju/support-num-units r415 committed by jim.baker@canonical.com
[23:23] <_mup_> PEP8, PyFlakes
[23:48] <_mup_> juju/support-num-units r416 committed by jim.baker@canonical.com
[23:48] <_mup_> Better help output for CLI changes
[23:53] SpamapS, it's client side, the code is avail
[23:53] although it's erlang as i recall, not sure where it's deployed
[23:53] hazmat: thanks for your tip. Ok - I'll look into that - thanks.
[23:54] SpamapS, https://launchpad.net/mup
[23:56] andylockran, np