[00:00] jcastro, there's one more branch before that works for ec2, i'm going to try and get a review on it today, but i've perused it and it looks pretty good already
[00:00] you can define the constraints on the service or unit level
[00:00] that is awesome because we just ran into this problem, so now I can see why we would need it in real practice
[00:07] d0od, we severely underestimated how huge OMG is
[00:07] I'm going to relaunch it with larges
[00:23] marcoceppi, how fast is the ec2->ec2 transfer?
[00:23] jcastro: probably 10-15Mb/s
[00:24] The meat of the OMG Site is about 2GB in size (images, plugins, themes, etc)
[00:25] Sorry, not 10-15, it's currently tracking at 60Mbps
[00:27] jcastro: Tear it down, everything is moved off
[00:28] on it
[00:28] If you want, you can use the patched branch that fixes the haproxy issue
[00:28] link me to it pls?
[00:28] lp:~marcoceppi/charms/oneiric/wordpress/haproxy-patch
[00:28] ta
[00:29] hey so the wife wants ice cream
[00:29] when I launch these
[00:29] I'll wait for the team to review, it's a minor change but I'm not sure if there's a *better* way to do it
[00:29] and get you in them can you drive for a while?
[00:29] I shouldn't be longer than 30
[00:29] yeah, no problem
[00:30] Launch and I can relate, move files, setup, etc
[00:30] bootstrapping
[00:31] d0od: I saw that you have cdn.omgubuntu.co.uk set up, but it looks like you're pointing it to the same server as the main site
[00:31] Do you have a CDN for OMG?
[00:31] man, waiting for bootstrap on ec2 vs. lxc is not a fun time
[00:32] I know, *keeps pinging status*
[00:32] The CDN was something our then web admin set up; I really don't know more than that
[00:33] shocking, I know ;)
[00:33] ha, no problem! just checking
[00:34] Do you have access to the DNS for the domain?
[00:35] ok deployed
[00:35] marcoceppi, not started yet
[00:35] marcoceppi: I do
[00:37] jcastro: it looks like they're finally coming up
[00:37] mysql is up
[00:37] just waiting on wp
[00:37] and ... up
[00:37] go.
[00:38] ok lmk when you have access to stuff, and then I'll pop out and be back in 30 or so
[00:38] No EC2 Instances selected.
[00:38] Select an instance above
[00:38] mispaste, sorry
[00:38] jcastro: I'm good! Enjoy
[00:38] bbiab!
[00:39] Transferring the database dump back over
[00:48] Okay, database is on the server, importing. WordPress is set up, haproxy in place, and the OMG content is being transferred over
[00:48] :D
[00:50] Once the content is extracted it'll be a quick skip and hop to a living site
[00:50] I can't thank you enough for doing this :)
[00:51] No problem! I hope to have this done before jcastro gets back
[00:53] Database imported, content synced. Moving all the images, plugins, and theme into place
[00:56] Okay, d0od, it's not pretty but: http://ec2-23-20-126-44.compute-1.amazonaws.com/ the post links don't work yet either. The images and layout will be fixed when we "redirect" the domain
[01:01] If you open your browser into porno mode, http://ec2-23-20-126-44.compute-1.amazonaws.com/ that should look like OMG Ubuntu!
[01:01] Images won't all be working yet, because of the domain name
[01:02] Just need to get the SEO-friendly URLs set up
[01:02] marcoceppi, back, wow that was fast!
[01:02] jcastro: slow poke
[01:02] :)
[01:02] It's always faster the second time around, when you know what to do
[01:03] heh
[01:03] tellin' ya, it's like 10 mins away from being a charm
[01:03] hey so how's the IO on the larges? Much better?
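For readers following along, here is a rough sketch of the redeploy being coordinated above, assuming the pyjuju-era CLI and a local charm repository; the wordpress branch URL is the one from the log, while the repository layout, the mysql branch location, and the service names are illustrative assumptions.

    # Branch the patched WordPress charm into a local charm repository.
    mkdir -p ~/charms/oneiric
    bzr branch lp:~marcoceppi/charms/oneiric/wordpress/haproxy-patch ~/charms/oneiric/wordpress
    # Assumed location of the stock mysql charm; adjust to wherever you keep it.
    bzr branch lp:charms/oneiric/mysql ~/charms/oneiric/mysql

    # Bootstrap on EC2 and deploy the stack from the local repository.
    juju bootstrap
    juju deploy --repository ~/charms local:oneiric/wordpress
    juju deploy --repository ~/charms local:oneiric/mysql
    juju add-relation wordpress mysql
    juju expose wordpress

    # Then wait for the units to come up (the slow part on EC2).
    juju status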
[01:03] smoooooooooth
[01:03] nice
[01:03] the old server was an 8-core beast
[01:03] ok so what do we need to do then ... just the redirect?
[01:03] one more thing
[01:03] post links aren't working
[01:03] but I think that's a misbehaving plugin
[01:03] yeah we should have looked at the hw it was on before deciding on smalls
[01:04] * marcoceppi nods sadly
[01:04] 1 large for a bootstrap node, heh
[01:05] well, Zookeeper should be happy
[01:07] So, I'm not sure how to fix the whole post-not-working issue, I'm going to try to create myself an account
[01:20] d0od, ok get ready
[01:20] * d0od readies self
[01:21] hazmat, any idea how we can move the bootstrap node and haproxy to smalls?
[01:21] or is it like "wait a week and redeploy"?
[01:25] jcastro, the bootstrap node isn't really resizable without additional work
[01:26] ok
[01:27] marcoceppi, about ready?
[01:27] the whole notion of resizing isn't really supported.. but if you want to try.. i'd first try stopping and restarting a non-bootstrap node, check if it comes up okay.. then snapshot, and boot the snapshot as a new instance, hmm.. i don't think it's going to work.
[01:27] yeah, just need to patch up the wordpress install so it functions 100%
[01:27] it's got to update the zk data with the new instance id or juju will terminate it as an unknown (if it shares the environment group, which it needs to for net security)
[01:27] ok so I think it's best to wait until the ability to specify a size per node lands
[01:27] then just redeploy?
[01:28] jcastro, yeah
[01:32] jcastro: found a few more things that need to be added to the wp charm, php-mail and sendmail are not installed
[01:32] .... adding to the doc
[01:51] d0od, ok wordpress problem, I texted SpamapS, he's on his way
[01:51] just this one thing left to fix
[01:53] marcoceppi, is there a way to turn off plugins from a config file or something?
[01:53] jcastro: use /mnt and backup/replicate aggressively ;)
[01:53] jcastro: /mnt is the instance storage on an EBS-root instance
[01:53] jcastro: I'm in the admin panel
[01:53] it's huge, and heavily outperforms EBS
[01:53] ... and we ran out of space on EBS anyway. :)
[01:53] so he moved it
[01:54] SpamapS: do all units have access to that same /mnt?
[01:54] marcoceppi: no definitely not
[01:55] damn, okay
[01:55] that is local to the instance
[01:55] that would be too easy
[01:55] marcoceppi, hey so how many plugins are in there?
[01:55] maybe we can shut them off
[01:55] and then turn them on one by one?
[01:55] 29 active
[01:55] hah, seriously
[01:56] marcoceppi: what's the trouble?
[01:56] shared image upload?
[01:56] well, 29 of 45 :)
[01:56] simplest thing would be to use the wordpress S3 plugin.
[01:56] which I used long before I had my blog in EC2 :)
[01:57] FIXED
[01:57] FIRE MISSILES
[01:58] but I am le tired
[01:58] url please?
[01:58] http://www.omgubuntu.co.uk/2012/03/instant-messaging-comes-to-thunderbird-13-speed-dial-to-firefox-13/
[01:59] db error
[01:59] there we go
[01:59] the tbird image is broken though
[02:00] file isn't on the server
[02:01] marcoceppi, your hackergotchi is showing up instead of joey, heh
[02:01] jcastro: yeah, refresh and I should be gone
[02:02] nvm, I'm still there, page is cached
[02:03] still getting db errors
[02:03] is the mysql box getting thrashed?
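On jcastro's question about turning plugins off outside the admin panel: WordPress keeps the active plugin list as a serialized PHP array in the wp_options table, so they can be switched off in bulk from MySQL. A hedged sketch, assuming the default wp_ table prefix and a database named wordpress; back up first, and re-enable plugins one at a time from the dashboard afterwards.

    # Back up the options table before touching it (names are illustrative).
    mysqldump wordpress wp_options > wp_options.backup.sql

    # See which plugins are currently active (stored as a PHP-serialized array).
    mysql wordpress -e "SELECT option_value FROM wp_options WHERE option_name = 'active_plugins';"

    # Deactivate everything in one go: 'a:0:{}' is the serialized empty array.
    mysql wordpress -e "UPDATE wp_options SET option_value = 'a:0:{}' WHERE option_name = 'active_plugins';"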
[02:04] the tuning on the default mysql is really low-scale
[02:04] There are some big knobs for bumping it up
[02:04] jcastro: connections are maxed out
[02:04] turning big knobs
[02:04] I believe that is one of them
[02:05] juju set max-connections=500 should work
[02:05] marcoceppi, you comfortable turning them up or want clint to have a look?
[02:06] Under normal circumstances I would say STAND BACK, but Clint I hear you're like a SQL wizard
[02:06] I can add your key to the db to take a look and work some magic
[02:06] cpu looks maxed on the wp one too according to the AWS meter
[02:07] jcastro: I'm going to drop a fix for that in a min
[02:07] see if we can't keep AWS from getting too pissed
[02:07] marcoceppi: I do have some mysql scaling experience yes. :)
[02:07] yes, let's do this
[02:08] marcoceppi: probably want to bump dataset-size > 1G unless the database is in fact ~= 1G
[02:08] db is about 1.2GB
[02:08] nice ok
[02:08] SpamapS: PM'd you the server address
[02:08] I haven't done much, but I was planning to move MySQL to a tmpdisk
[02:09] marcoceppi: tmpdisk? are we replicating or backing up aggressively at all?
[02:10] right now it's just one instance
[02:10] ok
[02:11] What I'd recommend is that we spin up another env on t1.micros in a different region...
[02:11] and manually set up replication to that.
[02:11] Any chance we started with t1.micros here? Because they have the advantage of being able to be rebooted into any other instance type
[02:12] we started with smalls
[02:14] jcastro: darn.
[02:15] ok so we need more instances?
[02:15] wow... MyISAM ..
[02:15] We can't tune the existing one?
[02:15] SpamapS: ah, I converted the first time around, hadn't done it for this time around
[02:15] jcastro: no, but that's a way around the "all larges" problem
[02:15] all but a few tables can be moved to InnoDB
[02:15] marcoceppi: was there a mysqldump ?
[02:15] because the default table type is InnoDB
[02:16] SpamapS: yeah it's in /usr/src
[02:16] Dump had them as MyISAM
[02:16] Because MyISAM will have table locks which will make comment traffic suck ass
[02:16] Probably means the original site had MyISAM
[02:16] let's not d*** with that
[02:16] comments are on Disqus
[02:16] Originally (OMGEC2-v1) I did a quick dump of the table names and update to InnoDB, a few tables have fulltext indexes (yuck) and can't be converted
[02:18] obal turned on slow query logging for a bit.. the thing looks healthy
[02:18] jcastro: score, ok
[02:19] jcastro: probably because the old comments were so damn slow.. ;)
[02:19] marcoceppi, is the db problem the reason the Continue Reading, etc don't work?
[02:19] 33qps .. 1GB db.. could have stayed on small. :)
[02:19] tho I suspect it gets a lot busier sometimes. :)
[02:19] we can move it to smalls tomorrow
[02:20] marcoceppi, ooh, cpu on wp going down
[02:20] hopefully if this caching works out like it's supposed to, we can move the site back down to a smaller instance and just use add-units when needed
[02:20] ^^
[02:20] that's why we went with smalls to begin with
[02:21] marcoceppi, ... and pegged again
[02:21] Yeah seems like without comments this site should be nearly static
[02:21] SpamapS: should
[02:21] comments are off-site with Disqus
[02:21] just a lot of plugins
[02:22] marcoceppi, hmm so what do you suppose is going on with the wp node?
[02:23] query cache is handling most of the mysql load...
[02:23] jcastro: php
[02:23] marcoceppi: apc?
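The tuning above maps onto the mysql charm's config options plus a storage-engine conversion. A hedged sketch of both steps, using the option names quoted in the log (max-connections, dataset-size) and an illustrative database name; check the dataset-size value format against the charm's config.yaml before using it.

    # Turn the charm's knobs (pyjuju syntax; "mysql" is the service name here).
    juju set mysql max-connections=500
    juju set mysql dataset-size=2G

    # Convert the MyISAM tables that came in with the dump to InnoDB, skipping
    # anything with a FULLTEXT index, which InnoDB of this era can't carry.
    mysql -N -e "SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'wordpress' AND ENGINE = 'MyISAM';" |
    while read t; do
        mysql wordpress -e "ALTER TABLE \`$t\` ENGINE=InnoDB;" || echo "skipped $t (probably FULLTEXT)"
    done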
[02:26] I think that's it
[02:26] * SpamapS installs mytop and enjoys the show
[02:26] jk, it's back
[02:26] ok, I need to run out and get dinner for the family.. you guys need anything else?
[02:27] ok so mysql is set?
[02:27] man
[02:28] that is waaaay faster
[02:28] mysql is doing nearly nothing
[02:28] man so all that mess was the ootb mysql being that horrible?
[02:28] Key Efficiency: 97.9% Bps in/out: 1.0/335.9 Now in/out: 8.3/ 3.2k
[02:29] 97.9% means that the indexes are all basically fitting in the anemic buffers that are left over because we tune for InnoDB by default
[02:29] I owed you one, <3
[02:29] jcastro: this has been so much fun.. I totally miss running a real site. ;)
[02:29] * SpamapS sighs
[02:30] Ok, I have to go get tacos. Will check back in later.
[02:30] well, if you want, you can fix php
[02:30] ok, cheers!
[02:30] VERY COOL btw
[02:30] install APC
[02:30] PHP should fly
[02:30] it seriously helps
[02:30] ok, out, bbl
[02:30] SpamapS: APC is already installed :)
[02:30] WHAT SAY YOU NOW!
[02:31] So we're pegging 4 CPUs w/ PHP? time to look at varnish :)
[02:31] * SpamapS gone
[02:44] varnish rocks
[02:45] too bad there's no charm
[02:45] hah
[02:45] although i've been using the built-in nginx cache facilities for more trivial caching recently
[02:46] it's hard to charm, i think; to allow for flexibility you end up needing to put vcl config down the relation.. for really basic upstream caching it's okay
[02:49] <_mup_> juju/scheduler-peek-list r474 committed by kapil.thangavelu@canonical.com
[02:49] <_mup_> new scheduler v2, much expanded error testing, now for version 3
[03:04] d0od, ok
[03:04] working
[03:04] the images weren't in the backup
[03:04] so marco is looking on the old server
[03:04] we can move DNS now if you want
[03:16] d0od: Images recovered
[03:16] everything's in place
[03:56] <_mup_> Bug #958312 was filed: Change zk logging configuration < https://launchpad.net/bugs/958312 >
[07:47] <_mup_> Bug #958378 was filed: juju/control should be aware of subordinates < https://launchpad.net/bugs/958378 >
=== Leseb_ is now known as Leseb
=== d0od_ is now known as d0od
[13:47] marcoceppi, alright, she's still up and running!
[13:47] jcastro: she lives!
[13:48] There are a few quirks with the WordPress charm that I was going over with d0od
[13:48] ah ok
[13:48] do we have that logged?
[13:49] it was in PM, I can put it in the doc
[13:50] k, whenevs
[13:50] hey I noticed search is wonky, probably like the other links where it needs the real domain I am guessing
[13:52] jcastro: Yeah, on my desktop (with host modification) it works fine. On my laptop without host modification it's hit or miss
[13:52] CPU on the wordpress nodes is spiking
[13:52] 13:52:44 up 13:17, 1 user, load average: 0.44, 0.56, 0.68
[13:52] wouldn't appear so?
[13:53] I am just looking back at the AWS graph
[13:53] ah
[13:53] http://i.imgur.com/Aw0FH.png
[13:53] there were a few spikes over 1
[13:56] marcoceppi, ok so he can get back in now?
[13:57] jcastro: Yeah, it was a DNS issue preventing him from logging in
[13:57] He's all set up
[14:02] Stupid Twitter thing doesn't work though
=== koolhead17|away is now known as koolhead17
[16:10] marcoceppi: good "morning" :)
[16:14] * hazmat digs himself out of the rabbit hole
[16:30] SpamapS: o/
[16:32] <_mup_> juju/scheduler-peek-list r475 committed by kapil.thangavelu@canonical.com
[16:32] <_mup_> rewrite of the relation hook scheduler
[16:39] marcoceppi: so, did you just accept that PHP was going to get pegged sometimes?
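The "host modification" mentioned at [13:52] is the usual pre-cutover trick: point the production hostnames at the new instance in your own /etc/hosts so WordPress sees the real domain before DNS is switched. A hedged sketch; the IP is the one embedded in the EC2 hostname earlier in the log, and the exact hostnames to map are assumptions.

    # Confirm the instance's public IP (or read it off the EC2 console).
    host ec2-23-20-126-44.compute-1.amazonaws.com

    # Map the production hostnames to it, on this workstation only.
    echo "23.20.126.44 omgubuntu.co.uk www.omgubuntu.co.uk cdn.omgubuntu.co.uk" | sudo tee -a /etc/hosts

    # Remove that line again once the real DNS change has gone live.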
[16:40] SpamapS: we tried varnish and that sucked. I went back and looked at APC, quick tweak, hard stopped apache, brought it back up
[16:40] that combined with aggressive caching in WP settled the load to an average of 0.43 for the night on the WP node
[16:40] marcoceppi: ahh, was APC not actually working?
[16:40] that's what it looked like
[16:40] Enable APC, load jumps an extra 20 points. Didn't seem right
[16:40] marcoceppi: yeah varnish is hard to get right
[16:41] It seems really complex to charm, but would be great as a proxy/cache drop-in for HAProxy
[16:41] marcoceppi: "quick tweak"? did it land in the wordpress charm yet? ;)
[16:41] marcoceppi: yeah m_3 started working on it
[16:41] SpamapS: there's a bunch of stuff that needs to be tweaked in the WP charm ;)
[16:42] marcoceppi: yeah I was thinking we should look at switching it to fastcgi
[16:42] mm
[16:42] I'm making a bunch of changes in the omg fork of the charm
[16:42] then see what I can slide back in
[16:43] There's also some oddities when you throw HAProxy in front
[16:43] The silly archive version of WP doesn't handle it very well
[16:44] marcoceppi: the way the configs are done in the packaged version is kind of confusing
[16:45] marcoceppi: I think we need to enhance the 'http' relation with some new optional bits.. one of those being 'endpoint hostname'
[16:46] marcoceppi: I'm not settled on whether haproxy should feed that back to wordpress, or wordpress should feed it to haproxy..
[16:46] marcoceppi: but either way, I'm sure you had problems with the Host: header
[16:53] <_mup_> juju/scheduler-peek-list r476 committed by kapil.thangavelu@canonical.com
[16:53] <_mup_> cleanup for review
[16:55] <_mup_> juju/scheduler-peek-list r477 committed by kapil.thangavelu@canonical.com
[16:55] <_mup_> expand ignores
=== Leseb_ is now known as Leseb
[16:58] i'm still amazed sometimes how much memory/cpu emacs uses
=== Leseb_ is now known as Leseb
=== almaisan` is now known as al-maisan
=== al-maisan is now known as almaisan-away
[17:12] <_mup_> Bug #958662 was filed: Rewrite the scheduler, for simplicity, and better error handling. < https://launchpad.net/bugs/958662 >
[17:16] <_mup_> juju/scheduler-peek-list r478 committed by kapil.thangavelu@canonical.com
[17:16] <_mup_> log stops
[17:17] <_mup_> Bug #958668 was filed: Rewrite the scheduler, for simplicity, and better error handling. < https://launchpad.net/bugs/958668 >
[17:17] * hazmat sighs
[17:20] <_mup_> juju/local-provider-container-wait-flags r485 committed by kapil.thangavelu@canonical.com
[17:20] <_mup_> wip on lxc containers with bitmask wait flags, something odd in the lxc responses, reporting started units as stopped in lxc-ls, tbd
[17:48] bcsaller1, please resolve your branch's (subordinate-type) conflicts with trunk
[18:28] hazmat: ok and I noticed another issue, I'll push an updated version in a bit
[18:30] just noticed some failures on trunk from my hook-alias branch merge.. argh.
[18:31] ah.. found it
[18:56] bcsaller1, did you end up normalizing log locations as part of subordinates?.. if not i'm going to do a branch against trunk for it
[18:57] hazmat: I didn't address that, no
[18:57] bcsaller1, no worries, the normalized form i'll use should accommodate it, i'm just going to use the local scheme on other providers
[19:07] bcsaller1, jimbaker can i get a +1 on this trivial?
[19:15] http://paste.ubuntu.com/889598/
[19:18] hazmat: in process_reason the first branch returns and the second doesn't now?
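The APC "quick tweak" at [16:40] isn't spelled out in the log; a plausible minimal version, assuming the stock php-apc package and PHP 5.3-era paths on Ubuntu, is simply to give APC a larger shared-memory segment and hard-restart Apache. Treat the value and file location as illustrative.

    # Confirm the module is installed and loaded (it already was in this case).
    dpkg -l php-apc
    php5 -m | grep -i apc

    # Give APC more shared memory than the default; size syntax varies by APC version.
    echo "apc.shm_size = 128M" | sudo tee /etc/php5/conf.d/apc-local.ini

    # Hard stop/start rather than a graceful reload, as described in the log.
    sudo service apache2 stop
    sudo service apache2 start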
[19:19] bcsaller1, the second branch and third branch have return at the end of the conditional block, the other two conditional blocks set up the return value, the first block is self-contained
[19:21] ie. the others define error, and then return deferred.errback(error) at the end of the method.. the first one was failing because..
[19:30] <_mup_> juju/trunk r485 committed by kapil.thangavelu@canonical.com
[19:30] <_mup_> [trivial] hook exit and test teardown one liner fixes [r=bcsaller]
[19:44] hi, I'm trying to use juju on precise and I'm getting errors about DNS name resolution issues... looks like it's not using --ipv4 when running the equivalent of euca-describe-instances
[19:44] any ideas?
[19:51] SpamapS, not sure how to set up --no-recommends when cloud-init installs juju
[19:52] so, on another angle, what's the recommended way for starting with juju if I don't want to use amazon (due to its cost basically)?
[19:52] is it openstack or lxc?
[19:52] I have not found a single way to deploy a charm yet in either :(
[20:07] pindonga, ugh..
[20:07] pindonga, i use lxc all the time, openstack is also viable
[20:07] pindonga, do you have a traceback with the dns issues?
[20:07] er. pastebin/log
[20:07] let me get one for you
[20:09] hazmat, http://pastebin.ubuntu.com/889699/
[20:10] also, when I re-run the bootstrap, now I get an error saying the security group cannot be deleted
[20:10] http://pastebin.ubuntu.com/889701/
[20:10] is that expected?
[20:11] I'm running precise (afaik)
[20:11] with lxc i get a log entry saying 'Creating master container...' and nothing after that, so juju will always see the container as pending
[20:12] I can share my environments.yaml if that helps
[20:12] pindonga, you have to destroy-environment before doing another bootstrap typically
[20:13] hazmat, I figured, but wanted to check anyway... I think if something fails juju should recover gracefully, but that can be worked on later maybe?
[20:13] pindonga, lxc creates the containers in the background, it takes a few minutes; if it takes more than 10m, the master-customize.log in data-dir should have some clues if you could paste it, there's also a dev tool for it that does the same
[20:14] let me check that log file
[20:14] one sec
[20:14] re openstack, it looks like that setup isn't returning publicly routable DNS names
[20:15] pindonga, you can either associate a static/public IP to the bootstrap node or run juju from a location where the public-addresses returned by the API are routable
[20:15] hazmat, this is canonistack, so yes, the IPs are not publicly routable by default.... (you can get IPs by doing euca-describe-instances --ipv4 normally)
[20:16] not sure this is something juju should fix or canonistack should fix
[20:16] hazmat, so where should I look for the master-customize.log file? I got nothing in either the juju folder or the lxc folder (the lxc folder is completely empty)
[20:16] just have a config file and an empty rootfs folder
[20:18] though I still see a bunch of lxc processes around... maybe I should keep waiting
[20:20] hey SpamapS why do you want to remove md5 and sha1 checks from ch_get_file ?
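A condensed version of the local-provider debugging loop hazmat walks pindonga through here, as a hedged sketch; the data-dir path is whatever environments.yaml sets, and the unit/container name is the example one that appears later in the log.

    # Tear down a failed environment before bootstrapping again.
    juju destroy-environment
    juju bootstrap
    juju status

    # Containers are built in the background; per the log, lxc-ls shows a
    # running container twice and a defined-but-stopped one only once.
    sudo lxc-ls
    ps aux | grep lxc

    # If a unit stays "pending" for more than ~10 minutes, check the logs
    # under the local provider's data-dir (placeholders below).
    DATA_DIR=/path/to/data-dir          # whatever data-dir is set to in environments.yaml
    UNIT=ricardo-local-devel-1          # example container/unit name from the log
    less $DATA_DIR/master-customize.log
    less $DATA_DIR/units/$UNIT/console.log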
[20:21] hazmat, I appreciate your help btw... while we are on it... I know juju is service-oriented and not machine-oriented, but can you tell me if this is too far-fetched? in order to learn about juju and charms, I decided to write a charm to deploy my development environment... ie, once deployed, a new host will have my full env ready for development (though it doesn't make sense to have more than one host for this 'service')
[20:25] pindonga, that sounds reasonable to me.. effectively a charm for a dev env
[20:26] ie. it's taking advantage of automation, not orchestration
[20:26] hazmat, ok, good news, the master-customize.log showed up in the end... it may have been I was just too anxious :)
[20:26] pindonga, try status now out of curiosity?
[20:26] still pending
[20:26] though the log says Container Customization Complete
[20:27] and the juju log says Starting container...
[20:28] pindonga, what's lxc-ls show?
[20:28] besides the ones I already had it lists ricardo-local-0-template and ricardo-local-devel-1
[20:28] pindonga, on canonistack, you can allocate and associate a public IP address to the bootstrap node
[20:29] pindonga, cool, does it list -devel-1 only once.. it lists them twice if it's running
[20:29] no, it lists everything only once
[20:29] which means no container is running
[20:29] yup
[20:30] hmm.. pindonga is there an lxc-wait in your ps aux | grep lxc output?
[20:30] yes
[20:30] lxc-wait -n ricardo-local-devel-1 -s RUNNING
[20:31] pindonga, hmm.. so this is bug 912879
[20:31] <_mup_> Bug #912879: Machine agent hangs if lxc container start fails < https://launchpad.net/bugs/912879 >
=== lifeless_ is now known as lifeless
[20:31] pindonga, what's unclear though is why the container fails to start.. there's a console log for the container in data-dir/units/$unit-name
[20:31] it might have some useful details
[20:32] let me check
[20:35] hazmat, this doesn't sound right
[20:36] lxc-start 1332102305.341 ERROR lxc_conf - Permission denied - failed to mount 'proc' on '/usr/lib/lxc/root//proc'
[20:36] lxc-start 1332102305.342 ERROR lxc_conf - failed to setup the mounts for 'ricardo-local-devel-1'
[20:36] lxc-start 1332102305.342 ERROR lxc_start - failed to setup the container
[20:42] hmm.. why's it using /usr/lib/lxc.. that path looks strange
[20:43] * hazmat tries locally
[20:44] I have lxc==0.7.5-3ubuntu40
[20:48] oh.. that's in the container
[20:48] pindonga, can you pastebin the whole console.log
[20:48] sure, one sec
[20:50] hazmat, http://pastebin.ubuntu.com/889766/
[20:50] this is container.log, yes?
[20:51] pindonga, yup, thanks
[20:51] hmm.. why would the mount fail..
[20:51] one extra bit of info that might be useful
[20:52] I don't get the cgroups to mount automatically, so I issued cgroups-mount
[20:53] pindonga, there should be an upstart job that takes care of that.. cgroup-lite.conf
[20:53] and I get this exact same error when running lxc-start directly
[20:53] on this specific container
[20:53] mhh
[20:53] actually on other containers too
[20:54] k, let me reboot to see if this is a temporary glitch with the cgroups
[20:54] be right back
[20:54] pindonga, it sounds like something to take up with hallyn on #ubuntu-server.. offhand i don't know what the issue is
[20:54] hazmat, ok, you've been quite helpful anyway, so thx! :)
[20:55] for the canonistack env, you said I can associate a public IP to the bootstrap?
[20:55] let's say I get a public IP from openstack first
[20:55] how do I invoke juju then?
[20:57] pindonga, yeah.. it's euca-allocate-address, and then euca-associate-address to the bootstrap instance id
[20:58] pindonga, then just use juju normally
[20:58] kk
[20:58] thx, I'll try it out and let you know
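For the canonistack question just above, the allocate/associate step looks roughly like this with euca2ools; the instance ID and IP are placeholders, and argument details can differ between euca2ools versions.

    # Find the bootstrap node's instance id (machine 0 in juju status).
    euca-describe-instances --ipv4

    # Allocate a floating/public IP from the pool; it prints the address it assigned.
    euca-allocate-address

    # Attach that address to the bootstrap instance (placeholders below).
    euca-associate-address -i i-00000000 203.0.113.10

    # juju run from outside the cloud should now reach the environment.
    juju status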
[21:17] hazmat: question for you, when you e.g. start a node, does zookeeper get updated *to trigger* the node being started, juju agent kicking off set, or does zookeeper get updates to record that that stuff *has happened*
[21:17] hazmat: e.g. which is cause and which is effect
[21:24] lifeless: zookeeper is changed, the agents react to that change
[21:26] lifeless: states are then updated by the agent to reflect the result of the reactions
[21:27] lifeless, there's always a state change to initiate an accompanying action
[21:27] great, thanks for that
[21:29] hazmat: re cloud-init / --no-install-recommends , I think you can tell cloud-init what apt options to use... standby
[21:32] hm, actually it would seem rather complex
[21:35] hazmat: I believe you'd need to actually drop a file in /etc/apt/apt.conf.d to disable the recommends. But then you'd have to turn back around and re-enable it, otherwise charms will fail. :-/
[21:42] SpamapS, i'll just rearrange to use cmds for juju pkg install.. what do we need for proposed upgrade support?
[21:43] hazmat: be careful with that. :)
[21:44] hazmat: in the past we broke badly because DEBIAN_FRONTEND=noninteractive wasn't set.. and we also should make sure it uses --force-confold when calling dpkg. See cloudinit's code for how it calls apt-get
[21:45] hazmat: for proposed we need, IMO, the thing I sent to the mailing list. But at a bare minimum, we need to be able to cause juju to point machines at -proposed at any time..
[21:47] SpamapS, yeah.. running cluster upgrade support for juju itself would be nice.. but it's out of scope for 12.04 at the moment. but getting initial installation, means what exactly.. just installing juju from a proposed ppa?
[21:47] i'll check the ml
[21:55] hazmat: not ppa
[21:55] hazmat: you just need to enable the -proposed repository
[21:56] SpamapS, interesting.. so the easiest way to do this.. effectively version-pinning and upgrades.. is to always deploy from bzr or assembled tarball
[21:56] hazmat: so add another apt source in cloud-init just like the PPA. The bare minimum would be to have another juju-origin called 'proposed' that would enable proposed in cloud-init
[21:56] hazmat: then a set of instructions for how to change the origin in zk so that you can test adding newly -proposed units to a running environment.
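The apt handling SpamapS warns about comes down to a few standard knobs. A hedged sketch of what a node-setup script might do, not what cloud-init or juju actually emit; the -proposed pocket line and file names are illustrative.

    # Disable Recommends just for this install via an apt.conf.d snippet.
    echo 'APT::Install-Recommends "false";' | sudo tee /etc/apt/apt.conf.d/99no-recommends

    # Optionally point the machine at the -proposed pocket (release name illustrative).
    echo 'deb http://archive.ubuntu.com/ubuntu precise-proposed main universe' | sudo tee /etc/apt/sources.list.d/proposed.list
    sudo apt-get update

    # Install non-interactively and keep existing config files on upgrade.
    sudo DEBIAN_FRONTEND=noninteractive apt-get -y -o Dpkg::Options::="--force-confold" install juju

    # Re-enable Recommends afterwards so charms that rely on them still work.
    sudo rm /etc/apt/apt.conf.d/99no-recommends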
[22:03] SpamapS, i can do the minimal proposed origin step for testing new installation for today, i think your ml post would be a good starting point for a spec on upgrading/running existing environments, i don't know if it can be done before 12.04 though, it is important though, tbd
[22:27] hi hazmat, so I have some progress in the lxc case
[22:27] hazmat, but now I see this when the container starts:
[22:27] lxc-start 1332109039.505 NOTICE lxc_start - '/sbin/init' started with pid '28727'
[22:27] lxc-start 1332109039.505 WARN lxc_console - console input disabled
[22:27] lxc-start 1332109039.505 WARN lxc_start - invalid pid for SIGCHLD
[22:27] lxc-start 1332109147.641 INFO lxc_af_unix - message denied for '1000/1000'
[22:27] lxc-start 1332109147.673 INFO lxc_af_unix - message denied for '1000/1000'
[22:27] lxc-start 1332109162.493 DEBUG lxc_commands - peer has disconnected
[22:27] lxc-start 1332109162.555 DEBUG lxc_commands - peer has disconnected
[22:27] I think I heard about a bug like this, but not sure what the workaround was (the bug was that the default user already existed with the same id (1000))
[22:45] http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_strengthsAndLimitations - do we have something in place to clean up the data snapshot/log files?
=== objectiveous_ is now known as objectiveous
[22:49] <_mup_> Bug #958872 was filed: bootstrap Install cron job to cleanup zk logs < https://launchpad.net/bugs/958872 >
[22:51] hah
[23:12] is it possible to have some code run whenever a new node is set up? (user-supplied code I mean)
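On the snapshot/log cleanup question (and Bug #958872): ZooKeeper ships a PurgeTxnLog helper that can be cronned to keep only the last few snapshots and transaction logs. A hedged sketch of such a cron entry, assuming the Ubuntu zookeeper package's jar, config, and data paths; the class path, directories, and retained-count flag all vary by ZooKeeper version, so verify against the installed release.

    # /etc/cron.d/zookeeper-purge (illustrative): purge old snapshots/txn logs nightly,
    # keeping the five most recent.
    0 3 * * * zookeeper java -cp /usr/share/java/zookeeper.jar:/usr/share/java/log4j-1.2.jar:/etc/zookeeper/conf org.apache.zookeeper.server.PurgeTxnLog /var/lib/zookeeper /var/lib/zookeeper -n 5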