[00:00] <SpamapS> m_3: ^^ this is likely to break your charm tester as well if it pulls in the latest juju
[00:12] <hazmat> SpamapS, indeed, its at the top of my list for friday
[00:20] <m_3> SpamapS: thanks
[00:21] <SpamapS> hazmat: have a patch now actually.. just EOD'ing ;)
[00:21] <hazmat> SpamapS, cool feel free to lbox it or if you prefer merge it directly :-)
[00:35] <m_3> hazmat: juju-plan and juju-load-plan work fine with ec2 right out of the gate (I've gotta add support for environment param)
[00:35] <hazmat> m_3, yeah.. some do some don't..
[00:35] <m_3> juju-watch is borking a bit...
[00:35] <hazmat> m_3, the recorder definitely doesn't
[00:35] <m_3> right
[00:35] <m_3> assumptions about local
[00:35] <hazmat> m_3, and snapshot tries to poke at the local dir
[00:35] <m_3> hazmat: yup
[00:36] <hazmat> m_3, yeah.. i tried to document those which were ec2 specific..
[00:36] <hazmat> i've got a few new utils i want to try out on friday
[00:36] <m_3> I'm excited about getting snapshot working on other providers though... that's really a great stack impl imo
[00:37] <hazmat> m_3, its not quite that.. but its getting there.. need to capture service config as well
[00:37] <hazmat> m_3, i might see if i can try something fun on friday like suspend/resume and expand/shrink.. assuming the universe doesn't fall apart
[00:38] <m_3> ha!
[00:38] <hazmat> although i should probably take a look at doing those directly in jujitsu
[00:38]  * hazmat can never remember that speling
[00:38] <hazmat> m_3, juju-watch won't like the new status atm, again its all post b2 to me
[00:38] <m_3> hazmat: so the watcher I started with should work on ec2... http://paste.ubuntu.com/904796/... it was pretty clumsy with state though
[00:39] <m_3> might actually work with the new status output b/c it's getting unit state directly
[00:39] <hazmat> m_3, the charmrunner watcher should work fine on ec2
[00:39] <hazmat> except for the aforementioned status changes
[00:40] <hazmat> there's more status changes coming (additions) with the subordinate work..
[00:40] <hazmat> speaking of which
[00:40] <m_3> oh, really?  cool... I thought it was using local-specific stuff too... that's great
[00:43] <m_3> hazmat: if you wanna focus on snapshot, I'll get the individual commands into juju-jitsu
[00:48] <hazmat> m_3, i'd still like charmrunner to do its thing, but the extra stuff into jitsu sounds good
[00:48]  * hazmat focuses on expiration
[00:51] <m_3> hazmat: hope you focus on feeling better man!
[00:53] <hazmat> the 'scrips should do the trick
[03:45] <_mup_> txzookeeper/managed-watch-and-ephemeral r51 committed by kapil.foss@gmail.com
[03:45] <_mup_> functional managed client
[04:04] <_mup_> txzookeeper/managed-watch-and-ephemeral r52 committed by kapil.foss@gmail.com
[04:04] <_mup_> use a deferred lock around restablishing the session, log error if failure to restablish
[05:45] <sc-rm> is there a way to debug/watch the communication between two relationship peers? like the keystone <--> nova-compute when the relation is established?
[07:04] <imbrandon> i'm in love, "TheStack" Drupal Charm is definitely gonna favor HPcloud versus AWS
[07:40] <yolanda> hi, i'm trying to create a local instance for juju, and receive that error:  ERROR Unable to create file storage for environment
[07:43] <koolhead11> yolanda: dpkg -l juju please
[07:43] <koolhead11> and cat /etc/lsb-release
[07:44] <yolanda> ||/ Name                              Version                           Description
[07:44] <yolanda> +++-
[07:44] <yolanda> ii  juju                              0.5+bzr499-1juju4~oneiric1        next generation service orchestration system
[07:44] <yolanda> DISTRIB_ID=Ubuntu
[07:44] <yolanda> DISTRIB_RELEASE=11.10
[07:44] <yolanda> DISTRIB_CODENAME=oneiric
[07:44] <yolanda> DISTRIB_DESCRIPTION="Ubuntu 11.10"
[07:44] <koolhead11> yolanda: ok
[07:44] <koolhead11> yolanda: from next time to paste these details use paste.ubuntu.com :)
[07:45] <yolanda> ok :)
[07:45] <koolhead11> yolanda: lemme upgrade my juju
[07:45] <koolhead11> as am still using the old version of it
[07:45] <koolhead11> :P
[07:46] <koolhead11> yolanda: also i followed http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage  for my juju deployment
[07:46] <koolhead11> are you using it on a physical machine
[07:46] <yolanda> koolhead11, yes
[07:47] <yolanda> last time i tried, wasn't giving that error
[07:47] <koolhead11> yolanda: :(
[07:47]  * koolhead11 upgrades his juju
[07:55] <yolanda> how is it going?
[07:55] <koolhead11> yolanda: am on really slow connection so have to wait for a while :P
[07:56] <yolanda> ok, np
[08:15] <jamespage> yolanda, you might have some cruft hanging around on disk - I'd try a juju destroy-environment first to tidy things up and then do juju bootstrap again
[08:18] <yolanda> jamespage, i tried to destroy and bootstrap again, it fails
[08:18] <koolhead11> yolanda: do as jamespage says, because the version i have (http://paste.ubuntu.com/905235/) works well. I am using the default repo
[08:21] <yolanda> i tried two times, still same error
[08:22] <yolanda> i'll try uninstall and install juju
[08:26] <yolanda> i removed and purged juju, installed again and seems to work
[08:26] <yolanda> thx
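The recovery sequence yolanda landed on can be sketched as follows (a hedged reconstruction from the log above; the exact apt flags are assumptions, not copied from her session):

```shell
# reconstruction of the recovery steps from the log above;
# flags and ordering are assumptions, not a verbatim transcript
juju destroy-environment            # tidy any leftover local-provider state
sudo apt-get remove --purge juju    # remove the package and its config
sudo apt-get install juju           # reinstall from the configured repo/PPA
juju bootstrap                      # then bootstrap the local environment again
```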
[10:10] <yolanda> hi, i'm having this error when trying to deploy any charm:  WARNING Charm '.mrconfig' has an error: CharmError() Error processing '/home/ubuntu/jujudevelopment/precise/.mrconfig': unable to process /home/ubuntu/jujudevelopment/precise/.mrconfig into a charm
[10:10] <yolanda> any idea?
[10:19] <yolanda> please, any one knows about this error? i only did a charm getall and then a juju deploy
[10:36] <jamespage> yolanda, for some reason juju is seeing that file as config - I don't know why
[10:36] <jamespage> its safe to ignore IMHO
[10:36] <jamespage> or you can delete it if you like
[10:36] <yolanda> jamespage, i've ended removing it, and all the charms i don't need
[10:37] <yolanda> still testing
[10:40] <yolanda> jamespage, i did something like that:  juju deploy --repository=/home/ubuntu/jujudevelopment local:precise/postgresql
[10:40] <yolanda> i receive that message: INFO Searching for charm local:precise/postgresql in local charm repository: /home/ubuntu/jujudevelopment
[10:40] <yolanda> and it's hung there for ages, is that normal?
[10:56] <jamespage> yolanda, it should not hang - at least not due to that message - its just a warning -  I get it as well
[10:57] <jamespage> yolanda, if your environment has not completed bootstrapping I think the juju deploy command will wait
[10:57] <yolanda> jamespage, but it's still at the same point, and i also did a juju status, and got no response. is it normal for it to take this long?
[10:57] <jamespage> no
[10:58] <jamespage> if you get no response your environment has not bootstrapped
[10:58] <jamespage> yolanda: are you using the local provider still
[10:58] <jamespage> ?
[11:00] <yolanda> jamespage, yes
[11:01] <yolanda> jamespage, i do a juju bootstrap and says "environment already bootstrapped"
[11:01] <yolanda> but juju status doesn't answer, it hangs there
[11:01] <yolanda> the same for juju deploy
[11:01] <jamespage> yolanda, so its in some indeterminate inconsistent state
[11:01] <jamespage> if you can't do juju status - its broken
[11:01] <yolanda> jamespage, i destroy and create again?
[11:01] <jamespage> yep
[11:02] <yolanda> damn, now works very fast!
[11:06] <koolhead11> yolanda: :P
[12:07] <jamespage> yolanda, local provider plus a few cores, some memory and a SSD is v.fast
[13:14] <lynxman> Does somebody remember the tool to compress several charms in one? I'm tight on available boxes :)
[13:44] <lynxman> m_3: around? :)
[13:50] <yolanda> hi, i have a problem on a config.yaml file for a charm
[13:50] <yolanda> i'm just creating two values, with default value as null, as shown in the sample
[13:50] <yolanda> and i receive
[13:50] <yolanda> https://yolanda.robla:J1nx3bGbnL8pTzFqK0nP@private-ppa.launchpad.net/canonical-isd-hackers/ppa/ubuntu precise main #Personal access of Yolanda Robla (yolanda.robla) to Canonical ISD PPA
[13:50] <yolanda> oh, sorry
[13:51] <yolanda> that error : WARNING Charm 'openerp-core' has an error: ServiceConfigError() Error processing '/home/ubuntu/jujudevelopment/precise/openerp-core/config.yaml': Invalid options specification: options.ppa-key.default: expected string, got None
[13:52] <koolhead11> yolanda: can you pastebin your config.yaml
[13:53] <yolanda> sure
[13:54] <yolanda> http://paste.ubuntu.com/905612/
[13:59] <koolhead11> yolanda: have you checked some other existing charms available? it will be good idea before planning to write a new one
[14:00] <yolanda> koolhead11, yes
[14:00] <yolanda> there is an openerp one, but for 6.0 version
[14:00] <yolanda> seems that you cannot set "null" in a config var, it works if i put something different
[14:02] <koolhead11> yolanda: that would be a bug then
[14:02] <yolanda> :(
[14:02] <yolanda> i just grabbed this example from juju documentation
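The failing pattern can be illustrated with a minimal config.yaml fragment (the option name comes from the error message above; the description text and empty-string default are assumptions, not yolanda's actual file):

```yaml
options:
  ppa-key:
    type: string
    # `default: null` (or a bare `default:`) triggers
    # "Invalid options specification: options.ppa-key.default:
    #  expected string, got None" in this juju version.
    # An explicit string value works:
    default: ""
    description: GPG key for the private PPA (hypothetical description)
```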
[14:03] <koolhead11> i remember getting error if revision file was with value zero
[14:10] <yolanda> i just changed to another value
[14:10] <yolanda> now i'm trying to debug a hook and i receive that error: No JUJU_AGENT_SOCKET/-s option found
[14:10] <yolanda> when doing a config-get
[14:15] <hazmat> yolanda, the cli-api is only valid in a hook or debug session
[14:15] <yolanda> hazmat, but i'm debugging
[14:15] <hazmat> yolanda, even in the debug session.. there are multiple windows.. a default window, and hook windows which pop up as hooks are executed
[14:16] <hazmat> the cli api needs to be used from the hook window.. its window name is that of the hook
[14:19] <yolanda> hazmat, i do a juju debug-hooks openerp-core/0
[14:19] <yolanda> and then i try there to run the hook
[14:19] <yolanda> am i doing something wrong?
[14:22] <hazmat> yolanda, you don't run the hook in the main window
[14:22] <hazmat> the hook is run in response to an event
[14:22] <hazmat> it will spawn a new window in the tmux session
[14:22] <hazmat> with the name of the hook and in that window you can execute the hook or cli api
[14:23] <yolanda> so i enter into the debug
[14:23] <yolanda> and then from another terminal i do a retry?
[14:25] <hazmat> yolanda that would work
[14:25] <hazmat> juju resolved --retry
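The workflow hazmat describes can be sketched as two terminals (the unit name is from this session; the hook name queried at the end is an example, not something shown in the log):

```shell
# terminal 1: attach the debug session; juju opens a tmux session
# with a default window, plus one window per hook as hooks fire
juju debug-hooks openerp-core/0

# terminal 2: trigger a hook event, e.g. retry the failed hook
juju resolved --retry openerp-core/0

# back in terminal 1, a new tmux window named after the hook appears;
# the hook cli (config-get, relation-get, ...) only works in that window,
# where JUJU_AGENT_SOCKET is set
config-get ppa-key
```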
[14:26] <hazmat> that looks like an error in juju though with null values
[14:26] <hazmat> for service config
[14:27] <yolanda> hazmat, that's another error now, the debug worked, thanks a lot!
[14:28] <hazmat> yolanda, np
[14:29] <yolanda> hazmat, seems that the problem is that i try to add a private repo to download a package, that is under https instead of http
[14:29] <yolanda> it shows an error
[14:30] <yolanda> this is the error: The method driver /usr/lib/apt/methods/https could not be found
[14:30] <yolanda> and that's caused by an entry in /etc/apt/sources.list.d/ like that
[14:30] <yolanda> deb https://yolanda.robla:J1nx3bGbnL8pTzFqK0nP@private-ppa.launchpad.net/canonical-isd-hackers/ppa/ubuntu precise main
[14:31] <yolanda> that's a valid url
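That apt error means the https transport is missing; on Ubuntu releases of this era it lives in a separate package, so the likely fix (not shown in the log itself) is:

```shell
# "The method driver /usr/lib/apt/methods/https could not be found"
# is apt's way of saying it has no https transport installed
sudo apt-get install apt-transport-https
sudo apt-get update
```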
[14:47] <robbiew> jcastro: ping
[15:34] <lynxman> m_3: ping
[15:34] <lynxman> Does somebody remember the tool to compress several charms in one? I'm tight on available boxes
[15:40] <m_3> lynxman: yo
[15:41] <lynxman> m_3: hey I was looking for you, good morning sir :)
[15:41] <m_3> lynxman: two options afaik... I think placement: local still works, and then charm splice is one of clint's branches
[15:41] <lynxman> m_3: maybe you know what I'm referring to?
[15:41] <lynxman> that's the one I was looking for, charm splice
[15:46] <lynxman> m_3: thank you
[15:46] <m_3> lynxman: np dude
[15:49] <m_3> robbiew: what's your setup... I'm getting errors http://paste.ubuntu.com/905787/ trying the same thing (I'm precise, spinning up oneiric charms in ec2)
[15:50] <SpamapS> m_3: is it intentional that the charm tester only waits 30 seconds for things to deploy!?
[15:50] <m_3> SpamapS: no, it's a bug in juju-watch
[15:50] <lynxman> SpamapS: oh btw could you approve Robert Ayres in charmers whenever you can? He's on our team
[15:51] <m_3> doesn't accept the argument correctly
[15:51] <m_3> SpamapS: I set it to 1800secs... _and_ then precache lxc before trying to run them
[15:51] <m_3> otherwise nothing'll succeed and things'll be in a screwy state
[15:52] <m_3> SpamapS: let me push the changes to charmrunner
[15:52] <SpamapS> 2012-03-29 15:47:01,970 juju.service_watch:DEBUG polling
[15:52] <SpamapS> 2012-03-29 15:47:35,855 juju.service_watch:ERROR activity timeout reached
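The timeout m_3 mentions bumping to 1800s amounts to a poll loop along these lines (a hedged sketch, not juju-watch itself; the `agent-state: started` string it greps for is an assumption about this juju version's status output):

```shell
# poll `juju status` until units report started, or give up after 1800s
deadline=$(( $(date +%s) + 1800 ))
until juju status 2>/dev/null | grep -q 'agent-state: started'; do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "activity timeout reached" >&2
    exit 1
  fi
  sleep 10
done
echo "all units started"
```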
[15:54] <robbiew> m_3: I'm using the PPA...and have a pretty standard environment
[15:55] <m_3> robbiew: dang
[15:56] <m_3> SpamapS: pushed... I don't know what's necessary to kick off the packaging for charmrunner
[15:57] <SpamapS> m_3: did we get a fix for the status changes yet?
[15:57] <SpamapS> I have my patched version.. but haven't been able to get it into charmrunner trunk yet
[15:58] <m_3> nope, I was just weeding through some of that
[15:58] <m_3> SpamapS: what's your branch for that?
[15:58] <m_3> did you do a MP?
[16:01] <m_3> SpamapS: it looks like you should have perms to push that though
[16:03] <SpamapS> m_3: no I just have it here local :-P
[16:04] <SpamapS> m_3: and yeah hazmat already +1'd :)
[16:09] <imbrandon> marcoceppi: how far did you get with juju and hpcloud ?
[16:10]  * SpamapS is guessing the lack of S3 made it a no-go
[16:12] <imbrandon> hrm kk
[16:12] <robbiew> I heard a rumour that they turned the EC2/S3 stuff back on...but of course I never checked it out :P
[16:13] <imbrandon> heh i was just checking that
[16:13] <imbrandon> robbiew: btw i'm in love, did i tell you that , yup love, hp drupal module + hp php classes == much love from imbrandon
[16:14] <imbrandon> course they need a bit o touchup, but hey they're on github, i can do that
[16:14] <imbrandon> :)
[16:14] <imbrandon> this code is like soooooo much cleaner than amazons php
[16:15] <imbrandon> in fact i thought about seeing if i can use it on aws too hahha
[16:15] <robbiew> :D
[16:15] <robbiew> they'll love to hear that
[16:15] <robbiew> lol
[16:15] <robbiew> they=HPCloud...not AWS
[16:15] <imbrandon> hahah yea
[16:15] <robbiew> jcastro: we hit 100 folks today...wooot!
[16:16] <imbrandon> i've been emailing with one of their devex team dudes , their are a big drupal house
[16:16] <imbrandon> they are *
[16:17] <imbrandon> in fact they have special modules specifically for drupal to speed them up via their cdn but the cms sees it as a native fs
[16:17] <imbrandon> that will be sooooo much nicer for say OMG
[16:17] <imbrandon> :)
[16:18] <imbrandon> won't take much to adapt to wordpress, and i'm hoping any s3 compatible storage
[16:19] <imbrandon> SpamapS: http://hpcloud.github.com/HPCloud-PHP/ if you havent seen
[16:19] <imbrandon> file_get_contents("swift://myfile.png") FTW
[16:21] <SpamapS> nice
[16:21] <SpamapS> Still, I'd rather se s3://
[16:21] <SpamapS> see
[16:21] <imbrandon> yea swift is their name for s3
[16:21] <SpamapS> its not s3
[16:21] <imbrandon> i'm sure it can be done though
[16:21] <SpamapS> swift is s3-like
[16:22] <imbrandon> yea, its like cloudfront and s3
[16:22] <imbrandon> kinda
[16:22] <SpamapS> but it only recently grew an S3 compatibility layer
[16:22] <imbrandon> ahh
[16:22] <SpamapS> CEPH's RADOS-GW is probably a better choice for full S3 compatibility
[16:22] <imbrandon> googlestorage is s3 all the way, s3cmd even works on it
[16:23] <SpamapS> aye.. radosgw is like that too
[16:23] <imbrandon> hrm not seen that
[16:23] <robbiew> +1 on ceph
[16:23] <SpamapS> (and has the benefit of sharding your file by block across all storage servers.. so a giant file can be streamed from multiple servers in parallel .. basically server RAID10 :)
[16:23]  * imbrandon contemplates a s3:// streamwrapper 
[16:23] <imbrandon> nice
[16:24] <imbrandon> well i hadn't before because honestly i hadn't seen one done in pure php, and i avoid c ( not c++ ) unless necessary
[16:25] <imbrandon> hrm , doesn't s3 already do that sharding
[16:25] <imbrandon> like on the stack level
[16:25] <SpamapS> imbrandon: I believe yes the real S3 does that... though its all behind the Bezos Curtain so who knows. ;)
[16:26] <imbrandon> heh
[16:26] <SpamapS> imbrandon: swift does not. Swift simply moves your file around as giant chunks.
[16:26] <imbrandon> ahh
[16:26] <SpamapS> imbrandon: *large* files are broken up, but not at the block level.
[16:26]  * imbrandon has been blinded by their adoption of clean php and not bothered to look at the man behind the curtain
[16:26] <imbrandon> maybe i should do that
[16:26] <imbrandon> ;)
[16:27] <SpamapS> imbrandon: http://swift.openstack.org/overview_large_objects.html
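The large-object scheme behind that link (and the size limits discussed below) can be sketched with curl; the endpoint, token, container, and file names here are placeholders, and the `X-Object-Manifest` header is Swift's dynamic-large-object mechanism from the linked docs:

```shell
# upload two segments, then a zero-byte manifest that stitches them together
ST=https://swift.example.com/v1/AUTH_acct   # placeholder storage endpoint
TOKEN=...                                   # placeholder auth token

curl -X PUT -H "X-Auth-Token: $TOKEN" -T part1 "$ST/segments/big.iso/000"
curl -X PUT -H "X-Auth-Token: $TOKEN" -T part2 "$ST/segments/big.iso/001"

# the manifest object names the segment prefix; a GET on it streams all parts
curl -X PUT -H "X-Auth-Token: $TOKEN" \
     -H "X-Object-Manifest: segments/big.iso/" \
     --data-binary '' "$ST/big.iso"
```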
[16:27] <imbrandon> so ...
[16:28] <imbrandon> swift is part of openstack but ....
[16:28] <imbrandon> erm
[16:28] <SpamapS> ?
[16:28] <SpamapS> hp cloud == openstack .. clearly :)
[16:28] <imbrandon> well yea but
[16:28] <imbrandon> so are others
[16:28] <imbrandon> i mean ... well not sure what i mean
[16:28] <imbrandon> let me digest this a bit
[16:29] <imbrandon> btw i see this says 5GB but i noticed their web ui said 50MB
[16:30] <imbrandon> i was wondering about that limit , likely due to a diff in s3 and swift
[16:35] <imbrandon> swift caches with memcache and syncs cross node with rsync ... seems like block level network dev would be better ...
[16:36] <imbrandon> i'm not great at that low a level of the stack though, so i'm sure there's a reason, with as many ppl working on it as do
[16:42] <SpamapS> imbrandon: its mostly historical IMO
[16:42] <SpamapS> imbrandon: IMO, CEPH's model is far more scalable and will be easier to support in a large scale installation. Rackspace gets by in large part because they hire amazing people and keep them happy.. but that won't scale the way AWS has scaled.
[16:48] <robbiew> SpamapS: speaking of ceph :)...the MIR is simply in the queue right?  nothing else is holding it up...just resources for MIRs
[16:49] <SpamapS> robbiew: right
[16:49] <SpamapS> robbiew: its pretty hard to test.. so I understand why it might take a while.
[16:49] <robbiew> SpamapS: ack, thx
[16:50] <imbrandon> SpamapS: hrm i'm def checking it out now, not really looked before
[16:50] <niemeyer> rogpeppe: Yeah, I'm about to be out for the day, after I pack
[16:51] <rogpeppe> niemeyer: ok. i guess no meeting today then.
[16:51] <rogpeppe> niemeyer: going anywhere nice?
[16:51] <niemeyer> rogpeppe: We're going to Porto Alegre
[16:52] <rogpeppe> niemeyer: nice. have fun!
[16:52] <niemeyer> rogpeppe: Not exactly a touristic place, but Ale is doing some training there over the next couple of days
[16:52] <niemeyer> rogpeppe: I'll just join her
[16:52] <rogpeppe> niemeyer: you're offline tomorrow then?
[16:52] <imbrandon> SpamapS: for the nginx MIR we were gonna wait for 12.10 open right ?
[16:52] <niemeyer> rogpeppe: Not sure.. I did file a holiday to avoid trouble, but I may be around
[16:53] <niemeyer> rogpeppe: Have you seen my note to the list?
[16:53] <rogpeppe> niemeyer: the juju list?
[16:53] <niemeyer> rogpeppe: Yeah, I hope the message did reach the list?
[16:53] <niemeyer> Oh, it did.. I recall jcastro replied
[16:53] <rogpeppe> niemeyer: i see it
[16:56] <SpamapS> imbrandon: yeah, we'll have a session where we present a couple of things for MIR like nginx..
[16:57] <SpamapS> imbrandon: I'll subscribe you to the blueprint when I create it. I usually create a 'webscale' blueprint where we talk about packaging and/or MIR'ing stuff that is hot and sexy. :)
[16:57] <SpamapS> robbiew: I just looked back at CEPH, there is one thing that I think I need to do before the MIR will pass, which is switch to using libnss instead of libcryptopp
[16:58] <SpamapS> robbiew: I discussed it with upstream a while back.. should be fairly easy, will try to get to it before the MIR review
[16:58] <imbrandon> kk
[16:58] <robbiew> SpamapS: sweet, thx
[16:59]  * SpamapS starts testing and poking at r504 to upload to precise today
[17:07] <imbrandon> SpamapS: so where can i find the dummy-been-under-a-rock-for-3-months info on CEPH, totally flying blind here
[17:09] <imbrandon> nvm got it
[17:09]  * avoine dream about btrfs being finally stable
[17:11] <imbrandon> SpamapS: this is nice, why have i not seen this candy before, it seems like i should have known about this
[17:15] <imbrandon> marcoceppi: ( jcastro , SpamapS too kinda ) btw blog posted about nginx configs earlier today, and noticed in doing so there are somethings i'd like to get pushed to the OMG charm, gonna be round to work on that a bit later or ?
[17:15] <marcoceppi> Do you have your changes in branch imbrandon ?
[17:15] <imbrandon> no not at all, but i will
[17:16] <imbrandon> and actually one makes a totally new config. should i add it to files/ ?
[17:17] <imbrandon> marcoceppi: should i work from your omg-next branch ?
[17:17] <marcoceppi> yeah, omgwtp/next is my current dev head
[17:17] <imbrandon> kk
[17:17] <marcoceppi> there's an omg-nginx config in there already
[17:17] <marcoceppi> use that as a template to work your changes in to
[17:17] <SpamapS> avoine: for CEPH's purposes, btrfs is quite stable. If you look at it, you can lose the volume and not lose data. ;)
[17:18] <imbrandon> yea this is for the /etc/nginx/nginx.conf thou
[17:18] <marcoceppi> After that we can spin up a quick staging instance to check on it
[17:18] <imbrandon> not the sites/avail one
[17:18] <marcoceppi> imbrandon: is it a lot of changes? if not it might be easier to just sed them in place, if it's a lot, then just drop it in files/nginx.conf
[17:18] <imbrandon> hrm , its about 10 lines
[17:18] <imbrandon> could probably sed it
[17:19] <imbrandon> but if it gets to be more then it would make sense
[17:19] <imbrandon> i'll look exactly here in a bit and make the call
[17:19] <marcoceppi> I'm fine either way
[17:20] <imbrandon> kk
[17:20] <imbrandon> i like the idea of it in files better , but if it ends up being smaller than i thought then i'll just sed it
[17:21] <imbrandon> the gist of it is posted on brandonholtsclaw.com as the newest post in a <pre> if you wanna peek, basically that + omg specific dirs etc
[17:21] <imbrandon> but i'll get it all in the branch and test
[17:21] <imbrandon> and then ping ya
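The sed-vs-full-file tradeoff marcoceppi describes can be sketched on a scratch copy (paths and values here are made up for illustration; in a real hook this would target /etc/nginx/nginx.conf):

```shell
# work on a scratch copy so this sketch is self-contained
conf=nginx.conf
printf 'worker_processes 1;\nkeepalive_timeout 65;\n' > "$conf"

# small change: patch the existing config in place with sed
sed -i 's/^worker_processes .*/worker_processes 4;/' "$conf"

# bigger change: ship a complete file in the charm and install it wholesale
# (hypothetical charm layout):
# install -m 0644 files/nginx.conf /etc/nginx/nginx.conf

grep -q '^worker_processes 4;' "$conf" && echo patched
```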
[17:32] <imbrandon> and omg i dont care , i'm still in love with the hp-php code, its the first php code with any substance ( more than a snippet or two ) from any larger body ( company or group ) that is actually NICE to work with, twitter has some nice code too once you dig past their examples that are crap, but ..... but but but
[17:32] <imbrandon> :)
[17:36] <imbrandon> jcastro: btw i was talking to a couple of the old timers i used to hang with here on irc about juju ( one is even a canonical employee now, whoda thunk ) and they/we all agreed, we need another way to articulate what juju is to those of us devs that brushed it aside as just another provisioning fad and that its really much more etc etc
[17:36] <imbrandon> dunno how yet, but yea
[17:37] <jcastro> imbrandon: once the store lands (RSN) it'll be way easier for me to explain it to people, but yeah, you're right
[17:37] <imbrandon> sweet, if i can help lemme know, no idea how but offer stands
[17:45] <_mup_> Bug #962383 was filed: ec2-key-pair went away, but juju doesn't say it is unsupported <documentation> <ec2> <juju:New> <juju (Ubuntu):New> < https://launchpad.net/bugs/962383 >
[17:47] <jcastro> SpamapS: are you in charm mode or distro/ship mode today?
[17:47] <jcastro> SpamapS: znc and appflower are ready for round 2 reviews if you feel like being a promulgator today
[17:49] <jcastro> imbrandon: is quickdrop ready for a first review?
[17:49] <marcoceppi> appflower looks good, I haven't had a chance to check his changes but the diffs look solid
[17:50] <SpamapS> jcastro: I have sucked at charm review this week..
[17:50] <SpamapS> jcastro: I will make it a point to get into charm review mode as soon as I get the latest juju into precise
[17:50] <m_3> jcastro: me too
[17:50] <imbrandon> jcastro: no, will be as soon as i push , trying to clean up the last bits now
[17:51] <jcastro> imbrandon: ok, next time don't add the "new-charm" tag until you're ready for review pls.
[17:51] <imbrandon> jcastro: was thinking about retrofitting some stuff but i'm gonna do that later
[17:51] <imbrandon> jcastro: sure
[17:51] <jcastro> m_3: hah man, this queue is larger than it's ever been. :)
[17:52] <SpamapS> jcastro: perhaps we should change the tag to 'ready-for-review'
[17:52]  * jcastro nods
[17:52] <SpamapS> jcastro: its a bit confusing when you have a "new charm" but its not necessarily ready for people to look at.
[17:52] <m_3> oh, that's a good idea
[17:53] <jcastro> yeah, ok after we get these done I'll just change it.
[17:53] <SpamapS> We should have a retrospective session at UDS about the whole charmers process.
[17:53] <SpamapS> jcastro: eh.. lets hold off on that for now
[17:53] <SpamapS> jcastro: lets just think about it, and make a noisy change when we're ready
[17:53] <jcastro> ok
[17:53] <SpamapS> too much going on right now
[17:53] <m_3> sissy
[17:53] <imbrandon> lol
[17:54] <jcastro> well, by "doing it" I meant more like bringing it up on the list, etc.
[17:54] <SpamapS> pretty much everything has changed in some way in the last 8 days
[17:54] <SpamapS> Lets stop changing stuff.. just for 3 weeks.. and see if anything explodes.
[17:54] <m_3> I think the quote is something like "may you live in interesting times"
[17:54] <jcastro> ok.
[17:54] <imbrandon> jcastro: yea and the charm is being pushed to github too, i'll let you figure that one out /me ducks
[17:55] <jcastro> that's cool, we don't mind fetching charms. :)
[17:55] <imbrandon> :)
[17:55] <SpamapS> m_3: 你可能生活在有趣的时代
[17:55] <imbrandon> mmmm utf-8
[17:56]  * SpamapS loves google translate
[17:57] <imbrandon> my fav has always been "Et hoc ante omnia omnibus fieri etiam" even blogged about it, need to import all my old blog posts today sometime
[17:58] <m_3> geek
[17:58] <imbrandon> doesn't translate nicely, but it's the repeated saying in galactica
[17:58] <imbrandon> lol
[18:00] <imbrandon> Eternal Return , or whatever the concept is called
[18:00] <imbrandon> very geeky, but kinda cool :)
[18:03] <imbrandon> hahah and i just found out something i did not know, and not sure i'm happy bout it , "The first line of Disney's Peter Pan is 'All of this has happened before, and it will all happen again.' "
[18:03]  * imbrandon frowns
[18:04] <imbrandon> was better when only the humans and cylons repeated it ...
[18:19] <_mup_> juju/relation-hook-context r508 committed by jim.baker@canonical.com
[18:19] <_mup_> Only try to flush relation hook contexts that have been changed
[18:55] <_mup_> juju/relation-hook-context r509 committed by jim.baker@canonical.com
[18:55] <_mup_> Fix remaining failing test
[18:57] <_mup_> juju/relation-ids-command r506 committed by jim.baker@canonical.com
[18:57] <_mup_> Merged upstream
[19:07] <_mup_> juju/relation-ids-command r507 committed by jim.baker@canonical.com
[19:07] <_mup_> Relation ident update
[19:10] <shazzner> hello
[19:11] <shazzner> quick question, can you configure juju for local, but to a remote data-dir?
[19:11] <shazzner> like data-dir: /media/mountpnt/.
[19:12] <shazzner> I'm sure just testing it out instead of asking would probably be quicker :p
[19:12] <_mup_> juju/relation-id-option r515 committed by jim.baker@canonical.com
[19:12] <_mup_> Merged upstream & resolved conflicts
[19:50] <_mup_> juju/relation-id-option r516 committed by jim.baker@canonical.com
[19:50] <_mup_> Relation ident, precaching of child relation hook contexts refactoring
[19:51] <shazzner> huh
[19:51] <shazzner> I'm having this issue when trying to bootstrap a local instance: http://paste.ubuntu.com/906169/
[19:51] <shazzner> Network is already in use
[19:52] <imbrandon> reboot after installing the local deps ?
[19:52] <shazzner> hmm ok
[19:58] <shazzner> hey that worked!
[20:01] <imbrandon> :)
[20:08] <shazzner> hmm another issue
[20:08] <shazzner> after deploying a charm, both the public-address and state are both null
[20:09] <imbrandon> happens with local sometimes
[20:09] <imbrandon> just juju destroy-environment
[20:10] <imbrandon> and rebootstrap
[20:10] <imbrandon> should fixer up
[20:11] <shazzner> hmm, still null
[20:11] <imbrandon> hrm
[20:11] <imbrandon> SpamapS: halp! heh
[20:15] <shazzner> I'm grabbing the ppa and updating juju
[20:15] <shazzner> maybe that'll help
[20:17] <shazzner> ok dist-upgrade hmm
[20:22] <hazmat> shazzner, what's the instance state?
[20:22] <hazmat> shang, er. agent-state
[20:23] <hazmat> er. shazzner
[20:23] <SpamapS> I'm really glad we made it 'agent-state'
[20:23] <SpamapS> would be so hard to ask people "what's the state?"
[20:23] <hazmat> SpamapS, indeed, it is a bit clearer
[20:23] <SpamapS> who's on installed? config_error is on 2nd, and state is pitching
[20:24]  * hazmat made it all the way to 'stopped'
[20:26] <_mup_> juju/relation-hook-context r510 committed by jim.baker@canonical.com
[20:26] <_mup_> Require relation_ident in constructing relation hook contexts
[20:26] <shazzner> ok I upgraded juju and rebootstrapped
[20:26] <shazzner> agent-state: pending
[20:28] <shazzner> whoa
[20:28] <shazzner> tons of errors in debug-log
[20:29] <shazzner> http://paste.ubuntu.com/906211/
[20:29] <shazzner> looks like Twisted errors
[20:30] <SpamapS> shazzner: what OS?
[20:30] <shazzner> oneiric
[20:31] <SpamapS> 2012-03-29 15:28:09,205 Machine:0: twisted ERROR: cp: cannot stat `/var/lib/lxc/chris-testlocal-0-template/config': No such file or directory
[20:31] <hazmat> shazzner, that's using the ppa?
[20:31] <shazzner> yup
[20:31] <SpamapS> shazzner: something failed in lxc-create most likely
[20:31] <SpamapS> shazzner: lxc-ls ?
[20:32] <shazzner> chris-testlocal-gitolite-0
[20:33] <hazmat> that twisted error is odd
[20:33] <shazzner> not sure if it matters, but this is on an old i686 hp proliant server
[20:33] <hazmat> its not from juju
[20:33] <SpamapS> shazzner: you should see chris-testlocal-0-template
[20:34] <shazzner> in lxc-ls?
[20:34] <SpamapS> shazzner: yes
[20:34] <shazzner> huh
[20:34] <shazzner> did it just name it wrong? :p
[20:34] <SpamapS> shazzner: thats the container that gets created on the first unit, and then cloned to all the others
[20:34] <SpamapS> shazzner: no something failed along the way
[20:34] <shazzner> oh I see
[20:34] <SpamapS> shazzner: I'd suggest destroy-environment
[20:34] <shazzner> k
[20:35] <SpamapS> shazzner: and if that container is still there, 'lxc-destroy -n chris-testlocal-gitolite-0'
[20:35] <SpamapS> shazzner: also if you get the same fail again, try 'rm -rf /var/cache/lxc/*' which will force re-downloading the minimal image
[20:35] <shazzner> nope lxc-ls returned nothing
[20:36] <shazzner> bootstrap doesn't log anything
[20:36] <shazzner> attempting deploy again
[20:37] <shazzner> huh
[20:37] <shazzner> the template is there now
[20:37] <shazzner> but no gitolite
[20:38] <shazzner> let me try deleting the lxc cache
[20:38] <SpamapS> shazzner: wait
[20:39] <SpamapS> shazzner: the template gets created and then messed with for a bit
[20:39] <SpamapS> we really need to add the master-customize.log tod ebug-log
[20:39] <SpamapS> to debug-log I mean
[20:39] <shazzner> I get this error upon destroy-environment http://paste.ubuntu.com/906226/
[20:39] <SpamapS> shazzner: in your data-dir, you should see 'units/master-customize.log'
[20:39] <SpamapS> shazzner: you need to slow down
[20:40] <SpamapS> shazzner: it takes a few minutes between creating the template and creating the actual unit
[20:40] <shazzner> ok I will :)
[20:40] <SpamapS> shazzner: make sure there aren't any juju processes left hanging around, and then clear out any leftover lxc templates or units with lxc-destroy
[20:41] <SpamapS> shazzner: then your destroy-environment should clear things out adequately
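SpamapS's cleanup checklist, collected into one sketch (container names are the ones from this session; substitute your own, and the ordering is an assumption):

```shell
# list leftover containers from the broken run
lxc-ls

# destroy any stragglers by name (names as seen in this session)
lxc-destroy -n chris-testlocal-gitolite-0
lxc-destroy -n chris-testlocal-0-template

# if creation keeps failing, force a re-download of the minimal image
sudo rm -rf /var/cache/lxc/*

# then tear down and rebuild the environment from scratch
juju destroy-environment
juju bootstrap
```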
[20:42] <shazzner> ok everything looks clean
[20:42] <shazzner> from the top
[20:43] <shazzner> ok instance-state running
[20:43] <shazzner> attempting deploy
[20:44] <shazzner> agent-state: pending
[20:45] <SpamapS> shazzner: good
[20:45] <SpamapS> shazzner: now find master-customize.log in your data-dir/units and tail that
[20:46] <shazzner> on it already
[20:46] <shazzner> still on creating master container
[20:46] <shazzner> I'll wait though :)
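An aside: the tail SpamapS suggests looks roughly like this (the data-dir path below is an assumption; substitute whatever `data-dir` is set to in your environments.yaml):

```shell
# Tail the local provider's template-provisioning log.
# The path is an assumption; use your environment's actual data-dir.
LOG="${JUJU_DATA_DIR:-$HOME/.juju/local}/units/master-customize.log"
if [ -f "$LOG" ]; then
    tail -f "$LOG"    # follow the package installs as the template is built
else
    echo "log not written yet: $LOG"
fi
```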
[20:46]  * SpamapS vows to work on a "log everything to one place" feature sometime in the near future
[20:46] <SpamapS> shazzner: lots of downloading and installing of packages to get through there. :-P
[20:46] <SpamapS> I wonder if eatmydata can make the container creation faster
[20:50] <shazzner> hmm it's on juju.state.unit@INFO: Started service unit gitolite/0
[20:51] <shazzner> it's probably still running the install script
[20:51] <SpamapS> shazzner: agent-state should be running then
[20:51] <shazzner> still pending :/
[20:52] <SpamapS> shazzner: odd... hm
[20:52] <SpamapS> shazzner: actually no that makes sense..
[20:52] <SpamapS> shazzner: should be up shortly though
[20:53] <shazzner> huh in the master log: it reached Container Customization Complete
[20:54] <shazzner> still pending on state
[20:55] <SpamapS> shazzner: so now it has to clone it
[20:55] <SpamapS> shazzner: which is just a cp -a (you may even see that running)
[20:55] <SpamapS> shazzner: I hope at some point very soon that becomes an overlayfs mount
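For the curious, the overlay idea would replace the `cp -a` with something like the following. This is speculative: hypothetical paths built from this session's container names, using the modern overlay mount syntax, and printed with echo since actually mounting requires root:

```shell
# Sketch: instead of copying the whole template rootfs, give each unit a
# writable layer over a shared read-only template. Paths are hypothetical;
# the mount command is echoed, not executed (mounting needs root).
TEMPLATE=/var/lib/lxc/chris-testlocal-0-template/rootfs
UNIT=/var/lib/lxc/chris-testlocal-gitolite-0
echo mount -t overlay overlay \
    -o "lowerdir=$TEMPLATE,upperdir=$UNIT/delta,workdir=$UNIT/work" \
    "$UNIT/rootfs"
```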
[20:55] <shazzner> ah
[20:55] <SpamapS> I bet we could make it a local provider argument.. 'ephemeral: true'
[20:56] <SpamapS> because right now they're at least persistent until you destroy the environment
[20:56] <shazzner> huh
[20:58] <shazzner> well
[20:59] <shazzner> either I'm an impatient twit (likely) or it got stuck somehow
[21:01] <SpamapS> shazzner: does lxc-ls show it running? (2nd line) ?
[21:02] <shazzner> yup
[21:02] <shazzner> chris-testlocal-0-template  chris-testlocal-gitolite-0
[21:02] <shazzner> chris-testlocal-gitolite-0
[21:03] <SpamapS> shazzner: progress :)
[21:03] <shazzner> indeed :p
[21:04] <SpamapS> shazzner: so if you run a pstree (or ps auxf) you should see a juju agent running under lxc-start->init->python
[21:05] <shazzner> I see lxc-start-init-cron
[21:05] <shazzner> http://paste.ubuntu.com/906267/
[21:06] <shazzner> oh I see python nm
[21:06] <SpamapS> shazzner: (ps auxf will show you the actual arguments so you can see juju most likely)
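A way to filter the process forest down to the interesting lines (awk rather than grep, so the sketch exits cleanly even when nothing matches):

```shell
# Show only the lxc-start / juju related processes from the full forest.
# awk is used instead of grep so the pipeline succeeds with no matches.
ps auxf | awk '/lxc-start|juju/ && !/awk/'
```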
[21:06] <SpamapS> shazzner: ok well you're almost there then :)
[21:07] <SpamapS> shazzner: still pending?
[21:07] <shazzner> yeah I see
[21:07] <shazzner> yup still pending
[21:08] <arashbm> I have a question!
[21:08] <arashbm> what is it trying to download when it is "Creating master container..."?
[21:08] <SpamapS> arashbm: Ubuntu :)
[21:09] <arashbm> SpamapS: ouch! that would hurt with 100KiB/s
[21:09] <SpamapS> shazzner: ok look in data-dir/units/gitolite-0/unit.log
[21:10] <SpamapS> arashbm: its the minimal distro
[21:10] <SpamapS> arashbm: about 50MB
[21:10] <SpamapS> arashbm: and it should be cached in /var/cache/lxc after the first time you do it
[21:10] <shazzner> SpamapS: you mean container.log?
[21:10] <SpamapS> shazzner: no, unit.log
[21:11] <shazzner> err no such file
[21:11] <SpamapS> shazzner: *hmmmm*
[21:11] <shazzner> err wait
[21:12] <SpamapS> shazzner: should be a symlink in there
[21:12] <SpamapS> shazzner: to the log inside the container's rootfs
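That symlink can be checked like so (both path components are assumptions built from this session's names and an assumed data-dir):

```shell
# unit.log in data-dir/units/<unit> is a symlink into the container rootfs.
# The data-dir and unit name here are assumptions from this session.
D="${JUJU_DATA_DIR:-$HOME/.juju/local}/units/gitolite-0"
ls -l "$D/unit.log" 2>/dev/null \
    || echo "no unit.log under $D; look inside the container's rootfs instead"
```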
[21:12] <imbrandon> Error processing '/root/.juju/cache/cs_3a_oneiric_2f_mysql-0.charm': must be a zip file (File is not a zip file)
[21:12] <shazzner> http://paste.ubuntu.com/906277/
[21:13] <imbrandon> wth is that?
[21:13] <shazzner> sorry that was directed to SpamapS
[21:14] <imbrandon> yea, sorry
[21:14] <imbrandon> afk, brb . SpamapS , thats from deploy mysql , others work fine
[21:14] <SpamapS> shazzner: my guess is something has gone wrong with your unit agent..
[21:14] <shazzner> SpamapS: in the output.log I get an endless series of this: http://paste.ubuntu.com/906281/
[21:14] <SpamapS> shazzner: anything in output.log ?
[21:15] <shazzner> looks bad
[21:15] <SpamapS> AHA!
[21:15] <SpamapS> shazzner: ok that makes sense
[21:15] <SpamapS> shazzner: for some odd reason, your container has the distro version of juju
[21:15] <SpamapS> shazzner: but your host is running the PPA version
[21:15] <shazzner> huh
[21:15] <SpamapS> and they are not compatible
[21:15] <shazzner> oh
[21:15] <SpamapS> shazzner: juju-origin: ppa
[21:16] <SpamapS> shazzner: juju is supposed to detect that and set the origin automatically.. but I wouldn't be surprised if it guessed wrong
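The setting in question lives in environments.yaml; a minimal fragment (the environment name and surrounding layout are placeholders, `juju-origin` is the point):

```yaml
# ~/.juju/environments.yaml (fragment; other required keys omitted)
environments:
  testlocal:
    type: local
    # must match where juju itself came from on the host:
    # "distro" for the archive package, "ppa" for the juju PPA
    juju-origin: ppa
```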
[21:16] <shazzner> oh oh oh
[21:16] <shazzner> shit
[21:16] <SpamapS> ahh
[21:16] <SpamapS> the oh shit moment
[21:16] <SpamapS> one of life's great experiences
[21:16] <shazzner> yep juju-origin: distro
[21:16] <shazzner> haha
[21:20] <_mup_> juju/relation-hook-context r511 committed by jim.baker@canonical.com
[21:20] <_mup_> Created missing test for get_relation_hook_context
[21:23] <shazzner> hooray I get a public-address now!
[21:24] <shazzner> booyah it's started
[21:24] <shazzner> christ thanks for your help SpamapS
[21:24] <shazzner> sorry for dragging you through that!
[21:25] <SpamapS> shazzner: no problem.. we should probably detect the distro version and refuse to start a container that will never work actually.
[21:25] <shazzner> that sounds nice :)
[21:25] <SpamapS> Though that would mean juju would need to become version aware ;)
[21:25]  * SpamapS still waiting for --version :-P
[21:30] <_mup_> juju/relation-hook-context r512 committed by jim.baker@canonical.com
[21:30] <_mup_> Comments
[21:59] <shazzner> huh
[21:59] <shazzner> one weird thing
[21:59] <shazzner> when I deploy a charm, then try to ssh into the machine
[22:00] <shazzner> it asks me for a password
[22:09] <SpamapS> shazzner: which "Machine" ?
[22:10] <SpamapS> shazzner: you need to ssh to the unit. the "machine" is your machine
[22:12] <shazzner> ok got it
[23:19] <_mup_> juju/relation-hook-context r513 committed by jim.baker@canonical.com
[23:19] <_mup_> Verify lookup of relation hook context for a nonexistent relation id fails properly
[23:38] <arashbm> Juju is magic!!
[23:40] <SpamapS> arashbm: :)
[23:47] <imbrandon> echo "apt-get install" > /tmp/install.txt && ssh xerox.websitedevops.com 'dpkg --get-selections' | tee /tmp/full-list.txt | grep -v deinstall| cut -f 1| tr "\r\n" " " >> /tmp/install.txt && cat /tmp/install.txt| tr "\r\n" " " > /tmp/install.txt && mv /tmp/install.txt $(pwd)/install.sh && echo '#!/bin/bash'|cat - $(pwd)/install.sh > /tmp/out && mv /tmp/out $(pwd)/install.sh && chmod +x $(pwd)/install.sh
[23:47] <imbrandon> doh
[23:49] <SpamapS> imbrandon: chapter 8, verse 13 in The Unholy Scriptonomicom, Amen
[23:49] <imbrandon> SpamapS: hahah yea, its ugly
[23:50] <imbrandon> makes for a quick "clone" though
[23:51] <_mup_> juju/relation-hook-context r514 committed by jim.baker@canonical.com
[23:51] <_mup_> Removed unused change field from relation hook context
[23:51] <SpamapS> imbrandon: I believe there is an apt-clone now
[23:52] <SpamapS> Description-en: Script to create state bundles. This package can be used to clone/restore the packages on an apt-based system. It will save/restore the packages, sources.list, keyring and automatic-installed states.
[23:52] <_mup_> juju/relation-ids-command r508 committed by jim.baker@canonical.com
[23:52] <_mup_> Merged upstream
[23:52] <SpamapS> imbrandon: *much* cooler. :)
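For reference, apt-clone's basic round trip looks roughly like this (subcommand names as shipped in Ubuntu's apt-clone package; the sketch only runs it when apt-clone is actually installed):

```shell
# Rough apt-clone round trip. The bundle name is hypothetical, and the
# clone only runs if apt-clone is present on this machine.
STATE=mystate
if command -v apt-clone >/dev/null 2>&1; then
    apt-clone clone "$STATE"     # writes $STATE.apt-clone.tar.gz
    # later, on the new machine:
    # apt-clone restore "$STATE.apt-clone.tar.gz"
else
    echo "apt-clone not installed; try: sudo apt-get install apt-clone"
fi
```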
[23:53] <imbrandon> hahaha figures
[23:53] <imbrandon> :)
[23:54]  * imbrandon rm's this old thing thats prone to error anyhow from ~/bin/misc-tools
[23:55] <imbrandon> SpamapS: yea without the sources.list and stuff from apt-clone , you end up with cruft like
[23:55] <imbrandon> E: Package 'logmein-hamachi' has no installation candidate
[23:55] <imbrandon> E: Package 'newrelic-php5' has no installation candidate
[23:56] <imbrandon> E: Package 'newrelic-sysmond' has no installation candidate
[23:56] <imbrandon> lol
[23:56] <imbrandon> with mine that is
[23:56] <SpamapS> imbrandon: apt-clone specifically addresses that
[23:56] <imbrandon> yea
[23:56] <imbrandon> thats what i was getting at
[23:56] <imbrandon> nother reason to use it :)
[23:56] <_mup_> juju/relation-id-option r517 committed by jim.baker@canonical.com
[23:56] <_mup_> Merged trunk & resolved conflicts
[23:57] <imbrandon> SpamapS: here i got one other act of un-holy i use semi-often, got something for this ?
[23:57] <imbrandon> alias autokey='sudo apt-get update 2> /tmp/keymissing; for key in $(grep "NO_PUBKEY" /tmp/keymissing |sed "s/.*NO_PUBKEY //"); do echo -e "\nProcessing key: $key"; gpg --keyserver keyserver.ubuntu.com --recv $key && gpg --export --armor $key | sudo apt-key add -; done'
[23:58] <imbrandon> lol
[23:58] <SpamapS> imbrandon: um.. yes.. don't use sources w/o thinking about the source!
[23:59] <imbrandon> haha its only when i transfer the sources.list to a new machine
[23:59] <imbrandon> i dont use shady source cept for maybe my own ppa
[23:59] <imbrandon> lol
[23:59] <SpamapS> imbrandon: but.. thats the thing. why are you using sources that you don't trust first before transferring?