[00:05] <nijaba> m_3: pong
[00:06] <m_3> nijaba: so in the peer relation hook, it doesn't look like the key's being gen'd
[00:06] <nijaba> m_3: hmm, which one?  the ssh or des?
[00:07] <m_3> des
[00:07]  * nijaba looks
[00:07] <m_3> I'm still testing, but gotta make some changes locally before I get further
[00:08] <m_3> config-changed barfs if you just use default config params (trying to do stuff with ssl certs even though do_https is off)
[00:08] <nijaba> m_3: hmm. the des key should be generated the first time get-des-key gets called
[00:09] <nijaba> m_3: uhhh, did not test much with default params, you are right.
[00:09]  * nijaba juju bootstraps
[00:09] <m_3> wondering about a couple of strictly '-lt's in the peer hook
[00:10] <SpamapS> we should offer Juju boot straps in the ubuntu store
[00:10] <m_3> doesn't look like that'd be called for the master... so the key's being gen'd from set-des-key calling get-des-key
[00:12] <nijaba> m_3: it is called the first time peer-relation-joined is invoked, afaics
[00:13] <m_3> nijaba: adding a 'do_https' guard around 'set-ssl-cert' in config-changed to get it deployed
[00:16] <nijaba> m_3: drat, you are right.  I should have guarded this.  do you want me to work on it with default params a bit more and signal you back?  I was so concentrated on distributing those certs that I forgot the basic test case
[00:17] <m_3> nijaba: the part I'm trying to get to test is the peer-relation-all in the master case.  I'm wondering if:
[00:17] <m_3> if  [[ $LOCAL_UNIT_ID -lt $REMOTE_UNIT_ID ]] && [[ $LOCAL_UNIT_ID -lt $FIRST_UNIT_ID ]] ; then
[00:18] <m_3> sorry for the bad paste, but I'm wondering if this is ever true
[00:18] <m_3> [[ "0" -lt $SOME_UNDEFINED_VARIABLE ]] && echo "yes" || echo "no"
[00:18] <nijaba> m_3: it is, according to my logs.  tested all the way through, removing master, middle, and end nodes
[00:19] <SpamapS> looks like we need a charm helper for peers and leader detection
[00:19] <SpamapS> I did something very similar in ceph
[00:20] <m_3> SpamapS: yes, agree... there're multiple impls already
[00:20] <m_3> nijaba: cool...  I'll bang on it
[00:20]  * m_3 is sticking with my primary skills
[00:21] <nijaba> m_3: the theory is that the unit I am on is never in the list.  So if my id is less than both the remote and the first unit id in the list, then I am elected master
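The election logic nijaba describes can be sketched as a small helper. This is a runnable sketch, not the actual charm-tools code: the names ch_unit_id and ch_peer_i_am_leader are hypothetical, and relation-list is stubbed here, since the real tool only exists inside a juju hook environment (and, as confirmed later in this log, lists only the *other* units).

```shell
#!/bin/bash
# Stub: in a real hook, relation-list is provided by juju and
# prints the other peer units, one per line.
relation-list() {
    printf '%s\n' "wiki/3" "wiki/7"
}
JUJU_UNIT_NAME="wiki/1"

# Strip the "service/" prefix from a unit name, e.g. wiki/3 -> 3.
ch_unit_id() {
    echo "${1##*/}"
}

# The unit whose id is lower than every peer's id is the leader.
# Since relation-list excludes ourselves, it suffices to check
# that no listed peer has a lower id than ours.
ch_peer_i_am_leader() {
    my_id=$(ch_unit_id "$JUJU_UNIT_NAME")
    for unit in $(relation-list); do
        if [ "$my_id" -gt "$(ch_unit_id "$unit")" ]; then
            return 1
        fi
    done
    return 0
}
```

With the stubbed peers wiki/3 and wiki/7, wiki/1 wins the election; a unit named wiki/9 would not.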
[00:22] <m_3> nijaba: right
[00:23] <SpamapS> relation-list shows all parties, IIRC
[00:23] <nijaba> drat, my chmods in set-ssl-cert are at the wrong place, hence the issue you found when they are not set
[00:23]  * SpamapS honestly doesn't remember, but I think in ceph I had to account for my own ID being in the list
[00:24] <m_3> I'll test it carefully to see what's going on... my guess was that this wasn't executing, but the key was still generated 'lazily' by set-des-key
[00:24] <nijaba> SpamapS: I never ever saw this happen so far.  maybe implementation has changed?
[00:25] <m_3> set-des-key with an empty arg actually calls get-des-key which gens
[00:25] <SpamapS>     relation-list > $units_file
[00:25] <SpamapS>     echo $JUJU_UNIT_NAME >> $units_file
[00:25] <SpamapS> nijaba: you are correct .. it is actually only the "other" units
[00:27] <m_3> totally need an i_am_leader helper fn
[00:27] <nijaba> m_3: yes, this was intended.
[00:28] <nijaba> m_3: (set-des-key calling get-des-key when called with no param)
[00:28] <SpamapS> m_3: ch_peer_leader
[00:29] <SpamapS> and ch_peer_i_am_leader
[00:29] <nijaba> m_3: that would have helped me A LOT.  had to fight a few hours to understand the logic
[00:31] <SpamapS> nijaba: as did I. We will have to combine the best of those two implementations into a charm helper version.
[00:32] <nijaba> m_3: just pushed a fix for the config-changed set-ssl-cert that seems to work better :)
[00:32] <m_3> that's ok, I reimplemented facter while writing varnish... doh!
[00:33] <m_3> nijaba: I'll pull in a bit... I just commented them out to get to the peer relations
[00:34] <nijaba> SpamapS: mine is actually already inspired by your ceph charm, but it did not look like we had exactly the same case.  ceph seems to need keys for all other peers; I just need to push the keys of the current master
[00:34] <SpamapS> nijaba: yeah there are two different things needed. One is a generic system for agreeing on who is the leader. The other one is a generic system for transferring a file from the leader to all non-leaders in a peer relation.
[00:35] <nijaba> SpamapS: right.
[00:37] <nijaba> I can try to genericize in ch_helper if you want.  It's quite fresh in my head atm
[00:37] <SpamapS> Go for it!
[00:37] <SpamapS> nijaba: please add tests for it in tests/helpers
[00:37] <nijaba> SpamapS: k
[00:38] <SpamapS> nijaba: I named the file 'helpers.sh' but I think it should have been 'net.sh'. Since what you are doing is really, I think, unrelated to net.sh, you should maybe call it peer.sh
[00:38] <nijaba> SpamapS: remind me where the branch is please
[00:38] <SpamapS> lp:charm-tools
[00:40] <nijaba> m_3: I really did not test without cert sets. /me slaps myself
[00:40] <m_3> nijaba: peer keys seem to be generated just fine... sorry for the noise
[00:41] <m_3> pulling your latest
[00:41]  * m_3 on to the next test scenario
[00:44] <mchenetz> yeah... Stackops server1 is up... Now for another compute node
[00:46] <mchenetz> or openstack
[00:52] <nijaba> SpamapS: I am not seeing any helpers.sh file, only helper/sh/net.sh...  not sure what you meant
[00:53] <nijaba> SpamapS: Can I use bash, or do you think sh is very important?
[00:56] <SpamapS> nijaba: in tests/helpers, there is helpers.sh
[00:56] <SpamapS> nijaba: write the tests first. ;)
[00:56] <nijaba> SpamapS: ah? really? I usually do the opposite...
[00:57] <SpamapS> nijaba: I use posix shell only, if you guys feel strongly that bash is important, we can make them bash specific, but they need to be called .bash, not .sh
[00:57] <SpamapS> nijaba: TDD, write test, then write code.
[00:57] <nijaba> SpamapS: aye sir
[00:57] <SpamapS> nijaba: you're welcome to do it your wya. :)
[00:57] <SpamapS> way even
[00:58] <SpamapS> but typically tests written after the fact are more shallow and have more assumptions in them.
[00:59] <nijaba> SpamapS: good to know.
[01:00] <SpamapS> nijaba: to be fair, I do it "the right way" only about 50% of the time, because usually I'm too busy to write tests first. :)
[01:00] <m_3> the biggest thing that gets me with sh -vs- bash is the ${VARIABLE%%.xml} expansion stuff
[01:01] <m_3> saves having to exec cut
[01:01] <SpamapS> m_3: that stuff is awesome.. and available in all shells.
[01:01] <SpamapS>      ${parameter%%word}    Remove Largest Suffix Pattern.  The word is expanded to produce a pattern.
[01:01] <m_3> and really lots more than just cutting (regexp subs)
[01:01] <SpamapS> from man dash
[01:02] <m_3> was having problems in dash
[01:02] <m_3> in particular, the substitution one... ${VARIABLE//\/*}
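For reference, a quick runnable contrast of which of these expansions are POSIX (so they work in dash) and which are bashisms. The variable names are just for illustration:

```shell
# ${var%%pattern} and ${var##pattern} are POSIX; the
# ${var//pattern/repl} substitution syntax is bash-only.
f="config.xml"
suffix_trimmed=${f%%.xml}   # "config" - Remove Largest Suffix Pattern
unit="wiki/4"
unit_id=${unit##*/}         # "4"      - Remove Largest Prefix Pattern
service=${unit%%/*}         # "wiki"   - POSIX way to get what the
                            #            bash-only ${unit//\/*} was after
```

So the `${VARIABLE//\/*}` trick discussed here has a dash-safe equivalent via `%%`; it is only the general regexp-style substitution that truly requires bash.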
[01:02] <nijaba> SpamapS: what were you expecting "ch_peer_leader" to do? return the leader unit name?
[01:03] <SpamapS> nijaba: echo $leader .. yeah
[01:03] <m_3> even unit sequence id would be great
[01:03] <nijaba> m_3: which do you prefer?
[01:03]  * nijaba put a param :)
[01:03] <m_3> but go with the whole unit name I'd think, it can be trimmed by the user
[01:04] <SpamapS> oh yeah another one that wouldn't be peer related at all is just "ch_my_unit_id" and "ch_unit_id"
[01:04] <m_3> with the ${//} above :)
[01:04] <SpamapS> Even though those are basically just 1 liners
[01:04] <SpamapS> m_3: I tend to go with ## or #
[01:04] <SpamapS> But I can't explain why ;)
[01:04] <m_3> right
[01:04] <m_3> didn't know dash was supposed to impl it
[01:05] <m_3> I'll check and see if a bug is apropos
[01:06] <m_3> you'd think that'd be in the test suite tho
[01:07] <SpamapS> Yeah I believe there's an actual POSIX test shell test suite somewhere out there
[01:20]  * m_3 relocating... coffee shop -> home
[01:22] <nijaba> m_3: drive safely :)
[01:59] <nijaba> SpamapS: how can I simulate a call to relation-list? relation-list is a bad function name in sh
[03:19] <nijaba> SpamapS: charm_tools waiting for a merge :)
[03:19]  * nijaba -> bed
[07:11] <SpamapS> nijaba: alias relation-list=my_relation_list
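One way to make that alias trick work in a non-interactive bash test script (where alias expansion is off by default). my_relation_list is a hypothetical test double standing in for the real juju hook tool:

```shell
# Aliases only expand in non-interactive bash once enabled.
shopt -s expand_aliases

# Test double with an sh-legal function name...
my_relation_list() {
    printf '%s\n' "wiki/0" "wiki/2"
}

# ...aliased to the hyphenated hook-tool name that the code
# under test actually invokes.
alias relation-list=my_relation_list

units=$(relation-list)
```

(In bash specifically, a function *can* be named `relation-list` directly; the alias route is the portable workaround for shells that reject hyphens in function names, which is the problem nijaba ran into above.)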
[08:15] <TheMue> rog: Ready for a worklow question?
[08:17] <rog> TheMue: sure
[08:17] <TheMue> rog: In my past I've worked with svn and hg, so some questions about the workflow here with bzr.
[08:18] <rog> TheMue: go on
[08:18] <TheMue> rog: If I wanna start something new, like we talked about yesterday, I first create a branch with a name describing what I'm doing?
[08:19] <TheMue> rog: E.g. bzr lp:juju/go go-add-state
[08:20] <rog> TheMue: yeah. actually i rarely know what the change will be until i'm nearly ready to submit, so i often just name it something fairly generic to start with
[08:20] <TheMue> rog: ok
[08:20] <rog> s/bzr/bzr branch/ but yes, that's right
[08:20] <TheMue> rog: Oh, eh, yes. ;)
[08:21] <TheMue> rog: So let's assume I've then got something I want to submit, what are the next steps?
[08:21] <rog> in your case, i think that rather than starting with a branch, i might write down how you envisage the API
[08:21] <rog> and put it forward for comments
[08:21] <rog> but anyway
[08:22] <rog> ok if you want to submit something
[08:22] <rog> you'd use gustavo's lbox tool
[08:22] <rog> (have you installed that?)
[08:22] <TheMue> yep
[08:22] <rog> there's been a new version very recently, so you might want to update, BTW
[08:22] <TheMue> done this morning ;)
[08:23] <rog> ok, so you'd commit your changes (bzr commit)
[08:23] <rog> then you'd do lbox propose -cr -for lp:juju/go
[08:24] <rog> and that should get you to edit a file to which you can add the description
[08:24] <rog> of the changes
[08:24] <rog> and it'll put the merge request onto launchpad and codereview
[08:24] <TheMue> ah, ic
[08:25] <rog> then if you need to make some changes in response to the review, you commit and run lbox propose again (with no args this time)
[08:26] <rog> then when you get a LGTM, it's time to do the actual merge
[08:26] <rog> to do that, i make sure that i have a trunk branch that's up to date (e.g. cd my-repo-dir/go-trunk; bzr pull)
[08:27] <rog> then go to that directory and merge in your change
[08:27] <TheMue> ok, got it
[08:27] <rog> bzr merge ../go-add-state
[08:27] <rog> then build and make sure that everything tests ok
[08:28] <rog> then, assuming that's all ok, you can do: bzr push
[08:28] <rog> to actually commit the changes
[08:28] <TheMue> who's giving the LGTM? only gustavo or are always at least two people needed?
[08:29] <TheMue> fine, helps a lot, will write it down in my Getting Started document
[08:29] <rog> TheMue: only one LGTM is needed, but you might want to resolve discussions if there's some other comments
[08:30] <TheMue> so if anyone says it's ok, that's enough. got it.
[08:30] <rog> the other thing that caught me out was when i had some branches in a sequence
[08:30] <rog> TheMue: generally you want a LGTM from gustavo - he's the man
[08:31] <TheMue> rog: ok
[08:31] <rog> once you've pushed a branch, if you want to work on the next branch in a sequence of changes, you'll want to merge trunk into that branch before continuing
[08:31] <rog> oh yes, once you've merged, you need to commit the merge
[08:32] <TheMue> ok, sounds reasonable
[08:34] <TheMue> thx for your help, will write it down now
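rog's workflow above, collected in one place. Branch names like go-add-state are just the example used in this conversation:

```shell
# 1. Branch and hack
bzr branch lp:juju/go go-add-state
cd go-add-state
# ... make changes ...
bzr commit -m "add state package"

# 2. Propose for review; first time names the target, and
#    re-proposes after review fixes take no arguments.
lbox propose -cr -for lp:juju/go

# 3. After LGTM: merge into an up-to-date trunk, test,
#    commit the merge, then push.
cd ../go-trunk
bzr pull
bzr merge ../go-add-state
# ... build and make sure everything tests ok ...
bzr commit -m "merge go-add-state"
bzr push
```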
[09:09] <mpl> TheMue: jsyk the bzr online help pages + the tips on launchpad are pretty complete already.
[09:09] <mpl> hi all
[09:15] <TheMue> mpl: hi and yep, they are looking good.
[09:16] <TheMue> mpl: just wanted to know how we do branching and merging here. had different policies in other companies.
[09:23] <mpl> TheMue: ok. for now I've just followed gustavo's blog post about lbox, it was enough to get started.
[09:48] <koolhead11> hi all
[11:00] <niemeyer> Hello all.. how's that jujuer life coming?
[11:03] <jelmer> it's.. charming
[11:07] <niemeyer> :-)
[11:10] <rog> niemeyer: mornin'
[11:10] <niemeyer> rog: yo
[11:10] <niemeyer> rog: How're things going there?
[11:11] <rog> niemeyer: not bad. distracted by watching that devops video this morning. lots of stuff i have no idea about :-)
[11:11] <niemeyer> rog: It was pretty interesting indeed
[11:12] <_mup_> Bug #904201 was filed: need supporting code to model machine constraints <juju:In Progress by fwereade> < https://launchpad.net/bugs/904201 >
[11:12] <niemeyer> rog: After about the middle it got a bit boring, with the guy going over the history of a lot of people without much context for how it made any sense to bring these up
[11:12] <TheMue> ..ooOO( Reminder: Look that video, too. )
[11:12] <niemeyer> rog: But otherwise really good
[11:12] <rog> yeah
[11:13] <TheMue> I'll be participating in a DevOps talk at OOP too. Looking forward to it.
[11:16] <TheMue> niemeyer: Getting a better understanding of the existing state code. There's nothing yet like the state.State we talked about yesterday?
[11:18] <niemeyer> TheMue: You can see state.State a _bit_ like the sum of all the StateManager classes
[11:19] <niemeyer> TheMue: We should clean these interfaces up while joining, but that's the direction at least
[11:19] <TheMue> niemeyer: Understand, ok
[12:00] <_mup_> juju/trunk r432 committed by kapil.thangavelu@canonical.com
[12:00] <_mup_> [merge] robust-zk-connect,sshclient-refactor juju commands will now
[12:00] <_mup_> wait for juju to be running post bootstrap, so using juju commands immediately
[12:00] <_mup_> after bootstrap is viable. [a=jimbaker,hazmat][r=fwereade,jimbaker,hazmat]
[12:00] <_mup_> [f=849071]
[13:00] <rog> niemeyer: https://codereview.appspot.com/5444043/
[13:00] <rog> niemeyer: hopefully i've remembered everything
[13:02] <niemeyer> rog: Thanks!
[13:48] <niemeyer> rog: Before submitting cloudinit, would you mind doing a quick godoc test?
[13:49] <niemeyer> rog: I suspect that SetOutput will format improperly
[13:49] <niemeyer> rog: The documentation, that is
[13:49] <rog> niemeyer: i'll have a look
[13:50] <niemeyer> rog: Also, it'd be good to clarify AddRunCommand with smoser
[13:51] <niemeyer> rog: It's unclear (the "first?", and also which shell script)
[13:51] <rog> gopkgdoc FTW: http://gopkgdoc.appspot.com/pkg/launchpad.net/~rogpeppe/juju/go-cloudinit/cloudinit
[13:51] <niemeyer> rog: Wow :-)
[13:51] <niemeyer> This is awesome
[13:51] <rog> ain't it just?
[13:52] <rog> niemeyer: yup, you're right about SetOutput
[13:52] <niemeyer> Ok, I'll add the comments there
[13:54] <smoser> link?
[13:55] <smoser> ah. runcmd are only executed on first boot of an instance.
[13:55] <rog> smoser: and bootcmd?
[13:56] <smoser> i have to check.
[13:56] <smoser> you're adding them to cloudconfig, right?
[13:57] <smoser> as opposed to a multipart part
[13:57] <rog> smoser: yup
[13:57] <rog> (erm, actually i don't know what you mean by "multipart part" there...)
[13:58] <niemeyer> rog: Sent another round
[13:58] <niemeyer> smoser: Yeah, cloud-config
[13:58] <niemeyer> rog: cloud-init actually supports multiple kinds of input
[13:58] <smoser> rog, cloud-init can take multipart mime input. and one of the types is "boothook".
[13:59] <smoser> bootcmd runs every boot.
[13:59] <niemeyer> smoser: What's boothook?
[13:59] <smoser> which is actually not what was originally intended.
[13:59] <niemeyer> rog: ^ info will be useful for the review
[13:59] <rog> smoser: ah, so the doc in that cloud-init.txt is misleading
[14:00] <niemeyer> rog: How so?
[14:00] <smoser> it should follow "Cloud Boothook" at https://help.ubuntu.com/community/CloudInit
[14:00] <rog> smoser: it implies that the only difference between runcmd and bootcmd is the stage in the boot process that the commands get executed
[14:00] <smoser> but right now, it doesn't even put the INSTANCE_ID in environment.
[14:00] <smoser> i will change it to have that in environment.
[14:01] <smoser> rog, i'll update that documentation also.
[14:02] <smoser> rog, that link there also hopefully describes multipart, but its probably information only for you. i think your approach with cloud-config is fine
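The runcmd/bootcmd distinction smoser describes, as a minimal cloud-config fragment (emitted via a heredoc, roughly as a provider would assemble user-data; the log paths are illustrative):

```shell
# runcmd entries run once, on the instance's first boot;
# bootcmd entries run on every boot.
cat > user-data <<'EOF'
#cloud-config
runcmd:
 - [sh, -c, 'echo first boot only >> /var/log/demo-runcmd.log']
bootcmd:
 - [sh, -c, 'echo every boot >> /var/log/demo-bootcmd.log']
EOF
```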
[14:02] <rog> smoser: it would be useful if you could take a brief look at the other comments in the docs linked above and let me know if they look right.
[14:05] <rog> smoser: (if you have a moment, of course!)
[14:07] <smoser> SetDisableRoot: disables ssh login to the root account via the ssh authorized key found in metadata
[14:09] <smoser> rog, looks reasonable.
[14:09] <rog> smoser: great, thanks a lot
[14:12] <rog> smoser: when you say "the ssh authorized key found in metadata", do you mean one of the ssh keys specified with ssh_authorized_keys ?
[14:12] <rog> smoser: or is there a different key that this refers to?
[14:13] <smoser> different.
[14:14] <rog> smoser: which metadata is the key found in?
[14:15] <rog> smoser: (i'd like to document it if i can)
[14:15] <smoser> only the metadata source's public-keys are put into both user and root
[14:15] <smoser> ie, in ec2, the ec2 metadata service has a 'public-keys' entry (that is populated when you launch an instance with '--key mykey')
[14:16] <rog> ah, the EC2 metadata!
[14:16] <rog> so i'd be correct if i said this:
[14:16] <rog> // SetDisableRoot sets whether ssh login is disabled to the root account
[14:16] <rog> // via the ssh authorized key found in the instance's EC2 metadata.
[14:16] <rog> // It is true by default.
[14:16] <rog> ?
[14:18] <rog> smoser: ^
[14:18] <smoser> that is good. yeah.
[14:18] <rog> great, thanks
[14:18] <smoser> but it isn't necessarily EC2 metadata specific.
[14:19] <rog> hmm
[14:19] <smoser> as there are multiple DataSources, EC2 is one of them.
[14:19] <smoser> the others are "nocloud" (a directory-based source), and there is OVF.
[14:19] <smoser> which can also provide that stuff
[14:20] <rog> smoser: what does SetDisableEC2Metadata do on non-EC2 cloud providers?
[14:20] <rog> smoser: (and is that referring to the same metadata?)
[14:23] <rog> smoser: does this make sense to you:?
[14:23] <rog> // SetDisableRoot sets whether ssh login is disabled to the root account
[14:23] <rog> // via the ssh authorized key associated with the instance.
[14:23] <rog> // It is true by default.
[14:24] <rog> maybe "instance metadata" would be better there
[14:30] <rog> niemeyer: is there a way of running lbox propose that doesn't edit the description?
[14:30] <niemeyer> rog: Not at the moment.. feel free to save the text unchanged, though :-)
[14:30] <niemeyer> rog: I can add a flag if that bothers you
[14:31] <rog> niemeyer: yeah, i do. (well, i add a blank line because of my slightly strange editor set up)
[14:31] <rog> niemeyer: but it would be nice to be able to upload changes only
[14:31] <rog> niemeyer: i usually only edit the description once at the beginning
[14:31] <niemeyer> rog: Note that it doesn't actually upload the description if you don't change it
[14:31] <rog> niemeyer: sure. it's just i'd like to take out one more interactive step.
[14:32] <niemeyer> rog: I personally find it useful since it's a nice reminder to update an out-of-date description after the changes made
[14:32] <niemeyer> rog: But I can add a flag if you don't care
[14:32] <rog> niemeyer: i do care - but i always check on codereview anyway
[14:32] <rog> niemeyer: a flag would be great, please.
[14:35] <niemeyer> rog: Will do
[14:37] <niemeyer> rog: I see submit worked for you! :-)
[14:37] <rog> niemeyer: yup, yay!
[14:38] <rog> niemeyer: BTW what things are fixed by the first call to propose? can i, for instance, change the prereq later?
[14:38] <rog> niemeyer: or the bug number, etc
[14:39] <niemeyer> rog: Launchpad doesn't allow changing the prereq, so there's nothing we can do about it
[14:39] <niemeyer> rog: -bug always works
[14:39] <niemeyer> rog: It doesn't _change_ the bug, though
[14:39] <niemeyer> rog: It associates the given bug with the merge proposal (and blueprint, in case it was used)
[14:40] <rog> niemeyer: if you use -prep, is the bug created then? or later?
[14:41] <niemeyer> rog: Works in either case
[14:41] <niemeyer> rog: Sorry
[14:41] <niemeyer> rog: I misinterpreted your question
[14:41] <niemeyer> rog: The bug is created and associated whenever -bug is used
[14:41] <niemeyer> rog: Erm
[14:41] <niemeyer> rog: The bug is associated whenever -bug is used
[14:41] <niemeyer> rog: The bug is created and associated whenever -new-bug is used
[14:41] <rog> niemeyer: -new-bug?
[14:41] <rog> cool
[14:42] <niemeyer> rog: So propose can be called repeatedly at will
[14:42] <niemeyer> rog: and associate/create/update stuff as one goes
[14:43] <rog> niemeyer: if you do new-bug twice, will you get two bugs with the same text?
[14:45] <niemeyer> rog: You'll get as many bugs as you call -new-bug with, with whatever text was used
[14:45] <rog> niemeyer: cool, just checking
[14:45] <niemeyer> rog: This is probably a bad behavior, though.. we can do better
[14:46] <rog> niemeyer: presumably lbox propose doesn't remember the new-bug option
[14:46] <niemeyer> rog: We have metadata about it at the server side
[14:46] <niemeyer> rog: Since we associate with the merge proposal
[14:53] <mchenetz> Good morning
[15:06] <SpamapS> mchenetz: ahoy
[15:12] <mchenetz> Spamaps: Back at ya. :-)
[15:12] <niemeyer> I'm heading to lunch
[15:23] <mchenetz> Is there any diagram of the Juju solution? I am starting to integrate Virtualbox into Juju and i wanted to see if there was any good documentation on Juju dev
[15:35] <TheMue> mchenetz: You've already visited https://juju.ubuntu.com/docs/?
[15:35] <TheMue> mchenetz: Here you'll find a lot of information about juju.
[15:36] <mchenetz> TheMue: I was just wondering if there was a general flow diagram or something... But i will look through it
[15:39] <TheMue> mchenetz: And feel free to ask.
[15:40] <mchenetz> TheMue, thanks
[15:41] <SpamapS> mchenetz: at one point I had one that showed the architecture.. but I don't know if it will be all that helpful.
[15:42] <mchenetz> Spamaps: If it's still relevant then i would like to see it... If not, then don't worry about it.
[15:42] <SpamapS> mchenetz: for providers, you should only need to add a file to juju/providers and maybe config stuff in juju/environment/config.py
[15:43] <mchenetz> What about inside the virtualbox image? Any daemons that i need to inject into the image on create? Or, do i pre-create images?
[15:43] <SpamapS> mchenetz: boot the cloud image, it will get things right
[15:43] <m_3> need to touch some other stuff throughout the code too (like unit/address.py)
[15:44] <mchenetz> These are the things i need to know... I guess i will start and as i hit a roadblock, i will just ask questions. I will look at the local provider python code as an example of what i need to hook into
[15:45] <SpamapS> mchenetz: cloud-images.ubuntu.com
[15:45] <SpamapS> mchenetz: no, the local provider is going to confuse you a lot I think.
[15:45] <SpamapS> mchenetz: because the LXC stuff is "special" .. and isn't done as a provider, it's done as a container technology to run inside VMs.
[15:46] <mchenetz> Spamaps: okay... Then i am not sure... I will just do my best
[15:46] <SpamapS> mchenetz: probably easier to copy the dummy provider actually. :)
[15:46] <mchenetz> Spamaps: Okay, i will do that... Thanks
[16:06] <marcoceppi> Could I use juju to deploy on Ubuntu Cloud Live?
[16:08] <fwereade> mchenetz, just a note: I tried to make the docstrings for MachineProviderBase helpful, do take a look at those
[16:09] <mchenetz> fwereade: thanks, will do
[16:20] <jimbaker> hazmat, thanks for merging in the branches to enable juju commands immediately after bootstrap
[16:20] <jimbaker> (enable their effective, error free use)
[16:24] <m_3> whoohoo!
[16:26] <jimbaker> m_3, yes, it's a very nice change. glad that was able to make it in. also i'm feeling better today :)
[16:27] <m_3> jimbaker: good... missed you at the goat yesterday
[16:28] <jimbaker> unfortunately my wife is now sick with whatever i had earlier this week
[16:37] <mpl> jimbaker: that's true love and dedication ;)
[16:38] <jimbaker> mpl, indeed ;)
[16:43] <drt24> juju eureka isn't in the ppa yet?
[16:47] <robbiew> drt24: I think it is...but SpamapS can probably answer best
[16:47] <robbiew> oneiric or precise?
[16:47] <robbiew> (ppa)
[16:47] <drt24> both
[16:48] <drt24> or I could be reading it wrong.
[16:55] <drt24> I suspect that I am wrong and was confused by the ppa not being linked from the juju project page but only from the juju hackers page.
[16:56] <jcastro> juju logo sideways! http://www.stumbleupon.com/
[17:06] <m_3> yeah, I've seen a couple of pipes logos out there
[17:11] <hazmat> jimbaker, bcsaller, fwereade standup?
[17:11] <jimbaker> hazmat, sure
[17:11] <fwereade> hazmat, sounds good
[17:30] <TheMue> Aargh, layer 8 error detected. Using the right command to do something is helpful. *sigh*
[17:35] <TheMue> rog: Why do we have our own log package instead of using the Go log package?
[17:36] <rog> TheMue: i think it's so that we have a central place to set up logging (it's not possible to set up the normal log package so it prints to anything other than stderr, i think)
[17:36] <TheMue> rog: It can, log.SetOutput(w io.Writer)
[17:37] <rog> TheMue: also, we want to layer it on top of gocheck, but you can only layer log onto io.Writer
[17:37] <rog> TheMue: oh yes
[17:37] <TheMue> rog: The package is not yet optimal but quite flexible.
[17:37] <rog> TheMue: but the latter point remains
[17:38] <TheMue> rog: How about letting gocheck implement the Writer interface?
[17:38] <rog> TheMue: the writer interface is not really record-oriented
[17:39] <TheMue> rog: The Go log is standard and used by other packages too. How do we handle logging of used package that we don't develop on our own?
[17:40] <TheMue> rog: And our own writer impl could do intelligent stream parsing too.
[17:41] <rog> TheMue: i had this discussion with niemeyer before, and i lost. you'll have to convince him, not me.
[17:41] <TheMue> Maybe we should talk about it in Bud.
[17:43] <niemeyer> TheMue: Note that we're not replacing the log package in any way
[17:43] <TheMue> rog: Another topic. Currently we use Makefiles. You once talked about the migration to use goinstall. How is the status here?
[17:43] <niemeyer> TheMue: We're building on it
[17:43] <niemeyer> TheMue: rog had the same feeling originally, but it isn't the case
[17:43] <rog> TheMue: we use both Makefiles and goinstall
[17:43] <TheMue> niemeyer: OK, we will do so. Currently it's only fmt based.
[17:44] <rog> TheMue: currently you can't avoid Makefiles if you want to use gotest
[17:44] <niemeyer> TheMue: and I can certainly understand why both of you had that feeling.. the entry point of logging is a different function, and we now have a package named "log"
[17:44] <niemeyer> TheMue: Not really.. we don't format the actual log
[17:44] <m_3> marcoceppi: ping
[17:44] <marcoceppi> m_3: pong
[17:44] <niemeyer> TheMue: What we do is trivial fmt.Sprintf
[17:44] <TheMue> rog: Yep, I hope this will be fixed soon, ideally before Go 1.
[17:44] <niemeyer> TheMue, rog: => #juju-dev please
[17:45] <rog> i was just going to suggest that
[17:45] <m_3> marcoceppi: can you please change ownership of lp:charm/roundcube to ~charmers?
[17:45] <marcoceppi> m_3: yes
[17:45] <marcoceppi> Did I progumate wrongly?
[17:46] <m_3> marcoceppi: dunno... needs to end up owned by charmers... perhaps it's a bug in charm promulgate
[17:46] <m_3> marcoceppi: it happens to me too.. have to go manually change ownership to charmers afterwards
[17:46] <jcastro> marcoceppi: it's god punishing us for creating a command called "promulgate"
[17:46] <m_3> rofl
[17:46] <marcoceppi> m_3: haha, changed :)
[17:46] <jcastro> awesome, so roundcube is done?
[17:47] <jcastro> and phpmyadmin was mostly done right?
[17:47] <m_3> marcoceppi: gracias
[17:47] <marcoceppi> jcastro: Yeah, I need to test the apt->upstream->apt->upstream to make sure there isn't anything ugly lurking
[17:48] <marcoceppi> Otherwise it was ready for review. So, if I get time today *crosses fingers*
[17:48] <jcastro> marcoceppi: nitpick on using "aptitude" in your function for installing from the archive btw.
[17:48] <jcastro> perhaps "repository" or something would make more sense? </bikeshed>
[17:48] <marcoceppi> Ah, I was like NO WAY I USE APT I SWEAR
[17:48] <marcoceppi> repository would be better <3
[17:52] <m_3> jcastro: roundcube is in... still have a couple of test scenarios, but they'll just be bugs against trunk if those fail
[17:52]  * jcastro nods and will just give those a couple of days
[18:45] <nijaba> SpamapS: marcoceppi: could one of you have a look at lp:~nijaba/charm-tools/peer-scp and tell me how I can write a test function for it? alias scp and build a state test function?
[18:46]  * marcoceppi recommends rsync <3
[18:46] <nijaba> marcoceppi: already done with scp, easy to allow for the option
[19:27] <SpamapS> nijaba: or start your own sshd on a random port and actually do the scp. ;)
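The lighter-weight of the two options being discussed (shadowing scp rather than running a real sshd): replace scp with a function that just records its arguments, then assert on the recording. ch_peer_copy here is a hypothetical stand-in for the helper under test, not the actual peer-scp branch code:

```shell
# In bash, a function shadows the binary of the same name, so the
# helper never touches the network during the test.
SCP_LOG=$(mktemp)
scp() {
    echo "scp $*" >> "$SCP_LOG"
}

# Hypothetical helper under test: push one file to a peer.
ch_peer_copy() {
    scp "$1" "peer-host:$2"
}

ch_peer_copy /etc/ssl/cert.pem /etc/ssl/cert.pem
```

The test then inspects $SCP_LOG rather than any remote state, which is what makes this approach work without bootstrapping anything.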
[19:28] <SpamapS> marcoceppi: re ubuntu cloud live, yes that should work. I've been meaning to try juju against it for a while.
[22:08] <SpamapS> Hmm, looks like there's a test that looks for a very specific error message which sometimes shows up differently...
[22:09] <SpamapS> https://launchpadlibrarian.net/87459690/buildlog_ubuntu-oneiric-i386.juju_0.5%2Bbzr432-1juju2~oneiric1_FAILEDTOBUILD.txt.gz
[22:11]  * hazmat peeks
[22:12] <hazmat> hmm
[22:12] <SpamapS> hazmat: when I run that test locally on precise it passes
[22:12] <SpamapS> hazmat: so I'm wondering if its just something weird running in the buildd chroot
[22:14] <hazmat> SpamapS, there's no external interaction, the error is being setup by the test via a mock
[22:14] <SpamapS> hazmat: so is this a case where something else is causing said error?
[22:15] <hazmat> SpamapS, no.. its the error reporting not matching the expected error text
[22:15] <hazmat> SpamapS, the error is coming back with some additional traceback information that the test doesn't like
[22:16] <hazmat> SpamapS, splitting the expected strings and asserting both are in the output would suffice to resolve and maintain the expectation
[22:16]  * hazmat digs up a trivial patch
[22:17] <hazmat> it is odd that the error representation would change for buildd but not for local usage
[22:18] <hazmat> here's the trivial patch http://paste.ubuntu.com/770590/
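The shape of hazmat's fix (the actual patch is behind the paste link; this just illustrates the pattern with made-up error text): split the expected message into fragments and assert each one appears in the output, instead of an exact match that breaks when extra traceback text gets prepended.

```shell
# Loose matching: check each expected fragment independently,
# rather than comparing the whole message exactly.
output="Traceback (most recent call last): ...
SomeProviderError: file storage is not writable"

check_fragment() {
    printf '%s' "$output" | grep -q "$1"
}

check_fragment "SomeProviderError"
check_fragment "not writable"
```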
[22:19] <SpamapS> hazmat: I'm trying the test in a local sbuild chroot to see if it is different
[22:24] <SpamapS> hazmat: fails in a clean chroot, so some python module is changing things on our local machines
[22:25] <SpamapS> or lxc maybe?
[22:25] <hazmat> SpamapS, interesting.. i'd suspect a change to the logging module or twisted
[22:25] <SpamapS>     test_watch_new_service_unit ... No handlers could be found for logger "juju.agents.machine"
[22:25] <SpamapS> I see that, but that is much earlier
[22:26] <hazmat> SpamapS, that's not particularly indicative of anything in this context, except a test didn't care about log output, and didn't set up a default logger
[22:26] <hazmat> ie. its harmless
[22:26] <SpamapS> hazmat: in oneiric, none of the modules have changed...
[22:27] <SpamapS> well shoot, WTF has been failing since 431
[22:28] <hazmat> hmm
[22:28] <hazmat> oh..
[22:28] <hazmat> ah
[22:28] <hazmat> and functional tests since 429
[22:29] <SpamapS> hazmat: your trivial in 431 changed this text
[22:29] <hazmat> SpamapS, yup
[22:29] <hazmat> i'll commit the trivial fix then
[22:29] <SpamapS> yeah
[22:29] <SpamapS> it looks good to me, +1
[22:29] <hazmat> niemeyer, wtf functional tests are failing with.. Bootstrap aborted because file storage is not writable: Error Message: You have attempted to create more buckets than allowed
[22:30] <hazmat> for the last few revs
[22:30] <SpamapS> hazmat: wait, can you push it to a branch and I'll try it in a chroot?
[22:30] <hazmat> SpamapS, sure
[22:30]  * SpamapS realizes its already been pastebinned..
[22:30] <SpamapS> hazmat: if you haven't pushed it already, belay that.. I can do it here
[22:31] <hazmat> SpamapS, un momento new patch coming
[22:31] <hazmat> SpamapS, http://paste.ubuntu.com/770604/
[22:32] <SpamapS> yeah the 1st one didn't work ;)
[22:33] <SpamapS> hazmat: confirmed, that fixes it
[22:34] <SpamapS> hazmat: if you want to test in a chroot (highly recommended) its pretty easy.. install ubuntu-dev-tools, run 'mk-sbuild oneiric', and then 'schroot -c oneiric-amd64 -u root' to get a clean oneiric chroot to play in (that gets cleaned up on exit)
[22:34] <SpamapS> apt-get build-dep juju will pull in all the build deps
[22:35] <SpamapS> .. obviously ;)
[22:36] <hazmat> SpamapS, why not just lxc?
[22:36] <hazmat> SpamapS, thanks i'll try that out
[22:37] <SpamapS> hazmat: because this is nearly identical to the buildd
[22:38] <nijaba> SpamapS: ok, will try, thanks
[22:38] <nijaba> marcoceppi: first version with rsync as an option: lp:~nijaba/charm-tools/peer-scp/
[22:39] <nijaba> SpamapS: (context: start my own scp)
[22:39] <nijaba> sshd even
[22:41] <_mup_> juju/trunk r433 committed by kapil.thangavelu@canonical.com
[22:41] <_mup_> [trivial] fix provider unit test regression from r431, from overly exact error output check [r=clint-fewbar]
[22:42] <niemeyer> hazmat: I noticed that, and cleared them today
[22:43] <niemeyer> I'm heading out for the day.. cheers everybody!
[22:43] <hazmat> niemeyer, cheers
[22:43]  * hazmat debates heading out to a nodejs meetup
[22:51]  * SpamapS dist-upgrades his precise box with fingers firmly crossed
[23:20] <hazmat> smoser, do you know if cloud-init is installed on the rackspace cloud instances by default
[23:23] <hazmat> apparently not
[23:37] <_mup_> juju/ssh-known_hosts r439 committed by jim.baker@canonical.com
[23:37] <_mup_> Inject keys as part of cloud-init for ZK instances
[23:43] <_mup_> juju/ssh-known_hosts r440 committed by jim.baker@canonical.com
[23:43] <_mup_> Merged trunk & resolved conflicts