[00:05] m_3: pong [00:06] nijaba: so in the peer relation hook, it doesn't look like the key's being gen'd [00:06] m_3: humm which one? the ssh or des? [00:07] des [00:07] * nijaba looks [00:07] I'm still testing, but gotta make some changes locally before I get further [00:08] config-changed barfs if you just use default config params (trying to do stuff with ssl certs even though do_https is off) [00:08] m_3: hmm. the des key should be generated the first time get-des-key gets called [00:09] m_3: uhhh did not do much testing with default params, you are right. [00:09] * nijaba juju bootstraps [00:09] wondering about a couple of strictly '-lt's in the peer hook [00:10] we should offer Juju boot straps in the ubuntu store [00:10] doesn't look like that'd be called for the master... that the key's being gen'd from a set-des-key calling get-des-key [00:12] m_3: it is called the first time peer-relation-joined is invoked, afaics [00:13] nijaba: adding a 'do_https' guard around 'set-ssl-cert' in config-changed to get it deployed [00:16] m_3: drat, you are right. I should have guarded this. do you want me to work on it with default params a bit more and signal you back? I was so concentrated on distributing those certs that I forgot the basic test case [00:17] nijaba: the part I'm trying to get to test is the peer-relation-all in the master case. I'm wondering if: [00:17] if [[ $LOCAL_UNIT_ID -lt $REMOTE_UNIT_ID ]] && [[ $LOCAL_UNIT_ID -lt $FIRST_UNIT_ID ]] ; then [00:18] sorry for the bad paste, but I'm wondering if this is ever true [00:18] [[ "0" -lt $SOME_UNDEFINED_VARIABLE ]] && echo "yes" || echo "no" [00:18] m_3: it is, according to my logs. done tests all the way, removing a master, middle and end nodes [00:19] looks like we need a charm helper for peers and leader detection [00:19] I did something very similar in ceph [00:20] SpamapS: yes, agree... there're multiple impls already [00:20] nijaba: cool... I'll bang on it [00:20] * m_3 is sticking with my primary skills [00:21] m_3: the theory is that the unit I am on is never in the list. So if my id is less than the remote and the first unit id in the list, then I am elected master [00:22] nijaba: right [00:23] relation-list shows all parties, IIRC [00:23] drat, my chmods in set-ssl-cert are at the wrong place, hence the issue you found when they are not set [00:23] * SpamapS honestly doesn't remember, but I think in ceph I had to account for my own ID being in the list [00:24] I'll test it carefully to see what's going on... my guess was that this wasn't executing, but the key was still generated 'lazily' by set-des-key [00:24] SpamapS: I never ever saw this happen so far. maybe implementation has changed? [00:25] set-des-key with and empty arg actually calls get-des-key which gens [00:25] s/and/an/ [00:25] relation-list > $units_file [00:25] echo $JUJU_UNIT_NAME >> $units_file [00:25] nijaba: you are correct .. it is actually only the "other" units [00:27] totally need an i_am_leader helper fn [00:27] m_3: yes, this was intended. [00:28] m_3: (set-des-key calling get-des-key when called with no param) [00:28] m_3: ch_peer_leader [00:29] and ch_peer_i_am_leader [00:29] m_3: that would have helped me A LOT. had to fight a few hours to understand the logic [00:31] nijaba: as did I. We will have to combine the best of those two implementations into a charm helper version.
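
[Aside: a minimal sketch of the kind of leader-election helper being discussed, assuming relation-list prints only the "other" peer units (as confirmed at 00:25) and that unit names look like "service/N". The names ch_unit_id, ch_peer_leader and ch_peer_i_am_leader are just the ones floated above, not an existing charm-tools API:]

    #!/bin/sh
    # Sketch only: elect the peer unit with the lowest sequence id as leader.
    ch_unit_id() {
        # "service/3" -> "3"
        echo "${1##*/}"
    }

    ch_peer_leader() {
        # relation-list only shows the remote units, so add ourselves back in,
        # then print the unit with the lowest sequence id.
        { relation-list; echo "$JUJU_UNIT_NAME"; } | sort -t / -k 2 -n | head -n 1
    }

    ch_peer_i_am_leader() {
        [ "$(ch_peer_leader)" = "$JUJU_UNIT_NAME" ]
    }

    if ch_peer_i_am_leader; then
        echo "elected master; push my key to the peers"
    fi
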
[00:32] m_3: just pushed a fix for the config-changed set-ssl-cert that seems to work better :) [00:32] that's ok, I reimplemented facter while writing varnish... doh! [00:33] nijaba: I'll pull in a bit... I just commented them out to get to the peer relations [00:34] SpamapS: mine is actually already inspired from your ceph charm, but it did not look like we had exactly the same case. ceph seems to need keys to all other peers, I just need to push the keys of the current master [00:34] nijaba: yeah there are two different things needed. One is a generic system for agreeing on who is the leader. The other one is a generic system for transferring a file from the leader to all non-leaders in a peer relation. [00:35] SpamapS: right. [00:37] I can try to genericize in ch_helper if you want. It's quite fresh in my head atm [00:37] Go for it! [00:37] nijaba: please add tests for it in tests/helpers [00:37] SpamapS: k [00:38] nijaba: I named the file 'helpers.sh' but I think it should have been 'net.sh'. Since what you are doing is really, I think, unrelated to net.sh, you should maybe call it peer.sh [00:38] SpamapS: remind me where the branch is please [00:38] lp:charm-tools [00:40] m_3: I really did not test without cert sets. /me slaps myself [00:40] nijaba: peer keys seem to be generated just fine... sorry for the noise [00:41] pulling your latest [00:41] * m_3 on to the next test scenario [00:44] yeah... Stackops server1 is up... Now for another compute node [00:46] or openstack [00:52] SpamapS: I am not seeing any helpers.sh file, only helper/sh/net.sh... not sure what you meant [00:53] SpamapS: Can I use bash, or do you think sh is very important? [00:56] nijaba: in tests/helpers, there is helpers.sh [00:56] nijaba: write the tests first. ;) [00:56] SpamapS: ah? really? I usually do the opposite... [00:57] nijaba: I use posix shell only, if you guys feel strongly that bash is important, we can make them bash specific, but they need to be called .bash, not .sh [00:57] nijaba: TDD, write test, then write code. [00:57] SpamapS: aye sir [00:57] nijaba: you're welcome to do it your wya. :) [00:57] way even [00:58] but typically tests written after the fact are more shallow and have more assumptions in them. [00:59] SpamapS: good to know. [01:00] nijaba: to be fair, I do it "the right way" only about 50% of the time, because usually I'm too busy to write tests first. :) [01:00] the biggest thing that gets me with sh -vs- bash is the ${VARIABLE%%.xml} expansion stuff [01:01] saves having to exec cut [01:01] m_3: that stuff is awesome.. and available in all shells. [01:01] ${parameter%%word} Remove Largest Suffix Pattern. The word is expanded to produce a pattern. [01:01] and really lots more than just cutting (regexp subs) [01:01] from man dash [01:02] was having problems in dash [01:02] in particular, the substitution one... ${VARIABLE//\/*} [01:02] SpamapS: what were you expecting "ch_peer_leader"? return the leader unit name? [01:03] nijaba: echo $leader .. yeah [01:03] even unit sequence id would be great [01:03] m_3: which do you prefer?
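
[Aside on the ${...} expansions mentioned at 01:00-01:02: the prefix/suffix trims (#, ##, %, %%) are POSIX and are what dash implements; the ${var//pattern/replacement} substitution is a bash/ksh extension, which would explain the trouble with it in dash. A few illustrative, hypothetical values:]

    #!/bin/sh
    FILE="config.local.xml"
    UNIT="wordpress/12"

    echo "${FILE%%.xml}"   # config.local  - strip a suffix instead of exec'ing cut
    echo "${FILE%%.*}"     # config        - %% removes the largest matching suffix
    echo "${FILE%.*}"      # config.local  - %  removes the smallest matching suffix
    echo "${UNIT##*/}"     # 12            - ## removes the largest matching prefix (a ch_unit_id one-liner)
    echo "${UNIT%%/*}"     # wordpress     - the service name

    # bash/ksh only, not POSIX sh or dash:
    #   echo "${UNIT//\//-}"   # wordpress-12
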
[01:03] * nijaba put a param :) [01:03] but go with the whole unit name I'd think, it can be trimmed by the user [01:04] oh yeah another one that wouldn't be peer related at all is just "ch_my_unit_id" and "ch_unit_id" [01:04] with the ${//} above :) [01:04] Even though those are basically just 1 liners [01:04] m_3: I tend to go with ## or # [01:04] But I can't explain why ;) [01:04] right [01:04] didn't know dash was supposed to impl it [01:05] I'll check and see if a bug is apropos [01:06] you'd think that'd be in the test suite tho [01:07] Yeah I believe there's an actual POSIX shell test suite somewhere out there [01:20] * m_3 relocating... coffee shop -> home [01:22] m_3: drive safely :) [01:59] SpamapS: how can I simulate a call to relation-list? relation-list is a bad function name in sh [03:19] SpamapS: charm_tools waiting for a merge :) [03:19] * nijaba -> bed [07:11] nijaba: alias relation-list=my_relation_list [08:15] rog: Ready for a workflow question? [08:17] TheMue: sure [08:17] rog: In my past I've worked with svn and hg, so some questions about the workflow here with bzr. [08:18] TheMue: go on [08:18] rog: If I wanna start something new, like we talked about yesterday, I first create a branch with a kind of name describing what I'm doing? [08:19] rog: E.g. bzr lp:juju/go go-add-state [08:20] TheMue: yeah. actually i rarely know what the change will be until i'm nearly ready to submit, so i often just name it something fairly generic to start with [08:20] rog: ok [08:20] s/bzr/bzr branch/ but yes, that's right [08:20] rog: Oh, eh, yes. ;) [08:21] rog: So let's assume I've then got something I want to submit, what are the next stepts? [08:21] steps [08:21] in your case, i think that rather than starting with a branch, i might write down how you envisage the API [08:21] and put it forward for comments [08:21] but anyway [08:22] ok if you want to submit something [08:22] you'd use gustavo's lbox tool [08:22] (have you installed that?) [08:22] yep [08:22] there's been a new version very recently, so you might want to update, BTW [08:22] done this morning ;) [08:23] ok, so you'd commit your changes (bzr commit) [08:23] then you'd do lbox propose -cr -for lp:juju/go [08:24] and that should get you to edit a file to which you can add the description [08:24] of the changes [08:24] and it'll put the merge request onto launchpad and codereview [08:24] ah, ic [08:25] then if you need to make some changes in response to the review, you commit and run lbox propose again (with no args this time) [08:26] then when you get a LGTM, it's time to do the actual merge [08:26] to do that, i make sure that i have a trunk branch that's up to date (e.g. cd my-repo-dir/go-trunk; bzr pull) [08:27] then go to that directory and merge in your change [08:27] ok, got it [08:27] bzr merge ../go-add-state [08:27] then build and make sure that everything tests ok [08:28] then, assuming that's all ok, you can do: bzr push [08:28] to actually commit the changes [08:28] who's giving the LGTM? only gustavo or are at least two people always needed? [08:29] fine, helps a lot, will write it down in my Getting Started document [08:29] TheMue: only one LGTM is needed, but you might want to resolve discussions if there's some other comments [08:30] so if anyone says it's ok that's enough. got it.
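
[Aside: the propose/merge workflow rog just walked through, collected into one sequence. A sketch using the example branch names from above (go-add-state, go-trunk); the commit messages are illustrative:]

    # start a feature branch and hack on it
    bzr branch lp:juju/go go-add-state
    cd go-add-state
    bzr commit -m "state: first cut of the new API"

    # put the merge proposal up on Launchpad and codereview
    lbox propose -cr -for lp:juju/go

    # after review comments: commit the fixes and re-propose (no args this time)
    bzr commit -m "address review comments"
    lbox propose

    # once you have a LGTM: merge into an up-to-date trunk, test, then push
    cd ../go-trunk
    bzr pull
    bzr merge ../go-add-state
    # build and make sure everything tests ok, then commit the merge and push
    bzr commit -m "merge go-add-state"
    bzr push
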
[08:30] the other thing that caught me out was when i had some branches in a sequence [08:30] TheMue: generally you want a LGTM from gustavo - he's the man [08:31] rog: ok [08:31] once you've pushed a branch, if you want to work on the next branch in a sequence of changes, you'll want to merge trunk into that branch before continuing [08:31] oh yes, once you've merged, you need to commit the merge [08:32] ok, sounds reasonable [08:34] thx for your help, will write it down now [09:09] TheMue: jsyk the bzr online help pages + the tips on launchpad are pretty complete already. [09:09] hi all [09:15] mpl: hi and yep, they are looking good. [09:16] mpl: just wanted to know how we're doing branching and merging here. had different policies in other companies. [09:23] TheMue: ok. for now I've just followed gustavo's blog post about lbox, it was enough to get started. [09:48] hi all [11:00] Hello all.. how's that jujuer life coming? [11:03] it's.. charming [11:07] :-) [11:10] niemeyer: mornin' [11:10] rog: yo [11:10] rog: How're things going there? [11:11] niemeyer: not bad. distracted by watching that devops video this morning. lots of i have no idea about :-) [11:11] s/lots of/lots of stuff [11:11] rog: It was pretty interesting indeed [11:12] <_mup_> Bug #904201 was filed: need supporting code to model machine constraints < https://launchpad.net/bugs/904201 > [11:12] rog: After about the middle it got a bit boring, with the guy going over the history of a lot of people without much context for how it made any sense to bring these up [11:12] ..ooOO( Reminder: Watch that video, too. ) [11:12] rog: But otherwise really good [11:12] yeah [11:13] I'll be participating in a DevOps talk at the OOP too. Looking forward to it. [11:16] niemeyer: Getting a better understanding of the existing state code. There's nothing yet like the state.State we talked about yesterday? [11:18] TheMue: You can see state.State a _bit_ like the sum of all the StateManager classes [11:19] TheMue: We should clean these interfaces up while joining, but that's the direction at least [11:19] niemeyer: Understood, ok [12:00] <_mup_> juju/trunk r432 committed by kapil.thangavelu@canonical.com [12:00] <_mup_> [merge] robust-zk-connect,sshclient-refactor juju commands will now [12:00] <_mup_> wait for juju to be running post bootstrap, so using juju commands immediately [12:00] <_mup_> after bootstrap is viable. [a=jimbaker,hazmat][r=fwereade,jimbaker,hazmat] [12:00] <_mup_> [f=849071] [13:00] niemeyer: https://codereview.appspot.com/5444043/ [13:00] niemeyer: hopefully i've remembered everything [13:02] rog: Thanks! [13:48] rog: Before submitting cloudinit, would you mind doing a quick godoc test? [13:49] rog: I suspect that SetOutput will format improperly [13:49] rog: The documentation, that is [13:49] niemeyer: i'll have a look [13:50] rog: Also, it'd be good to clarify AddRunCommand with smoser [13:51] rog: It's unclear (the "first?", and also which shell script) [13:51] gopkgdoc FTW: http://gopkgdoc.appspot.com/pkg/launchpad.net/~rogpeppe/juju/go-cloudinit/cloudinit [13:51] rog: Wow :-) [13:51] This is awesome [13:51] ain't it just? [13:52] niemeyer: yup, you're right about SetOutput [13:52] Ok, I'll add the comments there === sanderj is now known as Sander^work [13:54] link? [13:55] ah. runcmd are only executed on first boot of an instance. [13:55] smoser: and bootcmd? [13:56] i have to check. [13:56] you're adding them to cloudconfig, right?
[13:57] as opposed to a multipart part [13:57] smoser: yup [13:57] (erm, actually i don't know what you mean by "multipart part" there...) [13:58] rog: Sent another round [13:58] smoser: Yeah, cloud-config [13:58] rog: cloud-init actually supports multiple kinds of input [13:58] rog, cloud-init can take multipart mime input. and one of the types is "boothook". [13:59] bootcmd runs every boot. [13:59] smoser: What's boothook? [13:59] which is actually not what was originally intended. [13:59] rog: ^ info will be useful for the review [13:59] smoser: ah, so the doc in that cloud-init.txt is misleading [14:00] rog: How so? [14:00] it should follow "Cloud Boothook" at https://help.ubuntu.com/community/CloudInit [14:00] smoser: it implies that the only difference between runcmd and bootcmd is the stage in the boot process that the commands get executed [14:00] but right now, it doesn't even put the INSTANCE_ID in environment. [14:00] i will change it to have that in environment. [14:01] rog, i'll update that documentation also. [14:02] rog, that link there also hopefully describes multipart, but its probably information only for you. i think your approach with cloud-config is fine [14:02] smoser: it would be useful if you could take a brief look at the other comments in the docs linked above and let me know if they look right. [14:05] smoser: (if you have a moment, of course!) [14:07] SetDisableRoot: disables ssh login to the root account via the ssh authorized key found in metadata [14:09] rog, looks reasonable. [14:09] smoser: great, thanks a lot [14:12] smoser: when you say "the ssh authorized key found in metadata", do you mean one of the ssh keys specified with ssh_authorized_keys ? [14:12] smoser: or is there a different key that this refers to? [14:13] different. [14:14] smoser: which metadata is the key found in? [14:15] smoser: (i'd like to document it if i can) [14:15] only the metadata source's public-keys are put into both user and root [14:15] ie, in ec2, the ec2 metadata service has a 'public-keys' entry (that is populated when you launch an instance with '--key mykey') [14:16] ah, the EC2 metadata! [14:16] so i'd be correct if i said this: [14:16] // SetDisableRoot sets whether ssh login is disabled to the root account [14:16] // via the ssh authorized key found in the instance's EC2 metadata. [14:16] // It is true by default. [14:16] ? [14:18] smoser: ^ [14:18] that is good. yeah. [14:18] great, thanks [14:18] but it isn't necessarily EC2 metadata specific. [14:19] hmm [14:19] as there are multiple DataSources, EC2 is one of them. [14:19] the others are "nocloud" (directory, and there is OVF). [14:19] which can also provide that stuff [14:20] smoser: what does SetDisableEC2Metadata do on non-EC2 cloud providers? [14:20] smoser: (and is that referring to the same metadata?) [14:23] smoser: does this make sense to you? [14:23] // SetDisableRoot sets whether ssh login is disabled to the root account [14:23] // via the ssh authorized key associated with the instance. [14:23] // It is true by default. [14:24] maybe "instance metadata" would be better there [14:30] niemeyer: is there a way of running lbox propose that doesn't edit the description? [14:30] rog: Not at the moment.. feel free to save the text unchanged, though :-) [14:30] rog: I can add a flag if that bothers you [14:31] niemeyer: yeah, i do.
(well, i add a blank line because of my slightly strange editor set up) [14:31] niemeyer: but it would be nice to be able to upload changes only [14:31] niemeyer: i usually only edit the description once at the beginning [14:31] rog: Note that it doesn't actually upload the description if you don't change it [14:31] niemeyer: sure. it's just i'd like to take out one more interactive step. [14:32] rog: I personally find it useful since its a nice reminder to update an out-of-date description after the changes made [14:32] rog: But I can add a flag if you don't care [14:32] niemeyer: i do care - but i always check on codereview anyway [14:32] niemeyer: a flag would be great, please. [14:35] rog: Will do [14:37] rog: I see submit worked for you! :-) [14:37] niemeyer: yup, yay! [14:38] niemeyer: BTW what things are fixed by the first call to propose? can i, for instance, change the prereq later? [14:38] niemeyer: or the bug number, etc [14:39] rog: Launchpad doesn't allow changing the prereq, so there's nothing we can do about it [14:39] rog: -bug always works [14:39] rog: It doesn't _change_ the bug, though [14:39] rog: It associates the given bug with the merge proposal (and blueprint, in case it was used) [14:40] niemeyer: if you use -prep, is the bug created then? or later? [14:41] rog: Works in either case [14:41] rog: Sorry [14:41] rog: I misinterpreted your question [14:41] rog: The bug is created and associated whenever -bug is used [14:41] rog: Erm [14:41] rog: The bug is associated whenever -bug is used [14:41] rog: The bug is created and associated whenever -new-bug is used [14:41] niemeyer: -new-bug? [14:41] cool [14:42] rog: So propose can be called repetitively at will [14:42] rog: and associate/create/update stuff as one goes [14:43] niemeyer: if you do new-bug twice, will you get two bugs with the same text? [14:45] rog: You'll get as many bugs as you call -new-bug with, with whatever text was used [14:45] niemeyer: cool, just checking [14:45] rog: This is probably a bad behavior, though.. we can do better [14:46] niemeyer: presumably lbox propose doesn't remember the new-bug option [14:46] rog: We have metadata about it at the server side [14:46] rog: Since we associate with the merge proposal [14:53] Good morning [15:06] mchenetz: ahoy [15:12] Spamaps: Back at ya. :-) [15:12] I'm heading to lunch [15:23] Is there any diagram of the Juju solution? I am starting to integrate Virtualbox into Juju and i wanted to see if there was any good documentation on Juju dev [15:35] mchenetz: You've already visited https://juju.ubuntu.com/docs/? [15:35] mchenetz: Here you'll find many informations about juju. [15:36] TheMue: I was just wondering if there was a general flow diagram or something... But i will look through it [15:39] mchenetz: And feel free to ask. [15:40] TheMue, thanks [15:41] mchenetz: at one point I had one that showed the architecture.. but I don't know if it will be all that helpful. [15:42] Spamaps: If it's still relevant then i would like to see it... If not, then don't worry about it. [15:42] mchenetz: for providers, you should only need to add a file to juju/providers and maybe config stuff in juju/environment/config.py [15:43] What about inside the virtualbox image? Any daemeons that i need to inject into the image on create? Or, do i pre-create images? [15:43] mchenetz: boot the cloud image, it will get things right [15:43] need to touch some other stuff throughout the code too (like unit/address.py) [15:44] These are the things i need to know... 
I guess i will start and as i hit a roadblock, i will just ask questions. I will look at the local provider python code as an example of what i need to hook into [15:45] mchenetz: cloud-images.ubuntu.com [15:45] mchenetz: no, the local provider is going to confuse you a lot I think. [15:45] mchenetz: because the LXC stuff is "special" .. and isn't done as a provider, its done as a container technology to run inside VMs. [15:46] Spamaps: okay... Then i am not sure... I will just do my best [15:46] mchenetz: probably easier to copy the dummy provider actually. :) [15:46] Spamaps: Okay, i will do that... Thanks [16:06] Could I use juju to deploy on Ubuntu Cloud Live? [16:08] mchenetz, just a note: I tried to make the docstrings for MachineProviderBase helpful, do take a look at those [16:09] fwereade: thanks, will do [16:20] hazmat, thanks for merging in the branches to enable juju commands immediately after bootstrap [16:20] (enable their effective, error free use) [16:24] whoohoo! [16:26] m_3, yes, it's a very nice change. glad that was able to make it in. also i'm feeling better today :) [16:27] jimbaker: good... missed you at the goat yesterday [16:28] unfortunately my wife is now sick with whatever i had earlier this week === kapil_ is now known as hazmat [16:37] jimbaker: that's true love and dedication ;) [16:38] mpl, indeed ;) [16:43] juju eureka isn't in the ppa yet? [16:47] drt24: I think it is...but SpamapS can probably answer best [16:47] oneiric or precise? [16:47] (ppa) [16:47] both [16:48] or I could be reading it wrong. [16:55] I suspect that I am wrong and was confused by the ppa not being linked from the juju project page but only from the juju hackers page. [16:56] juju logo sideways! http://www.stumbleupon.com/ [17:06] yeah, I've seen a couple of pipes logos out there [17:11] jimbaker, bcsaller, fwereade standup? [17:11] hazmat, sure [17:11] hazmat, sounds good [17:30] Aargh, layer 8 error detected. Using the right command to do something is helpful. *sigh* [17:35] rog: Why do we have our own log package instead of using the Go log package? [17:36] TheMue: i think it's so that we have a central place to set up logging (it's not possible to set up the normal log package so it prints to anything other than stderr, i think [17:36] ) [17:36] rog: It can, log.SetOutput(w io.Writer) [17:37] TheMue: also, we want to layer it on top of gocheck, but you can only layer log onto io.Writer [17:37] TheMue: oh yes [17:37] rog: The package is not yet optimal but quite flexible. [17:37] TheMue: but the latter point remains [17:38] rog: How about letting gocheck implement the Writer interface? [17:38] TheMue: the writer interface is not really record-oriented [17:39] rog: The Go log is standard and used by other packages too. How do we handle logging in packages that we don't develop ourselves? [17:40] rog: And our own writer impl could do intelligent stream parsing too. [17:41] TheMue: i had this discussion with niemeyer before, and i lost. you'll have to convince him, not me. [17:41] Maybe we should talk about it in Bud. [17:43] TheMue: Note that we're not replacing the log package in any way [17:43] rog: Another topic. Currently we use Makefiles. You once talked about the migration to use goinstall. What's the status here? [17:43] TheMue: We're building on it [17:43] TheMue: rog had the same feeling originally, but it isn't the case [17:43] TheMue: we use both Makefiles and goinstall [17:43] niemeyer: OK, we will do so. Currently it's only fmt based.
[17:44] TheMue: currently you can't avoid Makefiles if you want to use gotest [17:44] TheMue: and I can certainly understand why both of you had that feeling.. the entry point of logging is a different function, and we now have a package named "log" [17:44] TheMue: Not really.. we don't format the actual log [17:44] marcoceppi: ping [17:44] m_3: pong [17:44] TheMue: What we do is trivial fmt.Sprintf [17:44] rog: Yep, I hope this will be fixed soon, already before Go1. [17:44] TheMue, rog: => #juju-dev please [17:45] i was just going to suggest that [17:45] marcoceppi: can you please change ownership of lp:charm/roundcube to ~charmers? [17:45] m_3: yes [17:45] Did I progumate wrongly? [17:46] marcoceppi: dunno... needs to end up owned by charmers... perhaps it's a bug in charm promulgate [17:46] marcoceppi: it happens to me too.. have to go manually change ownership to charmers afterwards [17:46] marcoceppi: it's god punishing us for creating a command called "promulgate" [17:46] rofl [17:46] m_3: haha, changed :) [17:46] awesome, so roundcube is done? [17:47] and phpmyadmin was mostly done right? [17:47] marcoceppi: gracias [17:47] jcastro: Yeah, I need to test the apt->upstream->apt->upstream to make sure there isn't anything ugly lurking [17:48] Otherwise it was ready for review. So, if I get time today *crosses fingers* [17:48] marcoceppi: nitpick on using "aptitude" in your function for installing from the archive btw. [17:48] perhaps "repository" or something would make more sense? [17:48] Ah, I was like NO WAY I USE APT I SWEAR [17:48] repository would be better <3 [17:52] jcastro: roundcube is in... still have a couple of test scenarios, but they'll just be bugs against trunk if those fail [17:52] * jcastro nods and will just give those a couple of days === lamal666 is now known as lamalex [18:45] SpamapS: marcoceppi: could some of you have a look at lp:~nijaba/charm-tools/peer-scp and tell me how I can write a test function for it? alias scp and build a state test function? [18:46] * marcoceppi recommends rsync <3 [18:46] marcoceppi: already done with scp, easy to allow for the option [19:27] nijaba: or start your own sshd on a random port and actually do the scp. ;) [19:28] marcoceppi: re ubuntu cloud live, yes that should work. I've been meaning to try juju against it for a while. [22:08] Hmm, looks like there's a test that looks for a very specific error message which sometimes shows up differently... [22:09] https://launchpadlibrarian.net/87459690/buildlog_ubuntu-oneiric-i386.juju_0.5%2Bbzr432-1juju2~oneiric1_FAILEDTOBUILD.txt.gz [22:11] * hazmat peeks [22:12] hmm [22:12] hazmat: when I run that test locally on precise it passes [22:12] hazmat: so I'm wondering if its just something weird running in the buildd chroot [22:14] SpamapS, there's no external interaction, the error is being setup by the test via a mock [22:14] hazmat: so is this a case where something else is causing said error? [22:15] SpamapS, no.. 
its the error reporting not matching the expected error text [22:15] SpamapS, the error is coming back with some additional traceback information that the test doesn't like [22:16] SpamapS, splitting the expected strings and asserting both are in the output would suffice to resolve and maintain the expectation [22:16] * hazmat digs up a trivial patch [22:17] it is odd that the error representation would change for buildd but not for local usage [22:18] here's the trivial patch http://paste.ubuntu.com/770590/ [22:19] hazmat: I'm trying the test in a local sbuild chroot to see if it is different [22:24] hazmat: fails in a clean chroot, so some python module is changing things on our local machines [22:25] or lxc maybe? [22:25] SpamapS, interesting.. i'd suspect a change to the logging module or twisted [22:25] test_watch_new_service_unit ... No handlers could be found for logger "juju.agents.machine" [22:25] I see that, but that is much earlier [22:26] SpamapS, that's not particular indicative of anything in this context, except a test didn't care about log output, and didn't setup a default logger [22:26] ie. its harmless [22:26] hazmat: in oneiric, none of the modules have changed... [22:27] well shoot, WTF has been failing since 431 [22:28] hmm [22:28] oh.. [22:28] ah [22:28] and functional tests since 429 [22:29] hazmat: your trivial in 431 changed this text [22:29] SpamapS, yup [22:29] i'll commit the trivial fix then [22:29] yeah [22:29] it looks good to me, +1 [22:29] niemeyer, wtf functional tests are failing with.. Bootstrap aborted because file storage is not writable: Error Message: You have attempted to create more buckets than allowed [22:30] for the last few revs [22:30] hazmat: wait, can you push it to a branch and I'll try it in a chroot? [22:30] SpamapS, sure [22:30] * SpamapS realizes its already been pastebinned.. [22:30] hazmat: if you haven't pushed it already, belay that.. I can do it here [22:31] SpamapS, un momento new patch coming [22:31] SpamapS, http://paste.ubuntu.com/770604/ [22:32] yeah the 1st one didn't work ;) [22:33] hazmat: confirmed, that fixes it [22:34] hazmat: if you want to test in a chroot (highly recommended) its pretty easy.. install ubuntu-dev-tools, run 'mk-sbuild oneiric', and then 'schroot -c oneiric-amd64 -u root' to get a clean oneiric chroot to play in (that gets cleaned up on exit) [22:34] apt-get build-dep juju will pull in all the build deps [22:35] .. obviously ;) [22:36] SpamapS, why not just lxc? [22:36] SpamapS, thanks i'll try that out [22:37] hazmat: because this is nearly identical to the buildd [22:38] SpamapS: ok, will try, thanks [22:38] marcoceppi: first version with rsync as an option: lp:~nijaba/charm-tools/peer-scp/ [22:39] SpamapS: (context: start my own scp) [22:39] sshd even [22:41] <_mup_> juju/trunk r433 committed by kapil.thangavelu@canonical.com [22:41] <_mup_> [trivial] fix provider unit test regression from r431, from overly exact error output check [r=clint-fewbar] [22:42] hazmat: I noticed that, and cleared them today [22:43] I'm heading out for the day.. cheers everybody! 
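
[Aside: the clean-chroot recipe SpamapS describes at 22:34, collected into one sequence; a sketch assuming an amd64 host with ubuntu-dev-tools available:]

    # one-time setup: build a clean oneiric schroot
    sudo apt-get install ubuntu-dev-tools
    mk-sbuild oneiric

    # enter a throwaway root session (cleaned up on exit)
    schroot -c oneiric-amd64 -u root

    # inside the chroot: pull in juju's build dependencies,
    # then fetch the juju source and run the failing test as the buildd would
    apt-get build-dep juju
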
[22:43] niemeyer, cheers [22:43] * hazmat debates heading out to a nodejs meetup [22:51] * SpamapS dist-upgrades his precise box with fingers firmly crossed [23:20] smoser, do you know if cloud-init is installed on the rackspace cloud instances by default [23:23] apparently not [23:37] <_mup_> juju/ssh-known_hosts r439 committed by jim.baker@canonical.com [23:37] <_mup_> Inject keys as part of cloud-init for ZK instances [23:43] <_mup_> juju/ssh-known_hosts r440 committed by jim.baker@canonical.com [23:43] <_mup_> Merged trunk & resolved conflicts