[00:42] SpamapS, zmq is awesome.. but this shit is hard enough without a src of truth
[00:42] ;-)
[00:43] SpamapS, zmq is definitely on my radar for fun
[00:43] SpamapS, replacing debug log with zmq would work well, although rsyslogd would work as well for that case
[00:44] SpamapS, i'd like to ideally extract the key exchange and encryption from saltstack into a standalone lib for zmq cross-dc relay, or perhaps just start afresh with keyczar
[00:46] SpamapS, the problem with mq, esp a no-persistence one like zmq, is handling failures.. i mean we kill sigkill agents all day long, and they will come up and do the right thing
[00:46] er.. we can
[00:46] because they're state based
[02:06] RAWR
[02:06] http://paste.ubuntu.com/904867/
[02:07] hazmat, I'll have another review for you in just a mo :)
[02:07] fwereade_, nice
[02:08] fwereade_, this expiration stuff is going to take up the rest of my night.. i don't see myself getting to the env settings
[02:08] hazmat, sadly this has distracted me from both the logging and the warning but they're trivial by comparison
[02:09] hazmat, tbh that's a good thing for me; we have slightly increased dependency on syncing and on default-series
[02:09] hazmat, I'd hate to redo it so soon ;)
[02:09] fwereade_, how so re dep on sync?
[02:10] bcsaller, ping
[02:10] hazmat: pong
[02:10] hazmat, we need providers for constraints, and providers need environments
[02:10] hazmat, so get-constraints also syncs
[02:10] bcsaller, i was wondering if rebase might work better than merge, i hope that diagram was helpful
[02:11] hazmat: rebase doesn't really work either, I'd already merged trunk
[02:11] bcsaller, uncommit can undo some of it
[02:12] hazmat, once different-syncing gets there it won't be any harder to fix; it's just finding and killing another set_config_state
[02:12] hazmat, and bootstrap needs default-series to, er, pick a series
[02:12] fwereade_, why does get-constraints sync?
[02:13] * hazmat feels like he's missing something
[02:13] hazmat, because it seems wrong to get the environment out of the config file
[02:13] the provider needs an env.. but it's not using it
[02:14] hazmat, and I need an env to get a provider to get an actual constraints object
[02:14] fwereade_, but in the remote case the provider works against state
[02:14] er. the provisioning agent case
[02:14] which follows that the cli could do the same
[02:15] it's a bit odd isn't it
[02:15] you need the disk env to get to the provider so it can read so it can populate the provider ;-)
[02:15] hazmat, hmm, yeah; the PA waits for /environment (which only shows up when the cli syncs) and the cli *could* read stuff directly from there
[02:15] hazmat, but tbh I'd rather get the state from where I know I'm meant to, even if I have to send it there first
[02:17] bcsaller, i think i see what the problem is..
[02:17] effectively it merged something which didn't have the commit after it merged something with the commit
[02:18] I can't parse that :)
[02:18] eh, maybe I can
[02:19] bcsaller, so looking at the lineage.. http://kapilt.com/files/subreltype-graph.png
[02:19] yeah
[02:21] that looks to be too far back for uncommit, I'll see if I can rebase 506 maybe
[02:21] bcsaller, nm.. i can't make heads or tails of it
[02:22] bcsaller, my thoughts were to capture changes via bzr diff, roll back, and reapply .. the easiest thing might just be staging your files onto a fresh branch
[02:22] but the pipeline complicates that, since it's flowed through
[02:22] i guess status-changes reapplied isn't the worst thing, but it wasn't immediately clear that was the only thing yanked
[02:23] no, it wasn't clear, I agree
[02:27] bcsaller, so when was the last non-merge work?
[02:27] in terms of rev
[02:28] looks like r521
[02:28] bcsaller, yeah. it's also yanking force-upgrade
[02:28] that's borked
[02:30] bcsaller, 521 looks good
[02:30] hazmat, https://codereview.appspot.com/5957043
[02:32] bcsaller, try this.. bzr uncommit -r 521 && bzr merge ../../trunk
[02:32] the resulting diff looks a bit better
[02:32] bcsaller, make a copy first :-)
[02:33] fwereade_, oi! env constraints
[02:35] hazmat, yeah, I needed a new node to make the warnings possible, so I thought I might as well do something useful with it :p
[02:35] hazmat, warnings not actually done yet, but... ;)
[02:35] hazmat, if you didn't see it: http://paste.ubuntu.com/904867/
[02:36] bcsaller, the resulting diff looks pretty good.. although it still has jimbaker's machine agent refactor.. but i think that's as far back in history as we can go without making more work
[02:36] doh
[02:38] that's so odd
[02:38] the diff looks good pre-merge commit, but then i commit the new merge, and it's bad again
[02:39] hazmat: rerunning tests now for sanity check
[02:39] bcsaller, that didn't quite work, it's really odd
[02:39] the diff looks good pre-merge commit, but then i commit the new merge, and it's bad again
[02:40] i'm going to try a different merge algorithm.. it's detecting the criss-cross but then it goes bonkers
[02:40] after the merge commit
[02:40] so strange
[02:42] bcsaller, yeah.. you can take rev 521, and diff to trunk and see the correct diff.. but if you merge for 522...
[02:45] bcsaller, did you base/merge this branch on jim's machine-agent before it was on trunk?
[02:46] hazmat: yes, I think I did, I'd needed to get moving and it was already slated to go in. That could be the source of the issue though
=== samkottle is now known as samkottler
[03:14] fwereade__, i'm probably not going to get to those reviews till the am
[03:15] fwereade__, i think the subordinate branches' bzr problems are unf*cked though
[03:15] tbd
[03:15] hazmat, that's a relief
[03:16] hazmat, btw, astoundingly, the legacy stuff appears to block what it should; there will be one more review when the tests have passed
[03:17] hazmat, warning about ignored constraints is not going to happen tonight I'm afraid
[03:19] fwereade__, fair enough
[03:19] fwereade__, is it night?
[03:19] hazmat, I don't think it's started getting light again, but it's not far off ;p
[03:19] fwereade__, thanks again
[03:19] hazmat, a pleasure
=== bigjools-afk is now known as jtvs-evil-twin
=== jtvs-evil-twin is now known as bigjools_
[08:51] morning, back from the gym, now ready to start
[09:05] TheMue: hiya
[09:05] rogpeppe: moin
[09:06] rogpeppe, TheMue, heyhey
[09:06] TheMue: watcher ready to submit, at last!
[09:06] fwereade: yo!
[09:06] fwereade: also moin to you
[09:06] * fwereade cheers
[09:06] rogpeppe: i've seen it and will do it in a few minutes
[09:06] update, restarting
[09:10] rogpeppe: and then i have to look how i will use it for my config node. one way is a kind of wrapper, another one is to just return the pure change and let the caller decide. i've got to check where and how the information is used today.
[09:12] TheMue: what code watches config nodes? and what does it do in response?
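[editor's note: the recovery hazmat proposes above -- rewind to the last non-merge revision, then redo the trunk merge -- looks roughly like the shell session below. This is a sketch rather than the exact commands used: the directory names are illustrative, and the bzr revert step is an assumption (uncommit leaves the removed changes sitting in the working tree, and merge can refuse to run against a dirty tree).

    # "make a copy first" -- do the surgery on a throwaway branch
    bzr branch subreltype subreltype-recovery
    cd subreltype-recovery

    # rewind history to r521, the last non-merge work
    bzr uncommit -r 521

    # discard the now-uncommitted changes so the tree matches r521 exactly
    bzr revert

    # redo the merge against trunk and inspect the diff before committing
    bzr merge ../../trunk
    bzr diff | less
    bzr commit -m "merge trunk"]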
[09:13] rogpeppe: that's what i have to check.
[09:15] rogpeppe: today there are callbacks listening to changes of the config nodes of services or units
[09:15] TheMue: the way i'd imagined the watchers being used is that they'd be wrapped to create a similar interface but a different channel type that represents the higher-level changes in a nice way.
[09:16] rogpeppe: that's my favorite idea too
[10:05] rogpeppe: seen your last comment. the code will go in as it is now. ideas for tomb changes should be discussed somewhere else.
[10:06] TheMue: yeah, please submit
[10:06] TheMue: i was responding to gustavo's remarks
[10:07] rogpeppe: yip, but he approved it and it will be closed after the submit. so i don't know if it's the right forum for a further discussion about tomb.
[10:08] TheMue: sure.
[10:26] allenap, ping
[10:26] fwereade: Hi.
[10:26] allenap, just wondering whether we'd verified my maas-name branch against a real maas?
[10:27] allenap, if we haven't I would be much obliged if you would just have a go with lp:~fwereade/juju/shadow-trunk-1204
[10:28] fwereade: I haven't I'm afraid. I'll ask if someone can have a go. What command should we try?
[10:28] allenap, I guess just `juju deploy --constraints maas-name=whatever ...`
[10:29] fwereade: Thanks.
[10:29] allenap, and maybe follow up with a `juju set-constraints -s maas-name=another` and `juju add-unit` to make sure that works
[10:30] allenap, there are no constraints on add-unit at the moment
[10:31] allenap, it's something I believe niemeyer would disapprove of -- but I would be very willing to write a branch for it if it strikes you as a needlessly infuriating omission
[10:31] allenap, sorry, the followup set-constraints should have a service-name arg after the "-s"
[10:33] allenap, (btw, please tell whoever it is not to forget to set juju-origin)
[10:34] fwereade: Okay... I'll try and understand all that and get it done :)
[10:35] allenap, sorry, I've been a bit buried in it for a while, I lack normal-person context, please bug me if anything is unclear
[10:35] Hehe :)
[11:31] hazmat, ping
[11:32] fwereade, pong
[11:32] * hazmat is still in a bootup sequence
[11:33] hazmat, just a status update: I have 4 reviews, one of which is moderately large but actually doesn't seem too bad when I see it in rietveld, 2 of which are trivial followups, and the final one is doc fixes only
[11:34] hazmat, and I'm as sure as I can be that the final code branch does what it should on both EC2 and local providers
[11:34] fwereade, excellent.. i'll give it a try with a legacy env
[11:35] hazmat, sweet
[11:36] hazmat, for reference: https://codereview.appspot.com/5957043/ https://codereview.appspot.com/5952045/ https://codereview.appspot.com/5956044/ https://codereview.appspot.com/5946047/
[11:36] fwereade, thanks
[11:36] hazmat, and the final code branch is lp:~fwereade/juju/warn-ignored-constraints
[12:09] fwereade, unsupported provider constraints are supposed to warn?
[12:09] fwereade, ie.. --constraints on bootstrap just get silently ignored
[12:22] hazmat, ...crap
[12:23] fwereade, for example setting zone on local provider
[12:23] fwereade, no worries.. works well otherwise
[12:23] hazmat, yeah, just an oversight on bootstrap
[12:23] hazmat, ty, nice catch
[12:24] hazmat, actually, huh, that's odd, let me investigate
[12:25] hazmat, works for me... http://paste.ubuntu.com/905506/
[12:29] hazmat, what are you seeing?
[12:32] fwereade, oh.. same thing
[12:32] hazmat, the INFO logging is a bit noisy which doesn't help
[12:33] missed the warning at the top
[12:42] it only takes one missing inlineCallbacks to ruin a morning
[12:45] hazmat, man, tell me about it
[12:46] fwereade, can you verify this works for you..
[12:46] ./bin/juju bootstrap -e aws --constraints="ec2-zone=b"
[12:46] ./bin/juju deploy -e aws -n 2 --constraints="instance-type=m1.large" --repository=examples local:mysql
[12:46] hazmat, just a mo
[12:46] fwereade, never mind
[12:46] hazmat, working now?
[12:46] i should try that with the right juju-origin
[12:46] hazmat, heh
[13:00] hazmat, incidentally, yeah, that works for me
[13:02] fwereade_, even with the warning branch as origin, the bootstrap fails to initialize for me.. just finishing up the tx managed session branch, will investigate more
[13:04] hazmat, disturbing... I got an m1.small and 2 m1.larges running in us-east-1b
[13:20] hazmat, *possibly* a wrong PYTHONPATH?
[13:20] hazmat, I confused myself with one of those the other day
[13:32] hazmat, incidentally, pre-existing bug? terminate-machine should also do env-config syncing in case access stuff has changed
[13:57] heya niemeyer
[13:59] brb, quick walk around the block
[14:08] ls -lsa
[14:08] ls -lsa
[14:08] ops :p
[14:12] niemeyer, TheMue: i'm thinking for projects we don't mind losing r60 support for, we should just remove all our go tags. then we won't have to remember to tag every time tip changes. we'll only have to add a tag when we start to rely on something from go1.1.
[14:13] fwereade_: Heya
[14:13] rogpeppe: Why would we need to remove the tags?
[14:13] niemeyer: because if the tags are there, they'll be used by preference instead of tip.
[14:14] niemeyer: which means we have to remember to tag on every push
[14:14] rogpeppe: Hmm
[14:15] rogpeppe: I tested yesterday, and mgo looked alright without any tags
[14:15] rogpeppe: Did I do something wrong?
[14:15] niemeyer: has it still got some old tags?
[14:15] rogpeppe: All of them
[14:16] niemeyer: hmm, maybe i'm mistaken. let me check.
[14:17] niemeyer: no, i'm right.
[14:18] niemeyer: if you go get mgo, you get revision 114, not revision 115
[14:18] niemeyer: because 114 has the go.weekly.2012-01-20 tag
[14:19] rogpeppe: Hmm.. ok.. that's probably what I wanted indeed, at least in that case
[14:19] niemeyer: because it was just a comment change?
[14:20] rogpeppe: No, because mgo has releases.. intermediate changes shouldn't be published before being announced
[14:20] niemeyer: ah, in that case you'll definitely want to do explicit tagging.
[14:20] rogpeppe: So which projects do you suggest we take tags out of?
[14:22] niemeyer: any projects we don't have explicit releases for. gozk, gnuflag, juju itself while we're developing it.
[14:22] goamz seems to have a release tag.
[14:22] rogpeppe: Sound good
[14:22] s
[14:23] mthaddon: ping
[14:24] niemeyer: howdy
[14:25] mthaddon: Heya.. I've heard things went well?
[14:25] niemeyer: you did? excellent. What things?
[14:25] mthaddon: Your progress on the store :)
[14:26] niemeyer: erm, seems to be going okay, I just had a question about how to "reset mongodb" in a replicated configuration - will db.dropDatabase() work, or is something else needed?
[14:27] mthaddon: I'd suggest cleaning up the files on both machines
[14:27] mthaddon: and restarting the database
[14:27] niemeyer: which files?
[14:27] mthaddon: The files backing the database
[14:27] mthaddon: Sorry, not the database.. the files backing mongodb
[14:28] mthaddon: Just remove them and start anew
[14:28] niemeyer: so just remove all the files in the directory defined as "dbpath" in /etc/mongodb.conf?
[14:28] mthaddon: Right
[14:28] ok, will do - thx
[14:29] assuming only one db.. dropDatabase should work though
[14:29] niemeyer, why would you recommend manually deleting files?
[14:31] hazmat: Because I'm lazy, most critically
[14:32] hazmat: There's no chance of getting this one wrong
[14:34] hazmat: how would dropDatabase work in a replicated setting?
[14:34] mthaddon: It should actually work..
[14:34] mthaddon: It's just another operation in the oplog
[14:35] ok, maybe I'll try that on the master and see - can always take a look and delete files if that doesn't work as expected
[14:35] mthaddon: Sure, try it out.. just make sure you've got the right database please
[14:35] mthaddon: mongo creates databases on demand
[14:35] mthaddon: Which means a typo would go unperceived
[14:35] not if I check it before and afterwards :)
[14:36] mthaddon: Right, exactly
[14:36] mthaddon: show collections should tell you you're in the right place
[14:36] yep
[14:36] niemeyer: doing ssh connect. shall i make it support multiple zookeeper servers? or shall i stick with current functionality?
[14:36] hazmat: That's why.. :)
[14:36] niemeyer: I take it it's just the "juju" db we care about, not the "local" db?
[14:36] mthaddon: Right.. the local database is for maintenance of mongo itself
[14:37] mthaddon: It's not replicated
[14:40] $ juju deploy mysql -e AWS
[14:40] 2012-03-29 09:39:12,678 INFO Searching for charm cs:oneiric/mysql in remote charm repository: https://store.juju.ubuntu.com
[14:40] 2012-03-29 09:39:14,711 INFO Connecting to environment...
[14:40] 2012-03-29 09:39:17,923 INFO Connected to environment.
[14:40] 2012-03-29 09:39:20,506 INFO Charm deployed as service: 'mysql'
[14:40] 2012-03-29 09:39:20,507 INFO 'deploy' command finished successfully
[14:40] \0/
[14:43] robbiew: What.. really!? :-)
[14:44] niemeyer: yep
[14:45] I had to set the default back to oneiric, b/c we have no precise charms I guess
[14:45] but still...awesome
[14:46] niemeyer: not sure if outside folks can see the store yet...as my ssh connections on ubuntu.com and canonical.com domains go through chinstrap
[14:46] robbiew: We have a few, but way less
[14:46] robbiew: We need search now ;)
[14:47] yep....baby steps
[14:47] niemeyer: code updated and nagios check seems happier, thx
[14:51] keepalives.. causing problems since 1995
[14:53] the charm store is *live*
[14:53] charm: cs:oneiric/thinkup-0
[14:53] robbiew: re us having no charms for precise.. I had a brief chat w/ m_3 about that yesterday...
[14:54] robbiew: we're going to run the charm tester with all of the oneiric charms on precise.. and fix them for precise.. then I'll have a LOSA pull the big handle that changes the series to precise as soon as all the tests pass. :)
[14:56] niemeyer: i just deleted the tags on gozk/zookeeper
[14:56] SpamapS: coolio
[15:09] rogpeppe: Have you trying pulling?
[15:09] Erm
[15:09] rogpeppe: Have you tried pulling?
[15:09] mthaddon: Superb
[15:10] mthaddon: SpamapS and robbiew are already seeing the post-drop database?
[15:10] yep
[15:10] mthaddon: Brilliant, thank you
[15:10] np
[15:10] mthaddon: Sorry for all the mess that was involved in getting this out
[15:11] no problem - looking forward to the juju-ised version when we get there
[15:11] nice
[15:11] niemeyer: pulling what? gozk?
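[editor's note: pulling together the mongodb-reset exchange above ([14:26]-[14:37]) -- niemeyer's two options were wiping the data files on both replica members, or running dropDatabase() on the primary and letting the oplog replicate the drop. A sketch of both, assuming the stock Ubuntu paths of the era (/etc/mongodb.conf, service name mongodb, dbpath /var/lib/mongodb); check your own dbpath before deleting anything.

    # option 1: on *each* replica member, stop mongod and wipe dbpath
    sudo service mongodb stop
    grep dbpath /etc/mongodb.conf      # confirm the data directory first
    sudo rm -rf /var/lib/mongodb/*     # assumed default dbpath
    sudo service mongodb start

    # option 2: from the mongo shell on the primary; the drop is just
    # another oplog operation, so the secondaries pick it up
    mongo
    > use juju
    > show collections                 # confirm it's the right database --
    > db.dropDatabase()                # mongo creates dbs on demand, so a
                                       # typo would otherwise go unnoticed]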
[15:11] mthaddon: It's ironic that I've worked so much to make it trivial to deploy, but the main blockers were not technical
[15:11] rogpeppe: The branch you deleted the tags on
[15:11] niemeyer: yeah, it seems to go get fine
[15:12] rogpeppe: Have you tried pulling the branch, and checking to see if the tags are there?
[15:12] niemeyer: yeah.
[15:13] niemeyer: (they're not)
[15:14] rogpeppe: Cool, did you have to delete the branch in launchpad?
[15:14] niemeyer: no. i just did a lightweight checkout, then a few "bzr tag --delete"s
[15:15] rogpeppe: Hmm.. interesting.. I hadn't tried that.
[15:15] rogpeppe: Pushing tag removals doesn't work
[15:15] niemeyer: i used the same approach we were taking to adding tags.
[15:16] rogpeppe: Right, cool.. it's just that removing a local tag and pushing doesn't work.. I wasn't expecting the lightweight checkout to work either. Awesome that it works.
[15:16] niemeyer: cool.
[15:17] niemeyer: i get tags are treated like sets.
[15:17] s/get/guess/
[15:32] SpamapS: how soon can we get the juju version with the correct store URL in precise?
[15:33] Okay!
[15:34] I guess it's time to pretend this is a holiday :)
[15:34] I'll step out for lunch, and pack for the trip when I'm back
[15:34] Cheers all!
[15:40] robbiew: I can upload today
[15:41] niemeyer: you off for the rest of the day?
[15:41] is there a meeting today?
[15:47] SpamapS: cool
[15:47] SpamapS: would that mean people would get it in the beta 2 upgrade?
[15:48] robbiew: possibly, I'll ask the release team about universe packages
[15:55] robbiew: FYI, beta freeze was just lifted, so I'll upload juju shortly
[15:56] SpamapS: no worries..I'm leaving it out of the release announcement, given it wasn't *technically* part of the beta 2
[15:56] we can blog/tweet the shit out of it today/tomorrow anyway
[15:59] SpamapS, so, no chance for any new merges before then?
[16:04] fwereade_: juju isn't going on any CDs, so arguably we can keep changing things forever.. but at some point.. we need to stabilize and *test*. :)
[16:05] SpamapS, believe me, that's what I want to do ;)
[16:05] fwereade_: we have some merges to get in for security problems pointed out by jdstrand .. I'm sure we can also get a few important things in before release day. :)
[16:07] SpamapS, cool, I'm just stressing out because I *do* have golang work as well, I've had a ludicrously painful push to get constraints into a state I'm not too embarrassed about, and I fear that if I don't draw a line under it *now* I'm going to end up back off in goland with my reviews quietly rotting in the sun
[16:07] SpamapS, but none of that is your problem, sorry
[16:08] fwereade_: given the deadline for the python version (1 month ago..) and the deadline for the golang version.. (???) .. perhaps we can persuade your golang fellows to wait a bit while we sort out the python mess?
[16:10] SpamapS, no doubt that will be doable -- the rotting golang reviews are indeed less important at this time -- but IIRC this was originally timeboxed to 2 weeks and I think I've used most of them already ;)
[16:11] fwereade_: 2 galactic standard weeks.. read the fine print. ;)
[16:11] SpamapS, haha :)
[16:15] fwereade_: to the more practical point.. are you all that far away from finishing that last constraints branch?
[16:15] SpamapS, there are always possible enhancements but IMO my last branch is good enough
[16:16] fwereade_: so just waiting for +1's?
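[editor's note: the tag-removal trick rogpeppe and niemeyer discuss above works because a lightweight checkout has no local branch of its own -- "bzr tag --delete" therefore acts directly on the Launchpad branch, whereas deleting a tag in a full local copy and pushing does not propagate the removal. A sketch under those assumptions; the branch URL, checkout directory, and tag name are illustrative rather than the ones actually used:

    # a lightweight checkout keeps no local history; tag operations
    # go straight to the remote branch
    bzr checkout --lightweight lp:gozk/zookeeper gozk-tags
    cd gozk-tags

    bzr tags                             # list the tags still present
    bzr tag --delete go.weekly.2012-01-20
    # ...repeat for each remaining go.* tag]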
[16:16] SpamapS, frankly the 6am one was close enough, today's work is just polishing little rough edges as I come across them
[16:16] SpamapS, yeah, but that's not a "just" IME
[16:35] * hazmat dives back into the review stack
[16:35] there's gold in here
[16:39] lol
=== marrusl_ is now known as marrusl
[17:41] that was painful. finally tracked down the proximate source of a failure in the reworked relation hook context. most likely just a weird test setup issue, but at least i have something i can troubleshoot
[18:06] * niemeyer heading off.. see you all later!
[18:07] niemeyer, enjoy
[19:55] fwereade__, the env constraints look really nice
[19:56] * hazmat wonders about the difference between irrelevant and invalid
[20:05] fwereade__, all the branches look good, except the doc branch, where i'd like some additional context on whether you intended it as dev or user doc
[21:17] hazmat: i think there's something wrong with the store caching
[21:18] SpamapS, how so?
[21:18] http://paste.ubuntu.com/906286/
[21:18] The first thing that gets cached fails every time for me
[21:18] ends up with an empty file instead of the zip file
[21:18] everything after that works fine
[21:34] SpamapS, it looks like the charm store is empty at the moment
[21:34] i'm getting not-found json responses to all my queries.
[21:35] weird, I've been deploying stuff since it went live :)
[21:36] hazmat: hm actually I think just mysql is messed up
[21:36] other charms are fine
[21:36] I just always do mysql first :)
[21:49] SpamapS, yeah.. seems to work well outside of mysql
[23:34] hazmat: ssl verify branch polished up all shiny for you now :)
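[editor's note: for reference, the constraints verification steps scattered through the log (fwereade's maas instructions at [10:28]-[10:31], hazmat's EC2 commands at [12:46]) amount to a smoke test along these lines. The commands are the ones quoted in the chat; service names and constraint values are only examples, and note that set-constraints takes the service name after -s:

    # bootstrap with a provider constraint; unsupported keys should now
    # warn rather than be silently dropped (the bug caught at [12:09])
    juju bootstrap -e aws --constraints "ec2-zone=b"

    # deploy with constraints, then change them and add a unit -- the new
    # unit should pick up the service's current constraints
    juju deploy -e aws -n 2 --constraints "instance-type=m1.large" --repository=examples local:mysql
    juju set-constraints -s mysql instance-type=m1.small
    juju add-unit mysql]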