[01:08] * davecheney starts to salivate
[01:08] http://shopap.lenovo.com/au/en/products/laptops/thinkpad/thinkpad-innovation?cid=EDM_20120828_ANZ_AU_CON_X1Carbon_ShortRange_SL&RRID=222186471&esrc=EPI2JANZ
[01:23] davecheney: Wow, sweet indeed
[03:42] * mramm2 has no love for my ISP today -- but they are sending out the second tech support person in 2 days tomorrow morning
[06:45] hello, is there an mstate.Open ?
[06:45] i can't find it, i must be dumb
[07:04] good morning
[07:06] TheMue: good morning
[07:06] do you know if there is an mstate.Open method ?
[07:06] hiya dave
[07:07] davecheney: sorry, don't know if it already exists
[07:07] TheMue: that is sad
[07:07] is there an mstate.Info ?
[07:08] davecheney: dunno either, i've just started with lifecycle and test completion
[07:08] davecheney: so you just have to scan the code
[07:08] TheMue: cool
[07:08] thanks
[07:08] i'll ask aram
[07:14] davecheney: at least in trunk they don't exist
[07:14] TheMue: does mongo have anything like the concept of zookeeper's failover addresses ?
[07:14] davecheney: and aram currently focuses on txn and watchers
[07:27] davecheney: afaik the concept is different, see http://www.mongodb.org/display/DOCS/Sharding+and+Failover. but here i'm not deep enough into both systems.
[07:31] TheMue: ta
=== TheMue_ is now known as TheMue
[08:09] fwereade, TheMue: morning
[08:09] rogpeppe, heyhey
[08:10] rogpeppe: hi, had a nice day off (spending your time on wedding photos)?
[08:10] fwereade: hello
[08:10] fwereade: any chance you could run a very brief live test for me? i can't work out if i've mucked up my amazon stuff or if our code has gone wrong
[08:10] TheMue, heyhey :)
[08:10] TheMue: yes thanks
[08:10] rogpeppe, heh, ok, sure; in 5 mins?
[08:10] fwereade: np
[08:29] rogpeppe, ok, that wasn't 5 mins, but maybe it actually will be in 5 more mins -- would you let me know what I need to do now?
[08:30] fwereade: go test -amazon -gocheck.vv launchpad.net/goaws/ec2
[08:30] fwereade: (assuming you've got valid AWS_ environment variables set up)
[08:31] oops
[08:32] s/goaws/goamz/
[08:33] oops again
[08:33] fwereade: go test launchpad.net/goamz/ec2 -amazon -gocheck.vv
[08:33] flags after packages, but only for go test :-)
[08:34] rogpeppe, failures: http://paste.ubuntu.com/1171358/ (I managed to figure that bit out at least)
[08:34] fwereade: interesting, but not the failures i was looking for :-)
[08:35] fwereade: i'm seeing signature failures
[08:35] rogpeppe, ha, sorry :(
[08:36] fwereade: the weird thing is that the python juju works ok.
[08:36] fwereade: time to make a minimal failing example, i think
[08:39] rogpeppe, something's scratching at my mind about signed urls, I'll let you know if it turns into anything real
[08:39] fwereade: ta. it's really odd - some test example code works.
[08:50] moin.
[08:58] hi Aram
[09:03] fwereade: one last test, just to make doubly sure: could you make sure you've got the latest goamz version (cd $GOPATH/src/launchpad.net/goamz; bzr pull) and run this program, with the auth details i gave you earlier substituted as appropriate.. http://paste.ubuntu.com/1171398/
[09:03] fwereade: i'm finding it all a bit weird
[09:04] fwereade: or just say bugger off if you're too busy... :-)
[09:04] rogpeppe, huh, sorry, looks like I wasn't up to date :/
[09:05] rogpeppe, and np at all, I'm watching other tests atm, did something stupid :)
[09:05] rogpeppe, bingo, signature does not match
[09:05] phew
[09:05] fwereade: and if you use your own credentials?
[09:06] rogpeppe, doing that now
[09:08] rogpeppe, yep, same failures
[09:08] rogpeppe, sorry about that :(
[09:08] fwereade: lovely thanks
[09:08] fwereade: no, that's good!
[09:08] fwereade: and when i reverted to revision 8, it all works
[09:08] rogpeppe, nah, just sorry about the dumb version thing to begin with
[09:09] fwereade: np, i'd've done the same
[09:10] fwereade: the three line change between r8 and r9 is the culprit
[09:12] Aram: just to be sure, the agreement has been that all reads on one or all entities with a lifecycle return them regardless of their life state, right?
[09:13] TheMue: yes.
[09:14] Aram: ok, i'm currently doing services and machines and will change that too where needed
[09:14] ok.
[09:21] fwereade: i think i've found the problem
[09:21] rogpeppe, oh yes?
[09:21] fwereade: the urls don't end with a slash
[09:21] rogpeppe, ha!
[09:23] aaaaaaaaaaaaaaaaaaaaand we have a (rudimentary) Uniter merged :D
[09:23] fwereade: right, now i can have some breakfast
[09:23] fwereade: yay!
[09:24] fwereade: i will eat muesli in celebratory mood!
[09:24] rogpeppe, heh, I should probably do the same in a mo :)
[09:49] sorry guys, need to pop out for a bit, bbs
[10:09] lunchtime
[10:50] biggup!
[10:52] heya davecheney
[10:55] has everyone got their UDS invite yet ?
[10:55] no
[10:55] ffs, i told michelle that there was a problem, but she didn't believe me
[10:55] eventbrite has this web bug they put on their email, so they claim it has been 'opened'
[10:55] like that could ever be wrong
[10:55] hmm
[10:55] I do have an invite, now that you made me look
[10:56] bwahahah
[10:56] 14 days ago, actually
[10:56] heh
[10:56] sssh
[10:57] Ubuntu Developer Summit - R
[10:57] what's the 'R'?
[10:57] Aram, letter after Q
[10:57] it's the next letter after Q
[10:57] jynx!
[10:58] * fwereade gesticulates wildly but emits no sound
[10:58] fwereade: !
[10:58] sorry guys 2 mins
[10:58] at an old workplace, the IRC bot had a !jynx command
[10:59] one guy spent far too much time teaching it levenshtein distances so it could figure out who to mute
[11:02] Yo!
[11:02] hiya niemeyer
[11:02] hey
[11:03] Anyone has the invites out yet
[11:03] ?
[11:03] nup
[11:03] Sending
[11:07] rogpeppe?
[11:07] niemeyer: yo!
[11:07] niemeyer: meeting, i guess
[11:08] * rogpeppe goes to fetch the other computer
[11:14] https://bugs.launchpad.net/juju-core/+bug/1042604
[11:14] https://bugs.launchpad.net/juju-core/+bug/1042579
[11:15] https://bugs.launchpad.net/juju-core/+bug/1038296/comments/5
[11:17] https://bugs.launchpad.net/juju-core/+bug/1038296/comments/6
[11:23] Whoop whoop!
[11:54] davecheney: I'm not sure if you want to talked to talk a bit more about the problem
[11:54] davecheney: Do you wanna brainstorm on it for a moment?
[11:54] niemeyer: sure
[11:54] davecheney: Cool, so I'll pick the code to follow alone
[11:54] along
[11:55] Wow.. interesting typo
[11:55] is the hangout still active ?
[11:56] davecheney: Would you rather use G+? Cool.. I've sent it again
[11:56] jynx! so have I
[11:57] niemeyer: https://plus.google.com/hangouts/_/649cd97bfeb012132fd9f58aeaf998dfd90329b2
[11:57] ^ does that work ?
[12:05] niemeyer: svc, err := s.Conn.AddService("test-service", charm)
[12:05] c.Assert(err, IsNil)
[12:05] err = svc.SetExposed()
[12:05] c.Assert(err, IsNil)
[12:05] units, err := s.Conn.AddUnits(svc, 1)
[12:05] c.Assert(err, IsNil)
[12:05] err = units[0].OpenPort("tcp", 999)
[12:05] c.Assert(err, IsNil)
[12:07] davecheney, TheMue: i think the firewaller probably needs to be changed so that it only adds machines to the machineds map when the machine has an instance id.
[12:08] davecheney: or, better perhaps, the firewaller could keep the set of current instances, and change it when a machine's instance id changes
[12:09] rog: InstanceId() of machine already returns an error if unset and that is caught in firewaller line 202
[12:10] TheMue: yes, but this is something that happens in the normal course of things. we don't want the provisioner to be restarted each time it happens.
[12:10] TheMue: i'm not sure that retry logic is the right fit here either
[12:12] TheMue: hmm, mind you perhaps retry is ok here, in a kinda hacky sort of way, because we know that the firewaller is in the same process, so it will be seeing the same changes and be reacting pretty quickly.
[12:12] rog: if we don't have an instance id retrying to open them doesn't help ;) so indeed there may be a need for (a) watching for the instance id
[12:12] TheMue: that's what i'm thinking
[12:12] TheMue: (we already have a watcher that can do that)
[12:15] rog: sadly i don't see the corresponding error message in the log. here i'm wondering
[12:16] rog: ah, found, looked too high
[12:17] TheMue: looking at the code, i don't think it would be too hard.
[12:17] ... maybe
[12:20] rog: depends on the strategy
[12:20] rog: wait there less or more good wrapped but blocking
[12:20] TheMue: syntax error :-)
[12:20] rog: or start a kind of async port opener in an extra goroutine
[12:21] more or less
[12:21] TheMue: i didn't understand that first sentence, but i don't think the latter is a good idea
[12:21] TheMue: if we let ourselves be led by the model, a Machine's instance id can change at any time, and we should track that.
[12:22] rog: 1st sentence is a kind of instanceId, err := machined.machine.WaitInstanceId()
[12:22] TheMue: i don't think that's a great idea either
[12:22] rog: no, because it blocks the fw
[12:22] TheMue: i'm thinking we should maintain another map inside the Firewaller
[12:22] TheMue: that maps machine id to instance id
[12:23] TheMue: or to environs.Instance, better perhaps.
[12:23] TheMue: there's something i'm trying to understand about the current firewaller; maybe you can explain
[12:23] rog: i'll try
[12:24] TheMue: it never calls Instance.Ports, so if the firewaller is restarted, how can it know what ports to close on an instance?
[12:26] rog: afaik we once talked about it. i just pass the instance the ports i want to have opened or closed, regardless of whether they are already open or closed
[12:27] TheMue: that's fine for opening ports (open is idempotent, and when we start, we assume no ports are open), but i think it fails for closing ports.
[12:27] sorry lads, wasn't watching what you were writing
[12:27] was talking to gustavo
[12:28] TheMue: Please leave that to davecheney
[12:29] rog: why does it fail?
[12:29] niemeyer: ok
[12:29] TheMue: because when you start up, you need to close any ports that are currently open but that are not mentioned in the state
[12:30] TheMue: but unless you call Instance.Ports, you can't know what those are
[12:30] ARGH!
[12:30] why does LP not have a 'report a bug' link on the milestone page
[12:30] I spend my life on that page and there is no bloody link to create a _new_ bug for this milestone
[12:31] davecheney: think of daisies, la la la
[12:32] niemeyer: is rog right about this fw-startup-port-closing? if so we should file a bug.
[12:33] rog: please file a bug
[12:33] rog: also, is there a bug for the AMZ breakage
[12:34] davecheney: yes, someone else had already filed one
[12:34] I have a shit-ton of 'doesn't work in XYZ region' bugs that I am working through
[12:34] TheMue: interestingly, this was an issue that we didn't have in the code sketch that i originally proposed for the firewaller
[12:35] TheMue: (well, perhaps... :-])
[12:36] ;)
[12:37] davecheney: i'll write a test that breaks first, then i'll file a bug :-)
[12:37] rog, you've done this before :)
[12:43] davecheney: yup, test fails as expected
[12:44] niemeyer: https://bugs.launchpad.net/juju-core/+bug/1042717
[12:44] does this capture (part of) the discussion we just had
[12:45] davecheney: i think AllInstances only returns running instances
[12:45] davecheney: (or pending)
[12:45] davecheney: As far as terminology goes, I suggest stopped vs. terminated
[12:45] davecheney: That's what EC2 uses
[12:45] y'all can edit the ticket, please have at it
[12:46] sorry all
[12:46] mramm2: did my text make it ?
[12:46] yep
[12:46] and my phone was right next to me
[12:46] mramm2: if you wanna have a quick catchup now while everyone is online, lets do it
[12:47] but I slept through it
[12:47] davecheney: Looks good
[12:47] davecheney: I was wondering a bit about (2)
[12:48] davecheney: Do we need it right now?
[12:48] davecheney: what do you think we should do with a stopped machine? its agent will appear dead.
[12:48] niemeyer: ^
[12:48] rog: discuss with niemeyer, he dug up that corpse
[12:49] hmm, yeah, it might be problematic if we find we're running two unit agents for the same unit
[12:50] but i suppose that's a problem with a down network connection too
[12:50] TheMue: Sorry, I did read the discussion, but it's not clear to me what problem is being fixed, or why we should change something there
[12:50] niemeyer: this one is a bit pithier
[12:50] niemeyer: the problem is that the firewaller sees a new machine and tries to change its ports, but the machine hasn't yet been allocated an instance.
[12:50] davecheney: Which one?
[12:51] niemeyer: https://bugs.launchpad.net/juju-core/+bug/1042721
[12:51] paste fail
[12:51] niemeyer: so the firewaller dies
[12:51] rog: I've discussed this with davecheney already.. there's zero reason to change ports for an instance that doesn't exist
[12:51] niemeyer: indeed
[12:51] niemeyer: which means the firewaller should watch each machine to see when its instance id changes, i think
[12:52] rog: I don't get the leap
[12:52] niemeyer: when should the firewaller open the ports for a new instance?
[12:52] davecheney: +1
[12:52] niemeyer: my only question is
[12:53] which ports are we talking about, the ones in the state, or the ones in the security group of the provider ?
[12:53] rog: When it gets an open port watcher firing for an instance within it and its service was exposed, in either order
[12:53] davecheney: State
[12:53] niemeyer: open port watchers fire for machines, not instances
[12:53] davecheney: We've agreed back then that StartInstance never gives back something with ports open
[12:53] rog: It fires for units, actually
[12:54] niemeyer: sure
[12:54] niemeyer: ok, then in the pathological case, all the units' machines have been replaced
[12:54] rog: Which live within an instance when they are running
[12:54] and the service is now offline
[12:54] rog: No instance, no unit
[12:54] niemeyer: but when you see that state change, you haven't necessarily got an instance to change the ports on
[12:54] rog: Impossible
[12:54] niemeyer: really?
[12:55] rog: The unit lives within the instance.. if there's no instance, there's no uniter, and thus no open ports
[12:55] niemeyer: ahh, and you just answered my question, when the instance is replaced, the new uniter will react to 'exposed' and do what it needs
[12:55] niemeyer: ah...
[12:55] niemeyer: i'd forgotten that rub
[12:55] niemeyer: brilliant!
[12:55] the missing piece
[12:55] niemeyer: so the test is wrong.
[12:56] it's all my fault :-)
[12:56] I set up a hangout: https://plus.google.com/hangouts/_/0fbf3a66c1e6ee955123854516a78f947aa621cb
[12:56] davecheney: Yeah, or in install/start whatever.. it's free to run open-port in any hook
[12:56] not required, but if you want to chat
[12:56] feel free to join
[12:57] rog: Well, kind of.. as discussed with davecheney, the test is also right
[12:57] rog: It's just a different unit test
[12:57] rog: It shouldn't blow up in such a state
[12:57] rog: it's wrong in the way a hat made of bacon is wrong
[12:57] davecheney: are you dissing my bacon hat?
[12:59] niemeyer: should it just ignore the error?
[12:59] niemeyer: the firewaller, that is
[13:00] rog: Yeah, if the instance is gone, it can ignore it.. the provisioner will close ports in state, fire a new instance, assign to it, and the new uniter will open it back again
[13:01] niemeyer, davecheney: so this should fix that particular test: http://paste.ubuntu.com/1171742/
[13:03] niemeyer: me looks
[13:04] niemeyer: that should _almost_ work
[13:04] davecheney: almost?
[13:04] there is a race between the call to dummy.StartInstance() returning and the instance id hitting the state
[13:05] it is a much smaller window than currently exists
[13:05] davecheney: shit yeah
[13:05] davecheney: i knew about that before
[13:05] davecheney: just forgot it
[13:05] * rog is groggy today
[13:05] rog: http://codereview.appspot.com/6482081/
[13:05] your thoughts would be appreciated
[13:06] note, setting a value higher than about 10ms will cause jujud tests to hang
[13:07] davecheney: Huh.. a good one to inspect later :)
[13:07] rog, niemeyer: booohhh https://bugs.launchpad.net/bugs/1042545
[13:10] davecheney: why bother putting the delay time in the environState if it's actually global?
[13:10] rog: TheMue requested that we be able to change it
[13:10] davecheney: ah
[13:10] so in theory, we could change it by type asserting to dummy, then reaching in and changing the value on a per test basis
[13:11] davecheney: not really
[13:11] davecheney: it's all unexported
[13:11] davecheney: i'd prefer a possible entry in the dummy environ configuration attributes.
[13:11] davecheney: if we wanted an override
[13:13] rog: good point
[13:13] rog: Yeah, that sounds sensible
[13:13] I've suggested a function in the dummy package, but an attribute setting seems even nicer
[13:13] environment setting
[13:13] niemeyer, davecheney: i'm not sure about it though (an override that is)
[13:14] davecheney: That said, I still think there's value in having a flag to enable the delay globally
[13:14] when would it be appropriate to use the override?
[13:14] davecheney: May be done with a command line flag, though
[13:14] davecheney: as suggested in the review
[13:14] niemeyer: i think on balance i prefer the environment variable, as it means it's easy to run all tests with the delay enabled.
[13:15] rog: -dummy.delay 10s is just as easy
[13:15] I'd prefer to stay out of the business of env variables
[13:15] niemeyer: i can't do: go test launchpad.net/juju-core/... -dummy.delay 10s
[13:15] For that purpose
[13:15] niemeyer: that doesn't work when you do: go test launchpad.net/juju-core/...
[13:15] what rog said
[13:16] Hmm.. okay
[13:16] we'll put it in a SOAP web service that the tests query using CORBA.
[13:17] Aram: don't forget perl, you'll need lots of perl
[13:18] davecheney: Okay, so why all the fanciness? var delaySecs = os.Getenv("..."); func delay() { if delaySecs != "" { time.Sleep(...) } }?
[13:19] niemeyer: TheMue requested that it be changeable
[13:19] niemeyer: +1 (assuming we don't allow delay overriding with the config)
[13:19] davecheney: Ah, we have time.ParseDuration actually
[13:19] which I now realise didn't work
[13:19] Aram: i'm missing the JEE server in the middle receiving the requests and writing them via MQSeries into an Oracle DB where we could fetch them
[13:20] TheMue: why do you want the delay overridable?
[13:20] Which you're already using
[13:20] niemeyer, rog: so if in principle you are in favor, I'll resubmit that CL tomorrow with something simpler
[13:20] env var -> delay()
[13:20] davecheney: Yeah, definitely
[13:20] davecheney: we can also improve it in the future as we see necessary
[13:20] davecheney: i think it's a good idea. i don't think we want a configurable delay - i can't think when it would ever be appropriate to use it.
[13:20] davecheney: Potentially approaching the mgo/txn's Chaos stuff
[13:21] niemeyer: +1
[13:21] davecheney: But it's not worth it for now
[13:21] rog: it was just a quick idea after dave mentioned the topic, to test different ranges
[13:21] rog: if the implementation concept now makes it useless it's fine for me
[13:21] all: i agree, i just need a way to make dummy slow down to better simulate a real provider
[13:21] davecheney: +1
[13:22] davecheney: +1
[13:22] davecheney: +1
[13:22] right, i'll try out your test suggestion tomorrow niemeyer
[13:23] davecheney: Cheers man
[13:23] niemeyer: thanks for the discussion, i can see a straightforward way for the PA to implement those two tickets we discussed
[13:30] davecheney: Superb, thanks for finding the issue! Glad to see these bugs being fleshed out.
[13:35] Okay, so reviews, and then presence
[13:40] TheMue: ping
[13:40] niemeyer: pong
[13:40] TheMue: We'll need to fix the Kill methods.. they're changing the cached state to Dying irrespective of previous state
[13:40] TheMue: We shouldn't do Dying > Dead
[13:41] niemeyer: ouch
[13:41] TheMue: Adding a comment
[13:41] niemeyer: oh, yes, now i see it
[13:41] TheMue: It's fine to move on as they are for the moment, and fix all of them at once in a follow up
[13:42] niemeyer: both new ones for service and machine are like the ones for unit and relation
[13:42] TheMue: Yeah, it's all good
[13:42] TheMue: We can have a new CL that follows up on that and fixes all three at once
[13:42] TheMue: (with a test!)
[13:42] fwereade: could you just quickly give a high level overview of how constraints work? in particular, who solves the constraints? the PA or the client? I'm presuming the former, but just need to check.
[13:43] niemeyer: when i've got your ok i'll change it and do the followup
[13:43] niemeyer: +1
[13:43] TheMue: Cool
[13:43] rog, well, "solve" is frankly a bit of a strong word -- we basically match against a list of instance types, sort by cost, and pick the first
[13:44] fwereade: yeah, but until that's done we don't know the architecture that's going to be used for the new unit, right?
[13:44] fwereade: or series
[13:44] fwereade: but that "solving" is done by the PA?
[13:44] rog, ok, the series is known before we even start on constraints
[13:44] rog, that's defined by the charm
[13:45] rog, in every existing case, arch defaults to amd64 but can be set to i386 if desired
[13:45] rog, but remember this is pretty ec2-specific
[13:45] fwereade: yeah, i don't want to do anything provider-specific here
[13:46] fwereade: so am i right that the PA does the matching?
[13:46] rog, and -- well, *probably* it should be up to the PA, but at present no it is not
[13:46] fwereade: ah, so the add-unit command works out the architecture etc then adds the new unit with those set?
[13:46] rog, but I know niemeyer is -1 on this, and we hashed it out last UDS, so... yes, it is up to the PA
[13:46] rog, er, IYSWIM
[13:47] rog, yeah, that was what it did
[13:47] fwereade: thanks, that's useful
[13:47] rog, the reasons for it doing so are not especially interesting, and the actual choice procedure shouldn't really be any different
[13:48] fwereade: it makes a difference for upgrading, interestingly
[13:48] fwereade: i'm writing a little description of my current problem
[13:49] TheMue, Aram: ping
[13:49] pong
[13:49] rog, ah, ok :)
[13:49] Aram, TheMue: Can we quickly talk about the last point here: https://codereview.appspot.com/6495043/
[13:49] niemeyer: pong, sorry, have been afk for a moment
[13:49] niemeyer: yes
[13:50] niemeyer: what point, this? mstate/service.go:45: s.doc.Life = Dying
[13:51] Aram: The last one in the review
[13:51] niemeyer, fwereade: here's a description of my current upgrading difficulty: http://paste.ubuntu.com/1171829/
[13:51] niemeyer: currently RemoveUnit() would return an error if the units are not dead
[13:51] niemeyer: but it has to be put into a complete txn later
[13:51] rog: Sorry, I'm covering a different issue right now
[13:51] niemeyer: that's fine. but when you have a moment, i'm blocked on this.
[13:51] TheMue: That's unrelated to transactions
[13:52] TheMue: This is about the lifecycle behavior
[13:52] niemeyer: otherwise we start to remove relations, remove units and may break before deleting the service
[13:52] niemeyer: I thought we had agreed that units should listen for their service and delete/die themselves.
[13:52] TheMue: RemoveService is abruptly *removing units from state*, despite whatever state they're in
[13:53] niemeyer: ok, reducing it to lifecycle, the solution should be to let all units die if Die() is called on a service
[13:53] niemeyer: today it's not working this way
[13:53] TheMue: Of course it's not.. that's why we're doing the lifecycle stuff in the first place :)
[13:53] TheMue: Which is why I'm asking what's the plan
[13:54] Aram: Yes
[13:54] Aram: Not delete
[13:54] yes, only die.
[13:54] Aram: Kill themselves, actually
[13:54] Aram: and then die, you're right
[13:54] Aram: The deletion is the bit that is done outside
[13:54] (by the machine agent)
[13:55] niemeyer: cool, we're on the same page. it hasn't been done yet because we were lacking watchers, I haven't forgotten about it.
[13:55] Aram: Still, there's a problem in that RemoveService implementation.. doesn't look like that's what we want
[13:55] well no, now it isn't.
[13:55] but the plan is to change it after we have watchers.
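The constraint "solving" fwereade describes above (match against a list of instance types, sort by cost, pick the first) is easy to sketch. The instanceType fields and catalogue values below are invented for illustration; they are not juju's actual constraint or provider types.

```go
package main

import (
	"fmt"
	"sort"
)

// instanceType is an illustrative stand-in for a provider's machine
// type catalogue entry.
type instanceType struct {
	name string
	arch string
	mem  int // MB
	cost int // relative hourly cost
}

// cheapestMatch filters the instance types against the requested
// constraints, sorts the survivors by cost, and picks the first --
// the procedure described in the log.
func cheapestMatch(types []instanceType, arch string, minMem int) (instanceType, bool) {
	var matches []instanceType
	for _, t := range types {
		if t.arch == arch && t.mem >= minMem {
			matches = append(matches, t)
		}
	}
	if len(matches) == 0 {
		return instanceType{}, false
	}
	sort.Slice(matches, func(i, j int) bool { return matches[i].cost < matches[j].cost })
	return matches[0], true
}

func main() {
	catalogue := []instanceType{
		{"m1.large", "amd64", 7680, 32},
		{"m1.small", "amd64", 1700, 8},
		{"t1.micro", "i386", 613, 2},
	}
	best, ok := cheapestMatch(catalogue, "amd64", 1024)
	fmt.Println(best.name, ok) // m1.small true
}
```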
[13:56] Aram: Okay, I'm wondering because we're changing it now to claim lifecycle integration,
[13:56] Aram: and it feels quite bogus from a lifecycle perspective
[13:56] nah, the claim is wrong. it's a step, but not the final step.
[13:56] there's more work to be done.
[13:57] Aram: Cheers
[13:58] niemeyer: how should "kill themselves" happen?
[13:58] TheMue: The uniter will monitor the service
[13:59] TheMue: If the service gets to Dying, the unit kills itself
[13:59] TheMue: There's also a refcounter in the service to tell how many units are alive or dying (and not Dead yet)
[13:59] niemeyer: so they don't kill themselves, the uniter kills them, ok
[13:59] TheMue: The service stays dying meanwhile
[13:59] TheMue: Heh.. the uniter is the implementation of the unit
[14:00] fwereade: Btw, are you in sync with this ^^^
[14:00] fwereade: last 5 sentences
[14:01] niemeyer: i had been looking at the unit state, not at the unit implementation
[14:02] niemeyer, had half an eye on it; looks pretty sensible to me
[14:02] niemeyer, same model as relations, really
[14:03] fwereade: Yeah
[14:03] niemeyer, one more thing to watch, maybe in a couple of places, which might be a little tedious
[14:03] niemeyer, but well actually, no, I might only need to watch it when I'm started
[14:04] fwereade: Well, not really.. the service can die at any point
[14:04] fwereade: It's an entry for the "steady" select loop, I suppose
[14:04] niemeyer, yeah, still thinking it through
[14:05] TheMue: Okay, you got a +1 on all the lifecycle stuff
[14:05] rog: So, what's up?
[14:05] niemeyer: cheers
[14:06] niemeyer: just booking flights 4 uds, 1 mo
[14:06] niemeyer, I *think* that impending service death is not a good enough reason to interrupt anything else the unit is doing (including, say, waiting for hook error resolution)
[14:06] rog: np, reading the paste meanwhile
[14:06] fwereade: Uh, that's awkward
[14:06] fwereade: The guy said *kill the whole thing*
[14:07] fwereade: Why would we go "Oh, btw, I have a small issue here?"
[14:07] fwereade: Hmm
[14:07] fwereade: I'm trying to think of scenarios where we'd actually want to wait for the error to be resolved
[14:08] fwereade: Did you have something in mind?
[14:08] fwereade: Well, at the same time, it doesn't feel like a big deal to wait, to be honest
[14:09] fwereade: An argument could be enabling debugging of such issues
[14:09] fwereade: "juju resolved" would always enable the service to die either way, I suppose?
[14:09] fwereade: Sorry, clearly I'm brainstorming..
[14:09] niemeyer, yeah, that was my thinking -- that in general we want smooth and steady shutdown of everything
[14:10] niemeyer: flight booked
[14:10] niemeyer: did the issue make sense to you?
[14:10] fwereade: Sounds sensible, sorry for the derail.. should have talked to a bear before
[14:10] niemeyer, by my gut I'm -1 on any sudden-death mechanisms beyond remove-unit --force
[14:10] niemeyer, haha np
[14:12] rog: Not yet.. digesting it still
[14:15] rog: Why would add-unit *guess* tools?
[14:15] niemeyer: because it doesn't know what architecture the new unit is going to run on yet
[14:16] rog: Hmm
[14:16] rog: So that's not right
[14:16] niemeyer: unless we say that constraints are solved by the client
[14:16] rog: We shouldn't set tools before that's decided
[14:16] niemeyer: we don't
[14:16] rog: Well, if we *guess*, we do
[14:17] rog: If it's decided, we don't guess
[14:17] niemeyer: that's only with solution 2
[14:17] niemeyer: which i'm not keen on, but i thought it might be a possibility
[14:17] rog: Solution 1 feels a bit like a derail..
[14:18] niemeyer: yeah, i'm not too keen on that either
[14:18] niemeyer: which might mean that the whole proposed tools architecture is misguided
[14:18] rog: Heh
[14:18] rog: Let's keep the bby
[14:18] baby
[14:20] rog: I'm still digesting the issue, just a moment
[14:29] rog: You know what's interesting.. a unit may have to run on a different Ubuntu release than the machine agent that starts it
[14:30] rog: This is theoretically quite feasible
[14:30] niemeyer: yes, that's true.
[14:30] rog: and probably practically too
[14:30] rog: For a constrained selection of series at least
[14:30] niemeyer: oh... unit
[14:31] rog: So we need three details to be able to start a unit:
[14:31] niemeyer: interesting. i thought the LXC stuff always used the same series as the main instance
[14:31] Sorry
[14:31] I actually meant
[14:31] rog: So we need three details to be able to assign tools to a unit:
[14:31] - The series
[14:31] - The version
[14:31] - The arch
[14:32] We can tell the series from the service
[14:32] The version should probably be inherited from the provisioning agent
[14:33] The arch must match the machine being deployed on
[14:33] rog: People deploy different series in *chroots*
[14:33] rog: LXC has better isolation than chroots even
[14:34] niemeyer: uh huh. and i guess we can take advantage of that.
[14:34] rog: Yeah
[14:34] niemeyer: the above stuff seems to imply that you think the PA should assign the tools to a unit
[14:34] niemeyer: is that right
[14:34] ?
[14:35] rog: No.. so far it's just brainstorming.. just trying to figure out what comes from where, so we find the proper hook point
[14:35] niemeyer: ok
[14:36] rog: It feels like there are two possible cases:
[14:36] rog: 1) Unit assigned to existing machine
[14:36] rog: 2) Unit assigned to undeployed machine
[14:36] niemeyer: +1
[14:37] off the top of my head, maybe the PA should assign units to machines, rather than doing it client-side. that would solve this issue, at any rate.
[14:37] rog: For (1), AssignToMachine may ensure the proper set of agent tools based on the machine agent tools, and potentially the service some day when we do support the distinction
[14:38] niemeyer: i'm not sure that's true actually
[14:38] rog: Oh?
[14:38] niemeyer: what if the machine agent gets upgraded in the meantime?
[14:41] niemeyer: i suppose it comes down to what semantics we want from upgrade
[14:41] rog: Actually, hmm..
[14:41] s/upgrade/upgrade-juju
[14:41] rog: What if ProposedTools() defaulted to the machine tools?
[14:42] rog: When the setting is missing entirely
[14:42] niemeyer: interesting
[14:42] niemeyer: what about machine proposed tools?
[14:43] rog: Meaning?
[14:43] niemeyer: what does Machine.ProposedAgentTools default to?
[14:43] rog: That's an easy one.. we need tools to start the machine agent in the first place
[14:44] niemeyer: but we don't need tools to create the Machine
[14:44] rog: Indeed, but we need tools to start the machine agent in the first place
[14:44] niemeyer: sure. but i don't see how this gets us out of the race that i described
[14:46] rog: Well, there's no way to avoid it if we're allowing for anything concurrent to pick agent tools
[14:47] niemeyer: solution 1 avoids the race, at some cost.
[14:47] rog: It doesn't..
[14:47] rog: Unless you sit down and wait for all upgrades to finish [14:47] rog: Before continuing to upgrade [14:47] niemeyer: you have to sit down and wait for parent agent upgrades to finish, yeah [14:48] rog: and even that has a race, if you assume that new parent agents may be starting [14:48] niemeyer: they'll be started by another agent, so we'll always be able to upgrade that first [14:49] niemeyer: essentially we percolate upgrades down from the root [14:49] rog: That's a long derail [14:49] niemeyer: here's another possibility: [14:49] rog: and complex too.. [14:49] niemeyer: agents are responsible for ProposingTools on their children. [14:50] niemeyer: (not sure i like that much either) [14:50] rog: Hmm [14:51] rog: It sounds like we're introducing a lot of cost for the benefit of features that won't exist for quite a while.. [14:51] rog: I wish we had noticed that before :( [14:51] [15:18:34] niemeyer: which might mean that the whole proposed tools architecture is misguided [14:51] :-) [14:51] :-( [14:52] rog: state.ProposedVersion() ? :-) [14:52] niemeyer: that has its own down sides, and i can't quite remember them right now... [14:53] rog: Mainly we can't do selective upgrading, which is what I was referring to above [14:53] niemeyer: what happens if we don't have versions for every architecture we need? [14:55] rog: We find the closest possible version available, and if there's none, we put an error in the state pointing out we can't deploy said resource [14:56] rog: I think the version setting can actually be part of the config.Config type [14:57] niemeyer: what do you mean by "closest version"? [14:58] niemeyer: i think perhaps we should make it exact or nothing [14:58] rog: $MAJOR.0.0 <= $CLOSEST_VERSION <= $MAJOR.$MINOR.$PATCH [14:59] rog: That's unnecessary.. we have to handle compatibility within majors anyway.. 
there's no reason to prevent that from happening purposefully [14:59] rog: This will be handy when there's an upgrade in one architecture but not in another [14:59] niemeyer: so nothing later than the proposed version. [15:00] rog: Yeah [15:00] niemeyer: state.ProposedVersion() (version.Number, error) right? [15:00] rog: I was thinking that it'd be easier to have that in config.Config [15:00] rog: and thus state.EnvironConfig() [15:00] rog: So we can use existing infrastructure to deal with it [15:00] * rog thinks [15:01] rog: E.g. we already have env watches, already have means for reading and writing this setting, etc [15:03] rog: There's one handy pre-req which is making state deal with config.Config rather than ConfigNode on EnvironConfig and the watch [15:03] rog: Which is something I've been trying to do since Lisbon [15:04] niemeyer: this means that the providers would have to know about proposed tools, right? [15:04] rog: Why? [15:05] niemeyer: because they're created with config.Config attributes, no? [15:05] rog: Not that I see a problem upfront, but just wondering what you have in mind [15:05] rog: No provider should break if config.Config has an attribute that it doesn't know about [15:05] niemeyer: ah, i didn't know that [15:05] rog: The generic config.Config will handle it [15:06] niemeyer: another question occurs to me [15:07] niemeyer: when we do "juju upgrade-juju", how do we choose what version to propose in the state? [15:07] niemeyer: do we look through all the agents and see what architectures they're running, then choose the best version that is provided for all of them? or do we just choose the best version for any architecture? [15:08] rog: We can use the functionality you've already put in place to find max(version with current major) [15:08] niemeyer: for which architecture?
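The closest-version rule rog states above ($MAJOR.0.0 <= $CLOSEST_VERSION <= $MAJOR.$MINOR.$PATCH) can be sketched in a few lines of Go. This is a hypothetical illustration only: `Version` and `closest` are stand-ins, not juju's `version.Number` or its real selection code.

```go
package main

import "fmt"

// Version is a stand-in for juju's version.Number.
type Version struct{ Major, Minor, Patch int }

// Less reports whether v orders before w.
func (v Version) Less(w Version) bool {
	if v.Major != w.Major {
		return v.Major < w.Major
	}
	if v.Minor != w.Minor {
		return v.Minor < w.Minor
	}
	return v.Patch < w.Patch
}

// closest picks the highest available version with the proposed major
// that does not exceed the proposed version, i.e.
// $MAJOR.0.0 <= closest <= $MAJOR.$MINOR.$PATCH.
// ok is false when nothing qualifies.
func closest(proposed Version, available []Version) (best Version, ok bool) {
	for _, v := range available {
		if v.Major != proposed.Major || proposed.Less(v) {
			continue // wrong major, or newer than proposed
		}
		if !ok || best.Less(v) {
			best, ok = v, true
		}
	}
	return best, ok
}

func main() {
	avail := []Version{{1, 0, 0}, {1, 2, 0}, {1, 3, 1}, {2, 0, 0}}
	fmt.Println(closest(Version{1, 3, 0}, avail)) // {1 2 0} true
}
```

Note that {1, 3, 1} is skipped because it is later than the proposed version, matching niemeyer's "so nothing later than the proposed version".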
[15:08] rog: I'd say any [15:08] rog: If we use the logic previously mentioned, that'd be fine [15:09] rog: Agents may simply not be able to catch up immediately [15:09] niemeyer: the functionality currently in place looks for tools for a given arch and series [15:09] rog: But with the retry logic that should exist anyway (download may fail, etc), we'd catch that [15:09] niemeyer: but i could remove that restriction [15:10] rog: Ah, I see.. this is still useful [15:10] rog: We'll want to use that within the agent when figuring what to run [15:10] niemeyer: indeed [15:10] niemeyer: i could provide BestBinaryTools or something [15:10] rog: We just need to be able to disable the flag [15:10] rog: Yeah [15:11] niemeyer: i wonder, if an agent can't find the exact version, perhaps it should keep on polling the available tools until the version is available [15:12] rog: I think it's fine to run something else that is closer to the proposed version [15:12] niemeyer: but what happens if we later upload the required version? how can we ask the agents to upgrade? [15:12] rog: Reality will have that kind of scenario due to arch and series discrepancies [15:12] rog: We shouldn't have to [15:13] rog: The agent itself should note that it's still out of date in comparison to the proposed version [15:13] niemeyer: it'll know that, but what should it do about it? [15:13] rog: It should check to see if the available tools are now available [15:13] Erm [15:13] rog: It should check to see if the proposed tools are now available [15:14] [16:11:52] niemeyer: i wonder, if an agent can't find the exact version, perhaps it should keep on polling the available tools until the version is available [15:14] niemeyer: that's what i was suggesting.
[15:14] rog: I think it's fine to run something else that is closer to the proposed version [15:14] rog: That's my counterproposal :-) [15:14] rog: The "exact" word in there is the disagreement [15:15] rog: It should continue polling, but if it finds something closer, it should upgrade too [15:15] rog: and continue polling [15:15] niemeyer: definitely. [15:15] niemeyer: ah, sorry, i thought you were disagreeing with the idea of polling [15:15] rog: No, that's nice [15:16] * rog hopes that he doesn't have to throw away *too* much code :-) [15:16] rog: I was thinking about that as I went through, I *think* it's mostly ok [15:16] niemeyer: that's my inclination too [15:16] rog: The whole upgrading logic is gold, just needs to watch something else [15:17] niemeyer: yeah [15:17] niemeyer: i think it's mainly the watchers in state [15:17] rog: Interestingly, there's a bunch of code going away, which is nice [15:17] TheMue: ping [15:18] niemeyer: at the expense of flexibility of course, but maybe we'd never really want to deliberately deploy different versions. [15:18] niemeyer: pong [15:18] rog: I'm feeling better about this aspect, to be honest.. I prefer we make things more complex to implement the fancy scenarios when we do need it, than to have it complex by default [15:18] TheMue: Heya [15:19] TheMue: I think we might use your help on this one [15:19] niemeyer: yeah, i think i agree. [15:19] TheMue: Not sure about rog's plan, so we have to brainstorm for a sec [15:19] niemeyer: ok, will read the last lines.
[15:19] TheMue: There's some work that has been on my plate for a while, and I never got to it [15:19] TheMue: No need, I'll explain [15:19] niemeyer: i'd rip out all the current ProposedAgentTools stuff from state [15:19] niemeyer: ok, listening mode = on [15:20] TheMue: ^ [15:20] TheMue: We need EnvironConfig() to return config.Config [15:20] TheMue: and also the respective watcher [15:20] TheMue: I think all the stars are now aligned for this to be relatively easy, but this is a blocker for rog [15:21] TheMue: Would you mind putting that at the front of the queue, one CL for state, then another one for mstate? [15:21] rog: Not sure if you agree, or if you'd like to do that yourself? [15:21] niemeyer, TheMue: that would be great [15:22] TheMue: We'll need a counterpart for State.EnvironConfig: State.SetEnvironConfig [15:22] TheMue: Since config.Config is read-only [15:22] TheMue: But it all sounds quite straightforward [15:22] niemeyer: yes, at first look it seems so [15:23] TheMue: if you do state.SetEnvironConfig, i'll do the ProposedAgentTools stuff. [15:23] TheMue: Can you help us on that, with some priority? [15:23] niemeyer: sure [15:23] TheMue: Thanks a lot [15:23] niemeyer: yw [15:24] TheMue: There are quite a few things that are touched by that (provisioner, etc), but I suspect it will be rather pleasing. This is the last piece of the puzzle of config.Config, so I expect it to fall into place correctly in all cases. [15:24] * rog goes for a bite of lunch [15:24] niemeyer: so EnvironConfig() (*ConfigNode, *ConfigWatcher, error)? [15:25] TheMue: Uh? [15:25] niemeyer: or two calls?
[15:25] TheMue: We already have an environ watcher [15:25] TheMue: The idea is just to make EnvironConfig() and the respective watcher operate with config.Config, rather than ConfigNode [15:26] TheMue: We have proper helpers for everything [15:26] niemeyer: aargh, read it wrong, sorry [15:26] TheMue: np [15:26] niemeyer: already wondered [15:27] TheMue: State.EnvironConfig and the watch will both use config.New, and SetEnvironConfig will use Config.AllAttrs [15:28] niemeyer: ok [15:30] back [15:32] I'll head to lunch [15:32] biab [15:45] rog: to get it right: state doesn't use a persisted config anymore, it is set in memory with SetEnvironConfig()? [15:46] TheMue: i'm not sure what you mean [15:47] rog: today the config that is returned by EnvironConfig() is fetched from ZK [15:48] rog: from the environment path [15:48] rog: ah, that explanation helps [15:48] rog: it's just a different return type, source of the data stays the same [15:50] TheMue: +1 [15:50] rog: had been confused for a moment due to the late jump into the discussion [15:51] TheMue: np [15:55] TheMue, rog, Aram: when one upgrades a charm, and gets a conflict, and marks it resolved, I don't see any way for us to verify whether or not the user has actually done anything sensible with the conflicted data; does this sound like a problem to you, or a "just don't be an idiot" situation? [15:56] fwereade: the latter [15:56] conflict? [15:56] how might a charm upgrade cause a "conflict" ?
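Before the conversation moves on: the config.New / Config.AllAttrs round-trip TheMue and niemeyer settled on can be pictured with a minimal read-only config. This is a loose sketch using the names from the discussion, not the actual juju implementation; it also shows why providers don't break on attributes they don't know about, since unknown keys simply round-trip.

```go
package main

import "fmt"

// Config is a read-only attribute map, loosely modelled on the
// config.New / Config.AllAttrs names used in the discussion.
type Config struct{ attrs map[string]interface{} }

// New copies the given attributes so the Config stays read-only.
func New(attrs map[string]interface{}) *Config {
	c := &Config{attrs: make(map[string]interface{}, len(attrs))}
	for k, v := range attrs {
		c.attrs[k] = v
	}
	return c
}

// AllAttrs returns a fresh copy of the attributes, the shape a
// SetEnvironConfig counterpart would persist.
func (c *Config) AllAttrs() map[string]interface{} {
	out := make(map[string]interface{}, len(c.attrs))
	for k, v := range c.attrs {
		out[k] = v
	}
	return out
}

func main() {
	cfg := New(map[string]interface{}{"name": "demo"})
	attrs := cfg.AllAttrs()
	attrs["agent-version"] = "1.2.3" // an attribute unknown to a provider
	fmt.Println(New(attrs).AllAttrs()["agent-version"]) // 1.2.3: it round-trips
}
```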
[15:58] fwereade: ^ you might wanna explain :-) [15:59] SpamapS, ah, sorry [16:00] SpamapS, short version: we're versioning charms [16:01] SpamapS, so we maintain a git repo of charms-used-by-this-unit, which just gets its contents overwritten neatly when we upgrade; and *then* we pull from that repo into the actual charm dir, which is itself a git repo [16:01] SpamapS, which then means that weird directory structure changes and the like will at least be *caught* when we try to upgrade [16:01] nice to see we're abandoning bzr whole-heartedly :) [16:02] SpamapS, haha, I wanted to use bzr, but it has an ugly crash in the precise situation that prompted this idea [16:02] fwereade: I feel like you can be way more heavy-handed than this with a charm upgrade. If I say "give me version X" .. I don't mean "merge it into what I have now" .. I mean *X* [16:03] also I feel like we need to (soon) make the charms read-only and enforce a data storage area for charms that want to write data.. but I keep forgetting to file a bug on that :p [16:04] SpamapS, ha, I would like that solution most of all, but my personal reading was that the writing-to-charm-dir genie was out of the bottle; is that completely wrong? [16:05] it's out of the version 1 bottle [16:05] or rather, format: 1 bottle [16:05] format: 2 is still unsettled... [16:05] * fwereade makes very loud HMMMMM sounds [16:05] (and has actually never been discussed on the public mailing list.. which is a HUGE problem) [16:07] fwereade: why git tho. Why not just rsync? [16:07] SpamapS, (to briefly return to what you said before, the trouble is that without separate data storage we actually *are* always saying "merge with what I have") [16:07] SpamapS, er, because I didn't consider it, and when I thought "detect conflicts" I thought "VCS" [16:08] see I don't think detecting conflicts is at all important [16:08] applying delta tho.. that is.. [16:08] rsync would fail for this..
[16:08] and you're right, we are stuck w/ merging until we ditch the writable charm dir [16:10] SpamapS, ok; I was concerned that an un-thought-through upgrade from version X to version X+3, for example, could easily end up (say) blindly replacing a data dir with a file, and this seemed like unfriendly behaviour [16:10] fwereade: that must be thought through in upgrade-charm, not juju [16:11] SpamapS, but by the time upgrade-charm runs, the damage is done [16:11] fwereade: all I'd like to see is deltas applied sanely. VCS does make sense for that. [16:11] fwereade: it's not "damage" if the author does something stupid [16:11] fwereade: authors *must* be cautious of exactly that situation. [16:12] SpamapS, (the other advantage, which feels like a nice help when writing/debugging charms, is that we can actually maintain a complete per-unit history of the charm dir) [16:12] SpamapS, in my mind this is indeed more targeted at authors than at users [16:12] SpamapS, if a user hits this situation the author has screwed up [16:12] authors will have their own VCS [16:12] fwereade: feels very much like "trying to do too much" [16:13] What I really care about is that you try to apply all the delta from the shared base.. a straight "merge" problem. [16:13] If I have created a 'data' dir in the charm, and the new charm has a static data dir.. then yes, that's a conflict. [16:14] SpamapS, heh, I am not immune to this disease; I think niemeyer will be back from lunch soon, and I would like to involve him in this discussion [16:14] SpamapS, another benefit of merging is that (eg) deleted hooks actually get deleted [16:14] fwereade: I think it's fine to do as you've suggested. Merge.. report conflicts when they happen. I get it now.
[16:15] SpamapS, cool [16:15] and I'm even wondering if that's actually more straightforward than trying to disallow writing [16:16] SpamapS, I think it is a pretty neat solution (which I totally can't take credit for) [16:17] SpamapS, although I was pleased with myself for then realising that we can commit charm dir state after every hook, which could be quite the debugging aid -- do you foresee any issues there? [16:18] fwereade: no, I don't think it should be a problem... [16:19] fwereade: some charms download lots of code into the charm dir on config-changed .. some even have git repos embedded.. so you have to be mindful of that. [16:23] SpamapS, ha, good wrinkle, hadn't thought of that [16:24] gents, I need to be off; I'll try to get back on later [16:24] take care all [16:35] niemeyer: your email mentioned a ChangeLog function, but: http://paste.ubuntu.com/1172117/ [16:36] where is it? :). [16:36] or did you mean something else? [16:47] Aram: Yo [16:47] hey. [16:48] Aram: No, that was it really [16:48] Aram: Are you not finding it after pull? [16:49] as you can see, no. [16:49] is my tip revision there correct? [16:50] Aram: Let me check [16:52] Aram: There's something awkward going on [16:53] Aram: That revision is the one that introduced the ChangeLog function [16:53] Aram: Try to run this: bzr diff -r 168..169 [16:53] Aram: my godoc shows Runner.ChangeLog() [16:53] Aram: Try to "bzr revert" perhaps [16:54] Aram: To get the tree back in shape [16:54] niemeyer: it's in the diff. [16:54] I'll do a revert [16:54] maybe this is the problem [16:54] white:txn$ bzr st [16:54] working tree is out of date, run 'bzr update' [16:54] how did that happen though? [16:54] I didn't make any changes to it. [16:55] bzr revert didn't do anything, bzr update solved it though [16:56] Aram: I can't tell how you got there, but there are a number of ways this can happen [16:56] perhaps a go get -u did this?
[16:57] Aram: Quite possible [16:57] Aram: Since it'll have to update backwards [16:57] Aram: (the tag is not in the latest revision) [17:15] fwereade: ping [17:19] niemeyer, pong [17:23] fwereade: Do you have a moment for a call? [17:23] fwereade: I know it's late for you, so "no" is a fine answer [17:23] niemeyer, just a sec... [17:24] niemeyer, yeah, can we keep it down to 15 mins or so though please? [17:26] niemeyer, shall I invite? you and..? [17:27] fwereade: Let's go then [17:27] fwereade: You and me, for the moment [17:27] niemeyer, sent [18:14] niemeyer: ping [18:14] rog: Yo [18:14] niemeyer: i sent a response to your review [18:15] niemeyer: only point of contention is 0666 vs 0600 [18:15] niemeyer: i vote for former as it's standard [18:15] niemeyer: and what threat are we protecting against? [18:15] rog: 0600 is the usual for files that contain credentials.. if it doesn't work with that file mode, it's broken [18:16] niemeyer: ah, i see. [18:16] rog: re. 40ms, awesome! [18:16] niemeyer: though i can't see how it could make any difference [18:17] niemeyer: 4ms is probably when there's nothing to remove. and it is using my SSD device [18:17] 40 [18:17] rog: Sure, if it makes no difference, then 0600 is fine [18:17] niemeyer: sure, ok. 
[18:18] niemeyer: it'd probably be even faster if i wasn't running it -gocheck.vv [18:19] rog: Ah, most certainly [18:19] niemeyer: a very useful program BTW: http://paste.ubuntu.com/1172299/ [18:19] niemeyer: i call it "timestamp" [18:20] niemeyer: so i did (in state) go test -gocheck.vv 2>&1 | timestamp [18:20] niemeyer: to find the timings [18:20] rog: Curious [18:21] rog: Clever, actually [18:21] niemeyer: sample output: http://paste.ubuntu.com/1172303/ [18:21] rog: This is awesome [18:21] niemeyer: it's incredibly useful sometimes [18:23] rog: Well, curiously gocheck is also showing the timestamps in that one case [18:23] niemeyer: unfortunately gocheck's timestamps wrap [18:23] niemeyer: i've had a proposal in for ages to fix that [18:23] rog: Where's it? [18:24] * rog looks [18:25] niemeyer: https://codereview.appspot.com/5874049/ [18:25] Looking [18:26] rog: I see.. I'd be glad to fix the wrapping, but requires some more thinking indeed [18:27] rog: It should at least be a consistent unit and length [18:27] rog: The length can of course vary on extremely long cases, but the sample output there is a bit awkward [18:27] niemeyer: yeah. [18:28] niemeyer: i've found that the output from "timestamp" works quite well. [18:28] niemeyer: but that's probably because i'm used to it! [18:28] rog: What is it? min/sec/ms? [18:28] niemeyer: yeah [18:29] rog: It looks reasonable to me as well actually.. [18:29] niemeyer: and i guess it doesn't matter so much if it wraps after an hour [18:30] rog: I'm actually more concerned on the lower side, but 1ms might be enough resolution [18:30] niemeyer: it could be 04:05.000 i suppose [18:30] (to debug races) [18:30] rog: =1 [18:30] +1 [18:30] niemeyer: yeah. i think that less than 1ms and stuff like the latency of locks around the logging starts to have an effect. [18:31] rog: mutexes should run well under that [18:31] rog: As in, several orders of magnitude below it [18:32] niemeyer: true. 
[18:33] niemeyer: if something is within a millisecond, then it deserves a closer look. but if i'm trying to debug a race, it's generally sensitive to the scheduler and i'll use println not Printf. [18:35] * rog is not sure that those two sentences are in any way related [18:35] rog: :) [18:35] rog: I've used gocheck's output to debug races quite successfully in the past [18:36] niemeyer: and sub-millisecond timing was important to that? [18:38] niemeyer: the other weird thing about the current log time stamps is that they don't start from zero... [18:43] rog: It can help.. sometimes the timing tells how much apart the two events were [18:44] rog: I can't recall how much the under ms helped, though [18:44] niemeyer: yeah. that was where my "if something is within a millisecond, then it deserves a closer look" statement came from. [18:44] rog: Since it was just there, I was considering it without realizing [18:44] rog: Well, that's too late [18:45] rog: If you're debugging a race, "deserves a closer look" is exactly what the log is for [18:45] niemeyer: we could print microseconds too. i don't mind too much. i just want it not to wrap. [18:47] rog: I think ms is fine to begin with, to be honest [18:47] rog: If we ever miss resolution we can increase it [18:47] niemeyer: sounds good. milliseconds is nice and human-friendly :-) [18:47] rog: I'd also do M:SS [18:47] rog: Rather than MM [18:49] rog: Or even SSS, I guess [18:49] niemeyer: i vote for M:SS [18:50] rog: Works for me [18:50] niemeyer: i think that makes the units marginally more obvious [18:50] rog: Yeah [18:51] niemeyer: i'll repropose the CL [18:53] rog: Thanks a lot [19:52] done [19:52] niemeyer: CL reproposed [19:52] i'm off now. see y'all tomorrow. [22:04] Wow, 3 different Uverse technicians have been out to my house in the last 36 hours to try to fix my internets! [22:05] and finally I think we have got it resolved. 
[22:05] apparently "squirrels ate the wires" [22:22] Haha [22:22] mramm: That's great [22:22] Aram: Btw, I was wondering if maybe we could have an unbounded log [22:22] Aram: Rather than a capped collection [22:22] Aram: The difference is pretty minimal either way [22:22] Aram: So we can tweak this