[01:37] davecheney: how does the GUI know to hide NRPE relationships?
[01:37] NRPE-to-Nagios I mean
[01:43] subs are not shown
[01:43] they are just a little marker on the service
[01:43] when you hover over them
[01:43] the links from the subs to their 'provides' peers are shown
[01:49] heh, I work on the gui and didn't realize that was updated. I know there's some upcoming work for making relations show better though
[01:49] just had to go try it out in jujucharms.com and see
[01:49] :)
[01:50] there's been a lot of talk of 'visualization layers' though. one day
[01:54] cool
[01:58] hmm, I don't find the nrpe hover thing very intuitive
=== gary_poster is now known as gary_poster|away
[03:13] thumper: that test you didn't like needs to be in one method because it's testing multiple operations with some succeeding and others failing. i'll see if i can make it clearer by adding comments or whatever
[03:14] since it's a bulk api, there's no point testing one machine at a time
[03:59] it's been a shit day for getting things done
[03:59] * thumper back later to talk to robie
=== thumper is now known as thumper-afk
[04:49] i just had to do something fairly dirty for removing ghost machines.. direct db surgery.. fwiw, http://paste.ubuntu.com/6441072/
[04:57] should be unnecessary come 1.16.4 with terminate-machine --force
[05:08] hazmat: bad news
[05:08] that feature won't make it into 1.16.4
[05:08] it breaks upgrades
[05:08] so sinzui told me
[05:11] davecheney, that seems odd, re garbage cleanup breaking upgrades.. haven't looked at fwereade's patch for it though.
[05:14] hazmat: something about the fix relies on an api call which is not present in 1.16.3
[05:19] ah
=== thumper-afk is now known as thumper
=== rogpeppe1 is now known as rogpeppe
[08:10] mornin' all
[08:21] morning rogpeppe
[08:21] rogpeppe: I never did get your email
[08:21] let me check my spam folder just in case
[08:21] jam: oh, hmm
[08:22] jam: weird, looks like i never sent it
[08:22] jam: sent now
[08:22] nope, not there either :)
[08:23] jam: you should get it in a mo
[08:23] got it
[08:23] jam: cool. dunno how that happened...
[08:25] rogpeppe: it's a big graph :)
[08:25] jam: yeah
[08:26] it's interesting to see the simple ones off to the right, and the really big convoluted mess on the left
[08:26] jam: yeah
[08:26] jam: note how all the stuff on the left comes in through the API (the reflect.Call right at the top)
[08:27] yep
[08:27] handleRequest
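The graph jam and rogpeppe are examining is an aggregation of goroutine stack dumps from a running jujud. For reference, the standard way to pull such a dump from a live Go process is the net/http/pprof handler; a minimal sketch (the listen address is arbitrary):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
	// GET /debug/pprof/goroutine?debug=2 returns the full stack of
	// every goroutine -- the raw input for a graph like this one.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```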
[08:28] rogpeppe: thx for your review yesterday evening
[08:28] jam: note: when you see =0x[0-9a-f]+, that's when there's only a single value for that parameter in all the calls
[08:28] jam: otherwise the count is the number of distinct values for that parameter
[08:29] TheMue: np
[08:29] rogpeppe: "sync.runtime_Semacquire" with 14642 callers and 4922 distinct values
[08:29] I'm curious how that works
[08:30] I would have thought you'd have, like 1 lock object they were blocking on
[08:30] jam: i think it is, mostly - it's probably to do with the implementation of semacquire
[08:31] rogpeppe: well on the other hand if you had different objects talking about something similar, I would expect 14,000 different values
[08:31] jam: i'm also interested in the big sync.Mutex.Lock called by sync.RWMutex.Lock
[08:31] I'm curious that we would have some ratio there
[08:31] rogpeppe: that's the acquireSocket call
[08:31] which is "I want to ask Mongo something, let me have the socket to write to it"
[08:31] jam: the former has 4196 different values but the latter has only one
[08:32] hmmm.. why would we be calling RLock, though
[08:32] I would have thought we would need the WLock
[08:34] "acquireSocket" says "Read-only lock to check for previously reserved socket"
[08:35] jam: well, there are 9720 blocked on the same semaphore
[08:36] I haven't found 9720 yet
[08:36] but I do see 8478 calling sync.RLock
[08:36] and 6159 calling sync.Lock
[08:36] and the former has only 1 object
[08:36] the latter has 4916 objects
[08:36] so yeah, probably it is just that we have a couple different things going on, some of which are on a shared lock, some are on distributed locks
[08:37] I'm guessing the results from a mgo query
[08:37] are blocked on a sync lock
[08:37] unique to that query
[08:37] given SimpleQuery is blocked on 4916 different locks
[08:37] while acquireSocket is 2866 calls blocked on 1 object
[08:37] and the other 5612 calls from Find
[08:37] blocked on that same object
[08:38] jam: 14642 - 4922 = 9720
[08:39] rogpeppe: I think your numbers are actually a bit wrong. 8478 are blocked on the same one, 4 are blocked on one side, and 4916 are blocked on unique objects
[08:39] specifically, acquireSocket and Find both use a session.m mutex to handle global state stuff
[08:40] session.queryConfig AFAICT
[08:40] and s.masterSocket
[08:41] rogpeppe: so if a bunch of things are blocked on a sync.RWMutex.RLock doesn't that mean that something else has the Lock of that object?
[08:41] otherwise the RLocks are mutually compatible, right?
[08:42] jam: yup
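The point jam concedes here is Go's writer preference: once a goroutine is blocked in Lock, new RLock calls queue behind it even while an earlier read lock is still held. A minimal self-contained demonstration (the sleeps exist only to force the interleaving):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var mu sync.RWMutex

	mu.RLock() // reader A holds the read lock

	go func() {
		mu.Lock() // writer queues behind reader A
		fmt.Println("writer")
		mu.Unlock()
	}()
	time.Sleep(50 * time.Millisecond) // let the writer start waiting

	done := make(chan struct{})
	go func() {
		// With a writer pending, this RLock blocks too, even though
		// reader A's lock would otherwise be compatible with it.
		mu.RLock()
		fmt.Println("reader B")
		mu.RUnlock()
		close(done)
	}()
	time.Sleep(50 * time.Millisecond)

	mu.RUnlock() // release A: the writer runs first, then reader B
	<-done
}
```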
[08:44] jam: the RLock seems to be just to get the default query parameters
[08:44] rogpeppe: so in Find yes, in acquireSocket it is to get the masterSocket
[08:44] which then has its own Lock
[08:46] rogpeppe: right, so SimpleQuery has N locks (each call creates a new mutex which will be unlocked when the replyFunc gets the response from mongo)
[08:47] jam: looks like the whole thing is blocked trying to write to the mongo socket
[08:47] rogpeppe: well all of those are blocked waiting for responses
[08:47] they should have already written their requests
[08:48] rogpeppe: *lots* of things are blocked on the global session lock. Which all *should* be using RLocks, but something is taking out the other lock
[08:48] jam: look at the 1-wide trail coming out of mongoSocket.Query
[08:48] jam: leading to tls.Conn.Write
[08:49] jam: and eventually to net.netFD.Write
[08:49] jam: i think it's likely that's the path that actually has the lock
[08:50] jam: hmm, mind you, the write lock should not block the readers
[08:51] rogpeppe: a write lock does block read locks, I think
[08:51] since it is saying "I'm changing the data you want to read"
[08:51] jam: no, sorry, i meant something different there
[08:52] rogpeppe: but the lock in mongoSocket.Query should be the mongoSocket mutex *not* the Session mutex
[08:52] jam: i meant the lock on the io.Writer shouldn't block the io.Readers
[08:52] rogpeppe: so digging around mgo/session.go I don't see many code paths that take the session.m.Lock
[08:52] Login does
[08:52] and Clone
[08:53] jam: and indeed mongoSocket.readLoop is blocked on tls.Conn.Read as expected
[08:53] rogpeppe: sure, I'd expect that, but it doesn't explain why the Session.m is write Locked blocking all the acquireSocket and Find calls
[08:54] jam: there are quite a few Clones around
[08:54] rogpeppe: session.Clone
[08:54] there is only one going via sync.RWMutex.Lock
[08:54] ah, nm, there are 1136 of them
[08:54] probably only 1 of them is trying to execute
[08:55] jam: that doesn't feel quite right - if one was executing, it wouldn't be in Lock, i think
[08:55] maybe
[08:55] jam: except fleetingly
[08:55] so if you look at Clone, and then follow it down to sync.RWMutex.Lock
[08:55] you'll notice that all of them are called with the same object
[08:56] and 1135 of them are stuck in sync.*Mutex.Lock
[08:56] and *1* of them is in sync.runtime_Semacquire
[08:56] well, directly vs indirectly
[08:56] jam: so perhaps that's just spinning fast
[08:57] rogpeppe: yeah, it's possible, it does have 1136 Lock requests to chew through and another 8478 RLock requests concurrently
[08:57] rogpeppe: so it *looks* like our transaction logic is requiring a Clone which requires a write Lock on the session object
[08:58] runTransaction leads down into mgo.Query.Apply which ends up at session.Clone
[08:58] jam: yeah, i don't know what Query.Apply does
[08:59] jam: ah: session.SetMode(Strong, false)
[08:59] jam: that's why it does the Clone
[09:00] rogpeppe: what I'm getting out of this is that the txn logic gums up a lot of the internals :)
[09:00] we have 3307 calls to state.runTransaction
[09:00] mostly from SetAgentVersion calls
[09:00] which *do* write to the DB
[09:00] jam: it's what i expected
[09:00] so a unit agent coming up does a bunch of writes
[09:01] jam: the txn stuff is marvellously inefficient
[09:01] SetPrivateAddress, SetPublicAddress, SetAgentVersion are the big ones
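For reference, the mgo pattern being traced: Query.Apply wants Strong-mode guarantees, so it clones the session, and Session.Clone takes the session's write lock (the session.m mutex named above) while every concurrent Find/acquireSocket wants a read lock on the same mutex. A sketch of that pattern in user code, with error handling elided; the import path is the one juju-core used at the time:

```go
package main

import "labix.org/v2/mgo"

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Clone write-locks the session's internal mutex while it copies
	// the session state; every concurrent query holding a read lock
	// on that mutex contends with it.
	strong := session.Clone()
	defer strong.Close()
	strong.SetMode(mgo.Strong, false)
}
```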
[09:04] rogpeppe: you know what would be interesting, being able to poll runtime.Stack and generate this graph "live"
[09:04] and watch it evolve
[09:05] jam: i've been thinking about this, yes
[09:05] jam: the question is how to relate two graphs in the sequence
[09:05] jam: because the layout may be very different
[09:06] rogpeppe: I *think* you can seed something like dot with existing locations
[09:06] but yeah, it doesn't guarantee they'll stay put (AFAIK)
[09:06] morning fwereade
[09:06] jam: but it would not be hard to make an api call to get the current stack
[09:06] jam, heyhey
[09:07] jam: (or the summary of the current stack, which would incur considerably less network traffic)
[09:07] fwereade: hiya
[09:07] rogpeppe: I would probably just grab the stack and dump it out to a local place/socket/whatever and have that process do the CPU work of collapsing it
[09:08] jam: just make the API call locally
[09:09] jam: or perhaps from another adjacent machine, so the processing doesn't mess too much with progress
[09:09] hey rogpeppe
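Grabbing the raw stack locally, as jam suggests, needs nothing more than runtime.Stack on a timer; the collapsing can then happen in a separate process. A minimal sketch (buffer size and interval are arbitrary):

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	buf := make([]byte, 1<<22) // big enough for thousands of goroutines
	for range time.Tick(time.Second) {
		// true = dump every goroutine, not just the current one.
		// If the buffer is too small the dump is silently truncated.
		n := runtime.Stack(buf, true)
		fmt.Printf("=== snapshot %s ===\n%s\n",
			time.Now().Format(time.RFC3339), buf[:n])
	}
}
```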
[09:15] jam: https://codereview.appspot.com/28270043/ updated to use agent.conf; a bit less awkward now, I think.
[09:33] fwereade, rogpeppe: poke from yesterday :) https://codereview.appspot.com/21940044/ - upgrade-juju + api
[09:33] dimitern: will look after i've finished looking at axw's
[09:33] thanks rogpeppe
[09:40] rogpeppe, cheers
[09:43] axw: could you remind me why we need configurable agent and mongo service names?
[09:43] rogpeppe: local provider, so different users can each have an environment on the same machine
[09:44] axw: ah right.
[09:44] axw: i guess we could just put the uuid in there
[09:44] rogpeppe: tho ideally, they'd go into user-level upstart jobs
[09:44] axw: but that's a bigger change
[09:44] rogpeppe: true, could do that across the board
[09:44] axw: that would be better still
[09:44] an even bigger change ;)
[09:45] axw: (and not possible currently AFAIK)
[09:45] yeah, I don't know the status of that
[09:50] axw: reviewed
[09:51] ta
[09:52] rogpeppe: yeah, I started down the road of len(errors)==1 (actually a switch), but it started getting ugly
[09:53] axw: yeah.
[09:53] the error message will only go into the machine log anyway.
[09:53] axw: true 'nuff
[09:53] * rogpeppe sometimes wonders about a MultiError type
[09:54] heh, I did consider it :)
[09:54] didn't feel like burdening this CL with a one-size-fits-all error list
[09:59] fwereade: hi, don't feel obliged to look unless you are interested - these follow on from our conversations https://codereview.appspot.com/28790043/ https://codereview.appspot.com/28890043/
[10:00] wallyworld, thanks, I *might* get to them before the standup
[10:00] no hurry or obligation
[10:22] * rogpeppe reboots, sigh.
[10:47] mgz: rogpeppe, natefinch standup? https://plus.google.com/hangouts/_/calendar/am9obi5tZWluZWxAY2Fub25pY2FsLmNvbQ.mf0d8r5pfb44m16v9b2n5i29ig
[10:47] wallyworld: ^
[11:17] dimitern: I understand William's concern about an API that goes unused almost immediately, but I think we'd still have a "tell me what tools you can find" API, and then follow that with a maybe-upload, and then a "now set your Agent Version", so we still need the api
[11:19] jam, sounds reasonable
[11:20] jam, except that the actual tools selection for the upgrade will happen in the command still
[11:31] dimitern: https://codereview.appspot.com/21940044/ reviewed
[11:33] jam, ta
[12:52] rogpeppe, fwereade I've got a cl for you to take a quick look at: https://codereview.appspot.com/28970043. (it's better to ask forgiveness than permission)
[12:54] mattyw, LGTM
[12:56] mattyw, actually
[12:56] mattyw, would you copy the addresses slice instead of returning the internal one?
[13:04] fwereade, sorry - I'm not sure I understand
[13:04] mattyw, allocate a new slice of the same type/size, copy the contents, return the fresh one
[13:05] mattyw, lest a client alter the returned slice and mess with the config's internals
[13:05] fwereade, isn't []string a value anyway?
=== gary_poster|away is now known as gary_poster
[13:07] mattyw: a slice is a reference type
[13:07] mattyw, slices are references -- http://play.golang.org/
[13:08] mattyw: the slice itself is like a struct containing a pointer to the underlying array, the length and the capacity
[13:08] rogpeppe, fwereade sorry - just reading go-slices and internals and realised - thanks - I'll sort it out
[13:09] mattyw, no worries, easy to miss
[13:09] rogpeppe, oh yes - of course, I probably have to be reminded of that every week ;)
[13:10] mattyw: that said, i don't care too much in this instance; the config isn't an immutable type. still, probably best to copy to avoid unexpected behaviour
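What fwereade is asking for, in code: the returned slice header would otherwise alias the config's backing array. The type and method names here are illustrative, not the CL's actual ones:

```go
package main

import "fmt"

type config struct {
	addrs []string
}

// Addresses returns a fresh copy, so a caller mutating the result
// can't reach back into the config's internal slice.
func (c *config) Addresses() []string {
	out := make([]string, len(c.addrs))
	copy(out, c.addrs)
	return out
}

func main() {
	c := &config{addrs: []string{"10.0.3.1:17070"}}
	a := c.Addresses()
	a[0] = "clobbered"
	fmt.Println(c.addrs[0]) // still "10.0.3.1:17070"
}
```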
[13:24] trivial code move review anyone? https://codereview.appspot.com/28980043
[13:24] fwereade, jam, dimitern: ^
[13:24] fwereade seems to have reviewed it already
[13:27] rogpeppe: seems good to me as well though
[13:28] rogpeppe, reviewed
[13:32] rogpeppe, I've updated https://codereview.appspot.com/28970043. I'm afraid I didn't notice your comment until I'd pushed the change for fwereade's comment, so the delta from the patch set might be a bit mixed up - but it's hopefully a small enough change not to matter
[13:37] mattyw: reviewed
[13:49] fwereade: looking at AssignToNewMachineOrContainer, it seems to me that the logic isn't quite right when there are several clean machines
[13:49] rogpeppe, oh yes?
[13:49] fwereade: at the end of that function, it seems to assume that because the machine it tried to choose isn't now clean, that there are no clean machines
[13:50] fwereade: but i think it should probably just recurse into AssignToNewMachineOrContainer or something like that
[13:50] fwereade: i.e. look for another clean machine
[13:52] rogpeppe: a trivial MP: https://codereview.appspot.com/29000043
[13:52] adeuring: looking
[13:52] thanks!
[13:53] adeuring: LGTM
[13:53] adeuring: thanks
[13:53] rogpeppe, yeah, we should probably iterate through the clean machines, well spotted
[13:53] rogpeppe: thanks!
[13:53] rogpeppe: https://codereview.appspot.com/24040044/ is in again after handling your review
[13:53] fwereade: i don't think we can just iterate through the clean machines
[13:54] fwereade: because a new clean machine might be created during the iteration
[13:54] rogpeppe: there's still an open question regarding the passing of timeouts from the clients to the apiserver
[13:54] TheMue: looking
[13:54] rogpeppe: thx
[13:56] rogpeppe, I'm not too bothered by that case -- isn't that always a possibility? any time we decide there aren't any clean machines and create a new one, a sufficiently annoying and precise demon could create a new clean machine and say "why didn't you use that? lol u idiots"
[13:56] fwereade: true enough
[13:58] rogpeppe, in practice I think we'd need a whole bunch of clean machines for the iteration to take a long time *and* for all of those to be sniped *and* for a new clean one to be created in that window, which probably isn't *that* wide open in the first place
[13:58] fwereade: i've been sporadically tempted to factor out the unit machine-assigning stuff to addmachine.go too, BTW, but i keep swithering
[13:59] rogpeppe, yeah, AssignToNewMachine is definitely related, the rest is a bit less clear
[14:20] fwereade: another thing i just noticed: the logic inside the "if machineIdSpec != """ test inside juju.Conn.AddUnits really looks as if it should be living inside state - do you concur?
[14:22] rogpeppe, not immediately clear to me, anyway -- expand?
[14:23] rogpeppe, in particular handling lxc:3 inside state seems like it's definitely a bad idea
[14:24] rogpeppe, the "lxc:" ought to be handled by something more like an environ (ok, a pseudo-environ backed by state, but still)
[14:44] TheMue: you've got a review
[14:45] rogpeppe: ta
[14:45] fwereade: sorry, only just saw your remarks
[14:45] fwereade: thinking
[14:47] fwereade: why does state need to be ignorant of the lxc:3 syntax?
[14:48] fwereade: i'm probably just ignorant of the underlying rationale behind that syntax
[14:48] fwereade: and where it sits in the set of juju abstractions
[14:50] rogpeppe, the name we've been using is "placement directives"
[14:51] rogpeppe, provider-specific languages for instance locations
[15:03] fwereade: I'm having conceptual difficulties reconciling lxc:3 with ec2:us-east-1c.
[15:03] fwereade: The former is saying "an lxc container within machine 3" AFAICS and the latter "the us-east-1c region within the ec2 provider".
[15:03] fwereade: The container/contained relationship seems to be different in each one.
[15:03] fwereade: Is there a nice way of thinking about these things?
[15:05] rogpeppe, consider lxc a provider -- something that gives you new instances
[15:06] rogpeppe, the domain it has access to happens to be the set of machines already in the environment
[15:06] * rogpeppe twists his brain
[15:10] fwereade: also, wondering why lxc:3:lxc:2 is equivalent to lxc:3/lxc/2
[15:10] fwereade: or if it is - i may well be missing something
[15:12] rogpeppe, if there's a lxc:3:lxc:2 in there that's just a mistake I think
[15:13] fwereade: i'm looking at this line: mid = strings.Join(specParts[1:], "/")
[15:13] rogpeppe, lxc:3/lxc/2 would be "a new lxc container inside 3/lxc/2"
[15:13] rogpeppe, :
[15:13] rogpeppe, where the lxc provider accepts machine ids as directives
[15:14] fwereade: but it splits the whole thing on ":" then joins it again on "/"
[15:14] fwereade: so AFAICS lxc:3:lxc:2 will be transformed into lxc:3/lxc/2
[15:15] fwereade: perhaps it's not deliberate
[15:15] rogpeppe, I suspect it's just an oversight
[15:15] fwereade: ok
[15:15] rogpeppe, in general the plan is to extract the bit before the first colon and hand the bits after the colon off to the provider unchanged
[15:16] rogpeppe, good catch, thank you
[15:16] fwereade: ah, i think they probably just want to SplitN
[15:16] rogpeppe, +1
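The bug and the fix, side by side: the line rogpeppe quotes splits on every ":" and rejoins with "/", so extra colons in the spec are silently rewritten into a nested machine id, whereas SplitN leaves everything after the first colon for the placement provider to interpret:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	spec := "lxc:3:lxc:2"

	// Current behaviour: lxc:3:lxc:2 becomes machine id "3/lxc/2".
	specParts := strings.Split(spec, ":")
	mid := strings.Join(specParts[1:], "/")
	fmt.Println(mid) // 3/lxc/2

	// Suggested fix: split once, hand the rest off unchanged.
	parts := strings.SplitN(spec, ":", 2)
	fmt.Println(parts[0], parts[1]) // lxc 3:lxc:2
}
```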
[15:31] wow, I just noticed mongo's replset flag is actually replSet (capital S).... who uses camelCase in their command line flags? :/
[15:31] rogpeppe, is there a way I can get the name of the current envname for passing to juju.NewAPIClientFromName?
[15:33] fwereade: still looking at Conn.AddUnits, getting closer to why I thought some of it at least should live in state, what's to stop the newly added machine (the AddMachineWithConstraints line) being grabbed by some random other service with a container=lxc constraint?
[15:33] mattyw: just pass ""
[15:34] rogpeppe, that uses the default - all sorts of craziness happens if I use juju switch
[15:34] mattyw: ah, i see
[15:34] mattyw: what's the context?
[15:34] rogpeppe, nothing, I've always hated all that stuff :(
[15:35] rogpeppe, I think I'm with you on the non-lxc: bits being better off as single txns
[15:35] fwereade: i'd hoped to get away without touching this stuff
[15:36] rogpeppe, sadly the state of state is such that changes almost invariably involve fixes as well
[15:36] mattyw: i think we probably want to factor out the juju switch logic somewhere
[15:37] fwereade: most of the dubious stuff is relatively recent :-(
[15:37] rogpeppe, but I don't think that particular stuff is in scope -- pre-existing bugs shouldn't pull you out of your way unless fixing them actually helps you
[15:38] rogpeppe, the add-unit/add-machine stuff was always a problem, and I'm sure the machine-sniping problem always existed too
[15:38] rogpeppe, the current arrangement of the code maybe brings the problems into clearer focus
[15:39] fwereade: the --to and containers stuff has added a great deal of difficult-to-reason-about code
[15:40] fwereade: at least, i find it hard to reason about :-)
[15:40] rogpeppe, but the real problem is that we've never really had a coherent state model for machines, and so we built on a shaky foundation without firming it up underneath us
[15:40] rogpeppe, yay pressure
[15:40] fwereade: yeah
[15:41] mattyw: i think we probably want a DefaultEnvironName function inside juju
[15:41] mattyw: inside the juju package, that is
[15:42] rogpeppe, mattyw: hmm, I don't think we should be talking about environs' names at all inside state -- except where we really cannot help it
[15:43] rogpeppe, mattyw: and those should always be for historical reasons
[15:43] fwereade: so how do you think an external command that calls NewAPIClientFromName should work?
[15:44] fwereade: given that it wants to respect the usual conventions for finding an environment
[15:44] fwereade: JUJU_ENV, switch value
[15:44] rogpeppe, mattyw: ok, I misread somehow, because I am an idiot
[15:45] rogpeppe, mattyw: I'd been thinking of Conn as something server-side
[15:45] rogpeppe, mattyw: and think of juju as primarily there for the Conn
[15:46] rogpeppe, mattyw: client-side that's fine... I guess the juju package itself should just be split up at some point
[15:47] fwereade: what part of the juju package isn't intended for client side?
[15:47] rogpeppe, Conn?
[15:48] fwereade: that's definitely client-side, isn't it?
[15:48] fwereade: ah, except that it's used by the API these days
[15:48] rogpeppe, that state+environ combo is most certainly not ;p
[15:48] rogpeppe, once, indeed, it was
[15:49] rogpeppe, but, eh, different bits evolve at different rates and things end up in surprising places
[15:49] fwereade: you're right - it should be again
[15:49] fwereade: the client api has really taken on the role of juju now
[15:49] rogpeppe, yeah
[15:50] fwereade: the state-connection stuff should probably be moved somewhere under apiserver
[15:51] * rogpeppe grabs some lunch
=== Beret- is now known as Beret
[16:02] rogpeppe, fwereade for now it looks like I can use cmd.ReadCurrentEnvironment() to solve my issue
=== adam_g is now known as adam_g_afk
=== jamespag` is now known as jamespage
[16:17] mattyw: that doesn't respect $JUJU_ENV, unfortunately
[16:17] rogpeppe, I don't seem to have JUJU_ENV set on my machine - where does that come from?
[16:25] mattyw: you don't need it set, but it's an optional thing
[16:25] mattyw: it overrides juju switch
[16:31] rogpeppe, but I can do if os.Getenv("JUJU_ENV") == "" { envName := cmd.ReadCurrentEnv() } ?
[16:41] rogpeppe, I guess what you're saying is all of that logic should be put somewhere in juju?
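A sketch of the DefaultEnvironName helper rogpeppe is proposing, with the precedence described above: $JUJU_ENV wins, otherwise fall back to the `juju switch` value. Note that mattyw's one-liner declares envName inside the if block, where it would go out of scope immediately. readCurrentEnvironment here is a stand-in for juju-core's cmd.ReadCurrentEnvironment, not real API:

```go
package juju

import "os"

// DefaultEnvironName returns the environment name an external
// command should use: $JUJU_ENV if set, else the `juju switch` value.
func DefaultEnvironName() string {
	if env := os.Getenv("JUJU_ENV"); env != "" {
		return env
	}
	return readCurrentEnvironment()
}

// readCurrentEnvironment stands in for reading the name recorded
// by `juju switch` (cmd.ReadCurrentEnvironment in juju-core).
func readCurrentEnvironment() string {
	return ""
}
```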
[17:09] rogpeppe: so I'm trying to write tests and I realized that we have testing.mgo ... but it's all set up to be a single instance of mongo. Seems like it should be refactored to allow N instances... probably by just making it all methods on a type rather than package level functions. What do you think?
[17:31] natefinch: i was wondering when you'd come up against that issue...
[17:32] natefinch: (sorry for slow response - my irc client is declining to run my notification script again)
[17:32] rogpeppe: no big deal. Was getting lunch anyway
[17:32] natefinch: i agree, although to save churn perhaps we should keep the existing entry points the same, but as a thin layer on top of the multiple-mongo layer
[17:33] natefinch: perhaps a testing/mongo package?
[17:40] rogpeppe: yeah, that's doable, though I could refactor the current file without modifying the existing entry points. Either way is fine.
[17:45] natefinch: go with whatever you feel like.
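A sketch of the refactor natefinch describes: the package-level helpers become methods on an instance type so a test can run several mongods, and the old entry points survive as a thin layer over one shared default, per rogpeppe's suggestion. All names are illustrative, not juju-core's actual testing API:

```go
package mongotest

import (
	"fmt"
	"os/exec"
)

// Instance is one independently started test mongod.
type Instance struct {
	Port    int
	DataDir string
	cmd     *exec.Cmd
}

func (inst *Instance) Start() error {
	inst.cmd = exec.Command("mongod",
		"--dbpath", inst.DataDir,
		"--port", fmt.Sprint(inst.Port),
		"--nojournal", "--noprealloc")
	return inst.cmd.Start()
}

func (inst *Instance) Stop() {
	if inst.cmd != nil && inst.cmd.Process != nil {
		inst.cmd.Process.Kill()
		inst.cmd.Wait()
	}
}

// defaultInstance keeps the existing package-level entry points
// working as a thin layer over one shared instance.
var defaultInstance = &Instance{Port: 37017, DataDir: "/tmp/juju-test-mgo"}

func MgoStart() error { return defaultInstance.Start() }
func MgoStop()        { defaultInstance.Stop() }
```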
[17:46] natefinch, fwereade: trivial code review BTW: https://codereview.appspot.com/29130043/
[17:46] rogpeppe: I can get it. I have been slacking in the review dept
[17:47] rogpeppe: ahh yeah, this is what you guys were talking about earlier
[17:48] natefinch: yeah
[17:52] rogpeppe: I wish that machine id parsing code were broken out into its own function
[17:52] rogpeppe: would be a lot easier to test in isolation
[17:53] natefinch: it's not really machine id parsing
[17:53] natefinch: but yes, i agree
[17:53] rogpeppe: right, sorry, bad choice of words
[17:53] natefinch: but i'm not fixing everything now
[17:55] rogpeppe: now is the best time to fix everything ;) Well, not everything. But little things, sure. Anyway, LGTM.
[18:09] any core devs joining in the juju sessions at UDS?
[18:09] http://summit.ubuntu.com/uds-1311/meeting/22011/t-cloud-juju-destroy-machines-by-default/
[18:12] jcastro - I can join
[18:12] https://plus.google.com/hangouts/_/7acpimg327ft0ld2qto6rnffgk?authuser=0&hl=en
=== tasdomas` is now known as tasdomas
[18:43] jcastro: sorry, if i'd seen your msg above, i'd have joined
[18:44] jcastro: to my shame, i didn't know about this at all
[18:52] rogpeppe, ok I can post a reminder
[18:53] jcastro: thanks... I didn't know about it either.
[18:59] There is a juju-core update for trusty tomorrow that we'll need some of you guys for
[19:00] mainly a "what's coming in 14.04" kind of thing
[19:00] jcastro: interesting discussion
[19:01] jcastro: i think my view would be: if we created the machine automatically, we should destroy it automatically.
[19:01] * jcastro nods
[19:01] jcastro: but if you used add-machine explicitly, i'm not sure we should
[19:04] rogpeppe: honestly, add-machine doesn't really fit the way juju is supposed to work anyway.
[19:05] natefinch: but that's the way it does work, and we have to, um, work with that
[19:05] hah, we have add-machine?
[19:05] never needed it
[19:06] rogpeppe: commands can be removed as easily as they're added. More easily, even. But even if not... I'd hate to see add-machine causing more complexity percolating through the system.
[19:06] jcastro: exactly. There's very little need for it
[19:07] yeah, if I need a new machine it's `juju deploy ubuntu`
[19:26] rogpeppe, you still there?
[19:28] mattyw: i am
[19:29] mattyw: (lucky i saw your msg - my irc client has stopped notifying me again dammit)
[19:29] mattyw: what's up bro?
[19:29] rogpeppe, can I be a pain and ask for another look at https://codereview.appspot.com/28970043/?
[19:29] mattyw: np
[19:29] rogpeppe, thanks - you sure you didn't turn off notifications ;)
[19:30] mattyw: in general if i say LGTM i mean you can make the change and submit
[19:30] mattyw: yeah, unfortunately so
[19:31] mattyw: i don't use the magic acronym if i don't mean it :-)
[19:31] mattyw: aw
[19:31] mattyw: LGTM says "i'm happy"
[19:31] rogpeppe, you cheated though - you said LGTM with a tiny simplification
[19:32] wanted to make sure you were happy with the simplification
[19:32] mattyw: given that it's exactly what i suggested (apart from that mistake i made), i should hope so :-)
[19:32] rogpeppe, sweet
[19:33] I don't have permission to mark it approved - I guess someone else will pick that up
[19:33] mattyw: i'll approve it
[19:33] rogpeppe, thanks - another question then while I've distracted you
[19:33] mattyw: done
[19:33] thanks
[19:33] mattyw: go on
[19:35] mattyw: make it quick - i just saw the car arrive back and i'll need to go and have supper...
[19:35] on 1.16 - when I'm "WatchingAll" through the api and I call add-unit, i seem to get updates about all the state changes for the new unit - then after the new unit gets to started I see another update about the existing unit
[19:35] rogpeppe, you can go
[19:35] we can talk about this tomorrow
[19:35] mattyw: ok, ping me
[19:35] and I'll work out some proper details and some pastes
[19:35] mattyw: cool.
[19:35] rogpeppe, enjoy supper
[19:35] mattyw: see ya tomoz
[19:35] g'night all
[19:36] night
[19:52] morning
[20:01] thumper: morning
[20:02] hi natefinch
[20:02] reading your experiences with high res desktops :)
[20:02] thumper: haha... the answer is "you're gonna have to wait".... which I think is pretty bad.
[20:02] well...
[20:02] hangout?
[20:03] there are a few things you could try
[20:03] natefinch: https://plus.google.com/hangouts/_/76cpi5uan67mdgbaass8rvt83o?hl=en
[22:08] gah...
[22:08] I need to fix syslog for the local provider
[22:08] * thumper pokes rsyslog with a stick
[22:10] * thumper cringes inside
[22:13] thumper: please do!
[22:13] thumper: actually, that's something i wanted to bring up
[22:13] :)
[22:14] thumper: because the local provider pokes into lxc rootfs, but with lvm the container root is not under rootfs
[22:14] so that's a blocker for me
[22:15] * thumper nods
[22:15] I've got kvm machines working with my local provider
[22:15] but it has no mounting like lxc
[22:15] so I need to get the log files out
[22:15] however...
[22:15] I'm trying to work out how we can have a general juju local rsyslog conf file
[22:16] so we don't block the non-sudo bootstrap that may happen later
[22:16] since we need sudo to add the rsyslog conf and restart the rsyslog service
[22:16] although I guess you could do the horrible bit of encoding sudo into the command running
[22:16] not sure how that would work though
[22:16] but anyway,
[22:17] I suppose I could fix the syslog bit independently
[22:17] ugh...
[22:17] multiple local environments would be a pain
[22:17] I have that now
[22:17] I have "local" and "local-kvm"
[22:17] both running
[22:17] but we'd need to filter syslog messages nicely
[22:17] can do this with some syslog tags
[22:18] but needing multiple filenames...
[22:18] etc
[22:18] the problem there is the line: UDPServerRun 514
[22:18] so perhaps we'd have to move the rsyslog port into config...
[22:18] ick
[22:19] thumper: multiple rsyslog processes running under the user with different ports? inject the port config via cloudinit?
[22:21] sure, that isn't the problem
[22:21] and I guess, this is only a value that the user needs to change if they want multiple local jujus running
[22:22] but it is just the whole "need to put it into config" that makes me go ugh
[22:22] but easy enough to add
[22:23] I am also very conscious that the logging is a single point of failure
[22:24] * thumper considers briefly making rsyslog put logging messages into a real db that we can spread around...
[22:28] * thumper sighs
=== thumper is now known as thumper-afk
=== bradm_ is now known as bradm
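Moving the rsyslog port into config, as thumper suggests, amounts to rendering the conf from a template instead of shipping a fixed file, and the tag-based filtering he mentions lets each local environment route to its own log. A hedged sketch -- the directives follow legacy rsyslog syntax, but the template text and names are illustrative, not juju's actual conf:

```go
package main

import (
	"os"
	"text/template"
)

// confTemplate uses legacy rsyslog directives: imudp listens on a
// per-environment UDP port, a syslogtag filter routes matching
// messages to that environment's all-machines.log, and "& ~"
// discards them afterwards.
var confTemplate = template.Must(template.New("rsyslog").Parse(
	`$ModLoad imudp
$UDPServerRun {{.Port}}
:syslogtag, startswith, "juju-{{.Namespace}}-" {{.LogDir}}/all-machines.log
& ~
`))

type conf struct {
	Port      int
	Namespace string
	LogDir    string
}

func main() {
	// One conf per local environment, each with its own port and tag.
	confTemplate.Execute(os.Stdout, conf{5514, "local", "/var/log/juju-local"})
	confTemplate.Execute(os.Stdout, conf{5515, "local-kvm", "/var/log/juju-local-kvm"})
}
```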