[00:01] thumper: i forwarded an email
[00:01] but you will need a previous one where the creds were sent
[00:01] i think you should have received both of those emails
[00:01] probably
[00:02] probably ignored them too
[00:02] lol
[00:02] so, i just source the creds and can then ssh in to that ip address
[00:02] then you can bzr pull the new rev as needed for whatever source tree
[00:03] the bot uses a cron job to wake up every minute
[00:15] is the bot still alive
[00:15] has canonistack eaten that machine?
[00:19] davecheney: the bot is still there
[00:19] it failed to merge my branch
[00:19] because I use an updated crypto
[00:21] lunch break means going to the supermarket with kids - always fun...
[01:01] hey thumper, wallyworld_ - welcome back
[01:02] hi. wish i could say it's good to be back :-)
[01:02] you were back last week?
[01:02] yup
[01:02] would have been quiet
[01:02] yeah it was
[01:03] fortunately my stuff took long enough that it didn't need reviews yet :)
[01:03] :-)
[01:04] soooo hot here today. may need to jump in the pool at some point to cool off. we had record-breaking temperatures over the weekend. 44 degrees
[01:04] ouch
[01:04] almost as bad today
[01:04] gonna be 39 here
[01:04] I thought that was bad
[01:04] it is 18°C here
[01:04] we get it worse cause the hot air blows from west to east
[01:05] wish it were 18 here
[01:05] I could do 18
[01:10] thumper: are you waiting for another review on the ssh stuff?
[01:10] axw: nah, I think it is all good
[01:10] I intend to use your GenerateKey function in an MP of mine
[01:10] just need to hack the bot
[01:10] cool
[01:11] bot's broken?
[01:13] just dependencies
[01:16] wallyworld_: only record-breaking temps if you believe the shite in the paper
[01:16] bigjools: you saying the paper makes up the recorded temps?
[01:17] it misquotes
[01:17] * bigjools finds a reference
[01:17] you need to take off your tin foil hat
[01:17] it has cooked your brain with the heat
[01:17] not wearing one, just papers like to sell papers so they dramatise
[01:17] talking about calamitous weather is known to sell more
[01:18] well the temp on sat was about 44, right?
[01:18] was 46 here
[01:18] so what's the issue?
[01:18] that's what the paper reported
[01:19] "record breaking"
[01:19] well it was
[01:19] nope
[01:19] when was it hotter
[01:19] ...
[01:19] when did measurements start?
[01:19] several years ago it was 41 or 42
[01:19] 1800s
[01:20] don't tell me you are going to be a pedant and say it may have been hotter 600000000000 years ago
[01:21] sigh
[01:24] axw: I've logged into the bot and updated go.crypto
[01:24] axw: attempting to land again
[01:30] thumper: thanks
[01:30] thumper: how come dependencies.tsv didn't do it?
[01:31] or did you not update that file?
[01:31] axw: apparently the bot doesn't update based on that yet
[01:31] so I'm told
[01:31] oh
[01:31] I assumed it did
[01:31] just the LP recipe I guess
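An aside on the landing-bot workflow described above, condensed into commands. Every name here is hypothetical - the real paths and creds live in the forwarded emails and are not reproduced:

    # crontab on the bot machine: wake the lander every minute
    * * * * * /home/tarmac/bin/run-lander.sh >> /home/tarmac/logs/cron.log 2>&1

    # manual access, as described: source the creds, ssh in, update a tree
    source ~/creds-from-email.rc
    ssh ubuntu@<bot-ip>
    bzr pull -r <rev> lp:juju-core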
[01:37] wallyworld_: where is the tarmac log on that machine?
[01:38] thumper: ~tarmac/logs i think
[01:39] wow, doesn't log much useful
[01:39] can see it is merging my code though
[01:39] and done
[01:39] \o/
[04:40] * thumper is done for the day
[04:40] see y'all tomorrow
=== mthaddon` is now known as mthaddon
[08:15] mornin' all
[08:19] heya and happy new year
[08:35] hi rogpeppe and TheMue
[08:35] jam, TheMue: hiya
[08:36] and happy new year to you too
=== ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Bugs: 2 Critical, 205 High - https://bugs.launchpad.net/juju-core/
[10:02] morning!
[10:04] morning mgz, just finishing up my 1:1 with rogpeppe, will chat with you in a sec
[10:04] jam, sure
[10:12] mgz: I'm ready
[10:13] jam: mumble?
[10:13] certainly, I'm there
[10:13] almost
[10:13] well, I had been there, maybe it gets angry if you sit in an empty channel
[10:43] see you guys in the standup in a sec, just grabbing a coffee
[11:17] mgz: any idea how to recursively delete s3 buckets? The web interface requires you to empty buckets before you can delete them, but we have 50 "test" buckets that are left over
[11:25] euca2ools should have something
[11:27] hm, or maybe not, maybe you need the s3 command
[11:28] mgz: there is s3nukem http://robertlathanh.com/2010/07/s3nukem-delete-large-amazon-s3-buckets/
[11:29] funny name
[11:29] mgz: adapted from s3nuke, but in a nice way :)
[11:32] natefinch: ha! it's another transient error - i retried and it succeeds after 2 seconds of returning that error...
[11:33] jam: s3cmd supports recursive removal
[11:34] jam: i seem to remember writing something to do concurrent recursive removal, but maybe i just piped the output of s3cmd into my concurrent xargs program
=== gary_poster|away is now known as gary_poster
[13:59] jam: should juju 1.17.0 status work correctly against a 1.16.3 environment?
[14:04] rogpeppe: it should fall back to reading the db
[14:05] mgz: it doesn't appear to be working
[14:05] rogpeppe: so, if the db is compatible (we make sure it is...) it should be okay?
[14:05] rogpeppe: can you tell if it's trying the api and failing... then?
[14:05] mgz: i just got a report that showed this status: http://hastebin.com/hudipomama.txt
[14:06] that pastebin is much better for my silly detached irc thing
[14:06] shame it needs js...
[14:06] well, that's pretty fascinating
[14:07] mgz: yeah, that's what i thought
[14:07] I assume you don't really have lots of machines? I wonder what all the ds are
[14:07] mgz: that's not my status, it's from a real installation
[14:07] mgz: they were using the juju-devel ppa, not juju stable
[14:07] ah, so maybe the machines are semi-accurate
[14:08] mgz: i think so
[14:08] but the state connection apparently didn't work at all
[14:08] just the get-machines-from-the-ec2-api bit
[14:09] when I hack-tested this, when we added the fallback, this all worked
[14:09] hmm
[14:09] but of course, that's not what landed, and dimiter may well have not tested that, I know I didn't
[14:11] rogpeppe: can you get them to file a bug?
[14:11] mgz: will do
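On the s3 cleanup discussed above: s3cmd's recursive removal looks like the following (the bucket name is invented); s3nukem does the same job concurrently, which matters for very large buckets:

    s3cmd del --recursive s3://juju-test-bucket-17   # empty the bucket
    s3cmd rb s3://juju-test-bucket-17                # then remove it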
[14:13] mgz: just to check: there should be no probs upgrading from 1.16.3 to 1.16.5, right?
[14:15] I have not heard of any, and certainly that's what our general minor-version promise is
[14:15] the bump past 1.16.3 is rougher than normal though
[14:15] we added various things that wouldn't normally make a minor release, but not anything that should mess with upgrades really
[14:20] looks like dependencies.tsv is invalid again http://162.213.35.54:8080/job/prepare-new-version/745/console
=== sidnei` is now known as sidnei
[14:23] sinzui: it does indeed
[14:23] I'll land a fix
[14:24] blame is tim, r2182
[16:42] natefinch: i'm never seeing the instances get out of StartupState. looks like getStatus is returning the state in "state", not "myState".
[16:42] natefinch: it'd be good to have a test for that in the replicaset package
[16:44] natefinch: ah, it's actually as documented
[16:53] rogpeppe: which is documented?
[16:54] natefinch: the fact that the state is returned in "state", not "myState"
[16:54] rogpeppe: weird, wonder where I got "myState" from
[16:54] natefinch: i've fixed that, but run straight into another problem - that the two secondaries never move out of the "UNKNOWN" state
[16:54] natefinch: you didn't read far enough down the page :-)
[16:55] rogpeppe: those documentation pages are pretty long at times ;)
[16:55] natefinch: myState is the field used for the state of the server you're talking to
[16:55] natefinch: it seems my secondaries are never managing to connect to the primary
[16:55] rogpeppe: hmm weird
[16:55] natefinch: it might be something to do with the localhost issue, i suppose
[16:56] natefinch: whatever that issue might be...
[16:56] rogpeppe: arg, possibly. sigh.
[16:57] rogpeppe: I should have tested the myState vs. state thing, but I had been intentionally trying not to test mongo itself, assuming that if I gave it commands it accepts, it would do the right thing.
[16:58] natefinch: it did :-)
[16:58] natefinch: unfortunately StartupState == 0
[17:00] rogpeppe: mgz: you guys around?
[17:00] mramm2: i am
[17:01] sabdfl is looking to set up a sprint at bluefin end of january
[17:01] we are going to have 5 MAAS clusters in a box ready for testing then
[17:02] and will be developing a set of demos that the sales engineers can use for deploying openstack
[17:02] and then solutions on top of openstack using those maas clusters
[17:02] and sabdfl would like juju core representation
[17:03] mramm2: bluefin?
[17:03] mramm: bluefin?
[17:03] mramm: I'm here too
[17:03] 12:01 mramm has left IRC (Ping timeout: 240 seconds)
[17:03] 12:01 mramm2: we are going to have 5 MAAS clusters in a box ready for testing then
[17:03] 12:02 mramm2: and will be developing a set of demos that the sales engineers can use for deploying openstack
[17:03] 12:02 mramm2: and then solutions on top of openstack using those maas clusters
[17:03] 12:02 mramm2: and sabdfl would like juju core representation
[17:03] yep
[17:03] bluefin
[17:04] mramm: sorry, what's bluefin?
[17:04] representation at bluefin, I assume
[17:04] rather than in a hangout or something
[17:04] yep
[17:04] bluefin's a company?
[17:04] rogpeppe, our head office
[17:04] it's the location of our offices
[17:04] rogpeppe has first refusal, given I've been down recently, but it's an easy trip for me
[17:04] ah!
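An aside on the getStatus confusion above: replSetGetStatus reports the state of the server you are talking to in "myState", and a per-member "state" inside the "members" array. A minimal mgo sketch (the address, timeout, and set layout are invented):

    package main

    import (
        "fmt"
        "log"
        "time"

        "labix.org/v2/mgo"
    )

    func main() {
        // Dial one member directly so we inspect that member,
        // not whatever primary the driver would pick.
        session, err := mgo.DialWithInfo(&mgo.DialInfo{
            Addrs:   []string{"localhost:37017"}, // hypothetical member address
            Direct:  true,
            Timeout: 10 * time.Second,
        })
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        // Allow reads from a member that is not the primary.
        session.SetMode(mgo.Monotonic, true)

        var status struct {
            MyState int `bson:"myState"` // state of the member we dialed
            Members []struct {
                Name  string `bson:"name"`
                State int    `bson:"state"` // 0=STARTUP, 1=PRIMARY, 2=SECONDARY, 5=STARTUP2, 6=UNKNOWN
            } `bson:"members"`
        }
        if err := session.Run("replSetGetStatus", &status); err != nil {
            log.Fatal(err)
        }
        fmt.Println("myState:", status.MyState)
        for _, m := range status.Members {
            fmt.Println(m.Name, "state:", m.State)
        }
    }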
[17:05] rogpeppe: worth going if you've not been before
[17:05] rogpeppe is my preferred choice too
=== marcoceppi_ is now known as marcoceppi
[17:05] i'd be happy to do it
[17:05] unless you both want to go ;)
[17:05] but mgz has lots more MAAS experience than me
[17:05] rogpeppe: I'll add you to the list
[17:06] rogpeppe: that is true
[17:06] (i have never used MAAS)
[17:06] nah, I have some specific maas knowledge that's probably not relevant here
[17:06] but we should have a MAAS team member there too
[17:07] my openstack-fu is similarly limited
[17:07] but i'm fine in juju-core-land :-)
[17:08] mramm: do you know the actual dates?
[17:14] i'd quite like the opportunity to spend a bit more time around openstack stuff, actually
[17:21] natefinch: found the problem with the UNKNOWN thing - it takes 10s before the secondaries connect
[17:21] natefinch: my problem now is that the secondaries never seem to get out of the STARTUP2 state
[17:21] rogpeppe: ahh hmm ok.
[17:22] rogpeppe: we're not exactly speeding up the tests here ;)
[17:22] natefinch: indeed not
[17:26] natefinch: ah, found the problem, i think
[17:26] 2014/01/06 17:26:03 mongod:57236: Mon Jan 6 17:26:03.817 [conn4] key file must be used to log in with internal user
[17:28] rogpeppe: hmm... I didn't think you needed any authentication if you're connecting from localhost
[17:28] natefinch: i think it must be different for peers
[17:36] natefinch: bingo!
[17:36] rogpeppe: what did you find?
[17:37] natefinch: that was the reason, and now that i've done that, i've finally seen a [PRIMARY SECONDARY SECONDARY] status...
[17:37] well good
[17:39] natefinch: it took about 30s from starting to wait for sync, to fully synced (that's with an empty db), and about 42 seconds from no-servers-running
[17:39] rogpeppe: ug
[17:39] natefinch: this is not the fastest sync in the world :-)
[17:40] rogpeppe: 30s to sync no data is pretty bad ;)
[17:40] rogpeppe: on localhost no less
[17:40] natefinch: for the record, here's the log of what was going on during that time: http://paste.ubuntu.com/6704366/
[17:43] rogpeppe: wonder if this has anything to do with it: "noprealloc may hurt performance in many applications" honestly, I wouldn't think anything could hurt performance that badly with empty DBs on localhost
[17:44] natefinch: i agree - i don't think noprealloc could harm anything here
[17:44] natefinch: no data on an SSD
[17:53] rogpeppe: does it matter if it's an SSD if no data is there? (there's a profound question ;)
[17:54] natefinch: well, it might be doing *something* in that 30 seconds....
[17:54] natefinch: i'm thinking that the 30s might be due to the 10s heartbeat
[17:54] natefinch: 3 heartbeats for the status to propagate around the system
=== allenap_ is now known as allenap
[18:20] niemeyer: happy new year!
[18:21] niemeyer: i'd like to confirm something about mgo, if you have a moment at some point
[18:22] rogpeppe: Heya
[18:22] rogpeppe: Thanks, and happy new year :)
[18:22] rogpeppe: Shoot
[18:23] niemeyer: if i've performed an operation on a Session, is it possible that the session changes to use a different server at some point in the future (without doing anything explicit, such as SetMode)?
[18:23] niemeyer: from my current experiments, it seems like that doesn't happen, but i'd like to be sure
[18:23] rogpeppe: Depends on the current mode of the session.. if on Strong mode, no
[18:24] (Strong is the default, FWIW)
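Back on the key-file discovery earlier: replica-set members authenticate to one another with a shared key file even when localhost clients need no auth, so each test mongod would be started with something like this (set name, port, and paths are invented):

    mongod --replSet juju --port 37017 --dbpath /tmp/db0 \
        --keyFile /tmp/shared-secret --noprealloc --smallfiles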
[18:25] niemeyer: so if it's in Monotonic, it may switch without any explicit intervention?
[18:26] rogpeppe: If it's Monotonic, it will switch deterministically on writes
[18:26] niemeyer: currently, i seem to get an EOF error even in Monotonic mode, but perhaps that might resolve itself if I keep trying
[18:26] rogpeppe: From a slave to the master
[18:26] rogpeppe: (that's the whole point of Monotonic)
[18:26] niemeyer: ah, but not if a primary goes down and is replaced by another one?
[18:27] rogpeppe: If the primary was also the single server alive, then surely it'll break as suggested
[18:27] rogpeppe: After it fails, it will not switch
[18:27] niemeyer: in this case there were two secondaries
[18:27] rogpeppe: Or even reconnect without an ack from the code
[18:27] niemeyer: but that's good to know
[18:28] rogpeppe: In that case the behavior depends on whether the session was already hooked to the master
[18:28] niemeyer: my current plan is to make the single-instance workers (provisioner, firewaller, etc) move with the mongo master
[18:28] rogpeppe: If the session was hooked to the master, and the connection drops due to a server shutdown, you'll get an EOF
[18:28] niemeyer: that's the behaviour I want to rely on
[18:29] rogpeppe: That EOF will not go away until: a) The session is Closed, discarded, and re-created b) The session is Refresh'ed
[18:29] rogpeppe: You want to rely on the fact the error doesn't go away?
[18:29] niemeyer: if that happens, we'll redial and run the workers if we are on the same machine as the mongo primary
[18:29] rogpeppe: You don't have to redial.. just Refresh the session at the control point
[18:30] niemeyer: i want to rely on the fact that if the primary gets re-elected, we're guaranteed to get an error, so that we know for sure that we've not got two "single-instance" workers running at the same time
[18:31] niemeyer: that's useful to know, thanks
[18:31] rogpeppe: You'll not get an error if a primary gets re-elected and the session was not hooked to it
[18:31] niemeyer: for those workers' connection to the State, i'd use a direct connection in strong mode
[18:31] rogpeppe: E.g. a Monotonic session that got no writes will be hooked to a slave, assuming one is available
[18:32] rogpeppe: That session has no reason to error out if the primary shifts
[18:32] niemeyer: it will get an error though, right?
[18:33] rogpeppe: Hmm.. these two last sentences contradict each other.. :)
[18:33] niemeyer: so it'll be difficult to avoid (say) an API call failing because the primary shifts
[18:33] rogpeppe: It will NOT error out if the primary shifts, because it was connected to a slave.. it doesn't care about the fact the primary went through trouble
[18:33] niemeyer: ah, but if it happens to be connected to the primary, then it will error out, right?
[18:33] rogpeppe: It really depends on how you're structuring the code.. let me give an example
[18:34] niemeyer: so i guess it's down to chance
[18:34] rogpeppe: Let's say you have an http server that creates a new session to handle each request
[18:34] rogpeppe: Now, the primary just broke down, but there were no requests being served
[18:34] rogpeppe: Then, the primary gets re-elected..
[18:35] rogpeppe: The driver picks up the new master, resyncs, and keeps things up
[18:35] rogpeppe: Finally, the http server gets a new request to handle
[18:35] rogpeppe: and runs Copy on a master session, creating a new session to handle this specific request
[18:35] rogpeppe: This new request will not error out..
[18:36] rogpeppe: As it was a fresh session to a working server
[18:36] niemeyer: ok
[18:36] rogpeppe: It's not down to chance, any more than a TCP connection breaking due to a temporary outage on the packet path is down to chance
[18:36] niemeyer: (that doesn't apply easily to our case, because we've got very long-lived http requests)
[18:37] rogpeppe: If there is a long-living session that is active (was used, not refreshed), the error will surface
[18:37] rogpeppe: It doesn't pretend that things are sane when they're not
[18:38] niemeyer: ok
[18:38] rogpeppe: No need for the direct mode for that, btw
[18:40] niemeyer: apart from the fact that Refresh presumably uses more up-to-date info for the server addresses, is there a significant difference between using Dial and using Refresh?
[18:41] rogpeppe: Oh yeah
[18:41] rogpeppe: Dial creates an entire new cluster
[18:41] rogpeppe: Syncing up with every server of the replica set to figure out their state and sanity
[18:41] rogpeppe: Refresh is very lightweight
[18:41] niemeyer: ok, that's good to know
[18:42] rogpeppe: It just cleans the session state and moves sockets back to the pool, assuming they're in a sane state
[18:42] niemeyer: can I rely on the fact that err==io.EOF implies that the server has gone down and a Refresh could clean it up?
[18:42] rogpeppe: No.. EOF means socket-level EOF
[18:42] rogpeppe: Whatever the reason why that happened
[18:43] niemeyer: ah, ok. is there a decent way of telling that the server has gone away and i need to refresh?
[18:43] rogpeppe: But you can always refresh on errors
[18:43] rogpeppe: and once/if the server works again, the new session will behave properly again
[18:43] rogpeppe: IOW, no reason to redial
[18:44] rogpeppe: In fact, you can always refresh at the control point
[18:44] rogpeppe: With errors or not
[18:44] niemeyer: after every API call?
[18:45] rogpeppe: No.. by control point I mean the place where the code falls back to on every iteration
[18:45] rogpeppe: For example, if there was a loop, it might be refreshed on every iteration of the loop to get rid of already acknowledged errors
[18:45] rogpeppe: If it's an http request, it might refresh before every Accept (although that doesn't quite fit, since there would be multiple requests on any realistic server)
[18:46] rogpeppe: Rarely in an application would it be okay to assume errors didn't happen, mid-logic
[18:46] niemeyer: i'm not quite sure how that would map to our code structure
[18:47] niemeyer: the main loop is in the rpc package, reading requests from the websocket and invoking methods
[18:48] rogpeppe: What happens if an error happens in the middle of a request from the client being handled?
[18:48] niemeyer: the error gets returned to the client
[18:48] rogpeppe: Okay, and I assume each client has its own session to the server?
[18:49] niemeyer: yes
[18:49] rogpeppe: Okay, so you might Refresh before starting to handle a new RPC, for example
[18:49] niemeyer: it's a bit of a bad spot for us currently - we don't ever redial or refresh the connection for API requests
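The "refresh at the control point" idea, sketched in Go against a hypothetical per-client loop (request, handle, and the channel layout are invented, not juju-core's actual rpc package):

    package apiserver

    import "labix.org/v2/mgo"

    type request struct {
        reply chan error // where the outcome goes back to the client
    }

    // serveLoop handles one client's calls sequentially. Refreshing at
    // the top of each iteration is memory-only (per the discussion
    // above), and discards sockets that died during an operation whose
    // error has already been reported to the client.
    func serveLoop(session *mgo.Session, requests <-chan request) {
        for req := range requests {
            session.Refresh()
            req.reply <- handle(session, req)
        }
    }

    func handle(session *mgo.Session, req request) error {
        // ... run the operation against mongo via session ...
        return nil
    }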
[18:49] niemeyer: ok, so Refresh really is that (<0.1ms) lightweight?
[18:50] rogpeppe: Yeah
[18:50] rogpeppe: It literally just cleans the session and puts sockets back in the pool
[18:50] rogpeppe: mem-ops only
[18:51] niemeyer: cool
[18:51] niemeyer: so it doesn't matter if lots of ops call it concurrently whenever they like
[18:51] rogpeppe: Yep, no problem
[18:52] rogpeppe: Note that if you have multiple goroutines using a single session in a way that they might potentially call Refresh concurrently, that's not quite okay
[18:52] niemeyer: oh
[18:52] rogpeppe: As they might be cleaning up errors from each other that ought to be acted upon
[18:52] rogpeppe: That's why I asked whether each client has its own session
[18:53] niemeyer: (that's definitely the case for us, as *state.State holds a single *mgo.Session)
[18:53] niemeyer: ah, i misunderstood session there
[18:53] niemeyer: even if each client had its own session, it still wouldn't be ok, as one client can have many concurrent calls
[18:54] rogpeppe: It's fine to use sessions concurrently, but if there's a fatal error in a session that is being concurrently used, the correct behavior is to let all goroutines see such error, as whatever they were doing got interrupted
[18:54] niemeyer: i'm not sure i quite see how one Refresh might "clean up" an error from another
[18:55] niemeyer: won't whatever they were doing be locked to a particular socket (during that operation), and therefore draw the correct error anyway?
[18:55] rogpeppe: If such a concurrently used session gets Refresh calls while other goroutines are using it, such a fatal error may be incorrectly swept under the carpet
[18:56] rogpeppe: I don't get your previous points
[18:56] rogpeppe: If a single session is used concurrently, it's a single session
[18:56] rogpeppe: If that single session sees a fatal error in its underlying socket, it's the same socket (assuming Strong or Monotonic modes)
[18:56] rogpeppe: for all code using it
[18:58] niemeyer: don't operations do Session.acquireSocket before doing something?
[18:59] rogpeppe: That in itself is irrelevant
[18:59] rogpeppe: A session gets a socket associated with it
[18:59] rogpeppe: Assuming Strong or Monotonic mode
[19:00] rogpeppe: There's no magic.. if such a session gets an error while it has an associated socket, and it's being concurrently used, all the concurrent users should observe the fault
[19:00] niemeyer: could you expand on why we want that to be the case?
[19:02] rogpeppe: Sure
[19:02] rogpeppe: Imagine the perspective of any of the concurrent users of the session
[19:02] rogpeppe: When a session is re-established, it won't necessarily be to the same server
[19:03] niemeyer: ah, so for Monotonic mode, we might suddenly skip events in the log?
[19:03] rogpeppe: and depending on the mode, it may even walk history backwards (master socket becoming slave socket)
[19:03] niemeyer: that doesn't apply in Strong mode, presumably?
[19:04] niemeyer: ah, i guess it can
[19:04] niemeyer: unless Wmode==Majority
[19:04] rogpeppe: That's irrelevant
[19:05] rogpeppe: This just means a writer won't consider the write successful until it reaches a majority
[19:05] rogpeppe: This point in time can easily happen without a particular slave reader observing the data
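For reference, the session modes and the Wmode==Majority knob under discussion map to these mgo calls (a sketch only; whether juju-core should set them is exactly what is being debated above):

    package apiserver

    import "labix.org/v2/mgo"

    func configure(session *mgo.Session) {
        session.SetMode(mgo.Strong, true)    // the default: reads and writes go via the primary
        // or:
        session.SetMode(mgo.Monotonic, true) // read from a slave until the first write, then stick to the primary
        session.SetSafe(&mgo.Safe{WMode: "majority"}) // writes must be acknowledged by a majority of the set
    }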
[19:06] niemeyer: is it possible that one of the ones that wasn't part of that majority is elected primary?
[19:06] niemeyer: (i thought the election process took into account up-to-dateness)
[19:06] rogpeppe: Again, that's a separate problem
[19:06] rogpeppe: We're talking about someone that is actively *reading from a slave*
[19:06] rogpeppe: It doesn't matter who the master is
[19:06] niemeyer: i thought that couldn't happen in Strong mode
[19:07] rogpeppe: and depending on the mode, it may even walk history backwards (master socket becoming slave socket)
[19:07] rogpeppe: I did say "depending on the mode"
[19:07] [19:03:56] niemeyer: that doesn't apply in Strong mode, presumably?
[19:07] rogpeppe: Ah, sorry, okay
[19:07] rogpeppe: It can *still* happen
[19:08] rogpeppe: If a failure happens before the majority was reached
[19:08] niemeyer: ah
[19:09] rogpeppe: Of course, we're getting into very unlikely events
[19:09] niemeyer: indeed, but it's nice to have some idea of these things
[19:09] rogpeppe: As the re-election will still attempt to elect the most up-to-date server, despite majority concerns
[19:11] niemeyer: i wonder if a reasonable approach might be to Copy the session for each RPC call
[19:11] rogpeppe: That sounds fine
[19:11] rogpeppe: Easier to reason about too
[19:11] niemeyer: presumably there's no need to Refresh such a copy?
[19:11] rogpeppe: No, as it's being Closed after the RPC is handled
[19:12] rogpeppe: The session is buried, and the (good) resources go back to the pool
[19:14] niemeyer: ok, that's useful, thanks
[19:15] rogpeppe: My pleasure
[19:15] rogpeppe: I'll head off to a coffee break
[19:15] niemeyer: i shall head to supper
[19:15] Still on holiday this week, btw
[19:15] Should send a note
[19:16] niemeyer: ah, well, thanks a lot for the chat!
[19:16] rogpeppe: Glad to chat
[19:28] niemeyer: FYI I tried Refreshing the session after an error, but I can't get it to work - it seems to try connecting to the old primary and never tries the old secondaries
[19:29] niemeyer: (when i was redialling with all the peer addresses, it worked)
=== BradCrittenden is now known as bac
[21:31] o/
=== gary_poster is now known as gary_poster|away
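And the per-call Copy approach the conversation settled on, as a Go sketch (the server type and handler shape are invented, not juju-core's actual API):

    package apiserver

    import "labix.org/v2/mgo"

    type apiServer struct {
        session *mgo.Session // the long-lived session dialed at startup
    }

    // serveCall gives each RPC its own copy of the session: a fresh
    // error state, backed by sockets from the pool. Close returns the
    // good sockets to the pool when the call completes, so no Refresh
    // is needed on the copy.
    func (srv *apiServer) serveCall(do func(*mgo.Session) error) error {
        session := srv.session.Copy()
        defer session.Close()
        return do(session)
    }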