[03:23] thumper: review plz? https://github.com/juju/juju/pull/6588
[03:23] * thumper looks
[03:24] * babbageclunk thanks thumper kindly.
[04:01] thumper - any idea why my !!build!! isn't triggering a check build?
[04:02] nope, I've not grokked that part much
[04:02] thumper: thanks for the comments - good points.
[04:06] thumper: I'm not sure about having a log file per model.
[04:07] At the moment the log file is managed by lumberjack, so it does rolling and doesn't take up too much space.
[04:07] If it was per-model, wouldn't we have to come up with something else on top to do the same thing?
[04:09] yeah...
[04:09] it would be more manual
[04:09] I'm happy enough with a global one...
[04:09] given that it is really just a backup
[04:10] thumper: cool
[04:10] thumper: at the moment debug-log (which will be the source of the logs) doesn't send the version with the log records.
[04:11] no... I don't even know why it is there, TBH
[04:11] I'm not sure whether the version is important
[04:12] I think I know who might have added it, but no idea why
[04:12] It does seem a pity to just throw that info away when we migrate, though.
[04:12] * thumper has to run
[04:12] ok, bye
[04:12] whoa, he really meant it
[10:08] Bug #1592188 opened: No 'enable-ha' for manual provider
[11:11] Hello folks, I am playing with the liberty release and by default I get keystone v2 endpoints. Is it possible to turn on the keystone v3 API by default for liberty?
[11:11] jamespage: ^^
[11:11] Andrew_jedi, yes
[11:11] the keystone charm has a configuration option to allow v3 to be used
[11:12] jamespage: Ok, and to enable it, will I have to recreate the keystone service after making changes to the config file?
[11:12] Andrew_jedi, no, I think you can just enable it on the running environment
[11:12] juju config keystone preferred-api-version=3
[11:12] I think
[11:12] check the key
[11:13] jamespage: Ok, let me execute this command.
[11:18] jamespage: http://paste.openstack.org/show/589861/
[11:18] config/set
[11:18] config is juju 2.0
[11:24] jamespage: thanks, I was looking for this everywhere. Executing again :)
[11:25] does anyone know a way that I can find out what relation values are being set in given relationships?
=== freyes__ is now known as freyes
[14:47] I
[14:48] Hi, I'm running into a problem with LXD where, when attached using the `lxc exec` command and running a command in the background, I cannot detach from that LXD.
[14:48] Is there a channel specific to LXD?
[14:51] rick_h, do you know ^
[14:55] deanman: natefinch not that I know of, no. I think the issue is just how exec works with lxd. I'd use ssh for that if you can, deanman. https://github.com/lxc/lxd/issues/1830 for instance walks through some of the tricks that go into things.
[15:00] rick_h: my browser is not loading hangouts for some reason; trying to figure it out
[15:01] katco: I had to restart FF today as it upgraded and needed a restart to pick up the plugin
[15:01] rick_h: i use chrome; restart doesn't appear to have worked... lemme try ff
[15:04] rick_h: going to try and reboot the machine
[15:05] Hey rick_h, thanks for the tip, will look into it more.
[20:12] thumper, babbageclunk, mwhudson: o/
[20:13] menn0: would love a chat when you've read my email
[20:13] menn0: it's the one with the body that starts with "well... fuck"
[20:15] lol, like he has only one email from you with that subject
[20:17] menn0: hi!
[20:18] menn0: hi
[20:18] thumper: i've read it once, will read it again with more attention to the details
[20:18] menn0: ack, let me know when you're ready
[20:23] thumper: ok 1:1?
[20:24] menn0: just in the middle of a web form, be there in a sec
[20:29] natefinch: ping
[20:32] katco: only here for a moment... what's up?
[20:32] natefinch: hey, i'll be quick
[20:33] natefinch: starting work on allowing IB to give a cloud a name. 1. can i start from develop, 2. does that overlap with anything you're doing?
[20:33] katco: it will tie into what I'm doing, but I haven't gotten there yet, so you're welcome to get the ball rolling :)
[20:34] natefinch: is there anything i should look at in terms of integration?
[20:34] natefinch: commit, branch, etc.
[20:35] katco: interactive add-cloud is the current state of the art. cmd/juju/cloud/add.go uses cmd/juju/interact.Pollster to interact with the user; we should be moving toward using that, since it standardizes the formatting etc.
[20:36] katco: that's all available in develop
[20:36] natefinch: cool cool
[20:37] natefinch: ty
[20:37] katco: welcome :)
[21:24] menn0: ping
[21:24] babbageclunk: hi, sorry, on a call
[21:24] tsk, thumper!
[21:24] ;)
[21:32] mm, a heavy earthquake hit japan, are you guys ok at nz?
[21:32] babbageclunk: thumper menn0
[21:34] we are a long way from japan
[21:35] nothing here
[21:35] thumper: I know, but I thought usually eqs on japan's sea on your side would end up being felt there
[21:37] * perrito666 well look at that, I thought the indo-australian plate was covering jp too, I stand corrected
[21:46] perrito666: ring of fire baby
[21:47] perrito666: oh man, Fukushima again. :(
[21:47] babbageclunk: sadly yes
[21:49] perrito666: Wow, the 2011 earthquake was magnitude 9!
[21:50] yup, yet today's is advertised as 7.3, which is a big shake
[21:52] * perrito666 tries to purchase a used 11" macbook air to try some things and finds it has the same price tag as a new 13"... what is wrong with people?
[22:13] babbageclunk: ok, i'm free
[22:14] menn0: I think I just had an epiphany, but I'll run it by you.
[22:14] menn0: Oh, did you work out what was going on with thumper's thing?
[22:14] his problem, I mean
[22:14] babbageclunk: yep
[22:15] oh nice - what was it?
[22:15] babbageclunk: it was a tough one, but it ended up being due to old indexes
[22:15] ugh
[22:15] babbageclunk: mgo handles index violations badly
[22:15] babbageclunk: so failed updates were being silently ignored
[22:16] gross
[22:16] it is so frustrating
[22:16] because I knew the indices were wrong
[22:16] and we deleted them
[22:16] but after the updates
[22:16] not before
[22:16] so they were interfering
[22:16] how is "old indexes" even a thing you need to worry about with a database?
[22:17] babbageclunk: this isn't mongodb's fault, it's more a problem with mgo
[22:17] Oh wow, so you can have a transaction where the assertions pass, so it's applied, but it fails and partially applies because of an index error?
[22:17] yes
[22:17] babbageclunk: afaics it's due to the logic to prevent txns failing if multiple txns attempt to insert the same doc at the same time
[22:18] * babbageclunk throws up in mouth
[22:18] babbageclunk: I'll send an email about it later today. it's probably worth getting niemeyer involved as well.
[22:19] babbageclunk: I believe we owe you the "what mongo is and isn't" talk
[22:19] presumably over some strong drinks
[22:19] babbageclunk: short one: txn is a task queue, not a transactional engine, and mongo is a bag of hopeful storage, not a db
[22:20] I go by these rules and they work for me
[22:20] * babbageclunk sighs
[22:21] no, seriously, you need to make your head stop thinking of mongo+txn as if it was an rdbms, because many of us did and that is where you eff up
[22:22] * perrito666 pats babbageclunk on the back and hands him a beer
[22:22] babbageclunk: did you want to chat?
[22:22] Yeah, I can get behind that. Except for the transaction being marked as successful.
[22:22] menn0: yes please! ok, so on a happier note, I'm trying to work out how to get to the debug log api from the migrationmaster worker
[22:24] menn0: It's a bit fiddly to call api.Client.WatchDebugLog, because I need a *state.
[22:24] babbageclunk: and workers can't use *State directly... they have to use the API
[22:25] menn0: It seems like I can change that to use FacadeCaller.RawAPICaller().ConnectStream instead.
[22:25] menn0: Oh, no - it's an api.state, not a state.State.
[22:25] babbageclunk: ok cool
[22:25] menn0: But I don't have a *api.state either.
[22:26] * menn0 looks at code to refresh his memory
[22:28] menn0: Basically, once I can call WatchDebugLog then I can call openAPIConn to get to the target /migrate/logtransfer endpoint, and everything else should be straightforward.
[22:29] babbageclunk: probably the most correct thing to do is to extract the guts of WatchDebugLog to api/common and then add a similar (probably simpler) API to the api/migrationmaster facade
[22:30] menn0: ok - you mean make the guts take a stream or streamconnector then?
[22:31] menn0: yeah, that sounds good.
[22:31] babbageclunk: yep
[22:32] menn0: I was a bit nervous about changing the api.Client.WatchDebugLog bit to use the FacadeCaller instead of the st, just in case something was making a Client where they were different for some reason.
[22:32] babbageclunk: you'll also need to consider the transfer of logs getting interrupted
[22:33] menn0: I was worried you might say that.
[22:33] babbageclunk: that'll make things "fun"...
[22:34] menn0: We don't really have any kind of sequence number we could use for that on the logs themselves, right?
[22:35] babbageclunk: no, we don't... the best we've got is the timestamps
[22:35] babbageclunk: a small amount of overlap on retry is probably ok
[22:36] babbageclunk: there is the LastSentLogTracker in state/logs.go
[22:37] babbageclunk: maybe that could be used for this too (it was developed for shipping logs offsite)
[22:37] menn0: So I need to make the receiving side discard logs when they're before the last sent log?
[22:38] babbageclunk: I was thinking more that the sender (i.e. the migrationmaster) would track what had already been sent and not send again
[22:38] menn0: ah, ok
[22:38] babbageclunk: otherwise if there's 3GB of logs to send and 2GB has already been sent, that's a lot of wasted bandwidth
[22:38] brb, grocery shopping
[22:39] menn0: But it needs to be able to query what the target has, right?
[22:42] babbageclunk: in theory it should know, if it's recorded locally
[22:42] (that's what the LastSentLogTracker does)
[22:42] babbageclunk: but it might be better to ask
[22:42] babbageclunk: in fact that would simplify things a lot
[22:42] babbageclunk: do that :)
[22:43] babbageclunk: so the LOGTRANSFER phase ends up looking like: ask for the timestamp that the target has, stream from that timestamp onwards
[22:43] menn0: So that would be another method on the migrationmaster facade to get the latest log time? Or probably have something in the logtransfer endpoint to track the last time.
[22:44] menn0: quick chat to clarify? :)
[22:44] babbageclunk: the migrationtarget facade will need a new API to query the latest log timestamp
[22:44] babbageclunk: HO would be good
[22:44] menn0: yeah, I meant migrationtarget rather than migrationmaster.
[22:45] menn0: https://hangouts.google.com/call/p4p47uhzxzbm5iycpkv66uc3tae
[22:46] menn0: actually, try this: https://hangouts.google.com/hangouts/_/canonical.com/menno
[23:00] thumper, on the HO whenever you are ready
[23:00] HO or blue jeans?
[23:00] bluejeans
[23:47] menn0, ping