babbageclunk | thumper: review plz? https://github.com/juju/juju/pull/6588 | 03:23 |
---|---|---|
* thumper looks | 03:23 | |
* babbageclunk thanks thumper kindly. | 03:24 | |
babbageclunk | thumper - any idea why my !!build!! isn't triggering a check build? | 04:01 |
thumper | nope, I've not grokked that part much | 04:02 |
babbageclunk | thumper: thanks for the comments - good points. | 04:02 |
babbageclunk | thumper: I'm not sure about having a log file per model. | 04:06 |
babbageclunk | At the moment the log file is managed by lumberjack so it does rolling and doesn't take up too much space. | 04:07 |
babbageclunk | If it was per-model wouldn't we have to come up with something else on top to do the same thing? | 04:07 |
thumper | yeah... | 04:09 |
thumper | it would be more manual | 04:09 |
thumper | I'm happy enough with a global one.. | 04:09 |
thumper | given that it is really just a backup | 04:09 |
babbageclunk | thumper: cool | 04:10 |
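[Editor's note: the discussion above leans on lumberjack, a Go rolling-log library, to keep the single global log file bounded. For illustration only, here is the same rolling idea using Python's stdlib `RotatingFileHandler`; the path, sizes, and logger name are invented, not juju's.]

```python
import logging
import logging.handlers
import os
import tempfile

# Illustrative sketch of a size-bounded, rolling log file -- the same
# idea lumberjack provides on the Go side. Path and limits are made up.
log_path = os.path.join(tempfile.mkdtemp(), "logsink.log")

handler = logging.handlers.RotatingFileHandler(
    log_path,
    maxBytes=1024,   # roll over once the file reaches ~1 KiB
    backupCount=3,   # keep at most 3 rolled files: logsink.log.1 .. .3
)
logger = logging.getLogger("logsink")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

for i in range(200):
    logger.info("log record %d from some model", i)

# Total disk usage stays bounded regardless of how much is logged.
files = sorted(os.listdir(os.path.dirname(log_path)))
print(files)
```

Because the handler drops the oldest rolled file once `backupCount` is reached, total space is capped at roughly `maxBytes * (backupCount + 1)` without any manual cleanup, which is the property being weighed against per-model files above.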
babbageclunk | thumper: at the moment debug-log (which will be the source of the logs) doesn't send the version with the log records. | 04:10 |
thumper | no... I don't even know why it is there TBH | 04:11 |
babbageclunk | I'm not sure whether the version is important | 04:11 |
thumper | I think I know who might have added it but no idea why | 04:12 |
babbageclunk | It does seem a pity to just throw that info away when we migrate though. | 04:12 |
* thumper has to run | 04:12 | |
babbageclunk | okbye | 04:12 |
babbageclunk | whoa, he really meant it | 04:12 |
mup | Bug #1592188 opened: No 'enable-ha' for manual provider <juju-core:New> <https://launchpad.net/bugs/1592188> | 10:08 |
Andrew_jedi | Hello folks, I am playing with the liberty release and by default I get keystone v2 endpoints. Is it possible to turn on the keystone v3 API by default for liberty? | 11:11 |
Andrew_jedi | jamespage: ^^ | 11:11 |
jamespage | Andrew_jedi, yes | 11:11 |
jamespage | the keystone charm has a configuration option to allow v3 to be used | 11:11 |
Andrew_jedi | jamespage: Ok and to enable it, I will have to recreate the keystone service after making changes to the config file? | 11:12 |
jamespage | Andrew_jedi, no I think you can just enable it on the running environment | 11:12 |
jamespage | juju config keystone preferred-api-version=3 | 11:12 |
jamespage | I think | 11:12 |
jamespage | check the key | 11:12 |
Andrew_jedi | jamespage: Ok, let me execute this command. | 11:13 |
Andrew_jedi | jamespage: http://paste.openstack.org/show/589861/ | 11:18 |
jamespage | config/set | 11:18 |
jamespage | config is juju 2.0 | 11:18 |
Andrew_jedi | jamespage: thanks i was looking for this everywhere. Executing again :) | 11:24 |
mattyw | does anyone know a way that I can find out what relation values are being set in given relationships? | 11:25 |
=== freyes__ is now known as freyes | ||
deanman | Hi, I'm running into a problem with LXD: when I attach using the `lxc exec` command and run a command in the background, I cannot detach from that LXD container. | 14:48 |
deanman | Is there a channel specific to LXD? | 14:48 |
natefinch | rick_h, do you know ^ | 14:51 |
rick_h | deanman: natefinch not that I know of, no. I think the issue is just how exec works with lxd. I'd use ssh for that if you can, deanman. https://github.com/lxc/lxd/issues/1830 for instance walks through some of the tricks that go into things. | 14:55 |
katco | rick_h: my browser is not loading hangouts for some reason; trying to figure it out | 15:00 |
rick_h | katco: I had to restart FF today as it upgraded and needed a restart to pick up the plugin | 15:01 |
katco | rick_h: i use chrome; restart doesn't appear to have worked... lemme try ff | 15:01 |
katco | rick_h: going to try and reboot the machine | 15:04 |
deanman | Hey rick_h, thanks for the tip, will look into it more. | 15:05 |
menn0 | thumper, babbageclunk, mwhudson: o/ | 20:12 |
thumper | menn0: would love a chat when you've read my email | 20:13 |
thumper | menn0: it's the one with the body that starts with "well... fuck" | 20:13 |
perrito666 | lol, like he has only one email from you with that subject | 20:15 |
babbageclunk | menn0: hi! | 20:17 |
mwhudson | menn0: hi | 20:18 |
menn0 | thumper: i've read it once, will read it again with more attention to the details | 20:18 |
thumper | menn0: ack, let me know when you're ready | 20:18 |
menn0 | thumper: ok 1:1? | 20:23 |
thumper | menn0: just in the middle of a web form, be there in a sec | 20:24 |
katco | natefinch: ping | 20:29 |
natefinch | katco: only here for a moment... what's up? | 20:32 |
katco | natefinch: hey i'll be quick | 20:32 |
katco | natefinch: starting work on allowing IB to give a cloud a name. 1. can i start from develop, 2. does that overlap with anything you're doing? | 20:33 |
natefinch | katco: it will tie into what I'm doing, but I haven't gotten there yet, so you're welcome to get the ball rolling :) | 20:33 |
katco | natefinch: is there anything i should look at in terms of integration? | 20:34 |
katco | natefinch: commit, branch, etc. | 20:34 |
natefinch | katco: interactive add cloud is the current state of the art. cmd/juju/cloud/add.go it uses cmd/juju/interact.Pollster to interact with the user, we should be moving toward using that, since it standardizes the formatting etc. | 20:35 |
natefinch | katco: that's all available in develop | 20:36 |
katco | natefinch: cool cool | 20:36 |
katco | natefinch: ty | 20:37 |
natefinch | katco: welcome :) | 20:37 |
babbageclunk | menn0: ping | 21:24 |
menn0 | babbageclunk: hi, sorry on a call | 21:24 |
babbageclunk | tsk, thumper! | 21:24 |
babbageclunk | ;) | 21:24 |
perrito666 | mm, a heavy earthquake hit Japan, are you guys ok in NZ? | 21:32 |
perrito666 | babbageclunk: thumper menn0 | 21:32 |
thumper | we are a long way from japan | 21:34 |
menn0 | nothing here | 21:35 |
perrito666 | thumper: I know, but I thought earthquakes in the sea off Japan would usually end up being felt on your side | 21:35 |
* perrito666 well look at that, I thought the indo-australian plate was covering jp too, I stand corrected | 21:37 | |
babbageclunk | perrito666: ring of fire baby | 21:46 |
babbageclunk | perrito666: oh man, Fukushima again. :( | 21:47 |
perrito666 | babbageclunk: sadly yes | 21:47 |
babbageclunk | perrito666: Wow, the 2011 earthquake was magnitude 9! | 21:49 |
perrito666 | yup, yet today's is advertised as 7.3, which is a big shake | 21:50 |
* perrito666 tries to purchase a used 11" macbook air to try some things and finds it has the same price tag as a new 13".... what is wrong with people? | 21:52 | |
menn0 | babbageclunk: ok, i'm free | 22:13 |
babbageclunk | menn0: I think I just had an epiphany, but I'll run it by you. | 22:14 |
babbageclunk | menn0: Oh, did you work out what was going on with thumper's thing? | 22:14 |
babbageclunk | his problem I mean | 22:14 |
menn0 | babbageclunk: yep | 22:14 |
babbageclunk | oh nice - what was it? | 22:15 |
menn0 | babbageclunk: it was a tough one but it ended up being due to old indexes | 22:15 |
babbageclunk | ugh | 22:15 |
menn0 | babbageclunk: mgo handles index violations badly | 22:15 |
menn0 | babbageclunk: so failed updates were being silently ignored | 22:15 |
babbageclunk | gross | 22:16 |
thumper | it is so frustrating | 22:16 |
thumper | because I knew the indices were wrong | 22:16 |
thumper | and we deleted them | 22:16 |
thumper | but after the updates | 22:16 |
thumper | not before | 22:16 |
thumper | so they were interfering | 22:16 |
babbageclunk | how is "old indexes" even a thing you need to worry about with a database? | 22:16 |
menn0 | babbageclunk: this isn't mongodb's fault, it's more a problem with mgo | 22:17 |
babbageclunk | Oh wow, so you can have a transaction where the assertions pass, so it's applied, but fails and partially applies because of an index error? | 22:17 |
thumper | yes | 22:17 |
menn0 | babbageclunk: afaics it's due to the logic to prevent txns failing if multiple txns attempt to insert the same doc at the same time | 22:17 |
* babbageclunk throws up in mouth | 22:18 | |
menn0 | babbageclunk: I'll send an email about it later today. it's probably worth getting niemeyer involved as well. | 22:18 |
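[Editor's note: the failure mode menn0 describes -- a txn layer swallowing duplicate-key errors on the assumption they mean "another txn already inserted this doc", so that writes blocked by a stale index are silently lost -- can be shown with a toy in-memory store. This is not mgo/txn's actual code, just a sketch of the hazard; all names are invented.]

```python
class DuplicateKeyError(Exception):
    pass

class ToyCollection:
    """A toy document store with a unique index on one field."""
    def __init__(self, unique_field):
        self.unique_field = unique_field
        self.docs = {}      # _id -> doc
        self.index = set()  # values seen for the unique field

    def insert(self, doc):
        key = doc[self.unique_field]
        if key in self.index:
            raise DuplicateKeyError(key)
        self.index.add(key)
        self.docs[doc["_id"]] = doc

def apply_insert_forgiving(coll, doc):
    """Sketch of the hazard: swallow duplicate-key errors, assuming
    they only ever mean a concurrent txn inserted the same doc."""
    try:
        coll.insert(doc)
    except DuplicateKeyError:
        pass  # if a stale/wrong index raised this, the write is lost
    return True  # reported as applied either way

coll = ToyCollection(unique_field="name")
coll.insert({"_id": 1, "name": "model-a"})

# A leftover unique index now blocks a legitimate new doc...
ok = apply_insert_forgiving(coll, {"_id": 2, "name": "model-a"})
print(ok)              # → True: the txn looks successfully applied
print(2 in coll.docs)  # → False: ...but the document was never written
```

The caller sees success while the store was never updated, which matches the "assertions pass, transaction applied, but partially fails on an index error" behaviour described above.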
perrito666 | babbageclunk: I believe we owe you the "what mongo is and isnt" talk | 22:19 |
babbageclunk | presumably over some strong drinks | 22:19 |
perrito666 | babbageclunk: short version: txn is a task queue, not a transactional engine, and mongo is a bag of hopeful storage, not a db | 22:19 |
perrito666 | I go by these rules and they work for me | 22:20 |
* babbageclunk sighs | 22:20 | |
perrito666 | no, seriously, you need to make your head stop thinking of mongo+txn as if it were an rdbms, because many of us did and that is where you eff up | 22:21 |
* perrito666 pats babbageclunk on the back and hands a beer | 22:22 | |
menn0 | babbageclunk: did you want to chat? | 22:22 |
babbageclunk | Yeah, I can get behind that. Except for the transaction being marked as successful. | 22:22 |
babbageclunk | menn0: yes please! ok, so on a happier note, I'm trying to work out how to get to the debug log api from the migrationmaster worker | 22:22 |
babbageclunk | menn0: It's a bit fiddly to call api.Client.WatchDebugLog, because I need a *state. | 22:24 |
menn0 | babbageclunk: and workers can't use *State directly... they have to use the API | 22:24 |
babbageclunk | menn0: It seems like I can change that to use FacadeCaller.RawAPICaller().ConnectStream instead. | 22:25 |
babbageclunk | menn0: Oh, no - it's an api.state, not a state.State. | 22:25 |
menn0 | babbageclunk: ok cool | 22:25 |
babbageclunk | menn0: But I don't have a *api.state either. | 22:25 |
* menn0 looks at code to refresh his memory | 22:26 |
babbageclunk | menn0: Basically, once I can call WatchDebugLog then I can call openAPIConn to get to the target /migrate/logtransfer endpoint and everything else should be straightforward. | 22:28 |
menn0 | babbageclunk: probably the most correct thing to do is to extract the guts of WatchDebugLog to api/common and then add a similar (probably simpler) API to the api/migrationmaster facade | 22:29 |
babbageclunk | menn0: ok - you mean make the guts take a stream or streamconnector then? | 22:30 |
babbageclunk | menn0: yeah, that sounds good. | 22:31 |
menn0 | babbageclunk: yep | 22:31 |
babbageclunk | menn0: I was a bit nervous about changing the api.Client.WatchDebugLog bit to use the FacadeCaller instead of the st, just in case something was making a Client where they were different for some reason. | 22:32 |
menn0 | babbageclunk: you'll also need to consider the transfer of logs getting interrupted | 22:32 |
babbageclunk | menn0: I was worried you might say that. | 22:33 |
menn0 | babbageclunk: that'll make things "fun" ... | 22:33 |
babbageclunk | menn0: We don't really have any kind of sequence number we could use for that on the logs themselves, right? | 22:34 |
menn0 | babbageclunk: no we don't... the best we've got is the timestamps | 22:35 |
menn0 | babbageclunk: a small amount of overlap on retry is probably ok | 22:35 |
menn0 | babbageclunk: there is the LastSentLogTracker in state/logs.go | 22:36 |
menn0 | babbageclunk: maybe that could be used for this too (it was developed for shipping logs offsite) | 22:37 |
babbageclunk | menn0: So I need to make the receiving side discard logs when they're before the last sent log? | 22:37 |
menn0 | babbageclunk: I was thinking more that the sender (i.e. the migrationmaster) would track what had already been sent and not send again | 22:38 |
babbageclunk | menn0: ah, ok | 22:38 |
menn0 | babbageclunk: otherwise if there's 3GB of logs to send and 2GB has already been sent, that's a lot of wasted bandwidth | 22:38 |
perrito666 | brb grocery shopping | 22:38 |
babbageclunk | menn0: But it needs to be able to query what the target has, right? | 22:39 |
menn0 | babbageclunk: in theory it should know if it's recorded locally | 22:42 |
menn0 | (that's what the LastSentLogTracker does) | 22:42 |
menn0 | babbageclunk: but it might be better to ask | 22:42 |
menn0 | babbageclunk: in fact that would simplify things a lot | 22:42 |
menn0 | babbageclunk: do that :) | 22:42 |
menn0 | babbageclunk: so the LOGTRANSFER phase ends up looking like: ask for the latest timestamp that the target has, then stream from that timestamp onwards | 22:43 |
babbageclunk | menn0: So that would be another method on the migrationmaster facade to get the latest log time? Or probably have something in the logtransfer endpoint to track the last time. | 22:43 |
babbageclunk | menn0: quick chat to clarify? :) | 22:44 |
menn0 | babbageclunk: the migrationtarget facade will need a new API to query the latest log timestamp | 22:44 |
menn0 | babbageclunk: HO would be good | 22:44 |
babbageclunk | menn0: yeah, I meant migrationtarget rather than migrationmaster. | 22:44 |
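[Editor's note: the resume protocol sketched above -- the migrationtarget side reports the latest log timestamp it already holds, and the sender streams only records newer than that -- can be illustrated with a small sketch. All names here are invented for illustration; this is not juju's API.]

```python
from datetime import datetime, timedelta

def latest_timestamp(target_logs):
    """What the target side would report: the newest log timestamp
    already present on the target (None if it has nothing yet)."""
    return max((r["ts"] for r in target_logs), default=None)

def records_to_send(source_logs, since):
    """Stream only records newer than what the target already has.
    A small overlap on retry is acceptable, per the discussion."""
    if since is None:
        return list(source_logs)
    return [r for r in source_logs if r["ts"] > since]

base = datetime(2016, 11, 22, 22, 0, 0)
source = [{"ts": base + timedelta(seconds=i), "msg": "record %d" % i}
          for i in range(5)]

# Simulate an interrupted transfer: the target got the first 3 records.
target = source[:3]

resume_from = latest_timestamp(target)
pending = records_to_send(source, resume_from)
print([r["msg"] for r in pending])  # → ['record 3', 'record 4']
```

Asking the target (rather than trusting sender-side bookkeeping like LastSentLogTracker) keeps the two sides from disagreeing after an interruption, at the cost of one extra query before streaming resumes.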
babbageclunk | menn0: https://hangouts.google.com/call/p4p47uhzxzbm5iycpkv66uc3tae | 22:45 |
babbageclunk | menn0: actually. try this https://hangouts.google.com/hangouts/_/canonical.com/menno | 22:46 |
alexisb | thumper, on the HO whenever you are ready | 23:00 |
thumper | HO or blue jeans? | 23:00 |
alexisb | bluejeans | 23:00 |
alexisb | menn0, ping | 23:47 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!