=== bigjools_ is now known as bigjools | ||
=== wedgwood is now known as wedgwood_away | ||
davecheney | https://bugs.launchpad.net/bugs/1187062 | 01:19 |
---|---|---|
_mup_ | Bug #1187062: 1.11.0-1~1240~quantal1 cannot find Precise Image on HP Cloud <juju-core:New> <https://launchpad.net/bugs/1187062> | 01:19 |
davecheney | does anyone have the creds to run sync-tools? | 01:19 |
wallyworld | davecheney: it's not sync-tools - there's no quantal image metadata uploaded to hp cloud yet, just precise | 01:33 |
wallyworld | i can add a quantal image entry to the metadata | 01:33 |
wallyworld | what image id do you want? | 01:34 |
davecheney | wallyworld: thanks for replying | 01:43 |
davecheney | i'm wondering if I should tell antonio not to use quantal | 01:44 |
davecheney | we're only pushing the LTS | 01:44 |
wallyworld | that's what i thought when i uploaded the metadata - that people would just be using precise | 01:44 |
davecheney | i think that is a reasonable response | 01:44 |
davecheney | and is consistent with the company message | 01:44 |
wallyworld | i thought that the charms would only be guaranteed to work with the LTS | 01:44 |
davecheney | also, there are almost 0 charms for quantal | 01:45 |
wallyworld | yeah | 01:45 |
wallyworld | so i didn't want to provide a footgun | 01:45 |
davecheney | wallyworld: the charms also have a series, and there aren't really any for quantal, almost none for raring and zilch for saucy | 01:45 |
davecheney | lol @ footgun | 01:45 |
wallyworld | i know charms have a series - just assumed there were only a number for precise and not many for anything else like you say | 01:46 |
wallyworld | foorgun amuses me too | 01:46 |
wallyworld | footgun even | 01:46 |
wallyworld | for demos, like i think he wants this for, precise is the way to go | 01:47 |
davecheney | wallyworld: thanks, i'll write some stuff to antonio | 01:48 |
wallyworld | i had a chat this morning - there was also an issue not reading the release notes i *think* | 01:49 |
wallyworld | davecheney: there's a meeting tomorrow where all this will be cleared up fwiw, so don't feel like you have to write too much today | 01:49 |
davecheney | ok | 02:17 |
davecheney | yeah, simple streams | 02:17 |
thumper | hi jam, you around yet? | 02:39 |
thumper | davecheney: I have a question for you if you are around | 03:26 |
thumper | davecheney: wondering about lockless reading of an int | 03:30 |
thumper | davecheney: when it is possible that another goroutine may be writing to it | 03:31 |
thumper | any ideas on guarantees? | 03:31 |
thumper | davecheney: nm, I'll use a defined size and sync.atomic | 03:32 |
jam | thumper: hey, what's up ? | 03:48 |
thumper | jam: hi, I've just tweaked your branch a little | 03:48 |
thumper | and about to merge it | 03:48 |
jam | thumper: for loggo? | 03:48 |
thumper | yeah | 03:48 |
thumper | jam: I changed the globalMinLevel to be a uint32 explicitly | 03:48 |
thumper | (and changed the type from int to uint32 for Level) | 03:49 |
thumper | so I could use sync.atomic for the reads and writes | 03:49 |
thumper | otherwise we'd be reading an int without a lock | 03:49 |
thumper | potential problem | 03:49 |
thumper | so using sync.atomic for both reads and writes fixes this | 03:49 |
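(A minimal sketch of the approach thumper describes above: a shared uint32 level that is read and written only through sync/atomic, so the hot-path check needs no mutex. The names globalMinLevel and levelEnabled are illustrative, not loggo's actual API.)

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// globalMinLevel holds the minimum logging level. It is read on every
// log call, potentially from many goroutines, so all access goes
// through sync/atomic rather than a mutex.
var globalMinLevel uint32

// setGlobalMinLevel publishes a new level safely to other goroutines.
func setGlobalMinLevel(level uint32) {
	atomic.StoreUint32(&globalMinLevel, level)
}

// levelEnabled is the hot-path check: a single atomic load, no lock.
func levelEnabled(level uint32) bool {
	return level >= atomic.LoadUint32(&globalMinLevel)
}

func main() {
	setGlobalMinLevel(2)
	fmt.Println(levelEnabled(1), levelEnabled(3)) // false true
}
```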
jam | thumper: because it is a word size, you will either get the existing or the previous value (I'm pretty sure), which you are racing on anyway: when the int gets set vs when you are logging | 03:49 |
jam | which is already a race. | 03:49 |
jam | well 'race' | 03:50 |
thumper | while the writes were all within the mutex, the reads weren't | 03:50 |
jam | thumper: sure, but it doesn't seem like an int that you have to be strict about | 03:50 |
thumper | while most likely, it isn't guaranteed (for the reads) | 03:50 |
jam | because if you got the wrong value, you could have also gotten the 'wrong' value because the async call to change it was 1ms later. | 03:50 |
thumper | yeah, but I'm mildly pedantic about shit like that | 03:50 |
jam | thumper: for word-size, you won't ever read part of the value. | 03:50 |
jam | thumper: for the lowest part of logging, it would be nice to avoid mutexes. | 03:51 |
jam | or sync level calls. | 03:51 |
jam | did you check it with the benchmarks? | 03:51 |
thumper | I seem to recall an exact example where it would | 03:51 |
jam | thumper: for multi-word stuff, you can get craziness. | 03:51 |
thumper | let me poke it with and without the sync calls | 03:51 |
jam | for example 'interface' is 2 words | 03:52 |
jam | (a pointer to type, and a pointer to value, which you can abuse with GOMAXPROCS>1) | 03:52 |
jam | You may still want uint32 instead of 'int' for that same reason. | 03:52 |
jam | thumper: anyway, I have the feeling the specific overhead doesn't matter much, but it is a multi-cpu synchronization point to log something that you are then throwing away (sometimes) | 03:53 |
jam | thumper: btw, you don't have a good way to change the log level of writers. To change the default logging you have to RemoveWriter() RegisterWriter(), because ReplaceDefaultWriter doesn't let you change the level. | 03:54 |
thumper | jam: ah, good point... | 03:54 |
thumper | I suppose we should add that at some stage | 03:54 |
jam | thumper: I was a little surprised that Logger objects track their log level, but Writer objects have it tracked in a separate location. | 03:55 |
thumper | seems to be about 5% slower on the fastest case to use sync | 03:55 |
jam | thumper: tbh the fastest case probably doesn't trigger all that often, since getEffectiveLogLevel is probably going to be the primary DEBUG/TRACE filter? | 03:56 |
thumper | aye | 03:56 |
jam | thumper: the goal is to just make it cheap enough that you don't have to think about perf when adding logging, because it will be cheaply filtered out. | 03:56 |
thumper | so I guess the question is with a read of uint32, will we always get a "whole" value? | 03:57 |
jam | thumper: note, layering the calls has a measurable performance impact, though in the "NS" range. | 03:57 |
thumper | what do you mean layering? | 03:57 |
jam | thumper: Debug calling Log | 03:57 |
jam | it is also measurable if you pass an extra parameter. | 03:57 |
jam | but again, 13ns per call vs 19ns per call. | 03:58 |
jam | sort of range. | 03:58 |
thumper | that is because I special case the one param | 03:58 |
thumper | to avoid Sprintf | 03:58 |
jam | thumper: no, actually, it doesn't get to Sprintf | 03:58 |
jam | Sprintf costs 600ns | 03:58 |
jam | or so | 03:58 |
jam | this is just parameter passing | 03:58 |
thumper | oh, so just the param? | 03:58 |
thumper | weird | 03:58 |
jam | if you change the fastest benchmark test | 03:58 |
jam | thumper: yeah, but again, nanoseconds | 03:58 |
thumper | and you really shouldn't be logging 1000s of things per second | 03:59 |
thumper | hopefully | 03:59 |
jam | thumper: well, millions of things, really | 03:59 |
thumper | although I did log a metric fuck-ton of stuff in unity for testing cleanup | 03:59 |
jam | I could see TRACE getting really verbose. | 03:59 |
thumper | jam: exactly, which is why we want better module separation | 04:00 |
jam | but if you are actually logging stuff, the Sprintfs start to add up. | 04:00 |
thumper | so you can set trace on say, the provisioner | 04:00 |
thumper | but nothing else | 04:00 |
thumper | yeah, I can imagine that it does add up, but the cost/benefit of the logging is worth it I think | 04:00 |
jam | thumper: so on my machine, if you look at BenchmarkLoggingDiskWriterNoMessagesLogLevel | 04:01 |
jam | with s.logger.Debug(msg, i) | 04:02 |
jam | it is 25.5ns/op | 04:02 |
jam | if I change it to | 04:02 |
jam | s.logger.Debug(msg) | 04:02 |
jam | it is 13.8ns/op | 04:02 |
thumper | heh | 04:02 |
jam | if I change it to s.logger.Log(trace.DEBUG, msg) | 04:02 |
jam | It is 9.2ns/op | 04:03 |
jam | so, interesting that all that is measurable | 04:03 |
jam | I imagine the overhead of varargs is because it has to allocate the slice and the backing array | 04:03 |
jam | but you *are* talking 15ns absolute time. | 04:03 |
jam | so if you call this 1000s of times per second, it has a net overhead of 15 microseconds/second (15ppm) | 04:04 |
jam | I don't think that will explicitly ever show up in a pprof :) | 04:04 |
thumper | :) | 04:04 |
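(A self-contained sketch of the kind of comparison jam's numbers come from: the same filtered-out debug call with and without an extra vararg, run with `go test -bench=.`. The debugf helper below is a stand-in rather than loggo's code, and the absolute ns/op figures will vary by machine.)

```go
package logbench // e.g. in a file named level_bench_test.go

import (
	"sync/atomic"
	"testing"
)

var minLevel uint32 = 3 // pretend 3 == WARNING and 1 == DEBUG

// debugf mimics a leveled log call with varargs: when the level check
// fails, formatting is skipped, but building the args slice can still cost.
func debugf(format string, args ...interface{}) {
	if 1 < atomic.LoadUint32(&minLevel) {
		return // filtered out: this cheap path is what gets measured
	}
	_, _ = format, args // real code would Sprintf here (~600ns per jam)
}

func BenchmarkDebugNoArgs(b *testing.B) {
	for i := 0; i < b.N; i++ {
		debugf("no args")
	}
}

func BenchmarkDebugOneArg(b *testing.B) {
	for i := 0; i < b.N; i++ {
		debugf("one arg %d", i)
	}
}
```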
jam | compared to TestWriters which is 1574ns | 04:05 |
jam | so once you are actually formatting and writing stuff to a string | 04:05 |
thumper | so, reading around the atomicity of 32 bit reads and writes... | 04:05 |
jam | you are ~100x slower. | 04:05 |
thumper | seems fine on intel, and amd64 | 04:05 |
thumper | but not sure about arm | 04:05 |
jam | thumper: I *believe* that all platforms give you an atomic-word-aligned load because often pointers are used to implement atomic operations. | 04:06 |
thumper | I'll take that 10ns hit for a nicer method signature | 04:06 |
thumper | hmm... | 04:06 |
jam | thumper: http://stackoverflow.com/questions/9399026/arm-is-writing-reading-from-int-atomic | 04:06 |
thumper | jam: so a question is "does go word align integers?" | 04:07 |
thumper | I think the answer should be "I damn well hope so" | 04:08 |
jam | thumper: I don't know of anything that *doesn't* align objects unless you do crazy shit | 04:08 |
thumper | jam: like std::vector<bool>? | 04:08 |
jam | like use an integer pointer into a byte array to manually extract stuff (which I've done, but I've also gotten BUS errors on Mac doing it :) | 04:08 |
jam | thumper: does the stdlib treat it as wide pointers in memory? | 04:09 |
thumper | I don't recall | 04:09 |
jam | there is a fair amount of C code (especially string searching) that knows that on certain platforms | 04:09 |
jam | it is safe to do unaligned loads | 04:09 |
thumper | the general answer was don't do it | 04:09 |
jam | (intel amd is fine, PPC is not) | 04:09 |
thumper | ew | 04:10 |
jam | I think bzrlib itself has some, but also with platform checks. | 04:10 |
thumper | ok, so back to a global uint32 | 04:10 |
thumper | should be word aligned? | 04:10 |
thumper | how would we know? | 04:10 |
thumper | apart from taking the address of it | 04:10 |
jam | if you want to *know* then you have to take the address and compare mod 4, but there is reflect.Type.Align as well. | 04:11 |
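(A small sketch of both checks jam mentions: reflect.Type.Align reports the alignment the compiler guarantees for the type, and taking the address mod 4 checks a particular variable.)

```go
package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

var globalMinLevel uint32

func main() {
	// The alignment the compiler guarantees for the type...
	fmt.Println("type alignment:", reflect.TypeOf(globalMinLevel).Align())

	// ...and the address mod 4 for this particular variable.
	addr := uintptr(unsafe.Pointer(&globalMinLevel))
	fmt.Println("word aligned:", addr%4 == 0)
}
```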
thumper | yeah, you know what? | 04:13 |
thumper | I'm just going to use sync.atomic | 04:13 |
thumper | it isn't enough of a difference to care | 04:13 |
jam | thumper: did you check what the overhead is? If it is 10ns I agree, if it is 100 or 1000 then I might quibble | 04:14 |
thumper | jam: it is about 4ns | 04:14 |
thumper | on my machine | 04:14 |
davecheney | thumper: back now (lunch) | 04:14 |
davecheney | looks like you already got an answer | 04:15 |
thumper | davecheney: that's ok, me and jam have just been talking about atomic stuff | 04:15 |
davecheney | maybe you've already covered it | 04:15 |
davecheney | but there is a difference between an atomic write, and a write that is safely published | 04:15 |
thumper | what do you mean by safely published? | 04:16 |
davecheney | visible to another thread | 04:16 |
thumper | I don't care about delays | 04:16 |
thumper | just valid reads and writes | 04:16 |
davecheney | you'll probably be ok, but that is playing fast and loose with the memory model | 04:17 |
davecheney | ie, the delay could be infinite | 04:17 |
* thumper pulls a face | 04:17 | |
thumper | so how does one safely publish something? | 04:17 |
davecheney | sync/atomic or use a lock | 04:17 |
davecheney | ie, you need a memory barrier | 04:17 |
davecheney | or send it through a channel | 04:17 |
thumper | ah, yes, I decided to use sync/atomic | 04:17 |
davecheney | all good then | 04:18 |
thumper | ok, cool | 04:18 |
* thumper considers something else | 04:18 | |
thumper | so, theoretically I could have one go routine set a logging level on a logger, and not have it visible to another go routine? | 04:20 |
thumper | if not protected by locks? | 04:20 |
davecheney | correct | 04:20 |
thumper | or atomic reads/writes | 04:20 |
thumper | hmm... | 04:20 |
davecheney | yes | 04:20 |
thumper | poos | 04:20 |
thumper | davecheney: so, I have "type Level uint32" | 04:21 |
thumper | but I can't have a Level variable and use atomic.LoadUint32 | 04:21 |
thumper | because it complains about the casts | 04:22 |
thumper | so perhaps just better not to have the Level type? | 04:22 |
thumper | although I kinda like the String method on it | 04:22 |
thumper | or just have the Level at the public interface | 04:22 |
thumper | and use uint32 internally? | 04:22 |
davecheney | thumper: where is the code again | 04:22 |
thumper | launchpad.net/loggo | 04:22 |
* davecheney looks | 04:23 | |
davecheney | two secs | 04:23 |
thumper | particularly considering the func (logger Logger) SetLogLevel method | 04:23 |
* davecheney twiddles fingers | 04:23 | |
thumper | davecheney: also, FYI, I'm giving a talk on Go in about an hour | 04:24 |
davecheney | at the NZ meetup | 04:24 |
davecheney | sweet | 04:24 |
davecheney | > | 04:24 |
davecheney | ? | 04:24 |
davecheney | thumper: let me check one thing | 04:25 |
davecheney | i think when you use atomic.SetUint32, you also need to read using atomic.ReadUint32 | 04:26 |
davecheney | let me check in the channel | 04:26 |
thumper | davecheney: yes, was also using both | 04:27 |
thumper | read and write for sync | 04:27 |
davecheney | ok, cool | 04:27 |
thumper | atomic.StoreUint32 and LoadUint32 | 04:27 |
thumper | however | 04:27 |
davecheney | indeed | 04:27 |
davecheney | logically that would appear to be the correct usage | 04:27 |
thumper | currently for the logging levels, this isn't done | 04:27 |
thumper | just assigning to a Level variable | 04:28 |
thumper | which is a uint32 | 04:28 |
thumper | but the typing is just getting in the way of using the atomic functions to store and load | 04:28 |
thumper | which is frustrating | 04:28 |
davecheney | thumper: got the line(s) | 04:28 |
thumper | davecheney: I'm currently poking jam's branch | 04:28 |
thumper | which isn't currently pushed | 04:28 |
davecheney | ewww | 04:28 |
thumper | ewww what? | 04:29 |
davecheney | just paste the few lines into paste.ubuntu.com | 04:29 |
thumper | return level >= Level(atomic.LoadUint32(&globalMinLevel)) | 04:36 |
thumper | sorry, someone at the door | 04:36 |
thumper | that is what I have by saving globalMinLevel as a uint32 | 04:36 |
thumper | but I can't seem to have globalMinLevel as a Level, and still use atomic reads and writes | 04:37 |
thumper | davecheney: http://play.golang.org/p/pB65DtZrXr so this is what I want to do but can't | 04:39 |
davecheney | thumper: http://play.golang.org/p/0_ei1fpH9q | 04:39 |
thumper | davecheney: gah... ok, ta | 04:40 |
thumper | I'm pleased it is possible | 04:40 |
thumper | I was getting very frustrated | 04:40 |
davecheney | you need the parens to disambiguate the * deref | 04:40 |
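(The playground links above aren't reproduced here, but the pattern davecheney is pointing at is roughly the following: keep the named Level type and convert its address to *uint32, with the conversion in its own parentheses so it applies to the pointer rather than the dereferenced value. The constant names are illustrative.)

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Level keeps its own name (and could keep a String method) while the
// atomic calls see it as a plain uint32 via a pointer conversion.
type Level uint32

const (
	TRACE Level = iota
	DEBUG
	INFO
	WARNING
)

var globalMinLevel = WARNING

// The conversion needs its own parentheses -- (*uint32)(&globalMinLevel) --
// so it applies to the pointer, not to the dereferenced value.
func getGlobalMinLevel() Level {
	return Level(atomic.LoadUint32((*uint32)(&globalMinLevel)))
}

func setGlobalMinLevel(level Level) {
	atomic.StoreUint32((*uint32)(&globalMinLevel), uint32(level))
}

func main() {
	setGlobalMinLevel(DEBUG)
	fmt.Println(getGlobalMinLevel() >= TRACE) // true
}
```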
davecheney | please hold, arguing about StoreUint32 in #go-nuts | 04:41 |
thumper | davecheney: as in arguing the need for it? | 04:43 |
davecheney | not really | 04:44 |
davecheney | more wandering in blind confusion at this point | 04:44 |
thumper | interesting | 04:55 |
thumper | changing the access to the module.level to use atomics made 0.1ns of difference | 04:56 |
thumper | rather than the 4ns of difference changing the globalMinLevel for the writer | 04:56 |
davecheney | thumper: i think i missed that part of the timing discussion | 04:58 |
* davecheney looks for globalMinLevel | 04:58 | |
davecheney | thumper: honestly, if you're in the ns territory, it doesn't matter | 04:59 |
davecheney | even in the us territory | 04:59 |
thumper | davecheney: it is in the branch jam proposed | 05:01 |
thumper | not in trunk | 05:01 |
davecheney | can't see the code | 05:02 |
davecheney | but there are probably a few things going on there | 05:02 |
davecheney | 1. this is uncontended | 05:02 |
davecheney | no other CPU is stealing the cache line from the active one | 05:02 |
davecheney | so you're benchmarking the round trip to your L1/L2 cache | 05:02 |
davecheney | which is fair | 05:02 |
davecheney | most synchronisation is uncontended | 05:03 |
davecheney | i'm assuming because globalMinLevel has the word global in it | 05:04 |
davecheney | you're walking the logger tree to the root | 05:04 |
davecheney | correct ? | 05:04 |
thumper | I think so, but I'm off now | 05:05 |
thumper | perhaps we could chat tomorrow? | 05:05 |
davecheney | who's used debug-hooks recently | 06:00 |
davecheney | ? | 06:00 |
=== bigjools_ is now known as bigjools | ||
fwereade_ | davecheney, I haven't but I might be able to talk about it? | 06:20 |
davecheney | fwereade_: so I have a version of debug-hooks that replicates the ssh into tmux behavior | 06:21 |
davecheney | (stole the script straight out of py juju) | 06:21 |
davecheney | but it doesn't really look right | 06:21 |
davecheney | ie, shouldn't HOOK_CONTEXT be exported, blah blah ? | 06:21 |
fwereade_ | davecheney, yeah, I think so, HOOK_CONTEXT_ID is needed to make the hook tools run | 06:22 |
davecheney | and probably putting them in the path | 06:22 |
davecheney | but I think pyjuju cheats on that bit | 06:22 |
fwereade_ | davecheney, yeah -- it ought to be as close a copy of the normal hook environment as possible | 06:23 |
davecheney | fwereade_: did such a thing contextually exist in pyjuju ? | 06:23 |
fwereade_ | davecheney, there must have been *something* that communicated back with the main agent process... let me see if I can find it | 06:24 |
davecheney | fwereade_: i'm looking at control/debug_hooks.py | 06:24 |
fwereade_ | davecheney, you've seen the other half of it in hooks/executor.py? | 06:26 |
davecheney | fwereade_: nope | 06:26 |
davecheney | there was a concerning something.setdebug(true) | 06:27 |
davecheney | that i chose to ignore | 06:27 |
davecheney | fwereade_: even if there was a signal to the unit agent | 06:28 |
davecheney | i can't find any hook(sic) in the command that does anything but wait till it reaches that point | 06:28 |
davecheney | then ssh in | 06:28 |
fwereade_ | davecheney, I'm afraid I don't recall the experience of actually using debug-hooks | 06:30 |
davecheney | fwereade_: i saw marco use it once | 06:31 |
davecheney | https://codereview.appspot.com/9996043 | 06:31 |
davecheney | it works, but doesn't really do much more than juju ssh atm | 06:31 |
fwereade_ | davecheney, (but yeah, I think you can forget the set debug log bit for now -- it'd be worth syncing up with thumper on what he has planned in that direction though) | 06:31 |
davecheney | fwereade_: i'm trying to figure out if my facility is poor, or if the debug hooks world is more complicated in our juju | 06:32 |
fwereade_ | davecheney, there's definitely some crazy magic that needs to happen at the other end, so that we do actually somehow inject the appropriate hook context while a hook is being run | 06:33 |
* davecheney scrobbles in the code | 06:34 | |
davecheney | lucky(~/devel/juju/juju/hooks) % ack tmux | 06:35 |
davecheney | executor.py | 06:35 |
davecheney | 34:# The beauty below is a workaround for a bug in tmux (1.5 in Oneiric) or | 06:35 |
davecheney | 37:tmux new-session -d -s $JUJU_UNIT_NAME 2>&1 | cat > /dev/null || true | 06:35 |
davecheney | 38:tmux new-window -t $JUJU_UNIT_NAME -n {hook_name} "$JUJU_DEBUG/hook.sh" | 06:35 |
davecheney | oh ffs | 06:35 |
davecheney | that is what set debug does | 06:35 |
fwereade_ | davecheney, ah, got you, I misunderstood what you said before | 06:36 |
fwereade_ | davecheney, you will indeed need to set some state flag so the unit can know to do that | 06:37 |
davecheney | fwereade_: ok, i'll talk to le thump tomorrow | 06:37 |
fwereade_ | davecheney, although... we will also need to make sure we unset it, however the connection ends, surely | 06:38 |
* davecheney noted that python didn't appear to unset whatever it sets | 06:38 | |
fwereade_ | davecheney, oh, ffs, it uses a ZK ephemeral node that goes away when the cli client goes away | 06:41 |
* fwereade_ sighs deeply | 06:41 | |
fwereade_ | davecheney, ...we could maybe use a presence node with a session guid? cleaning up feels likely to be racy, but perhaps it's actually equivalent | 06:42 |
* davecheney throws up his hands | 06:43 | |
fwereade_ | davecheney, ZK ephemerals don't go away immediately either | 06:43 |
davecheney | this is over my head | 06:43 |
davecheney | so much for debug-hooks being 'simple' to implement | 06:43 |
fwereade_ | davecheney, yeah, I didn't have any recollection of that being the case myself | 06:43 |
davecheney | there was a suggestion of such on the ML | 06:43 |
fwereade_ | davecheney, roughly speaking, the presence stuff should enable the same stuff that ZK ephemerals do, so once you're operating at a certain level of rarefied abstraction I can imagine how it could seem simple :/ | 06:45 |
davecheney | sure, it's just programming, right ? | 06:45 |
fwereade_ | davecheney, it's just 1s and 0s, type faster | 06:45 |
fwereade_ | davecheney, and, hmm, we can't get the ownership guarantees with a pinger that we could with a ZK ephemeral node | 06:49 |
* davecheney goes to find whisky | 06:49 | |
* fwereade_ looks at the clock, and regretfully does not join davecheney | 06:50 | |
davecheney | fwereade_: it's always 5pm somewhere in the world | 06:51 |
fwereade_ | davecheney, ok, we *could* fake up everything we need, I think -- I just need to get up properly before I can discuss this sanely | 06:51 |
fwereade_ | davecheney, I'll be back in a few minutes; will you be free for a quick hangout then? I think I can give you a rough sketch of what we need and you can decide whether it's doable without bloodshed | 06:52 |
davecheney | fwereade_: i don't think it is worth it | 06:53 |
davecheney | this is over my head | 06:53 |
davecheney | i'll throw this card back into the pool and leave this command for someone else to use | 06:53 |
davecheney | this is a much bigger job than I thought | 06:54 |
davecheney | and this work is not scheduled for a good reason | 06:54 |
dimitern | morning | 06:55 |
fwereade_ | dimitern, heyhey | 07:10 |
fwereade_ | TheMue, heyhey | 07:10 |
rogpeppe | mornin' all! | 07:14 |
dimitern | fwereade_: hey | 07:14 |
dimitern | rogpeppe: morning | 07:14 |
TheMue | fwereade_, dimitern: heya | 07:14 |
rogpeppe | dimitern, TheMue: yo! | 07:14 |
dimitern | fwereade_: about to propose the machiner facade stuff | 07:14 |
TheMue | rogpeppe: oh, yes, welcome back too ;) | 07:14 |
fwereade_ | rogpeppe, heyhey | 07:14 |
fwereade_ | dimitern, cool | 07:14 |
rogpeppe | fwereade_: hiya! | 07:14 |
dimitern | rogpeppe: you might be surprised by my next CL :) | 07:14 |
rogpeppe | dimitern: how did you get on with that set of branches? | 07:14 |
dimitern | rogpeppe: didn't manage due to too much other stuff going on | 07:15 |
rogpeppe | dimitern: i haven't seen any emails go by, but that's i think because i never see any CLs i'm not directly involved in | 07:15 |
rogpeppe | dimitern: what's the "machiner facade" then? | 07:16 |
dimitern | rogpeppe: I did 3 separate refactoring proposals last week, all of them changing the API in a different way | 07:16 |
dimitern | rogpeppe: finally we agreed on how to move forward | 07:16 |
dimitern | rogpeppe: so basically we decided to get rid of srvMachine, srvUnit, api.Machine, api.Unit and related stuff | 07:16 |
dimitern | rogpeppe: and instead have srvMachiner, which implements a subset of the API only used by the machiner | 07:17 |
rogpeppe | dimitern: okay... what's the plan then? | 07:17 |
rogpeppe | dimitern: so you're putting the machiner in the API? | 07:17 |
dimitern | rogpeppe: and have lightweight "MachinerMachine" objects proxying calls through the facade | 07:17 |
rogpeppe | dimitern: i'm not sure i understand what you mean by that | 07:18 |
dimitern | rogpeppe: yeah, as an attempt to have an SOA-oriented approach, rather than replicate the state api directly | 07:18 |
* rogpeppe doesn't really know what "service-oriented architecture" actually means | 07:19 | |
dimitern | rogpeppe: you'll see shortly - the code will explain it better | 07:19 |
rogpeppe | dimitern: could you point me towards a place where you were discussing this, so i can get some background? | 07:19 |
dimitern | rogpeppe: take a look at the juju-dev ML messages and this document: https://docs.google.com/a/canonical.com/document/d/1Yd2Nil43AemnBq8qv003OkWLptiPzuhWGESBcwlI7Nc/edit#heading=h.fdyojyfogyn1 | 07:20 |
dimitern | rogpeppe: it's not up-to-date now, but the proposal implements what was agreed upon | 07:20 |
rogpeppe | dimitern: so, by "bulk operation" you mean that you have a set of objects and you perform the same operation on all of them at once? kinda like vector math but for objects? | 07:26 |
TheMue | rogpeppe: it's not in the typical sense of soa. but the style is service-oriented. instead of m := getMachine(4711); m.DoThis(...), it's machineService.DoThis(4711, ...). | 07:26 |
dimitern | rogpeppe: yes | 07:26 |
fwereade_ | rogpeppe, yeah, the consensus is essentially that domain objects always suck | 07:27 |
fwereade_ | rogpeppe, client-side, we'll still be faking them up, so the refactoring doesn't kill us | 07:27 |
dimitern | rogpeppe: instead of having Machine.SetStatus() -> Machiner(args []{Id, status, info}) -> []errorresults | 07:27 |
rogpeppe | fwereade_: i guess i don't quite see the issue. | 07:27 |
dimitern | rogpeppe: s/Machiner/Machiner.SetStatus/ | 07:27 |
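(A rough sketch of the bulk-call shape dimitern is describing: one facade method takes a slice of per-machine arguments and returns a slice of per-machine results. Type and field names here are illustrative, not the actual juju-core API.)

```go
package main

import "fmt"

// MachineStatus is one entry in a bulk SetStatus call: the machine id
// plus the status to set, so many machines can be updated in one request.
type MachineStatus struct {
	Id     string
	Status string
	Info   string
}

// ErrorResult carries the per-entry outcome; a nil Err means success.
type ErrorResult struct {
	Err error
}

// Machiner is the facade: one call covers N machines and returns N results.
type Machiner struct{}

func (m *Machiner) SetStatus(args []MachineStatus) []ErrorResult {
	results := make([]ErrorResult, len(args))
	for i, arg := range args {
		// A real server would look up and update each machine here.
		fmt.Printf("machine %s -> %s (%s)\n", arg.Id, arg.Status, arg.Info)
		results[i] = ErrorResult{}
	}
	return results
}

func main() {
	var m Machiner
	m.SetStatus([]MachineStatus{{Id: "1", Status: "started", Info: ""}})
}
```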
rogpeppe | fwereade_: how do domain objects always suck? | 07:28 |
fwereade_ | rogpeppe, well, the API mimics state at the moment, and thereby preserves all the mistakes we made with state -- but is much harder to change | 07:28 |
fwereade_ | rogpeppe, consider the provisioner | 07:29 |
fwereade_ | rogpeppe, grabbing N machine objects individually is plainly insane | 07:29 |
rogpeppe | fwereade_: it calls AllMachines, no? | 07:30 |
fwereade_ | rogpeppe, yes, and AllMachines is total unjustifiable crack | 07:30 |
rogpeppe | fwereade_: interesting p.o.v. | 07:30 |
rogpeppe | fwereade_: why so? | 07:30 |
fwereade_ | rogpeppe, it's a load of unnecessary data that'll go stale, and which doesn't allow for useful bulk ops -- if we wanted to do the provisioner right, we'd be asking for 1000 life statuses, then getting the instance ids of the alive 800 of those, then getting whatever the next subset of information is | 07:32 |
fwereade_ | rogpeppe, talking about individual machines renders this approach unworkable to the point you don't even consider it | 07:32 |
rogpeppe | fwereade_: we can get bulk ops easily by issuing multiple concurrent requests - but do you consider that an unworkable approach? | 07:33 |
fwereade_ | rogpeppe, yes, it's insane | 07:33 |
rogpeppe | fwereade_: that's a strong statement :-) | 07:33 |
fwereade_ | rogpeppe, makes it impossible to leverage the db's support for bulk ops | 07:33 |
rogpeppe | fwereade_: ah, that's a good point. | 07:34 |
fwereade_ | rogpeppe, I understand *why* it happened like this -- we swapped backends without considering what the state package should look like in the new context | 07:34 |
rogpeppe | fwereade_: and mongodb has such support? (other than bulk query ops)? | 07:35 |
fwereade_ | rogpeppe, bulk query ops are exactly what we need in general | 07:35 |
fwereade_ | rogpeppe, and in the cases where we need bulk change ops (currently few, I suspect will become more and more important as the project matures) | 07:35 |
rogpeppe | fwereade_: i thought query ops were very fast, and not likely to be a bottleneck for us | 07:35 |
fwereade_ | rogpeppe, we still want to be able to express them in a compact way even if we end up needing to write intent to a queue and handle the txn changes in smaller batches | 07:36 |
rogpeppe | fwereade_: is there a risk that we're making things harder for ourselves by making a significant architecture shift here that might actually be premature optimisation? | 07:36 |
fwereade_ | rogpeppe, I accept responsibility for that risk | 07:36 |
rogpeppe | fwereade_: have you made some measurements to support these decisions? | 07:37 |
fwereade_ | rogpeppe, no, because we can't scale far enough to tell in the first place | 07:39 |
dimitern | there it is -> https://codereview.appspot.com/9896046/ | 07:43 |
* dimitern ducks for cover with a steak :) | 07:43 | |
fwereade_ | rogpeppe, please read through the lists for better context than IRC | 07:44 |
rogpeppe | fwereade_: which lists? | 07:44 |
fwereade_ | rogpeppe, juju-dev | 07:44 |
fwereade_ | rogpeppe, on a related note, btw, what is the deal with Pinger? | 07:44 |
rogpeppe | fwereade_: i haven't seen anything on juju-dev since dimiter's response to my handoff email | 07:45 |
fwereade_ | rogpeppe, have you switched to gmail yet? | 07:45 |
rogpeppe | fwereade_: ahhh | 07:45 |
rogpeppe | fwereade_: that will be the issue | 07:45 |
rogpeppe | fwereade_: i use gmail anyway - i need to point it to my canonical gmail i guess | 07:45 |
rogpeppe | fwereade_: it's a good thing actually that i didn't see any of this while i was away on holiday :-) | 07:46 |
fwereade_ | rogpeppe, ha, yeah | 07:46 |
TheMue | rogpeppe: ;) | 07:47 |
rogpeppe | fwereade_: what about Pinger? | 07:47 |
fwereade_ | rogpeppe, I was very clear that I did not want pinger in the API | 07:47 |
rogpeppe | fwereade_: ah, what would you like to call it? | 07:48 |
fwereade_ | rogpeppe, and that I wanted it out of machiner so we could move forward without it complicating the issue, and give ourselves some breathing room to make a final decision without immediate pressure | 07:48 |
fwereade_ | rogpeppe, I don't think it's justifiable at all in the API, myself, as you know | 07:49 |
=== tasdomas_afk is now known as tasdomas | ||
rogpeppe | fwereade_: ah, so we don't want the agents telling the state they're alive at all? | 07:49 |
fwereade_ | rogpeppe, we certainly do not want the machiner pretending it's the machine agent | 07:49 |
rogpeppe | fwereade_: what *is* the machine agent? | 07:50 |
fwereade_ | rogpeppe, the bit that runs the various workers | 07:50 |
fwereade_ | rogpeppe, not the workers themselves | 07:50 |
rogpeppe | fwereade_: presumably *some* worker has got to do it, no? | 07:50 |
fwereade_ | rogpeppe, I'm not sure why that would be the case at all | 07:51 |
fwereade_ | rogpeppe, why not keep it purely server-side? | 07:51 |
rogpeppe | fwereade_: how does the server side know a client is around? | 07:51 |
rogpeppe | s/client/agent/ | 07:51 |
fwereade_ | rogpeppe, I *hope* it can tell, otherwise it'll be running a lot of watchers for a client that's disconnected | 07:52 |
fwereade_ | rogpeppe, anyway, the reason I wanted it out of the API was so that we could have this discussion independently of the critical path | 07:53 |
rogpeppe | fwereade_: so you're suggesting that the API see a connection from a given agent and run a pinger on its behalf as a result of that connection being made? | 07:54 |
fwereade_ | rogpeppe, that is indeed a possibility, as we discussed in detail before I went away on holiday | 07:54 |
rogpeppe | fwereade_: i remember a few discussions in that area. i hadn't realised you didn't want the pinger in the API at all though, sorry. | 07:55 |
rogpeppe | fwereade_: istr the suggestion to put it in the Agent.Entity call, which didn't work out | 07:56 |
fwereade_ | rogpeppe, I wanted it out of machiner so that we could discuss the answer to this question separately, without blocking the machiner work | 07:56 |
rogpeppe | fwereade_: ok, so let's just remove it then | 07:56 |
dimitern | rogpeppe, fwereade_: https://codereview.appspot.com/9896046/ ? | 08:08 |
fwereade_ | dimitern, I'm reading it right now | 08:08 |
dimitern | (it's big, but there are mostly removals) | 08:09 |
=== TheRealMue is now known as TheMue | ||
fwereade_ | dimitern, reviewed with a few thoughts | 08:37 |
dimitern | fwereade_: cheers | 08:37 |
fwereade_ | dimitern, some of them are a bit vague | 08:37 |
fwereade_ | dimitern, Machines in particular is maybe really just Exists()? | 08:38 |
fwereade_ | dimitern, but I'm ambivalent about the Refresh()/Life() thing in particular | 08:39 |
dimitern | fwereade_: i think it's a good idea for Refresh calling Life and caching | 08:39 |
fwereade_ | dimitern, so long as we implement the API sanely, I think it's sensible to keep the *Machine interface as close as possible | 08:39 |
fwereade_ | dimitern, cool | 08:39 |
fwereade_ | dimitern, so long as we all know that's not how it "should" be long-term | 08:40 |
dimitern | fwereade_: yeah | 08:40 |
dimitern | fwereade_: not sure I get the point of converting NotFound into Dead? | 08:40 |
dimitern | fwereade_: that's surely the same way state works | 08:41 |
fwereade_ | dimitern, yeah, but all it means is that all the client code has to go around specifically handling NotFound and handling it as Dead ;p | 08:42 |
fwereade_ | dimitern, we kinda BDUFed the lifecycle stuff and that's one of the ickiness points | 08:42 |
fwereade_ | dimitern, there may or may not in general be a distinction between Dead and NotFound, and it's situational | 08:43 |
fwereade_ | dimitern, I suspect that if we're asking for Life explicitly it should probably be reported as Dead -- although this then makes auth errors interesting | 08:44 |
* fwereade_ grumbles at the world | 08:44 | |
dimitern | fwereade_: well, how about a true "not found" case? | 08:44 |
dimitern | fwereade_: i mean you asked for a machine that was never there | 08:44 |
fwereade_ | dimitern, ok, lets start with Machines() -- what's the use case there on the server side? to get a domain object, we need to discover its Life, but that's the only call we need -- we already know the ID | 08:46 |
fwereade_ | dimitern, not-auth is actually a reasonable response to a Life query I guess | 08:46 |
fwereade_ | dimitern, (in general, anyway...) but from the POV of the machiner, I think that reporting notfound as Dead is actually quite useful... am I making sense here, or is this all too dependent on my internal context? | 08:48 |
fwereade_ | dimitern, it's like calling EnsureDead on a machine that doesn't exist -- it succeeds | 08:48 |
dimitern | fwereade_: do you mean that only for life? | 08:48 |
dimitern | fwereade_: i.e. Machines will still return not found as usual | 08:48 |
fwereade_ | dimitern, I'm trying to figure out what the use case is for Machines() | 08:49 |
dimitern | fwereade_: what? no - ensuredead fails on a non-existing machine through the API | 08:49 |
dimitern | fwereade_: because we need to get the machine first | 08:49 |
dimitern | fwereade_: but i guess in state it operates on cached life and might succeed | 08:49 |
fwereade_ | dimitern, that's an interesting behaviour change | 08:49 |
fwereade_ | dimitern, g+ a mo? | 08:50 |
dimitern | fwereade_: ok, just a sec | 08:51 |
dimitern | fwereade_: https://plus.google.com/hangouts/_/971de9659aecd256626b1d52513288375093bf72?authuser=0&hl=en | 08:52 |
dimitern | rogpeppe: can you take a look as well please? https://codereview.appspot.com/9896046/ | 09:23 |
rogpeppe | dimitern: i'm currently going through all the backlog | 09:24 |
dimitern | rogpeppe: ok | 09:25 |
=== rogpeppe2 is now known as rogpeppe | ||
dimitern | fwereade_: if i remove the authEnvironManager from the allowed perm checks | 10:25 |
dimitern | fwereade_: then it kinda defeats the point of having bulk operations - any method on the machiner will only ever succeed for the machine this machiner is responsible for | 10:26 |
dimitern | fwereade_: do you know what I mean? | 10:34 |
dimitern | fwereade_: anyway, it's updated now https://codereview.appspot.com/9896046/ | 11:06 |
wallyworld | TheMue: hey frank, thanks for the review. as written, i think the provisioner tweak is safe to land now so i'd like to do that and then your changes can come along later? | 11:09 |
jam | fwereade_: so, we currently have a test that is broken with latest mgo. And it will prevent us from using the juju-core go-bot. Care to give direction about how to fix it? | 11:13 |
jam | TestOpenDoesNotDelayOnHandShakeFailure | 11:14 |
jam | it was written by Dave, because he implemented the logic to have juju-core delay if it gets a connection failure, but *not* delay if it gets a TLS handshake failure. | 11:15 |
jam | The test now fails because mgo unconditionally delays 500ms on *any* failure. | 11:15 |
TheMue | wallyworld: it's ok for me. the information ContainerType() is returning matches exactly my needs. | 11:17 |
wallyworld | great :-) | 11:17 |
TheMue | wallyworld: just code a little outline where it better can be seen how it will be used. | 11:17 |
TheMue | wallyworld: eh, not "just code", but "i'm just coding" ;) | 11:17 |
wallyworld | TheMue: ok. so are you wanting to land your work before mine? wouldn't mine need to land before yours? | 11:18 |
wallyworld | so you can use the new ContainerType() method? | 11:19 |
fwereade_ | dimitern, the remaining point of bulk operations is in habit and consistency; and in that we *can't* really predict what a machiner will ultimately be responsible for, and we get some future-proofing by allowing ourselves to express multiple ops if we ever need them | 11:19 |
fwereade_ | dimitern, I do agree that the machiner is not in itself a compelling use case for bulk ops | 11:19 |
dimitern | fwereade_: sure, np | 11:19 |
wallyworld | fwereade_: i think you'll be happy with this now, hopefully https://codereview.appspot.com/9820043/ | 11:20 |
dimitern | fwereade_: wanna take a look now? | 11:20 |
fwereade_ | dimitern, wallyworld, thanks both, I'll take a look | 11:20 |
wallyworld | thanks | 11:20 |
fwereade_ | jam, I would provisionally be ok dropping that test and that behaviour -- it seems like it's been taken out of our hands with the mgo change | 11:21 |
jam | fwereade_: well *a* fix is to get something upstreamed | 11:21 |
jam | but an *easier* fix is to just drop it :) | 11:21 |
fwereade_ | jam, but it would be good to have a word with davecheney for a bit more context | 11:21 |
jam | fwereade_: AIUI the initial issue was that mgo always retried without any delay, which punished things a bit | 11:22 |
jam | so we put in a delay, and now so has mgo directly. | 11:22 |
fwereade_ | wallyworld, what were your thoughts re unit-dirty vs container-dirty? | 11:22 |
fwereade_ | jam, in that case I would be happy dropping it, assuming davecheney's approval | 11:23 |
fwereade_ | just in case i missed something | 11:23 |
TheMue | wallyworld: yours imho can already land, yes | 11:24 |
wallyworld | fwereade_: hmmm. i must confess i forgot that, if it was raised as a question. sorry. the flag as implemented is a unit-dirty flag i guess. but that's all we need now i think | 11:24 |
wallyworld | TheMue: thanks. i promise to fix that method comment if you +1 it :-) | 11:25 |
fwereade_ | wallyworld, fair enough -- I think there will be some interestingness in future though -- possibly I raised it in a different CL | 11:25 |
TheMue | wallyworld: great | 11:26 |
wallyworld | fwereade_: the idea now is that a unit can be deployed if unit-clean=true, since if a machine has containers, it doesn't really matter for that case | 11:26 |
wallyworld | or so i understand | 11:26 |
fwereade_ | wallyworld, I'm just a bit antsy about what we really want to express -- "clean" and "unused" both feel like rational and distinct requests | 11:26 |
fwereade_ | wallyworld, I'm willing to call this progress, though -- nothing's using "unused" at the moment, right? | 11:27 |
fwereade_ | wallyworld, I *will* be nervous about reactivating unused while it's not able to take constraints into account, though | 11:27 |
wallyworld | fwereade_: for now, afaiui, we only need care about clean rather than unused (as far as containerisation goes) | 11:27 |
wallyworld | fwereade_: i think the current AssignUnused really should be AssignClean | 11:28 |
wallyworld | since that's the semantic we are really aiming for right now | 11:28 |
wallyworld | afaiui | 11:28 |
fwereade_ | wallyworld, I'd be +1 on a rename there, but you don't need it in this CL | 11:28 |
danilos | mumble trouble :/ | 11:28 |
mgz | you're hoping a lot ::) | 11:29 |
wallyworld | fwereade_: yes, i agree with the rename. as you say, that's part of the evolution of this work and not for this mp | 11:29 |
wallyworld | that also matches my thinking of the issue | 11:29 |
jam | danilos: we've put you in another room for now, hopefully you can get it working again. | 11:30 |
danilos | jam: I am trying | 11:30 |
danilos | jam: works fine then kicks me out 5 seconds later :/ | 11:31 |
jam | you could try restarting completely... | 11:31 |
fwereade_ | wallyworld, you have an LGTM, sorry I let that one linger | 11:31 |
fwereade_ | dimitern, I'm on yours now | 11:31 |
wallyworld | np, thanks | 11:31 |
dimitern | fwereade_: cheers! | 11:32 |
danilos | jam: I can, if you mean rebooting? (I've killed mumble and tried again, fwiw) | 11:32 |
jam | that is what I meant, though we could also switch to a hangout | 11:32 |
jam | wallyworld, danilos, mgz: https://plus.google.com/hangouts/_/8868e66b07fa02bdc903be4601200d470dae9ee3 | 11:33 |
jam | dimitern: ^^ | 11:33 |
dimitern | fwereade_: blast, I realized I have to add MachinerMachine.EnsureDead() and also add client-side tests for the machiner | 11:53 |
fwereade_ | dimitern, you *could* very happily just strip out the client-side code from this CL, and repropose those with tests in a new one | 11:54 |
dimitern | fwereade_: good idea, will do | 11:57 |
dimitern | fwereade_: so the pipeline will be: 2) client-side + tests, 3) split suites, 4) split (apiserver|api)/machiner into a separate subpackage, 5) implement Machiner.Watch, | 11:58 |
dimitern | fwereade_: sounds good? | 11:58 |
fwereade_ | dimitern, SGTM | 11:59 |
fwereade_ | dimitern, and LGTM | 11:59 |
dimitern | fwereade_: tyvm | 11:59 |
fwereade_ | dimitern, (although lt me know your thoughts on the comments, and if you decide they're candidates for this CL so much the better) | 11:59 |
fwereade_ | dimitern, I'm very happy with splitting Authorizer out | 12:00 |
dimitern | TheMue: since you're the OCR today, can you have a look as well? https://codereview.appspot.com/9896046/ (disregard the state/api/machiner.go stuff - will split it in a follow-up) | 12:00 |
TheMue | dimitern: yep | 12:00 |
dimitern | fwereade_: i'm looking at your review and will ask if something is unclear | 12:01 |
danilos | TheMue, hi, I wonder if you can take a look at https://codereview.appspot.com/9876043/? | 12:32 |
TheMue | danilos: *click* | 12:33 |
danilos | TheMue, thanks :) | 12:33 |
TheMue | danilos: Done. | 12:43 |
danilos | TheMue, thanks | 12:44 |
fwereade_ | wallyworld, ping | 13:13 |
wallyworld | hi | 13:13 |
fwereade_ | wallyworld, free for a chat about containers? | 13:13 |
wallyworld | sure | 13:13 |
fwereade_ | wallyworld, I'll start one | 13:13 |
mgz | can I sit in? | 13:14 |
wallyworld | oh, alright | 13:14 |
fwereade_ | mgz, sure, I'll invite, just a sec | 13:14 |
dimitern | fwereade_, TheMue: next one - https://codereview.appspot.com/9686047/ | 13:34 |
=== tasdomas is now known as tasdomas_afk | ||
TheMue | dimitern: *click* | 13:43 |
fwereade_ | wallyworld, mgz, ffs, sorry | 13:45 |
mgz | no problem | 13:45 |
fwereade_ | waiting for plus.google.com... | 13:46 |
* fwereade_ sighs | 13:47 | |
fwereade_ | wallyworld, mgz, it really doesn't want to talk to me today :/ | 13:48 |
wallyworld | fwereade_: try mumble? | 13:49 |
fwereade_ | wallyworld, I really ought to have set that up at some point since I started here, shouldn't I :/ | 13:49 |
wallyworld | lol | 13:49 |
=== wedgwood_away is now known as wedgwood | ||
fwereade_ | I'll bounce my router on general principles, bbiab | 13:50 |
wallyworld | kk | 13:50 |
dimitern | TheMue: thanks | 13:50 |
frankban | rogpeppe, dimitern,anyone else: I need another review for https://codereview.appspot.com/9641044/ . could you please take a look? | 13:50 |
TheMue | dimitern: yw | 13:52 |
dimitern | frankban: looking | 13:54 |
frankban | dimitern: thanks | 13:54 |
dimitern | fwereade_, TheMue: and another small one: https://codereview.appspot.com/10003044/ | 13:54 |
fwereade__ | TheMue, mramm, kanban? | 14:02 |
dimitern | frankban: LGTM | 14:03 |
frankban | dimitern: great, thank you. The test with a float is already present as part of the schema tests. | 14:06 |
dimitern | frankban: wasn't sure, but ok | 14:06 |
TheMue | sh..., sorry, just a technician arrived *grmpf* | 14:07 |
fwereade__ | grar | 14:13 |
fwereade__ | google dislikes me today | 14:13 |
fwereade__ | and it really is just google | 14:17 |
dimitern | fwereade__: :/ | 14:18 |
fwereade__ | launchpad is positively sprightly by comparison | 14:18 |
* fwereade__ sighs deeply | 14:23 | |
fwereade__ | dimitern, rogpeppe1, mramm, I think I'm going to give up on this and go reboot ALL THE THINGS | 14:25 |
=== BradCrittenden is now known as bac | ||
dimitern | TheMue: https://codereview.appspot.com/10003044/ | 14:58 |
dimitern | also second reviewer needed on https://codereview.appspot.com/9686047/ | 14:58 |
TheMue | dimitern: *click* | 14:59 |
dimitern | TheMue: cheers | 14:59 |
dimitern | TheMue: tyvm | 15:23 |
dimitern | need to relax a bit, i'm off for now; might be back later | 15:24 |
=== BradCrittenden is now known as bac | ||
=== deryck is now known as deryck[lunch] | ||
* rogpeppe2 is done for the day. g'night all. | 17:07 | |
=== deryck[lunch] is now known as deryck | ||
=== Daviey_ is now known as Daviey | ||
=== _mup___ is now known as _mup_ | ||
=== benji_ is now known as benji | ||
=== _mup___ is now known as _mup_ | ||
thumper | fwereade_: around? | 21:36 |
* thumper sighs | 21:46 | |
thumper | can anyone else confirm a build failure with trunk? | 21:46 |
* thumper wonders where our tarmac committer is | 21:46 | |
thumper | grr | 21:57 |
thumper | dimitern_: you broke trunk, naughty naughty | 21:58 |
thumper | r1247 | 21:58 |
thumper | jujud tests fail to build | 21:58 |
wallyworld | thumper: tarmac is almost ready - was waiting on a failing mongo test to be fixed | 21:59 |
thumper | wallyworld: hi there | 21:59 |
wallyworld | but that test will be deleted | 21:59 |
wallyworld | hi | 21:59 |
wallyworld | how's the dog? | 22:00 |
thumper | wallyworld: how am I supposed to know that? trunk build fails, this is bad | 22:03 |
thumper | dog is fine, was sleeping | 22:03 |
wallyworld | thumper: no, separate issue | 22:03 |
thumper | now is staring at me | 22:03 |
wallyworld | test failure (resulting from upstream mongo changes) was preventing tarmac bot being deployed | 22:04 |
wallyworld | and don't you love how we just pull upstream from tip so we are not isolated from breaking changes | 22:04 |
thumper | yeah, it's awesome | 22:04 |
wallyworld | ah, who needs dependency management | 22:04 |
thumper | didn't kapil have a dep management thing to add in? | 22:05 |
thumper | hazmat: where is that? | 22:05 |
wallyworld | yeah, but now someone else has proposed yet another Go solution | 22:05 |
thumper | wallyworld: do you know what needs to be fixed in the failing test? | 22:05 |
thumper | wallyworld: I'm not sure what the test is trying to test | 22:05 |
wallyworld | thumper: gopm or something. but given how the last person who proposed something was shot down in flames, i'm not optimistic | 22:06 |
thumper | who proposed a go thing? | 22:06 |
wallyworld | thumper: no idea. i just looked at irc and saw your comments. i know nothing of the build failure yet | 22:06 |
wallyworld | the failing test is being deleted | 22:06 |
thumper | you said someone proposed a go solution... | 22:06 |
thumper | i was asking about that | 22:06 |
wallyworld | yeah, i was told, let me try and find something | 22:06 |
wallyworld | thumper: https://groups.google.com/forum/?fromgroups#!topic/golang-nuts/k8pmk8FQC8w | 22:07 |
wallyworld | https://github.com/GPMGo/gopm-api/ | 22:08 |
wallyworld | so just a proposal really it seems at this stage | 22:08 |
wallyworld | thumper: if you add your +1 to this i can land it https://codereview.appspot.com/9820043/ | 22:11 |
fwereade_ | thumper, hey dude | 22:13 |
fwereade_ | thumper, sorry I haven't been around much at sociable hours the last few days | 22:14 |
thumper | wallyworld: sorry, doggy break needed | 22:21 |
thumper | fwereade_: hey | 22:21 |
thumper | wallyworld: we shouldn't land anything new until trunk is fixed | 22:21 |
wallyworld | thumper: sure, i just want to get it unblocked, not going to land immediately | 22:22 |
thumper | fwereade_: so trunk is broken due to r1247 | 22:22 |
thumper | fwereade_: just wondering what to do with the now failing test | 22:22 |
thumper | wallyworld: ack, I'll look shortly | 22:22 |
wallyworld | no hurry | 22:22 |
* thumper sighs | 22:28 | |
thumper | it isn't obvious how to fix this test | 22:28 |
* thumper comments out the whole test | 22:28 | |
thumper | wallyworld: can I get a +1 trivial on this? Rietveld: https://codereview.appspot.com/10022043 | 22:44 |
thumper | I could just merge it in, but someone else agreeing helps | 22:44 |
* wallyworld looks | 22:44 | |
wallyworld | thumper: done. i had a quick look too but it wasn't immediately obvious what the replacement api to call was | 22:46 |
thumper | yeah | 22:46 |
wallyworld | we could have found it i guess, but other things to do | 22:46 |
fwereade__ | thumper, sorry, just getting up to date | 22:49 |
fwereade__ | thumper, oh, hell, what's broken? | 22:49 |
thumper | fwereade__: just a test, see review just above | 22:50 |
fwereade__ | thumper, ah, ok, the issue is that API no longer has a .Machine()? | 22:51 |
fwereade__ | thumper, I'd just drop that bit | 22:51 |
fwereade__ | thumper, being able to log in should be evidence enough that something's serving the API | 22:51 |
thumper | fwereade__: well, actually submitted already | 22:51 |
thumper | :) | 22:51 |
fwereade__ | thumper, ok, just mail dimitern_ with some light bitching about running *all* the tests then ;p | 22:52 |
* thumper nods | 22:52 | |
thumper | fwereade__: do you have time for a quick catch up? | 22:52 |
thumper | like a hangout? | 22:52 |
fwereade__ | thumper, sure, woudl you start one please? with you in 2 | 22:53 |
thumper | fwereade__: ok | 22:53 |
hazmat | thumper, it's kinda lame.. ie it works for the ci use case only. it's at lp:goreq | 22:53 |
thumper | fwereade__: https://plus.google.com/hangouts/_/69cccc01076c5b15bb3afbf54ba00501977e7b80?hl=en | 22:54 |
hazmat | there's better vcs management in go juju's deployer impl lp:juju-deployer/darwin | 22:54 |
hazmat | definitely a few go build tools popping up | 22:55 |
thumper | not surprising | 22:55 |
thumper | the problem hits everyone | 22:55 |
hazmat | besides the ones on the golang list.. there's also mozilla's heka-build tool which supports compile time plugins. | 22:55 |
hazmat | as well as the frozen/repeatable vcs version sets | 22:55 |
hazmat | the latter of which is all goreq does, update a tree/gopath to a known set of versions | 22:56 |
fwereade__ | mramm, ping | 23:03 |
thumper | mramm: are you alive? | 23:03 |
mramm | thumper: fwereade__: yep I'm here | 23:03 |
mramm | and alive | 23:04 |
thumper | mramm: can you join us in a hangout? | 23:04 |
mramm | sure | 23:04 |
thumper | mramm: https://plus.google.com/hangouts/_/69cccc01076c5b15bb3afbf54ba00501977e7b80?hl=en | 23:04 |
=== wedgwood is now known as wedgwood_away | ||
thumper | wallyworld: got a few minutes? | 23:34 |
wallyworld | thumper: on a csll | 23:34 |
wallyworld | call | 23:34 |
thumper | wallyworld: ack | 23:34 |
thumper | wallyworld: I'm going to go to the gym in about 20 minutes, so perhaps we'll chat when I'm back? | 23:35 |
wallyworld | sure | 23:35 |