[00:09]  * thumper is now grumpy, sore and hungry
[00:09] <thumper> specialist appointment for shoulder isn't until 26th of November
[00:14] <davecheney> o_O
[00:17] <menn0> thumper: bugger
[00:17] <menn0> thumper: sounds very NHS-ish
[00:18] <menn0> thumper, davecheney: so what are we doing about the standup? I have an errand to do so just trying to plan the rest of the day.
[00:22] <thumper> menn0, davecheney: now?
[00:22] <menn0> thumper, davecheney: works for me
[00:23] <davecheney> sure, i'll see you in the hangout
[01:04] <davecheney> menn0: one for you http://reviews.vapour.ws/r/120/diff/
[01:04] <menn0> davecheney: cool. will look shortly.
[01:06] <davecheney> kk
[01:34] <menn0> davecheney: apparently the review request is private and I'm not allowed to look
[01:45] <menn0> davecheney: I don't think you've hit publish yet
[01:50] <davecheney> menn0: wut
[01:50] <davecheney> is there a flag to rbt to say "yes, i'd actually like to review this "
[01:50] <davecheney> done
[01:50] <davecheney> menn0: try again please
[01:53] <davecheney> menn0: i've got a few more changes like this that are trying to pull apart the multi watcher to make it more understandable (for me)
[02:00] <menn0> davecheney: finished chatting to Tim now. looking at your change.
[02:02] <menn0> davecheney: Ha ... I kept reading InfoId and "Infold"
[02:02] <menn0> s/and/as/
[02:04] <thumper> axw: sha'ping
[02:04] <davecheney> rb is a useless sack of shit
[02:04] <davecheney> if I do rbt post on a branch that has already been posted, it creates a new review
[02:05] <davecheney> % rbt publish 120
[02:05] <davecheney> ERROR: Error publishing review request (it may already be published): Object does not exist (HTTP 404, API Error 100)
[02:05] <davecheney> why is this an error ?
[02:05] <davecheney> can't the tool look at the status of the review and see it's been published
[02:07] <davecheney> menn0: what is the command to upload a new diff to an existing review ?
[02:07] <menn0> "rbt post -r <the review id>"
[02:07] <davecheney> can rbt remember that an existing branch is attached to a review ?
[02:09] <davecheney> ok, that review is totally screwed
[02:09] <davecheney> just getting 500's
[02:09] <davecheney> i'll make another one
[02:09] <davecheney> http://reviews.vapour.ws/r/120/diff/#
[02:09] <davecheney> should be correct now
[02:10] <menn0> davecheney: review of 120 done. thumper needs to meta-review.
[02:11] <davecheney> menn0: I have to remove that type because otherwise I cannot break the import loop between apiserver/params and state/multiwatcher
[02:11] <davecheney> menn0: fwiw, i disagree with your comment
[02:11] <davecheney> this type adds nothing in terms of type safety
[02:11] <menn0> davecheney: I know you do, we've discussed this before :)
[02:12] <davecheney> it leads people to think that f(id multiwatcher.InfoId) takes things of a specific type
[02:12] <davecheney> (it doesn't)
[02:12] <menn0> davecheney: I know it doesn't add type safety but it does add the other benefits I mentioned
[02:12] <davecheney> and I am neutral on the readability argument
[02:12]  * thumper looks
[02:12] <menn0> davecheney: if it helps with the import loop breaking then let's do it. that's a bigger win.
[02:13] <davecheney> kk
[02:13] <menn0> davecheney: probably should have mentioned that reason in the commit message or review description
[02:14] <davecheney> yes, my bad
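The type-safety point being argued here can be made concrete. Assuming `multiwatcher.InfoId` was declared as a named empty interface (as the discussion implies), a function taking it will compile against any argument at all, so the name documents intent without constraining callers. A minimal sketch with a hypothetical stand-in type:

```go
package main

import "fmt"

// InfoId is a hypothetical stand-in for multiwatcher.InfoId as the
// discussion implies it was declared: a named empty interface.
type InfoId interface{}

// lookup claims to take an InfoId, but because InfoId is an empty
// interface the compiler accepts any value whatsoever.
func lookup(id InfoId) string {
	return fmt.Sprintf("%T", id)
}

func main() {
	fmt.Println(lookup("machine-0")) // string
	fmt.Println(lookup(42))          // int: compiles fine, no safety gained
}
```

This is exactly davecheney's objection: `f(id multiwatcher.InfoId)` reads as if it takes a specific type, but any value passes the compiler.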
[02:14]  * menn0 is out for a bit. errands to run.
[02:36] <davecheney> reviewboard is telling me I have -1 Open incoming reviews
[02:36] <davecheney> now i have -2
[02:50] <davecheney> anyone seen this failure ?
[02:50] <davecheney> http://paste.ubuntu.com/8452331/
[02:50] <davecheney> it's sporadic
[02:56] <davecheney> http://reviews.vapour.ws/r/122/diff/
[03:22] <davecheney> menn0: can you tell me how to do a dependent change with rbt ?
[03:22] <davecheney> i'm guessing there is a flag for rbt post
[03:28] <menn0> davecheney: I've never done it myself but I believe it hinges on the --parent option
[03:30] <menn0> davecheney: re that failure - I have seen it before but it looks like a ordering problem that might be fixed by using SameContents
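The class of fix menn0 suggests is an order-insensitive comparison: gocheck's `SameContents` checker asserts two slices hold the same elements regardless of order, which papers over nondeterministic result ordering in tests. A rough sketch of the property it checks, written for string slices:

```go
package main

import (
	"fmt"
	"sort"
)

// sameContents reports whether two slices hold the same elements in any
// order — the property gocheck's SameContents checker asserts. Sorting
// copies avoids mutating the caller's slices.
func sameContents(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	ac := append([]string(nil), a...)
	bc := append([]string(nil), b...)
	sort.Strings(ac)
	sort.Strings(bc)
	for i := range ac {
		if ac[i] != bc[i] {
			return false
		}
	}
	return true
}

func main() {
	got := []string{"unit-b", "unit-a"} // order may vary run to run
	want := []string{"unit-a", "unit-b"}
	fmt.Println(sameContents(got, want)) // true
}
```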
[03:36] <davecheney> menn0: same
[03:36] <davecheney> lp has forgotten my login
[03:36] <davecheney> i'll log a bug when I go downstairs and get my key for 2fa
[03:36] <menn0> davecheney: k
[03:37] <davecheney> menn0: if you have time, http://reviews.vapour.ws/r/122/diff/
[03:37] <davecheney> opens the door to my next fix which makes params.EntityId an interface
[03:39] <menn0> davecheney: give me a little while
[03:42] <davecheney> client_test.go:2448: c.Assert(client.AddCharm(curl), gc.IsNil, gc.Commentf("goroutine %d", index))
[03:42] <davecheney> ... value *params.Error = &params.Error{"", "cannot add charm to storage: unexpected deletion of resource catalog entry with id \"47d6e7812099383e8266f590d37b7f5369ef43931aecc72a4b86674e7052f346a391648
[03:42] <davecheney> 209b8e69ea5b649d2c0e86da0\": Resource not available because upload is not yet complete"} ("cannot add charm to storage: unexpected deletion of resource catalog entry with id \"47d6e7812099383e8266f590d
[03:42] <davecheney> 37b7f5369ef43931aecc72a4b86674e7052f346a391648209b8e69ea5b649d2c0e86da0\": Resource not available because upload is not yet complete")
[03:42] <davecheney> ... goroutine 0
[03:42] <davecheney> [LOG] 0:00.462 DEBUG juju.storage managed resource entry created with path "environs/90168e4c-2f10-4e9c-83c2-feedfacee5a9/charms/cs:precise/wordpress-3-082e539d-946e-44ef-8684-489fc4bcfc3f" -> "47d6e78
[03:42] <davecheney> 12099383e8266f590d37b7f5369ef43931aecc72a4b86674e7052f346a391648209b8e69ea5b649d2c0e86da0"
[03:42] <davecheney> more intermediate failures
[03:51] <thumper> axw: ping
[03:55] <thumper> menn0: where are we on these branches?
[03:55] <thumper> davecheney: re 122 above, shipit
[03:55] <menn0> thumper: just about to push the API server change again
[03:55] <davecheney> thumper: ta
[03:56] <menn0> thumper: then one more manual test of the env uuid unit change and that can be merged
[03:56] <thumper> menn0: ok
[03:56] <thumper> menn0: let me know when I need to look at that review again
[03:57] <menn0> thumper: just manually testing that branch now
[03:58] <thumper> kk
[04:01] <menn0> thumper: ok. http://reviews.vapour.ws/r/119/
[04:01]  * thumper looks
[04:03] <davecheney> menn0: still LGTM
[04:03] <davecheney> trying to auth as an arbitrary user is pretty gross
[04:04] <davecheney> would you consider raising a ticket/card/smoke signal
[04:04] <davecheney> to have a proper method added somewhere to do this ?
[04:04] <thumper> menn0: one small thing
[04:05] <thumper> menn0: now that we are treating any error as maintenance in progress
[04:05] <menn0> davecheney: *nod* I'll create a ticket
[04:05] <thumper> menn0: you don't really need to create a fake user tag
[04:05] <thumper> menn0: you could just pass through the agent tag
[04:05] <menn0> thumper: well not quite
[04:05] <thumper> no?
[04:06] <menn0> thumper: the login validator in jujud returns nil for the local machine
[04:06] <thumper> ah...
[04:06]  * thumper nods
[04:06] <menn0> thumper: because the local machine is always allowed to login
[04:06] <thumper> yeah...
[04:06] <menn0> thumper: but if that breaks due to db migrations then we still want to know we're in upgrade mode
[04:07] <thumper> got it
[04:07] <thumper> please add that to the comment
[04:07] <menn0> thumper: ok
[04:07] <thumper> so the next person doesn't change it
[04:07] <menn0> thumper: and as per davecheney I'll add a TODO referring to a ticket to get this done in a cleaner way
[04:08] <davecheney> thanks
[04:08] <davecheney> the next person will probably be me
[04:08] <davecheney> and i'll go through trampling that
[04:15] <davecheney> % echo $?
[04:15] <davecheney> 0
[04:15] <davecheney> sweet, sweet, tests
[04:16] <thumper> number of tests run: 0
[04:20] <menn0> thumper: do you want to see that change again or should I land it?
[04:27] <menn0> hmm my env uuid for units branch is getting lots of merge conflicts with upstream ... for files I haven't touched
[04:27] <menn0> I wonder what's up with that
[04:27] <davecheney> thumper: menn0 i think i'm at the point that I have to stop nibbling around the edges and invert the dependencies between state/multiwatcher and apiserver/params
[04:35] <thumper> menn0: just land it
[04:35] <menn0> thumper: doing it now
[04:36] <thumper> davecheney: I'm ok with that
[04:36] <thumper> menn0: NFI re conflicts
[04:37] <menn0> thumper: it's ok. there were tiny unit env uuid changes in those files. lots of conflicts with recent ports work.
[04:37] <menn0> thumper: i'm almost done resolving.
[04:37] <thumper> ugh
[04:37] <thumper> ok
[04:45]  * thumper EODs
[04:45] <thumper> dog walk time
[05:47] <dimitern> morning all
[05:48] <dimitern> tasdomas, as OCR, can you please review http://reviews.vapour.ws/r/117/ ?
[05:52] <dimitern> fwereade, jam, TheMue, you might be interested as well ^^
[05:53] <jam> morning dimitern, looking
[05:53] <dimitern> jam, cheers!
[05:59] <jam> dimitern: I'd like to see it tweaked, but I'm willing to discuss it
[06:03] <dimitern> jam, thanks, I realized I've missed something
[06:03] <jam> dimitern: what's that?
[06:04] <dimitern> jam, we shouldn't be opening and closing ports like that at every hook commit
[06:04] <dimitern> jam, even though there won't be an error as they will be opened or closed already
[06:04] <jam> dimitern: do you commit hooks before they are done?
[06:05] <jam> ah, this is pre-populated, not the stuff which is currently changing
[06:05] <jam> dimitern: yeah
[06:05] <dimitern> jam, it's not pre-populated because I realized it needn't be
[06:05] <dimitern> jam, the uniter hook commands is the only way to open and close ports on *that* unit
[06:07] <dimitern> jam, so we should cache what's opened and closed, to show it later with the opened-ports hook tool (in a follow-up), but use temp lists for pending ports, which are cleaned up at hook commit time
[06:11] <jam> dimitern: sgtm
[06:31] <tasdomas> morning
[06:47] <dimitern> jam, updated - http://reviews.vapour.ws/r/117/diff/1-2/
[06:49] <jam> dimitern: so why do we track "Closed" ports in the portRanges map, isn't everything closed that hasn't been opened ?
[06:50] <jam> or this is both "things I've requested" and "things that were already there"
[06:51] <dimitern> jam, I was thinking we can have both "opened-ports" and "closed-ports" hook tools in a follow-up, which will use the map
[06:51] <dimitern> jam, it might be useful for charms to see what ports it requested to be closed as well as opened
[06:51] <jam> dimitern: k, "opened" or "opening" ? I'm just trying to figure out what portRanges buys us above just openingPorts and closingPorts
[06:53] <dimitern> jam, the portRanges map is not reset between hooks, unlike the slices
[06:54] <dimitern> jam, and they need to be reset so we don't unnecessarily issue api calls on each hook commit for the port ranges already opened before
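The scheme dimitern describes — a long-lived map of known-open ranges (for a later `opened-ports` hook tool to read) plus temporary pending lists flushed at hook commit — can be sketched as below. Names and structure are illustrative guesses, not the uniter's actual code:

```go
package main

import "fmt"

// PortRange is a simplified stand-in for juju's network port range type.
type PortRange struct {
	From, To int
	Protocol string
}

// portsCache sketches the discussed design: a persistent map that
// survives across hooks, plus pending open/close slices that live only
// for the duration of one hook.
type portsCache struct {
	open    map[PortRange]bool // kept across hooks
	pending struct {
		open, close []PortRange // reset at every hook commit
	}
}

func newPortsCache() *portsCache {
	return &portsCache{open: make(map[PortRange]bool)}
}

func (c *portsCache) OpenPort(r PortRange)  { c.pending.open = append(c.pending.open, r) }
func (c *portsCache) ClosePort(r PortRange) { c.pending.close = append(c.pending.close, r) }

// Commit applies the pending changes (where the real uniter would issue
// API calls), updates the cache, and clears the temp lists so the next
// hook doesn't re-issue calls for ranges already opened before.
func (c *portsCache) Commit() {
	for _, r := range c.pending.open {
		c.open[r] = true
	}
	for _, r := range c.pending.close {
		delete(c.open, r)
	}
	c.pending.open, c.pending.close = nil, nil
}

func main() {
	c := newPortsCache()
	c.OpenPort(PortRange{80, 80, "tcp"})
	c.Commit()
	fmt.Println(len(c.open), len(c.pending.open)) // 1 0
}
```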
[06:55] <TheMue> morning
[06:55] <dimitern> morning TheMue
[07:01] <tasdomas> dimitern, I've submitted a review
[07:02] <jam> dimitern: am I *very* choppy ?
[07:02] <jam> or just a little?
[07:02] <dimitern> tasdomas, cheers, I've updated the review and some of your suggestions are implemented, can you take another look?
[07:14] <tasdomas> dimitern, is there a need to add the OpeningPorts and ClosingPorts methods to the interface?
[07:16] <dimitern> tasdomas, what do you suggest instead?
[07:19] <tasdomas> dimitern, am I missing something or are they only used in tests?
[07:19] <dimitern> tasdomas, for now yes, but this might change
[07:20] <dimitern> tasdomas, but then again, it makes sense to only add them to export_test
[07:20] <tasdomas> dimitern, exactly
[07:20] <dimitern> as of today
[07:25] <dimitern> fwereade, ping
[07:28] <tasdomas> dimitern, I'm also a bit concerned about how conflicting operations will be handled
[07:29] <tasdomas> dimitern, open(100-200) followed by a close(100-200) should probably result in a nop?
[07:29] <tasdomas> dimitern, unless 100-200 was already open
[07:32] <fwereade> dimitern, heyhey
[07:32] <jam> tasdomas: I'd probably say "yes but that is a bit of an edge case in a buggy charm" so we're allowed to have it be slightly undefined behavior
[07:32] <jam> because you'd have to determine that the first open() is actually the noop.
[07:32] <jam> If we really want it, then we could make the total list of open + close ordered
[07:33] <jam> so one slice with (true, 100-200), (false, 100-200), (true, 100-200) etc
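jam's single-ordered-slice idea can be sketched: record each request as an (open?, range) pair in order, and let an op cancel an immediately preceding opposite op on the same range, so open(100-200) followed by close(100-200) collapses to nothing. Purely illustrative, not juju code:

```go
package main

import "fmt"

// portOp is one ordered open/close request: open==true means open.
type portOp struct {
	open     bool
	from, to int
}

// simplify collapses the ordered op list: an op cancels the immediately
// preceding opposite op on the same range, making an open/close pair a
// no-op, as tasdomas suggested it should be.
func simplify(ops []portOp) []portOp {
	var out []portOp
	for _, op := range ops {
		if n := len(out); n > 0 {
			last := out[n-1]
			if last.from == op.from && last.to == op.to && last.open != op.open {
				out = out[:n-1] // the pair cancels out
				continue
			}
		}
		out = append(out, op)
	}
	return out
}

func main() {
	// jam's example sequence: (true, 100-200), (false, 100-200), (true, 100-200)
	ops := []portOp{{true, 100, 200}, {false, 100, 200}, {true, 100, 200}}
	fmt.Println(len(simplify(ops))) // 1: only the final open survives
}
```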
[07:33] <jam> morning fwereade
[07:33] <fwereade> jam, heyhey
[07:33] <dimitern> fwereade, I'd appreciate it if you find some time to review my open/close-port sandboxing branch  http://reviews.vapour.ws/r/117/ - and read a bit of scrollback with comments from jam and tasdomas
[07:34] <fwereade> dimitern, will do, but I have to do gsamfira's ones first of all
[07:36] <dimitern> fwereade, sure
[07:37] <dimitern> fwereade, it will be easier maybe if we do a quick g+ talk when you can
[08:06] <fwereade> dimitern, ok, I had a quick look at that because it seemed smaller ;p
[08:06] <fwereade> dimitern, the things I immediately wonder are:
[08:07] <fwereade> dimitern, do we have some mechanism for checking against what's already opened/closed?
[08:07] <fwereade> dimitern, and, how will we integrate this with per-relation port open/closes?
[08:08] <dimitern> fwereade, well, that's why I suggested a g+, it will be easier talking than typing
[08:08] <fwereade> dimitern, ok then :)
[08:30] <dimitern> jam, tasdomas, ok, after a chat with fwereade I'll discard http://reviews.vapour.ws/r/117/ and do a prereq first that adds a method to uniter api to retrieve all machine ports and cache them in the uniter, so we can both check for conflicts and do sandboxing
[08:31] <jam> dimitern: well, I would guess you can use most of what you already have, just an extra step at  the start?
[08:32] <tasdomas> dimitern, understood - I was thinking about that solution, I'm just worried that there would still be a window present, where conflict could emerge
[08:34] <dimitern> tasdomas, even with the extra checks against machine ports, there will still be a very minor possibility for open/closePorts to fail at finalize time, but that's OK, as we'll catch and handle most other cases
[08:35] <tasdomas> dimitern: sounds good to me
[10:48] <jam> TheMue: standup?
[10:48] <TheMue> omw
[11:23] <perrito666> morning
[11:24] <rogpeppe> i just wanted to use juju with a local environment. it's not working... anyone got any idea of what might be happening here and how I might be able to fix it? http://paste.ubuntu.com/8454648/
[11:26] <rogpeppe> hmm, perhaps it was as simple as just apt-get install lxc. somehow that must have got uninstalled at some point.
[11:26] <rogpeppe> ha, it works now. phew.
[11:26] <rogpeppe> those error messages were not great though.
[11:33] <mgz> wow, that is a bit of a mess rogpeppe
[11:33] <rogpeppe> mgz: mmm
[11:33] <mgz> I was expecting the "must install juju-local" message
[11:49] <dimitern> fwereade, jam, http://reviews.vapour.ws/r/123/ - uniter api changes, as discussed
[11:49] <fwereade> dimitern, cheers, just popping out but back shortly
[11:49] <dimitern> fwereade, sure, np
[12:45] <jam> dimitern: my first thought is "why does *Uniter* have a Machine implementation", it seems the wrong thing for a uniter to think about.
[12:45] <jam> maybe it needs to know the machine it is on isn't dying?
[12:45] <dimitern> jam, because the ports are on the machine, not on a unit anymore
[12:46] <dimitern> jam, and the only reason to have a Machine object is to be able to call AllPorts() on it
[12:47] <jam> dimitern: so looking at the data structures, you expose MachinePortsResults which holds a slice of MachinePortsResult which holds a slice of MachinePortRange which uses a UnitTag and a PortRange
[12:47] <jam> but nothing there uses the NetworkTag
[12:47] <jam> And what do we do if more than one Unit asks for a similar port to be open?
[12:47] <jam> Is it just a conflict ?
[12:51] <dimitern> jam, the idea is to return all ports on the machine, regardless of network
[12:52] <dimitern> jam, when more than one unit tries to open a conflicting range, we'll detect it and not allow it to happen
[13:00] <jam> dimitern: but especially in the case of ranges, saying "I want these open" doesn't actually have to be a problem does it?
[13:02] <dimitern> jam, I'm not sure I follow - can you expand a bit?
[13:03] <jam> if I say "i need this machine to expose 10-100, and someone else says 90-200", we could just open 10-200 and be done
[13:03] <jam> I suppose it would be clearer for the charm to fail with "you can't actually use 90-100" so it doesn't try to configure the application to use them?
[13:06] <jam> dimitern: ^^
[13:07] <dimitern> jam, nope
[13:08] <jam> dimitern: nope ? not sure what you are saying no to
[13:08] <dimitern> jam, if unit 0 says open 10-100/tcp, that's fine; later unit 1 says open 90-100/tcp - there's a conflict and we won't allow it
[13:08] <dimitern> jam, we can't have concurrently running open-port requests, as there's only one hook running at a time on the machine
[13:14] <dimitern> jam, port ranges are bound to units that requested them, so 10-100/tcp + 90-200/tcp for different units != 10-200/tcp
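Since ranges are bound to the unit that requested them, the conflict rule dimitern describes is: overlapping ranges from different units are rejected rather than merged. A hedged sketch of that check (the type and function names are mine, not juju's):

```go
package main

import "fmt"

// unitRange records a port range together with the unit that opened it,
// mirroring the discussion: ranges belong to units, so 10-100/tcp plus
// 90-200/tcp for different units != 10-200/tcp.
type unitRange struct {
	unit     string
	from, to int
}

// conflicts reports whether two ranges clash. An overlap is only fine
// when the same unit re-asserts exactly the same range; any other
// overlap is a conflict, per the example where unit 0 holds 10-100/tcp
// and unit 1 may not open 90-100/tcp.
func conflicts(a, b unitRange) bool {
	overlap := a.from <= b.to && b.from <= a.to
	if !overlap {
		return false
	}
	same := a.unit == b.unit && a.from == b.from && a.to == b.to
	return !same
}

func main() {
	u0 := unitRange{"wordpress/0", 10, 100}
	u1 := unitRange{"mysql/0", 90, 100}
	fmt.Println(conflicts(u0, u1))                                // true: different units overlap
	fmt.Println(conflicts(u0, unitRange{"wordpress/0", 10, 100})) // false: same unit, same range
}
```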
[14:02] <wwitzel3> perrito666, natefinch: standup
[14:03] <perrito666> wwitzel3: going
[14:03] <natefinch> wwitzel3: coming one sec, grabbing my coffee from the other room
[14:04] <jam> natefinch: wwitzel3: did you see the email from menno about syslog and cert issues?
[14:05] <wwitzel3> jam: yep, the one about log message spam?
[14:05] <jam> wwitzel3: about not being able to connect because of an issue with "no IP SANs"
[14:05] <jam> after upgrade
[14:05] <wwitzel3> jam: looking at it now
[14:06] <jam> wwitzel3: k, I'd chat with nate about it, because I think he discovered the golang issue for the HTTP API stuff
[14:06] <jam> I think the issue is that whatever workaround we did for the API we need to do for syslog
[14:07] <wwitzel3> jam: ack
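For context on the "no IP SANs" error: Go's TLS stack rejects a connection dialed by IP address unless the server certificate lists that IP in its SubjectAltName `IPAddresses` field. A generic self-signed illustration of the required template field — not juju's actual cert-generation code or its workaround:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// certWithIPSAN builds a self-signed certificate whose SAN includes the
// given IP. Without the IPAddresses field, dialing the server by IP
// fails with "x509: cannot validate certificate ... no IP SANs".
func certWithIPSAN(ip string) (*x509.Certificate, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "juju-apiserver"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP(ip)}, // the crucial field
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	cert, err := certWithIPSAN("10.0.3.1")
	if err != nil {
		panic(err)
	}
	fmt.Println(cert.IPAddresses[0].String()) // 10.0.3.1
}
```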
[14:26] <alexisb> fwereade, ping
[14:26] <fwereade> alexisb, pong
[14:26] <alexisb> hey there
[14:26] <alexisb> do you mind if I reschedule our 1x1?
[14:27] <alexisb> my little guy is still sleeping and I don't want to wake him to go to his nanny given he has been sick
[14:27] <alexisb> so I will be on the road for our normally scheduled time
[14:27] <alexisb> fwereade, ^^
[14:27] <fwereade> alexisb, np
[14:27] <alexisb> thanks
[14:28] <alexisb> any particular day that works best for you?
[14:31] <alexisb> fwereade, ^^
[14:31] <fwereade> alexisb, tomorrow isn't great, how about weds?
[14:32] <fwereade> alexisb, or thurs?
[14:33] <alexisb> I can do a thursday morning, I will reschedule for them, thanks for being flexible fwereade !
[14:33] <fwereade> alexisb, no worries, glad to be of service
[14:38] <ericsnow> natefinch: FYI, I fixed that issue on http://reviews.vapour.ws/r/103/
[14:40] <natefinch> ericsnow: can you print the name and the type of the value, instead of a generic string?  Would make debugging easier.
[14:40] <ericsnow> natefinch: sure
[14:41] <ericsnow> natefinch: you know, you can leave comments on the review :)
[14:41] <natefinch> ericsnow: sometimes it's faster just to talk on irc, but I just left a message there too
[14:41] <ericsnow> natefinch: thanks
[15:41] <hazmat> fwereade, didn't you have some writing provider getting started docs? not seeing them in tree.. got  a partner thats interested
[15:50] <dimitern> fwereade, TheMue, jam, tasdomas, re-proposed open(close)-port sandboxing for the uniter, please take a look http://reviews.vapour.ws/r/125/
[16:39] <jcw4> mgz: this error looks suspiciously like a build script merge failure -
[16:39] <jcw4> /var/lib/jenkins/juju-release-tools/make-release-tarball.bash: line 115: syntax error near unexpected token `<<<'
[16:41] <jcw4> natefinch: do we have a UTC-0400 to -0700 timezone person who knows the build server stuff?
[16:42] <jcw4> perrito666: surely I can count on at least you to be around :)
[16:43] <mgz> that looks odd
[16:43] <jcw4> whew
[16:43] <jcw4> mgz: still not sure if that's the issue but the last couple builds failed with that error
[16:43] <mgz> wassit on? a landing?
[16:43] <mgz> okay, fixing.
[16:44] <jcw4> yep
[16:44] <jcw4> http://juju-ci.vapour.ws:8080/job/github-merge-juju/837/ and http://juju-ci.vapour.ws:8080/job/github-merge-juju/838/
[16:44] <mgz> yup... conflictishy
[16:45] <mgz> okay, should work now
[16:45] <jcw4> wow.  thanks mgz
[16:45] <mgz> I'm also going to make another change with some fixes, will notify when it's through
[16:46] <jcw4> thanks mgz  should we start to land again or wait for your fixes?
[16:46] <mgz> try a landing now if you're waiting
[16:46] <jcw4> k
[16:50] <bodie_> mgz++
[17:55] <ericsnow> natefinch: could you have another look at 103?
[17:57] <natefinch> ericsnow: ok
[17:58] <ericsnow> natefinch: thanks
[18:01] <natefinch> ericsnow: ship it
[18:01] <ericsnow> natefinch: thanks
[20:55] <arosales> thumper: fyi we got some new info @ https://bugs.launchpad.net/juju-core/+bug/1375268
[20:55] <mup> Bug #1375268: Juju Panic'ing on MAAS Power8le Environment <bootstrap> <maas-provider> <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1375268>
[20:55] <thumper> arosales: ta, looking
[20:55] <arosales> thumper: natefinch had done some debug to identify the panic, but didn't have anything conclusive
[20:56] <thumper> arosales: I've read all that...
[20:56] <arosales> thumper: let us know if you need any other info, or access
[20:56] <thumper> access to the maas would be helpful
[20:56] <thumper> and the instance that is having problems
[20:56] <arosales> mbruzek: do you have docs you can send over to thumper on accessing the maas IBM system?
[20:56] <thumper> the logs don't show any information for the stack frame that actually is the one panicing
[20:57] <thumper> which is somewhat surprising to me
[20:57] <thumper> arosales: mbruzek: info also to davecheney plz
[20:57] <arosales> mbruzek: also be interesting to see if you have reproduced on the other maas environment
[20:58] <mbruzek> thumper: arosales I will send that out right now.  Tim from IBM needs to make him an account.  We could jump in a hangout if you want to see my screen now.
[20:58] <thumper> mbruzek: I have meetings starting in a minute for a while
[20:59] <arosales> mbruzek: I guess the VPN is specific to individuals
[21:22] <thumper> davecheney: are these the lines that indicate a bad compiler? juju[5386]: bad frame in setup_rt_frame: 0000000000000000 nip 0000000000000000 lr 0000000000000000
[21:24] <davecheney> yes
[21:24] <davecheney> thumper: hold fire
[21:24] <davecheney> writing you a long email so that you too can know all there is to know
[21:27] <perrito666> wow an email containing "all there is to know" must be insanely long
[21:34] <perrito666> I would really dig doc strings on things like state/watcher.go
[21:35] <davecheney> perrito666: brevity is not my strong suit
[21:36]  * perrito666 imagines thumper getting an email with 42 as body text
[21:46] <perrito666> I would also dig a trackball, I have too much junk on my desktop to be able to move the mouse :/
[22:18] <jcw4> thumper: per our hangout last week: http://reviews.vapour.ws/r/127/
[22:19] <thumper> jcw4: cheers, will look when I have the kids back from swimming :)
[22:19] <jcw4> menn0: it's about EnvUUID so your feedback appreciated too :)
[22:19] <jcw4> thx thumper
[22:33] <menn0> jcw4: having a look
[23:04] <menn0> jcw4: review done
[23:06] <jcw4> menn0: much appreciated
[23:07] <perrito666> menn0: tx for your mail it was enlightening
[23:12] <menn0> perrito666: good! it was useful for me to write it. I learned a few things while making sure I was giving you correct information :)
[23:15] <perrito666> sadly I cannot have a watcher since I am nuking the db in this process so I am looking for another way to signal my worker
[23:31] <davecheney> http://reviews.vapour.ws/r/126/
[23:31] <davecheney> what's going on here
[23:32] <davecheney> i thought that backups were going to be streamed back to the client, not shoehorned into the api server
[23:33] <perrito666> davecheney: as per fwereade's design, and I believe we agreed on that in Las Vegas, backups are stored in the state server and downloaded on demand
[23:33] <perrito666> I am not sure if any of those things include shoehorning since I am having some difficulty picturing the meaning of it in my head :p
[23:34] <davecheney> perrito666: yes
[23:34] <davecheney> that is not in question
[23:34] <ericsnow> davecheney: That patch provides all the boilerplate needed plus a very basic implementation of the data transfer
[23:34] <davecheney> but downloading them via encoding them into json is bad
[23:34] <ericsnow> davecheney: I agree sending a []bytes over the wire is not a valid solution
[23:35] <davecheney> ericsnow: i thought the plan was to add some bulk download api
[23:35] <davecheney> i thought that had happened
[23:35] <ericsnow> davecheney: not that I'm aware
[23:36] <davecheney> bummer
[23:36] <ericsnow> davecheney: I agree that download should support bulk calls
[23:36] <ericsnow> davecheney: upload as well (when we get to that)