[02:09] <davecheney> https://github.com/juju/juju/pull/144
[03:09] <axw> wallyworld: https://github.com/juju/juju/pull/134/files#diff-4fd108ff3861516a9ea367ed5e560d50R1534    does this look reasonable?
[03:10] <waigani> thumper: sorry I'm late - in hangout now
[03:11] <wallyworld> axw: for the first test, i think we need to load a new machine object on which to set addr1, addr0
[03:11] <wallyworld> in the before hook
[03:11] <axw> oops, yeah, did it in the second but not the first
[03:11] <wallyworld> so that we don't mess up the in memory representation for machine
[03:11] <axw> will do
[03:12] <wallyworld> ah, haven't got to the 2nd test yet
[03:13] <wallyworld> axw: yep, lgtm
[03:14] <axw> wallyworld: thanks. will fix the first one and land
[03:14] <wallyworld> ta
[03:16] <perrito666> wallyworld: :) please tal at your email I would really appreciate an answer to that last email
[03:17] <perrito666> wallyworld: took me 3 days but my background brain thread finally returned the solution
[03:18] <wallyworld> perrito666: will do, looking now
[03:21] <wallyworld> perrito666: good pickup. my view is that all restore type operations should be run as admin. normal users have readwrite permissions but lack these ones: mgo.RoleDBAdminAny, mgo.RoleUserAdminAny. i don't think it's appropriate for users other than admin to have such permissions. i'll reply to the email
[03:24] <perrito666> wallyworld: ok, I'll propose a small patch and if we then decide that it's the other way around we can reject it
[03:25] <wallyworld> perrito666: which lines fail from mongoEval?
[03:26] <wallyworld> perrito666: i'm also confused why it worked when run by hand?
[03:26] <perrito666> wallyworld: at first sight none :| that is, none yields errors, but something might be lost when we go from bash through ssh to go..
[03:26] <perrito666> wallyworld: there lies the solution actually
[03:26] <perrito666> I realized that by hand I was using admin instead of tag user
[03:26] <wallyworld> oh
[03:27] <wallyworld> that was a bit of a red herring then :-)
[03:27] <perrito666> wallyworld: all thanks to the talk we had the other day
[03:28] <perrito666> wallyworld: I take that a red herring is not a fish in that sentence? :p
[03:28] <wallyworld> no :-) a colloquial expression for an unintended diversion or misdirection
[03:29] <perrito666> well yes and no, I focused on the fact that it worked by hand and tried to replace --eval with a js file, and while I was writing that patch I realised I was not using admin on restore
[03:32] <perrito666> sadly the realisation of this solution came on a sunday night, so here I am :|
[03:32] <wallyworld> :-(
[03:33] <wallyworld> so the solution is to remove mongoEval and run everything as mongoAdminEval?
[03:34] <wallyworld> or did you want to keep doing some things as non admin?
[03:34] <perrito666> nope, everything as Admin
[03:35] <perrito666> or I can try one by one the commands and do as admin only what is required
[03:35] <perrito666> which is, as I see it, a great loss of time
[03:36] <wallyworld> yep, agreed
[03:41] <perrito666> ok, I'll go sleep and send the patch tomorrow AM
[03:42] <wallyworld> sounds good, thanks for spending the extra time to find it
[04:09] <davecheney> "Session closed" is getting me down
[04:37] <davecheney> axw: wallyworld + echo 'Instance setup done:' Mon Jun 23 04:35:03 UTC 2014
[04:37] <davecheney> ^ this is great
[04:37] <davecheney> but there is no other timestamp in the log to see when it started
[04:38] <davecheney> well there is this one
[04:38] <davecheney> [workspace] $ /bin/bash /tmp/hudson2115592101584260118.sh
[04:38] <davecheney> Started: Mon Jun 23 04:26:02 UTC 2014
[04:38] <davecheney> seriously - 10 minutes to set up a machine in ec2 ...
[04:39] <wallyworld> davecheney: yep :-(
[04:39] <wallyworld> can be as short as 4 minutes
[04:39] <wallyworld> that's why we're looking at using a nailed up instance
[04:39] <wallyworld> there are 3 timestamps in the log
[04:40] <wallyworld> job started, instance finished/tests starting, all done
[04:40] <davecheney> Instance has ip ec2-54-84-105-221.compute-1.amazonaws.com
[04:40] <davecheney> Waiting for 22..............................
[04:40] <davecheney> + set +e
[04:40] <davecheney> ^ there should be a timestamp here
[04:40] <wallyworld> can add one
[04:40] <davecheney>   System information as of Mon Jun 23 04:29:53 UTC 2014
[04:40] <wallyworld> the apt dance takes ages :-(
[04:40] <davecheney> there is this one from the motd
[04:41] <davecheney> i guess its up to date
[05:43] <menn0> wallyworld: can I log in to Jenkins or is there a limited list of people who can? The current build is mine and has failed and can be cancelled.
[05:44] <menn0> on a related note, there's been a lot of mgo panics today during test runs along the lines of "Session already closed"
[05:45] <menn0> is this new or could it be related to davecheney's recent close-mongo-iterators PR?
[05:45] <wallyworld> yes
[05:46] <wallyworld> the tracebacks seemed to implicate root.go somehow
[05:46] <wallyworld> and then i saw your email to dev and stopped looking
[05:46] <wallyworld> could be related though
[05:47] <menn0> no I think these are different from root.go
[05:48] <menn0> there's been 2 regular build problems today: root.go and the mgo "session already closed"
[05:48] <menn0> I think they're separate issues (probably)
[05:52] <wallyworld> could be
[05:52] <wallyworld> we've looked on and off and fixed several issues
[05:55] <jam1> menn0: so the existing test that was failing does show a data race under "go test -race", I'll try to write up a simpler case, though.
[05:56] <menn0> jam1: great that you were able to track it down quickly
[05:56] <jam1> menn0: well, when the line that has the error is "objectCache[key] = obj"
[05:56] <jam1> it gives a pretty good hint
[05:57] <jam1> but yeah, I'm pretty familiar with the code since I've been working on it closely.
[05:57] <menn0> jam1: note that it was a different test that failed on my machine, but in a similar way
[05:58] <jam1> menn0: yeah, it is an API data race (concurrent mutation of a golang map, which are *not* concurrent safe, you have to wrap them in a mutex)
[05:58] <menn0> jam1: yep. it could happen at any point right?
[05:58] <menn0> any call
[05:58] <jam1> menn0: yeah
[06:02] <vladk> jam1: morning
[06:02] <jam1> vladk: morning. Sorry I'm a bit late, looking into this data race, I'll be there in a couple mins
[06:04] <vladk> jam1: go maps and slices are non-concurrent objects, only channels are concurrent and strings are immutable
[06:04] <jam1> vladk: yeah, I'm aware, just wasn't thinking about the concurrent access when I was writing the code.
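The point jam1 and vladk are making can be sketched with a minimal example. `objectCache` is the name from the traceback above; `safeCache` and its methods here are invented for illustration, showing the standard fix of wrapping a Go map in a mutex so concurrent mutation is safe.

```go
package main

import (
	"fmt"
	"sync"
)

// safeCache wraps a plain Go map (not safe for concurrent mutation)
// with a mutex so that Put and Get can be called from many goroutines.
type safeCache struct {
	mu    sync.Mutex
	cache map[string]interface{}
}

func newSafeCache() *safeCache {
	return &safeCache{cache: make(map[string]interface{})}
}

func (c *safeCache) Put(key string, obj interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.cache[key] = obj // without the lock, "go test -race" flags this write
}

func (c *safeCache) Get(key string) (interface{}, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	obj, ok := c.cache[key]
	return obj, ok
}

func main() {
	c := newSafeCache()
	var wg sync.WaitGroup
	// 100 concurrent writers: safe only because of the mutex.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			c.Put(fmt.Sprintf("key%d", n), n)
		}(i)
	}
	wg.Wait()
	if v, ok := c.Get("key42"); ok {
		fmt.Println(v) // prints 42
	}
}
```

Running this under `go test -race` (or `go run -race`) with the lock removed reproduces the "concurrent map write" report mentioned above.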
[06:24] <jam1> menn0: https://github.com/juju/juju/pull/146
[06:25] <jam1> it also fixes an only tangentially related race condition in state/api/watcher/watcher.go that I only noticed because the test that was failing in cmd/jujud had 2 sources of race conditions.
[06:25] <jam1> "go test -race" is pretty nice, it's a shame it slows things down so much.
[06:26] <jam1> menn0: you're also OCR for today, so poke for the review :)
[06:26] <wallyworld> jam1: not looking in detail, but you may perhaps have fixed an ongoing intermittent test failure around watchers
[06:26] <wallyworld> well at least i'm hoping :-)
[06:27] <jam1> wallyworld: so the race for watchers is that it is possible for the loop() to terminate before it actually starts anything
[06:27] <jam1> because it calls w.wg.Add() but only *inside a goroutine*
[06:27] <jam1> which isn't, itself, protected by a wg.Add()
[06:27] <wallyworld> ah ok. may not be the same issue then
[06:27] <jam1> so you could start a watcher, have a couple pending goroutines, and then exit
[06:28] <jam1> although thinking about it, I may need to move something around a bit
[06:29] <wallyworld> good that we found and fixed this before 1.19.4 ships
[06:32] <jam1> k, I don't need to move it after all. so my patch is ready for review.
[06:36] <jam1> vladk: I'm in the hangout
[06:36] <jam1> wallyworld: I wonder if we want a CI test that runs the whole test suite in "go test -race" mode
[06:36] <jam1> I don't think we're *quite* clean there, though.
[06:37] <wallyworld> jam1: worth adding i reckon
[06:37] <jam1> wallyworld: well it doesn't help if it never passes :)
[06:38] <wallyworld> sure, so let's get it passing first :-)
[06:43] <davecheney> ah mongo, how do you leak temporary files, let me count the ways ...
[07:42] <rogpeppe1> davecheney: morning!
[07:43] <davecheney> rogpeppe1: ahoy!
[07:44] <TheMue> morning
[07:44] <rogpeppe1> TheMue: hiya
[08:58] <jam1> TheMue: morning ! I'm just finishing up lunch, I'll be there in about 5-10 min.
[08:59] <TheMue> jam1: ok
[09:35] <fwereade> wallyworld, ping
[09:35] <wallyworld> fwereade: hey
[09:36] <fwereade> wallyworld, hey, I was wondering about proof-of-access for the managed resource stuff
[09:36] <wallyworld> proof of access?
[09:37] <fwereade> wallyworld, ie "here store this file with md5/sha256" "ok I want the md5/sha256 of <random byte range>" "here you go" "ok cool your file is stored"
[09:37] <wallyworld> ah that
[09:37] <wallyworld> not implemented yet
[09:37] <wallyworld> just getting basics landed
[09:37] <fwereade> wallyworld, I'm wondering what impact that will have on this layer, because it's starting to feel like the right place for it
[09:38] <fwereade> wallyworld, maybe I'm wrong
[09:38] <fwereade> wallyworld, but the lower the layer that implements it the less opportunity we will have to fuck it up
[09:38] <wwitzel3> perrito666: nice job :)
[09:38] <fwereade> wallyworld, the higher the layer the less we need to thread the challenge/response stuff through, I understand it's a tradeoff
[09:39] <wallyworld> fwereade: it will impact i think, may need an extension to the current api. workflow will be controlled by a layer above but primitives to make it work will be in the current layer
[09:39] <fwereade> wallyworld, ok, cool, so long as it's on your mind and coming soon I won't worry about it for this CL
[09:40] <wallyworld> fwereade: well, next on the todo list is the ToolsStorage facade so we can get rid of the http storage stuff for manual provider
[09:40] <wallyworld> so it's on my mind but not on the very immediate next to do list
[09:40] <wallyworld> does that work for you?
[09:41] <fwereade> wallyworld, I worry that we'll want that functionality for all facades, and that changing the tools facade to accommodate it *as well as* the managedresource stuff will exert subtle pressures to do it less cleanly than we might
[09:42] <wallyworld> fwereade: ok, i can add some new apis to the current design spec and do the proof of access stuff first then
[09:43] <wallyworld> after i land the current pull request
[09:43] <wallyworld> https://github.com/juju/juju/pull/124
[09:43] <fwereade> wallyworld, great, thanks
[09:43] <fwereade> wallyworld, yeah, was starting to look at that, that was what made me think of it :)
[09:43] <wallyworld> lol ok, i figured as much
[09:51] <menn0> jam1: sorry I had finished up for the day... I'm actually OCR tomorrow not today anyway
[09:52] <jam1> menn0: I must not have refreshed the page
[09:52] <jam1> np
[09:52] <menn0> jam1: it did the same for me too, at first it said I was and then recalced
[09:53] <menn0> jam1: it's good that you pointed it out anyway. I hadn't realised I was on tomorrow :)
[09:54] <wallyworld> fwereade: what did we want for the challenge-response policy? retain the current Put() where the caller has to provide all the data (and it is de-duped on the server) but also add a *new* API where they provide just the checksums and then are issued a challenge for a segment checksum and if that passes they don't need to upload anything?
[09:56] <wallyworld> not thinking too much, the new api will necessarily be stateful so we'll have to consider a timeout etc after the initial request
[09:56] <wallyworld> ie if they don't respond soon enough the acceptable response expires and they would be issued with a new challenge
[09:57] <fwereade> wallyworld, in my mind the main goal is to avoid having to send the bytes at all in the common case, so what I'd like us to *expose* is the stateful case, and only fall back to sending bytes in response to a never-heard-of-it result from the first call
[09:57] <fwereade> wallyworld, yeah, we'd want a timeout, indeed
[09:57] <wallyworld> fwereade: "avoid sending bytes in the common case" assumes there is a high chance the data is already uploaded
[09:59] <wallyworld> i guess the caller can optimistically try and use just the checksums
[09:59] <wallyworld> and if the server doesn't have the data, the caller is requested to upload everything
[10:00] <perrito666> morning everyone
[10:00] <fwereade> wallyworld, I think that globally there is a high enough chance that the (low) cost of even quite a lot of back-and-forths will be reasonable compared to the (high) cost of even a few ~gig-sized uploads
[10:00] <fwereade> wallyworld, remember this is closely aligned with the fat charms case
[10:01] <fwereade> wallyworld, those can often end up gig-sized
[10:01] <wallyworld> fwereade: agreed. i'm just stating the obvious to be really explicit we have a shared understanding
[10:01] <jam1> dimitern: can you take a look at https://github.com/juju/juju/pull/146
[10:01] <fwereade> wallyworld, cool, I think we do
[10:02] <dimitern> jam1, looking
[10:02] <wallyworld> fwereade: both Put(supplyTheData) and Put(supplyTheChecksums) will be exported so i guess the caller can decide which one they want to use
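The two entry points discussed, a full-data Put and an optimistic checksum-only Put backed by a byte-range challenge, might be sketched roughly as below. All names (`store`, `PutByChecksum`, `Respond`) are invented for illustration; the real managed-resource API will differ, and a real implementation would randomize the range and enforce the challenge timeout mentioned above.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// store is a toy in-memory blob store, keyed by hex SHA-256 of the content.
type store struct {
	blobs map[string][]byte
}

func newStore() *store { return &store{blobs: make(map[string][]byte)} }

func checksum(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

// Put uploads the full data; the server dedupes by checksum.
func (s *store) Put(data []byte) string {
	key := checksum(data)
	s.blobs[key] = data
	return key
}

// PutByChecksum is the optimistic path: the caller sends only the checksum.
// If the blob is unknown, the caller falls back to Put. If it is known, the
// server issues a byte-range challenge to prove the caller holds the data.
func (s *store) PutByChecksum(sum string) (challenge [2]int, known bool) {
	data, ok := s.blobs[sum]
	if !ok {
		return [2]int{}, false
	}
	// A real API would pick a random range and expire the challenge;
	// fixed here for determinism.
	end := len(data)
	if end > 8 {
		end = 8
	}
	return [2]int{0, end}, true
}

// Respond verifies the checksum of the challenged byte range.
func (s *store) Respond(sum string, rng [2]int, rangeSum string) bool {
	data, ok := s.blobs[sum]
	if !ok || rng[1] > len(data) {
		return false
	}
	return checksum(data[rng[0]:rng[1]]) == rangeSum
}

func main() {
	s := newStore()
	data := []byte("a big charm archive")
	sum := s.Put(data)
	rng, known := s.PutByChecksum(sum)
	fmt.Println(known, s.Respond(sum, rng, checksum(data[rng[0]:rng[1]])))
}
```

This captures why the checksum path wins for gig-sized fat charms: the back-and-forth exchanges a few hashes instead of re-uploading the bytes.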
[10:05] <jam1> morning perrito666
[10:05] <natefinch> jam: morning
[10:06] <jam1> morning natefinch
[10:08] <perrito666> mm, we no longer have a way to say "this fixes bug lp:#######" ?
[10:09] <wwitzel3> perrito666: I think if there is an lp issue, it is nice to mention it in the pull request comments
[10:10] <perrito666> wwitzel3: yup, I just wanted to know if there is a way to trigger the "fix committed" status
[10:10] <wwitzel3> perrito666: not that I am aware of
[10:10] <wwitzel3> perrito666: I've been doing that manually
[10:10] <perrito666> ok, I'll use what I see for other bugs
[10:12] <dimitern> jam1, reviewed
[10:12] <jam1> thx
[10:17] <perrito666> natefinch: wwitzel3 wallyworld https://github.com/juju/juju/pull/147
[10:17] <perrito666> there are things that upset me and then the fact that this bug is fixed with so little.... :p
[10:18] <wallyworld> perrito666: \o/ thank you for fixing
[10:19] <perrito666> :) now back to write a decent restore
[10:19] <wallyworld> i bet you are sick of backup/restore now
[10:20] <perrito666> wallyworld: no, I am actually very fond of it, I really look forward to having the new one implemented
[10:20] <wallyworld> you have a lot of patience :-)
[10:20] <perrito666> I am a bit sleepy tho, I slept only 4 hours last night
[10:21] <wallyworld> :-(
[10:24] <rogpeppe1> jam1: on further reflection, i don't think i understand your commonLoop changes
[10:24] <rogpeppe1> jam1: i'm not convinced they're right
[10:25] <rogpeppe1> jam1: specifically, i don't see how the changes ensure anything happens before the outer loop terminates
[10:25] <jam1> rogpeppe1: so the race as detected by 'go test -race' is that 'NewNotifyWatcher' does a "defer w.wg.Wait()" before calling loop. And nothing has been Added to the wg at that time.
[10:25] <jam1> we then call "go w.commonLoop()" internally
[10:25] <jam1> which will, eventually, call w.wg.Add() for the two goroutines that *it* spawns
[10:26] <jam1> however, the 'go w.commonLoop()' hasn't actually incremented anything and can return out of "loop" before we've started it.
[10:26] <jam1> I believe there is a secondary channel of information in "w.in" so that the for{} loops never actually exit until commonLoop has entered.
[10:27] <rogpeppe1> jam1: i see.
[10:27] <rogpeppe1> jam1: a better solution (i think) is to avoid calling Wait in NewNotifyWatcher
[10:27] <rogpeppe1> jam1: but to make sure loop waits for in to be closed before returning
[10:27] <jam1> rogpeppe1: I personally felt like wg.Wait() should probably be called inside the loop() functions
[10:28] <jam1> rogpeppe1: so in the case of "w.tomb.Dying" we can return without checking w.in
[10:28] <jam1> is that ok ?
[10:28] <rogpeppe1> jam1: the original scheme was that the wg is for commonLoop's internal use only
[10:29] <rogpeppe1> jam1: it's kinda weird that commonLoop is doing the Wait itself
[10:29] <jam1> rogpeppe1: if it is internal to commonLoop, couldn't it just use a local var ?
[10:29] <jam1> rogpeppe1: I certainly originally thought to change "go commonLoop()" to just be a synchronous "w.commonLoop()" and then wait outside
[10:30]  * rogpeppe1 checks
[10:30] <jam1> but that closes w.in in a defer
[10:30] <jam1> so we could change it some other way
[10:30] <rogpeppe1> jam1: yeah, that was my initial thought too
[10:30] <rogpeppe1> jam1: i'm not keen on the current change as it adds more stuff that each caller of commonLoop must remember to do
[10:32] <rogpeppe1> jam1: yes, i think wg could/should be a local var
[10:35] <jam1> rogpeppe1: so 'must wait until in is closed' isn't quite true today, because of stuff like "the tomb can die first"
[10:36] <rogpeppe1> jam1: yup
[10:36] <rogpeppe1> jam1: if the tomb dies, we should wait for the in channel to be closed
[10:36] <jam1> rogpeppe1: I'm not sure that it means the outer loop must not terminate before then
[10:36] <rogpeppe1> jam1: because that's the way commonLoop signifies that it's finished
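rogpeppe1's alternative, keeping the WaitGroup local to commonLoop and using the close of `in` as the "I'm finished" signal, could be sketched as below. The names are simplified stand-ins, not the actual juju watcher code: the point is that the outer loop waits for `in` to be closed rather than sharing a WaitGroup across goroutine boundaries.

```go
package main

import (
	"fmt"
	"sync"
)

// commonLoop owns a purely local WaitGroup for its internal goroutines and
// signals overall completion by closing in: close(in) is the last thing it
// does, so a closed in channel means commonLoop has fully finished.
func commonLoop(in chan<- string, done <-chan struct{}) {
	defer close(in)
	var wg sync.WaitGroup // internal bookkeeping only, never escapes
	wg.Add(1)
	go func() {
		defer wg.Done()
		select {
		case in <- "event":
		case <-done:
		}
	}()
	<-done
	wg.Wait()
}

// loop consumes events and, on shutdown, drains in until it is closed,
// which guarantees commonLoop (and everything it spawned) has finished.
func loop() []string {
	in := make(chan string)
	done := make(chan struct{})
	go commonLoop(in, done)
	got := []string{<-in} // receive one event
	close(done)           // ask commonLoop to stop
	for range in {        // wait for close(in): commonLoop is fully done
	}
	return got
}

func main() {
	fmt.Println(loop())
}
```

The design choice here is the one debated above: no caller ever touches commonLoop's WaitGroup, so there is nothing each caller "must remember to do"; the only contract is "in closed means finished".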
[10:44] <rogpeppe1> jam1: it's instructive to see how the code has changed since the original version (state/api/apiclient.go in rev 1235)
[10:46] <jam1> TheMue: standup ?
[10:49] <perrito666> yay fix committed
[11:41] <jam1> vladk: you dropped out? Is everything ok?
[12:47] <jam1> it would be nice if they had a very soft ding when someone connects
[12:54] <bodie_> morning all
[12:55] <perrito666> morning bis
[12:55] <perrito666> ericsnow: wallyworld I will go back to the new restore, what are you guys doing? I don't want to step on your toes
[12:55] <bodie_> anyone have a free minute to scope a PR or two?
[12:56] <bodie_> https://github.com/juju/juju/pull/140 and https://github.com/juju/juju/pull/141
[12:56] <ericsnow> perrito666: I'm still working on the backup client code
[12:57] <wallyworld> perrito666: i'm not working on it
[12:57] <perrito666> wallyworld: I meant wwitzel3 sorry
[12:58] <wallyworld> :-)
[12:58] <perrito666> wallyworld: I am used to you not being here at this time :p
[12:58] <wallyworld> can't sleep
[12:59] <perrito666> wallyworld: try watching a movie, works wonders for my wife, in almost five years together I think she hardly saw more than 3 movies in full
[13:00] <bodie_> hahahah
[13:01] <wallyworld> lol
[14:00] <wwitzel3> natefinch: standup
[14:48] <perrito666> natefinch: taxes in MA are really low
[14:55] <rogpeppe1> mgz: ping
[14:56] <natefinch> perrito666: what's funny is that most people around here call it Taxachusetts.  However, I presume you're talking about sales tax
[14:58] <natefinch> perrito666: sales tax in the US is done per state, Massachusetts is pretty middle of the road for states at 6.25% ... California being the highest AFAIK, at 10%, and several states have 0% (notably New Hampshire, which borders MA).
[14:59] <TheMue> *sniff*
[15:00] <TheMue> in Germany we’ve got 7% for food and books, magazines etc, but 19% for the rest
[15:03] <alexisb> fwereade, having an issue with my hangouts, will be there shortly
[15:03] <perrito666> natefinch: I am talking about the tax amazon collects from me when trying to ship you stuff :p
[15:03] <fwereade> alexisb, oops, forgot we were meeting, omw too :)
[15:04] <alexisb> :)
[15:05] <perrito666> man, lenovo really makes it hard to find a replacement battery
[15:06] <bac> cmars: ping
[15:07] <cmars> bac, pong
[15:12] <mgz> rogpeppe1: hey
[15:12] <rogpeppe1> mgz: in a call currently, but are you around for a chat in 30 mins or so?
[15:13] <mgz> sure thing
[15:13] <rogpeppe1> mgz: also... did you manage to get around that godeps problem?
[15:13] <mgz> rogpeppe1: yeah, should all be fine now
[15:13] <rogpeppe1> mgz: what was the issue?
[15:13] <mgz> unrelated repository issue on the bot
[15:13] <rogpeppe1> mgz: which was?
[15:14] <mgz> a repo is shared between a bunch of different things, including godeps apparently, and we hit a bzr bug which made every branch using the repo unhappy
[15:16] <rogpeppe1> ah, i wondered if it was something like
[15:16] <rogpeppe1> that
[15:42] <perrito666> brb lunch
[15:53] <rogpeppe1> mgz: hey
[15:53] <mgz> rogpeppe1: hey
[15:53] <rogpeppe1> mgz: hangout?
[15:54] <rogpeppe1> mgz: if it's a hassle, np
[15:54] <mgz> sure, let's use juju-core-team
[15:54] <rogpeppe1> mgz: link?
[15:55] <mgz> rogpeppe1: in the calendar for thursday or just ...plus.google.com/hangouts/_canonical.com/juju-core-team
[15:56] <rogpeppe1> mgz: hmm, i get 404
[15:56] <rogpeppe1> mgz: will try the link in the calendar
[15:56] <mgz> after the _
[15:56] <mgz> add /
[15:56] <mgz> I mistyped
[16:03] <sinzui> abentley, Juju-ci will fail juju for the wrong reasons.
[16:04] <sinzui> abentley, ppc and arm64 access was restored, but ci missed the opportunity to make the debs. all those arch tests will fail
[16:04] <abentley> Doh!
[16:05] <sinzui> abentley, aws has 6 old instances still running, causing the manual test to fail.
[16:05] <sinzui> I will restart the revision if no revision lands in the next hour
[16:10] <sinzui> perrito666, I am restarting the current revision. CI ran out of our AWS resources and ppc64 and arm64 machines. Many tests couldn't be run. Looks like the restore is working when there are resources
[16:17] <perrito666> sinzui: \o/
[16:19] <natefinch> fwereade: you around?
[16:21] <natefinch> I love getting happy birthday emails from websites I don't even remember visiting
[16:22] <perrito666> natefinch: is it your bday?
[16:22] <natefinch> It is my birthday and my twin sister's birthday and my wife's birthday today.
[16:23] <perrito666> uhh, that is a cool memory space saver
[16:23] <perrito666> natefinch: well happy bday (and why is your bday not in the calendar for bdays?)
[16:23] <natefinch> it's in my calendar, I dunno
[16:23] <natefinch> and my aunt's birthday is tomorrow and Wednesday is Zoë, my younger daughter's birthday
[16:24] <natefinch> and a couple days ago was my sister's step son's birthday.    My mother went to the store and bought 6 birthday cards last week :)
[16:25] <perrito666> "I will not make friends with people that have birthdays outside this week" great technique
[16:54] <rogpeppe> on reflection, i'm not sure that using gopkg.in/juju/charm.v2 gives significant advantage over using github.com/juju/charm.v2
[16:54] <rogpeppe> mgz: ^
[16:54] <natefinch> rogpeppe: I actually thought of that when Gustavo proposed gopkg.in
[16:55] <rogpeppe> mgz: the main disadvantage of the latter that i can think of is that github.com/juju will show several more repos, one for each api version
[16:55] <natefinch> yep
[16:55] <rogpeppe> natefinch: what do you reckon?
[16:56] <mgz> rogpeppe: it's mostly a benefit with lots of api bumps, and keeping a sane git branch workflow
[16:56] <rogpeppe> mgz: i think that the git workflow can be pretty similar in both cases
[16:56] <natefinch> rogpeppe: that you could make your own foo.v2 and not need his magic.  However, it does clean up the juju repo list
[16:57] <rogpeppe> mgz: there's not much difference between a remote branch whichever repo it's in
[16:57] <natefinch> rogpeppe: his magic does let you do v2.1 v2.2 and let import foo.v2  work with all of those
[16:57] <natefinch> not sure how necessary that minor revision bumping is though
[16:57] <rogpeppe> natefinch: that is true
[16:58] <natefinch> it keeps the code separate, I guess, but there's little difference to the end user from it all being in the same branch
[16:58] <rogpeppe> the thing is, the code will need to live in two separate directories anyway, because that's the way go works
[16:59] <natefinch> I mean, it keeps the v2.1 separate from v2.2 in git
[16:59] <natefinch> yes, on disk, foo.v2 will need to live separately from foo.v1
[16:59] <natefinch> I think it's worth using gopkg.in to keep the juju repos cleaner
[17:00] <natefinch> it's already getting a little noisy
[17:00] <rogpeppe> my main inclination the other way is that it's nice to have all the juju packages live under github.com/juju in my $GOPATH
[17:02] <rogpeppe> because i'll often do a recursive grep inside that dir
[17:04] <jam1> alexisb: you dropped out at "lets say"
[17:04] <alexisb> jam1, yeah
[17:04] <alexisb> I am trying to get back in
[17:05] <perrito666> oh the sweet looks of passing tests http://juju-ci.vapour.ws:8080/job/functional-ha-backup-restore/213/
[17:10] <natefinch> perrito666:  beautiful
[17:30] <sinzui> alexisb, natefinch, jam, 1.19.4 release is blocked by bug 1333357 which was introduced earlier today
[17:31] <natefinch> dammit
[17:32]  * perrito666 facepalms
[17:33] <alexisb> ooo the saga continues
[17:33] <natefinch> ahh, it's only gccgo, who cares?
[17:33]  * natefinch is joking, mostly
[17:33] <alexisb> IBM does
[17:33] <alexisb> ;)
[17:36] <perrito666> sinzui: do you really think that revision is the one that introduced the bug?
[17:37] <sinzui> perrito666, it is the only rev that changed apiserver/networker in the last 2 days
[17:38] <perrito666> the output of gccgo is less than useful
[17:41] <natefinch> so, that's a compiler error, which means it's a gccgo bug not a juju bug... not that we don't still have to fix it in gccgo (and perhaps try to avoid it in juju)
[17:42] <perrito666> natefinch: If I were a compiler dev, I really would like to have better error reports than that
[17:42] <perrito666> do you know what the $ mean?
[17:42]  * perrito666 takes a quick look at the code
[17:43] <bac> hi sinzui, for deploying to prodstack one of the webops mentioned a while back that we should transition to storing charm dependencies in an bucket somewhere. can you point me to one of your deployments that does that so i copy the hell out of your work?
[17:44] <natefinch> perrito666: it has the exact line number and everything, though the message itself is not very useful
[17:44] <sinzui> bac, I don't have an example.
[17:44] <bac> doh
[17:47] <sinzui> bac swift post charmworld-deps
[17:49] <sinzui> bac, the charm can call swift download charmworld-deps <object>
[17:50] <bac> sinzui: thanks
[17:52] <sinzui> bac, you will probably want to make the container public so that the charm doesn't need creds. swift post -r '.r:*'
[17:53] <jcastro> natefinch, do you have the URL handy for the inprogress API documentation? I believe you gave it to me before but I didn't bookmark it.
[17:53] <sinzui> bac, I don't trust canonistack's swift this month. I got canonistack tests to pass by avoiding it. You probably will have a problem intermittently uploading files to it.
[17:56] <perrito666> natefinch: I was curious of what line of go triggered the .cc to crash
[18:07] <natefinch> perrito666: no idea.  the error doesn't really say, and I can't imagine "String()" would do it.
[18:10] <perrito666> natefinch: the only nested something is on the test
[18:34]  * perrito666 buys new guts for his computer
[18:53] <alexisb> natefinch, I need a quick break and will be a few minutes late for our 1x1, I will ping you when I get back
[18:53] <natefinch> alexisb: okie dokie
[20:02] <natefinch> alexisb: are you in the call? I'm there but it says no one else is
[20:03] <alexisb> I am there
[20:03] <alexisb> video call
[20:04] <natefinch> I'll rejoin
[20:04] <alexisb> trying call in as well
[20:04] <perrito666> ok, EOD, bye ppl
[20:04] <natefinch> the bridge ID said not valid
[20:05] <alexisb> yep
[20:05] <alexisb> natefinch, are you on the video call?
[21:15] <mattyw> thumper, morning
[21:15] <mattyw> thumper, fwereade asked if you could take a look at this https://github.com/juju/juju/pull/108
[21:15] <thumper> mattyw: ok, and otp
[21:16] <mattyw> thumper, no problem, just wanted to let you know
[21:16] <mattyw> I'll be heading to bed soon so anytime today is fine
[21:17] <menn0> perrito666: ping
[21:17] <perrito666> menn0: pong
[21:17] <menn0> perrito666: I'm wanting to understand how the native backup solution is looking.
[21:17] <menn0> just the high level design
[21:18] <menn0> how much is committed already and how much is to come?
[21:18] <perrito666> menn0: you have my divided attention between you and my merienda :)
[21:18] <perrito666> if what you want is backup, its inner parts are already committed and not likely to change much
[21:18]  * menn0 had to look up what merienda means :)
[21:19] <perrito666> I looked up and wp does not have a translation for it
[21:19] <perrito666> menn0: as for restore is being done, I pretty much know how it will be but not yet completed
[21:19] <perrito666> we had a few days' setback because of a bug in the old restore
[21:20] <menn0> so will backups be stored server side with the option of downloading or did your team go with the direct download to the client option?
[21:20] <menn0> yeah I saw the discussions about the problem that was breaking CI
[21:21] <perrito666> menn0: if you give me 5 minutes to remove my toasts from the fire we might solve this faster with a hangout
[21:21] <menn0> sounds good
[21:25] <perrito666> menn0: https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=3
[21:27] <perrito666> menn0: ?
[21:27] <menn0> perrito666: missed you
[21:27] <menn0> try again?
[21:27] <perrito666> https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=3
[21:28] <menn0> perrito666: party is over. try this: https://plus.google.com/hangouts/_/g3qbgajp7bnquq576ulbflvdvia
[21:28] <perrito666> menn0: I did not call :p that is the url for the moonstone hangout
[21:35] <bodie_> anyone familiar with the permissions stuff in apiserver?
[21:35] <bodie_> I'm trying to write a failing test for a unit without perms
[21:36] <bodie_> but, I can't quite figure out how to find a suitable unit to try to query that I won't have perms for
[21:36] <bodie_> I'm in state/apiserver/uniter
[21:36] <bodie_> UniterAPI suite
[21:36] <bodie_> sorry, uniterSuite
[21:45] <bodie_> and batch Actions query is in
[21:45] <bodie_> https://github.com/juju/juju/pull/140#discussion-diff-14067952
[21:45] <bodie_> sorry, https://github.com/juju/juju/pull/140
[21:46] <bodie_> ActionsWatcher API endpoint would be great to have a review on as well :)
[21:46] <bodie_> PR 141
[22:23] <menn0> bodie_: I'll take a look at that PR today
[22:25] <bodie_> sweet, thanks menn0
[22:31] <sinzui> wallyworld, We got another regression while ppc64 testing was down. I don't think perrito666 or natefinch made progress with it https://bugs.launchpad.net/juju-core/+bug/1333357
[22:31] <wallyworld> :-(
[22:32] <wallyworld> sinzui: ok, we'll fix today
[22:33] <sinzui> wallyworld, I will grab the tarball and installer the moment I see CI pass to start the release proc
[22:33] <wallyworld> rightio. this release really is cursed so far
[22:34] <dpb1> is there an equivalent to juju run, but for transferring files?   like, send this file to "--unit <unit list>"
[22:35] <wallyworld> sinzui: we also have bug 1333098 that has not been fix committed yet afaict
[22:35] <_mup_> Bug #1333098: API panic running test suite <api> <panic> <regression> <juju-core:In Progress by jameinel> <https://launchpad.net/bugs/1333098>
[22:35] <wallyworld> dpb1: i think juju scp
[22:36] <wallyworld> yup, type juju help scp
[22:37] <dpb1> wallyworld: yes, but I have to iterate, right?  I have a big file, and I was looking for something that could copy once into the cloud, then distribute that to the units I specify.
[22:38] <wallyworld> dpb1: ah i see what you want. no, sadly you have to iterate
[22:38] <dpb1> wallyworld: k
[22:38] <dpb1> thx
[22:38] <wallyworld> sorry
[22:38] <dpb1> np, I was just wishing. :)
[22:38] <wallyworld> raise a bug if you want
[22:38]  * dpb1 nods
[22:38] <wallyworld> we may be able to do something
[23:20] <perrito666> I did not, I just took a look at it