=== jcsackett_ is now known as jcsackett [02:09] https://github.com/juju/juju/pull/144 [03:09] wallyworld: https://github.com/juju/juju/pull/134/files#diff-4fd108ff3861516a9ea367ed5e560d50R1534 does this look reasonable? [03:10] thumper: sorry I'm late - in hangout now [03:11] axw: for the first test, i think we need to load a new machine object on which to set addr1, addr0 [03:11] in the before hook [03:11] oops, yeah, did it in the second but not the first [03:11] so that we don't mess up the in memory representation for machine [03:11] will do [03:12] ah, haven't got to the 2nd test yet [03:13] axw: yep, lgtm [03:14] wallyworld: thanks. will fix the first one and land [03:14] ta [03:16] wallyworld: :) please tal at your email I would really appreciate an answer to that last email [03:17] wallyworld: took me 3 days but my background brain thread finally returned the solution [03:18] perrito666: will do, looking now [03:21] perrito666: good pickup. my view is that all restore type operations should be run as admin. normal users have readwrite permissions but lack these ones mgo.RoleDBAdminAny, mgo.RoleUserAdminAny. i don't think it's appropriate for users other than admin to have such permissions. i'll reply to the email [03:24] wallyworld: ok, I'll propose a small patch and if we then decide that it's the other way around we can reject it [03:25] perrito666: which lines fail from mongoEval? [03:26] perrito666: i'm also confused why it worked when run bu hand? [03:26] by [03:26] wallyworld: at first sight none :| that is none yields errors, but something might be lost when we go from bash through ssh to go.. [03:26] wallyworld: there lies the solution actually [03:26] I realized that by hand I was using admin instead of tag user [03:26] oh [03:27] that was a bit of a ref herring then :-) [03:27] red [03:27] wallyworld: all thanks to the talk we had the other day [03:28] wallyworld: I take it that a red herring is not a fish in that sentence? 
:p [03:28] no :-) a colloquial expression for an unintended diversion or misdirection [03:29] well yes and no, I focused on the fact that it worked by hand and tried to replace --eval with a js file and while I was writing that patch I realised I was not using admin on restore [03:32] sadly the realisation of this solution came on a sunday night, so here I am :| [03:32] :-( [03:33] so the solution is to remove mongoEval and run everything as mongoAdminEval? [03:34] or did you want to keep doing some things as non admin? [03:34] nope, everything as Admin [03:35] or I can try one by one the commands and do as admin only what is required [03:35] which is, as I see it, a great loss of time [03:36] yep, agreed [03:41] ok, I'll go sleep and send the patch tomorrow AM [03:42] sounds good, thanks for spending the extra time to find it [04:09] Session closed is getting me down [04:37] axw: wallyworld + echo 'Instance setup done:' Mon Jun 23 04:35:03 UTC 2014 [04:37] ^ this is great [04:37] but there is no other timestamp in the log to see when it started [04:38] well there is this one [04:38] [workspace] $ /bin/bash /tmp/hudson2115592101584260118.sh [04:38] Started: Mon Jun 23 04:26:02 UTC 2014 [04:38] seriously - 10 minutes to set up a machine in ec2 ... [04:39] davecheney: yep :-( [04:39] can be as short as 4 minutes [04:39] that's why we're looking at using a nailed up instance [04:39] there are 3 timestamps in the log [04:40] tests started, instance finished/tests starting, all done [04:40] s/test started/ job started [04:40] Instance has ip ec2-54-84-105-221.compute-1.amazonaws.com [04:40] Waiting for 22.............................. 
[04:40] + set +e [04:40] ^ there should be a timestamp here [04:40] can add one [04:40] System information as of Mon Jun 23 04:29:53 UTC 2014 [04:40] the apt dance takes ages :-( [04:40] there is this one from the motd [04:41] i guess it's up to date === vladk|offline is now known as vladk [05:43] wallyworld: can I log in to Jenkins or is there a limited list of people who can? The current build is mine and has failed and can be cancelled. [05:44] on a related note, there's been a lot of mgo panics today during test runs along the lines of "Session already closed" [05:45] is this new or could it be related to davecheney's recent close-mongo-iterators PR? [05:45] yes [05:46] the tracebacks seemed to implicate root.go somehow [05:46] and then i saw your email to dev and stopped looking [05:46] could be related though [05:47] no I think these are different from root.go [05:48] there have been 2 regular build problems today: root.go and the mgo "session already closed" [05:48] I think they're separate issues (probably) [05:52] could be [05:52] we've looked on and off and fixed several issues [05:55] menn0: so the existing test that was failing does show a data race under "go test -race", I'll try to write up a simpler case, though. [05:56] jam1: great that you were able to track it down quickly [05:56] menn0: well, when the line that has the error is "objectCache[key] = obj" [05:56] it gives a pretty good hint [05:57] but yeah, I'm pretty familiar with the code since I've been working on it closely. [05:57] jam1: note that it was a different test that failed on my machine, but in a similar way [05:58] menn0: yeah, it is an API data race (concurrent mutation of a golang map; maps are *not* concurrent-safe, you have to wrap them in a mutex) [05:58] jam1: yep. it could happen at any point right? [05:58] any call [05:58] menn0: yeah [06:02] jam1: morning [06:02] vladk: morning. 
Sorry I'm a bit late, looking into this data race, I'll be there in a couple mins [06:04] jam1: go maps and slices are non-concurrent objects, only channels are concurrent and strings are immutable [06:04] vladk: yeah, I'm aware, just wasn't thinking about the concurrent access when I was writing the code. [06:24] menn0: https://github.com/juju/juju/pull/146 [06:25] it also fixes an only tangentially related race condition in state/api/watcher/watcher.go that I only noticed because the test that was failing in cmd/jujud had 2 sources of race conditions. [06:25] "go test -race" is pretty nice, it's a shame it slows things down so much. [06:26] menn0: you're also OCR for today, so poke for the review :) [06:26] jam1: not looking in detail, you may have perhaps fixed an ongoing intermittent test failure around watchers perhaps [06:26] well at least i'm hoping :-) [06:27] wallyworld: so the race for watchers is that it is possible for the loop() to terminate before it actually starts anything [06:27] because it calls w.wg.Add() but only *inside a goroutine* [06:27] which isn't, itself, protected by a wg.Add() [06:27] ah ok. may not be the same issue then [06:27] so you could start a watcher, have a couple pending goroutines, and then exit [06:28] although thinking about it, I may need to move something around a bit [06:29] good that we found and fixed this before 1.19.4 ships [06:32] k, I don't need to move it after all. so my patch is ready for review. [06:36] vladk: I'm in the hangount [06:36] hangout [06:36] wallyworld: I wonder if we want a CI test that runs the whole test suite in "go test -race" mode [06:36] I don't think we're *quite* clean there, though. [06:37] jam1: worth adding i reckon [06:37] wallyworld: well it doesn't help if it never passes :) [06:38] sure, so let's get it passing first :-) [06:43] ah mongo, how do you leak temporary files, let me count the ways ... === vladk is now known as vladk|offline [07:42] davecheney: morning! 
[07:43] rogpeppe1: ahoy! [07:44] morning [07:44] TheMue: hiya [08:58] TheMue: morning ! I'm just finishing up lunch, I'll be there in about 5-10 min. [08:59] jam1: ok [09:35] wallyworld, ping [09:35] fwereade: hey [09:36] wallyworld, hey, I was wondering about proof-of-access for the managed resource stuff [09:36] proof of access? [09:37] wallyworld, ie "here store this file with md5/sha256" "ok I want the md5/sha256 of " "here you go" "ok cool your file is stored" [09:37] ah that [09:37] not implemented yet [09:37] just getting basics landed [09:37] wallyworld, I'm wondering what impact that will have on this layer, because it's starting to feel like the right place for it [09:38] wallyworld, maybe I'm wrong [09:38] wallyworld, but the lower the layer that implements it the less opportunity we will have to fuck it up [09:38] perrito666: nice job :) [09:38] wallyworld, the higher the layer the less we need to thread the challenge/response stuff through, I understand it's a tradeoff [09:39] fwereade: it will impact i think, may need an extension to the current api. workflow will be controlled by a layer above but primitives to make it work will be in the current layer [09:39] wallyworld, ok, cool, so long as it's on your mind and coming soon I won't worry about it for this CL [09:40] fwereade: well, next on the todo list is the ToolsStorage facade so we can get rid of the http storage stuff for manual provider [09:40] so it's on my mind but not on the very immediate next to do list [09:40] does that work for you? 
[09:41] wallyworld, I worry that we'll want that functionality for all facades, and that changing the tools facade to accommodate it *as well as* the managedresource stuff will exert subtle pressures to do it less cleanly than we might [09:42] fwereade: ok, i can add some new apis to the current design spec and do the proof of access stuff first then [09:43] after i land the current pull request [09:43] https://github.com/juju/juju/pull/124 [09:43] wallyworld, great, thanks [09:43] wallyworld, yeah, was starting to look at that, that was what made me think of it :) [09:43] lol ok, i figured as much [09:51] jam1: sorry I had finished up for the day... I'm actually OCR tomorrow not today anyway [09:52] menn0: I must not have refreshed the page [09:52] pn [09:52] np [09:52] jam1: it did the same for me too, at first it said I was and them recalced [09:52] then even [09:53] jam1: it's good that you pointed it out anyway. I hadn't realised I was on tomorrow :) [09:54] fwereade: what did we want for the challenge-response policy? retain the current Put() where the caller has to provide all the data (and it is de-duped on the server) but also add a *new* API where they provide just the checksums and then are issued a challenge for a segment checksum and if that passes they don't need to upload anything? 
[09:56] not thinking too much, the new api will necessarily be stateful so we'll have to consider a timeout etc after the initial request [09:56] ie if they don't respond soon enough the acceptable response expires and they would be issued with a new challenge [09:57] wallyworld, in my mind the main goal is to avoid having to send the bytes at all in the common case, so what I'd like us to *expose* is the stateful case, and only fall back to sending bytes in response to a never-heard-of-it result from the first call [09:57] wallyworld, yeah, we'd want a timeout, indeed [09:57] fwereade: "avoid sending bytes in the common case" assumes there is a high chance the data is already uploaded [09:59] i guess the caller can optimistically try and use just the checksums [09:59] and if the server doesn't have the data, the caller is requested to upload everything [10:00] morning everyone [10:00] wallyworld, I think that globally there is a high enough chance that the (low) cost of even quite a lot of back-and-forths will be reasonable compared to the (high) cost of even a few ~gig-sized uploads [10:00] wallyworld, remember this is closely aligned with the fat charms case [10:01] wallyworld, those can often end up gig-sized [10:01] fwereade: agreed. i'm just stating the obvious to be really explicit that we have a shared understanding [10:01] dimitern: can you take a look at https://github.com/juju/juju/pull/146 [10:01] wallyworld, cool, I think we do [10:02] jam1, looking [10:02] fwereade: both Put(supplyTheData) and Put(supplyTheChecksums) will be exported so i guess the caller can decide which one they want to use [10:05] morning perrito666 [10:05] jam: morning [10:06] morning natefinch [10:08] mm, we no longer have a way to say "this fixes bug lp:#######" ? 
[10:09] perrito666: I think if there is an lp issue, it is nice to mention it in the pull request comments [10:10] wwitzel3: yup, I just wanted to know if there is a way to trigger the "fix committed" status [10:10] perrito666: not that I am aware of [10:10] perrito666: I've been doing that manually [10:10] ok, I'll use what I see for other bugs [10:12] jam1, reviewed [10:12] thx [10:17] natefinch: wwitzel3 wallyworld https://github.com/juju/juju/pull/147 [10:17] there are things that upset me and then the fact that this bug is fixed with so little.... :p [10:18] perrito666: \o/ thank you for fixing [10:19] :) now back to write a decent restore [10:19] i bet you are sick of backup/restore now [10:20] wallyworld: no, I am actually very fond of it, I really look forward to having the new one implemented [10:20] you have a lot of patience :-) [10:20] I am a bit sleepy tho, I slept only 4 hs last night [10:21] :-( [10:24] jam1: on further reflection, i don't think i understand your commonLoop changes [10:24] jam1: i'm not convinced they're right [10:25] jam1: specifically, i don't see how the changes ensure anything happens before the outer loop terminates [10:25] rogpeppe1: so the race as detected by 'go test -race' is that 'NewNotifyWatcher' does a "defer w.wg.Wait()" before calling loop. And nothing has been Added to the wg at that time. [10:25] we then call "go w.commonLoop()" internally [10:25] which will, eventually, call w.wg.Add() for the two goroutines that *it* spawns [10:26] however, the 'go w.commonLoop()' hasn't actually incremented anything and can return out of "loop" before we've started it. [10:26] I believe there is a secondary channel of information in the "w.in" so that the for{} loops never actually exit until commonLoop has entered. [10:27] jam1: i see. 
[10:27] jam1: a better solution (i think) is to avoid calling Wait in NewNotifyWatcher [10:27] jam1: but to make sure loop waits for in to be closed before returning [10:27] rogpeppe1: I personally felt like wg.Wait() should probably be called inside the loop() functions === vladk|offline is now known as vladk [10:28] rogpeppe1: so in the case of "w.tomb.Dying" we can return without checking w.in [10:28] is that ok ? [10:28] jam1: the original scheme was the wg is for commonLoop's internal use only [10:29] jam1: it's kinda weird that commonLoop is doing the Wait itself [10:29] rogpeppe1: if it is internal to commonLoop, couldn't it just use a local var ? [10:29] rogpeppe1: I certainly originally thought to change "go commonLoop()" to just be a synchronous "w.commonLoop()" and then wait outside [10:30] * rogpeppe1 checks [10:30] but that closes w.in in a defer [10:30] so we could change it some other way [10:30] jam1: yeah, that was my initial thought too [10:30] jam1: i'm not keen on the current change as it adds more stuff that each caller of commonLoop must remember to do [10:32] jam1: yes, i think wg could/should be a local var [10:35] rogpeppe1: so 'must wait until in is closed' isn't quite true today, because of stuff like "the tomb can die first" [10:36] jam1: yup [10:36] jam1: if the tomb dies, we should wait for the in channel to be closed [10:36] rogpeppe1: I'm not sure that it means the outer loop must not terminate before then [10:36] jam1: because that's the way commonLoop signifies that it's finished [10:44] jam1: it's instructional to see how the code has changed since the original version (state/api/apiclient.go in rev 1235) [10:46] TheMue: standup ? [10:49] yay fix committed [11:41] vladk: you dropped out? Is everything ok? 
=== vladk is now known as vladk|offline === vladk|offline is now known as vladk === vladk is now known as vladk|offline === vladk|offline is now known as vladk [12:47] it would be nice if they had a very soft ding when someone connects === vladk is now known as vladk|offline [12:54] morning all [12:55] morning bis [12:55] ericsnow: wallyworld I will go back to the new restore, what are you guys doing? I don't want to step on your toes [12:55] anyone have a free minute to scope a PR or two? [12:56] https://github.com/juju/juju/pull/140 and https://github.com/juju/juju/pull/141 [12:56] perrito666: I'm still working on the backup client code [12:57] perrito666: i'm not working on it [12:57] wallyworld: I meant wwitzel3 sorry [12:58] :-) [12:58] wallyworld: I am used to you not being here at this time :p [12:58] can't sleep [12:59] wallyworld: try watching a movie, works wonders for my wife, in almost five years together I think she hardly saw more than 3 movies in full [13:00] hahahah [13:01] lol === vladk|offline is now known as vladk [14:00] natefinch: standup [14:48] natefinch: taxes in MA are really low [14:55] mgz: ping [14:56] perrito666: what's funny is that most people around here call it Taxachusetts. However, I presume you're talking about sales tax [14:58] perrito666: sales tax in the US is done per state, Massachusetts is pretty middle of the road for states at 6.25% ... California being the highest AFAIK, at 10%, and several states have 0% (notably New Hampshire, which borders MA). 
[14:59] *sniff* [15:00] in Germany we’ve got 7% for food and books, magazines etc, but 19% for the rest [15:03] fwereade, having an issue with my hangouts, will be there shortly [15:03] natefinch: I am talking about the tax amazon collects from me when trying to ship you stuff :p [15:03] alexisb, oops, forgot we were meeting, omw too :) [15:04] :) [15:05] man, lenovo really makes it hard to find a replacement battery [15:06] cmars: ping [15:07] bac, pong === niemeyer_ is now known as niemeyer [15:12] rogpeppe1: hey [15:12] mgz: in a call currently, but are you around for a chat in 30 mins or so? === hatch__ is now known as hatch [15:13] sure thing [15:13] mgz: also... did you manage to get around that godeps problem? [15:13] rogpeppe1: yeah, should all be fine now [15:13] mgz: what was the issue? [15:13] unrelated repository issue on the bot [15:13] mgz: which was? [15:14] a repo is shared between a bunch of different things, including godeps apparently, and we hit a bzr bug which made every branch using the repo unhappy [15:16] ah, i wondered if it was something like [15:16] that [15:42] brb lunch === vladk is now known as vladk|offline === vladk|offline is now known as vladk === vladk is now known as vladk|offline [15:53] mgz: hey [15:53] rogpeppe1: hey [15:53] mgz: hangout? [15:54] mgz: if it's a hassle, np [15:54] sure, let's use juju-core-team [15:54] mgz: link? [15:55] rogpeppe1: in the calendar for thursday or just ...plus.google.com/hangouts/_canonical.com/juju-core-team [15:56] mgz: hmm, i get 404 [15:56] mgz: will try the link in the calendar [15:56] after the _ [15:56] add / [15:56] I mistyped [16:03] abentley, Juju-ci will fail juju for the wrong reasons. [16:04] abentley, ppc and arm64 access was restored, but ci missed the opportunity to make the debs. all those arch tests will fail [16:04] Doh! [16:05] abentley, aws has 6 old instances still running, causing the manual test to fail. 
I will restart the revision if no revision lands in the next hour [16:10] perrito666, I am restarting the current revision. CI ran out of our AWS resources and ppc64 and arm64 machines. Many tests couldn't be run. Looks like the restore is working when there are resources [16:17] sinzui: \o/ [16:19] fwereade: you around? [16:21] I love getting happy birthday emails from websites I don't even remember visiting [16:22] natefinch: is it your bday? [16:22] It is my birthday and my twin sister's birthday and my wife's birthday today. [16:23] uhh, that is a cool memory space saver [16:23] natefinch: well happy bday (and why is your bd not in the calendar for bdays?) [16:23] it's in my calendar, I dunno [16:23] and my aunt's birthday is tomorrow and Wednesday is Zoë, my younger daughter's birthday [16:24] and a couple days ago was my sister's stepson's birthday. My mother went to the store and bought 6 birthday cards last week :) [16:25] "I will not get friends with people that have birthdays outside this week" great technique === rogpeppe1 is now known as rogpeppe [16:54] on reflection, i'm not sure that using gopkg.in/juju/charm.v2 gives significant advantage over using github.com/juju/charm.v2 [16:54] mgz: % [16:54] mgz: ^ [16:54] rogpeppe: I actually thought of that when Gustavo proposed gopkg.in [16:55] mgz: the main disadvantage of the latter that i can think of is that github.com/juju will show several more repos, one for each api version [16:55] yep [16:55] natefinch: what do you reckon? [16:56] rogpeppe: it's mostly a benefit with lots of api bumps, and keeping a sane git branch workflow [16:56] mgz: i think that the git workflow can be pretty similar in both cases [16:56] rogpeppe: that you could make your own foo.v2 and not need his magic. 
However, it does clean up the juju repo list [16:57] mgz: there's not much difference between a remote branch whichever repo it's in [16:57] rogpeppe: his magic does let you do v2.1 v2.2 and let import foo.v2 work with all of those [16:57] not sure how necessary that minor revision bumping is though [16:57] natefinch: that is true [16:58] it keeps the code separate, I guess, but there's little difference to the end user from it all being in the same branch [16:58] the thing is, the code will need to live in two separate directories anyway, because that's the way go works [16:59] I mean, it keeps the v2.1 separate from v2.2 in git [16:59] yes, on disk, foo.v2 will need to live separately from foo.v1 [16:59] I think it's worth using gopkg.in to keep the juju repos cleaner [17:00] it's already getting a little noisy [17:00] my main inclination the other way is that it's nice to have all the juju packages live under github.com/juju in my $GOPATH [17:02] because i'll often do a recursive grep inside that dir [17:04] alexisb: you dropped out at "lets say" [17:04] jam1, yeah [17:04] I am trying to get back in [17:05] oh the sweet looks of passing tests http://juju-ci.vapour.ws:8080/job/functional-ha-backup-restore/213/ [17:10] perrito666: beautiful === vladk|offline is now known as vladk [17:30] alexisb, natefinch, jam, 1.19.4 release is blocked by bug 1333357 which was introduced earlier today [17:31] dammit [17:32] * perrito666 facepalms [17:33] ooo the saga continues [17:33] ahh, it's only gccgo, who cares? [17:33] * natefinch is joking, mostly [17:33] IBM does [17:33] ;) [17:36] sinzui: do you really think that revision is the one that introduced the bug? [17:37] perrito666, it is the only rev that changed apiserver/networker in the last 2 days [17:38] the output of gccgo is less than useful [17:41] so, that's a compiler error, which means it's a gccgo bug not a juju bug... 
not that we don't still have to fix it in gccgo (and perhaps try to avoid it in juju) [17:42] natefinch: If I were a compiler dev, I really would like to have better error reports than that [17:42] do you know what the $ means? [17:42] * perrito666 takes a quick look at the code [17:43] hi sinzui, for deploying to prodstack one of the webops mentioned a while back that we should transition to storing charm dependencies in a bucket somewhere. can you point me to one of your deployments that does that so i copy the hell out of your work? [17:44] perrito666: it has the exact line number and everything, though the message itself is not very useful [17:44] bac, I don't have an example. [17:44] doh [17:47] bac swift post charmworld-deps [17:49] bac, the charm can call swift download charmworld-deps [17:50] sinzui: thanks [17:52] bac, you will probably want to make the container public so that the charm doesn't need creds. swift post -r '.r:*' [17:53] natefinch, do you have the URL handy for the in-progress API documentation? I believe you gave it to me before but I didn't bookmark it. [17:53] bac, I don't trust canonistack's swift this month. I got canonistack tests to pass by avoiding it. You probably wan't have a problem intermittently uploading files to it. [17:56] natefinch: I was curious what line of Go triggered the .cc to crash [18:07] perrito666: no idea. the error doesn't really say, and I can't imagine "String()" would do it. [18:10] natefinch: the only nested something is on the test [18:34] * perrito666 buys new guts for his computer [18:53] natefinch, I need a quick break and will be a few minutes late for our 1x1, I will ping you when I get back [18:53] alexisb: okie dokie === vladk is now known as vladk|offline [20:02] alexisb: are you in the call? 
I'm there but it says no one else is [20:03] I am there [20:03] video call [20:04] I'll rejoin [20:04] trying to call in as well [20:04] ok, EOD, bye ppl [20:04] the bridge ID said not valid [20:05] yep [20:05] natefinch, are you on the video call? [21:15] thumper, morning [21:15] thumper, fwereade asked if you could take a look at this https://github.com/juju/juju/pull/108 [21:15] mattyw: ok, and otp [21:16] thumper, no problem, just wanted to let you know [21:16] I'll be heading to bed soon so anytime today is fine [21:17] perrito666: ping [21:17] menn0: pong [21:17] perrito666: I'm wanting to understand how the native backup solution is looking. [21:17] just the high level design [21:18] how much is committed already and how much is to come? [21:18] menn0: you have my divided attention between you and my merienda :) [21:18] if what you want is backup, its inner parts are already committed and not likely to change much [21:18] * menn0 had to look up what merienda means :) [21:19] I looked up and wp does not have a translation for it [21:19] menn0: as for restore, it's being done; I pretty much know how it will be, but it's not yet completed [21:19] we had a few days' setback bc of a bug in the old restore [21:20] so will backups be stored server side with the option of downloading or did your team go with the direct download to the client option? [21:20] yeah I saw the discussions about the problem that was breaking CI [21:21] menn0: if you give me 5 minutes to remove my toasts from the fire we might solve this faster with a hangout [21:21] sounds good [21:25] menn0: https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=3 [21:27] menn0: ? [21:27] perrito666: missed you [21:27] try again? [21:27] https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=3 [21:28] perrito666: party is over. 
try this: https://plus.google.com/hangouts/_/g3qbgajp7bnquq576ulbflvdvia [21:28] menn0: I did not call :p that is the url for the moonstone hangout [21:35] anyone familiar with the permissions stuff in apiserver? [21:35] I'm trying to write a failing test for a unit without perms [21:36] but, I can't quite figure out how to find a suitable unit to try to query that I won't have perms for [21:36] I'm in state/apiserver/uniter [21:36] UniterAPI suite [21:36] sorry, uniterSuite [21:45] and batch Actions query is in [21:45] https://github.com/juju/juju/pull/140#discussion-diff-14067952 [21:45] sorry, https://github.com/juju/juju/pull/140 [21:46] ActionsWatcher API endpoint would be great to have a review on as well :) [21:46] PR 141 === Guest38543 is now known as wallyworld [22:23] bodie_: I'll take a look at that PR today [22:25] sweet, thanks menn0 [22:31] wallyworld, We got another regression while ppc64 testing was down. I don't think perrito666 or natefinch made progress with it https://bugs.launchpad.net/juju-core/+bug/1333357 [22:31] :-( [22:32] sinzui: ok, we'll fix today [22:33] wallyworld, I will grab the tarball and installer the moment I see CI pass to start the release proc [22:33] rightio. this release really is cursed so far [22:34] is there an equivalent to juju run, but for transferring files? like, send this file to "--unit " [22:35] sinzui: we also have bug 1333098 that has not been fix committed yet afaict [22:35] <_mup_> Bug #1333098: API panic running test suite [22:35] dpb1: i think juju scp [22:36] yup, type juju help scp [22:37] wallyworld: yes, but I have to iterate, right? I have a big file, and I was looking for something that could copy once into the cloud, then distribute that to the units I specify. [22:38] dpb1: ah i see what you want. no, sadly you have to iterate [22:38] wallyworld: k [22:38] thx [22:38] sorry [22:38] np, I was just wishing. 
:) [22:38] raise a bug if you want [22:38] * dpb1 nods [22:38] we may be able to do something [23:20] I did not, I just took a look at it