[06:37] mornin' all [06:38] huwshimi: do you, by any chance, know the name/address of the calendar that we should add leave to? [06:54] rogpeppe, morning [06:54] dimitern: hiya [06:54] dimitern: how's tricks? [06:54] rogpeppe, I wanted to ask you for a while.. [06:55] rogpeppe, why do you usually send a mail twice to the same lists - once normally and once forwarding it to the same list? :) [06:55] dimitern: because i very often forget to send it from my canonical address [06:55] rogpeppe, ah :) [06:56] dimitern: and i believe that it bounces when i send from my normal gmail address [06:56] dimitern: so i *think* you'll see two messages only when you're directly CC'd [06:56] rogpeppe, so godeps now does both fetching/pulling new revisions and go getting missing packages? [06:56] dimitern: if you're seeing two messages otherwise, then perhaps i don't need to resend the messages after all... [06:56] dimitern: yeah [06:56] rogpeppe, nope, I always see 2 msgs [06:57] dimitern: hmm, maybe that means that rogpeppe@gmail.com is actually subscribed to both [06:57] rogpeppe, good, I'll give it a try to see if it solves my previous issues [06:57] rogpeppe, most likely :) [06:58] rogpeppe: Hey, I'm not aware that there is one. [06:58] dimitern: it doesn't always work, unfortunately, and i'm not quite sure why. there's something weird going on. [06:58] rogpeppe, sorry about bashing godeps a bit in the msg I sent btw (^^) [06:58] dimitern: hrmph :-) [06:58] dimitern: it's open source and i develop it in my spare time - you're free to contribute too :-) [06:58] rogpeppe, with git, you're using fetch rather than pull, right? [06:59] * rogpeppe checks [06:59] rogpeppe, yep, I was thinking about it [06:59] dimitern: it's not that big either - only 718 lines [07:00] rogpeppe, I'll have a look at the source [07:00] dimitern: i'm using pull. yeah, i should definitely use fetch there, i think. [07:01] rogpeppe, yeah, it's safer - no changes to work dir [07:01] dimitern: yeah, but... which remote to fetch? [07:01] dimitern: i guess i could always fetch origin [07:02] dimitern: ha, "origin" is the default anyway [07:03] rogpeppe, with a properly set up local repo (i.e. as a result of go get or git clone), the remote is already implicitly defined, so just git fetch should work [07:03] rogpeppe, that's what I'm using in my up-dep script for git repos [07:03] dimitern: i'm not sure. [07:03] When no remote is specified, by default the origin remote will be used, [07:03] unless there’s an upstream branch configured for the current branch. [07:04] so it's possible that someone might have configured an upstream branch [07:04] i think just "git fetch origin" is probably the way forward [07:04] rogpeppe, this can only happen if you're doing development on that dependency [07:05] dimitern: yeah, but that might happen, i guess [07:05] rogpeppe, i.e. have both origin and upstream remotes, like we have for juju/juju [07:05] dimitern: yeah [07:05] rogpeppe, and in that case it is still safer to fetch origin, as it should be configured on github to track upstream anyway [07:05] dimitern: i actually never have an "upstream" remote - i think it's confusing [07:06] dimitern: i just have "origin" and remotes named after the person [07:06] dimitern: yeah, i think "origin" is probably the right thing to do, as we really want to fetch updates from the shared origin. 
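(For reference, a minimal sketch of the fetch-versus-pull distinction discussed above, assuming a dependency checkout whose "origin" remote points at the shared repository, as a "go get" clone does; the repo path is a made-up example:)

```sh
cd "$GOPATH/src/github.com/juju/juju"   # hypothetical dependency checkout

# What godeps did originally: pull fetches *and* merges into the current
# branch, so it can touch (or conflict with) the working directory.
git pull origin

# The proposed change: fetch only updates remote-tracking refs such as
# origin/master and leaves the working directory alone; godeps can then
# check out the exact revision it wants.
git fetch origin

# Bare "git fetch" also defaults to origin, but uses the current branch's
# configured upstream if one exists - hence the explicit "origin" above.
git fetch
```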
[07:07] rogpeppe, having upstream remote locally configured can help you look for changes/logs/diffs between local branches/origin and upstream [07:07] dimitern: does it make a difference that it's called "upstream" instead of "rogpeppe"? [07:08] rogpeppe, also, because I kinda don't trust github magically keeping my origin up-to-date, I usually fetch upstream/master, push to origin/master, and pull origin/master into my local master [07:08] dimitern: (i can never remember which way the "stream" flows, so "upstream" isn't a useful name for me) [07:09] rogpeppe, no special meaning about the name [07:09] dimitern: but upstream is configured to your own github fork, right? [07:09] rogpeppe, :) it's even harder for me, as it's different in Bulgarian; but what I remembered is upstream=close to the original source :) [07:10] rogpeppe, nope, upstream is the main juju/juju repo, origin is dimitern/juju (i.e. my github fork of the main repo) [07:10] dimitern: oh, that's really weird then :-) [07:11] dimitern: and that means that my proposed godeps fix would fail in your situation [07:11] :-\ [07:11] rogpeppe, how so? [07:11] dimitern: because it means that "git fetch origin" won't fetch any new updates that have been pushed by other people. [07:12] dimitern: for me, origin is always the main repo that everyone forks from [07:12] dimitern: that's automatically set up when i "go get" something [07:12] dimitern: then i add a named remote for any fork [07:13] dimitern: darn, i'm not sure that i can make automatic godeps fetching work in general then [07:13] dimitern: assuming yours is a common pattern [07:14] dimitern: i suppose i could fetch all [07:21] dimitern: so do you usually configure your upstream as the default upstream remote for your branches? [07:22] dimitern: if so (and assuming that's the common pattern), then perhaps i should just do "git fetch" (no args). i think that's probably better. [07:23] rogpeppe, nope [07:24] dimitern: oh [07:24] rogpeppe, usually origin is the remote you cloned from [07:24] dimitern: but do you usually clone from your fork, or from the original branch? [07:24] rogpeppe, and in our workflow origin is your fork of the main repo in github, which is supposed to automatically track upstream [07:25] rogpeppe, always from my fork, precisely for the reason origin tracks upstream [07:25] dimitern: by "track" you mean that it automatically gets updates from its upstream branch? [07:25] rogpeppe, yes, master is tracked by default, and you can add other tracking branches [07:25] dimitern: (i.e. github does some magic behind the scenes to do that?) [07:26] dimitern: i didn't know about tracking branches. [07:26] * rogpeppe googles it [07:26] rogpeppe, that's the magic really - the fork is a local repo on github servers that has a master branch tracking upstream/master [07:28] dimitern: so if you "git fetch" your fork repo, you should always get any changes that have been made upstream too? [07:29] rogpeppe, that's how it's supposed to work - i.e. you're not expected to pull upstream/master into origin/master locally and then push to origin/master, but I do that, just in case :) [07:31] dimitern: but... origin/master surely can't be the name of the remote tracking branch? [07:31] dimitern: because that's where you've got your own changes [07:32] * rogpeppe experiments [07:32] rogpeppe, well it is "remote" wrt your local repo, and it is also "remote" wrt upstream, in a different way :) [07:33] * rogpeppe is still confused :-) [07:33] ...
and i thought i had this git thing sussed :-) [07:33] rogpeppe, locally, you have origin/master (I have upstream/master as well), both of these (locally speaking) are tracking branches for the origin and upstream repos [07:34] rogpeppe, on github, where your fork lives, locally (there) origin/master is a remote tracking branch for the upstream repo (juju/juju) [07:35] dimitern: so how would I refer to that remote remote? [07:35] (i.e. origin/master@github tracks juju/juju master@github) [07:35] rogpeppe, you don't need to [07:36] rogpeppe, you need to track the place you cloned from (your fork), and your fork needs to track upstream, so 2 levels of indirection [07:36] dimitern: so where's the HEAD reference to juju/juju master in my local repo? [07:36] rogpeppe, unless you have an upstream remote to juju/juju, you don't have a direct reference to the upstream/master HEAD [07:37] rogpeppe, that's what I have: http://paste.ubuntu.com/8051647/ [07:37] dimitern: but you're saying that doing a "git fetch" of origin (my fork) will automatically fetch any commits that have been made upstream? [07:38] dimitern: that's what i expected to see [07:38] rogpeppe, yes, because origin tracks upstream [07:39] dimitern: this is what mine looks like http://paste.ubuntu.com/8051665/ :-) [07:40] rogpeppe, but I find this a little magical, so instead, I do: git checkout master (locally), git pull --ff-only upstream master (into my local master), git push origin master [07:41] dimitern: i just can't quite work out where the references are kept locally. i.e. how are those double-remote tracking branches represented in my local repo? [07:41] rogpeppe, hah :) interesting, but it's just different naming - your origin is actually my upstream, and your rogpeppe is like my origin [07:41] dimitern: yeah [07:41] dimitern: but that scales better [07:42] dimitern: because i actually own quite a few github accounts [07:42] dimitern: and it means that any repo that i've done "go get" with is already half set up for development [07:42] dimitern: (i just need to fork it on github and add the "rogpeppe" remote) [07:43] rogpeppe, I don't think it will scale as well if you need to have multiple origin remotes (say juju/juju and juju/names - you need to work on both at some point) [07:43] dimitern: both of those will be in separate directories [07:43] rogpeppe, no, sorry - yeah :) [07:44] "there can only be one origin" :-) [07:45] respect the origin for it shall prevail :) [07:50] dimitern: hmm, i can't reproduce the behaviour you're expecting [07:50] rogpeppe, which one? [07:50] dimitern: it looks like doing a fetch of a github fork does not fetch commits made in the original master [07:51] dimitern: so i did this experiment: [07:51] dimitern: i forked a github repo (github.com/go-errgo/errgo) into my own github account (github.com/rogpeppe/errgo) [07:52] dimitern: i cloned github.com/rogpeppe/errgo into a new directory [07:52] dimitern: oops, no, scratch that! [07:52] dimitern: i cloned github.com/go-errgo/errgo into a new directory [07:52] rogpeppe, why the fork then?
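(For reference, a rough sketch of the two remote layouts being compared, plus dimitern's "git sync" flow exactly as quoted above; the clone URLs are illustrative assumptions:)

```sh
# dimitern's convention: origin is your own github fork, upstream is the
# main repo (URLs are examples only).
git clone git@github.com:dimitern/juju.git && cd juju
git remote add upstream https://github.com/juju/juju.git

# rogpeppe's convention, by contrast: origin is the main repo (as "go get"
# sets it up), and the fork gets a remote named after its owner:
#   git remote add rogpeppe git@github.com:rogpeppe/juju.git

# dimitern's sync flow: fast-forward the local master from upstream, then
# push it to the fork so origin/master on github never lags behind.
git checkout master
git pull --ff-only upstream master
git push origin master
```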
[07:53] dimitern: i had another clone [07:53] rogpeppe, you're supposed to clone the fork, not upstream, for the tracking to work I think [07:53] dimitern: with two remotes [07:53] dimitern: github.com/go-errgo/errgo and github.com/rogpeppe/errgo [07:54] dimitern: which i named origin and rogpeppe, according to my usual convention [07:54] rogpeppe, ok [07:54] dimitern: then, in the first cloned directory, i made a change, and pushed that (to github.com/go-errgo/errgo) [07:55] dimitern: then in the directory with two remotes, i did "git fetch rogpeppe" [07:55] dimitern: now that, if i understood you correctly, *should* have fetched the commit that was made in master [07:56] dimitern: but it didn't [07:56] rogpeppe, it depends on what the rogpeppe remote is tracking [07:56] rogpeppe, do git branch -r in the double-remote dir [07:56] dimitern: the rogpeppe remote was tracking the fork (github.com/rogpeppe/errgo) [07:56] rogpeppe, this will show you all remote tracking branches [07:56] % git branch -r [07:56] origin/master [07:56] origin/v1 [07:57] rogpeppe/master [07:57] rogpeppe/v1 [07:57] rogpeppe, ok, once upstream changed, if you go to rogpeppe/errgo does it show the commit (on github)? [07:57] dimitern: yeah, it says "This branch is 1 commit behind go-errgo:v1" [07:58] rogpeppe, hmmm.. so maybe I don't get what "tracking" means :) [07:58] rogpeppe, it doesn't pull revisions, just tells you if you need to [07:59] dimitern: i think "tracking" just means that it refers to a remote repo; if you fetch it (explicitly), it'll get all branches from that (and by implication all the commits the HEADs of those branches refer to) [07:59] rogpeppe, so what I'm doing with my "git sync" script - pulling upstream/master and pushing to origin/master is the right thing to do [08:00] * rogpeppe goes back to look at the sync script [08:00] rogpeppe, heh :) what do you know - quite an interesting morning discussion, thank you [08:01] dimitern: indeed - git is a complex beast, and no one quite understands it :-) [08:03] dimitern: your sync script looks reasonable. but i'm not sure why you bother pushing to origin/master. [08:05] dimitern: in general, i just use named branches in my github fork, and just use the original (my "origin", your "upstream") tracking branch as a source [08:07] rogpeppe, I push to origin to make sure it's up-to-date (on github) with all upstream commits; that way I can always use either the github UI or git commands locally on origin for diffs etc. [08:26] dimitern: i don't quite get that. what's an example command that you might run that needs origin/master pushed? [08:26] dimitern: (i often don't have a rogpeppe/master branch at all, BTW) [08:27] rogpeppe, like git difftool origin/master (or something) [08:28] * rogpeppe looks up git difftool [08:32] dimitern: presumably you could always do "git difftool upstream/master" instead? [08:32] dimitern: which i'm guessing more directly represents what you actually want to do [08:50] rogpeppe, good point, yes [08:53] dimitern: so, i've just changed godeps to use "git fetch" rather than "git pull" [08:57] rogpeppe, cheers! [11:03] morning all [11:04] rogpeppe: I added the leave for you. It's the juju ui engineering calendar [11:04] rick_h__: thanks a lot [11:04] np, went through and approved that stuff last night so should be good [11:04] rick_h__: have you got a reference for that calendar? (i think calendars have email addresses, right?)
[11:04] rogpeppe: looking [11:05] rogpeppe: just shared it out to you [11:05] rick_h__: i think i'm seeing the standup entries because i'm subscribed to frankban's calendar :-) [11:05] rick_h__: thanks [11:05] oh hmm, standup is different. You should see those. I thought I invited you to those [11:05] yea, standup should be on your own. You're an invited person [11:07] jujugui wife has a migraine this morning. I've got to take the boy into day care for her and then get her up to the urgent care. Will be a bit afk this morning. Email if you need anything [11:08] * rogpeppe tries to work out how to find a reference to the calendar so i can add it to mine [11:09] hmm, reloaded and it seems to have added it [11:09] calendars are kinda black magic [12:16] dammit, my headphones just broke [12:17] ... and i thought they were so well built [12:21] ha, i wondered why the charmstore charm wasn't responding as i expected [12:21] it uses the old charm store by default! [12:43] * rogpeppe lunches [13:12] rogpeppe: I think I should change the default to v4. [13:28] * rick_h__ is back around [13:30] rick_h__: welcome back. [13:30] jrwren: thanks [13:34] jrwren: sorry, i didn't see the git change in PR#2 (though to be fair it wasn't mentioned in the description) [13:43] jujugui: Morning all. Has anyone done anything with CI this morning? I had a build late last night that froze up on one of the tests. Ended up killing it after 1.5 hours, but it seems like a process is still out there hanging onto a port. Subsequent builds are failing with "port in use" errors. [13:46] kadams54: it knew I was back and wanted to blow up on me [13:46] kadams54: sec, looking [13:46] rick_h__: welcome back :-) [13:47] wheeee [13:47] kadams54: process killed [13:47] Thanks [13:56] morning hatch__ [14:06] mornin rick_h__ [14:07] hatch__: pics finally made it up overnight https://www.flickr.com/photos/7508761@N03/ [14:07] killed poor flickr's api lol [14:07] haha nice! [14:07] had to restart it 4 times yesterday [14:07] lol! [14:07] why flickr and not, say G+? [14:07] G+ pissed me off when they did the picasa to G+ photos crap [14:08] they botched that hard core and so moved all my stuff to flickr [14:08] plus there's no G+ import from other apps/programs [14:08] ahhh [14:08] I tend to find the highlights and upload them to G+ for easier/thinned out sharing [14:08] but takes more time [14:08] rick_h__: ah, you did bring the wife. Where is the boy? [14:08] jrwren: he stayed at home! [14:08] rick_h__: excellent choice! :) [14:08] he's going on his first plane ride next week though. [14:09] will be fun to see how that goes [14:14] I've found kids are either pros or absolutely horrible [14:14] when it comes to flying [14:14] (from my friends with kids) [14:14] lol [14:15] they don't usually understand that they are going 600MPH 35,000 ft up [14:15] lol === hatch__ is now known as hatch [14:19] jrwren: do you have a card in progress? [14:19] jrwren: or the one charm one in review? [14:20] rick_h__: not really. yes, that in review. [14:20] jrwren: cool, have a hacky friday project if you're interested [14:20] jrwren: meet back in standup? [14:20] rick_h__: I'm sold. What do you have?
[14:20] rick_h__: ok [14:21] https://plus.google.com/hangouts/_/calendar/cmljay5oYXJkaW5nQGNhbm9uaWNhbC5jb20.t3m5giuddiv9epub48d9skdaso?authuser=1 [14:21] bah, wrong one [14:22] https://docs.google.com/a/canonical.com/spreadsheets/d/1szaU3S4r72DRzDr9TAsMS57FJ8AUaN84q1ov2rNLxZ8/edit#gid=0 [14:26] kadams54: when you qa'd https://github.com/juju/juju-gui/pull/494 the issues I mentioned were still there? [14:26] https://juju.ubuntu.com/docs/charms-deploying.html#deploying-to-specific-machines-and-containers [14:27] I ask because huw pushed a commit [14:27] hatch: yeah, although since then I've QA'd the card that fixes container count [14:27] wait, what? [14:27] https://github.com/juju/juju-gui/pull/499 [14:28] so does #494 have to land before #499? [14:28] or the other way? [14:28] confused... [14:28] I don't think it matters [14:28] As long as they both land eventually :-) [14:28] oh it was an existing issue... [14:29] ok I'll pick up the second reviews on these and get them landed [14:29] It does seem odd to me that we're calling bare metal "root container" but not incrementing the container count. [14:29] Great [14:29] It'll be nice to have all of these landed. [14:29] FYI, I'm re-running the CI build for https://github.com/juju/juju-gui/pull/499 [14:29] aye [14:29] sounds good [14:34] getting my keybindings in ubuntu to be what I want makes the OSX host completely unusable lol [14:34] looks like I'll have to investigate only modifying the keybindings in the vm [14:35] other than caps lock as ctrl, I gave up on custom keybindings a decade ago :) [14:40] kadams54: just fyi I'm not going to ship 494 until 499 lands and I can re-qa [14:40] k [14:41] jrwren: so I have caps as ctrl, alt as super, super as alt, plus some others I had to shift around to make that work :) [14:41] command, option (in osx) etc [14:41] hatch: I was about to say, "Sounds like a Mac" [14:41] jrwren: well it is mac hardware :) [14:42] but of course I use a PC keyboard which makes it even more complicated [14:42] hatch: using a PC keyboard on a Mac was one of the most frustrating things I've ever done in my life. I hate macminis to this day largely as a result of that experience. [14:43] I'm thinking i need a new ergo keyboard and entirely remap it so that doing the ctrl+ combos aren't so messed up [14:43] lol [14:43] I remapped so that alt was cmd and super was option [14:43] * rick_h__ gets annoyed with my 'photo' air with the thinkpad external keyboard on it [14:43] then it was ok [14:43] damn 'command/option' crap [14:44] I was going to just keep fighting to get ubuntu on metal but then I couldn't qa in windows without booting into osx, running ubuntu and windows in a vm [14:44] so....yeah [14:44] tough life I suppose... [14:44] :) [14:45] kadams54: do you have a good test run of #496? [14:46] That'll be the next CI build, but no, not yet [14:47] ok np [14:51] rick_h__: where would this matrix code go? [14:52] jrwren: in a spreadsheet, just put it in a doc for now and maybe we'll dump it to a doc or something once we go through it [14:52] ok [14:52] jrwren: so we might add a doc/containers or something with the script, results, etc [14:53] jujugui call in 8, don't be late, rick_h__ is here now ;) [14:53] woot! [14:58] jujugui call in 2 [15:01] antdillon: around for call? [15:21] allllright back to doin qa's [15:22] darn, looks like there are some conflicts [15:28] fixing, will issue a new pr [15:44] kadams54: comments made to your PR [15:44] * kadams54 sticks his fingers in his ears.
[15:44] "La la la" [15:46] lol [15:46] +1'd with changes and qa ok [15:47] lol all of huw's branches are failing when landing because of conflicts and failing tests [15:47] I think they will have to wait until he gets in to get them fixed and landed [15:47] :-( [15:48] Lunch break for me [15:54] hatch: yea, I cut down the WIP limits on there. That was a few too many branches in review at once [15:56] rick_h__: I think it was just a symptom of huw combining cards [15:56] into branches [15:56] but probably not a bad idea :) [16:23] rick_h__: what's do you think is a reasonable behaviour when an http PUT request partially succeeds? should we return an error http status, but have some indication of which things have succeeded in the response body; or should we return a 200 status, but have some indication of errors in the response body? [16:24] rogpeppe: I'd expect it to be some sort of 500 error with some information in the body. [16:25] rogpeppe: how does one 'partially succeed'? [16:25] e.g. we got the metadata but didn't finish getting a blob or something? [16:25] rick_h__: if we've got a PUT to meta/any, it might involve several database writes; one or more may fail [16:26] imho the entire thing should fail (500) [16:26] rogpeppe: gotcha, hmm, so it might be more a 400 things vs a 500? [16:26] hatch: i think i agree with that. [16:26] 400 - you screwed up, 500 - we screwed up :) [16:26] yea, I can get behind that. [16:26] hatch: right, but that's what I mean. If they supply bad data for the 4th document and we fail it should be a 400 "bad request" [16:27] rick_h__: yeah, that's a difficult one [16:27] hatch: but if we fail because of conflicting user, db issues of our own, timeout, etc that'd be more 500 on us [16:27] rick_h__: the 4th document might have bad data, but the 5th document might have correct data but a db write error [16:27] but I do agree that I think we fail in some error code way, and try to make sure a subsequent request has a chance of succeeding [16:27] rogpeppe: right, but we'd return on first fail? [16:27] rogpeppe: vs trying to push on? [16:28] rick_h__: we may not be able to do that [16:28] oh hmm, double sucky [16:28] rick_h__: as we want to be able to this stuff concurrently [16:28] yeah it should and then roll back to a clean state [16:28] hatch: we may not be able to roll back either (no transactions) [16:28] rogpeppe: yea, I think a safe default is a 500 with message info [16:28] rogpeppe: right but you'll have to implement them in the Go side [16:28] you can't have a bunch of broken records in the db because the user supplied incorrect data [16:29] rogpeppe: yea, we might need to be able to call out ids/records we know is in a fubar state or something [16:29] hatch: if it's a db error, it's likely the database connection is down, so we'll be a bit stuffed if we try to roll back [16:29] rogpeppe: or else how does a subsequent api call not get back half rev 3 data and half rev 4 data? [16:30] rick_h__: i think it should be possible to avoid anything getting in a fubar state [16:30] rogpeppe: cool, yea I'm +1 on starting with a 500 + message body [16:30] rick_h__: sgtm [16:30] +1 [16:30] rick_h__, hatch: thanks [16:36] only one other thing that concerns me: currently we are totally consistent when we send back an error response - it looks like {"Code": "something", "Message": "an error message"}. In this one case, we'd need to send back a set of errors, not just one. this makes me more inclined towards a different error code. 
[16:37] the consensus from a brief glance at these search results seems to be that some kind of 200 response might be more appropriate: https://www.google.co.uk/search?q=http+status+partial+failure&oq=http+status+partial+failure&aqs=chrome..69i57.4301j0j7&sourceid=chrome&es_sm=93&ie=UTF-8 [16:37] specifically: http://stackoverflow.com/questions/8472935/http-status-code-for-a-partial-successful-request [16:37] rick_h__, hatch: ^ [16:37] rogpeppe: the issue is that we shouldn't have a partial record created [16:37] rogpeppe: right, but in that case resending to users would be harmful [16:38] if you only create half of the required stuff it's going to cause issues [16:38] rogpeppe: while in our case, resetting the data should be ok and safe [16:38] rogpeppe: so there's no need to split out the next request to only supply data for docs 4 and 5 since 1-3 are already updated [16:38] rogpeppe: I think a partial update of a charm is a failed update [16:38] imo and all that [16:38] agree [16:39] rick_h__: so we don't bother telling the client which elements of the bulk request have failed? [16:39] we have to do everything we can to maintain a quality data set [16:39] rick_h__: we just say it all failed, even though some elements have actually succeeded? [16:39] rogpeppe: right, the message might contain some info, but we're not going to expose to them "don't bother sending fields x,y,z, they're ok" [16:39] rogpeppe: we would return the first thing that failed [16:39] hatch: we don't necessarily apply them all sequentially [16:39] rogpeppe: the clients would have to be able to take that response, adjust a subsequent request, and submit again? [16:40] rick_h__: well, they'd have that freedom [16:40] rogpeppe: right, but I think it's overkill for what we're doing [16:40] rick_h__: or they could just send the whole request again [16:40] rogpeppe: I'm confused, so you would leave partial incorrect data in the db? [16:40] rogpeppe: and a ton of client complexity for what win? request size? [16:40] hatch: yes - i don't think there's any way around that possibility [16:40] rogpeppe: of course there is [16:41] hatch: welcome to document dbs sans transactions across docs :) [16:41] rick_h__: i'm just wary of saying to the client "this failed" when some changes have actually been made [16:41] no just because the db doesn't support it doesn't mean that the app layer can't implement it [16:41] rogpeppe: right, but the client requested "change this metadata on an object" and that failed to occur and the client should notify the user, give them the info to try again with the hopes they can succeed. [16:42] hatch: if one change has been made, then the db connection goes down before you can make the next change, how're you gonna revert the first change? [16:42] and it's too much for me to think clients will not just check '200 success' but '200 but hey, did every field get updated?' [16:43] hatch: also, reverting the data is hard when you've got other concurrent clients. you might revert other clients' deliberate changes. [16:43] hatch: because the only way of reverting is by writing some data that you retrieved some time ago, since when it might have changed. [16:44] rogpeppe: so you have updated the db with bad data, now you go and request that data - oops the app fell over because of the bad data [16:44] hatch: we can work on methods of mitigating. [16:44] hatch: let's concentrate on the initial question at hand.
[16:44] hatch: i don't think that we'll ever update the db with bad data here [16:45] oh i thought the initial question was solved [16:45] :) [16:45] and we can debate ACID differently :) [16:46] hatch: well, rogpeppe was showing us the other examples in stack overflow and we were discussing how this is a bit different [16:46] oh ok yeah [16:46] anyway, on that note I definitely think an error code on a failed update is still appropriate. [16:46] rick_h__: this is different because we don't trust our clients? :-) [16:47] no the workload to implement it in every client is not worth it [16:47] imho [16:47] (to read the docs and response body) [16:47] rogpeppe: this is different because redoing the work a second time in their case is 'ungood' [16:47] rogpeppe: so I'd consider their case an exception to the rule vs a standard [16:50] rick_h__: there's just something that seems wrong in my bones about discarding error info like that, when one error might be "permission denied" and another might be "database failed", and the client would want to retry the latter but not the former. [16:50] I'm not saying throw away the info. I don't see why we can't work out getting the errors to the user [16:51] the only thing I'm against is a 200 when we know it failed [16:51] rick_h__: ah, ok. [16:51] rogpeppe: the first is a 4** the second is a 5** [16:51] I'm all for getting good errors to the user [16:51] rick_h__: can we choose a different 500 error then? [16:51] rick_h__: i guess we'd have to choose a number ourselves [16:51] rick_h__: or... just encode it in the string i suppose [16:51] rogpeppe: yea, nothing else 'fits' I can see [16:52] where did you shoot me the current message object? [16:52] {"Code": "something", "Message": "an error message"} [16:52] so can we not update that in any way? [16:52] with a details key, or allowing Message to be a list? [16:53] rick_h__: one possibility, which we've already explored a little way, is double-encoding json in the message [16:53] :( [16:54] so on submit you're going to wait until all the 'transactions' are complete before returning to the user a list of the 'tasks' and their outcome? [16:54] hatch: so they're going to fire off 4 workers to update 4 docs and when all 4 come back build a response [16:55] hatch: yeah [16:55] hatch: it might be all 4 are good, 200, or 3 good, one bad with a message, or 1 good 3 bad with 3 messages [16:55] rick_h__: allowing Message to be a list doesn't quite cut it. [16:56] rogpeppe: can one transaction fail with a 400 while the other is a 500? [16:56] rogpeppe: k, I'm tempted to extend it a bit and say we have the code, a message "applying changes failed" and some details object that more complex clients could use to provide a better list of feedback [16:56] rick_h__: we actually want to return a map from id to error in some cases, and from id to endpoint to error in some others [16:56] rogpeppe: and for simple cases, the complex obj is just empty [16:56] hatch: yeah [16:57] rogpeppe: but that's just what I would do. I can be talked into a different custom error code number or whatever [16:57] rick_h__: i think that's probably a reasonable approach. need to think it through a little more.
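(As a concrete illustration of the shape being converged on, here is a hypothetical bulk PUT reusing rogpeppe's /v4/meta/extra-info/foo example from above; the host, and the exact field values in the response, are assumptions about the proposal rather than a final API:)

```sh
# Bulk update: one good id, one bad one (endpoint and body taken from
# the example in the discussion; the host is made up).
curl -i -X PUT 'https://charmstore.example.com/v4/meta/extra-info/foo' \
     -H 'Content-Type: application/json' \
     -d '{"precise/wordpress-23": "val1", "precise/badcharm": "val2"}'

# One possible partial-failure response along the lines rick_h__
# sketches: a single non-200 status, the usual Code/Message pair, and a
# Details object that more complex clients can inspect per id.
#
#   HTTP/1.1 500 Internal Server Error
#
#   {
#     "Code": "multiple errors",
#     "Message": "applying changes failed",
#     "Details": {
#       "precise/badcharm": {"Code": "not found",
#                            "Message": "charm not found"}
#     }
#   }
```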
[16:57] I think this is all moot, the validation should happen prior to the db request so it will 400 before anything could 500 [16:58] so you only have 2 options, 200 for all, or 400 for one+ (which makes the response 400) [16:58] hatch: for example, consider a PUT to /v4/meta/extra-info/foo of {"precise/wordpress-23": "val1", "precise/badcharm": "val2"} [16:59] ok [16:59] hatch: that makes it considerably less efficient (and it can still fail) because you've got two db round trips where you only need one, and the object might still have disappeared when you actually make the request [17:00] hatch: (talking about pre-request validation there, BTW) [17:00] but what I'm saying is that you should know that precise/badcharm is a bad charm before making the db request [17:00] hatch: you can't know that [17:01] hatch: maybe it was good but when you actually do the request for real, it's been removed [17:02] *sigh* [17:02] lol [17:02] race condition turtles all the way down :) [17:02] rick_h__: yeah. [17:02] sounds like wrong-tool-for-the-job syndrome [17:02] hatch: it's ok really. [17:03] hatch: just treat them as independent requests, even though they're bundled up into one [17:04] ok so return a list of error messages (in whatever format) and the request response status would be 200 < 500 < 400 [17:04] so if they broke it, even if there is a 500, it returns a 400 [17:04] it couldn't continue anyways because they did something wrong [17:05] (we are still talking about the original request response code right?) [17:05] hatch: there are different classes of "wrong" [17:05] hatch: yeah [17:05] well, unrecoverable wrong is a 400 [17:06] ok let me rephrase [17:06] when would we want to return a 500 if there is also a 400? [17:06] so they sent us bad data, and our db was down for example... [17:06] i'm somewhat inclined towards rick_h__'s p.o.v. that we should only return a 200 if everything went according to plan [17:06] well yeah [17:07] I didn't know that was even off the table lol [17:07] and i think that if someone is generating bulk requests, they should be prepared to deal with the results [17:07] agreed [17:08] hatch: well, that's what the SO post was suggesting (200 with result errors) [17:08] oh, no imho that's bonkers [17:08] :D [17:08] (I don't have strong beliefs at-all!) [17:09] rogpeppe: I suppose in the case where it would be alright if only one succeeded then maybe...but even then i'd be pretty far on the side of returning a 4/500 [17:10] it makes sense in that case. If the api call sends emails out to users you don't want to resend to the same users again [17:10] so i'm thinking that for a bulk request, you'll always get a 500 error, and you have to look at the error details to find your individual error codes [17:10] seems reasonable [17:11] rogpeppe: well.... what if they had 400 errors and no 500 errors?
[17:11] so it was actually all their fault [17:11] hatch: i don't think it matters that much [17:11] a 500 they may just re-submit the data even though it's faulty [17:12] hatch: they can determine that by looking at the individual codes [17:12] hatch: which is logic they're going to have to write anyway [17:13] hatch: so it actually simplifies both server and client code, i think [17:13] hatch: (to have a single error code for a bulk request, that is) [17:13] s/code/status/ [17:13] ok I gotcha [17:13] agree [17:13] 200 or 500 with a list [17:13] hatch: yeah [17:14] hatch: and i will look into providing more arbitrary error objects in the error response [17:14] yeah it would be nice if there was an array of errors or the like [17:15] I think rick_h__ mentioned that earlier [17:15] hatch: definitely. that's the plan (actually a map/object of errors) [17:15] what would the keys be? (curious) [17:16] hatch: in the /meta endpoint, the requested ids. in the /meta/any endpoint, the requested "include" endpoints. [17:16] hatch: and those can nest, i think [17:16] ahh ok cool [17:16] yeah +1's all around [17:17] hatch: so you could do: PUT /meta/any with the body {"precise/wordpress-23": {"extra-info/foo": 123, "extra-info/bar": true}, "precise/bad-435": {"extra-info/ppp": 345}} [17:17] hatch: and get the error response: [17:19] oops, let's rephrase that [17:19] {"precise/wordpress-23": {"extra-info/foo": 123, "bad-endpoint": true}, "precise/bad-435": {"extra-info/ppp": 345}} [17:19] possible error response: [17:21] {"Message": "partial failure", "Code": "partial failure", "Details": {"precise/wordpress-23": {"extra-info/foo": {}, "bad-endpoint": {"Message": "bad endpoint", "Code": "not found"}}, "precise/bad-435": {"extra-info/ppp": {"Message": "charm not found", "Code": "not found"}}}} [17:21] perhaps... [17:21] i think that's the most complex scenario that there is [17:22] anyway, i need to stop now :-) [17:22] looks easily parsable, although that would be a full failure :) [17:23] hatch: not quite - "precise/wordpress-23/extra-info/foo" was successfully changed to 123... [17:23] g'night all and happy weekends and mondays too. [17:23] night rogpeppe see ya tuesday [17:24] hatch: see ya [17:36] juju status nil pointer reference. I'm good at breaking things. [17:36] hah [17:39] lol [18:38] hey hatch, question for you - is there already work being done to get ghost on series=trusty? [18:39] lazyPower: by work.....do you mean.....me not being lazy? [18:39] :D [18:39] your words not mine ^_^ [18:39] appearance of laziness is just applied efficiency [18:40] haha - tbh it deploys just fine on trusty I just need to make a repo for it [18:41] you also need tests [18:41] Amulet testing of the ghost charm would be pretty simple. just some config set validation, and port 2364 (default) validation w/ port change requests validation and it should be g2g. Throw in a haproxy relationship, verify the reverse:proxy relationship is g2g, and it sounds like a winner for trusty [18:43] oh yeah I need to learn amulet [18:44] friday rant: juju drives me to drink. [18:44] jrwren: good day for it? [18:44] jrwren: anything we can help out with? [18:45] rick_h__: just venting. I'm frustrated because I just wasted what feels like 2 hrs trying to make azure work. [18:46] rick_h__: isn't juju supposed to read updated environments.yaml? somehow I had a stale environments/azure.jenv [18:46] jrwren: yea, they're trying to remove environments.yaml [18:46] and the jenv is the true source of truth [18:46] ah. [18:47] that is too bad.
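(A possible recovery for the stale .jenv problem jrwren describes, assuming juju 1.x defaults with JUJU_HOME at ~/.juju; once a .jenv exists it, not environments.yaml, is what juju reads:)

```sh
# The cached .jenv wins over environments.yaml once it exists, so drop
# the stale one and re-bootstrap to regenerate it from the updated yaml.
rm ~/.juju/environments/azure.jenv
juju bootstrap -e azure
```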
[18:48] hopefully things will get better. Trying to get rid of the jenv as well. We'll see how that shakes out [18:49] that will be sweet :) [19:24] Makyo: hey any luck with the investigation of the multi unit removal? [19:26] hatch, yeah, I don't think we have enough information for much other than an XXX Comment now. We don't know how the modelIds will look, and we'll need to allow people to remove specific units if they're on specific machine constraints. [19:26] ahhh [19:26] that's a good point [19:27] darn - we really need that removal functionality :/ [19:27] I mean, it's not HORRIBLE heh [19:27] but.....ya know [19:29] Yeah, for sure. [19:29] So let me push up real quick and get your opinion [19:36] Pushed [19:50] checking [19:51] +1'd [19:52] Will fix up that test, then [20:07] * rick_h__ heads out for the day [20:07] have a nice weekend all [20:07] you too, cya [21:20] man I'm just having no luck with the internets this week [21:23] I'm off. Have a good weekend y'all. [21:34] cya jrwren [21:34] you too