[02:05] axw: meeting? [02:05] wallyworld: sure [02:05] rotating team meeting [02:05] see cal [02:05] oh right, core team [02:05] whoops [02:06] :-) [02:07] perrito666: if you're still around [02:37] thumper: want the 1:1 now? [02:37] wallyworld: yeah, in our normal hangout [02:38] already there :-) [03:04] * thumper goes to make a coffee [03:21] axw, I updated bug 1324255 to explain that the juju and the slave were dead from disk shortage and zombie mongos. from other tests [03:21] <_mup_> Bug #1324255: Local provider cannot boostrap [03:22] sinzui: hmm ok. I'm pretty sure I saw it on the amd64 tests to [03:22] too* [03:24] axw, CI doesn't have a trusty local lxc test yet [03:24] sinzui: sorry, I was unclear; I meant the unit tests themselves see the "Closed explicitly" error. not in local deployment, just running the unit tests [03:24] sinzui: e.g. http://juju-ci.vapour.ws:8080/job/walk-unit-tests-amd64-trusty/485/console [03:24] it may be unrelated, I'm not sure [03:26] I see [04:22] * thumper making dinner, back in about 2 hours for a bit more === thumper is now known as thumper-afk [05:06] https://codereview.appspot.com/98700043 :) === vladk|offline is now known as vladk [06:15] axw: thanks for review, more info on reason for new interface etc is here https://docs.google.com/a/canonical.com/document/d/1J6EZ37gfiZdVMdoxdFrih4R27LCcqyBzi50wjWFjbiA [06:18] okey dokey, will have a read [06:18] wallyworld: I think calling it "err" will also be a problem, because an "err" in the body will shadow it [06:19] axw: err in body is not meant to be a new var [06:19] that's why i used = not := [06:19] wallyworld: not on the first line of the function [06:20] that's because file is new but err will not be as it's the 2nd var and there's one in scope [06:20] or have i got that wrong? [06:20] mm I think it only counts if it's declared in the same scope. will verify [06:20] I could be full of crap [06:20] the err at the top is in the same scope afaik [06:20] bbiab, gotta pick up kid [06:21] axw: i was tempted to leave the err in the defer as err := but thought it could be confusing so changed its name even though not necessary [06:21] wallyworld: you are correct [06:21] np [06:22] that's why i hate := and = [06:22] too easy to make subtle mistakes [06:22] now i'm really bbiab === Ursinha-afk is now known as Ursinha [08:08] morning all [08:30] is anyone working on the "juju doesn't bootstrap on precise" issue? [08:30] I suspect it's my fault [08:31] I'm spinning up a precise VM to look at it [09:06] voidspace: if you can reproduce it, I have a small change I'd love to test [09:07] I'll also start a precise image on AWS [09:08] I'm creating a local VM. Still downloading precise - my "old" VM was 13.10 not precise. [09:11] voidspace: from what I've found in Googling, it seems that mgo will kill sockets for servers that it doesn't think are available in the cluster [09:11] voidspace: so I've made a small change to copy (rather than clone) the session when running replSetInitiate, to force it to create a new socket [09:14] jam: ping [09:18] vladk: pong [09:19] jam: I'd like to discuss my further steps in developing Networker API [09:19] sure, can we get dimitern involved as well? [09:20] jam: of course [09:20] (well that was mostly just to ping him) [09:20] dimitern: ping [09:21] jam: can we do it on hangout? 
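The `=` vs `:=` exchange above comes down to Go's redeclaration rule: a short variable declaration at the top level of a function body reuses a named result that is already in scope, while the same `:=` inside a nested block declares a new, shadowing variable. A minimal sketch with illustrative names (this is not the juju code under review):

```go
package main

import (
	"fmt"
	"os"
)

// writeFile illustrates the = vs := point from the discussion above.
func writeFile(path string, data []byte) (err error) {
	f, err := os.Create(path) // err is the named result, so := reuses it here
	if err != nil {
		return err
	}
	defer func() {
		// Plain = assigns the named result. A stray := in this nested block
		// would declare a new err that shadows it, losing the Close failure.
		if cerr := f.Close(); cerr != nil && err == nil {
			err = cerr
		}
	}()
	_, err = f.Write(data)
	return err
}

func main() {
	fmt.Println(writeFile("/tmp/example.txt", []byte("hello\n")))
}
```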
[09:21] certainly, I was just waiting to see if dimitern was actually around [09:24] jam, vladk, i'm here [09:25] k I need about 30s [09:26] we can just use our team standup hangout [09:27] https://plus.google.com/hangouts/_/canonical.com/juju-sapphire [09:27] dimitern: vladk ^^ [09:29] axw, pre-commented on https://codereview.appspot.com/102860043/ -- let me know if it sounds sane to you [09:31] axw: awesome, that sounds very useful [09:31] axw: might help test issues as well [09:31] axw: we start mongo ok, but then fail to connect to it [09:31] axw: (when using replica sets for tests) [09:31] axw: and it's not a timing issue, we wait sixty seconds [09:32] axw: I'm creating a branch with my changes backed out - so once I get the VM setup I can quickly tell if it's my changes [09:32] fwereade: thanks, seems sensible to put it on RelationUnit. in my PoC I did the address selection TODO in the uniter, which is a bit dumb I guess. [09:32] fwereade: yes, free for a chat tomorrow [09:32] voidspace: we wait sixty seconds for what? [09:33] axw: we wait up to sixty seconds (mongoDialupTimeout I believe) when trying to connect to the mongo server we have created for tests [09:33] axw: you're missing some context [09:34] axw: as well as using replica sets for local provider I wanted to switch us to using replica sets for tests too [09:34] I see [09:34] axw: so that our test mongo matches the production mongo [09:34] we're already doing that in a bunch of tests - you want to do it in *all* of the,? [09:34] them* [09:34] axw: but when I do that, many tests fail with "unreachable servers" - many of them deterministically [09:34] axw: yep [09:34] ok [09:35] JujuConnSuite *doesn't* [09:35] the driving motivator is that we want to switch to write-majority [09:35] sounds like it will add quite a bit of overhead to the tests. setting up a replica set can take quite some time [09:35] axw, no worries, we converge :) [09:36] the command to switch to "write majority" (WMode) errors if you're not using replica sets [09:36] :/ [09:36] so one fix is to use replica sets everywhere [09:36] that's what we were attempting [09:36] it maybe that because of precise we *can't* [09:42] * axw sighs [09:42] of course it bootstraps first time for me [09:42] of course it'd help if I had pulled trunk [09:42] heh [09:51] axw: this branch reverts my local provider replica set changes [09:51] axw: lp:~mfoord/juju-core/revert-local-replset [09:51] ok [09:51] or "bzr merge -r 2803..2802" [09:58] voidspace: if you manage to reproduce the error with replSet enabled, try this patch: http://paste.ubuntu.com/7542857/ [09:58] (please) [10:00] axw: yep [10:00] axw: precise installing now [10:00] axw: you couldn't reproduce? [10:00] voidspace: nope :( [10:00] odd [10:03] mgz: 1:1 time [10:13] wallyworld: whoops, not looking, joining now [10:17] voidspace: it's not just precise. http://juju-ci.vapour.ws:8080/job/local-deploy-trusty-ppc64/ [10:17] there's a bunch of failures there on r2807 [10:18] "no reachable servers" [10:28] axw: :-/ [10:28] axw: configuring my vm to try it [10:31] 215mb of updates... [10:35] hi all [10:36] menn0: morning :-) [10:37] menn0: howsit? [10:37] voidspace: alright [10:37] voidspace: how are you? [10:37] voidspace: we had a bit of household chaos here this afternoon so I'm catching up now. have one big email to send to #juju-dev today. I'm off until Tuesday. 
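The copy-rather-than-clone patch axw mentions above, and the write-majority mode that motivates the replica-set work, look roughly like this with the mgo driver. This is a hedged sketch, not the actual juju change; the import path and function shapes are assumptions.

```go
package example

import (
	"labix.org/v2/mgo"
	"labix.org/v2/mgo/bson"
)

// initiateReplicaSet runs replSetInitiate on a copied session. Copy acquires
// a fresh socket, whereas Clone would reuse the parent session's socket --
// which mgo may already have marked dead for a server it no longer considers
// part of the cluster.
func initiateReplicaSet(root *mgo.Session, cfg bson.M) error {
	s := root.Copy()
	defer s.Close()
	var res bson.M
	return s.Run(bson.D{{"replSetInitiate", cfg}}, &res)
}

// setMajorityWrites switches the session to majority write concern. This only
// works against a configured replica set, which is why the tests would need
// --replset before turning it on.
func setMajorityWrites(s *mgo.Session) {
	s.SetSafe(&mgo.Safe{WMode: "majority"})
}
```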
[10:39] menn0: ah, fun [10:39] menn0: I hope "household chaos" is good chaos [10:39] voidspace: not really, but all fine now [10:41] menn0: :-/ [10:41] we're all good here, lots going on [10:41] better a hectic life than a boring one though [10:42] trying to reproduce precise bugs at the moment [10:42] voidspace: I think there's a balance :) [10:42] voidspace: yeah I heard about that problem [10:42] voidspace: tricky to reproduce? [10:43] menn0: so far [10:43] menn0: I'm just updating and configuring a precise vm to see if I can reproduce [10:43] axw has failed so far === Ursinha is now known as Ursinha-afk === thumper-afk is now known as thumper [10:56] thumper: hi again :) [10:56] o/ [11:01] cmars, fwereade: joining us? [11:03] ahhhh stupid hangouts keeps crashing [11:11] natefinch: morning [11:18] voidspace: morning === Ursinha-afk is now known as Ursinha [11:51] so if I try to bootstrap with the local provider on precise it tells me that juju-local is required [11:51] but juju-local is also unavailable for precise [11:51] ahh hrm [11:51] do I need to add a ppa? [11:51] looking [11:53] natefinch: "Due to needing newer versions of LXC the local provider does require a newer kernel than the released version of 12.04." [11:53] Therefore we install Linux 3.8 from the LTS Hardware Enablement Stack: [11:55] natefinch: I've added ppa:juju/stable and juju-local is now available [11:55] looks like I need to upgrade the kernel too [11:56] voidspace: I used cloud-archive:tools [11:56] axw: is there a difference? [11:57] I guess "stable" is likely to be out of date [11:57] I mean to get the newer lxc [11:57] ah [11:57] hmmm [11:57] hm, I guess you got an older 12.04, I probably got 12.04.03 [11:57] I did a full update [11:58] ok I dunno [11:58] axw: those quotes were from the docs, not an error message [11:58] https://juju.ubuntu.com/docs/config-LXC.html [11:59] axw: I might try without the new kernel first [11:59] maybe it will fail in the same way... [11:59] voidspace: `sudo apt-add-repository cloud-archive:tools` if you've not [11:59] although Curtis is pretty switched on, so I doubt that's the problem [11:59] mgz: ok [12:00] mgz: you sound like you know what you're talking about :-) [12:00] worth checking your kernel version too, by default upgrades may not give you a newer kernel [12:00] * axw disappears [12:00] axw: o/ [12:00] night axw [12:00] mgz: they're unlikely too [12:00] *to [12:01] mgz: would you reccommend I remove the juju/stable ppa? [12:02] if you're not trying to use the *client* we have released on that machine, yes [12:02] jam. dimitern: what is your way to keep go packages up to date? [12:02] 'go get -u ./...' 
damages my working juju-core, so I wrote a script http://pastebin.ubuntu.com/7543496/ [12:02] vladk: yeah, go get is dangerous [12:03] vladk: I generally run just "godeps -u dependencies.tsv" and then go update the ones that are out of date [12:03] you can just "go get -u github.com/juju/errors/…" for example [12:03] vladk: ^that's what I do too [12:03] mgz: I'm trying to use trunk, thanks [12:03] well, I don't use go get at all, I pushd to the branch and git pull or whatever [12:04] mgz: sure, I've done that as well, depends on my mood, I guess [12:05] jam: and yeah, I do use go get as well, if say we've added a new dep with its own deps [12:05] ok, installing new kernel [12:05] so lunch [12:06] vladk: jam, mgz: I want to update all of packages [12:14] vladk, that's a handy script, thanks [12:14] hello [12:15] wwitzel3: morning [12:15] so, juju bootstrap on precise, with trunk, seems to work fine for me [12:15] so after not sleeping more than 5 hours the last 3 days, I finally zonked out and managed to sleep 10 hours. [12:15] dimitern: use with care, not fully tested, I'm not sure about pull options [12:15] feels good, like I'm sane again :) [12:15] going on lunch [12:15] will dig deeper after that [12:19] night folks [12:20] vladk, sure, i'll give it a try [12:21] wwitzel3: that's good news, welcome back to the land of the living [12:22] voidspace: thanks, enjoy lunch [13:03] cmars, hazmat, fwereade: omnibus review [13:05] good sort of morning [13:08] sort of morning? [13:09] mgz: I usually start around or before 7AM now its 10 [13:11] sinfully late [13:45] perrito666: it is a trend today, I didn't get on until just after 8 and I even went to bed right after the meeting last night. [13:47] wwitzel3: I did too, it was like midnight when the meeting ended, but this AM I just spent 2.5hs in a queue trying to do some bureaucracy and then an extra half an hour having to exit and re enter the city because the path to go trough the center was blocked by a cluster of strike protests [13:48] that sounds less fun than sleeping [13:48] yep [13:48] fwereade: ping [13:48] wwitzel3, pong, listening, but in meetings, so responding slowly [13:50] fwereade: np, I was pretty dense yesterday and I don't think I retained the full value of our earlier conversation. [13:51] fwereade: looking at the presence code, I'm not seeing us actually use WaitAgentPresence anywhere. We define it and have tests. [13:51] fwereade: I'm wondering if we should just be using that with or instead of SetAgentPresence in the Pinger. [13:52] wwitzel3, not quite following -- I know we don't use WaitAgentPresence except in tests, though, but I don't get the larger thread [13:53] fwereade: part of the problem is I'm still cloudy on the "Why" SetAgentPresence doesn't think the agent is there when it is called the first time. [13:53] wwitzel3, I *think* it's because it uses "2 consecutive pings" as the "yep, it's there" condition [13:54] wwitzel3, but please validate that against reality [13:54] wwitzel3, (ie if you've only had one ping it's not yet considered presence) [13:58] Wow, I just realized the Juju Core Team meeting was last night not tonight. Stupid time zones making my Thursday meeting on Wednesday :/ [13:59] yup, do we have stdup today? [13:59] hah [13:59] I just set up an alarm for the day before, so I know in advance when that meeting is [14:00] I had skipped thursdays, because we used to have the juju core meeting before it, but now that the juju core meeting is moving, I think I need to reinstate the thursday standup. 
Short answer: yes [14:00] fwereade: I'm not seeing the two pings, following the trail from api status -> machine.AgentPresence -> presence.Alive which which fires a watcher.sendReq for the machine key and we select over the result and return bool,err back up the stack. [14:02] fwereade: I can make the problem with a long enough wait and recalling SetAgentPresence, but that isn't a legitimate fix and I am still failing to really grok the problem in the first place. [14:03] wwitzel3, voidspace: standup? [14:03] natefinch: oops, sorry [14:03] wwitzel3: it's ok, it wasn't on the calendar for today until now === dannf` is now known as dannf [14:19] fwereade, natefinch, mgz, I'd love a review on this networks constraints CL https://codereview.appspot.com/102830044 [14:20] wow! LP now supports inline comments on MP diffs! \o/ [14:20] wow [14:20] is it too late now to not switch to github? :D [14:21] dimitern: hah [14:23] haha [14:24] fwereade: did you and Ian figure out if we should use --replset in tests? [14:24] wwitzel3, ok, I may be wrong there: is it just the case that the first ping is not landing quickly enough to be seen? [14:25] natefinch, essentially yes [14:26] natefinch, for the tests that *by their fundamental nature* require mongo, we should be using --replset, because those really are integration tests and we should mimic a real environment as closely as possible [14:27] natefinch, but we have many tests that only coincidentally require mongo, and in those cases --replset is not required, because we should be aiming to make those tests unitier anyway -- to the point where they don't depend on mongo [14:28] natefinch, in particular replset clearly needs a real mongo, and almost certainly peergrouper too; and very probably state [14:29] natefinch, other test that merely happen to use state should be migrating away from the mongo dependency anyway, and so the loss of --replset is not a big deal [14:29] natefinch, how upsetting is this from your perspective? [14:30] sinzui: ping [14:30] hi voidspace [14:30] sinzui: ah, actually ping for five minutes [14:30] fwereade: it seems very squishy. Deciding if a test "really" needs mongo is non-trivial. [14:30] I assumed you'd be slower :-) [14:31] natefinch, well, state/presence and state/watcher are the other two definite ones [14:31] fwereade: btw are you on the cross team meeting, or should I jump on? [14:31] natefinch, but generally anything we *could* back with a double for state.State is IMO a candidate [14:32] natefinch, I'm not, alexis is; your presence couldn't hurt though :) [14:32] heh ok [14:36] fwereade: thank you, I think you've given me an epiphany [14:56] I've been asked to speak at PyCon India [14:56] A good opportunity to promote juju [14:56] Have to work on a snappy demo [14:57] Cool [15:00] voidspace: sweet [15:00] voidspace: you can try the one from LAS [15:00] fwereade, did you get a chance to check out my MR? [15:01] perrito666: Las Vegas? 
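The SetAgentPresence behaviour wwitzel3 and fwereade go back and forth on above usually needs an explicit sync-and-wait before AgentPresence reports true. A hedged sketch of that flow; the method names come from the discussion, but the import path and exact signatures are assumptions:

```go
package example

import (
	"time"

	"launchpad.net/juju-core/state" // path as used at the time of this log
)

// waitForAgentAlive starts the agent's Pinger and blocks until the presence
// watcher has actually observed a ping; asserting AgentPresence immediately
// after SetAgentPresence tends to report false.
func waitForAgentAlive(st *state.State, m *state.Machine) (bool, error) {
	pinger, err := m.SetAgentPresence()
	if err != nil {
		return false, err
	}
	defer pinger.Stop()

	st.StartSync() // flush pending presence/watcher events
	if err := m.WaitAgentPresence(30 * time.Second); err != nil {
		return false, err
	}
	return m.AgentPresence()
}
```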
[15:01] that was pretty IBM centric [15:01] I'm looking for direction to either dig deeper on the gojsonschema to add (id) references, or to move forward towards state and param validate [15:01] something equally impressive though I hope [15:01] voidspace: yes, but nevertheless pretty [15:01] yep [15:01] something python related would be good [15:02] voidspace: some of the material used on the os demo, minus the lego part [15:02] I'll talk to the online services guys as they're charming up their infrastructure [15:02] bodie_: I'm on it [15:02] deploying and updating a django app would be good [15:02] and scaling out [15:02] fwereade, mgz, natefinch, a gentle reminder about https://codereview.appspot.com/102830044/ [15:02] voidspace: oi! that's my planned demo for PyConUK :D [15:03] bloodearnest: awesome, I can use yours [15:03] dimitern: I'm not wild about that syntax... [15:03] oops, I mean "we can work on it together"... [15:03] mgz, it's decided from above [15:03] voidspace: sure you meant that :p [15:03] dimitern: code seems fine so far [15:03] :-) [15:04] mgz, I'll wait for jam and/or fwereade to take a look as well, because i'm not sure about a few bits, but any comments are welcome [15:04] voidspace: that is cool, I only get invited to pycons I cant afford at times I cant travel :p [15:05] voidspace: when is pycon india? [15:05] perrito666: right, thankfully they're covering the costs or I wouldn't be able to afford it either [15:05] bloodearnest: end of September [15:05] so after PyCon UK I think [15:05] kk [15:05] voidspace: so we should do a joint talk proposal for pyconuk then :) [15:06] thanks mgz, nothing too fancy really. just got your changes in to the github repo (all tests are green) but need to make a decision about gojsonschema itself [15:07] bloodearnest: heh, yeah - sounds good [15:07] fwereade indicated interest in getting it MR'd as such even if it's not a "real" merge request, just to get visibility [15:07] voidspace, http://pastebin.ubuntu.com/7544599/ [15:08] sinzui: looking, thanks [15:08] voidspace, I asked abentley to join us to discuss what the test is doing bad [15:09] sinzui: so that worked [15:10] sinzui: ok [15:10] sinzui: although revision 2802 on master worked [15:10] so *something* changed [15:10] abentley: hello [15:10] voidspace: Hi. [15:11] voidspace, yes, but the only thing I think changed is that I renamed the test to from local-deploy to loca-deploy-precise-amd64 [15:11] heh [15:12] it was failing deterministically before that [15:12] abentley, voidspace I can try to restore the test name [15:13] that *really* shouldn't make any difference [15:13] oh but ci is still running [15:13] I can copy the test to the old name [15:13] oh, voidspace, I haven't run upgrade yet. I will do that to get the new lxc [15:13] sinzui: so the initial manual bootstrap works, but we think that re-bootstrapping from the generated .jenv file doesn't? [15:14] sinzui: Are you re-bootstrapping from old jenvs? [15:14] abentley, voidspace, the env name and the pregenerated jenv are the difference [15:14] or by "generated jenv file", do you mean your pre-canned one? [15:14] abentley, I used true local with a jenv [15:15] abentley, I used true local withOUT a jenv [15:16] can I get a copy of the failing jenv file? [15:16] sinzui: And you used an old jenv for testing local-deploy-precise-amd64? [15:17] voidspace: We don't have a pre-canned jenv. [15:17] I can't see it in the workspace to get at it [15:17] voidspace, abentley. It is generated. not pre-canned. 
We let juju make the jenv file. We update the version in it to ensure juju does what we and enterprises mean [15:17] ah, I see [15:17] bodie_: see comments on review, one thing needs fixing and some other comments, then lets land [15:17] voidspace, we tell users to share jenvs. we are [15:17] cool [15:17] sinzui: We leave the .jenv entirely untouched. It's the yaml that we tweak. [15:18] I am tempted to do the lxc upgrade, then ask jenkins to run the test as usual [15:19] sinzui: ok [15:19] mgz, I agree about pulling out the yaml -- is there a place in the repo where test files could go? there will be a charm actions yaml loader tested separately as part of dir (I think it was) [15:20] can I get at the CI scripts? I'm reading the logs, but it looks like a lot of the work is in deploy_job.py [15:20] bodie_: it would be an improvement to just have them as top level variables so that's easy later, splits the test logic a bit but easier to pull apart later [15:21] voidspace: the branches should be public [15:21] mgz: ok [15:21] voidspace: bzr branch lp:juju-ci-tools [15:21] yep, just found it :-) [15:22] mgz: thanks [15:23] mgz, I moved them to local scope due to fwereade's comments here: https://codereview.appspot.com/94540044/diff/1/charm/actions_test.go#newcode21 [15:24] bodie_: lets not go back and forth on that then :) [15:24] voidspace: Every juju action is dumped to stdout so you should be able to reproduce the script actions from the log output. [15:24] abentley, voidspace jenkins failed again after the update [15:25] there's probably a case to be made for either approach. I'm personally kind of a fan of the way the gojsonschema tests work with external files, but in this case it's not really testing the schema as much as the YAML reader, which is simple enough [15:25] oh, I forgot the mem constraint [15:25] * sinzui run manually with the constraint [15:25] cannot initiate replica set [15:25] so it is the problem with mongo [15:25] and just as mysterious as last time we saw it apparently [15:25] frankban: reviewed https://codereview.appspot.com/92700044 [15:25] okay, going to put my kid down for a nap, then into the salt mines with me! [15:26] lick lick? [15:27] voidspace, where any of these mongo warning relevant to the failure http://juju-ci.vapour.ws:8080/job/local-deploy-precise-amd64/1371/console [15:27] rogpeppe: thanks [15:27] mgz -- after I get these tweaks in shall I start digging towards State then? [15:28] maybe start on a new branch to get dir and the loader landed [15:28] bodie_: yeah, I think so [15:28] we'll make sure this lands first :) [15:29] sinzui: unknown config field. Shouldn't be. [15:29] I'll look at the last successful build, see if those warnings are new. [15:30] nope, those warning were there before [15:30] voidspace, the warning is from a vestigial environments.yaml. I think I can remove it now, juju 1.18 doesn't need it. 1.16 did [15:33] voidspace, abentley maybe a memory issue...this is the failure from my manual run. I recalled the good bootstrap command from bash history, then appended the mem=2G http://pastebin.ubuntu.com/7544723/ [15:34] I'll try that locally [15:34] I didn't use the memory constraint [15:35] although my VM only has 1G! [15:35] machine has lots of memory [15:35] sinzui: Hmm. I wouldn't have thought that had an effect. Especially since mongo runs outside the lxc. 
[15:35] * sinzui tries again without mem=2G [15:35] abentley, agreed [15:36] nope, works [15:36] for me [15:36] if we can't figure out the difference we'll have to back out the replica set change [15:36] abentley, I believe 1G should be adequate for our testing on lxc anyway [15:36] I'm looking up that specific failure (cannot initiate replica set) [15:37] natefinch: ^^^ [15:37] natefinch: the precise failure is consistent when run as a CI job [15:37] natefinch: although effectively the same command run on the CI machine succeeds [15:37] natefinch: the specific failure is "cannot initiate replica set" [15:38] natefinch: so the replica set is the problem, although it's not obvious why [15:38] mongo logs probably have a better message. [15:39] that error implies that we have a mongo started *without a replica set* and then tried to add a replica set [15:39] or we tried to add a non-empty mongo to a replica set [15:40] it's the initiation which has to happen after it starts [15:40] but we start mongo with the replset argument [15:40] that's not initiation [15:41] nope, but initiation will fail if the db isn't empty [15:41] voidspace, abentley I bootstrapped again manually *without* the mem constraint and it failed. This is the relevant errors: http://pastebin.ubuntu.com/7544771/ [15:41] voidspace: you are missing a step there? [15:41] natefinch: https://jira.mongodb.org/browse/SERVER-6294 [15:41] voidspace: I don'tb think that's true. [15:42] natefinch: that's the closest I could find to a reference [15:42] natefinch: it's local.* that needs to be empty [15:42] perrito666: I'm sure I'm missing lots of step [15:42] voidspace: ahh, yeah... I don't think that's the problem [15:44] usually it's that mongo can't resolve the address that you gave it during initiate, or you don't have access to the admin database [15:44] rebootstrapping from an existing jenv worked for me [15:46] natefinch: should I back out the change to unblock CI? [15:46] natefinch: without a way to repro it's going to be hard to re-introduce though [15:47] what's the version of mongo on ci vs your vm? [15:48] sinzui: can you check the mongo version on the CI machine for me please [15:48] natefinch: I'm using 2.4.6 it looks like [15:48] voidspace, Installed: 1:2.4.6-0ubuntu5~ubuntu12.04.1~juju1 [15:49] sinzui: thanks [15:50] natefinch: so looks like the same [15:51] I really don't want to revert a change that seems to work correctly on all machines except CI :/ [15:51] especially without understanding why [15:56] my guess is that the address we're putting into the replicaset information in mongo is somehow not resolvable [15:56] voidspace: run that rs.config by hand by connecting with the client [15:57] you might get something more useful that failed out of it [15:57] perrito666: the only place it fails is on the CI machine which I don't have direct access to [15:57] perrito666: what specific command are you suggesting, we can ask sinzui or abentley to run it [15:58] voidspace: line 5 of http://pastebin.ubuntu.com/7544771/ [15:58] rs.initiate(daconfig) iirc [16:01] so, attempt to manually re-initiate? From my hunting it looks like that error message we get is the full mongo error. 
[16:02] yeah, all you really need is { members: [ { host: "10.0.3.1:37019" } ] } [16:02] it's odd that we're using a 10.0 address in that call to replicaSet.Config [16:02] yeah [16:02] 2014-05-29 15:21:41 DEBUG juju.worker.peergrouper initiate.go:34 Initiating mongo replicaset; dialInfo &mgo.DialInfo{Addrs:[]string{"127.0.0.1:37019"}, [16:03] Followed by: [16:03] 2014-05-29 15:21:42 INFO juju.replicaset replicaset.go:52 Initiating replicaset with config replicaset.Config{Name:"juju", Version:1, Members:[]replicaset.Member{replicaset.Member{Id:1, Address:"10.0.3.1:37019", Arbiter:(*bool)(nil), BuildIndexes:(*bool)(nil), Hidden:(*bool)(nil), Priority:(*float64)(nil), Tags:map[string]string{"juju-machine-id":"0"}, SlaveDelay:(*time.Duration)(nil), Votes:(*int)(nil)}}} [16:04] yeah, you have to dial using localhost in order to get mongo to let you call initiate [16:04] I'm checking what address we use when it succeeds on my machine [16:04] grabbing some food [16:05] ah, except I'm only seeing INFO not debug here [16:05] run with --debug [16:05] thanks [16:07] I see *no* mongo related output [16:10] voidspace: oh, right, you don't actually get the output, you have to look at the logs under $HOME/.juju/local/log [16:11] natefinch: my successful run uses 10.0 addresses too [16:12] maybe it's just the CI machine can't connect to that IP/port [16:12] although I don't know why it would be different for precise and trusty [16:12] natefinch: I'm pretty sure that "local.oplog.rs is not empty" error is coming back from mongo [16:13] nor why it would be different on *some* precise machines [16:14] maybe the CI machine already has the database populated in a way that a normal machine would not? [16:14] I'm grabbing coffee [16:24] uh, cffee, good idea [16:24] even better with an o === revagomes_ is now known as revagomes [16:36] anyone know what our convention is for landing branches in github.com/juju ? [16:36] i want to submit this PR: https://github.com/juju/testing/pull/6 [16:38] natefinch, voidspace there are no other mongos running on ci. I did check that because I found that had happened before [16:38] right [16:39] but maybe there is config left behind? [16:39] I wonder if somehow artifacts are persisting between test runs on *that* mongo [16:41] natefinch: perrito666: is there some step we can run at the beginning of CI to ensure the database is clean and there is no config / other artefacts left around? [16:42] in fact, have we got any CI set up on the github repos? [16:42] the failure indicates to me that we're trying to initiate a non-clean repo [16:42] rogpeppe: no idea. I know that's not helpful, but I'm not ignoring you :-) [16:42] voidspace: thanks [16:42] voidspace: if mongo's not running, you can just delete and recreate the folder that we use for mongo [16:43] rick_h_: what's the process for landing stuff on github? [16:43] natefinch: +1 restore does just that [16:44] rogpeppe: I don't think we have CI on github right now (at least for the juju-core stuff) [16:44] natefinch: ok, ta. i'll just rebase and push it then. [16:44] natefinch: what stuff? [16:44] mgz: github.com/juju/... [16:44] all the little bits? 
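For reproducing the failing step by hand, the quoted log lines (dial 127.0.0.1:37019, initiate a member at 10.0.3.1:37019) translate to roughly the following. The addresses are the ones from the paste and the mgo import path is an assumption; this is a debugging sketch, not juju's peergrouper code.

```go
package example

import (
	"time"

	"labix.org/v2/mgo"
	"labix.org/v2/mgo/bson"
)

// reinitiate mimics the peergrouper's initiate step from the logs above.
func reinitiate() error {
	session, err := mgo.DialWithInfo(&mgo.DialInfo{
		Addrs:   []string{"127.0.0.1:37019"}, // initiate has to go via localhost
		Direct:  true,
		Timeout: 60 * time.Second,
	})
	if err != nil {
		return err
	}
	defer session.Close()

	cfg := bson.M{
		"_id":     "juju",
		"version": 1,
		"members": []bson.M{{"_id": 1, "host": "10.0.3.1:37019"}},
	}
	var res bson.M
	// "cannot initiate replica set" comes back from here; mongo also refuses
	// with "local.oplog.rs is not empty" when the data dir already carries
	// replica-set state from an earlier run.
	return session.Run(bson.D{{"replSetInitiate", cfg}}, &res)
}
```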
[16:45] yeah, that's all botless [16:45] mgz: yeah [16:45] mgz: ok, thanks [16:45] we should fix that [16:45] especially since we're moving more and more stuff into its own repo [16:45] mgz: pity we've lost that, but i'm guessing sinzui has plans for reinstating it [16:45] it's covered as part of juju-core [16:46] mgz: it's going to be part of other pieces too [16:46] once you start depending on a broken rev, core will refuse the dep bump [16:46] mgz: juju-core CI runs the github.com/juju tests ? [16:46] that still leaves go get etc borked as we saw yesterday [16:46] my ideal bot wil do the squashing for us so that we don't need to worry that we loose conversations when commits are removed [16:47] sinzui: my plan is to rebase-squash only at submit time [16:47] sinzui: but i have a feeling that violates the "don't rebase after publishing" rule [16:48] * rogpeppe misses rietveldt already [16:48] It does and github will delete conversations linked to the revisions [16:48] sinzui: so is there any way we can a) keep the conversations and b) keep the revision history coherent ? [16:48] rogpeppe: squash on merge would [16:49] mgz: that's what i was suggesting, i think [16:49] rogpeppe, I see that homebrew is now making their own swashed pull request for each pull request I make. My conversation and commits are maintained, but they do not show up as merged. [16:49] sinzui: yeah, that's the upshot [16:50] no pull requests are ever really merged [16:50] yeah, i can see that's the case. i can live with that, i think [16:50] just so long as i can get from the revision history to the conversations, i think it's all ok [16:51] rogpeppe, I think the bot can merge the approve branch, test, squash, commit, then add a comment naming the replacement commit when it closes the original pull request [16:51] sinzui: i guess we'll need a convention to allow the bot to choose an appropriate commit message. [16:51] yep [16:52] sinzui: most recent commit (by, erm, some measure of "recent") with some keyword in the message might work [16:52] +1 [16:54] question..... do we really need to squash? [16:55] So like... yes, roger's PR has 4 revisions and if you don't squash, those 4 revisions all go on main...... who cares? [16:55] natefinch: not nessersarily, especially if everyone gets in the habbit of rebasing neatly before proposing [16:55] natefinch: yes, i care [16:55] mgz: sure [16:55] natefinch: mostly hurts with lots of small review comment fixes [16:55] natefinch: so the mongo directory is defined as agentConfig.DataDir() + '/db' [16:56] natefinch: where is the default location [16:56] natefinch: sometimes i'll commit hundreds of times before proposing [16:56] natefinch: with minute changes [16:56] "/var/lib/juju"? [16:56] natefinch: i don't want all of those commits to end up cluttering the main log [16:57] voidspace: I think so, yeah [16:57] voidspace: /var/lib/juju/ [16:57] rogpeppe: natefinch: between CI builds is it safe to delete /var/lib/juju altogether? 
[16:57] voidspace: it better be [16:57] voidspace: it should be [16:57] good [16:57] brb [16:58] voidspace: although CI shouldn't be touching /var/lib/juju at all [16:58] sinzui: can you check if there's a /var/lib/juju on the CI machine - and preferably delete that at the start of the CI build [16:58] voidspace: because CI may be running on a juju node [16:58] voidspace: in fact, it probably will be [16:58] voidspace: so that will kill the local juju agents, which is not a great thing [16:58] rogpeppe: we're worried that db artefacts are what is causing bootstrap of local provider to fail *only* on the CI machine [16:59] sinzui: hmmm... if jenkins is provisioned by juju then blowing the whole thing away is not a good idea [16:59] we just want to delete the part created by the CI run [16:59] voidspace, I was about to say I want to keep the machine and unit agents alive [16:59] heh [16:59] * sinzui looks for cruft [16:59] probably preferable [17:00] it would be nice if CI never actually touched /usr/lib/juju [17:00] voidspace, /var/lib/juju is not contaminated. [17:01] damn [17:01] sinzui: thanks === andreas__ is now known as ahasenack [17:16] bbiab [17:24] g'night all [17:24] rogpeppe: g'night === Ursinha is now known as Ursinha-afk === vladk is now known as vladk|offline [17:46] I have to go, EOD [17:46] natefinch: so, we still don't have a resolution for this issue :-( [17:47] which sucks majorly [17:47] voidspace: ug.... yeah, thanks for all your work on it [17:47] heh [17:47] I think I might have some time to actually look into it... not that I currently have a precise VM, but I can make one [17:47] cool [17:47] let me know of any progress you make [17:47] g'night all [17:48] nite [19:07] I'm not following why this is still showing a conflict in dependencies.tsv (all the way at the bottom) [19:07] https://code.launchpad.net/~binary132/juju-core/charm-actions/+merge/219926 [19:08] oh, god. did I commit it with merge comments still in it? -_- [19:08] bodie_: you need to actually merge trunk I think [19:09] that's what I thought I did, heh [19:09] the last rev seems to only have one parent [19:10] just `bzr merge co:trunk`, resolve, commit, push, should be fine [19:12] alright, why would bzr merge lp:juju-core not work? [19:12] doesn't that refer to trunk? [19:23] how expensive is to iterate a slice? === vladk|offline is now known as vladk [19:28] perrito666: O(n) :) [19:29] yeah, plain old iteration should be linear [19:30] natefinch: nobody likes a smartass :p [19:30] perrito666: it's just like iterating over an array in C, with some bounds checking that makes it minutely slower. If you're doing the for n, val := range slice { version, then it copies each value, too, but using a for x := 0; x < len(slice); x++ { there's no copying until you access an index of the slice (obviously) [19:30] perrito666: heh [19:30] lol [19:30] natefinch: tx, that was the answer I was looking for [19:31] :) [19:31] perrito666: a slice is just a struct with a pointer to a backing array, the length of the current slice, and the capacity of the backing array. Iterating over it is just like iterating over the backing array. 
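perrito666's question about what range actually copies, and the string-in-slice helper natefinch suggests just below, in sketch form:

```go
package example

// containsString is the kind of helper discussed below. Iteration is linear
// either way; range copies each element into the loop variable, but for a
// []string that is only the small string header, not the underlying bytes.
func containsString(list []string, want string) bool {
	for _, s := range list {
		if s == want {
			return true
		}
	}
	return false
}

// Index-based form: no per-element copy until an element is accessed.
func containsStringIndexed(list []string, want string) bool {
	for i := 0; i < len(list); i++ {
		if list[i] == want {
			return true
		}
	}
	return false
}
```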
[19:31] I was curious about how much copying was range doing [19:32] I'd just use the syntax that expresses the idea in a more clear fashion [19:32] * perrito666 is not so happy with having to check if a string is in a slice [19:33] 90% of the complaints about Go are "I have to write this loop where in my other language, the loop is hidden from my view" [19:33] natefinch: well, off course === vladk is now known as vladk|offline [19:33] at the end of the day it is all just typing and loops [19:33] natefinch: I think the issue is that almost everyone trying this knows python [19:34] if x in foo: [19:34] yeah I know [19:34] wwitzel3: and if err != nil, dont forget if err != nil === vladk|offline is now known as vladk [19:36] mgz, nope, bzr merge co:trunk didn't do it for me. Not a branch: "/home/bodie/go/src/launchpad.net/juju-core/.bzr/branches/master/" [19:36] er, that's when I tried to merge co:master because I got that error for trunk [19:36] perrito666: yep [19:36] sorry this is taking so long, I've been doing bzr gymnastics [19:36] bodie_: er, are you using the same branching scheme as me? [19:36] poorly [19:36] I'm using cobzr [19:36] okay, you're not [19:37] cobzr people, how do you address trunk? [19:37] there's too many different ways to do colocated branches in bzr (which is to say, there's two, and they both seem to have the same name) [19:37] well, three [19:37] one was an actual bzr plugin that did the right things, and got superceeded by support in 2.6 [19:38] then there's the weird go thing that doesn't play well with anything... [19:38] I thought that's what cobzr was? [19:38] oh [19:38] I don't know that third thing [19:39] okay, so what I need to do is really simple [19:39] and I thought I already did it... [19:39] but now I've erased and re-checked-out my repo, and I'm not certain it will be normal [19:39] going in circles and driving myself nuts [19:40] perrito666: you're welcome to write a utils package that hasa function that takes a slice of strings and a string and returns a boolean that says whether the string is in the slice. then you could do like strslice.Contains(list, value) [19:40] natefinch: what do you call trunk with cobzr? [19:41] natefinch: I am really thinking it although I am a bit suspicious about the fact that is not something that is already there and widely used, so I presume there must be a decent reason [19:41] I think bzr merge lp:juju-core worked (I don't have any conflicts in my dependencies.tsv, in the file) but in Launchpad it shows conflicts [19:41] mgz: uh.... I use the built in support.... I think? [19:41] mgz: I have a branch called trunk [19:41] bodie_: yeah, that's fine too [19:41] bodie_: what does `bzr status` say? [19:41] mgz: I just use bzr switch -b foo to make a new colocated branch.... I think that's the built-in support, not cobzr, right? [19:42] natefinch: can be either.. [19:42] lol [19:42] the actual naming is what different (and confusing), which is what you need for merge [19:42] natefinch: that depends on what does your bzr [19:43] mgz: I have to point to the place where my bzr branches actually live with the full path [19:43] perrito666: you use tim's method, right? [19:43] there's a conflict in deps [19:43] mgz: I... 
am not sure [19:43] I am using A method [19:43] which might be tim's [19:46] mgz, I merged dependencies.tsv, fixed the conflict, tried to commit, ran bzr resolve dependencies.tsv, committed, pushed, erased the core repo, go get'd juju-core, bzr branch charm-actions, bzr switch charm-actions, bzr pull --overwrite lp:~binary132/juju-core/charm-actions, lastly bzr merge lp:juju-core/dependencies.tsv [19:46] * mgz blinks [19:46] heh [19:47] okay, your problem [19:47] you merge just the branch [19:47] not a file *in* the branch, that's a cherrypick [19:47] you *want* the actual revisions to resolve the conflict, not just the file contents [19:48] bzr merge lp:juju-core <- what you want [19:48] I don't understand why since all I'm trying to do is grab the differences in that file that will cause issues and tweak it to match trunk plus my addition [19:48] so you're saying grab all of trunk just to make sure everything is shipshape before I land the branch? [19:49] bodie_: you are trying to git your bzr [19:49] :'( [19:49] because with a cherrypick, you still have a split history [19:49] and the launchpad merge resolution plays it safe [19:49] if there's not a clean history merge, it throws it back for a real person to resolve [19:50] so I have to merge the whole trunk in order to merge a single file? [19:50] that doesn't seem sensible [19:50] you're not merging a single file [19:50] even git doesn't work like that, it's not cvs [19:51] hrm [19:52] you can do the merge in other ways to make launchpad happy, but just merging trunk as a whole is fine [19:52] (you can rebase all your changes onto trunk if you like) [19:53] (and push --overwrite the branch) [19:53] I doubt it'll conflict since my stuff doesn't touch other places [19:54] but how would I rebase my changes onto trunk? [19:55] eg, `bzr switch trunk && bzr switch feature_redux && bzr merge -rancestor:trunk..branch:feature && bzr commit` or similar [19:56] using the rewrite plugin means you can keep your individual revisions if you like [19:56] -b on the second switch, the the revspec may not be quite right [19:57] then you bzr push --overwrite [19:58] I see [19:59] I think I'm gonna stick to the basics for the time being [19:59] still having enough trouble with bzr, heh [20:00] once I've merged trunk then I simply need to commit to finish the job, right? [20:00] I've edited the conflicts out and bzr resolved them [20:07] Okay, I committed it, and pushed up to my branch, and it still appears to have conflicts, which I expressly fixed [20:07] so either Launchpad is lagging, or I'm going insane, or both [20:07] okay, it was just lagging. thank god [20:11] bodie_: you might still be going insane [20:11] haha, for sure [20:11] mgz, I'm looking at the other comment now -- I think this may actually be a more significant problem than it appears [20:11] or -- [20:12] yeah, I don't think we're hitting it. that's supposed to be for schemas that don't adhere to the spec for JSON-Schema [20:12] which are supposed to be caught by gojsonschema [20:14] perrito666: btw, next on our plate is doing backup and restore the right way. WIth the schema upgrades stuff getting taken over by Tim's team, we'll be working on that pretty soon. 
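A little further on, mgz points out an error message in the branch built with two fmt verbs but only one argument. That is legal Go, so it builds; the mismatch only shows up as junk in the string at runtime, for example:

```go
package main

import "fmt"

func main() {
	// Two verbs, one argument: compiles fine, but prints
	// "reading schema: oops: %!v(MISSING)".
	err := fmt.Errorf("reading %s: %v", "schema: oops")
	fmt.Println(err)
}
```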
[20:15] natefinch: sweet [20:15] with luck we will be able to do that before it breaks again [20:17] haha [20:23] oh, mgz I recall why we're not testing that error -- william thought we shouldn't test gojsonschema in our tests [20:23] I'll add a couple of edge cases [20:23] bodie_: I'm a little less worried about the coverage than I am the error being bogus, but it should be easy enough to do something there [20:24] hm? I'm not following [20:24] the error in the branch has two fmt operators and one arg given [20:24] ah [20:24] gotcha [20:24] why would that not build? [20:25] I fixed that, thought you meant something different [20:25] it does build, it just is pretty bogus [20:25] you get some junk in the error string [20:26] oh, since it's a formatting string it's at runtime [20:26] I see [20:26] I meant why would that build [20:26] thanks :) [20:49] mgz, if you're still waiting for my branch to come in, it's because I may have found an error with gojsonreference [20:49] getting a nil exception in my new test [20:50] bodie_: urk? [20:53] that feeling when you accidentally reboot your laptop instead of the VM running on it [20:55] EOD anyway [20:55] natefinch: oh yeah... [20:55] night all [20:55] have a good evening [21:03] morning all [21:05] waigani: sure, morning, why not [21:05] hehe [21:13] fwereade: are you still awake? === vladk is now known as vladk|offline [21:27] waigani, heyhey [21:27] fwereade: hello! [21:27] waigani, how's it going? [21:27] fwereade: I'm stumbling my way into understanding client/server api cmd [21:28] fwereade: what do you mean by bulk api call? [21:28] fwereade: I checked the api.txt docs, but couldn't see anything [21:28] waigani, I mean taking an array of args, and returning an array of results [21:29] ah okay, that's all it means? [21:29] waigani, pretty much, yeah [21:29] and what I did was not good because it had no args? [21:29] waigani, it's inconvenient in some respects but not having it is more inconvenient [21:29] * fwereade looks at api.txt, and adds it to the list of things to fix [21:30] fwereade: maybe if you elucidate your reasoning in api.txt, I'm happy to read that later [21:30] waigani, yeah, I'm worried I may have given offence, I was *seriously* pissed yesterday off to discover that horacio can't fix a bug he's working on cleanly because someone implemented *another* non-bulk call [21:31] I'm not offended, I don't understand - that's all [21:31] fwereade: ^ [21:32] fwereade: why does having a cmd that calls an api without args an issue? [21:32] i think my connection is having a spaz [21:32] waigani, the short version is: you can always call a bulk API with one arg, but you can't call a singular one with bulk args; and you have many more opportunities for optimizing bulk calls, that you just don't get if you have singular ones [21:33] waigani_, and the reason that no-args is bad is because it encodes the assumption that the result will always be the same for everyone everywhere [21:34] fwereade: but couldn't that be the case? [21:34] waigani_, in the StateServerAddresses (or whatever it is) case, one might think "ehh, the set will always be the same, doesn't matter who's calling", right? [21:34] waigani_, but this is not true, because the actual address one might use to access machine X may depend on where you're calling it from [21:35] waigani_, if we're explicit about "what does the state server address list look like to machine-4?" 
we can accommodate that problem without having to change the API [21:36] right [21:36] so it's about allowing room to move in the future [21:36] waigani_, and when we're, I dunno, provisioning a whole bunch of machines, it's much more efficient to say "what will the state servers look like for machines 5, 6, 7, 8,9,..." [21:37] waigani_, yeah: changing an API is a whole heap more hassle than changing an implementation [21:37] so api calls should ALWAYS tag an args struct? Even if there are no args in the struct? [21:37] waigani_, the meta-point is that the assumption about common sets of state servers has a character to it that I have grown sensitive to because that sort of assumption seems to end up wrong *disturbingly* often [21:38] waigani_, I'm not sure there are any cases where there aren't any args: the closest I can come up with is getting environ info, because there's always one environ per conn [21:39] waigani_, that still doesn;t feel to me like enough of a unique snowflake to be worth breaking convention for [21:39] okay I get it [21:39] waigani_, because I've also used apis in which singular/bulk are mixed apparently at random, and I'd really like us to be predictable :) [21:40] waigani_, (and when I have it usually seems that they have fallen prey to exactly that class of assumption, and had t ofix it later, and it always just looks embarrassing ;)) [21:40] fwereade: is there a way of ensuring args is always passed? [21:40] waigani_, heh, we probably could do something to our rpc package [21:41] could save a lot of explaining ... [21:41] waigani_, but we're carrying the weight of the original client api, designed by rog, who doesn't believe in bulk api calls, and we're not ready to retire it all yet [21:41] waigani_, much though I may want to [21:42] I'm missing something with your grudge about the state servers [21:42] fwereade, what time is it for you? [21:42] waigani_, the specific bug was that we want state servers' api tasks to connect to the local api for preference [21:43] alexisb, eh, 11:45, but cath went to bed early leaving me with 2 g&ts to drink, I have to entertain myself somehow [21:43] lol [21:43] by trying to drum some sense into us newbies! [21:44] fwereade, hehe, fair enough [21:44] waigani_, it's much more sinister than that, I'm trying to turn you all into mental copies of me :) [21:44] okay, keep trying please :) [21:45] waigani_, the first cut at the bug was a bit of a hack -- it was looking up the state servers in the agent config, grabbingone of them for the port number, and replacing the list with localhost:port [21:45] waigani_, and so I said "eww!" 
and started thinking how we could do better [21:45] waigani_, and to be fair it's not entirely easy to do better [21:46] waigani_, it's not smart to keep the other api servers secret and *always* just write localhost, because we want to remain open to machines being demoted but still coming back into the fold as viable deployment targets [21:47] waigani_, so we do indeed want to store the complete set of state servers [21:47] I assumed that is what so I would assumed StateServerAddresses would return [21:47] the complete set of state servers [21:48] ugh, let me type that again [21:48] I assumed that is what StateServerAddresses would return [21:48] waigani_, right -- but the best set of addresses for machine 2 to know about is not "m0.dns, m1.dns, m2.dns" [21:49] waigani_, it's "m0.dns, m1.dns, *localhost*" [21:50] waigani_, but you can only return tailored results like this if you know who's asking [21:50] seems like two different functions to me [21:50] waigani_, think broader [21:51] waigani_, the address yu use to hit a particular machine depends on your own location [21:51] waigani_, this is a very narrow example, but the general problem is much broader [21:51] waigani_, think of state servers spread across a manual environment [21:52] okay [21:52] waigani_, 10.x addresses may or may not be valid, depending on where you're connecting from [21:52] one set of state servers per environ? [21:52] ha [21:52] waigani_, the same 10.x address might refer to entirely different machines, according to two different clients [21:52] yep, of course [21:53] I'm stuck in web world where everything has a public IP [21:53] waigani_, yeah, I've had a bit of difficulty adjusting myself [21:53] waigani_, remember when computers never talked to one another at all? life was so easy :) [21:53] private networks make the story much more interesting [21:54] lol [21:55] waigani_, anyway, this is essentially anecdotal, but it's a case in which being explicit about the intended context of the result would have made life a lot easier for us, because it would have been a pure api-side fix [21:55] okay cool so, the result of StateServerAddresses is always relative to the agent calling it, and that agent needs to be passed in as an arg [21:55] or an identifier for the agent i should say [21:55] waigani_, and once you're being explicit about the agent who's interested, you may as well implement the bulk call for the reasons discussed above [21:55] waigani_, yeah [21:56] sweet I think I get it :) [21:56] time for a quick question about my branch? [21:56] waigani_, ofc, it is a matter of certainty that there will be some APIs for which my insistence on bulk never does us a shred of good, and they really are only ever called with a single arg, and that arg always matches the authenticated entity [21:57] waigani_, but I think the costs are worth bearing when weighed against the costs of the alternative ;) [21:57] yep, like my bogus CurrentUserInfo [21:58] Why not just enforce args? [21:58] ensure that a struct is always passed, even if empty? 
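The bulk shape fwereade is arguing for looks roughly like this. All names here are hypothetical, chosen for illustration; they are not juju's actual params types.

```go
package example

type Entity struct {
	Tag string
}

type Entities struct {
	Entities []Entity
}

type StringsResult struct {
	Error  error
	Result []string
}

type StringsResults struct {
	Results []StringsResult
}

type AgentAPI struct{}

// addressesVisibleTo is a placeholder; a real implementation would consult
// state and the caller's network location.
func (api *AgentAPI) addressesVisibleTo(tag string) ([]string, error) {
	return []string{"10.0.3.1:17070"}, nil
}

// StateServerAddresses takes a batch of entities and returns one result per
// entity, so the answer can be tailored to who is asking (e.g. localhost for
// a state server asking about itself) and many agents can be served in one
// round trip. A singular, no-args call permits neither.
func (api *AgentAPI) StateServerAddresses(args Entities) (StringsResults, error) {
	results := StringsResults{Results: make([]StringsResult, len(args.Entities))}
	for i, e := range args.Entities {
		addrs, err := api.addressesVisibleTo(e.Tag)
		if err != nil {
			results.Results[i].Error = err
			continue
		}
		results.Results[i].Result = addrs
	}
	return results, nil
}
```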
[21:58] ohh because of backwards compatibility [21:59] waigani_, mainly hysterical raisins -- I'm not confident that we've ever been entirely without APIs-that-make-me-grumpy [21:59] waigani_, if we can get to the point where we can, I will be happy, but we *do* have to carry the 1.18 api forever [21:59] waigani_, well, 5 years, which is approximately equal to forever [21:59] oh [21:59] yeah true [22:00] well, at least get this in the docs [22:00] waigani_, anyway, yes, let's talk about your branch -- can I take 5 for a ciggie first please? [22:01] of course (and another g&t, makes things much more fun) [22:01] waigani_, yeah, there were plenty of ephemeral google docs around the time we were discussing it but the never made it into dev docs [22:01] ping me when you're ready [22:06] waigani_, back :) [22:06] fwereade: cool [22:07] fwereade: I've got two questions [22:07] one practical the other just for understanding [22:07] o/ [22:08] thumper, heyhye [22:08] fwereade: how do you authorise the api calls? [22:08] fwereade: want to catch up hangout? [22:08] fwereade: I have some points around connect [22:08] waigani_, the facades get created with something that implements Authorizer, and you can ask that who's connected [22:09] waigani_, that's the primary mechanism for keeping inappropriate clients' dirty hands off apis they shouldn't have access to [22:09] fwereade: cool I'll look into that, as I didn't want to expose UserInfo to just anyone [22:09] thumper, shortly, yes please [22:09] fwereade: second why have a client side api? [22:10] fwereade: why can't clients just call and get results from the server api? [22:10] waigani_, I'm not quite sure what you're asking, but I think the answer is "because we're bad at naming things" [22:10] right, yeah I'm not quite sure what I'm asking either [22:11] waigani_, the Client facade is implemented on the server, but it's so named because it's intended for the use fof clients [22:11] I get that, but you were talking about machine 2 having access to the api on localhost? [22:11] I don't understand that? [22:12] waigani_, if machine 2 is a state server, api clients on that machine should prefer to connect to the local api server rather than the remote one [22:12] (s) [22:12] oooh it's a state sever [22:12] waigani_, if you're not a state server you have no choice but to connect to a remote one [22:13] waigani_, sorry, yeah, that was the context that I never explicitly mentioned [22:13] okay I'm done! Can I be a fly on the wall for the connect conversation? [22:13] fwereade: thank you :) [22:13] a little part of my brain feels a little less like me ;) [22:14] * fwereade cackles cheerfully [22:14] thumper, ready now, please invite waigani_ too [22:15] waigani_: https://plus.google.com/hangouts/_/gwnzn2d4zfnezivhtfspwc2r3ma?hl=en [22:55] waigani: did you want to do a standup? it is just the two of us [22:56] fwereade, mgz, looks like there's a corner case in gojsonschema where a "$schema" key not at the document root causes a nil exception [22:56] thumper: okay [22:56] thumper: I'm in the channel [23:32] thumper: how can I check if --format is used? [23:33] waigani: you can just check the formatter name [23:34] thumper: cool, got it thanks
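fwereade's point about facades being handed an Authorizer, in sketch form. The interface here is a simplified stand-in from memory of the apiserver code; treat the names and signatures as assumptions.

```go
package example

import "errors"

// Authorizer is a simplified stand-in for the real apiserver interface.
type Authorizer interface {
	AuthClient() bool   // is this connection an end-user client?
	GetAuthTag() string // who is authenticated on this connection?
}

var ErrPerm = errors.New("permission denied")

type UserAPI struct {
	auth Authorizer
}

// NewUserAPI refuses to construct the facade unless the connection is an
// authenticated client, so agents never get access to UserInfo-style calls.
func NewUserAPI(auth Authorizer) (*UserAPI, error) {
	if !auth.AuthClient() {
		return nil, ErrPerm
	}
	return &UserAPI{auth: auth}, nil
}
```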