[00:48] jillr: I don't have much experience diagnosing such things, but I can try. do you have any logs from the unhappy mongod that I can peruse?
[00:48] and the jujud agent on that machine
[00:52] axw: thanks. I have an IR with a bunch of pastes of logs and stats, I can upload new logs, I can also provide shell access
[00:52] what's the best route to get you fresh logs?
[00:53] jillr: I don't know what an IR is. if you can give me shell access then I can just dig around directly
[00:54] oh, and since it's HA, which machine? the one with the FATAL mongo member?
[00:54] sorry, incident report
[00:55] jillr: yes please, the FATAL one
[02:02] ericsnow: wwitzel3 ?
[02:45] thumper: possibly sorted, the machine is NUMA and juju 1.20 doesn't run mongod on NUMA properly
[02:46] what's NUMA?
[02:46] which can lead to periodic slow downs
[02:46] non uniform memory architecture
[02:46] err, access
[02:46] and this caused some of the transaction log failures?
[02:47] maybe the transaction log was being written to /dev/null because it is web scale?
[02:47] thumper: sorry brb
[02:52] thumper: so I noticed in the mongo log that it was timing out getting replset info from the master
[02:52] and googled that
[02:52] and after much sifting, found someone who had that on a NUMA machine, and it was fixed when they used the recommended settings
[02:53] anyway, it's been applied on 1/3 machines, status will be monitored and applied to others as necessary
[02:53] bbs
[03:00] hmm
[03:19] very simple review for someone: http://reviews.vapour.ws/r/864/diff/#
=== kadams54 is now known as kadams54-away
[03:44] thumper: wouldn't u want to check info.created in the test?
[03:44] thumper: otherwise, lgtm but u'd need someone with power to 'shipit'
[03:47] anastasiamac: it is an internal implementation detail, and the secondary write fails without it
[03:50] thumper: i understand why u'd set it to false on failure (disk.go) but why is it still false on success (mem.go)?
[03:50] thumper: just a point of interest really... did not feel too intuitive to me :D
[03:51] anastasiamac: because mem.go supports the same behaviour as disk.go
[03:51] but just stores in memory
[03:51] and without it, it fails the test :)
[03:51] and it isn't false on failure in disk.go
[03:51] it is on success
[03:53] anastasiamac: oh... yeah, it is a bit ick
[03:53] I just noticed what you were looking at
[03:53] errors.Annotate returns nil if err is nil
[03:54] thumper: yes, i forgot Annotate returns nil
[03:54] thumper: in other words, u r just resetting .created for the next write
[03:55] aye
[03:56] thumper: thnx for patiently explaining stuff :D lgtm
[04:03] anastasiamac: you have a few minutes while I propose this branch, then I'm heading home
[04:03] so... go
[04:03] thumper: thnx :)
[04:04] thumper: i want to test some db operations (annotations specifically)
[04:04] thumper: they r performed on some entities
[04:04] thumper: that comply with an interface
[04:04] thumper: any problems for me to test using a mock entity
[04:04] thumper: rather than our existing juju entity?
[04:05] thumper: since m testing operations rather than entity behaviour...
[04:05] as long as functions aren't writing to the db with the expectation that the entity exists
[04:05] then, yes, sure use a mock
[04:06] here's another one: http://reviews.vapour.ws/r/870/
[04:06] thumper: well, i can't write my test entity to db for tests?
[04:06] thumper: don't we clean db after tests run?
[04:09] is this thing still on...?
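The disk.go/mem.go exchange above turns on one property of github.com/juju/errors: Annotate returns nil when the error it wraps is nil. Below is a minimal sketch of the "reset the flag, then annotate" pattern being described, using a hypothetical store type rather than the actual disk.go/mem.go code:

```go
package main

import (
	"fmt"

	"github.com/juju/errors"
)

// store is a hypothetical stand-in for the types discussed above; only
// the shape of the write pattern matters here.
type store struct {
	created bool
}

func (s *store) write(data []byte) error {
	err := s.secondaryWrite(data) // hypothetical underlying write
	// Reset created unconditionally so the next write starts fresh,
	// whether or not this one succeeded.
	s.created = false
	// errors.Annotate is a no-op on success: it returns nil when err is
	// nil, so this single return covers both the success and error paths.
	return errors.Annotate(err, "cannot write data")
}

func (s *store) secondaryWrite(data []byte) error { return nil }

func main() {
	s := &store{created: true}
	fmt.Println(s.write([]byte("x")), s.created) // <nil> false
}
```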
[04:09] why can't you write your entity to the db?
[04:09] sure you can
[04:09] use the Factory to create an entity
[04:09] it is reset after every test
[04:09] in tear down
[04:09] my network connection was dropped
[04:10] thumper: s/well/why
[04:10] thumper: yes, it was my intention :D thnx
[04:11] thumper: maybe
[05:21] axw: thanks. That was fast :)
[05:21] jw4: nps, that was easy :)
[05:21] :)
[05:46] axw: great feedback - thanks!
[05:46] nps
[08:36] morning
=== mthaddon` is now known as mthaddon
[10:02] TheMue: hi :D
[10:03] anastasiamac: heya o/
[10:15] voidspace, ping
[10:16] morning TheMue
[10:16] how goes the testing? :)
[10:16] dimitern: heya, we're in our hangout
[10:16] dimitern: morning btw
[10:16] ah, ok
[10:17] dimitern: pong
[10:17] dimitern: tests currently fail due to non-provisioned machines inside the containers, InstanceId() needs it. so I'll add it next.
[10:17] TheMue: jamestunnicliffe: browser crash! Sorry.
[10:17] voidspace, so just a quick sync up - when you're done about would you manage to setup docker etc. to test the proxy fix backport for bug 1403225
[10:17] Bug #1403225: charm download behind the enterprise proxy fails
[10:17] dimitern: the rest of your changes are now in my refactoring
[10:17] dimitern: docker experiment was a fail
[10:17] TheMue, ok, that's progress, thanks
[10:18] dimitern: oh, oops
[10:18] dimitern: I forgot your backport
[10:18] dimitern: I have clonable kvm images though instead
[10:18] dimitern: will get on it
[10:18] voidspace, sweet! even better imo
[10:18] voidspace, cheers
[10:18] jamestunnicliffe, and morning to you as well :)
[10:19] jamestunnicliffe, did you manage to look into bug 1417617?
[10:19] Bug #1417617: apt-proxy can be incorrectly set when the fallback from http-proxy is used
[10:21] dimitern: he has a fix for it...
[10:22] dimitern: Indeed, fix on its way. Just in hangout.
[10:22] jamestunnicliffe, awesome!
[10:22] ok, i'll leave you guys alone now :)
[10:23] just a reminder - please add cards for what you're doing on the kanban board if you haven't done it
[10:23] oops :D
[10:23] ok
[10:24] and if you want your expenses for the sprint reimbursed this month file a claim today
[10:24] cheers ;)
[10:25] dimitern: do we add cards for bug work?
[10:25] dimitern: never mind, seen one of yours - will copy :-)
[10:25] jamestunnicliffe, yes - there's a card type "defect" and a field on the right to fill in the LP bug#
[10:26] so it's linked automatically
[10:29] morning
[10:35] perrito666: morning
[11:16] morning
[11:17] wwitzel3: morning
[11:17] hey voidspace, how's it going?
[11:17] wwitzel3: not bad
[11:17] wwitzel3: a huge stack of paperwork to sign alongside my regular work
[11:17] wwitzel3: (house paperwork)
[11:18] wwitzel3: current status: cloning, destroying, and cloning again kvm images for testing race conditions around setting up proxy environment variables
[11:18] voidspace: that's great!
[11:18] wwitzel3: it's quite fun
[11:18] wwitzel3: yeah, it's good news
[11:18] wwitzel3: I'm really hoping we can complete next week
[11:19] wwitzel3: and we're now officially in the "safe for home birth period"
[11:19] wwitzel3: as of midnight last night, if Delia goes into labour we can have the birth at home
[11:19] voidspace: wonderful
[11:19] wwitzel3: two weeks early is common, which would be next week
[11:19] wwitzel3: probably at the same time as we're trying to move...
:-)
[11:20] voidspace: hah, yeah
[11:21] voidspace: at least it is close, worst case is you are two houses away from one of the houses :)
[11:23] wwitzel3: indeed :-)
[11:23] wwitzel3: during the crossover period wouldn't be too bad. At least we'd have a choice of houses...
[12:56] perrito666: ok, so whenever the proxyupdater onChange is called "first" is always false. I'm continuing to track down why...
[12:57] ah, test passes now again
[12:57] voidspace: I see, that might be why there is an || in the if
[12:58] someone trying to bypass that issue in the wrong way
[12:58] perrito666: sure, but the proxy settings aren't different on first run
[12:58] perrito666: no, the || is so that system files are only written if they've changed *or* on the first run
[12:58] perrito666: but "first" is always false - so they're not written out
[12:59] perrito666: I'm tracking down why "first" is false when it's clearly initialised to true
[12:59] well, clearly *looks like* it's initialised to true at any rate...
[13:00] heheh #DEFINE False True
[13:00] or the other way around :p
[13:00] :-)
[13:01] voidspace, needs to block perhaps, but only inside the unit agent
[13:02] voidspace, I started implementing the suggestion katco gave, but this really blew up the size of the patch with all the testing required
[13:03] dimitern: ah, ok
[13:04] dimitern: it's only an intermittent issue :-)
[13:04] voidspace, yeah - inside the machine agent there's no charm revision updater
[13:04] voidspace, however, hmm..
[13:04] dimitern: yes there is
[13:04] * dimitern takes a deeper look
[13:04] dimitern: charmrevisionworker is started inside newStateStarterWorker
[13:05] dimitern: see my latest comment on the bug
[13:05] voidspace, ok, I stand corrected :)
[13:05] sorry
[13:06] np of course
[13:06] dimitern: will the worker retry
[13:06] dimitern: if so, does a transient error matter?
[13:06] voidspace, var interval = 24 * time.Hour
[13:07] it does retry daily
[13:07] which is perhaps too long; it should retry more often on connection errors I think
[13:08] dimitern: any reason I shouldn't see error logging from when notify watchers are starting in the logs?
[13:09] voidspace, but that should happen in the apiserver charmrevisionupdater I think
[13:09] dimitern: right
[13:09] dimitern: what do you think is the *right* fix?
[13:09] dimitern: have charmrevisionworker retry on failure or have the agents block until proxy settings are in place
[13:09] or both :-)
[13:15] ok, so I *am* seeing "handleProxyValues" being called with "first" true
[13:18] and I was seeing the logging - I just had to grep through all-machines.log
[13:19] it wasn't recent enough for "juju debug-log"
[13:20] voidspace, well :) first off, I think the charmrevisionupdater should retry a few times on download failures with exponential backoff
[13:21] dimitern: so should I do that *now* or should I file a bug for it and go to the lxc-broker
[13:21] voidspace, this is definitely more resilient than the current "fire-and-forget" approach
[13:22] voidspace, don't worry about it, I'm on it already - will file a bug
[13:22] dimitern: I'm spending a little bit more time on the "first" logic
[13:23] dimitern: it *really* looks to me like the old code should have worked... unless, hang on...
[13:23] nope
[13:23] mysterious
[13:29] mgz: so on my maas juju doesn't write the syslog or cert files at all .. so I am not able to reproduce the error
[13:29] wwitzel3: that... does not sound good?
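The proxyupdater debugging above revolves around one check: system files are written when the settings have changed *or* on the first notification, guarded by a "first" flag. A minimal sketch of that shape, with hypothetical names rather than the real worker code, to make the flag's role concrete:

```go
package main

import "fmt"

// proxySettings is a hypothetical stand-in for the configuration the
// worker watches.
type proxySettings struct {
	HTTP, HTTPS, NoProxy string
}

// updater mirrors the check discussed above: write the files when the
// settings differ from the last seen value, or on the very first event.
type updater struct {
	first    bool
	previous proxySettings
}

func newUpdater() *updater {
	return &updater{first: true}
}

func (u *updater) onChange(current proxySettings) {
	if u.first || current != u.previous {
		u.writeSystemFiles(current)
	}
	u.first = false
	u.previous = current
}

func (u *updater) writeSystemFiles(s proxySettings) {
	fmt.Println("writing proxy files for:", s.HTTP)
}

func main() {
	u := newUpdater()
	u.onChange(proxySettings{HTTP: "http://proxy:3128"}) // first event: written
	u.onChange(proxySettings{HTTP: "http://proxy:3128"}) // unchanged: skipped
	u.onChange(proxySettings{HTTP: "http://squid:3128"}) // changed: written
}
```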
[13:29] mgz: re https://bugs.launchpad.net/juju-core/+bug/1417875
[13:29] Bug #1417875: ERROR juju.worker runner.go:219 exited "rsyslog": x509: certificate signed by unknown authority
[13:30] not writing the syslog would be a ug, no?
[13:30] *bug
[13:31] mgz: well it might just be my setup *shrug* I haven't actually confirmed what is happening yet
[13:31] mgz: will open a bug for it once I confirm
[13:31] it must write something, you mean the all-machines.log on the state server doesn't exist at all? or what?
[13:32] voidspace, it seems to me this is indeed a racy situation between the CRU and PU workers
[13:32] mgz: the state server is fine, the units don't have juju-*.conf files for syslog or any of the certs
[13:32] dimitern: indeed, that's my conclusion
[13:32] voidspace, one question though - when you see the errors from CRU failing to download, do you also see right after the CRU worker exiting and getting restarted?
[13:33] voidspace, I've just answered my own question looking at the code :) sorry - ruw.updateVersions just *logs* the error, rather than returning it out of the loop
[13:34] right
[13:34] voidspace, which is *wrong*
[13:34] mgz: but a ca-cert.pem is getting written, which is odd since ensureCertificates does that part *last* ..anyway, investigating further :)
[13:34] voidspace, because, if it *did* return an error, that would've caused its runner to kill it and restart it 3secs later, and so retrying - even with a race it will be resolved in seconds, not 24h
[13:35] dimitern: how many times will it retry?
[13:35] dimitern: maybe that's the right fix then
[13:35] voidspace, just consulted william and that seems like the easiest fix to do for 1.21 and 1.22, for 1.23 however, we should do better
[13:35] dimitern: as far as I can tell the existing "first" logic is correct
[13:36] dimitern: William will be interested in this
[13:36] dimitern: and putting the SetEnvironmentVariables call *back* where it was, but leaving in place starting the proxyupdaterworker first
[13:36] dimitern: I can successfully deploy charms
[13:36] voidspace, it will keep restarting it
[13:36] dimitern: so I think it was the race condition that was the problem all along...
[13:36] dimitern: and the first logic is fine
[13:37] voidspace, wow :)
[13:37] dimitern: I've just successfully deployed a charm this way and my logging confirms that "first" is set correctly and the environment variables are set
[13:37] voidspace, great job on finding it then!
[13:38] voidspace, however to fix the race we'll need a lot more code and testing than to make it virtually irrelevant
[13:38] dimitern: right
[13:39] dimitern: so I'll change the charmrevisionworker to return the error
[13:39] voidspace, for 1.23 we should fix it properly, as william also suggested we should take advantage of nesting runners to define the order things start
[13:39] dimitern: is there an example of this I can look at?
[13:40] voidspace, it should return an error yes, in addition to leaving your original fix in place for 1.21 and 1.22
[13:40] dimitern: we didn't backport to 1.21
[13:40] dimitern: want me to look at that?
[13:41] voidspace, it's as easy as return errors.Annotate(err-from-updateVersions, "failed updating charm versions")
[13:42] dimitern: but the main fix will need backporting too
[13:42] dimitern: I meant an example of nesting workers to define the order
[13:42] voidspace, that has to happen in the CRU loop each time when we call updateVersions
[13:42] dimitern: returning an error I can probably work out for myself...
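The one-line fix dimitern spells out above relies on how worker runners treat a returned error: a worker whose loop returns an error is killed and restarted by its runner a few seconds later, so a transient charm download failure is retried quickly instead of waiting for the next daily tick. A minimal sketch of that loop with hypothetical names (not the real charmrevisionworker code):

```go
package main

import (
	"fmt"
	"time"

	"github.com/juju/errors"
)

// interval mirrors the daily tick quoted above.
var interval = 24 * time.Hour

// revisionUpdateWorker is a hypothetical stand-in for the charm revision
// update worker; only the loop structure matters here.
type revisionUpdateWorker struct {
	dying <-chan struct{} // closed when the worker is asked to stop
}

func (ruw *revisionUpdateWorker) loop() error {
	for {
		if err := ruw.updateVersions(); err != nil {
			// Just logging here means a failed download waits up to 24h
			// for the next attempt. Returning the annotated error instead
			// lets the surrounding runner kill the worker and restart it
			// a few seconds later, retrying while the race resolves itself.
			return errors.Annotate(err, "failed updating charm versions")
		}
		select {
		case <-ruw.dying:
			return nil
		case <-time.After(interval):
		}
	}
}

// updateVersions stands in for the call that talks to the charm store;
// here it fails once to show the effect of returning the error.
func (ruw *revisionUpdateWorker) updateVersions() error {
	return errors.New("cannot download charm: proxy not configured yet")
}

func main() {
	stop := make(chan struct{})
	close(stop)
	ruw := &revisionUpdateWorker{dying: stop}
	fmt.Println(ruw.loop())
}
```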
[13:42] voidspace, ah, well a worker.Runner is itself a worker.Worker
[13:42] voidspace, sorry :)
[13:43] dimitern: so let's do this in order of things to do
[13:43] voidspace, I got confused trying to follow 3 separate topics we're having
[13:43] dimitern: backport the existing fix to 1.21
[13:43] dimitern: change charm revision worker to return the error in trunk, 1.22 and 1.21
[13:43] dimitern: then look at a proper fix for trunk
[13:43] dimitern: sound good?
[13:44] perrito666: FYI, the existing "first" logic works fine
[13:44] voidspace, yeah, if you don't mind fixing CRU for 1.21 first, *without* four PU fix
[13:44] s/four/your/
[13:44] dimitern: so you're saying no to "backport the existing fix to 1.21" then
[13:44] dimitern: that's fine
[13:45] voidspace, and then as i backported your PU fix in 1.22, just forward port the CRU fix for 1.22 and 1.23
=== hazmat is now known as hazaway
=== hazinhell is now known as hazmat
[13:45] perrito666: the actual bug was a race condition and fixed (mostly) by the other part of the PR
[13:45] perrito666: but unconditionally setting environment variables is harmless and can be left in place
[13:45] perrito666: as there's still plenty of other things to fix
[13:45] dimitern: ok
[13:45] dimitern: will do
[13:45] dimitern: later... lunch first
[13:45] voidspace: agreed, as long as we know what was the other problem
[13:46] voidspace, the reason not to backport the CRU fix for 1.21 is because I believe it's a lot messier to do and we really need to get 1.21.2 out the door tomorrow
[13:46] dimitern: you mean "not to backport the PU fix for 1.21"
[13:46] voidspace, while the other fixes are less critical and also trivial to transplant
[13:46] dimitern: but ok
[13:46] yep
[13:46] voidspace, ofc :) sorry
[13:47] voidspace, and having the CRU fix in 1.21 will make the issue a minor annoyance rather than a blocker
[13:47] dimitern: the CRU wasn't the main problem I don't think
[13:47] dimitern: only a side issue
[13:48] dimitern: I still don't think you'll be able to deploy charms with 1.21
[13:48] dimitern: I'd be happy to be wrong about that...
[13:48] dimitern: and I can try it
[13:48] voidspace, hmm..
[13:48] dimitern: it's downloading the charm that fails
[13:49] voidspace, that's likely to be correct, but I'd rather tackle it myself so you can return to the CA work
[13:49] :)
[13:50] dimitern: heh, ok
[13:50] dimitern: really going on lunch
[13:50] o/
[13:50] voidspace, enjoy! :)
[13:52] dimitern: sorry to bother, what is stopping https://bugs.launchpad.net/juju-core/+bug/1416425 from being in 1.21?
[13:52] Bug #1416425: src/bitbucket.org/kardianos/osext/LICENSE is wrong
[13:52] perrito666, let me have a look
[13:53] dimitern: says you committed the fix
[13:53] perrito666, you mean why it's not Fix Released for 1.21?
[13:53] yup
[13:54] perrito666, because 1.21.2 is not released yet (due tomorrow)
[13:54] perrito666, and since it's a release issue, unlike a blocker or other..
[13:54] I see, yup
[14:31] sinzui, do you know when 1.22-beta3 is due for release?
[14:33] dimitern, I was hoping for a few more fixes given the large list of issues https://launchpad.net/juju-core/+milestone/1.22-beta3
[14:33] dimitern, we can release tomorrow if we have stakeholders that need to test a fix
[14:34] sinzui, sgtm to sync 1.21.2 with 1.22-beta3 release
[14:34] sinzui, and how about 1.22 proper? what's the plan?
[14:35] dimitern, when all bugs are fixed and stakeholders find no other bugs, we propose 1.22.0. That could be next week
[14:39] anyone would be so nice? http://reviews.vapour.ws/r/873/ it's trivial and urgent
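On "a worker.Runner is itself a worker.Worker": because a runner satisfies the worker interface, a runner can be started as a worker inside another runner, which is the nesting William suggests for controlling start order. The sketch below uses deliberately tiny, hypothetical Worker/Runner types rather than juju's real worker package, purely to show the shape of the idea:

```go
package main

import "fmt"

// Worker is the minimal interface a runner manages. These types are
// hypothetical simplifications, not juju's actual worker package.
type Worker interface {
	Kill()
	Wait() error
}

// Runner starts named workers; crucially, it also implements Worker
// itself, so runners can be nested.
type Runner struct {
	name    string
	workers []Worker
}

func NewRunner(name string) *Runner { return &Runner{name: name} }

func (r *Runner) StartWorker(name string, start func() (Worker, error)) error {
	w, err := start()
	if err != nil {
		return err
	}
	fmt.Printf("%s: started %s\n", r.name, name)
	r.workers = append(r.workers, w)
	return nil
}

// Kill and Wait make *Runner satisfy Worker.
func (r *Runner) Kill() {
	for _, w := range r.workers {
		w.Kill()
	}
}

func (r *Runner) Wait() error {
	for _, w := range r.workers {
		if err := w.Wait(); err != nil {
			return err
		}
	}
	return nil
}

type noopWorker struct{}

func (noopWorker) Kill()       {}
func (noopWorker) Wait() error { return nil }

func main() {
	outer := NewRunner("outer")
	// Start the worker that others depend on first...
	outer.StartWorker("proxyupdater", func() (Worker, error) {
		return noopWorker{}, nil
	})
	// ...then start a nested runner as an ordinary worker; its own workers
	// only start once it does, which is one way to express ordering.
	outer.StartWorker("api-workers", func() (Worker, error) {
		inner := NewRunner("inner")
		inner.StartWorker("charmrevisionupdater", func() (Worker, error) {
			return noopWorker{}, nil
		})
		return inner, nil
	})
	outer.Kill()
	outer.Wait()
}
```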
[14:39] sinzui, that sounds great, thank you
[14:39] np
[14:57] sinzui: helpful? https://github.com/juju/testing/pull/49 to address bug 1416430 from the 1.22-beta3 milestone
[14:57] Bug #1416430: Some files refer to an include license file that is not included
[14:58] jw4, yes please
[14:58] TheMue, can you approve this backport please? http://reviews.vapour.ws/r/877/
[14:59] dimitern: yes, will do.
[14:59] sinzui: kk - does that version have to actually be referenced in the dependencies.tsv or is it sufficient for it to be merged into master?
[14:59] (of the testing repo)
[15:01] ericsnow: are you a full reviewer?
[15:01] perrito666: not yet
[15:01] * perrito666 needs a bureaucratic LGTM
[15:01] dimitern: +1
[15:01] perrito666: OCR though
[15:01] ericsnow: http://reviews.vapour.ws/r/873/
[15:01] perrito666: maybe TheMue can stamp it - it looks good to me
[15:02] jw4: perrito666: already looking
[15:02] TheMue: cool
[15:02] yeah we all know frank is getting free beer for that in april
[15:03] perrito666: go for it, you've got a ship-it. and I'll remind you for the beer :D
[15:04] jw4, yes, dependencies.tsv must be updated in juju/juju if you need a fixed package.
[15:04] sinzui: kk
[15:05] sinzui: fixes for the 1.22-beta3 milestone should be based on 1.22 branch I presume...
[15:05] jw4, we need to fix 1.23, then fix 1.22 and 1.21. Each is a separate PR :(
[15:06] sinzui: lol - okay - so 1.23 --> master right? and I should do that first and then backport?
[15:06] jw4: hangout?
[15:06] jw4, yep
[15:06] TheMue: sorry - thought we were just doing irc today
[15:10] and since we are on it https://github.com/juju/syslog/pull/1
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[15:14] yeah, OCR PTAL https://github.com/juju/testing/pull/49 -- just a licensing update pr
[15:22] TheMue, thanks!
[15:24] dimitern: btw, I thought we already had feature freeze for 1.22, don't we? asking because of requirements for actions.
[15:24] dimitern: and did you see my question about the travel back I mailed to you and Alexis?
[15:25] TheMue, yeah, but that's a potential critical issue fix
[15:25] dimitern: finally btw the tests are working again and I'm now working on the exhaustion of the available addresses :D
[15:25] TheMue, yes, I'll ping her today for a response
[15:25] dimitern: great, thanks
[15:25] TheMue, sweet!
[15:25] who owns juju/syslog ?
[15:26] perrito666, cloudbase I think
[15:26] dimitern: I doubt it, it's in our repo :p
[15:27] perrito666, it's forked from gabriel-samfira/syslog
[15:27] which is a bit weird
[15:28] I haven't see this for other juju/* projects
[15:28] seen* even
[15:28] dimitern: it was made by cloudbase and added to our repo when we began merging windows
[15:28] perrito666, right
[15:29] dimitern: but whoever added this repo did not make us (the team) owners
[15:30] perrito666, hmm
[15:30] curious copyright line Copyright (c) 2014, Gabriel. All rights reserved. but it appears to be BSD/MIT
[15:32] dimitern: what I am trying to commit fixes that
[15:32] actually :p
[15:32] that was github filling in the license when he created the repo
[15:32] perrito666, ah :) I suspected
[15:32] it now has the proper licence
[15:32] dimitern: if you want to merge my pr that's good enough for me
[15:33] seriously?
*lol* GH *is* too smart for its own good
[15:33] you should be owner since you are part of the powers that be
[15:39] dimitern: well it has a sort of a wizard when you create a repo
[15:39] :)
[15:42] mm, let me guess, the bot is still not on utils
[15:42] I feel like an idiot
[15:45] dimitern: do you know anybody in cape town able to help Aram with a kernel problem? asking on the standard canonical channels brought no response, only a hint to cape town
[15:51] TheMue, ok, I'll ask
[15:52] dimitern: great, thanks, will help him. his machine doesn't boot anymore after installing a new kernel
[15:52] dimitern: as a hint => http://sprunge.us/OfFV
[15:56] katco: you around?
[15:56] natefinch: yup
[15:56] katco: evidently there's a problem with goamz in the china north region, where it needs to use V4 for S3, but it's not: https://bugs.launchpad.net/juju-core/+bug/1415693
[15:57] Bug #1415693: Unable to bootstrap on cn-north-1
[15:57] * katco looking
[15:58] katco: note the linked github issue
[15:59] natefinch: ah yes i remember now; the work was supposed to target aws specifically
[15:59] natefinch: i remember being a little confused why we were using different signing everywhere
[15:59] natefinch: so we made no effort to support s3 at the time i wrote that, do we now need to do that?
[16:00] katco: sounds like we do. Some of our people are testing in China and need that to work there... they're supposed to go live by end of month, so sooner is better than later
[16:00] natefinch: i will have to get clarification from ian on priorities, but if i remember, it shouldn't be too hard
[16:00] natefinch: kind of a "redirect to master signing package" type thing
[16:01] katco: I'll send an email to canonical-juju and make sure to ping Ian about it. This sounds like it should probably be pretty high priority
[16:01] mm, we should start branching things like utils when we branch juju for revs
[16:02] backporting licence fixes is going to be an interesting PITA
[16:04] natefinch: ok cool ty for the heads-up
[16:04] natefinch, perrito666, wwitzel3: standup?
[16:11] natefinch: standup?
[16:19] jw4: ?
[16:19] https://github.com/juju/testing/pull/49 <-- why does this patch change the LICENCE file to AGPL?
[16:31] natefinch, ericsnow we are having some iffy networking issues
[16:31] I am hoping I can get into the hangout at the top of the hour but may not be able to
[16:32] if I can't get in we may have to reschedule just fyi
[16:32] alexisb: ok
[16:34] alexisb: k
[16:38] jw4: ?
[16:40] ericsnow, we will need to redeploy reviewboard. The current one will be kept alive, but you cannot change it
[16:40] ericsnow, We will need the config you used to deploy it, or you can redeploy it when juju-ci4 is ready
[16:41] sinzui: I should have some time for that today so just let me know when it's ready
[17:15] mm, bad day for hardware
[17:15] jw4: ping?
[17:18] jamestunnicliffe: seen your PR. golang convention for function/method comments is in case of func Foo() { ... } starting the comment with the name: // Foo does this and that.
[17:19] TheMue: Ah, yes
[17:28] jamestunnicliffe: you've got a review
[17:30] thank you ericsnow and natefinch !!
[17:30] alexisb: no, thank you :)
[17:30] yeah, how'd that go?
[17:33] went well I think. Basically just clarifying expectations etc
[17:35] nice
[17:42] jamestunnicliffe: did you test your latest change? the renaming to the expected values isn't used, you use the old name in the assert
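Back on the goamz/cn-north-1 exchange above at [15:56]-[16:04]: the AWS China (Beijing) region only accepts Signature Version 4, so S3 requests signed with the older scheme are rejected there, which is why bootstrap fails. A hypothetical sketch of the kind of region check such signer selection could hang off; these names are illustrative and not goamz's actual API:

```go
package main

import "fmt"

// requiresSigV4 reports whether a region accepts only AWS Signature
// Version 4. cn-north-1 and eu-central-1 launched as V4-only regions;
// older regions still accepted the legacy scheme at the time of this
// discussion.
func requiresSigV4(region string) bool {
	switch region {
	case "cn-north-1", "eu-central-1":
		return true
	}
	return false
}

func main() {
	for _, r := range []string{"us-east-1", "cn-north-1"} {
		if requiresSigV4(r) {
			fmt.Printf("%s: sign S3 requests with V4\n", r)
		} else {
			fmt.Printf("%s: legacy signing still accepted\n", r)
		}
	}
}
```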
[17:42] TheMue: Sorry about that, enthusiastic push
[17:42] TheMue: Just running tests now
[17:43] jamestunnicliffe: *lol* cool, np, it's a good feeling to get the first fix in
[17:43] TheMue: For some reason I can't squash the changes into one commit. Works locally, then github complains.
[17:44] TheMue: I guess you will cope!
[17:45] jamestunnicliffe: how does it complain?
[17:45] TheMue: Updates were rejected because the tip of your current branch is behind its remote counterpart...
[17:46] TheMue: The usual fast forward error. Though after a pull, merge, squash the problem remains.
[17:46] TheMue: I can pull, merge, push. That is fine. Very odd.
[17:47] jamespage: because of MY current branch?
[17:47] TheMue: no.
[17:47] oh, addressed wrong james
[17:47] TheMue: It really isn't a problem. Just a bit untidy.
[17:48] jamestunnicliffe: one of the weird situations when using git. sometimes I don't really understand it
[17:49] TheMue: About that comment about spacing, I used the same as the section above, so it may be strange, but it passes go fmt and is consistent :-)
[17:51] jamestunnicliffe: go fmt won't complain, it's inside of the string. "http//http proxy" looks interesting. would have to look how it is used.
[17:51] http://...
[17:52] TheMue: Oh, that. The previous test writers used "http proxy" and I stuck with it, though the http:// is added.
[17:54] jamestunnicliffe: yes, that's how I understood it too. the format containing a space isn't part of this change.
[17:57] anyone know about the worker/runner infrastructure and care to answer a question?
[17:58] it works and runs *duck* *scnr*
[17:59] heh
[17:59] TheMue: dimitern said that I need to change a worker to "return an error out of the loop" instead of just logging it
[17:59] to do this, do I have the method that gets the error call ruw.tomb.Kill(err)?
[18:00] where ruw is the worker in question
[18:00] ah no, the loop needs to actually return
[18:00] so updateVersions returns the error
[18:00] and the call in the loop returns that
[18:00] easy-peasy
[18:00] :-)
[18:00] voidspace: this would kill it, yes
[18:01] TheMue: once again, in the process of explaining the question I work out the answer :-)
[18:01] TheMue: thanks for being my rubber ducky...
[18:01] voidspace: has been a pleasure :)
[18:02] :-)
[18:04] TheMue: Think I am ready for a re-review if you can. Then I have a daughter to cuddle :-)
[18:06] bogdanteleaga: I've left a lot of review comments, but many of them are there just to mark all the spots where previous comments apply
[18:07] bogdanteleaga: each one isn't some separate kind of problem that needs to be addressed :)
[18:07] jamestunnicliffe: I'll take a look
[18:09] ericsnow: I'm trying to find the syscall thing, but no luck :(
[18:09] jamestunnicliffe: looks good, ship-it
[18:10] ericsnow: oh, now I see what you mean
[18:14] TheMue, voidspace: Have a good weekend! I'm off for the evening. See you Monday/Tuesday.
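For the worker/runner question voidspace talks himself through above: in the usual worker wiring the goroutine that runs the loop passes the loop's return value straight to tomb.Kill, so "return the error out of the loop" and "kill the worker with that error" are the same thing; there is no need to call ruw.tomb.Kill(err) from inside the loop. A minimal sketch of that wiring, assuming gopkg.in/tomb.v1 and hypothetical names:

```go
package main

import (
	"fmt"

	"github.com/juju/errors"
	"gopkg.in/tomb.v1"
)

// updateWorker is hypothetical; it shows only the wiring between the
// loop's return value and the tomb.
type updateWorker struct {
	tomb tomb.Tomb
}

func newUpdateWorker() *updateWorker {
	w := &updateWorker{}
	go func() {
		defer w.tomb.Done()
		// Whatever loop returns becomes the reason the worker died, so
		// returning an error from the loop is how the runner sees it.
		w.tomb.Kill(w.loop())
	}()
	return w
}

func (w *updateWorker) loop() error {
	for {
		if err := w.updateVersions(); err != nil {
			return errors.Annotate(err, "failed updating charm versions")
		}
		select {
		case <-w.tomb.Dying():
			return tomb.ErrDying
		}
	}
}

// updateVersions is a stand-in that fails immediately for demonstration.
func (w *updateWorker) updateVersions() error {
	return errors.New("download failed")
}

// Kill and Wait are the usual worker.Worker methods.
func (w *updateWorker) Kill()       { w.tomb.Kill(nil) }
func (w *updateWorker) Wait() error { return w.tomb.Wait() }

func main() {
	w := newUpdateWorker()
	fmt.Println(w.Wait()) // failed updating charm versions: download failed
}
```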
[18:14] jamestunnicliffe: enjoy your evening and weekend too, see you then
[18:20] perrito666: back
[18:23] perrito666: there was no clear license
[18:23] jw4: github says you removed lgpl and added agpl
[18:23] perrito666: the existing file 'LICENSE' was lgpl, but all the source files referenced 'LICENCE'
[18:24] perrito666: the two files from the bug referenced LICENSE
[18:24] but, the LICENSE file wasn't the golang license
[18:25] perrito666: since the testing files were moved out of juju-core, I figured the right license was the one from juju core (especially since the name in all the source files was LICENCE like the juju-core one)
[18:25] jw4: I can positively say you just confused me
[18:25] the file I removed was not being referenced by any of the source files
[18:26] perrito666: except the two referenced in the bug
[18:26] perrito666: and the file I removed was wrong per the bug for those two files
[18:26] perrito666: all of the source files referenced a non-existent 'LICENCE' (spelled with a C) file
[18:27] perrito666: which used to point to the LICENCE file in juju-core before testing was split to its own repo
[18:27] perrito666: any better ? :)
[18:28] jw4: I see
[18:28] so the part of go licence is good
[18:28] but, you removed the misspelled license and added licence
[18:28] and those have different licence texts inside
[18:28] the thing is
[18:28] if you look at the files
[18:29] https://github.com/juju/testing/blob/master/cleanup.go
[18:29] they expect that licence file to be lgplv3
[18:29] perrito666: I see
[18:29] perrito666: I assumed that was part of the big re-licensing change a few months ago
[18:30] perrito666: my understanding was that testing was split out before core was cleaned up
[18:30] natefinch: care to weigh in on that?
[18:31] perrito666, natefinch it looks like LICENCE in core has always been affero
[18:31] perrito666: so maybe the fix is to update all the testing files to point to the LICENCE file and revert the changes in that file
[18:36] perrito666: er, not updating the testing files, just moving LICENSE to LICENCE
[18:36] yep
[18:38] perrito666: look better now? https://github.com/juju/testing/pull/49
[18:39] jw4: totally
[18:39] perrito666: w00t
[18:39] could you be a sport and make sure that all files have either go or lgpl licences?
[18:39] perrito666: heh - yes I'll be a sport
[18:40] sorry there is a British fellow living inside me :p
[18:40] perrito666: yeah, me too
[18:42] sillies :)
[18:42] perrito666: yep all licensed
[18:42] sweeet
[18:42] mgz: oh, just in time to lgtm jw4 proposal
[18:42] mgz: jolly good show
[18:45] here now mgz; popping in with a bit of wit and then disappearing is not cricket!
[18:46] mgz: if you come back you can make fun of Americans? Or South Africans? Or Argentinians?
[18:48] you really can't, you can only mock in the reverse of colonial history, otherwise it's just picking on people :0
[18:49] mgz: hehe
[19:06] perrito666, jw4: license for juju core itself is agpl.. the license for "libraries" outside of core should be lgpl... where exactly we draw the line between what is and is not core is kinda fuzzy
[19:06] natefinch: cool
[19:06] natefinch: that's where this PR ended up so that's good
[19:06] (also note, lgpl is actually lgpl with Canonical's special static linking exception, which is required since Go always static links)
[19:07] natefinch: interesting.
[19:07] natefinch: do you care to review and possibly stamp https://github.com/juju/testing/pull/49
[19:08] natefinch: it's work for one of the bugs sinzui is hoping to be addressed for 1.22-beta3 release
[19:11] perrito666: woot :)
[19:11] jw4: I say LGTM, given the hurry that is enough
[19:11] :p
[19:11] jw4: right. I do wish we'd just put these files in a separate repo or something.... but I think this is good enough for now.
[19:11] natefinch: cool, tx
[19:16] natefinch: (perrito666) http://reviews.vapour.ws/r/880/ <-- dependencies update with change
[19:17] jw4: I am superseding you with a patch including several of those licencing fixes
[19:17] perrito666: score
[19:17] I'll close mine
[19:17] just running the whole test suite before
[19:17] to make sure the version jump didn't break anything
[19:18] perrito666: I ran the tests on my change already
[19:18] jw4: well I am packing other two :D
[19:18] perrito666: overachiever
[19:19] jw4: I have a dedicated machine for that nevertheless
[19:19] that makes things faster
[19:19] perrito666: my change has to be backported to 1.22 and 1.21 - will you bundle that too?
[19:19] jw4: yup, all of those have to
[19:20] sinzui: natefinch question
[19:20] perrito666: I have an m3.2xlarge instance on ec2 that kicks the tests out in about 8 minutes
[19:20] when we create a release branch
[19:20] why don't we create a branch for the dep libraries that are ours?
[19:21] jw4: I have a corei5 with 8 gigs of ram downstairs :p
[19:21] yeah - I was nervous about that too
[19:21] as a laptop it sucks because it is bulky but as a compile machine it rocks
[19:21] perrito666: sweet
[19:24] * TheMue likes his i7 / 16 Gig / SSD combination ;)
[19:25] TheMue: show-off
[19:25] ;)
[19:26] TheMue: try to get one of those in argentina :p
[19:26] right EOW
[19:26] see you all on Monday
[19:27] perrito666: why not? not available or too expensive
=== kadams54 is now known as kadams54-away
[19:27] voidspace: me too, only cannot close chat *lol*
[19:27] heh
[19:27] voidspace: but I already have wine beside me
[19:27] switch off the computer...
[19:27] nice
[19:27] TheMue: enjoy, see you on Tuesday
[19:28] TheMue: both, but the first specially
[19:28] voidspace: yess, see you then. enjoy your weekend too
[19:28] perrito666, a branch for each dep? or a branch that included a dependencies.tsv of what we officially use?
[19:28] perrito666: interesting, didn't expect this
[19:28] sinzui: a branch for each dep, like utils should have a 1.21 branch/tag
[19:29] when backporting fixes to our own deps that poses an interesting problem
[19:29] perrito666, That might have bad consequences from Ubuntu's perspective, but worth talking about
[19:30] sinzui: you mean maintenance costs?
[19:30] sinzui: well if I have to, say backport a fix for something that is in utils, because it is broken in 1.21 and 1.21 is using an old rev of utils, I will need to do a branch anyway
[19:31] jw4: what does "darwin" use for an init system? something cooked up by Apple?
[19:31] perrito666, I think you suggest that dependencies.tsv points to a tag instead of a hash. We update the tag when we want to change the rev that godeps will select
[19:32] sinzui: yes, that works
[19:32] ericsnow: good question - I don't know the official answer, but on my machine it seems to be a custom launchd thing
[19:32] jw4: yeah I figured as much :)
[19:32] ericsnow: :)
[19:32] perrito666, but since we are not tracking the exact revision used to make the release tarball.
that means it is not possible to recreate an older juju rev in the same series
[19:34] perrito666, 1.21 build 5 works, but the tag is changed. I do a rebuild to test (build 6) and I get a different package with possibly different results
[19:35] perrito666, I will see the dep changed if I diffed the tarball, but I wouldn't know why juju chose the change.
[19:35] perrito666, so while I like your idea for its convenience, it undermines our need for repeatability.
[19:35] sinzui: well right now I have to push a change to 1.21 that fixes deps, lets see how easy to do this is
[19:36] perrito666: basically as long as there are no breaking changes in the updated deps we'll get lucky - otherwise a real hassle
[19:36] jw4: yes, that is the thing, I don't like to rely on luck
[19:37] ditto
[19:37] (but I'll take it if it comes!)
[19:37] perrito666, I would argue that core is branching too early. There are too many branches. I think unstable and stable are enough.
[19:38] sinzui: the thing is, we have, lets say 1.21, which depends on utils 1, syslog 3 and whatever 4
[19:38] and I need to fix something on utils that breaks 1.21
[19:39] but utils is now in version 5
[19:39] so, I really don't want to include 2, 3 and 4 into 1.21
[19:39] perrito666, ah
[19:39] perrito666, that is a nasty combination. We need a lot of branches in that scenario
[19:41] sinzui: that is our current combination
[19:41] sinzui: right now I need to backport a licence change in several deps to 1.21
[19:41] that should be fun for non functional changes
=== kadams54-away is now known as kadams54
[19:49] anyone http://reviews.vapour.ws/r/881/ ?
[19:49] I will take lgtm even from mup at this stage
[19:49] * jw4 impersonates mup
[19:50] : LGTM
[19:58] ericsnow: tx
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[20:05] mm this makes no sense, in github the current commit for 1.21 for utils is not marked as being part of master but in my local version it is
[20:09] sinzui: natefinch so, the issue now, 1.21 is pointing to a rev of utils from dec 11, and I need to apply my changes to 1.21 (the changes are in utils licencing) so is it ok if I create a 1.21 branch, apply them there and then point 1.21 to that?
[20:09] I hear alternatives
[20:10] perrito666, I think the 1.21 branch is the best option
[20:10] perrito666, sorry. It is work, but at least the branch clearly indicates why it exists
[20:11] no prob, I intended to do that
=== kadams54 is now known as kadams54-away
[20:18] sinzui: flaky test? http://juju-ci.vapour.ws:8080/job/github-merge-juju/2045/console
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
[20:40] perrito666, I think it is a flaky test.
[20:42] perrito666: I think sinzui answered your question? Sorry, was snowblowing (yes, again... only 4" today, but enough to turn into ice after I drive over it). Mother nature is making up for not giving us any snow until the end of January, evidently
[20:43] natefinch, Surely you will be getting snow into March.
[20:43] natefinch: isn't snowblowing something you do often enough to be worthy of automation?
[20:43] natefinch, Do your hands hurt now?
[20:44] perrito666: like a roomba for the snow in the driveway? Brilliant
[20:44] Oh, I like that idea. I would like that for my sidewalks too
[20:44] sinzui: nah... sweaty and tired, but no pain.
Ironically, overheating is way more of a problem than getting cold when I snowblow, unless it's near 0 F
[20:45] o/
[20:45] thumper: \o
[20:46] natefinch: seems like something you could fix by making the driveway floor a trapdoor over a big hole with salt
[20:46] * perrito666 just broke his bread machine so did the bread by hand... I am much better than the machine
[20:46] perrito666: kudos
[20:49] who will rubberstamp this? http://reviews.vapour.ws/r/882/
[20:49] natefinch: although I prefer the machine in terms of not having to do anything else than adding ingredients into a bowl
[20:50] perrito666: haha yeah... it's weird, ours used to work well, and then suddenly the bread started not to rise enough.... far as I can tell, we didn't do anything different. Made me sad, I love fresh baked bread.... though now that I've used my stand mixer to make bread a few times, I don't think it's actually that much more work.
[20:51] I made mine by hand
[20:51] so it is some degree of work
[20:51] I just hate having to remember that my bread is rising and that it needs to rise a second time
[20:52] my machine died in an odd manner, apparently the cogs (plastic) got too dry and they disintegrated
[20:56] cmooon, who wants this 3 line change? http://reviews.vapour.ws/r/882/ it's a great opportunity to review without much effort
[21:02] ooh ooh, credit without effort, I'm all over it
[21:02] good boy, here have a slice of bread
[21:03] updating the libraries didn't require changing core at all?
[21:05] nope, I actually only added licence changes
[21:05] I sent an email about that
[21:05] email, who reads that crap?
[21:05] ship it!
[21:08] ericsnow: dependencies.tsv for 1.21 did not have the stamps apparently
[21:09] perrito666: ah, I totally missed that it was for 1.21 :)
[21:15] * perrito666 looks at juju bot while tapping his fingers on the desk
[21:25] this is bad, https://www.techdirt.com/articles/20150205/11373529920/worlds-email-encryption-software-relies-one-guy-who-is-going-broke.shtml
[21:25] we should donate to the poor guy
[21:26] sinzui: my changes have been committed to 1.21, it has no more licencing issues
[21:27] I need to step out for a moment, upon returning, 1.22
[21:27] perrito666: yeah, it's a damn shame that there's so many companies and governments that rely on this kind of software and can't be bothered to spend .0000001% of their budget to give money to the people that maintain it
[21:28] well this particular software we use A LOT
[21:35] perrito666: @stripe 6 minutes ago: Stripe and Facebook are going to sponsor @gnupg development with $50k/year each.
[21:49] natefinch: just saw that - that's great news, I was really surprised that it wasn't already sponsored by a group of companies
[21:50] hatch: totally
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
[22:50] thumper: http://reviews.vapour.ws/r/883