[00:48] <axw> jillr: I don't have much experience diagnosing such things, but I can try. do you have any logs from the unhappy mongod that I can peruse?
[00:48] <axw> and the jujud agent on that machine
[00:52] <jillr> axw: thanks. I have an IR with a bunch of pastes of logs and stats, I can upload new logs, I can also provide shell access
[00:52] <jillr> what's the best route to get you fresh logs?
[00:53] <axw> jillr: I don't know what an IR is. if you can give me shell access then I can just dig around directly
[00:54] <jillr> oh, and since it's HA, which machine? the one with the FATAL mongo member?
[00:54] <jillr> sorry, incident report
[00:55] <axw> jillr: yes please, the FATAL one
[02:02] <perrito666> ericsnow: wwitzel3 ?
[02:45] <axw> thumper: possibly sorted, the machine is NUMA and juju 1.20 doesn't run mongod on NUMA properly
[02:46] <thumper> what's NUMA?
[02:46] <axw> which can lead to periodic slow downs
[02:46] <axw> non uniform memory architecture
[02:46] <axw> err, access
[02:46] <thumper> and this caused some of the transaction log failures?
[02:47] <thumper> maybe the transaction log was being written to /dev/null because it is web scale?
[02:47] <axw> thumper: sorry brb
[02:52] <axw> thumper: so I noticed in the mongo log that it was timing out getting replset info from the master
[02:52] <axw> and googled that
[02:52] <axw> and after much sifting, found someone who had that on a NUMA machine, and it was fixed when they used the recommended settings
[02:53] <axw> anyway, it's been applied on 1/3 machines, status will be monitored and applied to others as necessary
[02:53] <axw> bbs
[03:00] <thumper> hmm
[03:19] <thumper> very simple review for someone: http://reviews.vapour.ws/r/864/diff/#
[03:44] <anastasiamac> thumper: wouldn't u want to check info.created in the test?
[03:44] <anastasiamac> thumper: otherwise, lgtm but u'd need someone with power to 'shipit'
[03:47] <thumper> anastasiamac: it is an internal implementation detail, and the secondary write fails without it
[03:50] <anastasiamac> thumper: i understand why u'd set it to false on failure (disk.go) but why is it still false on success (mem.go)?
[03:50] <anastasiamac> thumper: just a point of interest really... did not feel too intuitive to me :D
[03:51] <thumper> anastasiamac: because mem.go supports the same behaviour as disk.go
[03:51] <thumper> but just stores in memory
[03:51] <thumper> and without it, it fails the test :)
[03:51] <thumper> and it isn't false on failure in disk.go
[03:51] <thumper> it is on success
[03:53] <thumper> anastasiamac: oh... yeah, it is a bit ick
[03:53] <thumper> I just noticed what you were looking at
[03:53] <thumper> errors.Annotate returns nil if err is nil
[03:54] <anastasiamac> thumper: yes, i forgot Annotate returns nil
[03:54] <anastasiamac> thumper: in other words, u r just resetting .created for the next write
[03:55] <thumper> aye
[03:56] <anastasiamac> thumper: thnx for patiently explaining stuff :D lgtm
[04:03] <thumper> anastasiamac: you have a few minutes while I propose this branch, then I'm heading home
[04:03] <thumper> so... go
[04:03] <anastasiamac> thumper: thnx :)
[04:04] <anastasiamac> thumper: i want to test some db operations (annotations specifically)
[04:04] <anastasiamac> thumper: they r performed on some entities
[04:04] <anastasiamac> thumper: that comply with an interface
[04:04] <anastasiamac> thumper: any problems for me to test using a mock entity
[04:04] <anastasiamac> thumper: rather than our existing juju entity?
[04:05] <anastasiamac> thumper: since m testing operations rather than entity behaviour...
[04:05] <thumper> as long as functions aren't writing to the db with the expectation that the entity exists
[04:05] <thumper> then, yes, sure use a mock
[04:06] <thumper> here's another one: http://reviews.vapour.ws/r/870/
[04:06] <anastasiamac> thumper: well, i can't write my test entity to db for tests?
[04:06] <anastasiamac> thumper: don't we clean db after tests run?
[04:09] <thumper> is this thing still on...?
[04:09] <thumper> why can't you write your entity to the db?
[04:09] <thumper> sure you can
[04:09] <thumper> use the Factory to create an entity
[04:09] <thumper> it is reset after every test
[04:09] <thumper> in tear down
[04:09] <thumper> my network connection was dropped
[04:10] <anastasiamac> thumper: s/well/why
[04:10] <anastasiamac> thumper: yes, it was my intention :D thnx
[04:11] <anastasiamac> thumper: maybe
[05:21] <jw4> axw: thanks.  That was fast :)
[05:21] <axw> jw4: nps, that was easy :)
[05:21] <jw4> :)
[05:46] <jw4> axw: great feedback - thanks!
[05:46] <axw> nps
[08:36] <TheMue> morning
[10:02] <anastasiamac> TheMue: hi :D
[10:03] <TheMue> anastasiamac: heya o/
[10:15] <dimitern> voidspace, ping
[10:16] <dimitern> morning TheMue
[10:16] <dimitern> how goes the testing? :)
[10:16] <TheMue> dimitern: heya, we're in our hangout
[10:16] <TheMue> dimitern: morning btw
[10:16] <dimitern> ah, ok
[10:17] <voidspace> dimitern: pong
[10:17] <TheMue> dimitern: tests currently fail due to non-provisioned machines inside the containers, InstanceId() needs it. so I'll add it next.
[10:17] <voidspace> TheMue: jamestunnicliffe: browser crash! Sorry.
[10:17] <dimitern> voidspace, so just a quick sync up - when you're about done, would you manage to set up docker etc. to test the proxy fix backport for bug 1403225
[10:17] <mup> Bug #1403225: charm download behind the enterprise proxy fails <cloud-installer> <deploy> <proxy> <sync-tools> <cloud-installer:Confirmed for adam-stokes> <juju-core:Fix Committed by mfoord> <juju-core 1.21:Triaged> <juju-core 1.22:In Progress by dimitern> <https://launchpad.net/bugs/1403225>
[10:17] <TheMue> dimitern: the rest of your changes are now in my refactoring
[10:17] <voidspace> dimitern: docker experiment was a fail
[10:17] <dimitern> TheMue, ok, that's progress, thanks
[10:18] <voidspace> dimitern: oh, oops
[10:18] <voidspace> dimitern: I forgot your backport
[10:18] <voidspace> dimitern: I have clonable kvm images though instead
[10:18] <voidspace> dimitern: will get on it
[10:18] <dimitern> voidspace, sweet! even better imo
[10:18] <dimitern> voidspace, cheers
[10:18] <dimitern> jamestunnicliffe, and morning to you as well :)
[10:19] <dimitern> jamestunnicliffe, did you manage to look into bug 1417617?
[10:19] <mup> Bug #1417617: apt-proxy can be incorrectly set when the fallback from http-proxy is used <juju-core:In Progress by dooferlad> <https://launchpad.net/bugs/1417617>
[10:21] <voidspace> dimitern: he has a fix for it...
[10:22] <jamestunnicliffe> dimitern: Indeed, fix on its way. Just in hangout.
[10:22] <dimitern> jamestunnicliffe, awesome!
[10:22] <dimitern> ok, i'll leave you guys alone now :)
[10:23] <dimitern> just a reminder - please add cards for what you're doing on the kanban board if you haven't done it
[10:23] <TheMue> oops :D
[10:23] <voidspace> ok
[10:24] <dimitern> and if you want your expenses for the sprint reimbursed this month file a claim today
[10:24] <dimitern> cheers ;)
[10:25] <jamestunnicliffe> dimitern: do we add cards for bug work?
[10:25] <jamestunnicliffe> dimitern: never mind, seen one of yours - will copy :-)
[10:25] <dimitern> jamestunnicliffe, yes - there's a card type "defect" and a field on the right to fill in the LP bug#
[10:26] <dimitern> so it's linked automatically
[10:29] <perrito666> morning
[10:35] <voidspace> perrito666: morning
[11:16] <wwitzel3> morning
[11:17] <voidspace> wwitzel3: morning
[11:17] <wwitzel3> hey voidspace, how's it going?
[11:17] <voidspace> wwitzel3: not bad
[11:17] <voidspace> wwitzel3: a huge stack of paperwork to sign alongside my regular work
[11:17] <voidspace> wwitzel3: (house paperwork)
[11:18] <voidspace> wwitzel3: current status: cloning, destroying, and cloning again kvm images for testing race conditions around setting up proxy environment variables
[11:18] <wwitzel3> voidspace: that's great!
[11:18] <voidspace> wwitzel3: it's quite fun
[11:18] <voidspace> wwitzel3: yeah, it's good news
[11:18] <voidspace> wwitzel3: I'm really hoping we can complete next week
[11:19] <voidspace> wwitzel3: and we're now officially in the "safe for home birth period"
[11:19] <voidspace> wwitzel3: as of midnight last night, if Delia goes into labour we can have the birth at home
[11:19] <wwitzel3> voidspace: wonderful
[11:19] <voidspace> wwitzel3: two weeks early is common, which would be next week
[11:19] <voidspace> wwitzel3: probably at the same time as we're trying to move... :-)
[11:20] <wwitzel3> voidspace: hah, yeah
[11:21] <wwitzel3> voidspace: at least it is close, worst case is you are two houses away from one of the houses :)
[11:23] <voidspace> wwitzel3: indeed :-)
[11:23] <voidspace> wwitzel3: during the crossover period wouldn't be too bad. At least we'd have a choice of houses...
[12:56] <voidspace> perrito666: ok, so whenever the proxyupdater onChange is called "first" is always false. I'm continuing to track down why...
[12:57] <TheMue> ah, test passes now again
[12:57] <perrito666> voidspace: I see, that might be why there is an || in the if
[12:58] <perrito666> someone trying to bypass that issue in the wrong way
[12:58] <voidspace> perrito666: sure, but the proxy settings aren't different on first run
[12:58] <voidspace> perrito666: no, the || is so that system files are only written if they've changed *or* on the first run
[12:58] <voidspace> perrito666: but "first" is always false - so they're not written out
[12:59] <voidspace> perrito666: I'm tracking down why "first" is false when it's clearly initialised to true
[12:59] <voidspace> well, clearly *looks like* it's initialised to true at any rate...
[13:00] <perrito666> heheh #DEFINE False True
[13:00] <perrito666> or the other way around :p
[13:00] <voidspace> :-)
[13:01] <dimitern> voidspace, needs to block perhaps, but only inside the unit agent
[13:02] <dimitern> voidspace, I started implementing the suggestion katco gave, but this really blew up the size of the patch with all the testing required
[13:03] <voidspace> dimitern: ah, ok
[13:04] <voidspace> dimitern: it's only an intermittent issue :-)
[13:04] <dimitern> voidspace, yeah - inside the machine agent there's no charm revision updater
[13:04] <dimitern> voidspace, however, hmm..
[13:04] <voidspace> dimitern: yes there is
[13:04]  * dimitern takes a deeper look
[13:04] <voidspace> dimitern: charmrevisionworker is started inside newStateStarterWorker
[13:05] <voidspace> dimitern: see my latest comment on the bug
[13:05] <dimitern> voidspace, ok, I stand corrected :)
[13:05] <dimitern> sorry
[13:06] <voidspace> np of course
[13:06] <voidspace> dimitern: will the worker retry
[13:06] <voidspace> dimitern: if so, does a transient error matter?
[13:06] <dimitern> voidspace, var interval = 24 * time.Hour
[13:07] <dimitern> it does retry daily
[13:07] <dimitern> which is perhaps too long it should retry more often on connection errors I think
[13:08] <voidspace> dimitern: any reason I shouldn't see error logging from when notify watchers are starting in the logs?
[13:09] <dimitern> voidspace, but that should happen in the apiserver charmrevisionupdater I think
[13:09] <voidspace> dimitern: right
[13:09] <voidspace> dimitern: what do you think is the *right* fix?
[13:09] <voidspace> dimitern: have charmrevisionworker retry on failure or have the agents block until proxy settings are in place
[13:09] <voidspace> or both :-)
[13:15] <voidspace> ok, so I *am* seeing "handleProxyValues" being called with "first" true
[13:18] <voidspace> and I was seeing the logging - I just had to grep through all-machines.log
[13:19] <voidspace> it wasn't recent enough for "juju debug-log"
[13:20] <dimitern> voidspace, well :) first off, I think the charmrevisionupdater should retry a few times on download failures with exponential backoff
[13:21] <voidspace> dimitern: so should I do that *now* or should I file a bug for it and go to the lxc-broker
[13:21] <dimitern> voidspace, this is definitely more resilient than the current "fire-and-forget" approach
[13:22] <dimitern> voidspace, don't worry about it, I'm on it already - will file a bug
[13:22] <voidspace> dimitern: I'm spending a little bit more time on the "first" logic
[13:23] <voidspace> dimitern: it *really* looks to me like the old code should have worked... unless, hang on...
[13:23] <voidspace> nope
[13:23] <voidspace> mysterious
[13:29] <wwitzel3> mgz: so on my maas juju doesn't write the syslog or cert files at all .. so I am not able to reproduce the error
[13:29] <mgz> wwitzel3: that... does not sound good?
[13:29] <wwitzel3> mgz: re https://bugs.launchpad.net/juju-core/+bug/1417875
[13:29] <mup> Bug #1417875: ERROR juju.worker runner.go:219 exited "rsyslog": x509: certificate signed by unknown authority <canonical-bootstack> <regression> <juju-core:New for wwitzel3> <https://launchpad.net/bugs/1417875>
[13:30] <mgz> not writing the syslog would be a ug, no?
[13:30] <mgz> *bug
[13:31] <wwitzel3> mgz: well it might just be my setup *shrug* I haven't actually confirmed what is happening yet
[13:31] <wwitzel3> mgz: will open a bug for it once I confirm
[13:31] <mgz> it must write something, you mean the all-machines.log on the state server doesn't exist at all? or what?
[13:32] <dimitern> voidspace, it seems to me this is indeed a racy situation between the CRU and PU workers
[13:32] <wwitzel3> mgz: the state server is fine, the units don't have juju-*.conf files for syslog or any of the certs
[13:32] <voidspace> dimitern: indeed, that's my conclusion
[13:32] <dimitern> voidspace, one question though - when you see the errors from CRU failing to download, do you also see right after the CRU worker exiting and getting restarted?
[13:33] <dimitern> voidspace, I've just answered my own question looking at the code :) sorry - ruw.updateVersions just *logs* the error, rather than returning it out of the loop
[13:34] <voidspace> right
[13:34] <dimitern> voidspace, which is *wrong*
[13:34] <wwitzel3> mgz: but a ca-cert.pem is getting written, which is odd since ensureCertificates does that part *last* ..anyway, investigating further :)
[13:34] <dimitern> voidspace, because, if it *did* return an error, that would've caused its runner to kill it and restart it 3secs later, and retrying so - even with a race it will be resolved in seconds, not 24h
[13:35] <voidspace> dimitern: how many times will it retry?
[13:35] <voidspace> dimitern: maybe that's the right fix then
[13:35] <dimitern> voidspace, just consulted william and that seems like the easiest fix to do for 1.21 and 1.22; for 1.23 however, we should do better
[13:35] <voidspace> dimitern: as far as I can tell the existing "first" logic is correct
[13:36] <voidspace> dimitern: William will be interested in this
[13:36] <voidspace> dimitern: and putting the SetEnvironmentVariables call *back* where it was, but leaving in place starting the proxyupdaterworker first
[13:36] <voidspace> dimitern: I can successfully deploy charms
[13:36] <dimitern> voidspace, it will keep restarting it
[13:36] <voidspace> dimitern: so I think it was the race condition that was the problem all along...
[13:36] <voidspace> dimitern: and the first logic is fine
[13:37] <dimitern> voidspace, wow :)
[13:37] <voidspace> dimitern: I've just successfully deployed a charm this way and my logging confirms that "first" is set correctly and the environment variables are set
[13:37] <dimitern> voidspace, great job on finding it then!
[13:38] <dimitern> voidspace, however to fix the race we'll need a lot more code and testing than to make it virtually irrelevant
[13:38] <voidspace> dimitern: right
[13:39] <voidspace> dimitern: so I'll change the charmrevisionworker to return the error
[13:39] <dimitern> voidspace, for 1.23 we should fix it properly, as william also suggested we should take advantage of nesting runners to define the order things start
[13:39] <voidspace> dimitern: is there an example of this I can look at?
[13:40] <dimitern> voidspace, it should return an error yes, in addition to leaving your original fix in place for 1.21 and 1.22
[13:40] <voidspace> dimitern: we didn't backport to 1.21
[13:40] <voidspace> dimitern: want me to look at that?
[13:41] <dimitern> voidspace, it's as easy as return errors.Annotate(err-from-updateVersions, "failed updating charm versions")
[13:42] <voidspace> dimitern: but the main fix will need backporting too
[13:42] <voidspace> dimitern: I meant an example of nesting workers to define the order
[13:42] <dimitern> voidspace, that has to happen in the CRU loop each time when we call updateVersions
[13:42] <voidspace> dimitern: returning an error I can probably work out for myself...
[13:42] <dimitern> voidspace, ah, well a worker.Runner is itself a worker.Worker
[13:42] <dimitern> voidspace, sorry :)
[13:43] <voidspace> dimitern: so let's do this in order of things to do
[13:43] <dimitern> voidspace, I got confused trying to follow 3 separate topics we're having
[13:43] <voidspace> dimitern: backport the existing fix to 1.21
[13:43] <voidspace> dimitern: change charm revision worker to return the error in trunk, 1.22 and 1.21
[13:43] <voidspace> dimitern: then look at a proper fix for trunk
[13:43] <voidspace> dimitern: sound good?
[13:44] <voidspace> perrito666: FYI, the existing "first" logic works fine
[13:44] <dimitern> voidspace, yeah, if you don't mind fixing CRU for 1.21 first, *without* four PU fix
[13:44] <dimitern> s/four/your/
[13:44] <voidspace> dimitern: so you're saying no to "backport the existing fix to 1.21" then
[13:44] <voidspace> dimitern: that's fine
[13:45] <dimitern> voidspace, and then as I backported your PU fix in 1.22, just forward port the CRU fix for 1.22 and 1.23
[13:45] <voidspace> perrito666: the actual bug was a race condition and fixed (mostly) by the other part of the PR
[13:45] <voidspace> perrito666: but unconditionally setting environment variables is harmless and can be left in place
[13:45] <voidspace> perrito666: as there's still plenty of other things to fix
[13:45] <voidspace> dimitern: ok
[13:45] <voidspace> dimitern: will do
[13:45] <voidspace> dimitern: later... lunch first
[13:45] <perrito666> voidspace: agreed, as long as we know what was the other problem
[13:46] <dimitern> voidspace, the reason not to backport the CRU fix for 1.21 is because I believe it's a lot messier to do and we really need to get 1.21.2 out the door tomorrow
[13:46] <voidspace> dimitern: you mean "not to backport the PU fix for 1.21"
[13:46] <dimitern> voidspace, while the other fixes are less critical and also trivial to transplant
[13:46] <voidspace> dimitern: but ok
[13:46] <voidspace> yep
[13:46] <dimitern> voidspace, ofc :) sorry
[13:47] <dimitern> voidspace, and having the CRU fix in 1.21 will make the issue a minor annoyance rather than a blocker
[13:47] <voidspace> dimitern: the CRU wasn't the main problem I don't think
[13:47] <voidspace> dimitern: only a side issue
[13:48] <voidspace> dimitern: I still don't think you'll be able to deploy charms with 1.21
[13:48] <voidspace> dimitern: I'd be happy to be wrong about that...
[13:48] <voidspace> dimitern: and I can try it
[13:48] <dimitern> voidspace, hmm..
[13:48] <voidspace> dimitern: it's downloading the charm that fails
[13:49] <dimitern> voidspace, that's likely to be correct, but I'd rather tackle it myself so you can return to the CA work
[13:49] <dimitern> :)
[13:50] <voidspace> dimitern: heh, ok
[13:50] <voidspace> dimitern: really going on lunch
[13:50] <voidspace> o/
[13:50] <dimitern> voidspace, enjoy! :)
[13:52] <perrito666> dimitern: sorry to bother, what is stopping https://bugs.launchpad.net/juju-core/+bug/1416425 from being in 1.21?
[13:52] <mup> Bug #1416425: src/bitbucket.org/kardianos/osext/LICENSE is wrong <licensing> <packaging> <juju-core:Fix Committed by dimitern> <juju-core 1.21:Fix Committed by dimitern> <juju-core 1.22:Fix Released by dimitern> <https://launchpad.net/bugs/1416425>
[13:52] <dimitern> perrito666, let me have a look
[13:53] <perrito666> dimitern:  says you committed the fix
[13:53] <dimitern> perrito666, you mean why it's not Fix Released for 1.21?
[13:53] <perrito666> yup
[13:54] <dimitern> perrito666, because 1.21.2 is not released yet (due tomorrow)
[13:54] <dimitern> perrito666, and since it's a release issue, unlike a blocker or other..
[13:54] <perrito666> I see, yup
[14:31] <dimitern> sinzui, do you know when 1.22-beta3 is due for release?
[14:33] <sinzui> dimitern, I was hoping for a few more fixes given the large list of issues https://launchpad.net/juju-core/+milestone/1.22-beta3
[14:33] <sinzui> dimitern, we can release tomorrow if we have stakeholders that need to test a fix
[14:34] <dimitern> sinzui, sgtm to sync 1.21.2 with 1.22-beta3 release
[14:34] <dimitern> sinzui, and how about 1.22 proper? what's the plan?
[14:35] <sinzui> dimitern, when all bugs are fixed and stakeholders find no other bugs, we propose 1.22.0. That could be next week
[14:39] <perrito666> would anyone be so nice? http://reviews.vapour.ws/r/873/ it's trivial and urgent
[14:39] <dimitern> sinzui, that sounds great, thank you
[14:39] <sinzui> np
[14:57] <jw4> sinzui: helpful? https://github.com/juju/testing/pull/49  to address bug 1416430 from the 1.22-beta3 milestone
[14:57] <mup> Bug #1416430: Some files refer to an include license file that is not included <licensing> <packaging> <juju-core:Triaged> <juju-core 1.21:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1416430>
[14:58] <sinzui> jw4, yes please
[14:58] <dimitern> TheMue, can you approve this backport please? http://reviews.vapour.ws/r/877/
[14:59] <TheMue> dimitern: yes, will do.
[14:59] <jw4> sinzui: kk - does that version have to actually be referenced in the dependencies.tsv or is it sufficient for it to be merged into master?
[14:59] <jw4> (of the testing repo)
[15:01] <perrito666> ericsnow: are you a full reviewer?
[15:01] <ericsnow> perrito666: not yet
[15:01]  * perrito666 needs a bureaucratic LGTM
[15:01] <TheMue> dimitern: +1
[15:01] <ericsnow> perrito666: OCR though
[15:01] <perrito666> ericsnow: http://reviews.vapour.ws/r/873/
[15:01] <jw4> perrito666: maybe TheMue can stamp it - it looks good to me
[15:02] <TheMue> jw4: perrito666: already looking
[15:02] <jw4> TheMue: cool
[15:02] <perrito666> yeah we all know frank is getting free beer for that in april
[15:03] <TheMue> perrito666: go for it, you've got a ship-it. and I'll remind you for the beer :D
[15:04] <sinzui> jw4, yes, dependencies.tsv must be updated in juju/juju if you need a fixed package.
[15:04] <jw4> sinzui: kk
[15:05] <jw4> sinzui: fixes for the 1.22-beta3 milestone should be based on 1.22 branch I presume...
[15:05] <sinzui> jw4, we need to fix 1.23, then fix 1.22 and 1.21. Each is a separate PR :(
[15:06] <jw4> sinzui: lol - okay - so 1.23 --> master right? and I should do that first and then backport?
[15:06] <TheMue> jw4: hangout?
[15:06] <sinzui> jw4, yep
[15:06] <jw4> TheMue: sorry - thought we were just doing irc today
[15:10] <perrito666> and since we are on it https://github.com/juju/syslog/pull/1
[15:14] <jw4> yeah, OCR PTAL https://github.com/juju/testing/pull/49 -- just a licensing update pr
[15:22] <dimitern> TheMue, thanks!
[15:24] <TheMue> dimitern: btw, I thought we already had feature freeze for 1.22, didn't we? asking because of requirements for actions.
[15:24] <TheMue> dimitern: and did you see my question about the travel back I mailed to you and Alexis?
[15:25] <dimitern> TheMue, yeah, but that's a potential critical issue fix
[15:25] <TheMue> dimitern: finally btw the tests are working again and I'm now working on exhausting the available addresses :D
[15:25] <dimitern> TheMue, yes, I'll ping her today for a response
[15:25] <TheMue> dimitern: great, thanks
[15:25] <dimitern> TheMue, sweet!
[15:25] <perrito666> who owns juju/syslog ?
[15:26] <dimitern> perrito666, cloudbase I think
[15:26] <perrito666> dimitern: I doubt it, it's in our repo :p
[15:27] <dimitern> perrito666, it's forked from  gabriel-samfira/syslog
[15:27] <dimitern> which is a bit weird
[15:28] <dimitern> I haven't see this for other juju/* projects
[15:28] <dimitern> seen* even
[15:28] <perrito666> dimitern: it was made by cloudbase and added to our repo when we began merging windows
[15:28] <dimitern> perrito666, right
[15:29] <perrito666> dimitern: but whoever added this repo did not make us (the team) owners
[15:30] <dimitern> perrito666, hmm
[15:30] <dimitern> curious copyright line: "Copyright (c) 2014, Gabriel. All rights reserved." but it appears to be BSD/MIT
[15:32] <perrito666> dimitern: what I am trying to commit fixes that
[15:32] <perrito666> actually :p
[15:32] <perrito666> that was github filling in the license when he created the repo
[15:32] <dimitern> perrito666, ah :) I suspected
[15:32] <perrito666> it now has the proper licence
[15:32] <perrito666> dimitern: if you want to merge my pr, that's good enough for me
[15:33] <dimitern> seriously? *lol* GH *is* too smart for its own good
[15:33] <perrito666> you should be owner since you are part of the powers that be
[15:39] <perrito666> dimitern: well it has a sort of a wizard when you create a repo
[15:39] <dimitern> :)
[15:42] <perrito666> mm, let me guess, the bot is still not on utils
[15:42] <perrito666> I feel like an idiot
[15:45] <TheMue> dimitern: do you know anybody in cape town able to help Aram with a kernel problem? asking on the standard canonical channels brought no response, only a hint to cape town
[15:51] <dimitern> TheMue, ok, I'll ask
[15:52] <TheMue> dimitern: great, thanks, will help him. his machine doesn't boot anymore after installing a new kernel
[15:52] <TheMue> dimitern: as a hint => http://sprunge.us/OfFV
[15:56] <natefinch> katco: you around?
[15:56] <katco> natefinch: yup
[15:56] <natefinch> katco: evidently there's a problem with goamz in the china north region, where it needs to use V4 for S3, but it's not: https://bugs.launchpad.net/juju-core/+bug/1415693
[15:57] <mup> Bug #1415693: Unable to bootstrap on cn-north-1 <bootstrap> <ec2-provider> <online-services> <juju-core:Triaged> <https://launchpad.net/bugs/1415693>
[15:57]  * katco looking
[15:58] <natefinch> katco: note the linked github issue
[15:59] <katco> natefinch: ah yes i remember now; the work was supposed to target aws specifically
[15:59] <katco> natefinch: i remember being a little confused why we were using different signing everywhere
[15:59] <katco> natefinch: so we made no effort to support s3 at the time i wrote that, do we now need to do that?
[16:00] <natefinch> katco: sounds like we do. Some of our people are testing in China and need that to work there... they're supposed to go live by end of month, so sooner is better than later
[16:00] <katco> natefinch: i will have to get clarification from ian on priorities, but if i remember, it shouldn't be too hard
[16:00] <katco> natefinch: kind of a "redirect to master signing package" type thing
[16:01] <natefinch> katco: I'll send an email to canonical-juju and make sure to ping Ian about it.  This sounds like it should probably be pretty high priority
[16:01] <perrito666> mm, we should start branching things like utils when we branch juju for revs
[16:02] <perrito666> backporting licence fixes is going to be an interesting PITA
[16:04] <katco> natefinch: ok cool ty for the heads-up
[16:04] <ericsnow> natefinch, perrito666, wwitzel3: standup?
[16:11] <perrito666> natefinch: standup?
[16:19] <perrito666> jw4: ?
[16:19] <perrito666> https://github.com/juju/testing/pull/49 <-- why this patch changes the LICENCE file to AGPL?
[16:31] <alexisb> natefinch, ericsnow we are having some iffy networking issues
[16:31] <alexisb> I am hoping I can get into the hangout at the top of the hour but may not be able to
[16:32] <alexisb> if I cant get in we may have to reschedule just fyi
[16:32] <natefinch> alexisb: ok
[16:34] <ericsnow> alexisb: k
[16:38] <perrito666> jw4: ?
[16:40] <sinzui> ericsnow, we will need to redeploy reviewboard. The current one will be kept alive, but you cannot change it
[16:40] <sinzui> ericsnow, We will need the config you used to deploy it, or you can redeploy it when juju-ci4 is ready
[16:41] <ericsnow> sinzui: I should have some time for that today so just let me know when it's ready
[17:15] <perrito666> mm, bad day for hardware
[17:15] <perrito666> jw4: ping?
[17:18] <TheMue> jamestunnicliffe: seen your PR. the golang convention for function/method comments is, for func Foo() { ... }, to start the comment with the name: // Foo does this and that.
[17:19] <jamestunnicliffe> TheMue: Ah, yes
[17:28] <TheMue> jamestunnicliffe: you've got a review
[17:30] <alexisb> thank you ericsnow and natefinch !!
[17:30] <ericsnow> alexisb: no, thank you :)
[17:30] <wwitzel3> yeah, how'd that go?
[17:33] <natefinch> went well I think.  Basically just clarifying expectations etc
[17:35] <wwitzel3> nice
[17:42] <TheMue> jamestunnicliffe: did you test your latest change? the renaming to the expected values isn't used, you use the old name in the assert
[17:42] <jamestunnicliffe> TheMue: Sorry about that, enthusiastic push
[17:42] <jamestunnicliffe> TheMue: Just running tests now
[17:43] <TheMue> jamestunnicliffe: *lol* cool, np, it's a good feeling to get the first fix in
[17:43] <jamestunnicliffe> TheMue: For some reason I can't squash the changes into one commit. Works locally, then github complains.
[17:44] <jamestunnicliffe> TheMue: I guess you will cope!
[17:45] <TheMue> jamestunnicliffe: how does it complain?
[17:45] <jamestunnicliffe> TheMue: Updates were rejected because the tip of your current branch is behind its remote counterpart...
[17:46] <jamestunnicliffe> TheMue: The usual fast forward error. Though after a pull, merge, squash the problem remains.
[17:46] <jamestunnicliffe> TheMue: I can pull, merge, push. That is fine. Very odd.
[17:47] <TheMue> jamespage: because of MY current branch?
[17:47] <jamestunnicliffe> TheMue: no.
[17:47] <TheMue> oh, addressed wrong james
[17:47] <jamestunnicliffe> TheMue: It really isn't a problem. Just a bit untidy.
[17:48] <TheMue> jamestunnicliffe: one of the weird situations when using git. sometimes I don't really understand it
[17:49] <jamestunnicliffe> TheMue: About that comment about spacing, I used the same as the section above, so it may be strange, but it passes go fmt and is consistent :-)
[17:51] <TheMue> jamestunnicliffe: go fmt won't complain, it's inside of the string. "http//http proxy" looks interesting. would have to look how it is used.
[17:51] <TheMue> http://...
[17:52] <jamestunnicliffe> TheMue: Oh, that. The previous test writers used "http proxy" and I stuck with it, though the http:// is added.
[17:54] <TheMue> jamestunnicliffe:  yes, that's how I understood it too. the format containing a space isn't part of this change.
[17:57] <voidspace> anyone know about the worker/runner infrastructure and care to answer a question?
[17:58] <TheMue> it works and runs *duck* *scnr*
[17:59] <voidspace> heh
[17:59] <voidspace> TheMue: dimitern said that I need to change a worker to "return an error out of the loop" instead of just logging it
[17:59] <voidspace> to do this do I have the method that gets the error call  ruw.tomb.Kill(err)
[18:00] <voidspace> where ruw is the worker in question
[18:00] <voidspace> ah no, the loop needs to actually return
[18:00] <voidspace> so updateVersions returns the error
[18:00] <voidspace> and the call in the loop returns that
[18:00] <voidspace> easy-peasy
[18:00] <voidspace> :-)
[18:00] <TheMue> voidspace: this would kill it, yes
[18:01] <voidspace> TheMue: once again, in the process of explaining the question I work out the answer :-)
[18:01] <voidspace> TheMue: thanks for being my rubber ducky...
[18:01] <TheMue> voidspace: has been a pleasure :)
[18:02] <voidspace> :-)
[18:04] <jamestunnicliffe> TheMue: Think I am ready for a re-review if you can. Then I have a daughter to cuddle :-)
[18:06] <ericsnow> bogdanteleaga: I've left a lot of review comments, but many of them are there just to mark all the spots where previous comments apply
[18:07] <ericsnow> bogdanteleaga: each one isn't some separate kind of problem that needs to be addressed :)
[18:07] <TheMue> jamestunnicliffe: I'll take a look
[18:09] <bogdanteleaga> ericsnow: I'm trying to find the syscall thing, but no luck :(
[18:09] <TheMue> jamestunnicliffe: looks good, ship-it
[18:10] <bogdanteleaga> ericsnow: oh, now I see what you mean
[18:14] <jamestunnicliffe> TheMue, voidspace: Have a good weekend! I'm off for the evening. See you Monday/Tuesday.
[18:14] <TheMue> jamestunnicliffe: enjoy your evening and weekend too, see you then
[18:20] <jw4> perrito666: back
[18:23] <jw4> perrito666: there was no clear license
[18:23] <perrito666> jw4: github says you removed lgpl and added agpl
[18:23] <jw4> perrito666: the existing file 'LICENSE' was lgpl, but all the source files referenced 'LICENCE'
[18:24] <jw4> perrito666: the two files from the bug referenced LICENSE
[18:24] <jw4> but, the LICENSE file wasn't the golang license
[18:25] <jw4> perrito666: since the testing files were moved out of juju-core, I figured the right license was the one from juju-core (especially since the name in all the source files was LICENCE, like the juju-core one)
[18:25] <perrito666> jw4: I can positively say you just confused me
[18:25] <jw4> the file I removed was not being referenced by any of the source files
[18:26] <jw4> perrito666: except the two referenced in the bug
[18:26] <jw4> perrito666: and the file I removed was wrong per the bug for those two files
[18:26] <jw4> perrito666: all of the source files referenced a non-existent 'LICENCE' (spelled with a C) file
[18:27] <jw4> perrito666: which used to point to the LICENCE file in juju-core before testing was split to its own repo
[18:27] <jw4> perrito666: any better ? :)
[18:28] <perrito666> jw4: I see
[18:28] <perrito666> so the part of go licence is good
[18:28] <perrito666> but you removed the misspelled license and added licence
[18:28] <perrito666> and those have different licence texts inside
[18:28] <perrito666> the thing is
[18:28] <perrito666> if you look at the files
[18:29] <perrito666> https://github.com/juju/testing/blob/master/cleanup.go
[18:29] <perrito666> they expect that licence file to be lgplv3
[18:29] <jw4> perrito666: I see
[18:29] <jw4> perrito666: I assumed that was part of the big re-licensing change a few months ago
[18:30] <jw4> perrito666: my understanding was that testing was split out before core was cleaned up
[18:30] <perrito666> natefinch: care to weigh in on that?
[18:31] <jw4> perrito666, natefinch it looks like LICENCE in core has always been affero
[18:31] <jw4> perrito666: so maybe the fix is to update all the testing files to point to the LICENCE file and revert the changes in that file
[18:36] <jw4> perrito666: er, not updating the testing files, just moving LICENSE to LICENCE
[18:36] <perrito666> yep
[18:38] <jw4> perrito666: look better now? https://github.com/juju/testing/pull/49
[18:39] <perrito666> jw4: totally
[18:39] <jw4> perrito666: w00t
[18:39] <perrito666> could you be a sport and make sure that all files have either go or lgpl licences?
[18:39] <jw4> perrito666: heh - yes I'll be a sport
[18:40] <perrito666> sorry, there is a British fellow living inside me :p
[18:40] <jw4> perrito666: yeah, me too
[18:42] <mgz> sillies :)
[18:42] <jw4> perrito666: yep all licensed
[18:42] <perrito666> sweeet
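perrito666's "make sure that all files have either go or lgpl licences" request can be sketched in Go. A hedged sketch only: the file contents below are synthetic, juju's real headers may differ, and a real check would walk the repo with filepath.Walk rather than use an in-memory map.

```go
package main

import (
	"fmt"
	"strings"
)

// missingLicence reports which source files never mention a licence file
// (jw4's repo uses both spellings, LICENCE and LICENSE, hence the check
// for either).
func missingLicence(files map[string]string) []string {
	var missing []string
	for name, src := range files {
		if !strings.Contains(src, "LICENCE") && !strings.Contains(src, "LICENSE") {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	// Synthetic examples, not files from juju/testing.
	files := map[string]string{
		"ok.go":      "// Licensed under the LGPLv3, see LICENCE file for details.\npackage a\n",
		"missing.go": "package b\n",
	}
	fmt.Println(missingLicence(files)) // prints [missing.go]
}
```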
[18:42] <perrito666> mgz: oh, just in time to lgtm jw4 proposal
[18:42] <jw4> mgz: jolly good show
[18:45] <jw4> here now mgz; popping in with a bit of wit and then disappearing is not cricket!
[18:46] <jw4> mgz: if you come back you can make fun of Americans? Or South Africans? Or Argentinians?
[18:48] <mgz> you really can't, you can only mock in the reverse of colonial history, otherwise it's just picking on people :0
[18:49] <jw4> mgz: hehe
[19:06] <natefinch> perrito666, jw4: license for juju core itself is agpl.. the license for "libraries" outside of core should be lgpl... where exactly we draw the line between what is and is not core is kinda fuzzy
[19:06] <jw4> natefinch: cool
[19:06] <jw4> natefinch: that's where this PR ended up so that's good
[19:06] <natefinch> (also note, lgpl is actually lgpl with Canonical's special static linking exception, which is required since Go always static links)
[19:07] <jw4> natefinch: interesting.
[19:07] <jw4> natefinch: do you care to review and possibly stamp https://github.com/juju/testing/pull/49
[19:08] <jw4> natefinch: it's work for one of the bugs sinzui is hoping to have addressed for the 1.22-beta3 release
[19:11] <jw4> perrito666: woot :)
[19:11] <perrito666> jw4: I say LGTM, given the hurry that is enough
[19:11] <perrito666> :p
[19:11] <natefinch> jw4: right.  I do wish we'd just put these files in a separate repo or something.... but I think this is good enough for now.
[19:11] <jw4> natefinch: cool, tx
[19:16] <jw4> natefinch: (perrito666) http://reviews.vapour.ws/r/880/ <-- dependencies update with change
[19:17] <perrito666> jw4: I am superseding you with a patch including several of those licencing fixes
[19:17] <jw4> perrito666: score
[19:17] <jw4> I'll close mine
[19:17] <perrito666> just running the whole test suite before
[19:17] <perrito666> to make sure the version jump didn't break anything
[19:18] <jw4> perrito666: I ran the tests on my change already
[19:18] <perrito666> jw4: well I am packing other two :D
[19:18] <jw4> perrito666: overachiever
[19:19] <perrito666> jw4: I have a dedicated machine for that nevertheless
[19:19] <perrito666> that makes things faster
[19:19] <jw4> perrito666: my change has to be backported to 1.22 and 1.21 - will you bundle that too?
[19:19] <perrito666> jw4: yup, all of those have to
[19:20] <perrito666> sinzui: natefinch question
[19:20] <jw4> perrito666: I have a m3.2xlarge instance on ec2 that kicks the tests out in about 8 minutes
[19:20] <perrito666> when we create a release branch
[19:20] <perrito666> why dont we create a branch for the dep libraries that are ours?
[19:21] <perrito666> jw4: I have a corei5 with 8 gigs of ram downstairs :p
[19:21] <jw4> yeah - I was nervous about that too
[19:21] <perrito666> as a laptop it sucks because it is bulky, but as a compile machine it rocks
[19:21] <jw4> perrito666: sweet
[19:24]  * TheMue likes his i7 / 16 Gig / SSD combination ;)
[19:25] <jw4> TheMue: show-off
[19:25] <jw4> ;)
[19:26] <perrito666> TheMue: try to get one of those in argentina :p
[19:26] <voidspace> right EOW
[19:26] <voidspace> see you all on Monday
[19:27] <TheMue> perrito666: why not? not available or too expensive?
[19:27] <TheMue> voidspace: me too, only cannot close chat *lol*
[19:27] <voidspace> heh
[19:27] <TheMue> voidspace: but have already wine beside me
[19:27] <voidspace> switch off the computer...
[19:27] <voidspace> nice
[19:27] <voidspace> TheMue: enjoy, see you on Tuesday
[19:28] <perrito666> TheMue: both, but the first specially
[19:28] <TheMue> voidspace: yess, see you then. enjoy your weekend too
[19:28] <sinzui> perrito666, a branch for each dep? or a branch that included a dependencies.tsv of what we officially use?
[19:28] <TheMue> perrito666: interesting, didn't expect this
[19:28] <perrito666> sinzui: a branch for each dep, like utils should have a 1.21 branch/tag
[19:29] <perrito666> when backporting fixes to our own deps that poses an interesting problem
[19:29] <sinzui> perrito666, That might have bad consequences from Ubuntu's perspective, but worth talking about
[19:30] <jw4> sinzui: you mean maintenance costs?
[19:30] <perrito666> sinzui: well if I have to, say backport a fix for something that is in utils, because it is broken in 1.21 and 1.21 is using an old rev of utils, I will need to do a branch anyway
[19:31] <ericsnow> jw4: what does "darwin" use for an init system?  something cooked up by Apple?
[19:31] <sinzui> perrito666, I think you suggest that dependencies.tsv points to a tag instead of a hash. We update the tag when we want to change the rev that godeps will select
[19:32] <perrito666> sinzui: yes, that works
[19:32] <jw4> ericsnow: good question - I don't know the official answer, but on my machine it seems to be a custom launchd thing
[19:32] <ericsnow> jw4: yeah I figured as much :)
[19:32] <jw4> ericsnow: :)
[19:32] <sinzui> perrito666, but since we would not be tracking the exact revision used to make the release tarball, it would not be possible to recreate an older juju rev in the same series
[19:34] <sinzui> perrito666, 1.21 build 5 works, but the tag is changed. I do a rebuild to test (build 6) and I get a different package with possibly different results
[19:35] <sinzui> perrito666, I will see the dep changed if I diff the tarball, but I wouldn't know why juju chose the change.
[19:35] <sinzui> perrito666, so while I like your idea for its convenience, it undermines our need for repeatability.
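The hash-vs-tag trade-off sinzui is describing can be made concrete with godeps-style dependencies.tsv lines. A hedged sketch: the repo name, hash, and exact column layout here are illustrative; godeps' own documentation defines the real format.

```go
package main

import (
	"fmt"
	"strings"
)

// pinnedRev extracts the revision column from a tab-separated
// dependencies.tsv-style line (here assumed to be column 3).
func pinnedRev(line string) string {
	return strings.Split(line, "\t")[2]
}

func main() {
	// An exact commit hash: rebuilding later always selects the same
	// tree, so a release tarball is reproducible.
	fmt.Println(pinnedRev("github.com/juju/utils\tgit\t9dfd2d1dcb27aafe\t2014-12-11"))
	// A tag like "1.21": convenient to re-point when backporting a dep
	// fix, but then an old build can no longer be recreated exactly —
	// sinzui's repeatability objection.
	fmt.Println(pinnedRev("github.com/juju/utils\tgit\t1.21\t2015-02-06"))
}
```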
[19:35] <perrito666> sinzui: well right now I have to push a change to 1.21 that fixes deps, let's see how easy this is to do
[19:36] <jw4> perrito666: basically as long as there are no breaking changes in the updated deps we'll get lucky - otherwise it's a real hassle
[19:36] <perrito666> jw4: yes, that is the thing, I don't like to rely on luck
[19:37] <jw4> ditto
[19:37] <jw4> (but I'll take it if it comes!)
[19:37] <sinzui> perrito666, I would argue that core is branching too early. There are too many branches. I think unstable and stable are enough.
[19:38] <perrito666> sinzui: the thing is, we have, let's say, 1.21, which depends on utils 1, syslog 3 and whatever 4
[19:38] <perrito666> and I need to fix something on utils that breaks 1.21
[19:39] <perrito666> but utils is now in version 5
[19:39] <perrito666> so, I really don't want to include 2, 3 and 4 into 1.21
[19:39] <sinzui> perrito666, ah
[19:39] <sinzui> perrito666, that is a nasty combination. We need a lot of branches in that scenario
[19:41] <perrito666> sinzui: that is our current combination
[19:41] <perrito666> sinzui: right now I need to backport a licence change in serveral deps to 1.21
[19:41] <perrito666> that should be fun for non functional changes
[19:49] <perrito666> anyone http://reviews.vapour.ws/r/881/ >?
[19:49] <perrito666> I will take lgtm even from mup at this stage
[19:49]  * jw4 impersonates mup: "LGTM"
[19:58] <perrito666> ericsnow: tx
[20:05] <perrito666> mm this makes no sense, in github the current commit for 1.21 of utils is not marked as being part of master, but in my local version it is
[20:09] <perrito666> sinzui: natefinch so, the issue now: 1.21 is pointing to a rev of utils from Dec 11, and I need to apply my changes to 1.21 (the changes are in utils licencing), so is it ok if I create a 1.21 branch, apply them there, and then point 1.21 to that?
[20:09] <perrito666> I'm open to alternatives
[20:10] <sinzui> perrito666, I think the 1.21 branch is the best option
[20:10] <sinzui> perrito666, sorry. It is work, but at least the branch clearly indicates why it exists
[20:11] <perrito666> no prob, I intended to do that
[20:18] <perrito666> sinzui: flaky test? http://juju-ci.vapour.ws:8080/job/github-merge-juju/2045/console
[20:40] <sinzui> perrito666, I think it is a flaky test.
[20:42] <natefinch> perrito666: I think sinzui answered your question?  Sorry, was snowblowing (yes, again... only 4" today, but enough to turn into ice after I drive over it).  Mother nature is making up for not giving us any snow until the end of January, evidently
[20:43] <sinzui> natefinch, Surely you will be getting snow into March.
[20:43] <perrito666> natefinch: isn't snowblowing something you do often enough to be worthy of automation?
[20:43] <sinzui> natefinch, Do your hands hurt now?
[20:44] <natefinch> perrito666: like a roomba for the snow in the driveway?  Brilliant
[20:44] <sinzui> Oh, I like that idea. I would like that for my sidewalks too
[20:44] <natefinch> sinzui: nah... sweaty and tired, but no pain.  Ironically, overheating is way more of a problem than getting cold when I snowblow, unless it's near 0 F
[20:45] <thumper> o/
[20:45] <jw4> thumper: \o
[20:46] <perrito666> natefinch: seems like something you could fix by making the driveway floor a trapdoor over a big hole with salt
[20:46]  * perrito666 just broke his bread machine so made the bread by hand... I am much better than the machine
[20:46] <natefinch> perrito666: kudos
[20:49] <perrito666> who will rubberstamp this? http://reviews.vapour.ws/r/882/
[20:49] <perrito666> natefinch: although I prefer the machine in terms of not having to do anything else than adding ingredients into a bowl
[20:50] <natefinch> perrito666: haha yeah... it's weird, ours used to work well, and then suddenly the bread started not to rise enough.... far as I can tell, we didn't do anything different.  Made me sad, I love fresh baked bread.... though now that I've used my stand mixer to make bread a few times, I don't think it's actually that much more work.
[20:51] <perrito666> I made mine by hand
[20:51] <perrito666> so it is some degree of work
[20:51] <perrito666> I just hate having to remember that my bread is rising and that it needs to rise a second time
[20:52] <perrito666> my machine died in an odd manner, apparently the cogs (plastic) got too dry and they disintegrated
[20:56] <perrito666> c'mooon, who wants this 3-line change? http://reviews.vapour.ws/r/882/ it's a great opportunity to review without much effort
[21:02] <natefinch> ooh ooh, credit without effort, I'm all over it
[21:02] <perrito666> good boy, here have a slice of bread
[21:03] <natefinch> updating the libraries didn't require changing core at all?
[21:05] <perrito666> nope, I actually only added licence changes
[21:05] <perrito666> I sent an email about that
[21:05] <natefinch> email, who reads that crap?
[21:05] <natefinch> ship it!
[21:08] <perrito666> ericsnow: dependencies.tsv for 1.21 did not have the stamps apparently
[21:09] <ericsnow> perrito666: ah, I totally missed that it was for 1.21 :)
[21:15]  * perrito666 looks at juju bot while tapping his fingers on the desk
[21:25] <perrito666> this is bad, https://www.techdirt.com/articles/20150205/11373529920/worlds-email-encryption-software-relies-one-guy-who-is-going-broke.shtml
[21:25] <perrito666> we should donate to the poor guy
[21:26] <perrito666> sinzui: my changes have been committed to 1.21, it has no more licencing issues
[21:27] <perrito666> I need to step out for a moment, upon returning, 1.22
[21:27] <natefinch> perrito666: yeah, it's a damn shame that there's so many companies and governments that rely on this kind of software and can't be bothered to spend .0000001% of their budget to give money to the people that maintain it
[21:28] <perrito666> well this particular software we use A LOT
[21:35] <natefinch> perrito666: @stripe 6 minutes ago: Stripe and Facebook are going to sponsor @gnupg development with $50k/year each.
[21:49] <hatch> natefinch: just saw that - that's great news, I was really surprised that it wasn't already sponsored by a group of companies
[21:50] <natefinch> hatch: totally
[22:50] <waigani> thumper: http://reviews.vapour.ws/r/883