[00:32] wallyworld: this doesn't actually need a review right? http://reviews.vapour.ws/r/1653/diff/ [00:48] Bug #1454468 was opened: Hyper-V Server 2012 R2 node deployed by maas but juju status remains pending [01:15] menn0: hey, sorry was out at accountant. anastasia has looked at it, i was just waiting for 1.24 to be unblocked [01:15] thanks for asking [01:17] ericsnow: it was master [01:17] thumper: i have a horrible feeling I know what is going on with arm64 and juju [01:17] long story short, the text of most of the net error messages has changed [01:18] i'm sure we're doing string sniffing [01:18] that's why it's not figuring out when ports are not available and stuff [01:22] wallyworld: ok cool [01:23] wallyworld: what's blocking 1.24? I don't see anything [01:23] menn0: was blocked yesterday, and i landed another one first this morning before i had to pop out. will land this one now [01:25] wallyworld: cool cool [01:26] thumper: yup, i was right [01:26] juju local provider tests fail on amd64 [01:26] under go 1.5 [01:26] (which is what we have to use for arm64) [01:26] it's the changes to the net output text [01:26] :-< [01:26] grr [01:32] ...and what have we learned, kids? Don't do string matching on error messages. [01:36] Bug #1454359 changed: Unexpected api shutdown followed by panic [01:39] natefinch-afk: sad trombone === natefinch-afk is now known as natefinch [01:40] yeah.... it's unfortunate that gocheck makes it so easy to do string parsing on errors [01:41] davecheney: net output text? [01:41] my guess is somewhere in the local provider we're looking a err.Error() to figure out what kind of failure it was [01:42] ugh [01:43] yeah, this is going to suck for a while [01:43] straddling a bunch of different go versions [01:44] good for us... 
make us stop doing stupid things [01:44] http://paste.ubuntu.com/11105279/ [01:45] lets hear it for juju/errors [01:46] davecheney: nice [02:21] I need a smart diff tool that can diff two git patch files to prove a forward port really is just a copy of another PR. [02:21] it has to be smart, because for some reason, the order of files in the patches is not constant, so you can't easily just do a text diff [02:23] related... can someone review this forward port to 1.23 of a fix already merged to 1.22? http://reviews.vapour.ws/r/1659/ [02:23] menn0: if you're not working on that customer issue? ^ [02:27] natefinch: if it's a forward port of an already reviewed PR with minimal additional changes then it doesn't need another review [02:27] menn0: even better [02:27] natefinch: I did see it :) [02:28] :) [02:38] natefinch: menn0 thumper https://github.com/juju/juju/pull/2304 [02:39] davecheney: wow, that seems kind of broken [02:39] the contract for the net pacakge says the errors will be of type *net.OpError [02:40] natefinch: https://groups.google.com/d/topic/golang-dev/nHMOfuSBYLs/discussion [02:41] davecheney: is this why everything was timing out? [02:43] yup [02:43] well, sort of the opposite [02:43] the test was expecting that ECONNREFUSED was ok [02:43] err [02:43] it was expecting to get ECONREFUSED [02:43] it got another error and freaked [02:44] wish there had just been a IsConnectionRefused(error) bool [02:45] i considered breaking that big block of logic out into a function [02:45] if you review my PR [02:45] you can ask me to do that [02:45] davecheney: I meant in the stdlib :) [02:46] hmm [02:46] maybe [02:46] but where does it end [02:49] dang, I always forget how to do "fix it then ship it" :/ [02:49] there, close enough [02:49] davecheney: looking [02:50] davecheney: lol, just saw your post to that golang-dev thread on this change. 
[02:51] natefinch: pride comes before the fall [02:58] natefinch: menn0 thanks for your review [02:58] i've pulled the logic out into a function [02:59] let's see if it'll fit through the CI needle [03:21] Bug #1454481 was opened: juju log spams ERROR juju.worker.diskmanager lsblk.go:111 error checking if "sr0" is in use: open /dev/sr0: no medium found [03:22] * thumper snorts [03:22] davecheney: I also looked but you already had two shipits [03:22] so I didn't bother adding another [03:23] was more snorting at the 'fit through the CI needle" comment :-) [03:24] thumper: interestinghow ur virtual reactions are so different to ur real life ones.. [03:27] anastasiamac: or not so different... [03:27] anastasiamac: although to be honest, I have never stabbed anyone in the face in person [03:27] Bug #1454481 changed: juju log spams ERROR juju.worker.diskmanager lsblk.go:111 error checking if "sr0" is in use: open /dev/sr0: no medium found [03:28] can't imagine u "snort"ing :D [03:28] * natefinch can. [03:28] * natefinch ducks [03:28] u r brave, Nate :D [03:29] anastasiamac: nah, I'm just very far away ;) [03:30] :D [03:36] Bug #1454481 was opened: juju log spams ERROR juju.worker.diskmanager lsblk.go:111 error checking if "sr0" is in use: open /dev/sr0: no medium found [03:44] WT actual F [03:44] ha [03:45] hmm [03:45] I should set an alias for 'got est' to be 'go test' [03:45] I hate timing based intermittent failures in our test suite [03:45] just sayin [03:46] ActionSuite.TestActionsWatcherEmitsInitialChanges [03:46] * thumper looks at jw4 [03:48] -*- thumper watches jw4 sleep... [03:49] fark [03:49] * thumper hates git at times [03:50] * thumper branches then does a git dance [03:55] http://juju-ci.vapour.ws:8080/job/github-merge-juju/3219/console [03:55] what gives [03:55] is this a flake ? 
[03:55] thumper: well, i'm testing with go trunk [03:55] probably one of the more boring fixes for a critical issue http://reviews.vapour.ws/r/1669/ [03:56] i need to check it'll fit through the 1.2 sieve [03:56] git it [03:56] hulk smash [03:56] davecheney: looks like an ec2 failure [03:57] menn0: you were reviewer right? [03:57] thumper: I am [03:57] menn0: care to look at the above review? [03:58] thumper: looking [03:58] menn0: cheers [04:04] thumper: wow so that was a bit of screwup [04:04] yeah [04:04] naming things is hard, right? [04:05] thumper: that explains the huge number of SetAddresses calls [04:05] and the resulting txns [04:05] right [04:05] thumper: ship it... no comments [04:05] * thumper looks at something [04:06] this would add one transaction per machine per 15 minutes [04:06] so in an environment with 100 machines, 400 per hour [04:09] hmm... [04:09] I wonder if I got that __fixes__ thing right, or even if this fix needed it [04:10] thumper: AFAIK nothing is blocked [04:10] merge accepted [04:10] * natefinch has landed stuff on 1.22, 1.23, 1.24, and master today [04:11] menn0: I'll look to forward port the fix once it has landed [04:11] thumper: sweet [04:14] thumper: Yeah, that WatcherEmitsIntialChanges test sux0rs [04:14] thumper: mea culpa [04:15] hopefully I'll get a chance to fix it if no-one else does this cycle [04:15] menn0: would you please take a look at http://reviews.vapour.ws/r/1670/? trivial change, fixes a critical bug [04:15] I'll squeeze it in with Actions 2.0 [04:16] axw: looking [04:16] axw: what about existing systems - do we need an upgrade step? [04:18] axw: looks good. what manual testing have you done to confirm it works? [04:19] anastasiamac: it's only 9:20 pm... but I feel like I *should* be sleeping [04:20] jw4: well, if ur were working from an office, would u have been available?.. [04:20] anastasiamac: nope [04:20] anastasiamac: at least not normally :) [04:21] checking: go vet ... 
[04:21] provider/vsphere/ova_import_manager.go:269: missing verb at end of format string in Debugf call [04:21] what's the point of doing this check if it doesn't fail ? [04:21] jw4: :D ur dedication and quick response is appreciated [04:21] davecheney: you ranted about that last week didn't you? [04:21] anastasiamac: ;) [04:21] i rant about a lot of stuff [04:21] :) [04:21] it's hard for me to keep track [04:25] davecheney: i already ranted about that earlier today too [04:26] davecheney: i think go vet doesn't always exit with an error (at least for the Go version used by the bot) [04:26] but ffs, why don't developers have it turned on locally [04:26] wallyworld, davecheney the scripts/verify.bash file does return non-zero... the error must be swallowed elsewhere in CI? [04:26] jw4: it depends on the version of go vet i believe [04:27] wallyworld: ah [04:27] earlier versions were broken [04:27] and CI uses Go 1.2.1 [04:27] wallyworld: worker/rsyslog will rewrite the config if it changes, so no upgrade step required [04:27] axw: awesome thanks, just wanted to double check due to the sensitivity of the site [04:28] * thumper goes to walk the dog while the landing bot does its thing [04:28] menn0: I confirmed that the config is updated, and that logging still works/is distributed. I couldn't repro; I left a note to that effect in the bug, will test manually if someone is able to give me the steps === thumper is now known as thumper-bbs [04:30] axw: I suspect you need to have an HA setup where rsyslog isn't up on one or more of the state servers so that the outbound queue fills up. with your change, the queue should fill up to 512MB instead of to all free space.
[04:30] menn0: yeah, I tried that [04:31] axw: (the outbound queue should fill up on the hosts where rsyslog is working obviously) [04:31] menn0: from the bug: "I tried creating an HA env, creating a giant machine-0.log and all-machines.log on machine 0, and then disabling rsyslog on the 2nd and 3rd state servers; didn't cause the spool to grow by more than a couple MB." [04:32] axw: then I don't know. jam might have better context (he's around) [04:33] jam: ^^ any ideas on how to repro https://bugs.launchpad.net/juju-core/+bug/1453801 [04:33] Bug #1453801: /var/spool/rsyslog grows without bound [04:35] axw: looking [04:36] axw: put 4GB of data into machine-0.log [04:36] jam: from the bug: "I tried creating an HA env, creating a giant machine-0.log and all-machines.log on machine 0, and then disabling rsyslog on the 2nd and 3rd state servers; didn't cause the spool to grow by more than a couple MB." [04:37] (giant was ~512MB, not 4GB, though) [04:37] axw: so to start with what version is your env? [04:37] jam: 1.22.3 [04:38] axw: so to repro, I'd recommend using 1.20 if you can [04:38] at the very least because I believe rsyslog might be configured slightly differently (forward messages from the log file vs connect to rsyslog and send messages directly) [04:38] you're Right about ActionQueueMaxDiskSpace [04:39] I'll give 1.20 a shot [04:44] axw: do you see if 1.22 is reading machine-X.log and forwarding the content? [04:45] axw: ah I have another way [04:45] python! [04:45] jam: it was when rsyslog was enabled, yes [04:45] python -m "import syslog; syslog.openlog('juju-test'); syslog.syslog(syslog.LOG_WARNING, 'test this out'))" [04:45] axw should end up in the all-machines.log [04:45] axw: and you can trivially loop over that to create as much logspam as you want [04:46] jam: thanks, I'll try with that [05:02] grr... 
bad record mac [05:03] jam: I think my previous attempts failed because I was creating log lines that were too long, not fitting into syslog messages [05:03] this seems to be creating more spool files [05:03] axw: so you need lots of short messages rather than few long ones ? [05:03] jam: at least log lines smaller than 1MB :) [05:04] axw: we probably have some sort of "max message length" [05:04] yes probably. I'm testing with 1K now, seems to be doing the trick. now to see what happens when it gets to 512MB [05:08] what the heck http://paste.ubuntu.com/11107599/ [05:10] ... error stack: github.com/juju/juju/environs/bootstrap/bootstrap.go:103: Juju cannot bootstrap because no tools are available for your environment. [05:10] You may want to use the 'agent-metadata-url' configuration setting to specify the tools location. [05:10] did juju just tell me to go fuck muself ? [05:11] davecheney: but nicely... [05:12] ok, i think these are all symptoms of missing tools === TheRealMue is now known as TheMue [05:39] axw: how goes? [05:40] jam: trying to figure out where my rsyslog config files went...they've disappeared out of /etc/rsyslog.d [05:41] axw: that doesn't sound very good. I thought juju writes them on startup/etc. [05:41] jam: yeah, worker/rsyslog is meant to write it out [06:00] wallyworld: we aren't reviewing forward ports of fixes are we? [06:00] * thumper-bbs forgot to change nick === thumper-bbs is now known as thumper [06:04] * thumper guesses not [06:09] jam: finally, verified. I think the worker/rsyslog code isn't very robust to rsyslog being restarted externally [06:09] i.e. if it's not running, and tries to stop/restart it, it'll be unhappy [06:10] axw: ah, we're probably just issuing a "stop" which will fail if it isn't running, and not handling the "stop a stopped service" [06:10] axw: but you found you could overload the queue ? 
[06:10] jam: yes, and it stopped growing when I added the patch [06:15] axw: can you confirm if the patch would obviously apply to a 1.20 branch? [06:16] axw: and any thoughts on how we might test in "as realistic as we can for CI" ? [06:17] jam: yes, the 1.20 code is pretty similar, so would work there [06:17] axw: you're doing this on 1.22, right? Have you checked 1.24/master as well? [06:17] I haven't heard the official statement from alexis and wes about what version we're targetting [06:17] jam: ensure-availability, juju ssh 1 "sudo stop rsyslog", logspam on machine 0 until all-machines.log grows to 512MB, and then a bit more [06:18] jam: I thought 1.22 was the important one, that's all I've looked into so far. AFAIK the others haven't changed dramatically [06:18] I'll work on forward porting them now [06:19] axw: I agree with that basic sentiment, wes and alexis were supposed to meet last night to discuss official upgrade plans for them [06:19] wallyworld: do you know any more about ^^ ? [06:31] * thumper head desks [06:32] * thumper grunts [06:33] *another* damn intermittent failure [06:33] fuck fuck fuckity fuck [06:34] provider/vsphere tests on 1.24 [06:35] and for extra hillarity [06:35] environ_broker_test.go:93: [06:35] c.Assert(err, gc.ErrorMatches, "no mathicng images found for given constraints: .*") [06:35] ... error string = "invalid URL \"http://cloud-images.ubuntu.com/releases/streams/v1/index.json\" not found" [06:35] ... regex string = "no mathicng images found for given constraints: .*" [06:35] note the spelling mistake in the regex [06:36] * thumper fixes that first [06:42] axw: I'm happy with your patch, but I'm wondering why we ended up with lines added in 2 places [06:42] axw: we are going to patch 1.22 afaik [06:42] ah non API vs API [06:44] jam: that's your understanding too right? 
we talked this morning at the release meeting that the imminent arrival of 1.22.x into trusy is now being delayed till these issues are fixed in 1.22 [06:44] wallyworld: Last I had heard it was going to be discussed at the release meeting, which I was not at [06:44] and it was a "do we go for 1.22 and push back 1.24" [06:45] so, if i understood correctly, it is 1.22 [06:45] as we want these fixes in trusty also [06:47] wallyworld: care to cast your eye over this commit? https://github.com/howbazaar/juju/commit/9757821b5070ff26510cedc58e7919450ebfa9a6 [06:47] wallyworld: not sure why it was intermittently failing [06:47] looking [06:47] wallyworld: but with this patch, it passes all the time [06:48] wallyworld: the log showed that the file source was read, and didn't find an image [06:48] but sometimes it would try to get to cloud-images.ubuntu.com... [06:48] no idea why it was only sometimes [06:48] thumper: NFI about intermittent nature either [06:49] but good that you fixed the other stuff [06:49] this does seem to make it go away though [06:49] hmmm [06:49] if it works, but would like to understand the root cause [06:49] i'm looking at another bug related to this [06:49] i'll poke around a bit [06:49] yeah... me too [06:50] fix it taking a while to land [06:50] landed in 1.22 [06:50] trying 1.23 [06:50] thumper: i'm looking at bug 1452422 [06:50] before I try 1.24 [06:50] Bug #1452422: Cannot boostrap from custom image-metadata-url or by specifying metadata-source [06:50] we're not overlapping are we? 
[06:50] don't think so [06:50] not at all [06:50] all my changes have been around machine Addresses and SetAddresses [06:50] ok, just when you said me too i wasn't sure [06:51] just a slight diversion to fix the intermittent vsphere test failure [06:51] ok [06:51] me too was relating to wanting to know the root cause [06:51] ah [06:51] anyway, +1 on that fix [06:51] I hate weird shit like that [06:51] yup [06:51] cheers [06:51] just checking [06:52] the regexp typo made me laugh [06:52] talk about tweaking the test to match bad code :-) [06:52] you can tell it wasn't TDD :-) [06:53] wallyworld: prob just copy and paste [06:53] yup [07:04] thumper: do we know what menn0's plan is for working on https://bugs.launchpad.net/juju-core/+bug/1453785 ? [07:04] Bug #1453785: transaction collection (txns) grows without bound [07:04] I feel like we should have some coordination to find out when we can purge items from the txn collection [07:04] it is possible that everything can be purged once they have been in APPLIED [07:12] jam, wallyworld: quick hangout to hand off? [07:13] sure [07:13] wallyworld, jam: https://plus.google.com/hangouts/_/canonical.com/handoff?authuser=0 [07:13] rogpeppe2: ^^ [07:17] wallyworld: when you're free, can you please pastebin the output of "sudo lsblk"? I don't have an optical drive :) [07:17] axw: me either [07:18] wallyworld: oh, I thought sr0 was optical. well, anyway, I don't have one of them [07:18] axw: neither does my output [07:19] wallyworld: same machine you had juju running on? juju just runs "lsblk"... [07:19] wallyworld: juju just runs lsblk. you're on the same machine you had juju running, where the log was spammed? [07:19] sorry, thought I was disconnected [07:20] wallyworld: sorry, I'm an idiot [07:20] wallyworld: you didn't send the bug report... :) [07:23] :-) [09:40] Bug #1454599 was opened: firewaller gets an exception if a machine is not provisioned [09:47] axw: very small review? 
http://reviews.vapour.ws/r/1677/ [09:48] looking [09:48] ty [09:51] wallyworld: done [09:52] axw: tyvm [09:53] wallyworld: axw: ping - know anything about lifecycle watchers? [09:53] a little [09:53] voidspace: what about them? [09:54] wallyworld: axw: I have a new watcher / worker combo watching for when IPAddresses become Dead and releasing them with the provider [09:54] ohh, nice [09:54] machine removal marks the addresses as dead, which should trigger the worker to release and remove them [09:54] the watcher / worker is tested - setting an IPAddress to Dead triggers its removal [09:54] machine removal is tested [09:55] removing a machine marks associated IP addresses as dead [09:55] but an end-to-end test fails [09:55] allocating an ip address to a machine and then removing the machine *does* mark the address as Dead [09:55] but the watcher doesn't seem to notice it - it's not released [09:55] I wonder if I'm missing anything obvious [09:56] it is most likely a resource catsing issue [09:56] casting [09:56] i've seen before [09:56] voidspace: where's the worker? 
[09:56] I figure it maybe something about our test infrastructure - two states or something [09:56] worker/addresser/worker.go [09:56] let me link you to the current WIP [09:56] where if struct was not castable to an interface i can't remember, events were rejected [09:57] I figure it maybe something about our test infrastructure - two states or [09:57] dammit [09:57] axw: https://github.com/juju/juju/compare/1.23...voidspace:addresser-machine-destruction [09:57] wallyworld: interesting [09:57] the only difference is that when a machine is removed the ipaddress is set to Dead as part of a bigger transaction [09:58] axw: TestMachineRemovalTriggersWorker is the failing test [09:58] mk [09:58] axw: it fails in waitForReleaseOp [09:58] (and without waiting for the release op it fails because the address really isn't removed) [09:58] voidspace: i'll see if i can find the code [09:59] if I change state.Machine.Remove to call address.EnsureDead (set the address to dead in its own transaction) the test *still fails* [09:59] yet tests that do *exactly that* pass [09:59] so I suspect test infrastructure problems [09:59] wallyworld: ok, cool [10:00] the debug output shows that the watcher never sees the event [10:08] voidspace: what first came to mind was that you might need to do s.State.StartSync() just before waitForReleaseOp... but it looks like you're using just the one State, and not BackingState+State [10:09] voidspace: nothing jumps out, sorry [10:09] voidspace: do you have the watcher code? [10:10] wallyworld: it's a plain old lifecycle watcher: https://github.com/juju/juju/blob/master/state/watcher.go#L174 [10:11] oh, right [10:13] StartSync doesn't appear to help [10:13] I see the ip address life set to Dead - watcher isn't triggered [10:14] wallyworld: axw: ok, time to start digging into the event code [10:15] hah [10:15] axw: wallyworld: moving the StartSync to later in the test worked [10:15] magically... [10:15] axw: wallyworld: thanks... 
[10:15] yay for mysterious magic [10:15] didn't do anything, glad you got it working [10:15] voidspace: where in the test did you add it? [10:15] but i hate magic :-) [10:15] I'm curious to know why that works [10:16] me too [10:16] axw: just before the "machine.EnsureDead()" [10:16] huh, that doesn't make any sense [10:17] test now passes [10:17] voidspace: I can't repro success with that change, is it passing reliably for you? [10:17] celebration coffee [10:18] axw: I also needed to tweak the instance ID I allocate to [10:18] axw: just pushed a passing branch [10:18] ah no [10:18] axw: just failed [10:19] axw: maybe it's a timing issue [10:19] goddammit [10:19] voidspace: if the sync were to make sense anywhere, it'd be after the machine.Remove() [10:20] but that fails for me too [10:20] I just had two passes [10:20] now two fails [10:23] axw: with *three* calls to StartSync it reliably passes, remove any one and it seems to fail [10:23] after machine provisioning, after address creation and allocation and after machine removal [10:24] voidspace: still fails with three for me; I suspect it's just adding enough time for it to see the event in your case [10:24] that's awful [10:24] hmmm... no, it seems like only the first two are needed [10:24] sorry, didn't try in all those spots [10:24] it's still reliably passing [10:25] that kind of makes sense - it syncs the two new entities - the machine and the address [10:26] voidspace: sorry not sure, gotta go help get kids ready for bed.. I'd be interested to know if you get to the bottom of it [10:27] axw: well, that works... [10:27] I can dig through the connsuite and try and see where we end up using the different states [10:28] axw: laters o/ [10:50] is wallyworld on? 
I want to bitch about launchpad some more ;) [10:50] \o/ [10:51] wallyworld: I added a tag to a bug, but when I click on the link for that tag that was created, it brings me to a list of bugs that doesn't include the bug I just tagged [10:51] bug: https://bugs.launchpad.net/juju-core/+bug/1452285 [10:51] Bug #1452285: logs don't rotate [10:51] link from tag: https://bugs.launchpad.net/juju-core/+bugs?field.tag=cpec [10:52] it the bug assigned to the 1.24 series? [10:52] yes, i can see it is [10:52] Bug #1452285 changed: logs don't rotate [10:52] Bug #1454627 was opened: presence shouldn't try to hold all possible Sequences at once [10:53] natefinch: the default search criteria only includes open bugs i think [10:54] so if the bug is fix committed it won't show up (a guess) [10:54] you may need to go to advanced deatch so you can explicitly select the bug sates you want [10:54] states [10:54] eg fix committed, triaged, in progress etc [10:55] ahh, that actually almost makes sense :) [10:57] natefinch: and i just tested it, advanced search works [10:57] if you clock on advanced search, you'll see the default tags [10:57] wallyworld: yeah, I did too. [10:57] s/tags/states [10:57] not obvious i agree [10:58] it didn't help that I'd *just* set it to fix released... plus the whole "only searching one series" thing. [11:00] yeah [11:25] TheMue: http://reviews.vapour.ws/r/1679/ - if you could take a look. This should be getting familiar now! [11:25] wallyworld: or axw: question about "juju status" and agent up/down [11:26] yo [11:26] this is older code, but it might be stuff that you guys have thought about recently [11:27] dooferlad: sure, will do [11:30] dooferlad: done [11:30] dooferlad: a diff between two PRs would be nice, so that it more easily can be compared [11:30] * TheMue is at lunch now [11:47] Bug #1454661 was opened: presence collection grows without bound [12:06] jam: ping [12:07] hey anastasiamac, sorry I got my head deep in this sky stuff. 
Give me a sec and I'll be right there. [12:07] jam: can reschedule if it's easier :D [12:14] anastasiamac: joining now [12:29] known problem in ubuntu 15.04: can't enter decryption password for encrypted hard drive on boot [12:29] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1359689 [12:29] Bug #1359689: cryptsetup password prompt not shown [12:29] [12:29] *sigh* [12:32] and there's no supported way to remove full disk encryption either [12:32] Bug #1454676 was opened: failed to retrieve the template to clone - 500 Internal Server error - error creating container juju-trusty-lxc-template - [12:41] Bug #1454676 changed: failed to retrieve the template to clone - 500 Internal Server error - error creating container juju-trusty-lxc-template - [12:47] Bug #1454676 was opened: failed to retrieve the template to clone - 500 Internal Server error - error creating container juju-trusty-lxc-template - [12:47] Bug #1454678 was opened: "relation-set --file -" doesn't seem to work [12:53] Bug #1454678 changed: "relation-set --file -" doesn't seem to work [12:59] Bug #1454678 was opened: "relation-set --file -" doesn't seem to work [13:07] ericsnow: ping [13:38] Bug #1454697 was opened: jujud leaking file handles [13:47] mgz, any luck with dreamhost? [13:51] jcastro: expect to have some news shortly, I need to poke a bit more [13:52] ack [14:04] ericsnow: standup [15:20] voidspace: hey are you in #juju on canonical's irc? [15:24] katco: he hides there as mfoord [15:42] x-post from #juju -- "Man, actions + the new status stuff in 1.24 is really nice. hattip @ jujucore for this" === anthonyf is now known as Guest56189 [16:24] anyone know how to change the debug level of a running juju environment? [16:28] rogpeppe2: `juju set-environment logging-config=<>` [16:28] mgz: ah, ok, i wondered if it was set-environment [16:28] mgz: the help could be more helpful there, i think :) [16:28] `juju help logging` isn't bad... 
just not super concise === anthonyf is now known as Guest84788 [17:20] cherylj: how's the file handle bug going? === kadams54 is now known as kadams54-away [17:48] cherylj: perrito666: btw if you don't think you'll get a patch up for your bugs before your EOD, please be sure to update the bug with any information so we can hand them off [17:48] katco: sure [17:56] * perrito666 tries to reproduce a bug by having crappy db [17:56] s/db/bw [18:00] natefinch, katco: sorry, was at an appointment. I'm still digging into that bug === kadams54-away is now known as kadams54 [18:11] wwitzel3: hey do you have some code i can look at for the container management stuff? [18:13] katco: I do [18:14] katco: https://github.com/wwitzel3/juju/tree/ww3-container-mgmt [18:14] oh great, I am almost falling sleep on the kb and spotify decides to play total eclipse of the heart [18:14] * perrito666 makes coffee [18:14] lol [18:14] katco: that is fairly recent, I haven't pushed anything from today yet, will after it is actually compiling :) [18:21] wwitzel3: can i suggest this (https://github.com/juju/juju/blob/master/apiserver/leadership/leadership.go#L41-L49) as a way of doing IoC for the apiserver stuff? [18:23] wwitzel3: also wondering if we really need a state object in api/procmanager/procmanager.go? what functionality are we using from state? [18:25] wwitzel3: and last observation, ericsnow and i have been talking about organizing code into modules, so would it make sense to put all of this code in a central spot, and then utilize the interesting bits in the various areas of Juju? [18:30] katco: yeah, I like the idea of all the code being in the same package, we are going to be registering / unregistering the process with state, those methods aren't there yet [18:30] wwitzel3: ah gotcha. so could we just pass in a few closures or an interface that handle the registration? [18:31] katco: that is the idea, yeah [18:31] wwitzel3: awesome... 
looking forward to seeing the next iteration of code [18:33] katco: thanks for looking at the WIP, appreicate the reivew, it should all be a little more concerete tomorrow and we can shop it to every one via a review [18:33] wwitzel3: sweet [18:33] cherylj: want some help? I could help try to repro or something. [18:34] natefinch: have a maas? [18:34] * perrito666 grins [18:35] perrito666: nope. I got a maas half set up at the sprint but then hit some problems and never got further with it [18:35] meh, I thought you had a hardware maas [18:36] perrito666: I have hardware that I could probably make into maas given time. ... but time is not something I have much of. [18:36] natefinch: yeah, that would be helpful if you could do that... [19:03] haha juju 1.20 does *not* appreciate all the extra garbage in my environments.yaml [19:04] huge list of 'WARNING unknown config field "blah"' [19:11] sinzui: what do we expect for backwards compatibility between juju versions? I had a 1.22 local environment and tried to juju status using a 1.20 client, and it just failed [19:11] (hung forever) [19:12] natefinch, that is a very bad. I don't think CI has seen that though. The compatibility tests for 1.20 clients to 1.22 servers could get status and do other ops [19:13] sinzui: I wonder if juju local is just special [19:13] natefinch, shouldn't be, but since everyones local id a little different it can be [19:35] cherylj: for what it's worth, I can't seem to reproduce the file descriptor leak. At least from the proposed hypothesis of it just being because the API server is down. === kadams54 is now known as kadams54-away [19:35] cherylj: granted, I'm using 1.20.14, not 1.20.11 like at the customer site... I can try 1.20.11 and see if it changes anything though (also trying on juju local, but I can't imagine that matters). [19:36] natefinch: yeah, I haven't had much luck with that either. [19:40] and you dont have a lot of descriptors open? 
[19:40] if it is a leak it most likely showing even when not arriving to a critical point [19:43] natefinch, I think you have found a regressions. I may need to block 1.22.1 going into trusty. [19:44] natefinch, 1.20.x client sees this error talking to a 1.22.x env: x509: certificate is valid for localhost, juju-apiserver, juju-mongodb, not anything [19:45] sinzui: oops [19:46] natefinch, I wonder if this error is about closing a security issue, in which case, it is intentional [19:54] sinzui: I don't know [19:54] natefinch, I think this is just for new envs. I am retesting upgraded envs [19:57] sinzui: yes, this was not an upgraded environment [19:58] anyone free for reviewing http://reviews.vapour.ws/r/1681/ ? [19:59] natefinch, this is just old clients cannot be guaranteed to talk to envs bootstrapped by newer/securer clients. upgraded envs continue to to work. I just took 1.20.11 to 1.21, 1.1.22, then 1.23 and all is good [20:00] sinzui: ok... I find that odd, but since I don't really care about backwards compatibility personally, I'm ok with it if you're ok with it ;) [20:03] brb, new firmware for my wifi card [20:04] well, no improvements, new hardware an linux is a nightmare [20:36] Bug #1454829 was opened: 1.20.x client cannot communicate with 1.22.x env [20:41] well, the good news is, we're going to have a brand new A/C unit. That's also the bad news. === kadams54-away is now known as kadams54 [20:43] natefinch: well, that fix gave you an extra year [20:43] perrito666: yes, but it cost me $1000 [20:44] perrito666: I don't really want to pay $1000 for a year of A/C [20:45] no warranty on the fix? 
[20:46] the bad part is, if you had known this a couple of months ago you could have saved the time you spent trying to protect it from the falling ice [20:49] perrito666: not sure about warranty, probably not (or not more than like 30-60 days) [20:50] gotta run, Lily has an art show at her school [20:50] cherylj: FWIW, I have some agents that are very very slowly gaining file handles (like one per half hour), not sure where though, so I'll leave them to run and see what happens. [20:51] there is a joke around there about running, heat and AC but I cannot quite make it [20:51] haha [20:55] bbl === _thumper_ is now known as thumper === Spads_ is now known as Spads [21:27] Bug #1454658 was opened: TestUseLumberjack fails on windows [21:48] FFS [21:49] waigani: I don't suppose you have a windows box handy to run tests on? [21:49] waigani: can I get you to look at bug 1454658 ? [21:49] Bug #1454658: TestUseLumberjack fails on windows [21:49] thumper: windows? what's that? [21:49] waigani: CI blocker, and very simple fix [21:49] this is the merge that brought in the failure: https://github.com/juju/juju/pull/2303/files [21:49] thumper: okay, I'm giving this upgrade bug one final pock (I may have Stockholm syndrome) [21:50] waigani: please leave it for a bit, and look at this critical bug [21:50] *poke [21:50] thumper: yep, will do [21:50] cmd/jujud/agent/machine_test.go needs two tweaks [21:51] func (FakeConfig) LogDir() string should wrap the file path in filepath.FromSlash [21:52] unit_test needs it too in the same place [21:52] and the test that checks the filename also needs filepath.FromSlash [21:52] I *think* that should be sufficient [21:53] to fix the windows issue [21:53] * thumper has other ports to fix === kadams54 is now known as kadams54-away [21:53] thumper: sure. I'm not going to be able to test it on windows though.
[21:53] waigani: it is purely an assumption on slash file path separators [21:53] waigani: that's fine, as long as it passes locally, submit it as a fix for that bug and CI will tell us [21:54] I'm 98% sure this will fix it [21:54] ok, on it [21:54] cheers [21:55] waigani: also not that you should start on the 1.23 branch [21:55] and forward port through 1.24 and master [21:55] I was going to ask, okay [21:55] s/not/note/ [22:02] alexisb: you joining sky handoff? [22:06] thumper: http://reviews.vapour.ws/r/1683/ [22:07] thumper: local unit tests pass [22:35] waigani: first one merged, now to forward port :) [22:35] thumper: okay. ports don't need reviews right? [22:36] waigani: as long as they apply cleanly (and it should) [22:38] thumper: https://github.com/juju/juju/pull/2323 - 1.24 port [22:39] waigani: LGTM [22:39] waigani: the one thing I'd add for the next one is to mention in the pull request that it is a forward port of a previously reviewed and landed fix [22:42] waigani: please also keep the bug tasks up to date :-) ta [22:44] thumper: https://github.com/juju/juju/pull/2324 - 1.25 port, added comment [22:45] waigani: nice, thanks - generally I wait for the previous target to merge before pushing the next in [22:45] thumper: yep. I haven't tried to merge 1.25, waiting for 1.24 [22:46] cool [22:57] Bug #1454870 was opened: Client last login time writes should not use mgo.txn [23:00] menn0: do you know of examples in state where we write to the db without a transaction? [23:03] thumper: state/sequence.go does [23:03] ta [23:03] thumper: 1.24 landed, 1.25 landing... bug status updated [23:04] thumper: but that uses mgo.Change and Apply which is a little esoteric [23:06] thumper: State.AddCharm doesn't use a txn... and probably should. that looks like a bug to me. [23:15] menn0: if not mgo.Change and Apply then what? [23:16] thumper: someCollection.Insert/Update/RemoveId/etc etc [23:17] hmm..
[23:17] there's lots of methods on collections which let you add, modify and delete docs [23:17] do we do an Update on a collection anywhere? [23:17] * thumper looks for the mgo docs [23:17] thumper: not in state proper [23:19] thumper: you probably want UpdateId [23:20] yeah, that is what i'm doing :) [23:21] thumper: i've just noticed that the collection type you get back due to the auto multi-env stuff doesn't support any of the Update* methods :( [23:21] thumper: easily added though [23:22] agh [23:22] thumper: i probably didn't implement them b/c we weren't using them anywhere [23:22] I don't think that is in 1.22 [23:22] but I'll talk to you about adding it as I go [23:22] thumper: might not be, in which case you're ok ther [23:22] there [23:23] thumper: and the compiler will tell you when you get to the version that does have the multi-env collections stuff [23:23] :) [23:23] * thumper nods [23:25] menn0: oh shit [23:25] menn0: it is in 1.22 [23:25] thumper: ha [23:25] thumper: ok, it's easy enough to add the required method(s) [23:25] * thumper nods [23:25] thumper: do you just need UpdateId? [23:30] I'm going to add update and updateid for consistency === bradm_ is now known as bradm [23:49] menn0: could I get you to look at http://reviews.vapour.ws/r/1687/ for me plz? [23:49] much appreciated [23:49] * thumper goes to the gym [23:55] thumper: looking