[02:29] heh, juju help ensure-availability says the default number of machines to add is zero :/ [02:51] natefinch: your message to juju-dev went to spam for some reason... [02:52] axw: weird [02:53] I don't see anything spammy about it. *shrug* [02:53] axw: maybe too many links (since it linked everytime I mentioned gopkg.in) [02:53] could be [02:53] peddlin' gopkg.in wares [02:53] gmail's all like "dude, this guy is really pushing gopkg.in" [02:53] :) [02:58] ha I just noticed that I have a ton of those CI Cursed messages in my spam. Gmail says "many people marked similar messages as spam" ... I guess not everyone appreciated getting those 8 emails a day ;) [07:18] waigani: i suspect 1.18 etc did support running without jenv files, so long as the certs etc were in the yaml [07:18] so if that's now not possible in 1.23, it is indeed a regression [07:18] wallyworld: ah right, I misunderstood that [07:18] wallyworld: let me test that [07:18] so you need to copy the certs etc from jenv to the yaml [07:19] wallyworld: right, will do [07:19] ty [07:21] dimitern: hey, not sure if you've seen bug 1437021 [07:21] Bug #1437021: 1.23b2 some charms only listening on IPv6 [07:22] wallyworld, hey, I saw the report, but haven't read it in detail yet [07:22] np, seems to be a dup of bug 1437038 [07:22] Bug #1437038: 1.23b2 fails to get IP from MAAS for containers, falls back to lxcbr0 [07:23] not sure if it's related to stuff fixed for b2 or not [07:23] wallyworld, now that second one is interesting as it seems related to the new addressable containers [07:24] could be, i suggested a dup because adam who reported the bug thought it may be [07:26] wallyworld, yeah it seems so [07:29] wallyworld: you're right - I can connect on 1.18 without .jenv. I'll update the bug. Thanks for calling me out on that. [07:31] waigani: np, at least we know there's a real issue to fix [07:32] wallyworld: I think that warrents a new bug though - as it is different from the missing UUID in the config file. [07:32] assuming it is indeed broken in 1.23 [07:34] wallyworld: I get: [07:34] Error details: [07:34] missing namespace, config not prepared [07:54] Bug #1437021 changed: 1.23b2 some charms only listening on IPv6 [07:54] Bug #1437177 was opened: Running several 'juju run' commands in parrallel triggers lock timeout failures [08:02] wallyworld_: eod for me. I've updated bugs and have a PR up for the UUID issue. I'll tell Tim about the regression on Monday. [08:02] waigani: thank you, have a good weekend [08:30] Bug #1437191 was opened: juju cannot run without a jenv file [08:53] wallyworld_: UNIT ID LOCATION STATUS PERSISTENT [08:53] storagetest/0 filesystem/0 /var/lib/juju/storage/1/0 attached false [08:53] just got filesystems creating/mounting on volumes working again [08:53] will probably take at least monday to clean it all up [09:25] Bug #1437220 was opened: gce provider often can't find its own instances [10:02] voidspace, standup? [10:02] dimitern: oh yes... [10:08] TheMue: http://reviews.vapour.ws/r/1290/ [10:08] TheMue: :-) [10:22] dimitern: in case I wasn't clear in my note on the firewall bug: azure opens the port, but it never reports it back to the state server. thus, when the firewaller goes to reconcile, it doesn't touch the API port [10:22] dimitern: it's a bit hackish I guess, but it works today [10:42] GAAAAH bloody mutexes (mutices? mutexen?) 
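An aside on the firewaller point at [10:22]: reconciliation there amounts to a set difference between the ports the state server thinks should be open and the ports the provider reports as open, so a port the provider opened but never reported back (the Azure API port) is simply never touched. A minimal sketch of that shape, with hypothetical names rather than the actual juju firewaller code:

```go
package main

import "fmt"

// Port is a hypothetical protocol/number pair.
type Port struct {
	Protocol string
	Number   int
}

// reconcile returns the ports to open and close so that the provider's
// reported set matches the desired set. A port the provider opened but
// never reported (like the Azure API port above) appears in neither
// list, so reconciliation leaves it alone.
func reconcile(desired, reported []Port) (opens, closes []Port) {
	want := make(map[Port]bool)
	for _, p := range desired {
		want[p] = true
	}
	have := make(map[Port]bool)
	for _, p := range reported {
		have[p] = true
	}
	for p := range want {
		if !have[p] {
			opens = append(opens, p)
		}
	}
	for p := range have {
		if !want[p] {
			closes = append(closes, p)
		}
	}
	return opens, closes
}

func main() {
	desired := []Port{{"tcp", 80}}
	reported := []Port{{"tcp", 80}, {"tcp", 8080}}
	opens, closes := reconcile(desired, reported)
	fmt.Println(opens, closes) // [] [{tcp 8080}]
}
```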
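On the mutex grief just above and the hanging test mentioned next: the most common way a Go test hangs rather than fails is a self-deadlock, since sync.Mutex is not reentrant. A tiny illustration, assuming nothing about the actual test in question:

```go
package main

import "sync"

type counter struct {
	mu sync.Mutex
	n  int
}

func (c *counter) Incr() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

// IncrTwice deadlocks: it holds c.mu and then calls Incr, which blocks
// forever on the same non-reentrant mutex. In a test this shows up as
// a hang rather than a failure.
func (c *counter) IncrTwice() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.Incr()
	c.Incr()
}

func main() {
	var c counter
	c.IncrTwice() // fatal error: all goroutines are asleep - deadlock!
}
```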
[10:43] at least now I know why that test is hanging [10:44] I'm running 1.23~beta1 from ppa:juju/devel for vivid support, with the local provider, and my bootstrap jujud keeps panicing, usually on destroying services but occasionally on deploying. Should I file a bug? [10:45] "anic: cannot retrieve unit "m#15#n#juju-public": "m#15#n#juju-public" is not a valid unit name" [11:02] wgrant, yes please, that looks like a very serious problem [11:03] dimitern, ^^ -- that looks like a networky globalKey somehow slipping into a unit request [11:03] fwereade: It helpfully seems to truncate the log on startup, so I'll have to wait for the next failure and grab it before it restarts. [11:03] s/it restarts/I restart it/ [11:03] But it's died five or six times today, so I imagine I'll have a few good data tomorrow... [11:03] Thanks. [11:04] wgrant, cool -- fwiw, hopefully that error above will be enough to track it down [11:04] fwereade: Ah, great, let me know if not. [11:04] In the bug I will shortly file. [11:05] dimitern, would you put someone on that please? I am currently fighting a deadlock in the dummy provider and am not sure when I'll finish :-/ [11:05] wgrant, tyvm [11:07] fwereade, I'm in a call, will get back to you [11:07] sorry [11:07] dimitern, no worries [11:10] https://bugs.launchpad.net/juju-core/+bug/1437266 [11:10] Bug #1437266: Bootstrap node occasionally panicing with "not a valid unit name" [11:16] Bug #1437266 was opened: Bootstrap node occasionally panicing with "not a valid unit name" [11:30] TheMue, please take a look http://reviews.vapour.ws/r/1283/ [11:31] fwereade, ok, back now [11:32] fwereade, that's a globalkey for openedPorts [11:32] wgrant, ^^ [11:33] dimitern: Heh, I'm going to pretend I know what a globalkey is and that you'll let me know if you need any more debug info. [11:33] wgrant, I'm commenting on the bug [11:33] Thanks for investigating. [11:33] wgrant, basically more info is needed - logs [11:37] dimitern: How do I adjust the log level persistently? [11:37] And should I just grab all-machines.log, or something else? [11:37] wgrant, add logging-config: '=TRACE' in env.yaml for your local env [11:38] dimitern: Is that local.jenv? [11:38] I haven't seen an env.yaml. [11:38] wgrant, and then re-bootstrap and attach just the machine-0.log when you reproduce the issue [11:38] Oh, rebootstrap from the environments.yaml? Sure. [11:39] wgrant, yeah, cheers [11:39] Thanks, will probably have something over the weekend. [11:39] sweet1 [11:39] sweet! :) even [11:46] Bug #1436961 was opened: juju add-machine fails if interface of the new machine is not called eth0 [11:57] fwereade: not sure if you'll get a chance, i've proposed the next branch for the unit agent work, stupid review board didn't create a PR though https://github.com/juju/juju/pull/1964 [11:57] wallyworld_, cheers [11:58] i need to add more tests, but i'll do that over the next few branches [11:58] wallyworld_, I am becoming teeth-grindingly frustrated with what I'm currently doing so your chances are non-nil ;) [11:58] \o/ [11:58] i'm hoping to land so i can get this out next week, but have more stuff to do [12:00] wallyworld_, (deadlock in the dummy provider that I'm lucky enough to be able to repro; but fixing(?) 
that appears to expose a problem in apiserver (in-progress calls panicking because the mongo session is already closed :/) [12:00] oh joy [12:01] hmmm, i think we close sessions at the end of each operation [12:01] and clone/copy when we access a collection the next time [12:01] this was to ensure we didn't attempt to keep one long session open [12:02] which often died and left everything disconnected [12:04] Bug #1437280 was opened: with juju run: unit "/0" not found on this machine [12:11] voidspace, you have a review [12:11] two, actually [12:12] ;) [12:12] http://feedproxy.google.com/~r/GeekAndPoke/~3/HaXsef5q--I/code-freeze <= very good [12:16] wallyworld_, yeah, I'm digging my way through [12:17] wallyworld_, I'm *mostly* suspicious of the waitgroup abuse in apiserver.go:392 [12:17] * wallyworld_ takes a peek [12:17] wallyworld_, but I'm not really sure how to fix it :/ [12:18] wallyworld_, and really I don't have a good reason to suspect it [12:18] wallyworld_, other than, well, I'm pretty sure it's broken to call Add on a different goroutine to Wait [12:18] hmmm, yeah, i've not looked at that code before really [12:18] wallyworld_, (right?) [12:19] i think so [12:19] not 100% sure though [12:22] fwereade: oh, btw, will you be porting leader election to master soon? [12:22] i need it for health status [12:22] wallyworld_, dammit yes I have no excuse not to [12:23] for service health [12:23] but review my branch first :-) [12:29] mgz, ping [12:34] Bug #1437296 was opened: apt-http-proxy being reset to bridge address [12:40] Bug #1437296 changed: apt-http-proxy being reset to bridge address [12:45] dimitern: hey [12:46] Bug #1437296 was opened: apt-http-proxy being reset to bridge address [12:47] dimitern: thanks [12:47] mgz, I was wondering why master was not yet tested in CI [12:47] mgz, but I got it - there was one job for 1.23 still pending from the last rev [12:47] dimitern: yeah, I'll see if I can poke, we want to unblock trunk [12:48] mgz, cheers [12:48] it seems it just hung [12:55] wallyworld_, btw, you still around? [12:55] yeah [12:55] wallyworld_, it crosses my mind [12:55] wallyworld_, anastasiamac_, hey there, I've commented on that bug above ^^ (1437296) - it seems to me it's a regression introduced recently in the local provider [12:55] oh goodie [12:55] wallyworld_, that we should still avoid setting the workload status to error just because the unit's in error [12:55] wallyworld_, because we *know* that's wrong [12:56] wallyworld_, if the spec wants us to report stupid shit we can do that [12:56] fwereade: do we really want to open that can of worms again? [12:56] wallyworld_, but that's no excuse for baking the stupidity in at implementation level [12:56] wallyworld_, the spec is a UI-level thing, not an implementation level thing [12:57] remind me, we would want to set the workload status to unknown then?
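Returning to the waitgroup suspicion at [12:17]-[12:19]: the sync.WaitGroup contract does require that an Add with a positive delta happens before the Wait it is meant to gate, so calling Add from a different goroutine than Wait can let Wait return early or trip the race detector. A sketch of the broken shape and the usual fix, with hypothetical names rather than the code at apiserver.go:392:

```go
package main

import "sync"

// brokenWait launches workers from a separate goroutine and calls
// wg.Add there. Wait can observe the counter at zero before any Add
// has run, so it may return while workers are still being started.
func brokenWait(work []func()) {
	var wg sync.WaitGroup
	go func() {
		for _, f := range work {
			wg.Add(1) // racing with Wait below
			go func(f func()) {
				defer wg.Done()
				f()
			}(f)
		}
	}()
	wg.Wait() // may return too early
}

// fixedWait does all the Adds on the same goroutine that calls Wait,
// before any worker can call Done.
func fixedWait(work []func()) {
	var wg sync.WaitGroup
	for _, f := range work {
		wg.Add(1)
		go func(f func()) {
			defer wg.Done()
			f()
		}(f)
	}
	wg.Wait()
}

func main() {
	fixedWait([]func(){func() {}, func() {}})
	_ = brokenWait
}
```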
[12:57] wallyworld_, I *think* I'm avoiding reopening it ;) [12:57] wallyworld_, I don't think so [12:57] wallyworld_, in fact, you know what, sorry this only just crystallised [12:57] ie if the unit agent runs a hook that exits 1, then what is the status of the workload [12:57] wallyworld_, it's still what the user told us [12:58] wallyworld_, at the model level, seriously, we should *only ever* store what the charm told us the workload status was [12:58] yes, i know [12:58] wallyworld_, anything else is straight-up sabotaging ourselves [12:58] dimitern: wallyworld_: off memory, we r only changing address, if we believe that address is a loopback one and we want correct bridge ip... [12:59] wallyworld_, so, contrary to what I was thinking yesterday [12:59] anastasiamac_, that's correct, but I think you got bitten by a corner case of the regexp used there - see my comment [12:59] dimitern: k. thnx :D [12:59] wallyworld_, the *local* storage of whether we've sent is not so helpful [12:59] s/sent/set [13:00] fwereade: if a hook exits with error, we don't currently set anything else besides workload status to show that [13:00] s/show/record [13:00] wallyworld_, huh? we have to set the agent status, don't we? [13:00] no [13:00] it goes back to idle [13:00] wallyworld_, but the unit agent just changed state [13:00] sure [13:00] wallyworld_, the unit is NOW IN AN ERROR STATE [13:01] wallyworld_, it is REACTING DIFFERENTLY [13:01] wallyworld_, the workload is still running exactly as it was before [13:01] sure, but the agent is idle, ie not running a hook or an action [13:01] wallyworld_, it's *broken* [13:01] wallyworld_, it is *no longer reacting to changes* [13:02] hence people wanted to mark the workload as error [13:02] wallyworld_, it *requires user intervention* [13:02] wallyworld_, the workload is almost certain to be completely unaffected [13:02] the agent itself may be working fine though, just because a hook exits with error [13:02] wallyworld_, no, the agent is *in an error state* [13:02] the agent may still be perfectly happy [13:02] wallyworld_, it is *not reacting to almost anything* [13:03] well, it is waiting for the hook error to be resolved i guess [13:03] wallyworld_, exactly [13:03] there's nothing in the spec relevant to that though i don't think in terms of what state the agent should be [13:04] failed is not for hook errors [13:04] wallyworld_, I'm *sure* we still had an error state for the agent, and nothing about how we should do that changed [13:05] as written, there's no agent error state in the spec [13:05] wallyworld_, so how do we tell people that they need to resolve a hook error? [13:05] we discussed it, and part of the discussion was not having the same state (error) for both unit and agent [13:05] by setting the unit state to error [13:05] wallyworld_, so basically we've made it impossible for the user to know what's going on [13:06] look, i understand and i agree with you, but we lost the argument [13:06] the last time we did something contrary to the spec, we/I got in trouble [13:18] dimitern: thanks for analysing that regexp address bug, we'll fix on 1.22/1.23/1.24 [13:18] wallyworld_, awesome! [13:19] will delay 1.22.1 sadly [13:31] sinzui: license meeting? [13:40] dooferlad: ping, what was the name of your AMT script again? [13:41] TheMue: /usr/local/bin/amttool [13:41] dooferlad: ah, ok, thx. already thought so based on the man pages, but haven't been sure [13:42] TheMue: No problem.
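To make the two positions in the status argument concrete, here is a purely hypothetical sketch of the split being discussed: one status dimension for the unit agent and one for the workload as last reported by the charm. It follows fwereade's reading, where a hook failure moves the agent, not the workload, into error; the spec's reading would set the workload status instead. None of the names or values here are juju's actual implementation:

```go
package main

import "fmt"

// Two separate status dimensions, as in the discussion above: what the
// unit agent is doing, and what the charm last reported about the
// workload. The values are illustrative only.
type AgentStatus string
type WorkloadStatus string

const (
	AgentIdle      AgentStatus = "idle"
	AgentExecuting AgentStatus = "executing"
	AgentErrored   AgentStatus = "error" // hook failed, waiting for "juju resolved"
)

type UnitStatus struct {
	Agent    AgentStatus
	Workload WorkloadStatus // only ever set from the charm's status-set
	Message  string
}

// onHookResult: a non-zero hook exit moves the *agent* into an error
// state (it stops reacting to changes until the user resolves it),
// while the workload status stays whatever the charm last told us.
func onHookResult(u *UnitStatus, hook string, exitCode int) {
	if exitCode != 0 {
		u.Agent = AgentErrored
		u.Message = fmt.Sprintf("hook %q failed (exit %d)", hook, exitCode)
		return
	}
	u.Agent = AgentIdle
	u.Message = ""
}

func main() {
	u := UnitStatus{Agent: AgentExecuting, Workload: "active"}
	onHookResult(&u, "config-changed", 1)
	fmt.Printf("%+v\n", u) // agent in error, workload still "active"
}
```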
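On the regexp corner case behind the apt-http-proxy bug ([12:58]-[13:18]): the exact pattern used in the local provider is not shown here, but the general failure mode is an unanchored or too-loose pattern for spotting loopback addresses. Parsing the address and asking the net package is the safer shape; a hedged illustration:

```go
package main

import (
	"fmt"
	"net"
	"regexp"
)

// Unanchored, a pattern like this is meant to spot loopback addresses
// but also matches any address with "127." in the middle, and misses
// IPv6 ::1 entirely. (Illustrative only; not the provider's pattern.)
var looksLoopback = regexp.MustCompile(`127\.`)

// isLoopback parses the address instead, so only genuine loopback
// addresses (127.0.0.0/8 and ::1) match.
func isLoopback(addr string) bool {
	ip := net.ParseIP(addr)
	return ip != nil && ip.IsLoopback()
}

func main() {
	for _, a := range []string{"127.0.0.1", "10.127.0.1", "::1"} {
		fmt.Println(a, looksLoopback.MatchString(a), isLoopback(a))
	}
	// 127.0.0.1 true true
	// 10.127.0.1 true false  <- the false positive corner case
	// ::1 false true         <- the false negative
}
```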
[13:48] o/ hi cores [13:49] i've got this new one by the tail atm (1.22.0), with an enviro up. seems like a somewhat rare race, but causes false fails in openstack deploy testing on bare metal. bug 1437280 i'll be collecting and uploading logs, but wondered if anyone wanted to poke at it while it's alive? the enviro will be torn back down when i re-release the automated testing. [13:49] Bug #1437280: with juju run: unit "/0" not found on this machine [13:54] is anyone free to poke beisner's bug? ^ [13:55] beisner, I'm looking at it already [13:55] beisner, the only reason I can see for that error in 1.22 is if the unit directory is not present on the machine [13:57] beisner, can you ssh into the machine hosting the unit and have a look what's in /var/lib/juju/agents/unit-swift-storage-z1-0/ ? [13:59] dimitern, sure enough. on swift-storage-z1 (juju run fails), no dir there. and on swift-storage-z2 (juju run works), the dir is there. [14:00] beisner, so then the real question is why the dir is not there? can you see something in /var/log/juju/unit-swift-storage-z1.log which might show a deployment issue? [14:06] dimitern, oh wait. something else is horribly confused. it seems maas dns/dhcp is attempting to defy physics. two objects cannot occupy the same space at the same time. [14:06] dimitern, http://paste.ubuntu.com/10689200/ [14:07] fwereade: any thoughts on the failure reported here: http://juju-ci.vapour.ws:8080/job/run-unit-tests-vivid-amd64/312/ [14:08] natefinch: standup? [14:11] beisner, oh, interesting :) [14:11] beisner, so it seems that's a maas issue then? [14:11] dimitern, indeed. naturally juju (and other things) will be badly confused by this. [14:11] voidspace, hey, I've assigned you to bug 1437038 [14:11] Bug #1437038: 1.23b2 fails to get IP from MAAS for containers, falls back to lxcbr0 [14:12] beisner, ok, i'll appreciate if you update the bug with our findings [14:12] dimitern: thanks for fixing that GCE issue and for being accommodating :) [14:12] dimitern, ack. adding comment. [14:12] dimitern, mgz - thanks! [14:13] dimitern: ok, thanks [14:13] cmars: PTAL http://reviews.vapour.ws/r/1234/ (just needs a "senior" reviewer) [14:13] ericsnow, no worries :) [14:19] ericsnow: sorry, last meeting ran over, then I had to go afk a little bit [14:19] natefinch: we can call it; I don't have much to report :) [14:19] ericsnow: ok [14:19] natefinch: LICENCE all sorted out? [14:20] ericsnow: yeah, I think we knew what the answer was before we had the meeting [14:20] natefinch: saw Oleg's email [14:22] Bug #1437280 changed: maas 1.8.0alpha7 two machines have same ip address [14:23] natefinch, ericsnow: So, is Oleg's proposal our final answer? ;) [14:23] xwwt: it occurs to me that I don't think we need the copyright in a separate file. diffing the file vs. the actual license will make it obvious we just stuck a copyright at the top of the license file. Plus, it seems to be the standard place to put it. [14:23] cherylj: ericsnow: https://docs.google.com/a/canonical.com/document/d/1H_c9KYdXtWG-YQ5oL-qSJv6G9AkedZ2GDyyjZBzrBjs/edit [14:24] natefinch: Sure that makes sense. Feel free to edit the text in that file, etc. until we are confident in the message. [14:26] natefinch, xwwt: nice! thanks for putting that together [14:27] natefinch: we ought to have a copy of that in the core docs directory [14:27] natefinch: "in a separate file"...separate from what? [14:28] ericsnow: We had some help with that. 
Once we are happy with the wording, we wil make sure the main thread of the lic convo has the detail. Everyone should use this as standard to eliminate confusion. [14:28] xwwt: cool [14:28] ericsnow: sorry, we had started to say we wanted the copyright statement in a separate file, but changed our minds. [14:28] xwwt: thanks for motivating this; even if it's a pain, it's important to get it right :) [14:29] dimitern: hmmm... when I deploy to lxc:0 with maas I get an IP address allocated in the static range of the cluster controller as expected [14:30] dimitern: (container still starting - but the address is allocated ok) [14:30] dimitern: so it's not a deterministic bug [14:30] voidspace, no, it's a config issue [14:31] voidspace, I guess you don't have a eth0 on the cluster controller which is without static range? [14:31] dimitern: that's correct [14:31] dimitern: so did your patched binary help? [14:31] voidspace, let me add to the bug the pastes I got which describe how their maas was setup [14:31] voidspace, it did help to discover the cause [14:31] it's not deterministic even for them - if they deploy ubuntu first it works [14:32] natefinch: does there need to be a mandate to update the copyright in the LICENCE file every year? [14:32] natefinch, ericsnow, xwwt: The statement about "All files in this repository are licensed as follows..." doesn't currently exist in any of the repos. Do I need to go add that in to all of the repos? or just the ones that I touched as part of bug 1435974? [14:33] Bug #1435974: Copyright information is not available for some files [14:33] ericsnow: nope [14:33] natefinch: otherwise how can newer files (e.g. in 2016) be copyrighted in 2015? [14:34] voidspace, it's always happens with that unit [14:35] natefinch: ah, so we are switching the charm repo to LGPL? [14:35] voidspace, the random factor could be which machine is picked [14:35] ericsnow: it should have been lgpl from the beginning, what is it now? [14:35] voidspace, check your pm-s [14:35] natefinch: AGPL [14:36] https://github.com/juju/charm/blob/v5-unstable/LICENCE [14:36] voidspace, that machine I've seen has eth0 and wlan0 - second one disabled [14:36] ericsnow: I don't think that hurts anything, but what we are supposed to do is make the main juju codebase itself AGPL and anything else LGPL w/ the static linking exception [14:37] natefinch: right (I was asking about charm because of that statement in the doc) [14:38] natefinch: the doc should probably mention how to handle code that is copied from elsewhere (e.g. Go source that we copied and slightly modified) [14:38] ericsnow: actually, that one is wrong too, since it says v3 or later [14:38] ericsnow: good point [14:39] natefinch: I ran into that the other day with a patch I added to the utils repo (cmars made a diving save on that one) [14:40] natefinch: should the license from the third-party code also be copied somewhere (that's what I did for utils) [14:40] ericsnow: yes [14:41] natefinch: for utils I put it in LICENCE.Go [14:42] ericsnow: I'd avoid using a dot separator, since it'll look like an extension to windows. In fact... that may well not build now on windows [14:42] natefinch: ah, I didn't list the files in that file [14:42] natefinch: good point [14:42] natefinch: I'll fix both [14:45] natefinch: it may help to cite an example for the different scenarios (e.g. 1 for "Process", 1 for "Copied Code") [14:49] natefinch: how about LICENCE-Go? or LICENCE.golang? 
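For reference alongside the repo-level LICENCE discussion: the individual Go files in the juju repos carry a short per-file header that points at the repo's LICENCE file. The exact wording and year vary by repo (AGPLv3 for juju itself, LGPLv3 for the library repos), so treat this as the general shape rather than canonical text:

```go
// Copyright 2015 Canonical Ltd.
// Licensed under the AGPLv3, see LICENCE file for details.
// (Library repos such as juju/utils use the same shape with the
// licence name swapped; check the repo you are touching.)

package example // hypothetical package name
```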
[14:52] Bug #1437366 was opened: MAX_ARGS is reached when calling relation-set [15:04] ericsnow: I like LICENSE-golang [15:05] natefinch: k [15:05] natefinch: we should probably fix the syslog repo too [15:05] natefinch, sinzui, ericsnow: Are we to a point to share this out to others? [15:07] xwwt, tye [15:07] xwwt, yes [15:12] xwwt: I think so [15:15] natefinch: now that I look at it, do we need to add our own copyright and LICENCE file to the syslog repo? [15:15] natefinch: it is mostly just the files copied from the Go sources with a few extras added on (e.g. support SSL) [15:17] ericsnow: if we only made small changes to it, I think it still mainly falls under their license [15:17] natefinch: k [15:17] natefinch: that's what I figured [15:18] natefinch: is it LICENCE or LICENSE :) [15:18] LICENSE [15:18] natefinch: so do we need to change the filename in all the repos? [15:19] * ericsnow ponders american language imperialism [15:20] natefinch: consider moving the "Put full text ..." bullet up one [15:20] ericsnow: ug, I thought they were already LICENSE. No, I don't think it matters the spelling. [15:20] Damn UK folks think they invented English or something [15:20] LIZENZ (in German) [15:21] TheMue: the Z's make it so much cooler [15:21] TheMue: that sounds like a good compromise :) [15:21] hehe [15:21] We're a multi-national company, so every sentence a different language. [15:22] cherylj: note the thing on committing using the license name as the commit message. It's not really necessary, but it's a nice touch. [15:22] natefinch: yeah, I saw that. [15:22] natefinch: mind if I make a couple quick additions to the doc (you can delete them if they're dumb) [15:22] ericsnow: of course. It's open to editing on purpose :) [15:22] natefinch: Do I need to go in and change all the repos now? I had only done a subset to address the concerns in that bug... [15:24] if we ever use "AGPL v3 or later", we need to change that to not "or later". I think we need to have the explicit text at the top of the license to say "All files in this repository are licensed..." etc. [15:24] +1 [15:26] Do I need to go into each branch for these repos and make the change? or just master? [15:27] I think for juju I'll need 1.22 1.23 and master [15:27] natefinch: does that look okay (moving that bullet into the "Put in the LICENSE file" part? [15:27] and charms v4 and v5 [15:27] ericsnow: looks good [15:27] natefinch: k [15:27] cherylj: yes, 1.22, 1.33, master, charms v4 and v5. Sorry. :/ [15:28] but what about the other repos [15:28] natefinch: (and I added those "Example:" bullets (which aren't good examples quite yet) [15:28] cherylj: yeah, all canonical-owned repos in dependencies.tsv need to be updated. [15:28] cherylj: we should really divide-and-conquer here [15:29] cherylj: I can do some [15:29] ericsnow: thanks for helping. I would, but I really should be writing "HA --to" tests [15:30] I'll email about changing the license for charm, since I'm pretty sure it's supposed to be LGPL. [15:31] natefinch: okay, I'll hold off on doing charm for now and get started with they juju/juju branches [15:31] ericsnow: I can do all the ones in dependencies.tsv that are github.com/juju/* [15:32] not sure how many of the others are canonical owned [15:33] and could I get a graduated review for this PR? http://reviews.vapour.ws/r/1239/ [15:33] natefinch: tangentially, we need to stay on top of our dependencies that are hosted on code.google.com as I'm sure they are all moving (e.g. 
to github) [15:34] cherylj: that sounds good [15:35] ericsnow: oh, yeah, crap. We need to fix that. They're all s/code.google.com\/p/golang.org\/x/ [15:36] cmars: do you think your could help out with quick ship-its on the many LICENCE-related requests we'll be doing? [15:36] ericsnow, sure can [15:36] natefinch: even the GCE (api client) one? [15:37] ericsnow, are you seeing a test fail in ./worker/uniter in master, by any chance? [15:37] cmars: thanks [15:37] ericsnow, fails consistently for me. master's blocked but if we fix it, I'll JFDI that one thru [15:37] hoping it's just me and my devenv is trashed somehow, but i've seen it in vivid and now my trusty [15:38] :)( [15:38] cmars: looking [15:38] that should just be a sad [15:38] cmars: that sounds familiar from the CI unit test runs but I'm running locally now on trusty to check [15:40] ericsnow, fwiw i'm running go 1.4 installed with gvm [15:40] ericsnow, tasdomas http://paste.ubuntu.com/10689670/ [15:40] ericsnow: hmms, missed that one, probably not that one - https://github.com/google/google-api-go-client [15:41] cmars: this it taking a long time (too long I think) [15:41] you say LICENCE, but I say LICENSE. but I'm probably wrong... [15:42] natefinch: yeah, that looks like the right repo [15:43] cmars: gvm? boo [15:43] cmars: `go test ./worker/uniter` just passed for me on trusty (after almost 4 minutes) [15:43] cmars: also, we don't support 1.4... only 1.2.x [15:43] cmars: I think there are some tests that fail in 1.4 [15:44] natefinch, yikes. ok. really, not even 1.3.3? dude [15:46] * cmars runs with the go compiler of yesteryear [15:47] natefinch: do I need to state "Copyright (C) 2011-2015 Canonical Ltd." or is "Copyright (C) 2015 Canonical Ltd." sufficient? [15:50] cherylj: 2015 is sufficient I think. [15:50] k, thanks [15:51] natefinch: I can fix those code.google.com ones if you are busy with HA [15:51] ericsnow: yes please. [15:52] natefinch: will do :) [15:53] cherylj: if you could update charm to use LGPL like in github.com/juju/utils/LICENSE that would be great. Got confirmation from Mark Shuttleworth over email. [15:53] natefinch: will do, thanks. === rvba` is now known as rvba [15:58] ugh, I don't know why the patch is having problems for 1.22 https://github.com/juju/juju/pull/1965 [16:02] cherylj: maybe try updating your local copy of 1.22 and then creating a new branch? Seems like it's just git being dumb. [16:08] ericsnow: don't forget you have to change the imports too, not just the dependencies.tsv :) [16:08] natefinch: yeah, I remembered :/ [16:08] ericsnow: which I only mention because I had forgotten for a minute :) [16:09] natefinch: I have a hunch it won't be trivial [16:09] ericsnow: I don't think it'll be hard at all. should be simple sed/find & replace [16:10] natefinch: oh, that is the easy part [16:10] natefinch: I just have a feeling that something could be funky upstream (but then again everyone upstream tests their code after making such a change, right?) [16:11] * ericsnow winks [16:11] ericsnow: hopefully it was a simple move of the code... 
but yeah, if they like changed all the commit hashes, it's gonna screw us [16:15] natefinch: I'm hopeful that Google has been thorough in the move (they are leading the charge after all) === xwwt is now known as xwwt-afk [16:20] okay, here's the license for 1.22: http://reviews.vapour.ws/r/1303/ [16:23] natefinch: we may want to be more specific in the doc about which deps we should be managing ("canonical-owned" is perhaps too broad) [16:23] cherylj, thanks, reviewed. needs the "or later" bit removed [16:24] thanks, cmars [16:27] cmars: I don't think that section actually applies to the current project, as it talks about how to apply these terms to a new program? [16:27] I mean, I can still change it, but that's an example in the license, I think. [16:28] cherylj, i did skim it.. looking again [16:28] ericsnow: ummmm hmm [16:29] ericsnow: anything hosted under github.com/juju and anything else you have commit rights to ... how about that? :) [16:29] natefinch: can we just keep it to things under the juju org? [16:29] cherylj, you're right, sorry [16:30] cherylj, ship it! [16:30] ericsnow: I think that should be fine for now. I don't think we have to go changing stuff in launchpad etc [16:30] thanks, cmars ! [16:30] natefinch: yep, and it makes our responsibilities in dependencies.tsv much clearer [16:31] ericsnow: obviously include stuff from gopkg.in/juju as well [16:31] natefinch: yep [16:31] cherylj, wait a minute [16:31] cherylj, not lgtm [16:32] afk for a while for lunch etc. === natefinch is now known as natefinch-afk [16:33] cherylj, updated the review [16:33] cherylj, need to add the short statement below our copyright, then the full text of the AGPL follows below that [16:35] cherylj, ericsnow we'll need to do something similar for all the LGPLv3 + exception projects as well [16:40] cmars: can we kill out the CI test for that merge so it doesn't complete? [16:42] cherylj, how come? you want to JFDI it b/c master is blocked? [16:42] I guess I could just do another PR for the short statement [16:43] cherylj, oh, it's already gone, i see [16:43] cherylj, not sure if i can stop the CI bot. i'll fast-track a followup PR [16:45] cmars: Your suggestions differ from what's been put together by natefinch-afk and ericsnow in this doc: https://docs.google.com/a/canonical.com/document/d/1H_c9KYdXtWG-YQ5oL-qSJv6G9AkedZ2GDyyjZBzrBjs/edit [16:45] I just want to make sure everyone's on the same page before I continue making changes :) [16:47] Bug #1437177 changed: Running several 'juju run' commands in parrallel triggers lock timeout failures [16:47] cherylj, ah ok, hadn't seen that doc [16:47] cherylj, ok, i think we're fine then. sorry to be a pain, just trying to make sure we get it right. [16:48] i'll drop that issue [16:48] cmars: I understand :) [16:51] who's working on the new status stuff? The blocking etc? [17:14] marcoceppi_: wallyworld's team (with an assist from perrito666) [17:24] perrito666, katco : can either of you help triage this bug. The issue might be the private cloud There are some logs in the last comments: https://bugs.launchpad.net/juju-core/+bug/1435644 [17:24] Bug #1435644: private cloud:( environment is openstack )index file has no data for cloud === xwwt-afk is now known as xwwt [17:30] ericsnow: thanks! === natefinch-afk is now known as natefinch [17:47] marcoceppi_: can you explain this docker thing to me? I don't understand what you're isolating *from* .... 
juju doesn't change anything on your local machine except under ~/.juju [17:47] (assuming you're not using juju local) [17:49] natefinch: it's not for regular juju users, it's for charm authors and charmers performing reviews. Tests in charms are a required item for any charm in trusty and beyond. As a result, authors have to install testing dependencies which polute the system. (Recently it even broke someone's setup). The Docker container gives you a very fast and very light weight environment to execute these charm tests [17:49] it's basically virtualenv for juju [17:50] TheMue: http://reviews.vapour.ws/r/1306/diff/# [17:52] marcoceppi_: Ok, I see. thank you. I get it. [17:55] natefinch, and for me, who isn't familiar with building core from source I can fire up adam's 1.23 container, mess around without having to mess with my "production" juju stable release on my host. [17:57] jcastro: sure, but really, you can run new juju side by side with production juju, if you just set JUJU_HOME in the windows you're using to test juju. But I get that doing it in a container feels (and is) a lot more safe. [17:58] * jcastro nods [18:12] anyone else able to reproduce LP: #1437445 ? I've tried two vanilla ubuntu KVM instances now and it seems pretty repeatable [18:12] Bug #1437445: worker/uniter: FAIL: util_test.go:665: "never reached desired status" [18:14] Bug #1437445 was opened: worker/uniter: FAIL: util_test.go:665: "never reached desired status" [18:29] natefinch: gah, the oauth2 library we are using is not getting moved to github (it was deprecated a while back) [18:29] ericsnow: well, we can copy it for now [18:29] natefinch: the one to which the old lib sends you is not compatible :( [19:05] natefinch: for windows we rely on a third-party package that relies on code.google.com/p/winsvc (which does not appear to be moved or replaced) === mup_ is now known as mup [19:06] ericsnow: oh yeah, crud. It appears to belong to brainman, the guy who did most of the Windows support work for Go [19:07] natefinch: I may punt on fixing that one for this initial patch [19:08] ericsnow: that's ok [19:14] Can I get some more reviews for license changes? http://reviews.vapour.ws/r/1305/ [19:14] http://reviews.vapour.ws/r/1307/ [19:15] http://reviews.vapour.ws/r/1308/ [19:15] http://reviews.vapour.ws/r/1309/ [19:17] cherylj, sure, looking now [19:20] cmars: thanks! [19:30] cmars: PTAL http://reviews.vapour.ws/r/1311/ [19:35] ericsnow: https://code.google.com/p/winsvc/issues/detail?id=13 [19:36] ericsnow, thanks, looking [19:55] cmars: this might have gotten lost in the shuffle, but could you also take a look at http://reviews.vapour.ws/r/1234/ [19:55] cmars: it is *not* related to licensing :) [19:55] cmars: http://reviews.vapour.ws/r/1294/ too (should be trivial) [20:01] mgz: I don't s'pose you're around to validate the fix for bug 1436871 which is blocking CI? [20:01] Bug #1436871: ppc64 gccgo fails to build cmd/juju [20:03] or sinzui, ^^ ? [20:03] cmars: I should have pushed a little earlier [20:03] :) [20:04] it's EOW for almost everyone [20:04] jw4, yep, its getting there [20:04] ericsnow, jw4 since both of you want to land things, are you able to get tests to pass on latest master? [20:05] cmars: yeah - let me check again, -- what errors are you seeing? 
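The "never reached desired status" failure in LP #1437445 is the signature of a poll-until-timeout test helper giving up, which is why the same test can pass on one machine and fail on a slower or loaded one. A rough sketch of that helper shape, assuming nothing about the real util_test.go code:

```go
package main

import (
	"fmt"
	"time"
)

// waitForStatus polls getStatus until it returns want or the timeout
// expires. Tests built on this are inherently timing-sensitive: on a
// loaded machine the agent may still be converging when time runs out,
// and the test reports "never reached desired status".
func waitForStatus(getStatus func() string, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if getStatus() == want {
			return nil
		}
		time.Sleep(50 * time.Millisecond)
	}
	return fmt.Errorf("never reached desired status %q", want)
}

func main() {
	start := time.Now()
	err := waitForStatus(func() string {
		if time.Since(start) > 200*time.Millisecond {
			return "started"
		}
		return "pending"
	}, "started", time.Second)
	fmt.Println(err) // <nil> here, but a shorter timeout would fail
}
```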
[20:05] jw4: I haven't seen any problems [20:06] jw4, LP: #1437445 [20:06] Bug #1437445: worker/uniter: FAIL: util_test.go:665: "never reached desired status" [20:07] cmars: hmm I sometimes see similar errors to that which appear to be something local, 'cause it works on CI [20:07] cmars: I'm not sure about that one specfically [20:07] cmars: I'm running again with latest master to verify [20:08] jw4, thanks [20:19] cmars: yeah, I didn't get that error [20:19] jw4, ok, thanks [20:19] cmars: I did get an error in PoolListSuite, but I think that's a map ordering bug [20:19] jw4, yeah, there's tons of those lurking [20:20] jw4, can you open a bug on that PoolListSuite one? it'd be nice to get it later [20:21] cmars: yeah; maybe I'll work on it too [20:21] jw4, much thanks [20:22] cmars: I already opened a bug on that one (and anastasiamac_ has already posted a fix) [20:22] jw4: ^^^ [20:22] ericsnow, oh crap. thanks! sorry jw4 [20:22] cmars: :) [20:23] cmars, thank you for reminding me. that bug will be fix released [20:24] haha, thanks - I guess we're just waiting for CI then -- hey hi sinzui :) [20:25] jw4, ah, hold on, I don't think we have tested your rev, this is 1.22 again [20:25] sinzui: eh... [20:25] :) [20:25] sinzui: I'd let you or mgz or someone else mark it as fixed anyway [20:26] cmars, I am going to remove 1.22 and 1.23 from the queue to ensure master is tested next [20:26] sinzui, thanks [20:49] I love it when I fix a typo in our output and then find tests that are checking for the typo. [20:53] ha! === anthonyf is now known as Guest42183
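The "map ordering bug" mentioned just above is a perennial source of intermittent Go test failures: map iteration order is deliberately randomised, so a test that builds expected output by ranging over a map only passes some of the time. A small illustration and the usual fix of sorting the keys, with made-up pool names rather than the PoolListSuite data:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

var pools = map[string]string{"loop": "loop", "rootfs": "rootfs", "tmpfs": "tmpfs"}

// flakyList ranges over the map directly, so the order differs from
// run to run and a test comparing against a fixed string fails
// intermittently.
func flakyList() string {
	var out []string
	for name, provider := range pools {
		out = append(out, name+":"+provider)
	}
	return strings.Join(out, ", ")
}

// stableList sorts the keys first, so the output is deterministic.
func stableList() string {
	names := make([]string, 0, len(pools))
	for name := range pools {
		names = append(names, name)
	}
	sort.Strings(names)
	var out []string
	for _, name := range names {
		out = append(out, name+":"+pools[name])
	}
	return strings.Join(out, ", ")
}

func main() {
	fmt.Println(flakyList())  // order varies between runs
	fmt.Println(stableList()) // always: loop:loop, rootfs:rootfs, tmpfs:tmpfs
}
```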