[00:19] axw_: wallyworld: cleaned up apiserver storage dynamic add... :D http://reviews.vapour.ws/r/1563/ [00:19] coolio [00:32] Bug #1452511 was opened: jujud does not restart after upgrade-juju on systemd hosts [00:33] wallyworld: ^^^ another critical systemd bug that I just ran across [00:33] \o/ [00:33] faaark [00:43] axw_: i didn't realise in the standup - looking at the code now, i see we already have a "maas" provider type defined [00:44] wallyworld: we do? where? [00:45] axw_: oh wait, we have some infrastructure code to register [00:45] but nothing fully defined yet [01:03] wallyworld: so I've got my fix for bug 1441913 ready. it's quite an extensive change but it looks like this area has no tests as I haven't had to change a single machine agent test :( [01:03] Bug #1441913: juju upgrade-juju failed to configure mongodb replicasets [01:03] menn0: sadly that doesn't surprise me [01:04] i'll look as soon as i can, just in the middle of something [01:05] wallyworld: no need for you to do anything right now. i'm just moaning :) [01:05] :-) [01:33] axw_: i've pushed the changes to plug in the maas storage provider to the provisioner, but i think there's a bug in state/storage.go default pool processing. got a sec to discuss in standup hangout? [02:24] wallyworld: hey re: #1349949 -- I'm easily reproducing on 1.23.2 [02:24] Bug #1349949: Juju's mongodb does not need to log every command in syslog [02:25] dpb1: it's easy to reproduce. but, there's no obvious mongo setting to turn it off, and i recall one time i used google to try and solve it; many others were having the same issue with no resolution [02:26] wallyworld: ah, I was just commenting on this update: "I wasn't able to reproduce the messages on a recent [02:26] version of Juju (so maybe we can mark it invalid?) [02:26] " [02:26] oh, ok, i didn't see that comment [02:26] i would not say it is invalid :-) [02:26] ok, great. 
:) [02:53] bah, I knew I should have gotten rid of the part of the upstart script that redirects stderr to the machine log.... it was masking the fact that someone overrode the lumberjack logger and sent logs back to stderr [02:56] axw_: i've tweaked storage constraints defaults, no bug yet, will raise one if we want to proceed with the PR [02:56] http://reviews.vapour.ws/r/1598/ [02:57] axw_: at one stage, i had a recollection that AWS was different but i think uniform behaviours for all providers is best [02:57] wallyworld: if no *constraints* are specified, we should use loop. so you could specify count only, and we should still use the env default [02:58] (just reading your description, not code yet) [02:58] axw_: ok, i was just basing it on size and pool [02:58] i can tweak further [02:59] wallyworld: also, it shouldn't be loop for filesystem type [02:59] this needs to be done in defaultStoragePool I think [03:00] ah yeah, true [03:00] i'll fix [03:00] natefinch: how early in the jujud process does the default logger get replaced? [03:20] wallyworld, thumper or axw: fix for bug 1441913: http://reviews.vapour.ws/r/1599/ [03:20] Bug #1441913: juju upgrade-juju failed to configure mongodb replicasets [03:21] axw_: changes pushed [03:22] wallyworld: ok. just reviewing your maas changes [03:22] \o/ [03:22] no rush [03:22] wallyworld: what did you base your code on? spec? docs? [03:22] axw_: the maas roadmap doc [03:22] wallyworld: got a link handy? [03:29] axw_: wallyworld: charm url on storage instance http://reviews.vapour.ws/r/1600/ [03:32] axw_: wallyworld: btw, would i need an upgrade step even tho storage was behind dev flag prior to 1.24? 
[03:32] nope [03:32] \o/ [03:32] already looking [03:33] tyvm [03:33] Bug #1452535 was opened: default storage constraints are not quite correct [03:34] sinzui: --- FAIL: TestPackage (0.00s) mgo.go:350: exec: "/usr/lib/juju/bin/mongod": stat /usr/lib/juju/bin/mongod: no such file or directory [03:34] how do I get this version of mongodb ? [03:35] it's not documented in CONTRIBUTING.md [03:36] davecheney: my understanding was that in featuretest package [03:36] davecheney: we want to test "glue" btw diff layers [03:37] davecheney: hence usage of JujuConnSuite is ok [03:37] davecheney: ? [03:39] anastasiamac: are we talking about the same thing ? [03:39] i was having a whinge about http://reviews.vapour.ws/r/1597/ [03:40] i see what you're talking about [03:40] juju conn suite still sucks [03:40] davecheney: yes :D [03:40] davecheney: NOOOOO in featuretest pkg :D [03:41] davecheney: :)) yes, it does but full stack testing needs to happen too :D [03:43] wallyworld: thnx for review :D i'll wait for 2nd opinion then :D [03:43] np [03:44] davecheney: changing repo suite to conn suite in that pkg was something I did because I needed to be able to create a new environment to really test the full stack [03:44] it's not a serious objection [03:44] thumper knows that [03:44] --- FAIL: TestPackage (0.00s) mgo.go:350: exec: "/usr/lib/juju/bin/mongod": stat /usr/lib/juju/bin/mongod: no such file or directory [03:44] well I don't [03:45] does anyone know where to get the compatible version of mongo [03:45] But I guess I do now :) [03:45] cherylj: sounds like a perfect candiddate to add to our "reviewer tips" document :D [03:45] cherylj: if only i could type :D [03:45] haha [03:46] cherylj: *candidate [03:49] davecheney: isn't there a juju-mongodb package? [03:49] maybe [03:49] where do I get it ? 
[03:49] this isn't the system one, the system one doesn't install into /usr/lib/juju [03:50] davecheney: on any recent Ubuntu: apt-get install juju-mongodb [03:52] wallyworld: do you have a moment to look at that replicaset init review? [03:52] * menn0 is itching to be done with bug fixes [03:52] sorta, will look now [03:52] wallyworld: sorry, I know you're super busy [03:53] menn0: thanks [03:53] that's the answer [03:53] menn0: np, i just saw the description, test time reduction :-) [03:54] wallyworld: yeah, the test was a bit dumb. even without my fix the test should have patched the retry strategy to reduce the test time [03:54] davecheney: i'm surprised the juju package doesn't require juju-mongodb. but then again, i'm not a packaging expert [03:54] davecheney, wallyworld: i'm pretty sure the juju-local package does [03:55] ah yes, that rings a bell [03:55] davecheney, wallyworld: for non-local providers juju installs juju-mongodb itself [03:55] so why didn't dave have it on his system? [03:56] juju-local not installed? [03:56] cosmic rays? [03:56] or juju hadn't been run [03:57] I just checked apt-cache and juju-local definitely depends on juju-mongodb [03:59] menn0: this is a fresh machine [03:59] i'm building from source [03:59] davecheney: ok so you need to install juju-local [04:00] davecheney: i'm pretty sure the local provider tells you this if you try to bootstrap and it's not there [04:00] davecheney: it installs the deps that the local provider needs [04:02] i'm not bootstrapping [04:02] just running the tests [04:06] wallyworld: sent a bunch of comments [04:07] ty [04:07] wallyworld anastasiamac: I don't think we need to add the charm URL/rev/anything to output at all [04:07] wallyworld anastasiamac: it was just an internal detail, so we can track it in case we need to perform upgrade steps [04:08] davecheney: ok right. well just install juju-mongodb or juju-local. this should be better documented. 
[04:08] axw_: tyvm :D [04:08] axw_: might be nice for a verbose option later, so worth keeping in api result even if we don't display [04:08] wallyworld: I don't see what a user is ever going to do with that information [04:09] see what version of the charm their storage was made from? [04:09] for debugging problems [04:09] maybe i'm not a normal user [04:09] we wouldn't show it by default [04:10] ok, as long as it's not shown, I don't care much [04:10] but i'm not fussed either way [04:10] probs best to leave it out of everything then if there's no use for it [04:10] apiserver i mean [04:12] axw_: wallyworld: k, i'll keep it in state on creation but will remove any mention of it from apiserver and above :D [04:14] menn0: review done, tyvm [04:16] wallyworld: cheers [05:02] looks like the network from CI to the ubuntu archives is sick. apt-get is running very slow and/or failing [05:21] axw_: i've updated both my PRs from before if you are free to look [05:22] wallyworld: ok. just trying to figure out why my PR is failing: http://juju-ci.vapour.ws:8080/job/github-merge-juju/3123/consoleFull [05:22] works here :/ [05:22] i'll take a look [05:22] seems related to leadership, but no idea why [05:23] there's an extra leader-settings-changed hook in the failing test [05:25] wallyworld: are you saying that MAAS doesn't currently support specifying the root disk size? 
[05:25] or we don't [05:25] axw_: maas doesn't [05:25] okey dokey [05:25] we have root disk constraint of course [05:26] maas in the last X days landed some code to allow it [05:26] I take it the "root" label is just a placeholder for now then [05:26] yeah [05:26] ah right, you did mention that [05:28] axw_: i *think* i've seen TestUniterRelations fail intermittently [05:28] wallyworld: it's happened twice in a row on my branch on the bot, seems too much of a coincidence [05:28] oh :-( [05:29] might be subtle timing issue [05:29] unless someone else merged some crap into 1.24 and they were lucky [05:30] could be that too, anyway I'll look after I finish reviewing this === urulama__ is now known as urulama [05:34] wallyworld: reviewed MAAS one again [05:34] ty [06:03] axw_: hopefully you are making progress with the test failure? anyway, i think/hope the maas one is ok now when you get a chance [06:03] wallyworld: nope :/ [06:03] :-( [06:03] looking [06:11] axw_: fwiw, i pulled your branch and it works for me too [06:31] wallyworld: seems whether the leader-settings-changed hook runs before stop is non-deterministic. 
see "unit becomes dying while in a relation" directly below the failing test in worker/uniter/uniter_test.go [06:32] wallyworld: I'll propose a change for that separately [06:38] wallyworld: please, http://reviews.vapour.ws/r/1603/ [06:38] or davecheney ^^ [07:07] axw_: sorry, just got back from school pick up [07:09] axw_: nice pickup, lgtm [07:09] wallyworld: thanks [07:37] morning o/ [07:57] jam: you're on call reviewer today [07:57] can you have a look at [07:57] http://reviews.vapour.ws/r/1592/ [08:07] davecheney: looking [08:11] davecheney: reviewed http://reviews.vapour.ws/r/1592/ === liam_ is now known as Guest73172 [08:49] jam: thanks [08:55] voidspace: TheMue: I won't be able to make the standup today, looking into sky issues [08:55] jam: ok, thanks for the info [08:56] jam: what does it mean "looking into sky issues" (for a non-native speaker)? [08:56] jam: ok [08:57] jam: I'd like some input on bug 1435283 when you have time [08:57] Bug #1435283: juju occasionally switches a units public-address if an additional interface is added post-deployment [08:57] jam: preserving the first entry (in state.Machine.SetAddresses I assume) is a very simple change [08:57] jam: but overly simplistic [08:57] jam: should we prefer the first public address for example [09:41] voidspace: doc updated [09:43] Bug #1450631 changed: Something is injecting gopkg.in/juju/charm.v6-unstable into the tree [09:53] dooferlad: cool [09:53] dooferlad: those proliants are designed for dual-ranked ecc memory [09:54] dooferlad: so buying the right ram is either expensive (HP official) or non-trivial [09:54] :-) [09:55] Bug #1452649 was opened: storage: loop devices leaked by LXC containers [09:58] voidspace: http://uk.crucial.com/gbr/en/compatible-upgrade-for/HP-Compaq/proliant-microserver-gen8 [09:58] voidspace: easy, £40 each [09:58] voidspace: It is just DDR3 1600 ECC RAM in normal DIMM form. 
[09:59] dooferlad: heh, yeah I'm on that site now [09:59] dooferlad: it needs to be "unregistered ecc" though apparently, but crucial tell you which memory to buy anyway [10:01] dooferlad: so server, ram and ssd come in something under half the budget - so if it works I can duplicate it [10:02] voidspace, TheMue: https://plus.google.com/hangouts/_/canonical.com/juju-core-team [10:02] dooferlad: thanks [10:02] dooferlad: already there [10:19] dooferlad: ordered memory, SSD and server [10:20] voidspace: sweet! Will be interested to hear how it goes. [11:12] voidspace: so IIRC the idea from jamespage is that just because we see a new device we shouldn't think that it is a device that we can use [11:12] at least one idea is that the device has only shown up after we connected, which means we would have never bound to it [11:13] Bug #1254766 changed: unit "started" status is unhelpful [11:13] Bug #1452680 was opened: Failed to upgrade from 1.22.1 to 1.22.3 during deployment [11:42] jam: so keeping the first address first would solve that problem, right? [11:43] voidspace: keeping the address we are using does seem correct. I think jamespage's point is we shouldn't start recording it as a *possible* address yet. Though I don't know that we can detect when a charm/API would actually be using a given address. [11:43] jam: if an interface with an ip address comes up on the machine how are we to know we're not supposed to be using it [11:44] jam: they'd be annoyed if they *did* want to use it [11:44] jam: the wider network model should solve this [11:45] jam: the concept of "the address" is currently the first address from the list that has been set by state.Machine.SetAddresses - correct? 
[11:47] jam: ping [11:47] wwitzel3: /wave, I'm currently in about 4 conversations, but I'll try to include you as well :) [11:48] voidspace: so I believe jamespage's point is "if we have called bind() on an address, then we shouldn't report it as a possible API address" which is different from *should* we bind to it [11:48] jam: yeah, I noticed and thought that ping was superfluous after I read the buffer [11:48] jam: I can wait, I'd like to jump on a hangout if possible [11:50] jam: so if we call bind() (where?) shouldn't we *store* that information as canonical and only report that [11:50] jam: rather than relying on list ordering [11:54] I presume it's the charm that's binding [12:21] mattyw: +1 on focusing team meetings on topic [12:21] mattyw: -1000 for haskell discussion [12:22] anastasiamac: ...wha? [12:23] mgz: i like the idea of proposing topics for core meetings for discussion [12:23] anastasiamac, I like putting things like that in to check people read it ;) [12:24] anastasiamac, I remember we sort of discussed it at nuremberg - I've spent the interim time trying to think of something [12:24] mgz: i do not like the h-word matty used in the same sentence as juju :D [12:25] mattyw: thnx for putting it in writing :D [12:26] anastasiamac, thanks for the support :) [12:30] anastasiamac: you don't want us to rewrite juju in haskell? :) [12:30] mgz: :)) === luca__ is now known as lucap_ [12:40] mattyw: I would prefer erlang ;) [12:41] TheMue, would you settle for elixir? [12:41] mattyw: somehow not, maybe I'm too much erlang old school [12:42] TheMue, maybe I can change your mind in a future team meeting ;) [12:42] mattyw: I'm currently taking a look into pony, kind of best of go, erlang, and rust, but too young right now [12:42] mattyw: towards elixir or haskell? [12:43] TheMue, elixir [12:43] hello all [12:43] TheMue, but also - I'm very much not serious [12:43] perrito666: heya [12:43] TheMue, will have to look at pony [12:43] perrito666, morning! 
[12:44] mattyw: elixir isn't bad, but cannot get warm with the syntax [12:45] TheMue, my feeling to erlang [12:45] TheMue, although I've been told you get used to it in no time [12:46] mattyw: hehe, yeah, it's strange too. [12:46] mattyw: yes, it's more compelling than e.g. elixir. [12:46] mattyw: do you get hipster points for a talk about erlang in a channel about a go project? [12:47] mattyw: but this old prolog based syntax makes many people wonder [12:47] perrito666: madness points for mentioning the reimplementation of juju in language X (fill in any language you like) [12:49] perrito666, I find hipster points is a zero sum game [13:15] +1 for learning any new programming language :) [13:16] does nothing but broaden your horizons! [13:18] katco: learning new programming language is not the question :D [13:18] katco: personally, haskell is an old friend [13:18] :) [13:18] it is probably the next one i'll learn [13:26] I just took a look into rust, but only for an article about it. it's ok, but not mine. I like langs with lightweight processes, like erlang, go, and now pony. [13:28] axw: a small one https://github.com/juju/juju/pull/2250 [13:31] what's the easiest way to figure out what branches have been cut from master since a specific commit? I have to figure out what releases have this bug in them [13:31] natefinch: a graphical tool [13:32] but if you know in which commit the bug is present just go to github to that commit [13:32] and in the top bar it will tell you which branches contain said commit [13:32] perrito666: ahh, cool, thanks [13:33] natefinch: You would actually want branches that *merged* master with that commit too, so perrito666's answer is actually better than you asked for. [13:33] nice [13:33] looks like log rotation for machine logs is broken in all of 1.22 and newer [13:39] natefinch, lovely [13:42] alexisb: yeah... it's mostly my fault for not having CI tests for the rotation. 
Katco moved around some of the machine agent code, and just happened to move the log rotation stuff earlier in the startup code... and we evidently have two places that twiddle with how we write logs... and the order in which that code runs determines whether we rotate or not. [13:43] well here is our opportunity to fix it [13:43] alexisb: yep [13:44] alexisb: it's interesting that menn0 just happened to notice we weren't logging right before that big customer problem came up... [13:44] interesting... yes that is a good word ;) [13:45] alexisb: lucky, really. Otherwise we would have upgraded to 1.22 and rotation would have *still* been broken [13:45] natefinch: it's a classic movie moment, when the character realizes he cannot deactivate the bomb right before the explosion [13:45] camera focus on face, surprised face, big blast, black screen [13:46] perrito666, lol [13:51] rogpeppe2, jam, fwereade: any of you know why we're messing with logging here? https://github.com/juju/cmd/blob/master/logging.go#L92 [13:54] natefinch: probably because commands are supposed to use the stderr from the context, not os.Stderr [13:55] natefinch: but that surely shouldn't affect logging in the environment? [13:55] rogpeppe2: loggo's default writer is a global variable. this code replaces the default writer with one that writes to whatever is in ctx.stderr [13:56] natefinch: why is that a problem? [13:56] rogpeppe2: because we have code elsewhere that changes loggo to write to a lumberjack writer, which does the log rotation. So this overwrites that and sets it back to stderr [13:56] thus breaking log rotation [13:56] natefinch: i don't understand [13:57] natefinch: oh yes i do [13:57] natefinch: sorry, i thought this was in a cmd/juju file [13:57] natefinch: but it's in cmd/juju... [13:58] natefinch: where's the lumberjack writer installed? 
[13:58] s:cmd/juju:juju/cmd: [13:59] rogpeppe2: https://github.com/juju/juju/blob/master/cmd/jujud/agent/machine.go#L224 (that function just calls loggo.ChangeDefaultWriter or whatever it is) [13:59] gsamfira ping [13:59] natefinch: that should probably be done in Start [13:59] alexisb: pong [13:59] for bug https://bugs.launchpad.net/juju-core/+bug/1451626 [13:59] Bug #1451626: Erroneous Juju user data on Windows for Juju version 1.23 <1.23> [14:00] can you please add the pull request under review in a comment [14:00] just so we can track them [14:00] sure [14:00] rogpeppe2: I was wondering if it might be a good idea to check in the cmd code if loggo is currently writing to stderr before replacing it [14:01] natefinch: alternatively it could be done in logging.go [14:01] rogpeppe2: the code says "write to ctx.Stderr instead of os.Stderr" which sounds fine.... unless we aren't actually writing to os.Stderr anymore. [14:01] natefinch: istm that logging.go and machine.go are in conflict with one another [14:01] natefinch: exactly [14:01] alexisb: done [14:01] thanks! [14:02] natefinch: ISTM that the Log struct is where logging-related stuff is centralised [14:02] natefinch: standup [14:02] natefinch: i see two immediate possibilities: [14:02] natefinch: 1) put a NoStderrLogging bool inside the Log struct [14:03] natefinch: 2) put a lumberjack config inside the Log struct [14:10] rogpeppe2: yeah, I think the main problem is that we're trying to modify logging config without doing it through the cmd Log structure. [14:10] natefinch: yeah [14:11] natefinch: you can either modify the Log structure to make it possible to avoid that (but not introduce a lumberjack dependency there), or just make it know about lumberjack [14:11] natefinch: i *think* my inclination is towards the former [14:12] rogpeppe2: yeah, I don't want to add a hard dependency on lumberjack... it should be something you can easily plug in if you want, but not something we demand you use. 
[14:13] natefinch: are you sure we're passing the --show-log flag to agents? [14:14] rogpeppe2: not sure. In standup, will check after. [14:14] natefinch: it doesn't appear so (not in 1.20.11 anyway...) [14:15] natefinch: oh yes, it is: [14:15] f.BoolVar(&l.Debug, "debug", false, "equivalent to --show-log --log-config==DEBUG") [14:16] natefinch: i don't think that --debug should imply --show-log [14:16] natefinch: hmm, actually i take that back [14:17] natefinch: i don't think that agents should be started with the --debug flag, but just --log-config=DEBUG [14:34] Bug #1452745 was opened: 386 constant 250059350016 overflows int <386> [14:36] sinzui - are you on cross-team? [14:49] * natefinch wonders if anyone else thinks it's funny that "250059350016 overflows int" is marked as a regression.... as if it didn't used to overflow ;) [14:50] natefinch: it is a regression, storage work landed some bad json examples in tests :) [14:51] mgz: no no, I understand the fact that we're using that constant is a regression... but the way it's worded it sounds like we're saying that 250059350016 used to be a valid int32 :) [14:51] natefinch: ints aren't what they used to be [14:51] perrito666: gets it [14:51] in my day, I could count an int on one or two hands [14:51] in my times, int32s were 64s, everything comes in smaller cans nowadays [14:55] haha, new listing in my town: 1 bedroom, 1 bathroom, 856 sq ft (79.5 sq m): $360,000 [14:55] seems legit [14:56] it's even a tiny lot (for this town). Weird. [14:56] to be fair, it's probably the cheapest house in town [14:58] wwitzel3: are you back on yet? [16:05] Bug #1452785 was opened: os-enable-refresh-update is false and some security packages are not available [16:35] Bug #1452808 was opened: no API addresses to connect to after juju upgrade [16:41] natefinch: that is dollars? [16:42] katco: is the bug you mentioned earlier still orphaned? 
[16:42] perrito666: no, cherylj is looking into it [16:42] perrito666: ty for responding [16:50] perrito666: I could use some input, if you're familiar with upgrading :) [16:51] This bug is confusing since they're saying that things are hung and nothing else was written to all-machines.log. Yet, if you look at the juju status that they included, all the agents are at 1.22.3 [16:52] and the only thing that looks weird to me in the log is this error: ERROR juju.worker runner.go:219 exited "firewaller": machine 3 not provisioned [16:53] (but there is a machine-3 in their status, which also has an agent-version of 1.22.3) [17:09] cherylj: sorry I was eating one of my daily rations of white rice... (gotta love stomach bugs) [17:10] yuck :( [17:10] cherylj: the number is? [17:10] of the bug I mean [17:10] bug 1452680 [17:10] Bug #1452680: Failed to upgrade from 1.22.1 to 1.22.3 during deployment [17:16] cherylj: i am not at all familiar with autopilot, but if I had to guess, I would say compatibility was broken? the juju status output seems to indicate everything went fine [17:16] katco: back now [17:16] wwitzel3: hope you're feeling better [17:17] all services started, all agents upgraded [17:17] ericsnow: feel free to pair up with wwitzel3 on those user stories [17:17] katco: k [17:17] katco: k, will do [17:18] katco: did you see the trello for emacs thing someone mentioned on telegram the other day? [17:18] perrito666: yeah, it's the primary reason i wish we were using trello [17:18] katco: thanks, I'll be ok, just little queasy still [17:18] perrito666: because my *life* is in org-mode, and it would unify two big inboxes for me [17:18] wwitzel3: food poisoning? :( [17:19] wwitzel3: welcome to the white rice club [17:19] katco: yeah :( .. but already feeling way better [17:20] wwitzel3: good to hear. that sucks... i think you can report it to the cdc [17:21] katco: yeah, hindsight .. 
probably don't get a shrimp omelette at a hole in the wall diner [17:21] but it soo good [17:22] rofl [17:23] wwitzel3: katco be fair, I say that, if the food is good, the place only needs to be reported if you get sick 2 out of 3 times [17:24] perrito666: I've eaten at this place a lot and never got sick, it is the only place we go for breakfast [17:24] perrito666: I did call them to let them know and they said they were going to stop serving that [17:25] that is a bit extremist, they could just serve that not rotten [17:26] perrito666: well, he said "we will pull that batch" [17:26] ah, that is better [17:52] perrito666: btw, yes, dollars. [17:56] natefinch: how's the log rotation fix coming? [17:56] katco: mostly done... doing testing now. [17:57] natefinch: sweet [17:58] natefinch: let's plan on deferring our 1:1 until the bug is fixed [17:58] katco: makes sense [17:58] natefinch: that is an interesting number for a 75m^2 loft [17:59] katco: the implementation is less fragile now, though I kinda hate that it involves some code needing to know about the implementation in otherwise very remote code ... but at least it's not relying on the order of execution of two unrelated pieces of code. [17:59] perrito666: it's a standalone house with 3358m^2 of land [17:59] (.83 acres for the US people) [18:00] that's a small lot for my town [18:00] natefinch: you might have omitted that point [18:00] perrito666: 360k for that small a living space is pretty bad unless you're in the middle of the city. ANd we are decidedly not in the middle of a city [18:01] 3358m2 of land makes for the small house [18:02] perrito666: not here.... 
most plots are ~7000+ [18:02] (at least in this town) [18:02] there used to be a law that all plots had to be 8000+ (2 acres+) [18:03] but that got struck down as being illegal for discriminating against poor people or something along those lines [18:04] * perrito666 watches reviewboard waiting for it to take his backport [18:05] lol, the house I currently live in is in a 45, 50 m2 piece of land :p [18:06] perrito666: my house is on 51,000m2 ;) [18:07] to be fair, that's large for this town, but there are larger plots. [18:07] 10% of your land is about 4 times the plot I bought for my new house :p [18:08] the benefits of living in the middle of nowhere :) [18:08] I can't imagine how much time it takes to mow the lawn there :p [18:08] most of it is woods on a hill. So not really usable for much except keeping the neighbors away [18:08] I only have about 4000m2 of lawn to mow [18:08] which is not tiny, but a lot more manageable :) [18:09] you clearly are not as lazy as I am, I have 10m2 of lawn and I pay someone to mow it (and to cover all the holes my dog digs and remove the branches from my neighbour's palm tree) [18:10] lol [18:10] perrito666: I'm lazy, I'm just even more cheap than lazy [18:11] perrito666: I'm lazy, I'm just even more cheap than lazy [18:11] oops [18:11] in my defense, I should pay this guy for years before actually covering the cost of the machine for the job [18:11] perrito666: yeah, it's different here. Labor is actually expensive. To be fair, I could probably pay someone for a few years to cover the cost of my riding lawnmower, but... 
it *feels* like I'm saving money ;) [18:12] before moving into a larger plot I should have children, so I can get them to do it eventually [18:12] natefinch: labour is somewhat expensive by local standards, but machinery is more expensive [18:12] *nod* [18:32] * perrito666 discovers that his terminal emulator actually handles lp: uris [18:33] nice [18:37] ahh crap, I think I just pushed to 1.23 directly [18:38] sigh [18:38] how do you do that? [18:39] perrito666: I was deleting a branch to recreate it off of 1.23, and so deleted it, checked out 1.23, and then foolishly did a git push -f thinking that would push up the branch delete to my own github [18:40] uh git push -f? [18:40] yes [18:40] perhaps crap was too tame a word [18:40] here's what git said after I did push -f: [18:40] Counting objects: 86, done. [18:40] Delta compression using up to 8 threads. [18:40] Compressing objects: 100% (65/65), done. [18:40] Writing objects: 100% (86/86), 16.86 KiB | 0 bytes/s, done. [18:40] Total 86 (delta 58), reused 40 (delta 17) [18:40] natefinch: you might have deleted history there [18:40] To https://github.com/juju/juju [18:40] + 62cc2e3...7a6a90a 1.23 -> 1.23 (forced update) [18:41] perrito666: I know. [18:41] natefinch: luckily for you, I pulled 1.23 less than an hour ago :) [18:41] I'm kind of surprised that git checkout foo && git push -f actually does *anything* [18:42] I shouldn't have any changes to 1.23 on my local branch [18:44] I don't even know how to tell what changed [18:59] perrito666: if you recently pulled 1.23, can you then push -f your branch and just fix it? I doubt anyone checked anything into 1.23 in the last hour [19:08] natefinch: going to now [19:08] perrito666: thanks, owe you like 1000 beers [19:09] sorry the architect interrupted with the official plans for my house to be presented to the city inspection :D [19:09] natefinch: lmk when/if you're ready for our 1:1 [19:09] wwitzel3: i'm free now if you're up for our 1:1. fine if you're not. 
[19:11] natefinch: you just lost one commit from wayne [19:14] natefinch: I have no permission to do that :p [19:14] perrito666: lol [19:14] katco: yep, we can do that, one sec [19:14] perrito666: I can give you the power [19:14] natefinch: github.com/perrito666/juju/1.23 you will have to do it [19:14] natefinch: I am perfectly fine not having the power [19:14] the fewer that have the power the better [19:14] perrito666: heh... I obviously shouldn't have the power either :) [19:15] natefinch: just make sure your upstream remote for juju uses an anonymous checkout instead of a ssh one [19:16] perrito666: it does... setting up ssh was too much of a hassle [19:17] it doesn't; if your upstream was anonymous you wouldn't be able to push like that without at least providing creds [19:17] anyway, do you do the push or give me the power? [19:18] perrito666: I don't actually know how to get your 1.23 branch pushed up to the juju 1.23 branch. I mean, I could cherry-pick the commit, but I'm not sure if that's the right thing to do === anthonyf is now known as Guest80518 [19:20] katco: yes, doing wwitzel3's 1:1 now is probably better. [19:20] natefinch: kk [19:20] natefinch: cd /tmp; git clone myrepo; cd myrepo; git co 1.23; git remote add upstream jujurepo; git push -f upstream 1.23 [19:21] natefinch: If I weren't that lazy I would have written a working line, but that one is not that far [19:21] but let's face it, I am lazy [19:24] perrito666: thanks, fixed [19:26] wwitzel3: did i lose connection? you or me? [19:26] natefinch: np [19:26] * natefinch wipes a bead of sweat off his brow. [19:29] * perrito666 thinks what is the item natefinch will have to transport for him in payment :p [19:30] rofl [19:38] mm, IsValidMachine will say yes to a container, is that correct? [19:38] I mean, is that supposed to be the behaviour? [19:52] katco, perrito666, wwitzel3, anyone else: review me? 
critical fix for 1.22 (to be merged into 1.23 and 1.24 and master) - http://reviews.vapour.ws/r/1619/diff/# [19:53] er rather, "to be merged later into..." [19:57] natefinch: I am a little concerned that there is no _test touched in that patch [19:58] perrito666: yeah.... testing whether we're rotating logs is... tricky. I guess I could try to make a test that at least checks loggo is writing to a lumberjack.Logger with the correct configuration [19:58] we could then just rely on lumberjack's tests to ensure it rotates correctly with the given configuration, which I think is safe [19:59] natefinch: I would go for a "no one is changing my writer" test [19:59] I am not sure if there is a sane infrastructure to test that though [19:59] natefinch: I think that is a good idea (test that it is writing to the LJ logger) [20:01] ironically, most tests intentionally disable writing to lumberjack so we can read from stderr === anthonyf is now known as Guest51489 [20:07] ericsnow: are you working on https://bugs.launchpad.net/juju-core/+bug/1452511 ? [20:07] Bug #1452511: jujud does not restart after upgrade-juju on systemd hosts [20:11] perrito666: yep [20:11] cool [20:11] * perrito666 scratches from his list [20:47] gah, I have no friggin' clue how to make the tests that test this. The tests for the Machine agent are like 20 layers of abstraction and mocking, and one of the lowest layers changes the default logger :/ [20:47] Bug #1452891 was opened: juju determines the unit IP addresss based on the last interface attached in openstack [20:50] ericsnow: did i freeze again? [20:52] natefinch: no way to toss it at a functional layer in CI? hammer the logs and verify they rotate during a deployment test? 
[20:53] rick_h_: yeah, it just would be nice if we had a unit test that would catch it at development time, rather than asynchronously breaking trunk due to a CI test a day later [20:54] natefinch: +1 just sympathizing with this type of thing that really is a function of things 'working' [21:00] rick_h_: it would be a lot easier to test that we haven't screwed it up if it wasn't a global variable that any code anywhere could screw up [21:37] Would someone be willing to provide an example of setting data-port and bridge-mappings for quantum-gateway and for neutron-openvswitch [21:37] ? [21:37] ? [22:11] bdx, all of our juju networking folks are based in europe [22:12] so an email would be good [22:31] alexisb: Thanks, will do! [23:35] wallyworld: axw anastasiamac suddenly I cannot reach anything google related [23:39] perrito666: no worries, anastasiamac was talking about you [23:39] perrito666: all good tho :D [23:40] wallyworld: 1:1 or tanzanite? [23:40] 1:1 [23:48] oh I hope it was nice things [23:48] aghh my client cannot even get the backlog for irc, this is ludicrous [23:48] * perrito666 suddenly realizes that half of the neighborhood has a power outage and it might be related