[00:31] anyone in this shift has a decent idea of ha and peergrouper? [00:37] davecheney: \o/ [00:37] perrito666: only that the peergrouper is doing it wrong [00:37] thumper: yeah, I meant other than the obvious part :p [01:05] ok brain out of service, EOD [02:00] wallyworld: be there in a minute [02:00] np [02:09] The next person that creates a file called state.go, that is not a. in the state package, and b. related to the act of getting stuff in and out of mongodb will have 500 points subtracted from Griffendor [02:12] thumper: yaml.v2 returns errors who's .Error() string representation contains newlines [02:12] so, fu if you were expecting to use regex to match on those [02:14] :) [02:15] thumper: and the error text contains backticks [02:17] haha [02:17] OK - trying here since Canonical is quiet: Anyone able to point me to a source for juju 1.24.7 for trusty? It's gone from the stable PPA already, and we need to backrev to test something that might be a regression. [02:18] davecheney: since my affiliation with slytherin is stronger, it's perfectly fine for u to substract points from Gryffindor for my doing \o/ [02:19] Ah, found it in http://nova.clouds.archive.ubuntu.com/ [02:19] Sorry for the noise [02:20] blahdeblah: glad we helped \o/ [02:20] anastasiamac: Well, now that you've mentioned it, any ideas about my question re: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1392/console in #juju? :-) [02:23] blahdeblah: no ideas from me :D [02:24] blahdeblah: but it does feel like testing infrastructure... mayb? [02:25] When the log says "DEBUG:runner:The ntp deploy test completed successfully" and "DEBUG:runner:Exit Code: 0", I'm not even sure what all the rest of it is about... [02:32] Hey axw, could you get the 1.25 fix in for bug 1483492 in the next couple days? We're wanting to do the 1.25.1 release soon [02:32] Bug #1483492: worker/storageprovisioner: machine agents attempting to attach environ-scoped volumes [02:36] cherylj: sorry was on hangout. will do, didn't realise you were waiting on me, sorry [02:37] axw: no worries. And you're not blocking things. I just figured we could include it if it was an easy fix to backport [02:37] cherylj: I'll take a look now, shouldn't take long [02:38] axw: cool, thanks much! [03:09] we just had something odd with juju 1.25.0, when doing a destroy environment we see requests to cinder being done over https, when the endpoint is http? [03:10] when we drop back to 1.24.7, it goes back to being http [03:16] Looks like it's https://bugs.launchpad.net/juju-core/+bug/1512399 [03:16] Bug #1512399: ERROR environment destruction failed: destroying storage: listing volumes: Get https://x.x.x.x:8776/v2//volumes/detail: local error: record overflow 1.25:Triaged> [03:28] thumper: sayonara juju/juju/utils, https://github.com/juju/juju/pull/3724 [03:30] huzzah! [03:36] davecheney: ship it! [04:07] gah, I hate local provider. I can't ever get trusty containers to start up on my vivid machine [04:31] axw: both your PR's should have +1. if you could ptal at the facade one again that would be great [04:42] wallyworld: sure, thanks [04:52] bad record mac strikes again [04:52] axw: sorry, i missed a commit, i just pushed as you were reviewing [04:52] wallyworld: ok, looking [04:52] was only for rename to scheme [04:53] oh ok, cool [05:35] axw: as your doctor if BAD RECORD MAC is right for you. [05:47] hmm... just had juju's CLI autocomplete print out an error when I tried to autocomplete while not bootstrapped... 
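
A minimal Go sketch of the yaml.v2 behaviour described above; the exact error wording is an assumption, but the newline and the backticks in the message are the point:

```go
package main

import (
	"fmt"
	"regexp"

	"gopkg.in/yaml.v2"
)

func main() {
	var out struct{ A int }
	// A type mismatch makes yaml.v2 return a *yaml.TypeError whose string form
	// spans multiple lines and quotes the offending value in backticks, e.g.
	// "yaml: unmarshal errors:\n  line 1: cannot unmarshal !!str `nope` into int".
	err := yaml.Unmarshal([]byte("a: nope"), &out)
	fmt.Printf("%q\n", err.Error())

	// A single-line pattern fails to match because "." does not cross "\n"...
	fmt.Println(regexp.MustCompile("yaml: unmarshal errors:.*into int").MatchString(err.Error()))
	// ...while (?s) mode (or a plain substring check) still matches.
	fmt.Println(regexp.MustCompile("(?s)yaml: unmarshal errors:.*into int").MatchString(err.Error()))
}
```
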
repro'd it several times [05:47] one time even managed to have it print out a panic, that was fun [06:05] wtf is up with CI today? [06:30] axw: ci has taken 53 minutes to get to godeps -u [06:30] \o/ [06:31] good times [06:31] * axw does something that doesn't involve merging [07:56] axw: a small one http://reviews.vapour.ws/r/3131/ [07:56] wallyworld: looking [07:57] axw: next step is to add a collection to record remote add-relation requests. a worker will process those using the stuff in the PR above. [07:57] ie look up offer etc [07:57] and write out relation if the url can be resolved [08:01] wallyworld: hrm, I would've thought clients would always go through the API to the local API server, and the API server might proxy requests to a remote environment [08:02] axw: they will [08:02] juju add-relation will [08:02] wallyworld: so why is the factory on the client side then? [08:02] for the worker [08:02] wallyworld: by client, I mean any client of the api [08:02] a worker will listen to remote relation requests [08:02] wallyworld: workers, CLI, GUI, everything [08:03] the api later defines an interface that can be implemennted by a api facade or http facade to a remote controller [08:03] layer [08:03] wallyworld: I can see that, I'm just not seeing why we would do that, instead of making that decision in the API server [08:04] because add-relatio will record the request; the worker only has api layer === mup_ is now known as mup [09:35] voidspace: darn it! Forgot about the feature branch. At least my proposed change seemed exactly the same as what has landed and it it didn't take long to do. [09:36] dooferlad: yep [09:36] dooferlad: although the apiserver calls (called) SupportsSpaces and only checked the error result not the bool! [09:36] dooferlad: something I also fixed [09:37] :-) [09:37] yeah [09:37] dooferlad: the new subnets implementation is proving a bit more fiddly than I expected [09:38] dooferlad: the new maas subnets api is easy enough to call - but it doesn't do node filtering nor include the static range information we need (there's a separate api for that) [09:38] dooferlad: so we first call subnets then once per subnet ask for the addresses so we can match the node id [09:38] voidspace: That's unfortunate. Maybe we should submit a patch against maas. [09:39] dooferlad: then for every subnet that matches we call the reserved_ranges api [09:39] dooferlad: and the maas test server needs extending to support all this [09:39] dooferlad: it wouldn't be hard, not so sure they'd want it though [09:39] dooferlad: might be worth talking to them about it [09:39] dooferlad: they've made a definite decision to make the range information separate [09:40] voidspace: OK, well, talking won't hurt. [10:02] voidspace, dimitern, frobware: hangout? https://plus.google.com/hangouts/_/canonical.com/sapphire [10:05] dimitern, dooferlad, voidspace: self-inflicted apt-get upgrade problems here... limited desktop capabilities [10:06] dimitern, dooferlad, voidspace: I also have to run to the dentist in 10 mins. [10:07] frobware: `apt-get purge teeth`? [10:07] :) [10:08] might be awkward before reinstalling though [10:08] mgz: I'll just wait for the next version [10:24] frobware, good luck with both :) [10:28] dooferlad: email sent [10:28] voidspace: great, thanks! 
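
Regarding the SupportsSpaces remark above (the apiserver checked only the error, not the bool): a hedged sketch of the fixed pattern. The interface and function names here are stand-ins, not juju's actual apiserver code.

```go
package spacecheck

import "github.com/juju/errors"

// spaceSupporter stands in for the environ interface being discussed; the
// real method in juju may differ in detail.
type spaceSupporter interface {
	SupportsSpaces() (bool, error)
}

// checkSpaces inspects the boolean result as well as the error, rather than
// treating a nil error as "spaces are supported".
func checkSpaces(env spaceSupporter) error {
	supported, err := env.SupportsSpaces()
	if err != nil {
		return errors.Annotate(err, "cannot determine space support")
	}
	if !supported {
		return errors.NotSupportedf("spaces")
	}
	return nil
}
```
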
[10:29] dooferlad: let me know when you have gomaasapi - I'm just going to update my version and we can spelunk the test code [10:29] dooferlad: it's pretty straightforward adding and testing new methods, just tedious [10:29] dooferlad: adding the endpoints to the test server is trivial, but you also need to manage state on the test server (so the results are consistent) [10:29] dooferlad: and provide a way of populating some test data [10:30] voidspace: at this point I am going to refresh my coffee supply, then get onto that. Was just tidying up a script [10:30] dooferlad: cool [10:30] dooferlad: let me know when you want to HO [10:39] dimitern: problem with unreserved ranges :-/ [10:39] dimitern: if there are allocated ip addresses then that breaks the allocatable range [10:39] dimitern: so maas reports several smaller ranges [10:40] dimitern: full cidr minus dynamic range might be the way to go :-/ [10:41] voidspace, yeah, it breaks the unused range around the allocated IPs [10:41] dimitern: which isn't good for us [10:41] voidspace, we can merge them [10:42] dimitern: well [10:42] voidspace, but since roaksoax confirmed the static range effectively is cidr-dhcp - let's go with that [10:42] dimitern: suppose you have a dynamic range in the middle - and an unused range at the start and at the end [10:42] dimitern: how do we merge that? [10:42] dimitern: what merge algorithm are you suggesting? [10:43] dimitern: we could just take the low bounds of any unused and the high bounds of any unused portion [10:43] dimitern: and if there's an unallocatable portion in the middle - ah well [10:48] voidspace, right, it might be non-contiguous [10:48] dimitern: but our allocation strategy can handle attempting to pick an address that isn't available [10:48] dimitern: it will just try a new one [10:48] dimitern: and for the common case of a contiguous block it will work fine [10:49] voidspace, however, since there's no way to configure that via the API or CLI, this must mean "just ignore unused range before or after the dhcp range" [10:49] right [10:49] voidspace, but for now, let's go with cidr-dhcp range and leave the address picking to handle unavailable addresses [10:50] cidr - dynamic range [10:50] voidspace, yeah [10:50] if the dynamic range is in the middle I'll pick the bigger of the two blocks (above dynamic or below dynamic) [10:50] cool [10:52] voidspace, that sounds good [10:53] voidspace, and matches what maas hearsay claims - use a bigger static than dynamic range :) [10:53] heh [11:03] FYI, fwereade texted me he won't be around today (scheduled power outage + working till late yesterday) [11:17] dimitern: is there are a config option to say really-don't-use-ipv6? [11:17] dns-name: 2001:4800:7818:104:be76:4eff:fe05:c186 <- not useful address for state server [11:17] (this is on nearly-master) [11:19] mgz, is this from a unit test or when prefer-ipv6 is enabled in envs.yaml? [11:19] it's a CI test, I do not have prefer-ipv6 set for the environment in the yaml [11:20] but I don't see why that address would ever make sense to select as dns-name [11:20] dns-name really just means "address" to juju [11:20] it doesn't distinguish [11:21] mgz, yeah, dns-name is a damn lie in status [11:21] mgz, I'd like to look at some logs to figure out why [11:24] dimitern: bootstrapping again, I can give you whatever [11:26] openstack says it has a 23. a 10. 
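
A rough Go sketch of the "CIDR minus dynamic range, keep the bigger leftover block" approach settled on above; IPv4 only, no edge-case handling, and not the provider's real code:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"net"
)

func ipToUint32(ip net.IP) uint32 { return binary.BigEndian.Uint32(ip.To4()) }

func uint32ToIP(n uint32) net.IP {
	ip := make(net.IP, 4)
	binary.BigEndian.PutUint32(ip, n)
	return ip
}

// staticRange returns the usable static range as the CIDR minus the dynamic
// (DHCP) range; if the dynamic range splits the CIDR in two, it keeps the
// larger leftover block.
func staticRange(cidr string, dynLo, dynHi net.IP) (net.IP, net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, nil, err
	}
	ones, bits := ipnet.Mask.Size()
	base := ipToUint32(ipnet.IP)
	size := uint32(1) << uint(bits-ones)
	first, last := base+1, base+size-2 // skip network and broadcast addresses
	lo, hi := ipToUint32(dynLo), ipToUint32(dynHi)
	if int64(lo)-int64(first) >= int64(last)-int64(hi) {
		return uint32ToIP(first), uint32ToIP(lo - 1), nil // block below the dynamic range
	}
	return uint32ToIP(hi + 1), uint32ToIP(last), nil // block above it
}

func main() {
	lo, hi, _ := staticRange("10.0.0.0/24", net.ParseIP("10.0.0.100"), net.ParseIP("10.0.0.200"))
	fmt.Printf("static range: %s - %s\n", lo, hi) // 10.0.0.1 - 10.0.0.99
}
```
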
and a 2001: address [11:26] mgz, bootstrap --debug log should be useful, if not that then /v/l/c-i-output.log and /v/l/j/machine-0.log [11:27] dimitern: hm, this time it picked the 23. one [11:29] mgz, hmm [11:29] mgz, it might be nova-to-instance addresses in provider/openstack is acting funny [11:37] voidspace: are you OK to hangout? [11:38] dooferlad: just grabbing coffee [11:38] voidspace: ack [11:50] dooferlad: right [11:50] dooferlad: team hangout? [11:50] voidspace: https://plus.google.com/hangouts/_/canonical.com/sapphire [12:04] https://maas.ubuntu.com/docs/api.html#spaces [12:53] dimitern: here's one where it picked the ipv6 address and got upset https://chinstrap.canonical.com/~gz/bootstrap.log [13:16] mgz, ok, how about machine-0.log? [13:26] dimitern: ignore unrelated panic, [13:26] hm, this is not from the same bootstrap, but equiv I think [13:37] dimitern: https://chinstrap.canonical.com/~gz/rackspace-bad-machine-0.log [13:37] tried and failed to make chinstrap apache2 sane [13:51] mgz, weird... it seems at the agent side it uses the 10. address [13:52] mgz, but then in status the ipv6 one ends up eventually [14:12] dimitern: there's not way I can just force ipv6 not to be selected at all? it used to be the default behaviour [14:12] but now this test is only pot-luck to pass, as everything dies if only an ipv6 address is exposed [14:16] sinzui: I've disabled build-revision because I promised wallyworld I'd re-run the series-in-metadata branch. [14:17] abentley: He merged it (I thought) [14:17] sinzui: When? [14:17] abentley: the branch merged 14 hours ago [14:18] sinzui: IOW, he merged it when it was not blessed? Not cool. [14:18] abentley: I told him to after retesting his one failed job [14:19] sinzui: Oh, okay then. [14:22] Bug #1516023 opened: HAProxie: KeyError: 'services' [14:25] voidspace, mgz: did we come to any conclusion why this was failing? http://juju-ci.vapour.ws:8080/job/github-merge-juju/5435/ [14:26] sinzui: I'd like to chat when you have some time. [14:28] mgz, there's no way to ignore ipv6 addresses [14:28] frobware: no :-/ [14:29] nah, looks like a real error though [14:29] mgz, you might want to try ignore-machine-addresses, as the ipv6 one seems to be coming from the machine [14:29] I can try that. [14:30] mgz: it calls AddMachine which creates a machine document with a principals field, and then immediately checks that field and its not there [14:30] mgz: and it only happens on CI infrastructure... [14:31] mgz: not on anyone else's machine [14:31] mgz: so "genuine" for some value of genuine... [14:31] voidspace, mgz: for the record, does not happen on my desktop [14:31] very weird, because all the things I can think of that would make it happen would also cause a load of other stuff to fail [14:31] and it's at least consistent [14:35] frobware: same test fails on master as well as 1.25 for CI [14:36] frobware: I'm going to have to spend some time looking at it [14:39] the definition does not have omitempty on it [14:39] voidspace, but only on CI infra? [14:39] frobware: yeah, could be different version of go or different version of mongo [14:39] frobware: although that would still be weird for just that test to fail [14:54] dooferlad, please could we close/move/update bugs & cards related to https://github.com/juju/utils/pull/164 [14:58] Bug #1516036 opened: provider/maas: test failure because of test isolation failure [15:06] rogpeppe, what's interesting (for me at least) is that "testing.invalid" is quite pervasive through juju's code base. 
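
On the missing principals field: rather than guessing what mgo/bson does with a nil slice and no omitempty tag, a small self-contained check settles it. The doc struct below is an assumed stand-in, not the real state doc.

```go
package main

import (
	"fmt"

	"gopkg.in/mgo.v2/bson"
)

// machineDoc is a cut-down stand-in for the state doc under discussion; the
// field names and tags are assumptions.
type machineDoc struct {
	Id         string   `bson:"_id"`
	Principals []string `bson:"principals"`
}

func main() {
	// Marshal a doc whose Principals slice is nil, then inspect which keys
	// actually end up in the serialized document.
	raw, err := bson.Marshal(machineDoc{Id: "0"})
	if err != nil {
		panic(err)
	}
	var m bson.M
	if err := bson.Unmarshal(raw, &m); err != nil {
		panic(err)
	}
	_, present := m["principals"]
	fmt.Println("principals key present:", present)
}
```
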
[15:12] frobware: yeah [15:12] does 'a-b-.com' resolve for you? [15:13] frobware: yes [15:13] bleh [15:13] rogpeppe, and @8.8.8.8? [15:13] frobware: this is why we started using 0.1.2.3 as an invalid ip address everywherre [15:13] frobware: 'cos it stops immediately in the network stack [15:14] .invalid *should* fail to resolve rapidly [15:14] but people do have odd dns settings [15:14] rogpeppe, does 'dig @8.8.8.8 testing.invalid' resolve? [15:14] frobware: no [15:15] no further questions your honour [15:15] frobware: but it doesn't look much like a DNS name either [15:15] mgz: my IP provider happily resolves anything [15:16] rogpeppe, which bit doesn't look like a DNS name? [15:16] frobware: "@" is allowed in DNS names? [15:16] rogpeppe, ah, no. that forces dig to use 8.8.8.8 as its revolver, not whatever your host would normally use [15:17] frobware: "nslookup @8.8.8.8 testing.invalid" takes 15 seconds to fail, saying ";; connection timed out; no servers could be reached" [15:17] frobware: basically, we should not be relying on user's actual network [15:18] rogpeppe, agreed. but that comes back to my original observation that testing.invalid is already used a lot [15:19] frobware: interestingly "invalid" (no dots) fails swiftly on my machine [15:19] frobware: hopefully most of those places aren't actually hitting the network stack [15:19] rogpeppe, btw, I don't think nslookup is using the @syntax like dig will. [15:19] rogpeppe, as I see the same timeout. [15:20] rogpeppe, whereas dig replies with 'no name' [15:20] frobware: i always find dig output too verbose [15:20] frobware: i can't see if it has a result or not [15:20] rogpeppe, add +short to the end [15:21] frobware: ah, useful [15:21] $ dig ubuntu.com +short [15:21] 91.189.94.40 [15:21] frobware: in that case, it returns immediately, printing nothing (with a zero exit code) [15:22] rogpeppe, for testing.invalid (i.e., unresolvable) [15:22] frobware: yeah [15:22] ho hum [15:22] blessed be the ISPs [15:22] anyway, our tests really shouldn't exercise the machine's DNS [15:23] the point of those bogus-looking addresses was to make the test [15:23] actually fail if it reached the resolver [15:23] mgz: indeed [15:23] mgz: so it would be better to use 0.1.2.3 [15:23] if the test actually tries to get it to resolve then expects or chucks an error to that effect, the test needs changing [15:23] gsamfira_ o/ [15:39] mgz: do you know who did the principals stuff? I think it was fwereade [15:39] fwereade: ping if you're around... [15:39] voidspace, heyhey [15:40] fwereade: hey, hi [15:40] fwereade: I have a very mysterious failing test [15:40] fwereade: only fails on CI machines and I'm struggling to see how it's possible for it to fail at all [15:40] fwereade: and I wondered if you had any insight [15:41] voidspace, interesting, I will try to sound wise -- pastebin? [15:41] fwereade: failure here [15:41] http://juju-ci.vapour.ws:8080/job/github-merge-juju/5435/console [15:41] fwereade: http://pastebin.ubuntu.com/13248028/ [15:41] that's the relevant bit [15:42] voidspace, yeah, just got there, that's a tad baffling [15:42] fwereade: the test starts with AddMachine which definitely creates a machine with a principals field [15:42] fwereade: and it doesn't fail on my machine or frobware's [15:42] only on CI... 
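
On testing.invalid and permissive ISP resolvers: a sketch of why the behaviour is environment-dependent and how a lookup can at least be bounded. net.Resolver is newer than the Go in use in this log, so treat the second half as an assumption.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Whether this fails, and how long it takes, depends entirely on the
	// host's resolver configuration, which is the objection raised above to
	// exercising real DNS in tests.
	start := time.Now()
	_, err := net.LookupHost("testing.invalid")
	fmt.Printf("testing.invalid: err=%v after %v\n", err, time.Since(start))

	// If a lookup is unavoidable, bound it so a permissive resolver cannot
	// stall the suite.
	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
	defer cancel()
	var r net.Resolver
	if _, err := r.LookupHost(ctx, "testing.invalid"); err != nil {
		fmt.Println("bounded lookup:", err)
	}
}
```
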
[15:43] if we had a timing/session issue with mongo I could understand maybe it being empty (unless the whole doc is empty - maybe I should just log what we do get back) [15:43] but being *missing* is weird [15:44] fwereade: that test is ignoring the error from FindId().One() [15:45] maybe there's an error there [15:45] voidspace, so it is, and, yes, most likely [15:45] fwereade: I'll check the error and log what we do get (likely nothing if there's an error) [15:45] thanks [15:46] (unless you can think of anything else) [15:46] voidspace, when the machine gets created does it definitely have a princpals field? I suspect it starts as nil so it might not [15:47] fwereade: hmmm... template.principals is copied in [15:47] fwereade: if that's nil will mongo ignore the field? [15:47] fwereade: it's *not* omitempty [15:47] so I assumed it would always be there [15:48] voidspace, in which case is it possible that the s.machines session is out of date? handwave handwave -- if it's never been written to it might be returning old data? [15:48] rogpeppe, do you time/bandwidth to verify my change/patch? [15:48] fwereade: in which case finding the machine should fail - I'll add the checking for the error and we'll see [15:48] voidspace, omitempty on []string{} preserves it -- not sure how it plays with nil [15:48] voidspace, yeah [15:48] it fails consistently and only on CI, so not timing related I don't think [15:49] could be a mongo version / go version issue [15:49] voidspace, I *would* say that it's kinda evil anyway [15:49] voidspace, do we know why we don't just Refresh() or state.Machine(id) it? [15:49] heh, no [15:50] I thought you wrote the test ;-) obviously not [15:50] voidspace, I might have done? [15:50] frobware: sorry, i'm just off for the weekend [15:51] voidspace, I usually try to avoid that sort of thing, but who knows :) [15:51] rogpeppe, ack; I'll add it to the bug anyway [15:51] fwereade: the function definition was touched by rogpeppe in August [15:51] August 2013! [15:51] voidspace, ahh, rogpeppe broke it then ;p [15:52] voidspace, crikey [15:52] WHO IS TAKING MY NAME IN VAIN? [15:52] :-) [15:52] voidspace, yeah, that was early days for mongo, our best practices were... evolving [15:52] fwereade: that code specifically (checking the principals field like that) was menno [15:52] I can ask him [15:53] a year ago, and that was early days for menno [15:53] voidspace, hmm, worth dropping him a note but I think he's off for a day or two? [15:53] fwereade: if he can't think of a reason not to do it by refreshing the machine I'll switch the test to doing that [15:53] fwereade: ok, I'll email him and ask [15:53] a couple of days won't hurt desperately so long as we don't miss a release cut fof [15:53] *off [15:53] voidspace, ok, cool [15:54] voidspace, (if that makes that collection unused by the tests, would be nice to drop it) [15:54] according to the calendar he's back in on Monday [15:54] voidspace, excellent [15:55] ooh, the collection is defined by ConnSuite - I wonder if it is a stale session [15:55] s/defined/lives on/ [15:56] fwereade: great, a few avenues of attack anyway [16:23] dimitern, voidspace, dooferlad, mgz: http://reviews.vapour.ws/r/3137/ [16:23] voidspace, dooferlad, frobware, please have a look at http://reviews.vapour.ws/r/3136/ (fixes bug 1483879) [16:23] Bug #1483879: MAAS provider: terminate-machine --force or destroy-environment don't DHCP release container IPs [16:23] frobware, ha! 
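
The first fix agreed above, checking the error from FindId().One() and logging what came back, might look roughly like this in the gocheck style juju's tests use; collection and field names are assumptions.

```go
package statetest // a sketch, not the real ConnSuite-based test

import (
	jc "github.com/juju/testing/checkers"
	gc "gopkg.in/check.v1"
	"gopkg.in/mgo.v2"
)

// assertPrincipals checks the query error and logs the document instead of
// comparing against a struct that may silently be left at its zero value.
func assertPrincipals(c *gc.C, machines *mgo.Collection, id string, want []string) {
	var doc struct {
		Principals []string `bson:"principals"`
	}
	err := machines.FindId(id).One(&doc)
	c.Assert(err, jc.ErrorIsNil) // mgo.ErrNotFound would point at a stale session or missing doc
	c.Logf("machine doc: %+v", doc)
	c.Assert(doc.Principals, gc.DeepEquals, want)
}
```
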
:) you were faster [16:23] frobware, looking [16:25] frobware, LGTM [16:27] :) [16:31] Bug #1516077 opened: CLI autocomplete prints errors/panics when not bootstrapped [16:37] Bug #1516077 changed: CLI autocomplete prints errors/panics when not bootstrapped [16:43] Bug #1516077 opened: CLI autocomplete prints errors/panics when not bootstrapped [16:54] sinzui: where does the landscape_scalable.yaml bundle come from that CI deploys? [16:58] hi [16:59] ya, natefinch, you need to use an updated bundle. the latest charm can't deploy with that old bundle [17:00] https://jujucharms.com/u/landscape/landscape-scalable/10 [17:00] yay, not my fault! :) [17:00] heh [17:01] actually, the bug says that the bundle deploys ok in 1.25, but not using master: https://bugs.launchpad.net/juju-core/+bug/1516023 [17:01] Bug #1516023: HAProxie: KeyError: 'services' [17:02] natefinch: it is in lp:juju-ci-tools/repository. Re froze the landscape-scalable.yaml bundle to prevent it from changing behind our backs [17:03] sinzui: dpb1 says the old bundle doesn't work with the new charm... though that doesn't seem to mesh with what the CI tests are seeing [17:04] natefinch: I don't think the charm and the bundle have changed. [17:05] sinzui: honestly, my initial assessment was that the haproxy charm just wasn't written carefully enough to account for config-changed getting fired before some of the config data exists. [17:07] natefinch: http://bazaar.launchpad.net/~juju-qa/juju-ci-tools/repository/view/head:/landscape-scalable.yaml does not let any charm run and if 1.25 liks he bundle and maser doesn't I am not sure the bundle can be blamed [17:07] natefinch: sorry that isn't english. [17:07] sinzui: lol, I can decipher :) [17:08] natefinch: the bundle controls the version of the charms. [17:08] hmm... services is getting passed as an empty string... this all seems familiar [17:08] that bundle is out of date [17:09] natefinch: I really want to bame the charm for not handling all conditions [17:09] can you not store it locally? [17:09] just grab from the store? [17:09] dpb1: we use this old version to guarantee consistency in the versions of juju we test [17:09] sinzui: I want to blame the charm, too... but it almost seems like we're deserializing the data in a different way than we were before [17:10] dpb1: as long as both the bundle and the charm are version-locked, it should be ok, right? [17:10] guys, you need to update the bundle [17:10] we don't support 'services' key anymore. [17:10] lol [17:10] is this a change in quickstart, then? [17:11] natefinch: let me check. there are 4 machines involved [17:11] Here is the bundle we now upload to the store: https://api.jujucharms.com/charmstore/v4/~landscape/bundle/landscape-scalable-10/archive/bundles.yaml.orig [17:12] You'll notice that apache2 isn't a part of it anymore, landscape-msg is gone, landscape has changed to landscape-server, etc. [17:12] dpb1: I think had to make a local copy of the bundle that worked with older quickstart [17:13] this should work with latest stuff in the stable ppa: juju quickstart u/landscape/landscape-scalable [17:14] which will pull bundle version 10, and coicedently, charm version 10 [17:30] natefinch: dpb1 CI is using 2.2.2+bzr142+ppa39~ubuntu14.04.1 on all machines, that version was released last week. CI tested the bundle and the quickstart many times with many jujus and passed. [17:33] your bundle is out of date. Look at the charm store. I don't know what this stored copy gains you, but I would recommend not storing it. 
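
The KeyError: 'services' failure is consistent with a config value deliberately set to "" being filtered out before the charm sees it; a toy sketch of that bug class follows (purely illustrative, not juju's settings code).

```go
package main

import "fmt"

// dropEmpty mimics the failure mode: filtering settings on "is the value
// empty?" makes a key that was intentionally set to "" vanish, so a hook
// that indexes it directly blows up.
func dropEmpty(settings map[string]string) map[string]string {
	out := make(map[string]string)
	for k, v := range settings {
		if v == "" {
			continue // the key silently disappears
		}
		out[k] = v
	}
	return out
}

func main() {
	cfg := map[string]string{"services": "", "default_timeout": "5000"}
	_, ok := dropEmpty(cfg)["services"]
	fmt.Println("services present after filtering:", ok) // false
}
```
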
[17:33] If you need to store it, you'll need to look at the history to see when you started using charm version 10 [17:33] dpb1: it is intentionally out of date to allow continuity in testing [17:33] charm version 10 is not compatable with that bundle. [17:35] dpb1: I am in meeting. I will replay the test with the current bundle when soon. [17:37] oh I see the services: "" [17:37] there is a bug in juju from time to time where empty strings are dropped [17:38] sinzui: I thought I'd seen that before [17:42] natefinch: I am retesting several combinations. I suspect someone fixed the bug where config keys are dropped/ignored with they are set to an empty string. I believe thei bug is fixed...and master is doing exactly what it was told to do and the charm errors. 1.25 is dropping data (for wrong reasons) and the test passes. [17:43] frobware: are you still around? [17:44] sinzui: cool. I was pretty sure there was no way my code could cause data to be missing in the charm's config. [18:08] thanks for changing tabs into spaces in a tsv, vim [18:10] sinzui: do we use godeps for charmrepo in master? [18:11] sinzui: I'm getting weird compile issues after a rebase on master that look like they're using the wrong version of the code. either wrong version of charmrepo, or juju/juju and charmrepo disagree on the version of juju/charm to use [18:12] natefinch: We appear to be. [18:13] I remember roger talking about using godeps for some charm stuff, and I said at the time it was going to be a disaster to have more than one source of truth for what the right version of the code is. [18:13] natefinch: yes, godeps was called [18:14] wow that is going to be a huge pain in the ass. 1.) update juju/charm, 2.) update charmrepo deps to point to new charm, 3.) update juju/juju deps to point to new charmrepo AND new juju/charm [18:16] natefinch: yeah. omnibus is is a similar situation, when we test it against the current juju, there is a rule to emit the conflicting deps to help reconcile them, but it is still many steps [18:16] actually, both charmrepo and charmstore have a dependencies.tsv now, and bothe reference charm [18:37] hmm... this may be my fault. I rebased against "master" on a gopkg.in branch.... [18:38] eyah, I think that [18:38] that's the problem [18:39] natefinch, ouch, been there. gopkg + godeps = :S [18:41] cmars: yeah, it's really just gopkg.in's fault that "master" of charm.v6-unstable is actually "v6-unstable", not "master". Just a mental mapping problem on my part. [18:42] gah.... [18:42] natefinch, it's kind of weird that you can say "use this branch" with the gopkg path, but then "nah not really, use this commit hash from another branch" with godeps [18:42] cmars: yeah, really, godeps overrides gopkg.in [18:42] that's what's tripped me up before [18:43] i bet godeps could be made to be gopkg-aware... check if the commit hash is a parent of the named branch? hmm [18:49] Bug #1516023 changed: HAProxie: KeyError: 'services' === BradCrittenden is now known as Guest54329 [19:49] rogpeppe: are you here? 
[19:50] natefinch: kinda [19:50] natefinch: i'm in a car travelling up to scotland [19:50] lol [19:50] natefinch: connectivity may be patchy :) [19:50] rogpeppe: I rebased changes onto head of charm.v6-unstable and I'm getting undefined: charm.Reference [19:50] where will you be when omnipresence attacks [19:51] natefinch: yeah [19:51] natefinch: i wanted to get the latest changes into juju core today but failed [19:51] natefinch: i've removed the Reference type [19:52] natefinch: i've made the changes in core but haven't got rid of the test failures yet [19:52] natefinch: what are you trying to land in charm ? [19:52] * perrito666 suddenly wonders if rogpeppe is driving [19:52] rogpeppe: min juju version [19:52] perrito666: no, self-driving car [19:53] natefinch: can it wait for a day or two? [19:53] rogpeppe: yeah [19:53] rogpeppe: just making sure things weren't fundamentally broken [19:53] natefinch: no, it's all broken by design :) [19:54] natefinch: the reason for charm.Reference going is that we're going to have multi-series charms [19:54] natefinch: so a fully resolved URL no longer requires a series [19:54] rogpeppe: yeah, that's awesome [19:54] natefinch: and that was the sole reason for the existence of the Reference type [19:55] natefinch: so most of the failures i'm seeing are from juju-core tests that are expecting an error when creating a URL without a series [19:56] rogpeppe: ok, yeah, I think I just hit a problem because the head of charm.v6 doesn't compile with the head of juju master currently [19:56] natefinch: yeah, that's right [19:56] rogpeppe: so when I rebased my code on top of head of both, everything broke [19:57] natefinch: yup [19:57] natefinch: sorry 'bout that [19:57] rogpeppe: no problem, I can re-rebase onto the old known good version of charm.v6 for now [19:57] natefinch: thanks [19:57] natefinch: do you know, by any chance, if the transaction log increases in size forever still? [19:58] rogpeppe: I forget if we fixed that or not. [19:59] does any of you know where can I get one of those? https://pbs.twimg.com/media/CTsqRyPWIAEVKRj.jpg [19:59] perrito666: can't [20:00] natefinch: you are no fun [20:00] perrito666: they were sold from the google store way back in the day, IIRC, but they're not being made anymore [20:00] well, it seems that ill just need a very detailed set of pictures and convince a plastic artist [20:01] or a 3D printer [20:01] if you can reproduce them, I'm sure there's a market for them [20:01] perrito666: i've got one [20:01] I'd buy one [20:01] natefinch: I am pretty sure a 3d printer will get me very far from that [20:01] perrito666: for a suitable price i could probably be persuaded to part with it :) [20:02] rogpeppe: :p can your price be expressed within the realm of real numers? :p [20:02] perrito666: sure :) [20:02] if so, does it at least fit an int64? :p [20:02] perrito666: that's a substantial realm [20:03] rogpeppe: people have asked things like unicorns [20:03] perrito666: i should think so [20:03] perrito666: probably even within an int16 [20:04] signed? [20:04] perrito666: yes [20:04] perrito666: (by me :-]) [20:04] I meant the int [20:08] perrito666: did i say uint16 ? [20:31] gah... I can't remember how to push up a new charm to launchpad [20:33] $ bzr push lp:~natefinch/charms/vivid/ducksay [20:34] bzr: ERROR: Permission denied: "~natefinch/charms/vivid/ducksay/": : Cannot create branch at '/~natefinch/charms/vivid/ducksay' [20:34] $ bzr push lp:~natefinch/charms/vivid/ducksay/trunk [20:34] Created new branch. 
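
A tiny sketch of the change rogpeppe describes: with multi-series charms a fully resolved URL no longer needs a series, so code and tests that expected ParseURL to reject a series-less URL have to change. The behaviour is sketched from the discussion, not from the final charm.v6-unstable API.

```go
package main

import (
	"fmt"

	charm "gopkg.in/juju/charm.v6-unstable"
)

func main() {
	// Previously a URL without a series was an error (and Reference existed
	// for the unresolved form); after the change this is expected to parse,
	// and the gopkg.in import path above pins the v6-unstable branch, not
	// the repo's master.
	curl, err := charm.ParseURL("cs:wordpress")
	fmt.Println(curl, err)
}
```
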
[20:34] really? [20:41] Bug #1516144 opened: Cannot deploy charms in jes envs [20:44] Bug #1516144 changed: Cannot deploy charms in jes envs [20:47] Bug #1516144 opened: Cannot deploy charms in jes envs [20:53] Bug #1516144 changed: Cannot deploy charms in jes envs [20:56] Bug #1516144 opened: Cannot deploy charms in jes envs [20:59] if anyone's around, reviews.vapour.ws/r/3138/diff/ is pretty small [21:00] if no one is around the review grows? [21:01] * cherylj sighs.... [21:01] why must maas be so difficult? [21:01] I just want to work on my feature, but NO, I can't bootstrap my virtual maas. At all. [21:01] on 1.25 or 1.26 [21:02] cherylj: ah that used to happen to me too [21:02] I end up throwing my vmaas [21:02] and sold the computer running it [21:02] :p [21:02] I am an extremist [21:02] I'm getting the same "cannot get replica set status: can't get local.system.replset config from self or any seed (EMPTYCONFIG)" errors [21:02] even with frobware's fix. [21:02] but now I'm also getting these: DEBUG juju.mongo open.go:122 TLS handshake failed: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "juju-generated CA for environment \"maas\"") [21:05] Bug #1516144 changed: Cannot deploy charms in jes envs [21:06] you made me google what is a manifold [21:07] when I'm merging a turnk-merge to a feature branch [21:07] should I just do that, rather than put up a pr? or should I pr and self approve? [21:07] trunk. [21:08] Bug #1516144 opened: Cannot deploy charms in jes envs [21:08] deppends on the desired result [21:08] I prefer just do the merge [21:09] fwereade: since you're still around, did you get my email about SetInstanceStatus for containers? [21:11] cherylj, just read it [21:11] cherylj, not completely sure I follow: attempted restate: [21:12] cherylj, "the problem with setting instance status is that it's stored in the instanceData doc for the machine, let's change instanceData"? [21:12] mgz_: the nice thing about a PR and approve is that the bot runs, so you ensure it passes tests etc [21:13] that is a reasonable point. [21:14] fwereade: not quite. more like, we can't set the instance status until we've associated an instance with a machine, which happens after a complete call to StartInstance [21:14] fwereade: and then, "how about for containers, we associate an instance with a machine before StartInstance" [21:15] cherylj, ok, but the reason we can't set it is only because, AFAIAA, there's currently no place to store it that isn't the instance data [21:15] cherylj, and I contend that it shouldn't be in instancedata in the first place [21:16] fwereade: wait, I thought that was where you wanted it. 
That's where instancepoller puts it [21:16] cherylj, it should be a proper status like all the others, and that dooc can be keyed on `m##instance` or something [21:16] cherylj, I have evidently been failing to communicate that I think we have lots of awesome infrastructure for statuses that instance statuses should also use [21:17] cherylj, and that it would be super-nice to get the status out of the instanceData which is otherwise immutable hardware stuff [21:17] so, don't use SetInstanceStatus [21:17] ok [21:18] cherylj, state/status.go has getStatus and setStatus helpers [21:18] simple-ish review please: http://reviews.vapour.ws/r/3139 [21:19] cherylj, you'll want to make sure you create/destroy the instance status docs alongside the machine status ones [21:20] ok, I see now [21:20] cherylj, and either put the instance-status validation near all the other status validation in state -- or, if you have time/inclination, extract all the status validation stuff to its own package that doesn't know about mongo and call it from there [21:20] cherylj, cool [21:21] cherylj, that should get you some stuff like free status-history tracking which it might be nice to work out how to expose cleanly [21:23] mgz_, LGTM [21:24] fwereade: thanks! [21:26] Bug #1516150 opened: LXC containers getting HA VIP addresses after reboot [21:26] hey, that sounds familiar! ^^ [21:29] heh [21:35] Bug #1516150 changed: LXC containers getting HA VIP addresses after reboot [21:41] Bug #1516150 opened: LXC containers getting HA VIP addresses after reboot [22:39] mmmpf, anyone ever found state servers permannently on adding vote? [22:41] nevermind
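
A sketch of the keying scheme fwereade suggests for instance status: a separate status doc keyed off the machine's global key rather than a field inside instanceData. The "#instance" suffix comes from the "m##instance" remark above, and the helper names are assumptions.

```go
package state // illustrative only, not the real juju/state code

// machineGlobalKey mirrors the existing "m#<id>" convention for machine
// global keys (an assumption here).
func machineGlobalKey(id string) string {
	return "m#" + id
}

// machineInstanceKey is the key a machine's instance-status doc would live
// under, so it can be created and removed alongside the machine's other
// status docs and flow through the usual getStatus/setStatus helpers.
func machineInstanceKey(id string) string {
	return machineGlobalKey(id) + "#instance"
}
```
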