mup | Bug #1512191 opened: worker/uniter: update tests to use mock clock <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1512191> | 01:34 |
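(Context for the bug above: the uniter tests wait on real timers; a mock clock lets the test advance time itself. A minimal, generic Go sketch of that pattern, not juju's actual clock package, with a hypothetical package name:)

```go
package mockclock

import (
	"sync"
	"time"
)

type waiter struct {
	deadline time.Time
	ch       chan time.Time
}

// Clock is a hand-advanced clock: instead of sleeping in real time,
// tests call Advance to fire pending timers deterministically.
type Clock struct {
	mu      sync.Mutex
	now     time.Time
	waiters []waiter
}

// After mimics time.After, but the returned channel only fires once
// Advance has moved the clock past the deadline.
func (c *Clock) After(d time.Duration) <-chan time.Time {
	c.mu.Lock()
	defer c.mu.Unlock()
	ch := make(chan time.Time, 1)
	c.waiters = append(c.waiters, waiter{c.now.Add(d), ch})
	return ch
}

// Advance moves the clock forward, firing every waiter whose
// deadline has now passed.
func (c *Clock) Advance(d time.Duration) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.now = c.now.Add(d)
	kept := c.waiters[:0]
	for _, w := range c.waiters {
		if w.deadline.After(c.now) {
			kept = append(kept, w)
		} else {
			w.ch <- c.now // buffered, so this never blocks
		}
	}
	c.waiters = kept
}
```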
cherylj | anastasiamac: ping? | 02:25 |
anastasiamac | cherylj: pong? | 02:25 |
cherylj | hey :) | 02:25 |
cherylj | got a few minutes to chat? | 02:25 |
anastasiamac | cherylj: isn't it sunday for u? | 02:25 |
anastasiamac | of course | 02:25 |
cherylj | technically, yes | 02:25 |
cherylj | :) | 02:25 |
anastasiamac | tomorrow's meeting? | 02:26 |
cherylj | yes, let me get my headset | 02:26 |
davechen1y | mwhudson: so far joining the IBM partner network has granted me | 03:00 |
davechen1y | 1. zero access to the things I want | 03:00 |
davechen1y | 2. spam | 03:01 |
mwhudson | davechen1y: \o/ | 03:05 |
mwhudson | davechen1y: i'm not sure i've gotten as far as getting spam | 03:06 |
jam | dimitern: did I miss you at our 1:1 ? I thought I was in the room a while | 08:47 |
dimitern | jam, sorry, I overslept :/ | 08:48 |
jam | dimitern: k. no prob. I just was making sure we still had the right schedule with tz changes | 08:53 |
dimitern | jam, yeah, the schedule is correct | 08:54 |
voidspace | dimitern: ping | 09:44 |
dimitern | voidspace, pong | 09:45 |
voidspace | dimitern: hang on - trying something | 09:48 |
voidspace | dimitern: may still need your help, will re-ping if necessary :-) | 09:48 |
dimitern | voidspace, :) sure | 09:48 |
voidspace | dimitern: dooferlad: frobware: just grabbing coffee and taking a loo break, will be a couple of minutes late to standup | 09:56 |
voidspace | sorry! | 09:56 |
dimitern | voidspace, np | 09:56 |
dimitern | jam, standup? | 10:03 |
voidspace | dimitern: omw | 10:07 |
voidspace | wallyworld: ping | 10:07 |
frobware | jam: today is the day, my first bootstrap failed with the replica set failure; the day keeps getting better... :) | 10:10 |
voidspace | dimitern: are addressable containers in 1.24? | 10:32 |
dimitern | voidspace, there are some parts of it, but it's not working fully | 10:33 |
voidspace | dimitern: so there could be people using deployed environments with addressable containers | 10:34 |
dimitern | voidspace, in 1.24 ? | 10:35 |
voidspace | dimitern: yep | 10:36 |
dimitern | voidspace, that's possible of course, but I highly doubt it | 10:36 |
voidspace | dimitern: so making them "not work" on maas 1.8 would be a backwards compatibility issue... | 10:36 |
dimitern | voidspace, they won't work on maas without devices support, i.e. <1.8.2 | 10:37 |
voidspace | dimitern: what do you mean by won't work? | 10:37 |
dimitern | voidspace, juju cannot guarantee container resources will be fully released | 10:39 |
voidspace | dimitern: don't we support the older ways of requesting addresses - we used to | 10:39 |
voidspace | dimitern: so by "won't work" you mean "will work but there might be a temporary issue later under some circumstances" | 10:40 |
voidspace | dimitern: I really dislike the abuse of the phrase "won't work" | 10:40 |
dimitern | voidspace, the "temporary" issue is quite critical for some of our users | 10:40 |
voidspace | dimitern: specific users in specific cases | 10:41 |
voidspace | dimitern: that we can communicate with | 10:41 |
dimitern | voidspace, since maas 1.8.2 is in trusty, as a user you most likely won't even see that error | 10:41 |
voidspace | dimitern: we have many users with many use cases, breaking stuff that works for one use case - when we have fixed the problem for the other use case and can communicate with them - seems like a real backwards step to me | 10:44 |
voidspace | dimitern: we're [potentially] breaking things for some users - to avoid a problem that we've already fixed another way! | 10:45 |
dimitern | voidspace, all of that depends on the definition of "works" | 10:46 |
dimitern | voidspace, does it work if you can re-do the same deployment on the same maas only a certain number of times? | 10:47 |
voidspace | dimitern: well sort of but "the feature does what it says but under some circumstances might temporarily leak resources when you've *finished using it*" is a funny definition of "doesn't work" | 10:47 |
voidspace | dimitern: using thousands of containers within a short space of time is a pretty specific use case - and one we *have addressed* | 10:48 |
voidspace | dimitern: we're not ignoring that use case, but to block *all other use cases* because of it is not good | 10:48 |
perrito666 | morning | 10:48 |
dimitern | voidspace, sorry, but that sounds to me like saying "leaving instances around after destroy-environment is not our problem, as it did work fine while the environment was running" | 10:48 |
voidspace | perrito666: morning | 10:48 |
voidspace | dimitern: leaving instances around, that cost money, would be much worse and we should really avoid it | 10:49 |
voidspace | dimitern: temporarily leaking a dhcp lease is not the same | 10:49 |
dimitern | voidspace, it's the same, but it takes more retries for it to become a problem | 10:49 |
dimitern | voidspace, the same as with memory leaks really - a small leak won't be a problem, unless you run your application for a long time | 10:50 |
dimitern | :) | 10:50 |
voidspace | dimitern: they're not at all the same | 10:50 |
voidspace | dimitern: resource leakage is not a good thing | 10:50 |
wallyworld | voidspace: hi | 10:51 |
voidspace | dimitern: having temporary leaks that don't cost money under specific known corner cases - addressed in a later release - is not the end of the world | 10:51 |
voidspace | dimitern: stopping *existing deployments* working, is much worse | 10:51 |
dimitern | voidspace, why do you keep calling the leaks "temporary"? it's not like they're going away by themselves after a while | 10:52 |
voidspace | dimitern: the dhcp lease expires, true? | 10:52 |
dimitern | voidspace, that depends on the dhcp server config | 10:52 |
voidspace | wallyworld: I would like to talk to you about bug 1403689 | 10:53 |
mup | Bug #1403689: Server should handle tools of unknown or unsupported series <upgrade-juju> <upload-tools> <juju-core:Fix Released by wallyworld> <juju-core 1.24:Triaged> <juju-core 1.25:Fix Released by wallyworld> <https://launchpad.net/bugs/1403689> | 10:53 |
wallyworld | sure | 10:53 |
dimitern | voidspace, I think MAAS dhcpd uses rather long leases by default | 10:53 |
voidspace | wallyworld: did you fix it in the server or client? | 10:54 |
voidspace | wallyworld: server I assume | 10:54 |
wallyworld | voidspace: tim had already found and fixed most of the cases of mapping series -> version, but there was one place in simplestreams search that was not covered | 10:55 |
voidspace | dimitern: this is a specific use case for *experimentation*, real users aren't burning through all their dhcp leases! I'm not saying ignore the issue - we've fixed it! (Requiring maas 1.8). | 10:55 |
voidspace | dimitern: however blocking all other "normal uses" because of it, seems wrong / bad | 10:55 |
wallyworld | voidspace: so all the usages that i can see that would panic or return an error have been patched | 10:55 |
voidspace | wallyworld: where? | 10:56 |
voidspace | wallyworld: we have this problem on 1.20 / 1.22 servers... | 10:56 |
dimitern | voidspace, can you explain which "normal users" will be blocked? | 10:56 |
voidspace | wallyworld: which can't be upgraded with --upload-tools | 10:56 |
wallyworld | i fixed it in master | 10:56 |
wallyworld | and 1.25 i think | 10:56 |
voidspace | dimitern: anyone using addressable containers | 10:56 |
dimitern | voidspace, for existing environments, it will keep working as before | 10:56 |
wallyworld | upload-tools is bad | 10:56 |
dimitern | voidspace, for new environments, the new behavior is enforced by default | 10:56 |
wallyworld | we try not to encourage its use | 10:57 |
voidspace | wallyworld: however if you want to give users new binaries to test a fix it is what we have | 10:57 |
wallyworld | is there a use case for it? | 10:57 |
voidspace | wallyworld: unless you can suggest an alternative? | 10:57 |
voidspace | dimitern: upgrading a deployed environment | 10:57 |
dimitern | voidspace, the only affected users will be those using maas 1.7 or earlier | 10:57 |
wallyworld | our policy afaik is to get them to upgrade to the latest stable release | 10:57 |
wallyworld | aka 1.25 | 10:57 |
wallyworld | unless that's changed | 10:58 |
voidspace | wallyworld: that upgrade doesn't work | 10:58 |
wallyworld | from 1.22 to 1.25? | 10:58 |
voidspace | wallyworld: we need to know if a proposed change has fixed the problem they have | 10:58 |
voidspace | wallyworld: yep | 10:58 |
voidspace | wallyworld: lots of horrible problems | 10:58 |
wallyworld | we haven't caught that in CI? | 10:59 |
voidspace | nope | 10:59 |
wallyworld | CI should have flagged those issues | 10:59 |
voidspace | wallyworld: https://bugs.launchpad.net/juju-core/+bug/1507867 | 10:59 |
mup | Bug #1507867: juju upgrade failures <canonical-bootstack> <upgrade-juju> <juju-core:Triaged by hduran-8> <https://launchpad.net/bugs/1507867> | 10:59 |
voidspace | wallyworld: for a specific user | 10:59 |
wallyworld | looking | 10:59 |
wallyworld | voidspace: ah right ignore-machine-addresses | 10:59 |
voidspace | wallyworld: not just that though | 11:00 |
wallyworld | what else? | 11:00 |
wallyworld | there was a mongo corruption | 11:00 |
voidspace | yep | 11:00 |
wallyworld | but we were waiting for logs | 11:00 |
wallyworld | mongo got corrupt before upgrade | 11:00 |
wallyworld | and could be fixed by running repairDatabase() | 11:00 |
voidspace | wallyworld: meanwhile, I've fixed the ignore-machine-addresses issue | 11:00 |
wallyworld | yay | 11:01 |
voidspace | wallyworld: but I can't get them to test that | 11:01 |
wallyworld | what about trying upload-tools with a 1.25 client? | 11:01 |
wallyworld | and having a custom jujud in the path | 11:01 |
wallyworld | unless we backport all the series version fixes (and there were several), older clients will get stuck i expect | 11:02 |
wallyworld | and we are not doing any new 1.20/1.22 releases | 11:02 |
dimitern | voidspace, so is the whole argument about displaying a warning if we detect no devices api instead of an error? | 11:02 |
voidspace | wallyworld: is the fix in the client then? | 11:04 |
voidspace | wallyworld: the 1.25 client won't attempt to upload a version that the 1.22 server rejects? | 11:04 |
voidspace | dimitern: a warning would be better, rather than refusing to create a new container (for deployed environment that may already have addressable containers created under 1.24) | 11:05 |
voidspace | wallyworld: sinzui said he *would* do a new 1.22 release if required for this bug | 11:05 |
wallyworld | voidspace: --upload-tools will IIRC choose a jujud in the path - so you put the jujud that you want to test where the client can see it | 11:05 |
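(A sketch of the workflow wallyworld describes, using the juju 1.x CLI; the environment name and paths are made up:)

```sh
# Build the patched agent and put it first on PATH so the client
# uploads it instead of the published tools.
go build -o ~/testbin/jujud github.com/juju/juju/cmd/jujud
PATH=~/testbin:$PATH juju upgrade-juju -e my-maas-env --upload-tools
```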
dimitern | voidspace, in this specific case, I agree | 11:05 |
voidspace | dimitern: \o/ :-) | 11:05 |
dimitern | voidspace, :) | 11:06 |
wallyworld | voidspace: and using a 1.25 client with all the series version fixes should work (that's my theory) | 11:06 |
voidspace | wallyworld: ok | 11:06 |
dimitern | voidspace, how about in other cases - new environment on older maas (<1.8)? | 11:06 |
wallyworld | voidspace: if we are to do a new 1.22, then all the series version fixes from tim and me would need backporting | 11:06 |
voidspace | wallyworld: right | 11:06 |
voidspace | dimitern: I care less I guess - but I don't think addressable containers are broken just because deploying thousands of them and using destroy-environment force causes an issue | 11:07 |
voidspace | dimitern: that's a very specific (and experimental) use case - that we have a fix for | 11:07 |
voidspace | dimitern: so even then, preventing addressable containers seems wrong to me | 11:07 |
voidspace | dimitern: not the world's worst wrong, only a minor wrong... | 11:07 |
voidspace | dimitern: so I would prefer a warning then too | 11:07 |
wallyworld | voidspace: we can do that backport if needed. but it would be interesting to try the 1.25 client with a custom 1.22 jujud pushed up via upload-tools | 11:08 |
voidspace | wallyworld: however, we are seeing --upload-tools *not work* on 1.22 (with a custom jujud in the path) | 11:08 |
voidspace | wallyworld: try it yourself, deploy 1.22 then try --upload-tools with only the new jujud in the path | 11:08 |
voidspace | wallyworld: you hit the wily bug | 11:08 |
wallyworld | voidspace: it would be interesting to see the error then so we can see where the issue is | 11:08 |
dimitern | voidspace, how about an extra flag - error by default, with the flag - warning and proceed? | 11:08 |
voidspace | dimitern: more flags! don't like it | 11:09 |
wallyworld | voidspace: ok, i'll try, but likely tomorrow | 11:09 |
wallyworld | need to finish some other stuff tonight | 11:09 |
voidspace | wallyworld: I'll try again today and email you (currently working on a different environment) | 11:09 |
wallyworld | ok | 11:09 |
voidspace | wallyworld: and confirm that I can't upgrade from 1.22 to latest trunk | 11:09 |
voidspace | wallyworld: and you can believe me or not! :-) | 11:09 |
wallyworld | man, we need to fix our upgrades | 11:10 |
dimitern | voidspace, users are unlikely to see a mere warning in the case where juju is used as a tool (e.g. autopilot or a scripted deployer-based deployment) | 11:10 |
wallyworld | and figure out why CI didn't catch the issues | 11:10 |
voidspace | yeah | 11:10 |
wallyworld | voidspace: i believe you but just don't have enough info yet | 11:10 |
voidspace | wallyworld: sure :-) | 11:10 |
wallyworld | :-P | 11:10 |
voidspace | wallyworld: I'll email you and you can tell me what more diagnostic information you need | 11:10 |
wallyworld | once i see the symptoms i can look at the code and see where the issue might be | 11:10 |
voidspace | dimitern: users are unlikely to hit the problem | 11:11 |
wallyworld | ok, and i'll try also | 11:11 |
voidspace | dimitern: and if they do we have a known fix for them | 11:11 |
voidspace | wallyworld: thanks | 11:11 |
dimitern | voidspace, which is? | 11:11 |
voidspace | dimitern: upgrade maas... | 11:11 |
dimitern | voidspace, and how are we communicating that to the users? | 11:11 |
voidspace | dimitern: all our available communication channels | 11:12 |
voidspace | dimitern: creating hundreds of containers and then destroying them is a pretty specific use case | 11:13 |
dimitern | voidspace, like the docs that suggest "oh, and by the way just in case set disable-network-management: true in your environments.yaml" ? :) | 11:13 |
voidspace | heh | 11:13 |
dimitern | voidspace, yeah, it's one of the cases we should support well - for density | 11:14 |
voidspace | dimitern: yep, I definitely agree we should make it work | 11:14 |
voidspace | we don't really have any choice in that matter | 11:14 |
voidspace | dimitern: new topic | 11:19 |
voidspace | dimitern: hopefully less contentious | 11:19 |
dimitern | voidspace, ok :) | 11:19 |
voidspace | dimitern: I'm trying to recreate the ignore-machine-addresses issue | 11:19 |
dimitern | voidspace, yeah? | 11:19 |
voidspace | dimitern: I have a deployed environment (current trunk) with a deployed unit of wordpress | 11:19 |
voidspace | dimitern: on that machine I've added a new nic | 11:20 |
voidspace | dimitern: this is my nic definition http://pastebin.ubuntu.com/13081446/ | 11:20 |
voidspace | dimitern: I see "eth0:1" with that assigned (and spurious) 10.0 address when I do ifconfig | 11:21 |
voidspace | dimitern: but I don't see any issue with the machine from juju | 11:21 |
voidspace | dimitern: it isn't visibly picking up that new (wrong) address | 11:21 |
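(The pastebin contents aren't preserved in the log; for illustration, an eth0:1 alias of the kind voidspace describes would look roughly like this in /etc/network/interfaces, with a hypothetical address:)

```
auto eth0:1
iface eth0:1 inet static
    address 10.0.0.99
    netmask 255.255.255.0
```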
dimitern | voidspace, did you wait ~10m for the instance poller to try refreshing the machine addresses? | 11:21 |
voidspace | dimitern: no... | 11:21 |
voidspace | dimitern: :-) | 11:21 |
voidspace | dimitern: I'll go get coffee and see what happens | 11:21 |
dimitern | voidspace, :) | 11:22 |
voidspace | dimitern: thanks | 11:22 |
dimitern | voidspace, np, might not be the only thing, but I'd start there | 11:22 |
voidspace | dimitern: cool | 11:22 |
jam | dooferlad: frobware: I'm back around if you wanted to chat | 11:27 |
rogpeppe | i need a review of this please, towards fixing a juju critical bug: https://github.com/juju/persistent-cookiejar/pull/9 | 11:28 |
rogpeppe | mgz_: ^ | 11:29 |
rogpeppe | mgz_: i don't think that this will entirely fix CI problems with the cookies though | 11:29 |
jam | frobware, dooferlad, dimitern, voidspace: did any of you get a chance to play with the updated kvm_mass script? | 11:41 |
dooferlad | jam not I | 11:41 |
jam | k | 11:41 |
jam | dooferlad: did you have any other questions about bug #1510651? | 11:41 |
mup | Bug #1510651: Agents are "lost" after terminating a state server in an HA env <bug-squad> <ensure-availability> <juju-core:Triaged by dooferlad> <https://launchpad.net/bugs/1510651> | 11:41 |
dimitern | jam, not yet, I have to fix my vmaas first | 11:42 |
dooferlad | jam: I probably will have, just not yet. | 11:42 |
frobware | jam: not me either | 11:43 |
jam | k. I'm happy to get feedback if there are thoughts about what could make it better | 11:43 |
jam | the next step I was considering was creating networks | 11:43 |
frobware | jam: I have 12+ nodes in various combos already. | 11:43 |
jam | say you could tell maas what networks and what spaces you wanted, and then it would make sure those existed in libvirt | 11:43 |
jam | frobware: hopefully a given node isn't in more than one maas, given each maas wants to control its subnet | 11:44 |
frobware | jam: no I have some half-baked naming scheme that mostly keeps me out of trouble. | 11:44 |
jam | frobware: heh. I was just using "m1-foo1" and was planning to go to "m2" if I set up another maas. | 11:47 |
voidspace | jam: not yet | 11:49 |
voidspace | dimitern: no dice | 11:50 |
voidspace | dimitern: it still reports imaginative-hose.maas as the dns name | 11:50 |
voidspace | dimitern: and I can still ssh to the machine via "juju ssh 1" | 11:50 |
voidspace | dimitern: I guess imaginative-hose still sorts earlier | 11:51 |
voidspace | dimitern: although 10.0 should sort before 172.16 - anything else I can do to trigger the bug | 11:51 |
dimitern | voidspace, but do you see the extra address you added? | 11:51 |
voidspace | dimitern: see it where? | 11:51 |
dimitern | voidspace, well, in the log - as part of the machine addresses | 11:52 |
voidspace | dimitern: I'll check | 11:52 |
voidspace | dimitern: not in all-machines.log | 11:53 |
voidspace | dimitern: I'll change the log level and check again | 11:53 |
voidspace | in 10 minutes... | 11:53 |
dimitern | voidspace, for the sake of the test, you could reduce the instance poller timeout | 11:54 |
voidspace | dimitern: yeah, adding better instrumentation would be a good idea too | 11:54 |
voidspace | dimitern: thanks | 11:54 |
dimitern | voidspace, looking at the network package, the address sort order is: public IPs first, hostnames next (except "localhost"), cloud-local, machine-local, link-local | 11:55 |
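(A self-contained Go sketch of that ordering; generic code, not juju's actual network package:)

```go
package main

import (
	"fmt"
	"net"
	"sort"
)

// rank encodes the order dimitern describes: public IPs first,
// hostnames next (except "localhost"), then cloud-local,
// machine-local, and link-local addresses.
func rank(addr string) int {
	ip := net.ParseIP(addr)
	if ip == nil { // not an IP literal: treat as a hostname
		if addr == "localhost" {
			return 5
		}
		return 1
	}
	switch {
	case ip.IsLoopback(): // machine-local
		return 3
	case ip.IsLinkLocalUnicast() || ip.IsLinkLocalMulticast():
		return 4
	case isRFC1918(ip): // cloud-local
		return 2
	default: // public
		return 0
	}
}

// isRFC1918 reports whether ip is in a private IPv4 range.
func isRFC1918(ip net.IP) bool {
	v4 := ip.To4()
	if v4 == nil {
		return false
	}
	return v4[0] == 10 ||
		(v4[0] == 172 && v4[1]&0xf0 == 16) ||
		(v4[0] == 192 && v4[1] == 168)
}

func main() {
	addrs := []string{"10.0.0.5", "imaginative-hose.maas", "8.8.8.8", "127.0.0.1", "localhost"}
	sort.SliceStable(addrs, func(i, j int) bool { return rank(addrs[i]) < rank(addrs[j]) })
	fmt.Println(addrs) // [8.8.8.8 imaginative-hose.maas 10.0.0.5 127.0.0.1 localhost]
}
```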
voidspace | dimitern: I'll try and find the bug report and see if it has repro instructions | 11:55 |
dimitern | voidspace, there's also the piece of code in maas that *always* adds the hostname of the machine in the response of the provider addresses | 11:56 |
voidspace | dimitern: yeah, but the bug was a problem for maas users - so it is obviously possible to trigger it | 11:57 |
dimitern | voidspace, I think the difference is machines hosting units (and needing a preferred private address) and machines not hosting units (which only need the public address to display in status) | 11:58 |
dimitern | voidspace, so I'd try not add-machine + add extra IP, but deploy a unit and then add extra IP on that machine | 11:58 |
voidspace | dimitern: I did the latter anyway (used deploy and not add-machine) | 12:00 |
voidspace | I'll find the bug report | 12:00 |
rogpeppe | if you want master unblocked, could someone please review this? https://github.com/juju/persistent-cookiejar/pull/9 | 12:00 |
dimitern | rogpeppe, reviewed | 12:01 |
rogpeppe | dimitern: ta! | 12:01 |
rick_h__ | morning | 12:59 |
mup | Bug #1512371 opened: Using MAAS 1.9 as provider using DHCP NIC will prevent juju bootstrap <juju-core:New> <https://launchpad.net/bugs/1512371> | 14:45 |
cmars | wwitzel3, can i get a review of http://reviews.vapour.ws/r/3040/ ? (fixes-1511717) | 15:16 |
cmars | wwitzel3, thanks! | 15:17 |
mgz_ | cmars: if a user has both an old juju client installed, and a newer juju in ~/.local or something for testing a shiny new feature | 15:21 |
mgz_ | do we break them? | 15:22 |
cmars | mgz_, hmm.. i guess such a user would need to use separate JUJU_HOME directories in that case, wouldn't they? | 15:23 |
mgz_ | well, I know they don't in practice | 15:23 |
cmars | mgz_, but they'd have to, because the newer juju will have providers that the old juju doesn't understand | 15:23 |
mgz_ | when I give someone a binary to test I don't say "only use this with JUJU_HOME=/tmp" | 15:23 |
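(The isolation cmars refers to: juju 1.x reads its configuration root from the JUJU_HOME environment variable, so a test binary can be pointed at a throwaway directory. The environment name here is assumed:)

```sh
export JUJU_HOME=$HOME/.juju-test   # keep test state away from ~/.juju
~/.local/bin/juju bootstrap -e test-env
```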
mgz_ | natefinch: I believe we are still on step #1: make the unit tests pass with go 1.5 | 15:34 |
natefinch | mgz_: oh man. is there a list of what needs to be fixed? The LXD provider is dependent on go 1.3+ due to limitations with the LXD Go library | 15:35 |
mgz_ | the remaining issues with run-unit-tests-wily-amd64 look like big environmental things rather than nice easy things like map ordering | 15:35 |
mgz_ | bug 1494951 looks like one place to start | 15:36 |
mup | Bug #1494951: Panic "unexpected message" in vivid and wily tests <bug-squad> <ci> <intermittent-failure> <panic> <unit-tests> <wily> <juju-core:Triaged> <https://launchpad.net/bugs/1494951> | 15:36 |
natefinch | mgz_: do you know if there's a team assigned to get us working on 1.5? | 15:37 |
mgz_ | I know that some of the other-way-uppers have fixed bugs relating to it, but just as good citizens | 15:37 |
mfoord | dimitern: ping | 15:37 |
katco | frobware 's team is on bug squad i think | 15:37 |
katco | natefinch: mgz_: frobware: seems like getting 1.5 bugs fixed needs to be high priority | 15:38 |
ericsnow | with a plugin provider it wouldn't be a short-term issue... | 15:39 |
ericsnow | just sayin' :) | 15:39 |
mgz_ | the other thing I see a lot of in the history is worker/peer group related test failures | 15:39 |
natefinch | ericsnow: we'd still need 1.5 support in trusty, and I don't think we'd also have 1.2 in trusty, so that would be a problem | 15:40 |
ericsnow | natefinch: true | 15:40 |
mgz_ | yeah, we can't backport toolchain to trusty | 15:40 |
frobware | katco, ack | 15:41 |
dimitern | mfoord, pong | 15:41 |
mfoord | dimitern: reading through the ignore-machine-addresses bug it looks like it only affected containers | 15:42 |
mfoord | dimitern: is that true? | 15:42 |
mfoord | dimitern: https://bugs.launchpad.net/juju-core/+bug/1463480 | 15:42 |
mup | Bug #1463480: Failed upgrade, mixed up HA addresses <blocker> <canonical-bootstack> <ha> <upgrade-juju> <juju-core:Fix Released by wallyworld> <juju-core 1.22:Fix Committed by thumper> <juju-core 1.24:Fix Released by wallyworld> <hacluster (Juju Charms Collection):New> <https://launchpad.net/bugs/1463480> | 15:42 |
mfoord | I assume without addressable containers on as they're starting pre-1.24 | 15:42 |
dimitern | mfoord, yeah | 15:42 |
mfoord | also it looks hard to reproduce (timing related) | 15:42 |
mfoord | dimitern: so to reproduce this I really need to add an lxc container | 15:43 |
mfoord | and add the virtual nic there (?) | 15:43 |
mfoord | I have a build with extra instrumentation and a shorter poll time on the instancepoller | 15:43 |
dimitern | mfoord, let me think | 15:43 |
natefinch | mgz_: how can we require 1.5 if 1.5 is not in trusty? | 15:43 |
mfoord | although it looks to me like the instancepoller only requests provider addresses and that machine addresses are done by the machiner | 15:43 |
dimitern | mfoord, yeah, the machine addresses are updated on machiner startup | 15:44 |
mfoord | dimitern: so really rebooting the machine should trigger it | 15:44 |
mfoord | dimitern: I'll add a container and reboot | 15:44 |
mfoord | once the container is up | 15:44 |
dimitern | mfoord, no need to reboot - just restart the machine agent | 15:44 |
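(On trusty the juju machine agent runs as an upstart job named after the machine, so the restart dimitern suggests is roughly the following; the machine ID is an assumption:)

```sh
# machine ID taken from `juju status`; "1" is hypothetical here
sudo service jujud-machine-1 restart
```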
mfoord | dimitern: but should I add the extra nic to the container or to the host | 15:45 |
mfoord | or both just to be sure... | 15:45 |
dimitern | mfoord, I guess both, and that address should be like 10.0.0.x | 15:45 |
mfoord | dimitern: ok | 15:45 |
mfoord | thanks | 15:45 |
mgz_ | natefinch: you can't, but how are you running an lxd provider on trusty? | 15:47 |
natefinch | mgz_: Is trusty never having anything beyond juju 1.25? | 15:47 |
mgz_ | natefinch: that's not the plan, but I don't know what the intention is with your lxd provider work | 15:50 |
natefinch | mgz_: our intention is to have a juju provider that uses lxd in 1.26 | 15:50 |
mgz_ | so, what's your plan with the existing backports to trusty scheme? | 15:51 |
* mgz_ enjoys circular conversations | 15:52 |
mfoord | our normal plan is to leave that up to QA to sort out... | 15:53 |
natefinch | ^^ | 15:53 |
mgz_ | how to release software you're writing is not someone else's problem | 15:53 |
natefinch | mgz_: it is when someone else is putting up the restrictions while simultaneously telling us to deliver software that has a problem with those restrictions | 15:53 |
mgz_ | you do know those two things are not from me, and are different parties, right? | 15:54 |
natefinch | mgz_: absolutely. Sorry if my tone indicated I thought it was your fault. I know it's not. | 15:54 |
mgz_ | the distro, and sanity in general, limits what we can do in terms of backports | 15:54 |
mgz_ | and mark, and the desire for shiny features, wants everyone to have a great experience | 15:55 |
natefinch | mgz_: I guess the answer is, people at a higher pay grade are going to have to figure out what to do | 15:55 |
natefinch | mgz_: the LXD provider will fail gracefully if lxd is not installed... but the code still requires 1.5 to build. | 15:56 |
mgz_ | there's no such thing as a !build for go versions, right... | 15:56 |
mgz_ | we can always do the equivalent in the debian rules, just rm the package | 15:57 |
natefinch | mgz_: no, but you can just set flags at build time to trigger !build code | 15:57 |
natefinch | mgz_: however, having the same codebase support both 1.2 and 1.5 would seem to be adding a lot of developer/qa/etc overhead.... but again, that's above my pay grade. | 15:58 |
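(For the record, the go tool does define a build tag for each release, go1.1 through the current version, so the split natefinch describes needs no custom -tags. A sketch with hypothetical file and package names:)

```go
// file: lxd_go15.go
// Compiled only by Go >= 1.5: the go tool sets a "go1.x" release
// tag for every version up to its own.

// +build go1.5

package lxdprovider

const lxdSupported = true
```

```go
// file: lxd_pre15.go
// The inverse stub, picked up by older toolchains such as trusty's
// go 1.2.1, keeps the rest of the tree building.

// +build !go1.5

package lxdprovider

const lxdSupported = false
```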
mgz_ | anyway, the first thing is getting it working well in development | 15:58 |
mgz_ | natefinch: well, it's what we do currently, and isn't too hard | 15:59 |
mgz_ | I know it's anti-go, but python code manages to support multiple *interpreter* versions okay | 15:59 |
mgz_ | trusty has go 1.2.1. vivid has go 1.3.3. wily/xenial have go 1.5.1+. | 16:01 |
alexisb | mgz_, natefinch trusty will need 1.5 for lxd as well as juju | 16:01 |
alexisb | the current plan is to work on getting it into backports | 16:01 |
alexisb | so it can be used both by juju and lxd | 16:01 |
mgz_ | alexisb: actual backports? or SRUed? | 16:02 |
alexisb | mgz_, actual backports | 16:02 |
alexisb | sru not needed | 16:02 |
mgz_ | okay, ace. so, the provider failing neatly is a requirement. | 16:03 |
mup | Bug #1512399 opened: ERROR environment destruction failed: destroying storage: listing volumes: Get https://x.x.x.x:8776/v2/<UUID>/volumes/detail: local error: record overflow <amulet> <openstack> <uosci> <juju-core:New> <https://launchpad.net/bugs/1512399> | 16:06 |
mgz_ | anyway, this is something to work out early, thanks for asking nate, we do want to know exactly how we're getting the lxc provider distributed. | 16:10 |
rogpeppe | mgz_: do you know whether cookie isolation in CI has been done yet? | 16:25 |
mgz_ | jog set the env var as you'd discussed, not sure it's everywhere it's needed but at least in the obvious place. | 16:28 |
mgz_ | hm, actually that change got reverted | 16:28 |
mgz_ | rogpeppe: gimme a sec, I'll find out. | 16:28 |
jog | mgz_, It broke other tests | 16:28 |
jog | older versions of Juju | 16:28 |
mup | Bug #1511771 opened: regression setting tools-metadata-url <blocker> <ci> <regression> <set-env> <juju-core:Triaged> <https://launchpad.net/bugs/1511771> | 17:12 |
natefinch | ericsnow: you mentioned in your review of my better error message PR that I should rebase against either your or wayne's personal branches... I hesitate to rebase against a personal branch. Are you guys going to get one of those things landed soon so I can just rebase against the main lxd branch? | 17:28 |
ericsnow | natefinch: just waiting for your reviews :) | 17:28 |
natefinch | ericsnow: that the support using local lxd as remote? | 17:29 |
ericsnow | natefinch: http://reviews.vapour.ws/r/3012/ and http://reviews.vapour.ws/r/3013/ | 17:29 |
natefinch | ericsnow: ok, yeah, I'm looking at those now. Guess it'll be an unofficial review day for me | 17:30 |
ericsnow | natefinch: thanks | 17:30 |
frobware | mfoord, a heads-up on our recent changes to rendering /e/n/i --- https://bugs.launchpad.net/juju-core/+bug/1512371 | 17:32 |
mup | Bug #1512371: Using MAAS 1.9 as provider using DHCP NIC will prevent juju bootstrap <bug-squad> <maas-provider> <network> <juju-core:Triaged> <https://launchpad.net/bugs/1512371> | 17:32 |
natefinch | ericsnow: gah... saw the copied file from github.com/lxd, so I went to look at their repo for licensing... and they don't even have a LICENSE file. Geez | 17:34 |
ericsnow | natefinch: yep | 17:34 |
cmars | mgz_, here's a cookie update for 1.25, what do you think? http://reviews.vapour.ws/r/3041/ | 17:39 |
natefinch | ericsnow: wow, that lxd/shared.GenCert function is awful. Have you filed a bug to their project to de-awful it? | 17:54 |
ericsnow | natefinch: was waiting :) | 17:54 |
natefinch | ericsnow: for what? | 17:55 |
ericsnow | natefinch: until we had settled down on our LXD provider work | 17:55 |
natefinch | ericsnow: If that's the only way to create certs for LXD, seems pretty awful regardless of what anyone else is doing | 17:56 |
ericsnow | natefinch: agreed | 17:57 |
natefinch | ericsnow: I'm willing to write a bug now if you'd prefer. | 17:58 |
ericsnow | natefinch: sure, though I'd prefer the reviews first :) | 17:58 |
natefinch | ericsnow: right right | 17:58 |
mfoord | frobware: I saw | 18:03 |
mfoord | frobware: ouch | 18:03 |
mfoord | frobware: although I don't think it's the recent changes to be fair, I think it's maas 1.9 | 18:03 |
mfoord | frobware: will look tomorrow | 18:03 |
mfoord | EOD | 18:03 |
davechen1y | thumper: afk breakfast | 20:57 |
davechen1y | i'll call you after that | 20:57 |
mup | Bug #1512481 opened: register dns names for units in MAAS <juju-core:New> <https://launchpad.net/bugs/1512481> | 21:01 |
thumper | davechen1y: ack | 21:09 |
cmars | hey waigani i'd like to try writing a CI test. where should I start? | 21:34 |
waigani | cmars: https://github.com/juju/juju/wiki/ci-tests :) | 21:37 |
cmars | waigani, thanks | 21:38 |
waigani | np | 21:38 |
perrito666 | ah finally, the server holding my irc got cut from part of the world | 22:33 |
perrito666 | (and by cut I mean something sliced the fiber) | 22:33 |
mwhudson | perrito666: https://www.reddit.com/r/cablefail/comments/1y2ei8/lost_in_the_woods_call_a_backhoe/cfh4wox | 22:39 |
perrito666 | lol | 22:41 |
davechen1y | thumper: ping | 23:26 |