/srv/irclogs.ubuntu.com/2016/10/06/#juju-dev.txt

axwwallyworld: would you please stamp https://github.com/juju/juju/pull/6380?00:19
wallyworldsure00:19
wallyworldaxw: what was the rationale for not doing server side filtering of the machines?00:31
axwwallyworld: so we don't break backwards compat00:32
axwwallyworld: we'll have existing deployments with juju-<long-UUID>-, so we either search with prefix "juju-" or we can't change the format00:32
wallyworldoh right, because existing envs will use non-truncated00:32
axwyup00:33
wallyworldseems like the only solution, but it does seem icky00:33
wallyworldaxw: we take the last 6 chars of the uuid now. can't we just do a machine filter on that? ie modify the filter regexp?00:36
axwwallyworld: I guess we could do "juju-.*(last-6-chars).*"  -- but this way we have freedom to change the format later00:37
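For reference, a minimal sketch of the client-side filter being discussed: match provider instance names against "juju-.*<last-6-chars>.*" so both the old long-UUID names and a truncated format are caught. The function and instance names below are illustrative, not Juju's actual code:

    package main

    import (
        "fmt"
        "regexp"
    )

    // filterModelMachines keeps only instance names that appear to belong to the
    // model whose UUID ends in uuidSuffix. The loose "juju-.*<suffix>.*" pattern
    // tolerates both "juju-<full-uuid>-machine-N" and a shorter truncated form.
    func filterModelMachines(names []string, uuidSuffix string) []string {
        re := regexp.MustCompile("^juju-.*" + regexp.QuoteMeta(uuidSuffix) + ".*$")
        var matched []string
        for _, name := range names {
            if re.MatchString(name) {
                matched = append(matched, name)
            }
        }
        return matched
    }

    func main() {
        names := []string{
            "juju-0123456789ab-cdef-0123-4567-89ab9f86d0-machine-1", // old long-UUID form
            "juju-9f86d0-machine-0",                                 // truncated form
            "unrelated-instance",
        }
        fmt.Println(filterModelMachines(names, "9f86d0"))
    }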
wallyworldmaybe the number of machines needing to be filtered client side is nothing to worry about00:38
axwyeah, I don't think so00:38
wallyworldi wonder what ec2 does in this area00:38
axwwallyworld: ec2 allows you to filter on tags, so it's a bit different00:39
wallyworldfair enough00:39
wallyworldlgtm then00:40
axwta00:41
wallyworldmenn0: axw: 99% of this is s/@local// and s/Canonical()/Id()/ and s/old upgrade code//. The bit that needs attention is the upgrade steps. See if you get a chance today to look at https://github.com/juju/juju/pull/638801:19
menn0wallyworld: will take a look soon01:20
axwwallyworld: ok, just finishing off something and will look01:20
wallyworldta, no rush01:20
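A toy illustration of the s/@local// idea above: treat "admin" and "admin@local" as the same user when parsing tags. This is an illustrative helper only, not the real juju/names code:

    package main

    import (
        "fmt"
        "strings"
    )

    // canonicalUser strips the implicit "@local" domain so that user names
    // written before and after the change compare equal.
    func canonicalUser(name string) string {
        return strings.TrimSuffix(name, "@local")
    }

    func main() {
        fmt.Println(canonicalUser("admin@local") == canonicalUser("admin")) // true
    }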
* redir goes EoD01:54
thumperfuck, what? http://juju-ci.vapour.ws:8080/job/github-merge-juju/9427/artifact/artifacts/trusty-out.log02:36
thumperpanic: runtime error: invalid memory address or nil pointer dereference02:36
thumper[signal 0xb code=0x1 addr=0x20 pc=0x8bf770]02:36
thumperin mgo02:36
thumpernot seen that before02:36
thumpermenn0: got a few minutes?02:40
menn0thumper: yep02:40
menn0thumper: that mgo panic is new to me as well02:42
menn0thumper: during logging by the looks02:43
axwwallyworld: I'm going out shortly for lunch, will be out for a while. so I'll have to review properly later. I added a few comments02:48
wallyworldok, ta02:48
axwwallyworld: if you get a chance, https://github.com/juju/juju/pull/6390. it looks bigger than it is. most of the diff is in auto-generated stuff02:53
wallyworldsure02:53
menn0wallyworld: I intentionally got rid of assertSteps and assertStateSteps03:07
menn0wallyworld: you don't need them03:08
wallyworldmenn0: oh ok, those tests can be deleted?03:08
menn0wallyworld: yep03:08
menn0wallyworld: instead you use findStep03:08
menn0wallyworld: this confirms that a given Step with a certain description exists for the specified version03:09
menn0wallyworld: and then gives you the Step so that it can be tested03:09
menn0wallyworld: kills 2 birds with one stone03:09
wallyworldmenn0: ok. in this case though, i am testing the step itself in state03:10
menn0wallyworld: ok, well just have a test which calls findStep and have a comment explaining that it's tested elsewhere03:10
wallyworldok03:10
menn0given that it's a state step you might need a findStateStep variant03:11
wallyworldright. i saw that sort of thing was missing and assumed it would be those assert funcs03:11
menn0I just hadn't implemented findStateStep b/c it wasn't needed yet03:12
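Roughly, the test helper pattern menn0 is describing: look up a single upgrade step by version and description, then exercise it directly, instead of asserting the whole step list. Types and signatures here are illustrative, not the actual upgrades package:

    package main

    import "fmt"

    // step stands in for an upgrade step: a target version, a description and
    // the code it runs.
    type step struct {
        version     string
        description string
        run         func() error
    }

    // findStep returns the registered step matching version and description, so
    // a test can both assert that it exists and call it directly. A
    // findStateStep variant would do the same for state-backed steps.
    func findStep(steps []step, version, description string) (step, bool) {
        for _, s := range steps {
            if s.version == version && s.description == description {
                return s, true
            }
        }
        return step{}, false
    }

    func main() {
        steps := []step{{
            version:     "2.0.0",
            description: "strip @local from user names",
            run:         func() error { return nil },
        }}
        if s, ok := findStep(steps, "2.0.0", "strip @local from user names"); ok {
            fmt.Println("found step; running it returned:", s.run())
        }
    }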
menn0I probably should have to make it clearer03:12
wallyworldi wasn't across the changes, just went with what i knew :-)03:13
menn0wallyworld: totally understandable - I should have made it more obvious03:14
wallyworldglad i asked you to review :-)03:14
menn0:-)03:16
menn0wallyworld: review done03:45
wallyworldtyvm, will look after i finish reviewing andrew's pr03:46
axwwallyworld: can you please review https://github.com/go-amz/amz/pull/71, I need that for my ec2 change. looking at your PR again now07:01
wallyworldsure07:01
axwwallyworld: https://github.com/juju/juju/pull/6369 also needs a second review (sorry, feel free to pass if you're busy)07:04
wallyworldtis ok07:04
=== frankban|afk is now known as frankban
wallyworldaxw: cert cleanup looks ok. will be good to get this fixed before release. chris is also working on a leak with state objects07:19
axwwallyworld: cool, ta07:19
axwwallyworld: reviewed07:28
wallyworldta07:28
wallyworldaxw: yeah, the error can only ever be notfound. bool is better07:29
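The API point being made: when the only failure mode is "not found", returning a bool reads better than returning an error the caller has to compare against. A generic sketch, not the code under review:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("not found")

    // lookupWithError forces every caller to compare the error against
    // errNotFound even though nothing else can go wrong.
    func lookupWithError(m map[string]string, key string) (string, error) {
        v, ok := m[key]
        if !ok {
            return "", errNotFound
        }
        return v, nil
    }

    // lookup makes the single failure mode part of the signature.
    func lookup(m map[string]string, key string) (string, bool) {
        v, ok := m[key]
        return v, ok
    }

    func main() {
        m := map[string]string{"controller": "aws"}
        if v, ok := lookup(m, "controller"); ok {
            fmt.Println(v)
        }
        if _, err := lookupWithError(m, "missing"); err == errNotFound {
            fmt.Println("caller had to compare the error value")
        }
    }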
wallyworldaxw: for some reason, doing another manual test - api returns errperm after upgrade, even though db is all properly migrated etc. maybe there's a macaroon issue and you need to logout before upgrading and then login again. not sure yet. pita to diagnose07:31
axwwallyworld: hmm. try deleting your cookie jar and logging back in07:31
axwwallyworld: the macaroon will have user@local in it07:32
wallyworldaxw: yeah, that was my suspicion07:32
wallyworldbut no luck with deleting cookie jar. something else is messing up07:33
wallyworldworked fine another time damit07:34
rogpeppe1axw: tyvm for the review08:57
axwrogpeppe1: np08:58
urulamawallyworld: try deleting ~/.go-cookies and ~/.local/share/juju/store-usso-token08:59
rogpeppe1axw: v glad to hear about @local going away :)09:00
axwme too09:01
* axw will be bbl09:03
wallyworldurulama: turns out i was using an older copy of user tags which did not properly strip @local when parsing. seems to work now, without needing to delete any cookies09:22
urulamacool09:23
rogpeppe1a small change to make bootstrapping controllers with autocert a little simpler: https://github.com/juju/juju/pull/639110:00
rogpeppe1anyone up for reviewing the above? axw? wallyworld? voidspace?10:45
wallyworldi can look10:46
rogpeppe1wallyworld: ta!10:47
wallyworldrogpeppe1: looks like a fairly simple change, lgtm10:48
rogpeppe1wallyworld: ta10:49
voidspacebabbageclunk: ping11:01
babbageclunkvoidspace: pong11:02
voidspacebabbageclunk: do you know much about sysctl.d?11:02
babbageclunkvoidspace: nup11:02
babbageclunkvoidspace: hth11:03
voidspacebabbageclunk: I'm looking at bug 1602192 which was assigned to you at some point11:03
mupBug #1602192: when starting many LXD containers, they start failing to boot with "Too many open files" <lxd> <juju:Triaged by rharding> <lxd (Ubuntu):Confirmed> <https://launchpad.net/bugs/1602192>11:03
voidspacebabbageclunk: yep, great help, thanks...11:03
babbageclunkvoidspace: ah right - that was called something very different when I first saw it.11:03
voidspacebabbageclunk: what about the juju cloud-init system and including a new file in it (specifically /etc/sysctl.d/10-juju.conf)11:03
voidspacebabbageclunk: do you know how to do that?11:04
voidspacebabbageclunk: I assume it's the cloudconfig package11:04
babbageclunkvoidspace: nope - I think it would need to be done on the host machine, right?11:05
voidspaceah, right - yes11:05
voidspacebut that's still cloud-init, just not container init11:05
babbageclunkvoidspace: oh, but what about when someone's bootstrapping to lxd - don't they need the limits on their host machine?11:06
voidspaceah, the lxd provider11:07
voidspaceyes, I don't even know if we can do that11:07
babbageclunkvoidspace: I think you need to pick rick_h_'s brains about whether he means that those limits should be set at juju install time and/or instance start time (for machines that could then host lxd containers)11:08
voidspacethe juju client probably shouldn't change global defaults on the machine you run it on11:08
babbageclunkvoidspace: no, that seems like a bad thing to do11:08
babbageclunkvoidspace: barring that, then yeah cloud-init seems like the right place to add a sysctl.d file11:09
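If it does go into cloud-init, the shape would be something like the write_files stanza below. The path comes from the discussion; the values shown are the ones the thread converges on further down (512k watches, 256 instances), so treat them as illustrative here:

    #cloud-config
    write_files:
      - path: /etc/sysctl.d/10-juju.conf
        permissions: '0644'
        content: |
          fs.inotify.max_user_watches = 524288
          fs.inotify.max_user_instances = 256
    runcmd:
      - sysctl --system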
voidspacebabbageclunk: I can look at doing it in cloud-init for machines that juju provisions and will talk to Rick about what to do with the lxd provider11:10
voidspacebabbageclunk: unfortunately the lxd provider seems like the major use case this affects11:10
babbageclunkvoidspace: yeah, that was certainly the original problem11:10
rick_h_voidspace: babbageclunk jam had some opinions on this yesterday11:18
rick_h_voidspace: babbageclunk I think we need to put that on hold while that works out atm tbh.11:18
voidspacerick_h_: put the bug on hold or just fixing the host machine for the lxd provider?11:18
voidspacerick_h_: we can still fix the issue for machines that juju provisions11:19
voidspaceunless jam has opinions on how that should be done too11:19
jamrick_h_: voidspace: babbageclunk: so I replied to the email that rick_h_ forwarded to me. The *ideal* time to do it is at "bootstrap" time, because that is the only time that a client is actually asking to create containers.11:20
jamhowever, Juju is running at user privilege then11:20
jamthe only time we have root is during "apt" time, but just because you are installing a juju client doesn't feel like a great time to consume kernel resources because you *might* bootstrap LXD11:21
jamis it possible to just give better feedback about why something isn't starting, and point people toward how to fix it?11:21
voidspacejam: so we can't do the right time and we shouldn't do the wrong time?11:21
voidspacejam: working out what the problem was required some serious probing outside of juju - switching back to lxc rather than lxd was how it was worked out I think11:22
voidspacejam: so I'm not sure it's "easy" from juju to tell why provisioning fails11:22
jamvoidspace: fundamentally it feels like it should be LXD's problem, as anyone who wants to create 20 containers is going to hit it, we just make it easier to do so.11:23
voidspacejam: right11:23
jamcan you link the original bug?11:23
voidspacebug 160219211:23
mupBug #1602192: when starting many LXD containers, they start failing to boot with "Too many open files" <lxd> <juju:Triaged by rharding> <lxd (Ubuntu):Confirmed> <https://launchpad.net/bugs/1602192>11:23
voidspacejam: see comment 29 (from Stephane in july) about an upstream fix11:24
jamso the patch to tie it to a user namespace seems the ideal, as then each container gets X handles, and launching Y containers automatically gets you X*Y handles available.11:25
jamI suppose if the answer is only "8x more consumption" and it gives us the headroom for 30-ish containers maybe that's sane...11:26
voidspacejam: so reach out to Stephane to find the state of the upstream patches and leave the bug for the moment?11:27
jamvoidspace: from the conversation, the 'upstream' patch is likely to be many months away from acceptance.11:27
jamit does feel like the most correct fix.11:27
jambabbageclunk: do you know how many containers you could do with default settings?11:28
rick_h_jam: folks were hitting around 811:28
jamI do believe my environments have all been touched so I'm not 100% sure what pristine is.11:28
rick_h_jam: sorry, wrong bug11:28
jamrick_h_: voidspace: what about having a script that we ship with juju which can create an appropriate /etc/sysctl.d/10-juju.conf file, and if you do "juju bootstrap ... lxd" we check for that file11:32
rick_h_jam: voidspace I think that we have to be ready to react though quick. I'd like to suggest we get a patch ready for the local provider case and leave the cloud-init case and at least have it handy.11:32
jamand give you a message about # of container limitations, and what "sudo bigger-inotify-limits.sh" you can run to fix it?11:32
voidspacerick_h_: the local provider is the one that's problematic to fix11:33
rick_h_jam: I just don't think that having a 10-20 limit on the local provider case is going to pass muster.11:33
jamso we tie it to "juju bootstrap ... lxd"11:33
voidspacerick_h_: we either have to do it at install time or use jam's idea11:33
jambut we make it explicit, which also gives the user a pointer if they really need to go up to 50 containers, etc.11:33
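A sketch of the bootstrap-time check jam is proposing: look for the sysctl.d file and, if it's missing, print the container-limit warning and point at the helper script. The file path is from the discussion; "bigger-inotify-limits.sh" is jam's hypothetical script name, and none of this is actual Juju code:

    package main

    import (
        "fmt"
        "os"
    )

    const jujuSysctlConf = "/etc/sysctl.d/10-juju.conf"

    // warnAboutInotifyLimits is what a "juju bootstrap ... lxd" path could call:
    // it doesn't change anything itself (bootstrap runs unprivileged), it just
    // tells the user what to run.
    func warnAboutInotifyLimits() {
        if _, err := os.Stat(jujuSysctlConf); os.IsNotExist(err) {
            fmt.Fprintf(os.Stderr,
                "WARNING: default inotify limits cap this host at roughly a dozen LXD containers.\n"+
                    "Run \"sudo bigger-inotify-limits.sh\" (or create %s) to raise them.\n",
                jujuSysctlConf)
        }
    }

    func main() {
        warnAboutInotifyLimits()
        fmt.Println("continuing with bootstrap...")
    }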
voidspacerick_h_: as "juju bootstrap" runs with user priveleges and changing this on the host machine requires system priveleges11:34
rick_h_voidspace: I understand. imo we should just put it in at install time.11:34
rick_h_voidspace: jam I understand, just really can't get past the fail/extra command to use lxd for 10-2011:34
rick_h_voidspace: jam I guess I'd feel different if it was a 50+ thing11:35
voidspacerick_h_: jam: I'm inclined to agree - it's a setting that I don't see a downside to changing11:35
jamvoidspace: if there wasn't a downside, then LXD would ship with it.11:36
voidspacerick_h_: jam: however I know many users *might* feel differently about us changing their system settings11:36
jam"each used inotify watch takes up 1kB of unswappable kernel memory"11:37
voidspacejam: I don't think that necessarily follows - it's more likely to be caution about changing system settings11:38
rick_h_bumping the limit will also allow any other user on the11:38
rick_h_system to use a whole lot of kernel memory and still run you dry of11:38
rick_h_inotify watches.11:38
voidspacejam: if someone is trying to run 20 lxd container they'll be fine with that - it's a necessary consequence11:38
rick_h_^^ is the "cost" which in a local provider case (folks' laptops/desktops) I don't feel is an issue to outweigh the gain11:38
jamvoidspace: I *absolutely* agree that if someone wants 20 containers they want it11:39
voidspaceit's not *ideal* that's for sure11:39
jambut "apt install juju" is not "I want to run 20 containers on this machine"11:39
voidspaceyep, understood11:39
jamrick_h_: again, that is why I'm trying to tie it to "juju bootstrap ... lxd" where someone is very close to saying they want 20 containers.11:39
rick_h_jam: I understand. but we can't do that. So within our limits of influence atm, we need to be ready to do the right thing for juju users with lxd.11:40
rick_h_if we can get lxd to carry the issue great, but with one week until yakkety release I'm not convinced we can get that to happen.11:40
rick_h_jam: voidspace so I don't see any way around us carrying this as part of the juju install for the time being.11:41
rick_h_voidspace: jam especially because this isn't just 20 containers at a time, but 20 across multiple models locally11:41
jamrick_h_: if I read his recommended values correctly, that leaves us with 4GB of unswappable kernel memory, which sounds like a bad default.11:42
jamagain, that seems to be only the in-use ones, but it does mean a runaway process will cause real problems on your machine.11:43
jamcan we cut that by 1/4th ?11:43
jamso instead of 4M default go to 1M default, which cuts it to a 1GB peak?11:43
jamcan we test how many containers you can run with that cleanly?11:43
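The arithmetic behind those peaks, using the 1 kB-per-watch cost quoted above:

    4 * 1024 * 1024 watches x 1 KiB/watch = 4 GiB of unswappable kernel memory (worst case)
    1 * 1024 * 1024 watches x 1 KiB/watch = 1 GiB (worst case after cutting the default to 1M)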
voidspacejam: rick_h_: I can test that11:44
rick_h_so if we cut that by 1/4 we should be looking at 2x our original bug 20-40?11:45
mupBug #20: Sort translatable packages by popcon popularity and nearness to completion <feature> <lp-translations> <Launchpad itself:Invalid> <https://launchpad.net/bugs/20>11:45
* rick_h_ has to run the boy to school11:45
voidspaceon my system both baloo and nepomuk have set the fs.inotify.max_user_watches to 52428811:47
jamit seems that "apt install kde-runtime" creates a /etc/sysctl.d/30-baloo-inotify-limit.conf with 52428811:49
jamvoidspace: yeah, I just found the baloo one.11:50
jamthe doc I found says the default is 819211:50
jam512K is >> 8k11:50
voidspacewow, that's low11:50
jamI'll try to see what a fresh image has in AWS11:51
jamvoidspace: my old EC2 IRC bot does, indeed, have 8192 by default.11:53
voidspacejam: I'm upping the limit on my machine and seeing how many containers I can create11:53
jamvoidspace: yeah, unfortunately, I have the feeling that babbageclunk has the 512k version, not the built-in 8k version.11:54
jamIf it was just the 8k default letting us create 8 containers, then I'm happy to go 8k * 2011:54
voidspacejam: baloo is part of kde, so it may well be the default for desktop users11:57
jamvoidspace: so, what is a good way to test that the container is getting provisioned without Juju? Something that runs via upstart and prints something?11:59
voidspacejam: I was just going to do it with juju...11:59
voidspacethere are some scripts on that bug though11:59
voidspacejuju bootstrap localxd lxd --upload-tools11:59
voidspacefor i in {1..30}; do juju deploy ubuntu ubuntu$i; sleep 90; done11:59
voidspaceexcept without --upload-tools12:00
jamvoidspace: so that can tell you if juju came up and ran, I was trying to check it without Juju in the picture.12:01
voidspacejam: there's some lxc reproducer scripts that check for success12:02
voidspacehttps://bugs.launchpad.net/juju/+bug/1602192/+attachment/4700890/+files/go-lxc-run.sh12:02
mupBug #1602192: when starting many LXD containers, they start failing to boot with "Too many open files" <lxd> <juju:Triaged by rharding> <lxd (Ubuntu):Confirmed> <https://launchpad.net/bugs/1602192>12:02
jamjust doing "juju deploy ubuntu -n 30" is another way to just test containers.12:02
jamvoidspace: thanks, go-lxc-run is what I was looking for.12:06
voidspacerunning it now12:06
jaminteresting, those require Xenial images because it is using systemctl, I wonder if Trusty would be different as a guest.12:07
jamso I get 12 good containers with 512k max_user_inotify12:08
voidspacehah, my machine is grinding to a halt...12:08
jaminteresting, it may be fs.max_user_instances that is failing first, vs max_user_watches12:11
jamvoidspace: 30 containers all at once can do that to you :)12:11
voidspaceI failed at 1312:12
voidspacetrying again12:13
voidspacealso setting max_user_instances and max_queued_events12:15
jamvoidspace: I failed at 13 with 512k and 128 max_user_instances, but at 131072/128 I also failed at 1312:16
jamso I'm pretty sure it is the max_user_instances which blocks us at ~12 containers.12:16
voidspaceright12:17
jamtrying to dig up reasons to set/not set that one.12:19
jamI found the 1KB kernel memory for a "user_watch" but nothing yet on the cost of a "user_instance"12:20
jamvoidspace: with 'go-lxc-run' I'm trying to set max_user_watches really low (32k) and see if I hit a limit first12:20
voidspacecool12:21
voidspacewith max_user_instances set to 1024 (plus max_queued_events) as suggested by Rick I got to 16 before failing12:22
voidspacewhich seems low12:22
voidspacehard to tell why it failed though12:22
jameven with 32k I still get 13 containers started. it seems there aren't too many user watches actually created (probably a lot more when juju is running, because there are more upstart scripts)12:22
jamvoidspace: 'sudo slabtop' should tell you what kind of kernel memory you are using.12:23
jamI haven't fully figured it out, but it might be interesting if we see Kernel mem bloating dramatically.12:23
jamwith no containers active, my kernel memory seems to sit at 512k12:25
jamwell, presumably a display bug as it says 865991.26K right now12:25
jambut that is 865GB which is a bit more than my 16GB ram :)12:26
jamah, sorry, I meant 512MB12:26
jamwhich is accurate.12:26
jamvoidspace: so, I'm +1 on 512*1024 default max_user_watches, as that is a standard used by other things in a normal desktop install, and doesn't seem to be the direct limiting factor in launching containers.12:27
jamvoidspace: I got to 18 successful containers with max_user_instances=1024 max_user_watches=3276812:27
voidspacejam: right - did you dig up any reasons not to change max_user_instances12:28
voidspacejam: cool - I'm trying again and am up to 1212:28
jamkernel memory went up to 1.2GB12:28
jamvoidspace: how much mem do you have ?12:29
voidspacejam: with 15 containers running it's at 600474.73K12:30
jaminteresting, mine was much higher12:31
voidspacejam: still just over half a gig then12:31
jambut how much total ram?12:31
jamin the machine12:31
jamI'm also on a Trusty kernel testing this, so some of that may vary12:31
voidspacejam: 16GB in the machine12:31
jamsame as myself12:31
voidspacexenial kernel12:32
jamI'm also running a btrfs root disk, and btrfs_inode is actually my top consumer in slabtop12:32
voidspace18 containers up to 87761112:32
voidspaceso similar to yours I think12:32
voidspacefluctuating12:32
voidspacemachine grinding to a halt again12:33
jamat 17 containers it switches to 'dentry'12:33
jamwhich is probably inotify stuff12:33
jaminterestingly, the script fails to cleanup when it hits 19 containers12:33
jam'unable to open database file'12:33
jamsounds like a general FS limit12:33
voidspacejam: mine died on 20 but cleaned up ok12:34
voidspacejam: I have the settings suggested by rick_h_ in the bug, but it sounds like we don't need to touch user_watches12:34
voidspacejam: I'll set that back to the default and try again12:35
jamvoidspace: correct. user_watches = 512k seems sane12:35
jamplaying around with user_instances myself.12:35
jamand haven't touched max_queue yet12:35
voidspaceI'm grabbing coffee - so maybe it's just max_user_instances we need to change12:36
voidspaceI'll do some digging on that12:36
rick_h_jam: voidspace so it looks like the numbers stephane suggested might not all need tweaking as much?12:48
jamrick_h_: yeah, we don't need to multiply all numbers by 8, I'm also trying to get several data points so we know how much kernel memory is taken up by what settings and how many containers that yields.12:49
rick_h_jam: gotcha ty12:50
jamrick_h_: so at max_user_instances=256, I've hit a soft cap where "chmod" seems to be failing. Which means we have a different bottleneck.13:05
jamthat's at 19 containers.13:06
jamHow many do you consider "sane by default"? That would tell us whether we need to go poke something else.13:06
jamI'm going to run the tests again with Juju in the loop, to confirm that we can get close to the 'ideal' limits of go-lxc-run.sh13:06
rick_h_jam: honestly I'd hope for 40-50 ?13:07
rick_h_not sure what other folks think13:07
jamrick_h_: then we need to find what the next bottleneck is13:07
jamcause it looks to be something like "max files open"13:07
jamI get errors in "lxc delete" because it can't open the database.13:07
rick_h_jam: isn't that what this bug is?13:08
rick_h_https://bugs.launchpad.net/juju/+bug/160219213:08
mupBug #1602192: when starting many LXD containers, they start failing to boot with "Too many open files" <lxd> <juju:Triaged by rharding> <lxd (Ubuntu):Confirmed> <https://launchpad.net/bugs/1602192>13:08
jamrick_h_: that is inotify handles13:08
jamand we can bump it up from the defaults, but that only moves me from 13 to 19 containers13:08
rick_h_I see, different "too many open files" situation?13:09
jamrick_h_: right.13:09
rick_h_how does lxd do massive scale if there's limits hit like this? /me is confused13:09
rick_h_tych0: how many containers do you all run in testing things? do much with > 20 on a host?13:10
jamI didn't test max-queued-events yet, maybe that's the bottleneck13:10
babbageclunk(Sorry everyone, catching up on the conversation now)13:11
voidspacejam: with a raised max_queued_events I still had a limit of 2013:56
jamvoidspace: same here, something else is hitting a limit13:56
jamI'm thinking something like max procs or max fs handles13:56
jambut I can't tell13:56
jamother things on my machine start failing with 'could not open file' if I have 1913:56
rick_h_katco: ping for standup14:01
jamrick_h_: voidspace: babbageclunk: So having played with it for a bit, I'm more comfortable with an /etc/sysctl.d/10-juju.conf that sets max_user_watches=512k and max_user_instances=256 but if we want to get to 50 instances we need to dig harder.14:01
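Spelled out, the file jam is proposing would contain just the two settings discussed above (512k = 524288 watches, 256 instances):

    # /etc/sysctl.d/10-juju.conf
    fs.inotify.max_user_watches = 524288
    fs.inotify.max_user_instances = 256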
jamI can just barely get to 10 instances of 'ubuntu' from juju, and only 19 raw containers with any of the inotify settings, and processes start dying at that point.14:02
jam(firefox/Term/etc crash)14:02
voidspacejam: currently in standup and then collecting daughter - I'm doing some digging on "scaling lxc|d" as people *must* have done this before14:03
voidspacejam: I've added that as a note to the bug just to track where we've got to14:05
jamvoidspace: https://launchpad.net/~raharper/+junk/density-check was something Dustin used to get 600+ containers on his system, but he didn't say what tuning he did around that.14:06
* voidspace looking14:07
tych0rick_h_: we use busybox in our test suite, which doesn't run a lot of actual things inside the container14:13
tych0rick_h_: but also,14:13
tych0rick_h_: https://github.com/lxc/lxd/blob/master/doc/production-setup.md14:13
tych0has a bunch of limits that we recommend bumping14:13
rick_h_tych0: ah, interesting14:14
rick_h_jam: voidspace ^14:14
rick_h_tych0: hmm, ok. So ootb this limit of 19ish doesn't sound like we're doing something wrong?14:14
voidspacetych0: thanks14:14
voidspacerick_h_: jam: there's a bunch of things to tweak there - I'll play14:15
voidspacecollecting daughter from school first14:15
jamvoidspace: it feels a lot like i'm hitting max number of open files for 1 user14:16
voidspacejam: that's /etc/security/limits.conf I guess14:16
rick_h_voidspace: please ping when you're back14:16
jamvoidspace: yeah14:16
jamsounds like changing that needs a system reboot14:16
jamand doesn't sound like something we should really be poking at14:17
rick_h_jam: yea, I think we run with the 19, make sure we do a really solid job of erroring cleanly, and have this link from tych0 as something we're ready to point to after that14:17
jamrick_h_: so with Juju in there, its 1014:17
rick_h_ouch?!14:18
voidspacegotta run - bbiab14:18
jambecause we run a lot more things than just empty containers14:18
jamrick_h_: yeah, and that's 'ubuntu' charm14:18
rick_h_yea, understand :/ just ouch14:18
tych0so there has been talk in the past about namespacing some of this14:18
tych0(in the kernel)14:18
tych0perhaps we should talk about that more in bucharest14:19
rick_h_ tych0 yea, sounds like we have to roll with what we can do for now, but it'll be a topic to chat about because 10 isn't great for a local juju experience14:19
jamrick_h_: pointing users to docs for how to tweak settings seems a best-effort on our part for now14:31
katcorick_h_: hey sorry about the standup... they're starting to close off roads for the debate on sunday. massive traffic14:39
* perrito666 imagines the debate like a street rap battle given the closed roads14:39
katcoperrito666: lol no, they're just ramping up security around the university where it's being held... or something. maybe it's just for parking, dunno14:41
rick_h_katco: rgr14:45
rick_h_voidspace: when you're back also want to check on https://bugs.launchpad.net/juju/+bug/162945214:46
mupBug #1629452: [2.0 rc2]  IPV6 address used for public address for Xenial machines with vsphere as provider <oil> <oil-2.0> <vsphere> <juju:Triaged> <https://launchpad.net/bugs/1629452>14:46
perrito666katco: you can't deny that a rap-battle-style debate would be awesome14:46
voidspacerick_h_: no useful progress on that one I'm afraid - I got stuck for a while on getting access to vsphere14:47
katcoperrito666: it would be yuuuuuge14:47
voidspacerick_h_: I think I've solved that but switched to the lxd bug14:47
rick_h_voidspace: ok, what's involved in solving it?14:47
rick_h_voidspace: we're getting asked to get that to make the cut for 2.0 and I want to understand how big the ask is14:47
voidspacerick_h_: I couldn't get the VPN to work - but using ssh config and the cloud-city key I should be able to get to it14:47
voidspacerick_h_: I got as far as connecting, but refused access and now I have the right key I should have full access14:48
voidspacerick_h_: so I can look at it14:48
rick_h_voidspace: ok, I'm going to pull it back then and we'll try to get to it next.14:48
rick_h_voidspace: k, but you have a hint at the root issue that needs fixing?14:48
voidspacerick_h_: ah, solving the issue, not solving access14:48
voidspacerick_h_: nothing tangible, but with some logging it should be easy enough to find the source if it's deterministic14:49
voidspacewhich from the bug report it it14:49
voidspace*it is14:49
rick_h_voidspace: k, yea.14:49
rick_h_voidspace: ok, will pull the card back in and let's see what we can come up with.14:49
rick_h_voidspace: but for now, let's move forward with the small tweak for a 20% gain in containers and make sure our error'ing/logging is clear around the container limit14:50
rick_h_to wrap up the current bug14:50
voidspacerick_h_: I'm concerned about handling the error case14:50
voidspacerick_h_: the error that surfaces to juju isn't related to the file issue - that's buried well underneath14:50
rick_h_voidspace: the too many files info isn't coming from lxd but into the syslog or something?14:50
voidspacerick_h_: I will try and see where it ends up and report back14:51
rick_h_voidspace: k14:51
voidspaceI hadn't found it so far in my playing today14:51
voidspacerick_h_: for getting a new file into the ubuntu juju package, do I need to bug the package maintainers with a patch rather than in juju-core?14:53
voidspaceI can't see deb related stuff in juju-core14:53
rick_h_voidspace: check with mgz and sinzui please for that14:53
voidspacerick_h_: yep14:53
sinzuivoidspace: mgz, balloons, and propose the Ubuntu packages. We can make changes as needed14:54
voidspacesinzui: cool, thanks14:54
sinzuivoidspace: I think "I" was supposed to be in that last message. I am working on the yakkety package now14:55
voidspacesinzui: I mentally interpolated it anyway...14:55
voidspacesinzui: we need to add a new sysctl conf file for juju, shall I raise a specific issue for it or just email you (plural)?14:56
sinzuivoidspace: report a bug against juju-release-tools. We can track the point it is fixed14:58
voidspacesinzui: thanks14:58
rogpeppe1to anyone that's been working on removing hard time dependencies in juju-core, you should find this useful: https://github.com/juju/utils/pull/24515:10
rogpeppe1reviews appreciated, please15:10
rogpeppe1redir: i'm not sure if you were working on removing time dependency, but you might be interested to take a look: https://github.com/juju/utils/pull/24515:11
voidspacejam: ping15:12
rick_h_rogpeppe1: I think macgreagoir is doing some of that &15:15
rick_h_not sure if he's available to peek15:15
rogpeppe1rick_h_: thanks15:15
rogpeppe1rick_h_: looks like redir definitely was too15:15
rick_h_ok cool15:15
natefinchrogpeppe1: when you get a minute, I updated that PR, btw: https://github.com/juju/persistent-cookiejar/pull/1715:25
rogpeppe1natefinch: cool, thanks15:26
rogpeppe1natefinch: you too might be interested in the retry package PR i mentioned above15:26
rogpeppe1natefinch: i thought it was quite reasonable to pass a URL to RemoveAll15:28
rogpeppe1natefinch: as it might be useful to remove all cookies under a particular path (e.g. our services store service-specific cookies under api.jujucharms.com/servicename/)15:29
rogpeppe1natefinch: but given that you don't need that functionality, i'm suggesting you just rename your method RemoveAllHost instead15:31
natefinchrogpeppe1: sounds good to me15:31
rogpeppe1natefinch: which can be a specialised version of RemoveAll if/when we implement that15:32
natefinchrogpeppe1: yep, great.15:33
voidspacesinzui: https://bugs.launchpad.net/juju-release-tools/+bug/163103815:34
mupBug #1631038: Need /etc/sysctl.d/10-juju.conf <juju-release-tools:New> <https://launchpad.net/bugs/1631038>15:34
voidspacesinzui: let me know if I should do more, like provide an actual file15:34
sinzuivoidspace: this is for the juju *client* on their localhost?15:35
voidspacesinzui: yes, sorry15:35
sinzuiyep15:35
voidspacesinzui: otherwise it would be a juju-core bug for cloud-init to create it15:35
sinzuivoidspace: I should have clicked through to the bug...I know it well15:35
sinzuivoidspace: Juju is also providing that when it sets up a jujud?15:36
voidspacesinzui: alas, this isn't enough - it gets us up from ~10 to ~20 or so containers15:36
sinzuivoidspace: that is enough for me to test an openstack deployment though :)15:36
voidspacesinzui: cool15:36
voidspacecoffee15:37
rogpeppe1natefinch: BTW your cookiejar branch is named "master" which is probably not what you want15:41
natefinchrogpeppe1: I was just noticing that15:41
rogpeppe1natefinch: reviewed15:46
rogpeppe1katco: i see a lot of bugs with your name on them that this could help fix... fancy a review? :) https://github.com/juju/utils/pull/24515:58
rogpeppe1s/bugs/TODOs/15:58
katcorogpeppe1: sure16:01
rogpeppe1katco: ta!16:01
katcorogpeppe1: hmmm how is this different enough from github.com/juju/retry?16:02
rogpeppe1katco: ha, i didn't know about that16:02
* redir was going to mention katco since I recall her doing retry stuff recently16:02
rogpeppe1katco: well for a start it keeps to the existing pattern16:03
katcorogpeppe1: the bug todos you're probably seeing from me are referencing a bug to *consolidate* not create another retry mechanism haha16:03
rogpeppe1katco: i think that having a loop is better than a callback16:03
katcorogpeppe1: this would be i think the 4th or 5th way of doing retries in juju... bc there's so many this would definitely have to go through the tech board16:04
katcorogpeppe1: i don't like our current retry package very much, personally16:04
rogpeppe1katco: well, it's intended to be a straight replacement for utils.AttemptStrategy16:04
katcorogpeppe1: we're meant to be consolidating everything to juju/retry16:04
rogpeppe1katco: juju/retry looks pretty complicated to me16:05
katcorogpeppe1: yeah i don't like it16:05
katcorogpeppe1: but i already sent an email out about this a month or so ago, and this was the decision. so any new attempt at replacing it has to go through the tech review board16:06
katcorogpeppe1: do you want me to plop it on the schedule?16:06
rogpeppe1katco: just FWIW:16:07
rogpeppe1% g -r retry.CallArgs | wc16:07
rogpeppe1     15      85    129316:07
rogpeppe1% g -r utils.AttemptStrategy | wc16:07
rogpeppe1     81     397    706516:07
katcorogpeppe1: you are attempting to convince me of something i already believe :) but it doesn't change the path forward unfortunately16:07
rogpeppe1i.e. I think there's a lot of value in having a pluggable replacement for the existing mechanism that doesn't involve wholesale code rewriting16:07
rogpeppe1katco: please plop it :)16:08
katcorogpeppe1: will do! can you write up an email and send it to me? you might even be able to attend the meeting to make your case16:08
rogpeppe1katco: ok will do16:08
katcorogpeppe1: ta roger16:08
katcorogpeppe1: yeah i really dislike juju/retry's callback methodology and little knobs and such. i prefer inline myself. i think i wrote all this in my email whenever that was16:10
rogpeppe1katco: if you could review my code (and API) anyway, that would be great - then i can know whether it's worth continuing16:10
rogpeppe1katco: FWIW i've been thinking about this for ages, but hadn't come to a decent understanding of how to support the existing API in the face of the stop thing.16:11
rogpeppe1katco: and i just realised that it was actually OK for HasNext to block.16:11
katcorogpeppe1: fyi, it's on the agenda: https://docs.google.com/document/d/13nmOm6ojX5UUNtwfrkqr1cR6eC5XDPtnhN5H6pFLfxo/edit16:13
rogpeppe1katco: ta16:13
rogpeppe1katco: as a little experiment, i replaced one use of juju/retry with the new package (functionally identical i think although there are no tests to check that sadly). http://paste.ubuntu.com/23285245/16:24
rogpeppe1katco:  1 file changed, 17 insertions(+), 43 deletions(-)16:24
katcorogpeppe1: less code makes me happy :)16:25
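For readers outside the thread, here is the difference in feel between the two approaches: a loop the caller drives (the utils.AttemptStrategy style rogpeppe wants to keep) versus a callback the helper drives (the juju/retry style). Self-contained sketch; neither block is the real API of either package:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // strategy/attempt mimic the loop style: the caller owns the loop and can
    // break, continue or inspect state inline.
    type strategy struct {
        total time.Duration
        delay time.Duration
    }

    type attempt struct {
        strategy
        deadline time.Time
        started  bool
    }

    func (s strategy) start() *attempt {
        return &attempt{strategy: s, deadline: time.Now().Add(s.total)}
    }

    // next reports whether another try is allowed, sleeping between tries.
    func (a *attempt) next() bool {
        if !a.started {
            a.started = true
            return true
        }
        if time.Now().After(a.deadline) {
            return false
        }
        time.Sleep(a.delay)
        return true
    }

    // callbackRetry mimics the callback style: behaviour is configured up front
    // and the work is handed over as a function.
    func callbackRetry(tries int, delay time.Duration, f func() error) error {
        var err error
        for i := 0; i < tries; i++ {
            if err = f(); err == nil {
                return nil
            }
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        flaky := func() error { return errors.New("not yet") }

        // Loop style: control flow stays with the caller.
        s := strategy{total: 50 * time.Millisecond, delay: 10 * time.Millisecond}
        for a := s.start(); a.next(); {
            if err := flaky(); err == nil {
                break
            }
        }

        // Callback style: control flow lives inside the helper.
        fmt.Println(callbackRetry(3, 10*time.Millisecond, flaky))
    }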
alexisbredir, ping17:04
alexisbredir, when you are ready https://hangouts.google.com/hangouts/_/canonical.com/alexis-bruemme17:05
rediralexisb: ack brt17:07
natefinchrogpeppe1: you still around?17:16
=== frankban is now known as frankban|afk
rogpeppe1natefinch: yup, but not for long17:16
natefinchrogpeppe1: yep, figured.  Quick question on the cookie jar... I'm honestly not sure what the behavior should be.  Do you think we should exact match on the hostname?17:17
natefinchrogpeppe1: I agree that foo.apple.com removing cookies for bar.apple.com is confusing17:17
natefinchrogpeppe1: should removing apple.com remove cookies for foo.apple.com?  I don't know what is expected here.17:18
rogpeppe1natefinch: i think an exact match would be more intuitive17:18
natefinchrogpeppe1: fine by me.  Will do. Thanks.17:23
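A sketch of the shape the two were converging on: RemoveAllHost matching the hostname exactly, with a later RemoveAll(url) possibly matching host plus path prefix. Types and signatures here are illustrative, not persistent-cookiejar's real API:

    package main

    import (
        "fmt"
        "net/url"
        "strings"
    )

    type cookie struct{ host, path, name string }

    type jar struct{ cookies []cookie }

    // RemoveAllHost drops every cookie stored for exactly this host, so
    // foo.apple.com never touches bar.apple.com's cookies.
    func (j *jar) RemoveAllHost(host string) {
        kept := j.cookies[:0]
        for _, c := range j.cookies {
            if c.host != host {
                kept = append(kept, c)
            }
        }
        j.cookies = kept
    }

    // RemoveAll is the more general form discussed earlier: drop everything
    // under a URL's host and path prefix, e.g. api.jujucharms.com/servicename/.
    func (j *jar) RemoveAll(u *url.URL) {
        kept := j.cookies[:0]
        for _, c := range j.cookies {
            if c.host == u.Host && strings.HasPrefix(c.path, u.Path) {
                continue
            }
            kept = append(kept, c)
        }
        j.cookies = kept
    }

    func main() {
        j := &jar{cookies: []cookie{
            {host: "api.jujucharms.com", path: "/svc/", name: "macaroon"},
            {host: "bar.apple.com", path: "/", name: "session"},
        }}
        j.RemoveAllHost("bar.apple.com")
        u, _ := url.Parse("https://api.jujucharms.com/svc/")
        j.RemoveAll(u)
        fmt.Println("cookies left:", len(j.cookies)) // 0
    }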
=== alexisb is now known as alexisb-afk
=== alexisb-afk is now known as alexisb
alexisbhml, you around?20:25
hmlalexisb: good afternoon20:25
alexisbheya :)20:25
alexisbdo you have a second for a quick call or hangout?20:25
hmlalexisb: sure, phone call would be better20:25
alexisbnumber?20:25
hmlalexisb: 781.929.367920:25
kwmonroehey juju-dev!  neiljerram noted something weird on #juju in rc321:58
kwmonroe<neiljerram>        UNIT                WORKLOAD  AGENT  MACHINE  PUBLIC-ADDRESS   PORTS  MESSAGE21:58
kwmonroe<neiljerram>        calico-devstack/0*  unknown   idle   0        104.197.123.20821:58
kwmonroewhere does that * in the unit name come from?21:58
kwmonroei thought maybe it was truncating for length, but i deployed ubuntu with a long name in rc3 and didn't see it: http://paste.ubuntu.com/23286448/21:59
alexisbkwmonroe, I believe that means leader now22:09
alexisbthumper, ^^^22:09
thumperyeah22:09
thumperthat's right22:09
kwmonroecool!  thx alexisb thumper.  there ya go neiljerram.  it denotes leadership... i didn't see it because the ubuntu charm doesn't have that concept.22:11
neiljerramok thanks, good to know22:11
kwmonroeneiljerram: i'd be interested to know if your output of 'juju status --format=yaml calico-devstack/0' shows it as well22:12
neiljerramkwmonroe, I can't easily get yaml for the deployment with calico-devstack in it.  But in the other deployment that I just ran, with more units, yes, I do see this in the yaml:22:13
neiljerram        leader: true22:14
kwmonroecool22:14
alexisbhml, axw I will be a couple min late22:29
menn0wallyworld: the new tools selection behaviour (no more --upload-tools) is nice but has one unfortunate side effect22:29
wallyworld:-(22:29
menn0wallyworld: if you're working on a feature and a new release arrives in the streams "juju bootstrap" stops using the tools you've just built22:29
menn0wallyworld: it's just bitten me again22:30
axwyeah, I get confused by that too22:30
wallyworldmenn0: yeah, you need to pull the latest source to get the new version22:30
menn0I lost a bit of time figuring out why my QA wasn't working22:30
wallyworldit's a small window but a pain none the less22:30
menn0wallyworld: is the solution to stop using go install and just use --build-agent when testing stuff?22:31
wallyworldyep22:31
menn0I will try and change my habits and see how that works out22:32
menn0wallyworld: I do like the new semantics overall, it's just this one thing22:32
wallyworldmenn0: here's a trivial cmd help text change for that users command we discussed yesterday https://github.com/juju/juju/pull/639222:51
menn0wallyworld: give me 2 mins22:52
wallyworldno hurry22:52
babbageclunkmenn0: Got a moment for a quick chat before standup?22:59
menn0babbageclunk: sure22:59
menn0babbageclunk: where?23:00
babbageclunkmenn0: https://hangouts.google.com/hangouts/_/canonical.com/xtian23:00
thumperhaha23:13
thumperfark!!!23:13
thumperI think I have found this race23:13
thumpergeeze it is a doozy23:13
alexisbbabbageclunk, feel free to join us23:17
babbageclunkalexisb: too sleepy - want to finish this test and crash23:18
alexisb:) understood23:18
perrito666alexisb: gah, now I am singing suses song23:45
alexisb:)23:46
thumpermenn0 or wallyworld: https://github.com/juju/juju/pull/639723:50
wallyworldlooking23:50
thumperwallyworld: thanks23:52
wallyworldsure23:52
wallyworldthumper: here's a really trivial one https://github.com/juju/juju/pull/639223:52
thumperlooking23:52
menn0wallyworld: review done...23:52
wallyworldta23:53
thumperdone23:53
wallyworldmenn0: i didn't know about our summary being one line, i'll rework23:53
menn0wallyworld: yeah, it's the line that's shown when you do "juju help commands"23:54
menn0not sure what will happen with multiple lines23:54
wallyworldmenn0: fixed, plus also i did a quick driveby for another bug23:59
menn0wallyworld: looking23:59
