[00:05] axw: hey thanks for the review; i responded
[00:08] wallyworld: nvm by the way, saw after you'd put up a pr against the split out repo that removed the rfc text as well as other things
[00:12] mgz: yeah, sorry, the pr did a couple of things, including removing the bad file
[00:55] katco: thanks/sorry for misunderstanding
[02:04] Bug #1602053 opened: juju deployed lxd containers are missing a default gateway when configured with multiple interfaces
[02:04] Bug #1602054 opened: juju deployed lxd containers are missing a default gateway when configured with multiple interfaces
[02:17] Bug #1602053 changed: juju deployed lxd containers are missing a default gateway when configured with multiple interfaces
[02:32] wallyworld: are you busy? could you please take a look at http://reviews.vapour.ws/r/5221/?
[02:33] sure, give me a few minutes
[02:33] np
[02:53] Bug #1602067 opened: [2.0b11] Repeated "log-forwarder" errors in debug-log with Azure provider
[03:15] wallyworld: PTAL
[03:15] ok
[03:17] axw: tv, lgtm
[03:17] ty
[03:18] wallyworld: thanks
[03:18] wallyworld: I'll do a bootstrap && deploy just to be paranoid...
[03:18] sgtm
[03:36] axw: Hi, seems I can reproduce https://bugs.launchpad.net/juju-core/+bug/1599779 every time using log-forwarder and setting debug level to INFO (as per convo from yesterday). Is it useful for you if I file a new bug or just comment on that existing one?
[03:36] Bug #1599779: Very high CPU usage from 'creating cloud image metadata storage' emitted in debug log every second
[03:37] wallyworld: ^^ are you looking at this?
[03:38] axw: not yet, hacking around in unit tests, will have a look next
[03:38] veebers: I *think* we need a separate one, really depends on the root cause. I guess a separate one until we analyse it, and then we may mark as dup
[03:39] veebers: there's an issue to do with how we start/stop workers in juju which is (I think) the cause of the linked bug, but I think there's more to the log-forwarding issue
[03:40] axw: ack, I'll file a bug for it, it can always be marked as dupe or whatever if needed.
[03:41] veebers: thanks
[03:44] nw
[03:56] Bug #1602084 opened: log-forwarding with level set to INFO starts endless logging loop.
[04:02] Bug #1602084 changed: log-forwarding with level set to INFO starts endless logging loop.
[04:08] Bug #1602084 opened: log-forwarding with level set to INFO starts endless logging loop.
[04:26] Bug #1580497 changed: juju2 on maas permanent "connection is shut down" msg and loss of connection
[04:49] axw: when you get a chance, here's some logfwd fixes http://reviews.vapour.ws/r/5223/
[04:52] veebers: are you able to share your notes on how to manually set up a test environment for the log forward - i.e. what charm you used, the steps to generate certs, the charm set up etc?
[04:56] wallyworld: I'm in the process of writing a ci test for it (which would make it easy to reproduce), but I can also share the steps that I took to manually set up and test it
[04:57] wallyworld: I'm just about to EOD though and head off to a conference meeting. I could throw something together for you after that but not sure when that would be for you
[04:57] veebers: tis good, i'll see what i can get done and we can sync up tomorrow if needed
[04:58] veebers: i will hopefully have landed by then code to allow the log fwd to be toggled on/off via a syslog-enabled config flag
[04:59] wallyworld: nice!
:-)
[04:59] so you can have it set up and turn on / off as needed
[04:59] or start without and set the config later
[04:59] sweet, I'll ensure to have a test do both (at bootstrap and later on)
[04:59] awesome, ty
[05:00] i'll look at this cpu issue
[05:01] wallyworld: if it'll help I can provide rough and ready details now and refine later on? (i.e. I have a script that generates a cert + key given an IP address for the server + the config used on the rsyslog sink side)
[05:02] veebers: great ok, if it's easy to send through, otherwise i'll hack something up
[05:04] wallyworld: looking
[05:08] wallyworld: heh, it's not pretty but it's the steps I used to bootstrap my testing env: https://pastebin.canonical.com/160714/
=== urulama|__ is now known as urulama
[05:08] ty
[05:10] wallyworld: oh, I may not have mentioned that once you expose the rsyslog charm, the public ip address there needs to be plugged into the cert generator script as that needs to be embedded into the client cert. There is a comment in the script where that happens (like I said, rough and ready)
[05:11] veebers: no problem, the SAN list is a pita for sure
[05:15] I have the command line details if you want to generate certs that way too. I know a lot more now about creating certs than I did a couple of days ago
[05:41] wallyworld - bruzer and i hacked on the tls layer quite a bit this cycle. we're backending with EasyRSA but lib/tls.py has some helpers you may be able to pull out for helping with SANs generation/listing/manipulation
[05:42] hah just kidding, i mean we put it in the reactive bits - https://github.com/juju-solutions/layer-tls/blob/master/reactive/tls.py#L291
[05:43] lazyPower: thanks! will take a look if needed. i hacked up a cert last week and did the SAN dance by hand. at this stage, i just need the one set of certs
[05:43] ack, just trying to help if there's anywhere we can share/collab i'm game to do so. My go-fu however, is non-existent.
[05:48] thank you, appreciate it
[05:55] wallyworld: reviewed
[05:56] ta
[05:56] axw: i can't reproduce the CPU issue - the log forward worker uses an http endpoint to stream out the log entries so that doesn't create a loop. i quickly did an initial test by hacking the worker to write forwarded logs to a file rather than a log sink. but indications are that it's not related to any log forward issue (I did test on the latest code though that hasn't landed yet)
[05:57] wallyworld: it does make API calls to update the last-sent log though doesn't it?
[05:57] not that i saw, but i didn't dig into that part so much
[05:58] i am tailing a file with forwarded data and it seems to behave as expected
[06:10] wallyworld: okey dokey, I guess it's just the same issue as previously reported then
[06:11] axw: could be. i did also start another model and it still behaved itself
[07:04] wallyworld: model inheritance ... looks to be working. bootstrapping with test-mode config did go into all models
[07:04] not sure if that was supposed to happen or not yet :)
[07:06] urulama: you put that in cliuds.yaml?
[07:07] clouds.yaml
[07:07] that bit works
[07:07] axw: off to soccer, will push final changes when i get back, hopefully can land tonight so chris can pick up tomorrow, we'll see how i go
[07:08] wallyworld: okey dokey. I have another change to push shortly also.
[07:36] wallyworld: finally, http://reviews.vapour.ws/r/5224/ -- sorry it's a little on the large side.
net negative though, if that's any consolation
[09:02] jam, voidspace - hangout time
[09:48] anyone know the best way of manually destroying a juju lxd controller and all its resources (juju kill-controller isn't doing the job - i bootstrapped with a different, incompatible, juju version)?
[10:02] rogpeppe1: I think it's just `juju unregister <controller>` and then `lxc delete <container>` the machines.
[10:02] babbageclunk: what about all the jujuds?
[10:03] rogpeppe1: Well, I'd be surprised if they stayed after you killed the containers.
[10:03] rogpeppe1: (I might be misunderstanding your question.)
[10:04] babbageclunk: lxc doesn't seem to have a delete subcommand
[10:05] babbageclunk: hmm, it does actually
[10:05] rogpeppe1: phew
[10:05] rogpeppe1: was getting confused there.
[10:07] babbageclunk: it gave me a usage message but i must've typed the wrong command
[10:07] babbageclunk: jujuds are gone now, ta!
[10:07] rogpeppe1: :)
[10:09] babbageclunk: hey ... did you see my last comment here https://bugs.launchpad.net/juju-core/+bug/1593828 ?
[10:09] Bug #1593828: cannot assign unit E11000 duplicate key error collection: juju.txns.stash
[10:09] babbageclunk: could you verify with a simple test whether it actually works or not?
[10:09] urulama: yes! I started replying to it and then I started making a bug
[10:09] babbageclunk: you know what it could be?
[10:10] urulama: I see the same behaviour - I don't think it's anything to do with that bug.
[10:10] babbageclunk: +1 on that
[10:10] urulama: Well, not really - the failed machines have cloud-init logs that stop before installing all the juju stuff.
[10:10] urulama: but I don't know why that is.
[10:11] babbageclunk: oh, ok, i'll pay more attention to cloud-init logs, thanks
[10:11] urulama: I'll just finish grabbing logs and put them on the bug, then I'll put a link to it into the other comments.
[10:11] babbageclunk: awesome, ty
[10:16] urulama: here it is: https://bugs.launchpad.net/juju-core/+bug/1602192
[10:16] Bug #1602192: deploy 30 nodes on lxd, machines never leave pending
[10:19] babbageclunk: thanks, bumping importance, and preferred milestone
[10:21] Bug #1602192 opened: deploy 30 nodes on lxd, machines never leave pending
[10:24] urulama: Thanks - I don't know the protocol on those yet, never sure whether I should be setting them.
[11:03] babbageclunk: https://bugs.launchpad.net/juju-core/+bug/1602192
[11:03] Bug #1602192: deploy 30 nodes on lxd, machines never leave pending
[11:03] wallyworld: sorry, I could've sworn the sink name was set to the sensible thing :/
[11:03] I certainly believe that this is reproducible for you, but we need the logs from machine-0 which is doing the provisioning
[11:03] vs the individual machines that fail.
[11:03] wallyworld: do you agree that it should be what I described?
[11:04] jam: ok, I'll add those.
[11:04] axw: tis ok, it should be changed i think yeah
[11:04] wallyworld: I'm thinking we'd have something like storage pools, and in this case the sink name is equivalent to storage pool name
[11:04] sorta
[11:05] so that remains fixed for the lifetime of the sink, but the config may change
[11:05] yep
[11:05] babbageclunk: so a cloud-init-output.log that stops fairly early... sounds a bit like just slow cloud-init due to your machine trying to run 30 containers at the same time. not sure
[11:05] babbageclunk: the other thing that might be useful is 'cloud-init.log' vs 'cloud-init-output.log' if cloud-init is failing to complete.
[11:05] jam: certainly my machine wasn't that happy when they were starting up.
[11:06] machine-0.log would be for when Juju provisioning is failing to request enough containers from LXD
[11:06] cloud-init.log is for when cloud-init itself breaks
[11:06] cloud-init-output.log is for when cloud-init thinks it's happy, but the initialization we request of it is wrong.
[11:06] (it's been rare that cloud-init.log is the problem, which is why it isn't my first go-to)
[11:10] jam: makes sense - adding the controller machine-0.log and cloud-init.log from a failed machine
[11:21] Bug #1602084 changed: log-forwarding with level set to INFO starts endless logging loop.
[11:29] but the LXD containers start 90s apart, which was enough time that they were up and running before another one hit
[11:29] babbageclunk, jam ^
[11:40] urulama: I realised I hadn't rebuilt with my mgo patch, which didn't seem to be the problem but definitely muddied the water. So I'm redoing it now.
[11:49] urulama: just a dup of bug #1599779
[11:49] Bug #1599779: Very high CPU usage from 'creating cloud image metadata storage' emitted in debug log every second
[11:49] nothing to do with INFO level
[11:50] and I don't think it is actually a 'loop' in the general sense, just something that we're doing poorly.
[12:02] jam: so you assume when the CPU issue is resolved, then provisioning will work as well? I doubt that, my CPU was not using all cores 100%
[12:10] urulama: no, I think provisioning is separate from bug #1599779
[12:10] Bug #1599779: Very high CPU usage from 'creating cloud image metadata storage' emitted in debug log every second
[12:10] it *might* be related if the provisioning failure is because there is too much CPU load causing the provisioned machines to fail to download the tools they need
[12:10] kk
[12:27] Bug #1602231 opened: juju status should use natural sort for units
=== akhavr1 is now known as akhavr
[12:48] Bug #1602237 opened: log forwarding config watcher needs to be implemented
[12:53] wallyworld: reviewed your branch
[12:54] great ty
[12:54] i'm adding to the feature test
[13:00] axw: i replied to one of the comments, see what you think
[13:01] wallyworld: yeah, that's what I was suggesting. log but otherwise ignore the invalid config. carry on with the existing sink
[13:02] in which case "enabled" shouldn't be touched I think?
[13:02] yep, am fixing that. but i still think the check for cfg.Enabled goes first before validating
[13:10] wallyworld: I'm heading off, need anything from me before I go?
[13:10] axw: all good, ty, will run live test now. just added feature test
[13:10] wallyworld: thanks. good night
[13:10] ttyl
[14:12] wallyworld: I'll start looking at interactive bootstrap ASAP.
[14:12] ok, ty
[14:20] frobware, I am going to be late to our 1x1, will ping you
[14:20] alexisb__: ack
[14:33] Bug #1602067 changed: [2.0b11] Repeated "log-forwarder" errors in debug-log with Azure provider
[14:39] rogpeppe1: Thanks for the review!
[14:40] babbageclunk: np. we'll keep fingers crossed.
[14:41] rogpeppe1: Also, I didn't understand this morning what you were saying about jujud processes hanging around - I didn't know that processes in lxd containers were visible on the host!
[14:41] babbageclunk: they showed up with ps alxw | grep jujud
[14:42] babbageclunk: maybe i should run my desktop in a container...
[14:43] rogpeppe1: And with pgrep too. Maybe I should be less gung-ho about killing "stray" mongo processes on my machine in that case.
[14:43] babbageclunk: it's ok, they start again immediately if you kill 'em
[14:43] rogpeppe1: yay!
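(Bug #1602231, filed above at 12:27, asks for juju status to sort unit names naturally, so that e.g. mysql/9 lists before mysql/10 rather than after it. A minimal sketch of that idea in Go; the naturalLess helper and the application/number name layout are assumptions for illustration, not juju's actual status code.)

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// naturalLess orders unit names like "mysql/9" and "mysql/10" by
// comparing the numeric suffix numerically rather than lexically.
func naturalLess(a, b string) bool {
	ai := strings.LastIndex(a, "/")
	bi := strings.LastIndex(b, "/")
	if ai == -1 || bi == -1 || a[:ai] != b[:bi] {
		return a < b // different applications: plain string order
	}
	an, aerr := strconv.Atoi(a[ai+1:])
	bn, berr := strconv.Atoi(b[bi+1:])
	if aerr != nil || berr != nil {
		return a < b // non-numeric suffix: fall back to string order
	}
	return an < bn
}

func main() {
	units := []string{"mysql/10", "mysql/2", "mysql/1", "wordpress/0"}
	sort.Slice(units, func(i, j int) bool { return naturalLess(units[i], units[j]) })
	fmt.Println(units) // [mysql/1 mysql/2 mysql/10 wordpress/0]
}
```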
[14:46] frobware, ok, I am free, will join the HO
[14:49] alexisb__: omw
[14:52] is that really alexisb_? Or is it someone pretending to be alexisb_?
[14:53] :)
[14:53] it is alexisb with a bad connection
[14:54] hmm... that's exactly what someone pretending to be alexisb_ would say....
[15:14] katco: I guess no standup?
[15:14] natefinch: oh sorry, didn't know you were here
[15:14] mgz: ping
[15:15] katco: I'm here :) just you and I, I think, though
[15:15] natefinch: yeah. let
[15:15] natefinch: let's defer until we have at least 3
[15:15] natefinch: (i.e. thurs.)
[15:15] katco: I'm just doing interactive bootstrap.
[15:15] natefinch: i'm still working on audit
[15:16] katco: cool. standup done! ;)
[15:16] jam: yo
[15:16] natefinch: lol yep
[15:16] natefinch: wb btw
[15:16] katco: thanks. Sorry about not updating the calendar that I'd be out yesterday. Swear I did it, but it was like two weeks ago, so who knows.
[15:16] natefinch: no worries
[15:18] anyone know how you're supposed to switch accounts from the command line?
=== rogpeppe1 is now known as rogpeppe
[15:18] juju switch doesn't seem to support account switching
[15:19] well, in β11 anyway
[15:19] babbageclunk: have you pinged smoser about that cloud init issue for bug 1602192?
[15:19] Bug #1602192: deploy 30 nodes on lxd, machines never leave pending
[15:20] cherylj: no -
[15:21] babbageclunk: yeah, I'd ping him before spending a whole bunch of time debugging cloud init
[15:21] cherylj: ok, will do - thanks
[15:21] np :)
[15:21] he's around today, I have spoken to him... but I have some doubts it's actually cloud-init related
[15:22] yeah, but he could give some hints about where to look if it's not
[15:22] mgz: hi. I wanted to check if mgo is one of the -dev packages that we pull from the archive instead of bundling our own.
[15:22] as we have an important fix for the next release, but does that mean we need to get a mgo release, and into xenial backports/updates before we can build juju with it?
[15:23] jam: it is not
[15:24] mgz: good to hear
[15:25] hm, odd, it was on the list initially
[15:27] mgz: do you have a theory about that bug?
[15:27] hi all btw
[15:30] frankban|afk: please ping me when you are back
[15:30] babbageclunk: not really I'm afraid
[15:30] perrito666: howdy
[15:30] balloons: ^do you remember why we don't use the mgo package in xenial for juju?
[15:31] mgz: darn. Why don't you think it's cloud-init related?
[15:32] mgz, golang-gopkg-mgo.v2-dev is on the list. We couldn't use it for xenial because it's not in main
[15:33] there's a MIR bug that's not finished for it
[15:33] I'd expect useful stuff in cloud-init.log if there was really a problem starting the init step, given init-local succeeds for both
[15:33] https://bugs.launchpad.net/ubuntu/+source/golang-gopkg-mgo.v2/+bug/1568162
[15:33] Bug #1568162: [MIR] golang-gopkg-mgo.v2
[15:34] jam: ^so, that's the answer, package not in main. your problem is fine for now then, though it may be polite to file an sru bug for the mgo update if it's likely to affect other users.... if there are other users.
[15:34] alexisb__: one thing I meant to mention is that the MAAS folks are having a sprint in bluefin in august - it might be worth tagging along for a couple of days to talk about IPv6
[15:40] jam, mgz: it shouldn't matter what's in main for most people, should it? Since simplestreams uses our own built version. The only people using what's actually in ubuntu would be people who use upload-tools.
[15:41] natefinch: in this case it's just a distro policy thing you really don't need to care about
[15:42] mgz: right, I just wanted to make sure I was on the right page, in that, for the most part, we (thankfully) don't need to care about what ubuntu does with their wacky go packaging
[15:42] and yeah, we already have some bugs that are not fixed if people use --upload-tools
[15:46] mgz, natefinch - I'm not following this. What does it mean for mgo to be deployed from a package? When we package juju do we build it so that it's not statically linked?
[15:46] babbageclunk: normally when you bootstrap juju, it goes and fetches the juju binary from streams.canonical.com
[15:46] we build those with exactly what's in dependencies.tsv
[15:47] not what's in the archive
[15:47] ok
[15:47] babbageclunk: but the binaries you get from "apt-get install juju" (which gives you a jujud) use ubuntu '-dev' packages.
[15:47] so we get the worst of all worlds as near as I can tell.
[15:47] babbageclunk: what we deploy to streams is a different binary with different behavior than what they package in ubuntu. We use code that matches the versions in dependencies.tsv. They use.... some other versions.
[15:49] jam - so the apt-gotten jujud dynamically loads mgo?
[15:49] frobware, agreed
[15:49] babbageclunk: no, it's just built statically with a different version
[15:49] babbageclunk: ubuntu wants all go applications delivered with ubuntu to use the same version of mgo. So they pick one and compile everything with that one.
[15:50] natefinch: ah. Ok, that was the bit I missed.
[15:51] natefinch: that seems quite restrictive.
[15:54] natefinch: babbageclunk: so according to mgz the 'mgo' library itself is not one of those, but there are about 10 dependencies that do work that way
[15:54] gocrypto being one of them, I believe.
[15:57] redir: sorry to add more churn to your delete^H^H^H^H^H^H remove-user command ;)
[15:58] jam, natefinch: So this only affects people using --upload-tools? Other bootstraps get a good jujud from the stream and the weird jujud on their local machine doesn't really matter?
[15:59] babbageclunk: yes. ideally we don't include jujud in the archive at all.
[15:59] babbageclunk: so, I'm pretty uncomfortable having a jujud out there that isn't the real jujud and both of them claim exactly the same version, but AIUI that is true
[16:00] but I got yelled at when I tried to sneak it out :)
[16:00] heh
[16:01] jam: oh, I can see why it's gross, just wanted to make sure I understood the implications.
[16:01] mgz :)
[16:02] thanks for the explanations everyone!
[16:06] rogpeppe: oh your mail brightened my day... not :p
[16:06] perrito666: sorry :)
[16:07] rogpeppe: I assumed some things might break that were working a bit accidentally because of the previous implementation of users :) I am a bit saddened that no test at all blew up with this
[16:08] nah, that is not true, I am actually quite pissed :)
[16:08] perrito666: the problem was that there were no juju command tests for this
[16:08] rogpeppe: exactly
[16:08] perrito666: which was actually slightly deliberate, unfortunately
[16:09] rogpeppe: care to elaborate a bit on that?
[16:23] cherylj: I tracked down the cause of bug #1599779 - every log forwarding call opens the API server, which starts some workers, then logs, the workers are stopped and the process repeats. The log sender needs to keep the API open on the client side to avoid this.
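(dooferlad's diagnosis above is that the log forwarder was effectively dialling a fresh API connection per forwarded record, spinning server-side workers up and down each time. A rough sketch of the shape of the fix he describes, holding one long-lived connection on the client side; Conn, Dial, and SendLogRecord here are hypothetical stand-ins, not juju's real API client.)

```go
package main

import "log"

// Conn is a hypothetical stand-in for a juju API connection.
type Conn struct{}

// Dial and the methods below are placeholders for illustration only.
func Dial(addr string) (*Conn, error)          { return &Conn{}, nil }
func (c *Conn) SendLogRecord(rec string) error { return nil }
func (c *Conn) Close() error                   { return nil }

// forwardAll streams every record over a single long-lived connection,
// instead of dialling per record -- which is what was starting and
// stopping the server-side workers on every log line.
func forwardAll(addr string, records <-chan string) error {
	conn, err := Dial(addr) // one dial for the whole stream
	if err != nil {
		return err
	}
	defer conn.Close()
	for rec := range records {
		if err := conn.SendLogRecord(rec); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	records := make(chan string, 2)
	records <- "first forwarded line"
	records <- "second forwarded line"
	close(records)
	if err := forwardAll("controller:17070", records); err != nil {
		log.Fatal(err)
	}
}
```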
[16:23] Bug #1599779: Very high CPU usage from 'creating cloud image metadata storage' emitted in debug log every second
[16:23] cherylj: don't know who to assign it to, but I know I need to go cook dinner soon :-|
[16:24] cherylj: happy to take it myself if nobody is free who worked on it
[16:26] ok, so the server side needs to keep things open. Oh, I need a fresh brain. Will make more notes in the bug
[16:30] dooferlad: i think wallyworld was working on this
[16:39] Bug #1600302 changed: security groups could not be destroyed when an Openstack provider bootstrap was interrupted with ctrl-c
[16:44] dooferlad: thanks for digging into it. I'll bring it up in the release call and see if we can get someone on it
[17:02] I've seen mongod/jujud use a ton of cpu when I only have a single controller and one model with nothing deployed. `juju debug-log` shows nothing. How can I capture info out of mongo/juju in order to debug/provide feedback?
[17:10] aisrael: you probably need to up the log level to show more... the default is pretty light on logging. juju set-model-config logging-config="<root>=DEBUG" will help show more of what's going on.
[17:11] aisrael: you will probably want to do that both for the default model and the controller model
[17:12] natefinch: Excellent, thanks. I'll work on repeating the behavior and try to get something useful out of it.
[17:13] aisrael: cool. you can also do --config logging-config="<root>=DEBUG" at bootstrap time to set the log level immediately (I'm not 100% sure how that interacts with multiple models, though)
[17:13] natefinch: good to know!
[17:16] natefinch: when I do that, where will I find that log output?
[17:16] natefinch: nevermind, I found it. Switch to the controller model and look in /var/log/juju
[17:16] aisrael: the usual debug-log... however most likely anything interesting will be in the debug-log of the controller model (debug-log is per-model now)
[17:17] aisrael: yep
[17:17] debug-log wasn't showing me anything in the controller model but I have a ton of data now. I'll let it log over lunch and poke the logs when I'm back.
[17:18] aisrael: cool, good luck. I hope you can figure it out.
[17:22] how do i pprof a controller?
[17:22] i have a beta11 controller that's going nuts
[17:23] cmars, aisrael: I think there's a bug that is causing this - https://launchpad.net/bugs/1599779
[17:23] Bug #1599779: Very high CPU usage from 'creating cloud image metadata storage' emitted in debug log every second
[17:24] natefinch, thanks, checking my log
[17:25] natefinch, it's a bunch of 'log forwarding not supported' messages
[17:26] natefinch, same or different issue, should I open a bug?
[17:27] cmars: I think it's worth making it a new bug even if we eventually decide it's a duplicate, just to make it easier for others to find
[17:27] cmars: there is already a bug open for that
[17:28] cmars: i believe it's this one: bug 1598118
[17:28] Bug #1598118: log-forwarder worker bounces endlessly when forwarding is not configured <2.0>
[17:28] natefinch, thanks, i feel less guilty for opening dups :)
[17:29] katco, thanks, that's the one
[17:29] cmars: a duplicate bug is far better than a missing bug
[17:32] afk a while
=== natefinch is now known as natefinch-afk
[18:39] katco: could you review that deployer fix we chatted about last week? http://reviews.vapour.ws/r/5227/
[18:39] cherylj: sure
[18:39] ty!
[18:55] cherylj: you have a review
[18:55] thanks!
[19:03] katco: for the testing comment - I did it asynchronously because if the deployer worker doesn't fail to deploy the unit, Wait() won't return
[19:03] (since the loop would happily keep waiting for the next event)
[19:04] cherylj: is there any way to set up the test to always fail?
[19:04] katco: yeah, since I've embedded the interface and the next thing the deployer would do is call DeployUnit, it will panic if I don't have that defined (which I don't)
[19:05] cherylj: so, you can remove the goroutine?
[19:05] I could.
[19:06] cherylj: yay? do you disagree that removing it is good?
[19:06] It just felt nicer
[19:06] to me at least
[19:06] cherylj: it has a race condition though
[19:06] fair point
=== natefinch-afk is now known as natefinch
[19:13] katco: updated! http://reviews.vapour.ws/r/5227/
[19:13] cherylj: k tal
[19:14] blergh, it didn't save my updates to the description
[19:14] cherylj: doh... also the checklist calls for testing methodology
[19:16] katco: I ran it through CI
[19:16] Can't recreate manually
[19:16] cherylj: ah that's right
[19:16] cherylj: lgtm
[19:16] could anyone do a quick review of http://reviews.vapour.ws/r/5228/ ?
[19:16] thanks!
[19:16] cherylj: ty
[19:17] perrito666: I can look
[19:17] natefinch: it's literally 2 lines
[19:17] just growing a table, ish
[19:19] perrito666: ship it
[19:19] tx a lot
[19:52] ugh, juju bootstrap --clouds is a terrible idea
[19:53] it makes juju bootstrap do two totally different things
[19:57] natefinch: what does it do?
[19:57] perrito666: it's basically juju list-clouds combined with juju list-credentials
[19:58] perrito666: I don't really understand why it exists, and especially not why it is a flag on bootstrap
[20:01] perrito666: http://pastebin.ubuntu.com/19206808/
[20:02] natefinch: sounds like a very specific --help
[20:05] perrito666: yeah
[20:55] Bug #1602416 opened: Failure to perform action due to tcp i/o timeout
[21:04] Bug #1602416 changed: Failure to perform action due to tcp i/o timeout
[21:10] Bug #1602416 opened: Failure to perform action due to tcp i/o timeout
[22:00] niedbalski_: we're running late, still in another meeting, be there soon
[22:13] alexisb: is there the sts cross team?
[22:13] thumper, yes
[22:19] k leaving for a moment, wallyworld i'll be back for standup
[22:20] ok
[22:34] cherylj, https://bugs.launchpad.net/juju-core/+bug/1602054
[22:34] Bug #1602054: juju deployed lxd containers are missing a default gateway when configured with multiple interfaces
[22:49] thumper, did you want to take a couple of minutes and catch up before my eod??
[22:49] alexisb: sure
[22:50] https://hangouts.google.com/hangouts/_/canonical.com/alexis-tim
[23:37] redir: alexisb https://en.wikipedia.org/wiki/Teletext
[23:48] menn0: would love a quick chat at some point today, when you have 5 minutes free, can be any time
[23:49] veebers: that latest log forward branch finally got landed. the new config attribute is logforward-enabled (false by default)
[23:50] wallyworld: sure. i'm close to proposing a fix for this debug-log issue. let's talk once i've done that.
[23:51] no hurry at all
[23:51] wallyworld: sweet
[23:52] veebers: i am going to look at that CPU issue again - i couldn't repro it, but there's something amiss it seems. apart from that, everything should be functional
[23:55] veebers: i ended up hacking something else together for expediency (used a file sink not a syslog sink) - that was all i needed to test the bits i was interested in [23:55] but i will probs do the full set up at some point [23:56] ah nice, wasn't aware of a file sink :-\ That may have been easier