=== rumble is now known as grumble
=== disposable3 is now known as disposable2
[17:39] what's the procedure to delete a network space in juju?
[17:39] or can we move the subnet from one space to another?
[17:41] is there a useful way to get a log of the actions a unit may have been performing at a specific time from the juju cli?
[18:21] xarses_: not from the CLI. I guess you could look for action results that occurred around the timeline? but you'd need to check the logs to get specifics
[18:22] xarses_: turning on the audit log would make it a bit cleaner/easier, but it's still in a log
[18:22] urgh, too many signals to just sift in every damned log
[18:23] on a related subject, is there a way to disable the local haproxy in the openstack api charms?
[18:25] something is raising 503's on me randomly, and that guy has basically no log
[18:25] and I have a balancer elsewhere
[18:26] xarses_: do you know what action you're looking for? I'm curious what the log | grep action might be, or the name of the action
[18:26] nope, no clue whatsoever
[18:27] we run (openstack) rally against the cloud to detect failures, and have noticed recently that we are getting random failures on endpoints that are nearly always 503's
[18:27] xarses_: so not sure what's actually erroring in the stack?
[18:27] but calls prior and after, which in many cases even hit said api, pass
[18:31] yep, no clue what's causing the errors
[18:34] oh, the one I'm looking at currently isn't a 503, it's a 502 (incomplete or invalid response)
[18:34] bad gateway
[18:50] argh, found it
[18:50] this thing, it ruffles my feathers so much
[18:53] something like this happens https://gist.github.com/xarses/5a7beb4cc856588c3f91ce0b094b4f28
[18:54] and then everything worth looking at is SIGTERM'd and restarted
[18:54] apache, nrpe, the python wsgi's
[18:54] memcache
[20:19] ugh, so the systemd thing is probably just annoying
[20:19] there is a session into the host before that, and then a bunch of things related to the charm reload; I'd guess that it's the culprit, as I originally suspected
[20:20] so I'm back to: something in the charm allows for concurrent modification of members and ends up causing an outage
[21:41] so `juju debug-log` doesn't seem to work
[21:44] besides it not having any logs for the units I'm looking for
[21:44] --lines isn't respected
[21:45] > juju debug-log --no-tail --include unit-percona-cluster-0 --lines=1 --replay | wc -l
[21:45] > 4489
[21:45] and the example in --help has a `-T` flag which doesn't exist
[22:43] I just ran into a VersionConflict with requests when trying to create a new charm using charmtools. https://paste.ubuntu.com/25486709/
[22:45] grar
[23:23] xarses: hey
[23:23] xarses: which juju version?
[23:25] xarses: also, thanks for pointing out the -T problem, we'll get that fixed
[23:25] xarses: are you able to file a bug? if not we could add one
[23:28] 2.2
[23:28] probably 2.2.2
[23:28] we were updating some of the controllers, not sure which that was on
[23:29] for which? -T, or the lines not being respected, or nothing from the unit I'm looking for?
[23:29] xarses: we did update the include option so you should just be able to specify the unit like normal, so 'percona-cluster/0'
[23:29] -T doesn't appear to be an option in the code
[23:29] I'm kinda surprised it isn't complaining
[23:29] oh, it complains of the invalid option
[23:30] oh, good
[23:30] it's just in the --help long description of examples
[23:30] yeah, that needs to be fixed
[23:30] also the examples should just specify the machine ids and unit ids rather than the unit- and machine- strings
[23:30] I was looking initially for unit-neutron-0, which has one entry in the uncompressed log file; it never gave that back
[23:31] also tried neutron/0
[23:31] ok...
[23:31] it is possible that there are lines in the local file that never make it to the server
[23:31] only switched to percona-cluster because it was spewing warnings and not respecting --lines
[23:31] for debug-log to pick up
[23:31] the local log files are written as things happen
[23:31] no, there is one line in the /var/lib/juju thingy on the controller
[23:32] but debug-log only works on the logs that ended up in the controller
[23:32] oh...
[23:32] hmm...
[23:32] that grep found
[23:32] if it made it into logsink.log then it should be in the db
[23:32] unless it was removed due to pruning
[23:32] ya, that one
[23:32] so, long story with debug-log aside
[23:33] shouldn't it have logged actions like applying charm updates and restarting services?
[23:33] they weren't logged on either the controller or the unit
[23:33] they don't emit a syslog
[23:33] and are driving my troubleshooting crazy
[23:34] you mean the charm actions?
[23:34] as in the charm is restarting an openstack service
[23:34] ?
[23:34] or the restarting of juju agents?
[23:35] well, as in freaking out and causing an outage on all of my neutron (api) servers at the same time during a charm upgrade
[23:35] I can only prove my issue is due to the charm upgrade, because less than a minute prior memcached is started for the first time ever
[23:35] there are no other logs that I can find that show that juju intended to or was actively taking actions on the unit
[23:36] what is the logging config set to?
[23:36] whose config?
[23:36] juju model-config logging-config
[23:36] for the model in question
[23:39] looks like a default of warning
[23:39] in which case, you will only get told of things not working
[23:40] juju model-config logging-config=juju=info
[23:40] $ juju model-config logging-config
[23:40] <root>=WARNING;unit=WARNING
[23:40] juju model-config "logging-config=juju=info;unit=warning"
[23:41] xarses: that would then tell you more about what juju is doing for that model
[23:41] le sigh, google didn't give me any information like this about levels
[23:41] we are trying to make sure that INFO level logging is useful and not too verbose
[23:41] all I could ever find was https://jujucharms.com/docs/2.2/troubleshooting-logs
[23:42] xarses: I'll make sure that we get the docs updated to talk about the log levels
[23:43] xarses: I'm going to relocate back home, should be back online in ~20 minutes
[23:44] k
[23:44] thanks, this was insightful
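
[editor's note] A rough sketch of the debug-log filtering discussed above, assuming Juju 2.2 or later where --include accepts plain unit names (not the unit- prefixed form); exact flag behaviour may differ between releases, and debug-log only shows entries that actually reached the controller:

  $ juju debug-log --replay --no-tail --include percona-cluster/0 --lines 10
  $ juju debug-log --replay --no-tail --include neutron/0 | grep -i action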
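
[editor's note] The logging-config commands from the end of the conversation, collected for reference. The default of <root>=WARNING;unit=WARNING only reports problems; raising the juju module to INFO makes model activity such as charm upgrades visible. The module names ("juju", "unit") are as given in the conversation:

  $ juju model-config logging-config
  <root>=WARNING;unit=WARNING
  $ juju model-config "logging-config=juju=info;unit=warning"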