[17:39] <gaurangt> what's the procedure to delete a network space in juju?
[17:39] <gaurangt> or can we move the subnet from one space to another?
[17:41] <xarses_> is there a useful way to get a log of the actions a unit may have been performing at a specific time from the juju cli?
[18:21] <rick_h> xarses_: not from the CLI. I guess you could look for action results that occurred around the timeline? but you'd need to check the logs to get specifics
[18:22] <rick_h> xarses_: turning on the audit log would make it cleaner/easier a bit but it's still in a log
[18:22] <xarses_> urgh, too many signals to just sift in every damned log
[18:23] <xarses_> on a related subject, is there a way to disable the local haproxy in the openstack api charms?
[18:25] <xarses_> something is raising 503's on me randomly, and that guy has like no log
[18:25] <xarses_> and I have a balancer elsewhere
[18:26] <rick_h> xarses_: do you know what action you're looking for? I'm curious what `log | grep action` might turn up, or what the name of the action is
[18:26] <xarses_> nope, no clue what so ever
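A rough sketch of inspecting action results from the CLI, assuming a juju 2.x client; this only covers charm actions that were explicitly run, not general unit activity, so it may not surface what xarses_ is chasing here:

    juju show-action-status                 # list actions with their ids, status and completion times
    juju show-action-output <action-id>     # show the output recorded for a single action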
[18:27] <xarses_> we run (openstack) rally against the cloud to detect failures, and have noticed recently that we are getting random failures on endpoints that are nearly always 503's
[18:27] <rick_h> xarses_: so not sure what's actually erroring in the stack?
[18:27] <xarses_> but the calls before and after, which in many cases even hit the same api, pass
[18:31] <xarses_> yep, no clue what's causing the errors
[18:34] <xarses_> oh, the one I'm looking at currently isn't a 503, it's a 502 (incomplete or invalid response)
[18:34] <xarses_> bad gateway
[18:50] <xarses_> argh, found it
[18:50] <xarses_> this thing, it ruffles my feathers soo much
[18:53] <xarses_> something like this happens https://gist.github.com/xarses/5a7beb4cc856588c3f91ce0b094b4f28
[18:54] <xarses_> and then anything worth looking at gets SIGTERM'd and restarted
[18:54] <xarses_> apache, nrpe, the python wsgi's
[18:54] <xarses_> memcache
[20:19] <xarses_> urgh, so the systemd thing is probably just annoying
[20:19] <xarses_> there is a session into the host before that, and then a bunch of things related to the charm reload; I'd guess that it's the culprit, as I originally suspected
[20:20] <xarses_> so I'm back to, something in the charm allows for concurrent modification of members and ends up causing an outage
[21:41] <xarses_> so `juju debug-log` doesn't seem to work
[21:44] <xarses_> aside from it not having any logs for the units I'm looking for
[21:44] <xarses_> --lines isn't respected
[21:45] <xarses_> > juju debug-log --no-tail --include unit-percona-cluster-0 --lines=1 --replay | wc -l
[21:45] <xarses_> > 4489
[21:45] <xarses_> and the example in --help has a `-T` flag which doesn't exist
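For reference, the invocation in question with the flags as documented for the 2.x client; whether --lines is honoured alongside --replay is exactly what is being reported as broken here:

    juju debug-log --replay --no-tail --lines 1 --include unit-percona-cluster-0
    # newer 2.x clients are also said (below) to accept the plain unit name:
    juju debug-log --replay --no-tail --include percona-cluster/0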
[22:43] <skay> I just ran in to a VersionConflict with requests when trying to create a new charm using charmtools. https://paste.ubuntu.com/25486709/
[22:45] <skay> grar
[23:23] <thumper> xarses: hey
[23:23] <thumper> xarses: which juju version?
[23:25] <thumper> xarses: also, thanks for pointing out the -T problem, we'll get that fixed
[23:25] <thumper> xarses: are you able to file a bug? if not we could add one
[23:28] <xarses> 2.2
[23:28] <xarses> probably 2.2.2
[23:28] <xarses> we were updating some of the controllers, not sure which that was on
[23:29] <xarses> for which? the -T, the lines not being respected, or nothing from the unit I'm looking for?
[23:29] <thumper> xarses: we did update the include option so you should just be able to specify the unit like normal, so 'percona-cluster/0'
[23:29] <thumper> -T doesn't appear to be an option in the code
[23:29] <thumper> I'm kinda surprised it isn't complaining
[23:29] <xarses> oh, it complains of the invalid option
[23:30] <thumper> oh, good
[23:30] <xarses> it's just in the examples in the --help long description
[23:30] <thumper> yeah, that needs to be fixed
[23:30] <thumper> also the examples should just specify the machine ids and unit ids rather than the unit- and machine- strings
[23:30] <xarses> I was looking initially for unit-neutron-0, which has one entry in the uncompressed log file, but it never gave that back
[23:31] <xarses> also tried neutron/0
[23:31] <thumper> ok...
[23:31] <thumper> it is possible that there are lines in the local file that never make it to the server
[23:31] <xarses> only switched to percona-cluster because it was spewing warnings and not respecting --lines
[23:31] <thumper> for debug-log to pick up
[23:31] <thumper> the local log files are written as things happen
[23:31] <xarses> no there is one line in the /var/lib/juju thingy on the controller
[23:32] <thumper> but debug-log only works on the logs that ended up in the controller
[23:32] <thumper> oh...
[23:32] <thumper> hmm...
[23:32] <xarses> that grep found
[23:32] <thumper> if it made it into logsink.log then it should be in the db
[23:32] <thumper> unless it was removed due to pruning
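A sketch of checking the pruning limits thumper mentions, assuming a controller release that exposes the max-logs-age / max-logs-size settings (the exact keys, and whether they are settable, may vary by version):

    juju controller-config | grep -i logs       # show current log retention limits
    juju controller-config max-logs-age=96h     # example value only, not a recommendation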
[23:32] <xarses> ya that one
[23:32] <xarses> so long story with debug-log aside
[23:33] <xarses> shouldn't it have logged actions like applying charm updates and restarting services?
[23:33] <xarses> they weren't logged on either the controller or the unit
[23:33] <xarses> they don't emit a syslog
[23:33] <xarses> and are driving my troubleshooting crazy
[23:34] <thumper> you mean the charm actions?
[23:34] <thumper> as in the charm is restarting an openstack service
[23:34] <thumper> ?
[23:34] <thumper> or the restarting of juju agents?
[23:35] <xarses> well as in freaking out and causing an outage on all of my neutron (api) servers at the same time during a charm upgrade
[23:35] <xarses> I can only prove my issue is due to the charm upgrade, because less than a minute prior memcached is started for the first time ever
[23:35] <xarses> there are no other logs that I can find that show that juju intended to or was actively taking actions on the unit
[23:36] <thumper> what is the logging config set to?
[23:36] <xarses> whose config?
[23:36] <thumper> juju model-config logging-config
[23:36] <thumper> for the model in question
[23:39] <xarses> looks like a default of warning
[23:39] <thumper> in which case, you will only get told of things not working
[23:40] <thumper> juju model-config logging-config=juju=info
[23:40] <xarses_> $ juju model-config logging-config
<root>=WARNING;unit=WARNING
[23:40] <thumper> juju model-config "logging-config=juju=info;unit=warning"
[23:41] <thumper> xarses: that would then tell you more about what juju is doing for that model
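Putting the two commands from this exchange together as a minimal sketch:

    juju model-config logging-config                            # read the current config (the default is <root>=WARNING;unit=WARNING)
    juju model-config "logging-config=juju=INFO;unit=WARNING"   # more detail from juju itself, unit-side logging unchanged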
[23:41] <xarses> le sigh, google didn't give me any information like this about levels
[23:41] <thumper> we are trying to make sure that INFO level logging is useful and not too verbose
[23:41] <xarses_> all I could ever find was https://jujucharms.com/docs/2.2/troubleshooting-logs
[23:42] <thumper> xarses: I'll make sure that we get the docs updated to talk about the log levels
[23:43] <thumper> xarses: I'm going to relocate back home, should be back online in ~20 minutes
[23:44] <xarses> k
[23:44] <xarses> thanks this was insightful