[02:30] <fishcooker> anyone here facing the same problem as this http://vpaste.net/t8RvT ? how do i overcome it?
[02:31] <sarnold> fishcooker: 'reset' or 'stty sane'
[02:36] <fishcooker> thanks, sarnold. It works... could you explain what happened actually?
[02:37] <sarnold> fishcooker: terminals are complicated things with decades of history. There's some sequence of output chars that asks terminals to do things like set the title, select text, paste text, or switch to different sets of glyphs
[02:38] <sarnold> fishcooker: so if you just do something like 'cat /dev/urandom' you never know quite what your terminal will do. some terminals might even execute more or less arbitrary things ..
[02:38] <sarnold> fishcooker: so if you don't trust the source of data completely you should always run it through less or another program that knows how to sanitize output in a way that the terminals won't go crazy
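A quick illustration of sarnold's point (the filename below is hypothetical): `cat -v` renders control bytes as visible caret notation instead of letting the terminal interpret them.

```shell
# An OSC title-set sequence: ESC ] 0 ; ... BEL.  Sent raw to the terminal
# it would retitle the window; cat -v shows it as harmless text instead.
printf '\033]0;evil\a' | cat -v    # prints ^[]0;evil^G

# To inspect a whole untrusted file, pipe through cat -v or less:
# cat -v suspect.log | less
```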
[02:58] <fishcooker> so it would be a dangerous thing if we just do $ cat on a file, sarnold
[03:08] <sarnold> fishcooker: exactly
[05:01] <fishcooker> how to nice or renice a program and the subprocesses it executes ... let's say the program has been executed with a main process with pid 242945 and child processes like http://vpaste.net/vybhd
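One way to renice a process and all of its descendants, sketched as a small recursive function (the PID from the paste is just an example; `renice` and `pgrep` are assumed available):

```shell
# Walk the process tree via pgrep -P (direct children only, hence the
# recursion) and renice every member to the given priority.
renice_tree() {
    local pid=$1 prio=$2 child
    renice -n "$prio" -p "$pid" >/dev/null
    for child in $(pgrep -P "$pid"); do
        renice_tree "$child" "$prio"
    done
}
# renice_tree 242945 10   # example invocation against the paste's main PID
```

Note an unprivileged user can only raise niceness, not lower it.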
[05:37] <cpaelzer> good morning
[05:47] <hateball> and to you
[07:03] <lordievader> Good morning
[07:34] <fishcooker> morning sunshine cpaelzer hateball lordievader
[07:35] <lordievader> Hey fishcooker
[07:36] <cpaelzer> hiho
[07:55] <im0nde> Hello, I want to set up an XMPP server on my ubuntu-server instance. Any recommendations on software? I thought about OpenFire but there are no packages in the official repos
[07:55] <oskaress> Anyone with any experience creating bash scripts? I'm just trying to do a simple bash script that creates a user and sets the password for it. Both the username and password are taken as arguments. Currently the script looks something like this:
[07:56] <oskaress> USER=$1 PASS=$ adduser $USER echo "$USER:$PASS" | chpasswd
[07:56] <oskaress> The password does not get set correctly, anyone have any ideas why?
[07:57] <oskaress> I tried :adduser $USER --gecos "First Last,RoomNumber,WorkPhone,HomePhone" --disabled-password
[07:57] <oskaress> and it didn't work either
[08:29] <hateball> im0nde: I personally use OpenFire. While not in repo, they provide deb files that work fine
[08:29] <hateball> im0nde: Think I followed this guide when I set it up http://www.meestuff.com/install-openfire-ubuntu-16-04-lts-server/
[08:31] <im0nde> hateball: Thanks, that guide looks good! I'll look into it. Are there any specific reasons why you dismissed the (easier to install) alternatives from the repos?
[08:32] <hateball> im0nde: They... sucked
[08:32] <hateball> :D
[08:32] <im0nde> :D
[08:33] <hateball> I've tried ejabberd, and jitsi
[08:33] <hateball> OpenFire "just works" and it has various plugins etc, simple to configure LDAP and so on
[08:35] <im0nde> hateball: ok, i see
[08:50] <lordievader> oskaress: I'm starting to think editing the shadow file through the script is easier...
[08:51] <lordievader> oskaress: Why do you echo the variables into the chpasswd program? And not 'chpasswd "$USER" "$PASS"'?
[08:57] <im0nde> hateball: I just installed openfire according to that guide, but I don't have a domain. Did you have one or just use the IP address?
[08:57] <oskaress> That's what I've read you should do
[08:58] <hateball> im0nde: I have a domain yes (at work)
[09:10] <lordievader> oskaress: According to the man page your original method should work.
[09:11] <lordievader> oskaress: I suppose the PASS=$ is a typo for PASS=$2?
[09:13] <oskaress> Sorry yes, it should be PASS=$2, but I managed to get it work now somehow, thanks anyway.
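For the record, a corrected sketch of the script being discussed (assumes root and the stock Debian/Ubuntu `adduser` and `chpasswd` tools; the commented invocation is illustrative):

```shell
#!/bin/bash
# Create a user and set its password from two arguments.
# Note chpasswd takes no positional arguments: it reads user:password
# pairs on stdin, which was part of the confusion above.
set -eu

create_user() {
    local user=$1 pass=$2
    # --gecos "" --disabled-password makes adduser fully non-interactive
    adduser --gecos "" --disabled-password "$user"
    printf '%s:%s\n' "$user" "$pass" | chpasswd
}

# create_user "$1" "$2"   # run as root: ./mkuser.sh alice s3cret
```

Quoting `"$user"` also keeps names with unusual characters from word-splitting.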
[11:51] <fishcooker> anyone have the same worrying issue as this http://vpaste.net/vYjxN when running do-release-upgrade ?
[12:26] <TJ-> On 16.04 trying to bring up a bond/trunk/LA consisting of 4 1Gbps ports using ifenslave's ifupdown run-parts scripts, it always fails. Doing it directly using ip link up ${SLAVES}; ifenslave bond0 ${SLAVES} works. Can't find any Debian/Ubuntu bug reports on this; anyone have experience of this?
[15:24] <fishcooker> is there any option on cli to use default options on every confirmation during do-release-upgrade ?
[16:25] <dpb1> https://askubuntu.com/questions/250733/can-i-do-a-silent-or-unattended-release-upgrade
[16:25] <dpb1> tl;dr: yes
[16:26] <dpb1> (even though fishcooker is no longer with us)
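The approach from that answer looks roughly like this (a sketch; check `do-release-upgrade --help` on your release before relying on it):

```shell
# The non-interactive frontend answers every prompt with its default;
# DEBIAN_FRONTEND=noninteractive keeps dpkg/debconf from asking about
# config files during the upgrade.
sudo DEBIAN_FRONTEND=noninteractive \
    do-release-upgrade -f DistUpgradeViewNonInteractive
```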
[16:32] <nacc> stgraber: what's the (if there is a "the") recommended way to interact with lxd in python3? pylxd?
[16:34] <stgraber> nacc: pylxd most likely, though I don't know what's the state of exec/websocket in there these days. Otherwise, you can always just subprocess to "lxc", that's what autopkgtest does for example.
[16:35] <nacc> stgraber: yeah, I was planning on doing the latter as it's "easier", but figured I'd check on the former (reading its docs now)
[16:35] <nacc> stgraber: thanks!
[16:38] <smoser> nacc, you have a minute ?
[16:38] <smoser> git-ubuntu process...
[16:38] <teward> blurgh i keep discovering things that're busted in universe because of old versions >.<
[16:39] <smoser> oh. never mind, nacc. cpaelzer handled it for me
[16:40] <smoser> hm..
[16:40] <nacc> smoser: yeah, i gave a few lines of instructions yesterday
[16:40] <smoser> cpaelzer,
[16:40] <nacc> smoser: but i think cpaelzer did it
[16:40] <smoser> cpaelzer, said "i tagged, pushed"
[16:40] <smoser> but :
[16:41] <smoser> git fetch
[16:41] <smoser> git log pkg/ubuntu/devel
[16:41] <smoser> still shows 2.0.874-4ubuntu1
[16:41] <smoser> what am i missing ?
[16:41] <nacc> smoser: let me check the logs
[16:45] <nacc> smoser: i'm kicking the import job, let me see if it sees the upload (it'll take a bit to catch up). You can also import open-iscsi manually (it'll just catch up) if you're impatient
[16:48] <smoser> nacc, ok. i was thinking that by pushing he pushed that history. but he just pushed a tag
[16:48] <smoser> which will be identified and verified
[16:48] <nacc> smoser: right, when the importer sees it
[16:48] <nacc> now, i don't know why it wasn't seen (it looks like it was published ~9 hours ago)
[16:48] <nacc> smoser: so i've told the importer to walk back over the last 24 hours
[16:57] <lucidguy> Hey any openstack folks in the house ....
[16:57] <lucidguy> nobody is responding in #openstack
[16:57] <lucidguy> and google
[16:58] <dpb1> lucidguy: you need to update to xenial
[16:59] <smoser> nacc, and then...
[16:59] <smoser>  http://autopkgtest.ubuntu.com/packages/o/open-iscsi
[16:59] <smoser> that just hasn't run because autopkgtest is backed up?
[17:00] <nacc> smoser: http://autopkgtest.ubuntu.com/running
[17:00] <nacc> smoser: yeah it's queued
[17:15] <lucidguy> dpb1: you didn't even hear my problem?
[17:15] <nacc> lucidguy: i think it was a joke about your nick (lucid being an EOL ubuntu release)
[17:16] <lucidguy> We are running Mitaka, also no longer supported I believe
[17:17] <nacc> lucidguy: oh so you actually are on lucid?
[17:17] <lucidguy> No, 16.04 is Xenial which runs Mitaka
[17:17] <dpb1> lucidguy: yes, a joke.  you never told your problem so I didn't hear it, no. :)
[17:19] <lucidguy> OK, our dashboard/horizon is so unpredictable when it comes to performance, sometimes it freezes and fails altogether.
[17:19] <dpb1> lucidguy: and yes, Mitaka would be supported by Canonical.
[17:21] <dpb1> lucidguy: what kind of triage debugging have you done?
[17:21] <lucidguy> I'm watching loads and all logs I can think of, nothing screams fix me.
[17:21] <dpb1> s/triage/troubleshooting/
[17:22] <dpb1> I *think* horizon logs are in /var/log/apache.... ?
[17:22] <dpb1> tbh, I don't remember that well
[17:22] <lordcirth_work> lucidguy, glancing at the docs, it seems Horizon uses memcached?  Did you check its logs?
[17:22] <dpb1> lucidguy: and, kind of a first-starter question.  how did you deploy openstack.
[17:23] <lucidguy> I actually never deployed it.
[17:23] <dpb1> conjure-up?  following a doc?
[17:23] <dpb1> juju?
[17:23] <lucidguy> I have high level understanding of where things are etc.  No another admin installed it from scratch to my knowledge
[17:23] <lucidguy> Following docs
[17:23] <dpb1> which docs
[17:23] <dpb1> (even if you didn't do it)
[17:24] <lucidguy> Unfortunately I won't have the answer to most of these questions.
[17:24] <dpb1> is it a single machine install?
[17:24] <lucidguy> I guess common performance-related recommendations are good .. looking into memcached
[17:24] <nwilson5> I know it's ubuntu-server — is it appropriate to ask about "sort" arguments here?
[17:25] <lordcirth_work> nwilson5, seems reasonable to me
[17:25] <nwilson5> trying to figure out this field-separator for numeric types
[17:25] <dpb1> nwilson5: as long as it's not a quick vs merge sort debate
[17:25] <lucidguy> dpb1, no, 30+ compute nodes, 2 main nodes running everything minus neutron
[17:25] <dpb1> lucidguy: ok
[17:25] <dpb1> lucidguy: and you don't interact with 'juju' command to get to each box or anything
[17:25] <lucidguy> i do not
[17:26] <lordcirth_work> nwilson5, so you want sort -t, what sort of separator do you want?
[17:26] <nwilson5> sorting a csv separated by comma, first column, numerically: "sort --field-separator=',' -n -k1 <file>" does not sort it numerically / correctly.
[17:26] <nwilson5> "sort --field-separator=',' -n -k1,1 <file>"
[17:26] <nwilson5> just trying to understand difference between numerical sort for -k1 vs -k1,1
[17:29] <dpb1> lucidguy: memcached might be involved, depends on how it's deployed.  Also, the working pieces you should look into are mysql (or percona, or whatever the db is), rabbitmq (message queue), disks on each of the controller nodes, and the logs in /var/log/syslog, /var/log/nova*, dmesg, `top` on the controller nodes, etc.
[17:30] <lucidguy> dpb1, Ok I'm going to look this all over again, appreciate the assistance, I hope to have a more specific question soon.
[17:30]  * dpb1 nods
[17:32] <lucidguy> For troubleshooting purposes can I simply shutdown memcached.service, will that break stuff?
[17:33] <dpb1> restarting services seems reasonable steps to me, yes
[17:33] <lucidguy> How about shutdown?
[17:33] <lordcirth_work> lucidguy, it would probably stop working, but on start it *should* resume properly
[17:33] <dpb1> right
[17:33] <lucidguy> what would stop working?
[17:33] <lordcirth_work> Horizon
[17:33] <lucidguy> Gotcha
[17:33] <lordcirth_work> If horizon uses memcached, memcached going away will break it
[17:34] <lordcirth_work> You might need to restart horizon after bringing memcached back for it to notice
[17:34] <lucidguy> you mean apache?
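The restart sequence under discussion, as a sketch (service names assume a stock Xenial install where Horizon runs under Apache):

```shell
# Bounce memcached, then restart Apache so Horizon re-establishes its
# cache connections instead of holding stale ones.
sudo systemctl restart memcached
sudo systemctl restart apache2
```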
[17:34] <smoser> nwilson5, it seems to generally work http://paste.ubuntu.com/25528925/
[17:34] <smoser> if you do '-k2,1', i'm not sure what it's doing. it does show weirdness there
[17:35] <nwilson5> smoser, it isn't working in a case I'm testing unless I do the -k1,1
[17:36] <lucidguy> It could be a fluke, but all of a sudden it's responding again .. hmm.
[17:36] <lordcirth_work> nwilson5, If I understand the docs, it's a range, in which case 1,1 seems redundant?
[17:36] <smoser> oh. i see. you must have to tell it where to stop or it sorts everything by numeric maybe
[17:36] <nwilson5> if we did LC_ALL=C it then does work
[17:36] <smoser> lordcirth_work, well end defaults to the last field
[17:36] <lordcirth_work> nwilson5, oh I see
[17:36] <smoser> so if you don't want it to consider the others you do have to give the end
[17:36] <lordcirth_work> Yeah -k1 means from field 1 to end, 1,1 would be only field 1
[17:36] <nwilson5> oh..
[17:36] <smoser> nwilson5, fuzzy parts of my brain indicate that i've fought this before
[17:36] <lordcirth_work> Which you'd think would be default, but ok
[17:37] <dpb1> smoser: hahah
[17:37] <nwilson5> so my other columns may be affecting it. when I didn't have any other columns it sorted fine
[17:37] <lordcirth_work> and 2,1 seems like it'd be backwards, I suspect that's not supported?
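The `-k1` vs `-k1,1` difference above is easiest to see with `-s` (stable sort), which disables sort's last-resort whole-line comparison so the key definition alone decides the order:

```shell
# -k1 means "from field 1 to end of line", so the second field still
# participates in the sort key; -k1,1 restricts the key to field 1 only.
printf '1,z\n1,a\n' | sort -t, -s -k1      # key is the whole line: 1,a first
printf '1,z\n1,a\n' | sort -t, -s -k1,1    # keys tie; stable: 1,z stays first
```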
[17:37] <dpb1> and, nacc, thanks for getting my joke.  I was worried for a while.
[17:39] <lucidguy> dpb1: Using openstack in production?
[17:39] <dpb1> lucidguy: no, I managed development of the openstack autopilot at canonical.
[17:40] <lucidguy> dpb1: you should probably say no, once I find out you're very knowledgeable with openstack I may start flooding you with questions... hah
[17:40]  * dpb1 hears alarm bells going off
[17:40] <lucidguy> Can one purchase support for OpenStack?
[17:41] <dpb1> lucidguy: ya, we have a better offering now (the autopilot is no longer something we sell).   2 things, let me get you a link
[17:41] <dpb1> lucidguy: https://www.ubuntu.com/cloud/openstack
[17:42] <dpb1> lucidguy: that second section, the 'fully managed on prem' option is very well done and popular
[17:42] <dpb1> lucidguy: honestly, openstack is a PITA to manage. I know from lots of late nights debugging things like you are running into. :/
[17:43] <lucidguy> dpb1 thanks for the info, does this mean no more questions :)
[17:43] <dpb1> lucidguy: hah
[17:43] <lordcirth_work> dpb1, is there a project simpler than openstack that you would recommend for smaller deployments?
[17:44] <dpb1> lordcirth_work: honestly, lxd is what I go to if I can.
[17:44] <lordcirth_work> dpb1, yeah, we've got lots of lxd containers, with both host and container managed by SaltStack
[17:44] <dpb1> but, nothing yet that gives that feeling of a centrally managed set of VMs that would replace openstack.
[17:45] <lordcirth_work> I'd like seamless migration between hosts so we can bring down a server room without problems
[17:45] <dpb1> lordcirth_work: kubernetes being an obvious thing to mention as well -- but again, not vms, and a different deployment strategy entirely
[17:46] <dpb1> lordcirth_work: ya, openstack can get you there, not without expert knowledge though.
[17:46] <lordcirth_work> Meanwhile I'm going the alternate route of making things HA so we can just shut down a room instead of migrate.
[17:46] <lordcirth_work> Even better, when the service lends itself to HA
[18:00] <cpaelzer> smoser: it will get what you're looking for on import
[18:00] <cpaelzer> smoser: but there is pkg/upload/2.0.874-4ubuntu2 already
[18:01] <cpaelzer> smoser: which is what the importer will find then
[18:02] <cpaelzer> smoser: 11 hours ago https://git.launchpad.net/~usd-import-team/ubuntu/+source/open-iscsi
[18:02] <smoser> cpaelzer, right. thanks.
[18:03] <smoser> that wasn't there 30 minutes ago
[18:04] <nacc> smoser: the upload tag is unrelated to the import tag (yet)
[18:05] <nacc> smoser: so i think it was there before (not 100%, i didn't look)
[18:52] <rh10> guys, i'm trying to create a repo mirror (ubuntu 16). apt-mirror told me "169.4 GiB will be downloaded into archive." is that the real size of an ubuntu mirror? x64.
[18:53] <rh10> can i slightly reduce the size of the repo?
[18:54] <RoyK> rh10: I seriously doubt you can reduce the mirror size without losing something
[18:54] <lordcirth_work> rh10, that seems pretty normal.
[18:54] <rh10> RoyK, lordcirth_work got it, thanks
[18:54] <lordcirth_work> Maybe you could drop older versions of packages?
[18:54] <nacc> rh10: why are you creating a repo mirror? could you just setup a local cache instead?
[18:54] <nacc> rh10: e.g, with squid or something?
[18:54] <lordcirth_work> And yeah, a caching proxy sounds better
[18:54] <nacc> if size is a concern, i mean
[18:55] <lordcirth_work> squid-deb-proxy is a very nice tool
[18:55] <dpb1> +1 for squid.
[18:55] <rh10> nacc, a local cache will only cache packages i install using apt?
[18:55] <lordcirth_work> rh10, no, a network caching proxy
[18:55] <rh10> lordcirth_work, i've never heard of it. thanks.
[18:55] <lordcirth_work> rh10, so you set all of your machines to download through it, and it acts as a shared cache
[18:56] <dpb1> rh10: btw, this is prior art you should use if you really want or need a full mirror: https://insights.ubuntu.com/2017/08/31/running-an-ubuntu-mirror-with-juju/ -- will save you a bunch of time
[18:56] <rh10> dpb1, thanks a lot
[18:56] <nacc> rh10: it depends on your configuration, obviously, but basically, you have all apt traffic run through your proxy and then the proxy will just cache all accesses (following config rules)
[18:56] <dpb1> but, for most networks, a caching proxy is a superior use of your time and resources.
[18:56] <lordcirth_work> rh10, https://www.garron.me/en/blog/ubuntu-deb-proxy-cache.html
[18:56] <nacc> rh10: the goal being that actually used packages get cached
[18:56] <nacc> rh10: and a mirror is costly to maintain (IMO)
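The caching setup being suggested is roughly this (the proxy host IP is a hypothetical example; squid-deb-proxy listens on port 8000 by default):

```shell
# On the proxy box:
sudo apt-get install squid-deb-proxy

# On each client, point apt at the proxy:
echo 'Acquire::http::Proxy "http://192.168.1.10:8000";' | \
    sudo tee /etc/apt/apt.conf.d/30proxy
```

There is also a squid-deb-proxy-client package that auto-discovers the proxy via avahi, which avoids hardcoding the IP.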
[18:57] <dpb1> rh10: the guy that wrote that actually got interested in it after spending a year in africa, where it was essential for his ISP to mirror things. :)
[19:03] <rh10> dpb1, article said "It’s very big. The whole Ubuntu archive (for amd64 and i386) sits at around 2.5Tb" - is it size with sources? apt-mirror show me only 169 GB for x64. just interesting
[19:05] <nacc> smoser: fyi, open-iscsi should be updated now
[19:07] <lordcirth_work> rh10, that's x64 and i386, binary and source, and probably older packages too
[19:07] <rh10> lordcirth_work, got it, thanks
[19:08] <lordcirth_work> We have a 15TB mirror here, but we are a large institution and one of the public mirrors
[19:08] <rh10> lordcirth_work, interesting. why so large? different versions of distro and x64, i386 and sources?
[19:09] <sarnold> rh10: that 2.5 Tb probably includes ports
[19:09] <lordcirth_work> rh10, sorry it's not just Ubuntu: http://mirror.csclub.uwaterloo.ca/  Ubuntu is 1.2TB apparently
[19:10] <sarnold> here's my own local mirror setup http://paste.ubuntu.com/25529365/
[19:10] <sarnold> (the unpacked trees are _stale_ by a looong way but shouldn't be wrong by more than 50%..)
[19:11] <teward> lordcirth_work: in total, or just a specific release?
[19:11] <lordcirth_work> teward, precise -> zesty apparently
[19:12] <rh10> sarnold, got it
[19:12] <teward> lordcirth_work: you able to get me the size for just Xenial, maybe?  (if possible)
[19:12] <lordcirth_work> But again we do that mostly to be a public mirror, not just for ourselves
[19:12] <sarnold> folks who do rsync mirrors aren't in a great position to report on sizes for just specific releases
[19:12] <teward> lordcirth_work: oh definitely, sorry to ask for additional data such as public mirror, etc. but i'm trying to find things :P
[19:12] <sarnold> it's really difficult to tell which packages belong to which releases
[19:12]  * teward wants to stand up a mirror just for Xenial locally xD
[19:13] <sarnold> teward: rh10 reported ~170 gigs when he or she joined. if you've got a 500 gig drive that'd probably be enough.
[19:14] <lordcirth_work> But really, if you have an internet connection, use a caching proxy
[19:14] <teward> sarnold: how's a 750gig vhd on a 10TB datastore for a VM :P
[19:14] <sarnold> teward: ought to suffice for xenial amd64 + i386 :)
[19:15] <lordcirth_work> Running your own mirror will take more bandwidth than caching, and maybe even more than just downloading, if you don't have a huge number of systems.
[19:15] <sarnold> caching proxy is definitely way easier to configure than picking-and-choosing what parts of the archive you want to mirror
[19:15] <teward> lordcirth_work: 26 systems and counting :P
[19:15] <teward> i could probably set up a caching proxy though
[19:15]  * teward shrugs
[19:15] <teward> sorry to derail :)
[19:15] <teward> now where's ScottK, he needs to update his universe package to actually work with Python3...
[19:16] <teward> s/his universe/this universe/
[19:16] <lordcirth_work> teward, unless they are vastly different than each other, you'll use the same ~20GB of packages and be syncing the other 150GB forever and never use them
[19:16] <teward> 45GB but same difference :P
[19:17]  * teward has a few crazy servers :P
[19:22] <rh10> lordcirth_work, just curious, what storage do you use to keep 15 TB of data?
[19:23] <lordcirth_work> rh10, we've just moved to Ceph
[19:23] <lordcirth_work> >200TB free XD
[19:23] <rh10> lordcirth_work, thanks.
[19:24] <lordcirth_work> But if you wanted to store that much, I'd say ZFS.  Our Ceph is actually on top of ZFS.
[19:24] <rh10> lordcirth_work, ubuntu distros on servers?
[19:25] <lordcirth_work> rh10, yup, 16.04
[19:25] <rh10> lordcirth_work, thanks for the answers.
[19:25] <lordcirth_work> Although we had to add a more recent ZFS for Ceph to work with it
[19:27] <sarnold> lordcirth_work: ooh, why did you need to update your zfs?
[19:28] <lordcirth_work> sarnold, Ceph requires support for some fancy attributes to store metadata.  Otherwise it runs slow.  Also you have to change some Ceph config so that it knows to use those attributes; it only autodetects xfs and btrfs.
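The usual ZFS dataset tuning for an xattr-heavy workload like this looks roughly as follows (the pool/dataset name is hypothetical; SA-style xattrs need a ZFS version that supports them):

```shell
# Store extended attributes as "system attributes" inline in the dnode
# rather than in a hidden xattr directory, avoiding extra seeks per xattr.
zfs set xattr=sa tank/ceph-osd
# Skip access-time updates, which otherwise cause a write per read.
zfs set atime=off tank/ceph-osd
```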
[19:29] <sarnold> lordcirth_work: i've wondered before if using zfs as underlying storage would make ceph way happier or way angrier.. letting zfs handle local disk failure and bitrot sounds way nicer than stressing the whole ceph cluster .. but ceph folks said they very much prefer xfs for simple high-throughput no-features storage..
[19:29] <sarnold> lordcirth_work: posix-sa attrs or similar? or something else?
[19:29] <lordcirth_work> sarnold, "ceph folks" haven't gotten around to implementing end-to-end checksumming, so they can deal with it :P
[19:30] <sarnold> lordcirth_work: yeah. I'd rather every read be checksummed.. heh.
[19:30] <lordcirth_work> sarnold, https://gist.github.com/lordcirth/884c37b93810340e507a42d420fddbe7
[19:31] <sarnold> lordcirth_work: aha! :D thanks
[19:33] <lordcirth_work> But you *must* have ZFS > 0.7 or presumably bad things happen!
[19:34] <sarnold> previously xattrs all incurred additional seeks+reads -- the posix sa-attr stuff in 0.6 'inlined' very short xattrs
[19:35] <sarnold> so if ceph used xattrs often, that'd be a huge number of additional seeks and reads
[19:36] <lordcirth_work> sarnold, that sounds right.  I wasn't actually closely involved with that part of the project.
[19:37] <sarnold> as it is I'm just using raw zfs on my one machine; it'd benefit greatly from ABD and compressed arc, so I'd love to see .7 backported to 16.04 LTS .. but that'd be difficult to do well, and I'm not sure I'm up for doing the upgrade myself, I'm just so very lazy.
[19:37] <lordcirth_work> sarnold, https://launchpad.net/~jonathonf/+archive/ubuntu/zfs
[19:38] <sarnold> also very paranoid :) hehe
[19:41] <rh10> guys, how do you think, is it realistic to test Ceph in a local virtual env? for learning purposes.
[19:43] <sarnold> rh10: testing it in a few VMs sounds like a great way to get some experience with it, but I understand it -really- prefers to live directly on unvirtualized hardware in production
[19:43] <rh10> sarnold, ok got it.
[19:44] <lordcirth_work> rh10, you can test configuring it and breaking it fine, but of course performance numbers will be useless
[19:44] <rh10> lordcirth_work, i understand about perf. it's just for learn
[19:59] <RoyK> does ceph support anything like raid[56]?
[20:02] <sarnold> RoyK: yeah; it's far more flexible than that, more flexible even than raidz1/raidz2/raidz3 -- take a look at the erasure coded pool section of http://docs.ceph.com/docs/master/rados/operations/crush-map/
[20:03] <sarnold> RoyK: you can specify which 'level's of the network are how redundant, so you can keep 'resilvering' kinds of things to individual racks or rows or AZs or datacenters
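An erasure-coded pool from those docs looks roughly like this (the profile name, k/m values, and pg counts are just examples; older Ceph releases spelled the failure-domain option differently):

```shell
# 4 data chunks + 2 coding chunks, spread so any whole rack can fail.
ceph osd erasure-code-profile set example_profile \
    k=4 m=2 crush-failure-domain=rack
ceph osd pool create ecpool 128 128 erasure example_profile
```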
[20:03] <RoyK> last time I looked at it, it was raid1 only (or raid 1+0)
[20:06] <RoyK> sarnold: damn - ceph supports tiered storage too?
[20:06] <sarnold> RoyK: yeah
[20:07] <RoyK> I'll have to read up on ceph - fancy :D
[20:13] <RoyK> sarnold: erm - is this really tiering or is it just caching?
[20:21] <sarnold> RoyK: hrm. It smelled like tiering to me ..
[20:23] <RoyK> didn't look that way in the docs
[20:26] <lordcirth_work> Anyone using an SIEM / have an opinion on what to use?