[04:53] <darkzek> Hey, anybody know how to make my 16.04 Ubuntu Server auto-login? It's a test server in vmware.
[04:53] <YankDownUnder> darkzek: Did you read what I posted in #ubuntu?
[04:57] <darkzek> YankDownUnder Yes, I would really prefer not to install a dm to save server resources.
[04:57] <YankDownUnder> darkzek: There is going to be no real logical or practical solution to having Ubu server "automagically login" - as it's not part of the "model" of the whole. However, that being said - and as I've had to do in the past, I've installed a very lightweight DM (XDM) and set it up for autologin...only had to install something nice and small like WindowMaker or twm or olvwm or such...which was nothing, really.
[04:58] <darkzek> So my options are: install a dm, use a super hacky method that will take ages to configure, or copy my very long password each boot
[04:58] <YankDownUnder> So, again, that being said, you're not the only one that has wanted to accomplish this task...trust me...
[04:58] <YankDownUnder> darkzek: Actually, it takes minutes...at the most...
[04:59] <YankDownUnder> On a VM over the weekend (Ubuntu Server 16.04.2) I got Dovecot/IMAP/POP3, lightdm and WindowMaker setup in, er, what, 10 minutes tops?
[05:00] <darkzek> YankDownUnder Haha ok, I guess i'll install xfce then :)
[05:01] <darkzek> Thanks for your help :)
[05:01] <YankDownUnder> XFce is actually heavier than WindowMaker, AfterStep or whatever...HOWEVER, that being said, it's your VM, not mine. I prefer "less than" on servers...XFce *USED* to be very light, but it's grown a bit "thick" around the edges...ahem...
[05:04] <darkzek> YankDownUnder Yeah im not the best with Linux knowledge so I don't really feel comfortable installing my own window manager right now. Thanks again :)
[05:06] <YankDownUnder> darkzek: Fair enough. Just remember - it's EASY if you THINK IT'S EASY. Otherwise, it's a nightmare. Simple stuff: apt-get install -y lightdm && apt-get install -y xfce4 => pretty much all there is to it.
[05:06] <YankDownUnder> After you get lightdm installed, you can check the /etc/lightdm/lightdm.conf and edit it to suit for your autologin schmutz
[05:22] <darkzek> YankDownUnder I'll do that then, time to get out of my shell haha.
[05:23] <YankDownUnder> darkzek: Cool bananas...it's not so bad "outside the box" you know...
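The lightdm autologin setup YankDownUnder describes above comes down to a short stanza in /etc/lightdm/lightdm.conf; a minimal sketch with the username as a placeholder (recent lightdm uses the [Seat:*] section name, older releases [SeatDefaults]):

```ini
# /etc/lightdm/lightdm.conf -- hypothetical autologin stanza
[Seat:*]
autologin-user=youruser
autologin-user-timeout=0
```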
[05:47] <cpaelzer> nacc: rbasak: actually do we have another USBSD today or next week?
[05:48] <cpaelzer> Since it seems to work to encourage community and to get a grip on more issues, I think we should give it a wiki page with a "next date" people can always check
[05:49] <cpaelzer> announcing on the ML is fine, but as I was away last week I just missed it in the truckload of mails - so a page to check, just as we have for the IRC meeting, would be great IMHO
[05:49] <cpaelzer> let me know what you think about that
[06:11] <lordievader> Good morning
[08:50] <zioproto> hello all. I have a question about neutron-server ubuntu packaging. I had to do an ugly hack to /etc/init.d/neutron-server
[08:50] <zioproto> in xenial
[08:51] <zioproto> because I have some plugins, I had to hardcode more --config-file options
[08:51] <zioproto> I have an ugly line that looks like
[08:51] <zioproto> [ -n "$NEUTRON_PLUGIN_CONFIG" ] && DAEMON_ARGS="--config-file=$NEUTRON_PLUGIN_CONFIG --config-file=/etc/neutron/l2gw_plugin.ini --config-file=/etc/neutron/neutron_lbaas.conf"
[08:52] <zioproto> but this variable $NEUTRON_PLUGIN_CONFIG
[08:52] <zioproto> I don't understand it
[08:52] <zioproto> it is not like we have a file with all the configs for all the plugins
[08:52] <zioproto> is this a bug in the packaging ?
[08:58] <zioproto> what would be the clean way to start the daemon with these extra --config-file statements ?
[09:10] <cpaelzer> zioproto: the variable lives in /etc/default/neutron-server
[09:10] <cpaelzer> zioproto: by default it points to /etc/neutron/plugins/ml2/ml2_conf.ini
[09:13] <cpaelzer> zioproto: I don't know neutron well enough - would it allow listing them comma separated, or creating a master conf file that includes multiple others?
[09:13] <cpaelzer> zioproto: I'd consider any of those "cleaner" if they are possible
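The variable cpaelzer points at is a plain shell assignment sourced by the init script; a sketch of the relevant part of /etc/default/neutron-server as the xenial packaging ships it (verify against your installed file):

```shell
# /etc/default/neutron-server (xenial packaging)
# sourced by /etc/init.d/neutron-server, which turns it into
#   --config-file=$NEUTRON_PLUGIN_CONFIG
NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/ml2/ml2_conf.ini"
```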
[09:17] <cpaelzer> rbasak: is not finding /var/lib/libvirt/dnsmasq/default.leases a known uvtool issue?
[09:17] <cpaelzer> any bells ringing?
[09:18] <cpaelzer> rbasak: bug 1420142 seems like what I see, yet it is closed as dup of a fixed bug
[09:18]  * cpaelzer checking versions
[09:20] <cpaelzer> seems my trusty version is too old to work with UCA level libvirts, looking for the uvtool backports now
[09:23] <cpaelzer> rbasak: going to ppa:uvtool-dev/master fixed it
[09:23] <cpaelzer> rbasak: imho as far as I see this is broken for e.g. Trusty+UCA-Mitaka - would it be reasonable to ask the UCA Team to get Xenial version of uvtool into the UCA as well to let it work?
[09:24] <cpaelzer> jamespage: ^^ thoughts?
[09:25] <jamespage> cpaelzer: context?
[09:25] <cpaelzer> jamespage: the lines above, TL;DR I've found that with Trusty+UCA-Mitaka uvtool fails
[09:26] <cpaelzer> jamespage: not sure on the exact details to trigger, but the root cause seems to be old uvtool vs newer libvirt behaviour
[09:26] <cpaelzer> newer uvtool has it fixed already
[09:26] <jamespage> cpaelzer: is this something that we want to support?
[09:26] <jamespage> cpaelzer: UCA is really for OpenStack support, rather than just picking up a new virt stack
[09:27] <cpaelzer> jamespage: true, and people can - as I did - pull in the backport ppa that exists to get going
[09:27] <cpaelzer> I'll update the bug though to help anyone else running into that case
[09:31] <cpaelzer> jamespage: thanks for quickly thinking this through, I updated the bug and agree that there is no reason to pull it into the UCA
[09:31] <jamespage> cpaelzer: yw - we've had similar breaks in the past (newer django broke MAAS for example) and decided that was outside of the scope of the UCA purpose
[11:22] <kotVaska> hi, why won't ubuntu server install on a fujitsu celsius w370? The installation hangs..
[11:52] <cpaelzer> kotVaska: what way are you installing (ISO, Maas, ...) which release and at what point is it hanging?
[11:55] <cpaelzer> kotVaska: depending on that you likely have to select the right entry on https://wiki.ubuntu.com/DebuggingProcedures#Installation_and_Upgrades and provide more info than "hangs on install"
[11:56] <kotVaska> thanks
[12:45] <patsToms> morning
[12:45] <patsToms> does anyone have an idea why the screen could have some artifacts?
[12:45] <patsToms> it renders the Ubuntu 16 server terminal randomly
[12:48] <patsToms> oh, it works well on different monitor
[13:09] <cpaelzer> jamespage: do you (without rereading the code) remember how DPDK_OPTS in openvswitch init handling are read&passed to the ovs-ctl script?
[13:09] <cpaelzer> jamespage: I'm trying to test something on T+UCA-Mitaka but the lack of systemd kind of makes me stumble
[13:10] <cpaelzer> most seems fixed, but the path from init -> ovs-lib -> ovs-ctl -> ovs_vswitch isn't exactly straight :-)
[13:10] <cpaelzer> so I'm missing the DPDK_OPTS being set for now
[13:10] <cpaelzer> jamespage: I'll read through the scripts, but if you happen to remember let me know
[13:14] <cpaelzer> ok, so theory confirmed DPDK_OPTS are not set while ovs-ctl is running in that system setup
[13:26] <cpaelzer> jamespage: found the issue, as FYI on trusty it is running the upstart bits pre-start and that lacks an export of DPDK_OPTS
[13:26] <cpaelzer> jamespage: not a real issue as it was never meant to work there, I'd guess, right?
[13:27] <jamespage> cpaelzer: no I think we agreed that the baseline was xenial right? due to kernel feature requirements
[13:27] <cpaelzer> jamespage: yeah I think we agreed on that
[13:27] <cpaelzer> jamespage: I'm running hwe-x anyway
[13:29] <cpaelzer> so the kernel dep won't kill me for now, but still I think it is not meant to run on T - so the fixes I make for my tests won't become your bugs
[13:30] <cpaelzer> jamespage: that is what I wanted to check
[13:32] <cpaelzer> jamespage: wow after understanding the whole picture the fix is super-easy since the /etc/default/openvswitch-switch is sourced
[13:33] <cpaelzer> jamespage: it comes down to add "export DPDK_OPTS" in that config file
[13:33] <cpaelzer> no "code" change needed
[13:43] <ahasenack> cpaelzer: $1 to add the one-liner
[13:43] <ahasenack> cpaelzer: $999 to know which one-liner and where
[13:54] <cpaelzer> ahasenack: exactly
[13:55] <cpaelzer> ahasenack: but it already is in a git commit to be found by the search engine of your choice some day
[13:55] <cpaelzer> so price drops from $999 to just $499 for the next 5 hours
[13:55] <ahasenack> it's like a zero-day
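The fix cpaelzer describes works because of plain shell sourcing semantics: a variable assigned in a sourced file such as /etc/default/openvswitch-switch stays local to the sourcing shell and is invisible to child processes like ovs-ctl until it is exported. A runnable illustration (the DPDK_OPTS value is invented for the demo):

```shell
# simulate the init script sourcing /etc/default/openvswitch-switch
tmpconf=$(mktemp)
echo 'DPDK_OPTS="--dpdk -c 0x1"' > "$tmpconf"   # example value only
. "$tmpconf"

# without export, a child process (standing in for ovs-ctl) sees an empty value
sh -c 'printf "child sees: [%s]\n" "$DPDK_OPTS"'

# the one-line fix: export it, and the child now sees the value
export DPDK_OPTS
sh -c 'printf "child sees: [%s]\n" "$DPDK_OPTS"'

rm -f "$tmpconf"
```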
[14:11] <Aison> why do I get a network interface name p22p1? I expected that ethernet devices are prefixed with en?
[14:11] <Aison> so why not enp22p1
[14:50] <teward> this will sound like a stupid question, but I have a server on a subnet of my network that uses a VPN tunnel outbound.  I need to route traffic from my local networks via the local network and not send them across the VPN tunnel; is there any way to setup such custom routing?
[14:56] <compdoc> don't assign a gateway to the tunnel
[15:33] <nacc> cpaelzer: argh, it should be this week, but with everything else, i dropped the ball
[15:34] <nacc> cpaelzer: i'll set it up for next week and maybe every two weeks after that, with a header thing and link to a wiki page describing it
[15:34] <dpb1> nacc: the bug party?
[15:34] <dpb1> err
[15:34] <dpb1> bug squashing day
[15:34] <nacc> dpb1: yeah
[15:34] <dpb1> k
[15:34]  * dpb1 was looking forward to that
[15:34] <nacc> i mean, there's nothing preventing us from doing it today :)
[15:34] <nacc> just forgot to announce it on the ML
[15:35] <dpb1> I can look forward to it next week too
[15:35] <dpb1> :)
[15:39] <nacc> dpb1: true )
[15:39] <nacc> :)
[16:21] <ahasenack> teward: you need to look up "source routing", the "ip" tools can do that just fine
[16:22] <ahasenack> teward: something like this: http://www.tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.rpdb.simple.html
[16:22] <ahasenack> there may be ubuntu docs about it too
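A minimal sketch of the source routing ahasenack means, along the lines of the linked HOWTO; every name and address here is hypothetical (server LAN address 192.168.1.10, local gateway 192.168.1.1 on eth0):

```shell
# one-time setup: register a custom routing table name
echo '100 local-out' | sudo tee -a /etc/iproute2/rt_tables

# traffic sourced from the LAN address consults the custom table,
# which routes via the local gateway instead of the VPN default route
sudo ip rule add from 192.168.1.10 table local-out
sudo ip route add default via 192.168.1.1 dev eth0 table local-out
```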
[16:42] <ikonia> win 14
[17:09] <teward> ikonia: confirmed: LOSS 14 recorded.  (Just kidding, and poking fun, sorry)
[17:10] <teward> ahasenack: thanks, I'll take a look.
[17:13] <teward> got a question for the server team people.  NGINX in 14.04, a request came through to have the 'geoip' module added to the nginx-naxsi flavor.  Unlike in Xenial and later, we can't just use dynamic modules; that'd be a feature change that I'd need approval to get in. What're your thoughts?  Noting of course that nginx-naxsi is deprecated and no longer supported in any other releases, except maybe Precise, and that'll die soon enough.
[17:13] <teward> how would you suggest I proceed?
[17:15] <nacc> teward: just tell the user no? :)
[17:15] <teward> lol
[17:15] <nacc> teward: without knowing more about nginx, how would you add geoip support without dynamic module support?
[17:16] <teward> nacc: change the build rules to static-compile
[17:16] <teward> we already static-compile in Trusty
[17:16] <nacc> teward: ah ok
[17:16] <teward> we just have to change what modules are included
[17:16] <teward> the old style way of things, though we had to do that for Xenial and still do
[17:16] <nacc> teward: i'm not sure the request satisfies the SRU rules
[17:16] <nacc> teward: maybe in backports?
[17:17] <teward> nacc: i'm not sure it does either, and the backports team has too much of a backlog on their plate
[17:17] <nacc> yeah :/
[17:17] <teward> i would know; i've had three backport requests sitting for two years gathering inordinate amounts of dust
[17:17] <teward> they're so old I don't even have the packaging for them anymore lol
[17:20] <teward> nacc: Won't Fix'd the bug, and referenced that it doesn't meet SRU criteria
[17:20] <teward> i'm so tired lol
[17:20] <teward> nacc: send me $450 worth of allergy meds and solve my misery for the next six months lol
[17:20] <nacc> teward: i think that's totally reasonable
[17:20] <teward> (allergies *suck*)
[17:20] <nacc> teward: and sorry for your allergies!
[17:21] <teward> if it were economically feasible I'd have a O2 container here, or at least a respirator that filters out the allergens.
[17:21] <teward> too bad i'm in debt.
[17:21] <teward> and too bad i can't afford a new computer, this one's starting to fall apart
[17:23] <teward> I'd *like* to get this $3000 business line workstation grade laptop from Dell, but I'm poor :P
[17:23] <teward> (yay for fifty simultaneous build envs if i had it lol)
[17:26] <drab> hi, just trying my hand at running kvm manually
[17:27] <drab> I'd like to use iommu and eventually use pci passthrough
[17:27] <drab> vt-d is enabled on the machine and iommu option loaded in grub
[17:27] <drab> [    0.000000] DMAR: IOMMU enabled
[17:27] <drab> however when I run sudo qemu-system-x86_64 -enable-kvm -machine type=pc,accel=kvm -device intel-iommu ....
[17:28] <compdoc> kvm is awesome, but Ive never found a good use for passthru
[17:28] <drab> I get an error,  qemu-system-x86_64: Option '-device intel-iommu' cannot be handled by this machine
[17:28] <compdoc> some motherboards are better than others at that
[17:28] <drab> I was planning on giving it a hard disk to write to directly and a network card. Is that not the expected way to use it, to speed things up?
[17:28] <compdoc> see if theres a bios update
[17:28] <drab> I'm new to it so maybe I'm misunderstanding basic concepts still
[17:29] <drab> also I'm having a really hard time figuring out how to run it manually, the entire web just talks about libvirt
[17:29] <drab> but I don't want to run libvirt/virt-manager/virsh
[17:29] <nacc> yeah, running qemu manually is a PITA
[17:29] <nacc> drab: any reason why not?
[17:29] <sarnold> xml phobia? :)
[17:30] <nacc> heh
[17:30] <drab> heh, in part
[17:30] <drab> also complexity-phobia, it looks like a lot of stuff to learn to do right, and I just want *1* instance
[17:30] <drab> everything else is lxd containers
[17:30] <drab> might need one more later, but deploying libvirt on a node just to run one VM seems not a good idea
[17:30] <drab> especially since it seems to want to do its own thing
[17:30] <nacc> i don't think you'd see a huge bump from hard disk passthru, but not sure
[17:30] <drab> ie create its own bridge etc
[17:31] <nacc> and the network card could be solved (better) with SR-IOV if you have it
[17:31] <nacc> (or more naturally, i mean)
[17:31] <drab> I have the host pretty "clean" with the main bridge for containers and I'd like to just add 1 kvm (for nfs)
[17:31] <drab> nacc: I'll look into SR-IOV, thanks for the tip
[17:31] <compdoc> I find qcow2 files are easier to copy, etc
[17:31] <nacc> drab: it requires hw support on the NIC
[17:32] <drab> so yeah, I'm trying to figure out how to spin up just this instance with pure qemu, but having a hard time especially since I need to pxe boot and have output to console...
[17:32] <drab> the host has a ZFS pool for lxd and planning to carve out a ZVOL to feed to kvm
[17:33] <drab> as root device
[17:33] <drab> but need to figure out the booting part first...
[17:33] <drab> can't even get it to start and give me output in console to run through installation right now
[17:33] <nacc> drab: so does `qemu-system-x86_64 --device help 2>&1 | grep intel-iommu` list it as supported?
[17:34] <drab> nacc: name "intel-iommu", bus System
[17:35] <nacc> drab: ok
[17:35] <nacc> drab: https://lists.gnu.org/archive/html/qemu-devel/2016-06/msg03548.html maybe?
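For context, qemu's intel-iommu device is tied to the q35 machine model, while drab's command line used type=pc, which matches the "cannot be handled by this machine" error. A sketch of the adjusted invocation (untested against his exact setup; trailing options elided as in his message):

```shell
sudo qemu-system-x86_64 -enable-kvm \
    -machine type=q35,accel=kvm \
    -device intel-iommu \
    ...
```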
[17:35] <__Yiota> how do I check disk read latency?
[17:35] <__Yiota> I'm trying to figure out why our reads on aws are slower
[17:36] <nacc> drab: fwiw, this is where libvirt can be handy
[17:36] <nacc> drab: as the XML is the same regardless (ideally) of qemu parameters
[17:37] <nacc> drab: i also believe libvirt can use existing bridges, etc.
[17:39] <sarnold> __Yiota: I understand that's basically aws's business model. iops are slow enough that people want to pay for the faster backends
[17:40] <sarnold> __Yiota: there's a huge load of measurement tools at https://github.com/iovisor/bcc
[17:40] <drab> nacc: I'm absolutely sure it can, and I don't question its usefulness, but I've tried to look at it and it struck me as *really* complex
[17:40] <drab> nacc: so I thought maybe it was gonna be quicker/simpler to just do straight qemu since I don't plan more than a couple instances
[17:40] <drab> but maybe not
[17:41] <drab> given how much of a nightmare it's been to figure qemu out so far
[17:42] <nacc> drab: right 'simpler' in that there are fewer layers
[17:42] <nacc> but those layers make the end-user experience sane :)
[17:51] <teward> sarnold: nacc: rbasak: just to keep you in the loop, once 17.10 is open (and after I get off my lazy butt) we're going to be putting nginx 1.12.* in.  That's been released, by the way :P
[17:51] <teward> i would have said this at the meeting yesterday but i was otherwise detained in a meeting
[17:53] <sarnold> teward: nice :)
[17:53] <nacc> teward: np, thanks!
[17:53] <__Yiota> sarnold thank you
[17:55] <ppetraki> __Yiota, hdparm -tT [bdev] isn't a bad place to start. The cached number tells you how fast your line speed is, and the buffered read from disk should be pretty close to what it says in the spec sheet.
[17:57] <__Yiota> bdev?
[17:57] <ppetraki> __Yiota, /dev/sda
[17:57] <__Yiota> ah
[17:57] <__Yiota> gotcha
[17:57] <teward> note to self: set up a script to initiate LXD containers with the standard utility sets lol
[17:58] <teward> (no `ping` on the LXD container that got started for Xenial o.O)
[17:58] <ppetraki> it's easy and it's always there, so here's a quick example
[17:59] <nacc> teward: are you not using the cloud images remote?
[17:59] <ppetraki> __Yiota, http://pastebin.ubuntu.com/24415194/
[17:59] <teward> nacc: i was, but i think something fubar'd in the download lol
[18:00] <nacc> teward: yeah, i just checked and `lxc launch ubuntu:xenial` definitely has ping :)
[18:00] <ppetraki> __Yiota, That's a micron M600 attached to a 7 year old thinkpad
[18:00] <teward> *shrugs* it's working now.
[18:00] <teward> nacc: well, I still need to 'configure' the container with what I need on a standard system.  So a utility script will still be useful DO NOT JUDGE ME
[18:00] <__Yiota> thank you so much ppetraki now I have something to show my CTO
[18:00] <ppetraki> __Yiota, it has a 3G link, which is what I'm getting for cached reads. The drive can do almost 500 MB/s ... so my system's bus is the problem.
[18:00] <teward> standard utils for Ubuntu and "standard Ubuntu utils for teward's container" aren't the same ;)
[18:01] <nacc> teward: heh,sure :)
[18:01] <nacc> teward: cloud-init them?
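nacc's cloud-init suggestion could be a small user-data snippet attached to an LXD profile (e.g. under the user.user-data key via `lxc profile edit`); the package list here is a made-up example:

```yaml
#cloud-config
# hypothetical user-data for an LXD profile: preinstall a standard
# utility set at the container's first boot
packages:
  - iputils-ping
  - curl
  - tmux
```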
[18:01] <ppetraki> sarnold, that's a sweet set of utilities . Thanks!
[18:03] <c0mrade> How can I make mongodb automatically start at boot time on Ubuntu 16.x ?
[18:03] <__Yiota> c0mrade system d?
[18:03] <sarnold> ppetraki: it's wonderful stuff. if you haven't found brandon gregg's homepage yet, it's worth finding. there's days of wonderful reading there ;)
[18:03] <c0mrade> Yiota: How?
[18:03] <nacc> c0mrade: didn't you ask this yesterday and get an answer?
[18:03] <ppetraki> sarnold, I have not :)
[18:03] <sarnold> c0mrade: where did you get stuck?
[18:04] <c0mrade> I asked but I either didn't get an answer or I didn't see the answer.
[18:04] <dpb1> c0mrade: https://askubuntu.com/questions/61503/how-to-start-mongodb-server-on-system-start
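For the record, on 16.04 this is a matter of enabling a systemd unit; a sketch, noting that the unit name differs between the Ubuntu archive package (mongodb) and the upstream mongodb-org package (mongod), so check which one is installed:

```shell
# see which mongo unit the system actually has
systemctl list-unit-files | grep -i mongo

# then enable it to start at boot (pick the name that exists)
sudo systemctl enable mongodb     # Ubuntu archive package
# sudo systemctl enable mongod    # upstream mongodb-org package
```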
[18:04] <nacc> c0mrade: specifically by sarnold :)
[18:04] <sarnold> ppetraki: oh bother I knew I'd butcher his name http://brendangregg.com/
[18:05] <c0mrade> One more thing which is a bit more complex: if the server restarts, there's a script that I want to run when it boots. First cd to a directory that I want, then run ./bin/dev; once that command executes it puts me into another shell where the command 'run' has to be run.
[18:05] <c0mrade> How could I go about that?
[18:06] <teward> nacc: I will make one note: Debian's images, they have *nothing* on them lol
[18:06] <teward> (cross-OS testing lol)
[18:07] <nacc> c0mrade: you can't run the first script with an absolute path?
[18:07] <nacc> c0mrade: write a wrapper script for the wrapper for the wrapper
[18:07] <nacc> teward: yeah, they are very different
[18:07] <c0mrade> nacc: I can...
[18:08] <teward> oh hey exactly nine days to precise EOL.
[18:08] <ppetraki> sarnold, oh, he wrote the dtrace toolkit. ok :)
[18:08] <c0mrade> I mean would it run like ./lila/bin/dev ?
[18:08] <teward> guess I can go delete the nginx PPA packages now lol
[18:08] <Aison> is there a way to use systemd 233 on ubuntu 16.04?
[18:08] <nacc> teward: :)
[18:08] <nacc> !info systemd xenial
[18:08] <dpb1> Aison: sounds painful!
[18:08] <nacc> Aison: not in  supported way
[18:08] <c0mrade> nacc: wrapper script for the wrapper fo the wrapper o.O ?
[18:08] <nacc> Aison: and i don't know why you'd want to do that?
[18:08] <nacc> c0mrade: you said you needed to run a script at a given path at boot
[18:09] <Aison> nacc, I can live with it being unsupported :P
[18:09] <nacc> c0mrade: so write a script that cds to the path and runs the script
[18:09] <Aison> it's for a testing machine
[18:09] <nacc> Aison: ok, build systemd yourself :)
[18:09] <teward> c0mrade: how about a wrapper script for all the wrapper scripts which are wrapping for another wrapper script which are wrapping for more wrappers which wrap for backends..  *shot*
[18:09] <nacc> Aison: and enjoy that fresh hell
[18:09] <teward> sorry i couldn't help it :)
[18:09] <nacc> teward: :)
[18:09] <sarnold> ppetraki: yeah. he's an insanely productive guy. :)
[18:09] <teward> (hey we all need a little silliness sometimes :P)
[18:10] <c0mrade> nacc: Guys can you give me an answer with some code, I don't know what a wrapper script is anyway.
[18:10] <nacc> c0mrade: wrapper script == a script that calls something else
[18:10] <nacc> c0mrade: so a trivial shell script
[18:10] <Aison> nacc, I have a strange problem here. Systemd is not starting the network device exactly on one ubuntu server
[18:10] <nacc> c0mrade: i'm not going to write it for you
[18:11] <nacc> Aison: 'starting the network device' -- kernel sees it, but not getting an IP?
[18:11] <Aison> I always have to login locally and systemctl stop systemd-networkd.service  and then start
[18:11] <nacc> Aison: and then it works?
[18:11] <Aison> yes
[18:11] <nacc> Aison: 17.04?
[18:11] <Aison> no, 16.04
[18:11] <Aison> LTS
[18:11] <nacc> Aison: what is the error, if any, in the logs when it doesn't work at boot?
[18:12] <Aison> there is nothing in the logs. All errors I can see come from network drives that can't be mounted
[18:12] <nacc> Aison: 'nothing' in the logs? So it isn't indicated as failing?
[18:13] <nacc> systemd-networkd is a unit, so it has logs
[18:14] <c0mrade> But the thing is that when I execute the first command I get into another shell; how would that shell accept commands from that script?
[18:15] <Aison> hmm, how do I show the isolated systemd-networkd logs only?
[18:15] <ppetraki> __Yiota, have you used fio?
[18:15] <nacc> c0mrade: you need to interact with commands?
[18:15] <nacc> Aison: something like systemctl status systemd-networkd
[18:16] <Aison> nacc, it was always reported as started, even though there was no network device up
[18:16] <nacc> hrm
[18:16] <nacc> Aison: ok, that's what i was asking before -- so systemd doesn't detect there is any issue?
[18:16] <c0mrade> nacc: Interact? How's that, your answer is very broad, can you be more specific?
[18:16] <nacc> c0mrade: i know nothing about your scripts
[18:16] <nacc> c0mrade: let's say you could start your scripts automatically at boot
[18:16] <nacc> c0mrade: do you need to send input to them?
[18:16] <__Yiota> ppetraki never
[18:17] <ppetraki> __Yiota, OK :) let's start with my cheatsheet
[18:17] <ppetraki> __Yiota, http://tfindelkind.com/2015/08/24/fio-flexible-io-tester-part8-interpret-and-understand-the-resultoutput/
[18:18] <rharper> Aison: journalctl -o short-precise --unit systemd-networkd
[18:18] <nacc> rharper: thanks, i knew there was a journalctl version too, but couldnt find it
[18:18]  * rharper knows it all too well 
[18:18] <rharper> =/
[18:18] <nacc> heh
[18:19] <sarnold> ppetraki: holy cow
[18:19] <ppetraki> __Yiota, he does a really good job of explaining what all the fields mean. In your case, I would devise a test that does 100% reads with a queue depth of 1 and note where the latency histogram is accumulating the most hits
[18:19] <c0mrade> nacc: That's what I need to do. After a system reboot, execute the following: cd lila; ./bin/dev. When I execute ./bin/dev I get a specific shell where I type 'run' and hit enter, and that's it
[18:19] <ppetraki> sarnold, he nailed it
[18:21] <nacc> c0mrade: can't you just adjust to run the commands that are in ./bin/dev (I'm not sure why it spawns a shell) and run the 'run' command there?
[18:21] <Aison> rharper, nacc that's all what I get: systemd-networkd[296]: eth0: Renamed to enp2s0
[18:21] <Aison> then I restart this service
[18:22] <nacc> and then what does it say after restart?
[18:22] <c0mrade> nacc: I don't know.
[18:22] <rharper> networkd doesn't apply network config if the interface is already up or touched
[18:22] <ikonia> c0mrade: you said you had this all working and 90% automated
[18:23] <ikonia> it seems that you don't have the first step automated at all
[18:23] <nacc> ikonia: glad you have more context than I :)
[18:23] <ikonia> read up on "how to write a systemd unit"
[18:23] <Aison> rharper, the network device is not up after reboot. ifconfig  lists only the lo device. just ifconfig -a lists the device correctly
[18:23] <ppetraki> __Yiota, http://pastebin.ubuntu.com/24415321/
[18:23] <ikonia> nacc: sadly yes, he's been asking this for days how to build a lichess server on an ubuntu EC2 instance
[18:23] <nacc> ikonia: ah i see
[18:24] <nacc> Aison: how is your network device configured? /etc/network/interfaces?
[18:24] <ikonia> however he's screwed the install putting it under root account and the root directory which adds complexity, and doesn't understand the difference between say a cloud-init step and an systemd unit, so it's all a bit pointless
[18:24] <c0mrade> Should I put my script at /etc/rc.local?
[18:24] <ikonia> no
[18:24] <rharper> well, on 16.04 systemd-networkd isn't enabled by default, so that's going to be a problem; if you want to use networkd then you need to enable it via 'systemctl enable systemd-networkd'; you'll need to write networkd configuration files (.link, .network) in /etc/systemd/network/ for your interfaces
[18:24] <ppetraki> __Yiota, save that to a file like config-fio-100-read.ini and run it like this: $ sudo DISK=/dev/XXXX fio config-fio-100-read.ini
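ppetraki's pastebin isn't reproduced here, but a 100%-read fio job of the kind he describes might look like this (all values are illustrative; fio expands ${DISK} from the environment, which is why the command line sets it):

```ini
; config-fio-100-read.ini -- hypothetical 100% random-read job
[global]
filename=${DISK}
direct=1
rw=randread
bs=4k
iodepth=1
runtime=60
time_based

[read-latency]
```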
[18:24] <ikonia> look at writing a systemd unit c0mrade
[18:24] <rharper> learn how to apply the Match parameter to target the interfaces you want
[18:24] <Aison> nacc, no, in interfaces there is only the lo device. Else I use systemd/network
[18:24] <rharper> and disable ifupdown networking
[18:25] <c0mrade> ikonia: systemd unit?
[18:25] <rharper> if you're still using /etc/network/interfaces then you won't be using systemd-networkd; rather only the 'networking' service script which calls out to ifup and friends (from the ifupdown package)
[18:25] <nacc> rharper: is all tht documented somewere (wiki?) or is that one of your tasks for the release notes?
[18:25] <ikonia> c0mrade: yes, as you've been told quite a few times
[18:25] <Aison> rharper, how can I disable network/interfaces completely?
[18:25] <rharper> nacc: no, we've not released an Ubuntu with networkd enabled by default
[18:26] <rharper> remove them from /etc/network/interfaces ?
[18:26] <c0mrade> ikonia: Also I've read this "To execute a script at startup of Ubuntu, simply edit /etc/rc.local, and add your commands. " at this link http://ccm.net/faq/3348-execute-a-script-at-startup-and-shutdown-on-ubuntu
[18:26] <Aison> rharper, I did that (except the loopback device)
[18:26] <c0mrade> ikonia: I just don't know what a systemd unit is.
[18:26] <nacc> rharper: ah right
[18:26] <ppetraki> __Yiota, so most of my 4K ios complete in just under 764us
[18:26] <ikonia> c0mrade: so that would be the first thing you research
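For what it's worth, the systemd unit ikonia keeps pointing at would only be a few lines; a hypothetical sketch (paths invented for illustration, and it assumes ./bin/dev can run non-interactively, which nacc questions above):

```ini
# /etc/systemd/system/lila.service -- hypothetical example
[Unit]
Description=lila dev server
After=network.target

[Service]
WorkingDirectory=/root/lila
ExecStart=/root/lila/bin/dev
Restart=on-failure

[Install]
WantedBy=multi-user.target
```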
[18:27] <Aison> the funny thing is, on two other ubuntu serveres it is working perfectly this way
[18:27] <rharper> Aison: and reboot; next reboot any interface that's not in /etc/network/interfaces won't get configured
[18:27] <c0mrade> ikonia: okay, i'll research that
[18:27] <Aison> rharper, yes, I tried, but it is still not configured by systemd
[18:27] <rharper> you have to write systemd network configuration
[18:28] <Aison> rharper, I did, that's why it is working after restart the netword service
[18:28] <ppetraki> __Yiota, also... I'm going straight to the block device, no middle man :) you can tell fio to use a file and just point it at the mount point you're interested in. You want to start from the bottom up: how fast is my backend, *then* introduce the filesystem and see how much performance you loose.
[18:28] <ppetraki> er lose
[18:28] <rharper> and then disable ifupdown service 'systemctl disable networking';  write your new configs and enable networkd 'systemctl enable systemd-networkd'
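rharper's steps, sketched end to end; the interface name matches Aison's enp2s0, but the addressing is made up:

```shell
# hand the interface over from ifupdown to networkd
sudo systemctl disable networking
sudo systemctl enable systemd-networkd

# hypothetical static config; DHCP=ipv4 under [Network] also works
sudo tee /etc/systemd/network/10-enp2s0.network <<'EOF'
[Match]
Name=enp2s0

[Network]
Address=192.168.1.20/24
Gateway=192.168.1.1
DNS=192.168.1.1
EOF
```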
[18:28] <Aison> ok, maybe that's the problem
[18:28] <c0mrade> ikonia: Just for your information, I've installed lichess on my home server, it's a physical server with 8GBs of RAM and a 2.4GHz Xeon 4 core CPU. It's online at http://www.instagramika.com/ and it's up and running.
[18:28] <Aison> I did not disable networking
[18:28] <ikonia> c0mrade: I don't care
[18:29] <nacc> Aison: ah yes, so maybe they are competing
[18:29] <c0mrade> ikonia: I know you don't care, but you just made a comment up there that sadly I've been trying to install it on EC2, and this time that's not the case, so I just wanted to correct things.
[18:29] <ppetraki> __Yiota, I don't know how big your reads are, you'll have to find that out. In the meanwhile, you can sweep the range using fio from 4K to say 512K and compare the completion times, see where they blow up
[18:29] <ikonia> c0mrade: you have - there is nothing incorrect in what I said, you've just told me it's running on a physical server, I said you've been trying and failed to get it running on an ec2 instance for days
[18:29] <__Yiota> ppetraki thank you so much
[18:30] <c0mrade> ikonia: Yeah, but just to let you know it's no longer the case; it's not like I'm still doing that. I have it on my server now and I'm trying to improve things.
[18:31] <ppetraki> __Yiota, you're welcome. performance instrumentation is hard work, just hang in there.
[18:31] <ikonia> c0mrade: and you've already told me days ago you had it running on your own server, and I explained I wasn't interested
[18:31] <c0mrade> ikonia: Yeah you know that but not everyone in here.
[18:32] <ikonia> I suspect no-one is that interested, they just want your problem solved so you stop asking the same thing every day
[18:34] <c0mrade> ikonia: I am, like everyone else, trying to ask a normal question. Yes, I asked this yesterday and waited a long time until I gave up and asked in another channel, where I also didn't get an answer. Maybe someone answered after a long time, I don't know; by that time I had turned off my system and fallen asleep hehe.
[18:34] <ikonia> c0mrade: it was answered for you yesterday, and the day before, and in multiple channels
[18:34] <ppetraki> __Yiota, If reads are your problem, I would also run htop and turn on the R/W bandwidth column to see who the big contributors are. It could be something as dumb as too much competition for the same volume.
[18:34] <ikonia> really try to focus on the information people are giving you, rather than the information you think you want
[18:35] <c0mrade> ikonia: This exact question? I told you maybe it was but after half a day?
[18:35] <ikonia> c0mrade: the exact question
[18:35] <__Yiota> ppetraki it's insane, the amazon SSD is slower than my google standard persistent disk
[18:35] <ppetraki> __Yiota, how slow is slow?
[18:35] <sarnold> c0mrade: I know I explained that you ought to investigate writing a systemd unit file yesterday
[18:36] <compdoc> something's wrong with that
[18:36] <sarnold> c0mrade: .. and today you appear to have not done any reading about systemd unit files.
[18:36] <sarnold> c0mrade: therefore it's hard to want to help you any further. I hope you can understand this.
[18:36] <c0mrade> ikonia: All right, maybe I just missed it, but I didn't ignore any answer intentionally; that's the first time I got an answer quickly. I will be reading about systemd units, and thanks for that.
[18:36] <sarnold> c0mrade: this explains ikonia's frustration
[18:36] <__Yiota> ppetraki https://bpaste.net/show/38df3b69ca01
[18:37] <c0mrade> sarnold: I really didn't see any answer yesterday; like I told you, I didn't ignore it intentionally, I just totally missed it by accident.
[18:37] <ikonia> sarnold: not fully as he's cross-posting it in about 4 other channels at least that I'm in, and ignoring the same info there too
[18:37] <ppetraki> __Yiota, so if it fits in cache you're basically on a 12G SATA link, if not... you're getting spinning disk sata perf
[18:37] <sarnold> c0mrade: that helps, a bit. time to investigate /lastlog -hilight in your irc client, too. :)
[18:37] <ppetraki> __Yiota, I found your problem :)
[18:37] <sarnold> ikonia: cross-posting is a quick way to exponentially grow frustration. :)
[18:37] <__Yiota> can you expand on that?
[18:38] <ikonia> hence why I'm tired of it
[18:38] <c0mrade> ikonia: First time I asked my question on here I assure you 100% that many hours passed without it being answered.
[18:39] <ikonia> you've just been told you asked it yesterday and was given the answer
[18:39] <ikonia> so how can it be "the first time"
[18:39] <ikonia> you're even telling yourself lies now
[18:39] <c0mrade> That's why, if someone maybe answered my question, I could've totally just missed it; after waiting for hours I thought it wouldn't even be answered.
[18:39] <ppetraki> __Yiota, if you have a cache miss you're going to pay for it dearly. I don't know how big the cache is, apparently big enough to move 10GB/s easy
[18:39] <ppetraki> I meant 20G
[18:39] <c0mrade> ikonia: You said I've been asking this for the past two days, I am not talking about yesterday but the day before that. The first time I asked it.
[18:40] <ikonia> so "days" then
[18:40] <ppetraki> __Yiota, so what application is the problem? Do you have htop setup?
[18:40] <__Yiota> yes, I have htop
[18:41] <c0mrade> All right thanks for the hint about systemd I'll be checking that out and see where I can get.
[18:42] <__Yiota> ppetraki, we haven't pinpointed the problem yet
[18:42] <__Yiota> we have comparable speeds to our google cluster
[18:45] <c0mrade> multi-user.target specifies what?
[18:45] <ppetraki> __Yiota, when you get into fio, enable the disk_read, disk_write, io_rbytes, and io_wbytes columns. and just sort by disk_reads for starters.
[18:45] <ppetraki> __Yiota, I meant htop, so many tools!
[18:45] <__Yiota> yeah no kidding
[18:45] <ppetraki> duh even simpler
[18:45] <ppetraki> dstat
[18:46] <ppetraki> __Yiota, sudo dstat -d[bdev]
[18:47] <nacc> c0mrade: `man systemd`
[18:47] <sarnold> c0mrade: the main systemd boot 'goal', most of the time
[18:47] <ppetraki> __Yiota, and it's stupid it just wants the name e.g. "sdb" not the whole path
[18:47] <c0mrade> Okay thanks.
[18:48] <ppetraki> __Yiota, if it looks like you're sinking a total of 100MB/s of R and W then you're probably out of bandwidth, if you're within 80% of that you still have a problem
[18:49] <ppetraki> __Yiota, "SSD" doesn't mean crap in the cloud. If your app really does have a random IO pattern, this fake SSD may not have what it takes to give you uniform completion times; it could actually perform like a spinning disk.
[18:50] <c0mrade> Pfff systemd documentation seems complicated
[18:50] <nacc> c0mrade: well it's an init system for your entire machine, so it's complicated )
[18:50] <nacc> :)
[18:51] <ppetraki> __Yiota, ugh, it's -D not little d
[18:51] <ppetraki> __Yiota, http://pastebin.ubuntu.com/24415455/
[18:51] <c0mrade> I only need a couple of lines of code to make this thing work, and I'm ending up reading complex stuff, which is a bit of a pain :P
[18:52] <sarnold> c0mrade: chances are really good that your systemd unit files will just be one or two files, ten lines long. but knowing what to put in those files means you have to know what you want the file to accomplish.
[18:52] <sarnold> c0mrade: and that means reading.
[18:52] <c0mrade> Oh I found this useful link http://www.tecmint.com/create-new-service-units-in-systemd/
[18:53]  * c0mrade reading..
[18:53] <c0mrade> Someone with a similar issue like mine.
[18:55] <sarnold> most of that looks alright but he goes off the deep end writing a new unit for bringing up a specific network interface
[18:56] <c0mrade> sarnold: His way of explaining is pretty cool, he makes it look pretty easy.
[18:56] <nacc> yeah systemd blog posts are almost always ... misinformed it feels like
[18:57] <nacc> or out of date at this point
[18:57] <c0mrade> I mean, okay... it could be easy for someone working with linux every day, but for someone who might stumble upon this like once every half a year, that's a problem; it's like there's no light, you're in total darkness.
[18:58] <c0mrade> nacc: Is the link I mentioned not good, or incorrect?
[18:58] <nacc> c0mrade: i haven't read it, so i don't know
[18:58] <nacc> c0mrade: i'm sorry, but setting up a process to spawn a service at every boot does require you to educate yourself
[18:58] <nacc> c0mrade: you're making a choice to do that in the first place
[18:59] <nacc> c0mrade: so just pay the cost of learning how to do it right :)
[18:59] <c0mrade> nacc: It'll take 10 seconds with you, just tell me how to do it :P
[19:00] <c0mrade> Gemme them codes and lines... :P
[19:01] <nacc> c0mrade: right, not my job )
[19:01] <nacc> )
[19:02] <nacc> i'm not used to using my laptop keyboard clearly!
[19:07] <c0mrade> You can understand that the booting procedure reaches the targets in a defined order. So how do I know the order? The example shows executing the script after network.target, which is when the boot process reaches the network service and starts it, but I need the boot process to complete and then start my script
[19:08] <c0mrade> Oh, it looks easy to me now :P from what I read...
[19:09] <c0mrade> first step is to create a file.service in /etc/systemd/system/multi-user.target.wants/
[19:10] <c0mrade> Write some code in it: ExecStart= specifies what to execute, and WantedBy= specifies multi-user.target (runlevel or whatever)
[19:10] <sarnold> no, just /etc/systemd/system/
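A minimal unit along the lines sarnold describes, placed in /etc/systemd/system/ as he corrects. This is a sketch only: the unit name lila.service and the path /opt/lila/bin/dev are hypothetical stand-ins for whatever the real launcher is.

```ini
# /etc/systemd/system/lila.service  (name and path are hypothetical)
[Unit]
Description=Launch the dev script after the network is up
After=network.target

[Service]
Type=simple
ExecStart=/opt/lila/bin/dev
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After writing the file, `sudo systemctl enable lila.service` creates the symlink under multi-user.target.wants/ itself; there is no need to create files in that directory by hand.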
[19:11] <Aison> rharper, still not working, so old networking service was not the problem
[19:11] <nacc> c0mrade: you use symlinks in each target to specify what should run for that target
[19:11] <rharper> and what does networkctl show ?
[19:12] <rharper> are you sure your .link and .network file are accurate?
[19:12] <nacc> c0mrade: e.g., (iirc) systemctl add-watns <target> <service name>
[19:12] <nacc> *add-wants
[19:12] <Aison> rharper, I have no .link file, only .network and .netdev (for vlan)
[19:13] <Aison> rharper, networkctl show enp2s0 degraded
[19:13] <c0mrade> sarnold, nacc: Yeah I got it, I thought it was going to be difficult, because I've read some documentation about systemd and got overwhelmed, but it looks pretty easy :)
[19:13] <rharper> ok and what's your device section in .network look like, are you matching via mac or some other property ?
[19:13] <c0mrade> But all I gotta do now is just worry about how I'm going to send the command 'run' after I execute the first script...
[19:13] <c0mrade> Maybe use expect?
[19:14] <c0mrade> The prompt will look like (lila)$
[19:14] <c0mrade> So I would just use 'expect' with that stuff and send 'run'?
[19:14] <sarnold> where does the prompt come from?
[19:16] <c0mrade> sarnold: After I execute ./bin/dev which I think executes a JVM with -Xms and -Xmx args, that's what's inside the file some long command with many arguments related to java.
[19:17] <ThiagoCMC> Hey guys... Under MaaS Next (fully upgraded and recently installed), my PXE subnet have 0% "Available IPs"! But the subnet is a /23 and I only have 11 baremetal servers! How to clean it up?
[19:18] <sarnold> ThiagoCMC: iirc maas brings up nodes into a 'holding tank' of some sort, that also needs some spare ips -- do you have a network or a zone or something set aside for this? does it have enough ips?
[19:18] <sarnold> c0mrade: is '(lila)$' coming from bash? or from the java program?
[19:19] <ThiagoCMC> yes, O have other fabrics / subnets...
[19:19] <ThiagoCMC> I mean, /O/I/
[19:19] <c0mrade> sarnold: From the java.
[19:19] <sarnold> c0mrade: eww.
[19:19] <ThiagoCMC> I also have an extra DNS zone
[19:19] <c0mrade> sarnold: But how do I make sure? Maybe I'm mistaken.
[19:20] <c0mrade> sarnold: But won't my idea of using 'expect' work?
[19:20] <sarnold> c0mrade: maybe you can just do "echo run | ./bin/dev"
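sarnold's pipe idea, written up as a tiny wrapper sketch. `./bin/dev` is the hypothetical launcher from the conversation, so a stand-in (`cat`, which just echoes its stdin) is used here to make the sketch runnable anywhere; only the commented line reflects the real invocation:

```shell
#!/bin/sh
# Feed the interactive (lila)$ prompt its one command, "run", on stdin.
# Against the real launcher this would be:
#   printf 'run\n' | ./bin/dev
# Here "cat" stands in for the launcher so the sketch runs anywhere.
launcher() { cat; }

printf 'run\n' | launcher   # prints: run
```

Whether this works for real depends on the launcher reading its command from stdin rather than requiring a tty; if it insists on a terminal, expect(1) would be the fallback, as discussed above.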
[19:20] <sarnold> but that sounds like really gross software
[19:21] <c0mrade> sarnold: Heard of sbt?
[19:21] <sarnold> no
[19:21] <c0mrade> something used for building apps
[19:21] <c0mrade> http://www.scala-sbt.org/
[19:23] <sarnold> c0mrade: oh. hrm. https://en.wikipedia.org/wiki/Sbt_(software)#Example_use
[19:23] <sarnold> c0mrade: try "sbt run" as the command rather than using the interactive thing.
[19:24] <ThiagoCMC> Never mind... Figured it out! It was reserved for some reason...
[19:25] <c0mrade> sarnold: What about ./bin/dev ?
[19:25] <c0mrade> ahh you mean inside ./bin/dev script add that line?
[19:25] <c0mrade> sbt run?
[19:25] <sarnold> c0mrade: maybe. I have no idea what that tool does.
[19:26] <c0mrade> But I think I'll hit something a loop hh.
[19:26] <c0mrade> something like a loop*
[19:28] <c0mrade> sarnold: After= should be what? Most of the examples use network.target but what do you recommend using?
[19:29] <sarnold> c0mrade: do you need to wait for your mongo server to be up first? or just networking?
[19:29] <c0mrade> mongo server should be up before executing that command yes
[19:30] <sarnold> then be sure to put its unit in there too
[19:31] <c0mrade> I've already made mongo start automatically without doing any of this manual stuff
[19:31] <c0mrade> just executed systemctl then something and mongo.service
[19:31] <c0mrade> I forgot the command
[19:31] <c0mrade> yeah
[19:31] <nacc> c0mrade: yes, because mongo ships a unit already
[19:31] <c0mrade> systemctl enable mongo.service
[19:31] <nacc> c0mrade: so all you did was 'enable' it
[19:31] <c0mrade> sorry
[19:31] <c0mrade> systemctl mongod.service
[19:32] <c0mrade> systemctl enable mongod.service
[19:32] <c0mrade> :P
[19:32] <c0mrade> After=mongod.service ?
[19:33] <c0mrade> But then the question is: is that accurate? What if it needs some other services to be running? Is there a way that I just don't specify After= and just wait for the boot process to complete and then run the script?
[19:34] <sarnold> c0mrade: you've got a few choices.. https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Requires=
[19:34] <sarnold> c0mrade: Requires= if it just requires it and startup order doesn't matter; Requires= and After= if you have to wait
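sarnold's distinction, as it would look in the [Unit] section of a hypothetical service that depends on mongod:

```ini
# Fragment of a hypothetical unit file: pull in mongod, and also wait
# for it (and the network) to have started before ExecStart runs.
[Unit]
Requires=mongod.service
After=mongod.service network.target
```

Requires= alone only pulls the dependency in; without After=, both units may be started in parallel, so the After= line is what enforces the ordering.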
[19:35] <bindi> how come I have to do 'sudo iptables-apply' each time I reboot to have my rules take effect?
[19:35] <bindi> Applying new iptables rules from '/etc/network/iptables.up.rules'... done.
[19:35] <bindi> 16.04
[19:36] <c0mrade> bindi: I know how to fix this now :D create a systemd unit file! :D
[19:36] <c0mrade> Hehe kidding there should be anothe way.
[19:36] <c0mrade> another*
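For bindi's symptom, the usual cause is that nothing in a stock 16.04 setup replays /etc/network/iptables.up.rules at boot. One common fix is a pre-up hook in /etc/network/interfaces (sketched below; the interface name eth0 is an assumption), another is installing the iptables-persistent package:

```ini
# /etc/network/interfaces fragment (sketch) -- restore saved rules
# before the interface comes up, instead of running iptables-apply
# by hand after every reboot.
auto eth0
iface eth0 inet dhcp
    pre-up iptables-restore < /etc/network/iptables.up.rules
```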
[19:38] <axisys> got an alert on cve-2009-2410, but do not see anything on ubuntu usn
[19:38] <ikonia> c0mrade: you understand that "the run" part is an interactive shell
[19:39] <axisys> any suggestion how to address this
[19:40] <nacc> axisys: https://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-2410.html
[19:40] <c0mrade> ikonia: Yeah, it's some sort of shell but I don't know if it's created by java or what... So I want a script that will execute ./bin/dev and then execute 'run'; that's why I mentioned using expect. But I think I didn't get what you're trying to say?
[19:40] <axisys> nacc: hmm.. i could not find it..
[19:41] <axisys> nacc: thanks though.. so I will just answer that
[19:41] <nacc> axisys: i usually start at https://people.canonical.com/~ubuntu-security/cve/
[19:41] <nacc> axisys: and go off the cve itself
[19:42] <axisys> nacc: thanks for the tip.. I will just create a function with that lookup..
[19:42] <c0mrade> ikonia: I know that everyone might get annoyed from be because of my noobness but yeah...
[19:42] <c0mrade> from me*
[19:43] <nacc> axisys: yw
[19:43] <sarnold> axisys: a moment..
[19:43] <axisys> sarnold: k
[19:44] <sarnold> axisys: when we triaged that initially sssd wasn't in ubuntu https://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-2410.html
[19:44] <sarnold> axisys: so there's a really good chance that it was fixed before sssd was added
[19:44] <sarnold> axisys: but I'd like to double-check when we triaged that
[19:44] <sarnold> .. and bzr log is sooo slow. heh.
[19:44] <nacc> sarnold: ah! yeah, i wasn't sure on the message format
[19:48] <nacc> sarnold: if it was "not present in ubuntu's sssd" or "sssd is not present in ubuntu"
[19:49] <sarnold> nacc: in this case, sssd wasn't present in ubuntu at the time
[19:50] <sarnold> nacc: in the case of 'not present in ubuntu's sssd' the report would look more like https://people.canonical.com/~ubuntu-security/cve/2016/CVE-2016-10249.html
[19:50] <axisys> sarnold: so how do I check if it is present on 12.04 LTS ?
[19:51] <nacc> sarnold: ah right
[19:51] <sarnold> axisys: apt-cache search sssd or dpkg -l sssd
[19:51] <sarnold> axisys: sorry I've got to run and this bzr log hasn't gotten to the check in that added that cve yet :/ back in an hour or so
[19:51] <ikonia> c0mrade: created by java ?
[19:51] <ikonia> c0mrade: it IS scala
[19:52] <ikonia> c0mrade: no-one is annoyed because you are new, people get frustrated because you don't listen, you admit you're too lazy to even describe problems properly and you spam channels with no respect for their rules and then try to evade bans
[19:52] <ikonia> c0mrade: that's why people get annoyed with you
[19:52] <ikonia> c0mrade: the bottom line is you need to understand the scala environment and not just cut and paste the commands blindly from lichess wiki into a script
[19:53] <ikonia> you need to understand how to set up the environment it needs, how to launch non-interactive, and how to trap and manage errors
[19:53] <ikonia> I suggest you focus on that
[19:53] <ikonia> then once you understand how to do this, you can then translate that into a systemd unit file
[19:54] <axisys> un  sssd                    <none>                  (no description available)
[20:12] <c0mrade> ikonia: Thanks for the info.
[20:13] <c0mrade> That will require some time. To run the lichess app, two commands are required: ./bin/dev, then running 'run' inside the interactive shell.
[20:14] <c0mrade> Only these two. Now, I do agree with you that I'd have to dig deep into understanding the scala environment, but I'll leave that for another time, which won't be too long.
[20:15] <c0mrade> But for the time being, I'm thinking of a simple (might be dirty) solution, which is to just run ./bin/dev, then send it the word 'run', and put that in a systemd unit file.
[21:12] <Aison> hello, anybody an idea what can cause smbd to use almost 100% CPU usage (one core)?
[21:13] <Aison> smbstatus says "No locked files" and just two users
[21:13] <nacc> Aison: you could strace it, maybe?
[21:14] <sarnold> axisys: sure enough CVE-2009-2410 was added in 2009. Debian agrees that it was fixed before being added https://security-tracker.debian.org/tracker/CVE-2009-2410
[21:16] <Aison> nacc, 10s strace of smbd generates 4mb log file
[21:17] <nacc> Aison: :) yeah sounds busy! -- i'm guessing it's in a loop somewhere
[21:17] <nacc> Aison: can you pastebin it?
[21:18] <sarnold> maybe strace -c
[21:18] <sarnold> or .1 seconds of strace :)
[21:18] <nacc> yeah, less of it would be fine, esp. if it's repetitive
[21:18] <dpb1> | head -500 :)
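The three suggestions above for keeping a trace of a busy daemon manageable amount to roughly the following. The PID is a hypothetical stand-in, and the commands are collected and echoed rather than executed, since attaching to a live smbd needs root:

```shell
#!/bin/sh
# Three ways to bound an strace of a busy process like smbd.
PID=1234   # hypothetical; on the real box: PID=$(pidof smbd)

CMDS="strace -c -p $PID                    # syscall-count summary only
timeout 0.1 strace -p $PID -o smbd.log     # ~0.1s of raw trace
strace -p $PID 2>&1 | head -500            # just the first 500 lines"

# Echoed for safety; run each line as root to actually trace.
echo "$CMDS"
```

For a tight loop, the `-c` summary is usually the most useful starting point: a huge count of one failing syscall points straight at the culprit.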
[21:18] <Aison> direct link to the 3mb log file https://people.alvhaus.ch/~ivost/smbd.log
[21:18] <Aison> :P
[21:19] <sarnold> does /var/run/samba/msg.lock exist?
[21:19] <nacc> it appears like it can't grap a lock
[21:19] <nacc> *grab
[21:19] <nacc> /var/run/samba/msg.lock/*
[21:20] <Aison> no, does not exist
[21:21] <sarnold> Aison: do you have any apparmor DENIED messages in dmesg or auditd logs?
[21:22] <Aison> no, just this one (but that's mysqld): audit: type=1400 audit(1492628668.583:17): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/proc/5983/status" pid=5983 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=102 ouid=102
[21:23]  * nacc thinks you could try creating that directory and seeing if smbd calms down, but you'd need to make sure to get ownership/permissions right. I think it'd match /var/run/samba but not sure
[21:23] <sarnold> the mysqld issue is probably 1658239
[21:28] <sarnold> yeah, I think i'd pick the same owner/group as /var/run/samba and set mode to 755
[21:28] <sarnold> I got the 755 from lib/param/util.c in one of the samba sources
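The directory fix nacc and sarnold converge on is a one-liner with install(1). Mode 755 comes from the samba source sarnold cites; matching /var/run/samba's owner (root here) is the assumption. The sketch below runs against a scratch directory so it is safe to try unprivileged:

```shell
#!/bin/sh
# On the real system this would be (as root):
#   install -d -m 755 -o root -g root /var/run/samba/msg.lock
# Demonstrated against a temp dir so the sketch needs no privileges.
scratch=$(mktemp -d)
install -d -m 755 "$scratch/msg.lock"

# Confirm the mode that install set, regardless of umask.
stat -c '%a' "$scratch/msg.lock"   # prints: 755
rm -rf "$scratch"
```

Since /var/run is a tmpfs (as nacc notes below), a manual mkdir only lasts until reboot; the durable fix belongs in whatever creates /var/run/samba at service start.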
[21:28] <nacc> yeah, i think that should be fine
[21:28] <nacc> i'm not sure why that directory doesn't exist, it seems like it should by default
[21:28] <nacc> Aison: what version of ubuntu?
[21:29] <sarnold> yeah, I can't figure out why it doesn't exist either. I half-expected an apparmor denial to explain it..
[21:30] <nacc> or i guess i would have expected the service wrapper to ensure it exists, or a postinst, or something... although if it's in /var/run... that's a tmpfs, so it needs to be a runtime thing
[21:30] <nacc> (iirc, /var/run is by default -> /run which is by default a tmpfs)
[21:32] <Aison> now these files exists (/var/run/samba/msg.lock/*)
[21:32] <Aison> but still high cpu load
[21:32] <sarnold> try a new strace?
[21:35] <Aison> samba is still trying to lock some files inside /var/run/samba/msg.lock/
[21:42] <sarnold> does this mean anything to you Aison?
[21:42] <sarnold> accept(36, {sa_family=AF_INET, sin_port=htons(45332), sin_addr=inet_addr("10.1.1.1")}, [16]) = 17
[21:42] <nacc> Aison: you might need to restart smbd as well, if it's trying to use existing lock file it couldn't create before
[21:51] <Aison> nacc, ok