[02:15] <keithzg> Huh, linux-headers-4.4.0-97 is taking *forever* to unpack on one of my server VMs.
[02:17] <sarnold> it's nearly 17k files
[02:17] <sarnold> what kind of iops do you have on that system?
[02:18] <keithzg> Not terribly great, since it's a VM running on an HDD. I'm just weirded out because other VMs on the same host didn't take nearly as long!
[02:18] <sarnold> hrm
[02:19] <keithzg> But yeah, fair enough, if any package is going to spend a long time unpacking this'd be the one, heh.
[02:19] <sarnold> check iostat -dmx1 or vmstat 1 or something similar to see if there's something doing a steady stream of sync writes to disk or similar?
[02:19] <sarnold> are there io errors in dmesg on host or guest?
[02:20] <sarnold> is the filesystem stored on an AF drive but with 512B sectors?
[02:23] <keithzg> Storage is qcow2, bus is virtio. Nothing showing up in dmesg. I must admit I don't know how to read vmstat, but iostat doesn't look *too* bad
[02:24] <keithzg> (on the host, guest doesn't have iostat installed and I can't do so right now, hah)
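For readers following along: sarnold's `vmstat 1` suggestion prints one line per second, and the column that signals a storage bottleneck is `wa` (percentage of CPU time stuck waiting on I/O), field 16 in the default layout. A minimal sketch of pulling that column out of a captured line — the sample values below are invented for illustration, not taken from this session:

```shell
# vmstat 1 emits one line per second; field 16 is "wa", the
# percentage of CPU time spent waiting on I/O. The sample line
# below is invented for illustration -- not from this session.
line=" 0  1      0 812344  20480 410112    0    0     0  5120  310  640  2  1 57 40  0"
wa=$(echo "$line" | awk '{ print $16 }')
echo "iowait: ${wa}%"
if [ "$wa" -gt 30 ]; then
    echo "heavy I/O wait -- the disk is likely the bottleneck"
fi
```

For `iostat -dmx 1`, the analogous columns to watch are `await` (per-request latency) and `%util` (device saturation).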
[02:25] <stokachu> stgraber: had a user see this with snap lxd stable http://paste.ubuntu.com/25782412/
[02:25] <stokachu> stgraber: and http://paste.ubuntu.com/25782424/
[02:26] <stgraber> stokachu: hmm, out of disk space maybe? that'd explain both of those
[02:27] <stokachu> bdx: ^
[02:28] <bdx> I was running in a vm, it's likely that could have been the issue ... I believe it had plenty of space though
[02:30] <stokachu> I'm guessing the vm isn't up any longer?
[02:32] <bdx> it's not, I apologize
[02:32] <bdx> I have some scrollback though from when I was logged into it
[02:32] <keithzg> sarnold: Checking logical_block_size and physical_block_size in /sys/class/block/sda/queue/ (and sdb) on the host seems to confirm that they're old non-AF drives, formatted with the corresponding 512B sectors. Hmm. I'm reminded by this that the drive in question is in fact a pair, using hardware RAID. Tempted to just blame it on that somehow :P
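The sysfs check keithzg describes can be scripted. A sketch, assuming a Linux host with sysfs mounted; the `sector_kind` helper is hypothetical, added here only to label the three common cases (legacy 512n, Advanced Format 512e emulation, native 4Kn):

```shell
# Classify a drive by its sector sizes. sector_kind is a
# hypothetical helper added for illustration.
sector_kind() {  # usage: sector_kind LOGICAL PHYSICAL
    if [ "$1" -eq 512 ] && [ "$2" -eq 512 ]; then
        echo "legacy 512n"
    elif [ "$1" -eq 512 ] && [ "$2" -eq 4096 ]; then
        echo "AF 512e (emulated)"
    elif [ "$1" -eq 4096 ]; then
        echo "AF 4Kn (native)"
    else
        echo "unknown"
    fi
}

# Walk every block device that exposes the queue attributes,
# as keithzg did by hand for sda and sdb.
for q in /sys/class/block/*/queue; do
    [ -r "$q/logical_block_size" ] || continue
    dev=$(basename "$(dirname "$q")")
    echo "$dev: $(sector_kind "$(cat "$q/logical_block_size")" \
                              "$(cat "$q/physical_block_size")")"
done
```

The 512e case is the one sarnold was probing for: a 4K-physical drive emulating 512B sectors can suffer read-modify-write penalties on misaligned partitions.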
[02:33] <bdx> so, I removed system level lxd, and I still see it here http://paste.ubuntu.com/25782525/
[02:33] <sarnold> keithzg: does the hardware controller have error logs available anywhere?
[02:33] <sarnold> keithzg: smart data?
[02:33] <bdx> I just have a feeling it was a cruft thing somehow, the system had lxd reconfigured quite a few times in and outside of the snap
[02:35] <keithzg> sarnold: This is very cheap commodity hardware, so I'd be surprised if the controller actually had accessible logs! For what it's worth smartctl hasn't logged any errors for either drive.
[02:36] <keithzg> (err, I should say, smartctl doesn't report that any smart errors have been logged on either drive)
[02:36] <sarnold> keithzg: hrm, somehow this is a bit unsatisfying :) it feels like it ought to be possible to nail down what's going on.
[02:37] <sarnold> I don't remember spinning metal drives being -that- slow, somehow we survived back in the day :)
[02:37] <keithzg> hehe
[02:37] <keithzg> sarnold: No kidding! I'm almost tempted to just cancel the operation, install iostat, and start it up again :P
[02:38] <sarnold> keithzg: wait the damn thing is still going??
[02:38] <keithzg> sarnold: Haha actually inbetween me saying that and you replying, it finally got past that package!
[02:38] <sarnold> hehe
[02:40] <sarnold> 19:15:09 to 19:37:44. 12.5 files per second. that sounds slow.
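sarnold's arithmetic checks out: nearly 17,000 files between 19:15:09 and 19:37:44 is 1355 seconds, or about 12.5 files per second. A quick reproduction (assumes GNU `date` for the `-d` flag; the 17,000 figure is sarnold's "nearly 17k files" estimate from earlier in the log):

```shell
# Elapsed time between the two timestamps sarnold quoted
# (GNU date; bare times are interpreted as today's date, so the
# subtraction is safe within a single day).
start=$(date -d '19:15:09' +%s)
end=$(date -d '19:37:44' +%s)
elapsed=$((end - start))
echo "elapsed: ${elapsed}s"          # 1355 seconds

# "nearly 17k files" per sarnold earlier in the log
awk -v files=17000 -v secs="$elapsed" \
    'BEGIN { printf "%.1f files/s\n", files / secs }'
```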
[02:40] <sarnold> and presumably you only complained on irc after it'd been going on for a little while already.
[02:41] <keithzg> Yeah, I checked on it a few times and eventually went "seriously, *still*?" and only then piped up here
[02:42] <keithzg> Clearly I'm going to have to keep a watchful eye on this guest and its host . . .
[13:52] <FMan> hi
[15:49] <drab> hi, is it still a pipedream to have a simple solution to monitoring what's going on on the network?
[15:49] <drab> way back doing it "right" meant to set up cflow/netflow and it was a pita just to get through the standards
[15:50] <drab> any chance it got easier?
[15:51] <drab> the alternative used to be cacti, but that only really gives you a sense of traffic per port, not really the type of traffic like netflow does (well at least you get ip + port tuples on each switch port)
[19:37] <orogor> hi
[19:37] <orogor> anyone know why sudo wouldn't work after upgrade to 17.10?
[19:38] <orogor> it just hangs there after typing a correct password
[21:20] <gunix> iface bond0.10 inet manual
[21:20] <gunix> what does .10 mean?
[21:21] <gunix> oh, vlan tag
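For anyone landing here later: in ifupdown's `/etc/network/interfaces`, the `.10` suffix names a VLAN sub-interface, as gunix worked out. A minimal fragment (interface names mirror gunix's example; the `vlan-raw-device` stanza comes from the `vlan` package's ifupdown hooks):

```
# /etc/network/interfaces fragment (ifupdown): the ".10" suffix
# creates bond0.10 as a VLAN-10 tagged sub-interface on top of bond0
auto bond0.10
iface bond0.10 inet manual
    vlan-raw-device bond0
```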
[21:56] <Blueking> apt autoremove doesn't clean the latest ones, and you need to check what version Ubuntu is currently running before you clean/remove stuff in /boot
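A cautious way to act on Blueking's point before touching /boot, sketched for a Debian/Ubuntu system (assumes `dpkg` is available; the package-name pattern is the stock `linux-image-*` naming):

```shell
# Find the kernel that is actually running before deciding what
# can go from /boot.
running=$(uname -r)
echo "running kernel: $running"

# List installed kernel image packages (Debian/Ubuntu naming).
if command -v dpkg >/dev/null 2>&1; then
    dpkg -l 'linux-image-*' 2>/dev/null | awk '/^ii/ { print $2 }'
fi

# Rule of thumb: never remove the image matching "$running",
# and keep the newest installed image as a fallback.
```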