[00:22] <cncr04s> is there any way to force a resync of a new drive to the other partner in the raid1 array, that throws IO read errors on blocks that don't really matter. I've tried copying the partition to the new drive, but I can't get it to add as a clean drive; it always starts to sync, then fails due to IO errors. I want to ignore these errors, I can fix those missing sectors later
[00:51] <tomreyn> cncr04s: http://unix.stackexchange.com/questions/42277/linux-repairing-bad-blocks-on-a-raid1-array-with-gpt
[00:51] <tomreyn> but you really should replace the disks
[00:52] <CodeMouse92> Every so often, when I'm working with PHPldapadmin, it crashes Apache2 quietly. As in...nothing in the error log, but Apache2 just stops listening on port 80.
[00:52] <cncr04s> i replaced one
[00:52] <cncr04s> but the second is bad too
[00:52] <CodeMouse92> Even if I restart Apache2, that's the case. I have to restart the computer to fix it. What's going on?
[00:55] <tomreyn> cncr04s: experimenting with badblocks, if anywhere, only makes sense where the hourly wage of trained IT staff is several magnitudes lower than the cost of a new drive.
[00:56] <tomreyn> cncr04s: to create an exact copy of one of your previous raid members use dd_rescue
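A hedged sketch of the dd_rescue approach tomreyn suggests; the device names are placeholders, not taken from the conversation:

```
# dd_rescue copies block-for-block and, unlike plain dd, keeps going
# past unreadable sectors instead of aborting on the first IO error.
# /dev/sdb3 (failing member) and /dev/sdc3 (replacement) are examples --
# verify your actual devices with lsblk before running anything.
dd_rescue /dev/sdb3 /dev/sdc3
```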
[01:03] <cncr04s> can I just add sda3 to a whole new array, with a missing disk? It's the / partition, not /boot, so I just need to update fstab for / being the new array. I would just copy the filesystem from the old array to the new one if I need to, and in theory it should just be a new clean array with the file data. This is what I'm trying next
[01:04] <cncr04s> do I need to clear anything on sda3 in this case? superblock?
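What cncr04s describes, building a fresh degraded RAID1 from sda3 and copying data in afterwards, can be sketched like this. The array name md1 is an assumption; note that --create writes new metadata, so treat the partition as empty and copy the filesystem in afterwards:

```
# Clear the stale RAID superblock so mdadm won't try to resume the old array.
mdadm --zero-superblock /dev/sda3
# Create a new RAID1 with one member deliberately missing; the partner
# disk can be attached later with "mdadm --add /dev/md1 /dev/sdX3".
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 missing
# Record the array and rebuild the initramfs so it is assembled at boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```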
[01:57] <CodeMouse92> Answering my own question...mod_evasive was doing its job. Because of how PHPldapadmin works, it exceeded 2 page requests a second. I whitelisted my internal network and raised the DOSPageRequest number.
[03:40] <CodeMouse92> Will a user crontab run its @reboot on reboot if it isn't logged in?
[03:43] <patdk-lap> will a user crontab run if the user isn't logged in?
[03:49] <CodeMouse92> That's what I said, yeah :)
[07:50] <Seveas> yes, crontabs don't need the user to be logged in [*]
[07:50] <Seveas> [*]unless the user has an encrypted homedir and the cronjob uses things in said homedir. Then the cronjob will still run, but obviously fail.
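For reference, an @reboot entry in a user crontab (edited with `crontab -e`) looks like this; the script path is a made-up example:

```
# Runs once when cron starts at boot, whether or not the user ever logs in.
@reboot /home/user/bin/startup.sh >>/tmp/startup.log 2>&1
```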
[09:53] <rbasak> cpaelzer: the dovecot retest on armhf didn't work. I can request again because it is intermittent. But we should probably fix that.
[09:54] <rbasak> I'm pretty sure it's a race.
[10:06] <cpaelzer> rbasak: I'm pretty sure as well, as nothing in that particular test is arch-specific
[10:07] <cpaelzer> rbasak: But while I fixed the test to work on the last merge, I never was really deep into it (only fixed stuff on the surface)
[10:07] <cpaelzer> rbasak: so I guess it is nothing to "just do in 5 min", especially since we need a portbox or such and have to hit the transient case to analyze it
[10:07] <cpaelzer> rbasak: would you open a bug and copy-attach the failing log?
[10:07] <cpaelzer> rbasak: one more for the backlog I guess
[10:11] <rbasak> cpaelzer: will do.
[10:11] <cpaelzer> rbasak: thanks
[10:13] <rbasak> cpaelzer: bug 1638865.
[10:17] <cpaelzer> thanks
[10:43] <haasn> What's the proper way to set up macvlan interfaces via /etc/network/interfaces? The hack I'm doing right now is like this: https://0x0.st/2-V.txt but the problem is that since `ip link add` gets run dynamically, it gets a different MAC address every time. I _could_ hardcode a MAC using `hwaddress ether ...` as well, but this all seems like a hack. Is there a better way to statically attach multiple
[10:43] <haasn> macvlan interfaces to a single physical interface?
[11:29] <powersj> cpaelzer: around?
[11:41] <cpaelzer> powersj: whats up?
[11:50] <powersj> cpaelzer: I got the two added qemu/live migration runs added, I was going to ask for names, but figured it out.
[11:51] <cpaelzer> powersj: that was what I added before the commands
[11:52] <cpaelzer> powersj: but eventually it is just names - make sure we recognize what it is, all else isn't important
[11:52] <powersj> ok, I did change them slightly just to keep things consistent sort of
[11:52] <cpaelzer> yeah I'm good with that
[11:53] <rbasak> stgraber: I get "error: Error opening startup config file: "loading config file for the container failed"" when trying to get into a zesty container on zesty. Any ideas? http://paste.ubuntu.com/23420414/
[11:53] <rbasak> I don't see anything obvious in /var/log/lxd/
[11:59] <rbasak> stgraber: I fixed it. I tried downgrading lxd, but that failed with a gazillion missing libgolang dependencies, so I gave up and "sudo apt-get -f install" to restore lxd and lxd-client back to 2.5-0ubuntu1. Now it works. So some kind of upgrade path problem? diglett is set to unattended-upgrades everything.
[13:08] <coreycb> zul, I'm kicking a bunch of rebuilds off in ocata CI
[13:09] <zul> coreycb: k good luck :)
[13:10] <zul> coreycb: starting to look at autopkgtest
[13:10] <coreycb> zul, ok
[13:39] <coreycb> zul, i'm working through heat and trove
[13:40] <zul> coreycb: is stuff building again?
[13:40] <coreycb> zul, yeah
[13:41] <zul> coreycb: cool
[13:56] <theGoat> is there a way to tell whether ubuntu has booted via systemd or booted via upstart?
[14:06] <zul> coreycb: i got keystone
[14:06] <coreycb> zul, thanks
[14:09] <coreycb> theGoat, pid 1 should be systemd
[14:09] <coreycb> if running with systemd
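A quick way to check what coreycb describes is to print the name of PID 1:

```shell
# "systemd" means the system booted with systemd; "init" usually means
# upstart or sysvinit (check /sbin/init or /proc/1/comm to disambiguate).
ps -p 1 -o comm=
```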
[14:10] <theGoat> ok, tks.
[14:11] <theGoat> on another issue.  ever since i upgraded to 16.04, i can't open any files on my nfs mounts.  on the server side all i get is : lockd: cannot monitor <client> and on the client side i get "no locks available"
[14:12] <theGoat> been fighting it for a week.  not sure where to go next
[14:12] <cpaelzer> theGoat: my NFS worked just fine through the same upgrade - any special lock related mount options in place?
[14:13] <cpaelzer> theGoat: I had all sort of tuning in the past but realized I didn't need it so these days I only have "rw,user,noauto"
[14:13] <theGoat> here is an example from my exports: /dumping_ground/software           192.168.101.170(rw,sync,no_root_squash,no_subtree_check,insecure)
[14:14]  * cpaelzer needs to do a few logins to compare
[14:14] <theGoat> i have even tried nolocks and locallocks on the client side...no luck
[14:16] <cpaelzer> I'm down to (rw,no_subtree_check) these days, but nothing in your config seems wrong to me atm
[14:16] <cpaelzer> hmm
[14:17] <cpaelzer> theGoat: might this help you ? http://sophiedogg.com/lockd-and-statd-nfs-errors/
[14:18] <cpaelzer> nah, I think that is super old or some other way not applicable
[14:20] <cpaelzer> ah well I found the dirs on my nfs server
[14:20] <cpaelzer> though they are empty
[14:20] <theGoat> cpaelzer:  following that doc....there doesn't seem to be a service labeled nfslock
[14:21] <cpaelzer> theGoat: true, it might have a slightly different name or is not applicable here
[14:21]  * cpaelzer is checking
[14:24] <cpaelzer> I seem to have rpcbind, nfs-kernel-server, nfs-server, nfs-config and nfs-mountd running
[14:24] <cpaelzer> theGoat: I'm sure there are dependencies solving most and you only have to stop/start a few, but I don't know which ones without going deeper
[14:25] <theGoat> yeah, i have tried just about anything....it's a test box, so i am half tempted to whack it and go back to 14.04 or something like that
[14:26] <cpaelzer> theGoat: if it is a test box try if just stopping all of the services, then cleaning that dir and rebooting gets you working
[14:27] <cpaelzer> theGoat: if you do not insist on any of the export options I'd also remove some, like sync
[14:28] <theGoat> cpaelzer: something like this: /dumping_ground/software           192.168.101.170(rw,no_root_squash,no_subtree_check,insecure)
[14:28] <cpaelzer> yeah, I don't know about insecure but it sounds insecure :-)
[14:28] <cpaelzer> the two others should be good I think
[14:29] <theGoat> and i noticed i only have one lockd process running
[14:29] <cpaelzer> theGoat: if you haven't stopped the services yet (or later if the problem persists)
[14:29] <cpaelzer> you can look at
[14:29] <cpaelzer> rpcinfo -p localhost
[14:29] <cpaelzer> and
[14:29] <cpaelzer> rpcinfo -u localhost nlockmgr
[14:30] <cpaelzer> theGoat: http://serverfault.com/questions/188918/problem-with-nfs-server-lockd-timing-out-on-debian-linux
[14:30] <cpaelzer> but they end up with the same solution
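The lockd checks discussed above can be put in a short sequence; this assumes a stock 16.04 nfs-kernel-server install:

```
# Check that the lock manager and status daemons are registered with rpcbind.
rpcinfo -p localhost | grep -E 'nlockmgr|status'
# Ping the NFS lock manager over UDP.
rpcinfo -u localhost nlockmgr
# If registrations are missing, restart the stack and re-check.
sudo systemctl restart rpcbind nfs-kernel-server
```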
[14:38] <Braven> Does anyone here use maas
[14:43] <DK2> how safe is an upgrade from ubuntu 10.04 to 16.04?
[14:45] <ogra_> while it should be safe it will definitely be painful
[14:46] <ogra_> you can't do a direct upgrade but need to go from one LTS to the next in steps
[14:46] <ogra_> i.e.: 10.04 -> 12.04 -> 14.04 -> 16.04
[14:47] <DK2> so a release upgrade will jump to 12.04 first automatically?
[14:47] <ogra_> 10.04 is EOL already ... so for the first hop you need to follow https://help.ubuntu.com/community/EOLUpgrades/
[14:48] <ogra_> all subsequent ones should just work via do-release-upgrade
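The first (EOL) hop roughly follows the wiki page ogra_ linked: point sources.list at old-releases, upgrade in place, then use do-release-upgrade for each later LTS hop. A sketch, not a full procedure (see the EOLUpgrades page for the exact sources.list lines):

```
# Repoint a 10.04 box at the EOL archive, then bring it current in place.
sudo sed -i 's/\(archive\|security\).ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
sudo apt-get update && sudo apt-get dist-upgrade
# Each later LTS hop (12.04 -> 14.04 -> 16.04):
sudo do-release-upgrade
```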
[14:49] <codedmart> If I made a change to a file in /etc/init.d/ do I need to run any command for the changes to take effect and/or be read/used?
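codedmart's question usually comes down to the following on a systemd system, where init.d scripts are wrapped by generated units that systemd has to re-read; "myservice" is a placeholder name:

```
# Regenerate units from /etc/init.d, then restart the edited service.
sudo systemctl daemon-reload
sudo service myservice restart
```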
[14:59] <zul> coreycb: doesn't help that no binaries have been published yet for a dep when you do a jenkins run
[14:59] <coreycb> zul, is CI not publishing to the testing ppa?
[15:00] <zul> coreycb: nah oslo.config ftbfs yesterday and yadda yadda
[15:00] <coreycb> zul, yeah a ftbfs would cause binaries to not be published
[15:01] <zul> coreycb: ass
[15:01]  * coreycb laughs
[15:01] <stgraber> rbasak: yeah, we've seen two reports of that before. Best guess right now is that in some cases the lxd daemon isn't respawned on upgrade causing that error when forkexec is invoked
[15:58] <coreycb> ddellav, zul: working on swift
[15:58] <ddellav> coreycb ack
[15:58] <zul> coreycb: dep just needs to be backported
[15:59] <coreycb> zul, pyeclib?
[15:59] <coreycb> zul, ah yeah, did you kick that off?
[15:59] <ddellav> coreycb zul anyone working on heat?
[16:00] <coreycb> ddellav, I think it may have issues with oslo.config 3.19.0.  I think we can hold off on heat until 3.19.0 is in upper-constraints.
[16:01] <ddellav> coreycb ok
[16:01] <zul> coreycb:  yes
[16:01] <ddellav> coreycb zul fwiw i ran through keystone before i knew zul was working on it. Getting test failures complaining about oslo.config 3.18 conflicting with requirements.txt. I updated the sbuild and it is using proposed so not sure whats up with that
[16:05] <coreycb> ddellav, this should get you into the chroot if you want to poke around 'schroot -c zesty-amd64 -u root'
[16:06] <ddellav> coreycb yea, i've got that in my notes, i am inside the schroot but i see that 3.19 is installed. I'm not sure why the tests are complaining about 3.18
[16:07] <coreycb> ddellav, d/control may need to be updated to >= 3.19.0
[16:08] <coreycb> ddellav, since 3.19.0 is in zesty-proposed and 3.18.0 is in zesty
[16:08] <ddellav> coreycb ok. zul did you finish keystone?
[16:17] <zul> coreycb: not yet still failing
[16:17] <coreycb> zul, ddellav ^
[16:41] <ddellav> coreycb zul ack
[16:41] <ddellav> coreycb I'm getting that version number issue like with cinder on keystone
[16:41] <ddellav> now that i fixed the oslo.config
[16:41] <ddellav> only 1 test failed
[16:41] <ddellav> heh :)
[16:42] <ddellav> ValueError: Unknown remainder ['0dev253'] in '10.0.0.0rc2.0dev253'
[18:34] <coreycb> ddellav, zul: I just pushed an update for PBR_VERSION to cinder that fixes the unknown remainder issue
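The PBR_VERSION fix coreycb mentions is typically an override exported from debian/rules so pbr stops deriving version strings from git metadata; the exact version value is package-specific, this only shows the shape:

```
# debian/rules fragment (example): pin the version pbr reports instead of
# letting it parse something like "10.0.0.0rc2.0dev253" out of git.
export PBR_VERSION = 10.0.0
```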
[18:34] <zul> coreycb: cool
[18:54] <ddellav> coreycb excellent. Perhaps thats needed for keystone as well
[19:10] <zul> coreycb: heads up https://review.openstack.org/#/c/393344/
[19:11] <coreycb> zul, aha
[20:17] <skylite_> how can I use rsyslogd to separate all my logs? Why is there only local0-7, how would that be enough if I want to separate more than seven service logs?
[20:18] <sarnold> having only seven local services is a holdover from the 70s or something similarly ancient, back when it was hard to imagine even their huge 4 megabyte memory machines running more than eight services
[20:19] <skylite_> but that's really it? I would have to work it out with 7? :D I thought I was missing something
[20:22] <skylite_> maybe I'll try to use fluentd is that more suitable for this?
[20:23] <sarnold> depends upon what you're doing; most linux distributions (except gentoo?) are moving to the systemd family of services, including the systemd journal
[20:24] <sarnold> but for your own services perhaps fluent or ELK stack or whatever else might be more appropriate
[20:26] <skylite_> sarnold: I'm not familiar with the journal but I found that it's losing all the logs after a reboot? ¯\(°_o)/¯
[20:27] <skylite_> also I don't really like the idea of binary logs, I would be happy with simple text files
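skylite_ isn't actually limited to the local0-7 facilities: rsyslog can split plain-text logs per program instead. A minimal drop-in sketch, with the program and file names as examples:

```
# /etc/rsyslog.d/30-myapp.conf (example name)
# Route everything logged by "myapp" to its own file, then stop
# processing so it doesn't also land in syslog.
if $programname == 'myapp' then /var/log/myapp.log
& stop
```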
[20:28] <jathan> Hello ubuntu-server. Is there something similar to Kickstart for CentOS and RedHat scenarios, but for doing automatic installations and system configuration with Ubuntu Server 16.04, please?
[20:29] <jathan> I need to set up an Ubuntu Server 16.04 script with hardening based in the CIS Benchmark.
[20:29] <sarnold> jathan: native is debian preseed files; also possible are cloud-init scripts, FAI (fully automated installer), maybe MAAS ...
[20:46] <jathan> sarnold: Thanks a lot! So instead of using a kickstart file for Ubuntu Server like this https://gist.github.com/vrillusions/d292953ff9bc0e2041d9 I can use FAI and create the necessary content? Do you know if I need to do that manually (writing a new script) through a GUI or are there some FAI templates to create Ubuntu Server machines?
[20:48] <sarnold> jathan: I've not used FAI myself, I just know that some of the regulars here have used it; it's packaged in the archive, so hopefully there's some good starting points also packaged
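The native preseed route sarnold mentions plays the kickstart role: a flat list of debconf answers handed to the installer. A tiny illustrative fragment (values are examples, not a hardened profile):

```
# Passed to the installer via a "preseed/url=" or "auto" boot parameter.
d-i debian-installer/locale string en_US.UTF-8
d-i partman-auto/method string lvm
d-i passwd/username string admin
d-i pkgsel/include string openssh-server
```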
[21:56] <marxjohnson> I'm running 16.04 on my home server, and dnsmasq keeps dying.  Can anyone suggest how I might track down why?
[22:02] <compdoc> mysql kept dying on mine. turns out an update to v5.7 stopped using my.conf, and needed mysql.conf. it would start and run, but then die after 24 hours or so
[22:07] <tomreyn> marxjohnson: start with: sudo service dnsmasq status
[22:07] <tomreyn> (after it failed)
[22:11] <tomreyn> other than that you could enable core dumps to file and analyze those with gdb to get a backtrace.
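Enabling core dumps as tomreyn suggests can look like this for a daemon such as dnsmasq; note that on Ubuntu apport normally owns core_pattern, so this override lasts until reboot, and /var/crash must exist:

```
# Allow unlimited-size core files and send them to a known path.
ulimit -c unlimited
sudo sysctl -w kernel.core_pattern=/var/crash/core.%e.%p
# After a crash, pull a backtrace from the dump (substitute the real pid):
gdb /usr/sbin/dnsmasq /var/crash/core.dnsmasq.<pid> -ex bt -batch
```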
[22:13] <sarnold> is there any information in the logs when it happens?
[22:15] <marxjohnson> tomreyn: thanks, I'll look at that next time it dies
[22:15] <marxjohnson> sarnold: which logs should I be looking at?
[22:16] <tomreyn> in case it doesn't die but just freezes, you can make gdb attach to the (running) process to get a backtrace: PID=`cat /var/run/dnsmasq/dnsmasq.pid`; sudo gdb -q -n -p $PID -ex 'bt' -batch
[22:17] <sarnold> marxjohnson: auditd logs if you have those, dmesg, syslog, dhcp logs..