[01:03] <patdk-lap> karstensrage, that is not a very safe thing to do with security
[01:20] <karstensrage> patdk-lap, what do you mean?
[01:21] <patdk-lap> you are not matching the whole string, only the start of the process name
[01:38] <karstensrage> so why is that bad? patdk-lap
[01:39] <patdk-lap> whatever that check is doing, can be bypassed by using a process starting with the same name
[01:52] <karstensrage> well its an nss library like nss_ldap
[01:52] <karstensrage> and those processes are the ones that open the library but dont do anything with it
[01:53] <karstensrage> or close it
[01:53] <karstensrage> so that code is necessary to short circuit out if those processes open the library
[01:53] <karstensrage> so if there is process that has the same starting name, i guess i would want to short circuit out as well
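The distinction patdk-lap is drawing can be sketched in a few lines of shell (the hard-coded name here is hypothetical, though dbus-daemon does come up later in the discussion):

```shell
# Sketch of the pitfall: a prefix match fires for any process that merely
# *starts* with the trusted name, not just the process actually on the list.
check_prefix() {
  case "$1" in dbus-daemon*) echo short-circuit ;; *) echo lookup ;; esac
}
# An exact match only fires for the listed name itself.
check_exact() {
  case "$1" in dbus-daemon) echo short-circuit ;; *) echo lookup ;; esac
}
check_prefix dbus-daemon-rogue   # prints: short-circuit  (the bypass)
check_exact  dbus-daemon-rogue   # prints: lookup
```

For a check that only short-circuits (fails closed, returning no results), the prefix match mostly risks false positives, as karstensrage notes; it would be an outright hole if the match granted access instead.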
[01:54] <karstensrage> this same problem apparently exists with nss_ldap
[01:54] <karstensrage> but they handled it differently
[01:55] <karstensrage> debian sure makes things painful
[01:55] <tarpman> karstensrage: was just about to say, nss-ldap doesn't seem to have any of those process names hard-coded in it; what did they do differently?
[01:56] <karstensrage> tarpman, the work around afaict was to set a flag to either do a strong connection or a weak connection to the ldap server, in the former case keep trying after a failure, in the latter, abort if  it fails the first time
[01:56] <karstensrage> i dont have that luxury
[01:57] <karstensrage> i hate this way of doing it btw with the hardcoded names
[01:57] <karstensrage> but im not seeing a good way around it
[02:00] <karstensrage> tarpman, basically if you google "dbus nss_ldap" you can find all the discussions about the troubles nss_ldap had
[02:01] <karstensrage> it was really hard to narrow it down to dbus, but once i did that, i was able to put in the right debugging to see these processes and filter them out
[02:01] <tarpman> this is ringing some bells now... some of these bugs look very familiar
[03:06] <karstensrage> tarpman, any other suggestions?
[03:11] <rbasak> teward: I think you can set DEB_BUILD_MAINT_OPTIONS=hardening=-pie or something like that.
[03:11] <rbasak> teward: https://wiki.debian.org/Hardening#dpkg-buildflags
[03:34] <tarpman> karstensrage: in your position I'd be trying very hard to detect the "network unreachable" state from my module... that wasn't possible for libnss-ldap since libldap hides the network state behind the LDAP result code
[07:41] <jakst> is this the right channel to ask for assistance with data recovery with Linux Raid 5 / LVM ?
[08:10] <cpaelzer> jakst: it is one channel to ask - there is no one specific to ubuntu + lvm/raid
[08:11] <cpaelzer> jakst: you can still go on to the wider community in #ubuntu if you find no help in the more server specific group around here
[08:12] <jakst> cpaelzer: Of course, just wanted to make sure I wouldn't be chased away with torches and pitchforks for asking here :) Already tried #ubuntu but didn't get much of a response
[08:15] <cpaelzer> jakst: it may still be a slow start of the year
[08:15] <jakst> Well, I'll give it a shot
[08:15] <jakst> The thing is, my physical and logical volumes have disappeared from LVM, and my raid array has status 'active, degraded, not started' and reports the wrong size
[08:16] <cpaelzer> jakst: did any of the links posted there help you already?
[08:16] <jakst> No, not really
[08:17] <cpaelzer> jakst: if all is gone (pv and lv and likely also vg) you have to start looking bottom up
[08:17] <cpaelzer> jakst: so #1 are the raw devices like /dev/sd... still there?
[08:17] <cpaelzer> jakst: from there go on with pvdisplay, maybe pvscan ... to find your pv's - and from there to vg and lv and so on
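cpaelzer's bottom-up procedure, collected in one place (all of these are read-only status commands; run as root on the affected machine):

```shell
lsblk                 # 1. are the raw /dev/sd* devices still there?
cat /proc/mdstat      # 2. is the md array assembled and running?
pvscan; pvdisplay     # 3. are the physical volumes visible?
vgscan; vgdisplay     # 4. ...the volume groups?
lvscan; lvdisplay     # 5. ...the logical volumes?
```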
[08:17] <jakst> I can see them with fdisk -l
[08:18] <cpaelzer> jakst: the question is where it breaks
[08:18] <cpaelzer> jakst: ok so disks are there - and for the moment we assume they are intact
[08:18] <cpaelzer> jakst: you said LVM / Raid before - is it only LVM or is also an md involved?
[08:18] <jakst> cpaelzer: pvdisplay and pvscan display nothing
[08:18] <jakst> yes
[08:19] <cpaelzer> stacked which way - are the pv's on the md array - or have you made an md array out of lv's?
[08:19] <jakst> cpaelzer:  /dev/md0 consists of raid5 of  sdc sdd and sde
[08:20] <cpaelzer> jakst: ok, and the pv(s) sit on /dev/md0, with the lvs carved out from there, right?
[08:20] <cpaelzer> jakst: is cat /proc/mdstat still happy about /dev/md0?
[08:20] <lordievader> Good morning, happy new year!
[08:20] <cpaelzer> good morning and year lordievader
[08:21] <jakst> Happy new year! :)
[08:21] <jakst> https://www.irccloud.com/pastebin/JcWed20J/
[08:21] <jakst> This is /proc/mdstat
[08:21] <cpaelzer> jakst: ok, so it's not lvm that is broken (maybe it is too, we'll see later) but your md is down
[08:23] <jakst> Yeah that seems to be the case
[08:23] <cpaelzer> jakst: http://superuser.com/questions/603481/how-do-i-reactivate-my-mdadm-raid5-array
[08:23] <cpaelzer> jakst: that should get you to activate it again
[08:23] <cpaelzer> jakst: there are also commands to gather status on each member disk and such
[08:23] <cpaelzer> jakst: I'd do so and store that away before starting/assembling it
[08:24] <jakst> Do you mean mdadm --examine /dev/sdc etc?
[08:24] <cpaelzer> jakst: and all other raid devs
[08:25] <cpaelzer> jakst: I like to store debug info before changing something
[08:25] <jakst> Thanks for the tip!
[08:25] <cpaelzer> jakst: and then likely go with
[08:25] <cpaelzer> jakst: mdadm --stop /dev/md0
[08:25] <cpaelzer> jakst: mdadm --assemble --scan -v
[08:25] <cpaelzer> jakst: and let us know if it worked or why not if not
[08:26] <cpaelzer> jakst: the linked example has a case with out of date disks and uses force to reenable, but most of what follows depends so much on your case that you have to decide (e.g. if force is ok)
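The sequence cpaelzer describes, sketched end to end (member disks are the ones from this conversation; --force is deliberately left out, per the caveat above):

```shell
# Save the pre-recovery state of every member disk first.
for d in /dev/sdc /dev/sdd /dev/sde; do
  mdadm --examine "$d" > "/root/examine-$(basename "$d").txt"
done
mdadm --stop /dev/md0
mdadm --assemble --scan -v   # add --force only once you accept the risk
cat /proc/mdstat             # watch the resync progress
```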
[08:28] <jakst> https://www.irccloud.com/pastebin/2FlDN3O1/
[08:28] <jakst> So this is the output of assemble
[08:29] <cpaelzer> jakst: that is a good start
[08:29] <cpaelzer> jakst: as I read it, it means it could reassemble the state and is currently syncing up one of your devices
[08:30] <cpaelzer> your /proc/mdstat should show it syncing with an ETA
[08:30] <cpaelzer> jakst: after that you should be able to start it
[08:30] <cpaelzer> jakst: what does proc/mdstat show now?
[08:30] <jakst> https://www.irccloud.com/pastebin/nMMfc1Ir/
[08:31] <cpaelzer> jakst: also the state of the examine output should have changed now - the disks are now part of an array
[08:31] <cpaelzer> jakst: there is something like "Device Role" at the end of examine
[08:31] <cpaelzer> hrm - does that mean they are all as spares (S)
[08:31] <cpaelzer> need to check
[08:33] <jakst> cpaelzer: Device Role is the same as before, Active device 0, 1 and 2
[08:34] <cpaelzer> jakst: it very likely just needs the --force, but it is your data so I'm refusing to just say you should do so
[08:34] <cpaelzer> jakst: do you have enough spare storage to dd away the raw disk content before you do so?
[08:34] <jakst> No, I don't
[08:34] <jakst> what does force do?
[08:35] <cpaelzer> jakst: essentially it starts it anyway referring to the last line in https://www.irccloud.com/pastebin/2FlDN3O1/
[08:35] <cpaelzer> jakst: from the bit I see in your case it is 98% fixing your issue, but 2% killing your data - that is why I need you to make the call
[08:37] <cpaelzer> jakst: if you search for "assembled from 2 drives and 1 rebuilding - not enough to start the array while not clean - consider --force" the net is full of recommendations to just do it
[08:37] <jakst> Well I don't have enough space to backup, and it's not ultra critical to recover. Just very very nice if it works
[08:38] <cpaelzer> jakst: so do the assemble with force, then start it
[08:38] <jakst> It says my devices are busy -.-
[08:38] <cpaelzer> jakst: it should be in recovery mode then
[08:38] <cpaelzer> jakst: stop before reassemble
[08:39] <jakst> Ok, but should I assemble manually? Don't know where it gets sr0 from and such
[08:40] <jakst> Nvm that
[08:40] <jakst> Now I forced it. Should I just mount it now?
[08:41] <cpaelzer> jakst: now that you forced the assemble you should mdadm start it and check /proc/mdstat
[08:42] <jakst> is that mdadm -A /dev/md0?
[08:43] <cpaelzer> jakst: assemble might start it automatically - it's been too long since I set mine up; it has just worked for years now
[08:43] <cpaelzer> jakst: what does /proc/mdstat show now (before searching for a start command that might not exist)
[08:44] <jakst> https://www.irccloud.com/pastebin/8LzgkVch/
[08:44] <jakst> Recovering
[08:44] <cpaelzer> jakst: good
[08:44] <cpaelzer> jakst: when that happened to me it was the day to read about upgrading to raid6 for the day two disks will break :-)
[08:45] <cpaelzer> jakst: you can use it now, after the recovery is done it will provide the extra level of failsafe again
[08:45] <jakst> Haha yeah, a lot of thoughts about upgrading have been passing through my head
[08:45] <cpaelzer> jakst: I waited for it to recover before using it though
[08:46] <jakst> cpaelzer: Yeah I'll just check if it mounts properly, then I'll leave it to recovering
[08:46] <cpaelzer> jakst: in your case pvscan might be the next
[08:46] <cpaelzer> jakst: as you have pvs on the md
[08:46] <cpaelzer> jakst: and then vgscan, lvscan, mount
[08:49] <jakst> cpaelzer: Well it appears in pvscan, but without a volume group
[08:50] <cpaelzer> jakst: it appears without a vg in pvscan because the vg isn't active, I think
[08:51] <jakst> It's supposed to belong to vg group0
[08:51] <jakst> I think. Was a while since I set it up
[08:52] <cpaelzer> jakst: so pvdisplay shows your pv's
[08:52] <cpaelzer> jakst: but vgdisplay shows nothing - not even inactive?
[08:53] <jakst> vgdisplay shows my volume group, but only containing a caching disk that I never bothered to activate
[08:54] <cpaelzer> jakst: and vgscan is not re-finding your pvs now?
[08:54] <jakst> Nope =/
[08:55] <cpaelzer> jakst: sorry I'm out of remote-usable-skills now I guess
[08:55] <cpaelzer> jakst: has the pvdisplay all your pv's at least?
[08:57] <jakst> cpaelzer: pvdisplay shows md0, but not the individual drives
[08:58] <lordievader> jakst: That makes sense? Right?
[09:00] <jakst> lordievader: Well I think I recall that each drive was listed under pvs
[09:01] <cpaelzer> jakst: if you did pvcreate on /dev/md0 you will only see /dev/md0 in pvdisplay
[09:01] <lordievader> For mdraid perhaps... but if you layer lvm on top of mdraid you won't see all drives in pvs/pvdisplay.
[09:01] <cpaelzer> jakst: the member disks are no more to be accessed directly or you will kill your raid
[09:01] <lordievader> ^ that.
[09:01] <cpaelzer> lordievader: ack
[09:02] <lordievader> If you'd let LVM do the raid5 then yes, you'd see all disks.
[09:03] <jakst> Ok, but trying to mount the array I get 'mount: wrong fs type, bad option, bad superblock on /dev/md0'
[09:04] <lordievader> jakst: You put lvm on the mdraid right?
[09:04] <lordievader> LVM ain't a filesystem ;)
[09:05] <jakst> lordievader: No I guess I haven't. How would I do that without destroying the data?
[09:06] <lordievader> jakst: What is the output of 'sudo pvscan && sudo vgscan && sudo lvscan'?
[09:06] <cpaelzer> that ^
[09:07] <jakst> https://www.irccloud.com/pastebin/v8VXxrij/
[09:07] <jakst> sdb is the device I was meaning to use as cache, but never did
[09:09] <lordievader> Hmm, md0 contains a PV signature but is not assigned to any volume group?
[09:09] <jakst> Before the crash I had a logical volume called data
[09:09] <jakst> Heh, yeah
[09:10] <lordievader> jakst: Could you pastebin the output of 'sudo lsblk -o NAME,KNAME,FSTYPE'?
[09:11] <jakst> https://www.irccloud.com/pastebin/0oD8Q8cw/
[09:14] <cpaelzer> jakst: it will likely just complain not knowing about "data" but what does this give you?: "vgchange -ay data"
[09:14] <jakst> Yeah not found
[09:15] <lordievader> Sda contains rootfs I presume?
[09:15] <cpaelzer> jakst: sudo vgcfgrestore --list data
[09:15] <jakst> yes
[09:16] <jakst> sudo vgcfgrestore --list data
[09:16] <jakst> No archives found in /etc/lvm/archive.
[09:16] <cpaelzer> :-/
[09:16] <jakst> But if I ls that directory I can see them
[09:16] <cpaelzer> ?
[09:16] <jakst> https://www.irccloud.com/pastebin/4xEbEngI/
[09:17] <cpaelzer> jakst: well you have backup of the group0 cache, but not of a data vg
[09:17] <cpaelzer> jakst: I'm slowly leaning toward assuming you once had a data lvm but stopped using it a while ago
[09:17] <jakst> My system was up and running before new years
[09:18] <lordievader> jakst: What happened that you lost it?
[09:18] <lordievader> Power outage?
[09:18] <jakst> Might have been, not sure. I was away
[09:19] <jakst> But I also might have messed it up in my early rescue attempts
[09:19] <cpaelzer> I just checked your former pastes - since /dev/md0 is a proper PV it was used as PV - I wonder why it would auto-backup the cache but not the data config
[09:20] <jakst> But group0 contained the lv data, so that should be correct, right?
[09:20] <jakst> data wasn't its own group
[09:21] <cpaelzer> jakst: are the files in /etc/lvm/archive human readable - and if yes is data in there?
[09:23] <lordievader> jakst: Is the data lv defined in /etc/lvm/backup/*
[09:23] <lordievader> ?
[09:24] <jakst> not in backup, but in archive
[09:24] <jakst> # Generated by LVM2 version 2.02.98(2) (2012-10-15): Wed Jul 15 12:27:07 2015
[09:24] <jakst> contents = "Text Format Volume Group"
[09:24] <jakst> version = 1
[09:24] <jakst> description = "Created *before* executing 'lvcreate group0 -L20M -n dataCacheMe$
[09:24] <jakst> creation_host = "NAS"   # Linux NAS 3.16.0-43-generic #58~14.04.1-Ubuntu SMP Mo$
[09:25] <cpaelzer> but that seems only to be the cache device
[09:25] <cpaelzer> or did more come before flood control kicked you?
[09:25] <lordievader> jakst: Could you pastebin that file?
[09:26] <jakst> https://www.irccloud.com/pastebin/xrkslaQV/
[09:26] <jakst> Yeah, I accidentally pasted raw :P
[09:27] <jakst> Hard to copy long texts from console...
[09:27] <cpaelzer> nice, it really has a backup
[09:27] <cpaelzer> not sure but you might be able to reload that with vgcfgrestore
[09:30] <jakst> I could!
[09:33] <jakst> And it mounted!!! My data is back!!!!
[09:33] <cpaelzer> yeah
[09:33] <lordievader> Whoop whoop
[09:33] <cpaelzer> gz jakst
[09:33] <lordievader> jakst: Nice
[09:33] <jakst> Love you guys cpaelzer lordievader
[09:33] <lordievader> jakst: What was the actual command you used to restore the backup?
[09:34] <jakst> sudo vgcfgrestore -f /etc/lvm/archive/group0_00008-621465970.vg group0
[09:34] <lordievader> Ah, cool.
[09:34] <lordievader> Thanks
[09:35] <jakst> I really couldn't have figured that out on my own, and I already spent a whole day trying
[09:35] <jakst> Now I learned a lot as well! Thanks :)
[09:36] <cpaelzer> you're welcome
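For anyone landing here later, the recovery path that worked in this session condenses to a few commands (device, archive-file, and VG names are from this conversation; the activation and mount at the end are the standard follow-up steps rather than verbatim from the log):

```shell
mdadm --stop /dev/md0                 # stop the half-assembled array
mdadm --assemble --scan -v --force    # degraded assemble, risk accepted
vgcfgrestore -f /etc/lvm/archive/group0_00008-621465970.vg group0
vgchange -ay group0                   # activate the restored volume group
mount /dev/group0/data /mnt           # then mount the recovered LV
```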
[09:36] <jakst> So, futureproofing.... Raid6. Anything else?
[09:37] <lordievader> I'd do the raid in LVM, but that is me ;)
[09:38] <jakst> What's the upside?
[09:38] <lordievader> More flexibility. LVM uses the device-mapper raid target (same kernel code as mdraid), but does so per LV instead of per disk.
[09:39] <lordievader> So you can determine per LV if you want linear, raid0, raid1, raid-whatever.
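A hypothetical sketch of what lordievader is suggesting (the VG name, sizes, and LV names are invented for illustration; assumes a VG spanning at least three PVs):

```shell
lvcreate --type raid5 -i 2 -L 500G -n data    vg0   # raid5 over 3 disks
lvcreate --type raid1 -m 1 -L 50G  -n backup  vg0   # mirrored LV, same VG
lvcreate              -L 20G       -n scratch vg0   # plain linear LV
```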
[09:39] <cpaelzer> jakst: also maybe share your own insight in something like http://askubuntu.com/questions/13981/recover-lvm-after-hdd-crash or a new post
[09:40] <jakst> cpaelzer: Absolutely, I'll do that!
[09:40] <jakst> lordievader: Okay, sounds nice. I'll have to look at that when I get more disks
[09:48] <ghostal> i need to set up sendmail on my ubuntu xenial server. the server just needs to send emails to users, it doesn't need to receive anything. i found this guide on DO, whose guides i've found to be excellent in the past, but this one seems a little more confusing to me https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-postfix-as-a-send-only-smtp-server-on-ubuntu-16-04
[09:49] <ghostal> particularly, i'm confused about hostname settings. should it even matter if all i'm doing is sending email?
[10:08] <rbasak> ghostal: unless you're using a service provider's email relay, if your hostname doesn't resolve to your source IP (and in reverse) then many hosts will block your emails for spam.
[10:10] <ghostal> rbasak: well, i'm not using a relay, i know that much :)
[10:10] <ghostal> my hostname is just "mir"
[10:14] <ghostal> but there is a DNS A record for the machine
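A quick way to check the condition rbasak describes (the hostname is a placeholder; requires dig, from the dnsutils package):

```shell
# Forward and reverse DNS should agree for direct delivery to be accepted.
HOST=mir.example.com
IP=$(dig +short "$HOST" A | head -n1)
PTR=$(dig +short -x "$IP")
echo "forward: $HOST -> $IP"
echo "reverse: $IP -> ${PTR%.}"   # should come back to $HOST
```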
[14:10] <kirkland> dasjoe: hmm, I think it's supposed to default to the last LTS
[15:30] <zul> coreycb: ping can you update your upstream report please?
[15:30] <coreycb> zul, that's in progress, did you see I moved that btw?
[15:31] <zul> coreycb: yeah im using the new location
[15:31] <coreycb> zul, ok
[15:32] <coreycb> zul, i'm working on the barbican update-excuses failure.  waiting on an s390 instance to debug the neutron autopkgtest failure.
[15:33] <zul> coreycb: ack
[15:40] <coreycb> zul, do you have an MIR open for monasca-statsd?
[15:40] <zul> coreycb: no there needs to be one i think
[15:41] <coreycb> zul, ok i'll open one
[15:43] <zul> coreycb: k
[16:31] <jge> anyone ever used chrony in ubuntu before? I'm trying to query a chrony client on my network as "chronyc -h 192.168.1.22 tracking" but I'm wondering if it needs to be allowed first inside chrony.conf
[16:31] <ikonia> allowed ?
[16:31] <ikonia> if you're specifying it on the command line it won't take that parameter from the config
[16:35] <jge> ikonia: chrony operates as an ntp client by default, if I allow a host inside chrony.conf then it becomes a server for that client (if it needs to) but I just want to query it for skews
[16:35] <jge> I'll test it
[16:36] <ikonia> jge: right, but you're specifying -h on the command line so it won't care about that option in the host
[16:36] <ikonia> (host config)
[16:36] <jge> ikonia: I'm querying the remote server from another host in the network
[16:37] <ikonia> jge: yes, I understand that,
[16:37] <ikonia> jge: however the fact that you're setting -h on the command line replaces that parameter from the config
[16:37] <jge> same thing as an "ntpq -p"
[16:38] <jge> but the other end needs to allow the connection
[16:38] <jge> no?
[16:38] <ikonia> jge: so you're talking about the config on the remote server it's querying
[16:38] <ikonia> rather than the client
[16:38] <jge> yep
[16:38] <ikonia> jge: ok, so yes you'll need to tell it to allow queries
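For reference, the relevant directive on the queried machine is chrony's cmdallow; treat the exact lines as a sketch for jge's subnet:

```
# /etc/chrony.conf on the machine being queried
cmdallow 192.168.1.0/24      # permit chronyc command access from the LAN
# on builds where the command port binds to loopback only:
bindcmdaddress 0.0.0.0
```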
[16:39] <jge> yeah I did, let me test it
[16:39] <jge> never worked with chrony so I wasn't sure
[16:39] <ikonia> jge: works 80% the same as ntp
[16:41] <jge> yeah the guy who has it running here swears by it
[16:41] <jge> "it's so much better than ntpd"
[16:41] <jge> but no explanation as to why he thought that.. had to look it up.
[16:42] <ikonia> I'm not sure why it's "better"
[16:42] <ikonia> I've found it "fine" but nothing to write home about as a big song and dance
[16:42] <ikonia> I don't see any real-world benefit over ntp
[16:50] <ctjctj> Hello.  I'm attempting to mount a filesystem from an iscsi server.  My fstab has the _netdev option for the filesystem and it is using UUID.  The problem is that during the boot sequence the open-iscsi start script hasn't run at the time the system attempts to mount the disk.  How do I get open-iscsi to run after network start and before the mounting of filesystems?
[16:54] <nacc> ctjctj: i wonder if you need to include iscsi into your initramfs
[16:55] <ctjctj> nacc, no.  I'm not booting off an iscsi disk.
[16:55] <ctjctj> UUID="xyzzy" /var/lib/mysql defaults,_netdev 1 1
[16:56] <ctjctj> So we boot off a local disk and then we should mount the iscsi disk before mysqld (mariadb) starts
[16:56] <nacc> ctjctj: ah sorry
[16:56] <ctjctj> nacc, it was a great answer, just not the one I needed.
[16:56] <nacc> ctjctj: 16.04?
[16:56] <ctjctj> 14.04 LTS
[16:57] <nacc> ctjctj: hrm, so maybe an upstart ordering is needed?
[16:59] <ctjctj> I thought that.  But we have S45open-iscsi in rcS.d which I *think* means to do this before we change out of single user mode and into a multi-user runstate.
[17:00] <ctjctj> My understanding was that by putting the _netdev it would cause the mount of network devices to wait until after open-iscsi completed.
[17:03] <coreycb> zul, monasca-statsd is optional so I added it to suggests
[17:05] <coreycb> zul, upstream report is updated now too for ocata
[17:26] <jge> ikonia: that worked but now I'm getting "517 Protocol version mismatch" - not much about it online.. the client querying is running Ubuntu and the other CentOS.. wondering if this is the problem
[17:27] <ikonia> shouldn't be
[17:27] <jge> ubuntu has chrony version 1.29 and Centos 1.29.1
[17:28] <ikonia> I have multiple distros using it with each other
[17:29] <jge> ikonia: looking at source here https://github.com/SuperQ/chrony/blob/master/client.c
[17:30] <jge> something to do with a bad header?
[17:30] <ikonia> jge: not sure, I'll need to look into it, but it works on mine
[17:31] <jge> ikonia: are you able to query other clients as "chronyc -h ip tracking"
[17:31] <jge> ?
[17:32] <ikonia> jge: I can't check it at the moment as I don't have access to those hosts from where I am
[17:32] <jge> hmm ok.
[17:32] <jge> I don't know then :(
[17:33] <jge> it would be nice to have a switch for verbose
[17:35] <ikonia> I can try it for you later on
[17:56] <zul> coreycb: cool beans
[18:13] <coreycb> beisner, hi can you promote python-cryptography 1.0.1-1ubuntu1~cloud1.2 to liberty-proposed please?
[18:27] <beisner> coreycb, done, re: https://bugs.launchpad.net/horizon/+bug/1601986
[18:28] <coreycb> beisner, ty sir
[18:29] <zul> coreycb: mind if i sync python-muranoclient over from debian?
[18:29] <coreycb> zul, fine by me
[19:02] <jakst> How would I go about doing a SMART scan of a disk if my ubuntu server is hosted on an ESXi hypervisor? In Ubuntu or in ESXi?
[19:28] <ctjctj> How do I force open-iscsi to start before network mounts?  At this point I have a _netdev in fstab for the disk in question. open-iscsi attaches the device correctly when it runs but upstart/systemd(?) are attempting to mount the disk before open-iscsi starts
[21:38] <coreycb> beisner, hey these are ready to promote from liberty-proposed->liberty-updates: cinder, heat, manila, nova, openstack-trove, sahara
[21:48] <ctjctj> I'm looking at an issue that /etc/init/mountall-net.conf will attempt to mount devices that are attached via open-iscsi.  But mountall-net.conf runs before open-iscsi runs.  Is there a fix for this?
[22:26] <jge> hey all, could I install a version of a package that's meant for say Xenial to Trusty?
[22:27] <nacc> jge: that's not recommended or supported
[22:29] <ikonia> jge: no
[22:29] <jge> so I'm better off installing from source if I need a version that's not available in the repos?
[22:31] <ikonia> jge: what do you actually want
[22:31] <jge> ikonia: chrony version 2.3
[22:31] <jge> http://chrony.tuxfamily.org/doc/2.3/manual.html#Installation
[22:32] <ikonia> jge: why do you want that version ?
[22:32] <nacc> jge: 2.3 is not available in xenial either, afaict
[22:33] <jge> ikonia: I'm getting an error trying to query another chrony client on CentOS, "Read command packet with protocol version 5 (expected 6)" and from the mailing list here it looks like it might be related to the version: https://listengine.tuxfamily.org/chrony.tuxfamily.org/chrony-users/2010/06/msg00005.html
[22:34] <jge> so I wanted to test if upgrading to the latest release will help
[22:34] <ikonia> I've got 16.04 and Centos 7 hosts in sync from each other
[22:34] <jge> what version of chrony on both?
[22:35] <ikonia> sadly, I can't check as I ended up not going home tonight
[22:36] <jge> I was able to test earlier from different clients (one 1.29 and the other 2.2), both CentOS, and it worked, so I'm thinking it's the version on Ubuntu..
[22:37] <ikonia> jge: what version does ubuntu use
[22:37] <jge> it apparently sends protocol version 5 when the other end expects 6
[22:37] <ikonia> what actual chrony version does ubuntu use
[22:37] <ikonia> (not got a box here to check)
[22:37] <jge> ikonia: it's on 1.29.1 which is the latest stable
[22:37] <ikonia> jge: so applying logic, you have a 1.29 box working and 2.2 box working
[22:38] <ikonia> i don't think a 1.29.1 "won't" work, when a 1.29 box does
[22:38] <jge> same OS though
[22:38] <ikonia> jge: so ?
[22:39] <jge> well, I'm thinking it might be implemented differently.. it's clearly sending a different version of the protocol
[22:39] <ikonia> so if you think it's a different implementation, upgrading it won't do anything
[22:39] <jge> if it would be the same code base then it shouldn't complain
[22:39] <ikonia> jge: have you actually looked at the config or arguments to see if things can be set
[22:40] <ikonia> jge: it is the same code base
[22:40] <ikonia> you've just said that
[22:40] <ikonia> you have a 1.29 client that works
[22:40] <ikonia> 1.29.1 is the same codebase
[22:41] <jge> my idea with upgrading is that the latest release could have better compatibility with earlier versions, as opposed to the opposite
[22:42] <ikonia> jge: sorry, that's just blind randomness
[22:42] <jge> maybe downgrade connection protocol, I don't know ..just spitting ideas
[22:42] <ikonia> jge: have you even done basic research to see if the clients support both versions of the protocol
[22:42] <ikonia> and if you can force the protocol, and what the default is
[22:43] <jge> i looked up forcing the protocol but the manual doesn't have anything for that..
[22:43] <jge> client obviously does not support one of the protocols
[22:45] <ikonia> why though
[22:45] <ikonia> as it's in the code base
[22:45] <ikonia> logically it's more likley to be a configuration option
[22:47] <jge> ikonia: https://github.com/mlichvar/chrony/blob/master/NEWS
[22:48] <jge> check out the security fix under version 1.29.1
[22:48] <jge> incompatible with previous protocol version..
[22:48] <ikonia> there you go then
[22:48] <ikonia> so you need to use the other protocol
[22:48] <jge> but would that be referring to 1.29 or 1.28?
[22:49] <ikonia> would what ?
[22:49] <jge> previous protocol version
[22:49] <ikonia> so 1.29 seems to support both
[22:49] <ikonia> 1.29.1 seems to patch one to fix a problem
[22:50] <ikonia> so the logical approach is to use the one that is supported by both
[22:50] <ikonia> how to force it is the question
[22:50] <ikonia> if you look there is a similar change in 1.27
[22:51] <jge> hm yeah I see it
[22:53] <jge> ikonia: I don't have chronyd open on the internet, maybe I could just go back to 1.29
[22:54] <jge> wait a minute, I was looking at another box... the ubuntu box is already on version 1.29
[23:18] <ctjctj> For anybody that cares about the open-iscsi mount on boot issue I was describing.  When we went to upstart we created a helper tool called "mountall" which processes fstab and mounts drives as they become available.  Once upstart starts the network /etc/init/mountall-net.conf runs and kills the mountall process.  BUT /etc/init.d/open-iscsi start has not yet run so any iscsi targets have not yet been mounted.  Thus the mount
[23:18] <ctjctj> fails and boot hangs.  The original intention was for the _netdev in /etc/fstab to keep any mount of the iscsi device from happening.  All the other remote devices would then be mounted by commands like "mount -a -t nfs -O _netdev"  Thus /etc/init.d/open-iscsi also does a "mount -a -O _netdev" because it runs after all of NFS/CIFS and such.  Catch 22.
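Incidentally, if the fstab line quoted at 16:55 was pasted verbatim, it has only five fields (the filesystem type appears to be missing), and the trailing "1 1" requests a boot-time fsck that can itself hang on a network device. A sketch of a more forgiving entry for upstart-era Ubuntu (ext4 is an assumption; nobootwait is a mountall extension):

```
# _netdev defers mounting to "mount -a -O _netdev" (which open-iscsi's
# init script runs after login); nobootwait keeps mountall from blocking
# boot if the target is not up yet.
UUID="xyzzy"  /var/lib/mysql  ext4  defaults,_netdev,nobootwait  0  0
```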
[23:21] <keithzg> Any built-in way with systemd to have an escalating set of shutdown commands for a service? Specifically, I have a VirtualBox VM set up as a service, and I'd like it to first try VBoxManage controlvm $vmname acpishutdown, and (perhaps after a timeout) try poweroff instead of acpishutdown if the process hasn't halted.
[23:22] <Curiontice> Hi! is it possible to compile squid into a package such that no shared dependencies exist?
[23:23]  * keithzg has tried to read systemd documentation, but for instance https://www.freedesktop.org/software/systemd/man/systemd.unit.html doesn't even *mention* ExecStop, much less document it.
[23:30] <ctjctj> keithzg, i believe there is a method.  The easiest that I can think of is to just have two shutdown VM commands.  One does the acpishutdown and waits up to 30 seconds.  Then the second VM shutdown runs and does the poweroff.  Since all VMs that could be shut down with acpishutdown will already be shut down, this only catches the ones still on.
[23:32] <keithzg> ctjctj: Fair enough, I was thinking perhaps there was some native systemd way of doing this but that certainly sounds like it'd work. I'll try just using `/usr/bin/VBoxManage controlvm Sibrel acpipowerbutton && /bin/sleep 30 && /usr/bin/VBoxManage controlvm Sibrel poweroff`
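For the record, ExecStop= is documented in systemd.service(5), not systemd.unit(5), which is why the page keithzg found never mentions it. Multiple ExecStop= lines run in order, so the escalation can live in the unit itself. A sketch (VM name from the conversation; Type= and paths are assumptions, and $-characters are avoided because systemd treats them specially):

```
[Service]
Type=forking
ExecStart=/usr/bin/VBoxManage startvm Sibrel --type headless
# First, ask the guest to shut down cleanly...
ExecStop=/usr/bin/VBoxManage controlvm Sibrel acpipowerbutton
# ...then poll for ~30s and hard-poweroff only if it is still running.
ExecStop=/bin/sh -c 'for t in 1 2 3 4 5 6 7 8 9 10; do VBoxManage list runningvms | grep -q Sibrel || exit 0; sleep 3; done; VBoxManage controlvm Sibrel poweroff'
TimeoutStopSec=90
```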
[23:48] <ZJAY>  how would i soft link a path like /Volumes to my main path /media/dumpebut/<somehugedrive> i need it to see the soft link path in a script.
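For ZJAY's question, a symlink does exactly this; the real command would be `sudo ln -s /media/dumpebut/<somehugedrive> /Volumes` (placeholder kept as written). A self-contained sketch with throwaway paths:

```shell
# Throwaway stand-ins: $target plays /media/dumpebut/<somehugedrive>,
# $link plays /Volumes (creating the real one in / needs root).
target=$(mktemp -d)
link=$(mktemp -u)       # an unused name for the symlink
ln -s "$target" "$link"
readlink "$link"        # prints the target path scripts will follow
```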