[00:08] <DammitJim> I don't think there is multipath for fusionIO cards on a host
[00:09] <sarnold> heh, yeah, there *better* be only a single pcie path to such a thing..
[00:11] <DammitJim> I'm not sure why _KaszpiR_ said that
[00:12] <sarnold> because your bug reminded him of his bug..
[09:05] <rbasak> cpaelzer_: are you affected by the git-ubuntu libreadline problem?
[09:05] <rbasak> If so, bug 1796017 and "sudo snap refresh --channel=edge/gawk-readline-fix git-ubuntu" to test the fix please.
[09:31] <cpaelzer_> rbasak: I have seen the bug but did not yet hit it myself
[09:32] <cpaelzer> rbasak: but mostly because I never use build-source
[09:32] <cpaelzer> let me try
[09:35] <cpaelzer> rbasak: the original reporter already tested your case btw
[09:35] <cpaelzer> https://bugs.launchpad.net/usd-importer/+bug/1796017/comments/13
[09:35] <cpaelzer> right?
[09:36] <cpaelzer> I have a similar but not the same fail trying it
[09:36] <cpaelzer> checking the fix now
[09:38] <cpaelzer> rbasak: tested and confirmed, posted so on the bug
[09:41] <rbasak> Thanks!
[09:41] <rbasak> That's interesting.
[09:42] <rbasak> I wanted some wider testing because I wasn't sure my test/reproducer was exactly the same in all cases.
[09:42] <rbasak> I agree good to land now though
[10:51] <computa_mike> Hi people - I'm having some issues trying to enable a self-signed SSL cert on Nginx on ubuntu 16.04 - for some reason the server I have doesn't have the nice sites-enabled folders.  How can I troubleshoot TLS handshaking?  I've tried curling locally (on the server) and I get: curl: (35) gnutls_handshake() failed: The TLS connection was non-properly terminated.
[11:08] <blackflow> computa_mike: iirc you need special flags for curl to ignore invalid certs. self-signed is invalid in that context, unless you have the CA you signed it with on the client side, and use it. to debug these issues look closer into nginx logs, you can increase verbosity. sites-enabled pattern is arbitrary and can be created if it doesn't exist.
[11:14] <computa_mike> blackflow: - ok.. I can check that out - looking at the man page -k should be it
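blackflow's point about curl and self-signed certs can be sketched as follows (the hostname and cert path are hypothetical; `-k`/`--cacert`/`-v` are standard curl flags, and `-v` prints each TLS handshake step, which helps pinpoint where it fails):

```shell
# The dev hostname is hypothetical -- substitute your own server.
# (|| true only because this example host does not exist)

# Option 1: skip certificate verification entirely (debugging only)
curl -kv https://dev.example.internal/ || true

# Option 2 (closer to production): trust the exact cert/CA you signed with
curl -v --cacert /etc/ssl/my-selfsigned.crt https://dev.example.internal/ || true
```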
[11:16] <computa_mike> blackflow: thanks for the help - still got a problem but I think it's an NGINX issue - probably one of these PICNIC errors...
[11:17] <blackflow> increase verbosity for the nginx error log. if you set it to debug, iirc, it'll spit out quite a lot of things, so don't do it on a busy prod server :)
[11:19] <computa_mike> blackflow: good to know - this is a dev server we're testing some authentication stuff on.  We used to have Facebook authentication onto our site but they want like https endpoints to redirect to, and as this is a dev server I was going to throw a self signed cert on there - I was all like 'how hard can it be right?  I mean there's loads of guides about how to do this..'
[11:20] <blackflow> computa_mike: if that server is accessible over public internet, just shove a free letsencrypt cert in there. if you're testing, test as close to the real production env as possible, which means proper certs, not self signed ones.
[11:23] <computa_mike> blackflow: I did consider letsencrypt - but it's not a publicly accessible server.
[11:24] <computa_mike> blackflow: I've found the nginx channel - I'll see if they have any ideas on how I can see why handshaking is failing.  I wonder if it's how I made the certs
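On "I wonder if it's how I made the certs": a common pitfall is generating a cert without a SubjectAltName, which modern clients reject. A minimal sketch of generating a usable self-signed cert (the hostname is hypothetical; `-addext` needs OpenSSL 1.1.1+):

```shell
# Generate a self-signed cert with a SubjectAltName
# (modern TLS clients ignore the CN alone)
DIR="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout "$DIR/selfsigned.key" -out "$DIR/selfsigned.crt" \
  -subj "/CN=dev.example.internal" \
  -addext "subjectAltName=DNS:dev.example.internal"

# Inspect what was produced
openssl x509 -in "$DIR/selfsigned.crt" -noout -subject -ext subjectAltName
```

The key and cert would then be referenced from the nginx `ssl_certificate`/`ssl_certificate_key` directives.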
[11:38] <lordievader> computa_mike: Do you have control over the DNS?
[11:38] <lordievader> If so, letsencrypt also supports challenges via DNS.
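lordievader's DNS-challenge route, sketched with certbot (domain is hypothetical; note the name must exist in public DNS even though the host itself needn't be reachable from the internet, and real automation would use a certbot DNS plugin instead of `--manual`):

```shell
# DNS-01 challenge: proves domain control via a TXT record, so the
# web server never has to be reachable from the internet
sudo certbot certonly --manual --preferred-challenges dns \
  -d internal-dev.example.com
# certbot then prints a TXT record (_acme-challenge.internal-dev.example.com)
# for the DNS team to create; once it resolves, validation completes
```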
[11:39] <computa_mike> lordievader: I don't have DNS - but there is a team that does - maybe I can get them to sort that out for us.
[11:39] <computa_mike> lordievader: thanks - that's good to know.
[11:43] <tomreyn> computa_mike: i have no first hand experience with it, but i suspect all third party SSO systems such as Facebook's will (intentionally) fail to work for authenticating to a non-internet resource
[11:45] <computa_mike> tomreyn: We have been able to get it to work with Twitter - they currently still support http - and it works with our internal site so far - that will probably change in the future when they mandate that all endpoints are https - it's bound to happen
[11:46] <tomreyn> doh, twitter allows you to authenticate to resources they can't even verify exist? that's crazy.
[11:51] <computa_mike> tomreyn: google do it too...
[11:58] <tomreyn> weird, maybe i'm just not getting how this can be operated securely. and maybe it's a design flaw.
[12:04] <tomreyn> (probably and hopefully the former ;) )
[12:07] <computa_mike> tomreyn: my understanding is that you pass some security tokens to - in this instance twitter - these tokens are unique to you, and are passed by https.  The user's browser is redirected, and they are asked whether the application is allowed to log them in.  If they agree, then the application ID is stored against their twitter profile (so they don't have to agree again) and the application redirects to the address that is configured.
[12:07] <computa_mike> Twitter - i suppose - don't care where the address is.  Their job is to verify that the application ID and client secret that you presented securely are correct.  Facebook are now requiring that the address be an https address - they may also have other restrictions (like - google will only pass you back to an address that is 'real' - like it has a correct top level domain)
[12:10] <tomreyn> i see, so google does care about it at least in parts.
[12:10] <computa_mike> tomreyn: dns tampering could redirect the user back to some fake or different site, but if the site wants to do something then I still think they need the client and secret credentials to do stuff - but it would be an interesting exercise - like if you had a coffee shop and you provided DNS, could you stand up a fake application end point?  and if so what could you grab (name, email maybe) -
[12:11] <computa_mike> tomreyn: thanks - you've given me an idea to explore the security of oauth within untrusted networks.
[12:12] <tomreyn> my pleasure :)
[12:12] <tomreyn> thanks for discussing it.
[12:26] <computa_mike> hey people - just figured out what I did wrong.  The default site was listening on 443 without a cert, and that was what was failing.  It seems strange that if there's a specific site set up with a cert, you'd need to remember to disable 443 on the default site because it won't work otherwise
[12:41] <computa_mike> so - it's lunchtime here so I need to disappear - thanks for all your help people on the #ubuntu-server channel!
[12:41] <computa_mike> cheerio
[13:23] <a8o> Anybody have a recommendation for doing office VMs on an office Ubuntu server?  I'm wanting to do Windows PDCs and some Linux VMs.  I've got some running right now in VMware ESX and later Virtualbox because VMware is limiting my CPUs and I don't have the license money
[13:23] <a8o> Was debating keeping using Virtualbox or using KVM.  My main goal is to be able to do snapshots and back up VMs between machines and maybe do offsite backup.  not sure if virtualbox or kvm will be easier for that kind of thing
[13:27] <sdeziel> a8o: I always do my VM snapshots offline so I don't know if that is what you are after. That said, those offline snaps are easy to send to remote machines/offsite when using libvirt backed by ZFS
[13:28] <a8o> Do you like suspend the machines to get the snaps?
[13:28] <a8o> I have 2 physical servers.  The idea is to backup vm's from one to the other so if I ever have a physical machine go down I can still run critical vms
[13:29] <a8o> I've used VM's forever but doing it with business critical stuff and backing up to another server is new to me.
[13:29] <sdeziel> a8o: I don't suspend as I set it up to have a snapshot done on VM bootup
[13:30] <a8o> so every time you reboot it does a snap.  That's pretty cool
[13:30] <sdeziel> a8o: suspending should work. The snapshot shipping portion is only ZFS
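The "snapshot shipping portion is only ZFS" workflow sdeziel describes could look roughly like this (pool, dataset, and host names are all hypothetical):

```shell
# On host A: snapshot the dataset backing the VM disk
zfs snapshot tank/vms/db2vm@nightly-2018-10-12

# First full send to host B (the offsite/second server)
zfs send tank/vms/db2vm@nightly-2018-10-12 | \
    ssh hostB zfs receive backup/vms/db2vm

# Subsequent sends are incremental: only blocks changed
# since the previous snapshot cross the wire
zfs send -i @nightly-2018-10-12 tank/vms/db2vm@nightly-2018-10-13 | \
    ssh hostB zfs receive backup/vms/db2vm
```

Incremental `zfs send -i` is what makes shipping regular snapshots between two physical servers cheap.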
[13:30] <a8o> I haven't done much with ZFS
[13:30] <a8o> so that'll be new for me
[13:30] <a8o> Right now I'm mainly trying to figure out if I should standardize on Virtualbox or KVM.
[13:31] <a8o> if one is easier to manage and backup than the other.
[13:31] <sdeziel> a8o: I prefer KVM through libvirt but let's see what others have to recommend
[13:32] <a8o> virtualbox is what I use day to day.  but that's for desktop stuff, not sure if it's up to snuff for business critical things
[13:33] <lordievader> I strongly dislike Virtualbox. Oracle, blegh.
[13:33] <lordievader> I use Qemu/libvirt for all my virtual machine needs.
[13:34] <a8o> lordievader: Sweet!  Do you do any sort of backup/snapshots between machines?
[13:34] <a8o> I'm setting up a machine now on KVM. But the snapshot/backup between servers part is new to me.
[13:34] <lordievader> I back up certain dirs on the VMs. But not entire images.
[13:35] <a8o> is it too space intensive?
[13:37] <lordievader> My vm's are not critical enough (mostly home use). Besides everything is managed by Puppet. If something does go down I can easily re-create it.
[13:37] <a8o> oh nice.  would love to do more with Puppet.
[13:39] <a8o> For this office it's Windows Domain Controllers.  Keeping Active Directory and all that junk backed up for failure is why I wanted to do snapshots.  Can't really filesystem back it up.
[13:40] <a8o> I tried to set up Linux as a Secondary Domain Controller but they have a legacy AD setup that's a bit hosed so couldn't get stuff to work with Samba4.  So this will be next best thing I think
[13:41] <lordievader> Libvirt does support making snapshots. But you need to have your disks in qcow2 format.
[13:41] <sdeziel> a8o: there is a qemu-agent thing that should tell the VM to make its disk/fs consistent when you take a live backup/snapshot. This requires cooperation from the guest of course
[13:42] <a8o> lordievader: oh that's good to know.  I've got the disks in vdi now so I'll convert to test.
[13:42] <lordievader> https://www.cyberciti.biz/faq/how-to-create-create-snapshot-in-linux-kvm-vmdomain/
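The VDI-to-qcow2 conversion a8o mentions, followed by a libvirt snapshot, might look like this (the `pdc1` domain and file names are hypothetical; the VM should be shut down for the conversion):

```shell
# Convert the VirtualBox disk image to qcow2
qemu-img convert -f vdi -O qcow2 pdc1.vdi pdc1.qcow2

# Once the domain is defined in libvirt against the qcow2 disk,
# take an internal snapshot and list what exists
virsh snapshot-create-as pdc1 pre-update "before Windows updates"
virsh snapshot-list pdc1
```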
[13:42] <a8o> sdeziel: Oh nice, I'll check that out.
[13:45] <TJ-> a8o: there's another AD alternative, freeIPA which uses 389-ds, which can be configured to do AD replication. e.g. https://directory.fedoraproject.org/docs/389ds/howto/howto-one-way-active-directory-sync.html
[13:45] <a8o> TJ-: oh really!  Have you tried connecting it to an old AD?
[13:45] <TJ-> a8o: how 'old' ?
[13:45] <a8o> I would totally love to have a Linux PDC
[13:46] <a8o> like 2003 or so old.  That's been upgraded to 2008 then 2012 then 2018.  Apparently the AD domain name is what messes it up for me cause it doesn't follow proper format
[13:47] <a8o> zentyal is what I tried using to connect to existing domain controller.
[13:49] <TJ-> a8o: Windows 2003 is well-supported in 389-ds; here's another guide about configuring sync which talks about 2003 tasks. https://www.port389.org/docs/389ds/howto/howto-windowssync.html
[13:51] <a8o> TJ-: thanks, reading it now...
[15:05] <DammitJim> man, for those of you who remember my problem with the system crashing when formatting to xfs/ext4
[15:05] <DammitJim> it turns out that we were supposed to wait for the raid 1 array to finish building
[15:05] <DammitJim> after that, I had no issues formatting the LVs
[15:06] <DammitJim> I wish the system would give you a warning of some sort so that you don't format at this time or something...
[15:09] <teward> that's more or less "common sense" for RAID, just saying.
[15:09] <teward> DammitJim: you usually need to let *any* RAID array build before you use it
[15:09] <teward> for best performance
[15:09] <DammitJim> it seems this is NOT an issue with CentOS, though
[15:09] <teward> CentOS is weird :P
[15:09] <teward> don't compare CentOS to Ubuntu :p
[15:10] <DammitJim> I'm trying not to, but that's how this whole thing unwound
[15:10] <DammitJim> they were doing it on CentOS so they expected Ubuntu to work the same way
[15:11] <DammitJim> but gosh, crashing the system is pretty bad, I mean... I would expect some measures to not allow the system to do that?
[15:11] <DammitJim> but maybe I'm expecting too much and that functionality is too hard to implement or there are reasons why it's the way it is
[15:14] <teward> DammitJim: RAID is pain, whether it's Software RAID or done via a hardware RAID PERC card
[15:14] <teward> :P
[15:14] <DammitJim> understood
[15:14] <teward> best to build the array THEN mess with data :P
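For Linux software RAID (md) at least, the rebuild teward says to wait for can be watched and even blocked on (assuming a hypothetical array at /dev/md0; hardware RAID controllers have their own tooling):

```shell
# Show resync/rebuild progress for all md arrays
cat /proc/mdstat

# Per-array detail; "State : clean" means the initial sync is done
mdadm --detail /dev/md0

# Block until the resync finishes, then it's safe to format
mdadm --wait /dev/md0 && mkfs.ext4 /dev/md0
```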
[16:47] <DammitJim> so, I have an Ubuntu VM and the swap is only 4GB
[16:47] <DammitJim> I need to make it at least 32GB (this server is running 64GB of RAM)
[16:47] <DammitJim> what is the best way to do that?
[16:49] <compdoc> which version? 18.04 uses a file now, and not a partition
[16:50] <teward> add a swap file, presuming you have enough disk space allocated
[16:50] <teward> lol ninja'd :P
[16:50] <DammitJim> 16.04
[16:50] <DammitJim> I have space available in the volume group
[16:50] <DammitJim> no, crap, I don't have that much available in the volume group!
[16:51] <DammitJim> It looks like I'm going to have to add another hard drive just for this
[16:51] <compdoc> the procedure is something like, turn swap off, grow partition, turn swap on
[16:51] <sdeziel> DammitJim: you could maybe shrink a couple of LVs?
[16:51] <compdoc> you can use a swap file, even with 16.04
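The swap-file route compdoc and teward mention is a standard sequence on 16.04 (size and path are examples; `fallocate` is fine on ext4, use `dd` on filesystems where it isn't supported):

```shell
# Create and activate a 32G swap file
sudo fallocate -l 32G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```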
[16:51] <DammitJim> probably not... my PV is 40GB and I was going to set up swap to be 64GB LOL
[16:52] <sdeziel> well, file or LV doesn't change that space issue :)
[16:52] <DammitJim> nothing wrong with adding a virtual hard drive just for this, right?
[16:52] <sdeziel> perfectly fine
[16:52] <TJ-> DammitJim: how about a zram swap block device?
[16:52] <tomreyn> or you could just configure the services on this server so it doesn't need to swap
[16:52] <DammitJim> does it matter if I add the virtual hard drive to the VG?
[16:52] <sdeziel> DammitJim: in that case you could also put the swap directly on the whole drive (/dev/vdb) and not worry about partitioning and such
[16:52] <DammitJim> oh no, IBM DB2's best practices say to have it
[16:53] <DammitJim> sdeziel, I'm starting to like that idea
[16:53] <DammitJim> when I do: sudo swapon -s, it says: Filename: /dev/dm-1
[16:53] <DammitJim> what does that mean? where is this swap space coming from?
[16:54] <TJ-> DammitJim: "ls -l /dev/mapper | grep dm-1"
[16:54] <tomreyn> or: dmsetup ls
[16:55] <DammitJim> lrwxrwxrwx 1 root root       7 Oct 11 22:10 ubuntutemplate16--vg-swap_1 -> ../dm-1
[16:55] <sdeziel> DammitJim: I don't know if you want this additional swap space to be on your RAID array but if yes, adding a single drive won't do it
[16:55] <DammitJim> why is it that df doesn't show this?
[16:55] <sdeziel> DammitJim: df shows filesystem space usage, swap is no fs
[16:55] <tomreyn> df never shows swap
[16:55] <DammitJim> no, I won't have this on the raid array
[16:55] <TJ-> DammitJim: because swap is not a mounted file system
[16:56] <DammitJim> ah, thanks for clarifying that
[16:56] <DammitJim> rookie
[16:56] <DammitJim> so, effectively in my case, swap is an LV?
[16:56] <sdeziel> looks like it
[16:56] <tomreyn> lvs ubuntutemplate16-vg/swap_1
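The "where is this swap coming from" question can be answered with a few read-only commands (safe to run anywhere; output will differ per machine):

```shell
# List active swap areas (resolves /dev/dm-1 style names)
swapon --show
grep SwapTotal /proc/meminfo

# Map device-mapper names back to LVM LVs, e.g. vg-swap_1 -> ../dm-1
# (may be empty on non-LVM hosts)
ls -l /dev/mapper/ 2>/dev/null || true
```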
[16:56] <DammitJim> hhhmmm... I wonder if I can just add the hard drive to the volume group and then just expand that LV
[16:57] <TJ-> DammitJim: should be able to
[16:57] <DammitJim> I think I'm going to do that
[16:57] <sdeziel> DammitJim: why not just grow the disk?
[16:57] <DammitJim> brb
[16:57] <DammitJim> sdeziel, grow the disk how?
[16:57] <DammitJim> I think when one "grows" the disk in VMWare
[16:57] <DammitJim> then a new "drive" shows up in Ubuntu
[16:57] <sdeziel> DammitJim: really?
[16:57] <DammitJim> and one has to fdisk it, then add it to the VG
[16:57] <DammitJim> right?
[16:58] <teward> eh
[16:58] <teward> "it depends"
[16:58] <DammitJim> at least, that's how I understand it...
[16:58] <DammitJim> YIKES!
[16:58] <teward> because in VMware, if you expand the disk and don't add a second disk
[16:58] <sdeziel> I don't like having a VG spanning across RAID and non-RAID disks
[16:58] <DammitJim> learning a new thing today... yet again!
[16:58] <teward> it'll just expand the existing disk volume
[16:58] <teward> it depends on the VM's configuration
[16:58] <DammitJim> sdeziel, I don't have such a setup
[16:58] <Epx998> whats the channel for cosmic/
[16:58] <Epx998> ?
[16:58] <sdeziel> DammitJim: were you not using RAID?
[16:58] <Ussat> NO......when you "grow" the disk in vmware, a new disk does not "show up"
[16:58] <teward> Epx998: #ubuntu+1
[16:58] <DammitJim> sdeziel, I was using RAID but only for the DB2 configuration
[16:59] <teward> DammitJim: if you grow a disk in VMware no new disk shows up, you just see 'extra space' on the disk after you reboot the VM
[16:59] <DammitJim> Ussat, when I do an: sudo dmesg | grep sd
[16:59] <TJ-> DammitJim: add the disk, pvcreate /dev/new-disk; vgextend VG /dev/new-disk
[16:59] <teward> if you add an additional disk to the VM on its own volume store or such, then that's a different issue
[16:59] <DammitJim> I'll see a new sdc or sde
[16:59] <teward> then it shows as a second disk
[16:59] <sdeziel> DammitJim: OK then simply grow the disk that's under your PV then you'll be able to grow the LVs as you see fit
[16:59] <DammitJim> sdeziel, yup, that's what I was going to do
[16:59] <sdeziel> DammitJim: adding a disk is more complicated than simply growing the one you already have
[17:00] <sdeziel> DammitJim: IMHO at least :)
[17:00] <Ussat> Trust me on this, I have hundreds of *nix systems in Vmware esxi
[17:00] <DammitJim> Ussat, I trust you and I'm going to do it now
[17:00] <Ussat> when you GROW a disk on vmware esxi it grows
[17:00] <DammitJim> ugh, I have to delete my snapshots first
[17:00] <sdeziel> Ussat: will the guest notice it live?
[17:01] <Ussat> No, the guest will not. You need to tell the guest it has a bigger disk then
[17:01] <DammitJim> oh man, I can't do anything because of a backup that is taking 2 hours!!!
[17:01] <Ussat> and resize whatever you want/need
[17:01] <DammitJim> Ussat, how do you tell it that it has a bigger disk?
[17:01] <Ussat> hold on
[17:01] <DammitJim> I normally have to sudo fdisk
[17:02] <DammitJim> and then n, p, 2 (1 already exists), Enter, Enter, t, 8e (Linux LVM), w
[17:02] <DammitJim> :D
[17:02] <TJ-> I think pvresize can be run on the 'disk' to detect the growth
[17:02] <DammitJim> OMG, that would be wonderful!
[17:02] <DammitJim> I knew it shouldn't be this difficult
[17:03] <sdeziel> depends on how things are laid out
[17:03] <TJ-> but if the disk is partitioned and the PV is a partition the partition needs to be extended first
[17:03] <Ussat> https://www.rootusers.com/use-gparted-to-increase-disk-size-of-a-linux-native-partition/
[17:03] <Ussat> is one way
[17:03] <sdeziel> you can put your PV on the bare disk or have it in a partition
[17:03] <sdeziel> Ussat: thx
[17:03] <sdeziel> having the PV reside in a partition is more classical
[17:03] <Ussat> and yes, it depends on how it was originally set up, is it a LV or not..etc
[17:04] <DammitJim> ah, I see... so, using gparted... yeah, I get it
[17:04] <sdeziel> hmm, I always move partition boundaries while the VM is running
[17:04] <Ussat> sdeziel, you CAN do that
[17:04] <sdeziel> find it annoying to boot a liveCD just for that
[17:04] <Ussat> again, it depends on the original setup and how careful you are
[17:04] <DammitJim> +1 sdeziel
[17:04] <TJ-> DammitJim: if the disk grows, I'd use parted to add a partition to the end that uses all the new space, then "kpartx -u /dev/sdX" or "partprobe /dev/sdX" then "pvcreate /dev/sdXy; vgextend VG /dev/sdXy"
[17:05] <sdeziel> Ussat: I was wondering what was the command to have the kernel re-check what kind of disk it had attached to it
[17:05] <Ussat> lots of ways to skin a cat
[17:05] <sdeziel> poor cats
[17:05] <DammitJim> TJ-, that's exactly what I do... I ADD a partition to the end
[17:05] <DammitJim> but the original partition cannot be "grown"
[17:05] <DammitJim> I know, I'm allergic to them
[17:05] <TJ-> DammitJim: right, but that doesn't matter
[17:05] <sdeziel> DammitJim: yes it can and should IMHO
[17:06] <DammitJim> but I appreciate the conversation because I had forgotten we can boot gparted to do what you guys were talking about
[17:06] <TJ-> the original partition, if trapped by another immediately following it, cannot be grown
[17:06] <DammitJim> thanks guys
[17:06] <DammitJim> man, I really have big problems here
[17:06] <TJ-> DammitJim: I do this stuff online from the live system; no need to have to boot into something else
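Pulling the pieces in this exchange together, an online grow (no liveCD) might look like this, assuming the hypervisor has already grown the disk and assuming the typical Ubuntu LVM layout where the PV sits in logical partition /dev/sda5 (device names, partition number, and the VG/LV names are assumptions; older parted may prompt interactively at the resizepart step):

```shell
# 1. Make the kernel notice the bigger disk (no reboot needed)
echo 1 | sudo tee /sys/class/block/sda/device/rescan

# 2. Grow the last partition to the end of the disk, re-read the table
sudo parted /dev/sda resizepart 5 100%
sudo partprobe /dev/sda

# 3. Tell LVM the PV is bigger, then grow and re-make the swap LV
sudo pvresize /dev/sda5
sudo swapoff /dev/ubuntutemplate16-vg/swap_1
sudo lvextend -L 32G /dev/ubuntutemplate16-vg/swap_1
sudo mkswap /dev/ubuntutemplate16-vg/swap_1
sudo swapon /dev/ubuntutemplate16-vg/swap_1
```

Growing the existing partition in place avoids ever having a VG that spans multiple disks, which is the concern sdeziel raised.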
[17:06] <DammitJim> why is it my veeam backup is only running @ 75MB/s
[17:07] <DammitJim> TJ-, I don't think I can grow the virtual drive on VMWare while the VM is running
[17:07] <DammitJim> or is there a secret command to tell the VM to refresh whatever it is to know the drive now has 50GB more of space?
[17:10] <TJ-> DammitJim: I think that depends on the hypervisor
[17:20] <DammitJim> oh
[17:33] <sdeziel> DammitJim: in the VM, try: echo "- - -" > /sys/class/scsi_host/host0/scan
[17:36] <DammitJim> when should I do that, sdeziel?
[17:37] <sdeziel> DammitJim: once VMWare is done growing the VM's disk
[17:37] <DammitJim> ah
[17:37] <sdeziel> DammitJim: you can also do "echo 1 > /sys/class/block/sda/device/rescan" but not all device types have that rescan file (virtio disks don't have it here for some reason)
[17:39] <DammitJim> ok
[17:49] <sdeziel> DammitJim: I don't know VMWare at all but I just tested with QEMU/KVM/libvirt and resizing the guest disk followed by: virsh qemu-monitor-command squid --hmp "block_resize drive-virtio-disk0 12G"
[17:49] <sdeziel> worked as the VM picked up the larger disk
[17:56] <DammitJim> what the what?
[18:26] <compdoc> heh
[18:29] <compdoc> he said all things are possible under the heavens
[18:30] <DammitJim> for Him they are, but not for me w/o Him
[18:30] <sdeziel> DammitJim: ?
[18:31] <DammitJim> don't pay attention to me... I'm going crazy 'cause I can't do anything you guys mentioned until this backup is done of the server (the backup created a snapshot)