[00:06] <dpb1> right, I meant, file a bug that the manpage should be in 8
[00:07] <dpb1> oddly, there is an entry here...
[00:07] <dpb1> http://manpages.ubuntu.com/manpages/bionic/en/man8/netplan.8.html
[00:07] <dpb1> heh
[00:08] <dpb1> but, I don't see it in the deb
[00:27] <mason> dpb1: Oh, interesting.
[00:27] <mason> I'll open up a bug tomorrow.
[04:28] <cpaelzer> good morning
[04:28] <Unit193> Heya, cpaelzer.
[12:13] <ahasenack> good morning
[12:48] <kstenerud> morning!
[12:58] <boxrick> I have the following pre-seed, and it works fine in most cases. But on some hosts when it comes to grub it asks where to put the bootloader, and defaults to /dev/mapper
[12:58] <boxrick> https://gist.github.com/boxrick/3a4022d003daa63b7d27cca7f0f99894
[12:59] <boxrick> But this is already set to /dev/sdb using the early command. So any ideas what is changing it ?
[13:02] <ahasenack> kstenerud: morning! (!)
[13:02] <ahasenack> kstenerud: is there light outside yet? :)
[13:03] <kstenerud> Almost dawn :)
[13:06] <boxrick> Seems I have identified a bug.... https://bugs.launchpad.net/ubuntu/+source/grub-installer/+bug/1012629
[13:06] <boxrick> Sad to see it still in bionic
[13:42] <rbasak> kstenerud: nice job on the postfix SRU. I saw it in the queue :)
[13:42] <boxrick> Can anyone tell me how grub-pc differs from grub grub2-common packages?
[13:43] <rbasak> kstenerud: one point on regression potential - that section is also to inform testers, so it would be helpful to explain what testers might focus on to find a regression in case there is a mistake in the SRU.
[13:43] <rbasak> So "normal and error paths around parsing includes", etc.
[13:43] <rbasak> (and in particular ENOENT)
[13:43] <rbasak> (or whatever it was; I'm sure I got the detail wrong)
[13:44] <kstenerud> rbasak: I'm not sure I follow. Are you speaking in general terms, or specifically to the sshd issue?
[13:44] <rbasak> In general terms the purpose of the regression potential section, and a specific example for the postfix SRU
[13:46] <kstenerud> Umm.. So in this case it was hinging on the intersection of failed open and EACCES, which we decoupled, which means that ENOENT would also trigger the correct path, right?
[13:48] <kstenerud> Or do you mean check ENOENT as well as a tester just in case we messed up?
[13:48] <rbasak> No that was my mistake, sorry.
[13:48] <rbasak> I said ENOENT but I meant EACCES
[13:49] <rbasak> And it perhaps wasn't in includes?
[13:49] <kstenerud> ok
[13:49] <rbasak> So I did really badly at providing an example that was actually connected to this bug.
[13:50] <rbasak> What I mean though is a general "these are the code paths that might be affected and this is how to exercise them"
[13:50] <kstenerud> ah ok :)
[13:50] <rbasak> Because then that can help drive how we test the SRU.
[13:52] <ahasenack> kstenerud: did you see the bug notification about postfix being accepted?
[13:54] <kstenerud> yup
[13:54] <ahasenack> kstenerud: ok, so now another process started
[13:55] <ahasenack> kstenerud: there are a few things to do now
[13:55] <ahasenack> kstenerud: first check that it built. That's the first link in the acceptance email: https://launchpad.net/ubuntu/+source/postfix/3.3.0-1ubuntu0.1
[13:55] <ahasenack> kstenerud: look at "builds" on the far right, and publishing
[13:56] <ahasenack> kstenerud: you can also see in the Upload details section that you are considered the one who uploaded it, but you were sponsored by someone else, since you can't upload yet
[13:57] <ahasenack> kstenerud: the next thing to keep an eye on is the so called "excuses" or "migration" page
[13:57] <ahasenack> kstenerud: for bionic, that is http://people.canonical.com/~ubuntu-archive/proposed-migration/bionic/update_excuses.html
[13:57] <ahasenack> replace "bionic" with the ubuntu release name for other SRUs
[13:57] <ahasenack> kstenerud: look for "postfix" in there. It may take a while to appear (isn't there atm)
[13:57] <ahasenack> kstenerud: that will show the dep8 tests of postfix, and of packages that depend on postfix
[13:58] <ahasenack> kstenerud: if anything goes red, checkout why. If it comes to that, ping me and we can check together
[13:58] <ahasenack> kstenerud: finally, as the bug notification said, ubuntu is now waiting for someone to confirm that the package in bionic-proposed fixes the problem that was reported
[13:59] <ahasenack> kstenerud: usually we prefer if the person who reported the bug verifies it. But if that doesn't happen "soon" (1d? 2d? More?), then you can do the verification yourself
[14:00] <ahasenack> kstenerud: the important thing is that the verification must use the package from bionic-proposed (confirmed via, for example, apt-cache policy <package>), and that the test described in the bug is performed. copy & paste is appropriate for showing test results
[14:01] <ahasenack> kstenerud: so, summary, 3 things: a) check it built; b) check dep8 passed in the excuses page; c) sru verification in the bug
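The verification step (c) can be sketched roughly as below; bionic and postfix follow this conversation's example, and the exact pocket-enabling steps are spelled out in the bug notification itself:

```shell
# Enable the proposed pocket (opt-in; bionic as the example release)
sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu bionic-proposed main universe"
sudo apt-get update

# Install the candidate and confirm the version really came from bionic-proposed
sudo apt-get install postfix
apt-cache policy postfix

# Then re-run the test case described in the bug and paste the results there
```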
[14:01] <kstenerud> ok, so there's a postfix entry in the excuses page talking about a missing build
[14:02] <ahasenack> aha, it just appeared
[14:02] <ahasenack> yeah
[14:02] <ahasenack> the build is in lp, but when the script checked, it wasn't there yet
[14:02] <ahasenack> just wait for the next page refresh
[14:02] <kstenerud> ok
[14:02] <ahasenack> it's not dynamic, it's cron generated, so don't hammer on the reload button :)
[14:02] <ahasenack> I think it runs twice an hour, give or take
[14:03] <ahasenack> but if you went to this page first and saw a missing build, then you should check launchpad to see whether the build failed
[14:04] <ahasenack> if you scroll down on that page you can see examples of a lot of different possible scenarios
[14:04] <ahasenack> failed runs, green runs, runs that are known to always fail, etc
[14:05] <kstenerud> which page? I don't see anything colored on the excuses page or the package page
[14:06] <ahasenack> kstenerud: the same page
[14:06] <ahasenack> gdm3, for example, has a regression
[14:07] <ahasenack> the excuses page
[14:07] <ahasenack> how can you not see that bright red? :)
[14:07] <kstenerud> oh ok I see it
[14:07] <kstenerud> so this is just a ticker for everything being built?
[14:07] <ahasenack> for that particular release
[14:08] <ahasenack> it won't move automatically to the updates pocket, because this is a stable release
[14:08] <ahasenack> but it is one check the sru team will do before approving the update
[14:08] <ahasenack> approving means moving it from the proposed pocket, to the updates pocket, so it becomes available to all users
[14:09] <ahasenack> the proposed pocket is public, but opt-in
[14:09] <ahasenack> the bug notification explains how to enable it for those who want to help testing
[14:11] <sdeziel> kstenerud: I'll be glad to do the SRU verification for this postfix LP
[14:11] <sdeziel> I've already setup a reproducer and I'm waiting for the update to land in -proposed
[14:15] <kstenerud> cool thanks!
[14:24] <sdeziel> kstenerud: verification done
[14:27] <kstenerud> sdeziel: So when verification is done, is there a page that gets updated?
[14:28] <ahasenack> kstenerud: actually, there is
[14:28] <ahasenack> I forgot about that one
[14:28] <dpb1> :)
[14:28] <ahasenack> kstenerud: https://people.canonical.com/~ubuntu-archive/pending-sru.html
[14:28] <ahasenack> yet another random page out there
[14:28] <dpb1> people.c.c/~something/foo.html
[14:28] <ahasenack> kstenerud: search for postfix
[14:28] <ahasenack> kstenerud: this page is also cron generated, so it may take a while to update
[14:29] <ahasenack> kstenerud: your bug is "blue", meaning it's waiting for verification
[14:29] <ahasenack> kstenerud: once it detects the verification done by sdeziel, via bug tag changes, the bug number should turn to green, like others
[14:29] <ahasenack> kstenerud: red means bad. The verification could have failed, for example
[14:30] <sdeziel> it would be nice to have those links integrated to LP so that one can track the progress easily
[14:31] <ahasenack> kstenerud: the excuses page updated, your tests have begun
[14:31] <ahasenack> kstenerud: see how it also runs dep8 tests of other packages
[14:31] <ahasenack> these are other packages that use postfix
[14:31] <ahasenack> this is to make sure they don't break because of a postfix update
[14:31] <ahasenack> for some definition of "sure", of course :)
[14:31] <ahasenack> way better than nothing
[14:34] <ahasenack> kstenerud: sru verification can be another source of work for us. Go over that page, check bugs that have not been verified yet and are sitting there for a long while, and perform the verification. If you have a package/service you know well, it's a good helping hand to do it
[14:42] <rbasak> I've had a plan for that for a while. But like everything no time to work on it.
[14:42] <rbasak> A bot which picks up information from various places and maintains an area inside the bug description with status, expectations that contributors can understand, etc.
[14:43] <ahasenack> rbasak: debian just pushed sssd 1.16.3, is there something you can kick to have g-u fetch that now? Or, when would it notice it?
[14:43] <rbasak> "It's in the queue/it's awaiting verification etc"
[14:43] <sdeziel> rbasak: it would help community member to push debdiffs and do SRU validation IMHO
[14:44] <rbasak> Agreed
[14:44] <sdeziel> I suspect the pending-sru and update_excuses pages are not widely known by the community members
[14:45] <sdeziel> but I hear you, ENOTIME
[14:46] <rbasak> git-ubuntu first I think
[14:46] <rbasak> That'll help get stuff into the pipeline.
[14:46] <rbasak> I want it to be possible for a contributor to clone one of our branches, git cherry-pick from upstream, and submit that.
[14:46] <kstenerud> I'm putting all this in the document
[14:46] <rbasak> We have code ("changelogify" and "quiltify") that automatically does the packaging work for simple cases.
[14:47] <rbasak> Inside git ubuntu build.
[14:47] <rbasak> It's just not quite ready for general use yet.
[15:02] <Ussat> ...
[15:07] <ahasenack> rbasak: did you see my ping?
[15:08] <rbasak> Oh, sorry
[15:08] <rbasak> It'll get noticed after Launchpad picks it up
[15:09] <rbasak> It needs to appear in https://launchpad.net/debian/+source/sssd/+publishinghistory first
[15:09] <ahasenack> thanks, good to know
[15:09] <rbasak> After that the importer should pick it up within half an hour (IIRC) if it's not busy
[15:09] <ahasenack> https://launchpad.net/debian/+source/sssd still has only 1.16.2 indeed
[15:09] <ahasenack> do you know when lp does that?
[15:09] <rbasak> I don't recall. Not quickly.
[15:10] <ahasenack> ok
[15:10] <rbasak> (on the order of a day IIRC)
[15:10] <rbasak> Part of that is Debian's publication process I think
[15:10] <dpb1> rbasak: not a bad idea (maintain status in the bug somehow), might work better with a service though
[15:10] <rbasak> Their publication runs are very slow compared to Launchpad
[15:10] <dpb1> web service that does that, then a link
[15:10] <dpb1> link in the bug, I mean
[15:11] <rbasak> dpb1: yeah rich HTML would be handy for links to everything
[15:11] <rbasak> dpb1: but perhaps a plaintext summary in the bug?
[15:11] <ahasenack> rbasak: it's showing up in rmadison already
[15:11] <dpb1> rbasak: not a bad idea
[15:11] <dpb1> rbasak: but ya, no time
[15:11] <rbasak> ahasenack: is it available through apt though?
[15:11] <rbasak> (in sid)
[15:13] <ahasenack> haven't checked
[15:14] <ahasenack> it's ok, "half an hour after lp has it" is the answer
[17:01] <ahasenack> cpaelzer: if still here, shouldn't bileto use cosmic-proposed if the target is cosmic? Or it never uses proposed?
[17:01]  * ahasenack checks the ppa deps
[17:02] <ahasenack> the ppa is fine, it's using proposed
[17:02] <ahasenack> but the dep8 tests did not
[18:28] <dpb1> ahasenack: hi
[18:28] <dpb1> are you back now?
[18:28] <ahasenack> freenode ssl let me in this time, yes
[18:28] <ahasenack> -emerson- :[Global Notice] Services are going to be rebooted for maintenance now, apologies for the inconvenience. <-- that kicked me out
[18:42] <coreycb> jamespage: this is a little awkward, heat-dashboard has its own xstatic dependencies that differ from horizon's. i think i'll just bundle them into horizon.
[18:46] <jamespage> coreycb: oh right - yes - take a look at my most recent upload for heat-dashboard
[18:46] <jamespage> I did a bundle like horizon's
[18:46] <jamespage> but it needed some patching as well
[18:48] <coreycb> jamespage: ah ok great, thanks for doing that. now i just need to figure out why i'm still hitting the angular_uuid error.
[19:05] <jamespage> hmmm
[19:26] <madLyfe> this just means that i havent set the drives up yet, correct? https://paste.fedoraproject.org/paste/5i275KRyRvtXXxZrqGEdMg
[19:26] <madLyfe> on /sda and /sdb
[19:42] <sdeziel> madLyfe: what's your goal with /dev/sda and /dev/sdb?
[19:44] <madLyfe> well im going to try and set them up in raid 1
[19:44] <madLyfe> software raid 1
[19:44] <madLyfe> i was just kind of taking inventory of the attached disks and was surprised by the errors
[19:44] <sdeziel> madLyfe: mdadm RAID or zfs mirroring ?
[19:45] <madLyfe> i think you kind of sold me on ZFS yesterday
[19:45] <sdeziel> hehe
[19:45] <sdeziel> then you don't need to do any partitioning of those 2 disks, zfs will take care of this
[19:46] <sdeziel> madLyfe: install the package zfsutils-linux first
[19:46] <madLyfe> i will be able to access this raid 1 array on the network and from win 10?
[19:47] <madLyfe> 'sudo apt install zfsutils-linux' ?
[19:48] <sdeziel> yup for apt install
[19:48] <sdeziel> Windows won't be able to read the FS if you were to plug the physical disks into it. If you network export them there will be no problem though
[19:48] <madLyfe> i dont really know the differences between the installers tbh
[19:49] <madLyfe> ya these are on a networked PC
[19:49] <sdeziel> madLyfe: then no worries, whatever you use as FS/RAID is irrelevant for nfs/cifs/smb export
[19:50] <madLyfe> this was the first msg on running that command. https://www.irccloud.com/pastebin/dxbuN2SJ/
[19:50] <madLyfe> i see
[19:50] <madLyfe> ok looks like ive installed it
[19:51] <sdeziel> good, now to create your mirror (~equiv of RAID1 from mdadm): sudo zpool create $POOL_NAME mirror sda sdb
[19:52] <sdeziel> madLyfe: then if you pick "data" as POOL_NAME, you should see a directory created at /data
[19:53] <madLyfe> is that standard?
[19:53] <madLyfe> is that what the network will see it as?
[19:54] <sdeziel> madLyfe: there is no standard and no, it's not related to what network clients will see
[19:55] <sdeziel> madLyfe: are you familiar with LVM?
[19:55] <madLyfe> i know its logical volume management?
[19:55] <madLyfe> not sure what it does though.
[19:55] <sdeziel> yes, that's what it expands to. OK
[19:56] <sdeziel> I was going to say that zfs is a hybrid of mdadm and LVM ... kinda
[19:56] <sdeziel> basically, from a zfs pool, you can create filesystems
[19:56] <sdeziel> and you got one created by default when you created the pool
[19:57] <sarnold> turn on compression before you go any further
[19:57] <madLyfe> i havent done anything yet
[19:57] <sarnold> I haven't kept up, maybe zstd is the best choice these days
[19:57] <sdeziel> sarnold: isn't it done by default?
[19:57] <sarnold> if zstd doesn't work lz4 is fine
[19:57] <sarnold> sdeziel: I'm not sure
[19:57] <sdeziel> I assumed that lz4 was the default
[19:58] <sdeziel> but yeah, compression is a must
[19:58] <madLyfe> does it need to be /dev/sda and /dev/sdb or just sda sdb?
[19:59] <sdeziel> madLyfe: zpool create has a search path that includes /dev
[19:59] <sdeziel> so both are equivalent IIRC
[19:59] <sdeziel> sudo zpool create -O compression=on $POOL_NAME mirror sda sdb
[19:59] <madLyfe> so data is just what i want to name the pool/disk/mirror locally?
[20:00] <sdeziel> madLyfe: nice documentation on zfs concepts: https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/
[20:00] <sarnold> I love the pthree zfs intro
[20:02] <madLyfe> im guessing that name can be changed later?
[20:03] <sdeziel> madLyfe: yes but might be simpler to get it right the first time ;)
[20:04] <dpb1> sage advice
[20:06] <madLyfe> create -O is doing what?
[20:06] <sdeziel> man zpool
[20:07] <sdeziel> madLyfe: in short, it sets a property to apply to contained filesystems by default
[20:07] <madLyfe> got this https://www.irccloud.com/pastebin/Sr2psGS8/
[20:07] <sarnold> I didn't see an obvious way to permanently rename a pool. maybe it exists, maybe it doesn't.
[20:07] <sdeziel> sarnold: export then import
[20:07] <dpb1> export import, right?
[20:07] <dpb1> heh
[20:08] <sdeziel> madLyfe: add a "-f" there to force zpool to nuke the old raid signature on those drives that were apparently part of an old RAID array
[20:08] <madLyfe> ok at the end of the previous command string?
[20:08] <sdeziel> madLyfe: only use force if you need and want to :)
[20:08] <sarnold> and that name will persist through another export / import cycle?
[20:08] <sdeziel> sarnold: yes
[20:09] <sarnold> aha ;)
[20:09] <madLyfe> ya i want to remove all previous traces of raid on those disks
[20:09] <madLyfe> ok line returned with no errors, i think it worked.
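Putting the pieces together, the command that succeeded here as a sketch (the pool name "data" and devices sda/sdb are just this conversation's examples; -f is only needed because of the leftover RAID signatures):

```shell
# Two-disk mirror (~mdadm RAID1) with compression enabled for all contained FSes
sudo zpool create -f -O compression=on data mirror sda sdb

# Verify both disks are ONLINE in the mirror
zpool status data
```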
[20:10] <madLyfe> from pthree.org page:
[20:10] <madLyfe> 'UPDATE: Since the writing of this post, LZ4 has been introduced to ZFS on Linux, and is now the preferred way to do compression with ZFS. Not only is it fast, but it also offers tighter compression ratios than LZJB- on average about 0.23%'
[20:11] <sarnold> yeah, but that's still a quite old update
[20:11] <sdeziel> madLyfe: lz4 is the default on newly created pools
[20:12] <madLyfe> so i see the data dir
[20:13] <madLyfe> parted -ls https://www.irccloud.com/pastebin/T5suApxd/
[20:13] <sdeziel> madLyfe: so you have one FS named like the pool (data). You can create other FSes to split out the pool's space. You can use quotas on those and a bunch of other settings
[20:14] <sdeziel> madLyfe: yup, zpool created a GPT partition label on both disks and created 2 parts on it. That's technical details you can overlook for now
[20:15] <madLyfe> so if i just want to leave it as is and start putting data on it it, im done?
[20:16] <sdeziel> madLyfe: that's an option, yes
[20:17] <sarnold> you might also want to set atime=off and perhaps change the hash to something stronger
[20:17] <ahasenack> kstenerud: sorry I wasn't with you longer this afternoon, I'm finishing up some merges/uploads due to tomorrow's feature freeze
[20:17] <sarnold> (those were the first few things I did on my pool)
[20:17] <sdeziel> madLyfe: but you may want to slice up your pool into multiple FSes
[20:18] <sdeziel> sarnold: you don't trust/like fletcher4 ?
[20:19] <sarnold> sdeziel: yeah, fletcher4 is fast but that's about it :)
[20:20] <sdeziel> sarnold: so you prefer sha256?
[20:20] <sdeziel> madLyfe: if you want to see lz4 compression in action: https://paste.ubuntu.com/p/ZhvfWrP3vf/
[20:20] <sarnold> sdeziel: yes, that's what I used; I've thought about swapping to skein but never looked into it beyond a "oh that'd be nice"
[20:21] <sdeziel> sarnold: I've heard rumours that sha3 was slow
[20:22] <sarnold> sdeziel: yeah, I think I'd expect it to be a touch slower than sha256
[20:22] <sdeziel> sarnold: also, sha512 is 50% faster here (at least in non-scientific sha256sum/sha512sum benchmarks)
[20:22] <sarnold> sdeziel: oho
[20:22] <sarnold> that's cool
[20:23] <sarnold> I've heard that the sha512 can be faster-enough than sha256 on 64 bit systems to justify using sha512/256 in place of sha256 if that's the security level you need..
[20:23] <sdeziel> yeah but for the storage case I presume the CPU improvement is also a tradeoff in space
[20:24] <sdeziel> agreed on the sha512/256 thing
[20:24] <madLyfe> https://paste.ubuntu.com/p/KNNQ9K9sBC/
[20:25] <sdeziel> madLyfe: do you have data on it?
[20:25] <madLyfe> nah im not sure what you were talking about by slicing it up and also what sarnold was talking about with the other options.
[20:26] <sdeziel> madLyfe: the atime thing is for "access time" of each file
[20:27] <sdeziel> madLyfe: it gets updated whenever you read a file. This means a read operation incurs a write operation to update the atime. Disabling atime (=off) saves you the write part so it's faster
[20:27] <sdeziel> madLyfe: you can tune this now: sudo zfs set atime=off data
[20:27] <sarnold> madLyfe: "slicing it up", I've split my pool into a bunch of filesystems: http://paste.ubuntu.com/p/BC2YTNSWBG/
[20:28] <sarnold> I'm fascinated that the ubuntu main sources compress 2.03 times, but universe only 1.78 times, and restricted and multiverse even less
[20:29] <ahasenack> I have two sets of vms, libvirt and uvt
[20:29] <ahasenack> they compress differently
[20:29] <sdeziel> madLyfe: the other thing that sarnold mentioned is the checksum algo used by zfs.
[20:29] <mason> sarnold: No /home there?
[20:29] <ahasenack> nsnx/libvirt-images  compressratio  1.62x  -
[20:29] <ahasenack> nsnx/uvtool          compressratio  1.89x  -
[20:30] <sarnold> mason: no, I kept those on the OS disks
[20:30] <mason> sarnold: How did you get the compression stats?
[20:30] <sarnold> ahasenack: ha :) I didn't expect that
[20:30] <sarnold> mason: that was zfs list -o name,used,avail,compressratio,mountpoint
[20:31] <madLyfe> so you are saying change it to sha512?
[20:31] <mason> My libvirt-images is also my biggest compression.
[20:31] <ahasenack> var/log is amazing, I get 4.59x
[20:32] <mason> sarnold: Just saw the 6T. I'm envious now.
[20:32] <sarnold> mason: hehe :)
[20:32] <sdeziel> madLyfe: it's a personal choice but if you do not stick to the default, I'd recommend sha256
[20:33] <RoyK> or sha512, which is faster than sha256 on 64bit machines
[20:33] <sdeziel> those with libvirt-images probably don't hand zvols to VMs, right?
[20:33] <mason> Yeah, logs compress well: https://bpaste.net/show/e3af0baef4d9
[20:34] <sdeziel> RoyK: indeed but I'd be worried about the bigger storage overhead, no?
[20:34] <sarnold> 12x
[20:34] <sarnold> 10x
[20:34] <sarnold> nice
[20:35] <RoyK> sdeziel: oh - was this about zfs hecksums?
[20:35] <sdeziel> RoyK: yes
[20:35] <RoyK> IIRC zfs doesn't even support sha checksums for that
[20:35] <madLyfe> the default checksum is?
[20:35] <sdeziel> madLyfe: fletcher4
[20:35] <RoyK> too slow and heavy and large and complex and …
[20:35] <madLyfe> oh thats what you were talking about. gotcha.
[20:36] <RoyK> and for a maximum of 2MB or whatever the largest block size is these days, not necessary
[20:36] <sarnold> hecksum :)
[20:37] <ahasenack> sdeziel: I don't, I use plain qcow files, simpler to manage
[20:37] <madLyfe> so i guess ill just use sha256
[20:39] <sdeziel> ahasenack: I see. Personally I settled on a tiny qemu script to snapshot on boot and keep a set of 3 rotating snapshots. Pretty nice to revert :)
[20:39] <madLyfe> hmm https://paste.ubuntu.com/p/DYFDj8sTkp/
[20:39] <sdeziel> qcow snapshots are too complicated to my taste
[20:39] <sdeziel> madLyfe: sudo zfs set checksum=sha256 data
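Collected in one place, the property tweaks suggested in this thread (pool name "data" as in the example; these are suggestions, not requirements):

```shell
sudo zfs set atime=off data        # skip the access-time write on every read
sudo zfs set checksum=sha256 data  # stronger checksum than the fletcher4 default

# Confirm the settings took effect
zfs get atime,checksum,compression data
```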
[20:39] <ahasenack> sdeziel: I use vms only for testing bugfixes and more complicated deployment scenarios (like sssd + krb + ldap and all on different vms)
[20:39] <ahasenack> so they are short-lived, and are never running constantly
[20:40] <ahasenack> sdeziel: virt-manager has a nice GUI for managing the qcow2 snapshots
[20:40] <ahasenack> well, nice, I mean it has a gui :)
[20:40] <mason> FWIW, I recently moved from zvols backing VMs to qcow2 sitting on ZFS datasets. Fairly arbitrary I guess, but live migration wasn't happy with zvols.
[20:40] <sdeziel> hehe, right, I should revisit the GUI. It's been so long since I last used it
[20:40] <mason> The virt-manager GUI is pleasant.
[20:43] <madLyfe> is it possible to see what settings 'data' is using as a list?
[20:43] <madLyfe> get all?
[20:44] <ahasenack> you mean zfs get all data, where "data" is a zfs dataset?
[20:44] <madLyfe> https://paste.ubuntu.com/p/RH39kbw4gH/
[20:44] <madLyfe> ya ahasenack
[20:44] <RoyK> zfs get all pool/dataset (or just pool)
[20:44] <ahasenack> you want a different output format?
[20:45] <madLyfe> me? nah just a list like that is fine. i didnt know for sure if it was get all
[20:45] <ahasenack> ok
[20:45] <madLyfe> sarnold: can it be sliced up later?
[20:46] <RoyK> zpool get all <pool> and you get the zpool settings (zfs ... is for the dataset, not the pool)
[20:46] <sdeziel> madLyfe: yes, you can slice it anytime you like
[20:46] <madLyfe> like i know i want to set up a plex server on this box but im not sure if i want to put that on the OS thumb drive or the mirror.
[20:47] <madLyfe> by slicing do you just mean adding dirs? or?
[20:47] <sdeziel> madLyfe: I mean creating FSes under "data"
[20:48] <sdeziel> those sub-FSes will appear as directories under /data (by default)
[20:48] <madLyfe> i guess i dont know what i need right now or why they would need to be a different FS tbh.
[20:48] <sdeziel> like for example: "sudo zfs create -o quota=30G data/foo" will create /data/foo and you'll only be able to write 30G in it
[20:49] <sdeziel> madLyfe: for my samba server, I use a FS per export/share
[20:49] <RoyK> madLyfe: just play around with it a bit - nothing to lose
[20:49] <RoyK> and if you have many users, use a dataset per homedir, perhaps with a quota
[20:50] <RoyK> then the users will be allowed to see their own snapshots if you use things like automatic snapshotting
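RoyK's per-user layout as a sketch (the user names and quota size are invented for illustration):

```shell
# One dataset per home directory, each with its own quota
sudo zfs create -o quota=50G data/home-alice
sudo zfs create -o quota=50G data/home-bob

# Review the layout
zfs list -o name,quota,used -r data
```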
[20:50] <madLyfe> well there are no users. only zuul.
[20:50] <RoyK> ok
[20:51] <madLyfe> but seriously its just a backup spot and probably plex server for the data on this mirror
[20:51] <madLyfe> but the plex server is a ways out
[20:52] <madLyfe> now i just need to make this mirror avail on the network to my win 10 box
[20:53] <madLyfe> with my win10/kubuntu dual boot box. this was the whole point of making a mirror array on a dedicated network box.
[20:54] <madLyfe> what would be my best option for sharing it with win10 on the network?
[20:54] <sdeziel> madLyfe: I guess it's time to setup samba and have it share /data (or any other sub-dirs/FSes)
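A minimal smb.conf stanza for sharing /data might look like this (the share name, user, and settings are placeholders, not a tested config):

```ini
; appended to /etc/samba/smb.conf
[data]
   path = /data
   browseable = yes
   read only = no
   valid users = zuul
```

After that, something like `sudo smbpasswd -a zuul` to set a samba password and `sudo systemctl restart smbd` to pick up the change.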
[20:56] <sarnold> madLyfe: yes, you can add new zfs datasets whenever :)
[20:58] <madLyfe> ok let me set the ip of this server to static first. ill be back.
[21:01] <madLyfe> shit. its a new method to change static ip in 18.04. researching.
[21:03] <madLyfe> would this be correct procedure? https://www.techrepublic.com/article/how-to-configure-a-static-ip-address-in-ubuntu-server-18-04/
[21:03] <ahasenack> madLyfe: you mean netplan?
[21:03] <madLyfe> ya
[21:03] <madLyfe> over interfaces
[21:03] <ahasenack> for netplan, this is a good official resource: https://netplan.io/examples
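For flavor, a minimal static-address sketch in the style of those examples (the interface name and every address below are made up; the real values come from your network):

```yaml
# e.g. in /etc/netplan/50-cloud-init.yaml (18.04-era syntax)
network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: no
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```

Applied with `sudo netplan apply`.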
[21:04] <cyphermox> oi
[21:04] <ahasenack> cyphermox: :)
[21:06] <madLyfe> but that guide is having me make a new yaml config file and not use 50-cloud-init.yaml
[21:06] <RoyK> ahasenack: didn't look very hard
[21:06] <madLyfe> just wondering if thats correct
[21:07] <ahasenack> you can use the existing one
[21:07] <madLyfe> ok
[21:07] <ahasenack> would be odd to have two config files setting different things on the same nic
[21:09] <madLyfe> this is at the top of the file? https://paste.ubuntu.com/p/Rj8YFmStbV/
[21:12] <madLyfe> does that mean i cant make changes to that file that will persist?
[21:15] <ahasenack> is that ubuntu server installed with that new text based installer?
[21:15] <madLyfe> ya
[21:15] <madLyfe> fresh install with the freshest iso
[21:19] <ahasenack> I think you could just remove cloud-init, I've done that in the past
[21:19] <ahasenack> but I was prepared to handle any regressions
[21:19] <ahasenack> or, just do what that config file says
[21:19] <ahasenack> in the header
[21:23] <mike802> hi all! so i'm going through the ubuntu server guide and i'm trying to get phpmyadmin up and running.  it says to edit /etc/phpmyadmin/config.inc.php with the db_server address.  then i need to be sure that phpMyAdmin host has permissions to access the remote database
[21:24] <mike802> it seems like this is a step i should take (access permissions for phpMyAdmin to remote database), but i'm not sure what it would be
[21:25] <nacc> mike802: ... you don't know your mysql admin credentials and you want to use a php interface to administer said mysql instance?
[21:26] <mike802> well, technically i'm still trying to do the bind-address in my.cnf, but even the wildcard 0.0.0.0 isn't allowing me to start mysql
[21:27] <nacc> mike802: ok, so your question is unrelated to phpmyadmin? :)
[21:28] <mike802> alright, i can just keep trying stuff
[21:29] <mike802> i was hoping connectivity could have helped
[21:29] <nacc> mike802: no, i mean, you want to know how to configure mysql right?
[21:29] <nacc> mike802: what error do you get when you try to start it?
[21:30] <mike802> Failed to start MySQL Community Server
[21:30] <nacc> mike802: :) look in the logs
[21:32] <mike802> ?
[21:32] <nacc> mike802: look in the mysql logs, that message just says it failed, which we already knew. I'm asking *why* it failed.
[21:33] <nacc> basic server debugging :)
[21:33] <mike802> i checked systemctl status mysql.service and journalctl -xe
[21:33] <mike802> they both just say failed to start
[21:34] <nacc> mike802: both will almost certainly say *more* than just that
[21:34] <nacc> but check the actual sql logs /var/log/mysql iirc
[21:36] <mike802> there seems to be a warning about Gtid table is not ready to be used
[21:38] <mike802> warning no UUID was found
[21:38] <mike802> warning failed to set up SSL
[21:38] <mike802> a few others then it shuts down
[21:51] <madLyfe> sdeziel: do you happen to still be around?
[21:52] <sdeziel> madLyfe: yes?
[21:52] <madLyfe> ive got the server set to static ip now. lel
[21:53] <sdeziel> madLyfe: good
[21:54] <madLyfe> i think i needed to do that for samba?
[21:56] <sdeziel> madLyfe: that's usually better, yes
[21:58] <nacc> mike802: can you use a pastebin and paste the log?
[22:02] <madLyfe> sdeziel: where do you suggest i start?
[22:02] <mike802> https://pastebin.com/cX6WB5XA
[22:03] <sdeziel> madLyfe: for samba?
[22:03] <madLyfe> ya
[22:04] <sdeziel> madLyfe: I never looked at it but maybe you could glance at https://help.ubuntu.com/community/Samba ?
[22:04] <sdeziel> madLyfe: or maybe the more succinct one here: https://help.ubuntu.com/lts/serverguide/samba-fileserver.html
[22:05] <nacc> mike802: hrm, that log says on line 34 that it started
[22:05] <nacc> but then immediately shut down
[22:05] <mike802> yeah, i noticed
[22:05] <mike802> weird
[22:05] <nacc> mike802: none of the preceding lines indicate any errors afaict
[22:06] <mike802> it starts fine without the bind-address line in my.cnf
[22:07] <nacc> mike802: what address are you trying to use?
[22:07] <nacc> mike802: did you try just commenting out the bind-address line?
[22:07] <mike802> the address of my apache2 box with phpmyadmin
[22:07] <mike802> yeah, that works
[22:07] <nacc> wait
[22:08] <nacc> mike802: are you doing mysql on the same system as the one using apache?
[22:08] <mike802> no
[22:08] <nacc> mike802: then that's totally wrong
[22:08] <nacc> mike802: think about it
[22:08] <nacc> mike802: bind-address is the address for your sql server to *listen* on
[22:08] <nacc> mike802: it's the address of the machine the sql server is on, not the machine your apache server is on
[22:08] <mike802> localhost?
[22:09] <nacc> mike802: if you just specify no bind-address, it listens on all interfaces, iirc
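nacc's point as a config sketch (the address shown is a placeholder for the SQL host's own interface, and the file path varies by MySQL version):

```ini
; e.g. /etc/mysql/mysql.conf.d/mysqld.cnf -- example only
[mysqld]
; listen on this machine's own address (the SQL server), not the web server's:
bind-address = 192.168.1.20
; or use 0.0.0.0 (or omit the line) to listen on all interfaces
```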
[22:09] <mike802> alright, i will try that
[22:09] <nacc> mike802: ... no
[22:09] <nacc> mike802: localhost would be ... for local connectivity to the machine
[22:09] <mike802> ok, ty