[00:27] Hmm, running a 16.04 -> 18.04 upgrade I'm seeing a whole whack of "no candidate ver:" listings for ancient packages that I haven't had installed for good reason for quite some time (ex. a whole bunch of 3.x kernels). Is there some way to clear out those old listings?
[00:42] keithzg[m]: you must have packages installed which still reference these
[00:43] may i suggest https://github.com/tomreyn/scripts#foreign_packages to sort out leftover packages (after cleaning up your apt sources, i.e. removing those you no longer need).
[00:45] tomreyn: Thanks! I'll definitely give that a shot once I'm done with the upgrade. I'd be somewhat surprised if there was anything really, but then again this is a VM I actually inherited from the previous sysadmin so who knows what skeletons are in its closet, hah
[00:46] a good idea then. also "ubuntu-support-status"
[00:48] keithzg[m]: ^ and maybe debsums -as
[00:48] + deborphan ;)
[01:27] Huh, fail2ban is "unsupported" these days? Even more surprising, lazily searching packages.ubuntu.com returns no results for it, but that just seems to be that the website search isn't working (a local `apt policy fail2ban` shows that it's in universe, which I suppose would be the reason for ubuntu-support-status to report it as "unsupported" even though that seems misleading).
[01:28] !info fail2ban
[01:28] fail2ban (source: fail2ban): ban hosts that cause multiple authentication errors. In component universe, is optional. Version 0.10.2-2 (bionic), package size 321 kB, installed size 1698 kB
[01:29] keithzg[m]: https://help.ubuntu.com/community/Repositories#Universe
[01:31] tomreyn: Oh I know what 'universe' means for the Ubuntu repos.
It's just that "supported" or "unsupported" seems like a misleading binary state to me in this case; that it's from a maintained package in the repos is surely at least *some* level of support, particularly compared to, say, if it was installed from a third-party repository that's no longer configured on the system.
[01:32] here 'supported' refers to 'canonical provides security support for it'
[01:32] fail2ban has been in universe since at least precise, probably earlier
[01:32] Like, the distinction makes sense, it just means that `ubuntu-support-status` isn't necessarily too useful to me.
[01:32] keithzg[m]: the important takeaway here is that if there's a bug in it that you want fixed, *you're* the one supporting it :)
[01:33] sarnold: Eh, that's not the takeaway is it? Surely it's just, if there's a bug in it that I want fixed, it just isn't *Canonical's* problem :D
[01:34] keithzg[m]: yeah :) it's just that all too often folks expect Someone Else to solve their problems..
[01:35] sarnold: Yeah, fair! Although I can't imagine such folks would let a package being in "universe" stop them, hell I bet rarely would "I downloaded this from some random website and half-followed the instructions" stop 'em ;)
[01:36] keithzg[m]: too right you are ;)
[06:01] Good morning
[07:42] keithzg[m]: what else would you have "unsupported" mean but "unsupported by distro vendor"
=== coconut is now known as coconut_
[12:15] good morning
[12:49] rbasak: thanks for the reviews. Could you take a quick look at https://code.launchpad.net/~ahasenack/ubuntu/+source/squid/+git/squid/+merge/356100/comments/926735 for one extra commit I added to squid?
It's on top of what you reviewed already, I just had to regenerate the changelog after it
[13:08] ahasenack: +1 (commented)
[13:09] rbasak: thanks
[13:10] rbasak: I think I'll take over https://code.launchpad.net/~paelzer/ubuntu/+source/strongswan/+git/strongswan/+merge/355589
[13:10] it was trumped by two security updates in the meantime, and the upload was rejected (https://launchpad.net/ubuntu/cosmic/+queue?queue_state=4&queue_text=strongswan)
[13:10] rbasak: is there a place to see why it was rejected, although in this case I think that was the reason?
[13:16] ahasenack: only the uploader gets the reject message unfortunately
[13:16] ok
[13:16] Ask in #ubuntu-release perhaps?
[13:30] checked, it was the secteam's upload
[13:30] I'll resubmit
[13:31] rbasak: what happens with the git tree in this case?
[13:31] I guess since it was never uploaded, it will never be imported
[13:32] so the changes will never show up in the pkg git tree, just the upload tag
[13:32] which won't match what was actually uploaded as that version/release
[13:35] ahasenack: correct. Best to delete the upload tag to avoid confusion.
[13:35] (which is the ugly part, but it's the least worst option IMHO)
=== lotuspsychje_ is now known as lotus|NUC
[13:44] rbasak: https://code.launchpad.net/~ahasenack/ubuntu/+source/strongswan/+git/strongswan/+merge/356135 3rd MP about this :)
[14:04] ack
[14:22] hello, am I missing some kind of package to have libvirt support zfs pools on 18.04? trying "virsh pool-define-as nvme1 zfs --source-path /dev/zvol/nvme1" gives me "error: internal error: missing backend for pool type 11 (zfs)"
[14:22] it works on 16.04, I don't remember that I had to install something special
[14:23] hm
[14:24] Slashman: try installing libvirt-daemon-driver-storage-zfs
[14:24] ahasenack: thanks! this package does not exist on 16.04
[14:25] ahasenack: hm, same error, do I have to restart something or modify a config somewhere?
[14:25] try restarting libvirtd-bin (iirc)
[14:26] ahasenack: "libvirtd.service", and it works now, thanks! time to update my ansible role
[14:26] cool
[14:32] Does libvirt-daemon-driver-storage-zfs end up setting up zvols per VM?
[14:32] I've been doing this by hand, and I like the idea of it automatically happening.
[14:33] are there particular advantages in using zvols instead of plain image files on a zfs dataset?
[14:34] I find the image file quite convenient, mainly because of its name and ease of moving around if needed
[14:34] ahasenack: very useful to transfer, clone, backup, etc
[14:34] well, that's about zfs, not zvols in particular
[14:34] ahasenack: Yes. send/receive/snapshot per VM
[14:34] hm, per vm, instead of per directory where all vms are you mean?
[14:34] Sorry, I should have said "per VM"
[14:35] yes, datasets per VM is great, you can have several per VM too, I usually have at least one for the OS and another for the data
[14:36] ahasenack: I didn't benchmark it but I'd expect better performance from zvol when compared to raw|qcow2 on zfs
[14:36] there are benchmarks out there comparing the two, and it's not that clear cut
[14:36] you can snapshot the tree too, eg: "tank/VM/xenial/root and tank/VM/xenial/data", "zfs snapshot -r tank/VM/xenial"
[14:37] or make a different tree to snapshot only the data
[14:37] http://jrs-s.net/2018/03/13/zvol-vs-qcow2-with-kvm/
[14:38] tank/VM/root/vm1 tank/VM/root/vm2 tank/VM/data/vm1 tank/VM/data/vm2 => zfs snapshot -r tank/VM/data
[14:38] ahasenack: interesting, thanks
[14:40] the per VM snapshot is just too nice for me to abandon zvols though
[14:42] yeah, I can see that
[14:42] on a side note, using "ashift=13" on ssd is not a good idea in reality
[14:42] this becomes especially nice when coupled with a pre-boot snapshot that a qemu hook can do :)
[14:42] it destroys the compressratio and the performance is basically the same
[14:44] backups and migration of VMs are also much easier with a zvol per VM
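The per-VM dataset layout Slashman describes above can be condensed into a sketch. The dataset names follow his tank/VM/... examples; the snapshot names (@pre-upgrade, @nightly) are placeholders, and an existing pool called tank is assumed:

```shell
# One dataset tree per VM: OS and data are split so they can be
# snapshotted together or separately.
zfs create -p tank/VM/xenial/root
zfs create tank/VM/xenial/data

# Recursive snapshot of everything under this VM's tree:
zfs snapshot -r tank/VM/xenial@pre-upgrade

# Alternative layout for snapshotting only the data of all VMs at once:
#   tank/VM/root/vm1  tank/VM/root/vm2  tank/VM/data/vm1  tank/VM/data/vm2
# zfs snapshot -r tank/VM/data@nightly
```

`zfs snapshot -r` recurses into child datasets, which is what makes the "several datasets per VM" layout convenient: one command captures root and data consistently.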
[14:45] rbasak: I applied your suggestions and ran the tests again, all good. Could you take another look? https://code.launchpad.net/~ahasenack/ubuntu/+source/sssd/+git/sssd/+merge/355524
[14:45] I'm kind of excited about libvirt-daemon-driver-storage-zfs now.
[14:45] rbasak: and, do you have a preference whether to squash it all now or later? I think it's easier to review leaving it as is
[14:45] And since my hypervisors are Bionic, I can leap right in.
[14:46] mason: does it create the zvol, or do you have to hand it one already created?
[14:46] (if you have tried it already)
[14:46] seems like libvirt-daemon-driver-storage-zfs is only the driver and does nothing else
[14:47] ahasenack: We'll find out! I assume it creates it, because if it doesn't, the package doesn't do much.
[14:47] Hrm. In that case, what's it actually do? I'm creating zvols by hand and passing them in as block devices.
[14:47] mason: it activates the zfs pool storage type, you cannot have one if you don't have it
[14:47] you can create a pool with a random block device, no? Then what would be the difference indeed between that and a zvol?
[14:47] Slashman: I can say for sure that I can have one without that package. :P
[14:47] mason: I can't
[14:48] mason: I tried before
[14:48] see above
[14:48] Slashman: Worked fine for me in Xenial, continues to work fine in Bionic...
[14:48] ahasenack: +1 - commented
[14:48] mason: it worked in xenial fine, this machine was installed with bionic, it was not upgraded
[14:48] the nuance may be here
[14:49] Slashman: Same here. I redid my hypervisor from scratch.
[14:49] ahasenack: I'm caught up with you now I think? Anything else pending review for you right now?
[14:49] Slashman: I have both cases. Desktop/hypervisor is an upgrade, and dedicated hypervisor was a fresh install.
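The fix that worked for Slashman earlier can be written up as a sketch (18.04/bionic package name; nvme1 is the pool from his example; whether the backend auto-creates the zvol for a new volume is exactly the open question in this exchange, so the last step is an assumption):

```shell
# On 18.04 the zfs storage backend ships as a separate driver package:
sudo apt install libvirt-daemon-driver-storage-zfs

# libvirtd only probes for storage backends at startup, hence the restart:
sudo systemctl restart libvirtd

# Define and start a pool backed by an existing zpool named nvme1:
virsh pool-define-as nvme1 zfs --source-path /dev/zvol/nvme1
virsh pool-start nvme1

# Assumption: creating a volume in a zfs-type pool should produce a zvol
# (e.g. nvme1/vm1-root) rather than an image file:
virsh vol-create-as nvme1 vm1-root 20G
```

This is a sketch of the sequence discussed above, not a verified recipe; on a host where the driver package is missing, the pool-define step fails with the "missing backend for pool type 11 (zfs)" error quoted earlier.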
[14:49] Slashman: screenshot incoming
[14:49] rbasak: nope, you've been stellar, thanks
[14:49] mason: well, I had "error: internal error: missing backend for pool type 11 (zfs)" before I installed libvirt-daemon-driver-storage-zfs
[14:50] I tried to define it via xml, via virsh pool-define-as and via virt-manager, same error
[14:50] Slashman: https://imgur.com/a/E8jyKQp
[14:50] mason: this is a raw disk
[14:50] Slashman: And I don't have libvirt-daemon-driver-storage-zfs on either system.
[14:51] Slashman: Yes.
[14:51] that's not the same
[14:51] Slashman: How so?
[14:51] you can have a pool of type zfs
[14:51] Slashman: What's that buy me if zvols aren't autogenerated when I create a VM?
[14:52] mason: maybe then, I have never used the autogeneration of storage
[14:52] I have scripts that create everything and then define the VM
[14:52] Slashman: I'm curious now. What does a "pool type of zfs" mean, tactically?
[14:53] ahasenack: do you want to continue with triage for bug 1787739? I have some questions I'd like answered but I don't want to pull the reporter in two different directions at once.
[14:53] bug 1787739 in bind9 (Ubuntu) "postfix name lookup failed after dist-upgrade (Aug-2018)" [Undecided,Incomplete] https://launchpad.net/bugs/1787739
[14:53] hm, I got that email
[14:53] (I see you're subscribed but it came up in my triage today)
[14:53] mason: https://apaste.info/cwL1
[14:54] rbasak: let me take a quick look
[14:54] damn, no type
[14:54] ahasenack: no rush - just don't want it to get lost if I leave it
[14:54] rbasak: I still think it's something on his setup, the vagrant image doesn't help
[14:54] Slashman: Well. But what does it do for you that I don't have passing in zvols as raw disks?
[14:54] rbasak: this is now falling under support I think.
Asking for tcpdump packet captures and the like
[14:55] I have never used vagrant, though
[14:55] ahasenack: yeah I'd ask him for reproduction instructions (rather than an image) and that hit public infrastructure
[14:56] mason: well, in virt-manager, you see a pool and can select the drives, etc, not sure about the definition of the hosts themselves
[14:56] rbasak: I'd say we can't reproduce it
[14:56] maybe suggest that he inspect the traffic with tcpdump, and bump the logs on his 192.168.0.130 nameserver
[14:56] rbasak: ^
[14:57] mason: but you make a good point, I'm not really using that, I guess that you have a driver type "zfs" that should have better perf than the "raw" one
[14:57] ahasenack: I'd avoid going into support detail. That encourages a more-support-help response and he's better off getting that from askubuntu.com or Ubuntu forums or wherever rather than in a bug.
[14:57] ahasenack: I wonder if this is https://people.canonical.com/~ubuntu-security/cve/2018/CVE-2018-5738.html
[14:57] (ie. deliberate somehow)
[14:58] rbasak: you mean as a regression, or that he hasn't updated? (didn't check the version number)
[14:58] I mean that a security hole was closed and he's noticed
[14:58] (I haven't looked in detail, but he seems to be claiming a regression-update?)
[14:58] I think not, because he does get a correct response, but with an error status
[14:58] At the least he can pin it down to a specific update for us.
[14:58] it's odd. That's why I thought it was some sort of truncation
[14:59] Yeah I don't know why that would be SERVFAIL rather than refused.
[14:59] jamespage: hi! what's up with openvswitch (2.5.5-0ubuntu0.16.04.1) xenial?
[14:59] ahasenack: you want me to reply?
[14:59] jamespage: I want to fix some CVEs, and wonder if that's going to get released soon or not
[14:59] Slashman: I'll compare performance sometime, as that's a fair bet.
[15:00] mason: "zfs" driver type doesn't exist, at least on xenial
[15:00] rbasak: I think we should at least try his vagrant config, since he went through so much effort to try to help and give a reproducer
[15:00] mason: Slashman: having a pool in libvirt (be it a zpool or an lvm one) means one can create new disks with only virsh access, no direct SSH required
[15:00] rbasak: leave it to me
[15:01] Thanks!
[15:01] sdeziel: okay, that makes sense
[15:01] sdeziel: Okay. Okay, that's also reasonable.
[15:01] sdeziel: just an FYI, ngx_brotli will only work over HTTPS :|
[15:01] so it has no benefit for non-https connections
[15:02] since I'm always creating the VMs and their disks via automation, I never really looked at that, I just found it useful to see the zpool with the volumes in virt-manager when I needed to debug something
[15:02] teward: I don't maintain a single HTTP-only site ;)
[15:03] teward: I looked at BREACH and the compression with HTTPS is only problematic when you compress stuff with secrets inside (like a CSRF token)
[15:04] teward: ideally your http site should only be there to redirect to https
[15:05] teward: interesting, do you have a source for the BREACH stuff about secrets? also from my research, brotli uses a lot of CPU unless you compress your content in advance
[15:05] Slashman: as that's its own discussion in itself, we'll store this argument later.
[15:05] for later*
[15:05] teward: so I _think_ I'm safe to use (gzip|br) for only CSS and JS
[15:05] sdeziel: indeed. There's a headache in the brotli code though, if you give it text/html and a list of other MIME types it throws a warning
[15:05] sdeziel: but yeah all 'should' be OK.
[15:06] sdeziel: basic tests seem to work in a container, so it'd work, but as there's some... code issues...
that sarnold found, it wouldn't be in main
[15:06] there's some overflow / out of bounds concerns
[15:06] which could cause segv
[15:06] teward: thanks for looking into this
[15:06] you need many more MIME types if you want to have a real gain, depending on your application
[15:06] sdeziel: thank sarnold as well
[15:07] sarnold: yeah, thank you indeed
[15:07] sdeziel: one concern is text/html is *always* compressed
[15:08] even if you only want to compress css and js
[15:08] so unless you configure properly there may be a risk
[15:08] I don't have details on how BREACH works, the Security team might know more than me on that for testing
[15:08] sdeziel: but yeah it should be doable, provided that the issues sarnold found are non-issues
[15:08] (we're waiting for upstream responses)
[15:08] teward: what do you think of something like that for compression: https://paste.ubuntu.com/p/54fCqsc883/ (in httpd format)
[15:08] sdeziel: it wouldn't be added until next cycle though
[15:09] a bit late in the cosmic cycle to add it
[15:09] Slashman: i'm not sure how this got onto a discussion of "Is this sane" or not
[15:09] I was following up with sdeziel on something from yesterday
[15:09] teward: indeed, upstream doc confirms that text/html is always compressed when gzip is enabled
[15:10] sdeziel: issues 21 and 22 are Seth's discoveries
[15:10] teward: okay, but you seem to have some experience in compression for httpd servers
[15:10] and those are what i'd wait on first :p
[15:10] Slashman: not necessarily?
[15:10] Slashman: i'm the nginx package maintainer
[15:10] sdeziel asked me if getting brotli support in NGINX was doable
[15:10] teward: EMISSINGREFERENCE, is there a bug I should be looking at?
[15:11] sdeziel: upstream bugtracker, on the repo
[15:11] thx
[15:11] https://github.com/eustas/ngx_brotli/issues, 21 and 22
[15:11] code level concerns
[15:11] teward: okay, I'm also interested in brotli for nginx
[15:11] sdeziel: if sarnold ACKs for Main inclusion then I can add this to the standard module set for all the flavors
[15:11] if he doesn't then it's stuck to -extras at the least
[15:11] (because 'all the flavors' would include -core)
[15:12] (for nginx, anyways)
[15:15] sdeziel: TL;DR, there's a conditional ACK on this because of the code problems/risks
[15:16] if there's no issues then all it determines is whether we want to MIR that plugin *into* the nginx-core flavor :P
[15:16] I understood as much
[15:16] *yawns, and goes to find more coffee*
[15:17] Slashman: re your compression config for apache httpd, it includes text/html which opens a BREACH hole when using HTTPS
[15:19] Slashman: for details see http://www.breachattack.com/ and more specifically the "Am I affected" section as more conditions are needed to be vulnerable
[15:21] sdeziel: I wonder if that's a risk with brotli then as well, because it always compresses text/html?
[15:21] not sure but thought I'd ask.
[15:21] s/ask/mention it/
[15:21] teward: the way I understood it, this applies to every compression algo
[15:21] sdeziel: then this would introduce another BREACH risk if left on the defaults (cc sarnold)
[15:22] teward: anything that compresses the HTML body containing a secret thing
[15:22] teward: well, same caveat as with gzip
[15:22] indeed. with nginx you can configure brotli in a location block that matches only .css and .js or such to be enabled, thereby protecting against BREACH
[15:23] but that gets complex fast heh
[15:23] teward: I must admit I don't like the always-on compression for text/html
[15:23] sdeziel: agreed
[15:25] sdeziel: upstream issue 23 about breach opened.
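teward's location-block workaround could look something like the sketch below. This is not a drop-in config: it assumes nginx built with ngx_brotli, and the idea is that since text/html is always compressed once gzip/brotli are on, compression is enabled only inside a location that can never serve HTML (and thus never a CSRF token):

```nginx
# Compression stays off at the server level (nginx's gzip default);
# it is enabled only for static assets, so HTML bodies containing
# secrets are never compressed, mitigating BREACH.
location ~* \.(css|js)$ {
    gzip on;
    gzip_types text/css application/javascript;
    brotli on;
    brotli_types text/css application/javascript;
}
```

As noted in the discussion, this gets complex fast once a site serves more asset types, which is why the implicit always-compress-text/html behaviour is the real problem.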
[15:28] sdeziel: okay thanks, I'll check with the dev team if we have all the conditions to be vulnerable
[15:29] teward: for the gzip part there is this bug already https://trac.nginx.org/nginx/ticket/1083
[15:29] sdeziel: yes, I know, but gzip_types actually lets you override to ignore text/html in NGINX code
[15:29] that's the workaround
[15:29] and it works
[15:29] but brotli doesn't have that workaround
[15:30] so it's a risk
[15:31] teward: what I understood from the doc is that gzip_types has an implied text/html
[15:31] sdeziel: if you don't specify `gzip_types` and override it, yes.
[15:31] but that's easily overridden
[15:32] my point is that the workaround which protects against text/html is adjusting the config.
[15:32] if you provide it, say, `gzip_types application/javascript text/css;` it ignores text/html
[15:32] that isn't the case in the brotli plugin
[15:32] teward: please re-read https://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_types
[15:32] oop you're right, i reread
[15:32] i need to bump this i think
[15:33] sdeziel: the other way is to just shut off gzip, which is actually a default change I think
[15:33] at least, for the configs we ship... *double checks*
[15:33] sdeziel: given that the default is `gzip off;` this is only really a risk for people who use gzip on their site
[15:33] but you're not wrong
[15:33] it's still a risk
[15:34] teward: yup
[15:42] mdeslaur: I'll kick off the testing now and clear the way for your CVEs
[15:42] thanks jamespage!
[16:14] sarnold: lol, apparently I get a faster response to my "BREACH Risk" issue on ngx_brotli than your code related questions get a reply to lol
[17:13] teward: nice find.
[17:19] sarnold: thanks. yeah it was a "WTF" for a moment, but it looks like NGINX upstream has the same problem and didn't do anything about it
[17:19] cute.
=== jdstrand_ is now known as jdstrand
[18:22] How do I install Xen on Ubuntu 18.04? Seems that the repo is not installed by default. Any ideas?
[18:23] petershaw: can you check if you have universe enabled in /etc/apt/sources.list?
[18:23] petershaw: the 18.04 server installer had a bug where it would only enable the main repository
[18:24] Ah. yes, only main
[18:25] Thank you very much, ahasenack
[18:25] petershaw: welcome, sorry about the bug
[18:25] it's fixed in the latest release
[18:40] ahasenack: it still has that bug, actually.
[18:40] unless you mean the 18.04.1 ISO?
[18:41] I thought 18.04.1
[18:42] but *could* be mistaken
[18:42] 18.10 is fixed for sure, I tested that recently
[18:43] ahasenack: just tested with the copy that got synced down on my local mirror, it only enabled main
[18:43] so hopefully for 18.04.2 that'll be fixed?
[18:43] :(
[18:43] no reason why not, since 18.10 is fixed
[18:43] teward: if you are curious, you may get the fixed version even with the 18.04.1 iso
[18:44] just switch to a terminal and issue snap refresh, if networking is up already
[18:44] "snap refresh subiquity" probably
[18:44] I haven't tried that, but heard it should be possible
[18:52] ahasenack: maybe. I have a script that I run to update everything currently to get what I need in terms of repos.
[18:52] so heh
=== Miidlandz is now known as ChunkzZ
[22:30] sorry, I asked before but didn't find the answer. How do I do this: ONLY AFTER the server starts (after 2 minutes), execute a command stored in /usr/scripts/reloadApplication.sh? Of course the script has +x. Any advice?
[22:40] jak2000: it is not really ubuntu related, but how about a good old init.d script in combination with a sleep 120?
[22:42] petershaw thanks, and sorry, why is it not ubuntu related?
[22:42] does it apply to any distro?
[22:42] ...
that uses sysvinit scripts still, yes
[22:45] You could also use some method like an rc.local script which measures the result of uptime against the timestamp of the last dmesg entry
[22:47] (or just also waits the 120 seconds, etc)
[22:47] or cron @reboot sleep 120; /usr/local/bin/blah
[22:47] ..since rc.local is run after the system is otherwise fully booted
[22:49] * genii slides sarnold a fresh mug
[22:49] awwwwww yissss
[22:49] hehe
[22:50] interesting cron...
[22:50] i want to restart the server every Friday (it does)......
[22:50] and after boot up, run the script: /usr/scripts/reloadApplication.sh
[22:52] sarnold, then: crontab -e and write: 30 1 * * 5 /sbin/shutdown -r now
[22:52] and ?
[22:53] Does someone have a tutorial link for xen with netplan? I can't get the link working in my guest system. For hours now. I am getting mad.
[22:53] and @reboot sleep 120 ; /usr/scripts/reloadApplication.sh
[22:53] petershaw :(
[22:53] petershaw: not sure what you mean exactly, what kind of link?
[22:53] petershaw: most folks using ubuntu for virtualization either go with full openstack or libvirt.. xen's just not getting much love
[22:54] petershaw: where are you stuck? maybe someone's seen it..
[22:54] 30 1 * * 5 @reboot sleep 120 ; /usr/scripts/reloadApplication.sh <--- reboot and after 120 seconds run the script?
[22:54] sarnold: i have a xenbr0 and a vlan, but my guest does not get a connection during installation.
[22:55] is the guest the one you're trying to configure with netplan, or the host?
[22:55] jak2000: no, the @reboot takes the place of the time/date/dow/dom specification entirely
[22:56] sarnold: this is my netplan conf https://pastebin.com/a9WGsANp
[22:56] ok, the host.
[22:57] petershaw: I guess the guest is getting connected on xenbr0?
[22:57] should be.
[22:58] my guess is the subnet is wrong
[22:58] in your config for enp4s0f0 you use /19
[22:58] yep. that is correct. it is a /19 net.
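jak2000's two cron pieces from the discussion above, combined into one crontab fragment (assumed to be root's crontab; the log redirect is an addition so there is a way to tell afterwards whether the script actually ran):

```
# Reboot every Friday at 01:30:
30 1 * * 5  /sbin/shutdown -r now

# On every boot, wait 120 seconds, then run the reload script.
# @reboot replaces the five time/date fields entirely, as sarnold notes:
@reboot sleep 120; /usr/scripts/reloadApplication.sh >> /var/log/reloadApplication.log 2>&1
```

Checking the timestamp and contents of /var/log/reloadApplication.log after a reboot answers the "how do I know if my script ran?" question that comes up below.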
[22:58] in xenbr0 you use /24, there's possibly some mess there, where the dhcp server on that network can't reach the devices behind the bridge?
[22:59] those look to be on the same network -- enp4s0f0 and vlan 1 are both on "vlan 1"
[22:59] unless you do some magic with vlan tagging, that is
[22:59] ah... I do not understand bridges, i guess.
[23:00] What IP should the bridge have?
[23:00] you might also need to set ip forwarding, if the main network is supposed to give DHCP
[23:00] ip forwarding is enabled, also (network-script network-bridge) is uncommented
[23:00] petershaw: I don't know, it depends on your network setup, but it's one number off from the IP you set for enp4s0f0, except in /24 instead of /19
[23:01] so that /24 looks like it probably should be a /19?
[23:02] for testing purposes:
[23:02] petershaw: I don't know bridges either but it kind of looks like you've assigned an ip address to the interface that's attached to the bridge; I thought linux required the interface to not have an address, but give the bridge the address?
[23:02] @reboot sleep 120 ; /home/jak/ftp/c.sh
[23:02] 03 17 * * * /sbin/shutdown -r now
[23:02] is it ok?
[23:04] jak2000: I'm pretty sure that'll reboot your machine every day. is that what you want?
[23:04] yes
[23:05] 17:03
[23:05] restarted :)
[23:05] how do I know if my script ran: /usr/scripts/reloadApplication.sh
[23:05] ?
[23:05] see:
[23:05] jak@vmi103461:~$ date
[23:05] jue oct 4 17:05:47 MDT 2018
[23:05] and:
[23:06] jak@vmi103461:~$ uptime
[23:06] 17:05:49 up 1 min, 1 user, load average: 3.40, 1.55, 0.58
[23:06] sorry, how do I know if /home/jak/ftp/c.sh was executed?
[23:11] what does it *do*? :)
[23:11] i use glassfish... and every restart needs a restart of the domain (the Glassfish server)....
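Based on sarnold's and sdeziel's diagnosis of petershaw's bridge problem above, a hypothetical netplan sketch: the enslaved interface carries no address, the bridge owns the IP, and the prefix matches the real /19 network. All interface names follow the pastebin; the addresses are placeholders:

```yaml
network:
  version: 2
  ethernets:
    enp4s0f0: {}                 # no address on the bridged interface
  bridges:
    xenbr0:
      interfaces: [enp4s0f0]
      addresses: [10.0.10.5/19]  # same /19 as the rest of the network
      gateway4: 10.0.0.1
      nameservers:
        addresses: [10.0.0.1]
```

This only addresses the two issues raised in the channel (address on the wrong interface, /24 vs /19 mismatch); the VLAN side of the original config would still need sorting out separately.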