[00:06] ping sbeattie - PM?
[00:39] unping
=== a1berto_ is now known as a1berto
=== joakim_ is now known as joakim
[12:05] good morning
[12:11] kstenerud: try the server-next tag
[12:11] kstenerud: https://bugs.launchpad.net/ubuntu/+bugs?field.tag=server-next
[12:11] kstenerud: and bite-size
[12:11] kstenerud: https://bugs.launchpad.net/ubuntu/+bugs?field.tag=bitesize
[12:12] not all are server, though, that query needs to be refined
[12:12] ok
[12:12] use advanced search
[12:17] kstenerud: https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1798786 should not be fix committed
[12:17] Launchpad bug 1798786 in fetchmail (Ubuntu) "can't retrieve gmail emails. fetchmail: OU=No SNI provided; please fix your client./CN=invalid2.invalid" [High,Fix committed]
[12:18] kstenerud: I'm going to add a cosmic task, and we will leave the main task for tracking the progress in the development release of ubuntu once it opens
[12:18] kstenerud: are other releases also affected?
[12:19] looks like bionic is, that's where the bug was reported
[12:20] I think all previous versions will be affected
[12:20] it wasn't filling out a name field, but the server side wasn't checking either before
[12:21] can you check please and let me know? Then I can add more tasks if needed
[12:21] ok
[12:26] ahasenack: trusty and xenial work fine
[12:33] kstenerud: good, thanks for checking
[13:11] ahasenack: I'm still not clear on which fields are used when in the dep-3 header. They mostly seem to be duplicates of each other
[13:12] kstenerud: yeah, there can be some confusion, and on top of that you will have reviewers with different opinions
[13:12] kstenerud: I can see the intention of applied-upstream, but if the patch origin is upstream already, then it's redundant in my opinion.
[13:13] kstenerud: if the patch is *not* from upstream, but applied upstream, then why wouldn't the origin be upstream already? Maybe if they were different authors
[13:14] What is the Origin field for? I see options for upstream, backport, vendor, other, but no description of what any of those mean
[13:15] upstream is the software author
[13:15] like samba.org for samba packages, openldap.org for openldap, and so on
[13:15] backport is if you had to change the patch to fit the particular ubuntu package you are patching
[13:16] and vendor is if it came from redhat, debian, suse, intel, etc. Not upstream, but other distributors of the software. I rarely use that one
[13:16] because eventually it gets landed upstream
[13:17] note also that the presence of one field may make another one optional
[13:17] So origin will be one of those 4 words, a comma, and then a url?
[13:17] like author vs origin
[13:17] yes, word, comma, url
[13:18] What does it mean when a patch is forwarded?
[13:18] if you created the fix, for example, whether you forwarded it upstream or not
[13:18] i.e., whether you let upstream know about the fix
[13:18] sometimes a fix only makes sense for ubuntu, for example, in which case Forwarded would be "not-needed" or "no"
[13:19] so "forwarded" and "bug" serve the same purpose?
[13:20] there are many ways to forward a patch
[13:20] sometimes upstream doesn't have a bugtracker, so you forward by email
[13:20] like via a mailing list
[13:22] So if you needed to send via email what do you put in?
[13:22] If you got the patch from upstream, then I think just "Origin: upstream, ..." is sufficient.
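For reference, a minimal DEP-3 header sketch tying together the fields discussed above; the description, author, URLs, and bug numbers are all placeholders rather than a real patch:

    Description: send SNI when connecting over TLS
    Author: Jane Doe <jane@example.com>
    Origin: upstream, https://example.org/project/commit/abc123
    Bug: https://example.org/project/issues/42
    Bug-Ubuntu: https://bugs.launchpad.net/ubuntu/+source/example/+bug/1234567
    Forwarded: not-needed
    Last-Update: 2018-10-24

Since Origin is upstream here, Forwarded is not-needed: the fix already lives upstream, so there is nothing to forward back.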
[13:23] Forwarded is useful for when you wrote the patch and sent it somewhere but it hasn't been upstreamed (or if a contributor wrote the patch and the same applies)
[13:23] kstenerud: I had to do that once, and it wasn't clear either since it's so rare. I think I just put the words "Emailed Foo Bar "
[13:23] (plus Bug, Bug-*, Last-Update, etc)
[13:24] rbasak: So Forwarded has allowed values URL or no
[13:24] or not-needed
[13:24] Forwarded:
[13:24] it helps to have a saved-up dep3 template
[13:24] yes I'm looking at the template
[13:25] so it says URL in there :)
[13:25] ah, you were making a statement, not a question
[13:25] n/m
[13:25] Yes, but if you wrote a patch that hasn't been upstreamed, there wouldn't be a URL to put in...
[13:25] I think you can put whatever you like in Forwarded, except that anything apart from "no" and "not-needed" means "yes", so there are only two ways of saying no.
[13:26] kstenerud: the url could be to the mailing list archive showing you emailed the list with the patch
[13:26] "The field is really required only if the patch is vendor specific..." -- there you are :)
[13:26] Otherwise you'd have an Origin header.
[13:33] So for an Ubuntu maintainer, does this make sense? https://pastebin.ubuntu.com/p/Wjx2y34Cst/
[13:33] I want to update my document so I can remember
[13:37] kstenerud: bug- can also be Bug-Debian, Bug-Fedora, etc
[13:37] Would we put Bug-Debian? Wouldn't that be for a debian maintainer?
[13:38] Bug-Debian is super common
[13:38] ahasenack: ?
[13:38] the point of dep-3 headers is to record patch history
[13:38] smoser: yes?
[13:38] i missed a ping way up
[13:38] then it's gone :)
[13:39] oh so that means that there's a debian bug report?
[13:39] what was it?
[13:39] smoser: maybe about the git-ubuntu build{,-source} breakage? We had to revert your branch
[13:39] kstenerud: yes
[13:40] i think that was it, but what was wrong?
[13:40] kstenerud: if you use dep3changelog to construct the d/changelog message from a patch, it will also record in the d/changelog message the debian "Closes: #nnnn" string
[13:40] kstenerud: sometimes debian grabs our fixes, and that string tells them that this particular ubuntu upload is also fixing a debian bug
[13:41] smoser: #1799300
[13:42] kstenerud: that doesn't mean you have to go hunting and searching vendors' bug reports, but sometimes that is recorded in the launchpad bug already
[13:42] ahasenack: dep3changelog is similar to git-ubuntu.reconstruct-changelog?
[13:42] kstenerud: yes, but it also checks the syntax of the dep3 header for you, like if you missed a mandatory one
[13:43] or just have invalid syntax
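A sketch of the dep3changelog flow mentioned above, using a hypothetical patch filename; it parses the patch's DEP-3 header, flags syntax problems, and drafts a d/changelog entry, turning a Bug-Debian field into a "Closes: #nnnn" string:

    # dep3changelog ships in the devscripts package
    sudo apt install devscripts
    # parse the DEP-3 header and draft a debian/changelog entry from it
    dep3changelog debian/patches/fix-sni.patch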
[13:48] hi all, a preseed issue: I am loading a preseed config via https which causes certificate verification errors as the busybox installer environment seems to be missing any ca certs. I am aware of the debian-installer/allow_unauthenticated_ssl=true option, but this didn't seem to work as a boot parameter
[13:50] sam_w: how are you booting the installer?
[13:50] are you aware of any way to either d-i preseed/include an http file from a preseed file included in the boot image, or manually add ca certificates to the install environment?
[13:50] sam_w: I ask because the usual ways of doing that aren't secure, so https brings little benefit.
[13:50] rbasak: usb flash drive
[13:51] Then you have a reasonable question :)
[13:51] I understand the question, but I don't know the answer, sorry.
[13:51] Are you sure the boot parameter syntax is correct?
[13:51] I was under the impression that any preseed option could be a boot parameter. If not, perhaps that one should be added to the list.
[13:53] rbasak: fairly sure. That was what I was wondering, whether it was any option or there was some mapping or explicit passthrough
[13:56] from grub.cfg: `linux /install/vmlinuz noprompt auto=true priority=critical console-setup/ask_detect=false netcfg/choose_interface=auto locale=en_GB debian-installer/allow_unauthenticated_ssl=true url= quiet ---`
[14:06] sam_w: seems reasonable to me. The next thing to do is to dive into the code I suppose.
[14:06] sam_w: I'd check first that the key/value is correct, but you obviously can't do that using a regular preseed!
[14:08] the only other thing would be: if it was possible to have a preseed file on the iso with that option, and then include one via https
[14:09] but the impression I got from the docs was that preseed/include only works for the same scheme that the file it is in comes from
[14:29] ahasenack: I was unable to reproduce the fetchmail bug on bionic
[14:31] kstenerud: but that's where it was reported
[14:32] kstenerud: and bionic and cosmic have the exact same versions
[14:32] fetchmail | 6.3.26-3build1 | bionic | source, amd64, arm64, armhf, i386, ppc64el, s390x
[14:32] fetchmail | 6.3.26-3build1 | cosmic | source, amd64, arm64, armhf, i386, ppc64el, s390x
[14:32] you must be using the fixed package by mistake
[14:32] I'll do a fresh install and see
[14:36] Nope... Won't trigger on bionic, but triggers on cosmic
[14:37] is it up-to-date?
[14:37] apt dist-upgrade wise
[14:37] yup
[14:37] and both report the same version of fetchmail
[14:38] what remains is the ssl version
[14:38] I basically lxc launch ubuntu:cosmic or ubuntu:bionic and then https://pastebin.ubuntu.com/p/G9xHNGtQ9c/
[14:38] kstenerud: oh, wait, the reporter was using 18.10, not 18.04
[14:39] InstallationMedia: Ubuntu 18.04 LTS <-- he originally installed 18.04, but is now on 18.10
[14:39] ok
[14:39] still weird though
[14:39] maybe bionic doesn't support the tls version that this triggers on?
[14:39] what was it, tls 1.2?
[14:39] maybe. Google only does this weird stuff if you ask for TLS 1.3
[14:40] can you check the ssl or gnutls library fetchmail is linked to in both cosmic and bionic? use ldd
[14:40] what args do I use?
[14:40] ldd
[14:44] they're both the same
[14:44] your test forces the tls version?
[14:47] --sslproto TLS1.2+
[14:47] that's as high as it goes in both versions
[14:48] bionic succeeds, cosmic fails
[14:48] fetchmail -d0 -vk --sslcertck --sslproto TLS1.2+ pop.gmail.com
[14:51] i cant ping rbasak
[14:51] so what do you want me to do. fix is this:
[14:51] http://paste.ubuntu.com/p/Vf2RfST58Q/
[14:52] That's fine if it works.
[14:52] so just rebase my old branch?
[14:53] Yeah, on origin/master please. Then we can do another CI run and ahasenack can test his use cases from it too, and if all happy we can merge.
[14:53] k
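The requested rebase, as a minimal sketch; "my-fork" and "fix-branch" are hypothetical names for the contributor's remote and branch:

    git fetch origin
    git checkout fix-branch
    git rebase origin/master
    # history was rewritten, so the push has to be forced
    git push --force-with-lease my-fork fix-branch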
[15:04] kstenerud: there is an openssl difference between bionic and cosmic
[15:04] kstenerud: bionic has openssl 1.1.0, cosmic has openssl 1.1.1
[15:05] ubuntu@bionic-fetchmail:~$ dpkg -S /usr/lib/x86_64-linux-gnu/libssl.so.1.1
[15:05] libssl1.1:amd64: /usr/lib/x86_64-linux-gnu/libssl.so.1.1
[15:05] ubuntu@bionic-fetchmail:~$ dpkg-query -W libssl1.1
[15:05] libssl1.1:amd64 1.1.0g-2ubuntu4.1
[15:05] I think that openssl 1.1.1 on cosmic has support for tls 1.3
[15:06] it's not just that, I think some default might have changed
[15:06] I can reproduce the error on cosmic with just this: openssl s_client -connect pop.gmail.com:995 -noservername
[15:07] with -tls1_3 it doesn't finish the handshake
[15:07] According to upstream reports it's due to Google's bizarre behavior of passing back a self-signed cert in some circumstances
[15:07] such as the SNI missing in a 1.3 connection
[15:08] it downgrades to 1.2+, but also sends back a completely different cert
[15:08] another thing I'm thinking is that openssl 1.1.0 is setting a default sni, if none is given
[15:08] there is no -noservername in openssl 1.1.0's s_client command
[15:09] fetchmail's --sslproto TLS1.2+ means 1.2 *or* newer, not > 1.2
[15:09] doesn't mean it's negotiating 1.3
[15:10] and the output of openssl's s_client -tls1_3 suggests that 1.3 is not supported
[15:10] yeah, not sure what it's actually doing under the hood. That's just the chatter from the upstream bug reports
[15:10] that being said, using --sslproto TLS1.2 (which asks for 1.2 exactly) works
[15:10] https://code.launchpad.net/~smoser/usd-importer/+git/usd-importer/+merge/357826
[15:10] so ok, let's leave bionic out of it
[15:10] ahasenack, rbasak
[15:11] Thanks!
[15:11] you can test just by adding 'usd-importer/bin' to your PATH and running 'git-ubuntu build'
[15:11] ahasenack: once CI has passed, would you mind grabbing the built snap from CI and testing it please?
[15:11] kstenerud: set the bionic task to invalid and add a comment about these tests you did, saying you couldn't reproduce it there or something like that, even if the code is affected
[15:11] Or that.
[15:13] yes
[15:14] tcpdump would tell you if SNI is used
[15:15] kstenerud: it might boil down to just the fact that openssl 1.1.1 is the one implementing tls 1.3, and 1.1.0 isn't
[15:15] hence, bionic not affected
[16:02] is there a way to set up my ubuntu server to be a middleman for ubuntu updates, such that other machines I have query that server and if the update is not already cached, it retrieves it, otherwise it uses the cached version?
[16:04] Kabriel, you can set up a transparent squid proxy, and install a client on the machines to query local net providers over avahi first....
[16:05] Kabriel, https://packages.ubuntu.com/search?suite=default&section=all&arch=any&keywords=squid-deb-proxy&searchon=names
[16:05] squid-deb-proxy & squid-deb-proxy-client
[16:15] Thanks for the hint. This seems like a good tutorial: https://fabianlee.org/2018/02/08/ubuntu-a-centralized-apt-package-cache-using-squid-deb-proxy/
[16:16] It led me to apt-cacher-ng, which also looks interesting.
[16:25] Hiya folks! I'm on my first attempt to install Ubuntu Server on a refurb. T410
[16:26] The goal is to have a prototype to offer to local clients: Office server, ERP, File server, Ecommerce+WooCommerce, integrated with the ERP on the LAN.
[16:31] Kabriel, yeah, apt-cacher-ng is the other one.
[16:32] Kabriel, there is also a cloud-mirror proxy, as a juju charm, which is typically deployed in cloud regions. But it's slightly heavier to use.
[16:32] Kabriel, that one rsyncs dists/, and caches or proxies for the pool/
[16:33] Kabriel, or you can run a local ubuntu mirror using ubumirror scripts.... and just point all your clients to your mirror.
[16:33] Kabriel, there are many options =)
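A minimal sketch of the squid-deb-proxy option discussed above; proxy.lan is a placeholder hostname, and the port assumes squid-deb-proxy's default of 8000:

    # on the caching server
    sudo apt install squid-deb-proxy
    # on each client; discovers the proxy automatically over avahi
    sudo apt install squid-deb-proxy-client
    # or point a client at the proxy manually instead
    echo 'Acquire::http::Proxy "http://proxy.lan:8000";' | sudo tee /etc/apt/apt.conf.d/01proxy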
[16:38] I have a small setup -- 10 machines all running 16.04 LTS (1 server, rest desktops). Cloud system doesn't sound right, or the mirror. I like the caching idea.
[16:38] Any experience with squid vs cacher-ng?
[16:52] Kabriel: I've been a happy user of apt-cacher-ng for many years
[16:58] wow weird... sudo in cosmic always respects -p '', even if I copy the sudo from bionic (which doesn't respect -p '')
[16:58] so there's some environmental issue maybe...
[16:59] kstenerud: could be PAM-related, and default config related
[16:59] kstenerud: the sudo manpage mentions an option about prompt overriding in /etc/sudoers
[17:00] yeah, already looked in that, and sudoers.d. didn't see anything different
[17:06] rbasak: downloading that snap from jenkins:
[17:06] git-ubuntu_0+git.30720a7_amd64.snap 2%[++ ] 2.53M 63.6KB/s eta 28m 3s
[17:06] :(
[17:06] wow...
[17:09] zoom zoom
[17:10] hmm ok, timebox up for sudo. The only ways it's supposed to override the prompt are if passprompt_override is set in sudoers (it isn't), or the SUDO_PROMPT env var is set (it isn't). It's not a problem with the binary, because taking the bionic binary and running it on a cosmic machine works perfectly :/
[17:11] +1
[17:26] hi all
[17:26] how’s people’s day going
[17:28] it's good here
[17:28] thanks
[17:29] so, i’m trying to expand my main partition, for some reason the ubuntu installer created a 4G partition
[17:29] and it keeps getting filled
[17:30] https://pastebin.com/keXBG0b1
[17:30] there’s a bunch of available space on that sdi drive
[17:30] did you use lvm?
[17:30] yes
[17:30] yeah, known bug :/
[17:31] yeah, i did have some problems when installing, had to test a couple of installer isos
[17:31] https://bugs.launchpad.net/subiquity/+bug/1785321
[17:31] Launchpad bug 1785321 in subiquity "LVM Entire Disk option does not use entire disk" [Undecided,New]
[17:32] yep, das it
[17:32] so, i was wondering if i can do the expanding of the volume online
[17:32] with growpart and resize2fs
[17:33] See comment 2 there in that bug
[17:33] lvresize has a --resizefs option
[17:34] Saves a call to resize2fs, though that's more useful when shrinking rather than expanding
[17:34] rbasak: thanks, i’ll read over the bug page
[17:34] You can increase ext4 size online, so it should be straightforward. Note that shrinking can only be done offline, which is more of a pain for a root filesystem.
[17:35] not using the space is a lot better bug than debian's default of "using everything, the whole VG, for the last created LV and filesystem, leaving no space at all for snapshots or resizing"
[17:37] rbasak: all done: /dev/mapper/ubuntu--vg-ubuntu--lv 108G 3.3G 100G 4% /
[17:37] thanks to everyone :D
[17:39] looking at that bug report, this is in fact exactly how I'd want the "use entire disk for LVM" to work in Debian ;-)
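The expansion just performed, as a sketch; it assumes the default subiquity names visible in the df output above (VG ubuntu-vg, LV ubuntu-lv), and that the PV already spans the whole disk, which is the case in this bug:

    # grow the LV into all remaining free space and resize ext4 in one step;
    # ext4 can grow online, so / stays mounted throughout
    sudo lvextend -l +100%FREE --resizefs /dev/ubuntu-vg/ubuntu-lv
    # equivalent two-step form
    sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
    sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv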
[17:40] rawco: can you still recall which iso you used for the install?
[17:40] lotus|NUC: sorry, i don’t really remember what iso i used
[17:41] i think i had to use the 18.04 iso, because the 18.04.1 iso was not working with my hardware setup
[17:41] it was a couple of months ago, sorry :(
[17:41] i thought it was me and not the iso lol
[17:41] so i just ignored it
[17:41] rawco: yeah, might be relevant info for the channel here
[17:42] i will lurk more here, since ya’ll are awesome
[17:43] i have a gf already :p
[17:46] ahasenack: I can grab historical git-ubuntu snap binaries for you if it would help
[17:46] rbasak: do you still have 439 installed? Should be trivial to reproduce the bug. kstenerud, or do you have it perhaps?
[17:47] I'm on 440
[17:47] ahasenack, the mind boggles, why is this a bug! This is precisely how "use whole disk for LVM" ought to work -- the PV indeed uses the whole disk (apart from the /boot partition)
[17:47] I might be able to revert.
=== rawco_ is now known as rawco
[17:48] jelly: it was unexpected, or at least not clear enough that this would happen. Some people were surprised to get "disk full" errors after installing a few more packages
[17:48] at least expanding is easier than shrinking
[17:51] it's a lot better than what d-i does. expanding is a fully online process. shrinking of xfs is impossible, shrinking of ext4 is offline (and unoptimized, up to 4 times slower than copying, reformatting and copying back the data if there's more than 25-50% space used)
[17:51] ahasenack: http://people.canonical.com/~rbasak/VAGSRAriUyDDlqsLunShJTe7503Uw4GF_439.snap.zsync and http://people.canonical.com/~rbasak/VAGSRAriUyDDlqsLunShJTe7503Uw4GF_439.snap
[17:52] no functional change seems required, just document things and maybe put up a notification
[18:07] what do ya’ll use for monitoring your servers?
[18:07] ELK stack?
[18:18] kstenerud: remember to create a card for fetchmail, if you haven't already (I didn't find it after a quick look)
[18:23] depends on how many servers, and whether you have a raspberry pi3 or a 16GB machine for monitoring :)
[18:24] elk is heavy
[18:30] i have a nice hp proliant server with sas drives and bells+whistles
[18:30] all the gigs
[18:32] grafana is pretty for the graphs
[18:32] nagios (or its replacement, forgot the name) is good for alerts
[18:33] ahasenack: icinga
[18:33] that one
[18:33] (icinga2 i think, technically)
[18:38] there's too many choices
[18:38] if there were one that sucked but it was the only one available, it'd still be the obvious choice
[18:38] but there's dozens :)
[19:10] well, what do you use sarnold
[19:12] I use munin on a small server, but I'm not very happy with it. I think that machine can take more. It only has 3GB of ram and runs zfs, and that's stretching it already according to the docs, but real world usage shows it has some memory free
[19:12] Mem: 3.2Gi 2.1Gi 166Mi 2.0Mi 913Mi 890Mi
[19:12] rawco: I'm currently suffering from analysis paralysis -- where I use nothing because I can't decide what to do :(
[19:12] sarnold: that’s my current mood lmao
[19:13] we’re already paying for connectwise, but it’s trash for monitoring
[19:15] sarnold: Landscape. *shot*
[19:15] (just kidding)
[19:15] sarnold: analysis paralysis is bad. :P
[19:15] teward: tell me about it..
[19:22] dehumanizing, i would say
[19:22] i wanna surveil these goddam servers
[19:22] 24/7
[19:24] Nagios3 serves us well but we don't have a huge park (~200 machines with 2k service checks)
[19:25] the webUI makes your eyes bleed so we use check-mk-multisite instead
[19:26] munin is for collecting performance data (no alerting capabilities that I'm aware of). For perf data and some alerting, netdata is pretty nice and comes with a nice webUI
[19:26] sdeziel: that makes sense, monitoring != performance data
[19:26] i wonder if there’s anything out there that does everything + looks nice
[19:27] rawco: well, with nagios3 we also collect perf_data for quick graphs
[19:29] splunk only collects logs/files and graphs them, right?
[19:29] no actual “monitoring"
[19:36] rawco: zabbix, elk, grafana
[19:43] thanks shubjero
[19:56] actually munin does have some rudimentary alert facilities, but they're not configured by default (or rather, they're configured to report via nagios by default on ubuntu - but they can be configured to report directly via e-mail)
[19:56] here we go: https://munin.readthedocs.io/en/latest/tutorial/alert.html
[19:57] good to know, thanks waveform
[19:59] rawco: zabbix for active monitoring of hardware and os metrics. ELK for massive log aggregation. Grafana helps fill some gaps with zabbix for us
[19:59] rawco: so on any server we monitor we would have a zabbix-agent and a filebeat client running
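A sketch of the per-server agent half of that setup; monitor.example.com is a placeholder for the zabbix server, and the sed lines assume the stock config still carries its default Server/ServerActive entries (filebeat comes from Elastic's own apt repository, so its setup is left out here):

    # the agent is in the ubuntu archive
    sudo apt install zabbix-agent
    # point passive and active checks at the monitoring server
    sudo sed -i 's/^Server=.*/Server=monitor.example.com/' /etc/zabbix/zabbix_agentd.conf
    sudo sed -i 's/^ServerActive=.*/ServerActive=monitor.example.com/' /etc/zabbix/zabbix_agentd.conf
    sudo systemctl enable --now zabbix-agent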