[00:03] sarnold: thx
=== nolan__ is now known as nolan_
[01:58] sarnold: there's also .6, still on releases: http://releases.ubuntu.com/releases/16.04.6/
[01:59] (ridik's gone)
[01:59] tomreyn: ugh
[01:59] tomreyn: we have too many download sites
[01:59] :)
[01:59] tomreyn: I plain couldn't recall the name releases.ubuntu.com. :(
[01:59] I've spent half an hour over the last two weeks trying to find "all the download sources" because they all have different subsets of installers and images and so on
[02:00] there are a lot of subdomains, and not just the servers' hostnames.
[02:00] i tend to lose them, too.
[02:03] i think generally everything that's still at least unofficially ESM supported is still on releases.u.c, and only the *really* old stuff moves to old-releases.u.c now. and then there is cdimages.u.c and ports.u.c and probably something else i forgot
[02:04] actually ports.u.c is a different category, like archive.u.c but for non-amd64 (and non-i686). however, some installers are only on the archive mirrors. :)
[02:05] cdimages was the easy one to remember :)
[02:06] I also wish we had a single place to rsync all of ports.u.c and archive.u.c -- there's no good way to have a consistent mirror of the contents of both of those, sharing data where possible :(
[02:06] since I've got ppc64el, aarch64, and amd64 in the house, it'd be super to have a single mirror with all three
[02:07] couldn't you just have /ports/ next to /archives/ ?
[02:08] they'd still be separate, but on the same hostname and next to one another at least
[02:09] oh, you're probably thinking in terms of automation where you could just set a $arch variable, i see.
[02:16] tomreyn: well, that and it'd be super-nice to not duplicate all the arch-indep files
[02:16] tomreyn: zfs dedup could do it but at great cost to memory
[02:16] tomreyn: .. and it'd still involve downloading two copies of all arch-indep packages
[02:24] oh right, the arch-independent packages, there are some. i don't know how much disk space and how many FSOs they sum up to as opposed to arch-dependent ones, but i guess it certainly slows down the rsync, having to copy any of those twice.
[02:27] i'm not sure how good it is, but there are deduplicating file systems other than zfs (and not all of them consume much ram, i think). btrfs can do it (not sure how stable that is, though) and redhat open sourced VDO: https://www.marksei.com/vdo-linux-deduplication/
[02:33] I've got at least 380 gigs of *_all.deb packages on my mirror right now
[02:33] funny thing, locate '*all.deb' | xargs ls -l ... emitted *tons* of file not found errors; I wonder what's busted now :)
[02:34] need to run updatedb ;)
[02:34] and come back tomorrow :)
[02:34] -rw-r----- 1 root mlocate 4985480349 Jan 29 07:17 /var/lib/mlocate/mlocate.db
[02:35] very curious
[02:35] 4.64 GB, not bad
[02:36] apparently VDO is also rather ram hungry
[02:36] http://paste.ubuntu.com/p/fNFmSVZN2S/
[02:37] hmm, outdated kernel packages, but the others, i wouldn't know
[02:40] http://archive.ubuntu.com/ubuntu/pool/main/f/freeradius/freeradius-common_3.0.20+dfsg-3_all.deb is a 404, but freeradius-common_3.0.20+dfsg-3build1_all.deb exists.
[02:42] 2020-01-28 14:18
[02:45] i feel like i'm a happy person who does not need to understand those details and can just head to bed now. :)
[02:47] tomreyn: that sounds like an excellent plan!
[02:47] sarnold: even greater when you make it happen!
[02:47] tomreyn: I wish you great success in your plan :)
[02:48] as I should also find some dinner and exercise
[02:48] and good luck with yours!
[02:48] :D
[02:56] https://www.youtube.com/watch?v=7CGr5LEAfRY https://github.com/dm-vdo/kvdo/blame/master/README.md#L80 (out-of-tree modules so far). and really gone. :)
[02:59] haha nice find :) thanks
[03:00] 1.65T completed (3240MB/s) estimated time remaining: 315079hr 55min 15sec
[03:00] sheeesh. Maybe I don't want to know stats on this after all :)
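A sketch of the hardlink idea the *_all.deb discussion above hints at, assuming two locally mirrored trees on a single filesystem (the paths are hypothetical, not sarnold's setup); rsync would still fetch each copy, but identical arch-indep packages end up stored only once:

    #!/bin/sh
    # collapse identical *_all.deb files shared between two mirror trees
    # into hardlinks; hardlinks cannot cross filesystems, so both trees
    # must live on the same one
    ARCHIVE=/srv/mirror/archive.ubuntu.com
    PORTS=/srv/mirror/ports.ubuntu.com

    find "$ARCHIVE" "$PORTS" -name '*_all.deb' -type f -exec sha256sum {} + \
      | sort | while read -r hash file; do
          if [ "$hash" = "$prev_hash" ]; then
              ln -f "$prev_file" "$file"   # duplicate content: link instead
          else
              prev_hash=$hash prev_file=$file
          fi
      done

Tools like hardlink(1) or jdupes do the same job more robustly.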
[07:11] Good morning
[12:34] Hi, i'm trying to preseed bionic with a floppy containing the d-i selections. Seems like netboot 18.04 has the floppy driver but doesn't automatically mount fd0. Is there any way to ask for that from boot params? Thx
[12:35] alternatively, a solution enabling running an arbitrary command (mount /dev/fd0 /media) from boot params would be accepted :)
[12:51] cpaelzer: are you familiar with setting up nvdimm devices in qemu/libvirt? I followed rafaeldtinoco's template, but can't get some commands to work, like enable-namespaces, or create-namespaces, I'm wondering what I'm missing. strace didn't show anything obvious
[12:51] https://pastebin.ubuntu.com/p/HMtXy2yqDT/ my xml
[12:51] commands: https://pastebin.ubuntu.com/p/hjNCFBjykH/
[12:52] maybe I'm missing /dev/pmem* devices
[12:55] Hi, which daily ubuntu-server 20.04 image is usable for installation? i already tried some, but didn't have luck yet
[12:57] I want to reinstall, because the SSD used as my ZFS log device crashed. After that it won't boot. It was 18.04 before. Upgrading to 20.04 didn't resolve `apt & dpkg` dependencies. Now i'm trying to install a fresh 20.04.
[12:57] Don't need a stable version. Just want to try 20.04 now
[12:57] what failed?
[12:59] My raid controller said something about a foreign disk. After cleaning that in the raid controller, uefi said something like "ubuntu unavailable" ...
[12:59] It's a no-name SSD, which wasn't supported by my server firmware.
[13:00] ahasenack: You mean, with the installer?
[13:00] yes
[13:01] Oh, well. One time the xz decompression didn't work. Another time the /cow couldn't get mounted. Lastly a squashfs error. Not sure if it's the usb 3.0 stick in a usb 2.0 port...
[13:01] Maybe i'll try a usb 2.0 stick, to be sure
[13:03] bcx: maybe you want something like https://www.debian.org/releases/stable/amd64/apbs02.en.html#preseed-auto rather
[13:03] This is the version which i'm currently trying: http://cdimage.ubuntu.com/ubuntu-server/daily-live/20200129/
[13:04] sdhd-sascha: that's odd indeed, did you verify the hash of the downloaded image?
[13:04] yes, gpg and sha256sum. But only on the filesystem. Didn't know if i could check this after copying to usb.
[13:06] ahasenack: well... Seems to be the usb port on my current host, because while copying, the stick didn't blink ... Hmm, weird. Wait, i'll try another machine or port...
[13:06] I already have trouble with this usb port. Didn't find the cause yet
[13:09] is this server hardware? got OOB access / a networked KVM system with 'virtual media' / cdrom?
[13:11] alternatively https://unix.stackexchange.com/questions/75483/how-to-check-if-the-iso-was-written-to-my-usb-stick-without-errors
[13:12] tomreyn: the machine with the strange usb port is a small `zotac`. (Sometimes the usb port works, but only once. If i do a shutdown and remove the ac power, Linux 5.3 boots and the port works again... )
[13:12] The other server is a dell
[13:12] tomreyn: thank you :-)
[13:13] Now i use another machine for the usb copy... I will report if the installer works now.
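A minimal sketch of the check behind the stackexchange link above: compare the ISO byte-for-byte against what the stick reads back. The device name and ISO filename are placeholders.

    iso=ubuntu-20.04-live-server-amd64.iso       # hypothetical filename
    dev=/dev/sdX                                 # the whole stick, not a partition
    sudo sync                                    # flush any pending writes
    echo 3 | sudo tee /proc/sys/vm/drop_caches   # don't just re-read the page cache
    sudo cmp -n "$(stat -c %s "$iso")" "$iso" "$dev" && echo "stick matches the ISO"

cmp with -n stops comparing at the ISO's size, since the stick itself is larger than the image.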
[13:13] * sdhd-sascha usb blinks now while copying. as expected ;-)
[13:13] :)
[13:14] sdhd-sascha: if the dell has an idrac it may have virtual media support, so you could mount the iso to the server 'directly' over the network, assuming that is stable.
[13:14] tomreyn: that's true. But there's maybe another machine here, without an idrac
[13:14] but i guess the usb method works, too, if data is actually written to it. ;)
[13:15] :-) yes... it's still copying... Seems to work better now ;-)
[13:15] But crazy that the installer booted at all.
[13:20] tomreyn: Normally i would use maas for installation. But that's the other machine, which is making trouble today ;-)
[13:21] things could be so easy if things would just work. :)
[13:21] :-)
[13:22] i have an error here: `ACPI: SPCR: unexpected SPCR Access Width. Defaulting to byte size`. Can i ignore this on boot?
[13:27] ahasenack: ah, here you actually asked :-)
[13:27] ahasenack: https://libvirt.org/formatdomain.html#elementsMemory
[13:27] sorry
[13:27] too many channels
[13:28] ahasenack: if you use a qemu built with pmem, then you should be able to set <pmem/>
[13:28] ahasenack: inside the <source> element of the nvdimms
[13:28] Is there a way to configure keyboard repeat in the python terminal installer?
[13:28] cpaelzer: I'm wondering if I can get away with not doing that rebuild by rebooting with the memmap trick in the kernel cmdline
[13:28] It hangs sometimes
[13:29] ahasenack: depends on what you actually want to test atm
[13:29] I seem to have gotten the nvdimm devices, backed by files on the host
[13:29] but kilobyte tells me that "persistence_domain":"unknown" says it's not recognized as pmem
[13:29] so I need this other layer
[13:30] ok, the qemu part is about AFTER you have whatever you're missing, to then use things from your guest
[13:30] but I don't know the trick tinoco used to get it working in the tests
[13:31] I also thought the kernel commandline to fake some would be the main part
[13:31] seems there might be more
[13:31] * cpaelzer is re-reading the inbox
[13:31] ok, this is my cmdline: https://pastebin.ubuntu.com/p/X5V7vz9BhF/
[13:31] which is fine for faked nvdimms
[13:31] tomreyn: ahasenack: thank you. It works now. (The keyboard repeat was also normal now ;-) weird)
[13:31] at least I'd think so
[13:32] sdhd-sascha: nice
[13:32] ahasenack: the pmem in the domain xml will add pmem=on to the commandline
[13:32] and a non-rebuilt qemu will tell you "I can't do that"
[13:33] sdhd-sascha: you're welcome. SPCR is the serial port console redirection ACPI table. so unless you're installing through a serial console you can probably ignore it during installation.
[13:33] let me ask him on telegram
[13:33] ahasenack: can I log into your system to work with you there?
[13:33] I have found old notes between me and tinoco
[13:34] hm, not trivially
[13:34] tomreyn: thanks :-) At the moment no serial console
[13:34] I could move the vm to diglett
[13:34] migration and such
[13:34] does that work nowadays? :)
[13:34] like live migrating?
[13:34] it would work if you had shared storage
[13:35] doesn't have to be live
[13:35] shut down, copy over, bring up
[13:35] a big copy, yeah
[13:35] that should be ok
[13:35] xml + img
[13:35] cpaelzer: I got a hold of tinoco
[13:35] he thinks he used a ppa you set up for the mir
[13:35] so it must have been linked with pmem
[13:36] that might explain
[13:36] I don't remember a PPA for it TBH
[13:36] if the mir succeeds, we would have that in focal, and require the security team to use focal when testing this
[13:36] yes
[13:36] or I could try the memmap cmdline trick
[13:36] yes
[13:37] the only PPA in that regard which I know is https://launchpad.net/~rafaeldtinoco/+archive/ubuntu/lp1853506
[13:42] cpaelzer: he may have been using debian as the host, hence, debian's qemu
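For reference, a sketch of the memmap trick ahasenack mentions, assuming it is applied inside the guest (size and offset are illustrative): it carves a fake pmem region out of ordinary RAM via the kernel command line.

    # reserve 2 GiB of "pmem" at the 4 GiB physical address mark
    sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&memmap=2G!4G /' /etc/default/grub
    sudo update-grub && sudo reboot
    # after the reboot, /dev/pmem0 should exist:
    ndctl list    # from the ndctl package

Regions faked this way have no real power-fail persistence, so ndctl may still report "persistence_domain":"unknown" for them.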
[14:04] cpaelzer: n00b libvirt migration user here
[14:04] cpaelzer: the "Migrate" option is grayed out in the context menu of the vm I want to migrate, in virt-manager
[14:04] cpaelzer: but it's available in the running vm
[14:04] live-migrate is the *only* migration option?
[14:06] * ahasenack resorts to virsh
[14:11] cpaelzer: any tip here? virsh -c is fine with that url, but not virsh migrate: https://pastebin.ubuntu.com/p/MYY5W4Sbrp/
[14:17] ahasenack: reading ..
[14:19] with all those firewalls you never know
[14:19] ahasenack: you can push it to a real file
[14:19] virsh save bionic-nvdimm bionic-nvdimm.state
[14:19] "host key verification failed" seems to be an ssh issue, but virsh -c worked just fine :/
[14:20] also dumpxml (to virsh define the content on the target)
[14:20] copy all that is shown in 'virsh domblklist bionic-nvdimm' as well
[14:21] once all those (state, disks, xml defined) are in place
[14:21] you can restore via
[14:21] virsh restore bionic-nvdimm.state
[14:21] otherwise you'd need ssh keys and all that between the daemons and so on
[14:21] I guess if you just copy the files you might be better off, for a one-shot action
[14:23] ahasenack: I'd be curious to see a strace of that 'virsh migrate', I wonder how SSH is invoked
[14:24] I'll try
[14:24] I'll need sudo for the strace, which will change the user, ssh key is not in /root/.ssh/, etc
[14:24] I don't know the --p2p option, but it sounds as if libvirt was going to try to reach the other QEMU itself and would thus have problems reaching your key
[14:24] ahasenack: you can strace your own processes
[14:25] hm, I don't see an exec for ssh, it must be the libvirt daemon doing it
[14:25] or have root strace your running user's process
[14:25] check apparmor messages in dmesg
[14:25] that is clear
[14:25] hmm
[14:28] with virsh -c, strace shows it calling ssh
[14:28] but not with virsh migrate
[14:28] that must be going through the daemon
[14:28] tracing that
[14:29] yeah, got something tracing the daemon
[14:30] it's the libvirtd daemon calling ssh, and the user it's running as probably isn't me :)
[14:31] 10950 openat(AT_FDCWD, "/root/.ssh/config", O_RDONLY) = -1 ENOENT (No such file or directory)
[14:31] it's root, obviously
[14:31] meh
[14:32] so to migrate, the user running libvirtd (root) needs to have access to my ssh key, its known_hosts needs to be clean, etc
[14:32] shouldn't the above come with a lot of noise from Apparmor?
[14:32] libvirtd not being able to read /root/.ssh, when it's running as root?
[14:32] yeah, I wouldn't expect it to have access there
[14:33] # Very lenient profile for libvirtd since we want to first focus on confining
[14:33] # the guests. Guests will have a very restricted profile.
[14:33] / r,
[14:33] /** rwmkl,
[14:33] that's in /etc/apparmor.d/usr.sbin.libvirtd
[14:34] I would have maybe appreciated an include of abstractions/private-files-strict :)
[14:35] ahasenack: is it the same behaviour without --p2p?
[14:35] --tunnelled requires --p2p
[14:35] I didn't try without --tunnelled, let me check
[14:36] $ virsh migrate bionic-nvdimm qemu+ssh://andreas@diglett/system --offline --persistent --copy-storage-all --verbose
[14:36] error: operation failed: domain is no longer running
[14:36] what has that got to do with anything
[14:37] same thing without --offline
[14:39] why does it have to be running? Isn't live migration harder?
[14:42] * ahasenack gives up
[14:42] ahasenack: maybe the order of those arguments matters?
[14:42] I'm too used to lxd's migration
[14:43] that's really easy, lxc copy
[14:43] yeah :)
[14:43] rejoice, lxd now supports driving VMs :)
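A sketch of the copy-by-hand route cpaelzer outlines above, assuming the guest is bionic-nvdimm, the target host is diglett, and the disk lives at the path 'virsh domblklist' prints (hypothetical here):

    virsh dumpxml bionic-nvdimm > bionic-nvdimm.xml
    virsh save bionic-nvdimm bionic-nvdimm.state   # stops the guest, saves its RAM
    virsh domblklist bionic-nvdimm                 # shows which disk images to copy
    scp bionic-nvdimm.xml bionic-nvdimm.state \
        /var/lib/libvirt/images/bionic-nvdimm.qcow2 diglett:

    # then on diglett, with the disk moved to the same path the XML references:
    virsh define bionic-nvdimm.xml
    virsh restore bionic-nvdimm.state

No ssh trust between the two libvirt daemons is needed this way, which sidesteps the /root/.ssh problem traced above.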
[15:23] to save the iptables over reboot, all I need to do is 'iptables-save' ?
[15:23] or is there something else as well?
[15:25] zetheroo: it depends on what you use to manage your rules. I recommend the iptables-persistent package that makes it easy to persist rules between reboots
[15:26] I just entered some new rules and want to save them over reboot. Here https://help.ubuntu.com/community/IptablesHowTo#Saving_iptables it says to just use 'iptables-save' ...
[15:27] the 'iptables-persistent' method looks like more work ... or?
[15:34] zetheroo: if you use the command iptables-save, you then need to call iptables-restore on the next boot; that's what iptables-persistent does for you
[15:35] zetheroo: that help page is a bit dated as it assumes ifupdown, which may or may not apply depending on which Ubuntu version you are using
[15:36] oh
[15:54] you can also save iptables to a file and restore it when needed; you can create a script like: if condition a exists, run tables-a; if b, then b. iptables are awesome
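A minimal sketch of the iptables-persistent route sdeziel recommends:

    sudo apt install iptables-persistent   # offers to save the current rules
    # after changing rules later, persist them again with:
    sudo netfilter-persistent save         # writes /etc/iptables/rules.v4 and .v6
    # plain iptables-save works too, but nothing reloads the file at boot
    # unless you wire up iptables-restore yourself:
    sudo sh -c 'iptables-save > /etc/iptables/rules.v4'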
[16:10] hi, I would like to know if there is a way to do access control for a website using PHP sessions and Apache? I would like to control access with deny and allow rules in Apache, based on the user's PHP session information. Can anyone suggest good material for doing that, or another way to control access to a group of websites in apache? (I have many Wordpress instances on subdomains and I would like to control the access to al
[16:10] l these sites.)
[16:10] I will have a login page, and after the user logs in they will be redirected to the main page with the links to all subdomains.
[16:13] odd, i don't see biscoe having exited the room
[16:14] blscoe: nvm - it was an "l" - do you mean like restricting ALL access to a website ONLY to those with ssh (or something)? or just those with root privileges?
[16:17] weedmic: it is a front-end scenario; the users are people that will view the pages, not modify them. But only users that have a password and username (I have PHP and a database for this.) So I would like to restrict the access to the webpages on my server.
[16:18] So, when a person makes an http request, it will be blocked if they don't have a php session open.
[16:18] That is what I want to do.
[16:18] But I don't know how.
[16:19] why not use ssh and set your router to reject attempts that do not have an approved key (or use port knocking)?
[16:20] under that scenario they could never get to the webpage.
[16:20] unless they had a valid key/account.
[16:21] but how can the user get access to the web page via ssh, using a browser?
[16:22] blscoe: that's usually not done at the apache level but with PHP itself
[16:23] blscoe: if you really want to do it with apache, cookies are available and you can probably hack something similar to https://httpd.apache.org/docs/2.4/howto/access.html#rewrite
[16:23] sdeziel: my problem is that the webpage can be accessed by typing the direct url. For this reason I'm thinking of using apache restrictions.
[16:24] https://httpd.apache.org/docs/2.4/howto/access.html#env is probably a better fit, but that's really not where your access control logic should reside IMHO
[16:26] blscoe: first of all, a website should be on its own router (imho) and not inside the real network. second, you set the dtu (that box you get from your provider) to send all traffic to your gw/portal/fw. make the gw/portal/fw a linux machine which has only one function, to have ssh on it, and restrict the ssh via iptables to allow only approved accounts (maybe after 5 failed attempts - block the incoming ip). then that fw machine sends the
[16:26] approved traffic to the actual webserver. which I always make a live linux machine (DVD0) and the website (DVD1). you can replace DVD1 whenever you want (since linux allows live swapping) to update your site's pages. it's not that rare a thing.
[16:28] or use something like aws who will do it all for you (for a fee)
[16:30] sdeziel: I understand. Do you suggest that the control logic stay in a software layer, like a PHP layer that intercepts the requests? My problem is how to hide the real address of the pages from the user.
[16:31] weedmic: this is not what I expected, but I like this kind of solution. Is this like ssh tunneling?
[16:31] blscoe: trying to hide a URL is useless, it provides 0 security
[16:33] sdeziel: for this reason I'm thinking of blocking the access via apache, not hiding the url.
[16:34] If the user gets the url, he can access it.
[16:35] blscoe: we agree that there is a need to block the access when the user is not authorized. What I'm saying is that logic is best done inside PHP itself
[16:36] blscoe: or if you want something simpler, there is always HTTP authentication, which can be done by Apache
[16:41] sdeziel: thinking... :)
[16:42] sdeziel: oh. my problem is that I have multiple sites under a single domain. I can manage to implement authentication for a single site, but for many I can't think of a way to control it. Maybe restricting the access to the internal network is a good idea.
[16:43] ?maybe? - :D
[16:43] blscoe: you can do per-URI-path authentication with apache. Doesn't matter if all your sites are using the same domain
[16:44] blscoe: you can also use IP ACLs in your apache config/vhost definition
[16:44] blscoe: there are many approaches, and this really sounds to me like at least one long scrum session would be necessary to figure out exactly what you want, present what could be done, and let you pick the path.
[16:49] ok. I got some information here; I will read more about what you said and try to do something. It is my hobby project, so I need to do all of this in my free time. But thanks for the tips and suggestions, guys! Helped me a lot.
[18:16] sdeziel: I made a simple solution to deny access to all files on my apache server with the htpasswd function. In the future I will need to implement a more sophisticated solution, but it works for now. When a user accesses the site, the browser shows a popup asking for the username and password. You can create a user with a command line, and the users and passwords are stored in a file called htpasswd.
[18:16] You can implement the access control for an entire directory by changing the specific site configuration in /etc/apache2/sites-available/site.conf, or for only a specific folder using the file .htaccess in the desired folder (after you enable the "AllowOverride All" option.)
[18:16] For references: https://httpd.apache.org/docs/1.3/howto/auth.html and https://www.interserver.net/tips/kb/apache-htpasswd-authentication-ubuntu/
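A sketch of the htpasswd setup blscoe describes, with hypothetical paths and names; the Apache directives are shown as comments:

    # create the password file; -c creates it, omit -c when adding more users
    sudo htpasswd -c /etc/apache2/.htpasswd alice
    # then protect a directory in the vhost config (or in an .htaccess file,
    # which only needs "AllowOverride AuthConfig" rather than "All"):
    #   <Directory /var/www/example>
    #       AuthType Basic
    #       AuthName "Restricted"
    #       AuthUserFile /etc/apache2/.htpasswd
    #       Require valid-user
    #   </Directory>
    sudo systemctl reload apache2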
[18:19] There are limitations to logout control, but there is a trick to force a logout: you need to make the user access the url with another user request, like http://username:password@example.com/, to change the user and force the browser to send the username and password again.
[18:19] blscoe: good. You probably are not running Apache 1.3, though ;)
[18:19] No. :P (Old references, sorry.)
[18:20] But it works here, and it's just for understanding.
[18:20] I will consult the updated references...
[18:20] ...only out of curiosity. :D
[18:24] Here: https://httpd.apache.org/docs/2.4/howto/auth.html
[18:25] sdeziel: I saw that and it reminded me of you: "For general access control, see the Access Control How-To."
[18:25] :D
[18:25] Thanks!
[19:38] I am getting an issue as described in this post: https://askubuntu.com/questions/881175/why-installtion-fails-when-preseeding-the-partitioning-method. Has anyone run into something similar?
[19:43] To answer a question in that post, I am booting legacy. I have also tried 'partman-auto/method string lvm' as well as 'partman-auto/method string regular', with the same result
[19:45] https://xkcd.com/979/
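For what it's worth, a hedged sketch of the sort of partman stanza that question usually needs: the method line alone is often not enough, and d-i also wants the confirmation questions preseeded. These values are the stock examples from the Debian preseeding documentation, not a known fix for this poster's setup:

    d-i partman-auto/method string lvm
    d-i partman-lvm/device_remove_lvm boolean true
    d-i partman-lvm/confirm boolean true
    d-i partman-lvm/confirm_nooverwrite boolean true
    d-i partman-auto/choose_recipe select atomic
    d-i partman-partitioning/confirm_write_new_label boolean true
    d-i partman/choose_partition select finish
    d-i partman/confirm boolean true
    d-i partman/confirm_nooverwrite boolean true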