/srv/irclogs.ubuntu.com/2020/01/30/#ubuntu-server.txt

ridiksarnold: thx00:03
=== nolan__ is now known as nolan_
tomreynsarnold: there's also .6, still on releases: http://releases.ubuntu.com/releases/16.04.6/01:58
tomreyn(ridik's gone)01:59
sarnoldtomreyn: ugh01:59
sarnoldtomreyn: we have too many download sites01:59
tomreyn:)01:59
sarnoldtomreyn: I plain couldn't recall the name releases.ubuntu.com. :(01:59
sarnoldI've spent half an hour over the last two weeks trying to find "all the download sources" because they all have different subsets of installers and images and so on01:59
tomreynthere are a lot of subdomains, and not just the servers' hostnames.02:00
tomreyni tend to lose them, too.02:00
tomreyni think generally everything that's still at least unofficially ESM supported is still on releases.u.c and only the *really* old stuff moves to old-releases.u.c now. and then there is cdimages.u.c and ports.u.c and probably something else i forgot02:03
tomreynactually ports.u.c is a different category, like archive.u.c but for non-amd64 (and non-i686). however, some installers are only on the archive mirrors. :)02:04
sarnoldcdimages was the easy one to remember :)02:05
sarnoldI also wish we had a single place to rsync all of ports.u.c. and archive.u.c -- there's no good way to have a consistent mirror of the contents of both of those, sharing data where possible :(02:06
sarnoldsince I've got ppc64el, aarch64, and amd64 in the house, it'd be super to have a single mirror with all three02:06
tomreyncouldn't you just have /ports/ next to /archives/ ?02:07
tomreynthey'd still be separate but on the same hostname and next to one another at least02:08
tomreynoh you're probably thinking in terms of automation where you could just set a $arch variable, i see.02:09
sarnoldtomreyn: well, that and it'd be super-nice to not duplicate all the arch-indep files02:16
sarnoldtomreyn: zfs dedup could do it but at great cost to memory02:16
sarnoldtomreyn: .. and it'd still involve downloading two copies of all arch-indep packages02:16
tomreynoh right, the arch-independent packages, there are some. i don't know how much disk space and how many FSOs they sum up to as opposed to arch-dependent, but i guess it certainly slows down the rsync having to copy any of those twice.02:24
tomreyni'm not sure how good it is, but there are deduplicating file systems other than zfs (and not all of them consume much ram, i think). btrfs can do it (but not sure how stable) and redhat open sourced VDO: https://www.marksei.com/vdo-linux-deduplication/02:27
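One way to approximate that without a deduplicating filesystem, as a sketch only (the rsync module names, paths and the use of rdfind are assumptions, not an official recipe): mirror archive and ports into sibling directories and hard-link identical files afterwards. This still downloads the arch-independent packages twice; it only reclaims the disk space.

    rsync -a --delete rsync://archive.ubuntu.com/ubuntu/     /srv/mirror/archive/
    rsync -a --delete rsync://ports.ubuntu.com/ubuntu-ports/ /srv/mirror/ports/
    # collapse byte-identical files (mostly *_all.deb) into hard links, no extra RAM needed
    rdfind -makehardlinks true /srv/mirror/archive /srv/mirror/ports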
sarnoldI've got at least 380 gigs of *_all.deb packages on my mirror right now02:33
sarnoldfunny thing, locate '*all.deb' | xargs ls -l  ... emitted *tons* of file not found errors; I wonder what's busted now :)02:33
tomreynneed to run updatedb ;)02:34
sarnoldand come back tomorrow :)02:34
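For what it's worth, a small sketch of a more robust version of that pipeline, using standard mlocate/findutils options (-e drops entries that no longer exist, the NUL delimiters survive odd filenames):

    sudo updatedb
    locate -0 -e '*_all.deb' | xargs -0 ls -l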
sarnold-rw-r----- 1 root mlocate 4985480349 Jan 29 07:17 /var/lib/mlocate/mlocate.db02:34
sarnoldvery curious02:35
tomreyn4.64 GB, not bad02:35
tomreynapparently VDO is also rather ram hungry02:36
sarnoldhttp://paste.ubuntu.com/p/fNFmSVZN2S/02:36
tomreynhmm, outdated kernel packages, but the others, i wouldn't know02:37
tomreynhttp://archive.ubuntu.com/ubuntu/pool/main/f/freeradius/freeradius-common_3.0.20+dfsg-3_all.deb is a 404, but freeradius-common_3.0.20+dfsg-3build1_all.deb exists.02:40
sarnold2020-01-28 14:1802:42
tomreyni feel like i'm a happy person who does not need to understand those details and can just head to bed now. :)02:45
sarnoldtomreyn: that sounds like an excellent plan!02:47
tomreynsarnold: even greater when you make it happen!02:47
sarnoldtomreyn: I wish you great success in your plan :)02:47
sarnoldas I should also find some dinner and exercise02:48
tomreynand good luck with yours!02:48
sarnold:D02:48
tomreynhttps://www.youtube.com/watch?v=7CGr5LEAfRY https://github.com/dm-vdo/kvdo/blame/master/README.md#L80 (out of tree modules so far). and really gone. :)02:56
sarnoldhaha nice find :) thanks02:59
sarnold1.65T completed (3240MB/s) estimated time remaining: 315079hr 55min 15sec03:00
sarnoldsheeesh. Maybe I don't want to know stats on this after all :)03:00
lordievaderGood morning07:11
bcxHi, i'm trying to preseed bionic with a floppy containing the d-i selections. Seems like the 18.04 netboot installer has the floppy driver but doesn't automatically mount fd0. Is there any way to ask for that via boot params? Thx12:34
bcxalternatively, a solution enabling running an arbitrary command (mount /dev/fd0 /media) from boot params would be accepted :)12:35
ahasenackcpaelzer: are you familiar with setting up nvdimm devices in qemu/libvirt? I followed rafaeldtinoco's template, but can't get some commands to work, like enable-namespaces, or create-namespaces, I'm wondering what I'm missing. strace didn't show anything obvious12:51
ahasenackhttps://pastebin.ubuntu.com/p/HMtXy2yqDT/ my xml12:51
ahasenackcommands: https://pastebin.ubuntu.com/p/hjNCFBjykH/12:51
ahasenackmaybe I'm missing /dev/pmem* devices12:52
sdhd-saschaHi, which daily ubuntu-server 20.04 image is usable for installation? i already tried some, but didn't have luck yet12:55
sdhd-saschaI want to reinstall, because the SSD for my ZFS log device has crashed. After that it won't boot. It was 18.04 before, and upgrading to 20.04 didn't resolve the `apt & dpkg` dependencies. Now i try to install a fresh 20.04.12:57
sdhd-saschaDon't need a stable version. Just want to try 20.04 now12:57
ahasenackwhat failed?12:57
sdhd-saschaMy raid-controller said, something about foreign disk. After cleaning that in the raid-controller, uefi said something like "ubuntu unavailable" ...12:59
sdhd-saschaIt's a no-name SSD, which wasn't supported by my server firmware.12:59
sdhd-saschaahasenack: You mean, with the installer ?13:00
ahasenackyes13:00
sdhd-saschaOh, well. One time the xz decompression didn't work, another time the /cow couldn't get mounted, lastly a squashfs error. Not sure if it's the usb 3.0 stick in a usb 2.0 port...13:01
sdhd-saschaMaybe i try a usb 2.0 stick, to be sure13:01
tomreynbcx: maybe you want something like https://www.debian.org/releases/stable/amd64/apbs02.en.html#preseed-auto rather13:03
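That page's approach, roughly: instead of mounting a floppy, pass the preseed location on the kernel command line and let the installer fetch it once the network is up. A sketch with placeholder values; whether an early_command given on the boot line runs early enough to make a floppy-hosted preseed file usable is an assumption and untested here.

    # appended to the netboot kernel command line
    auto=true priority=critical url=http://192.168.1.1/d-i/bionic/preseed.cfg
    # boot params can also carry an arbitrary command, e.g.
    preseed/early_command="mount /dev/fd0 /media"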
sdhd-saschaThis is the version, which i try currently: http://cdimage.ubuntu.com/ubuntu-server/daily-live/20200129/13:03
ahasenacksdhd-sascha: that's odd indeed, did you verify the hash of the downloaded image?13:04
sdhd-saschayes, gpg and sha256sum. But only on filesystem. Didn't know if i could check this after copying to usb.13:04
sdhd-saschaahasenack: well... Seems to be the usb-port on my current host. Because on copying the stick didn't blink ... Hmm, weird. Wait i try another machine or port...13:06
sdhd-saschaI already, have trouble with this usb-port. Didn't find the cause, yet13:06
tomreynis this server hardware? got an OOB access / networked KVM system with 'virtual media' / cdrom?13:09
tomreynalternatively https://unix.stackexchange.com/questions/75483/how-to-check-if-the-iso-was-written-to-my-usb-stick-without-errors13:11
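One way to do that check, comparing only as many bytes as the image actually has (the .iso name and /dev/sdX are placeholders; make sure sdX is really the stick):

    SIZE=$(stat -c %s focal-live-server-amd64.iso)
    sudo head -c "$SIZE" /dev/sdX | sha256sum                 # should equal: sha256sum focal-live-server-amd64.iso
    sudo cmp -n "$SIZE" focal-live-server-amd64.iso /dev/sdX  # or a byte-for-byte compare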
sdhd-saschatomreyn: the machine, with the strange usb-port is a small `zotac`. (Sometimes the usb port works, for only one time. If i do a shutdown, and remove the ac-power. After that, Linux 5.3 boots and the port works again... )13:12
sdhd-saschaThe other server is a dell13:12
sdhd-saschatomreyn: thank you :-)13:12
sdhd-saschaNow i use another machine for usb copy... I will report, if the installer works now.13:13
* sdhd-sascha usb blinks now, on copying. as expected ;-)13:13
ahasenack:)13:13
tomreynsdhd-sascha: if the dell got an idrac it may have virtual media support so you could mount the iso to the server 'directly' over network, assuming that is stable.13:14
sdhd-saschatomreyn: that's true. But there's maybe another machine here. Without an idrac13:14
tomreynbut i guess the usb method works, too, if data is actually written to it. ;)13:14
sdhd-sascha:-) yes... it's still copying... Seems to work better now ;-)13:15
sdhd-saschaBut crazy, that the installer has booted.13:15
sdhd-saschatomreyn: Normally, i would use maas for installation. But that's the other machine, which makes trouble today ;-)13:20
tomreynthings could be so easy if things would just work. :)13:21
sdhd-sascha:-)13:21
sdhd-saschai have here an error: `ACPI: SPCR: unexpected SPCR Access Width. Defaulting to byte size` . Can i ignore this on boot ?13:22
cpaelzerahasenack: ah here you actually asked :-)13:27
cpaelzerahasenack: https://libvirt.org/formatdomain.html#elementsMemory13:27
ahasenacksorry13:27
ahasenacktoo many channels13:27
cpaelzerahasenack: if you use a qemu built with pmem, then you should be able to set <pmem/>13:28
cpaelzerahasenack: inside the <source> element of the nvdimms13:28
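Per the formatdomain page above, a sketch of what that looks like in the guest XML (path, sizes and NUMA node are placeholders; it only takes effect with a qemu built against libpmem):

    <memory model='nvdimm' access='shared'>
      <source>
        <path>/var/lib/libvirt/images/nvdimm0.img</path>
        <pmem/>
      </source>
      <target>
        <size unit='GiB'>4</size>
        <node>0</node>
        <label>
          <size unit='KiB'>128</size>
        </label>
      </target>
    </memory>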
sdhd-saschaIs there a way to configure keyboard repeat in the python terminal installer ?13:28
ahasenackcpaelzer: I'm wondering if I can get away with not doing that rebuild by rebooting with the memmap trick in the kernel cmdline13:28
sdhd-saschaIt hangs sometimes13:28
cpaelzerahasenack: depends on what you actually want to test atm13:29
ahasenackI seem to have gotten the nvdimm devices, backed by files on the host13:29
ahasenackbut kilobyte tells me that "persistence_domain":"unknown" says it's not recognized as pmem13:29
ahasenackso I need this other layer13:29
cpaelzerok, the qemu part comes AFTER you have whatever you're missing, so you can then use it for your guest13:30
cpaelzerbut I don't know the trick tinoco used to get it working in the tests13:30
cpaelzerI also thought the kernel commandline to fake some would be the main part13:31
cpaelzerseems there might be more13:31
* cpaelzer is re-reading the inbox13:31
ahasenackok, this is my cmdline: https://pastebin.ubuntu.com/p/X5V7vz9BhF/13:31
cpaelzerwhich is fine for faked nvdimms13:31
sdhd-saschatomreyn: ahasenack: thank you. It works now. (The keyboard repeat, was also normal now ;-) weird)13:31
cpaelzerat least I'd think so13:31
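For reference, the memmap trick on the host is a sketch like this (the offsets are examples and must point at RAM the system actually has and doesn't need elsewhere; being emulated, the result will still report persistence_domain "unknown"):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX="memmap=4G!16G"   # reserve 4 GiB starting at the 16 GiB mark as legacy pmem
    sudo update-grub && sudo reboot
    ndctl list -N                        # afterwards a /dev/pmem0 namespace should be listed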
ahasenacksdhd-sascha: nice13:32
cpaelzerahasenack: the pmem in the domain xml will add pmem=on to the commandline13:32
cpaelzerand a non rebuilt qemu will tell you "I can't do that"13:32
tomreynsdhd-sascha: you're welcome. SPCR is the serial port console redirection ACPI table. so unless you're installing through a serial console you can probably ignore it during installation.13:33
ahasenacklet me ask him on telegram13:33
cpaelzerahasenack: can I log into your system to work with you there?13:33
cpaelzerI have found old notes between me and tinoco13:33
ahasenackhm, not trivially13:34
sdhd-saschatomreyn: thanks :-) At the moment no serial console13:34
ahasenackI could move the vm to diglett13:34
ahasenackmigration and such13:34
ahasenackdoes that work nowadays? :)13:34
cpaelzerlike live migrating?13:34
cpaelzerit would work if you had shared storage13:34
ahasenackdoesn't have to be live13:35
ahasenackshut down, copy over, bring up13:35
ahasenacka big copy, yeah13:35
cpaelzerthat should be ok13:35
ahasenackxml + img13:35
ahasenackcpaelzer: I got a hold of tinoco13:35
ahasenackhe thinks he used a ppa you setup for the mir13:35
ahasenackso it must have been linked with pmem13:35
cpaelzerthat might explain13:36
cpaelzerI don't remember a PPA for it TBH13:36
ahasenackif the mir succeeds, we would have that in focal, and require the security team to use focal when testing this13:36
cpaelzeryes13:36
ahasenackor I could try the memmap cmdline trick13:36
cpaelzeryes13:36
cpaelzerthe only PPA in that regard which I know is https://launchpad.net/~rafaeldtinoco/+archive/ubuntu/lp185350613:37
ahasenackcpaelzer: he may have been using debian as the host, hence, debian's qemu13:42
ahasenackcpaelzer: n00b libvirt migration user here14:04
ahasenackcpaelzer: the "Migrate" option is grayed out in the context menu of the vm I want to migrate, in virt-manager14:04
ahasenackcpaelzer: but it's available in the running vm14:04
ahasenacklive-migrate is the *only* migration option?14:04
* ahasenack resorts to virsh14:06
ahasenackcpaelzer: any tip here? virsh -c is fine with that url, but not virsh migrate: https://pastebin.ubuntu.com/p/MYY5W4Sbrp/14:11
cpaelzerahasenack: reading ..14:17
cpaelzerwith all those firewalls you never know14:19
cpaelzerahasenack: you can push it to a real file14:19
cpaelzervirsh save bionic-nvdimm bionic-nvdimm.state14:19
ahasenack"host key verification failed" seems to be an ssh issue, but virsh -c worked just fine :/14:19
cpaelzeralso dumpxml (to virsh define the content on the target)14:20
cpaelzercopy all that is shown in 'virsh domblklist bionic-nvdimm' as well14:20
cpaelzeronce all those (state, disks, xml defined) are in place14:21
cpaelzeryou can restore via14:21
cpaelzervirsh restore bionic-nvdimm.state14:21
cpaelzerotherwise you'd need ssh keys and all that between the daemons and so on14:21
cpaelzerI guess if you just copy the files you might be better off with a one-shot action14:21
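Put together, the one-shot offline variant looks roughly like this (hostnames and paths are examples; the state file already embeds the domain XML, the dumpxml copy is just for defining the guest on the target):

    # on the source host
    virsh dumpxml bionic-nvdimm > bionic-nvdimm.xml
    virsh domblklist bionic-nvdimm                 # note the disk image paths to copy
    virsh save bionic-nvdimm bionic-nvdimm.state   # stops the guest and writes its state to the file
    scp bionic-nvdimm.xml bionic-nvdimm.state diglett:
    scp /var/lib/libvirt/images/bionic-nvdimm.qcow2 diglett:/var/lib/libvirt/images/
    # on the target host
    virsh define bionic-nvdimm.xml
    virsh restore bionic-nvdimm.state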
sdezielahasenack: I'd be curious to see a strace of that 'virsh migrate', I wonder how SSH is invoked14:23
ahasenackI'll try14:24
ahasenackI'll need sudo for the strace, which will change the user, ssh key is not in /root/.ssh/, etc14:24
sdezielI don't know the --p2p option but it sounds as if libvirt is going to try to reach the other QEMU itself and would thus have a problem reaching your key14:24
sdezielahasenack: you can strace your own processes14:24
ahasenackhm, I don't see an exec for ssh, it must be the libvirt-daemon doing it14:25
sdezielor have root strace your running user's process14:25
sdezielcheck apparmor messages in dmesg14:25
ahasenackthat is clear14:25
sdezielhmm14:25
ahasenackwith virsh -c, strace shows it calling ssh14:28
ahasenackbut not with virsh migrate14:28
ahasenackthat must be going through the daemon14:28
ahasenacktracing that14:28
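A sketch of that daemon-side trace (assuming the monolithic libvirtd; -f follows the ssh child it spawns):

    sudo strace -f -p "$(pidof libvirtd)" -e trace=execve,openat 2>&1 | grep -E 'ssh|known_hosts'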
ahasenackyeah, got something tracing the daemon14:29
ahasenackit's the libvirtd daemon calling ssh, and the user it's running as probably isn't me :)14:30
ahasenack10950 openat(AT_FDCWD, "/root/.ssh/config", O_RDONLY) = -1 ENOENT (No such file or directory)14:31
ahasenackit's root, obviously14:31
ahasenackmeh14:31
ahasenackso to migrate, the user running libvirtd (root) needs to have access to my ssh key, its known_hosts needs to be clean, etc14:32
sdezielshouldn't the above come with a lot of noise from Apparmor?14:32
ahasenacklibvirtd not being able to read /root/.ssh, when it's running as root?14:32
sdezielyeah, I wouldn't expect it to have access there14:32
ahasenack  # Very lenient profile for libvirtd since we want to first focus on confining14:33
ahasenack  # the guests. Guests will have a very restricted profile.14:33
ahasenack  / r,14:33
ahasenack  /** rwmkl,14:33
ahasenackthat's in /etc/apparmor.d/usr.sbin.libvirtd14:33
sdezielI would have maybe appreciated to have an include of abstractions/private-files-strict :)14:34
sdezielahasenack: is it the same behaviour without --p2p ?14:35
ahasenack--tunneled requires --p2p14:35
ahasenackI didn't try without --tunneled, let me check14:35
ahasenack$ virsh migrate bionic-nvdimm qemu+ssh://andreas@diglett/system --offline --persistent --copy-storage-all --verbose14:36
ahasenackerror: operation failed: domain is no longer running14:36
ahasenackwhat has that got to do with anything14:36
ahasenacksame thing without --offline14:37
ahasenackwhy does it have to be running. Isn't live migration harder?14:39
* ahasenack gives up14:42
sdezielahasenack: maybe the order of those arguments matters?14:42
ahasenackI'm too used to lxd's migration14:42
ahasenackthat's really easy, lxc copy <src> <dst>14:43
sdezielyeah :)14:43
sdezielrejoice, lxd now supports driving VMs :)14:43
zetherooto save the iptables rules over a reboot, all I need to do is 'iptables-save'?15:23
zetherooor is there something else as well?15:23
sdezielzetheroo: it depends on what you use to manage your rules. I recommend the iptables-persistent package that makes it easy to persist rules between reboots15:25
zetherooI just entered some new rules and want to save them over reboot. Here https://help.ubuntu.com/community/IptablesHowTo#Saving_iptables it says to just use 'iptables-save' ...15:26
zetheroothe 'iptables-persistent' method looks like more work ... or?15:27
sdezielzetheroo: if you use the iptables-save command, you then need to call iptables-restore on the next boot; that's what iptables-persistent does for you15:34
sdezielzetheroo: that help page is a bit dated as it assumes ifupdown which may or may not apply depending on which Ubuntu version you are using15:35
zetheroooh15:36
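A minimal sketch of both options on a current Ubuntu:

    # option 1: let a package handle it
    sudo apt install iptables-persistent      # pulls in netfilter-persistent
    sudo netfilter-persistent save            # writes /etc/iptables/rules.v4 and rules.v6, reloaded at boot
    # option 2: by hand
    sudo iptables-save | sudo tee /etc/iptables/rules.v4
    # ...and arrange for 'iptables-restore < /etc/iptables/rules.v4' to run at boot yourself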
weedmicyou can also save iptables rules to a file and restore them when needed; you can create a script like: if condition a exists, restore rules-a; if b, then rules-b. iptables are awesome15:54
blscoehi, I would like to know if there is a way to do access control for a website using PHP Sessions and Apache? I would like to control access by denying and allowing content in Apache based on the user's PHP session information. Can anyone suggest good material for doing that, or another way to control access to a group of websites in apache? (I have many Wordpress instances on subdomains and I would like to control the access to all these sites.)16:10
blscoeI will have a login page, and after the user logs in they will be redirected to the main page with the links to all subdomains.16:10
weedmicodd, i don't see biscoe having left the room16:13
weedmicblscoe: nvm - it was an "l" - do you mean like restricting ALL access to a website ONLY to those with ssh (or something)?  or just those with root privileges?16:14
blscoeweedmic: it is a front-end scenario, the users are people that will see the page, not modify it. But only users that have a password and username (I have PHP and a database for this.) So I would like to restrict the access to the webpages on my server.16:17
blscoeSo, when someone makes an http request, it will be blocked if they don't have a php session open.16:18
blscoeThat is what I want to do.16:18
blscoeBut I don't know how to do it.16:18
weedmicwhy not use ssh and set your router to reject attempts that do not have an approved key (or use port knocking)?16:19
weedmicunder that scenario they could never get to the webpage.16:20
weedmicunless they had a valid key/account.16:20
blscoebut how can the user get access to the web page via ssh using a browser?16:21
sdezielblscoe: that's usually not done at the apache level but with PHP itself16:22
sdezielblscoe: if you really want to do it with apache, cookies are available and you can probably hack something similar to https://httpd.apache.org/docs/2.4/howto/access.html#rewrite16:23
blscoesdeziel: my problem is that the webpage can be accessed by typing the direct url. For this reason I'm thinking of using apache restrictions.16:23
sdezielhttps://httpd.apache.org/docs/2.4/howto/access.html#env is probably a better fit but that's really not where your access control logic should reside IMHO16:24
weedmicblscoe: first of all, a website should be on its own router (imho) and not inside the real network. second, you set the dtu (that box you get from your provider) to send all traffic to your gw/portal/fw. make the gw/portal/fw a linux machine which has only one function, to have ssh on it, and restrict the ssh via iptables to allow only approved accounts (maybe after 5 failed attempts - block the incoming ip), then that fw machine sends the16:26
weedmicapproved traffic to the actual webserver, which I always make a live linux machine (DVD0) plus the website (DVD1). you can replace DVD1 whenever you want (since linux allows live swapping) to update your site's pages. it's not that rare a thing.16:26
weedmicor use something like aws who will do it all for you (for a fee)16:28
blscoesdeziel: I understand. Do you suggest that the access-control logic stay in a software layer, like a PHP control layer that intercepts the requests? My problem is how to hide the real address of the pages from the user.16:30
blscoeweedmic: this is not what I expected, but I like this kind of solution. Is this like ssh tunneling?16:31
sdezielblscoe: trying to hide a URL is useless, it provides 0 security16:31
blscoesdeziel: for this reason I was thinking of blocking the access via apache, not hiding the url.16:33
blscoeIf the user gets the url, they can access it.16:34
sdezielblscoe: we agree that there is a need to block the access when the user is not authorized. What I'm saying is that logic is best done inside PHP itself16:35
sdezielblscoe: or if you want something simpler, there is always HTTP authentication that can be done by Apache16:36
blscoesdeziel: thinking... :)16:41
blscoesdeziel: oh. my problem is that I have multiple sites over a single domain. I can manage implementing authentication for a single site, but for many I cannot think of a way to control it. Maybe restricting the access to the internal network is a good idea.16:42
weedmic?maybe? - :D16:43
sdezielblscoe: you can do per URI path authentication with apache. Doesn't matter if all your sites are using the same domain16:43
sdezielblscoe: you can also use IP ACLs in your apache config/vhost definition16:44
weedmicblscoe: there are many approaches and this really sounds to me like at least one long scrum session would be necessary to figure out exactly what you want, present what could be done, and let you pick the path.16:44
blscoeok. I got some information here; I will read more about what you said and try to do something. It is my hobby project, so I need to do everything in my free time. But thanks for the tips and suggestions, guys! Helped me a lot.16:49
blscoesdeziel: I made a simple solution to deny access to all files on my apache server with htpasswd. In the future I will need to implement a more sophisticated solution, but it works for now. When a user accesses the site, the browser shows a popup asking for the username and the password to get access. You can create a user on the command line, and the users and passwords are stored in a file called htpasswd. You can implement the access control for an entire directory by changing the specific site configuration in /etc/apache2/sites-available/site.conf, or for only a specific folder using a .htaccess file in the desired folder (after you enable the "AllowOverride All" option.)18:16
blscoeFor references: https://httpd.apache.org/docs/1.3/howto/auth.html and https://www.interserver.net/tips/kb/apache-htpasswd-authentication-ubuntu/18:16
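For reference, the Apache 2.4 form of what is described above, as a sketch (user, paths and directory are placeholders):

    sudo htpasswd -c /etc/apache2/.htpasswd alice    # -c only the first time, it creates the file
    # in the vhost .conf, or in .htaccess with 'AllowOverride AuthConfig'
    <Directory /var/www/example>
        AuthType Basic
        AuthName "Restricted"
        AuthUserFile /etc/apache2/.htpasswd
        Require valid-user
    </Directory>
    sudo systemctl reload apache2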
blscoeThere are limitations for logout control, but there is a trick to force logout: you need to force the user to access the url with another user request, like http://username:password@example.com/, to change the user and make the browser send the username and password again.18:19
sdezielblscoe: good. You probably are not running Apache 1.3 though ;)18:19
blscoeNo. :P (Old references, sorry.)18:19
blscoeBut it works here, and it's just for understanding.18:20
blscoeI will consult the updated references...18:20
blscoe...only for curiosity. :D18:20
blscoeHere: https://httpd.apache.org/docs/2.4/howto/auth.html18:24
blscoesdeziel: I saw that and it reminded me of you: "For general access control, see the Access Control How-To."18:25
blscoe:D18:25
blscoeThanks!18:25
DSdavidDSI am getting an issue as described in this post: https://askubuntu.com/questions/881175/why-installtion-fails-when-preseeding-the-partitioning-method. Has anyone run into something similar?19:38
DSdavidDSTo answer a question in that post, I am booting legacy. I have also tried 'partman-auto/method string lvm' as well as 'partman-auto/method string regular' with the same result19:43
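For comparison, a hedged sketch of the partman block that usually accompanies 'method string lvm' in a d-i preseed file (the disk name is an example; any question left unanswered makes the installer stop, which matches the symptom in that post):

    d-i partman-auto/disk string /dev/sda
    d-i partman-auto/method string lvm
    d-i partman-lvm/device_remove_lvm boolean true
    d-i partman-md/device_remove_md boolean true
    d-i partman-auto/choose_recipe select atomic
    d-i partman-partitioning/confirm_write_new_label boolean true
    d-i partman/choose_partition select finish
    d-i partman/confirm boolean true
    d-i partman/confirm_nooverwrite boolean true
    d-i partman-lvm/confirm boolean true
    d-i partman-lvm/confirm_nooverwrite boolean true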
DSdavidDShttps://xkcd.com/979/19:45
