[01:25] <Howie69> I seem to be missing something...
[01:27] <Howie69> I used ssh-keygen to make an rsa key for a user on my server
[01:27] <Howie69> I tried to put that key into PuTTYgen but it said unable to convert, OpenSSH format
[01:28] <Howie69> How do I convert it from that format to something that PuTTY can use and WinSCP can use (for a client)?
[01:31] <Howie69> You see, I did it the other way around before...
[01:32] <Howie69> I used AWS to make the key and then download it and used puttygen
[01:32] <sarnold> what did you try to import? the private key or the public key or the authorized_keys fingerprint line?
[01:32] <Howie69> The authorized_keys in .ssh
[01:33] <sarnold> are you trying to use the key from putty to log in to a linux machine?
[01:33] <Howie69> Used these directions: https://help.ubuntu.com/community/SSH/OpenSSH/Keys
[01:33] <Howie69> (I hope paste from ubuntu is ok)
[01:35] <dpb1> which key are you pasting in to "putty gen"
[01:36] <dpb1> you want to paste the 'private key', the one in ~/.ssh/id_rsa
[01:37] <dpb1> ~/.ssh/authorized_keys gets the one in ~/.ssh/id_rsa.pub
[01:39] <Howie69> the id_rsa and authorized_keys are the same, as I cat'd one to the other
[01:39] <Howie69> I downloaded the keyfile, and tried to open it with PuttyGen, and it said "Failed to open: OpenSSH Format"
[01:40] <Howie69> do I need to just paste the contents instead?
[01:40] <Howie69> this is the question that Google couldn't give me a straight answer to :)
[01:41] <sarnold> your ~/.ssh/authorized_keys SHOULD NOT be the same as your private key
[01:42] <Howie69> I misspoke... or did I mistype?  let me look closer
[01:48] <Howie69> nope, it's the right key
[01:48] <Howie69> just wanted to make sure I used the .pub
[01:48] <dpb1> well, a simple google search does appear to show a difference between putty keys and openssh keys
[01:48] <Howie69> right, but doesn't show how to convert them besides PuttyGen
[01:49] <dpb1> https://www.chiark.greenend.org.uk/~sgtatham/putty/faq.html#faq-ssh2-keyfmt
[01:50] <Howie69> right... that's what I use when I use AWS to generate the keys instead of OpenSSH
[01:50] <Howie69> But it complains about it being an SSH2 key
[01:50] <Howie69> not sure if that's what the issue is or not
[01:51] <sarnold> "and the pterm and command-line puttygen utilities are not described at all"  :(
[01:51] <Howie69> because all of the results say to import it into PuttyGen, which says it cannot open because it's OpenSSH format
[01:52] <dpb1> there is a #putty channel
[01:52] <sarnold> ssh2 *might* mean the old terrible horrible commercial closed source ssh.com ssh
[01:52] <Howie69> although the page I am reading mentions using the private key... instead of the public
[01:52] <dpb1> might get better results there
[01:52] <sarnold> or maybe it means the modern protocol. hard to guess.
[01:53] <dpb1> I mean, you are running into problems using a putty tool, it's not something people will have experience with here.
[01:53] <Howie69> dpb1: I will try that.  Just trying to rule out the server end first
[01:53] <dpb1> in general
[01:53] <dpb1> yes
[01:53] <dpb1> I understand it's involving two systems. :)
[01:53] <sarnold> but it does seem like you're more likely to find a puttygen expert in a putty channel, hehe
[01:53] <dpb1> I have used windows enough to grit my teeth at the problems encountered.
[01:54] <dpb1> anyway, I'm not really sure, sorry Howie69
[01:54] <Howie69> hrm.. here's a simple question that I may have overlooked...
[01:55] <Howie69> do I have to restart opensshd after I edit the .ssh/authorized_keys file?
[01:55] <dpb1> that FAQ is very clear that it's a problem, but finding the right solution is more difficult. :)
[01:55] <sarnold> no, the authorized_keys files are re-read as needed every attempt
[01:55] <Howie69> ok, that checks that one off the list...
[01:55] <sarnold> BUT if *that* isn't working, the usual solution is to set the permissions more restrictively :) opensshd is very picky about permissions on that file.
[01:56] <sarnold> but if you're getting an error message from puttygen, best sort that out first
[01:56] <Howie69> it's set to 600
[01:57] <sarnold> iirc the ~ and ~/.ssh permissions matter too
[01:57] <Howie69> because I run the webserver under the /home/user/web folder
[01:58] <Howie69> I'm half tempted to re-enable password auth :)
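The permission requirements sarnold mentions can be sketched like this (a sketch only, assuming a stock OpenSSH server with StrictModes on and default paths; the mkdir/touch lines are just there to make the sketch self-contained):

```shell
# sshd refuses keys when ~, ~/.ssh, or authorized_keys are too open.
# These are the usual settings sshd's StrictModes check accepts:
mkdir -p "$HOME/.ssh"
touch "$HOME/.ssh/authorized_keys"
chmod 755 "$HOME"                        # home must not be group/world-writable
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"
stat -c '%a %n' "$HOME/.ssh" "$HOME/.ssh/authorized_keys"
```

When StrictModes rejects a key, sshd usually logs the reason in the server's auth log, which is the quickest way to confirm a permissions problem.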
[02:01] <Howie69> and the disturbing answer is...
[02:02] <Howie69> that PuTTYgen wants you to import the PRIVATE key and enter its passphrase so it can convert it into their own key format
[02:02] <dpb1> that's expected at least by me
[02:03] <dpb1> your private key is always on your client
[02:03] <dpb1> it's how the server knows you are you
[02:04] <irwiss> what's disturbing about it? it can't derive your private key from your public key
[02:07] <Howie69> sleep deprivation it seems
[02:07] <Howie69> I knew better than that
[02:08] <Howie69> But in this scenario, unlike with the AWS-made keys, even with the key it also asks for the associated key's passphrase
[02:09] <sarnold> it needs to decrypt the blob to get the raw numbers back out of it somehow
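For the record, the conversion the thread is circling around can also be done on the Linux side with the command-line puttygen that sarnold's FAQ quote alludes to. A sketch, assuming the putty-tools package is available (commands shown as comments, not run here):

```shell
# Convert an OpenSSH *private* key into PuTTY's .ppk format so PuTTY
# and WinSCP can use it (not the public key or authorized_keys line):
#
#   sudo apt-get install putty-tools
#   puttygen ~/.ssh/id_rsa -o ~/.ssh/id_rsa.ppk
#
# If the key has a passphrase, puttygen prompts for it - it has to
# decrypt the key material before re-encoding it.
```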
[02:17] <Howie69> now if I could only find an openoffice channel with someone on it...
[02:19] <sarnold> does it have to be openoffice? i thought that project was pretty much dead
[02:19] <sarnold> but there's 198 folks in #libreoffice
[02:26] <dpb1> 198 poor souls
[02:27] <dpb1> :)
[02:27] <sarnold> :D
[02:42] <Howie69> I didn't realize that openoffice is dead... I just updated it last week
[02:42] <Howie69> I use openoffice on windows public machines because MS Office is absurdly expensive to use on a public thin client :)
[05:00] <cpaelzer> good morning
[05:57] <lordievader> Good morning
[07:43] <Neo4> what is a good book about apache 2.4?
[07:43] <Neo4> I'm going to read something
[07:43] <Neo4> I lack knowledge
[07:44] <hateball> Neo4: I guess you can ask in #httpd they probably have some tips
[07:45] <Neo4> hateball: yes, and what have you read personally?
[07:46] <Neo4> LAMP is used everywhere, and we need to know it
[07:46] <Neo4> hateball: I saw one apache2 book from 2002, is that book still appropriate for 2018? )))
[07:47] <Neo4> hateball: I've installed /localhost/manual, but that manual is very boring to read
[07:48] <Neo4> hateball: is the manual enough? https://ibb.co/cWQiOy
[07:51] <Neo4> $
[07:51] <Neo4> What do you read about Ubuntu Server?
[07:51] <Neo4> or about Linux?
[07:52] <hateball> Neo4: Personally I don't read books. I just try and do something, if I fail I read the man-pages or search online for solutions
[07:52] <hateball> Learning by doing
[07:52] <Neo4> hateball: I'm used to reading books, now I can't stop :(
[07:53] <Neo4> hateball: I'm going to read 30 books dedicated to Linux,
[07:53] <hateball> Good luck!
[07:53] <Neo4> and I've read only 7, 2 of them not about Linux
[07:55] <Neo4> hateball: see the books I've read this year https://ibb.co/hN1t6J
[07:56] <Neo4> hateball: my record is 30 English books in a year, and I want to increase that to 50. Could it help me with my English? I read on the internet that some people read 100 books per year.
[07:57] <Neo4> hateball: the best was the Linux Bible
[07:58] <Neo4> really very useful, the others are crap
[09:14] <tobasco> jamespage: coreycb do you think we could get a point release for neutron 12.0.2 http://tarballs.openstack.org/neutron/neutron-12.0.2.tar.gz
[09:14] <tobasco> on UCA xenial i can only see 12.0.1 right now, and there is an annoying bug that is fixed in 12.0.2
[09:26] <tobasco> it's actually pretty critical since it disrupts networking
[09:39] <Neo4> when I do: a2dissite 000-default.conf I still get this site
[09:40] <Neo4> and I don't understand why I can reach the site http://test2 or http://test if they don't exist in my /etc/hosts?
[09:40] <Neo4> strange behavior
[09:44] <kiokoman> - /etc/apache2/sites-enabled is empty ?
[09:54] <Neo4> empty
[09:54] <Neo4> kiokoman: see my hosts file https://ibb.co/d9QL3y
[09:56] <Neo4> kiokoman: in /etc/apache2/sites-enabled there's this
[09:56] <Neo4> https://ibb.co/ejciOy
[09:57] <Neo4> when I do systemctl stop apache2.service I can't reach it
[09:58] <Neo4> how does the browser know about http://test2 if /etc/hosts doesn't have a record?
[09:58] <Neo4> and why does apache2 show the sites if they're all disabled?
[10:00] <kiokoman> maybe cache, idk
[10:03] <Neo4> kiokoman: maybe it's not cache, it sees when apache stops
[10:03] <lordievader> Neo4: Do you have a domain set up in `/etc/resolv.conf`?
[10:03] <Neo4> something tells the browser that test2 is my server
[10:04] <Neo4> now I'll check
[10:05] <Neo4> lordievader: no https://ibb.co/mcnWbJ
[10:06] <Neo4> if it were cache, the browser wouldn't refresh when apache stopped
[10:06] <lordievader> Does `test2.localdomain` resolve to your server?
[10:07] <lordievader> `dig @127.0.0.1 test2.localdomain`
[10:07] <Neo4> yes
[10:07] <Neo4> http://test2.localdomain shows the apache main page
[10:08] <Neo4> test.localdomain and test1 too
[10:08] <Neo4> all the sites I created before
[10:08] <lordievader> So, that is why `test2` resolves to your server.
[10:09] <Neo4> lordievader: why?
[10:10] <Neo4> /etc/hosts doesn't have test2, test, test1
[10:10] <Neo4> I thought we put the names in hosts
[10:10] <lordievader> You have a domain defined in `/etc/resolv.conf`. This gets appended to `test2`, which is resolvable.
[10:11] <Neo4> lordievader:
[10:11] <Neo4> I have there this two
[10:11] <Neo4> nameserver 127.0.1.1
[10:11] <Neo4> search localdomain
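What lordievader describes can be sketched as a toy model (a hypothetical helper, not a real resolver: per resolv.conf semantics, a name with too few dots gets the `search` domain appended before lookup, which is why bare `test2` became `test2.localdomain` and hit the VM):

```shell
# Simplified model of resolv.conf "search" handling with the default
# ndots:1 - a name containing no dot gets the search domain appended.
expand_query() {
    name=$1 search=$2
    case "$name" in
        *.*) echo "$name" ;;             # has a dot: tried as-is
        *)   echo "$name.$search" ;;     # bare name: search domain applied
    esac
}
expand_query test2 localdomain    # prints: test2.localdomain
```

The real resolver also retries the literal name if the search-expanded one fails; this sketch only shows the expansion step.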
[10:12] <Neo4> ddd.localdomain doesn't work
[10:13] <Neo4> only the ones I created before, but I removed them
[10:14] <lordievader> There is no host with the name `ddd` in your network?
[10:15] <Neo4> lordievader: no, nor with test2
[10:15] <Neo4> why do I have a test2 host?
[10:15] <Neo4> I removed that from /etc/hosts
[10:15] <lordievader> Perhaps your resolver (dnsmasq?) has cached that entry.
[10:19] <Neo4> lordievader: see https://paste.ubuntu.com/p/rYf4p66yDd/
[10:20] <lordievader> What is in `/etc/hosts`?
[10:20] <Neo4> lordievader: I tried to add a dddd virtual host and then remove it using my shell script. It seems something is wrong with it
[10:21] <Neo4> lordievader: https://paste.ubuntu.com/p/8smFrPkRVN/
[10:23] <kiokoman> host is a dns lookup utility, you must have some dns server caching it somewhere
[10:23] <lordievader> Yeah, that is my guess too. Dnsmasq (or something) caching the answer.
[10:26] <kiokoman> sudo service network-manager restart
[10:26] <kiokoman> sudo kill -HUP $(pgrep dnsmasq)
[10:27] <Neo4> kiokoman: my local computer, where vmware is installed https://paste.ubuntu.com/p/YDjCSB9SHM/
[10:28] <Neo4> it means the hosts on my local computer are linked to the virtual machine's hosts
[10:29] <Neo4> if a domain is in the parent OS (what's it called, the hypervisor OS?) then it's available in the virtual machine
[10:29] <Neo4> now I've commented it out and everything's all right
[10:29] <Neo4> do you understand?
[10:30] <kiokoman> https://kb.vmware.com/s/article/1013644
[10:30] <kiokoman> ?
[10:30] <Neo4> my vmware VM sees hosts from the /etc/hosts of the machine where it's installed
[10:32] <Neo4> kiokoman: don't know, just don't create sites on the virtual machine with names the parent computer already has, because they will collide
[10:32] <Neo4> this is not cache
[10:33] <Neo4> I'm curious, will my vmware VM see my parent's IP?
[10:33] <Neo4> I'll ping it
[10:35] <Neo4> kiokoman: it can ping
[10:35] <Neo4> see, from the virtual machine we can reach the outside computer
[10:35] <Neo4> without a shared folder, using ssh
[10:35] <Neo4> and filezilla
[10:36] <Neo4> interesting, why do I have 3 IPs when I do hostname -I ?
[10:36] <Neo4> before it was always one
[10:41] <Neo4> $
[10:41] <Neo4> who knows why there are 3 IP addresses?
[10:41] <Neo4> https://paste.ubuntu.com/p/M3dd6Wms4Q/
[10:41] <Neo4> hostname -I shows all IP addresses for the hostname; my hostname is neo, FQDN mail.neo.ru
[10:42] <Neo4> for the virtual machine I have only one IP, and I use "hostname -I" to determine the IP address in my shell scripts, is that right?
[12:03] <RoyK> Neo4: pastebin output of "ip a"
[12:04] <Neo4> RoyK: https://paste.ubuntu.com/p/HxHZsWwSmh/
[12:04] <Neo4> RoyK: I think maybe a real host will have one IP, and hostname -I is enough for the shell script
[12:05] <Neo4> I can check before running the script that there's only one IP
[12:06] <Neo4> and I use this IP a few times, one to put into the apache ServerName
[12:06] <Neo4> maybe only once
[12:06] <Neo4> but it's interesting why there are 3 IPs
[12:07] <Neo4> each computer has 1 IP, doesn't it?
[12:07] <Neo4> how could 1 computer have 3 IPs?
[12:08] <RoyK> looks like you have two virtual machines there, each with an IP address
[12:09] <Neo4> RoyK: yes, I had two before and removed one because I didn't have space, do you think the virtual machine added it
[12:09] <Neo4> RoyK: yes, I remember I had only one before
[12:10] <Neo4> RoyK: or maybe not, the virtual machine has a different IP
[12:11] <Neo4> vmnet1 - virtual machine
[12:12] <RoyK> it's a virtual interface
[12:12] <RoyK> vmware?
[12:13] <Neo4> RoyK: yes
[12:13] <RoyK> well, what's the problem? ;)
[12:13] <Neo4> RoyK: when we install vmware, our host gets one more IP?
[12:13] <Neo4> RoyK: no problem now
[12:13] <RoyK> we have a machine, without VMs at work, that has 64 IP addresses
[12:14] <RoyK> doing NAT operations for a few thousand users, so it needs a lot of addresses to balance the load (since there's only 65k TCP/UDP ports)
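Since the extra addresses here come from VMware's vmnet1/vmnet8 host-only/NAT interfaces, `hostname -I` will keep printing several addresses on this host. A sketch of what a script could do instead of assuming one address (the sample output is hardcoded for illustration; a real script would use `$(hostname -I)`):

```shell
# `hostname -I` prints every configured address, virtual interfaces
# included, so pick the field you want explicitly:
sample="192.168.1.50 172.16.21.1 172.16.140.1"   # e.g. $(hostname -I)
first_ip=$(echo "$sample" | awk '{print $1}')     # take the first field
echo "$first_ip"                                  # prints: 192.168.1.50
```

Field order isn't guaranteed to be meaningful, so matching on the expected subnet (e.g. with `grep -o`) is safer than blindly taking the first field.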
[12:16] <Neo4> ok, I don't understand this :)
[12:16] <Neo4> this topic, I mean
[12:23] <lordievader> RoyK: Hahaha, nice 😁
[12:59] <rbasak> ahasenack: would you be OK reviewing https://code.launchpad.net/~racb/usd-importer/+git/usd-importer/+merge/345617 please?
[12:59] <rbasak> nacc: ^
[12:59]  * ahasenack takes a look
[13:05] <jason_grammenos> the following process is hanging
[13:05] <jason_grammenos>   /usr/bin/python3 /usr/lib/ubuntu-release-upgrader/check-new-release -q
[13:06] <jason_grammenos> i am trying to figure out why
[13:06] <jason_grammenos> but nothing i do seems to be able to replicate the issue
[13:06] <jason_grammenos> it only hangs when invoked from the cron.weekly
[13:06] <jason_grammenos> but never when run manually
[13:15] <tomreyn> this can be due to different environments when you run it vs. when run by cron
[13:16] <jason_grammenos> i figured as much, so i attempted to run it with env -i
[13:17] <jason_grammenos> after dumping the environment of one of the hung process (cat /proc/pidnumber/environ)
[13:17] <tomreyn> does it also hang when you make the cron job run without -q ? i would guess it will, and if it does, this may hint on where it gets stuck.
[13:17] <jason_grammenos> hmm, the cron job only runs once a week
[13:18] <jason_grammenos> so i guess i could modify the job and wait a week
[13:18] <tomreyn> i would think the daily cron jobs run in the same environment
[13:18] <jason_grammenos> good point
[13:18] <tomreyn> and the hourly ones, too
[13:19] <jason_grammenos> ok, i copied the cron job over to the hourly
[13:20] <jason_grammenos> i might also try one in cron.d
[13:20] <jason_grammenos> with a really short run time
[13:20] <tomreyn> right, or root's crontab.
[13:21] <tomreyn> on a 16.04 system (i do not know what you are running there, you did not say), /usr/lib/ubuntu-release-upgrader/check-new-release is a symlink to /usr/bin/do-release-upgrade
[13:21] <tomreyn> so you could probably just run "/usr/bin/do-release-upgrade -cq" instead (with or without the q)
[13:21] <jason_grammenos> 14.04 and yes it is a symlink
[13:22] <tomreyn> but i guess that's not the issue, it should not cause it to get stuck
[13:23] <jason_grammenos> ya, running it manually like that as root, it does not hang
[13:24] <tomreyn> when you type 'sh' and, once there, run "env -i /usr/bin/python3 /usr/lib/ubuntu-release-upgrader/check-new-release" - does it get stuck?
[13:25] <tomreyn> ^ as root
[13:26] <jason_grammenos> no
[13:26] <jason_grammenos> it ran successfully
[13:26] <tomreyn> okay, it was worth a try ;)
[13:26] <jason_grammenos> :)
[13:28] <tomreyn> it will be some network related issue, such as a mandatory proxy server not set
[13:28] <jason_grammenos> no proxies in my environment
[13:32] <tomreyn> if this changed cron job doesn't help you identify the issue, you can still use the python debugger (as seen in the second example, invoking it as a python module on the command line): https://docs.python.org/2/library/pdb.html
[13:32] <jason_grammenos> so my per minute cron job runs fine
[13:32] <jason_grammenos> * * * * * root /etc/cron.hourly/update-notifier-common > /tmp/test.log
[13:34] <tomreyn> this would not catch stderr output
[13:34] <jason_grammenos> so at this point i am starting to suspect some race condition or something, as in the job happens to run when some other scheduled job runs and they bump heads
[13:34] <jason_grammenos> sure, i guess it won't catch stderr output, but it also is not hanging
[13:34] <tomreyn> right
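Since the crontab line above only redirects stdout, a wrapper like this would capture both streams with a timestamp (a sketch; the log path and wrapper name are just examples):

```shell
# Log both stdout and stderr of a cron-invoked command, with a
# UTC timestamp so hangs can be correlated with other jobs:
LOG=/tmp/test.log
run_logged() {
    { echo "=== $(date -u '+%Y-%m-%dT%H:%M:%SZ') $*"; "$@"; } >>"$LOG" 2>&1
}
# e.g. in a wrapper script called from cron:
run_logged sh -c 'echo to-stdout; echo to-stderr >&2'
```

The simpler fix is appending `2>&1` to the crontab line itself; the wrapper just adds timestamps on top of that.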
[13:35] <tomreyn> and i agree the race condition or dead locking sounds like a possible explanation
[13:35] <jason_grammenos> i was expecting, if it failed, to see multiple hung jobs in ps -ef
[13:36] <tomreyn> failed, as in hung? then that would be my expectation, too.
[13:37] <tomreyn> otherwise this theory seems wrong
[13:37] <jason_grammenos> right
[13:37] <jason_grammenos> i was also hoping the strace -p pid would help
[13:37] <jason_grammenos> but i don't get anything i see as useful in the output
[13:37] <jason_grammenos> brief as it is
[13:38] <tomreyn> btw the cron.*ly jobs are run by /etc/crontab - you can modify the times when they run for your testing purposes
[13:38] <jason_grammenos> ah, cool thanks
[13:39] <tomreyn> make sure you keep the original times somewhere, though
[13:39] <jason_grammenos> right good idea
[13:40] <tomreyn> and keep in mind that running some of the regular cron jobs at a shorter period can have adverse effects, e.g. log rotation
[13:41] <jason_grammenos> right
[13:41] <coreycb> tobasco: yes i'll get working on another queens update. must've just missed neutron with the last one.
[13:46] <coreycb> tobasco: we'll be tracking the queens updates in bug 1771572
[13:49]  * tomreyn afk
[14:05] <tobasco> coreycb: tyvm!
[14:13] <ahasenack> rbasak:
[14:13] <ahasenack> +        if self.raw_repo and self._delete_on_close:
[14:13] <ahasenack> +            shutil.rmtree(self.local_dir)
[14:13] <ahasenack> shouldn't that be self._local_dir?
[14:15] <rbasak> ahasenack: local_dir is a property that returns _local_dir
[14:15] <ahasenack> ah, I see it
[14:15] <ahasenack> thx
[14:15] <rbasak> I looked for a pattern to follow just now.
[14:16] <rbasak> The only other instance in the class itself seems to be in _maybe_quiltify_tree_hash which uses the property.
[14:16] <rbasak> So it's consistent at least.
[15:02] <ahasenack> rbasak: I'm getting leaked tmpdirs in /tmp when running  pytest-3 gitubuntu/test_importer.py. master doesn't have this behavior
[15:03] <rbasak> Thank you for spotting that. I didn't think to look, assuming that the test caught it.
[15:03] <rbasak> I'll take a look now.
[15:03] <ahasenack> rbasak: at the end of test_importer_main_cleanup_on_exception, the assert makes sure that the directory is empty, but it still exists
[15:04] <rbasak> Reproduced
[15:05] <rbasak> Ah right
[15:09] <rbasak> ahasenack: http://paste.ubuntu.com/p/wjC6DBHRpN/ fixes it I think.
[15:10] <rbasak> The diff looks horrible but really the change is in indentation
[15:10] <rbasak> http://paste.ubuntu.com/p/VqNGX2wwmz/ is clearer
[15:10] <ahasenack> commit on top and I can try
[15:12] <rbasak> ahasenack: pushed
[15:12] <rbasak> ahasenack: though that's the test I'm proposing to drop anyway
[15:13] <ahasenack> ok
[15:13] <ahasenack> in other news
[15:13] <ahasenack> what does this mean?
[15:13] <ahasenack> trying: apache2
[15:13] <ahasenack> skipped: apache2 (55, 0, 10)
[15:13] <ahasenack>     got: 15+0: a-8:a-1:a-1:i-1:p-3:s-1
[15:13] <ahasenack>     * ppc64el: libapache2-mod-proxy-uwsgi-dbg, libapache2-mod-shib2
[15:14] <rbasak> It means that libapache2-mod-proxy-uwsgi-dbg and libapache2-mod-shib2 become uninstallable on ppc64el if apache2 were to migrate to the release pocket
[15:14] <ahasenack> ugh
[15:21] <kristian2709_> Hey. I am getting this error. "mount: mounting https://images.maas.io/ephemeral-v3/daily/bionic/amd64/20180426.2/squashfs on /root failed: No such device". What I basically want is to pxe boot the ephemeral image from a custom pxe server. My kernel is "https://images.maas.io/ephemeral-v3/daily/bionic/amd64/20180426.2/ga-18.04/generic/boot-kernel", initrd is "https://images.maas.io/ephemeral-v3/daily/bionic/amd64/20180426.2/ga-18.04/generic/boot-initrd" and cmdline args "root=squash:https://images.maas.io/ephemeral-v3/daily/bionic/amd64/20180426.2/squashfs ro". Am I missing something?
[15:40] <tomreyn> kristian2709_: /join #maas
[16:04] <rbasak> cpaelzer, ahasenack: I don't think I'll be able to do the chrony review in the time I have left today, sorry.
[16:29] <ahasenack> rbasak: hmpf, turns out libapache2-mod-shib2 (one of the packages from the error message) is already uninstallable in the previous apache package
[16:30] <rbasak> ahasenack: AFAIK, not-worse is the criteria. Perhaps the real cause is the other one?
[16:31] <ahasenack> yeah, I'm tracing it down
[16:31] <ahasenack> eventually libxmltooling7 gets installed, and that fails because it wants libcurl3 which is what removes apache
[16:31] <ahasenack>  libxmltooling7 : Depends: libcurl3 (>= 7.16.2) but it is not going to be installed
[16:32] <ahasenack> I have to understand what's the story with libcurl3 and libcurl4 in the archive
[16:32] <ahasenack> there is libcurl3-gnutls, for example, but no libcurl4-gnutls
[16:32] <ahasenack> and so on
[16:33] <ahasenack> and libcurl4-gnutls-dev depends on libcurl3-gnutls (!)
[16:38] <ahasenack> hm
[16:38] <ahasenack> xmltooling (1.6.4-1ubuntu2) bionic; urgency=medium
[16:38] <ahasenack>   
[16:38] <ahasenack>   * Switch back to openssl1.0 via newly-added libcurl-openssl1.0-dev, since
[16:38] <ahasenack>     libxml-security is not ported to openssl1.1.
[16:46] <jr_admin> hi, i keep getting emails from awstat.conf that it can't open access.log ... can anybody help me
[16:47] <jr_admin> permission denied
[16:47] <jr_admin> how do i get permission for awstat.conf to write to access.log
[19:19] <ahasenack> jrahmy: there is a bug for that
[19:19] <ahasenack> hm, can't find it now
[19:20] <ahasenack> ah
[19:20] <ahasenack> jrahmy: sorry, not you
[19:20] <ahasenack> jr_admin, who is gone
[19:20] <ahasenack> jr_admin: https://bugs.launchpad.net/ubuntu/+source/awstats/+bug/1252467
[22:16] <hackeron> hi there, quick question, I installed 18.04 and /etc/default/rcS is missing, more specifically I cannot find the FSCKFIX=yes option anywhere. Can someone please point me to the 18.04 equivalent?
[22:35] <nacc> hackeron: it's there in my 18.04; do you have 'initscripts' installed?
[22:37] <JanC> nacc: I assume it wouldn't be used though?
[22:39] <JanC> not sure exactly what the FSCKFIX=yes alternative would be with systemd
[22:40] <hackeron> JanC: hmm, initscripts is no longer used?
[22:40] <nacc> JanC: i'm not sure )
[22:40] <nacc> :)
[22:41] <JanC> https://discourse.osmc.tv/t/automatic-fsck-of-root-filesystem-on-start-stop/9163/2 might be useful
[22:41] <JanC> although it seems old
[22:44] <JanC> basically, adding "fsck.repair=yes" to the kernel command line is supposed to do the same
[22:49] <JanC> you should be able to do that with 'GRUB_CMDLINE_LINUX' in '/etc/default/grub' if you need it added to all kernel command lines automatically (run 'sudo update-grub' afterwards)
[22:49] <JanC> if you need it only once you can do that in grub itself, of course
[22:49] <JanC> (when booting)
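JanC's suggestion amounts to an edit like the following (a sketch; back up /etc/default/grub first, and note the variable may already carry other options you need to keep):

```shell
# In /etc/default/grub, add the option to the kernel command line:
#
#   GRUB_CMDLINE_LINUX="fsck.repair=yes"
#
# then regenerate /boot/grub/grub.cfg so it takes effect on every boot:
#
#   sudo update-grub
```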
[22:51] <JanC> and of course remember that sometimes it breaks your filesystem instead of fixing it  :)
[22:52] <JanC> (well, it would likely "fix" it, but you might lose data)
[23:14] <hackeron> JanC: oh neat! I'll try that thank you :)
[23:14] <JanC> hackeron: I didn't test it, so be careful  :)
[23:56] <Neo4> why does sudo apt-get purge apache2.* not remove apache, and
[23:56] <Neo4> sudo apt-get purge apache2* can remove it?
[23:56] <Neo4> what is the difference between apache2* and apache2.* ?
[23:56] <Neo4> aren't these both equal?
[23:56] <Neo4> does .* mean any symbol?
[23:57] <teward> Neo4: because Bash/APT globbing isn't regex
[23:57] <teward> it's straight globbing
[23:57] <teward> i.e. apache2* is equivalent to the regex /apache2.*/
[23:57] <teward> (between the slashes is the regex)
[23:58] <Neo4> teward: what is the dot equivalent?
[23:58] <Neo4> . - any symbol?
[23:58] <teward> Neo4: i just gave you this...
[23:58] <teward> ***it's not regex***
[23:58] <teward> it is NOT a regular expression, it does not work AT ALL like a regular expression.  The single asterisk will mean Any Character
[23:58] <teward> unlimited number of them
[23:58] <teward> so REGEX(apache2.*) == GLOBBING(apache2*)
[23:59] <teward> so just use apache2*
[23:59] <Neo4> teward: ok, see, I read how to remove php and people there suggested using php.* instead of php* because php* could match ph(any symbol). Might they have been wrong?
[23:59] <Neo4> in this topic https://askubuntu.com/questions/59886/how-to-completely-remove-php
[23:59] <teward> Neo4: yes, they were wrong.
[23:59] <teward> but that's to remove PHP
[23:59] <teward> **NOT** Apache
[23:59] <teward> that's a completely different question.
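teward's glob-vs-regex distinction can be demonstrated with plain shell tools (an illustration with `case` globbing and `grep`, not a claim about apt's own pattern matcher; the package name is just an example):

```shell
# In a glob, '*' alone means "any run of characters" and '.' is literal;
# in a regex, '.' means "any character" and '*' repeats the previous atom.
match_glob() { case "$2" in $1) echo match ;; *) echo "no match" ;; esac; }
match_glob 'apache2*'  'apache2-bin'   # prints: match
match_glob 'apache2.*' 'apache2-bin'   # prints: no match (glob wants a literal '.')
echo 'apache2-bin' | grep -Ec '^apache2.*$'   # prints: 1 (the regex matches)
```

So the glob `apache2*` and the regex `apache2.*` describe the same set of names, which is why the two spellings behave so differently when fed to pattern-matching tools.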