[01:25] I seem to be missing something... [01:27] I used ssh-keygen to make an rsa key for a user on my server [01:27] I tried to put that key into PuTTYgen but it said unable to convert, OpenSSH format [01:28] How do I convert it from that format to something that PuTTY can use and WinSCP can use (for a client)? [01:31] You see, I did it the other way around before... [01:32] I used AWS to make the key and then downloaded it and used puttygen [01:32] what did you try to import? the private key or the public key or the authorized_keys fingerprint line? [01:32] The authorized_keys in .ssh [01:33] are you trying to use the key from putty to log in to a linux machine? [01:33] Used these directions: https://help.ubuntu.com/community/SSH/OpenSSH/Keys [01:33] (I hope paste from ubuntu is ok) [01:35] which key are you pasting into "putty gen" [01:36] you want to paste the 'private key', the one in ~/.ssh/id_rsa [01:37] ~/.ssh/authorized_keys gets the one in ~/.ssh/id_rsa.pub [01:39] the id_rsa and authorized_keys are the same, as I cat'd one to the other [01:39] I downloaded the keyfile, and tried to open it with PuTTYgen, and it said "Failed to open: OpenSSH Format" [01:40] do I need to just paste the contents instead? [01:40] this is the question that Google couldn't give me a straight answer to :) [01:41] your ~/.ssh/authorized_keys SHOULD NOT be the same as your private key [01:42] I misspoke... or did I mistype? let me look closer [01:48] nope, it's the right key [01:48] just wanted to make sure I used the .pub [01:48] well, a simple google search does appear to show a difference between putty keys and openssh keys [01:48] right, but it doesn't show how to convert them besides PuTTYgen [01:49] https://www.chiark.greenend.org.uk/~sgtatham/putty/faq.html#faq-ssh2-keyfmt [01:50] right... that's what I use when I use AWS to generate the keys instead of OpenSSH [01:50] But it complains about it being an SSH2 key [01:50] not sure if that's the issue or not [01:51] "and the pterm and command-line puttygen utilities are not described at all" :( [01:51] because all of the results say to import it into PuTTYgen, which says it cannot open because it's OpenSSH format [01:52] there is a #putty channel [01:52] ssh2 *might* mean the old terrible horrible commercial closed source ssh.com ssh [01:52] although the page I am reading mentions using the private key.. instead of the public [01:52] might get better results there [01:52] or maybe it means the modern protocol. hard to guess. [01:53] I mean, you are running into problems using a putty tool, it's not something people will have experience with here. [01:53] dpb1: I will try that. Just trying to rule out that it's on the server end first [01:53] in general [01:53] yes [01:53] I understand it's involving two systems. :) [01:53] but it does seem like you're more likely to find a puttygen expert in a putty channel, hehe [01:53] I have used windows enough to grit my teeth at the problems encountered. [01:54] anyway, I'm not really sure, sorry Howie69 [01:54] hrm.. here's a simple question that I may have overlooked... [01:55] do I have to restart opensshd after I edit the .ssh/authorized_keys file? [01:55] that FAQ is very clear that it's a problem, but finding the right solution is more difficult. :) [01:55] no, the authorized_keys files are re-read as needed on every attempt [01:55] ok, that checks that one off the list...
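A minimal sketch of the flow being worked out above, assuming the key pair is generated on the server with ssh-keygen and that the command-line puttygen from the putty-tools package is available (the GUI PuTTYgen behaves the same way); file names are the defaults from the discussion:

    # on the server: generate the pair and authorize the PUBLIC half
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa        # writes id_rsa (private) and id_rsa.pub (public)
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # only the .pub belongs in authorized_keys

    # for PuTTY/WinSCP on the client, convert the PRIVATE key to PuTTY's .ppk format:
    # either load ~/.ssh/id_rsa in the PuTTYgen GUI and "Save private key", or
    puttygen ~/.ssh/id_rsa -o id_rsa.ppk               # prompts for the key's passphrase if it has one

Feeding authorized_keys or id_rsa.pub to PuTTYgen is what appears to produce the "unable to convert, OpenSSH format" error; only the private key can be converted.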
[01:55] BUT if *that* isn't working, the usual solution is to set the permissions more restrictively :) opensshd is very picky about permissions on that file. [01:56] but if you're getting an error message from puttygen, best sort that out first [01:56] it's set to 600 [01:57] iirc the ~ and ~/.ssh permissions matter too [01:57] because I run the webserver under the /home/user/web folder [01:58] I'm half tempted to reenable password auth :) [02:01] and the disturbing answer is... [02:02] that PuTTYgen wants you to import the PRIVATE key and enter the passphrase for it, to convert it into their own key format [02:02] that's expected, at least by me [02:03] your private key is always on your client [02:03] it's how the server knows you are you [02:04] what's disturbing about it? it can't derive your private key from your public key [02:07] sleep deprivation it seems [02:07] I knew better than that [02:08] But in this scenario, unlike with the AWS-made keys, even with the key it also asks for the associated key's passphrase [02:09] it needs to decrypt the blob to get the raw numbers back out of it somehow [02:17] now if I could only find an openoffice channel with someone on it... [02:19] does it have to be openoffice? i thought that project was pretty much dead [02:19] but there's 198 folks in #libreoffice [02:26] 198 poor souls [02:27] :) [02:27] :D [02:42] I didn't realize that openoffice is dead... I just updated it last week [02:42] I use openoffice on windows public machines because MS Office is absurdly expensive to use on a public thin client :) [05:00] good morning [05:57] Good morning [07:43] what is a good book about apache 2.4? [07:43] I'm going to read something [07:43] I lack knowledge [07:44] Neo4: I guess you can ask in #httpd, they probably have some tips [07:45] hateball: yes, and you personally, what have you read? [07:46] LAMP is always used, and we need to know it [07:46] hateball: I saw one apache2 book from 2002, is that book appropriate for 2018? ))) [07:47] hateball: I've installed /localhost/manual, but that manual is a very boring read [07:48] hateball: the manual is enough https://ibb.co/cWQiOy [07:51] $ [07:51] $ [07:51] $ [07:51] What should I read about ubuntu server? [07:51] or about linux? [07:52] Neo4: Personally I don't read books. I just try and do something, if I fail I read the man-pages or search online for solutions [07:52] Learning by doing [07:52] hateball: I'm used to reading books, now I can't stop :( [07:53] hateball: I'm going to read 30 books dedicated to linux, [07:53] Good luck! [07:53] and I've read only 7, 2 not about linux [07:55] hateball: see the books that I've read this year https://ibb.co/hN1t6J [07:56] hateball: my record is 30 English books in a year, I want to increase that to 50. Could it help me with English? I read on the internet that some people read 100 books per year. [07:57] hateball: the best was the linux bible [07:58] really very useful, the others are crap [09:14] jamespage: coreycb do you think we could get a point release for neutron 12.0.2 http://tarballs.openstack.org/neutron/neutron-12.0.2.tar.gz [09:14] on UCA xenial i can only see 12.0.1 right now, and there is an annoying bug that is fixed in 12.0.2 [09:26] it's actually pretty critical since it disrupts the network [09:39] when I do a2dissite 000-default.conf I still get this site [09:40] and I don't understand why I can reach http://test2 or http://test if they don't exist in my /etc/hosts [09:40] strange behavior [09:44] - /etc/apache2/sites-enabled is empty ?
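On the permissions point raised above (sshd being picky about authorized_keys, and ~ and ~/.ssh mattering too), a typical restrictive layout looks like this; a sketch only, assuming the default home-directory paths:

    chmod go-w ~                          # home directory must not be group- or world-writable
    chmod 700 ~/.ssh                      # directory accessible only by the owner
    chmod 600 ~/.ssh/authorized_keys      # key list readable/writable only by the owner
    # if key auth still fails, sshd usually logs the exact complaint in /var/log/auth.log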
[09:54] empty [09:54] kiokoman: see my hosts file https://ibb.co/d9QL3y [09:56] kiokoman: in /etc/apache2/sites-enabled there is this [09:56] https://ibb.co/ejciOy [09:57] when I do systemctl stop apache2.service I can't reach it [09:58] how does the browser know about http://test2 if /etc/hosts doesn't have a record for it? [09:58] and why does apache2 show sites if they are all disabled? [10:00] maybe cache, idk [10:03] kiokoman: maybe it's not cache, it notices when apache stops [10:03] Neo4: Do you have a domain set up in `/etc/resolv.conf`? [10:03] something tells the browser that test2 is my server [10:04] now I'll check [10:05] lordievader: no https://ibb.co/mcnWbJ [10:06] if it was cache the browser wouldn't refresh when apache stopped [10:06] Does `test2.localdomain` resolve to your server? [10:07] `dig @127.0.0.1 test2.localdomain` [10:07] yes [10:07] http://test2.localdomain shows the apache main page [10:08] test.localdomain and test1 too [10:08] all sites that I created before [10:08] So, that is why `test2` resolves to your server. [10:09] lordievader: why? [10:10] /etc/hosts doesn't have test2 test test1 [10:10] I thought we put the names in hosts [10:10] You have a domain defined in `/etc/resolv.conf`. This gets appended to `test2`, which is resolvable. [10:11] lordievader: [10:11] I have these two there [10:11] nameserver 127.0.1.1 [10:11] search localdomain [10:12] ddd.localdomain doesn't work [10:13] only the ones I created before, but I removed them [10:14] There is no host with the name `ddd` in your network? [10:15] lordievader: no, nor with test2 [10:15] why do I have a test2 host? [10:15] I removed that from /etc/hosts [10:15] Perhaps your resolver (dnsmasq?) has cached that entry. [10:19] lordievader: see https://paste.ubuntu.com/p/rYf4p66yDd/ [10:20] What is in `/etc/hosts`? [10:20] lordievader: I tried to add a dddd virtual host and then removed it using my shell script. It seems something is wrong with it [10:21] lordievader: https://paste.ubuntu.com/p/8smFrPkRVN/ [10:23] host is a dns lookup utility, you must have son dns server caching it somewhere [10:23] *some [10:23] Yeah, that is my guess too. Dnsmasq (or something) caching the answe. [10:24] answer* [10:26] sudo service network-manager restart [10:26] sudo kill -HUP $(pgrep dnsmasq) [10:27] kiokoman: my local computer where vmware is installed https://paste.ubuntu.com/p/YDjCSB9SHM/ [10:28] it means hosts on my local computer are linked to the virtual machine's hosts [10:29] if a domain is in the parent os (what is it called, the hypervisor os?) then it is available in the virtual machine [10:29] no, I commented it out and now all is right [10:29] do you understand? [10:30] https://kb.vmware.com/s/article/1013644 [10:30] ? [10:30] my vmware sees hosts from /etc/hosts where it's installed [10:32] kiokoman: don't know, just don't create sites on the virtual machine with a name the parent computer has, because they will intersect [10:32] this is not cache [10:33] I'm curious, will my vmware see the ip of my parent? [10:33] ping that [10:35] kiokoman: it can ping it [10:35] see, from the virtual machine we can reach the outside computer [10:35] without a shared folder, using ssh [10:35] and filezilla [10:36] interesting, why do I have 3 ips when I do hostname -I? [10:36] before it was always one [10:41] $ [10:41] $ [10:41] $ [10:41] who knows why there are 3 ip addresses? [10:41] https://paste.ubuntu.com/p/M3dd6Wms4Q/ [10:41] hostname -I shows all ip addresses for the hostname, my hostname is neo, FQDN mail.neo.ru [10:42] for the virtual machine I have only one IP, and I use this "hostname -I" to determine the IP address in my shell scripts, is that right?
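A hedged illustration of the search-domain behaviour lordievader describes above: with "search localdomain" in /etc/resolv.conf, a bare name like test2 has localdomain appended before lookup, so it can resolve even though /etc/hosts has no test2 entry. The names come from the discussion, and whether dnsmasq really is the caching resolver here is an assumption:

    cat /etc/resolv.conf                        # shows: nameserver 127.0.1.1 / search localdomain
    dig +short @127.0.0.1 test2.localdomain     # ask the local resolver for the expanded name
    getent hosts test2                          # resolves via /etc/hosts, then DNS with the search domain appended
    sudo kill -HUP $(pgrep dnsmasq)             # as suggested above: makes dnsmasq drop its cache and re-read /etc/hosts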
[12:03] Neo4: pastebin output of "ip a" [12:04] RoyK: https://paste.ubuntu.com/p/HxHZsWwSmh/ [12:04] RoyK: I think maybe the real host will have one ip, and hostname -I is enough for the shell script [12:05] I can check before running the shell script whether there is one ip [12:06] and I use this ip a few times, one is to put into the apache ServerName [12:06] might be once only [12:06] but interesting, why are there 3 ips [12:07] each computer has 1 IP, doesn't it? [12:07] how could 1 computer have 3 IPs? [12:08] looks like you have two virtual machines there, each with an IP address [12:09] RoyK: yes, I had two before and removed one because I didn't have space, do you think it's the virtual machine that added it [12:09] RoyK: yes, I remember I had only one before [12:10] RoyK: or maybe not, the virtual machine has a different ip [12:11] vmnet1 - virtual machine [12:12] it's a virtual interface [12:12] vmware? [12:13] RoyK: yes [12:13] well, what's the problem? ;) [12:13] RoyK: when we install vmware our host gets one more ip? [12:13] RoyK: no problem right now [12:13] we have a machine, without VMs at work, that has 64 IP addresses [12:14] doing NAT operations for a few thousand users, so it needs a lot of addresses to balance the load (since there's only 65k TCP/UDP ports) [12:16] ok, I don't understand this :) [12:16] this topic [12:23] RoyK: Hahaha, nice 😁 [12:59] ahasenack: would you be OK reviewing https://code.launchpad.net/~racb/usd-importer/+git/usd-importer/+merge/345617 please? [12:59] nacc: ^ [12:59] * ahasenack takes a look [13:05] the following process is hanging [13:05] /usr/bin/python3 /usr/lib/ubuntu-release-upgrader/check-new-release -q [13:06] i am trying to figure out why [13:06] but nothing i do seems to be able to replicate the issue [13:06] it only hangs when invoked from the cron.weekly [13:06] but never when run manually [13:15] this can be due to different environments when you run it vs. when run by cron [13:16] i figured as much, so i attempted to run it with env -i [13:17] after dumping the environment of one of the hung processes (cat /proc/pidnumber/environ) [13:17] does it also hang when you make the cron job run without -q ? i would guess it will, and if it does, this may hint at where it gets stuck. [13:17] hmm, the cron job only runs once a week [13:18] so i guess i could modify the job and wait a week [13:18] i would think the daily cron jobs run in the same environment [13:18] good point [13:18] and the hourly ones, too [13:19] ok, i copied the cron job over to the hourly [13:20] i might also try one in cron.d [13:20] with a really short run time [13:20] right, or root's crontab. [13:21] on a 16.04 system (i do not know what you are running there, you did not say), /usr/lib/ubuntu-release-upgrader/check-new-release is a symlink to /usr/bin/do-release-upgrade [13:21] so you could probably just run "/usr/bin/do-release-upgrade -cq" instead (with or without the q) [13:21] 14.04 and yes it is a symlink [13:22] but i guess that's not the issue, it should not cause it to get stuck [13:23] ya, running it manually like that as root, it does not hang [13:24] when you type 'sh' and, once there, run "env -i /usr/bin/python3 /usr/lib/ubuntu-release-upgrader/check-new-release" - does it get stuck?
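On the hostname -I question: it prints every address on every interface, including VMware's vmnet virtual interfaces, so a host can legitimately report several. A sketch of ways a script might pick a single address instead of assuming there is exactly one; the interface name ens33 is purely illustrative:

    hostname -I                                     # all addresses, space-separated
    hostname -I | awk '{print $1}'                  # naive: just take the first one
    ip -4 -o addr show dev ens33 | awk '{print $4}' | cut -d/ -f1    # address of one specific interface
    ip route get 1.1.1.1 | awk '{for (i=1;i<=NF;i++) if ($i=="src") {print $(i+1); exit}}'   # address used for outbound traffic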
[13:25] ^ as root [13:26] no [13:26] it ran successfully [13:26] okay, it was worth a try ;) [13:26] :) [13:28] it will be some network related issue, such as a mandatory proxy server not set [13:28] no proxies in my environment [13:32] if this changed cron job doesn't help you identify the issue, you can still use the python debugger (as seen in the second example, invoking it as a python module on the command line): https://docs.python.org/2/library/pdb.html [13:32] so my per minute cron job runs fine [13:32] * * * * * root /etc/cron.hourly/update-notifier-common > /tmp/test.log [13:34] this would not catch stderr output [13:34] so at this point i am starting to suspect some race condition or something, as in the job happens to run when some other scheduled job runs and then they bump heads [13:34] sure i guess it won't catch stderr output but it also is not hanging [13:34] right === RoyK^ is now known as RoyK_Heime [13:35] and i agree the race condition or dead locking sounds like a possible explanation [13:35] i was expecting if it failed to see multiple hang jobs in ps -ef [13:36] s/hang/hung/ [13:36] failed, as in hung? then that would be my expectation, too. [13:37] otherwise this theory seems wrong [13:37] right [13:37] i was also hoping the strace -p pid would help [13:37] but i don't get anything i see as useful in the output [13:37] brief as it is [13:38] btw the cron.*ly jobs are run by /etc/crontab - you can modify the times when they run for your testing purposes [13:38] ah, cool thanks [13:39] make sure you keep the original times somewhere, though [13:39] right, good idea [13:40] and keep in mind that running some of the regular cron jobs at a shorter period can have adverse effects. e.g. log rotation [13:41] right [13:41] tobasco: yes i'll get working on another queens update. must've just missed neutron with the last one. [13:46] tobasco: we'll be tracking the queens updates in bug 1771572 [13:46] bug 1771572 in Ubuntu Cloud Archive "[SRU] queens stable release update" [Undecided,New] https://launchpad.net/bugs/1771572 [13:49] * tomreyn afk [14:05] coreycb: tyvm! [14:13] rbasak: [14:13] + if self.raw_repo and self._delete_on_close: [14:13] + shutil.rmtree(self.local_dir) [14:13] shouldn't that be self._local_dir? [14:15] ahasenack: local_dir is a property that returns _local_dir [14:15] ah, I see it [14:15] thx [14:15] I looked for a pattern to follow just now. [14:16] The only other instance in the class itself seems to be in _maybe_quiltify_tree_hash which uses the property. [14:16] So it's consistent at least. [15:02] rbasak: I'm getting leaked tmpdirs in /tmp when running pytest-3 gitubuntu/test_importer.py. master doesn't have this behavior [15:03] Thank you for spotting that. I didn't think to look, assuming that the test caught it. [15:03] I'll take a look now. [15:03] rbasak: at the end of test_importer_main_cleanup_on_exception, the assert makes sure that the directory is empty, but it still exists [15:04] Reproduced [15:05] Ah right [15:09] ahasenack: http://paste.ubuntu.com/p/wjC6DBHRpN/ fixes it I think. [15:10] The diff looks horrible but really the change is in indentation [15:10] http://paste.ubuntu.com/p/VqNGX2wwmz/ is clearer [15:10] commit on top and I can try [15:12] ahasenack: pushed [15:12] ahasenack: though that's the test I'm proposing to drop anyway [15:13] ok [15:13] in other news [15:13] what does this mean?
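To make the temporary cron entry above actually capture errors (the plain > /tmp/test.log redirection loses stderr), and to inspect a hung instance once it appears, something along these lines could be used; a sketch only, the file name is illustrative:

    # /etc/cron.d/check-release-test -- temporary test entry, capturing stdout *and* stderr
    * * * * * root /etc/cron.hourly/update-notifier-common > /tmp/test.log 2>&1

    # once an instance hangs, compare its environment with an interactive shell's and see where it blocks
    PID=$(pgrep -f check-new-release)       # the hung python3 process
    tr '\0' '\n' < /proc/$PID/environ       # its environment, one variable per line
    sudo strace -f -p $PID                  # which syscall it is sitting in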
[15:13] trying: apache2 [15:13] skipped: apache2 (55, 0, 10) [15:13] got: 15+0: a-8:a-1:a-1:i-1:p-3:s-1 [15:13] * ppc64el: libapache2-mod-proxy-uwsgi-dbg, libapache2-mod-shib2 [15:14] It means that libapache2-mod-proxy-uwsgi-dbg and libapache2-mod-shib2 become uninstallable on ppc64el if apache2 were to migrate to the release pocket [15:14] ugh [15:21] Hey. I am getting this error. "mount: mounting https://images.maas.io/ephemeral-v3/daily/bionic/amd64/20180426.2/squashfs on /root failed: No such device". What I basically want is to pxe boot the ephemeral image from a custom pxe server. My kernel is "https://images.maas.io/ephemeral-v3/daily/bionic/amd64/20180426.2/ga-18.04/generic/boot-kernel", initrd is "https://images.maas.io/ephemeral-v3/daily/bionic/amd64/20180426.2/ga-18.04/generic/boot-initrd" and cmdline args "root=squash:https://images.maas.io/ephemeral-v3/daily/bionic/amd64/20180426.2/squashfs ro". Am I missing something? [15:40] kristian2709_: /join #maas [16:04] cpaelzer, ahasenack: I don't think I'll be able to do the chrony review in the time I have left today, sorry. [16:29] rbasak: hmpf, turns out libapache2-mod-shib2 (one of the packages from the error message) is already uninstallable in the previous apache package [16:30] ahasenack: AFAIK, not-worse is the criterion. Perhaps the real cause is the other one? [16:31] yeah, I'm tracing it down [16:31] eventually libxmltooling7 gets installed, and that fails because it wants libcurl3, which is what removes apache [16:31] libxmltooling7 : Depends: libcurl3 (>= 7.16.2) but it is not going to be installed [16:32] I have to understand what's the story with libcurl3 and libcurl4 in the archie [16:32] archive* [16:32] there is libcurl3-gnutls, for example, but no libcurl4-gnutls [16:32] and so on [16:33] and libcurl4-gnutls-dev depends on libcurl3-gnutls (!) [16:38] hm [16:38] xmltooling (1.6.4-1ubuntu2) bionic; urgency=medium [16:38] [16:38] * Switch back to openssl1.0 via newly-added libcurl-openssl1.0-dev, since [16:38] libxml-security is not ported to openssl1.1. [16:46] hi i keep getting emails from awstat.conf that it can't open access.log ...can anybody help me [16:47] permission denied [16:47] how do i get permission for awstat.conf to write to access.log [19:19] jrahmy: there is a bug for that [19:19] hm, can't find it now [19:20] ah [19:20] jrahmy: sorry, not you [19:20] jr_admin, who is gone [19:20] jr_admin: https://bugs.launchpad.net/ubuntu/+source/awstats/+bug/1252467 [19:20] Launchpad bug 1252467 in awstats (Ubuntu) "/etc/cron.d/awstats: wrong user for cron job" [Undecided,Triaged] === miguel is now known as Guest45971 [22:16] hi there, quick question, I installed 18.04 and /etc/default/rcS is missing, more specifically I cannot find the FSCKFIX=yes option anywhere. Can someone point me to the 18.04 equivalent please? [22:35] hackeron: it's there in my 18.04; do you have 'initscripts' installed? [22:37] nacc: I assume it wouldn't be used though? [22:39] not sure exactly what the FSCKFIX=yes alternative would be with systemd [22:40] JanC: hmm, initscripts is no longer used?
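A hedged sketch of how the dependency chain above can be poked at with standard apt tools on a bionic box; the package names come straight from the discussion, and this is not necessarily how it was actually traced:

    apt-cache depends libapache2-mod-shib2      # direct dependencies of the broken package
    apt-cache depends libxmltooling7            # shows the Depends: libcurl3 (>= 7.16.2)
    apt-cache rdepends libcurl3                 # what else still wants the old libcurl3
    apt-get install -s libapache2-mod-shib2     # -s = simulate; prints why it cannot be installed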
[22:40] JanC: i'm not sure ) [22:40] :) [22:41] https://discourse.osmc.tv/t/automatic-fsck-of-root-filesystem-on-start-stop/9163/2 might be useful [22:41] although it seems old [22:44] basically, adding "fsck.repair=yes" to the kernel command line is supposed to do the same [22:49] you should be able to do that with 'GRUB_CMDLINE_LINUX' in '/etc/default/grub' if you need it added to all kernel command lines automatically (run 'sudo update-grub' afterwards) [22:49] if you need it only once you can do that in grub itself, of course [22:49] (when booting) [22:51] and of course remember that sometimes it breaks your filesystem instead of fixing it :) [22:52] (well, it would likely "fix" it, but you might lose data) === hehehe is now known as Guest88251 === Guest88251 is now known as hehehe [23:14] JanC: oh neat! I'll try that, thank you :) [23:14] hackeron: I didn't test it, so be careful :) [23:56] why does sudo apt-get purge apache2.* not remove apache while [23:56] sudo apt-get purge apache2* can remove it? [23:56] what is the difference between apache2* and apache2.* ? [23:56] are they both equal? [23:56] does .* mean any symbol? [23:57] Neo4: because Bash/APT globbing isn't regex [23:57] it's straight globbing [23:57] i.e. apache2* is equivalent to the regex /apache2.*/ [23:57] (between the slashes is the regex) [23:58] teward: what is the dot equivalent? [23:58] . - any symbol? [23:58] Neo4: i just gave you this... [23:58] ***it's not regex*** [23:58] it is NOT a regular expression, it does not work AT ALL like a regular expression. The single asterisk will mean Any Character [23:58] unlimited number of them [23:58] so REGEX(apache2.*) == GLOBBING(apache2*) [23:59] so just use apache2* [23:59] teward: ok, see, I read how to remove php and there people suggested using php.* instead of php* because php* could match ph(any symbol); might they have been wrong? [23:59] in this topic https://askubuntu.com/questions/59886/how-to-completely-remove-php [23:59] Neo4: yes, they were wrong. [23:59] but that's to remove PHP [23:59] **NOT** Apache [23:59] that's a completely different question.
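For the FSCKFIX question answered above, a minimal sketch of the fsck.repair route, assuming a stock GRUB setup on 18.04 (append to any options already in GRUB_CMDLINE_LINUX rather than replacing them):

    # /etc/default/grub -- add the option to every kernel command line
    GRUB_CMDLINE_LINUX="fsck.repair=yes"
    # then regenerate the boot configuration
    sudo update-grub
    # for a one-off boot, press 'e' at the GRUB menu and append fsck.repair=yes to the linux line instead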