[03:21] <nabhash> hi all, I just installed ubuntu server. I have 2 PCIe graphics cards; I want to disable the default drivers and install new drivers. how would I do that?
[03:31] <sarnold> you may have better luck in #ubuntu -- the folks in this channel tend to use machines without video cards
[03:43] <nabhash> @sar
[03:43] <nabhash> sarnold  lol
[03:44] <sarnold> well, okay, so we have video cards in our laptops, but that's just so we can use urxvt :)
[03:47] <nabhash> sarnold i am not familiar with downloading files etc. in text mode, I feel stuck
[03:48] <sarnold> that's fine :)
[03:51] <nabhash> I need little help
[06:12] <lordievader> Good morning
[06:20] <spinningCat> anyone here?
[06:20] <spinningCat> i just purchased a domain and i have ubuntu server machine
[06:21] <spinningCat> i need to direct the nameserver or something, i dont know really. It is my first time working with a server
[06:21] <spinningCat> how can i do that?
[06:30] <lordievader> Do you manage the nameserver or does the company you bought the domain name at do that?
[08:09] <jamespage> cpaelzer: morning - could you give me an opinion on bug 1773449
[08:10] <jamespage> we're seeing issues with what I think is disk cache behaviour with qemu + librbd - I think the issue is either in qemu or librbd but wanted a second opinion
[08:10] <jamespage> we don't see the issue with older versions of ceph/qemu
[08:12] <cpaelzer> hi jamespage
[08:13]  * cpaelzer is reading
[08:19] <cpaelzer> jamespage: if I read correctly, cache=none should be set
[08:19] <cpaelzer> so the data isn't lost in qemu
[08:19] <cpaelzer> for rbd you had some config
[08:19] <cpaelzer> like
[08:19] <cpaelzer> rbd_cache... values
[08:20] <cpaelzer> I don't know if that is read only, but from the mentioning of "dirty" it seems to cache writes as well
[08:20] <jamespage> cpaelzer: yep - I checked that - in the libvirt xml, the cache is set to none by default for the rbd block device
[08:20] <cpaelzer> good on that at least
[08:20] <jamespage> yeah
[08:20] <cpaelzer> but I have more to ask
[08:20] <jamespage> sure
[08:20] <cpaelzer> somewhere data is lost, nobody doubts that
[08:20] <cpaelzer> while there could be a way to make rbd non-caching to fully avoid it
[08:21] <cpaelzer> and feel free to go that way, but I'd expect an admin to do so as his own tradeoff choice of speed/stability
[08:21] <cpaelzer> instead I wonder about something else
[08:21] <cpaelzer> If our PCs crash, why isn't this often an issue? there are caches as well
[08:21] <cpaelzer> the reason is that 90+% of the time fsck will clean it and you'll be good, other than maybe the last written file
[08:21] <cpaelzer> but
[08:21] <cpaelzer> in the log of Ryan I see this
[08:22] <cpaelzer> Warning: fsck not present, so skipping root file system
[08:22] <cpaelzer> This sounds like the never answered https://ubuntuforums.org/showthread.php?t=2375459
[08:22] <cpaelzer> It might be possible that an adapted guest that can run fsck on boot will most of the time recover
[08:22] <cpaelzer> to the level of a normal system that crashes
[08:23] <cpaelzer> If that would work that would be good, and an admin should then be able to choose extra safety by changing e.g. rbd caching config
[08:23] <cpaelzer> jamespage: one question on your comparison to xenial/ocata where it was good, could it be that this image has fsck available?
[08:24] <cpaelzer> so did it not at all have dirty data - or did it come up because it cleaned up via fsck
[08:24] <cpaelzer> ?
[08:24] <jamespage> cpaelzer: used the same cirros image in both tests
[08:43] <cpaelzer> jamespage: too bad for my theory :-)
[08:43] <cpaelzer> so really the older stack didn't lose any buffers
[08:43] <cpaelzer> I can't even think how it would not lose at least a tiny bit
[08:44] <cpaelzer> jamespage: another thing, you said you had cache none
[08:44] <cpaelzer> but that is actually maybe worse now
[08:44] <cpaelzer> http://docs.ceph.com/docs/giant/rbd/qemu-rbd/#running-qemu-with-rbd
[08:45] <cpaelzer> is this running as cache=none AND rbd_cache=true then?
[08:45] <cpaelzer> or is rbd_cache not enabled either?
[08:45] <cpaelzer> because as I read it, if rbd_cache is true, then you'd want to have cache=writeback to do flushes
[08:46] <cpaelzer> feels unintuitive
[08:47] <cpaelzer> and I'm not sure if that will use page cache on top and make it worse
[08:47] <cpaelzer> but worth a check maybe
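For reference, the settings being compared here map onto the libvirt disk definition like this — a minimal sketch, with the pool/image names and target device as placeholders:

```xml
<!-- libvirt domain XML, disk backed by rbd; cache='none' is what
     jamespage has configured, cache='writeback' is the variant
     cpaelzer suggests testing -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='pool/image'/>
  <target dev='vda' bus='virtio'/>
</disk>
```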
[08:47] <jamespage> cpaelzer: I'll test that out and see - I tried with writethrough but that had the same issue
[08:48] <cpaelzer> To enable write-through mode, set rbd cache max dirty to 0.
[08:48] <cpaelzer> on  the rbd side of things
[08:48] <cpaelzer> jamespage: ^^
[08:49] <jamespage> ack
[08:49] <cpaelzer> from http://docs.ceph.com/docs/giant/rbd/rbd-config-ref/#rbd-cache-config-settings
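The write-through setting quoted from the Ceph docs goes in the `[client]` section of ceph.conf on the hypervisor side — a sketch of just the relevant keys:

```ini
[client]
rbd cache = true
; per the linked docs: max dirty = 0 makes the cache write-through,
; so writes are not acknowledged until they hit the cluster
rbd cache max dirty = 0
```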
[08:49] <jamespage> redeploying now to repro
[08:49] <cpaelzer> good luck
[09:31] <jamespage> cpaelzer: ok tried with writeback setting - still get the same issue after a hardkill on the qemu process
[09:31] <jamespage> nice
[09:45] <jamespage> cpaelzer: I think this is a librbd issue; the qemu code in 2.9 (works) and 2.11 (fails) is pretty much identical
[09:45] <jamespage> cpaelzer: cache=none disables the rbd cache option, cache=writeback enables the option
[09:46] <jamespage> but on the later ceph release, we see the block device corruption
[09:46] <jamespage> afaict
[09:46] <jamespage> block/rbd.c for reference
[09:47] <jamespage> unless a flush is not being correctly propagated of course :-)
[09:56] <cpaelzer> jamespage: ok, so with cache=none both caches should be off then?
[09:57] <cpaelzer> that really should not get dirty disk content then :-/
[10:09] <jamespage> cpaelzer: agreed
[10:10] <jamespage> cpaelzer: yeah cache=none results in rbd cache = false via the qemu rbd driver
[10:16] <jamespage> cpaelzer: figured it out
[11:39] <trupheenix> I am attempting to setup a postfix server to send email from a local process to my GApps email domain.
[11:39] <trupheenix> I have setup postfix with TLS.
[11:39] <trupheenix> But I cannot figure out how to setup user accounts with password authentication.
[11:39] <trupheenix> I followed this tutorial https://www.upcloud.com/support/secure-postfix-using-lets-encrypt/
[11:39] <trupheenix> I am able to send email to GApps domain but it gets thrown into spam.
[11:39] <blackflow> trupheenix: you need to install Dovecot, which is IMAP/POP3 daemon but also it does SASL, usable by Postfix.
[11:40] <trupheenix> blackflow, I have installed dovecot also.
[11:41] <blackflow> trupheenix: smtpd_recipient_restrictions = permit_sasl_authenticated    is the part of the postfix configuration (from that tutorial) that sets up SASL authentication for sending. But you also have permit_mynetworks, which, depending on how you've set up the mynetworks list, will authorize those hosts without needing to authenticate
[11:41] <blackflow> .... and all of which has nothing to do with your mail sent to Google ending up as Spam.
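For context, the restriction list from a tutorial like that typically looks like this in main.cf (a sketch, not trupheenix's actual config). Note that permit_mynetworks is evaluated first, which is why hosts inside mynetworks can send without SASL:

```
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination
```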
[11:41] <trupheenix> blackflow, ok
[11:42] <trupheenix> blackflow, so here's how I have it set up.
[11:42] <blackflow> your mail will often end up in google's spam for a ton of reasons outside of your control.
[11:42] <trupheenix> Oh ok
[11:42] <trupheenix> blackflow, at the moment I am able to send email without any user password.
[11:43] <blackflow> trupheenix: which is expected if you're sending from any of the "mynetworks" IPs/hosts
[11:43] <trupheenix> blackflow, ok
[11:43] <trupheenix> blackflow, how does one create user accounts for email? Like a postmaster user account?
[11:44] <blackflow> trupheenix: but eh... permit_mynetworks should come _before_ permit_sasl_authenticated   iirc...
[11:45] <trupheenix> blackflow, ok
[11:45] <blackflow> trupheenix: postfix is a mail TRANSFER agent, an MTA. so you need to set up another agent to receive the mail and store it locally in a database/filesystem. For example, Dovecot via "lmtp". Of course, you can also set up postfix to save incoming mail locally in a maildir, but that's a bit more complex a setup if you also need Dovecot to manipulate those files for IMAP/POP3.
[11:45] <blackflow> centralizing everything through Dovecot is the best thing.
[11:46] <trupheenix> blackflow, I already have dovecot running
[11:47] <blackflow> trupheenix: that tutorial seems to be setting it up for Postfix to store mail directly in a maildir.
[11:47] <trupheenix> blackflow, yes
[11:48] <trupheenix> blackflow, I don't want to use DB
[11:49] <blackflow> you don't have to. the question is only whether Postfix saves mail as local files directly and Dovecot has access to them (via a shared UID), or Postfix hands off to Dovecot via lmtp, so Dovecot is the only authority for storing incoming mail and retrieving it via imap/pop3.
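The lmtp handoff described here is usually wired up like this — a sketch of the standard Postfix/Dovecot pairing; the socket path is the conventional one (inside the Postfix chroot), adjust to taste:

```
# Postfix main.cf: deliver incoming mail to Dovecot over LMTP
virtual_transport = lmtp:unix:private/dovecot-lmtp

# Dovecot conf.d/10-master.conf: listen on a socket where
# Postfix can reach it
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    mode = 0600
    user = postfix
    group = postfix
  }
}
```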
[11:49] <blackflow> this explains how to set up SASL authentication via dovecot for various mechanisms:  https://wiki2.dovecot.org/Authentication
[11:50] <blackflow> "password databases" is what you need, e.g. you can integrate with PAM for access to local system users.
[11:51] <blackflow> personally I like to have that separate. for that you can have "passwd-file", and specify a path to a file that's similar to /etc/passwd but it's independent.
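A passwd-file passdb as blackflow describes might look like this — a sketch; the file paths and hash scheme are assumptions, see the linked Dovecot wiki for the authoritative syntax:

```
# /etc/dovecot/conf.d/auth-passwdfile.conf.ext
passdb {
  driver = passwd-file
  args = scheme=SHA512-CRYPT username_format=%u /etc/dovecot/users
}

# /etc/dovecot/users — one line per account, with password hashes
# generated via `doveadm pw -s SHA512-CRYPT`, e.g.:
# postmaster@example.com:{SHA512-CRYPT}<hash here>
```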
[12:05] <spinningCat> hey
[12:05] <spinningCat> i have an ubuntu server machine
[12:05] <spinningCat> and i have a domain
[12:05] <spinningCat> how to direct my machine to server
[12:06] <spinningCat> is there something like nameserver or something?
[12:09] <blackflow> you mean how to direct your _domain_ to server?
[12:09] <spinningCat> could be that
[12:09] <spinningCat> i dont know about that
[12:09] <spinningCat> it is my first experience
[12:09] <blackflow> yes, you need to set up DNS. the easiest thing to do is with your domain registrar, just set up the A (and other) records.
[12:10] <spinningCat> i will do SSL thing
[12:10] <blackflow> what is "SSL thing"?
[12:10] <spinningCat> certbot
[12:10] <spinningCat> i need to domain
[12:10] <blackflow> you mean you will use SSL/TLS certificates provided by LetsEncrypt....
[12:10] <spinningCat> i guess i need to connect host and domain
[12:10] <frickler> has anyone else seen unbound failing after the security update tonight?
[12:10] <spinningCat> yes that is right
[12:11] <frickler> failing as in not running after the update, needing a manual restart
[12:11] <spinningCat> just execute certbot-auto and show that there
[12:11] <blackflow> spinningCat: you will need to set up DNS (like mentioned above) and then a web server like nginx to respond to LetsEncrypts domain verification challenges.
[12:11] <spinningCat> show my domain when certbot ask?
[12:11] <spinningCat> i have nginx
[12:11] <blackflow> I don't know if any of the LE tools work without a web server (i.e. by starting a listener on port 80 themselves)
[12:12] <spinningCat> i have web server also you mean nginx right
[12:12] <blackflow> I don't use certbot so I don't know. I prefer "dehydrated"
[12:12] <spinningCat> set up DNS
[12:12] <spinningCat> my app work in machine DNS is that what you meant
[12:13] <blackflow> spinningCat: sorry, what?
[12:13] <spinningCat> hmm
[12:13] <spinningCat> my app work on nginx server
[12:14] <spinningCat> i can access my app from outside
[12:14] <spinningCat> and this app publish over DNS right
[12:14] <blackflow> "publish over DNS" doesn't make sense.    do you mean to say that you can access your web application over a domain, not just an IP address?
[12:15] <spinningCat> i can access over IP address
[12:15] <spinningCat> sorry for my english
[12:15] <blackflow> okay, and now you want to access it over a domain?
[12:15] <spinningCat> that's right
[12:15] <spinningCat> i just save that ip as nameserver
[12:15] <blackflow> does your registrar support managing "DNS zones"? Can you provide your servers IP as "A record" in your registrar's control panel?
[12:15] <spinningCat> will that work?
[12:16] <spinningCat> registrar you mean domain provider right
[12:16] <spinningCat> ?
[12:16] <blackflow> well yes you can set up a nameserver yourself, but given that you don't know how to, I would NOT recommend you do that just now.
[12:16] <blackflow> registrar is the company you bought your domain from / registered your domain with.
[12:16] <spinningCat> blackflow,  i dont know about registrar
[12:16] <blackflow> where did you get your domain?
[12:17] <spinningCat> yesterday
[12:17] <blackflow> where, not when :)
[12:17] <spinningCat> but domain is from different company
[12:17] <spinningCat> ah
[12:17] <spinningCat> sorry
[12:17] <spinningCat> namecheap
[12:17] <spinningCat> where
[12:17] <spinningCat> America
[12:17] <blackflow> okay, so NameCheap is "registrar". They also allow you to configure "DNS zones" which you need to set up the "A record" to point to your servers IPv4.
[12:17] <blackflow> I meant "where" as "which company", so NameCheap.
[12:19] <blackflow> there must be some tutorial in NameCheap's KnowledgeBase on how to do that. In short, you need to designate NameCheap to be "the nameservers" for your domain. And then edit "DNS zone" and set up "A record".   With this you have all the terms to google for more info.
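In zone-file terms, the record being described is just this (the domain and address are placeholders; registrar control panels like NameCheap's present the same data as form fields):

```
example.com.      3600  IN  A  203.0.113.10
www.example.com.  3600  IN  A  203.0.113.10
```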
[12:22] <frickler> cpaelzer: jamespage: do you know if sdeziel is available somewhere? I just confirmed that https://bugs.launchpad.net/ubuntu/+source/unbound/+bug/1775833 is triggered by a simple manual install for me, too. i'd consider that a critical bug
[12:23] <cpaelzer> frickler:  sdeziel is a few hours out from waking up, I'd think
[13:04] <DevNull1> Is netcat-openbsd supposed to be a default package on LTS 18.04?
[13:05] <cpaelzer> nice find in the ceph bug jamespage
[13:05] <cpaelzer> gz!
[13:06] <jamespage> cpaelzer: not an obvious one!
[14:28] <blackflow> ikonia: oh, what he did now :)
[14:30] <ikonia> youtube videos on linux, correcting his english and reviewing them
[14:32] <blackflow> ah.
[16:58] <nacc> rbasak: any luck with the snap?
[17:05] <rbasak> nacc: not tried it today, sorry
[17:05] <nacc> rbasak: nothing to apologize for :) was just curious
[17:05] <nacc> did you see my unrelated ping re: the testing changes i'll need to do in my (pending) branch
[17:05] <spinningCat> hey what am i doing here https://hastebin.com/oqifavisev.ini
[17:06] <nacc> spinningCat: what do you mean?
[17:07] <spinningCat> i am getting this https://hastebin.com/ucipuhiway.pas
[17:09] <spinningCat> ho
[17:10] <spinningCat> this is ubuntu server, i am sorry
[17:15] <rbasak> nacc: no sorry. I don't see that scrolling back. Remind me please?
[17:16] <nacc> rbasak: my branch which is trying to fix our importer idempotency (well, it gets us closer, by first just doing all the unique import tags we expect to create and then doing branch manipulation at the end), will need test refactoring, since now import_{,un}applied_dsc no longer does any branch changes, while our tests assume they do
[17:19] <rbasak> nacc: ah. Yeah, that makes sense.
[17:19] <nacc> rbasak: just means it will take longer
[17:24] <axisys> I am still waiting on a response from #sssd .. but how do I upgrade sssd 1.11.8 to sssd 1.13.4 or above.. my sssd config works fine with ubuntu 16.04 which has sssd 1.13.4, but fails on ubuntu 14.04 which has sssd 1.11.8 ..
[17:25] <axisys> the sssd mailing list is suggesting I upgrade sssd .. they are saying it is old.. so I need help with the recommended way to jump from sssd 1.11.8 to 1.13.4+ on ubuntu 14.04 ..
[18:49] <HackeMate> hello
[18:50] <HackeMate> is it possible to execute a command on every successful ssh login? not in ~/.bashrc
[18:52] <blackflow> HackeMate: for what purpose?
[18:52] <HackeMate> i want to know when the user is logged in, since i have had an ongoing bruteforce attack for 2 weeks now
[18:53] <HackeMate> already configured fail2ban, but i'd find this option useful, if it exists
[18:53] <HackeMate> i was reading about ssh ForceCommand but didn't find much info about it
[18:56] <blackflow> I don't know of a way other than via shell/login rcs, like ~/.bashrc or ~/.bash_profile for login shells.
[18:57] <nacc> i mean you could use ForceCommand with your special thing locally and then exec $SSH_ORIGINAL_COMMAND, based on the docs, but dunno if that would work
[18:58] <blackflow> indeed, ForceCommand, covered by sshd_config(5) manpage.
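A sketch of the ForceCommand route nacc and blackflow mention: point `ForceCommand` in sshd_config at a wrapper like the one below (the script path and logger tag are arbitrary choices), which records the login and then execs the originally requested command so the session isn't cut short:

```shell
#!/bin/sh
# Wrapper for sshd ForceCommand: record the login, then hand control
# to whatever the client actually asked to run, or a login shell for
# interactive sessions.
logger -t ssh-login "login: user=$USER from=${SSH_CONNECTION%% *}"
exec ${SSH_ORIGINAL_COMMAND:-$SHELL}
```

Without the final exec the session ends as soon as the script exits, which matches the behaviour HackeMate reports with ForceCommand.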
[18:59] <blackflow> HackeMate: btw, did you disable password login, enable only public key login, for ssh?
[19:03] <HackeMate> yes, can be a solution, but i'd also like to get a prowl notification when i log in, for fun
[19:07] <blackflow> not just "can be" but it is a must these days. also, keep in mind fail2ban won't help against distributed attacks.
[19:07] <blackflow> ideally, you should set up some proper intrusion detection like Snort.
[19:09] <HackeMate> found the solution
[19:09] <HackeMate> using /etc/pam.d/sshd created a new line with pam_exec.so, executes the script and works fine
[19:10] <HackeMate> using forcecommand in sshd_config closes the ssh connection, maybe because the script exits, and then the ssh session that executed it exits too
[19:10] <HackeMate> snort, i'll google it
[19:10] <blackflow> HackeMate: yes, hence nacc's suggestion to spawn off SSH_ORIGINAL_COMMAND from it
[19:11] <blackflow> but pam, yeah, interesting solution.
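HackeMate's pam_exec approach, sketched (the script path is arbitrary): add `session optional pam_exec.so /usr/local/bin/ssh-login-notify.sh` to /etc/pam.d/sshd. Per pam_exec(8), the script receives PAM_USER, PAM_RHOST and PAM_TYPE in its environment, which also answers the question of which user logged in:

```shell
#!/bin/sh
# /usr/local/bin/ssh-login-notify.sh
# pam_exec runs this for session open AND close; only act on open.
[ "$PAM_TYPE" = "open_session" ] || exit 0
logger -t ssh-login "pam login: user=$PAM_USER rhost=$PAM_RHOST"
exit 0
```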
[19:12] <HackeMate> the bad part is i cant get the user who logged in, but still fine enough
[19:14] <blackflow> HackeMate: run `id` from the script to get the uid
[19:14] <HackeMate> oh, truth
[19:15] <blackflow> HackeMate: another, a bit less hacky way, is to have log monitoring. sshd will log all logins.
[19:18] <sarnold> HackeMate: I strongly dislike the idea of using pam_exec for alerting in this fashion; consider using auditd and audispd to get the logs off the machine..
[19:18] <HackeMate> aha, i go read about this