[00:02] <drab> on a diff topic, anybody around running zfs root on ubuntu-server?
[00:03] <drab> it seemed not recommended/experimental, but I'm seeing more articles/ppl saying it works, just can't tell how stable/trustworthy it is
[00:03] <sarnold> one of our users put this together https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS
[00:05] <drab> sarnold: yeah I had seen that one and that was my definition of "experimental" :)
[00:05] <sarnold> :)
[00:21] <undriedsea> Any iptables gurus out there?
[00:21] <undriedsea> I am trying to figure out what I am doing wrong...
[00:21] <undriedsea> iptables -t raw -A PREROUTING -i $IFACE -p tcp --dport 80 -m tcp -j CT --notrack
[00:21] <undriedsea> iptables -A INPUT -i $IFACE -p tcp --dport 80 -m tcp -j ACCEPT
[00:21] <undriedsea> iptables -t raw -A OUTPUT -o $IFACE -p tcp --sport 80 -m tcp -j CT --notrack
[00:22] <undriedsea> ^ This rule set doesn't seem to be opening up TCP:30 (stateless fw)
[00:53] <drab> undriedsea: you mean tcp:80?
[00:53] <undriedsea> yea, long day :)
[00:53] <drab> k, just checking, been there myself :)
[00:53] <drab> is the pkt supposed to be destined for the box the rule is on?
[00:56] <drab> undriedsea: and I assume you tcpdump'ed and can see the pkt making it there, yes?
[00:56] <drab> does it get dropped?
[00:58] <undriedsea> correct, I see it come in
[00:58] <undriedsea> 17:58:15.347645 IP XXXXX > YYYYYY.http: Flags [S], seq 1806319247, win 27320, options [mss 1366,sackOK,TS val 9381349 ecr 0,nop,wscale 7], length 0
[00:59] <drab> undriedsea: and do you see that getting dropped?
[00:59] <drab> are you using any LOG statements anywhere by any chance?
[00:59] <undriedsea> No
[01:00] <undriedsea> Let me google how to do that
[01:00] <drab> also what's in CT? maybe it gets dropped there?
[01:00] <drab> also why are you messing with raw and NOTRACK? are you trying to optimize a fw in front of a hosting web server?
[01:08] <drab> undriedsea: also fwiw I don't recall a --notrack, are you sure that's working?
[01:08] <drab> undriedsea: I don't use it so can't claim any experience with it, but from memory/some reading iirc you use it with -t raw -j NOTRACK
[01:08] <undriedsea> yeah, I think I just figured it out, the accept in prerouting wasn't enough, I needed a normal accept too
[01:09] <drab> -j CT --notrack looks odd to me
[01:09] <drab> oh ok, fair enough
[01:09] <undriedsea>  -j NOTRACK is deprecated
[01:09] <undriedsea> -j CT --notrack is the new syntax
[01:09] <drab> ah, well, like I said, been a while :)
[01:09] <undriedsea> no worries
[01:09] <drab> good to know, thank you, always learn something
[01:09] <undriedsea> indeed
[01:10] <drab> so is CT some new built in table? it doesn't exist in my list of tables in man iptables (on a latest ubuntu xenial)
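[Editor's note] The ruleset undriedsea converged on can be sketched as below. This is a hedged reconstruction from the log, not a verbatim paste: `$IFACE` is the placeholder from the original lines, and the final OUTPUT accept is only needed if your filter OUTPUT policy is not ACCEPT. As noted in the log, `-j CT --notrack` in the raw table is the current syntax replacing the deprecated `-j NOTRACK` target.

```shell
#!/bin/sh
# Stateless (untracked) handling of TCP port 80, sketched from the
# discussion above. Requires root. IFACE is an assumed placeholder.
IFACE=eth0

# Skip connection tracking for inbound HTTP packets; the raw table is
# traversed before conntrack ever sees the packet.
iptables -t raw -A PREROUTING -i "$IFACE" -p tcp --dport 80 -j CT --notrack

# Skip tracking for the replies we send back out.
iptables -t raw -A OUTPUT -o "$IFACE" -p tcp --sport 80 -j CT --notrack

# Untracked packets never match ESTABLISHED/RELATED rules, so a plain
# ACCEPT in the filter table is still required -- this was the missing
# piece undriedsea found.
iptables -A INPUT -i "$IFACE" -p tcp --dport 80 -j ACCEPT
# Only needed if the OUTPUT chain policy is not ACCEPT:
iptables -A OUTPUT -o "$IFACE" -p tcp --sport 80 -j ACCEPT
```

Answering drab's last question: CT is not a table; it is a target (from the `xt_CT` module) that is only valid in the raw table.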
[05:56] <azidhaka__> hi! when using canonical's kernel livepatch service, do i still need to do apt-get dist-upgrade to update the kernel a new one is released?
[06:03] <cpaelzer> azidhaka__: livepatch helps you get past the most critical issues without an unplanned outage
[06:03] <cpaelzer> azidhaka__: but you'll still have planned outages to update
[06:04] <cpaelzer> azidhaka__: by the way live patching works, not all issues are fixable by it in general, so an update/restart according to your policy is still required
[06:04] <cpaelzer> azidhaka__: but maintaining a good SLA with less unplanned outages is a huge win for security and uptime
[06:05] <azidhaka__> cpaelzer: so, i should run the typical update, upgrade, dist-upgrade and still reboot on my terms?
[06:06] <cpaelzer> azidhaka__: yes
[06:06] <azidhaka__> cpaelzer: thank you
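[Editor's note] The workflow cpaelzer describes can be sketched as shell. This assumes the snap-based `canonical-livepatch` client on Ubuntu 16.04; the token placeholder is not from the log and must come from Canonical's livepatch sign-up page.

```shell
# One-time setup: livepatch covers critical kernel CVEs between reboots.
sudo snap install canonical-livepatch
sudo canonical-livepatch enable <your-token>   # token from Canonical

# See which CVEs have been live-patched on the running kernel:
canonical-livepatch status --verbose

# Regular maintenance still applies -- livepatch does not replace it.
# Reboot on your own schedule, during a planned window:
sudo apt-get update
sudo apt-get dist-upgrade
sudo reboot
```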
[06:09] <lordievader> Good morning.
[06:09] <cpaelzer> hi lordievader
[06:09] <cpaelzer> lordievader: how are you today?
[06:10] <lordievader> Doing good. How are you, cpaelzer
[06:11] <cpaelzer> fighting the bug flood :-)
[06:11] <lordievader> Good luck ;)
[06:11] <lordievader> They never stop coming...
[06:12] <cpaelzer> by design, since by applying iteration every software can be written as one broken line of code
[06:13] <cpaelzer> 1. every software can be shrunk by a line
[06:13] <cpaelzer> 2. every software has a bug
[06:13] <cpaelzer> 3. iterate
[06:13] <cpaelzer> I wonder if #1 makes it "no line of code" eventually and that is broken by not doing anything ... hmm
[06:14] <lordievader> That would be interesting, I guess the bug 'it doesn't work' holds true if a program is zero lines of code.
[07:08] <maswan> is this meant to be missing? https://cloud-images.ubuntu.com/releases/16.04/release  - the link to latest release?
[07:19] <lordievader> maswan: Guess something was forgotten, I guess you want: https://cloud-images.ubuntu.com/releases/16.04/release-20170307/
[07:29] <maswan> hm. actually, let me take this to the vanguard of -mirror, that's probably the appropriate place
[07:30] <maswan> lordievader: yes, but that's significantly harder to script against. :)
[07:38] <lordievader> True...
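[Editor's note] maswan's point is that without a stable `release/` symlink, a script has to discover the newest `release-YYYYMMDD` directory itself. One hedged workaround is scraping the directory listing, sketched below; parsing HTML listings is fragile, so the simplestreams metadata under cloud-images.ubuntu.com is the more robust alternative.

```shell
#!/bin/sh
# Find the newest release-YYYYMMDD directory for 16.04 cloud images.
# BASE is taken from the URL in the log above.
BASE=https://cloud-images.ubuntu.com/releases/16.04
latest=$(curl -s "$BASE/" \
  | grep -o 'release-[0-9]\{8\}' \
  | sort -u | tail -n1)
echo "$BASE/$latest/"
```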
[08:02] <cpaelzer> beisner: hiho on bug 1664737 are you sure UCA-N has the yakkety binaries?
[08:03] <cpaelzer> beisner: I thought not and a quick check did not bring in libvirt/qemu from Y, see http://paste.ubuntu.com/24279676/
[11:50] <kol65> hi guys, any chance of not needing to boot a server twice a week?
[11:51] <ikonia> ?
[11:51] <kol65> updates, webserver, needing reboot
[11:51] <kol65> 14.04
[11:52] <ikonia> what updates need reboots
[11:52] <maswan> Canonical's livepatch
[11:52] <ikonia> should really only be libc and the kernel
[11:52] <maswan> oh, lots of them
[11:52] <ikonia> really ?
[11:52] <ikonia> what other than libc and the kernel is needing an update
[11:52] <maswan> yeah, libc and kernel
[11:52] <maswan> and since kernel is a couple of reboots per month..
[11:52] <kol65> *what was todays?
[11:52] <ikonia> I'm sure there are others, but they should be edge cases
[11:52] <blackflow> dbus :)
[11:52] <maswan> but since kernels are the frequent cause, livepatch is the solution
[11:53] <kol65> does get a bit much, my centos servers are like once every 3 months
[11:53] <ikonia> kol65: what updates are causing you to need reboots so much though ?
[11:53] <kol65> thought kern 4 was going to sort this out
[11:53] <kol65> ikonia:  kernel  etc etc
[11:53] <ikonia> how would kernel version 4 change the update pattern
[11:53] <kol65> regressions
[11:53] <ikonia> kol65: etc etc...no sorry
[11:53] <kol65> ok
[11:53] <ikonia> the kernel and libc are pretty much it
[11:54] <ikonia> and they are not released weekly as you state
[11:54] <ikonia> hence why I'm interested what updates are causing you to require reboots as often
[11:54] <kol65> like twice this week, hard to get through a week without a reboot
[11:54] <ikonia> you keep saying that
[11:54] <kol65> yeah, miffed :)
[11:54] <ikonia> but yet you don't say what is requiring a reboot
[11:54] <ikonia> blackflow: nice additional spot with dbus
[11:55] <kol65> read the security updates, usually says at the bottom
[11:55] <ikonia> can you give me an example of one
[11:55] <kol65> ok
[11:55] <ikonia> (please)
[11:55] <kol65> one moment
[11:55] <ikonia> sorry forgot my manners there for a moment
[11:56] <blackflow> well, according to our logs, we rebooted our 16.04 servers once every 8-12 days due to kernel updates in the past four months.
[11:56] <kol65> https://www.ubuntu.com/usn/ there is the one to start with
[11:56] <maswan> yeah, once every 8-12 days seems right
[11:56] <kol65> lets pick them out now
[11:56] <ikonia> blackflow: more than it should be - kernel updates shouldn't be that frequent
[11:57] <OerHeks> 24th libc and today 30th a kernel, no big deal .. don't you have those updates with centos too?
[11:57] <maswan> 29th, 15th, 7th in march
[11:57] <kol65> https://www.ubuntu.com/usn/usn-3247-1/ another do you really want me to continue?
[11:57] <maswan> just kernel updates
[11:58] <maswan> on the up side, we get a much better flow of security patches than centos
[11:58] <ikonia> kol65: yes please
[11:58] <kol65> yeah, security is great
[11:58] <ikonia> kol65: as thats a security system inside the kernel
[11:58] <ikonia> so yes, I'd like another please
[11:59] <ikonia> and you maybe could do that without a reboot with a bit of thought, I'm not 100% sure off the top of my head though
[11:59] <kol65> when you have loads of servers running Ubuntu and major players as clients it's a pain, sry
[11:59] <maswan> Sometimes I'm a bit miffed on that side when it goes weeks for redhat to make a rhel kernel update for something
[11:59] <blackflow> maswan: yeah, and personally I find it a nice balance between relatively recent kernel and stability updates.
[11:59] <ikonia> more so if you're dealing with major players
[11:59] <ikonia> kol65: you should be able to manage that
[11:59] <blackflow> maswan: and then it takes centos even more weeks to merge
[12:00] <maswan> anyway, for kernel updates, there exists a [non-free] solution
[12:00] <maswan> for rebootless upgrades
[12:00] <blackflow> maswan: it's free for up to few machines
[12:00] <kol65> ikonia: you looking for a job, cant go past 25k euro tho ;)
[12:00] <ikonia> I'm not comfortable with live patch as a production ready solution
[12:00] <maswan> blackflow: yes, but not Free :)
[12:00] <blackflow> and you get to be the beta for paying customers, but hey.... free rebootless upgrades :)
[12:00] <ikonia> kol65: not trying to be rude, but if it's major players as you say, your infrastructure should be setup to deal with service management
[12:01] <blackflow> maswan: are we talking about the canonical livepatch service? I thought it was free for just a few machines
[12:01] <kol65> ikonia: I blagged a bit, but major for me
[12:01] <ikonia> same point
[12:01] <maswan> blackflow: Yes, it is
[12:01] <ikonia> you really need to setup your infrastructure and practices to account for updates
[12:02] <kol65> rub salt, ty
[12:02] <ikonia> sorry, that wasn't the intention
[12:02] <kol65> np
[12:02] <ikonia> but it's something you should look at now if this is causing you this level of upset
[12:02] <kol65> indeed
[12:02] <ikonia> patching and maintenance is a fact of life and something you should be prepared for
[12:03] <kol65> prevention is always better than cure though
[12:04] <blackflow> maswan: oh you meant free as in speech
[12:05] <kol65> nah beer
[12:06] <maswan> blackflow: yeah. but it is a neat service. been thinking of applying it to some servers at work. but we ended up fixing our applications to the point where we can do downtimeless reboots by means of service migrations instead.
[12:07] <kol65> its not the downtime as that is like a minute or so but just having to boot
[12:07] <blackflow> maswan: which also covers for quick recovery in case of failure, so it's a win-win
[12:07] <kol65> also you get these fanatics who offer services that crucify you if your server is down at any time
[12:07] <maswan> for our hpc cluster nodes we do it all automatically, the only downside is the draining of jobs until the whole node is free, so we take a hit in throughput
[12:09] <kol65> sry anyway but I have this effect on irc
[12:09] <kol65> people start to chat
[12:09] <kol65> I should charge
[12:10] <kol65> and usually around 1 hr I am kicked :)
[12:10] <blackflow> lol
[12:10] <kol65> its my life
[12:15] <blackflow> the truth is, that kind of industry is very demanding and ungrateful. if you get crucified for any down time, you should then have a setup for that and probably charge it quite a lot. not patronizing, just sharing my own experience in "the industry".
[12:15] <kol65> yeah its tough eh
[12:16] <kol65> just services that say it's bad because you reboot a server, that's fake news
[12:17] <blackflow> for example our particular use case tolerates such reboots. when it comes to the point that it won't be tolerable, there's always ip based failover, or if you wanna get fancy, virtualization and live migration
[12:17] <kol65> nerd :)
[12:17] <maswan> honestly, that was one of our first wins by moving into ganeti for virtualisation of services, VM reboots are much faster than hardware, and hardware reboots done after live migration of all the VMs away from the HW
[12:17] <maswan> ah, heh. :)
[12:18] <maswan> but 3 seconds of downtime before the webserver starts responding again when you reboot a VM is much nicer than waiting 3 minutes for bios and blaha.
[12:19] <kol65> I just do dedicateds, the thought of offering shared hosting fills me with fear
[12:19] <blackflow> maswan: try 5-10 when your setup has to check pxe to see what it should boot into :)
[12:20] <kol65> lol this laptop throws up a pxe error, what is it :)
[12:20] <kol65> on boot
[12:21] <kol65> seems to think it's connected by wire by the looks of it, or at least looks for it
[12:21] <blackflow> kol65: well, I had a client once who complained I wanted to reboot his machine once or twice a month. I did managed dedicated hosting. Sure, no problem I said, you'll need redundancy and blah blah and oh yeah, your cost would go 10x just on infra, plus additional maintenance costs. he quickly accepted reboots were just fine :)
[12:22] <kol65> I dont like to fleece people though
[12:22] <blackflow> wasn't fleecing. real cost of setting up failover pairs, additional DNS, monitoring, testing, ...
[12:23] <kol65> ty, noted
[12:23] <blackflow> I mean, we're talking about going from "here's a baremetal machine and I'll take care of software and updates" to a complete fault tolerant infrastructure
[12:24] <kol65> yeah, I do it at too low a rate
[12:27] <blackflow> it becomes significant when all these "public clouds" that promise redundancy and what not, start failing because they're not as redundant as advertised. a 5€ VPS at Leaseweb, advertised as fault tolerant, live migration in case of failure etc... was down two weeks because their storage layer failed including its redundancy. it "filled up" and fixing it required datacenter expansion, new
[12:27] <kol65> I think it's good to underestimate yourself and suddenly realise that you're not as thick as you thought
[12:27] <blackflow> hardware, experts brought in.  the funny part is it happened TWICE in a two-year period. one would think they learned the first time it happened.
[12:28] <kol65> blackflow: yeah there have been some major fkups etc with the biggest
[13:23] <maswan> blackflow: our pxe is fast, but I was optimistic about 3 minutes, we have HP servers, so that's more like 6-7 minutes before they ping
[13:24] <blackflow> maswan: yeah HP machines were what I had in mind :)
[13:42] <beisner> cpaelzer, ack you're right
[13:43] <cpaelzer> beisner: thank you a lot - you just scared the hell out of my last SRU activity :-)
[13:44] <cpaelzer> beisner: might I ask if you have arm boxes in the openstack lab or are those things driven by the HWE Team usually?
[13:44] <beisner> cpaelzer, i've got some.  :)  all in use atm but could arrange access next wk if necessary.
[13:45] <cpaelzer> beisner: this was more a generic question than the request to test this particular bug
[13:45] <cpaelzer> beisner: although over time I'd expect someone on your team might end up being the only one with the resources to track that down
[13:46] <beisner> cpaelzer, ah, right.  yep generally-speaking we can work out short-term access to a machine for these types of bugs.
[13:56] <lucidguy> Should I use intel rapid storage fake raid on a linux server, or disable it and manually create my md volumes etc?
[14:03] <lunaphyte> hi.  i've increased the size of a virtual disk, but the os still sees the old size.  how can i make it see the new size, without rebooting?
[14:09] <lunaphyte> ah.  echo 1 > /sys/block/sdd/device/rescan
[14:10] <lunaphyte> it seems that rescan-scsi-bus doesn't quite rescan as thoroughly as one might expect
[14:24] <nacc> lunaphyte: i think you wanted the --forcerescan option
[14:26] <nacc> lunaphyte: ah maybe because rescan-scsi-bus is for rescanning buses, not disks?
[15:00] <lunaphyte> nacc: yeah, i guess
[15:09] <jbicha> rbasak: hi, I'm pinging again about LP: #1667195, Sweet5hark is out this week but I believe he was fine with it
[15:09] <jbicha> https://irclogs.ubuntu.com/2017/03/07/%23ubuntu-desktop.html#t16:04
[15:12] <rbasak> jbicha: thanks. OK, I'll drop it from the server seed.
[15:13]  * rbasak wonders if that needs an FFe.
[15:14] <jbicha> the other last thing that kept gconf and friends in main was emacs25 which finally migrated to zesty (without that dependency)
[15:15] <jbicha> my opinion is that since it wasn't shipped but only listed as "supported" that it wouldn't need an FFe
[15:15] <rbasak> Good point. It wouldn't make any changes to an image.
[15:16] <rbasak> jbicha: there's also supported-sysadmin-desktop: * mdbtools-gmdb
[15:16] <rbasak> Does that impede progress for you?
[15:18] <jbicha> yes, I think it needs to be unseeded there too to allow gconf, etc. to drop to universe
[15:19] <rbasak> I'm less willing to touch a desktop seed :-/
[15:19] <rbasak> Server seed changed
[15:19]  * rbasak asks in #ubuntu-desktop
[15:20] <jbicha> thanks
[15:30] <powersj> nacc, I used the server team package list and tried doing a pull-lp-source. Those packages that do not have source in zesty are in that 3rd list
[15:54] <faekjarz> Hey there! How do i set a specific order, in which modules are loaded / probed on boot?
[15:58] <kol65> Hi can someone tell me what minimal install means plz?
[15:58] <cpaelzer> rbasak: nacc: on sponsoring if one could look at bug 1671767 that would be great
[16:00] <cpaelzer> rbasak: nacc: the reporter is very active and I want to encourage by getting that moving, yet I can't upload asterisk on my own
[16:00] <nacc> cpaelzer: ack, will review today
[16:00] <cpaelzer> thanks
[16:00] <kol65> bugs eh
[16:00] <kol65> errors
[16:01] <rbasak> cpaelzer: thank you for following up on that. Add it to your "why I should be MOTU" list please :-)
[16:01] <nacc> kol65: you don't know what a minimal install is?
[16:01] <kol65> nacc:  minimal for what?
[16:01] <ogra> everything ?
[16:02] <kol65> ?
[16:02] <ogra> it is enough of a system to boot and run the package manager
[16:02] <nacc> kol65: you asked a question about what minimal install means
[16:03] <nacc> kol65: i was clarifying if you were literally asking for the definition
[16:03] <kol65> ogra:  thanks dude
[16:03] <kol65> ogra:  perfect explanation
[16:03] <ogra> :)
[16:04] <kol65> hehe
[16:04] <kol65> just noting that down
[16:04] <kol65> so its the basic platform
[16:05] <kol65> foundation to build on
[16:07] <kol65> are minimal installs strictly regulated?
[16:08] <kol65> what is actually in a minimal install?
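[Editor's note] ogra's definition ("enough of a system to boot and run the package manager") can be made concrete with `debootstrap`, which builds exactly such a base system into a directory. The suite, target path, and mirror below are illustrative examples, not from the log.

```shell
# Build a minimal Ubuntu base system -- just enough to run apt --
# into a directory. Requires the debootstrap package and root.
sudo debootstrap xenial /srv/minimal http://archive.ubuntu.com/ubuntu

# Inspect what "minimal" actually contains; typically a few hundred
# packages rather than the thousands on a desktop install:
sudo chroot /srv/minimal dpkg -l | less
```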
[16:14] <cpaelzer> rbasak: I have already last week
[16:14] <rbasak> :)
[16:53] <nacc> worktoner: echo 1 > /sys/block/<sdwhatever>/device/rescan ?
[18:18] <ayush1706> Hey
[18:18] <ayush1706> Anyone used or using kernelcare here?
[18:54] <jge> hey all, trying to get a LAMP stack going with PHP 7 and PHP-FPM but for some reason this box is not cooperating and not showing FPM/FastCGI as the API when I do a quick php()info test.
[18:55] <jge> My steps were pretty much get PHP set up as: sudo apt-get install php php-mysql php-fpm libapache2-mod-fastcgi
[18:55] <jge> enabled the following modules actions fastcgi alias, added a config inside /etc/apache2/conf-available/php7.0-fpm.conf
[18:56] <jge> enabled it with a2enconf restarted apache and fpm and nothing..
[18:56] <jge> what did I miss!? :(
[19:07] <nacc> jge: any errors in the logs?
[19:09] <nacc> jge: and i assume you meant phpinfo(); ?
[19:10] <jge> nacc: yes sorry, phpinfo();
[19:11] <nacc> jge: np, just making sure it wasn't something easy :)
[19:11] <jge> I found it, forgot to disable mod_php :(
[19:11] <nacc> jge: ah :)
[19:11] <jge> yikes
[19:11] <jge> all good now, thanks
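[Editor's note] jge's fix, sketched as shell. mod_php was still enabled and claiming `.php` files, so phpinfo() kept reporting the Apache module SAPI instead of FPM/FastCGI. Module and unit names assume the Ubuntu 16.04 php7.0 stack mentioned in the log.

```shell
# mod_php (the in-process Apache SAPI) takes precedence over the
# FastCGI handler, so disable it first:
sudo a2dismod php7.0

# Enable the handler config created in /etc/apache2/conf-available/
# (as jge described above):
sudo a2enconf php7.0-fpm

# Restart both services to apply:
sudo systemctl restart apache2 php7.0-fpm

# phpinfo() should now report "FPM/FastCGI" as the Server API.
```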
[20:54] <jge> anyone here ever deployed VTiger (CRM) on Ubuntu Server?
[20:54] <bekks> jge: whats your actual question besides that poll?
[20:56] <jge> bekks: getting an HTTP Error 500, really frustrating as I've already enabled debug logging on apache (which doesn't show anything relevant), PHP is blank (CRM is a php app) and VTiger's internal logging doesn't show anything
[21:13] <nacc> jge: if you get a 500, apache2's logs will tell you why, typically