[01:03] <DirtyCajun> PSA: For anyone who hasn't done the 18.04 upgrade yet and uses nginx+php: if you are using php7.0 sockets you will need to change them to php7.2 sockets. (20 minutes of confusion on my end until I figured it out)
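The fix DirtyCajun describes looks roughly like this: a minimal sketch assuming the default Ubuntu nginx layout, where `fastcgi_pass` lines reference the php-fpm socket path. The in-place edit is shown commented since the exact config paths vary per site.

```shell
# Show the substitution on a sample fastcgi_pass line first:
printf 'fastcgi_pass unix:/run/php/php7.0-fpm.sock;\n' \
  | sed 's|php7\.0-fpm\.sock|php7.2-fpm.sock|'
# -> fastcgi_pass unix:/run/php/php7.2-fpm.sock;

# Then apply it in place across the site configs and reload
# (assumes the default layout under /etc/nginx; run as root):
#   sed -i 's|php7\.0-fpm\.sock|php7.2-fpm.sock|g' /etc/nginx/sites-available/*
#   nginx -t && systemctl reload nginx
```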
[04:25] <cpaelzer> good morning
[12:35] <Mava> any open source tips (or anything in general, really) when considering first tests with hyperconverged infrastructure?
[13:00] <tomreyn> first of all, you'll need to dissolve this marketing term to make it clear what you're asking
[13:01] <tomreyn> (and then it's still a *very* broad question, and may not be well suited for a support channel)
[13:10] <gunix> Mava: what do you need? containers, vms?
[13:31] <RoyK> "hyperconverged" translates to something like "we got tired of the old system with separate storage and compute power, and went back to adding drives to our servers again, and then used the existing methods for controlling redundancy, so that system upgrades can be done on both storage and cpu at the same time" (which may or may not be a good idea, depending on your needs)
[14:09] <cyphermox> teward: cloud-init overwriting netplan is a message there for "safety". In some cases it /might/, say if you have cloud-init re-configure the system. On "static" not-cloud-based servers, it won't
[14:25] <Mava> RoyK: exactly, the good old methods are rearing their heads again.
[14:26] <Mava> gunix: either one.
[14:26] <RoyK> Mava: I was being rhetorical
[14:26] <gunix> Mava: if you need VMs, go for openstack. you need containers, go for kubernetes. you need both, go for both
[14:27] <Mava> gunix: yup, that's what I was afraid of =D
[14:27] <gunix> Mava: why afraid? this is the future :D
[14:27] <Mava> any idea how e.g. kubernetes considers the virtual SAN?
[14:28] <gunix> Mava: there are various storage drivers for kubernetes. most people use glusterfs, with heketi (an API that helps kubernetes manage gluster volumes)
[14:28] <Mava> ah
[14:28] <gunix> Mava: other people use CEPH
[14:29] <gunix> mostly the people with heavy openstack background will do ceph when doing kubernetes, since they know ceph inside out (55% of openstack environments use ceph storage, according to the survey from last year)
[14:29] <RoyK> isn't CEPH the preferred one these days, as in over glusterfs?
[14:30] <gunix> RoyK: not for kubernetes.
[14:30] <RoyK> ok
[14:30] <gunix> RoyK: a big kubernetes project is openshift and it comes by default with glusterfs
[14:30] <gunix> RoyK: also, regarding storage, most people use kubernetes via public cloud, and public cloud solutions have their own dark magic with storage, so you can't know unless you have somebody on the inside :D
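The glusterfs/heketi wiring gunix mentions was typically expressed as a StorageClass pointing the in-tree glusterfs provisioner at heketi's REST endpoint. A sketch; the `resturl` value is hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  # heketi REST API that provisions gluster volumes on demand
  resturl: "http://heketi.example.com:8080"
```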
[14:31] <Mava> :D
[14:31] <Mava> but the storage plays a big role; somewhat understandable that the details are not often advertised outside
[14:34] <Mava> nonetheless, thanks gunix and RoyK for the chat! this really got my brain back on track
[14:34] <RoyK> :)
[14:34] <gunix> Mava: just start playing around. it's really fun technology
[14:34] <RoyK> Mava: as for raid1+0, please try zfs and check if it does what you want - it really isn't hard
[14:35] <Mava> and I definitely will (to both)! ^ ^
[14:35] <RoyK> :)
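RoyK's raid1+0 suggestion maps to a pool of mirrored pairs in ZFS ("striped mirrors"). A sketch of the layout, with hypothetical disk names; this needs root and the zfs tools, so it is shown as a configuration recipe rather than something to paste blindly:

```shell
# Two mirrored pairs, striped together -- the ZFS raid1+0 equivalent.
# Disk names are hypothetical; prefer /dev/disk/by-id paths in practice.
#   zpool create tank mirror sda sdb mirror sdc sdd
#   zpool status tank    # shows the mirror-0 / mirror-1 vdev layout
```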
[14:36] <gunix> raid is a taboo subject in cloud
[14:36] <gunix> a lot of people use JBOD
[14:36] <gunix> you need mad skills to do raid the right way.
[14:38] <Mava> JBOD? You've got to be kidding?
[14:40] <Mava> afaik the zfs should be fine nowadays. at least I've been satisfied with zfs so far
[15:55] <gunix> Mava: i am not kidding with jbod :D
[15:56] <gunix> Mava: https://www.youtube.com/watch?v=ShC1eN52CrE
[15:57] <gunix> maybe in the last 2 years people started to adopt this more often
[15:57] <gunix> but this was the status in 2016
[15:57] <gunix> with netapp leading research on this
[16:28] <nacc> rbasak: just checking in on the importer tests review
[16:29] <RoyK> gunix: the thing about jbod is that they use raid-ish things on top
[16:59] <gunix> RoyK: most people just let ceph/heketi/swift handle the disks
[17:00] <RoyK> https://xkcd.com/1987/
[19:15] <jaimehrubiks> I dont even program in python and my env is like hell
[19:21] <nacc> jaimehrubiks: ?
[19:22] <jaimehrubiks> i was talking about the previous message, the xkcd post
[19:22] <sarnold> nacc: https://xkcd.com/
[19:24] <nacc> ah
[23:04] <Howie69>  TCP: request_sock_TCP: Possible SYN flooding on port 80. Sending cookies.  Check SNMP counters.
[23:05] <Howie69> That's a new one on me...
[23:05] <Howie69> Especially since I don't have SNMP running...
[23:07] <sarnold> Howie69: check netstat -s
[23:08] <Howie69> 72 SYN cookies sent
[23:08] <Howie69> ?
[23:09] <sarnold> that's not too much of a flood..
[23:10] <Howie69> Yeah..
[23:10] <Howie69> I was wondering why it was logged
[23:11] <Howie69> I feel rusty...
[23:12] <Howie69> All of my scripting over the years... and now I am trying to hand write my apache2 configs :)
[23:12] <sarnold> hrm I get lost a bit in the kernel sources .. there's a possibility that your web server isn't handling requests quickly enough, and the 'backlog' of listen requests is being overflowed
[23:13] <Howie69> bleh...
[23:13] <sarnold> normally this happens when someone aims a synflood bot at your host
[23:14] <Howie69> Yeah, I've seen that, but didn't find any evidence
[23:14] <sarnold> but 72 syn cookies doesn't feel like much of a flood, since that'd be .01 seconds of data or less
[23:14] <sarnold> so maybe it's just your webserver unable to respond to incoming connections in reasonable time?
[23:15] <Howie69> Or oversensitive logs?
[23:15] <sarnold> I suspect this is a rare event, and thus worthwhile to log
[23:15] <sarnold> without this, it'd be difficult to spot syn floods unless you had network monitoring
[23:16] <Howie69> fair enough
[23:23] <Howie69> I am skipping something...
[23:26] <Howie69> So, my apache2 config is missing something
[23:27] <Howie69> Ah... I can see...
[23:27] <RoyK> sarnold: s/netstat/ss/ ;)
[23:28] <Howie69> Between my old debian servers and this ubuntu server
[23:28] <sarnold> RoyK: heh, I checked the ss manpage first but couldn't figure out how to get the data :(
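The reason the ss manpage doesn't help here: ss lists sockets, not protocol counters. The counters `netstat -s` prints are parsed from /proc, and nstat (also part of iproute2) reads the same source; a sketch:

```shell
# The counters behind "netstat -s" live in /proc/net/netstat and
# /proc/net/snmp; the TcpExt lines include SyncookiesSent.
grep '^TcpExt:' /proc/net/netstat
# nstat from iproute2 pretty-prints the same counters, e.g.:
#   nstat -az TcpExtSyncookiesSent
```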
[23:29] <Howie69> It seems that apache has separated the vhost into a module?
[23:29] <Howie69> I see vhost.load
[23:29] <Howie69> but not a config...
[23:29] <Howie69> Is that for namevirtualhosts?
[23:29] <RoyK> Howie69: /etc/apache2/sites-enabled
[23:29] <RoyK> Howie69: /etc/apache2/sites-available
[23:30] <RoyK> the former contains symlinks to the latter
[23:30] <Howie69> RoyK: Those I am familiar with
[23:30] <Howie69> I've done all of that...
[23:30] <RoyK> ok, sorry
[23:30] <RoyK> I just barged in ;)
[23:30] <Howie69> and actually mirrored my debian server
[23:30] <Howie69> but, it seems namevirtualhost isn't working
[23:31] <Howie69> everything is just going to my default dir
[23:31] <RoyK> just curious, but why move from debian to ubuntu?
[23:31] <Howie69> My server to AWS
[23:32] <Howie69> They didn't offer debian :)
[23:32]  * RoyK started out with slackware 25 years ago or so, moved to redhat, moved to debian, tried different things, moved to ubuntu, and moved back to debian
[23:32] <Howie69> RoyK: I will probably keep all my physical servers on Debian
[23:33] <Howie69> but, For AWS and GoogleCloud, they want to use Ubuntu
[23:33] <RoyK> Howie69: I'm using crowncloud for a few VMs
[23:33] <Howie69> So, it's not as bad as a conversion to their proprietary cloud OSes
[23:33] <RoyK> works well
[23:33] <Howie69> RoyK: I'm just starting.  I prefer my own servers :(
[23:33] <RoyK> and is rather on the cheap side, and supports a lot of distros
[23:33] <Howie69> :)
[23:34] <RoyK> so do I - but not for everything
[23:34] <Howie69> I was down for a few years medically... trying to catch up now
[23:34] <RoyK> welcome back :)
[23:34] <Howie69> I did some great work while I was in the hospital.  I just can't remember any of it
[23:35] <RoyK> I don't think I've had that sort of drugs - but what the hell - welcome back anyway
[23:36] <Howie69> Thanks
[23:38] <Howie69> but, to the matter at hand... I suppose I have to link vhost.load?
[23:38] <Howie69> vhost_alias.load
[23:38] <RoyK> which ubuntu version is this?
[23:38] <Howie69> crap... I forgot the command...
[23:38] <RoyK> lsb_release -a
[23:39] <Howie69> 16.04 I think
[23:39] <Howie69> I was right
[23:39] <RoyK> is the server listening to port 80?
[23:40] <RoyK> netstat -l or ss -l should show
[23:41] <Howie69> Yes
[23:41] <Howie69> I get default site
[23:42] <RoyK> ok, and there's a file for your virtualhost under /etc/apache2/sites-available and a symlink to it in /etc/apache2/sites-enabled ?
[23:42] <RoyK> Howie69: just going through this slowly to make sure
[23:43] <Howie69> RoyK: of course
[23:44] <sarnold> does it have a .conf name?
[23:44] <sarnold> iirc more recent apaches care about the filename
[23:44] <Howie69> it seems like that would be in the apache2 error.log or access.log, but I'll check
[23:45] <Howie69> well, yeah, I have a .conf and .load
[23:45] <RoyK> a .load in sites-enabled? that shouldn't be necessary - it's for modules
[23:46] <Howie69> Sorry, that's in the modules, yes
[23:47] <RoyK> Howie69: ./apache2.conf:IncludeOptional sites-enabled/*.conf
[23:47] <Howie69> You are saying they want a .conf in sites-available?
[23:47] <RoyK> that's from debian stretch
[23:48] <RoyK> probably the same on ubuntu 16.04
[23:48] <Howie69> stick around.  I have to deal with kids.  BBS