[00:03] <geigerCounter> tomreyn: ^
[00:04] <mwhudson> geigerCounter: you can configure what systemd does to shut down a service
[00:04] <geigerCounter> How do I do that beyond specifying it in ExecStop?
[00:05] <mwhudson> geigerCounter: there is also KillMode
[00:05] <geigerCounter> mwhudson: Tell me more/link me to docs?
[00:05] <mwhudson> geigerCounter: man systemd.kill
[00:05] <mwhudson> geigerCounter: but i think execstop is what you want here
[00:06] <mwhudson> geigerCounter: you need your execstop script to wait for the process exit though
[00:06] <mwhudson> hm sounds like you are trying that
[00:07] <mwhudson> so what you are doing _should_ work, i don't know off hand why it would not be
[00:07] <geigerCounter> Yeah.
[00:07] <geigerCounter> I don't either really. Today is my first day ever making a systemd unit.
[00:08] <geigerCounter> What happens when I use service stop is the java process just dies and then since it has no more child processes, screen exits normally. This isn't the expected behaviour.
[00:09] <tomreyn> geigerCounter: sorry, i lost track there.
[00:09] <geigerCounter> It should push 'stop\n' to the screen and the server's shutdown sequence should begin.
[00:09] <geigerCounter> Instead of just dying.
[00:09] <blackflow> geigerCounter: did you look at the KillMode option?
[00:10] <geigerCounter> Looking at it now on the suggestion of mwhudson
[00:10] <blackflow> also, that's not really usual or normal way services behave. essentially you want systemd to start your screen as an oneshot "service" and forget about it. then you deal with your screen and minecraft manually.
[00:11] <blackflow> this situation has been asked about before and the only answer is: open a bug report to minecraft devs and have them build proper service management controls into the daemon, namely respnding properly to TERM or QUIT signals.
[00:12] <geigerCounter> There is no daemon. It was never intended to be run this way. It's intended to be run as an interactive console application using Java's console tools with stdin and stdout.
[00:12] <geigerCounter> I want to run it *as* a daemon, using screen.
[00:12] <blackflow> even systemd, the greatest controversy since Linus began his hobby, expects services to understand these signals.
[00:12] <geigerCounter> Although yes, it should learn to respond to signals correctly.
[00:12] <geigerCounter> >_<
[00:13] <blackflow> geigerCounter: perhaps you can whip up some stdin command injection from a shell script exec'd via ExecStop
[00:13] <geigerCounter> That's basically what I'm doing using screen's "stuff" command
[00:13] <blackflow> use KillMode=none so it doesn't care about killing unresponsive processess (which this essentially is)
[00:13] <tomreyn> maybe think of some other (more common, maybe in ubuntu) java processes which might have the same issue and see whether a better solution was found there.
[00:13] <geigerCounter> Which allows you to "stuff" characters into a detached screen as though you were typing it in yourself.
[00:14] <geigerCounter> But the problem is systemd isn't paying attention to what I asked ExecStop to do.
[00:14] <geigerCounter> Will try that and report back.
[00:14] <blackflow> systemd will normall send a KillSignal (TERM by default) and if the process doesn't quit in TimeoutStopSec, it'll KILL It and all the kids. modulo KillSignal and ExecStop existing
[00:14] <blackflow> sorry, modulo KillMode
[00:14] <tomreyn> sounds sane to me.
[00:15] <geigerCounter> Oh I see. So that's probably part of the problem. I have my timeout set too low.
[00:15] <blackflow> geigerCounter: read the systemd.kill manpage
[00:15] <geigerCounter> I... rookie mistake.
[00:15] <geigerCounter> I'm reading it.
[00:15] <geigerCounter> Thank you.
[00:15] <geigerCounter> :)
[00:17] <blackflow> geigerCounter: also the first paragraph for ExecStop= entry in systemd.service(5) manpage
[00:18] <blackflow> it explains what's going on exactly, and which value of KillMode you need (none)
[07:06] <lordievader> Good morning
[08:14] <kstenerud> cpaelzer I got a build failure for amd64 (but not the other archs), but it wasn't  a build error. The build process and tests completed, then
[08:14] <kstenerud> Build killed with signal TERM after 150 minutes of inactivity
[08:14] <kstenerud> E: Build failure (dpkg-buildpackage died)
[08:16] <cpaelzer> kstenerud: well this could be a real issue (background processes hanging or such)
[08:17] <cpaelzer> kstenerud: but TBH most likely it is something awkward, you can ask the ops if soemthing known happened or just hit rebuild on your build
[08:17] <kstenerud> ok
[12:08] <ahasenack> good morning
[12:47] <kstenerud> ahasenack: I'm just not sure what would cause this kind of issue... It's a bug where php was closing file descriptors before a call to curl finished
[12:48] <kstenerud> and if it's testing whether a crash occurs or not, why would the entire testing rig die?
[12:48] <ahasenack> kstenerud: I'd try to run that test in isolation
[12:48] <ahasenack> kstenerud: well, it dies due to a timeout
[12:49] <ahasenack> side "A" is talking to side "B", side "B" crashes, side "A" doesn't notice and keeps waiting
[12:49] <kstenerud> It dies like this:
[12:49] <kstenerud> TEST 3443/14261 [ext/curl/tests/bug48203_multi.phpt]
[12:49] <kstenerud> E: Caught signal ‘Terminated’: terminating immediately
[12:49] <kstenerud> Then the calling process times out
[12:49] <ahasenack> check that test in isolation, is my suggestion
[12:50] <kstenerud> yeah I'm running an sbuild locally. If it crashes I'll shell in and see what's up
[13:33] <frickler> jamespage: coreycb: would it be possible to get an updated erlang version into UCA for xenial? like possibly the one from bionic? see the latest comments in https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1783203
[14:24] <kstenerud> So when I run the build locally, it works :/
[14:32] <JuJUBee> Hey all.  I want to setup internal dns in my classroom.  I have an LTSP server running and dnsmasq is installed.  I have a separate server acting as my gateway running isc-dhcp server.
[14:32] <JuJUBee> I ultimately want to be able to access servers using fqdn internally like LTSP-Server.foo.local
[14:32] <JuJUBee> I know very little about setting up dns.  Any help?
[14:37] <lordcirth> JuJUBee, you can either set up DNS on your gateway, or on your LTSP server. Then set isc-dhcp to advertise the DNS server's IP
[14:39] <JuJUBee> lordcirth, dhcp currently advertising 2 dns servers (open dns), can I just add the third IP of my dnsmasq machine?
[14:40] <lordcirth> JuJUBee, you could, but if you want them to always use your DNS server, you should probably set only that.
[14:44] <JuJUBee> lordcirth, but if I only want to get to public sites, don't I need external dns servers?
[14:44] <lordcirth> JuJUBee, your DNS server should be configured to use those external DNS servers itself, for anything it doesn't manage.
[14:46] <JuJUBee> So this seems far beyond my expertise.  I was hoping it would be a fairly simple configuration
[14:48] <leftyfb> JuJUBee: the link I gave you shows you how to setup bind9
[14:48] <lordcirth> JuJUBee, it is fairly simple. DHCP points clients to your dnsmasq. dnsmasq answers for .local, and for everything else, does a lookup to the outside and caches it.
[14:55] <JuJUBee> leftyfb, bind kind of intimidates me.
[14:56] <JuJUBee> lordcirth, I need to see how to do that in dnsmasq.  I will read on it.
[14:56] <leftyfb> JuJUBee: and it will continue to intimidate until you try it, then it won't :)
[14:58] <JuJUBee> leftyfb, I do have that link open and have been reading it along with dnsmasq setup.  dnsmasq seems more appropriate for my simple needs.  I just don't want to misconfigure bind and let the nastys in or prevent appropriate requests from getting out...
[14:58] <sdeziel> abusing .local isn't a good idea but maybe that was just a made up example
[14:58] <leftyfb> JuJUBee: this DNS server should NOT be accessible to the internet. Regardless of which solution you go with.
[14:59] <leftyfb> sdeziel: abusing?
[14:59] <sdeziel> leftyfb: .local shouldn't be used for internal purposes other than mDNS
[14:59] <leftyfb> sdeziel: why is that?
[15:00] <sdeziel> leftyfb: https://serverfault.com/questions/17255/top-level-domain-domain-suffix-for-private-network/937808#937808
[15:01] <sdeziel> leftyfb: systemd-resolved chokes on .local names unless you configure it in a special way
[15:01] <leftyfb> sdeziel: it's been working on my local network for about a year now
[15:01] <sdeziel> leftyfb: do you also provide something.local as your search domain?
[15:02] <leftyfb> yes
[15:02] <sdeziel> leftyfb: that's why
[15:02] <sdeziel> leftyfb: this enables the special handling of .local by systemd-resolved
[15:02] <sdeziel> leftyfb: but normally .local is reserved for mDNS
[15:02] <leftyfb> oh wait, sorry. No. I only append .local
[15:03] <leftyfb> so <hostname>.local
[15:03] <leftyfb> works just fine on my network. I can resolve everything locally
[15:03] <sdeziel> leftyfb: yeah, as long as local is in the search domain, systemd-resolved will try to accommodate for this abuse
[15:04] <sdeziel> leftyfb: as an experiment, drop local from you search domain, restart systemd-resolved then try to resolve <hostname>.local, should fail
[15:04] <leftyfb> of course it will. Unless I have avahi/bonjour setup on every client
[15:06] <sdeziel> in theory, resolvers shouldn't attempt mDNS resolution when there are 2 labels with the last one .local but that is not implemented everywhere
[15:06] <sdeziel> that's why it's best to leave .local and everything under it reserved to mDNS
[15:07] <sdeziel> leftyfb: you said off course it would break without the search domain but that's only true for the .local domain... any other domain would have kept working
[15:09] <leftyfb> sdeziel: how would that work? If you have a local .home as your local domain and you try to ping hostname.home without having home as a search, how would it know to append that tld?
[15:09] <sdeziel> leftyfb: when you "ping hostname.home" no search label is appended because you provided one already
[15:10] <sdeziel> https://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html then search for ".local" it explains it with all the details you'd want :)
[15:10] <leftyfb> sdeziel: and you said in the test, remove the search label. Obviously it would fail when you do that
[15:11] <sdeziel> leftyfb: the search domain is used to tell systemd-resolved that you are abusing .local and to back down on mDNS
[15:11] <leftyfb> oh sorry, my post above was wrong. %s/ping hostname.home/hostname/g :)
[15:11] <sdeziel> > Note that by default lookups for domains with the ".local" suffix are not  routed to DNS servers, unless the domain is specified explicitly as routing or search domain for the DNS server and interface
[15:11] <sdeziel> from the above link
[15:12] <sdeziel> leftyfb: well, if you refer to the shortname, then of course it requires a search domain ;)
[15:13] <sdeziel> > Note that today it's generally recommended to avoid defining ".local" in a DNS server, as RFC6762 reserves this domain for exclusive MulticastDNS use.
[15:13] <leftyfb> I'll think about changing it to something else at home :)
[15:14] <sdeziel> it's worth it if you can :)
[15:14] <leftyfb> after I figure out why my Unifi USG likes to disallow outbound DNS traffic around 6/7am requiring a reboot a few times a week :)
[15:14] <sdeziel> If only I had that insight years ago, I wouldn't have to leave with that damn foo.local at a client site ;)
[16:35] <leftyfb> A fresh install of Ubuntu server 18.04.2. sudo apt install mysql-server. I run sudo mysql_secure_installation. I select No for everything except reloading the privilege tables and set my password. I cannot run: "mysql -uroot -p <password>" to successfully login. What am I missing?
[16:36] <leftyfb> sorry, not space between p and the password
[16:40] <sdeziel> leftyfb: are you able to connect with mysql --defaults-extra-file=/etc/mysql/debian.cnf ?
[16:41] <leftyfb> negative
[16:41] <leftyfb> https://pastebin.ubuntu.com/p/Tr9Q8XdTGX/
[16:42] <leftyfb> also fails if I specify credentials
[16:43] <leftyfb> also of note, I can login just fine with sudo and no credentials and every time I run mysql_secure_installation it says it's running with a blank password. It seems that utility isn't actually setting our password?
[16:45] <sdeziel> leftyfb: I just tested in a fresh container and cannot reproduce the root auth failure
[16:45] <leftyfb> I've been 'just testing" it all morning with fresh installs from the same flash drive I used to several install of other types of servers since yesterday.
[16:47] <sdeziel> https://paste.ubuntu.com/p/NDMV5D3YmJ/ root doesn't have any password despite having been asked for one by mysql_secure_installation, weird
[16:49] <leftyfb> yeah, that's a problem
[16:50] <rbasak> I don't know about mysql_secure_installation
[16:50] <rbasak> But you get socket based auth by default on the default install.
[16:52] <leftyfb> what does that mean for credential based auth?
[16:52] <rbasak> I'm not sure.
[16:53] <rbasak> I'd have to read the manual, etc.
[16:53] <rbasak> It seems odd to be using mysql_secure_installation to me.
[16:53] <sdeziel> leftyfb: by default root has access through the Unix socket, without providing any password
[16:53] <rbasak> I was under the impression the maintainer scripts did the right thing, and there was no need to run it. I could be wrong though.
[16:54] <leftyfb> sdeziel: as in, the root user. No supplying root as the user on the command line as a non-root user
[16:54] <sdeziel> leftyfb: correct, the root user which is why it worked for you with sudo
[16:55] <leftyfb> ok, and if I want to authenticate using a php script?
[16:55] <rbasak> For the root user?
[16:55] <leftyfb> like I'm doing with the web app/db I'm trying to migrate from 16.04 to 18.04
[16:55] <leftyfb> rbasak: yes, for the mysql root user
[16:56] <rbasak> That seems dangerous. But if you insist, you'll have to set a root password. I'm not sure about how that interacts with socket auth (check the docs)
[16:56] <sdeziel> rbasak: it uses the auth_socket plugin
[16:57] <sdeziel> I'm assuming it's checking the UID of the user opening the Unix socket
[16:58] <leftyfb> https://bugs.launchpad.net/ubuntu/+source/mysql-5.7/+bug/1610574
[16:58] <sdeziel> a stock install: https://paste.ubuntu.com/p/t4JRNffHjR/
[16:59] <leftyfb> looks like I have a workaround I can try
[17:01] <rbasak> "Also note that mysql_secure_installation is largely redundant for a fresh 5.7 installation"
[17:01] <leftyfb> redundant how? What's the replacement?
[17:02] <rbasak> What are you expecting it to do for you?
[17:02] <leftyfb> set the root password
[17:03] <rbasak> So just set the root password then: sudo mysql -e "ALTER USER 'root'@'localhost' IDENTIFIED WITH 'mysql_native_password' BY 'mypass';"
[17:03] <leftyfb> there's other things you can config as well, but that's my main use case
[17:03] <rbasak> (frmo the bug)
[17:03] <leftyfb> yeah, I'm going to try that now on another fresh install
[17:04] <rbasak> Perhaps we should stop shipping mysql_secure_installation to avoid misleading users into thinking it's useful.
[17:04] <rbasak> Though that might be overkill because there are users who use mysql without the maintainer scrip tmanagement.
[17:05] <rbasak> BTW, you can do that if you want. Just use mysql-server-core-5.7 and operate mysql yourself directly.
[17:06] <leftyfb> That works
[17:21] <DammitJim> does ubuntu support php 7.0? I think 7.0 is eol
[17:22] <sdeziel> DammitJim: on 16.04, yes
[17:23] <DammitJim> TY
[17:23] <sdeziel> DammitJim: upstream declared it EOL but Canonical will keep backporting security fixes to it, like it does for 5.5 on previous Ubuntu versions
[17:23] <DammitJim> ty
[17:25] <DammitJim> how do I determine if canonical supports a piece of software
[17:25] <DammitJim> say, I'm looking at the fact that I installed nginx
[17:26] <DammitJim> how do I know if it came from main or universe or a different repo?
[17:27] <lordcirth> DammitJim, apt show nginx | grep Source
[17:28] <DammitJim> so, multiverse is NOT supported by Canonica, right?
[17:29] <lordcirth> correct. Also things in multiverse usually have licensing issues
[17:30] <DammitJim> actually, I just looked at say mongodb and we are using the mongodb repo
[17:30] <DammitJim> what a mess
[17:30] <lordcirth> Nothing wrong with using upstream repos, as long as they are well supported.
[17:30] <lordcirth> I have HAProxy 1.7 from PPA in production.
[17:31] <DammitJim> but 1.7 is not end of life
[17:32] <DammitJim> I'm trying to keep track of what software needs to be upgraded because it's going to be end of life
[17:32] <lordcirth> Oh I see
[17:36] <lotuspsychje> ubuntu-support-status --help
[17:39] <rbasak> lordcirth: I disagree. Using third party apt repositories is fundamentally broken and can break your system, even if well maintained.
[17:39] <rbasak> But I appreciate that many people use them anyway.
[17:39] <lordcirth> rbasak, so is not having the software you need.
[17:40] <rbasak> Sure
[17:40] <rbasak> But they should understand the risks.
[17:40] <lordcirth> It's more stable to run an Ubuntu LTS + upstream-maintained stable release than any other option.
[17:40] <lordcirth> I am happy that HAProxy 1.8.8 is in 18.04, so I don't need a PPA anymore
[17:41] <rbasak> In particular I'd strongly recommend against attempting a release upgrade if a third party package has been on the system.
[17:41] <lordcirth> I just don't do release upgrades. nuke and pave
[17:41] <rbasak> The other thing that goes wrong is experimentation, so in production it's essential to have prepared an automated deployment
[17:44] <DammitJim> ubuntu-support-status --help?
[17:44] <DammitJim> OMG
[17:44] <DammitJim> that's huge!
[17:46] <lordcirth> Yeah, pretty cool
[17:50] <sdeziel> ubuntu-support-status reports odd things here. libwoff1 is said to be supported by Canonical for 3 years yet it's in main
[17:56] <lotuspsychje> wich ubuntu version are you on sdeziel
[18:01] <lotuspsychje> same on bionic here: libbrotli1 libwoff1 3y
[18:03] <sdeziel> lotuspsychje: 18.04
[18:03] <sdeziel> lotuspsychje: yeah, same 2 packages in main that are reported as support for 3y only (instead of 5y)
[18:04] <sdeziel> oh I see what happens
[18:04] <sdeziel> those packages started in universe then were MRE after 18.04 release
[18:04] <lotuspsychje> ah
[18:05] <sdeziel> still, I don't see why they wouldn't get the full 5y support, looks like a bug in ubuntu-support-status
[18:05] <lotuspsychje> if you report, ill affect :p
[18:07] <sdeziel> how nice of you :P
[18:08] <lotuspsychje> i just want a cookie
[18:14] <sdeziel> lotuspsychje: https://bugs.launchpad.net/ubuntu/+source/update-manager/+bug/1820329
[18:14] <lotuspsychje> sdeziel: affected mate
[18:18] <lotuspsychje> sdeziel: you think it would differ with someone with the hwe kernel?
[18:20] <sdeziel> lotuspsychje: I wouldn't expect it to be different for those 2 libs but maybe for the hwe packages themselves since they've never been into bionic/main, only bionic-updates/main
[18:20] <lotuspsychje> kk
[18:21] <DammitJim> OMG, the things one doesn't know
[18:21] <DammitJim> ignorance is bliss!
[18:21] <DammitJim> I had no idea vim-nox was no longer supported!
[18:22] <sdeziel> DammitJim: I too was using vim-nox but have since moved to plain vim that's in main
[18:23] <lordcirth> I use neovim
[18:23] <DammitJim> new vim?
[18:23] <DammitJim> neovim?
[18:23] <DammitJim> hhhmmm
[18:24] <lordcirth> It's good. Some features have actually been backported to main vim.
[18:58] <DammitJim> is eol of ubuntu 14 at the beginning or end of APril?
[18:59] <sdeziel> DammitJim: April 25th, 2019
[18:59] <DammitJim> is it always on the 25th?
[19:01] <sdeziel> Canonical seems to have a thing for Thursdays but other than that, I think the EOL date can vary
[19:01] <sdeziel> DammitJim: 14.04 will not be officially EOL but will transition to ESM (paid support)
[19:02] <blackflow> Watched Popey's talk on snaps @ SCALE 17x. I couldn't disagree more on reasoning not to allow custom snap repos. What if a company wants to package their in-house, not publishable, apps as snaps in order to have a homogenous toolset and not a quagmire of snaps and dockers. Isn't that enough of a reason to allow custom repos?
[19:03] <lordcirth> blackflow, well, what was his reasoning?
[19:03] <popey> I only gave one reason
[19:03] <JanC> who'd want to use snap on a (production) server anyway?
[19:04] <blackflow> lordcirth: that allowing so would then cause the same issues PPAs have|had -- undiscoverability of programs published in custom repos
[19:04] <popey> It's entirely possible to do the use case you describe
[19:05] <popey> https://docs.ubuntu.com/snap-store-proxy/en/
[19:05] <blackflow> popey: yes but not with a repo, it'd have to be uploaded manually and handled --dangerous'ly
[19:05] <tomreyn> sdeziel: in bug 1820329 , do you actuall ymean MIR, not MRE (which was replaced by SRU, as far as I can tell)?
[19:06] <sdeziel> tomreyn: right, I wanted to say MIR
[19:06] <blackflow> popey: btw the vid linked from LINUX Unplugged, has bad audio and is missing segments. Is there a better one of your talk?
[19:06] <popey> not that I'm aware of
[19:06] <popey> I didn't even know it was streamed / recorded until afterwards.
[19:06] <lotuspsychje> sdeziel: a member tested it on 19.04 also with this result: libbrotli1 libwoff1 is shown as supported for 9 months (Jan 2020)
[19:07] <tomreyn> sdeziel: thanks for clarifying, i was just on a journey trying to understand what those different abbreviations mean (and got help there, so now i know)
[19:07] <sdeziel> lotuspsychje: 19.04 as a whole has 9 mo of support
[19:07] <lotuspsychje> sdeziel: allright, so not relevant for this bug right?
[19:07] <blackflow> popey: ah, k.  btw that link is about proxying the official snap store, or am I missing something in that?  I was thinking about a completely custom store a company could use in-house on their servers
[19:08] <sdeziel> lotuspsychje: I don't think
[19:08] <blackflow> popey: good talk btw (even though I disagree on that little point about snap store)
[19:08] <popey> blackflow: that's right. the proxy along with a brand store effectively gives you that
[19:08] <popey> thanks
[19:09] <popey> We have customers who have an enterprise proxy as a frontend to their own brand store, which they control the content of
[19:09] <DammitJim> this might not be an ubuntu question, but more of a server question
[19:09] <DammitJim> is there anything I should consider with regards to a max amount of services that a server runs?
[19:09] <DammitJim> I have RAM and CPU if needed
[19:10] <DammitJim> these are java processes
[19:10] <DammitJim> any input from any of you?
[19:10] <lordcirth> DammitJim, resource contention, effect of a reboot, security
[19:10] <blackflow> popey: is that available to non-advantage users? I can't seem to find anything about it in those docs
[19:10] <sdeziel> DammitJim: depends on the -Xms/-Xmx params you want to give to those JVMs I guess
[19:10] <JanC> that depends on how many resources those services need
[19:10] <lordcirth> DammitJim, are these diverse services, or just a bunch of instances of the same?
[19:11] <DammitJim> diverse, but similar in business logic
[19:11] <lordcirth> DammitJim, Consider running them all in unprivileged containers with resource quotas
[19:11] <lordcirth> Then you can reboot them independently, they can't spin out of control and eat all resources, etc
[19:11] <DammitJim> thanks... those are all good points
[19:12] <popey> blackflow: the proxy is installable - it's a snap ;). There are indeed projects which have brand stores, which don't have UA
[19:12] <lordcirth> We use LXC containers on big IAAS hosts, all controlled by Salt. works pretty well.
[19:12] <DammitJim> when you say unprivileged containers, do you mean their own JVM?
[19:12] <DammitJim> I'm using salt, too, but not to that leevl
[19:12] <lordcirth> DammitJim, no, linux containers, ie LXC / docker
[19:12] <DammitJim> level
[19:12] <JanC> OS containers
[19:13] <lordcirth> unprivileged LXC containers means that uid 0 (root) in the container actually maps to uid 100231 on the host, who doesn't exist and has no permissions.
[19:13] <JanC> https://en.wikipedia.org/wiki/Container_(virtualization)
[19:13] <lordcirth> privileged containers have no such mapping, so any hostile or compromised process that is root in the container can trivially break out
[19:13] <blackflow> lordcirth: Salt control on a completely isolated vlan, or is it publicly accessible? We rent servers with several providers and thus the control channel would have to traverse public networks. Something I am totally not comfortable with.
[19:14] <DammitJim> oh, these don't run as root either
[19:14] <lordcirth> blackflow, in this case, an internal, though not immensely locked-down, network
[19:14] <DammitJim> blackflow, ours is isolated vlan
[19:14] <blackflow> yeah
[19:14] <lordcirth> DammitJim, yes, but if someone exploits your java app, then escalates to root, they will only be root in the container.
[19:15] <lordcirth> blackflow, the only weakness Salt potentially has is when a minion first connects to the master - it trusts the master's key on first connect. If you don't trust the network, preseed all your minions with the master key, and no more holes.
[19:17] <blackflow> lordcirth: it's no that that I'm worried much about, it's the fact that the master would we a publicly accessible server -- because we rent them. I mean that one machine would be the gateway to all the servers. One point to compromise and then everything is.
[19:17] <blackflow> would *be
[19:18] <lordcirth> blackflow, ah, you are worried about the master being compromised. Well, one thing you could do is require a VPN to reach the master.
[19:18] <lordcirth> It would take a bit more preseeded setup, but you could require new minions to establish an 2-way auth'd VPN, then connect to Salt over it.
[19:18] <blackflow> lordcirth: it's still one publicly accessible port. one vuln in the network stack and poof...
[19:19] <blackflow> but then.... the same vuln could be used on all teh servers individually... moot point. I worry too much.
[19:19] <lordcirth> blackflow, that vuln in the network stack would affect all of your minions...
[19:19] <blackflow> yeah
[19:19] <DammitJim> we worry too much
[19:19] <lordcirth> There's only so much you can do while being connected to the internet :)
[19:19] <DammitJim> and you know what the worse part about that is? the day you get tired of worrying ('cause that happens to me), that day is when I make very poor decisions
[19:19] <lordcirth> But props for being paranoid :)
[19:20] <blackflow> yah :)
[19:20] <lotuspsychje> healthy paranoia is good
[19:20] <DammitJim> healthy paranoia is good, but it's not easy to achieve... gotta make sure to have a good balance
[19:20] <blackflow> that balance is hardest to achieve
[19:22] <blackflow> at the very least I manage to convince everyone in the company that public clouds and VPS'es are a big no-no, in the post Meltdown+Spectre world.
[19:24] <supaman> talking about being paranoid ... I am taking over a system (a few virtual ubuntu servers) where the prev sysadmin did a apt upgrade without waiting a while and checking the packages for bugs or doing any kind of test ... would you continue that practice? :-)
[19:25] <DammitJim> that's a good question... how does one deal with that supaman ?
[19:25] <supaman> depends on how paranoid you are ;-)
[19:25] <blackflow> with ZFS snapshots of course.
[19:28] <DammitJim> I'm not for that kinda stuff
[19:28] <DammitJim> we do run the production environment in a virtual lab and test our applications
[19:28] <DammitJim> but I don't go looking for bugs in the updates
[19:32] <lordcirth> I can't remember the last time that an apt upgrade broke something. Once or twice a full-upgrade kernel update did, but you need to reboot to notice that, and then you just reboot back to the old one
[19:58] <Ussat> you dont need ZFS snaps if you have good backups
[19:59] <lordcirth> Ussat, best thing is to take ZFS snapshots, and then backup the snapshots. Prevents smearing (I forget if there's a more common term)
[20:00] <Ussat> naaa...we dont and wont use zfs
[20:00] <lordcirth> The snapshots are instant and make sure your backup doesn't get different files at different times as it chugs along
[20:00] <Ussat> no need
[20:00] <lordcirth> Ussat, so, how do you detect bitflips?
[20:01] <Ussat> in all the years I have been doing this, I have NEVER had an issue with that
[20:02] <lordcirth> Ussat, that you know of
[20:02] <Ussat> That I am sure of
[20:02] <lordcirth> So, if a bit flipped somewhere on one of your drives, you are quite sure you would get an email about it?
[20:03] <lordcirth> Just a few months ago I suddenly got 30 checksum errors on each of 3 drives in a raidz. No idea why, maybe a power surge. Didn't matter, ZFS fixed it all
[20:03] <lordcirth> Granted that wasn't on enterprise hardware, but that's just a difference of degree.
[20:04] <Ussat> I dont use non-enterprise HW at work
[20:04] <Ussat> "just" a different of degree.....
[20:05] <lordcirth> so you are less likely to get bitflips. Not 100% assured.
[20:05] <Ussat> nothing is ever 100%...ever
[20:05] <lordcirth> ZFS decreases your odds of a bitflip by several orders of magnitude, which is enough for me. trusting hard drives isn't good enough for me.
[20:06] <Ussat> all my data I am concerned about is stored o a SAN, if a VM image is corrupted, no big deal.....
[20:07] <Ussat> We use a real enterprise SAN, sot a bunch of commercial disks off the shelf with a fancy FS on it
[20:08] <lordcirth> I trust ZFS a lot more than a "real enterprise SAN" blackbox
[20:08] <Ussat> to each their own
[20:08] <Ussat> I dont
[20:08] <lordcirth> Not to mention the cost and vendor lock-in
[20:09] <blackflow> lordcirth: hear hear
[20:09] <blackflow> guess some people are too lucky and never had a failure
[20:09] <Ussat> Oh I have had failures...sure......none of them crippling
[20:09] <lordcirth> Obviously a good SAN is better than what a lot of people do...
[20:09] <Ussat> My systems are very critical.....
[20:09] <lordcirth> Some people run production data on commercial drives with just RAID.. ow.
[20:10] <Ussat> a lot of people use a good SAN
[20:10] <blackflow> ZFS snapshots (sent offsite) are godsent for backups. quick, atomic, with data integrity checks.
[20:10] <Ussat> Not in my industry they dont
[20:16] <blackflow> "a good SAN" is not replacement for offsite backups. ZFS (or any other CoW) is orthogonal to SAN. Your SAN can be powered by ZFS. So I don't get your point.
[20:17] <lordcirth> A SAN could be powered by ZFS, but are generally powered by blackbox magic
[20:19] <Ussat> Who said I dont have offsite backups ?
[20:20] <Ussat> I never said that, I have a very robust backup solution w/offsite backups
[20:21] <lordcirth> Yes, I assumed you did.
[20:21] <blackflow> well you seem to have put ZFS in a opposition to SANs.
[20:22] <lordcirth> blackflow, he just said he doesn't need ZFS because he has a fancy SAN. Doesn't mean you couldn't somehow have both.
[20:22] <blackflow> which is wrong really, they're orthogonal. So is this: "20:58 < Ussat> you dont need ZFS snaps if you have good backups"  --- but ZFS snapshots _are_ good backups, when you `zfs send` them offsite
[20:23] <blackflow> doesn't make sense to put ZFS snapshots and good backups in opposing relationship.   that's like saying "You don't need a filesystem if you have good backups"
[20:23] <Ussat> incorrect, but thats ok you do what you want, I do what we do
[20:23] <JanC> and you would likely want to use some sort of snapshot on that SAN too...
[20:23] <Ussat> JanC, the SAN is mirrored between sites
[20:23] <JanC> to make backups
[20:24] <Ussat> NO...not to make backups
[20:24] <Ussat> I am not going to explain how our SAN enterprise works...it works quite well and is very effective
[20:24] <JanC> so how do you guarantee data consistency on your backups?
[20:24] <Ussat> Its not just a bunch of off the shelf disks and consumer grade PCs ....
[20:27] <blackflow> Ussat: not sure which part is incorrect. perhaps you don't know what ZFS is? a snapshot is a backup per se. it's a copy of data you can revert to. you can restore your data from it. it becomes a _good_ backup when you ship it offsite. you might not like it, that's okay. use something else, fine, but how is all this "incorrect"?
[20:27] <Ussat> per se
[20:27] <JanC> a snapshot is not a backup
[20:27] <Ussat> ^^^
[20:28] <JanC> but you need it to make consistent backups
[20:28] <blackflow> yes it is, it's just not good enough if it's local
[20:28] <blackflow> you can ship it elsewhere and it's still a zfs snapshot. in a physically different location.
[20:29] <blackflow> maybe y'all don't know about the `zfs send` feature. it's made to send snapshots to external pools. and `zfs receive` to bring them in from external pools. how's that not a backup.
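[For context, a rough sketch of the replication blackflow describes. The pool name `tank`, remote host `backup-host`, and target pool `backuppool` are made-up illustrations, and the script only *prints* the commands (dry run) rather than executing them:]

```shell
#!/bin/sh
# Dry-run sketch of offsite ZFS snapshot replication.
# "tank", "backup-host" and "backuppool" are hypothetical names.
run() { echo "+ $*"; }   # swap `echo` for the real command to execute

SNAP="tank@offsite-$(date +%Y-%m-%d)"

# 1. Take an atomic, recursive snapshot of every dataset in the pool:
run zfs snapshot -r "$SNAP"

# 2. First run: ship a full replication stream to a pool elsewhere:
run "zfs send -R $SNAP | ssh backup-host zfs receive -Fdu backuppool"

# 3. Later runs send only the delta since an earlier snapshot, e.g.:
#      zfs send -RI tank@offsite-PREV tank@offsite-TODAY | ssh ...
```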
[20:30] <JanC> I know what it is, and when you do that you have a proper backup indeed
[20:30] <Ussat> I am quite aware of them, and we looked at them and rejected them
[20:30] <lordcirth> arguing over the semantics of what is and is not a 'backup' is irrelevant when you agree on what they are
[20:30] <blackflow> it's okay if you don't like them. but fact remains they _are_ useful backups.
[20:30] <Ussat> Not in our case
[20:31] <lordcirth> What is special about your case that ZFS is not sufficient?
[20:31] <Ussat> Correct
[20:31] <Ussat> We evaluated it, and it doesnt meet our needs
[20:32] <lordcirth> I am interested to know in what way ZFS fell short of your needs.
[20:32] <blackflow> I'm dying to know too
[20:33] <Ussat> I dont need to justify my decisions to you.......as soon as you start cutting my paycheck, then that is a different matter
[20:33] <blackflow> lol
[20:33] <Ussat> It did not meet our needs
[20:33] <lordcirth> I don't see any reason to suddenly turn so hostile...
[20:33] <Ussat> Thats hardly hostile
[20:33] <blackflow> why do you think that's "justifying"? this is a discussion forum. if you don't like it you can always /part
[20:34] <blackflow> but I know what it is. you just have no idea what you're talking about. so lashing out is best defense to hide ignorance. fine by me.
[20:34] <lordcirth> blackflow, while that is possible, I don't think you have sufficient evidence to assume that.
[20:34] <Ussat> Yes, yes I do, we evaluated it and it did not meet our requirements
[20:34] <blackflow> oh I do. seen that type too often.
[20:34] <blackflow> armchair "admins" who feel threatened when asked to explain their use case.
[20:35] <Ussat> armchair admins...um sure
[20:35] <blackflow> yup.
[20:36] <JanC> Ussat: it would be useful to know in (roughly) what way it didn't meet your requirements
[20:36] <Ussat> OK, and you are obviously the expert......you have no clue what industry or what the requirements are, but you can make that assumption. You seem to think that ZFS will meet all requirements
[20:36] <lordcirth> Ussat, no, we asked what requirements it didn't meet, and you refused to answer
[20:36] <lordcirth> Which usually means you don't know.
[20:36] <shibboleth> any word yet as to this supposed horribad vulnerability that made scroogle and fakebook down their networks for "disruptive, lengthy, unannounced but totally planned and routine maintenance"?
[20:36] <lordcirth> There are some requirements ZFS can't fulfill.
[20:36] <Ussat> lordcirth, or which means it may be none of your business
[20:37] <lordcirth> Ussat, and yet you are still here arguing about it?
[20:37] <lordcirth> If you have an NDA, say so.
[20:37] <blackflow> Ussat: you don't know how to read either. I said several times it's fine if you don't like it or if it doesn't work for you. I never said ZFS _must_ meet your requirements.
[20:37] <Ussat> I am in a highly regulated industry, and ZFS did not meet the requirements
[20:37] <blackflow> that's fine. so we asked what does.
[20:38] <Ussat> How effective is ZFS on a streached cluster ?
[20:39] <lordcirth> You mean as a distributed filesystem? Normally I would run Ceph on top of ZFS for that.
[20:39] <Ussat> No I dont mean distributed FS
[20:39] <Ussat> I mean streached cluster
[20:40] <Ussat> behind SVC's
[20:40] <blackflow> a stretched cluster (not "streached"), is when two or more virt hosts are part of the same logical domain but located in physically different locations. that's not in ZFS's domain at all
[20:40] <lordcirth> Yeah, just looked that up. Seems like something you'd implement above ZFS.
[20:41] <blackflow> under
[20:41] <blackflow> or, well, depends on your strategies I guess.
[20:41] <lordcirth> Either would probably work, yeah
[20:41] <Ussat> Or maybe, just maybe we tested it and ZFS crapped out
[20:41] <lordcirth> Ussat, combining ZFS with what?
[20:42] <Ussat> and IDGAF about the spelling, I am dyslexic, correct away
[20:42] <blackflow> I personally don't have need for it nor personal experience so I can't vouch. I do know of people who happily use ZFS with Lustre and for that purpose exactly. zvol based virtualization like in a stretched cluster.
[20:42] <Ussat> Oh please.....I dont need to continue to justify what we do
[20:43] <blackflow> not justifying anything, just discussing your use case. that's what these public places are for.
[20:43] <blackflow> nobody has brought you to court, judged you and pressed you to defend yourself.
[20:45] <Ussat> the other issue was encryption at rest, ZFS, while it has it implemented, does not meet the standards we must meet
[20:46] <blackflow> ZFS encryption is still highly experimental. We use LUKS under it
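["LUKS under ZFS" means encrypting each backing disk with LUKS and building the pool on the decrypted mapper devices. A sketch below, with hypothetical device names (`/dev/sdb`, `/dev/sdc`) and pool name; as above, the script only prints the commands rather than running them:]

```shell
#!/bin/sh
# Dry-run sketch of ZFS on top of LUKS (dm-crypt).
# Device and pool names are hypothetical.
run() { echo "+ $*"; }   # swap `echo` for the real command to execute

for dev in sdb sdc; do
    run cryptsetup luksFormat "/dev/$dev"         # one-time: write LUKS header
    run cryptsetup open "/dev/$dev" "crypt-$dev"  # unlock -> /dev/mapper/crypt-$dev
done

# ZFS never sees the raw disks, only the encrypted mappings:
run zpool create tank mirror /dev/mapper/crypt-sdb /dev/mapper/crypt-sdc
```

[With AES-NI the dm-crypt layer is cheap, which is the ~2% overhead blackflow cites below.]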
[20:46] <Ussat> The time to encrypt/decrypt was horrendous
[20:47] <blackflow> in our tests, with AESNI, the difference was ~2%, very much acceptable in our case
[20:47] <Ussat> not ours
[20:47] <lordcirth> Ussat, you were using ZFS 0.8 rc?
[20:48] <Ussat> We are not using ZFS at all now
[20:48] <lordcirth> Ussat, in you tests, I mean
[20:48] <Ussat> I believe so, it was a while ago
[20:48] <lordcirth> your*
[20:49] <lordcirth> Because ZFS 0.8 rc hasn't been out for that long, and still isn't a stable release, so it seems odd to rely on it for your apparently critical use case.
[20:49] <lordcirth> Or consider doing so, I mean
[20:49] <Ussat> I said we are not using it, why do you think we are relying on it?
[20:50] <Ussat> or why do you think we would ? I said we tested and decided it would not meet our needs
[20:50] <lordcirth> Ussat, if you did the tests with ZFS encryption, then you must have been considering using it, no?
[20:50] <Ussat> We considered it, yes, and put it aside
[20:50] <lordcirth> It seems odd to run the tests only on 0.8 when it wouldn't be ready for your use case anyway, instead of testing, say, 0.7 on LUKS which would be.
[20:51] <lordcirth> We have a Ceph cluster in production which is backed by ZFS on dm-crypted drives.
[20:52] <lordcirth> Looking forward to 0.8 simplifying that, though.
[20:52] <Ussat> We dont use software-based storage
[20:52] <Ussat> no plan to ever
[20:52] <lordcirth> Well, I'm sure somebody will keep selling blackbox storage as long as big companies buy them.
[20:53] <JanC> your SAN is software-based too
[20:53] <Ussat> JanC, yes yes.....and all raid is software based also, even when it is on a raid card
[20:53] <lordcirth> Well yes, but raid cards suck
[20:53] <Ussat> Yes its software...
[20:54] <Ussat> lordcirth, big companies buy them because they are very reliable, depending on what you buy
[20:56] <Ussat> everything is software based, when it comes down to it.....
[21:34] <lordcirth> Ussat, but what most people mean by 'software defined storage' is that it is commodity hardware assembled by good, flexible, tunable software into a good system.
[21:35] <lordcirth> A big box that says "no user serviceable parts inside" uses lots of software, but it's not SDS
[21:56] <blackflow> needn't even be commodity hardware
[22:02] <lordcirth> It needn't be, but it's generally one of the selling points
[22:43] <blackflow> lordcirth: yeah, the strength of ZFS (and BTRFS) is that you can use crappy commodity hardware with no risk to your data
[23:21] <JanC> I wouldn't say that about btrfs...
[23:37] <blackflow> JanC: well okay, but in theory at least :)