[07:33] <lordievader> Good morning
[11:00] <devster31> hi guys, I was looking at RPI server images, what does the "preinstalled" mean? is it like the cloud images with some default users set but customizable with user-data?
[11:06] <lotuspsychje> maybe the arm guys also know that devster31
[11:06] <lotuspsychje> !arm
[11:08] <devster31> ok, thanks
[11:22] <tomreyn> devster31: i think "preinstalled" means they are disk images which can just be written to a storage using 'dd' or similar raw copy utilities.
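[Editor's note: a hedged sketch of what "preinstalled" implies. The image filename and /dev/sdX below are placeholders; the runnable demo copies ordinary files so nothing real is overwritten.]

```shell
# "Preinstalled" means the download is a raw disk image you byte-copy
# straight onto the target medium. Real usage is roughly:
#   xzcat ubuntu-*-preinstalled-server-arm64+raspi3.img.xz | sudo dd of=/dev/sdX bs=4M conv=fsync
# (filename and /dev/sdX are placeholders -- double-check your device!)
# Demonstrated here on ordinary files:
printf 'bootloader+rootfs' > image.img   # stand-in for the downloaded image
dd if=image.img of=target.img bs=4M conv=fsync 2>/dev/null
cmp image.img target.img && echo "raw copy is byte-identical"
```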
[11:33] <jamespage> cpaelzer: coreycb may have already asked this but I have a problem with libvirt 5.0.0 on bionic via the UCA
[11:33] <jamespage> libvirt/qemu is not reporting a capability of domain type=kvm
[11:33] <jamespage> only type=qemu
[11:33] <jamespage> any ideas?
[11:40] <khedrub> hi there. I am a bit confused, I hope someone here may help me with that. :-) I need to set up a few services next to each other, for example nextcloud with web interface and discourse as a forum software. In my understanding the modern way to do this is to use containers. Snap or docker come to mind. So I used snap for nextcloud and it runs fine. But how does one go about setting up discourse as a separate snap and tell it to also listen
[11:40] <khedrub> to port 80/443 but a different subdomain?
[11:40] <khedrub> Without containers it would be easy, but I want to do it right from the beginning
[11:41] <blackflow> khedrub: you don't need containers. they solve specific problems and you'd know if you had them. like inability to install some software regularly -- due to conflicts with system files, libs, other packages, or simply because there aren't any packaged for the distro.
[11:42] <blackflow> khedrub: "containerization" -- like isolation, namespacing, permissions, can all be solved with systemd unit configuration of the services, without the need to exponentially raise the complexity of your system with "containers"
[11:42] <khedrub> aren't containers also more secure because they're isolating software, similar to virtual machines?
[11:42] <blackflow> containers per se are NOT security boundaries.
[11:42] <rbasak> Containers are useful for trying things out.
[11:43] <blackflow> they're totally not similar to virtual machines. they're just process (and uid and filesystem) namespaces
[11:43] <rbasak> Since what goes on in them generally doesn't affect the host system.
[11:43] <rbasak> For example if a third party software wants to stomp all over your system, which is very common.
[11:44] <blackflow> right, the highly specific problems I mentioned above :)
[11:44] <rbasak> discourse and nextcloud are perfect examples
[11:45] <blackflow> khedrub: btw, "listen on <same> port but different subdomain" does not work. you really need different IP, not subdomain per se. then, yes the different IP could be pointed at by the subdomain.
[11:45] <khedrub> Okay, so the old way of doing this (installing the software via git or apt and then configuring apache to use the subdomains for example) is still a good way to do this nowadays? Because that's how we did it back in the day, but I thought that was outdated and there are better ways in terms of security
[11:45] <blackflow> khedrub: yes. use snaps if the snap'd versions offer some functionality that you specifically need.
[11:47] <blackflow> also considering the drawbacks of snaps. for example they're rather bad for server applications because they autoupdate and you don't have any control over that.
[11:48] <khedrub> I see.
[11:48] <blackflow> (and you don't have the ability to supply your custom apparmor profile to them -- those two are for example the two biggest gripes I have against snaps at the moment, even if you ignore the "containerize everything" hype)
[11:49] <jamespage> cpaelzer: nm figured it out - perms on /dev/kvm where incorrect
[11:49] <khedrub> But from what you say it sounds like containers are quite a rare use case. I had the impression that they are the new stuff that everyone is using nowadays
[11:50] <blackflow> khedrub: they have their uses yes. it is _not_ to containerize everything by default, no questions asked. that's very bad.
[11:50] <rbasak> khedrub: I don't think blackflow's view is particularly representative of Ubuntu server users in general.
[11:50] <rbasak> Trying things out in containers is very common and is recommended.
[11:50] <blackflow> "trying things in containers" does not in any way conflict with anything I've said so far.
[11:51] <khedrub> okay, but what if you want to run the software as production systems?
[11:51] <blackflow> I don't see "trying things" as specific requirement in the original question.
[11:51] <rbasak> In general, "disposable" deployment platforms are extremely common.
[11:51] <rbasak> Whether that's "start a cloud instance" or "start a container".
[11:51] <rbasak> Or a VM.
[11:52] <blackflow> sure they are. which, again, does not contradict anything I've said.
[11:52] <jamespage> cpaelzer: hmm but...
[11:52] <blackflow> if your use case calls for a container, then by all means use it. with all the virtues AND drawbacks of them.
[11:52] <rbasak> Something that encapsulates that part of the deployment, which you can throw away on a whim to try again, rather than doing anything on a host system installed by hand.
[11:53] <cpaelzer> jamespage: but why are they incorrect?
[11:53] <cpaelzer> they should be set by udev
[11:53] <jamespage> cpaelzer: change in qemu packaging I think
[11:53] <blackflow> rbasak: that's a very specific use case, not a default state of production environments.
[11:53] <cpaelzer> jamespage: yeah I stopped doing that in qemu, as udev already did
[11:53] <cpaelzer> and was the right place to do so
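[Editor's note: on recent Ubuntu the group/mode of /dev/kvm are set by udev rather than by qemu's packaging scripts. A rule to that effect looks roughly like the following; the file path and name are illustrative.]

```
# /lib/udev/rules.d/60-qemu-system-common.rules (illustrative path/name)
# When the kvm device node appears, give the kvm group read/write access.
KERNEL=="kvm", GROUP="kvm", MODE="0660"
```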
[11:53] <blackflow> production environments, especially money making ones, want as little change as possible.
[11:54] <jamespage> cpaelzer: that probably works ok on disco, but not so well when we backport to bionic
[11:54] <cpaelzer> It should even be on Bionic, but let me check to be sure
[11:54] <cpaelzer> if confirmed you can add it back on UCA
[11:54] <rbasak> blackflow: either you misunderstand my point, or you're grossly incorrect. I'm not sure which.
[11:54] <rbasak> See "devops", "pets vs. cattle", etc.
[11:54] <blackflow> khedrub: one drawback of containers is that they contain and package ALL the libraries and requirements for a specific software.  that means, for example, if you had 100 containers on the system and each needed openssl, you'd have 100 individual openssl installations.
[11:55] <jamespage> bionic has 237
[11:55] <jamespage> might be 239 where that comes in
[11:55] <blackflow> khedrub: which also means that in case of a security vulnerability, you'd HAVE to upgrade all 100 of them, EACH container separately, which is a lot of work. that's why you use them only if they solve some specific use case you can't solve otherwise.
[11:56] <cpaelzer> jamespage: 239-6
[11:56] <rbasak> blackflow: rubbish. You're making that out to be a big problem. It is not. You are exaggerating.
[11:56] <rbasak> blackflow: if I used 100 VMs or cloud instances, I'd have the same problem.
[11:56] <khedrub> blackflow, don't they autoupdate like you said earlier?
[11:56] <blackflow> rbasak: neither. the original question was not about "temporary testing environment" so I don't know why you're trying to present that to be somehow against what I've said, which it isn't
[11:57] <rbasak> blackflow: if local, then why would I have 100?
[11:57] <rbasak> blackflow: and if, as is current best practice, you have code that can redeploy, with CI, etc, then upgrading each container is absolutely not a lot of work. It's automatic.
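[Editor's note: rbasak's "upgrading each container is automatic" point can be sketched as a loop over LXD containers. The container names are made up and the `lxc` client is stubbed here so the loop logic runs anywhere; the comment shows what the real invocation would look like.]

```shell
# Hypothetical sketch: upgrade one package in every LXD container.
# With the real client this would be roughly:
#   for c in $(lxc list -c n --format csv); do
#       lxc exec "$c" -- apt-get -y --only-upgrade install openssl
#   done
# Here lxc is stubbed so the loop can run without an LXD host:
lxc() {
    case "$1" in
        list) printf 'web1\nweb2\ndb1\n' ;;      # pretend container names
        exec) echo "upgraded openssl in $2" ;;   # pretend the upgrade ran
    esac
}
for c in $(lxc list); do
    lxc exec "$c" -- apt-get -y --only-upgrade install openssl
done | tee upgrades.log
```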
[11:59] <blackflow> apt install <package>; done.    how's that worse than deploying containers around?
[11:59] <khedrub> Indeed, my use case is a server which has as little as possible running on it, only the 2-4 web services like nextcloud and discourse.
[11:59] <rbasak> Because for server tasks you're not done after an apt install.
[11:59] <blackflow> or are you deliberately ignoring what I'm saying from the beginning, in that containers DO have use cases, but they should NOT be a default solution, if another exists.
[11:59] <blackflow> rbasak: strawman
[11:59] <blackflow> neither you are with containers
[12:00] <rbasak> You seem to have some obsession with container hate.
[12:00] <rbasak> Note that I'm not talking about containers specifically.
[12:00] <rbasak> Something that encapsulates that part of the deployment...
[12:00] <blackflow> khedrub: snaps autoupdate, yes. I was talking in general that containers are isolated envs and if you build one, you have to maintain it as such.
[12:00] <rbasak> Encapsulation is something that is best practice.
[12:00] <rbasak> Do it with containers, or something else, doesn't matter.
[12:01] <blackflow> I do not have obsession with container hate. I've been doing this for over 10 years, even before the "container hype" with freebsd jails.   I _do_ have obsession with as simple as possible systems.
[12:01] <rbasak> Ah yes, the cost is that you have to maintain multiple encapsulations, but we have automation to help with that.
[12:02] <blackflow> I do have a problem with "containerize everything" hype which is misplaced. Again, they do have use cases, but they should not be defaulted to with no clear idea what problem you're trying to solve.
[12:02] <rbasak> Muddling everything together into a host system that you have to sysadmin by hand because so many things have happened to multiple tasks on the system that redeploying is now a huge task is not the way.
[12:02] <blackflow> rbasak: so you're saying apt should be replaced by snaps? you really ARE saying that, which that blueprint was apparently misquoted for?
[12:03] <khedrub> but if I used containers, how would I tell the nextcloud snap to use cloud.xyz.org and the discourse snap to use discourse.xyz.org? Since they are using separate apache instances...
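[Editor's note: one common pattern for khedrub's question (a sketch, not the only way): run each app on its own internal port and put a single name-based Apache vhost set in front. The domain names come from the question above; the backend ports are assumptions.]

```apache
# Illustrative /etc/apache2/sites-available/xyz.org.conf
# Assumes nextcloud was configured to listen on localhost:8080 and
# discourse on localhost:8081 -- those ports are made up for this sketch.
<VirtualHost *:80>
    ServerName cloud.xyz.org
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>

<VirtualHost *:80>
    ServerName discourse.xyz.org
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8081/
    ProxyPassReverse / http://127.0.0.1:8081/
</VirtualHost>
```

Requires the proxy modules and site to be enabled, e.g. `sudo a2enmod proxy proxy_http && sudo a2ensite xyz.org && sudo systemctl reload apache2`.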
[12:03] <rbasak> Nope. I'm not saying that.
[12:03] <rbasak> When did I even mention snaps?
[12:03] <blackflow> "muddling everything" ... why would you do that? if a package exists, it's obviously integrated with the system, tested and available as part of the distro
[12:03] <rbasak> I'm talking about _encapsulation_
[12:03] <rbasak> Whether that's inside snaps, containers, VMs, cloud instances...doesn't matter.
[12:03] <blackflow> of what exactly?
[12:04] <rbasak> I think you should take a break, and read this conversation again tomorrow.
[12:04] <blackflow> if an apt package exists, why is a container of that software better solution?
[12:04] <ahasenack> good morning
[12:04] <rbasak> You can use apt inside a container.
[12:04] <cpaelzer> hi ahasenack
[12:04] <blackflow> and I'll repeat what I said earlier, service containerization IS achievable with systemd unit configuration WITHOUT the expense and complexity of handling a whole isolated OS tree environment.
[12:04] <rbasak> Now you can undo the entire container in one go.
[12:05] <ahasenack> hi cpaelzer
[12:05] <rbasak> You can do your service containerization with systemd units inside a container.
[12:05] <blackflow> rbasak: right, and why would you do that unless you had a specific need for a container? you mentioned test environments -- sure I agree, containers are very suitable for those. that, however was not the original question.
[12:05] <rbasak> A container is just a nested Ubuntu.
[12:06] <blackflow> sans the kernel, yes.
[12:06] <rbasak> (well, it can be many things, but it can be that)
[12:06] <rbasak> Sure. A nested Ubuntu userspace then.
[12:06] <blackflow> so why manage 100 copies of ubuntu if that can be solved with SHARED libraries, especially since you can STILL isolate the services with systemd unit config?
[12:07] <rbasak> Again, you're exaggerating
[12:07] <blackflow> I totally am not. the shared library model exists for a reason
[12:07] <blackflow> containers exist for a reason too. my whole point so far is unless you really need to isolate software in it, do yourself a favor and don't complicate the system.
[12:08] <rbasak> My point is that you _always_ want at least one level of isolation.
[12:08] <rbasak> Since the host system is always one part that's the most expensive to redo/redeploy.
[12:08] <jamespage> cpaelzer: right so a chgrp and chmod on /dev/kvm fixes things up
[12:09] <cpaelzer> jamespage: ok, glad to know
[12:09] <blackflow> rbasak: right. ProtectSystem= systemd directive seems to be one level of isolation (which I'm using, among other things, quite extensively).
[12:09] <jamespage> cpaelzer: I'm guessing you'd want to hold this as a patch for the UCA backport right?
[12:09] <rbasak> No, because to do that you already taint your host system configuration.
[12:09] <khedrub> blackflow, I am trying to find some info on isolating services with systemd unit configs. The search results are a bit unrelated though. Do you have any link to further reading or a good search term for this specific use case of unit files?
[12:09] <blackflow> there's no need to install a whole new Ubuntu, inside your existing Ubuntu, sans the kernel, just to run nginx off of it, for example.
[12:10] <cpaelzer> rbasak: blackflow: you two do realize that the problem of your discussion is that you are both right - you can do things with/without containers (of various types) - and it depends on your problem whether you want/need to use them (and there everyone can decide where to make the cut on their own - and it is ok that you two do so at different places)
[12:10] <blackflow> khedrub: https://gist.github.com/ageis/f5595e59b1cddb1513d1b425a323db04
[12:10] <blackflow> khedrub: and of course respective manpages for each of the directives
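[Editor's note: the isolation directives blackflow refers to live in systemd unit files. A rough example drop-in follows; the directive names are real systemd options (see systemd.exec(5)), but the service and file path are illustrative.]

```ini
# /etc/systemd/system/nginx.service.d/hardening.conf (illustrative)
[Service]
# Mount /usr, /boot and /etc read-only for this service
ProtectSystem=full
# Make /home empty and inaccessible to the service
ProtectHome=yes
# Give the service a private /tmp, invisible to other processes
PrivateTmp=yes
# The service and its children can never gain new privileges
NoNewPrivileges=yes
```

Apply with `sudo systemctl daemon-reload && sudo systemctl restart nginx`.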
[12:11] <blackflow> cpaelzer: which is what I've been saying from the beginning, with the addition of: don't _default_ to using a container unless you fully understand what problem it will solve. why is that in any shape or form bad advice? it's a GOOD system administration pattern.
[12:11] <khedrub> blackflow, thank you!
[12:14] <khedrub> It was very worthwhile for me that you two had that discussion. It brought some clarity about the pros and cons of either solution, if nothing else. :-) So thank you both
[12:15] <blackflow> rbasak: "taint your host system configuration" -- yes, so? if there's a conflict, please report a bug against the specific package. there can also be bugs with snaps you know. let's begin with all the littering of /snap, ~/snap and loop mounts.......
[12:15] <blackflow> if that is not tainting the host system, I don't know what is.
[12:16] <cryptodan> thats one of the first things i did when i installed ubuntu server was to remove snap from the system
[12:16] <blackflow> khedrub: and all I'm saying is don't default to them unless you understand the problems containers solve, if that fits your use case, and you're aware of all the drawbacks of containers.
[12:17] <blackflow> cryptodan: everyone does it, where I've been called to maintain Ubuntu systems. but somehow my views are not representative..... I think Ubuntu devs have a huge disconnect between what they think users want and what users actually DO in practice.
[12:17] <mason> blackflow: So, you've missed one critical point with containers vs a base system service - service stability. Even for a singleton, there's not live migration of containers.
[12:18] <blackflow> mason: you mean of base system service
[12:18] <cryptodan> i also removed the cloud stuff from the base install
[12:18] <mason> Or perhaps that should be *especially* for a singleton.
[12:18] <cryptodan> mason: then it should be optional
[12:18] <cryptodan> not forced
[12:18] <mason> blackflow: No, live migration of containers and a network config that goes with them.
[12:19] <mason> Which is to say, live migration of services.
[12:19] <blackflow> mason: sorry I don't follow what you wanted to say then
[12:19] <cryptodan> they should have the mentality of a single use server for base installs
[12:19] <mason> blackflow: You were arguing that containers aren't at all like VMs, but that's no longer quite true.
[12:19] <cryptodan> no added crap like snapd, cloud stuff, or anything like containers
[12:20] <blackflow> mason: I was arguing from the standpoint of ISOLATION, which was the discussion. one is process namespaces, the other is a whole kernel running on the CPU with host-side irq handlers triggering on hardware virt
[12:21] <blackflow> VMs are a whole different level. not just from the CPU standpoint, but also from the hardware standpoint. memory compartmentalization. context switches.
[12:21] <mason> If isolation is the critical thing, it's also worth bringing up type I vs type II hypervisors, then.
[12:21] <blackflow> mason: sure but this convo and my original objection was only this:  don't use containers as _Default_ solution unless you understand what they do, what problems they'd solve for you, and what are the drawbacks.
[12:22] <mason> You'd hate my answer, then. For me, I like containers as lighter-weight VMs.
[12:22] <blackflow> the original question was about "In my understanding the modern way to do this is to use containers" -- for running services.  and I said, no, not by default.
[12:22] <blackflow> mason: if that solves your use case and you know what you're doing, that's not in any way contrary to what I'm saying :)
[12:23] <mason> Yeah. I was just thinking about it. If the *only* reason for using containers is isolation, that's still not *bad* in any way. It's like using a shorthand to talk about separation, rather than depending on not missing any of a range of available tools.
[12:24] <mason> As for the scale of managing one system's updates vs dozens, that's something to be automated anyway. The exact number shouldn't matter at all.
[12:25] <blackflow> mason: and even then, what kind of isolation. it's all about namespaces. process, pid, filesystem, network.
[12:25] <mason> sure
[12:29] <blackflow> for example, in my use case, the packaged nginx, postgres, dovecot, and postfix -- they all fit my needs. I isolate them with systemd unit configuration options. additionally with apparmor profiles. and I benefit from those packages being maintained in the way they are, stable and with backported fixes.  I trust that way more than a random docker someone plopped on a hub somewhere, or a random
[12:29] <blackflow> snap someone uploaded.
[12:30] <blackflow> if I needed to use or test the super bleeding edge version of nginx for example, then totally yes, I'd use a container (LXD probably) for all the reasons mentioned here as benefits: full isolation without affecting the base OS.
[12:34] <cryptodan> id do it in a vm
[12:54] <mason> blackflow: Ah, that's different, pulling in the notion of random dockers.
[12:54] <mason> Hand-maintained homebrew containers can still use the nice, curated system packages.
[12:56] <blackflow> mason: yes, and then you have that problem of having to maintain multiple systems (sans the kernel) yourself manually. which is fine if that solves your case. unnecessary complexity if `apt install X` would've solved your case in the first place.
[12:56] <mason> I assume I already have that problem in all cases, though.
[13:03] <blackflow> mason: sure but the orig question was about using third-party prepackaged containers with snap or docker. those contain additional "don't do it unless you really need it and really know what you're doing" stickers. it's one thing to say "I need to isolate this thing here in a way LXD does it, so I'll build an LXD container and apt install what I need, in there."    and quite another to say "I'll
[13:03] <blackflow> install a docker or snap of package X because it's the 'modern thing to do' without understanding what that really means".
[13:03] <blackflow> I'll always bark against the latter.
[13:05] <mason> Ah. Ah. I'd missed that. I skimmed backlog to get an idea of things, but yeah, using someone else's packaged bundle leaves me cold too.
[13:24] <jamespage> coreycb: I'll make a start on the oslo.* ones
[13:48] <coreycb> jamespage: sounds good. i'm going to fix up the vitrageclient backport and then i'll get started
[14:29] <coreycb> sahid: nova 16.1.7 pushed and uploaded to pike-staging. thank you.
[14:44] <mwhahaha> coreycb, jamespage: we're getting tempest failures in puppet openstack because we appear to be missing https://review.openstack.org/#/c/605851/ in keystone. when's the next time you're going to update the stein packages?
[14:50] <sahid> coreycb: ack thanks
[15:03] <jamespage> mwhahaha: next 24-48hrs
[15:03] <mwhahaha> k thanks
[15:03] <jamespage> we're working deps first, and then will do the core projects
[15:09] <ykarel> mwhahaha, so till then we pin tempest, or wait?
[15:10] <mwhahaha> i'll propose a tempest pin if ic an figure out the patch that broke it
[15:10] <ykarel> mwhahaha, should i propose
[15:10] <ykarel> was preparing a patch
[15:10] <ykarel> commit message should explain it
[15:11] <mwhahaha> k
[15:11] <mwhahaha> if you have it sure
[15:11] <ykarel> ok
[16:00] <jamespage> coreycb: awesome new unpackaged dep for oslo.service
[16:00] <jamespage> \o/
[16:00] <coreycb> jamespage: oh great
[16:00] <coreycb> jamespage: let me know if i can help
[16:25] <CPressland> Afternoon all, was wondering if somebody could help me with Netplan? I'm spinning up a VM on Azure with multiple DHCP IPs per NIC for use in a Kubernetes Cluster. Netplan is detecting the Primary IP but cannot see any additional IPs. How do I configure Netplan to get all secondary IPs (30 of them)?
[16:31] <ruben23> guys my ubuntu server has some big text on my monitor screen, how do i adjust it to be smaller
[16:31] <ruben23> and a bit high res
[16:36] <ruben23> anyone here guys
[16:36] <lotuspsychje> !patience | ruben23
[16:38] <sdeziel> ruben23: that's usually something you set in your client's terminal
[16:38] <tomreyn> CPressland: looking this up a little, i think this is bug 1759014
[16:40] <CPressland> tomreyn: That looks like it almost exactly! Thanks!
[16:41] <tomreyn> CPressland: Consider clicking on "This bug affects 14 people. Does this bug affect you?"
[16:41] <tomreyn> + subscribing
[16:45] <CPressland> Done! Fingers crossed for a backport. For now I'll do my prototyping on 16.04
[16:45] <CPressland> Thanks again.
[16:58] <sarnold> ruben23: is it coming up with 80x25 vga? or a framebuffer console? or X11? what do you want it to do?
[16:59] <ruben23> sarnold: coming from vga, the resolution makes everything too big and i can't see the whole picture on the server, especially when i try to check logs. how do i adjust it to be smaller and a bit higher res?
[17:01] <sarnold> ruben23: most people just ssh into their machines and don't care about the display attached to it
[17:02] <sarnold> ruben23: there are a few kernel command line parameters you can try -- look at video= and vga= in https://github.com/torvalds/linux/blob/master/Documentation/admin-guide/kernel-parameters.txt#L4940
[17:02] <sarnold> ruben23: you can also install X11 if you want to
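[Editor's note: for sarnold's kernel-parameter suggestion, one hedged way to try it on Ubuntu is via the GRUB defaults file. The resolution below is a placeholder; pick a mode your monitor supports (see the kernel-parameters.txt link above for the `video=` syntax).]

```
# /etc/default/grub -- illustrative; after editing, run: sudo update-grub
# and reboot. 1280x1024 is a placeholder resolution.
GRUB_CMDLINE_LINUX_DEFAULT="video=1280x1024"
```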
[17:04] <ruben23> sarnold: if there is X11 it will automatically adjust, right?
[17:04] <sarnold> ruben23: yeah I think it'll try to run at the best the video card and monitor can support
[17:08] <ChmEarl> CPressland, /usr/share/doc/netplan.io/examples/
[17:15] <CPressland> ChmEarl: Thanks, unfortunately those examples don't cover off what I'm trying to achieve here. Basically Azure has provisioned 31 IP Addresses for a single NIC, but I can only get Ubuntu itself to see the "Primary" IP. I can manually assign secondary IPs and it works just fine, but the point is that I won't always know what the IP address is (nor will Chef) as we're using DHCP.
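[Editor's note: until the DHCP-secondary-IP bug referenced above is fixed, a hedged workaround is to keep DHCP for the primary address and list the known secondaries statically. Interface name and addresses below are placeholders.]

```yaml
# Illustrative /etc/netplan/50-azure-secondary.yaml
# dhcp4 keeps the primary IP; the secondaries are listed statically.
# Apply with: sudo netplan apply
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      addresses:
        - 10.0.0.5/24
        - 10.0.0.6/24
```

This doesn't help when the secondary addresses aren't known in advance, which is CPressland's actual constraint.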
[17:22] <CPressland> Looks like the Azure CNI may actually handle some of this for me. I'll do some further testing on that assumption.
[18:55] <Jofi00> hi, can anyone help me with first steps in ubuntu-server installation?
[18:56] <lordcirth> Jofi00, what is going wrong?
[18:56] <Jofi00> I have installed ubuntu-server and have nextcloud running, but I cannot find any apache folder to mod the configuration
[18:57] <lordcirth> Jofi00, /etc/apache2 should have your config
[18:57] <tomreyn> Jofi00: how did you install nextcloud?
[18:58] <Jofi00> in /etc I cannot find the folder
[18:58] <Jofi00> I did install nextcloud via the menu in the installation process
[18:59] <Jofi00> could it be that by default any other server is running?
[18:59] <sdeziel> Jofi00: that sounds like the snap version of nextcloud
[18:59] <tomreyn> Jofi00: aaw snap, i think this installed a snap then.
[18:59] <lordcirth> I've never used that method. Perhaps it's a snap?
[18:59] <Jofi00> could be
[18:59] <Jofi00> so this would use its own server then?
[19:01] <lordcirth> Jofi00, run 'locate nextcloud'
[19:02] <Jofi00> unfortunately, it doesn't return anything
[19:02] <lordcirth> Actually, snap info nextcloud
[19:03] <Jofi00> ah
[19:04] <teward> if you used the server live installer (subiquity based) and selected nextcloud, it used the snap.
[19:04] <teward> can guarantee that
[19:05] <Jofi00> ok, this is a good start for me to search for the config file, thanks
[19:05] <blackflow> that's terribad. this default to snaps nonsense must stop.
[19:06] <Jofi00> snaps are no good? havent used them before
[19:07] <sarnold> snaps aren't a bad way to try to get some of the benefits of windows-style software distribution
[19:07] <blackflow> I didn't say that. I said _default to snaps_ is bad. snaps per se are solving particular problems, yes.
[19:07] <Jofi00> gotcha
[19:08] <blackflow> in the sense that if you want to use the snap, then it should be a deliberate, conscious action of `snap install nextcloud`. not automagic where you're left wondering wth this is and where the files are. ooh, in some squashfs loopback-mounted readonly dir.
[19:08] <sarnold> if your machine exists to do nextcloud, then using a nextcloud snap is a pretty decent idea. if it's just something that'll be there, and you don't care about specific features, specific bug fixes, upgrading every release, etc, then a deb might fit nicely
[19:08] <sdeziel> Jofi00: snaps are nice self contained software a bit less flexible than what you'd get from a .deb package. Could be either good or bad depending on what you are looking for
[19:09] <lordcirth> snap is a great way to install ipfs. But it puts its config files in ~/.ipfs anyway.
[19:09] <sdeziel> blackflow: IIRC, the live installer clearly mentions snap when providing a list of snaps to pick from
[19:11] <blackflow> apparently that didn't help. :)
[19:12] <Jofi00> it didn't keep me from getting confused
[19:12] <lordcirth> It mentions snap, yes, but I don't think it defines the term or mentions that nothing will be where you'd expect.
[19:13] <sdeziel> yes, clearly the notice wasn't noticeable/clear enough ;)
[19:13] <sdeziel> the good thing is there is one so it's just a matter of improving it
[19:14] <sarnold> it's hard to convey the full range of pros and cons in one installer screen though :)
[19:15] <sdeziel> could be improved by dropping a motd snippet with a brief intro on snaps if one was picked during the installation?
[19:17] <blackflow> how about don't treat server users as idiots. unlike desktops, servers _should_ by all means be installed by skilled and experienced people who then will learn what snaps are, and whether they want it, with all the pros AND cons of it.
[19:17] <Jofi00> Maybe it was just me being completely unaware of snaps. However, some kind of notice or explanation would have helped.
[19:17] <blackflow> and as such, power users will have a choice to `snap install anything` should they decide to do so, being made aware of pros AND cons.
[19:19] <sdeziel> blackflow: such admins are likely not going to click anywhere in the installer's list and will deal with any snap installation later on
[19:20] <sdeziel> snaps are new so they need some kind of introduction that's not required for debs
[19:22] <sdeziel> Jofi00: you can learn more about that nextcloud snap in the README at https://github.com/nextcloud/nextcloud-snap
[19:22] <Jofi00> thanks
[19:24] <blackflow> Jofi00: and be aware that snaps update automatically and you have no control over it. depending on your use case, this might not be desirable. restarting server services should be a controlled, scheduled activity. -- depending on your use case of course, perhaps you don't care about that at all
[19:26] <Jofi00> yes, I'll probably go with the non-snap installation
[19:29] <blackflow> sdeziel: definitely because as it is now, it's just a "Featured Server Snaps" selection menu with no explanation what snaps are, what are pros and cons.
[19:31] <sdeziel> Jofi00: in the nextcloud case, my personal recommendation would be to stick with the snap, or at least carefully consider what it means to not use it: no auto update, no automatic HTTPS cert, etc
[19:32] <sdeziel> nextcloud will be hosting potentially important data so updates and HTTPS are desirable
[19:32] <sdeziel> my 2c ;)
[19:32] <blackflow> and quite doable by the sysadmin even without the snap.
[19:33] <lordcirth> the snap comes with certbot?
[19:34] <blackflow> so, sure, if "Just gimme nextcloud, I don't care about the details" is what you want, snaps are fine. my whole objection is "the admin should be aware of all those details and make a conscious decision".
[19:35] <sdeziel> lordcirth: dunno what client they use but they integrate seamlessly with Let's Encrypt
[19:35] <blackflow> lordcirth: it's a kitchen sink of Apache, MySQL, Redis, PHP and then some.
[19:37] <blackflow> so that's basically a whole appliance consisting of several software suites. people should _really_ be made aware of things like that.
[19:38] <blackflow> OHLOL AGPL licensed.... yeah be VERY very careful with that.
[19:39] <lordcirth> If you aren't changing the source, I don't see why AGPL would be a problem?
[19:40] <lordcirth> Though you should be aware, yes
[19:40] <JanC> or when you are just running it for yourself
[19:42] <blackflow> for personal use it's okay. if you use it in conjunction with other software (eg. in a SaaS scenario) you have to release that other software's source as well
[19:43] <JanC> AGPL only applies to the software itself & its dependencies
[19:43] <blackflow> another general license to be VERY careful with, is the new Commons Clause, you can't use it in conjunction with commercial products.
[19:43] <blackflow> JanC: and software that uses the AGPL'd component as its own dependency
[19:45] <sdeziel> AFAICT, that AGPL license is not specific to the snap though
[19:45] <JanC> only if it's really a dependency (e.g. a control panel doesn't become AGPL because it can start/configure an AGPL service)
[19:46] <blackflow> well, thing is, AGPL and GPL'd software can't be made together into a single work (the snap). Also Redis now has its own, totally separate and totally FOSS-unfriendly license, so whoever is packaging that snap should be careful about which version it's using.
[19:47] <blackflow> in other words it's a potential minefield, kitchen sink bloatwares like this, made of so many differently licensed components.  all the details people should be very much aware of before they one-click install a conveniently featured snap.
[19:47] <JanC> I assume they use an open source version of Redis
[19:47] <JanC> ?
[19:48] <lordcirth> JanC, I wasn't aware there was a proprietary version?
[19:48] <blackflow> you're confusing "open source" with "libre". Redis still is open source. it ain't libre no more tho'
[19:49] <JanC> Commons Clause isn't considered an Open Source license AFAIK
[19:49] <blackflow> "open source" or "libre"?
[19:50] <blackflow> open source means literally "here's the source code of this program". just that, nothing more.
[19:50] <lordcirth> https://redis.io/topics/license
[19:51] <lordcirth> blackflow, no, that would be "shared source" https://opensource.org/osd
[19:51] <JanC> the source being available doesn't make it Open Source
[19:51] <JanC> the source of MS Windows is also available
[19:51] <JanC> if you sign a whole bunch of NDAs etc.
[19:52] <blackflow> lordcirth:  https://techcrunch.com/2019/02/21/redis-labs-changes-its-open-source-license-again/
[19:52] <blackflow> https://www.gnu.org/philosophy/free-software-for-freedom.html
[19:52] <blackflow> Stallman on libre vs open source
[19:53] <Odd_Bloke> blackflow: Your definition of open source is not a generally accepted one.
[19:53] <blackflow> open source is literally "here's the source code with your fries". libre/free licenses give you rights wrt redistribution of that software, which is commonly (mistakenly) known as "open source"
[19:53] <blackflow> Odd_Bloke: ain't _mine_ tho :)
[19:54] <lordcirth> "Redis Source Available License" oof. But it only applies to certain modules so far, not the actual redis server
[19:54] <Odd_Bloke> Yeah, it's Redis Labs and not Redis itself that is affected.
[19:54] <lordcirth> Something to keep an eye on, though.
[19:55] <Odd_Bloke> Indeed.
[19:55] <blackflow> still something to be careful about, esp. in whole wheat packaged solutions. which redis modules are inside?
[19:56] <Odd_Bloke> Yep, it definitely makes Redis usage more fraught than it used to be.
[19:57] <sdeziel> https://github.com/nextcloud/nextcloud-snap/blob/master/snap/snapcraft.yaml#L228
[19:59] <JanC> the new Redis Labs license doesn't satisfy https://opensource.org/osd-annotated AFAIK, meaning it's not Open Source
[20:01] <Odd_Bloke> blackflow: I believe you're also mistaken about AGPL/GPL compatibility: https://opensource.stackexchange.com/a/1726
[20:05] <blackflow> or that SO poster is. my reading of AGPL+GPL (v2 btw) compat is not mine alone. our lawyers have pretty much explained to us to stay away from agpl and commons clause like the plague. my company does SaaS.
[20:08] <Odd_Bloke> And, furthermore, I don't believe that a snap would be considered a 'work'; snaps are filesystem images, so them being a work would mean that any ISO would be, for example.
[20:09] <Odd_Bloke> Yeah, you should definitely stay away from both of those licenses, but presumably not because they'll compromise your GPL-licensed code but because they'll compromise your proprietary code.
[20:10] <blackflow> yes, that's the part in danger here. and btw, I think you're right, the AGPL part affects modified code only. if we modified and used an AGPL component, we'd have to release it even though we're not redistributing anything (it being SaaS)
[20:15] <lordcirth> Yes, that is the whole purpose of the AGPL
[20:17] <JanC> I wouldn't mind using AGPL for most purposes
[22:09] <rbasak> blackflow: AGPL-3 is DFSG compatible. If you don't like it or can't use it, then fair enough, but Debian (and therefore Ubuntu) already include AGPL-3 software, so you can't expect the distro to eliminate that for you.
[23:53] <foo> I'm looking for a monitoring system that has an API that I can feed an IP address (publicly accessible) and it can "onboard" that IP and then monitor. Specifically, I'd like it to determine how it can be monitored - eg. what ports are open, what ports share a banner, is it pingable - then share back when the system goes offline via an API and via whatever it originally found to be "onboarded." Maybe
[23:53] <foo> with a confidence score. eg. if 6 ports are open, and it's pingable, and everything goes unresponsive... it's likely it's all offline. Or, if 6 ports open and it's not pingable and 1 port closes, then there "may be" an issue. Does nagios or zenoss or something else happen to provide something like this? Or do we need to roll our own system
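[Editor's note: a rough sketch of the "onboarding" probe foo describes, using bash's built-in /dev/tcp. The host, port list, and scoring are all placeholders; a real implementation would more likely wrap nmap and feed its findings into nagios/zenoss checks.]

```shell
# Probe a host's ports and count how many respond -- the raw input for
# the "confidence score" idea above. Host and ports are placeholders.
host=127.0.0.1
open=0; probed=0
: > probe.log
for port in 22 80 443; do
    probed=$((probed+1))
    if timeout 1 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        open=$((open+1))
        echo "port $port open on $host" >> probe.log
    fi
done
echo "confidence inputs: $open of $probed probed ports open" | tee -a probe.log
```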