[00:27] <keithzg[m]> Hmm, running a 16.04 -> 18.04 upgrade I'm seeing a whole whack of "no candidate ver:" listings for ancient packages that I haven't had installed for good reason for quite some time (ex. a whole bunch of 3.x kernels).  Is there some way to clear out those old listings?
[00:42] <tomreyn> keithzg[m]: you must have packages installed which still reference these
[00:43] <tomreyn> may i suggest https://github.com/tomreyn/scripts#foreign_packages to sort out leftover packages (after cleaning up your apt sources, i.e. removing those you no longer need).
[00:45] <keithzg[m]> tomreyn: Thanks!  I'll definitely give that a shot once I'm done the upgrade. I'd be somewhat surprised if there was anything really, but then again this is a VM I actually inherited from the previous sysadmin so who knows what skeletons are in its closet, hah
[00:46] <tomreyn> a good idea then. also "ubuntu-support-status"
[00:48] <tomreyn> keithzg[m]: ^ and maybe debsums -as
[00:48] <tomreyn> + deborphan ;)
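The cleanup tools tomreyn mentions can be combined into a short pass. This is a sketch only — flags are from memory, so check each tool's `--help`; the foreign_packages script is the one linked above:

```shell
# Review leftover and unsupported packages around a release upgrade.

# Classify installed packages by Canonical support status:
ubuntu-support-status --show-all

# Verify installed files against package checksums
# (-a: include config files, -s: silent, report errors only):
sudo debsums -as

# Find libraries no other installed package depends on:
sudo apt install deborphan
deborphan
```

Run these after cleaning up /etc/apt/sources.list, as suggested, so stale third-party repos don't skew the results.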
[01:27] <keithzg[m]> Huh, fail2ban is "unsupported" these days? Even more surprising, lazily searching packages.ubuntu.com returns no results for it, but that just seems to be that the website search isn't working (a local `apt policy fail2ban` shows that it's in universe, which I suppose would be the reason for ubuntu-support-status to report it as "unsupported" even though that seems misleading).
[01:28] <tomreyn> !info fail2ban
[01:29] <tomreyn> keithzg[m]: https://help.ubuntu.com/community/Repositories#Universe
[01:31] <keithzg[m]> tomreyn: Oh I know what 'universe' means for the Ubuntu repos. It's just that "supported" or "unsupported" seems like a misleading binary state to me in this case; that it's from a maintained package in the repos is surely at least *some* level of support, particularly compared to, say, if it was installed from a third-party repository that's no longer configured on the system.
[01:32] <tomreyn> here 'supported' refers to 'canonical provides security support for it'
[01:32] <sarnold> fail2ban has been in universe since at least precise, probably earlier
[01:32] <keithzg[m]> Like, the distinction makes sense, it just means that `ubuntu-support-status` isn't necessarily too useful to me.
[01:32] <sarnold> keithzg[m]: the important takeaway here is that if there's a bug in it that you want fixed, *you're* the one supporting it :)
[01:33] <keithzg[m]> sarnold: Eh, that's not the takeaway is it? Surely it's just, if there's a bug in it that I want fixed, it just isn't *Canonical's* problem :D
[01:34] <sarnold> keithzg[m]: yeah :) it's just that all too often folks expect Someone Else to solve their problems..
[01:35] <keithzg[m]> sarnold: Yeah, fair! Although I can't imagine such folks would let a package being in "universe" stop them, hell I bet rarely would "I downloaded this from some random website and half-followed the instructions" stop 'em ;)
[01:36] <sarnold> keithzg[m]: too right you are ;)
[06:01] <lordievader> Good morning
[07:42] <jelly> keithzg[m]: what else would you have "unsupported" mean but "unsupported by distro vendor"
[12:15] <ahasenack> good morning
[12:49] <ahasenack> rbasak: thanks for the reviews. Could you take a quick look at https://code.launchpad.net/~ahasenack/ubuntu/+source/squid/+git/squid/+merge/356100/comments/926735 for one extra commit I added to squid? It's on top of what you reviewed already, I just had to regenerate the changelog after it
[13:08] <rbasak> ahasenack: +1 (commented)
[13:09] <ahasenack> rbasak: thanks
[13:10] <ahasenack> rbasak: I think I'll take over https://code.launchpad.net/~paelzer/ubuntu/+source/strongswan/+git/strongswan/+merge/355589
[13:10] <ahasenack> it was trumped by two security updates in the meantime, and the upload was rejected (https://launchpad.net/ubuntu/cosmic/+queue?queue_state=4&queue_text=strongswan)
[13:10] <ahasenack> rbasak: is there a place to see why it was rejected, although in this case I think that was the reason?
[13:16] <rbasak> ahasenack: only the uploader gets the reject message unfortunately
[13:16] <ahasenack> ok
[13:16] <rbasak> Ask in #ubuntu-release perhaps?
[13:30] <ahasenack> checked, it was the secteam's upload
[13:30] <ahasenack> I'll resubmit
[13:31] <ahasenack> rbasak: what happens with the git tree in this case?
[13:31] <ahasenack> I guess since it was never uploaded, it will never be imported
[13:32] <ahasenack> so the changes will never show up in the pkg git tree, just the upload tag
[13:32] <ahasenack> which won't match what was actually uploaded as that version/release
[13:35] <rbasak> ahasenack: correct. Best to delete the upload tag to avoid confusion.
[13:35] <rbasak> (which is the ugly part, but it's the least worst option IMHO)
[13:44] <ahasenack> rbasak: https://code.launchpad.net/~ahasenack/ubuntu/+source/strongswan/+git/strongswan/+merge/356135 3rd mp about this :)
[14:04] <rbasak> ack
[14:22] <Slashman> hello, am I missing some kind of package to have libvirt support zfs pool on 18.04? trying "virsh pool-define-as nvme1 zfs --source-path /dev/zvol/nvme1" gives me "error: internal error: missing backend for pool type 11 (zfs)"
[14:22] <Slashman> it works on 16.04, and I don't remember having to install anything special
[14:23] <ahasenack> hm
[14:24] <ahasenack> Slashman: try installing libvirt-daemon-driver-storage-zfs
[14:24] <Slashman> ahasenack: thanks! this package does not exist on 16.04
[14:25] <Slashman> ahasenack: hm, same error, do I have to restart something or modify a config somewhere ?
[14:25] <ahasenack> try restarting libvirtd-bin (iirc)
[14:26] <Slashman> ahasenack: "libvirtd.service", and it works now, thanks! time to update my ansible role
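The fix above can be summarized as a short sequence. On 18.04 the ZFS storage backend was split out of the main libvirt packages; the pool name `nvme1` is the one from the conversation:

```shell
# Install the split-out ZFS storage driver (not needed on 16.04,
# where it shipped with the main libvirt packages):
sudo apt install libvirt-daemon-driver-storage-zfs
sudo systemctl restart libvirtd.service

# Define and start a libvirt pool backed by an existing zpool:
virsh pool-define-as nvme1 zfs --source-path /dev/zvol/nvme1
virsh pool-start nvme1
virsh pool-autostart nvme1
```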
[14:26] <ahasenack> cool
[14:32] <mason> Does libvirt-daemon-driver-storage-zfs end up setting up zvols per VM?
[14:32] <mason> I've been doing this by hand, and I like the idea of it automatically happening.
[14:33] <ahasenack> are there particular advantages in using zvols instead of plain image files on a zfs dataset?
[14:34] <ahasenack> I find the image file quite convenient, mainly because of its name and ease of moving around if needed
[14:34] <Slashman> ahasenack: very useful to transfer, clone, backup, etc
[14:34] <ahasenack> well, that's about zfs, not zvols in particular
[14:34] <mason> ahasenack: Yes. send/receive/snapshot per VM
[14:34] <ahasenack> hm, per vm, instead of per directory where all vms are you mean?
[14:34] <mason> Sorry, I should have said "per VM"
[14:35] <Slashman> yes, datasets per VM is great, you can have several per VM too, I usually have at least one for the OS and another for the data
[14:36] <sdeziel> ahasenack: I didn't benchmark it but I'd expect better performance from zvol when compared to raw|qcow2 on zfs
[14:36] <ahasenack> there are benchmarks out there comparing the two, and it's not that clear cut
[14:36] <Slashman> you can snapshot the tree too, eg: "tank/VM/xenial/root and tank/VM/xenial/data", "zfs snapshot -r tank/VM/xenial"
[14:37] <Slashman> or make a different tree to snapshot only the data
[14:37] <ahasenack> http://jrs-s.net/2018/03/13/zvol-vs-qcow2-with-kvm/
[14:38] <Slashman> tank/VM/root/vm1 tank/VM/root/vm2 tank/VM/data/vm1 tank/VM/data/vm2 => zfs snapshot -r tank/VM/data
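Slashman's per-VM dataset layout can be sketched as follows; dataset and host names are hypothetical, following the `tank/VM/...` naming from the conversation:

```shell
# One parent dataset per concern, one child dataset per VM:
zfs create -p tank/VM/root/vm1
zfs create -p tank/VM/data/vm1

# Snapshot only the data tree, recursively and atomically:
zfs snapshot -r tank/VM/data@nightly

# Replicate a single VM's data to another host:
zfs send tank/VM/data/vm1@nightly | ssh backuphost zfs recv backup/vm1
```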
[14:38] <sdeziel> ahasenack: interesting, thanks
[14:40] <sdeziel> the per VM snapshot is just too nice for me to abandon zvols though
[14:42] <ahasenack> yeah, I can see that
[14:42] <Slashman> on a side note, using "ashift=13" on ssd is not a good idea in reality
[14:42] <sdeziel> this becomes especially nice when coupled with a pre-boot snapshot that a qemu hook can do :)
[14:42] <Slashman> it destroys the compressratio and the performance is basically the same
[14:44] <Slashman> backups and migration of VMs are also much easier with a zvol per VM
[14:45] <ahasenack> rbasak: I applied your suggestions and ran the tests again, all good. Could you take another look? https://code.launchpad.net/~ahasenack/ubuntu/+source/sssd/+git/sssd/+merge/355524
[14:45] <mason> I'm kind of excited about libvirt-daemon-driver-storage-zfs now.
[14:45] <ahasenack> rbasak: and, do you have a preference whether to squash it all now or later? I think it's easier to review leaving as is
[14:45] <mason> And since my hypervisors are Bionic, I can leap right in.
[14:46] <ahasenack> mason: does it create the zvol, or do you have to hand it one already created?
[14:46] <ahasenack> (if you have tried it already)
[14:46] <Slashman> seems like libvirt-daemon-driver-storage-zfs is only the driver and does nothing else
[14:47] <mason> ahasenack: We'll find out! I assume it creates it, because if it doesn't, the package doesn't do much.
[14:47] <mason> Hrm. In that case, what's it actually do? I'm creating zvols by hand and passing them in as block devices.
[14:47] <Slashman> mason: it activates the zfs pool storage, you cannot have one without it
[14:47] <ahasenack> you can create a pool with a random block device, no? Then what would be the difference indeed between that and a zvol?
[14:47] <mason> Slashman: I can say for sure that I can have one without that package. :P
[14:47] <Slashman> mason: I can't
[14:48] <Slashman> mason: I tried before
[14:48] <Slashman> see above
[14:48] <mason> Slashman: Worked fine for me in Xenial, continues to work fine in Bionic...
[14:48] <rbasak> ahasenack: +1 - commented
[14:48] <Slashman> mason: it worked in xenial fine, this machine was installed with bionic, it was not upgraded
[14:48] <Slashman> the nuance may be here
[14:49] <mason> Slashman: Same here. I redid my hypervisor from scratch.
[14:49] <rbasak> ahasenack: I'm caught up with you now I think? Anything else pending review for you right now?
[14:49] <mason> Slashman: I have both cases. Desktop/hypervisor is an upgrade, and dedicated hypervisor was a fresh install.
[14:49] <mason> Slashman: screenshot incoming
[14:49] <ahasenack> rbasak: nope, you've been stellar, thanks
[14:49] <Slashman> mason: well, I had "error: internal error: missing backend for pool type 11 (zfs)" before I installed libvirt-daemon-driver-storage-zfs
[14:50] <Slashman> I tried to define it via xml, via virsh pool-define-as and via virt-manager, same error
[14:50] <mason> Slashman: https://imgur.com/a/E8jyKQp
[14:50] <Slashman> mason: this is a raw disk
[14:50] <mason> Slashman: And I don't have libvirt-daemon-driver-storage-zfs on either system.
[14:51] <mason> Slashman: Yes.
[14:51] <Slashman> that's not the same
[14:51] <mason> Slashman: How so?
[14:51] <Slashman> you can have pool of type zfs
[14:51] <mason> Slashman: What's that buy me if zvols aren't autogenerated when I create a VM?
[14:52] <Slashman> mason: maybe then, I have never used the autogeneration of storage
[14:52] <Slashman> I have scripts that create everything and then define the VM
[14:52] <mason> Slashman: I'm curious now. What does a "pool type of zfs" mean, tactically?
[14:53] <rbasak> ahasenack: do you want to continue with triage for bug 1787739? I have some questions I'd like answered but I don't want to pull the reporter in two different directions at once.
[14:53] <ahasenack> hm, I got that email
[14:53] <rbasak> (I see you're subscribed but it came up in my triage today)
[14:53] <Slashman> mason: https://apaste.info/cwL1
[14:54] <ahasenack> rbasak: let me take a quick look
[14:54] <Slashman> damn, no type
[14:54] <rbasak> ahasenack: no rush - just don't want it to get lost if I leave it
[14:54] <ahasenack> rbasak: I still think it's something on his setup, the vagrant image doesn't help
[14:54] <mason> Slashman: Well. But what does it do for you that I don't have passing in zvols as raw disks?
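What mason describes — hand-creating zvols and passing them in as raw block devices — looks roughly like this in the domain XML. A sketch only; the zvol path reuses the hypothetical dataset naming from earlier:

```xml
<!-- Attach a zvol to a guest as a raw block device. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/zvol/tank/VM/root/vm1'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The libvirt ZFS pool type instead lets `virsh vol-create-as` make the zvol for you, which is the convenience sdeziel points out below.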
[14:54] <ahasenack> rbasak: this is now falling under support I think. Asking for tcpdump packet captures and the like
[14:55] <ahasenack> I have never used vagrant, though
[14:55] <rbasak> ahasenack: yeah, I'd ask him for reproduction instructions (rather than an image) that hit public infrastructure
[14:56] <Slashman> mason: well, in virt manager, you see a pool and can select the drives, etc, not sure about the definition of the host themselves
[14:56] <ahasenack> rbasak: I'd say we can't reproduce it
[14:56] <ahasenack> maybe suggest that he inspect the traffic with tcpdump, and bump the logs on his 192.168.0.130 nameserver
[14:56] <ahasenack> rbasak: ^
[14:57] <Slashman> mason: but you make a good point, I'm not really using that, I guess that you have a driver type "zfs" that should have better perf than the "raw" one
[14:57] <rbasak> ahasenack: I'd avoid going into support detail. That encourages a more-support-help response and he's better off getting that from askubuntu.com or Ubuntu forums or wherever rather than in a bug.
[14:57] <rbasak> ahasenack: I wonder if this is https://people.canonical.com/~ubuntu-security/cve/2018/CVE-2018-5738.html
[14:57] <rbasak> (ie. deliberate somehow)
[14:58] <ahasenack> rbasak: you mean as a regression, or that he hasn't updated? (didn't check the version number)
[14:58] <rbasak> I mean that a security hole was closed and he's noticed
[14:58] <rbasak> (I haven't looked in detail, but he seems to be claiming a regression-update?)
[14:58] <ahasenack> I think not, because he does get a correct response, but with an error status
[14:58] <rbasak> At the least he can pin it down to a specific update for us.
[14:58] <ahasenack> it's odd. That's why I thought it was some sort of truncation
[14:59] <rbasak> Yeah I don't know why that would be SERVFAIL rather than refused.
[14:59] <mdeslaur> jamespage: hi! what's up with openvswitch (2.5.5-0ubuntu0.16.04.1) xenial?
[14:59] <rbasak> ahasenack: you want me to reply?
[14:59] <mdeslaur> jamespage: I want to fix some cves, and wonder if that's going to get released soon or not
[14:59] <mason> Slashman: I'll compare performance sometime, as that's a fair bet.
[15:00] <Slashman> mason: "zfs" driver type doesn't exist, at least on xenial
[15:00] <ahasenack> rbasak: I think we should at least try his vagrant config, since he went through so much effort to try to help and give a reproducer
[15:00] <sdeziel> mason: Slashman: having a pool in libvirt (be it a zpool or a lvm one) means one can create new disks with only virsh access, no direct SSH required
[15:00] <ahasenack> rbasak: leave it to me
[15:01] <rbasak> Thanks
[15:01] <Slashman> sdeziel: okay, that makes sense
[15:01] <mason> sdeziel: Okay. Okay, that's also reasonable.
[15:01] <teward> sdeziel: just an FYI, ngx_brotli will only work over HTTPS :|
[15:01] <teward> so it has no benefit for non-https connections
[15:02] <Slashman> since I'm always creating the VM and their disk via automation, I never really looked at that, I just found useful to see the zpool with the volume in virt-manager when I needed to debug something
[15:02] <sdeziel> teward: I don't maintain a single HTTP only site ;)
[15:03] <sdeziel> teward: I looked at BREACH and the compression with HTTPS is only problematic when you compress stuff with secrets inside (like CSRF tokens)
[15:04] <Slashman> teward: ideally your http site should only be here to redirect to https
[15:05] <Slashman> teward: interesting, do you have a source for the BREACH stuff about secrets? also from my research, brotli uses a lot of CPU unless you compress your content in advance
[15:05] <teward> Slashman: as that's its own discussion in itself, we'll store this argument later.
[15:05] <teward> for later*
[15:05] <sdeziel> teward: so I _think_ I'm safe to use (gzip|br) for only CSS and JS
[15:05] <teward> sdeziel: indeed.  There's a headache in the brotli code though, if you give it text/html and a list of other MIMEtypes it throws a warning
[15:05] <teward> sdeziel: but yeah all 'should' be OK.
[15:06] <teward> sdeziel: basic tests seem to work in a container, so it'd work, but as there's some... code issues... that sarnold found, it wouldn't be in main
[15:06] <teward> there's some overflow / out of bounds concerns
[15:06] <teward> which could cause segv
[15:06] <sdeziel> teward: thanks for looking into this
[15:06] <Slashman> you need many more MIME types if you want a real gain, depending on your application
[15:06] <teward> sdeziel: thank sarnold as well
[15:07] <sdeziel> sarnold: yeah, thank you indeed
[15:07] <teward> sdeziel: one concern is text/html is *always* compressed
[15:08] <teward> even if you only want to compress css and js
[15:08] <teward> so unless you configure properly there may be a risk
[15:08] <teward> I don't have details on how BREACH works, the Security team might know more than me on that for testing
[15:08] <teward> sdeziel: but yeah it should be doable, provided that the issues sarnold found are non-issues
[15:08] <teward> (we're waiting for upstream responses)
[15:08] <Slashman> teward: what do you think of something like that for compression: https://paste.ubuntu.com/p/54fCqsc883/ (in httpd format)
[15:08] <teward> sdeziel: it wouldn't be added until next cycle though
[15:09] <teward> a bit late in the cosmic cycle to add it :|
[15:09] <teward> Slashman: i'm not sure how this got onto a discussion of "Is this sane" or not
[15:09] <teward> I was following up with sdeziel on something from yesterday
[15:09] <sdeziel> teward: indeed, upstream doc confirms that text/html is always compressed when gzip is enabled
[15:10] <teward> sdeziel: issues 21 and 22 are Seth's discoveries
[15:10] <Slashman> teward: okay, but you seem to have some experience in compression for httpd servers
[15:10] <teward> and those are what i'd wait on first :p
[15:10] <teward> Slashman: not necessarily?
[15:10] <teward> Slashman: i'm the nginx package maintainer
[15:10] <teward> sdeziel asked me if getting brotli support in NGINX was doable
[15:10] <sdeziel> teward: EMISSINGREFERENCE, is there a bug I should be looking at?
[15:11] <teward> sdeziel: upstream bugtracker, on the repo
[15:11] <sdeziel> thx
[15:11] <teward> https://github.com/eustas/ngx_brotli/issues, 21 and 22
[15:11] <teward> code level concerns
[15:11] <Slashman> teward: okay, I'm also interested in brotli for nginx
[15:11] <teward> sdeziel: if sarnold ACKs for Main inclusion then I can add this to the standard module set for all the flavors
[15:11] <teward> if he doesn't then it's stuck to -extras at the least
[15:11] <teward> (because 'all the flavors' would include -core)
[15:12] <teward> (for nginx, anyways)
[15:15] <teward> sdeziel: TL;DR, there's a conditional ACK on this because of the code problems/risks
[15:16] <teward> if there's no issues then all it determines is whether we want to MIR that plugin *into* the nginx-core flavor :P
[15:16] <sdeziel> I understood as much
[15:16] <teward> *yawns, and goes to find more coffee*
[15:17] <sdeziel> Slashman: re your compression config for apache httpd, it includes text/html which opens a BREACH when using HTTPS
[15:19] <sdeziel> Slashman: for details see http://www.breachattack.com/ and more specifically the "Am I affected" section as more conditions are needed to be vulnerable
[15:21] <teward> sdeziel: I wonder if that's a risk with brotli then as well, because it always compresses text/html?
[15:21] <teward> not sure but thought I'd ask.
[15:21] <teward> s/ask/mention it/
[15:21] <sdeziel> teward: the way I understood this applies to every compression algo
[15:21] <teward> sdeziel: then this would introduce another BREACH risk if left on the defaults (cc sarnold)
[15:22] <sdeziel> teward: anything that compresses the HTML body containing a secret thing
[15:22] <sdeziel> teward: well, same caveat as with gzip
[15:22] <teward> indeed.  with nginx you can configure brotli in a location block that matches only .css and .js or such to be enabled, thereby protecting against BREACH =
[15:23] <teward> but that gets complex fast heh
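teward's location-block approach can be sketched like this — compression is only enabled where HTML can never be served, sidestepping the implied text/html problem. Server name and paths are placeholders, and the brotli directives assume the ngx_brotli module is loaded:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # No compression at server level, so text/html is never compressed:
    gzip off;

    # Enable compression only for static assets:
    location ~* \.(css|js)$ {
        gzip on;
        gzip_types text/css application/javascript;
        # brotli on;
        # brotli_types text/css application/javascript;
    }
}
```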
[15:23] <sdeziel> teward: I must admit I don't like the always on compression for text/html
[15:23] <teward> sdeziel: agreed
[15:25] <teward> sdeziel: upstream issue 23 about breach opened.
[15:28] <Slashman> sdeziel: okay thanks, I'll check with the dev team if we have all the conditions to be vulnerable
[15:29] <sdeziel> teward: for the gzip part there is this bug already https://trac.nginx.org/nginx/ticket/1083
[15:29] <teward> sdeziel: yes, I know, but gzip_types actually lets you override to ignore text/html in NGINX code
[15:29] <teward> that's the workaround
[15:29] <teward> and it works
[15:29] <teward> but brotli doesn't have that workaround
[15:30] <teward> so it's a risk
[15:31] <sdeziel> teward: what I understood from the doc, is that gzip_types has an implied text/html
[15:31] <teward> sdeziel: if you don't specify `gzip_types` and override it, yes.
[15:31] <teward> but that's easily overridden
[15:32] <teward> my point is that the workaround which protects against text/html is adjusting the config.
[15:32] <teward> if you provide it, say, `gzip_types application/javascript text/css;` it ignores text/html
[15:32] <teward> that isn't the case in the brotli plugin
[15:32] <sdeziel> teward: please re-read https://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_types
[15:32] <teward> oop you're right i reread
[15:32] <teward> i need to bump this i think
[15:33] <teward> sdeziel: the other way is to just shut off gzip which is actually a default change I think
[15:33] <teward> at least, for the configs we ship... *double checks*
[15:33] <teward> sdeziel: given that the default is `gzip off;` this is only really a risk for people who use GZIP on their site
[15:33] <teward> but you're not wrong
[15:33] <teward> it's still a risk
[15:34] <sdeziel> teward: yup
[15:42] <jamespage> mdeslaur: I'll kickoff the testing now and clear the way for your CVE's
[15:42] <mdeslaur> thanks jamespage!
[16:14] <teward> sarnold: lol, apparently I get a faster response to my "BREACH Risk" issue on ngx_brotli than your code related questions get a reply to lol
[17:13] <sarnold> teward: nice find.
[17:19] <teward> sarnold: thanks.  yeah it was a "WTF" for a moment, but it looks like NGINX Upstream has the same problem and didn't do anything about it
[17:19] <sarnold> cute.
[18:22] <petershaw> How do I install Xen on Ubuntu 18.04? Seems that the repo is not enabled by default. Any ideas?
[18:23] <ahasenack> petershaw: can you check if you have universe enabled in /etc/apt/sources.list?
[18:23] <ahasenack> petershaw: the 18.04 server installer had a bug where it would only enable the main repository
[18:24] <petershaw> Ah. yes only main
[18:25] <petershaw> Thank you very much, ahasenack
[18:25] <ahasenack> petershaw: welcome, sorry about the bug
[18:25] <ahasenack> it's fixed in the last release
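On an affected install, the missing component can be checked and enabled by hand; a minimal sketch:

```shell
# Inspect which components (main/universe/...) are enabled:
grep -h '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/*.list

# Enable universe (where the Xen packages live) and refresh:
sudo add-apt-repository universe
sudo apt update
```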
[18:40] <teward> ahasenack: it still has that bug, actually.
[18:40] <teward> unless you mean the 18.04.1 ISO?
[18:41] <ahasenack> I thought 18.04.1
[18:42] <ahasenack> but *could* be mistaken
[18:42] <ahasenack> 18.10 is fixed for sure, I tested that recently
[18:43] <teward> ahasenack: just tested with the copy that got synced down on my local mirror, it only enabled main
[18:43] <teward> so hopefully for 18.04.2 that'll be fixed?
[18:43] <ahasenack> :(
[18:43] <ahasenack> no reason why not, since 18.10 is fixed
[18:43] <ahasenack> teward: if you are curious, you may get the fixed version even with the 18.04.1 iso
[18:44] <ahasenack> just switch to a terminal and issue snap refresh, if networking is up already
[18:44] <ahasenack> "snap refresh subiquity" probably
[18:44] <ahasenack> I haven't tried that, but heard it should be possible
[18:52] <teward> ahasenack: maybe.  I have a script that I run to update everything currently to get what I need in terms of repos.
[18:52] <teward> so heh
[22:30] <jak2000> sorry, I asked before but didn't find the answer. How do I do this: ONLY AFTER the server starts, wait 2 minutes, then execute a command stored in /usr/scripts/reloadApplication.sh? Of course the script has +x. Any advice?
[22:40] <petershaw> jak2000 it is not really ubuntu related, but what about a good old init.d script in combination with a sleep 120?
[22:42] <jak2000> petershaw thanks, and sorry  why not ubuntu related?
[22:42] <jak2000> apply to any distro?
[22:42] <genii> ... that uses sysvinit scripts still, yes
[22:45] <genii> You could also use some method like rc.local script which measures result of uptime against the timestamp of the last dmesg entry
[22:47] <genii> ( or just also waits the 120 seconds, etc)
[22:47] <sarnold> or cron @reboot sleep 120; /usr/local/bin/blah
[22:47] <genii> ..since rc.local is run after the system is otherwise fully booted
[22:49]  * genii slides sarnold a fresh mug
[22:49] <sarnold> awwwwww yissss
[22:49] <genii> hehe
[22:50] <jak2000> interesting cron...
[22:50] <jak2000> i want restart the server every Friday (it do)......
[22:50] <jak2000> and after boot up, run the script: /usr/scripts/reloadApplication.sh
[22:52] <jak2000> sarnold, then: crontab -e and write: 30 1  *    *    5 /sbin/shutdown -r now
[22:52] <jak2000> and ?
[22:53] <petershaw> Does someone have a tutorial link for xen with netplan? I can't get the link working in my guest system. For hours now. I am getting mad.
[22:53] <sarnold> and @reboot sleep 120 ; /usr/scripts/reloadApplication.sh
[22:53] <sarnold> petershaw :(
[22:53] <cyphermox> petershaw: not sure what you mean exactly, what kind of link?
[22:53] <sarnold> petershaw: most folks using ubuntu for virtualization either go with full openstack or libvirt.. xen's just not getting much love
[22:54] <sarnold> petershaw: where are you stuck? maybe someone's seen it..
[22:54] <jak2000> 30 1  *    *    5  @reboot sleep 120 ; /usr/scripts/reloadApplication.sh          <--- reboot and after 120 seconds run the script?
[22:54] <petershaw> sarnold i have a xenbr0 and a vlan, but my guest does not get a connection while installation.
[22:55] <cyphermox> is the guest the one you're trying to configure with netplan, or the host?
[22:55] <sarnold> jak2000: no, the @reboot takes the place of the time/date/dow/dom specification entirely
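Putting sarnold's correction together: the weekly reboot and the post-boot script are two independent crontab entries, and `@reboot` replaces the five time fields entirely. Paths and times are the ones from the conversation:

```
# crontab -e — two separate lines, never combined:

# 1) Reboot every Friday at 01:30 (min hour dom mon dow):
30 1 * * 5 /sbin/shutdown -r now

# 2) After every boot, wait 120 seconds, then run the script:
@reboot sleep 120; /usr/scripts/reloadApplication.sh
```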
[22:56] <petershaw> sarnold  this is my netplan conf https://pastebin.com/a9WGsANp
[22:56] <cyphermox> ok, the host.
[22:57] <cyphermox> petershaw: I guess the guest is getting connected on xenbr0?
[22:57] <petershaw> should be.
[22:58] <cyphermox> my guess is the subnet is wrong
[22:58] <cyphermox> in your config for enp4s0f0 you use /19
[22:58] <petershaw> jap. that is correct. it is a /19 net.
[22:58] <cyphermox> in xenbr0 you use /24, there's possibly some mess there, where the dhcp server on that network can't reach the devices behind the bridge?
[22:59] <cyphermox> those look to be on the same network -- enp4s0f0 and vlan 1 are both on "vlan 1"
[22:59] <cyphermox> unless you do some magic with vlan tagging, that is
[22:59] <petershaw> ah... I do not understand bridges, i guess.
[23:00] <petershaw> What ip shoud the bridge have?
[23:00] <cyphermox> you might also need to set ip forwarding, if the main network is supposed to give DHCP
[23:00] <petershaw> ip forwarding is enabled, also (network-script network-bridge) is uncommented
[23:00] <cyphermox> petershaw: I don't know, it depends on your network setup, but it's one number off from the IP you set for enp4s0f0, except in /24 instead of /19
[23:01] <cyphermox> so that /24 looks like it probably should be a /19?
[23:02] <jak2000> for testing purposes:
[23:02] <sarnold> petershaw: I don't know bridges either but it kind of looks like you've assigned an ip address to the interface that's attached to the bridge; I thought linux required the interface to not have an address, but give the bridge the address?
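What sarnold and cyphermox are describing would look roughly like this in netplan — the enslaved NIC carries no address and the bridge holds the /19. A sketch only: the real addresses are in petershaw's pastebin, so these are placeholders:

```yaml
# /etc/netplan/01-xenbr0.yaml — addresses/gateway are placeholders.
network:
  version: 2
  ethernets:
    enp4s0f0:
      dhcp4: no            # no address on the enslaved interface
  bridges:
    xenbr0:
      interfaces: [enp4s0f0]
      addresses: [192.0.2.10/19]   # same /19 as the physical network
      gateway4: 192.0.2.1
      nameservers:
        addresses: [192.0.2.1]
```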
[23:02] <jak2000> @reboot sleep 120 ; /home/jak/ftp/c.sh
[23:02] <jak2000> 03 17 * * * /sbin/shutdown -r now
[23:02] <jak2000> its ok?
[23:04] <sarnold> jak2000: I'm pretty sure that'll reboot your machine every day. is that what you want?
[23:04] <jak2000> yes
[23:05] <jak2000> 17:03
[23:05] <jak2000> restarted :)
[23:05] <jak2000> how to know if run my script: /usr/scripts/reloadApplication.sh
[23:05] <jak2000> ?
[23:05] <jak2000> see:
[23:05] <jak2000> jak@vmi103461:~$ date
[23:05] <jak2000> jue oct  4 17:05:47 MDT 2018
[23:05] <jak2000> and:
[23:06] <jak2000> jak@vmi103461:~$ uptime
[23:06] <jak2000>  17:05:49 up 1 min,  1 user,  load average: 3.40, 1.55, 0.58
[23:06] <jak2000> sorry, how do I know if /home/jak/ftp/c.sh was executed?
[23:11] <sarnold> what does it *do*? :)
[23:11] <jak2000> i use glassfish... and every restart need restart the domain (the Glassfish server)....