[00:04] <keithzg> Hmm, the "Ubuntu Customization Kit" seems to be dead, what's the easiest way to spin up a live image with customized packages? (Need the latest kernel on a live USB session to fix a BTRFS array on a server)
[00:05] <keithzg> Wait, I stand corrected, it's just that the Ubuntu apps directory an old forum post pointed me to for uck doesn't go past 13.10; it does still appear to be in the repos
[00:06] <drab> keithzg: it didn't really work for me, but you're welcome to give it a go, maybe something in my setup
[00:06] <drab> keithzg: ended up spending about 12hrs over two days trying to find something that didn't require a ton of sweat and blood and eventually landed on this:
[00:06] <drab> https://launchpad.net/cubic
[00:07] <drab> keithzg: if you're willing to trust that ppa, the tool works and actually does so rather well
[00:07] <keithzg> drab: I shall take your recommendation and try that first :)
[00:07] <drab> the idea is really exactly the same, extract the iso, unsquash the squash root, chroot, install stuff, repackage
[00:08] <drab> I've looked at enough of those things I basically just do it myself manually at this point...
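The manual flow drab describes (extract, unsquash, chroot, repackage) could be sketched roughly like this; the ISO name, package, and paths are placeholders, every step needs root, and the final ISO rebuild is only hinted at:

```shell
# Rough, untested outline of the manual live-ISO remaster flow.
mkdir mnt extracted
sudo mount -o loop ubuntu-16.04.3-desktop-amd64.iso mnt
cp -aT mnt extracted && sudo umount mnt                          # 1. extract the iso
sudo unsquashfs -d rootfs extracted/casper/filesystem.squashfs   # 2. unsquash the root
# (you may need to bind-mount /proc and copy resolv.conf into the chroot first)
sudo chroot rootfs apt-get update                                # 3. chroot, install stuff
sudo chroot rootfs apt-get install -y linux-generic-hwe-16.04    #    e.g. a newer kernel
sudo mksquashfs rootfs extracted/casper/filesystem.squashfs -noappend   # 4. resquash
# 5. repackage: rebuild the iso from extracted/, e.g. with xorriso or genisoimage
```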
[00:08] <keithzg> Heh fair enough
[00:08] <keithzg> Yeah, I figured it wouldn't be *too* hard to do manually but I was sure there'd be some easy, automated way out there
[00:09] <drab> keithzg: lemme know how it goes, tbh I found this by accident almost, apart from uck not much is really advertised
[00:10] <drab> not quite sure why, maybe customizing isos isn't a common thing to do anymore
[00:11] <keithzg> Yeah, I mean to be fair folks' internet connections tend to be fast enough these days that just installing something and *then* customizing things tends to be the easy solution.
[00:11] <keithzg> In my case though I want it to just immediately boot with the 4.14 kernel since only then do I have a chance at replacing the dead drive in a Btrfs array!
[00:12] <drab> I hear you, I ended up making myself a custom pxe image for that
[00:12] <drab> cuz I didn't want to have to go around with usb keys or CDs and stuff
[00:12] <drab> and that was even worse, there's no single small pxe bootable rescue system based on ubuntu for some reason
[00:13] <drab> closest was the old dsl, but it's abandoned
[00:13] <drab> keithzg: oh, the other tool I found that looked nice was this: https://sourceforge.net/projects/pinguy-os/files/ISO_Builder/
[00:13] <drab> seems a fork of remastersys
[00:13] <drab> updated last year and reported to work on xenial
[00:14] <drab> keithzg: https://www.ostechnix.com/pinguy-builder-build-custom-ubuntu-os/
[00:14] <keithzg> drab: Hmm the more I think of it the more tempting it is to just unsquash, chroot, and resquash, heh
[00:14] <keithzg> I do have a PXE server running at work after all
[00:15] <keithzg> Can't remember now if I got UEFI live instances to work or not though
[00:28] <drab> keithzg: fwiw, these were my rough notes from the first pass... cleaned it up since then but been lazy and not republished
[00:28] <drab> keithzg: https://gist.github.com/spikedrba/057acad8b3bfb0266544347ced8b53d4
[00:28] <drab> keithzg: it's now officially called PXERescue ;)
[00:28] <drab> it uses ramboot initrd script to load the OS in ram
[00:30] <drab> keithzg: the bug I haven't fixed is dns resolution in busybox, so the pxe parameter ramboot should actually use the ip, not hostname
[00:30] <drab> afk for a bit
[00:46] <keithzg> Heh well Pinguy-builder is a bust certainly, since it crashes with "Gtk-WARNING **: Unable to locate theme engine in module_path: "adwaita",". Silly Gnome.
[00:46] <sarnold> OH NO NO THEME BETTER CRASH
[01:14] <keithzg> heh
[01:15] <keithzg> No go on uck either, I just get a "Building failed" popup eventually and the log says "kdialog: cannot connect to X server :0" "Script cancelled by user"
[01:16] <sarnold> ew
[01:17] <keithzg> Surely there are official instructions out there somewhere for how the *actual* ISOs all get built? I can't seem to find them for some reason.
[01:21] <sarnold> keithzg: I've been shown the exact code on launchpad several times and can't ever recall where it is when someone asks. :(
[01:24] <keithzg> sarnold: Drat! Yeah I keep finding things like https://wiki.ubuntu.com/DerivativeDistroHowto#Tools_for_building_distro and I'm thinking at this point "I don't want to know about the tools that 'make this easy', I want to know how to do it The Right Way"---the easy ways aren't so easy if they outright don't work!
[01:25] <keithzg> Maybe drab's pxe method will end up being the best way, debootstrapping along those lines now.
[02:23] <drab> keithzg: could never find official instructions, asked around in -dev, no joy either
[02:24] <drab> keithzg: if you find them let me know, I agree that that process should be documented somewhere, maybe an internal wiki
[02:25] <drab> keithzg: the pxe method I'm using is the cleanest ime, it's simple, makes a very small and fast image, has no dependencies past lpxelinux/ipxe and fetches over http so no funny nfs server or slow tftp server
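A minimal lpxelinux config along the lines drab describes might look like this; the server IP, paths, and the exact `ramboot` parameter are assumptions based on his notes, and per his earlier caveat the fetch URL should use an IP rather than a hostname because of the busybox DNS bug:

```
# pxelinux.cfg/default -- lpxelinux can fetch kernel/initrd/squashfs over plain HTTP
DEFAULT rescue
LABEL rescue
  KERNEL http://192.0.2.10/rescue/vmlinuz
  INITRD http://192.0.2.10/rescue/initrd.img
  APPEND boot=ramboot ramboot=http://192.0.2.10/rescue/filesystem.squashfs ip=dhcp
```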
[02:30] <drab> btw, about openvpn, found this which is kind of nice: https://github.com/Nyr/openvpn-install/blob/master/openvpn-install.sh
[02:35] <sarnold> wow, looks nice enough. pity it downloads and executes stuff without checking authenticity, but it's otherwise pretty sharp-looking
[02:52] <drab> sarnold: lol, now now, so demanding.. you want ppl to actually check what they download, ah!
[02:52] <drab> have some faith man, double rainbows and all of that
[02:53] <sarnold> hahahaha
[05:04] <keithzg> drab: Sadly, the PXE method didn't work for me in the end, although not because it wouldn't, but because the 4.14.2 packages from the Ubuntu Mainline Kernel PPA simply fail to install. So I seemingly could create a bootable PXE image your way, just not with the one thing customized that I actually want!
[05:05] <keithzg> I'm kind of surprised to find that there's no Linux distro out there that specializes in always having bleeding-edge kernels (or if there is one, my google-fu is apparently very weak)
[06:34] <cpaelzer> good morning
[07:05] <lordievader> Good morning
[07:06] <cpaelzer> hi lordievader
[07:07] <lordievader> Hey cpaelzer
[07:07] <lordievader> How are you doing?
[07:10] <cpaelzer> ok enough :-) and you?
[07:11] <lordievader> Doing okay. Haven't had coffee yet. I suppose this morning has chances of improving 😋
[09:09] <mojtaba> Hello, I have installed stunnel, and restarted the service; but it doesn't show up when I type: ps -ef | grep stunnel
[09:10] <peetaur2> mojtaba: which ubuntu release?
[09:11] <mojtaba> peetaur2: 16.04 LTS
[09:11] <peetaur2> so then let's see   systemctl status stunnel
[09:12] <mojtaba> peetaur2: inactive (dead)
[09:12] <mojtaba> Reason: No such file or directory
[09:12] <mojtaba> peetaur2: my conf is in /etc/stunnel
[09:13] <peetaur2> pastebin https://bpaste.net the whole output... snippets will just waste time
[09:13] <peetaur2> if there's no such file, I expect a filename too
[09:13] <mojtaba> peetaur2: http://paste.ubuntu.com/26063374/
[09:15] <mojtaba> peetaur2: systemctl status stunnel4 gives me http://paste.ubuntu.com/26063379/
[09:16] <peetaur2> bleh...silly pastebin has no raw button
[09:17] <peetaur2> so it seems not to say which file exactly, but seems to fail to find some SSL related file... maybe a CA cert
[09:17] <mojtaba> peetaur2: yes
[09:17] <peetaur2> and says [openvpn] on the next line, so maybe there's some openvpn ca cert you are missing
[09:18] <mojtaba> I have them inline in the openvpn config file. (ovpn file)
[09:21] <peetaur2> is it relative or absolute path? try absolute
[09:22] <mojtaba> peetaur2: I have pasted the cert file in the ovpn file.
[09:23] <mojtaba> peetaur2: between <ca></ca> tags.
[09:25] <peetaur2> ok, then that sounds good, but then why does it want a file? what other file might it expect?
[09:26] <mojtaba> peetaur2: I don't know. That should be just the .pem file.
[09:27] <peetaur2> did you set a .pem file?
[09:29] <mojtaba> peetaur2: cert = stunnel.pem
[09:29] <peetaur2> so try absolute path on that one
[09:31] <mojtaba> again failed
[09:32] <mojtaba> peetaur2: this one is different
[09:32] <mojtaba> peetaur2: http://paste.ubuntu.com/26063469/
[09:33] <peetaur2> mojtaba: ok so now it says permission denied...so maybe it's running as one user, like stunnel, and /var/run is owned by root, so it can't write
[09:33] <mojtaba> peetaur2: Yes, so what should I do?
[09:34] <peetaur2> so my favorite fix for that is to add in the init script (but that's systemd... will have to look that up) that it makes a dir and chowns it to that user, eg. /var/run/stunnel/ and then in the conf, set the pid file like /var/run/stunnel/stunnel.pid
[09:34] <peetaur2> and also report it as a bug...the distro should do all that work for you
[09:34] <peetaur2> but fix it first...just to verify you know what the problem really is
[09:35] <peetaur2> another option is make a blank file /var/run/stunnel.pid and then chown the file (not dir), and then hopefully it can modify the file instead of making a new one
[09:36] <peetaur2> systemd also should support making these files and doing that for you, but this error likely means the service is expected to do that part (which is normal for some...like apache requires that it does that work for itself, runs as root and drops privs)
[09:36] <peetaur2> and another option is run as root, and drop privs
[09:36] <peetaur2> the lazy insecure way is to only run as root.. you could also test that, but I don't recommend it (and running it that way can leave a mess of files behind owned by root, so you have to chown or rm them to clean up)
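On a systemd system, the "make a dir and chown it to that user" option can be expressed declaratively instead of in an init script; a sketch as a unit drop-in, assuming the unit and user names match the stunnel4 package:

```
# /etc/systemd/system/stunnel4.service.d/runtime-dir.conf
[Service]
# Creates /run/stunnel4 owned by the service's user at every start
RuntimeDirectory=stunnel4
```

with `pid = /var/run/stunnel4/stunnel.pid` in stunnel.conf to match.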
[09:37] <mojtaba> I think running as root and drop privs is better, what do you think?
[09:38] <mojtaba> How should I report this bug?
[09:38] <peetaur2> sure but the program has to support it... you have to see what's possible
[09:38] <peetaur2> first find a way to make it work, so you verify your assumptions
[09:38] <mojtaba> peetaur2: I see. Ok
[09:38] <mojtaba> I will try your second option
[09:38] <peetaur2> and then just report it the usual way.... paste the error, and say what it ought to do, and show the fix that works, and that afterwards the daemon runs as the correct user
[09:40] <peetaur2> one assumption to check is the service file... does it say like we expect  User=stunnel  rather than run as root and drop privs
[09:43] <mojtaba> peetaur2: There was stunnel4 directory in /var/run.
[09:43] <mojtaba> I just added that part in the config file.
[09:44] <mojtaba> So instead of pid = /var/run/stunnel.pid, it should be /var/run/stunnel4/stunnel.pid
[09:44] <mojtaba> peetaur2: Thanks for your help
[09:46] <peetaur2> ah good, and who made the error in the config, you or the distro?
[09:48] <mojtaba> peetaur2: The distro.
[09:48] <mojtaba> It was supposed to be like that, based on the doc.
[09:49] <peetaur2> so if the distro shipped a conf (that wasn't commented out or in the readme) that doesn't work, you could still report it
[09:49] <peetaur2> pid file path is not really an admin's job to set... so probably their fault
[09:51] <cpaelzer> the default is actually /var/run/stunnel4.pid at least in the most recent version
[09:51] <cpaelzer> ... checking xenial
[09:52] <cpaelzer> yeah in xenial as well
[09:53] <mojtaba> cpaelzer: I checked with that too
[09:53] <cpaelzer> just started it with that - works fine
[09:53] <cpaelzer> let me read all your backlog here
[09:55] <peetaur2> mojtaba: and btw, you shouldn't need absolute path...just path relative to something; normally openvpn is relative to the conf file, but maybe that's controlled by the init (like maybe it does cd /somedir/; openvpn thatfile.conf, so it's really relative to the cwd, not the conf); so you could figure out what it's doing and set it relative if you want to
[09:55] <peetaur2> like in my conf I usually have a keys dir (that has stricter permissions), so the conf says   whatever=keys/blah.pem
[09:57] <cpaelzer> mojtaba: hmm - if you end up reporting a bug please make sure to describe the steps to trigger the actual issue as it seems to just work (in the basic setup)
[09:57] <cpaelzer> so the non-basic part of your setup is important to the bug report
[09:58] <peetaur2> yeah, if they can't reproduce it, they might not bother trying to fix it
[09:58] <mojtaba> peetaur2: cpaelzer: sure. and thanks for your help
[09:58] <peetaur2> like this bug of mine which they just ignore https://bugs.launchpad.net/ubuntu/+source/linux-lts-xenial/+bug/1724173
[09:59] <peetaur2> best I could reproduce was a "Kernel panic - not syncing: stack-protector: Kernel stack is corrupted"  which is not my original issue
[09:59] <peetaur2> and maybe if they fixed that, my test script would cause the original issue
[10:15] <mojtaba> peetaur2: cpaelzer: It doesn't work with stunnel4.pid in the config file, also it doesn't work with relative path to certificate.
[10:16] <cpaelzer> so after all a different issue
[10:16] <cpaelzer> ?
[10:17] <mojtaba> cpaelzer: No, I just tried those to see if they work or not.
[10:17] <cpaelzer> ok, thanks
[10:17] <mojtaba> I am now using openvpn with stunnel. But still no luck. I am in China, and I can't open sites like youtube.com
[10:18] <mojtaba> Do you know any other way to work around this?
[10:36] <cpaelzer> peetaur2: FYI I slightly fixed your repro script in the bug and let it run
[10:36] <cpaelzer> with some luck I can make it confirmed and thereby bump it a bit
[10:38] <peetaur2> cpaelzer: thanks a bunch :)
[10:39] <peetaur2> cpaelzer: I found it crashed easily with a slow hdd cached on RAM, but never crashed with ram backing and hdd cache.... so not sure if ram + ram works too
[10:40] <cpaelzer> hrm
[10:40] <peetaur2> and also tested hdd and hdd I think, also no crash. Maybe it's an shm bug, and not even bcache ;)
[10:41] <cpaelzer> well you could state so in the bug and modify it slightly to base the "slow" dev on a local image file instead of shm
[10:41] <peetaur2> and I'll run it with ram+ram too
[10:43] <peetaur2> cpaelzer: how long do you plan to run it? for me, it sometimes crashed within 30 min, but other times took a few hours, but never a day
[10:45] <cpaelzer> depends on how soon the consumed cpu annoys me
[10:45] <cpaelzer> hours at least I think
[10:47] <cpaelzer> peetaur2: I added a modified version which sets up the disk on an image on the base disk
[10:47] <cpaelzer> that should be slow enough
[10:48] <cpaelzer> running with ~100-150k changes per sec according to /sys/block/bcache0/bcache/...
[10:48] <cpaelzer> will let you know in a few hours if it triggered
[11:10] <mojtaba> Hello, I am using stunnel and openvpn (I am in China), but I still cannot open websites like youtube.com. Does anybody know what I should do?
[11:12] <peetaur2> mojtaba: find out why... does dns fail?
[11:12] <mojtaba> peetaur2: How can I check it?
[11:14] <peetaur2> do a dns query, like with dig
[11:14] <peetaur2> if it returns the great firewall of china's "you have been caught, and goons have been dispatched to your location" page, then it fails
[11:15] <mojtaba> peetaur2: What should I run exactly?
[11:15] <peetaur2> just like   dig youtube.com
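To make the check concrete, one way to spot DNS tampering is to query two resolvers and compare the answers; using 8.8.8.8 as the outside reference is just an example:

```shell
dig +short youtube.com             # default resolver (the one that may be tampered with)
dig +short youtube.com @8.8.8.8    # an outside resolver, for comparison
```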
[11:20] <mojtaba> I ran stunnel in my server, and I am seeing: Error binding service [openvpn] to 0.0.0.0:443
[11:20] <mojtaba> bind: Permission denied (13)
[11:20] <mojtaba> peetaur2:
[11:26] <peetaur2> the port 443 can't be bound to by a non-privileged user... it has to be 1024 or higher
[11:27] <mojtaba> peetaur2: I want to show it as https
[11:27] <mojtaba> peetaur2: Do you know what I should do?
[11:29] <peetaur2> run as root, or redirect as root
[11:30] <peetaur2> or chnage the sysctl that sets which ports are privileged...which I think is net.ipv4.ip_unprivileged_port_start
[11:30] <peetaur2> or maybe there's a cap for that
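Spelled out, the options peetaur2 lists could look roughly like this; the binary path and port numbers are examples, the sysctl only exists on newer kernels (4.11+, so not the stock xenial kernel), and everything here needs root to set up:

```shell
# Option 1: run as root and let stunnel drop privileges itself (setuid/setgid in its conf)

# Option 2: grant just the low-port capability to the binary ("a cap for that")
sudo setcap 'cap_net_bind_service=+ep' /usr/bin/stunnel4

# Option 3: lower the privileged-port threshold (kernel >= 4.11 only)
sudo sysctl net.ipv4.ip_unprivileged_port_start=443

# Option 4: listen on an unprivileged high port and redirect 443 to it
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 1443
```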
[11:32] <mojtaba> peetaur2: I think I am running stunnel as root in server. How can I make sure?
[11:34] <peetaur2> if it's still running, ps -ef | grep stunnel
[11:34] <ahasenack> rbasak: when you have a moment, I'm seeing something weird with the branch being proposed here: https://code.launchpad.net/~orion-cora/ubuntu/+source/sssd/+git/sssd/+merge/334317
[11:35] <ahasenack> rbasak: there is his commit, orion-cora/xenial-sssd-hbac-rule-1722936 (4241de79bb78020f01c1a99017ef217173900101)
[11:35] <ahasenack> rbasak: then there is 42a95c2755c71846672a040fa3deda768b323442 which corresponds to an import of patches-unapplied of 1.13.4-1ubuntu1.9
[11:35] <ahasenack> rbasak: and 44f6b9dc1a1c2befd83ab9c114185993d5fc5579 which is pkg/upload/1.13.4-1ubuntu1.9
[11:36] <ahasenack> for some reason, the lintian thinks there are two changelog entries: one for 1.10 (his commit) and one for 1.9, but that is already there and wasn't added in his branch
[11:36] <ahasenack> was this that race we keep talking about, between upload tag and dput?
[11:37] <mojtaba> peetaur2: It is running by stunnel4 user, in a chroot
[11:37] <mojtaba> peetaur2: This is my stunnel.conf in my server: http://paste.debian.net/997971/
[11:45] <rbasak> ahasenack, cpaelzer: beta updated to master. I'm running a bind9 import now.
[11:47] <ahasenack> ok
[12:07] <cpaelzer> thanks rbasak
[12:11] <cpaelzer> peetaur2: it won't die in the last hour and I need my cpu back :-)
[12:12] <cpaelzer> peetaur2: I hope the fixups and clarification will help to be looked at by the kernel Team
[12:14] <cpaelzer> ahasenack: did you want to review the sssd MP yourself and just wanted an extra review slot?
[12:14] <ahasenack> cpaelzer: I mainly wanted it to be visible in our review queue
[12:15] <ahasenack> cpaelzer: but linter is complaining
[12:16] <cpaelzer> complaining for a bug in git ubuntu, or imperfect MP?
[12:16] <cpaelzer> maybe the missing pushed tags we spotted last week
[12:17] <peetaur2> cpaelzer: thanks so far, for taking a look :)
[12:17] <peetaur2> it'd be so nice if my ceph nodes didn't die every month or two
[12:18] <cpaelzer> reasonable wish
[12:18] <cpaelzer> jamespage: coreycb: ^^ in case you might have seen ceph+bcache=crash things consider reading the log above about peetaur2's bug
[12:22] <ahasenack> cpaelzer: I don't know, that's why I asked
[12:24] <cpaelzer> ahasenack: ok so you want me to look as well on that?
[12:24] <ahasenack> if you have the time, sure
[12:25] <cpaelzer> That is never the right condition, if we wait until I'm bored we wait forever :-)
[12:25] <cpaelzer> I'll try to look later on
[12:25] <ahasenack> true
[12:56] <rbasak> ahasenack: I think this could be a bug in the lint tool, or the importer's commit graph, or both.
[12:56] <rbasak> (orion-cora's MP)
[12:57] <ahasenack> the lint tool is indeed detecting two changelog entries somehow
[12:58] <rbasak> ahasenack: though I get "All lint checks passed". What's your cmdline?
[12:58] <ahasenack> git ubuntu lint
[12:58] <rbasak> Version?
[12:58] <ahasenack> using the snap,
[12:58] <ahasenack> 0.6.2+git44.e7002be
[12:59] <ahasenack> also tried with master just now
[12:59] <ahasenack> I patched it to print what versions it found in that check
[12:59] <ahasenack> E: must add exactly one changelog entry
[12:59] <ahasenack> E: changelog.versions: [Version('1.13.4-1ubuntu1.10'), Version('1.13.4-1ubuntu1.9')]
[12:59] <rbasak> Can you find steps to reproduce in a fresh clone please?
[13:00] <rbasak> I can't reproduce with a git ubuntu clone, git ubuntu remote add, git checkout and git ubuntu lint.
[13:00] <rbasak> On the same version as you.
[13:01] <ahasenack> ok
[13:05] <ahasenack> rbasak: hmpf, worked after I did rm -rf sssd; clone sssd
[13:05] <ahasenack> I wonder if it failed before because I had my own remote, ahasenack, with a bunch of sssd branches, including the Version('1.13.4-1ubuntu1.9')] one
[13:08] <ahasenack> oh well
[13:10] <rbasak> ahasenack: I'm not sure. If you manage to figure out what was different, or the next time you see it, please, let me know.
[13:10] <ahasenack> rbasak: fwiw, git log now does NOT show that import patches-applied that I mentioned
[13:11] <ahasenack> rbasak: this is what it looked before: http://pastebin.ubuntu.com/26064436/
[13:12] <ahasenack> this is what it looks like now: http://pastebin.ubuntu.com/26064439/
[13:12] <ahasenack> ok, so it is still there
[13:12] <ahasenack> but now it has tags
[13:12] <ahasenack> git ubuntu lint would barf with http://pastebin.ubuntu.com/26064436/
[13:13] <rbasak> ahasenack: you didn't have the pkg branch tips either
[13:14] <ahasenack> rbasak: like I missed a git fetch pkg? Maybe with --tags?
[13:15] <rbasak> Maybe
[13:15] <ahasenack> but I had "Import patches-unapplied version 1.13.4-1ubuntu1.9 to ubuntu/xenial-proposed". The hash is the same. It just didn't have the tags
[13:16] <ahasenack> so maybe --tags was missing
[13:16] <ahasenack> from my fetch
[13:16] <ahasenack> gotta remember to add that
[13:16] <rbasak> But what did pkg/ubuntu/xenial-devel point to before?
[13:16] <ahasenack> commit fdff32f77aa7899455f215b9f631ea30f328016e (pkg/ubuntu/xenial-devel, ubuntu/xenial-devel)
[13:16] <ahasenack> ...
[13:16] <ahasenack>     Update ubuntu/xenial-devel from 1.13.4-1ubuntu1.7 to 1.13.4-1ubuntu1.8
[13:16] <ahasenack> I guess that explains it
[13:17] <ahasenack> lint added 1.9 to the list
[13:18] <ahasenack> because it thought 1.8 was the previous
[13:44] <ahasenack> rbasak: is the bind9 (re)import still ongoing?
[13:45] <rbasak> ahasenack: last I looked, yes. Sorry, it's failed a couple of times due to me (I suspended the laptop it was running from stupidly, and the second time I didn't see it requesting auth and it timed out).
[13:46] <ahasenack> ok
[13:51] <cpaelzer> ahasenack: rbasak: here you are :-)
[13:52] <ahasenack> tada!
[13:58] <rbasak> Aargh. It timed out on auth again. I thought I'd given it auth already. This is frustrating :-/
[13:58]  * rbasak files a bug
[14:00] <ahasenack> you mean that bit where you have to open a launchpad link and authorize the app?
[14:01] <cpaelzer> rbasak: should I do the import while you are filing?
[14:03] <rbasak> I've already tried to rerun it :-/
[14:03] <rbasak> cpaelzer: actually, I'll cancel
[14:03] <rbasak> Done
[14:04] <rbasak> cpaelzer: would you mind? It's more likely to actually land then. I've tried enough times :-/
[14:04] <cpaelzer> hehe
[14:04] <cpaelzer> ok started already
[14:04] <cpaelzer> lets see if prompts get less lost on less screens
[14:05] <ahasenack> make sure to approve it for more than 1h :)
[14:06] <cpaelzer> actually, while it always used to come back to me due to a bug in the conversion, it recently didn't ask anymore
[14:06] <mojtaba> Hello, I am trying to configure stunnel to communicate over port 443. But when I run netstat -natp | grep :443,  I get the following:
[14:07] <mojtaba> tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      20790/stunnel4
[14:07] <mojtaba> tcp        0      0 192.168.2.250:443       5.116.10.151:56716      ESTABLISHED 20790/stunnel4
[14:07] <mojtaba> Also I get Error binding service [openvpn] to 0.0.0.0:443
[14:07] <mojtaba> Do you know how can I fix this issue?
[14:07] <ahasenack> mojtaba: you already have it running and listening on port 443, and even servicing a connection to a client
[14:07] <mojtaba> ahasenack: What about the error?
[14:08] <rbasak> ahasenack: I used the old (timed out) link and approved it indefinitely, but seems to have not worked.
[14:08] <ahasenack> openvpn is failing because it's trying to use the same port, 443
[14:08] <ahasenack> mojtaba: you can't have two services binding to the same exact socket (0.0.0.0:443 in this case)
[14:08] <rbasak> Maybe the protocol requires the original requesting cookie to complete the auth.
[14:08] <rbasak> (rather than completing on web ui approval)
[14:08] <ahasenack> rbasak: I think so, if it timed out, that old link is toast
[14:09] <mojtaba> ahasenack: I didn't configure openvpn to listen to 443!
[14:09] <cpaelzer> mojtaba: isn't that what you want overall http://blog.deadcode.net/tunneling-openvpn-with-https-to-bypass-censorship-with-stunnel-and-ubuntu/ ?
[14:10] <mojtaba> cpaelzer: yes
[14:11] <cpaelzer> mojtaba: I wonder if it would make sense to start over in a container
[14:11] <cpaelzer> with the doc as I linked it
[14:11] <cpaelzer> to ensure no old part of the config attempts to interfere
[14:11] <cpaelzer> not sure if there would be any no-nos in regard to openvpn in a container though
[14:12] <ahasenack> that doc doesn't explain the openvpn bits, though
[14:12] <ahasenack> ah, later on it does
[14:12] <ahasenack> sorry
[14:12] <mojtaba> I am not sure, why I am getting that error, as I am not configuring openvpn to listen on 443.
[14:12] <mojtaba> Any idea?
[14:13] <ahasenack> paste the openvpn config
[14:13] <ahasenack> maybe you have multiple conf files in /etc/openvpn and it's starting one daemon for each via systemd
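On 16.04 the openvpn package starts one `openvpn@<name>` systemd instance per `.conf` file in `/etc/openvpn`, so the "one daemon for each" theory is easy to check (guarded so the commands are harmless where nothing matches):

```shell
ls /etc/openvpn/*.conf 2>/dev/null || true        # one daemon per file found here
command -v systemctl >/dev/null && systemctl list-units 'openvpn@*' --all --no-legend || true
```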
[14:13] <cpaelzer> ahasenack: the import actually finished 3 minutes ago
[14:13] <cpaelzer> ahasenack: could you check what you get on your end?
[14:14] <ahasenack> cpaelzer: checking
[14:15] <mojtaba> ahasenack: server or client for openvpn?
[14:15] <ahasenack> where stunnel is running and listening on port 443
[14:15] <mojtaba> ahasenack: server, ok
[14:17] <mojtaba> ahasenack: http://paste.debian.net/997988/
[14:18] <ahasenack> cpaelzer: looks good now, thanks
[14:19] <ahasenack> mojtaba: is that the only config file you have? Do you have something listening on port 1194 right now?
[14:21] <ahasenack> cpaelzer: one step further, but merge start crashes (https://bugs.launchpad.net/usd-importer/+bug/1734364 and new comment https://bugs.launchpad.net/usd-importer/+bug/1734364/comments/1)
[14:21] <ahasenack> I thought it could be crashing before because ubuntu/devel was incorrect and I was using ubuntu/bionic
[14:21] <ahasenack> ok, lunch time :)
[14:21] <mojtaba> ahasenack: that was for openvpn, config file
[14:22] <mojtaba> ahasenack: no, just openvpn
[14:49] <Slashman> hello, I have a server on ubuntu17.10, I have changed the config file /etc/netplan/01-netcfg.yaml, how can I reload the config file to apply it ?
[14:51] <peetaur2> Slashman: if it's just something read on service start, restart the service.
[14:52] <Slashman> peetaur2: do you know about netplan? because that is not that simple it seems
[14:52] <Slashman> https://wiki.ubuntu.com/Netplan
[14:52] <peetaur2> no idea
[14:53] <cpaelzer> ahasenack: did a check on your issue, and I think I found it - but we need rbasak to give it the code-POV
[14:53] <cpaelzer> ahasenack: I updated the bug
[13:53] <Slashman> ok, the answer is right on the page, didn't look closely enough
[13:54] <cpaelzer> Slashman: there is an apply/generate command for netplan
[13:55] <Slashman> cpaelzer: yah, I just noticed that, I didn't see it in the manpage
[15:02] <rbasak> cpaelzer: I think it's all in gitubuntu/merge.py
[15:02] <rbasak> I don't remember ever having looked in there.
[15:12] <jamespage> coreycb: think I have gnocchi ready for upload with py3 enabled; had todo one patch
[15:13] <coreycb> jamespage: awesome
[15:55] <jamespage> coreycb: yes confirmed - no more mismatch problems, and reports are now showing updates again!
[15:55] <jamespage> woot
[15:55] <coreycb> jamespage: yep looks good!
[16:03] <dpb1> Howdy all!  office hours is officially starting.  Please bring all questions
[16:12] <cpaelzer> thanks for opening that up dpb1
[16:43] <dpb1> ... the canonical server team puts their hands behind their head and their feet up
[16:48] <slashd> dpb1, lol
[16:49] <dpb1> oh, hi slashd, sorry, we weren't napping, just resting our eyelids
[16:50] <slashd> dpb1, of course ;)
[17:02]  * dpb1 turns around the office hours sign from open to closed.  night all!
[17:08] <xpistos> Hey guys. I could use a hand with something. I am automating a virus scan to send me an alert whenever I get a hit. I am setting the file name using "$(date +%B_%e)_scan_results.log" but I am not sure how to push that into mail using the date command. if it is a static file name I just use mail -s Test EMAIL << FILENAME.LOG it works fine
[17:09] <drab> xpistos: assign the date to a variable: $DATE_CUR=$(date +%B_%e) ; and the filename becomes ${DATE_CUR}_scan_results.log
[17:10] <drab> eer, DATE_CUR=, no $ there
[17:11] <drab> or probably even cleaner FN_NAME="$(date +%B_%e)_scan_results.log" and then mail -s Test EMAIL << $FN_NAME
[17:13] <xpistos> tells me the body is null
[17:14] <xpistos> using the $FN_NAME version
[17:15] <drab> well I don't know your script, that variable must be available by the time you call the mail command
[17:15] <drab> maybe pastebin your script on dpaste.com
[17:16] <xpistos> not a script, basically just touch "$(date +%B_%e)_scan_results.log" && DATE_CUR="$(date +%B_%e)_scan_results.log" && mail -s Test EMAIL < $DATE_CUR
[17:16] <xpistos> I am getting th email just nothing in the body
[17:16] <xpistos> and I have tried it with both < and <<
[17:17] <xpistos> I will probably make this a script though
[17:17] <dlloyd> you can use -a to attach a file
[17:17] <drab> well that's a different problem then, maybe the mail command doesn't work like that, haven't used it in a while, but iirc you echo to it, not sure < works
[17:18] <drab> try this: FN="$(date +%B_%e)_scan_results.log" && echo $FN | mail -s Test EMAIL
[17:18] <drab> see if you get the filename in the body
[17:19] <drab> apparently mail -s xxx < file is accepted syntax
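Putting the pieces together, a complete version of the one-liner could look like this; the recipient address is a placeholder, and the `mail` call is guarded in case no mail command is installed:

```shell
# Build the dated filename once and reuse it for both the log and the mail body.
FN="$(date +%B_%e)_scan_results.log"

# Stand-in for the real scan output; the original problem was an empty file.
echo "Infected=0" > "$FN"

# '<' (not '<<') feeds the file's contents in as the mail body.
if command -v mail >/dev/null 2>&1; then
    mail -s "Scan results" admin@example.com < "$FN"
fi
```

Note that `%e` space-pads single-digit days, so quoting `"$FN"` everywhere matters.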
[17:19] <drab> is there anything in your file? touch won't put anything in it, so the email body will obviously be empty
[17:19] <xpistos> drab: I am trying to get the contents of the file in the body not the file name itself
[17:20] <drab> if all you're running is the above oneliner there's nothing in your file
[17:20] <xpistos> right now it says "Infected=0"
[17:21] <xpistos> drab: that file does anyway
[17:21] <drab> ok, then you aren't running the above oneliner. it's really a bad practice to tell ppl you're doing something that's not what you're doing and ask for help
[17:21] <drab> does mail -s Test < whatverfile_with_something_in_it work?
[17:22] <xpistos> I am an idiot. It was supposed to have something in it but I didn't because I just touched it
[17:22] <drab> it's ok, it happens
[17:23] <drab> http://www.bash.org/?201579/
[17:23] <drab> a bash quote for every occasion... :)
[17:23] <xpistos> Good. I am not alone!
[17:23] <xpistos> drab: Thanks for the help.
[17:23] <drab> you're welcome
[17:26] <sdeziel> boy, my productivity just went down the drain, thanks for the quotes site ;)
[17:26] <drab> like I said, you're welcome :P
[17:32] <drab> I got ovpn working in the end inside a container
[17:33] <drab> I'm somewhat confused why it works actually, I was expecting the bridge setup to require more work, but it doesn't
[17:33] <drab> I suspect it's something to do with the fact it's lxc and those network interfaces are already sitting on top of a bridge
[17:38] <drab> if two interfaces aren't bridged, one should not arp for the other's ip, should it?
[17:39] <drab> tun0 has its set of ips, which are overlapping with the one on eth0/LAN, but I still don't see how/why the host would respond to arp requests for a vpn client behind tun
[17:39] <maxb> I think there are some fairly confusing sysctl values to affect this
[17:40] <sdeziel> drab: there is no arp for tun devices, it's layer 3 only
[17:41] <sdeziel> but since you are talking about bridge, maybe you meant tap?
[17:42] <drab> sdeziel: so, see, that's the thing, I was gonna set it up on server-bridge + tap, but then just for testing I kept the default tun thinking of doing masquerading
[17:42] <drab> so it's on tun right now
[17:43] <sdeziel> tun is the recommended dev type by upstream
[17:43] <sdeziel> less overhead and generally cleaner
[17:43] <drab> right, but then you're supposed to masquerade, no?
[17:43] <drab> or you can do server-bridge with tun?
[17:43] <sdeziel> it really depends for the masquerading
[17:44] <sdeziel> bridging requires a tap as that's Ethernet bridging
[17:44] <drab> right
[17:46] <drab> so basically right now everything is working and I don't quite understand why :P , I thought it would not
[17:46] <drab> I was expecting to have to add some static routes
[17:51] <drab> yeah, I think I figured it out... it's an issue with how I'm testing
[17:53] <drab> altho, uhm, icmp pkts are coming from the vpn ip, so looks like all traffic is being correctly tunneled
[18:26] <HackeMate> hi, i want to forward all traffic from eth0 to eth1, shall i use iptables or simply sysctl net.ipv4.ip_forward=1 ?
[18:26] <HackeMate> the goal is firewall that lan
[19:49] <drab> HackeMate: iptables doesn't forward traffic per se, the sysctl setting is what does that
[19:49] <drab> saying that you want to forward traffic and then that you want to firewall that lan is confusing to me tho
[19:49] <drab> HackeMate: what are you trying to accomplish?
[19:54] <metastable> sysctl for forwarding, iptables for filtering rules, most likely.
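A minimal sketch of the split metastable describes: the sysctl turns forwarding on, iptables decides what is allowed through. The eth0/eth1 names match HackeMate's two-NIC box, and the default-deny FORWARD policy is an assumption, not something stated in the channel; all commands need root.

```shell
# enable IPv4 forwarding for this boot
sysctl -w net.ipv4.ip_forward=1
# persist it across reboots
echo 'net.ipv4.ip_forward=1' > /etc/sysctl.d/99-forward.conf

# filter the forwarded traffic: default-deny, then allow LAN -> router
iptables -P FORWARD DROP
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT                             # lan -> wan
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT  # return traffic
```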
[19:56] <HackeMate> i want to put a minipc between the router and the lan computers and firewall its connection
[19:57] <HackeMate> the minipc has 2 ethernet, one for the router incoming data and the other one for the computer lan switch
[19:57] <HackeMate> is the plan correct?
[20:00] <drab> depends what correct means, what are you trying to achieve?
[20:00] <drab> install a firewall to protect the LAN?
[20:02] <metastable> Why are you putting a firewall box between the router and the switch?
[20:09] <HackeMate> yes, protect the lan
[20:10] <HackeMate> i do that because i have to save logs, parse them and show statistics based on those logs
[20:10] <metastable> Logs of what? Statistics of what?
[20:11] <metastable> "Protect the LAN" from what?
[20:11] <metastable> PS: "Hackers" is not an answer.
[20:12] <sarnold> be verbose in your answer, that may change the tools / approach we recommend :)
[20:12] <metastable> ^
[20:13] <drab> so I was correct, once I stepped out to public wifi I could vpn in, but get nowhere else except the vpn server
[20:13] <HackeMate> it is for an educational center, teachers won't allow students to use its wifi connection for instagram for example. i said there are many ways to reach instagram without opening the instagram website, so this is the starting point
[20:13] <drab> I'm glad things still make sense even if it means it doesn't work :)
[20:14] <metastable> HackeMate: A firewall is NOT going to help you with that use case.
[20:14] <metastable> drab: I can help with VPNs.
[20:14] <HackeMate> vpn is slower though
[20:14] <metastable> HackeMate: The VPN comment wasn't to you.
[20:14] <HackeMate> ah sorry
[20:15] <HackeMate> a firewall is for blocking those connections to instagram and other social networks
[20:16] <HackeMate> or whatever they use to bypass the firewall
[20:16] <metastable> HackeMate: You will NEVER accomplish that with any kind of real efficiency using iptables.
[20:16] <metastable> HackeMate: You are applying the wrong tool, plain and simple.
[20:16] <HackeMate> what could you use then?
[20:17] <sarnold> I've heard good things about http://e2guardian.org/cms/ but have never used it myself
[20:17] <HackeMate> dns proxy?
[20:18] <drab> HackeMate: if you can afford it, just get untangle https://www.untangle.com/
[20:18] <metastable> HackeMate: What you really need is a web category filter. Untangle is one option, though for non-home use it can get pricey.
[20:19] <metastable> pfSense and SquidGuard could also work.
[20:19] <drab> sarnold: it's the best in class, with redwood being second best (even if just because it's newer). then you have pfsense, but that's not linux anymore
[20:19] <HackeMate> pfsense is a router basically, no?
[20:19] <sdeziel> drab: if you are assigning your VPN clients IP addresses from say 10.8.0.0/24, you will see this net range when those VPN clients try to reach machines next to the VPN server
[20:19] <metastable> pfSense is a lot more than just a router.
[20:20] <metastable> drab: I don't quite care if it's Linux or not. All of this is off-topic here already because it's not Ubuntu.
[20:20] <drab> also pfsense will use e2guardian (optional) or dansguardian (built-in, and pretty meh)
[20:20] <drab> sure
[20:20] <sdeziel> drab: you have different solutions to make the return packet reach your VPN clients, one of them is adding a static route (back to the VPN server) to the machine you are trying to reach
[20:20] <metastable> drab: pfsense will use squid with whatever blacklists you enable.
[20:21] <drab> sdeziel: yeah those are the static routes I thought I'd need to add, trying now
[20:21] <sdeziel> drab: the other (less clean) is to SNAT/MASQUERADE what goes out of the VPN server itself. Something like -A POSTROUTING -s 10.8.0.0/24 -o eth+ -j MASQUERADE
[20:21] <sdeziel> drab: the SNAT/MASQUERADE trick is so much quicker though :)
[20:22] <drab> sdeziel: you might be right
[20:22] <drab> lemme look into that...
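Spelled out, sdeziel's SNAT/MASQUERADE shortcut looks roughly like this on the VPN host. 10.8.0.0/24 is OpenVPN's default client range as used in the discussion; eth0 as the host's LAN-facing interface is an assumption. Needs root.

```shell
# hide the VPN client range behind the VPN host's own LAN address
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
# the host must also forward and permit the traffic
sysctl -w net.ipv4.ip_forward=1
iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
iptables -A FORWARD -d 10.8.0.0/24 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```

LAN hosts then see traffic as coming from the VPN host itself, so no return routes are needed, at the cost of losing the real client source IPs in logs.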
[20:22] <metastable> I don't EVER MASQ the stuff coming out of my VPN server. Enable the forwarding sysctl variable, and make sure the router knows how to send traffic to the VPN subnet.
[20:22] <metastable> One static route in the router to the VPN server's LAN IP, done.
[20:23] <drab> fair point
[20:24] <metastable> If you're using iptables, make sure that the FORWARD policy is ALLOW, or add a rule to that effect.
[20:24] <drab> I'm gonna try with static routes first, I think that's what I did a long time ago and it worked and saves me from having to think about the FW stuff
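The route-based alternative metastable favors, as a sketch, reusing the example addresses from elsewhere in the discussion (VPN range 10.8.0.0/24, VPN host LAN IP 192.168.0.94); tun0 as the OpenVPN interface is an assumption. Needs root.

```shell
# on the router: send the VPN subnet via the OpenVPN host's LAN address
ip route add 10.8.0.0/24 via 192.168.0.94

# on the OpenVPN host: forward, and make sure the FORWARD chain permits it
sysctl -w net.ipv4.ip_forward=1
iptables -A FORWARD -i tun0 -j ACCEPT
iptables -A FORWARD -o tun0 -j ACCEPT
```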
[20:24] <metastable> drab: I can set up just about any VPN from memory, so if you want to dig into this, I'm game.
[20:25] <drab> metastable: appreciate it, I like to do my homework before asking so lemme poke at it and if in bit I got nowhere I'll come and bug you
[20:25] <metastable> drab: Oh, I won't do it for you. Trust me, you'll learn plenty.
[20:25] <HackeMate> squidguard is a plugin for squid, squid is a proxy, users can bypass the proxy settings, no? that's why i think about using a firewall. how can i force them to use the proxy, by putting it as the gateway?
[20:26] <metastable> HackeMate: Transparent proxies can't be bypassed by the means you're thinking of. They intercept ALL web requests, and require no configuration on the client system.
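For plain HTTP, the interception metastable describes is typically one nat rule on the gateway. This sketch assumes squid running locally with `http_port 3128 intercept` in its config and eth1 as the LAN-facing interface; HTTPS interception is a separate problem requiring a proxy CA cert on every client, as discussed later in the channel. Needs root.

```shell
# redirect LAN web traffic to a local squid running in intercept mode
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128
```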
[20:26] <drab> sarnold: if the code was good, it looks promising: https://github.com/andybalholm/redwood
[20:26] <HackeMate> ah
[20:26] <drab> sarnold: somebody was trying to build debs a while back
[20:27] <drab> also e2g is being rewritten and v5 will be coming out soon with a completely different design, including transparent ssl proxying; right now it only works in explicit mode
[20:27] <sarnold> drab: well, it's in go, so at least it's unlikely to have buffer overflows and use-after-frees and so on :) hehe
[20:27] <metastable> drab: Which will still suck unless you have an easy method of deploying the proxy's CA cert to the clients.
[20:27] <sdeziel> drab: metastable: the static route added to the router is the best way but requires that there is no more direct path between the servers and the VPN, otherwise you will see some ICMP redirects
[20:28] <drab> metastable: tell me about it, was about 3 weeks of nightmares
[20:28] <drab> figuring out how firefox, chrome etc read the CA list
[20:28] <drab> which they all do differently
[20:28] <metastable> sdeziel: If you're entering the route in the correct place and your routing structure isn't a garbage fire, that shouldn't happen.
[20:29] <sdeziel> metastable: I don't want to assume anything about the network topology that drab's dealing with :)
[20:29] <metastable> sdeziel: Also correct. :P
[20:29] <metastable> And a very fair point.
[20:29] <metastable> I have worked in places where the routing structures were garbage fires, alas.
[20:30] <sdeziel> some put their VPN servers in their DMZ which makes it annoying when connecting to those other machines in the DMZ for example
[20:30] <drab> topology is pretty simple: one flat lan, one of the hosts on the lan has ovpn set up on it, gw/fw has a portforward to it. ovpn host has its own eth0 on the lan and a tun0 on the vpn network (diff than the lan network)
[20:31] <sdeziel> drab: so yeah, you'll have ICMP redirects :)
[20:31] <metastable> Static route will be your best bet, there.
[20:31] <metastable> Uhh. What?
[20:31] <metastable> How will you have ICMP redirects?
[20:32] <metastable> I feel like I'm missing a part of the conversation.
[20:32] <Epx998> Is it possible to rename a network interface via early command or something in the preseed?
[20:33] <sdeziel> metastable: all the LAN machines have a default GW as their only route
[20:34] <metastable> sdeziel: Yyyyyeah, and? VPN traffic hands traffic for a different subnet off to the router, router forwards that traffic to the next hop interface, etc.
[20:34] <sdeziel> metastable: so when the VPN server relays traffic for the VPN client range, the LAN machine will send the return packet to the default gw, which will send an ICMP redirect if it has a static route to the VPN range
[20:34] <metastable> sdeziel: I don't think that's right...
[20:34] <sdeziel> metastable: try it
[20:35] <metastable> Will do.
[20:35] <sdeziel> the VPN server, the gw and the LAN machines are all part of LAN so the gw has to tell the LAN machines to not hop through it because there is a shorter path
[20:35] <metastable> That does make sense, actually.
[20:36] <sdeziel> let's use some IP ranges to exemplify this
[20:36] <metastable> No, I get it.
[20:36] <HackeMate> i like the squidguard option, i just need ipv4 forwarding to achieve this, right?
[20:36] <sdeziel> LAN: 192.168.0.0/24, GW: 192.168.0.1, VPN server: 192.168.0.94, serverA: 192.168.0.2
[20:37] <metastable> I GET IT.
[20:37] <metastable> :P
[20:37] <sdeziel> alright :)
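Walking through sdeziel's example addresses: serverA (192.168.0.2) has only its default route, so a reply to a VPN client (say 10.8.0.6, a hypothetical client address) goes to the gw (192.168.0.1); the gw forwards it on to 192.168.0.94 but, since a shorter path exists on the same LAN, also sends serverA an ICMP redirect. A per-host route sidesteps the redirect entirely (needs root):

```shell
# on serverA: reach the VPN range directly via the VPN server's LAN IP
ip route add 10.8.0.0/24 via 192.168.0.94
# optionally refuse to learn routes from redirects at all
sysctl -w net.ipv4.conf.all.accept_redirects=0
```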
[20:40] <Epx998> hmm
[20:46] <drab> sdeziel: is there a particular reason you brought up the ICMP redirects? I mean, is it just because of the added noise on the network or what that I should care about them?
[20:47] <sdeziel> drab: I heard this mechanism of finding a more optimal path didn't work reliably but I never really ran into a situation with ICMP redirects myself. Maybe it will work well in your environment?
[20:49] <drab> I guess I'll find out soon
[20:49] <drab> brb, someone can't print :(
[20:49] <sdeziel> drab: most people don't run into this problem because their VPN endpoint is their router
[20:52] <metastable> I have a dedicated box running strongSWAN, ocserv, openvpn.
[20:53] <metastable> strongswan is... interesting to configure, to say the least.
[20:55] <sdeziel> if by interesting you mean hugely fun then yes, I agree
[20:55] <sdeziel> never heard of ocserv though
[20:55] <sdeziel> nvm, openconnect.
[20:56] <metastable> It's the server-side component.
[20:57] <metastable> Technically, openconnect is the client.
[21:00] <metastable> I already use the AnyConnect client for work, so it made sense.
[21:04] <coreycb> jamespage: I have pike stable point releases queued up via bug 1734990
[23:05] <jog> powersj, I just opened bug https://bugs.launchpad.net/ubuntu/+source/ipxe/+bug/1735015
[23:05] <powersj> jog thanks will ping others about it as well