[00:04] Hmm, the "Ubuntu Customization Kit" seems to be dead, what's the easiest way to spin up a live image with customized packages? (Need the latest kernel on a live USB session to fix a BTRFS array on a server) [00:05] Wait, I stand corrected, it's just the Ubuntu apps directory that an old forum post pointed me to for the uck doesn't go past 13.10, it does still appear to be in the repos [00:06] keithzg: it didn't really work for me, but you're welcome to give it a go, maybe something in my setup [00:06] keithzg: ended up spending about 12hrs over two days trying to find something that didn't require a ton of sweat and blood end eventually landed on this: [00:06] https://launchpad.net/cubic [00:07] keithzg: if you're willing to trust that ppa, the tool works and actually does so rather well [00:07] drab: I shall take your recommendation and try that first :) [00:07] the idea is really exactly the same, extract the iso, unsquash the squash root, chroot, install stuff, repackage [00:08] I've looked at enough of those things I basically just do it myself manually at this point... [00:08] Heh fair enough [00:08] Yeah, I figured it wouldn't be *too* hard to do manually but I was sure there'd be some easy, automated way out there [00:09] keithzg: lemme know how it goes, tbh I foudn this by accident almost, apart from uck not much is really advertised [00:10] not quite sure why, maybe customizing isos isn't a common thing to do anymore [00:11] Yeah, I mean to be fair folks' internet connections tend to be fast enough these days that just installing something and *then* customizing things tends to be the easy solution. [00:11] In my case though I want it to just immediately boot with the 4.14 kernel since only then do I have a chance at replacing the dead drive in a Btrfs array! [00:12] I hear you, I ended up making myself a custom pxe image for that [00:12] cuz I didn't want to have to go around with usb keys or CDs and stuff [00:12] and that was even worse, there's no single small pxe bootable rescue system based on ubuntu for some reason [00:13] closest was the old dsl, but it's abandoned [00:13] keithzg: oh, the other tool I found that looked nice was this: https://sourceforge.net/projects/pinguy-os/files/ISO_Builder/ [00:13] seems a fork of remastersys [00:13] updated last year and reported to work on xenial [00:14] keithzg: https://www.ostechnix.com/pinguy-builder-build-custom-ubuntu-os/ [00:14] drab: Hmm the more I think of it the more tempting it is to just unsqash, chroot, and resquash, heh [00:14] I do have a PXE server running at work after all [00:15] Can't remember now if I got UEFI live instances to work or not though [00:28] keithzg: fwiw, these were my rough notes from the first pass... cleaned it up since then but been lazy and not republished [00:28] keithzg: https://gist.github.com/spikedrba/057acad8b3bfb0266544347ced8b53d4 [00:28] keithzg: it's now offically called PXERescue ;) [00:28] it uses ramboot initrd script to load the OS in ram [00:30] keithzg: the bug I haven't fixed is dns resolution in busybox, so the pxe parameter ramboot should actually use the ip, not hostname [00:30] ask for a bit [00:46] Heh well Pinguy-builder is a bust certainly, since it crashes with "Gtk-WARNING **: Unable to locate theme engine in module_path: "adwaita",". Silly Gnome. 
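A rough sketch of the manual remaster workflow drab describes above (extract the ISO, unsquash the root filesystem, chroot in, install packages, repack). It assumes a casper-based desktop/live ISO with squashfs-tools installed on the host; the ISO filename and the package to install are placeholders, and rebuilding the final bootable ISO (md5sums, xorriso/isolinux/EFI bits) is left out:

    mkdir mnt extract
    sudo mount -o loop ubuntu-16.04.3-desktop-amd64.iso mnt
    cp -a mnt/. extract/ && sudo umount mnt            # work on a writable copy of the ISO contents
    chmod -R u+w extract
    sudo unsquashfs -d edit extract/casper/filesystem.squashfs
    sudo cp /etc/resolv.conf edit/etc/                  # give the chroot working DNS
    sudo mount --bind /proc edit/proc && sudo mount --bind /sys edit/sys
    sudo chroot edit apt-get update
    sudo chroot edit apt-get install -y linux-image-generic   # placeholder: whatever needs customizing
    sudo umount edit/proc edit/sys
    sudo mksquashfs edit extract/casper/filesystem.squashfs -noappend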
[00:46] OH NO NO THEME BETTER CRASH [01:14] heh [01:15] No go on uck either, I just get a "Building failed" popup eventually and the log says "kdialog: cannot connect to X server :0" "Script cancelled by user" [01:16] ew [01:17] Surely there are official instructions out there somewhere for how the *actual* ISOs all get built? I can't seem to find them for some reason. [01:21] keithzg: I've been shown the exact code on launchpad several times and can't ever recall where it is when someone asks. :( [01:24] sarnold: Drat! Yeah I keep finding things like https://wiki.ubuntu.com/DerivativeDistroHowto#Tools_for_building_distro and I'm thinking at this point "I don't want to know about the tools that 'make this easy', I want to know how to do it The Right Way"---the easy ways aren't so easy if they outright don't work! [01:25] Maybe drab's pxe method will end up being the best way, debootstrapping along those lines now. [02:23] keithzg: could never found official instructions, asked around in -dev, no joy either [02:24] keithzg: if you find them let me know, I agree that that process should be documented somewhere, maybe an internal wiki [02:25] keithzg: the pxe method I'm using is the cleanest ime, it's simple, makes a very small and fast image, has no dependencies past lpxelinux/ipxe and fetches over http so no funny nfs server or slow tftp server [02:30] btw, about openvpn, found this which is kind of nice: https://github.com/Nyr/openvpn-install/blob/master/openvpn-install.sh [02:35] wow, looks nice enough. pity it downloads and executes stuff without checking authenticity, but it's otherwise pretty sharp-looking [02:52] sarnold: lol, now now, so demanding.. you want ppl to actually check what they download, ah! [02:52] have some faith man, double rainbows and all of that [02:53] hahahaha [05:04] drab: Sadly, the PXE method didn't work for me in the end, although not because it wouldn't, but because the 4.14.2 packages from the Ubuntu Mainline Kernel PPA simply fail to install. So I seemingly could create a bootable PXE image your way, just not with the one thing customized that I actually want! [05:05] I'm kindof surprised to find that there's no Linux distro out there that specializes in always having bleeding-edge kernels (or if there is one, my google-fu is apparently very weak) [06:34] good morning [07:05] Good morning [07:06] hi lordievader [07:07] Hey cpaelzer [07:07] How are you doing? [07:10] ok enough :-) and you? [07:11] Doing okay. Haven't head coffee yet. I suppose this morning has chances of improving 😋 === Jynxie_ is now known as Jynxie [09:09] Hello, I have installed stunnel, and restarted the service; but it doesn't show up when I type: ps -ef | grep stunnel [09:10] mojtaba: which ubuntu release? [09:11] peetaur2: 16.04 LTS [09:11] so then let's see systemctl status stunnel [09:12] peetaur2: inactive (dead) [09:12] Reason: No such file or directory [09:12] peetaur2: my conf is in /etc/stunnel [09:13] pastebin https://bpaste.net the whole output... snippets will just waste time [09:13] if there's no such file, I expect a filename too [09:13] peetaur2: http://paste.ubuntu.com/26063374/ [09:15] peetaur2: systemctl status stunnel4 gives me http://paste.ubuntu.com/26063379/ [09:16] bleh...silly pastebin has no raw button [09:17] so it seems not to say which file exactly, but seems to fail to find some SSL related file... 
maybe a CA cert [09:17] peetaur2: yes [09:17] and says [openvpn] on the next line, so maybe there's some openvpn ca cert you are missing [09:18] I have them inline in the openvpn config file. (ovpn file) [09:21] is it relative or absolute path? try absolute [09:22] peetaur2: I have pasted the cert file in the ovpn file. [09:23] peetaur2: between tags. [09:25] ok, then that sounds good, but then why does it want a file? what other file might it expect? [09:26] peetaur2: I don't know. That should be just the .pem file. [09:27] did you set a .pem file? [09:29] peetaur2: cert = stunnel.pem [09:29] so try absolute path on that one [09:31] again failed [09:32] peetaur2: this one is different [09:32] peetaur2: http://paste.ubuntu.com/26063469/ [09:33] mojtaba: ok so now it says permission denied...so maybe it's running as one user, like stunnel, and /var/run is owned by root, so it can't write [09:33] peetaur2: Yes, so what should I do? [09:34] so my favorite fix for that is to add in the init script (but that's systemd... will have to look that up) that it makes a dir and chowns it to that user, eg. /var/run/stunnel/ and then in the conf, set the pid file like /var/run/stunnel/stunnel.pid [09:34] and also report it as a bug...the distro should do all that work for you [09:34] but fix it first...just to verify you know what the problem really is [09:35] another option is make a blank file /var/run/stunnel.pid and then chown the file (not dir), and then hopefully it can modify the file instead of making a new one [09:36] systemd also should support making these files and doing that for you, but this error likely means the service is expected to do that part (which is normal for some...like apache requires that it does that work for itself, runs as root and drops privs) [09:36] and another option is run as root, and drop privs [09:36] the lazy insecure way is to only run as root.. you could also test that, but I don't recommend it (and running it that way can leave a mess of files behind owned by root, so you have to chown or rm them to clean up) [09:37] I think running as root and drop privs is better, what do you think? [09:38] How should I report this bug? [09:38] sure but the program has to support it... you have to see what's possible [09:38] first find a way to make it work, so you verify your assumptions [09:38] peetaur2: I see. Ok [09:38] I will try your second option [09:38] and then just report it the usual way.... paste the error, and say what it ought to do, and show the fix that works, and that afterwards the daemon runs as the correct user [09:40] one assumption to check is the service file... does it say like we expect User=stunnel rather than run as root and drop privs [09:43] peetaur2: There was stunnel4 directory in /var/run. [09:43] I just added that part in the config file. [09:44] So instead of pid = /var/run/stunnel.pid, it should be /var/run/stunnel4/stunnel.pid [09:44] peetaur2: Thanks for your help [09:46] ah good, and who made the error in the config, you or the distro? [09:48] peetaur2: The distro. [09:48] It was supposed to be like that, based on the doc. [09:49] so if the distro shipped a conf (that wasn't commented out or in the readme) that doens't work, you could still report it [09:49] pid file path is not really an admin's job to set... so probably their fault [09:51] the default is actually /var/run/stunnel4.pid at least in the most recent version [09:51] ... 
checking xenial [09:52] yeah in xenial as well [09:53] cpaelzer: I checked with that too [09:53] just started it with that - works fine [09:53] let me read all your backlog here [09:55] mojtaba: and btw, you shouldn't need absolute path...just path relative to something; normally openvpn is relative to the conf file, but maybe that's controlled by the init (like maybe it does cd /somedir/; openvpn thatfile.conf, so it's really relative to the cwd, not the conf); so you could figure out what it's doing and set it relative if you want to [09:55] like in my conf I usually have a keys dir (that has stricter permissions), so the conf says whatever=keys/blah.pem [09:57] mojtaba: hmm - if you end up reporting a bug please make sure to describe the steps to trigger the actual issue as it seems to just work (in the basic setup) [09:57] so the non-basic part of your setup is important to the bug report [09:58] yeah, if they can't reproduce it, they might not bother trying to fix it [09:58] peetaur2: cpaelzer: sure. and thanks for your help [09:58] like this bug of mine which they just ignore https://bugs.launchpad.net/ubuntu/+source/linux-lts-xenial/+bug/1724173 [09:58] Launchpad bug 1724173 in linux-lts-xenial (Ubuntu) "bcache makes the whole io system hang after long run time" [Undecided,New] [09:59] best I could reproduce was a "Kernel panic - not syncing: stack-protector: Kernel stack is corrupted" which is not my original issue [09:59] and maybe if they fixed that, my test script would cause the original issue [10:15] peetaur2: cpaelzer: It doesn't work with stunnel4.pid in the config file, also it doesn't work with relative path to certificate. [10:16] so after all a different issue [10:16] ? [10:17] cpaelzer: No, I just tried those to see if they work or not. [10:17] ok, thanks [10:17] I am now using openvpn with stunnel. But still no luck. I am in China, and I cann't open sites like youtube.com [10:18] Do you know any other way to work around this? [10:36] peetaur2: FYI I slightly fixed your repro script in the bug and let it run [10:36] with some luck I can make it confirmed and thereby bump it a bit [10:38] cpaelzer: thanks a bunch :) [10:39] cpaelzer: I found it crashed easily with a slow hdd cached on RAM, but never crashed with ram backing and hdd cache.... so not sure if ram + ram works too [10:40] hrm [10:40] and also tested hdd and hdd I think, also no crash. Maybe it's an shm bug, and not even bcache ;) [10:41] well you could state so in the bug and modify it slightly to base the "slow" dev on a local image file instead of shm [10:41] and I'll run it with ram+ram too [10:43] cpaelzer: how long do you plan to run it? for me, it sometimes crashed within 30 min, but other times took a few hours, but never a day [10:45] depends on how soon the consumed cpu annoys me [10:45] hours at least I think [10:47] peetaur2: I added a modified version which sets up the disk on an image on the base disk [10:47] that should be slow enough [10:48] running with ~100-150k changes per sec according to /sys/block/bcache0/bcache/... [10:48] will let you know in a few hours if it triggered [11:10] Hello, I am using stunnel and openvpn, (I am in China), but still I cannot open websites like youtube.com Does anybody know what should I do? [11:12] mojtaba: find out why... does dns fail? [11:12] peetaur2: How can I check it? 
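Stepping back to the stunnel pid-file "permission denied" above: a minimal sketch of the fix peetaur2 suggests (a runtime directory owned by the service user, with the pid pointed into it). The stunnel4 user/group and unit name match the Ubuntu packaging as described in the log; on mojtaba's system /var/run/stunnel4 already existed, so only the conf change was needed:

    sudo mkdir -p /var/run/stunnel4
    sudo chown stunnel4:stunnel4 /var/run/stunnel4
    # then in /etc/stunnel/stunnel.conf:
    #   pid = /var/run/stunnel4/stunnel.pid
    sudo systemctl restart stunnel4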
[11:14] do a dns query, like with dig [11:14] if it returns the great firewall of china's "you have been caught, and goons have been dispatched to your location" page, then it fails [11:15] peetaur2: What should I run exactly? [11:15] just like dig youtube.com [11:20] I ran stunnel in my server, and I am seeing: Error binding service [openvpn] to 0.0.0.0:443 [11:20] bind: Permission denied (13) [11:20] peetaur2: [11:26] the port 443 can't be bound to by a non-privileged user... it has to be 1024 or higher [11:27] peetaur2: I want to show it as https [11:27] peetaur2: Do you know what should I do? [11:29] run as root, or redirect as root [11:30] or chnage the sysctl that sets which ports are privileged...which I think is net.ipv4.ip_unprivileged_port_start [11:30] or maybe there's a cap for that [11:32] peetaur2: I think I am running stunnel as root in server. How can I make sure? [11:34] if it's still running, ps -ef | grep stunnel [11:34] rbasak: when you have a moment, I'm seeing something weird with the branch being proposed here: https://code.launchpad.net/~orion-cora/ubuntu/+source/sssd/+git/sssd/+merge/334317 [11:35] rbasak: there is his commit, orion-cora/xenial-sssd-hbac-rule-1722936 (4241de79bb78020f01c1a99017ef217173900101) [11:35] rbasak: then there is 42a95c2755c71846672a040fa3deda768b323442 which corresponds to an import of patches-unapplied of 1.13.4-1ubuntu1.9 [11:35] rbasak: and 44f6b9dc1a1c2befd83ab9c114185993d5fc5579 which is pkg/upload/1.13.4-1ubuntu1.9 [11:36] for some reason, the lintian thinks there are two changelog entries: one for 1.10 (his commit) and one for 1.9, but that is already there and wasn't added in his branch [11:36] was this that race we keep talking about, between upload tag and dput? [11:37] peetaur2: It is running by stunnel4 user, in a chroot [11:37] peetaur2: This is my stunnel.conf in my server: http://paste.debian.net/997971/ [11:45] ahasenack, cpaelzer: beta updated to master. I'm running a bind9 import now. [11:47] ok [12:07] thanks rbasak [12:11] peetaur2: it won't die in the last hour and I need my cpu back :-) [12:12] peetaur2: I hope the fixups and clarification will help to be looked at by the kernel Team [12:14] ahasenack: did you want to review the sssd MP yourself and just wanted an extra review slot? [12:14] cpaelzer: I mainly wanted it to be visible in our review queue [12:15] cpaelzer: but linter is complaining [12:16] complaining for a bug in git ubuntu, or imperfect MP? [12:16] maybe the missing pushed tags we spotted last week [12:17] cpaelzer: thanks so far, for taking a look :) [12:17] it'd be so nice if my ceph nodes didn't die every month or two [12:18] reasonable wish [12:18] jamespage: coreycb: ^^ in case you might have seen ceph+bcache=crash things consider reading the log above about peetaur2's bug [12:22] cpaelzer: I don't know, that's why I asked [12:24] ahasenack: ok so you want me to look as well on that? [12:24] if you have the time, sure [12:25] That is never the right condition, if we wait until I'm bored we wait forever :-) [12:25] I'll try to look later on [12:25] true [12:56] ahasenack: I think this could be a bug in the lint tool, or the the importer's commit graph, or both. [12:56] (orion-cora's MP) [12:57] the lint tool is indeed detecting two changelog entries somehow [12:58] ahasenack: though I get "All lint checks passed". What's your cmdline? [12:58] git ubuntu lint [12:58] Version? 
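On the earlier "bind: Permission denied (13)" for port 443: ports below 1024 are privileged, so here is a sketch of the alternatives peetaur2 lists to running the whole daemon as root. The binary path and port numbers are assumptions, and the ip_unprivileged_port_start sysctl only exists on kernels newer than xenial's 4.4:

    # grant only the low-port-bind capability to the binary
    sudo setcap 'cap_net_bind_service=+ep' /usr/bin/stunnel4
    # or keep the service on a high port and redirect 443 to it as root
    sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443
    # or, on recent kernels only, lower the privileged-port threshold
    sudo sysctl net.ipv4.ip_unprivileged_port_start=443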
[12:58] using the snap, [12:58] 0.6.2+git44.e7002be [12:59] also tried with master just now [12:59] I patched it to print what versions it found in that check [12:59] E: must add exactly one changelog entry [12:59] E: changelog.versions: [Version('1.13.4-1ubuntu1.10'), Version('1.13.4-1ubuntu1.9')] [12:59] Can you find steps to reproduce in a fresh clone please? [13:00] I can't reproduce with a git ubuntu clone, git ubuntu remote add, git checkout and git ubuntu lint. [13:00] On the same version as you. [13:01] ok [13:05] rbasak: hmpf, worked after I did rm -rf sssd; clone sssd [13:05] I wonder if it failed before because I had my own remote, ahasenack, with a bunch of sssd branches, including the Version('1.13.4-1ubuntu1.9')] one [13:08] oh well [13:10] ahasenack: I'm not sure. If you manage to figure out what was different, or the next time you see it, please, let me know. [13:10] rbasak: fwiw, git log now does NOT show that import patches-applied that I mentioned [13:11] rbasak: this is what it looked before: http://pastebin.ubuntu.com/26064436/ [13:12] this is what it looks like now: http://pastebin.ubuntu.com/26064439/ [13:12] ok, so it is still there [13:12] but now it has tags [13:12] git ubuntu lint would barf with http://pastebin.ubuntu.com/26064436/ [13:13] ahasenack: you didn't have the pkg branch tips either [13:14] rbasak: like I missed a git fetch pkg? Maybe with --tags? [13:15] Maybe [13:15] but I had "Import patches-unapplied version 1.13.4-1ubuntu1.9 to ubuntu/xenial-proposed". The hash is the same. It just didn't have the tags [13:16] so maybe --tags was missing [13:16] from my fetch [13:16] gotta remember to add that [13:16] But what did pkg/ubuntu/xenial-devel point to before? [13:16] commit fdff32f77aa7899455f215b9f631ea30f328016e (pkg/ubuntu/xenial-devel, ubuntu/xenial-devel) [13:16] ... [13:16] Update ubuntu/xenial-devel from 1.13.4-1ubuntu1.7 to 1.13.4-1ubuntu1.8 [13:16] I guess that explains it [13:17] lint added 1.9 to the list [13:18] because it thought 1.8 was the previous [13:44] rbasak: is the bind9 (re)import still ongoing? [13:45] ahasenack: last I looked, yes. Sorry, it's failed a couple of times due to me (I suspended the laptop it was running from stupidly, and the second time I didn't see it requesting auth and it timed out). [13:46] ok [13:51] ahasenack: rbasak: here you are :-) [13:52] tada! [13:58] Aargh. It timed out on auth again. I thought I'd given it auth already. This is frustrating :-/ [13:58] * rbasak files a bug [14:00] you mean that bit where you have to open a launchpad link and authorize the app? [14:01] rbasak: should I do the import while you are filing? [14:03] I've already tried to rerun it :-/ [14:03] cpaelzer: actually, I'll cancel [14:03] Done [14:04] cpaelzer: would you mind? It's more likely to actually land then. I've tried enough times :-/ [14:04] hehe [14:04] ok started already [14:04] lets see if prompts get less lost on less screens [14:05] make sure to approve it for more than 1h :) [14:06] actually while it always came back to me due to a bug on the conversion it recently didn't ask anymore [14:06] Hello, I am trying to configure stunnel to communicate over port 443. But when I run netstat -natp | grep :443, I get the following: [14:07] tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 20790/stunnel4 [14:07] tcp 0 0 192.168.2.250:443 5.116.10.151:56716 ESTABLISHED 20790/stunnel4 [14:07] Also I get Error binding service [openvpn] to 0.0.0.0:443 [14:07] Do you know how can I fix this issue? 
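Back on the sssd MP lint discrepancy: the working theory is a stale local clone missing the pkg tags. A guess at the reproduction steps rbasak asked for, using the commands and branch name mentioned in the discussion (the exact git-ubuntu remote-add arguments are assumptions):

    git ubuntu clone sssd && cd sssd
    git ubuntu remote add orion-cora                    # the MP author's repo
    git fetch pkg --tags                                # refresh the pkg tags a stale clone may lack
    git checkout orion-cora/xenial-sssd-hbac-rule-1722936
    git ubuntu lint                                     # should report "All lint checks passed"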
[14:07] mojtaba: you already have it running and listening on port 443, and even servicing a connection to a client [14:07] ahasenack: What about the error? [14:08] ahasenack: I used the old (timed out) link and approved it indefinitely, but seems to have not worked. [14:08] openvpn is failing because it's trying to use the same port, 443 [14:08] mojtaba: you can't have two services binding to the same exact socket (0.0.0.0:443 in this case) [14:08] Maybe the protocol requires the original requesting cookie to complete the auth. [14:08] (rather than completing on web ui approval) [14:08] rbasak: I think so, if it timed out, that old link is toast [14:09] ahasenack: I didn't configure openvpn to listen to 443! [14:09] mojtaba: isn't that what you want overall http://blog.deadcode.net/tunneling-openvpn-with-https-to-bypass-censorship-with-stunnel-and-ubuntu/ ? [14:10] cpaelzer: yes [14:11] mojtaba: I wonder if it would make sense to start over in a container [14:11] with the doc as I linked it [14:11] to ensure no old part of the config attempts interfree [14:11] interfere [14:11] not sure if there would be any no-nos in regard to openvpn in a container though [14:12] that doc doesn't explain the openvpn bits, though [14:12] ah, later on it does [14:12] sorry [14:12] I am not sure, why I am getting that error, as I am not configuring openvpn to listen on 443. [14:12] Any idea? [14:13] paste the openvpn config [14:13] maybe you have multiple conf files in /etc/openvpn and it's starting one daemon for each via systemd [14:13] ahasenack: the import actually finished 3 minutes ago [14:13] ahasenack: could you check what you get on your end? [14:14] cpaelzer: checking [14:15] ahasenack: server or client for openvpn? [14:15] where stunnel is running and listening on port 443 [14:15] ahasenack: server, ok [14:17] ahasenack: http://paste.debian.net/997988/ [14:18] cpaelzer: looks good now, thanks [14:19] mojtaba: is that the only config file you have? Do you have something listening on port 1194 right now? [14:21] cpaelzer: one step further, but merge start crashes (https://bugs.launchpad.net/usd-importer/+bug/1734364 and new comment https://bugs.launchpad.net/usd-importer/+bug/1734364/comments/1) [14:21] Launchpad bug 1734364 in usd-importer "merge start fails with bind9" [Undecided,New] [14:21] I thought it could be crashing before because ubuntu/devel was incorrect and I was using ubuntu/bionic [14:21] ok, lunch time :) [14:21] ahasenack: that was for openvpn, config file [14:22] ahasenack: no, just openvpn [14:49] hello, I have a server on ubuntu 17.10, I have changed the config file /etc/netplan/01-netcfg.yaml, how can I reload the config file to apply it? [14:51] Slashman: if it's just something read on service start, restart the service. [14:52] peetaur2: do you know about netplan? because that is not that simple it seems [14:52] https://wiki.ubuntu.com/Netplan [14:52] no idea [14:53] ahasenack: did a check on your issue, and I think I found it - but we need rbasak to give it the code-POV [14:53] ahasenack: I updated the bug [14:53] ok, the answer is right on the page, didn't look closely enough [14:54] Slashman: there is an apply/generate to netplan [14:55] cpaelzer: yah, I just noticed that, I didn't see it on the manpage [15:02] cpaelzer: I think it's all in gitubuntu/merge.py [15:02] I don't remember ever having looked in there.
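For the netplan question above, the apply/generate cpaelzer mentions are subcommands of the netplan tool itself, which is the documented way to reload /etc/netplan/*.yaml on 17.10 (nothing here is specific to Slashman's setup):

    sudo netplan generate   # render backend (networkd/NetworkManager) config from /etc/netplan/*.yaml
    sudo netplan apply      # apply the generated config to the running system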
[15:12] coreycb: think I have gnocchi ready for upload with py3 enabled; had to do one patch [15:13] jamespage: awesome [15:55] coreycb: yes confirmed - no more mismatch problems, and reports are now showing updates again! [15:55] woot [15:55] jamespage: yep looks good! [16:03] Howdy all! office hours is officially starting. Please bring all questions [16:12] thanks for opening that up dpb1 [16:43] ... the canonical server team puts their hands behind their head and their feet up [16:48] dpb1, lol [16:49] oh, hi slashd, sorry, we weren't napping, just resting our eyelids [16:50] dpb1, of course ;) [17:02] * dpb1 turns around the office hours sign from open to closed. night all! [17:08] Hey guys. I could use a hand with something. I am automating a virus scan to send me an alert whenever I get a hit. I am setting the file name using "$(date +%B_%e)_scan_results.log" but I am not sure how to push that into mail using the date command. if it is a static file name I just use mail -s Test EMAIL << FILENAME.LOG it works fine [17:09] xpistos: assign the date to a variable: $DATE_CUR=$(date +%B_%e) ; and the filename becomes ${DATE_CUR}_scan_results.log [17:10] eer, DATE_CUR=, no $ there [17:11] or probably even cleaner FN_NAME="$(date +%B_%e)_scan_results.log" and then mail -s Test EMAIL << $FN_NAME [17:13] tells me the body is null [17:14] using the $FN_NAME version [17:15] well I don't know your script, that variable must be available by the time you call the mail command [17:15] maybe pastebin your script on dpaste.com [17:16] not a script, basically just touch "$(date +%B_%e)_scan_results.log" && DATE_CUR="$(date +%B_%e)_scan_results.log" && mail -s Test EMAIL < $DATE_CUR [17:16] I am getting the email, just nothing in the body [17:16] and I have tried it with both < and << [17:17] I will probably make this a script though [17:17] you can use -a to attach a file [17:17] well that's a different problem then, maybe the mail command doesn't work like that, haven't used it in a while, but iirc you echo to it, not sure < works [17:18] try this: FN="$(date +%B_%e)_scan_results.log" && echo $FN | mail -s Test EMAIL [17:18] see if you get the filename in the body [17:19] apparently mail -s xxx < file is accepted syntax [17:19] is there anything in your file? touch won't put anything in it, so the email body will obviously be empty [17:19] drab: I am trying to get the contents of the file in the body not the file name itself [17:20] if all you're running is the above oneliner there's nothing in your file [17:20] right now it says "Infected=0" [17:21] drab: that file does anyway [17:21] ok, then you aren't running the above oneliner. it's really a bad practice to tell ppl you're doing something that's not what you're doing and ask for help [17:21] does mail -s Test < whateverfile_with_something_in_it work? [17:22] I am an idiot. It was supposed to have something in it but I didn't cause I just touched it [17:22] it's ok, it happens [17:23] http://www.bash.org/?201579/ [17:23] a bash quote for every occasion... :) [17:23] Good. I am not alone! [17:23] drab: Thanks for the help.
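A cleaned-up version of the one-liner xpistos and drab converge on above: build the date-stamped name once, let the scanner write into it, then feed the file to mail as the body. The address and the scan step are placeholders; note that %e pads the day with a space, so %d may be nicer in a filename, and the -a attach flag mentioned above depends on which mail(1) implementation is installed:

    #!/bin/sh
    FN="$(date +%B_%e)_scan_results.log"
    # ... run the virus scan here, writing its report to "$FN" ...
    mail -s "scan results" someone@example.com < "$FN"     # file contents become the body
    # or attach the log instead of inlining it (heirloom/s-nail style -a):
    # mail -s "scan results" -a "$FN" someone@example.com < /dev/null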
[17:23] you're welcome [17:26] boy, my productivity just went down the drain, thank for the quotes site ;) [17:26] like I said, you're welcome :P [17:32] I got ovpn working in the end inside a container [17:33] I'm somewhat confused why it works actually, I was expecting the bridge setup to require more work, but it doesn't [17:33] I suspect it's something to do with the fact it's lxc and those network interfaces are already sitting on top of a bridge [17:38] if two interfaces aren't bridged, one should not arp for the other's ip, should it? [17:39] tun0 has its set of ips, which are overlapping with the one on eth0/LAN, but I still don't see how/why the host would respond to arp requests for a vpn client behind tun [17:39] I think there are some fairly confusing sysctl values to affect this [17:40] drab: there is no arp for tun devices, it's layer 3 only [17:41] but since you are talking about bridge, maybe you meant tap? [17:42] sdeziel: so, see, that's the thing, I was gonna set it up on server-bridge + tap, but then just for testing I kept the default tun thinking of doing masquerading [17:42] so it's on tun right now [17:43] tun is the recommended dev type by upstream [17:43] less overhead and generally cleaner [17:43] right, but then you're supposed to masquerade, no? [17:43] or you can do server-bridge with tun? [17:43] it really depends for the masquerading [17:44] bridging requires a tap as that's Ethernet bridging [17:44] right [17:46] so basically right now everything is working and I don't quite understand why :P , I thought it would not [17:46] I was expecting to have to add some static routes [17:51] yeah, I think I figured it out... it's an issue with how I'm testing [17:53] altho, uhm, icmp pkts are coming from the vpn ip, so looks like all traffic is being correctly tunneled [18:26] hi, i want to forward all traffic from eth0 to eth1, shall i use iptables or simply sysctl net.ipv4.ip_forward=1 ? [18:26] the goal is firewall that lan === jc_ is now known as jc [19:49] HackeMate: iptables doesn't forward traffic per se, the sysctl setting is what does that [19:49] to say tho that you want to forward traffic and then to say that you want to firewall that lan is confusing to me tho [19:49] HackeMate: what are you trying to accomplish? [19:54] sysctl for forwarding, iptables for filtering rules, most likely. [19:56] i want put a minipc between router and lan computers and firewall its connection [19:57] the minipc has 2 ethernet, one for the router incoming data and the other one for the computer lan switch [19:57] is the plan correct? [20:00] depends what correct means, what are you trying to achieve? [20:00] install a firewall to protect the LAN? [20:02] Why are you putting a firewall box between the router and the switch? [20:09] yes, protect the lan [20:10] i do that because i have to save logs, parse them and show statistics based on those logs [20:10] Logs of what? Statistics of what? [20:11] "Protect the LAN" from what? [20:11] PS: "Hackers" is not an answer. 
[20:12] be verbose in your answer, that may change the tools / approach we recommend :) [20:12] ^ [20:13] so I was correct, once I stepped out to public wifi I could vpn in, but get nowhere else except the vpn server [20:13] it is for an educative center, teachers wont allow students use its wifi connection for instagram in example, i said there are many things to reach instagram without opening instagram website, so this is the startpoint [20:13] I'm glad thing still make sense even if it means it doesn't work :) [20:14] HackeMate: A firewall is NOT going to help you with that use case. [20:14] drab: I can help with VPNs. [20:14] vpn is slower though [20:14] HackeMate: The VPN comment wasn't to you. [20:14] ah sorry [20:15] a firewall is for block those connections to instagram and other social networks [20:16] or whatever they use for bypass firewall [20:16] HackeMate: You will NEVER accomplish that with any kind of real efficiency using iptables. [20:16] HackeMate: You are applying the wrong tool, plain and simple. [20:16] what could you use then? [20:17] I've heard good things about http://e2guardian.org/cms/ but have never used it myself [20:17] dns proxy? [20:18] HackeMate: if you can afford it, just get untangle https://www.untangle.com/ [20:18] HackeMate: What you really need is a web category filter. Untangle is one option, though for non-home use it can get pricey. [20:19] pfSense and SquidGuard could also work. [20:19] sarnold: it's the best in class, with redwood being second best (even if just because it's newer). then you have pfsense, but that's not linux anymore [20:19] pfsense is a router basically, no? [20:19] drab: if you are assigning your VPN clients IP addresses from say 10.8.0.0/24, you will see this net range when those VPN clients try to reach machines next to the VPN server [20:19] pfSense is a lot more than just a router. [20:20] drab: I don't quite care if it's Linux or not. All of this is off-topic here already because it's not Ubuntu. [20:20] also pfsense will use e2guardian (optional) or dansguardian (built-in, and pretty meh) [20:20] sure [20:20] drab: you have different solutions to make the return packet reach your VPN clients, one of them is adding a static route (back to the VPN server) to the machine you are trying to reach [20:20] drab: pfsense will use squid with whatever blacklists you enable. [20:21] sdeziel: yeah those are the static routes I thought I'd need to add, trying now [20:21] drab: the other (less clean) is to SNAT/MASQUERADE what goes out of the VPN server itself. Something like -A POSTROUTING -s 10.8.0.0/24 -o eth+ -j MASQUERADE [20:21] drab: the SNAT/MASQUERADE trick is so much quicker though :) [20:22] sdeziel: you might be right [20:22] lemme look into that... [20:22] I don't EVER MASQ the stuff coming out of my VPN server. Enable the forwarding sysctl variable, and make sure the router knows how to send traffic to the VPN subnet. [20:22] One static route in the router to the VPN server's LAN IP, done. [20:23] fair point [20:24] If you're using iptables, make sure that the FORWARD policy is ALLOW, or add a rule to that effect. [20:24] I'm gonna try with static routes first, I think that's what I did a long time ago and it worked and saves me from having to think about the FW stuff [20:24] drab: I can set up just about any VPN from memory, so if you want to dig into this, I'm game. 
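A sketch of sdeziel's "quicker" MASQUERADE option from the VPN discussion above, run on the VPN server itself. It assumes the 10.8.0.0/24 client range used in the conversation and a single LAN-facing interface named eth0; metastable's point still stands that a static route on the router makes the NAT below unnecessary:

    # on the VPN server
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
    # make sure forwarded traffic is allowed if the FORWARD policy is DROP
    sudo iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
    sudo iptables -A FORWARD -d 10.8.0.0/24 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT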
[20:25] metastable: appreciate it, I like to do my homework before asking so lemme poke at it and if in bit I got nowhere I'll come and bug you [20:25] drab: Oh, I won't do it for you. Trust me, you'll learn plenty. [20:25] squidguard is a plugin for squid, squid is a proxy, users can bypass the proxy settings, no? thats why i think about using a firewall, how can i force to use proxy, putting it as gateway? [20:26] HackeMate: Transparent proxies can't be bypassed by the means you're thinking of. They intercept ALL web requests, and require no configuration on the client system. [20:26] sarnold: if the code was good, it looks promising: https://github.com/andybalholm/redwood [20:26] ah [20:26] sarnold: somebody was trying to build debs a while back [20:27] also e2g is being rewritten and 5 will be coming out soon with a completely diff design, including transparent ssl proxying, right now it only works in explicit mode [20:27] drab: well, it's in go, so at least it's unlikely to have buffer overflows and use-after-frees and so on :) hehe [20:27] drab: Which will still suck unless you have an easy method of deploying the proxy's CA cert to the clients. [20:27] drab: metastable: the static route added to the router is the best way but require there is no more direct way between the servers and the VPN otherwise you will see some ICMP redirects [20:28] metastable: tell me about it, was about 3 weeks of nightmares [20:28] figuring out how firefox, chrome etc read the CA list [20:28] which they all do differently [20:28] sdeziel: If you're entering the route in the correct place and your routing structure isn't a garbage fire, that shouldn't happen. [20:29] metastable: I don't want to assume anything about the network topology that drab's dealing with :) [20:29] sdeziel: Also correct. :P [20:29] And a very fair point. [20:29] I have worked in places where the routing structures were garbage fires, alas. [20:30] some put their VPN servers in their DMZ which makes it annoying when connecting to those other machines in the DMZ for example [20:30] topology is pretty simple: one flat lan, one of the hosts on the lan has ovpn set up on it, gw/fw has a portforward to it. ovpn host has its own eth0 on the lan and a tun0 on the vpn network (diff than the lan network) [20:31] drab: so yeah, you'll have ICMP redirects :) [20:31] Static route will be your best bet, there. [20:31] Uhh. What? [20:31] How will you have ICMP redirects? [20:32] I feel like I'm missing a part of the conversation. [20:32] Is it possible to rename a network interface via early command or something in the preseed? [20:33] metastable: all the LAN machines have a default GW as their only route [20:34] sdeziel: Yyyyyeah, and? VPN traffic hands traffic for a different subnet off to the router, router forwards that traffic to the next hop interface, etc. [20:34] metastable: so when the VPN server relay traffic for the VPN client range, the LAN machine will send the return packet to the default gw which will send ICMP redirect if it has a static route to the VPN range [20:34] sdeziel: I don't think that's right... [20:34] metastable: try it [20:35] Will do. [20:35] the VPN server, the gw and the LAN machines are all part of LAN so the gw has to tell the LAN machines to not hop through it because there is a shorter path [20:35] That does make sense, actually. [20:36] let's use some IP ranges to exemplify this [20:36] No, I get it. [20:36] i like the squidguard option, i just need the ipv4 forwarding for achieve this, right? 
[20:36] LAN: 192.168.0.0/24, GW: 192.168.0.1, VPN server: 192.168.0.94, serverA: 192.168.0.2 [20:37] I GET IT. [20:37] :P [20:37] alright :) [20:40] hmm [20:46] sdeziel: is there a particular reason you brought up the ICMP redirects? I mean, is it just because of the added noise on the network or what that I should care about them? [20:47] drab: I heard this mechanism of finding a more optimal path didn't work reliably but I never really ran into a situation with ICMP redirects myself. Maybe it will work well in your environment? [20:49] I guess I'll find out soon [20:49] brb, someone can't print :( [20:49] drab: most people don't run into this problem because their VPN endpoint is their router [20:52] I have a dedicated box running strongSWAN, ocserv, openvpn. [20:53] strongswan is... interesting to configure, to say the least. [20:55] if by interesting you mean hugely fun then yes, I agree [20:55] never heard of ocserv though [20:55] nvm, openconnect. [20:56] It's the server-side component. [20:57] Technically, openconnect is the client. [21:00] I already use the AnyConnect client for work, so it made sense. [21:04] jamespage: I have pike stable point releases queued up via bug 1734990 [21:04] bug 1734990 in nova (Ubuntu Artful) " [SRU] pike stable releases" [Undecided,New] https://launchpad.net/bugs/1734990 [23:05] powersj, I just opened bug https://bugs.launchpad.net/ubuntu/+source/ipxe/+bug/1735015 [23:05] Launchpad bug 1735015 in ipxe (Ubuntu) "FTBFS: ipxe on zesty" [Undecided,New] [23:05] jog thanks will ping others about it as well
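For completeness, the static-route alternative metastable prefers, using sdeziel's example addresses above plus the 10.8.0.0/24 VPN client range mentioned earlier. This is run on the gateway (192.168.0.1) assuming it is Linux; on a dedicated router the equivalent route goes in its own routing table:

    ip route add 10.8.0.0/24 via 192.168.0.94
    # LAN hosts like serverA (192.168.0.2) keep using their default gateway; since
    # 192.168.0.94 sits on the same LAN, the gateway may answer with ICMP redirects,
    # which is the caveat sdeziel raised.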