[05:40] <Sircle> If squid cannot cache https traffic, what is the solution?
[08:19] <lordievader> Good morning
[10:30] <Sircle> mmlj4:  why go through all the trouble of MITM? No easy way? Almost all websites these days are https. So if it can't be cached, it's useless to have squid, no?
[10:30] <Sircle> If squid cannot cache https traffic, what is the solution?
[10:41] <blackflow> Sircle: I don't have the entire backlog, what was the issue?
[10:59] <JanC> blackflow: from what I can tell, he's wondering what use it has to run squid when more and more sites use HTTPS
[10:59] <JanC> he/she
[11:01] <AnotherGuyver> Hi guys, quick question regarding ssh on Ubuntu server:
[11:02] <AnotherGuyver> I have my key in the authorized_keys file. When I try to log in, it asks for a password and the log tells me that the file cannot be found. However, if I already have a window open and try to log in a second time from the Terminal, it logs in without any problems.
[11:03] <JanC> ssh can share connections
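The connection sharing JanC mentions is OpenSSH multiplexing; a minimal sketch as an ~/.ssh/config stanza (the host alias and timings are illustrative):

```
# ~/.ssh/config — the first connection opens a control socket,
# later ones reuse it without re-authenticating
Host myserver
    HostName server.example.com
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

This explains why a second login can succeed while a fresh one fails: the second one may ride the already-authenticated connection.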
[11:04] <maxb> Or, perhaps you're using encrypted home directories, such that the ssh server cannot see the authorized_keys file unless it has been unlocked by another session
[11:05] <AnotherGuyver> Ah, yes, I do. So after I log in the first time, the home directory becomes visible?
[11:05] <JanC> in that case you need to move the key outside the encrypted home
[11:05] <AnotherGuyver> Ah I see. Ok, thank you, I'll try that.
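A sketch of JanC's suggestion: point sshd at a key location it can read before the encrypted home is unlocked (the per-user directory under /etc/ssh is a common convention, not the only option):

```
# /etc/ssh/sshd_config
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
```

Then copy ~/.ssh/authorized_keys to /etc/ssh/authorized_keys/<user> (root-owned, world-readable) and reload sshd.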
[11:14] <AnotherGuyver> Ok, it worked. However, I get a strange phenomenon. When I logged in with the password, I got the usual shell (zsh in my case): [user:~]$      Now I get user-www%
[11:17] <AnotherGuyver> Ah, and I also get a message in the beginning "The programs included with the Ubuntu system are free software; ...."
[11:17] <AnotherGuyver> So did it revert me to the bash shell? echo $0 still outputs -zsh
[11:35] <AnotherGuyver> And it seems the home folder has changed to include only: "Access-Your-Private-Data.desktop  README.txt"
[11:57] <JanC> AnotherGuyver: you probably need some fiddling with PAM or the like
[11:58] <AnotherGuyver> Is it possible to auto-login without a password if you have an encrypted directory? I can do an 'ecryptfs-mount-private' and then a 'exec zsh' (not ... && ... though) to get back to my shell.
[11:58] <JanC> technically it should be possible
[11:59] <JanC> not sure if it's implemented  :)
[12:00] <JanC> basically, you need a way to unlock the key for the encrypted file system
[12:00] <AnotherGuyver> You mean before I log into the system?
[12:01] <JanC> just after you log in
[12:01] <JanC> but before you run anything else
[12:01] <Sircle> squid cannot cache https traffic, what is the solution?
[12:02] <JanC> Sircle: why do you need a "solution"
[12:02] <Sircle> JanC:  of course to cache https traffic
[12:03] <Sircle> to save bandwidth and data transfer
[12:03] <AnotherGuyver> JanC: could I just autorun the ecrypt... and so on?
[12:03] <JanC> for most sites that use HTTPS (properly), you wouldn't cache much anyway
[12:04] <JanC> AnotherGuyver: I think that can happen with some help from PAM...
[12:04] <JanC> but I never tried that  :)
[12:04] <blackflow> Sircle: http://wiki.squid-cache.org/Features/HTTPS
[12:06] <Sircle> blackflow:  have you implemented it, and do you agree it's simple enough?
[12:07] <AnotherGuyver> JanC: Would something like that work: http://askubuntu.com/questions/115497/encrypted-home-directory-not-auto-mounting (the second solution with the 2 thumbs-ups)?
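A hedged sketch of automating the two manual steps AnotherGuyver describes, placed in the stub home's ~/.profile (the unmounted view that only shows Access-Your-Private-Data.desktop and README.txt). This still prompts for the passphrase to unwrap the key; whether it fits the askubuntu approach exactly is untested here:

```shell
# In the *unmounted* home's ~/.profile: mount the private data, then
# restart the shell from inside the now-visible real home directory.
if [ -e "$HOME/Access-Your-Private-Data.desktop" ]; then
    ecryptfs-mount-private && cd "$HOME" && exec "$SHELL" -l
fi
```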
[12:08] <blackflow> Sircle: no, it's a pain and basically futile imho
[12:08] <Sircle> blackflow:  exactly my  point.
[12:10] <Sircle> blackflow:  is there a seamless solution?
[12:10] <blackflow> Sircle: no, it's the nature of SSL traffic. Uncacheable unless you break SSL
[12:11] <JanC> the whole point of HTTPS traffic is that it's not cacheable between server & client...
[12:12] <Sircle> blackflow:  JanC   don't you think it should be cacheable (as encrypted data). The browser decrypts it.
[12:12] <blackflow> Sircle: no, because that's the nature of encryption. Perfect encryption is indistinguishable from random noise.
[12:12] <JanC> the server & the client (browser) can cache it
[12:13] <blackflow> random noise cannot be compressed nor cached.
[12:14] <blackflow> JanC: "it" being content after decryption :)   so back to "unless you break SSL"
[12:15] <Sircle> hm
[12:15] <JanC> e.g. if the servers says a particular resource (e.g. an image) will never get changed, a browser should only fetch it if it's not in its cache
[12:16] <blackflow> note that properly done sites will return 304 for unmodified content, which is the best they can do to reduce encrypted traffic, aside from compression.
[12:16] <JanC> blackflow: it's not about breaking SSL, it's about using HTTP properly
[12:16] <JanC> of course you don't have much control over that as a user
[12:17] <blackflow> JanC: no, you're looking at two different OSI layers. You want to cache one level higher, with the infra that transports it (the cache in the middle) oblivious to those higher levels (beyond encryption)
[12:17] <blackflow> HTTP is not used improperly
[12:18] <blackflow> You can terminate SSL, cache content and serve cached content to your clients with your own certificate
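For reference, the SSL-terminating approach blackflow describes looks roughly like this in squid 3.5-era syntax (directives vary by squid version, the CA path is illustrative, and every client must be made to trust myCA.pem — which is exactly the MITM caveat the channel raises):

```
# squid.conf sketch — terminate TLS, cache, re-encrypt with our own CA
http_port 3128 ssl-bump \
    cert=/etc/squid/myCA.pem \
    generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```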
[12:18] <JanC> of course those are different OSI layers
[12:19] <blackflow> But imho it's futile, the nature of modern web is not cacheable in that way, the best you can do, and that part works, is browsers asking if content they saw before has changed, and they receive it only if changed. that's the PROPER way to use http :)
[12:20] <blackflow> so browsers cache locally. Use the developer tool of your browser, eg. hit F12 in FireFox, and observe how much of it is responded with 304
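The 304 flow blackflow points at can be sketched without a live server: the first response carries a validator (an ETag), and on revalidation the client echoes it back so the server can answer 304 Not Modified with no body. The headers below are simulated stand-ins for real `curl -sI` output:

```shell
# Simulated first-response headers (what `curl -sI <url>` would print):
cat > /tmp/headers.txt <<'EOF'
HTTP/1.1 200 OK
ETag: "abc123"
Cache-Control: max-age=3600
EOF

# Extract the validator the browser would send back on revalidation:
etag=$(awk -F': ' 'tolower($1)=="etag"{print $2}' /tmp/headers.txt | tr -d '"\r')
echo "$etag"   # abc123

# A real revalidation would then be:
#   curl -sI -H "If-None-Match: \"$etag\"" <url>   # 304 if unchanged
```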
[12:20] <JanC> for most sites there is no need to ask if content changed for most of their content
[12:20] <JanC> especially not for their big content
[12:20] <JanC> like images
[12:20] <blackflow> the sites decide that and can set very long timeout. that's what we do on images, for example. if the images change, they get new URLs, so we don't have to play the cache invalidation game.
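The pattern blackflow describes — images get new URLs when content changes, so responses can carry a far-future lifetime and invalidation never comes up — might look like this in an nginx config (path and header values illustrative):

```
location /static/ {
    # URLs change when content changes, so responses are safe to cache "forever"
    expires max;
    add_header Cache-Control "public, immutable";
}
```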
[12:21] <JanC> blackflow: exactly
[12:21] <JanC> blackflow: that's what I mean by sites using HTTP properly
[12:22] <JanC> which most don't
[12:22] <blackflow> JanC: ah yes. And sorry, I confused you with Sircle, thought you were talking about using the cache directives of http even through https. That's why I mentioned different OSI layers :)
[12:23] <JanC> cache directives in HTTPS work the same as in HTTP (although browsers might use them differently)
[12:23] <blackflow> sure, but intermediate caches (eg the Squid in question) can't, unless they terminate SSL
[12:24] <blackflow> In fact that is what antiviruses on Windows do in order to inspect https, imaps and pop3s traffic.
[12:24] <blackflow> At least some that I've seen.
[12:24] <JanC> when using client keys, they can't intermediate at all
[12:25] <blackflow> true.
[12:25] <JanC> and those Windows antivirus have been abused
[12:25] <blackflow> yup.
[12:25] <JanC> to do pretty much what they are supposed to avoid
[12:25] <blackflow> It's MITM for all intents and purposes, benign or not.
[12:26] <JanC> but then again, the whole SSL model is flawed  :-(
[12:28] <JanC> well, not the whole model, but at the very least how it has been implemented
[12:28] <blackflow> Yeah, especially since we're still calling it SSL, and nobody should be using SSL anyway.
[12:28] <JanC> doesn't matter if it's SSL or TLS
[12:28] <JanC> it's the trust model that is broken
[12:29] <blackflow> Yeah I know what you're talking about. Case in point: recent dropping of StartSSL from Chrome and FF.
[12:29] <JanC> they still support most country CAs
[12:29] <blackflow> those are cases that are detected, acted upon and publicized. But how many abuse cases are there that go undetected, unmitigated and silent.
[12:30] <JanC> they are unlikely to be detected if they are isolated
[12:32] <JanC> to be fair: I think it's great to have country CAs included, but there should be some way to make sure those are only used to sign government site certificates
[12:33] <JanC> and I doubt most SSL/TLS libraries do that right now...
[16:25] <jak2000> how to know why my network card is not up?
[16:27] <jak2000> when the Linux box starts up I get this: http://postimg.org/image/8xsh3mmzd/   I run the command and get this: https://postimg.org/image/eiyv6tnkv/   how to fix it? why can't I ping out of the box? thanks
[16:33] <pk2x3> open /etc/rc.local with root permissions and comment all lines except "exit 0", then restart the server.
[16:34] <pk2x3> "sudo nano /etc/rc.local"
[16:39] <pk2x3> Comment the lines by putting # in front of each line.
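pk2x3's edit can also be done non-interactively; a sketch operating on a stand-in copy rather than the real /etc/rc.local:

```shell
# Stand-in for /etc/rc.local:
cat > /tmp/rc.local <<'EOF'
#!/bin/sh -e
/usr/local/bin/custom-startup
exit 0
EOF

# Put a '#' in front of every uncommented line except "exit 0":
sed -i '/^exit 0$/!s/^[^#]/#&/' /tmp/rc.local
cat /tmp/rc.local
```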
[16:39] <jak2000> ok
[18:13] <jaguardown> Hi all
[18:16] <jaguardown> I have Ubuntu Server 16.04.1 i386 installed on an old, home box. I set up encrypted LVM. Is there a way to automatically provide the passphrase to decrypt on boot so that rebooting won't cut off remote ssh access? I tried to search but wasn't sure exactly how to find it.
[18:16] <jaguardown> If not I suppose I will just reinstall without encrypted LVM
[18:17] <jaguardown> But the fact that it is an option on a server installation leads me to believe there is a way to handle this issue.
[18:17] <gorelative> hey folks, on ubuntu 16.04.01 LTS, apt-get upgrade tells me these packages are held back... linux-headers-generic linux-headers-virtual linux-image-virtual linux-virtual
[18:18] <gorelative> #1 can i find out WHY, #2 how can i apply them?
[18:21] <rattking> gorelative: those packages probably require a new package being installed. 'upgrade' won't do that but a 'dist-upgrade' will
[18:21] <gorelative> ah ok thanks rattking, doing sudo apt-get upgrade linux-virtual looks to have resolved it :P
[18:22] <rattking> Ubuntu puts the kernel version in the package name, so apt considers them new packages.
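rattking's distinction, as a command sketch (the metapackage name is taken from gorelative's list; these need root and a live system, so they're shown for reference only):

```shell
apt-get update
apt-get -s dist-upgrade   # simulate first: shows which NEW packages the kept-back ones need
apt-get dist-upgrade      # unlike 'upgrade', may install new packages (the renamed kernel)
# or pull in just the metapackage chain:
apt-get install linux-virtual
```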
[18:44] <jaguardown> Anyone? :-)
[18:46] <tomreyn> jaguardown: automatically providing the passphrase would defeat the purpose
[18:47] <jaguardown> I figured as much. Why would anyone even set up encrypted LVM on a server, then?
[18:47] <jaguardown> Unless there is a way to provide the passphrase via SSH
[18:49] <blackflow> jaguardown: there is, dropbox in initramfs and a custom init script hook. it's a manual set up tho'
[18:49] <jaguardown> ok
[18:51] <blackflow> uhm... dropbear... sorry, my mind was elsewhere :)
[18:51] <blackflow> dropbear SSH
[18:59] <jaguardown> np
[19:01] <jaguardown> of course I install dropbear via apt and I get an error at the end of configuration that says invalid authorized key file, and that remote unlocking of cryptroot via ssh won't work
[19:01] <jaguardown> -_- time to do some investigative work I guess
[19:01] <jaguardown> This is basically a fresh install apart from a static ip, openssh, and ufw
[19:03] <blackflow> jaguardown: yeah I got that too, but I left that task for some time later. There are a few guides online with various reported success rates, like http://unix.stackexchange.com/questions/5017/ssh-to-decrypt-encrypted-lvm-during-headless-server-boot
[19:03] <blackflow> and further links on that page
[19:04] <jaguardown> Thank you!
[19:06] <blackflow> jaguardown: in theory it's straightforward, and the only "dubious" part is setting up dropbear to work with the same keys, and having the same signature so you can locally keep the same known_hosts sig and private key.
[19:07] <jaguardown> ah
[19:08] <blackflow> yeah otherwise you install cryptsetup, it'll get included in initramfs by default (/etc/initramfs-tools/initramfs.conf  MODULES=most  I believe does it, which is default)
[19:08] <blackflow> and you need a hook that will hold mounting root until you ssh in and manually set it in motion by calling cryptsetup to unlock root and proceed with normal mounting and switching root.
[19:10] <blackflow> the only other gotcha I saw in that procedure is killing dropbear before root mounts so there's no lingering process occupying the ssh port, so OpenSSH can normally start and continue providing sshd service
[19:13] <blackflow> and frankly I'm wondering why is dropbear even used, why can't it just be openssh. It has exactly the same requirements: all binaries and libraries involved have to be present in initramfs or built statically, just like dropbear.
[19:18] <jaguardown> Okay, sounds pretty straightforward. I'm gonna start by reading the document they talked about /usr/share/cryptsetup/README.remote.gz
[19:19] <jaguardown> er /usr/share/doc/cryptsetup/README.remote.gz*
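For reference, the rough shape of the setup on 16.04-era Ubuntu, per the README jaguardown mentions and the guides linked above. Paths here are from common writeups and should be verified against your release — newer Ubuntu/Debian moved the key file to /etc/dropbear-initramfs/ and added a cryptroot-unlock helper:

```shell
apt-get install dropbear         # the initramfs hooks pick it up alongside cryptsetup
# key the initramfs copy of dropbear will accept:
cat ~/.ssh/id_rsa.pub >> /etc/initramfs-tools/root/.ssh/authorized_keys
update-initramfs -u

# at boot, from another machine (the initramfs host key differs from the
# real sshd's, hence a separate known_hosts file):
ssh -o UserKnownHostsFile=~/.ssh/known_hosts.initramfs root@server
# inside the busybox shell, feed the passphrase to cryptsetup:
echo -n "passphrase" > /lib/cryptsetup/passfifo
```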
[19:21] <blackflow> jaguardown: the initramfs is basically just a simple tarball containing a file called "init" that gets executed. In its most basic form, that script only has to call "exec switch_root /path/to/root-filesystem /sbin/init" (or systemd instead of that sbin/init). That's all there is conceptually. Everything else in the init script is procedure needed to find and mount root, before the switch.
[19:22] <blackflow> Ubuntu's initramfs scripts are big because they contain lots of tests and sub-scripts to automate all this for various scenarios, so it all works automatically regardless of filesystems used, LVM, Raid, etc...
[19:25] <blackflow> and of course all the binaries used (like cryptsetup) have to be present in the initramfs, so initramfs is a tarball of a "mini root filesystem" containing the tools to find and mount the real root.
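blackflow's description of a minimal /init, as a conceptual sketch (device names are illustrative; real Ubuntu initramfs scripts add device discovery, error handling, and hooks for LVM/RAID/etc.):

```shell
#!/bin/sh
# Minimal conceptual initramfs /init
mount -t proc     proc     /proc
mount -t sysfs    sysfs    /sys
mount -t devtmpfs devtmpfs /dev

# Find and unlock the real root (prompts for the passphrase on the console):
cryptsetup luksOpen /dev/sda2 cryptroot
mount /dev/mapper/cryptroot /root

# Hand over to the real init; everything before this line is just the
# "procedure needed to find and mount root".
exec switch_root /root /sbin/init
```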
[19:36] <GALL0> on 16.04.1 server, installed ZFS, created a pool which Plex could see/read for the past week. after reboot, Plex can no longer see contents.
[19:37] <GALL0> was set to `/mnt/data` but now seems to be attached to `/data` although Plex can see both folders they appear to be empty. however if I connect to `/mnt` from my Mac I can see all the contents in Finder
[19:57] <blackflow> GALL0: "zfs list -o name,mountpoint" will show you where the datasets will automatically mount on import
[19:58] <teward> rbasak: if you're around, any idea how to force a package to *not* build with PIE?
[19:58] <blackflow> GALL0: also consider those are modulated with pool's altroot attribute, so eg. if the altroot is /mnt and a dataset is to mount in /data, it'll mount in /mnt/data
[19:58] <teward> or anyone on the server team
[19:58] <blackflow> teward: -nopie ?
[19:58] <GALL0> https://hastebin.com/duhobeqoni.hs
[19:59] <teward> blackflow: seems to be being ignored in the build flags :/
[19:59] <blackflow> teward: sorry, -no-pie
[19:59] <teward> didn't work either, I'll have to poke further once I'm not angry at sbuild >.>
[19:59] <GALL0> don't recall making `data/data` nor `six/backup`
[20:00] <blackflow> GALL0: so you have double mountpoints, not good
[20:00] <blackflow> set mountpoint=none   on six and data, I'm guessing that's what you want, so only data/data is mounted and six/six
[20:01] <blackflow> GALL0: it's not bad to have a dataset under the pool and not use the pool directly, so you can leave it that and just disable mounts for the pool roots
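blackflow's suggestion as commands, using the pool names from GALL0's paste (needs a live pool, so shown for reference):

```shell
# stop the pool-root datasets from mounting on top of their children:
zfs set mountpoint=none data
zfs set mountpoint=none six
# verify what will mount where on the next import:
zfs list -o name,mountpoint
```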
[20:01] <GALL0> blackflow  how can I delete `data/data` and `six/backup`? if I lose all data its a non issue, backed up elsewhere.
[20:01] <blackflow> GALL0: "zfs destroy data/data"   but CAREFUL, it'll destroy, won't ask "Are you sure"
[20:02] <blackflow> might need -r if you have snapshots in them
[20:02] <GALL0> havent done any snapshots yet, just created these pools a few days ago
[20:02] <blackflow> teward: possibly environment flags are added first, so package-intrinsic flags override them?
[20:04] <GALL0> (1:523)$ sudo zfs destroy -r data/data
[20:04] <GALL0> umount: /mnt/data: target is busy
[20:04] <GALL0>         (In some cases useful info about processes that
[20:04] <GALL0>          use the device is found by lsof(8) or fuser(1).)
[20:04] <GALL0> sorry, thought it'd be one line
[20:04] <blackflow> are you currently in that path? or have something else from it mounted? a process has open files in it?
[20:06] <GALL0> ah, rclone, forgot I'm copying from Amazon Cloud Drive
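As the umount hint in the error suggests, the busy-target error can be traced before retrying the destroy; a sketch:

```shell
fuser -vm /mnt/data    # list processes using the mount (here it was rclone)
lsof +D /mnt/data      # or: show open files under the path
# once the offending processes are stopped:
zfs destroy -r data/data
```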