/srv/irclogs.ubuntu.com/2016/12/29/#ubuntu-server.txt

=== JanC_ is now known as JanC
=== Into_the_Pit is now known as Frickelpit
SircleIf squid cannot cache https traffic, what is the solution?05:40
lordievaderGood morning08:19
Sirclemmlj4:  why go through all the trouble of MITM? No easy way? Almost all websites these days are https. So if it's unable to be cached, it's useless to have squid, no?10:30
SircleIf squid cannot cache https traffic, what is the solution?10:30
=== irv_ is now known as irv
=== rockstar_ is now known as rockstar
=== Tribaal_ is now known as Tribaal
blackflowSircle: I don't have the entire backlog, what was the issue?10:41
JanCblackflow: from what I can tell, he's wondering what use it has to run squid when more and more sites use HTTPS10:59
JanChe/she10:59
AnotherGuyverHi guys, quick question regarding ssh on Ubuntu server:11:01
AnotherGuyverI have my key in the authorized_keys file. When I try to log in, it asks for a password and the log tells me that the file cannot be found. However, if I already have a window open and try to log in a second time from the Terminal, it logs in without any problems.11:02
JanCssh can share connections11:03
maxbOr, perhaps you're using encrypted home directories, such that the ssh server cannot see the authorized_keys file unless it has been unlocked by another session11:04
AnotherGuyverAh, yes, I do. So after I log in the first time, the home directory becomes visible?11:05
JanCin that case you need to move the key outside the encrypted home11:05
AnotherGuyverAh I see. Ok, thank you, I'll try that.11:05
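
(For reference, a minimal sketch of the suggestion above, assuming OpenSSH on Ubuntu; the paths and the user name are placeholders. The idea is to point sshd at a per-user key file that lives outside the encrypted home:)

    # /etc/ssh/sshd_config -- read keys from an unencrypted, root-owned location
    AuthorizedKeysFile /etc/ssh/authorized_keys/%u

    # copy the key there and restart sshd (run as root; "user" is a placeholder)
    mkdir -p /etc/ssh/authorized_keys
    cp /home/user/.ssh/authorized_keys /etc/ssh/authorized_keys/user
    chmod 644 /etc/ssh/authorized_keys/user
    systemctl restart ssh
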
AnotherGuyverOk, it worked. However, I get a strange phenomenon. When I logged in with the password, I got the usual shell (zsh in my case): [user:~]$      Now I get user-www%11:14
AnotherGuyverAh, and I also get a message in the beginning "The programs included with the Ubuntu system are free software; ...."11:17
AnotherGuyverSo did it revert me to the bash shell? echo $0 still outputs -zsh11:17
AnotherGuyverAnd it seems the home folder has changed to include only: "Access-Your-Private-Data.desktop  README.txt"11:35
JanCAnotherGuyver: you probably need some fiddling with PAM or the like11:57
AnotherGuyverIs it possible to auto-login without a password if you have an encrypted directory? I can do an 'ecryptfs-mount-private' and then a 'exec zsh' (not ... && ... though) to get back to my shell.11:58
JanCtechnically it should be possible11:58
JanCnot sure if it's implemented  :)11:59
JanCbasically, you need a way to unlock the key for the encrypted file system12:00
AnotherGuyverYou mean before I log into the system?12:00
JanCjust after you log in12:01
JanCbut before you run anything else12:01
Sirclesquid cannot cache https traffic, what is the solution?12:01
JanCSircle: why do you need a "solution"12:02
SircleJanC:  of course to cache https traffic12:02
Sircleto save bandwidth and data transfer12:03
AnotherGuyverJanC: could I just autorun the ecrypt... and so on?12:03
JanCfor most sites that use HTTPS (properly), you wouldn't cache much anyway12:03
JanCAnotherGuyver: I think that can happen with some help from PAM...12:04
JanCbut I never tried that  :)12:04
blackflowSircle: http://wiki.squid-cache.org/Features/HTTPS12:04
Sircleblackflow:  have you implemented it and agree it's simple enough?12:06
AnotherGuyverJanC: Would something like that work: http://askubuntu.com/questions/115497/encrypted-home-directory-not-auto-mounting (the second solution with the 2 thumbs-ups)?12:07
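
(For reference, the approach in that answer amounts to something like the snippet below, placed in the login-shell startup file of the *unmounted* home stub (~/.profile for bash, ~/.zprofile for zsh). This is a sketch and untested here; ecryptfs-mount-private will still prompt for the login passphrase unless PAM has already unwrapped it, so it automates the manual steps rather than removing the prompt:)

    # run only while the private directory is still unmounted (the stub only
    # contains README.txt and the Access-Your-Private-Data.desktop file)
    if [ -f "$HOME/README.txt" ] && [ -x /usr/bin/ecryptfs-mount-private ]; then
        ecryptfs-mount-private
        cd "$HOME"
        exec zsh -l
    fi
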
blackflowSircle: no, it's a pain and basically futile imho12:08
Sircleblackflow:  exactly my  point.12:08
Sircleblackflow:  is there a seamless solution?12:10
blackflowSircle: no, it's the nature of SSL traffic. Uncacheable unless you break SSL12:10
JanCthe whole point of HTTPS traffic is that it's not cacheable between server & client...12:11
Sircleblackflow:  JanC   don't you think it should be cacheable (as encrypted data). The browser decrypts it.12:12
blackflowSircle: no because that's the nature of encryption. perfect encryption is indistinguishable from random noise.12:12
JanCthe server & the client (browser) can cache it12:12
blackflowrandom noise cannot be compressed nor cached.12:13
blackflowJanC: "it" being content after decryption :)   so back to "unless you break SSL"12:14
Sirclehm12:15
JanCe.g. if the server says a particular resource (e.g. an image) will never get changed, a browser should only fetch it if it's not in its cache12:15
blackflownote that properly done sites will return 304 for unmodified content, which is the best they can do to reduce encrypted traffic, aside from compression.12:16
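
(For anyone who wants to see this from a shell, a quick illustrative check; the URL and ETag value are placeholders:)

    # first request: note the ETag header in the response
    curl -sI https://example.com/logo.png | grep -i '^etag'
    # repeat conditionally with that ETag; a well-behaved server answers 304 Not Modified
    curl -sI -H 'If-None-Match: "abc123"' https://example.com/logo.png | head -n 1
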
JanCblackflow: it's not about breaking SSL, it's about using HTTP properly12:16
JanCof course you don't have much control over that as a user12:16
blackflowJanC: no, you're looking at two different OSI layers. You want to cache one level higher, with the infra that transports it (the cache in the middle) oblivious to those higher levels (beyond encryption)12:17
blackflowHTTP is not used improperly12:17
blackflowYou can terminate SSL, cache content and serve cached content to your clients with your own certificate12:18
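
(Roughly what that looks like in squid.conf, for the ssl-bump feature described on the wiki page linked above; directive names and helper paths vary between Squid versions and this assumes a build with SSL and the certificate-generation helper, so treat it as a sketch. It also requires generating your own CA and installing it on every client, i.e. deliberate MITM:)

    # squid.conf sketch (Squid 3.5-era syntax; paths are examples)
    http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/squid/myCA.pem
    sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
    ssl_bump bump all
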
JanCof course those are different OSI layers12:18
blackflowBut imho it's futile, the nature of the modern web means it's not cacheable in that way; the best you can do, and that part works, is browsers asking if content they saw before has changed, and they receive it only if changed. that's the PROPER way to use http :)12:19
blackflowso browsers cache locally. Use the developer tools of your browser, e.g. hit F12 in Firefox, and observe how many requests are answered with 30412:20
JanCfor most sites there is no need to ask if content changed for most of their content12:20
JanCespecially not for their big content12:20
JanClike images12:20
blackflowthe sites decide that and can set a very long timeout. that's what we do with images, for example. if the images change, they get new URLs, so we don't have to play the cache invalidation game.12:20
JanCblackflow: exactly12:21
JanCblackflow: that's what I mean by sites using HTTP properly12:21
JanCwhich most don't12:22
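
(A hypothetical nginx location block illustrating the long-lifetime-plus-versioned-URLs approach described above; the file extensions and lifetime are examples:)

    # versioned static assets can be cached "forever" by the browser
    location ~* \.(png|jpe?g|gif|css|js)$ {
        add_header Cache-Control "public, max-age=31536000, immutable";
    }
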
blackflowJanC: ah yes. And sorry, I confused you with Sircle, thought you were talking about using the cache directives of HTTP even through HTTPS. That's why I mentioned different OSI layers :)12:22
JanCcache directives in HTTPS work the same as in HTTP (although browsers might use them differently)12:23
blackflowsure, but intermediate caches (eg the Squid in question) can't, unless they terminate SSL12:23
blackflowIn fact that is what antiviruses on Windows do in order to inspect https, imaps and pop3s traffic.12:24
blackflowAt least some that I've seen.12:24
JanCwhen using client keys, they can't intermediate at all12:24
blackflowtrue.12:25
JanCand those Windows antivirus have been abused12:25
blackflowyup.12:25
JanCto do pretty much what they are supposed to avoid12:25
blackflowIt's MITM for all intents and purposes, benign or not.12:25
JanCbut then again, the whole SSL model is flawed  :-(12:26
JanCwell, not the whole model, but at the very least how it has been implemented12:28
blackflowYeah, especially since we're still calling it SSL, and nobody should be using SSL anyway.12:28
JanCdoesn't matter if it's SSL or TLS12:28
JanCit's the trust model that is broken12:28
blackflowYeah I know what you're talking about. Case in point: recent dropping of StartSSL from Chrome and FF.12:29
JanCthey still support most country CAs12:29
blackflowthose are cases that are detected, acted upon and publicized. But how many abuse cases are there that go undetected, unmitigated and silent.12:29
JanCthey are unlikely to be detected if they are isolated12:30
JanCto be fair: I think it's great to have country CAs included, but there should be some way to make sure those are only used to sign government site certificates12:32
JanCand I doubt most SSL/TLS libraries do that right now...12:33
jak2000how to know why my network card is not up?16:25
jak2000when the Linux box starts up I get this: http://postimg.org/image/8xsh3mmzd/   I run the command and get this: https://postimg.org/image/eiyv6tnkv/   how to fix it? why can't I ping out of the box? thanks16:27
pk2x3open /etc/rc.local with root permissions and comment all lines except "exit 0", then restart the server.16:33
pk2x3"sudo nano /etc/rc.local"16:34
pk2x3Comment the lines by putting # in front of each line.16:39
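
(In other words, /etc/rc.local reduced to something like the following; the ifconfig line is only an example of the kind of entry being commented out:)

    #!/bin/sh -e
    # everything except the final exit commented out for testing, e.g.:
    # ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
    exit 0
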
jak2000ok16:39
jaguardownHi all18:13
jaguardownI have Ubuntu Server 16.04.1 i386 installed on an old, home box. I set up encrypted LVM. Is there a way to automatically provide the passphrase to decrypt on boot so that rebooting won't cut off remote ssh access? I tried to search but wasn't sure exactly how to find it.18:16
jaguardownIf not I suppose I will just reinstall without encrypted LVM18:16
jaguardownBut the fact that it is an option on a server installation leads me to believe there is a way to handle this issue.18:17
gorelativehey folks, on ubuntu 16.04.01 LTS, apt-get upgrade tells me these packages are held back... linux-headers-generic linux-headers-virtual linux-image-virtual linux-virtual18:17
gorelative#1 can i find out WHY, #2 how can i apply them?18:18
rattkinggorelative: those packages probably require a new package being installed. 'upgrade' wont do that but a 'dist-upgrade' will18:21
gorelativeah ok thanks rattking, doing sudo apt-get upgrade linux-virtual looks to have resolved it :P18:21
rattkingUbuntu puts the kernel version in the package name, so apt considers them new packages.18:22
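
(I.e. either of the following; the package list is the one from the question above:)

    sudo apt-get update
    sudo apt-get dist-upgrade    # allowed to install the newly named kernel packages
    # or pull just the kernel meta-packages forward:
    sudo apt-get install linux-virtual linux-image-virtual linux-headers-virtual linux-headers-generic
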
jaguardownAnyone? :-)18:44
tomreynjaguardown: automatically providing the passphrase would defeat the purpose18:46
jaguardownI figured as much. Why would anyone even set up encrypted LVM on a server, then?18:47
jaguardownUnless there is a way to provide the passphrase via SSH18:47
blackflowjaguardown: there is, dropbox in initramfs and a custom init script hook. it's a manual set up tho'18:49
jaguardownok18:49
blackflowuhm... dropbear... sorry, my mind was elsewhere :)18:51
blackflowdropbear SSH18:51
jaguardownnp18:59
jaguardownof course I install dropbear via apt and I get an error at the end of configuration that says invalid authorized key file, and that remote unlocking of cryptroot via ssh won't work19:01
jaguardown-_- time to do some investigative work I guess19:01
jaguardownThis is basically a fresh install apart from a static ip, openssh, and ufw19:01
blackflowjaguardown: yeah I got that too, but I left that task for some time later. There are a few guides online with various reported success rates, like http://unix.stackexchange.com/questions/5017/ssh-to-decrypt-encrypted-lvm-during-headless-server-boot19:03
blackflowand further links on that page19:03
jaguardownThank you!19:04
blackflowjaguardown: in theory it's straightforward, and the only "dubious" part is setting up dropbear to work with the same keys, and having the same signature so you can locally keep the same known_hosts sig and private key.19:06
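
(For reference, the "invalid authorized key file" warning mentioned above usually just means no key has been given to the initramfs hook yet; where the hook looks for it differs between releases and between the dropbear and dropbear-initramfs packages, so the paths below are the usual candidates rather than a guarantee:)

    # depending on the packaging, one of these:
    sudo cp ~/.ssh/id_rsa.pub /etc/dropbear-initramfs/authorized_keys
    # or, on older setups:
    sudo mkdir -p /etc/initramfs-tools/root/.ssh
    sudo cp ~/.ssh/id_rsa.pub /etc/initramfs-tools/root/.ssh/authorized_keys
    # then rebuild the initramfs
    sudo update-initramfs -u
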
jaguardownah19:07
blackflowyeah otherwise you install cryptsetup, it'll get included in initramfs by default (/etc/initramfs-tools/initramfs.conf  MODULES=most  I believe does it, which is default)19:08
blackflowand you need a hook that will hold mounting root until you ssh in and manually set it in motion by calling cryptsetup to unlock root and proceed with normal mounting and switching root.19:08
blackflowthe only other gotcha I saw in that procedure is killing dropbear before root mounts so there's no lingering process occupying the ssh port, so OpenSSH can normally start and continue providing sshd service19:10
blackflowand frankly I'm wondering why dropbear is even used, why can't it just be openssh. It has exactly the same requirements: all binaries and libraries involved have to be present in initramfs or built statically, just like dropbear.19:13
jaguardownOkay sounds pretty straightforward. I'm gonna start by reading the document they talked about /usr/share/cryptsetup/README.remote.gz19:18
jaguardowner /usr/share/doc/cryptsetup/README.remote.gz*19:19
blackflowjaguardown: the initramfs is basically just a simple tarball containing a file called "init" that gets executed. In its most basic form, that script only has to call "exec switch_root /path/to/root-filesystem /sbin/init" (or systemd instead of that sbin/init). That's all there is conceptually. Everything else in the init script is procedure needed to find and mount root, before the switch.19:21
blackflowUbuntu's initramfs scripts are big because they contain lots of tests and sub-scripts to automate all this for various scenarios, so it all works automatically regardless of filesystems used, LVM, Raid, etc...19:22
blackflowand of course all the binaries used (like cryptsetup) have to be present in the initramfs, so initramfs is a tarball of a "mini root filesystem" containing the tools to find and mount the real root.19:25
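
(A deliberately minimal illustration of that idea, assuming a busybox-style initramfs and /dev/sda2 as the LUKS-encrypted root; Ubuntu's real scripts are far more involved:)

    #!/bin/sh
    # minimal initramfs /init sketch
    mount -t proc proc /proc
    mount -t sysfs sysfs /sys
    mount -t devtmpfs devtmpfs /dev
    cryptsetup open /dev/sda2 cryptroot    # prompts for the passphrase on the console
    mount /dev/mapper/cryptroot /root
    exec switch_root /root /sbin/init
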
GALL0on 16.04.1 server, installed ZFS, created a pool which Plex could see/read for the past week. after reboot, Plex can no longer see contents.19:36
GALL0was set to `/mnt/data` but now seems to be attached to `/data`; although Plex can see both folders, they appear to be empty. however if I connect to `/mnt` from my Mac I can see all the contents in Finder19:37
blackflowGALL0: "zfs list -o name,mountpoint" will show you where the datasets will automatically mount on import19:57
tewardrbasak: if you're around, any idea how to force a package to *not* build with PIE?19:58
blackflowGALL0: also consider those are modulated with pool's altroot attribute, so eg. if the altroot is /mnt and a dataset is to mount in /data, it'll mount in /mnt/data19:58
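
(For example, with "data" standing in for the pool in question:)

    zfs list -o name,mountpoint,mounted
    zpool get altroot data
    zfs get -r mountpoint data
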
tewardor anyone on the server team19:58
blackflowteward: -nopie ?19:58
GALL0https://hastebin.com/duhobeqoni.hs19:58
tewardblackflow: seems to be being ignored in the build flags :/19:59
blackflowteward: sorry, -no-pie19:59
tewarddidn't work either, I'll have to poke further once I'm not angry at sbuild >.>19:59
GALL0don't recall making `data/data` nor `six/backup`19:59
blackflowGALL0: so you have double mountpoints, not good20:00
blackflowset mountpoint=none   on six and data, I'm guessing that's what you want, so only data/data and six/six are mounted20:00
blackflowGALL0: it's not bad to have a dataset under the pool and not use the pool directly, so you can leave it like that and just disable mounts for the pool roots20:01
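
(A sketch of that suggestion, using the pool names mentioned above; note that children inherit the mountpoint unless they have one set explicitly, so check the result with zfs list afterwards:)

    sudo zfs set mountpoint=none data
    sudo zfs set mountpoint=none six
    zfs list -o name,mountpoint,mounted
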
GALL0blackflow  how can I delete `data/data` and `six/backup`? if I lose all data it's a non-issue, backed up elsewhere.20:01
blackflowGALL0: "zfs destroy data/data"   but CAREFUL, it'll destroy, won't ask "Are you sure"20:01
blackflowmight need -r if you have snapshots in them20:02
GALL0haven't done any snapshots yet, just created these pools a few days ago20:02
blackflowteward: possibly environment flags are added first, so package intrinsic flags override them?20:02
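
(If the package uses dpkg-buildflags / debhelper, the usual knobs are the ones below; a hypothetical debian/rules fragment. Whether it sticks depends on the package not re-adding its own flags, which is the point being made above, and on toolchains that default to PIE the extra -no-pie flags may also be needed:)

    # debian/rules (sketch)
    export DEB_BUILD_MAINT_OPTIONS = hardening=-pie
    # where the compiler itself defaults to PIE:
    export DEB_CFLAGS_MAINT_APPEND = -fno-PIE
    export DEB_LDFLAGS_MAINT_APPEND = -fno-PIE -no-pie
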
GALL0(1:523)$ sudo zfs destroy -r data/data20:04
GALL0umount: /mnt/data: target is busy20:04
GALL0        (In some cases useful info about processes that20:04
GALL0         use the device is found by lsof(8) or fuser(1).)20:04
GALL0sorry, thought it'd be one line20:04
blackfloware you currently in that path? or have something else from it mounted? a process has open files in it?20:04
GALL0ah, rclone, forgot I'm copying from Amazon Cloud Drive20:06
=== NewYears is now known as nchambers

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!