[00:10] <Arroyo1010> hehe, I'm europe based, too :)
[00:10] <Arroyo1010> But I'm an owl
[00:11] <sarnold> indeed :)
[00:13] <Arroyo1010> I'm actually making some progress. Turns out that the qemu "builder" in packer supports .img format. Now I need to figure out how to ingest the public key, etc
[00:15] <Arroyo1010> And, of course, I'm using the official Ubuntu cloud .img for that
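[Editor's note] For context on "ingesting the public key": the usual way to get an SSH key into an official Ubuntu cloud image is cloud-init user-data. A minimal sketch — the user name and key below are placeholders, and attaching it as a NoCloud seed image labelled `cidata` (e.g. built with `cloud-localds` from cloud-image-utils) is one common approach, not necessarily what Arroyo1010 ended up doing:

```yaml
#cloud-config
# Illustrative user-data only -- the key is a placeholder.
users:
  - name: ubuntu
    ssh_authorized_keys:
      - ssh-ed25519 AAAA_PLACEHOLDER_KEY user@host
    sudo: ALL=(ALL) NOPASSWD:ALL
```

cloud-init inside the image picks this up on first boot and installs the key for the named user.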
[06:07] <lordievader> Good morning
[06:08] <ubernets> I have an Ubuntu 16.04 server; df -Th / shows 1.6 GB of available space, and there is only one partition in /etc/fstab. However, I keep running into disk-full errors from various programs: apt-get upgrade, apt-get autoremove, apt-get install, and apt-get -f install show disk full and sometimes a resulting dependency error. git pull also shows a disk-full error message. Any idea what could be causing these errors even though the disk has 1.6 GB of free space?
[06:08] <ubernets> I have rebooted the server twice already, still the same issues. Here is the output of df -h http://paste.ubuntu.com/25538619/ . And here is a failed apt-get upgrade showing the error message http://paste.ubuntu.com/25538626/ .
[06:18] <lordievader> ubernets: I'd do a watch on the df command while running the upgrade. Perhaps apt is downloading so much as to fill the disk.
[06:19] <ubernets> lordievader, I will look into that, but it happened with the git pull too. So I am skeptical that that's the cause
[06:20] <lordievader> True. It makes for a strange issue.
[06:20] <lordievader> Can you make files yourself still? Or does that too throw an error?
[06:24] <ubernets> Checking
[06:25] <ubernets> lordievader, Yea I created a very small text file. It didn't throw an error
[06:25] <lordievader> Hmm, strange problem.
[06:27] <ubernets> lordievader, it gets stranger. I typed a rm command for the file and while using tab completion for the file name I got this error message: rm test-bash: cannot create temp file for here-document: No space left on device
[06:28] <ubernets> lordievader, the file was named test.txt. So there seems to be a problem with the /tmp folder ?
[06:28] <lordievader> ubernets: I saw multiple folders in the apt output complaining.
[06:29] <ubernets> I typed tab completion after typing test
[06:29] <lordievader> What kind of filesystem is on xvda1?
[06:29] <ubernets> rm test<tab>, then it completed to "rm test-bash: cannot create temp file for here-document: No space left on device"
[06:30] <ubernets> I think ext4 , double checking
[06:30] <ubernets> Yes ext4
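[Editor's note] One check not raised in the exchange above: "No space left on device" while df reports free blocks is the classic symptom of inode exhaustion, which `df -h` never shows. A quick, safe diagnostic:

```shell
# df -h reports free blocks; ENOSPC can also mean the filesystem is out of
# inodes (common when a directory accumulates huge numbers of tiny files).
df -h /
df -i /   # check IUse% -- at 100%, file creation fails despite free space
```

If IUse% is at 100%, the fix is deleting (or relocating) masses of small files rather than freeing bytes.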
[06:37] <lordievader> Are you able to reboot the server?
[06:40] <ubernets> Yes I rebooted it twice
[06:42] <lordievader> ubernets: What does 'du -hs /' report?
[06:43] <lordievader> Around 7.8G of usage?
[06:45] <ubernets> one sec
[06:52] <ubernets> lordievader, running the command now, it takes some time to complete
[06:52] <lordievader> I'm sure it will.
[06:53] <ubernets> lordievader, here is the output http://paste.ubuntu.com/25538754/
[06:55] <lordievader> Hmm, unless that /var/lib/lxcfs folder is ~1.5G this seems fine.
[06:56] <ubernets> Why is access denied to root?
[06:58] <lordievader> Good question. I wouldn't be surprised if AppArmor has something to do with it.
[07:00] <ubernets> lordievader, I want to show you something. The files from the last paste have question marks instead of permission flags
[07:01] <ubernets> lordievader, http://paste.ubuntu.com/25538796/
[07:01] <lordievader> That means you are not allowed to read the metadata from it.
[07:01] <lordievader> Now that I think about it, might also be a userspace mount. Sshfs mounts can result in this.
[07:02] <lordievader> Anyhow, I'd start cleaning up or extending the disk.
[07:02] <ubernets> more /etc/fstab : LABEL=cloudimg-rootfs	/	 ext4	defaults,discard	0 0
[07:04] <ubernets> How should I clean it up?
[07:04] <ubernets> You mean just keep deleting more files?
[07:06] <lordievader> In a sane way, yes.
[07:06] <lordievader> Remove packages you don't need, etc.
[07:28] <ubernets> lordievader, I think someone is bruteforcing the server and it fills up the auth and btmp logs
[07:29] <lordievader> Why do you think that?
[07:30] <ubernets> tail -f auth.log keeps showing up these kinds of messages : Failed password for root from 121.18.238.106 port 59927 ssh2
[07:30] <ubernets> IP address from China
[07:31] <ubernets> the btmp log tripled in size in the last few minutes
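[Editor's note] To confirm a brute-force run is what's filling the logs, failed attempts can be grouped by source IP. A sketch using an inline sample modelled on the message quoted above (on a real system, point grep at /var/log/auth.log instead of the sample file):

```shell
# Build a small sample auth.log for illustration.
cat > /tmp/auth.sample <<'EOF'
Sep 18 07:30:01 host sshd[123]: Failed password for root from 121.18.238.106 port 59927 ssh2
Sep 18 07:30:02 host sshd[124]: Failed password for root from 121.18.238.106 port 59928 ssh2
Sep 18 07:30:03 host sshd[125]: Failed password for admin from 203.0.113.7 port 41000 ssh2
EOF
# Count failed SSH logins per source IP ($(NF-3) is the IP field).
grep 'Failed password' /tmp/auth.sample \
  | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn
# -> 2 attempts from 121.18.238.106, 1 from 203.0.113.7
```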
[07:32] <lordievader> Public ip?
[07:32] <lordievader> If so, configure fail2ban or something similar.
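[Editor's note] A minimal fail2ban jail for sshd might look like this (values are illustrative; on Ubuntu this would typically go in /etc/fail2ban/jail.local so package upgrades don't overwrite it):

```ini
[sshd]
enabled  = true
port     = ssh
maxretry = 5
findtime = 600
bantime  = 3600
```

This bans an IP for an hour after 5 failed attempts within 10 minutes, which also keeps auth.log and btmp from ballooning.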
[07:41] <ubernets> lordievader, it's an ec2 instance and there are only 3 inbound ports allowed in the security group, but I see a lot of ssh2 attempts on all kinds of ports logged into auth.log .
[07:50] <lordievader> Of course, if port 22 is publicly accessible you get login attempts.
[07:51] <ubernets> The attempts are made on ports like 59927. See above
[07:52] <ubernets> Oh my bad
[07:52] <ubernets> it's the from port
[08:20] <cpaelzer> jamespage: does UCA publish debug symbols as well?
[08:21] <cpaelzer> I tried to fetch them as I'd do on a "normal" ppa, but that didn't work yet
[08:34] <fishcooker> i've downloaded http://releases.ubuntu.com/16.04/ubuntu-16.04.3-server-i386.iso then created the live USB with Startup Disk Creator.. and the result is always a failure
[08:34] <fishcooker> isolinux.bin missing or corrupt
[08:40] <lordievader> fishcooker: Uefi or bios?
[08:47] <fishcooker> bios
[08:49] <fishcooker> will dd'ing the image directly solve the problem, lordievader?
[08:51] <lordievader> Bios is usually trouble free. You might try unetbootin. I'd leave the dd option as a last resort.
[09:08] <fishcooker> thanks for unetbootin lordievader, noted for the dd
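[Editor's note] For reference, the dd fallback lordievader mentions would look roughly like this. /dev/sdX is a placeholder for the USB stick, and dd overwrites its target without any confirmation, so it is essential to verify the device name first — which is why it's sensibly treated as a last resort:

```shell
lsblk                                   # identify the USB stick's device name
sudo dd if=ubuntu-16.04.3-server-i386.iso of=/dev/sdX bs=4M status=progress
sync                                    # flush buffers before unplugging
```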
[09:09] <fishcooker> i have a dell server with 6 hard-disk slots: 2 SAS disks of 70GB and 2 disks of 1TB. 2 slots will be used as raid-1 for the 2 SAS disks. What should I do with the rest of the resources? Let's say in the future I want to add 2 disks in the remaining slots; should I go with lvm?
[10:28] <Adillian> Morning all. I just installed ubuntu-server and I can't connect to my lan let alone wifi. ifconfig has 'lo' with Local Loopback, and
[10:28] <Adillian> ..and virbr0 with link encap:ethernet
[10:29] <Adillian> any ideas what I can try or where I can find relevant documentation?
[10:58] <Adillian> never mind, fixed it
[13:02] <lordievader> fishcooker: Concerning your question about lvm. It does sound like you could benefit from using lvm.
[13:02] <lordievader> Adding/removing disks is quite simple in lvm.
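[Editor's note] As a rough illustration of why adding disks is simple under LVM — device names and the volume-group/logical-volume names here are assumptions, and the commands need root and the lvm2 tools, so this is a sketch rather than something runnable as-is:

```shell
pvcreate /dev/sde /dev/sdf            # initialise the newly added disks
vgextend vg0 /dev/sde /dev/sdf        # add them to an existing volume group
lvextend -l +100%FREE /dev/vg0/data   # grow a logical volume into the new space
resize2fs /dev/vg0/data               # grow the ext4 filesystem to match
```

The filesystem grows online; nothing needs repartitioning or reinstalling.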
[13:59] <ilmaisin> hello
[14:00] <ilmaisin> can apparmor do the following: let's say we have a port in the unprivileged range and we want only one user to be able to bind to it
[14:48] <sdeziel> ilmaisin: no
[15:17] <disposable2> is there anything like bcache but in RAM? i don't care about data loss during power failure, this is for Ceph, data is replicated on other nodes. ideally i'd like something tiered (ram -> nvme ssd -> slow spinning disks).
[15:31] <JanC> disposable2: you can have memory-backed block devices; not sure what is currently the best option for what you want to do, though
[15:31] <rh10> guys, is the Go language suitable for system automation tasks? one employer requires it
[15:32] <rh10> is it realistic to write such kinds of things with Go instead of bash or python?
[15:32] <JanC> although I guess something bcache-like could be more optimized than having the extra layer
[15:33] <ScottyAtHome> what FS are people using for a RAID1-type system on a 16.04 server? I tried BTRFS but it is a bit too faffy and difficult to get working when testing raid1 issues. Has anyone else had those issues?
[15:33] <nacc> rh10: you certainly can
[15:34] <rh10> nacc, thanks
[15:34] <sdeziel> ScottyAtHome: ZFS works pretty well on 16.04
[15:34] <dpb1> ScottyAtHome: you are running software raid?  mdraid?
[15:34] <JanC> I know people who write system automation "scripts" in C  ;)
[15:34] <nacc> JanC: indeed :)
[15:34] <JanC> ScottyAtHome: you can also use ZFS, or layered software raid + whatever filesystem you want
[15:35] <ScottyAtHome> sdeziel: might try it. How are you finding it?
[15:36] <ScottyAtHome> dpb1: i am running the BTRFS raid; it's a bit of a pain to get working when the root is on it and you're stuck booting into the initramfs to get it back up when degraded
[15:36] <sdeziel> ScottyAtHome: I use it on all my physical machines
[15:36] <dpb1> ScottyAtHome: personally, I prefer zfs, or traditional raid + ext4.
[15:36] <ScottyAtHome> JanC: have you used ZFS? if so how have you found it?
[15:37] <JanC> I have ZFS in one system, but not using mirroring
[15:37] <sdeziel> I have not tried a root on ZFS though, for that I stick to mdraid + ext4
[15:38] <ScottyAtHome> sdeziel: I use BTRFS on my desktop & laptop, but this is the first time having it on a server in raid1 and it is a pain. How do you find the raid1 (or whichever raid you use) when testing for failures?
[15:38] <sdeziel> ScottyAtHome: mirroring on ZFS works really well, done many rebuilds, lost many drives but 0 data :)
[15:38] <ScottyAtHome> sdeziel: thattttt sounds like a sensible idea
[15:38] <JanC> ZFS might have the same issue: you're not used to it (yet)  :)
[15:39] <ScottyAtHome> JanC: true. It was when BTRFS had me updating Grub stuff just to get it to accept a degraded array that I thought: this is no good for SSHing into to sort out
[15:41] <ScottyAtHome> Does anyone know where LXD containers are kept? as I want to keep the containers on an easily adjustable FS
[15:41] <sdeziel> ScottyAtHome: when you install LXD, it will ask you if you want to create a zpool for its storage
[15:41] <rh10> nacc, question: how handy is it to write in Go? probably there are a lot of useful libs in that case
[15:43] <ScottyAtHome> sdeziel: that sounds better. When you have had a degraded raid with ZFS, will the system still boot? Are the unbroken drive(s) still usable?
[15:43] <nacc> rh10: probably better asked in a go channel
[15:43] <nacc> rh10: not sure what you mean, anyways
[15:43] <rh10> nacc, you're right
[15:44] <sdeziel> ScottyAtHome: my bootup doesn't depend on ZFS as my root FS is on md+ext4. That said, ZFS mounted what remained of the pool and was still usable
[15:44] <dpb1> ScottyAtHome: yes, filesystems can be mounted degraded.
[15:45] <sdeziel> ScottyAtHome: then when I replaced the faulty drive, rebuilding (resilver) was fast as it only rebuilds the data, not the full disk like md does
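[Editor's note] The replace-and-resilver flow sdeziel describes looks roughly like this — pool and device names are assumptions, and the commands need ZFS tooling and root, so this is illustrative only:

```shell
zpool status tank              # spot the FAULTED/UNAVAIL device
zpool replace tank sdb sdf     # substitute the new disk for the failed one
zpool status tank              # watch resilver progress: ZFS copies only
                               # allocated blocks, unlike md's whole-disk rebuild
```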
[15:45] <ScottyAtHome> dpb1: automatically or manually? I want one that is automatic so I can ssh in to sort the problem out.
[15:46] <ScottyAtHome> sdeziel: thanks for the info, that is useful.  Glad I have come on here to ask as it is quicker than it has been testing the system out.
[15:46] <sdeziel> np
[15:48] <disposable2> ScottyAtHome: I always use ZFS for everything but /. No zfs support in grub-efi and no installer support force me to use btrfs for /. Only once did I have an unbootable system after a power cut, so I had to boot from a USB disk and run "btrfs check" or something similar against my /. After that, grub would boot it up again.
[15:50] <disposable2> ScottyAtHome: one day, we'll have solaris-style boot environments (whether it's with btrfs or zfs, i don't care) and we'll all live happily ever after.
[15:50] <ScottyAtHome> disposable2: when I have been testing BTRFS it gets stuck at boot and goes into the initramfs; were you able to get around this? I need the server to be remotely accessible to sort out the issues.
[15:51] <ScottyAtHome> disposable2: lol, what were solaris style boot environments?
[15:52] <disposable2> ScottyAtHome: unless you've taken a picture of the screen while it was stuck in initramfs, i don't have an answer for you. i've only ever had that 1 problem with btrfs and resolved it within 5 minutes.
[15:53] <disposable2> ScottyAtHome: Boot Environments are FS snapshots you can boot into if your system gets broken after an update. this has saved my neck countless times. on solaris, BEs were presented in grub as options you could boot into.
[15:54] <disposable2> ScottyAtHome: when it's properly integrated, a new boot environment is automatically created after (or before) something like 'apt upgrade'.
[15:56] <JanC> disposable2: do you mean snapshot-based?
[15:57] <disposable2> JanC: yes, but with good integration with grub and package manager.
[15:57] <JanC> ('apt-btrfs-snapshot' already exists)
[15:59] <ScottyAtHome> disposable2: is your / on raid?
[15:59] <disposable2> JanC: does it automatically create a new entry in grub that you have to confirm as valid (after a boot), or will it automatically roll back to the last confirmed-as-working one?
[16:00] <disposable2> JanC: it's the overall system integration that makes BEs useful.
[16:01] <JanC> I assume there are distros which implement this, but I doubt it's well-tested enough to be used by default in a major distro
[16:02] <JanC> (IMO btrfs isn't ready to be used by default either)
[16:03] <ScottyAtHome> JanC: i don't know what else is ready, though, with all the features BTRFS brings. I understand ZFS might be closer, but the licensing is a bit dodgy
[16:04] <disposable2> JanC: i had to stop using btrfs for anything other than / when i discovered its quickly degrading performance with many snapshots/clones.
[16:06] <JanC> that's one issue, but I was thinking about getting all edge case data loss bugs fixed  :)
[16:06] <JanC> bcachefs might also become useful at some point
[16:08] <JanC> and who knows, maybe HammerFS one day...
[16:08] <JanC> but OpenZFS is probably the only core filesystem code base that is really mature right now...
[16:08] <ScottyAtHome> JanC: Thanks for the information.
[16:08] <JanC> (outside the legacy ones like XFS & ext4, of course)
[16:09] <JanC> ScottyAtHome: all just my opinion, of course  :)
[16:09] <ScottyAtHome> JanC: but sounds like experienced opinion.
[16:10] <JanC> not really that experienced, but based on experiences from others I read
[16:13] <ScottyAtHome> JanC: would you use ZFS for root?
[16:13] <JanC> I have no experience with that
[16:14] <JanC> also, would depend for what (probably not for an important server)
[16:14] <ScottyAtHome> what would you use for an important server?
[16:16] <JanC> it would really depend, and you probably want to ask someone who's actually running lots of important servers  :)
[16:17] <nacc> on some level, the fs is a little less relevant for an 'important server', the hardware and backup story is probably the higher priority
[16:18] <nacc> now, some fs give you the backup story
[16:18] <nacc> but, honestly, i'd expect most 'important servers' to run the "legacy" (funny word that) fs that JanC mentioned
[16:18] <nacc> as they have been around and are "known stable"
[16:18] <nacc> ZFS is too new to really be on those long-running machines, IMO
[16:20] <JanC> nacc: by "legacy" I meant "traditional" filesystems, without integrated snapshots, raid, etc.
[16:20] <nacc> JanC: i know :)
[16:21] <nacc> JanC: just not heard of ext4 called that
[16:21] <JanC> and ZFS isn't really new
[16:21] <JanC> it's just new on linux  :)
[16:21] <nacc> right, sorry, i meant in the context of this channel
[16:21] <JanC> but the core code base is the same (it's mostly not a re-implementation)
[16:22] <nacc> JanC: yeah, that's my understanding too
[16:23] <JanC> so most of the code base is well-tested and has seen quite a bit of real-world use
[19:38] <hehehe_off> :)
[19:38] <hehehe_off> why such quiet channel
[19:38] <hehehe_off> :D
[20:53] <cliluw> What does disabling a user account do? Does it just prevent logins to that account?
[21:31] <lordcirth_work> cliluw, it prevents password logins - ssh keys can still get in
[21:32] <lordcirth_work> All --lock does is prepend '!!' to the password hash so it can't be matched.
[21:32] <cliluw> lordcirth_work: I think you're talking about disabled password. I'm thinking about disabled /account/.
[21:32] <lordcirth_work> cliluw, well then you'll need to be more specific about what you mean by disabling an account.  What command are you running?
[21:34] <cliluw> lordcirth_work: usermod --expiredate 1
[21:38] <lordcirth_work> cliluw, that ought to keep them out, but I haven't needed to use it myself. You can always delete the account, though you'd need to be careful with UIDs.
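[Editor's note] The distinction lordcirth_work draws — account expiry blocks everything, a password lock does not — can be verified directly. A sketch (user 'alice' is a placeholder; the commands need root):

```shell
usermod --expiredate 1 alice    # expire the account: PAM's account check now
                                # rejects ALL logins, ssh keys included
chage -l alice                  # verify: "Account expires" shows Jan 02, 1970
passwd -l alice                 # contrast: invalidates only the password hash,
                                # so key-based ssh logins still succeed
usermod --expiredate '' alice   # clear the expiry to re-enable the account
```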