[00:50] Hey all. Doing an Ubuntu server install on a used HP Z600 with 3 500GB drives. It shows one as only having 115 MB when I get to the RAID config
[00:51] booting into a liveUSB, that particular drive won't let me delete the partition, or deactivate it with gparted
[00:51] What must I do to get this drive emptied?
[00:51] It shows sdb as only 115MB available
[00:51] sda and sdc are just fine
[00:54] are you sure that's not an on-board usb thing for lights-out or similar?
[00:54] does dmesg show that it's the right make/model/etc?
[00:58] CuChulaind, figured it out, it was an old LVM, removed the LVM, now all is good
[00:58] yay
[00:59] sarnold, This is my 1st attempt at a RAID, I have 3 500 GB drives. I believe I would like to run RAID 5
[01:00] This machine is only for me to play on, run some VMs; would RAID 5 be the way to go?
[01:00] CuChulaind: have you heard about zfs? :)
[01:00] no
[01:01] CuChulaind: zfs fixes the raid5 write hole, has transparent compression, transparent checksumming, snapshots, and is helping return unicorns to their native lands :)
[01:01] hahah
[01:01] CuChulaind: I'm a big fan of zfs; give this series of blog posts a quick skim to see if you're interested https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/
[01:02] This is my play-around machine, I'm up for learning and trying anything
[01:02] excellent :D
[01:02] it's still a fair amount of work to make zfs be a root filesystem; depending upon what you want the machine to do and how many more drives you have, this may or may not work great
[01:03] (in case you're curious it's at https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS -- but I haven't used zfs as root on my own system yet, because it looks just that bit too annoying still.)
[01:04] This will be my first RAID; should I start with traditional first?
This machine is quite old, got it cheap
[01:05] and no separate RAID controller FWIW
[01:05] I much prefer the zfs user interface over the mdadm interface
[01:05] the separation of zpool commands to work on disks and zfs commands to work on datasets makes sense to me
[01:06] gotcha
[01:06] many people have poor opinions of raid controllers; most of the zfs crowd would rather get a much simpler and cheaper HBA instead of a raid card
[01:06] if the raid card dies you're in trouble; I've heard people say they were never able to put their drives back together again after the card died
[01:07] IC
[01:08] From the quick read, it looks like I install server then install and set up zfs?
[01:08] yeah
[01:08] so if you don't have a drive that's in a good position to be the OS drive, then maybe mdadm is the easier/better bet. but I really like zfs and wanted to make sure you knew about it as an option :)
[01:09] what do you mean by having a drive that's in a good position?
[01:09] either a fourth drive or a usb stick to boot to, or pxe boot, or something similar.
[01:10] OK. I have server on a liveUSB, however it looks like I can't run it live, I just have the install and check options
[01:10] CuChulaind, Are you saying to always boot to the liveUSB, and my other 3 zfs drives are all storage
[01:11] CuChulaind: yeah that's an option -- the SmartOS operating system is designed around this very idea ;)
[01:11] ubuntu isn't, so it probably wouldn't be as pleasant.
[01:14] Other than setting it up, it looks easy to set up :-). The tutes just say to list the devices /dev/sda /dev/sdb /dev/sdc, however they don't show a partition number like /dev/sda3 for when partitions 1 and 2 are the OS
[01:14] if not using a liveUSB etc
[01:15] it's usual in zfs land to give the entire drives to zfs. if you're going to stick OS on one partition and data on another, then you'd have to adapt a bit
[01:18] so use 1 drive for OS, and use the other 2 for zfs?
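(Editor's aside: the zpool-for-disks / zfs-for-datasets split described above looks like this in practice. A minimal sketch, not from the conversation: the pool name `tank`, the dataset name `vms`, and the device paths are all illustrative assumptions, and `zpool create` destroys whatever is on those disks.)

```shell
# Build a raidz (raid5-like) pool from three whole disks.
# DESTRUCTIVE; /dev/sdb..sdd are assumed spare drives, adjust for your machine.
sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

# zpool commands operate on the disks/pool...
sudo zpool status tank

# ...while zfs commands operate on datasets inside the pool
sudo zfs create tank/vms
sudo zfs set compression=lz4 tank/vms

# Snapshots and rollback, the features pitched above
sudo zfs snapshot tank/vms@before-experiment
sudo zfs rollback tank/vms@before-experiment
```

This is provisioning for real hardware, so treat it as a sketch to adapt rather than something to paste in.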
[01:18] yeah you could do that
[01:18] and the OS of course could be 200GB or so
[01:18] or less, yeah
[01:19] for my big machine the OS is on 120 gig ssds (mdadm mirror; I hope they never break because I have no idea how to use it :) and the data is on nine spinning metal disks
[01:21] wow
[01:22] it's a ton of fun to see all those lights blinking when it's mirroring the ubuntu archives or running a scrub
[01:27] sarnold, with traditional RAID does it work the same way on install? you partition a drive for /, /boot and /home, then tell it to RAID?
[01:27] the other drives?
[01:28] CuChulaind: you could also configure mdadm to set up a raid5 of the drives and -then- create the filesystems on the raid device
[01:28] does one not typically include the OS in RAID?
[01:28] normally you would, yes
[01:29] ok
[01:29] ubuntu may some day support installing with zfs as root, but it takes work to do it..
[01:29] I think it'll be really nice to have snapshots integrated with apt-get at that point
[01:29] I can dream :)
[01:36] time to run, have fun CuChulaind
[01:37] I agree. I believe I will try RAID 5 for a bit, lots of documentation for ubuntu, and then read up on zfs
[01:37] Always looking to learn and stay current
[01:49] Or maybe not, the ubuntu server RAID instructions are not working; it says to manually partition the first drive, but when I try that and say yes, it doesn't allow me to set the size
[01:49] the instructions say to set up the swap on all 3, then another partition as the rest of the drive on all 3 and make it bootable; after that go into the RAID config
[01:55] That's weird. On a brand new install of 16.04, sync-accounts results in "Can't use 'defined(@array)' (Maybe you should just omit the defined()?) at /usr/bin/sync-accounts line 67."
=== lfrlucas_ is now known as lfrlucas
=== JanC_ is now known as JanC
[12:32] I copied my private key from one computer to use on another to access my ubuntu server via ssh.
However on the other computer (the one that did not generate the pair originally) I get "Bad passphrase, try again" despite the password definitely being correct (I have repeated the process several times). I created the duplicate private key by creating a new file and pasting the contents of the key to be copied into it and chmod
[12:32] to 600 on the secondary computer. Any ideas where I'm blundering?
[12:35] ahh .. it was the public key, sorry for the ignorance >,< That's a result of staying up all night I swear :D
[12:44] Is it normal to be able to connect to SSH via any of the domain names I've set up with Apache/linodeManager? Is there a way to stop this?
[12:52] Is it because of the DNS records I set up at my domain name provider? www, @, *
=== Malediction_ is now known as Malediction
[13:22] Munt: ssh has nothing to do with apache
[13:23] that makes sense, I was just confused as to why it seemingly randomly selected one of the domains pointed at the server to connect to when I specified the ssh connection by ip address
[13:24] randomly selected one of the domains?
[13:24] what selected a domain?
[13:24] i connected like ssh user@122.123.123.123 and my firewall asked me to allow a connection to oneofmydomains.com:22
[13:25] that will be because of your reverse dns map
[13:26] on my local machine?
[13:26] on whatever is the dns resolver for your host
[13:28] I used the "DNS Manager" on my linode account to set up a few domains. I unfortunately don't understand what a dns resolver is at this moment
[13:28] thats fine, I wouldn't worry about it then
[13:29] ikonia: Thanks for your time, I'll worry about something else :p
[13:29] a wise move, I assure you; you've not got a problem though, so don't get hung up on it
[13:31] I know enough to know how badly wrong things can go, but not enough to prevent that :D
[13:37] Do you guys know of a noob-friendly incremental backup system for a headless ubuntu server install?
currently this is my backup protocol https://hastebin.com/ipikaxuxaq.pl
[13:38] (tar archive)
[13:41] I saw a youtube video where people had ButterFS and some opensource software that allowed complete tracking of all changes to the system and rollbacks on all modifications. I can't seem to find it again though (and I'm not using butterfs)
[13:52] Munt: what sort of thing are you looking to back up?
[13:53] I'm 'messing' with my ubuntu install (learning) so I want to be able to roll the system back to a pre-messed-with state
[13:54] what sort of thing?
[13:57] ikonia: everything
[13:58] I'd like to be able to take the backup and use it to image the server with if necessary
[13:58] image is a strong word.
[13:59] it's not really realistic to work that way
[13:59] (hence why I'm asking your goal)
[14:00] sounds like you want lvm snapshots
[14:00] or zfs if you went through the trouble of setting up a zpool
[14:00] I don't think it's something someone who is learning needs/wants
[14:00] a simple backup of things you change before you do them would do
[14:01] and a backup of core config files, so if you need to re-install you can just drop them back in place
[14:02] so, it's not easily achievable for someone of my skill level to have incremental backups of an ubuntu install?
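(Editor's aside: the LVM snapshots just mentioned really are only a couple of commands. A sketch assuming a root logical volume at /dev/ubuntu-vg/root, the Ubuntu installer's default naming, with free space left in the volume group; these names are assumptions, not from the conversation.)

```shell
# Take a 5G copy-on-write snapshot of the root LV (requires free space
# in the volume group; the snapshot fills up as the origin diverges)
sudo lvcreate --size 5G --snapshot --name root-snap /dev/ubuntu-vg/root

# ...experiment, break things...

# Roll back by merging the snapshot into its origin. For a mounted root
# filesystem the merge is deferred until the next activation, so reboot.
sudo lvconvert --merge /dev/ubuntu-vg/root-snap
```

Requires root and a real LVM layout, so it is a sketch to adapt, not a paste-ready script.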
[14:02] thats not what I said
[14:02] apologies,
[14:02] lvm snapshots are really simple, zfs snapshots even more so; just setting up zfs on ubuntu isn't
[14:03] if you want to actually have a hard rollback, you can use tools like clonezilla to make an image to restore before you make major changes
[14:03] and the installer will set up lvm for you, so no setup needed
[14:03] just make snapshots and restore snapshots
[14:03] as he's not running zfs and he's already installed, probably not using lvm... means he'll need to re-install
[14:03] Currently I'm running off an image supplied by linode, Ubuntu 16
[14:03] oh, then you're super limited to whatever linode supports
[14:04] Munt: make a backup directly, and just backup core config files (not the whole root file system as you are doing now)
[14:04] config files are just text so compressed they will take up almost zero space
[14:04] so you can take a lot of regular backups without thinking
[14:05] ahh, nice. is there a way to "patch" a fresh linode re-image with the core config files?
[14:05] what?
[14:07] which files am i backing up that should not be?
[14:07] so if you change a file... back it up before you change it
[14:07] it's a simple model
[14:09] I want to protect myself from myself. I have wiped out several drives before >,<
[14:09] right, so backup the config files (and use a remote location if you want to be super sure) before you make changes
[14:09] so the var and etc folders?
[14:10] no
[14:10] the config FILES you change
[14:10] say I run a stupid rm command, i want to be able to recover from that. Going by what you say, maybe a weekly clonezilla coupled with backups of each file that is changed?
[14:11] you won't be able to use clonezilla
[14:11] as you're running in a linode vps
[14:11] ok
[14:11] you won't really be able to use your current backup technique either really
[14:12] how come?
(and thanks for indulging my curiosity <3)
[14:12] you're just taking a whole backup of the whole root file system
[14:13] if you do a "dumb" rm command, either the backup image won't be there to recover from, or you'll have removed the tools/libraries needed to actually interact with the backup
[14:13] I was thinking I could set up a fresh install and extract the tar on top of that
[14:13] that seems far more effort than it needs to be
[14:13] if i have the backup on a local machine that is
[14:13] ikonia: I'd love an easier way :p
[14:14] why don't you just backup the config files you actually want/need
[14:14] and then either a.) roll back any changes you make, or b.) re-install and replace the config files with the backups
[14:14] a few text files compressed you could do every few hours with zero problem rather than the whole root file system
[14:15] So I would have to create a list of the files that I change in order for them to be cron backed up hourly?
[14:15] thats one way,
[14:15] how were you thinking?
[14:15] whatever works best for you really
[14:16] at home I just image my drive and re-image it when i break it
[14:16] right, you're not at home
[14:16] so you need to change your approach
[14:16] I'm fishing for an approach at the moment
[14:17] I've just told you a simple one
[14:17] there are many more
[14:17] Your suggestion is to manually backup each file that is changed?
[14:17] automate your key files and/or backup the files you change before you change them
[14:18] I was looking at tools such as rsnapshot and backintime
[14:18] but they seem just out of reach of me at this moment
[14:18] you can use them, I think it's more likely that you'll end up not being able to use them
[14:19] (in a real world situation)
[14:19] say someone hacks my server and i need to restore it to a known good configuration. What would I do?
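(Editor's aside: before reaching for rsnapshot or backintime, note that GNU tar can do simple incremental backups on its own via `--listed-incremental`, which fits the "back up the config files you change" advice above. A runnable sketch on throwaway temp directories; in real use the source would be something like /etc and the destination a remote mount.)

```shell
set -e
# Stand-in directories so the sketch is safe to run anywhere
SRC=$(mktemp -d)
DEST=$(mktemp -d)

echo "PermitRootLogin no" > "$SRC/sshd_config"

# Level-0 (full) backup; tar records file state in the .snar snapshot file
tar czf "$DEST/backup-0.tar.gz" -g "$DEST/state.snar" -C "$SRC" .

# Change something, then a second run captures only the delta (level 1)
echo "PasswordAuthentication no" >> "$SRC/sshd_config"
tar czf "$DEST/backup-1.tar.gz" -g "$DEST/state.snar" -C "$SRC" .

# $DEST now holds both archives plus state.snar; restore by extracting
# backup-0 then backup-1 in order
```

Dropping a line like this into cron gives hourly incrementals of a handful of config files at near-zero cost.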
[14:19] you don't
[14:20] you destroy the server
[14:21] after I destroy the server and I want to reinstate all my config and packages, what do i do? Is there a package list and config restoration technique?
[14:22] you don't
[14:22] you rebuild the server from the ground up, you don't use the backups
[14:22] why?
[14:22] because how do you know you're not putting back the exploit that allowed people in
[14:23] how do you know the backups can be trusted
[14:23] My main objective is to avoid having to rebuild the server from scratch
[14:23] (i've done it 4 times in the past 2 days)
[14:24] why?
[14:24] how have you got into that situation
[14:24] learning and causing problems
[14:24] how specifically though
[14:24] most situations you should be able to recover from without a rebuild
[14:25] Often like: I have a problem that I don't fully understand. -- Follow 30 guides on the internet that don't work -- then I don't want to have a system with the 50 changes i made in frustration
[14:25] ok, so stop doing that then
[14:25] thats a user problem
[14:25] a rollback would make my learning much easier
[14:26] no, learning how to fix the situation and being aware of what you're doing before you do it will make it an easier and better learning process
[14:26] indeed.
[14:26] blindly following guides you don't understand is the worst thing you can do
[14:26] more so when so many people write bad/ill-informed/works-for-me based guides
[14:26] "tutorial": A very common problem is that some people prefer to follow a step-by-step tutorial that shows them how to set up their server w/out reading the documentation or understanding what they are doing. If something goes wrong, they have no clue whatsoever about where to find hints, and they sometimes decide to start from scratch using a different tutorial. This is not The Proper Way.
[14:27] There's only so many hours in the day. I am a novice. I'm learning many things. A rollback solves my problem.
[14:27] learning how to do it right solves your problem
[14:27] Mistakes happen
[14:28] People learn in different ways. Failure is a popular learning tool
[14:28] you just stated a lack of hours per day though
[14:28] mistakes are the slowest learning method available
[14:28] but you're not learning from failure
[14:28] you're looking for a way to cheat failure
[14:28] What's so bad about that?
[14:29] carry on then
[14:29] I've no interest in supporting such a bad approach
[14:29] ok
[14:30] I find it hard to believe that i'm the only person wanting a rollback of their ubuntu system
[14:30] you're not
[14:30] I roll back my development systems quite often, more so for differential comparison
[14:31] Seems weird that you should focus on criticising my learning techniques. I have been putting many hours into understanding this stuff. The rabbit hole is very deep however. i often make mistakes; simply not making mistakes is unrealistic.
[14:32] it is not weird as you are creating the problems
[14:32] with a minor adjustment and proper approach you'd minimise those problems, and when you did have a problem you'd learn more fixing it
[14:32] it's sensible to focus on the real problem rather than look at a shortcut for a fix
[14:33] and the problem is your approach
[14:34] ikonia: that's easier to say than to do
[14:34] not really
[14:35] it's up to you
[14:35] I'm all ears.
[14:35] What's this approach you speak of?
[14:35] I've already explained your problem
[14:35] so it seems attention to detail would also help
[14:36] To me it seems equivalent to someone saying "You don't need to use version control, just be better and more careful"
[14:36] it's nothing like that
[14:37] How does what you're telling me differ from that statement?
[14:39] no-one said you don't need backups
[14:39] and they are totally different scenarios
[14:41] Seems that me admitting to resorting to tutorials when I reach the limits of my knowledge has invalidated my need for a restorable system backup?
[14:41] nope
[14:43] Ok, then you are so aggrieved by my attempts at learning ubuntu server that you refuse to help me further? Whatever the case, thanks for the time so far.
[14:44] I'm not aggrieved. I don't believe the way you are trying to learn is a good way, and I think you're trying to shortcut; as a result that's not something I want to support
[14:45] I think you have a limited mental model of how i'm trying to learn
[14:45] nope
[14:45] I would like you to assume it's more nuanced than the few sentences i've uttered so far.
[14:45] I think I should probably just let you get on with it
[14:46] no worries. I'm just a little shook up by your tone. But that's neither here nor there. have a nice day.
[14:46] my tone? I've been nothing but polite
[14:46] and shook up.... really?
[14:46] * Munt leaves it be
[14:47] if someone backing away from supporting your efforts because they don't agree with your approach "shakes you up" you'll have a hard time
[14:48] I also have been polite, and respect that you can choose to help me or not for any reason that you see fit.
[14:49] right, i've not suggested you've not been polite
[15:38] Munt: partially reading the backscroll, and keeping in mind ikonia's advice not to take shortcuts, I suggest you get acquainted with Ansible or something similar and build yourself a config management setup so you can rebuild from scratch with a single command. Yeah, even if it's a single server.
[16:00] fallentree: Thanks for the suggestion, I'm reading about it now ... is this a paid solution? I agree with and appreciate the things ikonia suggested, other than his contempt for my 'approach'. Right now I have tarballs and a log (notepad) of all file changes and commands so far run on the server, and while that is probably good enough, to save time a complete restoration solution is very handy.
[16:00] Ansible is fully free and open source config management software.
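(Editor's aside: the "rebuild from scratch with a single command" idea looks roughly like this as an Ansible playbook. A minimal sketch; the package names, file paths, and the `site.yml` filename are made-up illustrations, and a real playbook would grow as services are added.)

```shell
# Write a hypothetical minimal playbook; apply it locally with:
#   ansible-playbook -i "localhost," -c local site.yml
cat > site.yml <<'EOF'
---
- hosts: all
  become: yes
  tasks:
    - name: Install base packages
      apt:
        name: [nginx, ufw]
        state: present
        update_cache: yes

    - name: Deploy sshd config kept under version control
      copy:
        src: files/sshd_config
        dest: /etc/ssh/sshd_config
        mode: "0600"
      notify: restart ssh

  handlers:
    - name: restart ssh
      service:
        name: ssh
        state: restarted
EOF
```

The win over raw tarballs is that the playbook is both the documentation of what was changed and the mechanism for replaying it onto a fresh image.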
[16:01] there are commercial solutions based on it, yeah, but those are additional value services.
[16:02] Ahh, I was getting confused by the Tower product. I'll get reading here, it looks like it could be what I'm after.
[16:07] fallentree: I was looking into bacula also
[16:08] never used it. these days rsync over ssh and lately zfs snapshots are my cup of data backup coffee.
[16:09] haha, I'm glad it floats your linux mothership
[16:11] I use a front end for rsync (perhaps I'm selling it short) called Carbon Copy Cloner, which is what I've been searching for an analogue of in the headless ubuntu world
[16:14] why do you need a front end for the server? You just run it, or put it in a script. It has nice options for inclusion/exclusion of paths, so it's scriptable and thus configurable with ansible.
[16:15] Sorry, I meant I used CCC on a Mac desktop computer to backup data and restore volumes.
[16:16] note that I do agree with ikonia, you should back up only data you cannot re-generate; everything else should rely on a clean re-installation procedure. Backing up everything for "easy" restoration is not bad in itself, just insufficient if you have to restore after a security breach.
[16:17] agreed. For now, I'm poking around so much it'd be nice to be able to reset and try again repeatedly. Then I know I haven't forgotten to manually roll anything back.
[16:18] Then having an Ansible playbook sounds perfect for the job. You get familiar with what to install, in what order, and how to configure it, and have it all scripted.
[16:19] also using zfs or btrfs snapshots sounds like another easy way to "reset" after poking the wrong hole :)
[16:20] I currently have no understanding as to why that might be the case with those filesystems. I'll look into that.
[16:20] because no other file system has snapshotting capabilities? :) both are copy-on-write, meaning it's very easy for them to implement snapshots.
those are just another reference in the CoW chain.
[16:23] tl;dr, CoW systems work like this: when a block is copied, only a reference to it is copied, not the block itself. when either the original block or the copy gets modified, it's copied to another physical location at that moment only, i.e. copy on write.
[16:23] there's much more to it, and there are many other features, but this CoW mechanism lends itself very easily to snapshots. without it, snapshots are very difficult to implement.
[16:23] Sounds exactly like what I'm after.
[16:24] I said that hours ago
[16:24] patdk-lap: I don't deny you that :p
[16:25] just keep in mind that btrfs and zfs are both "kitchen-sink" systems; they're filesystems + volume managers + raid, all in one. might take a bit of a paradigm shift when you work with them after using "simple" filesystems like ext4.
[16:26] for starters, one of the paradigm shifts is that they're pooled, so you don't need to partition the drive (other than what's minimally required, eg. bios boot + optional /boot + optional swap + btrfs/zfs pool)
[16:26] Currently I lack the understanding necessary to implement most of these ideas. Now I know where to start looking though. Thanks to all of you folks for the suggestions.
[16:27] in that pool you create zfs datasets or btrfs subvolumes, which are "separate filesystems" analogous to having individual partitions. This "separate filesystems" bit is important when, for example, you rsync with -x
[16:30] What in the extended attributes is important in this scenario?
[16:31] I mean why does having a separate filesystem have importance with the xattrs
[16:32] can he use zfs on linode
[16:32] I thought they were locked to the image file systems
[16:32] and a shared kernel, so no zfs module
[16:33] I'll have to reserve that approach for my home system then.
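(Editor's aside: the dataset/subvolume and snapshot-reset workflow described above, shown on btrfs. A sketch only: the mount point /mnt/pool and the subvolume names are illustrative assumptions, and the commands need an existing btrfs filesystem plus root.)

```shell
# Subvolumes behave like separate filesystems inside one pooled filesystem
sudo btrfs subvolume create /mnt/pool/projects

# Snapshots are instant CoW copies; a read-only (-r) one makes a safe
# known-good reference point
sudo btrfs subvolume snapshot -r /mnt/pool/projects /mnt/pool/projects@clean

# "Reset after poking the wrong hole": drop the broken subvolume and
# recreate it as a writable snapshot of the known-good state
sudo btrfs subvolume delete /mnt/pool/projects
sudo btrfs subvolume snapshot /mnt/pool/projects@clean /mnt/pool/projects
```

The zfs equivalent is even shorter, since `zfs rollback` rewinds a dataset to a snapshot in place.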
[16:36] Munt: I said -x not -X :)
[16:36] ikonia: it's possible to pv boot into your own kernel
[16:36] fallentree: that sounds like a pointless waste of time and effort on a linode host
[16:36] lol fallentree >,<
[16:37] ikonia: when I was using Linode many years ago, I always ran with pv and my own kernel
[16:37] that doesn't change what I said
[16:37] what, that it's pointless to run your own kernel?
[16:38] distro kernel, that is
[16:38] the effort and hassle of keeping that going on a paravm that's outside of your control, for what gain?
[16:39] not sure what hassle that is. iirc it was just a matter of choosing pv from a drop down and it would boot into your image's kernel
[16:39] now, is zfs overkill on a linode host? maybe. but btrfs isn't imho
[16:40] the droplets are supposed to be held in the same configuration as the host offers,
[16:40] so anything that's outside of that seems a pointless risk
[16:40] eh, what risk? and droplets are DO, not Linode :)
[16:40] oops, sorry, don't know why I thought droplets
[16:40] besides, nowadays Linode switched to KVM; I don't know if that still means running the host kernel by default
[16:41] the risk is if you try to change it outside of the offering, they can shut it down
[16:41] Linode ran Xen with the host kernel because when they started building their infrastructure, pv boot was not available, iirc. they added it later as it became available.
[16:41] if it's available from linode as an offering then obviously it's safe
[16:41] ikonia: I really doubt they'd shut you down for running your distro kernel. I never heard of it, and I ran with pv for years
[16:42] fallentree: they shut hosts down if you try to change them outside the static offering
[16:42] what kind of change?
it'd be really stupid of them to shut you down because you boot into the distro kernel
[16:43] many users run CentOS and needed selinux, which wasn't available in their host kernel for a long time, so they supported pv
[16:43] fallentree: if the distro kernel is available as an official offering, then it's not a problem
[16:43] I quit Linode back in 2010, so I admit I don't know if there have been any policy changes in the past 7 years. but back then, pv was normal and supported.
[16:44] I'm also convinced Canonical would've gone after Linode like they did after OVH if they didn't support default Ubuntu installations, by not allowing/supporting the Ubuntu kernel to be run while offering "Ubuntu(tm)" images.
[16:45] But then, back when I was running Linode, the hosts were Ubuntu iirc :)
[16:45] canonical couldn't do anything
[16:45] (nor would they care in my opinion)
[16:45] ikonia: oh so you didn't hear about the OVH issue?
[16:45] no
[16:45] wait, I'll find links
[16:45] I just moved to linode after years of using managed cpanel vps's. Kinda testing the waters now.
[16:46] ikonia: https://news.slashdot.org/story/16/12/04/2235251/canonical-sues-cloud-provider-over-unofficial-ubuntu-images
[16:46] ikonia: tl;dr, OVH offers "Ubuntu(tm)" but installs a custom grsec kernel, so Canonical threatened OVH to stop using the "Ubuntu" trademark in that case.
[16:46] let's have a read
[16:46] afaik it settled with OVH entering the Canonical Certified Public Programme, and dropping the custom kernel
[16:48] And Canonical was right, if you ask me. Changing Ubuntu like that makes it no longer "Ubuntu(tm)" but a derivative. If things go wrong, it paints a bad picture of Ubuntu, while the problem is in the modifications.
[16:49] an interesting case
[16:49] more so as canonical removed the word linux from their distribution and its trademarks
[16:49] said programme involves regular payments, though, IIRC
[16:49] IANAL but I think that'd be a conflict of trademarks if they kept "Linux" in their own trademark.
[16:50] tomreyn: it does, but it certifies there's a standard and you get what's advertised.
[16:50] I was bitten by "waitaminute, this is not the Ubuntu kernel" myself with OVH. Granted, it was easy to just reinstall the official kernel, but again, that just proves the whole problem.
=== Malediction is now known as Foritus
=== Foritus is now known as Foritus_
=== Foritus_ is now known as Foritus
[21:47] Is there anyone from Denmark who could help me a bit?
[22:38] if i want to clear any traces of a MAC address and/or UUIDs for a VM template, do you still need to clear udev rules on Ubuntu 16.04?
[23:48] hey hey
[23:48] who here runs servers on ovh?
[23:48] as vps
[23:48] I wonder if there is a way to clone
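(Editor's aside on the VM-template question: on 16.04 the old MAC-pinning 70-persistent-net.rules file is generally no longer generated, since predictable interface names replaced it, but removing it if present, plus resetting the machine-id, is still the usual template preparation. A sketch; exact paths can vary by image, so verify what your image actually contains.)

```shell
# Remove stale persistent-net rules if the image still carries them
sudo rm -f /etc/udev/rules.d/70-persistent-net.rules

# Empty the machine-id so each clone regenerates a unique one on first boot
sudo truncate -s 0 /etc/machine-id
sudo rm -f /var/lib/dbus/machine-id
sudo ln -s /etc/machine-id /var/lib/dbus/machine-id

# Old MACs can also be hard-coded in the interfaces file; review by hand
cat /etc/network/interfaces
```

DHCP leases and ssh host keys are other common per-clone state worth regenerating, depending on how unique the clones need to be.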