[00:52] Y
[00:52] oop sorry
[00:53] answering Y to an apt-get prompt in quassel *sigh
=== funkyHat_ is now known as funkyHat
=== alkisg_away is now known as alkisg
[08:13] As I understand it, the installer should *not* offer a device selection for the boot loader when booted and installing in UEFI mode – however, the 16.04 one I'm looking at does.
[08:14] And once it's installed, it doesn't boot. Is there some place I can kick?
[08:16] Not offer a device selection? Why so? What if someone has 2 disks?
[08:16] (with 2 efi partitions etc etc...)
[08:16] Dunno – that's what https://help.ubuntu.com/community/UEFI says.
[08:17] "Note that in a UEFI-mode installation, Ubuntu will not ask you where to install the boot loader."
[08:19] No idea, but maybe they mean if only one efi partition is found in the whole system...
[08:19] You did test that you booted in uefi mode, right?
[08:19] from the live cd: [ -d /sys/firmware/efi ] && echo "Installed in UEFI mode" || echo "Installed in Legacy mode"
[08:21] Yup. My mainboard's boot menu also offers "USB:" and "UEFI:" boot modes for the same stick (I picked UEFI).
[08:21] Having two EFI partitions seems to blow the installer's mind, FWIW.
[08:22] Server installer anyway - I haven't tried it with the desktop installer.
[08:25] Letting hardware vendors graphically style their config interfaces was an awful idea, BTW. I long for nice, information-dense BIOS interfaces instead of this graphical "hardware browser" clusterfuck on my nice new machine.
[08:37] Good morning.
[09:51] Hm, sort of solved my "it doesn't boot" problem: I created separate partitions for UEFI and /boot, and now it works.
[09:51] Is that how it's supposed to be?
[09:52] chewey: the uefi and /boot partitions are different ones
[09:53] uefi is vfat, /boot is ext2, in most cases.
[09:58] Ah, right.
[10:11] sgbirch: check the schedule in the topic, mate
[10:16] lotuspsychje: final beta is today according to the schedule
[10:17] sgbirch: correct
[10:17] sgbirch: did you install it yet?
[10:17] lotuspsychje: yes .. it looks fantastic
[10:18] lotuspsychje: Trusty has been brilliant, by far the best release. It looks like this is even better.
[10:19] yep :p
[11:21] howdy folks
[12:04] anyone know what time the final beta is expected to be available today?
[12:09] you want an ISO? the packages are already published
[12:18] !info linux-image-generic xenial
[12:18] linux-image-generic (source: linux-meta): Generic Linux kernel image. In component main, is optional. Version 4.4.0.15.16 (xenial), package size 2 kB, installed size 11 kB
[13:18] kallo82: today is the final beta; the final release is on 21 April
[13:19] lotuspsychje: cool
[14:38] Hi guys. Am I understanding correctly that the first beta image of Ubuntu (not Lubuntu and the other opt-ins) will be available today?
[14:41] Amunhateb: final beta
[14:57] hey, I'm using i3wm, and when I open unity-control-center I only get the left menu options, not the rest
[14:57] whereas it works on the default wm
[14:57] any recommendation on how to report the problem?
[16:09] !schedule
[16:09] A schedule of Xenial Xerus (16.04) release milestones can be found here: https://wiki.ubuntu.com/ReleaseSchedule
=== ochosi_ is now known as ochosi
=== khaldoon is now known as kallo82
[18:15] Hello all. I'm running into another problem while testing mariadb on 16.04. I have two VMs set up, and I'm trying to set one as a master and the other as a slave. I'm getting: ERROR 1045 (28000): Access denied for user.
[18:15] I've granted replication permissions: GRANT REPLICATION SLAVE ON *.* to 'replicate'@'10.2.150.118' identified by 'slavepassword'; Interestingly, I cannot connect to the master from the slave via telnet on port 3306, but there is no firewall between them.
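The "Access denied" and unreachable-port-3306 question above never gets an answer in the log. One plausible cause (an assumption on my part, not something confirmed in the channel) is that the master only listens on loopback: Ubuntu's stock MariaDB config sets bind-address = 127.0.0.1, which would explain why even telnet to 3306 fails with no firewall in the way. A minimal sketch of checking and fixing that, reusing the account, password and slave IP from the question; the config path and service name are the usual ones for MariaDB on 16.04 and may differ, and the master IP, binlog file and position in the CHANGE MASTER TO line are placeholders:

    # on the master: what address is mysqld listening on?
    sudo ss -lntp | grep 3306
    # if it only shows 127.0.0.1:3306, relax the bind address and restart
    sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/mariadb.conf.d/50-server.cnf
    sudo systemctl restart mysql
    # re-apply the replication grant from the question
    mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'replicate'@'10.2.150.118' IDENTIFIED BY 'slavepassword'; FLUSH PRIVILEGES;"

    # on the slave: point it at the master (use the real master IP and the
    # file/position reported by SHOW MASTER STATUS on the master)
    mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='10.2.150.117', MASTER_USER='replicate', MASTER_PASSWORD='slavepassword', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4; START SLAVE; SHOW SLAVE STATUS\G"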
[18:49] so what's new in the latest beta?
[18:50] Is there a changelog?
[20:00] Guys, when is the beta releasing?
[20:01] when it's announced on the mailing list
[20:02] arunpyasi: There's a release schedule in the /topic
[20:02] Ah, you mean "what time today".
[20:03] ChibaPet: Yes, today's date is on the schedule for the beta 2 release.
[20:04] Ah, and I see that you already pointed him to the schedule.
[20:04] Honestly, running the stuff, I expect to move into "release" status blissfully unaware.
=== salamanderrake_ is now known as salamanderrake
[21:45] Weird one I've noticed with 16.04 ... bash_history doesn't remember a "sudo su -" command, but 14.04 *does* remember it properly
[21:55] hi, I'm trying to blacklist nouveau, but it didn't work.
[21:56] this module is still in the initramfs
[21:56] irgendwer4711: so what went wrong on installing the driver?
[21:56] did you look at the output? what kernel is it? are the headers installed?
[21:56] where is the nvidia driver from?
[21:56] I don't know. 3 drivers were handling this card: the FB, nouveau and nvidia
[21:57] first try was the xubuntu driver app, then the nvidia HP
[21:57] if the nvidia driver is installed and not working correctly, then it will fall back to nouveau so that you have a booting system at all.
[21:57] so blacklisting is just wrong
[21:58] ok, so you installed the beta driver from the nvidia homepage?
[21:58] no, a normal driver
[21:59] so what is going wrong on your system? what output was there on installing the driver? what does the xorg log say?
[21:59] this is sad, this worked before. but I had to reinstall, because of GPT, and windows 7 didn't like UEFI.
[21:59] I'll try another run with the nvidia driver now.
[21:59] since you use the nvidia website driver, you might want to ask their support
[22:01] the ubuntu one didn't work either.
[22:02] logs or errors or it didn't happen ;p
[22:03] but using the nvidia website driver there is nothing we can support, since we can't do anything there. hence use the ubuntu one
[22:04] is there an ubuntu driver for an nV gtx 980?
[22:04] ah wait
[22:04] now it's loading the right driver module.
[22:34] Weird one I've noticed with 16.04 ... bash_history doesn't remember a "sudo su -" command, but 14.04 *does* remember it properly. There's no HISTIGNORE set anywhere I can find.
[22:34] Nukien: don't use that
[22:34] use sudo -i
[22:35] ancient muscle memory - like doing a "wr -t" on a cisco box ...
[22:35] wr t, that is
[23:07] Can I turn off suspend on laptop lid down?
[23:08] It's new on my machine, never had it before. I'm not running Unity.
[23:09] Rovanion: what ubuntu release?
[23:09] I'm here, so 16.04 prerelease.
[23:09] Rovanion: For systemd, edit /etc/systemd/logind.conf and set "HandleLidSwitch=ignore"
[23:10] Rovanion: for 16.04 the support channel is #ubuntu+1
[23:10] hahah, and we're on it *slaps self*
[23:10] :D
[23:10] My eyes need replacing for sure :D
[23:10] Hate when that happens
[23:11] information overload on the screen I think
[23:11] Is it possible to edit https://help.ubuntu.com/16.04/ubuntu-help/power-closelid.html ?
[23:13] Rovanion: looks like it needs a bug report to the Docs Team
[23:14] Is it not a wiki? huh.
[23:16] It is, but only editable by people specifically on the Official Documentation Team, not regular Ubuntu/Launchpad/Community Wiki members
[23:16] Well that's cumbersome.
[23:17] Anyways. Thanks a ton for your time TJ-! I'll make sure to report any bugs I come across.
[23:17] well, based on the bit-rot in the wiki it's understandable
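The lid-switch fix suggested at 23:09 needs systemd-logind to re-read its configuration before it takes effect. A minimal sketch, assuming the stock /etc/systemd/logind.conf shipped with 16.04; restarting logind is usually harmless, but a reboot achieves the same thing if you prefer:

    # /etc/systemd/logind.conf -- stop suspending when the laptop lid closes
    [Login]
    HandleLidSwitch=ignore

    # then make logind pick up the change
    sudo systemctl restart systemd-logind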
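On the nouveau/nvidia exchange earlier in the log (21:55–22:04): the channel's actual advice is to use the Ubuntu-packaged driver rather than blacklisting nouveau, but the reason "this module is still in the initramfs" is that a modprobe.d blacklist only reaches early boot after the initramfs is regenerated. A sketch of both steps, for reference only; the file name is arbitrary, and ubuntu-drivers picking a suitable package for a GTX 980 is an assumption worth verifying:

    # /etc/modprobe.d/blacklist-nouveau.conf
    blacklist nouveau
    options nouveau modeset=0

    # the blacklist only affects early boot once the initramfs is rebuilt
    sudo update-initramfs -u

    # the packaged route the channel recommends instead of the nvidia.com installer
    ubuntu-drivers devices
    sudo ubuntu-drivers autoinstall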
[23:26] Anyone know if installing to a ZFS root will be supported on 16.04 Server using the official installer CD? If not, is there an unofficial way to do it anyway that's fairly easy/reliable? I want to boot in UEFI mode and have everything except the ESP as ZFS.
[23:28] allquixotic, I don't know if they'll fully support installing to root-on-zfs
[23:28] allquixotic: you'd need to ensure the zfs tools are included in the initrd by update-initramfs
[23:28] But ... I do have a script that will build a root-on-zfs, optionally on LUKS - interested?
[23:29] Nukien: Very interested, yes. Currently my plan is (assuming Ubiquity doesn't support ZFS root) to load up the Ubuntu *Desktop* ISO in "Try Ubuntu" mode, drop to a root shell, install packages, and debootstrap. Is that your approach as well?
[23:29] Boot an iso on your system (or VM), pull down the script, edit a few vars, let 'er rip
[23:30] Yup, that's it exactly, but all scripted
[23:30] I am so *tired* of repeating commands ... scripts ftw
[23:30] http://pastebin.com/fa83QrBk
[23:30] I've got a system with 2 x SSD and 2 x HDD. The HDDs are several times larger than the SSDs. I want to do mirroring and use the HDDs as the primary storage layer, and then split the SSDs as ZIL and L2ARC. I've done it before with 14.04, but it was one heck of a hack job to get it working.
[23:31] I'm familiar with how to configure the zpool but just don't feel like going through the headache of getting UEFI booting onto a ZFS root working a second time... :P
[23:31] This creates a file listing all the drives it sees - you have to edit that file to list *only* the ones you want for zfs
[23:31] a little surprised to see ZFS ship by default in an LTS without installer support... :/
[23:31] this is going to drive people mad for years lol
[23:31] It has uefi in there, but I don't have a spare uefi box to test against
[23:31] allquixotic: the driver isn't ready for prime time on many systems yet
[23:32] allquixotic: e.g. it's in 16.04 for the lxd/lxc cloud snapshot COW etc support
[23:32] allquixotic: but it doesn't have SSD discard support
[23:32] The script does a simple raidz of the drives - you'll have to change the zpool create lines if you want something different
[23:33] Nukien: Fortunately, I'll be able to test your script with UEFI, since I have a dedicated server that I just provisioned (a new one, on top of my existing production box that is staying on 14.04 until I'm satisfied with the final 16.04 release on the new box) with real-world hardware and a real-world Supermicro UEFI firmware to test with :)
[23:33] Cool - will be good to see if/how it works
[23:33] It's got an E3-1240v5 (Skylake) so the mobo is also very, very new
[23:33] you can also test in qemu with qemu-efi; "... -b OVMF.fd ..."
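Expanding on the 23:33 tip: "-b" is presumably shorthand for qemu's -bios option, and on x86 the OVMF firmware normally comes from the ovmf package (qemu-efi is the arm64 counterpart). A minimal sketch of UEFI-booting an installer ISO in QEMU to exercise a root-on-ZFS script; the ISO and disk image names are placeholders:

    sudo apt-get install qemu-system-x86 ovmf
    qemu-img create -f qcow2 zfs-root-test.img 20G
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -bios /usr/share/ovmf/OVMF.fd \
        -cdrom ubuntu-16.04-server-amd64.iso \
        -drive file=zfs-root-test.img,format=qcow2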
[23:34] All testing here has been in virtualbox using ramdrives as disks - makes for quicker installs/testing :/
[23:34] TJ-, hah, didn't know that - will have to look into qemu more I suppose
[23:35] it's useful since it has the full Tianocore/EDK EFI shell
[23:35] and yeah, for the zpool I'll do: zpool create tank mirror /dev/disk/by-id/firstone-part1 /dev/disk/by-id/secondone-part1; zpool add tank log mirror /dev/disk/by-id/ssd1-part1 /dev/disk/by-id/ssd2-part1; zpool add tank cache /dev/disk/by-id/ssd1-part2 /dev/disk/by-id/ssd2-part2
[23:35] allquixotic, Nice mobo - I have an old X7DBI+ - so it's FB-DDR2, and the fans scream like a horde of banshees and it heats the house all by itself
[23:35] I generally install shellx64.efi first on any EFI system, at /EFI/BOOT/BOOTX64.EFI, in case there's a boot failure via the menu
[23:35] Line 400 in the script - change the ZPOOLEVEL var
[23:36] Nukien: lol, I have started to see datacenter-hosted retail server rentals drop in price as the CPU generations increase, because of Intel's increasing focus on power efficiency and savings, leading to greatly reduced cooling and electricity costs for DCs
[23:36] Heh yup - a buddy runs a DC over in NJ, and he moans about how he's cutting prices
[23:36] in many cases it costs them *more* to sell you a Core 2 or Nehalem era dedi, even though the hardware has been paid off for years, than to eat the cost of a new Skylake or Haswell platform and have you use way less electricity while running it and slowly paying off their upfront investment
[23:38] TJ-: on my 14.04 server, I have a default .nsh file that the shell loads after a 10 second timeout that basically kicks off my UEFI boot on a ZFS root... wrote all that by hand, totally ad hoc, but the server reboots on its own without me going into IPMI, so I'm not complaining
[23:38] it's just way harder than it has to be to configure all that
[23:38] and then udev rule fiddling, and all kinds of stuff
[23:39] allquixotic: yes, it is so much better than BIOS
[23:40] Nukien: have you tested the overhead of LUKS at all? is it quite terrible?
[23:40] It's negligible
[23:40] nice
[23:40] Especially on a cpu with aes
[23:40] does it use the hardware AES engine of Skylake? woohoo
[23:40] You do notice it on an older one, but it's small
[23:40] If you load the aesni-intel module
[23:40] The script tests for it and loads it for you :)
[23:41] not that I don't trust my hosting provider, but I don't trust my hosting provider to shut down if someone hands them an NSL
[23:43] Just updated the script in the pastebin to use a configurable zpool level (raidz, raidz2 etc)
[23:44] have you considered contributing this to the ZOL wiki? their steps there are very much a slap-bang effort, mostly incomplete/incorrect and loaded with TODOs, especially for Xenial
[23:46] https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Native-ZFS-Root-Filesystem
[23:46] Yup, I will. I literally *just* got it working cleanly an hour ago for xenial - it was working nicely for trusty
[23:46] Needs a lot more commenting, I would imagine
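The "overhead of LUKS is negligible ... if you load the aesni-intel module" exchange at 23:40 is easy to verify directly. A small sketch using cryptsetup's built-in benchmark; numbers will obviously vary by CPU, and the aes-xts lines of the output are the ones the dm-crypt/LUKS defaults care about:

    # does the CPU advertise AES-NI, and is the accelerated module loaded?
    grep -m1 -wo aes /proc/cpuinfo
    lsmod | grep aesni_intel || sudo modprobe aesni_intel
    # in-kernel cipher throughput as dm-crypt would see it
    cryptsetup benchmark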
[23:47] btw, ashift=12 does not improve performance at all on 512n drives, and may actually harm it
[23:47] it only helps for 512e and 4kn
[23:48] *not* using ashift=12 on 512e drives is indeed pretty punishing to performance, but I'm not sure about the consequences of using ashift=12 on 512n drives
[23:49] my desktop at home has 512e drives (4 TB HGST disks from 2014), but my server actually has 512n WD Re datacenter drives
[23:51] I thought pretty much all drives were 4k these days
[23:52] nah - if the capacity is 4 TB or more, it'll be either 512e or 4kn (by necessity, because you'll run out of addressable sectors when dividing 4 TB into 512-byte sectors), but any storage smaller than 4 TB may well just be 512n
[23:52] and that includes some current-gen enterprise-grade HDDs
[23:52] Western Digital has some product datasheets listing them out
[23:53] within one product series I was reading up on, they had 512n, 512e and 4kn - it spanned the gamut and primarily differed by capacity
[23:58] ashift=12 lets the firmware optimize the 512e mode by avoiding the need to do repeated 4k read/update/write cycles, and for 4kn drives it basically has the same effect, except that with ashift=9 the OS would have to be doing the read/update/write cycles instead of the disk controller
[23:58] however, if the native sectors of the drive and the firmware are both 512 bytes wide, having the filesystem treat sectors as if they're 4k means that you are using, worst case, 8 times more write bandwidth than needed if all of the data you're writing is within a contiguous 512-byte sector
[23:58] anyone know the directory location of the small splash screen that appears just after you type in the login password in the latest kubuntu?
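To tie the ashift thread back to the pool layout sketched at 23:35: a quick way to see what a drive actually reports, and the same mirror + log + cache layout with the ashift made explicit. The device names are the placeholders from the conversation, sda stands in for whichever disk you are checking, and whether ashift=12 is the right call for genuinely 512n drives is exactly the open question above (ashift=9 matches their native sector size); -o ashift on zpool add is honoured by recent zfsonlinux releases but is worth confirming against zpool(8) on 16.04:

    # what the drive reports: 512/512 = 512n, 512/4096 = 512e, 4096/4096 = 4kn
    cat /sys/block/sda/queue/logical_block_size /sys/block/sda/queue/physical_block_size

    # the layout from 23:35, forcing 4K alignment on each vdev
    zpool create -o ashift=12 tank mirror /dev/disk/by-id/firstone-part1 /dev/disk/by-id/secondone-part1
    zpool add -o ashift=12 tank log mirror /dev/disk/by-id/ssd1-part1 /dev/disk/by-id/ssd2-part1
    zpool add -o ashift=12 tank cache /dev/disk/by-id/ssd1-part2 /dev/disk/by-id/ssd2-part2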