[01:08] sarnold: classy
[01:08] (the more you know!)
[01:08] :D
=== wuseman is now known as Guest56569
=== wuseman is now known as Guest74961
=== wuseman is now known as Guest24684
[03:34] For some reason update-grub on my box is *NOT* using UUID in /boot/grub/grub.cfg after the installer runs, this causes it to not boot since the device changes :( Any hints or tips?
[03:34] (This is 16.04 LTS)
[03:35] phibs, it should be configured in /etc/default/grub, something about UUID
[03:36] yeah disabling UUID is not set, and default is false
[03:36] ok, looking for the setting
[03:37] GRUB_DISABLE_LINUX_UUID=false
[03:41] yeah, it defaults to false ;0
[03:45] some reports that a script is using instead: GRUB_DISABLE_UUID
[03:47] hmm
[03:47] which
[04:37] hmm /dev/disk/by-uuid (the dir itself) does not exist...
[04:37] so the grub-mkconfig is gonna not use UUID since it tests for that...
[04:42] Hi all, I have 2 servers, both are running the same website, the website running in my server is fast in the LAN but very slow in internet BUT, if I try to connect to the server which is also in the same LAN then it works fine why is that ?
[04:51] phibs, did you upgrade from Trusty?
[04:51] no, this is a 16.04 fresh install
[04:58] phibs: Perhaps a longshot, OpenVZ?
[04:58] bare metal lol
[04:58] anyone around please ?
[04:58] phibs, /dev/disk/by-uuid is created at bootup
[04:58] I have 2 servers, both are running the same website, the website running in my server is fast in the LAN but very slow in internet BUT, if I try to connect to the server which is also in the same LAN then it works fine why is that ?
[04:58] I think there is some issue with routing
[04:58] how do I fix it
[04:58] ?
[04:58] ChmEarl: this is in the installer
[05:01] arunpyasi_: there's not much to go on there. what kind of troubleshooting have you done so far with what results?
[05:02] sarnold, I have no idea ..
[05:02] sarnold, tried rebooting..
[05:02] sarnold, does iptables work if ufw is disabled ?
[05:02] and if iptables is flushed ?
[05:04] arunpyasi_: ufw is a front end to iptables; if you want to use ufw then you should use ufw; if you want to use another tool, or work with it by hand, then do that...
[05:05] arunpyasi_: 'flush' in iptables usually means 'remove all rules' -- is that what you wanted?
[05:05] sarnold, yes..
[05:05] sarnold, but still its not fixed.. thinking the iptables or routing issues
[05:05] sarnold, I am worried how i can fix it
[05:06] arunpyasi_: the linux kernel can route and firewall something like five million to ten million packets per second -- what kind of load is your server under?
[05:07] sarnold, its a simple webserver
[05:07] with a static file
[05:08] arunpyasi_: just static content? how many requests per second?
[05:08] sarnold, one
[05:08] sarnold, I am the only one trying to access.
[05:08] okay, so probably not network load then
[05:08] sarnold, no not the load.
[05:09] sarnold, is the system issue
[05:09] what kind of ping times do you get from the machine to the world? what kind of packetloss?
[05:12] sarnold, no packet losses. the thing is, I tested the network with another machine runnign a webserver at a different port.
[05:12] the traffic from that webserver opens fine
[05:13] I mean the website from that webserver opens finee
[05:13] but not the mains linux server :(
[05:15] sarnold, you got the scenario
[05:15] ?
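For reference, a minimal sketch of the grub settings discussed above on a stock 16.04 install; the grep is only a quick way to verify the result, and the fallback behaviour is as described above (grub-mkconfig emits plain device names when /dev/disk/by-uuid is not available, as in the installer environment):

    # /etc/default/grub -- UUIDs are used unless this is explicitly set to "true"
    GRUB_DISABLE_LINUX_UUID=false

    # regenerate /boot/grub/grub.cfg and check what root= ended up as
    sudo update-grub
    grep 'root=' /boot/grub/grub.cfg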
[05:16] arunpyasi_: not really; I don't know if you've got two machines behind a NAT box or if they are directly routed, dunno if you've got a load balancer in front of them or not, don't know what kind of speeds you're expecting or what kind of speeds you're getting..
[05:16] sarnold, its like not speed.
[05:16] its not even opening
[05:16] thats the thing
[05:16] sarnold, its behind the same NAT
[05:17] heck if you're trying to get to these machines by DNS and maybe the name doesn't resolve with your seelected recursors, it could look like slow websites.. but it might be slow DNS instead.
[05:17] no load balancers
[05:17] no
[05:17] its the IP I am trying
[05:17] alright, so probably not dns, or not exclusively dns anyway :)
[05:17] so you have port forwarding set up on your NAT router?
[05:20] sarnold, yes
[05:20] do you forward e.g. 80 to one computer and 81 to the other?
[05:20] sarnold, yes
[05:21] sarnold, that is what I did.
[05:21] sarnold, if you want the IP, I can send you in PM
[05:21] sure
[05:23] sarnold, please check the PM
[05:32] good morning
[06:18] I can't see "nominate for releases" on LP, Do I need specific permission to do that?
[06:24] seyeongkim: I believe you do, yes. I'm not sure exactly what is needed to be able to see that.
[06:25] I see rbasak, Thanks
[06:26] seyeongkim: you can ask in #ubuntu-bugs for any nominations you need.
[06:27] ok rbasak
=== dpawlik2 is now known as danpawlik
[08:39] hi guys do you recommend automatic security updates on / off on your bare server or on the VM 's or on all ?
[08:40] we do our best to try to avoid regressions in packages, but sometimes it happens
[08:41] you'll be safer if you can put the time in to test updates in a testing environment before putting them onto all your other machines, but that's expensive and time-consuming, so many people are content to just turnon automatic updates
[08:42] redvic: well since you have fully functioning (verified) backups of your things, why not?
[08:44] redvic: A good tradeoff might be to enable the automatic security updates, but disable it for certain critical packages. The typical examples being database services.
[08:47] given how many upgrade failures I see in launchpad every single mysql point relesae that sounds like a pretty good idea. :)
=== Valfor is now known as Guest59562
[09:58] sorry was away for a moment,
[09:59] so i leave my base installation on manual and have a vm server where i test updates
[09:59] i am using raid so i could disconnect raid test the updates
[10:11] hateball, my server uses raid1 i am busy istalling base server after which KVM and 4x vm servers wuold you recoomend auto update off on the bare/base and update on vm server?
[10:21] redvic: fwiw I've never had any issues with Ubuntu updates
[10:21] that said I tend to not use the automatic function for legacy reasons
[10:22] since it used to be that apt didnt clean up old kernels, so if you had a default LVM setup, well then /boot is on its own partition which then fills up
[10:22] just annoying.
[11:32] hello, is there a reasonable path to upgrade in place server 12.04 LTS to 14.04 LTS to 16.04 LTS? or does it make more sense to simply reinstall?
[11:37] TBH, I would reinstall
[11:37] but it really depends on apps etc
[11:37] if its a critical system, build up a 16.04 LTS in parallel to it then migrate
[11:37] but thats me
[11:38] Ussat: yeah that's what i'm thinking. I'm pretty sure I have the resources do a parallel.
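A sketch of the tradeoff suggested above (automatic security updates on, but hands-off for critical packages), using unattended-upgrades on 16.04; the blacklist entry is only an example package:

    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades

    # /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
    Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}-security";
    };
    // packages to keep upgrading by hand, e.g. a database server
    Unattended-Upgrade::Package-Blacklist {
            "mysql-server";
    };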
[11:39] fwiw I've done such upgrades
[11:40] but yeah it depends on services used
[11:40] for instance if you use apache, things need .conf extension from 14.04 and onwards, or it breaks
[11:41] yeah. little diffs between versions... heh
[11:43] I was gonna say, I have done upgrades like that, it is possible, I just like a clean install etc when possible.....gets all the "cruft" etc out and I always look at older systems and ask why the hell......
[11:45] I assume this is a VM ? if so, make a snap and try the upgrade and see how works out, worse case, you revert back to the snap and then do the parallel
[11:47] snapshots <3
[11:47] I upgraded something earlier, whose package (third party) just overwrote all nginx configs without asking
[11:48] pretty nice to be able to just revert to snapshot then :p
[11:48] Yea, I live and die by snaps here
[11:48] I manage rhel and ubuntu systems here and snaps have saved me a few times
[12:24] Hi i'm looking for someone who can help me with configuring DNS on a ubuntu-server
[12:25] Did everything i thought was needed, but somewhere along the way it doens't work.
[12:25] I just want to make it possible to work outside the office
[12:27] VPN ?
[12:28] a TON easier than running your own dns server (is that what you mean by configuring dns) ?
[12:28] I thought DNS was the easiest solution
[12:29] Jorrit: you'll have to explain a bit in more detail what exactly you wish to achieve
[12:29] The are students who need to acces the server voor Moodle. Is VPN also possible in such a situation?
[12:29] honestly, it depends on what you want to do
[12:29] voor=for
[12:30] Students (about 10 a day max) need to take some courses and employees need to acces the Courses and our CRM
[12:31] VPN
[12:33] Ok and what do I need for VPN?
[12:34] We have 100Mbps Up and Down so a strong and fast connection
[12:34] Jorrit: you want students to access your LAN (office) from "outside" (public internet)?
[12:34] Jorrit, um......setting up a VPN is non trivial, and porbably not something you can do with just IRC
[12:36] what is recommended hardware or software raid?
[12:36] I choose DNS to let them acces Moodle via our website.
[12:36] redvic: software, unless you need huge arrays
=== chmurifree is now known as chmuri
[12:42] Fallentree: I want to let them acces it yes via the public internet.. can I sence something like.. unsafe in your q.?
[12:43] VPN
[12:43] Jorrit: well, my question was to understand the needed layout of network, but indeed exposing anything to public internet is unsafe and requires proper precautions. Perhaps VPN is indeed the best solution, but as Ussat said, it's not trivial to set up.
[12:45] also, setting up a DNS server is FAR more risky unless you know exactly what ya doin
[12:45] Jorrit: you'd need a public server runing (Open)VPN that connects the LAN part of your network with the end users over public internet. Maybe there's a (paid probably) VPN service you can use for that
[12:45] Fallentree: Well I need to check it out.. Don't know much about VPN'S other than the use of faking ones location
[12:46] well, I don't think running an authoritative DNS service is THAT risky, besides one can always use a third party DNS service if they don't want to mess up with setting up DNS correctly.
[12:46] I'm more concerned with securing the CRM and other services exposed to public internet
[12:47] I understand, thats my concern to. But They want it cheap and safe at the same time, in the office.
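For the in-place route discussed above, a rough sketch of the supported LTS-to-LTS path, one release at a time; it assumes Prompt=lts in /etc/update-manager/release-upgrades (the server default) and a tested backup or snapshot beforehand, as advised above:

    sudo apt-get update && sudo apt-get dist-upgrade   # bring 12.04 fully up to date first
    sudo do-release-upgrade                            # 12.04 LTS -> 14.04 LTS, then reboot
    sudo do-release-upgrade                            # 14.04 LTS -> 16.04 LTS, then reboot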
[12:47] Jorrit: in short, if your LAN has address space of 10.0.0.0/24, with VPN your end users (studends) would connect to it and have a network interface in that exact range, the VPN only bridges their computer to your LAN over the internet.
[12:47] so your users can access 10.0.0.0/24 as if it were in their local network
[12:49] fallentree: ok so all I have to do: find a tutorial for openVPN like this: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-openvpn-server-on-ubuntu-16-04 and everything will be safer than using DNS?
[12:50] it'd definitely be safer than exposing your internal CRM and stuff to public internet, yes.
[12:50] if security is your concern, use a VPN
[12:51] Ok, but is VPN legal?
[12:51] Jorrit, "all you have to do".....
[12:51] what ?
[12:51] Jorrit: of course, it stands for Virtual Private Network :)
[12:51] yes of course
[12:51] Ok
[12:51] Jorrit: another solution would be using ssh tunnels, simpler to set up than VPN, and serves the similar purpose
[12:52] but again, its NOT trivial, and a quick and dirty tutorial isnt quite.....anyway
[12:52] Jorrit: if your users are on linux, running a single command would open a "socks proxy" through which they can connect to your internal applications
=== rvba` is now known as rvba
[12:54] fallentree: but I don't think anyone of the students will use linux.
[12:54] Jorrit, V P N
[12:54] It's not that populair in Holland (as far as I know)
[12:55] Ussat: Thanks :-) I think I'll go with that! lol
[12:55] Jorrit, TBH at this point, considering the questions etc......you should probably go with a commercial VPN solution
[12:55] if your users are windows based, there are lots of good ones
[12:57] But that comes at a price, I will check the options for the Netherlands
[12:58] Thanks so far Ussat and fallentree, I think I'll be in touch (on chat) when I need more info. Gonna check it with my collegea now
[12:58] Yes, it comes at a proce....and with that price you get a professional setup, maintenance etc
[12:58] some thing are REALLY worth paying for
[13:00] fallentree, why do you prefer software ?
[13:01] redvic: yes
[13:02] why?
[13:02] for several reasons, the best IMHO is that in HW raid if the controller goes bad, you need to replace PHYSICAL HW etc, in SW raid, you dont. UNLESS its a really big array, then you want a SAN or card. It honestly depends on a few things
[13:03] Is it possible to use policies so that I can take care that some people may use some software installed and some not
[13:03] maybe openlapd ?
[13:08] redvic: what Ussat said. There's no need for HW raid today as CPUs are more than capable. Only with huge arrays that take too much of your CPU could you benefit with a HW card. Also, note that with HW raid, if it fails you need to replace with EXACT brand, and sometimes even exact model as they use proprietary formats that may change in newer models.
[13:09] redvic: plus, if you use something like zfs or btrfs that does checksumming and real-time (corrupt) data recovery, you must NOT use HW raid. I don't even know if any HW raids are capable of that, maybe the higher end ones.
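The "single command" socks proxy mentioned above would look roughly like this; the hostname and local port are placeholders:

    # open a SOCKS proxy on localhost:1080, tunnelled over ssh to the office gateway
    ssh -N -D 1080 student@office-gateway.example.org
    # then point the client's browser at SOCKS host localhost, port 1080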
[13:09] and honestly if youre in the arena where you NEED a card, you NEED a san
[13:10] in a system with HW raid, if system craps out, you buy new system, new card etc......with sw raid, pop drive in different system, rebuuld SW raid..done
[13:11] I am simplifying a bit, but not much, THAT said, I am on a SAN at work
[13:14] awswome thanx you guys
[13:21] Can I make it work with openlapd that a user can log into a server and may use only some software on the server
[13:24] roelof: what software?
[13:25] for example expensive cad or dtp software , fallentree
[13:27] roelof: I don't know if ldap can do it, but you should be able to use ACLs to dis/allow access for certain users to certain binaries/paths.
[13:28] oke, and with what software can I make ACL's ?
[13:29] roelof: https://help.ubuntu.com/community/FilePermissionsACLs
[13:29] thanks, I will dive into that
[13:34] roelof: so if iirc you can do something like setfacl -m u:someuser: /path/to/expensive/cad/binary . That should revoke all rights, for that user on that binary, assuming the default is root owned o+x binary installed by a package
[13:35] thanks , I will experiment with it
[13:36] roelof: setfacl manpage has more examples
=== dpawlik is now known as danpawlik
[15:19] cpaelzer: triaging bug 1685332, I think it would be reasonable to say that non-experimental NVMe support for smartmontools is a wishlist request, so Triaged/Wishlist. What do you think?
[15:19] bug 1685332 in smartmontools (Ubuntu) "does not monitor NVMe drives" [Undecided,Incomplete] https://launchpad.net/bugs/1685332
[15:25] redvic, Ussat, fallentree, if performance doesn't matter software raid is fine. Things will get interesting as NVDIMM and CrossPoint memory becomes more widespread. Then you'll have access to the same level of writeback performance as a hardware raid controller while keeping your data crash consistent.
[15:27] rbasak: if you doc it as "the non experimental is wishlist" I agree
[15:27] rbasak: but since all thatis on upstream atm it was incomplete for me
[15:28] Triaged would mean we know what to do/pick-up/...
[15:28] which we don't as it doesn't exist
[15:28] rbasak: maybe better confirmed/wishlist
[15:32] cpaelzer: I've always considered Triaged to mean that the report is valid and the issue is valid, rather than that the developer knows exactly what to do. My point being that the developer has enough information to find out and is unlikely to have to come back and say "the bug is Invalid".
[15:33] nacc: would you like to chat about the changelog branch? Not urgent.
[15:33] rbasak: sure
[15:33] nacc: same URL as five minutes ago?
[15:33] rbasak: omw
[15:33] rbasak: in any case I'm fine with wishlist if it is called out that there is still the blocker to need non-experimental for it
[15:34] OK I'll change it thanks.
[15:34] cpaelzer: oh, I think I see what you're saying.
[15:34] I think that's an entirely orthogonal thing.
[15:35] That Ubuntu doesn't have support for NVMe is the bug. That we don't upload a patch directly because it's non-experimental is separate. An interested developer could always drive it upstream to resolve the bug in Ubuntu.
[15:38] nacc: https://code.launchpad.net/~racb/usd-importer/+git/usd-importer/+merge/324476
[15:45] ppetraki: what's the performance penalty?
[15:47] fallentree, depends on your workload :) write intensive workloads benefit from a writeback cache the most. Read intensive can get away with a writethrough cache because it's basically a read cache.
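A concrete form of the setfacl idea sketched above, assuming a root-owned, o+x binary; the path and the username are placeholders:

    # an empty (---) named-user entry takes precedence over the "other" bits,
    # so this user can no longer execute or read the binary
    sudo setfacl -m u:someuser:--- /opt/cad/bin/cad
    getfacl /opt/cad/bin/cad                        # verify: user:someuser:---
    sudo setfacl -x u:someuser /opt/cad/bin/cad     # undo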
[15:53] ppetraki: in my experience the wb cache benefit is insignificant unless you have a very specific workload that very frequently modifies a relatively small number of pages
[15:54] I'd say the hybrid drives are far more beneficial in that department, or more advanced tech like ZFS with ZIL caches
[15:54] though SSDs nowadays tend to make all the irrelevant
[15:54] fallentree, like I said, workload specific.
[15:55] too specific for a blanket statement of "if performance doesn't matter" :)
[15:55] fallentree, read the rest of the sentence
[15:56] I did, but you implied that there's a (significant) performance penalty with software raid
[15:57] fallentree, sure, when you don't have a writeback cache that is *crash consistent* it hurts
[15:57] fallentree, go ahead and turn on the wb cache on your software raid and yank the power
[15:57] fallentree, let me know how that turns out
[15:58] I never noticed a problem with power failures on our ZFS storage arrays
[15:58] fallentree, I'm not interested in an argument, I've been hacking storage for about a decade. I merely wished to add to the conversation so the OP, who is apparently is no longer present, could gain some additional insight in the difference between the two options.
[15:58] fallentree, thanks.
[15:59] what you're talking about is BBU cards where the wb cache kind of ensures that the application sees successful write, "guaranteed" by the bbu in case of power failure at that particular moment.
[16:00] but in the overall cost-benefit equation, personally I don't find that beneficial over potential problems with HW cards.
[16:03] I'm not interested in an argument either, I was merely asking about the performance penalty :) Then you shifted goals to wb cache and power failures which has nothing to do with performance.
[16:14] fallentree, it's storage, it's raid, it's supposed to be highly available. You get to lose your customer's data exactly once, if your product is in the alpha stage they'll give you a pass, after that, they're shopping for a new array vendor.
[16:15] ppetraki, sure, but in the long run if performance is that much a issue, SAN
[16:15] ppetraki: without a proper backup policy, no hardware feature will protect from "losing customer data" :)
[16:15] ok I understand what happened
[16:15] do you guys know what crosspoint can do?
[16:15] me, personally, no
[16:15] ok
[16:16] it is a persistent DRAM that doesn't need any extra power plumbing like NVDIMM does. It's cheaper than NVDIMM but not as fast as DRAM.
[16:16] nice
[16:17] so once you have this as a building block, you can start to have some really interesting designs that normally only lived in SANs
[16:17] I am more than willing to admit I am not a storeage guy.......I have a enterprise grade SAN I deal with via FC
[16:17] no more journaling disks, that memory region is your journal
[16:17] it's not an either-or replacement for SAN, tho'
[16:18] SAN offers so much more than that right? LUNs etc, management plane
[16:18] depends on what you want
[16:18] redundancy, replication, ... :)
[16:18] dedup
[16:18] ugh, dedup
[16:18] compression, encryption
[16:18] (yes, we use those last 2)
[16:18] compression really doesn't matter that much
[16:19] * fallentree yawns and looks at ZFS :)
[16:19] only if you can do hot cold separation
[16:19] ppetraki, we do, ok well the san guys do
[16:19] then you can do it on the backend during your garbage collection cycle
[16:19] Like I said, I admit I am not a storeage guy so...
[16:20] encryption kills dedup but it's a use case that some people just have to have
[16:20] HIPPA
[16:20] we have to have it
[16:20] GOV
[16:20] ppetraki: lots of people "save" a lot of money by buying "cold" storage systems and running "hot" workloads on them :-)
[16:20] yeah
[16:20] yup
[16:20] we dont encrypt and its HUGE fins etc
[16:20] fines
[16:20] ZFS is about as good as you're going to get in the freeware world
[16:21] seeing how memory dedup is so vulnerable and a big no-no, I kinda don't trust the hard storage dedup either.
[16:21] I've personally never used it but I know it works
[16:22] fallentree, dd if=/dev/zero bs=4096 count=1 | sha1sum -
[16:23] fallentree, don't look at the data, look at it's signature
[16:23] what about it
[16:23] fallentree, if you see the same sig more than once, well you just deduped
[16:23] not hard
[16:23] not free either
[16:24] what are you talking about?
[16:24] how do implement dedup :)
[16:24] dedup of what? a stream of zeroes?
[16:25] what's a thinly provisioned disk?
[16:25] ...
[16:25] a bunch of zero'd sectors
[16:25] that the system claims is available from here ... to there
[16:25] actually, thin provisioning is not that...
[16:26] ohs?
[16:27] yeah. it's not a bunch of zero'd sectors. it's virtual space based on pooled resources.
[16:28] it's a virtual range, but the how provision the backend is dependent on the system design.
[16:28] at any rate, deduplication works by checksumming blocks and the referencing same blocks multiple times, if different consumers expect same data (checksum).
[16:28] some arrays will just ingest until they hit 80% unique data and claim they're full
[16:28] I don't trust it at all, given how memory dedup is vulnerable to abuse and injection of data.
[16:29] this is on media
[16:29] this is memory, you told the array to write
[16:29] essentially it's the same thing, with memory not being persistent
[16:30] and a barrier between the two
[16:30] so I don't see how you can create an vulnerability
[16:30] you told me to write X, I wrote it down, now what?
[16:30] very simple. it's not different than having multiple hard links to the same file, in fact, it's almost the same thing but managed at the lower level than the FS
[16:31] I don't care about filesystems
[16:31] at all
[16:31] blocks
[16:31] are what I care about
[16:31] ppetraki: there's a recent CCC presentation how memory deduped virtual servers can inject data into each other, with help of some vulnerabilities. I suggest you to check it out.
[16:32] bottom line, race conditions and other specific cases can lead to problems with deduped blocks, so thanks, no thanks.
[16:32] fallentree, link? that would be interesting
[16:33] ppetraki: https://www.youtube.com/watch?v=H9gM938H7qY
[16:33] fallentree, thanks
[16:33] anyway, yes, blocks. like I said, it's the same thing like hard linked files, except it's managed BELOW the FS, ie. at block level.
[16:34] ppetraki: oh, that's in german... I think this is the original https://media.ccc.de/v/33c3-8022-memory_deduplication_the_curse_that_keeps_on_giving
[16:34] fallentree, I admit I am not a security guy, I have a friend whos a pen tester who freaks me out on a continual basis
[16:36] fallentree, yeah its an arms race, and we're always behind.
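Extending the dd | sha1sum illustration above, a crude sketch of how block-level dedup "sees" data: hash fixed-size blocks and count repeats. disk.img is a placeholder; real arrays do this inline against their own metadata rather than with a shell loop:

    blocks=$(( $(stat -c%s disk.img) / 4096 ))
    for i in $(seq 0 $((blocks - 1))); do
        dd if=disk.img bs=4096 skip=$i count=1 status=none | sha1sum
    done | sort | uniq -cd | sort -rn | head   # hashes seen more than once = dedupable blocks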
[16:39] hardly an arms race
[16:39] jamespage: coreycb: btw magnum is broken http://logs.openstack.org/44/467044/1/check/gate-puppet-magnum-puppet-beaker-rspec-ubuntu-xenial-nv/a17b9c8/console.html#_2017-05-23_07_23_39_954393
[16:39] it's more like attempting to mouse proof a castle
[16:41] bluerisc is kinda interesting but it operates under the conditions of creating a new binary that has encrypted instructions interespersed. So it runs to the checkpoint and then that chunk of code is run on the offboard engine. If the binary is modified in any way, it traps. That's my understanding of it anyways.
[16:41] nasty side effect of exposing race conditions in your code that never existed before ;)
[16:52] * ppetraki is really away
[17:36] mwhahaha: do you know if that was failing any other time since 5/12? that's when 1.1.9 sqlalchemy went into -proposed.
[17:37] coreycb: it used to pass. We're using updates though
[17:37] oh you're using updates, ok.
[17:37] coreycb: is seen that error before with rdo
[17:38] coreycb: wanted to check in with you on the django merge for 17.10
[17:38] blake_r: and iirc, you said for maas, you're ok with me uploading and you'll deal with the fallout for maas?
[17:38] nacc: hey, you'll probably want to check with jamespage tomorrow and see if he ran with it.
[17:38] coreycb: thanks, will do
[17:40] ahh. nice fresh 16.04 lts server. glad i'm going this route rather than piddle with hacky upgrades
[17:41] can also take my time migrating. thanks for the input folks. and i'm doing certain things differently compared to last time. that's always nice
[17:41] the Fieldy of dreams...
[17:45] nacc: yes
[17:46] blake_r: thanks, i'll update the bug so i don't forget this time :)
[17:54] What triggers the detection of DMraid?
[17:55] Capprentice: i'd assume an appropriate signature on the device
[17:55] I dont want fake raid. The on board raid controller is selected as AHCI ans is disabled that way.
[17:56] nacc, What does the OS looks for?
[17:59] Fieldy: welcome to zystem dee!
[17:59] Capprentice: i'm not sure what you mean?
[18:00] what is fake raid? :)
[18:02] dpb1: usually it's the onboard raid controller :)
[18:02] dpb1: which is garbage non-raid but claiming to be raid
[18:05] with out of date firmware to boot :)
[18:06] dpb1: right -- and at that point, swraid is better
[18:26] those fake raid bios remind me of the old software modem devices, remember those?
[18:26] you mean current software modem devices?
[18:26] they still exist?
[18:26] it's almost impossible to locate a real modem
[18:26] gives me a stupid human trick idea; VPS provider, create a "raw" disk device, then another, swraid them. compare performance with and without. would be funny if it was better (not likely)
[18:27] the only real modem is ones that have a serial port on them
[18:27] I meant old because I thought they were not manufactured anymore (the software modem ones)
[18:27] Fieldy, they are the same
[18:27] fakeraid is software raid, but with a bios level boot helper
[18:27] hardware accel makes a big diff
[18:28] ahasenack, I just bought 4 new ones
[18:28] Fieldy, heh?
[18:28] linux sw raid is very impressive though
[18:28] fascinating, dial-up lives
[18:28] linux sw raid and fakeraid are the same thing
[18:28] if you install linux instead of windows
[18:28] ahasenack, not dialup, fax :)
[18:28] I use them for faxing
[18:28] even more fascinating :)
[18:29] nacc, I mean what should I disable or what should I do to make sure dmraid to be disabled.
[18:30] Capprentice, it looks for a fakeraid signature on the disks, that is created by the bios being in raid mode
[18:30] ahci doesn't mean you aren't in raid mode though
[18:31] go in to the BIOS and turn it off
[18:31] bios firmware naming for items is odd and inconsistant
[18:31] my dells have sata, ahci, and raid options, but ahci and raid are the same thing
[18:31] ahasenack: oh god I'd forgotten about those 'winmodems'. terrible things.
[18:32] no, raid and ahci are not the same at all
[18:33] patdk-lp, I have a server which was booted without disabling the RAID controller. Installation failed to the disk for reasons I dont know. Later on when I installed 16.04 with the AHCI set (The option is either AHCI or RAID), I found that all of the disks contains raid signature.
[18:34] I have removed the signature by booting a live cd and creating a new msdos partition table on each of the disk except the one which had the OS installed.
[18:34] Capprentice, yes, that is correct, because the fake raid put one on there
[18:35] When I finish clearing, after reboot the OS did not boot. I will do a clean install on it.
[18:35] How do I make sure no way RAID gets enabled?
[18:36] Ussat: i think patdk-lp meant they are the same as far as their BIOS is concerned (setting name)
[18:45] Capprentice, have the raid setting off first
=== Guest30325 is now known as IdleOne
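For completeness, one way to check for and clear leftover fakeraid metadata like the signature discussed above. The device name is a placeholder; fakeraid metadata usually lives at the end of the disk, so writing a new partition table does not necessarily remove it, and both erase commands are destructive, so verify the device first:

    sudo dmraid -r                 # list any fakeraid metadata found on member disks
    sudo dmraid -rE /dev/sdX       # erase that metadata from one disk
    sudo wipefs --all /dev/sdX     # alternative: wipe all known filesystem/raid signatures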