[00:39] coreycb: hey - noted a few bumps to debhelper compat level 10 from debian - that's going to be awkward to backport without a backport of debhelper >= 10 first [00:40] there is on in backports - we might be able to use that [00:40] bradm: are you charm deployed? or just using the packages? [00:41] jamespage: charm deployed [00:41] bradm: ok so two options on each service [00:41] bradm: you can set "openstack-origin" to the new UCA pocket, this will perform a parallel upgrade on all units of the service [00:42] or you can toggle action-managed-upgrade to True via config; after that setting the config also needs an execution of the openstack-upgrade action on each unit [00:42] intent is that you can do things unit by unit [00:44] jamespage: that could be fun with things that require a db schema change, doing it unit by unit [00:44] bradm: the lead unit will take care of the db migration so you should do that one first [00:45] jamespage: are there any recommendations about service orders to do the upgrades in? [00:45] I guess keystone first, similar to how the charm upgrades are done [00:46] yep [00:46] ok, and this is liberty -> mitaka, so it seems I need to upgrade ceph to jewel first too [00:49] jamespage: there's no rollback is there? not expecting to need it, but have to ask the question [00:54] bradm: no [00:55] jamespage: excellent, good to know. [00:55] bradm: actually ceph does things in a different way - if you change the source, it will perform a managed upgrade across the units [00:56] jamespage: is any of this documented anywhere? all the cloud archive page says about it is to 'upgrade the packags' [00:56] packages. [00:57] is anyone here good at repairing grub? [01:05] bradm: in the charm readme's; there is a pending task to write a 'how to upgrade' section for the charm deployment guide [01:06] bradm: https://docs.openstack.org/charm-deployment-guide/latest/ [01:08] jamespage: aha, cool. [01:49] Epx998: a war? [01:54] work around [01:55] drab: ubuntu doesnt like to an interface not followed with a 0 during network installs [01:55] I see [01:55] anyway, I had that problem on some supermicro servers with intel quad nics [01:55] yeah [01:56] exactly what we are seeing, onboard nics 1gb, plus offboard intel x550 10gb [01:56] war for me was to disable the [un]predictable interaface naming so that stuff was once again called eth0 etc, then it worked just fine passing the interface name to select via preseed/kernel param [01:56] we want to use 10gb out the gate, but d-i netboot does not like [01:57] i suggested that to the guys trying to provision, they said it was too much work. so i creasted a new tftp entry where i specified eth4, the installer sees it, then tries eth0 regardless and falls on its face [01:58] how is one kernel parameter too much work? all you need is biosdevsomethingiforgot=0 [01:58] anyway [01:59] that's all I've got [01:59] altho if you already have eth0 and 4 that doesn't sound like your problem [02:02] sarnold: fwiw someone suggested a good way to do it with nginx using "geo" [02:02] http://nginx.org/en/docs/http/ngx_http_geo_module.html [02:04] so I can use geo to create a "rolling" variable and if the ip matches whichever sets of clients I wanna rollout I just catch them there and then use map to use a different pool [02:04] http://prabu-lk.blogspot.com/2017/03/select-different-backend-pools-based-on.html [02:06] drab: heh, neither one -quite- describes what's going on.. 
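For reference, the charm upgrade flow jamespage describes above boils down to roughly the following commands. This is only a sketch: it assumes Juju 2.x syntax, placeholder application names (keystone, ceph) and the trusty-mitaka UCA pocket; substitute whatever your deployment actually uses.

    # unit-by-unit path: disable the automatic parallel upgrade first
    juju config keystone action-managed-upgrade=true
    juju config keystone openstack-origin=cloud:trusty-mitaka
    # then upgrade each unit in turn, lead unit first (it handles the DB migration)
    juju run-action keystone/0 openstack-upgrade
    juju run-action keystone/1 openstack-upgrade
    # the ceph charms key off "source" instead and roll the upgrade across units themselves
    juju config ceph source=cloud:trusty-mitaka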
[02:06] drab: still it looks like it can do the job [02:07] I guess I just have to get over this idea that nginx is mostly a web server thingie... and tbh most of the verbiage on the site talks about http [02:07] which makes me somewhat uneasy about using it for generic tcp protocols [02:07] also this nginx plus thing isn't helping... half of the times I'm confused if it's payware feature or not... === an3k is now known as an3k^ === an3k^ is now known as an3k [04:03] coreycb: I think we're going to need to bump lescina to xenial [04:03] coreycb: the lack of support for conditional deps is created patching requirements which are unrelated to the uca [07:13] Good morning [07:46] morn' lordievader [07:47] Hey SmokinGrunts [07:47] How are you doing? [07:47] pretty good. I got vibes from fielding a successful ##networking issue, and so I feel good [07:48] * SmokinGrunts toots the ole ego horn [07:48] and you? === SmokinGrunts is now known as SG_Sleeps === JanC is now known as Guest4348 === JanC_ is now known as JanC [08:21] Doing good here. [08:21] Is ##networking an interesting channel? [08:52] Hi there! Where does iptables-persistent save it's rules? How can I clean them up? [08:54] adac: According to [1] in `/etc/iptables/rules.v{4,6}`. [1] http://www.microhowto.info/howto/make_the_configuration_of_iptables_persistent_on_debian.html#idp21024 [08:59] lordievader, ok thanks! So then I can simply delete it's content to reset it? I currently do run this script to clean now all up (reset the iptables firewall) https://gist.github.com/anonymous/cc01da7ccd09e292fb44e468e656163e when I run "iptables -L" then I have no more output. But when I print iptables-save -c I still get this output: [08:59] https://gist.github.com/anonymous/fe479f70643d0578a40c9f7d1adb8194 [08:59] I'm wondering if my iptables now is really cleaned fully or not? :) [09:00] No you don't want to blindly delete the content. [09:01] The commands you posted do flush the entire iptables. [09:01] It seems you (or some program) made CATTLE_* chains. [09:02] `sudo iptables -vnL` should show these. [09:05] lordievader, yes that was my intention to completely flush it. This CATTLE_* Chains come from rancher docker container who creates them. I was wondering why these are not deleted with my script I posted before? [09:05] Maybe they are immediately re-generated [09:05] by rancher [09:05] I try to shut down rancher and the flush all again [09:06] iptables -vnL does also not show them. No clue why actually [09:07] `iptables -X` probably does not remove them if there are still rules in them. [09:07] Hmm, if `iptables -vnL` does not show them `iptables-save` should not either. [09:09] lordievader, exactly what i also tought it should no be shown anymore after I executed this script. I now stopped the docker process (so no container runs anymore and therefore no iptables rules can be created by them) and then just re-run my flush script [09:09] and with iptables-save -c they are still shown :D [09:10] how's tat possible? [09:10] What happens when you explicitly delete the chains? [09:10] Also, could you pastebin the output of `iptables -vnL`? 
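For anyone following along: a full iptables reset has to touch every table and also delete user-defined chains like the CATTLE_* ones, and whatever creates them (rancher/docker here) has to be stopped first or they simply reappear. A minimal sketch of such a reset, not the exact script adac is running:

    # open default policies, then flush rules and drop custom chains in every table
    sudo iptables -P INPUT ACCEPT
    sudo iptables -P FORWARD ACCEPT
    sudo iptables -P OUTPUT ACCEPT
    for table in filter nat mangle raw; do
        sudo iptables -t "$table" -F    # flush all rules in this table
        sudo iptables -t "$table" -X    # delete now-empty user-defined chains
    done
    sudo iptables-save -c               # verify nothing is left over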
[09:10] http://lubos.rendek.org/remove-all-iptables-prerouting-nat-rules/ [09:11] this is iptables -vnL after stopping docker and re-run the flush script: https://gist.github.com/anonymous/44cde2821e55d9fd2379ca71d270e3e7 [09:12] and this at the same time is the output of iptables-save -c [09:12] https://gist.github.com/anonymous/9f11adf6d862e3e926e0b5dd03846b96 [09:15] Interesting [09:15] Oh well, you can always just edit the rules. [09:15] The saved file I mean. [09:16] lordievader, https://serverfault.com/a/200642 I think this finally flushed it :D [09:17] Yeah, that does the same but then through awk ;) [09:17] hehehe [09:17] great I finally have a reset and can now try again to set this up properly! [09:17] Thanks for your support! [09:18] No problem. [11:02] Hi all. joelio you know how you like ZFS just a little bit, right? [11:08] hey [11:08] a little yes :) [11:08] I did a Ubuntu Server FakeRaid on my assistant's machine to get them familiar with the OS but I am not satisfied with the boot time. I have found a little 20GB SSD in one of the machines here and reclaimed it by re-installing their main drive. However, it being so small, I want to know how to safely store applications on a ZFS pool. [11:08] Which would mean moving /usr /var and /etc I am guessing? [11:08] You can use it as a cache drive [11:09] (if you've done ZFS) [11:09] Does one of those hold the bulk and the rest are just links or a executables? [11:09] I wouldn't use fakeraid btw, mdadm [11:09] mdadm all the way if you're doing raid [11:09] Sec [11:09] So, the SSD is being installed as a lone drive. [11:10] are you using ZFS for boot? [11:10] Then I am going to ZFS attach the other three drives that were previously used in the FakeRaid [11:10] ZFS for boot? Me? :D [11:10] right, so ZFS is irrelevant here [11:11] SSD = / (boot) and ZFS drives = /home + (/etc /var /usr ???) [11:11] I'd ditch the fakeraid, use an alternate iso (server) to reinstall... use software raid [11:11] The FakeRaid is ditched. [11:12] you could potentially use the ssd, but if it dies you've lost /boot and it's not redundant [11:12] what I'd do is... [11:12] use mdadm to make a boot and small root on the drives (so it's raided) [11:12] It is just a desktop for their learning purposes. [11:13] and then use the remaining space on the drives to create zvols which you can use in a raidz [11:13] then use the ssd drive to act as a cache layer for zfs pools [11:13] You can RAID a 20GB SSD with 3x 160GB? I am hoping to make the desktop I slap on snappier and faster to load. [11:13] it sounds messy, but without doing full root ZFS that's the easiest route and will give you redundancy [11:13] no, don't use ssd in the raid [11:14] they're asymetrical in size and performance [11:14] *asymmetrical [11:14] you can use ssd as a cache layer in ZFS (or btrfs) [11:14] or if you don't want to use ZFS, bcache or EnhanceIO etc [11:14] that's what I'd do anyway :) [11:15] * Jenshae looks around the maze dazed [11:15] also bear in mind it's not just about being I/O bound, could be CPU or whatever too [11:15] does that make sense? [11:15] If it is running systemd, `systemd-analyze blame` may help. [11:17] Okay, let's start with step one. I plug in the 3x 160GB, fire up the USB with the ISO and then create a mdadm. Then I plug in the SSD and install the OS with just /boot going on the SSD? [11:17] I think the machine's bios is auto pushing for UEFI. [11:17] How does that fit in? [11:18] The mdadm is 3x160GB or it is 3x20GB and the Zpool goes over 3x140GB? 
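The layout being worked out here, sketched as commands. Device names, sizes and the pool name are illustrative only; it assumes each 160GB disk has already been partitioned into a small RAID member (sdX1, ~20G) and a large ZFS partition (sdX2, ~140G), the 20GB SSD shows up as sde, and what gets called "zvols" in the discussion is in practice just those partitions handed to zpool as vdevs.

    # redundant / and /boot across the three spinners
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # pool over the remaining space on the same disks
    sudo zpool create tank raidz /dev/sdb2 /dev/sdc2 /dev/sdd2
    # the small SSD becomes an L2ARC read cache rather than a pool member
    sudo zpool add tank cache /dev/sde
    zpool status tank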
[11:19] I'd not allocate the full 160GB to mdadm, but say 20G of it, leaving 140G on each disk for zvols [11:20] Okay so second part. [11:20] leave the ssd for a ZFS cache pool, add that after you've installed and created the ZFS pools [11:20] How do I get the mdadm to write from RAM to the hard drives? Can the SSD be plugged in straight away with the 160GBs from the start? [11:20] this is quite a bit of faff tbh, are you sure you want to go this route btw? :D [11:21] just making sure you know first and I'm not sending you down a rabbit hole [11:22] Jenshae: otherwise do a full disk mdadm raid setup [11:22] The way I did it at home was, I installed everything onto one 500GB drive. Then I put /.steam onto a zpool made up of 3x500GB SSHDs and my games' loading time improved. [11:22] and use the SSD cache with EnchanceIO or bcache on the md device perhaps? [11:23] what do you mean about mdadm wiriting to ram? [11:23] I want to give him the fastest desktop possible and then give him storage space in a pool after that. [11:23] that doesn't make sense :) [11:23] mdadm is the raid admin tool [11:23] When the USB is running, it is storing temp data into RAM partitions, right? [11:23] USB? [11:23] Including how things are partitions when installing. [11:23] The ISO / live USB. [11:24] USB is just for install media, ignore that [11:24] plus, don't use live - use the alternate iso [11:24] the text based / ncurses one [11:24] I need to use the USB to create the raid to then install into the raid, no? [11:25] like I said, create a server/alternate install image, whether you boot from CD/DVD/USB etc is irrelevant :) [11:25] Okay, I think I am starting to see the light. [11:25] https://help.ubuntu.com/community/Installation/SoftwareRAID [11:25] here you go :) [11:26] ... now where is the desktop environment installed? Does it have a specific partition? [11:26] it's all over the FS [11:26] /usr /var /etc [11:26] /etc can't be a seperate file system [11:27] I'm not sure that's what was being asked [11:27] thats fine, I'm just clarifying [11:28] plus is is techncailly possible btw root=/blah etc=/blah :) [11:28] well within reason [11:28] not that you should mind :) [11:29] it's not possible [11:33] Can specific folders within /var and /usr be mapped to the zpool? [11:34] Like say GIMP is installed and is rather big, I want it to go into the zpool instead of the mdadm raid. [11:37] When you've made the ZFS, rsync the contents and update fstab [11:39] ikonia: https://power-of-linux.blogspot.co.uk/2010/03/booting-with-etc-in-separate-partition.html [11:39] as mentioned, not advisable [11:39] anything is techincally possible here, just how much effort you want to put in [11:41] joelio: thats a bit of a cheat though isn't it [11:41] it's not really possible - it's someone cheating [11:41] except it completely is possible as... they did it [11:41] it's not cheating, it's programming [11:43] Missing a third party for the golden apple story. [11:50] Thank you, This will be interesting and a bit surprising for them. :D [12:01] coreycb: jamespage: fyi bug 1682102 - no more need to drop seccomp when backporting for UCA-queens [12:01] bug 1682102 in libseccomp (Ubuntu Xenial) "libseccomp should support GA and HWE kernels" [High,Fix committed] https://launchpad.net/bugs/1682102 [12:08] ^.^ Nice one cpaelzer. Do you guys have a #social-room? [12:10] if "you guys" is server folks then I think you are here :-) [12:10] IMHO no reason to separate too much unless it overloads the channel [12:14] Okay. 
You do all seem rather quiet, had all the conversations? Like joelio and ikonia having a difference of opinion there. I doubt everyone here shares "dank memes" or plays / makes a game together. It seems very disassociated from each other. Drop a bit of work on launchpad with a message, pick up another bit, etc. [12:15] Not sure that comes across right. This channel seems like it is the "office", where is the "team building"? [12:15] oh I see - that "social" - I guess we are all work addicts :-/ [12:16] there will be a chan for that but I don't know oO [12:19] * Jenshae thinks I have stumbled upon a cave of abused dwarves and elves that need to be dragged out to have fun in the sun. :P [12:20] hehe [12:20] white skin will become the trend again [12:21] got told this channel wasn't for fun a few years back :D [12:22] What about a #ubuntu-server server on Discord or would Discord's spying be too invasive? [12:22] oh sorry that was #ubuntu :) [12:23] Discord is handy because you can make a channel where only certain people can post things, that means a smaller group can link in their favourite or video guides they have made to inform new users without having all the "dank meme" videos dropped in there. The same applies for images / diagrams and text or links, naturally. [12:24] Then it does have voice also, for those who struggle to explain a new concept in text, whilst thinking aloud. [12:25] It can all split down in a tree from the main "ubuntu-server" [12:28] Finding X2GO very handy for RDPing into the archive server. [12:40] perhaps, although personally prefer IRC.. Slack and Gittr and just not IRC (they have their own shiny features, but still not IRC.. although can interface with them) [12:41] also used https://www.nomachine.com/ a bit (not open source though, but works well) [12:41] (re RDP/VNC stuff) [13:00] I like IRC because you can IRSSI into it, use it on an old mobile and the bandwidth / hardware usage is very small. I wonder if anyone is making a fancier client for it to display links the way Slack and Discord do? [14:42] Ubuntu needs to add http boot for filesystem.squashfs in casper init script [14:42] Debian already has this and no one these days wants to live boot from a stupid nfs server [14:42] https://forum.kde.org/viewtopic.php?f=309&t=136596 [14:42] Here is a modified caster script that supports http - https://pastebin.com/raw/V6W39XJu [14:55] ikonia: sdeziel fwiw I'm settling for ldirectord + fwmark , at least as my first choice to test, have not gone through a setup yet [14:55] but after figuring out how you'd do it with haproxy and nginx it's all "too shiny" and new for my taste [14:55] and actually not as flexible once I learned about fwmark [14:56] fwmark + ipset seems a really kickass solution since I can basically re-route entire sets of clients on the fly no restarts required with just one simple command (ipset add xxx xxx) [14:56] drab: good to hear that you found some possible solution [14:56] drab: well done for making a ca;; [14:56] call [14:56] i'll setup a test host today with a few containers, ldirectord on the host and we'll see what happens, but on a whiteboard it looks sane and simple [14:57] without fwmark probably haproxy or nginx would have won [14:59] well done you [14:59] yeah well, bouncing around ideas in chan was as usual very helpful [14:59] beats the rubber duck :) [15:30] Jenshae: just mention board games and you'll get a few of us chatting ;) [15:30] Even the more "dwarvy" of us [16:30] blackboxsw: Go room on KGS server? 
:P (Go feels a bit hollow to me now that AI beat humans) [16:31] Have a good evening o7 [16:31] in other, maybe less fun news, anybody here happens to be familiar with SAS, expanders, backplanes and raid cards? even tho I get the gist, I'm having a bit of a problem working through the nomenclature and being clear about what's what. [16:31] I've dealt with SES once or twice yea :) [16:31] drab: ^ [16:32] jonfatino: this would be best put on Launchpad, maybe a merge proposal to the relevant package [16:35] joelio: for a starter, expanders, backplanes, raid cards and HBAs are all different things, correct? sometimes it seems that backplanes are also expanders, but that's the first thing i'm not clear about [16:35] also are SAS SFF-8087 always 4-lane connectors? meaning a backplane with 2 connectors would at most only accommodate 8 drives? [16:35] or if it accommodates more you'd have oversubscription? [16:36] or would that be a case where an expander would come into play? and still have oversubscription tho [16:39] no idea about the specific SAS questions... but HBA's are hardware cards, which generally have a SAS connectior, which connects an enclosure to a given backplane [16:40] that can feed a single backplane with reduced bandwidht, or the enclousre can (generally) be carved up so you can have multiple SAS connections, to increase throughput (but reduce number of attached disks) [16:41] SES is generally the enclosure type [16:41] some tools exist for doing drive id notifications etc [16:41] ledctl etc [16:41] any reason you keep saying SES? first time I thought it was just a typo for SAS, but now I'm doubting that [16:41] they're different things [16:42] https://en.wikipedia.org/wiki/SCSI_Enclosure_Services [16:42] SES ^^ [16:42] oh, I see [16:42] https://en.wikipedia.org/wiki/SES-2_Enclosure_Management more currently [16:42] https://en.wikipedia.org/wiki/Serial_Attached_SCSI [16:42] SAS ^^ [16:43] not sure this applies here then, these are all internal drives, but ifrst time I hear about SES and only quickly glanced at that wikipedia page [16:43] thanks for sharing something new [16:43] oh, you mentioned enclosure, so.. :) [16:43] that's SES in my book [16:43] or expander sorry [16:43] oh, I did? :) [16:43] right, expander, thought that was a diff thing [16:44] it seems to be a card you can add to multiplex sort of way more drives than the controller can natively drive [16:44] but again, still trying to figure that out [16:44] not sure that defintiion is correct [16:44] also it seems the case backplanes cam have built-in expanders [16:45] HBA is the card that does the SAS connectivity (Like an LSI 9201 for example ) [16:45] https://www.scan.co.uk/products/16-port-lsi-sas-9201-16i-6gb-s-sasplussata-to-pci-express-host-bus-adapter-4x-internal-mini-sas-upto [16:45] that's internally attached SAS [16:45] but you can get external SAS that feed a SES [16:46] and slice and dice depending on how much performance you want [16:46] see, on that card, how do you get 512 non raid devices? if it has 4 SF-8087 ports, which it does say, and each one can get 4 drives, that's 16, which it also mentiones [16:46] where's the 512 coming from? [16:46] it's serially attached [16:47] so one port may have 48+ dries [16:47] *drives [16:47] but you can chain etc [16:47] operative word being serial ;) [16:48] so 512 in this case will be a limitation in the spec [16:48] I really doubt you'd address 512 drives from a single card like that [16:48] you could... I guess......... 
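Going back to the ldirectord + fwmark + ipset plan drab settled on earlier: the move-clients-with-one-command part looks roughly like this. A sketch only; the set name, mark value and addresses are made up, and the fwmark-based virtual service itself still has to be defined on the ldirectord/LVS side.

    # mark traffic from any client address in the "rollout" set
    sudo ipset create rollout hash:ip
    sudo iptables -t mangle -A PREROUTING -m set --match-set rollout src -j MARK --set-mark 2
    # re-route a client onto the new pool on the fly, no restarts:
    sudo ipset add rollout 10.0.42.17
    # ...and move it back just as easily:
    sudo ipset del rollout 10.0.42.17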
[16:49] there's not logical difference between an internal and external SAS port too, they're the same thing jjst in different places [16:49] hopefully makes some sense (probably not explained the best!) [17:02] joelio: how would you physically serially attach even 48 drives? [17:02] everything I'm reading is basically saying "if you need more than 8 drives use an expander or multiple HBAs" [17:10] ok, I think I'm getting some place... === CodeMouse92 is now known as CodeMouse92__ [17:11] so it seems that: [17:11] 1) an HBA or raid car connects usually through PCI to the mobo and has one or more SF8087 ports on, each of which can drive up to 4 drives [17:12] 2) an expander can be connected to the HBA or raid controller, each port on the expander (again SF8087) can drive up to 4 drives. this allows you to address > 8 drives (unless somehow you get a pricey HBA with more than 2 ports, still it seems there aren't many with more than 8, so for 48 bays you still need an expander) [17:13] 3) some backplanes have built in expanders with one or more SF8087 ports on it (and chipsets) connecting to one or more ports on the HBA [17:14] depending on the speed of the expander, and the number of ports going to the HBA, you can end up with oversubscription [17:14] here's a decent picture I found: http://img.my.csdn.net/uploads/201203/5/0_13309440980Tj9.gif [17:14] thats not ture... [17:14] I think that's it and relatively clear unless I missed something [17:14] i just bought an H700 [17:14] ok, great, what did I get wrong? [17:15] dell PERC H700 has 2 mini sas and can be daisy chained up to 255 drives [17:15] It cost me ~80USD on ebay [17:15] drab, the lingo for HBA/Raid Controllers sucks. [17:16] yeah that I figured :) [17:16] if i didnt work at a datacenter i wouldnt understand either. [17:16] But how many drives are you trying to control. [17:16] dirtycajunrice: how do you physically daisy chain drives to 2 mini sas? [17:16] isn't each SF8087 cable coming out with 4 sas cables? [17:16] ie 8 drives [17:17] so the way it works (normally) is that the 2 mini sas cables go to a backplane in the 2 IN ports . That backplane handles more connections and normally has OUT sas connections for daisy chaining [17:18] if you want an example look at a Dell R510 12 Bay. or a Dell R730XD [17:18] ok, right, see my 3), like I said it seems some backplanes have built in expanders [17:18] so you do have your HBA going to an expander, it's just built into the backplane [17:18] yeah, your example isn't wrong, it just isn't commonly built that way [17:19] usually the expander is built into the backplane [17:19] right [17:19] yes what qman__ said [17:19] almost ALWAYS [17:19] are you against buying a rackmount ? [17:19] that's ok, still, there is an expander in the mix, it's not straight HBA to disks [17:19] no I'm not, but I'm trying to get my terminology and design straight before I buy anything [17:19] drab, no thats not possible. the technology isnt designed that way [17:19] I dislike purchasing "black boxes" [17:20] if you are about to purchase, do you want a wonderful site? [17:20] ebay.com ? :P [17:20] trust me. i did white boxes.... IPMI is the way of the world [17:20] drab, https://labgopher.com/ [17:20] it scrapes ebay, and gives you only helpful information [17:20] I usually ebay dell or supermicro [17:21] qman__: yeah, I'm looking at a bunch of supermicros actually [17:21] qman__, me as well. 
Using this site i got 2 620s, a 510, and a 420 all for under 1200 bucks [17:21] x9s, the 10s seem still too expensive [17:21] nice [17:21] dell = hp > supermicro [17:21] this is coming from an enterprise background. [17:21] nothing beats iLO/iDRAC [17:22] the problem with iDRAC in particular is that it has to have the full license [17:22] I'll take that with a bag of salt if you don't mind... I've just about found any opinion and its opposite in a few days of googling... which isn't new, that's been about true for any tech I've ever looked at [17:22] I'm not familiar with how HP's stuff works [17:22] supermicro's stuff isn't sublicensed like that, so you get what you get [17:22] and of course all opinions coming from ppl with X years of experience :) [17:23] im personally a dell guy... but qman__ idrac express does not require a licence... [17:23] drab, that is true. "Mileage may vary" is the key phrase [17:23] righyt [17:23] different license levels have different feature sets, and the cheapest license level's feature set is pretty lame [17:23] at least with many models [17:23] all it needs is console and snmp [17:23] i.e. no console [17:24] all the rest is just lagniappe [17:24] supermicro IPMI, on the other hand, is just one product, they don't have different licesnes or feature sets [17:25] its the AMD to Intel :P [17:25] but that comes with the AMD bugs. a mileage may vary situation again [17:25] iDRAC has plenty of bugs too [17:26] dirtycajunrice: labgopher is really neat, thanks for sharing [17:26] the only bug that affects me is the browser one. but IPMI has browser compatibility with literally every vendor i have tested [17:26] HP doesnt like firefox... Dell doesnt like chrome.... [17:26] etc etc [17:28] oh and drab if you DO decide to go dell, most of the servers that are sold have idrac enterprise licence already added since they came from a working environment [17:28] (idrac licences cant be migrated. they are bound to the machine they are installed on) [17:31] until someone decides you have to pay for a yearly license? ;) [17:32] I'm sorry I started this :P [17:32] but thanks for clarifying/confirming, I think I get what's what now [17:39] haha its ok [17:39] thats the point of IRC [17:39] talk [17:39] argue [17:39] be pedantic [17:39] its fun :P === jancoow_ is now known as jancoow [18:46] jamespage: is there a job that uploads cloud-archive-utils? we need a bionic version. https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/tools/+packages [19:12] Anyone know why I can't echo something twice in bash but only once. [19:13] DATE=date echo $date echo $date [19:13] it only echos date once :-( [19:18] jonfatino: "DATE=date; echo $DATE $DATE" ? or do you mean "DATE=date; echo $($DATE)" ? 
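A short illustration of what is going on with the DATE=date line above (a generic bash sketch, not jonfatino's actual script):

    # `DATE=date echo $date echo $date` is one single echo command: the temporary
    # DATE=date assignment only applies to that command's environment, and the
    # shell expands $date (which was never set) to nothing before running it
    DATE=$(date)               # assign on its own line instead
    echo "$DATE"
    echo "$DATE"               # prints the same timestamp twice
    # the VAR=value cmd form is still useful for one-shot environment variables:
    TZ=UTC date
    echo "$TZ"                 # empty afterwards (unless TZ was already set in your shell)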
[19:19] weird bash bug [19:19] PASSWORD=$(date +%s|sha256sum|base64|head -c 32) [19:19] echo $PASSWORD [19:19] echo $PASSWORD [19:19] Fixed it [19:21] jonfatino: there is a bash channel which is probably a better place to ask in the future [19:21] uhm , not sure what you're running but DATE=`date` ; echo $DATE ; echo $DATE worked just fine for me [19:21] drab, [19:21] dont you dare [19:21] backtick [19:21] ever again [19:21] XD [19:22] it's the end of the season, I've heard backticks are trendy again [19:22] rofl [19:30] doing "VAR=value command" makes VAR available to that command only [19:58] coreycb: its part of the build recipe hooked up to the branch - you can add bionic and request a rebuild [20:02] jamespage: ok i'll look for that, thx [21:13] dirtycajunrice: [21:14] drab, [21:14] whups, I was gonna say, about that labgopher, there seem to be a ton of G6, I think you said you have exp with HPs... aren't G6s too old? [21:14] we can't afford latest, but afaik we're at G9s, so G6s is 3 gens ago [21:14] They are old. But old is relative to what you are doing [21:14] right [21:14] for example [21:15] my ESX 6.5 hosts are 620s for dell [21:15] its the oldest you can go for esxi [21:15] but my iSCSI server is a 510 [21:15] because why not? cheaper and more bays [21:15] and has no requirement to be super new [21:15] I don't know HP, but in the case of SM for example, X8s are still kind of popular, but you can get X9s and they have a completely diff design mobo wise allowing much faster access to PCI-E (and therefore faster disk access with a PCI SAS HBA) [21:15] so let the server's job dictate the cost [21:16] drab, what is the goal of the server(s) you are looking to buy [21:16] so it's not really worth to buy X8s when you can get X9s for about the same price [21:16] NAS + VM host [21:16] so 2 servers? [21:16] 24bays, doing nfs for homedirs and samba [21:16] wait wait backup [21:17] how many servers [21:17] one if possible, 2 diff zfs pools, hoping to put something like a E5-25xx in it, 8 cores [21:17] ok [21:17] so 1 server [21:17] 24 2.5in bays? [21:17] because 3.5 in is in DAE territory [21:17] from what I can see, 3.5, 2.5 seems too expensive [21:17] what's DAE? direct attach something? [21:17] DAS? [21:18] Directly attached Expansion [21:18] ok, different than a DAS? [21:18] a DAE is a shelf you attach to expand a DAS [21:18] ah, ok [21:18] but you are multiusing your server so its foggy :P [21:18] lemme look [21:18] gimme 10 [21:19] yeah, well, we don't have the money (it's a charity) to get multiple machines (unless it makes sense to get 2 cheaper ones, but it's often not the case) [21:19] besides, the older the more generally power hungry [21:19] i mean... [21:19] not to mention that if you wanna hold on a few spare parts, like PSUs, you need twice as much [21:19] to be honest [21:19] 24 bays is not cheap as 1 server [21:19] but can absolutely be affordable as 2 [21:20] are you using enterprise drives or consumer drives [21:21] true, they could prolly do with 12. the thing is, I'm here, I may not be able to volunteer for them in the future so I'm trying to put in something that will last them 5-10 years, cavia adding some drives if their archives grow (they do a lot of media stuff for history projects) [21:21] dirtycajunrice: enterprisey I'm hoping, maybe WD reds [21:22] WD Reds are consumer drives [21:22] enterprise drives are literally HP or dell signed drives from the manufacturer [21:22] eeer, ok, fine, NAS drive then? 
[21:22] with special Firmware [21:22] it matters for if enterprise servers will read them [21:22] sec [21:22] I see [21:22] well then no, no enterprise drives [21:23] whats your budget? [21:23] finally being asked to migrate off ub12.... [21:24] (i dont user enterprise drives either. i have 12 8TB toshiba x300s lol [21:25] dirtycajunrice: about 1K including disks and will need at least 128GB to run all VMs [21:26] drab, https://www.ebay.com/itm/DELL-POWEREDGE-R510-12-BAYS-2x-QUAD-CORE-L5520-2-26GHz-24GB-NO-HDD-NO-RAIL/132224991815?hash=item1ec9394247:g:nb0AAOSwJtdZ-g5g [21:26] thats the server [21:26] you can get caddys for like 30 bucks [21:27] ram is a mother right now tho [21:27] but thats even a problem in consumer [21:27] the market is artificially inflated [21:28] thanks [21:44] Epx998: I just finished that 2 weeks ago [21:44] now got a few 14 that I'm getting rid of and moving to containers [21:44] they actually had some ub11 too going around... [21:45] different question I guess I'm still confused about regarding HBAs [21:45] from before [21:45] LSI SAS 9211-4i PCI Express to 6Gb/s SAS HBA <-- this guy has one SF8087 port splitting to 4 lanes [21:46] I'm understanding that each port can do 6Gbs, ie that's not comulating for all the ports at once, but that's the first thing I have doubts about [21:48] "through four internal 6Gb/s ports" so it definitely seems it's 6Gps per port, however that SF cable is going at once to the backplane... does that mean it's the same as 24Gpbs to the backplane? which then the drives would share? [22:02] ok, I think the SAS configuration table explains that, full duplex SAS is 4.8GBps, so about 400MB/s per drive [22:20] drab: my home machine uses an sas expander; I think either sas port on the HBA can drive any of the drives [22:21] drab: so that's roughly eight sata-lanes of performance, and I've got nine drives plugged into the thing; the lights all seem to blink simultaneously though so it feels more than good enough at the job, haha [22:22] 8 SAS drives hooked to a "home" machine surely is good enough ;) [22:23] sdeziel: I blame my friend who talked me into 3-way mirrors [22:23] sarnold: I thought that friends only recommend mirror for ZFS :P [22:24] sdeziel: two-way mirrors? your friends must not care for your data much :) [22:25] eer, a 3-way mirror sounds like a logical impossiblity... how's a mirror 3 way? :) [22:25] drab: easy: zpool create pool mirror sda sdb sdc mirror sdd sde sdf mirror sdg sdh sdi [22:26] tada! [22:26] three 3-way mirrors! :) [22:26] oh, a mirror with 3 vdevs, I see, ok [22:26] vdev with 3 disks :) [22:27] eer, that one [22:27] * drab runs 6 disks in raidz2 [22:27] seems good enough with 2 disks possible failure and more available space, no? [22:28] on a 3disks vdev you get the same 2 drive failures, but the capacity of only one [22:28] unless I'm missing something again [22:28] yeah you've got a pretty good sweet spot there [22:29] but I'd expect roughly nine times 100MB/s for bulk reads from three 3-way mirrors, vs roughly 100MB/s bulk reads from 6-disk raidz2 [22:30] I've read somewhere that it was a pain to grow a RAIDZ(2) setup [22:30] (I don't think I can actually get my queue depths deep enough to get that kind of throughput though) [22:30] trying to find the link to that [22:30] ah, that's new. would love to read that [22:30] sdeziel: it definitely is; the next logical step is to add another six disks in a new vdev, and of course then all the writes would go to the new vdev until they're about the same capacity... 
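Back-of-envelope numbers for the SAS bandwidth question above, as a rough sanity check rather than anything authoritative (real throughput depends on the expander, drives and workload):

    # 6 Gb/s per lane with 8b/10b encoding is roughly 600 MB/s usable per lane, per direction
    # a 4-lane SFF-8087 link is therefore ~2.4 GB/s per direction (~4.8 GB/s full duplex)
    echo $(( 4 * 600 / 12 ))   # ~200 MB/s per drive per direction with 12 drives on one x4 link,
                               # i.e. the ~400 MB/s full-duplex figure quoted above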
[22:31] http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/ [22:31] that said, the plan was (given the 12 bays), to just add another vdev with the next batch of 6 hds [22:31] but I could easily add just another three disks to mine .. and suffer the same write-problem :) [22:31] sarnold: yeah, that was my recollection from this article ^ [22:36] ah, I learned something, I guess I misunderstood how vdevs where added together, I was still thinking raid10 [22:37] drab: that's probably the right intuition [22:39] I'm unsure about the maintenance windows tho, once a vdev with a 2xdisks mirror has a failure you are one disk away from losing everything [22:39] so yeah, it seems to me if you're gonna be running mirrors then it has to be 3-disks per mirror [22:39] and that gets kind of expensive [22:40] yes it does [22:40] which is why a 6-disk raidz2 feels like a very nice sweet spot [22:40] speaking of ZFS, the other thing I had misunderstood (no wonder..) is how the ZIL is supposed to work [22:40] I thought writes would go to it, ie behave like a write cache [22:41] btu that's only for synchronous writes afaiu [22:42] there's several interacting concepts here [22:42] async writes end up in mem and never touch the ZIL/SLOG [22:43] all pools have ZIL, you can use a SLOG to put the ZIL on super-fast storage [22:43] right [22:43] basically before going with ZFS I had looked at... I now forget the name... for linux [22:43] where you basically end up with something like a SHDD [22:43] bcachefs? [22:43] putting a couple of SSDs in front of a bunch of disks [22:43] ah yes, that's the one [22:44] I thought ZFS with SLOG on a diff disk would work like that, but that's not the case [22:44] and it seems in a sense less performant than a setup with bcachefs [22:44] because in terms of returning to the app, with bcachefs you only have to wait for writes to be done in the SSD [22:44] indeed, writes to the main drives still get flushed within a few seconds as writes to the slog, but the application is allowed to cobntinue once the write to the slog is complete [22:45] mmmh, but that seems true only for synchronous writes [22:45] I found that far fewer operations go through the slog than I expected. My intution suggested that atomic operations like mkdir would go through slog but I _never_ saw the slog write counters increment no matter what workload I tried :) [22:46] https://github.com/zfsonlinux/zfs/issues/1012 [22:47] https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/ [22:48] Use case: If your use case involves synchronous writes, utilizing a SLOG for your ZIL will provide benefit. Database applications, NFS environments, particularly for virtualization, as well as backups are known use cases with heavy synchronous writes. 
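The growth path being described, written out as commands. Purely a sketch with hypothetical pool and device names; zpool will complain if the new vdev's redundancy doesn't match the existing ones.

    # grow the pool with a second 6-disk raidz2 vdev (new writes favour it until usage evens out)
    sudo zpool add tank raidz2 sdg sdh sdi sdj sdk sdl
    # optional fast devices: a dedicated SLOG for the ZIL, and an L2ARC read cache
    sudo zpool add tank log /dev/nvme0n1p1
    sudo zpool add tank cache /dev/nvme0n1p2
    zpool list -v tank          # per-vdev capacity and fill level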
[22:48] which goes to the point that for async writes the whole ZIL/SLOG seems unhelpful [22:49] from that second link forcing all writes to be sync seems to be a matter of security of not losing data, but to me seems to actually make a diff in terms of performance as control to the app will be returned as soon as data is written to the SLOG [22:49] I guess I'll hvae to test that [22:50] drab: fwiw i'm quite happy to leave the defaults at the defaults [22:50] drab: and even though I've got a partition of my nvme set aside for slog, one of these reboots I'm going to disable it and just use the whole nvme for l2arc instead [22:51] yeah, I gave a share pof the NVME to slog right now and the rest of l2arc, but I'm fundamentally bugged by this default behavior [22:52] I would basically except, like in the case of bcachefs, to basically see nvme-like speeds for all writes [22:52] with "long term" storage being a sort of deferred write, ie from nvme device to HDDs [22:53] so app -> mirror nvme -> raidz2 on 6 drives [22:54] basically use the NVME as a cheaper version of those super expensive battery backed ram, zeusram or whatever, forgot what it's called [22:58] Hmm has anyone ever had Windows users experiencing DNS cache corruption or such while connected to an OpenVPN server? Trying to debug some users' sporadic issues (the VPN server is running Ubuntu 16.04 of course) and it's narrowing down to a point that simultaneously very specific yet extremely mysterious.
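For anyone wanting to test the sync/SLOG behaviour discussed above, a small sketch (pool and dataset names are made up):

    # force every write on a dataset through the ZIL, then watch whether the log vdev sees traffic
    sudo zfs set sync=always tank/vms
    zpool iostat -v tank 1                  # the "logs" section shows writes hitting the SLOG
    sudo zfs set sync=standard tank/vms     # back to the default behaviour
    # later, to drop the SLOG and reuse the whole NVMe for L2ARC:
    sudo zpool remove tank /dev/nvme0n1p1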