/srv/irclogs.ubuntu.com/2017/11/08/#ubuntu-server.txt

jamespagecoreycb: hey - noted a few bumps to debhelper compat level 10 from debian - that's going to be awkward to backport without a backport of debhelper >= 10 first00:39
jamespagethere is one in backports - we might be able to use that00:40
jamespagebradm: are you charm deployed? or just using the packages?00:40
bradmjamespage: charm deployed00:41
jamespagebradm: ok so two options on each service00:41
jamespagebradm: you can set "openstack-origin" to the new UCA pocket, this will perform a parallel upgrade on all units of the service00:41
jamespageor you can toggle action-managed-upgrade to True via config; after that, setting the origin config also needs an execution of the openstack-upgrade action on each unit00:42
jamespageintent is that you can do things unit by unit00:42
bradmjamespage: that could be fun with things that require a db schema change, doing it unit by unit00:44
jamespagebradm: the lead unit will take care of the db migration so you should do that one first00:44
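(A rough sketch of the two options jamespage describes, assuming a juju 2.x client, a liberty-on-trusty deployment, and nova-cloud-controller purely as an example application name:)
    # option 1: parallel upgrade of every unit of the application
    juju config nova-cloud-controller openstack-origin=cloud:trusty-mitaka
    # option 2: unit-by-unit, lead unit first so it runs the db migration
    juju config nova-cloud-controller action-managed-upgrade=True
    juju config nova-cloud-controller openstack-origin=cloud:trusty-mitaka
    juju run-action nova-cloud-controller/0 openstack-upgrade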
bradmjamespage: are there any recommendations about service orders to do the upgrades in?00:45
bradmI guess keystone first, similar to how the charm upgrades are done00:45
jamespageyep00:46
bradmok, and this is liberty -> mitaka, so it seems I need to upgrade ceph to jewel first too00:46
bradmjamespage: there's no rollback is there?  not expecting to need it, but have to ask the question00:49
jamespagebradm: no00:54
bradmjamespage: excellent, good to know.00:55
jamespagebradm: actually ceph does things in a different way - if you change the source, it will perform a managed upgrade across the units00:55
bradmjamespage: is any of this documented anywhere?  all the cloud archive page says about it is to 'upgrade the packages'00:56
ReedK0is anyone here good at repairing grub?00:57
jamespagebradm: in the charm READMEs; there is a pending task to write a 'how to upgrade' section for the charm deployment guide01:05
jamespagebradm: https://docs.openstack.org/charm-deployment-guide/latest/01:06
bradmjamespage: aha, cool.01:08
drabEpx998: a war?01:49
Epx998work around01:54
Epx998drab: ubuntu doesn't like an interface name that doesn't end in a 0 during network installs01:55
drabI see01:55
drabanyway, I had that problem on some supermicro servers with intel quad nics01:55
Epx998yeah01:55
Epx998exactly what we are seeing, onboard nics 1gb, plus offboard intel x550 10gb01:56
drabwar for me was to disable the [un]predictable interface naming so that stuff was once again called eth0 etc, then it worked just fine passing the interface name to select via preseed/kernel param01:56
Epx998we want to use 10gb out the gate, but d-i netboot does not like01:56
Epx998i suggested that to the guys trying to provision, they said it was too much work.  so i created a new tftp entry where i specified eth4, the installer sees it, then tries eth0 regardless and falls on its face01:57
drabhow is one kernel parameter too much work? all you need is biosdevsomethingiforgot=001:58
drabanyway01:58
drabthat's all I've got01:59
drabaltho if you already have eth0 and 4 that doesn't sound like your problem01:59
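(For reference, the parameters drab is talking about, as they would appear on the installer kernel command line in the tftp/pxelinux entry; a sketch only:)
    # net.ifnames=0 and biosdevname=0 turn off predictable interface naming,
    # interface= tells d-i which NIC to use
    net.ifnames=0 biosdevname=0 interface=eth0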
drabsarnold: fwiw someone suggested a good way to do it with nginx using "geo"02:02
drabhttp://nginx.org/en/docs/http/ngx_http_geo_module.html02:02
drabso I can use geo to create a "rolling" variable and if the ip matches whichever sets of clients I wanna rollout I just catch them there and then use map to use a different pool02:04
drabhttp://prabu-lk.blogspot.com/2017/03/select-different-backend-pools-based-on.html02:04
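(A minimal sketch of the geo+map idea from those links, as nginx http-context config; the upstream names and the client CIDR are invented for illustration:)
    geo $rolling {
        default      0;
        192.0.2.0/28 1;              # clients included in the rollout
    }
    map $rolling $pool {
        0 backend_stable;
        1 backend_canary;
    }
    upstream backend_stable { server 10.0.0.10:8080; }
    upstream backend_canary { server 10.0.0.20:8080; }
    server {
        listen 80;
        location / { proxy_pass http://$pool; }
    }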
sarnolddrab: heh, neither one -quite- describes what's going on..02:06
sarnolddrab: still it looks like it can do the job02:06
drabI guess I just have to get over this idea that nginx is mostly a web server thingie... and tbh most of the verbiage on the site talks about http02:07
drabwhich makes me somewhat uneasy about using it for generic tcp protocols02:07
drabalso this nginx plus thing isn't helping... half the time I'm confused about whether it's a payware feature or not...02:07
=== an3k is now known as an3k^
=== an3k^ is now known as an3k
jamespagecoreycb: I think we're going to need to bump lescina to xenial04:03
jamespagecoreycb: the lack of support for conditional deps is creating patching requirements which are unrelated to the uca04:03
lordievaderGood morning07:13
SmokinGruntsmorn' lordievader07:46
lordievaderHey SmokinGrunts07:47
lordievaderHow are you doing?07:47
SmokinGruntspretty good. I got vibes from fielding a successful ##networking issue, and so I feel good07:47
* SmokinGrunts toots the ole ego horn07:48
SmokinGruntsand you?07:48
=== SmokinGrunts is now known as SG_Sleeps
=== JanC is now known as Guest4348
=== JanC_ is now known as JanC
lordievaderDoing good here.08:21
lordievaderIs ##networking an interesting channel?08:21
adacHi there! Where does iptables-persistent save its rules? How can I clean them up?08:52
lordievaderadac: According to [1] in `/etc/iptables/rules.v{4,6}`. [1] http://www.microhowto.info/howto/make_the_configuration_of_iptables_persistent_on_debian.html#idp2102408:54
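(For completeness: once the live rules look right, those files are simply rewritten from the running state; a minimal sketch, run as root:)
    iptables-save  > /etc/iptables/rules.v4
    ip6tables-save > /etc/iptables/rules.v6
    # or via the package's helper:
    netfilter-persistent save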
adaclordievader, ok thanks! So then I can simply delete its content to reset it? I currently run this script to clean it all up (reset the iptables firewall) https://gist.github.com/anonymous/cc01da7ccd09e292fb44e468e656163e when I run "iptables -L" I no longer have any output. But when I print iptables-save -c I still get this output:08:59
adachttps://gist.github.com/anonymous/fe479f70643d0578a40c9f7d1adb819408:59
adacI'm wondering if my iptables now is really cleaned fully or not? :)08:59
lordievaderNo you don't want to blindly delete the content.09:00
lordievaderThe commands you posted do flush the entire iptables.09:01
lordievaderIt seems you (or some program) made CATTLE_* chains.09:01
lordievader`sudo iptables -vnL` should show these.09:02
adaclordievader, yes that was my intention, to completely flush it. These CATTLE_* chains come from the rancher docker container which creates them. I was wondering why these are not deleted by the script I posted before?09:05
adacMaybe they are immediately re-generated09:05
adacby rancher09:05
adacI'll try to shut down rancher and then flush all again09:05
adaciptables -vnL does also not show them. No clue why actually09:06
lordievader`iptables -X` probably does not remove them if there are still rules in them.09:07
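(A sketch of deleting the leftover chains explicitly along those lines; the CATTLE_ prefix is taken from adac's output and the loop assumes the chains are in the filter table:)
    for chain in $(iptables -S | awk '/^-N CATTLE_/ {print $2}'); do
        iptables -F "$chain"    # empty the chain first
        iptables -X "$chain"    # then it can be deleted
    done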
lordievaderHmm, if `iptables -vnL` does not show them `iptables-save` should not either.09:07
adaclordievader, exactly what I also thought, it should not be shown anymore after I executed this script. I have now stopped the docker process (so no containers run anymore and therefore no iptables rules can be created by them) and then just re-ran my flush script09:09
adacand with iptables-save -c they are still shown :D09:09
adachow's that possible?09:10
lordievaderWhat happens when you explicitly delete the chains?09:10
lordievaderAlso, could you pastebin the output of  `iptables -vnL`?09:10
adachttp://lubos.rendek.org/remove-all-iptables-prerouting-nat-rules/09:10
adacthis is iptables -vnL after stopping docker and re-run the flush script: https://gist.github.com/anonymous/44cde2821e55d9fd2379ca71d270e3e709:11
adacand this at the same time is the output of iptables-save -c09:12
adachttps://gist.github.com/anonymous/9f11adf6d862e3e926e0b5dd03846b9609:12
lordievaderInteresting09:15
lordievaderOh well, you can always just edit the rules.09:15
lordievaderThe saved file I mean.09:15
adaclordievader,  https://serverfault.com/a/200642 I think this finally flushed it :D09:16
lordievaderYeah, that does the same but then through awk ;)09:17
adachehehe09:17
adacgreat I finally have a reset and can now try again to set this up properly!09:17
adacThanks for your support!09:17
lordievaderNo problem.09:18
JenshaeHi all. joelio you know how you like ZFS just a little bit, right?11:02
joeliohey11:08
joelioa little yes :)11:08
JenshaeI did a Ubuntu Server FakeRaid on my assistant's machine to get them familiar with the OS but I am not satisfied with the boot time. I have found a little 20GB SSD in one of the machines here and reclaimed it by re-installing their main drive. However, it being so small, I want to know how to safely store applications on a ZFS pool.11:08
JenshaeWhich would mean moving /usr /var and /etc I am guessing?11:08
joelioYou can use it as a cache drive11:08
joelio(if you've done ZFS)11:09
JenshaeDoes one of those hold the bulk and the rest are just links or executables?11:09
joelioI wouldn't use fakeraid btw, mdadm11:09
joeliomdadm all the way if you're doing raid11:09
JenshaeSec11:09
JenshaeSo, the SSD is being installed as a lone drive.11:09
joelioare you using ZFS for boot?11:10
JenshaeThen I am going to ZFS attach the other three drives that were previously used in the FakeRaid11:10
JenshaeZFS for boot? Me? :D11:10
joelioright, so ZFS is irrelevant here11:10
JenshaeSSD = / (boot) and ZFS drives = /home + (/etc /var /usr ???)11:11
joelioI'd ditch the fakeraid, use an alternate iso (server) to reinstall... use software raid11:11
JenshaeThe FakeRaid is ditched.11:11
joelioyou could potentially use the ssd, but if it dies you've lost /boot and it's not redundant11:12
joeliowhat I'd do is...11:12
joeliouse mdadm to make a boot and small root on the drives (so it's raided)11:12
JenshaeIt is just a desktop for their learning purposes.11:12
joelioand then use the remaining space on the drives to create zvols which you can use in a raidz11:13
joeliothen use the ssd drive to act as a cache layer for zfs pools11:13
JenshaeYou can RAID a 20GB SSD with 3x 160GB? I am hoping to make the desktop I slap on snappier and faster to load.11:13
joelioit sounds messy, but without doing full root ZFS that's the easiest route and will give you redundancy11:13
joeliono, don't use ssd in the raid11:13
joeliothey're asymmetrical in size and performance11:14
joelioyou can use ssd as a cache layer in ZFS (or btrfs)11:14
joelioor if you don't want to use ZFS, bcache or EnhanceIO etc11:14
joeliothat's what I'd do anyway :)11:14
* Jenshae looks around the maze dazed11:15
joelioalso bear in mind it's not just about being I/O bound, could be CPU or whatever too11:15
joeliodoes that make sense?11:15
lordievaderIf it is running systemd, `systemd-analyze blame` may help.11:15
JenshaeOkay, let's start with step one. I plug in the 3x 160GB, fire up the USB with the ISO and then create a mdadm. Then I plug in the SSD and install the OS with just /boot going on the SSD?11:17
JenshaeI think the machine's bios is auto pushing for UEFI.11:17
JenshaeHow does that fit in?11:17
JenshaeThe mdadm is 3x160GB or it is 3x20GB and the Zpool goes over 3x140GB?11:18
joelioI'd not allocate the full 160GB to mdadm, but say 20G of it, leaving 140G on each disk for zvols11:19
JenshaeOkay so second part.11:20
joelioleave the ssd for a ZFS cache pool, add that after you've installed and created the ZFS pools11:20
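(Putting joelio's suggestion together as a rough sketch: three disks each split into a small partition for the md mirror and a large one for ZFS; device names and the pool name 'tank' are illustrative:)
    mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1   # ~20G partitions for / and /boot
    zpool create tank raidz /dev/sda2 /dev/sdb2 /dev/sdc2                              # ~140G partitions
    zpool add tank cache /dev/sdd                                                      # the 20GB SSD as L2ARC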
JenshaeHow do I get the mdadm to write from RAM to the hard drives? Can the SSD be plugged in straight away with the 160GBs from the start?11:20
joeliothis is quite a bit of faff tbh, are you sure you want to go this route btw? :D11:20
joeliojust making sure you know first and I'm not sending you down a rabbit hole11:21
joelioJenshae: otherwise do a full disk mdadm raid setup11:22
JenshaeThe way I did it at home was, I installed everything onto one 500GB drive. Then I put /.steam onto a zpool made up of 3x500GB SSHDs and my games' loading time improved.11:22
joelioand use the SSD cache with EnchanceIO or bcache on the md device perhaps?11:22
joeliowhat do you mean about mdadm writing to ram?11:23
JenshaeI want to give him the fastest desktop possible and then give him storage space in a pool after that.11:23
joeliothat doesn't make sense :)11:23
joeliomdadm is the raid admin tool11:23
JenshaeWhen the USB is running, it is storing temp data into RAM partitions, right?11:23
joelioUSB?11:23
JenshaeIncluding how things are partitioned when installing.11:23
JenshaeThe ISO / live USB.11:23
joelioUSB is just for install media, ignore that11:24
joelioplus, don't use live - use the alternate iso11:24
joeliothe text based / ncurses one11:24
JenshaeI need to use the USB to create the raid to then install into the raid, no?11:24
joeliolike I said, create a server/alternate install image, whether you boot from CD/DVD/USB etc is irrelevant :)11:25
JenshaeOkay, I think I am starting to see the light.11:25
joeliohttps://help.ubuntu.com/community/Installation/SoftwareRAID11:25
joeliohere you go :)11:25
Jenshae... now where is the desktop environment installed? Does it have a specific partition?11:26
joelioit's all over the FS11:26
joelio /usr /var /etc11:26
ikonia /etc can't be a separate file system11:26
joelioI'm not sure that's what was being asked11:27
ikoniathats fine, I'm just clarifying11:27
joelioplus it is technically possible btw root=/blah etc=/blah :)11:28
joeliowell within reason11:28
joelionot that you should mind :)11:28
ikoniait's not possible11:29
JenshaeCan specific folders within /var and /usr be mapped to the zpool?11:33
JenshaeLike say GIMP is installed and is rather big, I want it to go into the zpool instead of the mdadm raid.11:34
joelioWhen you've made the ZFS, rsync the contents and update fstab11:37
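(A sketch of that move for a single directory, using the dataset's mountpoint property rather than fstab; the pool name, dataset name and path are made up:)
    zfs create tank/gimp
    rsync -aHAX /usr/lib/gimp/ /tank/gimp/        # copy the existing contents
    mv /usr/lib/gimp /usr/lib/gimp.old            # keep the original until verified
    zfs set mountpoint=/usr/lib/gimp tank/gimp    # mount the dataset in its place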
joelioikonia: https://power-of-linux.blogspot.co.uk/2010/03/booting-with-etc-in-separate-partition.html11:39
joelioas mentioned, not advisable11:39
joelioanything is technically possible here, just how much effort you want to put in11:39
ikoniajoelio: thats a bit of a cheat though isn't it11:41
ikoniait's not really possible - it's someone cheating11:41
joelioexcept it completely is possible as... they did it11:41
joelioit's not cheating, it's programming11:41
JenshaeMissing a third party for the golden apple story.11:43
JenshaeThank you, This will be interesting and a bit surprising for them. :D11:50
cpaelzercoreycb: jamespage: fyi bug 1682102 - no more need to drop seccomp when backporting for UCA-queens12:01
ubottubug 1682102 in libseccomp (Ubuntu Xenial) "libseccomp should support GA and HWE kernels" [High,Fix committed] https://launchpad.net/bugs/168210212:01
Jenshae^.^ Nice one cpaelzer. Do you guys have a #social-room?12:08
cpaelzerif "you guys" is server folks then I think you are here :-)12:10
cpaelzerIMHO no reason to separate too much unless it overloads the channel12:10
JenshaeOkay. You do all seem rather quiet, had all the conversations? Like joelio and ikonia having a difference of opinion there. I doubt everyone here shares "dank memes" or plays / makes a game together. It seems very disassociated from each other. Drop a bit of work on launchpad with a message, pick up another bit, etc.12:14
JenshaeNot sure that comes across right. This channel seems like it is the "office", where is the "team building"?12:15
cpaelzeroh I see - that "social" - I guess we are all work addicts :-/12:15
cpaelzerthere will be a chan for that but I don't know oO12:16
* Jenshae thinks I have stumbled upon a cave of abused dwarves and elves that need to be dragged out to have fun in the sun. :P12:19
cpaelzerhehe12:20
cpaelzerwhite skin will become the trend again12:20
joeliogot told this channel wasn't for fun a few years back :D12:21
JenshaeWhat about a #ubuntu-server server on Discord or would Discord's spying be too invasive?12:22
joeliooh sorry that was #ubuntu :)12:22
JenshaeDiscord is handy because you can make a channel where only certain people can post things, that means a smaller group can link in their favourite or video guides they have made to inform new users without having all the "dank meme" videos dropped in there. The same applies for images / diagrams and text or links, naturally.12:23
JenshaeThen it does have voice also, for those who struggle to explain a new concept in text, whilst thinking aloud.12:24
JenshaeIt can all split down in a tree from the main "ubuntu-server"12:25
JenshaeFinding X2GO very handy for RDPing into the archive server.12:28
joelioperhaps, although I personally prefer IRC.. Slack and Gitter are just not IRC (they have their own shiny features, but still not IRC.. although you can interface with them)12:40
joelioalso used https://www.nomachine.com/ a bit (not open source though, but works well)12:41
joelio(re RDP/VNC stuff)12:41
JenshaeI like IRC because you can IRSSI into it, use it on an old mobile and the bandwidth / hardware usage is very small. I wonder if anyone is making a fancier client for it to display links the way Slack and Discord do?13:00
jonfatinoUbuntu needs to add http boot for filesystem.squashfs in casper init script14:42
jonfatinoDebian already has this and no one these days wants to live boot from a stupid nfs server14:42
jonfatinohttps://forum.kde.org/viewtopic.php?f=309&t=13659614:42
jonfatinoHere is a modified casper script that supports http - https://pastebin.com/raw/V6W39XJu14:42
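(For comparison, Debian's live-boot does this with a kernel parameter, roughly like the following; the URL is made up:)
    boot=live fetch=http://10.0.0.1/live/filesystem.squashfs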
drabikonia: sdeziel fwiw I'm settling for ldirectord + fwmark , at least as my first choice to test, have not gone through a setup yet14:55
drabbut after figuring out how you'd do it with haproxy and nginx it's all "too shiny" and new for my taste14:55
draband actually not as flexible once I learned about fwmark14:55
drabfwmark + ipset seems a really kickass solution since I can basically re-route entire sets of clients on the fly no restarts required with just one simple command (ipset add xxx xxx)14:56
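(A rough sketch of that fwmark + ipset idea; the set name, mark value and addresses are invented, and ldirectord would be pointed at the same fwmark rather than driven through ipvsadm by hand:)
    ipset create rollout hash:ip
    ipset add rollout 192.0.2.55                  # move one client onto the new pool
    iptables -t mangle -A PREROUTING -m set --match-set rollout src -j MARK --set-mark 2
    # the virtual service is then keyed on the mark, e.g.:
    ipvsadm -A -f 2 -s rr
    ipvsadm -a -f 2 -r 10.0.0.21:80 -m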
sdezieldrab: good to hear that you found some possible solution14:56
ikoniadrab: well done for making a call14:56
drabi'll setup a test host today with a few containers, ldirectord on the host and we'll see what happens, but on a whiteboard it looks sane and simple14:56
drabwithout fwmark probably haproxy or nginx would have won14:57
ikoniawell done you14:59
drabyeah well, bouncing around ideas in chan was as usual very helpful14:59
drabbeats the rubber duck :)14:59
blackboxswJenshae: just mention board games and you'll get a few of us chatting ;)15:30
blackboxswEven the more "dwarvy" of us15:30
Jenshaeblackboxsw: Go room on KGS server? :P (Go feels a bit hollow to me now that AI beat humans)16:30
JenshaeHave a good evening o716:31
drabin other, maybe less fun news, anybody here happens to be familiar with SAS, expanders, backplanes and raid cards? even tho I get the gist, I'm having a bit of a problem working through the nomenclature and being clear about what's what.16:31
joelioI've dealt with SES once or twice yea :)16:31
joeliodrab: ^16:31
sdezieljonfatino: this would be best put on Launchpad, maybe a merge proposal to the relevant package16:32
drabjoelio: for a starter, expanders, backplanes, raid cards and HBAs are all different things, correct? sometimes it seems that backplanes are also expanders, but that's the first thing i'm not clear about16:35
drabalso are SAS SFF-8087 always 4-lane connectors? meaning a backplane with 2 connectors would at most only accommodate 8 drives?16:35
drabor if it accommodates more you'd have oversubscription?16:35
drabor would that be a case where an expander would come into play? and still have oversubscription tho16:36
joeliono idea about the specific SAS questions... but HBAs are hardware cards, which generally have a SAS connector, which connects an enclosure to a given backplane16:39
joeliothat can feed a single backplane with reduced bandwidth, or the enclosure can (generally) be carved up so you can have multiple SAS connections, to increase throughput (but reduce the number of attached disks)16:40
joelioSES is generally the enclosure type16:41
joeliosome tools exist for doing drive id notifications etc16:41
joelioledctl etc16:41
drabany reason you keep saying SES? first time I thought it was just a typo for SAS, but now I'm doubting that16:41
joeliothey're different things16:41
joeliohttps://en.wikipedia.org/wiki/SCSI_Enclosure_Services16:42
joelioSES ^^16:42
draboh, I see16:42
joeliohttps://en.wikipedia.org/wiki/SES-2_Enclosure_Management more currently16:42
joeliohttps://en.wikipedia.org/wiki/Serial_Attached_SCSI16:42
joelioSAS ^^16:42
drabnot sure this applies here then, these are all internal drives, but it's the first time I hear about SES and I only quickly glanced at that wikipedia page16:43
drabthanks for sharing something new16:43
joeliooh, you mentioned enclosure, so.. :)16:43
joeliothat's SES in my book16:43
joelioor expander sorry16:43
draboh, I did? :)16:43
drabright, expander, thought that was a diff thing16:43
drabit seems to be a card you can add to, in a sort of way, multiplex more drives than the controller can natively drive16:44
drabbut again, still trying to figure that out16:44
drabnot sure that definition is correct16:44
drabalso it seems to be the case that backplanes can have built-in expanders16:44
joelioHBA is the card that does the SAS connectivity (Like an LSI 9201 for example )16:45
joeliohttps://www.scan.co.uk/products/16-port-lsi-sas-9201-16i-6gb-s-sasplussata-to-pci-express-host-bus-adapter-4x-internal-mini-sas-upto16:45
joeliothat's internally attached SAS16:45
joeliobut you can get external SAS that feed a SES16:45
joelioand slice and dice depending on how much performance you want16:46
drabsee, on that card, how do you get 512 non raid devices? if it has 4 SF-8087 ports, which it does say, and each one can get 4 drives, that's 16, which it also mentions16:46
drabwhere's the 512 coming from?16:46
joelioit's serially attached16:46
joelioso one port may have 48+ drives16:47
joeliobut you can chain etc16:47
joeliooperative word being serial ;)16:47
joelioso 512 in this case will be a limitation in the spec16:48
joelioI really doubt you'd address 512 drives from a single card like that16:48
joelioyou could... I guess.........16:48
joeliothere's no logical difference between an internal and external SAS port, they're the same thing just in different places16:49
joeliohopefully makes some sense (probably not explained the best!)16:49
drabjoelio: how would you physically serially attach even 48 drives?17:02
drabeverything I'm reading is basically saying "if you need more than 8 drives use an expander or multiple HBAs"17:02
drabok, I think I'm getting some place...17:10
=== CodeMouse92 is now known as CodeMouse92__
drabso it seems that:17:11
drab1) an HBA or raid card usually connects through PCI to the mobo and has one or more SF8087 ports on, each of which can drive up to 4 drives17:11
drab2) an expander can be connected to the HBA or raid controller, each port on the expander (again SF8087) can drive up to 4 drives. this allows you to address > 8 drives (unless somehow you get a pricey HBA with more than 2 ports, still it seems there aren't many with more than 8, so for 48 bays you still need an expander)17:12
drab3) some backplanes have built in expanders with one or more SF8087 ports on it (and chipsets) connecting to one or more ports on the HBA17:13
drabdepending on the speed of the expander, and the number of ports going to the HBA, you can end up with oversubscription17:14
drabhere's a decent picture I found: http://img.my.csdn.net/uploads/201203/5/0_13309440980Tj9.gif17:14
dirtycajunricethats not true...17:14
drabI think that's it and relatively clear unless I missed something17:14
dirtycajunricei just bought an H70017:14
drabok, great, what did I get wrong?17:14
dirtycajunricedell PERC H700 has 2 mini sas and can be daisy chained up to 255 drives17:15
dirtycajunriceIt cost me ~80USD on ebay17:15
dirtycajunricedrab, the lingo for HBA/Raid Controllers sucks.17:15
drabyeah that I figured :)17:16
dirtycajunriceif i didnt work at a datacenter i wouldnt understand either.17:16
dirtycajunriceBut how many drives are you trying to control.17:16
drabdirtycajunrice: how do you physically daisy chain drives to 2 mini sas?17:16
drabisn't each SF8087 cable coming out with 4 sas cables?17:16
drabie 8 drives17:16
dirtycajunriceso the way it works (normally) is that the 2 mini sas cables go to the 2 IN ports on a backplane. That backplane handles more connections and normally has OUT sas connections for daisy chaining17:17
dirtycajunriceif you want an example look at a Dell R510 12 Bay. or a Dell R730XD17:18
drabok, right, see my 3), like I said it seems some backplanes have built in expanders17:18
drabso you do have your HBA going to an expander, it's just built into the backplane17:18
qman__yeah, your example isn't wrong, it just isn't commonly built that way17:18
qman__usually the expander is built into the backplane17:19
drabright17:19
dirtycajunriceyes what qman__ said17:19
dirtycajunricealmost ALWAYS17:19
dirtycajunriceare you against buying a rackmount ?17:19
drabthat's ok, still, there is an expander in the mix, it's not straight HBA to disks17:19
drabno I'm not, but I'm trying to get my terminology and design straight before I buy anything17:19
dirtycajunricedrab, no thats not possible. the technology isnt designed that way17:19
drabI dislike purchasing "black boxes"17:19
dirtycajunriceif you are about to purchase, do you want a wonderful site?17:20
drabebay.com ? :P17:20
dirtycajunricetrust me. i did white boxes.... IPMI is the way of the world17:20
dirtycajunricedrab, https://labgopher.com/17:20
dirtycajunriceit scrapes ebay, and gives you only helpful information17:20
qman__I usually ebay dell or supermicro17:20
drabqman__: yeah, I'm looking at a bunch of supermicros actually17:21
dirtycajunriceqman__, me as well. Using this site i got 2 620s, a 510, and a 420 all for under 1200 bucks17:21
drabx9s, the 10s seem still too expensive17:21
qman__nice17:21
dirtycajunricedell = hp > supermicro17:21
dirtycajunricethis is coming from an enterprise background.17:21
dirtycajunricenothing beats iLO/iDRAC17:21
qman__the problem with iDRAC in particular is that it has to have the full license17:22
drabI'll take that with a bag of salt if you don't mind... I've just about found any opinion and its opposite in a few days of googling... which isn't new, that's been about true for any tech I've ever looked at17:22
qman__I'm not familiar with how HP's stuff works17:22
qman__supermicro's stuff isn't sublicensed like that, so you get what you get17:22
draband of course all opinions coming from ppl with X years of experience :)17:22
dirtycajunriceim personally a dell guy... but qman__ idrac express does not require a licence...17:23
dirtycajunricedrab, that is true. "Mileage may vary" is the key phrase17:23
drabright17:23
qman__different license levels have different feature sets, and the cheapest license level's feature set is pretty lame17:23
qman__at least with many models17:23
dirtycajunriceall it needs is console and snmp17:23
qman__i.e. no console17:23
dirtycajunriceall the rest is just lagniappe17:24
qman__supermicro IPMI, on the other hand, is just one product, they don't have different licenses or feature sets17:24
dirtycajunriceits the AMD to Intel :P17:25
dirtycajunricebut that comes with the AMD bugs. a mileage may vary situation again17:25
qman__iDRAC has plenty of bugs too17:25
drabdirtycajunrice: labgopher is really neat, thanks for sharing17:26
dirtycajunricethe only bug that affects me is the browser one. but IPMI has browser compatibility with literally every vendor i have tested17:26
dirtycajunriceHP doesnt like firefox... Dell doesnt like chrome....17:26
dirtycajunriceetc etc17:26
dirtycajunriceoh and drab if you DO decide to go dell, most of the servers that are sold have idrac enterprise licence already added since they came from a working environment17:28
dirtycajunrice(idrac licences cant be migrated. they are bound to the machine they are installed on)17:28
JanCuntil someone decides you have to pay for a yearly license?  ;)17:31
drabI'm sorry I started this :P17:32
drabbut thanks for clarifying/confirming, I think I get what's what now17:32
dirtycajunricehaha its ok17:39
dirtycajunricethats the point of IRC17:39
dirtycajunricetalk17:39
dirtycajunriceargue17:39
dirtycajunricebe pedantic17:39
dirtycajunriceits fun :P17:39
=== jancoow_ is now known as jancoow
coreycbjamespage: is there a job that uploads cloud-archive-utils? we need a bionic version. https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/tools/+packages18:46
jonfatinoAnyone know why I can't echo something twice in bash but only once.19:12
jonfatinoDATE=date      echo $date   echo $date19:13
jonfatinoit only echos date once :-(19:13
TJ-jonfatino: "DATE=date; echo $DATE $DATE" ? or do you mean "DATE=date; echo $($DATE)" ?19:18
jonfatinoweird bash bug19:19
jonfatinoPASSWORD=$(date +%s|sha256sum|base64|head -c 32)19:19
jonfatinoecho $PASSWORD19:19
jonfatinoecho $PASSWORD19:19
jonfatinoFixed it19:19
naccjonfatino: there is a bash channel which is probably a better place to ask in the future19:21
drabuhm , not sure what you're running but DATE=`date` ; echo $DATE ; echo $DATE worked just fine for me19:21
dirtycajunricedrab,19:21
dirtycajunricedont you dare19:21
dirtycajunricebacktick19:21
dirtycajunriceever again19:21
dirtycajunriceXD19:21
drabit's the end of the season, I've heard backticks are trendy again19:22
dirtycajunricerofl19:22
sdezieldoing "VAR=value command" makes VAR available to that command only19:30
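(Illustrating sdeziel's point: the one-shot prefix form only affects that command's environment, not the current shell:)
    FOO=bar env | grep '^FOO='    # prints FOO=bar: the variable exists for env only
    echo "${FOO:-unset}"          # prints "unset": the current shell never saw it
    FOO=bar; echo "$FOO"          # a normal assignment, prints bar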
jamespagecoreycb: its part of the build recipe hooked up to the branch - you can add bionic and request a rebuild19:58
coreycbjamespage: ok i'll look for that, thx20:02
drabdirtycajunrice:21:13
dirtycajunricedrab,21:14
drabwhups, I was gonna say, about that labgopher, there seem to be a ton of G6, I think you said you have exp with HPs... aren't G6s too old?21:14
drabwe can't afford latest, but afaik we're at G9s, so G6s is 3 gens ago21:14
dirtycajunriceThey are old. But old is relative to what you are doing21:14
drabright21:14
dirtycajunricefor example21:14
dirtycajunricemy ESX 6.5 hosts are 620s for dell21:15
dirtycajunriceits the oldest you can go for esxi21:15
dirtycajunricebut my iSCSI server is a 51021:15
dirtycajunricebecause why not? cheaper and more bays21:15
dirtycajunriceand has no requirement to be super new21:15
drabI don't know HP, but in the case of SM for example, X8s are still kind of popular, but you can get X9s and they have a completely diff design mobo wise allowing much faster access to PCI-E (and therefore faster disk access with a PCI SAS HBA)21:15
dirtycajunriceso let the server's job dictate the cost21:15
dirtycajunricedrab, what is the goal of the server(s) you are looking to buy21:16
drabso it's not really worth to buy X8s when you can get X9s for about the same price21:16
drabNAS + VM host21:16
dirtycajunriceso 2 servers?21:16
drab24bays, doing nfs for homedirs and samba21:16
dirtycajunricewait wait backup21:16
dirtycajunricehow many servers21:17
drabone if possible, 2 diff zfs pools, hoping to put something like a E5-25xx in it, 8 cores21:17
dirtycajunriceok21:17
dirtycajunriceso 1 server21:17
dirtycajunrice24 2.5in bays?21:17
dirtycajunricebecause 3.5 in is in DAE territory21:17
drabfrom what I can see, 3.5, 2.5 seems too expensive21:17
drabwhat's DAE? direct attach something?21:17
drabDAS?21:17
dirtycajunriceDirectly attached Expansion21:18
drabok, different than a DAS?21:18
dirtycajunricea DAE is a shelf you attach to expand a DAS21:18
drabah, ok21:18
dirtycajunricebut you are multiusing your server so its foggy :P21:18
dirtycajunricelemme look21:18
dirtycajunricegimme 1021:18
drabyeah, well, we don't have the money (it's a charity) to get multiple machines (unless it makes sense to get 2 cheaper ones, but it's often not the case)21:19
drabbesides, the older the more generally power hungry21:19
dirtycajunricei mean...21:19
drabnot to mention that if you wanna hold on a few spare parts, like PSUs, you need twice as much21:19
dirtycajunriceto be honest21:19
dirtycajunrice24 bays is not cheap as 1 server21:19
dirtycajunricebut can absolutely be affordable as 221:19
dirtycajunriceare you using enterprise drives or consumer drives21:20
drabtrue, they could prolly do with 12. the thing is, I'm here, I may not be able to volunteer for them in the future so I'm trying to put in something that will last them 5-10 years, aside from adding some drives if their archives grow (they do a lot of media stuff for history projects)21:21
drabdirtycajunrice: enterprisey I'm hoping, maybe WD reds21:21
dirtycajunriceWD Reds are consumer drives21:22
dirtycajunriceenterprise drives are literally HP or dell signed drives from the manufacturer21:22
drabeeer, ok, fine, NAS drive then?21:22
dirtycajunricewith special Firmware21:22
dirtycajunriceit matters for if enterprise servers will read them21:22
dirtycajunricesec21:22
drabI see21:22
drabwell then no, no enterprise drives21:22
dirtycajunricewhats your budget?21:23
Epx998finally being asked to migrate off ub12....21:23
dirtycajunrice(i dont use enterprise drives either. i have 12 8TB toshiba x300s lol21:24
drabdirtycajunrice: about 1K including disks and will need at least 128GB to run all VMs21:25
dirtycajunricedrab, https://www.ebay.com/itm/DELL-POWEREDGE-R510-12-BAYS-2x-QUAD-CORE-L5520-2-26GHz-24GB-NO-HDD-NO-RAIL/132224991815?hash=item1ec9394247:g:nb0AAOSwJtdZ-g5g21:26
dirtycajunricethats the server21:26
dirtycajunriceyou can get caddys for like 30 bucks21:26
dirtycajunriceram is a mother right now tho21:27
dirtycajunricebut thats even a problem in consumer21:27
dirtycajunricethe market is artificially inflated21:27
drabthanks21:28
drabEpx998: I just finished that 2 weeks ago21:44
drabnow got a few 14 that I'm getting rid of and moving to containers21:44
drabthey actually had some ub11 too going around...21:44
drabdifferent question I guess I'm still confused about regarding HBAs21:45
Epx998from before21:45
drabLSI SAS 9211-4i PCI Express to 6Gb/s SAS HBA <-- this guy has one SF8087 port splitting to 4 lanes21:45
drabI'm understanding that each port can do 6Gb/s, ie that's not cumulative for all the ports at once, but that's the first thing I have doubts about21:46
drab"through four internal 6Gb/s ports" so it definitely seems it's 6Gb/s per port, however that SF cable is going at once to the backplane... does that mean it's the same as 24Gb/s to the backplane? which then the drives would share?21:48
drabok, I think the SAS configuration table explains that, full duplex SAS is 4.8GBps, so about 400MB/s per drive22:02
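(Roughly: 4 lanes x 6 Gb/s = 24 Gb/s per direction, about 2.4 GB/s of payload after 8b/10b encoding; counting both directions of full duplex gives the ~4.8 GB/s figure, and spread over 12 drives that is about 400 MB/s each, assuming a single 4-lane SFF-8087 link to the backplane.)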
sarnolddrab: my home machine uses an sas expander; I think either sas port on the HBA can drive any of the drives22:20
sarnolddrab: so that's roughly eight sata-lanes of performance, and I've got nine drives plugged into the thing; the lights all seem to blink simultaneously though so it feels more than good enough at the job, haha22:21
sdeziel8 SAS drives hooked to a "home" machine surely is good enough ;)22:22
sarnoldsdeziel: I blame my friend who talked me into 3-way mirrors22:23
sdezielsarnold: I thought that friends only recommend mirror for ZFS :P22:23
sarnoldsdeziel: two-way mirrors? your friends must not care for your data much :)22:24
drabeer, a 3-way mirror sounds like a logical impossibility... how's a mirror 3 way? :)22:25
sarnolddrab: easy: zpool create pool mirror sda sdb sdc mirror sdd sde sdf mirror sdg sdh sdi22:25
sarnoldtada!22:26
sarnoldthree 3-way mirrors! :)22:26
draboh, a mirror with 3 vdevs, I see, ok22:26
sarnoldvdev with 3 disks :)22:26
drabeer, that one22:27
* drab runs 6 disks in raidz222:27
drabseems good enough with 2 disks possible failure and more available space, no?22:27
drabon a 3-disk vdev you get the same 2 drive failures, but the capacity of only one22:28
drabunless I'm missing something again22:28
sarnoldyeah you've got a pretty good sweet spot there22:28
sarnoldbut I'd expect roughly nine times 100MB/s for bulk reads from three 3-way mirrors, vs roughly 100MB/s bulk reads from 6-disk raidz222:29
sdezielI've read somewhere that it was a pain to grow a RAIDZ(2) setup22:30
sarnold(I don't think I can actually get my queue depths deep enough to get that kind of throughput though)22:30
sdezieltrying to find the link to that22:30
drabah, that's new. would love to read that22:30
sarnoldsdeziel: it definitely is; the next logical step is to add another six disks in a new vdev, and of course then all the writes would go to the new vdev until they're about the same capacity...22:30
sdezielhttp://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/22:31
drabthat said, the plan was (given the 12 bays), to just add another vdev with the next batch of 6 hds22:31
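(Which, when the time comes, is a single command; pool and device names are illustrative:)
    zpool add tank raidz2 sdg sdh sdi sdj sdk sdl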
sarnoldbut I could easily add just another three disks to mine .. and suffer the same write-problem :)22:31
sdezielsarnold: yeah, that was my recollection from this article ^22:31
drabah, I learned something, I guess I misunderstood how vdevs were added together, I was still thinking raid1022:36
sarnolddrab: that's probably the right intuition22:37
drabI'm unsure about the maintenance windows tho, once a vdev with a 2-disk mirror has a failure you are one disk away from losing everything22:39
drabso yeah, it seems to me if you're gonna be running mirrors then it has to be 3-disks per mirror22:39
draband that gets kind of expensive22:39
sarnoldyes it does22:40
sarnoldwhich is why a 6-disk raidz2 feels like a very nice sweet spot22:40
drabspeaking of ZFS, the other thing I had misunderstood (no wonder..) is how the ZIL is supposed to work22:40
drabI thought writes would go to it, ie behave like a write cache22:40
drabbut that's only for synchronous writes afaiu22:41
sarnoldthere's several interacting concepts here22:42
drabasync writes end up in mem and never touch the ZIL/SLOG22:42
sarnoldall pools have ZIL, you can use a SLOG to put the ZIL on super-fast storage22:43
drabright22:43
drabbasically before going with ZFS I had looked at... I now forget the name... for linux22:43
drabwhere you basically end up with something like a SHDD22:43
sarnoldbcachefs?22:43
drabputting a couple of SSDs in front of a bunch of disks22:43
drabah yes, that's the one22:43
drabI thought ZFS with SLOG on a diff disk would work like that, but that's not the case22:44
draband it seems in a sense less performant than a setup with bcachefs22:44
drabbecause in terms of returning to the app, with bcachefs you only have to wait for writes to be done in the SSD22:44
sarnoldindeed, writes to the main drives still get flushed within a few seconds as writes to the slog, but the application is allowed to continue once the write to the slog is complete22:44
drabmmmh, but that seems true only for synchronous writes22:45
sarnoldI found that far fewer operations go through the slog than I expected. My intuition suggested that atomic operations like mkdir would go through the slog but I _never_ saw the slog write counters increment no matter what workload I tried :)22:45
drabhttps://github.com/zfsonlinux/zfs/issues/101222:46
drabhttps://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/22:47
drabUse case: If your use case involves synchronous writes, utilizing a SLOG for your ZIL will provide benefit. Database applications, NFS environments, particularly for virtualization, as well as backups are known use cases with heavy synchronous writes.22:48
drabwhich goes to the point that for async writes the whole ZIL/SLOG seems unhelpful22:48
drabfrom that second link, forcing all writes to be sync seems to be a matter of safety against losing data, but to me it seems it would actually make a diff in terms of performance, as control will be returned to the app as soon as data is written to the SLOG22:49
drabI guess I'll have to test that22:49
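(That test boils down to flipping the dataset's sync property; assuming a pool named tank:)
    zfs set sync=always tank      # route every write through the ZIL/SLOG
    zfs set sync=standard tank    # back to the default behaviour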
sarnolddrab: fwiw i'm quite happy to leave the defaults at the defaults22:50
sarnolddrab: and even though I've got a partition of my nvme set aside for slog, one of these reboots I'm going to disable it and just use the whole nvme for l2arc instead22:50
drabyeah, I gave a share pof the NVME to slog right now and the rest of l2arc, but I'm fundamentally bugged by this default behavior22:51
drabI would basically except, like in the case of bcachefs, to basically see nvme-like speeds for all writes22:52
drabwith "long term" storage being a sort of deferred write, ie from nvme device to HDDs22:52
drabso app -> mirror nvme -> raidz2 on 6 drives22:53
drabbasically use the NVME as a cheaper version of those super expensive battery backed ram, zeusram or whatever, forgot what it's called22:54
keithzgHmm has anyone ever had Windows users experiencing DNS cache corruption or such while connected to an OpenVPN server? Trying to debug some users' sporadic issues (the VPN server is running Ubuntu 16.04 of course) and it's narrowing down to a point that's simultaneously very specific yet extremely mysterious.22:58
