/srv/irclogs.ubuntu.com/2015/10/13/#ubuntu-server.txt

JanCisn't any different from how CPU caches work00:00
JanCdrive rotational speed is irrelevant00:01
RoyKas in how Dell Compellent does it - 4MB blocks move around - if they're accessed frequently, they're moved up - you don't want to update the block on the slower tier00:02
RoyKwith caching, you always update the 'low tier'00:03
JanCso basically it's a caching system with lazy write-back  ;)00:03
RoyKwith tiering you move the data00:03
JanCand all the "tiering" is just marketing speak00:03
RoyKno00:03
RoyKyou don't write back00:03
RoyKyou move the blocks to another tier00:03
IPUI only know tiers from the storage field... where files are distributed across the different tiers based on their access count00:03
RoyKIPU: usually blocks, not files00:04
RoyKIPU: most storage systems have no idea of what a file is00:05
JanCit will "write-back" when some time-out is triggered, or space on the higher tier runs out00:05
IPUfiles that are accessed often lie on the faster tiers, and files seldom used end up on tape or something00:05
JanCwhich is exactly how caches work00:05
RoyKJanC: that's not a write-back, it's a tier change - different philosophy00:06
JanCRoyK: it's exactly how CPU caches work though  ;)00:06
IPURoyK: files, blocks... sorry it's some time ago since i had to learn it ^^00:06
RoyKJanC: with caching, everything goes through the L1 and then L2 etc00:06
JanCno, it doesn't00:06
IPURoyK: afair on those systems we looked at, the tiering is managed by some software... not the filesystem itself00:07
JanCat best you could argue that it is a parallel caching system00:07
JanCinstead of a hierarchical one00:08
RoyKIPU: makes sense - very few storage systems know what's on them - it's just 010101011110001010100:08
RoyKJanC: well, in caching, the cache isn't used for anything but caching, but in tiering, each tier holds actual data not in any other tier00:09
pmatulisdell buys emc, yikes00:09
JanCRoyK: eh, CPU & HDD caches hold data that is not in the other level all the time00:10
JanCthat's the point exactly00:10
RoyKJanC: not really - cache is used for caching, temporary storage of data, not permanent storage as with tiering00:11
RoyKJanC: even though the semantics are similar00:12
IPUi miss the good 'ol dlt tapes somehow...00:12
JanCas I explained before, caching does not have to be volatile00:12
RoyKpmatulis: wtf?00:12
RoyKpmatulis: so that's why they were saying equallogic wasn't to be developed after 2017...00:13
RoyKseems "dell storage" is getting even more complicated00:13
RoyKpmatulis: got an url on that one?00:13
* JanC wonders why it's so hard for people to look at the abstract mechanisms/algorithms behind buzzwords00:13
IPUit never gets boring if you have to use them ^^00:13
RoyKJanC: well, caching and tiering are still two different things, even though they relate00:15
RoyKdamn - we'll be running Dell vmware00:16
RoyKas if the vmware support wasn't bad enough00:17
JanCRoyK: feel free to believe the marketing speak (and the patent speak, I guess), but essentially there is no difference between the two (unless your definition of caching is narrowly defined by marketing to begin with)00:23
JanCand if you know how to program a bit, think about how you would implement both00:24
* pmatulis didn't know vmware was owned by emc, damn00:29
RoyKJanC: not trying to start a fight here, but afaik tiering actually moves the data around instead of just caching what's most used00:29
pmatulishttp://wp.me/p1FaB8-57Yt00:29
IPUhttp://storageswiss.com/2014/01/15/whats-the-difference-between-tiering-and-caching/00:29
IPUJanC: is there really no difference?00:29
RoyKI read up about vmware vsan and it said it was using tiering - it doesn't - it just caches00:31
JanCthe move vs. copy thing is a fallacy; eventually modified data will be written back when it's not "hot" anymore00:46
JanCwhich doesn't mean those solutions are bad; finding the right balance in a caching system is hard00:47
jamespagecoreycb`, https://bugs.launchpad.net/ubuntu/+source/heat/+bug/150544400:48
ubottuLaunchpad bug 1505444 in heat (Ubuntu) "Package missing file" [Undecided,New]00:48
jamespageif you are still around :-)00:48
JanCor call it a caching/tiering system if you want00:49
RoyKJanC: it all depends on size00:49
RoyKJanC: if you have 10PiB of data, you can't just do caching00:49
jamespagecoreycb`, also I think we're only on rc1 for heat - can we rev to rc2 pls00:49
JanC"just do caching" seems to refer to some some old interpretation of "caching"00:50
coreycb`jamespage, ok00:50
RoyKJanC: no, not really, you want hot data where it belongs and cold data where it belongs and not just copying whatever's hot upwards00:52
JanCs/old/limited/00:52
JanCRoyK: which is exactly what caching is about?00:53
RoyKJanC: not really - you *move* the blocks upward, not merely copy them00:54
RoyKcaching is copying00:54
JanCsuch a "move" is copy + delete00:54
RoyKobviously, yes00:54
JanCwhich is no different from copy + invalidate00:55
RoyKcaching won't clear space for more cold data, it'll just fill up more hot data on the cold space and use even more iops for the writes00:56
JanCthat's totally implementation-dependent00:56
RoyKwell, so far, all I've seen of caching systems (SSD cache whatever) only caches (copies) and keeps the original in sync00:57
JanCe.g. CPU cache levels do actually interact in such ways (in some CPUs at least)00:57
RoyKthen why do people call deduplication something new? it's just compression, right, on the macro-scale?00:58
RoyKanyway - tested vmware VSAN with some old boxes, I thought it'd show total space = spinning rust + ssd, but only spinning rust was calculated as space, so that obviously only does old-time caching01:00
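A toy shell sketch of the copy-vs-move distinction being argued above (directory and block names are made up; real arrays do this on blocks inside the box, not on files):

    #!/bin/sh
    # toy illustration only: two temp directories stand in for a fast (SSD)
    # and a slow (spinning-rust) tier, one file stands in for a hot 4MB block
    FAST=$(mktemp -d); SLOW=$(mktemp -d)
    dd if=/dev/zero of="$SLOW/block-0042" bs=1M count=4 2>/dev/null

    # "caching" in RoyK's sense: copy the hot block up; the slow copy stays
    # authoritative, so no space is freed on the slow tier
    cp "$SLOW/block-0042" "$FAST/block-0042"

    # "tiering": move the block so it lives on exactly one tier; the slow
    # tier's space is freed, and demoting it later means writing it back down
    mv "$SLOW/block-0042" "$FAST/block-0042"

    # JanC's point is that the move is just the copy plus a delete/invalidate,
    # which is why he reads tiering as caching with lazy write-back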
=== markthomas is now known as markthomas|away
patdk-lapI tend not to like auto-level systems02:07
patdk-lapI have a very cold dataset, it is only used once a month02:07
patdk-lapbut when we use it, it has to be as fast as possible, cause we burn through it02:08
=== ming is now known as Guest52262
phunyguySo, anyone in here have a ton of experience with iSCSI (client side) on Ubuntu?05:07
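phunyguy's question goes unanswered in this log; for reference, the usual client-side flow on Ubuntu uses open-iscsi, roughly like this (portal address and IQN below are placeholders):

    sudo apt-get install open-iscsi
    # discover targets offered by the portal
    sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    # log in to one of the discovered targets
    sudo iscsiadm -m node -T iqn.2015-10.example:target0 -p 192.168.1.50 --login
    lsblk    # the LUN should show up as a new block device, e.g. /dev/sdb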
lordievaderGood morning07:03
=== frediz_ is now known as frediz
rbasakAnyone else unable to reach https://bz.apache.org/bugzilla/show_bug.cgi?id=57328 ?07:57
ubottubz.apache.org bug 57328 in Core "Invalid memory access on ap_server_config_defines" [Critical,Resolved: fixed]07:57
lordievaderrbasak: Page works just fine here.07:59
rbasaklordievader: thanks. That helped me pin it down. It's not responding on IPv6 but is on IPv4.08:00
lordievaderHmm, I should be connecting over ipv6. Though chrome might switch over quickly.08:01
lordievaderAh yes, forcing wget to connect over ipv6 hangs...08:02
rbasakI'll email them. Thank you for confirming.08:04
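For reference, the quickest way to reproduce the check rbasak and lordievader just did is to force each address family separately (wget and curl both accept -4/-6):

    wget -4 -O /dev/null 'https://bz.apache.org/bugzilla/show_bug.cgi?id=57328'
    wget -6 -O /dev/null 'https://bz.apache.org/bugzilla/show_bug.cgi?id=57328'  # this one hung
    # or headers only, via curl:
    curl -4 -sI https://bz.apache.org/ | head -n1
    curl -6 -sI https://bz.apache.org/ | head -n1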
T3DDYDoes anyone use any online drive software?08:41
lordievaderT3DDY: What do you mean with 'online drive software'?08:44
T3DDYlordievader: Like a website that you can upload files tooooooo?08:44
lordievaderAh, no I don't. Got my own server to host my files.08:45
T3DDYSomething like owncloud that you can host yourself, so you can upload to it when you're out, instead of using FTP and things08:46
=== cipi is now known as CiPi
=== CiPi is now known as cipi
=== Lcawte|Away is now known as Lcawte
=== balloons is now known as Guest77290
jgehey guys, so I recently set up rsnapshot with retention of 7 days, 4 weeks and 12 months. However, when I go into the weekly snapshots I only see a single set, but I'm expecting to see 7 days for that given week, no?14:20
jgeor is it once a week, once a month? which would be pointless... could someone confirm pls14:22
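What jge is seeing is rsnapshot working as designed, as far as I can tell: each weekly.N is a single point-in-time snapshot, not a bundle of seven dailies - when "rsnapshot weekly" runs it rotates the weekly set and promotes the oldest daily into weekly.0. A retention setup like the one described would look roughly like this (the keyword is retain or interval depending on the rsnapshot version, and each level needs its own cron entry):

    # fields in /etc/rsnapshot.conf must be TAB-separated
    grep -E '^(retain|interval)' /etc/rsnapshot.conf
    #   retain  daily   7
    #   retain  weekly  4
    #   retain  monthly 12

    # each level is then run on its own schedule, typically from cron:
    sudo rsnapshot daily      # the actual rsync happens here
    sudo rsnapshot weekly     # rotates weekly.{0..3}, promotes the oldest daily
    sudo rsnapshot monthly    # same idea, one level up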
hallyndannf: all tests pass with the merged kernel, i'll merge in your diff now and push to that same ppa.  so when ~ppa2 appears it should be ready for you14:43
dannfhallyn: X-cellent15:00
=== Lcawte is now known as Lcawte|Away
=== Guest48 is now known as tthoren
coreycb`jamespage, does a Conflicts sound ok for neutron-lbaas-agent and neutron-lbaasv2-agent?15:42
coreycb`in d/control for the binaries, that is15:43
=== markthomas|away is now known as markthomas
=== g4mby is now known as PaulW2U
tewardcan a server team dev provide me a second opinion glance at a patch please, before I consider it for inclusion?17:08
teward(in nginx)17:08
tewardhttps://bugs.launchpad.net/ubuntu/+source/nginx/+bug/1505734 has a patch attached that would 'resolve' the issue, but i would like it reviewed prior to me doing anything (whether an upload, because I can for nginx, or meh).  It's a change to an init script so I would like a second review from someone other than me.17:11
ubottuLaunchpad bug 1413555 in nginx (Ubuntu Trusty) "duplicate for #1505734 init script fails with error code 0 when configuration test doesn't pass" [Low,Confirmed]17:11
tewardyes it's a duplicate.17:11
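A minimal sketch of the pattern the bug is about - this is not the patch attached to the report, just the shape of testing the configuration and propagating a non-zero exit status instead of returning 0:

    #!/bin/sh
    # sketch only; the real nginx init script does much more than this
    DAEMON=/usr/sbin/nginx

    if ! "$DAEMON" -t -q; then
        echo "nginx: configuration test failed" >&2
        exit 1    # fail loudly rather than exiting 0
    fi
    # ...carry on with the reload/restart action here...
    exit 0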
=== Lcawte|Away is now known as Lcawte
samba35how do I check all services started since bootup17:53
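samba35's question doesn't get an answer here; depending on the release, the usual options are:

    # systemd (Ubuntu 15.04 and later):
    systemctl list-units --type=service    # service units loaded this boot
    journalctl -b                          # everything logged since boot
    # upstart (Ubuntu 14.04):
    sudo initctl list
    service --status-all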
jamespagecoreycb`, do they actually conflict? I don't know the answer to that - can we run v1 and v2 on the same units?17:57
coreycb`jamespage, there was a conflict with neutron-lbaas-agent-netns-cleanup but that could probably be worked around a different way18:05
hallyndannf: package built and qa-regression-tests passed.18:08
=== cipi is now known as CiPi
dannfhallyn: great, i'll go ahead and pull it into our customer ppa18:15
=== lborda is now known as lborda-sprint
=== coreycb` is now known as coreycb
jak2000hi all20:09
jak2000i want to check the filesystem, how to?20:09
jak2000fsck?20:09
bekksDepends on the filesystem.20:10
geniiGenerally if you just run fsck on it, the proper filesystem type will be chosen automatically20:12
jak2000bekks!!!!20:17
jak2000genii thanks20:17
jak2000when I turn off the PC without logging out or running the shutdown or halt command,20:18
jak2000when it starts it runs fsck, right?20:18
bekksWhy do you turn off your PC without a proper shutdown? :)20:19
bekksAnd there may be situations when the startup does not invoke a fsck automatically.20:19
geniijak2000: There's a few better ways to do it. From a livecd, or from recovery option in Grub, or sudo touch /forcefsck20:30
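Putting genii's and bekks' suggestions together, the two common safe ways to do it (the device name is an example; never fsck a filesystem that is mounted read-write):

    # schedule a check of the root filesystem for the next boot:
    sudo touch /forcefsck
    sudo reboot

    # or check a non-root filesystem directly after unmounting it:
    sudo umount /dev/sdb1    # example device
    sudo fsck -f /dev/sdb1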
=== JanC_ is now known as JanC
faddatHi everyone, I want to set up ubuntu openstack22:40
faddat...and I'm missing "dual NICs and dual HDDs"22:40
faddatbut have 10 boxes..... is this a doable thing?22:40
faddatWhat do I lose?22:40
sarnoldthose probably aren't strict requirements but it may make it harder to use the automated tools to help deploy the whole thing22:42
faddathm.  This is my first install.  Sarnold, have you done this before?  How intense is it?22:44
faddateg-- I probably need the automated tools even though I've done several clusters (k8s, rancher, kontina, flynn, deis, etc etc etc...)22:44
sarnoldfaddat: I've done bits and pieces of it, never deployed a full thing myself.22:45
faddatwell, some is better than none:  How rough is it?22:46
sarnoldfaddat: openstack is fairly complex, there's a thousand different supported configurations and everything is far too configurable for my taste22:46
sarnoldbut the end result is nice enough to use as a user so it might be worth sticking it out and making it work22:46
faddatdoes ubuntu's "distro" result in an "opinionated" system of openstack?22:47
faddat(I also hate insane levels of configurability)22:47
sarnoldI think the automated "autopilot" thing probably does help a lot22:47
faddatoh another question....22:56
faddatanyone know how to build a cluster of LXD servers?22:56
faddatI know how to build one of them22:56
faddatdon't know how to construct the cluster though22:56
sarnoldas I understand it, you more or less have independent relationships with each of the servers22:57
sarnoldwhen you want to start something up, you ask a specific lxd server to start whatever it is you want22:57
faddathow would you then ask that server to migrate an instance to another?22:57
sarnoldyou can move containers between them as you wish, but I don't think there's anything to bind them together as a cluster22:57
tewardsarnold: ping, got a security related question22:58
faddatpity.  That seems like it would scale quite poorly....22:58
sarnoldhey teward :)22:58
tewardsarnold: incoming PM22:58
sarnoldfaddat: it'd be like "lxc move host1:container1 host2:"22:58
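Spelling sarnold's example out a little - host names are placeholders, and the lxc client syntax was still changing quickly in 2015, so treat this as the shape rather than gospel:

    # both hosts run lxd and are reachable over the network
    lxc remote add host1 host1.example.com
    lxc remote add host2 host2.example.com

    lxc list host1:                               # containers on host1
    lxc stop host1:container1
    lxc move host1:container1 host2:container1    # the migration sarnold describes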
faddathm22:58
tewardsarnold: then i have an unrelated question, if you've got a few seconds for patch review/opiniongiving :)22:59
sarnoldteward: if it's the nginx init script thing, I gave that a quick look and realized I don't know how the initscripts are supposed to work :) sorry23:01
tewardlol23:01
tewardmeh, tis fine i'll poke Debian and yell at em23:01
tewardi need them to make a decision on the nginx packaging ANYWAYS23:01
tewardsince they've not done 1.9.5 packaging23:01
sarnoldhehe23:02
sarnoldwait, no? I saw it in sources.debian.net earlier..23:02
sarnoldoh that was 1.9.4 http://sources.debian.net/src/nginx/1.9.4-1/23:02
sarnoldfaddat: there's a nova-lxd something or other module underway as well; that might be the "scales" option, beyond managing the lxd hosts individually23:03
faddatyeah, totally23:04
faddatin fact23:04
faddatI'm investigating openstack right now23:04
sarnoldfaddat: but of course that drags along the rest of openstack. I like how simple lxd's interface is, and not having different "tenants" with different security properties looks like a big part of that simplicity.23:04
faddatright?23:04
faddatA coreos-style cluster of LXD servers (or even... rancher-on-ubuntu-style) would be incredible23:05
sarnoldyeah. there's places where it makes sense, but for my home use, openstack is a bit overkill. lxd looks like a better fit. hehe :)23:05
tewardsarnold: 1.9.423:05
tewardnot 1.9.523:05
tewardand it's been out for over a month23:05
tewardPPAs delayed for the same reasons23:05
teward(and others)23:05
faddatwhat we're building, it will need to scale, but even with that stated, I can't really say that openstack would surely be worth it23:06
faddatseems a ton of overhead, and a great number of machines dedicated to coordinating as opposed to doing the gruntwork23:06
sarnoldmy favorite is the guide on HA openstack that starts with "you'll need 28 computers..."23:07
tewardlol23:07
faddatsee that'd be okay in fact-- but then it gets into "and they'll need 4 NICs each (well 3 of them will, and then another 3.5 will need 3 HDDs of the weasel variety....)"23:08
sarnoldyeah, sometimes folks are trying to build their clouds with NUCs and sometimes with multi-socket xeon monsters with 100gb networking and ...23:09
faddatyes, exactly23:10
faddatI just did an inventory of what I have available23:10
faddatand if I bring in 6 machines from another site, I may have enough to get this rolling.  My ideal, I think is a generic installation of openstack, though that would mean missing out on all of the Ubuntu goodness...23:11
=== Lcawte is now known as Lcawte|Away
