[00:00] isn't any different from how CPU caches work
[00:01] drive rotational speed is irrelevant
[00:02] as in how Dell Compellent does that - blocks 4MB big move around - if they are accessed frequently, they are moved up - you don't want to update the block on the slower tier
[00:03] with caching, you always update the 'low tier'
[00:03] so basically it's a caching system with lazy write-back ;)
[00:03] with tiering you move the data
[00:03] and all the "tiering" is just marketing speak
[00:03] no
[00:03] you don't write back
[00:03] you move the blocks to another tier
[00:03] i only know tiers from storage arrays... on which files are distributed on the different tiers based on their access count
[00:04] IPU: usually blocks, not files
[00:05] IPU: most storage systems have no idea of what a file is
[00:05] it will "write-back" when some time-out is triggered, or space on the higher tier runs out
[00:05] files that are often accessed are lying on the faster tiers and files seldom used on tape or something
[00:05] which is exactly how caches work
[00:06] JanC: that's not a write-back, it's a tier change - different philosophy
[00:06] RoyK: it's exactly how CPU caches work though ;)
[00:06] RoyK: files, blocks... sorry, it's some time ago since i had to learn it ^^
[00:06] JanC: with caching, everything goes through the L1 and then L2 etc
[00:06] no, it doesn't
[00:07] RoyK: afair on those systems we looked at, the tiering is managed from some software... not the filesystem itself
[00:07] at best you could argue that it is a parallel caching system
[00:08] instead of a hierarchical one
[00:08] IPU: makes sense - very few storage systems know what's on them - it's just 0101010111100010101
[00:09] JanC: well, in caching, the cache isn't used for anything but caching, but in tiering, each tier holds actual data not in any other tier
[00:09] dell buys emc, yikes
[00:10] RoyK: eh, CPU & HDD caches hold data that is not in the other level all the time
[00:10] that's the point exactly
[00:11] JanC: not really, cache is used for caching, temporary storage, of data, not permanent storage as with tiering
[00:12] JanC: even though the semantics are similar
[00:12] i miss the good ol' dlt tapes somehow...
[00:12] as I explained before, caching does not have to be volatile
[00:12] pmatulis: wtf?
[00:13] pmatulis: so that's why they were saying equallogic wasn't to be developed after 2017...
[00:13] seems "dell storage" is getting even more complicated
[00:13] pmatulis: got an url on that one?
[00:13] * JanC wonders why it's so hard for people to look at the abstract mechanisms/algorithms behind buzzwords
[00:13] it never gets boring if you have to use them ^^
[00:15] JanC: well, caching and tiering are still two different things, even though they relate
[00:16] damn - we'll be running Dell vmware
[00:17] as if the vmware support wasn't bad enough
[00:23] RoyK: feel free to believe the marketing speak (and the patent speak, I guess), but essentially there is no difference between the two (unless your definition of caching is narrowly defined by marketing to begin with)
[00:24] and if you know how to program a bit, think about how you would implement both
[00:29] * pmatulis didn't know vmware was owned by emc, damn
[00:29] JanC: not trying to start a fight here, but afaik tiering actually moves the data around instead of just caching what's most used
[00:29] http://wp.me/p1FaB8-57Yt
[00:29] http://storageswiss.com/2014/01/15/whats-the-difference-between-tiering-and-caching/
[00:29] JanC: is there really no difference?
[00:31] I read up about vmware vsan and it said it was using tiering - it doesn't - it just caches
[00:46] the move vs. copy thing is a fallacy; eventually modified data will be written back when it's not "hot" anymore
[00:47] which doesn't mean those solutions are bad; finding the right balance in a caching system is hard
[00:48] coreycb`, https://bugs.launchpad.net/ubuntu/+source/heat/+bug/1505444
[00:48] Launchpad bug 1505444 in heat (Ubuntu) "Package missing file" [Undecided,New]
[00:48] if you are still around :-)
[00:49] or call it a caching/tiering system if you want
[00:49] JanC: it all depends on size
[00:49] JanC: if you have 10PiB of data, you can't just do caching
[00:49] coreycb`, also I think we're only on rc1 for heat - can we rev to rc2 pls
[00:50] "just do caching" seems to refer to some old interpretation of "caching"
[00:50] jamespage, ok
[00:52] JanC: no, not really, you want hot data where it belongs and cold data where it belongs and not just copying whatever's hot upwards
[00:52] s/old/limited/
[00:53] RoyK: which is exactly what caching is about?
[00:54] JanC: not really - you *move* the blocks upward, not merely copy them
[00:54] caching is copying
[00:54] such a "move" is copy + delete
[00:54] obviously, yes
[00:55] which is no different from copy + invalidate
[00:56] caching won't clear space for more cold data, it'll just fill up more hot data on the cold space and use even more iops for the writes
[00:56] that's totally implementation-dependent
[00:57] well, so far, all I've seen of caching systems (SSD cache whatever) only caches (copies) and keeps the original in sync
[00:57] e.g. CPU cache levels do actually interact in such ways (in some CPUs at least)
[00:58] then why do people call deduplication something new? it's just compression, right, on the macro-scal?
[00:58] scale
[01:00] anyway - tested vmware VSAN with some old boxes, I thought it'd show total space = spinning rust + ssd, but only spinning rust was calculated as space, so that obviously only does old-time caching
=== markthomas is now known as markthomas|away
[02:07] I tend not to like auto-level systems
[02:07] I have a very cold dataset, it is only used once a month
[02:08] but when we use it, it has to be as fast as possible, cause we burn through it
=== ming is now known as Guest52262
[05:07] So, anyone in here have a ton of experience with iSCSI (client side) on Ubuntu?
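[Editor's note: JanC's suggestion above ("think about how you would implement both") is easy to make concrete. The sketch below is a toy illustration only, not how Compellent, VSAN, or any other product actually works; the block IDs, capacities, and class names are made up. In the "cache" variant the slow tier keeps an authoritative copy of every block and dirty blocks are lazily written back; in the "tiering" variant a hot block is moved, so exactly one tier owns it at any time.]

```python
# Toy two-tier store illustrating the caching-vs-tiering argument in the log.
# All names and capacities are hypothetical.

class CachedStore:
    def __init__(self, fast_capacity):
        self.slow = {}            # authoritative copy of every block
        self.fast = {}            # temporary copies of hot blocks
        self.dirty = set()        # fast-tier blocks not yet written back
        self.fast_capacity = fast_capacity

    def read(self, block):
        if block in self.fast:
            return self.fast[block]
        data = self.slow[block]
        self._promote(block, data)          # copy up; the original stays below
        return data

    def write(self, block, data):
        self._promote(block, data)
        self.dirty.add(block)               # lazy write-back, as discussed above

    def _promote(self, block, data):
        if self.fast and len(self.fast) >= self.fast_capacity:
            victim, vdata = self.fast.popitem()
            if victim in self.dirty:        # write back only if modified
                self.slow[victim] = vdata
                self.dirty.discard(victim)
        self.fast[block] = data


class TieredStore:
    def __init__(self, fast_capacity):
        self.slow = {}
        self.fast = {}
        self.fast_capacity = fast_capacity

    def touch(self, block):
        """Promote a hot block: move it, don't copy it."""
        if block in self.slow and len(self.fast) < self.fast_capacity:
            self.fast[block] = self.slow.pop(block)   # block now lives in one tier only

    def demote_coldest(self):
        if self.fast:
            block, data = self.fast.popitem()
            self.slow[block] = data                   # moved back down, freeing fast space


if __name__ == "__main__":
    cache = CachedStore(fast_capacity=2)
    cache.slow.update({"blk1": b"a", "blk2": b"b", "blk3": b"c"})   # seed the slow tier
    cache.read("blk1")
    cache.write("blk2", b"B")
    print(sorted(cache.fast), sorted(cache.slow))   # blk2 exists in both tiers (copy)

    tiers = TieredStore(fast_capacity=2)
    tiers.slow.update({"blk1": b"a", "blk2": b"b"})
    tiers.touch("blk1")
    print(sorted(tiers.fast), sorted(tiers.slow))   # blk1 now lives only in the fast tier (move)
```

[Either way the hot block ends up on the fast tier; the disagreement in the log is essentially over whether the slow tier keeps an authoritative copy (cache) or gives up ownership (tiering).]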
[07:03] Good morning
=== frediz_ is now known as frediz
[07:57] Anyone else unable to reach https://bz.apache.org/bugzilla/show_bug.cgi?id=57328 ?
[07:57] bz.apache.org bug 57328 in Core "Invalid memory access on ap_server_config_defines" [Critical,Resolved: fixed]
[07:59] rbasak: Page works just fine here.
[08:00] lordievader: thanks. That helped me pin it down. It's not responding on IPv6 but is on IPv4.
[08:01] Hmm, I should be connecting over ipv6. Though chrome might switch over quickly.
[08:02] Ah yes, forcing wget to connect over ipv6 hangs...
[08:04] I'll email them. Thank you for confirming.
[08:41] Does anyone use any online drive software?
[08:44] T3DDY: What do you mean with 'online drive software'?
[08:44] lordievader: Like a website that you can upload files tooooooo?
[08:45] Ah, no I don't. Got my own server to host my files.
=== frediz_ is now known as frediz
[08:46] Something like owncloud that you can host yourself so you can upload to it when you're out, instead of using FTP and things
=== cipi is now known as CiPi
=== CiPi is now known as cipi
=== Lcawte|Away is now known as Lcawte
=== balloons is now known as Guest77290
[14:20] hey guys, so I recently set up rsnapshot with retention of 7 days, 4 weeks and 12 months. However, when I go into weeks I only see a single set but I'm expecting to see 7 days for that given week, no?
[14:22] or is it once a week, once a month? which would be pointless.. could someone confirm pls
[14:43] dannf: all tests pass with the merged kernel, i'll merge in your diff now and push to that same ppa. so when ~ppa2 appears it should be ready for you
[15:00] hallyn: X-cellent
=== Lcawte is now known as Lcawte|Away
=== Guest48 is now known as tthoren
[15:42] jamespage, does a Conflicts sound ok for neutron-lbaas-agent and neutron-lbaasv2-agent?
[15:43] in d/control for the binaries, that is
=== markthomas|away is now known as markthomas
=== g4mby is now known as PaulW2U
[17:08] can a server team dev provide me a second opinion glance at a patch please, before I consider it for inclusion?
[17:08] (in nginx)
[17:11] https://bugs.launchpad.net/ubuntu/+source/nginx/+bug/1505734 has a patch attached that would 'resolve' the issue, but i would like it reviewed prior to me doing anything (whether upload because I can for nginx, or meh). It's a change to an init script so i would like second reviews other than me.
[17:11] Launchpad bug 1413555 in nginx (Ubuntu Trusty) "duplicate for #1505734 init script fails with error code 0 when configuration test doesn't pass" [Low,Confirmed]
[17:11] yes it's a duplicate.
=== Lcawte|Away is now known as Lcawte
[17:53] how do i check all services started since bootup
[17:57] coreycb`, do they actually conflict? I don't know the answer to that - can we run v1 and v2 on the same units?
[18:05] jamespage, there was a conflict with neutron-lbaas-agent-netns-cleanup but that could probably be worked around a different way
[18:08] dannf: package built and qa-regression-tests passed.
=== cipi is now known as CiPi
[18:15] hallyn: great, i'll go ahead and pull it into our customer ppa
=== lborda is now known as lborda-sprint
=== coreycb` is now known as coreycb
[20:09] hi all
[20:09] i want to check the filesystem, how to?
[20:09] fsck?
[20:10] Depends on the filesystem.
[20:12] Generally if you just run fsck on it, the proper filesystem type will be chosen automatically
[20:17] bekks!!!!
[20:17] genii thanks
[20:18] when i turn off the PC without logout or shutdown command or halt command,
[20:18] when it starts, it runs fsck, right?
[20:19] Why do you turn off your PC without a proper shutdown? :)
[20:19] And there may be situations when the startup does not invoke a fsck automatically.
[20:30] jak2000: There's a few better ways to do it. From a livecd, or from recovery option in Grub, or sudo touch /forcefsck
=== JanC_ is now known as JanC
[22:40] Hi everyone, I want to set up ubuntu openstack
[22:40] ...and I'm missing "dual NICs and dual HDDs"
[22:40] but have 10 boxes..... is this a doable thing?
[22:40] What do I lose?
[22:42] those probably aren't strict requirements but it may make it harder to use the automated tools to help deploy the whole thing
[22:44] hm. This is my first install. Sarnold, have you done this before? How intense is it?
[22:44] eg-- I probably need the automated tools even though I've done several clusters (k8s, rancher, kontina, flynn, deis, etc etc etc...)
[22:45] faddat: I've done bits and pieces of it, never deployed a full thing myself.
[22:46] well, some is better than none: How rough is it?
[22:46] faddat: openstack is fairly complex, there's a thousand different supported configurations and everything is far too configurable for my taste
[22:46] but the end result is nice enough to use as a user so it might be worth sticking it out and making it work
[22:47] does ubuntu's "distro" result in an "opinionated" system of openstack?
[22:47] (I also hate insane levels of configurability)
[22:47] I think the automated "autopilot" thing probably does help a lot
[22:56] oh another question....
[22:56] anyone know how to build a cluster of LXD servers?
[22:56] I know how to build one of them
[22:56] don't know how to construct the cluster though
[22:57] as I understand it, you more or less have independent relationships with each of the servers
[22:57] when you want to start something up, you ask a specific lxd server to start whatever it is you want
[22:57] how would you then ask that server to migrate an instance to another?
[22:57] you can move containers between them as you wish, but I don't think there's anything to bind them together as a cluster
[22:58] sarnold: ping, got a security related question
[22:58] pity. That seems like it would scale quite poorly....
[22:58] hey teward :)
[22:58] sarnold: incoming PM
[22:58] faddat: it'd be like "lxc move host1:container1 host2:"
[22:58] hm
[22:59] sarnold: then i have an unrelated question, if you've got a few seconds for patch review/opinion-giving :)
[23:01] teward: if it's the nginx init script thing, I gave that a quick look and realized I don't know how the initscripts are supposed to work :) sorry
[23:01] lol
[23:01] meh, tis fine i'll poke Debian and yell at em
[23:01] i need them to make a decision on the nginx packaging ANYWAYS
[23:01] since they've not done 1.9.5 packaging
[23:02] hehe
[23:02] wait, no? I saw it in sources.debian.net earlier..
[23:02] oh that was 1.9.4 http://sources.debian.net/src/nginx/1.9.4-1/
[23:03] faddat: there's a nova-lxd something or other module underway as well; that might be the "scales" option, beyond managing the lxd hosts individually
[23:04] yeah, totally
[23:04] in fact
[23:04] I'm investigating openstack right now
[23:04] faddat: but of course that drags along the rest of openstack. I like how simple lxd's interface is, and not having different "tenants" with different security properties looks like a big part of that simplicity.
[23:04] right?
[23:05] A coreos-style cluster of LXD servers (or even... rancher-on-ubuntu-style) would be incredible
[23:05] yeah. there's places where it makes sense, but for my home use, openstack is a bit overkill. lxd looks like a better fit. hehe :)
[23:05] sarnold: 1.9.4
[23:05] not 1.9.5
[23:05] and it's been out for over a month
[23:05] PPAs delayed for the same reasons
[23:05] (and others)
[23:06] what we're building, it will need to scale, but even with that stated, I can't really say that openstack would surely be worth it
[23:06] seems a ton of overhead, and a great number of machines dedicated to coordinating as opposed to doing the gruntwork
[23:07] my favorite is the guide on HA openstack that starts with "you'll need 28 computers..."
[23:07] lol
[23:08] see that'd be okay in fact-- but then it gets into "and they'll need 4 NICs each (well 3 of them will, and then another 3.5 will need 3 HDDs of the weasel variety....)"
[23:09] yeah, sometimes folks are trying to build their clouds with NUCs and sometimes with multi-socket xeon monsters with 100gb networking and ...
[23:10] yes, exactly
[23:10] I just did an inventory of what I have available
[23:11] and if I bring in 6 machines from another site, I may have enough to get this rolling. My ideal, I think, is a generic installation of openstack, though that would mean missing out on all of the Ubuntu goodness...
=== Lcawte is now known as Lcawte|Away
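[Editor's note: a rough sketch of the per-host LXD workflow sarnold describes above ("lxc move host1:container1 host2:"). The remote names, addresses, and container name below are placeholders, the two servers are assumed to already be reachable over the network and to trust this client, and depending on the LXD release the container may need to be stopped before it can be moved.]

```sh
# Register each LXD server as a remote (names and addresses are placeholders).
lxc remote add host1 192.0.2.10
lxc remote add host2 192.0.2.11

# Each server is managed independently -- ask a specific one what it is running.
lxc list host1:

# Migrate a container: a move is effectively copy + delete on the source host.
lxc stop host1:web1          # assumption: this LXD version only moves stopped containers
lxc move host1:web1 host2:
lxc start host2:web1
```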