[00:00] <JanC> isn't any different from how CPU caches work
[00:01] <JanC> drive rotational speed is irrelevant
[00:02] <RoyK> as in how Dell Compellent does that - 4MB blocks move around - if they are accessed frequently, they are moved up - you don't want to update the block on the slower tier
[00:03] <RoyK> with caching, you always update the 'low tier'
[00:03] <JanC> so basically it's a caching system with lazy write-back  ;)
[00:03] <RoyK> with tiering you move the data
[00:03] <JanC> and all the "tiering" is just marketing speak
[00:03] <RoyK> no
[00:03] <RoyK> you don't write back
[00:03] <RoyK> you move the blocks to another tier
[00:03] <IPU> i only know tiers from the storage area... where files are distributed across the different tiers based on their access count
[00:04] <RoyK> IPU: usually blocks, not files
[00:05] <RoyK> IPU: most storage systems have no idea of what a file is
[00:05] <JanC> it will "write-back" when some time-out is triggered, or space on the higher tier runs out
[00:05] <IPU> files that are often accessed lie on the faster tiers and seldom-used files on tape or something
[00:05] <JanC> which is exactly how caches work
[00:06] <RoyK> JanC: that's not a write-back, it's a tier change - different philosophy
[00:06] <JanC> RoyK: it's exactly how CPU caches work though  ;)
[00:06] <IPU> RoyK: files, blocks... sorry it's some time ago since i had to learn it ^^
[00:06] <RoyK> JanC: with caching, everything goes through the L1 and then L2 etc
[00:06] <JanC> no, it doesn't
[00:07] <IPU> RoyK: afair on those systems we looked at the tiering is managed from some software... not filesystem itself
[00:07] <JanC> at best you could argue that it is a parallel caching system
[00:08] <JanC> instead of a hierarchical one
[00:08] <RoyK> IPU: makes sense - very few storage systems know what's on them - it's just 0101010111100010101
[00:09] <RoyK> JanC: well, in caching, the cache isn't used for anything but caching, but in tiering, each tier holds actual data not in any other tier
[00:09] <pmatulis> dell buys emc, yikes
[00:10] <JanC> RoyK: eh, CPU & HDD caches hold data that is not in the other level all the time
[00:10] <JanC> that's the point exactly
[00:11] <RoyK> JanC: not really, cache is used for caching - temporary storage of data - not permanent storage as with tiering
[00:12] <RoyK> JanC: even though the semantics are similar
[00:12] <IPU> i miss the good 'ol dlt tapes somehow...
[00:12] <JanC> as I explained before, caching does not have to be volatile
[00:12] <RoyK> pmatulis: wtf?
[00:13] <RoyK> pmatulis: so that's why they were saying equallogic wasn't to be developed after 2017...
[00:13] <RoyK> seems "dell storage" is getting even more complicated
[00:13] <RoyK> pmatulis: got a URL on that one?
[00:13]  * JanC wonders why it's so hard for people to look at the abstract mechanisms/algorithms behind buzzwords
[00:13] <IPU> it never gets boring if you have to use them ^^
[00:15] <RoyK> JanC: well, caching and tiering are still two different things, even though they relate
[00:16] <RoyK> damn - we'll be running Dell vmware
[00:17] <RoyK> as if the vmware support wasn't bad enough
[00:23] <JanC> RoyK: feel free to believe the marketing speak (and the patent speak, I guess), but essentially there is no difference between the two (unless your definition of caching is narrowly defined by marketing to begin with)
[00:24] <JanC> and if you know how to program a bit, think about how you would implement both
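(A minimal Python sketch of the two mechanisms being argued about here; the class names and the dict-based "media" are made up for illustration and don't reflect Compellent, VSAN or any other product. The write-back cache copies hot blocks upward and keeps a slow-tier home for every block; the tiered store moves blocks, so each block lives on exactly one tier and the fast tier adds usable capacity.)

    class WriteBackCache:
        """Caching: hot blocks are *copied* to fast storage; the slow tier keeps
        the authoritative copy, and dirty blocks are lazily written back on eviction."""

        def __init__(self, fast_slots):
            self.fast_slots = fast_slots
            self.fast = {}    # block -> data (a copy, possibly newer than the slow one)
            self.dirty = set()
            self.slow = {}    # block -> data (every block always has a home here)

        def read(self, block):
            if block in self.fast:                 # cache hit
                return self.fast[block]
            data = self.slow[block]                # miss: copy the block upward
            self._make_room()
            self.fast[block] = data
            return data

        def write(self, block, data):
            if block not in self.fast:
                self._make_room()
            self.fast[block] = data
            self.dirty.add(block)
            self.slow.setdefault(block, data)      # the slow tier still "owns" the block,
                                                   # so the fast media adds no capacity

        def _make_room(self):
            while len(self.fast) >= self.fast_slots:
                victim = next(iter(self.fast))     # toy eviction policy
                if victim in self.dirty:           # lazy write-back of modified data
                    self.slow[victim] = self.fast[victim]
                    self.dirty.discard(victim)
                del self.fast[victim]              # the slow copy survives


    class TieredStore:
        """Tiering: a block lives on exactly one tier; promotion and demotion *move*
        it, so the fast tier contributes to total usable capacity."""

        def __init__(self, fast_slots):
            self.fast_slots = fast_slots
            self.fast = {}    # block -> data
            self.slow = {}    # block -> data (disjoint from fast)

        def read(self, block):
            if block in self.fast:
                return self.fast[block]
            data = self.slow.pop(block)            # promote: move, don't copy
            self._make_room()
            self.fast[block] = data
            return data

        def write(self, block, data):
            if block in self.fast:
                self.fast[block] = data
            elif block in self.slow:
                self.slow[block] = data            # update in place on whichever tier
            else:                                  # currently owns the block
                self._make_room()
                self.fast[block] = data

        def _make_room(self):
            while len(self.fast) >= self.fast_slots:
                victim = next(iter(self.fast))
                self.slow[victim] = self.fast.pop(victim)   # demote: move downward


    if __name__ == "__main__":
        cache, tiers = WriteBackCache(fast_slots=2), TieredStore(fast_slots=2)
        for store in (cache, tiers):
            for b in ("a", "b", "c"):
                store.write(b, b.upper())
        print(sorted(cache.slow))   # ['a', 'b', 'c'] -> every block has a slow-tier home
        print(sorted(tiers.slow))   # ['a']           -> only demoted blocks; fast adds capacity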
[00:29]  * pmatulis didn't know vmware was owned by emc, damn
[00:29] <RoyK> JanC: not trying to start a fight here, but afaik tiering actually moves the data around instead of just caching what's most used
[00:29] <pmatulis> http://wp.me/p1FaB8-57Yt
[00:29] <IPU> http://storageswiss.com/2014/01/15/whats-the-difference-between-tiering-and-caching/
[00:29] <IPU> JanC: is there really no difference?
[00:31] <RoyK> I read up about vmware vsan and it said it was using tiering - it doesn't - it just caches
[00:46] <JanC> the move vs. copy thing is a fallacy; eventually modified data will be written back when it's not "hot" anymore
[00:47] <JanC> which doesn't mean those solutions are bad; finding the right balance in a caching system is hard
[00:48] <jamespage> coreycb`, https://bugs.launchpad.net/ubuntu/+source/heat/+bug/1505444
[00:48] <jamespage> if you are still around :-)
[00:49] <JanC> or call it a caching/tiering system if you want
[00:49] <RoyK> JanC: it all depends on size
[00:49] <RoyK> JanC: if you have 10PiB of data, you can't just do caching
[00:49] <jamespage> coreycb`, also I think we're only on rc1 for heat - can we rev to rc2 pls
[00:50] <JanC> "just do caching" seems to refer to some some old interpretation of "caching"
[00:50] <coreycb`> jamespage, ok
[00:52] <RoyK> JanC: no, not really, you want hot data where it belongs and cold data where it belongs and not just copying whatever's hot upwards
[00:52] <JanC> s/old/limited/
[00:53] <JanC> RoyK: which is exactly what caching is about?
[00:54] <RoyK> JanC: not really - you *move* the blocks upward, not merely copy them
[00:54] <RoyK> caching is copying
[00:54] <JanC> such a "move" is copy + delete
[00:54] <RoyK> obviously, yes
[00:55] <JanC> which is no different from copy + invalidate
[00:56] <RoyK> caching won't free up space for more cold data, it'll just keep a copy of the hot data on the cold storage as well and use even more iops for the writes
[00:56] <JanC> that's totally implementation-dependent
[00:57] <RoyK> well, so far, all I've seen of caching systems (SSD cache whatever) only caches (copies) and keeps the original in sync
[00:57] <JanC> e.g. CPU cache levels do actually interact in such ways (in some CPUs at least)
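(To make the "implementation-dependent" point concrete: a tiny hypothetical sketch of an exclusive cache, the style some CPU cache hierarchies use, where promoting a block invalidates the lower-level copy. Operationally that is the same as a tier "move".)

    class ExclusiveCache:
        """Exclusive caching: each block lives in exactly one level, so
        'copy + invalidate' on promotion is indistinguishable from a 'move'."""

        def __init__(self):
            self.l1 = {}   # fast level
            self.l2 = {}   # slow level

        def read(self, block):
            if block in self.l1:
                return self.l1[block]
            data = self.l2.pop(block)   # copy up + invalidate below = move up
            self.l1[block] = data
            return data

        def evict(self, block):
            self.l2[block] = self.l1.pop(block)   # demotion is likewise a move down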
[00:58] <RoyK> then why do people call deduplication something new? it's just compression on the macro scale, right?
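(The "compression on the macro scale" framing maps onto how block-level dedup is usually described: store each unique block once, keyed by a content hash, and let files reference shared blocks. A toy sketch; the hash and block size are arbitrary choices here.)

    import hashlib

    class DedupStore:
        """Toy block-level dedup: identical blocks are stored once and shared."""

        def __init__(self, block_size=4096):
            self.block_size = block_size
            self.blocks = {}   # sha256 digest -> block bytes (stored exactly once)
            self.files = {}    # file name -> list of digests

        def put(self, name, data):
            refs = []
            for i in range(0, len(data), self.block_size):
                chunk = data[i:i + self.block_size]
                digest = hashlib.sha256(chunk).hexdigest()
                self.blocks.setdefault(digest, chunk)   # duplicate blocks cost nothing extra
                refs.append(digest)
            self.files[name] = refs

        def get(self, name):
            return b"".join(self.blocks[d] for d in self.files[name])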
[01:00] <RoyK> anyway - tested vmware VSAN with some old boxes, I thought it'd show total space = spinning rust + ssd, but only spinning rust was calculated as space, so that obviously only does old-time caching
[02:07] <patdk-lap> I tend not to like auto-level systems
[02:07] <patdk-lap> I have a very cold dataset, it is only used once a month
[02:08] <patdk-lap> but when we use it, it has to be as fast as possible, cause we burn through it
[05:07] <phunyguy> So, anyone in here have a ton of experience with iSCSI (client side) on Ubuntu?
[07:03] <lordievader> Good morning
[07:57] <rbasak> Anyone else unable to reach https://bz.apache.org/bugzilla/show_bug.cgi?id=57328 ?
[07:59] <lordievader> rbasak: Page works just fine here.
[08:00] <rbasak> lordievader: thanks. That helped me pin it down. It's not responding on IPv6 but is on IPv4.
[08:01] <lordievader> Hmm, I should be connecting over ipv6. Though chrome might switch over quickly.
[08:02] <lordievader> Ah yes, forcing wget to connect over ipv6 hangs...
[08:04] <rbasak> I'll email them. Thank you for confirming.
[08:41] <T3DDY> Does anyone use any online drive software?
[08:44] <lordievader> T3DDY: What do you mean with 'online drive software'?
[08:44] <T3DDY> lordievader: Like a website that you can upload files tooooooo?
[08:45] <lordievader> Ah, no I don't. Got my own server to host my files.
[08:46] <T3DDY> Something like owncloud that you can host yourself so you can upload to it when you're out, instead of using FTP and things
[14:20] <jge> hey guys, so I recently set up rsnapshot with retention of 7 days, 4 weeks and 12 months. However, when I go into the weeklies I only see a single set, but I'm expecting to see 7 days for that given week, no?
[14:22] <jge> or is it once a week, once a month? which would be pointless... could someone confirm pls
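(Assuming the standard rsnapshot daily/weekly/monthly rotation, each weekly.N and monthly.N is a single snapshot tree, not a bundle of seven dailies: you get day-level granularity for the last week and coarser granularity further back. A rough Python sketch of the approximate snapshot ages such a config leaves on disk; the date is an arbitrary example.)

    from datetime import date, timedelta

    def expected_snapshots(today, daily=7, weekly=4, monthly=12):
        """Approximate ages of the snapshots a 'retain daily 7 / weekly 4 /
        monthly 12' rotation keeps: one tree per rotation, not one per day."""
        snaps = [("daily.%d" % i, today - timedelta(days=i)) for i in range(daily)]
        snaps += [("weekly.%d" % i, today - timedelta(weeks=i + 1)) for i in range(weekly)]
        snaps += [("monthly.%d" % i, today - timedelta(days=30 * (i + 1))) for i in range(monthly)]
        return snaps

    for name, taken in expected_snapshots(date(2015, 10, 14)):
        print(name, taken.isoformat())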
[14:43] <hallyn> dannf: all tests pass with the merged kernel, i'll merge in your diff now and push to that same ppa.  so when ~ppa2 appears it should be ready for you
[15:00] <dannf> hallyn: X-cellent
[15:42] <coreycb`> jamespage, does a Conflicts sound ok for neutron-lbaas-agent and neutron-lbaasv2-agent?
[15:43] <coreycb`> in d/control for the binaries, that is
[17:08] <teward> can a server team dev provide me a second opinion glance at a patch please, before I consider it for inclusion?
[17:08] <teward> (in nginx)
[17:11] <teward> https://bugs.launchpad.net/ubuntu/+source/nginx/+bug/1505734 has a patch attached that would 'resolve' the issue, but i would like it reviewed before I do anything (whether that's an upload, which I can do for nginx, or meh).  It's a change to an init script, so i would like a second review from someone other than me.
[17:11] <teward> yes it's a duplicate.
[17:53] <samba35> how do i check all services started since bootup
[17:57] <jamespage> coreycb`, do they actually conflict? I don't know the answer to that - can we run v1 and v2 on the same units?
[18:05] <coreycb`> jamespage, there was a conflict with neutron-lbaas-agent-netns-cleanup but that could probably be worked around a different way
[18:08] <hallyn> dannf: package built and qa-regression-tests passed.
[18:15] <dannf> hallyn: great, i'll go ahead and pull it into our customer ppa
[20:09] <jak2000> hi all
[20:09] <jak2000> i want to check the filesystem, how to?
[20:09] <jak2000> fsck?
[20:10] <bekks> Depends on the filesystem.
[20:12] <genii> Generally if you just run fsck on it, the proper filesystem type will be chosen automatically
[20:17] <jak2000> bekks!!!!
[20:17] <jak2000> genii thanks
[20:18] <jak2000> when i turn off the PC without logging out or running the shutdown or halt command,
[20:18] <jak2000> it runs fsck when it starts, right?
[20:19] <bekks> Why do you turn off your PC without a proper shutdown? :)
[20:19] <bekks> And there may be situations when the startup does not invoke a fsck automatically.
[20:30] <genii> jak2000: There's a few better ways to do it. From a livecd, or from recovery option in Grub, or sudo touch /forcefsck
[22:40] <faddat> Hi everyone, I want to set up ubuntu openstack
[22:40] <faddat> ...and I'm missing "dual NICs and dual HDDs"
[22:40] <faddat> but have 10 boxes..... is this a doable thing?
[22:40] <faddat> What do I lose?
[22:42] <sarnold> those probably aren't strict requirements but it may make it harder to use the automated tools to help deploy the whole thing
[22:44] <faddat> hm.  This is my first install.  Sarnold, have you done this before?  How intense is it?
[22:44] <faddat> eg-- I probably need the automated tools even though I've done several clusters (k8s, rancher, kontina, flynn, deis, etc etc etc...)
[22:45] <sarnold> faddat: I've done bits and pieces of it, never deployed a full thing myself.
[22:46] <faddat> well, some is better than none:  How rough is it?
[22:46] <sarnold> faddat: openstack is fairly complex, there's a thousand different supported configurations and everything is far too configurable for my taste
[22:46] <sarnold> but the end result is nice enough to use as a user so it might be worth sticking it out and making it work
[22:47] <faddat> does ubuntu's "distro" result in an "opinionated" system of openstack?
[22:47] <faddat> (I also hate insane levels of configurability)
[22:47] <sarnold> I think the automated "autopilot" thing probably does help a lot
[22:56] <faddat> oh another question....
[22:56] <faddat> anyone know how to build a cluster of LXD servers?
[22:56] <faddat> I know how to build one of them
[22:56] <faddat> don't know how to construct the cluster though
[22:57] <sarnold> as I understand it, you more or less have independent relationships with each of the servers
[22:57] <sarnold> when you want to start something up, you ask a specific lxd server to start whatever it is you want
[22:57] <faddat> how would you then ask that server to migrate an instance to another?
[22:57] <sarnold> you can move containers between them as you wish, but I don't think there's anything to bind them together as a cluster
[22:58] <teward> sarnold: ping, got a security related question
[22:58] <faddat> pity.  That seems like it would scale quite poorly....
[22:58] <sarnold> hey teward :)
[22:58] <teward> sarnold: incoming PM
[22:58] <sarnold> faddat: it'd be like "lxc move host1:container1 host2:"
[22:58] <faddat> hm
[22:59] <teward> sarnold: then i have an unrelated question, if you've got a few seconds for patch review/opiniongiving :)
[23:01] <sarnold> teward: if it's the nginx init script thing, I gave that a quick look and realized I don't know how the initscripts are supposed to work :) sorry
[23:01] <teward> lol
[23:01] <teward> meh, tis fine i'll poke Debian and yell at em
[23:01] <teward> i need them to make a decision on the nginx packaging ANYWAYS
[23:01] <teward> since they've not done 1.9.5 packaging
[23:02] <sarnold> hehe
[23:02] <sarnold> wait, no? I saw it in sources.debian.net earlier..
[23:02] <sarnold> oh that was 1.9.4 http://sources.debian.net/src/nginx/1.9.4-1/
[23:03] <sarnold> faddat: there's a nova-lxd something or other module underway as well; that might be the "scales" option, beyond managing the lxd hosts individually
[23:04] <faddat> yeah, totally
[23:04] <faddat> in fact
[23:04] <faddat> I'm investigating openstack right now
[23:04] <sarnold> faddat: but of course that drags along the rest of openstack. I like how simple lxd's interface is, and not having different "tenants" with different security properties looks like a big part of that simplicity.
[23:04] <faddat> right?
[23:05] <faddat> A coreos-style cluster of LXD servers (or even... rancher-on-ubuntu-style) would be incredible
[23:05] <sarnold> yeah. there's places where it makes sense, but for my home use, openstack is a bit overkill. lxd looks like a better fit. hehe :)
[23:05] <teward> sarnold: 1.9.4
[23:05] <teward> not 1.9.5
[23:05] <teward> and it's been out for over a month
[23:05] <teward> PPAs delayed for the same reasons
[23:05] <teward> (and others)
[23:06] <faddat> what we're building, it will need to scale, but even with that stated, I can't really say that openstack would surely be worth it
[23:06] <faddat> seems a ton of overhead, and a great number of machines dedicated to coordinating as opposed to doing the gruntwork
[23:07] <sarnold> my favorite is the guide on HA openstack that starts with "you'll need 28 computers..."
[23:07] <teward> lol
[23:08] <faddat> see that'd be okay in fact-- but then it gets into "and they'll need 4 NICs each (well 3 of them will, and then another 3.5 will need 3 HDDs of the weasel variety....)
[23:09] <sarnold> yeah, sometimes folks are trying to build their clouds with NUCs and sometimes with multi-socket xeon monsters with 100gb networking and ...
[23:10] <faddat> yes, exactly
[23:10] <faddat> I just did an inventory of what I have available
[23:11] <faddat> and if I bring in 6 machines from another site, I may have enough to get this rolling.  My ideal, I think, is a generic installation of openstack, though that would mean missing out on all of the Ubuntu goodness...