[16:13] <ChinnoBunny> lazypower: Are you concerned about security on Dropbox?
[16:14] <lazypower> I use ecryptfs to secure things I wish to remain private
[16:14] <lazypower> And I also don't put the launch codes on Dropbox
[16:14] <lazypower> pleia2 ftfy
[16:14] <ChinnoDog> That seems a little inconvenient
[16:14] <lazypower> Not at all
[16:15] <ChinnoDog> I noticed that Spideroak is linux compatible, client side encrypted, and only $5/mo on a group plan with unlimited storage.
[16:16] <ChinnoDog> ecryptfs on Dropbox only seems inconvenient because then I have to select specific folders to share and then decrypt those folders on every client.
[16:20] <lazypower> *shrug* I don't find it that inconvenient to selectively share, but that's my preference
[16:20] <lazypower> i use maybe 2 machines that need access to that folder
[16:20] <lazypower> and my handset
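The ecryptfs-over-Dropbox arrangement lazypower describes can be sketched roughly as below. The paths and cipher options are illustrative assumptions, not his actual setup; the mount itself needs root and an interactive passphrase, so the command is only printed here rather than executed.

```shell
#!/bin/sh
# Sketch (paths hypothetical): keep ciphertext in a Dropbox-synced "lower"
# directory while a local "upper" mount point exposes the cleartext view.
LOWER="$HOME/Dropbox/.encrypted"   # synced to Dropbox: only ciphertext leaves the machine
UPPER="$HOME/Private"              # local-only cleartext view
mkdir -p "$LOWER" "$UPPER"

# mount(8) needs root and ecryptfs prompts for a passphrase interactively,
# so this sketch prints the command instead of running it:
CMD="sudo mount -t ecryptfs $LOWER $UPPER -o ecryptfs_cipher=aes,ecryptfs_key_bytes=16"
echo "$CMD"
```

Sharing then means letting Dropbox sync only the lower directory; each client that needs cleartext repeats the mount, which is the per-client decryption step ChinnoDog finds inconvenient.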
[17:41] <ChinnoDog> Anyone here use a bare metal cloud provider?
[21:40] <ChinnoDog> I signed up for one to poke around. I've never had so much buyer's remorse. lol
[21:41] <ChinnoDog> I should try a bigger one.
[21:43] <jthan> Why buyer's remorse?
[21:45] <r00t^2> the plot thickens
[21:48] <r00t^2> wait,
[21:48] <r00t^2> "bare metal" and "cloud" are...
[21:48] <r00t^2> exact opposites.
[22:03] <ChinnoDog> They are not exact opposites. Just because clouds are commonly facilitated by full system virtualization does not mean it is required to gain the flexibility or resource isolation that large public clouds have.
[22:04] <ChinnoDog> I signed up for access that required a $20 deposit but the service was not close to fully baked. I gave up before completing my first test.
[22:08] <r00t^2> ChinnoDog: no, it is- because at that point, it's big iron, not cloud. cloud is intentionally abstracted
[22:28] <ChinnoDog> r00t^2: What would it matter to you if the AWS instance types were actually hardware configurations? Everything else would work the same.
[22:29] <r00t^2> ChinnoDog: first off, that depends on the virt method; not all is created (or implemented) equally. secondly, shared NIC port. thirdly, hypervisor security vulnerabilities
[22:31] <ChinnoDog> There isn't a strict definition of "bare metal" so far as I know so lots of different interpretations are possible.
[22:31] <r00t^2> bare metal literally means hardware. that's... exactly what it means.
[22:32] <ChinnoDog> Hardware can share resources too
[22:32] <ChinnoDog> Like multiple disks on the same bus connected to different motherboards.
[22:32] <r00t^2> but it's still equally accessible to the OS.
[22:33] <ChinnoDog> Yes. The only way you would know it is bare metal is that there is no virtualization required in software.
[22:33] <r00t^2> and if it isn't, then those disks are probably a SAN, and thus not actually a part of the machine
[22:33] <r00t^2> ChinnoDog: i invite you to do yum -y install virt-what && virt-what then
[22:34] <r00t^2> plus, like i said- it entirely depends on the hypervisor.
[22:34] <r00t^2> virtual NIC interfacing as exposed to guests, for one
[22:34] <ChinnoDog> What depends on the hypervisor?
[22:34] <r00t^2> 18:33:07 < ChinnoDog> Yes. The only way you would know it is bare metal is that there is no virtualization required in software.
[22:35] <r00t^2> in many cases, a simple smartctl -a or cat /proc/cpuinfo or dmidecode or ifconfig -a (or ip l s, for the hepcats) will tell you "oh hey, this is virtualized"
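One of the guest-side tells r00t^2 lists can be wrapped in a small function. Most hypervisors set a "hypervisor" CPU flag that shows up in the guest's /proc/cpuinfo; the sketch below takes a cpuinfo-style file as an argument so it can be exercised against sample data rather than the live system (file names are illustrative).

```shell
#!/bin/sh
# Sketch of one detection check mentioned above: most hypervisors set the
# "hypervisor" bit in the guest's CPU flags, visible in /proc/cpuinfo.
# Takes a cpuinfo-style file as $1 so sample data can stand in for /proc.
virt_check() {
    if grep -q '^flags.*\bhypervisor\b' "$1"; then
        echo "virtualized"
    else
        echo "no hypervisor flag (bare metal, or a hypervisor that hides it)"
    fi
}

# Sample inputs instead of the live /proc/cpuinfo:
printf 'flags\t\t: fpu vme de pse hypervisor\n' > /tmp/guest_cpuinfo
printf 'flags\t\t: fpu vme de pse\n'            > /tmp/host_cpuinfo
virt_check /tmp/guest_cpuinfo
virt_check /tmp/host_cpuinfo
```

dmidecode and smartctl give similar tells through vendor strings ("QEMU", "VMware", etc.), and virt-what bundles a pile of these heuristics into one command.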
[22:35] <ChinnoDog> You could make default drivers work but other parts of the real machine won't work. Virtualization instructions for example.
[22:36] <r00t^2> i'm sorry, i think i missed your point. what exactly are you arguing here?
[22:36] <ChinnoDog> Though VMware now allows nested virtualization I think.
[22:36] <ChinnoDog> I'm not arguing anything!
[22:36] <r00t^2> most do but it requires hardware that supports it and needs to be enabled via the hypervisor
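The hardware-support check r00t^2 mentions comes down to the vmx (Intel VT-x) or svm (AMD-V) CPU flags being exposed to the guest. A sketch, again taking a cpuinfo-style file as an argument so it runs on sample data (file names are illustrative):

```shell
#!/bin/sh
# Nested virtualization needs the CPU's virt extensions exposed to the guest:
# vmx on Intel (VT-x), svm on AMD (AMD-V). Check the flags line of a
# cpuinfo-style file given as $1.
has_virt_ext() {
    grep -Eq '^flags.*\b(vmx|svm)\b' "$1"
}

printf 'flags\t: fpu vme vmx\n' > /tmp/vt_cpuinfo
printf 'flags\t: fpu vme\n'     > /tmp/novt_cpuinfo
has_virt_ext /tmp/vt_cpuinfo   && echo "virt extensions exposed"
has_virt_ext /tmp/novt_cpuinfo || echo "no virt extensions"
```

On KVM, for instance, the host additionally has to enable nesting (the `nested` parameter of the kvm_intel/kvm_amd modules) before the flag appears in guests, which matches r00t^2's point that it "needs to be enabled via the hypervisor".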
[22:36] <r00t^2> so you're just saying random thoughts, or..?
[22:37] <r00t^2> because my point is there is a clear and determinable difference between physical hardware and a virtualized environment
[22:37] <ChinnoDog> I was just looking for a bare metal cloud provider. That is all.
[22:37] <ChinnoDog> I don't think it is that clear. How do you define the separation of the two?
[22:38] <r00t^2> right, and my ORIGINAL point is that doesn't exist- if they're selling it as such, they're using buzzwords rather than the actual tech behind the associated buzzwords
[22:38] <r00t^2> ChinnoDog: i literally just sent like, 5 lines explaining how the guest knows it's in a virt environment
[22:39] <ChinnoDog> In that case a dedicated CPU with shared disk, memory, and IO channels should qualify as dedicated hardware so long as you can't tell in software, right?
[22:40] <r00t^2> no, it's just virtualized with dedicated resources
[22:41] <ChinnoDog> What is the difference between "virtualized" and "multiplexed"? There are lots of devices sharing the PCI bus. We don't call them virtualized.
[22:42] <r00t^2> because it's all at the same level of access. it's equal-level. or same-"ring" if you want to use quasi-outdated terminology
[22:43] <r00t^2> virtualized is done in-hardware. softraid, for instance, is virtualized raid- but we don't call it that, because we have a special name for it. but it's handled by the kernel, not "ring-(-1)"
[22:43] <ChinnoDog> I think the "ring" concept is going to break down when we scrutinize it. I think we can say that an entire system connected to the rest of the world via a NIC, real or otherwise, is dedicated. But if the system is intermingled with components outside of its control then even if it appears as an isolated system it is not.
[22:44] <r00t^2> (well, that's why the ring model is outdated, but the abstraction will have to do for now)
[22:44] <r00t^2> and i won't say that either- you can physically dedicate a single NIC to a virtual instance
[22:45] <r00t^2> if you're talking about clustered computing, though, we have a special term for that too. :)
[22:45] <ChinnoDog> You can but it doesn't necessarily have any impact on the performance characteristics. Depends where you think the network terminates. Does it terminate on a proprietary I/O chip or does it terminate when the packet reaches memory?
[22:46] <r00t^2> it terminates on the physical endpoint. OSI-0
[22:47] <ChinnoDog> "Virtual" raid doesn't really tell us much either except that you can't access it through INT13 extensions. Performance-wise, high-end raid cards offload some of the computational work, but there isn't any reason you couldn't offload it using the CPU and other system resources.
[22:48] <r00t^2> that doesn't make softraid a physical raid though- you may be conflating what i'm saying. if i were running a high-write multi-access DB on that raid, you better believe i'm buying a dedicated chip for it- but that still doesn't make softraid "real" raid
[22:48] <ChinnoDog> Anyway, all I really wanted was a Windows system I could run VirtualBox on with a 64-bit version of CentOS.
[22:48] <r00t^2> http://joesdatacenter.com/ have fun