21 January 2011

Why are DRAM SSDs so pricey? (originally posted to StorageMonkeys June 10, 2009)

As a UNIX veteran who has a vague recollection of /dev/drum, I keep thinking that it would be really nice to have a device to swap to that's somewhere between disk and memory in terms of speed and cost (total installed cost, not just each module).

Mostly, I feel constrained by the 32-48GB limits on moderately priced ($1-3k) servers. Going higher, even at modest processor speeds, carries a $5-10k premium. Moreover, DRAM doesn't really wear out, and it would be nice to put older, lower-density modules to use.

The trouble is, everything I've found so far is either very low capacity, priced far above the memory modules themselves, or both. I'm not particularly interested in adding 4GB of fast swap to a 48GB machine; ACARD does have something for $250, but it tops out at 48GB and wants high-density modules, defeating my second purpose. Nor am I interested in paying $10k for a 16GB RAM SSD ($625/GB?!) when I could dump that money into the base server and get much faster access.
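For local experimentation, Linux can already expose RAM as a block device and swap to it. A minimal sketch, assuming the brd ramdisk module is available (rd_size is in KiB; the 4GiB size and priority are just illustrative, and all of this needs root):

```shell
# Create one kernel RAM disk of ~4 GiB (rd_size is in KiB).
modprobe brd rd_nr=1 rd_size=4194304

# Format it as swap and enable it at a higher priority than
# any disk-backed swap, so the kernel fills it first.
mkswap /dev/ram0
swapon -p 10 /dev/ram0

# Verify: /dev/ram0 should be listed with priority 10.
swapon -s
```

Swapping to your own RAM is circular, of course; the interesting case is a separate box stuffed with cheap, older modules, exported over a fast link.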

I'm not a hardware guy (in the EE sense), so I'm genuinely curious about this. Is it really that difficult/expensive to stick a memory controller (northbridge?) onto a SATA interface? Am I being too cynical in assuming that it's mere market "segmentation" without a low-end consumer segment?

Of course, what I described already exists under the name "motherboard": fill a commodity server with RAM, carve out a RAM disk, and export it as a SCSI target. But the software side, the scst target framework, seems woefully incomplete. For example, the MPT-Fusion target driver is still described as "alpha" or early development, so I'm not holding my breath on reliability, let alone performance. I'm sure participation by the vendors would help. LSI, are you listening?
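For the curious, the scst route would look roughly like this: create a RAM disk on the donor box, register it with scst's block-device handler, and export it as a target (here iSCSI, since the MPT target driver is the shaky part). A sketch in the style of an scstadmin config file; the handler name, target name, and overall layout are my assumptions from the SCST documentation, not something I've run:

```
HANDLER vdisk_blockio {
        DEVICE ram0 {
                filename /dev/ram0
        }
}

TARGET_DRIVER iscsi {
        enabled 1
        TARGET iqn.2009-06.example.com:ram0 {
                LUN 0 ram0
                enabled 1
        }
}
```

The initiator then sees an ordinary SCSI disk it can mkswap and swapon, with the donor's DRAM behind it.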
