Over the past several years, I have kept stumbling upon deployment systems and concepts such as "sharding" whose raison d'ĂȘtre is the ability to scale across an arbitrary number of cheap, "commodity" (usually 1U) servers.
The implication is that "larger" servers either have a higher price per performance or are somehow more difficult to administer[1]. I reject both suppositions.
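To make the price-per-performance claim concrete, here is a back-of-the-envelope sketch; every price and spec below is a hypothetical placeholder rather than a quote, but the arithmetic is the point:

    # Hypothetical comparison of eight 1U "commodity" boxes against one larger
    # server. All prices and specs are illustrative placeholders.

    def cost_per_unit(price_usd, cores, ram_gb):
        """Return (cost per core, cost per GB of RAM) for a configuration."""
        return price_usd / cores, price_usd / ram_gb

    scale_out = cost_per_unit(price_usd=8 * 2500, cores=8 * 8, ram_gb=8 * 32)
    scale_up = cost_per_unit(price_usd=18000, cores=64, ram_gb=256)

    print("8x 1U servers:    $%.0f/core, $%.0f/GB RAM" % scale_out)
    print("1x larger server: $%.0f/core, $%.0f/GB RAM" % scale_up)

    # Once per-chassis overhead (power supplies, NICs, rails, switch ports)
    # is counted, the larger machine is not obviously worse per unit of
    # capacity, and there is only one OS image to administer.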
Max Kalashnikov
11 April 2011
04 April 2011
When is it time for a senior sysadmin?
In the quest for the "perfect" startup to join, I have my own guidelines as to company size and growth. However, I also tend to ask questions to determine whether it's too early or too late for me (as a system administrator) to be of adequate help.
I'm not just a porridge-swilling Goldilocks when it comes to this kind of timing. If it's too early[1], I'm going to get bored while the company wastes its money, which isn't good for anyone. Too late, and I end up unable to overcome legacy hurdles, which breeds frustration and an appearance of ineffectiveness, again not good for anyone.
25 March 2011
OpenStreetMap is a ghetto of stagnation.
Having interacted with a few other mappers, particularly in disputes, I had the odd impression that either they were a bit, shall we say, mentally challenged, or struggled with language. Now I know why.
Fully a year later, one of the people in charge communicates with me and, in summary, says that the community is favored over map quality every time. Wow.
01 March 2011
Secondary DNS
Here's my advice for "secondary" DNS service. I recommend running the master unlisted ("stealth master") and using it only to serve zone transfers to the slaves. It can also be a good idea to have a backup "stealth" slave that could become the master.
I call them "slaves" even though, in registration terms, I think they're still called "primary" and "secondary." I have yet to find a practical distinction, and, with a stealth master, there could be confusion.
Make sure to have at least one listed slave under a different TLD (.com, .org, .net, or a ccTLD).
A list of my preferred providers, reasonably priced:
DNS Made Easy (per 5-10 million query pricing)
BackupDNS (flat per zone per month)
EasyDNS (per million query pricing)
DNS Unlimited (cheap per million query pricing)
Durable DNS (per million query pricing)
No-IP "squared" (flat per domain per year)
Not all of them support configuring more than one master, but they all have web access to effect the changes.
More detailed advice may be forthcoming.
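As a quick sanity check for a stealth-master setup like the one above, here is a minimal sketch in Python; it assumes the standard dig utility is on the PATH, and the zone and server names are placeholders:

    #!/usr/bin/env python3
    """Verify that each listed slave serves the same SOA serial as the
    stealth master. Zone and server names are placeholders."""
    import subprocess

    ZONE = "example.com"
    MASTER = "master.example.com"                          # unlisted ("stealth") master
    SLAVES = ["ns1.provider-a.net", "ns2.provider-b.org"]  # listed slaves

    def soa_serial(server, zone):
        """Ask `server` for the zone's SOA record and return its serial."""
        out = subprocess.check_output(
            ["dig", "+short", "SOA", zone, "@" + server], text=True)
        # dig +short SOA prints: mname rname serial refresh retry expire minimum
        return int(out.split()[2])

    master_serial = soa_serial(MASTER, ZONE)
    for slave in SLAVES:
        serial = soa_serial(slave, ZONE)
        status = "in sync" if serial == master_serial else "STALE"
        print(f"{slave}: serial {serial} ({status}; master has {master_serial})")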
I call them "slaves" even though, in registration terms, I think they're still called "primary" and "secondary." I have yet to find a practical distinction, and, with a stealth master, there could be confusion.
Make sure to have at least one slave listed from a different TLD (.com, .org, .net, or a ccTLD).
A list of my preferred providers, reasonably priced:
DNS Made Easy (per 5-10 million query pricing)
BackupDNS (flat per zone per month)
EasyDNS (per million query pricing)
DNS Unlimited (cheap per million query pricing)
Durable DNS (per million query pricing)
No-IP "squared" (flat per domain per year)
Not all of them support configuring more than one master, but they all have web access to effect the changes.
More detailed advice may be forthcoming.
03 February 2011
Virtualization for databases (bad idea)
Originally in response to this (excerpt of a) discussion on LinkedIn:
The real problem is that virtualization is fundamentally flawed. What is an operating system for, in the first place? It's the interface between the hardware and the applications. Virtualization breaks this, without, IMO, adequate benefit.
Put another way, virtualization abstracts away hardware to a lowest common denominator. It is therefore unsurprising that the resulting performance is consistent with the lowest common denominator as well. "Commodity hardware" is a myth[1].
One of my greatest tools as a sysadmin is my knowledge of hardware, how it fits together, and how it interacts with the OS. Take that away from me by insisting on virtualization or ordering off a hosting provider's menu of servers, and I, too, suffer from the lowest common denominator syndrome.
[1] Really, it's that non-commodity "big iron" is extinct in my world, especially with the demise of Sun.
From the discussion: "I think this is a Linux issue! Because in Linux the I/O is buffered or delegated to a process. When you install Postgres or any DB, Postgres tells the OS that it can't wait to do the I/O; it must be done immediately. But what happens in a virtualized environment?"

There's no such thing as telling the OS to do an I/O immediately, as opposed to waiting. It's the other way around: non-buffered I/O requires waiting for it to actually complete. This is important for such features as data integrity (knowing it was written to the platter, or, perhaps, in the case of SSDs, that the silicon was erased and written to).
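To make the buffered-versus-waiting distinction concrete, here is a minimal sketch in Python (the filename is just an example); the second write is the one a database effectively insists on:

    import os

    record = b"COMMIT;\n"

    # Buffered write: returns as soon as the data sits in the OS page cache.
    # A crash or power loss before writeback can still lose it.
    with open("journal.log", "ab") as f:
        f.write(record)

    # Durable write: the application waits until the kernel reports that the
    # data has reached the device. This is the "non-buffered I/O requires
    # waiting" case, and it is the price of data integrity.
    fd = os.open("journal.log", os.O_WRONLY | os.O_APPEND)
    try:
        os.write(fd, record)
        os.fsync(fd)   # block until the device acknowledges the write
    finally:
        os.close(fd)

In a virtualized environment, that fsync is only as trustworthy as the hypervisor's handling of the guest's flush requests, which is exactly where the extra abstraction layer gets in the way.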
22 January 2011
You only just swallowed us, I know, but please cough us back up.
I was asked recently what my ideal scenario would be to retain me long-term, and it occurred to me, after answering otherwise, that there does exist such a situation: our new overlords would have to spin us off and let us operate independently, as a wholly-owned subsidiary.
21 January 2011
Compression at "Internet" scale (originally posted to StorageMonkeys November 22, 2009)
Storage on the cheap - lessons learned (originally posted to StorageMonkeys July 11, 2009)
Having purchased, assembled, configured, and turned up quite a number of storage arrays, where a major concern was total cost, I've come up with something of a checklist of best practices.
"Dark" storage: wastefulness or just good engineering? (originally posted to StorageMonkeys June 24, 2009)
Having recently read more and more discussion about so-called dark storage, I've been reminded of something I routinely try to impress upon managers, especially clients: unless your use case is archiving, total bytes is a poor metric for storage.
In fact, the term "storage" itself may be partly to blame for the continued misconception. One need only glance at the prices of commodity disks to recognize that there isn't anything near a linear relationship between cost and bytes stored.
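A back-of-the-envelope sketch of that point, with illustrative placeholder prices rather than real quotes: compare what a drive costs per gigabyte with what it costs per unit of random I/O.

    # Illustrative placeholder figures, not real quotes, to show why total
    # bytes is a poor cost metric outside of archiving.
    drives = {
        #                    price($)  capacity(GB)  approx. random IOPS
        "7200 RPM 2TB SATA": (150,      2000,         100),
        "15k RPM 300GB SAS": (250,       300,         180),
        "SLC SSD 64GB":      (700,        64,       10000),
    }

    for name, (price, gigabytes, iops) in drives.items():
        print(f"{name:20s} ${price / gigabytes:7.3f}/GB   ${price / iops:8.4f}/IOPS")

    # Capacity is nearly free at the margin; what actually costs money is the
    # ability to get the bytes back under a real workload.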
Why are DRAM SSDs so pricey? (originally posted to StorageMonkeys June 10, 2009)
As a UNIX veteran who has a vague recollection of /dev/drum, I keep thinking that it would be really nice to have a device to swap to that's somewhere between disk and memory in terms of speed and cost (total installed cost, not just each module).
Mostly, I feel constrained by the 32-48GB limits on moderately priced ($1-3k) servers. Going higher, even at modest processor speeds, carries a $5-10k premium. Moreover, DRAM doesn't really wear out, and it would be nice to put older, lower-density modules to use.
All about the Benjamins (originally posted to StorageMonkeys June 9, 2009)
The choice of the unit of measure of storage is interesting to me because it's otherwise tough to measure price for performance.
I remain agape at the price tag on high-end, supposedly high-performance, storage systems. Connected by 4Gb/s Fibre Channel or gigabit Ethernet, that's a limit of roughly 400 and 110 MB/s, respectively. (Yes, I know of 8Gb/s FC and 10GE, but these are prohibitively expensive, if supported. Even link-aggregated GigE practically tops out at 880 MB/s.) I'm thinking that writes across 40 7200RPM disks could saturate an FC link, and it would take fewer than 20 15k disks. Neither of these strikes me as an impractical or unusual size for a storage array, even doubling those numbers for RAID 1. More importantly, such arrays don't strike me as high performance.
Particularly shocking is that a brand-name "SAN" solution of such a size would cost in the neighborhood of a quarter million dollars and be at its performance limit. Granted, it might be half that price without fancy management and replication software, whereas the less fancy alternative, at one tenth to one fifth the cost, would still be expandable from a performance standpoint. How much does the Veritas database suite cost these days?
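The arithmetic behind those disk counts, as a sketch; the link budgets come from the post, while the per-disk write rates are my assumptions, chosen to be consistent with the numbers above:

    # How many disks does it take to saturate the interconnect?
    links_mb_per_s = {
        "4Gb/s Fibre Channel": 400,
        "Gigabit Ethernet": 110,
        "8x GigE (link aggregation)": 880,   # practical ceiling
    }
    # Sustained mixed-write throughput per spindle (assumed, not measured).
    disks_mb_per_s = {
        "7200 RPM SATA": 10,
        "15k RPM SAS": 20,
    }

    for link, link_rate in links_mb_per_s.items():
        for disk, disk_rate in disks_mb_per_s.items():
            count = link_rate / disk_rate
            print(f"{link:28s} saturated by ~{count:.0f} x {disk}")

    # Double the spindle count for RAID 1 and the array is still not unusually
    # large, yet the "SAN" link is already the bottleneck.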