Excerpt from Howard Marks of Network Computing https://www.networkcomputing.com/servers-storage/hyperconverged-stacks-from-simplivity-an/240007306
In last week’s post, “The Hyperconverged Infrastructure,” we explored the industry trend toward integrated storage, compute and networking stacks, and its latest development, in which vendors combine the compute and storage components into a single, hyperconverged, scale-out building block. Two vendors, SimpliVity and Scale Computing, used last month’s VMworld to reveal new hyperconverged systems.
The OmniCube’s software also provides inline data deduplication, which not only expands the effective capacity of the cluster but also reduces the internode replication traffic needed for the system to survive a node failure. Since OmniCube’s storage subsystem was designed specifically to host vSphere VMs, it has enough context to manage storage on a per-VM rather than a per-volume basis, including per-VM application-consistent snapshots and replication.
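The core idea behind inline dedupe is simple: fingerprint each block as it’s written and store (and replicate) only blocks you haven’t seen before. Here’s a minimal sketch of that idea in Python; the function and variable names are mine, not SimpliVity’s, and a real system would chunk, index and replicate far more cleverly.

```python
import hashlib

def ingest(blocks, store):
    """Inline dedup sketch: hash each block on ingest; keep and
    replicate only blocks whose fingerprint hasn't been seen."""
    to_replicate = []
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:
            store[fp] = block        # first copy: store it once
            to_replicate.append(fp)  # and ship it to peer nodes
    return to_replicate

store = {}
# Three writes, but only two unique blocks of data.
new = ingest([b"os-image", b"os-image", b"app-data"], store)
print(len(new))  # 2 -- only the unique blocks cross the wire
```

Because duplicate blocks never make it past the fingerprint check, the same mechanism that saves disk space also trims the replication stream between nodes.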
Since the data is deduplicated at ingest, OmniCubes use significantly less WAN bandwidth for replication than other storage systems. SimpliVity even offers a software instance of the OmniCube stack that can run in a public cloud, so organizations can use cloud providers for disaster recovery.
If SimpliVity’s offering were just a scale-out hybrid storage system with inline dedupe and the ability to take per-VM application-consistent snapshots as well as replicate VMs to a public cloud provider, it would join Tintri as one of my top storage systems for virtualization. Add in that I can run my workloads on the same system or from other vSphere hosts, scaling compute and storage separately, and I start thinking it might be too good to be true.
By comparison, Scale Computing aimed its HC3 at significantly smaller use cases and customers than SimpliVity. For the past several years, Scale Computing has been selling scale-out unified storage systems for SMB/SME customers built from 1U servers running Linux and an extended version of IBM’s GPFS distributed file system. While most hyperconverged systems use a virtual machine running under a hypervisor as a virtual storage appliance, Scale’s HC3 uses clustered GPFS and runs the KVM hypervisor on top of GPFS.
While KVM, Linux and GPFS provide a reliable platform, they’re not widely known as easy to use and generally require a significantly higher level of technical expertise to install, optimize and administer than most SMBs can muster. Scale addresses this by providing a simple Web UI for administering the whole shebang, from creating file shares to spinning up new virtual machines.
A three-node HC3 cluster, which Scale recommends for up to 30 virtual servers, will cost an SMB or remote office about $25,500, while an eight-node cluster comes in just under $68,000. These are all-inclusive prices for servers, storage, hypervisor and the Web management software. Most users would probably pay significantly more for three servers, a low-end disk array and vSphere licenses.
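A quick back-of-the-envelope check of those two price points (the cluster prices come from the article; the per-node figures are derived, not quoted):

```python
# Scale HC3 list prices quoted above; per-node cost is a derived figure.
three_node_cluster = 25_500
eight_node_cluster = 68_000

print(three_node_cluster / 3)  # 8500.0 per node
print(eight_node_cluster / 8)  # 8500.0 per node
```

Both configurations work out to the same $8,500 per node, so the pricing scales linearly rather than rewarding the larger cluster with a volume discount.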
Users can add nodes to the cluster at any time, should they need more compute or storage resources. Scale even lets customers with existing storage-only nodes convert them to HC3 nodes with a memory and software upgrade.
Of course, for less than $10,000 a node, Scale isn’t providing the same performance as SimpliVity or Nutanix. Each HC3 node has a single quad-core Xeon processor, 32 Gbytes of memory and four 1-Tbyte disk drives. Scale doesn’t currently use flash for acceleration, so small clusters will have rather modest storage performance from a dozen or so 7,200 RPM drives. Luckily, most SMBs have rather modest storage performance needs.
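To put a rough number on that modest performance, here’s a quick estimate for a three-node cluster. The per-drive figure is my assumption, not the article’s: a common rule of thumb puts a 7,200 RPM drive at roughly 75 random IOPS.

```python
# Rough random-I/O ceiling for a three-node HC3 cluster.
# ~75 IOPS per 7,200 RPM drive is an assumed rule of thumb.
nodes = 3
drives_per_node = 4
iops_per_drive = 75

raw_iops = nodes * drives_per_node * iops_per_drive
print(raw_iops)  # 900 raw random IOPS, before replication overhead
```

Something on the order of 900 raw random IOPS across a dozen spindles, before any replication or protection overhead, is plenty for a typical SMB’s handful of servers but a long way from a flash-accelerated system.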
SimpliVity and Scale Computing join Nutanix and Pivot3 in the hyperconverged infrastructure arena. These two examples show that hyperconvergence can span the range from very modest, low-cost systems to high-performance systems with leading-edge storage features.