
Tag: sds

  • Artem Gaevoy
  • March 21, 2019

Hyperconvergence backline: How to make sure that your hyperconverged environment rocks?

Admins love hyperconvergence because it conjoins compute, storage, and networking resources, which makes the environment cheaper and easier to manage. Experienced users can build an infrastructure in different ways from whatever parts they prefer. For instance, you can grab some servers from Dell and install an industry-standard hypervisor (Hyper-V, KVM, ESXi, whatever) on top of them. If you do not know that much about hyperconvergence, though, consider buying an appliance.
Read more
  • Dima Yaprincev
  • November 9, 2017

Microsoft SQL Server Failover Cluster Instance and Basic Availability Group features comparison

Microsoft SQL Server 2016 has a pretty decent feature set for achieving cost-effective high availability and building a reliable disaster recovery solution. Basic Availability Groups (BAGs) and Failover Cluster Instances (FCIs) are included in SQL Server 2016 Standard Edition and provide a high level of redundancy for business-critical databases. In this article, I would like to discuss some differences between these solutions and show how they can be combined with Software-Defined Storage like Storage Spaces Direct (S2D) and StarWind VSAN.
Read more
  • Jon Toigo
  • October 11, 2017

Back to Enterprise Storage

An under-reported trend in storage these days is the mounting dissatisfaction with server-centric storage infrastructure as conceived by proprietary server hypervisor vendors and implemented as exclusive software-defined storage stacks. A few years ago, the hypervisor vendors seized on consumer anger around overpriced "value-add" storage arrays to insert a "new" modality of storage, so-called software-defined storage, into the IT lexicon. Touted as a solution for everything that ailed storage, and as a way to improve virtual machine performance in the process, SDS and hyper-converged infrastructure did rather well in the market. However, the downside of creating siloed storage behind server hosts was that storage efficiency declined by 10 percent or more on an enterprise-wide basis; companies were realizing less bang for the buck with software-defined storage than with the enterprise storage platforms they were replacing.
Read more
  • Ivan Talaichuk
  • September 7, 2017

Hyperconvergence – another buzzword or the King of the Throne?

Before we start our journey through the storage world, I would like to begin with a side note on what hyperconverged infrastructure is and which problems this cool word combination really solves. Folks who already have a grip on hyperconvergence can just skip the first paragraph, where I'll describe the HCI components plus a backstory about this tech. Hyperconverged infrastructure (HCI) is a term coined by Steve Chambers and Forrester Research (at least Wikipedia says so). They created this word combination to describe a fully software-defined IT infrastructure that is capable of virtualizing all the components of conventional "hardware-defined" systems.
Read more
  • Jon Toigo
  • August 22, 2017

The Pleasant Fiction of Software-Defined Storage

Whether you have heard it called software-defined storage, referring to a stack of software used to dedicate an assemblage of commodity storage hardware to a virtualized workload, or hyper-converged infrastructure (HCI), referring to a hardware appliance with a software-defined storage stack and maybe a hypervisor pre-configured and embedded, this "revolutionary" approach to building storage was widely hailed as your best hope for bending the storage cost curve once and for all. With storage spending accounting for a sizable percentage (often more than 50%) of a medium-to-large organization's annual IT hardware budget, you probably welcomed the idea of an SDS/HCI solution when the idea surfaced in the trade press, in webinars, and at conferences and trade shows a few years ago.
Read more
  • Jon Toigo
  • August 17, 2017

The Need For Liquidity in Data Storage Infrastructure

Liquidity is a term you are more likely to hear on a financial news channel than at a technology trade show. As an investment-related term, liquidity refers to the amount of capital available to banks and businesses and to how readily it can be used. Assets that can be converted quickly to cash (preferably with minimal loss in value) in order to meet immediate and short-term obligations are considered "liquid." When it comes to data storage, liquid storage assets can be viewed as those that can be allocated to virtually any workload at any time without compromising performance, cost-efficiency/manageability, resiliency, or scalability. High-liquidity storage supports any workload operating under any OS, hypervisor, or container technology, accessed via any protocol (network file systems, object storage, block network, etc.), without sacrificing data protection, capacity scaling, or performance optimization.
Read more
  • Alex Bykovskyi
  • August 16, 2017

Ceph-all-in-one

This article describes the deployment of a Ceph cluster in a single instance, or, as it's called, "Ceph-all-in-one". As you may know, Ceph is a unified Software-Defined Storage system designed for great performance, reliability, and scalability. With the help of Ceph, you can build an environment of the desired size. You can start with a single-node system, and there are no limits on its sizing. I will show you how to build a Ceph cluster on top of one virtual machine (or instance). You should never use such a scenario in production, only for testing purposes. This series of articles will guide you through the deployment and configuration of different Ceph cluster builds.
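To give a flavor of what "all-in-one" implies, here is a minimal `ceph.conf` sketch (the hostname, address, and fsid are placeholders, not values from the article): by default Ceph replicates objects across separate hosts, so a single-node setup has to relax both the pool replication size and the CRUSH failure domain.

```ini
[global]
# Placeholder cluster ID; deployment tooling generates a real one.
fsid = 00000000-0000-0000-0000-000000000000
mon initial members = ceph-aio      ; assumed hostname of the single VM
mon host = 192.168.0.10             ; assumed address of that VM

; All-in-one specifics: with one host there is nowhere else to replicate to,
; so keep a single copy of each object and let CRUSH choose among OSDs
; rather than among hosts.
osd pool default size = 1
osd pool default min size = 1
osd crush chooseleaf type = 0       ; 0 = OSD; the default 1 = host
```

Without the last three settings, pools on a one-node cluster would stay in a degraded state, since CRUSH could never place replicas on distinct hosts.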
Read more
  • Augusto Alvarez
  • July 18, 2017

Microsoft Azure Stack in General Availability (GA) and Customers will Receive it in September. Why is this Important? Part I

Microsoft's hybrid cloud appliance for running Azure in your own datacenter has finally reached General Availability (GA), and the integration system partners (Dell EMC, HPE, and Lenovo for this first iteration) are formally taking orders from customers, who will receive their Azure Stack solutions in September. But what exactly is Azure Stack? Why is it important to organizations?
Read more
  • Andrea Mauro
  • June 7, 2017

Design a ROBO infrastructure. Part 4: HCI solutions

As written in the previous post, for a ROBO scenario the most interesting HCI (Hyper-Converged Infrastructure) configuration is a two-node configuration, considering that two nodes can be enough to run a dozen VMs (or even more). For this reason, not all hyperconverged solutions are suitable for this case (for example, Nutanix and SimpliVity need at least three nodes). And it is not simple to scale down an enterprise solution to a small size, due to its architecture constraints.
Read more