Storage virtualisation: back in fashion?

September 7, 2005

The term “virtualisation” has enjoyed a rollercoaster of fortunes since storage companies popularised it in 2001. It has been something of a victim of the IT fashion police: hot one moment, then totally out of favour with the urbane and sophisticated the next.

And every now and then it enjoys something of a renaissance. Right now its star is riding reasonably high, probably because it is more widely understood, by more people, than it was four years ago.

The reality is that almost all enterprise storage systems employ virtualisation to some extent, so it is really nothing new: logical drives are created from a pool of physical disks, often using RAID configurations. The difference today is storage management software that is not necessarily tied to a single brand of storage hardware. It was this development that brought virtualisation to the attention of the sharp-eyed analysts.
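To make the pooling idea concrete, here is a minimal sketch, in Python, of how a striped logical drive maps a logical block address onto a pool of physical disks. The names and stripe size are illustrative, not any vendor's actual implementation:

```python
# A minimal sketch of block-level virtualisation: one logical drive striped
# across a pool of physical disks (RAID 0 style). Illustrative only.

STRIPE_SIZE = 64 * 1024  # 64 KB stripe unit, a common default

def map_logical_block(lba: int, block_size: int, disks: list) -> tuple:
    """Translate a logical block address to (disk, physical byte offset)."""
    byte_offset = lba * block_size
    stripe_number = byte_offset // STRIPE_SIZE
    disk_index = stripe_number % len(disks)       # round-robin across the pool
    stripe_on_disk = stripe_number // len(disks)  # which stripe on that disk
    physical_offset = stripe_on_disk * STRIPE_SIZE + byte_offset % STRIPE_SIZE
    return disks[disk_index], physical_offset

# The host sees a single logical drive; the mapping layer fans I/O out.
pool = ["disk0", "disk1", "disk2", "disk3"]
print(map_logical_block(lba=1000, block_size=512, disks=pool))
```

The host never sees the pool; it addresses one contiguous logical drive while the mapping layer spreads the blocks across the physical spindles.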

In common with other storage management software developers, we at FalconStor frequently need to explain that we are not building “virtualisation” products; we do, however, use the virtualisation concept to create storage management software products as real business solutions, for applications such as managing heterogeneous storage, provisioning storage via multiple protocols, enhancing performance and enabling storage services. For each of these applications, the virtualisation technique is applied to achieve the optimal design.

Managing heterogeneous storage systems

One of the biggest problems in enterprise data storage management is the myriad of storage systems from different vendors. While vendors are all for standards, unfortunately they are each all for their own standards. The behaviour of storage controllers is a good example: most behave quite differently from one another, if not in outright contradiction. The recent proliferation of low-cost, high-capacity storage systems has brought many more choices for storage configurations, but it has also highlighted the problem.

Storage management software isolates client hosts from this morass of mixed storage hardware and presents them with a uniform view of virtual disks. This hides the hardware idiosyncrasies from end users, who no longer need to deal with the capricious and chaotic behaviour of a mix of storage connectivity and devices. The ‘virtualisation software’ therefore does more than just virtualise the storage; it acts like a “beast master”, keeping the wild animals under control while making them perform productive tasks for the users in a ‘civilised’ manner.
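As a rough illustration of that abstraction layer, the sketch below shows one uniform virtual-disk contract fronting interchangeable back-end arrays. The class and method names are hypothetical stand-ins, not FalconStor's actual interfaces:

```python
# A hedged sketch of a virtualisation layer: hosts talk to a VirtualDisk;
# vendor-specific quirks are confined to adapter classes behind it.

from abc import ABC, abstractmethod

class BackendArray(ABC):
    """Uniform contract the layer expects from any vendor's array."""
    @abstractmethod
    def read_blocks(self, start: int, count: int) -> bytes: ...
    @abstractmethod
    def write_blocks(self, start: int, data: bytes) -> None: ...

class VendorA(BackendArray):
    """Stand-in for one vendor's controller and its idiosyncrasies."""
    def __init__(self):
        self.store = {}  # block number -> 512-byte block
    def read_blocks(self, start, count):
        return b"".join(self.store.get(start + i, b"\x00" * 512)
                        for i in range(count))
    def write_blocks(self, start, data):
        for i in range(0, len(data), 512):
            self.store[start + i // 512] = data[i:i + 512]

class VirtualDisk:
    """What the client host sees: one disk, whatever the back end is."""
    def __init__(self, backend: BackendArray):
        self.backend = backend
    def read(self, lba, count):
        return self.backend.read_blocks(lba, count)
    def write(self, lba, data):
        self.backend.write_blocks(lba, data)

vdisk = VirtualDisk(VendorA())  # swap in another adapter; hosts are untouched
vdisk.write(0, b"hello".ljust(512, b"\x00"))
print(vdisk.read(0, 1)[:5])
```

The design point is that a new array from a different vendor means writing one new adapter, not reconfiguring every host.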

Storage provisioning via multiple protocols

Almost all storage systems provision storage devices over a single type of connectivity. For example, client hosts can connect to storage systems via SCSI, Fibre Channel (FC) or, more recently, iSCSI. By virtualising the storage, it becomes possible to provision the same storage via multiple protocols, over different connectivity.

A system using FC as its primary connectivity can therefore use iSCSI, over Gigabit Ethernet, as a failover path to the same device. This makes implementing SAN storage much more economical, and the capability is also useful for disaster recovery and business continuity: when a data centre is shut down for any reason and FC connectivity is not available between the recovery sites, the Internet can be used for remote access, eliminating potential downtime.
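A simplified sketch of that failover logic might look like the following. The Path objects and health flags are illustrative stand-ins; in practice this lives in the multipathing driver stack:

```python
# Hypothetical path failover: try the primary FC path first, then fall
# back to an iSCSI path that reaches the same virtual device.

class PathDown(Exception):
    pass

class Path:
    def __init__(self, name: str, healthy: bool = True):
        self.name, self.healthy = name, healthy
    def send_io(self, request: str) -> str:
        if not self.healthy:
            raise PathDown(self.name)
        return f"{request} completed via {self.name}"

def issue_io(request: str, paths: list) -> str:
    """Walk the paths in priority order until one succeeds."""
    for path in paths:
        try:
            return path.send_io(request)
        except PathDown:
            continue  # this path is down; try the next protocol/path
    raise RuntimeError("all paths to the device are down")

fc = Path("FC 2Gb", healthy=False)          # simulate an FC outage
iscsi = Path("iSCSI over GigE")
print(issue_io("READ lba=0", [fc, iscsi]))  # completes via iSCSI over GigE
```

Because both paths terminate at the same virtual device, the host's view of its data is unchanged whichever protocol carries the I/O.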

Performance enhancement

When virtualisation comes up, the ‘in-band versus out-of-band’ debate still persists. Some claim that the “in-band” technique degrades performance because of the latency it adds. The truth is that latency by itself is not a significant factor in today’s storage performance, because modern systems keep many I/Os outstanding against storage at once.

Any added latency is quickly absorbed by these simultaneous I/Os, so total throughput and IOPS are unaffected. On the contrary, in-band placement makes it possible to boost performance, because an intelligent caching mechanism can be built into the data path.
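A back-of-the-envelope calculation shows why. By Little's law, outstanding I/Os = throughput × latency, so a small added latency can be compensated by keeping a few more I/Os in flight while throughput stays at the back end's limit. The figures below are illustrative, not measurements of any product:

```python
# Little's law sketch: how many outstanding I/Os are needed to sustain a
# given throughput at a given round-trip latency. Illustrative numbers.

device_iops_limit = 100_000       # aggregate back-end capability (assumed)
latency_direct = 0.0005           # 0.5 ms round trip, no appliance
latency_in_band = 0.0006          # +0.1 ms through an in-band appliance

def queue_depth_needed(iops: float, latency: float) -> float:
    """Outstanding I/Os required = throughput * per-I/O latency."""
    return iops * latency

print(queue_depth_needed(device_iops_limit, latency_direct))   # 50.0
print(queue_depth_needed(device_iops_limit, latency_in_band))  # 60.0
```

Ten extra outstanding I/Os keep the pipeline full, and the device-limited 100,000 IOPS is unchanged; only the per-request response time grows by the added fraction of a millisecond.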

Because every I/O passes through the system, it has the information needed to cache both reads and writes precisely. It can implement smart read-ahead. It can perform long-term access analysis to further boost hit rates. It can also make effective use of other available fast storage, such as solid-state disks, to substitute for virtual segments of the storage dynamically.
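As a sketch of the simplest of these techniques, the toy cache below detects a sequential scan and prefetches ahead of it. Everything here, the names, sizes and the fetch stand-in, is illustrative rather than any shipping design:

```python
# A toy in-band cache: LRU block cache with naive sequential read-ahead.

from collections import OrderedDict

class ReadAheadCache:
    def __init__(self, capacity: int, readahead: int = 8):
        self.cache = OrderedDict()   # lba -> block data, kept in LRU order
        self.capacity = capacity
        self.readahead = readahead
        self.last_lba = None
        self.hits = self.misses = 0

    def read(self, lba: int, fetch) -> bytes:
        if lba in self.cache:
            self.hits += 1
            self.cache.move_to_end(lba)      # refresh LRU position
            data = self.cache[lba]
        else:
            self.misses += 1                 # demand read stalls on back end
            data = fetch(lba)
            self._insert(lba, data)
        if self.last_lba is not None and lba == self.last_lba + 1:
            # Sequential pattern detected: pull the next blocks in early.
            for n in range(1, self.readahead + 1):
                if lba + n not in self.cache:
                    self._insert(lba + n, fetch(lba + n))
        self.last_lba = lba
        return data

    def _insert(self, lba: int, data: bytes) -> None:
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used

def fetch(lba: int) -> bytes:                # stand-in for a back-end read
    return bytes([lba % 256]) * 512

cache = ReadAheadCache(capacity=64)
for lba in range(16):                        # a sequential scan
    cache.read(lba, fetch)
print(cache.hits, cache.misses)              # 14 hits, 2 misses
```

After the first two blocks the scan is recognised and the remaining reads are served from cache before the host asks for them; the back-end latency is hidden rather than added.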

The beauty of this is that such fast devices can be used as they become available, simply by plugging them into the switch; no re-soldering of IC chips or rewriting of applications is necessary. The notion that “in-band” virtualisation has a negative effect on performance is just a myth.

A peek into the future

As storage capacities grow larger and larger, companies will generate more and more data to fill them. Managing that data is becoming the most formidable challenge, from individual home users to enterprise data centres. We always seem to return to basics: storage provisioning and data protection. Virtualisation, in one form or another, with or without the experts’ approval, or even their recognition, will remain one of the important technologies supporting the storage infrastructure. This is guaranteed… virtually.
