Why Tiering Makes Sense

December 11, 2005

For the student of data networking, certain trends have emerged that appear to be faits accomplis in the evolution of our industry. One of these is the concept of “tiering”. “Tiered networks”, “tiered storage” and “tiered infrastructures” have all received heavy play in the media and from vendors and industry experts alike. While their implementations differ, their driver is the same: the need to scale the infrastructure cost-effectively while assuring appropriate levels of service.

In recent months, McDATA has defined a strategy for migrating our enterprise customers to tiered network architectures. The logic for this is undeniable: it is simply the best way to build scalability, cost control and availability into the storage area network as it grows in size and complexity. ‘Flat’, or core-to-edge, architectures served the enterprise well for many years, but they are reaching the limits of their scalability. Tiering the infrastructure is the logical next step in the evolution of storage networking.

A tiered infrastructure is different from tiered storage (the matching of storage arrays by price/performance to the importance and age of the data). It is, however, a parallel concept, and it is the ‘plumbing’ that can greatly facilitate tiered storage strategies. Tiered architectures use a variety of switching products, each with different levels of functionality, capacity and price, deployed where they are best utilised in the network. Rather than using the highest-tier product at every point in the network, you can use lower-cost switching products to support the needs of lower-tier applications. (These infrastructures also rely on intelligent routing products and comprehensive SAN management software.) The top-line benefit to the company is that you can employ lower-cost equipment wherever possible and reserve the equipment with the highest functionality and capacity (and corresponding price tag) for the most mission-critical data. The math is simple: the larger your network, the greater the potential for cost savings.
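To make that cost argument concrete, the sketch below compares a ‘flat’ all-director fabric against a tiered fabric of the same total port count. The per-port prices and port counts are hypothetical figures chosen purely for illustration, not McDATA list pricing; the point is only how the gap widens as the network grows.

```python
# Hypothetical per-port prices -- illustrative only, not vendor pricing.
PRICE_PER_PORT = {
    "backbone_director": 2000,   # highest functionality and availability
    "midrange_director": 1200,
    "fabric_switch": 500,        # low-cost edge access
}

def flat_cost(total_ports):
    """Flat design: every port is a top-tier director port."""
    return total_ports * PRICE_PER_PORT["backbone_director"]

def tiered_cost(backbone_ports, distribution_ports, access_ports):
    """Tiered design: match the switch class to the tier it serves."""
    return (backbone_ports * PRICE_PER_PORT["backbone_director"]
            + distribution_ports * PRICE_PER_PORT["midrange_director"]
            + access_ports * PRICE_PER_PORT["fabric_switch"])

total = 64 + 256 + 704  # 1,024 ports overall
print(f"Flat:   ${flat_cost(total):,}")           # $2,048,000
print(f"Tiered: ${tiered_cost(64, 256, 704):,}")  # $787,200
```

Under these assumed prices, the tiered design costs less than half as much for the same port count, and the saving scales with the size of the fabric.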

Tiered infrastructures comprise three tiers: the ‘access’ layer, the ‘distribution’ layer and the ‘backbone’ layer. The access layer uses low-cost switches, such as Fabric Switches, which provide network access for low-end servers and direct data traffic to the backbone. The distribution layer uses mid-range directors to support mid-range servers and to aggregate and direct data traffic to the backbone. The backbone layer relies on an ultra-high-port-count, high-performance backbone director. At this layer, the organisation’s mission-critical data is routed to the appropriate storage and/or server end node and can be aggregated for transmission to distant data centers (either over dark fiber at distances of up to 200 km at 10 Gb/s, or over the WAN). Because the backbone carries the heaviest traffic and consolidates data from different fabrics, it must deliver carrier-class availability and be able to maintain separation of data, management and fabric services through hard partitioning.
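A brief aside on the long-distance case: a rough buffer-to-buffer credit calculation shows why 200 km links at 10 Gb/s place heavy demands on the backbone director. The figures below (fiber propagation delay, full-size Fibre Channel frames) are standard rules of thumb, not McDATA specifications, and the sketch ignores encoding overhead for simplicity.

```python
# Rough buffer-to-buffer (BB) credit estimate for a long-distance
# Fibre Channel link. Rule-of-thumb figures, not vendor specifications.
LIGHT_IN_FIBER_KM_PER_US = 0.2   # ~5 microseconds per km, one way
FULL_FRAME_BITS = 2148 * 8       # maximum FC frame, headers included

def bb_credits_needed(distance_km, rate_gbps):
    """Credits required to keep the link streaming at full rate."""
    round_trip_us = 2 * distance_km / LIGHT_IN_FIBER_KM_PER_US
    frame_time_us = FULL_FRAME_BITS / (rate_gbps * 1000)  # bits / (bits per us)
    return round_trip_us / frame_time_us

print(f"{bb_credits_needed(200, 10):.0f} credits")  # ~1,164 at 200 km, 10 Gb/s
```

Keeping roughly a thousand full-size frames in flight is well beyond what an edge switch is built for, which is one reason long-distance aggregation belongs at the backbone tier.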

Tiered infrastructures also let you tier bandwidth and performance (using 2, 4 and 10 Gb/s throughput where appropriate) while simplifying network design and management, delivering further bottom-line cost reduction. This means the SAN can aggregate data, performance and protocols as you ascend the tiers, simplifying data transfer to the wide area and enabling the deployment of a Global Enterprise Data Centre (GEDC).
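Tiering bandwidth in this way is really a fan-in calculation. The sketch below works one hypothetical example: edge servers on 2 Gb/s ports aggregated onto 10 Gb/s inter-switch links toward the backbone. The port counts and the notion of what ratio is “acceptable” are assumptions for illustration; the right ratio depends entirely on the workload.

```python
# Hypothetical fan-in example: 2 Gb/s edge ports aggregated onto
# 10 Gb/s inter-switch links (ISLs) toward the backbone.
def oversubscription(server_count, server_gbps, isl_count, isl_gbps):
    """Ratio of offered edge bandwidth to available uplink bandwidth."""
    return (server_count * server_gbps) / (isl_count * isl_gbps)

ratio = oversubscription(server_count=24, server_gbps=2,
                         isl_count=2, isl_gbps=10)
print(f"Fan-in ratio: {ratio:.1f}:1")  # 2.4:1
```

Lower-tier applications can tolerate a higher fan-in ratio, which is precisely what lets the access layer use fewer, cheaper uplinks while mission-critical traffic at the backbone runs closer to 1:1.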

Based on a vision of globally consolidating and connecting enterprise resources, the GEDC provides customers with automated access to information at any hour of the day, from any location on the globe.

A tiered storage network from McDATA can deliver the architecture necessary to meet the requirements of this GEDC, based on flexibility, interoperability, security, performance, management and cost.

As data quantities push into the petabytes and as SANs grow increasingly diverse, tiered architectures will be much more capable of meeting scalability, cost and availability requirements for enterprises that are wholly dependent on their data.
