
Abstract

This white paper is an excerpt of an analysis we prepared for a customer.

The customer wanted to know the optimal strategy for future capacity growth. He was running out of disk space in his high performance class and needed a quick solution; in the same step he also wanted to replace his old DS4700 systems.
A good way to approach this is a BVQ Storage Tier Analysis, because it clearly shows what kind of performance demands the customer has. The BVQ Storage Tier Analysis can be used to find the best storage mix and also a very precise SSD capacity figure if Easy Tier becomes an option.

The BVQ Storage Tier Analysis also gives an excellent overview of where volumes are best stored in terms of performance and price.


Download for printing and easier reading:
Storage Tier Analysis.pdf
 

Presentation in pdf format
BVQ Use Case - Storage Tier Analysis with BVQ.pdf


Benefit of the BVQ Storage Tier Analysis

The performance capabilities of most storage environments are oversized, because very often the owner does not have an exact overview of the applications' real performance needs.
In an SVC / Storwize environment the storage cache has such a positive effect on backend performance that even experts cannot give precise performance information without measuring.

Running into performance constraints because of bad planning is not an option, so whenever performance needs and capabilities are uncertain, the decision always favors higher-cost storage.

BVQ helps to uncover enormous savings potential here. 

A small example:

If there is a difference of $1000 per TB between two storage classes and 100TB of capacity is found which can be moved to a lower storage tier, a savings potential of $100,000 exists.
This saving will normally be realized in the next capacity expansion, as shown in this white paper. Using Easy Tier might be a solution here, but a hybrid storage pool is itself high-cost capacity, because approximately 7% of the pool must be SSD, at more than 10 times the price of normal disk capacity.
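The arithmetic above can be sketched in a few lines. All prices, the 7% SSD share, and the 10x price multiplier are illustrative assumptions taken from the text, not BVQ output:

```python
# Sketch of the savings arithmetic above; all prices are illustrative assumptions.

def tiering_savings(movable_tib, price_delta_per_tib):
    """Savings from moving capacity to a cheaper storage class."""
    return movable_tib * price_delta_per_tib

def hybrid_pool_ssd_cost(pool_tib, ssd_fraction=0.07, hdd_price_per_tib=1000,
                         ssd_price_multiplier=10):
    """Extra cost of the SSD share of an Easy Tier hybrid pool
    (approx. 7% SSD at roughly 10x the HDD price, as assumed above)."""
    return pool_tib * ssd_fraction * hdd_price_per_tib * ssd_price_multiplier

print(tiering_savings(100, 1000))        # 100 TB x $1000/TB = 100000
print(round(hybrid_pool_ssd_cost(100)))  # 7 TB SSD at 10x the HDD price = 70000
```

This also illustrates the trade-off mentioned above: moving 100TB down one tier saves far more than the SSD surcharge of a comparable hybrid pool would cost.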

Before committing to Easy Tier, it first has to be analyzed whether SSD will support the specific performance needs, because not every workload is a good fit for SSD.

 

 

BVQ Storage Tier Analysis


The BVQ Storage Tier Analysis is done in three steps.

  1. Break down the measured 7-day mean performance step by step to visualize which amount of capacity needs which storage class. This is done to give a better feeling for the actually needed performance.

  2. Use the BVQ Heatmap overlay to verify the results of the first step. The BVQ Heatmap shows all these results in one step and suits more experienced users. When a first-time analysis is done for a customer, the gradual approach is used because it gives the customer a better feeling for his best storage tier levels.

    The results of the BVQ Heatmap are recommendations as to which volumes are misplaced with regard to their performance and should be moved to another storage tier. These volumes are still called 'candidates' because 7-day mean values were used to find them. It still has to be checked whether they have time periods of high activity which could not be handled on other storage tiers. It is also important to ensure that no business reason demands a special storage class for a low performer. If the BVQ SLM package is used, this information is available directly; otherwise it has to be checked whether the volumes may be moved to other storage classes.

  3. The BVQ Performance View is used for all these 'candidates' to check whether high performance peaks can be found for these volumes. Time can be saved by checking several volumes at once.
    Volumes which pass this check can then be moved to other storage tiers. BVQ supports this step with its drag-and-drop feature.
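The metric underlying all three steps is IO density: the 7-day mean IOPS divided by the volume capacity. A minimal sketch, with made-up numbers:

```python
# IO density sketch; the volume figures below are invented for illustration.

def io_density(mean_iops, capacity_tib):
    """7-day mean IOPS per TiB of a volume."""
    return mean_iops / capacity_tib

def lower_tier_candidate(mean_iops, capacity_tib, tier_design_iops_per_tib):
    """A volume whose measured density is below its tier's design density
    is a 'candidate' to move down (pending the peak check in step 3)."""
    return io_density(mean_iops, capacity_tib) < tier_design_iops_per_tib

# Example: a 2 TiB volume averaging 80 IOPS on a 340 IOPS/TiB class
print(io_density(80, 2))                 # 40.0 IOPS/TiB
print(lower_tier_candidate(80, 2, 340))  # True
```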

 

Customer situation

The customer's SVC setup is a stretched cluster with 4 different storage types in the backend.

| Storage class | Subsystem | Disk type / RAID level | Capacities | Design IOPS/TB | Initial plans |
|---|---|---|---|---|---|
| High end | DS8000 | 300GB 15k R6 | 80TB (76.5TB used, 3.5TB free) | 340 | Running full --> expand capacity? |
| Low end 1 | DS4700 | 750GB 7k R5 | 15.5TB (17.5TB used, 1TB free) | 57 | Sunset DS47, migrate to XIV or DS8000 |
| Low end 2 | XIV | XIV 2TB 7k RAID1 | 78TB (61TB used, 17TB free) | 45 | Expand capacity |
| Low end 3 | DS4700 | 2TB 7k | 36TB (21TB used, 14TB free) | 21 | Sunset DS47, migrate to XIV |


The high performance storage and the XIV are running full, and the DS47 systems should be replaced.

 

The customer needs a recommendation about the next steps:

  • Expand the existing DS8000 storage and the XIV storage?

  • Implement a new class between DS8000 and XIV (a 15k class or a 10k class?) to relieve the XIV and DS8 by migrating data to it?

  • Implement a new low tier class and migrate data down to this tier to relieve the upper tiers?

Step 1: step by step performance break down


In this step the best-fitting storage tier mix shall be determined. A BVQ Treemap is prepared displaying the storage classes with their capacities.
This is done without a representation of the managed disk groups, because the result would become too complex for a first step.


The large amount of high performance DS8000 capacity at the beginning is remarkable: it is nearly 45% of the complete storage capacity.
The DS8000 class is the capacity which is running full and needs to be expanded or reorganized.
The question to solve is whether there is a real need to expand this very costly resource, or whether storage space can be reclaimed from this group by identifying volumes which have no need to stay in the highest performance class.


The second task is to find storage capacity for the DS47 storage systems, which shall be replaced in the next step.

The analysis is started by marking all volumes with a 7-day mean performance of more than 1000 IOPS/TB, because this performance class is a clear candidate for SSD or for Easy Tier with SSD.


The marking is performed by switching the treemap to the IO density aspect, marking all objects with more than 1000 IOPS/TB, and then switching back to the capacity aspect. All vDisks running at more than 1000 IOPS/TB are now marked orange.

The result shows very clearly that only a very small capacity in the DS8000 storage class runs at this highest performance demand. The performance needs of these volumes are very high compared to the performance design of this storage class. If these disks run into performance issues, using Easy Tier with SSD should be considered to deliver the requested performance of more than 1000 IOPS/TiB.

 

In the first extension step the selection was widened from >1000 IOPS/TiB to >250 IOPS/TiB.

 

The DS8000 class was designed for 340 IOPS/TiB, so we decided to analyze at 250 IOPS/TiB to find volumes that should be placed at least in the DS8000 class. All orange-marked capacities are candidates to remain in the DS8000 class. This is only a very small fraction of the capacity actually stored there.
 

In the second extension step the selection was widened further to >150 IOPS/TiB.

 

The left picture now shows volumes which need more than 150 IOPS/TB (in orange).

 

150 IOPS/TiB are very typical for standard 15k disk classes with RAID 5.

It can also be an acceptable decision to leave these volumes in the DS8000 class, because their capacity is not big enough to justify the implementation of a new class in between.

 

Every green volume is now a candidate to be moved out of this class, for example to a possibly new 10k storage class.

The picture shows the end result for this specific situation, where two storage tiers are the target setup. It clearly shows that there is plenty of capacity on the DS8000 which does not need to be stored there for performance reasons. All green volumes are candidates for less performant storage.

 

The results of these analysis steps are very impressive. The performance needs of the volumes are reduced stepwise, and it becomes very obvious that the sweet spot of this storage setup is not 340 IOPS/TiB.
It is less than 50 IOPS/TiB. Performing the analysis this way makes it easier for the customer to realize this fact, because he clearly recognizes that only a very small capacity needs highest performance storage. Later it can be seen that all these steps can also be done in a single task if the BVQ IO Density Analysis is used.
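The stepwise break-down above can be mimicked with a simple threshold loop. The volume list below is invented for illustration; the thresholds are the ones used in the analysis:

```python
# Stepwise break-down sketch: mark all capacity above successively lower
# IOPS/TiB thresholds, as done with the treemap. Volumes are made up.

volumes = [  # (name, capacity_tib, 7-day mean IOPS)
    ("vdisk01", 1.0, 1200),   # 1200 IOPS/TiB: SSD / Easy Tier candidate
    ("vdisk02", 2.0, 600),    # 300 IOPS/TiB: stays in the DS8000 class
    ("vdisk03", 4.0, 700),    # 175 IOPS/TiB: typical 15k class
    ("vdisk04", 10.0, 300),   # 30 IOPS/TiB: low-tier candidate
]

for threshold in (1000, 250, 150):
    marked = [(name, cap) for name, cap, iops in volumes if iops / cap > threshold]
    total = sum(cap for _, cap in marked)
    names = [name for name, _ in marked]
    print(f">{threshold} IOPS/TiB: {total} TiB marked {names}")
```

As the threshold drops, the marked (orange) capacity grows only slowly; most of the capacity stays below even the lowest threshold, which mirrors the finding above.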

Instead of adding capacity to the expensive DS8000 class, one can simply work through the candidate list to free capacity in this class. This leads to a decision not to extend the DS8 class at all, but to extend the low tier classes instead. The difference in purchase price may be several thousand dollars per TB.

Why are they still only candidates? Because only 7-day mean values are used here, which does not cover the case that one of the green volumes has high activity in a short timeframe. This has to be analyzed in an extra step per volume. It is also unknown whether a business reason exists to run a volume on the DS8 class. So both of these aspects have to be worked out in additional steps. This is a lot of work, but it should be kept in mind that each TiB may be worth several thousand dollars or euros, and that is only the purchase cost, which is normally only a fraction of the whole cost calculation.

 

 

 

Step 2: the BVQ Heatmap performs the same kind of analysis with one click

 

It took some work and several steps to create the analysis with the individually marked volumes. This is done for one reason only: it lets the owner of the storage environment experience how small the capacities are which really need high performance capabilities. In a customer analysis this gradual approach is used because it leads the customer step by step to the insight of how much his storage tier setup can be improved.

The treemap with the BVQ IO Density Analysis overlay is the 'show it all at once' approach, which can confuse less experienced users at the beginning. But it delivers better results, plus the working list for step 3, because it gives a clear recommendation whether a volume:

  • is placed correctly (green),

  • can be moved to less performant storage (blue), or

  • is a candidate for a higher performance class (yellow and red).


If one keeps in mind that even the smaller blue rectangles still represent 1.5 TB, with a saving potential of sometimes far more than $1000 each (depending on target storage class and size), it becomes clear how big the overall saving potential in this picture is.
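The colour recommendation can be thought of as comparing a volume's measured IO density against its tier's design density. The 50%/100%/150% thresholds below are illustrative assumptions, not BVQ's internal rules:

```python
# Heatmap colour sketch; the threshold ratios are assumptions for
# illustration, not the real BVQ overlay rules.

def heatmap_colour(density_iops_per_tib, tier_design_iops_per_tib):
    """Classify a volume's measured IO density against its tier design."""
    ratio = density_iops_per_tib / tier_design_iops_per_tib
    if ratio < 0.5:
        return "blue"    # can be moved to less performant storage
    if ratio <= 1.0:
        return "green"   # placed correctly
    if ratio <= 1.5:
        return "yellow"  # candidate for a higher performance class
    return "red"         # clearly needs a higher performance class

# Volumes on the 340 IOPS/TiB DS8000 class:
print(heatmap_colour(40, 340))    # blue
print(heatmap_colour(300, 340))   # green
print(heatmap_colour(400, 340))   # yellow
print(heatmap_colour(600, 340))   # red
```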

 

 

The treemap picture led to the following recommendation for this customer:

  1. The customer needs a new low range class T3 with capacity disk drives and a minimum capacity of 116TB.

  2. Approx. 50% of the XIV capacity (68TB) can be moved to T3.

  3. At least 30% (48TB) of the DS8 storage capacity can be moved to XIV and 20% (32TB) to T3 storage.

  4. 40% (16TB) of the DS47 capacity can be moved to T3 storage; 60% of the DS47 storage capacity can be moved to XIV storage.

  5. The DS8 storage capacity does not have to be extended.

  6. The XIV storage capacity does not have to be extended.

 

 

 

The result of the analysis is very clear, and it should be easy to calculate the possible cost savings from it.