Customer use case: analysis of a completely overloaded system

We started an analysis at the customer's site because of pending decisions on how to extend the available capacities.
These are the results after the first look (about one hour). The most interesting point is that the system does not look too bad on the SCSI level - but when one looks deeper ...

The customer has had performance issues for quite some time, and there is a need to clarify the root causes before spending money.

There are several ideas, ranging from newer nodes (the cluster currently runs 2145-8G4 nodes) to higher-performance storage classes.

The result might be: get rid of the old N6060 storage with its performance issues, as planned, and invest in medium-performance capacity,
because there is no real need for the highest performance classes.
The cache on the 2145-8G4 nodes is small; we have to dig deeper to find out whether there is a technical need to invest in newer SVC nodes.

Customer session on 4 April 2003

The IO Heat Analysis overlay in the treemap shows
that the customer does not use the performance capabilities
of his backend storage.


This shows that enough performance capability is
available - in theory.

In practice, if the systems in the storage backend cannot
deliver the performance they were built for, then all these
calculations are worthless.

The result is then that the performance of the storage
backend systems is not sufficient, for several possible
reasons:

The processor of the system is too slow. It does not help
to build in hundreds of disks for performance when
the processor is not able to serve all these IOs.

Something is slowing down the performance, such as
a WAFL file system that is too full to perform striping
(which might be the case here).

The coloring is based on calculations we did for the arrays:

  • blue - the array uses less than 20% of its
    performance capability
  • green - between 20% and 80%
  • yellow - between 80% and 100%
  • red - more than 100%
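
As an illustration, here is a minimal sketch (Python; the function name and the assumption that an array's utilization has already been calculated as a percentage of its performance capability are ours, not BVQ internals) of how such a coloring rule could look:

    def heat_color(utilization_pct: float) -> str:
        """Map an array's load, as percent of its calculated performance
        capability, to the treemap overlay color described above."""
        if utilization_pct < 20:
            return "blue"      # uses less than 20% of its capability
        if utilization_pct <= 80:
            return "green"     # healthy working range
        if utilization_pct <= 100:
            return "yellow"    # close to the calculated limit
        return "red"           # demand exceeds the calculated capability

    # Example: an array driven at 35% of its calculated capability
    print(heat_color(35.0))    # -> "green"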

 

 

The CPU load and RW cache are in a normal range.


The global write cache maximum is much too high;
it should normally range from 0% up to less than 80%.

In this situation the write cache maximum is mostly more
than 75%, which is much too high.


This is a normally loaded system:

  • (red) CPU << 70%
  • (blue) RW Cache SVC = 78
  • (blue) RW Cache Storwize = 72
  • (green) Global W-Cache max < 80%
  • (dark green, not shown here)
    Global W-Cache min often 0
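
For illustration, a small sketch (Python; the thresholds are taken from the reference values above, the function itself and its parameter names are hypothetical) that flags measurements falling outside these normal ranges:

    def check_node_stats(cpu_pct, global_wcache_max_pct, global_wcache_min_pct):
        """Compare measured node values against the reference ranges of a
        normally loaded system listed above."""
        findings = []
        if cpu_pct >= 70:
            findings.append(f"CPU at {cpu_pct}% (expected well below 70%)")
        if global_wcache_max_pct >= 80:
            findings.append(f"Global W-Cache max at {global_wcache_max_pct}% (expected < 80%)")
        if global_wcache_min_pct > 0:
            findings.append(f"Global W-Cache min at {global_wcache_min_pct}% (often expected at 0)")
        return findings or ["values within the normal range"]

    # The overloaded customer system: write cache max stuck above 75%
    print(check_node_stats(cpu_pct=45, global_wcache_max_pct=92, global_wcache_min_pct=40))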

This is one of the managed disk groups of
the customer.

The maximum write cache fullness is nearly always
more than 80%.

 

It looks like the SVC cannot get rid of the cache
content because of the restricted performance in
the storage backend.


  • (green) MDG W Cache max - sometimes at 100%
    for short timeframes. It should normally be
    at 80% or less. Long periods at 100% indicate overload,
    especially when the MDG W Cache min is over 80%.
    Values of 100% should occur only as single peaks;
    in that case watch the min value - it has to stay below 80%.

  • (dark green, not shown here)
    MDG W Cache min - should often be 0, sometimes
    at 80% for shorter periods, and seldom more than 80%.
    90% and more is an alarm signal.
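
These rules of thumb can be turned into a simple check over a series of samples. The following Python sketch is ours; the 10% share used to decide what counts as a "long period" is an assumed cut-off, not a BVQ value:

    def classify_mdg_write_cache(max_samples, min_samples):
        """Apply the rules of thumb above to per-interval MDG write cache
        fullness samples (max and min, in percent)."""
        share_max_full = sum(1 for v in max_samples if v >= 100) / len(max_samples)
        share_min_high = sum(1 for v in min_samples if v > 80) / len(min_samples)
        if any(v >= 90 for v in min_samples):
            return "alarm: MDG W Cache min reaches 90% or more"
        if share_max_full > 0.10 and share_min_high > 0.10:
            return "overload: long periods at 100% max while min stays above 80%"
        if share_max_full > 0:
            return "single peaks at 100% - keep the min value below 80%"
        return "normal"

    # Example with invented samples resembling the customer's MDG
    print(classify_mdg_write_cache(max_samples=[100, 100, 95, 100, 100],
                                   min_samples=[85, 82, 70, 88, 84]))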

The SAN does not show any problems.
In the time period of the cache overflow no heavy
transfer activity can be measured on the SAN
ports, and no Buffer Credit Wait % problems occur.

 

  • (yellow) Buffer Credit Wait % - always measure this
    on the single port (never on an aggregate).
    Less than 10% is OK as long as there are no other
    performance complaints at that moment.

  • All other codes from the debug section have to be 0
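
The point about measuring per port rather than on an aggregate can be illustrated with a short sketch (Python; the port names and values are invented, only the 10% rule comes from the text):

    # Buffer Credit Wait % measured per port - one congested port stands out
    ports = {"port1": 2.0, "port2": 1.5, "port3": 30.0, "port4": 0.5}

    aggregate = sum(ports.values()) / len(ports)          # 8.5% - looks harmless
    problem_ports = {p: v for p, v in ports.items() if v >= 10}

    print(f"aggregate Buffer Credit Wait: {aggregate:.1f}%")
    print(f"ports above the 10% threshold: {problem_ports}")   # {'port3': 30.0}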

Another sign of congestion is that the response times
of the SAP volumes move in parallel with their IOPS demand.

 

More IOPS automatically lead to higher response times.
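
One way to make this relationship visible in numbers is to correlate the two time series, for example with a simple Pearson correlation (Python 3.10+ sketch; the sample values are invented for illustration):

    from statistics import correlation   # available since Python 3.10

    # Per-interval samples for one SAP volume (invented example values)
    iops        = [1200, 1500, 2100, 2600, 3100, 3400]
    response_ms = [2.1, 2.4, 3.8, 5.2, 7.9, 9.5]

    # On a healthy system the response time stays flat while IOPS grow;
    # a strong positive correlation hints at a saturated backend.
    print(f"correlation(IOPS, response time) = {correlation(iops, response_ms):.2f}")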



 

Let us help!

  • Bottleneck Analysis
  • Planning Analysis
  • Health Check
  • Consulting

