Give credible information about the state of the component, on which
business decisions can be based.
The faults that should be found through volume testing are those where the
behaviour of the software deviates from what is expected at a specified
volume of data. Thus a banking system will be tested for faults at much
larger volumes of data than small retailer software. A fault that only
manifests on a table with a million records will be of no concern to the
retail software, but will be picked up by the bank's testers.
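To make this concrete, here is a minimal sketch of such a volume test in
Python, using an in-memory SQLite database. The accounts schema, the
one-million-row figure and the one-second budget are illustrative
assumptions, not requirements taken from any real system:

    import sqlite3
    import time

    ROW_COUNT = 1_000_000   # bank-scale volume; a retailer might test far fewer rows
    TIME_BUDGET_S = 1.0     # assumed acceptable response time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
    conn.executemany(
        "INSERT INTO accounts VALUES (?, ?)",
        ((i, 100.0) for i in range(ROW_COUNT)),
    )
    conn.commit()

    # The fault only manifests at volume: a query that returned instantly
    # on a handful of rows may blow the budget at a million rows.
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*) FROM accounts WHERE balance < 0").fetchone()
    elapsed = time.perf_counter() - start
    assert elapsed < TIME_BUDGET_S, f"query took {elapsed:.2f}s at {ROW_COUNT} rows"

The same test run with a retailer-sized ROW_COUNT would pass comfortably,
which is exactly why the two systems are tested at different volumes.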
Credible information about how the software will behave is essential. During
the dot-com boom many websites went live without knowing what the effect
would be if the back-end database grew exponentially. Many, of course,
suffered crashes as a result.
Why test at the component level? Because at that level we can see how the
code behaves and confirm that the component will not become a bottleneck
that slows the whole system down, or consume too many system resources.
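A resource check of this kind might look like the following Python sketch,
where load_records stands in for a hypothetical component under test and the
64 MiB ceiling is an assumed figure:

    import tracemalloc

    def load_records(n):
        # Stand-in for the hypothetical component under test.
        return [{"id": i, "name": f"row{i}"} for i in range(n)]

    # Measure peak memory at volume, so a memory-hungry component is
    # caught before it is integrated into the wider system.
    tracemalloc.start()
    rows = load_records(100_000)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    MEMORY_BUDGET = 64 * 1024 * 1024  # assumed 64 MiB ceiling
    assert peak < MEMORY_BUDGET, f"peak memory {peak / 2**20:.1f} MiB over budget"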
For example, an often-used window object is populated with data by calling a
database object that runs a complex SQL query. Suppose the component is
tested against tables with only four or five records. Of course it returns
within seconds, and everything seems fine. It is then integrated with the
window and system tested; again everything seems fine. Only when the
application reaches user acceptance testing (or has even gone live) and is
run against 100,000 records is it discovered that the SQL was not properly
optimized and the tables were not indexed. The fault should therefore have
been caught at the component level.
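A component-level test along the following lines would have caught the
fault. The customers table and the region query are hypothetical stand-ins
for the complex SQL in the example, and SQLite is used only so the sketch is
self-contained:

    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, region TEXT)")
    conn.executemany(
        "INSERT INTO customers VALUES (?, ?)",
        ((i, f"region{i % 50}") for i in range(100_000)),
    )

    def timed_lookup():
        start = time.perf_counter()
        conn.execute(
            "SELECT COUNT(*) FROM customers WHERE region = 'region7'"
        ).fetchone()
        return time.perf_counter() - start

    before = timed_lookup()   # full table scan: region is not indexed
    conn.execute("CREATE INDEX idx_region ON customers(region)")
    after = timed_lookup()    # the same query can now use the index
    print(f"unindexed: {before:.4f}s  indexed: {after:.4f}s")

Run against 100,000 rows rather than four or five, the unindexed timing
makes the missing index visible long before user acceptance testing.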
Some methodologies, such as the RUP, advocate early testing. In the early
phases (inception and elaboration), volume testing might therefore take
place to confirm that the core architecture is the one to proceed with.