    Performance Testing

Ensuring that an application will meet its performance and scalability requirements when deployed requires performance testing. Without performance validation, there is a great risk that the application will not perform well enough to support the business, resulting in customer dissatisfaction, lost business opportunities, and lost revenue. Most software development organizations recognize the need for performance testing, and many have invested in automated tools to help execute it. Yet despite this commitment and investment, many organizations still encounter performance problems when their applications reach production deployment: the software doesn't scale as expected, response times fall short of expectations, and performance collapses under certain conditions or at certain stages of a transaction.

    When these application performance failures occur despite testing, it is typically the result of a poorly defined or implemented performance testing practice. Setting up performance testing correctly is extremely challenging. It must accurately reflect such things as the expected population of application users, the number and frequency of tasks that will be executed, the distribution of those tasks, and the parallel activities that might compete for resources. Additionally, it requires well-defined and documented performance goals or requirements from which tests can be built. Without the right test model that reflects the full range of expected or actual usage and that has measurable performance criteria, performance testing will not be successful.

Developing an accurate usage model is key to ensuring that performance testing produces valid results and accurately predicts actual application performance. Yet most organizations have difficulty with this task. Many do not spend enough time researching and defining the basis for the model. Others focus too narrowly on specific requirements or small sets of requirements, such as Service Level Agreements (SLAs), only to find that their applications meet those specific requirements but suffer performance failures in all other areas of use. A good performance testing practice focuses on developing a sound foundation usage model.
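To make the idea of a usage model concrete, the sketch below expresses one as weighted user paths that a load generator samples from when scheduling virtual users. This is a minimal illustration, not a specific tool's format; the path names and weights are hypothetical and would come from your own traffic analysis.

```python
import random

# Hypothetical usage model: each user path and the fraction of
# sessions expected to follow it (illustrative numbers only).
USAGE_MODEL = {
    "browse_catalog": 0.55,
    "search_product": 0.25,
    "checkout_order": 0.15,
    "manage_account": 0.05,
}

def pick_user_path(rng=random):
    """Select the next simulated session's path according to the model."""
    paths = list(USAGE_MODEL)
    weights = list(USAGE_MODEL.values())
    return rng.choices(paths, weights=weights, k=1)[0]

# Over many simulated sessions the generator reproduces the expected
# distribution of tasks -- the "correct combination and frequency"
# the article describes.
counts = {p: 0 for p in USAGE_MODEL}
for _ in range(10_000):
    counts[pick_user_path()] += 1
```

A model expressed as data like this is easy to review against the requirements and to revise as usage patterns change, which supports the iterative refinement discussed later in the article.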

    Linking performance testing to functional testing is another key element for success. Performance testing extends naturally from the requirements management policy and functional testing practice. Ideally, the requirements define the expected execution paths through the application (based on use cases), the expected distribution of user paths being executed simultaneously, the anticipated peak traffic levels, and any SLA performance requirements you are contracted to satisfy. Additionally, functional testing implements test cases that verify whether a single instance of each expected user path operates successfully. Performance testing should then build upon this foundation by taking the functional tests and executing them in the correct combination and frequency to simulate the usage model. To assess scalability, you verify whether the functionality operates correctly under the given load. To assess SLA compliance, you verify whether the application satisfies performance metrics (such as metrics that specify an acceptable threshold for the total time of path execution or the time of each path step execution).
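One way to picture this link is a harness that takes an existing functional test for a single user path, runs it concurrently to simulate load, and checks the timing results against an SLA threshold. The sketch below assumes a stand-in functional test (here it just sleeps); the function names and the 500 ms SLA are illustrative, not from the article.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

SLA_P95_SECONDS = 0.5  # illustrative SLA: 95% of runs under 500 ms

def functional_checkout_test():
    """Stand-in for an existing functional test of one user path.
    A real test would drive the application and assert on results;
    this one sleeps briefly to simulate the work."""
    time.sleep(0.01)
    return True

def timed_run(test_fn):
    """Run one functional test and record how long it took."""
    start = time.perf_counter()
    ok = test_fn()
    return ok, time.perf_counter() - start

def load_test(test_fn, virtual_users=20, iterations=5):
    """Execute the functional test concurrently and collect timings."""
    durations, failures = [], 0
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [pool.submit(timed_run, test_fn)
                   for _ in range(virtual_users * iterations)]
        for f in futures:
            ok, elapsed = f.result()
            durations.append(elapsed)
            failures += 0 if ok else 1
    p95 = statistics.quantiles(durations, n=20)[-1]  # 95th percentile
    return failures, p95

failures, p95 = load_test(functional_checkout_test)
```

The two assessments the article names map directly onto the two outputs: `failures` verifies that functionality still operates correctly under load (scalability), and comparing `p95` against the threshold verifies SLA compliance.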

    An effective performance testing practice involves the definition of guidelines for using performance testing technologies effectively and then implementing and integrating those guidelines (along with supporting technologies and configurations) into your software development lifecycle to ensure that your teams apply the practice consistently and regularly. It also requires a means to monitor and measure the practice's application.

    Effective performance testing must also be an iterative process. It is difficult to get the model defined exactly right the first time. You must be prepared to test, measure, evaluate, and then modify your test structure based on initial test runs in order to refine the model to correctly reflect usage patterns. And as environments change and applications are modified or extended - or as business expectations change and performance requirements are modified - performance testing needs to reflect these changes. Consequently, it is important to have a well-defined performance testing practice that is repeatable, extendable, and iterative.
