OneStopTesting
     

    100% Automation Pass Rates



    Well, now that the summer has "mostly" passed and we are "mostly" finished with our latest release I am finally able to see light at the end of the tunnel. So, time to start blogging again!

Back in May my colleague wrote a post on why 100% automation pass rates are bad. In the past I also questioned whether striving for 100% pass rates in automated test passes was an important goal. I wondered whether setting this goal focused testers on the wrong objective. I was concerned that my managers might assume that 100% automation implied the product was of high quality. And I struggled with the notion of investing time in automated tests that are unlikely to find new bugs after they are designed versus time spent "testing."

Automated testing is now more pervasive throughout the industry than it was in the past. Automated tests running against daily builds are an essential element of an Agile lifecycle. Continuous integration (CI) requires continuous feedback from automated tests. So, let's dispel some myths and explain why 100% pass rates are not only a good thing, but an important goal to strive toward. Let's start by understanding the nature and purpose of many automated test suites.

Even with high-volume automation, the number of automated tests is a relatively small percentage of the overall (explicit and exploratory) testing effort dedicated to any project. This limited subset of targeted tests does not in any way imply the product works as expected. An automated test suite provides information about the status of specific functional attributes of the product for each new build that incorporates changes in the code base. The ability of those tests to provide accurate information is determined by the design and reliability of each test and the effectiveness of the oracle it uses. Basing the overall quality of a product on a limited subset of automated tests is as foolish as the notion that automated tests will replace testers.

Also, most automated testing suites are intended to provide baseline assessments, or to measure various aspects of non-functional behavior such as performance, bandwidth, or battery usage. For example, my unit tests provide a baseline assessment of a function or method in code, so when I refactor that code at a later date my unit tests validate that the method works exactly as it did before the change. Likewise, many higher-level test automation suites are some form of regression test, similar to lower-level unit tests. Regression tests help identify bugs introduced by changes in the product code through refactoring, adding or removing features, or fixing bugs. Automated regression suites are usually not intended or designed to find "new" bugs; they provide a baseline and help find regressions in product functionality after changes in the code base.
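A minimal sketch of that baseline idea (the function and test names here are hypothetical): the unit tests pin down the current behavior of a function, so that any later refactor can be validated against exactly the same expectations.

```python
import unittest

def word_count(text):
    # Original implementation: split on whitespace.
    # A refactor may change how this works internally, but the
    # baseline tests below must still pass unchanged.
    return len(text.split())

class WordCountBaseline(unittest.TestCase):
    """Baseline assessment: any refactor of word_count must keep these green."""

    def test_simple_sentence(self):
        self.assertEqual(word_count("the quick brown fox"), 4)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  hello   world  "), 2)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)
```

If a refactored `word_count` fails any of these, the suite has caught a regression rather than a "new" bug; that is all it is designed to do.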

We should also remember that the purpose of testing is to provide information continually throughout the lifecycle of a product. Testers who are only focused on finding bugs are providing only one aspect of that information. I know many books have been written touting that the purpose of testing is to find bugs. However, we should move beyond outdated and limited views of testing and mature the profession toward the more valuable service of collecting and analyzing data through tests, and providing decision makers with the information that helps them make the appropriate decisions. Testers should strive to provide information that:

• assesses the product's ability to satisfy explicit and implicit goals, objectives, and other requirements → measure
• identifies functional and behavioral issues that may negatively impact customer satisfaction or the business → find bugs

Automated tests are just one tool that helps the team (especially management) quickly assess a product's ability to satisfy a limited set of explicit conditions after changes in the code base (especially in complex products with multiple developers and external dependencies). Automated tests enable earlier identification of regressions in the product (things that used to work but are now broken), and also provide a baseline for additional testing.

Automation pass rates below 100% indicate a failure in the "system." The "system" in this case includes the product, the test code, the test infrastructure, and external dependencies such as Internet connections. An automated test that is failing due to a bug in the product indicates that the product doesn't satisfy specific functional, non-functional, or behavioral attributes that are important. (I wouldn't waste time writing automated tests for unimportant things; if a failure wouldn't have a good probability of being fixed, I likely wouldn't automate the test in the first place.) So, if the pass rate is something the leadership team looks at every day (assuming daily builds), there is usually more focus on getting a fix in sooner.
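One way to keep that daily attention on the pass rate is to compute it from the nightly results and gate the build on it. A minimal sketch, assuming a hypothetical result format of (test name, outcome) pairs:

```python
def pass_rate(results):
    """Pass rate from a list of (test_name, outcome) pairs.

    Any outcome other than "pass" -- fail, abort, inconclusive --
    counts against the rate, since each one needs investigation.
    """
    if not results:
        return 100.0
    passed = sum(1 for _, outcome in results if outcome == "pass")
    return 100.0 * passed / len(results)

def gate_build(results):
    """Fail the build (return False) unless every test passed."""
    rate = pass_rate(results)
    if rate < 100.0:
        failures = [name for name, outcome in results if outcome != "pass"]
        print(f"Pass rate {rate:.1f}% -- investigate: {failures}")
        return False
    return True
```

Because the gate treats aborted and inconclusive outcomes the same as failures, nothing short of a fully green run lets the build through, which forces the investigation the article argues for.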

Tests that report a failure due to faulty test code are essentially reporting false positives. A false positive indicates a bug, but in this case the bug is in the test code, not in the product. Bugs in test code have various causes, but whenever a test throws a false positive it eats up valuable time (testers need to troubleshoot, investigate, and fix it) and reduces confidence in the effectiveness of the automation suite. An automated test suite should be bulletproof, and testers should adopt zero tolerance for faulty or error-prone test code. Ultimately, every test failure (the test fails, aborts, or the outcome is inconclusive) must be investigated, or the team may become numb to failing tests.

Becoming numb to automated test pass rates below 100% is a significant danger. In one case a team overlooked a product bug because the pass rate was consistently around 95% and they had stopped investigating all failures in the automated test suite. The team had become accustomed to the pass rate varying a little due to unreliable tests that sometimes passed and sometimes threw false positives because of network latency. So, when one test accurately detected a regression in product functionality, it went unnoticed because the team was numb to the pass rate and did not adequately investigate each failure.
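Latency-induced flakiness like that is usually best removed at the source. One common approach, sketched here with hypothetical `fetch_status` and `service_is_healthy` helpers, is to stub the network dependency so the test's outcome no longer depends on latency at all:

```python
import urllib.request

def fetch_status(url, timeout=5):
    """Real network call: return the HTTP status code for url."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status

def service_is_healthy(url, fetch=fetch_status):
    """Feature under test: healthy means the health endpoint returns 200.

    The network call is injectable so tests can replace it with a stub.
    """
    return fetch(url) == 200

def test_service_is_healthy():
    # The stub replaces the real network call, so latency or outages
    # can no longer turn this test into a false positive.
    assert service_is_healthy("http://example.test/health",
                              fetch=lambda url: 200)

def test_service_is_unhealthy():
    assert not service_is_healthy("http://example.test/health",
                                  fetch=lambda url: 503)
```

The logic under test (status code 200 means healthy) is still fully exercised; only the unreliable transport is stubbed out, which is exactly the part that was producing the intermittent false positives.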

    The bottom line is that teams should strive for a 100% pass rate in functional test suites. I have never met a tester who would be satisfied with less than 100% of all unit tests passing, so we shouldn't be hypocritical and not demand the same from our own functional automated test suites.


