    Models of Automation

    Why do many people completely miss obvious opportunities to automate? This question has been bothering me for years. If you watch carefully, you'll see this all around you: people who, by rights, have deep and expansive experience of test automation, yet are unable to see the pot of gold lying right in front of them. Instead, they charge off hither and thither, searching for nuggets or ROI-panning in the dust.

    Last year I witnessed a conversation which, whilst frustrating at the time, suggested an answer to this puzzle. One of my teams had set about creating a set of tools to help us to test an ETL process. We would use these tools to mine test data and evaluate it against a coverage model (as a prelude to data conditioning), for results prediction and for reconciliation. Our client asked that we hook up with their internal test framework team in order to investigate how we might integrate our toolset with their test management tool. The resulting conversation resembled first contact, and without the benefit of an interpreter (a sketch of our toolchain follows the exchange):

    Framework team: So how do you run the tests?

    Test team: Well, first we mine the data, and then we condition it, run the parallel implementation to predict results, execute the real ETL, do a reconciliation and evaluate the resulting mismatches for possible bugs.

    Framework team: No, I mean, how do you run the automated tests?

    Test team: Well, there aren't any automated tests per se, but we have tools that we've automated.

    Framework team: What tests do those tools run?

    Test team: They don't really. We run the tests, the tools provide information…

    Framework team: So… how do you choose which tests to run, and how do you start them?

    Test team: We never run one test at a time; we run thousands of tests all at the same time, and we trigger the various stages with different tools.

    Framework team: Urghh?
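
    To make the exchange concrete, here is a minimal, self-contained sketch of the kind of toolchain the test team was describing. Everything in it is invented for illustration: the data, the function names and the seeded bug stand in for a real ETL process and real mining, conditioning, prediction and reconciliation tools. The structural point survives the simplification: no function below runs "a test"; each automates one task, and a single run spans thousands of cases.

        # Toy "source system" rows: (customer_id, country, amount_in_cents).
        SOURCE = [(1, "uk", 1000), (2, "us", 2550), (3, "uk", -300), (4, "de", 0)]

        def mine(rows, wanted_countries):
            """Task: mine data against a coverage model (here: one row per country)."""
            seen = {}
            for row in rows:
                seen.setdefault(row[1], row)
            return list(seen.values()), wanted_countries - seen.keys()

        def predict(rows):
            """Task: predict results using a parallel implementation as an oracle."""
            return {cid: (country.upper(), cents / 100.0) for cid, country, cents in rows}

        def etl(rows):
            """Task: run the 'real' ETL under test (seeded with a deliberate bug)."""
            return {cid: (country.upper(), cents // 100)  # bug: integer division
                    for cid, country, cents in rows}

        def reconcile(predicted, actual):
            """Task: reconcile predictions with observations; humans evaluate the rest."""
            return {cid: (predicted[cid], actual.get(cid))
                    for cid in predicted if predicted[cid] != actual.get(cid)}

        rows, gaps = mine(SOURCE, {"uk", "us", "de", "fr"})
        print("coverage gaps to condition:", gaps)      # {'fr'}
        print("mismatches to evaluate:", reconcile(predict(rows), etl(rows)))

    Run as a script, this reports one coverage gap (no French data to condition) and one mismatch (the rounding bug), without any notion of an individually selectable, individually runnable test.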

    Human beings are inveterate modelers: we represent the world through mental constructs that organize our beliefs, knowledge and experiences. It is from the perspective of these models that we draw our understanding of the world. And when people with disjoint mental models interact, the result is often confusion.

    So, how did the framework team see automation?

    • We automate whole tests.

    And the test team?

    • We automate tasks. Those tasks can be parts of tests.

    What we had here wasn't just a failure to communicate, but a mismatch in the way these teams perceived the world. Quite simply, these two teams had no conceptual framework in common; they had little basis for a conversation.

    Now, such models are invaluable; we could not function without them: they provide cues as to where to look, what to take note of, what to consider important. The cognitive load of processing every detail provided by our senses would paralyze us. The price is that we do not perceive the world directly; rather, we do so through the lens of our models. And it is easy to miss things that do not fit those models.

    This is exactly like test design: the mechanism is the same. When we rely on a single test design technique, which after all is only a model used to enumerate specific types of tests, we will tend to find only bugs of a single class: we will be totally blind to other types of bug. When we use a single model to identify opportunities to automate, we will find only opportunities of a single class, and be totally blind to opportunities that don't fit that particular mold.

    Let's look at the model being used by the framework team again. This is the more traditional view of automation. It's in the name: test automation. The automation of tests. If you're not automating tests it's not test automation, right? For it to be test automation, the entire test must be automated. This is a fundamental assumption that underpins the way many people think about automation. You can see it at work behind many common practices (and in the sketch that follows them):

    • The measurement, and targeting, of the proportion of tests that have been automated. Replacement (and I use the term loosely) of tests performed by humans with tests performed by machines. The unit of measure is the test, the whole test and nothing but the test.
    • The selection of tests for automation by using Return on Investment analysis. Which tests, when automated, offer the greatest return on the cost of automation? Which tests should we consider? The emphasis is on evaluating tests for automation, not on evaluating what could be usefully automated.
    • Seeking to automate all tests. Not only must whole tests be automated, every last one should be.
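
    Concretely, the whole-test model expects automation to look something like this toy example (the system under test and the figures are invented): one script is one test, a machine-performed replacement for a human-performed test, to be counted, selected and ROI-assessed as a unit.

        def place_order(country, net_cents):
            """Toy system under test: adds 20% VAT (in cents) for UK orders."""
            vat = net_cents // 5 if country == "uk" else 0
            return net_cents + vat

        def test_vat_is_added_for_uk_orders():
            # One function == one whole test: set-up, stimulus and check
            # form a single automated unit.
            assert place_order("uk", 10000) == 12000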

    This mental model, this belief that we automate at the test level, may have its uses. It may guide us to see cases where we might automate simple checks of results vs. expectations for given inputs. It is blind to cases that are more sophisticated. It is blind to opportunities where parts of a test might be automated and other parts are not. But many tests are exactly like this! Applying a different model of what constitutes a test, and applying it in the context of testing on a specific project, can provide insight into where automation may prove useful. So, when looking for opportunities for automation, I've turned to this model for aid (one of its tasks is sketched after the list):

    • Analyze. Can we use tools to analyze the testing problem? Can it help us extract meaningful themes from specifications, to enumerate interesting or useful tests from a model, to understand coverage?
    • Install and Configure. Can we use tools to install or configure the software? To rapidly switch between states?  To generate, condition and manage data?
    • Drive. Can we use tools to stimulate the software? To simulate input or interactions with other components, systems?
    • Predict. Can we use tools to predict what the software might do under certain conditions?
    • Observe. Can we use tools to observe the behavior of the software? To capture data or messages? To monitor timing?
    • Reconcile. Can we use tools to reconcile our observations and predictions? To help draw attention to mismatches between reality and expectations?
    • Evaluate. Can we use tools to help us make sense of our observations? To aggregate, analyze or visualize results such that we might notice patterns?
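
    As one illustration, consider the Analyze task: a tool can earn its keep without ever running anything. Below is a minimal sketch of enumerating candidate tests from a model; the parameters, values and business constraint are all invented for illustration.

        from itertools import product

        # A toy input model: parameters and their interesting values.
        MODEL = {
            "browser": ["firefox", "chrome"],
            "locale": ["en-GB", "de-DE"],
            "payment": ["card", "invoice", "none"],
        }

        def enumerate_tests(model, constraint=lambda t: True):
            """Yield every combination the model allows. The tool enumerates;
            a human still decides what is worth running."""
            names = sorted(model)
            for values in product(*(model[n] for n in names)):
                candidate = dict(zip(names, values))
                if constraint(candidate):
                    yield candidate

        # Assumed rule for illustration: invoicing is offered only in Germany.
        tests = list(enumerate_tests(
            MODEL, lambda t: not (t["payment"] == "invoice" and t["locale"] != "de-DE")))
        print(len(tests), "candidate tests, e.g.", tests[0])

    None of this runs a test, yet it is automation that directly shapes the testing.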

    Of course, this model is flawed too. Just as the whole-test/replacement paradigm misses certain types of automation, this model most likely has its own blind spots. Thankfully, human beings are inveterate modelers, and there is nothing to stop you from creating your own and hopping frequently from one to another. Please do: we would all benefit from trying out a richer set of models.


