Not sure what to automate? Check out these 15 reasons why you should (or shouldn’t) automate a test


Just because you can automate something doesn’t necessarily mean that you should. Below are some factors to consider when deciding which manual tests are good candidates for test automation:

Tests that should be automated:

  • Tests that need to be run against every build/release of the application, such as smoke, sanity, and regression tests.
  • Tests that use the same workflow but different input data for each run (data-driven and boundary tests).
  • Tests that need to gather multiple pieces of information at run time, such as SQL query results and low-level application attributes.
  • Tests that can be used for performance testing, such as stress and load tests.
  • Tests that take a long time to perform and may need to be run during breaks or overnight. Automating these tests maximizes the use of time.
  • Tests that involve inputting large volumes of data.
  • Tests that need to run against multiple configurations, such as different OS and browser combinations.
  • Tests during which images must be captured to prove that the application behaved as expected.

  • Important: Remember that the more repetitive the test run, the better a candidate it is for automation.
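To make the data-driven bullet concrete: the same workflow is exercised against many input rows, including boundary values. Here’s a minimal, tool-neutral sketch in Python; the `validate_age` function and its 18–120 limits are made-up examples, not anything from a real application:

```python
# Minimal data-driven test sketch: one workflow, many input rows.
# validate_age and its 18-120 limits are hypothetical examples.

def validate_age(age):
    """Accept ages in the inclusive range 18-120."""
    return 18 <= age <= 120

# Each row: (input value, expected result) -- boundary values included.
test_data = [
    (17, False),   # just below the lower boundary
    (18, True),    # lower boundary
    (120, True),   # upper boundary
    (121, False),  # just above the upper boundary
]

def run_data_driven_tests():
    """Run the same check against every data row; report pass/fail per row."""
    results = []
    for value, expected in test_data:
        outcome = "pass" if validate_age(value) == expected else "fail"
        results.append((value, outcome))
    return results

if __name__ == "__main__":
    for value, outcome in run_data_driven_tests():
        print(value, outcome)
```

Adding a new case is just adding a row to `test_data`, which is exactly why this shape of test pays off under automation.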

Tests that should not be automated:

  • User experience tests for usability (tests that require a user to judge how easy the app is to use).
  • Tests that you will only run one time. (This is a general rule. I have automated one-time tests for data-population situations in which the steps can be automated quickly and, when placed in a loop, can produce thousands of records, saving a manual tester considerable time and effort.)
  • Tests that need to run ASAP.
  • Tests that require ad hoc/random testing based on domain knowledge/expertise.
  • Tests without predictable results. For automated validation to be successful, a test needs predictable results in order to produce pass and fail conditions.
  • Tests whose results must be manually “eyeballed” to determine whether they are correct.
  • Tests that cannot be 100% automated should not be automated at all, unless doing so will save a considerable amount of time.

What would you add to or remove from this list? Let me know.
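The “predictable results” point is worth spelling out: an automated check can only emit pass or fail if the expected outcome is known up front. A minimal sketch (the record counts and the manual-review flag below are made-up illustrations, not any specific tool’s API):

```python
# An automated verification step needs a deterministic expected value.
# All values here are hypothetical examples.

def verify(actual, expected):
    """Return 'pass' or 'fail' by comparing an actual result to a known expected one."""
    return "pass" if actual == expected else "fail"

def flag_for_manual_review(result):
    """For results with no fixed expected value, the best automation can do
    is hand the result to a human to eyeball."""
    return ("manual review", result)

if __name__ == "__main__":
    # Predictable: the expected record count is known, so pass/fail is automatic.
    print(verify(1000, 1000))  # pass
    print(verify(999, 1000))   # fail

    # Unpredictable: no expected value exists, so flag it for a person instead.
    print(flag_for_manual_review("report layout rendered"))
```

When a result can only be flagged rather than verified, the test is a weak automation candidate unless the time savings elsewhere justify it.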

    7 comments
    Raman - October 6, 2010

    1. Does QTP support phone-based Interactive Voice Response recognition
    (i.e., verifying that the content of the announced prompt matches expected results)?
    If yes, please let me know how to do it.
    Otherwise, is there any tool that supports it?

    Reply
      Joe Colantonio - October 6, 2010

      Hi Raman,

      Since IVR is a pretty specialized interface, I don’t think QTP supports it out of the box. You may be able to roll your own DLL to get the functionality you need and have QTP call the DLL. I think NuEcho may have an IVR solution called NuBot: http://www.nuecho.com/content/view/28/147/lang,en/

      Reply
    Paul - October 13, 2010

    Since you mention “usability testing” in your article, I’d suggest you take a look at http://www.userfeel.com.

    Reply
    Ben - November 22, 2011

    As you already alluded to with your statement “unless doing so will save a considerable amount of time,” I would take your last three points under “Tests that should not be automated” with a grain of salt. We have often automated tests that have high re-run rates or save a lot of time, as you pointed out, but have less-than-predictable results or results that are difficult to validate via QTP. We flag the results with a warning or failure to prompt the test executor to review the results manually and either accept the failure or manually pass the test. It would be better if QTP/QC had a status of “Manual Review” or similar so we didn’t have to use the micFail status, but…

    Thoughts on this approach?

    Reply
      Joe Colantonio - November 22, 2011

      Ben » Hi Ben, have you tried going into your QC Project Entities\Run\Status field settings and adding a new item named “Manual Review”? You could then create some QC OTA code that sets this value for you when the QTP script needs to be reviewed.

      Reply
    Vikas Gholap - May 8, 2012

    Hi Joe,
    For the last year I’ve been doing QTP scripting to automate a Java desktop-based client/server application. We use the BPT model and a data-driven framework.
    I want to know how we can decide which framework/methodology best fits an application.
    Could you explain this in a new article? Or please suggest a good book on the subject.
    Thanks,
    Vikas

    Reply
      Joe Colantonio - May 9, 2012

      Vikas Gholap » Hi Vikas, that’s a tough question to answer because it all depends on the technology and the people using it. I would set up a small pilot program using both approaches (just a small example of each one) and get feedback from the users to decide which approach they prefer.

      Reply