Monday, August 29, 2011

Does extrapolation in estimation work?

How do you estimate test automation efforts for building a regression suite for functionality that is already in Production?


The approach I am following is:
• Get some level of understanding about the domain.
• Look at a subset of existing test cases of varying complexities and sizes.
• Estimate the effort for each of them, identifying the complexity involved along with the risks and assumptions.
• Identify some obvious spikes / tech tasks that will be needed.
• Extrapolate the estimates.
• Add a 10-15% buffer to the estimates for unknowns.


And presto!! ... I have my estimates for the task at hand.
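To make the arithmetic in the last two steps concrete, here is a minimal sketch in Python. All the numbers - the sampled estimates, suite counts, spike effort, and the 10-15% buffer - are made-up values for illustration, not recommendations.

```python
# Minimal sketch of estimate extrapolation (illustrative numbers only).

# Estimates (in hours) for a sampled subset of test cases, bucketed by complexity.
sampled_estimates = {
    "simple":  [2, 3, 2],    # hours per sampled simple test
    "medium":  [5, 6],       # hours per sampled medium test
    "complex": [10, 12],     # hours per sampled complex test
}

# How many tests of each complexity exist in the full regression suite.
suite_counts = {"simple": 40, "medium": 25, "complex": 10}

# Extrapolate: average effort per bucket * number of tests in that bucket.
extrapolated = sum(
    (sum(hours) / len(hours)) * suite_counts[bucket]
    for bucket, hours in sampled_estimates.items()
)

# Add known spikes / tech tasks, then a 10-15% buffer for unknowns.
spikes = 40  # hours for one-off setup / framework tasks
low  = (extrapolated + spikes) * 1.10
high = (extrapolated + spikes) * 1.15

print(f"Extrapolated: {extrapolated:.0f}h; with buffer: {low:.0f}h - {high:.0f}h")
```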


Is this approach right?
Will extrapolating the estimates give valid results?
What different / better ways are there of getting the required results?

Thursday, August 18, 2011

To test or not to test ... do you ask yourself this question?

Among people involved in Test Automation, I have seen that quite a few of us get carried away and start automating anything and everything possible. This usually comes at the expense of ignoring / de-prioritizing other equally important activities like manual (exploratory / ad-hoc) testing.

Also, as a result, the test automation suite becomes very large and unmaintainable, and in all probability not very usable, with a long feedback cycle.

I have found a few strategies work well for me when I approach Test Automation:
  • Take a step back and look at the big picture.
  • Ask yourself the question - "Should I automate this test or not? What value will the product get by automating this?"
  • Evaluate which tests, when automated, will truly provide good and valuable feedback.
  • Based on the evaluation, build and evolve your test automation suite.
One simple and quick technique for evaluating which tests should (or should not) be automated is to do a Cost vs Value analysis of your identified tests using the graph shown below.


This is very straightforward to use.

To start off, analyze your tests and categorize them into the different quadrants of the above graph.
  1. First, automate tests that provide high value at a low cost to build / maintain = #1 in the graph. This is similar to the 80/20 rule.
  2. Then automate tests that provide high value, but have a high cost to build / maintain = #2 in the graph.
  3. Beyond this, IF there is more time available, then CONSIDER automating tests that have low value and low cost. I would rather use my time at this juncture to do manual exploratory testing of the system.
  4. DO NOT automate tests that have low value and a high cost to build / maintain.
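To make the quadrant triage concrete, here is a minimal sketch of it in Python. The 1-10 rating scale, the cutoff of 5, and the candidate tests are all assumptions for illustration; calibrate them to your own context.

```python
# Minimal sketch of the Cost vs Value quadrant triage described above.
# Value and cost are rated on a 1-10 scale; 5 is an assumed quadrant cutoff.

def quadrant(value, cost, cutoff=5):
    """Map a test's value/cost ratings to one of the four quadrants."""
    if value > cutoff and cost <= cutoff:
        return 1  # high value, low cost: automate first
    if value > cutoff and cost > cutoff:
        return 2  # high value, high cost: automate next
    if value <= cutoff and cost <= cutoff:
        return 3  # low value, low cost: consider only if time permits
    return 4      # low value, high cost: do NOT automate

# Hypothetical candidate tests: (name, value rating, cost rating).
candidates = [
    ("login smoke test",         9, 2),
    ("end-to-end checkout",      8, 8),
    ("tooltip text check",       2, 2),
    ("legacy report pixel diff", 2, 9),
]

for name, value, cost in sorted(candidates, key=lambda t: quadrant(t[1], t[2])):
    print(f"quadrant {quadrant(value, cost)}: {name}")
```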

Thursday, August 11, 2011

How do you create / generate Test Data?

Testing (manual or automated) depends on data being present in the product.


How do you create Test Data for your testing?


• Manually create it using the product itself?
• Directly use SQL statements to create the data in the DB (requires knowledge of the schema)?
• Use product APIs / developer help to seed data directly (using product factory objects)? See the sketch after this list.
• Use a production snapshot / production-like data?
• Any other way?
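As an example of the factory-object option above, here is a minimal Python sketch. The `UserFactory` class and its fields are hypothetical stand-ins for whatever factories or APIs your product's codebase actually exposes.

```python
# Hypothetical sketch of seeding test data via product factory objects,
# instead of clicking through the UI or writing raw SQL.
import itertools

_sequence = itertools.count(1)

class UserFactory:
    """Hypothetical factory: builds a valid user with sensible defaults."""

    @staticmethod
    def create(**overrides):
        n = next(_sequence)
        user = {
            "name":  f"test-user-{n}",
            "email": f"test-user-{n}@example.com",
            "role":  "member",
        }
        user.update(overrides)  # a test overrides only what it cares about
        # In a real product this would persist via the product's own API / ORM.
        return user

# Each test states only the data it actually depends on:
admin = UserFactory.create(role="admin")
print(admin["name"], admin["role"])
```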

How do you store this test data?


• Along with the test (in code)?
• In separate test data files - e.g. XML, YAML, or other file types? (Sketched below.)
• In specific tools like Excel?
• In a separate test database?
• Any other way?
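And as an example of the separate-test-data-files option, here is a minimal sketch of reading test data from YAML in Python. The file layout and keys are hypothetical, and it assumes the third-party PyYAML package is installed.

```python
# Minimal sketch: test data kept separate from test code, in YAML.
# Requires the third-party PyYAML package (pip install pyyaml).
import yaml

# Contents that would normally live in, say, tests/data/users.yml:
USERS_YML = """
valid_user:
  name: test-user-1
  email: test-user-1@example.com
locked_user:
  name: test-user-2
  email: test-user-2@example.com
  locked: true
"""

test_data = yaml.safe_load(USERS_YML)

# A test then refers to data by intent, not by literal values:
user = test_data["valid_user"]
print(user["email"])
```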

Do you use the same test data that the developers also use for their tests?


What advantages / disadvantages have you observed in the approach you use (or in any other methods)?


Looking forward to hearing your strategies for test data creation and management!

Thursday, August 4, 2011

Lessons from a 1-day training engagement - a Trainer perspective

I was a Trainer for a bunch of smart QAs recently. The training was about "Agile QA". This term is very vague, and if you think about it more deeply, it is also a very vast topic.

The additional complexity was that this training was to be done in 1 day.

So we broke it down into what really needed to be covered in this duration, and what realistically could be covered in it.

We narrowed it down to 2 fundamental things:
1. How can the QA / Test team be agile and contribute effectively to Agile projects?
2. How can you be more effective in Test Automation - and, moreover, start this activity in parallel with Story development?

So we structured our thoughts, discussions and presentations around these.

At the end of the exhausting day (for the Trainers as well as the Trainees), a number of things stood out (in no particular order):
• What is the "right" Agile QA process?
• Roles and responsibilities of a QA on an Agile team
• Sharing case studies, and the good & not-so-good experiences around them
• Effectiveness of processes and practices
• The value of asking difficult questions at the right time
• The Taboo game - playing it, and reflecting on what we learned from it
• What to automate and what NOT to automate?
• A discussion around "How do you create / generate Test Data?"