Friday, June 29, 2012

Behavior Driven Testing (BDT) in Agile

I am speaking at SoftTec 2012 in Bangalore on 14th July, on Behavior Driven Testing (BDT) in Agile.


Abstract: 
In this talk, I will explain Agile Testing and how a technique called "Behavior Driven Testing (BDT)" can make your testing more effective. I will also cover the differences between BDD (Behavior Driven Development) and BDT, how BDT affects the Test Pyramid, and the value proposition of using BDT. 

Feedback on WAAT

I am considering adding some functionality to WAAT. However, before I do that, I would like to know what your opinion is.


So, to all those who are using WAAT, have tried using it, or want to use it: can you please provide me some feedback based on the following questions:



  • Which flavor of WAAT do you use? 
    • Java
    • Ruby
    • Both
  • Have you faced any problems using WAAT? 
    • If yes, what problems? How did you resolve them?
  • WAAT's httpSniffer approach has known limitations (namely: it does not support capturing HTTPS requests, and on non-Windows platforms you need to run the tests with root access). 
    • Have you run into these limitations? 
    • How did you resolve the issue?
  • Do you find the WAAT wiki useful?
    • If not, what could be done differently to provide more value?
  • Any other thoughts / comments on how WAAT can be made better?
Looking forward to your comments.

Thanks.

Anand

Thursday, June 21, 2012

Test Driven Development via Agile Testing - slides + audio

I just finished a presentation on Test Driven Development via Agile Testing at the Next Generation Testing Conference in Bangalore. It went pretty well.


The talk covered the following topics and answered questions related to:
  • Overview of Agile Testing
  • The Test Pyramid
  • Different flavors of TDD
    • BDD – Behavior Driven Development
    • ATDD – Acceptance Test Driven Development
    • BDT – Behavior Driven Testing
      • Difference between BDD and BDT
      • Tools that support BDT
      • The value proposition BDT offers
Here is a link to the slides. The audio recording of my talk can be downloaded from the link below; you can listen to it using VLC Player or similar.


Wednesday, June 20, 2012

vodQA Geek Night - Behavior Driven Testing (BDT) on 5th July

I presented a session on Behavior Driven Testing (BDT) at vodQA - Testing and Beyond.


We are now running a workshop as a follow-up to that session, to provide first-hand experience of BDT and how it can potentially help you in your testing efforts.


Since this is a workshop, seats are limited. If you are interested in attending, please join our vodQA group on Facebook and confirm your presence for the vodQA Geek Night event scheduled at 5.30pm on 5th July 2012 at ThoughtWorks Pune.

Thursday, May 31, 2012

Test Driven Development via Agile Testing

I will be giving a talk at the "Next Generation Testing Conference" in Bangalore on 21st June 2012. 


The topic and abstract are as mentioned below. See you at the conference!

Title:

Test Driven Development via Agile Testing 

Abstract covering main features of the talk:
In this talk, I will cover the following topics and answer questions related to:
  • Overview of Agile Testing
  • The Test Pyramid
  • Different flavors of TDD
    • BDD – Behavior Driven Development
    • ATDD – Acceptance Test Driven Development
    • BDT – Behavior Driven Testing
      • Difference between BDD and BDT
      • Tools that support BDT
      • The value proposition

Thursday, May 17, 2012

Keeping your test suites "green"

My article on Keeping your test suites "green" has been published in SiliconIndia's QA City. Looking forward to your comments.


The same article is quoted below:


In these days, when we are talking and thinking more and more about how to achieve "Continuous Delivery" in our software projects, Test Automation plays an even more crucial role. 

To reap the benefits of test automation, you want to run it as often as possible. However, just putting your test automation jobs in some CI tool like Hudson / Jenkins / GO / etc., and having them run every so often is of little value unless the tests are passing, or the failures are identified and analyzed immediately, AND proper action is taken based on those failures. 

If there are quite a few failures or jobs, then test failure analysis and test maintenance take a lot of time. As a result, the development / product / project team may start losing confidence in the automation suite because the CI always shows the jobs in red. Eventually, test automation may lose priority and value, which is not a good sign. 

Before I explain a technique that may help keep your test suites "green" and reduce test failure analysis and maintenance time, let us understand why we get into this problem.

I have seen functional tests fail for three main reasons:

1. The product has undergone some "unexpected" change. The test has caught a regression bug: the product changed when it was not supposed to.
2. The product has undergone some "expected" change and the test has not yet been updated to keep up with the new functionality.
3. There is an intermittent issue - maybe related to environment / database / browser / network / 3rd party integration / etc.
Regardless of the reason, if there is even one failure in your CI job, the whole job fails and turns "red". 

This is painful, and more importantly, it does not give a correct picture of the health of the system.

To determine the health of the system, we now need to:

• Spend dedicated time per test run to ensure the failures in the jobs are analyzed and accounted for,
• In the case of genuine failures, report defects against the product, or,
• In the case of test failures caused by expected product changes, update the tests to match the new functionality, or,
• In the case of intermittent failures, rerun the test to confirm the failure was indeed due to an intermittent issue.

This is not a trivial task to keep doing on every test run. So can something be done to keep your test suites green and provide a true representation of the health of the product under test?

Here is a strategy that will reduce the manual analysis of your test runs and provide a better understanding of how well the product conforms to what it is supposed to do:

Let's make some assumptions:


1. Say you have 5 jobs of various types in your CI.
2. Each job uses a specific tag / annotation to run specific types of tests.

Now here is what you do:

1. Create appropriate commands / tasks in your test framework to execute tests with a new "failing_tests" tag / annotation.
2. Create a new CI job - "Failing Tests" - and point it to run the tests with the "failing_tests" tag / annotation.
3. Analyze all your existing jobs, and for every test that has failed for any of the reasons mentioned earlier, comment out the original tag / annotation and add the "failing_tests" tag / annotation instead (see the sketch after this list).
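
To make step 3 concrete, here is a minimal sketch, assuming a Java test framework with TestNG groups as the tag / annotation mechanism and Maven Surefire as the runner; the class, test, and group names are purely illustrative, and the same idea applies to JUnit categories or Cucumber tags:

    import static org.testng.Assert.assertTrue;
    import org.testng.annotations.Test;

    public class CheckoutTests {

        // A healthy test: stays on its original tag and runs in its usual
        // CI job, e.g. via: mvn test -Dgroups=regression
        @Test(groups = {"regression"})
        public void customerCanApplyDiscountCode() {
            assertTrue(true); // placeholder for the real test body
        }

        // A known failure: the original tag is commented out and replaced
        // with "failing_tests", so it now runs only in the "Failing Tests"
        // CI job, e.g. via: mvn test -Dgroups=failing_tests
        // @Test(groups = {"regression"})
        @Test(groups = {"failing_tests"})
        public void customerCanPayWithSavedCard() {
            assertTrue(true); // placeholder for the real test body
        }
    }

Because the move is just an annotation change, the test body itself stays untouched, and moving the test back later is equally mechanical.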

Run all the tests again, and you should now see the following:

• The tests that pass will continue to pass, with the added benefit that their CI jobs turn green.
• The tests that fail will continue to fail - but in the separate, special "Failing Tests" CI job. 
• As a result, all 5 original CI jobs will now turn GREEN, and you just need to monitor the "Failing Tests" job.

This means your test analysis effort has been reduced from 5 jobs to just 1 job. 

When a failing test starts passing again, replace the "failing_tests" tag with the original tag.

If you want to categorize the failing tests in a better way, you could potentially create separate, category-specific "Failing Tests" jobs (a sketch follows the list) like:

• "Failing Tests - Open Defects"
• "Failing Tests - Test updates needed"
• "Failing Tests - Intermittent / environment issues"

Regardless of your approach, the solution should be simple to implement and should save you time at the end of the day to focus on more important testing activities, instead of just analyzing test failures.

One of my colleagues asked:
"What if a smoke test is failing? Should we move that also to a Failing Tests job?"


My answer was: 


"As with most things, you cannot apply one rule for everything. In this case also, you should not apply one strategy to all problems. As each problem is different in nature, you need to create a correct strategy that solves the problem in the best possible way.

That said, fundamentally, the smoke suite should always be "green". If for any reason it is not, then we need to stop everything else and make sure it is passing again.

However, if you have various jobs representing the smoke suite, then you could potentially create a "Smoke - Failing Tests" job along the lines mentioned above, IF that helps reduce the time wasted in test result analysis and provides a correct representation of product health quickly and consistently."

To summarize:

• Create a failing tests CI job and run all the failing tests as part of this job
• All existing CI jobs should turn "green"
• Monitor the failing tests and fix / update them as necessary
• If any of the passing tests fail at any point, first move them to the "Failing Tests" job to ensure the other jobs remain "green"
• When a failing test passes, move that test back from the "Failing Tests" job to the original job.

I have been profiled

SiliconIndia's QA City portal has put up my career profile on their site. You can see that here.

Monday, April 30, 2012

Theoretical Vs Practical knowledge

http://dilbert.com/strips/comic/2012-04-30/

Funny ... but on the other hand, at times it is true. Just because something has been written about does not necessarily mean it is always true. Things change and evolve, and we need to change and move with them. At times, we need to go against the tide for what we think and believe is the right thing to do.


This definitely applies to what I have seen in my career so far ... so keep thinking in innovative and creative ways - even if at times you have to swim against the tide!

Multi-tasking .... good or bad?

Many times I end up trying to do too many things at almost the same time, and I have had mixed results with this approach.


I think of late, more often than not, I have not been too successful at juggling many things together ... this could be because of mental fatigue and burnout. 


As a result, I have consciously tried to take a step away from items of relatively lower priority. This has helped me tremendously. Also, I came across this post (http://blogs.hbr.org/schwartz/2012/03/the-magic-of-doing-one-thing-a.html), which talks about techniques for being more effective in your work. See if it helps you too!