
Saturday, September 5, 2020

Questions at Taquelah - Does your functional automation really add value?

I spoke at Taquelah Lightning Talks on one of my favorite topics - 

Does your functional automation really add value?


 


You can find the slides here - https://www.slideshare.net/abagmar/does-your-functional-automation-really-add-value

Some references:

https://essenceoftesting.blogspot.com/2020/07/does-your-functional-automation-really.html

https://essenceoftesting.blogspot.com/2020/03/tracking-functional-coverage.html

Tuesday, July 7, 2020

Does your functional automation really add value?


We all know that automation is one of the key enablers for those on the CI-CD journey.

Most teams are:

  • implementing automation
  • talking about its benefits
  • up-skilling themselves
  • talking about tooling
  • etc.

However, many times I feel we are blinded by the theoretical value test automation provides, or because everyone says it adds value, or by the shiny tools / tech-stacks we get to use, or ...

To try and understand more about this, can you answer the questions below?

In your experience, or in your current project:
  1. Does your functional automation really add value?
  2. What makes you say it does / or does not?
  3. How long does it take for tests to run and generate reports?
  4. In most cases, the product-under-test is available on multiple platforms – ex: Android & iOS Native, and on Web. In such cases, for the same scenario that needs to be automated, is the test implemented once for all platforms, or once per platform?
  5. How easy is it to debug and get to the root cause of failures?
  6. How long does it take to update an existing test?
  7. How long does it take to add a new test?
  8. Do your tests run automatically via CI on every new build, or do you need to “trigger” them manually?
  9. What is the test passing percentage?
  10. Do you “rerun” failing tests to check whether a failure was intermittent? (a small sketch of this follows the list)
  11. Can you control the level of parallel execution, and switch to sequential execution when the context demands it?
  12. How clean & DRY is the code?
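
Question 10 is easy to make concrete. As a minimal sketch (my illustration, not from the talk), here is what a "rerun a failing test once" policy looks like in TestNG; the class name RetryOnce, the test class, and the retry limit are my own choices:

```java
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;
import org.testng.annotations.Test;

// Minimal sketch: rerun a failing test once, to separate intermittent
// failures from consistent ones.
public class RetryOnce implements IRetryAnalyzer {
    private static final int MAX_RETRIES = 1; // illustrative limit
    private int attempts = 0;

    @Override
    public boolean retry(ITestResult result) {
        // Returning true tells TestNG to rerun this test.
        return attempts++ < MAX_RETRIES;
    }
}

class LoginTest {
    // The analyzer is attached per test here; a listener could apply it globally.
    @Test(retryAnalyzer = RetryOnce.class)
    public void userCanLogIn() {
        // ... actual test steps ...
    }
}
```

Two caveats: a test that passes only on retry still deserves investigation - rerun-to-green hides flakiness, it does not fix it. And question 11's parallel / sequential switch is similarly a small change in a TestNG suite definition (the suite's parallel and thread-count attributes), so "we can't control it" is rarely a tooling limitation.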

In my experience, unfortunately, most of the functional automation that gets built is:
  • not optimal
  • not fit-for-purpose
  • not fast enough to run
  • inconsistent in its feedback, and hence unreliable

Hence, for the amount of effort invested in implementing automation,
  1. Are you really getting the value from this activity?
  2. How can automation truly provide value for teams?


Monday, November 5, 2018

Upcoming webinar - The Missing Feedback Loop

I am very excited to share that I am going to conduct a webinar hosted by testcraft.io on "The Missing Feedback Loop - The Tools, Techniques, and Automation to Solve It". 

You can register for the webinar from here (https://hubs.ly/H0fp4by0).





Date & Time:
Thursday, November 21, 2018 at 02:00 PM New York (EDT), 11:00 AM San Francisco (PDT) and 08:00 PM Amsterdam (UTC+2)


Friday, October 12, 2018

Conference season is here - talks, workshops, travelling, networking!

September & October 2018 make for a busy conference season for me.

On 27th September, I played a game - "Collaboration - A Taboo!" - at ATA GTR 2018 with an audience of 100+ people. There was absolute chaos in the game - a lot of it self-inflicted ... and thankfully, exactly what I wanted it to be. So much fun, energy and enthusiasm in the room meant no one was feeling drowsy in the post-lunch session!

Typically, I play this game over 45 minutes to an hour. At ATA GTR 2018, though, I had only 30 minutes to play the game and add my own twist on top of it. I have never taken more than the allocated time - and I managed to achieve the objectives of the game in those 30 minutes as well.

Below are some pictures from the game.




Then on 28th September, I spoke on "Measuring Consumer Quality - The Missing Feedback Loop" at StepIn's PSTC 2018. Slides from that talk can be found here.

In October, I will be off to Agile & Automation Days in Krakow, Poland. There I will be speaking about "Measuring Consumer Quality - The Missing Feedback Loop" and also conducting "Analytics Rebooted - A Workshop". See the detailed schedule here.

Then I fly directly to Arlington, VA to participate in STPCon Fall 2018. Here I will be conducting 2 workshops - "Analytics Rebooted - A Workshop" and "Practical Agile Testing Workshop". I am also speaking about "Measuring Consumer Quality - The Missing Feedback Loop".

Will share experiences from these conferences soon!


Saturday, March 17, 2018

Measuring Consumer Quality - The Missing Feedback Loop

I spoke in vodQA at ThoughtWorks, Pune on "Measuring Consumer Quality - the Missing Feedback Loop". 

This talk addresses the why and how from my earlier blog post on "Understanding, Measuring and Building Consumer Quality". I recommend you read that first, before going through the slides and video for this talk.


Abstract:

How to build a good quality product is not a new topic. Proper usage of methodologies, processes, practices, collaboration techniques can yield amazing results for the team, the organisation, and for the end-users of your product.

While there is a lot of emphasis on the processes and practices side, one aspect is still spoken about only "loosely" - the feedback loop from your end-users back into making better decisions.

So, what is this feedback loop? Is it a myth? How do you measure it? Is there a "magic" formula to understand the data received? How do you add value to your product using this data?

In this interactive session, we will use a case study of a B2C entertainment-domain product (having millions of consumers) as an example to understand and also answer the following questions:

  • The importance of knowing your Consumers 
  • How do you know your product is working well? 
  • How do you know your Consumers are engaged with your product? 
  • Can you draw inferences and patterns from the data to reach a point of being able to make predictions on Consumer behaviour, before making any code change?

Video:


Slides can be found here.

Pictures:



Tuesday, January 12, 2016

The story of a 'small' vodQA ending up being 'x-large'

We are extremely happy to start the new year with a YASV (Yet Another Successful vodQA) event, this time with the theme "Agile Testing Workshop", conducted on 9th January 2016 in the ThoughtWorks Pune office.

Why the theme - "Agile Testing Workshop"?

Over the past few years - after having worked on numerous projects, interacted with a lot of clients (and their partners / vendors), and gained insights from speaking with individuals & teams at conferences & organizations - we (the vodQA Pune team) realized that a sizeable portion of the software (testing) industry lacks a good understanding of Agile, and of effective testing on Agile projects / teams.

So, we decided to conduct the next vodQA in Pune - focussed on Agile Testing to answer questions like - "What is Agile and what does it mean to Test on Agile projects / teams?"

Highlights

  • When we started planning for this edition of vodQA, the plan was to keep it very lean - in planning, execution and participation as well. For this, we planned to keep this vodQA 'small'. Little did we realize it would end up being a patiala peg.
  • What started out as an event aimed at 30 attendees soon shot up to 180+ RSVPs on Facebook, then 160+ confirmations, and eventually 85+ attendees. Including ThoughtWorkers, we (again) crossed 100+ people for vodQA Pune! There went a lot of our 'being-lean' out of the window!
  • This event was completely driven by the Facebook group (from announcements to registrations to updates).
  • We had quite a few attendees travel from outside Pune for vodQA (ex: Mumbai, Nagpur)
  • This was one of the most vocal, enthusiastic and interactive audiences vodQA Pune has seen. They shared their experiences and asked a lot of questions as well.
  • True to our objective for this vodQA, we ensured there was sufficient time between sessions / workshops to facilitate discussions and answer specific questions from the attendees.
  • We had an impromptu fishbowl discussion on certain Parking Lot questions.
  • After the first session of the day (Agile Game), the attendees celebrated (it was over) by bursting the balloons - early Diwali some would say … :)
  • A huge shoutout to the organisers, who kept tweaking their execution plans in the days before the event, as the expected turnout gradually rose from 30 to 100+.

Agenda and Slides

Topic | By | Slides
Welcome note | Anand Bagmar |
Agile Game | Abhay Dalvi, Vardhan Bhatt & Vikrant Chauhan |
Tea break
What is Agile Testing? | Amit Gundiyal & Prasad Kalgutkar | http://www.slideshare.net/vodqanite/what-is-agile-testing-56891493
Effective Strategies for Distributed Testing | Preeti Mishra | http://www.slideshare.net/vodqanite/strategies-for-distributed-testing
Lunch
Testing the Mysterious Sphere | Anjali Wadhwa, Ashwini Ingle & Preeti Mishra | http://www.slideshare.net/vodqanite/testing-the-mysterious-sphere
Break
Test Automation - Principles, Practices | Vardhan Bhatt & Vikrant Chauhan | http://www.slideshare.net/vodqanite/lessons-learnt-from-test-automation-principles-practices
Tea + Snacks break
Patterns in Test Automation (Framework + Data) | Anand Bagmar | http://www.slideshare.net/abagmar/patterns-in-test-automation

Feedback

  • Overall workshop was wonderful. Presentation and content was good. Helpful to understand and implement in our current process.
  • Agile testing game taught us to focus more on quality than quantity & take feedback as soon as possible from the PO
  • Though I am not working in Agile env currently, I understood whole session and got to learn something.

The always rocking vodQA Pune team!!

Saturday, April 25, 2015

Push the Envelope at vodQA, Bangalore

[UPDATED - Slides added]

Yet another vodQA begins today, Saturday, 25th April 2015 - this time at ThoughtWorks, Bangalore. The theme for this vodQA is "Push the Envelope". The detailed agenda can be found here.


I conducted a workshop on "Client-side Performance Testing" in vodQA Bangalore. 


Abstract of the workshop:



In this workshop, we will see the different dimensions of Performance Testing and Performance Engineering, and focus on Client-side Performance Testing. 

Before we get to doing some Client-side Performance Testing activities, we will first understand how to look at client-side performance, and how to put that in the context of the product under test. We will see, using a case study, the impact of caching on performance - the good & the bad! We will then experiment with tools like WebPageTest and Page Speed to understand how to measure client-side performance.



Lastly - just understanding the performance of the product is not sufficient. We will look at how to automate testing for this activity using WebPageTest (on a private instance setup), and experiment with yslow as a low-cost, programmatic alternative to WebPageTest.
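
To give a flavour of that automation (a sketch under assumptions, not the workshop's actual code): WebPageTest exposes a runtest.php HTTP endpoint, so kicking off a measurement against a private instance is one small HTTP call. The server and page URLs below are placeholders:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Sketch: trigger a WebPageTest run on a private instance and print the response.
public class WptTrigger {
    public static void main(String[] args) throws Exception {
        String wptServer = "http://wpt.example.com"; // hypothetical private instance
        String pageUrl = URLEncoder.encode("https://www.example.com/", StandardCharsets.UTF_8);

        // f=json asks WebPageTest for a JSON response, which includes URLs
        // to poll for the test status and the final results.
        URI endpoint = URI.create(wptServer + "/runtest.php?url=" + pageUrl + "&f=json");

        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(endpoint).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```

Polling the returned status URL until the run completes, and then asserting on metrics such as time-to-first-byte or speed index, is what turns the measurement into an automated test.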

Here are the slides used in the workshop:

Wednesday, December 17, 2014

Testing in the Medical domain

I had the opportunity recently to do some testing, though for a very short time, in the Medical domain - something that I have always aspired to. I learnt a lot in this time and have gained a lot of appreciation for people working in such mission-critical domains. 

Some of these experiences have been published as "A Humbling Experience in Oncology Treatment Testing" on ThoughtWorks Insights. I look forward to your comments and feedback on the same.

Saturday, November 22, 2014

To Deploy or Not to Deploy - decide using Test Trend Analyzer (TTA) in AgilePune 2014

I spoke on the topic "To Deploy or Not to Deploy - decide using Test Trend Analyzer (TTA)" at Agile Pune 2014.

The slides from the talk are available here, and the video is available here.



 

Below is some information about the content.


The key objective of organizations is to provide / derive value from the products / services they offer. To achieve this, they need to be able to deliver their offerings in the quickest time possible, and with good quality!
In order for these organizations to understand the quality / health of their products at a quick glance, a team of people typically scrambles to manually collect and collate the information needed to get a sense of the quality of the products they support.


So, in this fast-moving environment, where CI (Continuous Integration) and CD (Continuous Delivery) are now a necessity and not a luxury, how can teams decide whether the product is ready to be deployed to the next environment?


Test Automation across all layers of the Test Pyramid is one of the first building blocks to ensure the team gets quick feedback into the health of the product-under-test.

The next set of questions are:
  • How can you collate this information in a meaningful fashion to determine - yes, my code is ready to be promoted from one environment to the next? (a minimal collation sketch follows below)
  • How can you know if the product is ready to go 'live'?
  • What is the health of your product portfolio at any point in time?
  • Can you identify patterns in the test results over a period of time, to speed up root-cause analysis and make better decisions about the quality of your product(s)?
The current set of tools is limited, and fails to give a holistic picture of quality and health across the life-cycle of the products.
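
To make the collation question above concrete, here is a minimal sketch (my illustration, not TTA's implementation) that walks JUnit-style XML reports and computes an overall pass percentage; the target/surefire-reports directory is a Maven-specific assumption:

```java
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

// Sketch: collate pass / fail counts from JUnit-style XML reports.
public class ResultCollator {
    public static void main(String[] args) throws Exception {
        Path reportsDir = Paths.get("target/surefire-reports"); // Maven Surefire assumption
        int total = 0, failed = 0;
        try (DirectoryStream<Path> reports = Files.newDirectoryStream(reportsDir, "TEST-*.xml")) {
            for (Path report : reports) {
                // Root element: <testsuite tests=".." failures=".." errors="..">
                Element suite = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(report.toFile())
                        .getDocumentElement();
                total += Integer.parseInt(suite.getAttribute("tests"));
                failed += Integer.parseInt(suite.getAttribute("failures"))
                        + Integer.parseInt(suite.getAttribute("errors"));
            }
        }
        double passRate = (total == 0) ? 0.0 : 100.0 * (total - failed) / total;
        System.out.printf("tests=%d failed=%d pass-rate=%.1f%%%n", total, failed, passRate);
    }
}
```

Persist these numbers per build, and the trend across builds - not any single run - becomes the data point for a go / no-go conversation; that trend view is essentially what TTA visualizes.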
 

The solution - TTA - Test Trend Analyzer
 
TTA is an open source product that becomes the source of information giving you real-time, visual insights into the health of the product portfolio using Test Automation results - in the form of Trends, Comparative Analysis, Failure Analysis and Functional Performance Benchmarking. This allows teams to make decisions on promoting the product to the next environment using actual data points, instead of 'gut-feel' based decisions.
 
There are two audiences who will benefit from TTA:
1. Management - who want to know, in real time, the latest state of test execution trends across their product portfolios / projects. They can also use the data represented in the trend-analysis views to make more informed decisions on which products / projects need more, or less, of their focus. Views like the Test Pyramid View and Comparative Analysis help in looking at results over a period of time, and in using that as a data point to identify trends.

 
2. Team Members (developers / testers) - who want to do quick test-failure analysis to get to the root cause as quickly as possible. Views like Compare Runs, Failure Analysis and Test Execution Trend help the team on a day-to-day basis.
 
NOTE: TTA does not claim to give answers to the underlying problems. It gives a visual representation of test execution results in different formats, which allows team members / management to have more focussed conversations based on data points.

Some pictures from the talk ... (Thanks to Shirish)








Thursday, July 31, 2014

Enabling Continuous Delivery (CD) in Enterprises with Testing

I spoke about "Enabling Continuous Delivery (CD) in Enterprises with Testing" at Unicom's World Conference on Next Generation Testing.

I started this talk by stating that I am going to prove that "A Triangle = A Pentagon". 

A Triangle == A Pentagon??

I am happy to say that I was able to prove that "A Triangle IS A Pentagon" - in fact, I left reasonable doubt in the audience's mind that "A Triangle CAN BE an n-dimensional Polygon".
Confused? How is this related to Continuous Delivery (CD), or Testing? See the slides and the video from the talk to know more.

This topic is also available on ThoughtWorks Insights.

Below are some pictures from the conference.






Friday, June 27, 2014

The Feedback Tradeoff

If you are a tester doing or involved with Test Automation, or a developer, I hope you are following the exciting debate about Test Driven Development (TDD) and its impact on software design. If you are not, you should!

My summary and takeaways from Part 3 of the series are now out on ThoughtWorks Insights - "The Feedback Tradeoff".