
Wednesday, June 2, 2021

Automating Functional / End-2-End Tests Across Multiple Platforms

Do you want to know more about an approach to Functional (end-to-end) Test Automation that works for a product available on multiple platforms?


If yes, then read my blog post on Automating Functional / End-2-End Tests Across Multiple Platforms, which shares the thought process and criteria involved in creating such a solution, including how to write the tests and run them across multiple platforms without any code change.

Lastly, the open-source solution - teswiz - also has examples of how to implement a test that orchestrates multiple devices / browsers to simulate multiple users interacting with each other as part of the same test.


Wednesday, December 9, 2020

Getting started with implementing Automation

Getting started with implementing tests for automation (web or native apps) may seem daunting for those who are doing this for the first time. 

Assuming you are using open-source tooling like Selenium or Appium, there are multiple ways you can get started.

  1. DIY - Build your own framework by scripting based on the documentation

  2. Use Selenium-IDE for quick record and playback

  3. Use TestProject Recorder for quick record and playback

  4. Use TestProject SDK to build your own custom scripts for automating the tests

Each of the above approaches has its own pros and cons. Let's look at each in some detail:

Approach #1 - DIY - Build your own framework

Selenium: https://www.selenium.dev/documentation/en/

Appium: https://appium.io/docs/en/about-appium/intro/

Pros
  • You can build all features and capabilities as per your design & requirements

Cons
  • You need to learn a programming language*
  • You have to build everything on your own (though you can use supporting libraries)*

* Depending on the context of the team, the above points can also be considered advantages
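
To give a flavour of the DIY route, here is a minimal, illustrative Selenium test in Java - the class name, URL and check are placeholders only, not a recommendation for how to structure your framework:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class FirstDiyTest {

    public static void main(String[] args) {
        // In the DIY approach you own everything: driver setup, waits, assertions,
        // reporting and teardown. Selenium 4.6+ can resolve chromedriver automatically.
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://www.selenium.dev");  // placeholder URL
            String title = driver.getTitle();
            if (!title.contains("Selenium")) {       // placeholder check
                throw new AssertionError("Unexpected page title: " + title);
            }
            System.out.println("Test passed. Page title: " + title);
        } finally {
            driver.quit();
        }
    }
}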


Approach #2 - Selenium-IDE

https://www.selenium.dev/selenium-ide

Pros
  • Easy to set up
  • Works in Chrome & Firefox
  • Code can be exported in various formats
  • Recorded tests can be run from the command line
  • Tests can be run in your own CI
  • Will always be in-sync with the underlying WebDriver

Cons
  • Basic reports
  • Works only for automating Web applications




Approach #3 - TestProject Recorder

https://testproject.io/easy-test-automation/

Pros
  • Advanced recorder (lots of actions, validations, self-healing, customisations possible, and a lot of community Addons)
  • Recorder works for Web applications as well as Native Apps (on real devices or emulators) for Android and iOS (even iOS on a Windows machine)
  • TestProject agent automatically determines all browsers available and devices connected to the machine, and execution can be customised accordingly
  • Can schedule test runs as a one-time or repeated activity via the built-in scheduler, CI/CD tool integrations, or their RESTful API
  • Reports are comprehensive with meaningful data, including screenshots and an option to download in PDF format
  • Code can be generated from the recorded script
  • Can share tests easily using the "Share test" feature

Cons
  • Recorder works only in Chrome, but tests can be executed on all browsers
  • Generated code is very simple - good as a reference to see how the underlying implementation / interaction is done
  • Each recorded test needs to be exported individually. No concept of reuse in this approach




Approach #4 - TestProject SDK

https://testproject.io/advanced-scripting-capabilities

Pros
  • Probably the most powerful of these 4 approaches, as it uses WebDriver / Appium under the hood. You get the power of building your own framework, while reusing out-of-the-box features like driver management, automatic reporting, etc.
  • Driver management is TestProject's responsibility, so the test implementer can focus on automating tests

Cons
  • You need to learn a programming language




Sunday, December 6, 2020

Long time no see? Where have I been?

It has been a few months since I published anything on my blog. That does not mean I have not been learning or experimenting with new ideas. In fact, in the past few months I have been privileged to have my articles published on the Applitools and TestProject blogs.

Below are the links to all those articles, for which I have received very kind reviews and comments on LinkedIn and Twitter.

Apart from this, I have also been contributing to open source - namely Selenium and AppiumTestDistribution - and building open-source kickstarter projects for API testing using Karate and for end-2-end testing for Android, iOS, Windows, Mac & Web.

Lastly, I have also been speaking at virtual conferences and webinars, and last week I recorded a podcast, which will be available soon.

The end of Smoke, Sanity and Regression

https://applitools.com/blog/end-smoke-sanity-regression/

Do we need Smoke, Sanity, Regression suites?
  • Do not blindly start with classifying your tests in different categories. Challenge yourself to do better!
  • Have a Test Automation strategy and know your test automation framework objective & criteria (“Test Automation in the World of AI & ML” highlights various criteria to be considered to build a good Test Automation Framework)
  • Choose the toolset wisely
  • After all the correct (subjective) approaches have been taken, if your test execution (in a single browser) still takes more than, say, 10 minutes, then run your tests in parallel, and subsequently split the test suite into smaller suites which can give you a progressive indication of quality
  • Applitools, with its AI-powered algorithms, can make your functional tests lean, simple and robust, and include UI / UX validation
  • Applitools Ultrafast Grid will remove the need for Cross-Browser testing, and instead with a single test execution run, validate functionality & UI / Visual rendering for all supported Browsers & Viewports

Design Patterns in Test Automation

Criteria for building a Test Automation Framework

Writing code is easy, but writing good code is not as easy. Here are the reasons why I say this:
  • “Good” is subjective.
  • “Good” depends on the context & overall objective.

Similarly, implementing automated test cases is easy (as seen from the getting started example shared earlier). However, scaling this up to be able to implement and run a huge number of tests quickly and efficiently, against an evolving product is not easy!

I refer to a few principles when building a Test Automation Framework. They are:
  • Based on the context & (current + upcoming) functionality of your product-under-test, define the overall objective of Automation Testing.
  • Based on the objective defined above, determine the criteria and requirements from your Test Automation Framework. Refer to my post on “Test Automation in the World of AI & ML” for details on various aspects you need to consider to build a robust Test Automation Framework. Also, you might find these articles interesting to learn how to select the best tool for your requirements:
    • Criteria for Selecting the Right Functional Testing Tools
    • How to Select the Best Tool – Research Process
    • How To Select The Right Test Automation Tool

Stop the Retries in Tests & Reruns of Failing Tests

Takeaways
  • Recognise reasons why tests could be flaky / intermittent
  • Critique band-aid approach to fixing flakiness in tests
  • Discuss techniques to identify reasons for test flakiness
  • Fix the root-cause, not the symptoms to make your tests stable, robust and scalable!

Measuring Code Coverage from API Workflow & Functional UI Tests

 
Why is the Functional Coverage important?
I choose an approach keeping the 80-20 rule in mind. The information the report provides should be sufficient to understand the current state, and to make decisions on “what’s next”. For areas that need additional clarity, I can then talk with the team and explore the code to get to the next level of detail. This makes it a very collaborative way of working, with joint ownership of quality! 🚀

You can choose your own way to implement Functional Coverage – based on your context of team, skills, capability, tech-stack, etc.

Tuesday, July 7, 2020

Does your functional automation really add value?


We all know that automation is one of the key enablers for those on the CI-CD journey.

Most teams are:

  • implementing automation
  • talking about its benefits
  • up-skilling themselves
  • talking about tooling
  • etc.

However, many a time I feel we are blinded by the theoretical value test automation provides, or because everyone says it adds value, or by the shiny tools / tech-stacks we get to use, or ...

To try and understand more about this, can you answer the below questions?

In your experience, or in your current project:
  1. Does your functional automation really add value?
  2. What makes you say it does / or does not?
  3. How long does it take for tests to run and generate reports?
  4. In most cases, the product-under-test is available on multiple platforms – ex: Android & iOS Native, and on Web. In such cases, for the same scenario that needs to be automated, is the test implemented once for all platforms, or once per platform?
  5. How easy is it to debug and get to the root cause of failures?
  6. How long does it take to update an existing test?
  7. How long does it take to add a new test?
  8. Do your tests run automatically via CI on a new build, or do you need to “trigger” the same?
  9. What is the test passing percentage?
  10. Do you “rerun” the failing tests to see if this was an intermittent issue?
  11. Is there control on the level of parallel execution and switch to sequential execution based on context?
  12. How clean & DRY is the code?

In my experience, unfortunately most of the functional automation that is built:
  • is not optimal
  • is not fit-for-purpose
  • does not run fast enough
  • gives inconsistent feedback, and hence is unreliable

Hence, for the amount of effort invested in implementing automation,
  1. Are you really getting the value from this activity?
  2. How can automation truly provide value for teams?


Friday, October 11, 2019

Overcoming chromedriver version compatibility issues the right way

I encountered an interesting challenge recently when doing Native Android / iOS app automation - this was related to Chrome browser versions getting updated automatically and my tests failing because of errors like:


org.openqa.selenium.SessionNotCreatedException: session not created: This version of ChromeDriver only supports Chrome version 74
23:04:25 (Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Windows NT 6.3.9600 x86_64) (WARNING: The server did not provide any stacktrace information)


So I asked a question on LinkedIn


And I tweeted asking how to manage ChromeDriver version when running WebDriver / Appium tests.






The common and obvious answer was: use WebDriverManager. This is a beautiful, simple solution, and indeed the right answer to the problem.

However, that was a partial answer for me. 

Here is my context and problem statement in detail:

  • My Test Automation Framework is based on Java / Appium and I use AppiumTestDistribution (ATD)
  • ATD is open-source, takes away my pain and effort of managing Appium and the devices, and also takes care of running the tests in parallel or distributed mode, on Android as well as iOS
  • In my local lab setup, I have many different Android devices connected - which run tests as directed by ATD
  • Since you cannot control how the Google Play Store / Apple App Store pushes out new versions of apps for different Android / iOS versions on devices, it is easily possible to end up with different versions of the Chrome browser in your device lab. When this happens, the tests start failing because of chromedriver incompatibility issues.

Once I was very kindly reminded by the community about WebDriverManager (which I had forgotten about), I knew what needed to be done.

I looked at the ATD code and realised that it was using the default chromedriver set up when I had installed Appium. This chromedriver was being used when instantiating a new instance of the AndroidDriver.

So I submitted a PR for ATD - which essentially did the following:
  • Query the chrome browser versions on each connected device
  • For the **highest version of the browser, use WebDriverManager and get the appropriate chromedriver downloaded
  • Pass the path to the correct chromedriver when creating an instance of the AndroidDriver
**highest version - what does that mean? Well, I also got confused initially. But the answer was simple. On some devices, the Chrome browser is installed by default, as a system app. This cannot be removed. So as new versions of the browser get installed, the default Chrome system app is always there. So when you query for the versions of Chrome on the device, you will see 2 such versions. My code logic was to get all these versions, and pick the highest version from them.

Here is the code snippet of how I solved the problem:
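In spirit, it is along the lines of the simplified sketch below - the class and helper names here are illustrative rather than the exact ATD change, adb is assumed to be on the PATH, the WebDriverManager 5.x API is used, and the chromedriver path is passed via Appium's chromedriverExecutable capability:

import io.appium.java_client.android.AndroidDriver;
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class DeviceChromeDriverResolver {

    // Query the Chrome versions installed on the device via adb.
    // A system Chrome and an updated Chrome can co-exist, hence "highest".
    static String highestChromeVersionOn(String deviceId) throws Exception {
        Process process = new ProcessBuilder("adb", "-s", deviceId, "shell",
                "dumpsys", "package", "com.android.chrome").start();
        List<String> versions = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                line = line.trim();
                if (line.startsWith("versionName=")) {
                    versions.add(line.substring("versionName=".length()));
                }
            }
        }
        return versions.stream()
                .max(Comparator.comparingInt(v -> Integer.parseInt(v.split("\\.")[0])))
                .orElseThrow(() -> new IllegalStateException("Chrome not found on " + deviceId));
    }

    public static AndroidDriver createDriver(String deviceId, URL appiumServer) throws Exception {
        // Download the chromedriver that matches the highest Chrome version on this device.
        String chromeVersion = highestChromeVersionOn(deviceId);
        WebDriverManager.chromedriver().browserVersion(chromeVersion).setup();
        String chromedriverPath =
                WebDriverManager.chromedriver().getDownloadedDriverPath(); // WebDriverManager 5.x API

        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("appium:udid", deviceId);
        caps.setCapability("appium:automationName", "UiAutomator2");
        // Point Appium at the chromedriver binary to use for Chrome / webview sessions.
        caps.setCapability("appium:chromedriverExecutable", chromedriverPath);

        return new AndroidDriver(appiumServer, caps);
    }
}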
Special thanks to Sai Krishna for quickly approving and merging this PR.

Hope this provides more information about my problem statement, and how I used your suggestion for WebDriverManager to solve the problem.


Wednesday, September 25, 2019

Analytics - The Brain of the Software



An Analogy 



I am not a doctor, nor did I enjoy biology too much in my curriculum as a student. However, I do know that the body has many organs and each organ plays a vital role in the well-being of the individual.

Each organ has to:

  • function correctly (movement, senses, core functions, etc.)
  • perform as per expectations in the different conditions the individual may be going through (walking, running, swimming, etc.)
  • be secure from external parameters (heat, cold, rain, what we eat / drink, etc.)
  • have a proper user experience (ex: if human hands had webs like ducks, would we be able to hold a pen correctly to write?)

I would like to think of the brain as the supercomputer which keeps track of what is going on in the body - whether each piece is playing its part correctly, or not. And if there is something unexpected going on, there are mechanisms for giving that feedback internally and externally so that course correction is possible.

How does this relate to software?

Software is similar in some ways. For any software product to work, the following needs to be done:

Functionality works as expected


  • The architecture and testability of the system will allow for various types of testing activities to be performed on the software, to ensure everything works as expected
  • Test Automation practices will give you quick feedback



There is a plethora of open-source and commercial tools in this space to help in this regard - the most popular open-source tools being Selenium and Appium.


Software is performant


  • We can do performance testing at various levels to ensure that, under different loads and conditions, users will be able to use the product in a seamless fashion
  • There are many tools to assist in Performance Testing - some popular ones being JMeter and Gatling.

Software is secure


  • Building and testing for security is critical, as you do not want user information to be leaked or manipulated, nor do you want to allow external forces to control / manipulate your product's behaviour
  • The Test Automation Pyramid hence also includes NFRs





User experience is validated, and consistent


  • In the age of CD (Continuous Delivery & Continuous Deployment), you need to ensure your user experience across all your software delivery channels (browsers, mobile-browsers, native apps for mobiles and tablets, etc.) is consistent, and that users do not face the brunt of UI / look-and-feel issues in the software at the cost of new features
  • This is a relatively new domain - but there are already many tools to help in this space as well - the most popular one (in terms of integration, usage and features) being the AI-powered Applitools

Visual Validation is the new tip of the Test Automation Pyramid!





What is the brain of the software?

The above is all good, and known in various ways. But what is the "brain" of the software? How does one know if everything is working fine or not? Who will receive the feedback and how do we take corrective action on this?

Analytics is that piece in the Software product that functions as the brain. It keeps collecting data about each important piece of software, and provides feedback on the same.

I have come across some extreme examples of Businesses / Organizations who have all their eggs in one basket - in terms of how they:

  • understand their Consumers (engagement / usage / patterns / etc.),
  • understand usage of product features, and,
  • do all revenue-related book-keeping

This is all done purely through Analytics! Hence, to say “Business runs on Analytics, and it may be OK for some product / user features to not work correctly, but Analytics should always work” is not a myth!

What this means is that Analytics is more important now than ever before.

Unfortunately, Analytics is not well known to the Software Dev + Test community. We know it very superficially - we do what is required to implement it and quickly test it out. But what is Analytics? Why is it important? What is the impact of it not working well? Not many think about this.

I have been testing Analytics since 2010 ... and the kind of insights I have been able to get about the product have been huge! I have been able to contribute back to the team and help build better quality software as a result.

But I have to be honest - it is painful to test Analytics. And that is why I created an open-source framework - WAAT - to help automate some of these testing activities.

I also do workshops to help people learn more about Analytics, its importance, and how they can automate this as well.

In the workshop, I do not assume anything; the approach is to discuss and learn, by example and practice, the following:

  • How does Analytics work (for Web and Mobile)?
  • Test Analytics manually in different ways
  • Test Analytics via the final reports
  • Why some Automation strategies will work, and some WILL NOT WORK (based on my experience)!
  • We will see a demo of the Automation running for the same.
  • Time permitting, we will set up and run some Automation scripts on your machine to validate the same

Takeaways from the workshop

We will learn by practicing the following:
  • What is Analytics?
  • Techniques to test analytics manually.
  • How to automate the validation of analytics, via a demo, and if time permits, run the automation from your machine as well.

Hope this post helps you understand the importance of Analytics and why you need to know more about it. Do reach out to me if you want to learn more about it.

My next Analytics workshop is at TestBash Australia 2019. Let me know if you would be interested in attending the same.


Friday, June 14, 2019

Quality & Release Strategy for Native Android & iOS Apps at AppiumConf 2019


What an amazing time speaking at the first AppiumConf 2019 in Bangalore, India. I spoke about my experiences in setting up a "Quality & Release Strategy for Native Android & iOS Apps".

Abstract:
Experimentation and quick feedback are the key to the success of any product - while, of course, ensuring that a good-quality product, with new and better features, is shipped out to users at a decent / regular frequency.

In this session, we will discuss how to enable experimentation, get quick feedback and reduce risk for the product, using a case study of a media / entertainment domain product used by millions of users across 10+ countries - i.e. we will discuss the Testing Strategy and the Release process for an Android & iOS Native app that will help enable CI & CD.

To understand these techniques, we will quickly recap the challenges and quirks of testing Native Apps and how that is different from Web / Mobile Web Apps.

The majority of the discussion will focus on different techniques / practices related to Testing & Releases that can be established to achieve our goals, some of which are listed below:
  • Functional Automation approach - identify and automate user scenarios, across supported regions
  • Testing approach - what to test, when to test, how to test!
  • Manual Sanity before release - and why it was important!
  • Staged roll-outs via Google’s Play Store and Apple’s App Store
  • Extensive monitoring of the release as users come on board, and comparing the key metrics (ex: consumer engagement) with prior releases
  • Understanding Consumer Sentiments (Google’s Play Store / Apple’s App Store review comments, Social Media scans, Issues reported to / by Support, etc.)

Slides:



Quality & Release Strategy for Native Android & iOS Apps from Anand Bagmar

Monday, June 3, 2019

Visual Validation - The Missing Tip of the Automation Pyramid at QuaNTA NXT at Globant

I spoke about Visual Validation - The Missing Tip of the Automation Pyramid at QuaNTA NXT event organised by Globant India Pvt. Ltd.




The event was very well organised and I had the opportunity to interact with a full house, and also later meet and talk with a lot of interesting people - curious about the current state of testing, test automation and how AI can impact it in the future.

Agenda:



Below is the abstract of my talk:

The Test Automation Pyramid is not a new concept. While Automation helps validate functionality of your product, the look & feel / user-experience (UX) validation is still mostly manual.

With everyone wanting to be Agile and doing quick releases, this look & feel / UX validation becomes the bottleneck, and it is also a very error-prone activity - one which hurts brand and revenue, and leads to diluting your user-base.

In this session, we will explore why Automated Visual Validation is now essential in your Automation Strategy and also look at how an AI-powered tool - Applitools Eyes, can solve this problem.


Recording from the talk:




Some pictures:







Friday, July 20, 2018

Implementing Soft Assertions

Back in 2009 / 2010, I was working on implementing end-2-end tests for a web site using Java / Selenium / TestNG based automation. The challenge I was facing was that the tests used to fail for trivial (but valid) reasons - and I was frustrated that a test did not even get to the core validations before it failed. How would the team ever know about the main issues in the product if the test fails for trivial issues?

That was the trigger point for me to think about Soft Assertions - what if there was a way to mark a type of failure that I want to know about, but let the test continue with the remaining set of validations - unless it does not make sense to proceed any further.

Ex: If the text message of a field is incorrect, I can continue. But if login fails, no point in proceeding with the rest of the test.

This idea seemed interesting - so I came up with the following requirements for such an implementation:

  • Clear distinction between what type of failure I can continue from, or not
    • Ex: assert.** is for hard asserts. verify.** is for soft asserts
  • All failures that I can continue from (i.e. soft asserts) need to be collated, and at the end of the test the complete list of those soft assert failures should be presented with the test result (and in the report), while the test fails just once
    • Ex: there were 5 soft assertion failures
  • Capture relevant screenshots whenever Soft Assertion failed
  • If there was a hard assert along the way of the test execution, the test failure should include the prior soft assert failures along with the hard assert failure, as appropriate

For the actual implementation, I did the following (in 2009/2010):
  • I looked into the TestNG code base, and I could not really find any out-of-the-box support for what I wanted to do. 
  • So, for lack of knowledge of better ways of implementation,
    • I checked out the TestNG code,
    • added the Soft Assertion implementation,
    • built a custom TestNG.jar file, and,
    • checked in the jar file as a library artefact in our automation framework.

In hindsight, I should have sent that functionality as a PR to the project. 

But not all is lost - TestNG now (and perhaps for some time now) has out-of-the-box support for Soft Assertions. And it is pretty straightforward to implement / use as well.

Implementing Soft Assertions in your test framework

See this gist for an implementation that you can use with TestNG (I tested with v6.10).
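
For illustration only, a minimal sketch built on TestNG's out-of-the-box org.testng.asserts.SoftAssert could look like the below - the Verifications class and its method names are placeholders of my choosing, not the actual gist:

import org.testng.asserts.SoftAssert;

// Illustrative wrapper over TestNG's built-in SoftAssert, mirroring the
// assert.* (hard) vs verify.* (soft) split described above.
// Screenshot capture on failure could be hooked into the verify* methods.
public class Verifications {

    private final SoftAssert softAssert = new SoftAssert();

    // Soft check: the failure is recorded and the test continues.
    public void verifyEquals(Object actual, Object expected, String message) {
        softAssert.assertEquals(actual, expected, message);
    }

    public void verifyTrue(boolean condition, String message) {
        softAssert.assertTrue(condition, message);
    }

    // Hard check: if it fails, report it along with all earlier soft failures and stop.
    public void assertTrue(boolean condition, String message) {
        softAssert.assertTrue(condition, message);
        if (!condition) {
            softAssert.assertAll();
        }
    }

    // Call once at the end of the test: fails the test a single time,
    // listing every collated soft-assert failure.
    public void assertAll() {
        softAssert.assertAll();
    }
}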

Using Soft Assertions in your tests

Here is how you can use the Soft Assertions in your tests.
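For example, using the illustrative Verifications wrapper sketched above (the page helpers are placeholders standing in for real page interactions):

import org.testng.annotations.Test;

public class LoginTest {

    @Test
    public void validateLoginPage() {
        Verifications verify = new Verifications();

        // Soft assertions: the test keeps going even if these fail.
        verify.verifyEquals(getPageTitle(), "Login", "Page title is incorrect");
        verify.verifyTrue(isForgotPasswordLinkVisible(), "Forgot-password link is missing");

        // Hard assertion: no point continuing if login itself fails.
        verify.assertTrue(login("user", "password"), "Login failed");

        verify.verifyTrue(isDashboardVisible(), "Dashboard not shown after login");

        // Fail the test once, listing all collected soft-assert failures.
        verify.assertAll();
    }

    // Placeholder helpers standing in for real page interactions.
    private String getPageTitle() { return "Login"; }
    private boolean isForgotPasswordLinkVisible() { return true; }
    private boolean login(String user, String password) { return true; }
    private boolean isDashboardVisible() { return true; }
}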


Soft Assertions in any other tech stack?

What if you are not using TestNG, or Java - rather, what if you are using a completely different programming language / toolset / test-runner? Can you still use Soft Assertions?

Absolutely YES! All you need to understand is the concept, and then figure out the best way to implement the same if an out-of-the-box solution does not exist in that tech stack.

Hope this helps you!