
Friday, March 25, 2022

How to handle Select certificate popup from Chrome using Selenium WebDriver?

For one of the applications, when I run some Selenium WebDriver tests, I see a "Select a certificate" popup. I am not able to handle this, and hence my test fails.

Has anyone been able to handle this popup, or rather, avoid this popup being shown?


If I manually select a certificate and click OK, I am prompted for my Mac credentials:


Thanks in advance!

Monday, February 21, 2022

Why not to use PageFactory and FindsBy in Selenium WebDriver

Many users of Selenium WebDriver may be using the PageFactory created by Simon Stewart. However, it is not a good idea to use it.

You may be thinking: why should I not use it? It is so easy to use, and it's popular.

Well, here are 2 reasons why you should not use the PageFactory:

Reason #1: Simon Stewart (https://twitter.com/shs96c), the creator of WebDriver and of the PageFactory, himself says not to use it. It is not recommended.

The `FindsBy` annotation isn't recommended, because the PageFactory class is really badly implemented and inflexible, but it's not going away in the java bindings.

The `FindsByX` interfaces are going away. Better to use a `By` locator and use that.



https://twitter.com/shs96c/status/1196865907185868801


Reason #2: While Reason #1 should have been sufficient, many people implementing automation using Selenium WebDriver do not know about, or did not pay heed to, what Simon said. So another WebDriver & Watir contributor, Titus Fortner (https://twitter.com/titusfortner), explained in detail why using PageFactory is not a good idea in his blog post - https://titusfortner.com/2021/02/03/page-factory-optimization.html

 

I sincerely hope these reasons are sufficient for you to move away from the PageFactory and use something more efficient. 
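As an example of "something more efficient", here is a minimal sketch of a page object that uses plain By locators instead of @FindBy + PageFactory. The class and locator values are hypothetical, for illustration only:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A page object without PageFactory: locators are plain By fields, and
// elements are looked up at the moment of use - no cached element proxies
// to go stale, and the locators stay easy to reuse and compose.
public class LoginPage {
    private final WebDriver driver;

    // Hypothetical locator values.
    private static final By USERNAME = By.id("username");
    private static final By PASSWORD = By.id("password");
    private static final By SUBMIT = By.cssSelector("button[type='submit']");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void loginAs(String user, String password) {
        driver.findElement(USERNAME).sendKeys(user);
        driver.findElement(PASSWORD).sendKeys(password);
        driver.findElement(SUBMIT).click();
    }
}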

 

Wednesday, June 2, 2021

Automating Functional / End-2-End Tests Across Multiple Platforms

Do you want to know more about an approach to Functional (end-to-end) Test Automation that works for a product available on multiple platforms?


If yes, then read my blog post on Automating Functional / End-2-End Tests Across Multiple Platforms, which shares details on the thought process & criteria involved in creating a solution, including how to write the tests and run them across multiple platforms without any code change.

Lastly, the open-sourced solution - teswiz - also has examples of how to implement a test that orchestrates multiple devices / browsers to simulate multiple users interacting with each other as part of the same test.


Sunday, December 6, 2020

Long time no see? Where have I been?

It has been a few months since I published anything on my blog. That does not mean I have not been learning or experimenting with new ideas. In fact, in the past few months I have been privileged to have my articles published on the Applitools and TestProject blogs.

Below are the links to all those articles, for which I have received very kind reviews and comments on LinkedIn and Twitter.

Apart from this, I have also been contributing to open source - namely Selenium and AppiumTestDistribution - and building open-source kickstarter projects for API testing using Karate and for end-2-end testing for Android, iOS, Windows, Mac & Web.

Lastly, I have also been speaking at virtual conferences and webinars, and last week I recorded a podcast, which will be available soon.

The end of Smoke, Sanity and Regression

https://applitools.com/blog/end-smoke-sanity-regression/

Do we need Smoke, Sanity, Regression suites?
  • Do not blindly start by classifying your tests into different categories. Challenge yourself to do better!
  • Have a Test Automation strategy and know your test automation framework objective & criteria (“Test Automation in the World of AI & ML” highlights various criteria to be considered to build a good Test Automation Framework)
  • Choose the toolset wisely
  • After all the correct (subjective) approaches are taken, if your test execution (in a single browser) still takes more than, say, 10 minutes, then run your tests in parallel, and subsequently split the test suite into smaller suites which can give you a progressive indication of quality (see the sketch after this list)
  • Applitools, with its AI-powered algorithms, can make your functional tests lean, simple and robust, and adds UI / UX validation
  • Applitools Ultrafast Grid removes the need for cross-browser testing; instead, with a single test execution run, it validates functionality & UI / visual rendering across all supported browsers & viewports
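On the parallel-execution point in the list above, here is a minimal sketch (my own illustration, not from the article) that uses TestNG's programmatic API to run test methods in parallel. The test class com.example.tests.CheckoutTests is hypothetical:

import java.util.Collections;

import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public class ParallelSuiteRunner {
    public static void main(String[] args) {
        XmlSuite suite = new XmlSuite();
        suite.setName("regression");
        // Run individual test methods in parallel, 4 at a time.
        suite.setParallel(XmlSuite.ParallelMode.METHODS);
        suite.setThreadCount(4);

        XmlTest test = new XmlTest(suite);
        test.setName("checkout");
        // Hypothetical test class, for illustration only.
        test.setXmlClasses(Collections.singletonList(new XmlClass("com.example.tests.CheckoutTests")));

        TestNG testng = new TestNG();
        testng.setXmlSuites(Collections.singletonList(suite));
        testng.run();
    }
}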

Design Patterns in Test Automation

Criteria for building a Test Automation Framework

Writing code is easy, but writing good code is not as easy. Here are the reasons why I say this:
  • “Good” is subjective.
  • “Good” depends on the context & overall objective.

Similarly, implementing automated test cases is easy (as seen from the getting started example shared earlier). However, scaling this up to be able to implement and run a huge number of tests quickly and efficiently, against an evolving product is not easy!

I refer to a few principles when building a Test Automation Framework. They are:
  • Based on the context & (current + upcoming) functionality of your product-under-test, define the overall objective of Automation Testing.
  • Based on the objective defined above, determine the criteria and requirements from your Test Automation Framework. Refer to my post on “Test Automation in the World of AI & ML” for details on various aspects you need to consider to build a robust Test Automation Framework. Also, you might find these articles interesting to learn how to select the best tool for your requirements:
    • Criteria for Selecting the Right Functional Testing Tools
    • How to Select the Best Tool – Research Process
    • How To Select The Right Test Automation Tool

Stop the Retries in Tests & Reruns of Failing Tests

Takeaways
  • Recognise reasons why tests could be flaky / intermittent
  • Critique the band-aid approaches to fixing flakiness in tests
  • Discuss techniques to identify reasons for test flakiness
  • Fix the root cause, not the symptoms, to make your tests stable, robust and scalable!

Measuring Code Coverage from API Workflow & Functional UI Tests

 
Why is Functional Coverage important?
I choose an approach keeping the 80-20 rule in mind. The information the report provides should be sufficient to understand the current state and make decisions on “what’s next”. For areas that need additional clarity, I can then talk with the team and explore the code to get to the next level of detail. This makes it a very collaborative way of working, and a joint ownership of quality! 🚀

You can choose your own way to implement Functional Coverage – based on your context of team, skills, capability, tech-stack, etc.

Friday, October 11, 2019

Overcoming chromedriver version compatibility issues the right way

I encountered an interesting challenge recently when doing native Android / iOS app automation - it was related to Chrome browser versions getting updated automatically and my tests failing with errors like:


org.openqa.selenium.SessionNotCreatedException: session not created: This version of ChromeDriver only supports Chrome version 74
23:04:25 (Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Windows NT 6.3.9600 x86_64) (WARNING: The server did not provide any stacktrace information)


So I asked a question on LinkedIn


And I tweeted asking how to manage ChromeDriver version when running WebDriver / Appium tests.






The answer was common and obvious - use WebDriverManager. This is a beautiful, simple solution, and indeed the right answer to the problem.
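In its simplest form, the usage looks like this (a minimal sketch; the URL is just a placeholder):

import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class WdmExample {
    public static void main(String[] args) {
        // Downloads a chromedriver matching the installed Chrome version
        // and points Selenium at it - no manual driver management needed.
        WebDriverManager.chromedriver().setup();

        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com");
        driver.quit();
    }
}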

However, that was a partial answer for me. 

Here is my context and problem statement in detail:

  • My Test Automation Framework is based on Java / Appium and I use AppiumTestDistribution (ATD) 
  • ATD is open-source; it takes away the pain and effort of managing Appium and the devices, and also takes care of running the tests in parallel or distributed mode, on Android as well as iOS
  • In my local lab setup, I have many different Android devices connected - which run tests as directed by ATD
  • Since you cannot control how the Google Play Store / Apple App Store push out new versions of apps for different Android / iOS versions on devices, it is easily possible to end up with different versions of the Chrome browser in your device lab. When this happens, tests start failing because of chromedriver incompatibility issues.

Once the community very kindly reminded me about WebDriverManager (which I had forgotten about), I knew what was to be done.

I looked at the ATD code and realised that it was using the default chromedriver version set up when I had installed Appium. This chromedriver was being used when instantiating a new instance of the AndroidDriver.

So I submitted a PR for ATD - which essentially did the following:
  • Query the Chrome browser versions on each connected device
  • For the **highest version of the browser, use WebDriverManager to download the appropriate chromedriver
  • Pass the path to the correct chromedriver when creating an instance of the AndroidDriver
**highest version - what does that mean? Well, I was also confused initially. But the answer is simple. On some devices, the Chrome browser is installed by default as a system app, which cannot be removed. So as new versions of the browser get installed, the default Chrome system app is always there, and when you query for the versions of Chrome on the device, you will see 2 such versions. My code logic was to get all these versions and pick the highest one.

Here is the code snippet of how I solved the problem:
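In outline, it looks like the following (a minimal sketch, not the actual PR code: the device-query helper is hypothetical, and the WebDriverManager calls assume its versioned-download API):

import java.net.URL;

import io.appium.java_client.android.AndroidDriver;
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.remote.DesiredCapabilities;

public class DeviceChromeDriverResolver {

    public AndroidDriver createDriver(URL appiumServerUrl, String deviceId) {
        // Hypothetical helper: query all Chrome versions present on the device
        // (the updated app AND the built-in system app) and pick the highest.
        String chromeVersion = getHighestChromeVersionOnDevice(deviceId);

        // Let WebDriverManager download the matching chromedriver binary.
        WebDriverManager.chromedriver().browserVersion(chromeVersion).setup();
        String chromedriverPath = WebDriverManager.chromedriver().getDownloadedDriverPath();

        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("udid", deviceId);
        // Appium capability that points the session at the resolved chromedriver.
        caps.setCapability("chromedriverExecutable", chromedriverPath);
        return new AndroidDriver(appiumServerUrl, caps);
    }

    private String getHighestChromeVersionOnDevice(String deviceId) {
        // Implementation elided: e.g. parse the versionName entries from
        // "adb -s <deviceId> shell dumpsys package com.android.chrome".
        throw new UnsupportedOperationException("device-specific");
    }
}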
Special thanks to Sai Krishna for quickly approving and merging this PR.

Hope this provides more information about my problem statement, and how I used your suggestion of WebDriverManager to solve the problem.


Monday, June 3, 2019

Visual Validation - The Missing Tip of the Automation Pyramid at QuaNTA NXT at Globant

I spoke about Visual Validation - The Missing Tip of the Automation Pyramid at the QuaNTA NXT event organised by Globant India Pvt. Ltd.




The event was very well organised. I had the opportunity to interact with a full house, and later to meet and talk with a lot of interesting people - all curious about the current state of testing, test automation and how AI can impact it in the future.

Agenda:



Below is the abstract of my talk:

The Test Automation Pyramid is not a new concept. While automation helps validate the functionality of your product, the look & feel / user-experience (UX) validation is still mostly manual.

With everyone wanting to be Agile and do quick releases, this look & feel / UX validation becomes the bottleneck. It is also a very error-prone activity, and errors here hurt your brand and revenue, and end up diluting your user-base.

In this session, we will explore why Automated Visual Validation is now essential in your Automation Strategy and also look at how an AI-powered tool - Applitools Eyes, can solve this problem.


Recording from the talk:




Some pictures:







Friday, July 20, 2018

Implementing Soft Assertions

Back in 2009 / 2010, I was working on implementing end-2-end tests for a web site using Java / Selenium / TestNG based automation. The challenge I was facing was that the tests used to fail for trivial (but valid) reasons - the test did not even get to the core validations before it failed. How would the team ever know about the main issues in the product if the test failed for trivial issues?

That was the trigger point for me to think about Soft Assertions - what if there was a way to record a type of failure that I want to know about, while letting the test continue with the remaining set of validations - unless it no longer makes sense to proceed?

Ex: If the text message of a field is incorrect, I can continue. But if login fails, there is no point in proceeding with the rest of the test.

This idea seemed interesting - so I came up with the following requirements for such an implementation:

  • Clear distinction between what type of failure I can continue from, or not
    • Ex: assert.** is for hard asserts. verify.** is for soft asserts
  • All failures that I can continue from (i.e. soft asserts) need to be collated, and at the end of the test, the complete list of those soft assert failures should be presented with the test result (and in the report), while the test fails just once
    • (Ex: there were 5 soft assertion failures)
  • Capture relevant screenshots whenever a Soft Assertion fails
  • If there is a hard assert failure along the way, the test failure should include the prior soft assert failures along with the hard assert failure, as appropriate

For the actual implementation, I did the following (in 2009/2010):
  • I looked into the TestNG code base, and I could not really find any out-of-the-box support for what I wanted to do. 
  • So, for lack of knowledge of better ways of implementation, I:
    • checked out the TestNG code,
    • added the Soft Assertion implementation,
    • built a custom TestNG.jar file, and
    • checked in the jar file as a library artefact in our automation framework.

In hindsight, I should have sent that functionality as a PR to the project. 

But not all is lost: TestNG now (and for some time) has out-of-the-box support for Soft Assertions. And it is pretty straightforward to use as well.

Implementing Soft Assertions in your test framework

See this gist for an implementation that you can use with TestNG (I tested with v6.10).

Using Soft Assertions in your tests

Here is how you can use the Soft Assertions in your tests.
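For reference, a minimal sketch using TestNG's built-in org.testng.asserts.SoftAssert (the page helpers here are hypothetical stubs):

import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class LoginPageTest {

    @Test
    public void validateLoginPage() {
        SoftAssert softly = new SoftAssert();

        // Soft assertions: failures are recorded, and the test continues.
        softly.assertEquals(getFieldLabel("username"), "Username", "Incorrect field label");
        softly.assertTrue(isForgotPasswordLinkVisible(), "Forgot-password link missing");

        // A hard assert (e.g. on login itself) would still abort immediately.

        // assertAll() fails the test once, reporting ALL collected soft failures.
        softly.assertAll();
    }

    // Hypothetical page helpers, stubbed for illustration.
    private String getFieldLabel(String field) { return "Username"; }
    private boolean isForgotPasswordLinkVisible() { return true; }
}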


Soft Assertions in any other tech stack?

What if you are not using TestNG, or Java - rather, what if you are using a completely different programming language / toolset / test-runner? Can you still use Soft Assertions?

Absolutely YES! All you need to understand is the concept, and then figure out the best way to implement it if an out-of-the-box solution does not exist in that tech stack.

Hope this helps you!


Friday, June 9, 2017

Changing logcat buffer size in Android devices ... almost works

My (debug-build of the) app under test logs extra information about test execution to the system logs, which are accessible via logcat on Android devices. This is very powerful: I can run my cucumber-jvm / Appium tests, copy the logcat file after the test execution completes, parse it for relevant information, and do appropriate assertions on the same.

The default buffer size I have seen on Android devices is 256 KB. This is too small for me - I end up losing the earlier information, and hence my assertions fail.

Thankfully, there is a programmatic way to change the logcat buffer size on the device before running tests. The command is:

adb logcat -G 3M

This adb command works on the Motorola devices in my MAD LAB, but does not work on Samsung devices. The error I see on running the above command is "failed to set the log size".

Any idea why this does not work on Samsung devices? Or rather, what do I need to do to change the logcat buffer size on them?

[UPDATE] - Interestingly, this works on the Samsung Galaxy S7, but NOT on the Samsung J5 Prime or the Samsung J7 Prime.
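For completeness, here is a minimal sketch of wiring this into a Java test setup (assuming adb is on the PATH; the device id in main is a placeholder):

import java.io.IOException;

public class LogcatBufferSetup {

    // Enlarge the logcat ring buffer on a specific device before the test
    // run, so that earlier log lines are not lost.
    public static void setLogcatBufferSize(String deviceId, String size)
            throws IOException, InterruptedException {
        Process process = new ProcessBuilder("adb", "-s", deviceId, "logcat", "-G", size)
                .inheritIO()
                .start();
        if (process.waitFor() != 0) {
            // Note: some devices may report "failed to set the log size"
            // without a non-zero exit code, so also watch the command output.
            throw new IllegalStateException("Could not set logcat buffer size on " + deviceId);
        }
    }

    public static void main(String[] args) throws Exception {
        setLogcatBufferSize("emulator-5554", "3M");
    }
}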

Friday, March 17, 2017

Patterns in Test Automation Framework at STPCon

I spoke about Patterns of a "good" Test Automation Framework at STPCon 2017. Here are the details from the talk.


Abstract

Building a Test Automation Framework is easy - there are so many resources / guides / blogs / etc. available to help you get started and to solve the issues you encounter along the journey.
However, building a “good” Test Automation Framework is not very easy. There are a lot of principles and practices you need to apply, in the right context, and a good set of skills is required to make the Test Automation Framework maintainable, scalable and reusable.
Design Patterns play a big role in helping achieve this goal of building a good and robust framework.
In this talk, we will discuss, and see examples of, various types of patterns you can use for:
  • Building your Test Automation Framework
  • Test Data Management
Using these patterns, you will be able to build a good framework that will keep your tests running fast and reliably in your CI / CD setup!

Session Takeaways:


  • Patterns for building a Test Automation Framework.
  • Patterns for Test Data Management, with the pros and cons of each.
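To give a flavour of one Test Data Management pattern (my own sketch, not taken from the slides): the Builder pattern lets tests start from valid default data and override only the fields that matter to the test:

// Test data Builder: valid defaults, with overrides only where a test needs them.
public class Customer {
    private final String name;
    private final String email;
    private final boolean premium;

    private Customer(Builder builder) {
        this.name = builder.name;
        this.email = builder.email;
        this.premium = builder.premium;
    }

    public String getName() { return name; }
    public String getEmail() { return email; }
    public boolean isPremium() { return premium; }

    public static class Builder {
        // Sensible defaults keep most tests free of setup noise.
        private String name = "Test User";
        private String email = "test.user@example.com";
        private boolean premium = false;

        public Builder name(String name) { this.name = name; return this; }
        public Builder email(String email) { this.email = email; return this; }
        public Builder premium(boolean premium) { this.premium = premium; return this; }

        public Customer build() { return new Customer(this); }
    }
}

// Usage in a test: Customer vip = new Customer.Builder().premium(true).build();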

Slides



Pictures




Thursday, February 16, 2017

How to upgrade the appium-uiautomator2-driver version for appium 1.6.3?

I am using appium v1.6.3 - which comes with appium-uiautomator2-driver@0.2.3 and appium-uiautomator2-server@0.0.8.

I need to upgrade to the newer appium-uiautomator2-driver@0.2.9 (which has a fix for an issue I am seeing - https://github.com/appium/appium/issues/7527).

Any idea how I can upgrade the uiautomator2 driver (while using the same appium@1.6.3)?




Friday, December 16, 2016

How to enable seamless running of appium tests on developer machines?

I am implementing a cucumber-jvm based framework to drive mobile apps (using Appium).

Here is what I need to be able to do -
  1. Run tests on local machine for quick validation. This is mainly for developers to be able to run the tests before pushing code changes in git.
  2. Trigger and run the tests in the cloud, against emulators / real devices.
To achieve point #1, I need the setup to be simple. I do not want the team to go through massive steps to get the environment (Appium, emulators, etc.) set up.

Can / should the whole setup be put inside a Docker container, providing a single command to set up and run the tests?

Any other approach you recommend?

Of course, whatever approach is taken should potentially extend seamlessly to address point #2 as well.

Monday, November 21, 2016

Shared (relatively less) pain of using Protractor at SeConf London 2016

On 15th Nov 2016, I spoke at Selenium Conference London 2016 on the topic "Sharing the 'pain' of using Protractor & WebDriver". This time, the pain was less, as I had already shared this at Selenium Conference India 2016. From that experience, and a lot of interactions, I had worked on some of the challenges I was facing and implemented good solutions for them.

So this time it was easier.

The interesting challenge, though, was when the A/V stopped working after the 1st slide - I had to do on-the-spot improv to keep the ball rolling and prevent delays to the subsequent talks. To know more about the fun I had, watch the video linked below :)

Below you will find the abstract, slides and video of the talk:

Abstract



There is a saying ..."Sukh baatne se badhta hai, dukh baatne se kam hota hai", translated as - "happiness increases & sadness reduces on sharing with others".

We want to take this opportunity to share our experiences - the good and the bad - in the journey of building a Test Automation framework for an AngularJS based application.

We will learn, through a case study, what thought process we applied to the given context (product, team, skills, capabilities, long-term vision) to come up with an appropriate Test Automation Strategy. This Test Automation Strategy covered all aspects of Test Automation - Unit, Integration, and UI - i.e. End-2-End tests (E2E).

Next, we will share how we went about narrowing down, and eventually selecting, a specific tech stack + tools (Javascript / Jasmine / Protractor / Selenium-WebDriver) to accomplish the Test Automation for the product.
 

Lastly, we will share the challenges that came up during the implementation of the Test Automation, and how we overcame them. This will also include how we managed to get the tests running in Jenkins - a Continuous Integration tool.

This discussion is applicable to all team members who are working on Test Automation!

P.S. We will be attempting to make a sample protractor-based automation framework available on GitHub, for anyone to use as a starting point for setting up a protractor-based Test Automation framework.

Slides