Thursday, September 6, 2018

Some good examples of Data Science, AI & ML

Following up on my earlier post about ODSC - Data Science, AI, ML - Hype, or Reality?, I thought it would be good to also share some good examples of work happening in the field.

Here are some of the examples I got to hear about at the ODSC conference, most of which are available to the average person:
  • Amazing work done in the complex field of Speech recognition 
    • Why complex? Think about languages, dialects, multiple conversations at the same time, different speeds of talking, etc.
  • Text to speech
    • Ex: This is especially helpful for people with disabilities
  • Speech to Text
    • Ex: Alexa, Google Voice, etc. type of applications
  • Traffic control / Routes / Navigation
    • Ex: Google Maps
  • Recommendation engines
    • Ex: eCommerce products
  • Preventive maintenance
    • Many advanced vehicles have a number of sensors that can alert the driver / car manufacturer about potential issues coming up, or about service due for the vehicle
  • Autonomous vehicles
    • Ex: Self driving vehicles
    • Ex: Optimizing cab scheduling / routing - There was a good session on how OLA manages the complexity of its scheduling and routing - which is very applicable to the eCommerce, Aviation, and Hotel industries, among others.
    • I also recently saw a video of a Volvo truck driver getting out of the truck in difficult terrain, walking in front of it, and controlling its movement using a game-like controller
  • Medical equipment / gadgets for preventive / alerting health-care products

Also, Dr. Ravi Mehrotra from IDeaS made a very powerful statement in his keynote - one that I loved!!

He said - "The best way to learn is to forget what is not important."

This statement resonates a lot with what I think - one needs to forget what is not (as) important, in order to focus on and prioritize what is important and can add value.

Especially true for Testers to keep in mind!


Monday, September 3, 2018

ODSC - Data Science, AI, ML - Hype, or Reality?

I got a chance to attend ODSC India, held in Bangalore on 31st Aug / 1st Sept. For those who don't know, ODSC is the largest Applied Data Science and AI conference, and it was held in India for the first time this year.

I was very excited to attend this for a couple of reasons:

  • I was attending a conference after a long time (i.e. one where I was not speaking). So this was going to be a pure learning and discovery expedition for me.
  • Data Science / AI / ML have become huge buzzwords in the industry now. I had some opinions about them - but those were based on limited knowledge / understanding. I was hungry to learn the specifics behind these buzzwords.


Since I was going to travel to Bangalore for ODSC anyway, I also decided to participate in the pre-conference workshop - Advanced Data Analysis, Dashboards And Visualization. I thought it would be interesting to learn about the What, Why and How of the techniques of Data Analysis, Dashboards and Visualization - which would help me as I rebuild / extend TTA (Test Trend Analyzer). Though the workshop was good, it focused completely on Tableau as a tool, and unfortunately did not meet my objectives / expectations. That said, there is another tool I came across at the conference - KNIME - which seems interesting, and I am going to try it out.

The conference itself was good though. I attended a lot of sessions and had a lot of hallway conversations with many interesting people. As is the typical outcome of attending a conference, I liked some sessions better than others - some were amazing, some were mediocre.

Here is my unstructured assessment, based on what I heard and discussed:
  • The advanced mathematics learnt in college has real application in data science. So if children / kids ask why they should study Statistics - here is an answer!
  • Creating data models without Business Context will not work. If it does, you have been lucky :)
  • There are some interesting case studies and success stories of AI & ML. But these are the same success stories that have been around for quite some time. All the other "noise" of AI & ML seems to be hype so far.
  • There is a lot of value in understanding historical data better. Based on that understanding, there can be opportunities to forecast the future. There is a huge risk in such forecasting IF the percentage of uncertainty is not included as part of it (a forecast of "1000 units" reads very differently from "1000 units, give or take 40%") - yet that uncertainty is very easily ignored.
  • Understanding of Neural Networks, computing, and algorithms is essential to building intelligent solutions for complex problems.
  • It is not sufficient to get better / more accurate prediction results. Being able to explain how and why those results are better / same / worse is equally important. In many cases, this will be a regulatory requirement.
  • Data Science is the "art" & "science" of understanding data better. To do this, we need to first cleanse / prep the data, simplify it using various techniques, and learn techniques to visualize the data.
  • There is a "grammar of graphics" and a "grammar of interactive graphics" - which helps in thinking about data visualization.
  • Deploying these AI / ML solutions to production is not a trivial task - mainly due to the heavy computation and huge volume of data processing required to make them production-ready. This is a huge opportunity for the general Software Development / Testing / DevOps community to solve problems faced by data scientists / people in the data science / AI / ML domain.
  • With data privacy laws rightly becoming stricter, you need to be careful and use only legally obtained sample datasets for analyzing / training your data models - else there are going to be huge penalties for the companies involved. (This is in reference to GDPR, with similar laws coming up in the USA and India.)
  • Earlier, only PhD holders were considered qualified to work on Data Science. Nowadays, the trend is to give relevant training to interns, have them work on these problems, and then get the results validated / explained by the PhD specialists.
  • In a nutshell - Data Science, AI & ML use specialized types of tools and technologies to solve different problems. People / organizations were doing these activities before the buzzwords were coined / got popular.
So, what is my core takeaway from this? 
  • As with any new buzzword, there is interesting work happening in Data Science, AI & ML - but the majority of those claiming to be in the field are just creating and riding the hype!

That said, I want to do the following:
  • Find opportunities to investigate and understand Data Science + AI + ML in more detail. 
  • Understand the skills and capabilities required from a software developer + QA role perspective to contribute more effectively to solving these newer problem statements
  • Learn python / R 
  • Experiment with various tools / libraries related to data visualization


Thursday, July 12, 2018

Return of the todo (learning) list

For at least a decade, I have had a list of TODOs which I actively updated and maintained. The items on this list focused on - 

  • what new things I wanted to learn / experiment with
  • new conference talk ideas
  • open source ideas / updates
Unfortunately, due to various reasons (some of which were of my own doing), the focus on learning and experimentation got lost ... and I felt awful about it. I also shared with some friends and colleagues my pain at being unable to find the time / opportunity / focus / support to stay in a continuous-learning phase. But not much could be done / changed in those circumstances.

But I am very happy to share that the list is back, and back with a bang!!!

The days of learning and experimentation continue. My list is now overflowing, and continuing to grow with ideas and things I want to learn and experiment with.

I am happy again!!

Saturday, March 17, 2018

Measuring Consumer Quality - The Missing Feedback Loop

I spoke at vodQA, ThoughtWorks Pune, on "Measuring Consumer Quality - the Missing Feedback Loop". 

This talk addresses the why and how from my earlier blog post on "Understanding, Measuring and Building Consumer Quality". I recommend you read that first, before going through the slides and video for this talk.


Abstract:

How to build a good quality product is not a new topic. Proper usage of methodologies, processes, practices, and collaboration techniques can yield amazing results for the team, the organisation, and the end-users of your product.

While there is a lot of emphasis on the processes and practices side, one aspect that is still spoken about only "loosely" is the feedback loop from your end-users back into making better decisions.

So, what is this feedback loop? Is it a myth? How do you measure it? Is there a "magic" formula to understand the data received? How do you add value to your product using this data?

In this interactive session, we will use a case study of a B2C entertainment-domain product (with millions of consumers) as an example to understand and also answer the following questions:

  • The importance of knowing your Consumers 
  • How do you know your product is working well? 
  • How do you know your Consumers are engaged with your product? 
  • Can you draw inferences and patterns from the data to reach a point of being able to make predictions on Consumer behaviour, before making any code change? 

Video:


Slides can be found here.

Pictures:



Tuesday, May 2, 2017

Criteria for setting up a Mobile Test Automation LAB

I recently got asked this question related to the MAD LAB (Mobile Automation Devices LAB) - "Would like to understand how can we setup something similar in our organisation?"

Since this question is applicable to all those thinking of setting up, or who have already set up, their own lab, I thought I would share my answer here.

To set up your own LAB for Mobile Test Automation, multiple things need to align:


Supportive management who -
  • allow experiments (within reason, of course) and encourage learning through failure, 
  • are willing to invest in infrastructure ($$)

Skilled and Passionate team members who -
  • understand the domain well, 
  • are willing to learn, experiment, re-learn and fail fast, 
  • keep looking for innovative solutions to the problems at hand, 
  • do not reinvent the wheel. 

Philosophy aside, our MAD LAB has the following: 
  • Mac Minis (8-12 devices per Mac Mini), 
  • Powered USB Hubs (I use the ones shown below - and they are working pretty well)

  • High-quality USB cables (I use the ones shown below - and they are working pretty well)
  • CI (Jenkins) set up correctly to keep running tests continuously, with proper reporting in place (else what's the use of running tests if you do not look at the results?)

You could start with a similar setup IF it fits your product-under-test context.

After I answered this on LinkedIn, I realised there are more parameters to think about than just the above.
  • Knowing which devices to use in your Lab
  • Having a good, reliable Internet connection
  • Devices should be "seen" easily
  • Should be easy to work on / with the devices as and when required
  • Know how the devices will be placed in the lab. We tried the following:
    • 2-way tape - that didn't work. Devices would stay up for a few days, then "drop" suddenly. Of course, that also depends on the back surface of the devices.
    • We tried many mobile stands / hangers (shown below) - but each had its own limitations



    • Finally, I found an industrial-strength velcro (1" velcro tape that could take a couple of pounds of weight) - and my devices have not budged since. PS: Please be careful when putting this velcro on the devices. IF it gets on your hand, you will have a velcro tattoo for a long, long time.

What other parameters would you consider for setting up your own Lab? Looking forward to the comments below.


Monday, February 20, 2017

Sharing implementation of cucumber-jvm - Appium test framework

I recently shared the Features of my Android Test Automation Framework, and also the challenges, and how we overcame them, to make parallel test execution work well with Android 7.0 devices as well.

In this blog post, I will be sharing the details (including code) of the implementation. If you have not read my post on Features of my Android Test Automation Framework, I highly recommend you read that first.



Implementation Details

Tech Stack Summary

To recap - here is the tech stack that we currently have:
  • cucumber-jvm - v1.2.5
  • cucumber-reporting - v3.5.1
  • appium - v1.6.3
  • appium-java-client - v5.0.0-BETA2
  • appium-uiautomator2-driver - v0.2.3

1. Configure Jenkins Node (in Jenkins Server)

We currently have 5 Jenkins Nodes set up, as shown below.

Each node is configured like this:

2. Setup Jenkins Job (in Jenkins Server)

Once the Nodes are set up, we can now configure the Jenkins Jobs. 

We have set up the following jobs in Jenkins for our test executions.

Each job is configured as a Jenkins Pipeline project, and we use the Jenkinsfile available here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/blob/master/2%20-%20Setup%20Jenkins%20Jobs/2.2%20Jenkinsfile) as a sample from git to configure what the job is supposed to do.


3. Setup Jenkins Agent

Once the Jenkins Nodes and Jenkins Jobs are configured, we now need to get the Jenkins Agents themselves set up and configured to service requests from the Jenkins server.

We use the JNLP approach to connect the Jenkins Agent to the Jenkins server. For this, we have a template .sh script, which we need to copy and update 2 values in. This is needed for each new Jenkins Node that we connect.

The template .sh script can be found here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/blob/master/3%20-%20Setup%20Jenkins%20Agent/3.1%20start-e2e-moto.sh).

Now, our Jenkins setup is done. But a big piece is still missing. 

In order to run our tests on the Agent, we need some basic software to be installed. To do this, we created a shell script that helps provision the machine. This needs to be done just once per machine - but since we plan to have multiple Mac Mini host machines running various numbers of Jenkins agents, the script helps keep the same software (including versions) on all our machines - which means the same test execution environment.

This shell script can be found here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/blob/master/3%20-%20Setup%20Jenkins%20Agent/3.2%20JenkinsAgentMachineFirstTimeSetup.sh).


4. Manage Test Infrastructure & Test Execution

By this stage, our Jenkins Server and Jenkins Agent setup is done, including the software required to run the tests. The next piece is at the Test Framework level.

Our build tool is Gradle. All infrastructure-related work is handled via the build.gradle file. 

Before we get into the details of the gradle file, it is important to understand the code structure.

Via groovy / gradle, we managed to solve the complexity of:

  • Finding matching devices based on the CONNECTED_DEVICE_IDS (a simplified sketch of this step follows this list)
  • Downloading the apk file from wherever it is available
    • This is done just once per test run - regardless of how many devices the test is going to run on
    • The URL to download is passed as an environment variable - APP_URL
    • For local testing, you can give a local absolute path to the apk file via the APP_PATH environment variable instead of specifying APP_URL
  • Finding the list of scenarios to be run (based on the cucumber tags specified via the environment variable 'run')
  • Managing start / stop of Appium Servers
  • Cleaning up the device before test runs
  • Executing Cucumber scenarios in parallel
  • Building consolidated reports locally (cucumber-reports) - IF not using the Jenkins cucumber-reports plugin
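
To give a flavour of what the device-discovery step involves, here is a minimal, hypothetical sketch in Java (the actual implementation in the repository is in groovy / gradle) that parses `adb devices` output and filters it against the CONNECTED_DEVICE_IDS environment variable:

```java
// Hypothetical sketch of the device-discovery step - NOT the framework's
// actual groovy / gradle code. It parses `adb devices` output and keeps
// only the IDs listed in the CONNECTED_DEVICE_IDS environment variable.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DeviceFinder {
    public static List<String> findConnectedDevices() throws Exception {
        List<String> allowed = Arrays.asList(
                System.getenv().getOrDefault("CONNECTED_DEVICE_IDS", "").split(","));
        Process process = Runtime.getRuntime().exec("adb devices");
        List<String> devices = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // `adb devices` output lines look like: "<deviceId>\tdevice"
                if (line.endsWith("\tdevice")) {
                    String deviceId = line.split("\t")[0];
                    if (allowed.contains(deviceId)) {
                        devices.add(deviceId);
                    }
                }
            }
        }
        return devices;
    }
}
```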

5. Run Tests

The last step in this process is to manage the AndroidDriver. We use cucumber-jvm's @Before and @After hooks to set the right capabilities for instantiating the AndroidDriver, and also to stop it after test execution is complete.
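
Here is a minimal sketch of what such hooks could look like - not the exact framework code. APP_PATH is the environment variable mentioned earlier; the DEVICE_ID variable and the local Appium server URL are placeholders:

```java
// A minimal sketch of managing the AndroidDriver lifecycle via
// cucumber-jvm hooks. DEVICE_ID and the Appium URL are placeholders.
import cucumber.api.Scenario;
import cucumber.api.java.After;
import cucumber.api.java.Before;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.AndroidElement;
import io.appium.java_client.remote.MobileCapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

public class DriverHooks {
    private AndroidDriver<AndroidElement> driver;

    @Before
    public void startDriver(Scenario scenario) throws Exception {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability(MobileCapabilityType.UDID, System.getenv("DEVICE_ID"));
        capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "Android Device");
        capabilities.setCapability(MobileCapabilityType.APP, System.getenv("APP_PATH"));
        capabilities.setCapability(MobileCapabilityType.AUTOMATION_NAME, "uiautomator2");
        driver = new AndroidDriver<>(new URL("http://127.0.0.1:4723/wd/hub"), capabilities);
    }

    @After
    public void stopDriver(Scenario scenario) {
        // Stop the driver session once the scenario completes.
        if (driver != null) {
            driver.quit();
        }
    }
}
```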

These helper files can be found here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/tree/master/5%20-%20Run%20Tests).


Sample Code

All the sample code can be found in my github repository cucumber-jvm-appium-infra - https://github.com/anandbagmar/cucumber-jvm-appium-infra


Happy Testing!



Thursday, June 30, 2016

Learnings from Selenium Conference 2016, Bangalore

The value one gets from attending any conference / training / meetup / etc. depends on various aspects, some of which are mentioned below (in no particular order):

  • Individual skills & capabilities
  • Past experiences
  • Existing knowledge / information / expertise on the subject 
  • Open mindedness
  • Willingness to learn
  • Current work (tools & tech stack, challenges, risks, priorities, backlog, tech debt, team members, etc.)

The above aspects definitely played a part in the takeaways I had from the recently concluded Selenium Conference 2016 in Bangalore as well.

Here are my key takeaways, which I am going to work on learning more about, or implementing in the near future - special thanks to +Dave Haeffner, +Marcus Merrell, +Simon Stewart and +Bret Pettichord for helping me find these takeaways as part of various conversations during these few days.


  • Related to Protractor
    • Use a Proxy Server in tests (Protractor framework) to capture a HAR file on specific actions (AJAX calls) - and capture performance metrics from the same
    • Read about and experiment with the Marionette driver for Firefox - maybe it will help me overcome some of my challenges with Firefox & Maps in the CI environment (headless, using xvfb)
    • Remove "phantomJS" as a supported browser from my framework by ensuring headless tests work with Chrome & Firefox using xvfb
    • Highlight elements when running tests, before taking screenshots - this will help in debugging (see the sketch after this list)
    • Experiment with different loggers & reporters - Allure, Winston logger
    • Better "promise" handling in the framework to keep abstraction layers sane
  • Revive WAAT - Web Analytics Automation Testing Framework - create a new plugin using the Proxy Server approach. Also remove the Omniture Debugger and HttpSniffer plugins.
  • Refocus energy on TTA - Test Trend Analyzer.
  • Keep vodQA going strong - it's a good community initiative
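
For the element-highlighting idea above - in Protractor this would go through browser.executeScript; here is the same idea sketched as a small, hypothetical Java / Selenium helper, just to illustrate the technique:

```java
// Illustrative helper: draw a red border around an element so it stands
// out in screenshots taken right after. Hypothetical code, not from my
// framework - in Protractor the same is done via browser.executeScript.
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class ElementHighlighter {
    public static void highlight(WebDriver driver, WebElement element) {
        ((JavascriptExecutor) driver).executeScript(
                "arguments[0].style.border='3px solid red';", element);
    }
}
```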

See you all at Selenium Conference UK in November 2016!


Tuesday, January 5, 2016

Starting 2016 with vodQA in Pune - Agile Testing Workshop

Over the past few years, after having spoken with a lot of individuals & teams at conferences & organizations, I have realized that the understanding of Agile - and of what it means to do effective Testing on Agile projects / teams - is very poor.

So we at ThoughtWorks, Pune - as part of vodQA Pune - Agile Testing Workshop - are starting 2016 with the objective of connecting with our peers in the Software industry to discuss and understand: "What is Agile, and what does it mean to Test on Agile projects / teams?"

We have planned and organized this vodQA conference in a very Lean and simple way - announcement, registrations, agenda and updates - all directly from our vodQA group on Facebook.

This edition of vodQA will be held on Saturday, 9th January, 2016 at the ThoughtWorks, Pune office, on the 4th floor. For more details, see the vodQA event page on Facebook.

Wednesday, September 30, 2015

vodQA - Quest for Quality & Taste of Mobile in October

There are 2 vodQA editions coming your way in October 2015.

Quest for Quality - ThoughtWorks, Gurgaon - Saturday, 17th October, 2015

ThoughtWorks Gurgaon is happy to announce the 8th edition of vodQA, happening on 17th October, 2015 at the TW Gurgaon office.

The theme for this edition is "Quest For Quality" - i.e. anything and everything related to quality. Speaker registrations are now open; we request you all to submit talks here.
 

Taste of Mobile - ThoughtWorks, Chennai - Saturday, 31st October, 2015

Welcoming speakers for the 6th edition of vodQA Chennai!
This edition of vodQA will give you a 'TASTE OF MOBILE'. We will discuss new practices and ideas in the field of mobile testing.
You could present your thoughts as a hands-on workshop, a short lightning talk (~10 mins) or a detailed presentation (~30 mins) - submit your talks here.

Friday, August 14, 2015

Client-side Performance Testing workshop video

As mentioned here, I conducted a Client-side Performance Testing workshop at TechJam.

It was a full house, and it almost turned into a flop show because there was no wifi available - an essential requirement for the workshop. Two things saved me:
1. The attendees, thankfully (in this case), did not read the prerequisites well - most of them came without a laptop.
2. Because of the above, I could get by using a 3G USB connection and just do a demo of the tools I wanted to show.

At the end of the day, all was good. I got good feedback from the participants - they really enjoyed the workshop and found it very informative and useful. (Thank you all again for the kind words!)

Below is the video from the workshop.


The slides are available here:


Sunday, August 9, 2015

Questions about the Test Pyramid

After watching my presentation on "Enabling Continuous Delivery (CD) in Enterprises with Testing", someone recently asked me a couple of questions about the Test Pyramid. I thought it would be good to reply publicly - it may help others who have similar doubts. 

If you have any other questions, please reach out, or add them as comments on this post.

  • Why do you talk about a JavaScript Test? I mean, why don't you consider this type of testing as part of another? So, what do you mean by a JavaScript test?
    • JavaScript testing requires a different toolset, not the standard xUnit-based ones. Hence I classify it separately. Also, there is potentially a lot of logic built in the JavaScript layer - so it is essential to write tests for that too, say using Jasmine.
  • What's the difference between View and UI?
    • A UI test should focus on business / user journey validations. A view test, however, is different. Consider a journey which has a 5-step / screen workflow. To validate some UI change on the 4th step / screen, you would need to go through steps 1 to 4 in sequence and then validate the changes. This is a very slow and costly approach. Instead, if you build the right type of stubs / mocks, you can set up the state in your product to simulate that steps 1-3 are completed, directly open the UI at step #4, and validate your changes. This is the difference between View and UI tests.

Thursday, August 6, 2015

Agile Testing - Metrics can be fun too!

Metrics are meaningless unless seen in the right context. In this case, my "right" context is purely a "feel-good-factor". 

In April 2011, I published the "Agile QA Process" paper on SlideShare. I am very happy to see it has received over 30,000 views and has been downloaded over 1,400 times!

On a similar note, I created a mindmap for Test Insane - titled "Agile QA - Capabilities & Skills". That also seems to be hitting a good note - with almost 1000 views in under 25 days!

So in this case - Metrics are fun! I don't mind this ego boost to continue writing more, and sharing more!

Tuesday, July 14, 2015

Client-side Performance Testing Workshop in TechJam, 13th August 2015

I am conducting a Client-side Performance Testing workshop at TechJam on Thursday, 13th August 2015.

You can register for the same from the TechJam page.


Abstract

In this workshop, we will see the different dimensions of Performance Testing and Performance Engineering, and focus on Client-side Performance Testing.
Before we get to doing some Client-side Performance Testing activities, we will first understand how to look at client-side performance, and put that in the context of the product under test. We will see, using a case study, the impact of caching on performance, the good & the bad! We will then experiment with some tools like WebPageTest and Page Speed to understand how to measure client-side performance.
Lastly - just understanding the performance of the product is not sufficient. We will look at how to automate the testing for this activity - using WebPageTest (private instance setup), and experiment with yslow - as a low-cost, programmatic alternative to WebPageTest.
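
As a taste of the automation part: WebPageTest (including a private instance) exposes an HTTP API for triggering tests. Below is a rough, hypothetical Java sketch of kicking off a run - the instance URL and the page under test are placeholders:

```java
// A rough sketch of kicking off a WebPageTest run against a private
// instance via its HTTP API (runtest.php). Host and URL are placeholders.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class WebPageTestRunner {
    public static void main(String[] args) throws Exception {
        String wptServer = "http://your-private-wpt-instance"; // placeholder
        String pageUnderTest = "http://example.com";           // placeholder
        String request = wptServer + "/runtest.php?f=json&url="
                + URLEncoder.encode(pageUnderTest, "UTF-8");

        HttpURLConnection connection =
                (HttpURLConnection) new URL(request).openConnection();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String line;
            // The JSON response contains the test id and result URLs to poll.
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```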

Expected Learnings

  1. What is Performance Testing and Performance Engineering.
  2. Hands-on experience of some open-source tools to monitor, measure and automate Client-side Performance Testing.
  3. Examples / code walk-through of some ways to automate Client-side Performance Testing.

Prerequisites

  1. Participants are required to bring their own laptop for this workshop.
  2. Also, please install phantomJS on your machine (http://phantomjs.org/download.html)

Tuesday, May 19, 2015

Role of Automation in Testing

I am speaking at the Discuss Agile 2015 conference on 13-14 June 2015 on the following topics - 

As part of this conference, I also did an interview with Saket Bansal and Atulya Mishra on "The Role of Automation in Testing".

This was an interesting, virtual interview - interested people had asked questions during registration, and a lot more questions came up during the interview itself.

Below is the video recording of the interview. 


I also referenced some slides when speaking about some specific topics. Those can be seen below, or directly on SlideShare.




Monday, May 11, 2015

vodQA Geek Night in ThoughtWorks, Hyderabad - Client-side Performance Testing Workshop

I am conducting a workshop on "Client-side Performance Testing" at vodQA Geek Night, ThoughtWorks, Hyderabad, from 6.30pm-8pm IST on Thursday, 14th May, 2015.

Visit this page to register!

Abstract of the workshop:

In this workshop, we will see the different dimensions of Performance Testing and Performance Engineering, and focus on Client-side Performance Testing. 

Before we get to doing some Client-side Performance Testing activities, we will first understand how to look at client-side performance, and put that in the context of the product under test. We will see, using a case study, the impact of caching on performance, the good & the bad! We will then experiment with some tools like WebPageTest and Page Speed to understand how to measure client-side performance.

Lastly - just understanding the performance of the product is not sufficient. We will look at how to automate the testing for this activity - using WebPageTest (private instance setup), and experiment with yslow - as a low-cost, programmatic alternative to WebPageTest.

Venue:

ThoughtWorks Technologies (India) Pvt Ltd.
3rd Floor, Apurupa Silpi,
Beside H.P. Petrol Bunk (KFC Building),
Gachibowli,
Hyderabad - 500032, India