Vinayaka Mayura will be presenting the following session
Vinayaka Mayura G G - Search Relevancy Testing: QA in Machine Learning Models
Vinayaka Mayura G G, QA, ThoughtWorks
Sold Out!
As the adoption of Artificial Intelligence gains traction, QA capabilities need to evolve to keep pace. Machine Learning is used extensively in retail applications to solve complex problems, one of which is search relevancy. Showing the most appropriate results to the user is critical for a high conversion rate. Because Machine Learning poses distinct QA challenges, such as the test oracle problem, fairness, correctness, and robustness, we may need different approaches and testing techniques to perform QA on Machine Learning models.
Different Machine Learning types, such as supervised and unsupervised models, have different characteristics and are used for different kinds of problems. Although they solve different complex problems, a Machine Learning model is still a unit of software code that needs to be verified like any other software system. Viewed as a whole, a Machine Learning model may look complex and intractable, but we can break it into small modules and verify each for quality. Black-box and white-box testing techniques can be applied to verify the functionality. Data, feature engineering, and algorithms are the major parts of a Machine Learning model, and we will see how we applied different techniques to validate each of these.
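As a flavour of what black-box testing of a ranking model can look like, here is a minimal sketch of a metamorphic test: instead of asserting exact outputs (which the test-oracle problem makes hard), we assert relations that must hold between related inputs. The `search` function below is a hypothetical toy stand-in for the model under test, not part of the talk.

```python
# Hedged sketch: black-box metamorphic testing of a search ranker.
# `search` is a toy keyword matcher standing in for a real ML model.

def search(query, catalog=("red shoes", "blue shoes", "red shirt")):
    """Toy ranker: score catalog items by words shared with the query."""
    words = set(query.lower().split())
    scored = [(len(words & set(item.split())), item) for item in catalog]
    # Highest-scoring items first; drop items with no overlap at all.
    return [item for score, item in sorted(scored, reverse=True) if score > 0]

def test_case_insensitivity():
    # Metamorphic relation: changing query case must not change results.
    assert search("RED Shoes") == search("red shoes")

def test_term_order_invariance():
    # Metamorphic relation: reordering query words must not change the ranking.
    assert search("shoes red") == search("red shoes")

test_case_insensitivity()
test_term_order_invariance()
```

The same pattern scales to a real model: pick invariances the product demands (case, word order, trailing whitespace) and check them without ever needing a "correct answer" oracle.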
This talk focuses on viewing Machine Learning software as a whole and performing quality analysis on it. We look at how testing a Machine Learning model differs from typical software testing, discuss the challenges we came across and the process involved in building an ML model, and use search relevance as a running example. We will dive into the areas where quality is assessed; the significant factors considered here are measuring accuracy and efficiency. We will look into different black-box testing techniques for different algorithms, see how traditional testing differs from testing Machine Learning applications, and go through black-box testing techniques with examples, followed by a live demo.
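To make "measuring accuracy" concrete, one widely used relevance metric is NDCG@k (normalised discounted cumulative gain). The sketch below computes it from graded relevance labels; the labels are illustrative assumptions, not data from the talk.

```python
# Hedged sketch: NDCG@k, a common accuracy metric for search relevance.
import math

def dcg(relevances, k):
    """Discounted cumulative gain over the top-k results."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg(relevances, k):
    """DCG normalised by the ideal (descending-sorted) ordering; 1.0 is perfect."""
    ideal_dcg = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Relevance grades (3 = highly relevant, 0 = irrelevant) in the order
# the model returned the results for one query.
print(round(ndcg([3, 2, 3, 0, 1], k=5), 3))
```

A QA pipeline can run this over a set of judged queries and fail the build if the average NDCG drops below an agreed threshold after a model change.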
1. What got you started/interested in Testing?
I always expect the product we deliver to be of great quality. Testing is also a combination of engineering and building the right product, and that is what makes it interesting.
2. What has been your best moment/highlight working with Selenium?
The last discussion I had with Jason Huggins about Selenium's growth and its challenges.
3. What do you think is the biggest challenge faced by Software Testers today?
Most computational processing and transformation tasks have become complex. Testing has grown from simple user-experience checks to verifying how well an application performs under huge traffic. Extensive work is still required to create testing tools and frameworks for areas like Machine Learning, OpenGL, audio/video/image processing, and autonomous driving systems.
4. What is your advice to testers, who are new to automation?
Don't stick to one tool; analyse multiple tools and build a framework that can be extended for future requirements and problems.
5. Tell us about the session(s) you will be presenting at the conference and why did you choose those topics?
I am presenting on a topic related to Machine Learning. There are many challenges involved in testing it, not much work has been done at the ground level, and it has been a great learning experience.
6. What are some of the key takeaways from your session(s)?
The audience will get to know what Machine Learning and search relevancy are, how to test a Machine Learning application, and the different techniques used for testing Machine Learning models.
7. Which sessions are you most looking forward to attending at Selenium Conf?
There are so many interesting topics listed. Even if I miss sessions running in parallel, I will watch them later on YouTube.