
Scenario Based Interview Questions for Automation Tester
With Answers

By Mayur Nikam
https://www.linkedin.com/in/mayur-nikam-369b31241
1) Your automated tests for a login page are failing
intermittently. How would you investigate and address
this issue?
If my login tests were acting up, I'd start by looking for patterns.
For example, maybe the failures only happen on Mondays after a
new deployment, or only when using Firefox. Then I'd check the
environment.

Once, I had tests failing because the test server kept running out of
memory. I'd also review my own test code.

A common issue I've seen is using Thread.sleep() instead of explicit
waits, which can cause failures when the page loads more slowly than
expected.
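
As an illustration, here's a minimal sketch of swapping Thread.sleep() for an explicit wait in Selenium; the "login-button" locator and the 10-second timeout are assumptions, not from the original:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;
    import java.time.Duration;

    public class LoginWaitExample {
        // Waits up to 10 seconds for the login button to become clickable,
        // instead of pausing for a fixed time with Thread.sleep().
        public static void clickLogin(WebDriver driver) {
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            WebElement loginButton = wait.until(
                    ExpectedConditions.elementToBeClickable(By.id("login-button")));  // hypothetical id
            loginButton.click();
        }
    }

The wait polls the condition and returns as soon as it's true, so the test is both faster on quick loads and more tolerant of slow ones.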

And I always check the logs – one time, the browser console showed
a JavaScript error that was causing the login button to be disabled.
2) The website you're testing has elements that
constantly change IDs or positions. What techniques
would you use to create reliable locators for your tests?
I've worked with sites where element IDs were generated
dynamically, making them different every time the page loaded. To
handle that, I relied on more stable attributes.

For example, if there's a unique 'data-testid' attribute, I'd use that in
my locator. Relative XPaths are also helpful.

Let's say the login button is always inside a div with the class
'login-form'. I could use an XPath like //div[@class='login-form']//button
to find it reliably.
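
A rough sketch of both techniques in Selenium; the 'login-button' test id is a hypothetical value:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class StableLocatorExample {
        // Prefer a stable test hook if the developers provide one.
        public static WebElement findLoginButton(WebDriver driver) {
            return driver.findElement(By.cssSelector("[data-testid='login-button']"));
        }

        // Fall back to a relative XPath anchored on a stable container class.
        public static WebElement findLoginButtonByXPath(WebDriver driver) {
            return driver.findElement(By.xpath("//div[@class='login-form']//button"));
        }
    }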
3) A critical API endpoint is returning unexpected errors.
How would you use automation to help debug this?

When debugging APIs, I like to use Postman to send requests and
inspect the responses. I might create a test suite with different
requests to cover various scenarios, like valid logins, invalid logins,
and edge cases like empty passwords.

I'd add assertions to check things like the response code – for a
successful login, I'd expect a 200 OK. I'd also make sure the response
body contains the expected data, like a user token.
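
The same checks can also live in the automated suite itself. Here's a minimal sketch using REST Assured (a library the original answer doesn't name), with a hypothetical base URL, endpoint, and credentials:

    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.notNullValue;

    public class LoginApiTest {
        public static void validLoginReturnsToken() {
            given()
                .baseUri("https://api.example.com")   // hypothetical base URL
                .contentType("application/json")
                .body("{\"username\": \"alice\", \"password\": \"secret\"}")
            .when()
                .post("/login")                        // hypothetical endpoint
            .then()
                .statusCode(200)                       // expect 200 OK on success
                .body("token", notNullValue());        // response should carry a user token
        }
    }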

And if the API is slow, I might use JMeter to simulate a heavy load
and see how it performs under stress.
4) Your web application needs to work flawlessly across
different browsers (Chrome, Firefox, Safari). How would
you approach cross-browser test automation?
In a previous project, we used Selenium Grid to run our tests on
Chrome, Firefox, and Safari. We had a Jenkins server that would
trigger the tests automatically whenever we pushed new code.

We also used BrowserStack for testing on some less common
browsers and mobile devices. We did run into some browser-specific
issues, like different ways of handling file uploads, so we had to
write some conditional code to handle those differences.
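
As a sketch, pointing tests at a Grid hub might look like this; the hub URL is an assumption:

    import java.net.URL;
    import org.openqa.selenium.Capabilities;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeOptions;
    import org.openqa.selenium.firefox.FirefoxOptions;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class CrossBrowserExample {
        // Creates a session on the Grid for whichever browser the CI job requests.
        public static WebDriver createDriver(String browser) throws Exception {
            URL hub = new URL("http://localhost:4444");   // hypothetical Grid hub address
            Capabilities options = browser.equalsIgnoreCase("firefox")
                    ? new FirefoxOptions()
                    : new ChromeOptions();
            return new RemoteWebDriver(hub, options);
        }
    }

The same test class then runs unchanged against each browser; only the capabilities passed to the hub differ.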
5) Users complain about slow loading times on a specific
page. How can automation help identify performance
issues?
We once had a page that was taking a really long time to load. Using
Selenium, I wrote a test that measured the time it took for the page
to become fully interactive.
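
One way to capture that measurement is to read the browser's Navigation Timing data through JavaScript. The exact approach I used isn't spelled out above, so treat this as an illustrative sketch:

    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;

    public class PageLoadTimingExample {
        // Returns how long the page took from navigation start to the 'load'
        // event, as reported by the browser's Navigation Timing API.
        public static long measureLoadMillis(WebDriver driver, String url) {
            driver.get(url);   // get() blocks until the load event fires
            JavascriptExecutor js = (JavascriptExecutor) driver;
            Long start = (Long) js.executeScript(
                    "return window.performance.timing.navigationStart;");
            Long end = (Long) js.executeScript(
                    "return window.performance.timing.loadEventEnd;");
            return end - start;
        }
    }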

I also integrated this test with JMeter to simulate multiple users
accessing the page at the same time. This helped us identify a
bottleneck in a database query, which we then optimized to
significantly improve the page load time.
6) You need to test a form with a large number of
different input values. How would you efficiently manage
and use test data in your automation?
I've used data-driven testing extensively for forms.
For example, if I'm testing a registration form, I might create a CSV
file with different sets of test data – valid data, invalid data,
boundary conditions, and so on.

Then I'd use my testing framework (like TestNG or JUnit) to read this
data and run the tests with each data set. I've also used libraries like
Faker to generate realistic test data, like names, addresses, and
email addresses.
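
A minimal sketch of that setup with a TestNG @DataProvider; the CSV path and its three-column layout (username, email, expected result) are assumptions:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import org.testng.Assert;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    public class RegistrationFormTest {

        // Reads rows like: username,email,expectedResult
        @DataProvider(name = "registrationData")
        public Object[][] registrationData() throws Exception {
            return Files.readAllLines(Paths.get("src/test/resources/registration.csv"))
                    .stream()
                    .skip(1)                          // skip the header row
                    .map(line -> line.split(","))
                    .toArray(Object[][]::new);
        }

        @Test(dataProvider = "registrationData")
        public void submitRegistrationForm(String username, String email, String expected) {
            // Fill the form with this row's values and assert the outcome.
            // (Form interaction omitted; this sketch focuses on the data plumbing.)
            Assert.assertNotNull(expected);
        }
    }

Faker can feed the same provider when realistic generated values matter more than a fixed CSV.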
7) The company is prioritizing mobile users. How would
you automate testing on different mobile devices and
operating systems?
I have experience using Appium to automate tests for both Android
and iOS apps. In one project, we used Appium to test a shopping
app.

We wrote tests to verify user flows like browsing products, adding
items to the cart, and completing the checkout process. We ran
these tests on a combination of real devices and emulators to get
good coverage.

We also integrated our Appium tests with a CI/CD pipeline, so they
would run automatically with every new build.
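
A minimal sketch of an Appium session setup for Android; the server URL, device name, and app path are all hypothetical:

    import java.net.URL;
    import io.appium.java_client.android.AndroidDriver;
    import io.appium.java_client.android.options.UiAutomator2Options;

    public class AppiumSetupExample {
        // Starts a session against a locally running Appium server.
        public static AndroidDriver createDriver() throws Exception {
            UiAutomator2Options options = new UiAutomator2Options()
                    .setDeviceName("Pixel_7_Emulator")     // hypothetical device/emulator name
                    .setApp("/path/to/shopping-app.apk");  // hypothetical app under test
            return new AndroidDriver(new URL("http://127.0.0.1:4723"), options);
        }
    }

A matching iOS setup swaps in IOSDriver with XCUITest options; the test methods themselves stay largely the same.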
8) You encounter a feature that seems impossible to
automate (e.g., audio/video playback). How do you
handle this in your test strategy?
When it comes to features like audio/video playback, I know there
are limitations to what we can automate.

Take a video streaming app, for instance.

While I can definitely automate tests to check if the controls work
(like play, pause, volume), or if the video loads correctly in different
resolutions, it's much harder to automate something like assessing
the video quality or checking for buffering issues.
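
For the functional side, here's a short sketch of checking an HTML5 video element's playback state through Selenium, assuming the page has a single video tag:

    import org.openqa.selenium.By;
    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class VideoControlsExample {
        // Verifies that triggering play actually starts HTML5 video playback.
        public static boolean playStartsPlayback(WebDriver driver) {
            WebElement video = driver.findElement(By.tagName("video"));
            JavascriptExecutor js = (JavascriptExecutor) driver;
            js.executeScript("arguments[0].play();", video);   // trigger playback
            Boolean paused = (Boolean) js.executeScript(
                    "return arguments[0].paused;", video);     // HTML5 'paused' property
            return !paused;
        }
    }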

In these cases, I'd focus my automation efforts on the functional
aspects, like UI interactions and API calls. But I'd also advocate for
manual testing to cover those subjective areas, maybe having
testers evaluate the video under different network conditions.

It's all about finding the right balance between automation and
manual testing to ensure comprehensive coverage.
9) You're tasked with automating tests for an old
application with limited documentation. How would you
approach this?
If I had to automate tests for an older application without much
documentation, I'd focus on a few key things.

First, I'd work closely with the development team to understand the
application's core functions and identify the most critical areas to
test.

Then, I'd take a phased approach, starting with small, manageable
tests and gradually expanding coverage over time.

Because documentation is limited, I'd rely more on exploratory
testing to learn how the application behaves and to prioritize my
automation efforts. It's important to be adaptable and resourceful in
these situations, and I'm confident I can effectively automate tests
even with limited information.
10) How would you integrate your automated tests into a
Continuous Integration/Continuous Delivery pipeline?

To make sure our automated tests are really valuable, I'd want them
to be a core part of our CI/CD pipeline. Let's say we're using Jenkins.
I'd set things up so that every time the developers push new code,
Jenkins automatically kicks off a build. Then, as soon as that build is
done, Jenkins would trigger my automated tests.

But it's not just about running the tests; it's about making the results
easy to understand. I'd configure Jenkins to create clear reports,
maybe even with some graphs, showing which tests passed,
which failed, and how long everything took.

That way, everyone on the team can quickly see if there are any
problems. And if a test does fail, the pipeline should immediately let
the developers know so they can fix it right away. This keeps the
feedback loop tight and helps us catch bugs early on.
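
As a sketch, a declarative Jenkinsfile along these lines could wire that up; the Maven commands, stage names, and report path are assumptions about the project layout:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'mvn -B clean package -DskipTests'   // compile on every push
                }
            }
            stage('Automated Tests') {
                steps {
                    sh 'mvn -B test'                        // run the automated suite
                }
            }
        }
        post {
            always {
                junit 'target/surefire-reports/*.xml'       // publish pass/fail trends and timings
            }
        }
    }

The junit step in the post block runs whether the tests pass or fail, so the trend reports and failure notifications stay current on every build.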
