A Balancing Act: Finding a Place for Exploratory Testing

By Elizabeth Zagroba

We had automated “all” our tests

For the first year building our product, we went from zero customers to a handful of beta customers. The developers and testers on the teams built up a suite of hundreds of automated tests by writing new tests for each story. Testing stories involved limited exploration, because the roadmap for the product wasn’t clear. Any bugs or edge cases we found by exploring would just be tossed in the backlog to languish as we practiced demo-driven-development. Our release testing strategy was: run all the automated tests, work through the failures until they all turned green. We believed this proved we hadn’t broken anything. 

We got new and more stakeholders

People around the company started to have a stake in what we built. We worked on a search feature that integrated with the desktop team and the web team. We spent a few weeks hashing out the API specifications with the desktop team, then didn’t show our work for months while we built and tested. 

We released the search feature. That’s when sh*t hit the fan

It turned out the web team had different requirements from the desktop team. They expected an empty search to return the most popular items, instead of an empty list. They expected results if they searched in the middle of words, while we were only indexing results from the beginning of words. The desktop team had a filter that defaulted to not showing any search results. Automation could have caught this behavior, if a human had told it to expect something different.


Having a strong focus on automated tests didn’t work for this point in our product lifecycle, when more teams were integrating with us and we were headed out of beta and towards general availability. Building and debugging automated tests took time away from thinking about how our products worked together.

We had a reckoning

As Winston Churchill said, “Never let a good crisis go to waste.” The web team realized they needed to state their expectations, and started doing so. The desktop team asked for more updates while we were building.


But what should the developers and testers on our team do differently? The developers were afraid to stop writing tests and start thinking like testers, because they knew they weren’t good at that. The developers felt more effective when they wrote code, but they struggled to decide what to automate and to debug existing tests. Another tester on the team worried about what their daily work might look like if we “forced” developers to do exploratory testing.

We removed some automated testing, and we’re better for it

We’re posting a high-level view of our testing in a shared Slack channel as we work on stories that affect the other teams. We were expecting additions or suggestions; instead the response has been an increase in trust and excitement about our team’s work.


We’re releasing more frequently in smaller chunks. We’re addressing the bugs (features are in the eye of the beholder) that the other teams found in our first big release. But we’re not fixing all of them. We’re setting expectations around what is possible for our team to accomplish. (Did you know there’s still a global pandemic?) 


We revisited the test strategy for our team and consciously decided to automate less. The time saved in writing and maintaining the tests is used to keep finding answers to the question: what’s the risk here?


Many of our stories still have some automated component. In feature testing, we look deeply into a feature and ask the tricky questions. In release testing, we only want to make sure that the branch we already tested got merged. We started making a checklist of stories that didn’t have an automated component, and wrote charters that described where we should start investigating:

  • See what happens if the description on the detail page is empty.
  • See if a regular user without admin rights can search for {publicly available data}. 
  • See if you get a pop-up error message when you search for weird characters.
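A checklist like this can live next to the code as data, so release testing can flag any story that has neither an automated check nor a charter. A minimal sketch of that idea in Python (the story names and fields here are hypothetical, not our actual tooling):

```python
from dataclasses import dataclass, field


@dataclass
class Story:
    """One user story and how it is covered for release testing."""
    name: str
    automated: bool
    charters: list[str] = field(default_factory=list)


stories = [
    Story("detail page rendering", automated=False,
          charters=["See what happens if the description on the detail page is empty."]),
    Story("search indexing", automated=True),
]


def needs_attention(stories: list[Story]) -> list[str]:
    """Return stories with neither an automated check nor an exploratory charter."""
    return [s.name for s in stories if not s.automated and not s.charters]
```

Anything `needs_attention` returns is a gap: either write a charter for it or accept the risk out loud.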

Where you can try it

Find a test that always passes. Ask whether it would tell you important information if it failed, or whether you’d work to fix it. Watch João’s talk about deleting tests.


Consider your next user story. There are things you could automate, but does the automation need to do the setup, the execution, and the assertion? Could you write something to set yourself up for more thorough exploratory testing? Watch Richard’s talk about redefining test automation.
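Automating only the setup might look like this: a script that seeds the system with the edge cases you want to poke at, then leaves the executing and asserting to a human. A minimal sketch, assuming a hypothetical REST API with an `items` endpoint (the URL and payload shape are made up for illustration):

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000/api"  # hypothetical test environment


def item_payload(name: str, description: str = "") -> bytes:
    """Build the JSON body for one seeded item."""
    return json.dumps({"name": name, "description": description}).encode()


def seed_item(name: str, description: str = "") -> None:
    """POST one item so a human can explore how it renders afterwards."""
    req = request.Request(
        f"{BASE_URL}/items",
        data=item_payload(name, description),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # setup only: no assertions on the response


if __name__ == "__main__":
    # Seed the edge cases worth exploring by hand.
    seed_item("plain item", "a normal description")
    seed_item("no description")             # charter: empty detail page
    seed_item("weird characters ☃ <>&")     # charter: odd input in search
```

The script ends where the interesting part begins: you open the app against this seeded data and go looking for trouble.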


Don’t force your developers to do exploratory testing. Start with the feedback you’re already providing on each story. Get on a call to see which items matter and which don’t. Get on a call when you have a quick question for them too. Get on a call to make sure you understood one of the requirements correctly. Get on a call so they can help you set up your configuration, figure out what a log message means, or debug. After they’ve seen enough examples of how you think and make decisions, it will start to rub off on them.

About the Author

Elizabeth Zagroba is a Test Engineer and Team Lead at Mendix in Rotterdam, where she actively encourages, supports, and contributes to quality throughout the organization. She’s also presented at conferences around the world. You can find more of Elizabeth’s writing on her personal website, the Ministry of Testing website, and Twitter.
