As I continue learning in QA, I’m beginning to better understand the difference between writing test cases in advance and doing exploratory testing during execution.
When I write test cases, my thinking is usually based on requirements and acceptance criteria. I try to cover expected user flows, along with positive and negative scenarios that can be anticipated at that point in time. This helps ensure coverage of known behaviours.
Exploratory testing feels different because the thinking happens while interacting with the system. Instead of validating predefined steps, I observe how the application actually behaves and let those observations guide what I test next. New ideas often come from small signals such as unexpected delays, unclear messages, or inconsistent behaviour — things that aren’t always obvious when designing test cases upfront.
A practical example:
While testing a login feature, the test cases covered scenarios like valid login, invalid credentials, empty fields, and account lockout. During exploratory testing, I noticed that after multiple failed login attempts, the response time increased and the error messages became inconsistent. This led me to explore further — retrying actions at different intervals and observing how the system responded.
These behaviours weren’t clearly defined in the requirements and weren’t easy to predict while writing test cases. Observing them in real time helped surface usability and reliability concerns that could impact the user experience.
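The kind of probing described above can even be semi-scripted once a suspicious signal appears. Below is a minimal Python sketch of that idea: repeat a failing login, record the response time and error message for each attempt, and look for drift in either one. Here `attempt_login` is a hypothetical stand-in for the real endpoint call, not part of any actual system under test.

```python
import time

def attempt_login(username, password):
    # Hypothetical stand-in for the real login request; in practice this
    # would call the application's login endpoint and return its response.
    time.sleep(0.01)
    return {"status": 401, "message": "Invalid credentials"}

def probe_failed_logins(attempts=5, interval=0.0):
    """Repeat a failing login, recording response time and error message,
    so that inconsistencies in either one stand out."""
    observations = []
    for _ in range(attempts):
        start = time.perf_counter()
        response = attempt_login("user", "wrong-password")
        elapsed = time.perf_counter() - start
        observations.append((round(elapsed, 3), response["message"]))
        time.sleep(interval)  # vary this to retry at different intervals
    return observations

results = probe_failed_logins()
messages = {msg for _, msg in results}
# More than one distinct message, or a widening spread of times,
# is exactly the kind of signal worth exploring further.
print(f"{len(results)} attempts, {len(messages)} distinct message(s)")
```

This isn't a test case in the scripted sense; it's a throwaway probe that turns a real-time observation into something repeatable long enough to investigate it.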
Through experiences like this, I’m learning that exploratory testing isn’t about missing scenarios earlier, but about discovering new risks by responding to the system’s behaviour in the moment.
Open question for the community:
For teams that use a lot of automation, how is exploratory testing usually approached or kept relevant?
Really well put. You captured the difference between planned coverage and real-time discovery in a very grounded way.
What resonates most is your point that exploratory testing is not about “what we missed earlier,” but about responding to how the system actually behaves. That shift in thinking is where its value shows up, especially in areas like performance, usability, and consistency that rarely surface from requirements alone.
For teams with heavy automation, exploratory testing stays relevant when it is intentionally positioned as a learning activity. Automation secures the expected behavior, while exploratory testing focuses on signals, side effects, timing, and user perception. The kind of response-time and message inconsistency you noticed is a great example of something automation might pass while still masking real user risk.
In practice, short, focused exploratory sessions after automated checks pass, especially around recent changes or unstable areas, tend to work best. They turn automation into a safety net and exploration into a way to uncover new risks worth addressing next.
Thanks for raising this; it is a thoughtful way to frame the role of exploratory testing in mature teams.
I always thought of ET as a way to fill in missing scenarios, but I agree with what you wrote.
Also, as @Omri_Berkovich stated, I agree that automation is the safety net, and ET actually becomes more important as more tests are automated.
Good morning! Our automation efforts are primarily focused on regression testing, covering the core functionality of components, tools, or modules. We typically run these once a build is stable, but also execute them earlier to catch any immediate breaking changes or regression bugs.
Exploratory Testing (ET) remains a key part of our strategy, particularly for new products or changes, and to complement our automated and scripted tests. Its prominence has increased since we adopted PractiTest, which offers excellent support for ET.
We’ve found that all approaches have their place. Automating frequently changing or subjective functionality often incurs significant overhead and maintenance challenges. Leveraging ET for key, reusable usability flows, as you’ve described, is particularly valuable for us.
ET also serves as a time-saver, allowing us to quickly leverage a tester’s knowledge to validate a change, which is often faster than writing and executing a new script. Another valuable use case for ET is during the early stages of development, where we can explore emergent behaviour and verify its acceptability with product teams or business stakeholders.