There’s a lot of hype right now around AI agents that handle testing end-to-end with no human in the loop. Sounds great in theory. But our team recently wrote about why chasing full autonomy might actually be setting teams up for frustration rather than results.
The argument: AI today is genuinely powerful as a collaborator, but as an autonomous actor, it is just not there yet.
The core issue is context. AI doesn’t know your project structure, your existing coverage, or your history of edge cases. So even when it generates something useful, you’re still stuck copy-pasting results and re-explaining everything from scratch.
The more productive framing might be: AI does the heavy lifting, humans stay in the decision loop.
Curious what you’re all seeing in practice:
- Are you experimenting with AI in your QA workflow?
- Where has it actually saved you time, and where has it let you down?
- Has anyone run into the “working blind” problem, where AI suggestions missed the mark because it had no real project context?
Would love to hear how this community is navigating it!
P.S. Full blog post here if you want the longer read: The Problem With Chasing Fully Autonomous QA Agents
Or, you can watch this great session where Joel Montvelisky walks through PractiTest’s MCP approach and shows how AI tools can operate with real testing context, turning AI from isolated prompts into a connected testing assistant: Orchestrate AI for Testing