We're Investing $3.5M in AI Testing So You Can Test Without Writing Code

I believe that software testing should be accessible to everyone, regardless of their programming background. It's why I built MagicPod, a no-code, end-to-end test automation platform: to empower teams to create and maintain robust test coverage without writing a single line of code.
And it works. Non-technical team members can now contribute meaningfully to the QA process, reducing the bottlenecks traditionally caused by limited engineering resources. But despite all the progress we've made, no-code isn't a silver bullet.
Although democratizing test automation with no-code features has contributed to fewer manual testing errors and more maintainable tests, our users still report tests breaking for mysterious reasons.
Why Test Automation Still Breaks
If you’ve worked with test automation, these problems probably sound familiar:
- A test runs, but fails for no clear reason.
- A test fails only once every three days.
- A button clearly visible in the UI triggers a “cannot be found” error.
- Tests pass individually, but fail when run in sequence.
- A test that worked fine the previous week suddenly doesn’t.
Our support team hears about these mysterious failures every day from our users. And in most cases, they stem from one root cause: over-reliance on the UI's exact state.
For instance, a test step clicks a button labeled “Unread Emails (9)”, but if the count changes to 8, the test fails: the locator was looking for an exact text match. That's not a bug in the test logic, but a limitation of how rigid UI-based automation can be.
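To make the brittleness concrete, here is what such an exact-match step looks like in a Playwright-style script (a sketch for illustration only; the URL and label are hypothetical, and MagicPod users wouldn't write this code by hand):

```typescript
import { test, expect } from "@playwright/test";

test("open unread emails", async ({ page }) => {
  await page.goto("https://mail.example.com/inbox"); // hypothetical app URL

  // The locator was recorded while the badge read "(9)". As soon as the
  // unread count changes, this exact-text match finds nothing and the
  // step fails with a "cannot be found" error.
  await page.getByText("Unread Emails (9)", { exact: true }).click();

  await expect(page).toHaveURL(/unread/);
});
```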
To solve these issues, we have continuously improved MagicPod's test generation logic and auto-healing features. However, there are limits to what static logic can fix, and it’s tough to automate every edge case perfectly. We needed a fundamentally different approach.
How AI Agents Fix UI-Dependent Test Failures
Generative AI has proven helpful for answering questions and generating code, but it hasn’t fundamentally changed how engineers work. AI agents, on the other hand, can take on more of the tedious work, autonomously performing tasks across browsers, tools, and systems.
Today, you can instruct an AI agent to navigate to a website, sign up as a user, and complete a multi-step onboarding process. What once required precise, brittle test scripts can now be handled dynamically by an agent that understands intent.
And while AI agents are capable of generating tests from scratch, we believe the greatest value lies in a hybrid approach: combining no-code workflows with intelligent AI agents.
If we applied a hybrid approach to our “Unread Emails (9)” button example, generative AI would recognize the count as a dynamic label while recording user actions and adjust the test to target just “Unread Emails”. This kind of contextual reasoning would dramatically reduce unexplained test failures.
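In locator terms, that adjustment might look like the following (again a Playwright-style sketch, not MagicPod's internal representation):

```typescript
import { test, expect } from "@playwright/test";

test("open unread emails, count-agnostic", async ({ page }) => {
  await page.goto("https://mail.example.com/inbox"); // hypothetical app URL

  // The AI treats "(9)" as a dynamic counter and targets only the stable
  // part of the label, so the step survives any unread count.
  await page.getByText(/Unread Emails/).click();

  await expect(page).toHaveURL(/unread/);
});
```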
Recording user actions is often faster than giving written instructions, just as doing a task yourself is often faster than teaching someone else. Hence, the real power of AI agents may lie not in creating tests, but in what happens after a test is created.
Thanks to tools like MCP (Model Context Protocol) and the Agent Development Kit, it's becoming much easier to tailor generative AI to your own services; in our case, our software testing suite.
A ¥500M ($3.5M) Investment for the Future
AI agents represent a paradigm shift that will transform the software testing industry. That's why we are investing 500 million yen ($3.5M) to build this technology into our no-code testing platform.
Why such a bold move? Because despite advances in automation, testing in the $660 billion global software market is still dominated by manual work. Test automation remains too brittle, too inflexible, and too technical for widespread adoption. However, AI agents could make automated testing adaptable and straightforward enough to replace manual testing at a scale we've never seen before.
To achieve this, we're focusing on two key transformations:
- Enhancing the test creation phase through external AI agents
- Revolutionizing test maintenance using internal AI agents
Test Creation Phase
MCP servers are the bridge between AI agents and software testing platforms. Our first step is developing an MCP server that enables coding AI agents like Cline, Cursor, or Claude to interact with MagicPod's functions.
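As a rough sketch of what such a bridge could look like, here is a minimal MCP server in TypeScript using the official `@modelcontextprotocol/sdk`; the tool name, parameters, and MagicPod endpoint below are illustrative assumptions, not our published API:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "magicpod-mcp", version: "0.1.0" });

// Hypothetical tool: let a coding agent (Cline, Cursor, Claude) kick off a
// MagicPod batch run. The endpoint and parameters are assumptions for this sketch.
server.tool(
  "run_batch_test",
  {
    organization: z.string(),
    project: z.string(),
  },
  async ({ organization, project }) => {
    const res = await fetch(
      `https://app.magicpod.com/api/v1.0/${organization}/${project}/batch-run/`, // assumed endpoint
      {
        method: "POST",
        headers: { Authorization: `Token ${process.env.MAGICPOD_API_TOKEN}` },
      }
    );
    return { content: [{ type: "text", text: await res.text() }] };
  }
);

// Serve over stdio so local agents can connect to it directly.
await server.connect(new StdioServerTransport());
```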
We’re also looking to support code-based test scripts alongside our no-code format.
Initially, this was to help engineers and QA collaborate more effectively, but it matters even more now with AI agents. By converting tests to code and storing them in Git repositories, AI agents can edit and manage them directly. And since they remain viewable and editable in no-code format, they’re still accessible to non-engineers.
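To illustrate the dual representation, here is one way a recorded test could round-trip between code and a no-code view; this is a purely hypothetical format sketched for this post, not MagicPod's actual code export:

```typescript
// Each step is plain data: an AI agent can edit this file in a Git repo,
// while the no-code editor renders the same steps back as a visual list.
type Step =
  | { action: "launch"; url: string }
  | { action: "tap"; locator: string }
  | { action: "assertVisible"; locator: string };

const loginTest: Step[] = [
  { action: "launch", url: "https://example.com/login" },
  { action: "tap", locator: "text=Sign in" },
  { action: "assertVisible", locator: "text=Welcome back" },
];

// A minimal "no-code view": print each step in human-readable form.
for (const [i, step] of loginTest.entries()) {
  const target = step.action === "launch" ? step.url : step.locator;
  console.log(`${i + 1}. ${step.action}: ${target}`);
}
```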
This tests-as-code approach preserves test stability: MagicPod scripts provide a consistent framework that eliminates the unpredictability often associated with purely AI-generated solutions. It also avoids the ongoing cost of repeatedly calling AI models at test execution time.
Key features like history tracking, branching, and GitHub integration are central to this strategy. These ensure changes made by AI agents or humans remain transparent, auditable, and easy to manage.
Maintenance Phase
Even the best test suite breaks when your UI changes significantly. Today, that often means hours of manual maintenance.
MagicPod already includes an auto-healing feature, which automatically updates tests in response to UI changes, but it currently detects only about 30% of those shifts.
To close that gap, we’re building a much more advanced system: an AI-driven maintenance engine (sketched in code after this list) that can:
- Understand the application context
- Consider the user’s original intent behind creating the test
- Autonomously try and retry different fixes until the test passes
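A minimal sketch of that “try and retry” loop, assuming the engine exposes two capabilities we invent here for illustration (`runTest` and `proposeFix` are hypothetical stand-ins, not MagicPod APIs):

```typescript
type TestResult = { passed: boolean; failureDetail?: string };

// Keep re-running the test, asking the AI engine for a candidate fix after
// each failure, until it passes or the engine runs out of ideas.
async function healTest(
  testId: string,
  runTest: (id: string) => Promise<TestResult>,
  proposeFix: (id: string, failure: string) => Promise<boolean>, // false = no more candidate fixes
  maxAttempts = 5
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await runTest(testId);
    if (result.passed) return true; // healed (or was never broken)

    // The engine weighs application context, the original intent behind the
    // test, and the failure detail when proposing the next candidate fix.
    const hasFix = await proposeFix(testId, result.failureDetail ?? "unknown failure");
    if (!hasFix) break;
  }
  return false; // escalate to a human
}
```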
While this system might not technically qualify as a full AI “agent”, since it does not act on direct user instructions, it embodies the true potential of AI agents: solving complex problems without human intervention.
Testing Without Coding Skills
The evolution of AI agents and no-code tooling is breaking down barriers that have long separated testing from the rest of the product lifecycle.
After trying tools like Bolt.new, which use generative AI to build web services, I’ve realized that while they make it easy to create a basic framework, you still need programming skills to fine-tune the results. Without that knowledge, using these tools in a real business setting can be challenging.
This is why our objective is to combine the accessibility of no-code with the intelligence of AI to make agentic testing a reality. For teams. For non-engineers. For everyone.
We’re excited to lead the way.
MagicPod is a no-code, AI-driven test automation platform for mobile and web applications, designed to speed up release cycles. Unlike traditional "record & playback" tools, MagicPod uses an AI self-healing mechanism: your test scripts are automatically updated when the application's UI changes, significantly reducing maintenance overhead and helping teams focus on development.