Tags: #ai-coding #testing #tdd
A failing test is worth a thousand prompts
307 words · 2 min read
If you have a good automated test engineering practice, you're sitting on an AI coding superpower.
One of my favorite ways to prompt my AI assistant is "run the test suite and remediate any test failures." It's efficient with keystrokes, with context window usage, and with how often I need to reprompt to get good results.
I easily get repetitive strain injuries (RSI) so anything that gives me more leverage with my keyboard is very helpful.
Here's what that prompt does most of the time (when I know there is a failing test):
- Assistant runs the test suite
- Observes the build output
- A failing test is noted
- The assistant triages whether it's a problem with the test or the underlying code (Kiro is great at this)
- The underlying code with the defect is identified
- A change is implemented fixing the defect
- The test suite is run again
- Assistant declares victory
Then I smugly smirk and move on to my next task.
How does this work?
When you code with AI, every text surface in your project is a prompt. Directory structure, file names, file contents, and build task output. When your assistant sees a failing test, it has confirmation of exactly what's broken and diagnostic output to fix it. The project structure subtly biases the LLM to look for what to update in the right places.
Coding with AI has increased the amount of leverage you can exert from your test suite. Tests used to catch bugs. Now they catch meaning.
When you prompt "make this function handle edge cases better," the AI has to guess what you mean. But when it sees...
test('should handle empty arrays', () => { expect(processData([])).toEqual([]) })
... failing, there's zero ambiguity. Paragraphs prompt guessing. Failing tests prompt precision. Naturally, you can prompt the test case into existence as well!
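To see what that test pins down, here is one hypothetical `processData` that would satisfy it. The function name comes from the test above; its behavior on non-empty input is an assumption made up for illustration.

```javascript
// Hypothetical implementation driven by the failing test above.
// The empty-array guard is exactly what the test specifies; the
// non-empty behavior (tagging each record) is invented for the example.
function processData(records) {
  if (!Array.isArray(records) || records.length === 0) {
    return []; // the edge case the test pins down
  }
  return records.map((record) => ({ ...record, processed: true }));
}

console.log(JSON.stringify(processData([]))); // prints "[]"
```

The test says nothing about the mapping logic, and it doesn't need to: it only needs to be precise about the one behavior you care about right now.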
When you start your next coding session, write a failing test first, either yourself or with your coding assistant. Then prompt your AI with "run the test suite and remediate any test failures."
© 2025 Archie Cowan
