Don’t Just Code with AI. Test with it too!

3 min read · May 31, 2025

The way we write software is changing fast. Tools like GitHub Copilot, Gemini Code Assist, Cursor, and other AI assistants help generate code quickly, automate repetitive tasks, and accelerate feature development. It's exciting and powerful, but one crucial question keeps nagging at me: what about testing?

As we adopt AI tools to write code, we also need to evolve the way we test that code. If AI helps us build faster, it should also be used to build more safely.

AI-Generated Code Still Needs Human Responsibility

When AI writes code, it does so based on patterns it has learned. It may not know the full context of your application, its specific business logic, or its weird edge cases. It might generate something that compiles and looks clean, but that doesn't guarantee correctness.

That's why tests matter more than ever. They are the safety net that ensures the code actually works as intended. Without tests, we might be trusting a black box. I've noticed in my own work that we tend to push AI-generated code quickly rather than reviewing it line by line.

Let's Use AI to Write Tests Too

We often use AI to generate functions or refactor code but stop short of asking it to write tests, which feels like a missed opportunity.

Modern AI tools are capable of generating entire test suites with simple prompts like:

Write unit tests for this function including edge cases & failure scenarios

Even better if the tests follow the team's testing and coding conventions. For example, we can ask the AI to create request mocks for external APIs and to include timeout scenarios, permission errors, and malformed-input cases.
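To make this concrete, here is the kind of output such a prompt might produce. Both the helper function and its tests are hypothetical illustrations (a made-up `parse_price` utility), written as plain assert-based tests so they run standalone:

```python
# Hypothetical helper an AI assistant might have generated.
def parse_price(text):
    """Parse a price string like "$1,234.50" into a float, or raise ValueError."""
    if not isinstance(text, str) or not text.strip():
        raise ValueError("price must be a non-empty string")
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)  # float() raises ValueError for malformed input


# The kind of test suite the prompt above might yield:
def test_parse_price():
    assert parse_price("$1,234.50") == 1234.50      # happy path
    assert parse_price("0") == 0.0                  # boundary value
    for bad in ["", "   ", "$,,", None]:            # failure scenarios
        try:
            parse_price(bad)
        except ValueError:
            pass  # expected
        else:
            raise AssertionError(f"expected ValueError for {bad!r}")


test_parse_price()
```

Notice that the edge cases and failure scenarios come for free once the prompt explicitly asks for them.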

It's high time we stopped treating tests as optional and instead treated them as the second half of the feature.

Use AI to Generate Better Test Data

Consider a common example: an image recognition feature. Testing such a system requires a wide variety of images covering different lighting conditions, blur, partial objects, and so on. Traditionally, gathering such a diverse dataset is time-consuming and resource-intensive.

Now, with generative AI, we can create hundreds of synthetic images covering different real-world scenarios in seconds, spanning both success and failure cases: testing how the system handles photos taken at night, with motion blur, or with heavy occlusion. We can generate them on demand! This kind of dataset diversity used to be a bottleneck, and AI can remove that barrier.
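A minimal sketch of the idea: start from one "base image" and programmatically derive the night, noisy, and occluded variants. Here the image is a toy grid of grayscale ints and the transforms are simple stand-ins for what a real generative model would produce:

```python
import random

def darken(img, factor=0.3):
    """Simulate a low-light photo by scaling every pixel down."""
    return [[int(p * factor) for p in row] for row in img]

def add_noise(img, amount=40, seed=0):
    """Simulate sensor noise or blur artifacts with seeded random jitter."""
    rng = random.Random(seed)
    return [[max(0, min(255, p + rng.randint(-amount, amount))) for p in row]
            for row in img]

def occlude(img, rows=1):
    """Simulate a partially hidden object by blanking the top rows."""
    return [[0] * len(r) if i < rows else list(r) for i, r in enumerate(img)]

# One base case fans out into a whole family of failure-mode test inputs.
base = [[200, 210], [190, 220]]
variants = {"night": darken(base), "noisy": add_noise(base), "occluded": occlude(base)}
```

Each variant then becomes a labelled test case, so the recognition pipeline is exercised against conditions we never had to photograph.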

Establish Prompting Guidelines for Testing

One thing that helps when working with AI is setting clear prompting patterns. Developers on the team should know what to include when asking an AI tool for code: requesting both the implementation and the tests, specifying the naming convention for test files, and including instructions to cover edge cases and invalid inputs.

Consistency matters too: when developers use different AI tools or prompting styles, the result is a messy codebase. A shared prompt guide helps keep everyone aligned.
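One way to make a prompt guide enforceable is to encode it as shared templates rather than a wiki page. The template names and wording below are illustrative, not any tool's real API:

```python
# Team prompt guide as code: everyone renders prompts from the same templates.
PROMPT_TEMPLATES = {
    "feature": (
        "Implement {description}. Also write unit tests in {test_file} "
        "covering edge cases, invalid inputs, and failure scenarios. "
        "Follow our naming convention: test_<function>_<case>."
    ),
    "api_client": (
        "Implement {description}. Mock all external API calls and include "
        "tests for timeouts, permission errors, and malformed responses."
    ),
}

def build_prompt(kind, **fields):
    """Render a team-approved prompt from its template."""
    return PROMPT_TEMPLATES[kind].format(**fields)

prompt = build_prompt("feature",
                      description="a currency parser",
                      test_file="tests/test_parser.py")
```

Because the edge-case and mocking instructions live in the template, no developer can forget to ask for them.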

AI in development is no longer a future concept; it's part of our day-to-day work. But while AI can accelerate development, quality remains a human responsibility.

Next time you use AI to generate code, remember to follow up with thorough tests. Speed is great, but only if we can trust what we build!
