Artificial Intelligence is no longer just a buzzword in software development — it’s a powerful collaborator. From speeding up prototyping to navigating complex codebases, AI has turned the development process upside down.
We spoke with Taras Trischuk, a Senior Software Engineer with extensive experience using AI-driven tools like Claude Code, GPT-5 Codex, and Gemini CLI, to explore how AI helps, where it struggles, and how to integrate it effectively into development workflows.
Let’s dive into the practical realities of AI in software engineering — the tools, processes, and lessons learned from real projects.
Go-To AI Tools for Development
My go-to tool is Claude Code with Opus. For code reviews, I rely on Codex with GPT-5, which excels at understanding and exploring complex flows. When I need to analyze a large codebase, I switch to the Gemini CLI since it's pretty fast and supports a large context window.
I always run Claude Code with MCP servers — as a developer, I prefer automating routine tasks. The tools I use most often are:
- @trishchuk/codex-mcp-tool: a gateway to GPT-5/o3, excellent for deep research and creative reasoning
- gemini-mcp-tool: useful in certain cases
- chrome-devtools: essential for frontend flow testing and console inspection
- Context7: helps with unfamiliar libraries
I also configure sub-agents and custom commands for tasks like build analysis and routine operations. An essential part of the setup is a well-written CLAUDE.md / AGENTS.md. The agent generates the base, but I add behavioral rules and tool descriptions on top. For Angular, I include specific instructions; for backend and infrastructure, separate, more nuanced ones.
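To make this concrete, here is a trimmed-down sketch of what such a file can look like. The specific rules and sections below are illustrative examples of the behavioral rules and tool descriptions I layer on top of the generated base, not a complete config:

```markdown
# CLAUDE.md

## Behavioral rules
- Make small, reviewable changes; never refactor unrelated code in the same step.
- Run the test suite after every change and report failures verbatim.
- Ask before introducing new dependencies.

## Tools
- codex-mcp-tool: use for deep research and second opinions on complex flows.
- chrome-devtools: verify frontend changes in a real browser before marking a task done.

## Angular
- Keep component logic thin; move business logic into services.
- Follow the existing module and naming conventions in this repo.
```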
Where AI Actually Helps
Research and prototyping.
This is where AI is simply unbeatable. Figuring out an unfamiliar codebase, generating boilerplate, writing documentation — all the tasks that used to take hours can now be done in minutes. As a bonus, AI can provide alternative perspectives on the problem. There have been so many times when it suggested a solution I wouldn’t have even considered.
CLI and infrastructure.
Opus and Sonnet handle the console better than many engineers. Routine terminal operations, Kubernetes diagnostics, and investigating issues in logs — if you know how to use them effectively, your productivity can multiply several times. This isn’t an exaggeration; I’m speaking from experience here.
Refactoring.
This is where it gets interesting. AI can handle it, but it requires strict oversight. Your role is that of a senior developer who keeps a junior colleague in check: small steps, a clear plan, and testing after each stage. If you give it free rein, you'll end up with spaghetti code with broken architecture.
How I Tackle Complex Coding Tasks with AI
I start with deep research using Claude or Gemini, then run an interview-style session with the model to clear up any details. After that, I let the agent analyze the task and the codebase and ask it to draft an MD document outlining how it would approach the implementation. Then I review the plan and make any necessary adjustments.
Once the document is solid, I switch the agent into planning mode. If the task is complex or touches the architecture, I refine the plan through MCP using Codex or Gemini. After the implementation, I review the code and give instructions for corrections. The last step is writing automated tests and running them.
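The MD document the agent drafts does not need to be elaborate. A minimal outline, shown here as one possible shape rather than a fixed format, might be:

```markdown
# Implementation plan: <task name>

## Context
What the task changes and which parts of the codebase it touches.

## Approach
Numbered steps, each small enough to review and test independently.

## Risks / open questions
Anything to resolve via Codex or Gemini before implementation starts.

## Verification
Which automated tests will be added or extended, and how success is measured.
```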
When AI Can Make Your Job Harder
The main issue is that the model wants to please you. If it can’t fix the code, it might “fix” the tests just to make everything green. Or it may choose a sloppy workaround simply to say the task is done. I catch these things only through code reviews and automated tests — I’ve started adding more unit and integration tests to track progress and regressions.
With large codebases, the situation is different. Limited context — plus the fact that expanding context is expensive — often leads to low-quality code, duplicated logic, and broken architecture.
I believe that over the next 1–2 years, AI will mostly serve as an assistant for knowledge discovery and structuring. Writing production-ready code will remain semi-automated and still require significant involvement from engineers.
Emerging Tools and Documentation Strategies
The reality is, the industry isn’t standing still. New tools are emerging that tackle this challenge from different angles. Google is experimenting with CodeWiki, a structured way to represent an entire codebase. Devin.ai goes even further, generating a full project wiki and knowledge graph.
The idea is straightforward: instead of making the agent re-digest thousands of lines of code every time, you give it a structured, high-level understanding of the architecture and how all the pieces fit together.
I’ve also started generating full codebase documentation in my projects — it gives the agent the context it needs without re-digesting everything from scratch. But even with these tools, the realistic outlook for the next 1–2 years remains unchanged: AI will be most useful for searching, organizing, and interpreting knowledge. Fully autonomous code generation for enterprise systems is still science fiction. Developers stay in the loop — and that’s a good thing.
When it comes to Security and IP, the rule is simple: zero trust. Any AI-generated code must undergo standard security tools and manual review. For projects where IP sensitivity is an issue, large models aren’t an option to begin with. And what about local models, you might ask? Well, they’re simply too weak for real production-grade tasks.
Making AI Work for You (Even When It Feels Like It Isn’t Worth It)
The biggest mistake is failing to adapt your processes for AI. Coding agents are essentially new teammates, and you have to adjust to working with them.
1. Type safety.
Set up TypeScript and lint so they simply don’t allow low-quality or inconsistent code. In TS/JS, contracts must be explicit so the agent can follow them. Invalid states should be impossible at the type level. Without this, AI will miss edge cases and allow incorrect logic to slip through.
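As a sketch of what “invalid states impossible at the type level” means in practice, a discriminated union forces both the agent and the compiler to handle every case explicitly (the `RequestState` type here is a generic illustration, not from a specific project):

```typescript
// A request can never be "loaded" without data or "failed" without an
// error: each state carries exactly the fields valid for that state.
type RequestState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "loaded"; data: T }
  | { status: "failed"; error: string };

function describe<T>(state: RequestState<T>): string {
  switch (state.status) {
    case "idle":
      return "not started";
    case "loading":
      return "in progress";
    case "loaded":
      return `done (${JSON.stringify(state.data)})`;
    case "failed":
      return `error: ${state.error}`;
    // No default branch: if a new status is added to the union, the
    // compiler flags this switch as non-exhaustive instead of letting
    // incorrect logic slip through silently.
  }
}
```

With a contract like this, an agent cannot “forget” the error case or attach `data` to a failed request; the code simply will not compile.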
2. Test-driven mindset.
Without comprehensive test coverage, AI-assisted development turns into a game of “find the bug.” Automated tests aren’t just for validating code — they’re your primary quality-control system for AI output. Edge cases, invalid states, boundary conditions — everything AI tends to skip must be caught by tests.
3. High-quality input.
Garbage in, garbage out — a truth as old as time. Provide detailed task descriptions, clear acceptance criteria, and behavior examples. Keep the feedback loop tight; long cycles create long chains of errors. Without this, AI will generate more problems than it solves.
Conclusion
As McKinsey notes, AI doesn’t just change workflows — it reshapes the very structure of work within organizations. Workforce planning becomes less about headcount and more about skills.
The reality for developers is nuanced. AI can accelerate research, prototyping, and repetitive coding tasks, but it also introduces new risks: brittle code, missed edge cases, and hidden errors. The key is to treat AI as a teammate — powerful, but requiring guidance, oversight, and solid processes such as type safety, test-driven development, and high-quality input.
In other words, AI isn’t here to replace all developers; it’s here to shift the focus from “how many people” to “which skills deliver most value.” Smart teams leverage AI to structure knowledge, speed up experimentation, and handle tedious tasks — while developers remain central to building reliable, maintainable, and secure software.
