Posted 12/03/2025
“Vibecoding” is essentially AI-assisted coding pushed to the extreme: letting the AI generate entire features, modules, or files instead of small code snippets. Most of the issues I describe here apply not only to vibecoding, but to AI-assisted development in general.
I decided to try vibecoding myself because it’s trending everywhere and often praised for its speed. To test its limits, I built a small offline Angular application composed of two modules:
Both features share a common IndexedDB storage layer.
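A shared storage layer of this kind can be sketched as a small interface that both features depend on. All names here (`AppStorage`, `InMemoryStorage`, the `"notes"` store) are hypothetical, not the project’s actual code, and an in-memory map stands in for IndexedDB so the sketch is self-contained:

```typescript
// Hypothetical sketch of a shared storage abstraction.
// In the real app the implementation would wrap IndexedDB; here an
// in-memory Map stands in so the example runs anywhere.
interface AppStorage {
  put(store: string, key: string, value: unknown): Promise<void>;
  get<T>(store: string, key: string): Promise<T | undefined>;
}

class InMemoryStorage implements AppStorage {
  // One nested map per named object store.
  private stores = new Map<string, Map<string, unknown>>();

  async put(store: string, key: string, value: unknown): Promise<void> {
    if (!this.stores.has(store)) this.stores.set(store, new Map());
    this.stores.get(store)!.set(key, value);
  }

  async get<T>(store: string, key: string): Promise<T | undefined> {
    return this.stores.get(store)?.get(key) as T | undefined;
  }
}
```

Keeping both features behind one interface like this is precisely the kind of cross-cutting design decision that, as described below, the AI struggled with.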
I immediately understood the “wow moment.” In a few minutes, the AI generated code that actually ran. But anyone with engineering experience can also see how fragile that code is and how quickly it becomes unmaintainable.
AI cannot run or test the application. It only predicts code that resembles something correct.
Typical issues:
LLM-generated code always requires manual validation and debugging.
Without strict guardrails, AI often produces:
Even with explicit instructions, the AI frequently drifts away from best practices. It knows them, but does not reliably apply them.
Examples from this project:
Even with instructions, code still requires refactoring.
LLMs know best practices but do not consistently apply them.
The application required shared logic for accessing IndexedDB. On the first attempt, the AI generated two separate services containing almost identical IndexedDB logic, both managing the same database operations.
This immediately caused:
When I asked it to fix bugs, it tried correcting each service independently, producing more duplication and more conflicts. I eventually had to force it to create a single shared data-access service — and even then, the structure was flawed.
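The consolidation I had to force can be sketched as one generic data-access class that each feature instantiates with its own store name, instead of two copies of the same logic. All identifiers are illustrative, and an in-memory map replaces the IndexedDB plumbing to keep the sketch self-contained:

```typescript
// Hypothetical sketch: a single generic data-access class shared by both
// features, rather than two near-identical services. The backing store is
// an in-memory Map here; in the app it would be one shared IndexedDB
// connection managed in a single place.
class ObjectStore<T> {
  // One shared backing "database" for every store instance.
  private static db = new Map<string, Map<string, unknown>>();

  constructor(private storeName: string) {
    if (!ObjectStore.db.has(storeName)) {
      ObjectStore.db.set(storeName, new Map());
    }
  }

  async save(key: string, value: T): Promise<void> {
    ObjectStore.db.get(this.storeName)!.set(key, value);
  }

  async load(key: string): Promise<T | undefined> {
    return ObjectStore.db.get(this.storeName)!.get(key) as T | undefined;
  }
}

// Each feature gets its own store but shares the same data-access logic.
const notesStore = new ObjectStore<{ text: string }>("notes");
const tasksStore = new ObjectStore<{ done: boolean }>("tasks");
```

Centralizing the logic this way means a bug fix lands in exactly one place, which is what the two duplicated services made impossible.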
AI consistently struggles with shared architecture, cross-cutting concerns, and designing reusable abstractions.
This part is non-negotiable. When deploying to Azure, the AI recommended configurations that introduced serious security risks.
Examples of what AI may generate:
Security, DevOps, and infrastructure configuration cannot be delegated to AI.
For styling, small UI tweaks, and simple code generation, AI is extremely productive. However, once complexity increases, the quality drops sharply.
AI is excellent at scaffolding, but humans still handle:
AI-generated code is equivalent to a junior developer’s first draft—useful, but never production-grade by itself.
On the positive side, the entire app was built three to four times faster thanks to AI. Its help with CSS, layout design, and quick prototypes was extremely valuable.
For:
AI is genuinely great. It accelerates development dramatically and reduces busywork, as long as you accept that it is not shaping a long-term architecture.
By default, Copilot rewrites entire files, which leads to:
Asking for diff-based output avoids almost all of these issues.
Good prompts follow a simple structure:
The clearer your structure, the better the results.
For more details: General “Prompt Structure” for Clear Results
The most effective workflow today looks like this:
This maximizes speed while maintaining quality.
Vibecoding delivers impressive speed, removes repetitive boilerplate work, and dramatically accelerates early prototyping. But today’s AI systems lack architectural awareness, produce inconsistent abstractions, and frequently generate code that “looks right” while failing at runtime. They amplify productivity only when paired with experience.
AI is an excellent accelerator — not an engineer.
The real value comes from combining AI-generated drafts with human-driven design, validation, and refinement. Used strategically, it can make teams faster. Used without oversight, it can make codebases fragile.
The future of software development is not AI replacing developers, but developers who know when to let AI accelerate and when to take full control.