AI Can Write Code. That Doesn’t Mean It Can Build Software.
What I've learned about AI, trust, and the future of software engineering ... so far
“AI is going to replace software engineers.”
“Software engineering as a career will soon be obsolete.”
“AI is going to replace junior developers.”
Yapper, yapper, yaaaapper.
These are all statements we’ve heard repeatedly over the last two years, following the rapid innovations and developments in the AI sector. And to a large extent, I actually believed them.
I remember the first time I saw AI generate code. I thought, “OMG, I’m screwed. AI is going to replace me.”
Anyway, fast forward to today. If you’ve been paying attention to the industry, you’ll realise that AI has not done such a good job of “replacing” software engineers across all levels.
Let’s take a quick look at a few shockers we’ve witnessed in recent years:
Replit’s AI coding assistant deleting an entire production database
McDonald’s hiring chatbot exposing data of 64 million applicants (password: 123456)
A biased lending AI that cost a bank $2.5 million in fines
The ChatGPT diet that landed a man in the hospital
What these incidents reveal is that, although AI is disruptive, in its current state it can be disruptive in exactly the ways companies don’t want.
It’s not ready. At least not 100% yet.
There are still too many edge cases and scenarios that make deploying AI without multiple layers of human intervention extremely risky.
How does this relate to software engineering jobs?
All of the incidents above demonstrate one thing: AI, as it stands in 2025, cannot be fully trusted.
Humans (software engineers) are still critically needed to ensure AI systems don’t embarrass companies, violate regulations, or cause real-world harm. The cost of failure is simply too high, both for tech companies and for organisations outside the tech sector.
Can we really vibe-code a SaaS in 30 minutes?
Another popular belief this year is vibe coding.
Don’t get me wrong, I’m not against vibe coding. But I do think the term, at face value, made non-technical folks believe they could build the next Facebook using only natural language.
Internal screaming.
I think this belief is what many people were referring to when they claimed AI would eliminate software engineering jobs.
So, can we vibe-code a SaaS in 30 minutes?
The honest answer is: it depends.
1. The scale of the application
The more complex the app (in terms of features, expected users, geographic distribution, and security requirements), the harder it becomes to vibe-code.
A simple one-page app? Absolutely doable.
The next Facebook in 30 minutes?
For lack of a better word… crazy.
2. The experience level of the developer
This is the biggest factor.
I wouldn’t bet on someone who’s watched three Python videos on YouTube to vibe-code Facebook. But I would put money on a seasoned engineer/CTO/senior/founding engineer/PhD/MSc/BSc (I’ve run out of titles and qualifications, but you get the point).
Vibe coding works best when you already know what to do, but you’re just too lazy, too busy, or too time-constrained to do it manually.
AI-generated code still needs to be thoroughly inspected to ensure it’s correct, optimised, secure, and production-ready.
3. The AI tools being used
Some LLMs are better at generating code than others. Some integrate deeply with terminals, IDEs, and deployment pipelines, making semi-autonomous development possible.
Vibe coding requires knowing which tools to use, how to combine them, and where their limits are.
Why humans still matter
Human-in-the-loop is one of the most practical approaches to AI-assisted software engineering today.
It means using AI to build software while keeping critical decision-making in human hands.
Right now, a human developer should always be responsible for deciding:
How the software actually solves user problems
What security risks exist, and how they’re mitigated
Which architectural decisions are appropriate
What criteria must be met before real users can access the system
These are not decisions we should hand over entirely to AI.
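One lightweight way to put this into practice is an explicit approval gate: AI-proposed changes land in a queue, and nothing ships without a human decision. The sketch below is purely illustrative; none of the class or field names come from a real framework, and the risk labels are an assumption about how a team might triage.

```python
from dataclasses import dataclass, field

# A minimal sketch of a human-in-the-loop approval gate.
# All names here are illustrative, not from any real framework.

@dataclass
class Change:
    description: str
    risk: str  # "low", "medium", or "high" (assumed triage labels)

@dataclass
class ReviewQueue:
    pending: list[Change] = field(default_factory=list)
    approved: list[Change] = field(default_factory=list)

    def submit(self, change: Change) -> None:
        """AI-generated changes land here; they never ship directly."""
        self.pending.append(change)

    def review(self) -> None:
        """A human reviewer decides what ships. Here only low-risk changes
        pass automatically; everything else waits for explicit sign-off."""
        still_pending = []
        for change in self.pending:
            if change.risk == "low":
                self.approved.append(change)
            else:
                still_pending.append(change)  # needs a human decision
        self.pending = still_pending

queue = ReviewQueue()
queue.submit(Change("Rename a variable", risk="low"))
queue.submit(Change("Drop the users table", risk="high"))
queue.review()
print(len(queue.approved), len(queue.pending))  # 1 approved, 1 held back
```

The point is not the data structure; it’s that the dangerous change (“drop the users table”, the Replit scenario above) never reaches production without a person in the path.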
What’s next for software developers?
AI is not going to replace software engineers, but it will change how we work.
It introduces a new set of skills developers must adopt to work effectively with AI agents.
Prompt engineering
At one point, people thought this would become a standalone career.
While that may not be the case, clear communication with AI is now a core skill:
writing structured prompts, using markdown headings, defining constraints, understanding context windows, and being explicit about what the AI should and should not do.
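To make that concrete, here’s a minimal sketch of what a structured prompt can look like, assembled from plain strings. The section names and example task are my own illustration, not a prescribed template.

```python
# A minimal sketch of a structured, markdown-formatted prompt.
# The section headings and wording are illustrative; adapt to your workflow.

def build_prompt(task: str, constraints: list[str], forbidden: list[str]) -> str:
    """Assemble a prompt with explicit do's and don'ts."""
    lines = [
        "## Task",
        task,
        "",
        "## Constraints",
        *[f"- {c}" for c in constraints],
        "",
        "## Do NOT",
        *[f"- {f}" for f in forbidden],
    ]
    return "\n".join(lines)

prompt = build_prompt(
    task="Write a function that validates email addresses.",
    constraints=["Use only the standard library", "Include type hints"],
    forbidden=["Do not call external APIs", "Do not invent new dependencies"],
)
print(prompt)
```

Separating the task, the constraints, and the explicit “do not” list is the whole trick: the model gets far less room to guess.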
System design
If you’ve tried to vibe-code an app, you’ll quickly realise that dumping everything into one massive prompt doesn’t work.
System design remains the developer’s responsibility. You must break complex problems into smaller, well-defined tasks for AI agents and sub-agents.
This is not something we can outsource to AI.
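What that decomposition can look like in miniature: each sub-task gets an explicit “done” specification and declared dependencies, and the ordering falls out of a topological sort. The task names and specs below are hypothetical examples I made up for illustration.

```python
# A sketch of breaking one vague request ("build me auth") into
# well-defined sub-tasks, each small enough to hand to an AI agent.
# Task names and specs are hypothetical.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    spec: str               # what "done" means, stated explicitly
    depends_on: list[str]

def plan(tasks: list[Task]) -> list[Task]:
    """Order tasks so every dependency comes first (topological sort)."""
    ordered: list[Task] = []
    done: set[str] = set()
    remaining = list(tasks)
    while remaining:
        progressed = False
        for t in list(remaining):
            if all(d in done for d in t.depends_on):
                ordered.append(t)
                done.add(t.name)
                remaining.remove(t)
                progressed = True
        if not progressed:
            raise ValueError("circular dependency between tasks")
    return ordered

tasks = [
    Task("api", "REST endpoints for login/logout", depends_on=["schema"]),
    Task("schema", "User table with hashed passwords", depends_on=[]),
    Task("ui", "Login form calling the API", depends_on=["api"]),
]
print([t.name for t in plan(tasks)])  # ['schema', 'api', 'ui']
```

Deciding on that graph — what the tasks are, what “done” means for each, and what depends on what — is the part that stays human.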
Quality assurance
From my perspective, QA is one of the safest roles in the AI era.
Do we really expect AI to reliably test AI?
More internal screaming.
Being able to validate, test, and reason about AI-generated code is now a critical skill for all developers.
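In practice, that means treating AI output as untrusted until it passes checks a human wrote. Below, `ai_generated_slugify` is a stand-in for code an assistant produced (I wrote it myself for the example); the test cases are the human-authored contract it must satisfy.

```python
# A sketch of treating AI output as untrusted: the generated function
# must pass human-written tests before it is accepted.
# 'ai_generated_slugify' is a placeholder for assistant-produced code.

def ai_generated_slugify(title: str) -> str:
    # Pretend an AI wrote this implementation.
    return "-".join(title.lower().split())

def validate(func) -> list[str]:
    """Human-authored checks; returns a list of failure messages."""
    failures = []
    cases = [
        ("Hello World", "hello-world"),
        ("  spaced  out  ", "spaced-out"),
        ("already-fine", "already-fine"),
    ]
    for given, expected in cases:
        got = func(given)
        if got != expected:
            failures.append(f"{given!r}: expected {expected!r}, got {got!r}")
    return failures

failures = validate(ai_generated_slugify)
print("PASS" if not failures else failures)
```

The test cases encode intent the AI never saw; if the generated code only passes tests the same AI also generated, you haven’t really tested anything.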
Integration
Modern applications depend on multiple external services and APIs. These integrations must be designed carefully and not left open-ended for AI to guess.
Conclusion
AI won’t replace software engineers, but engineers who refuse to adapt may struggle.
If you’re already a developer, learn how to use AI to increase your productivity, not replace your thinking.
If you’re just starting, focus on building strong foundations while recognising that the industry has shifted. Don’t get stuck learning things that AI can already do effortlessly. Instead, learn why things work, then apply AI to build faster and better.
Understand the shift.
Join the wave.
That’s all for now.