Peter Pang at CREAO put words to something we live every day: the difference between adding AI to an existing process and rebuilding the process from the ground up around AI agents.
Peter Pang, CTO at CREAO, recently wrote about how they went from using AI to becoming AI-first. We recognize the journey he describes almost exactly, but we see it from the other side of the table.
We're a consultancy. We don't deliver to ourselves. We deliver to smaller customers and simpler operations, but also to customers in regulated industries — banking and real estate — where requirements on traceability, security, and quality are non-negotiable. That makes the AI-first journey different. And in some ways harder.
Everyone says they use AI. Few have changed how they work.
Peter's point that "AI-first is not the same as using AI" is the most important insight. Most companies we meet have added Copilot to VS Code. Maybe a GPT subscription. Someone has tried Cursor. But the process, the organization, the roles — it all looks the same as before. AI is an add-on, not a foundation.
For us it's the opposite. AI agents are the primary builders. Humans instruct, review, and accept accountability. It sounds like a small reformulation, but it changes everything. Who you hire. How you estimate. How you price. How you think about quality.
Architect and operator, not junior and senior
Peter talks about the roles Architect and Operator. We see the same. The old division between junior and senior developers loses its meaning. What matters now is whether you can design systems that agents understand, and whether you can review what they produce.
We call it "harness engineering" — the same term Peter uses. The job isn't to write code. The job is to build the environment, the instructions, and the quality controls that allow an AI agent to deliver good work. That's a completely different skill. And it's the skill we sell.
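What a harness looks like in practice varies, but the idea can be shown in a few lines. This is a hypothetical sketch, not our actual tooling: the names (`Harness`, `toy_agent`, the gate checks) are invented for illustration. The point is that the instructions, the context, and the acceptance checks live outside the agent, and the harness, not the agent, decides whether work is accepted.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a "harness" bundles the environment an agent
# works in: instructions, domain context, and the quality gates its
# output must pass before anyone accepts it.

@dataclass
class Harness:
    instructions: str                          # how the agent should work
    context: dict                              # domain knowledge it can draw on
    gates: list = field(default_factory=list)  # (name, check) pairs output must pass

    def run(self, agent, task):
        prompt = f"{self.instructions}\n\nContext: {self.context}\n\nTask: {task}"
        output = agent(prompt)
        failures = [name for name, check in self.gates if not check(output)]
        # The harness, not the agent, decides whether the work is accepted.
        return {"output": output, "accepted": not failures, "failed": failures}

# A stand-in agent; in practice this would call a real coding agent.
def toy_agent(prompt):
    return "def add(a, b):\n    return a + b\n"

harness = Harness(
    instructions="Write typed, tested Python.",
    context={"domain": "payments"},
    gates=[
        ("compiles", lambda code: compile(code, "<agent>", "exec") is not None),
        ("no_todos", lambda code: "TODO" not in code),
    ],
)
result = harness.run(toy_agent, "Implement add(a, b).")
```

Swap the toy agent for a real one and the gates for a real test suite, and the shape stays the same. That shape is the skill being sold.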
The consulting perspective makes it harder, and more interesting
Peter and CREAO build internally. He can choose monorepo, standardize tooling, control the whole chain. We can't. We step into customers' existing environments. Legacy systems. Proprietary APIs. Organizations that sometimes don't even have their source code under version control.
The difference lies in the process. Agents can quickly map existing systems, identify patterns, generate code and tests. But they need context. Good context. That's where our experience comes in. Building that context, documenting the domain, creating instructions that agents can act on — that's the actual work.
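To make "building that context" concrete, here is a minimal, hypothetical sketch of assembling a context pack from a customer's existing repository. The file names and function are invented for illustration; the real work is curating what goes into the pack, not generating it.

```python
import tempfile
from pathlib import Path

# Hypothetical sketch: collect the documents and a file inventory an
# agent needs before it can act sensibly in an unfamiliar codebase.

def build_context_pack(repo: Path, max_chars: int = 4000) -> str:
    sections = []
    for name in ("README.md", "ARCHITECTURE.md", "docs/domain.md"):
        f = repo / name
        if f.exists():
            sections.append(f"## {name}\n{f.read_text()[:max_chars]}")
    # A file inventory gives the agent a map of the system.
    files = sorted(str(p.relative_to(repo)) for p in repo.rglob("*.py"))
    sections.append("## Files\n" + "\n".join(files))
    return "\n\n".join(sections)

# Demo against a throwaway repo so the sketch is runnable as-is.
tmp = Path(tempfile.mkdtemp())
(tmp / "README.md").write_text("Payment engine for Acme.")
(tmp / "core.py").write_text("def charge(): ...")
pack = build_context_pack(tmp)
```

The hard part is everything this sketch skips: deciding which documents exist, writing the ones that don't, and keeping the pack honest as the system changes.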
Building a bank with a handful of people
We're currently helping Zinova build the infrastructure for a credit market institution aiming for a banking license. The frontend is built in Lovable. The backend, security, and infrastructure are built by us. With AI agents.
Think about that for a second. A bank. With requirements for traceability, encryption, incident handling, and access control. Built by a team of a handful of people, where a substantial part of the code is produced by AI agents.
That wouldn't have been possible three years ago. Not even a year ago, really. It's possible now because we're not just using AI — we've built processes, tools, and routines around AI that make it possible to deliver at the quality required.
The system is the product, not the prompt
Peter writes that "prompts are disposable" and that the system is what you build. That matches our experience exactly. We have an internal framework, Fae, that handles how AI agents work against different customers' environments. Tenant isolation, credential management, knowledge context, quality controls.
But Fae isn't our moat. Frameworks can be copied. Our moat is the process. How we build domain knowledge per customer. How we structure instructions. How we validate AI-generated code intended for production environments. That takes months to build per customer and can't be generated by a prompt.
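One way to picture what "validating AI-generated code for production" means in a regulated setting is a traceability record attached to every change. This is a hypothetical sketch, not Fae's actual design: the function and field names are invented. The idea is that a change only counts as approved when a named human has reviewed it and every required check has passed.

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical sketch: a traceability record for an AI-generated change.
# Regulated customers need to answer: who produced this, who reviewed
# it, and which checks it passed.

def change_record(diff: str, agent_id: str, reviewer: str, checks: dict) -> dict:
    record = {
        "diff_sha256": hashlib.sha256(diff.encode()).hexdigest(),
        "produced_by": agent_id,   # which agent generated the change
        "reviewed_by": reviewer,   # the human who accepts accountability
        "checks": checks,          # e.g. {"tests": True, "lint": True}
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # No reviewer, or any failing check, blocks approval.
    record["approved"] = bool(reviewer) and all(checks.values())
    return record

rec = change_record(
    diff="--- a/auth.py\n+++ b/auth.py\n",
    agent_id="agent-7",
    reviewer="daniel",
    checks={"tests": True, "lint": True, "secrets_scan": True},
)
```

A record like this is cheap to produce; the moat is the process that decides which checks belong in `checks` for each customer, and who is allowed to appear as `reviewed_by`.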
Management time: same observation
Peter mentions that management time collapsed from 60% to 10%. We see the same pattern. When agents do the heavy lifting, the need for detailed follow-up disappears. You don't need daily standups (hooray!) to ask "how far have you gotten" when you can see it directly in the commit history or in knowledge graphs. You don't need three rounds of estimation meetings.
It frees up time for what actually creates value. Architectural decisions. Customer relationships. Strategic advice. Things AI can't do.
This isn't a trend. It's a shift.
Companies waiting to become AI-first — not AI-assisted, but AI-first — will struggle to compete with those who've already made the transition. Not because AI writes better code. But because the entire delivery model changes.
We're ten people (and growing — apply today!). We deliver to small customers as well as multinational giants. We build what traditional consultancies need 50-100 people for. Not because we work harder. Because we work differently.
That's the difference between using AI and being AI-first.

Written by
Daniel Berg