Concrete examples of the AI-first effect: a project estimated at 26-62 hours delivered in 4. And why maintenance projects become even more efficient than greenfield.
The proof is in the pudding, as they say. Even though we find what we can do as an AI-first company wildly fun, it ultimately comes down to the value it actually creates for our customers.
Not least in light of the news, picked up in an article in Breakit, that we will no longer charge for programming.
A concrete example
We recently did a project that, with traditional development, we'd have estimated at 26-62 hours. With our AI-first processes the delivery took 4 hours. That's not an incremental improvement. It's a different playing field, a different galaxy.
Better delivery, not just faster
It's not just about speed. In traditional projects, documentation, testing, and code quality are often deprioritized to maximize functionality within budget. These are sometimes called "on-a-rainy-day" activities. With AI-first we make room for all of that. When coding time shrinks dramatically, there's space for what actually makes a difference long-term.
Maintenance: even more efficient
Are we equally efficient in maintenance projects? The answer is yes, often even more efficient than in greenfield projects. In a maintenance project there's a codebase to start from, and often some form of documentation. With AI we can quickly form a picture of how things stand.
In a greenfield project there are more manual activities: workshops, design decisions, non-functional requirements. That work still requires humans doing the heavy lifting, and it consumes cognitive effort that AI can't yet shoulder.
AI-first throughout the entire process
Programming is the lion's share of what we've optimized. But the landscape changes fast. We apply our AI-first way of working to every part of the development process: requirements analysis, testing, documentation, deployment, bug-squashing, and user feedback.
How do we go AI-first across all the boxes?
- Requirements: Structure, analyze, find gaps
- Analysis and design: Architecture choices, pattern recognition, documentation
- Programming: AI writes, we review and accept accountability
- Test and QA: Test generation, edge cases, quality assurance
- Deployment: Pipelines, infrastructure, configuration
- Feedback and bugs: Log analysis, root causes, faster fixes
Human accountability stays at every step. AI accelerates, humans assure quality.
It's not a question of whether this becomes the standard. It's a question of when.

Written by Daniel Berg