04 / Services / AI Web Apps
Service · Build

AI Web Apps.

Small bespoke tools — quote generators, lead qualifiers, internal dashboards, content pipelines — built for one client, owned outright, not rented from a SaaS dashboard.

A project-based engagement for SMEs and consultancies with a specific repetitive task that AI can sit inside. We replace the spreadsheet, the Zap, the third intern, or the SaaS subscription that doesn’t quite fit — with a small, well-built web application designed for exactly your process and nobody else’s.

Engagement
Fixed-scope project · code transferred at handover
Investment
From £14,500 · staged
Timeline
8–14 weeks · prototype in week 2
The problem

Most internal tools die in Excel.

There’s a process in your business that costs a person two hours a day, runs on a spreadsheet with a macro nobody remembers writing, and is one resignation away from a fire. Everyone knows about it. Nobody has the bandwidth to fix it. Every six months somebody floats a SaaS that’s “basically what we need.” It isn’t. You pay for it anyway and Susan keeps using her spreadsheet.

SaaS exists for the average customer. Your process isn’t the average customer — that’s why a generalist tool keeps not fitting. The cost isn’t the subscription; it’s the configuration tax, the awkward workarounds, the human in the middle reformatting CSVs at half-past-five on a Thursday.

Custom software used to be the preserve of companies with engineering teams. It isn’t any more. A small, well-aimed web app — one form, one workflow, one AI step that does the boring part — can be built in a couple of months for less than two years of the SaaS licence it replaces. And it does exactly what your process actually does, not what the average customer’s does.

What you get

Four layers, assembled into a tool.

Every build moves through the four layers below. We agree the surface area inside each one in week 0; nothing is added halfway through without a written change.

Interface is the bit your staff or customers actually touch — built in Next.js or Astro depending on whether the tool is interactive or document-shaped. No design system bought off the shelf, no Bootstrap, no Material. The UI matches your existing brand and is sized for the actual screens it’ll be used on (often a second monitor at 1280, not a designer’s 27”).

Data is a single Postgres database, migration files in version control, and a clean schema you can read in an afternoon. Every record carries audit metadata — who, when, from where — and every destructive operation soft-deletes. Backups run nightly to an off-host store. If the app ever has to die, your data leaves as a clean SQL dump.
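A sketch of the audit-and-soft-delete pattern described above (the real schema lives in SQL migration files; these TypeScript names are illustrative, not a client codebase):

```typescript
// Illustrative shape only. Every row carries who / when / from where,
// and "destructive" operations set a timestamp instead of removing data.

type AuditedRow = {
  id: number;
  createdBy: string;        // who
  createdAt: string;        // when (ISO timestamp)
  createdFrom: string;      // from where (IP or hostname)
  deletedAt: string | null; // soft delete marker; the row is never dropped
};

// Mark the row as deleted rather than removing it, so history survives.
function softDelete(row: AuditedRow, when: string): AuditedRow {
  return { ...row, deletedAt: when };
}

function isLive(row: AuditedRow): boolean {
  return row.deletedAt === null;
}
```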

AI workload is the deliberate bit: which step in the workflow benefits from a model, which model, with what context, on what budget. I write the prompts as code rather than configuration; the evaluation harness lives in the repo; the model is a configuration value that can be swapped without a redeploy. Where retrieval makes sense, vector storage rides on pgvector inside the same Postgres.
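A minimal sketch of what “prompts as code, model as configuration” means in practice (function names and defaults here are illustrative, not a client codebase):

```typescript
// Illustrative sketch only: the model id is read from the environment
// at runtime, so swapping models is a config change, not a redeploy.

type AiConfig = {
  model: string;
  maxOutputTokens: number;
};

function loadAiConfig(env: Record<string, string | undefined>): AiConfig {
  return {
    model: env.AI_MODEL ?? "small-default-model", // placeholder default
    maxOutputTokens: Number(env.AI_MAX_TOKENS ?? 1024),
  };
}

// Prompts live in version control as typed functions, so they are
// code-reviewed and covered by the evaluation harness like anything else.
function draftReplyPrompt(enquiry: string, tone: "formal" | "friendly"): string {
  return [
    `You are drafting a ${tone} first reply to a customer enquiry.`,
    `Enquiry:\n${enquiry}`,
    `Reply in under 150 words and flag anything that needs a human decision.`,
  ].join("\n\n");
}
```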

Operations is everything that keeps the thing alive after I’m gone. Auth (your Google Workspace, or magic links, or a real password — whichever fits), staging and production environments, log aggregation, an admin view of the AI’s actual costs, and a runbook short enough that the next developer can read it on a train.

Who this is for

Built for processes you can describe in a paragraph.

If you can write down what the tool should do, end to end, on the back of a beer mat — it’s probably buildable. Three signals of fit.

— A / Teams drowning in spreadsheets

Operators with a critical workflow living in a workbook.

Quotes generated by hand. Stock allocated by hand. Inbound enquiries triaged by hand. You know which spreadsheet I mean. We replace it with a small web app that does the boring part — data entry, lookup, formatting, the AI-assisted draft — and gives a manager a real view of what’s actually happening.

— B / Consultancies productising a service

Experts who’ve realised half their delivery is the same job, repeated.

Architects writing planning rationales. Solicitors drafting standard letters. Accountants formatting client reports. The AI doesn’t replace the expert; it does the 70% of the draft that’s always the same, so the expert spends their time on the 30% that’s actually expensive to think about.

— C / Pipeline owners

Businesses with messy inbound data and a known good output.

PDF invoices that need extracting. Tender documents that need scoring. Inbound emails that need categorising and routing. The shape of the work is: ugly input, clean output, repeated thousands of times. AI is dramatically good at exactly this kind of job — if the harness around it is built properly.
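A sketch of what “built properly” means for that harness: the model’s output is validated before anything downstream sees it, and anything malformed goes to human review instead of into your books. Field names here are illustrative:

```typescript
// Illustrative harness sketch: never trust raw model output. Validate
// the shape; on failure, return null so the caller can queue the item
// for human review rather than passing garbage downstream.

type Invoice = { supplier: string; total: number; currency: string };

function parseInvoiceOutput(raw: string): Invoice | null {
  try {
    const data = JSON.parse(raw);
    if (
      typeof data.supplier === "string" &&
      typeof data.total === "number" &&
      data.total >= 0 &&
      typeof data.currency === "string"
    ) {
      return { supplier: data.supplier, total: data.total, currency: data.currency };
    }
    return null; // wrong shape: human review, not silent acceptance
  } catch {
    return null; // model returned non-JSON: same path
  }
}
```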

How it works

Prototype on real data, then build the rest.

Week 0–1 · Discovery

Process & data audit.

Two calls and a written process map: every step of the current workflow, who does it, what tools they use, where it breaks. We agree the scope, the success metric, and the model budget before anything is built.

Week 1–2 · Prototype

Working slice.

A single end-to-end happy path on real anonymised data, on a private staging URL, by the end of week two. If the AI step doesn’t earn its keep here, the project pauses for a rescope — before the bulk of the budget is committed.

Weeks 2–9 · Build

App & integration.

The rest of the workflow, auth, admin views, the cost dashboard, the failure modes. Weekly Loom walkthroughs on the staging URL; no agency standups, no Slack channel rot. Your team uses it in parallel and tells me what’s wrong.

Weeks 9–14 · Handover

Repo & runbook.

Production deploy, training session with the team who’ll use it, the codebase transferred to a repo you own outright, and a runbook a new developer can read in 90 minutes. 60 days of post-launch tuning included.

A client said
“It replaced a job we’d been quietly recruiting for. Eighteen months on, no SaaS subscription has caught up.”

Helen Fairhurst

Operations Director, Attivo Care

Enquiry triage tool · 2024
Questions

Questions you’ll want answered before signing anything.

If yours isn’t here, send it to rich@flexiweb.digital and I’ll reply within a working day.

How is this different from buying an off-the-shelf SaaS?

SaaS is the right answer for a process that fits the average customer. Pipedrive, Monday, Zapier — all excellent at what they do. Custom software is the right answer when the configuration tax of bending a SaaS into your shape outweighs the cost of building something that fits properly.

The rule of thumb I use: if you’re spending more than £400 a month on a SaaS to do this job, and a person is still reformatting CSVs around it, a custom build pays back inside 18–24 months. If you’re paying £40 a month and nobody’s touching the output, stay where you are.
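The arithmetic behind that rule of thumb, written out with illustrative numbers (the admin hours and hourly rate are assumptions for the example, not a quote):

```typescript
// Worked version of the payback rule of thumb. All inputs illustrative.

function paybackMonths(
  buildCost: number,          // one-off project cost
  saasPerMonth: number,       // subscription being replaced
  adminHoursPerMonth: number, // the human reformatting CSVs around it
  hourlyRate: number,
  runningPerMonth: number,    // hosting + AI bill for the custom tool
): number {
  const monthlySaving =
    saasPerMonth + adminHoursPerMonth * hourlyRate - runningPerMonth;
  return buildCost / monthlySaving;
}

// A £14,500 build replacing a £400/mo SaaS plus ~20 admin hours at
// £25/hr, with ~£150/mo to run the custom tool:
const months = paybackMonths(14_500, 400, 20, 25, 150);
// ≈ 19.3 months — inside the 18–24 month window above.
```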

Do I own the code outright?

Yes. The codebase, the database, the prompts, the evaluation set, the deploy infrastructure — all yours, transferred to a repository under your account at handover. No licence, no per-seat charge, no vendor lock-in. If you decide to take the project to another developer or in-house team three years from now, you can.

Anything I use that’s open-source stays open-source under its own licence. Anything I write for you is yours.

What does running the AI cost, month to month?

It depends entirely on how often the AI step actually fires — a tool that processes 50 invoices a day is different from one that drafts ten proposals a week. For typical SME-scale workloads, the AI bill lands between £40 and £300 a month, on top of about £25 for hosting and the database.

The dashboard built in week 9 shows you that figure live, broken down by workflow step. Per-route token budgets mean a runaway loop can’t quietly cost you four figures while you’re on holiday.
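A sketch of that budget mechanism (route names and limits are illustrative): each workflow step has its own token allowance, and once it is spent the call is refused rather than billed.

```typescript
// Illustrative per-route token budget: a runaway loop hits its cap
// and gets refused instead of quietly running up the bill.

type Budget = { limitTokens: number; usedTokens: number };

const budgets = new Map<string, Budget>([
  ["invoice-extract", { limitTokens: 2_000_000, usedTokens: 0 }],
  ["proposal-draft", { limitTokens: 500_000, usedTokens: 0 }],
]);

// Returns true only if the spend fits the route's remaining allowance.
function spendTokens(route: string, tokens: number): boolean {
  const b = budgets.get(route);
  if (!b) return false;                                     // unknown route: refuse
  if (b.usedTokens + tokens > b.limitTokens) return false;  // budget exhausted
  b.usedTokens += tokens;
  return true;
}
```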

What if the AI provider changes their pricing or shuts down?

The AI provider is a configuration value, not a baked-in dependency. Today most builds default to a cheap Anthropic or OpenAI model with the other as the fallback. If either doubles their prices or vanishes, switching — or moving to an open-weight model on a managed UK host — is a configuration change measured in hours, not a rebuild.

Every previous client of mine has moved models at least once since launch, usually to something cheaper. The architecture is designed for it.
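A sketch of what “a configuration value, not a baked-in dependency” looks like (provider names and the call shape are illustrative): providers are a configured list, tried in order, so a price change or outage is a config edit.

```typescript
// Illustrative fallback chain: providers come from configuration and
// are tried in order; the first one that answers wins.

type CallModel = (prompt: string) => string;
type Provider = { name: string; call: CallModel };

function callWithFallback(
  providers: Provider[],
  prompt: string,
): { provider: string; text: string } {
  let lastError: unknown = new Error("no providers configured");
  for (const p of providers) {
    try {
      return { provider: p.name, text: p.call(prompt) };
    } catch (err) {
      lastError = err; // try the next provider in the configured order
    }
  }
  throw lastError;
}
```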

How do you handle our data and security?

Data lives in a managed Postgres instance on a UK-hosted provider (typically Supabase London or a private Hetzner box, depending on the sensitivity); never on my laptop, never on a US cloud unless you’ve explicitly chosen one. Backups are encrypted, off-host, and tested.

Calls to the AI provider go through accounts in your name with zero-retention settings enabled where available, so your content isn’t used to train anyone else’s model. Where the data is genuinely sensitive — regulated, HR, financial — we run open-weight models on infrastructure you own, and nothing leaves your boundary at all.

A working prototype, on your actual data, in a fortnight.

Send me a description of the process you’d most like to stop doing by hand and an anonymised sample of the data it touches. I’ll build a single end-to-end slice of the tool and send back a private URL within fourteen days — before either of us commits to a full project.