03 / Services / AI-Powered Websites
Service · Build

AI-Powered Websites.

Conversational search, on-page assistants, and answer surfaces — built into the site, trained on your own content, judged on whether they earn their keep.

A project-based engagement for SMEs and consultancies who want the modern web stack rather than a “we use AI” sticker. The AI does specific commercial work — helps customers find the right product, answers questions in plain English, gets your pages cited by Google AI Overviews — or it doesn’t ship.

Engagement
Fixed-scope project · optional tuning retainer
Investment
From £9,500 · staged
Timeline
6–10 weeks · calibration in week 4
The problem

Most AI on websites is decoration.

A purple chat bubble in the bottom right. An “Ask our AI” banner above the fold. A blog post that ends “written with AI assistance.” None of it changes a number on a sales dashboard. None of it can answer a question that isn’t already on the page in plain English. It’s a sticker.

The honest version is harder and quieter. AI on a website is useful when it does one of three things: it lets a visitor search by what they mean rather than what they typed; it answers a real customer question, in your voice, from your actual product or service knowledge; or it shapes the page so that Google’s AI Overview, Perplexity, and ChatGPT cite you when somebody asks about your category.

That work is unglamorous — vector indexes, evaluation prompts, schema, retrieval pipelines — and almost none of it shows up as a glowing rectangle. Which is probably why so few people are actually doing it.

What you get

Four capabilities, one site.

Most builds use three of the four; almost none need all four on day one. We agree the scope in week 0 and ship in order of commercial impact.

Semantic search is search that understands what people mean. A visitor searching for “a hydraulic ram for a 12° tipper” finds the right product page even if those exact words appear nowhere on it. Same goes for “the form we use when subbing work to another firm” finding the right document, or “do I need planning for a single-storey extension” finding the right guide. The difference between a search box people use and a search box people don’t.
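The idea underneath is simple enough to sketch. In this illustrative Python snippet the `embed` function is a toy bag-of-words stand-in; a real build would call an embedding model and store vectors in an index, but the ranking logic is the same shape:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: dict) -> list:
    # Rank every page by similarity to what the visitor *meant*.
    q = embed(query)
    return sorted(
        ((url, cosine(q, embed(body))) for url, body in docs.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

docs = {
    "/rams/tipper-12deg": "hydraulic ram sized for a 12 degree tipper body",
    "/guides/planning": "planning permission guide for a single storey extension",
}
```

With real embeddings, “a hydraulic ram for a 12° tipper” lands on the right page even when the wording differs; the toy version only matches shared words, which is exactly the limitation embeddings remove.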

On-page assistant is a conversational answerer that draws only on your real content — product pages, service descriptions, pricing PDFs, past support replies if you’ll share them. Every answer cites the page it came from, so visitors can click through and read more. It says “I don’t have that” rather than inventing things. The point is to be more useful than a contact form and more honest than a chatbot.
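The refuse-rather-than-invent behaviour is a design decision, not a model property, and it can be sketched in a few lines. Everything here is illustrative: `relevance` is a toy keyword-overlap score standing in for vector similarity, `CORPUS` is made up, and a real build would hand the retrieved passage to a language model rather than return it verbatim:

```python
REFUSAL = "I don't have that."
THRESHOLD = 0.25  # below this relevance score, refuse rather than guess

CORPUS = {
    "/pricing": "our builds start at 9500 pounds staged across the project",
    "/contact": "email us for anything else",
}

def relevance(query: str, text: str) -> float:
    # Toy keyword overlap; a real build scores vector similarity instead.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def answer(query: str, corpus: dict = CORPUS) -> dict:
    url, score = max(
        ((u, relevance(query, body)) for u, body in corpus.items()),
        key=lambda pair: pair[1],
    )
    if score < THRESHOLD:
        return {"text": REFUSAL, "source": None}
    # A real build hands the retrieved passage to the model here;
    # the citation always points back at the page it came from.
    return {"text": corpus[url], "source": url}
```

The `source` field is what powers the click-through citation, and the threshold is what makes “I don’t have that” the default failure mode instead of a confident invention.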

AEO surfaces are the work that gets your pages cited by AI search. When somebody asks ChatGPT, Perplexity, or Google AI Overviews about your industry, this is what gets your page named in the answer — question-shaped page structure, summary blocks where they belong, structured tagging that AI tools recognise. Baseline AEO is included in every build; if you want it standalone against an existing site, see AEO consulting.
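“Structured tagging that AI tools recognise” mostly means schema.org JSON-LD embedded in the page. A minimal sketch, built in Python for consistency with the other examples; the question and answer text here is invented for illustration, not real advice:

```python
import json

# Illustrative FAQPage structured data in the schema.org vocabulary.
# The Q&A content below is placeholder text, not real guidance.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Do I need planning permission for a single-storey extension?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Often not, but conservation areas and listed "
                    "buildings are common exceptions.",
        },
    }],
}

# What actually ships in the page <head> or body:
snippet = '<script type="application/ld+json">{}</script>'.format(
    json.dumps(faq_jsonld, indent=2)
)
```

Question-shaped headings plus markup like this are what make a page legible to answer engines; which schema types apply depends on the page, and that mapping is part of the AEO scope.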

Cost & performance is the part nobody costs properly in the proposal. Every AI answer costs something, and without guardrails the bill can run hot. I set spending limits per feature, cache repeated answers, and use the cheapest model that meets the quality bar — premium models only when you genuinely need them. Each month you get a one-page summary: what the AI cost, what it did, and what to adjust.
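The guardrail pattern is worth seeing concretely. This is a deliberately simplified sketch: it charges a flat cost per answer, where a production build meters actual token usage from the provider’s API response, but the cap-and-cache shape is the real one:

```python
import hashlib

class AnswerBudget:
    """Per-feature spending cap with a cache for repeated questions.

    Illustrative sketch: real builds meter actual token usage from the
    model provider's response rather than a flat cost per answer.
    """

    def __init__(self, monthly_cap_pence: int, cost_per_answer_pence: int):
        self.cap = monthly_cap_pence
        self.cost = cost_per_answer_pence
        self.spent = 0
        self.cache = {}

    def ask(self, question: str, generate) -> str:
        key = hashlib.sha256(question.lower().strip().encode()).hexdigest()
        if key in self.cache:
            return self.cache[key]          # repeated answers are free
        if self.spent + self.cost > self.cap:
            return "Assistant is over budget. Please use the contact form."
        self.spent += self.cost
        answer = generate(question)         # the only paid call
        self.cache[key] = answer
        return answer
```

Caching is what keeps the common questions cheap; the cap is what makes the worst month a known number instead of a surprise invoice.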

Who this is for

Built for sites that already know things.

AI is only as useful as the corpus you point it at. If you’ve spent a decade writing real material, this earns its keep. Three signals of fit.

— A / Knowledge-heavy businesses

Catalogues, datasheets, technical specs.

Engineering, manufacturing, B2B distribution. Your buyers ask oblique, specific questions and your site has the answers, somewhere — usually buried four clicks deep in a PDF. Semantic search and an on-page assistant turn that buried material back into commercial frontage.

— B / Content-rich publishers

Sites with a decade of articles, guides, or case studies.

Charities, professional bodies, niche media. The archive is the product, but nobody finds the right piece in time. A conversational search interface trained on the back catalogue is the cheapest editorial intervention you can make. The articles you already paid to write start earning a second time.

— C / Consultancies who answer for a living

Solo experts and small firms whose pipeline is questions.

Solicitors, accountants, architects, planners. Half your inbound enquiries open with a question your site could answer if anyone could find the right page. AEO surfaces get you cited by AI Overviews; the on-page assistant answers the question on the page directly. Both, ideally.

How it works

Build the boring bits first, calibrate in public.

Weeks 0–1 · Discovery

Your content, your real questions.

We list the material the AI will draw from — pages, PDFs, support replies — and collect 80–120 real questions from your sales inbox, support thread, or analytics. The list of questions becomes the test set: the AI doesn’t ship until it can answer them.

Weeks 1–3 · Indexing

Building the search and answer pipeline.

Your content is broken into searchable pieces and indexed. The pipeline that finds the right piece for a given question is built, tested, and graded against the test set before anyone writes a single line of front-end code. Quality of answers is decided here, not later.
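“Broken into searchable pieces” means chunking. A sketch of the simplest version, overlapping word windows; in practice the pipeline splits on headings and paragraphs first and falls back to windows like this, and the exact sizes are tuned per corpus:

```python
def chunk(text: str, max_words: int = 120, overlap: int = 20) -> list:
    """Split a page into overlapping word-window chunks for indexing.

    Illustrative only: production pipelines split on document structure
    first (headings, paragraphs) and use windows as the fallback.
    """
    words = text.split()
    if len(words) <= max_words:
        return [text]
    step = max_words - overlap
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words) - overlap, step)
    ]
```

The overlap matters: it stops an answer that straddles a chunk boundary from being invisible to retrieval, which is one of the quiet ways answer quality is won or lost at this stage.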

Weeks 3–6 · Calibration

Tuning voice and behaviour.

A working assistant on staging that we tune together: voice, length, citation style, when it should say “I don’t have that” and when it should hand over to a human. This is the bit clients enjoy — and the bit that decides whether it sounds like you or sounds like ChatGPT.

Weeks 6–10 · Launch

Launch and measurement.

Front-end integration, cost guardrails, monitoring, an admin view of every conversation, and a weekly summary of what the AI cost and did. Thirty days of post-launch tuning, plus an optional retainer if you want me to keep watching.

A client said
“It answers the questions our junior sales would have answered, in the voice we’d have used, on a Sunday afternoon.”

Marcus Reilly

Commercial Director, MC² Print

On-page assistant · 2025
Questions

The questions buyers actually ask.

If yours isn’t here, send it to rich@flexiweb.digital and I’ll reply within a working day.

Is this just ChatGPT in a wrapper?

No, and that’s the point. ChatGPT in a wrapper answers from a general-purpose model that has no idea what your business does — charming, confident, often wrong. What I build is retrieval-augmented: every answer is grounded in your specific content, with a citation back to the page it came from. The model is the writer; your site is still the source of truth.

The underlying model itself is a swappable choice — we tend to default to a cheap one and only reach for a frontier model when the eval set demands it.

What about hallucinations — wrong or made-up answers?

The retrieval-first architecture is the first defence: the assistant can only answer from content it’s actually been given, and it’s instructed to refuse rather than invent. We also run an evaluation set — the 80–120 real questions collected in discovery — on every change, and the assistant has to pass before it ships.

In practice the failure mode isn’t hallucination, it’s refusal: the AI says “I don’t have that” on a question it should have answered, because the underlying page is missing. That’s a content problem, and it’s one you can see and fix.
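The eval-set check is mechanical enough to sketch. Assume each case pairs a real question with the page a correct answer must cite, or `None` where refusal is the right behaviour; a real harness also grades the answer text itself, not just the citation:

```python
def run_evals(assistant, eval_set: list) -> dict:
    """Run the discovery-phase question set against the assistant.

    Illustrative shape: each case is {"question": ..., "expected_source": ...},
    where expected_source is None when the assistant should refuse.
    """
    failures = []
    for case in eval_set:
        result = assistant(case["question"])
        if result["source"] != case["expected_source"]:
            failures.append(case["question"])
    return {"passed": len(eval_set) - len(failures), "failures": failures}
```

Running this on every change is what turns “the AI seems fine” into a pass/fail gate, and the failure list points straight at the missing or mis-indexed pages.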

What does it cost to run, month to month?

For a typical SME site, model and infrastructure costs land between £30 and £180 a month — vector index hosting, embedding refreshes, and the per-token cost of answers actually served. The variability is mostly traffic-driven.

Per-route token budgets and response caching keep that predictable. You get a monthly report showing what the AI cost and, where it’s measurable, what it earned — assisted enquiries, deflected support tickets, citation pickups.

Do I need my own LLM, or to train a model?

Almost certainly not. Fine-tuning a model is the wrong tool for the job 95% of the time — expensive, slow to iterate, and obsolete the moment the underlying model improves. Retrieval-augmented generation against a well-prepared corpus beats fine-tuning for almost every commercial use case and stays cheap.

If your situation genuinely calls for a private or on-premise model — regulated data, contractual constraints — we can run open-weight models on a managed UK host. Rare, but I’ve done it.

What happens when the AI tech changes — will I have to rebuild?

The architecture is designed for exactly that. Your content, your index, your prompts, and your evaluation set live in your own infrastructure. The model itself is a configuration value — we change it the same way we’d change a database driver.

Every previous client of mine has swapped models at least once since launch — usually to something cheaper or faster — without touching the rest of the site. That’s the whole reason for building it this way.
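“A configuration value” is meant literally. A minimal sketch of the shape, with hypothetical provider and model names for illustration; the point is that swapping models is an edit to this object, not a rebuild of the retrieval pipeline:

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    # Hypothetical config shape: the model is a value, not a dependency.
    provider: str
    model: str
    max_output_tokens: int = 400

# Swapping to something cheaper or faster is a one-line change here;
# the index, prompts, and eval set stay exactly where they are.
LIVE = ModelConfig(provider="provider-a", model="frontier-model-v2")
CANDIDATE = ModelConfig(provider="provider-b", model="small-fast-model-v1")
```

The eval set is what makes the swap safe: run it against `CANDIDATE`, compare the pass rate, and promote or discard.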

A semantic-search demo, on your own content.

Send me a sitemap or a folder of PDFs. I’ll build a small, working semantic-search prototype against your real material and send back a private URL within a fortnight — so you can see what an AI-powered version of your site would actually feel like before anyone commits.