iWeb · est. 1995
AI for Commerce · Practical · Governed · Operational

Most AI pages overpromise.
We help commerce businesses implement AI safely, pragmatically, operationally.

AI is only valuable when connected to operational reality — structured data, governed workflows, named decision-makers, human oversight. We start with business problems, not AI tools. We build into the systems already running your commerce. And we say no to the parts that won't pay back.
Talk to an expert, or read how we think about AI →
600+
Commerce programmes
8–24
Systems per programme
0
Hallucinations in trading
1995
Founded
01 · The gap

Most AI pages in commerce are operationally empty.

They talk in abstractions. They overpromise automation. They list generic use cases and chase the hype. They sound futuristic but ignore governance, data, and commercial reality.
The buyers we work with are caught between curiosity and paralysis — under pressure to "do something with AI", uncertain where the value comes from, worried about risk and accuracy, unclear about governance. The honest position is that AI in commerce is a serious operational discipline, not a product feature.
What the market promises
What actually pays back
"AI will transform commerce."
Most pilots never reach trading. The ones that do are narrow, governed, well-instrumented.
"Autonomous agents replace teams."
Agents amplify teams. Removing the human turns small errors into customer incidents at scale.
"Plug in a model, get value."
Value comes from the data, the workflow it sits in, and the rules that govern it.
"AI fixes bad product data."
AI reflects bad product data — louder. Fix the PIM first.
"Generative AI everywhere."
Generation is the easy part. The hard parts are accuracy, brand voice, audit, rollback.
"AI is the new platform."
AI is a layer across your platform. Treat it like an integration, not a replatform.
02 · Problems before tools

Start with business problems, not AI tools.

Most organisations are approaching AI backwards — starting with the tool, then hunting for the problem. We start at the other end: where the operational friction is, what it costs, and whether AI is even the right answer. Often it isn't, and saying so saves a six-figure pilot.
How most teams start
01
ChatGPT / copilots
Pick the tool first.
02
Find a use case
Hunt for somewhere to apply it.
03
Run a pilot
Demo internally. Generate excitement.
04
Try to scale
Hit the data and governance wall.
05
Quietly stall
Pilot becomes a slide, not a system.
How we start
01
Operational friction
Where workflows actually break.
02
Commercial value
What an hour of that friction costs.
03
Data + workflow audit
What needs to be true for AI to work here.
04
Smallest useful slice
Narrow scope, governed, measurable.
05
Scale what worked
Expand only the parts that paid back.
The first hour of any AI conversation we have is operational. What's the bottleneck? Who owns it? What does an hour of it cost the business? Tools come last — and only when there's a problem worth solving with one.
03 · Where AI creates real value

AI is an operational layer, not a product.

The wins are unglamorous and they compound. Faster product onboarding. More consistent merchandising. Search that finally works. Customer service that keeps brand voice. Internal teams that stop doing the same manual task forty times a day. AI augments commerce capability — it doesn't replace the team running it.
01 · High
Product enrichment
Attribute generation, gap-filling, taxonomy alignment across thousands of SKUs.
Owner · PIM
02 · High
Search & discovery
Vector + keyword hybrid, intent rewrites, merchandising rules that learn.
Owner · Storefront
03 · High
Merchandising assist
Category curation, promotion ideas, anomaly flags reviewed by a human.
Owner · Trading
04 · High
Customer service
Drafted responses with brand voice, retrieval from policy and order history.
Owner · Service
05 · Medium
Content generation
Category copy, PDP variants, email — drafted, human-approved, version-tracked.
Owner · Marketing
06 · Medium
B2B account assist
Quote summaries, account-specific recommendations, replenishment prompts.
Owner · B2B
07 · Medium
Internal knowledge
Q&A across runbooks, contracts, product info — for staff, never customers.
Owner · Operations
08 · Foundational
Data normalisation
Cleanup of legacy product data, supplier feeds, return reasons, taxonomy drift.
Owner · PIM
We don't pick favourites. The right starting point depends on where the friction is, what data is governable today, and which workflow can absorb a phased rollout without disrupting trading.

AI without operational structure creates noise. AI connected to commerce systems creates leverage.

The next three sections cover what makes AI commercially credible — the data underneath it, the governance around it, and the order in which it gets adopted.
8–24
Systems per programme
ERP · OMS · WMS · PIM · CDP · search · payments · tax
600+
Commerce programmes
Since 1995 — 31 years on the same problem
<5%
AI initiatives in production
Industry · per published surveys, 2024–25
100%
iWeb AI work · governed
Human-in-the-loop, audit trail, rollback
04 · Foundations

AI runs on the data underneath it.

AI pages that ignore data feel unserious — leading with autonomous agents and never mentioning PIM is a tell. Good AI in commerce depends on governed product information, connected systems, and operational context — none of which are interesting until they are missing.
We have credibility here because we've spent thirty-one years on the unglamorous half of commerce — Adobe Commerce, Akeneo, ERP integration, B2B complexity. The same expertise underwrites the AI work. You can't bolt intelligence onto a broken substrate.
Layer 5 · AI use cases
Generation, agents, recommendations, search.
Layer 4 · Workflow & governance
Approvals, thresholds, audit, rollback.
Layer 3 · Operational context
Customer, order, inventory, fulfilment state.
Layer 2 · Connected systems
ERP · OMS · WMS · PIM · CDP · search · payments.
Layer 1 · Governed product data
Attributes, taxonomy, quality, ownership.
Each layer up depends on the one beneath. Pilots fail when teams skip layers.
05 · Responsible AI & governance

Governed automation, with humans in the loop.

The serious commerce buyers we work with are deeply uncomfortable with uncontrolled AI. They worry about hallucinations, brand risk, customer trust, audit, compliance. They should. AI should operate inside governed systems — approval thresholds, confidence floors, audit trails, named owners, reversible actions. The matrix below is how we decide who reviews what.
High confidence · internal, low impact → Auto-publish · audit log
Run free, log everything, periodic spot-check.
Low confidence · internal, low impact → Human review · before publish
Drafts queued for staff, brand voice checked.
High confidence · customer-facing, high impact → Human approval · per action
Gated behind a named approver.
Low confidence · customer-facing, high impact → No autonomous action
AI assists, the human decides.
Every AI action is logged. Every customer-facing action is reversible. Confidence thresholds are set per use case, never globally.
Customer-facing automation gets the highest oversight. Internal-facing assistance — staff drafting, search ranking, anomaly flags — can run lighter. The principle is simple: the further an AI output sits from a human, the more governance has to wrap it.
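The matrix above reduces to a small routing rule. The sketch below is illustrative only — the use-case names, thresholds, and labels are hypothetical, not iWeb's production implementation — but it shows the two properties the section describes: confidence floors set per use case, never globally, and customer-facing outputs always routed through a human.

```python
# Illustrative sketch of the 2x2 oversight matrix.
# Use cases, floors, and route names are hypothetical examples.

# Confidence floors are set per use case, never globally.
CONFIDENCE_FLOORS = {
    "product_enrichment": 0.90,   # internal PIM gap-filling
    "service_reply_draft": 0.75,  # drafted customer-service replies
}

def oversight_route(use_case: str, confidence: float, customer_facing: bool) -> str:
    """Map one AI output onto its oversight model."""
    high_confidence = confidence >= CONFIDENCE_FLOORS[use_case]
    if customer_facing:
        # Customer-facing actions always involve a human.
        if high_confidence:
            return "human_approval_per_action"   # gated behind a named approver
        return "no_autonomous_action"            # AI assists, the human decides
    # Internal, low-impact outputs can run lighter.
    if high_confidence:
        return "auto_publish_with_audit_log"     # run free, log everything
    return "human_review_before_publish"         # queued for staff review
```

Note the asymmetry: raising a use case's confidence floor only ever moves outputs toward more oversight, which is why floors can be tuned per use case without weakening the customer-facing guarantee.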
06 · Pragmatic adoption

Phased adoption beats big-bang transformation.

Most businesses do not want a massive AI transformation programme. They want a sensible starting point, controlled experimentation, measurable value early, and a way to evolve capability over time. The shape below is how a typical first AI engagement runs — twelve weeks, three phases, one production use case at the end.
Weeks 01–02
Discover
Operational friction, data audit, oversight model.
Weeks 03–04
Design
Smallest useful slice. Confidence floor. Rollback path.
Weeks 05–08
Build · pilot
Internal-only first. Instrumented. Human in every loop.
Weeks 09–10
Govern
Approval thresholds, audit, brand voice review.
Weeks 11–12
Production
Limited release. Measured against baseline. Owner named.
Kickoff → 12-week phased adoption → Production · governed
By week twelve there is one production AI capability, with measurable value, governed by a documented oversight model. From there capability scales — never as a single transformation, always as the next phased increment.
Featured · The intelligence layer
WithPraxis.ai
Strategic partner · Governance & decision support

The strategic intelligence and governance layer behind responsible AI in commerce.

AI implementation is not just about tools. It is about judgement, prioritisation, governance frameworks, and structured organisational thinking. WithPraxis works alongside us — they bring the operating models, decision support and AI strategy; we bring the commerce systems, integration and engineering. Together: strategy, governance and execution under one delivery.
Capability stack
The six things WithPraxis brings to a programme. Strategy, governance and decision support — surfaced as named outputs, not abstract advice.
AI readiness assessment
Where the organisation actually sits — data, governance, capability, risk appetite.
Operating model design
Who owns AI, who reviews it, who escalates, how it scales.
Governance frameworks
Approval thresholds, audit trails, confidence floors, brand voice rules.
Decision support · structured
Frameworks that surface evidence and trade-offs before AI decisions get made.
Workflow intelligence
Mapping where AI can amplify the team — and where it must not.
Responsible automation
Boundaries on autonomous action. What stays human. What never gets automated.
Where it fits
How the partnership runs day-to-day, and what WithPraxis is deliberately not.
How the partnership works
On AI engagements WithPraxis sits at the strategy table — readiness, operating model, governance framework, decision support. iWeb runs the engineering — data, integrations, the AI layer itself, the production rollout. Both are accountable to the same delivery plan. Clients get one programme, not two consultancies.
What WithPraxis is not
Not a generic AI consultancy. Not a buzzword shop. Not a vendor of "autonomous agents". The position is calm, evidence-led, governance-aware, anti-hype. The audience these conversations are built for is operations, finance, compliance — not the innovation lab.
Read further
How WithPraxis articulates the same operating models, governance frameworks and AI readiness work — in their own words.

How WithPraxis thinks about responsible AI.

Their site goes deeper on operating models, AI readiness and the structured-thinking frameworks we run programmes against — the same governance work clients see inside our engagements.
AI failure map · Industry vs iWeb-led

Why most AI initiatives in commerce never reach trading.

Six failure modes account for the majority of stalled AI initiatives. Industry bars on the left. The right column shows the same modes on the AI work we've shipped — same problems, lower numbers, because the operational discipline came first.
Failure mode — Industry (share of stalled) · iWeb (2024–26)
Poor product / operational data — 68% · 8%
No governance, uncontrolled output — 57% · 4%
Tool chosen before problem — 51% · 3%
Pilot doesn't survive integration — 46% · 6%
Hallucination, brand-voice drift — 38% · 2%
No named owner, pilot stalls — 34% · 0%
Industry bars · share of stalled AI initiatives attributable to each cause, weighted across published AI-in-enterprise surveys 2024–25 · iWeb bars · AI-adjacent work shipped on commerce programmes 2024–26.
Accreditations & assurance
Gold Commerce Partner
Specialized in Commerce
ISO certified
27001 · 9001 · 42001
Cyber Essentials Plus
Independently verified security
WCAG 2.2 AA
Accessibility embedded by design
Employee-owned
The same team, long term
Next step

Brief us. We'll tell you which AI ideas are real, which won't pay back, and what we'd do.

You'll get a written response from a senior expert — what's worth building, what isn't, the data and integration work that has to come first, and a phased plan with named owners. No demo of an autonomous agent.
Talk to an expert, or re-read the seven sections →

Practical · Governed · Operational · Human-in-the-loop · Auditable · Reversible