How Configuration Became the Default

Modern enterprise software has trained us to accept configuration complexity as inevitable.

If a system is powerful, we expect it to be difficult to configure.
If it spans many teams, regions, or regulations, we assume large consulting engagements are required.
If it touches audit or compliance, we expect long deployments and permanent operational drag.

This has become so normal that few people stop to ask a simpler question.

Why does so much effort go into configuring systems that are meant to describe what already happened?

Configuration as an industry

Across enterprise software over the last two decades, a clear pattern appears.

Platforms such as ServiceNow and SAP, along with field service optimisation tools like ClickSoftware (now part of Salesforce), are all deeply configurable, workflow centric, and opinionated about how work should be done.

This is not accidental. These systems were designed to model organisations rather than record reality.

Once a platform embeds assumptions about roles, approvals, states, and flows, it immediately begins to diverge from how work actually happens. Configuration becomes the mechanism used to reconcile that gap.

Over time, configuration stops being a feature and becomes a business model.

Entire companies now exist to translate domain reality into platform specific schemas, manage ongoing configuration drift, optimise field to worker alignment, and keep systems usable as organisations evolve.

The software did not fail. It did exactly what it was designed to do.
But it also created a permanent dependency layer.

Configuration is a tax, not a capability

The scale of configuration ecosystems is often mistaken for sophistication.

In practice, configuration is a tax paid because the core system is opinionated. The more assumptions embedded in the foundation, the more labour is required to keep the system aligned with reality.

This becomes especially visible in physical and distributed environments.

Large scale asset tracking, leasing, logistics, and field operations rarely fail because the work is inherently complex. They fail because trust decays across distance, time, and organisational boundaries.

I have seen this first hand.

Earlier in my career, I worked on a nationwide deployment in India, contracted by a major car manufacturer, involving tens of thousands of leased vehicles and dozens of independent dealerships. The goal was simple. Prevent stock from disappearing.

The system worked not because it was heavily configured, but because it produced regular, verifiable evidence. Weekly audits created consequences. Reality was checked often enough that lying became expensive.

A small team delivered it. Not because the work was easy, but because the system focused on recording what happened rather than explaining how an organisation wished to operate.

This is the part that often surprises people.

The enterprise misunderstanding

There is a persistent belief that large enterprise vendors understand hard problems better because they are large.

In reality, enterprises tend to focus on what they already know how to sell, not on what they lack expertise in.

They are exceptionally good at procurement, risk transfer, and repeatability. They build systems around workflows, approvals, and internal consistency because that aligns with how their customers buy software.

That does not mean they have solved the harder problem of durable truth.

Enterprise success is evidence of market fit, not necessarily evidence of understanding how work happens on the ground, across time, or between independent actors.

Most platforms optimise for internal explainability, compliance optics, and organisational consistency.

They are far less concerned with whether records remain meaningful when people change, vendors rotate, or incentives drift.

This is where configuration quietly replaces trust.

A different starting point

An alternative approach starts from a much simpler premise.

What if the system did not try to model the organisation at all?

Instead of encoding workflows, roles, and assumptions, the core system records evidence. What happened, who attested to it, when it occurred, and what the consequences were.

In this model, the core is permanent, economically stable, and cryptographically secure. It is intentionally not opinionated on any specific use case or vertical. Domain knowledge is brought to the system rather than embedded into it.
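As a rough illustration of what such an evidence record could look like, here is a minimal TypeScript sketch. The field names and the hash-chained log are my own illustrative assumptions, not DOVU's actual schema; the point is only that the core stores attested events rather than workflow state.

```typescript
import { createHash } from "crypto";

// Illustrative shape of a single piece of evidence: what happened, who
// attested to it, when it occurred, and what the consequence was.
interface EvidenceRecord {
  what: string;        // e.g. "vehicle VIN-123 sighted at dealership D-42"
  attestedBy: string;  // identity of the attester, e.g. a public key or account id
  occurredAt: string;  // ISO 8601 timestamp of the event itself
  consequence: string; // e.g. "stock count confirmed" or "discrepancy raised"
  prevHash: string;    // hash of the previous record, chaining entries together
}

// Hash a record so that rewriting any earlier entry becomes detectable.
function hashRecord(record: EvidenceRecord): string {
  return createHash("sha256").update(JSON.stringify(record)).digest("hex");
}

// Append-only log: every new record commits to everything recorded before it.
function append(
  log: EvidenceRecord[],
  entry: Omit<EvidenceRecord, "prevHash">
): EvidenceRecord[] {
  const prevHash = log.length > 0 ? hashRecord(log[log.length - 1]) : "GENESIS";
  return [...log, { ...entry, prevHash }];
}
```

Nothing in this sketch encodes roles, approvals, or states. Any of those can be layered on top later without touching the log itself.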

Applications become secondary. Interfaces are generated from requirements. Configuration becomes optional rather than mandatory.

This does not eliminate service providers or field teams. It changes what they are valuable for. They stop translating reality into configuration and start contributing verified evidence and expertise.

Applications as a byproduct

Another reason configuration centric systems are starting to break down is cost asymmetry.

Historically, applications were expensive to design, build, and maintain. Interfaces were scarce, workflows were handcrafted, and change carried significant overhead. It made sense to treat the application as the primary asset and the data beneath it as secondary.

That assumption no longer holds.

With modern AI tooling, applications are increasingly generated from needs and requirements rather than engineered upfront. Walkthroughs, dashboards, lead magnets, asset creation flows, and reporting interfaces can be produced quickly and iterated at very low cost.

In this environment, the application becomes a byproduct.

What matters instead is the quality of the underlying data, the integrity of the audit trail, and the ability to query and recombine evidence reliably over time. A dashboard or query engine is ultimately just a reusable template rendered against trustworthy memory.
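Continuing the same illustrative sketch, a "dashboard" in this model can be as little as a query plus a rendering step over the evidence log. The filter and output format below are assumptions chosen for brevity, not a prescribed interface.

```typescript
// A report is just a query over the evidence log plus a rendering step.
// Reuses the illustrative EvidenceRecord type from the earlier sketch.
function auditSummary(log: EvidenceRecord[], attester: string): string {
  const attested = log.filter(record => record.attestedBy === attester);
  const lines = attested.map(record => `${record.occurredAt}  ${record.what}`);
  return [`Events attested by ${attester}: ${attested.length}`, ...lines].join("\n");
}
```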

Rather than building a monolithic application and forcing organisations to conform to it, the foundation becomes the star. AI adapts downstream to generate fit for purpose interfaces, while the core remains stable, neutral, and durable.

When applications are cheap, truth is the scarce resource.

Why this leads to the memory layer

For years, configuration heavy systems were the only viable way to achieve flexibility at scale. Interfaces were expensive. Integration was brittle. Memory was implicit.

That world is ending.

The harder problem was always building a neutral, durable memory of what actually happened.
We avoided it because it was boring, difficult, and rarely rewarded.

That avoidance shaped an entire industry.

This is explored further in The Memory Layer We Keep Avoiding, which builds on the same observation: systems fail not because they cannot act, but because they cannot reliably remember.

A note on implementation

This approach is being explored in production at DOVU.

Rather than designing applications or workflows, the focus is on building a general audit trail that is permanent, economically stable, and cryptographically secure, while remaining deliberately domain neutral.

The assumption is simple. If you can reliably remember what happened, everything else can be derived.
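As a closing sketch of that assumption, and not a description of DOVU's actual implementation, verifying the illustrative hash-chained log from earlier is a single replay over the records. If every entry still commits to its predecessor, the remembered history has not been rewritten, and any downstream view can be derived from it.

```typescript
// Replay verification over the illustrative log: each record must commit to
// the hash of the record before it, so silent rewrites of history fail the check.
function verifyChain(log: EvidenceRecord[]): boolean {
  return log.every((record, i) => {
    const expected = i === 0 ? "GENESIS" : hashRecord(log[i - 1]);
    return record.prevHash === expected;
  });
}
```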


Matt Smithies
CTO, DOVU