Pushing Meaning Uphill

Community Article Published March 14, 2026

Writing, coding, and publishing real frameworks before the language around them was ready


You spend long evenings trying to give shape to something that does not yet have stable language around it. You draft it, redraft it, argue with it, code it, test it, publish it, and still it stands in a strange space. Too concrete to be called mere thought. Too early to be widely recognized as obvious. Neither people nor LLMs are yet saturated with the pattern, so every explanation feels like pushing meaning uphill.

That is why, when the world slowly begins to speak in that direction, the feeling is not triumph. It is a relief. A small confirmation that the labour was not imaginary.

Agentic SDLC

What is becoming clearer now is that AI in software delivery cannot remain confined to code suggestion. The real shift is toward the lifecycle itself: how work is framed, broken down, delegated, reviewed, corrected, and carried forward with continuity and responsibility.

That was the direction behind my work on the DDSE Framework. It was not written as an abstract commentary. It was shaped as a real framework for thinking about software delivery in an agent-aware, decision-driven way, with implementation seriousness behind it.

DDSE - DECISION DRIVEN SOFTWARE ENGINEERING

Agentic Governance

Another reality now emerging is that agents cannot be entrusted with serious work on optimism alone. The moment they begin to act, governance becomes structural. Boundaries, contracts, traceability, controlled execution, failure handling, and accountability have to be designed in, not spoken about after the fact.

That is the ground from which I built ACM, the Agentic Contract Model. Again, not as a paper idea, but as a coded and tested framework intended to make governance explicit in the design of agentic systems.

AGENTIC CONTRACT MODEL - ACM
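To make the idea of a designed-in contract concrete, here is a rough illustration. The class and field names below are my own invention for this article, not ACM's actual interface; the point is only that boundaries, execution budgets, traceability, and failure handling can be ordinary code, not afterthoughts.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentContract:
    """Illustrative governance contract for an agent (not ACM's real API)."""
    allowed_actions: set[str]       # boundary: what the agent may do
    max_steps: int                  # controlled execution: hard step budget
    audit_log: list[dict] = field(default_factory=list)  # traceability

    def execute(self, action: str, fn: Callable[[], str]) -> str:
        # Budget check before anything runs.
        if len(self.audit_log) >= self.max_steps:
            raise RuntimeError("step budget exhausted")
        # Boundary check: out-of-contract actions are denied and recorded.
        if action not in self.allowed_actions:
            self.audit_log.append({"action": action, "status": "denied"})
            raise PermissionError(f"action '{action}' outside contract")
        # Failure handling: outcomes, including errors, leave an audit trail.
        try:
            result = fn()
            self.audit_log.append({"action": action, "status": "ok"})
            return result
        except Exception as exc:
            self.audit_log.append(
                {"action": action, "status": "failed", "error": str(exc)}
            )
            raise
```

Even in this toy form, the governance questions (what may the agent do, for how long, and who can reconstruct what happened) are answered by structure rather than by trust.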

Agentic Hybrid Search and Context Engineering

The third thing the industry is now discovering more honestly is that weak retrieval produces weak intelligence. Pure vector optimism was never enough for serious systems. Reliable AI needs hybrid search, stronger knowledge shaping, and disciplined context construction.

That is why I published CEF, the Context Engineering Framework. It came from the belief that context is not a side utility around AI systems. It is part of the engineering foundation itself, and it must be treated with the same seriousness as architecture.

CONTEXT ENGINEERING FRAMEWORK
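As a minimal sketch of what hybrid search means in practice: rank documents lexically and semantically in parallel, then fuse the two rankings. This is not CEF's implementation; the fusion scheme below is standard reciprocal rank fusion, the lexical score is a crude stand-in for BM25, and all names are illustrative.

```python
import math

def keyword_score(query: str, doc: str) -> float:
    # Crude lexical overlap, standing in for a real BM25 scorer.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query, docs, embeddings, query_vec, k=60):
    # Rank independently on lexical and semantic signals...
    lex = sorted(range(len(docs)), key=lambda i: -keyword_score(query, docs[i]))
    sem = sorted(range(len(docs)), key=lambda i: -cosine(query_vec, embeddings[i]))
    # ...then fuse with reciprocal rank fusion: score = sum of 1/(k + rank).
    fused: dict[int, float] = {}
    for ranking in (lex, sem):
        for rank, i in enumerate(ranking):
            fused[i] = fused.get(i, 0.0) + 1.0 / (k + rank + 1)
    return sorted(fused, key=fused.get, reverse=True)
```

A document that only one signal favours cannot dominate; that is the discipline pure vector optimism lacked.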


Even before these frameworks, in my book Enterprise AI: Strategic Blueprint for Purple People, I tried to make a simpler but harder point:

Before placing LLMs everywhere, we need to understand decisions properly. We need to know what is workflow, what is deterministic logic, and where intelligence truly belongs.
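A hypothetical sketch of that triage, reduced to its bones. The predicate names are mine, not terminology from the book; the point is only that the routing question can be asked explicitly before an LLM is placed anywhere.

```python
from enum import Enum

class Handler(Enum):
    WORKFLOW = "workflow engine"
    RULES = "deterministic logic"
    LLM = "model judgment"

def route_decision(fully_specified: bool, needs_interpretation: bool) -> Handler:
    """Illustrative triage: where does this decision truly belong?"""
    # Only decisions that genuinely require interpretation justify an LLM.
    if needs_interpretation:
        return Handler.LLM
    # If the inputs and outcome are fully specified, rules suffice.
    if fully_specified:
        return Handler.RULES
    # Everything else is sequencing and state: a workflow concern.
    return Handler.WORKFLOW
```

The interesting work is in defining the predicates honestly for each decision space, which is exactly the part that gets skipped when intelligence is assumed to belong everywhere.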

I still feel that this part remains early. Terms such as AI Effectiveness Index or Decision Domain Profiling may sound unnecessary or too abstract to some people today. But I have learned to be patient with that feeling. Many ideas look excessive before the surrounding failures make them appear necessary. Governance sounded heavy until agents began to act. Hybrid context sounded elaborate until weak retrieval began to show its limits. Decision modelling may be passing through the same stage now.

I sometimes wonder whether part of the difficulty was not the idea itself, but the familiarity of the examples. When people saw business process lifecycles such as O2C or P2P, they may have felt they already understood the territory. But the point was never the lifecycle names themselves. The point was to show that beneath familiar workflows, there are different kinds of decision spaces, and that not all of them deserve the same treatment from software, rules, or LLMs. Familiar examples may have made the deeper distinction too easy to overlook.
