Barmus Systems

The execution layer for
real-time operational constraints.

Barmus is an operational constraint execution system — a runtime that governs production scheduling, resource allocation, and distribution orchestration across constrained, time-bound operational environments.

It does not assist operators. It defines and enforces the execution logic of the operation itself.

Barmus operates as a three-layer execution system. Each layer manages a distinct class of operational constraint — production capacity, distribution orchestration, and interaction resolution — with synchronous state propagation across the full system graph.

The architecture is stateful and event-driven. Every committed slot, every assigned resource, and every state transition is reflected immediately across all dependent subsystems.

Layers are independently resilient and causally coupled: a constraint resolved at L1 immediately constrains the L2 allocation model. A state change in L2 propagates to L3 without polling or reconciliation cycles.
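The propagation model described above can be sketched as a synchronous event bus. This is a minimal illustration with invented names, not the Barmus API: a commit at L1 publishes an event that an L2 subscriber consumes in the same call path, so there is no polling or reconciliation cycle.

```python
# Minimal sketch of synchronous cross-layer propagation (illustrative names):
# an L1 commit publishes an event; the L2 subscriber runs in the same call.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self):
        self._subs: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self._subs[topic]:
            handler(payload)  # synchronous: state lands downstream immediately

bus = EventBus()
l2_assignments = []  # stand-in for the L2 allocation model

# L2 reacts to L1 commits without polling
bus.subscribe("slot.committed", lambda evt: l2_assignments.append(evt["order_id"]))

# L1 commits a slot; the dependent subsystem is updated in the same call
bus.publish("slot.committed", {"order_id": "ORD-1", "slot": "2026-01-10T09:00"})
```

The synchronous dispatch is the point: by the time `publish` returns, every dependent subsystem has already seen the state change.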

L3 / AI: Interaction Resolution Layer (Care+  ·  AI resolution engine)
L2 / AI: Distribution Orchestration Layer (Connect  ·  AI routing + fleet state)
L1 / Core: Constraint Execution Core (Logistics  ·  production + allocation engine)
Infra: Operational data plane (API gateway / event bus / WooCommerce)

Each layer manages a specific class of operational constraint. Each entry below lists the constraint class, owning layer, resolution mechanism, and downstream state effects.

Production capacity  ·  L1 Core
Resolution: Slot capacity graph — per-product, per-interval, per-site. Commitment is atomic; structural overcommit is rejected at the execution boundary.
Propagation: Triggers slot closure across storefront and booking engine simultaneously.

Resource availability  ·  L1 Core
Resolution: Availability matrix evaluated against active run assignments and the proximity model (Google Distance Matrix).
Propagation: Updates the allocation graph; propagates to the L2 routing scheduler.

Order state mutation  ·  L1 Core
Resolution: Modification engine validates feasibility against current slot state before committing. Infeasible requests are rejected with state context.
Propagation: Re-evaluates production slot, resource assignment, and delivery timing in sequence.

Distribution path  ·  L2 AI
Resolution: AI routing model continuously optimizes delivery run composition and sequence under live fleet state. Adapts without re-dispatch.
Propagation: Updates the driver execution path; exposes ETA deltas to the L3 resolution layer.

Interaction resolution  ·  L3 AI
Resolution: State-aware AI engine resolves queries and mutation requests against live system state. Executes mutations in L1 where permitted; escalates with context where not.
Propagation: Executes state changes directly or surfaces structured escalation packages.
L1 · Core · Required
Logistics
Constraint Execution Core

The primary execution layer. Logistics maintains a real-time constraint graph over production capacity, resource availability, and order state. All booking decisions are evaluated against this graph before commitment — overcommit and resource conflicts are resolved at the system boundary.

The APX production engine manages capacity at the per-slot, per-product, per-site level. The allocation engine assigns resources using an availability matrix cross-referenced against active run states and distance topology. Order modification requests are validated against current system state and propagate through production, allocation, and delivery layers atomically where feasible.

Atomic  ·  Slot commitment model
Real-time  ·  Capacity graph state
v1.2  ·  Current build
APX Production Engine
Slot capacity modeled as a constraint graph. Per-product, per-interval rules enforced. Commitments evaluated and locked atomically.
Intelligent Slot Allocation
Client-facing availability derived from live constraint graph. Only feasible slots exposed — feasibility computed from production load, resource availability, and distance topology.
Resource Availability Matrix
Courier availability modeled as a temporal matrix cross-referenced with active run assignments. Assignment decisions proximity-weighted via Google Distance Matrix.
Collection Point Graph
Geolocated operational nodes with independent capacity constraints. Hotspot and direct delivery topologies share a unified execution model.
Order Mutation Engine
Modification requests evaluated against live constraint state before execution. Feasible mutations propagate atomically across production, allocation, and distribution layers.
Distance Computation Layer
Google Distance Matrix integrated at the constraint resolution boundary. Travel time data informs slot feasibility, resource scoring, and ETA computation.
L2 · AI · Add-on
Connect
Distribution Orchestration Layer

Connect operates as the AI-driven distribution orchestration layer. It receives committed delivery assignments from L1 and continuously optimizes run composition and execution sequence against live fleet state, traffic topology, and delivery window constraints.

The routing model is adaptive and stateful — it maintains an active optimization loop that recalculates execution paths as fleet state and new assignments evolve. State changes propagate bidirectionally to L1 (resource availability updates) and L3 (ETA and delivery state for resolution queries).
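The adaptive loop above can be sketched with a simple stand-in optimizer. Greedy nearest-neighbour is an illustrative placeholder for the real routing model: the point is that each fleet-state event triggers a resequence of the remaining stops from the driver's current position, with no re-dispatch.

```python
# Sketch of in-flight resequencing (nearest-neighbour stand-in for the
# actual AI routing model): remaining stops are reordered from the
# driver's live position whenever fleet state changes.
def resequence(position: tuple[float, float], stops: list[tuple[float, float]]):
    """Greedy nearest-neighbour ordering of the remaining stops."""
    remaining, path = list(stops), []
    while remaining:
        nxt = min(remaining, key=lambda s: (s[0] - position[0]) ** 2
                                         + (s[1] - position[1]) ** 2)
        remaining.remove(nxt)
        path.append(nxt)
        position = nxt
    return path

# A new assignment arrives mid-run: the path is recomputed, not re-dispatched.
path = resequence((0.0, 0.0), [(5.0, 5.0), (1.0, 0.0), (2.0, 2.0)])
assert path == [(1.0, 0.0), (2.0, 2.0), (5.0, 5.0)]
```

A production optimizer would weigh delivery windows and live traffic as well as distance; the recompute-on-event structure is the part that carries over.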

Adaptive  ·  Routing model
Live  ·  Fleet state graph
iOS+  ·  Driver execution node
AI Routing Optimization Engine
Continuous optimization of delivery run composition and sequence. Adapts to incoming assignments, fleet state, and traffic conditions without re-dispatch.
Fleet State Graph
Live operational graph of all active nodes — drivers, runs, delivery states. State substrate for both routing optimization and L3 resolution queries.
Driver Execution Node (Mobile)
Mobile execution endpoint. Receives optimized path instructions, captures proof-of-delivery events, reports state transitions to the fleet graph in real time.
ETA Propagation Interface
Computed delivery windows exposed to L3 resolution layer and customer-facing interfaces. ETA values derived from live routing model state.
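How an ETA falls out of live routing state can be shown with a small sketch. The stop schema and minute units are assumptions: the ETA for a stop is the cumulative travel plus service time of every stop still ahead of it on the driver's execution path.

```python
# Sketch of ETA derivation from the live execution path (hypothetical schema):
# each stop carries its leg travel time and expected on-site service time.
def eta_minutes(path: list[dict], stop_id: str) -> int:
    """path: ordered stops, each {'id', 'travel_min', 'service_min'}."""
    elapsed = 0
    for stop in path:
        elapsed += stop["travel_min"]
        if stop["id"] == stop_id:
            return elapsed  # arrival time, before on-site service
        elapsed += stop["service_min"]
    raise KeyError(stop_id)

path = [
    {"id": "A", "travel_min": 12, "service_min": 4},
    {"id": "B", "travel_min": 9,  "service_min": 4},
]
assert eta_minutes(path, "B") == 25  # 12 + 4 at A, then 9 to B
```

Because the path itself is resequenced on every fleet-state event, any ETA read from this model reflects the route the driver is actually executing.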
L3 · AI · Add-on
Care+
Interaction Resolution Layer

Care+ is the AI-driven interaction resolution layer. It processes inbound queries and state-change requests against live L1 and L2 data — resolving autonomously where system state permits, escalating with full operational context where it does not.

The resolution engine operates without a static response model. Query responses are derived from live order state, fleet position, and constraint graph data. Executable requests are passed directly to the L1 mutation engine for constraint validation and execution.
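The resolve-or-escalate flow can be sketched as follows. Request shapes and field names are illustrative: a query is answered from live state, a feasible mutation is executed through L1, and everything else escalates with an operational-context package rather than a canned reply.

```python
# Sketch of state-aware resolution (illustrative shapes): answer from live
# state, execute feasible mutations in L1, escalate the rest with context.
def resolve(request: dict, order_state: dict, commit_mutation) -> dict:
    if request["kind"] == "query":
        return {"status": "resolved", "answer": order_state[request["field"]]}
    if request["kind"] == "mutation":
        if commit_mutation(request["change"]):  # L1 validates feasibility
            return {"status": "executed"}
        return {"status": "escalated",
                "context": {"request": request, "order_state": order_state}}
    return {"status": "escalated", "context": {"request": request}}

state = {"eta": "14:20", "slot": "14:00-15:00"}
assert resolve({"kind": "query", "field": "eta"}, state, lambda c: True) \
       == {"status": "resolved", "answer": "14:20"}
assert resolve({"kind": "mutation", "change": "move-slot"}, state,
               lambda c: False)["status"] == "escalated"
```

Note that the escalation branch carries the full request and order state with it, so a human handler starts from the same context the engine had.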

24/7  ·  Resolution availability
<2s  ·  Resolution latency
Live  ·  System state binding
State-Aware Resolution Engine
Queries resolved against live L1/L2 state. Response content computed from actual system state, not a static knowledge base.
Direct Mutation Execution
Feasible modification requests passed directly to the L1 constraint engine. Resolution terminates at state change, not at acknowledgment.
Contextual Escalation Protocol
Non-resolvable cases escalated with full operational context — interaction transcript, order state snapshot, attempted resolution path.
Multi-channel Interface Layer
Unified resolution engine across SMS, live chat, and email. Channel configuration handled at interface layer; resolution logic is channel-agnostic.
Proactive State Notification
L1/L2 state changes affecting committed orders emit outbound notifications directly from the system event stream; no manual trigger is required.
Resolution Analytics Layer
Query volume, resolution rate, escalation frequency tracked at system level. Provides signal for L3 model improvement and upstream constraint optimization.
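The escalation package described above can be sketched as a plain data structure. All field names here are assumptions: the shape simply shows an escalation carrying the transcript, an order-state snapshot, and the attempted resolution path together.

```python
# Sketch of a structured escalation package (field names are assumptions):
# a human handler receives the full operational context in one object.
from dataclasses import dataclass, field, asdict

@dataclass
class EscalationPackage:
    order_id: str
    transcript: list[str]
    state_snapshot: dict
    attempted_path: list[str] = field(default_factory=list)

pkg = EscalationPackage(
    order_id="ORD-42",
    transcript=["customer: can you move my delivery to 5pm?"],
    state_snapshot={"slot": "14:00-15:00", "driver": "en-route"},
    attempted_path=["mutation:move-slot -> infeasible (slot closed)"],
)
assert asdict(pkg)["order_id"] == "ORD-42"
```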

End-to-end state flow across subsystems. Each transition is governed by a specific constraint layer and produces deterministic downstream state effects.

T+0
Booking request
L1 Core

Slot feasibility evaluated against live capacity graph. Commitment executed atomically if feasible; request rejected at boundary if not.

T+1
Production commitment
L1 Core

APX Engine locks slot capacity. State propagates to storefront availability model. Allocation graph updated with delivery assignment.

T+2
Route optimization
L2 Connect

Assignment ingested by AI routing model. Run composition recalculated under live fleet state. Driver execution node updated.

T+3
Query resolution
L3 Care+

Inbound queries resolved against live L1/L2 state. Mutation requests validated and executed. ETA served from routing model.

T+n
State closure
All layers

Delivery event captured. State transitions logged across graph. Constraint model updated for subsequent allocation cycles.
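The T+0 through T+n flow above can be expressed as an explicit transition table. This is a simplified illustration: each transition is owned by one layer, and any jump not in the table is rejected, which is what makes the downstream state effects deterministic.

```python
# Sketch of the state flow as an owned-transition table (illustrative):
# illegal jumps are rejected rather than silently reconciled.
TRANSITIONS = {
    ("requested", "committed"): "L1",    # T+0 -> T+1: feasibility + commit
    ("committed", "routed"):    "L2",    # T+1 -> T+2: run recomposition
    ("routed",    "in-flight"): "L2",    # driver execution node updated
    ("in-flight", "closed"):    "all",   # T+n: delivery event, graph update
}

def advance(state: str, target: str) -> str:
    if (state, target) not in TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = "requested"
for nxt in ["committed", "routed", "in-flight", "closed"]:
    s = advance(s, nxt)
assert s == "closed"
```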

Production deployments

Barmus is not a tool.
It is an execution layer.

Access is by qualification. Our engineering team maps your operational topology and defines integration scope before any deployment commitment. Typical evaluation: 30 minutes.

© 2026 Barmus Systems Ltd — All rights reserved
