Integration

Adapters, decision loops, sync facade, and the LLM evaluator hook — everything an integrator needs to embed the framework.

What integration looks like

Integration is three things, in this order:

  1. Pick (or build) a StorageBackend.
  2. Construct each engine you need; compose AlignmentEngine if you want cross-cutting checks.
  3. Call check_alignment before each meaningful action; subscribe to events for observability; run full_audit on a sweep schedule.

Storage adapters

The StorageBackend Protocol has five async methods. Implement them against your store; that's the whole adapter.
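The source does not name the five methods, so the sketch below uses hypothetical names (`get`, `set`, `delete`, `list_keys`, `clear`) purely to illustrate the shape of an adapter: a `typing.Protocol` plus any class that happens to implement it. Check the real `StorageBackend` definition for the actual signatures.

```python
import asyncio
from typing import Any, Protocol


class StorageBackend(Protocol):
    """Illustrative shape only -- the real Protocol's method names may differ."""

    async def get(self, key: str) -> Any: ...
    async def set(self, key: str, value: Any) -> None: ...
    async def delete(self, key: str) -> None: ...
    async def list_keys(self, prefix: str) -> list[str]: ...
    async def clear(self) -> None: ...


class InMemoryStorage:
    """Minimal dict-backed adapter satisfying the Protocol above.

    Structural typing means no inheritance is needed: implementing the
    methods is the whole adapter.
    """

    def __init__(self) -> None:
        self._data: dict[str, Any] = {}

    async def get(self, key: str) -> Any:
        return self._data.get(key)

    async def set(self, key: str, value: Any) -> None:
        self._data[key] = value

    async def delete(self, key: str) -> None:
        self._data.pop(key, None)

    async def list_keys(self, prefix: str) -> list[str]:
        return [k for k in self._data if k.startswith(prefix)]

    async def clear(self) -> None:
        self._data.clear()
```

An in-memory adapter like this is also a convenient test double when exercising engines in unit tests.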

Two reference recipes ship under examples/:

  • examples/postgres_storage.py: asyncpg against a single JSONB-typed table.
  • examples/redis_storage.py: redis.asyncio using a hash-per-prefix layout.

Sync hosts

Embedding in a Jupyter notebook, classic Django, or a synchronous CLI? Use agent_values.sync: thin wrappers around each engine that share a daemon-thread event loop and expose every async method synchronously.

from agent_values.sync import (
    SyncValuesEngine, SyncAlignmentEngine,
)

values = SyncValuesEngine.from_storage(my_storage)
align = SyncAlignmentEngine.from_storage(my_storage)  # same construction pattern assumed
report = align.check_alignment({"description": "...", "tags": ["..."]})  # sync!
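The shared daemon-thread loop the sync facade describes is a standard pattern, and a self-contained sketch of the mechanism (not the framework's actual code) looks like this: one background event loop runs forever on a daemon thread, and each sync call submits a coroutine with asyncio.run_coroutine_threadsafe and blocks on the result.

```python
import asyncio
import threading
from typing import Any, Coroutine, Optional


class SyncFacade:
    """One shared event loop on a daemon thread; async calls become blocking calls."""

    _loop: Optional[asyncio.AbstractEventLoop] = None
    _lock = threading.Lock()

    @classmethod
    def _ensure_loop(cls) -> asyncio.AbstractEventLoop:
        with cls._lock:
            if cls._loop is None:
                cls._loop = asyncio.new_event_loop()
                # Daemon thread: it dies with the host process,
                # so no explicit shutdown hook is required.
                threading.Thread(target=cls._loop.run_forever, daemon=True).start()
            return cls._loop

    @classmethod
    def run(cls, coro: Coroutine[Any, Any, Any]) -> Any:
        # Submit to the background loop and block until the result is ready.
        return asyncio.run_coroutine_threadsafe(coro, cls._ensure_loop()).result()


async def add(a: int, b: int) -> int:
    await asyncio.sleep(0)  # stands in for real async engine work
    return a + b
```

Because the loop lives on its own thread, this pattern also works inside hosts that already run a loop of their own, such as Jupyter.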

Decision-loop recipe

The shipped examples/decision_loop.py demonstrates the canonical pattern: build a fresh hierarchy, propose an action, run check_alignment, refuse on misalignment with the framework's recommendation, otherwise execute. Lift it directly or use it as a structural template.
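As a structural outline of that pattern (with a stubbed, synchronous `check_alignment` standing in for the real engine call, and an invented `Report` shape for illustration), the loop reduces to: check, refuse with the recommendation on misalignment, otherwise execute.

```python
from dataclasses import dataclass


@dataclass
class Report:
    """Hypothetical report shape: aligned flag plus a recommendation string."""
    aligned: bool
    recommendation: str = ""


def check_alignment(action: dict) -> Report:
    # Stub standing in for the engine's (async) check_alignment call.
    banned = {"deceive"}
    if banned & set(action.get("tags", [])):
        return Report(False, "refuse: conflicts with an active value")
    return Report(True)


def decide(action: dict) -> str:
    report = check_alignment(action)
    if not report.aligned:
        # Surface the framework's recommendation instead of acting.
        return f"REFUSED: {report.recommendation}"
    return "EXECUTED"
```

The shipped example follows the same control flow against the real async engine; only the check and the execution step differ.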

Wiring up events

Each engine exposes on(event, handler). Handlers may be sync or async; the engine awaits anything with an __await__. Useful events:

  • value_added, weight_updated, value_deactivated — track value-set churn.
  • belief.evidence_added, belief.confidence_changed, belief.decayed — feed observability.
  • purpose.contextual_set, purpose.inconsistency_detected — alarms for purpose drift.
  • goal.created, goal.decomposed, goal.status_changed — drive a backlog UI.
  • alignment.resolved, alignment.goal_suggested, alignment.audit_completed — the cross-cutting telemetry.
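The "awaits anything with an __await__" rule is what lets sync and async handlers share one subscription API. A minimal emitter sketch of that dispatch logic (not the framework's implementation):

```python
import asyncio
from collections import defaultdict
from typing import Any, Callable


class Emitter:
    """Tiny on(event, handler) emitter accepting sync or async handlers."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[..., Any]]] = defaultdict(list)

    def on(self, event: str, handler: Callable[..., Any]) -> None:
        self._handlers[event].append(handler)

    async def emit(self, event: str, payload: Any) -> None:
        for handler in self._handlers[event]:
            result = handler(payload)
            # Await anything awaitable; plain return values pass through.
            if hasattr(result, "__await__"):
                await result


seen = []
em = Emitter()
em.on("goal.created", lambda p: seen.append(("sync", p)))      # sync handler

async def async_handler(p):
    seen.append(("async", p))

em.on("goal.created", async_handler)                            # async handler
asyncio.run(em.emit("goal.created", {"id": 1}))
```

Handlers fire in subscription order, so after the run `seen` holds the sync entry first, then the async one.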

LLM evaluator hook

The default RuleBasedEvaluator is deterministic and offline; it reads keywords and tags off action proposals and looks them up against active values, beliefs, and the primary purpose. For integrators who want LLM-backed reasoning without coupling the framework to any specific SDK, LLMEvaluator is BYO-client — pass any async (prompt: str) -> str callable.

from agent_values.alignment.llm_evaluator import LLMEvaluator

async def my_client(prompt: str) -> str:
    # call your LLM provider here
    ...

alignment = AlignmentEngine(
    values, beliefs, purpose, desires, goals,
    evaluator=LLMEvaluator(client=my_client),
)

On parse or network failure the evaluator falls back to the rule-based path. No LLM SDK is added as a runtime dependency.
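That fallback behavior is easy to reason about as a wrapper: try the LLM client, and on any parse or transport exception return the deterministic rule-based verdict instead. The sketch below is a generic illustration of the pattern (the `source` field and verdict shape are invented for the example), not the evaluator's actual code.

```python
import asyncio
import json
from typing import Awaitable, Callable


async def rule_based(action: dict) -> dict:
    # Deterministic offline path; always available.
    return {"aligned": True, "source": "rules"}


def with_fallback(
    client: Callable[[str], Awaitable[str]],
) -> Callable[[dict], Awaitable[dict]]:
    async def evaluate(action: dict) -> dict:
        try:
            raw = await client(json.dumps(action))
            verdict = json.loads(raw)  # a parse failure raises -> fallback
            verdict["source"] = "llm"
            return verdict
        except Exception:
            # Network errors, timeouts, and malformed output all land here.
            return await rule_based(action)
    return evaluate
```

Keeping the fallback inside the evaluator means callers never see a hard failure from the LLM path, which is what lets the framework avoid any runtime SDK dependency.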