PUBLIC BETA — v0.2 ON PYPI · v0.3 IN FLIGHT
Instrument your agents with OpenTelemetry-native, audit-grade telemetry. One Helm install. Works alongside the observability stack you already run — Phoenix, Langfuse, Datadog, Honeycomb. No vendor lock-in, no UI to learn, Apache 2.0.
$ pip install 'singleaxis-fabric[anthropic,otlp]' # In your agent: from fabric import Fabric fabric = Fabric.from_env() fabric.enable_auto_instrumentation() with fabric.decision(session_id=sid) as d: with d.llm_call(system="anthropic", model="claude-haiku-4-5") as call: response = client.messages.create(...) call.set_usage( input_tokens=response.usage.input_tokens, output_tokens=response.usage.output_tokens, ) # Spans now flow with tenant/agent/session # attribution + gen_ai.* conventions to whatever # OTel backend you already use.
WHAT IT DOES
Fabric is a Python SDK plus a custom OpenTelemetry collector plus a set
of in-pod sidecars, packaged as one Helm chart. Each capability below
has a status —
green
is shipping in v0.2 today,
amber
lands in v0.3 (next ~5 weeks),
purple
is roadmap, grey is explicitly Commercial (not in the OSS).
Wrap each agent turn in fabric.decision(...). Every span carries fabric.tenant_id, fabric.agent_id, fabric.session_id, fabric.user_id, fabric.profile. Multi-tenant filter, audit-by-user, billing-by-tenant — all queryable from one trace backend.
decision.llm_call() and decision.tool_call() emit child spans carrying the standard gen_ai.* attributes (system, model, tokens, finish reasons). Auto-instrumentor extras for Anthropic, OpenAI, Bedrock, LangChain, and Cohere light up automatically.
Custom fabricguard processor strips any span attribute outside the configured namespace allowlist before traces leave your cluster. Last-mile defense against accidental PII or internal-attribute leakage to third-party backends.
decision.remember() records semantic / episodic / scratch memory writes with hashed content. decision.record_retrieval() captures RAG events. decision.request_escalation() is the first-class hook for human-in-the-loop reviews.
Plug-in adapters for LangGraph, Microsoft Agent Framework, and CrewAI that emit Fabric spans without forcing you to refactor your agent code. Your orchestrator stays; Fabric layers on top.
Per-pod sidecar over Unix domain socket. Detects EMAIL, PHONE, SSN, CREDIT_CARD and named entities; returns tag-replaced text (<EMAIL>) by default for prompt forwarding, with HMAC fingerprint mode for telemetry attribution. Sub-millisecond regex pre-filter handles obvious patterns.
Per-pod sidecar over Unix domain socket. Composes after PII redaction so the guardrail engine never sees raw sensitive data. Jailbreak defence, off-topic refusal, output safety rails. Pluggable interface lets you swap in commercial classifiers in v0.4.
Kubernetes CronJob that runs adversarial probe suites against your agent on a schedule or as a pre-release CI gate. Emits findings as both JSON reports and OpenTelemetry spans tagged event_class=red_team_result so failures flow through the same audit pipeline as everything else.
One helm install deploys the collector, custom processors, the PII and guardrail sidecars, and the adversarial-testing runner. Regulatory profiles (permissive-dev, eu-ai-act-high-risk) preset network policies, PDBs, and fail-loud validators.
Open Policy Agent embedded at four enforcement points: input, tool-call, output, and egress. Every decision recorded as a fabric.policy.evaluation span event with bundle digest, rule id, action, and reason. Hot-reloadable signed policy bundles.
Model Context Protocol instrumentation (mcp.connect, mcp.call_tool, resource reads). Rich tool tracking (params, results, errors). Memory-read primitive. Async-judge hook. fabric.task workflow span for long-running, event-driven agents.
SPIFFE / SPIRE-shaped identifiers on every span and mTLS handshakes for sidecar-to-collector traffic. Cryptographic proof that a span came from the agent it claims to — what regulated buyers ask for when audit forensics need to stand up in court.
Cryptographically signed evidence packets, WORM-backed storage with 7–10 year retention, per-regulation mapping (EU AI Act, SR 11-7, HIPAA, ISO/IEC 42001, NIST AI RMF). Regulator-acceptable export formats with verifiable provenance. Not OSS — the paid plane that consumes OSS telemetry.
Hosted judge worker pools consume Fabric's queue_judge events and write verdicts back as span events. The Context Graph links retrievals to decisions for full cross-turn provenance. Escalation routing with human-in-the-loop SLAs. Not OSS — the paid plane that consumes OSS telemetry.
HOW IT WORKS
The agent wraps each turn in fabric.decision(...).
The SDK opens UDS connections to in-pod PII and guardrail sidecars and
emits OpenTelemetry spans.
The collector enforces an egress allowlist before forwarding to whichever
OTel-compatible backend you already run.
fabric.decision(...) opens the parent span with tenant_id, agent_id, session_id, user_idguard_input() calls the in-pod PII redaction sidecar over Unix domain socketguard_input() calls the in-pod guardrail sidecar over Unix domain socketllm_call(...) wraps the call to your LLM provider (Anthropic, OpenAI, Bedrock, …)fabricguard — strips any attribute outside the configured namespace allowlistfabricredact — second-pass PII scan at egress (belt-and-braces)fabricsampler — HMAC-keyed tail sampling per event classfabricpolicy — OPA / Rego policy evaluationWORKS WITH YOUR STACK
Fabric is additive, not invasive. Keep your LLM provider, your agent framework, and your observability backend. Add Fabric for the governance and audit telemetry you don't have.
COMPANION: SASF
SingleAxis ships two distinct products. Fabric is the instrumentation and audit substrate you install in your cluster. The SingleAxis Standardized AI Safety Framework (SASF) is a human-led evaluation methodology — 162 codes across 16 failure categories, EU AI Act-aligned. Use one, the other, or both. They compose.
Open-source instrumentation, redaction, guardrails, and audit-grade telemetry for AI agents.
Standardized human-led AI safety evaluation. Independent third-party verdicts with audit-grade evidence.
GET STARTED
Install the SDK, wrap a turn, point at any OpenTelemetry backend.
A real LLM call inside a fabric.decision context produces a
parented span tree visible in Phoenix, Langfuse, Datadog, or wherever
you ship OTel today.
One pip command. The [anthropic] extra (or [openai], [bedrock], …) auto-wires the upstream OTel instrumentor.
pip install 'singleaxis-fabric[anthropic,otlp]'
The SDK manages the OpenTelemetry span lifecycle and attribution.
from fabric import Fabric
fabric = Fabric.from_env()
fabric.enable_auto_instrumentation(capture_content=False)
with fabric.decision(session_id="sess-1",
request_id="req-1",
user_id="user-42") as decision:
with decision.llm_call(system="anthropic",
model="claude-haiku-4-5-20251001") as call:
response = anthropic_client.messages.create(...)
call.set_usage(
input_tokens=response.usage.input_tokens,
output_tokens=response.usage.output_tokens,
finish_reason=response.stop_reason,
)
Spans render natively wherever you already ship OTel. For a self-hosted demo, Phoenix is one container.
export OTEL_EXPORTER_OTLP_ENDPOINT=http://your-collector:4318 export FABRIC_TENANT_ID=tenant-demo export FABRIC_AGENT_ID=support-bot # Or run Phoenix locally: docker run -p 6006:6006 arizephoenix/phoenix:latest
Bundles the collector, PII redaction sidecar, guardrail sidecar, and adversarial-testing runner. Available in v0.3 — next ~5 weeks; follow the repo.
helm install fabric oci://ghcr.io/singleaxis/charts/fabric \ --version 0.3.0 \ --namespace fabric --create-namespace \ --set tenant.id=my-tenant \ --set exporter.endpoint=http://your-backend:4318
We're looking for three design partners for the v0.3 Beta cohort. Free access during the design phase, founder-led onboarding, and direct influence on which Commercial-plane features ship first.