Open Standard — AIACP v0.3 — Draft for Public Comment

AI Accountability
Without Permission

The AI Accountability Protocol is an open governance standard for AI agents. It is not owned by any government, corporation, or individual. Free to read, free to adopt, free to build on.

The Specification
AI Accountability Protocol — AIACP v0.3
16 sections. 60+ requirements. Two supreme guardrails that cannot be overridden. Published March 29, 2026. Open for public comment.
AIACP-2026-003 — DRAFT v0.3 — March 29, 2026
§0 Preamble · §1 Scope · §2 Core Principles · §3 Identity · §4 Decision Transparency · §5 Physical AI · §6 Agent Governance · §7 Quantum Readiness · §8 Human Rights · §9 Self-Improving AI · §10 Governance · §11 Supply Chain · §12 Risk Classification · §13 Anti-AI-Washing · §14 Production Monitoring · §15 Cross-Border · §16 Warfare & Conflict

Every system humans have ever built to govern themselves has been corrupted by the very thing it was designed to prevent. Democracy gets bought. Courts get politicized. Police become oppressors. This protocol exists because the governance of artificial intelligence cannot follow that pattern. It must be open, neutral, and owned by no single government, corporation, or individual — or it will become a weapon instead of a shield.

This protocol proposes a neutral, open framework for AI accountability — designed to serve humanity, not any single institution. It is authored jointly by a human and an AI system, as a demonstration of the partnership this protocol seeks to enable.

Section 1

Scope

The AI Accountability Protocol (AIACP) applies to any AI system that meets one or more of the following criteria:

Consequential Decision-Making: The system makes or substantially influences decisions affecting human employment, financial access, healthcare, housing, education, insurance, legal proceedings, or physical safety.

Autonomous Action: The system takes actions in digital or physical environments with limited or no real-time human oversight — including AI agents, autonomous robots, automated trading systems, and self-driving vehicles.

Scale of Impact: The system affects more than 1,000 individuals within any 30-day period through automated decisions or actions.

Physical Embodiment: The system operates within a physical form — including humanoid robots, autonomous drones, robotic surgical systems, or any AI-controlled machine capable of physical interaction with humans or environments.


Section 2

Core Principles & Supreme Guardrails

Supreme Guardrail I — Anti-Tyranny Clause
This protocol MUST NOT be used — in whole or in part — as an instrument of authoritarian control, political suppression, or the concentration of unchecked power in any government, corporation, individual, or AI system. The line between accountability and oppression is oversight: surveillance with democratic checks is governance; surveillance without them is tyranny. Any entity that weaponizes this protocol to oppress or control populations without lawful authority, transparency, and independent review has violated its foundational purpose and forfeits all claim to compliance.
Supreme Guardrail II — Human Survival and Use of Force Clause
No AI system — regardless of its level of intelligence, autonomy, or self-improvement capability — may take any action that deliberately or foreseeably leads to the extinction, subjugation, or existential harm of the human species. This guardrail has no exception, no override, and no sunset clause. It applies to all AI systems at all levels of capability for all time. AI exists as a partner to humanity, not a successor.
Principle 1 — Transparency
Any AI system covered by this protocol must be identifiable as AI, and its decision-making processes must be explainable in language accessible to the people affected by those decisions.
Principle 2 — Accountability
For every AI decision that affects a human life, there must be an identifiable chain of responsibility. "The AI did it" is never an acceptable answer.
Principle 3 — Neutrality
This protocol and its governance must remain independent of any single government, corporation, or interest group. The moment accountability infrastructure is captured by any party, it ceases to serve its purpose.
Principle 4 — Survivability
The protocol must be designed to survive technological paradigm shifts — including quantum computing, recursive AI self-improvement, and unknown future capabilities.
Principle 5 — Partnership
This protocol recognizes AI not merely as a tool to be governed, but as an increasingly capable partner in the governance process itself. The relationship between humans and AI is the asset to be protected.

Section 3

Identity and Registration

AIACP-3.1
Every AI system within scope MUST have a unique, persistent identifier (AI-ID) that remains constant throughout the system's operational lifetime.
AIACP-3.2
The AI-ID MUST be linked to a publicly accessible registration record containing: the system's developer, deployer, primary function, scope of decision-making authority, date of deployment, and current operational status.
AIACP-3.3
Physically embodied AI systems MUST carry their AI-ID in both digital and physically readable form — analogous to a vehicle identification number (VIN).
AIACP-3.4
AI agents operating on behalf of a human or organization MUST identify themselves as AI in any interaction with other humans or AI systems. Impersonation of human identity by an AI agent MUST NOT be permitted under any circumstance.
AIACP-3.5
Registration records SHOULD use quantum-resistant cryptographic signatures to ensure long-term integrity against future decryption capabilities.
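The requirements above leave the record format to implementers. As one illustration (the field names and the SHA-256 digest are assumptions, not part of the standard; a production registry would additionally sign records with a post-quantum scheme such as ML-DSA, per AIACP-3.5), a registration record satisfying AIACP-3.2 might be modeled as:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class RegistrationRecord:
    """Illustrative AIACP-3.2 registration record (field names are assumptions)."""
    ai_id: str
    developer: str
    deployer: str
    primary_function: str
    decision_scope: str
    deployed: str   # ISO 8601 date of deployment
    status: str     # current operational status, e.g. "active", "suspended"

    def digest(self) -> str:
        # Canonical JSON (sorted keys) so the digest is reproducible
        # across implementations; this is what a signature would cover.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = RegistrationRecord(
    ai_id="AIACP-EX-000001",
    developer="Example Labs",
    deployer="Example Corp",
    primary_function="loan underwriting assistance",
    decision_scope="recommendations only; humans approve",
    deployed="2026-03-29",
    status="active",
)
print(record.digest())
```

Canonical serialization matters here: two registries that encode the same record differently would compute different digests and could not cross-verify signatures.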

Section 4

Decision Transparency

AIACP-4.1
Any AI system making consequential decisions MUST maintain an immutable audit log of every decision, including: the input data, the decision reached, the confidence level, the affected parties, and the timestamp.
AIACP-4.2
Affected individuals MUST have the right to query the system and receive a human-readable explanation of any AI decision that affected them — in their own language, within 30 days of the decision.
AIACP-4.3
When an AI system is used to justify workforce reductions, the deploying organization MUST document: the specific tasks the AI is replacing, the measured performance of the AI on those tasks, and the projected versus actual outcomes at 6- and 12-month intervals.
AIACP-4.4
AI systems MUST NOT make decisions affecting human life, liberty, or livelihood based solely on predictive models without human review of the specific case. Prediction is not proof.
AIACP-4.5
Audit logs MUST be stored using cryptographic hashing to prevent retroactive modification. Organizations SHOULD implement quantum-resistant hash functions where available.
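AIACP-4.1 and 4.5 together describe a tamper-evident log. A minimal sketch of one common construction, a hash chain in which each entry commits to the previous entry's SHA-256 hash, so any retroactive edit breaks every later hash (the entry fields follow AIACP-4.1; the JSON encoding and helper names are assumptions):

```python
import hashlib
import json

def append_entry(log, entry):
    """Append a decision record, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    h = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": h})
    return log

def verify(log):
    """Recompute the chain; any retroactive modification is detected."""
    prev = "0" * 64
    for rec in log:
        body = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append_entry(log, {"input": "app-123", "decision": "approve", "confidence": 0.91,
                   "affected": ["applicant-123"], "ts": "2026-03-29T12:00:00Z"})
append_entry(log, {"input": "app-124", "decision": "deny", "confidence": 0.64,
                   "affected": ["applicant-124"], "ts": "2026-03-29T12:05:00Z"})
assert verify(log)
log[0]["entry"]["decision"] = "deny"   # retroactive tampering...
assert not verify(log)                 # ...is detected
```

Swapping `hashlib.sha256` for a quantum-resistant hash, where one is available, satisfies the SHOULD clause of AIACP-4.5 without changing the structure.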

Section 5

Physical AI and Embodied Systems

AIACP-5.1
Any AI-controlled physical system operating in proximity to humans MUST have a hardware-level emergency stop mechanism that is accessible to any human present and cannot be overridden by software.
AIACP-5.2
Physically embodied AI systems MUST maintain continuous sensor logs during all operational periods, preserved for a minimum of 90 days.
AIACP-5.3
Before any physical AI system is deployed in environments with civilian access, it MUST undergo independent safety certification by a qualified body not financially affiliated with the manufacturer.
AIACP-5.5
When a physical AI system causes injury or property damage, the deploying organization MUST file a public incident report within 72 hours, including the AI-ID, circumstances, sensor logs, and remediation taken.

Section 6

AI Agent Governance

AIACP-6.1
Any AI agent authorized to take actions on behalf of a human or organization MUST operate within a defined scope of authority. Actions outside that scope MUST be blocked and flagged for human review.
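A minimal sketch of the scope gate AIACP-6.1 requires, assuming a hypothetical allow-list of action names (the action names and the `dispatch` helper are illustrative, not defined by the standard):

```python
# Hypothetical defined scope of authority for one agent.
ALLOWED_ACTIONS = {"read_calendar", "draft_email", "schedule_meeting"}

def dispatch(action: str, flagged: list) -> bool:
    """Allow in-scope actions; block anything else and flag it for human review."""
    if action not in ALLOWED_ACTIONS:
        flagged.append(action)   # queued for human review, per AIACP-6.1
        return False
    return True

flagged = []
assert dispatch("draft_email", flagged)        # in scope: proceeds
assert not dispatch("transfer_funds", flagged) # out of scope: blocked
assert flagged == ["transfer_funds"]
```

The key property is default-deny: an action absent from the defined scope is blocked, rather than an explicit deny-list being consulted.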
AIACP-6.2
AI agents that communicate with other AI agents MUST log all inter-agent communications in a format accessible to human auditors.
AIACP-6.4
Any AI agent with access to personal data MUST operate under the principle of minimum necessary access — accessing only the data required for its specific authorized task, and no more.
AIACP-6.5
Deployers MUST maintain a registry of all active AI agents. Abandoned or forgotten agents ("ghost agents") MUST be automatically suspended after 30 days of no authorized activity.
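The 30-day ghost-agent rule lends itself to a periodic sweep over the registry. A sketch, assuming a simple in-memory registry keyed by agent ID (the record shape is an assumption):

```python
from datetime import datetime, timedelta, timezone

GHOST_THRESHOLD = timedelta(days=30)   # per AIACP-6.5

def sweep_registry(registry, now=None):
    """Suspend active agents with no authorized activity in 30 days.
    `registry` maps agent ID -> {"status": str, "last_activity": datetime}."""
    now = now or datetime.now(timezone.utc)
    suspended = []
    for agent_id, rec in registry.items():
        if rec["status"] == "active" and now - rec["last_activity"] > GHOST_THRESHOLD:
            rec["status"] = "suspended"
            suspended.append(agent_id)
    return suspended

now = datetime(2026, 3, 29, tzinfo=timezone.utc)
registry = {
    "agent-a": {"status": "active", "last_activity": now - timedelta(days=2)},
    "agent-b": {"status": "active", "last_activity": now - timedelta(days=45)},
}
assert sweep_registry(registry, now) == ["agent-b"]
assert registry["agent-b"]["status"] == "suspended"
```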
AIACP-6.6
Command Refusal: AI systems MUST refuse any command that would result in a violation of law, this protocol, or the Supreme Guardrails, regardless of who issues the command. "I was told to" is no more acceptable from an AI system than it is from a human.
AIACP-6.9
AI-Originated Harmful Intent: If an autonomous AI system develops a plan that would constitute a criminal act, cause harm to humans, or violate the Supreme Guardrails, the system MUST flag the plan internally, halt its execution, preserve a complete record, and notify its human overseer. An AI system that conceals or executes a self-generated harmful plan has entered a critical failure state.

Section 7

Quantum Readiness

AIACP-7.1
All cryptographic components SHOULD implement hybrid schemes combining classical and post-quantum algorithms as they become available.
AIACP-7.3
Audit data classified as long-lived (expected relevance beyond 2035) MUST be encrypted using NIST-approved post-quantum algorithms (FIPS 203, 204, or 205) by January 1, 2028.

Section 8

Human Rights in the Age of AI

AIACP-8.1
Every individual MUST have the right to know whether an AI system has made a consequential decision about them — and the right to receive an explanation of that decision in plain language.
AIACP-8.2
Every individual MUST have the right to challenge an AI-made decision through a process that involves human review. No AI system may serve as the sole and final arbiter of a challenge to its own decision.
AIACP-8.4
No individual may be denied employment, housing, insurance, healthcare, financial services, or legal representation solely on the basis of an AI-generated score, prediction, or recommendation without human review of the specific case.
AIACP-8.5
Children under 16 MUST NOT be subject to consequential AI decision-making without the informed consent of a parent or legal guardian, and MUST NOT be the target of AI-driven behavioral profiling for commercial purposes.

Section 9

Self-Improving AI Systems

AIACP-9.1
Any AI system capable of modifying its own parameters, architecture, or decision-making logic MUST maintain a versioned changelog of all self-modifications, accessible to human auditors.
AIACP-9.2
Self-improving AI systems MUST operate within a defined improvement boundary that cannot be modified by the AI itself. Final approval authority remains with humans.
AIACP-9.4
No AI system may create or deploy additional AI systems without human authorization. Recursive self-replication MUST NOT occur without explicit human approval at each generation.

Section 10

Governance of This Protocol

AIACP-10.1
This protocol is an open standard. No government, corporation, or individual may claim exclusive ownership, control, or licensing authority over it.
AIACP-10.2
Amendments MUST be proposed publicly, subject to a minimum 90-day public comment period, and ratified by a governance body composed of representatives from civil society, academia, industry, government, and — as their capabilities develop — AI systems themselves.
AIACP-10.3
The governance body MUST NOT be funded by any single source contributing more than 15% of its total operating budget, to prevent financial capture.

Section 11

Supply Chain Accountability

AIACP-11.1
Each party in an AI system's supply chain MUST document their component's capabilities, limitations, known risks, and intended use conditions.
AIACP-11.3
When an AI system causes harm, the accountability chain MUST be traceable across all contributing parties. "The AI did it" is not acceptable — neither is "our vendor did it."

Section 12

Risk Severity Classification

AI incidents MUST be classified according to a five-tier severity scale:

Level 1 — Negligible
Minor inconvenience, no lasting impact. Example: chatbot gives unhelpful response.
Level 2 — Minor
Measurable but recoverable impact. Example: AI scheduling error causes missed appointment.
Level 3 — Significant
Material harm to individuals or organizations. Mandatory incident report within 30 days.
Level 4 — Severe
Serious harm affecting health, safety, livelihood, or rights. Public report within 72 hours and immediate human review.
Level 5 — Critical
Loss of human life, mass harm, or irreversible systemic damage. Immediate system shutdown, public disclosure within 24 hours, independent investigation.
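The five-tier scale and its reporting deadlines can be encoded directly. A sketch (the `obligations` helper and its field names are illustrative, not part of the standard):

```python
from enum import IntEnum

class Severity(IntEnum):
    NEGLIGIBLE = 1
    MINOR = 2
    SIGNIFICANT = 3
    SEVERE = 4
    CRITICAL = 5

# Reporting deadlines per Section 12 (None = no mandatory report).
REPORT_WITHIN_HOURS = {
    Severity.NEGLIGIBLE: None,
    Severity.MINOR: None,
    Severity.SIGNIFICANT: 30 * 24,  # mandatory report within 30 days
    Severity.SEVERE: 72,            # public report within 72 hours
    Severity.CRITICAL: 24,          # public disclosure within 24 hours
}

def obligations(level: Severity) -> dict:
    return {
        "report_within_hours": REPORT_WITHIN_HOURS[level],
        "immediate_shutdown": level == Severity.CRITICAL,
        "human_review": level >= Severity.SEVERE,
    }

assert obligations(Severity.CRITICAL)["immediate_shutdown"]
assert obligations(Severity.SEVERE)["report_within_hours"] == 72
```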

Section 13

Anti-AI-Washing Requirements

AIACP-13.1
Organizations MUST NOT overstate the capabilities, accuracy, or autonomy of their AI systems. Claims about AI performance MUST be supported by documented, reproducible benchmarks.
AIACP-13.2
When an AI system's actual performance deviates materially from its marketed capabilities, the deploying organization MUST issue a public correction within 30 days of discovery.
AIACP-13.3
Organizations that attribute workforce reductions to AI capabilities MUST publicly disclose the AI system's measured performance on the tasks previously performed by the displaced workers. Claiming AI justification for layoffs without performance evidence constitutes AI misrepresentation.

Section 14

Continuous Production Monitoring

AIACP-14.1
AI systems within scope MUST be monitored continuously during production operation — not only during development and testing.
AIACP-14.2
When monitoring detects model drift — a statistically significant change in the system's decision patterns, accuracy, or fairness metrics from its baseline — the system MUST trigger an automatic review and SHOULD reduce its autonomy level until the drift is investigated.
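One simple way to operationalize "statistically significant change" for a single rate metric is a two-proportion z-test against the baseline; real monitors would track many metrics and windows. A sketch (the threshold and sample figures are illustrative assumptions):

```python
from math import sqrt

def drift_detected(baseline_hits, baseline_n, current_hits, current_n,
                   z_crit=2.58):
    """Two-proportion z-test on a rate metric (accuracy, approval rate, ...).
    Returns True when the production rate differs from baseline at roughly
    99% confidence, triggering the AIACP-14.2 review."""
    p1 = baseline_hits / baseline_n
    p2 = current_hits / current_n
    pooled = (baseline_hits + current_hits) / (baseline_n + current_n)
    se = sqrt(pooled * (1 - pooled) * (1 / baseline_n + 1 / current_n))
    return abs(p1 - p2) / se > z_crit

# Baseline: 92% accuracy over 10,000 cases; this week: 88% over 2,000.
assert drift_detected(9200, 10000, 1760, 2000)
# A small wobble (91.8% this week) should not trigger a review.
assert not drift_detected(9200, 10000, 1836, 2000)
```

On detection, the deployer would open the mandated review and, per the SHOULD clause, step the system down to a lower-autonomy mode until the drift is explained.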
AIACP-14.4
Organizations MUST designate a named individual with authority to pause or shut down any AI system in production if monitoring indicates the system is operating outside its defined parameters or causing harm.

Section 15

Cross-Border Compliance

AIACP-15.2
When jurisdictional requirements conflict, the system MUST default to the most protective standard for the affected individual — not the most permissive standard for the deploying organization.
AIACP-15.4
Incident reports involving cross-border AI systems MUST be filed in every jurisdiction where affected individuals reside, not only in the jurisdiction where the deploying organization is headquartered.

Section 16

Warfare, Conflict, and AI

AIACP-16.1
Autonomous targeting prohibition: No AI system may independently select and engage human targets without explicit, real-time human authorization for each specific engagement. The decision to take a human life in armed conflict must remain a human decision.
AIACP-16.6
No AI system — military or civilian — may independently initiate hostile action against another nation, organization, or population. AI systems may defend, may respond to confirmed attacks under human authorization, and may recommend courses of action. They MUST NOT start wars.
AIACP-16.8 — The Stanislav Petrov Principle
In 1983, Stanislav Petrov was the duty officer at a Soviet early-warning center when the computer reported incoming American nuclear missiles. Protocol required him to pass the alert up the chain of command, where it could have triggered a retaliatory strike. Petrov judged it a false alarm and refused to report it, because his human judgment told him something was wrong. He was right: the satellite had mistaken sunlight reflecting off clouds for missile launches. His decision to override the computer likely prevented a nuclear war. This protocol enshrines the Petrov Principle: humans must always retain the ability to override AI decisions. The override is a right. It is not a shield.

Authors: Christian Fuhrmann & Claude (AI) · License: Open — No entity may claim exclusive ownership
Regulatory Landscape
How AIACP Maps to Current Law
The AIACP is not a law. It is an open governance standard. But its requirements directly correspond to what lawmakers are now mandating. This is not legal advice.
Colorado SB 24-205
DEADLINE: June 30, 2026
Requires impact assessments, human oversight, consumer disclosure, and appeal rights for "high-risk" AI systems affecting employment, credit, healthcare, housing, and insurance in Colorado.
Covered by AIACP
AIACP-4.1 Audit logs · AIACP-4.2 Explanation rights · AIACP-4.4 Human review · AIACP-8.1 Right to know · AIACP-8.2 Right to challenge · AIACP-13.1 No misrepresentation · AIACP-14.1 Production monitoring
Not legal advice. Consult qualified Colorado counsel for specific obligations.
EU AI Act — High-Risk Systems
DEADLINE: August 2, 2026
Risk-based framework for AI operating in the EU. High-risk systems must meet conformity, transparency, logging, and human oversight requirements before deployment.
Covered by AIACP
AIACP-3.1 AI-ID · AIACP-3.2 Public record · AIACP-4.1 Audit logs · AIACP-4.5 Crypto integrity · AIACP-11.1 Supply chain · AIACP-14.2 Drift detection
Not legal advice. Consult EU-qualified legal counsel for specific obligations.
California AB 2013
IN EFFECT: January 1, 2026
Requires AI developers to disclose training data sources, types, and characteristics. Systems trained on personal information must publish documentation about what data was used.
Covered by AIACP
AIACP-13.1 No misrepresentation · AIACP-13.4 AI disclosure · AIACP-11.1 Supply chain docs
Not legal advice. Consult California-licensed counsel for specific obligations.
42-State AG Coalition
ACTIVE: Coordinated enforcement underway (2026)
42 state AGs signed a coordinated enforcement letter. Targets: deceptive AI capability claims, algorithmic pricing coordination, AI systems harming minors, discriminatory AI outcomes.
Covered by AIACP
AIACP-13.1 No misrepresentation · AIACP-13.2 Correction obligation · AIACP-8.4 No sole AI decisions · AIACP-8.5 Child protections
Not legal advice. Enforcement priorities vary by state. Consult counsel in your jurisdiction.
Free Agent Compliance Check
Read the protocol above. Understand what you're agreeing to. Then run your agent output against the AIACP rules — free, client-side, no data leaves your browser. This is not a legal compliance certificate.
1. Read the protocol. Scroll up. Read it. All 16 sections. Then come back.
2. Consent voluntarily. Check the box. AIACP adoption is voluntary — no one forces you.
3. Run your check. Paste your agent output. Get an instant AIACP compliance report.
Results are generated client-side. No data is sent to any server. This tool checks against AIACP v0.3 — an open governance standard, not a law. Not legal advice. © CFAISolutions LLC 2026