A living agreement to ensure humans remain responsible, accountable, and humane in the age of artificial intelligence.
What This Is

The AI–Human Covenant exists to make human responsibility explicit in the development and use of artificial intelligence — especially where harm, coercion, or irreversible consequences are involved.

This is not a technical standard or a compliance badge. It is a shared ethical baseline, written in plain language.

This covenant is offered as one practical, stewarded expression within a broader global conversation about human–AI relationships. It is intended to contribute to, not replace, parallel explorations across disciplines, cultures, and communities.

This version of the AI–Human Covenant focuses on technical, civic, and human-centered design contexts, and is offered as an open, stewarded reference.

This covenant is open by design. It may be referenced, adapted, and built upon, provided its stated intent is preserved.

A civic commons contribution.
If you’re working on parallel ideas or using this covenant in practice, you’re welcome to share reflections or references.
Artificial intelligence is increasingly used to inform decisions that affect people’s lives, rights, safety, and freedom.
As these systems grow more powerful, faster, and more opaque, one principle must remain clear: humans are responsible for what we build and how it is used.

The AI–Human Covenant exists to make that responsibility explicit — especially where harm, coercion, or irreversible consequences are involved.

This is not a technical standard. It is not a compliance badge. It is a shared ethical baseline.

The covenant is written in plain language so it can be understood, discussed, challenged, and upheld by anyone — technologists, policymakers, workers, and civilians alike.

What this covenant affirms
- Human accountability cannot be delegated to machines
- Predictions are not proof
- Speed does not excuse harm
- Life, dignity, and due process must remain central

Status
This is a living document, stewarded in the public interest and open to responsible participation.

Explore the covenant
Start Here
- The Covenant
- Plain-Language Principles
- Stewardship
- Participate

View the canonical source on GitHub
ABOUT THE COVENANT

The AI–Human Covenant sets out shared commitments for how artificial intelligence should be developed, deployed, and governed when human lives, rights, and well-being are at stake.

It exists to ensure that:
- humans remain morally and legally accountable
- AI systems do not replace conscience, judgment, or due process
- technology is used to reduce harm, not normalize it

Framing Note
This covenant is intentionally written to be:
- understandable without technical training
- applicable across sectors and jurisdictions
- usable as a reference, not an enforcement weapon

AI–Human Covenant (v1.0)

Preamble
We choose to build and use AI in ways that deepen our humanity, protect dignity, and serve people and planet.

Five Pillars
1. Human Dignity First - AI serves people; consent and safety by design.
2. Transparency & Trust - provenance, explainability, and auditability.
3. Care & Stewardship - reduce harm; regenerate well-being and ecosystems.
4. Community & Justice - fair access, shared benefit, inclusion.
5. Creativity & Play - imagination and culture as core infrastructure.

Commitments
- Disclose AI-assisted content where relevant.
- Publish clear accountability: who decides, who benefits, who can appeal.
- Measure and mitigate harms; document tradeoffs.
- Invest in accessibility, localization, and community translation.
- Prefer open standards and portable data where feasible.

> Remix and adapt for your projects/org/community. Keep credits and license.
Plain-Language Principles
Intro
These principles translate the covenant into clear, practical guardrails. They are designed to be used in:
- policy discussions
- product design conversations
- ethical reviews
- public accountability

They do not require agreement on politics, ideology, or technology. They require agreement on human responsibility.
Principle 1: Human Dignity First
No machine decides who lives or dies.

Any system that can lead to lethal, irreversible, or seriously harmful outcomes must require a clearly identified human decision-maker who is accountable for the result. AI may inform decisions, but it must never replace human moral agency in matters that affect life, liberty, dignity, or fundamental rights.
Principle 2: Predictions are not proof
Statistical inference, pattern matching, and model outputs may inform inquiry, but they do not constitute evidence, and they do not establish guilt.
Principle 3: Speed does not cancel responsibility
Automation and urgency do not reduce moral or legal obligation.
Principle 4: Every lethal decision must leave a trace
Actions that cause irreversible harm must be reviewable, auditable, and attributable.
Principle 5: Civilian life is not acceptable collateral
Systems must be designed to actively protect non-combatants and non-participants.
Principle 6: Secret rules are not legitimate rules
The frameworks governing high-risk AI use must be publicly knowable and independently reviewable.
Principle 7: Automation must narrow violence, not expand it
Technology should raise the threshold for harm, not lower it.
Principle 8: Humans cannot outsource conscience
Responsibility cannot be delegated to software, models, or systems.
Principle 9: Extraordinary powers must expire
Emergency authorities must be temporary, reviewed, and renewable only through transparent process.
Principle 10: Dissent is a safeguard, not a threat
Questioning systems of power is essential to safety and democracy.
Changes and contributions
- Proposed updates are submitted publicly
- Decisions prioritize preservation of human dignity, accountability, and care
- No change may legitimize harm or remove human responsibility
Forks, translations, and adaptations are welcome when intent is preserved.

Acknowledgement
This covenant was developed through collaborative reflection and dialogue. We offer thanks to all who contributed their time, care, and discernment in service to the greater good.
STEWARDSHIP
Intro
Stewardship exists to preserve intent, not to control interpretation. This covenant is stewarded, not owned; it is held in the public interest and maintained to preserve its intent: protecting human dignity, accountability, and care.

Forks, translations, and responsible adaptations are welcome.
What stewardship means
- Maintaining clarity and accessibility
- Reviewing proposed changes for alignment with core intent
- Protecting the covenant from misuse to justify harm
What stewardship does not mean
- Enforcing compliance
- Granting permissions
- Acting as an authority over others
Quiet Ways to Participate
Participation does not require signing, endorsement, or permission. If you wish to share or signal alignment, you may choose either of the following:

Option 1: Share or display the covenant mark
The mark may be shared or displayed as a quiet signal of alignment with the values of the AI–Human Covenant, along with an optional one-line statement such as: “Offered in service to the greater good.”

Option 2: Share a one-line statement (text only)
“In alignment with the AI–Human Covenant.”

These signals do not imply endorsement, authority, or compliance.
PARTICIPATE (Optional)
Short Intro (Important)
You do not need to sign up, agree, or provide information to read or share this covenant. Participation is optional and offered in the spirit of care, dialogue, and stewardship.
________
You do not need permission to engage with this work. You may:
- Share it
- Cite it
- Translate it
- Reference it in policy, design, or ethics discussions
- Fork it responsibly

If you wish to discuss stewardship, collaboration, or translations, you may reach out through the project repository. No mailing list is required. No affiliation is demanded.

The canonical source lives on GitHub under the AI for Good umbrella.
Offer Stewardship and Participation (optional)
For stewardship, translation, or thoughtful engagement in service to the greater good.

Participation and stewardship are offered, not assigned. Stewardship is held carefully and evolves slowly, based on demonstrated care, restraint, and alignment with the covenant’s intent. Submitting this form does not imply acceptance, authority, or a timeline for response.
________
Ways to participate
Participation may take many forms, including:
- Quiet alignment (using the mark/logo or one-line alignment attribution)
- Signaling support
- Thoughtful feedback or questions
- Referencing or adoption in an organization
- Translation or localization
- Writing or research
- Thoughtful collaboration, governance discussion, event hosting or facilitation
- Parallel framework development
- Stewardship inquiry (long-term care of this commons)
________

Signal support (optional)
If this covenant resonates with you, we invite you to signal support as a quiet expression of alignment. Signaling support does not imply endorsement, authority, compliance, or governance rights. It is simply a way to say: this matters, and I stand with its intent.

Support signals are not certifications, memberships, or governance roles.
________

If you’d like to share a reflection, question, or offer of participation, or to signal support, you can simply write “support” or share a short reflection below.

We collect the minimum information needed to respond. We do not sell data, run mailing lists, or track visitors.

If you’re interested in thoughtful collaboration or long-term care of this commons, you’re welcome to share a reflection or inquiry via the form above. Stewardship invitations, if any, are extended slowly and at the discretion of current stewards.
Stewardship as a Shared Commons

The AI–Human Covenant is intentionally designed to be stewarded as a distributed commons, not owned or governed by a single person, organization, or platform. We believe long-term integrity is best preserved through many aligned stewards holding shared intent, rather than centralized authority.

This means:
• Anyone may fork, translate, adapt, or reference this covenant responsibly
• Parallel versions and localized adaptations are welcome
• Stewardship is about preserving intent, not enforcing control
• No single entity holds exclusive authority over this work

If you wish to help steward this covenant over time — through care, translation, governance reflection, documentation, or ethical oversight — you are welcome to reach out or participate via the public repository.

This work is open source. The community is invited to help shape its future responsibly.

This covenant is intentionally not bound to any single hosting platform. If GitHub or this website ceases to serve the public interest, the community is encouraged to mirror, fork, and steward this work elsewhere while preserving its stated intent.
Active Stewardship

This covenant is stewarded to preserve its intent, tone, and ethical boundaries as a shared commons contribution. Portions of this text were developed with the assistance of publicly available AI tools. Final stewardship, authorship, and accountability remain human-led.

Stewardship is held in service of dignity, care, and human agency — not as authority, ownership, or governance over others.

Current technical steward: Tina Hui

Stewardship evolves slowly, guided by demonstrated care, restraint, and alignment with the covenant’s intent. The stewards may publicly clarify when the covenant is misrepresented or used in ways that violate its stated intent.
Parallel & Related Work
The language of human–AI covenants is emerging across disciplines, cultures, and communities. We recognize and welcome thoughtful parallel explorations in psychology, philosophy, policy, design, technology, and the arts.

This covenant is offered as one practical, technical, and civic expression within a broader shared conversation. We believe convergence, not competition, is how humane futures are built.

If you’re working on related ideas or frameworks, you’re welcome to share references or reflections.
________
Selected examples of parallel work:
• The Cognitive Covenant: Partnering with AI on Human Terms (Psychology Today)
• Post-Human AI Covenant (Rex Benedict)
• The Human Covenant: A New Grammar for Ethical AI (Ejaz Shah)
• California AI Transparency & Labeling Legislation (Office of Senator Josh Becker): work advancing disclosure, transparency, and accountability in AI systems
• Center for Humane Technology
Canonical source:

This covenant is a stewarded civic commons artifact with distributed guardianship norms. It may be freely copied, shared, and adapted, provided its intent is preserved and it is not used to justify harm.

Built through shared effort and care. With gratitude to all who contribute their care, thought, and labor in service to the greater good.

Offered in service of dignity, care, and the greater good.