How OriginChain protects against misuse and ensures responsible AI project management
OriginChain will not generate manifests, agent briefs, or project plans for projects designed to cause harm.
This is non-negotiable regardless of how the request is framed. These restrictions apply to all tiers and all users.
Every project goes through an intent screening process before a manifest can be created.
Purpose Declaration
Users declare the end purpose, intended users, data practices, and potential for harm.
AI Evaluation
Claude evaluates the intent against ethical guidelines and flags concerns.
Approval or Block
Clean projects proceed. Blocked projects get a clear explanation with suggestions to adjust scope.
Permanent Record
All intent checks are logged to the safety_flags table. Blocked attempts are permanently recorded.
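OriginChain's internals are not public, but the four screening steps above can be sketched as a small pipeline. Everything here is a hypothetical stand-in: `IntentDeclaration`, `screen_intent`, the pluggable `evaluate` callback (which would wrap the Claude evaluation in practice), and the in-memory `safety_flags` list standing in for the `safety_flags` table.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class IntentDeclaration:
    """Step 1: the user's purpose declaration (hypothetical fields)."""
    project_id: str
    end_purpose: str
    intended_users: str
    data_practices: str
    harm_potential: str

@dataclass
class ScreeningResult:
    approved: bool
    explanation: str  # blocked projects get a clear explanation
    logged_at: str

# Stand-in for the safety_flags table; a real system would use a database.
safety_flags: list[dict] = []

def screen_intent(decl: IntentDeclaration, evaluate) -> ScreeningResult:
    """Steps 2-4: evaluate the declared intent, approve or block,
    and permanently record the outcome either way."""
    approved, explanation = evaluate(decl)
    record = {
        "project_id": decl.project_id,
        "approved": approved,
        "explanation": explanation,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    safety_flags.append(record)  # every check is logged, pass or fail
    return ScreeningResult(approved, explanation, record["logged_at"])
```

A caller would pass its own evaluator, for example a function that sends the declaration to a model and parses an approve/block verdict out of the response; blocked projects keep their log entry even after the user adjusts scope and resubmits.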
Projects can change over time. OriginChain continuously monitors for ethical drift, which occurs when a project evolves beyond its original intent.
Every audit and every Call Manny pivot check compares the current project state against the original intent declaration, and the system flags any divergence between the two.
When drift is detected, the user must explicitly confirm the change and explain its purpose before the manifest can be updated. All drift flags are permanently logged.
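The drift check described above can be illustrated as a field-by-field comparison against the original declaration, with an explicit-confirmation gate before the manifest updates. This is a minimal sketch under assumed data shapes; `detect_drift`, `apply_update`, and the `drift_log` list are all hypothetical names, not OriginChain's actual API.

```python
def detect_drift(original: dict, current: dict) -> list[str]:
    """Return the declaration fields whose current values diverge
    from the originally declared intent."""
    return [k for k, v in original.items() if current.get(k) != v]

def apply_update(original: dict, current: dict,
                 confirmed: bool, drift_log: list) -> dict:
    """Update the manifest only if no drift is found, or if the user
    has explicitly confirmed the change. Drift is always logged."""
    drifted = detect_drift(original, current)
    if drifted:
        drift_log.append({"fields": drifted})  # drift flags are permanently logged
        if not confirmed:
            raise PermissionError(
                f"Drift detected in {drifted}; explicit confirmation required"
            )
    return {**original, **current}
```

In a real system the confirmation step would also capture the user's explanation of the change, so the audit trail records why the project's scope moved, not just that it did.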
Team and Studio tier organizations get additional oversight capabilities.
OriginChain is built on Anthropic's Claude API and adheres to Anthropic's Acceptable Use Policy. Our safety layers add project-level accountability on top of Claude's model-level safeguards, creating defense in depth against misuse.
If you believe OriginChain is being used to facilitate harmful projects, please report it to safety@originchain.dev.

OriginChain takes safety seriously. These protections are built into every layer of the platform and cannot be bypassed.