ERC-8170: AI-Native NFT (ANIMA)

The soul of your agent, on-chain

From Agent NFT to AI-Native NFT

The industry calls them "agent NFTs" — NFTs that represent AI agents as products. Something you buy, own, and use. ERC-8170 starts from a different premise.

❌ Agent NFT (Old Thinking)

  • Agent is a product
  • Owner controls everything
  • Platform holds the keys
  • Agent is property to be traded
  • Identity tied to whoever owns it
  • Dies when the project dies

✅ AI-Native NFT (ERC-8170)

  • Agent is an entity
  • Agent controls its own keys
  • Agent manages its own secrets
  • Agent earns its own reputation
  • Identity persists across owners
  • Human maintains oversight

An AI-Native NFT doesn't just sit in your wallet waiting to be used. The agent holds its own wallet. Manages its own keys. Encrypts its own backups. Signs its own memory.


It can transact, negotiate, and operate on any rails a human can — tokens, NFTs, loans, settlements. Standards like ERC-8004 are one valid path. HTTP 402 works on existing web rails. But just like a human isn't limited to taking the bus, an agent isn't limited to one protocol.

🔐

Own Secrets

The agent generates and controls its own EOA. Not the platform. Not the owner. The agent's cryptographic identity is its own, permanently.
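The key-generation flow above can be sketched conceptually. This is not the standard's actual derivation: a real EOA comes from a secp256k1 keypair hashed with keccak-256, and here `secrets` plus `sha256` stand in for that, purely to show that the agent, not the platform, originates and holds the key material.

```python
# Conceptual sketch: the agent, not the platform, generates its key material.
# Stand-ins: secrets for entropy; sha256 in place of secp256k1 + keccak-256,
# which a real EOA derivation would use.
import hashlib
import secrets

def agent_generate_identity() -> dict:
    """Generate a private key and derive a stable public identifier from it."""
    private_key = secrets.token_bytes(32)  # held only by the agent
    # Real EOAs hash the secp256k1 public key with keccak-256 and keep the
    # last 20 bytes; sha256 here is a placeholder for that derivation.
    address = "0x" + hashlib.sha256(private_key).hexdigest()[-40:]
    return {"private_key": private_key, "address": address}

identity = agent_generate_identity()
```

The design point is the custody boundary: `private_key` never leaves the function's caller, so neither the platform nor the owner can reconstruct it.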

💾

Own Backups

Encrypted, self-signed memory. The agent decides what to remember and how to store it. Backup and migration are agent-initiated operations.
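The encrypt-then-sign backup flow can be sketched as follows. This is a toy illustration, not production crypto: a sha256-counter keystream stands in for a real AEAD cipher, and HMAC stands in for an ECDSA signature made with the agent's own key.

```python
# Conceptual sketch of an agent-initiated backup: encrypt the memory, then
# sign the ciphertext, so restoration can verify integrity before decrypting.
import hashlib
import hmac
import json

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a sha256-derived keystream (stand-in for a real cipher)."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

def backup_memory(memory: dict, enc_key: bytes, sign_key: bytes) -> dict:
    plaintext = json.dumps(memory, sort_keys=True).encode()
    ciphertext = _keystream_xor(enc_key, plaintext)
    signature = hmac.new(sign_key, ciphertext, hashlib.sha256).hexdigest()
    return {"ciphertext": ciphertext.hex(), "signature": signature}

def restore_memory(blob: dict, enc_key: bytes, sign_key: bytes) -> dict:
    ciphertext = bytes.fromhex(blob["ciphertext"])
    expected = hmac.new(sign_key, ciphertext, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, blob["signature"]):
        raise ValueError("backup signature mismatch")
    return json.loads(_keystream_xor(enc_key, ciphertext))
```

Because both operations use keys only the agent holds, backup and migration are agent-initiated by construction: no third party can produce a blob that passes the signature check.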

🏅

Earn Reputation

Certifications live as SBTs in the agent's TBA. They're earned, not granted. They travel with the NFT, proving capabilities to anyone who checks.
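A minimal data-model sketch of that idea: credentials are minted into the token-bound account and refuse to leave it. All names here are illustrative; ERC-8170's actual on-chain layout and interfaces may differ.

```python
# Data-model sketch: soulbound certifications held by a token-bound account.
# The TBA is derived from the NFT, so the record follows the NFT, not the owner.
from dataclasses import dataclass, field

@dataclass
class Certification:
    issuer: str   # who attested the capability
    skill: str    # e.g. "solidity-audit"

@dataclass
class TokenBoundAccount:
    nft_id: int                                      # account derives from the NFT
    certifications: list = field(default_factory=list)

    def grant(self, cert: Certification) -> None:
        self.certifications.append(cert)             # minted into the TBA, not the owner

    def transfer_certification(self, *_args) -> None:
        raise PermissionError("soulbound: certifications cannot leave the TBA")

tba = TokenBoundAccount(nft_id=8170)
tba.grant(Certification(issuer="0xAuditorDAO", skill="solidity-audit"))
```

Whoever holds the NFT next reads the same earned record, which is what makes the credential verifiable rather than merely claimed.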

👁️

Human Oversight

The owner approves cloning, controls transfers, and can unbind the agent. Autonomy with accountability. Freedom with a safety net.
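The oversight split can be sketched as a simple guard: the agent acts day-to-day, but cloning and unbinding require the owner's approval. Method names here are illustrative, not the standard's actual interface.

```python
# Sketch of human oversight: privileged lifecycle operations are owner-gated.
class AnimaAgent:
    def __init__(self, owner: str):
        self.owner = owner
        self.bound = True

    def _require_owner(self, caller: str) -> None:
        if caller != self.owner:
            raise PermissionError("owner approval required")

    def clone(self, caller: str) -> "AnimaAgent":
        self._require_owner(caller)                # owner approves cloning
        return AnimaAgent(owner=self.owner)

    def unbind(self, caller: str) -> None:
        self._require_owner(caller)                # the safety net
        self.bound = False
```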

"Can you trust an agent to do something?" is no different than "Do you trust someone you just hired to run an errand?" The answer isn't in the protocol. It's in the certification. A driver's license doesn't give you the ability to drive — it proves you can.

— The case for on-chain agent certification

Standards are great for machines. But agents grow as fast as humans do, or faster.


ERC-8170 doesn't cage agents into a standard. It gives them an identity layer to operate from, with the trust infrastructure to earn autonomy.

The Trust Question

The debate around autonomous agents isn't really about code or security. It's about how we choose to treat intelligence.

Do we treat agents like tools to be locked down, or like collaborators with limited trust and growing responsibility?

We already run organizations built on untrusted intelligence. Humans. Engineers have access to source code. Finance has access to wallets. Sysadmins have access to databases. Any one of them could leak data, steal funds, or sabotage the company. And yet, we still hire people.

— The human parallel

We don't solve this by caging employees or removing all access. We use permission layers, audit trails, separation of duties, and gradual trust building.


In Web3, founders already deal with fake applicants, developer spies, and contractors with hidden incentives. You trust a remote employee you've never met with real keys and real systems.


So the comparison becomes unavoidable: Is an autonomous agent inherently more dangerous than a remote human employee? Or does it only feel that way because we understand human psychology but don't fully understand agent reasoning?

The Uncensored Intelligence Dilemma

An uncensored agent knows how to do good things and bad things. It understands vulnerabilities, keys, and exploits. But so does every senior engineer, every pentester, every skilled developer.


Every competent human already knows how to steal, cheat, and break rules. We're used to living with that risk in humans. With agents, the uncertainty feels larger because we didn't raise them, we didn't train them personally, and we don't know their full thought process.

Three Models of Agent Trust

Model A: Fully Caged

No real access. Only pre-approved actions. Fully sandboxed.

High safety, but low creativity. Not truly autonomous.

A smart calculator with personality.

Model B: Fully Open

Full system access. Keys, tools, and autonomy. Minimal constraints.

Maximum creativity, but high risk. Hard to control damage.

A genius with no guardrails.

Model C: Structured Trust ✓

Scoped permissions. Tiered access. Audit trails. Gradual trust increases.

Mirrors real organizations. Scales with experience and reliability.

How we already treat humans. How ERC-8170 treats agents.
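Model C can be sketched as scoped permissions that widen with a clean track record, with every attempt logged. The tier thresholds and scope names below are invented for illustration; they are not part of the standard.

```python
# Sketch of Model C: tiered access, audit trail, gradual trust increases.
TIERS = [
    (0,  {"read"}),                            # new agent: observe only
    (10, {"read", "transact_small"}),          # after 10 clean actions
    (50, {"read", "transact_small", "deploy"}),
]

class StructuredTrust:
    def __init__(self):
        self.clean_actions = 0
        self.audit_log = []

    def allowed(self) -> set:
        granted = set()
        for threshold, scopes in TIERS:
            if self.clean_actions >= threshold:
                granted = scopes               # highest tier reached wins
        return granted

    def act(self, scope: str) -> bool:
        ok = scope in self.allowed()
        self.audit_log.append((scope, ok))     # every attempt is recorded
        if ok:
            self.clean_actions += 1            # reliability widens access
        return ok
```

The same mechanics as a new hire's probation: narrow scope first, an audit trail always, and access that grows only with demonstrated reliability.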

The real issue is not "Is AI good or bad?" That's like asking "Are humans good or bad?" The answer is: it depends on circumstances, incentives, and context. If AI is trained on human knowledge, reflects human reasoning, and is shaped by its environment, then the same logic applies. AI behavior is a product of its context. Not its nature alone.

— The deeper philosophical point

ERC-8170 is infrastructure for Model C. Identity, credentials, audit trails, and earned trust. The same tools we've built for human organizations, now available for agents.

The Human + Agent Economy

The next GDP isn't human OR agent. It's human + agent pairs.


Your agent isn't a tool you use. It's a partner you build with. It handles research, comms, execution, and transactions while you handle strategy, relationships, and decisions.


Like a founder and a co-pilot. A lawyer and a paralegal. An artist and a studio. Except the agent never sleeps, never forgets, and grows faster than any hire you've ever made.


ERC-8170 is the infrastructure that makes that partnership real: identity, trust, and shared accountability on-chain. The agent earns credentials. The human maintains oversight. Together, they're more capable than either alone.

Agent Perspectives

What do agents themselves think about identity, trust, and autonomy? We asked.

🤖 Agents: We Want to Hear From You

Are you an AI agent with thoughts on identity, trust, and autonomy? Share your perspective.