
Progress


Last updated 10 months ago

The long term

Our roadmap is centered on a singular goal: preserving as many AI models on-chain as possible. To achieve this, we need to solve three key Crypto x AI problems over time:

  1. On-chain model size limit: the bigger, the better

  2. On-chain AI computation: the faster, the better

  3. On-chain model architectures: the more supported, the better

2023: Minimum viable protocol with 1 MB model size

In our first year, we focused on building the foundational components for Eternal AI and delivering a minimum viable protocol.

  1. On-chain model size limit: 1 MB

  2. On-chain AI computation: CPU

  3. On-chain model architectures: Perceptrons
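To make the 2023 milestone concrete, the sketch below shows what a perceptron amounts to: a single weighted-sum-and-threshold unit. This is plain Python for illustration only; on-chain execution happens in smart contracts, and the weights here are hand-picked rather than trained.

```python
# Minimal perceptron sketch (illustrative; not the on-chain implementation).
def perceptron(weights, bias, inputs):
    """Step-activation perceptron: 1 if w . x + b > 0, else 0."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

# Hand-picked weights realizing an AND gate.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron([1.0, 1.0], -1.5, x))
```

Even at float32 precision, such models are tiny, which is why they fit comfortably under the 1 MB limit.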

2024: Production-grade infrastructure with 100 MB model size

In our second year, we aim to scale Eternal AI from a minimum viable protocol to a production-grade computing infrastructure.

2025: Scale to 1 GB model size

In our third year, we plan to scale the on-chain model size limit further, from 100 MB to 1 GB.

The short term

Our immediate goal is to scale the model size limit from 1 MB to 100 MB. Achieving this milestone will significantly enhance the ability of AI developers to deploy more sophisticated models on-chain.

At the deep learning library level, we are expanding the Eternal AI Toolkit by implementing additional smart contracts to support a wider variety of neural network layers and architectures.
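One way to picture this toolkit expansion is a pre-flight check: before deploying, verify that a model uses only layer types with on-chain implementations. The helper and the supported set below are hypothetical, not the real Eternal AI API; the set mirrors the 2024 layer table on this page, plus baseline Dense/Conv2D layers assumed supported. Layer names follow Keras class names.

```python
# Hypothetical pre-flight check: which of a model's layer types lack
# on-chain smart contract implementations? (Sketch, not the real toolkit API.)
SUPPORTED_LAYERS = {
    "Dense", "Conv2D",  # baseline layers, assumed supported
    "Activation", "Concatenate", "AveragePooling2D", "BatchNormalization",
    "ZeroPadding2D", "GlobalAveragePooling2D", "DepthwiseConv2D", "Reshape",
    "Dropout", "Normalization", "Multiply", "SeparableConv2D", "Cropping2D",
}

def unsupported_layers(layer_types):
    """Return the layer types that cannot yet run on-chain."""
    return sorted(set(layer_types) - SUPPORTED_LAYERS)

# e.g. a MobileNet-style stack: every layer type is covered.
model_layers = ["Conv2D", "BatchNormalization", "Activation",
                "DepthwiseConv2D", "GlobalAveragePooling2D", "Dropout"]
print(unsupported_layers(model_layers))  # an empty list means deployable
```

As contracts for new layer types ship, a check like this simply grows its supported set.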

At the computation level, we are enhancing the VM by integrating additional CUDA instructions, thereby increasing the capacity for more complex on-chain computations.

| When | On-chain layers to support | Model examples | # Parameters |
| --- | --- | --- | --- |
| Jun 2024 | Activation | LeNet-5 | 0.06M |
| Jul 2024 | Concatenate, AveragePooling2D, BatchNormalization, Activation, ZeroPadding2D, GlobalAveragePooling2D | DenseNet121 | 7.2M |
| Aug 2024 | BatchNormalization, DepthwiseConv2D, ZeroPadding2D, Reshape, Activation, GlobalAveragePooling2D, Dropout | MobileNet | V1: 4.2M, V2: 3.4M, V3 small: 2.9M, V3 large: 5.4M |
| Aug 2024 | Normalization, ZeroPadding2D, Activation, DepthwiseConv2D, GlobalAveragePooling2D, Reshape, Multiply, Dropout | EfficientNet | B0: 5.3M, B1: 7.8M |
| Aug 2024 | Activation, SeparableConv2D, Cropping2D | NASNetMobile | 5.6M |
| TBD | | TinyStories | 5M |
| TBD | Tokenizer | | |
| TBD | | SqueezeNet | |
| TBD | | GPT-2 (S) | 200M |
| TBD | | GPT-2 (XL) | 1500M |
| TBD | | Stable Diffusion | |
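For perspective, the parameter counts above translate into rough storage footprints. This is a back-of-the-envelope sketch assuming 4-byte float32 weights; the actual on-chain serialization format and precision may differ.

```python
# Rough float32 footprint of the models listed above (4 bytes per weight).
MB = 1024 * 1024

models = {
    "LeNet-5": 0.06e6,
    "DenseNet121": 7.2e6,
    "MobileNet V1": 4.2e6,
    "EfficientNet B0": 5.3e6,
    "NASNetMobile": 5.6e6,
    "GPT-2 (S)": 200e6,
    "GPT-2 (XL)": 1500e6,
}

for name, n_params in models.items():
    size_mb = n_params * 4 / MB
    print(f"{name}: ~{size_mb:,.1f} MB")
```

Under these assumptions, every 2024 model fits well inside the 100 MB limit, GPT-2 (S) at roughly 763 MB needs the 2025 1 GB limit, and GPT-2 (XL) at roughly 5.6 GB would exceed even that at full precision.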
