Progress
The long term
Our roadmap is centered on a singular goal: to preserve as many AI models on-chain as possible. To achieve this, we need to solve three key Crypto x AI problems over time:
On-chain model size limit: the bigger, the better
On-chain AI computation: the faster, the better
On-chain model architectures: the more supported, the better
2023: Minimum viable protocol with 1 MB model size
In our first year, we focused on building the foundational components for Eternal AI and delivering a minimum viable protocol.
✅ On-chain model size limit: 1 MB
✅ On-chain AI computation: CPU
✅ On-chain model architectures: Perceptrons
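The 2023 milestone supports perceptrons on-chain. As a rough illustration of what evaluating one involves, here is a minimal Python sketch using integer fixed-point arithmetic (smart contracts have no native floats); the scale factor and weights are illustrative assumptions, not Eternal AI's actual encoding.

```python
# Minimal sketch of a perceptron forward pass in fixed-point
# arithmetic, the kind of integer-only math an on-chain contract
# could perform. SCALE = 10_000 (1.0 stored as 10_000) is an
# assumption for illustration only.

SCALE = 10_000  # fixed-point scale factor

def perceptron(weights, bias, inputs):
    """Weighted sum plus step activation, all in scaled integers."""
    acc = bias
    for w, x in zip(weights, inputs):
        acc += (w * x) // SCALE  # rescale after each multiply
    return 1 if acc > 0 else 0

# Example: an AND gate that fires only when both inputs are 1.0
w = [6_000, 6_000]      # weights 0.6, 0.6
b = -10_000             # bias -1.0
print(perceptron(w, b, [10_000, 10_000]))  # both high -> 1
print(perceptron(w, b, [10_000, 0]))       # one low  -> 0
```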
2024: Production-grade infrastructure with 100 MB model size
In our second year, we aim to scale Eternal AI from a minimum viable protocol to a production-grade computing infrastructure.
2025: Scale to 1 GB model size
In our third year, we plan to scale the on-chain model size limit further, to 1 GB.
The short term
Our immediate goal is to scale the model size limit from 1 MB to 100 MB. Achieving this milestone will significantly enhance the ability of AI developers to deploy more sophisticated models on-chain.
At the deep learning library level, we are expanding the Eternal AI Toolkit by implementing additional smart contracts to support a wider variety of neural network layers and architectures.
At the computation level, we are enhancing the VM by integrating additional CUDA instructions, thereby increasing the capacity for more complex on-chain computations.
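To make the layer-support work concrete, here is a sketch of the arithmetic behind one of the layer types listed below, AveragePooling2D. It is written in Python with plain integers for readability; an actual Toolkit contract would express the same loop on-chain, so this is an illustration of the computation, not the implementation.

```python
# Sketch of the computation behind an AveragePooling2D layer:
# downsample a 2D feature map by averaging each non-overlapping
# pool x pool window. Integer division stands in for the
# fixed-point math an on-chain version would use.

def average_pooling_2d(image, pool=2):
    """Average-pool a 2D grid of integers with a pool x pool window."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - h % pool, pool):
        row = []
        for j in range(0, w - w % pool, pool):
            window = [image[i + di][j + dj]
                      for di in range(pool) for dj in range(pool)]
            row.append(sum(window) // len(window))  # integer mean
        out.append(row)
    return out

feature_map = [
    [1, 3, 2, 4],
    [5, 7, 6, 8],
    [9, 1, 3, 5],
    [2, 4, 6, 8],
]
print(average_pooling_2d(feature_map))  # [[4, 5], [4, 5]]
```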
| Date | New layers supported | Model | Parameters |
| --- | --- | --- | --- |
| Jun 2024 | Activation | LeNet-5 | 0.06M |
| Jul 2024 | Concatenate, AveragePooling2D, BatchNormalization, Activation, ZeroPadding2D, GlobalAveragePooling2D | DenseNet121 | 7.2M |
| Aug 2024 | BatchNormalization, DepthwiseConv2D, ZeroPadding2D, Reshape, Activation, GlobalAveragePooling2D, Dropout | MobileNet | V1: 4.2M, V2: 3.4M, V3Small: 2.9M, V3Large: 5.4M |
| Aug 2024 | Normalization, ZeroPadding2D, Activation, DepthwiseConv2D, GlobalAveragePooling2D, Reshape, Multiply, Dropout | EfficientNet | B0: 5.3M, B1: 7.8M |
| Aug 2024 | Activation, SeparableConv2D, Cropping2D | NASNetMobile | 5.6M |
| TBD | | TinyStories | 5M |
| TBD | | Tokenizer | |
| TBD | | SqueezeNet | |
| TBD | | GPT-2 (S) | 200M |
| TBD | | GPT-2 (XL) | 1500M |
| TBD | | Stable Diffusion | |
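As a sanity check on the parameter figures above, the 0.06M count for LeNet-5 can be reproduced by tallying the trainable parameters of the standard architecture (two 5x5 convolutional layers followed by 120/84/10 dense layers):

```python
# Count LeNet-5's trainable parameters layer by layer to confirm
# the ~0.06M figure listed in the milestone table.

def conv_params(filters, kernel, in_channels):
    """Conv2D parameters: one kernel per filter plus one bias per filter."""
    return filters * (kernel * kernel * in_channels + 1)

def dense_params(in_units, out_units):
    """Dense parameters: full weight matrix plus one bias per output."""
    return in_units * out_units + out_units

total = (
    conv_params(6, 5, 1)        # conv1: 156
    + conv_params(16, 5, 6)     # conv2: 2,416
    + dense_params(400, 120)    # fc1: 48,120
    + dense_params(120, 84)     # fc2: 10,164
    + dense_params(84, 10)      # output: 850
)
print(total)  # 61,706 parameters, i.e. ~0.06M
```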