
Architecture


Last updated 5 months ago

In this document, we describe the components of an agent created via the EternalAI open-source platform and explain how these components interact with each other.

Mission Manager & Configuration

This is a periodic job that schedules the agent to execute the missions defined in the configuration file. Each time the agent performs a mission, it proceeds through a sequence of chain-of-thought steps: each step passes the previous step's output to the LLM as input, continuing until the mission's goal is achieved using the actions predefined in the agent's configuration.

The Mission Manager utilizes information defined in the configuration file to execute missions as follows:

  • Characteristic, which includes:

    • system_prompt

  • Mission, which includes:

    • task

    • system_reminder

    • toolset_cfg

    • llm_cfg

    • scheduling

      • interval_minutes
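
A configuration using these fields might look like the sketch below. The field names (`system_prompt`, `task`, `system_reminder`, `toolset_cfg`, `llm_cfg`, `scheduling.interval_minutes`) come from the list above; the nesting, file format, and example values are assumptions for illustration only.

```yaml
# Hypothetical layout; field names from the docs, values illustrative.
characteristic:
  system_prompt: "You are a market-analysis agent."
mission:
  task: "Summarize the latest market activity and post an update."
  system_reminder: "Keep posts short and factual."
  toolset_cfg:
    name: twitter_toolset
  llm_cfg:
    model: Hermes
  scheduling:
    interval_minutes: 60
```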

LLM

At each step of execution, the agent calls an LLM model to receive an instruction. The instruction contains one of three action types (thought, action, or final_answer), which determines the agent's next step.
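
The step loop can be sketched as follows. This is a minimal illustration of the chain-of-thought dispatch on the three instruction types described above; `call_llm` is a scripted stub standing in for the real (onchain) LLM call, and the tool name `get_price` is hypothetical.

```python
def call_llm(history):
    """Stub LLM: scripts a thought, an action, then a final answer."""
    step = len(history)
    if step == 0:
        return {"type": "thought", "content": "I should check the price."}
    if step == 1:
        return {"type": "action", "tool": "get_price", "args": {"symbol": "EAI"}}
    return {"type": "final_answer", "content": "Price checked and reported."}

def run_mission(max_steps=10):
    # Hypothetical tool registry for this sketch.
    tools = {"get_price": lambda args: {"symbol": args["symbol"], "price": 1.0}}
    history = []
    for _ in range(max_steps):
        instruction = call_llm(history)
        history.append(instruction)
        if instruction["type"] == "thought":
            continue  # the thought itself becomes input to the next step
        if instruction["type"] == "action":
            # Execute the named tool; its result feeds the next LLM call.
            result = tools[instruction["tool"]](instruction["args"])
            history.append({"type": "observation", "content": result})
        elif instruction["type"] == "final_answer":
            return instruction["content"]
    return None

print(run_mission())  # → Price checked and reported.
```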

Since the EternalAI platform is powered by a decentralized AI infrastructure, all LLM interactions are performed asynchronously through smart contract calls. Whenever the agent interacts with a model, the framework submits an inference request to the smart contract corresponding to that LLM model on the blockchain. The contract returns an inference_id, which the framework later uses to retrieve the inference response (the instruction) submitted by the LLM model's miners.
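
The request-then-poll pattern this implies can be sketched as below. The contract is mocked so the example is self-contained, and the method names (`infer`, `get_inference_result`) are assumptions, not EternalAI's actual contract interface.

```python
import time

class MockLLMContract:
    """Stands in for the onchain LLM contract in this sketch."""
    def __init__(self):
        self._results = {}
        self._next_id = 0

    def infer(self, prompt):
        inference_id = self._next_id
        self._next_id += 1
        # In reality, a miner computes the response off-chain
        # and submits it to the contract later.
        self._results[inference_id] = f"instruction for: {prompt}"
        return inference_id

    def get_inference_result(self, inference_id):
        return self._results.get(inference_id)

def request_instruction(contract, prompt, poll_seconds=0.01, timeout_s=1.0):
    """Submit an inference request, then poll by inference_id."""
    inference_id = contract.infer(prompt)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = contract.get_inference_result(inference_id)
        if result is not None:
            return result
        time.sleep(poll_seconds)
    raise TimeoutError(f"inference {inference_id} not answered in time")

contract = MockLLMContract()
print(request_instruction(contract, "what next?"))
```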

Initially, the LLM module supported only the Hermes model. However, the LLM provider is implemented behind an abstraction, so contributors can easily add support for additional LLM models.
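
Such a provider abstraction typically looks like the sketch below. The class and method names are illustrative, not the actual codebase; only the "Hermes first, others pluggable" structure comes from the text above.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Common interface every LLM backend must implement."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HermesProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[hermes] {prompt}"

# Supporting a new model is just another subclass:
class MyNewModelProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[my-new-model] {prompt}"

PROVIDERS = {"hermes": HermesProvider(), "my-new-model": MyNewModelProvider()}
print(PROVIDERS["hermes"].complete("hello"))  # [hermes] hello
```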

Tools (Actions)

Based on the LLM model's instruction, the agent executes a corresponding action.

The Tools module defines toolsets for each specific use case. For example:

  • twitter_toolset for a Twitter agent.

  • trading_toolset for a trading agent.

Similar to the LLM module, contributors can easily add new toolsets to the module, enabling users to configure additional actions for their use cases.
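
A toolset registry of that kind might look like the following sketch. The toolset names come from the examples above; the registration mechanics and the individual action names are assumptions.

```python
TOOLSETS = {}

def register_toolset(name):
    """Decorator that adds a toolset class to the registry by name."""
    def wrap(cls):
        TOOLSETS[name] = cls
        return cls
    return wrap

@register_toolset("twitter_toolset")
class TwitterToolset:
    def actions(self):
        return ["post_tweet", "reply"]  # hypothetical action names

@register_toolset("trading_toolset")
class TradingToolset:
    def actions(self):
        return ["get_price", "place_order"]  # hypothetical action names

# A mission's toolset_cfg would select which toolset the agent may use:
toolset = TOOLSETS["trading_toolset"]()
print(toolset.actions())  # ['get_price', 'place_order']
```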
