Technical Details
ChatOLM has been upgraded to gasless mode (see the announcement) without sacrificing verifiability or decentralization. More documentation will be released soon.
Much like how Telegram differs from WhatsApp, ChatOLM distinguishes itself from centralized chatbot services like ChatGPT by being decentralized and censorship-resistant.
Unlike traditional AI applications, which rely on centralized servers, ChatOLM is powered by the decentralized infrastructure of ORA AI. This architecture allows for open, free-flowing communication that is resilient against censorship, while also giving users unrestricted, permissionless interactions.
Compared to platforms like ChatGPT or Perplexity, ChatOLM enables unrestricted use cases such as NSFW agents and searching for censored information.
ChatOLM is designed to offer a seamless, dual-functionality experience through its two core features:
Much like ChatGPT, ChatOLM’s chat function allows users to engage in natural language conversations powered by advanced AI language models.
Users can ask questions, hold discussions, or receive assistance in real time. The decentralized nature of ChatOLM means that these conversations are processed and settled on-chain in a verifiable way.
ChatOLM Interface
The front-end where users submit AI requests (e.g., “What is decentralized AI?”).
Sends requests to the smart contract for processing.
Smart Contract
A blockchain-based intermediary that routes requests from the interface to the ORA AI Oracle.
Handles payment logic and integrates with the AI system.
ORA AI Oracle
Acts as a bridge between the smart contract and the AI models.
Integrates OLM AI Models to process the request.
Ensures secure, verifiable AI model selection and execution.
ORA Decentralized AI Network
A decentralized network of nodes responsible for running AI inference.
Processes AI requests and generates the response, which is sent back via the AI Oracle.
User Input: The user submits a query through the ChatOLM Interface.
Smart Contract: The query is sent to a smart contract on-chain.
ORA AI Oracle: The smart contract forwards the request to the ORA AI Oracle, which selects the appropriate OLM AI Model.
Decentralized AI Network: The Oracle relays the request to the network, where the AI inference is processed.
Response: The processed result is returned through the same path to the ChatOLM Interface.
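The five steps above can be sketched as a small simulation. This is a hypothetical illustration of the Chat request path only: all class and method names (DecentralizedAINetwork, AIOracle, SmartContract, the "OLM-chat" model name) are stand-ins invented for this sketch, not actual ORA contracts or APIs.

```python
# Hypothetical sketch of the ChatOLM Chat request flow.
# Every name here is illustrative; none of these are real ORA APIs.

class DecentralizedAINetwork:
    """Stands in for the ORA node network that runs AI inference."""
    def run_inference(self, model: str, prompt: str) -> str:
        # A real node would execute the selected OLM model; we echo a stub.
        return f"[{model}] response to: {prompt}"

class AIOracle:
    """Bridges the smart contract and the OLM AI models."""
    def __init__(self, network: DecentralizedAINetwork):
        self.network = network

    def select_model(self, prompt: str) -> str:
        return "OLM-chat"  # placeholder for verifiable model selection

    def handle_request(self, prompt: str) -> str:
        model = self.select_model(prompt)
        return self.network.run_inference(model, prompt)

class SmartContract:
    """On-chain intermediary that routes requests and settles payment."""
    def __init__(self, oracle: AIOracle):
        self.oracle = oracle

    def submit(self, prompt: str) -> str:
        # Payment/settlement logic would run here before forwarding.
        return self.oracle.handle_request(prompt)

# ChatOLM Interface: the user-facing entry point submits the query.
contract = SmartContract(AIOracle(DecentralizedAINetwork()))
print(contract.submit("What is decentralized AI?"))
```

The response travels back through the same chain of calls, mirroring how the result is returned via the Oracle and smart contract to the interface.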
Beyond its chat function, ChatOLM also incorporates a search feature, functioning similarly to platforms like Perplexity.
This feature allows users to retrieve information from the web through a search interface powered by decentralized AI.
ChatOLM Interface
The front-end where users submit AI search requests (e.g., “Who’s Vitalik?”).
Initiates the request by sending it to the AI Search Node.
AI Search Node
Receives the search query and gathers relevant information from external sources such as DuckDuckGo or other search engines.
Processes and organizes the contextual information into a format suitable for AI inference.
Sends the processed context to the smart contract.
Smart Contract
Acts as the on-chain handler that routes the processed search context from the AI Search Node to the ORA AI Oracle.
Ensures secure interaction and token/payment handling (if applicable).
ORA AI Oracle
Integrates with the OLM AI Models to process the search context received from the smart contract.
Relays the search request to the decentralized AI network for inference.
ORA Decentralized AI Network
Runs the AI inference on the contextual data provided by the Oracle and generates the AI-based search result.
User Input: The user submits a search query through the ChatOLM Interface.
AI Search Node: The query is received by the AI Search Node, which gathers relevant information from search engines and organizes it into context.
Smart Contract: The organized context is forwarded to the smart contract, which routes it to the ORA AI Oracle.
ORA AI Oracle: The Oracle selects the appropriate OLM AI Model and sends the request to the decentralized network for processing.
Decentralized AI Network: The network processes the inference and returns the search result.
Response: The result is passed back to the ChatOLM Interface for display to the user.
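The Search flow above can likewise be sketched in a few functions. As with any sketch, the function names and the stubbed web lookup are assumptions for illustration; a real AI Search Node would actually query DuckDuckGo or another engine before handing context to the contract.

```python
# Hypothetical sketch of the ChatOLM Search flow; names are illustrative.

def fetch_context(query: str) -> str:
    """AI Search Node: would fetch snippets from DuckDuckGo etc.; stubbed."""
    return f"web snippets about '{query}'"

def oracle_infer(query: str, context: str) -> str:
    """ORA AI Oracle: selects an OLM model and relays to the network."""
    return f"answer to '{query}' grounded in {context}"

def contract_route(query: str, context: str) -> str:
    """Smart contract: routes the processed context to the Oracle."""
    return oracle_infer(query, context)

def search(query: str) -> str:
    context = fetch_context(query)         # Search Node gathers context
    return contract_route(query, context)  # contract -> oracle -> network -> back

print(search("Who's Vitalik?"))
```

The key structural difference from Chat is the extra hop at the start: context is gathered off-chain by the Search Node before anything reaches the smart contract.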
While ChatOLM’s Chat and Search functionalities both leverage decentralized AI, they serve distinct purposes and utilize different technologies to meet user needs.
Chat Functionality
Text and Image Generation: ChatOLM’s Chat is built on advanced language models (similar to ChatGPT) and can also integrate Stable Diffusion for image generation. This allows ChatOLM’s Chat to handle a wide range of natural language tasks, including answering questions, providing explanations, and generating visual content based on prompts.
Language Model-Based: The Chat function relies purely on internal AI models (OLM AI Models) for generating responses and creative outputs. It does not access real-time external data.
Search Functionality
External Information Access: ChatOLM’s Search is designed to access external internet information. Unlike Chat, Search interacts with real-time data from external sources, like web searches using DuckDuckGo or other search engines, to provide the latest and most accurate information.
AI Model with Browsing Capability: Search retrieves and processes context from the internet, similar to AI tools like Perplexity or ChatGPT with browsing. This feature allows it to fetch up-to-date and specific information that may not be stored in static AI models.
Here are examples of when to use Chat or Search depending on the type of query:
Chat (Text and Image Generation)
"Generate a cute dog"
Task: Image generation using Stable Diffusion.
Result: ChatOLM will generate a visual representation of a cute dog based on its internal AI model.
"Tell me a short story about AI taking over a spaceship"
Task: Creative text generation.
Result: ChatOLM will generate a short story using its language model.
"Summarize the benefits of decentralized AI"
Task: Text summarization based on internal knowledge.
Result: ChatOLM will generate a summary of decentralized AI from its model's understanding.
Search (Accessing Real-Time External Information)
"What's Ethereum's TPS right now counting L2s?"
Task: Fetching real-time external data.
Result: Search will retrieve the latest transactions-per-second (TPS) data, including Layer 2 scaling solutions.
"What are the latest regulations for AI in Europe?"
Task: Accessing recent news or articles.
Result: Search will gather current regulatory information from external web sources.
"What's the weather like in Hong Kong today?"
Task: Real-time data retrieval.
Result: Search will fetch current weather data for Hong Kong from external internet sources.
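The distinction the examples above draw, static model knowledge versus real-time external data, can be captured in a simple routing heuristic. This is not ChatOLM's actual router; the keyword list and function are invented here purely to illustrate the decision.

```python
# Illustrative heuristic (NOT ChatOLM's real logic) for deciding whether
# a query needs Search (real-time data) or can be handled by Chat alone.

REALTIME_HINTS = ("right now", "today", "latest", "current", "news")

def needs_search(query: str) -> bool:
    """Return True if the query appears to require up-to-date external data."""
    q = query.lower()
    return any(hint in q for hint in REALTIME_HINTS)

assert needs_search("What's the weather like in Hong Kong today?")
assert not needs_search("Tell me a short story about AI taking over a spaceship")
```

Keyword matching is crude; a production router would more likely classify the query with a model, but the split it makes is the same one described above.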