Frequently Asked Questions (FAQ)

Everything you need to know about ravenbot.


🦅 General

What is ravenbot?

ravenbot is an autonomous technical research agent built on the Google ADK and MCP. It functions as a proactive assistant for software engineers — monitoring technical news, running research pipelines, managing reminders, and delivering results via web, Telegram, or Discord.

Is ravenbot free?

The bot itself is open-source (MIT License). However, you are responsible for any costs associated with the AI models you use (e.g., Gemini API usage fees or the electricity required to run Ollama locally).

Can I run it on a Raspberry Pi?

Yes! ravenbot is optimized for ARM64 and runs on a Raspberry Pi 5. The Docker image auto-detects your platform via the TARGETARCH build arg.
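For context, `TARGETARCH` is a build argument that Docker BuildKit populates automatically. A minimal illustrative Dockerfile fragment (the surrounding instructions are assumptions, not ravenbot's actual Dockerfile):

```dockerfile
# TARGETARCH is set by BuildKit per platform (e.g. amd64, arm64)
ARG TARGETARCH
RUN echo "Building for ${TARGETARCH}"
```

Building for a Pi 5 would then look like `docker buildx build --platform linux/arm64 .`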


🌐 Web UI

How do I access the web interface?

After starting ravenbot, open http://localhost:8080 in your browser. The port is configurable via the WEB_PORT environment variable. No additional setup is required.
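For example, to serve the UI on a different port, set the variable in your `.env` file (only `WEB_PORT` is named in this FAQ; any other keys in your file are unaffected):

```shell
# .env
WEB_PORT=3000
```

Then open http://localhost:3000 instead.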

What can I do from the web UI?

  • Chat with RavenBot directly in the browser
  • Browse research reports with Markdown rendering
  • View available agents and workflow pipelines
  • Launch research missions and track their progress
  • Access all bot commands via the Tools page

Does the web UI require authentication?

No. The web interface is designed for local or private-network use and has no authentication layer. If you expose it to the internet, put a reverse proxy with authentication in front of it.
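As one option, a minimal nginx sketch with HTTP basic auth in front of ravenbot (hostnames, certificate setup, and file paths are assumptions — adapt to your environment):

```nginx
server {
    listen 80;
    server_name ravenbot.example.com;

    # Require a username/password before anything reaches ravenbot
    auth_basic           "ravenbot";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```

For real internet exposure you would also want TLS (e.g. via certbot) rather than plain HTTP.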


🧠 Intelligence & Models

Which AI backend should I use?

  • Gemini (Google AI): Best for complex reasoning, high-speed classification, and deep research. Requires an API key and internet access.
  • Ollama (Local): Best for privacy and cost-conscious users. Requires a machine with a decent GPU or plenty of fast RAM.

Can I use other models?

Any model supported by Ollama or the Gemini API can be used by updating your .env configuration.
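For illustration, a `.env` fragment switching backends might look like this (the variable names below are assumptions — check your ravenbot configuration reference for the exact keys):

```shell
# Gemini backend (hosted)
GOOGLE_API_KEY=your-gemini-key
MODEL=gemini-2.0-flash

# Or a local Ollama backend:
# MODEL=ollama/llama3.1
```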


🔄 Pipelines & Missions

What are workflow pipelines?

Pipelines are structured chains of sub-agents that execute in sequence. The built-in research pipeline chains ResearchAssistant (gathers data) with ResearchSynthesizer (formats into a report), producing more consistent output than single-call invocations.
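The idea can be sketched in a few lines of Python. This is a simplified model of a sequential pipeline, not the actual ADK API — each sub-agent is stood in for by a plain function, and the placeholder bodies are assumptions:

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Pipeline:
    """Runs sub-agents in order, feeding each one the previous output."""
    steps: list[Callable[[str], str]] = field(default_factory=list)

    def run(self, prompt: str) -> str:
        result = prompt
        for step in self.steps:
            result = step(result)
        return result


def research_assistant(topic: str) -> str:
    # Placeholder: the real sub-agent would call an LLM to gather raw findings
    return f"findings about {topic}"


def research_synthesizer(findings: str) -> str:
    # Placeholder: the real sub-agent would format findings into a Markdown report
    return f"# Report\n\n{findings}"


pipeline = Pipeline(steps=[research_assistant, research_synthesizer])
report = pipeline.run("Rust async runtimes")
```

Because the synthesizer always sees structured output from the assistant, the final report format stays consistent across runs.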

How do I launch a mission?

  • Web UI: Go to the Agents page, select a pipeline, enter a prompt, and click Launch.
  • Chat: Use /research <topic> to trigger the research pipeline.

Where do I see mission results?

The Missions page in the web UI shows all missions with their status (running/completed/failed), prompts, and results. Missions persist in SQLite across restarts.
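To make the persistence model concrete, here is a sketch of how a missions table in SQLite could work. The schema and column names below are assumptions for illustration, not ravenbot's actual schema:

```python
import sqlite3

# ravenbot would open a file-backed database so missions survive restarts;
# an in-memory database is used here just to keep the sketch self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS missions (
        id     INTEGER PRIMARY KEY,
        prompt TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'running',  -- running / completed / failed
        result TEXT
    )
""")
conn.execute("INSERT INTO missions (prompt) VALUES (?)", ("Summarize the MCP spec",))
conn.execute("UPDATE missions SET status = 'completed', result = 'done' WHERE id = 1")
status = conn.execute("SELECT status FROM missions WHERE id = 1").fetchone()[0]
```

The Missions page is then just a read-only view over rows like these.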


🛠 Features

How do I add my own tools?

ravenbot uses the Model Context Protocol (MCP). Add any MCP-compliant tool by adding its configuration to the mcpServers object in config.json, then assign it to a sub-agent.
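For example, registering the reference MCP filesystem server could look like this (the server name and the directory argument are illustrative):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data"]
    }
  }
}
```

Each entry gives the client a command to launch the server with; the tools it exposes then become available to whichever sub-agent you assign it to.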

How does the bot "remember" things?

ravenbot uses a three-layer memory system:

  1. Short-term: ADK session state (per-platform conversation history).
  2. Medium-term: Session summaries stored in SQLite, compressed asynchronously when conversations grow large and cached in memory for fast access.
  3. Long-term: The memory MCP server (a knowledge graph stored in data/memory.jsonl).
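The long-term layer is a plain JSON Lines file, one record per line. The field names below are assumptions about what the memory server stores (entities and relations), shown only to illustrate the format:

```python
import json

# Illustrative knowledge-graph records as data/memory.jsonl might contain them
raw = """\
{"type": "entity", "name": "ravenbot", "entityType": "project", "observations": ["runs on a Pi 5"]}
{"type": "relation", "from": "ravenbot", "to": "MCP", "relationType": "uses"}
"""

records = [json.loads(line) for line in raw.splitlines() if line.strip()]
entities = [r["name"] for r in records if r["type"] == "entity"]
```

Because each line is an independent JSON object, the file can be appended to cheaply and inspected with ordinary text tools.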


🛡️ Security

Is my data safe?

Yes. All data (conversations, summaries, reports, missions, vault files) is stored locally in SQLite. The only external communication is with your chosen LLM API for processing prompts.

Why does it only respond to one chat ID?

This is a security feature: restricting the bot to a single chat ID prevents unauthorized users from issuing commands with your bot's credentials or accessing your private files.

How are web sessions secured?

Web sessions use cookie-based IDs (ravenbot_session). Form-supplied session IDs are rejected to prevent session hijacking. HTTP request logs anonymize IPs by stripping port numbers.
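The policy above can be sketched as follows. The function names are assumptions, not ravenbot's actual code; the point is that only the cookie is trusted and log entries drop the client port:

```python
import secrets

COOKIE_NAME = "ravenbot_session"

def get_session_id(cookies: dict, form: dict) -> str:
    # Form-supplied session IDs are ignored, so a crafted POST cannot
    # plant a session ID of the attacker's choosing.
    form.pop("session_id", None)
    sid = cookies.get(COOKIE_NAME)
    if sid is None:
        sid = secrets.token_urlsafe(32)  # fresh, unguessable ID
    return sid

def anonymize(remote_addr: str) -> str:
    # Strip the port before logging, e.g. "203.0.113.7:51324" -> "203.0.113.7"
    return remote_addr.rsplit(":", 1)[0]
```

A new ID is minted only when no cookie is present; an existing cookie value is reused unchanged.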