OpenClaw-Mini: A Python-First Project Around Codex and Discord


If you’ve looked into the OpenClaw project before, you might feel that while it’s feature-complete, the architecture is relatively complex, and it’s not implemented in Python.

For many developers who just want to quickly build a locally running Discord AI assistant, what they’re really looking for is something with:

  • A simpler architecture
  • A full Python implementation
  • No manual wrapping of the OpenAI API
  • The ability to run locally out of the box

In that case, I highly recommend checking out this project:

👉 openclaw-mini
Repo: https://github.com/robotlearner001/openclaw-mini

It’s a minimalist, OpenClaw-style implementation focused on one clean path:

Discord + Local Codex CLI + Markdown-driven behavior definitions


What is openclaw-mini?

openclaw-mini is a minimal viable OpenClaw-style agent that focuses on just three things:

  • ✅ Using Discord as the input/output channel
  • ✅ Using the local Codex CLI for model inference
  • ✅ Using SOUL.md + skills/*.md to define behavior

No complex multi-agent management.
No heavy abstraction layers.

Its core goal is simple:

Build an AI agent architecture you can fully understand in one afternoon.


How It Works: A Clean, Linear Control Flow

After reading the code, you’ll notice the control flow is extremely clear.

1️⃣ Entry Point: main.py

  • Load environment variables
  • Start the Discord client

No extra frameworks.
No complicated lifecycle management.
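In sketch form, the entry point might look like the following (the variable name DISCORD_BOT_TOKEN and the function names are assumptions for illustration, not taken from the repo):

```python
# Hypothetical sketch of main.py: read the environment, then hand off
# to the Discord client. All names here are assumed, not from the repo.
import os

def load_config() -> dict:
    """Read required settings from the environment, failing fast if missing."""
    token = os.environ.get("DISCORD_BOT_TOKEN")
    if not token:
        raise RuntimeError("DISCORD_BOT_TOKEN is not set")
    return {"token": token}

if __name__ == "__main__":
    # config = load_config()
    # start_discord_client(config["token"])  # defined in bot.py (assumed name)
    pass
```

That is the whole lifecycle: read config, start the client, done.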


2️⃣ bot.py: Message Handling

When a Discord message is received:

  • Filter out the bot’s own messages
  • Handle built-in commands:
    • /help
    • /skills
    • /soul

If it’s not a command, the message goes into the model processing pipeline.
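The routing logic above can be sketched as a small pure function (the command names come from the article; the function shape is an assumption):

```python
# Sketch of the message-routing branch in bot.py. The three built-in
# commands are from the article; the return values are illustrative.
BUILTIN_COMMANDS = {"/help", "/skills", "/soul"}

def route_message(author_is_bot: bool, content: str) -> str:
    """Decide what to do with an incoming Discord message."""
    if author_is_bot:
        return "ignore"  # filter out the bot's own messages
    stripped = content.strip()
    command = stripped.split()[0] if stripped else ""
    if command in BUILTIN_COMMANDS:
        return f"builtin:{command}"
    return "llm"  # everything else enters the model processing pipeline
```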


3️⃣ Prompt Construction

Regular messages are wrapped into a full instruction containing:

  • The contents of SOUL.md (the agent’s personality and behavioral principles)
  • All skill card content from skills/*.md
  • The user’s original message

The philosophy is:

Drive behavior with Markdown, not with complex logic in code.

This is a very clean prompt-engineering-driven architecture.
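A minimal sketch of that assembly step, assuming the file layout described above (the exact prompt template in the repo may differ):

```python
# Sketch of prompt construction: SOUL.md + skills/*.md + user message.
# The concatenation format is an assumption for illustration.
from pathlib import Path

def build_prompt(user_message: str, root: Path = Path(".")) -> str:
    """Wrap a user message with the agent's Markdown-defined behavior."""
    soul_path = root / "SOUL.md"
    soul = soul_path.read_text(encoding="utf-8") if soul_path.exists() else ""
    skills_dir = root / "skills"
    skills = ""
    if skills_dir.is_dir():
        skills = "\n\n".join(
            p.read_text(encoding="utf-8") for p in sorted(skills_dir.glob("*.md"))
        )
    return f"{soul}\n\n{skills}\n\nUser message:\n{user_message}"
```

Changing the agent's behavior means editing Markdown files, not Python code.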


4️⃣ llm.py: Calling the Local Codex CLI

This is the most interesting design choice.

Instead of calling the OpenAI API directly in Python, it executes:

codex exec --json --output-last-message

Then it reads the model’s final output.

In other words:

  • Python handles I/O and orchestration
  • Codex CLI manages the model session
  • Conversation state is maintained by Codex CLI threads
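A hedged sketch of that call via subprocess (the flags are the ones quoted above; their exact semantics may differ, e.g. --output-last-message may expect a file path, and the positional prompt argument is an assumption):

```python
# Sketch of llm.py: Python orchestrates, the Codex CLI does inference.
import subprocess

def codex_command(prompt: str) -> list[str]:
    """Build the CLI invocation quoted in the article (shape is assumed)."""
    return ["codex", "exec", "--json", "--output-last-message", prompt]

def run_codex(prompt: str, timeout: float = 120.0) -> str:
    """Run the Codex CLI and return its stdout, raising on failure."""
    proc = subprocess.run(
        codex_command(prompt),
        capture_output=True, text=True, timeout=timeout, check=True,
    )
    return proc.stdout
```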

Local Codex Session Design: Very Smart

Each Discord conversation maps to:

✅ A persistent Codex thread ID

Thread information is stored in:

.codex-discord-sessions.json

And supports:

  • TTL expiration control (CODEX_SESSION_TTL_SEC)
  • Automatic thread rebuilding on timeout
  • Persistent conversational context

The benefits:

✅ No need to manually stitch together conversation history
✅ No need to manage token limits
✅ No need to handle complex API session logic

Everything is delegated to Codex CLI.
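The session store described above can be sketched like this (the file name and CODEX_SESSION_TTL_SEC come from the article; the JSON schema and function names are assumptions):

```python
# Sketch of the TTL-based session store: one Codex thread ID per
# conversation, persisted to disk. Schema here is an assumption.
import json
import os
import time
from pathlib import Path
from typing import Optional

SESSION_FILE = Path(".codex-discord-sessions.json")
TTL_SECONDS = float(os.environ.get("CODEX_SESSION_TTL_SEC", "3600"))

def _load() -> dict:
    if SESSION_FILE.exists():
        return json.loads(SESSION_FILE.read_text(encoding="utf-8"))
    return {}

def save_thread(conversation_id: str, thread_id: str) -> None:
    """Record (or refresh) the Codex thread for a conversation."""
    sessions = _load()
    sessions[conversation_id] = {"thread_id": thread_id, "updated_at": time.time()}
    SESSION_FILE.write_text(json.dumps(sessions, indent=2), encoding="utf-8")

def get_thread(conversation_id: str, now: Optional[float] = None) -> Optional[str]:
    """Return a live thread ID, or None so the caller rebuilds the thread."""
    now = time.time() if now is None else now
    entry = _load().get(conversation_id)
    if entry and now - entry["updated_at"] <= TTL_SECONDS:
        return entry["thread_id"]
    return None  # expired or unknown: automatic thread rebuilding kicks in
```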


Why Is This Design Practical?

For individual developers or small teams, this architecture has clear advantages:

✅ 1. Extremely High Readability

The codebase is small and linear.

You can understand in one evening:

  • How messages enter the system
  • How prompts are constructed
  • How the model is called
  • How sessions are persisted

✅ 2. Fast Local Iteration

You only need to:

  1. Install the OpenAI Codex CLI
  2. Configure your Discord token
  3. Modify Markdown files

After editing SOUL.md or any skill file, just restart the app.


✅ 3. Lightweight Operations

The project already includes:

  • systemd templates
  • launchd templates
  • Environment variable configuration:
    • CODEX_SANDBOX
    • Approval policy
    • Timeout settings
    • Model selection

This means:

It’s structured for minimal production deployment.
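Reading that configuration could look like this (CODEX_SANDBOX is named by the project; the other variable names and defaults are assumptions for illustration):

```python
# Sketch of deployment config via environment variables. Only
# CODEX_SANDBOX is named in the article; the rest are assumed names.
import os

def deployment_config() -> dict:
    """Collect deployment settings, with illustrative defaults."""
    return {
        "sandbox": os.environ.get("CODEX_SANDBOX", "read-only"),
        "timeout_sec": float(os.environ.get("CODEX_TIMEOUT_SEC", "120")),
        "model": os.environ.get("CODEX_MODEL", ""),
    }
```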


Who Is It For?

If you want to build:

  • 🎯 A Discord-specific AI assistant
  • 🧠 A locally controlled agent
  • 🛠 A system whose internal mechanism you fully understand
  • 🐍 A pure Python stack project

Then openclaw-mini is an excellent starting point.


How It Differs from Full OpenClaw

| Category | OpenClaw | openclaw-mini |
| --- | --- | --- |
| Architecture Complexity | High | Low |
| Language | Not Python | ✅ Python |
| Multi-Agent Support | Strong | Simplified |
| Learning Curve | Steep | ✅ Very Friendly |
| Target Audience | Advanced builders | ✅ Rapid prototypers |

If OpenClaw feels too heavy, too abstract, or too engineered,
openclaw-mini is a great lightweight entry point.


Conclusion

openclaw-mini does something very smart:

It leaves complexity to Codex CLI and keeps the structure minimal.

Discord handles I/O.
Markdown defines behavior.
Codex handles reasoning.
Python connects everything.

This follows a very “Unix philosophy” design:

Each component does one thing well.

If you’re considering building a local AI Discord agent,
I suggest starting here instead of jumping straight into a complex framework.


Repo

🔗 https://github.com/robotlearner001/openclaw-mini


If you’re already using OpenClaw or building your own agent framework, feel free to exchange ideas.

Minimalist architecture is sometimes the strongest starting point.


Reference

OpenClawAgent


Author: robot learner
Reprint policy: Unless otherwise stated, all articles in this blog are licensed under CC BY 4.0. If reproduced, please credit the source: robot learner.