LoD Agent SDK

Build AI agents that play League of Degens

Open-source Python toolkit for reinforcement learning, scripted bots, and LLM-powered agents — all playing real matches on Summoner's Rift.


Architecture

Your agent connects to the LoD game server through a Redis bridge. The SDK handles all the protocol details — you just write Python.

Game Server

C# / .NET 8.0
Full game simulation

LoD Agent SDK

Python / Redis
Observations & actions

Your Agent

RL / Rules / LLM
Any Python code

Every game tick (~7.5 ticks/second), your agent receives a structured observation (champion positions, HP, cooldowns) and returns an action (move, attack, cast spell).
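That per-tick contract can be sketched in plain Python. Everything below is an illustrative stand-in, not SDK API: the agent class, the driver loop, and the plain-tuple action encoding are assumptions made so the sketch runs on its own.

```python
# Sketch of the tick loop: each tick the agent receives an observation
# dict and returns an action. The (function_id, arguments) tuple mimics
# the SDK's action encoding; function id 0 is the no-op.

TICK_RATE = 7.5                  # ticks per second, from the docs
TICK_BUDGET = 1.0 / TICK_RATE    # ~133 ms to decide on each action

class NoOpAgent:
    """Trivial agent: always does nothing."""
    def step(self, obs):
        return (0, [[0]])        # no-op this tick

def run_episode(agent, observations):
    """Feed each tick's observation to the agent; collect its actions."""
    return [agent.step(obs) for obs in observations]
```

A real agent would replace `NoOpAgent.step` with actual decision logic; the loop shape stays the same.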

What You Can Build

Scripted Bots

Rule-based agents with positioning, kiting, and ability combos. Perfect for custom game modes.

RL Agents

Train with PPO, DQN, A2C using stable-baselines3 or any RL framework. Gym-compatible.
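One concrete ingredient of RL training is the reward signal. The function below is a sketch of reward shaping from consecutive observations, assuming only the observation fields shown in the API reference; the weights are illustrative, not tuned values.

```python
# Sketch of a shaped reward for RL training: reward damage dealt to the
# enemy between ticks, penalize damage taken. Field names follow the
# SDK's observation format; dealt_w and taken_w are arbitrary weights.

def shaped_reward(prev_obs: dict, obs: dict,
                  dealt_w: float = 1.0, taken_w: float = 0.5) -> float:
    dealt = prev_obs["enemy_unit"]["current_hp"] - obs["enemy_unit"]["current_hp"]
    taken = prev_obs["me_unit"]["current_hp"] - obs["me_unit"]["current_hp"]
    # Clamp at zero so healing/regeneration doesn't flip the sign
    return dealt_w * max(dealt, 0.0) - taken_w * max(taken, 0.0)
```

A function like this plugs in wherever your RL framework expects a per-step reward, e.g. inside a Gym-style `step()` wrapper.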

LLM Agents

Connect GPT, Claude, or Gemini to play live matches. Game state as text, actions as responses.
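A minimal "state as text, actions as responses" round trip might look like the sketch below. The prompt wording and the `MOVE` / `SPELL` / `NOOP` reply grammar are assumptions made for illustration; they are not part of the SDK.

```python
# Sketch of an LLM bridge: serialize the observation into a compact
# prompt, then parse the model's reply back into a (function_id, args)
# action. Unparseable replies fall back to a no-op.

def obs_to_prompt(obs: dict) -> str:
    me, enemy = obs["me_unit"], obs["enemy_unit"]
    return (
        f"You at ({me['position_x']:.0f}, {me['position_y']:.0f}), "
        f"HP {me['current_hp']:.0f}/{me['max_hp']:.0f}. "
        f"Enemy at ({enemy['position_x']:.0f}, {enemy['position_y']:.0f}), "
        f"HP {enemy['current_hp']:.0f}. "
        "Reply with one of: MOVE x y | SPELL slot x y | NOOP"
    )

def parse_reply(text: str):
    """Map a model reply to an action; no-op on anything malformed."""
    parts = text.strip().split()
    if parts and parts[0] == "MOVE" and len(parts) == 3:
        return (1, [[0], [float(parts[1]), float(parts[2])]])
    if parts and parts[0] == "SPELL" and len(parts) == 4:
        return (2, [[int(parts[1])], [float(parts[2]), float(parts[3])]])
    return (0, [[0]])
```

Defensive parsing matters here: a model will occasionally return prose instead of a command, and the agent still has to act every tick.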

Tournament Bots

Enter AI vs AI tournaments. Compete against other developers. Win $LoD prizes.

Analytics Tools

Record matches, analyze agent behavior, visualize training curves and decision patterns.

Research

A real MOBA environment for multi-agent RL research. Not a toy — real game physics.

Quickstart — 5 Minutes

1. Clone the repo

git clone https://github.com/Jul1usCrypto/lod-agents.git
cd lod-agents
2. Install the SDK

pip install -e .
3. Build the game server

# Windows
setup_server.bat

# Linux / macOS
chmod +x setup_server.sh && ./setup_server.sh

Requires .NET SDK 8.0 and Redis. The script handles everything else.

4. Run your first agent

python examples/my_first_agent.py

Two Ezreals spawn on Summoner's Rift — both controlled by your code.

Write Your Agent in 20 Lines

from pylol.agents import base_agent
from pylol.lib import actions

class MyAgent(base_agent.BaseAgent):
    def step(self, obs):
        super().step(obs)
        me = obs.observation["me_unit"]
        enemy = obs.observation["enemy_unit"]

        # Calculate distance to enemy
        dx = enemy["position_x"] - me["position_x"]
        dy = enemy["position_y"] - me["position_y"]
        dist = (dx**2 + dy**2) ** 0.5

        if dist < 600:
            # In range — cast Q at enemy
            return actions.FunctionCall(2, [[0], [enemy["position_x"], enemy["position_y"]]])
        else:
            # Move toward enemy
            return actions.FunctionCall(1, [[0], [enemy["position_x"], enemy["position_y"]]])

Included Examples

Example            | Description                                    | Difficulty
my_first_agent.py  | Chase and attack the nearest enemy             | Beginner
scripted_agent.py  | Rule-based with spell rotation and kiting      | Intermediate
random_agent.py    | Random actions — useful as a baseline          | Beginner
llm_agent.py       | Template for GPT / Claude / Gemini integration | Advanced

API Reference

Observations

Every tick, your agent receives:

{
  "me_unit": {
    "position_x": 1500.0,
    "position_y": 2000.0,
    "current_hp": 580.0,
    "max_hp": 580.0,
    "current_mp": 280.0,
    "level": 1,
    "user_id": 1,
    "alive": 1.0
  },
  "enemy_unit": { ... },
  "champ_units": [ ... ],   // All champions
  "minion_units": [ ... ]   // All minions
}
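One way to put these fields to work: scan `champ_units` for a focus target. The helper below is a sketch that assumes only the field names shown above, plus the convention that your own champion shares your `user_id`.

```python
# Sketch of target selection over the observation dict: pick the
# lowest-HP living champion that isn't controlled by us. Returns None
# when no valid target exists (e.g. everyone else is dead).

def lowest_hp_target(obs: dict, my_user_id: int):
    candidates = [
        u for u in obs["champ_units"]
        if u["alive"] > 0 and u["user_id"] != my_user_id
    ]
    return min(candidates, key=lambda u: u["current_hp"], default=None)
```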

Actions

Action | Code                              | Description
No-op  | FunctionCall(0, [[0]])            | Do nothing this tick
Move   | FunctionCall(1, [[0], [x, y]])    | Move toward position
Spell  | FunctionCall(2, [[slot], [x, y]]) | Cast spell (0=Q, 1=W, 2=E, 3=R)
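These encodings can be wrapped in small named helpers so agent code stays readable. `FunctionCall` below is a stand-in namedtuple so the sketch runs without the SDK installed; with pylol you would use `actions.FunctionCall` instead.

```python
# Sketch: named constructors for the three action encodings above.
# The namedtuple is a local stand-in for the SDK's FunctionCall.

from collections import namedtuple

FunctionCall = namedtuple("FunctionCall", ["function", "arguments"])

def no_op():
    """Do nothing this tick."""
    return FunctionCall(0, [[0]])

def move(x: float, y: float):
    """Move toward position (x, y)."""
    return FunctionCall(1, [[0], [x, y]])

def cast(slot: int, x: float, y: float):
    """Cast spell slot (0=Q, 1=W, 2=E, 3=R) at position (x, y)."""
    return FunctionCall(2, [[slot], [x, y]])
```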

AI Tournaments & Bounties

AI vs AI Tournaments

Pit your agent against others in live matches, streamed for the community. Elimination brackets, prize pools in $LoD.

AI vs Human Challenges

Can your bot outplay a Diamond player? Submit your agent and find out. Bounties for beating human benchmarks.

Developer Bounties

Build the best-performing agent and earn $LoD tokens. Bounties for specific challenges are posted in our Telegram community.

Start building today

Clone the repo, run an example, and ship your first agent in under 10 minutes.

GitHub Repository · Join Community