Building a League of Legends Draft Intelligence System
Before a single ability is cast in a League of Legends match, the outcome is already being shaped. The draft phase — where two teams alternate banning and picking champions from a roster of over 160 — is one of the most informationally dense decision points in competitive gaming. You're juggling counter-picks, synergies, damage profiles, crowd control coverage, and the meta of the current patch, all under time pressure, all with incomplete information about what the other team is planning.
Most players navigate this by gut feel and habit. I wanted to build something better.
The Problem With Existing Tools
The draft assistance tools that exist today mostly do one thing: they tell you win rates. "This champion has a 53% win rate this patch." That's useful, but it's shallow. A champion's win rate is an aggregate that hides the context that actually matters. Why does it win? What does it bring to a team composition that a raw percentage can't capture?
A champion with a 53% win rate might be terrible for your specific draft if your team already has four AD damage dealers and no crowd control. The number doesn't know your team. It doesn't know what you're missing.
What I wanted was a system that understands team compositions as multi-dimensional profiles — not just "good" or "bad," but specifically what a composition does well and where it falls short.
How This Project Found Its Shape
I originally designed this for professional play. Teams at the highest level draft with extraordinary intentionality, and a tool that could surface non-obvious composition profiles seemed like a natural fit for coaching staffs.
That died quickly. Riot Games explicitly bans AI and ML-based drafting solutions from competitive play. Fair enough — they want the draft to remain a test of human game knowledge.
So I pivoted to pre-made teams running scrimmages. Same idea, lower stakes. But even here, the framing mattered. A tool that says "pick Orianna" crosses a line. A tool that says "your composition is low on AoE crowd control — here are several options that address that, depending on whether you want to prioritize engage or peel" is something different. It's decision support, not decision making. The team still exercises judgment.
Then I realized the most natural audience was staring me in the face: solo and duo queue. Two players who queue together already know their roles. They have champion pools. They want to climb. And the constraints of duo queue actually make the problem more tractable — with two of five positions locked in, the search space shrinks considerably. You can ask focused questions: given what we play, given what our teammates have picked, given what the enemy has shown, what should we be looking at?
The Architecture of a Draft Profile
The core idea is straightforward, even if the implementation isn't.
Every champion can be scored across several dimensions that describe what they bring to a team:
- Crowd control — the total lockdown a champion provides, weighted by type (hard CC vs. soft CC), reliability (point-and-click vs. skillshot), duration, and whether it's single-target or AoE
- Tankiness — base stats, scaling, defensive steroids, sustain
- Damage output — raw damage potential in teamfights and skirmishes
- Damage type — the AD/AP split, plus special categories like percent-health damage and true damage that circumvent defensive itemization
- Utility — shields, heals, vision control, zone control, objective-taking power
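As a concrete sketch, a champion's profile can be a small record of per-dimension scores. The dimension names follow the list above, but the field layout and the example numbers are mine — invented placeholders for illustration, not the system's actual ratings:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChampionProfile:
    """Per-champion scores across the dimensions described above."""
    name: str
    crowd_control: float   # weighted lockdown score, e.g. on a 0-10 scale
    tankiness: float       # base stats, scaling, defensive steroids, sustain
    damage: float          # teamfight/skirmish damage potential
    physical_ratio: float  # share of damage that is physical (0.0-1.0)
    utility: float         # shields, heals, vision, zone control

# Placeholder numbers, purely illustrative.
leona = ChampionProfile("Leona", crowd_control=9.0, tankiness=8.5,
                        damage=3.0, physical_ratio=0.4, utility=5.0)
```

A frozen dataclass keeps profiles immutable, which matters once the same scored champions feed both the aggregation and recommendation stages.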
A team composition is the sum of its parts across these dimensions. And the hypothesis is that winning compositions at a given elo and patch tend to cluster around certain profiles — enough CC to lock down a fight, enough damage diversity that the enemy can't itemize against you cheaply, enough frontline to create space.
The system's "ground truth" comes from aggregating the highest-winning team compositions across all ranked games, segmented by elo and patch. You decompose those winning comps into their dimensional scores, and you get a picture of what a balanced, winning team profile looks like at your rank, right now.
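Decomposing winning comps into a ground-truth profile can be sketched as below. Representing each comp as a dict of dimension totals and averaging per dimension is my simplification — the real aggregation would segment by elo and patch first:

```python
from statistics import mean

def target_profile(winning_comps: list[dict[str, float]]) -> dict[str, float]:
    """Average each dimension across a set of winning team compositions."""
    dims = winning_comps[0].keys()
    return {d: mean(comp[d] for comp in winning_comps) for d in dims}

# Toy numbers: two winning comps, already decomposed into dimension totals.
comps = [
    {"crowd_control": 14.0, "tankiness": 12.0, "damage": 28.0},
    {"crowd_control": 18.0, "tankiness": 10.0, "damage": 26.0},
]
print(target_profile(comps))
# {'crowd_control': 16.0, 'tankiness': 11.0, 'damage': 27.0}
```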
Then, mid-draft, the system compares your current team's profile against those winning profiles and surfaces the gaps. Not "pick this champion," but "your team is skewed toward physical damage and light on reliable engage — here are champions tiered by how much they address those gaps."
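The gap-surfacing step can be sketched in a few lines. Scoring candidates by how much outstanding gap they cover is one plausible heuristic, not the project's final model, and the candidate names and numbers are placeholders:

```python
def profile_gaps(team: dict[str, float],
                 target: dict[str, float]) -> dict[str, float]:
    """Positive entries are dimensions where the team falls short of the target."""
    return {d: max(0.0, target[d] - team.get(d, 0.0)) for d in target}

def gap_closure(champ: dict[str, float], gaps: dict[str, float]) -> float:
    """How much of the outstanding gap a candidate's scores would actually cover."""
    return sum(min(champ.get(d, 0.0), g) for d, g in gaps.items())

target = {"crowd_control": 16.0, "aoe_damage": 20.0}
team = {"crowd_control": 10.0, "aoe_damage": 19.0}   # damage-heavy, CC-light
gaps = profile_gaps(team, target)                     # CC gap of 6, damage gap of 1

candidates = {
    "A": {"crowd_control": 7.0, "aoe_damage": 3.0},   # placeholder scores
    "B": {"crowd_control": 2.0, "aoe_damage": 9.0},
}
ranked = sorted(candidates,
                key=lambda n: gap_closure(candidates[n], gaps), reverse=True)
```

Capping each dimension's contribution at the gap size (`min(champ_score, gap)`) keeps a candidate from scoring well by over-stacking a dimension the team already has covered.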
The Hard Parts
This is where the project gets hard.
Quantifying crowd control. A Leona has three forms of hard CC: a stun, a root, and an AoE stun on her ultimate. An Ashe has a single-target stun that scales with distance and a global AoE slow. How do you compare them? Duration, AoE, reliability, cooldown — these all matter, and collapsing them into a single number requires judgment calls about what "CC contribution" means. My current approach uses external data sources that rate champion CC profiles as a starting gauge, then adjusts based on ability-level characteristics. It's imperfect, but it's a starting point.
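To make the judgment calls concrete, here is one way an ability-level CC score could collapse duration, CC type, AoE, and reliability into a number. Every weight below is an assumption of mine, not a tuned value from the system:

```python
HARD_CC_WEIGHT = 1.0   # stuns, roots, knockups
SOFT_CC_WEIGHT = 0.4   # slows, blinds — assumed discount, a judgment call

def ability_cc_score(duration: float, hard: bool,
                     aoe: bool, point_and_click: bool) -> float:
    """Collapse one ability's CC into a single number (illustrative weights)."""
    score = duration * (HARD_CC_WEIGHT if hard else SOFT_CC_WEIGHT)
    score *= 1.5 if aoe else 1.0           # AoE lockdown is worth more
    score *= 1.25 if point_and_click else 1.0  # reliability premium over skillshots
    return score

# A 1.5s AoE skillshot stun vs. a 2s point-and-click single-target slow.
print(ability_cc_score(1.5, hard=True, aoe=True, point_and_click=False))   # 2.25
print(ability_cc_score(2.0, hard=False, aoe=False, point_and_click=True))  # 1.0
```

The interesting property is that the two example abilities land close together under these weights — which is exactly the kind of outcome that forces you to decide whether the weights reflect what "CC contribution" should mean.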
Synergy is hard to isolate from individual strength. If two champions have a high win rate when played together, is that because they genuinely synergize, or because they're both individually overpowered this patch? To pull those apart, you'd have to compare the duo win rate against what you'd expect given each champion's independent win rate, controlled for elo and patch. The MVP version leans on raw duo win rates filtered by rank and patch, which is fuzzy but directionally useful.
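The separation the paragraph describes can be sketched with a naive independence baseline. Averaging the two solo win rates is a deliberate oversimplification — a real baseline would model the pair's joint expected win rate and control for elo and patch — but it shows the shape of the comparison:

```python
def synergy_lift(duo_wr: float, wr_a: float, wr_b: float) -> float:
    """Observed duo win rate minus a naive baseline (mean of solo win rates)."""
    return duo_wr - (wr_a + wr_b) / 2

# Two roughly 51% champions winning 56% together suggests genuine synergy;
# the same 56% from two individually strong picks would show little lift.
lift = synergy_lift(0.56, 0.52, 0.50)
print(round(lift, 3))  # 0.05
```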
Data gets thin fast. When you segment by elo, patch, and specific champion combinations, sample sizes shrink. Diamond+ games on the current patch featuring a specific bot lane duo might number in the hundreds, not the thousands. This is a real constraint that limits how confident the system can be in its recommendations for niche picks at high elo.
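One standard way to make that uncertainty explicit — my addition here, not something the article commits to — is a Wilson score interval on each win rate, which widens honestly as samples shrink:

```python
from math import sqrt

def wilson_interval(wins: int, games: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a win rate; wide when games is small."""
    if games == 0:
        return (0.0, 1.0)
    p = wins / games
    denom = 1 + z**2 / games
    centre = p + z**2 / (2 * games)
    margin = z * sqrt(p * (1 - p) / games + z**2 / (4 * games**2))
    return ((centre - margin) / denom, (centre + margin) / denom)

# A 58% win rate over 200 games still spans roughly 51% to 65%.
lo, hi = wilson_interval(116, 200)
```

A recommender could use the interval's lower bound instead of the raw win rate, so a niche duo with 40 games never outranks a well-sampled one on noise alone.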
The Pipeline
All data flows from Riot's official API. The pipeline is built in Python and follows a straightforward path:
- Ingestion — pull match data, champion statistics, and patch information from the Riot API. Store raw data for reprocessing as the scoring model evolves.
- Scoring — transform raw champion data into dimensional scores. This is where the CC quantification, damage profiling, and tankiness calculations live.
- Aggregation — build composition profiles from scored champions. Segment winning compositions by elo and patch to establish ground truth.
- Recommendation — given a partial draft, compute the current team's profile, compare against winning profiles, and surface champions that close the most significant gaps. Tier suggestions rather than ranking them — the player chooses, the system informs.
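The four stages above chain together as a thin skeleton like the following. The function names, in-memory data shapes, and one-dimensional toy scoring are all mine; the real stages sit on the Riot API and a datastore:

```python
def ingest(api_rows: list[dict]) -> list[dict]:
    """Stage 1: keep raw rows for later reprocessing; here, a pass-through."""
    return list(api_rows)

def score(rows: list[dict]) -> list[dict]:
    """Stage 2: attach dimensional scores to each pick (stubbed to one dimension)."""
    return [{"champion": r["champion"], "cc": r["raw_cc"]} for r in rows]

def aggregate(scored: list[dict]) -> dict[str, float]:
    """Stage 3: a team profile is the sum of its members' scores."""
    return {"cc": sum(r["cc"] for r in scored)}

def recommend(team_profile: dict[str, float],
              target: dict[str, float]) -> dict[str, float]:
    """Stage 4: report the gaps against ground truth, not a single pick."""
    return {d: max(0.0, target[d] - team_profile.get(d, 0.0)) for d in target}

raw = [{"champion": "A", "raw_cc": 4.0}, {"champion": "B", "raw_cc": 3.0}]
gaps = recommend(aggregate(score(ingest(raw))), {"cc": 12.0})
```

Keeping the stages as pure functions over plain data makes the "store raw data for reprocessing" goal cheap: rerun everything downstream of ingestion whenever the scoring model changes.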
Why This Matters Beyond the Game
The draft in League is a smaller version of a question I keep running into: how do you make good choices when you can't see the full picture and the right answer depends on context that won't stop shifting?
I keep landing on the same design philosophy. Don't make the decision for the person — show them what they're not seeing, give them options, and let them choose. Build tools that extend someone's judgment instead of replacing it.
That's what this project is really about. The game is just where the idea gets tested.
What's Next
This project is currently in the design and data pipeline stage. The scoring model is being developed, the API integration is underway, and the recommendation layer is being architected. I'll be writing more as the system takes shape — particularly about the CC quantification problem, which I suspect will be its own article.