Introduction
CrickBot is a mobile cricket game featuring intelligent AI opponents powered by a sophisticated rule-based probabilistic system. Rather than relying on reinforcement learning or neural networks, the AI uses a 4-phase architecture combining static rules, player modeling, Bayesian probability estimation, and optional Claude API integration to create adaptive, challenging opponents.
This blog explores the engineering behind CrickBot's 4-phase AI system and how it delivers intelligent cricket gameplay across 100 campaign levels, six difficulty tiers, and a surge system that escalates from Power Play to Legend Mode.
Game Mechanics and Campaign
CrickBot features swipe-based batting mechanics (loft, drive, cut, pull, sweep, defend) against AI bowlers. The campaign spans 100 levels across themed locations with escalating difficulty. A surge system adds strategic depth: players progress from Power Play through Super Over, Ultra Smash, and Legend Mode, each tier applying power multipliers to make later campaign levels more engaging.
- Difficulty Tiers: Easy, Medium, Hard, and Legend, plus adaptive scaling that adjusts difficulty in real time during gameplay.
- Ball Speed Scaling: Ranges from 2-3 px/frame on Easy to 5-8 px/frame on Legend, creating clear progression challenges.
- Phase-Based AI: Four interconnected systems (Rules Engine, Player Modeling, Probability Predictors, Claude API) that work together to deliver intelligent bowling decisions.
- Variation Penalties: The Rules Engine prevents repetitive deliveries, ensuring bowlers don't spam the same delivery type consecutively.
The 4-Phase AI Architecture
CrickBot's AI is built on a four-phase system that separates concerns for maintainability and mobile performance. Each phase builds on the previous one:
Phase 1: Rules Engine
The foundation layer uses static lookup tables mapping shot effectiveness by ball type and length. The Rules Engine defines four bowling strategies: wicket_hunting (aim for dismissals), defensive (accumulate dot balls), balanced, and aggressive. A variation penalty mechanism prevents the same delivery type from being bowled consecutively, creating more realistic bowling sequences and keeping batsmen guessing.
Phase 2: Player Modeling
This phase builds a lightweight behavioral profile of the batsman. It tracks shot selection patterns per ball type (e.g., how often the player pulls short deliveries vs. drives full ones) and calculates an aggression score using exponential moving average (EMA). The system identifies the player's primary weakness—the ball type with the highest miss rate—and feeds this intelligence back to Phase 1 to inform future bowling selections. Memory usage is capped at 2-4KB to ensure mobile performance isn't impacted.
Phase 3: Probability Predictors
Three complementary predictors work together to estimate shot likelihood and difficulty scaling:
- BayesPredictor: Uses Laplace smoothing to estimate P(shot | delivery context). This Bayesian approach handles sparse data elegantly and avoids overconfidence on rare deliveries.
- SequencePredictor: A first-order Markov chain estimating P(next shot | current shot). Captures temporal patterns in batting behavior.
- AdaptiveDifficulty: Monitors the win rate with an EMA and targets a 45% win rate as the "flow" point. Includes a frustration guard (auto-downscaling after a four-loss streak) and a boredom guard (auto-upscaling after a five-win streak). Produces scaling multipliers for ball speed, accuracy, reaction time, and AI aggression.
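The deep dive below includes code for the BayesPredictor but not the SequencePredictor. A minimal sketch consistent with the description above, with the same shot set and Laplace smoothing (the class and method names here are illustrative, not CrickBot's actual API), might look like:

```python
from collections import defaultdict

class SequencePredictor:
    """First-order Markov chain over shots: estimates P(next shot | current shot)."""

    SHOTS = ["loft", "drive", "cut", "pull", "sweep", "defend"]

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing, as in BayesPredictor
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev_shot = None

    def observe(self, shot):
        """Record a transition from the previous shot to this one."""
        if self.prev_shot is not None:
            self.transitions[self.prev_shot][shot] += 1
        self.prev_shot = shot

    def predict_next(self):
        """Return {shot: probability} conditioned on the last observed shot."""
        if self.prev_shot is None:
            return {s: 1 / len(self.SHOTS) for s in self.SHOTS}  # Uniform prior
        row = self.transitions[self.prev_shot]
        total = sum(row.values())
        k = len(self.SHOTS)
        return {s: (row[s] + self.alpha) / (total + self.alpha * k)
                for s in self.SHOTS}

predictor = SequencePredictor()
for shot in ["drive", "drive", "pull", "drive", "pull"]:
    predictor.observe(shot)
next_probs = predictor.predict_next()  # conditioned on the last shot, "pull"
```

Because the transition counts are smoothed the same way as in the BayesPredictor, a shot the batsman has never played after a pull still gets a small nonzero probability.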
Phase 4: Claude API Integration (Optional)
An optional async, non-blocking AI advisor that provides commentary and strategy suggestions. The system uses a circuit breaker pattern (opens after 3 errors for 60 seconds) and LRU cache with TTL for reliability. Critically, the game remains fully playable without this phase—API failures don't impact gameplay. Claude is a third-party AI service by Anthropic, used for optional gameplay commentary.
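CrickBot's actual client code isn't shown in this post, but the two reliability patterns are standard. A minimal sketch, using the threshold and cooldown values from the description (class names, method names, and the cache parameters are illustrative assumptions), could be:

```python
import time
from collections import OrderedDict

class CircuitBreaker:
    """Opens after `threshold` consecutive errors; stays open for `cooldown` seconds."""
    def __init__(self, threshold=3, cooldown=60.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.errors = 0
        self.opened_at = None

    def allow_request(self):
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown:
            self.opened_at = None  # half-open: allow one trial request
            self.errors = 0
            return True
        return False

    def record_success(self):
        self.errors = 0

    def record_error(self):
        self.errors += 1
        if self.errors >= self.threshold:
            self.opened_at = time.monotonic()

class TTLCache:
    """LRU cache whose entries expire after `ttl` seconds."""
    def __init__(self, maxsize=64, ttl=300.0):
        self.maxsize, self.ttl = maxsize, ttl
        self._data = OrderedDict()  # key -> (value, stored_at)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, stored_at = item
        if time.monotonic() - stored_at > self.ttl:
            del self._data[key]  # entry expired
            return None
        self._data.move_to_end(key)  # mark as recently used
        return value

    def put(self, key, value):
        self._data[key] = (value, time.monotonic())
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least recently used

breaker = CircuitBreaker()
for _ in range(3):
    breaker.record_error()
blocked = not breaker.allow_request()  # circuit is now open for 60s

cache = TTLCache(maxsize=2, ttl=300.0)
cache.put("over_1", "commentary")
cache.put("over_2", "commentary")
cache.put("over_3", "commentary")  # evicts "over_1" (least recently used)
```

The key design property is on the caller's side: when `allow_request()` returns False, the game skips the API call entirely and falls back to local commentary, which is how gameplay stays unaffected by API failures.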
Technical Deep Dive
CrickBot's AI is fundamentally a rule-based probabilistic system—not machine learning. This section covers the actual implementation patterns:
BayesPredictor Implementation
The BayesPredictor estimates the probability a batsman will play a specific shot given a delivery context. It uses Laplace smoothing to handle sparse data gracefully:
class BayesPredictor:
    def __init__(self, alpha=1.0):
        self.shot_counts = {}      # {(delivery_type, delivery_length): {shot: count}}
        self.delivery_counts = {}  # {(delivery_type, delivery_length): total}
        self.alpha = alpha         # Laplace smoothing parameter
        self.shots = ["loft", "drive", "cut", "pull", "sweep", "defend"]

    def update(self, delivery_type, delivery_length, observed_shot):
        key = (delivery_type, delivery_length)
        if key not in self.shot_counts:
            self.shot_counts[key] = {shot: 0 for shot in self.shots}
            self.delivery_counts[key] = 0
        self.shot_counts[key][observed_shot] += 1
        self.delivery_counts[key] += 1

    def predict(self, delivery_type, delivery_length):
        """Returns {shot: probability} for all possible shots."""
        key = (delivery_type, delivery_length)
        if key not in self.delivery_counts:
            return {shot: 1 / len(self.shots) for shot in self.shots}  # Uniform prior
        total = self.delivery_counts[key]
        num_shots = len(self.shots)
        probabilities = {}
        for shot in self.shots:
            count = self.shot_counts[key].get(shot, 0)
            # Laplace smoothing: (count + alpha) / (total + alpha * num_shots)
            probabilities[shot] = (count + self.alpha) / (total + self.alpha * num_shots)
        return probabilities
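To make the smoothing concrete, here is the arithmetic worked by hand for one delivery context (the counts are invented for illustration):

```python
# Suppose the batsman has faced 5 short deliveries: pulled 4, defended 1,
# and never played the other four shots. With alpha=1 and 6 possible shots:
alpha, num_shots, total = 1.0, 6, 5
counts = {"pull": 4, "defend": 1, "loft": 0, "drive": 0, "cut": 0, "sweep": 0}
probs = {shot: (c + alpha) / (total + alpha * num_shots)
         for shot, c in counts.items()}
# pull: (4+1)/(5+6) = 5/11 ~ 0.455 instead of a raw 4/5 = 0.8, and each
# unseen shot gets 1/11 ~ 0.091 instead of 0 -- no shot is ever ruled out.
```

This is why the post calls the approach robust on sparse data: after only a handful of balls, raw frequencies would make the bowler wildly overconfident, while the smoothed estimate stays appropriately cautious.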
AdaptiveDifficulty and Win Rate Tracking
The adaptive difficulty system uses exponential moving average (EMA) to smoothly track win rate and applies scaling multipliers to four game parameters:
class AdaptiveDifficulty:
    def __init__(self, target_win_rate=0.45, ema_alpha=0.2):
        self.target_win_rate = target_win_rate
        self.ema_alpha = ema_alpha
        self.ema_win_rate = 0.5
        self.win_streak = 0
        self.loss_streak = 0
        self.base_difficulty = 1.0

    def update(self, game_result):
        """game_result: 1 for win, 0 for loss."""
        # Update EMA of the player's win rate
        self.ema_win_rate = (self.ema_alpha * game_result +
                             (1 - self.ema_alpha) * self.ema_win_rate)
        # Track streaks
        if game_result == 1:
            self.win_streak += 1
            self.loss_streak = 0
        else:
            self.loss_streak += 1
            self.win_streak = 0
        # Frustration guard: 4 straight losses trigger a downscale
        if self.loss_streak >= 4:
            self.base_difficulty *= 0.85
            self.loss_streak = 0
        # Boredom guard: 5 straight wins trigger an upscale
        if self.win_streak >= 5:
            self.base_difficulty *= 1.15
            self.win_streak = 0

    def get_scaling_multipliers(self):
        """Returns multipliers for ball_speed, accuracy, reaction_time, ai_aggression."""
        # Linear scaling based on deviation from the target win rate:
        # winning above target raises difficulty, losing below it lowers it
        deviation = self.ema_win_rate - self.target_win_rate
        difficulty_scale = 1.0 + deviation * 2.0  # 2.0 is the sensitivity
        return {
            'ball_speed': max(0.5, min(2.0, difficulty_scale)),
            'accuracy': max(0.5, min(1.5, difficulty_scale)),
            'reaction_time': max(0.5, min(2.0, difficulty_scale)),
            'ai_aggression': max(0.3, min(1.5, difficulty_scale)),
        }
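A quick trace shows how the win-rate EMA and the frustration guard interact over a losing run, using the constants from the constructor above:

```python
# Four straight losses, starting from the 0.5 prior with ema_alpha = 0.2
ema_alpha, ema = 0.2, 0.5
loss_streak, base_difficulty = 0, 1.0
for result in [0, 0, 0, 0]:
    ema = ema_alpha * result + (1 - ema_alpha) * ema
    loss_streak += 1
    if loss_streak >= 4:          # frustration guard fires on the 4th loss
        base_difficulty *= 0.85
        loss_streak = 0
# EMA decays 0.5 -> 0.4 -> 0.32 -> 0.256 -> 0.2048,
# and the guard cuts base difficulty to 0.85 in one step
```

The two mechanisms operate on different timescales: the EMA nudges the multipliers continuously, while the guards make a discrete correction when a streak signals that the smooth adjustment isn't keeping up.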
Player Modeling with EMA
Phase 2 tracks batsman aggression and identifies weaknesses using exponential moving average for smooth, responsive adaptation:
class PlayerModeling:
    def __init__(self, ema_alpha=0.15, memory_limit_bytes=4096):
        self.shot_by_delivery = {}  # {delivery_type: {shot: count}}
        self.ema_aggression = 0.5   # 0 = defensive, 1 = aggressive
        self.ema_alpha = ema_alpha
        self.memory_limit = memory_limit_bytes

    def update(self, delivery_type, shot_played, is_aggressive):
        """Update the player profile after each ball (is_aggressive: 0 or 1)."""
        if delivery_type not in self.shot_by_delivery:
            self.shot_by_delivery[delivery_type] = {}
        if shot_played not in self.shot_by_delivery[delivery_type]:
            self.shot_by_delivery[delivery_type][shot_played] = 0
        self.shot_by_delivery[delivery_type][shot_played] += 1
        self.ema_aggression = (self.ema_alpha * is_aggressive +
                               (1 - self.ema_alpha) * self.ema_aggression)
        # Enforce the memory cap for mobile efficiency
        self._enforce_memory_limit()

    def find_weakness(self):
        """Identify the delivery type with the highest miss rate."""
        miss_rates = {}
        for delivery_type, shots in self.shot_by_delivery.items():
            total = sum(shots.values())
            misses = shots.get('miss', 0)
            miss_rates[delivery_type] = misses / total if total > 0 else 0
        if not miss_rates:
            return None
        return max(miss_rates, key=miss_rates.get)

    def _enforce_memory_limit(self):
        """Prune the oldest data if memory exceeds the limit."""
        # Simplified: in production, serialize and measure the actual size
        if len(self.shot_by_delivery) > 20:
            oldest_key = next(iter(self.shot_by_delivery))
            del self.shot_by_delivery[oldest_key]
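Here is the miss-rate scan worked by hand on a small profile (the shot counts are invented for illustration):

```python
# Per delivery type, weakness = argmax(misses / total balls faced)
history = {
    "yorker":  {"defend": 6, "miss": 4},  # 4/10  = 40% miss rate
    "bouncer": {"pull": 7, "miss": 1},    # 1/8   = 12.5% miss rate
    "length":  {"drive": 9},              # 0/9   = 0% miss rate
}
miss_rates = {d: shots.get("miss", 0) / sum(shots.values())
              for d, shots in history.items()}
weakness = max(miss_rates, key=miss_rates.get)
# weakness == "yorker", so Phase 1 will bias future deliveries toward it
```

Note that the argmax is taken over rates, not raw miss counts, so a delivery type the player has faced only a few times can still be flagged; in practice the Laplace-smoothed counts from Phase 3 temper that small-sample effect.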
Rules Engine: Bowling Strategy Selection
Phase 1 uses static lookup tables and variation penalties to select deliveries:
import random

class RulesEngine:
    def __init__(self):
        # Static effectiveness lookup: {strategy: {delivery_type: effectiveness_score}}
        self.strategy_effectiveness = {
            'wicket_hunting': {'yorker': 8, 'bouncer': 7, 'dot': 3},
            'defensive': {'dot': 9, 'length': 7, 'line': 6},
            'balanced': {'length': 7, 'dot': 6, 'variation': 6},
            'aggressive': {'short': 8, 'width': 7, 'pace': 8},
        }
        self.last_delivery = None
        self.variation_penalty = 0.3  # Penalize repeating the same delivery

    def select_delivery(self, strategy, player_weakness=None):
        """Select the next delivery type based on strategy and player weakness."""
        candidates = dict(self.strategy_effectiveness[strategy])
        # Apply the variation penalty to the previous delivery type
        # (the membership check avoids a KeyError when the strategy changed)
        if self.last_delivery in candidates:
            candidates[self.last_delivery] *= self.variation_penalty
        # Bias toward the player's weakness if one has been identified
        if player_weakness and player_weakness in candidates:
            candidates[player_weakness] *= 1.3
        # Normalize and make a weighted random pick
        total = sum(candidates.values())
        probabilities = {k: v / total for k, v in candidates.items()}
        selected = self._weighted_random_choice(probabilities)
        self.last_delivery = selected
        return selected

    def _weighted_random_choice(self, probabilities):
        r = random.random()
        cumsum = 0.0
        for choice, prob in probabilities.items():
            cumsum += prob
            if r <= cumsum:
                return choice
        return list(probabilities)[-1]  # guard against float rounding
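The weighting is easiest to see with one concrete pass through select_delivery. Suppose the strategy is wicket_hunting, the last ball was a yorker, and the modeled weakness is the bouncer:

```python
# Base scores for 'wicket_hunting', then the two adjustments from above
scores = {"yorker": 8.0, "bouncer": 7.0, "dot": 3.0}
scores["yorker"] *= 0.3    # variation penalty on the repeated delivery
scores["bouncer"] *= 1.3   # bias toward the modeled weakness
total = sum(scores.values())               # 2.4 + 9.1 + 3.0 = 14.5
probs = {k: v / total for k, v in scores.items()}
# The bouncer goes from a 44% chance (7/18) to roughly 63% (9.1/14.5),
# while repeating the yorker drops to about 17%
```

Two simple multipliers are enough to make the bowler feel deliberate: it rarely repeats itself, and it measurably leans on whatever the batsman handles worst.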
What CrickBot Does NOT Have
It's important to be clear about what technologies CrickBot deliberately avoids, given the emphasis on mobile performance and determinism:
- No Reinforcement Learning or Neural Networks: CrickBot doesn't use RL, DQN, Q-learning, or policy gradients. These would require significant compute overhead and make AI behavior harder to debug and tune.
- No Field Placement AI: Fielders use static line offsets based on shot history rather than dynamic position optimization.
- No Online Learning: Player models reset between sessions (though Phase 2 memory is preserved during campaign play). Global learning across players is not implemented.
Why Rule-Based + Probabilistic?
CrickBot's architecture prioritizes transparency, mobile performance, and determinism. Every AI decision is traceable: you can log exactly why the bowler chose a yorker (it scored 8 points against aggressive batsmen), why accuracy scaled up (the win rate exceeded its target), and what the probability distribution looked like for that shot. This makes balancing and debugging far easier than with an opaque neural network, and keeps the game running smoothly on mid-range phones with sub-50MB memory budgets.
Future Directions
While the current system is sophisticated and effective, potential enhancements could include:
- Cross-Match Learning: Retaining player models across multiple campaign sessions to deepen adaptation.
- Field Placement Optimization: Using constraint satisfaction or genetic algorithms to compute optimal fielder positions without neural networks.
- Claude API Integration Expansion: Richer commentary that understands game context and provides tactical insights based on real-time match state.
Ready to Experience AI Cricket?
CrickBot brings intelligent opponents and real competition to your mobile device. Every match is a learning experience.