Security Features

pancakes-proxy edited this page Jul 14, 2025 · 1 revision

AIMod provides comprehensive security features to protect Discord servers from various threats including bots, raids, spam, and malicious users. This document covers all security mechanisms and their configuration.

🛡️ Security Architecture

Multi-Layer Protection

┌─────────────────────────────────────────────────────────┐
│                 Global Ban System                       │
│              (Cross-server protection)                  │
└─────────────────────┬───────────────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────────────┐
│                 Guild-Level Security                    │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐       │
│  │ Bot Detect  │ │ Raid Defense│ │ Rate Limit  │       │
│  └─────────────┘ └─────────────┘ └─────────────┘       │
└─────────────────────┬───────────────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────────────┐
│                 AI Moderation                           │
│           (Content-based protection)                    │
└─────────────────────────────────────────────────────────┘

Security Event Flow

User Action → Security Checks → Risk Assessment → Response Action → Logging
     ↓              ↓               ↓               ↓           ↓
  Message        Global Ban      Threat Level    Timeout/Ban  Audit Log
  Join Event     Bot Detection   Calculation     Alert Mods   Statistics
  Invite Use     Rate Limits     Confidence      Lockdown     Webhooks
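
The flow above can be sketched as an ordered pipeline: each check either passes the event through or returns a response action, and cheaper checks run first. A minimal illustration (the check names and event shape here are hypothetical, not AIMod's actual internals):

```python
# Sketch of the security event flow: run checks in order and stop at the
# first one that demands an action. Checks are plain callables that return
# an action string or None.
def run_security_checks(event, checks):
    """Return (action, source_check) from the first triggered check, or None."""
    for check in checks:
        action = check(event)
        if action is not None:
            return action, check.__name__
    return None

# Example checks, ordered from cheapest to most expensive:
def global_ban_check(event):
    return "ban" if event.get("globally_banned") else None

def rate_limit_check(event):
    return "timeout" if event.get("messages_in_window", 0) > 5 else None

def ai_moderation_check(event):
    return "alert" if event.get("ai_flagged") else None

CHECKS = [global_ban_check, rate_limit_check, ai_moderation_check]
```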

🚫 Global Ban System

Overview

The Global Ban System protects all servers using AIMod from users who have committed severe violations.

Features

Automatic Enforcement:

  • Users are banned immediately upon joining any server
  • Existing members are banned when added to global list
  • Cross-server violation tracking
  • Appeal process for wrongful bans

Global Ban Criteria:

  • Severe harassment across multiple servers
  • Doxxing or privacy violations
  • Coordinated attacks or raids
  • Distribution of illegal content
  • Persistent ban evasion

Implementation

# Global ban enforcement on member join
@commands.Cog.listener()
async def on_member_join(self, member: discord.Member):
    if member.id in GLOBAL_BANS:
        try:
            ban_reason = "Globally banned for severe universal violation."
            await member.guild.ban(member, reason=ban_reason)
            
            # Log the action
            await log_global_ban_enforcement(member.guild.id, member.id)
            
            # Notify administrators
            await notify_admins_global_ban(member.guild, member)
            
        except discord.Forbidden:
            # Log permission error
            await log_permission_error(member.guild.id, "global_ban", member.id)

Commands

/globalban - Add Global Ban

@app_commands.describe(
    user="User to globally ban",
    reason="Reason for the global ban"
)
async def globalban(
    interaction: discord.Interaction,
    user: discord.User,
    reason: str
):
    """Add user to global ban list (Bot Owner only)."""
    
    # Verify bot owner permissions
    if not await is_bot_owner(interaction.user):
        await interaction.response.send_message(
            "❌ Only bot owners can issue global bans.",
            ephemeral=True
        )
        return
    
    # Add to global ban list
    await add_global_ban(user.id, reason, interaction.user.id)
    
    # Enforce across all servers
    banned_count = await enforce_global_ban_all_servers(user.id, reason)
    
    await interaction.response.send_message(
        f"✅ {user} has been globally banned.\n"
        f"**Reason:** {reason}\n"
        f"**Enforced in:** {banned_count} servers"
    )
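
The command above calls `enforce_global_ban_all_servers`, which is not shown on this page. A plausible sketch (the real helper presumably takes the bot instance; here it takes any iterable of guild-like objects so the logic stands alone) bans the user in every guild that permits it and counts the successes:

```python
import asyncio

async def enforce_global_ban_all_servers(guilds, user_id: int, reason: str) -> int:
    """Ban user_id in every guild that permits it; return how many bans succeeded.

    Sketch only: `guilds` is any iterable of guild-like objects exposing an
    async ban(user_id, reason=...) that raises an error when the bot lacks
    Ban Members permission.
    """
    results = await asyncio.gather(
        *(guild.ban(user_id, reason=reason) for guild in guilds),
        return_exceptions=True,  # one forbidden guild must not abort the rest
    )
    return sum(1 for result in results if not isinstance(result, Exception))
```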

/unglobalban - Remove Global Ban

@app_commands.describe(user="User to remove from global ban list")
async def unglobalban(interaction: discord.Interaction, user: discord.User):
    """Remove user from global ban list (Bot Owner only)."""
    
    if not await is_bot_owner(interaction.user):
        await interaction.response.send_message(
            "❌ Only bot owners can remove global bans.",
            ephemeral=True
        )
        return
    
    # Remove from global ban list
    await remove_global_ban(user.id)
    
    await interaction.response.send_message(
        f"✅ {user} has been removed from the global ban list.\n"
        f"Note: Existing server bans are not automatically lifted."
    )

Global Ban Database

CREATE TABLE global_bans (
    user_id BIGINT PRIMARY KEY,
    reason TEXT NOT NULL,
    banned_by BIGINT NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    appeal_count INTEGER DEFAULT 0,
    last_appeal TIMESTAMP WITH TIME ZONE
);

-- user_id is the primary key, so joins-time lookups already use its index;
-- no separate index on user_id is needed
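
On startup the ban list can be loaded into an in-memory set so `on_member_join` does an O(1) membership check against `GLOBAL_BANS`. A runnable sketch (using the stdlib `sqlite3` module for illustration; the schema above is PostgreSQL):

```python
import sqlite3

# Illustrative only: pull the global ban list into a set at startup.
# Production code would use the bot's PostgreSQL pool instead of sqlite3.
def load_global_bans(conn: sqlite3.Connection) -> set[int]:
    rows = conn.execute("SELECT user_id FROM global_bans").fetchall()
    return {row[0] for row in rows}
```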

🤖 Bot Detection System

Detection Methods

Keyword Analysis:

  • Configurable keyword patterns
  • Common bot phrases detection
  • Suspicious link patterns
  • Promotional content identification

Behavioral Analysis:

  • Message frequency patterns
  • Repetitive content detection
  • Timing analysis
  • Account age correlation

Content Patterns:

  • Discord invite links
  • Suspicious URLs
  • Promotional messages
  • Scam indicators

Configuration

BOTDETECT_CONFIG = {
    "enabled": False,
    "keywords": [
        "discord.gg/",
        "free nitro",
        "click here",
        "dm me for",
        "check my bio",
        "limited time offer",
        "claim your prize",
        "verify your account"
    ],
    "action": "timeout",
    "timeout_duration": 3600,
    "log_channel": None,
    "whitelist_roles": [],
    "whitelist_users": [],
    "sensitivity": "medium"  # low, medium, high
}

Detection Algorithm

async def analyze_bot_behavior(message: discord.Message) -> dict:
    """Analyze message for bot-like behavior."""
    
    risk_score = 0
    indicators = []
    
    # Keyword detection
    content_lower = message.content.lower()
    for keyword in BOTDETECT_KEYWORDS:
        if keyword in content_lower:
            risk_score += 25
            indicators.append(f"Keyword: {keyword}")
    
    # Account age check
    # created_at is timezone-aware in discord.py 2.x, so use an aware "now"
    account_age = (discord.utils.utcnow() - message.author.created_at).days
    if account_age < 7:
        risk_score += 30
        indicators.append(f"New account: {account_age} days")
    
    # Message frequency check
    recent_messages = await get_recent_messages(message.author.id, minutes=5)
    if len(recent_messages) > 10:
        risk_score += 40
        indicators.append(f"High frequency: {len(recent_messages)} messages")
    
    # Link analysis
    urls = extract_urls(message.content)
    for url in urls:
        if await is_suspicious_url(url):
            risk_score += 50
            indicators.append(f"Suspicious URL: {url}")
    
    return {
        "risk_score": min(risk_score, 100),
        "indicators": indicators,
        "action_recommended": get_action_for_score(risk_score)
    }
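
The algorithm ends by mapping the accumulated score to an action via `get_action_for_score`. The real thresholds are not documented on this page; a hypothetical mapping keyed to the three sensitivity levels from the configuration might look like:

```python
# Hypothetical score-to-action thresholds; higher sensitivity triggers
# action at lower scores. The real values are not documented here.
SCORE_THRESHOLDS = {
    "low": 75,
    "medium": 50,
    "high": 25,
}

def get_action_for_score(risk_score: int, sensitivity: str = "medium") -> str:
    threshold = SCORE_THRESHOLDS[sensitivity]
    if risk_score >= threshold + 25:
        return "ban"       # well past the trigger point
    if risk_score >= threshold:
        return "timeout"   # matches the configured default action
    return "none"
```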

Commands

/botdetect - Configure Bot Detection

@app_commands.describe(
    enabled="Enable/disable bot detection",
    action="Action to take (timeout/ban/kick/warn)",
    duration="Timeout duration in seconds",
    sensitivity="Detection sensitivity (low/medium/high)"
)
async def botdetect(
    interaction: discord.Interaction,
    enabled: Optional[bool] = None,
    action: Optional[str] = None,
    duration: Optional[int] = None,
    sensitivity: Optional[str] = None
):
    """Configure bot detection settings."""
    
    if not interaction.user.guild_permissions.administrator:
        await interaction.response.send_message(
            "❌ You need Administrator permissions to configure bot detection.",
            ephemeral=True
        )
        return
    
    config = await get_botdetect_config(interaction.guild.id)
    
    # Update configuration
    if enabled is not None:
        config["enabled"] = enabled
    if action is not None:
        if action not in ["timeout", "ban", "kick", "warn"]:
            await interaction.response.send_message(
                "❌ Invalid action. Use: timeout, ban, kick, or warn",
                ephemeral=True
            )
            return
        config["action"] = action
    if duration is not None:
        config["timeout_duration"] = duration
    if sensitivity is not None:
        if sensitivity not in ["low", "medium", "high"]:
            await interaction.response.send_message(
                "❌ Invalid sensitivity. Use: low, medium, or high",
                ephemeral=True
            )
            return
        config["sensitivity"] = sensitivity
    
    await set_botdetect_config(interaction.guild.id, config)
    
    embed = discord.Embed(
        title="🤖 Bot Detection Configuration",
        color=discord.Color.green() if config["enabled"] else discord.Color.red()
    )
    embed.add_field(name="Enabled", value=config["enabled"], inline=True)
    embed.add_field(name="Action", value=config["action"], inline=True)
    embed.add_field(name="Sensitivity", value=config["sensitivity"], inline=True)
    
    await interaction.response.send_message(embed=embed)

🛡️ Raid Defense System

Raid Detection

Join Rate Monitoring:

  • Track member joins per time period
  • Configurable thresholds and timeframes
  • Account age analysis
  • Pattern recognition

Mass Action Detection:

  • Coordinated message spam
  • Simultaneous role requests
  • Mass channel creation
  • Bulk invite usage

Defense Mechanisms

Automatic Lockdown:

  • Temporarily restrict new member permissions
  • Pause invite creation
  • Enable verification requirements
  • Alert administrators

Proactive Measures:

  • Preemptive bans for suspicious accounts
  • Temporary channel restrictions
  • Enhanced monitoring mode
  • Automatic backup creation

Configuration

RAID_DEFENSE_CONFIG = {
    "enabled": False,
    "threshold": 10,        # Members joining
    "timeframe": 60,        # In 60 seconds
    "alert_channel": None,
    "auto_action": "lockdown",  # lockdown, ban, kick
    "account_age_threshold": 7,  # Days
    "lockdown_duration": 3600,   # Seconds
    "whitelist_invites": []      # Trusted invite codes
}

Implementation

class RaidDefense:
    def __init__(self):
        self.join_tracker = {}  # guild_id: [timestamps]
        self.lockdown_status = {}  # guild_id: lockdown_end_time
    
    async def track_member_join(self, member: discord.Member):
        """Track member join for raid detection."""
        guild_id = member.guild.id
        now = datetime.utcnow()
        
        # Initialize tracking for guild
        if guild_id not in self.join_tracker:
            self.join_tracker[guild_id] = []
        
        # Add current join
        self.join_tracker[guild_id].append(now)
        
        # Clean old entries
        config = await get_raid_defense_config(guild_id)
        timeframe = config.get("timeframe", 60)
        cutoff = now - timedelta(seconds=timeframe)
        
        self.join_tracker[guild_id] = [
            timestamp for timestamp in self.join_tracker[guild_id]
            if timestamp > cutoff
        ]
        
        # Check for raid
        threshold = config.get("threshold", 10)
        if len(self.join_tracker[guild_id]) >= threshold:
            await self.trigger_raid_defense(member.guild, config)
    
    async def trigger_raid_defense(self, guild: discord.Guild, config: dict):
        """Trigger raid defense measures."""
        action = config.get("auto_action", "lockdown")
        
        if action == "lockdown":
            await self.initiate_lockdown(guild, config)
        elif action == "ban":
            await self.ban_recent_joins(guild, config)
        elif action == "kick":
            await self.kick_recent_joins(guild, config)
        
        # Alert administrators
        await self.alert_administrators(guild, config)
        
        # Log the event
        await self.log_raid_defense_trigger(guild, config)
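
The `lockdown_status` map above tracks when each guild's lockdown should lift. A minimal sketch of that bookkeeping, kept separate from the Discord-side permission edits that `initiate_lockdown` would perform (function names here are illustrative):

```python
from datetime import datetime, timedelta

# Sketch of lockdown bookkeeping only; actually restricting permissions
# (e.g. editing the @everyone role) is left to the Discord-side code.
def start_lockdown(lockdown_status: dict, guild_id: int,
                   duration: int, now: datetime) -> datetime:
    """Record a lockdown ending `duration` seconds from `now`."""
    end = now + timedelta(seconds=duration)
    lockdown_status[guild_id] = end
    return end

def is_locked_down(lockdown_status: dict, guild_id: int, now: datetime) -> bool:
    """True while the guild's lockdown window is still open."""
    end = lockdown_status.get(guild_id)
    if end is None:
        return False
    if now >= end:
        del lockdown_status[guild_id]  # expire lazily on first check past the end
        return False
    return True
```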

Commands

/raiddefense - Configure Raid Defense

@app_commands.describe(
    enabled="Enable/disable raid defense",
    threshold="Number of joins to trigger defense",
    timeframe="Timeframe in seconds",
    action="Action to take (lockdown/ban/kick)"
)
async def raiddefense(
    interaction: discord.Interaction,
    enabled: Optional[bool] = None,
    threshold: Optional[int] = None,
    timeframe: Optional[int] = None,
    action: Optional[str] = None
):
    """Configure raid defense settings."""
    
    if not interaction.user.guild_permissions.administrator:
        await interaction.response.send_message(
            "❌ You need Administrator permissions to configure raid defense.",
            ephemeral=True
        )
        return
    
    if action is not None and action not in ["lockdown", "ban", "kick"]:
        await interaction.response.send_message(
            "❌ Invalid action. Use: lockdown, ban, or kick",
            ephemeral=True
        )
        return
    
    config = await get_raid_defense_config(interaction.guild.id)
    if enabled is not None:
        config["enabled"] = enabled
    if threshold is not None:
        config["threshold"] = threshold
    if timeframe is not None:
        config["timeframe"] = timeframe
    if action is not None:
        config["auto_action"] = action
    # Setter assumed to mirror set_botdetect_config
    await set_raid_defense_config(interaction.guild.id, config)
    
    await interaction.response.send_message(
        f"✅ Raid defense: enabled={config['enabled']}, "
        f"threshold={config['threshold']} joins in {config['timeframe']}s, "
        f"action={config['auto_action']}"
    )

📊 Message Rate Limiting

Rate Limiting Algorithm

class MessageRateLimiter:
    def __init__(self):
        self.user_messages = {}  # user_id: [timestamps]
        self.violations = {}     # user_id: violation_count
    
    async def check_rate_limit(self, message: discord.Message) -> bool:
        """Check if user exceeds rate limit."""
        user_id = message.author.id
        now = datetime.utcnow()
        
        # Get configuration
        config = await get_message_rate_config(message.guild.id)
        if not config.get("enabled", False):
            return True
        
        # Check whitelist
        if await is_rate_limit_whitelisted(message.author):
            return True
        
        # Initialize tracking
        if user_id not in self.user_messages:
            self.user_messages[user_id] = []
        
        # Clean old messages
        timeframe = config.get("timeframe", 10)
        cutoff = now - timedelta(seconds=timeframe)
        self.user_messages[user_id] = [
            timestamp for timestamp in self.user_messages[user_id]
            if timestamp > cutoff
        ]
        
        # Add current message
        self.user_messages[user_id].append(now)
        
        # Check limit
        max_messages = config.get("max_messages", 5)
        if len(self.user_messages[user_id]) > max_messages:
            await self.handle_rate_limit_violation(message, config)
            return False
        
        return True
    
    async def handle_rate_limit_violation(
        self, 
        message: discord.Message, 
        config: dict
    ):
        """Handle rate limit violation."""
        user_id = message.author.id
        
        # Track violations
        if user_id not in self.violations:
            self.violations[user_id] = 0
        self.violations[user_id] += 1
        
        # Determine action based on violation count
        violation_count = self.violations[user_id]
        action = self.get_escalated_action(violation_count, config)
        
        # Execute action
        if action == "timeout":
            duration = config.get("timeout_duration", 300)
            await message.author.timeout(
                timedelta(seconds=duration),
                reason="Rate limit violation"
            )
        elif action == "kick":
            await message.author.kick(reason="Repeated rate limit violations")
        elif action == "ban":
            await message.author.ban(reason="Severe rate limit violations")
        
        # Delete recent messages
        await self.delete_recent_messages(message.author, message.channel)
        
        # Log violation
        await self.log_rate_limit_violation(message, action, violation_count)

🔒 Advanced Security Features

Invite Monitoring

async def monitor_invite_usage(invite: discord.Invite, member: discord.Member):
    """Monitor invite usage for suspicious patterns."""
    
    # Track invite usage
    await track_invite_usage(invite.code, member.id)
    
    # Check for suspicious patterns
    recent_uses = await get_recent_invite_uses(invite.code, hours=1)
    
    if len(recent_uses) > 10:  # Suspicious bulk usage
        await alert_suspicious_invite(invite, recent_uses)
        
        # Consider disabling invite
        if len(recent_uses) > 20:
            await disable_suspicious_invite(invite)

Account Analysis

async def analyze_new_member(member: discord.Member) -> dict:
    """Comprehensive analysis of new members."""
    
    risk_factors = []
    risk_score = 0
    
    # Account age
    # created_at is timezone-aware in discord.py 2.x, so use an aware "now"
    account_age = (discord.utils.utcnow() - member.created_at).days
    if account_age < 1:
        risk_score += 50
        risk_factors.append("Very new account")
    elif account_age < 7:
        risk_score += 25
        risk_factors.append("New account")
    
    # Avatar analysis
    if not member.avatar:
        risk_score += 15
        risk_factors.append("No avatar")
    
    # Username patterns
    if await is_suspicious_username(member.name):
        risk_score += 30
        risk_factors.append("Suspicious username pattern")
    
    # Global ban check
    if member.id in GLOBAL_BANS:
        risk_score = 100
        risk_factors.append("Globally banned user")
    
    return {
        "risk_score": min(risk_score, 100),
        "risk_factors": risk_factors,
        "recommendation": get_security_recommendation(risk_score)
    }
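
The analysis ends with `get_security_recommendation`, which turns the 0-100 score into a moderator-facing suggestion. The thresholds below are illustrative, not AIMod's documented values:

```python
# Hypothetical mapping from risk score to a moderator-facing
# recommendation; thresholds are illustrative.
def get_security_recommendation(risk_score: int) -> str:
    if risk_score >= 80:
        return "ban"
    if risk_score >= 50:
        return "quarantine"  # e.g. restrict to a verification channel
    if risk_score >= 25:
        return "monitor"
    return "allow"
```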

Security Monitoring Dashboard

The web dashboard provides real-time security monitoring:

Security Metrics:

  • Active threats detected
  • Global ban enforcement statistics
  • Raid defense triggers
  • Bot detection accuracy
  • Rate limit violations

Alert System:

  • Real-time notifications
  • Webhook integrations
  • Email alerts for critical events
  • Mobile push notifications

Threat Intelligence:

  • Trending attack patterns
  • Cross-server threat correlation
  • Suspicious user tracking
  • Malicious domain monitoring

Next: Logging and Analytics - Comprehensive logging system and analytics