Popcorn Hack #1: Real-World Example of a Biased Computer System
Task Recap:
Think of a real-world example where a computer system has shown bias. It could be something you’ve read about, experienced, or imagined. Describe the biased system, explain what type of bias it represents (Pre-existing Social Bias, Technical Bias, or Emergent Social Bias), and suggest one way to reduce or fix the bias.
Example Response: YouTube’s Recommendation Algorithm
System:
YouTube’s AI-driven recommendation algorithm suggests videos to users based on their watch history, engagement patterns, and similar users’ behavior.
Bias Identified:
Emergent Social Bias
Explanation:
The bias emerges from how the recommendation algorithm evolves and responds to user behavior. Over time, YouTube has been criticized for sending users down "rabbit holes" of increasingly extreme or polarizing content—especially in categories like politics, health, or social commentary. The algorithm is designed to maximize watch time and engagement, which often means pushing content that provokes strong reactions. As more users interact with this kind of content, the AI "learns" that extreme content keeps people watching longer and begins to suggest similar videos to more people.
This behavior wasn’t programmed intentionally—it emerged through the AI’s interaction with user data over time. As a result, the algorithm may end up reinforcing political polarization, spreading misinformation, or amplifying fringe views.
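A toy simulation makes this feedback loop concrete. The sketch below is a hypothetical model, not YouTube's actual system: each video gets an invented "extremeness" score, and we assume, purely for illustration, that more extreme videos hold attention slightly longer.

```python
import random

# Toy model of an engagement-maximizing feedback loop. "extremeness" is a
# made-up 0-1 score; the assumption that extreme videos hold attention
# longer is for illustration only.

random.seed(42)
videos = [{"id": i, "extremeness": random.random(), "watch_time": 0.0}
          for i in range(100)]

def simulated_watch_time(video):
    # Assumed: expected watch time grows with extremeness, plus noise.
    return 1.0 + 2.0 * video["extremeness"] + random.uniform(-0.5, 0.5)

def recommend(videos, k=10):
    # Engagement-only ranking: surface whatever has earned the most watch time.
    return sorted(videos, key=lambda v: v["watch_time"], reverse=True)[:k]

for step in range(1000):
    # Explore randomly at first, then exploit the engagement ranking.
    pool = videos if step < 100 else recommend(videos, k=20)
    video = random.choice(pool)
    video["watch_time"] += simulated_watch_time(video)

def avg_extremeness(vs):
    return sum(v["extremeness"] for v in vs) / len(vs)

print(f"mean extremeness, top picks: {avg_extremeness(recommend(videos)):.2f} "
      f"vs. catalog: {avg_extremeness(videos):.2f}")
```

Under these assumptions the top recommendations drift well above the catalog average, even though no line of code expresses a preference for extreme content.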
Real-World Impact:
- Viewers become trapped in "echo chambers" or filter bubbles.
- Polarizing and harmful content spreads rapidly.
- Misinformation becomes more visible than factual content.
Solution:
- Introduce Diversity-Aware Recommendation Techniques: Build recommendation systems that surface a wider range of viewpoints and perspectives, even when they are not the most engaging (a minimal sketch follows this list).
- Algorithmic Oversight & Human Review: Use real-time monitoring and audits to ensure that harmful or extreme content isn’t being disproportionately recommended.
- User Control Settings: Allow users to adjust their recommendation feed to include more diverse or unfamiliar content.
- Transparency and Explanation: Show users why a video is being recommended to help them understand and challenge biased suggestions.
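To make the first suggestion concrete, here is a minimal diversity-aware reranker in the style of Maximal Marginal Relevance (MMR). The topic labels, engagement scores, and the lam weight are all assumptions for illustration, not a real platform's API.

```python
# Greedy MMR-style reranking sketch: pick the item with the best blend of
# engagement and novelty relative to topics already selected.

def diversity_rerank(candidates, k=5, lam=0.7):
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        seen_topics = {item["topic"] for item in selected}
        def blended(item):
            novelty = 0.0 if item["topic"] in seen_topics else 1.0
            return lam * item["engagement"] + (1 - lam) * novelty
        best = max(remaining, key=blended)
        selected.append(best)
        remaining.remove(best)
    return selected

feed = [
    {"id": 1, "topic": "politics", "engagement": 0.95},
    {"id": 2, "topic": "politics", "engagement": 0.93},
    {"id": 3, "topic": "science",  "engagement": 0.80},
    {"id": 4, "topic": "cooking",  "engagement": 0.75},
    {"id": 5, "topic": "politics", "engagement": 0.90},
]
print([item["id"] for item in diversity_rerank(feed, k=3)])  # [1, 3, 4]
```

A pure engagement ranking would pick the three politics videos; the blended score pulls in other topics instead.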
Popcorn Hack #2: Financial Industry AI Bias
Task Recap:
In the financial industry, an AI system used to approve loan applications unintentionally favors male applicants over female applicants because it was trained on past loan approval data, which reflected gender biases. This is an example of Pre-existing Social Bias.
Question: Give two ways to mitigate this bias and make the system more fair.
Example Response: Loan Approval AI Bias
Bias Type:
Pre-existing Social Bias
Explanation:
The AI system was trained on historical loan data reflecting human decisions shaped by societal and institutional discrimination. Historically, women have been less likely to be approved for loans, even with similar credit scores or financial history. The AI model learns these biased patterns and replicates them when making new predictions. The model may not include gender explicitly, but it can infer it through proxies like occupation, income, or location.
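This proxy effect can be checked directly. A minimal sketch, using made-up records, that scores how well a feature predicts gender (about 0.5 means no signal on balanced data; 1.0 means a perfect proxy):

```python
from collections import defaultdict

# Toy proxy check (made-up records): if a feature's values split sharply
# along gender lines, the feature can stand in for gender even when the
# gender column is dropped before training.

records = [
    {"gender": "F", "occupation": "nurse"},
    {"gender": "F", "occupation": "nurse"},
    {"gender": "F", "occupation": "engineer"},
    {"gender": "M", "occupation": "engineer"},
    {"gender": "M", "occupation": "engineer"},
    {"gender": "M", "occupation": "nurse"},
]

def proxy_strength(records, feature):
    # Accuracy of guessing gender from the feature's majority class alone.
    by_value = defaultdict(list)
    for r in records:
        by_value[r[feature]].append(r["gender"])
    correct = sum(max(genders.count(g) for g in set(genders))
                  for genders in by_value.values())
    return correct / len(records)

print(f"occupation predicts gender with accuracy "
      f"{proxy_strength(records, 'occupation'):.2f}")
```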
Two Ways to Reduce or Eliminate This Bias:
1. Bias-Aware Data Cleaning and Preprocessing
- What to Do: Examine the training dataset for patterns of discrimination. Remove or neutralize features that serve as proxies for gender.
- How It Helps: Ensures the model doesn’t replicate historical inequalities by reducing bias in the data before training.
- Example: Banks can resample or reweight training data to balance approval rates across demographics.
2. Fairness-Conscious Algorithm Design
- What to Do: Incorporate fairness constraints during model development. Use methods such as:
- Equal Opportunity (ensuring equal true positive rates across groups)
- Disparate Impact Removal
- How It Helps: Ensures predictions do not disproportionately harm one group.
- Suggestion: Test regularly with demographic-specific metrics to catch unfair trends; the sketch after this list illustrates both approaches.
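A minimal sketch of both ideas, assuming a toy dataset where "approved" is the biased historical decision used as the training label and "qualified" is ground-truth creditworthiness:

```python
# Illustrative only: measuring the equal-opportunity gap and reweighting
# biased training labels. Toy records; real systems need audited data.

applicants = [
    # (group, qualified, approved)
    ("male", True, True), ("male", True, True), ("male", True, True),
    ("male", False, False),
    ("female", True, True), ("female", True, False), ("female", True, False),
    ("female", False, False),
]

def true_positive_rate(records, group):
    # Among genuinely qualified applicants in this group, how many were approved?
    qualified = [r for r in records if r[0] == group and r[1]]
    return sum(r[2] for r in qualified) / len(qualified)

gap = abs(true_positive_rate(applicants, "male")
          - true_positive_rate(applicants, "female"))
print(f"Equal-opportunity gap: {gap:.2f}")  # 0.00 would mean parity

def reweight(records):
    # Kamiran-Calders style reweighting: weight = expected / observed
    # frequency of each (group, label) cell, so a retrained model sees
    # balanced approval rates across groups.
    n = len(records)
    weights = {}
    for group in {r[0] for r in records}:
        for label in (True, False):
            observed = sum(r[0] == group and r[2] == label for r in records) / n
            p_group = sum(r[0] == group for r in records) / n
            p_label = sum(r[2] == label for r in records) / n
            weights[(group, label)] = (p_group * p_label) / observed
    return weights

print(reweight(applicants))
# Approved women and denied men get weights above 1, nudging a retrained
# model away from the historical pattern.
```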
Homework Hack: Bias in Everyday Technology
Task Recap:
Think of a system or tool that you use every day—this could be a website, app, or device. Can you identify any bias that might exist in the way the system works?
Instructions:
- Describe the system you’re thinking of.
- Identify the bias in that system and explain why it might be biased.
- Propose one way to reduce or fix the bias in that system.
Example Response: TikTok Content Recommendations
System:
TikTok’s "For You Page" (FYP) – a personalized video feed driven by AI and user interaction data.
Bias Identified:
Emergent Social Bias (with elements of Pre-existing Social Bias)
Explanation:
TikTok's algorithm rewards engagement metrics such as likes, shares, and watch time. However, marginalized creators—especially creators of color, LGBTQ+ users, and creators with disabilities—have reported lower visibility, content suppression, and reduced reach, despite posting high-quality content. This bias likely stems from a mix of historical biases in training data and emergent user behavior patterns (i.e., biased user interactions reinforcing the algorithm’s bias).
Real-World Consequences:
- Limited visibility and opportunities for underrepresented creators.
- Reduced exposure to diverse content and perspectives.
- Reinforcement of mainstream norms and stereotypes.
Solution:
Bias-Aware Recommendation Balancing
- What to Do: Implement fairness-aware algorithms that intentionally include content from underrepresented communities in recommendation results.
- How It Helps: Encourages diversity and mitigates content suppression.
- Implementation Tips:
- Use diversity sampling in content feeds (sketched after these tips).
- Offer users an option to explore diverse creators.
- Transparently report algorithm outcomes across groups.
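As a rough illustration of diversity sampling, the sketch below reserves every fourth feed slot for an underrepresented creator; the slot interval and creator lists are assumptions, not TikTok's actual mechanism.

```python
import itertools

# Hypothetical diversity-sampling sketch: reserve every Nth feed slot for
# an underrepresented creator, regardless of raw engagement rank.

def build_feed(ranked, underrepresented, slot_interval=4):
    main, reserved = iter(ranked), iter(underrepresented)
    feed = []
    for position in itertools.count(1):
        source = reserved if position % slot_interval == 0 else main
        item = next(source, None)
        if item is None and source is reserved:
            item = next(main, None)  # reserved pool exhausted; fall back
        if item is None:
            break
        feed.append(item)
    return feed

print(build_feed(
    ["viral_1", "viral_2", "viral_3", "viral_4", "viral_5", "viral_6"],
    ["new_voice_1", "new_voice_2"],
))
# ['viral_1', 'viral_2', 'viral_3', 'new_voice_1',
#  'viral_4', 'viral_5', 'viral_6', 'new_voice_2']
```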
Feedback and Flagging Mechanism
- Allow creators and users to report bias or suppression.
- Use reports to audit the algorithm’s decision patterns and adjust future recommendations.
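Those reports could feed a simple audit. A minimal sketch, assuming each report records the creator's self-identified group:

```python
from collections import Counter

# Minimal audit sketch over hypothetical suppression reports: compare how
# often each creator group reports suppressed reach, normalized by the
# group's share of active creators.

reports = ["lgbtq", "disabled", "mainstream", "lgbtq", "creator_of_color",
           "lgbtq", "disabled", "mainstream"]
active_creators = {"mainstream": 700, "lgbtq": 100,
                   "creator_of_color": 120, "disabled": 80}

report_counts = Counter(reports)
for group, total in active_creators.items():
    rate = report_counts.get(group, 0) / total
    print(f"{group:>17}: {rate:.3f} reports per active creator")
# A disproportionately high rate for a group flags the recommender for
# closer review, e.g. an offline audit of its ranking decisions.
```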
Summary Table
| System | Bias Type | Why It's Biased | Solution |
|---|---|---|---|
| YouTube Recommendations | Emergent Social Bias | Reinforces engagement loops; leads to polarizing content | Diversity-aware algorithms, user control, audits |
| AI Loan Approval Tool | Pre-existing Social Bias | Historical gender bias in training data affects predictions | Balanced datasets, fairness-aware model development |
| TikTok For You Page | Pre-existing + Emergent Bias | Biased user engagement limits exposure for underrepresented creators | Fairness metrics in ranking algorithms, diverse discovery tools, user feedback systems |