Everything You Wanted to Know About AI (But Were Afraid to Ask)
A curated collection of resources for understanding artificial intelligence—from technical foundations to societal implications.
About this wiki: Companion resource for the "Everything You Wanted to Know About AI But Were Afraid to Ask" event hosted by CIISR. Very much a work in progress; contributions welcome via pull request.
- Getting Started: What is AI?
- Dueling Perspectives: AI Optimism vs. Criticism
- Sociological & Methodological Perspectives
- Social Science Workflow with AI
- Responsible AI & Ethics
- AI Governance & Regulation
- AI Incidents & Failures
- Domain Applications
- Courses & Learning Resources
- Institutional AI Guidelines
On Terminology (Abramson et al. 2026, p. 5):
"Artificial intelligence (AI) refers to technologies designed to mimic human performance on tasks that historically required human intelligence. This can include recognizing patterns, extracting text from .pdf files, classifying images, summarizing interviews, or generating synthetic content such as manipulated images or text. Some subfields commonly used in qualitative research workflows include machine learning (ML) for analyzing behaviors and cases, natural language processing (NLP) for parsing language data, and computer vision for analyzing images. Large language models (LLMs)—deep learning systems trained on mass-scale text data to predict and/or generate language (GPT is a commercial example)—are a subset of AI." See Abramson et al. (2026), Qualitative Research in an Era of AI, Table 1 for a full typology. Today the term is often used synonymously with generative LLMs such as ChatGPT, Claude, and Gemini.
Short, accessible introductions to large language models and AI fundamentals.
- What is a Large Language Model? — Cloudflare's accessible technical overview
- What are Large Language Models? — IBM's introduction to LLMs
- A Very Gentle Introduction to LLMs without the Hype — Mark Riedl (Georgia Tech); balanced, jargon-free overview
- A Guide to Understanding AI as Normal Technology — Narayanan & Kapoor (2025); demystifying AI hype
- OpenAI Prompt Engineering Guide — Official best practices for GPT models
- Anthropic Claude Prompt Engineering — Official guide for Claude models
- Gemini Prompt Engineering Guide — Official guide for Gemini models
- Context Engineering Guide — DataCamp; beyond prompts: designing full information flows for AI
- Ollama Quickstart Guide — Run open-source models locally—installation, first model, API basics
- Context Engineering with Agents — Anthropic (2024); managing context windows for AI agents beyond prompt engineering
- Responsible AI at Stanford — "Best practices" for AI use in academic settings
- ChatGPT Prompts for University Educators — Faculty from a dozen disciplines shared prompts they use for teaching and research
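The prompt-engineering guides above converge on a common structure: state the model's role, show worked examples, then pose the task. A minimal sketch of that pattern in Python (the function, labels, and excerpts here are illustrative, not from any vendor's API):

```python
def build_prompt(role, examples, task):
    """Assemble a few-shot prompt: role instruction, worked examples, then the task.

    Mirrors the structure recommended across the vendor guides above:
    say who the model is, show what good output looks like, then ask.
    """
    parts = [role, ""]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}")
        parts.append(f"Output: {example_output}")
        parts.append("")
    parts.append(f"Input: {task}")
    parts.append("Output:")  # the model completes from here
    return "\n".join(parts)

prompt = build_prompt(
    role="You are a coder labeling interview excerpts as POSITIVE or NEGATIVE.",
    examples=[("The staff were wonderful.", "POSITIVE"),
              ("I waited three hours and no one came.", "NEGATIVE")],
    task="The nurse explained everything clearly.",
)
print(prompt)
```

The assembled string can then be sent to any of the models above; the guides differ mainly in how they recommend phrasing the role and ordering the context.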
Contrasting viewpoints to help you form your own position.
- One Useful Thing — Ethan Mollick; AI as collaborative partner; practical applications for knowledge work
- Co-Intelligence (2024) — Ethan Mollick; book-length treatment of human-AI collaboration
- I Set Out to Study Which Jobs Should Be Done by AI — Allison Pugh; human connection has limits that AI cannot cross
- The Last Human Job (2024) — Allison Pugh; why certain work requires irreplaceable human qualities
- AI Snake Oil — Narayanan & Kapoor; separating genuine capabilities from hype
- A Guide to Understanding AI as Normal Technology — Narayanan & Kapoor; AI as evolving technology, not magic
- From Carbon Paper to Code: Crafting Sociology in an Age of AI — Corey M. Abramson; AI tools are part of our world now, for better or worse—but they can be repurposed with sociological imagination
How social scientists are thinking about AI.
- The Society of Algorithms — Burrell & Fourcade (2021), Annual Review of Sociology. How algorithms mediate social life.
- Qualitative Research in an Era of AI — Abramson et al. (2026), Annual Review of Sociology. Large review of uses, examples, workflow, cautions in social science; includes traditional and large-scale qualitative examples; concludes with discussion of technological change and implications.
- Can Generative AI Improve Social Science? — Bail (2024), PNAS. Review of AI applications across survey, experiments, content analysis, agent-based models.
- Start Generating: Harnessing GAI for Sociological Research — Davidson (2024), Socius. Overview of GAI applications: text classification, image analysis, synthetic media.
- A Sociological Approach to Analyzing Satellite and Streetscape Imagery with Generative AI Tools — Law & Roberto (2025), SMR 54(3). Using generative AI for image analysis in social science research.
- Updating "The Future of Coding" — Than, Fan, Law, Nelson & McCall (2025), SMR 54(3):849-888. Systematic comparison of LLM coding approaches to human coding.
- LLM Social Simulations Are a Promising Research Method — Anthis, Kozlowski, Evans et al. (2025), ICML 2025. Using LLMs to simulate human research subjects—challenges and possibilities.
- Simulating Subjects — Kozlowski & Evans (2025), SMR 54(3):1017-1073. Promise and peril of using LLMs to simulate human subjects and social interactions.
- Generative AI in Sociological Research: State of the Discipline (preprint) — Alvero, Stoltz, Stuhler & Taylor (2025), Sociological Science. Survey of authors across 50 sociology journals on GenAI use and attitudes. Finds sociologists primarily use GenAI for writing tasks; low trust in outputs regardless of expertise; few differences between computational and non-computational scholars.
- Is it OK for AI to Write Science Papers? Nature Survey Shows Researchers Are Split — Kwon (2025), Nature 641. Survey of ~5,000 researchers worldwide; sharp divisions on ethical acceptability of AI in manuscript preparation; perception-practice gap where few disclose AI use despite broad acceptance.
- Introducing Anthropic Interviewer: What 1,250 Professionals Told Us About Working with AI — Handa et al. (2025), Anthropic Research. AI-conducted qualitative interviews with 1,250 professionals (general workforce, scientists, creatives). Notable for using AI to do qualitative research at scale; finds optimism alongside stigma concerns (69%), displacement anxiety (55%), and low trust among scientists for core research tasks. Public dataset.
- Integrating Generative AI into Social Science Research — Davidson & Karell (2025), SMR 54(3):775-793. Introduction to SMR special issue; discusses measurement, prompting, and simulation themes across ten contributed articles.
- From Codebooks to Promptbooks — Stuhler, Ton & Ollion (2025), SMR 54(3):794-848. Extracting information from text with generative LLMs.
- LLMs for Text Classification: Zero-Shot to Instruction-Tuning — Chae & Davidson (2025), Sociological Methods & Research. Comparing 10 LLMs across prompting, fine-tuning, and instruction-tuning.
- Scaling Hermeneutics — Dunivin (2025), EPJ Data Science 14(1):28. Hybrid approach preserving interpretive depth while scaling qualitative coding with LLMs; includes codebook adaptation workflow and intercoder reliability benchmarks.
- Utilizing AI to Facilitate Qualitative Surgical Research — Farber, Abramson & Reich (2025), Annals of Surgery Open 6(2):e577. AI and qualitative in medicine: uses, cautions, challenges.
- Contextual Text Coding — Lichtenstein & Rucks-Ahidiana (2023), SMR 52(2):606-641. Mixed-methods approach for large-scale textual data with context-specific meanings.
- Flexible Coding of In-depth Interviews — Deterding & Waters (2018), SMR. Twenty-first-century approach to flexible coding.
- The Living Codebook — Reyes et al. (2021), SMR. Documenting the process of qualitative data analysis.
- Computational Grounded Theory — Nelson (2020), SMR 49(1):3-42. Foundational three-step workflow (pattern detection, refinement, confirmation) for computational text analysis.
- Qualitative Coding in the Computational Era — Li, Dohan & Abramson (2021), Socius 7. BERT example using local ML and human review for interview text classification. Appendix deals with false positives versus negatives in qualitative analysis. Related blog.
- Ethnography and Machine Learning — Li & Abramson (2025), Oxford Handbook of the Sociology of Machine Learning, pp. 245-272. Workflow with ML, local models, updated benchmarks for offline systems runnable on consumer hardware; also discusses file naming for QDA.
- Inequality in the Origins and Experiences of Pain — Abramson et al. (2024), RSF 10(5):34-65. Simplified semantic networks using ML to subset text and visualize alongside in-depth reading.
- The Promises of Computational Ethnography — Abramson et al. (2018), Ethnography 19(2):254-284. Broader case for triangulation and engagement with computation in in-depth qualitative data using data science/computational approaches.
- Meaning in Hyperspace — Boutyline & Arseniev-Koehler (2025), ARS 51:89-107. Word embeddings as tools for cultural measurement; contains examples, good overview, links to pieces on measurement and similarity. Relevant to AI (embeddings are a key layer increasingly used in and outside of AI).
- Computational Analysis for Qualitative Data: Workflow and Visualization Resources — Computational Ethnography Lab. Comprehensive teaching repository with workflow summaries, Python toolkits, bibliography, and practical resources for integrating computational text analysis with qualitative research.
- De-jargoning Qualitative Coding — Academic resource simplifying qualitative coding concepts (cited in Li & Abramson 2025)
- Sub-setting Qualitative Data for Machine Learning — Guide to creating comparison sets in QDA
- From Carbon Paper to Code — Abramson (2024). Short, simple argument for how AI can be part of triangulation—repurposing language models as a sociological tool. This is what the field has done since Mills used a filing cabinet to write about the power elite, and Du Bois used data visualization to debunk myths about Black Americans.
- Using Machine Learning with Ethnographic Interviews — Blog companion to Li, Dohan & Abramson (2021)
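The sub-setting approach described above can be done with standard-library Python: use a keyword or regex pattern to split a corpus into a candidate set (for close reading or ML training) and a comparison set. A minimal sketch with hypothetical excerpts (the pattern and data are illustrative only):

```python
import re

# Hypothetical excerpts; in practice these would be coded interview segments.
excerpts = [
    "The pain in my back keeps me from working.",
    "We talked mostly about her grandchildren.",
    "Chronic pain shaped every decision she made.",
    "The clinic visit was routine.",
]

# Keyword pattern defining the analytic subset (here: mentions of pain).
pattern = re.compile(r"\bpain\b", re.IGNORECASE)

# Split into a candidate set and a comparison set, preserving original
# indices so coded segments can be traced back to their source.
candidates = [(i, t) for i, t in enumerate(excerpts) if pattern.search(t)]
comparison = [(i, t) for i, t in enumerate(excerpts) if not pattern.search(t)]

print(len(candidates), len(comparison))  # → 2 2
```

Keeping the indices makes it straightforward to audit false positives and negatives by reading the flagged segments in context, as Li, Dohan & Abramson (2021) recommend.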
Adapted from Abramson et al. (2026), Qualitative Research in an Era of AI, Annual Review of Sociology, Table 2:
| Stage | Assistive | Automated | Agentic |
|---|---|---|---|
| Research Design | Citation management, project records, version control | Readability checks, data-assisted sampling, simulating sample-size | Literature review synthesis |
| Data Collection | Participant/site tracking, hyperlink field artifacts, e-consent capture, digital diary, cloud backup | Multi-media aggregation, sensor/geospatial logging, timestamping, live transcription | Adaptive or event-based SMS prompts |
| Data Processing | Interview transcription, transcript editing, file-format normalization, data versioning | Scanned docs/images to text, A/V speech-to-text pipelines, de-identification workflows, metadata tagging, quality checks | Adaptive or event-based reminders |
| Data Analysis | Human coding, quote retrieval, memo writing | List/regex scripts coding, inter-coder reliability tests, pattern examination, visualizing patterns, counterfactual checks, network overlays | LLM-assisted coding, LLM-assisted memos, ML classifiers, ML embeddings, augmented retrieval, semantic Q&A |
| Writing & Presentation | Triangulation, consistency checks, real-time writing collaboration | Retrieval of analytic products, generating visuals, citation formatting, plain-language summaries, accessibility audits | Assisted writing, assisted editing |
| Sharing & Preservation | Replication code, notebooks, codebooks, DOI archiving, long-term preservation | Containerized analytic spaces, interactive data portals/APIs, tiered access controls, encryption for sensitive data | Simulated participants |
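The "inter-coder reliability tests" in the automated column of the table need only a few lines of standard-library Python. A minimal sketch computing Cohen's kappa (agreement corrected for chance) between two coders, with hypothetical labels comparing a human coder against an LLM-assisted pass:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement between two coders, corrected for
    the agreement expected by chance given each coder's label frequencies."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical labels over 8 interview excerpts.
human = ["pain", "other", "pain", "pain", "other", "other", "pain", "other"]
model = ["pain", "other", "pain", "other", "other", "other", "pain", "pain"]
print(round(cohens_kappa(human, model), 2))  # → 0.5
```

A kappa this low would signal that the LLM-assisted pass needs prompt revision or human adjudication before its codes are used downstream, which is the human-oversight loop the key insight below describes.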
Key Insight: "AI assists but does not replace researcher judgment. The most effective workflows maintain human oversight at decision points while leveraging AI for repetitive or scale-dependent tasks." (Abramson et al. 2026)
Frameworks for thinking about AI ethics and responsibility.
| Framework | Organization | Link |
|---|---|---|
| AI Risk Management Framework (2023) | NIST | nist.gov |
| EU AI Act (2024) | European Union | artificialintelligenceact.eu |
| Recommendation on the Ethics of AI (2021) | UNESCO | unesco.org |
| Ethically Aligned Design | IEEE | ethicsinaction.ieee.org |
| Code of Ethics | ACM | acm.org |
Tracking how governments and institutions are responding to AI.
| Resource | Type | Coverage |
|---|---|---|
| AI Watch: Global Regulatory Tracker | Tracker | 30+ jurisdictions (EU, US, China, UK, etc.) |
| Stanford AI Index Report 2025 | Annual Report | Comprehensive data on AI trends, investment, policy |
| Stanford STS 14/CS 134: AI Governance | Course | Graduate syllabus with readings on governance |
Learning from what goes wrong.
- AI Incident Database — Searchable database of AI failures and harms
- Optimizing for the Wrong Metric — Thomas & Uminsky (2022), Patterns. When metric optimization causes harm.
AI in specific fields.
- AI for Sports: Technologies and Applications — Li et al. (2025), Intelligent Sports and Health
- A Guide to Deep Learning in Healthcare — Esteva et al. (2019), Nature Medicine
- A Silicon Cage?: Qualitative Research in the Era of AI — Abramson (2023), Medical Cultures Lab. Weber's "iron cage" meets AI tools.
Structured learning paths for AI governance, ethics, and computational text analysis.
- Stanford STS 14/CS 134 — Graduate; AI Governance: full syllabus with readings
- Computational Analysis for Qualitative Data — Computational Ethnography Lab; workflow, Python toolkits, visualization, bibliography, and practical resources for integrating computational text analysis with qualitative research
University-specific policies for responsible AI use.
| Institution | Resource |
|---|---|
| Rice University | AI Usage Guidelines |
| Stanford University | Responsible AI |
To suggest additions:
- Fork this repository
- Add your resource to the appropriate section
- Include: Title, URL, Author/Year, and 1-sentence description
- Submit a pull request
Last updated: February 2026
Some content in this repository was edited and formatted with assistance from Claude Opus 4.6 (Anthropic).
