
Experiment: Modifying Language #17

Open
slifty opened this issue May 13, 2018 · 3 comments
Labels: experiment idea (An idea for an intervention / experiment)

Comments

@slifty
Member

slifty commented May 13, 2018

Hypothesis

Language (words and speech patterns) can signal "tribe" or trigger heightened emotional responses (negative or positive) in a reader, making them more likely to assume bad intent from the author (or otherwise increasing the risk of disconfirmation bias) when consuming credible information.

Proposed intervention

Adjust the presentation of credible information by replacing charged terms with terms that will be interpreted neutrally by the reader based on their identity.

American examples: "Guns" vs "Firearms"; "Illegal" vs "Undocumented"

Concierge MVP

  1. Select a fact check and write multiple versions of it for specific audience archetypes.
  2. Run a study to see if readers are more receptive to fact checks that have been written with their personal predispositions in mind.

Engineered MVP

  1. Identify a set of audience archetypes.
  2. Identify language translations for each archetype.
  3. Have a user select their archetype.
  4. Find / replace words and phrases in content based on the dictionary + archetype combinations (a rough sketch of this step follows below).
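
A minimal sketch of step 4, assuming the per-archetype dictionaries from step 2 are maintained by hand as simple term mappings. The dictionary contents, the `ARCHETYPE_DICTIONARIES` name, and the `neutralize` function are hypothetical placeholders, not anything that exists in this project.

```python
import re

# Hypothetical per-archetype replacement dictionaries (step 2). In practice
# these would be curated by editors rather than hard-coded.
ARCHETYPE_DICTIONARIES = {
    "archetype_a": {"guns": "firearms", "illegal immigrants": "undocumented immigrants"},
    "archetype_b": {"firearms": "guns"},
}

def neutralize(text: str, archetype: str) -> str:
    """Replace charged terms with the selected archetype's preferred terms (step 4)."""
    replacements = ARCHETYPE_DICTIONARIES.get(archetype, {})
    # Longest phrases first so multi-word terms win over their substrings.
    for term in sorted(replacements, key=len, reverse=True):
        pattern = re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
        text = pattern.sub(replacements[term], text)
    return text

# Example: a reader who selected "archetype_a" in step 3.
print(neutralize("New rules on guns and illegal immigrants", "archetype_a"))
# -> "New rules on firearms and undocumented immigrants"
```

A real implementation would also need to preserve capitalization and avoid rewriting quoted material, but this is enough to illustrate the dictionary + archetype mechanic.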

Associated Research

{Public health research around the impact of language?}

slifty added the experiment idea (An idea for an intervention / experiment) label May 13, 2018
@crupar
Contributor

crupar commented May 17, 2018

Linguistics Question: Would this be considered using different 'registers'?

@slifty
Member Author

slifty commented Dec 14, 2018

There are some wireframes for this project now; I think they reflect the following high-level "chunks" of complexity:

Frontend:

  • Site scaffolding

Backend:

  • Claim detection integration
  • Camp signaling detection
  • Judgmental / sentiment analysis
  • Entity detection; entity sentiment mapping (a rough sketch of how these backend pieces might fit together follows below)
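
To make these backend "chunks" a bit more concrete, here is a rough sketch of how they might compose into a single analysis pass over a draft. The `Finding` structure and every function name are hypothetical placeholders; none of these components exist yet, and the real detectors would likely wrap external services or models.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Finding:
    """One flagged span in a draft, to be surfaced to the author in the UI."""
    kind: str   # "claim", "camp_signal", "judgmental", or "entity_sentiment"
    start: int  # character offsets into the draft text
    end: int
    note: str = ""

def detect_claims(text: str) -> List[Finding]:
    return []  # placeholder: claim detection integration (e.g. an external API)

def detect_camp_signals(text: str) -> List[Finding]:
    return []  # placeholder: camp-signaling detection against curated phrase lists

def detect_judgmental_language(text: str) -> List[Finding]:
    return []  # placeholder: judgmental / sentiment analysis

def map_entity_sentiment(text: str) -> List[Finding]:
    return []  # placeholder: entity detection + per-entity sentiment mapping

ANALYZERS: List[Callable[[str], List[Finding]]] = [
    detect_claims,
    detect_camp_signals,
    detect_judgmental_language,
    map_entity_sentiment,
]

def analyze_draft(text: str) -> List[Finding]:
    """Run every backend chunk and return findings sorted by position."""
    findings: List[Finding] = []
    for analyzer in ANALYZERS:
        findings.extend(analyzer(text))
    return sorted(findings, key=lambda f: f.start)
```

The frontend scaffolding would then only need to render the returned findings as highlights, regardless of which detector produced them.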

@slifty
Member Author

slifty commented Dec 14, 2018

Language Check.pdf

slifty added a commit that referenced this issue Jan 1, 2019
These wireframes include a little bit of "not actually likely to
happen" functionality; for instance, it might not be realistic to
identify when a phrase is particularly judgmental just yet. That said,
the point is to indicate the nature of the UX for this initial
experiment.

The goal here is to expose these potentially charged components of
a piece of content to authors. The primary users in mind here are
journalists and fact checkers.

Issue #5 Create wireframes
Issue #17 Experiment: Modifying Language
slifty mentioned this issue Jan 1, 2019