Type of speech assessment

This speech assessment tool was designed as part of a joint DPO/OSAPG report, A Conceptual Analysis of the Overlaps and Differences between Hate Speech, Misinformation and Disinformation, to help practitioners identify whether an example of speech should be classified as hate speech, misinformation or disinformation.

This table has been created for illustrative purposes and does not provide a definitive list. Some categories may appear under more than one of hate speech, misinformation and disinformation. Each incident needs to be evaluated individually against various criteria, including, inter alia, the Rabat threshold test. The table aligns with the ABCDE analysis framework and can be used to inform response activities.

ACTORS: Who is involved and impacted?

Sharer of Speech (common to hate speech, disinformation and misinformation):
  • Private Individuals
  • Community Leaders: e.g. Local Business Owner, Faith Leader, Civic Representative
  • Influencers: e.g. Celebrity, Well-Known Person
  • State Actors: e.g. Politician, Official Spokesperson, Military Leader
  • Non-State Actors: e.g. Armed Group, News/Media Organisation, Activist Group

Sharer Intends to Harm:
  • Hate Speech: Yes
  • Disinformation: Yes
  • Misinformation: No

Target of Speech:

Hate Speech:
  • An individual
  • A group or individuals targeted because of their identity with speech that attacks or uses pejorative or discriminatory language
  • Certain facts (contemporary and in the past), e.g. Holocaust and genocide denial
  Example: A well-known journalist, activist or politician targeted because of their identity (e.g. race, gender, religion) with speech that is discriminatory.

Disinformation and Misinformation:
  • Individuals or a group of individuals with a common trait (e.g. occupation) targeted with false or distorted information; this does not need to be hateful or discriminatory
  • A state actor or organisation
  • A non-state actor or organisation
  • Certain facts (contemporary and in the past), e.g. Holocaust and genocide denial
  • A value or ideal (e.g. democracy, science)
  Example: A well-known journalist, activist or politician targeted because of their occupation with misinformation or disinformation.

Audience of Speech:
  • Limited audience
  • Would reach a defined community, either in person or online
  • Large, multi-community audience, with the likelihood of repetition over multiple days or weeks
  • National or global audience (via news or an exceptionally large online audience)

CONTENT: What has been created? Is there intent to harm?

Common to hate speech and disinformation:
  • Denial and distortion of some historical events (for example the Holocaust, or other genocides established by an international court of law)
  • Content designed to emphasise in-group/out-group differences
  • Harmful content created using AI (e.g. deep fakes)

Hate Speech:
  • Explicit calls to violence based on identity, including genocide
  • Explicit calls to discriminate based on identity
  • Content designed to demonise and/or dehumanise based on identity
  • Explicit recommendation to take action that would cause someone harm based on identity
  • Use of dog-whistles (a subtly aimed political message which is intended for, and can only be understood by, a particular demographic group) and coded language related to identity
  • Use of identity-based slurs
  • Use of words or phrases designed to evade content moderation

Disinformation:
  • Content designed to deceive or evade (e.g. sharing false claims, creating false accounts, deceptive editing)
  • Content designed to mislead (e.g. cherry-picking statistics, editing quotes, using images out of context)
  • Content designed to undermine trust in institutions and official processes (e.g. conspiracies)

DISTRIBUTION (DEGREE): What platforms and tactics are being used to encourage the distribution of disinformation or hate speech?

Platforms (common to hate speech and disinformation):
  • Offline mechanisms: speeches, pamphlets, posters, peer-to-peer conversations
  • Broadcast mechanisms: radio, television, newspapers
  • Closed digital spaces: encrypted messaging apps, online groups and communities
  • Public digital spaces: video-sharing apps, social networks, online advertising
  • Disinformation-specific tactics: forgeries, paid demonstrations, fake testimonies, use of inauthentic accounts
  • Other: academic conferences and journals, and various types of art, graffiti, memes and songs

HARMS (EFFECT): What damage could be caused?

Common to hate speech, disinformation and misinformation:
  • Impacts on both physical health (loss of life or injury, including sexual violence) and mental health
  • Self-protective or forced withdrawal of certain groups of people from the public square, offline or online (e.g. female politicians, journalists, voters)
  • Serious reputational damage to the target of the speech, which can create barriers to accessing rights and services, lead to restrictive or discriminatory measures, and act as a trigger for additional disinformation and/or hate speech
  • Increased societal polarisation and a climate of fear
  • Heightened hostility and hatred against the target of speech
  • Loss of morale and self-belief amongst the target group

Hate Speech:
  • Demonisation, dehumanisation, marginalisation and discrimination against the target of the speech
  • Hate crimes or violence against targets of hate speech
  • Self-protective or forced displacement; segregation of communities and creation of identity-based enclaves vulnerable to violence
  • Genocide, crimes against humanity or war crimes

Disinformation and Misinformation:
  • Decline of trust in an institution or value (e.g. belief in science, democratic values, freedom of expression)
  • Undermining of belief in peace and cooperation, increased justification for conflict, barriers to integration, and the possibility of triggering violence and conflict
  • Resistance to public policy measures (e.g. climate, public health), leading to material impacts (financial sector/economy, employment, environment)
  • Impacts on elections, including suppressed votes, distorted policy debates, and results not being considered legitimate