
ABCDE analysis framework

This is a shared framework for analysing and assessing risks to information integrity. The approach sets a standard that can be tailored to the needs of specific contexts or teams while retaining its coherence and consistency.

The framework has been widely tested and used by researchers, civil society, industry actors, governments, and many UN and humanitarian partners, and it provides a common language for information sharing, joint analysis, and coordinated responses. It breaks the problem down into a series of questions that help you understand the misinformation, disinformation, or hate speech at hand and inform your response activities. When assessing information from multiple sources and digital platforms, the framework also supports more accurate risk assessment.

 

  • Actor: What kinds of actors are involved? This question can help establish, for example, whether the case involves organic rumours, authority figures, or foreign state actors.
  • Behaviour: What activities are exhibited? This question can help establish, for instance, whether there is evidence of coordination and inauthenticity or of organic rumours.
  • Content: What kinds of content are being created and distributed? This line of questioning can help establish what type of information risk is being shared.
  • Degree: What is the overall scale of the information risks? For example, how many users are resharing posts?
  • Effect: What risks are posed as a result of the information risks, such as protection risks, reputational risks, or risks to staff safety?
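Purely as an illustration, the sketch below shows one possible way to capture the five components of an assessment as a structured record, which can support information sharing and joint analysis. The Python representation and all field names are assumptions made for this example, not part of the framework itself.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ABCDEAssessment:
    """Illustrative record of one ABCDE assessment (field names are assumptions)."""
    # Actor: influential individuals, pages, channels, media outlets, etc.
    actors: List[str] = field(default_factory=list)
    # Behaviour: observed activities, e.g. signs of coordination or bot activity
    behaviours: List[str] = field(default_factory=list)
    # Content: type of information risk and the narratives observed
    content_type: str = ""  # e.g. "misinformation", "disinformation", "hate speech"
    narratives: List[str] = field(default_factory=list)
    # Degree: scale and spread, including platforms, languages and affected groups
    platforms: List[str] = field(default_factory=list)
    languages: List[str] = field(default_factory=list)
    affected_groups: List[str] = field(default_factory=list)
    # Effect: resulting protection, reputational or security risks
    effects: List[str] = field(default_factory=list)

# Hypothetical example values, for illustration only
example = ABCDEAssessment(
    actors=["anonymous social media pages"],
    behaviours=["coordinated resharing", "possible inauthentic accounts"],
    content_type="misinformation",
    narratives=["false claims about aid eligibility"],
    platforms=["Facebook", "TikTok"],
    languages=["en"],
    affected_groups=["refugees in urban areas"],
    effects=["protection risks", "reputational risks"],
)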

 

The questions below should be informed by information gathered through contextual knowledge, surveys, and monitoring and detection activities. They can and should be adapted as needed to contextual and operational factors.

Actor

The actor component of the framework enables an assessment of the actor(s) involved in the case, such as authority figures, media outlets, or social media pages.

Relevant Questions

  • Who are the influential individuals, pages, channels, media outlets, or other actors sharing information risks? What is their reach? Who is their typical audience?
  • Are you able to determine the origin of the information risks and/or the intent behind them (e.g. financial or political goals)?
  • Are the actors involved acting in a private capacity or as part of an organisation, institution or other group?

Behaviour

The behaviour component assesses whether, and to what extent, actors are behaving in a coordinated manner or acting organically.

Relevant Questions

  • Are there indications or evidence that the information risks or certain narratives are part of a coordinated campaign with an actor (or actors) behind them? If so, what tactics are being used?
  • What information risk trends or patterns have emerged in your context?
  • Are the actors disguising their identity or actions?
  • Do the actors appear to be engaging in organic activity? Is there evidence of bots or other coordinated behaviour?
  • Do the actors seem to be targeting a specific audience with their content?

Content

The content component of the framework focuses on the information risks observed. This part of the framework includes analysing narratives, which can inform the assessment of potential protection, reputational and security risks.

Relevant Questions

  • What types of information risks have been observed: misinformation, disinformation, hate speech, or other? For hate speech, can you differentiate between top-, intermediate-, and bottom-level examples?
  • Are there key terms used in this context that could be deployed in a harmful manner and may not be obvious to those without contextual knowledge? (e.g. slang term(s) for refugees or relevant minorities.)
  • Which narrative(s) have arisen in the relevant context in relation to UNHCR's mandate? 
  • Is the content threatening to a group?
  • Is the content seemingly in violation of the community standards of the given platform?
  • Is the content verifiably untrue or deceptive?
  • Does the content align with known information risks?
  • Is the content manipulated or artificial (such as generated imagery or text)?
  • Is the content reasonable self-expression protected by fundamental freedoms?
  • Is the content potentially illegal under domestic or international legislation?

Degree

The degree component looks at the scale and spread of the content observed and the audience(s) it has reached.

Relevant Questions

  • Where have information risks been observed (e.g. X, Facebook, TikTok, etc…)? Have they moved to multiple platforms?
  • Which languages are used in the spread of the information risks or other online content in question?
  • What population groups are affected by the information risks? 
  • Who constitutes the content’s main target audience(s)?
  • How or why are they targeted or impacted by these risks, and how might the consequences differ across population groups and geographic areas? (NB: an age, gender and diversity (AGD) lens should be included in this analysis.)
  • Does the scale indicate a single incident or an ongoing campaign?

Effect

This final component of the ABCDE analysis framework seeks to determine current protection and reputational risks resulting from the observed information risks, and to inform assessment of those that may emerge. Indicators can be drawn together from the first four components and combined with other contextual or secondary information.

Relevant Questions

  • What contextual factors are influencing the crisis dynamic and the resulting protection, reputational, or security situation?
  • When, where, why, and how have information risks been present in the context in the past?
  • What are the current or historic factors that make (or could make) the spread of information risks more likely and/or more likely to cause harm? (such as political context, events, relations with government, history of persecution, conflict, social division, etc…)                
  • What are the general sentiments towards forcibly displaced and stateless populations in your context from the public, authority figures and other key groups?           
  • What are the general sentiments towards UNHCR and humanitarian organisations in your context from the public, authority figures and other key groups?
  • In your context, how are information risks influencing protection and reputational risks, incidents or interventions?      
  • How might information risks and narratives evolve in the short, medium, and long term?
  • Can you classify current or future protection gaps with reference to domestic law, international human rights law, refugee law, or instruments relevant to internally displaced or stateless people?
  • What existing capacities are available to address the protection, reputational, security, or other risks, either by mitigating the consequences or addressing the driving factors of the threat?

 
