Indicators and metrics for analysis
Social Listening Tips
- Context Matters: Always consider the context in which content is shared. Misleading content can often be hidden in otherwise factual statements.
- Source Verification: Check the credibility of the sources sharing information. Reputable sources are less likely to share harmful content.
- Patterns of Spread: Monitor how content spreads. Rapid viral spread with identical messaging can indicate coordinated disinformation or hate campaigns.
- Language and Tone: Pay attention to the language and tone used in the content. Aggressive, inflammatory, or divisive language is a red flag for hate speech and malinformation.
- Overlapping of Harms: Be aware that some content may be categorised as more than one type of harm.
Online Social Listening Indicators and Metrics
Although monitoring misinformation, disinformation, malinformation and hate speech requires constant adaptation, there are some indicators that can remain helpful in identifying harmful content and distinguishing which type of harmful content is being spread online.
It’s also important to understand that indicators are different from metrics, which are measurable data points that may or may not be available on each social platform or website. Indicators vary depending on the type of harmful content being monitored, whereas the metrics available to analyse each harm remain the same but may be interpreted differently.
RESOURCE
Checklist for Social Media Analysis
Developed as part of the Using Social Media for CBP Guide, this checklist provides several considerations for operations when undertaking social media analysis.
Misinformation
Indicators
- Factually Incorrect Information: Statements or claims that are verifiably false.
- Misleading Content: Content that takes facts out of context or misrepresents them.
- Erroneous Headlines: Sensationalist or misleading headlines that don't match the content.
- Unverified Sources: Information from unknown or untrusted sources, lacking credible references.
- Viral Spread Without Verification: Content that is rapidly shared without checks on its accuracy.
- Misinterpretation of Data: Incorrect or misleading interpretation of statistics or reports.
Metrics
- Total Engagements: Number of likes, shares, comments on false information.
- Correction Rate: Percentage of misinformation corrected or debunked over time.
- Reach: Number of people exposed to the misinformation.
- Mentions: Overall mentions of the misinformation narrative over a given period of time.
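To make these metrics concrete, the sketch below shows how they might be tallied once flagged posts have been exported from a listening tool. It is illustrative only: the file name and columns (timestamp, likes, shares, comments, impressions, corrected) are assumptions, not any specific platform's export format.

```python
# A minimal sketch, assuming posts flagged as misinformation have been
# exported to a CSV with hypothetical columns: timestamp, likes, shares,
# comments, impressions, corrected (True/False).
import pandas as pd

posts = pd.read_csv("flagged_misinformation_posts.csv", parse_dates=["timestamp"])

# Total engagements: likes + shares + comments across all flagged posts.
total_engagements = (posts["likes"] + posts["shares"] + posts["comments"]).sum()

# Reach: approximated here by summed impressions, if the platform exposes them.
reach = posts["impressions"].sum()

# Correction rate: share of flagged posts that were later corrected or debunked.
correction_rate = posts["corrected"].mean()

# Mentions: number of flagged posts per week over the monitoring period.
mentions_per_week = posts.set_index("timestamp").resample("W").size()

print(f"Total engagements: {total_engagements}")
print(f"Estimated reach:   {reach}")
print(f"Correction rate:   {correction_rate:.1%}")
print(mentions_per_week)
```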
Disinformation
Indicators
- Coordinated Campaigns: Patterns suggesting organised efforts to spread specific narratives.
- Manipulated Content: Images, videos, or documents that have been altered to mislead.
- False Attribution: Misattributing quotes or statements to credible sources.
- Fake News Sites: Content originating from websites that mimic legitimate news outlets but are designed to misinform.
- Bots and Fake Accounts: High volumes of shares or posts from automated or fake social media accounts.
- Inconsistent Posting: Accounts that switch topics suddenly to promote a specific agenda, often without a clear pattern.
- Echo Chambers: Content that is heavily shared within specific groups that consistently reinforce false narratives.
Metrics
- Mention frequency: Overall mentions of the disinformation narrative over a given period of time, with particular attention to the speed of spread.
- Keyword frequency: The consistency of the language used across mentions of the narrative (i.e., how closely the same wording is repeated from one mention to the next), illustrated in the sketch below.
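Because coordinated campaigns often reuse near-identical wording, message consistency can be approximated with a simple text-similarity check. The sketch below is a minimal illustration; the sample mentions are invented, and in practice the texts would come from a listening tool's export.

```python
# A minimal sketch of measuring how consistent the wording is across mentions
# of a suspected disinformation narrative. Highly similar or identical texts
# posted in a short window can point to copy-paste coordination.
from difflib import SequenceMatcher
from itertools import combinations

mentions = [
    "The aid convoy was turned back at the border last night.",
    "The aid convoy was turned back at the border last night.",
    "Aid convoy turned back at the border last night!!",
    "Local volunteers say the convoy arrived safely this morning.",
]

# Pairwise similarity between mention texts (0 = unrelated, 1 = identical).
scores = [
    SequenceMatcher(None, a.lower(), b.lower()).ratio()
    for a, b in combinations(mentions, 2)
]
average_similarity = sum(scores) / len(scores)

# Share of mentions that are exact duplicates of another mention.
duplicate_share = 1 - len(set(m.lower() for m in mentions)) / len(mentions)

print(f"Average pairwise similarity: {average_similarity:.2f}")
print(f"Exact duplicate share:       {duplicate_share:.2f}")
```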
Malinformation
Indicators
- Doxxing: The release of private information with the intent to harm or harass.
- Out-of-Context Information: True information presented in a misleading context to create a false impression.
- Selective Reporting: Omitting critical information to present a distorted view of the facts, selectively framing information to harm an individual or group.
- Old News Resurfacing: Resharing old or outdated content as if it’s current to manipulate perceptions.
- Leaked Information: Information that is true but was meant to remain confidential and is shared to cause harm.
Metrics
- Sentiment: Emotional tone of the conversation surrounding harmful true information.
- Influence and visibility: Extent of exposure of leaked or doxxed information, looking at the number of followers of influential voices.
- Reach: Number of people exposed to the malinformation.
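Influence and visibility can be roughly estimated by weighting each account that shares the harmful content by its follower count. The sketch below is illustrative; the account names, follower figures, and the 10,000-follower threshold are arbitrary assumptions.

```python
# A minimal sketch of estimating the visibility of leaked or doxxed content
# by weighting each sharing account by its follower count. Account names and
# follower figures are invented for illustration.
shares = [
    {"account": "account_a", "followers": 120_000},
    {"account": "account_b", "followers": 850},
    {"account": "account_c", "followers": 43_000},
]

# Potential reach: upper-bound estimate assuming every follower could see the post.
potential_reach = sum(s["followers"] for s in shares)

# Influential voices: accounts above an arbitrary follower threshold.
influential = [s["account"] for s in shares if s["followers"] >= 10_000]

print(f"Potential reach (upper bound): {potential_reach}")
print(f"Influential accounts sharing:  {influential}")
```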
Hate Speech
Indicators
- Derogatory Language: Use of slurs, insults, or offensive language targeting specific groups.
- Incitement to Violence: Content that encourages or glorifies violence or genocide against a group or individual.
- Dehumanisation: Describing individuals or groups as subhuman, inferior, or as animals.
- Stereotyping: Content that promotes harmful stereotypes or generalisations.
- Targeted Harassment: Coordinated efforts to harass or intimidate individuals or groups.
- Dog Whistling: Use of coded language or symbols that subtly incite hate without explicit terms.
- Extremist Symbols or References: Use of symbols, imagery, or language associated with hate groups or extremist ideologies.
- Scapegoating: Blaming a particular group for societal problems, often without evidence.
Metrics
- Sentiment: Emotional tone of the conversation surrounding the hate speech and the individuals or groups it targets.
- Reach: Number of people exposed to the hate speech.
- Engagements: Number of comments, shares, and likes on specific pieces of content.
- Individual or group mentions: Frequency with which specific individuals or groups are targeted in the overall conversation.
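Tracking how often specific individuals or groups are named in flagged content can be done with simple keyword counting, as in the sketch below. The group labels, keyword lists, and sample posts are placeholders; real keyword lists should reflect local languages and the terms actually used against the groups concerned.

```python
# A minimal sketch of counting how often targeted groups are named in a
# stream of flagged posts. Group labels, keywords, and posts are placeholders.
from collections import Counter

group_keywords = {
    "group_a": ["group a", "the a community"],
    "group_b": ["group b", "b people"],
}

posts = [
    "The A community is to blame for the shortages.",
    "B people should not be allowed in the camp.",
    "Volunteers from group A helped with the distribution.",
]

mention_counts = Counter()
for post in posts:
    text = post.lower()
    for group, keywords in group_keywords.items():
        if any(kw in text for kw in keywords):
            mention_counts[group] += 1

print(mention_counts)  # e.g. Counter({'group_a': 2, 'group_b': 1})
```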
Online Social Listening Analysis
Once monitoring starts, there are a few ways in which the information can be analysed to better understand the situation at hand.
- Content Analysis: Involves examining the content itself to identify false claims, misleading narratives, or harmful messages. Techniques include fact-checking, source verification, and detecting the use of manipulative language.
- Sentiment Analysis: Assesses the emotional tone of conversations to determine whether the sentiments expressed in the content itself, or in reaction to it, are positive, negative, or neutral.
- Trend Analysis: Tracks the evolution, spread and changes in online discussions over time. This helps identify emerging patterns, seasonal variations, and the impact of specific events or activities on the proliferation of information.
- Topic Analysis: Categorises and makes sense of the main themes or subjects being discussed. This helps identify key conversations and information gaps that might require further monitoring.
- Network Analysis: Examines the spread of information through online networks. It helps identify key nodes, platforms, sources and patterns of dissemination, including how content propagates and on which platforms, as well as which accounts or entities are influential.
- Behavioural Analysis: Focuses on user interactions with potentially misleading or harmful content. This includes analysing patterns in how people share, comment on, or engage with such content.
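As one illustration of these approaches, the sketch below applies network analysis to a small share graph to surface the most amplified accounts. It assumes share or repost pairs can be extracted from the platform data; the edges shown are invented, and the networkx library is used for the graph measures.

```python
# A minimal sketch of network analysis on shares/reposts, assuming each edge
# records which account amplified which other account's content.
import networkx as nx

# (amplifier, original poster) pairs, e.g. from retweet or share data.
edges = [
    ("acct_1", "acct_5"), ("acct_2", "acct_5"), ("acct_3", "acct_5"),
    ("acct_4", "acct_2"), ("acct_6", "acct_5"), ("acct_6", "acct_2"),
]

G = nx.DiGraph()
G.add_edges_from(edges)

# Accounts whose content is most amplified (highest in-degree).
most_amplified = sorted(G.in_degree(), key=lambda x: x[1], reverse=True)[:3]

# Degree centrality as a rough proxy for influence within this network.
centrality = nx.degree_centrality(G)

print("Most amplified accounts:", most_amplified)
print("Degree centrality:", centrality)
```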
RESOURCE
Developed as part of the Using Social Media for CBP Guide, this chapter looks at how to use social media analytics to inform campaigns, strategies and engagement assessments. It helps you use insights from social media data to better understand how to engage communities and adjust your programmes.
Offline Social Listening
Offline monitoring is particularly important for operations and situations in areas where digital access is limited, or where harmful content is more likely to be disseminated through traditional means. Some methodologies for offline monitoring include:
- Field observation and reporting: Deploying trained observers in key locations to monitor, document, and report instances of information risks. Observers can be community members, volunteers, or staff.
- Media monitoring (print, broadcast, radio): Monitoring local newspapers, magazines, radio shows, and television broadcasts for instances of information risks.
- Community based monitoring: Engaging local communities in monitoring and reporting information risks, leveraging grassroots networks, community and religious leaders, or civil society organisations.
- Focus groups and community dialogues: Conducting focus groups or community dialogues to gather qualitative data on the spread and impact of information risks, either with people affected by these harms or with the communities in which these narratives are circulating.
- Surveys and questionnaires: Designing and distributing surveys or questionnaires to collect data on exposure and perception of information risks in a particular context or with particular groups.