
Use cases

Can misinformation be monitored using social listening? 

Yes. Misinformation can be monitored by tracking specific words and phrases associated with a misleading narrative, which is useful for identifying and assessing the spread and reach of false or misleading information.
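The keyword-tracking approach described above can be sketched in a few lines of Python. This is a minimal illustration, not a production tool: the keyword list and sample posts are hypothetical placeholders, and real social listening platforms apply far richer matching (boolean queries, language detection, deduplication) on top of this basic idea.

```python
import re

# Hypothetical keywords associated with a misleading narrative.
# In practice these would come from analysts monitoring the narrative.
NARRATIVE_KEYWORDS = ["miracle cure", "secret treatment", "hidden evidence"]

def flag_posts(posts, keywords=NARRATIVE_KEYWORDS):
    """Return the subset of posts containing any keyword (case-insensitive)."""
    patterns = [re.compile(re.escape(k), re.IGNORECASE) for k in keywords]
    return [post for post in posts
            if any(pat.search(post) for pat in patterns)]

# Illustrative sample posts.
posts = [
    "Doctors won't tell you about this Secret Treatment!",
    "Lovely weather at the lake today.",
    "New study reviews miracle cure claims circulating online.",
]
flagged = flag_posts(posts)
```

Counting how many posts match over time, and on which platforms, is one simple way to estimate the spread and reach of the narrative.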

Can disinformation be monitored using social listening? 

Yes and no. Disinformation is an inherently difficult target for social listening because intent is hard to attribute to the users who share potentially false information. Distinguishing the deliberate dissemination of falsehoods from unintentional sharing driven by misunderstanding or genuine belief requires careful analysis of content, context, and user behaviour. As a result, misinformation and disinformation can rarely be differentiated purely on the basis of data acquired through social listening or media monitoring.

Can malinformation be monitored using social listening? 

Yes. Malinformation can be detected with social listening tools by tracking how truthful information is framed or taken out of context to cause harm, with a focus on contextual analysis, sentiment, and narrative framing across platforms. Doing so, however, requires careful attention to the intent behind, and the context in which, the information is shared.

Can hate speech be monitored using social listening?  

Yes. Social listening can be used to monitor hate speech by tracking keywords, phrases, expressions, and sentiments associated with discriminatory language. It remains difficult, however, because cultural interpretations of what constitutes hate speech vary, and slang and coded language evolve rapidly.

RESOURCE

OSAPG, Guidance on Monitoring Online Hate Speech

The United Nations Office of the Special Adviser on the Prevention of Genocide (OSAPG) has developed a comprehensive methodology for monitoring online hate speech. This resource introduces a standardized approach for identifying, assessing, and mitigating the risks posed by online hate speech, including cases where it contributes to risks of genocide, war crimes, and crimes against humanity. The methodology is based on an extensive review of existing approaches used across academia, technology companies, governments, the United Nations, and NGOs, and synthesizes them into a standard set of practices best suited to the use cases relevant to the UN and its partners.
