Challenges, limitations and considerations
Data Availability
The most common challenge faced when trying to monitor online information risks is access to social media data. This difficulty stems from three main trends seen across social platforms. The first is the increasing barriers to data access erected by the platforms themselves (the restriction of the Twitter/X API, the discontinuation of CrowdTangle, etc.). Secondly, social media content is shifting from text-based content to audio-visual content, which is much more difficult to collect and even more so to automatically parse and analyse. Finally, the rise of private group messaging apps such as WhatsApp, Telegram, and Signal means that more harmful conversations are happening behind digital “closed walls”, which are often impossible to access and monitor.
Geographic and Language Limitations
Monitoring misinformation, malinformation, disinformation and hate speech across languages and regions faces significant challenges tied to linguistic and technical gaps. Large language models (LLMs) used by commercial tools and bespoke tools alike often underrepresent low-resource languages, leading to inaccuracies in identifying harmful content that relies on regional dialects, slang, or cultural nuances. This lack of linguistic coverage reduces the availability of reliable tools for many regions, creating blind spots in multilingual detection of online harms, especially in languages that might be particularly relevant to humanitarians. Language barriers further complicate manual efforts when local experts are unavailable or when translations fail to capture contextual subtleties. Technically, geographical tracking is hindered by the use of VPNs or proxies, which can obscure a user’s actual location and mislead analyses tied to specific regions. These limitations reduce the effectiveness of region-specific monitoring, particularly for identifying narratives tailored to local contexts.
Bias and Authenticity
Automated tools, whether commercially available or bespoke, are prone to algorithmic bias. AI systems may inadvertently perpetuate biases present in training data, leading to inaccurate or discriminatory identifications of hate speech. Moreover, the challenge of distinguishing nuanced language and cultural contexts can compromise the authenticity of AI-driven detections, potentially mislabeling legitimate expressions as hate speech. It is crucial to continuously refine AI algorithms, incorporate diverse datasets, and, above all, maintain human oversight to mitigate these risks and ensure fair and accurate assessments in addressing hate speech online.
Vicarious Trauma and Psychological Wellbeing
Engaging in monitoring activities for hate speech, disinformation, and misinformation can expose individuals to the risk of vicarious trauma. Constant exposure to harmful narratives and disturbing content can have a profound emotional and psychological impact on those conducting social listening and media monitoring. It is important for personnel involved in these tasks to prioritize self-care, establish clear boundaries, and seek support when needed. UNHCR emphasizes the well-being of its staff and partners, providing resources and training to mitigate the effects of vicarious trauma and ensure sustainable support for vulnerable communities.
As such, some of UNHCR’s teams that are more directly exposed to harmful content online, such as the social media team and research, analytics, and strategy unit, have piloted online training on how to recognize, mitigate, and respond to vicarious trauma.
RESOURCE
Dart Center, Handling Traumatic Imagery: Developing a Standard Operating Procedure
This guide goes through a series of structured steps for how to craft a personalised standard operating procedure for handling graphic content that depicts death, injury, and other violations.
Privacy, Security, Transparency, and Human Rights
Incorporating privacy, security, transparency, and human rights considerations into media monitoring and social listening practices is essential for humanitarian organizations, not just for ethical reasons but also to preserve credibility, trust with affected communities, and the integrity of humanitarian missions overall. These considerations include, but are not limited to, the following points:
Privacy
Humanitarian organizations must ensure they only collect necessary data, avoid over-collection, and anonymize any personal or identifiable information. Ideally, any monitoring or data collection should respect the privacy and consent of individuals, especially when dealing with vulnerable populations such as refugees or displaced people. While posts and comments on social media are often public, organizations should be mindful of not entering spaces with the intention of … Additionally, if personal data is captured, it should be anonymized to prevent the identification of individuals. This is particularly important for protecting individuals who may be at risk.
Security
Data security is extremely important when conducting monitoring for information integrity purposes. This includes using internally secured cloud storage and systems to protect sensitive information and having strong cybersecurity mitigations in place. It is not recommended to use third-party platforms to share information collected through such monitoring (e.g. WhatsApp, Facebook Messenger, Slack, Trello) (see Data Protection and Privacy).
Human Rights and Ethical Frameworks
Monitoring must respect freedom of expression and minimize harm, ensuring that vulnerable individuals are not further stigmatized or exposed to danger. Humanitarian organizations have a responsibility to use data to advocate for human rights and protect marginalized groups (see Human Rights Due Diligence and Artificial Intelligence and Ethics). Monitoring activities should adhere to core humanitarian principles, remain contextually sensitive, and avoid exacerbating vulnerabilities or politicization. Following ethical guidelines ensures that data collection remains impartial, accurate, and beneficial to affected communities.
RESOURCE
Do's and Don'ts for Social Media Analysis
Developed as part of the Using Social Media for CBP Guide, this short list of do's and don'ts provides relevant considerations for operations undertaking any form of social media analysis, whether a situation analysis or social listening.