Mediating in the digital age: insights from RightsCon


The DFRLab joined the Centre for Humanitarian Dialogue and other experts at RightsCon to explore what a social media peace agreement framework might look like

(Source: PA Images via Reuters Connect)

The frontlines of conflict are increasingly digital. That's why the strategy session at the recent RightsCon summit on "Mediating in the Digital Age: Integrating social media into peace agreements" was so timely and important. Here's what we talked about. This is a blog post by the Centre for Humanitarian Dialogue (HD), Build Up, the DFRLab, and Khadeja Ramali, cross-posted from the HD blog.

Mediators traditionally seek peace through agreements that regulate the actions of conflict parties, such as ceasefires in which both sides agree to stop attacking each other. But these agreements rarely cover digital aggression or manipulation of the information environment to advance a party's position, despite the emergence and prevalence of these tactics as tools of war.

We are working to change this. At the Centre for Humanitarian Dialogue, we aim to bring conflict parties to the table to discuss their digital behavior and persuade them to restrain their harmful actions online — in other words, to embrace social media peace agreements.

At RightsCon this year, we joined other experts at the strategy session to explore what a framework for a social media peace agreement might look like. The conversation focused on three key questions:

1. What is covered by a social media peace agreement?

The potential scope is wide-ranging — from hate speech on public platforms to disinformation on encrypted messaging platforms. But what is realistically possible to mediate?

More specific clauses, with a narrow scope, are more likely to be upheld. Beyond the types of content an agreement could cover, we also considered whom it would apply to and who gets to be involved in determining its scope and content.

For example, are affected communities and civil society more knowledgeable about the types of online behavior that cause problems? How can agreements include these essential perspectives while applying to, or being negotiated by, a smaller subset of parties? We also discussed the ways digital agreements are particularly susceptible to spoilers.

Our conversation examined adjacent or relevant fields that already include, or account for, the digital world in their legal processes or institutional agreements. In family mediation, for example, discussions are limited to six people, allowing for a more intimate and efficient negotiation process.

If partners find certain content abusive or harassing, rules of behavior can be established to avoid this, such as refraining from tagging and posting about each other on social media. How could this work in a country with a fragmented government and multiple warring armed groups, such as Libya? Existing frameworks may not be perfect but mediators can draw upon them and adapt them to a conflict context.

We also considered the role of social media platforms — the places where most of the content prohibited by these kinds of agreements is hosted. Platforms may not be signatories to an actual agreement but would ideally be involved and play a positive role in implementing or monitoring peace accords.

This could be done by providing data access to the monitoring body of the agreement, creating direct channels of communication with mediators to flag problematic behavior, or taking action on accounts that violate the agreement itself.

Finally, agreements must take into account which platforms are most important in the region in question. This includes considering whether an agreement needs generalized commitments covering behavior across all platforms or more targeted components for specific platforms.

2. Why would parties sign an agreement?

Conflict parties will not be easily convinced to put down their digital weapons, so mediators must think about how to bring them to the table. What kinds of incentives might work to convince parties to sign a social media peace agreement?

Conflict parties may not consider actions in digital space relevant to conflict dynamics, and hence may not see them as something that can be mediated. As a first step, mediators can help conflict parties understand that the online conflict space is not separate from the physical conflict space: the two are closely linked and affect one another.

Educating conflict parties on the impact and importance of social media in peace processes can help set the norm that digital behavior should be governed as a standard part of any peace agreement.

Beyond education, mediators must think about concrete incentives that could bring parties to the table. Social media platforms such as Facebook publicize their “takedowns” of networks engaging in coordinated inauthentic behavior, often involving state or non-state actors in countries in armed conflict. But it is not clear whether this has a deterrent effect on conflict parties or only emboldens them to find new ways to undermine peace online.

Could signing a code of conduct or memorandum of understanding during the negotiation period of peace talks act as a positive incentive or confidence-building measure for conflict parties? This would be a first step toward a more comprehensive social media peace agreement. But keeping momentum and trust between parties during the sensitive negotiation period will be difficult.

3. How could we monitor the agreement?

For such agreements to have real impact, they must be closely monitored and include robust mechanisms for dealing with violations. We discussed what these mechanisms could look like, who might be able to play that role and what skills they would need.

Robust monitoring helps create incentives for signatories, but only if the system has the approval of the conflict parties. Multiple actors could form part of the mechanism, such as civil society, government institutions, mediators, and the conflict parties themselves. But not all groups are trusted by the conflict parties, are seen as neutral, or have the technical capacity and knowledge to monitor online content and behavior.

Technical capacity is particularly relevant when it comes to identifying disinformation or coordinated inauthentic behavior — work that is often reliant on data from social media platforms. Mediators must also upskill in the basics of social media monitoring to better understand and identify harmful social media content and behavior in a conflict context.
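To make that technical capacity concrete, here is a minimal sketch, in Python, of one common monitoring heuristic: flagging bursts of identical posts from multiple accounts within a short time window, a basic signal of possible coordination. This is purely illustrative. The post fields, the five-minute window, and the three-account threshold are our assumptions for the example, not any platform's data schema or any monitoring body's actual method, and real coordinated-behavior analysis is far more involved.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative thresholds only; real monitoring would tune these per context.
WINDOW = timedelta(minutes=5)   # how close in time posts must be
MIN_ACCOUNTS = 3                # distinct accounts needed to flag a burst

def find_copy_paste_bursts(posts):
    """Flag near-identical texts posted by several accounts in quick succession.

    posts: iterable of dicts with hypothetical 'account', 'text', and
    'timestamp' (datetime) keys standing in for platform data.
    """
    by_text = defaultdict(list)
    for post in posts:
        # Normalize lightly so trivial edits do not hide duplication.
        by_text[post["text"].strip().lower()].append(post)

    flagged = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p["timestamp"])
        for i, first in enumerate(group):
            # Collect every copy of this text posted within WINDOW of post i.
            burst = [p for p in group[i:]
                     if p["timestamp"] - first["timestamp"] <= WINDOW]
            accounts = {p["account"] for p in burst}
            if len(accounts) >= MIN_ACCOUNTS:
                flagged.append({"text": text, "accounts": sorted(accounts)})
                break  # one flag per text suffices for this sketch
    return flagged

# Example run with toy data:
posts = [
    {"account": "a1", "text": "Vote now!", "timestamp": datetime(2021, 6, 1, 12, 0)},
    {"account": "a2", "text": "Vote now!", "timestamp": datetime(2021, 6, 1, 12, 1)},
    {"account": "a3", "text": "vote now! ", "timestamp": datetime(2021, 6, 1, 12, 3)},
]
print(find_copy_paste_bursts(posts))
# [{'text': 'vote now!', 'accounts': ['a1', 'a2', 'a3']}]
```

Even a toy heuristic like this depends on structured platform data, which is why the questions above about data access and local capacity building matter so much.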

If an online conflict has elements of coordinated inauthentic behavior that are not directly attributable to a party, how will conflict parties be held accountable for their actions? This may open up a role for social media platforms to engage in the social media peace agreement.

To address imbalances in technical capacity and political leanings, a task force composed of civil society, policy experts, tech experts, mediators, and conflict groups could provide a model with the legitimacy and confidence to monitor and raise issues of violations. In many cases, regional experts are available but lack sufficient resources, so building local technical capacity is key.

Given the sensitivity of reporting on such fragile topics, social media monitoring during peace agreements must avoid politicizing or further polarizing the information environment it is intended to monitor.

The many insights shared at RightsCon expanded our thinking, but the conversation does not stop here.

Mediators must begin to address the digital elements of conflict and incorporate social media into peace talks and agreements, but this can only be done with the help of civil society, social media platforms, and the RightsCon community. We look forward to working together to turn this vision into a transformative reality. Get in touch if you are interested in joining us.


Follow along for more in-depth analysis from our #DigitalSherlocks.