This Statement of Principles is a shared commitment to technology shaping a more just and democratic world
As digital technologies become increasingly central to the exercise of civil and political rights, technologists and private sector stakeholders should take steps to ensure that their products and services affirm those rights and reinforce democratic values. Technology has become critical to democratic processes and discourse. Societies rely on online networks and systems to facilitate participatory democracy, foster debate around candidates, campaigns, and policies, hold policymakers to account, access information about political rights and policies, and utilize government services, among other functions. The National Democratic Institute, in collaboration with members of the Design 4 Democracy Steering Committee, has developed this Statement of Principles out of a shared commitment that technology platforms and products should help build a more just and democratic world, and to bring the concerns of the democracy community to these platforms for action.
With these action-driven Principles, we will work collaboratively with each other and technology stakeholders towards a digital future that strengthens democracy globally.
To safeguard democracy, we assert that technology platforms must recognize their role in the creation and preservation of healthy, inclusive democracies globally and actively work to promote products and principles that can foster them.
Inherent in these principles are underlying commitments by the democracy community to ensure the continued dedication of technology companies to:
Transparency to ensure all citizens can access information that enables responsible decision-making and the vindication of rights. Platforms must be transparent about their actions, policies, and data, and must play a role in making information about the democratic process more accessible and transparent to the public.
Privacy to protect political actors and marginalized groups from online harassment and to enable the exercise of freedoms of assembly, movement, and association.
Accountability to create trust with users and foster trust in democratic systems.
Openness and Inclusivity to ensure that all people, regardless of language, gender, ability, race/ethnicity, religion, location or background, can participate fully and safely in the networked public sphere.
This is intended to be a living document that will be amended by the democracy community to reflect emerging challenges and opportunities. We also continue to support and incorporate established human rights principles and international frameworks. These Principles are inspired by the democracy community and how together we can enhance democratic governance and free, fair, inclusive, and pluralistic political processes.
1 Promote Information Integrity
Information integrity is essential to public trust in democratic institutions and processes, including elections. Technology companies should strive to promote access to credible, verifiable information through their platforms. This includes facilitating systematic identification and verification of social media accounts, particularly organizational, governmental, or political accounts, so that end users can more easily distinguish authentic and impartial information sources from paid advertisements, personal opinions, and information spread by partisan or malign actors. Companies should provide sufficient reporting and public information to enable public oversight bodies and independent monitors to accurately assess compliance with national, regional, and international standards to promote information integrity, including campaign and media codes of conduct and campaign finance laws.
2 Freedom of Expression
Technology companies should commit to protecting freedom of expression on their platforms and networks through policies and technical systems, applied consistently across the world, that allow for the widest possible range of speech within acceptable norms while mitigating harmful, hateful, or otherwise dangerous speech through moderation, restriction, and other measures. A priority should be placed on empowering end users to safely differentiate accurate and impartial news content from paid advertising, personal opinions, and information spread by partisan or malign actors. Technology companies should protect content moderation systems from authoritarian influence over content takedowns, while also working with civil society to address unjust takedowns.
3 Open Data for Governance
Technology companies gathering or warehousing public data that can be used to assess the integrity, inclusiveness and credibility of political processes (such as data on voting and counting procedures, registered voters, campaign expenditures, and budgetary or legislative information) should consider the international principles of open data, as enumerated in the Open Election Data Initiative, in their system design. Technology products should be able to meet the transparency demands of government bodies, election administrators and citizens, including the ability to provide data in analyzable, complete, timely, granular, non-proprietary and non-discriminatory formats.
4 Protecting Against Surveillance
Platforms and other technology companies should implement measures to secure their systems and non-public services from government surveillance of data, content, and other user information. Notice should be provided to users regarding the circumstances under which governments may have access to such information. Platforms should only permit governments to access such information in connection with legitimate purposes under international human rights law.
5 Protecting Anonymity and Identity
Technology companies that provide users the right to speak anonymously should implement policies that recognize the distinction between anonymous speech and fraudulent misrepresentations, intentional harassment, or hate speech. For platforms that require users to operate under real identities, companies should put in place safeguards to ensure user identity data is protected and that responses are in place to mitigate harmful attacks on identifiable users. Technology companies have a responsibility to help protect the data of key actors in the political process (such as parties, candidates and election commissions) as well as democracy and human rights actors, from attacks on their personal information. These actors are increasingly targeted by hacking and disinformation campaigns, and companies should work with diverse civil society, government and other stakeholders to identify and protect these actors from attack.
6 Engagement with Civil Society
Technology companies should commit themselves to regular engagement and consultation with civil society organizations working on issues of democracy, human rights, and inclusiveness globally. This engagement should include mechanisms for information-sharing and escalating concerns. Technology companies should be solicitous of advice and input from diverse civil society organizations in the development and implementation of new products, policies, or services, particularly as they intersect with inclusion, free expression, political participation, and other civil and political rights.
7 Responding to Gendered Disinformation and Hate Speech Attacks on Marginalized Communities
Women and marginalized groups are primary targets of online disinformation, hate speech, and cyberattacks. These attacks systematically dissuade women and marginalized groups from running for office, campaigning, and engaging in online political discourse, and can subject them to voter suppression. Technology companies should take proactive steps to identify instances in which behavior on their platforms is likely to lead to disenfranchisement and marginalization in public discourse, and put in place measures to ensure all communities can engage in healthy civic discourse - including sensitizing content moderators, identifying and addressing coordinated and uncoordinated hate speech, and developing products explicitly designed to protect the safety of targeted individuals and groups.
8 Transparency in Political Advertising
Political and issue advertisements present particular risks to the political and electoral information environment upon which citizens rely to exercise their right to vote and their fundamental freedoms of expression and peaceful assembly. It is therefore essential that technology companies implement measures to promote transparency in political and issue advertising on their platforms. These measures should enable users to distinguish paid advertisements from organic political discourse, understand who paid for an advertisement and how much was spent, and determine in which country the advertiser is located. They should include safeguards to verify the identity of advertisers, to screen for compliance with platform policies, and to provide context on ad targeting, allowing users to know the source of an ad, why it targeted them specifically, and how it was funded. Mechanisms for transparency, accountability, and equity among political actors should be in place for paid political content. All advertisement transparency measures should be implemented globally.
9 Information for Elections and Political Processes
Social media companies and other platforms have become a critical means through which citizens access information about where and when to vote, how to cast their ballot, and how to request related information. Technology companies, and particularly social media platforms, have a duty to ensure the accuracy of information related to elections, referendums, or other political processes on their platforms. These organizations should have established and vetted mechanisms for the review and moderation of such information, developed in consultation and collaboration with national election management bodies and conducted in accordance with international standards, to mitigate disinformation and other threats to information integrity. Information disseminated by electoral management bodies, as primary sources, should be prioritized.
10 Mitigating Coordinated Influence Campaigns
Coordinated influence campaigns, using organic and inorganic methods, can have major detrimental effects on online discourse when supported by organized operations, from political campaigns to state actors. Platforms have a special responsibility to work with civil society, as well as private sector and public actors to identify, mitigate or disable these campaigns, particularly during elections or other critical democratic periods. When manipulation is detected, social media companies, messaging apps, and other platforms should provide users with information regarding the level of risk, including indicators of state-sponsored activity or of technical manipulation that would not be otherwise evident to users.
11 Mechanisms for Expedited Response to Election-Related Incidents
Elections, referendums, and other critical polls are pivotal moments for democracies, when the information landscape can shape how issues are decided. Given the central role of these periods in democratic systems and the potential for manipulation of public discourse, voting patterns, and trust in electoral systems, technology companies should recognize this risk and establish mechanisms for expedited response to election-related incidents on their platforms. They should commit to engaging with civil society actors and election management bodies to build mechanisms and processes that quickly identify and respond to online incidents through established practices, in any context around the globe. Decisions should be transparent and consistent globally.