Weekly Roundup 12/10/20

By Elizabeth Sutterlin | December 10, 2020

Header image from the Washington Post showing an illustrated hand and red flag over a Facebook feed
Image credit: Derek Abella for The Washington Post

Facial recognition software developed by Huawei has sparked concerns about the dangers of artificial intelligence surveillance, in this case as a potential tool in the Chinese government's crackdown on the Uyghurs, an oppressed minority group. Facial recognition software can estimate age, gender, and ethnicity from a single scan, and AI researchers and advocates are concerned about the normalization and proliferation of this technology. They argue that ethnicity recognition can facilitate discrimination and profiling, and, when implemented at scale, can easily contribute to the making of a surveillance state.


Concerned by China’s growing technological power and the cybersecurity threats it poses, the US has previously moved against Huawei and other Chinese tech companies through sanctions and exclusionary policies. Facial recognition software has drawn backlash in the US over its potential biases, but countries outside the American sphere of influence may prove more receptive to the technology.


Top weekly headlines curated for you:


Global Tech Policy:

  • European privacy laws have drawn backlash after restricting the use of automated software that scans the internet for child sexual abuse and exploitation imagery. While lawmakers struggle to find middle ground, privacy advocates say that scanning Europeans' personal communications for these images violates their rights. Children's rights activists, however, fear that the legislation may trigger a domino effect in which platforms stop scanning for and reporting abuse imagery anywhere because they have no legal obligation to do so.


Open Internet:

  • Internet shutdowns pose three major challenges to protest movements, argues an article in the Washington Post. First, online apps are essential coordination tools for protesters; second, the internet connects protest movements to the global community, making it possible for them to draw worldwide attention to their cause; and third, because social media and citizen journalism are crucial for documenting violence against protesters when state security forces crack down, internet shutdowns make it harder to hold perpetrators accountable. Subnational data suggests that governments may deliberately limit internet access when conducting military operations that result in indiscriminate violence.
  • In Kazakhstan, authorities have begun to limit access to foreign social media websites under the guise of a digital security initiative. While citizens were instructed to download a security certificate to ensure safe access to the internet, internet freedom advocates raised concerns that the government certificate would collect sensitive data (including passwords), track users' online behavior, and could even share personal information with third parties.
  • The Alliance for Affordable Internet has released its 2020 report, which focuses on the inequalities in global internet access and affordability thrown in stark relief by the coronavirus pandemic and the switch to remote work and learning. The report examines national broadband plans as a tool for lowering the costs of one gigabyte of mobile broadband data, and recommends that governments invest in digital skills as well as infrastructure in their broadband plans to support access to meaningful connectivity in the next decade.
  • The CASE Act is being considered for inclusion in next week’s spending bill. This copyright reform bill has been widely critiqued, with experts (including the EFF and the ACLU) arguing that it jeopardizes the freedom of expression of ordinary internet users, who could face large penalties for accidentally or innocently sharing works they don't realize are covered by copyright. It also routes around the Article III courts by handing disputes over private rights to a tribunal in the Copyright Office.


Disinformation:

  • Facebook is re-engineering its hate speech algorithms as part of its "Worst of the Worst" project, which replaces the company's previously "race-blind" prioritization practices with a system that assesses posts based on severity, with higher priority placed on hate speech targeting minority communities. While hate speech directed against non-marginalized groups can still be reported, it will be categorized as low sensitivity by the algorithm.
  • Members of the intelligence community and data privacy experts argue that the data internet companies collect about our online behavior could soon be used to fuel more targeted disinformation and influence operations designed to sow division. Legislation on data protection needs to catch up before this threat becomes a reality, stresses Defense One's technology editor.
  • A recent report from the European Parliament on disinformation in the Western Balkans identified efforts to improve governance and strengthen democracy in the region as the best defense against disinformation campaigns from outside actors like Russia and Turkey as well as false narratives disseminated by domestic actors.


Artificial Intelligence:

  • Access Now has published a report on the EU's "Trustworthy AI" strategy in conjunction with the Vodafone Institute. Stakeholders agree that transparency must be a minimum requirement, but there is little consensus on further regulation and legislation of artificial intelligence technology. The Vodafone Institute emphasized the need for concrete risk assessments before governments implement further regulation of AI and algorithms.
  • The co-lead of Google's Ethical AI team, Timnit Gebru, announced on Twitter that the company had forced her out over a paper she was drafting with other researchers on the potential risks of large language models in NLP, including their environmental costs. Other AI researchers both within and outside of Google expressed concern over the company's censorship of Gebru's research.


Cybersecurity:

  • The University of Toronto's Citizen Lab identified twenty-five countries whose governments were clients of Circles, a cyberespionage tool associated with the NSO Group. Many of the government customers identified in the research have a history of leveraging technology to perpetrate human rights abuses.
  • The National Security Agency has warned that Russian state-sponsored groups have been actively attacking a vulnerability in multiple enterprise remote-work platforms developed by VMware. The affected VMware products all relate to cloud infrastructure and identity management.
  • A sophisticated global phishing campaign has tried to harvest credentials from companies involved in the supply chain that will deliver COVID-19 vaccines to people in need. IBM researchers released findings that the campaign has for months targeted a significant number of those companies across six countries.


Other Tech News: 

  • Airbnb is launching a nonprofit, Airbnb.org, where hosts can rent out their properties on its platform to provide free and discounted stays to refugees, people affected by natural disasters, and frontline workers in the coronavirus pandemic.