Algorithmic surveillance – Part Two

Technology changes rapidly, and never has that seemed more true than now. We are continually adapting to keep pace with new advancements that can be viewed as a benefit to society or a detriment. Or sometimes even both. Algorithmic policing technology is just one example.

Algorithmic policing is technology used to help police analyze crime data and predict where crime is likely to occur. In a recent study titled To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada, researchers from Citizen Lab found algorithmic policing can also provide law enforcement officials with “sophisticated” surveillance and monitoring functions, allowing for the automated collection of data, such as online images.

The report studied four types of surveillance technologies – automated licence plate readers, social media surveillance software, social network analysis and facial recognition. Proponents say the strategy allows police to work more proactively with limited resources and has the potential to deter crime before it occurs. However, many critics worry the price we pay with our civil liberties is too high. Some see it as an economic issue with police expanding the use of digital technology to fight crime because of funding cuts, while others view it as overreach.

In the end, the success and acceptance of algorithmic policing may ultimately depend on how it is utilized.

Who is watching you?

The Washington Post reported last year that U.S. law enforcement officials are using state motor vehicle records to identify Americans without their consent, including those who have no criminal record. Despite the report and others like it, 56 percent of Americans trust law enforcement agencies to use these technologies responsibly, according to a Pew Research Center (PRC) survey. Also, 50 percent of the public say it is acceptable for police to use facial recognition tools to assess security threats in public spaces.

However, in the United Kingdom, a report earlier this year found that national guidance is urgently needed to oversee law enforcement's use of data-driven technology. A study published by the Royal United Services Institute (Rusi) called for regulations to ensure that the use of data analytics, artificial intelligence (AI) and computer algorithms is developed "legally and ethically," The Guardian reported.

"There are significant opportunities to create better, safer and fairer services for society through AI, and we see this potential in policing. But new national guidelines, as suggested by Rusi, are crucial to ensure police forces have the confidence to innovate legally and ethically," Roger Taylor, chairman of the Centre for Data Ethics and Innovation, which commissioned the report, told The Guardian.

How we use technology is important

According to the PRC, many of the experts polled say "technology is neither inherently helpful nor harmful. It is simply a tool." 

"They said the real effects of technology depend upon how it is wielded. It can be used to inspire and catalyze change just as easily as it can be used in ways that are detrimental to society," the report stated. One technology editor said "artificial intelligence and genetic engineering are technologies. How we choose to use these tools, the ethical choices we as human societies make along the way will define us." David Bray, executive director for the People-Centered Internet Coalition, agreed. "Fire can be used to cook a meal and thus be helpful. Fire can also be used to harm or destroy," he told the Pew Research Center. "So, the bigger questions worth asking involve how we humans, both individually and in communities, choose to use technologies. Ideally, we will use them to uplift individuals. The good or bad is not in technology. It is in us."

In its report, Citizen Lab found the use of algorithmic policing technologies can lead to violations of constitutional rights and civil liberties under the Canadian Charter of Rights and Freedoms, as well as under international human rights law. "In particular, the analysis identified a number of issues related to the use of invasive forms of surveillance of personal data and mass data, algorithmic and systemic bias, discriminatory impacts, lack of transparency, and due process concerns," according to the report, concluding that "the use of algorithmic policing is fundamentally incompatible with constitutional and human rights protections, particularly concerning rights relating to liberty, equality, and privacy."

The report found the use of this technology in policing endangers the following human rights:

  • The right to privacy, through issues such as indiscriminate surveillance and data-sharing between law enforcement and other government agencies;
  • The rights to freedom of expression, peaceful assembly, and association, through issues such as undermining anonymity of the crowd and targeting social movements and marginalized communities;
  • The right to equality, through issues such as algorithmic bias perpetuating discriminatory feedback loops;
  • The right to liberty and to be free from arbitrary detention, through issues such as generalized statistical inferences supplanting individualized suspicion; and
  • The right to due process and to a remedy, through issues such as lack of transparency, accountability, and oversight mechanisms.

Call to action

Citizen Lab found that while the use of algorithmic technology in Canada does not yet appear to be widespread, safeguards are nonetheless necessary. The report provides 20 recommendations to federal and provincial governments and law enforcement officials, intended to offset the risks to human and constitutional rights.

Researchers listed the following as their priority recommendations:

  • Governments must place moratoriums on law enforcement agencies' use of technology that relies on algorithmic processing of historic mass police data sets pending completion of a comprehensive review through a judicial inquiry.
  • The federal government should conduct a comprehensive review regarding law enforcement agencies' potential repurposing of historic police data sets for use in algorithmic policing.
  • Governments must make reliability, necessity, and proportionality prerequisite conditions for the use of algorithmic policing.
  • Law enforcement agencies must be fully transparent with the public and with privacy commissioners.
  • Provincial governments should enact directives regarding the use and procurement of algorithmic policing technologies, including requirements that law enforcement authorities conduct algorithmic impact assessments prior to their development or use and publish annual public reports that disclose details about how the technologies are employed.
  • Law enforcement authorities must not have unchecked use of algorithmic policing technologies in public spaces, and must obtain prior judicial authorization before deploying algorithmic surveillance tools at public gatherings and in online environments.
  • Governments and law enforcement authorities must draw on external expertise when developing regulation and oversight mechanisms for algorithmic policing technologies.

At ICONA, we have a team of web professionals, designers, writers and content creators who know what it takes to make law firms stand out from the crowd. If you have any questions about content creation, please do get in touch. We are here to help.