Introduction:
Facial recognition software has emerged as one of the most active fields of research in computer vision and pattern recognition over the past few decades, and its applications are being actively explored and deployed by both private and federal institutions (Adjabi et al., 2020). As a biometric identifier, that is, a method of identifying humans via physical characteristics, the technology is appealing for bureaucratic, legal, and commercial applications despite posing more technical challenges than other biometric identifiers such as fingerprints (Cole, 2012). Facial recognition software can be executed from a distance without physical contact, it can survey a much larger set of information (faces) at once, and it can be conducted discreetly, without the explicit knowledge of those being identified. In addition, Western societies in particular are culturally accustomed to treating the face as a key marker of identity (Cole, 2012). In the United States, there has been a preoccupation with developing and deploying facial recognition technology since September 11, 2001. Since then, these algorithms have been adopted by industry, government, and research constituencies despite overwhelming evidence that the technologies are racially biased rather than neutral, and in spite of concerns that countries that rely heavily on facial recognition to monitor their citizens are veering closer towards becoming surveillance states (Bisaillon, 2012).
Facial Recognition Technology: Algorithms of Surveillance
Facial Recognition Technology is a form of image analysis and pattern recognition that compares a facial image of interest, known as a probe, against a database of images, known as a gallery (Parks & Monson, 2018). Building an algorithm that can successfully isolate, measure, and identify faces among multiple, simultaneous stimuli requires deep learning; that is, the machine mimics the neural networks underlying human cognition and uses successively more complex layers of information to become highly specific and, in theory, accurately recognize and identify facial features (Martinez, 2017). Although rudimentary attempts at facial recognition technology date back to the 1960s, it was not until 1991 that a true, near-real-time computer tracking system was developed by Matthew Turk and Alex Pentland at the Massachusetts Institute of Technology (Turk & Pentland, 1991). Their success came from the realization that an algorithm could exploit the fact that all faces share common basic structures. By analysing a large set of faces with principal component analysis (PCA), researchers could strip away the correlations shared between faces and leave behind only the distinguishing characteristics of a face: an eigenface (Tsao & Livingstone, 2008).
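To make the eigenface idea more concrete, the following is a minimal, illustrative sketch in Python. It assumes scikit-learn and its bundled Labeled Faces in the Wild dataset, and is not the pipeline of any specific system discussed here.

```python
# Minimal eigenface sketch using scikit-learn's PCA on the Labeled Faces
# in the Wild dataset (illustrative only).
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA

# Each face image is flattened into a long vector of pixel intensities.
faces = fetch_lfw_people(min_faces_per_person=20)
X = faces.data  # shape: (n_images, n_pixels)
n_components = 50

# PCA finds the directions of greatest variation across the whole set of faces.
# The structure shared by all faces is captured by the leading components; each
# component, reshaped back to image dimensions, is an "eigenface".
pca = PCA(n_components=n_components, whiten=True).fit(X)
h, w = faces.images.shape[1], faces.images.shape[2]
eigenfaces = pca.components_.reshape((n_components, h, w))

# Any face can now be summarised by ~50 coefficients (its projection onto the
# eigenfaces) instead of thousands of raw pixel values.
face_codes = pca.transform(X)
print(face_codes.shape)  # (n_images, 50)
```

Reducing each face to a short vector of coefficients is what makes comparison against large galleries computationally feasible.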
The current method of facial recognition technology involves creating a “template” of the target’s face by measuring specific characteristics such as the distance between the eyes or the width of the nose; these facial landmarks are known as nodal points (Hamann & Smith, 2019). Once the template is encoded, it is compared against a database of faces until a positive match is found. Automated facial recognition requires that the machine detect the face, extract its features to produce an accurate face normalization, and classify it through verification or identification (Adjabi et al., 2020). It is important to note that facial recognition and deep learning are not rule-based algorithms whose rules are set by a programmer; rather, the machines are trained to define their own rules for detection, analysis, and classification (Bueno, 2019). As a result, although facial recognition algorithms may seem neutral, the rules generated by the machines come from large databases of facial images that are often racially biased or not comprehensive.
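As a schematic illustration of the matching step, the sketch below compares a probe template against a small gallery using a nearest-neighbour search. The “nodal point” measurements, identities, and distance threshold are entirely hypothetical, invented for this example; real systems compare learned feature embeddings rather than a handful of hand-picked measurements.

```python
# Schematic sketch of template matching against a gallery (hypothetical data).
import numpy as np

gallery = {
    "person_a": np.array([62.0, 34.0, 51.0]),  # made-up nodal-point measurements
    "person_b": np.array([58.0, 38.0, 47.0]),
    "person_c": np.array([65.0, 31.0, 55.0]),
}

def identify(probe, gallery, threshold=5.0):
    """Return the closest gallery identity, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, template in gallery.items():
        dist = np.linalg.norm(probe - template)  # distance between templates
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

probe = np.array([61.0, 33.5, 50.0])  # template extracted from the probe image
print(identify(probe, gallery))  # -> "person_a"
```

The threshold determines the trade-off between false accepts and false rejects, which is precisely where demographic differences in accuracy become consequential.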
The Jim Code: Racism and biases coded into the Machine
Because facial recognition software must be trained on large databases of images, whose makeup is often racially skewed or disproportionate, there has recently been mounting interest in exploring how accurate, and how biased, these algorithms can become. Addressing the question of whether machines can be racist in her book Race after Technology, Ruha Benjamin (2019) asserts that “robots, designed in a world drenched in racism, will find it nearly impossible to stay dry. [They] learn to speak the coded language of their human parents…one’s individual racial identity offers no surefire insulation from the prevailing ideologies” (p. 62). Indeed, “automated anti-Blackness” can be seen as particularly sinister because it is a product of data-driven decision making, which is assumed to be objective but requires subjective choices and training by developers, who often end up encoding their own biases into the programs (Nkonde, 2019). The result is the overrepresentation or underrepresentation of certain communities in these databases. The Gender Shades project launched by the Algorithmic Justice League (AJL) exposed divergent error rates across demographic groups and showed that accuracy was poorest for Black people, and particularly Black women, when compared to white men, with an error-rate difference of up to 34.4% (Buolamwini & Gebru, 2018).
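To make the notion of divergent error rates concrete, the toy sketch below shows the kind of disaggregated calculation an audit such as Gender Shades performs, computing error rates per demographic group rather than in aggregate. The records here are invented purely for illustration and do not reproduce the study’s data.

```python
# Toy illustration of a disaggregated audit: error rates per demographic group.
# The (group, true_label, predicted_label) records below are invented.
from collections import defaultdict

records = [
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    if truth != prediction:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.1%}")
# A single aggregate accuracy figure would hide the gap these per-group rates reveal.
```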
Facial recognition technology continues to be developed in a world deeply affected by racial disparities and, as a result, reinforces racial discrimination. In most Western countries, Black people are more likely to be stopped and investigated by a police officer and to have their biometric information, including a face photo, entered into a bureaucratic database, which is then used by the machines to optimize their algorithms (Bacchini & Lorusso, 2019). As Benjamin notes, “some technologies fail to see Blackness, while others render Black people hypervisible and expose them to systems of racial surveillance” (Benjamin, 2019, p. 99). In fact, since facial recognition systems can only certify individuals already in their databases, Black Americans are more often returned as a match, which results in a disproportionate number of both true and false accepts. This in turn results in more Black people being stopped, investigated, and implicated as a consequence of facial recognition technology, suggesting its capacity to strengthen and perpetuate racially biased patterns of law enforcement that have existed since the eighteenth-century “lantern laws” (Bacchini & Lorusso, 2019).
In the simplest sense, algorithms cannot be racially neutral because the world they learn from, and whose data they process, is not neutral either. In an effort to expose these biases, organizations such as the AJL continue to research and audit the software used by major bureaucratic and industrial entities (Buolamwini & Gebru, 2018). It is important to highlight that private industry choices are public policy decisions: many bureaucratic organizations use data collected by private industry, and by extension these industries exert an often unseen but consequential influence on legislation and on the systems of surveillance produced. To create an equitable and actively non-racist technology, there is a pressing need to diversify databases, consistently audit widely used software, and center the impacted communities in the design process. By doing so, policy makers can create the frameworks for dismantling anti-Black systems and a path “to develop socially just policies that can regulate biometric technologies” (Nkonde, 2019).
Where’s Waldo?: Modern applications of recognition technology and methods of resistance
Live Facial Recognition (LFR) is known for the heavy racial bias it perpetuates, a consequence of algorithms being trained on datasets lacking diversity (Fussey & Murray, 2019). Creating fairer databases could counter some of the issues stemming from these biases (Lamb, 2020), but can only do so much as long as racism prevails in Western countries. Could art spark difficult conversations and help highlight the urgency of legitimising the right to privacy and individual agency (Nisbet, 2004), and of confronting the heavier policing of marginalised communities?
Brief history of art as a form of resistance
Art, in all its forms, has always been used as a powerful political tool (Anapur, 2016). But so has heavy state surveillance, used to enforce power over the people. Evading censorship, targeted violence, and surveillance has long been a concern for activists and artists (NCAC, 2019). How, before the advent of high-tech surveillance, did artists find ways to confront and expose the brutality of a political system?
Uruguay, 1983. Artist Nelbia Romero was showcasing her installation Sal-si-puedes, revisiting the massacre that eventually led to the disappearance of the Native people of Uruguay. As Andrea Giunta shows, this actualisation of a historical event allowed Romero to evoke the lived experience of life under a strict dictatorship whilst bypassing the heavy censorship of the time (Giunta, 2016).
A more recent example of adapting to methods of oppression in this way is Dread Scott’s work “On the Impossibility of Freedom in a Country Founded on Slavery and Genocide”, which re-enacts a racist past event that one immediately associates with more recent displays of racial discrimination. His statement, “I make revolutionary art to propel history forward”, is rooted in the same idea: politics and history are cyclical, as are the oppressions they carry within (Duggan, 2014).
How can art help make statements that challenge the status quo and address discrimination in our current age of pervasive, hidden mass surveillance? What can it do to confront algorithmic racial bias and spark discussions about the risks posed by the disappearance of privacy?
LFR avoidance-based art
Recent years have seen a major surge in anti-surveillance art practices that allow individuals to evade scrutiny, using a variety of methods to throw the algorithm off, usually by concealing or playing with the face in visually intricate ways (McMullan, 2018).
The most talked-about example this year is CV Dazzle (Harvey, 2020), which uses facial makeup and hairstyling to trick AIs by transforming and hiding key areas of the face. A wide range of objects has also appeared, from LED glasses, adversarial examples printed on shirts, masks, and frames to scarves covered in a multitude of human faces: all hyper-aesthetic methods of surveillance avoidance that exploit the limitations of LFR software (Tapper, 2020).
As the collective Hyphen-Labs said, “The control of identity and image has been a way to oppress freedom from groups who have been historically and systemically marginalized both in the U.S. and globally”. Reclaiming that control is, in a way, addressing identity politics ascribed by an oppressive system (McMullan, 2018).
Artist and researcher Zach Blas, working with masks made by blending the features of multiple people, offers a way to evade these imposed monolithic perceptions by erasing the wearer’s identity altogether, creating a secret milieu of free expression (Blas, 2011-2014).
A question, then, is how effective all these methods really are. LFR technologies are updated frequently and are becoming more and more accurate; CV Dazzle does not work as well against these newer algorithms (Harvey, 2020), and individuals can be identified even by their gait alone (Kang, 2018). Evading surveillance would then demand that we stay vigilant every second we spend in public spaces, changing our behaviours, our looks, and even the way we walk, eventually changing our very identity at its core and creating an entirely new persona to present to the outside world.
But does art need to be effective to be relevant? Making a statement that disrupts a narrative or sparks discussion could have a greater, longer-lasting impact on how society reacts to, and implicitly accepts, state surveillance.
The main issue, unfortunately, is whether these methods can be used by marginalised people, who are already subject to heavier scrutiny. FR algorithms are known for perpetuating racial (and gender) bias, resulting in over-policing (Funk, 2020); overt attempts to avoid surveillance or disrupt the system risk being perceived as criminal acts, sometimes with lethal consequences. The ability to publicly wear masks or show defiance then becomes a statement of individual privilege, somewhat missing the opportunity to question the ubiquity of that surveillance. The intersectionality of oppressive systems requires us to rethink what we accept, and to recognise that the responsibility for one’s protection should not rest solely on the individual but on a collective push enforced by the global public (Monahan, 2015).
Just like history, public interest in these conversations is cyclical. Every year brings its share of new scandals regarding police brutality, racism, and the unfair use of LFR, making conversations around anti-surveillance relevant once again, only to be soon forgotten until the next revelation or murder.
“Looking back”
So how can these art practices stay relevant?
For artist Nancy Nisbet, who had two microchips implanted, surveillance always comes with a context: a specific, unique person, a location, an object of interest. She also identifies three key strategies in her practice: Avoidance, Intervention and Subversion.
Avoidance revolves around bypassing surveillance; most of the anti-surveillance techniques mentioned above fit into this first category, a first step towards making a difference.
Intervention implies directly altering the data or the tool of surveillance, reclaiming control over one’s data.
Subversion then becomes the act of rendering all the collected data meaningless, since a state of surveillance presumes static, unchanging, singular identities that are well defined within a context. By having not one but two microchips implanted, she becomes an anomaly, breaking the very rules by which surveillance operates (Nisbet, 2004).
Similarly, artist Hasan Elahi, wrongly identified as a terrorist, interrogated, and placed on the US terrorist watchlist, updated his location in real time for years. His constant, open disclosure of his location renders official state surveillance irrelevant, stripping it of any power (Elahi, 2009-present).
Torin Monahan, on the other hand, argues that to be effective and impactful, anti-surveillance art practices should not rely only on avoidance but should invert the positions of power and challenge the system by putting the oppressors in the spotlight, legitimising a right to “look back” (Monahan, 2015).
The works of Trevor Paglen and Laura Poitras use this method to draw public attention to concerning issues, photographing drones or surveillance headquarters that cannot even be found on maps (Crawford & Paglen, 2019; Poitras, 2016). Observing the observer comes with its risks, and both artists are under heavy surveillance themselves, creating an almost poetic loop were it not for the heavy implications that come with it.
Artist and filmmaker Manu Luksch, in her project Faceless, fully renounces the avoidance of surveillance and instead uses CCTV footage, obtained through her right to access her own data, to narrate a film, revealing in the process how omnipresent these cameras are, in a more concrete way than face masks or facial makeup (Luksch, 2007).
But the most front-facing way to oppose the system might just be the art of protest, seen as performance. The Hong Kong protests of the past few years are probably the most obvious example, with chants, laser-pointer displays, umbrellas, Lennon Walls, and human chains making for a day-and-night mix of artistic performances and installations. In this way, too, they call back to historical events (the Baltic Way, Lennon Walls, black bloc techniques) to make a statement about current struggles (Ioanes, 2019).
Masks, umbrellas and lasers become both objects of performance and tools to counter surveillance, concealing identities and blinding cameras and law enforcement.
But the BLM protests of 2020 saw a resurgence of conversations about racial bias both in the police and in the algorithms (Stokel-Walker, 2020). Inspired by the techniques of the Hong Kong protests, they became an opposition of low-tech against a high-tech police force, but their power resided in numbers: all around the world, activists organised protests using the same visual language, the same chants and demands, becoming a worldwide performance and display of solidarity in the fight against systemic racism (Maqbool, 2020). It became one collective push against discrimination and intersectional issues, forcing both the public and various governments to face a somber reality already getting out of hand. Recording the police and police brutality became a mechanism of accountability (Lind & Fong, 2014), putting the focus on them instead of blaming the victims.
2020 normalised the wearing of masks in public spaces, at least for now, but perhaps the future of anti-surveillance art resides in this: a communal set of actions, combined with avoidance methods to protect individuals, while surveilling the perpetrators of state violence.
The recent law proposals in France, which aim to make filming the police illegal, go to show how crucial that last part is, and how far we still are from actual freedom and privacy.
Conclusion
In addition to decolonising the training databases of major LFR software, there is a need to engage in a larger conversation about the invasive rise of mass surveillance as a threat to individual freedom under a guise of “care” (Crawford & Paglen, 2019).
By disrupting mainstream narratives around the notion of identity/identities, self, and the passive acceptance of this enforced state of surveillance, artistic practices can play a key role in resisting insidious systemic discriminations (Duggan, 2014).
More specifically, art that does not focus only on individual avoidance and aesthetics, but raises awareness and puts oppressive systems and their perpetrators under a spotlight, will most effectively challenge this state of surveillance (Monahan, 2015), as seen in recent mass protests that turned into worldwide performances against racial discrimination (McGarry, Erhart, Eslen-Ziya, Jenzen & Korkut, 2019).
It is important to approach these practices with intersectionality in mind, to begin a shift from individual responsibility to collective action.
CITATIONS:
Adjabi, I., Ouahabi, A., Benzaoui, A., & Taleb-Ahmed, A. (2020). Past, Present and Future of Face Recognition: A Review. Electronics, 9(8), 1-52. Retrieved from https://www.mdpi.com/2079-9292/9/8/1188/htm
Bacchini, F. & Lorusso, L. (2019). Race, again: How face recognition technology reinforces racial discrimination. Journal of Information, Communication & Ethics in Society, 17(3), 321-335. Retrieved from http://dx.doi.org.gold.idm.oclc.org/10.1108/JICES-05-2018-0050
Benjamin, R. (2019). Race after Technology. Medford, MA: Polity Press
Bisaillon, L. (2012). Our biometric future: Facial recognition technology and the culture of surveillance. Surveillance & Society, 10(1), 95-96. Retrieved from https://gold.idm.oclc.org/login?url=https://www-proquest-com.gold.idm.oclc.org/scholarly-journals/our-biometric-future-facial-recognition/docview/1315522743/se-2?accountid=11149
Bueno, C. C. (2019). The Face Revisited: Using Deleuze and Guattari to Explore the Politics of Algorithmic Face Recognition. Theory, Culture and Society, 37(1), 73-90. Retrieved from https://doi-org.gold.idm.oclc.org/10.1177/0263276419867752
Buolamwini, J. & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1-15. Retrieved from http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
Cole, S. A. (2012). The face of biometrics. Technology and Culture, 53(1), 200-203. Retrieved from https://gold.idm.oclc.org/login?url=https://www-proquest-com.gold.idm.oclc.org/scholarly-journals/face-biometrics/docview/929039973/se-2?accountid=11149
Hamann, K., & Smith, R. (2019). Facial recognition technology. Criminal Justice, 34(1), 9-13. Retrieved from https://gold.idm.oclc.org/login?url=https://www-proquest-com.gold.idm.oclc.org/trade-journals/facial-recognition-technology/docview/2246857277/se-2?accountid=11149
Geoghegan, S. (2013). Biometrics: Facial recognition. Law & Order, 61(11), 56-59. Retrieved from https://gold.idm.oclc.org/login?url=https://www-proquest-com.gold.idm.oclc.org/trade-journals/biometrics-facial-recognition/docview/1470038366/se-2?accountid=11149
Martinez A. M. (2017). Computational Models of Face Perception. Current directions in psychological science, 26(3), 263–269. Retrieved from https://doi.org/10.1177/0963721417698535
Nkonde, M. (2019). Automated anti-blackness: Facial recognition in brooklyn, new york. Kennedy School Review, 20, 30-36. Retrieved from https://gold.idm.oclc.org/login?url=https://www-proquest-com.gold.idm.oclc.org/scholarly-journals/automated-anti-blackness-facial-recognition/docview/2404400349/se-2?accountid=11149
Parks, C. & Monson, K. (2018). Recognizability of computer-generated facial approximations in an automated facial recognition context for potential use in unidentified persons data repositories: Optimally and operationally modeled conditions. Forensic Science International, 291, 272-278. Retrieved from https://doi.org/10.1016/j.forsciint.2018.07.024
Tsao, D. Y., & Livingstone, M. S. (2008). Mechanisms of face perception. Annual review of neuroscience, 31, 411–437. https://doi.org/10.1146/annurev.neuro.30.051606.094238
Turk, M. & Pentland, A. (1991). Eigenfaces for Recognition. Journal of Cognitive Neuroscience, 3(1), 71-86. Retrieved from https://doi.org/10.1162/jocn.1991.3.1.71
Fussey, P. & Murray, D. (2019). Independent report on the London Metropolitan Police Service’s trial of Live Facial Recognition technology. The Human Rights, Big Data and Technology Project, 20-22. Retrieved from https://48ba3m4eh2bf2sksp43rq8kk-wpengine.netdna-ssl.com/wp-content/uploads/2019/07/London-Met-Police-Trial-of-Facial-Recognition-Tech-Report.pdf
Anapur, E. (2016, Oct. 27). The Strong Relation Between Art and Politics. Widewalls. https://www.widewalls.ch/magazine/art-and-politics
National Coalition Against Censorship (NCAC). (2019). A selective timeline of art censorship from 1989 to the present. Retrieved from https://ncac.org/resource/art-and-culture-censorship-timeline
Giunta, A. (2016). Archives, Performance, and Resistance in Uruguayan Art Under Dictatorship. Representations, (136), 36-53. doi:10.2307/26420577
Duggan, B. (2014, Oct. 22). Performance Art and Modern Political Protest. Big Think. https://bigthink.com/Picture-This/performance-art-and-modern-political-protest
McMullan, T. (2018, Jun. 13). Fighting AI surveillance with scarves and face paint. Medium. https://medium.com/s/story/fighting-ai-surveillance-with-scarves-and-face-paint-6b634ef174a1
Harvey, A. (2020, Jun. 15). Computer vision dazzle camouflage. CV Dazzle. Retrieved from https://cvdazzle.com/
Tapper, J. (2020, Feb. 1). Hiding in plain sight: activists don camouflage to beat Met surveillance. The Guardian. https://www.theguardian.com/world/2020/feb/01/privacy-campaigners-dazzle-camouflage-met-police-surveillance
Hyphen-Labs Collective. Hyphen-labs. Retrieved from http://www.hyphen-labs.com/
Blas, Z. (2011-2014). Facial weaponization suite. Zach Blas. Retrieved from https://zachblas.info/works/facial-weaponization-suite/
Kang, D. (2018, Nov. 6). Chinese ‘gait recognition’ tech IDs people by how they walk. AP News. https://apnews.com/article/bf75dd1c26c947b7826d270a16e2658a
Funk, A. (2020, Jun. 22). How domestic spying tools undermine racial justice protests. Freedom House. https://freedomhouse.org/article/how-domestic-spying-tools-undermine-racial-justice-protests
Monahan, T. (2015). The right to hide? Anti-surveillance camouflage and the aestheticization of resistance. Communication and Critical/Cultural Studies, 12(2), 159‑178. https://doi.org/10.1080/14791420.2015.1006646
Nisbet, N. (2004). Resisting Surveillance: identity and implantable microchips. Leonardo, 37(3), 210-214. Retrieved from https://doi.org/10.1162/0024094041139463
Elahi, H. (2009-present). Tracking Transience v2.2. Hasan Elahi. Retrieved from https://elahi.gmu.edu/track/
Crawford, K. & Paglen, T. (2019, Sept. 19). Excavating AI: the politics of training sets for Machine Learning. Excavating AI. Retrieved from https://excavating.ai
Poitras, L. (2016). Astro Noise. Whitney museum of American art. Retrieved from https://whitney.org/exhibitions/laura-poitras
Luksch, M. (2007). Faceless. Manu Luksch. Retrieved from http://www.manuluksch.com/project/faceless/
Ioanes, E. (2019, Aug. 23). Hong Kong protesters are forming a human chain 30 years after the Baltic Way democracy protests. Business Insider. https://www.businessinsider.com/hongkongers-are-forming-a-baltic-way-style-human-chain-2019-8
Stokel-Walker, C. (2020, Jun. 24). The lasting effect of digital surveillance at Black Lives Matter protests. The Face. https://theface.com/society/black-lives-matter-facial-recognition-digital-surveillance-george-floyd
Maqbool, A. (2020, Jul. 9). Black Lives Matter: From social media post to global movement. BBC News. https://www.bbc.co.uk/news/world-us-canada-53273381
Lind, D & Fong, J. (2014, Oct. 6). Why recording the police is so important. Vox. https://www.vox.com/2014/10/6/6905253/cops-on-camera-these-powerful-video-clips-show-why-recording-the
Lamb, H. (2020, Sept. 4). Facial recognition for remote ID verification wins Africa Prize. E&T, Engineering and Technology. https://eandt.theiet.org/content/articles/2020/09/facial-recognition-for-remote-id-verification-wins-africa-prize/
McGarry, A., Erhart, I., Eslen-Ziya, H., Jenzen, O., & Korkut, U. (Eds.). (2019). The aesthetics of global protest: Visual culture and communication. Amsterdam University Press. https://doi.org/10.2307/j.ctvswx8bm