
Managing the Human Risks of Biometric Applications

Aug 29, 2024


In April, Colorado became the first state to mandate that companies protect the privacy of data generated from a person’s brain waves, an action spurred by concerns over the commercial use of wearable devices intended to monitor users’ brain activity. Use of those and other devices that enable the collection of humans’ physiological data warrants robust discussion of the legal and moral implications of the increasing surveillance and datafication of people’s lives.

Biometric technologies measure intimate bodily characteristics or behaviors, such as fingerprints, retinal patterns, facial structure, vein patterns, speech, breathing patterns, brain waves, gait, keystroke patterns, and other movements. While much activity in the field has focused on authenticating individuals’ identities in security applications, some biometric technologies are touted as offering deeper insights into humans’ states of mind and behaviors. It is when these latter capabilities are put into play that companies risk endangering consumers’ trust.

Indeed, even the most well-intended applications of biometrics can elicit heightened levels of creepiness, which, according to human-computer interaction researchers, refers to the unease people feel when technology extracts information that they have unknowingly or reluctantly provided. This feeling is exacerbated when consumers fear that biometric information may be used to harm or discriminate against them.

Balancing these conflicting interests is tricky, and compromising one in favor of the other can have costly consequences for organizations. Public opposition to Amazon Fresh’s use of video surveillance at checkout, and accusations that video recordings of customers were being analyzed by offshore workers, contributed to the grocery chain eventually discontinuing video surveillance in its stores. Such examples give rise to an important conversation about whether and how organizations can deploy biometrics without being creepy and without violating people’s rights to be respected and treated ethically.

Dignity Tensions Arising From Biometrics

While much discussion of how organizations acquire, store, and use personal data centers around privacy — individuals’ rights to control their personal information and be free from intrusive monitoring — decisions about the use of biometrics must also address human dignity.

In this context, dignity refers to the worth and value of every individual. It encompasses respect, honor, and the recognition of one’s humanity. Inherent dignity refers to the notion that all humans deserve moral respect, regardless of their background, abilities, or circumstances. Behavioral dignity concerns people’s access to the resources that support their well-being. The capabilities of a biometric system can therefore both enable and constrain dignity, by allowing users to live freely and autonomously (or not) or by granting them access to resources for a better life (or not). When an individual feels creepiness in a biometric context, they are anticipating harm to their dignity: the devaluation of their state of being or the loss of resources needed to live a fulfilling life.

Privacy is tied to dignity. Respecting someone’s privacy acknowledges their autonomy and inherent worth, and violating privacy can harm an individual’s dignity, making them feel devalued or exposed. However, the two are not inextricably connected, and it is possible for a biometric system to respect privacy but inadvertently harm users’ dignity. Biometric boarding, used at many U.S. airport gates, relies on facial recognition to authenticate the identity of passengers during check-in. Although the U.S. Customs and Border Protection agency adheres to all applicable legal privacy rules and regulations, biometric boarding exposes passengers to constant surveillance, scrutiny, and suspicion. This can lead to embarrassment and frustration when, due to technical glitches or demographic differentials (such as age, gender, skin color, or even height), rightful customers are not correctly authenticated and are denied entry.

Respecting someone’s privacy acknowledges their autonomy and inherent worth, and violating privacy can harm an individual’s dignity.

HireVue, a recruitment technology company, introduced a biometrics-based platform that analyzes job candidates’ facial movements, word choices, and voices in recorded video interviews and then automatically assigns them an employability score using a machine learning algorithm. Its service has been adopted by employers across numerous sectors, including Goldman Sachs, Hilton, and Unilever. HireVue claims that its platform allows employers to screen much larger pools of candidates more quickly and to identify strong candidates accurately while reducing the biases and vagaries of human evaluation and judgment. The company also claims that the biometric tool doesn’t infringe on people’s privacy because candidates are informed of its purpose; however, this overlooks the threat to human dignity. When individuals have no insight into how the biometrics-based algorithm assesses their employability, they face the potential of being judged and discriminated against unfairly, without recourse. If such automated assessments preclude them from receiving a job offer, their behavioral dignity is directly harmed.

In fact, HireVue offers no information on which facial movements and other biometric markers lead to a positive evaluation. Meredith Whittaker, cofounder of the AI Now Institute, called it “a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit based on their facial movements.” Regulators are also alarmed by such practices. In March, the U.S. District Court for the Northern District of Illinois allowed a class action to proceed on behalf of job candidates who were required to use HireVue’s platform during the application process, alleging that their biometric identifiers had been captured, processed, and distributed opaquely and without their consent.

Regulators and Advocacy Groups Raise the Alarm

As biometrics are introduced in more settings, safeguarding human dignity is paramount. Regulators, consumer advocates, and managers all play pivotal roles in ensuring that a person’s right to be respected and treated ethically is protected and promoted.

While most laws still center on personal data broadly, regulators are increasingly advancing legislation to protect individuals from biometric surveillance. We view this as an acknowledgment that biometrics are a more intimate and intrusive subset of personal data and thus require specific guidelines to protect human dignity.

Worldcoin, a cryptocurrency project cofounded by OpenAI CEO Sam Altman, seeks to take iris scans of individuals in exchange for a digital identifier called World ID. The identifier certifies that they are actual humans — as opposed to deepfakes created by generative AI models — and qualifies them to participate in a new global financial network backed by Worldcoin’s cryptocurrency token, WLD. Critics have noted that much of Worldcoin’s iris-scanning activity has targeted the developing world, raising concerns about exploitation. Given the potential for data misuse and harm to individuals, the Spanish Data Protection Agency ordered the company to cease further biometric data collection in that country. More broadly in Europe, the General Data Protection Regulation treats biometric data as a special category of personal data and requires organizations to obtain individuals’ explicit consent before collecting it.

In the U.S., the Federal Trade Commission recently released a memo expressing concerns about the sophisticated nature of biometrics and warning businesses that they must assess foreseeable harms to consumers before collecting any biometric data. The agency also announced enforcement action against drugstore chain Rite Aid, which deployed a facial recognition surveillance system that was ostensibly aimed at deterring theft but was found to have been implemented in a discriminatory way: The company selectively deployed it in urban, low-income communities populated by people of color. The system generated numerous false positives, leading to wrongful accusations that humiliated innocent customers and harmed their dignity.

As biometrics are introduced in more settings, safeguarding human dignity is paramount.

Consumer advocacy groups are also pushing back against biometric surveillance. Amnesty International has called for an outright ban of biometric recognition systems due to their capacity to enable mass surveillance and discriminatory practices. Similarly, the Canadian Civil Liberties Association likened facial recognition to “facial fingerprinting,” singling out companies like Clearview AI that trawl the web for facial images and then link those images to individual names, addresses, and workplaces without people’s consent.

Toward a Thoughtful Approach to Biometric Implementation

Organizations seeking to apply biometric technologies responsibly must do so in a way that safeguards and upholds the dignity of their employees and consumers. We offer two recommendations to increase the likelihood of positive outcomes from biometrics.

Shift from scanning humans to scanning their interactions with objects. There are ways to achieve the same efficiencies in authentication and convenience while scaling down the intimacy of the surveillance. This can be accomplished by using less-intrusive biometrics — for example, via typing patterns rather than facial recognition or iris scanning — or by collecting different kinds of human data. While Amazon Fresh removed its video surveillance technology in response to consumer pushback, Amazon’s Whole Foods stores have deployed palm reader technology to facilitate identification and payment. Whole Foods is also rolling out Dash Carts, a solution that does not rely on capturing and storing customer biometrics. Customers using these smart carts log into their Amazon accounts by typing in their credentials, and the carts then scan and weigh the items that customers select. This achieves the same level of payment authentication as palm prints and is as convenient for the customer as a video-based checkout-free system, without the use of biometrics.
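To make the typing-pattern example concrete, here is a minimal, hypothetical sketch of keystroke-dynamics matching in Python. The enrollment data, threshold, and function names are our illustrative assumptions, not any vendor’s implementation; production systems use far richer timing features and statistical models.

```python
import statistics

# Hypothetical enrollment data: how long (in milliseconds) a user held each
# key while typing a known phrase, collected over three enrollment sessions.
ENROLLED_DWELL_TIMES = [
    [105, 98, 132, 110, 87, 120],
    [101, 95, 128, 115, 90, 118],
    [108, 99, 135, 108, 85, 123],
]

def build_template(samples):
    """Summarize the enrollment sessions as per-key means and spreads."""
    means = [statistics.mean(col) for col in zip(*samples)]
    spreads = [max(statistics.stdev(col), 1.0) for col in zip(*samples)]
    return means, spreads

def matches_template(attempt, means, spreads, threshold=2.0):
    """Accept the attempt if every key's timing falls within `threshold`
    standard deviations of the enrolled profile."""
    return all(abs(t - m) <= threshold * s
               for t, m, s in zip(attempt, means, spreads))

means, spreads = build_template(ENROLLED_DWELL_TIMES)
print(matches_template([104, 97, 130, 112, 88, 121], means, spreads))  # True
print(matches_template([60, 200, 45, 210, 30, 250], means, spreads))   # False
```

Because the template is just a handful of timing statistics, it can be stored and checked entirely on the user’s device, which is part of what makes this class of biometric less intimate than a face or iris scan.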

This isn’t to say that biometrics should never be used — BHP’s use of sensor-equipped smart caps to monitor miners’ fatigue promotes worker safety, after all. Rather, we encourage organizations to consider whether they truly need to scan humans, which risks eroding human dignity, or whether scanning objects would suffice. Object recognition is becoming more sophisticated and is expected to be deployed widely across numerous sectors. Optical recognition company Alitheon introduced its FeaturePrint technology, which it touts as a computer vision platform that provides unique item identifiers analogous to human fingerprints. Scanning objects instead of humans offers an opportunity to advance authentication and convenience while also protecting individuals’ dignity.

Opt for smaller, restricted, and open-source architectures. The proliferation of server-side artificial intelligence models and services means that human input, including biometric data, is being transferred en masse between individual devices and global servers, leaving individuals with little control. In most data-processing use cases, however, much smaller models can achieve the same results. Such models are portable and can be restricted to individual devices, without having to communicate with an external server. For example, in June, Apple introduced a small on-device AI model aimed at protecting individuals’ personal data and showed that server-side models performed only marginally better.
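As a sketch of that architectural boundary, the outline below keeps the raw biometric sample on the device and transmits only a derived verdict. The function names are hypothetical, and the exact-hash comparison is a deliberate simplification (real on-device models perform fuzzy matching over feature vectors); the point is the data flow, not the matching technique.

```python
import hashlib
import json

def score_locally(raw_sample: bytes, enrolled_digest: str) -> bool:
    """Run the match entirely on-device. A hash comparison stands in for
    whatever small local model actually scores the sample."""
    return hashlib.sha256(raw_sample).hexdigest() == enrolled_digest

def build_outbound_message(raw_sample: bytes, enrolled_digest: str) -> str:
    """The only payload that ever crosses the network boundary: a yes/no
    verdict, with no biometric data attached."""
    verdict = score_locally(raw_sample, enrolled_digest)
    return json.dumps({"authenticated": verdict})  # raw_sample never leaves

# Enrollment also happens on-device: the digest is derived and stored locally.
sample = b"example-biometric-reading"
digest = hashlib.sha256(sample).hexdigest()
print(build_outbound_message(sample, digest))  # {"authenticated": true}
```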


Biometrics are emerging as an increasingly powerful tool for organizations to increase productivity and improve operational efficiency. However, this emergence also comes with the potential for heightened creepiness and more pernicious violations of human dignity. Surveillance and discrimination are two undesirable side effects we see today, but others will undoubtedly emerge as biometrics continue to proliferate. Ethically conscious organizations must become better aware of the risks of biometrics and protect human dignity when adopting them.

 

Reprint #: 66121

The post "Managing the Human Risks of Biometric Applications" appeared first on MIT Sloan Management Review
