Facing Novel Legal Issues Concerning Facial Recognition Technology in 2020

“Beware the Kodak,” warned The Hartford Daily Courant in July 1888. “The sedate citizen can’t indulge in any hilariousness without incurring the risk of being caught in the act and having his photograph passed around among his Sunday school children.”1 Over a century later, the warning rings true. Today, facial recognition technology is everywhere: Facebook uses facial recognition to assist users in tagging photos and videos with the names of others; Sephora uses facial recognition to allow customers to virtually apply and test cosmetics; JetBlue uses facial recognition in lieu of a boarding pass and passport; law enforcement agencies use facial recognition to identify suspects and victims; and now, amidst the ongoing and rapidly evolving COVID-19 pandemic, facial recognition technology is being used for contactless access control credentials as well as contact tracing of people infected with the coronavirus.

As facial recognition technology pervades every aspect of our lives, shutdown or not, technological advances, news coverage, and legal approaches concerning the technology continue to expand and evolve. And, while the technology clearly provides benefits to society and public safety, it also creates legal issues ranging from privacy to insurance coverage to racial bias.

Privacy

As the use of facial recognition technology rises, so do privacy concerns. The technology is ubiquitous. When an individual walks out her own door, the technology can track her movements. While “the sedate citizen” may have become used to “having his photograph passed around,” the anonymity that was once associated with being photographed has been lost as a result of facial recognition technology. Privacy advocates thus believe that the widespread use of facial recognition should be regulated. There is no biometric privacy law at the federal level, but states, with Illinois doing the legwork, are filling the void.

When Illinois enacted the Biometric Information Privacy Act, 740 ILCS 14/1 et seq. (“BIPA”), in 2008, it became the first state to regulate the collection and storage of biometric information. Under BIPA, a private entity cannot collect or store the biometric data of Illinois residents without first providing notice, obtaining written consent, and making certain disclosures, including a written biometrics privacy policy.2 The statute contains a private right of action provision that permits recovery of liquidated damages of $1,000 or actual damages, whichever is greater, for each negligent violation, and liquidated damages of $5,000 or actual damages, whichever is greater, for each intentional or reckless violation. BIPA is considered the most stringent state law concerning biometric information; indeed, it is the only such law that provides a private right of action.
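To make the stakes concrete, here is a minimal sketch, in Python, of how BIPA’s per-violation liquidated damages compound across a class. The class sizes and violation counts are entirely hypothetical; only the statutory dollar amounts come from the Act:

```python
# Illustrative only: hypothetical class sizes and violation counts, not any
# actual case. The statutory amounts are those in 740 ILCS 14/20.
NEGLIGENT = 1_000    # liquidated damages per negligent violation
INTENTIONAL = 5_000  # liquidated damages per intentional/reckless violation

def bipa_exposure(class_members: int, violations_each: int, intentional: bool) -> int:
    """Rough statutory exposure if liquidated damages accrue per violation."""
    per_violation = INTENTIONAL if intentional else NEGLIGENT
    return class_members * violations_each * per_violation

# A hypothetical 5,000-employee class, one non-consented enrollment each:
print(f"${bipa_exposure(5_000, 1, intentional=False):,}")  # $5,000,000
print(f"${bipa_exposure(5_000, 1, intentional=True):,}")   # $25,000,000
```

Because damages accrue per violation and per class member, even a modest class can generate eight-figure exposure, which helps explain settlement figures like the one discussed below.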

BIPA did not gain notoriety until 2015, when a series of class action lawsuits were brought against Facebook and Shutterfly. The companies were accused of using facial recognition technology to collect and store biometric information in violation of BIPA.3 The risks associated with capturing biometric information for commercial use, however, are not limited to social media platforms. The initial cases against Facebook and Shutterfly have led to hundreds of other consumer-based class action lawsuits (e.g., Six Flags), as well as employee-based class action lawsuits (e.g., Southwest Airlines, Wendy’s) under BIPA. And, a significant decision by the Illinois Supreme Court last year, holding that an individual need not allege actual damage to establish that they are “aggrieved” under BIPA,4 has only further opened the floodgates. Would-be plaintiffs need only a “technical” violation of BIPA to have standing.5

Whether in the context of marketing or employee management, or anywhere in between, companies should understand whether they are capturing biometric information belonging to Illinois residents and, if so, ensure their policies and procedures comply with BIPA. As the Illinois Supreme Court explained in Rosenbach:

Compliance should not be difficult; whatever expenses a business might incur to meet the law’s requirements are likely to be insignificant compared to the substantial and irreversible harm that could result if biometric identifiers and information are not properly safeguarded; and the public welfare, security, and safety will be advanced.6

The onslaught of BIPA class action lawsuits has been substantial, and companies need to be prepared. Indeed, in July, Facebook offered $650 million – “a record-breaking settlement” – to settle the class actions brought against the company in 2015.7

Insurance Coverage

With the increase in commercial use of facial recognition, Chubb – one of the world’s largest insurers – has warned about the risks of biometric privacy legislation. In Chubb’s Cyber InFocus Report, “Know the Latest Trends in Cyber Risks,” the carrier identified the rising number of lawsuits under Illinois’ BIPA as a significant digital issue in the insurance industry. Various Fortune 500 companies have encountered BIPA lawsuits. “Illinois courts have now seen an increase of BIPA-related litigation,” the insurer warned in its report. “Companies doing business in that state need to be aware of the law’s requirements, especially if the company regularly collects biometric information.”8 Chubb also noted a growing trend beyond Illinois: “Biometric data regulation varies at the state level and has been a focus of U.S. federal and international legislators and regulators, so it is imperative that companies understand the legal requirements of each state and of the countries in which they conduct business.”9

Chubb’s warning was perhaps warranted. On June 5, 2020, the American Guarantee and Liability Insurance Company (“AGLIC”) filed an action in Illinois state court seeking a declaration that it owes no coverage to a Burger King franchisee, Toms King LLC, for a lawsuit accusing the franchisee of violating BIPA. Toms King, which owns and operates Burger King restaurants in Illinois and other states, faces a putative class action over its requirement that employees use their fingerprints to clock in and out of work, without the written consent required under BIPA. The franchisee has commercial general and umbrella liability insurance with AGLIC, but the insurer argues that the allegations in the lawsuit do not assert any claims within the coverage of its policies and cites several exclusions, including the “employers liability,” “knowing violation of rights of another,” and “access or disclosure of confidential or personal information” exclusions.10 In a prior coverage dispute, however, the Illinois Appellate Court found that an insurance company was obligated to defend an L.A. Tan franchisee against claims brought under BIPA because the customer’s claims alleged a “personal injury” and the “violation of statutes” exclusion did not apply.11

Insurance policies currently available on the market, including commercial general, cyber, directors’ and officers’ (“D&O”), and errors and omissions (“E&O”) liability policies, may not adequately cover the risks that accompany facial recognition technology and potential violations of biometric privacy legislation. Thus, in light of Chubb’s warning and the emerging BIPA coverage litigation, as well as the growing concern of racial bias addressed below, companies should review the terms and conditions of their insurance policies carefully to identify and address potential coverage gaps.

Racial Bias

While facial recognition technology has improved over the past decade, it has been shown to suffer from racial bias (among other biases, such as gender bias), which can make the technology unreliable for law enforcement and ripe for potential civil rights abuses. Last year, a National Institute of Standards and Technology (“NIST”) study found “empirical evidence for the existence of a wide range of accuracy across demographic differences in the majority of the current face recognition algorithms that were evaluated.” Indeed, race-based biases were evident in the majority of facial recognition algorithms studied. For one-to-one matching (i.e., authentication), there were higher rates of false positives (up to 100 times higher) for Asian and African American faces relative to Caucasian faces. Among U.S.-developed algorithms, there were similarly high rates of false positives for Asians, African Americans, and native groups12; the American Indian demographic had the highest rates of false positives. Notably, however, there was no dramatic difference in false positives between Asian and Caucasian faces for algorithms developed in Asian countries. For one-to-many matching (i.e., identification or search), there were higher rates of false positives for African American females in an FBI database of domestic mugshots.13
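The distinction NIST draws between the two matching modes matters for how errors play out. The following is a minimal sketch (Python with NumPy); the similarity metric, threshold, and function names are illustrative assumptions, not any vendor’s actual system:

```python
# Illustrative sketch of 1:1 (verification) vs. 1:N (identification) matching
# over face embedding vectors. Threshold and metric are assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one matching (authentication): is the probe the enrolled person?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """One-to-many matching (identification): search a gallery of known faces.
    Returns the best-scoring identity, or None if nothing clears the threshold."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The failure modes differ: a false positive in verify() admits an impostor, while a false positive in identify() names an innocent person out of a database, the scenario behind both the ACLU test below and the wrongful arrest described in note 15. And because every comparison reduces to a thresholded score, a threshold that yields few false positives for one demographic group can yield far more for another, which is precisely the disparity NIST measured.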

The government’s study did not include facial recognition technology from Amazon, which sells its technology to both government agencies and private entities. Nonetheless, “Rekognition,” Amazon’s program, has also been criticized for its racial bias. The American Civil Liberties Union, for example, previously found that Rekognition incorrectly matched 28 members of Congress to people who had been arrested. Using the same system that Amazon offers to the public, the ACLU built a face database and search tool from 25,000 publicly available arrest photos and then searched that database against public photos of every member of Congress at that time. The false matches were disproportionately of people of color: lawmakers of color accounted for 39% of the false matches even though they made up only 20% of Congress. The false matches included six then-members of the Congressional Black Caucus, among them the late civil rights leader Rep. John Lewis (D-Ga.). “An identification – whether accurate or not – could cost people their freedom or even their lives,” the ACLU said.14

After the death of George Floyd, and as racial bias in policing has become a greater part of our national discourse, the focus has turned to facial recognition technology and, according to some critics, its enablement of such racial bias.15 Congressional Democrats are seeking information from the FBI and other agencies to understand whether authorities are using facial recognition technology against protesters; states including New York are considering legislation to ban police use of the technology; and tech giants IBM, Amazon, and Microsoft are all edging away from their own technology. IBM, in a letter to Congress, stated that it will no longer offer its general purpose facial recognition or analysis software;16 Amazon announced a one-year moratorium on police use of Rekognition;17 and Microsoft said that it will not sell its facial recognition technology for police use until “a national law is in place, grounded in human rights, that will govern this technology.”18 Other companies, whether developing their own technology or utilizing third-party technology, should take note of this weakness of facial recognition and the potential liability it creates.

But can facial recognition overcome its racial bias? According to Anil K. Jain,19 head of the Biometrics Research Group at Michigan State University, it can.20 “State-of-the-art face recognition methods are based on deep learning which requires millions of face images for training the recognition algorithm to achieve desired accuracy,” Dr. Jain said. “Inevitably, the distribution of training data has a significant impact on the performance of the resultant deep learning models. Where the number of faces in each cohort is unequal, it is well understood that face datasets exhibit imbalanced demographic distributions. And, models trained with imbalanced datasets lead to biased discrimination.”
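One common mitigation for the imbalance Dr. Jain describes is to reweight training samples so that each demographic cohort contributes equally to the loss. The following is a minimal pure-Python sketch; the cohort labels are placeholders, not any real dataset’s distribution:

```python
# Inverse-frequency reweighting: each sample's weight is the reciprocal of
# its cohort's count, so every cohort carries equal total weight in training.
# Cohort labels here are illustrative placeholders, not real data.
from collections import Counter

def cohort_weights(labels):
    counts = Counter(labels)
    return [1.0 / counts[label] for label in labels]

# A toy, badly imbalanced dataset: four images from cohort A, one from cohort B.
labels = ["A", "A", "A", "A", "B"]
print(cohort_weights(labels))  # [0.25, 0.25, 0.25, 0.25, 1.0]
```

Such weights can drive a weighted sampler during training (e.g., PyTorch’s WeightedRandomSampler), but reweighting cannot manufacture diversity the underlying data lacks, which leads to Dr. Jain’s next point.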

According to Dr. Jain, “we need diversity of ‘labeled’ datasets of images for training.” By “labeled,” he means annotated with demographic factors (e.g., race, gender); these labels are also fittingly referred to as “ground truth.” Because of privacy concerns, however, it is becoming difficult to obtain the diverse datasets necessary to train facial recognition technology. There is thus a surprising tension between privacy and racial bias: while privacy activists often cite racial discrimination in the criminal justice system and elsewhere as an argument against the technology, the bias could be overcome with larger and more diverse databases and, perhaps, “less” privacy.

The use of facial recognition is rapidly on the rise. In 2020, a year of unprecedented developments and change, it is up to companies to be aware of and address the many novel legal issues that facial recognition technology poses.

 

1 Beware the Kodak, The Hartford Daily Courant (Jul. 28, 1888), https://www.newspapers.com/image/367380225/.
2 BIPA defines “biometric information” as “any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual.” “Biometric identifier” includes “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” 740 ILCS 14/10.
3 See Pezen v. Facebook, Inc., 1:15-cv-03484 (N.D. Ill. Apr. 21, 2015); Licata v. Facebook, Inc., 2015CH05427 (Ill. Cir. Ct. Cook County Apr. 1, 2015); Patel v. Facebook, Inc., 1:15-cv-04265 (N.D. Ill. May 14, 2015); Norberg v. Shutterfly, Inc., 1:15-cv-05351 (N.D. Ill. June 17, 2015).
4 Rosenbach v. Six Flags Entertainment Corporation, 2019 IL 123186, at ¶ 40 (2019).
5 Rosenbach, 2019 IL 123186, at ¶ 34 (the Supreme Court found that the appellate court’s characterization of BIPA violations as merely being “technical” in nature “misapprehends the nature of the harm [the Illinois] legislature [was] attempting to combat through this legislation” because “[BIPA] vests in individuals and customers the right to control their biometric information by requiring notice before collection and giving them the power to say no by withholding consent”).
6 Rosenbach, 2019 IL 123186, at ¶ 37.
7 Malathi Nayak, Facebook Sweetens Biometric Privacy Accord to $650 Million, Bloomberg (July 23, 2020, 4:55 PM, updated 6:26 PM), https://www.bloomberg.com/news/articles/2020-07-23/facebook-proposes-650-million-to-settle-biometric-privacy-case.
8 Know the Latest Trends in Cyber Risks, Cyber InFocus (Aug. 2019), https://www.chubb.com/us-en/_assets/doc/cyber-infocus_july2019_updated.pdf.
9 Press Release, PRNewswire, Report spotlights Biometric Privacy Act, iEncrypt and cyber risks facing financial institutions (Aug. 27, 2019), https://www.prnewswire.com/news-releases/new-chubb-infocus-report-outlines-latest-cyber-risks-300907678.html.
10 Am. Guar. & Liab. Ins. Co. v. Toms King LLC et al., No. 2020CH04472 (Ill. Cir. Ct. Cook County June 5, 2020). Another example of an insurance carrier resisting coverage of a BIPA lawsuit concerns McDonald’s franchisees in Am. Family Mut. Ins. Co., S.I. v. Amore Enters., Inc. et al., No. 1:20-cv-01659 (N.D. Ill. Mar. 9, 2020).
11 W. Bend Mut. Ins. Co. v. Krishna Schaumburg Tan, Inc., 2020 IL App (1st) 191834 (Mar. 20, 2020).
12 Native American, American Indian, Alaskan Indian, and Pacific Islanders.
13 NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software, Nat’l Inst. of Stds. & Tech. (Dec. 19, 2019), https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software.
14 Jacob Snow, Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots, Am. Civil. Liberties Union (July 26, 2018, 8:00 AM), https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28.
15 Indeed, the first documented example in the U.S. of someone being wrongfully arrested based on a false hit produced by facial recognition technology took place earlier this year. Bobby Allyn, ‘The Computer Got It Wrong’: How Facial Recognition Led to False Arrest Of Black Man, Nat’l Pub. Radio (June 24, 2020, 8:00 AM), https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig.
16 IBM CEO’s Letter to Congress on Racial Justice Reform, IBM (June 8, 2020), https://www.ibm.com/blogs/policy/facial-recognition-sunset-racial-justice-reforms/.
17 We are implementing a one-year moratorium on police use of Rekognition, Amazon (June 10, 2020), https://blog.aboutamazon.com/policy/we-are-implementing-a-one-year-moratorium-on-police-use-of-rekognition.
18 Jay Greene, Microsoft won’t sell police its facial-recognition technology, following similar moves by Amazon and IBM, Wash. Post (June 11, 2020, 2:30 PM), https://www.washingtonpost.com/technology/2020/06/11/microsoft-facial-recognition/. Clearview AI, the controversial startup with over three billion face images, however, announced that it will continue to provide its technology to law enforcement. Nick Statt, Amazon bans police from using its facial recognition technology for the next year, The Verge (June 10, 2020, 5:37 PM), https://www.theverge.com/2020/6/10/21287101/amazon-rekognition-facial-recognition-police-ban-one-year-ai-racial-bias. Conversely, in connection with a BIPA class action, the company previously announced that it “is terminating the accounts of any non-law enforcement or government entity.” Mutnick v. Clearview AI, Inc., No. 1:20-cv-00512, at Dkt. 56 (N.D. Ill. May 6, 2020).
19 Anil K. Jain, Mich. St. Univ. (last accessed Aug. 12, 2020), https://www.cse.msu.edu/~jain/.
20 Biometrics Research Group, Mich. St. Univ. (last accessed Aug. 12, 2020), http://biometrics.cse.msu.edu/.

 
