The FTC has banned Rite Aid from using facial recognition software for five years over what it called the company's "reckless use of facial surveillance systems," which humiliated customers and put their "sensitive information at risk."
In addition to the ban, the FTC ordered Rite Aid to delete all of the images it collected for its facial recognition rollout, along with any products built from those images, and to implement a robust data security program to safeguard personal data. Because Rite Aid filed for Chapter 11 bankruptcy in October, the order will require the bankruptcy court's approval before it takes effect.
A 2020 Reuters report described how the drugstore chain secretly installed facial recognition systems in 200 U.S. stores starting in 2012, using “largely lower-income, non-white neighborhoods” as the testbed.
Rite Aid came under FTC scrutiny as part of the agency's broader focus on biometric surveillance abuses. The agency alleges that Rite Aid and two contracted companies created a "watchlist database" containing images of customers the company said had engaged in criminal activity at one of its stores. These images, often low quality, were captured from CCTV footage or employees' mobile phone cameras.
When a customer entering a store supposedly matched an image in the database, employees received an automatic alert to "approach and identify," meaning they were to verify the customer's identity and ask them to leave. These "matches" were often false positives that led employees to wrongly accuse customers, causing "embarrassment, harassment, and other harm," according to the FTC.
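For readers unfamiliar with how such systems generally work, the sketch below is a purely illustrative, simplified example of threshold-based face matching, the common technique underlying watchlist alerts. It is not Rite Aid's actual system; all function names, data structures, and the threshold value are hypothetical. The point is that a "match" is just a similarity score clearing a cutoff, which is why noisy enrollment photos produce false alerts.

```python
import numpy as np

# Illustrative sketch only: generic threshold-based face matching,
# not Rite Aid's actual system. All names and values are hypothetical.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_watchlist(live_embedding: np.ndarray,
                    watchlist: dict[str, np.ndarray],
                    threshold: float = 0.6) -> str | None:
    """Return the watchlist entry most similar to the live capture,
    but only if that similarity exceeds the alert threshold."""
    best_id, best_score = None, threshold
    for entry_id, enrolled in watchlist.items():
        score = cosine_similarity(live_embedding, enrolled)
        if score > best_score:
            best_id, best_score = entry_id, score
    return best_id  # None means "no alert fired"
```

Because the system alerts whenever any score clears the cutoff, blurry or low-resolution enrollment images (such as CCTV stills or phone photos) yield noisy embeddings that unrelated faces can accidentally match, which is the basic mechanism behind false-positive alerts.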
The complaint states that employees, acting on false positive alerts, followed consumers around Rite Aid's stores, searched them, ordered them to leave, called the police to confront or remove them, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing.
The FTC also claimed that Rite Aid failed to inform customers that it was using facial recognition technology and instructed employees not to disclose its use.
Face-off
Facial recognition software is one of the most controversial aspects of AI-powered surveillance. Many cities have banned the technology in recent years, while politicians have fought to regulate police use. Meanwhile, facial recognition technology companies like Clearview AI have been sued and fined worldwide for data privacy breaches.
The FTC's findings against Rite Aid also highlight the biases embedded in AI systems. The agency found that Rite Aid's technology was "more likely to generate false positives in stores located in plurality-black and Asian communities than in plurality-white communities," putting consumers in those neighborhoods at greater risk.
According to the FTC, Rite Aid also failed to test or measure the accuracy of its facial recognition system, either before or after deployment.
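As a rough illustration of the kind of measurement the FTC says was missing, the short sketch below computes the share of alerts that turned out to be false matches, broken down by the community grouping the FTC's quote refers to. The data, field names, and groupings are entirely hypothetical; this shows only how such a disparity would surface if alert outcomes were actually logged and reviewed.

```python
from collections import defaultdict

# Hypothetical alert log: each record notes the store's community
# grouping and whether human review confirmed the match was correct.
alerts = [
    {"community": "plurality-white", "confirmed": True},
    {"community": "plurality-black", "confirmed": False},
    {"community": "plurality-black", "confirmed": False},
    {"community": "plurality-white", "confirmed": True},
    {"community": "plurality-asian", "confirmed": False},
]

totals = defaultdict(lambda: {"alerts": 0, "false": 0})
for a in alerts:
    group = totals[a["community"]]
    group["alerts"] += 1
    if not a["confirmed"]:
        group["false"] += 1  # alert fired, but the match was wrong

for community, t in sorted(totals.items()):
    false_share = t["false"] / t["alerts"]
    print(f"{community}: {t['false']}/{t['alerts']} false alerts ({false_share:.0%})")
```

Even this minimal kind of bookkeeping, comparing false-alert shares across store groups, would have surfaced the disparity the FTC describes.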
In a press release, Rite Aid said it was "pleased to reach an agreement with the FTC" but disagreed with the allegations.
Rite Aid stated, "The allegations relate to a facial recognition technology pilot program the company deployed in a limited number of stores." The company added that it stopped using the technology in this small group of stores more than three years ago, before the FTC's investigation began.