The demand for automated authentication and identification is exploding across applications and environments. In response, a growing number of security integrators are adding one or more biometric identity solutions to their standard offerings. If you’re one of them, you’re most likely selling biometric technology as part of an access control solution. Biometrics offer an alternative to fobs, cards, PINs and mobile credentials, or a convenient way to facilitate dual-factor authentication. 

Facial geometry is a popular biometric modality. Facial recognition is touchless, unobtrusive and often employs standard security cameras instead of specialized readers, reducing upfront equipment costs. 

Adding biometrics to access control solutions is smart business. So far, so good. Then, a common customer request arises: “Can you integrate the facial recognition system with our video management platform?”

Can you? Yes. Should you? Maybe, but not so fast. Doing so could be setting your customer up for potential HR headaches and, depending on how they plan to use it, expensive lawsuits. Before proceeding, there are many issues to consider.

Currently, no federal laws define how facial recognition, or the broader category of biometrics, can be used for surveillance in commercial, non-government settings. However, there are plenty of state and local regulations. Illinois, Texas and Washington have laws that ban or restrict the use of biometric data without user consent, and several other states are expected to follow suit. 

In high-security workplaces, employers may wish to use surveillance cameras to extend their knowledge of workers’ movements beyond areas secured by access control. Obtaining user consent is critical to avoiding any legal minefields. Provided employees know that their faces will be enrolled and understand how systems will be leveraging their data, this is an excellent application of biometrics. 

From the workers’ perspective, the technology is seamless. For security teams, the automated notifications made possible through facial recognition help to mitigate risk. Why is Sal in the mailroom after hours? Why is Maria wandering around the eighth floor when she spends most of her time on the sixth? Facial identity analytics can flag these anomalous situations as they occur, so security teams can immediately monitor the situation and decide whether to intervene. In high-security environments, this capability is a game changer.

However, once companies have integrated facial recognition with a video management system (VMS) platform, there is a slippery slope between using the technology for legitimate security purposes and applying it to ethically questionable, big-brother-like applications.

Take the example of MSG Entertainment, which received a lot of bad press last Christmas when its security team at Radio City Music Hall denied entry to a woman chaperoning her daughter’s Girl Scout troop’s outing to see the Rockettes. MSG has a policy that bars any lawyers who work at firms in litigation against MSG from entering its properties, including Radio City. 

The Girl Scout mom was, in fact, a lawyer at such a firm, but she was not personally involved in any cases against MSG. Nonetheless, MSG scanned her face into its biometric database without her consent, probably using images found online. When surveillance cameras in the lobby saw her face, the system notified security of a “match.” Guards approached her as she passed through security, verified her identity, and demanded that she leave. She claims that they knew her name before she told it to them. To many, this use of biometrics seems outright creepy.

One firm has successfully sued MSG over the policy, arguing that the ban on its employees amounts to “punishment” rather than a response to legitimate security concerns. The legal argument was not about MSG’s use of biometrics, but enforcing the dubious policy could not happen without the technology’s help. 

New York State is currently evaluating whether to permit school districts to use facial recognition in conjunction with their surveillance systems. As of now, the practice is prohibited. Proponents argue that the technology would not be used to track students' everyday activities; it would be used to keep prohibited individuals, such as registered sex offenders, from entering school grounds or buildings. Critics fear that a district's ability to upload images of the entire student body, without consent, and use the data in unspecified ways opens a Pandora’s box of potential privacy violations.

Biometric identity solutions are powerful tools with immense potential to improve security. However, before integrating facial recognition with VMS platforms, you should discuss with clients how they intend to use the technology. If all biometric data will be enrolled and stored with user consent, and if management plans to be transparent about how the data will be used, such integrations deserve a green light. 

In all other instances, proceed with caution. Legislation and public attitudes will soon catch up with technology, but for now the flashing red light demands that you first stop and look in all directions with care.