Last September, Russian President Vladimir Putin stated, “Whoever becomes the leader in [artificial intelligence] will become the ruler of the world.” While not everyone considers the Russians entirely trustworthy on matters of intelligence, artificial or otherwise, there is no denying that artificial intelligence is the next great frontier in which companies are racing to stake a claim. 

But before anyone plants a flag at the top of Mount AI, it’s important to know what the mountain is. There is undoubtedly confusion about the meaning of AI, and some in the industry may be using the term when another would be more accurate. Amir Shechter, director of advanced solutions, Convergint Technologies, Schaumburg, Ill., says some in the industry rush to the latest buzzwords because they sound cool, but end up using terminology that does not truly reflect the products they’re providing. “I think people stretch this concept of AI to areas that are not truly AI,” Shechter believes.

Will General AI Make You Redundant Anytime Soon? 

 

General AI is definitely not something that exists today, at least not in any meaningful way, says Travis Deyle of Cobalt Robotics.

General AI is the human-like ability to learn, understand, solve problems and even think across a wide variety of domains, in a way that closely resembles how humans reason and are conscious. 

Narrow AI, on the other hand, the category into which most if not all AI in use today falls, is able to do one thing very well, sometimes better than humans: learning to search videos for specific things, learning your music and lighting preferences at home, or playing chess. 

So the question is: Is general AI, as portrayed in sci-fi movies and books, an imminent reality, and are the machines after your job?

Deyle compares producing general AI to trying to create a machine that’s even as capable as a toddler: “It’s actually very hard, because [toddlers] do so many things that are not reproducible yet. And yet at the same time, you end up with very narrow systems that can play chess better than any human player. So it’s a matter of what scope you are looking at.” 

Deyle says the general consensus on general AI, based on conversations he has had with luminaries in that field, is that “today we are at the equivalent of cavemen playing with fire, and what you’re trying to do is like describing to them a nuclear weapon.” 

We don’t even really have the vocabulary to understand or discuss general AI in any meaningful way, he adds.

Stephen Smith at D/A Central explains that although many universities have neural net projects, they are miles away from being human. “They just can’t act like us; they cannot cope with change and variability and different accents and all of that — even Google Voice. We use it every day on our phone. How good is it? It’s pretty good. It’s pretty darn good, but it’s far from perfect. It’s not thinking like me. I’m not worried about being replaced by a machine.”

This “stretching” creates confusion: a company that has long been an analytics company suddenly claims to have an AI component when, in fact, it has simply redefined its offering. That stretch is understandable in a traditional industry concerned with demonstrating relevancy, but no one wants a reputation for stretching the truth in an industry that sells trust. 

Defining the Terms

While not everyone agrees on what is and is not artificial intelligence, there is some consensus. The degree of a technology’s abilities might be subjective, but in a broad sense, AI must be able to take in information of different types, analyze that data, “learn” from it, and make decisions based on it. Generally, these are decisions a human was previously required to make. “AI at its most basic is the ability for a machine to learn on its own,” explains Brad Eck, strategic alliances program owner – Americas, Milestone Systems, Beaverton, Ore.

Chris Johnston, regional marketing manager, Bosch Security & Safety Systems, Fairport, N.Y., defers to those he calls “the godfathers of artificial intelligence,” professor Marvin Minsky of MIT and professor John McCarthy of Stanford University: “Artificial intelligence is any task performed by a machine that, if carried out by a human, we would say the human had to apply intelligence to accomplish the same task.”

Maybe a simpler definition would be, “AI is the notion of a computer or other machine displaying human intelligence,” as Stuart Tucker, vice president, enterprise systems, AMAG Technology, Torrance, Calif., says. However, most definitions leave some wiggle room. After all, not every human displays the same level of intelligence, and not everyone would agree on what satisfies the standard of “displaying human intelligence,” even if we could define the level of human intelligence. 

His Head in a Cloud

 

AI in the movies is often portrayed as individual machines that can think and reason independently of any larger system. Often they have their own personality and may even “turn” against their creators or whatever system is over them. Other sci-fi AI systems act as a part of a collective consciousness. The rules are fuzzy for whether taking out the mother ship will render all the minions lifeless. 

For practical AI deployment in security, is AI better suited to work on the edge or in the cloud? 

Stephen Smith, D/A Central, says cloud is a necessity. “You cannot do what a cloud provider can do. There is Amazon Web Services [AWS], there’s Microsoft Azure, and Google is number three for market share. Everyone else takes up the last 5 percent or something like that. 

“You cannot build a data center that checks all the boxes that Amazon Web Services checks. You couldn’t put your data in a safer place than the public cloud. You get redundancy… you get so much. And another thing that is really important is you get elasticity. If suddenly I need to do a whole lot of AI or a whole lot of computing, I can expand my storage, my RAM, my CPUs and my GPUs; I can rent GPU time from Google, AWS or Azure on demand, so you’re not buying all this equipment — you can rent it as you need it. We are big believers in access control in the cloud. We have a big go-to-market strategy with that and really believe in it because of its capability.”
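
The elasticity Smith describes is, in practice, an API call. Below is a minimal sketch, assuming configured AWS credentials and using boto3; the AMI ID is a placeholder and the instance type is just one common GPU option, not a recommendation.

```python
# Minimal sketch: renting GPU compute on demand via boto3.
# Assumes AWS credentials are configured; the AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a GPU instance only when a heavy AI workload arrives...
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder deep learning AMI
    InstanceType="p3.2xlarge",        # GPU instance, billed only while running
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# ...and release it when the job is done, so you pay only for what you used.
ec2.terminate_instances(InstanceIds=[instance_id])
```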

Bill Hogan, D/A Central, says one of the platforms they are working with actually has an AI engine built into it. So why is that important? “Well, you don’t know what you’re going to use the AI for,” Hogan says. “It can self-learn any devices that you put on a platform. So anything that hits a dry contact that gives you ones and zeros, you can use it and bring it into that AI platform and really learn a lot of different things.” 

Having this ability greatly enhances access control from just blindly opening the door when it recognizes someone to being able to detect anomalous behavior and alert authorities. “What I can do is say, ‘Hey, all of a sudden there is no activity or tons of activity at one gate,’ but you are not filtering for that; you were not looking for that. All of a sudden though, you are getting an alert that is saying something is not correct or something is very different,” Hogan says.
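
Hogan’s gate example boils down to baseline-and-deviation logic. The sketch below is a hypothetical, simplified version: learn typical hourly event counts from dry-contact signals, then flag counts that fall far outside the norm. The counts and threshold are illustrative only.

```python
# Minimal sketch of the kind of anomaly alerting Hogan describes:
# learn a baseline of activity per gate from dry-contact events (1s and 0s),
# then flag hours whose event count deviates sharply from that baseline.
from statistics import mean, stdev

def is_anomalous(hourly_counts, current_count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from the mean."""
    if len(hourly_counts) < 2:
        return False  # not enough history to judge yet
    mu, sigma = mean(hourly_counts), stdev(hourly_counts)
    if sigma == 0:
        return current_count != mu
    return abs(current_count - mu) / sigma > threshold

# Historical events per hour at one gate; 0 or 250 would both raise an alert.
history = [42, 38, 51, 45, 40, 47, 44]
print(is_anomalous(history, 0))    # True  -- suddenly no activity
print(is_anomalous(history, 250))  # True  -- tons of activity
print(is_anomalous(history, 43))   # False -- normal
```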

Agent Vi has an offering called innoVi, a cloud-based video analytics software as a service. innoVi’s deep learning algorithms actively and continuously learn how to categorize objects with precision, and are able to distinguish between people, vehicles, static objects and even between cars, motorcycles, bicycles and trucks, describes Zvika Ashani, CTO and co-founder of Agent Video Intelligence.

But some of the processing is still done on the edge: “Agent Vi has developed a distributed processing architecture whereby a portion of the video analysis is performed by innoVi Edge, a compact, CPU load-conserving appliance that supports all ONVIF/RTSP streams,” Ashani says. “So, users can connect all of their cameras to the innoVi Edge appliance, and in this way, effortlessly enable them with video analytics capabilities that transform these ordinary CCTV cameras into smart video devices.”
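
innoVi Edge itself is proprietary, but the generic pattern of pulling frames from an ONVIF/RTSP camera for downstream analysis can be sketched with OpenCV. The URL, credentials and the analyze stub below are all placeholders, not Agent Vi’s implementation.

```python
# Illustrative only: the generic pattern of reading an RTSP camera stream
# frame by frame so an edge appliance can run analytics on each frame.
import cv2

RTSP_URL = "rtsp://user:password@192.168.1.64:554/stream1"  # placeholder

def analyze(frame):
    """Stand-in for the edge analytics model (classifier, detector, etc.)."""
    h, w = frame.shape[:2]
    print(f"got {w}x{h} frame")

cap = cv2.VideoCapture(RTSP_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped
    analyze(frame)
cap.release()
```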

But as Sean Lawlor, data scientist at Genetec Inc., Montreal, Canada, points out, artificial intelligence really means artificial intelligence — “something that can understand the world it lives in, absorb input and learn topics it was not specifically designed to learn.” In this sense, nothing currently labeled artificial intelligence could truly live up to that definition, so it is helpful to delineate between that human-like ability to think, understand and learn to solve new and unique problems about its world, such as that of the computer HAL from 2001: A Space Odyssey, and what we commonly accept as artificial intelligence today. 

General AI has all of the characteristics of human intelligence such as planning, understanding language, recognizing objects and sounds, learning, and problem solving, says Johnston. Narrow AI “focuses on only a few or maybe a single facet of human intelligence, such as being able to recognize speech, like Siri or Alexa, or the ability to recognize human faces like a facial recognition application,” Johnston describes.

Travis Deyle, cofounder and CEO, Cobalt Robotics, San Mateo, Calif., says most systems we have today are considered narrow AI. He describes: “You build a specific algorithm that does a very good job of recognizing cats in videos on something like YouTube. That’s mostly what we see today. Whether it’s in robotics or image processing, you’re seeing some very specific AI capabilities that do exceed human-level performance.”

Deyle says these algorithms give us new tools in our tool belt to craft better solutions, “but they are not a general purpose, do-everything panacea solution.”

For the purposes of this article, “artificial intelligence” will refer to narrow AI. (See sidebar “Will General AI Make You Redundant Anytime Soon?” for more about general AI.)  

Mind-Probing Robots From the Future

 


 

Many people are naturally wary of turning over any level of decision making to a machine. One major area of concern — and rightly so, given all of the recent hacks and data breaches — is privacy. 

AI is essentially a man-made tool that will do what humans program it to do, just like computers. But as computers can be used for nefarious purposes by nefarious players, so can AI. 

Amir Shechter, Convergint Technologies, says all the leaders in the high-tech world claim that China is going to be the leader in AI. “They already have some pretty amazing things, but this is where privacy conflicts with AI, because you can use it for enforcement, or you can use it for enhancement of services,” Shechter says. “In India they have a national database of fingerprints, so you can use it as a national ID. So the entire AI concept is scary on one hand, but there are some opportunities on the other hand — it depends on how you are going to use it. Police wearing [facial recognition] glasses and processing images in real time and matching against databases… that’s pretty scary, at least for the bad people.”

Shechter says other use cases for deeper machine learning capabilities involve identifying crowd behavior. “You can detect any other type of pattern. You can do it through social media; you can do it through Bluetooth; you can do it through many different data points that aggregate into some sort of processing system. You can take all of that and really derive a lot of data that can be used for various purposes. From a business standpoint, it can definitely be used for enhancement of business processes,” he says.

“In an ideal, AI-informed world,” says Ashani of Agent Video Intelligence, “my security would be ensured while my personal space remains unviolated.” In reality, however, AI is one more tool to enhance security, and recently, at least, privacy has taken somewhat of a backseat to security. 

“I live in a country where we have bag checks at every public building, mall, bus and train station, supermarket — you name it,” Ashani says. “Security is of primary importance; privacy is secondary to security.”

Ashani describes reasonable expectations for AI: “For video analytics, what is reasonable to expect of AI is improved security and safety, where operators are alerted only in cases where human intervention is required. On a smart city level, it’s reasonable to expect operational efficiencies: smoother traffic flow of people and/or vehicles, crowd control, queue management — even something as prosaic as municipal waste management can be improved by analyzing video data from street cameras.”

All of these things will have a tradeoff of privacy for security. Just how much of each depends on what people will tolerate, but judging by the lines of people at the airports waiting for the privilege of stripping to their stocking feet, laying out their valuables for inspection, and being patted down by a stranger, the people will tolerate whatever amount of privacy deprivation makes their life safer and more convenient.  

And even if mind-probing robots from the future really existed, could they really extract any more information from us than we’ve already shared on Facebook?

The decisions AI makes today, Shechter says, are really based on how well you train the software to understand the data and its implications, which can be as complicated as understanding your facial expressions. To illustrate this process, Shechter recounts a demonstration at NVIDIA’s experience center in which the company trains its software to recognize flowers. “You have a gazillion options, but they show that the faster the process is and the more knowledge you bring into the system as a part of the learning capability, the more accurate the results are. That’s true for facial recognition, for behavior, for items, autonomous cars, for a lot of things. So learning is a big part of it until the machine can teach itself.”
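
That relationship between training data and accuracy shows up even in a toy flower classifier. Here is a minimal sketch using scikit-learn’s classic iris dataset (fittingly, a flower-recognition task): accuracy generally climbs as the share of data used for training grows.

```python
# A toy version of the effect Shechter describes: the more labeled examples
# the system learns from, the more accurate its predictions tend to be.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

for train_size in (0.1, 0.3, 0.7):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_size, random_state=0, stratify=y
    )
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"{train_size:.0%} of data for training -> "
          f"accuracy {model.score(X_te, y_te):.2f}")
```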

Darrin Bulik, director of product marketing, client devices, Western Digital, San Jose, Calif., describes the necessity for learning and the lack of clarity about AI’s capabilities as one of the most confusing aspects of AI as it’s talked about today. “How quickly and accurately can the camera and/or NVR detect the object and then determine if it is a threat or not?” Bulik asks. “It will depend on how the AI is being performed and how much data is available to learn from.” Because AI-enabled cameras and systems are still in their early stages, it is a misconception that they are ready for “primetime,” he explains. A lot of AI’s ability, then, depends on a great deal of learning before it can be effective — and on tempered expectations, in some cases. 

What AI Is Not

Given the confusion surrounding the concept, it may help to note some of the things AI is not. For one thing, AI is not merely the ability to process a great deal of data. 

“Something as simple as motion detection or counting the number of amusement park guests that pass through a doorway may not be enough to justify calling it AI,” says Bulik, “because no decisions or critical analyses are needed in these cases.”

The NASA Effect

Conspiracy theories aside, landing on the moon was a huge national lift for a country still reeling from the assassination of JFK and embroiled in the Vietnam War. However, space programs cost a lot of money, and some skeptics question whether having one is worth the investment. 

A common answer to the criticism is the practical, everyday items we use that were spinoffs of NASA technology, research and/or inventions. NASA points to memory foam, freeze-dried food, firefighting equipment, DustBusters, cochlear implants and CMOS image sensors — all told, the agency claims more than 2,000 spinoffs in the fields of computer technology, agriculture, healthcare, public safety, transportation, and more.

So if general AI, in the sense of a human-like consciousness that can think, learn and react like a human, does not exist and probably will not anytime soon, is it worth continuing to pursue AI at all? 

Travis Deyle, Cobalt Robotics, says emphatically yes.  

“Even with the AI as it exists today, whether it is the very early phases of general AI or narrow AI,” Deyle says, “it produces capabilities that allow us to develop solutions that do push the boundaries forward.” 

Deyle describes technology in his company’s robot that allows it to detect anomalies. “When it detects these anomalies, it says, ‘Hey, I see an outlier. I see something weird.’ And it doesn’t know what ‘weird’ is yet, but it knows it is weird, and then you can rely on a person to deal with that by providing high-level intelligence and real-time response.”

A practical use for this technology that Deyle believes is unique to Cobalt robots is spill and leak detection. “You can’t go to Google and download a data set of thermal images of leaks and spills — it doesn’t exist,” he explains. “So as our system is moving around and building up these models and flagging potential leaks and spills to a human, we are able to very quickly build up data sets that are unique and can provide a lot of value for security facilities and employee health and safety and emergency response.”

In essence, pursuing these technologies has already demonstrated a great deal of benefit to the security industry. “So you do not have to have general AI in order for AI and machine learning to be profoundly enabling,” he concludes.

Two of the biggest technologies often conflated with “AI” are machine learning and deep learning. “Machine learning is an approach to achieve artificial intelligence and made its entrance in the 1980s,” says Jennifer Hackenburg, senior product manager, Dahua Technology, Irvine, Calif.

She offers the example of filtering spam messages. “Because devices had to be hand-coded and algorithms developed by hand, computers and image detection via machine learning did not rival humans until recently,” she says. Perhaps the confusion stems partly from the fact that AI’s surge in popularity coincides with machine learning’s broader adoption. 

Stephanie Weagle, CMO, BriefCam, Boston, further explains, “Machine learning is a technology in which the program learns to perform specific tasks based on data, rather than being explicitly programmed to do so. This results in simpler code that is easier to maintain and performs better.”
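
Hackenburg’s spam example makes Weagle’s point concrete: instead of hand-writing rules (“if the message contains ‘free,’ flag it”), you train a model on labeled examples and let it learn the distinguishing patterns. A minimal sketch with scikit-learn follows; the six messages are a toy dataset for illustration, and a real filter would need vastly more data.

```python
# Weagle's point in miniature: the model learns spam patterns from labeled
# examples rather than from hand-coded rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now", "claim your free money",
    "lunch meeting at noon", "quarterly report attached",
    "free vacation winner", "notes from today's call",
]
labels = ["spam", "spam", "ham", "ham", "spam", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)       # word counts per message
model = MultinomialNB().fit(X, labels)       # naive Bayes text classifier

print(model.predict(vectorizer.transform(["free prize money"])))       # ['spam']
print(model.predict(vectorizer.transform(["meeting notes attached"]))) # ['ham']
```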

Deep learning, Hackenburg explains, is a technique that breaks down tasks for implementing machine learning. “For example, deep learning has helped AI advance by training the device to recognize data such as the traffic patterns for driverless cars. Data mining is the core technology in deep learning used as a tool for further enhancement.”

Cobalt’s Deyle describes deep learning as a subset of machine learning, and machine learning as a subset of AI. “Deep learning is a subset of machine learning that has a very specific architecture,” he says. “The way in which the algorithm works is very specific; it relies on these layers of neural networks, which are a computational abstraction. In the ’70s, you would see these neural networks that were maybe one, two, maybe three layers deep, and what you’re seeing now is networks that are 15, 20, maybe even more layers deep that can provide additional reasoning.”
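
The layered architecture Deyle describes can be sketched in a few lines of PyTorch: a shallow network in the spirit of the early era, and a deep one stacking 15 hidden layers. The layer sizes here are arbitrary placeholders.

```python
# A minimal sketch of the layered architecture Deyle describes: early neural
# nets were one to three layers; "deep" networks simply stack many more.
import torch
import torch.nn as nn

shallow = nn.Sequential(           # roughly the early-era picture
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 10),
)

deep = nn.Sequential(              # a modern, many-layered stack
    *[layer
      for _ in range(15)
      for layer in (nn.Linear(64, 64), nn.ReLU())],
    nn.Linear(64, 10),
)

x = torch.randn(1, 64)             # one dummy 64-feature input
print(shallow(x).shape, deep(x).shape)  # both: torch.Size([1, 10])
```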

Deyle says a large part of the success of neural networks is both the amount of data that can be processed and their computational architecture. “For example,” he adds, “using GPUs to be able to do training and inference very quickly has enabled them to increase their effectiveness so much more than even five to 10 years ago.”

Another misconception that often leads to unfounded fears about AI is that AI is created with the intention of replacing people. AI in any form does not mean removing the decision maker. “AI techniques should be leveraged to empower the decision maker, not to remove them from the loop,” says Christine Trainor, director of data intelligence, global midmarket buildings, data enabled business at Johnson Controls, Westford, Mass. 

Rise of the Machines — or Not

 


 

Some big names in the technology world have named AI as our biggest existential threat, but are those fears warranted?

It’s a common trope: Man creates artificial intelligence, and that machine or robot or consciousness ends up destroying him. Elon Musk, Tesla and SpaceX CEO, stated in a documentary by Chris Paine, “If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings.” Musk has also stated that AI could pose a greater risk to the world than a nuclear conflict with North Korea.

So do we have cause to be worried that the AI we are developing will one day rise up and destroy us?

The short answer is no. That’s not to say this technology could never become extremely dangerous if used in a harmful manner, but narrow AI, the only real AI technology currently being used, is like most other technology: It does what it is designed to do. So if evil players use it for evil purposes, then of course it could be harmful. That is far different, however, from designing a system in good faith that is supposed to help and enhance the lives of humans, but suddenly becomes self-aware, rebels against its creators, and subjugates humans to slavery, death or being human batteries for their matrix-y machinations.

“The fundamental technology behind a nuclear weapon, for example, is also profoundly enabling for nuclear power plants,” says Travis Deyle, Cobalt Robotics. “If you’re looking at it as a nation-state, it’s probably very different, because you’re looking at the ways in which nefarious individuals could cause harm, and so there probably are legitimate concerns around the ways in which any form of narrow or general AI is used. It is very important that the people working on these things share similar values, and that goes for the use cases you’re developing, because inherently these systems will become better at what we design them to do. Our intent matters a lot.”

Stephen Smith, D/A Central, says, “Musk and [Microsoft founder Bill] Gates are saying, ‘Look, if you make autonomous drones for military purposes and they are equipped with weapons, what happens if they just decide everyone is the enemy?’ I get it, but again that technology is awfully fanciful and pie-in-the-sky and doesn’t have legs on the ground yet.” Smith adds that automating warfare would probably be a bad idea.

So as a tool, yes, AI could be dangerous — just as a gun or a hammer or a cast-iron griddle could be in the wrong hands. But used as tools to enhance security, AI is already proving to be profoundly useful and valuable. 

Jumbi Edulbehram, regional president – Americas, Oncam, Billerica, Mass., agrees. “While AI can be used to enhance functionality, there still remains a need to have a human element to respond to threats in a timely and effective manner. With the use of the technology, security professionals are equipped with more accurate information for making good decisions on emerging threats.”

Bill Hogan, president, CEO and owner of D/A Central, Oak Park, Mich., and member of Security-Net, prefers the term “augmented intelligence” over artificial intelligence because he thinks the latter has become so confusing for people. Hogan also sees the technology as an enhancement to people’s jobs, not a threat. “You are enhancing human experience through all of this learning. It is a broad, broad topic,” he says.

That’s not to say AI will not be disruptive, though. “By empowering the decision maker with first-of-its-kind insights, AI can enable a disruption in the industry that can change the way security is monitored today,” Trainor adds.

That leads to the practical applications of AI in security.

Artificial Intelligence — Real Benefits

In the foreseeable future, AI’s sweet spot most likely will be narrowly focused improvements that will enhance our lives in very specific ways; they will most likely not look like the Star Wars droid C-3PO or Rosie the robot maid from The Jetsons (who, though a bit more cantankerous, would be almost indistinguishable from the indomitable and very human housekeeper Alice from The Brady Bunch).

The narrow AI that has become such a hot topic already has go-to-market strategies, such as anomalous behavior detection. “You can sell AI that is inside cameras learning anomalous behavior in a scene,” says Stephen Smith, manager of IT services, D/A Central. “For instance, we have a product that can learn a scene over time and learn that vehicles don’t really belong in a particular area, and it will get your attention when it sees vehicles.”

Smith describes a very realistic scenario in which this technology would be beneficial: “A great example is snow plows plowing the sidewalk. [The AI in the camera] has only ever seen people there. It can say, ‘OK, I need to get the SOC’s attention via various ways — mass notification, you name it — to alert them that there is a vehicle where there ought not be one or there’s a human being.’ And you didn’t have to program that; it learned it by itself over time.”
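
What Smith describes can be reduced to a simple idea: record which object classes normally appear in each zone during a learning period, then alert on any class never seen there before. The sketch below is a hypothetical, simplified version, assuming an upstream detector supplies (zone, class) observations.

```python
# Sketch of the scene-learning behavior Smith describes. During a learning
# period the system records what normally appears where; afterward, a class
# never seen in a zone (a vehicle on the sidewalk) triggers an alert.
from collections import defaultdict

class SceneBaseline:
    def __init__(self, learning_observations=10_000):
        self.seen = defaultdict(set)      # zone -> classes observed there
        self.remaining = learning_observations

    def observe(self, zone, object_class):
        """Returns True if this observation is anomalous for the zone."""
        if self.remaining > 0:            # still learning the scene
            self.remaining -= 1
            self.seen[zone].add(object_class)
            return False
        return object_class not in self.seen[zone]

baseline = SceneBaseline(learning_observations=3)
for _ in range(3):
    baseline.observe("sidewalk", "person")        # learning period
print(baseline.observe("sidewalk", "person"))     # False -- normal
print(baseline.observe("sidewalk", "vehicle"))    # True  -- snow plow alert
```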

The anomalous behavior detection isn’t restricted to just bad players from the outside, though. Ryan Zatolokin, senior technologist, Axis Communications Inc., Chelmsford, Mass., describes one practical application of AI, including deep learning and machine learning, as the ability to monitor entire systems for anomalies. “Ultimately we can take methodologies that in the past have been reactive in terms of maintenance and make them proactive,” Zatolokin says. “Let’s say you have a thousand units of X, and four of the units have a weird behavior. Now you can really say, ‘These are anomalous, and we need to take a deeper look at them. Is there an aspect of the software or hardware that’s failing?’” 

Hogan sees AI as also taking analytics to the next level. “We are looking at audio analytics in the security industry,” he says. “So what is anomalous behavior in sound? Maybe you have had a microphone in the hallway and have been recording sound for a long period of time, and you know that it gets louder in between classes with a bunch of teenagers. But the question is: Is there something that happened that is outside the norm? Is it a gunshot? Is there noise during classes when there shouldn’t be? You are not really programming it; it is really learning over time. So a lot of the actual AI portions of these video analytics systems or audio analytics systems require a learning period.”

Smith says this learning period could be at least 45 days, reinforcing Bulik’s assertion that often AI is not ready for primetime without a significant learning period. 
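
Hogan’s audio example follows the same pattern, with the learning period Smith mentions built in: learn typical loudness for each hour of the day, then flag readings far outside that norm. The class below is an illustrative sketch, and the decibel figures are arbitrary.

```python
# Sketch of the audio-analytics idea Hogan outlines: learn typical loudness
# per hour of the school day, then flag readings far outside that norm.
from statistics import mean, stdev

class HourlyAudioBaseline:
    def __init__(self):
        self.history = {h: [] for h in range(24)}  # hour -> loudness samples

    def learn(self, hour, loudness):
        self.history[hour].append(loudness)

    def is_anomalous(self, hour, loudness, threshold=3.0):
        samples = self.history[hour]
        if len(samples) < 2:
            return False                  # still in the learning period
        mu, sigma = mean(samples), stdev(samples)
        return sigma > 0 and abs(loudness - mu) / sigma > threshold

baseline = HourlyAudioBaseline()
for db in (55, 58, 54, 57, 56):          # quiet mid-class hallway readings
    baseline.learn(hour=10, loudness=db)
print(baseline.is_anomalous(hour=10, loudness=95))  # True -- noise in class
print(baseline.is_anomalous(hour=10, loudness=56))  # False
```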

6 Questions to Ask When Considering AI

AI is not a goal but a tool, says Zvika Ashani, Agent Video Intelligence. The first question that needs to be asked, and answered as clearly as possible, is: What problem am I trying to solve? Once this is clear, you can start checking various solutions to see if they are a good fit. 

Questions you would need to ask are:

  • Does the solution have proven success in solving my specific problem? 
  • What accuracy level can be expected?
  • Can the solution work at the scale that I need?
  • How much effort will be required in installing and maintaining the solution?
  • What hardware infrastructure would be needed?
  • What would be the total cost of ownership?

Appearance search is another practical use case for AI, and one that many companies are starting to get very good at. “I believe the biggest impact beyond the very detailed labeling/tagging of video would be allowing users to perform timely and accurate forensics, investigations, and alerts,” says Matt Sailor, CEO, IC Realtime, Pompano Beach, Fla. “Over the years I have seen very little progress on how we search, retrieve and, most importantly, share video surveillance footage. It has traditionally remained a slow and cumbersome process.”

IC Realtime developed its product “ella” as an appearance search to solve those problems. Sailor says using ella has proven to be an enormous asset to large-scale projects that otherwise would need a team of dedicated security professionals to take on the daily tasks of video searching, retrieval and archiving. “As time progresses this will only get stronger and more accurate,” he says. 

Hogan says of appearance search technology: “You can right-click on a human in the scene in your camera on recorded video and you can say, ‘OK, I want to find this guy anywhere else he was across all my cameras,’ and it will do that. That’s the efficiency. 

“Now if I have, heaven forbid, an active shooter,” he continues, “and we have someone we need to find fast, being able to right-click them and find them right where they last were, the most recent picture of them, almost takes you instantly to where that person is. You’re not sifting through video; it’s about efficiency in the search.”
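
Under the hood, appearance search typically works by comparing feature vectors (“embeddings”) rather than raw pixels. Here is a minimal, hypothetical sketch: given the embedding of the right-clicked person, rank detections from every camera by cosine similarity. The random vectors stand in for the output of a real re-identification model.

```python
# Sketch of the idea behind appearance search: represent each detected person
# as an embedding vector, then rank all detections across all cameras by
# similarity to the selected person.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
query = rng.normal(size=128)                       # the right-clicked person
detections = {                                     # (camera, time) -> vector
    ("cam1", "09:00"): query + rng.normal(scale=0.1, size=128),  # same person
    ("cam7", "09:04"): rng.normal(size=128),                     # someone else
    ("cam3", "09:12"): query + rng.normal(scale=0.2, size=128),  # same person
}

ranked = sorted(detections.items(),
                key=lambda kv: cosine_similarity(query, kv[1]),
                reverse=True)
for (camera, time), vec in ranked:                 # best matches first
    print(camera, time, f"{cosine_similarity(query, vec):.2f}")
```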

Hogan and Smith say they have a go-to market biometrics product they’re selling now that learns your face over time without you having to register your eyes or anything like that. “You can grow facial hair, for example, and over time it is just re-profiling your face and learning every time it grants you access,” Hogan says. “It updates itself to the tune of thousands of images, building a larger database of you and what makes you unique. It is pretty interesting technology.”
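
The product Hogan describes is proprietary, but one common way a face profile can keep learning is to blend the embedding from each successful access back into a running template, so gradual changes such as new facial hair shift the stored profile over time. The sketch below assumes embeddings arrive from an upstream face-recognition model; the learning rate is an arbitrary choice.

```python
# Illustrative only: a running-average face template that "re-profiles"
# itself a little on every granted access.
import numpy as np

class FaceTemplate:
    def __init__(self, initial_embedding, learning_rate=0.05):
        self.template = np.asarray(initial_embedding, dtype=float)
        self.lr = learning_rate

    def update_on_access(self, new_embedding):
        """Blend the latest matched face into the stored template."""
        new = np.asarray(new_embedding, dtype=float)
        self.template = (1 - self.lr) * self.template + self.lr * new

rng = np.random.default_rng(1)
profile = FaceTemplate(rng.normal(size=128))
for _ in range(100):                       # 100 granted accesses
    todays_face = profile.template + rng.normal(scale=0.05, size=128)
    profile.update_on_access(todays_face)  # template drifts with the wearer
```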

Europe’s new General Data Protection Regulation (GDPR) is another area in which AI will be useful. “AI is going to play a role in GDPR: it’s going to be necessary,” Smith says. “You’re not going to be able to scrub people out of your system without massive computational power. And I think it’s just more opportunity for our industry.” 
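
One concrete form “scrubbing people out” can take is automated face redaction in stored footage, which is exactly the kind of computational load Smith is pointing to at scale. Below is a minimal sketch using OpenCV’s bundled Haar cascade detector, a simple and dated choice; production systems would use stronger models, and the file path is a placeholder.

```python
# Illustrative face redaction: detect faces in a stored frame and blur them.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def redact_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

frame = cv2.imread("stored_frame.jpg")     # placeholder path
if frame is not None:
    cv2.imwrite("stored_frame_redacted.jpg", redact_faces(frame))
```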

AI isn’t something that should make us recoil. “Don’t fear change; embrace it,” Smith implores. “This change is coming, and we want to be at the forefront of it.”

Some of the things being done are remarkable, but ultimately AI will enhance our lives and our work. “If AI could take away just the stupid, dangerous jobs — that would be great,” Smith adds. “It would enhance human existence.”

Zatolokin says the question dealers and integrators should start with is simply: “What is it I’m trying to accomplish?” That is not to say AI can necessarily do it yet, but that’s the fundamental question to start the process. “What is the use case for what I’m trying to do?” Zatolokin asks. “If I need to, for example, be able to go back through video or even live video, and spot people who are dressed in a certain way or have a certain profile, those types of applications that have some sort of deep learning might be very helpful. But if that’s not what I’m trying to solve, then that’s not going to help me.”

Zatolokin says you need to fully define your challenges and what you’re trying to do first, and then start implementing the technologies to address those concerns. After a while, you might not be able to imagine a world without AI.

3 Tips for Successful AI

Security to date has been fairly siloed in how it treats data and access to APIs, yet there are several profoundly enabling technologies, whether robots, other IoT devices or cameras. 

Here are three tips for making AI successful for you.

  1. Continuously push computing to the edge. There is often more data generated than you can upload to the cloud, which is why edge computing is such a big trend. 
  2. Ownership of the data is important. Cobalt takes a unique position in that the customer owns all the data; we just have a license to use it to make all systems better across the whole fleet for all customers.
  3. It is really about having open APIs, so that security practitioners can build a security program that can ingest data from all these different systems and react to it (a minimal sketch follows this list). It’s not enough to just have logging, and it’s not OK to just be an autonomous data machine; it’s very important that you have responsiveness, because, especially in the security context, the immediacy and relevancy of an event has to be acted upon for the tool to be effective.
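
Here is a minimal sketch of tip 3, hypothetical and not any vendor’s API: a single open ingestion endpoint that accepts JSON events from any system (cameras, robots, access control) and reacts immediately rather than merely logging. Flask is used for brevity, and the event fields are assumptions.

```python
# Illustrative open-API event ingestion that reacts instead of just logging.
from flask import Flask, request

app = Flask(__name__)

def dispatch(event):
    """React in real time instead of merely recording the event."""
    if event.get("severity") == "high":
        print("paging the SOC:", event)   # stand-in for a real notification
    print("logged:", event)

@app.route("/events", methods=["POST"])
def ingest():
    dispatch(request.get_json(force=True))
    return {"status": "accepted"}, 202

if __name__ == "__main__":
    app.run(port=8080)
```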

Cobalt has access control right there on the robot; there’s a badge reader right under the screen where people can badge in and prove their identity to the system. You’re able to get good training data from that. 

In many ways it’s a combination of a VMS and access control. It’s also able to implement security policy as though it were software, and allow you to programmatically trade off security policy decisions.

The ability to implement physical security policy as though it were a cyber system starts to elevate the conversation away from the guns, gates and guards, and more toward risk, compliance, business continuity and emergency response, in a way that is provable, is accountable, and frankly, in our case, just works. — Contributed by Travis Deyle, cofounder and CEO, Cobalt Robotics, San Mateo, Calif.