According to Edward Newman, one of the greatest challenges is managing exaggerated customer expectations about video analytics. Newman is vice president of Hauppauge, N.Y.-based Universal Security Systems Inc., a large-scale integrator that provides turnkey electronic and physical security solutions, particularly to industrial and infrastructure clients.
“There’s a lot of hype regarding what video analytics can do,” Newman points out. “There are video analytics products that do amazing things. But within the wide range of hardware and software solutions out there, there are some that are true video analytics, and some that are no more than basic motion detection.”
Expectations about the productivity improvements to be derived from using video analytics must also be addressed, he says. Many customers hold the preconceived notion that video analytics can reduce manpower and make existing manpower more effective. Those assumptions aren’t necessarily true. “In some cases, the use of the video analytic product can actually increase the operator burden,” Newman says.
A computer could, for instance, watch all 100 cameras monitoring a given facility in a way a single operator or even several operators couldn’t, he says. But it takes a human being to actually decide if a threat caught by any of those 100 cameras requires additional action. “Video analytics can give customers something they don’t have, but it comes at a cost, both in dollars and in operational requirements,” Newman believes.
It’s also important for the integrator to keep in mind that the software companies providing analytic solutions won’t always tell him everything he should know. “Your analytics system is, in fact, not magic and can’t perform miracles,” says Jacob Loghry, systems engineer II with Adesta LLC, based in Omaha, Neb., SDM’s 2008 Systems Integrator of the Year. “It has to be properly designed, and sometimes this means more cameras and possibly better cameras, more lights and more infrastructure.”
THE CANDID CUSTOMER
Integrators must learn, for example, how customers plan to use their systems, as well as their performance needs and their budgets, he explains. “Once you understand how the customer pictures the system working and performing, you can move forward with determining whether analytics can make that a reality, and more specifically, which vendor is best suited for the application,” Loghry explains.
Newman echoes Loghry’s assertion that budget must be addressed. Some software makers list their analytics at only about $1,000 per channel, integrators say. The reality, Newman claims, is that by the time the server hardware and the professional services are factored in, that price can climb to $10,000, or even $100,000 per channel on a very sophisticated system.
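Newman’s gap between list price and deployed price can be made concrete with a rough per-channel roll-up. The line items and dollar figures below are illustrative assumptions, not quotes from Newman; only the $1,000 license and the $10,000-per-channel outcome come from the article:

```python
# Rough per-channel cost roll-up showing how a ~$1,000 analytics
# license can grow once infrastructure is included. All figures
# here are illustrative assumptions.
def deployed_cost_per_channel(license_cost: float,
                              server_cost: float,
                              channels_per_server: int,
                              services_cost_per_channel: float) -> float:
    """Total per-channel cost once server hardware is amortized
    across its channels and professional services are added."""
    return (license_cost
            + server_cost / channels_per_server
            + services_cost_per_channel)

# Hypothetical example: $1,000 license, an $8,000 server handling
# 4 channels, and $7,000 of design/tuning labor per channel.
print(deployed_cost_per_channel(1000, 8000, 4, 7000))  # prints 10000.0
```

The point of the sketch is that the license is often the smallest line item; amortized hardware and services dominate.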
Integrators also must ensure customers have realistic expectations about how many false alarms they can handle per hour. Many clients have the false impression that a video analytics system is like a fire alarm, generating perhaps one false alarm per month or per year. Universal Security Systems’ guideline is more realistic: expect one false alarm per camera per hour, Newman relates.
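That one-per-camera-per-hour guideline compounds quickly across a large camera count. A back-of-the-envelope sketch (the camera counts and the 8-hour shift are illustrative assumptions, not from the article):

```python
# Operator load implied by Newman's guideline of one false alarm
# per camera per hour. Shift length and camera counts below are
# illustrative assumptions.
FALSE_ALARMS_PER_CAMERA_PER_HOUR = 1

def alarms_per_shift(cameras: int, shift_hours: int = 8) -> int:
    """Expected false alarms an operator must triage in one shift."""
    return cameras * FALSE_ALARMS_PER_CAMERA_PER_HOUR * shift_hours

for cams in (10, 50, 100):
    print(f"{cams:>3} cameras -> {alarms_per_shift(cams)} false alarms per 8-hour shift")
```

For the 100-camera facility Newman describes, that works out to 800 alarms per shift to triage, which is why analytics can increase rather than decrease the operator burden.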
Camera positioning is another key aspect to discuss with customers, he adds. The integrator should explain to customers that the camera angles and views needed in video analytics are often very different than those needed in surveillance situations.
In surveillance, an individual monitoring, say, a room entryway will want the camera to offer a low angle on the door from about 10 to 20 feet away. In video analytics applications designed to identify tailgating (where an authorized individual opens a door with a passcard and is quickly followed by intruders who enter through the open door), the camera should be mounted directly overhead.
“That would provide the optimum angle for certain video analytics,” Newman explains. “But that would be useless to the human operator.”
For Loghry, the most difficult aspect of setting up a video analytics system is achieving proper lighting and camera layout. Lighting is difficult for one reason: meeting the performance specifications, he says. “For a first-timer, I would suggest actually testing your selected camera and analytics vendor under the various lighting conditions to get a rough idea of how it will work in the field,” he advises. “If the picture is pretty clear, has just a small amount of pixelation, and you, the installer, can clearly see your target area and object, then your analytics system will also [see the target area and object].”
CALCULATING CAMERA COVERAGE
Know your threats or risks. In other words, what type of target object are you seeking: a person, a vehicle, a boat, or some other object?
How many pixels on target does your analytics vendor require for detection, classification and identification? “This could be completely specific to the vendor’s algorithms, or it could be based on some standard like the Johnson’s Criteria or Rotakin Standard,” Loghry says.
What rule will you be setting up? According to Loghry, this will determine whether a camera’s FOV should be parallel, perpendicular, bird’s eye, mounted high, or mounted low relative to the area of interest.
Do you need to cover blind spots or have overlapping coverage?
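Loghry’s pixels-on-target question can be checked before installation with simple lens geometry: the scene width a camera covers at a given distance follows from its horizontal field of view, and the pixels a target occupies are the sensor’s horizontal resolution scaled by the target’s share of that scene width. A minimal sketch, assuming a hypothetical 1920-pixel-wide sensor and 60-degree FOV; the actual pixels-on-target threshold must come from your analytics vendor or a standard such as Johnson’s Criteria:

```python
import math

def pixels_on_target(sensor_h_px: int, hfov_deg: float,
                     distance_m: float, target_width_m: float) -> float:
    """Horizontal pixels a target of the given width occupies
    at the given distance, for a camera with the given horizontal
    resolution and horizontal field of view."""
    # Scene width covered by the lens at that distance.
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return sensor_h_px * target_width_m / scene_width_m

# Illustrative assumptions: 1920-px-wide sensor, 60-degree HFOV,
# a person roughly 0.5 m wide. Vendor detection thresholds vary.
for d in (10, 30, 60):
    px = pixels_on_target(1920, 60.0, d, 0.5)
    print(f"person at {d:>2} m -> {px:.0f} px on target")
```

Running the numbers this way for each camera position shows quickly whether the planned layout meets the vendor’s detection, classification and identification requirements, or whether more (or better) cameras are needed, as Loghry warns.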
Newman believes the most difficult application involving a video analytics system is deploying such a system in an outdoor environment affected by profound seasonal changes. “In that situation, you actually have to go through a full year of seasonal light and precipitation changes to fully configure and tune that system,” he says. “[Clients] need to understand it will take a full year to fully deploy the system. The first time it snows, it’s a rude awakening for the system. The computer is a little like a baby. It has to learn the answers to questions like ‘What is snow?’ and ‘How do I handle that?’”
Much can go wrong, resulting in false alarms. But there’s also a lot that can be done upfront to avoid problems. An improperly designed analytics system will not only fall far short of client expectations, but will necessitate the expenditure of extra money and time reworking the solution, Loghry says. Proper design will avoid these errors:
Lighting issues. Not enough light causes poor video quality; light in the wrong places can produce shadows, glare and dark spots.
Detection ranges. Know how many pixels on target your system requires, and design your camera coverage accordingly.
High false alarm rates. Design camera poles for appropriate wind loading in the application’s region, and always avoid aiming the camera at areas where glare, shadows and other nuisances are present.
Transmission issues. “Poor networks, wireless links and improperly terminated video cable always cause headaches,” Loghry says. “Bottom line: garbage in, garbage out.”
SIDEBAR: Rating Video Analytics’ Difficulty
Newman’s response? An 11. “It’s definitely the most challenging new product we’ve ever worked with,” he says. Software firms have made good progress in reducing false alarms and increasing accuracy. But the improvement integrators are still waiting for is the ability of systems to self-tune, Newman says.
For his part, Loghry says he would have rated the difficulty a 10 several years ago. That was because, while the systems are easy to explain conceptually, there are many variables that must be addressed before a system works optimally.
After working in the field for a while, “I’ve learned a thing or two, and know what types of situations will either cause problems or drive costs up,” he says.
“I would change my answer to something like a 6 or 7, relative to other intrusion detection systems available.”