What’s Driving Deep Learning-Powered Video Analysis Towards Edge Computing
In a previous blog post I discussed which data processing model—edge or centralized computing—could best meet the high demands of processing video surveillance data. The answer is not simple: each model has distinct benefits, and, today, a combination of the two is optimal for most video surveillance deployments. But will that remain true going forward?
Edge computing currently has some disadvantages; for example, on-camera, edge-based video analytics applications are constrained by the limited memory and processing power available on the device. Despite such constraints, continued migration to comprehensive on-camera analytics is anticipated, because edge-based video analytics technology is rapidly evolving. Thanks to benefits including enhanced operational reliability, faster processing and reduced privacy risks, edge-based video content analysis is growing in popularity. But, beyond these advantages, three main factors are driving the shift towards on-camera analytics and edge computing:
Increased Demand for Real-Time Processing
Today a growing number of active video surveillance cameras is deployed – a trend that is set to continue. This increase in cameras, and the corresponding rise in real-time processing requirements to support larger volumes of data, is motivating technology providers to create, market and distribute smarter, more sophisticated edge-processing cameras.
The ultimate goal and driver of on-camera analytics development is to deliver general-purpose identification, extraction, tracking and classification of all objects in video footage at the edge. Today’s on-camera analytic capabilities have evolved from basic motion detection to specific object identification and classification, supporting more advanced applications such as intruder detection, license plate recognition and people counting. Bolstered with more powerful processors, edge devices will ultimately be able to deliver comprehensive on-camera analytics rather than mere point solutions.
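The starting point of that evolution is easy to picture: the simplest on-camera analytic is a frame-differencing motion detector. The sketch below is a minimal, illustrative example using simulated grayscale frames; the threshold values are assumptions, not tuned parameters from any real camera.

```python
import numpy as np

def detect_motion(prev_frame, frame, threshold=25, min_changed=0.01):
    """Basic frame-differencing motion detection: flag motion when
    more than min_changed of pixels differ by more than threshold."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = (diff > threshold).mean()
    return bool(changed >= min_changed)

# Simulated 8-bit grayscale frames (a real camera would supply these).
static = np.full((120, 160), 100, dtype=np.uint8)
moving = static.copy()
moving[40:80, 60:100] = 200  # an "object" entering the scene

print(detect_motion(static, static))   # False: nothing changed
print(detect_motion(static, moving))   # True: a region changed
```

Everything beyond this point – classifying what the moving region is, tracking it across frames – is what requires the more powerful edge processors discussed above.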
AI Chip Development
Today Deep Learning-driven video analysis has become mainstream, and leading hardware providers are developing dedicated AI chips to support video content analysis (VCA) on edge devices. Such small-form-factor chips are characterized by high efficiency and low energy consumption, and – because they support only the specific instructions required for Deep Learning inference – they are also cost-effective.
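One reason an inference-only chip can be small and efficient is that it typically operates on reduced-precision integers rather than 32-bit floats. The sketch below shows symmetric int8 quantization in plain NumPy as an illustration of the idea; the exact quantization scheme varies by chip and is an assumption here, not a description of any specific product.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats into
    [-127, 127] using a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
print(q.nbytes, w.nbytes)                    # 4 bytes vs 16 bytes
print(np.max(np.abs(dequantize(q, s) - w)))  # small rounding error
```

The 4x reduction in storage (and the cheap integer arithmetic it enables) is what lets a small-form-factor chip run inference within a camera's power budget.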
Decreasing Costs for Decoding High Resolution Video
Today, centralized computing services enable cross-camera analytics and insight aggregation. To be centrally processed, however, video data must first be uploaded to central servers, which demands significant bandwidth and relies on high-speed, highly available network connections. By collecting and processing data on an edge device, organizations can conserve bandwidth and reduce the costs of maintaining on-premise infrastructure.
Video is encoded to reduce bandwidth requirements before being transmitted to recording archives, live monitors or centralized processing servers. However, decoding this higher-resolution video – such as 4K – is compute-intensive. As data processing on the edge becomes more prevalent, it will become possible to circumvent video decoding and, ultimately, reduce the computational requirements for processing the overwhelming amounts of high-quality video data. By the time video captured by an edge device is transferred to the centralized location, it will have already been processed, so decoding will only be required on demand. For post-event investigation, for instance, only the video for the time and camera ranges relevant to the case will need to be decoded. Thus, the extraction of evidence will not be inhibited by bandwidth or encoding and decoding demands.
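A rough back-of-the-envelope calculation shows the scale of the bandwidth difference between shipping full encoded streams to a central server and shipping only edge-generated analytics metadata. The bitrates below are illustrative assumptions, not measurements from any deployment.

```python
# Assumed figures for a hypothetical 100-camera site.
CAMERAS = 100
STREAM_MBPS = 16          # assumed bitrate of one encoded 4K stream, Mbit/s
METADATA_KBPS = 50        # assumed per-camera analytics metadata rate, kbit/s

stream_total = CAMERAS * STREAM_MBPS                # full-stream uplink, Mbit/s
metadata_total = CAMERAS * METADATA_KBPS / 1000.0   # metadata-only uplink, Mbit/s

print(f"uplink for full streams:  {stream_total} Mbit/s")
print(f"uplink for metadata only: {metadata_total} Mbit/s")
```

Under these assumptions the metadata-only uplink is hundreds of times smaller, which is why processing at the edge and decoding centrally only on demand is attractive.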
What’s Next in Edge Computing
Currently, on-camera analytics are limited to specific capabilities, effectively making them point solutions. While there are some obstacles along the path to edge processing models, research and development is accelerating, leading to more extensive on-camera analytic capabilities. A gradual migration of comprehensive video content analytics to edge devices is well underway, and, ultimately, it will be possible to power a complete analytics suite – including object tracking, classification and recognition – from the edge. As on-camera analytic devices are more widely adopted, the AI-backed video content analytics possible on the edge will continue to expand, supporting a wide range of industries from transportation to higher education, healthcare and retail in new and innovative ways.