Video content analysis (VCA) systems continuously process video surveillance data and analyze the metadata aggregated over time, transforming raw video into actionable intelligence. Four factors are shaping data processing models in the VCA industry: 1) the development of “smarter,” more sophisticated cameras that produce higher-resolution footage; 2) an increase in the number of deployed video cameras; 3) the adoption of Deep Learning as the standard technology enabling VCA; and 4) the need to balance the high processing demands that Deep Learning incurs. The first three factors drive up processing requirements and costs, which makes the fourth unavoidable: meeting those demands with cost-effective computing solutions that can support expansive camera networks.
Deep Learning-backed video data processing enables VCA systems to continuously derive insights from more cameras and aggregate more metadata over time. Deep Learning techniques use Deep Neural Networks (DNNs) to train computer systems, imitating the way a human is taught and learns. By applying Deep Learning to effectively detect, classify, and recognize objects in video, along with their attributes, VCA solutions unlock previously under-used video content. Especially in light of the increased camera coverage and larger volumes of higher-resolution video available today, Deep Learning is a powerful enabler of real-time, comprehensive video content analysis. However, DNNs require more computing power and impose specific hardware requirements, and that demand has driven up the total operating costs of video surveillance deployments.
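To make the "frames in, aggregated metadata out" idea concrete, here is a minimal structural sketch of such a pipeline. Everything in it is hypothetical: `detect_objects` is a stand-in for a real DNN detector (a production system would run each frame through a trained network here), and frames are mocked as lists of visible object labels.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # object class, e.g. "person" or "car"
    confidence: float  # model score in [0, 1]

def detect_objects(frame) -> list[Detection]:
    """Stand-in for a real DNN detector; here the mock frame
    already carries its labels, so we just wrap them."""
    return [Detection(label, 0.9) for label in frame]

def analyze_stream(frames, min_confidence=0.5):
    """Aggregate per-class object counts over a stream of frames,
    keeping only detections above a confidence threshold."""
    metadata = Counter()
    for frame in frames:
        for det in detect_objects(frame):
            if det.confidence >= min_confidence:
                metadata[det.label] += 1
    return metadata

# Each mock "frame" is a list of the object labels visible in it.
stream = [["person", "car"], ["person"], ["person", "bicycle"]]
print(analyze_stream(stream))  # counts: person=3, car=1, bicycle=1
```

The point of the sketch is the shape of the loop: detection runs per frame, but the value comes from the metadata accumulated across the whole stream.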
With all these factors to consider, technology providers are determining how to best overcome the challenges of high processing costs and identify the best data processing model for meeting the high demands of video processing. In this post, I’ll review the considerations behind edge versus centralized computing for powering video content analytics.
Video content analytics technologies have embraced centralized computing models because of the following considerations:
Today, analytics solutions on edge devices, most commonly on-camera analytics, are increasing in popularity, driven by the reduced latency and greater security enabled by edge computing. Most importantly, edge computing helps conserve bandwidth: uploading video to centralized remote locations or public clouds for processing requires a lot of bandwidth and relies on high-speed, highly available network connections. By collecting and processing data on an edge device, organizations can reduce the costs of maintaining on-premises infrastructure.
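The bandwidth argument can be made concrete with back-of-envelope arithmetic. The figures below (per-camera stream bitrate, metadata bitrate, camera count) are illustrative assumptions for the sake of the comparison, not vendor specifications:

```python
def uplink_mbps(cameras: int, per_stream_mbps: float) -> float:
    """Total uplink bandwidth if every camera sends this rate upstream."""
    return cameras * per_stream_mbps

# Assumed figures for illustration only: a 1080p H.264 stream at
# roughly 4 Mbps versus roughly 0.05 Mbps of detection metadata.
VIDEO_MBPS = 4.0
METADATA_MBPS = 0.05
CAMERAS = 200

central = uplink_mbps(CAMERAS, VIDEO_MBPS)   # ship raw video to a central site
edge = uplink_mbps(CAMERAS, METADATA_MBPS)   # process on-camera, ship metadata

print(f"centralized: {central:.0f} Mbps, edge: {edge:.0f} Mbps")
```

Under these assumptions, processing at the edge cuts uplink requirements by nearly two orders of magnitude, which is why bandwidth is so often the deciding factor.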
Nonetheless, there are several compelling reasons holding the industry back from a full transition to the edge, and organizations should bear in mind that edge computing is not the appropriate choice for every implementation:
As cameras and other edge devices are growing in sophistication and being bolstered with more powerful processors, comprehensive on-camera analytics will become more prevalent. (I’ll share more about this in a future blog post – stay tuned!)
This post began by asking which data processing model could best meet the high demands of video processing: edge or centralized computing? The simple answer is “it depends.” Each model has its unique role and functionality benefits. But, by taking a comprehensive approach that combines centralized and edge processing, a deployment can benefit from the flexibility and scalability of centralized processing, while also dedicating specific, strategic devices for edge analytics.
Wondering which option is best for your organization? If you’re looking to get started with video content analytics or just want to learn more, schedule a personalized BriefCam platform demonstration to understand the deployment requirements and analytic capabilities required for your business.