The move to the edge started with storage on SD cards in cameras and has exploded to the point where cameras are powerful computers in their own right, able to collect video data, analyse it using the latest AI algorithms and send alerts when the AI flags something out of the ordinary. That, however, is only how it played out in the security surveillance industry.
The edge, or edge computing, has seen extraordinary growth over the last few years as IoT technologies become standard fare in industries across the globe. Today the edge is made up of diverse solutions, ranging from remote sensors that communicate small amounts of information, such as temperature or location, to systems that collect and analyse enormous volumes of data, like surveillance cameras.
The promise of edge technologies in surveillance is that, with effective AI-enhanced analytics on board, cameras can do their own analysis, filter out false alarms and send alerts when something happens, in addition to storing video onboard. An event could be as simple as verifying that a human has crossed a line, through to identifying faces in a crowd. What we used to rely on servers to do in bulk, each camera can now do on its own (as long as it is a modern camera designed for that purpose). We even have cameras today that allow users to load different analytics applications onto the camera, depending on the user’s requirements.
In an edge surveillance solution, video is stored on the camera, with only alerts and a few seconds of video sent to a control room. For later investigation, all the video can be downloaded, or the download can be scheduled for off-peak times to prevent bandwidth bottlenecks and latency. While some cameras can raise the alarm themselves, cloud services can be used as a qualifying analytic to double-check a camera’s alert and make sure it is not a false alarm. This is being done with great success by some local cloud-friendly companies.
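The flow described here — a rolling buffer of onboard storage, an onboard analytic that raises an alert with only the last few seconds of video, and a cloud-side analytic that double-checks the alert — can be sketched in Python. Everything below (the function names, thresholds, frame rate and the `cloud_verify` stub) is a hypothetical illustration, not any vendor’s actual API:

```python
from collections import deque

CLIP_SECONDS = 5   # seconds of video sent with each alert (hypothetical)
FPS = 10           # hypothetical frame rate

def cloud_verify(clip):
    """Stub for a cloud-side qualifying analytic that double-checks the
    camera's alert before it reaches an operator. A real service would
    run a second, heavier model over the clip."""
    return any(score > 0.9 for _frame, score in clip)

def run_edge_loop(frames, threshold=0.7):
    """Simulate the edge loop: keep a rolling buffer (the 'SD card'),
    and send a short clip upstream only when the onboard analytic fires
    and the cloud check confirms it is not a false alarm."""
    buffer = deque(maxlen=CLIP_SECONDS * FPS)  # rolling onboard storage
    alerts = []
    for frame, score in frames:                # score: onboard detection confidence
        buffer.append((frame, score))
        if score >= threshold:                 # onboard analytic fires
            clip = list(buffer)                # only the last few seconds leave the camera
            if cloud_verify(clip):             # cloud filters false alarms
                alerts.append(clip)
    return alerts
```

Fed a stream of `(frame, score)` pairs, the loop never transmits full video: a low-confidence detection that passes the onboard threshold but fails the cloud check produces no alert at all.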
Of course, there are variations in this architecture, with some cloud companies installing an edge-processing device onsite which can monitor the video from all the environment’s cameras, whether edge ready or not, and send alerts to the cloud for verification and subsequent action.
To find out what is happening in the world of edge surveillance, Hi-Tech Security Solutions asked a few people in the industry for their take on the market right now and for their views on the future of edge surveillance in general. Participating in the article are:
• Axis Communications’ sales manager for Africa, Marcel Bruyns.
• Azena’s VP of marketing, Fabio Marti.
• Forbatt SA’s Vaughn Tempelhoff.
• Hikvision’s Luke Liu.
• inq.’s CTO, Pramod Venkatesh.
Hi-Tech Security Solutions: In your experience, are organisations realising the value of edge storage and analytics and implementing it in surveillance rollouts or are they still more focused on storing and analysing data on central servers (or cloud servers)?
Bruyns: At Axis Communications, edge storage has been a successful part of our surveillance product solution for many years. It is not extremely common, but in some unique environments edge storage has made the solution perform impressively.
Edge storage makes the system easy to deploy and highly manageable in our partnered transportation solution. In some cases, edge storage and edge analytics go hand in hand. Some applications allow the devices to make decisions while recording the data to the SD card. Although more organisations are adapting to AI applications, the overall adoption rate is slow. Some of the more popular AI applications we are seeing in the market include facial recognition, automatic number plate recognition (ANPR), and human and vehicle detection, to name just a few. The more complex applications like weapon detection and behavioural analytics are not mainstream yet because these applications are considered case-specific and involve a solid understanding of what the jobs entail.
Despite more applications being deployed, most are still either server- or edge-based. We have seen an increase in cloud applications, but adoption is a little slower in the South African context. However, we have been noticing that many of the global data centre organisations are investing in infrastructure in South Africa, which will hopefully allow for more cloud-based solutions to be driven into the market.
Depending on the scenario and use case, we still believe a hybrid analytics solution is an excellent way to process data.
Marti: There is no doubt that edge analysis has grown in popularity as edge computing continues to grow in power and decline in costs. Processing the camera’s live stream at the edge allows for transmitting and storing only the most appropriate data, which also provides huge savings on bandwidth and storage costs.
While storage costs are also decreasing, video resolution has also dramatically increased, resulting in more data-rich video that further taxes bandwidth. So what we see is that more organisations are increasing their investments at the edge.
Ultimately many deployments are hybrid, and for a good reason. Combining edge analytics with central components offers a variety of additional benefits. One of our partners is using analytics on the edge to generate events and alarms that help run their security operations centres (SOC) more efficiently, while increasing the situational awareness of operators and enabling better security and incident response. Centralised video management is still used for forensics after the fact, which also becomes more efficient if you can also search for events generated by analytics at the edge. So it is not always one or the other; for real-time analysis, edge computing may provide crucial advantages, but it also provides additional value if it is integrated with central components, especially when looking at analysis and forensics.
Tempelhoff: We have certainly seen an upwards trend where end users specifically demand more features and functionality from their CCTV systems. This is mostly driven by analytics giving you relevant video information as and when you require it. Edge-based storage is currently not as popular and its advantages are still debated.
Liu: We have seen that edge analytics powered by AI is becoming more popular in the security industry. Improvement of algorithms, increased computing performance, and decreased cost of chips due to the advance of semiconductor technology in recent years have supported this development. More importantly, customers have recognised the value of edge analytics and have found new uses in various scenarios. Along with ANPR, automated event alerts and false alarm reduction, AI technologies are also being used for a wider range of applications, like personal protective equipment detection, fall detection for the elderly, mine surface examination and more.
Venkatesh: Most of our clients are looking for edge processing, but prefer storing their video in the cloud. The reasoning behind this is that if an attack happens and the guys steal the NVR or camera, the evidence is completely gone. Hence they are more interested in having the processing happening on the edge (to reduce the bandwidth), but securing the data in the cloud.
Hi-Tech Security Solutions: What benefits does edge processing offer the end user? Is this a reasonable option given the declining power availability situation?
Bruyns: Edge computing is a very efficient approach, as data can be processed right at the source. This means images can be analysed before image processing takes place and before the video stream is compressed. Both can be a significant advantage for analytics accuracy.
Theoretically, edge processing is very scalable. If more devices are needed, they are deployed where needed without the need to expand server hardware for the analytics. Of course, recording capacities still need to be considered.
Power is a constant pain point, but edge processing offers some benefits here. It might be easier to maintain backup power for a few devices in the field than for a full rack of servers. Having access to information after the fact is crucial: the cameras can store data locally and send it to the server (onsite or cloud) once the link is back online. This reinforces the hybrid approach of combining edge with server or cloud solutions.
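The store-and-forward behaviour described above — record locally while the link is down, drain the backlog once connectivity returns — can be sketched as a small class. The class name, `uplink` callable and method names are all hypothetical, chosen purely for illustration:

```python
class StoreAndForwardCamera:
    """Illustrative sketch: write every segment to local edge storage
    first, and forward upstream only when the link is up."""

    def __init__(self, uplink):
        self.sd_card = []     # stands in for the camera's SD card
        self.online = False
        self.uplink = uplink  # callable that sends one segment to the server/cloud

    def record(self, segment):
        """Always write locally first; forward immediately if online."""
        self.sd_card.append(segment)
        if self.online:
            self.flush()

    def flush(self):
        """Push buffered segments upstream, oldest first."""
        while self.sd_card:
            self.uplink(self.sd_card.pop(0))

    def reconnect(self):
        """Link restored: drain the backlog accumulated while offline."""
        self.online = True
        self.flush()
```

A power or network outage simply grows the local buffer; nothing recorded during the outage is lost, which is the after-the-fact access the answer above stresses.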
Tempelhoff: Edge-based processing offloads much of the processing that was previously done on recorders, improving the efficiency of information and enriching application data.
Liu: Edge computing uses local computing to enable analytics at the source of the data. Powered by AI technology, it strengthens the sensing capability of front-end cameras and helps us understand the captured scene more effectively and accurately. With AI algorithms woven into the edge devices, selected information such as an individual or a vehicle in a video image will be extracted and sent, which significantly enhances the transmission efficiency and reduces the network bandwidth need, while still sustaining high quality and accuracy. Edge computing will also accelerate more efficient business responses, creating immediate actions and events alerts.
Venkatesh: In Africa specifically, data costs are still high. Moreover, for some use cases, such as mining, farms or in-vehicle cameras, high-speed networking is either unavailable or very expensive. In those cases it makes sense for the edge devices to process most of the data and transfer only the relevant information back to the cloud. Power is a problem, but most of these edge devices (including the ones we sell) can cap their power draw at 5 W, so any solar- or battery-powered system will be sufficient when power issues arise.
Hi-Tech Security Solutions: Video analytics, with or without AI, requires a lot of processing power. What technology do you include in your cameras that makes them ‘edge analytics’ ready, as opposed to just being able to store video on an SD card?
Bruyns: An increase in appetite for real-time analytics and wanting to reduce transmitted information to the cloud amplifies the need for higher and higher performance at the edge. To this end, we have developed our ARTPEC chipsets for many years and launched the ARTPEC 8 chipset in 2021, featuring an improved System on Chip, which allows for AI-based deep learning applications to run more efficiently on the edge.
Almost every new camera we introduce this year will come with AI capabilities. We provide the ideal base for edge processing with powerful image processing and efficient encoding functionality. In fact, the ability to process at the edge has never been more critical. Access to raw video allows the analytics to see things which could be compromised in video encoding.
Metadata reduces network traffic for server- or cloud-based processing, but also for event generation to support real-time use cases.
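A rough illustration of why metadata, rather than video, is what crosses the network: a detection event serialised as JSON is a few hundred bytes, while even a single compressed frame is orders of magnitude larger. The event fields and the frame-size figure below are hypothetical, chosen only to show the order-of-magnitude gap:

```python
import json

# Hypothetical rough size of one compressed high-resolution frame.
FRAME_BYTES = 500_000

# A hypothetical edge-generated metadata event: what happened, where, when.
event = {
    "camera_id": "cam-07",
    "timestamp": "2022-06-01T10:15:32Z",
    "object": {"class": "vehicle", "type": "truck", "confidence": 0.92},
    "bbox": [412, 180, 660, 420],   # pixel coordinates in the frame
}
event_bytes = len(json.dumps(event).encode("utf-8"))

# The server or cloud can index and search these events without ever
# receiving a frame; full video is pulled only on demand.
print(f"event payload: {event_bytes} bytes vs ~{FRAME_BYTES} bytes per frame")
```

This is also what makes after-the-fact forensics efficient: operators search the compact event stream first and fetch matching video later.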
Tempelhoff: TVT cameras use technology based on a deep-learning network architecture. It can efficiently calculate the target features we need and integrate secondary apps with endless possibilities.
Liu: As mentioned above, more powerful edge computing performance and optimised algorithms are two key elements to make edge analytics possible. High-speed graphics processing units (GPUs) perform accelerated computing while deep learning algorithms improve accuracy compared to traditional cameras that still rely on conventional CPUs.
Venkatesh: inq. does not sell cameras, but we work with any camera systems to process video using our edge devices. We generally work in two ways:
1. Transparent mode, where we use IP protocols such as RTSP/HLS to process the streams at the edge or in the cloud using our edge devices.
2. Inline mode, where events are generated by the camera’s intelligence and our system acts as a secondary filter to further process them and reduce false positives.
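The inline mode described above — camera-generated events passed through a secondary filter to cut false positives — could, as one simple strategy, require the same detection to repeat within a short window before it is confirmed. The function below is a hypothetical sketch of that idea, not inq.’s actual rules engine; the event shape, `min_hits` and `window` parameters are all assumptions:

```python
from collections import defaultdict

def secondary_filter(events, min_hits=3, window=10):
    """Pass a camera-generated event upstream only once the same camera
    has reported the same label min_hits times within `window` seconds.
    One-off detections (a likely source of false positives) are dropped."""
    history = defaultdict(list)   # (camera_id, label) -> recent timestamps
    confirmed = []
    for cam_id, label, ts in events:
        key = (cam_id, label)
        # keep only hits inside the sliding window, then add this one
        history[key] = [t for t in history[key] if ts - t <= window] + [ts]
        if len(history[key]) >= min_hits:
            confirmed.append((cam_id, label, ts))
    return confirmed
```

A lone detection, or two detections far apart in time, never reaches the operator; three detections in quick succession do.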
Hi-Tech Security Solutions: How do you ensure the security of the data on the edge and what do you do to prevent unauthorised access or people trying to gain control of the cameras?
Bruyns: Cybersecurity best practices are always important; understanding the risk and how to mitigate some of them is critical. We have a helpful hardening guide available on our website to help system integrators and end-users understand the risks associated with cyberattacks. With our new ARTPEC 8 chipset, we have embedded cybersecurity features into the camera: Edge Vault, Secure Boot, signed firmware and Trusted Platform Module (TPM).
The Edge Vault security component enables the automatic and secure identification of new devices during installation. Secure Boot acts as a gatekeeper for your surveillance system, ensuring that unauthenticated, tampered code is blocked and rejected during the boot process before it can attack or infect your system.
Devices with signed firmware can validate the firmware before allowing installation. TPM is a standalone hardware component that ensures cryptographic keys and certificates are safe and secure, even in a security breach.
Tempelhoff: Under TVT’s security system standards, cameras are endowed with six security characteristics that redefine them as trusted products and achieve security compliance: the whole process is controllable, the system is available at all times, data is not lost, sensitive data is not leaked and business is never interrupted.
Liu: Cyber resilience requires serious cybersecurity investments in a solid foundation by equipment vendors, and one of the most effective investments is the implementation of Secure-by-Design. Hikvision takes cybersecurity and privacy very seriously and has implemented Secure-by-Design in its production process.
The Hikvision Security Development Life Cycle (HSDLC) is an essential part of Hikvision’s cybersecurity programme. Cybersecurity checks take place at every stage of product development, from concept to delivery. The HSDLC management guides us to make every effort to produce products that are as cyber secure as possible.
We also always strive to ensure that our products adhere to the most rigorous cybersecurity certifications, and that they meet the requirements of industry-leading standards.
Venkatesh: Each of our edge nodes is monitored using our Elastic Monitoring System with Security, and all the data is sent back to our SOC for event monitoring and handling; any unauthorised access will lock the device so the system is protected.
Hi-Tech Security Solutions: Is it possible for users to update the onboard analytic apps, either to newer versions or even to replace the apps with different analytics? Do you think this is a necessary feature in some situations?
Bruyns: For applications on the edge, our strategy has always been to create a stable platform and then load applications independently of the firmware onto the device. We have recently launched our fourth generation of Axis Camera Application Platform (ACAP 4).
The applications can be updated or rolled back to previous versions if not compatible with different video management system (VMS) platforms. Multiple applications can be added to the same device; one or more could be used, depending on the processing requirement. Applications can be removed if no longer needed, but licensing depends on the developer of the application.
Tempelhoff: With the arrival of the AI era, we are getting closer to this type of scenario, which is an important trend for the future. TVT’s latest generation of software-defined cameras is based upon this. Users can load different AI algorithms according to their scenarios, making the camera’s applications more powerful and satisfying each environment’s demands for intelligent applications.
Liu: Currently, the updating of onboard analytics by users is not possible, but AI cameras are becoming increasingly flexible by being able to incorporate multiple AI algorithms.
As technology advances, AI chipset performance has improved to enable massive computing power using various algorithms, contributing to multi-intelligence functionality. We believe that multi-intelligence will be the trend for the next generation of AI-empowered cameras, as several intelligent tasks will be accomplished by one camera.
Let us use vehicle intersections as an example. In many cities you can see ten or more cameras installed at an intersection to detect traffic flow, identify violations, detect vehicle types and licence plate numbers, protect sidewalks and so on. With multi-intelligence cameras, two or three cameras will be enough for an intersection. Since each camera serves more than one application scenario, fewer cameras are needed, reducing the cost of equipment, installation, maintenance and management.
Moreover, scenario-defined cameras will become common as manufacturers insert different algorithms into security cameras according to specific application scenarios, allowing customers to choose customised functionality for their needs.
Venkatesh: Yes, all of that is completely managed on our inq. Control Platform which can add, delete or even change the rules any time. This is very necessary because each client requirement is different, and it can keep changing; most environments are agile, and our system caters to any process or function the client requires.
Hi-Tech Security Solutions: What are the latest edge analytics solutions you provide? Do your cameras run your own analytics software or is it open to third-party developers as well?
Bruyns: Our AI strategy is multi-layered, and we have developed some fundamental applications in-house. Our analytics solutions make it easier to get the insights you need to protect your people and property, which empowers you to make better decisions about your business and operations.
APD – The AXIS Perimeter Defender reinforces physical access controls to give you an edge where security starts, namely at the perimeter of your site. Together with Axis cameras, it provides an effective edge-based system that automatically detects and responds to people and vehicles intruding on your property. When combined with thermal and PTZ cameras, it is suitable even for high-security locations such as chemical and power plants, and prisons.
Axis Verifier – Ideal for free flow, slow-speed traffic and vehicle access control scenarios, the AXIS Licence Plate Verifier makes it easy to detect and read licence plates, monitor vehicles, create a vehicle access solution, identify stolen vehicles and much more.
Retail Analytics – Our retail analytics cover three tiers: loss prevention, store optimisation and traditional safety and security. Loss prevention protects your profits with innovative, integrated solutions. When you combine Axis surveillance hardware and analytics, you can tackle theft and fraud everywhere on your premises. From a store optimisation perspective, in-store cameras can be powerful tools for gathering and processing numeric data. It is a way to understand the behaviours and needs of your customers and can be the key to unlocking the full potential of your business. On the other hand, no retail operation can ever be successful unless visitors feel safe. Axis IP video and audio solutions provide a blanket of security that makes your stores welcoming to potential shoppers.
AOA – Axis Object Analytics delivers real-time intelligence you can act on so you can focus your attention on what happens, when it happens. Axis Object Analytics is available in two versions: machine learning and deep learning. The detection and classification capabilities of AOA are camera dependent. Cameras with a machine learning processing unit (MLPU) can classify humans and vehicles. Cameras with a deep learning processing unit (DLPU) offer more granular object classification, meaning they can classify humans and vehicles as well as different types of vehicles, such as cars, trucks, buses, and motorcycles or bicycles. Ideal for busier scenes and more demanding surveillance requirements, it also offers better detection and classification capabilities for people in unusual positions (e.g., hunched) and for only partially visible objects.
In addition, we have a mature technology partner programme and expect to see more exciting applications that use deep learning and enhanced processing on the edge. Axis supports third-party development through the Axis Developer Community, which currently has over 10 000 members and allows developers to use modern programming and collaboration tools like Python, GitHub and Docker. The latest version of ACAP 4 has also introduced a new SDK, the Computer Vision SDK, which allows developers to make use of the deep learning capabilities of ARTPEC 8.
Marti: There are many common use cases, such as people counting, object recognition and traffic analysis, for which we have nearly 20 different options in our app store, and these will continue to grow as different developers create new and better apps in these categories. Some of the latest apps to join our app store cover weapons detection, patient fall detection in a healthcare setting, drone detection and more. What particular use case the apps solve, and how they solve it, is limited only by the imagination and expertise of the app developer in meeting the specific need of the customer.
In our view, we are just at the beginning and many use cases have not yet been identified. One year ago, nobody would have thought of seabird detection, but now we have a partner offering just that, integrated with other components for a large salmon farmer, solving a big pain point: seabirds attacking and damaging fish pens and, in effect, impacting fish quality.
Tempelhoff: In addition to recognising people or objects, localisation and tracking, tracking direction and motion, one can draw the motion trajectories and heat maps of people or objects. For the upcoming software-defined cameras, it is expected that more developers will develop software in the cloud and then deploy it to smart cameras at the edge. Smart cameras can continuously deploy new applications and services as demand changes, extending the life cycle of the camera.
At present, TVT has an experienced algorithm team, focusing on researching surveillance video. They have implemented many scene analysis functions around the world, and also achieved good results in well-known algorithm competitions.
Liu: The Hikvision DeepinView camera ranges offer various edge analytics options to our customers. And the latest Hikvision Dedicated DeepinView cameras incorporate several AI-powered deep learning algorithms in one unit. Accordingly, users can simply enable an algorithm manually for dedicated use, then later switch the algorithm as needed.
Hikvision also provides an HEOP programme for third-party technology partners to develop their own applications and install them directly onto Hikvision cameras, which brings a greater variety of intelligent functionality directly to customers.
Venkatesh: We have 15 different features we can enable, from object tracking, ANPR, safety helmet, suspicious behaviour, face recognition (which has been trained specifically for Africa), loitering, counting sacks and more. We can work with third-party developers as well. We currently have built all our AI in-house, but there could be a potential for other AI vendors to collaborate with us as a marketplace.
For more information contact:
• Azena, www.azena.com
• Technews Publishing, tel +27 11 543 5800
• Axis Communications SA, tel +27 11 548 6780, fax +27 11 548 6799
• Forbatt SA, tel +27 11 469 3598, fax +27 11 469 3932
• Hikvision South Africa, tel +27 87 701 8113
© Technews Publishing (Pty) Ltd | All Rights Reserved