Much of the marketing for CCTV AI detection implies that clients can simply drop the AI into their existing systems and operations, and they will detect all criminals and be far more efficient doing it. The truth is far from this, with AI requiring the kind of attention you give a toddler in a house full of potential hazards.
It may grow up strong and competent, but there is every chance of injuries and potential hospital visits along the way. Ensuring fit for purpose, assessing capabilities, and integrating the AI into an existing surveillance system without major disruption all require considerable thought and effort. Preparing the right environment, and planning how you want it to grow and contribute, do not fall into place automatically.
The way AI or analytics will serve the purpose for which you have your CCTV systems and control room is an important starting point. I have seen a company market a feature with a lot of ‘wow’ factor that had no relevance to the main purpose the client wanted to achieve, yet it caused the client to commit to a purchase because it looked so cool – never mind the system’s capability to deliver on the real purpose. So, your expectations of how the AI is going to help achieve your objectives are really important to set out up front, and you need to hold your supplier to this. Further, you want to make sure that your control room does not become so focused on responding to AI-generated alerts, many of which may be false alarms, that it stops focusing on its main purpose.
AI in CCTV is not a solution created specifically for you. It is usually a generic system providing capabilities that often need to be customised, set up, taught, optimised, and integrated into a smoothly run and focused control room environment. Its introduction frequently requires comprehensive effort and reallocation of resources to get it working properly. While salespeople push the fact that it is intelligent and can learn, they often say little about how much effort this requires, or that the intelligence is not going to be nearly what you expect it to be.
AI must work for your environment
Simply put, if you buy a system, you have to provide an environment in which it can work, and you will have to teach it what to do. This process can involve huge amounts of time training a system that sometimes does not recognise the simplest things you would expect it to, and in some cases does not seem to retain the learning you give it to recognise and differentiate. There is a lot of talk about relying solely on AI-powered black/blank screen technology that only activates when something happens, but doing so means you are completely reliant on something you cannot be certain of, which is a problem, because you do not know what you are missing.
Furthermore, you lose the context in which things are happening, the situational awareness of what led up to the alarm, and why the alarm may be triggering. AI alerts work best where there is little ambiguity and simple detection is all that is required, which is not always the case in real life. You can often dial the sensitivity of AI detection up or down, something you need to consider carefully when weighing the effectiveness of detection against the frequency of false alarms.
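The sensitivity trade-off described above can be sketched in a few lines. This is an illustrative example only – the detection records, labels, and confidence scores are invented for the sketch and do not reflect any specific product’s output:

```python
# Hypothetical illustration: filtering AI detections by a confidence
# threshold. Raising the threshold suppresses false alarms, but it also
# risks discarding genuine events near the cut-off.

def filter_detections(detections, threshold):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

# Invented scores from an imaginary perimeter camera overnight.
overnight = [
    {"label": "person", "confidence": 0.91},  # genuine intruder
    {"label": "person", "confidence": 0.55},  # distant, partly occluded
    {"label": "animal", "confidence": 0.48},  # stray dog near the fence
]

sensitive = filter_detections(overnight, 0.4)  # 3 alerts, one a nuisance
strict = filter_detections(overnight, 0.8)     # 1 alert, misses the occluded person
```

Neither setting is “correct”: the strict threshold quietens the control room but silently drops the partly occluded person, which is exactly the “you do not know what you are missing” problem.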
If you adopt an AI CCTV system, my suggestion is to ensure that it has the right learning capacity, and that you dedicate resources specifically to bringing it up to a suitable level of intelligence. Even better, get the company that sold it to you to take accountability for this development, define the required outcome, and agree a refund if it is not achieved. Handling and classifying each false alarm takes up valuable time and distracts operators from everything else that is happening on site.
Alternatively, invest in an AI package that focuses specifically on eliminating false alarms. False alarms hinder both AI implementation and control room efficiency. If a system repeatedly flags harmless perimeter events, such as animals, as threats despite being corrected, it indicates a significant learning deficiency.
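One practical way to spot the learning deficiency described above is simply to tally operator-classified false alarms per cause and watch for repeat offenders. This is a hedged sketch – the cause labels and the threshold of three are assumptions for illustration, not part of any product:

```python
# Hedged sketch: tally operator-classified false alarms by cause.
# A cause that keeps recurring after the system has been "corrected"
# is evidence of the learning deficiency discussed above.

from collections import Counter

false_alarm_log = Counter()

def record_false_alarm(cause):
    """Operator classifies a false alarm; we count it per cause."""
    false_alarm_log[cause] += 1

# A week of invented operator classifications.
for cause in ["animal", "vegetation", "animal", "animal", "shadow"]:
    record_false_alarm(cause)

# Causes hitting an (assumed) threshold of 3 get escalated to the supplier.
repeat_offenders = [c for c, n in false_alarm_log.items() if n >= 3]
```

Even a simple tally like this turns “the system keeps flagging animals” from an operator complaint into evidence you can hold the supplier accountable with.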
AI requires certain conditions to do its job effectively, including the kind of physical setting in which it has to analyse events. You need to ensure that it can indeed handle the different aspects of your site environment, and you will need to create a physical environment that suits AI. False alarms are commonly caused by a range of factors, including:
· Vegetation
· Weather conditions – wind, lighting/shadows, rain, mist
· Wildlife
· Insects
· Birds
· Time of day – lighting, glare and shadow effects
· Lights that switch on and off, or pass by
· Background complexity
· Amount of active movement from foreground to background
Activities such as clearing the ground around perimeters, cutting back tree branches and foliage, acting quickly on things like spider webs, and minimising potential movement around perimeters are all common preparation and maintenance activities. They have to occur to create settings in which the AI can perform effectively. The system may require activation at different times of the day, or changes in sensitivity to deal with specific conditions that arise at those times. While you can potentially train it to recognise causes of false alarms, getting the physical environment right at the beginning and keeping it right makes a huge difference to performance.
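The idea of changing sensitivity at different times of day can be expressed as a simple schedule. The hours and sensitivity levels below are invented assumptions for illustration – real values would come from observing your own site’s conditions:

```python
# Illustrative sketch (not a real vendor API): vary detection sensitivity
# by time of day to cope with conditions like dawn glare, long shadows,
# or nocturnal wildlife near the perimeter.

SENSITIVITY_SCHEDULE = [
    # (start_hour, end_hour, sensitivity) -- all values assumed
    (0, 6, 0.6),    # night: wildlife and insects common, be stricter
    (6, 9, 0.5),    # dawn: low sun glare and long shadows
    (9, 17, 0.8),   # daytime: good light, can afford high sensitivity
    (17, 24, 0.6),  # dusk and evening
]

def sensitivity_for_hour(hour):
    """Return the scheduled sensitivity for a given hour (0-23)."""
    for start, end, level in SENSITIVITY_SCHEDULE:
        if start <= hour < end:
            return level
    raise ValueError("hour out of range")
```

The point is less the code than the discipline: conditions at 03:00 and 12:00 are different problems, and a single fixed sensitivity serves neither well.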
AI is not magic, it needs support
The equipment the AI uses, and the way it is set up, is also a fundamental part of how well it will work. Ideally, cameras and other alarm components should be set up with analytics in mind from the beginning. The quality of the camera and its view provides a simple baseline for how well detection can work. Once this is assured, equipment needs to be used well within specifications – the harder you push your camera capabilities, the less effective the AI is likely to be.
The analytics parameters set up for the AI also need to be based on an informed view of what the threats are, where they are likely to come from, and what countermeasures may be employed, as well as on minimising potential false alarms. Using a number of complementary AI features together to broaden the scope of detection, and providing confirmation through various detection technologies, are both considerations that should inform what you put in and how you configure it. This applies to both AI and conventional security measures, where you should be using a multilayered approach. We also see the increasing use of AI to confirm AI, with specialised products that have a much more effective detection and filtering capacity than standard products.
Your strategic security approach should determine what AI can do for you, and the performance measures that would show it is successful. It will require some dedicated efforts to implement, including providing a physical environment in which it can be successful, and a technical capacity to allow it to work effectively. This is an ongoing process, not something you set up and forget about.
Build in feedback loops from the people who work with the AI in service delivery, so that enhancement and adjustment are ongoing. Where you can grow competent AI capabilities, they could be extremely useful in handling issues that would otherwise just waste the human operator’s time, beyond things like perimeter protection and alarms. This includes relatively simple but important things. During a recent visit, a client discussed how something as simple as a ‘hard hat’ detection capability could be automated and linked to an audio speaker output, informing anyone not wearing a hard hat in the area that they were in violation of safety requirements and urging them to put it on for their own welfare.
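The client’s hard-hat example could be automated along these lines. This is a hypothetical sketch: the event fields and the speaker interface are invented for illustration and do not correspond to any specific detection product or PA system:

```python
# Hypothetical sketch of the hard-hat automation described above:
# a detection event with no hard hat triggers an audio announcement,
# with no operator involvement needed.

class LogSpeaker:
    """Stand-in for a real PA/speaker integration; records announcements."""
    def __init__(self):
        self.messages = []

    def announce(self, text):
        self.messages.append(text)

def handle_detection(event, speaker):
    """Announce a safety reminder if a person is detected without a hard hat."""
    if event.get("type") == "person" and not event.get("hard_hat", True):
        speaker.announce(
            "Safety notice: hard hats are required in this area. "
            "Please put on your hard hat."
        )
        return True
    return False

speaker = LogSpeaker()
handle_detection({"type": "person", "hard_hat": False}, speaker)  # announces
handle_detection({"type": "person", "hard_hat": True}, speaker)   # silent
```

Automations like this free the operator for the judgement calls that genuinely need a human, which is the point of the feedback-loop approach above.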
Be aware of what AI can and cannot do, as well as the likely path AI will take in the future. Bear in mind that human operators are still essential for detection, for filtering AI output, and for confirming serious conditions. Moreover, in many circumstances, they are still a lot more intelligent. Also bear in mind that, in order to realise their own intelligence and manage AI, humans need training too. The ability to recognise tactical and subtle criminal behaviour in the context of situational awareness is still very much the preserve of skilled human operators.
About Craig Donald
Dr Craig Donald is a human factors specialist in security and CCTV. He is a director of Leaderware, which provides instruments for the selection of CCTV operators, X-ray screeners and other security personnel in major operations around the world. He also runs CCTV Surveillance Skills and Body Language, and Advanced Surveillance Body Language courses for CCTV operators, supervisors and managers internationally, and consults on CCTV management. He can be contacted at:
Tel: +27 11 787 7811 | Email: [email protected] | www.leaderware.com
© Technews Publishing (Pty) Ltd. | All Rights Reserved.