Aggregation, automation and augmentation
April 2018, This Week's Editor's Pick, CCTV, Surveillance & Remote Monitoring
In December 2017, Memoori Market Research interviewed Milestone CTO Bjorn Eilertsen, who spoke about future trends in the video surveillance industry. The following article includes excerpted highlights. What will be the impact of mobile recording devices, hardware acceleration, and artificial intelligence? What is the role of the open-platform community for innovation?
What impact do body-worn cameras and mobile devices have on the security and surveillance industry?
There are a number of trends and forces happening at the same time with video capture. There is a large base of fixed and mounted cameras, which will continue to increase in numbers and features. In addition, a lot of the new ways of doing business are much less fixed; there is a lot of mobility involved. Most obvious are police forces with body-worn cameras, but these will also come to healthcare professionals.
And video will certainly come to any type of transportation; there are a lot of things moving around us. With strong wireless networks emerging, new capabilities will allow us to talk about smart buildings and systems where the capturing entities are not as fixed as they used to be. I don’t believe this will reduce the number of fixed cameras; we’ll simply add a lot more data-capturing entities.
Of course, everyone is capturing video with their smartphones, and that video is capable of complementing other surveillance streams. An example would be to use Milestone Mobile’s Video Push feature, where any regular mobile phone with a proper camera can act as a surveillance camera. For pop-up events, festivals, any type of sports event and so forth, we see a lot of that happening on the capture side. The mobile phone in itself becomes a security camera.
Do you see a situation where non-professional and professional video merge?
There are certainly some big surveillance institutions, like city surveillance authorities, that are starting to use the public as an eye. The eye of the public can report simple things like potholes or vandalism – is that surveillance or not? It would be a natural fit to have an app that the public could download and use to report incidents and accidents, but also what they see happening, and that of course borders on surveillance.
Already, if you look at what police are investigating, a lot of the video material they review is not necessarily that of fixed video cameras. It’s video streams from public citizens’ phones and social media. But that’s only part of it. With the introduction of IoT devices, where the Internet is used to collect sensor information from any type of smart device, we will be able to use all that data to augment the video.
This will eventually aggregate a lot more information than what we have today. That aggregation will allow us to automate insights about how a building is operating and what’s happening to drive us to an actionable outcome. System users will benefit from a much higher level of insight and interaction with the system.
In terms of computing capabilities, how do you see the hardware architecture evolving to support the huge needs of video and data processing?
Systems are being connected in such a way that there are massive amounts of video and sensor streams coming in, but we’ve not been able to use the data in any clever way because the computing power needed to decode and analyse those streams has simply not been available. What has been holding us back from moving from aggregation to automation and onwards to augmentation has been the limitation of the CPU element.
We’re getting help from hardware manufacturers, like NVIDIA and Intel, with advances in the graphics processing unit (GPU). A lot of the video computing power is shifting from the CPU to the graphics card and by doing that, we’ve actually moved the needle significantly on how much video can actually be used for automation. The end goal is to deploy deep learning and make use of intelligence, but we haven’t had the computing power necessary.
Milestone’s work with hardware acceleration is really turning the tables on this one. We recently set up a demonstration with a standard, off-the-shelf piece of hardware, with no more than $4,000 of hardware added to that. So, for about $14,000, we had a system managing 1,500 video streams, in full HD, with proper decoding making it available for algorithms to start identifying and working with patterns.
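As a back-of-envelope check on the figures quoted above (the $4,000 add-on, the roughly $14,000 total, and the 1,500 full-HD streams are from the interview; the $10,000 base-system price is only implied by subtraction), the per-stream cost of the demonstration works out like this:

```python
# Back-of-envelope cost figures from the demonstration described above.
base_system_usd = 10_000   # off-the-shelf server (implied: $14,000 total minus $4,000 added)
gpu_upgrade_usd = 4_000    # acceleration hardware added (quoted)
total_usd = base_system_usd + gpu_upgrade_usd

streams = 1_500            # full-HD streams decoded (quoted)
cost_per_stream = total_usd / streams

print(f"Total: ${total_usd:,}  ->  ${cost_per_stream:.2f} per decoded stream")
```

Roughly nine dollars of hardware per decoded full-HD stream, which is the order-of-magnitude shift the interview is pointing at.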
Hardware acceleration will allow us to have all of that aggregated video and sensor information being used to analyse and assist and augment the way we manage buildings, the way we manage incidents and accidents. It will change the industry.
Machine learning, pattern recognition and AI are evolving. How soon will we see full automation where computers can make decisions based on video data?
We need to develop greater computing power with all our community partners and allow them to rewrite their analytics for deep learning or machine learning. This requires the network to be trained – it needs to learn what a dog is in order to identify a dog – but it’s happening very quickly. Many new networks are available today as pre-trained networks, and they’re being improved as we speak. For machine learning that is based on a trained network, I would argue that we’re less than two years away from having that as a normal way of driving analytics in the industry.
From a Milestone point of view, we’re also about two years from achieving the computing power necessary for partners to put all of their AI or machine learning algorithms on the Milestone system. Parallel video and data processing will allow them to run their algorithms and join that with an augmentation on the client’s side or mobile side, or wherever they want to put that.
And even further out, we’re working with some of our partners on robots that may one day behave as a guard. Instead of having a guard watching the perimeter of a company, you would have a robot. But a robot will need to know where to go and what to do, and how to identify situations and then propose actions, and that part will require a lot more investigation. A lot more investment in new networks is needed.
In an industry criticised for being fragmented, you’ve said we need to think more like a community and innovate together. Open systems have helped, but do you envision even more collaboration between partners?
Only an open, community-based, collaborative-thinking industry will allow us all to innovate fast enough to meet the challenges ahead. No single vendor can do this alone. The complexity we face in securing the buildings, campuses, and cities of the future requires a great deal of innovation to address. That innovation can only come from a community that comes together, with each member tackling their respective part of the challenge.
For example, Milestone is working to develop a computing platform on which all partners can build. We see it as a video processing service where partners will have access to all the decoded video from thousands of cameras, available for running an algorithm or network, and the customers would consume a platform of Milestone technology.
Manufacturers, developers and vendors – the entire community – need to collaborate openly for fast and effective industry innovation. No matter who you are, no matter how big you are, there is always something you can learn from working with someone else.
Watch the entire Memoori video interview of Milestone Systems CTO Bjorn Eilertsen: https://www.youtube.com/watch?v=rUqmJcFUqNo&feature=youtu.be
For more information contact Milestone Systems, +27 (0)82 377 0415, firstname.lastname@example.org, www.milestonesys.com