Lukas Esterle, Peter R. Lewis, Xin Yao, Richie McBride. SelPhys, CPS Week 2017, Pittsburgh, USA, April 2017.
Smart cameras are embedded devices combining a visual sensor, a processing unit, and a communication interface, allowing images to be processed on the device so that only aggregated information, rather than raw video data, is transmitted. Smart camera networks are typically used in large-scale, high-value security applications such as person tracking in airports or amusement parks. However, current smart cameras are expensive and offer only very limited mobility, which acts as a barrier to their wider adoption.
The SOLOMON project is driven by the rising demand for rapid-deployment camera networks which can adapt to provide security in the context of unforeseen situations and unfolding scenarios. This is evidenced by the rapid growth of leading body-cam company Edesix Ltd, whose VideoBadge technology is being adopted by police forces worldwide. However, recent research advances in smart camera networks have not yet been realised in dynamic body-worn camera networks and still rely on prohibitively expensive static hardware.
In the SOLOMON project we will develop a novel type of lightweight, inexpensive smart camera network suitable for rapid deployment and reconfiguration, where low-cost camera devices, such as Edesix's VideoBadge, are paired with the processing capabilities of smartphones. These are then worn by people (e.g. police, security guards) or mounted on mobile robots. This not only lowers cost, but allows us to introduce a feedback loop between the sensing cameras and the acting people/robots, enabling the camera network to adapt to changes at runtime, for example to prioritise or cover newly relevant areas in response to an unfolding situation. Novel techniques in collective decision making and self-organisation, as well as multi-objective online learning, will be developed in order to achieve this vision.
Since smart cameras will be deployed in unknown and rapidly changing environments, they must learn at runtime about i) the changing locations and orientations of the cameras, ii) the movement of monitored objects, and iii) the effects of various actions performed by themselves as well as other nodes in the network. Their learning and action selection will impact the performance of the system in several dimensions, giving rise to complex trade-offs, e.g. between tracking performance, scalability, and resource usage. Therefore, such learning must inherently incorporate the management of multiple objectives. Furthermore, the aggregate learnt behaviour of the entire network will determine the system-wide performance, despite individual cameras processing only local information. However, existing state-of-the-art techniques for dynamic management of smart camera networks do not take into account very large-scale deployments or mobility in the context of real-time feedback, nor are they designed for rapid self-organisation and learning. The outcome of the project will therefore be novel computational techniques to enable the creation of large adaptive smart camera networks from existing embedded body-worn cameras and everyday consumer equipment, such as smartphones.
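Multi-objective online learning at the level of a single camera can be illustrated with a minimal scalarised bandit: the camera repeatedly chooses an action (here, a hypothetical handover target), observes a vector of feedback signals (tracking utility and resource cost), and folds them into one reward via fixed weights. The action names, weights, and feedback values below are illustrative assumptions, not the project's actual method.

```python
import random

class MultiObjectiveBandit:
    """Epsilon-greedy bandit that scalarises a vector of objectives
    (e.g. tracking utility vs. resource cost) into a single reward.
    All names, weights and feedback values here are illustrative."""

    def __init__(self, actions, weights, epsilon=0.1, seed=42):
        self.actions = list(actions)
        self.weights = weights                       # one weight per objective
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {a: 0 for a in self.actions}
        self.values = {a: 0.0 for a in self.actions}

    def scalarise(self, objectives):
        # Weighted sum collapses the objective vector into one reward.
        return sum(w * o for w, o in zip(self.weights, objectives))

    def select(self):
        # Explore with probability epsilon, otherwise exploit.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.values[a])

    def update(self, action, objectives):
        # Incremental mean of the scalarised reward per action.
        self.counts[action] += 1
        reward = self.scalarise(objectives)
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# A camera choosing between two hypothetical handover targets:
# cam_a yields higher tracking utility at a higher resource cost.
bandit = MultiObjectiveBandit(["cam_a", "cam_b"], weights=[1.0, -0.5])
for _ in range(200):
    action = bandit.select()
    tracking, cost = (0.9, 0.2) if action == "cam_a" else (0.5, 0.1)
    bandit.update(action, (tracking, cost))
# cam_a scalarises to 0.9 - 0.5*0.2 = 0.8 and cam_b to 0.45,
# so the bandit learns to prefer cam_a.
```

The weighted-sum scalarisation is the simplest way to trade objectives off against each other; the project's setting would additionally require the weights themselves to adapt as system-wide goals change.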
To achieve this vision, SOLOMON will focus on decentralised decision making in individual mobile cameras to achieve changing network-wide goals. The fellow will develop and study (i) new self-organisation mechanisms to enable effective runtime repositioning of camera networks to maximise visual coverage in changing environments, (ii) techniques to model and exploit runtime trade-offs, and (iii) novel multi-objective online learning techniques to enable autonomous behaviour in a dynamic environment.
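Objective (i), runtime repositioning for coverage, can be sketched as a decentralised greedy rule: each camera, using only the positions its neighbours announce, moves to the spot that adds the most uncovered area. The 1-D grid, the `local_reposition` function, and the tie-breaking rule below are illustrative assumptions, not the mechanisms the project will develop.

```python
def local_reposition(cameras, radius, cells):
    """One round of decentralised repositioning on a 1-D strip of cells.
    Each camera, in turn, greedily moves to the cell that maximises the
    number of newly covered cells, given only the positions currently
    announced by the other cameras. Purely illustrative."""
    def covered(positions):
        return {c for p in positions
                for c in range(max(0, p - radius), min(cells, p + radius + 1))}
    for i in range(len(cameras)):
        base = covered(cameras[:i] + cameras[i + 1:])  # neighbours' coverage
        # Pick the position adding the most new cells;
        # ties go to the smallest cell index.
        cameras[i] = max(range(cells),
                         key=lambda p: (len(covered([p]) - base), -p))
    return cameras

# Three clustered cameras spread out over a 10-cell strip in one round.
positions = local_reposition([0, 1, 2], radius=1, cells=10)
```

After one round the three cameras cover 9 of the 10 cells. Each decision uses only local information, yet coverage improves network-wide: the same flavour of emergent behaviour the project aims to achieve for mobile cameras with multiple, changing objectives.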
This work was supported by the SOLOMON project (Self-Organisation and Learning Online in Mobile Observation Networks) funded by the European Union H2020 Programme under grant agreement number 705020.