Toward Spatial Queries for Spatial Surveillance Tasks

Surveillance systems have largely focused on the movement, storage, and review of video and audio streams. The recent shift from human monitoring toward automated interpretation presages a fundamental change in our relationship with surveillance systems. Despite this shift, the state of the art remains trapped by the notion of a sensor stream: the systems sold today still largely constrain their analysis tools to operate on a single input stream. Some research systems have tried to present video streams in context, superimposed on a floor plan, and some allow searches for salient people or objects across video streams. We present here a technique for generating queries that are embedded in context: the operator specifies queries that take advantage of spatial context by using spatial gestures to assemble query terms on a map of the site. We demonstrate an early prototype system operating on data from a research facility observed by a heterogeneous network of sensors.
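The core idea above, queries whose terms are regions drawn on a site map rather than per-stream filters, can be sketched as follows. This is a minimal illustration under assumed names (`Region`, `Event`, `spatial_query` are all hypothetical), not the paper's implementation: each query term is a region of the floor plan, and a sensor detection matches when its map-projected location falls inside every region.

```python
# Minimal sketch of a map-embedded spatial query (all names hypothetical).
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned rectangle on the site map, in floor-plan coordinates."""
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

@dataclass
class Event:
    """A detection from any sensor, projected into map coordinates."""
    sensor_id: str
    x: float
    y: float
    t: float  # timestamp, seconds

def spatial_query(events, regions):
    """Return events whose location lies inside all query regions."""
    return [e for e in events if all(r.contains(e.x, e.y) for r in regions)]

# Example: detections from heterogeneous sensors, queried against one region
# (as if the operator had gestured a rectangle over the lobby on the map).
events = [
    Event("cam-1", 2.0, 3.0, 10.0),
    Event("mic-4", 9.0, 9.0, 11.0),
]
lobby = Region(0.0, 0.0, 5.0, 5.0)
matches = spatial_query(events, [lobby])
```

Because matching is done in map coordinates, the same query applies uniformly across cameras, microphones, or any other sensor whose detections can be projected onto the floor plan.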