The deep learning model and neural network engine built into VORTEX cameras are at the core of identifying the critical attributes of objects of interest. For people searches, filters such as gender, age group, clothing color, and accessories come into play. For vehicles, VORTEX can use type and color as selectors.
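Conceptually, attribute-based search is a filter over the metadata the cameras produce. The sketch below is a minimal illustration, assuming hypothetical field names ("type", "clothing_color", etc.) that are not VORTEX's actual schema:

```python
# Hypothetical attribute filter over stored metadata records; all field
# names here are assumptions for illustration, not VORTEX's real schema.
def matches(record: dict, criteria: dict) -> bool:
    # A record matches when every requested attribute equals the criterion.
    return all(record.get(key) == value for key, value in criteria.items())

records = [
    {"type": "person", "gender": "female", "age_group": "adult", "clothing_color": "blue"},
    {"type": "vehicle", "vehicle_type": "sedan", "color": "red"},
]

# Find people wearing blue.
people_in_blue = [r for r in records if matches(r, {"type": "person", "clothing_color": "blue"})]
```

Because the matching runs on pre-extracted attributes rather than raw video, the same pattern scales to large archives.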
Thanks to the hybrid cloud configuration, the heavy computational and analytical lifting is done by the cameras at the edge. While VORTEX cameras record and monitor video, analytics and metadata (e.g., tracking paths and attributes) are simultaneously uploaded to the cloud. As a result, when users run a post-event search, the system can access the metadata already stored in the cloud, minimizing search time.
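A per-object metadata record of this kind might look like the sketch below. The structure and field names (object ID, camera ID, attributes, tracking path) are assumptions for illustration, not VORTEX's actual upload format:

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical sketch of a per-object metadata record that an edge camera
# could upload alongside its video stream; all field names are assumptions.
@dataclass
class ObjectMetadata:
    object_id: str
    camera_id: str
    object_type: str        # e.g. "person" or "vehicle"
    attributes: dict        # e.g. {"gender": "male", "clothing_color": "red"}
    tracking_path: list = field(default_factory=list)  # (timestamp, x, y) points

    def to_json(self) -> str:
        # Serialize the record for upload to the cloud metadata store.
        return json.dumps(asdict(self))

record = ObjectMetadata(
    object_id="obj-001",
    camera_id="cam-lobby-01",
    object_type="person",
    attributes={"gender": "male", "clothing_color": "red", "accessory": "backpack"},
    tracking_path=[(1700000000.0, 0.21, 0.55), (1700000001.0, 0.24, 0.57)],
)
payload = record.to_json()
```

Because records like this are small compared with video, they can be streamed to the cloud continuously, which is what lets a later search skip the footage entirely.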
When users enter search criteria, a pair of thumbnails appears side by side. One shows the sharpest image of the target matching the criteria; the other shows the object's tracking path through the field of view. By presenting the target's details alongside its trajectory, users gain a quicker, more holistic view before further review.
Enter Re-Search, VORTEX's advanced search function based on the similarity of people of interest. After a target is found on one camera, Re-Search matches its feature vectors (e.g., shape and clothing color) across all installed cameras, so users can rapidly trace the suspect's movements around the property.
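Feature-vector matching of this kind is commonly done by ranking candidates by similarity. The sketch below uses cosine similarity and toy vectors as one plausible approach; the function names, threshold, and data are illustrative assumptions, not VORTEX's implementation:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors: 1.0 means identical
    # direction, values near 0 mean unrelated appearance.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def re_search(query_vector, gallery, threshold=0.8):
    # Hypothetical cross-camera match: return (camera_id, similarity) pairs
    # whose feature vectors are close enough to the query, best first.
    scored = [(cam_id, cosine_similarity(query_vector, vec)) for cam_id, vec in gallery]
    hits = [(cam_id, s) for cam_id, s in scored if s >= threshold]
    return sorted(hits, key=lambda h: h[1], reverse=True)

# Toy example: a person is spotted on one camera; search the others.
query = [0.9, 0.1, 0.4]
gallery = [
    ("cam-02", [0.88, 0.12, 0.41]),  # very similar appearance
    ("cam-03", [0.1, 0.9, 0.2]),     # different person
]
hits = re_search(query, gallery)  # only cam-02 clears the threshold
```

In practice the vectors would come from the cameras' neural network engine, but the ranking step works the same way regardless of how the features are produced.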