Abstract:
A method and system using face tracking and object tracking are disclosed. The method and system use face tracking, location, and/or recognition to enhance object tracking, and use object tracking and/or location to enhance face tracking.
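As an illustrative sketch only (not the claimed method), the following Python fragment shows one way a recognized face could be attached to the object track that contains it, so the identity persists with the track even when the face is later lost; the Track class and the helper names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Track:
    """A tracked object with an optional associated face identity."""
    track_id: int
    bbox: Tuple[int, int, int, int]   # (x, y, w, h) of the tracked object
    face_id: Optional[str] = None

def face_in_track(face_bbox, track_bbox):
    """Fraction of the face box that lies inside the track box."""
    fx, fy, fw, fh = face_bbox
    tx, ty, tw, th = track_bbox
    ix, iy = max(fx, tx), max(fy, ty)
    ix2, iy2 = min(fx + fw, tx + tw), min(fy + fh, ty + th)
    inter = max(0, ix2 - ix) * max(0, iy2 - iy)
    return inter / (fw * fh) if fw * fh else 0.0

def fuse_faces_into_tracks(tracks, face_detections, min_overlap=0.5):
    """Attach each recognized face to the track that best contains it,
    so the track keeps that identity while the face is occluded."""
    for face_bbox, face_id in face_detections:
        best = max(tracks, key=lambda t: face_in_track(face_bbox, t.bbox), default=None)
        if best and face_in_track(face_bbox, best.bbox) >= min_overlap:
            best.face_id = face_id
    return tracks

# One body track; a recognized face detected inside it.
tracks = [Track(track_id=1, bbox=(100, 50, 80, 200))]
faces = [((110, 60, 40, 40), "person_A")]
print(fuse_faces_into_tracks(tracks, faces))
```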
Abstract:
Methods and systems are provided for monitoring a point of sale (POS) transaction. Operations performed by the methods and systems include generating POS primitives by processing non-video data of a transaction recorded at a POS terminal. The operations also include generating video primitives by processing video data of the transaction recorded at the POS terminal. The operations further include determining that the transaction comprises an exceptional transaction by comparing the non-video data and/or the video data to exceptional transaction rules. Additionally, the operations include determining that the exceptional transaction comprises a verified exceptional transaction by generating a video event based on the video primitives and a corresponding video rule.
Abstract:
Methods and systems are provided for monitoring a point of sale (POS) transaction. Operations performed by the methods and systems include generating POS primitives by processing non-video data of a transaction recorded at the POS terminal. The operations also include generating video primitives by processing video data of the transaction recorded at the POS terminal. The operations further include determining that the transaction comprises an exceptional transaction by comparing the non-video data and/or the video data to exceptional transaction rules. Additionally, the operations include determining that the exceptional transaction comprises a verified exceptional transaction by generating a video event based on the video primitives and a corresponding video rule.
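The flow described in the two abstracts above can be pictured with a minimal Python sketch, assuming simple dictionary records for the POS transaction and the video primitives; the rule values, field names, and function names are illustrative assumptions rather than the patented implementation.

```python
def is_exceptional(pos_record, rules):
    """Compare POS (non-video) data to exceptional-transaction rules,
    e.g. flag refunds at or above a configured amount."""
    return (pos_record["type"] == "refund"
            and pos_record["amount"] >= rules["refund_threshold"])

def verify_with_video(pos_record, video_primitives, rules):
    """Generate a video event from video primitives and a corresponding
    video rule: here, 'no customer present' during the refund window."""
    during = [p for p in video_primitives
              if pos_record["start"] <= p["time"] <= pos_record["end"]]
    customer_present = any(p["class"] == "customer" for p in during)
    if rules["video_rule"] == "no_customer_present" and not customer_present:
        return {"event": "verified_exceptional_refund", "txn": pos_record["id"]}
    return None

rules = {"refund_threshold": 50.0, "video_rule": "no_customer_present"}
txn = {"id": "T1001", "type": "refund", "amount": 75.0, "start": 10.0, "end": 25.0}
primitives = [{"time": 12.0, "class": "cashier"}]   # only a cashier was seen

if is_exceptional(txn, rules):
    print(verify_with_video(txn, primitives, rules))
```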
Abstract:
A video surveillance system is set up, calibrated, tasked, and operated. The system extracts video primitives and extracts event occurrences from the video primitives using event discriminators. The system can undertake a response, such as an alarm, based on extracted event occurrences.
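A minimal sketch of applying an event discriminator to extracted video primitives and undertaking a response, assuming a simple dictionary form for the primitives; the loitering discriminator, its zone, and its threshold are hypothetical examples, not the system's actual rules.

```python
def loiter_discriminator(primitives, min_duration=30.0, zone="entrance"):
    """Event discriminator: a person remains in a zone longer than min_duration."""
    by_object = {}
    for p in primitives:
        if p["class"] == "person" and p["zone"] == zone:
            start, end = by_object.get(p["object_id"], (p["time"], p["time"]))
            by_object[p["object_id"]] = (min(start, p["time"]), max(end, p["time"]))
    return [{"event": "loitering", "object_id": oid}
            for oid, (start, end) in by_object.items()
            if end - start >= min_duration]

def respond(events):
    """Undertake a response, such as an alarm, for each extracted event."""
    for e in events:
        print(f"ALARM: {e['event']} by object {e['object_id']}")

primitives = [
    {"object_id": 7, "class": "person", "zone": "entrance", "time": 0.0},
    {"object_id": 7, "class": "person", "zone": "entrance", "time": 45.0},
]
respond(loiter_discriminator(primitives))
```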
Abstract:
Systems, methods and computer-readable media for creating and using video analysis rules that are based on map data are disclosed. A sensor(s), such as a video camera, can track and monitor a geographic location, such as a road, pipeline, or other location or installation. A video analytics engine can receive video streams from the sensor, and identify a location of the imaged view in a geo-registered map space, such as a latitude-longitude defined map space. A user can operate a graphical user interface to draw, enter, select, and/or otherwise input on a map a set of rules for detection of events in the monitored scene, such as tripwires and areas of interest. When tripwires, areas of interest, and/or other features are approached or crossed, the engine can perform responsive actions, such as generating an alert and sending it to a user.
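One way to picture a map-drawn tripwire rule is the following Python sketch, which assumes tracked positions have already been geo-registered to latitude/longitude; the segment-intersection test is standard geometry, but the tripwire coordinates and helper names are illustrative, not the product's API.

```python
def _ccw(a, b, c):
    """Counter-clockwise test for three (lat, lon) points."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    return (_ccw(p1, q1, q2) != _ccw(p2, q1, q2)
            and _ccw(p1, p2, q1) != _ccw(p1, p2, q2))

def check_tripwire(track_latlon, tripwire):
    """Raise an alert when consecutive geo-registered track points
    cross the tripwire drawn on the map."""
    for a, b in zip(track_latlon, track_latlon[1:]):
        if segments_cross(a, b, tripwire[0], tripwire[1]):
            return {"alert": "tripwire_crossed", "at": b}
    return None

# Tripwire drawn on the map along a road; the tracked object crosses it.
tripwire = ((38.8900, -77.0300), (38.8910, -77.0300))
track = [(38.8905, -77.0310), (38.8905, -77.0290)]
print(check_tripwire(track, tripwire))
```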
Abstract:
A method for predicting when an object will arrive at a boundary includes receiving visual media captured by a camera. An object in the visual media is identified. One or more parameters related to the object are detected based on analysis of the visual media. It is predicted when the object will arrive at a boundary using the one or more parameters. An alert is transmitted to a user indicating when the object is predicted to arrive at the boundary.
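A minimal sketch of the prediction step, assuming the object's tracked positions and the boundary are already expressed in a common ground plane; the constant-velocity model, units, and function names are illustrative assumptions rather than the claimed method.

```python
def estimate_velocity(positions, dt):
    """Velocity (units/s) from the last two tracked positions, sampled dt apart."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return ((x1 - x0) / dt, (y1 - y0) / dt)

def predict_arrival(position, velocity, boundary_x):
    """Seconds until the object reaches a vertical boundary at x = boundary_x,
    or None if it is not moving toward it."""
    x, _ = position
    vx, _ = velocity
    if vx == 0 or (boundary_x - x) / vx < 0:
        return None
    return (boundary_x - x) / vx

positions = [(10.0, 5.0), (12.0, 5.0)]       # metres, sampled 1 s apart
velocity = estimate_velocity(positions, dt=1.0)
eta = predict_arrival(positions[-1], velocity, boundary_x=20.0)
if eta is not None:
    print(f"Alert: object predicted to reach the boundary in {eta:.1f} s")
```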
Abstract:
Systems, methods and computer-readable media for creating and using video analysis rules that are based on map data are disclosed. A sensor(s), such as a video camera, can track and monitor a geographic location, such as a road, pipeline, or other location or installation. A video analytics engine can receive video streams from the sensor, and identify a location of the imaged view in a geo-registered map space, such as a latitude-longitude defined map space. A user can operate a graphical user interface to draw, enter, select, and/or otherwise input on a map a set of rules for detection of events in the monitored scene, such as tripwires and areas of interest. When tripwires, areas of interest, and/or other features are approached or crossed, the engine can perform responsive actions, such as generating an alert and sending it to a user.
Abstract:
A content analysis engine receives video input and performs analysis of the video input to produce one or more gross change primitives. A view engine coupled to the content analysis engine receives the one or more gross change primitives from the content analysis engine and provides view identification information. A rules engine coupled to the view engine receives the view identification information from the view engine and provides one or more rules based on the view identification information. An inference engine performs video analysis based on the one or more rules provided by the rules engine and the one or more gross change primitives.
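The chain of engines described above can be sketched as follows, assuming a gross-change primitive that records the fraction of pixels changed per frame; the view names, rules, and thresholds are hypothetical stand-ins, not the disclosed implementation.

```python
def content_analysis_engine(frame_stats):
    """Produce gross-change primitives, e.g. the fraction of pixels that changed."""
    return [{"time": t, "changed_fraction": f} for t, f in frame_stats]

def view_engine(primitives):
    """Identify the current view from the gross-change primitives
    (a large global change stands in here for a camera view switch)."""
    return "wide_view" if primitives[-1]["changed_fraction"] > 0.8 else "normal_view"

def rules_engine(view_id):
    """Return the rules that apply to the identified view."""
    return {"normal_view": [("motion_alert", 0.2)],
            "wide_view": [("scene_change_alert", 0.8)]}[view_id]

def inference_engine(primitives, rules):
    """Apply the view-specific rules to the gross-change primitives."""
    return [{"event": name, "time": p["time"]}
            for p in primitives
            for name, threshold in rules
            if p["changed_fraction"] >= threshold]

primitives = content_analysis_engine([(0, 0.05), (1, 0.9)])
rules = rules_engine(view_engine(primitives))
print(inference_engine(primitives, rules))
```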
Abstract:
Methods, devices, and systems for performing video content analysis to detect humans or other objects of interest in a video image are disclosed. The detection of humans may be used to count a number of humans, to determine a location of each human, and/or to perform crowd analyses of monitored areas.
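A short sketch of counting detected humans and flagging a crowded area, assuming per-frame detections with a class label and a ground-plane location are already available; the thresholds and field names are illustrative assumptions.

```python
def count_humans(detections):
    """Number of detections classified as human."""
    return sum(1 for d in detections if d["class"] == "human")

def humans_in_area(detections, area):
    """Locations of humans inside a rectangular ground-plane area (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = area
    return [d["location"] for d in detections
            if d["class"] == "human"
            and x0 <= d["location"][0] <= x1 and y0 <= d["location"][1] <= y1]

def crowd_alert(detections, area, max_count):
    """Simple crowd analysis: alert when more than max_count humans occupy the area."""
    inside = humans_in_area(detections, area)
    return {"alert": "crowding", "count": len(inside)} if len(inside) > max_count else None

detections = [{"class": "human", "location": (2.0, 3.0)},
              {"class": "human", "location": (2.5, 3.2)},
              {"class": "vehicle", "location": (9.0, 1.0)}]
print(count_humans(detections))
print(crowd_alert(detections, area=(0.0, 0.0, 5.0, 5.0), max_count=1))
```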
Abstract:
Systems, methods and computer-readable media for creating and using video analysis rules that are based on map data are disclosed. A sensor(s), such as a video camera, can track and monitor a geographic location, such as a road, pipeline, or other location or installation. A video analytics engine can receive video streams from the sensor, and identify a location of the imaged view in a geo-registered map space, such as a latitude-longitude defined map space. A user can operate a graphical user interface to draw, enter, select, and/or otherwise input on a map a set of rules for detection of events in the monitored scene, such as tripwires and areas of interest. When tripwires, areas of interest, and/or other features are approached or crossed, the engine can perform responsive actions, such as generating an alert and sending it to a user.