Some parameters can be configured in bulk for Situation Analysis detection tools. To configure them, do the following:
Select the Object Tracker object (1).
If you need to enable recording of video stream metadata, select Yes from the Record object tracking list (2).
Video decompression and analysis are used to obtain metadata, which causes a high load on the Server and limits the number of video cameras that can be used on it.
If a video camera supports multistreaming, select the stream for which detection is needed (3). Selecting a lower-quality video stream reduces the load on the Server.
For tracks from multistreaming cameras to be displayed correctly, all video streams must have the same aspect ratio.
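To illustrate the two notes above, the sketch below compares a camera's streams: it checks that their aspect ratios match (so tracks display correctly) and shows how much choosing the lower-quality stream reduces the number of pixels the Server must decode and analyze. The stream names and resolutions are assumed values, not anything read from the product.

```python
from fractions import Fraction

# Hypothetical stream resolutions for one multistreaming camera (assumed values).
streams = {
    "high_quality": (1920, 1080),
    "low_quality": (640, 360),
}

# Tracks display correctly only if every stream has the same aspect ratio.
ratios = {name: Fraction(w, h) for name, (w, h) in streams.items()}
assert len(set(ratios.values())) == 1, f"Aspect ratios differ: {ratios}"

# Decoding and analysis load is roughly proportional to the pixel count per frame.
high = streams["high_quality"][0] * streams["high_quality"][1]
low = streams["low_quality"][0] * streams["low_quality"][1]
print(f"The low-quality stream carries {high / low:.0f}x fewer pixels per frame")
```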
If you want the sensitivity of scene analytics detection tools to be adjusted automatically, select Yes in the Auto Sensitivity list (5).
Enabling this option is recommended if the lighting fluctuates significantly during the camera's operation (for example, outdoors).
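The product adjusts sensitivity internally; purely as an illustration of why automatic adjustment helps under changing lighting, the toy sketch below scales a detection threshold with the average scene brightness, so that a fixed threshold neither misses motion in dark scenes nor fires on noise in bright ones. It does not reflect the actual algorithm.

```python
def adjusted_threshold(base_threshold: float, mean_brightness: float) -> float:
    """Toy example: scale a motion threshold with scene brightness (0-255).

    In a dark scene the absolute pixel differences caused by real motion are
    small, so a fixed threshold would miss them; in a bright scene the same
    fixed threshold would trigger on noise and glare.
    """
    reference_brightness = 128.0           # assumed mid-grey reference point
    scale = max(mean_brightness, 1.0) / reference_brightness
    return base_threshold * scale

# The same base threshold becomes looser at dusk and stricter at noon.
print(adjusted_threshold(20.0, mean_brightness=40.0))   # dusk  -> 6.25
print(adjusted_threshold(20.0, mean_brightness=200.0))  # noon  -> 31.25
```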
By default, the frame is compressed to 1920 pixels on the longer side. To avoid detection errors on streams with a higher resolution, it is recommended that compression be reduced (6).
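As an illustration of the default compression, the snippet below computes the size a frame is scaled down to when the longer side is limited to 1920 pixels; raising that limit (reducing compression) keeps more detail for high-resolution streams. The 1920-pixel limit is the only figure taken from the text; the function name and the rest are assumptions.

```python
def compressed_size(width: int, height: int, longer_side_limit: int = 1920) -> tuple[int, int]:
    """Downscale (width, height) so the longer side does not exceed the limit,
    preserving the aspect ratio. Frames already within the limit are unchanged."""
    longer = max(width, height)
    if longer <= longer_side_limit:
        return width, height
    scale = longer_side_limit / longer
    return round(width * scale), round(height * scale)

# A 4K stream is reduced to 1920x1080 by default, which may hide small objects;
# raising the limit (reducing compression) avoids that at the cost of extra load.
print(compressed_size(3840, 2160))                           # (1920, 1080)
print(compressed_size(3840, 2160, longer_side_limit=3840))   # (3840, 2160)
```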
In the Motion detection sensitivity field (7), set the sensitivity for motion detection tools, on a scale of 1 to 80.
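The general parameters covered so far can be summarized as a plain configuration object. The sketch below is purely illustrative: the field names and defaults are assumptions rather than the product's API, and it simply groups the settings from this section and validates the documented 1 to 80 sensitivity range.

```python
from dataclasses import dataclass

@dataclass
class ObjectTrackerSettings:
    """Illustrative grouping of the general Object Tracker parameters (assumed names)."""
    record_object_tracking: bool = False     # record video stream metadata
    detection_stream: str = "low_quality"    # stream used for detection on multistream cameras
    auto_sensitivity: bool = False           # adjust sensitivity automatically to lighting
    longer_side_limit: int = 1920            # frame compression limit, pixels on the longer side
    motion_detection_sensitivity: int = 40   # documented range: 1 to 80

    def __post_init__(self) -> None:
        if not 1 <= self.motion_detection_sensitivity <= 80:
            raise ValueError("Motion detection sensitivity must be between 1 and 80")

# Example: an outdoor camera with fluctuating lighting and a 4K main stream.
settings = ObjectTrackerSettings(auto_sensitivity=True, longer_side_limit=3840)
```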
If necessary, configure the neural network filter. It processes the tracker's results and filters out false alarms on complex video images (foliage, glare, and so on).
Enable the filter by selecting Yes (1).
Select the processor for the neural network: CPU, one of the GPUs, or an Intel NCS (2).
A neural network filter can analyze either only moving objects or only abandoned objects. You cannot run two neural networks simultaneously.
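Conceptually, the filter runs as a post-processing pass over the tracker's results: each candidate object is scored by a single neural network on the selected device (CPU, a GPU, or an Intel NCS), and low-confidence candidates are discarded. The sketch below only illustrates that flow; the function and parameter names are assumptions and do not correspond to the product's API.

```python
from typing import Callable, Iterable

# Hypothetical types: a "track" is whatever the tracker reports for a moving or
# abandoned object, and the scorer is one neural network loaded on one device.
Track = dict
Scorer = Callable[[Track], float]

def filter_tracks(tracks: Iterable[Track], scorer: Scorer, threshold: float = 0.5) -> list[Track]:
    """Keep only tracks the neural network scores above the threshold,
    discarding false alarms caused by foliage, glare, and similar noise."""
    return [t for t in tracks if scorer(t) >= threshold]

# Only one network runs at a time: it is trained either for moving objects or
# for abandoned objects, so the same scorer is applied to every track.
def dummy_scorer(track: Track) -> float:
    return track.get("confidence", 0.0)   # stand-in for real inference on CPU/GPU/Intel NCS

tracks = [{"id": 1, "confidence": 0.9}, {"id": 2, "confidence": 0.2}]
print(filter_tracks(tracks, dummy_scorer))   # [{'id': 1, 'confidence': 0.9}]
```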
The general parameters of the Situation Analysis detection tools are now set.