Some parameters can be configured for all Scene Analytics detection tools at once. To configure them, do the following:
Select the Object tracker object.
By default, video stream metadata is recorded to the database. You can disable the recording by selecting No in the Record objects tracking list (1).
Note: Video decompression and analysis are used to obtain metadata, which causes high Server load and limits the number of video cameras that can be used on it.
If the video camera supports multistreaming, select the stream to be used for detection (2). Selecting a low-quality video stream helps reduce the load on the Server.
Note: To display object tracks properly, make sure that all video streams from a multistreaming camera have the same aspect ratio settings.
If you want the sensitivity of the Scene Analytics detection tools to be adjusted automatically, select Yes in the Auto sensitivity list (3).
Info: It is recommended to enable this option if the lighting fluctuates significantly during camera operation (for example, outdoors).
To reduce the number of false positives from a fisheye camera, position it properly (4). This parameter does not apply to other devices.
Analyzed frames are scaled down to the specified resolution (8, 1280 pixels on the longer side by default). The downscaling works as follows (a minimal sketch of this logic is given after the example below):
If the longer side of the source image exceeds the value specified in the Frame size change field, it is divided by two.
If the resulting resolution is below the specified value, it is used for further analysis.
If the resulting resolution still exceeds the specified limit, it is divided by two again, and so on.
Info: For example, the source image resolution is 2048×1536, and the specified value is 1000. In this case, the source resolution is halved twice (to 512×384), because after the first division the longer side still exceeds the limit (1024 > 1000).
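For illustration, here is a minimal sketch of the halving logic described above. The function and parameter names are hypothetical; only the default of 1280 pixels on the longer side and the halving rule come from the description.

```python
def downscale_resolution(width, height, max_longer_side=1280):
    """Halve the frame dimensions until the longer side no longer
    exceeds the limit set in the Frame size change field."""
    while max(width, height) > max_longer_side:
        width //= 2
        height //= 2
    return width, height

# Example from the note above: a 2048x1536 source with the limit set
# to 1000 is halved twice (2048 -> 1024 -> 512), giving 512x384.
print(downscale_resolution(2048, 1536, max_longer_side=1000))  # (512, 384)
```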
Info: If detection is performed on a higher-resolution stream and detection errors occur, it is recommended to reduce the compression.
If necessary, configure the neural network filter (see Hardware requirements for neural analytics operation). The neural network filter processes the results of the tracker and filters out false positives on complex video images (foliage, glare, etc.); a conceptual sketch of this post-processing step is given after the steps below.
Note: A neural network filter can be used either for analyzing moving objects or for analyzing abandoned objects only. You cannot operate two neural networks simultaneously.
Enable the filter by selecting Yes (1).
Select the processor for the neural network: CPU, one of the NVIDIA GPUs, or one of the Intel GPUs (2; see Hardware requirements for neural analytics operation, General information on configuring detection).
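The sketch below is purely conceptual: it illustrates the idea of a neural filter as a post-processing stage over tracker output, where only detections the network accepts are kept. All names in it are hypothetical and do not correspond to the product's internal API.

```python
from typing import Callable, Iterable, List, TypeVar

Detection = TypeVar("Detection")  # hypothetical placeholder for a tracked object

def apply_neural_filter(
    detections: Iterable[Detection],
    score: Callable[[Detection], float],  # hypothetical: network confidence that a detection is a real object
    threshold: float = 0.5,               # hypothetical cut-off value
) -> List[Detection]:
    """Keep only detections scored above the threshold, discarding
    false positives such as foliage movement or glare."""
    return [d for d in detections if score(d) >= threshold]
```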
The general parameters of the Scene Analytics detection tools are now set.