
To configure the Scene Analytics detection tools based on Neurotracker, do the following:

  1. Select the Neurotracker object. 
  2. By default, metadata is recorded to the database. To disable metadata recording, select No in the Record objects tracking list.
  3. If the camera supports multistreaming, select the stream for which detection is needed.
  4. The Decode key frames parameter is enabled by default, so only key frames are decoded. To disable it, select No in the corresponding field. Decoding only key frames reduces the load on the Server, but detection quality is also reduced. We recommend enabling this parameter on "blind" Servers (those that run detection without displaying video). This option is not applicable to the MJPEG codec, in which every frame is a key frame.

    Attention!

    The Number of frames processed per second and Decode key frames parameters are interconnected.

    If there is no local Client connected to the Server, the following rules apply to remote Clients:

    • If the key frame rate is lower than the value specified in the Number of frames processed per second field, the detection tool works by key frames.
    • If the key frame rate is higher than the value specified in the Number of frames processed per second field, detection is performed at the set period.

    If a local Client connects to the Server, the detection tool always works at the set period. After the local Client disconnects, the above rules apply again. These rules are sketched below.
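
    The interplay of the two parameters can be summarized as follows. This is a minimal illustrative sketch; the function and parameter names are hypothetical and not part of any C-Werk API:

    ```python
    def effective_detection_mode(key_frame_rate: float,
                                 frames_per_second: float,
                                 local_client_connected: bool) -> str:
        """Sketch of the rules above (all names are hypothetical).

        key_frame_rate         -- key frames per second in the incoming stream
        frames_per_second      -- the 'Number of frames processed per second' setting
        local_client_connected -- whether a local Client is connected to the Server
        """
        if local_client_connected:
            # A connected local Client forces detection at the set period.
            return "set period"
        if key_frame_rate < frames_per_second:
            # Fewer key frames than the requested rate: work by key frames.
            return "key frames"
        # Enough key frames: detection runs at the configured period.
        return "set period"
    ```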

  5. In the Decoder mode field, select the processing resource for decoding video streams. When you select GPU, a discrete graphics card takes priority (decoding with NVIDIA NVDEC chips). If there is no suitable GPU, decoding falls back to the Intel Quick Sync Video technology; otherwise, CPU resources are used. The fallback order is sketched below.
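
    The fallback order when GPU is selected can be pictured as follows. This is a hypothetical sketch of the priority described above, not an actual C-Werk API:

    ```python
    def select_decoder(has_nvidia_nvdec: bool, has_intel_quick_sync: bool) -> str:
        """Illustrative fallback order for the Decoder mode = GPU setting."""
        if has_nvidia_nvdec:
            return "NVIDIA NVDEC"            # discrete graphics card takes priority
        if has_intel_quick_sync:
            return "Intel Quick Sync Video"  # integrated-GPU fallback
        return "CPU"                         # software decoding as the last resort
    ```
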
  6. You can use the neurofilter to keep only the tracks you need. For example, the neurotracker detects all freight trucks, while the neurofilter keeps only the tracks of trucks with an open cargo door; a sketch of this idea follows the sub-steps below. To set up a neurofilter, do the following:

    1. To use the neurofilter, select Yes in the corresponding field.
    2. In the Neurofilter file field, select a neural network file.
    3. In the Neurofilter mode field, select the processor to be used for the neural network (see General information on configuring detection).
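
    Conceptually, the neurofilter is a second neural network applied to the tracks that the neurotracker produces, keeping only the tracks it classifies as matching. A minimal sketch of the idea, with purely illustrative names and data:

    ```python
    # Illustrative only: none of these names exist in C-Werk.
    def apply_neurofilter(tracks, neurofilter_matches):
        """Keep only the tracks the neurofilter classifies as matching."""
        return [track for track in tracks if neurofilter_matches(track)]

    # Example: the neurotracker found two trucks; the neurofilter keeps
    # only the one with an open cargo door.
    trucks = [{"id": 1, "cargo_door_open": True},
              {"id": 2, "cargo_door_open": False}]
    print(apply_neurofilter(trucks, lambda t: t["cargo_door_open"]))
    # [{'id': 1, 'cargo_door_open': True}]
    ```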

  7. In the Number of frames processed per second field, specify how many frames per second the neural network processes. The higher the value, the more accurate the tracking, but the higher the CPU load.

    Attention!

    A value of 6 FPS or more is recommended. For fast-moving objects (running people, vehicles), set the value to 12 FPS or above (see Examples of configuring Neurotracker for solving typical tasks).

  8. Set the Detection threshold for objects as a percentage. If the recognition probability falls below the specified value, the detection is ignored. The higher the value, the higher the accuracy, but some valid triggers may be discarded, as illustrated below.
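
    The effect of the threshold can be shown in a few lines (the names and the threshold value are illustrative):

    ```python
    DETECTION_THRESHOLD = 30  # percent, as set in the UI (value is illustrative)

    def keep_detection(recognition_probability_percent: float) -> bool:
        # Detections below the threshold are ignored; raising the threshold
        # improves precision but may discard some valid triggers.
        return recognition_probability_percent >= DETECTION_THRESHOLD

    print(keep_detection(45.0))  # True: the detection is kept
    print(keep_detection(20.0))  # False: the detection is ignored
    ```
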
  9. In the Neurotracker mode field, select the processor for the neural network: CPU, one of the NVIDIA GPUs, or one of the Intel GPUs (see Hardware requirements for neural analytics operation, General information on configuring detection).

    Attention!

    • We recommend using the GPU. It may take several minutes to launch the algorithm on NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
    • If the neurotracker is running on a GPU, object tracks may lag behind the objects in the surveillance window. If this happens, set the camera buffer size to 1000 milliseconds (see The Camera object).
    • Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren’t supported.
  10. In the Object type field, select the recognition object:

    1. Human.
    2. Human (top view).
    3. Vehicle.
    4. Human and Vehicle (Nano): low accuracy, low processor load.
    5. Human and Vehicle (Medium): medium accuracy, medium processor load.
    6. Human and Vehicle (Large): high accuracy, high processor load.

  11. To eliminate false positives when using a fisheye camera, in the Camera position field, select the correct device location. For other devices, this parameter is irrelevant.

  12. If you don't need to detect moving objects, select Yes in the Hide moving objects field. An object is considered static if it moves no more than 10% of its width or height over the lifetime of its track.
  13. If you don't need to detect static objects, select Yes in the Hide static objects field. This parameter lowers the number of false positives when detecting moving objects.

    Attention!

    If a static object starts moving, the detection tool will trigger, and the object will no longer be considered static.

  14. Specify the Minimum number of detection triggers for the neurotracker to display the object's track. The higher the value, the longer the delay between the object's detection and the display of its track on the screen. Low values of this parameter may lead to false positives. The combined logic of steps 12-14 is sketched below.
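
    A minimal sketch of the track display logic from steps 12-14, assuming the 10% displacement rule described above (all names are hypothetical):

    ```python
    def is_static(track: dict) -> bool:
        # Steps 12-13: a track is static if it never moves more than
        # 10% of its width or height over its whole lifetime.
        return (track["max_dx"] <= 0.1 * track["width"] and
                track["max_dy"] <= 0.1 * track["height"])

    def show_track(track: dict, hide_moving: bool, hide_static: bool,
                   min_triggers: int) -> bool:
        if track["triggers"] < min_triggers:  # step 14: wait for enough triggers
            return False
        if is_static(track):
            return not hide_static            # step 13: Hide static objects
        return not hide_moving                # step 12: Hide moving objects
    ```
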
  15. If necessary, enable the Model quantization parameter. It reduces the consumption of GPU processing power.

    Attention!

    Grundig conducted a study in which a neural network model was trained with quantization to identify the characteristics of detected objects. The study showed that model quantization can either increase or decrease the recognition rate; this is due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%.

    Model quantization is only applicable to NVIDIA GPUs.

    The first launch of a detection tool with quantization enabled may take longer than a standard launch.

    If GPU caching is used, subsequent launches of a detection tool with quantization run without delay.

  16. If you use a unique neural network, select the corresponding file.

    Attention!

    • To train your neural network, contact Grundig (see Data collection requirements for neural network training).
    • A trained neural network for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).
    • If the neural network file is not specified, the default file is used; it is selected automatically based on the selected object type and the processor selected for the neural network operation. If you use a custom neural network, enter the path to the file; the selected object type is then ignored.
  17. If necessary, specify the class of the detected object. If you want to display tracks of several classes, specify them separated by a comma and a space. For example: 1, 10. A sketch of the rules below follows the note.
    The numerical class values for the embedded neural networks are: 1 for Human/Human (top view), 10 for Vehicle.
    1. If you leave the field blank, tracks of all classes available in the neural network are displayed.
    2. If you specify a class or classes available in the neural network, only tracks of the specified classes are displayed.
    3. If you specify both classes available in the neural network and classes missing from it, only tracks of the available classes are displayed.
    4. If you specify only classes missing from the neural network, tracks of all classes available in the neural network are displayed.

      Note

      Starting with Detector Pack 3.10.2, if you specify a class/classes missing from the neural network, the tracks won’t be displayed.
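
    For Detector Pack 3.10.2 and later, the rules above reduce to a simple set intersection. An illustrative sketch using the embedded-network class values:

    ```python
    NETWORK_CLASSES = {1: "Human", 10: "Vehicle"}  # classes of the embedded networks

    def classes_to_display(field_value: str) -> set:
        """Resolve the class field per the rules above (Detector Pack >= 3.10.2)."""
        if not field_value.strip():
            # Blank field: display all classes the network provides.
            return set(NETWORK_CLASSES)
        requested = {int(c) for c in field_value.split(",")}
        # Only classes the network actually knows are displayed; if none of
        # the requested classes exist, no tracks are displayed at all.
        return requested & set(NETWORK_CLASSES)

    print(classes_to_display(""))       # {1, 10}
    print(classes_to_display("1, 10"))  # {1, 10}
    print(classes_to_display("1, 99"))  # {1}
    print(classes_to_display("99"))     # set(): no tracks displayed
    ```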

  18. To enable the search for similar persons, select Yes in the Similitude search field. Enabling this parameter increases the processor load.

    Attention!

    The Similitude search works only on tracks of people.

  19. In the Time of processing similitude track (sec) field, set the time in the range [0; 3600] required for the algorithm to process the track to search for similar persons.
  20. In the Time period of excluding static objects field, set the time in seconds after which the track of the static object is hidden. If the value of the parameter is 0, the track of the static object isn't hidden.
  21. In the Track retention time field, set the time in seconds after which an object's track is considered lost. This helps when objects in the scene temporarily overlap each other; for example, a larger vehicle may completely block a smaller one from view. A sketch of both timing parameters follows.
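
    The two timing parameters from steps 20 and 21 can be pictured as follows (a hypothetical sketch; the names are not part of the product):

    ```python
    def static_track_hidden(seconds_static: float, exclusion_period: float) -> bool:
        # Step 20: 0 disables hiding; otherwise the static object's track is
        # hidden once the object has been static longer than the period.
        return exclusion_period != 0 and seconds_static > exclusion_period

    def track_is_lost(seconds_since_last_seen: float, retention_time: float) -> bool:
        # Step 21: a track survives a temporary occlusion (e.g. a large vehicle
        # blocking a smaller one) until the retention time elapses.
        return seconds_since_last_seen > retention_time
    ```
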
  22. By default, the entire FOV is the detection area. If you need to narrow down the area to be analyzed, you can set one or several detection areas in the preview window.

    Note

    The procedure for setting areas is identical to that of the basic tracker (see Setting General Zones for Scene analytics detection tools). The only difference is that the neurotracker's areas are processed while the basic tracker's areas are ignored.

  23. Click the Apply button.
  24. The next step is to create and configure the necessary detection tools based on the neurotracker. The configuration procedure is the same as for the basic tracker (see Setting up Tracker-based Scene Analytics detection tools).

    Attention!

    • To trigger a Motion in Area detection tool based on the neurotracker, an object must move by at least 25% of its width or height in the FOV.
    • The abandoned objects detection tool works only with the basic object tracker.