...

  1. To record the mask (highlighting of recognized objects) to the archive, select Yes for the corresponding parameter (1).
  2. If the camera supports multistreaming, select the stream for which detection is needed (2).
  3. The Decode key frames parameter (3) is enabled by default. In this case, only key frames are decoded. To disable key frame decoding, select No in the corresponding field. Using this option reduces the load on the Server, but it also naturally reduces the detection quality. We recommend enabling this parameter for "blind" Servers (without video image display) on which you want to perform detection. For the MJPEG codec, this option isn't relevant, as each frame is considered a key frame.

    Attention!

    The Number of frames processed per second and Decode key frames parameters are interconnected.

    If there is no local Client connected to the Server, the following rules work for remote Clients:

    • If the key frame rate is less than the value specified in the Number of frames processed per second field, the detection tool will work by key frames.
    • If the key frame rate is greater than the value specified in the Number of frames processed per second field, the detection will be performed according to the set period.

    If a local Client connects to the Server, the detection tool will always work according to the set period. After a local Client disconnects, the above rules will be relevant again.
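The key-frame rules above can be summarized as a small decision function. This is an illustrative sketch only, not product code; the function name and return values are hypothetical, and the case where the key frame rate exactly equals the configured value is assumed to behave like "greater":

```python
def detection_mode(key_frame_rate, fps_setting, local_client_connected):
    """Hypothetical sketch of the frame-source rules described above.

    key_frame_rate: key frames per second in the stream.
    fps_setting: the Number of frames processed per second value.
    local_client_connected: whether a local Client is connected to the Server.
    """
    if local_client_connected:
        # With a local Client connected, detection always follows the set period.
        return "period"
    if key_frame_rate < fps_setting:
        # Fewer key frames than the configured rate: work by key frames.
        return "key frames"
    # Enough key frames: detection runs according to the set period.
    return "period"
```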

  4. Select a processing resource for decoding video streams (4). When you select a GPU, a stand-alone graphics card takes priority (when decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, the decoding will use the Intel Quick Sync Video technology. Otherwise, CPU resources will be used for decoding.
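The decoding priority in step 4 amounts to a fallback chain. A minimal sketch under the assumptions stated above; the function name and capability flags are hypothetical:

```python
def pick_decoder(requested, has_nvidia_nvdec, has_intel_qsv):
    """Hypothetical sketch of the decoder priority described in step 4."""
    if requested == "GPU":
        if has_nvidia_nvdec:
            # A stand-alone graphics card takes priority.
            return "NVIDIA NVDEC"
        if has_intel_qsv:
            # No appropriate GPU: fall back to Intel Quick Sync Video.
            return "Intel Quick Sync Video"
    # Otherwise, CPU resources are used for decoding.
    return "CPU"
```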
  5. Set the number of frames for the detection tool to process per second (5). This value must be in the range [0; 100].

    Note

    The default values (three output frames and 1 FPS) indicate that Neurocounter will analyze one frame every second. If Neurocounter detects the specified number of objects (or more) on three frames, then it triggers.
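Under the default settings, the triggering logic amounts to counting analyzed frames that meet the alarm threshold. A minimal sketch assuming per-frame object counts (one count per analyzed frame); the function and parameter names are hypothetical:

```python
def neurocounter_triggers(object_counts, alarm_objects=1, frames_required=3):
    """Hypothetical sketch: does the counter trigger for this series of frames?

    object_counts: number of detected objects in each analyzed frame
                   (at 1 FPS, one entry per second).
    """
    # Count the frames in which the alarm threshold was met or exceeded.
    hits = sum(1 for n in object_counts if n >= alarm_objects)
    return hits >= frames_required
```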


  6. Set the recognition threshold for objects in percent (6). If the recognition probability falls below the specified value, the data is ignored. The higher the value, the higher the recognition accuracy, but some triggering events may be missed. This value must be in the range [0.05; 100].
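The recognition threshold in step 6 acts as a simple filter on detection probability. An illustrative sketch only; the detection structure is hypothetical:

```python
def filter_detections(detections, threshold_percent):
    """Hypothetical sketch: drop detections whose recognition
    probability (in percent) falls below the configured threshold."""
    return [d for d in detections if d["probability"] >= threshold_percent]
```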

  7. Select the processor for the neural network: CPU, one of NVIDIA GPUs, or one of Intel GPUs (7, see Hardware requirements for neural analytics operation, General information on configuring detection).

    Attention!
    • If you specify a processing resource other than the CPU, that device will carry most of the computing load. However, the CPU will also be used to run Neurocounter.
    • It may take several minutes to launch the algorithm on NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
    • Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren’t supported.


  8. Set the triggering condition for Neurocounter:

    1. In the Number of alarm objects field (8), set the threshold value for the number of objects in the frame. This value must be in the range [0; 100].

    2. In the Trigger upon count field (10), select when to generate a trigger: when the number of objects in the detection area is:

      1. Greater than or equal to threshold value.
      2. Less than or equal to threshold value.

        Note

        Neurocounter generates a trigger starting from the specified threshold value (8).
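The Trigger upon count options reduce to a comparison against the Number of alarm objects threshold. A minimal sketch; the mode names are hypothetical:

```python
def count_trigger(count, threshold, mode):
    """Hypothetical sketch of the Trigger upon count setting."""
    if mode == "greater_or_equal":
        # Trigger when the number of objects is greater than or equal to the threshold.
        return count >= threshold
    if mode == "less_or_equal":
        # Trigger when the number of objects is less than or equal to the threshold.
        return count <= threshold
    raise ValueError("unknown mode: " + mode)
```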


  9. In the Object type field (9), select the object for counting:
    1. Human.
    2. Human (top view).
    3. Vehicle.
    4. Human and Vehicle (Nano): low accuracy, low processor load.
    5. Human and Vehicle (Medium): medium accuracy, medium processor load.
    6. Human and Vehicle (Large): high accuracy, high processor load.

  10. If you need to outline the recognized objects in the preview window, select Yes for the Detected objects parameter (11).
  11. If you use a unique neural network, select the corresponding file (12).

    Attention!
    • To train your neural network, contact Grundig (see Data collection requirements for neural network training).
    • A trained neural network for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).
    • If the neural network file is not specified, the default file will be used, which is selected automatically depending on the selected object type (9) and the selected processor for the neural network operation (7). If you use a custom neural network, enter a path to the file. The selected object type is ignored when you use a custom neural network.
    • For correct neural network operation in Linux OS, place the corresponding file in the /opt/Grundig/DetectorPack/NeuroSDK directory.


  12. Set the minimum number of frames on which Neurocounter must detect objects in order to trigger (13). The value must be in the range [2; 20].
  13. If necessary, specify the class/classes of the detected objects (14). To display tracks of several classes, separate them with a comma and a space. For example: 1, 10.
    The numerical values of classes for the embedded neural networks: 1—Human/Human (top view), 10—Vehicle.
    1. If you leave the field blank, the tracks of all available classes from the neural network will be displayed (9, 12).
    2. If you specify a class/classes from the neural network, the tracks of the specified class/classes will be displayed (9, 12).
    3. If you specify a class/classes from the neural network and a class/classes missing from the neural network, the tracks of a class/classes from the neural network will be displayed (9, 12).
    4. If you specify a class/classes missing from the neural network, the tracks of all available classes from the neural network will be displayed (9, 12).

      Note

      Starting with Detector Pack 3.10.2, if you specify a class/classes missing from the neural network, the tracks won’t be displayed (9, 12).
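The class-filtering rules above (items 1 through 4, i.e. the behavior before Detector Pack 3.10.2 for classes missing from the network) can be sketched as follows. This is an illustrative sketch only; the function name is hypothetical:

```python
def classes_to_display(specified, network_classes):
    """Hypothetical sketch of the class-filtering rules 1-4 above.

    Note: starting with Detector Pack 3.10.2, specifying only classes
    missing from the neural network displays no tracks instead.
    """
    if not specified:
        # Rule 1: blank field -> all available classes are displayed.
        return set(network_classes)
    known = {c for c in specified if c in network_classes}
    if known:
        # Rules 2-3: only the specified classes known to the network.
        return known
    # Rule 4 (pre-3.10.2): all-missing classes -> fall back to all classes.
    return set(network_classes)
```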


  14. In the preview window, you can set the detection areas with the help of anchor points, much like privacy masks in Scene Analytics detection tools (see Setting General Zones for Scene analytics detection tools). By default, the entire frame is a detection area.

  15. Click the Apply button.

...