Documentation for C-Werk 2.0.


To configure the common parameters for Pose detection tools, do as follows:

  1. Select the Pose detection object.
  2. By default, video stream metadata is recorded to the database. To disable this, select No in the Record objects tracking list (1).
  3. If the camera supports multistreaming, select the stream for which detection is needed (2). Selecting a low-quality video stream reduces the load on the Server.
  4. Select a processing resource for decoding video streams (3). If you select GPU, a stand-alone graphics card takes priority (decoding is performed with NVIDIA NVDEC chips). If there is no suitable GPU, decoding falls back to the Intel Quick Sync Video technology. Otherwise, the CPU is used for decoding (see the illustrative sketch of this fallback order after the procedure).
  5. Set the number of frames per second for the detection tool to process (4). The value must be in the range [0.016; 100]. 

    Attention!

    With static individuals in the scene, set the FPS to no less than 2. With moving individuals in the scene, set the FPS to 4 or higher. 

    The higher the FPS value, the higher the pose detection accuracy, but also the higher the CPU load. At FPS=1, the accuracy is no less than 70%.

    The optimal value depends on how fast the objects move. For typical tasks, an FPS value from 3 to 20 is sufficient. Examples (summarized in a sketch after the procedure):

    • pose detection for moderately moving objects (without sudden movements): FPS 3;
    • pose detection for moving objects: FPS 12.
  6. Select the processor for the neural network: CPU, one of the NVIDIA GPUs, or one of the Intel GPUs (5; see Hardware requirements for neural analytics operation, General information on configuring detection).

    Attention!

    • It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
    • If you specify a processing resource other than the CPU, the selected device carries most of the computing load; however, the CPU is also used to run the detection tool.
    • Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren’t supported.
    • The accuracy of Man down or Sitting person detection may depend on the particular processor. If the selected processor gives less accurate results, set the detection parameters empirically and configure the scene perspective (see Specific settings for the Man down detection tool, Specific settings for the Sitting person detection tool).
  7. Select a neural network file (6). The standard neural networks for different processor types are located in the C:\Program Files\Common Files\Grundig\DetectorPack\NeuroSDK directory. You don't need to select a standard neural network in this field; the system selects the required one automatically. If you use a custom neural network, enter the path to its file.

  8. By default, the entire FOV is the detection area. If necessary, you can specify detection areas and skip areas in the preview window. To set an area, right-click the image and select the required area. 

    Note

    The areas are set the same way as for the Scene Analytics detection tools (see Configuring the Detection Zone).

    This is how it works (see the illustrative sketch after the procedure):

    1. If you specify only detection areas, no detection is performed in the rest of the FOV.

    2. If you specify only skip areas, detection is performed in the rest of the FOV.

  9. Select the required detection tool (7).
  10. Set the minimum number of frames with a relevant pose or behavior for the tool to trigger (8).

    Note

    The default values (2 frames and 1000 milliseconds) mean that the tool analyzes one frame every second. When a pose is detected in two consecutive analyzed frames, the tool triggers (see the illustrative sketch after the procedure).

    This parameter is not used when configuring people masking.

  11. Click the Apply button.

Setting up the common parameters for the Pose detection tools is complete.
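
The sketches below illustrate some of the rules described in the procedure. They are reading aids only and are not part of C-Werk. The first one restates the decoder fallback order from step 4 (NVIDIA NVDEC, then Intel Quick Sync Video, then CPU); the function and its arguments are hypothetical, since the product applies this logic internally when you select GPU.

    # Illustrative sketch of the decoder fallback order from step 4.
    # The function and its arguments are hypothetical, not a C-Werk API.

    def choose_decoder(gpu_selected, nvdec_available, quick_sync_available):
        """Return the decoding resource according to the documented priority."""
        if gpu_selected:
            if nvdec_available:           # a stand-alone NVIDIA card takes priority
                return "NVIDIA NVDEC"
            if quick_sync_available:      # fall back to Intel Quick Sync Video
                return "Intel Quick Sync Video"
        return "CPU"                      # otherwise the CPU decodes the stream

    # Example: GPU is selected, but no suitable NVIDIA card is present.
    print(choose_decoder(gpu_selected=True, nvdec_available=False,
                         quick_sync_available=True))      # Intel Quick Sync Video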
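
The FPS guidance from step 5 boils down to the allowed range and a few recommended values. The next sketch only restates those numbers and shows how an FPS value translates into an analysis interval; the names are illustrative, not product setting identifiers.

    # Summary of the FPS guidance from step 5 (illustrative only).

    FPS_MIN, FPS_MAX = 0.016, 100         # allowed range of the setting

    RECOMMENDED_FPS = {
        "static individuals (minimum)": 2,
        "moderately moving objects": 3,
        "moving individuals (minimum)": 4,
        "moving objects (example)": 12,
    }

    def analysis_interval_seconds(fps):
        """How often one frame is analyzed at the given FPS value."""
        if not FPS_MIN <= fps <= FPS_MAX:
            raise ValueError("FPS must be within [0.016; 100]")
        return 1.0 / fps

    print(analysis_interval_seconds(RECOMMENDED_FPS["moderately moving objects"]))  # ~0.33 s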
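
The two rules from step 8 can be illustrated with a simple membership check. The sketch below is a hypothetical model of how detection and skip areas combine, not the product's implementation; in particular, the assumption that a skip area overrides an overlapping detection area is ours, as the rules above do not cover that case.

    # Illustrative model of the detection/skip area rules from step 8.

    def point_in_any(point, areas):
        """Ray-casting test: is the point inside any of the polygons?"""
        x, y = point
        for poly in areas:
            inside = False
            for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
                if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                    inside = not inside
            if inside:
                return True
        return False

    def is_detected_at(point, detection_areas, skip_areas):
        """Apply the two rules from step 8."""
        if point_in_any(point, skip_areas):
            return False                  # assumption: a skip area always excludes the point
        if detection_areas:               # rule 1: only the detection areas are analyzed
            return point_in_any(point, detection_areas)
        return True                       # rule 2: no detection areas, so the rest of the FOV is analyzed

    # Example: one detection area covering the left half of a normalized frame.
    left_half = [(0.0, 0.0), (0.5, 0.0), (0.5, 1.0), (0.0, 1.0)]
    print(is_detected_at((0.25, 0.5), [left_half], []))   # True
    print(is_detected_at((0.75, 0.5), [left_half], []))   # False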
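
Finally, the note in step 10 describes a counting rule: one frame is analyzed per interval, and the tool triggers once the relevant pose has been seen in the required number of consecutive analyzed frames. The class below is a hypothetical illustration of that rule with the default values (2 frames, 1000 ms); it is not the product's code.

    # Illustrative sketch of the triggering rule from the note in step 10.

    class PoseTrigger:
        def __init__(self, min_frames=2, interval_ms=1000):
            self.min_frames = min_frames      # minimum number of frames with the relevant pose
            self.interval_ms = interval_ms    # one frame is analyzed per interval
            self._consecutive = 0

        def on_analyzed_frame(self, pose_detected):
            """Feed one analyzed frame; return True when the tool would trigger."""
            self._consecutive = self._consecutive + 1 if pose_detected else 0
            return self._consecutive >= self.min_frames

    trigger = PoseTrigger()
    frames = [False, True, True]              # the pose appears in two consecutive analyzed frames
    print([trigger.on_analyzed_frame(f) for f in frames])   # [False, False, True]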
