Documentation for C-Werk 2.0.


To configure the object presence detection tool, do the following:

  1. To record the sensitivity scale of the detection tool to the archive (see Displaying information from a detection tool (mask)), select Yes for the Record mask to archive parameter (1).
  2. If the camera supports multistreaming, select the stream for which detection is needed (2). Selecting a low-quality video stream reduces the load on the Server.
  3. Select a processing resource for decoding video streams (3). When you select a GPU, a discrete graphics card takes priority (decoding uses NVIDIA NVDEC chips). If no appropriate GPU is available, decoding falls back to the Intel Quick Sync Video technology. Otherwise, the CPU is used for decoding.
  4. Set the number of frames per second that the detection tool processes (4). The value must be in the [0.016; 100] range.
  5. Select the processor for the neural network operation—CPU, one of the NVIDIA GPUs, or one of the Intel GPUs (5; see Hardware requirements for neural analytics operation, General information on configuring detection).

    Attention!

    • It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
    • If you specify a processing resource other than the CPU, that device carries most of the computing load. However, the CPU is also used to run the detection tool.
    • Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren’t supported.
  6. Select a neural network file (6). The standard neural networks for different processor types are located in the C:\Program Files\Common Files\Grundig\DetectorPack\NeuroSDK directory. You don't need to select a standard neural network in this field: the system automatically selects the required one. If you use a custom neural network, enter the path to the file.

  7. Set the minimum number of frames in which the object must be present for the tool to trigger (7). The value must be in the [5; 20] range.
  8. To detect objects without changing the frame size, select Yes in the Scanning mode field (8). The scanning mode requires a neural network that supports it.
  9. Set the sensitivity of the detection tool by trial and error (9). The value must be in the [1; 99] range. The preview window displays the sensitivity scale of the detection tool, which corresponds to the sensitivity parameter. If the scale is green, no object is detected. If the scale is yellow, an object is detected, but not strongly enough to trigger the tool. If the scale is red throughout the analysis period (50 seconds by default, see item 4), the detection tool triggers.
    Example. A sensitivity value of 40 means that the detection tool triggers when at least 4 divisions of the scale are full throughout the analysis period. Triggering stops when fewer than 2 divisions are full over the analysis period. The detection tool triggers again once at least 4 divisions are full throughout the analysis period.
  10. If the detection tool must not trigger on a black-and-white image, select Yes for the Ignore black and white image parameter (10).
  11. By default, the entire frame is a detection area. If necessary, in the preview window, you can set detection areas using the anchor points:
    1. Right-click in the preview window.
    2. If you want to set the detection area as one or more rectangles, select Detection area (rectangle). If you specify a rectangular area, the detection tool works only within its limits; the rest of the frame is ignored.
    3. If you want to set the detection area as one or more polygons, select Detection area (polygon). If you specify one or more polygonal areas, the detection tool processes the entire frame, but the parts of the frame outside the specified polygons are blacked out.

      Attention!

      You can configure detection areas in the same way as exclude areas in Scene analytics detection tools (see Setting General Zones for Scene analytics detection tools).

      Use trial and error to decide which type of detection area (rectangular or polygonal) is more effective in your case: some neural networks detect better with rectangles, while others are better with polygons.
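The decoder priority described in step 3 can be sketched as a simple fallback chain. This is a minimal illustration with hypothetical resource names; the actual selection logic of C-Werk is not public:

```python
# Sketch of the decoding fallback order from step 3 (hypothetical names):
# discrete NVIDIA GPU (NVDEC) first, then Intel Quick Sync Video, then CPU.

def pick_decoder(available: set) -> str:
    """Return the first available decoding resource in priority order."""
    priority = ["nvidia_nvdec", "intel_quick_sync", "cpu"]
    for resource in priority:
        if resource in available:
            return resource
    return "cpu"  # the CPU is always a valid fallback

print(pick_decoder({"intel_quick_sync", "cpu"}))  # intel_quick_sync
```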
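The triggering behavior in the example of step 9 can be modeled as hysteresis with a trigger threshold and a lower release threshold. This is a hypothetical sketch; the scale model and the per-period update are assumptions based on the example, not the product's actual algorithm:

```python
# Hysteresis sketch of the step 9 example (assumed model): the tool
# triggers when at least `trigger_at` divisions of the scale are full
# over an analysis period, and stops triggering only when fewer than
# `release_below` divisions are full.

def update_state(triggered: bool, divisions_full: int,
                 trigger_at: int = 4, release_below: int = 2) -> bool:
    """Return the triggered state after one analysis period."""
    if not triggered:
        return divisions_full >= trigger_at
    return divisions_full >= release_below
```

For the sequence of 4, 3, 2, 1, 4 full divisions per period, the state goes triggered, triggered, triggered, not triggered, triggered: the gap between the two thresholds prevents flickering around a single boundary value.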
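The blackout behavior described in step 11.3 can be illustrated with a standard ray-casting point-in-polygon test. This is a hypothetical sketch of the idea only (pixels outside every detection polygon are set to black before analysis), not the product's implementation:

```python
# Sketch of step 11.3: black out every pixel that falls outside all
# detection polygons, using a ray-casting point-in-polygon test.

def point_in_polygon(x, y, polygon):
    """Return True if (x, y) lies inside the polygon (list of vertices)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def mask_frame(frame, polygons):
    """Keep pixels inside any polygon; set the rest to 0 (black)."""
    return [[pixel if any(point_in_polygon(x, y, p) for p in polygons) else 0
             for x, pixel in enumerate(row)]
            for y, row in enumerate(frame)]
```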
