Manage QA processes

How Segments.ai helps streamline QA, how to set up a linting process, and which additional features can be leveraged


Last updated 8 months ago

Use short feedback loops

The platform ensures short feedback loops through the following design choices:

  1. Each sample is annotated by one labeler. A sample is either an individual image or point cloud, or a sequence of images, point clouds, or multiple sensors, depending on the dataset settings (see Sample formats).

  2. Labelers cannot manually select which samples to label first. Through the "Start Labeling" workflow, samples are presented to them automatically.

  3. After a reviewer has rejected a sample, the original labeler has to correct it before being able to continue labeling any other samples in the queue. The rejected sample will appear at the beginning of the labeler's queue. For more details about the label queue, see Label queue mechanics.

  4. After a labeler has corrected a rejected sample, the original reviewer has to review the corrected sample. The corrected sample will appear at the beginning of the reviewer's queue.
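The queue behavior described above can be sketched as a toy model. The class and method names below are illustrative only (the real queue is managed server-side by the platform); the point is that rejected samples jump to the front of the queue, forcing a short feedback loop:

```python
from collections import deque

class LabelQueue:
    """Toy model of the label queue mechanics described above (not the real implementation)."""

    def __init__(self, samples):
        self.queue = deque(samples)

    def next_sample(self):
        # Labelers cannot pick samples; the queue decides what comes next.
        return self.queue.popleft() if self.queue else None

    def reject(self, sample):
        # A rejected sample is placed at the beginning of the labeler's queue,
        # so it must be corrected before any other labeling continues.
        self.queue.appendleft(sample)

q = LabelQueue(["img_001", "img_002", "img_003"])
first = q.next_sample()   # "img_001" is presented first
q.reject(first)           # a reviewer rejects it
assert q.next_sample() == "img_001"  # it comes back before "img_002"
```

The same front-of-queue rule applies symmetrically on the review side: a corrected sample appears at the beginning of the reviewer's queue.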

Set up a linting process

Linting is the process of performing static analysis to flag erroneous patterns. For example, you might want to programmatically:

  1. Identify cuboids with unexpected dimensions or positions

  2. Spot segmentation masks that are too small, or uncover unlabeled pixels

  3. Detect movement errors in sequences

  4. Flag incorrect categories

Using the API/SDK and webhooks system, it is straightforward to set up such a linting process and verify labels against expected properties:

  • The webhooks system allows you to receive event notifications whenever a sample has been labeled, so that the linting process can be triggered (see also Set up webhooks)

  • The API/SDK offers a programmatic way to (see also Python SDK quickstart):

    • List & pull labels

    • Verify properties for these labels

    • Change the status of labels to e.g. "Rejected"

    • Report issues
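As an illustration, here is a minimal linting sketch that checks cuboid dimensions against expected ranges. The `lint_cuboid` helper and the `CAR_RANGES` values are hypothetical; the commented-out wiring follows the shape of the Segments.ai Python SDK, but verify the exact method names and label structure against the Python SDK reference before relying on them:

```python
def lint_cuboid(dimensions, expected_ranges):
    """Return a list of human-readable problems for one cuboid's dimensions."""
    problems = []
    for axis in ("x", "y", "z"):
        lo, hi = expected_ranges[axis]
        value = dimensions[axis]
        if not lo <= value <= hi:
            problems.append(
                f"{axis}-dimension {value} outside expected range [{lo}, {hi}]"
            )
    return problems

# Expected car-cuboid dimensions in meters (illustrative values, not a standard).
CAR_RANGES = {"x": (3.0, 6.0), "y": (1.5, 2.5), "z": (1.0, 2.2)}

# Hypothetical wiring against the SDK (check names against the SDK reference):
# from segments import SegmentsClient
# client = SegmentsClient("YOUR_API_KEY")
# for sample in client.get_samples("your-org/your-dataset"):
#     label = client.get_label(sample.uuid)
#     for obj in label.attributes.annotations:
#         if lint_cuboid(obj.dimensions, CAR_RANGES):
#             # Flag the label for correction, e.g. set its status to "REJECTED".
#             client.update_label(sample.uuid, "ground-truth", label_status="REJECTED")

print(lint_cuboid({"x": 4.2, "y": 1.8, "z": 1.5}, CAR_RANGES))  # prints []
```

Triggering this check from a webhook handler (instead of polling all samples) keeps the feedback loop short: the lint runs as soon as a sample is labeled.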

Discover additional features to improve QA

Work with issues

Leave comments, post screenshots, or ask questions with the issues functionality. See Work with issues.

Review using ratings

Enable the ratings functionality in the dataset settings and leave star-based ratings on labels.

Add an additional QA round

Add a "Verified" label status in the dataset settings, intended to support an additional QA round of all labels with status "Reviewed".

Enable validation check to warn users about unlabeled points in the 3D segmentation interface

This feature displays an alert when attempting to save if there are any unlabeled points. The warning is optional and can be enabled through the dataset settings.

This option is only available for datasets with the "Pointcloud" data type and the "Segmentation" task.

For more details, please contact us.