Label queue mechanics

The label and review queues make it easy for teams to work on a dataset together efficiently.

Note that a single sample can be either an individual image or an image sequence consisting of multiple frames, depending on the chosen dataset type. The same holds for point cloud data.

When you upload a new sample, its initial status is unlabeled. Samples are always either unlabeled, prelabeled, labeled (or labeling in progress), reviewed (or reviewing in progress), rejected, or skipped.
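
As a rough mental model, each status also determines which queue a sample can appear in: unlabeled, prelabeled, and rejected samples are candidates for the label queue, while labeled samples are candidates for the review queue (see the sections below). The sketch that follows is purely illustrative; the exact status identifiers used by the API may differ from these assumed names.

```python
from enum import Enum

class SampleStatus(Enum):
    # Assumed identifiers for illustration; check the API/SDK reference for the exact values.
    UNLABELED = "unlabeled"
    PRELABELED = "prelabeled"
    LABELING_IN_PROGRESS = "labeling in progress"
    LABELED = "labeled"
    REVIEWING_IN_PROGRESS = "reviewing in progress"
    REVIEWED = "reviewed"
    REJECTED = "rejected"
    SKIPPED = "skipped"

# Which statuses make a sample a candidate for each queue (see the queue sections below).
LABEL_QUEUE_CANDIDATES = {SampleStatus.UNLABELED, SampleStatus.PRELABELED, SampleStatus.REJECTED}
REVIEW_QUEUE_CANDIDATES = {SampleStatus.LABELED}
```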

Dataset administrators can open any sample directly via the Samples tab. They can update the label and freely change the label status.

Dataset labelers and reviewers don't have access to the Samples tab. They can only click the blue Start labeling or Start reviewing buttons. This brings them into a workflow where they are automatically assigned samples from the label or review queue in a specific order, as explained below.

Label queue

When a labeler presses the Start labeling button, a single sample is fetched from the label queue.

In the labeling workflow, there are three buttons:

  • Submit: set the sample status to labeled (moving it to the review queue) and go to the next sample in the queue.

  • Skip: set the sample status to skipped and go to the next sample in the queue.

  • Save (only visible if enabled in the dataset settings): set the sample status to labeling in progress but don't go to the next sample in the queue yet. Can be helpful when labeling larger samples.
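
Each button is effectively a small state transition. Here is a minimal, purely illustrative mapping (the button names and statuses come from the list above; the data structure itself is just a sketch, not part of the SDK or API):

```python
# Each labeling button maps to (resulting sample status, whether the queue advances).
LABELING_BUTTONS = {
    "Submit": ("labeled", True),              # sample moves on to the review queue
    "Skip": ("skipped", True),
    "Save": ("labeling in progress", False),  # stay on the same sample; requires Save to be enabled
}
```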

If a labeler presses the Start labeling button, they will get samples from the label queue in this order:

  1. Samples which they started labeling but didn't finish yet (only if the Save button is enabled).

  2. Samples they labeled but which were rejected in the reviewing step and now need to be corrected.

  3. Unlabeled or prelabeled samples which are specifically assigned to this user, through the assigned_labeler field.

  4. Unlabeled or prelabeled samples which are not assigned to a specific user.

  5. If no such samples exist, the label queue is empty and no more samples need to be labeled.

Within each step, samples with higher priority are returned first. Read more about how you can customize the queue priority.
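
Conceptually, this is a two-level sort: first by the step number above, then by sample priority within a step. The sketch below is only a mental model of that ordering, not the platform's implementation; the field names (status, assigned_labeler, labeled_by, started_by, priority) are assumptions used for illustration.

```python
from typing import Optional

def label_queue_step(sample: dict, user: str) -> Optional[int]:
    """Return the step (lower = served first) at which `sample` is offered to labeler `user`,
    or None if the sample is not in this user's label queue. Illustrative only."""
    status = sample["status"]
    if status == "labeling in progress" and sample.get("started_by") == user:
        return 1  # samples this user started but didn't finish (requires the Save button)
    if status == "rejected" and sample.get("labeled_by") == user:
        return 2  # samples this user labeled that were rejected in review and need correction
    if status in ("unlabeled", "prelabeled"):
        if sample.get("assigned_labeler") == user:
            return 3  # samples specifically assigned to this user
        if sample.get("assigned_labeler") is None:
            return 4  # unassigned samples
    return None

def next_label_sample(samples: list[dict], user: str) -> Optional[dict]:
    """Pick the next sample: lowest step first, highest priority within a step."""
    eligible = [s for s in samples if label_queue_step(s, user) is not None]
    if not eligible:
        return None  # the label queue is empty for this user
    return min(eligible, key=lambda s: (label_queue_step(s, user), -s.get("priority", 0)))
```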

Review queue

When a reviewer presses the Start reviewing button, a single sample is fetched from the review queue.

In the reviewing workflow, there are four buttons:

  • Accept: set the sample status to reviewed.

  • Reject: set the sample status to rejected, moving it back onto the label queue.

  • Skip (only visible if enabled in the dataset settings): set the sample status to skipped.

  • Save (only visible if enabled in the dataset settings): set the sample status to reviewing in progress but don't go to the next sample in the review queue yet. Can be helpful when reviewing larger samples.

If a reviewer presses the Start reviewing button, they will get samples from the review queue in this order:

  1. Samples which they started reviewing but didn't finish yet (only if the Save button is enabled).

  2. Samples they rejected before and which have now been corrected by the original labeler, so they need to be reviewed again.

  3. Labeled samples which are specifically assigned to this user, through the assigned_reviewer field.

  4. Labeled samples which are not assigned to a specific user, and which haven't been labeled by this user (to avoid reviewing their own labeled samples if a team participates in both labeling and reviewing).

  5. Labeled samples which are not assigned to a specific user, independent of who labeled them before (to prevent deadlock if a single user wants to both label and review a dataset).

  6. If no such samples exist, the review queue is empty and no more samples need to be reviewed.

Within each step, samples with higher priority are returned first. For samples with the same priority, the oldest one is returned first. Read more about how you can customize the queue priority.
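
Steps 4 and 5 balance two goals: reviewers should normally not review their own labels, but a single user who both labels and reviews should never get stuck. A purely illustrative sketch of the eligibility check, mirroring the label-queue sketch above (field names such as assigned_reviewer, labeled_by, rejected_by, and created_at are assumptions):

```python
from typing import Optional

def review_queue_step(sample: dict, user: str) -> Optional[int]:
    """Return the step at which `sample` is offered to reviewer `user`,
    or None if it is not in their review queue. Illustrative only."""
    status = sample["status"]
    if status == "reviewing in progress" and sample.get("started_by") == user:
        return 1  # reviews this user started but didn't finish (requires the Save button)
    if status != "labeled":
        return None  # only labeled samples enter the review queue
    if sample.get("rejected_by") == user:
        return 2  # samples this user rejected earlier that have since been corrected
    if sample.get("assigned_reviewer") == user:
        return 3  # samples specifically assigned to this reviewer
    if sample.get("assigned_reviewer") is None:
        if sample.get("labeled_by") != user:
            return 4  # unassigned samples labeled by someone else (avoid self-review)
        return 5  # fall back to the user's own samples so a lone user can't deadlock
    return None

# Selection then works as in the label-queue sketch, with the oldest sample breaking
# ties between samples of equal priority:
#   key = (review_queue_step(s, user), -s.get("priority", 0), s.get("created_at", 0))
```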
