Upload model predictions


Uploading as pre-labels

Once you've trained an initial machine learning model on your labeled data, you can upload the model predictions as pre-labels to further speed up your labeling workflow.

To add a label to a sample programmatically, use the client.add_label() function in the Python SDK. Note that the format of the attributes field depends on the label type.

sample_uuid = "602a3eec-a61c-4a77-9fcc-3037ce5e9123"
labelset = "ground-truth"
attributes = {
    "format_version": "0.1",
    "annotations": [
        {
          "id": 1,
          "category_id": 1,
          "type": "bbox",
          "points": [
            [12.34, 56.78],
            [90.12, 34.56]
          ]
        }
    ]
}

client.add_label(sample_uuid, labelset, attributes)
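In practice, you'll usually generate the attributes payload from your model's raw output rather than writing it by hand. A minimal sketch, assuming detections come as (category_id, x, y, w, h) tuples with (x, y) the top-left corner; the helper name is hypothetical:

```python
def detections_to_attributes(detections):
    """Build the attributes payload from (category_id, x, y, w, h) detections.

    Each bbox is stored as two points: the top-left and bottom-right corners.
    """
    annotations = []
    for i, (category_id, x, y, w, h) in enumerate(detections, start=1):
        annotations.append({
            "id": i,
            "category_id": category_id,
            "type": "bbox",
            "points": [[x, y], [x + w, y + h]],
        })
    return {"format_version": "0.1", "annotations": annotations}

attributes = detections_to_attributes([(1, 10.0, 20.0, 30.0, 40.0)])
```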

The sample now has a label status of prelabeled and will appear in the label queue along with any unlabeled samples. Instead of labeling the sample from scratch, labelers can focus on verifying and correcting the pre-label.

Uploading in a separate label set

Instead of uploading the labels as pre-labels to the ground-truth label set, you can also upload them to a separate label set. Doing so unlocks a few features:

  • You can visualize the ground truth and predicted labels side by side.

  • When labeling, you can copy the label from any label set with the click of a button, to avoid labeling from scratch.

  • You can upload a prediction score along with the label, allowing you to sort and browse your predictions by accuracy.
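The documentation doesn't prescribe how the sample-level prediction score is computed; one simple, common choice is to average the per-object confidences your model reports. A sketch (the aggregation strategy is up to you):

```python
def prediction_score(confidences, default=0.0):
    """Aggregate per-object confidences into a single sample-level score."""
    if not confidences:
        return default  # no detections on this sample
    return sum(confidences) / len(confidences)

score = prediction_score([0.95, 0.89])  # ~0.92
```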

First, create a new label set:

  1. Go to the Samples tab.

  2. Click the "Add new label set" link.

  3. Choose a name and optionally a description for the new label set.

  4. Click the "Create" button.

sample_uuid = "602a3eec-a61c-4a77-9fcc-3037ce5e9123"
labelset = "name-of-your-labelset"
attributes = {
    "format_version": "0.1",
    "annotations": [
        {
          "id": 1,
          "category_id": 1,
          "type": "bbox",
          "points": [
            [12.34, 56.78],
            [90.12, 34.56]
          ]
        }
    ]
}
score = 0.92

client.add_label(sample_uuid, labelset, attributes, score=score)
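To upload predictions for an entire dataset, you can loop over your samples with the same call. A sketch, assuming you've collected per-sample payloads up front (client and the score keyword are as in the example above; the helper name is hypothetical):

```python
def upload_predictions(client, labelset, predictions):
    """Upload predictions; `predictions` maps sample UUID -> (attributes, score)."""
    for sample_uuid, (attributes, score) in predictions.items():
        client.add_label(sample_uuid, labelset, attributes, score=score)
```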

You can search through the ground-truth and uploaded labels simultaneously. For example, ground-truth.car:>0 my-predictions.car:=0 matches samples where the "ground-truth" label set contains strictly more than 0 "car" objects AND the "my-predictions" label set contains 0 "car" objects. For more information, see how to search through the dataset.

Then, when adding a label to a sample with client.add_label(), refer to this label set by name, as in the example above.
