Exporting image annotations to different formats

Exporting the release file for image datasets to different formats

You can export the release file for image datasets to different formats with the Python SDK. Use the export_dataset utility function for this, setting its export_format parameter to one of the following values:

coco-instance: COCO instance segmentation format
coco-panoptic: COCO panoptic segmentation format
yolo: YOLO Darknet object detection format
instance: Grayscale PNGs (16-bit) where the values correspond to instance ids
semantic: Grayscale PNGs (8-bit) where the values correspond to category ids
instance-color: Colored PNGs where the colors correspond to different instances
semantic-color: Colored PNGs where the colors correspond to different categories, with colors as configured in the label editor settings when available
polygon: For exporting segmentation bitmap labels to polygons

Example:

# pip install segments-ai
from segments import SegmentsClient, SegmentsDataset
from segments.utils import export_dataset

# Initialize a SegmentsDataset from the release file
client = SegmentsClient('YOUR_API_KEY')
release = client.get_release('jane/flowers', 'v1.0') # Alternatively: release = 'flowers-v1.0.json'
dataset = SegmentsDataset(release, labelset='ground-truth', filter_by=['labeled', 'reviewed'])

# Export to COCO panoptic format
export_dataset(dataset, export_format='coco-panoptic')
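
The same initialized dataset can be exported to any of the other formats listed above by changing the export_format value. A minimal sketch (the exact files written and values returned depend on the chosen format):

# Export to 8-bit grayscale semantic segmentation PNGs
export_dataset(dataset, export_format='semantic')

# Export segmentation bitmap labels to polygons
export_dataset(dataset, export_format='polygon')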

Alternatively, you can use the initialized SegmentsDataset to loop through the samples and labels, and visualize or process them in any way you please:

import matplotlib.pyplot as plt
from segments.utils import get_semantic_bitmap

for sample in dataset:
    # Print the sample name and list of labeled objects
    print(sample['name'])
    print(sample['annotations'])
    
    # Show the image
    plt.imshow(sample['image'])
    plt.show()
    
    # Show the instance segmentation label
    plt.imshow(sample['segmentation_bitmap'])
    plt.show()
    
    # Show the semantic segmentation label
    semantic_bitmap = get_semantic_bitmap(sample['segmentation_bitmap'], sample['annotations'])
    plt.imshow(semantic_bitmap)
    plt.show()
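
Instead of plotting, you can also process the labels numerically. The loop below is a minimal sketch that counts how many pixels each category occupies per sample; it assumes the segmentation bitmaps can be converted to NumPy arrays with np.asarray, and it reuses the dataset and the get_semantic_bitmap import from the examples above.

import numpy as np

for sample in dataset:
    # Convert the instance label to a semantic (category id) bitmap
    semantic_bitmap = get_semantic_bitmap(sample['segmentation_bitmap'], sample['annotations'])

    # Count the pixels per category id (0 typically corresponds to unlabeled pixels)
    category_ids, pixel_counts = np.unique(np.asarray(semantic_bitmap), return_counts=True)
    for category_id, pixel_count in zip(category_ids, pixel_counts):
        print(f"{sample['name']} - category {category_id}: {pixel_count} pixels")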