Manage QA processes
How Segments.ai helps streamline QA, how to set up a linting process, and which additional features can be leveraged
Use short feedback loops
The platform ensures short feedback loops through the following design choices:
Each sample is annotated by a single labeler. Depending on the dataset settings, a sample is either an individual image or point cloud, or a sequence of images, point clouds, or multi-sensor data - see Sample.
Labelers cannot manually pick which samples to label first. Through the "Start Labeling" workflow, samples are presented to them automatically.
After a reviewer has rejected a sample, the original labeler must correct it before being able to continue with any other samples in the queue. The rejected sample appears at the beginning of the labeler's queue. For more details about the label queue, see Label queue mechanics.
After a labeler has corrected a rejected sample, the original reviewer must review the corrected sample, which appears at the beginning of the reviewer's queue.
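The front-of-queue behavior described above can be sketched as a simple queue with front insertion. This is an illustrative model only, not the platform's actual implementation:

```python
from collections import deque


class LabelQueue:
    """Toy model of the label queue: FIFO, with rejected samples jumping to the front."""

    def __init__(self, samples):
        self.queue = deque(samples)

    def next_sample(self):
        # "Start Labeling" hands out the sample at the front of the queue.
        return self.queue.popleft() if self.queue else None

    def push_front(self, sample):
        # A rejected (or corrected) sample is inserted at the beginning,
        # so it is the next thing the labeler (or reviewer) sees.
        self.queue.appendleft(sample)


labeler = LabelQueue(["sample-1", "sample-2", "sample-3"])
current = labeler.next_sample()   # labeler receives "sample-1"
labeler.push_front(current)       # reviewer rejects it: back to the front
assert labeler.next_sample() == "sample-1"  # correction comes before new work
```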
Set up a linting process
Linting is the process of performing static analysis to flag erroneous patterns. For example, one might want to programmatically:
Identify cuboids with unexpected dimensions or positions
Spot segmentation masks that are too small, or uncover unlabeled pixels
Observe movement errors in sequences
Flag incorrect categories
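As a concrete illustration, the first two checks could look like the sketch below. The field name `dimensions` and all thresholds are hypothetical and depend on your own label format; adapt them to the attributes your dataset actually uses:

```python
def lint_cuboid(cuboid, max_length=20.0, max_height=5.0):
    """Flag cuboids with unexpected dimensions.

    The 'dimensions' key and the thresholds are illustrative, not part of
    the Segments.ai label format.
    """
    errors = []
    dx, dy, dz = cuboid["dimensions"]
    if max(dx, dy) > max_length:
        errors.append(f"unexpected footprint: {dx:.1f} x {dy:.1f} m")
    if dz > max_height:
        errors.append(f"unexpected height: {dz:.1f} m")
    return errors


def lint_mask(mask_area_px, min_area_px=25):
    """Flag segmentation masks that are suspiciously small."""
    if mask_area_px < min_area_px:
        return [f"mask of {mask_area_px} px is below the {min_area_px} px minimum"]
    return []


assert lint_cuboid({"dimensions": (4.5, 1.8, 1.6)}) == []   # a plausible car
assert lint_cuboid({"dimensions": (40.0, 1.8, 1.6)}) != []  # suspiciously long
assert lint_mask(10) != []                                  # too few pixels
```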
Using the API/SDK and webhooks system, it is straightforward to set up such a linting process and verify labels against expected properties:
The webhooks system lets you receive an event notification whenever a sample has been labeled, so that the linting process can be triggered (see also Set up webhooks)
The API/SDK offers a programmatic way to (see also Python SDK quickstart)
List & pull labels
Verify properties for these labels
Change the status of labels to e.g. "Rejected"
Report issues
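Putting these pieces together, a linting pass triggered by a webhook event might look like the sketch below. The client functions are injected so the flow is testable offline; in production they would wrap SDK calls such as `client.get_label`, `client.update_label`, and `client.add_issue` (check the Python SDK reference for the exact signatures). The check shown is a placeholder for your own lint rules:

```python
def triage(label_attributes, checks):
    """Run every lint check on a label and collect the reported errors."""
    errors = []
    for check in checks:
        errors.extend(check(label_attributes))
    return errors


# Placeholder lint rule: flag labels with a missing category.
CHECKS = [
    lambda attrs: ["missing category"] if not attrs.get("category") else [],
]


def handle_labeled_event(sample_uuid, get_label, set_status, report_issue):
    """React to a 'sample labeled' webhook notification.

    get_label / set_status / report_issue are injected for testability;
    in production they would wrap e.g. client.get_label,
    client.update_label and client.add_issue from the Segments.ai SDK.
    """
    attributes = get_label(sample_uuid)
    errors = triage(attributes, CHECKS)
    if errors:
        set_status(sample_uuid, "Rejected")   # send back to the labeler
        report_issue(sample_uuid, "; ".join(errors))
    else:
        set_status(sample_uuid, "Reviewed")
```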
For more details, please contact us.
Discover additional features to improve QA
Work with issues
Leave comments, post screenshots or ask questions with the issues functionality. See Work with issues
Review using ratings
Enable the ratings functionality in the dataset and leave star-based ratings
Add an additional QA round
Add a "Verified" label status, intended to support an additional QA round of all labels with status "Reviewed"
Enable validation check to warn users about unlabeled points in the 3D segmentation interface
This feature displays an alert when attempting to save if there are any unlabeled points.
This option is only available for datasets with the "Pointcloud" data type and the "Segmentation" task.