Sample formats
A sample is a data point you want to label. Samples come in different types, such as an image, a 3D point cloud, or a video sequence. When uploading (client.add_sample()) or downloading (client.get_sample()) a sample using the Python SDK, the format of the attributes field depends on the type of sample. The different formats are described here.
The section Import data shows how you can obtain URLs for your assets.
Image
Supported image formats: jpeg, png, bmp.
If the image file is on your local computer, you should first upload it to our asset storage service (using upload_asset()) or to another cloud storage service.
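As a minimal sketch, the attributes for an image sample reference the image by URL; the URL and dataset identifier below are placeholders:

```python
# Sketch: attributes dict for a single image sample.
attributes = {
    "image": {
        "url": "https://example.com/assets/image_00001.jpg"
    }
}

# With the Python SDK this would then be uploaded as:
# from segments import SegmentsClient
# client = SegmentsClient("YOUR_API_KEY")
# client.add_sample("your_org/your_dataset", "image_00001", attributes)
```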
Image sequence
Supported image formats: jpeg, png, bmp.
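A sketch of the attributes for an image sequence, assuming each frame follows the single-image format above plus an optional per-frame name; the URLs are placeholders:

```python
# Sketch: attributes dict for an image sequence of three frames.
attributes = {
    "frames": [
        {
            "image": {"url": f"https://example.com/assets/frame_{i:05d}.jpg"},
            "name": f"frame_{i:05d}",
        }
        for i in range(3)
    ]
}
```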
3D point cloud
On Segments.ai, the up direction is defined along the z-axis, i.e. the vector (0, 0, 1) points up. If you upload point clouds with a different up direction, you might have trouble navigating the point cloud.
pcd
Required. Point cloud data.
images
Reference camera images.
name
string
Name of the sample.
timestamp
int, float, or string
Timestamp of the sample. Should be in nanoseconds for accurate velocity/acceleration calculations. Will also be used for interpolation unless disabled in the dataset settings.
ego_pose
Pose of the sensor that captured the point cloud data.
default_z
float
Default z-value of the ground plane. 0 by default. Only valid in the point cloud cuboid editor. New cuboids will be drawn on top of the ground plane, i.e. the default z-position of a new cuboid is 0.5 (since the default height of a new cuboid is 1).
bounds
dict of <string, float>
Point cloud bounds: a dict with values that are used to initialize the limiting cuboid. The z-values are also used for height coloring when provided.
Supported values: min_x, max_x, min_y, max_y, min_z, and max_z.
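Putting the fields above together, a sketch of the attributes for a single 3D point cloud sample; the URL, timestamp, and bounds are illustrative placeholders:

```python
# Sketch: attributes dict for a 3D point cloud sample.
attributes = {
    "pcd": {"url": "https://example.com/assets/scan.pcd", "type": "pcd"},
    "name": "scan-0001",
    "timestamp": 1602938957234567891,  # nanoseconds
    "default_z": -1.0,                 # new cuboids are drawn on top of z = -1
    "bounds": {"min_z": -2.0, "max_z": 5.0},  # also used for height coloring
}
```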
Point cloud data
See 3D point cloud formats for the supported file formats.
url
string
Required. URL of the point cloud data.
type
string: "pcd" | "binary-xyzi" | "kitti" | "binary-xyzir" | "nuscenes" | "ply"
Type of the point cloud data.
If the point cloud file is on your local computer, you should first upload it to our asset storage service (using upload_asset()) or to another cloud storage service.
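A sketch of that upload flow, assuming the SDK's upload_asset() helper returns an asset whose url field can be used here; the filename is a placeholder, and the small validation helper is our own, not part of the SDK:

```python
# Hypothetical helper: build and validate the pcd attribute dict.
def make_pcd(url, pcd_type="pcd"):
    """Build the pcd attribute for a 3D point cloud sample."""
    allowed = {"pcd", "binary-xyzi", "kitti", "binary-xyzir", "nuscenes", "ply"}
    if pcd_type not in allowed:
        raise ValueError(f"unsupported point cloud type: {pcd_type}")
    return {"url": url, "type": pcd_type}

# Uploading a local file first (sketch, requires the segments package):
# from segments import SegmentsClient
# client = SegmentsClient("YOUR_API_KEY")
# with open("scan.pcd", "rb") as f:
#     asset = client.upload_asset(f, filename="scan.pcd")
# pcd = make_pcd(asset.url)
```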
Camera image
A calibrated or uncalibrated reference image corresponding to a point cloud. The reference images can be opened in a new tab from within the labeling interface. You can determine the layout of the images by setting the row and col attributes on each image. If you also supply the calibration parameters (and distortion parameters if necessary), the main point cloud view can be switched to the image to obtain a fused view.
name
string
Name of the camera image.
url
string
Required. URL of the camera image.
row
int
Required. Row of this image in the images viewer.
col
int
Required. Column of this image in the images viewer.
intrinsics
Intrinsic parameters of the camera.
extrinsics
Extrinsic parameters of the camera.
distortion
Distortion parameters of the camera.
camera_convention
string
: "OpenGL" | "OpenCV"
Convention of the camera coordinates. We use the OpenGL/Blender coordinate convention for cameras. +X is right, +Y is up, and +Z is pointing back and away from the camera. -Z is the look-at direction. Other codebases may use the OpenCV convention, where the Y and Z axes are flipped but the +X axis remains the same. See diagram 1.
rotation
float
Rotation of the camera image, in radians.
If the image file is on your local computer, you should first upload it to our asset storage service (using upload_asset()) or to another cloud storage service.
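A sketch of one calibrated entry for the images list; all numbers are placeholders, not real calibration values:

```python
# Sketch: a calibrated camera image entry for a point cloud sample.
camera_image = {
    "name": "front-cam",
    "url": "https://example.com/assets/front_00001.jpg",
    "row": 0,  # position in the images viewer
    "col": 0,
    "intrinsics": {
        "intrinsic_matrix": [   # 3x3 matrix as a nested list of rows
            [1266.4, 0.0, 816.3],
            [0.0, 1266.4, 491.5],
            [0.0, 0.0, 1.0],
        ]
    },
    "extrinsics": {
        "translation": {"x": 0.0, "y": 0.0, "z": 1.6},
        "rotation": {"qx": 0.0, "qy": 0.0, "qz": 0.0, "qw": 1.0},  # identity
    },
    "camera_convention": "OpenGL",
}
```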
Camera intrinsics
intrinsic_matrix
Required. 3x3 intrinsic matrix of the camera.
Camera extrinsics
translation
object: {"x": float, "y": float, "z": float}
rotation
object: {"qx": float, "qy": float, "qz": float, "qw": float}
Distortion
model
string
: "fisheye" | "brown-conrady"
coefficients
Fisheye: object: {"k1": float, "k2": float, "k3": float, "k4": float}
Brown-Conrady: object: {"k1": float, "k2": float, "k3": float, "p1": float, "p2": float}
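A sketch of the two supported distortion parameter sets; the coefficient values are placeholders:

```python
# Sketch: distortion attributes for the two supported models.
fisheye = {
    "model": "fisheye",
    "coefficients": {"k1": -0.02, "k2": 0.01, "k3": 0.0, "k4": 0.0},
}
brown_conrady = {
    "model": "brown-conrady",
    "coefficients": {"k1": -0.3, "k2": 0.1, "k3": 0.0, "p1": 0.001, "p2": -0.0005},
}
```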
Ego pose
The pose of the sensor used to capture the 3D point cloud data. This can be helpful if you want to obtain cuboids in world coordinates, or when your sensor is moving. In the latter situation, supplying an ego pose with each frame will ensure that static objects do not move when switching between frames.
position
object: {"x": float, "y": float, "z": float}
Required. XYZ position of the sensor in world coordinates.
heading
object: {"qx": float, "qy": float, "qz": float, "qw": float}
Orientation of the sensor in world coordinates, as a quaternion.
Segments.ai uses 32-bit floats for the point positions. Keep in mind that 32-bit floats have limited precision: only 24 bits are available for the significand (23 stored bits plus one implicit leading bit), which corresponds to about 7.22 decimal digits. If you want to keep two decimal places, this leaves only 5.22 digits for the integer part, so the numbers shouldn't be larger than 10^5.22 = 165958.
To avoid rounding problems, it is best practice to subtract the ego position of the first frame from all other ego positions. This way, the first ego position is set to (0, 0, 0) and the subsequent ego positions are relative to (0, 0, 0). In your export script, you can add the ego position of the first frame back to the object positions.
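The best practice above can be sketched as follows, assuming frames is a list of frame attribute dicts that each contain an ego_pose; the positions are placeholder UTM-scale coordinates:

```python
# Sketch: shift all ego positions so the first frame sits at the origin.
def normalize_ego_positions(frames):
    origin = dict(frames[0]["ego_pose"]["position"])  # copy before mutating
    for frame in frames:
        pos = frame["ego_pose"]["position"]
        for axis in ("x", "y", "z"):
            pos[axis] -= origin[axis]
    return origin  # keep this offset to restore world coordinates on export

frames = [
    {"ego_pose": {"position": {"x": 600123.10, "y": 5712345.20, "z": 48.30}}},
    {"ego_pose": {"position": {"x": 600124.35, "y": 5712346.05, "z": 48.31}}},
]
offset = normalize_ego_positions(frames)
# frames[0] is now at (0, 0, 0); frames[1] holds only the relative motion,
# which fits comfortably in 32-bit float precision.
```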
3D point cloud sequence
frames
Required. List of 3D point cloud frames in the sequence.
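A sketch of the attributes for a 3D point cloud sequence, assuming each frame follows the single point cloud format above plus a per-frame name and timestamp; URLs and timestamps are placeholders:

```python
# Sketch: attributes dict for a 3D point cloud sequence of three frames.
attributes = {
    "frames": [
        {
            "pcd": {
                "url": f"https://example.com/assets/scan_{i:03d}.pcd",
                "type": "pcd",
            },
            "name": f"frame_{i:03d}",
            "timestamp": 1602938957000000000 + i * 100_000_000,  # nanoseconds
        }
        for i in range(3)
    ]
}
```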
Multi-sensor sequence
sensors
Required. List of the sensors that can be labeled.
Sensor
name
string
Required. The name of the sensor.
task_type
string
Required. The task type for this sensor.
attributes
object
Required. The sample attributes of the sensor, in the format corresponding to its task type.
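A sketch of the attributes for a multi-sensor sequence; the sensor names are placeholders, and the nested per-sensor attributes (shown empty here) would follow the sequence formats described above:

```python
# Sketch: attributes dict for a multi-sensor sequence with two sensors.
attributes = {
    "sensors": [
        {
            "name": "lidar",
            "task_type": "pointcloud-cuboid-sequence",
            "attributes": {"frames": []},  # 3D point cloud sequence format
        },
        {
            "name": "front-cam",
            "task_type": "image-vector-sequence",
            "attributes": {"frames": []},  # image sequence format
        },
    ]
}
```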