HOW WE ENSURE QUALITY

Most manipulation datasets ship raw trajectories with no quality guarantee. You find out the data is bad when your policy doesn't improve. We verify before we deliver.

STAGE 01

MULTI-MODAL CAPTURE

Purpose-built capture facilities with synchronized multi-camera arrays, depth sensing, and tactile instrumentation. Every modality time-aligned to sub-10ms.
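
As a concrete reading of that guarantee, here is a minimal sketch of a cross-modality skew check; the stream names and timestamps are illustrative, not the capture system's API:

```python
def max_sync_skew_ns(streams):
    """Worst-case timestamp spread across modalities at each capture tick.

    `streams` maps modality name -> per-frame timestamps in nanoseconds.
    Assumes all streams have the same frame count (illustrative only).
    """
    ticks = zip(*streams.values())  # one tuple of timestamps per tick
    return max(max(t) - min(t) for t in ticks)

# Example: three modalities at ~30 fps, all within the alignment budget.
streams = {
    "rgb":     [0, 33_000_000, 66_000_000],
    "depth":   [2_000_000, 34_000_000, 67_000_000],
    "tactile": [1_000_000, 33_500_000, 66_200_000],
}
skew = max_sync_skew_ns(streams)
assert skew < 10_000_000  # sub-10ms alignment holds
```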

EGOCENTRIC VIDEO

Multi-camera RGB at high frame rate. First-person perspective capturing the worker's view during manipulation tasks.

DEPTH

LiDAR depth maps providing metric-scale 3D geometry. Accurate world-space hand and object positioning.

TACTILE

3-axis force and shear sensing across the full hand. Contact dynamics that cameras cannot see.

KINEMATICS

3D hand pose in MANO format. 21-joint articulation with metric-scale positioning refined by depth.
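
The MANO convention above implies a simple per-frame record: 21 metric-scale joint positions plus the standard MANO pose and shape coefficients. A sketch of that shape (field names here are assumptions, not the delivered schema):

```python
import numpy as np

NUM_JOINTS = 21  # wrist + 4 joints per finger x 5 fingers

# One frame of hand kinematics in a MANO-style layout (illustrative).
frame = {
    "timestamp_ns": 0,
    "joints_xyz_m": np.zeros((NUM_JOINTS, 3)),  # metric positions, depth-refined
    "mano_pose": np.zeros(48),   # axis-angle: 3 global + 15 joints x 3
    "mano_shape": np.zeros(10),  # identity/shape coefficients (betas)
}
```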

STAGE 02

AUTOMATED ANNOTATION + EXPERT REVIEW

Automated pre-labeling reduces human annotation to edge-case review. Hand pose estimation, object segmentation, and action classification run on every frame. Expert reviewers handle the fraction where models are uncertain.
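
The routing logic amounts to a confidence threshold: auto-accept what the model is sure about, queue the rest for a human. A minimal sketch, with hypothetical field names and threshold:

```python
def route_for_review(frames, confidence_threshold=0.9):
    """Split auto-labeled frames into auto-accepted vs. expert-review queues.

    Each frame carries a model confidence score; the names and the 0.9
    threshold are illustrative, not the production pipeline.
    """
    accepted, review = [], []
    for f in frames:
        (accepted if f["confidence"] >= confidence_threshold else review).append(f)
    return accepted, review

frames = [{"id": 0, "confidence": 0.98}, {"id": 1, "confidence": 0.62}]
auto, expert = route_for_review(frames)
# Only the uncertain frame goes to a human reviewer.
```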

HAND POSE

Dense 3D hand mesh recovery on every frame. Fused with tactile signals to disambiguate during occlusion.

OBJECT TRACKING

Instance segmentation with depth-informed boundaries. Consistent object identity across the full episode.

ACTION SEGMENTS

Temporal action labels from tactile events, hand motion, and audio. Language-annotated task descriptions per episode.

STAGE 03

PHYSICS VALIDATION GATE

Every episode is checked against physical constraints before it leaves our pipeline. Force values verified against biomechanical norms. Joint angles tested against anatomical limits. Contact sequences validated for physical consistency.

WHAT WE CHECK

  • Force values within physiological and material-specific ranges
  • Joint angles anatomically plausible across the full trajectory
  • Contact sequence physically consistent (no teleportation, no impossible state transitions)
  • Temporal coherence across all synchronized modalities
  • Sensor dropout detection — no frozen or missing channels
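
In code, a subset of the gate above looks like this sketch; the thresholds are placeholder numbers, not our biomechanical or material-specific limits:

```python
import numpy as np

# Illustrative bounds only -- the real gate uses biomechanical and
# material-specific limits per task and sensor.
MAX_FORCE_N = 50.0        # upper bound on fingertip force
FLEXION_LIMIT_RAD = 1.92  # ~110 degrees of finger flexion

def validate_episode(forces_n, joint_angles_rad):
    """Run force-range, joint-limit, and dropout checks.

    Arrays are (frames, channels); returns (passed, failure_types).
    """
    failures = []
    if np.any(forces_n < 0) or np.any(forces_n > MAX_FORCE_N):
        failures.append("force_out_of_range")
    if np.any(np.abs(joint_angles_rad) > FLEXION_LIMIT_RAD):
        failures.append("joint_limit_violation")
    # Frozen-sensor check: a channel that never changes over time is dropout.
    if np.any(np.ptp(forces_n, axis=0) == 0):
        failures.append("sensor_dropout")
    return (len(failures) == 0, failures)

ok, failures = validate_episode(
    forces_n=np.array([[1.0, 2.0], [1.5, 2.1]]),
    joint_angles_rad=np.array([[0.10], [0.25]]),
)
```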

WHAT HAPPENS ON FAILURE

  • Episodes that fail validation are quarantined, not shipped
  • Failure type is classified and logged
  • Every delivered episode includes a validation certificate with pass/fail status
  • Deliberate failure + recovery episodes are captured separately and validated independently
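
The per-episode certificate described above is a small record. A sketch of its shape (field names are illustrative, not the delivered schema):

```python
from dataclasses import dataclass, field

@dataclass
class ValidationCertificate:
    """Pass/fail record attached to every delivered episode (illustrative)."""
    episode_id: str
    passed: bool
    failure_types: list = field(default_factory=list)  # empty when passed
    quarantined: bool = False  # failed episodes are held back, not shipped

cert = ValidationCertificate(episode_id="ep_0001", passed=True)
```
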

ALSO IN STAGE 03

FAILURE + RECOVERY EPISODES

Knowing what goes wrong is as valuable as knowing what goes right.

We deliberately capture and validate failure modes and recovery behaviors. Dropped objects, slipped grasps, cross-threaded fasteners, readjusted grips. Your model learns to detect, react, and recover — not just replay the golden path.

BY DESIGN

Capture protocols include failure + recovery

CLASSIFIED

Failure type tagged per episode

VALIDATED

Physics-checked recovery trajectories

STAGE 04

WHAT YOU RECEIVE

PER EPISODE

  • Multi-camera RGB video (MP4)
  • Depth maps (PNG per frame)
  • Tactile force profiles (Parquet)
  • 3D hand kinematics — MANO format (Parquet)
  • Task annotations with language descriptions
  • Episode metadata (operator, scene, task variant)
  • Validation report (pass/fail with failure type if applicable)

FORMAT + DELIVERY

  • LeRobot v3 schema (synchronized MP4 + Parquet)
  • GR00T N1 and Isaac Lab compatible
  • Chunked storage for efficient streaming
  • Delivered via bucket or direct transfer
  • Per-episode and per-dataset statistics
  • Camera extrinsics and calibration data included
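
Because the Parquet tables share synchronized timestamps, modalities can be joined directly in pandas. A sketch using in-memory stand-ins (in practice the tables come from the episode's Parquet files, e.g. `pd.read_parquet("episode_0001/tactile.parquet")`; paths and column names are assumptions):

```python
import pandas as pd

# Stand-ins for two of the per-episode Parquet tables (timestamps in ns).
tactile = pd.DataFrame({"t_ns": [0, 10_000_000], "force_n": [0.5, 1.2]})
hands = pd.DataFrame({"t_ns": [1_000_000, 11_000_000], "wrist_x_m": [0.10, 0.11]})

# Align modalities on nearest timestamp. Synchronized capture keeps skew
# under 10 ms, so nearest-neighbor matching is safe here.
merged = pd.merge_asof(hands, tactile, on="t_ns", direction="nearest")
```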

SEE IT IN YOUR PIPELINE

Request a sample episode. Drop it into your training stack. If it works, we talk.