
Best practices and resources for submitting high-quality egocentric videos.

Why Egocentric Video Matters

The demand for high-quality egocentric video is accelerating as artificial intelligence and robotics systems move from controlled environments into the real world. Training these systems requires more than large volumes of data — it requires clear, realistic, first-person recordings that reflect how humans actually work, move, and interact with their surroundings.

Our mission is to help meet this demand by building a marketplace focused on useful, authentic egocentric video, captured by real people performing real tasks. We aim to bridge the gap between everyday human activity and the data AI systems need to learn safely and effectively.

Egocentric (first-person) video offers a perspective that traditional cameras cannot replicate. By capturing what a person sees and does, these videos provide critical information about:

  • Visual attention and situational awareness
  • Hand–object interaction and fine motor control
  • Task sequencing and natural decision-making
  • Environmental context and spatial relationships

For AI systems and robots, this perspective is essential for learning how tasks unfold in real environments, not just in ideal or simulated conditions.

The Challenge: Quality, Not Just Quantity

While there is no shortage of video content online, most existing footage is not suitable for AI training. Common issues include:

  • Cinematic or third-person framing
  • Heavy editing, filters, or stabilization artifacts
  • Inconsistent camera placement
  • Staged or exaggerated actions

AI systems learn from patterns. When the data is noisy, artificial, or inconsistent, learning outcomes suffer. Our platform prioritizes clarity, continuity, and realism so that each contribution meaningfully improves training quality.

Our Approach

We believe that high-value egocentric data does not require professional production. Instead, it requires:

  • Clear guidance
  • Consistent expectations
  • Respect for natural human behavior

By keeping requirements simple and focusing on fundamentals, we enable contributors of all experience levels to produce strong submissions using accessible equipment.

This guide reflects those principles — emphasizing what matters most while avoiding unnecessary complexity.

Who This Platform Is For

Our marketplace is intentionally open. Contributors come from many backgrounds and skill levels, including:

  • Tradespeople and technicians
  • Warehouse and logistics workers
  • Retail and service staff
  • Homeowners performing everyday tasks

You do not need specialized training or expensive gear. If you can follow instructions, work naturally, and pay attention to framing and clarity, you can produce footage that is highly valuable.

The Role of Contributors

Contributors play a direct role in shaping how future AI systems understand the world. Each submission helps improve:

  • Task recognition
  • Motion planning
  • Safety awareness
  • Human–robot interaction

Small details — camera stability, consistent framing, natural pacing — have an outsized impact on training outcomes. This guide exists to help you recognize and capture those details.

A Shared Responsibility

Building useful AI systems is a shared effort. We rely on contributors to follow guidelines, respect privacy, and record responsibly. In return, we aim to provide:

  • Clear mission definitions
  • Transparent expectations
  • Fair access to opportunities

Quality egocentric data benefits everyone — contributors, researchers, companies, and the broader ecosystem of AI and robotics.

Guide to Submitting High-Quality Egocentric Video

Egocentric videos (first-person footage captured from a wearable camera) provide a direct view of what the wearer sees and does. This unique perspective is invaluable for training AI systems and robots to understand how humans move, perceive, and interact with the world. The usefulness of this data depends heavily on clarity, consistency, and realism.

This guide is designed to help contributors of all experience levels capture videos that are both easy to record and highly valuable. Attention to detail makes a meaningful difference in the quality of the data.

Getting Started

What is an egocentric video?

An egocentric video captures the world from the recorder's point of view, closely matching what a person sees as they move and act. The camera effectively replaces the viewer's eyes.

This perspective allows AI systems to learn:

  • How humans visually navigate environments
  • How hands interact with objects
  • How tasks naturally unfold over time

Egocentric videos are typically recorded using head-mounted or chest-mounted cameras and should feel natural and immersive rather than staged or cinematic.

Who can submit videos?

Anyone can submit videos, regardless of background or technical skill. The marketplace is open by design. That said, higher-quality submissions are generally more useful and may be valued more highly.

If you can follow basic instructions, move naturally, and pay attention to clarity, you can produce strong submissions with minimal equipment.

Video Requirements

Basic submission criteria

All videos must meet a small set of foundational requirements to be accepted. These ensure the footage is usable for AI training.

Your video must:

  • Be recorded from a first-person (POV) perspective
  • Show a clear, unobstructed view of the scene
  • Be recorded as a continuous clip with minimal cutting
  • Avoid filters, overlays, or stylized effects

Some missions may introduce additional requirements. When they do, always follow mission instructions first.

Recommended video characteristics

While not mandatory, the following qualities significantly improve usefulness:

  • Stable footage with limited shake
  • Clear visibility of hands when interacting with objects
  • Predictable framing (objects don't constantly leave the frame)
  • Natural pacing without rushing or exaggeration

These characteristics help AI systems better understand context, motion, and intent.

Camera Setup & Mounts

Camera placement

Camera placement plays a major role in video quality. The goal is to approximate a human field of view as closely as possible.

Recommended placements include:

  • Head-mounted cameras, which closely match eye-level perspective
  • Chest-mounted cameras, which offer stability and a clear view of hands

Handheld footage is discouraged unless explicitly requested, as it introduces instability and inconsistent framing.

Mount stability

Once mounted, the camera should remain fixed for the duration of the recording. A shifting or loose camera can significantly reduce data quality.

Before recording:

  • Tighten all straps or clips
  • Check framing by reviewing a short test clip
  • Confirm the horizon is level

Recommended mounts:

  • Head-mounted camera straps – link
  • Chest harness mounts – link
  • Body-mounted clips – link

Lighting & Environment

Lighting best practices

Good lighting ensures that objects, textures, and motion are clearly visible. Poor lighting reduces detail and increases noise.

Best practices include:

  • Recording in evenly lit environments
  • Avoiding bright light sources behind objects
  • Using natural daylight when possible
  • Cleaning the lens before each session

Consistent lighting is more important than dramatic or high-contrast lighting.

Recording environment

Choose environments that support clear visibility and minimal distraction.

Whenever possible:

  • Reduce unnecessary visual clutter
  • Avoid mirrors or reflective surfaces unless relevant
  • Keep background noise to a reasonable level

Everyday background sounds are acceptable, but loud music or television can interfere with clarity.

Movement & Framing

Natural movement

Move as you normally would. AI systems benefit most from natural, unforced behavior.

Avoid:

  • Exaggerated motions
  • Unnatural pauses or artificially slow movement
  • Sudden, repeated head movements

Brief pauses before and after major actions can help provide context without disrupting realism.

Framing guidelines

The camera should consistently capture the most important part of the action.

Keep in mind:

  • Important objects should stay within the center of the frame
  • Hands should remain visible when performing tasks
  • Avoid looking away from the task unnecessarily

If something leaves the frame accidentally, continue naturally rather than restarting.

Continuity & Authenticity

Continuous recording

Long, uninterrupted clips provide richer context than short, segmented ones. Whenever possible, record full sequences from beginning to end.

Continuous footage helps AI systems understand:

  • Task progression
  • Cause and effect
  • Transitions between actions

Avoid stopping and restarting unless required.

Authentic behavior

Do not "perform" for the camera. Real-world behavior — including minor mistakes or adjustments — is valuable.

Authenticity matters more than perfection. Natural variation improves training outcomes.

File Formats & Uploads

Supported file formats

The platform currently supports the following formats:

  • MP4
  • AVI
  • MKV
  • SVO

If your device outputs a different format, convert it prior to upload using standard tools.

Upload tips

Before uploading:

  • Confirm the video plays correctly
  • Ensure the full clip is included
  • Avoid unnecessary compression

A clean, original file is always preferred.
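The checks above can be partly automated before you upload. The sketch below flags the obvious problems a script can catch: a missing file, an unsupported extension, or a file so small the clip is probably truncated. The size threshold and function name are illustrative assumptions; playback and framing still need a human eye.

```python
from pathlib import Path

SUPPORTED = {".mp4", ".avi", ".mkv", ".svo"}
MIN_BYTES = 1024  # hypothetical floor; any real clip is far larger

def preupload_issues(path: str) -> list[str]:
    """Return a list of obvious problems found before upload (empty = OK)."""
    p = Path(path)
    if not p.is_file():
        return [f"file not found: {path}"]
    issues = []
    if p.suffix.lower() not in SUPPORTED:
        issues.append(f"unsupported format: {p.suffix}")
    if p.stat().st_size < MIN_BYTES:
        issues.append("file is suspiciously small; the clip may be truncated")
    return issues
```

An empty list means the file cleared the automated checks; you should still play the clip through once to confirm the full recording is present.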

Common Mistakes

Issues that reduce submission value

The following issues commonly lead to rejection or lower valuation:

  • Lens obstruction (hair, hats, fingers)
  • Excessive camera shake
  • Dark or poorly lit footage
  • Incorrect orientation
  • Heavy editing or filters

Review your footage once before uploading to catch obvious issues.

Advanced Tips

For experienced contributors

If you plan to submit regularly or work on advanced missions:

  • Keep camera placement consistent across sessions
  • Record similar activities in similar environments
  • Track what was recorded and when
  • Follow naming and labeling instructions precisely

Consistency across submissions increases overall usefulness.
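For contributors who submit regularly, tracking what was recorded and when can be as simple as appending one row per session to a shared CSV log. The file name, column names, and example values below are illustrative assumptions, not a platform requirement.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("recording_log.csv")  # hypothetical location
FIELDS = ["recorded_at", "filename", "activity", "mount", "location"]

def log_recording(filename: str, activity: str, mount: str, location: str) -> None:
    """Append one row describing a recording session to the CSV log."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # write column names on first use
        writer.writerow({
            "recorded_at": datetime.now(timezone.utc).isoformat(timespec="seconds"),
            "filename": filename,
            "activity": activity,
            "mount": mount,
            "location": location,
        })
```

Logging the mount and location alongside each file makes it easy to keep camera placement and environment consistent across sessions, which is exactly what the tips above ask for.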

Gear & Resources

Helpful equipment

You do not need professional gear, but the following can help improve quality:

Learning resources

Final Notes

Quality matters. Clear, stable, authentic footage helps train better AI systems and increases the value of your contribution. If a mission includes additional instructions, always follow those requirements first.