Camera Selection Guide


When selecting a camera model, follow the steps below to gradually narrow down to the appropriate option:

  1. Determine the camera mounting mode and exclude models that are not compatible.

  2. Based on the target object’s size and position, screen for the models whose working range meets the requirements, and confirm whether the vision-guided picking accuracy of each model satisfies the project needs. Record the specifications of the qualifying models for later use.

  3. According to the target object type, the camera’s recommended application scenarios, and the typical capture time, select the optimal model that fits the project requirements. (An illustrative sketch of this filtering process follows the list.)
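
The three steps above amount to applying successive filters over the candidate models. The Python sketch below only illustrates that selection logic with a hypothetical CameraModel record and example requirement values; it is not part of any Mech-Mind software or API, and the field names are assumptions made for this illustration.

```python
from dataclasses import dataclass


@dataclass
class CameraModel:
    """Hypothetical record holding the specifications used in this guide."""
    name: str
    mounting_modes: set            # e.g. {"ETH", "EIH"}
    working_distance_mm: tuple     # (near, far) in mm
    picking_accuracy_mm: float     # vision-guided picking accuracy; None if not listed
    typical_capture_time_s: tuple  # (min, max) typical capture time in seconds


def select_candidates(models, mounting_mode, camera_to_object_mm,
                      allowed_tolerance_mm, max_capture_time_s):
    """Apply the three steps of this guide as successive filters."""
    candidates = []
    for m in models:
        # Step 1: exclude models that do not support the planned mounting mode.
        if mounting_mode not in m.mounting_modes:
            continue
        # Step 2: the camera-to-object distance must fall within the working
        # distance range, and the project's allowable picking tolerance should
        # be greater than the camera's accuracy value (see note (2) below the table).
        near, far = m.working_distance_mm
        if not near <= camera_to_object_mm <= far:
            continue
        if m.picking_accuracy_mm is None or m.picking_accuracy_mm >= allowed_tolerance_mm:
            continue
        # Step 3: the typical capture time (worst case) must fit the takt time.
        if m.typical_capture_time_s[1] > max_capture_time_s:
            continue
        candidates.append(m)
    return candidates


# Example with values taken from the tables in this guide (PRO S-V4D1000M).
pro_s_1000m = CameraModel("PRO S-V4D1000M", {"ETH", "EIH"}, (800, 1000), 1.0, (0.3, 0.6))
print([m.name for m in select_candidates([pro_s_1000m], "EIH", 900, 1.5, 1.0)])
# -> ['PRO S-V4D1000M']
```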

Camera Mounting Mode

The common mounting modes are Eye-to-hand (ETH) and Eye-in-hand (EIH). For a detailed explanation, refer to Mounting Modes. The supported mounting modes for each model are shown below:

| Model | Supported mounting modes |
|---|---|
| LSR XL | Eye-to-hand |
| Other models | Eye-to-hand, Eye-in-hand |

Camera Working Range

Enter the target object’s size and the distance between the camera and the object surface into the 3D Camera Selector. The tool will automatically list the available camera models based on camera working distance and field of view. Then, refer to the table below to check the vision-guided picking accuracy of the available models and record the specifications of the models that meet the project requirements.
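
If you want to pre-check a model by hand, the sketch below shows the kind of geometry the 3D Camera Selector automates: verify that the camera-to-object distance falls within the model's working distance range and that the object fits within the field of view at that distance. It assumes, purely for illustration, that each FOV dimension can be linearly interpolated between its near and far values; the working distance and FOV numbers in the example are placeholders, not specifications from this guide.

```python
def fov_at_distance(distance_mm, near_mm, far_mm, fov_near_mm, fov_far_mm):
    """Approximate one FOV dimension at a given camera-to-object distance by
    interpolating linearly between the FOV at the near and far ends of the
    working distance range (an illustrative assumption, not a vendor formula)."""
    t = (distance_mm - near_mm) / (far_mm - near_mm)
    return fov_near_mm + t * (fov_far_mm - fov_near_mm)


def object_fits(distance_mm, object_w_mm, object_h_mm,
                near_mm, far_mm, fov_near, fov_far):
    """Check working distance and field of view for one camera model.
    fov_near and fov_far are (width, height) tuples in mm."""
    if not near_mm <= distance_mm <= far_mm:
        return False
    fov_w = fov_at_distance(distance_mm, near_mm, far_mm, fov_near[0], fov_far[0])
    fov_h = fov_at_distance(distance_mm, near_mm, far_mm, fov_near[1], fov_far[1])
    return object_w_mm <= fov_w and object_h_mm <= fov_h


# Placeholder working distance and FOV numbers purely for illustration --
# take real values from Camera Technical Specifications or the 3D Camera Selector.
print(object_fits(900, 550, 380, 800, 1000, (520, 390), (650, 490)))  # True
```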

You may also use the table below to quickly review the basic information of each camera model. For detailed information about each camera model, see Camera Technical Specifications.

| Model | 2D image color (1) | Specification | Object focal distance (mm) | Working distance (mm) | Vision-guided picking accuracy (2) |
|---|---|---|---|---|---|
| DEEP-GL | Color | DEEP-V4D3000A | 2500 | 1200–3500 | ±5 |
| LSR S-GL | Color | LSR S-V4D800A | 800 | 500–900 | ±1.5 |
| LSR S-GL | Color | LSR S-V4D1400A | 1400 | 900–1500 | ±1.5 |
| LSR L-GL | Color | LSR L-V4D1500A | 1500 | 1200–1800 | ±2.5 |
| LSR L-GL | Color | LSR L-V4D3000A | 2500 | 1800–3000 | ±3 |
| LSR XL-GL | Color | LSR XL-V5D2500A | 2500 | 1600–3500 | ±1.5 |
| NANO-GL | Monochrome | NANO-V4D350M | 350 | 300–450 | - |
| NANO-GL | Monochrome | NANO-V4D550M | 550 | 450–600 | - |
| NANO-GL | Color | NANO-V4D350C | 350 | 300–450 | - |
| NANO-GL | Color | NANO-V4D550C | 550 | 450–600 | - |
| NANO ULTRA-GL | Monochrome | NANO ULTRA-350M | 350 | 250–500 | - |
| NANO ULTRA-GL | Monochrome | NANO ULTRA-700M | 700 | 400–800 | ±1 |
| PRO S-GL | Monochrome | PRO S-V4D500M | 500 | 500–600 | ±0.5 |
| PRO S-GL | Monochrome | PRO S-V4D700M | 700 | 600–800 | ±0.7 |
| PRO S-GL | Monochrome | PRO S-V4D1000M | 1000 | 800–1000 | ±1 |
| PRO S-GL | Color | PRO S-V4D500C | 500 | 500–600 | ±0.5 |
| PRO S-GL | Color | PRO S-V4D700C | 700 | 600–800 | ±0.7 |
| PRO S-GL | Color | PRO S-V4D1000C | 1000 | 800–1000 | ±1 |
| PRO M-GL | Monochrome | PRO M-V4D1200M | 1200 | 1000–1300 | ±1.5 |
| PRO M-GL | Monochrome | PRO M-V4D2000M | 1800 | 1300–2000 | ±2 |
| PRO M-GL | Color | PRO M-V4D1200C | 1200 | 1000–1300 | ±1.5 |
| PRO M-GL | Color | PRO M-V4D2000C | 1800 | 1300–2000 | ±2 |
| UHP-140-GL | Monochrome | UHP-140-MP30D300M | 300 | 280–320 | ±0.2 |
| Laser L Enhanced | Monochrome | Laser L Enhanced-12MP-1500M | 1500 | 1200–1700 | - |
| Laser L Enhanced | Monochrome | Laser L Enhanced-12MP-3000M | 3000 | 1700–3000 | - |

(1) Refers to the color of the 2D image / 2D image (texture). For details, refer to data types.

(2) The vision-guided picking accuracy values are based on empirical data collected from multiple project sites. The allowable picking accuracy tolerance of the user’s project should be greater than these values. If the picking accuracy meets the requirement but the camera’s field of view is insufficient, consider deploying multiple cameras to expand the field of view.

Recommended Application Scenarios

Based on the target object type and the camera’s recommended application scenarios, further narrow down the suitable models, and use the typical capture time to determine whether each model meets the project takt time requirements (a rough takt-time check sketch follows the table and its note). See the table below:

DEEP-GL
Characteristics: Color. Large field of view, large depth of field, high speed.
Recommended application scenarios (1):
  • Logistics scenarios such as depalletizing and palletizing of cartons, sacks, and turnover boxes.
Typical capture time (s): 0.5–0.9

LSR S-GL
Characteristics: Color. Small volume, high precision, high resistance to ambient light.
Recommended application scenarios (1):
  • Outdoor scenarios such as refueling, charging, port container lock removal, and construction drilling.
Typical capture time (s): 0.5–0.9

LSR L-GL / Laser L Enhanced
Characteristics: Color. High precision, large field of view, high resistance to ambient light.
Recommended application scenarios (1):
  • Scenarios prone to ambient light interference, such as manufacturing plants.
  • Loading/unloading and picking of highly reflective metal parts such as brake discs, shafts, and steel plates.
Typical capture time (s): 0.5–0.9 (LSR L-GL); 1.4–1.7 (Laser L Enhanced)

LSR XL-GL
Characteristics: Color. Ultra-high resolution, super-large scanning range.
Recommended application scenarios (1):
  • High-precision applications at long working distances, such as loading of large battery cells and unloading of stamping parts.
Typical capture time (s): 0.6–1.1

NANO-GL
Characteristics: Available in monochrome and color versions. Small volume, ultra-high precision, high resistance to ambient light. Can be mounted on a robot arm.
Recommended application scenarios (1):
  • Precision operations such as high-accuracy picking, vision-guided positioning, and assembly.
Typical capture time (s): 0.6–1.1

NANO ULTRA-GL
Characteristics: Monochrome. Palm-sized, ultra-high precision. Can be mounted on robot arms or cobots.
Recommended application scenarios (1):
  • Metal part loading/unloading and picking.
  • Precision operations such as high-accuracy assembly and screw-driving.
Typical capture time (s): 0.5–0.9

PRO S-GL / PRO M-GL
Characteristics: Available in monochrome and color versions. High accuracy, high speed.
Recommended application scenarios (1):
  • Complex objects such as transparent, highly reflective, or dark surfaces.
  • Objects of various materials including metal, plastic, and wood.
  • Medium-range applications with high precision requirements, such as loading/unloading, random picking, positioning, assembly, and academic research.
Typical capture time (s): 0.3–0.6

UHP-140-GL
Characteristics: Monochrome. Micron-level accuracy.
Recommended application scenarios (1):
  • Highly reflective objects with complex surfaces, such as enameled copper wire or metal parts with surface dents.
  • Automotive component manufacturing or assembly processes.
  • Inspection/measurement tasks such as position tolerance, gaps, and surface deviation.
Typical capture time (s): 0.6–0.9

(1) The recommended application scenarios differ between color and monochrome models:

Color

  • Demonstration / research.

  • Target objects are colored (except blue), and 2D images / 2D texture images are used for deep learning.

Monochrome

  • Target objects are blue.

  • Higher requirements for picking accuracy.

  • Ambient light intensity is high.
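
To relate the typical capture time in the table above to a project's takt time, one rough check is to add the capture time to estimated vision processing and robot motion times and compare the sum with the available cycle time. The sketch below uses placeholder processing and motion times; splitting the cycle into these three parts is a simplifying assumption for illustration, not a Mech-Mind guideline.

```python
def meets_takt_time(capture_time_s, processing_time_s, robot_motion_s, takt_time_s):
    """Rough feasibility check: does one pick cycle (image capture + vision
    processing + robot motion) fit within the project takt time?"""
    cycle_time_s = capture_time_s + processing_time_s + robot_motion_s
    return cycle_time_s <= takt_time_s


# Example: upper end of PRO S-GL's typical capture time (0.6 s) with
# placeholder processing and robot motion times against a 5 s takt time.
print(meets_takt_time(0.6, 1.5, 2.5, 5.0))  # True: 4.6 s <= 5.0 s
```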
