
Quickly master the learning and use of visual labeling machines

Time: 2025-04-17


1、 Quick Start Framework for Visual Labeling Machine

This article is intended to help users quickly master the use of the visual labeling machine. Newcomers can follow the learning path "principle cognition → hardware configuration → system operation → modeling practice → optimization and upgrading". The structured guide below will help you acquire the core skills in a short time.


2、 Understand the logical principles of device operation

The working principle of the visual labeling machine
1. Core workflow:

Image acquisition → feature recognition → coordinate conversion → motion control → labeling → quality feedback


Image acquisition: Capture product images through industrial cameras


Feature recognition: Algorithm extracts label position feature points


Coordinate Conversion: Convert the image coordinate system to a mechanical coordinate system


Motion control: Drive the sticker head to accurately reach the target position

Labeling: the applicator head presses the label onto the product


Closed-loop feedback: a second image is captured to verify labeling accuracy (optional)
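The six-stage cycle above can be sketched in code. This is a minimal illustrative stub, not a real machine API; the class, method names, and pixel equivalent are all assumptions for the example.

```python
# Minimal sketch of the labeling cycle; DemoStation is an illustrative
# stand-in for the camera, vision software, and motion hardware.

class DemoStation:
    def grab(self):
        # Image acquisition: return a fake "image" carrying one detection.
        return {"label_px": (412, 300)}

    def find_label_anchor(self, image):
        # Feature recognition: extract the label anchor point in pixels.
        return image["label_px"]

    def to_machine_coords(self, px, mm_per_px=0.02):
        # Coordinate conversion: pixels -> machine frame via pixel equivalent.
        return (px[0] * mm_per_px, px[1] * mm_per_px)

    def move_and_label(self, xy_mm):
        # Motion control + labeling: drive the head and apply the label.
        return {"applied_at_mm": xy_mm}

def run_cycle(station):
    image = station.grab()
    anchor = station.find_label_anchor(image)
    xy_mm = station.to_machine_coords(anchor)
    result = station.move_and_label(xy_mm)
    return result  # closed-loop feedback (re-imaging) would follow here

result = run_cycle(DemoStation())
```

Each stage maps one-to-one onto the workflow list, which makes it easy to swap a stub for real hardware one subsystem at a time.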


2. Composition of key subsystems

A simple breakdown for learning:

Imaging system: the camera, lens, and light source, and the matching of their optical parameters;

Processing system: industrial computer/embedded processor, software platform operation logic;

Execution system: servo motor, pneumatic components, motion control parameter settings.

3、 Quick Course in Visual Hardware Fundamentals

1. Three elements of camera selection

Resolution: Select based on detection accuracy (formula: accuracy=field of view width/number of pixels)


Conventional applications: 2-5 megapixels (such as the Basler acA2000)


High-precision scenes: 12 megapixels and above (such as the Hikvision MV-CH120)


Frame rate: must be ≥ the production-line pace (e.g. a line running 3600 pieces per hour is one part per second, so the camera needs at least 1 frame per second per part, with headroom for exposure and transfer)


Interface type: GigE Vision suits most scenarios; 10GigE suits high-speed production lines
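The two sizing rules above are simple arithmetic; the field-of-view width and line rate below are assumed example values.

```python
def pixel_accuracy_mm(fov_width_mm, horizontal_pixels):
    """Theoretical pixel resolution: accuracy = field-of-view width / pixels."""
    return fov_width_mm / horizontal_pixels

def min_frame_rate(parts_per_hour):
    """One image per part: required fps = line rate in parts per second."""
    return parts_per_hour / 3600.0

# A 2048-px-wide camera viewing a 100 mm field resolves ~0.049 mm/pixel,
# and a 3600 parts/hour line needs at least 1 frame per second.
acc = pixel_accuracy_mm(100, 2048)
fps = min_frame_rate(3600)
```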


2. Quick Reference Table for Lens Parameters

Parameter / calculation method / example scenario:

Focal length (f): f = working distance × sensor size ÷ field-of-view width (e.g. for a 30 cm field-of-view width at the given working distance, a 16 mm lens was selected)

Depth of field (DOF): DOF ≈ 2 × permissible circle of confusion × F-number² (at an F8 aperture the depth of field is roughly ±3 mm)

Distortion: industrial lenses should stay below 0.1%; telecentric lenses can reach 0.01%
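The two lens formulas can be checked numerically. The sensor width (8.8 mm, a common 2/3" format), the 160 mm field of view, and the circle-of-confusion value are assumptions for illustration, not values from the original table.

```python
def focal_length_mm(working_distance_mm, sensor_width_mm, fov_width_mm):
    """Thin-lens approximation: f = working distance * sensor width / FOV width."""
    return working_distance_mm * sensor_width_mm / fov_width_mm

def depth_of_field_mm(circle_of_confusion_mm, f_number):
    """DOF = 2 * permissible circle of confusion * F-number^2 (per the table)."""
    return 2 * circle_of_confusion_mm * f_number ** 2

# Assumed example: a 2/3" sensor (8.8 mm wide) imaging a 160 mm field from
# 300 mm away calls for ~16.5 mm, so a stock 16 mm lens is a natural pick.
f = focal_length_mm(300, 8.8, 160)
dof = depth_of_field_mm(0.025, 8)  # assumed 0.025 mm circle of confusion
```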

3. Light source selection skills

Bar light: suitable for flat objects (such as paper boxes)


Coaxial light: eliminates reflection (such as on metal surfaces)


Dome light: solves shadow problems on complex curved surfaces


4、 Four Step System Operation Method

Step 1: Basic parameter settings

Set camera IP address and trigger mode (hard trigger/soft trigger);


Configure the pixel equivalent (mm/pixel); for example, 0.02 mm/pixel means 50 pixels per millimetre;


Establish a mapping relationship between the origin of the coordinate system and the reference point of the robotic arm.
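Step 1 can be summarized in a short sketch. The configuration keys and IP address are hypothetical placeholders (vendor software exposes its own names); the mapping function shows only the scale-and-translate part of the pixel-to-machine relationship.

```python
# Hypothetical Step 1 settings; the keys are illustrative, not a vendor API.
camera_config = {
    "ip": "192.168.1.10",    # GigE camera address on the machine subnet
    "trigger": "hardware",   # hard trigger wired to the line's part sensor
    "mm_per_pixel": 0.02,    # pixel equivalent: 0.02 mm/pixel = 50 px per mm
}

def px_to_mm(pt_px, origin_px, mm_per_pixel):
    """Map an image point to machine coordinates relative to the calibrated
    origin (scale + translation only; rotation is handled later by the
    nine-point calibration)."""
    return ((pt_px[0] - origin_px[0]) * mm_per_pixel,
            (pt_px[1] - origin_px[1]) * mm_per_pixel)

offset_mm = px_to_mm((150, 100), (100, 100), camera_config["mm_per_pixel"])
```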


Step 2: Core Methods of Visual Modeling

(1) Template matching modeling

Capture the ROI area of the standard image


Adjust the matching threshold (recommended 0.7-0.9)


Set the allowable rotation/scaling range (±5°, ±10%)
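The matching score behind template matching can be sketched in pure NumPy. This is the normalized cross-correlation measure that OpenCV's `cv2.matchTemplate` computes with `TM_CCOEFF_NORMED` (a deliberately slow reference loop, not a production implementation); the demo image is synthetic.

```python
import numpy as np

def match_template_ncc(image, template):
    """Normalized cross-correlation score map: 1.0 is a perfect match,
    so accept only scores above the recommended 0.7-0.9 threshold."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            out[y, x] = (w * t).sum() / denom if denom > 1e-12 else 0.0
    return out

# Demo: the template is cut straight from the image, so the best score is 1.0
# at the cut position (4, 4).
img = np.zeros((20, 20)); img[5:10, 5:10] = 1.0
tpl = img[4:11, 4:11].copy()
scores = match_template_ncc(img, tpl)
best = np.unravel_index(scores.argmax(), scores.shape)
```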


(2) Feature point modeling

Select 3 or more stable feature points


Establish a topological relationship model for feature points


Set matching tolerance (recommended ± 2 pixels)


(3) Deep learning modeling

Collect 100 sample images


Label the target area (recommended LabelImg tool)


Train a lightweight YOLO model (3000+ iterations)


Step 3: Motion calibration

Establish the vision-to-machine coordinate transformation matrix using the nine-point calibration method


Verify the calibration accuracy (error should be < 0.1 mm)
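The computation behind nine-point calibration is a least-squares fit of a 2×3 affine matrix (OpenCV's `cv2.estimateAffine2D` does the same with outlier rejection). In this sketch the "machine" positions are synthesized from an assumed 0.02 mm/px scale plus an offset, standing in for physically measured points.

```python
import numpy as np

def fit_affine(pixel_pts, machine_pts):
    """Least-squares fit of the 2x3 affine matrix mapping pixel coordinates
    to machine coordinates -- the core of nine-point calibration."""
    P = np.asarray(pixel_pts, dtype=float)
    M = np.asarray(machine_pts, dtype=float)
    A = np.hstack([P, np.ones((len(P), 1))])   # rows of [x, y, 1]
    T, *_ = np.linalg.lstsq(A, M, rcond=None)  # 3x2 solution
    return T.T                                  # 2x3 affine matrix

def apply_affine(T, pt):
    return T @ np.array([pt[0], pt[1], 1.0])

# Nine calibration dots on a 3x3 grid; machine answers use an assumed
# 0.02 mm/px scale and an offset, standing in for measured positions.
pix = [(i * 100.0, j * 100.0) for i in range(3) for j in range(3)]
mach = [(0.02 * x + 5.0, 0.02 * y - 3.0) for x, y in pix]
T = fit_affine(pix, mach)

# Verify on a held-out point; the < 0.1 mm acceptance check goes here.
err = np.linalg.norm(apply_affine(T, (150, 150)) - np.array([8.0, 0.0]))
```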


Step 4: Production testing

Set NG product judgment rules (position deviation, angle deviation, label missing)


Optimize the detection cycle (end-to-end delay from trigger to output < 50 ms)


5、 Three practical skills for modeling optimization

1. Anti-interference optimization

Add preprocessing filters (Gaussian blur, histogram equalization)


Set dynamic ROI area (automatically adjusted according to product position)


Enable a multi-template voting mechanism (take the median result of 3 templates)
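Of the preprocessing filters above, histogram equalization is easy to show in pure NumPy; it is the same remapping idea as OpenCV's `cv2.equalizeHist` (in OpenCV you would typically pair it with `cv2.GaussianBlur` for denoising). The washed-out test image is synthetic.

```python
import numpy as np

def equalize_hist(gray):
    """Histogram equalization for an 8-bit image: remap intensities through
    the normalized cumulative histogram so a low-contrast label spans the
    full 0-255 range and becomes easier to match."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                     # first non-empty bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)   # lookup table per level
    return lut[gray]

# A washed-out image using only gray levels 100-149 spans 0-255 afterwards.
flat = np.arange(100, 150, dtype=np.uint8).reshape(5, 10)
eq = equalize_hist(flat)
```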


2. Speed improvement plan

Use image-pyramid hierarchical search (coarse localization at 1/4 resolution first)


Limit the search-angle range (narrowing ±15° to ±5° can speed matching up by about 40%)


Enable GPU acceleration (3-5× faster on the NVIDIA Jetson platform)
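The pyramid idea from the first bullet can be sketched on a plain response map: locate the peak at 1/4 resolution, then refine only in a small full-resolution window. The downsampler is plain 2×2 block averaging (the idea behind `cv2.pyrDown`, minus the Gaussian prefilter), and the single-peak map is a synthetic stand-in for a matching-score image.

```python
import numpy as np

def downsample2(img):
    """One pyramid level: halve resolution by 2x2 block averaging."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid_peak(img, levels=2):
    """Coarse-to-fine search: find the peak at 1/4 resolution (levels=2),
    then refine only inside a small full-resolution window around it."""
    coarse = img
    for _ in range(levels):
        coarse = downsample2(coarse)
    cy, cx = np.unravel_index(coarse.argmax(), coarse.shape)
    s = 2 ** levels                               # coarse-to-fine scale
    y0, x0 = max(cy * s - s, 0), max(cx * s - s, 0)
    win = img[y0:y0 + 3 * s, x0:x0 + 3 * s]       # small refinement window
    dy, dx = np.unravel_index(win.argmax(), win.shape)
    return int(y0 + dy), int(x0 + dx)

# Demo response map with a single strong peak at (37, 21).
resp = np.zeros((64, 64)); resp[37, 21] = 1.0
peak = pyramid_peak(resp)
```

The speedup comes from searching the full frame only at 1/16 of the pixels, then touching a tiny window at full resolution.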


3. Precision enhancement strategy

Sub-pixel algorithm improvement (accuracy up to 1/10 pixel)


Multi camera data fusion (binocular vision eliminates occlusion effects)


Temperature-compensation model (automatic recalibration for thermal drift every 2 hours)
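A common trick behind the sub-pixel claim is parabolic interpolation: fit a parabola through the integer peak of a matching-score profile and its two neighbours, and read off the vertex. This 1-D sketch uses synthetic samples; accuracy of roughly 1/10 pixel is realistic on smooth peaks.

```python
import numpy as np

def subpixel_peak_1d(scores):
    """Refine an integer peak to sub-pixel position by fitting a parabola
    through the peak and its two neighbours (vertex of the fitted quadratic)."""
    i = int(np.argmax(scores))
    if i == 0 or i == len(scores) - 1:
        return float(i)                  # edge peak: no neighbours to fit
    l, c, r = scores[i - 1], scores[i], scores[i + 1]
    return i + 0.5 * (l - r) / (l - 2.0 * c + r)

# Samples of a parabola whose true peak sits at x = 3.3, between pixels;
# the refinement recovers it exactly because the signal is quadratic.
samples = [-(x - 3.3) ** 2 for x in range(7)]
refined = subpixel_peak_1d(samples)
```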


6、 Technology Trends and Learning Resources

1. Frontier technology direction

3D visual labeling: structured-light technology enables labeling on curved surfaces (accuracy up to ±0.05 mm)


Edge AI: Jetson Orin platform achieves 200fps real-time detection


Digital twin: virtual commissioning cuts on-site commissioning time by 50%


2. Recommended learning path

Master the basic operators of Halcon/VisionPro


Complete 3 or more hands-on projects (e.g. labeling medicine boxes and bottles)


Learn Python + OpenCV to implement algorithm optimizations


3. Recommended free resources

MOOC website "Practical Introduction to Industrial Vision"


GitHub Open Source Project: AutoLabel (Automatic Labeling Tool)


Official technical white-paper libraries from Hikvision and other vendors


7、 Frequently Asked Questions Quick Reference Manual

Symptom → troubleshooting steps and solution:

1. Image blur: Check the lens focal length and aperture value, refocus, and adjust the F value to 4-5.6;

2. Matching failed: Verify the template update status and enable the dynamic template update function;

3. Coordinate offset: Perform a nine point calibration review and update the calibration matrix parameters;

4. Slow detection speed: Analyze the algorithm's time distribution; enable GPU acceleration or a lighter model.

By mastering the methods and toolchain above, combined with daily hands-on practice, you can quickly become proficient in applying the Longhai Huanyu visual labeling machine. We suggest focusing on modeling optimization and the integration of emerging technologies; working closely with Longhai Huanyu will strengthen your competitiveness in the field of intelligent manufacturing.
