Machine vision
Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for industrial applications such as automatic inspection, process control, and robot guidance.[1][2] The scope of MV is broad.[2][3][4] MV is related to, though distinct from, computer vision.[2]
Applications
The primary uses for machine vision are automatic inspection and industrial robot guidance.[5] Other machine vision applications include:
- Automated Train Examiner (ATEx) Systems
- Automatic PCB inspection
- Wood quality inspection
- Final inspection of sub-assemblies
- Engine part inspection
- Label inspection on products
- Checking medical devices for defects
- Final inspection cells
- Robot guidance and checking orientation of components
- Packaging inspection
- Medical vial inspection
- Food pack checks
- Verifying engineered components[4]
- Wafer dicing
- Reading of serial numbers
- Inspection of saw blades
- Inspection of ball grid arrays (BGAs)
- Surface inspection
- Measurement of spark plugs
- Molding flash detection
- Inspection of punched sheets
- 3D plane reconstruction with stereo
- Pose verification of resistors
- Classification of non-woven fabrics
Methods
The term machine vision methods refers both to the process of designing and building an MV solution[6][7] and to the technical process that occurs during the operation of the solution. Here the latter is addressed. As of 2006, there was little standardization in the interfacing and configurations used in MV. This includes user interfaces, interfaces for the integration of multi-component systems, and automated data interchange.[8] Nonetheless, the first step in the MV sequence of operation is acquisition of an image, typically using cameras, lenses, and lighting that has been designed to provide the differentiation required by subsequent processing.[9][10] MV software packages then employ various digital image processing techniques to extract the required information, and often make decisions (such as pass/fail) based on the extracted information.[11]
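As a rough illustration of this sequence, the following minimal Python sketch acquires a single frame, applies a simple grayscale processing step, and emits a pass/fail decision. It assumes the OpenCV library and a camera available as device 0; the brightness criterion is an arbitrary placeholder rather than a rule drawn from the cited sources, and real systems use application-specific lighting, optics, measurements, and tolerances.

```python
# Minimal sketch of the MV sequence of operation: acquire an image,
# process it, and derive a pass/fail decision.
# Assumes OpenCV (cv2) is installed and a camera is available as device 0;
# the brightness threshold below is an arbitrary placeholder criterion.
import cv2

def acquire_image(device_index=0):
    """Acquire a single frame from a camera."""
    cap = cv2.VideoCapture(device_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Image acquisition failed")
    return frame

def inspect(frame, min_mean_brightness=60.0):
    """Toy processing step: convert to grayscale and apply a simple criterion."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return gray.mean() >= min_mean_brightness

if __name__ == "__main__":
    image = acquire_image()
    print("PASS" if inspect(image) else "FAIL")
```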
Imaging
While conventional (2D visible light) imaging is most commonly used in MV, alternatives include imaging various infrared bands,[12] line scan imaging, 3D imaging of surfaces and X-ray imaging.[5] Key divisions within MV 2D visible light imaging are monochromatic vs. color, resolution, and whether or not the imaging process is simultaneous over the entire image, making it suitable for moving processes.[13] The most commonly used method for 3D imaging is scanning-based triangulation, which utilizes motion of the product or the imager during the imaging process. Other 3D methods used for machine vision are time of flight, grid-based and stereoscopic.[14]
The imaging device (e.g. camera) can either be separate from the main image processing unit or combined with it, in which case the combination is generally called a smart camera or smart sensor.[15][16] When separated, the connection may be made to specialized intermediate hardware called a frame grabber, using either a standardized (Camera Link, CoaXPress) or custom interface.[17][18][19][20] MV implementations have also used digital cameras capable of direct connection (without a frame grabber) to a computer via FireWire, USB or Gigabit Ethernet interfaces.[20][21]
Though the vast majority of machine vision applications are solved using two-dimensional imaging, machine vision applications utilizing 3D imaging are a growing niche within the industry.[14][22] One method is grid-array-based systems that use pseudorandom structured light, as employed by the Microsoft Kinect system circa 2012.[23][24] Another method of generating a 3D image is to use laser triangulation, where a laser is projected onto the surfaces of an object and the deviation of the line is used to calculate the shape. In machine vision this is accomplished with a scanning motion, either by moving the workpiece or by moving the camera and laser imaging system. Stereoscopic vision is used in special cases involving unique features present in both views of a pair of cameras.[25]
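The geometry of laser triangulation can be illustrated with a simplified model: if the laser sheet is perpendicular to a reference plane and the camera views it at an angle θ from the laser axis, a height change h shifts the imaged laser line laterally by approximately h·tan(θ). The sketch below inverts this relationship; the pixel scale, angle, and shift values are illustrative assumptions, and real systems rely on a full camera and laser-plane calibration rather than a constant mm-per-pixel scale.

```python
# Simplified laser-triangulation height calculation (hedged sketch).
# Assumes the laser sheet is perpendicular to the reference plane, the camera
# views it at angle theta from the laser axis, and the optics give an
# approximately constant scale in mm per pixel.
import math

def height_from_line_shift(shift_pixels, mm_per_pixel, camera_angle_deg):
    """Convert the observed shift of the laser line (in pixels) to a height in mm."""
    shift_mm = shift_pixels * mm_per_pixel
    return shift_mm / math.tan(math.radians(camera_angle_deg))

# Example: a 12-pixel shift at 0.05 mm/pixel with the camera at 30 degrees
# corresponds to roughly 1.04 mm of height.
print(round(height_from_line_shift(12, 0.05, 30), 2))
```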
Image processing
After an image is acquired, it is processed.[19] Machine vision image processing methods include:
- Stitching/Registration: Combining adjacent 2D or 3D images.
- Filtering (e.g. morphological filtering)[26]
- Thresholding: Thresholding starts with setting or determining a gray value that will be useful for the following steps. The value is then used to separate portions of the image, and sometimes to transform each portion of the image to simply black or white based on whether it is below or above that grayscale value (see the code sketch following this list).[27]
- Pixel counting: counts the number of light or dark pixels
- Segmentation: Partitioning a digital image into multiple segments to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.[28][29]
- Inpainting: reconstructing missing or damaged portions of an image
- Edge detection: finding object edges[30]
- Color analysis: identifying parts, products and items using color, assessing quality from color, and isolating features using color.[5]
- Blob discovery and manipulation: inspecting an image for discrete blobs of connected pixels (e.g. a black hole in a grey object) as image landmarks. These blobs frequently represent optical targets for machining, robotic capture, or manufacturing failure.[31]
- Neural net processing: weighted and self-training multi-variable decision making[32]
- Pattern recognition including template matching. Finding, matching, and/or counting specific patterns. This may include location of an object that may be rotated, partially hidden by another object, or varying in size.[33]
- Barcode, Data Matrix and "2D barcode" reading[34]
- Optical character recognition: automated reading of text such as serial numbers[35]
- Gauging/Metrology: measurement of object dimensions (e.g. in pixels, inches or millimeters)[36]
- Comparison against target values to determine a "pass or fail" or "go/no go" result. For example, with code or barcode verification, the read value is compared to the stored target value. For gauging, a measurement is compared against the proper value and tolerances. For verification of alphanumeric codes, the OCR'd value is compared to the proper or target value. For inspection of blemishes, the measured size of the blemishes may be compared to the maximums allowed by quality standards.[34]
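Several of the steps above, namely thresholding, blob discovery, and gauging with a pass/fail comparison, can be combined into a brief sketch. The example below uses OpenCV and NumPy; the threshold, pixel scale, nominal dimension, tolerance, and input file name are illustrative assumptions rather than values from the cited sources.

```python
# Hedged sketch combining thresholding, blob (connected-component) analysis,
# and gauging with a pass/fail comparison. Threshold, scale, nominal value
# and tolerance are illustrative placeholders.
import cv2
import numpy as np

def gauge_largest_blob(gray, threshold=128, mm_per_pixel=0.1):
    """Threshold the image, find the largest dark blob, and return its width in mm."""
    # Thresholding: pixels darker than the threshold become foreground (255).
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)
    # Blob discovery: label connected regions of foreground pixels.
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    if num_labels < 2:  # label 0 is the background
        return None
    # Pick the largest blob by area (ignoring the background label).
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    width_pixels = stats[largest, cv2.CC_STAT_WIDTH]
    return width_pixels * mm_per_pixel

def pass_fail(measured_mm, nominal_mm=25.0, tolerance_mm=0.5):
    """Gauging decision: compare a measurement against a nominal value and tolerance."""
    return measured_mm is not None and abs(measured_mm - nominal_mm) <= tolerance_mm

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if gray is None:
    raise SystemExit("part.png not found")
width = gauge_largest_blob(gray)
print("PASS" if pass_fail(width) else "FAIL")
```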
Outputs
A common output from machine vision systems is a pass/fail decision. These decisions may in turn trigger mechanisms that reject failed items or sound an alarm. Other common outputs include object position and orientation information for robot guidance systems.[5] Additionally, output types include numerical measurement data, data read from codes and characters, displays of the process or results, stored images, alarms from automated space monitoring MV systems, and process control signals.[6][10]
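As a rough sketch of how such outputs might be delivered downstream, the example below packages a pass/fail decision and a measured position into a simple JSON message and sends it over TCP. The controller address, port, and message format are hypothetical; real installations commonly use digital I/O or industrial fieldbus protocols instead.

```python
# Hedged sketch of emitting machine vision outputs: a pass/fail decision and
# a measured position sent to a downstream controller. The address, port and
# message format are hypothetical.
import json
import socket

def send_result(passed, x_mm, y_mm, angle_deg, host="192.0.2.10", port=5000):
    """Send a JSON-encoded inspection result to a (hypothetical) controller."""
    message = json.dumps({
        "result": "PASS" if passed else "FAIL",
        "x_mm": x_mm,
        "y_mm": y_mm,
        "angle_deg": angle_deg,
    }).encode("utf-8")
    with socket.create_connection((host, port), timeout=1.0) as conn:
        conn.sendall(message)

# Example: report a passing part located at (12.3, 45.6) mm, rotated 7.5 degrees.
# send_result(True, 12.3, 45.6, 7.5)
```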
Market
As recently as 2006, one industry consultant reported that MV represented a $1.5 billion market in North America.[37] However, the editor-in-chief of an MV trade magazine asserted that "machine vision is not an industry per se" but rather "the integration of technologies and products that provide services or applications that benefit true industries such as automotive or consumer goods manufacturing, agriculture, and defense."[3]
As of 2006, experts estimated that MV had been employed in less than 20% of the applications for which it is potentially useful.[38]
References
- ^ Steger, Carsten; Ulrich, Markus; Wiedemann, Christian (2008). Machine Vision Algorithms and Applications. Weinheim: Wiley-VCH. p. 1. ISBN 978-3-527-40734-7. Retrieved 2010-11-05.
- ^ a b c Graves, Mark; Batchelor, Bruce G. (2003). Machine Vision for the Inspection of Natural Products. Springer. p. 5. ISBN 978-1-85233-525-0. Retrieved 2010-11-02.
- ^ a b Holton, W. Conard (October 2010). "By Any Other Name". Vision Systems Design. 15 (10). ISSN 1089-3709. Retrieved 2013-03-05.
- ^ a b Relf, Christopher G. (2004). Image Acquisition and Processing with LabVIEW. Vol. 1. CRC Press. ISBN 978-0-8493-1480-3. Retrieved 2010-11-02.
- ^ a b c d Turek, Fred D. (June 2011). "Machine Vision Fundamentals, How to Make Robots See". NASA Tech Briefs. 35 (6): 60–62. Retrieved 2011-11-29.
- ^ a b West, Perry. A Roadmap for Building a Machine Vision System. pp. 1–35.
- ^ Dechow, David (January 2009). "Integration: Making it Work". Vision & Sensors: 16–20. Retrieved 2012-05-12.
- ^ Hornberg, Alexander (2006). Handbook of Machine Vision. Wiley-VCH. p. 709. ISBN 978-3-527-40584-8. Retrieved 2010-11-05.
- ^ Hornberg, Alexander (2006). Handbook of Machine Vision. Wiley-VCH. p. 427. ISBN 978-3-527-40584-8. Retrieved 2010-11-05.
- ^ a b Demant, C.; Streicher-Abel, B.; Waszkewitz, P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. ISBN 3-540-66410-6.
- ^ Hornberg, Alexander (2006). Handbook of Machine Vision. Wiley-VCH. p. 429. ISBN 978-3-527-40584-8. Retrieved 2010-11-05.
- ^ Wilson, Andrew (April 2011). "The Infrared Choice". Vision Systems Design. 16 (4): 20–23. Retrieved 2013-03-05.
- ^ West, Perry. High Speed, Real-Time Machine Vision. CyberOptics. pp. 1–38.
- ^ a b Murray, Charles J (February 2012). "3D Machine Vision Comes into Focus". Design News. Retrieved 2012-05-12.
- ^ Belbachir, Ahmed Nabil, ed. (2009). Smart Cameras. Springer. ISBN 978-1-4419-0952-7.
- ^ Dechow, David (February 2013). "Explore the Fundamentals of Machine Vision: Part 1". Vision Systems Design. 18 (2): 14–15. Retrieved 2013-03-05.
- ^ Wilson, Andrew (May 31, 2011). "CoaXPress standard gets camera, frame grabber support". Vision Systems Design. Retrieved 2012-11-28.
- ^ Wilson, Dave (November 12, 2012). "Cameras certified as compliant with CoaXPress standard". Vision Systems Design. Retrieved 2013-03-05.
- ^ a b Davies, E.R. (1996). Machine Vision: Theory, Algorithms, Practicalities (2nd ed.). Harcourt & Company. ISBN 978-0-12-206092-2.
- ^ a b Dinev, Petko (March 2008). "Digital or Analog? Selecting the Right Camera for an Application Depends on What the Machine Vision System is Trying to Achieve". Vision & Sensors: 10–14. Retrieved 2012-05-12.
- ^ Wilson, Andrew (December 2011). "Product Focus - Looking to the Future of Vision". Vision Systems Design. 16 (12). Retrieved 2013-03-05.
- ^ Davies, E.R. (2012). Computer and Machine Vision: Theory, Algorithms, Practicalities (4th ed.). Academic Press. pp. 410–411. ISBN 9780123869081. Retrieved 2012-05-13.
- ^ Zhang, Yueyi; Xiong, Zhiwei; Wu, Feng. "Hybrid Structured Light for Scalable Depth Sensing". University of Science and Technology of China, Hefei, China / Microsoft Research Asia, Beijing, China. http://research.microsoft.com/en-us/people/fengwu/depth-icip-12.pdf
- ^ Morano, R.; Ozturk, C.; Conn, R.; Dubin, S.; Zietz, S.; Nissano, J. (1998). "Structured light using pseudorandom codes". IEEE Transactions on Pattern Analysis and Machine Intelligence. 20 (3): 322–327.
- ^ Turek, Fred; Jackson, Kim (March 2014). "3-D Imaging: A Practical Overview for Machine Vision". Quality Magazine. 53 (3): 6–8.
- ^ Demant, C.; Streicher-Abel, B.; Waszkewitz, P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 39. ISBN 3-540-66410-6.
- ^ Demant, C.; Streicher-Abel, B.; Waszkewitz, P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 96. ISBN 3-540-66410-6.
- ^ Shapiro, Linda G.; Stockman, George C. (2001). Computer Vision. New Jersey: Prentice-Hall. pp. 279–325. ISBN 0-13-030796-3.
- ^ Barghout, Lauren (2014). "Visual Taxometric Approach Image Segmentation Using Fuzzy-Spatial Taxon Cut Yields Contextually Relevant Regions". Information Processing and Management of Uncertainty in Knowledge-Based Systems. CCIS. Springer-Verlag.
- ^ Demant, C.; Streicher-Abel, B.; Waszkewitz, P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 108. ISBN 3-540-66410-6.
- ^ Demant, C.; Streicher-Abel, B.; Waszkewitz, P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 95. ISBN 3-540-66410-6.
- ^ Turek, Fred D. (March 2007). "Introduction to Neural Net Machine Vision". Vision Systems Design. 12 (3). Retrieved 2013-03-05.
- ^ Demant, C.; Streicher-Abel, B.; Waszkewitz, P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 111. ISBN 3-540-66410-6.
- ^ a b Demant, C.; Streicher-Abel, B.; Waszkewitz, P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 125. ISBN 3-540-66410-6.
- ^ Demant, C.; Streicher-Abel, B.; Waszkewitz, P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 132. ISBN 3-540-66410-6.
- ^ Demant, C.; Streicher-Abel, B.; Waszkewitz, P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 191. ISBN 3-540-66410-6.
- ^ Hapgood, Fred (December 15, 2006 – January 1, 2007). "Factories of the Future". CIO. 20 (6): 46. ISSN 0894-9301. Retrieved 2010-10-28.
- ^ Hornberg, Alexander (2006). Handbook of Machine Vision. Wiley-VCH. p. 694. ISBN 978-3-527-40584-8. Retrieved 2010-11-05.