"Software with an IQ"
|Industry||Automation; Machine Vision; Software; Artificial Intelligence; Big Data; Cybernetics|
|Founded||2001, Commenced Operations 2008|
|Key people||David A. Peters, CEO; Hob Wubbena, VP; Richard Alan Peters II, Ph.D., CTO|
|Products||Neocortex; Spatial Vision; Unlimited Depalletization; 3D Inspection; engineering services|
Universal Robotics, Inc. is a software engineering and robotics company headquartered in Nashville, Tennessee. The company combines artificial intelligence with multi-dimensional sensing and motion control to help companies automate processes, from making machines more flexible to analyzing big data.
Founded in 2001 and commencing operations in 2008, the company specializes in complex processes not previously automated. Universal’s flagship intelligence software, Neocortex, enables robots to perform tasks too costly, dangerous or impossible for humans to undertake. The technology, funded by DARPA and NASA, was originally co-developed through a seven-year partnership between NASA and Vanderbilt University and is employed in NASA’s Robonaut.
By combining the Neocortex intelligence platform with modular sensing and control software products, Universal Robotics currently provides flexible applications for materials handling. Today’s software products include 3D machine vision products (Spatial Vision, Spatial Vision Robotics, and Spatial Vision Inspection) and automated robot programming (Autonomy). Applications include Unlimited Box Moving, Unlimited Depalletization, Random Bin Picking, Random Bag Picking, and 3D Inspection.
Products and services
Universal Robotics offers three modular software product families: Neocortex® provides real-time intelligence; Spatial Vision® performs multi-dimensional sensing (Spatial Vision Robotics for 3D vision guidance, Spatial Vision Inspection for 3D visual inspection); and Autonomy provides automated, enhanced control of robots and machines.
Real-time intelligence software--Neocortex
Traditional AI gives robots pre-programmed actions for anticipated variables, so they fail whenever an unprogrammed variable is encountered. Neocortex is instead based on the pattern of learning found throughout nature. The patent-protected software allows a machine to develop its own understanding by sensing and acting in the physical world, using information from up to 70 channels of sensor data. NVIDIA GPUs are used to speed up processing.
With Neocortex, machines learn from their experiences. It enables robots to perform nearly any task that requires adaptive human input. Neocortex allows a machine to determine its actions by remembering what worked and what failed in past attempts. It responds dynamically to change with real-time sensory input and uses memory to match what it knows with what it is learning. Its database grows over time, allowing for adaptation. With enough experience, Neocortex can enable a machine to draw correlations and attempt an entirely new solution to a given task.
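The learn-from-experience behaviour described above can be illustrated with a toy memory-based policy: store each (situation, action, outcome) triple and, for a new situation, reuse the action from the most similar past success. This is a minimal illustrative sketch, not Universal's implementation; all names and the distance-based matching are assumptions.

```python
import math

class ExperienceMemory:
    """Toy memory-based policy (illustrative only): recall the
    action from the most similar past situation that succeeded."""

    def __init__(self):
        self.episodes = []  # list of (situation, action, succeeded)

    def record(self, situation, action, succeeded):
        self.episodes.append((list(situation), action, succeeded))

    def choose_action(self, situation, default=None):
        best, best_dist = default, float("inf")
        for past, action, succeeded in self.episodes:
            if not succeeded:
                continue  # only imitate what worked before
            dist = math.dist(situation, past)
            if dist < best_dist:
                best, best_dist = action, dist
        return best

memory = ExperienceMemory()
memory.record([0.0, 0.0], "grip-top", True)
memory.record([1.0, 1.0], "grip-side", False)
memory.record([2.0, 2.0], "grip-side-slow", True)
print(memory.choose_action([1.8, 1.9]))  # nearest successful episode wins
```

A real system would generalize across many sensor channels rather than matching raw coordinates, but the same remember-and-reuse principle applies.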
Neocortex technology was developed at Vanderbilt University and NASA, where it was used as the “brain” of Robonaut. Today, Neocortex enables machines to perform highly specialized, automated tasks that require them to react and adapt to their environments.
For example, in the materials handling industry, Neocortex helps machines adapt to mixed-size boxes for palletizing and de-palletizing. Traditional AI has not typically worked in this area because of the difficulty in programming for every circumstance. Since Neocortex helps machines learn, it can adapt to a highly variable palletization process.
3D vision software--Spatial Vision
Universal’s Spatial Vision line of products was created during the development of Neocortex. The products provide 3D vision and include Spatial Vision, Spatial Vision Robotics, and Spatial Vision Inspection.
3D vision systems have many benefits over 2D, including better accuracy and object identification. Although technology advances have reduced expense and complexity, adoption has been slow because of long-standing perceptions that such systems are costly and difficult to maintain.
Spatial Vision software combines the images from two off-the-shelf USB webcams to determine a point’s 3D coordinates. This 3D data can be used to measure, identify objects, and calibrate to help guide robots with higher accuracy than 2D systems. Spatial Vision provides 0.1-pixel (sub-pixel) accuracy but does not require precision mounting or specialized cameras, which makes it easy to set up at a fraction of the cost of traditional 3D vision systems.
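The underlying principle, recovering a point's 3D coordinates from two views, can be sketched with the textbook depth-from-disparity relation for a rectified stereo pair. This is a generic stereo formulation under assumed parameters, not Spatial Vision's actual algorithm.

```python
def triangulate_rectified(xl, xr, y, focal_px, baseline_m):
    """Depth from disparity for a rectified stereo camera pair.

    xl, xr : x-coordinates (pixels) of the same point in the left
             and right images, measured from each optical axis.
    y      : shared y-coordinate (pixels) in the rectified images.
    Returns (X, Y, Z) in metres, in the left-camera frame.
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    Z = focal_px * baseline_m / disparity   # depth
    X = xl * Z / focal_px                   # back-project via similar triangles
    Y = y * Z / focal_px
    return (X, Y, Z)

# Hypothetical setup: 800 px focal length, 12 cm baseline, 16 px disparity
X, Y, Z = triangulate_rectified(xl=40.0, xr=24.0, y=10.0,
                                focal_px=800.0, baseline_m=0.12)
```

In practice, calibration recovers each camera's intrinsics and the pair's relative pose so that arbitrary (non-rectified) mountings can be handled, which is what lets a system like this avoid precision mounting.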
The product provides programmatic interfaces to 3D calibration files for custom C, C++ and MATLAB applications. It supports GigE Allied Vision cameras and accuracy tools, such as snap-to-corner measurement assistance and accuracy calculator displays.
Spatial Vision can be used for tasks ranging from engineering applications to motion capture to improved facial recognition. The system can also measure in-store foot traffic patterns and support scientific applications requiring object tracking and visual analytics.
It can be deployed in any setting where a pair of cameras can be installed, including manufacturing lines, warehouses, laboratories, office buildings and department stores.
Spatial Vision Robotics provides 3D vision guidance software, which is specially engineered to guide robots. Using Spatial Vision’s 3D data, it tracks the moving machinery it controls relative to its surroundings and to objects of interest. The software offers real-time vision guidance for random parts picking, pallet sorting, automated kitting and box moving (palletization and depalletization).
As part of an ongoing collaboration with Motoman Robotics, a division of Yaskawa America, Inc., Universal launched MotoSight™ 3D Spatial Vision, a 3D vision system for cost-effective, flexible and scalable real-time guidance of Motoman robots. The system determines full six-degree-of-freedom object position and orientation (X, Y, Z, Rx, Ry, Rz) and is accurate to within 2-4 mm using off-the-shelf Logitech 9000 webcams. Accuracy can be improved by substituting the webcams with GigE cameras.
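The six values (X, Y, Z, Rx, Ry, Rz) are a standard position-plus-Euler-angle pose. A minimal sketch of converting such a pose into a 4x4 homogeneous transform, assuming a Z-Y-X fixed-axis rotation order (the source does not specify the convention):

```python
import math

def pose_to_matrix(x, y, z, rx, ry, rz):
    """4x4 homogeneous transform from translation (x, y, z) and
    Euler angles rx, ry, rz in radians (Rz @ Ry @ Rx order assumed)."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    # Product Rz @ Ry @ Rx written out term by term
    return [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx, x],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx, y],
        [-sy,     cy * sx,                cy * cx,                z],
        [0.0,     0.0,                    0.0,                    1.0],
    ]

# A part at (0.5, 0.2, 1.0) metres, yawed 90 degrees about Z
T = pose_to_matrix(0.5, 0.2, 1.0, 0.0, 0.0, math.pi / 2)
```

Such a transform is what a guidance system hands to the robot controller so the gripper can approach the object in its detected orientation.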
Spatial Vision Inspection provides 3D inspection of objects up to the size of a pallet. It improves quality by reducing the variability of visual inspection while inspecting at production speeds. In the case of pallets, it identifies a wide range of defects such as raised nails, damaged wood, split or loose boards, or missing wood.
Spatial Vision Inspection creates an accurate 3D image of the object through the proper placement of a combination of cameras and other sensors. This enables clarity even at the edges of the field of view, where most of the damage to pallets occurs.
In 2011, Spatial Vision Inspection became the first industrial application to use the Microsoft Kinect structured-light sensor for real-time 3D. In 2012, it became the first to use four Microsoft Kinect structured-light sensors simultaneously.
Both Spatial Vision Robotics and Spatial Vision Inspection use a variety of sensors and cameras for multi-dimensional sensing, such as structured-light sensors, camera pairs (stereopsis), webcams, lasers, Light Detection and Ranging (LIDAR), and time-of-flight sensors.
Automated robot programming--Autonomy
Universal’s Autonomy is motion control software that automates robot programming for moving a robot at high speed using a variety of sensor inputs. It provides real-time autonomous reaction for robots as well as motion planning and collision avoidance. Autonomy couples Spatial Vision Robotics with robot kinematics to allow a robot to react dynamically to changes in object positioning. For example, the software will allow a paint-spraying robot to maintain a consistent distance from an assembly line part swaying on a moving cable, which reduces over-spray.
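The distance-keeping behaviour described, holding a spray tool at a fixed standoff from a swaying part, can be sketched as a simple proportional controller driven by the sensed range. This is an illustrative sketch of the control idea only; a real motion planner with kinematics and collision avoidance is far more involved, and all names and gains here are assumptions.

```python
def standoff_correction(measured_m, target_m, gain=0.5, max_step_m=0.05):
    """Return a clamped move along the surface normal (metres) that
    nudges the tool toward the target standoff distance.

    Positive return value = advance toward the part (illustrative only).
    """
    error = measured_m - target_m   # positive: tool is too far away
    step = gain * error             # proportional correction
    return max(-max_step_m, min(max_step_m, step))

# Part sways so the sensed range reads 0.32 m instead of the 0.30 m target:
correction = standoff_correction(0.32, 0.30)  # small advance toward the part
```

Run at the sensor frame rate, such a loop keeps the spray distance roughly constant as the part moves, which is what reduces over-spray.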
Applications
Universal Robotics’ applications include Unlimited Box Moving, Unlimited Depalletization, Random Bin Picking, Random Bag Picking, and 3D Inspection.
Unlimited Box Moving & Unlimited Depalletization application
Universal Robotics recently introduced on-the-fly handling of boxes of any size, a new area of flexible automation for logistics. The application moves up to 1,400 cases per hour, handling cartons it has never seen before, in any orientation, with any combination of labels. It works independently of box variability, including size, condition, weight, location or orientation within the work cell, label quantity and type, and box graphics or color.
The application can move an unlimited number of boxes, palletizing or depalletizing, whether on the floor, on a moving conveyor, or in a container. It unloads partial, mixed, or full pallets of loosely or tightly packed boxes, regardless of the number of layers, and handles single, double and triple picks on the fly. It can move boxes from floor to conveyor, table to pallet, pallet to conveyor, truck to conveyor, or assembly line to staging area.
Its flexibility enables picking and placing a wide range of box sizes, from 6 inches up to 48 inches. The application provides depalletization of pallets up to 48 inches x 48 inches and 60 inches high. Cases can be stacked in any orientation and order, and layers can range from fully packed to partially filled and from homogeneous to mixed. Boxes can be plain or carry a variety of shipping labels, tape, lettering, or graphic designs. Boxes may contain frozen goods or require special handling for fragile contents.
Random Bin Picking application
The Random Bin Picking application enables a robot to automatically move a number of randomly placed parts at typical speeds, regardless of their orientation or how deep the stack. It uses a suite of sensors that integrates off-the-shelf structured light sensors and pairs of cameras for stereoscopic vision.
The standard application moves one part in any orientation at up to 30 parts per minute with standard motion control. Whether loosely or tightly packed, on the floor, on a conveyor, or in a container, the parts can be in any orientation. It also provides 3D guidance to the robot regardless of the presence or type of labels or the material type, and whether the parts are individually placed on a flat surface or packed tightly at random in a bin up to 48 inches deep. This cost-effective approach eliminates expensive fixturing and automated tables and works well under varying light conditions. Optionally, Universal’s Random Bin Picking can handle up to 3 parts in any orientation with any combined mix of SKUs per layer. Optional high-speed sensor servoing can further increase throughput where required.
Random Bag Picking application
The Random Bag Picking application enables a robot to automatically move a number of randomly placed bags at typical speeds, regardless of their orientation or how many layers are piled together. It uses a cost-effective suite of sensors that integrates off-the-shelf structured-light sensors and pairs of cameras for stereoscopic vision.
The standard application moves one bag in any orientation at up to 12 bags per minute with standard motion control. Whether loosely or tightly packed, on the floor, on a conveyor, or in a container, the bags can be in any orientation. The application dynamically provides 3D guidance to the robot regardless of labels or material type. Bags can be stacked in an unlimited number of layers up to 60 inches high. This cost-effective approach eliminates expensive fixturing and automated tables and works well under varying light conditions. Optionally, the Random Bag Picking application can handle up to 3 bags in any orientation with any combined mix of SKUs per layer. Optional high-speed sensor servoing can further increase throughput where required.
3D inspection application
Traditionally, pallet inspection has been done visually, with varying levels of manual handling or automated machinery requiring the pallet to be lifted and flipped to see all surfaces. Stringent and frequent audits are required to reduce the variability of visual inspection, and traditional 2D vision does not offer reliable inspection.
The 3D inspection application replicates manual inspection through an automated 3D vision system, which flexibly adapts to a pallet structure. It quickly identifies a wide range of defects, including raised nails and wood damage, whether split, loose, or missing, at a productive line speed. CHEP, a global provider of pallet and container pooling services, uses this application for automated 3D pallet inspection worldwide.
Universal’s engineering team has expertise in technologies related to sensing, manipulation and artificial intelligence. The company offers customers engineering services in the following areas:
- Sensor applications: 2D/3D Vision (including object recognition), Force, Infrared, and Acceleration
- Reactive robotics: Kinematics, End-of-tool Design, and High Speed Control
- Machine learning: Pattern Extraction, Path Planning, Hierarchical Control, Closed-Loop Control, Supervised/Unsupervised Learning, Reinforcement Learning, Sensorimotor Coordination, Programming by Demonstration
- Middleware: CUDA and CORBA
Leadership
Universal Robotics was founded and is led by David Peters, CEO, and his brother, Dr. Alan Peters, CTO. Dr. Peters is the principal architect of Neocortex and an Associate Professor at Vanderbilt University in Nashville, Tennessee. Hob Wubbena is the company’s Vice President of Strategic Planning and Marketing.