Artificial Passenger

From Wikipedia, the free encyclopedia

The Artificial Passenger is a telematic device, developed by IBM, that interacts verbally with a driver to reduce the likelihood of the driver falling asleep at the controls of a vehicle.[1] It is based on inventions covered by U.S. patent 6,236,968.[2] The Artificial Passenger is designed to engage a vehicle operator by carrying on conversations, playing verbal games, controlling the vehicle's stereo system, and so on. It also monitors the driver's speech patterns to detect fatigue, and in response can suggest that the driver take a break or get some sleep.[3][4] The Artificial Passenger may also be integrated with wireless services to provide weather and road information, driving directions, and other notification services.[5]

Voice control interface

According to Dimitri Kanevsky, a former IBM researcher now at Google, the Artificial Passenger was developed using the Conversational Interactivity for Telematics (CIT) speech system, which relies on the driver's natural speech rather than manual controls. The CIT depends on a Natural Language Understanding (NLU) system that is difficult to develop because of the low-powered computer systems available inside cars. IBM suggests that this system be located on a server and accessed through the car's wireless technologies. IBM also says it is working on a "quasi-NLU" that uses fewer CPU resources and can run inside the car.[6] The CIT system includes another component called the Dialog Manager (DM). The DM reduces the load on the NLU system by interacting with the vehicle, the driver, and external systems such as weather services, email, and telephones.[7]
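The division of labor described above could be sketched as a simple dispatcher. The following is a hypothetical illustration only; the class, method, and topic names are assumptions, not IBM's actual API:

```python
# Hypothetical sketch of a Dialog Manager (DM) that offloads work from the
# NLU system by routing driver requests to external services (weather,
# email, telephone, and so on). All names here are illustrative.

class DialogManager:
    def __init__(self):
        # Registered handlers for external systems.
        self.handlers = {}

    def register(self, topic, handler):
        self.handlers[topic] = handler

    def dispatch(self, topic, request):
        # Route a request to the matching external system; fall back to a
        # default reply if no handler is registered for the topic.
        handler = self.handlers.get(topic)
        if handler is None:
            return "Sorry, I can't help with that."
        return handler(request)

dm = DialogManager()
dm.register("weather", lambda req: f"Fetching weather for {req}...")
print(dm.dispatch("weather", "Bozeman"))  # → Fetching weather for Bozeman...
```

The point of the design, as described, is that only the dispatch logic needs to run in the car, while heavier processing can live on a server reached over the wireless link.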

The NLU system receives a voice command from the driver, searches a file system for a matching action, and executes that action.[6] The DM handles questions asked by the driver, such as "How far is The Gallatin Field Airport from here?" The NLU system still cannot understand everything a driver says, in part because of the different idioms and dialects of different regions. IBM is working on a system that recognizes where the driver is and adapts to the regional diction used in that area.[7]
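The lookup step described above could work roughly as follows. This is a toy sketch; the trigger phrases, action names, and fallback behavior are invented for illustration:

```python
# Illustrative sketch of the NLU lookup step: match a recognized utterance
# against a table of known trigger phrases and return an action to execute.
# The command table and action names are assumptions, not IBM's.

ACTIONS = {
    "how far is": "compute_distance",
    "play music": "start_stereo",
    "call": "place_phone_call",
}

def lookup_action(utterance):
    """Return the first action whose trigger phrase appears in the utterance."""
    text = utterance.lower()
    for trigger, action in ACTIONS.items():
        if trigger in text:
            return action
    # A "quasi-NLU" fallback: ask the driver to rephrase when nothing matches.
    return "clarify_request"

print(lookup_action("How far is The Gallatin Field Airport from here?"))
# → compute_distance
```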

Another system used within this technology is the Learning Transformation (LT) system, which monitors the actions of the car's occupants and of the cars around it, learns patterns in the driver's speech and stores that data, and uses what it learns to improve the performance of the technology as a whole.[6]
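One simple way to picture the LT system's pattern storage is a running tally of the driver's vocabulary, which later stages could use to bias recognition. This is purely illustrative; the data structure and method names are assumptions:

```python
# Hypothetical sketch of the Learning Transformation (LT) idea: accumulate
# observed driver utterances and track word frequencies, so recognition can
# later be biased toward this driver's own vocabulary.

from collections import Counter

class LearningStore:
    def __init__(self):
        self.word_counts = Counter()

    def observe(self, utterance):
        # Record every word the driver says.
        self.word_counts.update(utterance.lower().split())

    def familiarity(self, word):
        # Relative frequency of a word in this driver's past speech.
        total = sum(self.word_counts.values())
        return self.word_counts[word.lower()] / total if total else 0.0

store = LearningStore()
store.observe("play some music")
store.observe("play the news")
print(round(store.familiarity("play"), 2))  # → 0.33
```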

Speech recognition

The speech recognition process relies on three steps. The front end filters out unwanted noise, such as noise from the car, background music, or other passengers, by discarding low-energy, high-variability signals.[7] The labeler breaks the speech apart and searches a database to recognize what is being said: it starts broadly, identifying the subject the driver is speaking about, and then narrows down to what the driver is actually asking. The decoder then takes this information and formulates a response to the driver.[6] IBM states, based on extensive experimentation, that the speech recognition is very accurate, but the process has not been fully refined and still has kinks in it.[7]
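The three stages above can be sketched as a toy pipeline. The energy threshold, frame format, and lexicon here are invented; real recognizers work on acoustic features, not labeled dictionaries:

```python
# Toy sketch of the three-stage pipeline: a front end that drops low-energy
# frames (noise), a labeler that maps acoustic labels to words, and a
# decoder that assembles the result. All values are invented for illustration.

def front_end(frames, energy_threshold=0.2):
    # Discard frames below the energy threshold (background noise).
    return [f for f in frames if f["energy"] >= energy_threshold]

def labeler(frames, lexicon):
    # Map each surviving frame's acoustic label to a word, if known.
    return [lexicon[f["label"]] for f in frames if f["label"] in lexicon]

def decoder(words):
    # Formulate the recognized utterance from the labeled words.
    return " ".join(words) if words else "(no speech detected)"

lexicon = {"a1": "open", "a2": "window"}
frames = [
    {"label": "a1", "energy": 0.9},
    {"label": "hum", "energy": 0.1},  # filtered out by the front end
    {"label": "a2", "energy": 0.8},
]
print(decoder(labeler(front_end(frames), lexicon)))  # → open window
```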

The main part of the Artificial Passenger is its disruptive speech recognition. This technology keeps up a conversation with the driver and analyzes both what the driver is saying and how it is said. It can recognize fluctuations in the driver's voice, through different vibration patterns in the driver's speech, to determine whether the driver is sleepy, upset, or in a good mood. It also records how long the driver takes to respond in the conversation, and from that determines whether the driver is nodding off or being distracted by something.[7]
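The response-latency cue described above can be illustrated with a simple moving-average check. The window size and threshold are invented for this sketch:

```python
# Illustrative check of the response-latency cue: if the driver's recent
# reply times are too long on average, flag possible drowsiness.
# The threshold (seconds) and window size are assumptions.

def is_drowsy(response_times, threshold=2.5, window=3):
    """Flag drowsiness when the average of the last `window`
    reply latencies (in seconds) exceeds the threshold."""
    recent = response_times[-window:]
    return bool(recent) and sum(recent) / len(recent) > threshold

print(is_drowsy([1.0, 1.2, 2.8, 3.1, 3.4]))  # → True
```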

Driver drowsiness prevention

When the computer recognizes that the driver is dozing off, it intervenes. It may change the radio station, try to play games with the driver, or open a window to wake the driver up.[5] These interventions are intended to restore the driver's alertness. If the driver keeps nodding off, the system is programmed to offer to call a nearby hotel and book a room, or to suggest that the driver take a break.[6]
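The escalation described above might be modeled as a simple policy over repeated drowsiness episodes. This is a sketch consistent with the behavior described, with an invented episode count and action names:

```python
# Sketch of an escalating intervention policy: mild stimuli first, then a
# suggestion to stop driving after repeated drowsiness episodes.
# The episode thresholds and action names are assumptions.

def choose_intervention(drowsy_episode_count):
    if drowsy_episode_count == 0:
        return "none"
    if drowsy_episode_count == 1:
        return "change_radio_station"
    if drowsy_episode_count == 2:
        return "open_window"
    # Repeated episodes: suggest a break or offer to book a hotel room.
    return "suggest_break_or_hotel"

print(choose_intervention(3))  # → suggest_break_or_hotel
```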

The Artificial Passenger will try to read jokes, play games, ask questions, or read interactive books to stimulate the driver. Drivers who show more drowsiness are given more stimulating content than drivers who are less drowsy.[6]
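The matching of content to drowsiness level could be pictured as a simple mapping. The score scale, cutoffs, and content labels here are invented for illustration:

```python
# Illustrative mapping from a drowsiness score to content stimulation level,
# matching the idea that drowsier drivers get more stimulating content.
# The [0, 1] score scale and the cutoffs are assumptions.

def content_for(drowsiness):
    if drowsiness < 0.3:
        return "interactive_book"   # low stimulation
    if drowsiness < 0.7:
        return "verbal_game"        # medium stimulation
    return "jokes_and_questions"    # high stimulation

print(content_for(0.8))  # → jokes_and_questions
```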

Distributive user interface between cars

IBM recognizes that a driver faces dangers beyond his or her own behavior. The Artificial Passenger is proposed to work between cars, relaying information from one vehicle to another. This information could include driving records, showing whether a driver has a history of poor driving, or real-time analysis of all drivers, showing which ones are becoming drowsy so that the system can intervene. It could also show whether a driver is being distracted by games or wireless devices, and warn all surrounding drivers.[7]
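The car-to-car relay could be pictured as cars broadcasting status messages to vehicles in range. This is entirely hypothetical; the message format and field names are invented:

```python
# Hypothetical sketch of cars sharing drowsiness status so nearby vehicles
# can be warned. The message format and field names are assumptions.

class Car:
    def __init__(self, car_id):
        self.car_id = car_id
        self.inbox = []

    def broadcast(self, nearby_cars, drowsy):
        # Relay this car's status to every car in range.
        for other in nearby_cars:
            other.inbox.append({"from": self.car_id, "drowsy": drowsy})

    def nearby_drowsy_drivers(self):
        # List the IDs of nearby cars that reported a drowsy driver.
        return [m["from"] for m in self.inbox if m["drowsy"]]

a, b = Car("A"), Car("B")
a.broadcast([b], drowsy=True)
print(b.nearby_drowsy_drivers())  # → ['A']
```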

References

  1. Sample, Ian. "You drive me crazy". New Scientist, Issue 2300, July 2001. Retrieved June 29, 2008.
  2. "U.S. patent: Sleep prevention dialog based car system". Retrieved June 29, 2008.
  3. Eisenberg, A. "A passenger whose chatter is always appreciated". The New York Times, December 27, 2001. Archived October 17, 2009, at the Wayback Machine. Retrieved June 29, 2008.
  4. Kanevsky, D. "Telematics: Artificial Passenger and beyond". Human Factors and Voice Interactive Systems, Signals and Communications Technology Series, Springer US, pp. 291–325. http://www.springerlink.com/content/x6446438jk375707/
  5. Kharif, Olga. "IBM to Drivers: Wake Up!". Retrieved December 6, 2011.
  6. Kanevsky, Dimitri. "IBM Research Report" (PDF). Retrieved December 6, 2011.