Handwriting recognition

From Wikipedia, the free encyclopedia

Handwriting recognition (or HWR[1]) is the ability of a computer to receive and interpret intelligible handwritten input from sources such as paper documents, photographs, touch-screens and other devices. The image of the written text may be sensed "off line" from a piece of paper by optical scanning (optical character recognition) or intelligent word recognition. Alternatively, the movements of the pen tip may be sensed "on line", for example by a pen-based computer screen surface.

Handwriting recognition principally entails optical character recognition. However, a complete handwriting recognition system also handles formatting, performs correct segmentation into characters and finds the most plausible words.

Off-line recognition

Off-line handwriting recognition involves the automatic conversion of text in an image into letter codes usable within computer and text-processing applications. The data obtained in this form is regarded as a static representation of handwriting. Off-line recognition is comparatively difficult, as different people have different handwriting styles. Today's OCR engines focus primarily on machine-printed text, while ICR engines handle hand-"printed" text written in separated capital letters; no OCR/ICR engine currently supports unconstrained cursive handwriting.

Problem domain reduction techniques

Narrowing the problem domain often helps increase the accuracy of handwriting recognition systems. A form field for a U.S. ZIP code, for example, would contain only the characters 0–9, which reduces the number of possible identifications.

Primary techniques:

  • Restricting input to specific character ranges
  • Using specialized forms
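The effect of restricting the character range can be sketched as follows. The scoring function here is a hypothetical stand-in; a real system would score each candidate character with a trained model.

```python
# Sketch: restricting a recognizer's output alphabet to a form field's
# legal characters. The candidate scores below are invented for illustration.
import string

def recognize(scores: dict, allowed: str) -> str:
    """Return the highest-scoring candidate drawn only from `allowed`."""
    candidates = {c: s for c, s in scores.items() if c in allowed}
    return max(candidates, key=candidates.get)

# Hypothetical model scores for one glyph: 'S' and '5' are easily confused.
scores = {"S": 0.48, "5": 0.45, "B": 0.04, "8": 0.03}

# Unconstrained, the engine would output 'S' ...
print(recognize(scores, string.ascii_uppercase + string.digits))  # S
# ... but a ZIP-code field admits only digits, so '5' wins.
print(recognize(scores, string.digits))  # 5
```

Knowing the field contains only digits turns an ambiguous glyph into an unambiguous one without any change to the underlying recognizer.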

Character extraction

Off-line character recognition often involves scanning a form or document written sometime in the past. This means the individual characters contained in the scanned image will need to be extracted. Tools exist that are capable of performing this step.[2] However, there are several common imperfections in this step. The most common is when characters that are connected are returned as a single sub-image containing both characters. This causes a major problem in the recognition stage. Yet many algorithms are available that reduce the risk of connected characters.
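One simple extraction technique, shown here as an illustrative sketch rather than any particular engine's algorithm, is vertical projection: columns with no ink are treated as gaps between characters. It also exhibits the failure mode described above, since touching characters come back as one segment.

```python
# Minimal character extraction via vertical projection on a binary image.
def extract_characters(image):
    """image: 2-D list of 0/1 pixels. Returns (start, end) column spans."""
    width = len(image[0])
    ink = [any(row[x] for row in image) for x in range(width)]  # per-column ink
    spans, start = [], None
    for x, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = x                      # a character begins
        elif not has_ink and start is not None:
            spans.append((start, x))       # a blank column ends it
            start = None
    if start is not None:
        spans.append((start, width))
    return spans

# Two glyphs separated by blank columns; the last pair of columns touches
# and is returned as a single span: the connected-character problem.
img = [
    [1, 1, 0, 1, 0, 1, 1, 1],
    [1, 0, 0, 1, 0, 1, 0, 1],
]
print(extract_characters(img))  # [(0, 2), (3, 4), (5, 8)]
```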

Character recognition

After the extraction of individual characters occurs, a recognition engine is used to identify the corresponding computer character. Several different recognition techniques are currently available.

Neural networks

Neural network recognizers learn from an initial image training set. The trained network then makes the character identifications. Each neural network uniquely learns the properties that differentiate training images. It then looks for similar properties in the target image to be identified. Neural networks are quick to set up; however, they can be inaccurate if they learn properties that are not important in the target data.
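The train-then-identify loop can be illustrated with a toy example: a single artificial neuron learns to separate two 3x3 glyph classes from a tiny training set, then classifies an unseen image. Real recognizers use far larger networks and training sets; this only shows the workflow.

```python
# Toy perceptron: learn to tell a horizontal bar from a vertical bar.
def train(samples, labels, epochs=200, lr=0.5):
    w = [0.0] * 9
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):          # perceptron update rule
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# 3x3 images flattened row-major: class 0 = horizontal bar, class 1 = vertical bar.
horiz = [0, 0, 0, 1, 1, 1, 0, 0, 0]
vert  = [0, 1, 0, 0, 1, 0, 0, 1, 0]
w, b = train([horiz, vert], [0, 1])

noisy_vert = [0, 1, 0, 0, 1, 0, 1, 1, 0]   # unseen, slightly smudged vertical bar
print(classify(w, b, noisy_vert))          # 1
```

The neuron has learned which pixel positions differentiate the training images and applies the same test to the new target image, as the text describes.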

Feature extraction

Feature extraction works in a similar fashion to neural network recognizers; however, programmers must manually determine the properties they feel are important.

Some example properties might be:

  • Aspect ratio
  • Percentage of pixels above the horizontal half point
  • Percentage of pixels to the right of the vertical half point
  • Number of strokes
  • Average distance from the image center
  • Whether the image is symmetric about the y axis
  • Whether the image is symmetric about the x axis
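Features like those listed can be computed directly from a binary image. Which features matter is exactly the judgment call the text describes; the formulas below are one plausible reading of the list, not a standard.

```python
# Hand-chosen features computed from a binary image (2-D list of 0/1 pixels).
def features(image):
    h, w = len(image), len(image[0])
    pixels = [(y, x) for y in range(h) for x in range(w) if image[y][x]]
    n = len(pixels)
    return {
        "aspect_ratio": w / h,
        "frac_above_mid": sum(1 for y, x in pixels if y < h / 2) / n,
        "frac_right_of_mid": sum(1 for y, x in pixels if x >= w / 2) / n,
        "mean_dist_from_center": sum(
            ((y - (h - 1) / 2) ** 2 + (x - (w - 1) / 2) ** 2) ** 0.5
            for y, x in pixels
        ) / n,
        # symmetry tests: does the image equal its mirror image?
        "y_axis_symmetric": all(image[y][x] == image[y][w - 1 - x]
                                for y, x in pixels),
        "x_axis_symmetric": all(image[y][x] == image[h - 1 - y][x]
                                for y, x in pixels),
    }

glyph = [          # a crude 'T'
    [1, 1, 1],
    [0, 1, 0],
    [0, 1, 0],
]
f = features(glyph)
print(f["frac_above_mid"])     # 0.8  (4 of 5 ink pixels in the top half)
print(f["y_axis_symmetric"])   # True
print(f["x_axis_symmetric"])   # False
```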

This approach gives the recognizer more control over the properties used in identification. Yet any system using this approach requires substantially more development time than a neural network because the properties are not learned automatically.

On-line recognition

On-line handwriting recognition involves the automatic conversion of text as it is written on a special digitizer or PDA, where a sensor picks up the pen-tip movements as well as pen-up/pen-down switching. This kind of data is known as digital ink and can be regarded as a digital representation of handwriting. The obtained signal is converted into letter codes which are usable within computer and text-processing applications.

The elements of an on-line handwriting recognition interface typically include:

  • a pen or stylus for the user to write with.
  • a touch sensitive surface, which may be integrated with, or adjacent to, an output display.
  • a software application which interprets the movements of the stylus across the writing surface, translating the resulting strokes into digital text.

General process

The process of online handwriting recognition can be broken down into a few general steps:

  • preprocessing,
  • feature extraction and
  • classification.

The purpose of preprocessing is to discard irrelevant information in the input data that can negatively affect recognition.[3] This concerns speed and accuracy. Preprocessing usually consists of binarization, normalization, sampling, smoothing and denoising.[4] The second step is feature extraction: from the two- or more-dimensional vector field produced by the preprocessing algorithms, higher-dimensional data is extracted. The purpose of this step is to highlight information important to the recognition model, such as pen pressure, velocity or changes of writing direction. The last big step is classification, in which various models are used to map the extracted features to different classes and thus identify the characters or words the features represent.
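The three steps above can be compressed into an end-to-end sketch on on-line ink, where the pen trajectory is a list of (x, y) points. Preprocessing normalizes scale, feature extraction derives writing directions, and classification here is a nearest-template stand-in for the statistical models the text mentions; all templates and labels are illustrative assumptions.

```python
# Preprocessing -> feature extraction -> classification on a single stroke.
import math

def preprocess(stroke):
    """Normalize the stroke into a unit bounding box (scale and translation)."""
    xs, ys = [p[0] for p in stroke], [p[1] for p in stroke]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / scale, (y - min(ys)) / scale) for x, y in stroke]

def extract_features(stroke):
    """Feature: the sequence of writing directions between successive points."""
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(stroke, stroke[1:])]

def classify(feats, templates):
    """Nearest template by mean absolute direction difference."""
    def dist(a, b):
        n = min(len(a), len(b))
        return sum(abs(u - v) for u, v in zip(a, b)) / n
    return min(templates, key=lambda label: dist(feats, templates[label]))

# Hypothetical direction templates for two one-stroke shapes.
templates = {
    "horizontal-line": extract_features(preprocess([(0, 0), (1, 0), (2, 0)])),
    "vertical-line":   extract_features(preprocess([(0, 0), (0, 1), (0, 2)])),
}

ink = [(10, 5), (14, 5), (19, 6)]          # raw pen samples, nearly horizontal
label = classify(extract_features(preprocess(ink)), templates)
print(label)  # horizontal-line
```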

Hardware

Commercial products incorporating handwriting recognition as a replacement for keyboard input were introduced in the early 1980s. Examples include handwriting terminals such as the Pencept Penpad[5] and the Inforite point-of-sale terminal.[6] With the advent of the large consumer market for personal computers, several commercial products were introduced to replace the keyboard and mouse on a personal computer with a single pointing/handwriting system, such as those from Pencept,[7] CIC[8] and others. The first commercially available tablet-type portable computer was the GRiDPad from GRiD Systems, released in September 1989. Its operating system was based on MS-DOS.

In the early 1990s, hardware makers including NCR, IBM and EO released tablet computers running the PenPoint operating system developed by GO Corp. PenPoint used handwriting recognition and gestures throughout and provided these facilities to third-party software. IBM's tablet computer was the first to use the ThinkPad name and used IBM's own handwriting recognition. This recognition system was later ported to Microsoft Windows for Pen Computing and IBM's Pen for OS/2. None of these were commercially successful.

Advancements in electronics allowed the computing power necessary for handwriting recognition to fit into a smaller form factor than tablet computers, and handwriting recognition is often used as an input method for hand-held PDAs. The first PDA to provide written input was the Apple Newton, which exposed the public to the advantages of a streamlined user interface. However, the device was not a commercial success, owing to the unreliability of the software, which tried to learn a user's writing patterns. By the time Newton OS 2.0 was released, with greatly improved handwriting recognition, including unique features still not found in current recognition systems such as modeless error correction, the largely negative first impression had already been made. After the Newton's discontinuation, its recognition technology was ported to Mac OS X 10.2 and later in the form of Inkwell.

Palm later launched a successful series of PDAs based on the Graffiti recognition system. Graffiti improved usability by defining a set of "unistrokes", or one-stroke forms, for each character. This narrowed the possibility of erroneous input, although memorization of the stroke patterns did increase the learning curve for the user. The Graffiti handwriting recognition was found to infringe a patent held by Xerox, and Palm replaced Graffiti with a licensed version of the CIC handwriting recognition which, while also supporting unistroke forms, pre-dated the Xerox patent. The court finding of infringement was reversed on appeal, and then reversed again on a later appeal. The parties involved subsequently negotiated a settlement concerning this and other patents.

A Tablet PC is a special notebook computer that is outfitted with a digitizer tablet and a stylus, and allows a user to handwrite text on the unit's screen. The operating system recognizes the handwriting and converts it into typewritten text. Windows Vista and Windows 7 include personalization features that learn a user's writing patterns or vocabulary for English, Japanese, Chinese Traditional, Chinese Simplified and Korean. The features include a "personalization wizard" that prompts for samples of a user's handwriting and uses them to retrain the system for higher accuracy recognition. This system is distinct from the less advanced handwriting recognition system employed in its Windows Mobile OS for PDAs.

Although handwriting recognition is an input form that the public has become accustomed to, it has not achieved widespread use in either desktop computers or laptops. It is still generally accepted that keyboard input is both faster and more reliable. As of 2006, many PDAs offer handwriting input, sometimes even accepting natural cursive handwriting, but accuracy is still a problem, and some people still find even a simple on-screen keyboard more efficient.

Software

Initial software modules could understand print handwriting where the characters were separated; commercial examples came from companies such as Communications Intelligence Corporation and IBM. In the early 1990s, two companies, ParaGraph International and Lexicus, came up with systems that could recognize cursive handwriting. ParaGraph was based in Russia and founded by computer scientist Stepan Pachikov, while Lexicus was founded by Ronjon Nag and Chris Kortge, who were students at Stanford University. The ParaGraph CalliGrapher system was deployed in the Apple Newton systems, and the Lexicus Longhand system was made available commercially for the PenPoint and Windows operating systems. Lexicus was acquired by Motorola in 1993 and went on to develop Chinese handwriting recognition and predictive text systems for Motorola. ParaGraph was acquired in 1997 by SGI and its handwriting recognition team formed a P&I division, later acquired from SGI by Vadem. Microsoft acquired the CalliGrapher handwriting recognition and other digital ink technologies developed by P&I from Vadem in 1999. Wolfram Mathematica (8.0 or later) also provides a handwriting and text recognition function, TextRecognize, to which the user can pass the image to be analysed.

Research

The first handwritten address interpretation system, developed by Sargur Srihari and Jonathan Hull, exploited contextual information.[9]

Handwriting recognition has an active community of academic researchers. The biggest conferences for handwriting recognition are the International Conference on Frontiers in Handwriting Recognition (ICFHR), held in even-numbered years, and the International Conference on Document Analysis and Recognition (ICDAR), held in odd-numbered years. Both of these conferences are endorsed by the IEEE.

R. Plamondon and S. N. Srihari published a comprehensive survey of research on on-line and off-line handwriting recognition in 2000.[10]

Results since 2009

Since 2009, the recurrent neural networks and deep feedforward neural networks developed in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA have won several international handwriting competitions.[11] In particular, the bi-directional and multi-dimensional long short-term memory (LSTM)[12][13] networks of Alex Graves et al. won three competitions in connected handwriting recognition at the 2009 International Conference on Document Analysis and Recognition (ICDAR), without any prior knowledge about the three different languages (French, Arabic, Farsi) to be learned. Recent GPU-based deep learning methods for feedforward networks by Dan Ciresan and colleagues at IDSIA won the ICDAR 2011 offline Chinese handwriting recognition contest; their neural networks were also the first artificial pattern recognizers to achieve human-competitive performance[14] on the famous MNIST handwritten digits problem[15] of Yann LeCun and colleagues at NYU.


References

  1. ^ http://acronyms.thefreedictionary.com/HWR
  2. ^ Java OCR, 5 June 2010. Retrieved 5 June 2010
  3. ^ Huang, B.; Zhang, Y. and Kechadi, M.; Preprocessing Techniques for Online Handwriting Recognition. Intelligent Text Categorization and Clustering, Springer Berlin Heidelberg, 2009, Vol. 164, "Studies in Computational Intelligence" pp. 25–45.
  4. ^ Holzinger, A.; Stocker, C.; Peischl, B. and Simonic, K.-M.; On Using Entropy for Enhancing Handwriting Preprocessing, Entropy 2012, 14, pp. 2324-2350.
  5. ^ Pencept Penpad (TM) 200 Product Literature, Pencept, Inc., 1982-08-15 
  6. ^ Inforite Hand Character Recognition Terminal, Cadre Systems Limited, England, 1982-08-15 
  7. ^ Users Manual for Penpad 320, Pencept, Inc., 1984-06-15 
  8. ^ Handwriter (R) GrafText (TM) System Model GT-5000, Communication Intelligence Corporation, 1985-01-15 
  9. ^ S. N. Srihari and E. J. Keubert, "Integration of handwritten address interpretation technology into the United States Postal Service Remote Computer Reader System" Proc. Int. Conf. Document Analysis and Recognition (ICDAR) 1997, IEEE-CS Press, pp. 892–896
  10. ^ R. Plamondon and S. N. Srihari (2000). "On-line and off-line handwriting recognition: a comprehensive survey". In: IEEE Transactions on Pattern Analysis and Machine Intelligence 22(1), 63–84.
  11. ^ 2012 Kurzweil AI Interview with Jürgen Schmidhuber on the eight competitions won by his Deep Learning team 2009-2012
  12. ^ Graves, Alex; and Schmidhuber, Jürgen; Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks, in Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; and Culotta, Aron (eds.), Advances in Neural Information Processing Systems 22 (NIPS'22), December 7th–10th, 2009, Vancouver, BC, Neural Information Processing Systems (NIPS) Foundation, 2009, pp. 545–552
  13. ^ A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, J. Schmidhuber. A Novel Connectionist System for Improved Unconstrained Handwriting Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, 2009.
  14. ^ D. C. Ciresan, U. Meier, J. Schmidhuber. Multi-column Deep Neural Networks for Image Classification. IEEE Conf. on Computer Vision and Pattern Recognition CVPR 2012.
  15. ^ LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proc. IEEE, 86, pp. 2278–2324.
