- Developed by: Joint Bi-level Image Experts Group
- Contained by: Portable Document Format, FAX
- Standard: ITU-T T.88 & ISO/IEC 14492
JBIG2 is an image compression standard for bi-level images, developed by the Joint Bi-level Image Experts Group. It is suitable for both lossless and lossy compression. According to a press release from the Group, in its lossless mode JBIG2 typically generates files 3–5 times smaller than Fax Group 4 and 2–4 times smaller than JBIG, the previous bi-level compression standard released by the Group. JBIG2 was published in 2000 as the international standard ITU-T T.88, and in 2001 as ISO/IEC 14492.
Ideally, a JBIG2 encoder will segment the input page into regions of text, regions of halftone images, and regions of other data. Regions that are neither text nor halftones are typically compressed using a context-dependent arithmetic coding algorithm called the MQ coder. Textual regions are compressed as follows: the foreground pixels in the regions are grouped into symbols. A dictionary of symbols is then created and encoded, typically also using context-dependent arithmetic coding, and the regions are encoded by describing which symbols appear where. Typically, a symbol will correspond to a character of text, but this is not required by the compression method. For lossy compression the difference between similar symbols (e.g., slightly different impressions of the same letter) can be neglected; for lossless compression, this difference is taken into account by compressing one similar symbol using another as a template. Halftone images may be compressed by reconstructing the grayscale image used to generate the halftone and then sending this image together with a dictionary of halftone patterns. Overall, the algorithm used by JBIG2 to compress text is very similar to the JB2 compression scheme used in the DjVu file format for coding binary images.
PDF files of version 1.4 and above may contain JBIG2-compressed data. Open-source decoders for JBIG2 include jbig2dec, the Java-based jbig2-imageio, and the decoder found in versions 2.00 and above of xpdf. An open-source encoder is jbig2enc.
Typically, a bi-level image consists mainly of textual and halftone data, in which the same shapes appear repeatedly. The bi-level image is segmented into three kinds of regions: text, halftone, and generic. Each kind of region is coded differently, and the coding methodologies are described in the following sections.
Text image data
Text coding is based on the nature of human visual interpretation: a human observer cannot tell the difference between two instances of the same character in a bi-level image even though they may not match exactly pixel by pixel. Therefore, only the bitmap of one representative instance of each character needs to be coded, rather than the bitmap of every occurrence individually. The representative bitmaps are stored in a "symbol dictionary", and each character instance is then coded by reference to the dictionary. There are two encoding methods for text image data: pattern matching and substitution (PM&S) and soft pattern matching (SPM). These methods are presented in the following subsections.
- Pattern matching and substitution
- After performing image segmentation and match searching, if a match exists, the encoder codes an index of the corresponding representative bitmap in the dictionary and the position of the character on the page. The position is usually given relative to another, previously coded character. If no match is found, the segmented pixel block is coded directly and added to the dictionary. Typical procedures of the pattern matching and substitution algorithm are displayed in the left block diagram of the figure above. Although PM&S can achieve outstanding compression, substitution errors can occur during the process if the image resolution is low.
- Soft pattern matching
- In addition to a pointer into the dictionary and the position of the character, SPM also codes refinement data, which is used to reconstruct the original character in the image. The refinement data makes the character-substitution errors mentioned above highly unlikely: it codes the current character instance using the pixels of both the current character and the matching character in the dictionary. Because the current instance is highly correlated with the matched character, the prediction of each pixel is more accurate.
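The matching step at the heart of PM&S can be sketched as a pixel-mismatch (Hamming distance) comparison against the dictionary. The threshold and helper names below are illustrative; real encoders use more careful match criteria than a raw mismatch count.

```python
# Hedged sketch of the PM&S matching step: a segmented symbol is compared
# against each dictionary entry by counting differing pixels; a
# close-enough match is substituted, otherwise the symbol is coded
# directly and added to the dictionary. max_mismatch is an illustrative
# parameter, not part of the standard.

def mismatch(a, b):
    """Count differing pixels between two equal-sized bitmaps."""
    return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))


def match_or_add(dictionary, bitmap, max_mismatch=2):
    """Return (index, added) for this symbol, lossily reusing near-matches."""
    for i, entry in enumerate(dictionary):
        if (len(entry) == len(bitmap) and len(entry[0]) == len(bitmap[0])
                and mismatch(entry, bitmap) <= max_mismatch):
            return i, False        # substitute the stored representative
    dictionary.append(bitmap)
    return len(dictionary) - 1, True
```

With `max_mismatch=0` this degenerates to lossless exact matching; an overly permissive threshold is precisely what produces the substitution errors noted above for low-resolution images.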
Halftone images can be compressed using two methods. The first is similar to the context-based arithmetic coding algorithm used for generic regions, which adaptively positions the template pixels in order to exploit correlations between adjacent pixels. In the second method, descreening is performed on the halftone image, converting it back to grayscale. The grayscale values are then used as indexes into a dictionary of fixed-size, tiny bitmap patterns. This allows the decoder to render the halftone image by placing the indexed dictionary patterns next to one another.
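The second method can be sketched as follows: each descreened grayscale value selects a small dither pattern from a dictionary, and the decoder tiles the selected patterns to re-render the halftone. The 2×2 patterns and the 5-level quantization below are illustrative; real JBIG2 encoders build their own pattern dictionaries.

```python
# Sketch of halftone coding via descreening: a grayscale value picks a
# pattern index, and the decoder tiles the dictionary patterns. The
# dictionary here (index = number of black pixels in a 2x2 cell, giving
# 5 gray levels) is an assumption for illustration.

PATTERNS = [
    ((0, 0), (0, 0)),
    ((1, 0), (0, 0)),
    ((1, 0), (0, 1)),
    ((1, 1), (0, 1)),
    ((1, 1), (1, 1)),
]

def encode_halftone(gray, levels=5):
    """Map a descreened grayscale image (values 0..255) to pattern indexes."""
    return [[min(g * levels // 256, levels - 1) for g in row] for row in gray]

def render_halftone(indexes):
    """Decoder side: tile the dictionary pattern chosen by each index."""
    cell = len(PATTERNS[0])
    out = []
    for row in indexes:
        for dy in range(cell):
            out.append([PATTERNS[i][dy][dx] for i in row for dx in range(cell)])
    return out
```

Only the small index grid and the pattern dictionary need to be transmitted, rather than the full-resolution halftone bitmap.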
Arithmetic entropy coding
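All three region types are ultimately entropy-coded with a context-dependent binary arithmetic coder, the MQ coder mentioned above (also used in JPEG 2000). The sketch below shows the general principle of context-adaptive binary arithmetic coding with a simple counting model; it is not the MQ coder itself, which is a multiplication-free, table-driven variant.

```python
# Illustrative context-adaptive binary arithmetic coder, in the spirit
# of (but much simpler than) JBIG2's MQ coder. Context here is the last
# three decoded bits; in JBIG2 the context is formed from neighboring
# pixels via a template.

PRECISION = 32
FULL = (1 << PRECISION) - 1
HALF = 1 << (PRECISION - 1)
QUARTER = 1 << (PRECISION - 2)
PROB_BITS = 12                      # probabilities in units of 1/4096


class Model:
    """Adaptive per-context bit probabilities from simple counts."""
    def __init__(self, n_contexts):
        self.c0 = [1] * n_contexts
        self.c1 = [1] * n_contexts

    def p0(self, ctx):              # P(bit == 0), scaled, clamped away from 0/1
        p = (self.c0[ctx] << PROB_BITS) // (self.c0[ctx] + self.c1[ctx])
        return min(max(p, 1), (1 << PROB_BITS) - 1)

    def update(self, ctx, bit):
        (self.c1 if bit else self.c0)[ctx] += 1


def encode(bits, n_contexts=8):
    model = Model(n_contexts)
    low, high, pending, out, ctx = 0, FULL, 0, [], 0

    def emit(b):
        nonlocal pending
        out.append(b)
        out.extend([1 - b] * pending)
        pending = 0

    for bit in bits:
        span = high - low + 1
        count0 = max(1, min(span - 1, (span * model.p0(ctx)) >> PROB_BITS))
        if bit == 0:
            high = low + count0 - 1
        else:
            low = low + count0
        while True:                 # renormalize, with underflow handling
            if high < HALF:
                emit(0)
            elif low >= HALF:
                emit(1); low -= HALF; high -= HALF
            elif low >= QUARTER and high < 3 * QUARTER:
                pending += 1; low -= QUARTER; high -= QUARTER
            else:
                break
            low, high = low * 2, high * 2 + 1
        model.update(ctx, bit)
        ctx = ((ctx << 1) | bit) % n_contexts
    pending += 1
    emit(0 if low < QUARTER else 1)  # terminate: pin a value inside [low, high]
    return out


def decode(code, n_bits, n_contexts=8):
    model = Model(n_contexts)
    stream = iter(code + [0] * PRECISION)   # zero-pad so reads never run dry
    low, high, ctx, value = 0, FULL, 0, 0
    for _ in range(PRECISION):
        value = (value << 1) | next(stream)
    out = []
    for _ in range(n_bits):
        span = high - low + 1
        count0 = max(1, min(span - 1, (span * model.p0(ctx)) >> PROB_BITS))
        if value - low < count0:
            bit, high = 0, low + count0 - 1
        else:
            bit, low = 1, low + count0
        out.append(bit)
        while True:                 # mirror the encoder's renormalization
            if high < HALF:
                pass
            elif low >= HALF:
                low -= HALF; high -= HALF; value -= HALF
            elif low >= QUARTER and high < 3 * QUARTER:
                low -= QUARTER; high -= QUARTER; value -= QUARTER
            else:
                break
            low, high = low * 2, high * 2 + 1
            value = (value << 1) | next(stream)
        model.update(ctx, bit)
        ctx = ((ctx << 1) | bit) % n_contexts
    return out
```

Because the model adapts per context, long runs of identical pixels (the common case in bi-level images) cost far less than one output bit per input bit.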
When used in lossy mode, JBIG2 compression can potentially alter text in a way that is not discernible as corruption. This is in contrast to some other algorithms, which simply degrade into a blur, making the compression artifacts obvious. Because JBIG2 tries to match up similar-looking symbols, the digits "6" and "8", for example, may get swapped.
In 2013, various substitutions (including replacing "6" with "8") were reported on many Xerox WorkCentre photocopiers and printers, where numbers printed on scanned (but not OCRed) documents could potentially be altered. This was demonstrated on construction blueprints and some tables of numbers; the potential impact of such substitution errors in documents such as medical prescriptions was briefly mentioned. David Kriesel and Xerox investigated the issue.
Xerox subsequently acknowledged that this was a long-standing software defect, and that its initial statements suggesting that only non-factory settings could introduce the substitutions were incorrect. Patches that comprehensively address the problem were published later in August, but no attempt was made to recall or mandate updates to the affected devices, which were acknowledged to span more than a dozen product families. Previously scanned documents may still contain errors, making their veracity difficult to substantiate. German and Swiss regulators subsequently (in 2015) disallowed JBIG2 encoding in archival documents.
- Press release from the Joint Bi-level Image Experts Group Archived 2005-05-15 at the Wayback Machine.
- "ITU-T Recommendation T.88 – Information technology – Coded representation of picture and audio information – Lossy/lossless coding of bi-level images". Retrieved 2011-02-19.
- "ISO/IEC 14492:2001 – Information technology – Lossy/lossless coding of bi-level images". Retrieved 2011-02-19.
- F. Ono, W. Rucklidge, R. Arps, and C. Constantinescu, "JBIG2 – the ultimate bi-level image coding standard", in Proceedings of the 2000 International Conference on Image Processing (Vancouver, BC, Canada), vol. 1, pp. 140–143, 2000.
- jbig2dec home page.
- Open-source JBIG2 plugin for Java's ImageIO.
- jbig2enc home page.
- P. Howard, F. Kossentini, B. Martins, S. Forchhammer, and W. Rucklidge, "The emerging JBIG2 standard", IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, no. 7, pp. 838–848, Nov. 1998.
- What is the patent situation with JBIG?, archived from the original on 2012-02-23
- What is JBIG2?, retrieved 2012-04-07
- JBIG2 patents, retrieved 2012-04-07
- Zhou Wang, Hamid R. Sheikh and Alan C. Bovik (2002). "No-reference perceptual quality assessment of JPEG compressed images" (PDF). Archived from the original (PDF) on 2013-11-02.
- "Xerox scanners/photocopiers randomly alter numbers in scanned documents". 2013-08-02. Retrieved 2013-08-04.
- "Confused Xerox copiers rewrite documents, expert finds". BBC News. 2013-08-06. Retrieved 2013-08-06.
- "Xerox investigating latest mangling test findings". 2013-08-11. Retrieved 2013-08-11.
- Update on Scanning Issue: Software Patches To Come, Xerox (blog), 2013-08-11
- Kriesel, David. "Video and Slides of my Xerox Talk at 31C3". D. Kriesel Data Science, Machine Learning, BBQ, Photos, and Ants in a Terrarium. Retrieved 31 July 2016.