Inside–outside–beginning (tagging)

From Wikipedia, the free encyclopedia

The IOB format (short for inside, outside, beginning) is a common tagging format for tagging tokens in a chunking task in computational linguistics (e.g. named-entity recognition).[1] It was presented by Ramshaw and Marcus in their 1995 paper "Text Chunking using Transformation-Based Learning".[2] The B- prefix before a tag indicates that the tag is the beginning of a chunk, and an I- prefix indicates that the tag is inside a chunk. The B- tag is used only when a tag is followed by a tag of the same type without O tokens between them. An O tag indicates that a token belongs to no chunk.

Another similar format which is widely used is IOB2 format, which is the same as the IOB format except that the B- tag is used in the beginning of every chunk (i.e. all chunks start with the B- tag).
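The difference between the two formats can be made concrete with a short conversion sketch (the function name is illustrative, not from the cited paper): in IOB1, an I- tag opens a chunk unless the previous token continued a chunk of the same type, and IOB2 rewrites every such chunk-opening I- as B-.

```python
def iob1_to_iob2(tags):
    """Convert IOB1 tags to IOB2, so every chunk starts with B-."""
    out = []
    for i, tag in enumerate(tags):
        if tag.startswith("I-"):
            prev = out[i - 1] if i > 0 else "O"
            # In IOB1, this I- tag opens a new chunk unless the previous
            # tag belongs to a chunk of the same type.
            if prev == "O" or prev[2:] != tag[2:]:
                tag = "B-" + tag[2:]
        out.append(tag)
    return out

print(iob1_to_iob2(["I-PER", "O", "O", "O", "I-LOC", "I-LOC"]))
# -> ['B-PER', 'O', 'O', 'O', 'B-LOC', 'I-LOC']
```

Note that the reverse mapping (IOB2 to IOB1) would strip a B- back to I- except where two same-type chunks are adjacent, which is the only case where IOB1 needs B- at all.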

A readable introduction to entity tagging is given in Bob Carpenter's blog post, "Coding Chunkers as Taggers".[3] 'BIO' is a commonly used synonym for 'IOB'.

An example with IOB format:

Alex I-PER
is O
going O
to O
Los I-LOC
Angeles I-LOC

An example with IOB2 format:

Alex B-PER
is O
going O
to O
Los B-LOC
Angeles I-LOC

Related tagging schemes sometimes include "START/END: This consists of the tags B, E, I, S or O where S is used to represent a chunk containing a single token. Chunks of length greater than or equal to two always start with the B tag and end with the E tag."[4]

Other tagging schemes include BIOES/BILOU, where 'E' (end) or 'L' (last) marks the final token of a chunk, and 'S' (single) or 'U' (unit) marks a chunk consisting of a single token.

An example with BIOES format:

Alex S-PER
is O
going O
with O
Marty B-PER
A. I-PER
Rick E-PER
to O
Los B-LOC
Angeles E-LOC
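Since BIOES only refines IOB2 with chunk-final information, it can be derived mechanically from IOB2 by looking one tag ahead. A sketch (function name mine):

```python
def iob2_to_bioes(tags):
    """Rewrite IOB2 tags in BIOES form: E- marks a chunk-final token,
    S- marks a single-token chunk; B-, I-, O are kept otherwise."""
    out = []
    for i, tag in enumerate(tags):
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        # The chunk continues only if the next tag is I- of the same type.
        continues = nxt.startswith("I-") and nxt[2:] == tag[2:]
        if tag.startswith("B-"):
            out.append(tag if continues else "S-" + tag[2:])
        elif tag.startswith("I-"):
            out.append(tag if continues else "E-" + tag[2:])
        else:
            out.append("O")
    return out

# "Alex is going with Marty A. Rick to Los Angeles" in IOB2:
tags = ["B-PER", "O", "O", "O", "B-PER", "I-PER", "I-PER",
        "O", "B-LOC", "I-LOC"]
print(iob2_to_bioes(tags))
# -> ['S-PER', 'O', 'O', 'O', 'B-PER', 'I-PER', 'E-PER',
#     'O', 'B-LOC', 'E-LOC']
```

BILOU is the same transformation with 'L' in place of 'E' and 'U' in place of 'S'.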


IOB syntax does not permit any nesting, so it cannot (unless extended) represent even very simple phenomena such as sentence boundaries (which are not trivial to locate reliably), the scope of parenthetical expressions in sentences, grammatical structures, nested named entities such as "University of Wisconsin Dept. of Computer Science", and so on. It also leaves no place for metadata such as an identifier for the particular sample, the confidence level of the NER assignment, and so on, which are commonplace in NLP systems.

Because of these limitations, data must often be converted out of IOB format, or projects must create custom extensions, which has led to a large number of not-quite-interoperable "IOB-like" formats.

The space and "O" (meaning "not in any chunk") convey no information and could simply be omitted. The same is true for putting the "type" suffix on "I-" or "E-" markers as in some variants of "BIOES", and for marking both "I" and "E" (if you have begun and not ended you are "in", and if you are "in", you have begun and not ended). Some other formats use verbosity to improve readability and/or error-checking, but IOB appears to gain no such benefit in exchange for its verbosity.

IOB's "one token per line" depends on the tokenization used, even though tokenization is not standardized in NLP, and details of tokenization do not have to be entangled with the representations of NERs. "11/31/2019" could be anywhere from one to five tokens in different systems, but the NER is the same. Some systems even permit whitespace within tokens, and space as a delimiter collides with this, narrowing the applicability of IOB and motivating more extensions. "space" might or might not include tab, multiple spaces, hard spaces, and so on, differences which are difficult to detect when proofreading.

More powerful formats (most obviously XML and JSON) can handle far more diverse annotations, have less variation between implementations, and are often shorter and more readable as well. For example:

<PER>Alex</PER> is going with <PER>Marty A. Rick</PER> to <LOC>Los Angeles</LOC>

XML takes 80 bytes to do the same things as the 91-byte BIOES version shown above, or the 79-byte IOB version. However, it can also easily support sentence boundaries, part-of-speech annotations, and other features commonly needed in NLP systems. Breaking all tokens in particular places is not strictly part of the NER task; but even if every token were tagged (like "<T>is</T>"), the total would grow only to 139 bytes.
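The mapping from token/tag pairs to such inline markup is straightforward; the following sketch (names mine, and without the escaping a real XML serializer would require) groups consecutive same-type tags into one element:

```python
def iob2_to_xml(tokens, tags):
    """Render (token, IOB2 tag) pairs as inline XML-style spans.
    A sketch only: real markup would need character escaping."""
    words, chunk, ctype = [], [], None

    def flush():
        nonlocal chunk, ctype
        if ctype:
            # Wrap the accumulated chunk tokens in one element.
            words.append("<%s>%s</%s>" % (ctype, " ".join(chunk), ctype))
        chunk, ctype = [], None

    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            flush()
            chunk, ctype = [token], tag[2:]
        elif tag.startswith("I-") and ctype == tag[2:]:
            chunk.append(token)
        else:  # "O", or a stray I- with no open chunk of that type
            flush()
            words.append(token)
    flush()
    return " ".join(words)

tokens = "Alex is going with Marty A. Rick to Los Angeles".split()
tags = ["B-PER", "O", "O", "O", "B-PER", "I-PER", "I-PER",
        "O", "B-LOC", "I-LOC"]
print(iob2_to_xml(tokens, tags))
# -> <PER>Alex</PER> is going with <PER>Marty A. Rick</PER> to <LOC>Los Angeles</LOC>
```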



  1. ^ "Entity Recognition".
  2. ^ Ramshaw and Marcus (1995). "Text Chunking using Transformation-Based Learning". arXiv:cmp-lg/9505040.
  3. ^ Bob Carpenter (2009). "Coding Chunkers as Taggers: IO, BIO, BMEWO, and BMEWO+".
  4. ^