Talk:Burrows–Wheeler transform



The text says that EOF is ignored when sorting, but the example seems to suggest that EOF is considered to come after all normal letters. AxelBoldt 14:14 Aug 26, 2002 (PDT)

The dot wasn't an EOF. It was just a dot that was there to make the rotations stand out visually. Hopefully it's clearer now. --LC 08:06 Aug 27, 2002 (PDT)

Obviously sufficiently many repeats of the transform must eventually restore the original text, and obviously that provides another (usually inefficient?) way of inverting a transform (just store the inputs temporarily, and give back the last input before the text to be inverted turns up again). But one interesting thing is this: since the original is itself the result of a transform on some text, clearly the transform does not necessarily always produce something better suited for compression. It must be something that is statistically likely, based on properties that are common in many inputs. What are these, and what are the odds that a transform won't actually help and might even hurt? If we do get one of these cases, how can we tell (cheaply)? And so on. PML.

How can repeating the transform restore the original text? The transform is a not a bijection between strings of length n. It is a one-to-one function from strings of length n into the set of ordered pairs of a string of length n and an integer less than n. Thus, different inputs can produce the same output string with a different index. For example .BANANA. and ANA..BAN produce the same output string.
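This is easy to check with a small sketch (a plain rotation-sorting BWT in Python, my own naming, not the article's code): the two strings are rotations of one another, so they yield the same sorted rotation set, hence the same last column, and only the index differs.

```python
def bwt(s):
    # Classical BWT: sort all cyclic rotations, output the last
    # column plus the row index of the original string.
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations), rotations.index(s)

out1, idx1 = bwt(".BANANA.")
out2, idx2 = bwt("ANA..BAN")   # a rotation of ".BANANA."
print(out1, idx1)  # A.NNB.AA 1
print(out2, idx2)  # A.NNB.AA 3
```

Same output string, different index: the string alone is not enough to invert the transform.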

There is a simple way to avoid the use of an EOF symbol, which is treated differently in the many versions of BWT, since any rotation of the original string gets you the same BWT string output (which is the way the Burrows–Wheeler transform is usually first discussed in the literature). One problem with the special symbol for EOF is that when dealing with a file of 256 character types, one needs to add an extra character in order to do the BWT. Then one has to pick its value: some make it the heaviest symbol, some the lightest, and some binary versions pick a weight halfway in between. One method that I have used to get back to the traditional BWT is to rotate the string until it represents a single Lyndon word, then do the BWT as done in the start of this article, and pass as an index the number of rotations needed to get the start of the original string. One nice feature of this is that when the string is rotated to make a single Lyndon word, the string resulting from the classical BWT and the bijective BWT are the SAME. — Preceding unsigned comment added by (talk) 15:23, 22 August 2012 (UTC)
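The rotate-to-a-Lyndon-word idea above can be sketched as follows (a naive minimal sketch with my own names; a real implementation would use Booth's algorithm to find the least rotation in linear time, and for a primitive string that least rotation is a Lyndon word):

```python
def least_rotation_index(s):
    # Naive: index of the lexicographically least rotation of s.
    return min(range(len(s)), key=lambda i: s[i:] + s[:i])

def bwt_no_eof(s):
    # Rotate s so it becomes its own least rotation, take the
    # classical BWT (sorted rotations, last column), and pass the
    # rotation count instead of adding an EOF symbol.
    k = least_rotation_index(s)
    t = s[k:] + s[:k]
    rotations = sorted(t[i:] + t[:i] for i in range(len(t)))
    return "".join(r[-1] for r in rotations), k

out, k = bwt_no_eof("BANANA")
print(out, k)  # NNBAAA 5
```

The decoder inverts the index-based BWT and then rotates back by k to recover the original string.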

Bad example[edit]

This article gives a really bad example for this algorithm. Take the text:


and replace all repeated substrings (as deflate would detect them) with a special control character which I'll represent as _:


Now take the transformed text:


and do the same:


Clearly, the original string compresses better. And indeed gzip and bzip2 confirm this: gzip compresses the above to 67 bytes, bzip2 to 69 bytes. (Both can hardly be called compression: the original was only 44 bytes ...) — Timwi 14:14, 9 Nov 2004 (UTC)

That is unfortunately the curse of writing about data compression: in a substantial number of cases, it can be almost impossible to find an example small enough to be practical, and clear enough to show the principle involved, but large enough so that the savings achieved with the method can be immediately seen to be significant. Add in a requirement about 'no other method can do this example better' and I am afraid you're asking for the impossible. IMHO, it's much better to use an example that shows the principle clearly and note in the text if needed that 'in this example, the saving from the run-length encoding doesn't overcome the overhead of the encoding; in a larger example which produced longer runs, it would.' After all, generations of textbooks have used recursive Fibonacci algorithms to demonstrate recursion when that's not the most efficient way to calculate Fibonacci numbers.

By the way, have you noticed that your deflate example is incorrect? Deflate operates on strings of minimum length 3. Even with your simplification of all replaced substrings taking as much space as one literal, here's how it would actually look:




Not a stellar improvement. -- Antaeus Feldspar 15:11, 9 Nov 2004 (UTC)

It doesn't improve deflate, but I think it does improve RLE. My impression was that the point is to create runs. Deco 20:42, 9 Nov 2004 (UTC)
Timwi's point was that in the example given, deflate outperforms BWT. However, his comparison was flawed because he showed deflate getting LZ77 compression from repetition of two-byte sequences. It doesn't; BWT gets compression from repetition of two-byte sequences, but deflate's implementation of LZ77 compression doesn't work on sequences less than three bytes. -- Antaeus Feldspar 02:17, 10 Nov 2004 (UTC)
If not deflate, then what does bzip2 apply after the BWT? — Timwi 11:31, 10 Nov 2004 (UTC)
To the best of my knowledge, it used to apply RLE and then arithmetic coding, but because of patent problems, it switched to RLE and then Huffman coding. It wouldn't make sense to apply deflate after the BWT, because while deflate can get compression from repeated byte sequences, long runs of the same character, and relative frequency of some characters over others, the BWT is going to remove almost every instance of repeated byte sequences by re-ordering the characters so that they lengthen runs, instead. Deflate would still be able to work on the runs, but it wouldn't do so as effectively as an encoding step that works only on runs.
bzip2 never used arithmetic coding; bzip did. bzip2 does RLE => BWT => MTF => Huffman, and the reason it does RLE has nothing to do with compression, but with the fact that the authors didn't implement the BWT properly in bzip (because they didn't understand why a certain part was needed and left it out), causing extremely long (as in years) compression times on some inputs. The RLE pre-pass only exists to avoid these "degenerate" cases. If the BWT had been implemented properly this would not be needed; in fact, a lot of documentation still refers to fixed-size blocks of input being compressed, which is no longer the case due to the RLE pre-pass. So RLE is a bug workaround; it's not the best method to apply after a BWT, a straight MTF encoder is. 2001:470:D08D:0:0:0:0:1 (talk) 07:20, 25 January 2013 (UTC)
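For reference, the move-to-front stage in that pipeline is only a few lines. This is a generic sketch over the byte alphabet (my own naming, not bzip2's actual code), showing how the character runs produced by a BWT turn into runs of zeros:

```python
def mtf_encode(data):
    # Move-to-front: emit each symbol's current position in the
    # alphabet list, then move that symbol to the front.  Repeated
    # symbols therefore encode as 0.
    alphabet = list(range(256))
    out = []
    for byte in data:
        i = alphabet.index(byte)
        out.append(i)
        alphabet.pop(i)
        alphabet.insert(0, byte)
    return out

print(mtf_encode(b"aaabbb"))  # [97, 0, 0, 98, 0, 0]
```

The resulting zero-heavy stream is then highly skewed, which is exactly what the final Huffman (or arithmetic) stage exploits.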

It may help to remember that lossless compression can't create compression from nothing; it can only remove redundancy that already exists. The following table shows which compression methods exploit which kind of redundancy. (Remember that Deflate is LZ77 coding followed by Huffman coding.)
  • Repeated multi-byte sequences: LZ77 yes, at the cost of two parameters (offset, length); BWT converts them into single-character runs; RLE no; Huffman/Arithmetic no.
  • Single-character runs: LZ77 yes, at the cost of two parameters (offset, length); BWT no; RLE yes, at the cost of a single parameter (length of run); Huffman/Arithmetic no.
  • Non-flat frequency distribution of characters: LZ77 no; BWT no; RLE no; Huffman/Arithmetic yes, exploits optimally.
As you see, BWT destroys one of the two kinds of redundancy LZ77 can work on, and converts it to the other kind; LZ77 can still work on it, but at twice the cost of RLE. So it wouldn't make sense to use deflate after BWT instead of RLE and Huffman. -- Antaeus Feldspar 16:55, 10 Nov 2004 (UTC)
In the original paper that Burrows and Wheeler wrote, they suggested that their transform be used in conjunction with a Move-to-Front encoder and an entropy encoder, such as Huffman; they made no reference to RLE. According to the bzip2 website, RLE is used, but the author indicates that it isn't necessary and is only retained because that's how he originally wrote it and he didn't want to create yet another compression format. -- Zawersh 15:05, 14 May 2006 (UTC)

This whole discussion about a bad example is rather pointless and can easily be settled by quoting Burrows and Wheeler from their original report (to read it yourself, see the sources list in the article): "The size of the input block must be large (a few kilobytes) to achieve good compression." ~ Band B 22:40, 14 February 2007 (UTC)

C implementation[edit]

Is there any reason that the C implementation given in the Polish Wikipedia shouldn't be included here? --Shutranm 21:47, 20 May 2005 (UTC)

Collation issues[edit]

Nobody reading this page cares about Posix collation. Why don't we pick a sample string with no such issues? --Doradus 17:45, 16 January 2006 (UTC)

How do you get from "I don't care" to "nobody cares"? People reading this page quite clearly do care about POSIX collation, or there wouldn't be a mention of it in the first place. I certainly care, and I'm equally certainly reading this page. The point that BWT is sensitive to collation issues is worth making, IMO. — Haeleth Talk 20:19, 22 March 2006 (UTC)
BWT isn't sensitive to Posix; the sort will work in whatever way you implement the sort. Practically speaking, it's very unlikely anyone will be doing real BWT with Posix C string sorting functions; I found it to be quite a non-sequitur with little or no apparent relation to the subject at hand. The removed section follows: --Brion 03:31, 19 May 2006 (UTC)
==== Note on sorting convention ====

If you sort with [[Posix]] [[collating]], you get the slightly different string
instead of

[[ISO 8859]] has complex collating rules, but in this case, periods are
ignored.  Posix collating treats periods as characters.

As a visitor trying to use this page to work out my own implementation, it would be nice to know what sorting convention is being used for the example. Otherwise, I'd say that the example isn't very useful. -- (talk) 01:20, 7 March 2014 (UTC)

Mistake in example?[edit]

Could someone check the example, please? I was trying to verify something and my simple quick implementation of BWT outputs




as the article writes.

(And note that I intend to at least replace the black dot with another character, or to remove it altogether. It is not good style to unnecessarily convey meaning only with a color variation; consider e.g. blind readers.)

--Mormegil 18:12, 18 June 2006 (UTC)

  • My code gives the same output as Mormegil's, not as the article writes.--Thorwald 06:05, 3 August 2006 (UTC)
  • Just tried this out, and I get exactly the same as Mormegil and Thorwald. If you try doing it, and use something like Microsoft Excel (yes, you can stop spitting now), you get the following table:


Which is understandable, as a space has a smaller ASCII value than [A..Z|a..z]

--Ambulnick 10:17, 27 Sept 2006 (GMT)

If you take the C code and run it on ".BANANA" instead of "Polska Wikipedia", you don't get the same answer as the article gives (BNN.AA@A); you get ANNB.AA. That's because "." sorts before capital letters in ASCII, not after, as the example shows. Changing "." to "x" makes the example work. If someone else will verify that they see the same problem, I'll fix the example. But it's late at night -- I may just be confusing myself :-)

Cbogart2 06:04, 18 December 2006 (UTC)


The BWT is not just the transformed string. If using the method above, you need to include something like the position of the original data, i.e. the row in the example above where SIX.MIXED occurs. If we don't do this, then people will wrongly think the normal BWT is just a permutation, when in fact it has two outputs: the permutation and an integer index. —Preceding unsigned comment added by (talk) 16:28, 2 March 2009 (UTC)


Article says:

The Burrows-Wheeler transform (BWT, also called block-sorting compression)

Technically, BWT is not a compression method. Should that be noted? It leads to a wrong understanding of the subject —lim 16:54, 16 May 2007 (UTC)

Bad inefficiency in the example implementation[edit]

Putting aside the issue that this is a really horrible example of program code, there's a major inefficiency in it: In the function rotlexcmp() if 'li' is equal to 'ri', it will needlessly traverse the entire data and then just return 0. According to my own experiments this makes it almost 100 times slower than it could be (well, depending on the exact implementation of the sorting function, of course). Simply adding a "if(li==ri) return 0;" at the beginning solves this issue and makes the program a lot faster with large datasets.

Wikipedia is not about tweaking the last bit of performance out of some algorithm. It only has to give the right output. —Preceding unsigned comment added by (talk) 08:35, 6 February 2009 (UTC)


We don’t need two “sample implementations” and should not have both of them here. We are not a code repository. Maybe we should clean one of them up to focus on the algorithm (e.g. remove those #includes and main() in the C version)? --Mormegil 17:54, 9 November 2007 (UTC)

I agree with you, I am going to remove the C-implementation. I don't think it serves any purpose here. Sample implementations can be found around the web. The C-code also points to the Polish Wikipedia as the source. Wikipedia is not a reliable source. —ZeroOne (talk / @) 16:28, 14 December 2008 (UTC)
I do not agree. Although the WP:NOT says Wikipedia is "not a code repository", this is only a suggestion, not a rule. I do not agree with this policy. I think Wikipedia should always have sample code whenever possible. It provides probably the best context to the article. I agree that we should primarily focus on the algorithm, but practical/actual code is always a help. It is yet another reason to visit Wikipedia. I am, and have always been, an inclusionist. PS: What about this entire category: Category:Articles with example C code? Are you going to remove that as well? --Thorwald (talk) 00:02, 15 December 2008 (UTC)
Well, I'm glad we got a discussion. I also think that sample code is good. In this case I removed the C code and left the Python code, because I found the Python code to be clearer. There's nothing wrong with having C code but I think a sample in one language is enough. If I have to choose between C and Python I remove the C example; if I had to choose between, say, C and assembly language, I'd probably remove the assembly example. Having an example in more than one language feels like artificially expanding the article. If we allow two example languages, why not three? If we allow three different example languages, why not four? Why not five, six or seven examples? —ZeroOne (talk / @) 13:40, 15 December 2008 (UTC)
Okay. I agree. The Python code is sufficient (and clearer and cleaner). --Thorwald (talk) 01:20, 16 December 2008 (UTC)

Mathematica Implementation[edit]

Just for Fun.

BWT[str_] := Block[{list, n, data},
 list = StringSplit[str, ""];
 n = Length@list;
 data = Sort@Table[RotateRight[list, i], {i, 1, n}];
 StringJoin@data[[All, -1]]
 ]

inBWT[str_, endOfFile_: "@"] := Block[{list, n, K},
 list = StringSplit[str, ""];
 n = Length[list];
 K = Sort@Transpose@List@list;
 Do[K = Sort@Transpose@Prepend[Transpose@K, list], {n - 1}];
 StringJoin@Select[K, Last@# == endOfFile &][[1]]
 ]  —Preceding unsigned comment added by (talk) 22:35, 24 October 2009 (UTC)

The Bijective BWT[edit]

Does anybody have independent confirmation of the Bijective BWT actually working? After having read the link to David Scott's method, I'm extremely sceptical that his technique works.

The main reason why I'm extremely sceptical is because he essentially appears to be using the 1st column of the BWT reversed. If that works, then there's no reason why you couldn't use a binary input (instead of ASCII characters) and end up with what would surely be the world's most compressible file, consisting merely of a count of 0's and 1's. Tiggs (talk) 00:16, 30 August 2008 (UTC)

Actually, Yuta Mori has put it in OpenBWT-v1.3, and people on one of the more active compression sites have tested it with many files. Surprisingly, it often beats regular BWT-based compressors using code tuned for the standard BWT. The main drawback, besides being hard to understand, is that it is somewhat slower to execute than the normal BWT. It is not the 1st column of the BWT reversed; you can try Scott's or Mori's code on any reasonably sized file to see that this is not what is happening. —Preceding unsigned comment added by (talk) 22:14, 3 October 2008 (UTC)

Agreed. I've spent some time looking over the source code and it is, indeed, a bijective transform.

Whether or not it should be classified as a Burrows–Wheeler transform, or whether it should be called a Scott transform, however, is a completely different question. As it changes the order that the characters are transformed into, I'd favour the latter. Tiggs (talk) 20:53, 10 October 2008 (UTC)

Actually, there really is no standard BWT, even for the indexed version. Many consider the one with full wrap-around on compares to be the original; others use a special character for EOF, some treating that EOF character as having the greatest value and some the lowest; there are many ways to do it. And there are even more ways to label the index. But if you take the BWT that uses the full wrap-around and do a UNBWTS on it, you get the original file back, except that it may be rotated, since there is no index. Actually, the normal BWT could be thought of as a subset of BWTS. I have not heard of anyone calling it the Scott Transform; even Yuta referred to it as the BWT "Scottified". In fact, there is a whole series of transforms that could be called bijective BWT transforms, but few know about them or understand the concept. Tests done by others show this version tends to lead to better compression than what the users there call a normal BWT. The field is wide open, and not just for compression. —Preceding unsigned comment added by (talk) 02:33, 11 October 2008 (UTC)

Purpose of bijective transformation[edit]

Would it be possible to explain why the bijective version is interesting? E.g. where is it used and why? Saving the cost of one index does not seem to justify it. The text seems to suggest it was created for more than its mathematical beauty. (talk) 15:17, 7 June 2014 (UTC) Bill

Source for bijective transformation[edit]

I'm concerned that the only source for the bijective transformation is the paper by Gil and Scott that seems to be (in traditional terms) an "unpublished manuscript". That isn't quite up to Wikipedia standards. That isn't just useless policy-wanking in this case; the paper contains meaning-disturbing typos and mistakes that I don't think would have survived in a published paper. At least we shouldn't mislead readers into thinking this is as reliable a source as the ones we usually provide. Furthermore, Gil is an academic, but no reference to the paper appears on his web page, which could indicate that it was later retracted.

In particular: The paper presents a "string-sorting transform" with two variants, depending on how one compares strings of different length where one is a prefix of the other. It also presents an inverse transform, though one without any variants, and it is not quite stated which of the two main transforms it is an inverse of. According to my pencil-and-paper experiments, the inverse transform does seem to invert the "infinite-periodic" variant of the main transform. It does not work for the "usual lexicographical" variant, and indeed that variant is not a bijection! As a minimal example, the input strings "bab" and "abb" both transform to "bba".(*)

Unfortunately it is only the "usual lexicographical" transformation that the authors provide a linear-time algorithm for (though it is very sketchy, and I'm not convinced by the time analysis). For the infinite-periodic variant, the authors explicitly deny having a linear algorithm. They claim to have an O(nlogn) one, but do not describe it further.

While a bijective variant of the BWT is indeed an interesting and worthy topic, the one presented by the paper is not quite as ready for use as the paper's abstract suggests.

(*)  Input:                 bab      abb
     Lyndon factorization:  b·ab     abb
     Cyclic rotations:      b,ab,ba  abb,bba,bab
     Sorted:                ab       abb
                            b        bab
                            ba       bba
     Output:                bba      bba

(The "infinite-periodic" variant sorts "b" after "ba" and therefore transforms "bab" to "bab").

Henning Makholm (talk) 13:44, 6 December 2010 (UTC)

Huh. I managed to completely overlook the Kufleitner paper, which seems to be a published source. Somebody ought to convert those external links to in-line citations ... –Henning Makholm (talk) 14:08, 6 December 2010 (UTC)
You have the correct transform for abb, which is bba.

However bab, which breaks up into b·ab, is sorted wrong; it should be

ab to b
ba to a
bb to b (that is, b repeated to the common length)

You repeat each substring as needed before the comparison, so b sorts after ba, not before; thus the BWTS of bab is bab, i.e. it is its own BWTS. One conceptual way to do this is to repeat each substring to a common length: for example, if the substrings have lengths 3, 6, 2, 1, 5, use 30 characters each, then sort as you would in the normal BWT.
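The repeat-before-comparison rule for unequal-length rotations can be sketched as follows (function names are mine; Duval's algorithm produces the Lyndon factorization, and rotations of the factors are compared by repeating each to a common length, as described above):

```python
def lyndon_factorization(s):
    # Duval's algorithm: factor s into a non-increasing
    # sequence of Lyndon words.
    factors, i = [], 0
    while i < len(s):
        j, k = i + 1, i
        while j < len(s) and s[k] <= s[j]:
            k = k + 1 if s[k] == s[j] else i
            j += 1
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

def bwts(s):
    # Collect rotations of every Lyndon factor and order them as
    # infinite periodic strings: compare each rotation repeated
    # out to a common length (2*len(s) characters suffices).
    rots = [w[i:] + w[:i] for w in lyndon_factorization(s)
            for i in range(len(w))]
    m = len(s)
    rots.sort(key=lambda r: (r * (2 * m // len(r) + 1))[:2 * m])
    return "".join(r[-1] for r in rots)

print(bwts("bab"))  # bab  (factors b and ab; its own transform)
print(bwts("abb"))  # bba
```

With this ordering, "b" repeats to "bb..." and so sorts after "ba...", giving bab as its own transform, while abb transforms to bba.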

I also don't think the last changes to the explanation help people understand what it is. I would like to change it back, but would prefer someone else fix the English. I read what is now there and I don't think it helps many people to understand the concept. — Preceding unsigned comment added by (talk) 15:15, 17 January 2012 (UTC)

When one compares two strings lexicographically, you pretend the shorter string is padded with spaces. This is not the same as comparing two unequal-length strings in the bijective BWT, where you keep repeating the strings until there is a difference.  — Preceding unsigned comment added by (talk) 23:25, 30 September 2012 (UTC)

BWT-based self-indexes[edit]

Section "BWT in bioinformatics" and pages Compressed suffix array and FM-index should probably be merged into a single page. They are all about essentially the same data structure:

  1. Build the BWT of the text.
  2. Encode the BWT in a way that supports rank(c, i) (the number of occurrences of character c in BWT[1..i]) and select(c, i) (the position of the i-th occurrence of character c in the BWT) efficiently.
  3. Use the BWT to simulate the suffix array.
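The three steps above can be sketched in miniature with a naive rank (names are mine; a real self-index replaces the rescanning rank with wavelet trees or similar succinct structures):

```python
from collections import Counter

def bwt(s):
    s += "$"  # unique sentinel, sorts before the letters
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def count_occurrences(bwt_s, pattern):
    # C[c] counts characters smaller than c in the text;
    # rank(c, i) counts occurrences of c in bwt_s[:i] (naively
    # here, by rescanning).
    counts = Counter(bwt_s)
    C, total = {}, 0
    for c in sorted(counts):
        C[c] = total
        total += counts[c]

    def rank(c, i):
        return bwt_s[:i].count(c)

    # Backward search: shrink the interval of sorted rotations
    # prefixed by the pattern, one pattern character at a time.
    lo, hi = 0, len(bwt_s)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + rank(c, lo)
        hi = C[c] + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

b = bwt("abracadabra")
print(b)                              # ard$rcaaaabb
print(count_occurrences(b, "abra"))   # 2
```

Counting pattern occurrences this way needs only the BWT plus rank support, which is exactly what the FM-index and compressed suffix arrays provide in compressed space.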

The differences between various compressed suffix arrays and FM-indexes are mostly in the encodings they use. The following survey articles are probably the best resources about compressed self-indexes so far:

  • G. Navarro, V. Mäkinen: Compressed Full-Text Indexes. ACM Computing Surveys, 2007.
  • P. Ferragina, R. González, G. Navarro, R. Venturini: Compressed Text Indexes: From Theory to Practice. ACM Journal of Experimental Algorithms, 2009.

Jouni Sirén (talk) 23:48, 18 December 2010 (UTC)

Example - Inverse Transformation[edit]

The article is very nicely written and clear even for a beginner like me. But I didn't get one point:

Would it be possible to explain why step 3 in the inverse transformation is possible? Sorting and concatenating with the BW-transformed string in step 2 is clarified in the text, but why can one sort again and concatenate the BW-transformed string again? Though it works.

cheers, Oliver Drechsel (talk) 08:39, 20 January 2011 (UTC)
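For what it's worth, the repeated sort-and-prepend can be written out in a few lines (a sketch using an index-returning BWT rather than the article's EOF-marker variant, names mine). The reason each round is valid: after k rounds the table holds, in sorted order, the first k characters of every rotation, because each BWT character cyclically precedes the row it sits in, so prepending extends every prefix by one character to the left and re-sorting restores the sorted-rotation order.

```python
def inverse_bwt(r, idx):
    # Repeatedly prepend the transform string and re-sort.
    # After len(r) rounds the table is the full sorted rotation
    # table; row idx is the original string.
    table = [""] * len(r)
    for _ in range(len(r)):
        table = sorted(c + row for c, row in zip(r, table))
    return table[idx]

# Under plain rotation sorting, ".BANANA." transforms to
# "A.NNB.AA" with the original string at row 1.
print(inverse_bwt("A.NNB.AA", 1))  # .BANANA.
```

The same idea works with an EOF marker instead of an index: the marker identifies which row of the finished table is the original string.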

WRONG transformation[edit]

The added character @ should be lexicographically before any other character, so:


should be


and the resulting string would be ABNN^AA@ — Preceding unsigned comment added by (talk) 19:33, 12 April 2012 (UTC)