Chvátal–Sankoff constants

In mathematics, the Chvátal–Sankoff constants are mathematical constants that describe the lengths of longest common subsequences of random strings. Although the existence of these constants has been proven, their exact values are unknown. They are named after Václav Chvátal and David Sankoff, who began investigating them in the mid-1970s.[1][2]

There is one Chvátal–Sankoff constant $\gamma_k$ for each positive integer k, where k is the number of characters in the alphabet from which the random strings are drawn. These constants decrease as k grows, in inverse proportion to the square root of k.[3] However, some authors write "the Chvátal–Sankoff constant" to refer to $\gamma_2$, the constant defined in this way for the binary alphabet.[4]

Background

A common subsequence of two strings S and T is a string whose characters appear in the same order (not necessarily consecutively) both in S and in T. The problem of computing a longest common subsequence has been well studied in computer science. It can be solved in polynomial time by dynamic programming;[5] this basic algorithm has additional speedups for small alphabets (the Method of Four Russians),[6] for strings with few differences,[7] for strings with few matching pairs of characters,[8] etc. This problem and its generalizations to more complex forms of edit distance have important applications in areas that include bioinformatics (in the comparison of DNA and protein sequences and the reconstruction of evolutionary trees), geology (in stratigraphy), and computer science (in data comparison and revision control).[7]
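As a concrete illustration of the basic algorithm (a minimal sketch, not one of the optimized variants just cited), the quadratic-time dynamic program for the length of a longest common subsequence can be written as follows; the function name lcs_length is introduced here only for illustration:

    def lcs_length(s, t):
        """Length of a longest common subsequence of sequences s and t,
        computed by the standard O(len(s) * len(t)) dynamic program."""
        n = len(t)
        prev = [0] * (n + 1)            # LCS lengths for the prefix of s read so far
        for ch in s:
            curr = [0] * (n + 1)
            for j in range(1, n + 1):
                if ch == t[j - 1]:
                    curr[j] = prev[j - 1] + 1            # extend a match
                else:
                    curr[j] = max(prev[j], curr[j - 1])  # drop a character from s or t
            prev = curr
        return prev[n]

    # Example: lcs_length("nematode", "empty") returns 3 (the subsequence "emt").

Only two rows of the dynamic programming table are kept at a time, so the memory use is linear in the length of the second string.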

One motivation for studying the longest common subsequences of random strings, already given by Chvátal and Sankoff, is to calibrate computations of longest common subsequences on strings that are not random. If such a computation returns a subsequence that is significantly longer than what would be obtained at random, one might infer that the match is meaningful or significant.[1]

Definition and existence

The Chvátal–Sankoff constants describe the behavior of the following random process. Given parameters n and k, choose two length-n strings S and T from the same k-symbol alphabet, with each character of each string chosen uniformly at random, independently of all the other characters. Compute a longest common subsequence of these two strings, and let $L_{n,k}$ be the random variable whose value is the length of this subsequence. Then the expected value $E[L_{n,k}]$ is (up to lower-order terms) proportional to n, and the kth Chvátal–Sankoff constant $\gamma_k$ is the constant of proportionality.[2]
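The definition can be made concrete with a small Monte Carlo experiment; this sketch assumes the lcs_length routine from the Background section above is in scope, and the trial count is an arbitrary illustrative choice:

    import random

    def estimate_ratio(n, k, trials=100, seed=1):
        """Estimate E[L_{n,k}] / n by averaging the LCS length over random
        pairs of length-n strings drawn uniformly from a k-symbol alphabet."""
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            s = [rng.randrange(k) for _ in range(n)]
            t = [rng.randrange(k) for _ in range(n)]
            total += lcs_length(s, t)
        return total / (trials * n)

    # By the superadditivity argument below, E[L_{n,k}] / n never exceeds gamma_k,
    # so finite-n estimates such as estimate_ratio(1000, 2) sit below the constant.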

More precisely, the expected value is superadditive: for all m and n, $E[L_{m+n,k}] \ge E[L_{m,k}] + E[L_{n,k}]$. This is because, if strings of length m + n are broken into substrings of lengths m and n, and longest common subsequences of those substrings are found, they can be concatenated together to get a common subsequence of the whole strings. It follows from a lemma of Michael Fekete[9] that the limit

$$\gamma_k = \lim_{n\to\infty} \frac{E[L_{n,k}]}{n}$$

exists, and equals the supremum of the values $E[L_{n,k}]/n$. These limiting values are the Chvátal–Sankoff constants.[2]
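Written out, the version of Fekete's lemma being invoked is the following (with $a_n = E[L_{n,k}]$; since $L_{n,k} \le n$, the supremum is at most 1, so each $\gamma_k$ is a well-defined number in $[0,1]$):

$$a_{m+n} \ge a_m + a_n \ \text{for all } m, n \ge 1 \quad\Longrightarrow\quad \lim_{n\to\infty} \frac{a_n}{n} = \sup_{n \ge 1} \frac{a_n}{n}.$$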

Bounds

The exact values of the Chvátal–Sankoff constants remain unknown, but rigorous upper and lower bounds have been proven.

Because $\gamma_k$ is the supremum of the values $E[L_{n,k}]/n$, each of which depends only on a finite probability distribution, one way to prove rigorous lower bounds on $\gamma_k$ would be to compute the exact values of $E[L_{n,k}]$ for specific n; however, this method scales exponentially in n, so it can only be carried out for small values of n, leading to weak lower bounds. In his Ph.D. thesis, Vlado Dančík pioneered an alternative approach in which a deterministic finite automaton reads symbols of two input strings and produces a (long but not optimal) common subsequence of these inputs. The behavior of this automaton on random inputs can be analyzed as a Markov chain, whose steady state determines the rate at which it finds elements of the common subsequence for large values of n. This rate is necessarily a lower bound on the Chvátal–Sankoff constant.[10] By using Dančík's method, with an automaton whose state space buffers the most recent h characters from its two input strings, and with additional techniques for avoiding the expensive steady-state Markov chain analysis of this approach, Lueker (2009) was able to perform a computerized analysis with h = 15 that proved $\gamma_2 \ge 0.788071$.
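To illustrate the flavor of the automaton approach with a deliberately trivial stand-in (this is not Dančík's or Lueker's construction), consider a one-state machine that reads S[i] and T[i] in lockstep and records a match exactly when the two symbols agree. Its steady-state match rate is exactly 1/k, which certifies only the weak bound $\gamma_k \ge 1/k$; the buffered automata described above are what push the bound up to the values in the table below.

    import random

    def lockstep_rate(n, k, trials=100, seed=2):
        """Empirical match rate of the trivial lockstep automaton.  The matched
        positions are strictly increasing in both strings, so they always form a
        common subsequence; the long-run rate (exactly 1/k) is therefore a valid,
        though weak, lower bound on gamma_k."""
        rng = random.Random(seed)
        matches = 0
        for _ in range(trials):
            s = [rng.randrange(k) for _ in range(n)]
            t = [rng.randrange(k) for _ in range(n)]
            matches += sum(1 for a, b in zip(s, t) if a == b)
        return matches / (trials * n)

    # lockstep_rate(10000, 2) comes out near 0.5, far below the 0.788071 bound.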

Similar methods can be generalized to non-binary alphabets. Lower bounds obtained in this way for various values of k are:[4]

k     Lower bound on $\gamma_k$
2     0.788071
3     0.671697
4     0.599248
5     0.539129
6     0.479452
7     0.44502
8     0.42237
9     0.40321
10    0.38656

Dančík & Paterson (1995) also used automata-theoretic methods to prove upper bounds on the Chvátal–Sankoff constants, and again Lueker (2009) extended these results by computerized calculations. The upper bound he obtained was . This result disproved a conjecture of J. Michael Steele that , because this value is greater than the upper bound.[11] Non-rigorous numerical evidence suggests that is approximately , closer to the upper bound than the lower bound.[12]

In the limit as k goes to infinity, the constants $\gamma_k$ decrease in inverse proportion to the square root of k. More precisely,[3]

$$\lim_{k\to\infty} \gamma_k \sqrt{k} = 2.$$
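The convergence is slow; the following snippet, which only tabulates numbers already appearing in this article, compares the rescaled lower bounds $\sqrt{k}\,\gamma_k$ (using the tabulated lower bounds in place of the unknown $\gamma_k$) with the limiting value 2:

    from math import sqrt

    # Rigorous lower bounds on gamma_k from the table above.
    lower_bounds = {2: 0.788071, 3: 0.671697, 4: 0.599248, 5: 0.539129,
                    6: 0.479452, 7: 0.44502, 8: 0.42237, 9: 0.40321, 10: 0.38656}

    for k, lb in lower_bounds.items():
        # sqrt(k) * gamma_k tends to 2 as k grows, but sqrt(k) times the known
        # lower bound is still only about 1.1-1.2 for k up to 10.
        print(k, round(sqrt(k) * lb, 3))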

Distribution of LCS lengths

There has also been research into the distribution of values of the longest common subsequence, generalizing the study of the expectation of this value. For instance, the standard deviation of the length of the longest common subsequence of random strings of length n is known to be proportional to the square root of n.[13]

One complication in performing this sort of analysis is that the random variables describing whether the characters at different pairs of positions match each other are not independent of each other. For a more mathematically tractable simplification of the longest common subsequence problem, in which the allowed matches between pairs of symbols are not controlled by whether those symbols are equal to each other but instead by independent random variables with probability 1/k of being 1 and (k − 1)/k of being 0, it has been shown that the distribution of the longest common subsequence length is controlled by the Tracy–Widom distribution.[14]
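A minimal sketch of this simplified model (the function name and parameters are illustrative, not taken from the cited paper): the usual longest-common-subsequence recurrence is kept, but each potential match is decided by an independent coin of probability 1/k rather than by equality of two characters.

    import random

    def bernoulli_matching_length(n, k, seed=3):
        """Length statistic of the simplified ('Bernoulli matching') model:
        the LCS dynamic program with independent Bernoulli(1/k) match
        indicators in place of the correlated indicators 1[S_i == T_j]."""
        rng = random.Random(seed)
        prev = [0] * (n + 1)
        for _ in range(n):
            curr = [0] * (n + 1)
            for j in range(1, n + 1):
                match = 1 if rng.random() < 1.0 / k else 0
                curr[j] = max(prev[j], curr[j - 1], prev[j - 1] + match)
            prev = curr
        return prev[n]

    # Rescaled fluctuations of bernoulli_matching_length(n, k) over many seeds
    # follow the Tracy-Widom distribution as n grows, as stated above.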

References

  1. Chvatal, Václav; Sankoff, David (1975), "Longest common subsequences of two random sequences", Journal of Applied Probability, 12 (2): 306–315, doi:10.2307/3212444, JSTOR 3212444, MR 0405531, S2CID 250345191.
  2. Finch, Steven R. (2003), "5.20.2 Common Subsequences", Mathematical Constants, Encyclopedia of Mathematics and its Applications, Cambridge University Press, pp. 384–385, ISBN 9780521818056.
  3. Kiwi, Marcos; Loebl, Martin; Matoušek, Jiří (2005), "Expected length of the longest common subsequence for large alphabets", Advances in Mathematics, 197 (2): 480–498, arXiv:math/0308234, doi:10.1016/j.aim.2004.10.012, MR 2173842.
  4. Kiwi, M.; Soto, J. (2009), "On a speculated relation between Chvátal–Sankoff constants of several sequences", Combinatorics, Probability and Computing, 18 (4): 517–532, arXiv:0810.1066, doi:10.1017/S0963548309009900, MR 2507735, S2CID 10882010.
  5. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), "15.4", Introduction to Algorithms (2nd ed.), MIT Press and McGraw-Hill, pp. 350–355, ISBN 0-262-53196-8.
  6. Masek, William J.; Paterson, Michael S. (1980), "A faster algorithm computing string edit distances", Journal of Computer and System Sciences, 20 (1): 18–31, doi:10.1016/0022-0000(80)90002-1, MR 0566639.
  7. Sankoff, David; Kruskal, Joseph B. (1983), Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison, Addison-Wesley, Bibcode:1983twse.book.....S.
  8. Hunt, James W.; Szymanski, Thomas G. (1977), "A fast algorithm for computing longest common subsequences", Communications of the ACM, 20 (5): 350–353, doi:10.1145/359581.359603, MR 0436655, S2CID 3226080.
  9. Fekete, M. (1923), "Über die Verteilung der Wurzeln bei gewissen algebraischen Gleichungen mit ganzzahligen Koeffizienten", Mathematische Zeitschrift (in German), 17 (1): 228–249, doi:10.1007/BF01504345, S2CID 186223729.
  10. Dančík, Vlado; Paterson, Mike (1995), "Upper bounds for the expected length of a longest common subsequence of two binary sequences", Random Structures & Algorithms, 6 (4): 449–458, doi:10.1002/rsa.3240060408, MR 1368846.
  11. Lueker, George S. (2009), "Improved bounds on the average length of longest common subsequences", Journal of the ACM, 56 (3), A17, doi:10.1145/1516512.1516519, MR 2536132, S2CID 7232681.
  12. Dixon, John D. (2013), Longest common subsequences in binary sequences, arXiv:1307.2796, Bibcode:2013arXiv1307.2796D.
  13. Lember, Jüri; Matzinger, Heinrich (2009), "Standard deviation of the longest common subsequence", The Annals of Probability, 37 (3): 1192–1235, arXiv:0907.5137, doi:10.1214/08-AOP436, MR 2537552, S2CID 8143348.
  14. Majumdar, Satya N.; Nechaev, Sergei (2005), "Exact asymptotic results for the Bernoulli matching model of sequence alignment", Physical Review E, 72 (2): 020901, 4, arXiv:q-bio/0410012, Bibcode:2005PhRvE..72b0901M, doi:10.1103/PhysRevE.72.020901, MR 2177365, PMID 16196539, S2CID 11390762.