Talk:Splay tree

Someone please add a pseudocode implementation of splaying to the main page. (I removed the note suggesting that, since such notes should be confined to the talk pages.) 68.100.96.143
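In the meantime, here is a rough sketch of bottom-up splaying in C rather than pseudocode. It is my own sketch, not taken from the article or from Sleator and Tarjan's code; the names (node, rotate, splay) are mine, and it assumes nodes carry parent pointers.

/* Hypothetical node type (my own naming); nodes carry parent pointers. */
struct node {
    struct node *left, *right, *parent;
    int key;
};

/* Rotate x up over its parent p, preserving the binary-search-tree order. */
static void rotate(struct node **root, struct node *x)
{
    struct node *p = x->parent;
    struct node *g = p->parent;

    if (p->left == x) {                 /* right rotation */
        p->left = x->right;
        if (x->right) x->right->parent = p;
        x->right = p;
    } else {                            /* left rotation */
        p->right = x->left;
        if (x->left) x->left->parent = p;
        x->left = p;
    }
    p->parent = x;
    x->parent = g;
    if (!g)
        *root = x;
    else if (g->left == p)
        g->left = x;
    else
        g->right = x;
}

/* Bottom-up splay: move x to the root with zig, zig-zig and zig-zag steps. */
static void splay(struct node **root, struct node *x)
{
    while (x->parent) {
        struct node *p = x->parent;
        struct node *g = p->parent;

        if (!g) {
            rotate(root, x);            /* zig: x's parent is the root */
        } else if ((g->left == p) == (p->left == x)) {
            rotate(root, p);            /* zig-zig: rotate the parent first, */
            rotate(root, x);            /* then x */
        } else {
            rotate(root, x);            /* zig-zag: rotate x twice */
            rotate(root, x);
        }
    }
}

After splaying, the accessed node is the root, and its former depth determines the cost of the access.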

I've been trying to work out a way to balance a splay tree ever since I saw this article's claim that splay trees are "self-balancing" and support O(log n) "amortized time" insertions and deletions. I have found multiple splay trees that cannot be balanced while still keeping the order defined by splaying. I'm changing the "amortized time" wording for now, but I would like to chat with someone about the idea that a splay tree can self-balance, just in case I'm crazy. Tjdw 14:16, 15 Jan 2004 (UTC)

I believe splay trees (at least above a certain size) will be roughly balanced (meaning their height will be in O(log n)), or at least cannot stay unbalanced once nodes in unbalanced parts of the tree are accessed. I have not checked for myself, but Sleator and Tarjan state in their original article that the depth of the nodes on the access path to a node being accessed is approximately halved. If splay trees could become and stay unbalanced, the given worst-case access time could hardly hold. As many internet sources claim this too, I reintroduced it to the article (otherwise the last paragraph would, in my eyes, not be correct). DrZ 23:11, 14 May 2004 (UTC)
Actually, splay trees are not necessarily self-balancing; they are only stochastically self-balancing. That is why you get O(log n) amortized time: a single access can take n comparisons, just as with a plain binary search tree. It will, however, also somewhat rebalance the tree. Moreover, the insertions that unbalance the tree are very cheap, so that in sum you get excellent behaviour. --Stephan Schulz 13:57, 26 May 2005 (UTC)
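For reference, the O(log n) amortized figure comes from the access lemma in Sleator and Tarjan's paper. Writing it down from memory (so please check it against the paper), with uniform weights, s(x) the number of nodes in x's subtree, r(x) = \log_2 s(x), and t the root:

\text{amortized cost of splaying } x \;\le\; 3\bigl(r(t) - r(x)\bigr) + 1 \;\le\; 3\log_2 n + 1 \;=\; O(\log n)

A single access can still cost on the order of n pointer steps; the potential (the sum of the ranks) just drops by enough to pay for it.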

We should note that the complexity approximates O(M), where M is the size of the working set of nodes, and that splay trees are therefore appropriate when the working set is significantly smaller than the full set. We should also note that reading (find) and writing (insert/delete) are about equally expensive, so splay trees may behave poorly compared to other self-balancing trees (which only modify the tree on update) when updates are rare.

The complexity should be O(log M), unless I misunderstand what you are saying. --Stephan Schulz 13:57, 26 May 2005 (UTC)
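As far as I remember, the precise statement is the working set theorem from the same paper: if t(x) is the number of distinct items accessed since the last access to x, then

\text{amortized cost of accessing } x \;=\; O\bigl(\log (t(x) + 1)\bigr),

so accesses confined to a working set of M items cost about O(log M) each once the set has been touched. (Quoting this from memory; corrections welcome.)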

Although the working set theorem says splay trees have near-O(1) access time for items in the working set (given that the working set is small relative to the total number of nodes and constant), in practice randomly constructed BSTs have been shown to outperform splay trees even when the access pattern is heavily skewed (see, for example, Bell and Gupta). Also of note is that splay trees have better cache performance than BSTs under certain settings. Finally, we should discuss the different splaying variants: top-down vs. bottom-up. The "simple" top-down splaying variant (explained by Sleator and Tarjan), as implemented by Weiss (Data Structures in C), is fast in practice compared to other variants.
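For concreteness, the top-down variant can be written quite compactly in C. The sketch below is reconstructed from memory rather than copied from Weiss or Sleator, so treat it as illustrative only; the type name Tree and the field name item are my own choices.

#include <stddef.h>

typedef struct tree {
    struct tree *left, *right;
    int item;
} Tree;

/* Simple top-down splay: returns the new root. The node with key i
 * (or the last node on the search path, if i is absent) becomes the root. */
Tree *splay(int i, Tree *t)
{
    Tree N, *l, *r, *y;

    if (t == NULL)
        return t;
    N.left = N.right = NULL;
    l = r = &N;

    for (;;) {
        if (i < t->item) {
            if (t->left == NULL)
                break;
            if (i < t->left->item) {      /* zig-zig: rotate right */
                y = t->left;
                t->left = y->right;
                y->right = t;
                t = y;
                if (t->left == NULL)
                    break;
            }
            r->left = t;                  /* link right */
            r = t;
            t = t->left;
        } else if (i > t->item) {
            if (t->right == NULL)
                break;
            if (i > t->right->item) {     /* zig-zig: rotate left */
                y = t->right;
                t->right = y->left;
                y->left = t;
                t = y;
                if (t->right == NULL)
                    break;
            }
            l->right = t;                 /* link left */
            l = t;
            t = t->right;
        } else {
            break;
        }
    }
    l->right = t->left;                   /* reassemble the three pieces */
    r->left = t->right;
    t->left = N.right;
    t->right = N.left;
    return t;
}

The left and right "link" trees collect the pieces cut off the search path as we descend; the final assembly step makes the accessed node the new root in a single pass, with no parent pointers needed.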


I should probably write up pseudocode (or some other more detailed explanation of the algorithm), given that I have both implemented it and documented it before. Bovlb 07:56, 2004 Mar 5 (UTC)

Diagrams of splaying would be helpful.

I would like to know what literature/research supports the claim that "randomly rebalancing the tree can avoid this unbalancing effect and give similar performance to the other self-balancing algorithms". I didn't manage to verify that, though I tried.