Talk:Hash table
This is the talk page for discussing improvements to the Hash table article. This is not a forum for general discussion of the article's subject.
Archives: 1
Who is Bob Jenkins?
I'm glad he decided to contribute so much, but honestly it seems like he is just plugging his website. Unless his work has been published in a peer-reviewed journal, I don't think it should be continuously cited throughout this article. A simple mention of his site in the external links section would suffice. --Kibblesnbits 17:16, 13 March 2007 (UTC)
- Bob has produced some very good research articles in hash functions. We who do hash function research are often in the industry, and do not have easy access to academic publications. MegaHasher 06:31, 18 August 2007 (UTC)
Pseudo-Code
I have a few small issues with the current Hash table article. First, the pseudo-code is a little convoluted, in that the findSlot(..) function is not 100% obvious. It hides away the linear probe code, which makes reading the set(..) and lookup(..) functions a little confusing on first read. I was just going over it with an intro computer science student and even had to read it twice myself. Josh 08:11, 10 April 2006 (UTC)
- OK, I tweaked find_slot() to make it more obvious (to me) that it is doing a linear probe.
- Or did I just make it more confusing?
- Please feel free to improve it further.
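For reference, here is a minimal C sketch of what find_slot does under linear probing. The names, the toy hash function and the fixed table size are illustrative only, not the article's exact pseudocode, and it assumes the table is never completely full:

    #include <string.h>

    #define TABLE_SIZE 256                        /* illustrative fixed size */
    struct slot { int occupied; const char *key; int value; };
    static struct slot table[TABLE_SIZE];

    /* Toy hash function, for illustration only. */
    static unsigned hash(const char *key) {
        unsigned h = 0;
        while (*key) h = h * 31 + (unsigned char)*key++;
        return h;
    }

    /* Walk forward from the key's natural slot until we find either the key
       itself or an unoccupied slot (the place where the key would go).
       Assumes the table is never 100% full, otherwise this loops forever. */
    static unsigned find_slot(const char *key) {
        unsigned i = hash(key) % TABLE_SIZE;
        while (table[i].occupied && strcmp(table[i].key, key) != 0)
            i = (i + 1) % TABLE_SIZE;             /* linear probe: next slot */
        return i;
    }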
O(1) Removal
There is a line which states:
- The O(1) remove method above is only possible in linearly probed hash tables with single-slot stepping. In the case where many records are to be deleted in one operation, marking the slots for deletion and later rebuilding may be more efficient.
I'm not really sure that this is accurate. I might agree with calling it expected O(1), or average O(1). But it is not, as far as I can tell, worst-case O(1). In fact, remove is worst-case O(n), as in all hash operations. Josh 08:11, 10 April 2006 (UTC)
- In my experience the O() notation usually refers to expected runtime, unless otherwise stated, but feel free to disambiguate it.WolfKeeper 14:07, 10 April 2006 (UTC)
I know that there has been some discussion of amortized runtime here, but I'm not really sure that you can use amortized analysis on this function. Please correct me if I am wrong, otherwise, I will change shortly. Further, I am not sure why the article says only possible in linearly probed hash tables... If the remove function is O(1) in linear probe, then why not with a quadratic probe? Josh 08:11, 10 April 2006 (UTC)
- You need the delete to remove the gaps. The quadratic probe moves a different amount through the hash table, depending on the initial number from the hash function, rather than the slot that the initial collision occurred at. So two hash entries may have collided initially at, say, slot 10, and one of them got put into slot 20. If slot 10 is deleted, with quadratic probing there's no way to compact the slot 20 entry down into slot 10, because there's no way to find it: the hash of the object at 10 doesn't tell you how to find any of its successors. With linear probing you just need to look in the next entry and then decide whether to move it or not.WolfKeeper 14:07, 10 April 2006 (UTC)
- I think that this discussion reveals a real need to disambiguate expected vs. worst-case runtime. I think that we'll both agree that remove(..) in a hash table requires worst-case O(n) steps, even with a linear probe. If you don't, then we need to first discuss that.
- If you don't see why the worst-case isn't particularly relevant, then I don't want to discuss this further.WolfKeeper 16:31, 10 April 2006 (UTC)
- I found a very good explanation in CLRS03, Chapter 11. Basically, the text proves that, on average, 1/(1 − α) probes are needed in an unsuccessful search, where α is the load factor. Using this formula, we can see that even at α = 0.9, the number of probes required is 10. I would still argue that worst case is relevant, but this clearly does not apply until the hash table is very full. The magic number of 80% is explained in the article, but perhaps this would be better understood with a graph illustrating how the performance changes with load? --Josh 01:24, 11 April 2006 (UTC)
- Now, let's go back to what you're saying. Yes, you are correct that in a linear probe you only need to look at the next slot, although I would nitpick and say that you need to look at the next n slots if the hash table is full. However, in a quadratic probe, instead of saying slot = (slot + 1) mod size, you say slot = (hash(key) + i^2) mod size. We assume that hash(key) was cached and does not need to be recomputed. Therefore, as long as we know i, we can clearly find the next slot in constant time. The same applies for double hashing. You just say slot = (hash(key) + i*hash(hash(key))) mod size. Since we already know hash(key), then hash(hash(key)) takes O(k), as per previous discussion.
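A small C sketch of the three probe sequences being compared here, using the formulas as written above (h is the cached hash(key), h2 a second hash derived once from the key, e.g. hash(hash(key)); all names are illustrative):

    /* i is the probe number (0, 1, 2, ...), h = hash(key), size = table size. */
    unsigned linear_probe(unsigned h, unsigned i, unsigned size)    { return (h + i) % size; }
    unsigned quadratic_probe(unsigned h, unsigned i, unsigned size) { return (h + i * i) % size; }
    unsigned double_hash_probe(unsigned h, unsigned h2, unsigned i, unsigned size) {
        return (h + i * h2) % size;   /* h2 is computed once per key, then reused */
    }

Given a cached h (and h2), each next slot is a constant-time computation, which is the point being made above.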
- I'm sorry if my explanation was inadequate. I would recommend you read Knuth or a good book on it.WolfKeeper 16:31, 10 April 2006 (UTC)
- I've looked through CLRS, and I don't really see a reference for the fact that an O(1) remove can only be achieved using a linear probe. If we can do a search in any open address hash table in constant time, then shouldn't we be able to delete elements and eliminate holes in constant time as well? --Josh 01:24, 11 April 2006 (UTC)
- In summary, I think that there needs to be a lot of reworking between the term O(..) and expected O(..) and worst-case O(..). Further, I think that we might argue that remove(..) is expected O(1), but not worst-case O(1). I think that it is always worst-case O(n) with linear and quadratic probing. With double hashing, we would say that it is worst-case O(k n), assuming that k is not fixed size. Josh
- With all due respect, I completely disagree. The whole point of hash tables is to design them so that you are able to apply statistical techniques like averaging. Using worst case values for randomised parameters gives significantly inaccurate results.WolfKeeper 16:31, 10 April 2006 (UTC)
- I'm not suggesting that we re-do all of the analyses in terms of worst-case performance. I understand that doing so would give an incredibly inaccurate picture of hash table performance. I simply think that we should qualify the asymptotic notation with "expected", to acknowledge that bad cases do exist. --Josh 01:24, 11 April 2006 (UTC)
- I agree with Josh here. Unqualified big O notation specifically implies worst-case behavior in many publications. Better to either add the word "expected" where appropriate or make a big sweeping blanket statement at the beginning that it's implied. Deco 11:30, 8 June 2006 (UTC)
"When we say T(N) [is] O(f(N)), we are guaranteeing that the function T(N) grows at a rate no faster than f(N)." (Data Structures and Algorithm Analysis in C++, Third Edition, Mark Allen Weiss, page 44)
It is entirely possible to implement a hash table that makes a collision every time a new value is added. This would be a useless implementation, but a valid one. A hash table can't then be O(1) ever in any way if the above possibility exists.
"Hash tables can be used to implement the insert and contains operations in constant average time...The worst case for hashing generally results from an implementation error." (page 207)
I would suggest that the article should be changed to mention that hash tables are not O(1) by the definition of Big-O notation, but that all good implementations come closer to acting like they are than binary search trees. 199.111.229.133 00:23, 16 October 2007 (UTC)
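To make the point above concrete, here is a hedged C sketch of such a "valid but useless" hash function; with it, every insert collides, all n entries end up in one chain (or one probe sequence), and lookups degrade to O(n):

    /* A legal hash function: deterministic and always in range, but it
       sends every key to bucket 0, so lookups degrade to O(n). */
    unsigned worst_hash(const char *key) {
        (void)key;   /* the key is ignored entirely */
        return 0;
    }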
Something I don't understand from the article
If, when adding to a table, the hashing function leads to a collision for 2 particular keys, a and b, then using probing b will be stored somewhere after a. When then looking up the value associated with b, won't the table actually return the value associated with a? How does it know which of the two values to return, and how to get to the index associated with b?
I hope that makes sense. Iae 11:22, 8 June 2006 (UTC)
- Providing we're not using perfect hashing, we must search through all the places the desired element could be, comparing our search key with each one, until we hit the end of the list. This is why hash tables require not only a hash function but a means of comparing for equality. This only works efficiently because in a hash table with enough room, these lists are overwhelmingly very short. Deco 11:27, 8 June 2006 (UTC)
- Ah I see, thanks very much. Iae 11:49, 8 June 2006 (UTC)
How do you delete from a hash table that uses probing?
- How do you know that you have hit the end of the list? Or, to cut to the gist of the matter: how do you delete from a hash table that uses probing?
You know you have hit the end of the list in a (probed) hash table when you hit an empty "not occupied" slot. In theory, one could have an "occupied" bit for each row of the hash table that is initially 0. (In practice, typically each slot that is not occupied begins with a NULL byte. If it *is* occupied, that byte is the first byte of the key.)
I know of 2 very different ways to delete from a hash table: the "deleted bit" method and the "move stuff around" method. (My understanding is that the "move stuff around" method is impossible to implement with "quadratic probing" or "double hashing" hash tables (and probably a few other types). Those hash tables are forced to use the "deleted bit" method.)
Let me try to explain those 2 methods:
"deleted bit" method: works with any kind of hash table. Every row of the hash table (in addition to the key, value pairs) has a "deleted" bit that starts out 0. To delete a record from the hash table, use the key to find it (using find_slot), then set the "deleted" bit to 1. (Some hash tables cram the "deleted" bit and the "occupied" bit into the "key" part of the record, reserving 1 special key to indicate "unoccupied", another special key to indicate "deleted", and any other value to indicate a real occupied "key" ).
"move stuff around" method: only works with linear probing.
The function remove(key) in the article is supposed to describe the "move stuff around" method. How could we make it easier to understand?
Often when the application deletes a record, the following slot is already not occupied. In that case you can wipe out the record by marking that record as not occupied -- overwriting the key with a NULL byte -- and you are done.
Unfortunately, there are a bunch of special cases the application needs to be able to handle, even though they rarely happen. As the article says, "For all records in a cluster, there must be no vacant slots between their natural hash position and their current position (else lookups will terminate before finding the record)." The application needs to scan through *all* the records between the record you want to delete, and the next "not occupied" slot, and make sure the above requirement is met. In some cases, you must move all those records up 1 slot. In other cases, you must move some (but not all) of those records.
In yet other cases, you must not move any of them, just mark the deleted slot "not occupied" just like the simple case. (For example, if you want to delete the record in slot 870, and you see that the "key" in slot 871 actually hashes to "871", and slot 872 is "not occupied" -- you must mark slot 870 "not occupied", and leave the record in slot 871 alone).
Once you understand how it works, please update the article to make it easier to understand for the next person.
--68.0.120.35 07:42, 5 March 2007 (UTC)
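For what it's worth, here is a hedged C sketch of the two deletion methods described above (the types, names and fixed table size are illustrative; i is the slot index of the record to delete, as returned by find_slot):

    #include <stdbool.h>

    #define SIZE 256
    struct slot { bool occupied; bool deleted; const char *key; int value; };
    static struct slot t[SIZE];

    /* Toy hash function, for illustration only. */
    static unsigned hash(const char *key) {
        unsigned h = 0;
        while (*key) h = h * 31 + (unsigned char)*key++;
        return h;
    }

    /* Method 1: "deleted bit" (tombstone).  Works with any probing scheme.
       Lookups must treat deleted slots as "occupied but not a match" and
       keep probing past them. */
    static void remove_tombstone(unsigned i) {
        t[i].deleted = true;
    }

    /* Method 2: "move stuff around".  Linear probing only.  After emptying
       slot i, walk the rest of the cluster and pull back any record whose
       natural position would otherwise be cut off from it by the new hole. */
    static void remove_shift(unsigned i) {
        unsigned j = i;
        for (;;) {
            j = (j + 1) % SIZE;
            if (!t[j].occupied)
                break;                            /* reached the end of the cluster */
            unsigned k = hash(t[j].key) % SIZE;   /* natural slot of the record at j */
            /* Move it into the hole unless k lies cyclically in (i, j]. */
            if ((j > i) ? (k <= i || k > j) : (k <= i && k > j)) {
                t[i] = t[j];
                i = j;                            /* the hole moves to j */
            }
        }
        t[i].occupied = false;                    /* finally mark the hole empty */
    }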
More concrete suggestions for hash function?
I wanted to check in here before making huge changes to the article, but one thing I'd find very helpful is a discussion of concrete choices for the hash function itself. Here's an outline of what I'd say:
It's very common to implement hash tables with poor hashing functions. Knuth is largely to blame, advocating the very weak "multiplicative hash" function, and even going so far as to claim that its clustering property is a good thing! (section 6.4, TAoCP vol III, 2e, p. 517). Variants of the multiplicative hash remain popular, as do other linear techniques such as CRC.
Surveying the field, two excellent newer alternatives stand out. For most simple hashing tasks, the Fowler Noll Vo Hash is an excellent performer. It is among the simplest of known hash functions, is quite fast, and has a good distribution. For use in a hash table, the FNV-1a variant is likely the best choice, as it has better (more dispersed) clustering behavior than FNV-1.
For some applications, particularly when keys are long, the newer Jenkins lookup3.c hash function may be a better performer. It achieves better speed by consuming 12 bytes of the input per iteration, as opposed to one byte for FNV. Disadvantages include greater code complexity, a sensitivity to machine endianness (causing potential difficulties when reusing hash values across disparate computers), and the need to pad byte-aligned input to 32 bit words. Do keep in mind, though, that benchmarks showing impressive speed for large blocks of data may not translate to real-world gains. Common usage patterns involve relatively short keys, so the amount of time spent in the hashing function inner-loop may be less relevant than, say, the gains from a compiler being able to automatically inline a simpler hash function.--Raph Levien 02:32, 5 July 2006 (UTC)
Some interesting links I came across while searching:
- Hash functions by Paul Hsieh
- The Art of Hashing
- Hashtables, Part 2 by Maciej Stachowiak
- Hash Functions at Pluto Scarab - with nice graphs
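For concreteness, here is the FNV-1a variant recommended above as a minimal C sketch (the constants are the published 32-bit FNV offset basis and prime); for a table index, reduce the result modulo the table size:

    #include <stdint.h>
    #include <stddef.h>

    /* 32-bit FNV-1a: start from the offset basis, then for each byte
       XOR it in first and multiply by the FNV prime afterwards
       (FNV-1 does the two steps in the opposite order). */
    uint32_t fnv1a_32(const void *data, size_t len) {
        const unsigned char *p = data;
        uint32_t h = 2166136261u;
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];
            h *= 16777619u;
        }
        return h;
    }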
You might also want to check out HSH 11/13 which seems to distribute quite well (as per published graph) and also performs nicely with its handful of code lines.
Followup:
I went ahead and completely rewrote the section. I may have come close to bending the letter of the law on doing original research (measuring avalanche behavior of the Jenkins One-at-a-time hash using Bret Mulvey's tools) and NPOV (I advocate particular hash functions and criticize other popular choices), but everything I said is verifiable, and I think the result is the most helpful and reliable advice on choosing hash functions anywhere on the Internet or in dead tree form.
I measured the HSH 11/13 hash using Mulvey's AvalancheTest, and found it slightly inferior to the Jenkins One-at-a-time hash. Herbert Glarner's tests for uniform distribution are not as sensitive as Mulvey's chi-squared tests. Further, HSH 11/13 is quite slow, because of its inner loop.
Obviously, I also changed my mind about FNV, swayed in large part by Mulvey's analysis. It's not a bad choice, being one of the simplest of the "pretty good" choices, but even there the XOR-folding adds a bit more complexity. In any case, I feel like I've provided enough information for people to make their own informed choice.--Raph Levien 21:31, 12 August 2006 (UTC)
- I agree that this analysis of various hash functions is worth putting into Wikipedia.
- But wouldn't the hash function article be a better place?
- Or are there special considerations for hash functions used in a hash table that don't apply to hash functions used for other purposes?
- --68.0.120.35 07:42, 5 March 2007 (UTC)
- Yes, this section evaluates hash functions solely for their use in a hash table. Functions like the Jenkins one-at-a-time are very well suited for such uses, and extremely bad for other hash applications like message integrity checking, which is the domain of cryptographic hashes. Fair enough? --Raph Levien 04:12, 9 May 2007 (UTC)
- Referring to the statement: "Further, HSH 11/13 is quite slow, because of its inner loop." - The HSH documentation states that a "key like 'Yvonne' [...] requires 92 machine instructions to generate a hash value". - Now I am just wondering, does there exist a speed comparison of any sort in order to choose a fast one? - Regards, --Gulliveig 16:02, 14 August 2007 (UTC)
Benchmarks
Simplicity and speed are readily measured objectively
There is a caveat here. Experimental measurements of speed are necessarily done on a "representative sample" of inputs. It may be the case that a given algorithm performs with varying speed depending on the kind of input, and that a sample representative of one kind of input may not be representative of another kind. I don't think this would happen with usual hash functions on e.g. strings, but this may happen in more esoteric cases. David.Monniaux 14:38, 13 September 2006 (UTC)
- Without checking further into the matter of this article specifically, the speed of an algorithm is usually (or at least often) expressed as the worst-case speed, using Big O notation. It is an objective measurement which gives the asymptotic upper bound of the execution time as a function of the input length. Of course you are correct in that one algorithm may perform well with a certain kind of input and badly with other kinds of input, but if one algorithm always works in O(n) time, it is, after a certain point, always faster than an algorithm that works in O(n^2) time. It's just pure mathematics to define that point, as is calculating the asymptotic upper bound too.
- I would be more concerned about the simplicity claim. "Simplicity" is not something you can measure mathematically. You can always count the instructions etc but that's both machine and implementation dependent. —ZeroOne (talk / @) 16:44, 14 August 2007 (UTC)
Unfinished question
Problem 1: The Hash Table will be used for storage of student records. You may assume the maximum number of Items will not exceed 240. An item corresponds to a student record. A key is taken to be an ASCII character string of maximum length 32 containing the name of the student. The data is taken to be the 9-digit id and the name of the state of residence of the student in India. A list node contains an Item and a reference to the next node. You will need to implement the classes Item and ListNode with constructors and appropriate data access operations.
2. Write a main() program to test your class HashTable.
3. Input: The name of the input file, containing data items (student records) and data operations in the format given below, and the name of the output file, will be passed to your program on the command line or interactively.
Data Items: student-id, student-name, state-of-residence (example: 200412018, Swati Mishra, Uttar Pradesh), one per line.
Data Operations: <operation>: <item>, where <operation> is s, i, or d for search, insert and delete, respectively. The item will have fields: student-name, student-id, and state-of-residence. The fields of an item will be separated by commas, and the operation will be separated from the item by a colon ":" and a space (example: s: 200211001, Akash Gokhale, Maharashtra), one per line.
The data items will be separated from the data operations by a blank line.
4. Output: Your program is to read the input file and populate the Hash Table with student records by repeated Insert() operations, and then print to the output file the size of the Hash Table and the size of each linked list in the Hash Table. Then it will continue to read each line and execute the appropriate data operations. Following each Insert()/Delete() operation, it will output the size of the Hash Table and the size of each of the linked lists, and for each Search operation, it will output
-- 220.225.53.35 09:18, 11 October 2006
joaat_hash function error
Hi, I've implemented the joaat_hash function which is described in pseudocode on this page in my C program, and encountered an error. Or rather, I produced one, since I misunderstood len as the length of the hash table, and not as the length of the key.
Here is my (correct) function implemented in C; please consider updating the article:
    int joaat_hash(char *key, size_t len)   // len is the size of the hashtable
    {
        unsigned int hash = 0;
        unsigned int i;
        for (i = 0; i < strlen(key); i++) {
            /* [...] as in the article */
        }
        return (hash % len);
    }
--88.76.141.17 03:43, 2 January 2007 (UTC)
You are right -- the "len" in the pseudocode in the article is the length of the key. (That pseudocode gives the same results as the original C implementation "One-at-a-Time Hash" ub4 one_at_a_time(char *key, ub4 len) http://www.burtleburtle.net/bob/hash/doobs.html , right?)
I think the above implementation gives the same results as the original C implementation for all possible ASCII strings. Unfortunately, the above implementation gives different results for other sorts of data structures cast into byte arrays, when those data structures include a zero byte.
Since the original version gives *better* results for those data structures, and the *same* results for everything else, I think I'll stick with the original version, except use "key_len" to clarify exactly what it is the length of. (Or have I missed something obvious?) --68.0.120.35 07:42, 5 March 2007 (UTC)
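For reference, a C version of the original one-at-a-time hash with the parameter renamed as suggested (key_len is the length of the key in bytes, not the table size; reduce modulo the table size afterwards):

    #include <stdint.h>
    #include <stddef.h>

    /* Bob Jenkins' one-at-a-time hash. */
    uint32_t joaat_hash(const unsigned char *key, size_t key_len) {
        uint32_t hash = 0;
        for (size_t i = 0; i < key_len; i++) {
            hash += key[i];
            hash += hash << 10;
            hash ^= hash >> 6;
        }
        hash += hash << 3;
        hash ^= hash >> 11;
        hash += hash << 15;
        return hash;
    }

A bucket index is then joaat_hash(key, key_len) % table_size.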
cryptographic hash functions
From the article: "In fact, even a cryptographic hash does not provide protection against an adversary who wishes to degrade hash table performance by choosing keys all hashing to the same bucket."
I thought that one of the criteria for a cryptographic hash function was that it be infeasible to find collisions. Therefore, it would provide defense against such an adversary. Or am I misunderstanding something? Ralphmerridew 21:55, 30 January 2007 (UTC)
- Hash tables are usually very limited in size, and the output of the hash function is reduced modulo the hash table size. That is, while a hash might produce a result of 160 bits, it is obvious that a hash table of 2^160 entries would be infeasible, not to mention useless. In-memory hash tables rarely exceed millions of entries, and it is relatively trivial to brute-force through this number of hashes to find collisions. For a sense of magnitude, a low-end processor today can compute around half a million SHA-1 hashes of 16-character strings per second. -- intgr 23:07, 30 January 2007 (UTC)
- Doesn't that require that the attacker also knows the hash table size? (Alternately, if the attacker adds enough entries to be able to work out the size, it's also likely to force a rehash.) Ralphmerridew 00:51, 31 January 2007 (UTC)
- Well yes, if secrecy is a choice, it's a good idea to choose a large prime as the hash table size. This is, however, unrelated to whether one is using a cryptographic hash function or a classic one — it is impossible to cause collisions if you cannot predict the modulus or one of its factors.
- But lots of hash table implementations resize automatically to predefined constant sizes, and as far as I know, many even use the worst choice of power-of-two sizes. Power-of-a-known-number sizes mean that the attacker can cause clustering even if they mispredict the size, and that the attack is effective even through reclustering. This is because (x mod 2^k) mod 2^n = x mod 2^n when n <= k, and even if n > k, it will just cause clustering on 2^(n-k) different buckets simultaneously, which can still be fatal with large hash tables. -- intgr 07:19, 31 January 2007 (UTC)
- I have no idea what I was thinking earlier; the equation should read as above. -- intgr 17:41, 31 January 2007 (UTC)
- Re point 1, with a bad hash function, an attacker can choose keys that hash to the same value, which will cause collisions regardless of modulus. Ralphmerridew 16:41, 31 January 2007 (UTC)
- Oh, were you thinking of generating hashes small enough to always be less than the modulus? Good point, never thought that. (though I'm not the author of the quoted claim) -- intgr 17:41, 31 January 2007 (UTC)
- No, I mean that, with a bad hash function, an attacker could, say, generate arbitrarily many strings that hash to, say, 0x16de fa32 4261 1ab3. Whatever modulus is used, all those strings will fall into the same bucket. With a cryptographically secure hash function, given a known modulus, an attacker might be able to produce a large number of strings such that (hash % modulus) are all the same, but he'd be unable to produce a significant number that all have the same hash value. Ralphmerridew 21:43, 31 January 2007 (UTC)
- I wouldn't count on the speed (well, slowness) of a hash function for protection against clustering attacks. While it makes the attack slightly more expensive, it also complicates hash table inserts and lookups by the same proportional amount. If you can cope with the extra processing overhead, you're most likely better off with data structures that do not exhibit such critical worst-case performance, such as various balanced trees.
- Perhaps this is an overly cryptographic point of view, but it is not that expensive to generate truncated hash collisions for cryptographic hash algorithms by brute force. And as mentioned above, a general-purpose low-end processor (Athlon 64 3000+ here) is capable of generating half a million hashes per second. A heavily parallelized FPGA-, or even ASIC-based chip could improve that by several orders of magnitude. (Such programmable FPGA computers are readily available on the market, e.g. COPACOBANA, which can do 10^13 DES trials per second.) -- intgr 22:37, 31 January 2007 (UTC)
- I'm not depending on "Given string 'str', calculate hash(str)" being slow; I'm depending on "Given 'value', find a string 'str' such that hash(str) == value" being infeasible. The latter is part of the definition of cryptographically secure. And even 10^13 trials/second means brute force will take about three weeks per full collision with a 64-bit hash, and is ineffective against even the deprecated MD5 (128 bits) or SHA-1 (160 bits). By comparison, IIUC, with a multiplicative hash it's possible to find a full collision in O(#bits) time. Ralphmerridew 23:12, 31 January 2007 (UTC)
- Did you forget that to utilize all these 64 bits, you need to store the table somewhere? There are no practical applications of a 2^64-entry hash table, and the space requirements are proportional to the brute-force collision-finding time (both scale as O(2^n) in the number of hash bits n). Just storing this many hashes (or 8-byte values) alone will take up 2^64 × 8 bytes ≈ 128 EiB of space. If you create a smaller hash table, you have to truncate the hash (throw away some of its information), which inherently speeds up brute force cracking. -- intgr 23:29, 31 January 2007 (UTC)
- But a collision against a truncated hash is only useful against a small number of moduli, and then only if the attacker knows or can work out the modulus. Ralphmerridew 23:57, 31 January 2007 (UTC)
- This seems to effectively boil down to what I formulated earlier, "were you thinking of generating hashes small enough to always be less than the modulus?". I do agree that in case the attacker does not know the modulus, and the modulus is a non-tiny prime, then this effectively disables clustering attacks. I disagree that when the modulus is known, the hash table needs to be "small", but let's leave it at that. -- intgr 00:54, 1 February 2007 (UTC)
- Well, I've changed the article now and put a {{fact}} on it, since we still need to cite a source for it. Or do you happen to have one? -- intgr 12:41, 5 February 2007 (UTC)
- I agree that, for a certain special case, Mallory (the attacker) can guarantee that all the keys he generates hash to the same bucket, even when a cryptographic hash is in use.
- That special case happens when an attacker doesn't know the exact table size, but does know that it is a power of 2, and at most some maximum size -- Mallory knows that slot_number = hash(key) % 2^n, and he knows the particular cryptographic hash() used, and although he doesn't know n exactly, he knows some k such that n <= k.
- By doing some work reminiscent of hashcash to zero out the k least-significant bits, Mallory can generate keys that all hit the same bucket. Mallory generates O(2^k) trial keys before he finds one that he knows will hit the same bucket. (With most non-cryptographic hashes, Mallory only needs to do O(1) work to construct a key that will hit the same bucket).
- But so what? Is there any application where this relevant?
- What sorts of applications need to accept keys from a potentially malicious attacker?
- Say we did the "secure" thing by picking a L bit cryptographic hash, and "randomly" picking a new large prime number p every time we resize the table, and using slot_number = ( hash(key) % p ) % tablesize. (Since tablesize << p << 2^L , does it matter whether the tablesize is a power of 2 or not?).
- Even with this "secure" hash -- even if Mallory has no clue what hash we are using or how we are reducing it to the tablesize -- Mallory could still force collisions by sending O(tablesize) submissions.
- (By the birthday paradox, he is *likely* to cause a collision even with O(tablesize^(1/2)) submissions).
- Is there some application where this "secure" hash would work, but the above power-of-2 "special case" wouldn't work?
- --68.0.120.35 07:42, 5 March 2007 (UTC)
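To make the power-of-two discussion in this thread concrete, here is a small, self-contained C demonstration (the hash values are hypothetical): two hashes that agree in their low k bits land in the same bucket for every table size 2^n with n <= k.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t h1 = 0xDEADBEEFu;                   /* hypothetical hash value */
        uint32_t h2 = 0xABCD0000u | (h1 & 0xFFFFu);  /* agrees with h1 in the low k = 16 bits */
        for (unsigned n = 1; n <= 16; n++) {
            uint32_t mask = (1u << n) - 1;           /* x % 2^n == x & mask */
            printf("n=%2u  bucket(h1)=%5u  bucket(h2)=%5u\n", n, h1 & mask, h2 & mask);
        }
        return 0;
    }

Both columns print the same bucket for every n up to 16, which is why mispredicting a power-of-two table size does not save the table from this kind of clustering.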
Open hashing
I'm considering merging the open hashing article with this article, as has already been done for closed hashing. For that, I think we should rename the section on Chaining to Open hashing, make clearer the already mentioned fact that linked lists are only one way (though perhaps the most popular one) of organizing hash buckets, and introduce a mention of buckets allocated in contiguous memory/disk space, which is presently discussed in open hashing. Jyotirmoyb 05:16, 20 February 2007 (UTC)
- I agree -- let's keep all the different types together in this one article until the article gets too big. At that time, it will (hopefully) be easier to look at the article and decide the "natural" breaking points to split it up into multiple articles. Big Buckets First. --68.0.120.35 07:42, 5 March 2007 (UTC)
- I agree, too. It makes more sense to contrast "closed hashing" with "open hashing". I'd like to see the two sections so labeled. Anjin\\talk 02:23, 25 April 2007 (UTC)
- I agree, too. I can see no good reason why not to incorporate such a strongly related topic into this article. I find the question of length of the article largely irrelevant as long as the topics in it are relevant. I'd rather read one long coherent article (possibly skipping sections) than having to jump back and forth in Wikipedia. My 5 cents... TFJ 09:54, 14 June 2007 (UTC+1)
Makes sense - can't have it both ways - with closed hashing merged (though it's hard to find any content called "closed hashing" in the article) and open hashing in its own article. On the other hand, there is no problem with an article on "open" and "closed" and other types of hashing if there is enough content to justify it. Does Wikipedia have guidelines on article length? --121.45.246.110 14:10, 9 April 2007 (UTC)
- I agree with all this, because open hashing is simply irrelevant in any context outside of hash tables. Dcoetzee 01:13, 15 June 2007 (UTC)
Bug in 'remove' Pseudocode
The remove function will loop indefinitely if the hash table is full, since it only exits when it finds an unoccupied slot. I think adding the following after the j := (j+1) line should fix it:
if j = i exit loop
32.97.110.142 20:36, 11 April 2007 (UTC)
Ambiguity in amortized analysis
In the discussion of the costs of resizing the hash, there seems to be an ambiguity in the amortized running time analysis:
... If in the end it contains n elements, then the total add operations performed for all the resizings is:
1 + 2 + 4 + ... + n = 2n - 1. Because the costs of the resizings form a geometric series, the total cost is O(n)....
The n in "2n - 1" is actually the value of the nth element, not the number of elements. The sentence that follows makes it seem that the sum of a geometric series is linear, which is clearly not the case.
Guray9000 00:47, 16 April 2007 (UTC)
- Actually, it is the number of elements. I'm not sure what makes you think it's the value of the nth element - it's not. And the sum of a geometric sequence is linear in n, if n is the last term of the sequence. Dcoetzee 11:34, 9 June 2007 (UTC)
Mistake in table resizing section?
The article states:
To see why this is true, suppose a hash table using chaining begins at the minimum size of 1 and is doubled each time it fills above 100%. If in the end it contains n elements, then the total add operations performed for all the resizings is:
1 + 2 + 4 + ... + n = 2n - 1.
This does not seem right. The expected number of steps it takes to double the table size from (1) to (n>1) at each overfill should be: truncate(LOG2(n-1))+1 (where LOG2 means logarithm base 2). Also, if my math is correct, 1 + 2 + 4 + ... + n = n(n+1)/2, but this does not seem the right computation to make: we're not supposed to add the values corresponding to each step but to count them.
If this reasoning is correct the total cost should read O(LN(n)) rather than O(n), which should mean this method scales very well with n.
- I can't understand your reasoning at all. First of all 1 + 2 + 4 + ... + n is in fact 2n - 1, and not n(n+1)/2 (this is 1 + 2 + 3 + ... + n). Second, this is summing the add operations performed during the resizings. During each resizing, all elements currently in the table have to be added again to the new table, and each resizing occurs at a power of 2 number of elements, so the numbers being added are correct. I really don't understand what you're saying. Dcoetzee 10:29, 9 June 2007 (UTC)
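A quick numeric check of the identity in question (assuming, as in the example, that n is a power of two): for n = 8 the sum is 1 + 2 + 4 + 8 = 15 = 2*8 - 1, and in general 1 + 2 + 4 + ... + n = 2n - 1, which grows linearly in n.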
How can this be true (question about a statement in the section 'Time complexity and common uses of hash tables')?
In that section it is written: "Like arrays, hash tables provide constant-time O(1) lookup on average, regardless of the number of items in the table." (*)
Searching in a sorted list of keys using binary search can be done in O(log N). The average running time of binary search is also log N (this is according to http://planetmath.org/encyclopedia/BinarySearch.html). I don't know any better algorithm for non-special sorted data. So I think statement (*) isn't true in the asymptotic sense.
However, if we know that N is bounded by some number, we can say that O(log N) = O(1). But this would be wrong... But in some philosophical (or maybe practical) sense we could be right (we have a large but fixed maximum amount of RAM, HDD space, maximum number of hash table entries, etc.). But this would not be right in the asymptotic sense, where N is not bounded but free.
What am I not understanding?
- Really, in a fairly real sense, hash tables don't search, they go straight to the right place in the table.WolfKeeper 22:56, 28 June 2007 (UTC)
- At a maximum table utilisation factor (say 80%), the Hash table will do on average a constant number of comparisons (k= ~2 at 80%) which is still an O(1) operation, as k is independent of n, the size of the table. In other words, at 80% full, there's going to be only about 2 colliding entries. Hope this helps.WolfKeeper 22:56, 28 June 2007 (UTC)
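For reference, the standard uniform-hashing textbook estimates behind the "about 2" figure (approximations, not measurements): at load factor α = 0.8, an unsuccessful search takes about 1/(1 − α) = 5 probes and a successful search about (1/α) ln(1/(1 − α)) ≈ 2.0 probes; both depend only on α, not on the table size n.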
- Thanks, WolfKeeper and sorry for my ignorance. Your answer made me do more searching and I understood that, I was confusing hash_map with map (in C++ terminology). map permits lookup, insertion and removal in logarithmic time on average. And hash_map permits these operations in constant time on average. 78.56.1.150 12:37, 29 June 2007 (UTC)
Implementations
This section is quickly devolving into yet another trivia list. MegaHasher 19:45, 11 July 2007 (UTC)
Hash table and binary tree in one data structure
Are there any references on a data structure that implements a hash table and binary tree at the same time? MegaHasher 20:04, 3 August 2007 (UTC)
- Um, huh? It's not clear to me why you'd want to do that, or how that even makes sense. Can you expand a bit on why you're asking? Dcoetzee 02:45, 4 August 2007 (UTC)
- Seems that a treap could be combined with a hash table. I wasn't able to come up with any citations though. MegaHasher 06:11, 14 August 2007 (UTC)
- I still have no idea what you mean. Combined in what way, for what purpose? Dcoetzee 08:13, 14 August 2007 (UTC)
- to provide O(1) read access, and ordered traversal at the same time MegaHasher 08:19, 14 August 2007 (UTC)
- You could certainly do that by merely keeping a hash table and a binary tree at the same time with the same contents - but on modern systems, I don't think the lookup cost of hash tables compared to an effective cache-aware tree data structure is significantly better (it may even be worse). Considering the space overhead and time overhead for other operations, I wouldn't consider this an effective solution. Dcoetzee 21:50, 20 August 2007 (UTC)
Citations
This article could be improved with more direct citations. MegaHasher 06:11, 14 August 2007 (UTC)
Random table based hash function
I am suspicious of the random table based hash function. This seems to be a variation on Pearson's hash function, except there are multiple tables, therefore much worse cache misses. Pearson's hash function is not all that fast to start with. MegaHasher 03:22, 18 August 2007 (UTC)
For single-byte indices and 32 bit hash numbers, the tables are 256*4 = 1024 bytes each. This algorithm is obviously best when the cache size is larger, but also very suitable when a few keys account for most of the lookups, or when the cost of a collision is far greater than the cost of a memory cache miss (such as when hashing into disk records).
By having multiple tables, this algorithm ensures lookups to the same index in the table don't cancel each other out, thereby generating literally perfectly randomly distributed but repeatable hashes. The Wikipedia article on Pearson's hash function indicates it employs a table with the numbers 0 to 255 randomly arranged, and it ultimately selects one of the 256 different values in the table at random. The multi-table algorithm you removed doesn't simply select one of the 256 values in one of the tables; it XORs values together to create a far stronger hash. Look at the algorithm properly and you might appreciate it more.
FWIW, the algorithm is proven technology, used to hash > 3 million highly-repetitive financial security keys by the world's leading financial data provider's proprietary security database, handling hundreds of thousands of database inserts per second.
I'll reinstate the algorithm, with the attribution removed as you've (offensively) referred to it as "vanity" material, and I hope you, Mr "MegaHasher", have much better reasons before you remove it again.
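For readers trying to follow the description above, here is a minimal C sketch of the multi-table scheme as described (commonly known as tabulation hashing); the fixed key length and the filling of the random tables are assumptions for illustration:

    #include <stdint.h>
    #include <stddef.h>

    #define KEY_BYTES 8                  /* illustrative fixed key length */

    /* One 256-entry table of random 32-bit values per key byte, filled once
       at startup from any decent random number generator. */
    static uint32_t rand_table[KEY_BYTES][256];

    /* The hash is the XOR of one random table entry per input byte, so equal
       bytes at different positions index different tables and cannot cancel
       each other out. */
    uint32_t multi_table_hash(const unsigned char key[KEY_BYTES]) {
        uint32_t h = 0;
        for (size_t i = 0; i < KEY_BYTES; i++)
            h ^= rand_table[i][key[i]];
        return h;
    }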
- Wikipedia generally does not accept original research, but you are welcome to publish your algorithm in a different location, and submit to Wikipedia as a cited work of concise length. The article of Hash function is probably a better location. MegaHasher 04:19, 29 August 2007 (UTC)
I don't consider this to be original "research". It is simple stuff, and important precisely because of that. Too many programmers either don't use hash tables or use poor algorithms because they don't have a feel for or ability to assess mathematical hashes. This may suit the few specialists who write the algorithms and relish the elitist mystique of it all, but it's a big loss for computing as a whole. More generally, I might accept that this belongs on the Hash Function page if you also moved the joaat_hash off this one. You can't have it both ways. I agree with the earlier discussion comments that your posting and defense of Bob Jenkin's algorithm looks very inappropriate on this page, and all the more so for your refusal to accept anything that removes any of the focus from it. —Preceding unsigned comment added by 205.228.104.142 (talk) 23:59, August 29, 2007 (UTC)
- Wikipedia's policy is very simple. Please look at the welcoming material on your user talk page. You need to publish your full article in a location off Wikipedia, then write a one-paragraph summary of it on Wikipedia, and give a citation link to your full article. MegaHasher 18:20, 30 August 2007 (UTC)
Load factor
I have seen two separate referenced articles that had measurements that show under separate chaining, an increase of load factor by 1 has the impact of increasing CPU instruction count around 4 to 5%. This is much better than what I would have guessed. The first reference is "Dynamic Hash Tables" by Larson (1988), and the second reference is "Main-memory linear hashing" by Pettersson (1993). Have I completely mis-read these articles? MegaHasher 21:33, 20 August 2007 (UTC)
Picture
Can somebody please add this picture to the article? I believe it will help a lot of people to understand the notion of the hash table. --yanis 12:25, 21 August 2007 (UTC)
- Yes, it is a good picture, but no, we cannot add it to the article, since on the image page you state that you copied it from a non-free source. But as you state on the image page, we can draw a license-free image that is inspired by it. Then I suggest some changes:
- Make it look like the images we already have (colour match, shapes and so on).
- The output of the hash function would be more clear if it is written as [04] instead of [4].
- The hash function would be more clear if it is written as "Key modulo 100" instead of "Key % 100". Of course, a better hash function usually is to do modulo for instance 101 but that would make the image unclear.
- Perhaps use the phone number as key?
- Each record must also contain the key, since hash tables need to check if they found the right record or not. So I suggest the record should hold the key (the phone number) and some data (the persons name), just like the other images we now use.
- Since I made most of the images in the article, I might now take a shot at it when I am in the mood. (But I had lots of input from others when doing them, and someone else remade them as SVGs.)
- --David Göthberg 13:00, 21 August 2007 (UTC)
Not really O(n)
Like arrays, hash tables provide constant-time O(1) lookup on average, regardless of the number of items in the table. However, the very rare worst-case lookup time can be as bad as O(n).
This is essentially wrong. In a non-buggy implementation (that is, when the table is a reasonable size, the table is never too full, and the hash function is working correctly), n collisions cannot occur.WolfKeeper 14:36, 16 October 2007 (UTC)
That's wrong for another reason. Dynamic perfect hashing methods like cuckoo hashing provide constant lookup time. Only updates may take O(n) time in the worst case. 23:24, 2 November 2007 (UTC)
The probability of n collisions by sheer chance is p^n where p is a number usually well under 0.8. The chances of say, 200 collisions is 0.8^200 = 4e-20, and on modern computers that would still run quickly. At a million hashes every second that statistically wouldn't happen once in a million years; and that assumes the table is full, and that's a small table by hashing standards. There's more chance of the hardware failing, by about a factor of 10^6.WolfKeeper 14:36, 16 October 2007 (UTC)
- But Big-Oh notation is based on the worst case and the worst case for hash tables would be poor or incorrect implementation/small n. The only reason hash tables never 'feel' like that is because only good implementations would be kept and reused. You can't ignore possibilities just because they aren't likely to occur when you're talking with Big-Oh.199.111.229.133 18:47, 16 October 2007 (UTC)
- But O(n) on a 50% full table with 1000 entries and a good hash function, it's not just "very rare" as in the text, it just never happens and never would happen before the end of the universe. If it has never been seen, and never would be seen, then it isn't rare, it just never happens.WolfKeeper 19:12, 16 October 2007 (UTC)
Confusing a hash table and a hash map
The intro to this article is severely misguided and confusing. A hash table is not the same thing as a "hash map" (which is a Javaism meaning a map/dictionary implemented by a hash table instead of a tree). Hash tables have nothing to do with keys and values -- a hash table only knows that elements have hashes, and that elements are at the index in its array corresponding to the value of the hash function for an element. For example, a hash table can also be used to implement an unordered set. Additionally, the concept of "buckets" is not intrinsic to a hash table. It is a common strategy for resolving collisions but there are others (double hashing, for instance).
What's the correct way to tag a page that has inaccurate information on it?
There are some non-wikipedia ghits that have the correct definition, such as: http://www.sparknotes.com/cs/searching/hashtables/section1.html
However, it's a fairly common misconception that hashes map keys to values, since so often they are used to implement maps or dictionaries. I am fairly sure that Knuth makes this distinction, but I don't have it on hand.
--67.180.15.227 (talk) 16:49, 5 December 2007 (UTC)