From Wikipedia, the free encyclopedia

WikiProject Computer science (Rated C-class, Low-importance)
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science-related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as C-Class on the project's quality scale.
This article has been rated as Low-importance on the project's importance scale.

Untitled/undated discussion

Usually, the term frequency is just the count of a term in a document (NOT divided by the total number of terms in the document), which is confusing because it isn't really a frequency.

I strongly agree. In all the technical papers I've been reading for my Internet services class at U. Washington, TF is the count, so TF*IDF is biased toward longer documents (it usually takes higher values for them) and therefore needs to be normalized.
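The bias described above is easy to demonstrate. This is an illustrative sketch only (the documents are made up): raw-count tf grows with document length, while a length-normalized tf does not.

```python
# Illustrative only: raw-count tf favors longer documents,
# while length-normalized tf does not.
from collections import Counter

short_doc = "the cow jumps over the moon".split()
long_doc = short_doc * 10  # the same text repeated ten times

def raw_tf(term, doc):
    return Counter(doc)[term]             # plain count, as in many papers

def normalized_tf(term, doc):
    return Counter(doc)[term] / len(doc)  # count divided by document length

print(raw_tf("cow", short_doc), raw_tf("cow", long_doc))                  # 1 10
print(normalized_tf("cow", short_doc) == normalized_tf("cow", long_doc))  # True
```

With raw counts, the repeated document scores ten times higher for the same term; after dividing by document length the two scores coincide.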


Why is the title of the article in lower case? Why not "TF-IDF"? --ajvol 15:29, 25 November 2006 (UTC)

  • I believe the short story of this is that tf-idf is a well-known function in the literature and that is how it is referred to. I know that in some cases the lowercase form is used to help differentiate it from the uppercase variations that are sometimes used to refer to other equations. Josh Froelich 03:19, 11 December 2006 (UTC)
In other papers I see the multiplication sign, TF*IDF, not a hyphen. See, e.g., S. Robertson, Understanding inverse document frequency: on theoretical arguments for IDF. Journal of Documentation 60, 503-520, 2004.
What do you think about renaming the article? --AKA MBG (talk) 14:18, 7 March 2008 (UTC)
By far, the most common representation in the literature is lowercase with an asterisk or a similar multiplication symbol. I agree that this article should be renamed if MediaWiki allows * in article names. 13:20, 26 June 2009 (UTC)


You could extract the most relevant terms from a version of a page of Wikipedia, perhaps this very one, as an example. -- 15:20, 15 March 2007 (UTC)

This is a good idea. I'll do it tomorrow. —Preceding unsigned comment added by (talk) 07:10, 9 December 2008 (UTC)
I'd warn against using a Wikipedia article, since they change over time, which impedes reproducibility; it's better to choose a static document, such as a public domain text. If it explicitly references Wikipedia, it also runs afoul of Wikipedia:Avoid self-reference. Dcoetzee 09:29, 9 December 2008 (UTC)

Text Data Clustering

  • I think we can also use tf-idf in text data clustering. Does anyone know of any Java source code for unstructured text data clustering based on tf-idf? —Preceding unsigned comment added by (talk) 03:09, 12 September 2007 (UTC)
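For what it's worth, here is a minimal sketch (in Python rather than Java) of what tf-idf-based clustering can look like: tf-idf vectors, cosine similarity, and a greedy single-pass assignment. The documents, the 0.1 similarity threshold, and the greedy rule are all illustrative choices, not a standard algorithm.

```python
# Hedged sketch: cluster short texts by cosine similarity of their
# tf-idf vectors, using only the standard library.
import math
from collections import Counter

docs = [
    "the cow jumps over the moon",
    "the moon and the cow",
    "stock markets fell sharply",
    "markets and stock prices fell",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)

def idf(term):
    df = sum(1 for doc in tokenized if term in doc)  # document frequency
    return math.log(N / df)

def tfidf_vector(doc):
    counts = Counter(doc)                 # raw-count tf, per the thread above
    return {t: c * idf(t) for t, c in counts.items()}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vectors = [tfidf_vector(d) for d in tokenized]

# Greedy single-pass clustering: join the first cluster whose seed
# document is similar enough, otherwise start a new cluster.
clusters = []
for i, v in enumerate(vectors):
    for cluster in clusters:
        if cosine(vectors[cluster[0]], v) > 0.1:
            cluster.append(i)
            break
    else:
        clusters.append([i])

print(clusters)  # -> [[0, 1], [2, 3]]
```

The two "cow/moon" documents end up in one cluster and the two "stock/markets" documents in the other; a real system would use a proper algorithm such as k-means over these vectors.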

Normalized frequencies

The frequency of the terms isn't usually normalized by dividing it by the total length of each document. Instead, normalization is done by dividing by the frequency of the most used term in the document (as outlined in —Preceding unsigned comment added by (talk) 18:59, 29 February 2008 (UTC)

I've removed the normalization from the definition of tf following several discussions on mailing lists about tf implementations. The (unsourced!) variant previously described has sown a lot of confusion on the 'net. Qwertyus (talk) 12:46, 29 June 2011 (UTC)

Also, with the information given in the example it is not possible to calculate the TF of the term "cow". It is wrong to say that TF(cow) is the frequency of the term cow (3) divided by the number of terms (100). —Preceding unsigned comment added by (talk) 02:34, 13 October 2009 (UTC)

I've added a banner to the section requesting expert help. The name "term frequency" probably doesn't make clear to outsiders whether it should be a raw count or a normalised count. If someone from the text-retrieval community could simply clarify whether it is "normal" to normalise the value or not, that would improve this page! --mcld (talk) 11:14, 3 February 2012 (UTC)

  • I'm probably not the "expert" you're looking for, but either is a measure of term frequency. Normalization accounts, up to a point, for how term frequency tends to favor long documents, but pace comments above, a very simple measure of tf is still tf. I don't think there needs to be too much stress over this. Universaladdress (talk) 06:43, 15 March 2012 (UTC)
I have edited the section to provide one particular formula, but have tried to emphasize that the given formula is not necessarily the definitive version. Hope this was helpful. (talk) 05:03, 17 August 2012 (UTC)


Can someone please specify the logarithm bases correctly? Is that binary or base 10 log? —Preceding unsigned comment added by Godji (talkcontribs) 12:03, 20 March 2008 (UTC)

It doesn't matter, as long as they are all the same in your calculations. (talk) 23:42, 1 June 2008 (UTC)
Remember that log_b(x) = log_a(x) / log_a(b). This means that converting between two logarithm bases is just multiplication by a constant. (talk) 04:03, 17 August 2012 (UTC)
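A quick numeric check of that point (the collection sizes are made up): switching the logarithm base rescales every idf value by the same constant, so the ordering of terms is unchanged.

```python
# Changing the log base multiplies all idf values by the same constant
# (here 1/ln 2 when going from natural log to base 2), so rankings survive.
import math

N, df_rare, df_common = 1000, 2, 500  # illustrative collection statistics

idf_nat = [math.log(N / df_rare), math.log(N / df_common)]
idf_b2 = [math.log2(N / df_rare), math.log2(N / df_common)]

ratio = idf_b2[0] / idf_nat[0]
assert abs(idf_b2[1] / idf_nat[1] - ratio) < 1e-12   # one constant factor
assert (idf_nat[0] > idf_nat[1]) == (idf_b2[0] > idf_b2[1])  # same ordering
```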

Idf definition

It may be an issue of the Information Retrieval community as a whole, but the definition of IDF is an intellectual insult to anybody with a reasonable natural sciences background. Saying that IDF (Inverse Document Frequency) = log ( 1 / document frequency ) should be prohibited! Maybe the place to fix this is Wikipedia, since we can't fix IR textbooks and papers... —Preceding unsigned comment added by (talk) 19:50, 14 June 2008 (UTC)

That would be original research, which is not permitted. We should describe IDF as it is defined and used in the field of IR. If someone in the natural sciences has published something about why this is a poor choice for a formula, it could probably be given a couple sentences somewhere. Dcoetzee 09:41, 9 December 2008 (UTC)
So what if someone is insulted? The job of Wikipedia is to convey information... if you feel insulted about something, go ask your mother for a hug. —Preceding unsigned comment added by (talk) 20:16, 13 May 2009 (UTC)
If there is a significant difference in the way the term is used across disciplines, a disambiguation page may be in order. That information may not need to be included in this article, however. Universaladdress (talk) 06:14, 12 August 2012 (UTC)

Notation very confusing

The index notation here is difficult to understand quickly because you use i for word and j for document. It would be much easier to read and grok if you used w, w', w" for words, and d, d', d" for documents. —Preceding unsigned comment added by (talk) 20:12, 13 May 2009 (UTC)

Why is there a × sign instead of a ⋅? It is not a cross product here:

—Preceding unsigned comment added by (talk) 12:13, 17 June 2009 (UTC)


Example is incorrect

In the example, the term frequency should be the absolute count of the term in the document, and shouldn't be divided by the total count of all terms in the dictionary.

  • Detailed discussion about this question above on the talk page. Expert help has been sought to clarify the article. Universaladdress (talk) 15:46, 29 April 2012 (UTC)

corrupted text at Web mining

Concerns the topic of this article. Appears after "When the length of the words in a document goes to". I'm deleting it; hopefully someone here can fix it. — kwami (talk) 23:13, 17 January 2014 (UTC)

Why are you reporting this here, instead of at Talk:Web mining? QVVERTYVS (hm?) 23:24, 17 January 2014 (UTC)

Double Normalization K

What does the "K" stand for in "Double Normalization K"? — Preceding unsigned comment added by (talk) 15:06, 15 September 2015 (UTC)

Looks like it's just a nameless constant that somebody decided to call K. I.e., it doesn't stand for anything. QVVERTYVS (hm?) 15:14, 15 September 2015 (UTC)
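For reference, the variant being asked about is usually written tf(t, d) = K + (1 − K) · f(t, d) / max_f(d), with f(t, d) the raw count, max_f(d) the count of the most frequent term in d, and K the free constant in question (K = 0.5 gives the classic "augmented frequency" form). A small illustrative sketch:

```python
# "Double normalization K" tf variant: K is a nameless smoothing constant.
# K = 0.5 gives the classic "augmented frequency" form.
from collections import Counter

def double_norm_tf(term, doc, K=0.5):
    counts = Counter(doc)
    max_f = max(counts.values())       # count of the most frequent term
    return K + (1 - K) * counts[term] / max_f

doc = "the cow jumps over the moon".split()
print(double_norm_tf("the", doc))  # most frequent term -> 1.0
print(double_norm_tf("cow", doc))  # 0.5 + 0.5 * (1/2) = 0.75
```

Every term present in the document gets a weight between K and 1, which damps the raw-count bias toward long documents discussed further up this page.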