Data mining

From Wikipedia, the free encyclopedia
Revision as of 15:05, 26 January 2007

Data mining (DM), also called Knowledge-Discovery in Databases (KDD) or Knowledge-Discovery and Data Mining, is the process of automatically searching large volumes of data for patterns, using tools such as classification, association rule mining, and clustering. Data mining is a complex topic with links to several core fields, chief among them computer science, and it draws on seminal computational techniques from statistics, information retrieval, machine learning and pattern recognition.

Example

A simple example of data mining, often called Market Basket Analysis, is its use in retail sales. If a clothing store records the purchases of customers, a data mining system could identify those customers who favour silk shirts over cotton ones.

Another is that of a supermarket chain that, through analysis of transactions over a long period of time, found that beer and diapers were often bought together. Although explaining this relationship may be difficult, taking advantage of it is easier, for example by placing the high-profit diapers close to the high-profit beer in the store. (This example is questioned at Beer and Nappies -- A Data Mining Urban Legend.)

The two examples above deal with association rules within transaction-based data. Not all data is transaction-based, however, and logical or inexact rules may also be present within a database. In a manufacturing application, an inexact rule might state that 73% of products which have a specific defect will develop a secondary problem within the next six months.
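The association-rule idea behind the market basket examples above can be sketched in a few lines of Python. The transactions and item names below are invented for illustration, and the support/confidence measures are the standard ones from association rule mining, not a specific product's implementation:

```python
from itertools import combinations
from collections import Counter

# Hypothetical transaction data: each basket is a set of purchased items.
transactions = [
    {"beer", "diapers", "chips"},
    {"beer", "diapers"},
    {"milk", "bread"},
    {"beer", "diapers", "milk"},
    {"bread", "chips"},
]

# Count how often each pair of items appears together in a basket.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def support(pair):
    """Fraction of all transactions that contain both items of the pair."""
    return pair_counts[pair] / len(transactions)

def confidence(a, b):
    """Confidence of the rule {a} -> {b}: P(b in basket | a in basket)."""
    n_a = sum(1 for t in transactions if a in t)
    n_ab = sum(1 for t in transactions if a in t and b in t)
    return n_ab / n_a
```

On this toy data, `support(("beer", "diapers"))` is 0.6 (the pair occurs in 3 of 5 baskets), mirroring the beer-and-diapers pattern described above; real systems such as the Apriori algorithm scale this counting to millions of transactions.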

Use of the term

Data mining has been defined as "the nontrivial extraction of implicit, previously unknown, and potentially useful information from data" [1] and "the science of extracting useful information from large data sets or databases" [2].

It involves sorting through large amounts of data and picking out relevant information.

It is usually used by businesses and other organizations, but is increasingly used in the sciences to extract information from the enormous data sets generated by modern experimentation.

Metadata, or data about a given set of data, are often expressed in a condensed data mine-able format, or one that facilitates the practice of data mining. Common examples include executive summaries and scientific abstracts.

Although data mining is a relatively new term, the technology is not. Companies have long used powerful computers to sift through volumes of data, such as supermarket scanner data, to produce market research reports. Continuous innovations in computer processing power, disk storage, and statistical software are dramatically increasing the accuracy and usefulness of analysis.

Data mining identifies trends within data that go beyond simple analysis. Through the use of sophisticated algorithms, users have the ability to identify key attributes of business processes and target opportunities.

The term data mining is often applied to the two separate processes of knowledge discovery and prediction. Knowledge discovery provides explicit information in a readable form that can be understood by a user. Forecasting, or predictive modeling, provides predictions of future events and may be transparent and readable in some approaches (e.g. rule-based systems) and opaque in others, such as neural networks. Moreover, some data mining systems, such as neural networks, are inherently geared towards prediction rather than knowledge discovery.

Although the term "data mining" is usually used in relation to analysis of data, like artificial intelligence, it is an umbrella term with varied meanings in a wide range of contexts. Unlike data analysis, data mining is not based or focused on an existing model which is to be tested or whose parameters are to be optimized.

In statistical analyses where there is no underlying theoretical model, data mining is often approximated via stepwise regression methods, wherein the space of 2^k possible models relating a single outcome variable to k potential explanatory variables is searched intelligently. With the advent of parallel computing, it became possible (when k is less than approximately 40) to examine all 2^k models. This procedure is called all-subsets or exhaustive regression. Some of the first applications of exhaustive regression involved the study of plant data.[3]
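The exhaustive search over the 2^k candidate models can be sketched as below, using synthetic data (the variables and coefficients are illustrative only). Note that raw residual sum of squares always favours the model with all k variables, which is why practical all-subsets procedures penalize complexity with a criterion such as AIC or BIC:

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y truly depends on x0 and x2 only; x1 is irrelevant noise.
n, k = 50, 3
X = rng.normal(size=(n, k))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=n)

def rss(cols):
    """Residual sum of squares of an OLS fit (with intercept) on the given columns."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return float(r @ r)

# Exhaustive ("all subsets") regression: fit every one of the 2^k models.
subsets = [c for r in range(k + 1) for c in combinations(range(k), r)]
scores = {c: rss(c) for c in subsets}
best = min(scores, key=scores.get)  # by raw RSS this is always the full model
```

The relevant subset {x0, x2} scores far better than any model built on the noise variable alone, which is the signal an information criterion would pick out once the full model's extra parameter is penalized.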

Data dredging

Data dredging or data fishing are terms one may use to criticize someone's data mining efforts when it is felt that the patterns or causal relationships discovered are unfounded. In this case the pattern suffers from overfitting on the training data.

Data dredging is the scanning of data for any relationship, and then, when one is found, coming up with an interesting explanation. The conclusions may be suspect because data sets with large numbers of variables will, by chance alone, contain some "interesting" relationships. Fred Schwed [4] wrote:

"There have always been a considerable number of people who busy themselves examining the last thousand numbers which have appeared on a roulette wheel, in search of some repeating pattern. Sadly enough, they have usually found it."

Nevertheless, determining correlations in investment analysis has proven to be very profitable for statistical arbitrage operations (such as pairs trading strategies), and correlation analysis has been shown to be very useful in risk management. Indeed, finding correlations in the financial markets, when done properly, is not the same as finding false patterns in roulette wheels.

Some exploratory data work is always required in any applied statistical analysis to get a feel for the data, so sometimes the line between good statistical practice and data dredging is less than clear.

Most data mining efforts are focused on developing highly detailed models of some large data set. Other researchers have described an alternative method that involves finding the minimal differences between elements in a data set, with the goal of developing simpler models that represent relevant data. [5]

When data sets contain a large number of variables, the level of statistical significance demanded should account for the number of patterns tested. For example, if we test 100 random patterns at the 0.01 level, we expect one of them to appear "interesting" by chance alone.
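This multiple-testing arithmetic can be illustrated directly. Under the null hypothesis a p-value is uniformly distributed on [0, 1], so the sketch below simply draws p-values uniformly rather than running real tests; the Bonferroni adjustment shown is the standard correction, applied here to invented numbers:

```python
import random

random.seed(1)

# Simulate testing 100 "patterns" that are pure noise. Under the null
# hypothesis each test's p-value is uniform on [0, 1], so we draw it directly.
alpha = 0.01
n_tests = 100
p_values = [random.random() for _ in range(n_tests)]

# Naive screening: about alpha * n_tests patterns look "significant" by chance.
false_alarms = sum(p < alpha for p in p_values)

# Bonferroni correction: demand p < alpha / n_tests for any single pattern,
# which controls the chance of even one false alarm across all tests.
bonferroni_hits = sum(p < alpha / n_tests for p in p_values)
```

With 100 tests at the 0.01 level the expected number of chance "discoveries" is exactly one, matching the observation in the text, while the Bonferroni threshold (0.0001 here) makes such accidental findings rare.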

Cross validation is a common approach to evaluating the fitness of a model generated via data mining, where the data is divided into a training subset and a test subset to respectively build and then test the model. Common cross validation techniques include the holdout method, k-fold cross validation, and the leave-one-out method.
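The holdout and k-fold procedures just described can be sketched with a deliberately trivial "model" (a threshold between two class means) on synthetic one-dimensional data; everything about the dataset and classifier below is invented for illustration:

```python
import random

random.seed(0)

# Hypothetical 1-D dataset: (feature value, binary label), two separated classes.
data = [(random.gauss(0.0, 1.0), 0) for _ in range(50)] + \
       [(random.gauss(3.0, 1.0), 1) for _ in range(50)]
random.shuffle(data)

def train(rows):
    """The 'model' is just the midpoint between the two class means."""
    means = {}
    for label in (0, 1):
        xs = [x for x, y in rows if y == label]
        means[label] = sum(xs) / len(xs)
    return (means[0] + means[1]) / 2.0

def accuracy(threshold, rows):
    return sum((x > threshold) == (y == 1) for x, y in rows) / len(rows)

# Holdout method: build the model on one subset, test it on the other.
split = int(0.7 * len(data))
train_rows, test_rows = data[:split], data[split:]
holdout_acc = accuracy(train(train_rows), test_rows)

# k-fold cross validation: every row serves as test data exactly once.
def k_fold(rows, k=5):
    accs = []
    for i in range(k):
        test_fold = rows[i::k]
        train_fold = [r for j, r in enumerate(rows) if j % k != i]
        accs.append(accuracy(train(train_fold), test_fold))
    return sum(accs) / k

cv_acc = k_fold(data)
```

Because every model is scored only on rows it never saw during training, both estimates guard against the overfitting that data dredging produces; the leave-one-out method is simply k-fold with k equal to the number of rows.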

Privacy concerns

There are also privacy concerns associated with data mining - specifically regarding the source of the data analyzed.

Data mining government or commercial data sets for national security or law enforcement purposes has also raised privacy concerns. [6]

There are many legitimate uses of data mining. For example, a database of prescription drugs taken by a group of people could be used to find combinations of drugs exhibiting harmful interactions. Since any particular combination may occur in only 1 out of 1000 people, a great deal of data would need to be examined to discover such an interaction. A project involving pharmacies could reduce the number of drug reactions and potentially save lives. Unfortunately, there is also a huge potential for abuse of such a database.

Essentially, data mining gives information that would not be available otherwise. It must be properly interpreted to be useful. When the data collected involves individual people, there are many questions concerning privacy, legality, and ethics.

Combinatorial game data mining

Since the early 1990s, with the availability of oracles for certain combinatorial games, also called tablebases (e.g. for 3x3 chess with any beginning configuration, small-board dots-and-boxes, small-board hex, and certain endgames in chess, dots-and-boxes, and hex), a new area for data mining has been opened up: the extraction of human-usable strategies from these oracles. This is pattern recognition at too high a level of abstraction for known statistical pattern recognition algorithms, or any other algorithmic approach, to be applied; at least, no one knows how to do it yet (as of January 2005). The method used is the full force of the scientific method: extensive experimentation with the tablebases, combined with intensive study of tablebase answers to well-designed problems and with knowledge of prior art (i.e. pre-tablebase knowledge), leading to flashes of insight. Berlekamp in dots-and-boxes and John Nunn in chess endgames are notable examples of researchers doing this work, though they were not and are not involved in tablebase generation.

Notable uses of data mining

Two notable pitfalls in this type of justice application are the scarcity of suspect data points and the learning capabilities of adversaries. The first issue stems from the simple fact that a handful of suspects within a dataset of 200 million people usually yields patterns which are scientifically questionable and often results in pointless investigative efforts. The second stems from the fact that as adversaries change strategy, their patterns of past behavior fail to provide clues to future activities. Hence, while data mining may well give useful results when applied to the behavior of customers shopping at discount stores, its applications within the justice system will forever be hindered by the scarcity of suspect data and the natural dynamic changes in adversarial strategies.

See also

Induction algorithms

Application areas

Software

References

  1. ^ W. Frawley, G. Piatetsky-Shapiro and C. Matheus (Fall 1992). "Knowledge Discovery in Databases: An Overview". AI Magazine: pp. 213-228. ISSN 0738-4602.
  2. ^ D. Hand, H. Mannila, P. Smyth (2001). Principles of Data Mining. MIT Press, Cambridge, MA. ISBN 0-262-08290-X.
  3. ^ A.G. Ivakhnenko (1970). "Heuristic Self-Organization in Problems of Engineering Cybernetics". Automatica. 6: pp. 207–219. ISSN 0005-1098.
  4. ^ Fred Schwed, Jr (1940). Where Are the Customers' Yachts?. ISBN 0-471-11979-2.
  5. ^ T. Menzies, Y. Hu (November 2003). "Data Mining For Very Busy People". IEEE Computer: pp. 18-25. ISSN 0018-9162.
  6. ^ K.A. Taipale (December 15, 2003). "Data Mining and Domestic Security: Connecting the Dots to Make Sense of Data". Colum. Sci. & Tech. L. Rev. 5 (2). SSRN 546782 / OCLC 45263753.
  7. ^ Stephen Haag et al. Management Information Systems for the Information Age. p. 28. ISBN 0-07-095569-7.

General References

  • Kurt Thearling, An Introduction to Data Mining (also available is a corresponding online tutorial)
  • Dean Abbott, I. Philip Matkovsky, and John Elder IV, Ph.D., An Evaluation of High-end Data Mining Tools for Fraud Detection, a comparative analysis of major high-end data mining software tools presented at the 1998 IEEE International Conference on Systems, Man, and Cybernetics, San Diego, CA, October 12-14, 1998.
  • Mierswa, Ingo and Wurst, Michael and Klinkenberg, Ralf and Scholz, Martin and Euler, Timm: YALE: Rapid Prototyping for Complex Data Mining Tasks, in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-06), 2006.
  • Peng, Y., Kou, G., Shi, Y. and Chen, Z. "A Systemic Framework for the Field of Data Mining and Knowledge Discovery", in Proceeding of workshops on The Sixth IEEE International Conference on Data Mining (ICDM), 2006
  • Hari Mailvaganam and Daniel Chen, Articles on Data Mining

Books

  • Ronen Feldman and James Sanger, The Text Mining Handbook, Cambridge University Press, ISBN 9780521836579
  • Pang-Ning Tan, Michael Steinbach and Vipin Kumar, Introduction to Data Mining (2005), ISBN 0-321-32136-7 (companion book site)
  • Richard O. Duda, Peter E. Hart, David G. Stork, Pattern Classification, Wiley Interscience, ISBN 0-471-05669-3, (see also Powerpoint slides)
  • Phiroz Bhagat, Pattern Recognition in Industry, Elsevier, ISBN 0-08-044538-1
  • Ian Witten and Eibe Frank, Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations (2000), ISBN 1-55860-552-5, (see also Free Weka software)
  • Yike Guo and Robert Grossman, editors: High Performance Data Mining: Scaling Algorithms, Applications and Systems, Kluwer Academic Publishers, 1999.
  • Mark F. Hornick, Erik Marcade, Sunil Venkayala: Java Data Mining: Strategy, Standard, and Practice: A Practical Guide for Architecture, Design, and Implementation (paperback)


External links

  • Data Mining at Curlie