The Moby Project is a collection of public-domain lexical resources created by Grady Ward. Ward dedicated the resources to the public domain, and they are now mirrored at Project Gutenberg. As of 2007, the project contains the largest free phonetic database, with 177,267 words and corresponding pronunciations.
The Moby Hyphenator II contains hyphenations of 187,175 words and phrases (including 9,752 entries where no hyphenations are given, such as through and avoir). The character encoding appears to be MacRoman, and hyphenation is indicated by a bullet (character value 165 decimal, or A5 hexadecimal). Some entries, however, have a combination of actual hyphens and character 165, such as "bar•ber-sur•geon".
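The decoding just described can be sketched in Python. The sample bytes are constructed from the article's own example "at•mos•phere"; the function name is my own:

```python
BULLET = "\u2022"  # 0xA5 in MacRoman decodes to a bullet, "•"

def hyphenation_points(entry_bytes: bytes):
    """Decode one Moby Hyphenator II entry (MacRoman bytes, with 0xA5
    marking hyphenation points) into the whole word and its pieces."""
    text = entry_bytes.decode("mac_roman")
    return text.replace(BULLET, ""), text.split(BULLET)

# "at•mos•phere": 0xA5 bytes at each hyphenation point
word, parts = hyphenation_points(b"at\xa5mos\xa5phere")
# -> word "atmosphere", parts ["at", "mos", "phere"]
```

Entries that mix real hyphens with the 0xA5 bullet (such as "bar•ber-sur•geon") keep their hyphens inside the returned pieces under this scheme.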
There is little to no documentation of the hyphenation choices made; the following examples might give some flavour of the style of hyphenation used: at•mos•phere; at•tend•ant; ca•pac•i•ty; un•col•or•a•ble.
The project also includes word lists in several languages, tabulated by language, word count, and size in bytes.
However, some of the lists are contaminated: for example, the Japanese list contains English words such as abnormal and non-words such as abcdefgh and m,./. The lists are also sorted inconsistently: the French list is a single straight alphabetical listing, while the German list gives traditionally capitalized words in alphabetical order, followed by traditionally lower-cased words in alphabetical order. The Italian list, by contrast, contains no capitalized words at all.
The lists do not use accented characters; for example, "e^tre" is how a user would look up the French word être ("to be").
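The caret convention can be sketched as follows. The source confirms only the circumflex case ("être" → "e^tre"); the handling of other diacritics here (simply dropping them) is an assumption:

```python
import unicodedata

CIRCUMFLEX = "\u0302"  # combining circumflex accent

def to_list_form(word: str) -> str:
    """Render an accented word the way the Moby lists appear to:
    a circumflex becomes a caret after its base letter; other
    combining marks are dropped (assumption)."""
    out = []
    for ch in unicodedata.normalize("NFD", word):
        if ch == CIRCUMFLEX:
            out.append("^")
        elif unicodedata.combining(ch):
            continue  # assumption: other diacritics are discarded
        else:
            out.append(ch)
    return "".join(out)

print(to_list_form("être"))  # -> e^tre
```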
Moby Part-of-Speech contains 233,356 words, each fully described by its part(s) of speech, listed in priority order. Each line has the format word\parts-of-speech, where each part of speech is identified by a single-character code; for example, V marks a verb (usually a participle).
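Parsing an entry in this format might look like the following. The source confirms only the code V; the other code meanings and the sample entry are assumptions for illustration:

```python
# Subset of Moby Part-of-Speech codes. Only "V" is confirmed by the
# text above; the rest follow the database's conventions as commonly
# documented and should be treated as assumptions.
POS_CODES = {
    "N": "noun",
    "V": "verb (usually participle)",
    "t": "transitive verb",
    "i": "intransitive verb",
    "A": "adjective",
    "v": "adverb",
}

def parse_pos_entry(line: str):
    """Split a word\\codes line into the word and its decoded codes,
    preserving the priority order of the codes."""
    word, _, codes = line.partition("\\")
    return word, [POS_CODES.get(c, c) for c in codes]

entry_word, entry_codes = parse_pos_entry("abandon\\Vti")
# -> ("abandon", ["verb (usually participle)", "transitive verb",
#                 "intransitive verb"])
```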
The Moby Pronunciator II contains 177,267 entries with corresponding pronunciations. Most entries describe a single word, but approximately 79,000 contain hyphenated or multiple-word phrases, names, or lexemes. The Project Gutenberg distribution also includes a copy of the CMU Pronouncing Dictionary (cmudict v0.3). The file consists of lines of the format word[/part-of-speech] pronunciation, each terminated by an ASCII carriage return character (CR, '\r', 0x0D).
The word field can include apostrophes (e.g. isn't), hyphens (e.g. able-bodied), and multiple words separated by underscores (e.g. monkey_wrench). Non-English words are generally rendered, as stated in the documentation, without accents or other diacritical marks. However, in 36 entries (e.g. São_Miguel), some non-ASCII accented characters remain, represented using Mac OS Roman encoding.
The part-of-speech field is used to disambiguate 770 of the words whose pronunciations differ depending on their part of speech. For example, for the words spelled close, the verb is pronounced /kloʊz/, whereas the adjective is /kloʊs/. Each part of speech has been assigned a single-character code.
Following this is the pronunciation. Several special symbols are present:
_    Used to separate words
'    Primary stress on the following syllable
,    Secondary stress on the following syllable
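Splitting one such line into its fields can be sketched as below. The sample lines are invented: the "/v" tag and the pronunciation strings are illustrative, not taken from the actual database:

```python
def parse_pron_line(line: str):
    """Split a CR-terminated Pronunciator line of the form
    word[/part-of-speech] pronunciation into its three fields.
    Returns (word, pos_or_None, pronunciation)."""
    entry, _, pron = line.rstrip("\r").partition(" ")
    word, slash, pos = entry.partition("/")
    return word, (pos if slash else None), pron

# Hypothetical entries for illustration only:
w1, p1, s1 = parse_pron_line("close/v kl'oUz\r")
w2, p2, s2 = parse_pron_line("monkey_wrench 'mVNki_r'EntS\r")
# -> ("close", "v", ...) and ("monkey_wrench", None, ...)
```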
The rest of the symbols represent IPA characters. The pronunciations are generally consistent with a General American dialect of English, which exhibits the father-bother merger, the hurry-furry merger, and the lot-cloth split, but not the cot-caught merger or the wine-whine merger. Each phoneme is represented by a sequence of one or more characters. Some sequences are delimited with a slash character "/", as shown in the following table, but note that the sequence for // is delimited by two slash characters at either end:
To this collection are added a number of extra sequences representing phonemes found in several other languages; these encode the non-English words, phrases, and names included in the database. The following table lists these extra phonemes, though it is unclear to what extent some of them are artifacts of encoding errors.
N    Nasalisation of preceding vowel
O    [intent not clear]
V    v, β, ʋ
The Moby Thesaurus II contains 30,260 root words, with 2,520,264 synonyms and related terms – an average of 83.3 per root word. Each line consists of a list of comma-separated values, with the first term being the root word, and all following words being related terms.
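Reading one thesaurus line follows directly from that format. The sample line here is abbreviated and invented for illustration:

```python
def parse_thesaurus_line(line: str):
    """Split a Moby Thesaurus II line: comma-separated values with the
    root word first and its related terms after."""
    root, *related = line.rstrip("\r\n").split(",")
    return root, related

root, related = parse_thesaurus_line("happy,blissful,cheerful,content")
# -> root "happy" with 3 related terms
```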
Moby Words contains the following files (with word counts):

ACRONYMS.TXT (6,213): Common acronyms and abbreviations
COMMON.TXT (74,550): Common words present in two or more published dictionaries
COMPOUND.TXT (256,772): Phrases, proper nouns, and acronyms not included in the common words file
CROSSWD.TXT (113,809): Words included in the first edition of the Official Scrabble Players Dictionary
CRSWD-D.TXT (4,160): Additions made in the second edition of the Official Scrabble Players Dictionary
FICTION.TXT (467): The most commonly occurring substrings in the book The Joy Luck Club
FREQ.TXT (1,000): The most frequently occurring words in the English language, listed in descending order
FREQ-INT.TXT (1,000): The most frequently occurring words on Usenet in 1992, listed with corresponding percentages in decreasing order
KJVFREQ.TXT (1,185): The most frequently occurring substrings in the King James Version of the Bible, listed in descending order
NAMES.TXT (21,986): The most common names used in the United States and Great Britain
NAMES-F.TXT (4,946): Common English female names
NAMES-M.TXT (3,897): Common English male names
OFTENMIS.TXT (366): The most commonly misspelled English words
PLACES.TXT (10,196): Place names in the United States
SINGLE.TXT (354,984): Single words, excluding proper nouns, acronyms, compound words and phrases, but including archaic words and significant variant spellings
USACONST.TXT (7,618): The United States Constitution, including all amendments current to 1993
Total (863,149): Not the total of unique words
Total unique (639,995): The sum of the files that contain unique words (single words, proper nouns, acronyms, and compound words and phrases)
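As a quick arithmetic check, the per-file counts listed above do sum to the stated grand total:

```python
# Word counts per file, copied from the list above.
counts = {
    "ACRONYMS.TXT": 6213, "COMMON.TXT": 74550, "COMPOUND.TXT": 256772,
    "CROSSWD.TXT": 113809, "CRSWD-D.TXT": 4160, "FICTION.TXT": 467,
    "FREQ.TXT": 1000, "FREQ-INT.TXT": 1000, "KJVFREQ.TXT": 1185,
    "NAMES.TXT": 21986, "NAMES-F.TXT": 4946, "NAMES-M.TXT": 3897,
    "OFTENMIS.TXT": 366, "PLACES.TXT": 10196, "SINGLE.TXT": 354984,
    "USACONST.TXT": 7618,
}
print(sum(counts.values()))  # -> 863149
```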