User:Ironeyes16/sandbox

Hashing

Hashing is a common technique in cryptography for encoding information quickly using standard algorithms. Generally, an algorithm is applied to a string of text, and the resulting string becomes the “hash value”. This creates a “digital fingerprint” of the message, as a specific hash value identifies a specific message. The output of the algorithm is also referred to as a “message digest” or a “checksum”. Hashing is useful for determining whether information has been changed in transmission: if the hash value computed on reception differs from the one computed before sending, there is evidence that the message has been altered. Once the algorithm has been applied to the data, the hash function produces a fixed-length output, so anything passed through a given hash function resolves to an output of the same length. It is important to note that hashing is not the same as encrypting. Hashing is a one-way operation that transforms data into a compressed message digest and can be used to check the integrity of a message. Encryption, by contrast, is a two-way operation that transforms plaintext into ciphertext and back again, and it protects the confidentiality of a message.[1]
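
A minimal sketch of the fixed-length “fingerprint” behaviour described above, using Python’s standard hashlib module (the messages are invented for illustration):

```python
import hashlib

message  = b"Pay Alice $100"
tampered = b"Pay Alice $900"

# The SHA-256 digest is always 64 hex characters, regardless of input size.
print(hashlib.sha256(message).hexdigest())
print(hashlib.sha256(tampered).hexdigest())   # same length, entirely different value
print(len(hashlib.sha256(b"a much longer message " * 1000).hexdigest()))  # still 64
```

A receiver who recomputes the digest and finds that it differs from the one sent alongside the message has evidence the message was altered in transmission.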

Hash functions can be used in the verification of digital signatures, so that when documents are signed via the Internet, the signature can be tied to one particular individual. Much like a hand-written signature, these signatures are verified by assigning their exact hash code to a person. Furthermore, hashing is applied to passwords for computer systems. Hashing of passwords began with the UNIX operating system: a user on the system would first create a password, which would be hashed using an algorithm or key and then stored in a password file. This is still prominent today, as web applications that require passwords will often hash users’ passwords and store them in a database.[2]
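
The sketch below illustrates the store-only-the-hash idea using Python’s standard library; the salted PBKDF2 function used here is a modern stand-in rather than the original UNIX algorithm, and the function names are invented for illustration:

```python
import hashlib, os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Hash a password with a random per-user salt; store both values."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Re-hash the attempt with the stored salt and compare the digests."""
    return hash_password(password, salt)[1] == stored_digest

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because hashing is one-way, a leaked password file reveals only the digests, not the passwords themselves.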

Claude Shannon

Claude E. Shannon is considered by many to be the father of mathematical cryptography. Shannon worked for several years at Bell Labs, and during his time there he produced an article entitled “A mathematical theory of cryptography”. This article was written in 1945 and eventually published in the Bell System Technical Journal in 1949. Shannon continued his work by producing another article entitled “A mathematical theory of communication”. Shannon was inspired during the war to address “[t]he problems of cryptography [because] secrecy systems furnish an interesting application of communication theory”. It is commonly accepted that the cryptography paper, published in 1949, was the starting point for the development of modern cryptography. Shannon identified the two main goals of cryptography: secrecy and authenticity. His focus was on exploring secrecy; thirty-five years later, G. J. Simmons would address the issue of authenticity. “A mathematical theory of communication” highlights one of the most significant aspects of Shannon’s work: cryptography’s transition from art to science.[3]

In his works, Shannon described the two basic types of systems for secrecy. The first are those designed with the intent to protect against attackers who have infinite resources with which to decode a message (theoretical secrecy, now unconditional security), and the second are those designed to protect against attackers with finite resources with which to decode a message (practical secrecy, now computational security). Most of Shannon’s work focused on theoretical secrecy; here, Shannon introduced a definition for the “unbreakability” of a cipher. If a cipher was determined “unbreakable”, it was considered to have “perfect secrecy”. In proving “perfect secrecy”, Shannon determined that it could only be obtained with a secret key whose length, given in binary digits, was greater than or equal to the number of bits contained in the information being encrypted. Furthermore, Shannon developed the “unicity distance”, defined as the “amount of plaintext that… determines the secret key.”[3]
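
The key-length condition can be illustrated with the one-time pad, the classic example of a cipher with perfect secrecy; a minimal sketch in Python, with an invented message (the key must be truly random, at least as long as the message, and never reused):

```python
import os

message = b"ATTACK AT DAWN"
key = os.urandom(len(message))          # secret key as long as the message

# XOR with the key encrypts; XOR-ing the ciphertext with the same key decrypts.
ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))

print(ciphertext.hex())                 # looks like random bytes
print(recovered)                        # b'ATTACK AT DAWN'
```

Because every possible plaintext of the same length is equally consistent with the ciphertext, even an attacker with unlimited resources learns nothing about the message without the key.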

Shannon’s work influenced further cryptography research in the 1970s, as the public-key cryptography developers M. E. Hellman and W. Diffie cited Shannon’s research as a major influence. His work also impacted modern designs of secret-key ciphers. After Shannon’s work with cryptography ended, progress slowed until Hellman and Diffie introduced their paper on public-key cryptography.[3]

Modern Cryptography

Encryption in modern times is achieved by using algorithms that have a key to encrypt and decrypt information. These keys convert the messages and data into “digital gibberish” through encryption and then return them to the original form through decryption. In general, the longer the key is, the more difficult it is to crack the code. This holds true because deciphering an encrypted message by brute force would require the attacker to try every possible key. To put this in context, each binary unit of information, or bit, has a value of 0 or 1. An 8-bit key therefore has 2^8, or 256, possible values, while a 56-bit key has 2^56, or roughly 72 quadrillion, possible keys to try in deciphering the message. With modern computing power, key spaces of this size are becoming easier to search exhaustively; however, as technology advances, so does the quality of encryption. Since WWII, one of the most notable advances in the study of cryptography has been the introduction of public-key algorithms, which use a public key to encrypt but a separate, private key to decrypt.[4]
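
The key-space arithmetic above, plus a toy brute-force search, can be sketched in a few lines of Python; the single-byte XOR “cipher” and the 0x5A key below are purely illustrative and far simpler than real ciphers such as DES or AES:

```python
print(2 ** 8)    # 256 possible 8-bit keys
print(2 ** 56)   # 72,057,594,037,927,936 possible 56-bit keys (~72 quadrillion)

ciphertext = bytes(b ^ 0x5A for b in b"meet at noon")    # "encrypt" with key 0x5A

for key in range(256):                                   # try every 8-bit key
    guess = bytes(b ^ key for b in ciphertext)
    if guess == b"meet at noon":                         # attacker recognizes the plaintext
        print(f"key found: {key:#04x}")                  # -> key found: 0x5a
        break
```

An 8-bit key falls instantly; each additional bit doubles the work, which is why modern symmetric keys are typically 128 bits or longer.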

Beginning around the 1990s, the use of the Internet for commercial purposes and the introduction of e-commerce called for a widespread standard for encryption. Before the introduction of the Advanced Encryption Standard (AES), information sent over the Internet, such as financial data, was encrypted using the Data Encryption Standard (DES), a symmetric-key cipher. This was used for its speed, as DES could scramble massive amounts of data at high speed. The problem was that, over time, more users knew the key, and the risk of security breaches increased. Around the late 1990s to early 2000s, the use of public-key cryptography became a more common approach for encryption, and soon a hybrid of the two schemes (key-wrapping) became the standard way for e-commerce operations to proceed. Additionally, the creation of a new protocol known as the Secure Sockets Layer, or SSL, led the way for online transactions to take place. Transactions ranging from purchasing goods to online bill pay and banking used SSL. Furthermore, as wireless Internet connections became more common among households, the need for encryption grew, as a level of security was needed in these everyday situations.[5]
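
A minimal sketch of the key-wrapping idea, assuming the third-party Python `cryptography` package is available; a fast symmetric key encrypts the bulk data, and a public key encrypts (“wraps”) that symmetric key so that only the holder of the private key can unwrap it (the order data below is invented):

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Symmetric key encrypts the bulk data quickly (the role DES once played).
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"order #1234: 2 widgets, $19.99")

# Public-key cryptography wraps the small symmetric key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped_key = private_key.public_key().encrypt(session_key, oaep)

# The receiver unwraps the session key with the private key, then decrypts.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))
```

The hybrid split exists because symmetric ciphers are fast but need a shared secret, while public-key operations are slow but solve the key-distribution problem for that small secret.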

Charles Babbage and Ada Lovelace

Charles Babbage is often regarded as one of the first pioneers of computing. Beginning in the 1810s, Babbage had a vision of mechanically computing numbers and tables. Putting this into reality, Babbage designed a calculator to compute numbers up to eight decimal places long. Continuing with the success of this idea, Babbage worked to develop a machine that could compute numbers with up to 20 decimal places. By the 1830s, Babbage had devised a plan to develop a machine that could use punched cards to perform arithmetical operations. The machine would store numbers in memory units, and there would be a form of sequential control, meaning that one operation would be carried out before another in such a way that the machine would produce an answer and not fail. This machine was to be known as the “Analytical Engine”, the first true precursor to the modern computer.[6]

Ada Lovelace (Augusta Ada Byron) is credited as the pioneer of computer programming and is regarded as a mathematical genius, a result of the mathematically heavy tutoring regimen her mother assigned to her as a young girl. Lovelace began working with Charles Babbage as an assistant while Babbage was working on his “Analytical Engine”, the first mechanical computer. During her work with Babbage, Ada Lovelace became the designer of the first computer algorithm, which had the ability to compute Bernoulli numbers. Moreover, Lovelace’s work with Babbage resulted in her prediction that future computers would not only perform mathematical calculations but also manipulate symbols, mathematical or not. While she was never able to see the results of her work, as the “Analytical Engine” was not created in her lifetime, her efforts did not go unnoticed in later years, beginning in the 1940s.[7]

Alan Turing and the Turing Machine

In 1937, Alan Turing introduced his idea of what are now referred to as Turing machines. These machines were designed to determine formally, in mathematical terms, what can be computed, taking into account limitations on computing ability. If a Turing machine can complete a task, the task is considered Turing computable.[8]

Turing machines are not physical objects but mathematical ones. They show if and how any given algorithm can be computed. Turing machines are state machines, where a state represents a position in a graph, and the machine uses these states to determine the outcome of the algorithm. To accomplish this, a theoretical one-dimensional tape is said to be divided into an infinite number of cells, each containing a binary digit, 1 or 0. As the read/write head of the machine scans the value in the current cell, it uses this value to determine which state to transition to next. To do so, the machine follows a set of rules, usually given in the form of tables, with logic similar to: if the machine is in state A and reads a 0, it will go to the next state, say, state B. The rules that the machine must follow are considered the program. These Turing machines helped define the logic behind modern computer science: memory in modern computers is represented by the infinite tape, and the bus of the machine is represented by the read/write head.[8]
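
A minimal sketch of such a rule table and tape in Python (the machine below simply inverts a binary string and then halts; the blank symbol, state names, and rules are invented for illustration):

```python
def run_turing_machine(rules, tape, state="A", head=0, blank="_"):
    cells = dict(enumerate(tape))            # the (conceptually infinite) tape
    while state != "HALT":
        symbol = cells.get(head, blank)      # read the current cell
        write, move, state = rules[(state, symbol)]
        cells[head] = write                  # write, then move the head
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table: (state, symbol read) -> (symbol to write, head move, next state)
invert_rules = {
    ("A", "0"): ("1", "R", "A"),
    ("A", "1"): ("0", "R", "A"),
    ("A", "_"): ("_", "R", "HALT"),
}
print(run_turing_machine(invert_rules, "10110"))  # -> 01001
```

The table plays the role of the program: each row says what to write, which way to move, and which state to enter next.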

Turing focused heavily on designing a machine that could determine what can be computed. Turing concluded that as long as a Turing machine exists that can compute arbitrarily precise approximations of a number, that value is computable; this includes constants such as pi. Furthermore, functions can be computable when they determine TRUE or FALSE for any given parameters. One example of this would be a function “IsEven”: if this function were passed a number, the computation would produce TRUE if the number were even and FALSE if the number were odd. Under these specifications, a function is considered computable if some Turing machine can evaluate it and then terminate. Furthermore, Turing machines can interpret logic operators, such as "AND, OR, XOR, NOT, and IF-THEN-ELSE"[8], to determine if a function is computable.
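
A brief illustration of the “IsEven” example mentioned above: a total predicate that returns TRUE or FALSE for every input, so a machine evaluating it always halts (the function name simply follows the example in the text):

```python
def is_even(n: int) -> bool:
    # Returns True when the number is even, False when it is odd.
    return n % 2 == 0

print(is_even(4))   # True
print(is_even(7))   # False
```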

John von Neumann and the von Neumann Architecture

In 1946, a model for computer architecture was introduced that became known as the von Neumann architecture. Since 1950, the von Neumann model has provided uniformity in subsequent computer designs. The von Neumann architecture was considered innovative as it introduced the idea of allowing machine instructions and data to share memory space. The von Neumann model is composed of three major parts: the arithmetic logic unit (ALU), the memory, and the instruction processing unit (IPU). In the von Neumann machine design, the IPU passes addresses to memory, and memory, in turn, is routed either back to the IPU if an instruction is being fetched or to the ALU if data is being fetched.[9]

Von Neumann’s machine design uses a RISC (reduced instruction set computing) architecture, which means the instruction set uses a total of 21 instructions to perform all tasks. (This is in contrast to CISC, complex instruction set computing, whose instruction sets offer more instructions to choose from.) Addresses, operations, and data types make up this instruction set. With the von Neumann architecture, main memory and the accumulator (the register that holds the result of logical operations)[10] are the two memories that are addressed. Operations can be carried out as simple arithmetic (performed by the ALU: addition, subtraction, multiplication, and division), conditional branches (more commonly seen now as “if” statements or “while” loops; the branches serve as ‘go to’ statements), and logical moves between the different components of the machine, i.e., a move from the accumulator to memory or vice versa. The von Neumann architecture accepts fractions and instructions as data types. Finally, as the von Neumann architecture is a simple one, its register management is also simple: it uses a set of seven registers to manipulate and interpret fetched data and instructions. These registers include the "IR (instruction register), IBR (instruction buffer register), MQ (multiplier quotient register), MAR (memory address register), and MDR (memory data register)."[9] The architecture also uses a program counter (PC) to keep track of where in the program the machine is.[9]
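
A minimal sketch of this style of machine in Python, assuming a toy instruction set rather than the original 21-instruction set (the opcode names, memory layout, and program are invented for illustration): instructions and data share one memory, the program counter tracks the next fetch, and the accumulator holds ALU results.

```python
def run(memory):
    acc, pc = 0, 0                       # accumulator and program counter
    while True:
        op, operand = memory[pc]         # instruction fetch from shared memory
        pc += 1
        if op == "LOAD":                 # data fetch: memory -> accumulator
            acc = memory[operand]
        elif op == "ADD":                # ALU operation on the accumulator
            acc += memory[operand]
        elif op == "STORE":              # accumulator -> memory
            memory[operand] = acc
        elif op == "JUMP_IF_ZERO":       # conditional branch ("go to")
            if acc == 0:
                pc = operand
        elif op == "HALT":
            return memory

# Program and data live side by side in the same memory:
# cells 0-3 hold instructions, cells 5-7 hold data.
memory = {
    0: ("LOAD", 5), 1: ("ADD", 6), 2: ("STORE", 7), 3: ("HALT", None),
    5: 2, 6: 3, 7: 0,
}
print(run(memory)[7])   # -> 5 (the sum of cells 5 and 6)
```

The fact that the program occupies the same memory as its data is exactly the stored-program property that made the design influential.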

References

  1. ^ Harris, Shon. "Cryptography" (PDF). Retrieved 15 February 2013.
  2. ^ Grah, Joseph Sterling. "Hash Functions in Cryptography" (PDF). bora.uib.no/bitstream/handle/1956/3206/47401627.pdf. Retrieved 15 February 2013.
  3. ^ a b c Berlekamp, Elwyn (January 2002). "Claude Elwood Shannon (1916–2001)" (PDF). Notices of the AMS. 49 (1): 8–16.
  4. ^ Froomkin, Dan (May 8, 1998). "Deciphering Encryption". The Washington Post. www.washingtonpost.com/wp-srv/politics/special/encryption/encryption.htm. Retrieved 15 February 2013.
  5. ^ Lee, Tom (August 2000). "Cryptography and the New Economy" (PDF). The Industrial Physicist. 6 (4): 31. aip.org/tip/INPHFA/vol-6/iss-4/p31.pdf.
  6. ^ "Charles Babbage". Encyclopædia Britannica Online Academic Edition. Encyclopædia Britannica Inc. Retrieved 20 February 2013.
  7. ^ Isaacson, Betsy. "Ada Lovelace, World's First Computer Programmer, Celebrated With Google Doodle". The Huffington Post. http://www.huffingtonpost.com/2012/12/10/google-doodle-ada-lovelace_n_2270668.html. Retrieved 20 February 2013.
  8. ^ a b c Barker-Plummer, David. "Turing Machines". The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/win2012/entries/turing-machine/. Retrieved 20 February 2013.
  9. ^ a b c Cragon, Harvey G. (2000). Computer Architecture and Implementation. Cambridge: Cambridge University Press. pp. 1–13. ISBN 0521651689.
  10. ^ "Accumulator". Def. 3. Oxford Dictionaries.