User:AdvaitPanicker/Artificial intelligence

Frameworks

Artificial intelligence projects can have their ethical permissibility tested while an AI system is being designed, developed, and implemented. An AI framework such as the SUM framework developed by the Alan Turing Institute tests projects in four main areas:[1]

  • RESPECT the dignity of individual people
  • CONNECT with other people sincerely, openly and inclusively
  • CARE for the wellbeing of everyone
  • PROTECT social values, justice and the public interest

Other developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others;[2] however, these principles are not without criticism, especially with regard to how the people who contribute to these frameworks are chosen.[3]

Promoting the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, as well as collaboration among job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.[4]

Algorithmic bias and fairness

Main articles: Algorithmic bias and Fairness (machine learning)

Machine learning applications will be biased if they learn from biased data. The developers may not be aware that the bias exists. Bias can be introduced by the way training data is selected and by the way a model is deployed. If a biased algorithm is used to make decisions that can seriously harm people (as it can in medicine, finance, recruitment, housing or policing) then the algorithm may cause discrimination. Fairness in machine learning is the study of how to prevent the harm caused by algorithmic bias. It has become a serious area of academic study within AI. Researchers have discovered it is not always possible to define "fairness" in a way that satisfies all stakeholders.
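
Such disparities are typically measured by comparing a model's error rates across demographic groups. The following sketch is a minimal illustration of such an audit, using invented records and hypothetical group labels rather than data from any cited study:

```python
# Minimal illustrative sketch: auditing a classifier's decisions for
# group-level disparities. The records below are invented for demonstration.

def group_rates(records, group):
    """Compute false positive and false negative rates for one group."""
    fp = sum(1 for r in records if r["group"] == group and r["predicted"] and not r["reoffended"])
    fn = sum(1 for r in records if r["group"] == group and not r["predicted"] and r["reoffended"])
    negatives = sum(1 for r in records if r["group"] == group and not r["reoffended"])
    positives = sum(1 for r in records if r["group"] == group and r["reoffended"])
    return fp / negatives, fn / positives

# Each record holds a person's group, the model's prediction, and the
# observed outcome (all values hypothetical).
records = [
    {"group": "A", "predicted": True,  "reoffended": False},
    {"group": "A", "predicted": True,  "reoffended": True},
    {"group": "A", "predicted": False, "reoffended": False},
    {"group": "A", "predicted": False, "reoffended": True},
    {"group": "B", "predicted": False, "reoffended": False},
    {"group": "B", "predicted": True,  "reoffended": True},
    {"group": "B", "predicted": False, "reoffended": False},
    {"group": "B", "predicted": False, "reoffended": True},
]

for g in ("A", "B"):
    fpr, fnr = group_rates(records, g)
    print(f"group {g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```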

On June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people, a problem called "sample size disparity". Google "fixed" this problem by preventing the system from labelling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.

COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different: the system consistently overestimated the chance that a black person would re-offend and underestimated the chance that a white person would re-offend. In 2017, several researchers showed that it was mathematically impossible for COMPAS to satisfy all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data.
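
The conflict can be seen with a short calculation. The sketch below uses a standard identity relating a group's base rate, a score's positive predictive value and its error rates; the numbers are illustrative and are not the actual COMPAS figures:

```python
# Minimal numerical sketch of the impossibility result (illustrative values,
# not the COMPAS data). If a risk score has the same positive predictive
# value (PPV) and the same false negative rate (FNR) in two groups, the
# false positive rate (FPR) is forced by each group's base rate p:
#
#     FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
#
# so groups with different base rates must end up with different FPRs.

def forced_fpr(base_rate, ppv, fnr):
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.6, 0.35  # same predictive value and miss rate for both groups
for name, base_rate in [("group A", 0.5), ("group B", 0.4)]:
    print(name, "base rate", base_rate, "-> false positive rate",
          round(forced_fpr(base_rate, ppv, fnr), 3))
```

With identical predictive value and miss rate in both groups, the differing base rates force different false positive rates, which mirrors the tension described above.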

A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender". Moritz Hardt said "the most robust fact in this research area is that fairness through blindness doesn't work."
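
The following synthetic example, with hypothetical feature names and invented data rather than any cited study, illustrates why such "fairness through blindness" fails: a decision rule that never sees the protected attribute still produces different outcomes for each group through a correlated proxy.

```python
# Minimal synthetic sketch (hypothetical feature names, invented data): a rule
# that is "blind" to the protected attribute can still reproduce group
# disparities through a correlated proxy feature.
import random

random.seed(0)

# Synthetic population in which neighborhood is strongly correlated with group.
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    if group == "A":
        neighborhood = "north" if random.random() < 0.9 else "south"
    else:
        neighborhood = "south" if random.random() < 0.9 else "north"
    people.append({"group": group, "neighborhood": neighborhood})

# A decision rule that never looks at group, only at the proxy feature.
def blind_decision(person):
    return person["neighborhood"] == "north"

for g in ("A", "B"):
    members = [p for p in people if p["group"] == g]
    rate = sum(blind_decision(p) for p in members) / len(members)
    print(f"group {g}: approval rate {rate:.2f}")
```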

Criticism of COMPAS highlighted a deeper problem with the misuse of AI. Machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. Machine learning models trained on past data reflect patterns in that data, and if that past data is unrepresentative or biased, the model's predictions will be similarly unrepresentative and biased. One example, a paper published by Stanford researchers that predicted sexual orientation based solely on images of a person's face, demonstrates how AI can reflect stereotypes (such as treating facial hair or glasses as indicators of sexuality) and unintended patterns in the real world, resulting in invalid conclusions.[5]

Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women.

At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that, until AI and robotics systems can be demonstrated to be free of bias mistakes, they should be considered unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.

Applications

Main article: Applications of artificial intelligence

AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace and Google's FaceNet) and image labeling (used by Facebook, Apple's iPhoto and TikTok).

Games

Main article: Game artificial intelligence

Game-playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. In 2017, it defeated Ke Jie, who had continuously held the world No. 1 ranking for two years. Other programs, such as Pluribus and Cepheus, handle imperfect-information games such as poker at a superhuman level. In the 2010s, DeepMind developed a "generalized artificial intelligence" that could learn many diverse Atari games on its own.

Military

Main article: Military artificial intelligence

Various countries are deploying AI military applications. The main applications enhance command and control, communications, sensors, integration and interoperability. Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles. AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, coordination and deconfliction of distributed Joint Fires between networked combat vehicles involving manned and unmanned teams. AI was incorporated into military operations in Iraq and Syria.

Generative AI

Main article: Generative artificial intelligence

In the early 2020s, generative AI gained widespread prominence. ChatGPT, based on GPT-3, and other large language models were tried by 14% of American adults. The increasing realism and ease of use of AI-based text-to-image generators such as Midjourney, DALL-E, and Stable Diffusion sparked a trend of viral AI-generated photos. Widespread attention was gained by a fake photo of Pope Francis wearing a white puffer coat, the fictional arrest of Donald Trump, and a hoax of an attack on the Pentagon, as well as the use of such tools in professional creative arts.

Industry-specific tasks

There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported they had incorporated "AI" in some offerings or processes. A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, and supply chain management.

In agriculture, AI has helped farmers identify areas that need irrigation, fertilization or pesticide treatment, and to increase yields. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.

In medicine, AlphaFold 2 (2020) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.

Artificial intelligence is used in astronomy to analyze the increasing amounts of available data, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights", for example for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. It could also be used for activities in space such as space exploration, including analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.

Bad actors and weaponized AI

Main articles: Lethal autonomous weapon, Artificial intelligence arms race, and AI safety

A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. By 2015, over fifty countries were reported to be researching battlefield robots. These weapons are considered especially dangerous for several reasons: if they kill an innocent person it is not clear who should be held accountable, it is unlikely they will reliably choose targets, and, if produced at scale, they are potentially weapons of mass destruction. In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons; however, the United States and others disagreed.

AI provides a number of tools that are particularly useful for authoritarian governments: smart spyware, face recognition and voice recognition allow widespread surveillance; such surveillance allows machine learning to classify potential enemies of the state and can prevent them from hiding; recommendation systems can precisely target propaganda and misinformation for maximum effect; deepfakes and generative AI aid in producing misinformation; advanced AI can make authoritarian centralized decision making more competitive with liberal and decentralized systems such as markets.

AI facial recognition systems are used for mass surveillance, notably in China. In 2019, Bengaluru, India deployed AI-managed traffic signals. This system uses cameras to monitor traffic density and adjust signal timing based on the interval needed to clear traffic.

Terrorists, criminals and rogue states can use weaponized AI such as advanced digital warfare and lethal autonomous weapons.

Machine-learning AI is also able to design tens of thousands of toxic molecules in a matter of hours.

  1. ^ "Understanding artificial intelligence ethics and safety" (PDF). 2019.
  2. ^ Floridi, Luciano; Cowls, Josh (2019-06-23). "A Unified Framework of Five Principles for AI in Society". Harvard Data Science Review. doi:10.1162/99608f92.8cd550d1.
  3. ^ Buruk, Banu; Ekmekci, Perihan Elif; Arda, Berna (2020-09-01). "A critical perspective on guidelines for responsible and trustworthy artificial intelligence". Medicine, Health Care and Philosophy. 23 (3): 387–399. doi:10.1007/s11019-020-09948-1. ISSN 1572-8633.
  4. ^ Kamila, Manoj Kumar; Jasrotia, Sahil Singh (2023-01-01). "Ethical issues in the development of artificial intelligence: recognizing the risks". International Journal of Ethics and Systems. ahead-of-print (ahead-of-print). doi:10.1108/IJOES-05-2023-0107. ISSN 2514-9369.
  5. ^ Wang, Yilun; Kosinski, Michal (February 2018). "Deep neural networks are more accurate than humans at detecting sexual orientation from facial images". Journal of Personality and Social Psychology. 114 (2): 246–257. doi:10.1037/pspa0000098. ISSN 1939-1315.