User:Jhcaraveo/sandbox
Contents
- 1 [ORIGINAL] Artificial intelligence in healthcare
- 2 [FINAL EDITS] Artificial intelligence in healthcare
[ORIGINAL] Artificial intelligence in healthcare
Clinical applications
Cardiovascular
Artificial intelligence algorithms have shown promising results in accurately diagnosing and risk stratifying patients with concern for coronary artery disease, showing potential as an initial triage tool, though few studies have directly compared the accuracy of machine learning models to clinician diagnostic ability. Other algorithms have been used in predicting patient mortality, medication effects, and adverse events following treatment for acute coronary syndrome. Wearables, smartphones, and internet-based technologies have also shown the ability to monitor patients' cardiac data points, expanding the amount of data and the various settings AI models can use and potentially enabling earlier detection of cardiac events occurring outside of the hospital. Another growing area of research is the utility of AI in classifying heart sounds and diagnosing valvular disease. Challenges of AI in cardiovascular medicine have included the limited data available to train machine learning models, such as limited data on social determinants of health as they pertain to cardiovascular disease.
Dermatology
Dermatology is an image-rich specialty, and the development of deep learning has been strongly tied to image processing, so there is a natural fit between dermatology and deep learning. There are three main imaging types in dermatology: contextual images, macro images, and micro images. Deep learning has shown great progress for each modality. Han et al. showed keratinocytic skin cancer detection from face photographs, Esteva et al. demonstrated dermatologist-level classification of skin cancer from lesion images, and Noyan et al. demonstrated a convolutional neural network that achieved 94% accuracy at identifying skin cells from microscopic Tzanck smear images.
Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.
In 2018, a paper published in the journal Annals of Oncology reported that skin cancer could be detected more accurately by an artificial intelligence system (a deep learning convolutional neural network) than by dermatologists. On average, the human dermatologists accurately detected 86.6% of skin cancers from the images, compared with 95% for the CNN.
Gastroenterology
AI can play a role in various facets of the field of gastroenterology. Endoscopic exams such as esophagogastroduodenoscopies (EGD) and colonoscopies rely on rapid detection of abnormal tissue. By enhancing these endoscopic procedures with AI, clinicians can more rapidly identify diseases, determine their severity, and visualize blind spots. Early trials of AI detection systems for early gastric cancer have shown sensitivity close to that of expert endoscopists.
Infectious diseases
AI has shown potential in both the laboratory and clinical spheres of infectious disease medicine. During the COVID-19 pandemic, the United States was estimated to be on track to invest more than $2 billion in AI-related healthcare research by 2025, more than four times the amount spent in 2019 ($463 million). Neural networks have been developed to rapidly and accurately detect a host response to COVID-19 from mass spectrometry samples. Other applications include support-vector machines identifying antimicrobial resistance, machine learning analysis of blood smears to detect malaria, and improved point-of-care testing for Lyme disease based on antigen detection. Additionally, AI has been investigated for improving the diagnosis of meningitis, sepsis, and tuberculosis, as well as for predicting treatment complications in hepatitis B and hepatitis C patients.
Oncology
AI has been explored for use in cancer diagnosis, risk stratification, molecular characterization of tumors, and cancer drug discovery. A particular challenge in oncologic care that AI is being developed to address is the ability to accurately predict which treatment protocols will be best suited for each patient based on their individual genetic, molecular, and tumor-based characteristics. Through its ability to translate images into mathematical sequences, AI has been trialed in cancer diagnostics with the reading of imaging studies and pathology slides. In January 2020, researchers demonstrated an AI system, based on a Google DeepMind algorithm, capable of surpassing human experts in breast cancer detection. In July 2020, it was reported that an AI algorithm developed by the University of Pittsburgh achieved the highest accuracy to date in identifying prostate cancer, with 98% sensitivity and 97% specificity.
Pathology
For many diseases, pathological analysis of cells and tissues is considered to be the gold standard of disease diagnosis. AI-assisted pathology tools have been developed to assist with the diagnosis of a number of diseases, including hepatitis B, gastric cancer, and colorectal cancer. AI has also been used to predict genetic mutations and prognosticate disease outcomes. AI is well-suited for use in low-complexity pathological analysis of large-scale screening samples, such as colorectal or breast cancer screening, thus lessening the burden on pathologists and allowing for faster turnaround of sample analysis. Several deep learning and artificial neural network models have shown accuracy similar to that of human pathologists, and a study of deep learning assistance in diagnosing metastatic breast cancer in lymph nodes showed that the accuracy of humans with the assistance of a deep learning program was higher than either the humans alone or the AI program alone. Additionally, implementation of digital pathology is predicted to save over $12 million for a university center over the course of five years, though savings attributed to AI specifically have not yet been widely researched. The use of augmented and virtual reality could prove to be a stepping stone to wider implementation of AI-assisted pathology, as they can highlight areas of concern on a pathology sample and present them in real-time to a pathologist for more efficient review. AI also has the potential to identify histological findings at levels beyond what the human eye can see, and has shown the ability to utilize genotypic and phenotypic data to more accurately detect the tumor of origin for metastatic cancer.
One of the major current barriers to widespread implementation of AI-assisted pathology tools is the lack of prospective, randomized, multi-center controlled trials in determining the true clinical utility of AI for pathologists and patients, highlighting a current area of need in AI and healthcare research.
Primary care
Primary care has become one key development area for AI technologies. AI in primary care has been used for supporting decision making, predictive modelling, and business analytics. Despite the rapid advances in AI technologies, general practitioners' view of the role of AI in primary care remains limited, mainly focused on administrative and routine documentation tasks. There are only a few examples of AI decision support systems that were prospectively assessed for clinical efficacy when used in practice by physicians, but there are cases where the use of these systems yielded a positive effect on treatment choices made by physicians.
Psychiatry
In psychiatry, AI applications are still in a proof-of-concept phase. Areas where the evidence base is widening quickly include predictive modelling of diagnosis and treatment outcomes, as well as chatbots and conversational agents that imitate human behaviour, which have been studied for anxiety and depression.
Challenges include the fact that many applications in the field are developed and proposed by private corporations, such as the screening for suicidal ideation implemented by Facebook in 2017. Such applications outside the healthcare system raise various professional, ethical, and regulatory questions. Another recurring issue is the validity and interpretability of the models: small training datasets contain bias that is inherited by the models, compromising their generalizability and stability. Such models may also have the potential to be discriminatory against minority groups that are underrepresented in samples.
Radiology
AI is being studied within the field of radiology to detect and diagnose diseases through computed tomography (CT) and magnetic resonance (MR) imaging. It may be particularly useful in settings where demand for human expertise exceeds supply, or where data is too complex to be efficiently interpreted by human readers. Several deep learning models have shown the capability to be roughly as accurate as healthcare professionals in identifying diseases through medical imaging, though few of the studies reporting these findings have been externally validated. AI can also provide non-interpretive benefits to radiologists, such as reducing noise in images, creating high-quality images from lower doses of radiation, enhancing MR image quality, and automatically assessing image quality. Further research investigating the use of AI in nuclear medicine focuses on image reconstruction, anatomical landmarking, and the enablement of lower doses in imaging studies.
[FINAL EDITS] Artificial intelligence in healthcare
Clinical applications
Cardiovascular
Artificial intelligence algorithms have shown promising results in accurately diagnosing and risk stratifying patients with concern for coronary artery disease, showing potential as an initial triage tool, though few studies have directly compared the accuracy of machine learning models to clinician diagnostic ability. Other algorithms have been used in predicting patient mortality, medication effects, and adverse events following treatment for acute coronary syndrome. Wearables, smartphones, and internet-based technologies have also shown the ability to monitor patients' cardiac data points, expanding the amount of data and the various settings AI models can use and potentially enabling earlier detection of cardiac events occurring outside of the hospital. Another growing area of research is the utility of AI in classifying heart sounds and diagnosing valvular disease. Challenges of AI in cardiovascular medicine have included the limited data available to train machine learning models, such as limited data on social determinants of health as they pertain to cardiovascular disease.
Dermatology
Dermatology is an image-rich specialty, and the development of deep learning has been strongly tied to image processing, so there is a natural fit between dermatology and deep learning. There are three main imaging types in dermatology: contextual images, macro images, and micro images. Deep learning has shown great progress for each modality. Han et al. showed keratinocytic skin cancer detection from face photographs, Esteva et al. demonstrated dermatologist-level classification of skin cancer from lesion images, and Noyan et al. demonstrated a convolutional neural network that achieved 94% accuracy at identifying skin cells from microscopic Tzanck smear images.
Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.
In 2018, a paper published in the journal Annals of Oncology reported that skin cancer could be detected more accurately by an artificial intelligence system (a deep learning convolutional neural network) than by dermatologists. On average, the human dermatologists accurately detected 86.6% of skin cancers from the images, compared with 95% for the CNN.
Gastroenterology
AI can play a role in various facets of the field of gastroenterology. Endoscopic exams such as esophagogastroduodenoscopies (EGD) and colonoscopies rely on rapid detection of abnormal tissue. By enhancing these endoscopic procedures with AI, clinicians can more rapidly identify diseases, determine their severity, and visualize blind spots. Early trials of AI detection systems for early gastric cancer have shown sensitivity close to that of expert endoscopists.
Infectious diseases
AI has shown potential in both the laboratory and clinical spheres of infectious disease medicine. During the COVID-19 pandemic, the United States was estimated to be on track to invest more than $2 billion in AI-related healthcare research by 2025, more than four times the amount spent in 2019 ($463 million). Neural networks have been developed to rapidly and accurately detect a host response to COVID-19 from mass spectrometry samples. Other applications include support-vector machines identifying antimicrobial resistance, machine learning analysis of blood smears to detect malaria, and improved point-of-care testing for Lyme disease based on antigen detection. Additionally, AI has been investigated for improving the diagnosis of meningitis, sepsis, and tuberculosis, as well as for predicting treatment complications in hepatitis B and hepatitis C patients.
Obstetrics and Gynecology
Current uses of AI in obstetrics and gynecology (OB/GYN) can aid in the detection of preterm labor and pregnancy complications, with the goal of reducing the morbidity and mortality rates of mothers and infants. One current use of AI in OB/GYN is a fetal heart rate monitoring system that predicts possible outcomes by analyzing cardiotocographs (CTGs). Trials such as the Computerized Interpretation of Fetal Heart Rate During Labor (INFANT) study have evaluated the ability of AI to assist practitioners with CTG interpretation during labor. A different group used deep learning to predict perinatal outcomes in women with short cervix length by combining AI with biological, imaging, demographic, and clinical factors.
Oncology
AI has been explored for use in cancer diagnosis, risk stratification, molecular characterization of tumors, and cancer drug discovery. A particular challenge in oncologic care that AI is being developed to address is the ability to accurately predict which treatment protocols will be best suited for each patient based on their individual genetic, molecular, and tumor-based characteristics. Through its ability to translate images into mathematical sequences, AI has been trialed in cancer diagnostics with the reading of imaging studies and pathology slides. In January 2020, researchers demonstrated an AI system, based on a Google DeepMind algorithm, capable of surpassing human experts in breast cancer detection. In July 2020, it was reported that an AI algorithm developed by the University of Pittsburgh achieved the highest accuracy to date in identifying prostate cancer, with 98% sensitivity and 97% specificity.
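Sensitivity and specificity figures like those reported above are derived from a confusion matrix of model predictions against ground truth. The following Python sketch illustrates the computation; the labels and predictions are invented for demonstration and are not from any study:

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (true positive rate) and specificity
    (true negative rate) from binary labels, where 1 = disease present."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Invented example: 10 specimens, 1 = cancer present
truth       = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
predictions = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, predictions)
print(sens, spec)  # 0.8 0.8: 4 of 5 cancers found, 4 of 5 healthy cleared
```

A model with 98% sensitivity and 97% specificity would, by the same arithmetic, miss 2% of true positives and falsely flag 3% of true negatives.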
Ophthalmology
Current AI applications in ophthalmology focus on high-incidence diseases such as diabetic retinopathy (DR) and age-related macular degeneration (AMD). The identification of DR, the leading cause of blindness in working-age adults, with AI technology has attracted much attention over the past few years. Given many labeled images containing diagnostic lesions, a computer extracts the characteristic features found in the images and builds a model, which it can then use to identify and interpret new images. Similarly, many AI efforts in ophthalmology have been directed toward the automatic diagnosis of AMD, a leading cause of central vision loss in people over the age of 50. Many groups have developed models that take fundus images as input and extract the features of AMD at different stages, with reported sensitivities ranging from 87% to 100%.
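The workflow described above (extract features from labeled images, build a model, then classify new images) can be illustrated with a deliberately tiny nearest-neighbour sketch in Python. The feature vectors and labels below are invented stand-ins for features extracted from retinal images, not real clinical data, and a nearest-neighbour rule is only one simple stand-in for the deep models actually used:

```python
import math

# Invented feature vectors, e.g. (lesion_count, vessel_tortuosity), paired
# with labels from graded fundus images.
training_set = [
    ((12.0, 0.9), "DR"),
    ((9.0, 0.8), "DR"),
    ((1.0, 0.2), "healthy"),
    ((0.0, 0.3), "healthy"),
]

def classify(features):
    """1-nearest-neighbour: label a new image by its closest training example."""
    nearest = min(training_set, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(classify((10.0, 0.7)))  # "DR" - closest to the high-lesion examples
print(classify((0.5, 0.25)))  # "healthy"
```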
Pathology
For many diseases, pathological analysis of cells and tissues is considered to be the gold standard of disease diagnosis. AI-assisted pathology tools have been developed to assist with the diagnosis of a number of diseases, including hepatitis B, gastric cancer, and colorectal cancer. AI has also been used to predict genetic mutations and prognosticate disease outcomes. AI is well-suited for use in low-complexity pathological analysis of large-scale screening samples, such as colorectal or breast cancer screening, thus lessening the burden on pathologists and allowing for faster turnaround of sample analysis. Several deep learning and artificial neural network models have shown accuracy similar to that of human pathologists, and a study of deep learning assistance in diagnosing metastatic breast cancer in lymph nodes showed that the accuracy of humans with the assistance of a deep learning program was higher than either the humans alone or the AI program alone. Additionally, implementation of digital pathology is predicted to save over $12 million for a university center over the course of five years, though savings attributed to AI specifically have not yet been widely researched. The use of augmented and virtual reality could prove to be a stepping stone to wider implementation of AI-assisted pathology, as they can highlight areas of concern on a pathology sample and present them in real-time to a pathologist for more efficient review. AI also has the potential to identify histological findings at levels beyond what the human eye can see, and has shown the ability to utilize genotypic and phenotypic data to more accurately detect the tumor of origin for metastatic cancer.
One of the major current barriers to widespread implementation of AI-assisted pathology tools is the lack of prospective, randomized, multi-center controlled trials in determining the true clinical utility of AI for pathologists and patients, highlighting a current area of need in AI and healthcare research.
Primary care
Primary care has become one key development area for AI technologies. AI in primary care has been used for supporting decision making, predictive modelling, and business analytics. Despite the rapid advances in AI technologies, general practitioners' view of the role of AI in primary care remains limited, mainly focused on administrative and routine documentation tasks. There are only a few examples of AI decision support systems that were prospectively assessed for clinical efficacy when used in practice by physicians, but there are cases where the use of these systems yielded a positive effect on treatment choices made by physicians.
Psychiatry
In psychiatry, AI applications are still in a proof-of-concept phase. Areas where the evidence base is widening quickly include predictive modelling of diagnosis and treatment outcomes, as well as chatbots and conversational agents that imitate human behaviour, which have been studied for anxiety and depression.
Challenges include the fact that many applications in the field are developed and proposed by private corporations, such as the screening for suicidal ideation implemented by Facebook in 2017. Such applications outside the healthcare system raise various professional, ethical, and regulatory questions. Another recurring issue is the validity and interpretability of the models: small training datasets contain bias that is inherited by the models, compromising their generalizability and stability. Such models may also have the potential to be discriminatory against minority groups that are underrepresented in samples.
Radiology
AI is being studied within the field of radiology to detect and diagnose diseases through computed tomography (CT) and magnetic resonance (MR) imaging. It may be particularly useful in settings where demand for human expertise exceeds supply, or where data is too complex to be efficiently interpreted by human readers. Several deep learning models have shown the capability to be roughly as accurate as healthcare professionals in identifying diseases through medical imaging, though few of the studies reporting these findings have been externally validated. AI can also provide non-interpretive benefits to radiologists, such as reducing noise in images, creating high-quality images from lower doses of radiation, enhancing MR image quality, and automatically assessing image quality. Further research investigating the use of AI in nuclear medicine focuses on image reconstruction, anatomical landmarking, and the enablement of lower doses in imaging studies.
Surgery
AI in surgery will likely become widespread through augmenting human capabilities with computers. These technologies have been used to augment decision making, such as in the identification of high-risk patients before operations or in predicting the real-time risk of hypoxaemia during general anesthesia. Future directions of AI in surgery will include every phase of care. For example, during the preoperative and postoperative periods, a patient may track physical data on wearable fitness trackers and mobile apps. AI can then automate the analysis of this data, monitor risk and recovery, and predict complications before and after surgery. This data could then be used during intraoperative monitoring to assist in the prediction and avoidance of adverse events.
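As a toy illustration of the automated monitoring described above, the sketch below flags readings that deviate sharply from a patient's recent moving average. The window size, threshold, and heart-rate values are invented for demonstration and have no relation to real clinical criteria:

```python
from collections import deque

def flag_anomalies(readings, window=5, threshold=0.25):
    """Return indices where a reading deviates from the moving average of
    the preceding `window` readings by more than `threshold` (fractional)."""
    recent = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            avg = sum(recent) / window
            if abs(value - avg) / avg > threshold:
                flagged.append(i)
        recent.append(value)
    return flagged

# Invented post-operative heart-rate stream (beats per minute)
heart_rate = [72, 75, 71, 74, 73, 76, 118, 74, 72, 75]
print(flag_anomalies(heart_rate))  # [6] - the 118 bpm spike
```

Real systems use far richer models, but the structure is the same: a stream of wearable data in, a small set of flagged events out for clinician review.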
Urology
AI in urology has many applications across its subfields. In urogynecology, AI methods have predicted the time and number of incontinence events (losses of bladder control) from data gathered by wearable devices. In pediatric urology, AI has been used to predict surgical outcomes, condition severity based on imaging, and abnormalities in imaging. One group was able to predict pyeloplasty outcomes using an artificial neural network (ANN) trained on uretero-pelvic junction obstruction data in children, with 100% sensitivity and specificity for their specified outcome measures. In uro-oncology, machine learning and deep learning algorithms have been used to differentiate renal masses and to predict nuclear grade, prognosis, recurrence, and survival outcomes. In a study predicting the nuclear grade of clear cell renal cell carcinoma (ccRCC), the algorithm successfully classified 85.1% of nuclear grades across the cases.
The final draft of the contribution can be found under the "Art" section of the Applications of Artificial Intelligence article linked below:
Article draft on Sophia Carino's page: https://en.wikipedia.org/wiki/User:Sophiaacarino/Applications_of_artificial_intelligence?veaction=edit&preload=Template%3ADashboard.wikiedu.org_draft_template
Copied Draft below:
Article Draft
Brief Summary
We expand on the "Art" section of the Applications of Artificial Intelligence page by including the history of AI art and how to understand and analyze art through AI.
Art [Original Page]
Main article: Artificial intelligence art
AI has been used to produce visual art. Initiatives such as Google Magenta, conducted by the Google Brain team, use AI to create art.
The exhibition "Thinking Machines: Art and Design in the Computer Age, 1959–1989" at MoMA provided an overview of AI applications for art, architecture, and design. Exhibitions showcasing the usage of AI to produce art include the 2016 Google-sponsored benefit and auction at the Gray Area Foundation in San Francisco, where artists experimented with the DeepDream algorithm and the 2017 exhibition "Unhuman: Art in the Age of AI", which took place in Los Angeles and Frankfurt. In spring 2018, the Association for Computing Machinery dedicated a magazine issue to the subject of computers and art. In June 2018, "Duet for Human and Machine", an art piece permitting viewers to interact with an artificial intelligence, premiered at the Beall Center for Art + Technology. The Austrian Ars Electronica and Museum of Applied Arts, Vienna opened exhibitions on AI in 2019. Ars Electronica's 2019 festival "Out of the box" explored art's role in a sustainable societal transformation.
Art [New Contributions Added]
Main article: Artificial intelligence art
AI has been used to produce visual art. AI art began with a computer program called AARON by Harold Cohen in 1968 at the University of California at San Diego. AARON is the most notable example of AI art in the era of GOFAI programming because of its use of a symbolic rule-based approach to generate technical images.[1] Cohen developed AARON with the goal of being able to code the act of drawing. In its primitive form, AARON created simple black and white drawings. Cohen would later finish the drawings by painting them. Throughout the following years, Cohen continued to work on AARON to be able to not only draw but also paint. Cohen designed AARON to paint using special brushes and dyes that were chosen by the program itself without mediation from Cohen.[2]
In recent years, AI art has shifted into a new paradigm with the emergence of generative adversarial network (GAN) programming, which generates technical images through machine learning frameworks without the need for human operators.[1] Newer programs, such as Magenta, are emerging in this age of GAN programming. Magenta began as a research project in 2016 from the Google Brain team that aimed to build programs and algorithms that can generate art and music without the need for human intervention. Other programs used to generate art include Artbreeder, which is GAN-based, and DeepDream.
AI art generated by GANs has challenged the parameters of art and only recently entered the art auction market.[3] On October 25, 2018, Portrait of Edmond Belamy by the Parisian collective Obvious became the first art piece created by artificial intelligence to be offered at Christie’s auction house, selling for $432,500.[4] The presence of AI art at Christie’s has only increased since then, with digital artist Beeple’s NFT titled “Everydays: The First 5000 Days” selling for $69.3 million on March 11, 2021. It was the first purely digital artwork to be auctioned at Christie’s.[5]
The exhibition "Thinking Machines: Art and Design in the Computer Age, 1959–1989" at MoMA provided an overview of AI applications for art, architecture, and design. Exhibitions showcasing the usage of AI to produce art include the 2016 Google-sponsored benefit and auction at the Gray Area Foundation in San Francisco, where artists experimented with the DeepDream algorithm and the 2017 exhibition "Unhuman: Art in the Age of AI", which took place in Los Angeles and Frankfurt. In spring 2018, the Association for Computing Machinery dedicated a magazine issue to the subject of computers and art. In June 2018, "Duet for Human and Machine", an art piece permitting viewers to interact with an artificial intelligence, premiered at the Beall Center for Art + Technology. The Austrian Ars Electronica and Museum of Applied Arts, Vienna opened exhibitions on AI in 2019. Ars Electronica's 2019 festival "Out of the box" explored art's role in a sustainable societal transformation.
Understanding Art with AI
Due to large-scale digitization of artwork in the past few decades, research methods that utilize AI have been generated to quantitatively analyze digital art collections. Although the main goal of digitization was to allow for accessibility and exploration of these collections, the use of AI in analyzing them has brought about new research perspectives.[6]
Two computational methods, distant viewing and close reading, are the typical approaches used to analyze digitized art.[7] Close reading focuses on specific visual aspects of a single piece; tasks performed by machines in close reading include computational artist authentication and analysis of brushstrokes or texture properties. In contrast, distant viewing considers an entire collection, statistically visualizing similarity across the collection for a specific feature. Common tasks relating to this method include automatic classification, object detection, multimodal tasks, knowledge discovery in art history, and computational aesthetics.[6]
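The distant-viewing idea of comparing a feature across a whole collection can be sketched with cosine similarity over feature vectors. The vectors below are invented stand-ins for features extracted from digitized artworks, not output of any real analysis pipeline:

```python
import math

# Invented feature vectors, e.g. (colourfulness, edge_density, symmetry)
collection = {
    "Portrait A": (0.2, 0.8, 0.9),
    "Portrait B": (0.25, 0.75, 0.85),
    "Abstract C": (0.9, 0.3, 0.1),
}

def cosine(u, v):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(query):
    """Rank every other work in the collection by similarity to `query`."""
    q = collection[query]
    others = [(name, cosine(q, v)) for name, v in collection.items() if name != query]
    return sorted(others, key=lambda pair: pair[1], reverse=True)

print(most_similar("Portrait A"))  # Portrait B ranks above Abstract C
```

Scaled to thousands of works, pairwise similarities like these are what distant-viewing studies visualize statistically.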
In addition to computational methods evaluating the visual aspect of digitized art, researchers have introduced models that reflect emotional responses to art such as ArtEmis, a large-scale dataset with machine learning models that contain emotional reactions to visual art as well as predictions of emotion from images or text.[8]
- ^ a b Poltronieri, Fabrizio Augusto; Hänska, Max (2019-10-23). "Technical Images and Visual Art in the Era of Artificial Intelligence: From GOFAI to GANs". Proceedings of the 9th International Conference on Digital and Interactive Arts. Braga Portugal: ACM: 1–8. doi:10.1145/3359852.3359865. ISBN 978-1-4503-7250-3. S2CID 208109113.
- ^ "Fine art print - crypto art". Kate Vass Galerie. Retrieved 2022-05-07.
- ^ Wu, Chujun; Seokin, Ko; Zhang, Lina (2021-01-29). "On GANs Art in Context of Artificial Intelligence Art". 2021 the 5th International Conference on Machine Learning and Soft Computing. ICMLSC'21. New York, NY, USA: Association for Computing Machinery: 168–171. doi:10.1145/3453800.3453831. ISBN 978-1-4503-8761-3. S2CID 235474022.
- ^ "Is artificial intelligence set to become art's next medium? | Christie's". www.christies.com. Retrieved 2022-05-07.
- ^ Kugler, Logan (2021-08-24). "Non-fungible tokens and the future of art". Communications of the ACM. 64 (9): 19–20. doi:10.1145/3474355. ISSN 0001-0782. S2CID 237283169.
- ^ a b Cetinic, Eva; She, James (2022-02-16). "Understanding and Creating Art with AI: Review and Outlook". ACM Transactions on Multimedia Computing, Communications, and Applications. 18 (2): 66:1–66:22. arXiv:2102.09109. doi:10.1145/3475799. ISSN 1551-6857. S2CID 231951381.
- ^ Lang, Sabine; Ommer, Bjorn (2018). "Reflecting on How Artworks Are Processed and Analyzed by Computer Vision: Supplementary Material".
- ^ Achlioptas, Panos; Ovsjanikov, Maks; Haydarov, Kilichbek; Elhoseiny, Mohamed; Guibas, Leonidas (2021-01-18). "ArtEmis: Affective Language for Visual Art". arXiv:2101.07396.
Peer Review by Je Yeong Soh
[edit]Brief Summary
[edit]We expand on the "Art" section of the Applications of Artificial Intelligence page by including the history of AI art and how to understand and analyze art through AI.
Art [Original Page]
[edit]Main article: Artificial intelligence art
AI has been used to produce visual art. Initiatives such as Google Magenta, conducted by the Google Brain team, use AI to create art.
The exhibition "Thinking Machines: Art and Design in the Computer Age, 1959–1989" at MoMA provided an overview of AI applications for art, architecture, and design. Exhibitions showcasing the usage of AI to produce art include the 2016 Google-sponsored benefit and auction at the Gray Area Foundation in San Francisco, where artists experimented with the DeepDream algorithm and the 2017 exhibition "Unhuman: Art in the Age of AI", which took place in Los Angeles and Frankfurt. In spring 2018, the Association for Computing Machinery dedicated a magazine issue to the subject of computers and art. In June 2018, "Duet for Human and Machine", an art piece permitting viewers to interact with an artificial intelligence, premiered at the Beall Center for Art + Technology. The Austrian Ars Electronica and Museum of Applied Arts, Vienna opened exhibitions on AI in 2019. Ars Electronica's 2019 festival "Out of the box" explored art's role in a sustainable societal transformation.
Art [New Contributions Added]
[edit]Main article: Artificial intelligence art
AI has been used to produce visual art. AI art began with AARON, a computer program created by Harold Cohen in 1968 at the University of California, San Diego. AARON is the most notable example of AI art from the era of GOFAI programming because of its use of a symbolic rule-based approach to generate technical images.[1] Cohen developed AARON with the goal of codifying the act of drawing. In its primitive form, AARON created simple black-and-white drawings, which Cohen would then finish by painting. Over the following years, Cohen continued to develop AARON so that it could not only draw but also paint: he designed it to paint using special brushes and dyes chosen by the program itself, without mediation from Cohen.[2]
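As a loose sketch of the symbolic rule-based approach described above, a GOFAI-style drawing program can be expressed as hand-written rules applied in sequence rather than parameters learned from data. The rules below are invented for illustration; they are not Cohen's actual AARON rules.

```python
import random

random.seed(42)

# A hypothetical symbolic rule-based drawing program: instead of learning
# from examples, it applies explicit hand-written rules deciding what to
# draw next. (Rules invented for illustration; not Cohen's AARON rules.)

def rule_place_figure():
    """Rule 1: place a closed figure at a random position on the canvas."""
    x, y = random.randint(10, 90), random.randint(10, 90)
    return {"kind": "figure", "center": (x, y), "radius": random.randint(3, 8)}

def rule_decorate(figure):
    """Rule 2: attach eight strokes radiating from each figure."""
    cx, cy = figure["center"]
    return [{"kind": "stroke", "start": (cx, cy), "angle": i * 45}
            for i in range(8)]

def draw():
    """Rule 3: keep applying rules 1 and 2 until five figures exist."""
    elements = []
    while sum(e["kind"] == "figure" for e in elements) < 5:
        fig = rule_place_figure()
        elements.append(fig)
        elements.extend(rule_decorate(fig))
    return elements

drawing = draw()
print(len(drawing))  # 45 elements: 5 figures plus 8 strokes each
```

The point of the sketch is that every mark is traceable to an explicit rule, which is what distinguishes this GOFAI style from the learned, data-driven approaches discussed next.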
In recent years, AI art has shifted into a new paradigm with the emergence of generative adversarial network (GAN) programming, which generates technical images through machine learning frameworks that no longer require a human operator.[1] Newer programs have emerged in this age of GAN programming: Magenta, for example, began in 2016 as a research project of the Google Brain team aimed at building programs and algorithms that can generate art and music without human intervention. Other programs used to generate art include Artbreeder and DeepDream.
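The adversarial idea behind GANs can be sketched in a few lines of NumPy. This toy example is invented for illustration (real GAN art systems use deep convolutional networks and image data): a one-parameter "generator" learns to mimic a simple data distribution by trying to fool a logistic-regression "discriminator".

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: 1-D samples from N(4, 1), standing in for image features.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = g_w*z + g_b and discriminator D(x) = sigmoid(d_w*x + d_b).
g_w, g_b = rng.normal(size=1), np.zeros(1)
d_w, d_b = rng.normal(size=1), np.zeros(1)

lr = 0.05
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    z = rng.normal(size=(32, 1))
    for x, label in ((real_batch(32), 1.0), (g_w * z + g_b, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad = p - label                     # d(cross-entropy)/d(logit)
        d_w -= lr * np.mean(grad * x)
        d_b -= lr * np.mean(grad)
    # Generator update: push D(G(z)) toward 1, i.e. fool the discriminator.
    z = rng.normal(size=(32, 1))
    p = sigmoid(d_w * (g_w * z + g_b) + d_b)
    grad = (p - 1.0) * d_w                   # chain rule through D into G
    g_w -= lr * np.mean(grad * z)
    g_b -= lr * np.mean(grad)

samples = g_w * rng.normal(size=(1000, 1)) + g_b
print(float(samples.mean()))  # drifts toward the real mean of 4.0
```

Neither network ever sees an explicit rule for what the data looks like; the generator improves only because the discriminator keeps penalizing unconvincing samples, which is the sense in which GAN image generation "surpasses the need for a human operator".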
AI art generated by GAN programming challenged the parameters of art and only recently entered the art auction market.[3] On October 25, 2018, Portrait of Edmond Belamy, by the Parisian collective Obvious, became the first piece of art created by artificial intelligence to be offered at Christie's auction house, where it sold for $432,500.[4] The presence of AI art at Christie's has only increased since then: digital artist Beeple's NFT "Everydays: The First 5000 Days" sold for $69.3 million on March 11, 2021, becoming the first purely digital artwork to be auctioned at Christie's.[5] [I really like what you have written so far. My only question about your use of Beeple's NFT as an example is whether it is considered AI art?]
The exhibition "Thinking Machines: Art and Design in the Computer Age, 1959–1989" at MoMA provided an overview of AI applications for art, architecture, and design. Exhibitions showcasing the usage of AI to produce art include the 2016 Google-sponsored benefit and auction at the Gray Area Foundation in San Francisco, where artists experimented with the DeepDream algorithm and the 2017 exhibition "Unhuman: Art in the Age of AI", which took place in Los Angeles and Frankfurt. In spring 2018, the Association for Computing Machinery dedicated a magazine issue to the subject of computers and art. In June 2018, "Duet for Human and Machine", an art piece permitting viewers to interact with an artificial intelligence, premiered at the Beall Center for Art + Technology. The Austrian Ars Electronica and Museum of Applied Arts, Vienna opened exhibitions on AI in 2019. Ars Electronica's 2019 festival "Out of the box" explored art's role in a sustainable societal transformation.
Understanding Art with AI
[edit]Due to the large-scale digitization of artwork in the past few decades, research methods that utilize AI have been developed to quantitatively analyze digital art collections. Although the main goal of digitization was to allow for the accessibility and exploration of these collections, the use of AI in analyzing them has brought about new research perspectives.[6]
Two computational methods, distant viewing and close reading, are the typical approaches used to analyze digitized art.[7] Close reading focuses on specific visual aspects of a single piece; tasks performed by machines in close reading methods include computational artist authentication and analysis of brushstrokes or texture properties. In contrast, distant viewing methods statistically visualize the similarity of a specific feature across an entire collection; common tasks relating to this method include automatic classification, object detection, multimodal tasks, knowledge discovery in art history, and computational aesthetics.[6]
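A minimal sketch of a distant-viewing task can make the idea concrete. The collection, feature, and similarity measure below are all invented for illustration: each "work" is a tiny random image, the feature is a crude color histogram, and similarity is compared across the whole set at once rather than by examining any single piece.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for a digitized collection: tiny random RGB "paintings".
# work_a and work_b are built from the same base image plus slight noise,
# so they should come out as the most similar pair.
base = rng.random((16, 16, 3))
collection = {
    "work_a": base + rng.normal(0, 0.01, base.shape),
    "work_b": base + rng.normal(0, 0.01, base.shape),
    "work_c": rng.random((16, 16, 3)),
    "work_d": rng.random((16, 16, 3)),
}

def color_histogram(img, bins=4):
    """A crude global feature: a joint histogram over quantized RGB values."""
    q = np.clip((img * bins).astype(int), 0, bins - 1)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

names = list(collection)
feats = np.array([color_histogram(collection[n]) for n in names])

# Pairwise cosine similarity across the entire collection ("distant viewing").
unit = feats / np.linalg.norm(feats, axis=1, keepdims=True)
sim = unit @ unit.T
np.fill_diagonal(sim, -1.0)          # ignore each work's self-similarity

i, j = np.unravel_index(np.argmax(sim), sim.shape)
print(names[i], names[j])            # the two works sharing a palette
```

Swapping the histogram for a learned feature and the toy images for a museum's scanned holdings turns the same loop into the classification and knowledge-discovery tasks listed above.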
In addition to computational methods evaluating the visual aspects of digitized art, researchers have introduced models that reflect emotional responses to art, such as ArtEmis, a large-scale dataset of emotional reactions to visual art, paired with machine learning models that predict emotion from images or text.[8]
[I thought this was fairly well written!]
- ^ a b Poltronieri, Fabrizio Augusto; Hänska, Max (2019-10-23). "Technical Images and Visual Art in the Era of Artificial Intelligence: From GOFAI to GANs". Proceedings of the 9th International Conference on Digital and Interactive Arts. Braga, Portugal: ACM: 1–8. doi:10.1145/3359852.3359865. ISBN 978-1-4503-7250-3. S2CID 208109113.
- ^ "Fine art print - crypto art". Kate Vass Galerie. Retrieved 2022-05-07.
- ^ Wu, Chujun; Seokin, Ko; Zhang, Lina (2021-01-29). "On GANs Art in Context of Artificial Intelligence Art". 2021 the 5th International Conference on Machine Learning and Soft Computing. ICMLSC'21. New York, NY, USA: Association for Computing Machinery: 168–171. doi:10.1145/3453800.3453831. ISBN 978-1-4503-8761-3. S2CID 235474022.
- ^ "Is artificial intelligence set to become art's next medium? | Christie's". www.christies.com. Retrieved 2022-05-07.
- ^ Kugler, Logan (2021-08-24). "Non-fungible tokens and the future of art". Communications of the ACM. 64 (9): 19–20. doi:10.1145/3474355. ISSN 0001-0782. S2CID 237283169.
- ^ a b Cetinic, Eva; She, James (2022-02-16). "Understanding and Creating Art with AI: Review and Outlook". ACM Transactions on Multimedia Computing, Communications, and Applications. 18 (2): 66:1–66:22. arXiv:2102.09109. doi:10.1145/3475799. ISSN 1551-6857. S2CID 231951381.
- ^ Lang, Sabine; Ommer, Bjorn (2018). "Reflecting on How Artworks Are Processed and Analyzed by Computer Vision: Supplementary Material".
- ^ Achlioptas, Panos; Ovsjanikov, Maks; Haydarov, Kilichbek; Elhoseiny, Mohamed; Guibas, Leonidas (2021-01-18). "ArtEmis: Affective Language for Visual Art". arXiv:2101.07396.