
Edward Chang (neurosurgeon)


Edward Chang is an American neurosurgeon and scientist. He is the Joan and Sandy Weill Chair of the Department of Neurological Surgery at the University of California, San Francisco, and the Jeanne Robertson Distinguished Professor.

Chang specializes in operative brain mapping to ensure the safety and effectiveness of surgeries for treating seizures and brain tumors, as well as micro-neurosurgery for treating cranial nerve disorders such as trigeminal neuralgia and hemifacial spasm. In 2020, Chang was elected into the National Academy of Medicine[1] for “deciphering the functional blueprint of speech in the human cerebral cortex, pioneering advanced clinical methods for human brain mapping, and spearheading novel translational neuroprosthetic technology for paralyzed patients.”[2][3]

Academic career

Chang attended medical school at UCSF, where he also did a predoctoral fellowship on auditory cortex neurophysiology with Professor Michael Merzenich. He later did his neurosurgery residency at UCSF, training under the mentorship of Dr. Mitchel Berger for brain tumors, Dr. Nicholas Barbaro for epilepsy, and Dr. Michael Lawton for vascular disorders. During residency, he did a postdoctoral fellowship on human cognitive neuroscience with Dr. Robert Knight at UC Berkeley.[4]

Chang joined the UCSF neurosurgery faculty in 2010, and was promoted to department chair in 2020.[4]

Scientific contributions

Chang has made fundamental contributions to understanding the neural code of speech and neuropsychiatric conditions in the human brain.[5]

Chang pioneered the use of high-density direct electrophysiological recordings from cortex, which enabled him and colleagues to determine the selective tuning of cortical neurons to specific acoustic and phonetic features in consonants and vowels.[6] His lab discovered the neural coding of vocal pitch cues in prosodic intonation for English and lexical tones in Mandarin.[7] Chang's lab determined how the auditory cortex detects temporal landmarks such as onsets and acoustic edges in the speech envelope signal to extract syllables and stress patterns,[8] important for the rhythm and intelligibility of speech.

A general finding in his work is that the internal phonological representation of speech sounds results from complex auditory computations in the superior temporal gyrus (STG), including processes such as adaptation, contrast enhancement, normalization, complex spectral integration, non-linear processing, prediction, and temporal dynamics.[9]

His lab demonstrated that the superior temporal lobe is critical for conscious speech perception; that is, it is integral not only for detecting speech sounds but also for interpreting them. For example, they showed how the superior temporal cortex can selectively attend to one voice when multiple voices are present[10] and how it restores missing sounds to words when a phoneme segment is replaced with noise.[11]

Awards

References

  1. ^ "National Academy of Medicine Elects 100 New Members". October 19, 2020.
  2. ^ Belluck, Pam (2021-07-14). "Tapping Into the Brain to Help a Paralyzed Man Speak". The New York Times. ISSN 0362-4331. Retrieved 2021-07-22.
  3. ^ Willingham, Emily. "New Brain Implant Transmits Full Words from Neural Signals". Scientific American.
  4. ^ a b "Edward Chang, MD, Appointed Joan and Sanford I. Weill Chair of Department of Neurological Surgery". UCSF School of Medicine. Retrieved 2023-06-19.
  5. ^ Hernandez, Daniela (2022-09-02). "How Brain-Computer Interfaces Could Restore Speech and Help Fight Depression". Wall Street Journal. ISSN 0099-9660. Retrieved 2023-07-19.
  6. ^ "Researchers Watch As Our Brains Turn Sounds Into Words". NPR.
  7. ^ "Really? Really. How Our Brains Figure Out What Words Mean Based On How They're Said". NPR.
  8. ^ "The Loudness Of Vowels Helps The Brain Break Down Speech Into Syl-La-Bles". NPR.
  9. ^ Bhaya-Grossman, Ilina; Chang, Edward F. (2022-01-04). "Speech Computations of the Human Superior Temporal Gyrus". Annual Review of Psychology. 73 (1): 79–102. doi:10.1146/annurev-psych-022321-035256. ISSN 0066-4308. PMC 9447996. PMID 34672685.
  10. ^ Beck, Melinda (2012-04-23). "What Cocktail Parties Teach Us". Wall Street Journal. ISSN 0099-9660. Retrieved 2023-07-19.
  11. ^ "Your brain fills gaps in your hearing without you realising". New Scientist. Retrieved 2023-07-19.
  12. ^ "2022 NAS Awards Recipients Announced".