Whole brain emulation (WBE) or mind uploading (sometimes called "mind copying" or "mind transfer") is the hypothetical process of scanning the mental state (including long-term memory and the "self") of a particular brain and copying it to a computational device, such as a digital, analog, quantum-based or software-based artificial neural network. The computational device could then run a simulation model of the brain's information processing, such that it would respond in essentially the same way as the original brain (i.e., be indistinguishable from the brain for all relevant purposes) and experience having a conscious mind.
Mind uploading may potentially be accomplished by either of two methods: Copy-and-Transfer or gradual replacement of neurons. In the case of the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, and then by copying, transferring, and storing that information state into a computer system or another computational device. The simulated mind could be within a virtual reality or simulated world, supported by an anatomic 3D body simulation model. Alternatively, the simulated mind could reside in a computer that is inside (or connected to) a (not necessarily humanoid) robot or a biological body in real life.
Among some futurists and within the transhumanist movement, mind uploading is treated as an important proposed life extension technology. Some believe mind uploading is humanity's current best option for preserving the identity of the species, as opposed to cryonics. Another aim of mind uploading is to provide a permanent backup to our "mind-file", and a means for functional copies of human minds to survive a global disaster or interstellar space travel. Whole brain emulation is discussed by some futurists as a "logical endpoint" of the computational neuroscience and neuroinformatics fields, both of which concern brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI. Computer-based intelligence such as an upload could think much faster than a biological human even if it were no more intelligent. A large-scale society of uploads might, according to futurists, give rise to a technological singularity, meaning a sudden decrease in the time constant of technology's exponential development. Mind uploading is a central conceptual feature of numerous science fiction novels and films.
Substantial mainstream research in related areas is being conducted in animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains. According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are currently under active development; they concede, however, that others remain highly speculative, though still within the realm of engineering possibility. Neuroscientist Randal Koene has formed a nonprofit organization called Carbon Copies to promote mind uploading research.
- 1 Overview
- 2 Theoretical benefits and applications
- 3 Relevant technologies and techniques
- 4 Issues
- 5 Advocates
- 6 Skeptics
- 7 See also
- 8 References
- 9 External links
The human brain contains about 86 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. The established neuroscientific consensus is that the human mind is largely an emergent property of the information processing of this neural network.
Importantly, neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:
"Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality."
Eminent computer scientists and neuroscientists have predicted that specially programmed computers will be capable of thought and even attain consciousness, including Koch and Tononi, Douglas Hofstadter, Jeff Hawkins, Marvin Minsky, Randal A. Koene, and Rodolfo Llinas.
Such an artificial intelligence capability might provide a computational substrate necessary for uploading.
However, even though uploading is dependent upon such a general capability, it is conceptually distinct from general forms of AI in that it results from dynamic reanimation of information derived from a specific human mind so that the mind retains a sense of historical identity (other forms are possible but would compromise or eliminate the life-extension feature generally associated with uploading). The transferred and reanimated information would become a form of artificial intelligence, sometimes called an infomorph or "noömorph".
Many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations. Using these models, some have estimated that uploading may become possible within decades if trends such as Moore's law continue.
Theoretical benefits and applications
"Immortality" or backup
In theory, if the information and processes of the mind can be disassociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby – from a purely mechanistic perspective – reducing or eliminating "mortality risk" of such information. This general proposal appears to have been first made in the biomedical literature in 1971 by biogerontologist George M. Martin of the University of Washington.
An "uploaded astronaut" is the application of mind uploading to human spaceflight. An uploaded astronaut would consist of human mental content transferred or copied to a space humanoid robot or a spacecraft's data storage device. This would eliminate the harm caused to the human body by a zero-gravity environment, the vacuum of space, and cosmic radiation, since both a humanoid robot and a spacecraft can be made more resistant to such conditions than a biological entity, permitting longer and farther voyages through outer space than crewed spaceflight. Furthermore, an uploaded astronaut may not require a large spacecraft, so craft at the scale of the StarChip might suffice. Alien uploaded astronauts are conceivable as well. In science fiction, Charles Stross's Accelerando features a can-sized starship that visits a nearby star system with an "e-crew" of 63 uploaded astronauts.
Relevant technologies and techniques
The focus of mind uploading, in the case of copy-and-transfer, is on data acquisition, rather than data maintenance of the brain. A set of approaches known as loosely coupled off-loading (LCOL) may be used in the attempt to characterize and copy the mental contents of a brain. The LCOL approach may take advantage of self-reports, life-logs and video recordings that can be analyzed by artificial intelligence. A bottom-up approach may instead focus on the specific resolution and morphology of neurons and on their spike times, the times at which neurons produce action potential responses.
Advocates of mind uploading point to Moore's law to support the notion that the necessary computing power is expected to become available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious.
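The advocates' extrapolation can be made concrete with a small sketch. The figures here — a 10^18 FLOPS target for a spiking-network-level emulation, a 10^15 FLOPS starting point, and an 18-month doubling time — are illustrative assumptions, not measured data:

```python
# Illustrative sketch of the Moore's-law-style extrapolation that advocates
# make. The target, starting capacity, and doubling time are all assumptions.
import math

def years_until(target_flops, current_flops, doubling_time_years=1.5):
    """Years until target_flops is reached under exponential growth."""
    if current_flops >= target_flops:
        return 0.0
    doublings = math.log2(target_flops / current_flops)
    return doublings * doubling_time_years

# e.g. from an assumed 1e15 FLOPS today to a 1e18 FLOPS spiking-network target:
print(round(years_until(1e18, 1e15), 1))  # ~15 years at an 18-month doubling
```

The point of the sketch is only that the conclusion is highly sensitive to the assumed doubling time and target, which is exactly why critics call the argument difficult to quantify.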
Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron.
In 2004, Henry Markram, lead researcher of the "Blue Brain Project", stated that "it is not [their] goal to build an intelligent neural network", based solely on the computational demands such a project would have.
It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today.
Five years later, after the successful simulation of part of a rat brain, the same scientist was far bolder and more optimistic. In 2009, as director of the Blue Brain Project, he claimed that
A detailed, functional artificial human brain can be built within the next 10 years
Required computational capacity depends strongly on the chosen level of simulation model scale:

| Level of simulation | CPU demand (FLOPS) | Memory demand (TB) | $1 million supercomputer (earliest year of making) |
| --- | --- | --- | --- |
| Analog network population model | 10^15 | 10^2 | 2008 |
| Spiking neural network | 10^18 | 10^4 | 2019 |
| States of protein complexes | 10^27 | 10^8 | 2052 |
| Distribution of complexes | 10^30 | 10^9 | 2063 |
| Stochastic behavior of single molecules | 10^43 | 10^14 | 2111 |
Simulation model scale
Since the function of the human mind, and how it might arise from the working of the brain's neural network, are poorly understood issues, mind uploading relies on the idea of neural network emulation. Rather than having to understand the high-level psychological processes and large-scale structures of the brain, and model them using classical artificial intelligence methods and cognitive psychology models, the low-level structure of the underlying neural network is captured, mapped and emulated with a computer system. In computer science terminology, rather than analyzing and reverse-engineering the behavior of the algorithms and data structures that reside in the brain, a blueprint of its source code is translated to another programming language. The human mind and personal identity would then, theoretically, be generated by the emulated neural network in an identical fashion to their being generated by the biological neural network.
On the other hand, a molecule-scale simulation of the brain is not expected to be required, provided that the functioning of the neurons is not affected by quantum mechanical processes. The neural network emulation approach only requires that the functioning and interaction of neurons and synapses be understood. A black-box signal-processing model of how the neurons respond to nerve impulses (electrical as well as chemical synaptic transmission) is expected to suffice.
A sufficiently complex and accurate model of the neurons is required. A traditional artificial neural network model, for example a multi-layer perceptron, is not considered sufficient. A dynamic spiking neural network model is required, reflecting the fact that a neuron fires only when its membrane potential reaches a certain level. It is likely that the model must include delays, non-linear functions and differential equations describing the relation between electrophysiological parameters such as electrical currents, voltages, membrane states (ion channel states) and neuromodulators.
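As a minimal illustration of such a dynamic spiking model, the sketch below implements a leaky integrate-and-fire neuron: a discretized differential equation drives the membrane potential, and the cell fires only on reaching a threshold. All constants are illustrative, not physiological measurements:

```python
# Minimal leaky integrate-and-fire neuron: a toy instance of the "dynamic
# spiking" models described in the text. All constants are illustrative.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-65.0):
    """Return spike times (ms) for a per-step input current trace."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential decays toward rest while integrating input
        # (a differential equation, discretized with time step dt).
        dv = (-(v - v_rest) + i_in) / tau
        v += dv * dt
        if v >= v_threshold:          # fire only on reaching threshold
            spikes.append(step * dt)
            v = v_reset               # reset after the action potential
    return spikes

spikes = simulate_lif([20.0] * 200)   # 200 ms of constant drive
print(len(spikes) > 0)                # the neuron fires repeatedly
```

Unlike a multi-layer perceptron, this model's output is a set of spike times rather than a static activation value, which is the distinction the paragraph above draws.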
Since learning and long-term memory are believed to result from strengthening or weakening the synapses via a mechanism known as synaptic plasticity or synaptic adaptation, the model should include this mechanism. The response of sensory receptors to various stimuli must also be modelled.
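A toy version of such a plasticity mechanism is the pair-based spike-timing-dependent plasticity (STDP) rule sketched below, in which a synapse strengthens when the presynaptic spike precedes the postsynaptic one and weakens otherwise; the constants are illustrative assumptions:

```python
# Toy spike-timing-dependent plasticity (STDP): one way synaptic strengthening
# and weakening is modelled. Constants (a_plus, a_minus, tau) are illustrative.
import math

def stdp_delta(pre_spike_ms, post_spike_ms, a_plus=0.01, a_minus=0.012,
               tau=20.0):
    """Weight change for a single pre/post spike pair."""
    dt = post_spike_ms - pre_spike_ms
    if dt > 0:    # pre before post: potentiation (strengthen)
        return a_plus * math.exp(-dt / tau)
    else:         # post before (or with) pre: depression (weaken)
        return -a_minus * math.exp(dt / tau)

print(stdp_delta(10.0, 15.0) > 0)   # causal pair strengthens the synapse
print(stdp_delta(15.0, 10.0) < 0)   # anti-causal pair weakens it
```

The exponential decay means widely separated spike pairs change the weight very little, capturing the idea that plasticity depends on precise relative timing.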
Furthermore, the model may have to include metabolism, i.e. how the neurons are affected by hormones and other chemical substances that may cross the blood–brain barrier. It is considered likely that the model must include currently unknown neuromodulators, neurotransmitters and ion channels. It is considered unlikely that the simulation model has to include protein interaction, which would make it computationally complex.
A digital computer simulation model of an analog system such as the brain is an approximation that introduces random quantization errors and distortion. However, the biological neurons also suffer from randomness and limited precision, for example due to background noise. The errors of the discrete model can be made smaller than the randomness of the biological brain by choosing a sufficiently high variable resolution and sample rate, and sufficiently accurate models of non-linearities. The computational power and computer memory must however be sufficient to run such large simulations, preferably in real time.
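A back-of-envelope sketch can illustrate how quantization error falls below an assumed noise floor as resolution increases; the 100 mV dynamic range and 1 mV noise floor here are assumed values for illustration only:

```python
# Sketch: quantization error of a digitized membrane potential versus an
# assumed biological noise level. The 100 mV dynamic range and 1 mV noise
# floor are illustrative assumptions, not measurements.
def quantization_step(dynamic_range_mv, bits):
    """Worst-case quantization error: half of one quantization step."""
    return dynamic_range_mv / (2 ** bits) / 2

noise_floor_mv = 1.0          # assumed intrinsic biological noise
for bits in (4, 8, 12):
    err = quantization_step(100.0, bits)
    print(bits, err, err < noise_floor_mv)
```

At 4 bits the worst-case error (about 3.1 mV) exceeds the assumed noise floor; by 8 bits (about 0.2 mV) it is already well below it, which is the sense in which "sufficiently high variable resolution" makes discretization error negligible relative to biological randomness.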
Scanning and mapping scale of an individual
When modelling and simulating the brain of a specific individual, a brain map or connectivity database showing the connections between the neurons must be extracted from an anatomic model of the brain. For whole brain simulation, this network map should show the connectivity of the whole nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of a small sample of tissue from a mouse brain including synaptic details is possible as of 2010.
However, if short-term memory and working memory include prolonged or repeated firing of neurons, as well as intra-neural dynamic processes, the electrical and chemical signal state of the synapses and neurons may be hard to extract. The uploaded mind may then perceive a memory loss of the events and mental processes immediately before the time of brain scanning.
A full brain map has been estimated to occupy less than 2 × 10^16 bytes (20,000 TB) and would store the addresses of the connected neurons, the synapse type and the synapse "weight" for each of the brain's 10^15 synapses. However, the biological complexities of true brain function (e.g. the epigenetic states of neurons, protein components with multiple functional states, etc.) may preclude an accurate prediction of the volume of binary data required to faithfully represent a functioning human mind.
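The arithmetic behind this estimate can be checked directly; the 20-byte per-synapse record below is inferred from the quoted figures, not taken from any specification:

```python
# Back-of-envelope check of the storage estimate quoted in the text:
# 1e15 synapses at an inferred 20 bytes each (address, type, weight).
synapses = 1e15
bytes_per_synapse = 20                # inferred from 2e16 bytes / 1e15 synapses
total_bytes = synapses * bytes_per_synapse

print(total_bytes)            # 2e+16 bytes
print(total_bytes / 1e12)     # 20000.0 TB, matching the ~20,000 TB quoted
```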
A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer, capturing the structure of the neurons and their interconnections; for frozen samples at nano-scale, this requires a cryo-ultramicrotome. The exposed surface of frozen nerve tissue would be scanned and recorded, and then the surface layer of tissue removed. While this would be a very slow and labor-intensive process, research is currently underway to automate the collection and microscopy of serial sections. The scans would then be analyzed, and a model of the neural net recreated in the system that the mind was being uploaded into.
There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique. However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron's cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods that could then be read via confocal laser scanning microscopy. However, as the physiological genesis of 'mind' is not currently known, this method may not be able to access all of the necessary biochemical information to recreate a human brain with sufficient fidelity.
It may also be possible to create functional 3D maps of the brain activity, using advanced neuroimaging technology, such as functional MRI (fMRI, for mapping change in blood flow), magnetoencephalography (MEG, for mapping of electrical currents), or combinations of multiple methods, to build a detailed three-dimensional model of the brain using non-invasive and non-destructive methods. Today, fMRI is often combined with MEG for creating functional maps of human cortex during more complex cognitive tasks, as the methods complement each other. Even though current imaging technology lacks the spatial resolution needed to gather the information needed for such a scan, important recent and future developments are predicted to substantially improve both spatial and temporal resolutions of existing technologies.
There is ongoing work in the field of brain simulation, including partial and whole simulations of some animals. For example, the C. elegans roundworm, Drosophila fruit fly, and mouse have all been simulated to various degrees.
The Blue Brain Project by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne, Switzerland is an attempt to create a synthetic brain by reverse-engineering mammalian brain circuitry.
Underlying the concept of "mind uploading" (more accurately "mind transferring") is the broad philosophy that consciousness lies within the brain's information processing and is in essence an emergent feature that arises from large neural network high-level patterns of organization, and that the same patterns of organization can be realized in other processing devices. Mind uploading also relies on the idea that the human mind (the "self" and the long-term memory), just like non-human minds, is represented by the current neural network paths and the weights of the brain's synapses rather than by a dualistic and mystic soul and spirit. The mind or "soul" can be defined as the information state of the brain, and is immaterial only in the same sense as the information content of a data file or the state of computer software currently residing in the computer's working memory. Data specifying the information state of the neural network can be captured and copied as a "computer file" from the brain and re-implemented into a different physical form. This is not to deny that minds are richly adapted to their substrates. An analogy to the idea of mind uploading is to copy the temporary information state (the variable values) of a computer program from the computer memory to another computer and continue its execution. The other computer may perhaps have different hardware architecture but emulates the hardware of the first computer.
I would be glad to know... whether when my brain has lost its original structure, and when some hundred years after the same materials are fabricated so curiously as to become an intelligent being, whether, I say that being will be me; or, if, two or three such beings should be formed out of my brain; whether they will all be me, and consequently one and the same intelligent being.
A considerable portion of transhumanists and singularitarians place great hope in the belief that they may become immortal by creating one or many non-biological functional copies of their brains, thereby leaving their "biological shell". However, the philosopher and transhumanist Susan Schneider claims that, at best, uploading would create a copy of the original person's mind. Schneider agrees that consciousness has a computational basis, but argues that this does not mean we can upload and survive. According to her views, "uploading" would probably result in the death of the original person's brain, while only outside observers could maintain the illusion of the original person still being alive. For it is implausible to think that one's consciousness would leave one's brain and travel to a remote location; ordinary physical objects do not behave this way. Ordinary objects (rocks, tables, etc.) are not simultaneously here and somewhere else. At best, a copy of the original mind is created. Others have argued against such conclusions. For example, Buddhist transhumanist James Hughes has pointed out that this consideration only goes so far: if one believes the self is an illusion, worries about survival are not reasons to avoid uploading. And Keith Wiley has presented an argument wherein all resulting minds of an uploading procedure are granted equal primacy in their claim to the original identity, such that survival of the self is determined retroactively from a strictly subjective position.
Another potential consequence of mind uploading is that the decision to "upload" may then create a mindless symbol manipulator instead of a conscious mind (see philosophical zombie). Are we to assume that an upload is conscious if it displays behaviors that are highly indicative of consciousness? Are we to assume that an upload is conscious if it verbally insists that it is conscious? Could there be an absolute upper limit in processing speed above which consciousness cannot be sustained? The mystery of consciousness precludes a definitive answer to this question. Numerous scientists, including Kurzweil, strongly believe that determining whether a separate entity is conscious (with 100% confidence) is fundamentally unknowable, since consciousness is inherently subjective (see solipsism). Regardless, some scientists strongly believe consciousness is the consequence of computational processes which are substrate-neutral. On the contrary, numerous scientists believe consciousness may be the result of some form of quantum computation dependent on substrate (see quantum mind).
In light of uncertainty on whether to regard uploads as conscious, Sandberg proposes a cautious approach:
Principle of assuming the most (PAM): Assume that any emulated system could have the same mental properties as the original system and treat it correspondingly.
It is argued that if a computational copy of one's mind did exist, it would be impossible for one to recognize it as their own mind. The argument for this stance is the following: for a computational mind to recognize an emulation of itself, it must be capable of deciding whether two Turing machines (namely, itself and the proposed emulation) are functionally equivalent. This task is uncomputable due to the undecidability of equivalence, thus there cannot exist a computational procedure in the mind that is capable of recognizing an emulation of itself.
Copying vs. moving
A philosophical issue with mind uploading is whether the newly generated digital mind is really the "same" sentience, or simply an exact copy with the same memories and personality. This issue is especially obvious when the original remains essentially unchanged by the procedure, thereby resulting in a copy which could potentially have rights separate from those of the unaltered original.
Most projected brain scanning technologies, such as serial sectioning of the brain, would necessarily be destructive, and the original brain would not survive the brain scanning procedure. But if it can be kept intact, the computer-based consciousness could be a copy of the still-living biological person. It is in that case implicit that copying a consciousness could be as feasible as literally moving it into one or several copies, since these technologies generally involve simulation of a human brain in a computer of some sort, and digital files such as computer programs can be copied precisely. It is assumed that once the versions are exposed to different sensory inputs, their experiences would begin to diverge, but all their memories up until the moment of the copying would remain the same.
The problem is made even more apparent through the possibility of creating a potentially infinite number of initially identical copies of the original person, which would of course all exist simultaneously as distinct beings with their own emotions and thoughts. The most parsimonious view of this phenomenon is that the two (or more) minds would share memories of their past but from the point of duplication would simply be distinct minds.
Toward the goal of resolving the copy-vs-move debate, some have argued for a third way of conceptualizing the process, which is described by such terms as split and divergence. The distinguishing feature of this third terminological option is that while moving implies that a single instance relocates in space and while copying invokes problematic connotations (a copy is often denigrated in status relative to its original), the notion of a split better illustrates that some kinds of entities might become two separate instances, but without the imbalanced associations assigned to originals and copies, and that such equality may apply to minds.
Depending on computational capacity, the simulation's subjective time may run faster or slower than elapsed physical time, so that the simulated mind would perceive the physical world as running in slow motion or fast motion respectively, while biological persons would see the simulated mind running in fast or slow motion respectively.
A brain simulation can be started, paused, backed-up and rerun from a saved backup state at any time. The simulated mind would in the latter case forget everything that has happened after the instant of backup, and perhaps not even be aware that it is repeating itself. An older version of a simulated mind may meet a younger version and share experiences with it.
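The pause/backup/rerun behavior described here is ordinary program-state checkpointing. In the toy sketch below, the "mind" is just a dictionary of state; the point is only that a copy restored from a backup resumes from the snapshot, unaware of later events:

```python
# Toy illustration of checkpointing a running simulation's state. The "mind"
# here is just a counter and an event log; the analogy, not the model, is
# the point.
import copy

state = {"tick": 0, "memories": []}

def step(s, event):
    """Advance the simulation by one step, recording one event."""
    s["tick"] += 1
    s["memories"].append(event)

step(state, "breakfast")
backup = copy.deepcopy(state)     # back up the full information state
step(state, "lunch")              # the original continues to run

restored = copy.deepcopy(backup)  # rerun from the saved state
print("lunch" in state["memories"])      # True: the original remembers
print("lunch" in restored["memories"])   # False: the restored copy does not
```

The restored copy "forgets" everything after the backup instant, exactly as the paragraph above describes for a simulated mind rerun from a saved state.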
Ethical and legal implications
The process of developing emulation technology raises ethical issues related to animal welfare and artificial consciousness. The neuroscience required to develop brain emulation would require animal experimentation, first on invertebrates and then on small mammals, before moving on to humans. In some cases the animals would only need to be euthanized in order to extract, slice, and scan their brains, but in others, behavioral and in vivo measures would be required, which might cause pain to living animals.
In addition, the resulting animal emulations themselves might suffer, depending on one's views about consciousness. Bancroft argues for the plausibility of consciousness in brain simulations on the basis of the "fading qualia" thought experiment of David Chalmers. He then concludes:
If, as I argue above, a sufficiently detailed computational simulation of the brain is potentially operationally equivalent to an organic brain, it follows that we must consider extending protections against suffering to simulations.
It might help reduce emulation suffering to develop virtual equivalents of anaesthesia, as well as to omit processing related to pain and/or consciousness. However, some experiments might require a fully functioning and suffering animal emulation. Animals might also suffer by accident due to flaws and lack of insight into what parts of their brains are suffering. Questions also arise regarding the moral status of partial brain emulations, as well as creating neuromorphic emulations that draw inspiration from biological brains but are built somewhat differently.
Brain emulations could be erased by computer viruses or malware, without any need to destroy the underlying hardware. This may make assassination easier than it is for physical humans. An attacker might also seize the emulation's computing power for their own use.
Many questions arise regarding the legal personhood of emulations. Would they be given the rights of biological humans? If a person makes an emulated copy of himself and then dies, does the emulation inherit his property and official positions? Could the emulation ask to "pull the plug" when its biological version was terminally ill or in a coma? Would it help to treat emulations as adolescents for a few years so that the biological creator would maintain temporary control? Would criminal emulations receive the death penalty, or would they be given forced data modification as a form of "rehabilitation"? Could an upload have marriage and child-care rights?
If simulated minds were realized and assigned rights of their own, it might be difficult to ensure the protection of "digital human rights". For example, social science researchers might be tempted to secretly expose simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same minds are exposed (serially or simultaneously) to different test conditions.
Political and economic implications
Emulations could create a number of conditions that might increase the risk of war, including inequality, changes in power dynamics, a possible technological arms race to build emulations first, first-strike advantages, strong loyalty and willingness to "die" among emulations, and triggers for racist, xenophobic, and religious prejudice. If emulations ran much faster than humans, there might not be enough time for human leaders to make wise decisions or negotiate. It is possible that humans would react violently against the growing power of emulations, especially if emulations depressed human wages. Alternatively, emulations might not trust each other, and even well-intentioned defensive measures might be interpreted as offensive.
Emulation timelines and AI risk
There are very few feasible technologies that humans have refrained from developing. The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, so cutting off funding doesn't seem to be an option. If we assume that emulation technology will arrive, a question becomes whether we should accelerate or slow its advance.
Arguments for speeding up brain-emulation research:
- If neuroscience is the bottleneck on brain emulation rather than computing power, emulation advances may be more erratic and unpredictable, depending on when new scientific discoveries happen. Limited computing power would mean the first emulations would run slower and so would be easier to adapt to, and there would be more time for the technology to transition through society.
- Improvements in manufacturing, 3D printing, and nanotechnology may accelerate hardware production, which could increase the "computing overhang" from excess hardware relative to neuroscience.
- If one AI-development group had a lead in emulation technology, it would have more subjective time to win an arms race to build the first superhuman AI. Because it would be less rushed, it would have more freedom to consider AI risks.
Arguments for slowing down brain-emulation research:
- Greater investment in brain emulation and associated cognitive science might enhance the ability of artificial intelligence (AI) researchers to create "neuromorphic" (brain-inspired) algorithms, such as neural networks, reinforcement learning, and hierarchical perception. This could accelerate risks from uncontrolled AI. Participants at a 2011 AI workshop estimated an 85% probability that neuromorphic AI would arrive before brain emulation. This was based on the idea that brain emulation would require understanding some brain components, and it would be easier to tinker with these than to reconstruct the entire brain in its original form. By a very narrow margin, the participants on balance leaned toward the view that accelerating brain emulation would increase expected AI risk.
- Waiting might give society more time to think about the consequences of brain emulation and develop institutions to improve cooperation.
Emulations might be easier to control than de novo AI because
- We understand better human abilities, behavioral tendencies, and vulnerabilities, so control measures might be more intuitive and easier to plan for.
- Emulations could more easily inherit human motivations.
- Emulations are harder to manipulate than de novo AI, because brains are messy and complicated; this could reduce risks of their rapid takeoff. Also, emulations may be bulkier and require more hardware than AI, which would also slow the speed of a transition. Unlike AI, an emulation wouldn't be able to rapidly expand beyond the size of a human brain. Emulations running at digital speeds would have less intelligence differential vis-à-vis AI and so might more easily control AI.
As a counterpoint to these considerations, Bostrom notes some downsides:
- Even if we better understand human behavior, the evolution of emulation behavior under self-improvement might be much less predictable than the evolution of safe de novo AI under self-improvement.
- Emulations may not inherit all human motivations. Perhaps they would inherit our darker motivations or would behave abnormally in the unfamiliar environment of cyberspace.
- Even if the takeoff toward emulations is slow, there would still be a second transition to de novo AI later on. Two intelligence explosions may mean more total risk.
Ray Kurzweil, director of engineering at Google, has long claimed that people will be able to "upload" their entire brains to computers and become "digitally immortal" by 2045. He repeated this prediction, for example, in his 2013 speech at the Global Futures 2045 International Congress in New York, an event organized around a similar set of beliefs. Mind uploading has also been advocated by a number of researchers in neuroscience and artificial intelligence, such as the late Marvin Minsky. In 1993, Joe Strout created a small web site called the Mind Uploading Home Page and began advocating the idea in cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites, including MindUploading.org, run by Randal A. Koene, who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure which could eventually save countless lives.
Many transhumanists look forward to the development and deployment of mind uploading technology, with transhumanists such as Nick Bostrom predicting that it will become possible within the 21st century due to technological trends such as Moore's law.
Michio Kaku, in collaboration with Science, hosted a documentary, Sci Fi Science: Physics of the Impossible, based on his book Physics of the Impossible. Episode four, titled "How to Teleport", mentions that mind uploading via techniques such as quantum entanglement and whole brain emulation, using an advanced MRI machine, may enable people to be transported across vast distances at near light speed.
The book Beyond Humanity: CyberEvolution and Future Minds by Gregory S. Paul & Earl D. Cox, is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer. Richard Doyle's Wetwares: Experiments in PostVital Living deals extensively with uploading from the perspective of distributed embodiment, arguing for example that humans are currently part of the "artificial life phenotype". Doyle's vision reverses the polarity on uploading, with artificial life forms such as uploads actively seeking out biological embodiment as part of their reproductive strategy.
Kenneth D. Miller, a professor of neuroscience at Columbia University and a co-director of the Center for Theoretical Neuroscience, has raised doubts about the practicality of mind uploading.
- BRAIN Initiative
- Brain transplant
- Democratic transhumanism
- Human Brain Project
- Isolated brain
- Ship of Theseus—thought experiment asking whether an object that has had all of its parts replaced remains fundamentally the same object
- Simulation hypothesis
- Technologically enabled telepathy
- Turing test
- A framework for approaches to transfer of a mind's substrate, Sim Bamford
- Coalescing minds: brain uploading-related group mind scenarios
- Sandberg, Anders; Bostrom, Nick (2008). Whole Brain Emulation: A Roadmap (PDF). Technical Report #2008‐3. Future of Humanity Institute, Oxford University. Retrieved 5 April 2009.
The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain.
- Hayworth, Kenneth. "Will You Upload Your Mind?". Galactic Public Archives. Retrieved Oct 17, 2014.
- Goertzel, Ben (December 2007). "Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil". Artificial Intelligence. 171 (18, Special Review Issue): 1161–1173. doi:10.1016/j.artint.2007.10.011.
- Kay KN, Naselaris T, Prenger RJ, Gallant JL (March 2008). "Identifying natural images from human brain activity". Nature. 452 (7185): 352–5. Bibcode:2008Natur.452..352K. doi:10.1038/nature06713. PMID 18322462.
- Koch, Christof; Tononi, Giulio (2008). "Can machines be conscious?". IEEE Spectrum. 45 (6): 55. doi:10.1109/MSPEC.2008.4531463.
- "Tech Luminaries Address Singularity". ieee.org.
- Marvin Minsky, Conscious Machines, in 'Machinery of Consciousness', Proceedings, National Research Council of Canada, 75th Anniversary Symposium on Science in Society, June 1991.
- "MindUploading.org". minduploading.org.
- Llinas, R (2001). I of the Vortex: From Neurons to Self. Cambridge: MIT Press. pp. 261–262. ISBN 0-262-62163-0.
- Ray Kurzweil (February 2000). "Live Forever–Uploading The Human Brain...Closer Than You Think". Psychology Today.
- Martin GM (1971). "Brief proposal on immortality: an interim solution". Perspectives in Biology and Medicine. 14 (2): 339. doi:10.1353/pbm.1971.0015. PMID 5546258.
- Prisco, Giulio (12 December 2012). "Uploaded e-crews for interstellar missions". kurzweilai.net. Retrieved 31 July 2015.
- Prisco, Giulio (2012). "Why we should send uploaded astronauts on interstellar missions". io9. Retrieved 31 July 2015.
- "Substrate-Independent Minds - Carboncopies.org Foundation". carboncopies.org.
- Roadmap p.11 "Given the complexities and conceptual issues of consciousness we will not examine criteria 6abc, but mainly examine achieving criteria 1‐5."
- "Bluebrain - EPFL". epfl.ch. 19 May 2015.
- Blue Brain Project FAQ, 2004
- BBC News, Artificial brain '10 years away'
- "New imaging method developed at Stanford reveals stunning details of brain connections". Stanford Medicine.
- Merkle, R. (1989). Large Scale Analysis of Neural Structures. CSL-89-10, November 1989. [P89-00173]
- ATLUM Project
- Hagmann, Patric; Cammoun, Leila; Gigandet, Xavier; Meuli, Reto; Honey, Christopher J.; Wedeen, Van J.; Sporns, Olaf; Friston, Karl J. (2008). Friston, Karl J., ed. "Mapping the Structural Core of Human Cerebral Cortex". PLoS Biology. 6 (7): e159. doi:10.1371/journal.pbio.0060159. PMID 18597554.
- Glover, Paul; Bowtell, Richard (2009). "Medical imaging: MRI rides the wave". Nature. 457 (7232): 971–2. Bibcode:2009Natur.457..971G. doi:10.1038/457971a. PMID 19225512.
- Franco Cortese (June 17, 2013). "Clearing Up Misconceptions About Mind Uploading". h+ Media.
- Yoonsuck Choe; Jaerock Kwon; Ji Ryang Chung (2012). "Time, Consciousness, and Mind Uploading" (PDF). International Journal of Machine Consciousness. 4 (1): 257. doi:10.1142/S179384301240015X.
- "The Duplicates Paradox (The Duplicates Problem)". benbest.com.
- Schneider, Susan (March 2, 2014). "The Philosophy of 'Her'". The New York Times. Retrieved May 7, 2014.
- Hughes, James (2013). Personal Identity and Uploading. Wiley.
- Wiley, Keith (March 20, 2014). "Response to Susan Schneider's "The Philosophy of 'Her'"". H+Magazine. Retrieved 7 May 2014.
- Wiley, Keith (Sep 2014). A Taxonomy and Metaphysics of Mind-Uploading (1st ed.). Humanity+ Press and Alautun Press. ISBN 978-0692279847. Retrieved 16 October 2014.
- Michael Hauskeller. "My Brain, my Mind, and I: Some Philosophical Problems of Mind-Uploading". academia.edu.
- George Dvorsky. "You Might Never Upload Your Brain Into a Computer". io9.
- Brandon Oto (2011), Seeking normative guidelines for novel future forms of consciousness (PDF), University of California, Santa Cruz
- Ben Goertzel (2012). "When Should Two Minds Be Considered Versions of One Another?" (PDF).
- Sally Morem (April 21, 2013). "Goertzel Contra Dvorsky on Mind Uploading". h+ Media.
- Martine Rothblatt (2012). "The Terasem Mind Uploading Experiment" (PDF). International Journal of Machine Consciousness. World Scientific Publishing Company. 4 (1): 141–158. doi:10.1142/S1793843012400070.
- Patrick D. Hopkins (2012). "Why Uploading Will Not Work, or, the Ghosts Haunting Transhumanism" (PDF). International Journal of Machine Consciousness. World Scientific Publishing Company. 4 (1). doi:10.1142/S179384301250014X.
- Anders Sandberg (14 Apr 2014). "Ethics of brain emulations". Journal of Experimental & Theoretical Artificial Intelligence. 26 (3): 439–457. doi:10.1080/0952813X.2014.895113. Retrieved 29 June 2014.
- Jack McKay Fletcher (December 2015). "A computational mind cannot recognize itself". Technoetic Arts. 13 (3): 261–267. doi:10.1386/tear.13.3.261_1.
- Tyler D. Bancroft (Aug 2013). "Ethical Aspects of Computational Neuroscience". Neuroethics. 6 (2): 415–418. doi:10.1007/s12152-012-9163-7. ISSN 1874-5504.
- Peter Eckersley; Anders Sandberg (Dec 2013). "Is Brain Emulation Dangerous?". Journal of Artificial General Intelligence. 4 (3): 170–194. doi:10.2478/jagi-2013-0011. ISSN 1946-0163.
- Kamil Muzyka (Dec 2013). "The Outline of Personhood Law Regarding Artificial Intelligences and Emulated Human Entities". Journal of Artificial General Intelligence. 4 (3): 164–169. doi:10.2478/jagi-2013-0010. ISSN 1946-0163.
- Shulman, Carl; Anders Sandberg (2010). Mainzer, Klaus, ed. "Implications of a Software-Limited Singularity" (PDF). ECAP10: VIII European Conference on Computing and Philosophy. Retrieved 17 May 2014.
- Hanson, Robin (26 Nov 2009). "Bad Emulation Advance". Overcoming Bias. Retrieved 28 June 2014.
- Muehlhauser, Luke; Anna Salamon (2012). "Intelligence Explosion: Evidence and Import". In Amnon Eden; Johnny Søraker; James H. Moor; Eric Steinhart. Singularity Hypotheses: A Scientific and Philosophical Assessment (PDF). Springer.
- Anna Salamon; Luke Muehlhauser (2012). "Singularity Summit 2011 Workshop Report" (PDF). Machine Intelligence Research Institute. Retrieved 28 June 2014.
- Bostrom, Nick (2014). "Ch. 14: The strategic picture". Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN 978-0199678112.
- "We'll be uploading our entire MINDS to computers by 2045 and our bodies will be replaced by machines within 90 years, Google expert claims". Daily Mail. London.
- "We'll be uploading our entire minds to computers by 2045 and our bodies will be replaced by machines within 90 years, Google expert claims - KurzweilAI". kurzweilai.net.
- "Mind uploading & digital immortality may be reality by 2045, futurists say - KurzweilAI". kurzweilai.net.
- Will You Ever Be Able to Upload Your Brain?, www.nytimes.com
- Adee, Sally (June 2008). "Reverse engineering the brain". IEEE Spectrum. 45 (6): 51–55. doi:10.1109/MSPEC.2008.4531462.
- Reverse-engineer the brain from National Academy of Engineering
- The Duplicates Paradox by Ben Best; theories about the problem of personal continuity
- Joe Strout's Mind Uploading Home Page
- "The Day You Discard Your Body" by Marshall Brain
- Winter of our Consciousness, article in The Future Fire 3
- Reality 3.0: Uploading, Hypermediation & Paradise Engineering by Paul Hughes
- Transhumanist writings on uploading from the Foresight Institute