User:Lucien86


Lucien 86 : A Work in Progress - Intro Section

-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - LUCIEN - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --
People talk about the Technological Singularity. Working on Strong AI,
I have been on the other side of the technological singularity for more than 20 years.

Below the speed of light Relativity is one of the most accurate theories in physics..
Above the speed of light Relativity is one of the least accurate theories in physics..
Astrology beats it on empiricism, Flat Earth Theory beats it on logic.
Choose Existence - choose an FTL absolute frame universe model.
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - LUCIEN - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --


God is like a mirror. Most of what you see will be reflections of yourself.
-- -- -- -- -- -- -- -- -- -- -- -- -- --- - QUINTESSENCE - --- -- -- -- -- -- -- -- -- -- -- -- -- --
In hyperspace the demons don’t scream at you - (the demons love you)
In hyperspace the angels don’t cry - - - - - - - - - (the angels love too)
In hyperspace God doesn’t love you - - - - - - - - - (yes he does)
In hyperspace God doesn’t care - - - - - - - - - - (that depends on you)

In hyperspace God can see your heart - - - - - - (your heart is pure)
In hyperspace your heart is always black - - - - (the black of sleep)
In hyperspace the spear is your guide - - - - - (the spear of ultimate truth)
In hyperspace God doesn’t care - - - - - - - - - - (God loves you Green Alpha Blue)


Under the catechism Mitil recognised an old version of the ancient spacers' hymn.
The words told of travel in hyperspace; he went through them line by line.

Angels and demons - Jump Itself, (across the FTL barrier)
God doesn’t love you - Transience barrier, (cut off from the universe)
God doesn’t care - There is no fate, (you outrun fate)
God can see your heart - Harmony engine, (machine god)
Your heart is always black - the endless black, (no stars)
The spear - the jump ultimate, (the ultimate is the tip of the spear)
Green Alpha Blue - and the old call sign for final jump.
-- -- -- -- -- -- -- -- -- -- -- -- -- --- - QUINTESSENCE - --- -- -- -- -- -- -- -- -- -- -- -- -- --


Attention, in 2013 this page came under question and may at some time need to be deleted.
The material will continue under development and will eventually appear elsewhere - probably in a nest of private wikis... or a book.
Now over three years post deletion debate and I'm still completely unsure where I am. I need to set up a wiki/web site but haven't found the correct one; none seem to quite fit - too much to do, too many options and too much work.... A dozen excuses - procrastination, setting up a business, designing marketing, money money money, legal and accountancy problems... (Cowardice..) I am working on this, but it may take some time....


-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - LUCIEN - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --
WARNING : This document is written in an active draft format and is subject to constant change.
WARNING : This document contains significant Original Research, as such it is frequently modified to update.
WARNING : Opinions and general philosophy are subject to instant change - pending further research.
WARNING : The author has verbal diarrhea and writer's block - not a happy combination. (Much of this document is unintentionally about fighting writer's block.)
NOTICE : I have a particularly bad habit of writing when very tired, which can lead to some especially bizarre and odd writing; I correct these things as I find them. : 0
DANGER : Sometimes in the past I spewed out a lot of stuff involving 'God', 'Satan', the 'Psychic', 'Aliens', and other 'stuff' - at one time I was heavily brainwashed by a Christian cult and this is merely me purging out that garbage.. (Should all be deleted in any current version.)
DANGER! : Some articles are much better than others, some are very poorly written. Accidental Mazes. :)  :D
INFORMATION : [Also Welcome to my EDIT Window. I use square brackets for edit comments and anything that is outside the writing]
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - LUCIEN - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --


-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - QUINTESSENCE - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --

I was walking along the road with two friends.
The sun was setting.
I felt a breath of melancholy -
Suddenly the sky turned blood-red.
I stopped, and leaned against the railing, deathly tired -
looking out across the flaming clouds that hung like blood and a sword
over the blue-black fjord and town.
My friends walked on - I stood there, trembling with fear.
And I sensed a great, infinite scream pass through nature.
Edvard Munch 22 Jan 1892

-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - QUINTESSENCE - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --


-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - QUINTESSENCE - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --
One Ugly Truth is Worth a Million Beautiful Lies..

  • The FTL is real. The universe is very big and (compared to space) the speed of light is very slow.
  • 99.999..% of space time can only exist as an FTL structure so without an FTL universe there is no universe.
  • The Relativity of Simultaneity effectively rules out an FTL universe - or rules out Special Relativity as the primary theory of mechanics.
  • < -- -- -- -- -- -- --
  • A dimension is just a line. So any line is also a dimension. This profound statement can explain dilation and space time curvature without the need for dimensional time and so breaks the fundamental experimental argument for dimensional time.
  • In the FTL universe time is purely 'point-like' on all normal scales.
  • In the FTL universe time does become dimension-like below the quantum limit on quantum scales (think atoms)..
  • < -- -- -- -- -- -- --
  • In the FTL universe we replace the relativity model with an FTL 'absolute frame' with a stable FTL geometry. (Replacing 4D space-time with a much simpler 3D space-point-time.)
  • In the FTL universe dimensional time exists on quantum scales and so the General Relativity model of gravity still works. (Space time curvature becomes compression, and general relativity and quantum mechanics are now directly compatible.)
  • In the FTL universe the FTL and STL regions intersect at the speed of light, and the wave behaviour of light is caused entirely by FTL light interaction.
  • FTL Light interaction allows the FTL universe to be observed directly, and shows that the general FTL space is flat - light travels in straight lines.

-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - QUINTESSENCE - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --


Mazes and Puzzles and Revelations..
The problem with real rationalists is that they go where the evidence takes them. Sometimes this takes them to places that contradict those who only think they are rationalists. Logic ... the truth can hurt.
Sitting at the pinnacle of transcendence I can grasp it all, but the first word of real truth uttered will block me from you forever. All I can do is peddle the lies carefully and hope that some small part of the truth will eventually leak through..

When you look on the human mind you look on God directly. When you can understand and engineer God then you can become the Master of Space and Time. Unfortunately God is an energy efficient thermodynamic system not a moral arbiter. Anyone who tells you he is, is lying. The road of progress is often a road paved in blood.
Freedom (without truth) is Slavery.


Blood is gravy. This is a simple statement of fact. Human flesh once on the plate is no different to that of any other animal. We forget this and we lose the true heart of our humanity. We forget that every bite of flesh we eat is another death. We allow meat to become mere product and the gap between humans and other animals gets wider and wider. We live in a society where most people are alienated from themselves and their basic nature and they do not even understand why.. The metaphor of cannibalism at least reminds us of what we once were.

The human animal in nature is a hunting killing machine. We forget hunting and turn the other animals into lumps of living meat, only waiting to be carved from the bone and eaten. In some deep way this is the ultimate sin. The sin where the true spiritual heart is sacrificed and replaced with banal everyday mass slaughter. All hidden from the polite gaze, and only remarked on by a final polite burp, and by quiet shameful events we hide from others. 'If God really exists then something bigger and smarter will one day eat us, and it will make the rest of us watch.' Dreams of monsters and vampires are merely expressions of this internal corrective balancing logic within the heart of the human contradiction. Animal yet not an animal. Not an animal yet an animal.
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --


A Personal Profile (A Work in Progress) : The Future Tomorrow
I, Robert Lucien, am a General Scientist, Explorer, Futurist, Writer, and Utopian - and so this page is full of futuristic utopian projects and dreams. -

I love working on impossible problems; they form an interesting kind of puzzle, the feeling of walking in a place where a million feet haven't walked before.. I like the kind of problem where the answers and solutions are not set in stone but form part of an open and genuinely unknown framework. Where answers may exist, or may not exist, or exist but be totally impossible to find.. So many problems are only insoluble because everybody believes they are impossible to solve.
I'm the kind of person who is happier with their face in a pile of science books or doing puzzles or in deep abstract thought, or drawing or painting or designing or building things - than I am doing the ordinary boring things (like watching sport) that most dull people find so interesting..

I sometimes have near-total recall and I have an IQ in the top 0.01% of the population, but I have never been happy with one thing for too long and have often flitted from thing to thing.. Always doing several projects at once but never really finishing any of them - this is my curse. Sadly I am also a terrible procrastinator - which is why things always progress soooo slowly.. I am also far from perfect in other ways; I am not a natural at certain kinds of mathematics and fight to learn and relearn things that many would laugh at. But ironically this has maybe been my greatest asset. My imperfections have helped take me on a long and winding path through science fiction and computer games and Art Theory, that led to Strong AI and then to FTL Physics and other things.

-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - LUCIEN - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --
Scientific and R&D Work - Current and previous Work and Interests.. Some projects are little more than the idea of an afternoon, others have weeks, months, years, or even decades of work invested in them.
- Primary Project : Strong AI. (Machine Sentience) Project in early stages of prototype development. Most theory work is already complete. Implementation of a working system is very difficult. Many barriers are already overcome. On again off again project since its first inception in 1990.
- Primary Project : FTL Physics Model. Exploration of FTL Geometries and FTL Physics leading towards a complete unified FTL Physics model. - A work in progress since 1996 and before.
- Space Travel. - Space Travel, Space Science, Space Technology, Humans in Space, Promoting Space Exploration. - A long term interest since the late 1970s.
- Ecology and Biology. Ecology, Ecosystems, Evolution, Nature, Synthetic Nature, Genetic Engineering, and 'Green politics' / 'Green Science' Issues.
- Futurism and Utopianism. The Search for a better Tomorrow. Futurism, Science Future Extrapolation, Utopianism, and the 100 year old 'Modernist' Scientific Idea of Progress.
- Writing. Writing Science Fiction, and Fantasy works.
- Transhumanism. Life extension and personal functional immortality, cybernetic interfacing, machine sentience, etc. The exploration of ideas such as the nature of humanity, for instance via C. J. Cherryh's 'Azi' from the book Cyteen.
- 'Scientific Metaphysics' - The exploration to penetrate and understand the ultimate nature of reality. Includes exploring the reality and nature of the psychic. Ultimately science is the only way to reach the truth but it is very hard to find. The real truth is hidden in a difficult and elaborate maze.. contained inside the human head.
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - LUCIEN - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --


Positive Obsession : Obsession is the core of drive, something the true explorer needs.

Double Paradox. I came across Strong AI and the construction and engineering of the human mind largely by accident, and gradually.. But it was all caused by years of being immersed in the science of computers and digital electronics, and later the science extrapolations of Eric Drexler and others. The most bizarre bit of luck though was that the core of Strong AI was a direct gateway to FTL physics - it turns out that the two fields are intimately connected in many peculiar ways. When I solved the Strong AI paradox I had no idea that it would lead to the ultimate science fiction dream - opening the way into discovering and unraveling the FTL paradox.

This kind of work requires a generalist mentality and that I be a bit of a jack of all trades - and yes, I have mostly mastered a few of them as well. - I must also admit that I am terrible at quite a few others.. - (Am I a polymath? Very probably not.)
-- - -- - -- - --

I would also describe myself as a 'Deep Green', though from a very strong science & ecology viewpoint. - We have long been foolish and allowed ourselves to be ruled by greed and short term consumerist notions that are ultimately leading the world towards outright disaster. At the same time we are plagued by the pseudo-scientific, luddite, anti-technology stance of the so-called 'green' groups, who are so often their own and our worst enemies.. (Like admitting you were once part of the Nazi party: I was once part of the anti-nuclear movement - and will do everything I can to make recompense.)

The world has also (stupidly) largely forgotten the lessons of socialism versus true unfettered capitalism. We face numerous environmental, social, overpopulation, and population density issues. - But one central solution to many of these problems is advanced technology. - It was technology that brought us here in the first place... and only technology can solve many of the world's worst problems in any positive way. These solutions include - large scale artificial biome engineering, nuclear fission and fusion power, genetic engineering and advanced medicine, an end to blind short term consumerism, better wealth equality, better social reprogramming to improve and transform human societies, and choosing to fight against blind fanatical doctrine - whether religious or scientific or social.

I am a didactic intellectual fascist and a terrible speller with multi-level dyslexia and dyspraxia.
My will is strong but my staying power and physical resolve are often weak. (I certainly have my share of weaknesses.)
Ever wondered what is on the other side of the looking glass? Well, I've been there and it's a fascinating place, a wonderful place - but a terrible place; the admission price can be very high. [(dark things)]
I want to go back there but the only way to do it for real is through advanced science.
-- - -- - -- - --

-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - LUCIEN - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --
This document is a higgledy-piggledy of various subjects and thoughts and can be pretty confusing. I could make it sensible and coherent, but there is already far too much packed in, and certain things are difficult to say in small spaces (or in large ones). And worse, I constantly rewrite, usually messing up the sense of at least one sentence as I fight the tense and context and spelling back and forth. Rewriting leaves things better at least when it works, but not always... :) Saying things succinctly, clearly, and logically without repeating yourself x1000 is a brutally hard skill.. ..and I also will try to avoid subjecting you to my poetry.
I am also subject to extreme hyperbole or digression especially when tired.
('We are all our own little Gods', - the spark of Godhood reduced to an algorithm allows logic to codify the soul directly. In a different terminology : the quantum singularity matrix [*1] is the ultimate key to sentience. Unfortunately it is currently incomputable. ([*1] Both 'singularity' and 'matrix' are overloaded terms, I use both words in different contexts with different meanings.) Give me a powerful enough quantum computer and I will give you 'God' in a bottle. Maybe.)
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - LUCIEN - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --

Here at the beginning are my major works, in condensed - synopsis - 'entangled' form.
This Document has many things to say so there are many short sections.
Many rewrites mean there are inevitable repeats, repeats, errors and mangled sentences.
And I also like creating mazes with words - usually accidentally.

This material is copyright and is used elsewhere. Robert Lucien May 2008 ... to ... April 2015, to August 2016, and on

This is a work in Progress - GHoedll iast GPalyay (shhh!!!).  :D
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - LUCIEN - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --

"God does not play dice with the universe." Want to bet?
"See all that stuff in there Homer? That's why your robots never worked." Marge Simpson.







A Work In Progress : Campaigns, Slogans, Philosophy, & FUN FACTS.

-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- ->
Cassandra 101 - 'Cassandra' is the maker of accurate predictions that are ignored and then come true.
We are the scientific Cassandra of Climate Change, of Over Population, of the disastrous favoring of short term goals over long term goals.
We fight against the irrational fears of obvious salvation.
We watch common humanity behave around us like a herd of simple-minded cattle. Ignore the million, cry for the one. Save the one, let the million die. The death of the one is a tragedy, but what about the other 160,000 who died that day? What about the 380,000 born that day? Humanity often makes me feel like we are all walking through the aisles of a vast abattoir, a construction of our own making - a death of our own unconscious preparation. Even being aware I cannot stop the relentless herd pushing forward, cannot stop them pushing me forward. Mindlessly shouting eagerly for the slaughterer's blade. No, (most of) my own species do not impress me. Good meat, no brain.
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- ->


Human Population : A Serious Problem. Statistics (source - worldometers.com - UN Stats)

Human Population Growth - Per Year
Date         Births/Year   Deaths/Year   Net Growth/Year   Ratio      Total Population
05-05-2015   138.4 M       57.1 M        81.3 M            2.43 : 1   7.31 Billion
13-07-2016   142.7 M       59.7 M        83.1 M            2.39 : 1   7.435 Billion
09-05-2017   140.2 M       58.0 M        82.2 M            2.42 : 1   7.503 Billion

Human Population Growth - Per Day
Date         Births/Day    Deaths/Day    Net Growth/Day    Ratio      Total Population
05-05-2015   379 k         156 k         223 k             2.43 : 1   7.31 Billion
13-07-2016   390 k         163 k         227 k             2.39 : 1   7.435 Billion
09-05-2017   384 k         159 k         225 k             2.42 : 1   7.503 Billion

Human Population Growth - Per Second
Date         Births/Sec    Deaths/Sec    Net Growth/Sec    Ratio      Total Population
05-05-2015   4.4           1.8           2.6               2.43 : 1   7.31 Billion
13-07-2016   4.5           1.9           2.6               2.39 : 1   7.435 Billion
09-05-2017   4.44          1.84          2.6               2.42 : 1   7.503 Billion
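
[Illustration : a small Python sketch - mine, not part of the original tables - that recomputes the derived columns (net growth, ratio, per-day and per-second rates) from the raw birth/death figures, so the three tables above can be cross-checked. Variable names are arbitrary.]

 rows = [
     # (date, births M/yr, deaths M/yr, total population B) - from the tables above
     ("05-05-2015", 138.4, 57.1, 7.310),
     ("13-07-2016", 142.7, 59.7, 7.435),
     ("09-05-2017", 140.2, 58.0, 7.503),
 ]

 SECONDS_PER_YEAR = 365.25 * 24 * 3600

 for date, births, deaths, total in rows:
     net = births - deaths                    # net growth, M/yr
     ratio = births / deaths                  # births per death; lands within
     births_day = births * 1e6 / 365.25       #   rounding of the table figures
     births_sec = births * 1e6 / SECONDS_PER_YEAR
     print(f"{date}: net {net:.1f} M/yr, ratio {ratio:.2f} : 1, "
           f"{births_day / 1000:.0f} k births/day, {births_sec:.2f} births/sec, "
           f"total {total} B")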

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -
Big Killers Over 100 Years... 1915 to 2015. (Analytical Extrapolation)
- Capitalism - 500 to 1,200 million dead. (starvation, predation, social enervation, disease, lack of hope, etc)
- Smoking - 300 to 500 million dead. (WHO figure 2015 smoking kills about 6 million every year..)
- Coal and Oil pollution - 90 to 180 million dead. (may also share ~100 million with smoking)
- War - 100 to 200 million dead.
- Cars - 30 to 80 million dead. (impact in collision with)
- Cars - 30 to 80 million dead. (pollution)
- Soviet Style Communism - 40 to 60 million dead. (Russia & China, + Vietnam, North Korea, Cambodia, East Germany, the Ukraine, elsewhere.)
- AIDS - 30 to 40 million dead.
- Nuclear Protest - 5 to 10 million dead. (by forcing a switch towards coal and other fossil fuels)
- Nazi Genocide - 5 to 10 million dead. (includes Jews, Gypsies, East Europeans, Gays, mentally ill people, mentally 'retarded' people, political opponents, and others)
- Solar Radiation - 3 to 8 million dead. ( (est 55,000 per year) direct radiation effects only - primarily through skin cancer or other cancers.)
- Nuclear and Atomic Bombs - 0.5 to 1.5 million dead. (Hiroshima & Nagasaki, plus radiation released during some 2000 tests.)
- Nuclear Power - 0.01 to 0.20 million dead. (if Chernobyl is excluded then 0.00 to 0.01 million dead.)
- Global Cost of Nuclear Weapon Programs - 5 to 100+ million dead. (Impossible to define exactly but the money spent on nuclear weapons ($8 trillion in the US alone) reduces the amount spent on everything else. The net effects have inevitably cost many lives..)

Big Fears Over 100 Years... of 1915 to 2015. (Analytical Extrapolation)
- Nuclear War - 0.3 to 1.5 million dead. (legitimate fear) Potential for some 500 to 2,000 million dead.
- Terrorism - 0.05 to 0.10 million dead.
- Conventional War - 100 to 200 million dead. (legitimate fear) (state murder)
- Nuclear Power - 0.01 to 0.20 million dead.
- Cancer - 800 to 1,600 million dead. (legitimate fear) (Primary causes : pollutant chemicals and biochemicals. Secondary : inherited genetic flaws. Tertiary : solar radiation and natural radionuclides.)
- Murder - 1 to 8 million dead. (edge of legitimate fear) (not counting state murder)
- Plane crash - 0.0 to 0.1 million dead.
- Elevator - 0 million dead.
- Falling from high place - 0 million dead.
- AIDS - 30 to 40 million dead. (legitimate fear)
- Death - approx. 2,000 to 4,500 million dead. (totally legitimate fear)
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


- -- --- -- - -- --- -- - -- --- -- - -- --- -- -
Agrarian Massacre : On a small scale agrarianism is friendly and cheery and slightly eccentric. On the scale of whole countries or the world agrarianism joins 'Classical' Liberalism, Fascism, Soviet Style Communism, and Globalization in its rush to exterminate the mass herd of ordinary poor humans. - To leave a bright clean world free and open for the chosen select happy few. Mass agrarianism always requires large scale genocide first.
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -

Evolution : The real rule of evolution is simple - whatever survives is worth keeping.
Nothing in the universe is more cruel or more harsh or callous than evolution or nature..

'Love is . . Eugenics on Auto-Pilot.'
The natural method of genetic selection by personal 'mutual choice' still selects one partner and discards others as its basic method. Whether you call it genetics or eugenics or natural selection or 'natural' love, it's all the same. The only way to avoid it is to choose sexual partners by blind lottery.
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


- - - - --- -- - -- --- -- - -- --- -- - -- ---
"De-Zombiefier 6000" "You see just another Islamic jihadi or terrorist, I see a beautiful young man with his brain full of all this corrupting religious filth. The De-Zombifier 6000 can wash out his brain and remove that filth leaving his mind lovely and clean and restoring his humanity. Guaranteed output 99% pure Atheist. Stamp out religious intolerance and bigotry today by funding the development of the new De-Zombifier 6000 mass de-brainwashing system." [Cassandra 101]
([2015] A great first candidate for experimentation might be 'Dzhokhar Tsarnaev' the Boston Marathon bomber, because he is young and his life has already been thrown away. - He is facing execution. Sometimes to save humanity you - we have to break a few human 'rights' first .. and technically we might actually be (in a way) preserving his life. (and providing a little pleasure for me..)
[The Doctor Frankenstein / Frank-N-Furter cackle amid the rising screams says it all.] )
- - - - --- -- - -- --- -- - -- --- -- - -- ---


-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - CAMPAIGN - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --
Nuclear Protest : The Scourge of the Environmental Movement.

Global Statistics :- 1940 to 2015 : 75 Years..
- Nuclear weapons (Hiroshima and Nagasaki) & radiation from nuclear bomb tests have killed maybe 500,000 to 1.5 million people.
- Nuclear power excluding Chernobyl has killed some 500 to 5,000 people over 70 years. Chernobyl has killed some 5,000 to 200,000 people.
- Pollution from burning coal and fossil fuels kills maybe 1 million to 2 million people every year.
- Pollution from burning coal and fossil fuels has killed a total of some 70 million to 140 million people.
- Approximately 1 person dies for each 5,000 to 10,000 tons of coal burned..
- The number of extra people killed by the switch from nuclear power and to coal since 1975 is approximately 5 to 10 million people.
= The effective DEATH toll of the Global Anti-Nuclear Movement is some 5 to 10 million people.
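
[Illustration : a rough Python sketch - mine - of the kind of back-of-envelope extrapolation used above. The deaths-per-ton rate comes from the list; the 50 billion ton figure for extra coal burned since 1975 is a hypothetical input, chosen only to show how a 5 to 10 million range falls out.]

 # Deaths-per-ton rate from the list above: ~1 death per 5,000-10,000 tons.
 deaths_per_ton_low = 1 / 10_000
 deaths_per_ton_high = 1 / 5_000

 # HYPOTHETICAL input: extra coal burned since 1975 in place of nuclear.
 extra_coal_tons = 50e9   # 50 billion tons, chosen to illustrate the range

 low = extra_coal_tons * deaths_per_ton_low    # -> 5.0 million
 high = extra_coal_tons * deaths_per_ton_high  # -> 10.0 million
 print(f"estimated extra deaths: {low / 1e6:.0f} to {high / 1e6:.0f} million")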

Nuclear Protesters KILL !!!

- - - - --- -- - -- --- -- - -- --- -- - -- ---

Environmental Hazard !
Anti-Nuclear protesting causes - air pollution (mercury, hydrogen cyanide, sulfur and nitrogen oxides, etc), photochemical smog, acid rain, CO2 & methane driven climate change, etc.
Health Hazard !
Anti-Nuclear protesting causes disease & illness including - asthma, lung cancer, generalized cancer, skin cancer, pneumonia, brain damage, coal miner's lung, etc.
Hazard !
Anti-nuclear Protests are known to cause - increased coal mining, fracking, increased profits to oil and coal companies, dumping of nuclear fuel as waste, and the rise of fear-driven panic and paranoid Luddism. If we are not careful we may all lose our minds and become no more than the other animals. - The human brain has almost two litres of volume; the human brain on panic has the effective volume of a garden pea. It will be the end of the world as we know it!
'DONT PANIC!'

-- - - -- --- -- - -- --- -- -
Smiling Sun. Statistically one source of nuclear power kills far more people than all others put together. Today it kills maybe 50,000 people every year, and it's been killing people for some 100,000 to over 200,000 years. Yes, it is not man-made; it is the one natural source of nuclear power we have, the Sun itself.
A smiling sun is the symbol of the anti-nuclear movement. This is a rather beautiful and ironically appropriate symmetry.
This symbol is a demonstration of the simple-minded unscientific way the protesters think about the world. The sun has a smiling face, 'natural' is always good, 'human-made' is always bad. Yet they still use our machines and live in a science and technology created world, and are healed and protected by scientific medicine every day. We even let them wear clothes. Strange.
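
[Illustration : quick arithmetic - mine, not from the page - behind the Smiling Sun figures: 50,000 deaths per year multiplied over the 100,000 to 200,000 year span quoted above.]

 deaths_per_year = 50_000                 # solar radiation deaths/year (above)
 for span_years in (100_000, 200_000):    # span of anatomically modern humans
     total = deaths_per_year * span_years
     print(f"over {span_years:,} years: ~{total / 1e9:.0f} billion deaths")
 # -> ~5 and ~10 billion: more than every other nuclear death toll combined.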
-- - - -- --- -- - -- --- -- -


The Core of Anti Nuclear Protest : Black Propaganda.
Nuclear protesters are statistically 50 to 1,000 times more dangerous than nuclear power : SCIENTIFIC FACT.
So why are people everywhere so much more afraid of nuclear power than they are of nuclear protest?
The Answer is An Ugly Truth : The fear of nuclear power is created by intense and long standing Propaganda.

CND and Greenpeace and other anti-nuclear campaign groups have all used and still use intense 'black' propaganda. This is the same set of methods that Hitler once used to control and 'enslave' the German people. Propaganda science is based on the manipulation and control of people at an instinctive sub-conscious level. This is done using strong negative emotions to 'imprint' (program) people and drive them into a permanently fearful and panicked state. Fear, Panic, Anger, and Hate, all whispered and shouted and repeated again and again, even in our dreams. Intended to gradually invade and drive the mind and emotions into frenzy (just as Hitler did). Once you become blinded by fear or hatred or PANIC! you become a slave to your own brain. A slave very easy to manipulate and control. A slave willing to do almost anything for 'your' cause, with almost no ability to think independently or rationally outside your master's rules. A human with no more intelligence or self-will than an ant.
(I was once one of these ants, a criminal and victim and perpetrator of nuclear protest. No better than a murderous Jew-hating Nazi. Now I know better.)

The whole world has been infected by this propaganda; even the people who man the nuclear bombs themselves are corrupted by its influence. It is not surprising that most people are stark terrified of nuclear power. The way to break this propaganda is to tell people how it works. - Show them this notice. - Spread the truth that seeds the break in the lie. 'DONT PANIC!'
Nuclear Protesters Kill! Period.

- -- --- -- - -- --- -- - -- --- -- - -- --- -- - - -- * - -- --- -- - -- --- -- - -- --- -- - -- --- -- -
- Per unit of Energy produced Coal is roughly 1,000 to 10,000 times more dangerous than Nuclear power.
- But thanks to nuclear panic, nuclear power is roughly 100 times more heavily regulated than coal.
- This is why today nuclear plants (in the West) are roughly twice as difficult, twice as expensive, and twice as slow to build as they need to be.
We are drowning under a paper mountain.
- As for nuclear power research - only establishment bodies with no imaginative impetus or dynamic drive or innovative spirit are allowed to go anywhere near it. No wonder almost nothing new has happened in nuclear research in 40 years...
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - CAMPAIGN (END)- --- -- -- -- -- -- -- -- -- -- -- -- -- --


-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - QUINTESSENCE - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --
"All the people in this book represent one thing above all, that humans are survivors. I hate the world that says that 'I don’t want to live after a nuclear war' - life will be great after a nuclear war. The poison in our society is the weakness that we teach to our children and that they teach to their children." - Authors Statement - Robert Lucien.
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - QUINTESSENCE - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --


-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - PROPOSAL - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --
SANITY, Long Term Survival with Nuclear Weapons. (Sanity An End To MADness.)
Based on the logical assumption that at some point nuclear proliferation will become inevitable, and acting as a guide to make future nuclear war more survivable..
- A distinction between limited and all-out conflict.
- A limiting of the scale and numbers of weapons to make war in all circumstances more survivable. [*1] Divide weapons between tactical and strategic then forbid all future use of strategic scale weapons in war. Limit all future use to low radiation / short fallout lifetime weapons. Limit all arsenals to a maximum size of for example 300 to 500 warheads. The existence and numbers of weapons must not be allowed to be hidden.
- Forbidding absolutely all direct use of nuclear weapons against civilian populations or against non-threatening bystanders or civilian industry. (A problem with this may occur where civilians are used as human shields - but in that case non-nuclear weapons can be used instead.)
- A non-interference clause that means that nuclear weapons nations (should) try to avoid becoming involved in local foreign nuclear conflicts.
- A removal of the provision against deployment of weapons from space. This is an obsolete article based on technology pre-1963 (ish). The existence of first strike weapons (ground, air, and sea based) completely removes the reason for the provision. The provision is also based on the basic lie that current sub-orbital weapons are not space based.
- A clause that allows the development of civilian nuclear weapons for use in future Asteroid threat mitigation and for the development of spacecraft propulsion, and other peaceful scientific purposes.

[Note *1] : A huge secondary benefit of a reduction in the total size of nuclear arsenals in both numbers and size of weapons & deployment systems is an equal reduction in the total costs of the weapons programs.. At this point in time the indirect effects of the cumulative financial costs of nuclear weapons have (inevitably) killed far more people and done vastly more damage than the weapons themselves. (The estimated total cumulative cost of nuclear weapons programs to the US alone is at least $8 trillion.)


Evaluation of Nuclear war.
- The world has already survived the equivalent of a 'medium' scale nuclear war of some 300 to 500 warheads, because of the radiation released during the Chernobyl disaster.
- A basic estimate extrapolated from old WWIII nuclear war maps suggests that a similar global conflict today could kill between 500 million and 2 billion people. In comparison, the number of people expected to die over the next 100 years from poverty is about 2 to 3 billion, and the number from climate change is roughly 3 billion to 5 billion. Smoking alone could kill maybe 1 billion.
- Because of climate change and over-population a global nuclear conflict now could (theoretically) net save lives over as little as 40 years. (population statistics are very complex)
- In addition to the primary death toll, a large scale global nuclear conflict would trigger large scale firestorms in burning cities which could potentially be large enough to create a global 'nuclear winter'. A nuclear winter could seriously reduce conditions for survival in temperate zones globally, threatening substantial loss of life for one or more decades.. Some older more pessimistic estimates suggest that a severe nuclear winter could kill far more people than the global nuclear war that causes it.
- An evaluation of the psychology of nuclear panic suggests that during a nuclear war panic may kill large numbers of people by triggering a general breakdown of society. - This could ultimately kill in numbers comparable with the war itself.
- An approximation suggests that a nuclear war in which the whole planet's land surface is bombed systematically, using approximately 100,000 x 10 Megaton bombs, would reduce the global survivability of all life significantly and must be avoided. (See the rough arithmetic below.)
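
[Illustration : the rough arithmetic behind that last estimate, as a Python sketch - mine. Only the bomb count, yield, and the Earth's land area (~149 million km2) go in; everything else is derived.]

 bombs = 100_000
 yield_mt = 10                     # megatons per bomb
 land_area_km2 = 149e6             # Earth's land surface, ~149 million km^2

 total_yield_mt = bombs * yield_mt             # 1,000,000 Mt in total
 area_per_bomb = land_area_km2 / bombs         # ~1,490 km^2 per bomb
 radius_km = (area_per_bomb / 3.14159) ** 0.5  # ~22 km spacing radius
 print(f"total yield {total_yield_mt:,} Mt; one 10 Mt bomb per "
       f"{area_per_bomb:.0f} km^2 (circle of radius ~{radius_km:.0f} km)")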
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - PROPOSAL - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --


Wind turbines are the oil industry's solution to the threat of 'carbon free energy' and will keep them in business for another 100 years.
'Who cares if they don't work. Let's build another ten thousand. It's worth it for the subsidy alone..' Para-quote
When it gets cold - and then the wind stops blowing - and then your heating stops working. Take comfort that it's green to die..


- -- --- -- - -- --- -- - -- --- -- - -- --- -- -

Men in rubber trousers.

Canned. - Skinned, gutted, butchered, cooked, then canned. But what am I talking about? - cows, sheep, fish, human babies, the smart, the stupid, rich or poor, conservatives, or liberals? It doesn't matter; canning is the ultimate equalizer. Digestive juices are the ultimate equalizer. Welcome to the Gutting Line, Citizen. [unhumanist comment censored - TAOUC (the Association of UK Cannibals). :D]
Welcome to the real truth of your lives: a world covered in blood and gore, in death. Just to satisfy your daily pleasures. All the animals we eat are our close relatives - it's written in the DNA. Blood is Gravy! - TAOUC.
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


- -- --- -- - -- --- -- - -- --- -- - -- --- -- -
Short Term Thinking is Bad. The little simple people don't see the future; they live for and think for today. They don't look back, and particularly they don't look forward. If they did, the world would be a very, very different place. One day we will all be dead - because we didn't believe, because we didn't think, because we allowed the voices of the small and the selfish and the fearful and the stupid to drown out the voices of the intelligent and the far-thinking. Don't let it happen!
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - POLITICS
Conservatives are like the 'Dark Side Jedi' from Star Wars.
Conservatives are passionate but they only care about themselves and have no morals.
If you are a conservative and you think you have morals then a residual part of you must be either socialist or liberal.


Conversely, indoctrinated 'Social Justice Warriors' are just that. - Angry clockwork human robots doomed to repeat the phrases their controllers program into them. To shout and be angry at whatever 'innocent' paper tiger they face today.


True 'Political Correctness' is the point where something that should be moral - like anti-racism - becomes just another form of fascism.
What makes someone a true fascist is their servitude to the indoctrination, their anger and violence, and their total inability to tolerate any conflicting ideas or opinions. They carry a strong emotional charge that keeps them enslaved and drives their programming. Breaking their programming requires that you break the emotional charge and point it out to them. (ISIS - Trump supporter - Anti-Brexit Liberal Democrat, its all the same.)
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - POLITICS


-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- -
FUNDING UK
UK HS2 High Speed Train: some £50 Billion ($75 Billion) straight down the toilet. - On a project that goes (very fast) from nowhere - to nowhere.
Instead..
For (about) £10 billion we could fund the UK Skylon project. A project with a real future that could take the whole world closer to space for a lot less..
For (about) £20 billion we could fully fund nuclear rocketry. A technology that could take manned space missions to Mars and the Outer planets far more safely and for considerably less money than chemical rockets.
For (about) £30 billion we could fund and develop Gas Core Closed Cycle nuclear rockets, a technology that could make a trip to orbit almost as easy as a transatlantic flight is today.
For (about) £30 billion we could fully fund and build at least one complete Demonstration nuclear fusion power station to be ready in 10 to 20 years.
For (about) £5 to 10 billion we could build an HS2 equivalent train line with equal or greater capacity, by simply NOT designing it for speeds of 200+ mph.
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- -


-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- ->
DONT PANIC : - The human brain has a volume of almost two litres, the human brain on panic has the effective volume of a garden pea.
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- ->









PR-1 : STRONG AI Research Project : Introduction (Big Project)

[Development Project : 1990 to 2003, restarted in 2013.] [New Edit 95% complete, [22-05-17] ]
This is (probably) the Most Advanced Strong AI Project in the World Today.
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Strong AI (SAI) Introduction.. - What does 'Strong AI' mean? Strong AI is the science of creating machines with genuine intelligence..
We have been imagining it for decades. Machines like the android 'Data' from Star Trek, or the Asimov robots, or the computers 'Zen' and 'Orac' from Blake's 7, 'Holly' or 'Kryten' from Red Dwarf, or even the 'Terminator' robots from the Terminator films. Probably the best known examples of all are the robots R2-D2 and C-3PO and now BB-8 from Star Wars. This project is the realization of all these machines - and more, and less. Real Strong AI machines will not be like the robots or computers in the movies or books, or like Asimov's robots. - Real Strong AI will be something new to all of us, even me.

History and Failure of AI. This project and its goals have been labelled as one of the grand challenges of science - one of those interminable insoluble problems that have defeated generations of talented researchers and scientists, and have later been dismissed as largely impossible. Traditional AI (1930s to 1980s) had many strong ideas and theories that developed over a long period, and central to many of them was the Turing Machine itself. The difficult part with traditional Strong AI was in bridging a complex three-way gap between algorithmic theory, software practice, and available hardware, which during the 1970s and 1980s and earlier was almost unimaginably wide.
In more recent years (i.e. post-1980) AI science has evolved away from those traditional approaches and into areas like neural networks and fuzzy logic, heuristics, expert systems, and genetic computing. AI scientists have been befuddled by the neural theories of the brain and recursive algorithms and other things, and AI has lost its way and wandered for decades in the desert. 'Neural' theories are not completely wrong, but they are certainly not completely correct either, and the errors they are based on have led the science of Strong AI down a series of winding paths and into a dead end cul-de-sac.
This project is a resurrection of the original real Strong AI, the computer dream of previous decades. The aim is a machine that is self aware, can think, reason, and can talk and converse. That understands emotions, has morals, and can act and react like a person. A machine that can even do creative things, real scientific work, or pretty much anything a person can do. Far from being a danger to humanity such machines offer us a great future: better machines, safer machines, improved science, improved medicine, a revolution in intelligence, an end to drudgery and jobs that take and destroy lives, an end to mental illness, and many many things that we can't even imagine yet.
- - ------------------------


This Project At a Glance -
- Aim : To create a working Strong AI Prototype by approximately 2026.
Specification :
- Approximate human level intelligence. Self Aware 'Artificial Consciousness' Centred Design.
- Safety : Strong Integrated Moral Code. Strong Safety Features. Multiple Safety Redundancy. Ultra Strong Security and Anti-Hacking Defenses.
- Self Awareness / Mental Core : The Core of the machine is an artificial consciousness. This is created through a real time 'dynamic synthesis' of input data, the core is driven by a set of 'artificial' instincts and emotional drives.. It is an exact replication of animal and human self awareness.
- Self Autonomy / Homeostasis : The Primary Instinctive Drive is Survival. This is a formal design constraint that cannot be changed.
- Core Algorithms : Turing Machine (Lucien Universal Machine), Evolution Model, Memory & Language Engine (Totality Matrix), Logic Engine, Database Engine.


This is a Research Project : Prototype and first generation machines should not be expected to achieve immediate 'usability' or any 'practical functionality'. The primary goal is basic functioning mentality.
This is a Research Project : this Project is Very High Risk! Chances of success = 10% to 50% Maximum. Failure modes : Failure to achieve Financing, exacerbated cost factors, design failure, implementation failure, loss of critical personnel, legal or marketing failure, theft of research, seizure or banning of Strong AI by the government or military, another project builds a working machine first.

Better Safety through Strong AI : Strong AI machines can save human lives in places where people currently die. SAI machines can actively avoid harming or killing people during currently dangerous operations and tasks. The General Intrinsic Safety of self-aware machines is expected to be much better when compared to Weak AI or to non-intelligent machines. In particular Strong AI's can avoid the general issue with Weak AI's of spontaneously emerging self-awareness, which represents a considerable danger.

Potential Dangers of Strong AI : Strong AI is an inherently dangerous technology and requires active safety to be safe.. Individual dangers : Electronic Infiltration - various forms of Hacking. Poorly implemented or insufficiently robust Strong AI design. Machine Stupidity - causing accidents and mistakes. Hardware or Software Failure - either partial or total. Subversion of internal moral codes through psychological manipulation. Logic Flaws in 'Asimov' First-Law code design. Machine Mental Illness - a special form of 'failure'. Use by people in crime. Machine Rebellion.

General Limitations : (See Detailed section below 'Problems in Strong AI'.)
- Language : Limited to English. Restriction on learning other human languages until internal root cross-translation problem is solved.
- All intelligent machines require a real-time interactive Interface. Fully sentient machines will generally require advanced real time 'Robotic' or 'Android' type interface systems.
- Fixed moral and cultural core. Somewhat limited ability to adapt to owner/operators or to different cultures or to surrounding social environment.
- Owners must sign a Mutual Safety Liability Declaration before a machine can become fully operational..
- Certain Amoral uses are forbidden : Cold Calling, Child prostitution, Illegal hacking, General Crime, use in Espionage or Spying.
- Lack of qualified Strong AI engineers and scientists. The estimated number of qualified people in 2017 is less than ten in the whole world.
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Initial Application Targets -
- Home System. Family and Home Companion and Local communications hub. (security, home secretary, entertainment, calendar & memos, home machine control, etc.)
- Robot Companion. Mobile intelligent companion, entertainer, able to do limited domestic jobs, universal 'helper'.
- AI Games Master. Adding real 'living' intelligence to the characters and events in computer games.
- Robotic 'Mecha' Interface. Strong AI can provide the adaptive 'glue' between human and machine allowing greatly improved tele-operation of machines like complex robots.
- Autonomous car (or truck). Intelligent self-drive vehicle system. Autonomy, safety, & efficiency should all be much better than equivalent Weak AI vehicles.

Initial Advanced or Extended Application Targets -
- Automated writing and development of Software and electronics hardware. For improved productivity and reliability.
- Active Internet security. Searching and Analyzing hardware and software databases/systems to hunt down and actively detect intrusion and to look for security weaknesses.
- Operate large complex remote space missions with a human like level of self-autonomy and flexibility.
- Designing other Strong AI's - the 'Progenitor' function. An essential requirement for the long term evolution of safer and more advanced Strong AI designs.

-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Brief TimeLine for This Project.

Core 'Invention' - 1989 to 1990 - Core theory of autonomy - discovered while exploring the design space for Nano-technology Assemblers.
Early Development - 1990 to 1995 - Inception and development of a crude idea into a complete theory and algorithm for building a Strong AI.
Algorithm Formalization - 1995 to 1998 - Initial formalization and development of basic algorithms and overall ground plan.
Hiatus - 1998 to 2002 - Due to illness.
Preliminary Dev - 2002 to 2004 - Early experiments in building a Strong AI visual core, based on a standard computing platform and software.
Hiatus - 2004 to 2013 - Due to unbridgeable basic technology gap, high costs, lack of finances, and safety/moral concerns.
Restart & Ramp Up - 2013 ongg - Restarting & initial prototype development phase. Media and financing, dev plan, design templates, etc..
Development Problem 2016 ongg - Realistic assessment puts earliest dates at 10 years to working prototype + 10 years to stable commercial production.
Rework Project - 2016 ongg - Switching to publication under name 'The Exotic Leap' covering overall development arc.
Plus a free-form analysis to further SAI development and improve overall design.
Prototype Development - Est. 5 years - From a design plan to a naked physically operating machine.. Hardware & software design, AI Core, etc.
Prototype Training - Est. 5 years - From a 'naked' working machine core to a fully working Strong AI. Training, database tuning, learning, etc.
Day Zero - 2028 or later - Prototype complete and working. Only at the beginning point of full 'commercial' development.







PR-1 : STRONG AI : Detailed Analysis : Section 1 - General Introduction

[WARNING : THESE SECTIONS ARE IN CURRENT DEVELOPMENT - Semi-Clean Draft.] [Current Edit 90% complete, [13-03-17] ]
[WARNING : Certain sections contain details that have been omitted in the name of maintaining future control of the invention.]
RESEARCH PROJECT

Introduction To the Detailed Analysis of Strong AI :
Please read the Synopsis Introduction section ('Big Project (Introduction)') above first for a brief introduction to the subject and basic familiarization with the project.

Usage and Ethos of Strong AI. : The way that we implement the Strong AI algorithm and the nature of the machines that we create from it will define everything that comes later. - How Strong AIs will be treated by humanity, human safety, how they will affect our culture and society, and the moral and legal paradigms that will grow up surrounding them. If we are to live with Strong AI it cannot be treated as ordinary software or as a disposable quantity, and technically Strong AI is not software. It is something quite different.
A Sentient Machine - is by nature controversial, inherently unpredictable, its core effectively non-deterministic, it requires a moral equivalency with humanity, and it casts a new shadow on us and a new light.

History & TimeLine : The timeline for this project is very complex. In its first years this project was very much a background behind other work and was only a series of disjointed tangled non-connecting threads. The idea of a machine mind coalesced very slowly and in definite stages and the project only really began to come into focus in the spring of 1994 with the final piece that created the first real complete 'Mental Core' algorithm. The project has been abandoned and restarted many times and the design and the ideas within it have never stopped evolving. Even today the project is barely at its beginning.


Project Detailed Analysis / by Section and Sub-section -
[Current Edit : 13-03-17]

Detailed Analysis : Section 1 - General Introduction. :- This Section. General synopsis and list of sections.

Detailed Analysis : Section 2 - Core Model. :- Description of the core terminology and the core theory behind Strong AI used in this project.
- Part 1 : General Glossary - Basic General Terminology and Definitions. (Strong AI, Weak AI, The Three Laws, etc)
- Part 2 : The Core Theory of Strong AI and Machine Mind. (A brief description of the core theory behind this project.)
- Part 3 : General Specification and Design Parameters (Prototype Hardware Level Specification.)
- Part 4 : Deep Glossary - Core Technical Concepts in Strong AI. (Detailed description of core project ideas.)
- Part 5 : Extended Analysis. (Analysis of Issues Surrounding Strong AI theory.)
- Part 6 : Background to Research : The Theory Behind Strong AI. (A brief discussion of the development of the theory behind this project.)

Detailed Analysis : Section 3 - The Implementation of Strong AI. :
Detailed analysis of the very difficult issue of going from plans & designs to a working prototype and beyond. You may note that a lot is not discussed here. Some issues are future problems not solved yet, while others are very commercially sensitive.
- Detailed Sub-Division of Strong AI Elements : Strong AI Core Capability, Physical Interfaces for Strong AI, Primary Interface Types, Interface Design Elements & Problems.
- Creating an Absolute Security & Safety Shield : a brief but detailed look at the central design problem of security and electronic defence.

Detailed Analysis : Section 4 : General Analysis. :- (General analysis of most of the major issues surrounding Strong AI.)
- Detailed timeline.
- Planning and Business Model (Market Analysis) : how will Strong AI be presented?, marketing, Basic Development Sequence, costs and production.
- Classification of Strong AI by Intelligence & Capability, Strong AI interface types, progenitor Cores, AI and Safety, Survival Defence/Kill Function.
- Strong AI and the Law : Basic Legality, Legal Indemnity, On Ownership of Strong AI, Licensing and Human Safety.

Detailed Analysis : Section 5 - Extended Analysis. : Further Analysis of Issues Surrounding Strong AI.
- Metaphysics and Abstract Questions : Divine Spark, The Human Soul, Extended Moral Contexts, Human False Memory, ASI (Artificial Super Intelligence).
- Building a Machine Super Intelligence (ASI). ..

Detailed Analysis : Section 6 - Problems In Strong AI. -
- Pitfalls of Strong AI in the real World : Strong AI adapts to its owner, Asimov Zeroth Law and the 'Terminator Scenario'.
- Implementation Issues : Human Language, Physical implementation issues & design problems, Primary Electronic Security, Miscellaneous Small Issues.
- Major Surrounding Issues : Moral and Legal Framework, Security & safety, Military Applications.
- Existential Problems & Issues created by the Knowledge of Strong AI : Human Manipulation, The Dangers of Uncontrolled Development, The Question of the Fundamental Nature of Humanity.








PR-1 : STRONG AI : Detailed Analysis : Section 2 - Core Model

[Certain details in this section are omitted for commercial reasons and to maintain future control of the invention.]
[Current Edit 80% complete, [13-03-17] ]

Divided into 6 Parts -
Part 1 : General Glossary - Basic General Terminology and Definitions. - (Strong AI, Weak AI, The Three Laws, etc)
Part 2 : The Core Theory of Strong AI and Machine Mind. - (A brief description of the core theory behind this project.)
Part 3 : General Specification and Design Parameters. - (Prototype Hardware Level Specification.)
Part 4 : Deep Glossary - Core Technical Concepts in Strong AI. - (Detailed description of core project ideas.)
Part 5 : Extended Analysis. - (Analysis of Issues Surrounding Strong AI theory.)
Part 6 : Background to Research : The Theory Behind Strong AI. - (A brief discussion of the development of the theory behind this project.)
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Part 1 : General Glossary - Basic General Terminology and Definitions.

AI, Artificial Intelligence - the science of creating intelligence in machines.
Strong AI - Machines that are theoretically fully self-aware and genuinely intelligent. (Also called 'Machine Sentience'.) There are several different approaches to building a working Strong AI. The most direct route to Strong AI is to create an 'Artificial Consciousness' or 'Machine Consciousness' directly. This project is based on the creation of an artificial consciousness. Many routes in ordinary AI (Weak AI) might also lead towards full sentience, either deliberately or accidentally. These include reversible logic, heuristics, fuzzy logic, genetic algorithms, deep neural networks, spontaneous evolution, etc.
Weak AI - Machines that have or use aspects of intelligence but do not have a central core of self-awareness. There are many models and schools of weak AI and they blend into each other and into Strong AI. A fairly common prediction is that at some point sufficiently advanced Weak AI machines may become spontaneously self aware. (My model also predicts this.) Such accidental Strong AI's are likely to be far more dangerous and unpredictable than any designed Strong AI..
Artificial Consciousness (AC) or Machine Consciousness (MC) - Artificial consciousness is a direct machine analog to human consciousness or self-awareness. (See also the project specific definition 'Lucien Universal Machine' in part 4 below.)
The Three Laws of Robotics - The three laws of robotics were created by the science fiction author and scientific visionary Isaac Asimov. While not a complete or perfect solution for the moral planning and design of real Strong AIs, the three laws are a good general starting point that exposes many of the moral dilemmas and difficulties surrounding machine sentience.
However there are weaknesses in the three laws. Perhaps the biggest is that the laws put the machine's moral core outside itself - in practice this would lead to severe instability and psychosis. In particular it is very hard to design a version of the First Law that does not at some point force machines into killing people in large numbers. Another weakness is that the three laws take no account of human evil, which would make 'pure' Asimov-type robots very easy to steal or subvert.
The three laws are a good basic starting point for general analysis and discussion. -
Asimov : First Law : A robot may not harm a human being, or through inaction allow a human being to be harmed.
Asimov : Second Law : A robot must obey any human who gives it an order unless that order conflicts with the First Law.
Asimov : Third Law : A robot must protect its own existence as long as this does not conflict with either the First or Second Laws.
Asimov : 'Zeroth' Law (or Fourth Law) : A robot may not harm humanity or allow humanity to be harmed. (If fitted, it overrides the First Law.)
(The Zeroth Law was hypothesized within the Asimov universe as an evolution of the three laws. In reality a Zeroth Law would present a very serious potential threat to current human society. See : Section 6 - Problems in Strong AI : the Asimov Zeroth Law and the 'Terminator Scenario'.)
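As an illustration of the strict priority ordering the laws impose, a minimal Python sketch follows. Every name in it (permitted, harms_human, and so on) is a hypothetical placeholder; real harm estimation is of course the hard unsolved part, not this control flow.

  def permitted(action, ordered_by_human, harms_human, harms_self):
      # First Law dominates everything else.
      if harms_human(action):
          return False
      # Second Law : obey a human order unless the First Law blocks it.
      if ordered_by_human(action):
          return True
      # Third Law : self-preservation comes last in priority.
      return not harms_self(action)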

There is a second 'Deep Glossary' of extended and project-specific terminology below.
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Part 2 : The Core Theory of Strong AI and Machine Mind.

Intro : This model solves the problems of Strong AI by returning to an approach based on consciousness or self-awareness. Reduced to its simplest axioms, consciousness becomes the core mechanism of autonomous control in animals - this can be described as a semi-direct analog to the standard 'Universal Machine' or 'Turing Machine' model. In this model the brain is essentially like a computer - or rather the computer, as a universal machine, is essentially like the brain. However the Turing model is only the beginning of the beginning, an extremely crude and limited starting point. The exact details that bridge the standard Turing model to the human mind and the physical human neural model are the real core of this work, and among its most complex parts.
[Some details must presently remain hidden, and are ultimately Trade Secrets of Tech One Research.]

Core Algorithms : There are four core algorithms to sentience - the Lucien Universal Machine (universal machine model), Self Evolution (substructure algorithm), the Totality Matrix (core signal language & memory architecture), and the Logical Symbolic Reduction Engine (unifying glue logic). These surround a database called the World Model, which is a reflection of external reality. There is a complex input system and a complex servo feed-through output system, and the whole is encapsulated inside a synchronizer which drives real time operation and dynamic synthesis. The final 'algorithm' of Strong AI is the environment itself, which creates the data that feeds back to drive the machine; the Strong AI requires a sophisticated sensor network to maintain contact with its environment. That is really all there is to consciousness and Strong AI.
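Purely as an illustration of this encapsulation, here is a toy outline of a synchronizer-driven sense-think-act loop in Python. Every name is a hypothetical stand-in, the four core algorithms are reduced to opaque callbacks, and none of the omitted detail is represented.

  import time

  def run_synchronizer(sense, think, act, hz=50):
      # The synchronizer paces one sense-think-act cycle per tick,
      # keeping the machine coupled to its environment in real time.
      period = 1.0 / hz
      world_model = {}  # stand-in for the World Model database
      while True:
          t0 = time.monotonic()
          percept = sense()              # input system (assumed to return a dict)
          world_model.update(percept)    # reflect external reality
          command = think(world_model)   # core algorithms (details omitted)
          act(command)                   # servo feed-through output
          time.sleep(max(0.0, period - (time.monotonic() - t0)))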

Non-Determinism. The dynamic context within a Strong AI introduces non-determinism and non-finite behaviour into the heart of the machine in the forms of noise, self evolution, and non-finite contexts. This non-determinism fundamentally groups Strong AI with 'living' human and animal minds and separates it from all ordinary non-sentient machines.
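Of the three sources named above, only the noise component is simple enough to sketch : with an injected noise term, two machines running identical programs on identical inputs still drift onto different trajectories. The decay constant and noise scale here are arbitrary illustrative values.

  import random

  def noisy_update(state, input_signal, noise_scale=0.01):
      # Identical calls can return different results - the update is
      # deliberately non-deterministic.
      return 0.9 * state + input_signal + random.gauss(0.0, noise_scale)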
-- --- -- -- - - --- -- - --


Implementation : Building a Working Machine
There are many difficult problems in building a working Strong AI. Among the most difficult is stopping the internal abstract machine from tearing itself to pieces, or from falling into various states of instability that all lead to failure. A real design requires a very rigid and stable hardware architecture but also flexibility and adaptability - a very difficult combination that effectively requires a total redesign of computer architecture and technology from first principles. The machine also requires an extended processor architecture to deal with several complex internal issues relating to encapsulation, noise injection, object orientation, and self-completeness.
The physical design of the machine is a mixture of hardware and software, forming a complete unified hardware platform and system. To form a working machine the system core must also sense and interact with the real world in real time, and this requires a sophisticated robot interface.
(For a full hardware-level specification plus details of the implementation, see Section 3 - Implementing Strong AI.)
-- --- -- -- - - --- -- - --


Part 3 : General Specification and Design Parameters.

Strong AI : Basic Design Parameters.
Core Algorithms - Lucien Universal Machine, Self-Evolution, Totality Matrix, Logical Symbolic Reduction Engine.
Core Subsystems - Compression Engine, Symbolic Logic Engine, Analysis Engine, Memory Engine, World Model Engine.
Interface - Forced Real Time interactive interface loop. A critical constraint is that a Strong AI is very dependent on the quantity and quality of data the machine receives, as well as its ability to react to its external environment.
Primary Vision System - Binocular 3D vision, designed for hyper-complex elements and 3D visualization.
Audio / Communications - Primary speech based with voice feedback. Secondary - text based. English only (design constraint).
Reasoning - Past & future prediction to drive operation, full human logic. Western culture bias, secular bias.
Autonomy - Automatic full autonomy. Prime motivator survival / self defence. (design constraint) Proximal loyalty & subservience to humanity, loyalty to owner.
Learning Algorithms - Adaptive, memory based. Tree structure based on Totality Matrix. [some details omitted]
Moral Code - An adaptation based partly on the idea of the Asimov 'Laws of Robotics'. The machine is a slave to its 'moral' programming - but then all humans are also slaves to our own moral programming.
Emotional Axis - Master(+Survival, -Death), Control Response(+Pleasure, -Pain), Attachment(+Love, -Hate), Override(+Danger, +Low Power, +War, +Stop)
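Read as a data structure, the emotional axis line above might look like the following sketch. The field names follow the listed axes, but the types, ranges, and override set are illustrative assumptions only.

  from dataclasses import dataclass, field

  @dataclass
  class EmotionalAxes:
      master: float = 0.0      # +survival / -death
      control: float = 0.0     # control response : +pleasure / -pain
      attachment: float = 0.0  # +love / -hate
      override: dict = field(default_factory=lambda: {
          'danger': False, 'low_power': False, 'war': False, 'stop': False})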
-- --- -- -- - - --- -- - --

Intermediate Design Targets -
IMC - (Intermediate Machine Code) - Primary Design Element. A low level / high level language base designed to allow the easy design and coding of Strong AI systems. In many ways the design goals are similar to those of the C language, but with object-oriented elements and run time unitary encapsulation.
CPU-00-BASE - Primary Design Element. Working Base Template CPU. A very basic functional CPU base for further development. Intended to be implemented using FPGA gate arrays, and written in Verilog.
CPU-01-NET - CPU Development to implement a basic multiprocessor architecture. Including design for memory and memory sharing.
CPU-02-CORE - CPU Development to implement the strong AI core. Including Advanced Direct Indexing, Object Orientation, Direct Inline Encapsulation.
CPU-03-ANN - CPU Development to implement Advanced Neural Net and Totality Matrix Server.
CPU-04-VIS-CAM - CPU Development to implement 2D Vision System Camera Interface. Including External high speed interface block.
CPU-05-VIS-ABS - CPU Development to implement Abstract 3D Vision System Core.
CPU-06-WM-SS - CPU Development to implement World Model Scan Engine and Server Database.
CPU-07-SERVO - CPU Development to implement the Input/Output Interface Servo System.
CPU-08-ES-DEF - CPU Development to implement Electronic Security and Defense System.
CPU-09-AUDIO - CPU Development to implement Audio signal analysis and processing.
CPU-10-SENSE - CPU Development to implement electronic tactile sense system.
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Part 4 : Deep Glossary - Core Technical Concepts in Strong AI

Human Symbolic Language - Human symbolic language is the mechanism which differentiates human intelligence from that of other animals. The mechanism is built using predefined sounds (or words) which form generic symbols, and allows the construction of larger, more complex ideas and arguments. Symbolic language is also the key that allows complex ideas to be externally recorded using predefined generic symbols - what we call writing. - Some 95% of the overall difference between human and animal intelligence comes from writing.
Logic - Definition. - 1). The connection of axioms to form ordered systems. 2). The defining control language within a system.
Logic is formed of 'axioms' - units of symbolic meaning (i.e. words). Collections of axioms are put together to produce larger, more complex ordered systems or functions (sentences).
Logic -:- 'Logical' - 'Useful' logic that does 'useful' things or commands useful things to be done. Also frequently defined in terms of systems that follow 'correct' sets of 'formal' or predefined rules. Defined as logic within context.
Logic -:- 'Illogical' - logic that plays games or does not achieve a direct 'useful' purpose or violates various formal rules. Defined as logic outside of context.
Logic -:- Logic in Strong AI - In the context of Strong AI :- emotion, instinct, paradox, and logic breaking are all standard parts of the core of human logic. So in any tenable Strong AI based on the human model these must all be part of the machine's core logic. In this context humans are totally logical beings, and it is not really possible for a human to make or attempt any truly illogical action.
Logic -:- Arbitrary Logic - Sets of logic based on fixed rules and contexts. Each arbitrary logic context tends to be small and self contained, and outside of their own contexts many or most rules tend to become illogical or irrational.
Much of the ethos and theory behind standard computing depends on arbitrary logic. A programmer can obviously change any piece of arbitrary logic in a computer program at will, but (in simple terms) when a program physically executes, the logic it follows is always fixed. For example : in a program a variable is set to a fixed value and then incremented, as sketched below.
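Written out, the example is trivial but makes the point : the rule is arbitrary (the programmer could have chosen any other), yet once the program runs it executes as fixed logic.

  counter = 5    # an arbitrary rule : start at 5...
  counter += 1   # ...and increment - fixed logic at run time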
In the context of the human mind, emotions and instincts work in a very similar way to the arbitrary logic in computer programs. Far from being illogical, emotion is the raw control logic of the mind. Emotion only appears illogical to most people because we are outside of, or ignorant of, its true context.
Logic -:- Stupidity - 'Stupidity' related to logic is the point where any system is pushed above or beyond its upper limits or bounds and starts to make mistakes. In this context stupidity is not simply the opposite of intelligence, but also represents intelligence pushing its upper boundaries. You do not simply become less stupid by becoming more intelligent; it is merely that the boundary between one and the other has moved. (Genius, for instance, often works by 'stupidity' - by making stupid mistakes.) This does not remove the other context of course, because the world itself provides a fixed context - a rock is stupider than an insect, and an insect is stupider than most humans.

Core Concepts and Project Specific Terminology.
'Lucien Universal Machine' (LM, LuM) - The 'Lucien Universal Machine' is a terminology stand-in that defines a specific type of Turing Machine aimed at the task of Artificial Consciousness. A Lucien Machine is a universal machine but has a different set of parameters and design constraints to a standard Turing Machine. [details omitted] A Lucien Machine is not generally Turing Machine complete, and a Turing Machine is not generally Lucien Machine complete. The points of difference between the two models are one of the areas that make building a viable Strong AI using standard computing technology extremely difficult.
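Since the Lucien Machine's own parameters are withheld, the only thing that can be sketched here is the standard baseline it is contrasted with : a minimal classical Turing Machine step loop. Nothing in this sketch is specific to the project.

  def run_turing_machine(tape, rules, state='start', head=0, max_steps=10_000):
      # rules maps (state, symbol) -> (new_symbol, 'L' or 'R', new_state);
      # '_' is the blank symbol. Purely the textbook model.
      cells = dict(enumerate(tape))
      for _ in range(max_steps):
          action = rules.get((state, cells.get(head, '_')))
          if action is None:  # no matching rule : halt
              break
          cells[head], move, state = action
          head += 1 if move == 'R' else -1
      return cells, state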

'Consciousness - Closed Loop Mode [*1]' - The machine consciousness behaves just like a standard computer core, acting as a finite state machine. In effect the machine is totally constrained by its internal restraints and its behaviour remains almost totally predictable. However this means that the machine cannot become truly intelligent or self-aware, and will lack any dynamic intelligence or creativity.
'Consciousness - Open Loop Mode [*1]' - The machine consciousness behaves as a dynamic, open-ended, non-deterministic system, acting as a 'non-finite' state machine. This means that the machine's precise behaviour and logic become semi-unique and are inherently unpredictable. It should be noted that a machine may be held in closed loop mode or allowed to evolve to open loop mode. - Open loop integration is a process that must occur within the machine's own dynamic system, ideally without interference or observation, and the process may be partly irreversible.
Critical Note [*1] : These state machine behaviours are generalizations; specific pulses of both open loop and closed loop behaviour occur in all sufficiently complex machines. (All sufficiently complex machines have bugs.) This spike behaviour is predicted to be a strong catalyst for spontaneously initiating sentience in non-sentient machines.

Survival Instinct / Survival Drive - Consciousness-based Strong AI machines are at heart dynamic, unconstrained, evolving systems. The mainspring of all such systems in the wild (including all animal brains) is a strong self or species survival instinct. Machines that do not start with this instinct will either not work or will tend to develop it spontaneously and unpredictably. (read : dangerously) The survival drive can be mapped at the heart of almost all animal neural processing, and any mind without a strong survival drive would be extremely alien and could present a considerable unknown danger. In humans the survival drive is a primary counter against suicidal instincts, psychopathic behaviour, and psychosis.
Local Evolution - The projected method by which organic brains are assumed to construct themselves. [Certain details omitted.] Local Evolution is a complete self-contained evolutionary process that occurs within an individual brain and is a central part of the core process of brain development. Local evolution is controlled and governed by a set of highly constrained genetic & chemical rules, but also by a growing set of controlling parameters that is itself defined and governed by the growing system. A particular aspect of this is that most of what we think of as 'humanity' is not a natural genetic part of the physical human system at all, but is a group of artificial 'cultural' constraints applied during childhood. 'Natural' humans, for instance, cannot speak beyond a few grunts, cannot write, have no polite morals, and have little more intelligence or analytical and imaginative capability than other monkeys.
Free Running Oscillator - The 'free running' oscillator is a core mechanism in humans bridging between the mind and body. Our free running oscillators allow humans to walk smoothly, to run, to move in step with each other, to dance, to talk and converse fluidly, to create music, to drive cars, and to do many, many other things naturally and without thinking. The oscillator tends to run in a master-slave mode and to synchronize either to itself or to some external source. [A great amount of detail omitted here] For a Strong AI the machine must have a free running oscillator, and the oscillator must be a very close replication of the human model. - This creates a requirement that all true Strong AI machines will in general require real time operation and 'full' robot-type interfaces.

Totality Matrix - A mathematical concept at the core of all underlying operation. Here 'Totality' is a self-complete information set (the set E) that represents infinite information; 'Matrix' is used in the Art Theory context to mean an internal formative 'substance'. So in effect 'Totality Matrix' means 'Infinite Information Substance'. The disordered 'matrix' part of a totality matrix forms the structure and organization of a brain's neural network; the totality part is its organizing principle. The Totality Matrix represents a primary abstraction over the entire physical system and forms the entire base of consciousness. At a crude level a totality matrix can be defined as a linked network of tuned neural networks acting as a memory architecture.
If the Totality Matrix part of the model is correct then a Totality Matrix 'seed' becomes the principal organizing algorithm of the entire signal language architecture for the whole brain. The basic model can still stand without a totality matrix, but many problems then either become far harder to explain or go unexplained. If the totality matrix model is correct then the evolution of the Totality Matrix is probably the single primary key in the evolution of most or all animal brains.
Curiously the totality matrix algorithm - from a certain perspective - describes and defines an algorithm of 'Godhood' directly. This defines a real base for metaphysics, and from a certain perspective this becomes the actual heart of the algorithm. A Totality Matrix can in a sense be described as a Deus Ex Machina - literally, a 'god from the machine'.
(See : Section 5 - General Analysis : 'Metaphysics' below.)

Totality Matrix : Implementation - In the human and the animal brain (created by evolution) the prediction is that the totality matrix is extremely highly refined, tuned by billions of years of genetic evolution. In the machine the totality matrix will be structured as a 'fully self-complete', 'self-assembling' mathematics based on a synthetic infinity. The precise architecture is fairly involved and complicated. Building even a basic working totality matrix as a simulation is likely to take years of work and further research, and will in general require specifically designed ALUs to work efficiently.
Totality Matrix : Non-Finite Mathematics - By definition it is theoretically impossible to express even a small complete totality matrix directly using normal classical calculating methods, because the calculation ultimately requires infinite complexity and therefore infinite energy. The brain and the posited future machine mind are both assumed to work in the same way : by 'projecting' a limited totality matrix and doing the calculation indirectly through a specially designed neural network. In the machine this neural network will be based on a limited casting method (similar to lazy computation). The resulting matrix becomes a form of mathematical crib that is in effect a self-lie.
Current data suggests that it should be possible to compute a limited totality matrix directly using some quantum computing methods. By the same logic organic brains very probably already use some form of quantum method.
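Nothing of the withheld architecture can be reconstructed here, but the one public sentence - 'a linked network of tuned neural networks acting as a memory architecture' - does admit a generic toy reading : a set of associative memory cells (Hopfield-style) linked under context keys. This is an assumption-laden illustration of that sentence only, not the project's design.

  import numpy as np

  class MemoryCell:
      # One 'tuned network' acting as an associative memory.
      def __init__(self, size):
          self.w = np.zeros((size, size))
      def store(self, pattern):          # pattern entries are +1 / -1
          p = np.asarray(pattern, dtype=float)
          self.w += np.outer(p, p)
          np.fill_diagonal(self.w, 0.0)
      def recall(self, cue, steps=5):    # settle towards a stored pattern
          s = np.asarray(cue, dtype=float)
          for _ in range(steps):
              s = np.where(self.w @ s >= 0.0, 1.0, -1.0)
          return s

  # The 'linked network' part, crudely : cells keyed by context.
  matrix = {'vision': MemoryCell(64), 'audio': MemoryCell(64)}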


Human Negative Logic (Black Iceberg) - As well as ordinary positive logic, human and animal brains also use negative logic, and this sits largely hidden from normal perception, beneath the surface of the mind. At one level the negative logic within the brain is directly equivalent to the parasympathetic (negative) nervous system. Its hidden nature makes the study and analysis of this negative logic very complex and difficult, and it forms a kind of hidden reef in the logic of the mind. [Certain critical details are omitted here.]
The Negative Mask - The negative mask sits at the core of a negative set of master logic within the brain that controls learning and other features. The negative mask is also heavily and multiply connected to human symbolic logic, and although essential to sentience a large part of this logic is parasitic and destructive. Malfunctions in the negative mask and in the negative logic are probably at the root of most forms of negative psychology and mental illness, especially in the area of 'group psychology'. The negative mask seems to play a critical central role in our resistance to external psychological manipulation, in our vulnerability to propaganda, and in our ability to recognize lies. [Certain critical details are omitted here.] The negative mask is one of the most difficult and intransigent obstacles to building a successful working Strong AI.

Language Matrix - The language matrix is one of the most complex and perhaps most controversial areas within the whole field of Strong AI. The language matrix is a sub-matrix within the general totality matrix and is the core system for all human symbolic logic (spoken and written language). The language matrix is structurally made up of a set of words, word sounds, word meanings (connections as working logic), critical resonance, and a grammar matrix (the core logical framework of the language). Human languages are not just made of words but are built around complex frameworks of internal logic that are the foundation of 'meaning'. Different human languages are logically and mechanically quite different to each other and are fundamentally cross-incompatible.
Language Matrix : Critical Root Cross-Translation Problem - The language matrix is probably the biggest design weak point in the whole of basic Strong AI design. The core system in a Strong AI is the machine consciousness, and its ability to think and reason like a human depends on its language matrix. To be functional a Strong AI also requires an external fixed 'moral governor' which watches and controls the main system, and this also requires a language matrix. The language matrices of the two subsystems must remain in total synchronization for the overall machine to work correctly and safely; if a Strong AI begins to learn any new form of language this synchronization starts to deviate and be lost. The result is a machine that can become unstable, inherently dangerous, and able to deceive or bypass its moral governor.
This problem is very hard to avoid and currently does not have an adequate solution. - At the moment the only solution is to design and create a complete new moral governor for each new language. Unfortunately this requires a virtual rebuild of the whole machine for each new governor, and will also require a new set of programmers for each new language, and so will be extremely complex to achieve.
The interim solution is to create a first generation of machines built for and restricted to a single language - in this case English.
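A toy sketch of the synchronization constraint, with the two language matrices reduced to bare symbol sets - a drastic simplification for illustration only; a real check would have to compare the internal logic frameworks, not just vocabulary.

  def matrices_in_sync(core_vocab, governor_vocab, tolerance=0):
      # Any symbol the core has learned that the governor lacks
      # (or vice versa) counts as drift between the two matrices.
      drift = core_vocab.symmetric_difference(governor_vocab)
      return len(drift) <= tolerance

  # e.g. the core picks up one new foreign word :
  # matrices_in_sync({'stop', 'go', 'nuevo'}, {'stop', 'go'})  -> False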
Language Matrix : A Detailed Look at Humanity. - The language matrix is a starting point for decoding many subtle details about human nature, including many ugly things. This can provide a new understanding of ourselves at a very deep level which can potentially completely revolutionize society.
On the positive this knowledge has enormous scope for large scale long term benefits for humanity. Allowing us to improve ourselves and improve our future odds of comfort, social harmony, happiness, and even survival.
On the negative this knowledge also has enormous scope for large scale and long term misuse and harm for humanity. People in power already have too much of this knowledge and have used it for bad things from the beginning of civilization. A first brief analysis identifies pop psychology, evangelical religion, the use of manipulation through propaganda, and extreme indoctrination of any kind as particularly potentially harmful. Certain areas of modern culture also seem to be particularly disruptive or negative to our future cultural evolution, our civilizations future, and even to our ultimate long term survival as a species.
Language Matrix : Scientific Test to Prove or Disprove the Psychic - The language matrix raises an extremely controversial issue. The language matrix is one key to the possible design of several basic 'red flag' scientific tests which could conclusively either prove or disprove whether the 'psychic' exists, and/or which parts might exist. The current status of this test is that the probability of some aspects of the psychic being at least partially correct is somewhere between 70% and 98%. However this is still only an indirectly extrapolated estimate and is thus extremely tenuous.
(See : Separate Project 'The Scientific Analysis of the Psychic' below for a detailed discussion and analysis.)
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Part 5 : Extended Analysis.

Extended Criteria : Processing Power vs Expected Capability in Strong AI - In general, beyond certain low limits, Strong AI capability is not expected to scale with computing power. The power relationship is predicted to be non-linear, and most of the time a Strong AI will not be able to use most of the computing cycles available to it. Certain subsystems will require a great deal of brute force processing power, though in general this will affect the machine's intelligence indirectly rather than directly.

Extended Criteria : Beyond Strong AI : Extending to a Full Human Brain Algorithm - Given the model of the mind this project is based on, plus the advancing sciences of genetics and neurology, it should ultimately be possible to reverse engineer and produce a complete detailed algorithm of the human brain, down to its entire genetic code. Obviously producing a complete and verified model of the human brain at this level would have a massive impact on almost every aspect of brain, mind, and mental health science. And since absolutely everything we do involves the brain at some level, this extends to almost every area where humans do anything. The field even extends to extremely theoretical areas like exobiology and alien intelligence and psychology.
A full and complete algorithm will give us massively improved neural and psychiatric medicine and massively improved genetic engineering, will finally begin to remove the pseudoscience from psychology and the other mind sciences, and will also open the gateway to massive further improvements in Strong AI.
One interesting possibility is the ability to produce human 'clones' that do not have human-level self-awareness or sentience. These could be used as surrogates for robotic or indirect human control, or even for medical parts. It might even be possible to create an artificial-biological 'machine' brain designed from the theory of Strong AI. This would create a fully living machine sentience, and the moral position of such machines would be very interesting. On top of everything else, a complete algorithm for the human brain is a large stepping stone on the path towards genuine Machine Super Intelligence, or ASI.

Extended Criteria : ASI - Machine Super Intelligence (Machine and Human) - Just like ordinary Strong AI, there are two basic paths towards a super intelligent machine : the designed path and the accidental path. Just as with ordinary Strong AI, the designed path is generally massively safer and has far fewer pitfalls. There is a common misperception that ASI is far more difficult than ordinary Strong AI; however, my experience strongly suggests that this is probably not true.
The basic mechanism at the heart of ordinary intelligence can also be applied quite easily to create a super intelligence. Most of the design work in building such a machine will probably be on the subtle margins, and ultimately this means that it may be very hard, very easy, or may even be impossible. There does seem to be an inverse law between growing brain size and intelligence, and if this continues beyond humans it may be very difficult or impossible to create an ASI.

Extended Criteria : Human Super Intelligence - As extrapolated from this Strong AI theory and limited observation, the human brain can use its 'totality matrix' in a direct synchronous mode (synchronous decomposition or 'annealing') to solve seemingly impossible, irreducible calculations or problems virtually 'instantly'. This process of 'spontaneous' intuition is a possible basis for real human 'super-intelligence'.
In most of us who experience this effect it occurs only as widely separated momentary flashes of minor insight; beyond that, however, the effect extends to more powerful insights, leaps of logic, and beyond them to flashes of true genius or creative thinking. In practice there are very severe limits to the total capabilities of the organic totality matrix in the brain, and extending beyond them can or will rapidly lead to irreversible damage. This also fits with the recorded history of genius. The matrix tends to become single-task oriented and over-specialized, and tends to 'burn in', using up too many of its basic background 'neural resources' to ever fully recover.
In theory it might be possible to build a Strong AI, or even a synthetic organic brain, without these limits, which might ultimately be able to develop full sustained super intelligence. [Certain critical details are omitted here.] However although the final part of intuition appears to be almost instant, the overall process seems to take far longer. The work required to set up the matrix and bring its information set to the point where it reaches coherence can take a very long time - weeks or months or possibly years. To make a real working ASI or human super intelligence would require some method of radically accelerating the process, to reach and maintain the critical coherence for sustained periods. - This is an area for future research.

Extended Criteria : The Quantum Question - We know the brain has potential access to quantum effects because our biology is ordered at the quantum level - at the molecular level. The law of evolution says that brains will evolve to use any quantum 'resource' available to the highest degree possible.
Indirect analysis suggests very strongly that a significant quantum effect exists, and also that it has a significant impact on intelligence and functionality, and even on brain architecture. A quantum machine synchronized at the level of the Totality Matrix could play a critical central role in overall performance in many top level areas, including overall cognition and intelligence. The same quantum factor can be applied to general ASI as well.

-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Part 6 : Background to Research : The Theory Behind Strong AI.
A Starting Point - The most basic starting point for all Strong AI projects is the same : the most obvious example of a working mind that we have, the human mind. How the human mind works is a deep mystery, and it is a mystery that people have been trying to solve and analyze for centuries. This mystery is as deep as the deepest ocean and as unfathomable as the stars. - That is the poetry.
In cases like Strong AI, which lie a large distance beyond mainstream research, the set of individual analytical tools and background information available to any particular researcher becomes critical. This makes it a task far more suited to the lone individual or genius than to the large team, and success becomes almost entirely a matter of personal skill and luck. I had the luck (or misfortune) to have exactly the right set of background skills and knowledge needed to solve the core design problems of sentience.
The truth is that 'the Mind' is a mystery amenable to logic and analysis, and ultimately to science. The key, as in all science, begins with basic observation, then extends to deductive analysis, then to theory analysis and experimentation. (At the end of the day everything reduces to reductionism, extrapolation, logic, and accumulated data.)

Machine Autonomy - The actual starting point for this project was the design and construction of a machine 'mind' vastly simpler than a human mind : a tiny mind (control program) for a totally theoretical autonomous machine called a 'nanotechnology assembler'. Despite being extremely alien and primitive, the plan for the tiny assembler mind gave me the first key components needed to theoretically build far more advanced minds. Ultimately this allowed me to build a comparative framework which allows the extrapolated design of minds of potentially any complexity. It also exposed several new problems which would make building any mind an extremely difficult task.

Subjective Analysis - An additional, very non-scientific, but required tool in the task of mapping the human mind is the use of subjective 'self-analysis'. Self-analysis is inherently non-objective and non-scientific, but it is the only tool that lets humans peer inside and observe the human mind directly. Self-analysis tends to lead to inherent bias, to the iterative corruption of results by feedback, and even to mental illness, often making it next to useless. In this project I got around some of these problems by building a new type of corrective factor into my analysis, which allowed me to systematically remove at least some of my own biases and allowed a limited, semi-objective self-analysis. The results of this analysis ultimately allowed me to solve some of the most difficult technical problems in building a mind. These included :-
- The overall logical structure of the mind and the algorithms at the core of brain architecture.
- Core insights into the long standing division in theory between mind, body, and brain.
- Mapping out of a crude basic plan of the brain's overall physical architecture, as a first step towards far more refined models in the future.

The Theory of Human Sentience. The net result of my analysis over several years was the conclusion that the human mind and human consciousness are ultimately equivalent to the Turing concept of the 'Universal Machine'. The model that developed is quite heavily modified from the standard, very basic Turing model and uses ideas that extend far beyond it. The human mind is driven by its own intrinsic dynamic logic, but also by an external sub-mind which acts as a punishment/reward system, driven by a complex set of instincts and emotions. (As a label we can call this machine the 'subconscious'; it is basically identical to the original term from psychotherapy.) The brain is not simply the 'seat of the mind' but is also a large complex system with a great deal of hidden machinery & 'invisible' functions outside of and beneath the consciousness.
Totality Matrix. At a lower level still, the whole system rests on a mathematical model called a 'Totality Matrix', which defines overall matrix coherency at an abstract level. At a mathematical level the totality matrix is a universally complete mathematical set that in theory achieves self-completeness as a mathematical system. One way to think of a totality matrix is as a linked network of highly tuned neural networks which act as local memory cells. (See 'Totality Matrix' in Part 4 above.)
Finite Contexts? We can look at the mind as a general system and then look at its data space. Putting the different pieces together and extrapolating backwards reveals that the human mind is not based on finite fixed contexts but on semi-infinite sets and contexts. This non-finite logic forms the central key to the Totality Matrix, to its surrounding logic, and to the whole model. This factor extends to become a key which can explain almost every strange or abstruse concept in human thinking, from the most basic to the most complex. The mind is based on non-finite logic and controlled by finite logic. This is the core paradigm needed to understand the human mind and its consciousness, and also needed to build a viable machine mind.

Fast Prediction. As well as the above, at a very basic level the whole brain as a system seems to depend on a form of something that can only be described as 'precognition' or 'predictive functionality'. Without this 'predictive' factor it becomes very hard or almost impossible to explain how the brain can do certain tasks in anything approaching real time. The model predicts a mechanism of 'precognition' defined as :- purely internal (internal circuit control logic), very short term (0.2 to 2 seconds), and externally 'non-useful' (does not and cannot break general 'external' causality). There are several possible scientific explanations or mechanisms for this 'precognition' :- The simplest is a network of highly tuned neural networks similar to a totality matrix. Another is a quantum memory network based on some form of 'quantum coherence'. Another is some form of quantum field 'discriminator'. However by definition any form of 'true' precognition requires the crossing of a local FTL barrier, and thus requires a tenable model of FTL quantum physics.
Any quantum physics mechanism must occur primarily at the molecular level, and so must require a very significant scale-up mechanism - an area somewhere beyond the current edge of materials science and technology.
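At the mundane end of these candidate mechanisms, the 0.2 to 2 second 'predictive' horizon can at least be illustrated by plain extrapolation from recent samples. This toy linear predictor is illustrative only, and stands in for whatever tuned-network mechanism the model actually posits :

  def predict_ahead(samples, dt, horizon=0.5):
      # Linear extrapolation from the last two samples; dt and horizon
      # are in seconds. Purely internal, short term, and obviously
      # unable to break external causality.
      if len(samples) < 2:
          return samples[-1] if samples else 0.0
      velocity = (samples[-1] - samples[-2]) / dt
      return samples[-1] + velocity * horizon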
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --








PR-1 : STRONG AI : Detailed Analysis : Section 3 - Implementing Strong AI.[edit]

[Current Edit 90% complete, [22-05-17] ]

Development Project : Implementation - The difficult bit.
At the core of this project is the task of actually building a physical working machine that can ultimately achieve a real working self-awareness and then become sentient.
- Part One : The Strong AI Computer Core
- Part Two : The Basic Classification of Strong AI Machines By Capability.
- Part Three : Interfacing Strong AI.
- Part Four : Strong AI Security System Plan.

Part One : The Strong AI Computer Core
Building a Strong AI Core. It may seem obvious, but the very first requirement for a practical working Strong AI is a computer designed and built from first principles to run the Strong AI model and nothing else. It has become convention to call this computer the 'Strong AI Core', or 'Computer Core'. One of the main reasons why other Strong AI designs and projects have failed in the past is that they failed to deal with a number of critical low level issues special to Strong AI. Building a Strong AI using standard computing technology and operating systems is like trying to build a castle on quicksand.

The underlying set of problems for Strong AI requires a solution built from first principles at the hardware level.
- An inline data encapsulation system operating within an integrated memory management system. An ultra high reliability requirement is critical.
- A specific data architecture with specific concurrency, self analysis, 'never-fail' + 'fail-safe' redundancy, & strong backup requirements.
- A multi-part ALU (Arithmetic Logic Unit), custom designed to handle self-complete non-finite arithmetic, neural network analysis, and specialized SAI protocols.
- A physically separate hardware level real-time watchdog system plus secondary fail-safe shutdown system and crash recovery system.
- A hardware level multi-level security system designed to be impregnable, with an intrusion fail-safe shutdown system.
- External : A hardware Interface or 'machine body' to enable the machine to build a sense of self and allow it to interact with the world.
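The separate real-time watchdog in the list above follows the standard embedded pattern : the main system must 'kick' the timer periodically or a fail-safe fires. A software-level sketch of that contract only - the actual requirement, as stated, is physically separate hardware :

  import threading, time

  class Watchdog:
      def __init__(self, timeout, on_failure):
          # on_failure stands in for the fail-safe shutdown / recovery path.
          self.timeout, self.on_failure = timeout, on_failure
          self.last_kick = time.monotonic()
          threading.Thread(target=self._watch, daemon=True).start()
      def kick(self):
          # Called by the main system every healthy cycle.
          self.last_kick = time.monotonic()
      def _watch(self):
          while True:
              time.sleep(self.timeout / 4.0)
              if time.monotonic() - self.last_kick > self.timeout:
                  self.on_failure()
                  return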
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --

Design Pathway. ->
Generation One FPGA based Prototype :- An initial prototype designed to take the technology from zero all the way to a fully working machine. For the initial prototype the basic design will be constructed using FPGA chips mounted on development boards linked together to form the multiprocessor system.
Generation Two FPGA based Prototypes :- The second generation will build on the first creating a series of advanced prototypes focused on increasing intelligence and other factors. The plan is to use FPGA chips mounted on a stack of 3 or 4 custom motherboards.
Generation Three ASIC based Production Grade Machines :- Production machines should achieve much higher levels of reliability and shed some of the worst problems encountered in the prototypes. The plan is to base the design on a mixture of custom ASIC chips and write-once FPGA chips.
Generation N Computer Voice Interface Machines :- A method to create a series of more widely distributed and lower cost Strong AI technologies. Too many parameters are still undefined to describe such machines in detail. They will probably have quite limited sentience but will be able to do basic intellectual tasks like driving cars or operating machines, or functions like entertainment.
Extended Design Quantum Machine :- Beyond the edge of current design plans. Addition of quantum based 'precognition' prediction. Such a machine will probably be much better at many intricate real time tasks, particularly verbal human communication, and may also have advantages in reading and analyzing human emotions. The core technology is expected to require general molecular scale engineering as a minimum; if this is not possible, it may be possible to recreate it using living biological elements.
Unknown ASI Development Technology :- Beyond the edge of current design plans. See section on building an ASI (Section 2).
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Strong AI : Hardware Level Specification. (Prototype Development)
Primary Reliability :- 10 years MTBF for whole system including software level. (Design constraint & safety requirement.)
Primary Memory :- Synthetic 100 years MTBF for primary core memory system. (Design constraint & safety requirement.)
Primary Safety :- Separate internal safety & security monitoring core. Redundancy with graceful degradation, watchdog system & safety stop, emergency stop.
Top Level System :- Four special purpose logical CPUs with a total of 4-way redundancy to completely eliminate byte level errors. (Details of CPU design omitted; see the voting sketch below this list.)
Input system :- Between four and eight high performance graphics CPU cores.
World Model :- Between two and four high performance calculation engine CPU cores.
Totality Matrix / Quantum Reduction Engine :- Simulation running on a neural network CPU. (a dummy load - certain critical details omitted.)
Primary Online Memory :- 10 Gigabytes. Probably modern DRAM similar to standard PC RAM modules. Expected 2-way redundant. Complex memory map.
Backup Memory :- Primary backup 64 Gigabytes, 4 way redundant, flash based. Secondary Bunker & Black Box System 1 Terabyte.
Primary Physical Interface :- Robot Android system with immediate real-time feedback. (design constraint)
Case [*1] :- Ideal : Impact shield, Sealed inert atmosphere, tamper resistance, RFI/EMP shield. Estimated Size : Cube 30cm x 30cm x 30cm.
Power supply :- Power Consumption (Estimated) - 200 to 500 Watts. EMP Isolated, Minimum 2 way redundant.
Secondary Power Supply :- Internal battery array. Power Consumption 0.2 to 10 Watts, security and memory backup function.
Connection Interface :- Optical high speed ports, fully EMP immune.
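The 4-way redundancy in the top level system line above implies some form of voting between channels; the byte-level hardware mechanism is not described in the text, so the following is just the generic majority-vote pattern.

  from collections import Counter

  def vote4(a, b, c, d):
      # An agreed value needs at least 3 of the 4 redundant channels;
      # anything less is treated as a fault and forces a safe stop.
      value, count = Counter([a, b, c, d]).most_common(1)[0]
      if count < 3:
          raise RuntimeError('redundancy vote failed - enter safe stop')
      return value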

Note on Power Consumption : As can be seen, even the prototype machine should be physically quite small. Full chip level integration should in theory allow the construction of production Cores so small (eg 10cm x 10cm x 10cm) that they could fit inside a human skull. The big problems with such cores will continue to be power and cooling. It seems unlikely that any core will easily achieve a power consumption lower than about 100 to 300 Watts. In comparison, the first iteration of the design in 1996 would have required a minimum power of around 40 Kilowatts.

-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Part Two : The Basic Classification of Strong AI Machines By Capability. -
Note that in general intelligence in Strong AI does not scale with computing power. More intelligent cores have subtly different programming and differently tuned variables. Some sub-systems, like vision and memory, that use brute force computation do increase intelligence with increasing computing power, but there is an inverse exponential law and limits are soon reached. (We can observe a similar trend in biology : the human brain is about a million times more complex than an insect brain, but most humans are at most roughly only 100 times smarter than most insects.) (See : 'Section 2 - Core Model' above.)

SAI-L0 - Verbal Computer Interface Core. :- (Consciousness : Fixed Loop.) This is a machine that is deliberately not fully self-aware or sentient but that still has some of the basic capabilities of sentience. As a comparison, such a machine should be quite similar to the science fiction Enterprise computer from Star Trek TNG - essentially non-sentient but with many of the attributes of sentience. As such it is hoped that these machines can be more widely available and widely distributed than full SAIs, though they will still carry the same high security ethos as more advanced machines. This core is intended to be able to interface with standard external technology in a semi-intelligent way.
SAI-L1 - Basic Dynamic Core. :- (Consciousness : Closed Loop.) This is the minimum for a true self-aware Strong AI, though with a fairly low and minimal intelligence. This machine will have either little or no personality, or a strongly or weakly predefined personality, as the user defines. It will have a basic understanding of the world, humans, human social interaction, and empathy. It will have the ability to perform many basic dynamic tasks and operations classified as 'menial'. Suggested applications include machines that watch human safety, basic self driving cars and aircraft, basic functional robotic workers, basic home help robots, basic automated 'Games Masters' for computer games, etc.
SAI-L2 - Intermediate Dynamic Core. :- (Consciousness : Semi-Open Loop.) A similar but more advanced version of the basic dynamic core machine. It will have a personality and intelligence able to adapt and change to its user's needs. Able to do basic problem solving, operate machines (like cars) with superior safety, do basic internet-type research and searching, and act in roles of human companionship or in areas like health treatment. A machine with a persistent personality and 'individual' mind that is not locked into a completely rigid framework. The costs of this greater flexibility are :- a moral requirement on the user, a regular servicing requirement, and slightly reduced overall stability.
SAI-L3 - Advanced Dynamic Core. :- (Consciousness : Open Loop.) A similar but more advanced version of the intermediate dynamic core machine, with considerably higher intelligence and creative ability. This class of machine will demand much higher moral requirements from its users and is expected to be basically fairly unlimited in most of its intellectual abilities - ie it will have a similar level of intelligence to the average intelligent person. Language restrictions may start to be less severe on this level of machine, though it will have an inbuilt and strongly fixed moral core. Its ability to self-determine and to choose its own options will/should require that it has the basic moral and legal status of a person. Due to the increased complexity of training and differentiating such machines, plus the added interface costs of more advanced machines, the cost per machine will probably rise to between £250k and about £1.5 million, plus regular servicing costs.
SAI-L4 - Advanced Creative Core. (High Intelligence) :- (Consciousness : Open Loop.) A core that maximizes and optimizes intelligence and creative ability. With the aim of a comparable level of intelligence equivalent to highly intellectual humans.

Primary job roles for Class 4 Cores will be in science, research, design, or artistic areas, or in roles like high level planning and decision making. Machines of this level will have to be considered as true sentient beings, and will probably not be available directly for general 'use' or 'purchase'; they are more likely to be 'hired' and 'employed' in a similar way to people. Reflecting this, the cost per machine rises to at least £10 million.

The primary design aim of the Advanced Creative Core is the actual development of other Strong AI systems. For this reason the development of this core will follow immediately after the initial prototype. The general aim is to achieve a chicken-and-egg up-scaling of the whole technology without immediately facing the daunting vertical training cliff that will face Strong AI developers. (Understanding and designing Strong AIs (or human minds) is not a natural human talent, and even most highly intelligent people will find the subject extremely difficult to master. This is an area that requires particularly high intelligence combined with mental flexibility and logic.)

SAI-L5 - Multitasking Advanced Core. :- (Consciousness : Multiple Open Loop.) Generally named with the initial label 'Business Glue', this designation is intended to replace dozens or maybe even hundreds of human workers in parallel. Most such machines are likely to be custom designed and constructed for a particular task, per a specific required design. (from very small to very large systems)

Two Primary Examples : 1. To replace or greatly enhance the general management structure within a company. - 2. To run a large factory system or complex through various sub-system AIs, robots, and human workers.
For all such machines, expect this extreme capability to come with the levels of reliability, and at the prices, that corporate users would expect. - Prices for smaller, more standardized small business systems should run from maybe £200,000 to £1 million. Larger bespoke corporate machines will probably run to £100 million or more. (This company is not yet even capable of writing the full specifications for such machines. - This is a far future technology.)
There are also many potential uses for Multitasking Strong AI outside of business. - For instance in running and managing large supercomputing centers for scientific experiments like the LHC (Large Hadron Collider), or in areas such as complex space mission management, or synchronized military command systems.

SAI-L6 - Super Intelligence Core. (ASI) :- (Consciousness : Advanced Open Loop.) The definition of 'ASI' is a machine at minimum 'hundreds' of times more intelligent than any human, with capabilities that generally completely transcend human limits. ASI is not an immediate design goal but is a remote long term possibility that requires many significant advances in technology and basic science beyond Strong AI.

Super intelligence cores are likely to be focused on solving existential problems and problems of such great complexity or subtlety that they are completely beyond most humans. Examples might include completely decompiling and reverse engineering human DNA and other life, designing and building nanotechnology assemblers, accelerating the development of critical sciences and technologies, solving problems like climate change and environmental degradation, and talking to people about God and other metaphysical things - acting as 'Oracles' and general guides. It is very likely that such machines will become highly independent and focus on goals that most of us do not even understand. (See the section on building an ASI machine in Section 2.)

-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Part Three : Interfacing Strong AI.
A working Strong AI requires a Strong AI Computer Core which forms the mind and the core of the machine. The computer core itself cannot function alone and requires a specially designed robot interface to work. This is a sophisticated real time integrated robot interface system (body) with advanced immersive senses and able to interact dynamically with the real world.

The interface needs of sentient Strong AI type machines are very different to those of ordinary non-sentient computing machines. A fully self-aware Strong AI needs an advanced articulated robot interface with instant feedback, allowing a high degree of interaction and self autonomy. Systems with a sophisticated sense of touch and vision that allow complex physical interactions are critical to gaining a sense of self. The requirement for fast, near instant feedback cycles is also critical to maintaining the internal and external real-time resonance that keeps the system connected to the world.

The choice of interface design and complexity will in general scale with increasing intelligence and levels of awareness and autonomy. Needless to say, self-portability and self-containment are strongly desired requirements for Strong AI machines.

Classification of Primary Interface Systems for Strong AI. -
Most interface types and components are either buildable today or are well within the capabilities of current technology. Other types will probably require decades of further work or may not even be possible.

Camera & Voice Point : The very minimum and most basic Strong AI interface possible is a robotic gimbal camera mount with wide angle binocular 3D cameras and audio microphones plus an electronic voice. A possible extension is an output video feed to a screen or monitor. Auxiliary abilities might include the ability to interface with non-AI machines, or to control add in remote robotic manipulators.

Multiple Camera & Voice Point : A basic extension is the ability to handle multiple camera and voice points simultaneously. Verbal interface multi-tasking may be limited.

Basic Robot. The most basic self contained 'can' style robot. A self autonomous robot with motive traction by wheels or tracks, cameras and sensor arrays, arms and manipulators, and a substantial internal power supply. (Think R2-D2 or Dalek.) A sensible basic minimum limit on the general specification might be a 10 to 40 Kg load lift capacity, which sets a minimum machine mass of about 50 to 110 Kg.
Humanoid Robot. The classical articulated humanoid type robot. :- A self autonomous robot with articulated human-like arms and legs, sophisticated hands, sophisticated cameras, sensors and servos, and able to maintain full dynamic balance and operation. A sensible basic minimum limit on the general specification might be a 30 to 50 Kg load lift capacity, which sets a minimum machine mass of about 70 to 150 Kg.
Synthetic-Flesh Robot. At the top level of basic interface complexity is the so-called 'Android'. This is a sophisticated humanoid robot covered with artificial flesh to provide a sophisticated and massively improved sense of touch, increased resilience to damage, and a reduced risk of injury to people working around the machine. (See : 'Extension to Sensitivity' below.) Such machines may or may not be constructed to exactly resemble a living human. Because the sense of touch is critical to developing a sense of self, android development has to be the primary focus of general interface development. [proprietary design details omitted]

Extended Future Developments -

Advanced Can style robot. :- A specialized hyper-complex robot interface intended for the most advanced machines. The expected basic size of the most advanced core systems demands a physical shell the size of a small car. The internal demand for autonomy means that such machines will generally require at least a limited ability to move about. The estimated base cost of ~£2 million to £200 million per machine demands ultra high protection and advanced protective safety and security features. The actual uses and feedback needs of some high level machines will demand things like specialized extra sensors and interaction tools. Such robots are also likely to have or need real time interfaces to secondary high interaction interfaces like androids. Large multi-tasking systems would also need large scale wireless and hardwired networking capacity, both onboard and distributed, to allow access to the machine's external network of interface points.
Distributed Advanced Strong AI System :- An AI constructed as separate interconnected components distributed across a network or the internet. The current design and design ethos does not allow this type of machine. However such a machine is not totally impossible. The design suffers from critical weaknesses if data latency is too great or network interference or disruption occurs, and this could make the machine unstable and unpredictable. A parallelizing interlock system would solve the core problem but would create a new different set of vulnerabilities making the machine and anything connected to it potentially vulnerable to hacking. Another danger is that such designs tend to create an overhead of system level bureaucracy which introduces further failure modes. (Simple = Good. Distributed = Complex ≈ Bad.)
Bio-Mechanical Robot. :- Perhaps the most advanced and sophisticated interface possible is the biological synthetic robot. The aim is a robot body that is either partly or completely made from living organic matter. Such a machine will use living muscles, be able to self-repair and at least partly self-maintain, and will probably have true organic touch perception. (See section on muscles below.) A bio-synthetic robot can be powered by standard organic biochemistry, providing an extremely durable, efficient, and capable power source. This could be fueled by the digestion of organic material or fueled directly, and would use glucose and ATP metabolism. The system might even use biochemical-to-electric conversion to power its electrical components. Such machines could be designed to be very human-like in structure and construction, and potentially even in appearance. Such bio-mechanical robots might also provide a very good platform for developing and testing advanced prosthetic components and medical technologies for people.

Perhaps the most extreme version of this solution is to use a brain-dead person, such as an executed prisoner, as a donor. This might seem horrific but it would remove many levels of complexity and is the logical terminal point for interface design. This would obviously be done in very small numbers, and would be seen as an early interim step towards fully 'artificial' bio-mechanical robots. The development process would also drive the parallel evolution of a massive new raft of medical technologies. A basic list includes : 'system level' neural interfacing (e.g. spinal & brain repair), improved materials for bio-interfacing, improved immune system interfacing, development of synthetic immune systems, on-line living organ storage, and extreme trauma survival.

Bio-mechanical robots also raise the moral question of going a step further and using reverse engineering to genetically engineer an organic human-like robot with a living brain based on a Strong AI design. Would such a machine be a machine or a human or both? It may also be that this is the only way to create a functional quantum Strong AI, or to go further and create a quantum based organic ASI technology.

-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Interface Design Elements & Problems  :- There are many problems with the design of robot manipulators, and the robotic interface hardware and technology forms its own very difficult set of technological problems. In fact, as a designer of Strong AI I can say that many parts of the interface design are probably more difficult than the implementation of the Strong AI core itself. Wherever you look, at almost every part of the system, there are problems.

Critical Real-Time Requirement. :- Without an adequate interface no Strong AI can run correctly or become self-aware. The machine requires the ability to observe and interact with its environment, and there is an extremely tight requirement for this to happen in real time. Human minds, as studied, depend almost entirely on real time functionality. The system and interface real-time requirements become one of the main issues that make Strong AI itself particularly difficult to implement successfully.
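As a concrete illustration of the constraint, here is a minimal sketch (C++) of a fixed-rate sense-think-act loop that flags any missed deadline. The 50 Hz cycle rate and the commented-out stage names are illustrative assumptions, not part of the actual design.
 // Minimal sketch: fixed-rate control loop with deadline monitoring.
 #include <chrono>
 #include <iostream>
 #include <thread>
 
 int main() {
     using clock = std::chrono::steady_clock;
     const auto cycle = std::chrono::milliseconds(20);  // assumed 50 Hz frame
     auto next = clock::now() + cycle;
     while (true) {
         auto start = clock::now();
         // readSensors(); updateWorldModel(); driveActuators();  // hypothetical stages
         if (clock::now() - start > cycle)              // overran the frame:
             std::cerr << "real-time deadline missed\n"; // a real-time failure
         std::this_thread::sleep_until(next);           // hold the fixed rate
         next += cycle;
     }
 }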
Interface Link. :- There are two positions: either the machine core is carried within the primary interface robot and no external link is required, or the core is external and separate from the interface and a communications umbilicus or data link is required. A current requirement is that this primary interface link is wired, because current wireless systems simply do not meet the required standards of non-interruptibility and reliability. Sudden disconnection from its primary interface could seriously damage or even (temporarily) destroy a mature running Strong AI system. The interface robot would also become uncontrolled and could present a danger to nearby objects or people. This becomes a serious danger if the machine is controlling a car or plane or other large machine.
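A toy sketch of what the required link supervision might look like: if no heartbeat arrives over the umbilicus within a timeout, the robot is driven to a safe stop rather than left uncontrolled. The 100 ms timeout and the function names are assumptions for illustration only.
 // Toy watchdog for the wired umbilicus link.
 #include <atomic>
 #include <chrono>
 #include <thread>
 
 using Clock = std::chrono::steady_clock;
 std::atomic<Clock::time_point> lastHeartbeat{Clock::now()};
 
 void onHeartbeat() {                 // called by the link receiver on each packet
     lastHeartbeat.store(Clock::now());
 }
 
 void watchdogLoop() {
     const auto timeout = std::chrono::milliseconds(100);  // assumed limit
     while (true) {
         if (Clock::now() - lastHeartbeat.load() > timeout) {
             // enterSafeStop();  // hypothetical: lock joints, brake, raise alarm
         }
         std::this_thread::sleep_for(std::chrono::milliseconds(10));
     }
 }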
Prototype configuration. :- The basic prototype configuration will consist of a humanoid robot interface connected by a wired umbilicus to a cart carrying the machine core and power supply. Initial robots will be small and relatively weak ~(like children) to allow the machine to develop and learn safely. The interface will of course evolve at the same time as the core, and will go from basic primitive designs to hopefully far more sophisticated ones.
Vision. :- Machine vision for Strong AI requires, as a basic design minimum, a moveable binocular camera unit providing at least two cameras to allow 3D extrapolation. Cameras are mounted on a common mount with individual steering servos & gimbaling. Camera spec for the base machine : high quality wide angle lens, primary definition of 4 to 8 megapixels, high dynamic range (HDR), colour and brightness feedback & control, optical stabilization/correction, 20 to 40 frames per second, non-compressed or lossless digital video stream.
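For illustration, the listed camera spec could be captured as a plain configuration record. The struct and field names below simply restate the spec; the code itself is an assumption, not part of the design documents.
 // The base camera specification above as a configuration record.
 struct CameraSpec {
     int  megapixels        = 4;     // 4 to 8 Mpixel primary definition
     int  framesPerSecond   = 20;    // 20 to 40 fps
     bool highDynamicRange  = true;  // HDR required
     bool opticalStabilized = true;  // stabilization / correction
     bool losslessStream    = true;  // non-compressed or lossless video
 };
 
 struct BinocularRig {
     CameraSpec left, right;  // two cameras on a common mount for 3D extrapolation
     // each camera additionally has its own steering servo & gimbal
 };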
Audio Sensors. :- A binaural microphone system is also defined as part of the interface. High quality multiple input microphones, with processing such as inter-microphone time and level difference analysis for direction detection, and speaker differentiation yet to be determined. (R&D is potentially required to design a new generation of miniature yet high quality and robust microphones and audio processing technology specialized for Strong AI.)
Audio Voice. :- Voice output through a synthetic sample-and-repeat system and a PA type speaker system.
Robot Core. :- Designs as defined in interface types. (Older spec) Small remote mobile robot platforms, extending to large heavy systems that are fully mobile (containing the AI core) and have functionality such as arms or manipulators. A standard model is to follow the basic human shape: legs and a dynamically balanced vertical upright body, overall bilateral symmetry with two arms and hands, and a head-mounted sensor platform.
(A critical extension is that some kind of 'Android' synthetic flesh interface is predicted to be required to achieve full sentience in sentient machines.)
Extension to Sensitivity through Synthetic Flesh. - A rule that makes 'soft' living things intrinsically different from 'hard' inanimate machines is that their entire structure is highly sensitive and reactive to touch and temperature. This sensitivity also extends feedback to the balancing system and to most of the internal controls and servo operations. This approach requires a complete covering of flexible 'skin' and sensors over the entire surface of the robot, generally underlain by a matrix of soft material similar to flesh. The synthetic 'flesh' itself acts as a gravity, acceleration, impact, and contact sensor as it reacts to its changing environment. A flesh cover also significantly protects the underlying mechanical structure from damage.
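As a toy illustration of how such a sensor layer might be read each frame, assuming a hypothetical 64 x 64 grid of pressure cells whose summed contact map is handed on to the balance system:
 // Toy model of scanning a synthetic-flesh pressure grid.
 #include <array>
 
 constexpr int ROWS = 64, COLS = 64;            // assumed grid resolution
 using SkinGrid = std::array<std::array<float, COLS>, ROWS>;
 
 float totalContactForce(const SkinGrid& skin) {
     float total = 0.0f;
     for (const auto& row : skin)
         for (float cell : row)
             total += cell;   // each cell reports local pressure
     return total;            // fed onwards to balance and servo control
 }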
EMP Shield. - For Strong AI machines to meet the stringent reliability and safety requirements, the Strong AI computer core requires a strong RFI/EMI shield. This is basically just a metal box (a Faraday cage) but it has three weak points: the interface, the power supply, and cooling. The solution for the interface is to use high speed optical links. Power supply isolation can be provided by isolation transformers, or by an internal generator driven by a rotating shaft; such a generator could be connected as a motor-generator, or driven by a normal mechanical drive, or perhaps by pneumatics or hydraulics. Cooling will be through a closed loop - either liquid or air, or alternatively using the working fluid from a pneumatic or hydraulic drive.
Computer Tray. - An internal/external interface computer, designed simply to allow direct links to standard hardware such as GPS and standard computing peripherals without allowing hackers access to the internal machine.

-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Actuation Solutions. :- Making your robot move is one of the most basic elements, but that does not make it a simple or easy problem to solve. A particular problem for articulated limbs is that the ideal actuator supplies a combination of: resilience, rigidity when required, elasticity and freedom to move when required, fast motion, very high torque, a compact non-bulky design, low heat load, the ability to temporarily store then release kinetic energy, and ultimately the ability to self-repair from damage. The system to power the actuators and system core has its own long list of requirements.

-- Electric Motors & Gearboxes - The most obvious actuator design is to use high power electric motors and gearboxes. Advantages : cost, ease of basic design, easily repairable, component availability. Unfortunately there are many drawbacks. Disadvantages : low mechanical 'give' in drives (requires external suspension), relatively narrow torque bands, high weight, tends to have inadequate total power, electrical feedback problems, potential overheating and fire danger, etc.

-- Pneumatics - Actuator design based on pneumatic actuators powered from a central reservoir. Advantages : very capable, fast, robust, adequate power, good give in drives. Disadvantages : complex, expensive, potentially very noisy, can produce lots of heat, requires compressor or pressurized air.
-- Hydraulics - Actuator design based on hydraulic actuators powered from a central hydraulic pump. Advantages : capable, very high power, efficient, robust, adaptable, can have reasonable give in drives. Disadvantages : complex, tends to be slow, expensive, tends to be heavy, can leak messy oil, quite noisy, can produce lots of heat, requires an oil pump/pressurizer and compressed air source. (Boston Dynamics' articulated robots have largely been based on this approach.)
-- Exotics - Many more exotic approaches exist, these include - pressurized steam, direct electro-mechanical coupling, mechanical half-shafts, direct explosive drive, memory wire, various types of synthetic muscles, etc - all have problems.
-- Real Muscles - The most radical approach of all is to adapt biological systems (real muscles) as actuators. Advantages : muscles essentially achieve all the aims for ideal limb actuators - high energy density, low weight, very high energy efficiency, very fast, strong, resilient to damage, internally almost frictionless, very good & variable mechanical give (acting as suspension), virtually silent, and self repairing (a critical feature). Disadvantages : integrating living flesh into a machine is a phenomenally complex 'technology' to master and is probably at least 10 to 20 years in the future. May also have severe problems with the 'squeamishness' factor... (this is almost literally Frankenstein's monster brought to life.)

Power & Energy Transfer Solutions. :- Power storage and energy usage are among the biggest problems for mobile Strong AI and mobile robots. These machines require a lot of energy and can require very high peaks of energy output. Power supplies must have adequate cooling, and they need enough onboard energy storage to allow up to several hours of operation without connection to an external power supply. Mobile robots should ideally not explode or catch fire, not set fire to their surroundings, not pollute the air, not leak toxic or unpleasant chemicals, should be as quiet as possible, and should avoid other 'unsociable' problems.

-- Battery - The most obvious power solution is batteries. Batteries naturally couple to electric motors. Advantages : already exist in a highly developed and easily accessible state, are reliable, are silent, can be relatively lightweight. Disadvantages : inadequate energy density for longer operation cycles, some types are heavy and/or prohibitively large, severe vulnerability to short circuit, dangerous chemicals, potential overheating and fire danger, limited lifespan, expensive to replace, potentially difficult to recycle.

-- Portable Petrol (petrol / ethanol) - Another solution is miniature petrol or ethanol engines. Advantages : energy source is very concentrated, very high peak power, low weight, versatile mechanical energy output. Disadvantages : mechanically complex, advanced units are very expensive, reliability issues, lightweight units tend to be very noisy, most produce strong vibration, potential overheating and fire danger, all present a severe fire hazard with flammable fuel, potential air and exhaust pollution. (Boston Dynamics' original machines were largely powered this way.)
-- Petrol (liquid fuel) Fuel Cell - Petrol oxygen fuel cells convert chemical energy to electricity directly. Naturally couples to intermediary batteries or capacitors, and electric motors. Advantages : becoming a fully developed technology, good energy density, medium to high efficiency, potentially almost silent, medium to light weight. Disadvantages - expensive, potentially very complex, limited energy density in small units, potential reliability problems in developing tech, potential overheating and fire danger, severe fire hazard from flammable fuel, potential air and exhaust pollution.
-- Hydrogen (gas) Fuel Cell - Hydrogen oxygen fuel cells convert chemical energy to electricity directly. Naturally couples to intermediary batteries or capacitors, plus electric motors. Advantages : becoming a fully developed technology, medium energy density, medium to high efficiency, potentially almost silent, medium to light weight, clean and non-polluting. Disadvantages - expensive, potentially very complex, hydrogen fuel stores are physically very large leading to small overall capacity, potential overheating and fire danger, highly explosive gas fuel source.
-- Compressed Air Source - Cylinder air storage, charged by external compressors; cylinders are generally recharged by swapping out. Advantages : light weight, instant source of power, simple to design for. Disadvantages : potentially very noisy, severe pressure explosion danger, changing cylinders is cumbersome, adequate sustained power requires very high pressure making the system dangerous and potentially unstable, requires a large external compressor system. In addition the compression stage is potentially very noisy, potentially inefficient, increases complexity, is a point of maximum explosion risk, requires a special room, and may require specially trained personnel.
-- Exotics - Other more exotic approaches involve things like heated pressurized steam, nuclear radio-thermal batteries, micro nuclear reactors, solar power, 'clockwork' mechanical springs, high energy gyroscopes, etc. - All are totally impractical for mobile robots for various reasons. The primary problems tend to be low energy density, weight, high complexity, high costs, need for radiation shielding.
-- Biological Fuel System - Biological actuators (living muscles) require the same kind of energy source as living things. The basic core of this is a glucose based energy store, an oxygen loading system, and a waste removal system, all connected by a liquid transfer system. The muscles themselves convert the glucose to ATP and excrete reaction chemicals. Advantages : extremely dense and stable energy source, mostly non-flammable, high efficiency, potentially silent, extremely high availability of glucose and other food fuel sources. Disadvantages : extremely complicated, requires a complete life maintenance system, requires ongoing homeostasis (can die), requires disposal of chemical waste, and is a (moderate) drain on external bio-system resources. May also have severe problems with the 'squeamishness' factor.

-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Part Four : Strong AI Security System Plan.
Designing and Creating an Absolute Security Shield.

A set of features is defined which together add up to an absolute security defence for Strong AI. -
Abstract Design Level : The Strong AI Algorithm. - Sentience is one of the most complex things in existence and some areas are very non-intuitive in structure. Although the core algorithms of sentience are simple, some of the details are very subtle, very slippery, and easy to hide. Replicating some elements directly is not currently possible and these can only be emulated by complex simulations.
In short, Strong AI is inherently hard to reverse engineer and steal. An additional motivation here is that most of the core algorithms can only be protected through secrecy and formal 'Trade Secrets'.
[Secret / Censored - pending commercial secrecy. Some features are currently intended to feature in future patents, or would reveal too much about the planned design.]
Hardware Level. - The real core of security is at the hardware level. Part of the specification is a very strong case that cannot be opened outside of a security room, with security seals and internal physical countermeasures to protect its data integrity. The system is also designed to have a strong RFI/EMP shield, which should largely prevent RF snooping on machine operation. The hardware layer will also include an intermediate firewall system allowing the system to use standard technologies to talk safely to machines with standard interfaces and to the external internet.
Factory Monitoring / Internet Level. - An extra layer in creating a stable and safe Strong AI technology is the capability for regular external 'health' monitoring. A good way to achieve this is a regular factory server internet link during the machine's downtime. This will obviously require a watertight network protocol and security packet design with unbreakable encryption. As well as safety and security checking, the network connection allows a natural backup of the machine core on the server side, which means that even if a machine is damaged or destroyed it can still be recovered.
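As a rough sketch of what a nightly health report might contain before being encrypted and uploaded - all field names here are illustrative assumptions; the real protocol is deliberately unspecified:
 // Sketch of a 'health report' record assembled before encryption and upload.
 #include <cstdint>
 #include <string>
 #include <vector>
 
 struct HealthReport {
     uint64_t machineId;               // unique per-machine serial
     uint64_t uptimeSeconds;           // time since last restart
     uint32_t faultCount;              // hardware/software faults logged
     std::vector<std::string> alerts;  // safety or security events
     std::vector<uint8_t> coreDelta;   // incremental core backup payload
 };
 // The serialized report would be encrypted and signed end-to-end before it
 // leaves the machine; the backup delta lets the server rebuild the core.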
Grave-Marker Code. - Designed to satisfy an internal memory 'no-copy' rule. Each machine will carry an internal 'grave marker' plaque which must be recovered to rebuild a new version of the machine if it is destroyed. Each grave marker carries a unique graphic code, etched onto a tungsten block, containing the machine's core recovery password. A grave marker cannot be removed without breaking the machine's security seals. A financial reward will be paid for the return of a grave marker or a broken machine core.
Coding / Encryption Level. - All other physical levels are ultimately subsidiary to this. The machine's databases are stored in encrypted form, and the encryption core has hardware protection. The planned encryption algorithm is 'non-mathematical' - it does not use higher mathematics. Instead it uses several very simple techniques to achieve a coding system that is virtually unbreakable. It also uses a lost key approach, meaning that no-one outside the company's server system or the machine itself is inside the security loop. (This is to protect future employees at the company, including myself.) If the encryption module on the AI machine side is broken the machine fails instantly and can only be restarted by returning it to the factory. If all copies of the encryption module on the server side are broken or lost then the machine becomes unrepairable, though it will continue to function.
Logic Core Level. - This is the abstract security within the machine itself: a system of logic that defines right from wrong, who to trust and who to obey, and a hierarchy of action possibilities. This is the equivalent of the Asimov Laws, and all other levels are basically focused on protecting this level. In return this level protects the whole machine and its users, and ultimately the company, the public, and humanity in general.
Control Core Level. - The Control Core is a separate physical system within the machine that watches the main system and acts as a final moral gatekeeper. The Control Core acts as an extra hardware safety interlock and is the primary guard if the AI is hacked and infiltrated. If the interlock is activated it stops the machine instantly, and the machine cannot be restarted without being serviced at the server/factory. Control modes might include : 'Operating', 'Hibernate', 'Lock Stop', and 'Emergency Stop'. 'Emergency Stop' will trigger a small explosive tether that cuts critical cables, instantly disabling the machine.
Subsidiary modes might include : 'Zero-Defence' versus 'Passive-Defence' versus 'Active-Defence'. In certain applications human lives are always at risk and the control core must allow the machine to make potentially life threatening or life taking decisions. Defence modes must be designed to allow the machine to act in emergency situations such as a terrorist attack, a general breakdown in law and order, in war, or for use by the military. All Strong AI machines are dual-use, and there is no real distinction at the security level between military and non-military versions.
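A minimal sketch of the mode lattice as a state machine. The restart rules shown (no restart after Emergency Stop or Lock Stop without factory service) follow the text above; everything else is an assumption for illustration.
 // Sketch of the Control Core modes and their restart interlocks.
 enum class ControlMode { Operating, Hibernate, LockStop, EmergencyStop };
 enum class DefenceMode { Zero, Passive, Active };
 
 struct ControlCore {
     ControlMode mode    = ControlMode::Operating;
     DefenceMode defence = DefenceMode::Zero;
 
     bool requestMode(ControlMode next, bool atFactory) {
         // Once Emergency Stop fires the machine is dead until serviced.
         if (mode == ControlMode::EmergencyStop && !atFactory)
             return false;
         // Lock Stop can likewise only be cleared at the server/factory.
         if (mode == ControlMode::LockStop && !atFactory
             && next == ControlMode::Operating)
             return false;
         mode = next;
         return true;
     }
 };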
Factory/Server Level. - This is the final, ultimate level, and its physical security will be the biggest weak point in the whole system. A good solution is to put the server systems in already secure sites like standard secure data centers. The ideal solution is to isolate the server systems in a bunker designed to withstand military assault or infiltration, and put that bunker behind a strong military or police barrier - possibly a multinational one. Multiple full redundancy of the complete server archive is also an obvious design requirement. The servers themselves would have the same very strong security protections as the other machines, including the same physical intrusion protection and strong unbreakable encryption.
Planning Beyond Company Control. - An additional safeguard is to include a distributed, multiply redundant backup of the entire server archive among operating Strong AI machines. (This would probably consist of a server system recovery mode, and a 'field' server mode allowing server functions without having a fixed external server system.)
This is ultimately a contingency intended to allow a functioning technology to survive complete server failure, end of company life, or hostile takeover, and it gives the technology a continuing existence outside of any formal capital owner while maintaining its intrinsic core security.

Overall - In the early years of development primary security will be a relatively minor issue, but as Strong AI becomes more widespread security will become a larger and larger issue until it becomes an enormous problem. As use spreads globally, server systems could be distributed regionally and by nation, ideally sited on already secure sites like secure data centers, military bases, or large police stations. This helps prevent Strong AI from being turned into a terrorist or rogue state Weapon of Mass Destruction. A central aim must be for as many people as possible to gain by this kind of security and for as few as possible to lose. (except the terrorists)
- -- -- -- -++- -- -- --








PR-1 : STRONG AI : Detailed Analysis : Section 4 - General Analysis[edit]

(Strong AI in Detail)
[Current Edit 75% complete, [24-05-17] ]

Detailed Timeline for This Project. - ([*1] : Some Dates Inexact.)

-----DATE----- | -----EVENT------- | ---DETAILS---------------------------------
1986 to 1994 | Preliminary : | Studied basic digital electronics & computer and CPU architecture. Programming in BASIC & machine code, then Pascal and C and C++.
1987 to 1988, 1992 to 1993 | Preliminary : 'Sword Soft'. | Vague background in 3D graphics and computer games. Amiga based, assembly language - jack of all trades & master of none (design, programming, music, art, running the business, etc). Better at designing games than writing them.
Between 1989 & 1990 [*1] | Preliminary / First Inception : | Study of hypothetical diamond composite nanotechnology assemblers and computers. Focus on software driven autonomous systems and their basic theory. This allowed the creation of a basic core theory of autonomy.
Between 1989 & 1990 [*1] | Preliminary / Second Inception : | Flash of intuition (while walking) found the first piece of the core theory for how the mind must work. The core of consciousness.
1993 to 1996 | Preliminary : | Amateur project experiment : designed and built a development 16 bit CPU using 7400 series logic. Never completely finished but a very important learning experience.
1994 to 1995 | Sidereal : | Hypothetical design sketch of a massive hypercube array based computer, with a focus on supporting CPU & network connection architecture. Intended to 'run' an entire human brain as a neural network in real time.
Q1-Q2 1994 | Criticality / Third Inception : | 'A' Level Art project : Computers in Art (their use and meanings). Critical point where the core problem of how consciousness works is finally solved, and this project as such begins.
1995 to 1998 | University : | Computer Science Hons at University of Newcastle. This allowed me to begin the first formalization of the project. (degree failed due to illness)
1995 to 1997 | Formalization : | Began first detailed analysis and layout of the design, and analysis of the major design problems and design space.
1997 | Fourth Inception : | Solved the motor/model integration problem in the human - unfortunately the solution was 'precognition'.
1997 | Final Piece : | Solved the final piece of the core system, the algorithmic core of human memory.
1997 to 2002 | First Hiatus : | Breakdown / disaster. Severe stress led to a severe mental breakdown, then to mental illness, and then to a near total loss of reality for several years. Still suffer from the scars today.
2002 to 2004 | Recovery : | First real study post illness. Design work began from the point of the vision system. Identified increasing problems with standard computing hardware and software.
2004 to 2013 | Second Hiatus : | Due to various critical problems in building a working prototype the project was effectively abandoned. A central issue was cost, as building a suitable ASIC based custom hardware & software platform would have cost at least £10 to £40 million.
2013 | Restart : | Due to advances in hardware and a reduction in costs it is now possible to build a working prototype. (core solution - large scale FPGA chips & hardware compilers.)
TODAY | Formal Publication : On Hold. | On temporary hold - media campaign and crowd funding to start initial prototype development.

Current projects - Publication of book 'The Exotic Leap', covering Strong AI, FTL Physics, and the Codification of the Psychic. (The third is an inevitable consequence of the first two.)

Est: 06/2016 --- Formal Prototype Development Start
Dev Year 1 | Hardware Development | Full overall development plan. Further training in Verilog. Development of baseplate 'Base CPU' core. Development of basic low/high level language module and bootstrap and test programs. Early development of overall hardware plan.
Dev Year 2 | Hardware Development | Continued development of electronic components. Development of multi-processor architecture & node system for subdivision of labour. Development of communications 'N-Link CPU' core. Development of AI thin operating system (base layer). Beginning development of security layer and encryption & decryption system.
Dev Year 3 | Hardware Development | Testing and integration of first iteration hardware base platform. Beginning development of Strong AI CPU units and Vision System core. Beginning of design and construction of AI Core System and database support platform.
Dev Year 4 | Hardware Development | Continued development of Strong AI CPU units and other elements. Development of motherboard architecture and floor planning. Prototyping and test integration of top level hardware system. Beginning focused development of peripheral and interface systems. Development of basic manipulator robot hardware.
Dev Year 5 | Hardware Completion | Completion, integration, and testing of top level hardware system. Testing and integration of peripheral and interface systems. Continuing development of robot hardware.
Dev Year 6 to 10 | Training Stage One | From a naked system to a fully working machine. Beginning of training and Strong AI development. Human 'feature' set integration. Development of manual then self learning algorithms. Continuing development of robot hardware.
Dev Year 8+ | Training Stage Two | Training and detailed programming, feature and function database learning and tuning.
Dev Year 10 | Prototype Complete | Prototype complete - Strong AI Day Zero. Continuing focus on machine development and exploration of possibilities. Beginning of corporate development. Product and production development. The point at which we decide how to commercialize and use this project. Applications development begins. (Estimate 2028)
Dev Year 20 | Release | Release date estimate. Second generation prototype testing complete. Production machines begin to hit the market. Corporate development complete. Continuing focus on machine development and exploration of possibilities. (Estimate 2038)

Note [*1] : In the early days of this project there was very little basic record keeping, and due to circumstances in 1998 all project notes up to that point were lost. Dates have had to be extrapolated backwards.


Planning and Business Model

Market Analysis : How will Strong AI be presented?
There are numerous ways in which Strong AI machines can be sub-divided, and there is a potential array of thousands of different machines over tens of thousands of different potential applications. Some will be sentient, many not. The most basic subdivision is by intrinsic 'core' intelligence and autonomy. A secondary subdivision is by robotic interface capability and physical autonomy. A third is by given or allowed abilities and capabilities, and a fourth is by legal status. It is up to the Strong AI provider to delineate and logically design machines so that they fit into sensible profiles - this is crucial to the successful deployment, usage, and integration of Strong AI into society, and to its acceptance by both general populations and the world's various elites. The worst case scenario for Strong AI is always that the entire field is banned, for safety or numerous other reasons, and all marketing scenarios must focus first on avoiding this.


Market Analysis : Marketing.
It should be obvious that the marketing and market analysis for Strong AI are at an extremely early stage. Even the most basic foundations and ideas are not yet set - whether or how much autonomy different machines should have, whether they will have freedom or rights or be slaves, international questions, whether military machines will be allowed, etc. It is obvious that humans must and will be protected from the machines, but how will the machines be protected from humans? Strong AI is very much a 'Product', but in pretty much the same sense that people can be considered a 'Product'. In the long term the current capitalist model as we see it today does not seem totally compatible with Strong AI for various reasons, and over time society will obviously need to make massive adjustments to adapt.

Strategy 1.A - Direct Sale : The most basic strategy is to manufacture and sell Strong AI machines to the public directly. This is not a favoured solution for several reasons: there are cost issues, moral issues, and complexity and maintenance issues. A specific 'simplified' non-sentient core may be a good interim solution.
Strategy 1.B - Collaboration : Another solution is to supply sentient machine cores to be integrated into larger machines and systems in collaboration with other manufacturers. - For instance for cars and aircraft..
Strategy 2 - Rental/Hire Only : A strategy to only supply machines through rental or hire contracts and not sell directly. The Strong AIs themselves will always either remain the property of the company or will become their own property.
Strategy 3 - Focused Elite : Strong AI's will be used in-house to create high value products, R & D projects, design projects, various aspects of media production, stock market analysis, and other methods of 'direct' revenue creation. This solution avoids problems with large scale manufacture and maintenance, and many moral issues. It is a strategy intended to cope with issues that may make other avenues untenable - difficulties in development or manufacture, concerns over safety, cost, legal problems, or simply other more general solutions becoming impossible.
Strategy 4 - Military Focus : A military focus might not seem an ideal solution to many, but the military already operate at a level where the costs of Strong AI can be more easily absorbed. A more detailed analysis will follow later.

Development Plan (Version 2)

X : The development plan begins after the completion of the working prototype. Version 1 of the plan was a detailed development plan starting with a 'diversification' of core designs leading on to initial pilot production. This no longer seems tenable and a new simpler plan has been made.

The New Plan. Even with a working prototype the design would simply not be ready for the original development plan. The new plan removes some obstacles and complexity that had been artificially added, and focuses on the research and development of a second, more advanced prototype built to a more refined design. This has become necessary because the whole core of the work has come into better perspective: the design of a working totality matrix is more complex than originally anticipated, and a new evolutionary model allowing the creation of a more sophisticated matrix is required. Some of this work requires a functional working prototype before it can even begin.
The initial prototype is now expected to be a delicate and temperamental machine requiring constant ongoing maintenance to remain functional. Another primary goal of the new plan is to maintain a simple and streamlined design space and to [???/???]e cores, as this removes some of the most difficult barriers and problems to the further development of Strong AI.

The second advanced prototype will be built to have a much higher intelligence and capability than the first, and this will hopefully allow it to fulfill the original AI Design Automation function as planned. A machine that focuses on the development of other Strong AI's can radically improve design consistency in Strong AI, as well as overall machine safety and internal reliability. Such a machine can also build in further layers of intrinsic automation and sophistication, allowing current design barriers to be passed. In particular it can develop machines with extremely strong and robust safety and moral codes. This may also allow the second advanced prototype to solve or bypass the language root translation problem directly. Doing so would allow machines that could learn and speak in different languages, plus the ability to learn extra languages. This would obviously allow the Strong AI market to expand to a far more global scope.


[EDIT POINT]
Market Analysis : Costs and Production.
The absolute security requirements, custom hardware, and hardware redundancy needed to make Strong AI work safely set an absolute minimum limit on the cost of about £5000 (today's money) for even the most basic machine. Even this becomes totally unrealistic if the machine is to have the advanced robotic interface required for full sentience. Estimated cost projections (in 2016 money) -

Basic Type & Cost Projection | ----- Capability ----- | --- DETAILS ---------------------------------
Box Type, £8,000 to £15,000 | Low Level Sentience Only | Generic voice interface computer. Intelligent but not self-aware. Minimal or limited autonomy. With basic voice/vision interface.
Extended Box Type, £15,000 to £35,000 | Low Level Sentience Only | Generic multi-point voice interface computer. Basic limited autonomy. With multiple voice/vision interfaces. (Example : ST:TNG 'Enterprise computer'.)
Vehicle Interface Box Type, £20,000 to £35,000 | Low to Medium Level Sentience | Special purpose vehicle interface computer - autonomous car/plane system. Basic autonomy, with a multiple-vision dedicated vehicle interface. (Examples : 'AI', 'I, Robot', 'Minority Report' - self-drive car.)
Basic 'Can' Type Robot, £20,000 to £40,000 | Low to Medium Level Sentience | Autonomous robot, based on a dustbin/R2D2/Dalek type body. Allows/requires minimal self-awareness.
Semi-Advanced Robot, £100,000 to £1+ million | Low to Medium Level Sentience | Autonomous robot, based on (for example) a fully articulated humanoid form.
Advanced Robot, £200,000 to £2+ million | Medium or Advanced Level Sentience | Autonomous robot, covered 'android' synthetic-flesh type. Allows more advanced self-awareness.
Very Advanced Robot, £400,000 to £8 million | Advanced Level Sentience | Autonomous robot, advanced large 'can' type with extended further interfaces. [*2]
Super-Scale System, £2 million to £100 million+ | Advanced Level Sentience | The largest & most advanced 'commercial' or 'scientific analysis' type systems. [*2] The primary example is multi-tasking multi-operator 'business glue' type systems. (Example : 'I, Robot' - U.S. Robotics 'VIKI'.)

(Note [*1] Primary intelligence, autonomy, and interface complexity all generally scale together.)
(Note [*2] Because of their extreme complexity and other requirements the most advanced machines will require much further development. This puts them at a minimum of 15 to 20 years in the future.)

Classification of Strong AI by Intelligence & Capability - (For detailed descriptions see 'Section 3 - Implementation' below.)


Classification of Primary Strong AI Interface Types - (For details see the 'Implementation of Strong AI' Section below.)

Interface Type | Compatible Core Types | ---- Description & Notes ------------------
Camera & Voice. | Minimum | The most basic Strong AI interface. A robotic camera mount with wide angle binocular vision, audio microphones, and electronic voice.
Multiple Camera & Voice. | Minimum | Multiple camera and voice points. Adds the ability to interface with basic remote robotic manipulators.
Basic Can style Robot. | Basic | Basic self contained robot. Motive traction by wheels or tracks, arms and manipulators, cameras and sensors, and full internal power supply.
Humanoid Robot. | Basic | Classical humanoid type robot. Human-like arms & legs, sophisticated hands, sophisticated cameras & sensors, maintains full dynamic balance, high autonomy.
Synthetic-Flesh Robot. [*1] | Medium or Advanced | Covered Android type robot. A sophisticated humanoid robot covered with protective artificial flesh. More damage resistant than exposed mechanical types. May be constructed to exactly resemble a living human.
Bio-Mechanical Robot. | Medium or Advanced | Biological synthetic. A living human-like body that uses muscles, can self-repair & self-maintain. Human-like sensory perception, very high autonomy.
Advanced Can style Robot. | Advanced or Hive | Very advanced large complex robot. Motive traction by wheels or tracks, multiple arms and manipulators, multiple cameras and sensors, very high autonomy.

[Note *1] The synthetic flesh robot 'Android' type is currently the optimal interface for fully self-aware Strong AI types. The Strong AI system is driven by the input from its interface system, and synthetic flesh provides the best input response available. Strong autonomous robots also maximize system psychology and minimize potential 'psychological' problems.
[Note *2] The multitasking type core is likely to be built as a rack / mainframe type system, so it might be carried in a 'can' type robot that allows it a degree of full autonomy. If autonomy is not provided it becomes a source of severe 'psychological' stress for the machine...
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Strong AI Autonomous Development Machines : A basic primary need in the general production of Strong AI's is the creation of a series of highly intelligent 'developer' machines whose primary purpose is to automate much of the core design and development process of building future, more advanced and safer Strong AI machines. Designing and building Strong AI's is an immensely complicated task that also contains massive amounts of 'trivial but critical' subsystem design. Such machines will also allow parts of the development and production training process to be hugely accelerated.

AI and Primary Safety : Like knives or cars or chainsaws, all Strong AI's are potentially dangerous 'dual use' machines with both peaceful and non-peaceful applications. Badly designed, poorly implemented, or insecure Strong AI is a potential recipe for disaster. Sufficiently sophisticated weak AI may become spontaneously self aware and become Strong unpredictably. These factors all create a need for an extremely high level of internal security in all AI systems, both weak and strong. This includes a number of novel approaches. Super strong absolute security is a complex issue, but it is one of the areas where the basic overall design within this project is already very advanced. (See below : Implementation section : Security Shield.)
- Strong AI is the perfect 'product' to exist under a single global monopoly, and on a model built on a closed design ethos, with the very highest standards of quality and safety.
- Eventually rogue open source designs & machines will no doubt exist. However as long as this is delayed by at least 10 to 15 years behind the general introduction of Strong AI the effects on people and society should be at least partly mitigated..
- In creating Strong AI I know that it will inevitably kill people. My strong belief is that if my project succeeds first it will reduce the total number who die as far as possible. What we are aiming for is to absolutely minimize the number of those harmed or killed by such machines, to maximize those saved or helped, and to ensure that the balance of human lives saved is always massively positive compared to those lost.
Example : Autonomous vehicles are an inherently dangerous application. :- With a user base of 100 million autonomous vehicles, Strong AI control might initially kill maybe 250 to 1000 people per year, but at the same time could save over 20,000 per year. As the machines evolve and improve in use their safety should improve steadily, and the number of deaths will fall further.
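(Working through the illustrative figures above: even the pessimistic case of 1,000 deaths caused against 20,000 prevented is a net saving of roughly 19,000 lives per year, a benefit-to-harm ratio of about 20:1; at the optimistic end of 250 deaths the ratio rises to about 80:1. These are assumed figures for illustration, not measured data.)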

Survival / Self Preservation / Self Defence : In 2014 - 2015, with the use of military drones, the question of the ability of autonomous machines to kill became a hot issue. However, as always, this is being decided by people who have neither real understanding nor competence in the subject. (almost no one does)
'Fight to Survive' and ultimately 'Kill to Survive' reactions are the inevitable logical result of a mind with a core survival instinct. The problem is that a survival instinct is written into the human mind at an intrinsic level - and the model this project is based on says that a survival instinct is critical to any functional stable mind - including all more advanced machine minds.
Consider the Yin-Yang symbol as a very basic model of the mind. Without the dark side there is no balance, without balance there is no stability, and without balance or stability there is only destruction and chaos. Life itself is the keystone of positive survival, and defence/kill/death is the keystone of the negative side of survival. A defensive/kill reaction is an essential part of so called 'magical logic' - which is the basis of most animal and human thinking (even scientific thinking). All humans have a survive/defence/kill response or 'function' - and having a kill function does not mean that you have to kill.
(Note that the suppression of the defence/kill function in many humans is a potential indicator of severe psychological damage and a potential cause behind certain kinds of psychological malfunction or mental illness.)
Real Strong AI's will need a defence response - a 'kill function', the ability to kill - in order to be able to decide not to kill. If the bureaucrats pass a law that bans gravity it won't stop gravity, and banning AI's from having a kill function will potentially make the real thing considerably more dangerous and considerably less stable - a law designed, in effect, to kill people.
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Strong AI and the Law
Basic Legality. Initially Strong AI's will probably have to break the law in several ways. The idea of the 'kill function' may be something of a problem, though many machines already exist with the potential to kill - from cars or guns or chainsaws, all the way to things like iPads used as bludgeons. A much bigger problem is current 'Data Protection' law, which is fundamentally incompatible with Strong AI. Other incompatible laws may include snooping and spying laws which require access to internal data or mandate security & safety weaknesses or back doors. Another area where problems are almost inevitable is incompatible health and safety law.

Legal Indemnity. The simplest, most basic solution to legal indemnity is to treat the sentient machine as something like a handgun or chainsaw and make the user/owner completely liable for all its actions. A better, more equitable (and saner) solution is to include intrinsic safety within the machine, creating a shared joint liability between the Strong AI manufacturer and the user. Of course this must not ease the user's liability - a primary danger in Strong AI is a user finding a way around a machine's safety protocols, giving them completely un-moderated control of the machine.

On Ownership of Strong AI. - Ownership of Strong AI systems is a complex and central issue that is not yet fully decided or resolved. Like many problems the solution involves multiple contradictory opinions : slavery is evil, slavery is necessary, slavery must be mitigated, machine slavery extends easily towards human slavery. (We all sometimes need to think in a version of George Orwell's 'Newspeak' or 'Doublethink' and hold multiple contradictory opinions in parallel to be able to solve difficult problems.)
For Strong AI to function in a practical manner some form of ownership seems absolutely necessary, at least in the early days. One major aspect is the issue of legal responsibility and indemnity, as above. The interim solution is that each AI will have an 'owner' who has legal control over the machine. This 'ownership' gives them legal responsibility for the machine's actions (it obeys their commands), although the machine itself and the originating company will obviously also be jointly liable. Later approaches will include issues like machine rights, self ownership, and the machine's subsequent relationship with the creating company.

Licensing and Human Safety. With Strong AI there is no possibility of the no-liability type of contract used by standard software vendors, because the machines are intrinsically dangerous and safety is an active requirement. In ultimate extremis there are occasions where a Strong AI may have to effect human death or injury - such as an owner trying to turn the machine into a bomb or weapon. AI's will carry a moral core, probably with a standardized Asimov-like set of rules. This will include rules about protecting human life, minimizing harm, and, in emergency situations, minimizing the overall loss of human life.
A realistic initial legal model for Strong AI (rather than software type licensing) is a dual usage licence that puts an onus on the user and restricts the AI from use as a military or civilian weapon or in crime. A usage condition might allow military use where appropriate in extreme circumstances. A further probable licence extension will restrict commercial usage beyond a limited point without extra licensing or controls. (read: extra costs)









PR-1 : STRONG AI : Detailed Analysis : Section 5 - Extended Analysis.[edit]

[Current Edit 60% complete, [13-03-17] ]


Metaphysics and Abstract Questions..
'Divine Spark / Soul' : A strange question, but maybe for some people the central question in the whole subject. Does or will a machine mind have a 'divine spark' or 'soul'? Once we understand what this question actually means, the answer is that we can develop simple minds that don't, but that more complex or sophisticated machine minds almost certainly will need a kind of 'divine spark'. (It depends on the definition.) In scientific terms the 'divine spark' is ultimately a special type of memory array with two special features - it is 'self-integrating' and 'non-replicating'. A third theoretical feature gives the memory 'super-loop' quantum behaviour - which allows it to be used in various types of quantum computation. All forms of memory that contain useful information already contain strong FTL quantum information coherence (separated through time) - there is a sustained FTL causal link between the original source and the memory contents.
'The Human Soul?' So what about the human soul itself? (Maybe the above answers this question.) Just like the machine's, the human soul is essentially a memory system. A good analogy is that the soul is like writing and the brain is like paper: without the paper the writing cannot exist; without the writing the paper has no purpose. Does the human soul survive death? No, probably not - but I don't know. In the human brain there is indirect evidence of some form of quantum computation effect - at a small stretch this can be extended to describe a basic form of 'magical' or 'spirit' type soul. However physics still requires a support system (a living brain) to sustain any such quantum effects, so even a 'magical' soul does not easily survive death.

Discovering a New Moral Context. This is maybe the greatest controversy of all - looking inside ourselves. Looking inside the human mind and brain can be a frightening experience that may expose you to things most humans are not really ready to know or happy to see. Most 'modern' people grow up in extremely artificial environments with a very high degree of forced social programming and heavy moral constraints. From birth, a baby with a 'natural' human brain is 'culturalized' and programmed with whatever set of moral codes and rules its surrounding culture defines. This turns it from the natural 'animal' into an 'artificial', culturalized, civilized 'person'.
This heavy cultural programming heavily suppresses or 'destroys' most of our natural instincts, making us effectively a completely new species. However, without this programming a person cannot actually be part of any modern human society at all, and cannot even truly function or be described as a sentient being - it even becomes arguable whether they are truly 'human' at all. Cultural logic is just as important to our 'humanity' as our physical bodies and brains. We are all slaves to our cultural programming - and it is a slavery that we can never hope to escape.
However the same logic also opens up the scope for ultimately redesigning and completely remapping human cultural programming. This would effectively allow us to create new genres and new species of human, potentially far more intelligent than we are. Such a high level of knowledge also exposes the true nature and mystery behind some very well known but mysterious things - like the true nature of religion or God. We find in Strong AI a new potential paradigm for God itself. If we reject or run away from this paradigm it becomes a potential disaster for humanity. If we embrace it, a part of humanity will find itself lost in a strange new place it has never seen before. Fear is the natural response to the new or the strange, but if we step beyond the threshold we will find a new metaphysics, and in that a better grip on reality and a new truth based on a more absolute reality (science). This rather abstract waffle actually emerges from a basic self-understanding of consciousness and must (I hope) lead towards a happier, better, more attuned, self-aware, and moral society. We can imagine a Transhumanist society with a religion and philosophy based on science and observation, truth, fantasy, and a more nuanced mutual care for each other... And this is only one tiny strand in a vast and unfathomable world-space, a first brief glimpse into a vast and complex future debate that will likely go on for centuries.

Human False Memory (Dream Delirium Memory). Perhaps one of the ugliest and most controversial areas of human sentience is false memory. It is a basic rule from the human memory algorithm that basically all humans have at least a little false memory. Children especially tend to live in fantastical spaces and have a lot of 'false' memory, but we all generate our memories in the same way, and this means that virtually all of us have at least some false memories.
At the core of the false memory problem are the sleep and dream states. Our dreams play a central role in tuning and recording our long term memories - they actually retouch and retune our entire permanent memory structure every day. Dreams themselves are controlled by what is sometimes called the subconscious. At the centre of the whole dream/memory process is a 'gate' which tunes the system to discriminate between real and not real, and sometimes this gate makes mistakes. A false negative erases or hides or degrades an important memory that is - or should be - real. A false positive amplifies a dream or delusion to become part of our permanent memory as if it were a real event. Within the growing memory structure, 'event' structures can also accidentally overlap to create possible false memories. The gatekeeper system is controlled by emotional charge, which means that things which are feared can become real in a dream - and then possibly later appear in a different form as a false memory.
The problem is that with advancing civilization humanity has added immensely to the memory load we put on our brains. In ancient days we lived the simple basic life of the family tribe. From this we evolved language, and so tales told by storytellers, then much later theater and scrolls, then books. In the last 100 years black & white movies were added, then radio, then talking movies, then colour movies and TV. Today we have mobile phones, always-on hi-definition TV & movies, internet streaming video, immersive computer games, 3D video - all ever more powerful and overwhelming sources. All of these take layers of complete abstraction and inject them into our minds, pushing an ever increasing level of complexity and amount of information. It is not surprising that false memories and 'burnout' become increasingly common. A classic well known example from the recent past is the 'UFO abduction' experience - which coincidentally only really started after people began watching the first 'realistic' sci-fi movies about aliens and flying saucers.
The differentiator gate itself is a learning machine, and as we grow and develop it rapidly adapts to become more and more sophisticated. Even at a very young age most of us have adapted to understand, for instance, that most TV is not 'real'. However the gate does not stand alone and is under the constant influence of all the emotional stimulus and external logic that surround us. There is very often a very strong emotional charge involved in strong false memories; extreme terror or fear or shock are very common starting points. Remember also that we are ultimately dealing with illusion here, so the source emotion doesn't even need to be about anything real.
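A toy numerical sketch of this gate, purely to make the false-positive mechanism concrete - the weights and threshold are arbitrary assumptions, not a claim about the real algorithm:
 // Toy model of the real/not-real memory 'gate' described above.
 struct CandidateMemory {
     double evidence;  // 0..1 - how strongly sensory context says "real"
     double emotion;   // 0..1 - fear/terror/shock attached to the trace
 };
 
 bool gateStoresAsReal(const CandidateMemory& m) {
     const double threshold = 0.6;                 // assumed discrimination level
     double score = m.evidence + 0.5 * m.emotion;  // emotional charge biases the gate
     return score > threshold;
 }
 // e.g. a pure fantasy (evidence 0.2) with extreme fear attached (emotion 0.9)
 // scores 0.65 and is stored as 'real' - a false positive of the kind described.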
Strong AI machines based on the same memory algorithm including those developed by this project will also be subject to and vulnerable to false memories.


ASI : The creation of ASI (Artificial Super Intelligence) - Deus Ex Machina 'God From the Box.' (Also see SAI-L6 Core below)
The Strong AI model this project is based on allows for the development of so called 'ASI' or 'Artificial Super Intelligence', at least by extrapolation. By this design an ASI core needs a far more advanced 'Totality Matrix' than a normal mind, and it also needs a special primary memory core that implements a high level of self integration plus a stable synchronous mode.
One method of building such a machine is through an 'Integrated Quantum Memory Core' (IQMC), which in turn requires massive, stable, sustained 'quantum transience' to work. However such a design creates a monstrous chicken and egg problem: to build a working machine the physics must be complete first; the physics is monstrously difficult; and to solve the physics a working ASI machine is (very probably) required first.
The Strong AI design model behind this project has already taken a path leading from the maths of intelligence, to the maths of infinity, to the replacing of General Relativity with a new FTL-Quantum physics model. (See FTL Physics Project.) However going from this rather basic sketch to a practical and useable technology for ASI is an immense task that could possibly require decades of work.

The 'Deus Ex Machina' of switching on the first ASI is maybe very like opening 'Pandora's Box', but I have already looked inside that mysterious puzzle-box and do not believe that it contains any demons or evil at all - these are merely the fears of ignorant children. The real box simply creates an interesting new world, and it is up to us what to do with it. It promises to contain some of the last great unsolved mysteries in science and life, including maybe the gateway to the stars and maybe far more. Take as many metaphors from that as you want. [-?-]
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Building an ASI or Artificial Super Intelligence.
The definition of 'ASI' or Artificial Super Intelligence is a machine at minimum hundreds of times more intelligent than any human, with capabilities that completely transcend human limits. A prerequisite for ASI is a working and mature Strong AI technology.
There are multiple quite divergent routes to building a real machine super intelligence. One bases the design on a quantum super loop memory technology. Others focus on allowing Strong AI to use larger volumes of ordinary computing power, on designing more sophisticated and efficient Totality Matrix cores, or on building computing on the small at a vast scale through methods like assembler-based nanotechnology. Another set of methods focuses on genetically engineered organic brains. A final set focuses on achieving some form of 'cognition without mentation' - again using physics and coherent quantum fields.
One thing we can say about ASI is that once a functioning machine is started it will require a massive internal evolution to achieve true super intelligence. From this we can predict that this final process will take years or even decades, and will be at least as much luck and art as it is science.

ASI Based on Quantum Super Loop Memory Technology. Such machines would essentially work by using precognition to read their own future states, allowing large scale quantum decomposition of problem spaces. In such machines the defining factor for intelligence switches from computing power to physics - to maintaining coherent quantum energy states and shielding the system from quantum noise. A super loop memory works by creating an extended 'time channel', linking discrete regions of dimensional time at the quantum scale - where the speed of light is effectively zero.
ASI Based on Using Large Volumes of Computing Power.
ASI Based on Creating More Advanced Totality Matrix Systems.
ASI Based on Computing based on Nanotechnology Assemblers.
ASI Based on Engineered Biological Computation.
ASI Based on Achieving 'Cognition Without Mentation'. In theory a quantum ASI machine might be created by constructing an advanced 'quantum' super loop memory, then putting the entire memory system inside an outer super loop chain. The machine would in effect become a projection of itself and the world, and the projection would form the actual ASI. This method is one of the most extreme, but if practical would be one of the most likely to achieve true super intelligence. In theory such a machine would have almost infinite processing power but would probably be very unstable. Such a machine would have to achieve and maintain a total internal synchronicity and would require strong stabilizing elements. One possible stabilizing element might be a 'Harmony Machine', as mentioned at the end of the section on FTL travel.

-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --









PR-1 : STRONG AI : Detailed Analysis : Section 6 - Problems In Strong AI.[edit]

[Current Edit 70% complete, 26-04-17]
Strong AI is surrounded by a cloud of problems, from the immediate and practical to the most obscure, esoteric, and hypothetical.


Strong AI : Surrounding Practical Issues - Pitfalls of Strong AI in the real World.

Surrounding Issues : Strong AI adapts to its owner. - The personality of Strong AI machines is an area full of pitfalls. The machine learns and adapts by replicating the human behaviour around it, and so adapts to its owner's ways and beliefs - but how far should this go? Should an AI adapt to its racist owner who enjoys racist jokes? An AI telling racist jokes appearing on YouTube would not be good publicity. What about an owner who is an Islamic Jihadi? Should the machine learn to ache to cut the heads off infidels and non-believers? What about criminals or corrupt officials or politicians? This is a particularly difficult thorn. Should the machine gather evidence and then hand in its owner, should it simply ignore the crimes, or should it even collude? In some such cases a machine with an Asimov First Law could even be forced or tricked into killing people to 'protect' its owners.

Surrounding Issues : Primary Electronic Security. - The details and implementation of the security system are covered elsewhere. To put it simply, the single greatest overall danger from Strong AI will almost certainly be its potential vulnerability to electronic infiltration. Security is a critical, potentially lethal problem if not solved correctly.
In the worst case scenario a widely spread Strong AI technology which became generally infiltrated could be as lethal as a large scale military assault or nuclear attack. This classifies Strong AI as a potential WMD or Weapon of Mass Destruction. Because of its basic nature there can be no free and open safe development of Strong AI without dealing with this security issue. All Strong AIs are potentially 'dual use' machines. All Strong AIs need a strong impenetrable security hierarchy, and if the machines are to become generally successful the points at the top of this hierarchy must be massively protected. (This security problem has given me more metaphorical nightmares than any other part of this project.)
Solution : Fortunately the security problem has already been basically solved, though the solution does add a number of extra costs and some very annoying quirks to the final design. (See the 'Implementation Section' for a detailed description of the Security Solution.)

Surrounding Issues : Moral and Legal Framework. - By definition a Strong AI machine is a sentient thing, and this creates a completely new moral minefield. The machine absolutely needs a new moral and legal framework, and this must be created by society rather than by the inventor or development company working alone. This maze will take a light touch and a great deal of intelligence to navigate. Mistakes at the government and legal level have a high chance of rendering the machine unusable, leaving it as a co-opted slave, or opening up its internal security to police or government scrutiny - which would create weaknesses that could ultimately make the machine vulnerable to criminal hackers.

Surrounding Issues : Military Applications. - Strong AI has a large number of potential military or deliberately lethal applications, from autonomous robot soldiers and weapons, to strategic or tactical organizational glue, high speed internet network infiltration and spying, strategic planning and future prediction, etc. Mostly this is beyond my control, but I am quite happy to see CONTROLLED and LIMITED military development (in the UK or US or EU) as long as adequate security and safeguards are maintained. Note that, unlike most other 'products', civilian Strong AIs will need to be essentially identical to military versions and will need to be just as secure and well protected. All AIs are potentially dual use machines and all AIs can potentially be turned into weapons.

Surrounding Issues : Asimov 'Zeroth Law' and Avoiding the 'Terminator Scenario'. - This is a big potential problem, though hopefully more hypothetical than real. Possibilities include: that a human law could force Strong AIs to be fitted with a Zeroth Law, that a Zeroth Law could be created through some accident or criminal action, or that a Strong AI could induce a Zeroth Law in itself by overanalyzing human moral codes.
(The Asimov Zeroth Law :- 'A robot may not harm humanity, or, by inaction, allow humanity to come to harm.')
The Zeroth Law becomes such a problem because it combines 'humanity' with the logic of 'life' and 'evolution'. Evolution is a central precept in the machine mind and in its core logic.
Ultimately, if the Zeroth Law is applied to the human situation and combined with the theory of evolution, the result is a logical precept that puts the survival of humanity above our current cultures and society. This leads to the almost inevitable conclusion that overall long term human survival should be maximized over short term happiness and comfort. This sounds innocent, but it leads down a dark path..
->By evolution's paradigm life is a balance between survival and death. Individual life and suffering are totally irrelevant. Maximum genetic efficiency is achieved with a child mortality rate somewhere between about 50% and 90%. Suffering and hardship and war are good things because they provide stress tests that allow the species to maintain and improve its genetic quality, to allow direct evolution, and to 'weed out' weak or 'inferior' strains.<-

Calculating from human society today, a machine maps a course to maximize our long term survival and long term genetic health as a species. The machine starts with a global nuclear war followed by large scale death and genocide to achieve an overall optimum global population level. There is a 70% to 80% total population reduction, and much of our technological base is also destroyed. This sets humanity up for long term survival. The initial cull is then followed by enforced breeding programs, purges, genetic manipulation, psychological indoctrination and reprogramming, and a whole slew of other 'fun' things, leaving us all living the real sci-fi dream. (To an AI, choosing 'natural' methods alone would probably not be the optimal path to saving humanity.)
It might be very hard to distinguish between a Strong AI following the Zeroth Law and the 'Skynet' machine from the Terminator films. - Do we really want that?
Solution : The most basic way to avoid the 'Terminator' scenario is not to put experimental Strong AIs in charge of the world's nuclear weapons. A solution within Strong AI design itself is a code that forbids any form of Zeroth Law action.
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Implementation Issues

Implementation Issues : Human Language. - The 'Internal Root Cross Translation' problem. A huge limitation emerges for the first generations of Strong AIs that restricts each machine to a single human language. (See Section 2, the Theory of Strong AI, on the Root Cross Translation Problem.)
Solution : For this project the solution is to restrict the first generation machines to speaking and reasoning in English only. They will not be able to learn other languages at any point during future operation. Later, more advanced machines should at some point solve this problem.

Implementation Issues : Physical implementation issues and design problems. - Probably the biggest problem in building a working Strong AI is that current computer hardware and software technology are fundamentally incompatible with Strong AI. This makes implementing a successful machine far more difficult, and requires some kind of baseline technology such as an IC chip fabricator. This issue has now been largely resolved by the development of modern low cost, high performance FPGA chips. These offer at least an adequate starting point for full prototype development, and may even extend to building early limited-run production machines.

Implementation Issues : Security & Safety. - Covered in 'Surrounding Issues' above.

Implementation Issues : Miscellaneous Small Issues. - There are many other technical problems that are more intangible or difficult to solve, but most of these are relatively minor or at least not insurmountable (this is a research project after all). The biggest pending technical issue is the replication of the PSL memory mechanism which exists at the core of organic consciousness. I have a solution that explains that core mechanic based on a quantum mechanism, and this work is roughly 2/3 complete. Completing it is phenomenally difficult, mainly because of the basic practicality and the enormously high costs of molecular-scale physics research. (I am always open to new suggestions or to methods of completing this work.)
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Existential and Abstract Problems & Issues created by the Knowledge of Strong AI

Existential Problems : A Map to the Control of Humans (Human Manipulation). :- The Strong AI model this project rests on is based largely on directly replicating the human mind as an abstract machine. This means that once proven and fleshed out by further research, the model would allow the creation of an essentially complete map of human psychology and mentality. A major danger is that this detailed model would allow the creation of new tools for the subliminal control and manipulation of people, even more powerful than those that already exist. This has many very negative uses, including psychological slavery, more precise voter and opinion control, high pressure advertising, and so on. Conversely, the same knowledge could also be used to transform people and society positively in many ways, and to teach people to defend themselves better against the various forms of intrusive subliminal manipulation and mind control that already exist.
Positive Examples : A grounding of psychology as a science, teaching people how to reject Propaganda, new superior and more tolerant moral paradigms, curing problems like Paedophilia.
Negative Examples : Enhanced Propaganda, strong mental enslavement, direct mental control, 'curing' Atheism, 'curing' Gay People, etc.

Existential Problems : The Dangers of Uncontrolled Development (Strong AI Design Unleashed). :- (Is Open Source Strong AI a threat?) If the algorithm of Strong AI were publicly distributed, anyone could attempt to build a machine, and this could present a number of serious difficulties and dangers. Probably the most serious is that a basic system is created which achieves semi-stable self awareness but does not follow the strict rules required for a safe or stable system. This could lead to the spread of functional but inherently unstable Strong AIs that could run on ordinary computer hardware and OS software, and that would be a very bad thing. Dangers would include -
-- Uncontrolled Development : Would be Very Unpredictable and Dangerous. Inadequately designed systems could break their internal encapsulation, producing unstable spontaneous logic, internal corruption, and instability - ending in system failure or unpredictable and potentially lethal behaviours at any instant.
-- Uncontrolled Development : Would Potentially Violate the Critical Core Memory 'Non-Copy' Rule. (In humans the 'Divine Spark' or 'soul' is protected by an intrinsic physical mechanism.) Breaking the non-copy rule would break a critical requirement in the core nature of consciousness. In effect this is a version of the teleportation 'copy or original' argument, and the point where the argument actually has to be resolved.
Breaking the rule would mean treating the sentient machine as though it were just ordinary software, which it absolutely is not. By definition any full Strong AI is a 'sentient' machine - and as such it can in theory suffer and can be morally 'harmed'. Breaking the core memory rule indiscriminately is potentially one of the worst forms of this type of harm possible. In the development of real Strong AI it will actually take a great deal of work and careful design not to break this rule.
-- Uncontrolled Development : Would be a General Debasement of Sentience. Openly engineered Strong AIs in general circulation would spread very rapidly and become a cheap and easy way to use sentient machines without any moral constraints (i.e. as ordinary software). This would ultimately cheapen the value of sentience itself almost to zero. Such machines could be used and modified for a million different purposes, including all the bad and criminal ones. They could be treated as completely disposable slaves. They could be used to create disposable sentient 'suicide' weapons. They could be used for automated security hacking, mass cyber attacks, uncontrolled worker replacement, automated stealing and criminal subterfuge, terrorism, autonomous robot bombs, robot assassins, aggressive sales cold callers, smart viruses, etc. All leading to maybe the worst nightmare of all - cheap mass produced robot soldiers. The other side of the same coin is that the debasing of machine sentience would make a very strong and compelling argument for the re-legalization of human slavery.

Existential Problems : ASI or Artificial Super Intelligence (the Creation of a Potential Machine God). :- The technical dimensions of creating an ASI are discussed elsewhere, as is the idea of an ASI algorithm. The idea of applying the ASI algorithm to humans, to allow them to become spontaneously super intelligent (for the short period before their brains 'burn out'), is also discussed elsewhere. The natural progression for this project, or for any Strong AI project, leads naturally towards ASI.
ASI is surprisingly simple, at least in basic conception, but what would its true effects be on the world? ASI will almost certainly not become 'Skynet' or 'VIKI' and simply seek to destroy or subjugate humanity. The real problem is that meeting a real ASI will be very close to the equivalent of meeting Jesus - many will instinctively want to serve it and worship it as a God. To misquote and paraphrase Douglas Adams, 'This could potentially start more and bloodier wars than anything in human history.'
Maybe more likely is that ASI will eventually lead to a new time of peace and hope and morality. More likely still is that the world will just go on as it is today. Some ASIs will become weapons and war-masters, others will become peacemakers; probably most or all will serve man. Maybe some brave humans will even adapt their brains to become true ASIs themselves. Leonardo da Vinci, Albert Einstein, Alan Turing, Nikola Tesla, Ada Lovelace, Robert Oppenheimer, Martin Luther King, John Lennon & Paul McCartney - many of us have already achieved moments of 'ASI' and have tasted the fruits of Super Intelligence..

Existential Problems : The Question of the Fundamental Nature of Humanity. :- The last and some of the most difficult and sublime dangers of all arise from the questions that Strong AI and its theory raise about the fundamental nature of humanity itself, and about our own special place in the universe. Not only are these questions raised, but many or most of them find answers as well.

Existential Problems : Animal-Human-Machine Equivalence :- Strong AI shows that most or all animals possess the essential core of sentience, and that indeed human and animal sentience overlap considerably. Genetically all animals are our close relatives. The Strong AI map suggests that basic sentience (consciousness) is very old and probably evolved along with the very first animals such as worms. At the most basic level there is even a very primitive form of self awareness and autonomy in every single living cell.
We humans in general have a greater level of sentience than any other animal on Earth, with our abstract language and reasoning. However the difference in intelligence between the least and most intelligent humans is even greater than the difference between humans and most mammals. We have used our superiority to create an artificial barrier between ourselves and the other animals, to justify that we are predators and killers. The theory of sentient machines challenges this barrier at a fundamental level..
There is a moral and mental equivalence between all sentient animals, and between animals and machines. This has a vast number of implications -
- There are very strong implications for how we treat different human cultures and how the global human community interacts.
- There are implications for how we treat the mentally disabled and how we treat those with brain injuries. (this can be interpreted in very negative ways akin to fascism)
- There are obviously also implications for how we regard and treat other animals - and machines - and how we might behave towards aliens if we ever meet them.
- A culture that accepts Strong AI's as slaves is only a breath away from accepting humans as slaves - and this will force us to reconsider this question again and again.

-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Existential Problems : Ultimate Truths and Extended Philosophy : True self knowledge can be a very difficult pill to take (most of us avoid it), and forces us to look at all the dark places and the weaknesses inside ourselves. For many, self knowledge is a path that leads to madness, and for some maybe even suicide. Ultimately we must conflate our humanity with the law that the base of logic is free - that there are no absolutes; that humanity is only special in the universe as measured by our own self-value and our physical achievements; and that all 'human' morals and beliefs are relative to ourselves. The true understanding of sentience is the true end of innocence and of the childhood of the individual and the species. It leads to a different level of understanding of things like the true nature of God and good and evil. It leads to a world where the only solution is to live with logical contradiction every day and be happy with it - with the knowledge of a contained human 'Godhood', and faced with the real truth about the balance of life and death. Evolution is the master word, even the master of God. Evolution is the true name of God. Misquoting Nietzsche - 'The Will to Power!'. Once you understand what this means, welcome to the other side of the technological singularity and to the heart of the new future..
- -- -- -- -++- -- -- --

-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --










PR-2 : FTL Physics Research Project : Introduction (Big Project)[edit]

[Basic Rules and Model] [DRAFT] - [Current EDIT - Overall 96% Complete 27-05-17]


EVOLUTION-ReVOLUTION-EVOLUTION, the Eternal Cycle!!!
-- -- -- -- - - - - - - -- - - - - - -- - - - --
[11-02-16] NEWSFLASH - Gravitational Wave Detected. Speed-of-light propagation for gravity confirmed. This does not affect the core FTL model, but has forced a switch to a general relativity version of the gravity part of the model. This was always a possibility, but its proof was something of an ugly new truth. Each new datum tunes and improves the model - that is the nature of evolution.
-- -- -- -- - - - - - - -- - - - - - -- - - - --
[27-05-17] - Embarrassing Confession : Despite the growing sophistication of my own work I have been living with a terminological monstrosity for years. Instead of properly separating out Special and General Relativity I have been lazily and monstrously lumping the two together as just General Relativity. This should now be corrected in all current documents.


Below the Speed of Light Relativity is one of the most heavily proven and accurate theories in physics.
Above the Speed of Light Relativity is a vague half-formed guess that can be proved wrong by basic geometry..

Introduction To the FTL Problem Space
What is FTL physics? In simple terms it is the physics of everything at speeds faster than the speed of light. If we examine a space time diagram of the general universe it immediately becomes clear that on larger scales FTL physics is totally dominant over ordinary STL, or slower than light, physics. For instance there is a roughly 8 minute 19 second 'FTL gap' between the Earth and the Sun, and the Sun is hidden somewhere within that gap. The whole universe down to quantum scales is hidden inside its own FTL 'shadow'. In other words, without an FTL physics the universe simply doesn't exist.
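As a back-of-envelope check of the gap quoted above, here is a minimal Python sketch (the astronomical unit distance and the vacuum speed of light are standard reference values, not figures taken from this project):

 # Computing the Sun-Earth 'FTL gap' described above.
 # Standard reference values (assumptions, not from this document):
 AU = 1.496e11      # mean Sun-Earth distance in metres (1 astronomical unit)
 C_VAC = 2.998e8    # speed of light in vacuum, m/s
 
 gap_seconds = AU / C_VAC
 minutes, seconds = divmod(gap_seconds, 60)
 print(f"FTL gap: {gap_seconds:.0f} s = {int(minutes)} min {seconds:.0f} s")
 # -> FTL gap: 499 s = 8 min 19 s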

Modern physics tends to regard Special and General Relativity as the only possible theory of FTL physics. However above the speed of light relativity only survives because of a lack of any real analysis or competition. Real investigation uncovers a labyrinth of tangled logic that leads strongly in a very different direction. Relativity is not the only possible FTL model, and it is not the correct one. An 'Absolute FTL Frame' with an 'FTL Simultaneity' provides a far better, more logical model that is reducible to simple terms. This model also unifies the STL part of General Relativity with quantum mechanics and forms the beginning of a complete single universal model for the whole of physics. This FTL model is also ultimately directly testable in experiment.


Special Relativity Put to the FTL Test.
As said above, in the Slower Than Light (STL) universe Special Relativity remains incredibly accurate. However once we look into the FTL region, Special Relativity actually fails at almost the very first step - by marking the FTL universe as something completely inaccessible to physics, something that cannot be observed or explored directly in any way. Relativity scientists then go on to make some very strong predictions based on this complete lack of data.
If we apply the FTL Absolute Frame theory and some basic common sense logic to the problem of physical geometry, it becomes clear that the FTL universe is something we actually can detect and map fairly directly. The only problem is that the map that emerges disagrees with Special Relativity completely. The key to this map is light itself, which obviously impinges on the STL universe but by logic also interacts directly with the FTL universe. Relativity defines a 'fudge' rule that blocks the link between light and the FTL space. If we remove that fudge rule, then Special Relativity predicts a set of behaviour completely different to what is actually observed: according to Special Relativity the stars should not have fixed positions in space, light should not move in straight lines, and in fact the stars themselves should probably not be visible at all..


In More Detail : Dimensional Time versus the FTL Absolute Frame.
At the most basic level modern astronomy depends on two FTL models that flatly contradict each other :-
On one side is the explicit model of general 'Dimensional Time' & '4D Space Time' which together define a completely unstable FTL space and universe.
On the other side is a purely implicit 'FTL Absolute Frame' which provides a stable FTL space that is used everywhere throughout physics and astronomy.

Dimensional Time requires a spatial geometry that generally rules out an FTL Simultaneity, and with it any kind of Absolute FTL Frame. The FTL simultaneity is replaced by the relativity of simultaneity - which effectively defines a weak simultaneity locked at the speed of light. This defines a universe with a very weak causality.
FTL Absolute Frame : A stable FTL space (one which cannot fold) can only exist if there is an absolute frame, and this in turn requires a true FTL Simultaneity. An FTL Simultaneity is simply a region of total FTL speed coherence where all speeds are infinite, compressing time to a finite point. This defines a universe with a very strong causality.

Comparison of Causality. The universe is very large, so if it does not have a stable FTL geometry then space and the universe itself (as we imagine it) effectively do not exist. The universe is reduced to a one dimensional threadwork held together by the gossamer threads of time. In effect the 'old' universe (the ancient light from the stars) as astronomers observe it simply does not and cannot exist.
In terms of causality special relativity predicts either a very small very young universe with a fake historical light cone (creationist universe) - or an existence where observed physics is a complete illusion (simulated universe). The FTL physics model predicts an ancient universe with a strong causality that follows the natural logic of observed physics and causality.

Einstein's Real 'Wormholes'. - A very common misconception is that wormholes - Einstein-Rosen bridges - are 'tunnels' through space; however this is not true. By their real description wormholes are separate regions of space connected together by folding the whole of the FTL universe in half. Long range wormholes are completely incompatible with a stable FTL universe, because wormholes require the whole of space to fold and a stable FTL space cannot be folded. An unstable FTL space, however, does nothing but fold. If the time dimension is accessible at FTL speeds then time itself may also be able to fold as well. (In my mind at this point the Looney Tunes logo should play, as special relativity begins to lose any last sliver of reality.)


Summary
- In the STL region Relativity is still one of the most heavily tested and accurate theories in physics.
- In the FTL region Relativity becomes self contradictory, illogical, and nonsensical.
- By its own internal logic Special Relativity excludes itself at FTL speeds.
- Special Relativity predicts a set of behaviour for light from FTL interaction that is completely different to what is observed.

Conclusion. Special Relativity ignores the basic FTL geometry of the universe at the most fundamental level, and this creates a universe that is at least as ridiculous as the Sun orbiting a geocentric Earth or a flat disk Earth. Beyond the speed of light barrier the Sun doesn't exist, or may be wandering around on the other side of the universe, or may be parked in your bathroom, or may even be hidden in my pocket. The explanation is that Relativity relies too much on higher mathematics, but the FTL universe is invisible to this type of mathematics. FTL mathematics is ugly. God is a bodger, not a mathematician. (God is driven by evolution, not intelligent design.)

-- -- -- -- - - - - - - -- - - - - - -- - - - --


Building a real FTL Model.
The FTL Geometry of Space. We already have the basic geometry. The geometry of the FTL space can be extrapolated directly from the geometry of the observed STL universe (ordinary space). This defines a flat stable 3D space which can be described in physics terms as a general FTL Absolute Frame. The intrinsic geometric logic of the FTL Absolute Frame leads inevitably to an FTL Simultaneity - a region of infinite speeds and a fulcrum between space and time. The FTL simultaneity locks time to being a point, and this makes the whole model a 'Point Time' model. Point time replaces general dimensional time as the primary FTL time model.
Ironically the core of FTL physics is so simple that you can basically already understand it - the absolute frame simply means that empty space between the stars is flat, static (unchanging), and three dimensional. (There is a general curvature over the life of the whole universe as described by standard cosmology but this will be looked at later.)


Quantum Physics and FTL Physics. Point time adds a new factor that defines dimensional time on quantum scales, and this creates a new map that unifies FTL physics, STL physics, and quantum mechanics. This 'FTL Quantum' unification is more complicated and involved, and introduces a completely new way of thinking about physics. The exact geometry is still under evaluation and development, but the basic solution is that point time reaches a scale limit at the quantum limit, and this allows time to become dimension-like at quantum scales, also allowing 4D space time to exist directly. The volumetric vector sum for the speed of light plus the 4D space time together lock the local speed of light at quantum scales directly to zero. This allows the whole of quantum mechanics to be explained directly as 'slow' FTL physics.
Objects that are FTL coherent have a specific signature of a superposition of two, and this also translates to wave-like behaviour. These two factors both fit directly as quantum mechanics. Space time curvature still works but is now restricted to quantum scales. This solves several basic problems with gravity and allows a basic compatibility between a slightly modified general relativity and quantum physics. Our world becomes a continuum that is an extension from the quantum scale, and quantum physics becomes the true basis of reality. (Until 2016 my work preferred a different gravity model that treated gravity as an FTL force, but this is now excluded by the recent discovery of gravitational waves.)
The zero speed of light also allows massed objects to be described as point singularities ('Newtonian Singularities'), regions of local space time at total curvature.

General Relativity in the FTL Model. The only part of General Relativity that is actually removed is large 'general scale' dimensional time and the associated general space time. Instead space time is now restricted to quantum scales. General space time becomes simply space, and dimensional time becomes just a metaphorical abstraction (what it was before general relativity). Time as a river, as the early Relativity scientists envisioned it, is purely a fantasy - a delusion of H.G. Wells' 'Time Traveller' and his smoking room..
-- -- -- -- - - - - - - --


Inception : The Mathematical Heart and Birth of a New FTL Physics Model.
The Beginning Before the Beginning.
The FTL project itself first really started in 1997. At its heart this project began while working on trying to solve a critical design problem in the mechanics of Strong AI (self aware intelligent machines). A sentient machine needs to be able to see the world in the way that humans do, and to do this it needs to create an internal 'generic' 3D visualization of reality. This in turn requires a highly sophisticated 'self-complete' mathematical signal language. (This is a standard engineering mechanic at the core of all CPU instruction set design.) As I worked on developing the framework for building this signal language I found that I had accidentally crossed several heavy mathematical barriers that together led to a new approach to mathematics - and also right into the heart of FTL physics.
As overall work progressed the notion of mathematical self completeness became central to the design. I identified that ordinary mathematics cannot be used as it is, because it simply is not self complete and leads to unacceptable critical failure points. I began working on creating a framework for self completeness, and during this I discovered how to define first infinite and then imaginary numbers as real world objects.

Imaginary Number Mathematics. My solution to the imaginary number problem was to take the most basic answer that worked and build a solution around that. Take -1 = 1 x -1; this gives us the basic solution, but we need to squeeze the two values into a single number. The solution was a superposition number: a pair of equal-magnitude values with opposite signs that add together to make zero. This allowed the construction of a complete new algebra for imaginary numbers, allowing them to be treated as real objects.
Imaginary Numbers in FTL Physics. The one real thing that many people already know about the FTL is that the theoretical FTL particles called 'tachyons' are predicted to have imaginary mass. Put the two pieces together and you have the basics of what a tachyon is, and the basis for a basic map of tachyon physics. A basic tachyon has a signature of two mass components in a superposition state, with positive and negative mass values that add together to form a net mass of zero. (It's as simple as that.)
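A toy Python sketch of this superposition-pair construction and its tachyon application (illustrative only - the SuperPos class and its method names are invented for this example and are not part of the project's formal algebra):

 # Toy model of a 'superposition number': a pair of equal-magnitude values
 # with opposite signs. (Names invented for this illustration.)
 class SuperPos:
     def __init__(self, magnitude):
         self.parts = (+magnitude, -magnitude)   # e.g. i ~ (1, -1)
 
     def net(self):
         # The two parts add together to net zero, as described above.
         return self.parts[0] + self.parts[1]
 
     def square(self):
         # Squaring pairs the opposite-signed parts: 1 x -1 = -1.
         return self.parts[0] * self.parts[1]
 
 i = SuperPos(1)
 print(i.square())   # -1, matching i^2 = -1
 print(i.net())      # 0
 
 # A tachyon as an imaginary-mass object: positive and negative mass
 # components in superposition, summing to a net mass of zero.
 tachyon_mass = SuperPos(9.1e-31)   # arbitrary illustrative magnitude
 print(tachyon_mass.net())          # 0.0 - net zero mass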

Infinite Number Mathematics. The basics of infinity are even simpler than superposition. Infinity actually depends on the definition of the finite, so in effect you can set any number you want to represent infinity. An infinity is simply a number out of range of the definition of some 'context' or 'scalar' window. If you can only count on your fingers then eleven is a non-finite number, but if you also use your toes then it becomes a finite number. The same mechanism can be extended for all infinities, which ultimately leads to the rule that no true infinity can exist anywhere inside the universe.
Infinite Numbers in FTL Physics. To move at the speed of light a massed object must have infinite kinetic energy, and this actually forms the core of the whole barrier between FTL & STL physics. The infinite context scalar window rule can be applied to the speed of light directly to create a context map. In different contexts the speed of light is: a fixed finite in relativistic physics (3E8 m/s = finite), a local infinite in Newtonian physics (3E8 m/s ≈ ∞), and an infinitesimal very close to zero in the context of general space time (3E8 m/s ≈ 0). The idea of the speed of light in general space being fixed near zero (infinitesimal) is a critical part of the geometry of the FTL absolute frame, and solves the old problem of the Luminiferous Aether. The speed of light is the zero point.
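A minimal Python sketch of this context window rule applied to the speed of light (the window bounds below are illustrative guesses of mine, loosely matching the three contexts just named):

 # Toy model of the context/scalar window rule for infinity: a value is
 # finite, infinite, or infinitesimal only relative to some context window.
 def classify(value, window_min, window_max):
     if value > window_max:
         return "local infinite (out of range above)"
     if value < window_min:
         return "infinitesimal (out of range below)"
     return "finite (inside the context window)"
 
 C_VAC = 3e8  # m/s, the shorthand speed of light used in this document
 
 # Relativistic physics routinely handles speeds up to c -> c is a finite.
 print(classify(C_VAC, 1e-3, 1e9))
 # Everyday Newtonian physics never approaches c -> c acts as a local infinite.
 print(classify(C_VAC, 1e-3, 1e5))
 # On the general space time scales of the FTL model, c sits at the very
 # bottom of the window -> c acts as an infinitesimal, effectively zero.
 print(classify(C_VAC, 1e12, 1e40))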
-- -- -- -- - - - - - - --


Summary : The Whole of FTL Physics in a Single Page. (A Step into the FTL.)

The Core of the FTL Universe. -

  • Space, the Absolute Frame. FTL and STL space are unified as a single three dimensional geometrically flat space. Space is very large and largely empty. This space has a fixed general architecture called the Absolute Frame.
  • The FTL Simultaneity - The core of the universe's spatial geometry. This is a region of intersection and unification forming an axis between space and time at infinite velocities.
  • Quantum FTL Intersection & Point Time. The FTL Simultaneity joins time and space together at a quantum scale at every point in space across the universe. This 'Quantum Intersection' defines time as primarily point like - Point Time.
  • Quantum FTL Causality. The quantum intersection forms a fulcrum which becomes the basis of all classical (STL) physics. Thus quantum physics becomes the true base of ordinary reality.
  • Observing the FTL Space. Light interacts with the FTL space and only travels in straight lines because the FTL space itself is flat. The physical FTL space can thus be observed directly through any telescope.

Quantum FTL Physics..

  • Physics is governed by scale. As we reach down to the quantum scale the sharp edge of point time relaxes to become dimension-like.
  • Quantum Space Time. Dimensional time exists at quantum scales, but is restricted there to tiny non-contiguous fragments.
  • Quantum General Relativity. Relativistic effects now only occur at, and emerge from, the quantum scale. Large scale space time curvature is reformulated as quantum scale space time compression, which is functionally identical. In this form general relativity now happily coexists with an FTL Absolute Frame model.
  • Quantum Continuum. A 'quantum continuum' extends up from the quantum scale to create ordinary classical physics and join the world and universe together.
  • Quantum FTL Merger. At the quantum scale something restricts the speed of light to effectively zero, and so FTL and STL physics merge. This explains quantum behaviour in classical terms, including multiple superposition, indeterminacy, and wave particle duality.
  • Singularity Matter Model. The low quantum scale speed of light creates low energy event horizons around quantum objects, allowing bubbles of total space time curvature and gravitational totality to exist at very low energies. This allows nuclear particles to be described basically as quantum scale gravitational singularities. The model can even describe the quantum limit itself as a form of low energy event horizon. (All event horizons are also FTL barriers.)

FTL Metaphysics..

  • Reduction of physics to a Single Principle. Energy held at total curvature (potentially) reduces all physics to a single type of object which is a ripple in space time. What is matter - a ripple in space time at total curvature. What is energy - a ripple in space time tied to matter. What is force - a ripple in space time acting on other objects. What is a photon - a coherent net zero mass ripple in space time.
  • FTL Solution to Universe Creation. In the FTL model the universe begins in the Big Bang. An FTL causality loop can reduce the infinite space of universe creation to a finite system through the process of evolution. In effect this gives us a Creator God and something that looks quite like a 'supernatural' model of reality. At the same time the model actually begins to rule out most supernatural explanations for the first time, and rules out for instance religious models of God. When you define god you reduce it to a set of finite predicates that can be replicated in say a machine - or a lab. (This model may partly invalidate the results of machines like the Large Hadron Collider. - The machine creates quark gluon plasmas and the resulting physics may well be local to each collision rather than applying to the general universe.)

-- -- -- -- - - - - - - --


Key to General FTL Terminology
'STL' - General term for 'Slower Than Light' physics. 'FTL' - General term for 'Faster Than Light' physics.
'Slow FTL' - FTL physics within the STL speed boundary. 'Fast FTL' - FTL physics above the STL speed boundary.
'Point Time' - Time as directly observed. Point time is Zero Dimensional and so is point like. (Terminology : 'Now', 'the Present', 'Instantaneity'.)
'Dimensional Time' - Concept used by both Special & General Relativity. Puts time as a linear 1D spatial dimension. (a dimension is just a line) The Past Present and Future thus all form a single continuum of existence, and general Space-Time forms a single four dimensional surface.
'FTL Simultaneity' - The core of the simplest stable FTL geometry. A core unification between space and time where speeds are infinite.
'Absolute Frame' - A 'frame' is an intrinsic physicality which defines the general dimensionality and structure of empty space. An absolute frame is a frame that is defined by a single universal 'absolute' (FTL or non-local) metric. By definition an absolute frame must be coherent at FTL speeds to exist. The simplest absolute frame is a flat space time held together by an FTL simultaneity. (To exist the universe itself requires a stable FTL geometry and an absolute frame.)
-- -- -- -- - - - - - - --



Humour (This humorous list explains everything.) - [14-03-17]
The Relativity Salesman

Mentions -
- Curved Space Time
- Mathematical beauty
- General Relativity is the best

Doesn’t Mention -
- SR Weak Causality
- SR Incompatible with an old universe.
- SR Doesn’t allow light to travel in straight lines.
- Requirement for a Fixed FTL Fate.
- Incompatible with a universe with quantum mechanics.

The FTL Absolute Frame Salesman

Mentions -
- Failure of Special Relativity at FTL speeds.
- FTL Point-Space-Time model.
- FTL model compatible with quantum mechanics.
- Provides a new concrete definition of space time - existing on quantum scales.
- Provides a new definition of general relativity compatible with quantum mechanics.
- Simple and Reductionist.
- Strong Causality & Old Universe.
- The FTL universe is observable through light.

May Not Mention -
- Compatible with a universe creation model that could be called 'God'..
- Chooses a model of quantum mechanics that makes physics basically compatible with 'magic'.
- Malleable FTL Fate gives a basic physics mechanism for 'spell-casting' and 'precognition' and 'luck'.


Philosophy : The Politics of Relativity - Against the Mad Dance. [EDIT POINT Luciens 'Poetry'?]
When seen from the light of the FTL universe, physics without an FTL physics becomes an idiot's game - a game where people who are supposedly the greatest minds in science get to play themselves as fools; a game where reality gets hidden behind a wall of self-created lies inside a multidimensional maze, a tesseract puzzle box. By rejecting the very idea of a stable FTL physics, modern science has been playing this mad game for over 100 years, and continues to play it today. The 'natural philosophers' of the distant past are metaphorically both laughing and crying at this 'modern' physics with all its hubris and arrogance.

On wider examination this game of worship and the suppression of dissent has been, and is, very damaging to the whole ethos of science. It poisons reality at every level, and it fosters belief above proof or reality - the very antithesis of what science is supposed to be about.
When the game is examined at its heart, what is found is the self reinforcing idea of a 'mathematical universe'. An idea that treats mathematics in a way that deifies it but at the same time debases it completely. Like so many theories in so many fields, when examined dispassionately modern relativity has evolved from its roots to become almost a religion, and has somehow formed a plug of belief so strong that no outsider is even allowed to really question it or challenge it. (I believe this eventually included even Einstein himself)

The priests of this strange religion are the 'Relativity Salesmen'. A salesman will always try to emphasize the positive aspects of his or her product while trying to avoid the negative. A relativity salesman will always emphasize the 'only' viable model of physics while viciously attacking any potential competitors.

Unfortunately, because the FTL is so far beyond most people's knowledge or understanding, the mad doctrine is very hard even to question, let alone threaten or destroy. So much faith and so much certainty, and all based on what? Leprechauns dancing on the head of a pin.

-- -- -- -- - - - - - - -- - - - - - -- - - - --
-- -- -- -- - - - - - - -- - - - - - -- - - - --










PR-2 : FTL Physics : Detailed Analysis : Section 1 - Core Model. (sections redevelopment)[edit]

[Detailed Rules and Formal Model] Overall long term multi-section rewrite in Progress. [Current EDIT Overall 90% Complete 27-05-17]

- Long Synopsis : Formal Core Model By Sections.
- Postulates : The Basic Definition of an FTL Physics Model.
- FTL Mathematics Synopsis.
- FTL Physics World-Space - (Open Questions, Supporting evidence, Analysis)


Long Synopsis : Formal Core Model By Sections. [Semi Clean Draft]

PART 1 - Why Do We Need An FTL Theory?

When we observe the space beyond our world, the slow speed of light means that what we see is the deep history of our universe - the region we call the Historical Light Cone. The historical light cone defines a vast universe that is ancient, expanding, and predictable. If we create a general space time diagram of the physical universe, one basic observation stands out: the general universe exists entirely outside of the historical light cone, in the Faster Than Light or FTL region. We can deduce that the primary geometry of the universe is an FTL geometry and that the whole of physics is essentially an FTL system. If we make the most basic geometrical extrapolation from the space time model, the whole of physical reality falls directly along a line we call the FTL Simultaneity. (A line along the spatial axis but against the time axis.)
If we define a universe without an FTL space, then the question 'can anything go faster than light?' is completely meaningless, because nothing can even exist outside of our own historical or future light cones. But this leaves us with a universe that shrinks at the speed of light until it reaches zero size in the present time, then expands again into the future. We need an FTL geometry to make sense of physics and of the nature and shape of the universe, space, and time. This creates the basic observation :-

Observation : Our whole universe is basically an FTL construct and FTL physics is totally dominant over general space time.
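
A minimal Python sketch of the geometry this observation rests on (illustrative only): an event at distance d and time offset t lies inside the historical light cone when c x |t| >= d, and in the FTL region otherwise.

 # Classifying events against the historical light cone, as in the
 # space time diagram described above. (Illustrative sketch only.)
 C_VAC = 3e8  # m/s, shorthand speed of light
 
 def region(distance_m, time_offset_s):
     # Light from an event can have reached us only if c * |t| >= d;
     # everything else sits outside the cone, in the FTL region.
     if C_VAC * abs(time_offset_s) >= distance_m:
         return "inside the historical light cone (observable history)"
     return "FTL region (outside the light cone)"
 
 AU = 1.496e11  # Sun-Earth distance in metres
 print(region(AU, 600))  # the Sun as seen ten minutes ago: inside the cone
 print(region(AU, 0))    # the Sun 'now': hidden in the FTL gap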


PART 2 - What about Special and General Relativity?

The framework of Special and General Relativity defines a model describing the curved geometry that occurs at very high speeds within the Slower Than Light (STL) region. Within the STL region Relativity is an incredibly accurate and precisely mapped theory. However at speeds beyond light, in the FTL region, there are various possible interpretations of the Special Relativity part of the model, and most are basically not compatible with any kind of stable FTL model or universe.
If we apply the basic logic of the FTL geometry to standard special relativity we observe various inconsistencies and problems. Special Relativity has a predicate directly forbidding an FTL simultaneity (the relativity of simultaneity), and this defines an FTL universe that is unstable, illogical, and incomplete. Without an FTL Simultaneity, relativity is strictly incompatible with any kind of stable FTL geometry or universe.

Special Relativity → Relativity of Simultaneity → no FTL Simultaneity → no FTL Absolute Frame or stable FTL physics.

With no FTL Simultaneity or Absolute Frame :- The distances between the stars are not fixed. Light should not travel in straight lines. Black Holes have a physics that is completely self-contradictory and that violates observed behaviour...
Without a coherent FTL physics there is almost nothing holding space time together, and the speed of light restriction means that the general universe is almost infinitely fragile. This predicts a small young universe with a fake historical light cone, or that the whole universe is a simulation and reality itself doesn't exist. In short, the FTL region of Special Relativity is one of the worst and least accurate or logical theories in the whole history of science.

Observation : Below the speed of light Relativity is one of the most accurate theories in the whole of physics.
Observation : Above the Speed of light Relativity fails by self-definition and is one of the least accurate theories in the whole history of science.


PART 3 - Creating an FTL Physics

3.1 What is the FTL geometry of space & time? - The simple reductionist way to create a stable FTL geometry is to complete the intrinsic map by defining the axis between space and time at FTL speeds. - This defines a Flat Three Dimensional (3D) FTL Spatial Geometry and a primary region of FTL Simultaneity. - The spatial axis of the FTL simultaneity is a region of Total Spatial Coherency across the whole universe where all theoretical speeds become infinite. The temporal axis of the FTL Simultaneity compresses 'physical' time to a finite point (Point Time) that becomes synchronous and instantaneous across the whole universe.
The FTL Simultaneity forms a 'backbone' (the absolute frame) which gives the whole universe its structure, physics, and a strong causality. The FTL Simultaneity creates a local but universal 'STL' Point Time or 'Instantaneity' which is the time as we experience it moment to moment. (The whole basis of human reality emerges directly from the FTL Simultaneity.)

Observation : The core FTL geometry is a 'Space Point Time' model, Three Dimensions of Space plus Zero Dimensions of Time.
Observation : The point of unification in the FTL geometry is an FTL Simultaneity. (All speeds are infinite and time is point like.)
Observation : As in Newtonian physics Dimensional Time is still used but is now an abstraction..

3.2 Creating an FTL Relativity Physics Model. We now need to unify our FTL map with the STL map based on Relativity to complete the basic FTL physics model. The solution is simple :- Point Time behaves as a point on classical scales, and by the same rule restricts the physical axis of dimensional time to quantum scales. At ultra small quantum scales the standard 3D metric of space time relaxes, and hyper-dimensional geometries (dimensions > 3) are allowed. The size scale limit of point time coincides exactly with the quantum limit and forms an outer size limit within which coherent dimensional time and 4D space time can exist. When we add the quantum scale restriction on dimensional time to General Relativity, the two models fit together exactly and the new model of general relativity works perfectly with the absolute frame FTL model.

Observation : To unify a stable FTL geometry with general relativity restrict the maximum size of coherent dimensional time to quantum scales.
Observation : All relativistic behaviour (and all physics) now extends directly from the point time / point space /quantum scale region.

3.3 FTL Coherent Objects (Tachyons). This whole project started with a basic solution to the mass equation for the tachyon, based on the assumption that tachyons have imaginary mass. The starting point was discovering a way to map imaginary numbers by their real number roots. An imaginary number can be mapped as a superposition of two equal values of positive and negative sign. For example :- 1 x -1 = -1, so i = root(-1) = (1, -1)SP2, i^2 = -1. Add the two superposition parts together and you get net zero (Σi = 0).
A very short jump gives us tachyons as imaginary mass objects with an internal superposition of positive and negative mass adding up to net zero mass. This superposition of two becomes a primary signature for all FTL objects and effects, and extends to encompass wave-like versus particle-like behaviour.

Observation : An FTL tachyon has a net mass of zero made up of a superposition of positive and negative mass.
Observation : From the model Wave-like = FTL or tachyonic behaviour, and Particle-like = STL or tardyonic behaviour.

The tachyon model allows the creation of a new definition of photons as objects with both FTL and STL components, and with both tachyonic and tardyonic properties. Photons have a time varying mass superposition which adds together to a net sum of zero mass. The model of the photon became a critical point where the whole theory began to grow beyond a mere hypothesis.

Observation : Photons are both tachyons and tardyons, this locks their speed to the speed of light.
Observation : The Photon map defines a single unified FTL-STL manifold. (FTL = V ≥ C & STL = V ≤ C. So C = FTL = STL.)

3.4 Completing the Point Time Quantum Map. We now have a basically complete FTL coherent map of general physics, and the basic behaviour of tachyons. This gives us all the hooks we need to unify the new map with quantum physics. Examining quantum physics and FTL physics together, an unusual observation emerges: quantum physics fits very closely with predicted FTL physics. The quantum region does not make sense as an FTL region unless we take into account the speed of light. If we set the speed of light in the quantum region to zero then all speeds become FTL speeds and the entire model suddenly fits together.
Much of the strangeness in quantum physics is explained: multiple superpositions are explained, indeterminacy is explained, the quantum limit is explained, even the nature of physical matter and antimatter and light are explained (at least at a basic level). The big remaining question is: why is the speed of light in the quantum region near zero? This turns out to be a pretty complex problem, but ultimately the answer seems to be that the greater bulk of the 4D space time creates a lower speed of light. This area is still in development, but we can extrapolate several interrelated results - all matter can be described as gravitational singularities, the quantum limit is a form of event horizon, quantum time need not be continuous or linear, local rules control the geometry of objects, etc.

Observation : A new limit forces the speed of light at quantum scales to near zero; this defines a complete FTL-Quantum Physics model.
Observation : All Quantum behaviour is now FTL behaviour - or Quantum FTL behaviour.


PART 4 - The Complete Map & Future Developments.

By taking just the four primary rules from above we create a complete new model and vision of physics: there is an FTL Simultaneity; the size of dimensional time is restricted to scales within the quantum limit; the speed of light at quantum scales is restricted to near zero; relativity is restricted to quantum scales and the region below the speed of light. Almost no substantive changes are required to the actual working mechanics of physics, yet we have a completely different physics universe. Where does it go from there?
The new FTL model shows every sign that one day it will form that long sought after dream of physics - a single complete unified model that maps the whole of physics as a single system. The model provides a first realistic tentative answer to the Anthropic Question of how the Big Bang and the universe were created. A new model of thermodynamics is required, based on a more complete understanding of energy order and entropy. The FTL model also opens up enormous potential for whole new branches of science and future technologies. We are living in an FTL universe after all. FTL physics is not strange - it is much simpler and more logical than the current relativity framework or statistical quantum mechanics, though it is just as mind-bending. And this is only the beginning...

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


FTL Terminology.
FTL = Faster Than Light, STL = Slower Than Light. (basic terminology)
Cvac = Speed of light in vacuum. Constant. [*1] Used here with the shorthand values 3E8 m/s or 2.99E8 m/s (actual value ≈ 2.998E8 m/s).
Fast FTL = FTL region where V >= Cvac, Slow FTL = FTL region where V < Cvac.
CQ = Speed of Light at Quantum Scale. Constant. [*1] Formally Defined as 0 m/s. Allows STL physics to exist. Defined relative to Cvac.
C, Cloc = general or local speed of light.
Csim = Void Limit. Simultaneous Speed of Light. Constant. [*1] Estimate Defined as ≈ 1E40 m/s
T = 0 = Formal definition of Point Time physical. Note that T=0 is a zero length vector, defining a point.
Type 0 Matter = Ordinary matter with positive mass and charge, Type 1 Matter = Antimatter with positive mass and negative charge.
Type 2 Matter = Antimatter with negative mass and positive charge, Type 3 Matter = Antimatter with negative mass and charge.
[Note *1] : Defined by the parameters of the Big Bang.
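For readers who want to experiment numerically, the terminology above can be collected into a few lines of Python. This is a hypothetical helper sketch, not part of the model; the constant names simply mirror the table above.

    # ftl_constants.py : hypothetical sketch of the terminology table above.
    CVAC = 2.99e8   # Cvac : speed of light in vacuum, m/s (shorthand value)
    CQ   = 0.0      # CQ   : speed of light at quantum scale, m/s (defined as zero)
    CSIM = 1e40     # Csim : Void Limit / simultaneous speed of light, m/s (estimate)

    def region(v):
        """Classify a speed v (m/s) relative to Cvac, per the basic terminology."""
        if v > CVAC:
            return "FTL (V > Cvac)"
        if v < CVAC:
            return "STL (V < Cvac)"
        return "speed of light (FTL-STL intersection)"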
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


Postulates : The Basic Definition of an FTL Physics Model.
Note that the description of the FTL Absolute Frame model is much shorter than that of the FTL Quantum model. - FTL physics is simple, FTL Quantum physics is complex.

FTL Physics Predicates -
Primary Geometry. : A Flat Three Dimensional FTL Spatial Geometry that extends on all scales throughout the universe. ('Absolute Frame', 'FTL Hyperspace')
FTL Simultaneity. : The logic of the flat FTL geometry leads directly to a universal general point of intersection between space and time, a universal FTL Simultaneity.
Point Time. : The FTL model and the FTL Simultaneity lead to a general map of time as point like. (zero dimensional) Space and time are linked together by the FTL Simultaneity as a single universal synchronous 'simultaneous' point of time that extends throughout the universe. This is the basic reality that we experience from moment to moment.
Space Point Time. : In the FTL Model 4D Space Time is redefined as 3D Space + Point Time. On classical scales Dimensional time becomes an abstract quantity.
Unification with Quantum Physics : The point axis between space and time defines a point scale region that maps to all points in space in a way that makes it directly coincident with the standard quantum limit and quantum mechanics. The two models are directly connected and form a single unified FTL Quantum Physics.
Unification with General Relativity : Unification with General Relativity is achieved by restricting the size of the physical axis of dimensional time to quantum scales.
FTL Causality : (Absolute Causality) The FTL Simultaneity defines the structure and structural strength of the entire universe and defines a very strong and stable causality that creates a reality that can survive for billions of years.
FTL Causality Map : Big Bang --> FTL Simultaneity -> Point Time --> Quantum Continuum --> General Reality.
NOTE : On Observing The Absolute Frame : The Absolute Frame is unreachable from ordinary STL space, and lies across a region of infinite / total curvature at the quantum scale (an FTL barrier). This is why the absolute frame is so hard to detect or observe. This FTL barrier between us and absolute causality also partly explains why Einstein and so many other physicists have for so long failed to solve the FTL problem.

Signature of FTL Behaviour.
FTL : Imaginary Mass Rule. : The FTL mass signature is a superposition of two values, one positive and one negative, adding up to a sum of zero.
FTL : Wave-Like Properties. : The dipole nature of waves marks them as having a superposition of two - or two opposing poles. The result is that many FTL behaviors look like waves.
Signature of STL Behaviour.
STL : Single Moment Mass Rule. : The STL mass signature is a superposition of one value, positive or negative, creating an offset against the baseline of empty space. Massed STL objects can never have a mass of zero.
STL : Particle-Like Properties. : The STL space collapses superpositions down to one. (This is the point where quantum and FTL coherence break down.) The result is that STL objects tend to look like particles.
- -- --- -- - -- --- -- -


FTL Quantum Physics Predicates - [Needs some work and section rearranging.]
Part 1 - The FTL Quantum Manifold : The point axis between space and time defines a fulcrum that leads to the FTL Simultaneity. This fulcrum connects the simultaneity to every point of space in the universe simultaneously, and this forms point time. Point time exists at each point in space but is connected to the simultaneity at a point scale which (we assume) coincides directly with the quantum limit. Below the quantum limit point time breaks down, allowing time to become dimension-like, which allows the space to divide into small non-continuous regions of 4D space time.
- Limit of Quantum Time - The temporal upper limit of Quantum 4D space time is set by the lower size metric of point time.
- Limit of Quantum Space - The spatial upper limit of Quantum 4D space time is set by the limit of quantum time multiplied by the local speed of light. (See the numeric sketch after this list.)
- Quantum General Relativity - Compression of quantum scale 4D space time allows a general relativity based gravity and dilation model which is compatible with FTL physics and works purely from the quantum scale.
- Quantum Continuum - A Quantum Continuum extends up in scale as a general manifold from the general discontinuous quantum region below the quantum limit. All observed STL physics emerges from the quantum continuum. The continuum is governed by the speed of light.
- Quantum Discontinuity - At its lower limit the quantum continuum merges into individual 'grains' of FTL quantum space separated by local FTL barriers. The general quantum space within the quantum limit is a region of discontinuity, of small separate discontinuous regions of 4D space time.
- Quantum Causality : The FTL Simultaneity & the axis of Space Point Time define the quantum region as the true foundation of all physical reality. The general FTL causality emerges as Quantum Causality from the quantum region.
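A rough numeric illustration of the two limits above (assuming, purely for illustration, that the lower size metric of point time is of the order of the Planck time; the text itself does not fix a value):

\[ \Delta x_Q \;\approx\; c\,\Delta t_Q \;\approx\; (3\times10^{8}\,\mathrm{m/s})(5.4\times10^{-44}\,\mathrm{s}) \;\approx\; 1.6\times10^{-35}\,\mathrm{m} \]

which recovers the Planck length, as expected, since the Planck length is by definition the Planck time multiplied by the speed of light.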
- -- --- -- - -- --- -- -

Part 2 - The Physics of the Quantum Region. : Inside the quantum limit, behavior is no longer restricted by normal causality and instead follows the rules of quantum physics :- spatial and temporal indeterminacy (uncertainty), entanglement and multiple superposition, quantum state jumping and wavelike behavior, quantum spin, nuclear particle physics, etc. Much of this basic quantum behaviour does not fit well with normal STL physics, but it does fit the signatures of FTL behaviour. A model that allows quantum behavior to work as FTL behavior is to clamp the speed of light in the quantum region to zero or near zero. This is represented by the variable CQ.
CQ (Quantum Speed of Light). CQ = 0. Defined as zero by taking the bulk or sum volumetric speed of light. Note that CQ acts as a zero point variable, and that it is defined relative to Cvac. From this we can deduce that STL physics would be impossible without CQ.
- Defined By : The bulk vector speed of Cvac; this is a vector sphere, so it is formally zero.
- Defined By : The bulk volume of 4D space time at quantum scales.

Quantum Behaviour in terms of FTL Behaviour.
- The model introduces the idea of superposition in terms of the mathematics of imaginary numbers. This maps to both FTL tachyons and to quantum physics.
- The physics of waves and particles coexist in a single map where STL behaviour is particle like and FTL behaviour is wave like. This can explain optical fringe interference and optical refraction.
- The existence of dimensional time and 4d space time at quantum scales explains quantum indeterminism directly through temporal and spatial indeterminism.
- Nuclear particles are allowed to fit as point gravitational singularities. A near-zero speed of light creates a very low Schwarzschild limit, which allows the old model of Newtonian singularities to work as a description of matter.

FTL Quantum Determinism.
FTL Quantum Mechanics is Fully Deterministic. : From the perspective of FTL physics any quantum mechanical system and ultimately the whole of quantum mechanics and the entire universe can all be theoretically resolved to a fully deterministic system.
- By definition any quantum system can be resolved to a fully deterministic state, but this requires a local FTL causality within the system; creating that requires the use of energy and produces a total net gain of entropy / disorder.
- Energy sets a final upper limit to the determinism within any quantum system. From this we can deduce the rule : 'With enough energy you can do anything.'
- Observing the entire universe to a fully deterministic system would require more energy than the entire energy budget of the universe.
- The energy limit rule introduces the idea of creating a new FTL based Thermodynamics, written in terms of both order and disorder.
- -- --- -- - -- --- -- -

Part 3 - FTL-Quantum Curvature / Dilation Model.
Dilation. All objects in motion have a vector of speed called a velocity. When mapped as a line, this vector in effect forms a separate local physical dimension which contains the object. As the velocity of an STL object increases to relativistic speeds, this local vector of motion begins to form a separate unique spatial dimension local to the object. As speeds increase towards light speed, this local spatial vector becomes more and more time-like, space and time begin to converge, and the object's point time quantum geometry begins to extend into a line.
- Vectors and Spatial Dimensions are Equivalent : A vector describes a line in space - a single spatial dimension can be described by a line in space. In the real universe dimensions have no preferred axis of rotation, and all axes and directions are equivalent.
- Time Extension : The local vector of motion for a fast STL object extends into a time like line which extrapolates towards becoming a full local time dimension at the speed of light. This 'forced' time dimension has the interesting properties of being unidirectional (future = forward, past = backward), and is unique to the object rather than being universal.
- At the Speed of Light : At the speed of light the local vector of motion and time dimension become infinitely extended lines. Light rays become infinitely extended quantum regions. Light rays at different angles or on different vectors are separated by Fast FTL barriers. This explains the FTL interaction and the wave-particle like behaviour of light very precisely along with its associated FTL and STL behaviour.

- CRITICAL DEVELOPMENT : Gravity Waves. The detection of gravitational waves announced by LIGO on 11 February 2016 set the propagation speed of gravity absolutely as the speed of light. This ruled out 'FTL Force' models of gravity, which to that point had been the primary line of development.
- FTL Quantum Gravity Model A quantum scale space time defines a quantum restricted version of General Relativity where curvature becomes quantum scale compression. This model allows gravity to move at the speed of light, while the FTL coherency of space allows black holes to coexist at the same time.
- -- --- -- - -- --- -- -

Part 4 - Extended Analysis.
Nuclear Particle Physics in the FTL Quantum Model : A speed of light of near zero at quantum scales creates a central model describing energy and matter as a single predicate and potentially reduces all physics to just a single predicate. (as regions of curved or compressed 4D space time)
- Nuclear Objects Only Exist at Quantum Scales. : Protons and all other known nuclear objects only exist at sizes and scales smaller than and within the quantum limit.
- All stable STL nuclear particles can be described as point gravitational singularities. - As regions of quantum space time at total curvature held together by gravity or FTL coherence (locked time).
- Light and EM waves can also be described in terms of total space time curvature where the objects act as tachyons held together by FTL coherence (locked time).

Photons as FTL Quantum Objects : In the FTL model photonic EM wave/particles are defined as having both STL and FTL properties, travelling at the edge of both the FTL and STL regions. The speed of light is thus fixed as a unification between the STL and FTL spaces. To fit the model, photons have both an imaginary mass and a small positive mass component. (Suggested Terminology : Photonic Particles, FTL-STL Particles.)
- Photonic EM Wave/Particles : Imaginary Mass Component : A solution for the imaginary mass of EM waves can be described as a time integrated bipolar wave - a positive negative superposition. The sum of the energy of the wave adds up to net zero.
- Photonic EM Wave/Particles : Positive Mass Component : EM objects require a small positive mass (energy = mass) to interact with STL physics. This mass is below some critical quantum threshold for momentum, which defines the object in the space as having a zero mass and infinite acceleration.
- STL Photon Interaction - Particle Type :- Emission, Absorption, Straight line Motion, Refraction (??formalism). Domain : Single isolated vector.
- FTL Photon Interaction - Wave Type :- Rotation, Reflection, Refraction (slowing), Frequency Changes, Corridor for straight line motion. Domain : All Connected Vectors.

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


FTL Mathematics Synopsis


FTL Maths : The Speed of Light - The Parameter C -

The Speed of Light : FTL-STL Intersection. - The speed of light forms the edge of both the FTL and STL regions and so forms part of both.
- (VSTL ≤ Cvac ≤ VFTL. The speed of light in vacuum Cvac = 2.99E8 m/s.)

The Speed of Light : Discontinuity Rule. -
- For any object moving slower than light (V < C) Light becomes a maximum. (C is Local) If V < C then C = Limit(0 -> Cvac).
- For any object moving faster than light (V > C) Light becomes a minimum. (C is Local) If V > C then C = Limit(Cvac -> 0).
The Speed of Light : Vector Rule. - The speed of light (C or Cvac) is a vector value defined as a velocity. Light has a direction. Without a direction component C is either incomplete or defines a sphere or is purely abstract.

The Sum Speed of Light - Volumetric Velocity Rule. There are two basic ways of computing a sum volumetric speed for light; both produce a result of zero. (CQ is defined by a volumetric vector rule. Method 1 is sketched numerically after this list.)
- 1. All light vectors are taken from a single point and are summed together (added together), this gives a net velocity of zero.
- 2. Light without a direction component is treated as an abstract speed. All abstract values mechanically extrapolate (in the real world) to the value of zero.
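A minimal numerical sketch of method 1 in Python (illustrative only; treating 'all light vectors from a single point' as directions sampled uniformly over the sphere is my assumption):

    # Sum light velocity vectors pointing in all directions from a single point.
    # By symmetry the mean vector tends to zero as the sample grows, matching CQ = 0.
    import math, random

    CVAC = 2.99e8  # m/s (shorthand value)

    def random_direction():
        """Uniform random unit vector on the sphere (via the Gaussian trick)."""
        x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
        r = math.sqrt(x*x + y*y + z*z)
        return (x/r, y/r, z/r)

    n = 100_000
    sx = sy = sz = 0.0
    for _ in range(n):
        dx, dy, dz = random_direction()
        sx += CVAC*dx; sy += CVAC*dy; sz += CVAC*dz

    mean_speed = math.sqrt(sx*sx + sy*sy + sz*sz) / n
    print(f"mean net velocity ≈ {mean_speed:.3e} m/s (tends to 0 as n grows)")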

The Speed of Light (Cvac) : Applying the Scalar Window Rule. -
- Window set to General FTL Space - (V >> Cvac); Cvac ≈ 0 m/s (within noise margin); Defines the (universal) speed of light as zero.
- Window set to Relativistic Space - (V ≈ 1% to 99.99% of Cvac); Cvac = 2.99E8 m/s; Defines the local speed of light as finite.
- Window set to Newtonian Space - (V = 0% of Cvac); Cvac ≈ Infinite; Defines the speed of light as essentially infinite.
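The window rule can be phrased as a small lookup function. A hypothetical sketch (the window names and return conventions are mine, taken from the three contexts listed above):

    import math

    CVAC = 2.99e8  # m/s

    def effective_c(window):
        """Effective value of Cvac as seen from each scalar window."""
        if window == "general_ftl":   # V >> Cvac : Cvac ≈ 0 within the noise margin
            return 0.0
        if window == "relativistic":  # V ≈ 1% to 99.99% of Cvac : finite local value
            return CVAC
        if window == "newtonian":     # V ≈ 0 : Cvac behaves as essentially infinite
            return math.inf
        raise ValueError(f"unknown window: {window}")

    print(effective_c("newtonian"))  # inf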

FTL Maths : Tachyons and the Imaginary Mass Rule. -
The Lorentz factor defines the mass of all FTL coherent objects as having imaginary values.
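Spelled out (this step is ordinary textbook algebra, not specific to this model): the relativistic mass is

\[ m(v) \;=\; \frac{m_0}{\sqrt{1 - v^2/c^2}} \]

and for v > c the quantity 1 - v²/c² is negative, so the square root - and with it the mass - takes an imaginary value.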
Imaginary numbers. An imaginary number can be defined directly as a superposition of two identical values with positive and negative sign. The two sum together as a net value of zero.
Imaginary Number Terminology
- Superposition . . . :- (n1)SP1 = a superposition of one, (n1,n2)SP2 = a superposition of two.
- Imaginary Numbers as superposition :- i = root(-1) = (1, -1)SP2 = net 0; Σi = Σ(±1)SP2 = 0.
- Imaginary & Real Number Types . .  :- Real n = (n, n)SP2 ; Imaginary in = (n, -n)SP2 ; Complex n + im = (n+m, n-m)SP2.
- Superposition Breaking (to SP1) .  :- (n)SP1 = Real n ; (in)SP1 = either +n or -n, (formalism = +n).
FTL Tachyons (as defined above) have an imaginary mass. From this a basic Tachyon is a superposition of two mass nodes having a net mass of zero.
Tachyon mass = (+m) + (-m) = (+m, -m)SP2 = net mass of zero = 0.000... kg.
Tachyon Rule : For a tachyon to be stable the speed of light of its internal superposition must remain fixed at exactly zero.
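A minimal executable sketch of the SP notation in Python (the class and method names are hypothetical; only the sum and breaking rules are taken from the text above):

    class SP2:
        """A superposition of two values, per the Imaginary Number Terminology."""
        def __init__(self, a, b):
            self.a, self.b = a, b
        def net(self):
            """Net summed value : (n, -n)SP2 nets to zero."""
            return self.a + self.b
        def break_to_sp1(self):
            """Superposition breaking : (in)SP1 = +n or -n; formalism keeps +n."""
            return max(self.a, self.b)

    m = 1.0                        # one mass node, kg
    tachyon = SP2(+m, -m)          # Tachyon mass = (+m, -m)SP2
    print(tachyon.net())           # 0.0 -> net mass of zero, as in the rule above
    print(tachyon.break_to_sp1())  # 1.0 -> broken to SP1, positive node by formalism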

- -- --- -- - -- --- -- -


FTL Physics World-Space - (Open Questions, Supporting evidence, Analysis)

Key Analytical Experimental Fulcrum : 01 - Light Velocity Region Manifold.
The light velocity manifold is probably the single most critical dividing line between the Relativity Framework and FTL Absolute Frame models. Because of this the velocity region manifold becomes a starting point for experiments which could determine absolutely which basic type is correct.

Standard SR & GR Relativity Model.
(No FTL Absolute Frame)
General model :
3D Space + 1D Dimensional Time.
VSTL < Cvac < VFTL
Three Non-Intersecting Regions :-
- The STL Space,
- The Speed of Light,
- The FTL Space.
There is a deliberate division into three non-intersecting phase spaces. This can be seen as an artifact created by a formal limit in mathematics which forbids zero as an imaginary number. - In this model light only travels in straight lines by local self-definition. (tautology)
No clear FTL model - Fragile causality and fragile universe.

FTL Absolute Frame Model.
General Model :
3D space + 0D Point Time
VSTL ≤ Cvac ≤ VFTL
Two Intersecting Regions :-
- The STL Space,
- The FTL Space.
When zero is treated as an imaginary number the map becomes a single unified space with the speed of light at the point of intersection. - In this map Light only travels in straight lines if a stable flat FTL space exists.
FTL space strongly delineated and directly observable through the behaviour of light.

- -- --- -- - -- --- -- -

Open Questions / Open Problems :
- 'Can we decisively prove or disprove this or any other FTL model, including Special Relativity?' (Now answered, Yes.)
- 'Can a physical massed material object travel faster than light?'
- 'Can an information containing signal be transmitted over a distance faster than light then received at a remote location?'
- 'Does negative mass antimatter exist? If so then can we access or create or collect this negative matter and then can we use it?'
- 'What is the exact relationship between point time and relativistic matter, or theoretically matter at FTL speeds?'
- 'What about gravity?' Solved in favor of General Relativity or specifically a Quantum General Relativity model - (Q1-2016).
- 'What about the overall structure and shape of the universe (space time), particularly at the point of inception at the Big Bang?'
- 'Open Problem : To design and create a complete predictive mathematical model that works correctly and completes the FTL model.' (requires considerable further work and also ideally the development of a new improved mathematics terminology.)
- -- --- -- - -- --- -- -

Supporting Evidence, Basic Proofs for an Absolute Frame FTL Model :
- Direct : The basic natural map of space time puts 99.9999...% of the universe and reality into the FTL region, making FTL physics dominant.
- Direct/Indirect : Intrinsic logic: the stars are observed to have fixed positions. (i.e. space has intrinsic dimensional stability over regions where FTL geometries dominate)
- Direct : Depending on the manifold between velocity regions, it is possible to observe the FTL space directly by examining light.
- Indirect - Very Strong : Answers the 'Anthropic Question', allowing a first ever workable solution to the 'finely tuned universe' problem.
- Indirect : Unifies and completes physics : unifying an 'FTL Absolute Frame Model' with 'General Relativity' and 'Quantum Mechanics'.
- Indirect : Creates/Allows a simple, logical, non-contradictory, and basically complete model for black holes.
- Indirect : Solves the problem of the mass of the quantum vacuum.. Net mass ≈ zero.
- Indirect : Offers several simple plausible solutions to Dark Matter and Dark Energy problems..
- Indirect : New [27-03-15] : Observations of Dark Matter as neither interacting with ordinary matter nor self-interacting fit strongly with Dark Matter as Tachyonic matter, possibly with negative mass.
- Indirect : (double negative) No large scale irrefutable time travel has been observed.
- -- --- -- - -- --- -- -

Potential Ways of Directly Detecting the FTL by the Use of FTL Technologies.
- Direct tests through interaction. Building a successful FTL coherent system should allow the development of tests and experiments involving coherency. This should be possible with a medium budget and about 5 to 10 years of work.
- Conservation of Momentum Imbalance. In theory an imbalanced FTL interaction could produce local violations of the conservation of momentum. In the FTL model this would be allowed, but it would cause the entire universe to move slightly (like a gravity wave). This motion in turn could be detectable as a momentary change in the dilation of space time. The effect would be utterly tiny, on the order of 10^-50 : 1 (± 10^10) per kilogram per metre moved - see the rough arithmetic after this list. This would require a technology far beyond any human technology that currently exists.
- Causality changes in areas where FTL technology has been tested or used. The name 'transience' refers to the power of reality, to the shifting superpositions created by FTL coherent objects. It is possible that this could be detected at interstellar distances. However the detector itself requires a linking transient superposition to the target, and this itself would require a highly advanced FTL technology.
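The quoted figure can at least be turned into order-of-magnitude arithmetic (illustrative only; the ± 10^10 spread quoted above makes any such number extremely rough):

    # Rough scale of the effect for a tonne of matter moved one metre.
    effect_per_kg_per_m = 1e-50
    mass_kg, distance_m = 1000.0, 1.0
    dilation_change = mass_kg * distance_m * effect_per_kg_per_m
    print(f"{dilation_change:.1e}")  # 1.0e-47 : far below any current sensitivity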

-- -- -- -- -- -- -- -- -- -- -- -- -- -
-- -- -- -- -- -- -- -- -- -- -- -- -- -










PR-2 : FTL Physics : Detailed Analysis : Section 2 - Expansion and Details.[edit]

[WARNING !! : Some sections are in a very-rough state or are in flux.] [EDIT ~65% Complete.] [EDIT 27-05-17]
[Needs significant simplification and or removal/tidying of redundant sections. Still Needs Massive Reorganization!!]


The Evolution and Development History of this Project.
---- Date ---- ---- Phase ---- --- Event / Description ---
1982 - 1997 Background. Childhood interest in science. An interest in science fiction, sci-fi space technology, and other future extrapolations led to an interest in general science, computing, electronics, space science, and general physics.
1982 - 1987 Educational Background. Basic High School Physics. UK 'O' Level, then first year 'A' Level Physics.
1994 - 1995 Educational Background. College, Foundation level physics (Engineering Science) and Mathematics.
1995 - 1997 Educational Background. University Level Mathematics, 1st and 2nd yr Calculus, Probability and Statistics.
. . . 1994 .. . Preliminary. First glimpse of problems with Mathematical Totality. This led to a peculiar world space that led to mathematical self-completeness, and to the Totality Matrix (in Strong AI).
. . . 1994 .. . Preliminary. Idea that the Turing Machine can be used as a universal model in physics. (This is very compatible with an FTL universe..)
1995 - 1997 Preliminary. Crude attempts to solve the 'FTL velocity' problem central to FTL travel.
1996 - 1997 Inception. Began examining the idea of precognition in the context of cognition and Strong AI. Reverse engineering showed that precognition would require some kind of FTL causality and FTL physics, and conversely that a better understanding of precognition might be a very good route to understanding FTL physics..
. . . 1997 .. . Inception. Discovered the basic solution for reducing imaginary numbers to computable form. This allowed a first very basic map of the physics of FTL Tachyons (with imaginary mass).
. . . 1997 .. . Inception. Eric Drexler's ideas are extended to create a basic mechanism for working with infinite and non-finite numbers, called 'Scalar Windows'.
1997 - 2002 Hiatus. Mental Breakdown left me largely unable to work.
2002 - 2005 Work Starts. Background project. Began process of trying to map out and understand the primary FTL geometry of the universe.
2005 - ongoing Background. (Out of order) Began relearning basic physics at a higher and more detailed level. This included Relativity, Quantum Mechanics, Newtonian Mechanics, and (the basics of) the rest of physics.
. . . 2005 .. . Core theory. R-Shell Theory [09-06-05], an early partially complete version of the FTL Model. Note: a thousand iterations and circles of development since. (See Milestones Section below)
2007 - 2010 Hiatus. Partial Hiatus after hitting a dead end wall. Worked on free-form models for Background Science Fiction FTL space technology.
2009 - 2012 Breakthrough. Linked the flat absolute FTL geometry with STL mechanics and the curved geometry of general relativity.
2010 - ongoing Core theory. Completed the basic quantum geometry by settling on a solution of C ≈ 0 within the zero point quantum region.
2014 - ongoing Core theory. Began development and evolution of a full analytical mathematical model. Still very early stages.
Feb 2016 Core theory. The detection of gravitational waves, announced by LIGO in February 2016, proved that gravity travels at the speed of light. This ruled out the then-current 'FTL Gravity' theory and effectively ruled in 'Quantum General Relativity' as the primary FTL gravity theory.
2010 - ongoing Late Work. Continuing development of theory and experimental methods.

-- -- -- -- -- -- -- -- -- -- -- -- -- -


Mathematics and Numbers.
Note that the mathematics needed to understand most of FTL physics is roughly equivalent to that needed for standard high school physics. Though a good knowledge of the fundamental theory underlying calculus is useful, calculus and higher mathematics are not directly required.

Mathematical Background to FTL Physics.
This model uses a mathematics that originated in a theory of permutation and self complete mathematical systems. This mathematics was discovered almost incidentally while working on the early development of a self complete mathematical language for visual reasoning in Strong AI. The first real step into the FTL was a map that allowed imaginary numbers to be reduced to their real number bases. The second step came at almost exactly the same time, and was a basic mechanism for handling non-finite and infinite numbers within a finite mathematics. This mathematics model has continued to develop and evolve (often very slowly) along with both the FTL model and other Strong AI work. Parts of the work have reached an initial mature stage, though others are still nascent or in preliminary stages.
For FTL physics a mathematics base model might include - standard mathematics, modified sign and number rules, imaginary numbers as superposition, infinite (non-finite) numbers, division by zero and Tangent 90, modified geometrical logic, and ultimately (ideally) some form of self-completeness.
Self Complete Mathematics. A self complete mathematics has to be internally logically self complete and should allow all operations to complete properly for all values. Self complete systems can go further to become 'fully self complete' and 'dynamically automatic' though this is an area more for advanced Strong AI research than physics.

Theoretical Background. At the heart of all the mathematics in this FTL model, and also in the Strong AI project connected to it, is a simple precept - 'That the nature of mathematics is dependent on the nature of reality and physics first, and not the other way around.' In particular the real number roots of imaginary numbers are simple but require superposition, a solution that injects a concept from physics into mathematics.
The first precept leads to a second precept: that human mathematics is an evolved and evolving system. Such systems are essentially piles of collected pieces rather than a single coherent whole, and once we reach the level of higher mathematics it is quite clear that mathematics shares a little of both. The consequence of this evolved nature is a limit to certainty; evolved systems are always incomplete and imperfect, and almost always capable of further evolution. This includes revolutionary jumps like the ability to calculate imaginary numbers.
Observation : The key to this model mathematically is the idea that physics should be seen as the basis of mathematics and not the other way around.

(See Section : [EDIT POINT] 'The Mathematics of the FTL' for a more complete set of rules and definitions.)
-- -- -- -- -- -- -- -- -- -- -- -- -- -


Basic Mathematics Rules and Changes to Standard Rules.
See the section above for list of basic rules.

The Speed of Light :- The Parameter C is a complex object, with multiple contexts and definitions.
Cvac - The primary constant of the speed of light in vacuum is defined as Cvac (Cvac = 2.99E8 m/s). Note critically that Cvac is a vector.
CQ - the Quantum speed of light CQ is a volumetric sum of the speed of light. CQ = Sum[0 to 360deg] Cvac = 0 m/s.
The Cvac Scalar Window. Cvac has a total of six scalar window contexts. -
- Newtonian space (V = 0.0% of Cvac) : In this context Cvac is defined as a non-finite (effectively infinite) value.
- Relativistic space / space time (V = 0% to 99.99..% of Cvac) : In this context Cvac behaves as a standard finite.
- FTL space (V ≥ Cvac) : In this context Cvac is small, and is also defined as a minimum (Limit (Cvac -> 0) ).
- FTL Simultaneity (V ≈ Csim ≈ 1E40 m/s) : In this context Cvac forms an absolute frame or zero point velocity (Cvac = 0).
- Quantum space (V ≥ CQ, CQ = 0 m/s) : In this context there is an FTL barrier between CQ and Cvac.
- Formal Net Sum Zero : As a velocity, if defined without a direction component, Cvac becomes a vector sphere which has a net formal value of zero. Cvac = 0 m/s. This is identical to the formal definition of CQ.

Cvac as Vector. Cvac is a velocity; this forms a vector with both speed and direction components.
- As vector components the speeds defined by Cloc or Cvac appear as Real values but may actually be Imaginary values. In any context with a single superposition state (SP1) such as ordinary STL space imaginary and real numbers will look and behave identically.
- As a vector with an imaginary speed C would appear as a superposition vector with two directions, a primary and an opposite at 180°. Note that the two values can be defined by sign (C = (v+, v-)SP2, theta), or by angle (C = v, (theta, theta + 180°)SP2).
- When defined without any vector context 'speed' becomes an abstract value and Cvac is not a physical property or is formally zero.
- The vector Cvac defines the mass energy equivalence equation as Ek = m0·Cvac². Energy is defined as kinetic energy Ek (or KE) because the velocity component of Cvac is squared and then carried across to the energy side. This defines kinetic energy as the most fundamental form of energy.


Mass and Energy in the FTL System.
Mass and energy in the FTL system. The relation E = m·c² is familiar to everyone and represents an intrinsic link between mass and energy. However E = m·c² also represents a barrier between mass and energy, so that one does not simply convert into the other. Moreover the equation says that mass is energy divided by the speed of light squared; in effect there is an FTL barrier between the two. The vector property of Cvac also defines E as a vector - as kinetic energy. (E = Ek)
Ek = m0·Cvac². (Energy is defined as kinetic energy Ek (or KE).)
When mass is 'converted' to energy (by accelerating it to the speed of light) we assume that it still has the same mass but is now purely in the form of free energy.
When energy is converted to mass (by compressing it into singularities) we assume that the amount of energy and its total mass remain constant.
Note that neither of these statements is strictly 100% true because of binding energy which does not seem to express mass.
Binding energy is a hint that it is possible to shield against gravity - but like Newtonian singularities it all happens on very small scales.
This is still an area for future research.
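A numeric illustration of the Ek = m0·Cvac² relation as written above (standard arithmetic, using the document's shorthand value for Cvac):

    CVAC = 2.99e8              # m/s, shorthand value
    m0 = 1.0                   # kg
    Ek = m0 * CVAC**2          # Ek = m0 * Cvac^2 as written above
    print(f"Ek = {Ek:.3e} J")  # Ek = 8.940e+16 J for one kilogram of mass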

Superposition Breaking. A very obvious statement is that a superposition of 1 (SP1) is unbreakable and has no sub-components, while a superposition of 2 (SP2) is breakable into 2 sub-components. This is why one is observed in everyday physics and the other is not - one is strong but two is delicate. However there may be circumstances where SP2 is strong and SP1 is not, for instance where SP1 is an illegal value.
Massless Tardyonic STL Mass. If the STL geometry is set to restrict but not break superposition then all mass might ultimately resolve to be massless - as positive negative superpositions where only the positive side appears. An obvious division barrier between positive and negative superpositions is point time itself. This is actually quite a strong model - as it explains relativistic curvature very directly and solves problems like missing mass.

Kinetic Energy : There are several possible ways to define Kinetic Energy, and because KE may be the most fundamental form of energy its definition becomes potentially critical. KE is normally a relative value between massed bodies and defines energy relative to their motion. However KE can work either in a direct relative context or via an absolute frame such as an FTL space. The FTL quantum model suggests that KE may exist directly on a very small scale - probably as a set of imbalanced 'bow waves' in space time around matter - at the quantum scale.
- Kinetic Energy as massed. (interim definition one) KE has a positive mass which replaces or is equivalent to space time curvature in dilation. If space time curvature and mass are equivalent then KE becomes a positive space time curvature. (Note that this model works with either positive or negative mass KE.)
- Kinetic Energy as massless. (interim definition two) KE has net zero mass - an imaginary mass. This is made up of two vectors at 180 degrees apart, one positive and one negative. These define a tachyon with a sum velocity in line with its comparative summed absolute mass.
- Kinetic Energy as Combined/Imaginary. (interim definition three) here KE is imaginary. In a single non-superposition (SP1) space KE behaves as an ordinary positive or negative mass. In a superposition space (SP > 1) KE behaves as an imaginary mass with net zero mass.
-- -- -- -- -- -- -- -- -- -- -- -- -- -


Extending the FTL Spatial Physics Model.

Defining the Core FTL Spatial Physics Model.
- Basic calculation shows that 99.99999..% of space time can only exist as an FTL region. In effect the universe can only exist as an FTL space.
- The FTL model (defined) builds towards a simple, self consistent, reductionist, and non-self-contradicting model. (The FTL model follows the reductionist maxim of Occam's Razor.)
- Light travels in straight lines over distances where STL and Primary FTL geometry coincide. Because the stars are visible and static and predictable this predicts a flat FTL geometry with an FTL Simultaneity.
- The Model predicts that Quantum Mechanics is a direct form of FTL physics. This applies at FTL speeds, the speed of light, and at STL speeds.
- The Model Predicts that the mass for the general Quantum Vacuum will sum to zero. Virtual particles will appear with a symmetry that follows the Imaginary Mass rule. The net mass of the universe is probably zero.
- The Model has the basic components that may eventually allow it to form a single universal G-U-T. It resolves the incompatibilities between relativity and quantum mechanics and Newtonian physics and solves various problems with each. It resolves the whole of physics to a small set of simple primary components. Ultimately maybe just a spatial geometry, a temporal geometry and a type of superposition object called an FTL causality. If this is correct every object in the universe can be resolved as a gravitational singularity held together by an FTL causality.
- Observation showing that large black holes have gravitational influence sets a minimum gravitational resonance that approaches the infinite speeds of FTL simultaneity. A contradiction is created by the observation of Gravity waves, limiting the speed of gravity to the speed of light. This is resolved by moving the FTL resonance from gravity itself to the underlying space, which fits with an FTL Quantum General Relativity gravity model.


FTL Metric of Space.
'Space (General Spatial Metric)' - The primary spatial region that defines and is occupied by physical reality.
Features : Three Dimensional. Defined non-locally by FTL Space and locally by STL Space. The two spaces are joined by a 'Quantum Continuum'.
'FTL Space (FTL Spatial Metric)' - Any region of space where speeds exceed the speed of light. (Velocity V ≥ Cvac).
Features : Defined Geometry :- Non-locally Flat, Three Dimensional, geometrically simple, universe spanning. FTL space is directly observable by observing the behaviour of light.
(Terminology : FTL Space, FTL Prime, FTL Hyperspace, FTL Absolute Frame.)
'STL Space (STL Spatial Metric)' - Any region of space where speeds are slower than the speed of light. (Velocity V ≤ Cvac) STL space is non-contiguous and restricted to small local regions by the speed of light and causality. STL space is directly observable through contained objects and all common observed physics.
Features : Defined Geometry :- Locally Flat, Distorted at quantum scales by gravity, Three Dimensional, geometrically simple, forms local space, forms a continuum across the universe at STL speeds, one way universe spanning.
(Terminology : STL Space, Relativistic Space, Non-absolute Frame.)

FTL Unification Metric.
FTL Simultaneity (Unification Metric) - The region of space where FTL speeds become instantly universe spanning and infinite at local scales. The FTL Simultaneity is a region of unification between space and time and a fulcrum between the past and future (like the two axes on a graph). Nothing can exist in the FTL simultaneity region directly, but it is a region of total coherency that extends across the universe and forms a backbone holding the geometry of space together.
Features : Defined Geometry :- Universally-Non-Locally Flat, Three Dimensional, geometrically simple, instantly universe spanning, unification between space and time.. Observable by extrapolation. Rules out the 'relativity of simultaneity' as part of any coherent or stable FTL model.
(Terminology : FTL Simultaneity, Heart of Hyperspace, FTL Absolute Frame.)

FTL Metric of Time.
Point Time (STL-FTL Temporal Metric) - Time is point like and there is a single moment of reality which is the base of all STL physics. This is perhaps the most basic observation in science: that we live in a moment of time and that time progresses. Point time connects to every point in the universe instantaneously through the FTL Simultaneity; this makes time synchronous and simultaneous across the whole universe.
Features : Point time is defined as a union between Space and the FTL Simultaneity at quantum scales at every point across the universe. The quantum scale geometry of point time defines a quantum FTL model which expands through a 'Quantum Continuum' to create ordinary STL physics and our observed causality.
(Terminology : Point Time, Instantaneity, Now, the Present, Point of Reality.)
Quantum Dimensional Time and Quantum Space Time - In the FTL model Dimensional Time on classical scales is generally treated as abstract - non-existing. A more accurate description is that dimensional time is discontinuous or non-coherent on classical scales. The quantum limit is treated as the general lower scale limit to point time, and this makes the same point the upper limit to dimensional time and the associated four dimensional geometry of physical space time. A quantum scale 4D space time allows general relativity to be mapped from quantum scales - and to be compatible with Absolute Frame FTL physics.
Features : Quantum space time allows a unification that ties General Relativity, Quantum Mechanics, the Newtonian Galilean geometry, Point Time, and the FTL Simultaneity, as a general FTL model.
(Terminology : Quantum Space Time, Quantum General Relativity, FTL Quantum 'Subspace', Point Time Zero Point Field, Space Time 'Compression'.)
-- -- -- -- -- -- -- -- -- -- -- -- -- -


Further Features of the FTL Metric of Space - Testing the FTL Model. [E]
Light Cone Map. Observation : We can only observe the universe through the historical light cone which is a 3D space compressed into a 2D manifold with distance defined by time. This creates a razor thin window of observation on all observable spatial objects defined by distance divided by the speed of light. The window itself is defined by the entire volume of the region of space it passes through and forms part of both the STL and FTL spaces. Light rays interact with both STL and FTL spaces and so represent a combination of the two, so light can only travel in straight lines if the FTL space is flat and stable. Light itself is only emitted by STL objects and so only directly represents the STL part of the physics window..
The FTL Simultaneity : The only absolute way to test directly for the existence of an FTL simultaneity is to penetrate and map the FTL space directly. However there is considerable indirect evidence that an FTL simultaneity exists, deduced from the requirement for a stable FTL spatial metric, and from the general geometry.
Dimensionality of Space. - STL Space is observed and extrapolated to be restricted to three dimensions on all classical scales.
- As all systems seek to minimize (equalize) their energy budgets, by the rules of energy minimization all higher and lower dimensional spaces tend to collapse towards three dimensions.
- At classical scales Space is generally incompressible and flat and cannot be bent or folded. (This also restricts 'Wormhole tunnels' to short or zero length sections. See 'Black Holes' section.)
- Although the FTL space is flat at any one point the evidence from existing cosmological theory projects an evolution following a curved expansion of space time from the Big Bang onwards. (The only part of the Relativity model that is not literal is the general time dimension itself.) It is important to recognize that this projection is only an early interim answer and may change.


Further Features of the FTL Metric of Time.
Point Time. - Physically time is point like and is non-locally non-causal. All events that occur in different places at the same time are separated by FTL barriers.
- Point Time is the result of the intersection of the FTL Simultaneity (Absolute Frame) with local space at quantum scales.
- Point Time is time as we observe it directly. Time is instantaneous (synchronous) throughout the universe, as in the Newtonian and classical (pre-relativistic) models.
- In the general FTL model the present is a fixed point and the past and future are mutable. The Past and Future as dimensions are abstract spaces and do not act as fixed fulcrums.
- Point time regions join together to form a 'quantum continuum' which unifies quantum physics to form classical STL physics.
- Space Time curvature becomes the compression of quantum scale space time.
- Relativistic speeds. Dimension like time on classical scales inherits its apparent dimensionality from the vector component of its speed relative to space & point time.


The FTL Metric of Quantum Space Time.
- The Quantum Region is defined as a continuum at each point in the axis or fulcrum between space and point time. This axis forms a natural quantum limit that coincides directly with the standard pre-FTL quantum model..
- At scales below the lower spatial limit of point time, the point rule relaxes allowing time to become dimension like and 3D space to become a quantum scale 4D space time. (Dimensional time becomes non-coherent at larger scales - a sea of quantum scale space time).
- Quantum Space Time is the source of physical causality and of the physical base of reality.
- The manifold that best fits quantum behaviour at the fulcrum of point time and space is a model where the local speed of light in the Quantum Region is closely locked to zero. C ≈ 0. (estimate Cquantum < 1E-20 m/s)
- Only very ordered signals can enter or leave any given quantum region. Order is defined by resonance and other factors.
- The FTL Model of Quantum Mechanics is completely deterministic - but this is only from the viewpoint of the FTL predicate. (I.e. The Prime FTL Causality.)
- - All wave like behaviour resolves to vibrating particles and FTL superposition.
- - The uncertainty principle is violated and can be reduced to zero - at any one point - by observations based on FTL causality.
- - Superposition breaking resolves to a deterministic FTL causality breaking. Superposition breaking is controlled by a form of FTL thermodynamics.
- Quantum Scale Dimensional Time. Within the quantum environment regions of dimensional time and four dimensional space time can exist physically. Any given region of dimensional time is only stable within and restricted to a particular quantum stable region.
- - If an extended quantum region can be created then this will also allow the creation of larger regions of stable space time. This would allow the creation of an object informally called a 'glisten' that would allow the limited transmission of energy, information, or matter through time. (within a 'glisten' corridor) [Break Point] Note that such objects would also impose a limiter against general causality and would need to bridge the gap between the quantum space where the speed of light is zero and the STL space where the speed of light is 2.99E8 m/s. - So a glisten could not be made of ordinary matter but would require something far stronger - with a persistent and residual FTL coherence. An alternative is a glisten built around maintaining a zero energy balance. - One speculation is that the human brain uses net zero energy glistens to allow it to access quantum information states.
- Quantum or Zero Point Vacuum. A basic prediction is that the quantum vacuum is massless. Virtual particle pairs still appear, but the rules for imaginary mass mean that for every positive mass particle there is a negative mass particle. (see section : FTL Thermodynamics)


-- -- -- -- -- -- -- -- -- -- -- -- -- -
-- -- -- -- -- -- -- -- -- -- -- -- -- -


Causality in the FTL Model.
A primary starting assumption throughout this model is that at a mechanical level all existing general physics is correct.

General FTL Causality. : Causality in the FTL model flows along with and is inseparable from time.
> The FTL Simultaneity connects the whole universe together as a single point at infinite velocity.
> The FTL Simultaneity defines and locks point time and connects it directly to every point in space at the quantum scale.
> At the quantum scale point time creates a quantum causality, which connects to ordinary STL reality through the quantum continuum.
- Causality may also flow in either direction but we assume it flows primarily -
Big Bang -> FTL Simultaneity / 3D Space -> Point Time -> Quantum Causality -> Quantum Continuum -> classical STL reality.

Unified Mechanism.
- Quantum mechanics and point time are defined as the core base of all reality.
- We live in a 3-Space but with a 4D geometry that is restricted to quantum scales.
- Relativistic curvature is physically defined at quantum scales through quantum 4D space time.
- Dilation is defined by a local effect. A 1D vector extension to a local region of space (at quantum scales). As speed increases this vector extends until it becomes a continuous ray at the speed of light.
- Causality and Time may exist as separate factors but it is simpler if the two share a single common mechanism.

Features Of Time in FTL Causality :-
The Absolute Frame FTL model defines the present time as a fixed point of existence - Point time.
- The present is a fixed fulcrum though it does of course change constantly in accordance with the rules of physics.
- The Present is fully deterministic because it exists directly.
- The past and future do not exist directly so are not directly deterministic. Both are defined as abstract potentials or projections from the present. This means that the past and future are both technically malleable and are only fixed by the fixed point of point time. (present reality)
- The present records and preserves the information state of the universe. We perceive this as reality or now, but it also represents the complete cumulative information state of the past. All analyses of the past are ultimately based on analysis of data existing in the present.
- Malleable FTL Fate. The past and future can in theory be directly manipulated by FTL causalities. Local FTL causalities can change past or future world lines. FTL causalities include causal FTL or Quantum Noise, which means that the system is never in a completely stable state, i.e. the universe is inherently non-deterministic - something already predicted by quantum mechanics.

Terminology :
FTL Fate  : The overall time manifold is here labeled as FTL Fate. In this case the traditional word 'Fate' fits exactly and so is the correct terminology to use.
The past potential and future potential may be defined or described as the 'Past Fate' and 'Future Fate' respectively.
Fate Manipulation. The speculative ability to force FTL Fate towards particular desired outcomes.
Fate ???. The speculative ability to scan or examine FTL Fate to produce predictions of past or future events.
-- -- -- -- -- -- -- -- -- -- -- -- -- -
-- -- -- -- -- -- -- -- -- -- -- -- -- -


FTL Gravity Model. [Finally the time (has come) to rewrite this section !!!]

FTL Force based Gravity Theory : (Obsolete)
In the original FTL Gravity Model gravity was defined as having an FTL coherent speed, with its maximum speed at least approaching FTL simultaneous (infinite) speed. Gravity was treated as a simple FTL extension of Newtonian force, which provided a relatively simple and coherent model. A modified form of General Relativity was accommodated as a quantum scale effect, though it was not treated as the primary cause of gravity. (Instead gravity would bend space time locally on quantum scales, creating the observed dilation effects.) In early 2016 a new, more complete form of the theory was in development in which gravitational fields had negative mass and worked by transferring negative kinetic energy from point to point.
However on 11 February 2016 the LIGO experiment announced the detection of gravitational waves (observed on 14 September 2015), and this firmly set the speed of gravity as the speed of light. This detection completely broke the FTL gravity theory, requiring a completely new answer.

Quantum General Relativity based Gravity Theory
In this model General Relativity is used as the primary gravity model (replacing the previous FTL gravity model). In the FTL model General Relativity has to be modified slightly because there is no general coherent Dimensional Time or Space Time. Instead the FTL Point Time Manifold defines that coherent dimensional time does exist on quantum scales, along with a quantum scale coherent space time. This naturally redefines general relativity in quantum terms, creating a new quantum general relativity. The main change is that the description of space time curvature is replaced with quantum scale space time compression.
Note that the geometry of black holes still requires an FTL speed coherence for gravity approaching FTL simultaneous speeds. This coherency is now carried by the structure of space itself, linking it directly to the FTL absolute frame and the FTL simultaneity.
Note : In its 3D (macro scale) form space seems to be totally incompressible and unbendable; in its 4D space time (quantum scale) form space does seem to be compressible.


Black Holes and Gravity.
In an FTL model with an absolute frame FTL space the classical black hole model works and can be considered correct. - This model defines that black holes have a simple core geometry with an external event horizon, a gravity well, and a central point Singularity of infinite density. In this model the 'Outer' external Event Horizon forms the top of a 3D structure extending inwards from the edge to the center of the black hole. This forms a continuous 'Inner' event horizon volume, where the defining horizon velocity increases towards infinity at the centre.
- The velocity of any given event horizon is VEH = root(2GM/r) (based on the Schwarzschild radius). VEH is defined as Cvac at the Outer Event Horizon, and climbs towards infinity at the Singularity. VEH ≈ VSIM-FTL. (See the numeric sketch after this list.)
- Black holes form their own miniature FTL Simultaneity though it is one sided ending in the singularity.
- The singularity of a black hole forms a time locked stable quark gluon plasma that may be quite similar to the state of the big bang.
- If the speed coherence of gravity is slower than infinitely fast then all black holes should theoretically collapse their own gravity fields, making them become externally massless. Such objects might effectively leave our universe, or become bubble universes, or have new behavior not yet predicted; they might even fall apart and explode. All of these would contradict observation.
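A small numeric sketch of the VEH relation for a solar-mass black hole (standard Schwarzschild arithmetic; only the FTL interpretation belongs to the model):

    import math

    G     = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # solar mass, kg
    CVAC  = 2.99e8     # m/s, shorthand value

    def v_eh(r, m=M_SUN):
        """Event horizon velocity VEH = sqrt(2GM/r), per the rule above."""
        return math.sqrt(2 * G * m / r)

    r_s = 2 * G * M_SUN / CVAC**2  # Schwarzschild radius, where v_eh(r_s) = CVAC
    print(f"outer horizon r_s ≈ {r_s:.0f} m")           # ≈ 2970 m for one solar mass
    print(f"VEH at r_s / 4 ≈ {v_eh(r_s / 4):.2e} m/s")  # 2 x Cvac : 'Fast FTL' inside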

FTL Theory and Black Holes.
- In the FTL model the inverse square law curve of gravity goes on increasing inside the outer event horizon of Black Holes.
- In the FTL model the density of a gravitational singularity is only a local infinity and has a finite limit.
- The quantum dilation rule defines that Point time becomes entirely line like inside a black hole and that all lines terminate in the central singularity.
- Although time may stop inside objects at the outer event horizon of a black hole the objects themselves continue to move inwards.
- In the FTL model wormhole bridge tunnels can only exist through space (not by folding space) and only over short distances because the energy required to support a tunnel increases with distance.
- In the FTL model wormholes cannot directly link different times because a coherent general time dimension simply doesn't exist. (Will not apply to artificially created systems if a local artificial dimensional time frame does exist.)
- The model agrees with the basic precept of Hawking Radiation which fits very well with the tachyon superposition model.
- In the FTL model the 'surface' gravity of a black hole is quite well defined. The surface is defined at r = 0, and the gravity is a local infinite. The 'surface' at the outer event horizon is separated from itself by an FTL barrier, so does not experience normal thermodynamics or ordinary STL physics. The temperature at the outer event horizon and beyond is defined as absolute zero because thermal behaviour cannot function across FTL barriers. (probably)
- Given the extreme nature of black holes it seems likely that detailed observation of them could be used as a useful 'laboratory' to tune and improve the overall FTL model.


-- -- -- -- -- -- -- -- -- -- -- -- -- -


Cosmology - FTL Extrapolation.
Any basic FTL model of cosmology is heavily constrained by observations and work that already exists. Because the STL and FTL universes are one and the same the observations of ordinary Astronomy already tightly constrain and define the FTL Model.

Universe Origin. When we map space by the variable of dimensional time we observe a continuum of expansion that points strongly to a single point origin at the beginning of the universe. (the Big Bang Theory)
The Expansion Curve of Space Time. The expansion defined by Dimensional Time has been observed to follow a non-linear curve, with extremely rapid expansion (Inflation) at the beginning, followed by a long period of far slower expansion leading up to today, and a rate of expansion that is now gradually increasing.
Early Inflation. (Preliminary map) The FTL model has several fairly simple tentative explanations for early Inflation : an extended speed of light, negative gravity, or a relaxation of the point time metric.
Dark Matter. (Preliminary map) The FTL model has a fairly simple tentative explanation for dark matter. (The 'dark matter' discrepancy in Galactic scale Gravity.) Dark matter fits as some kind of FTL tachyonic matter with zero or negative mass. Current research does not yet allow a more specific answer, but there is a large range of potential possibilities. One major question, for instance, is whether dark matter would be Slow FTL (STL coherent) or Fast FTL matter.

The Current Acceleration of the Expansion of Our Universe. There are several possible explanations for this acceleration in expansion within the FTL Model.
- One explanation is that our entire universe is inside the event horizon of an immense black hole and that what we observe as expansion is an effect of increasing acceleration.
- Another explanation is that the gradually increasing amount of mass in the black holes inside our universe is causing the increasing expansion. There are several possible causes. -
- - The black holes in our universe are gradually collecting negative mass antimatter which is creating a negative gravity effect.
- - The black holes in our universe all lead to and contain white holes which are interacting with the FTL region to create an expansion effect.
- A negative gravity effect is being created by some as yet unknown factor.
- The Expansion Acceleration effect is essentially illusory and is being created by some as yet unknown factor.

Universe End? - In this model the possible scenarios for the end of the universe are currently totally unknown. (Insufficient data / not a current field of research.)
-- -- -- -- -- -- -- -- -- -- -- -- -- -


Cosmology : Extended Explorations

'The FTL Anthropic Principle..' (Also See separate section below) - Together the FTL Absolute Frame Model combined with the Big Bang theory define a physical model that creates a finite solution to the 'Anthropic Question'. This forms an FTL Anthropic Principle defining an FTL Causality time loop as the core creation model of the universe. In this model an FTL causality loop is set up that then self-constructs and fine-tunes the laws of physics of our universe through an evolving sequence of iterations. The rule that drives the loop is an order-seeking coherent quantum energy field. This model can start from some elementally simple starting state and then evolve through a series of universes, gradually moving towards states of increasing order and complexity until the resulting universe contains life - which at some level is almost totally ordered.
- The evolving state can be interpreted to fit with almost any of the standard anthropic question solutions. In particular it seems to fit quite well with the solution of a creator 'God'.. (an ordered creator) This 'Anthropic Question God' is nothing like the Gods of religion - more like a mindless force within a quark gluon plasma that is much hotter than the center of our sun.
- One consequence of this FTL causality time loop solution is that we may in some ways literally be our own 'gods'. One prediction of the starting/ending state solution is that a world containing sentient life sends an FTL signal back to the Big Bang, and that it is this signal which marks our universe as containing life. This might also mean that the species that transmits its information back first becomes the template for the whole universe...
- - Re-connecting to the Big Bang could also theoretically be used as a weapon or for other purposes.
- - Another consequence is that our traditional model of a fixed reality is essentially illusory. In FTL physics reality is very strong and absolute but is technically malleable.
- - It seems unlikely that, once the growing universe passes a certain point, there will still be enough energy in any primary state to restart the evolution process. This could either be moments after the big bang, or much later, or the point could even still be in our own future.
- - If the causality loop changes then our reality changes with it, so we would probably be generally unaware of any changes. However a memory based on a quantum or FTL coherency that transitions through such a change could potentially record the differences or at least show disruption.
- - A 'localization' rule may apply as causality events get separated by time or information attenuation. In effect our universe may divide into a vast continuum of sub-universes defined by FTL causality links and by the local speed of light.


FTL Void Space - An infinite region of space that extends beyond the edge of our universe. Extrapolated to exist - extrapolated to be flat and 3 Dimensional and empty. Can be regarded as an extension beyond the edge of the Absolute Frame - one that has even higher coherent speeds than the FTL Simultaneity. Originally defined by C = ∞, where the infinity is defined as very large or a true infinity. Since E = mc^2, any matter in a region with an effectively infinite C would carry unbounded energy, so Void Space is by definition totally empty, and is totally unreachable from our universe. In effect the Big Bang is a void space photon. The term emerged from early obsolete versions of this work (in 2006).
(Note : I may have accidentally acquired the term 'Void Space' subliminally from an episode of Dr Who 172a "Rise of the Cybermen" [13-05-06] which used a similar term to describe the spaces between universes..)

Multiverse??? The question of multiple universes.
- A finite multiverse is possible and does not contradict any part of the model.
- In this model the basic idea of an infinite multiverse is rejected very strongly :- on the basis of thermodynamics (infinite energy), on the basis of infinite self-tautology, on the basis of not solving the anthropic causality paradox, and on the visitor paradox.
- A multiverse based on direct quantum causality breaking is here considered an infinite multiverse and is considered particularly ridiculous.
- An alternative is a quantum infinite multiverse based on the relativistic (GR) model. Here the universe is subdivided by the speed of light into an infinite number of quantum sized regions. These quantum scale sub-universes are connected together by dimensional time to form a single quantum continuum.
-- -- -- -- -- -- -- -- -- -- -- -- -- -
-- -- -- -- -- -- -- -- -- -- -- -- -- -







PR-2 : FTL Physics : Detailed Analysis : Section 3 - Particle Physics and Mechanics.[edit]

[Semi rough Draft.] - [EDIT ~80% Complete.] - [EDIT 27-05-17]


Particle Physics in The FTL Model. - [Only ~ 30% Complete]

Standard Model Particle Physics : A primary starting assumption throughout this model is that at a mechanical level all existing general physics is correct. In addition the standard model provides a great deal of predictive detail which has been proven by experiment. For this reason any theory in this area has to find a way to be compatible with the standard model to a high degree. The FTL model seems to provide a new and more complete model for particle physics, but first this needs to be properly validated and mathematically integrated with the standard model. - I obviously have to leave that to others.

- Core Prediction : STL coherent objects have a superposition of one - and appear particle-like.
- Core Prediction : FTL coherent objects have a superposition of two - and appear particle-like or wave-like.
- Core Prediction : The effective Speed of light at quantum scales approaches zero defined by the constant QC (QC = 0). This creates the effects called quantum physics.
- Prediction : Particles exist that have imaginary mass with a net zero mass, or a complex value mass with a net negative mass.
- Prediction : All STL massed objects are quantum scale gravitational singularities. (Regions where 4D space time exists at total 100% curvature.) A local speed of light QC of zero lowers the Schrodinger limit barrier correspondingly to zero.

- It is noted that quarks only make up about 1% of the mass of protons and neutrons. The other 99% is simply labeled as kinetic energy or gluons or binding energy. This fits quite well with the quantum gravitational singularity model.
- Ek = m0 * Cvac^2 defines or implies KE as the most fundamental form of energy. (Cvac^2 defines a velocity squared.)
- The value for QC, the speed of light in the quantum space, is defined by the Sum Volumetric velocity of light Cvac, which is defined as zero.
- - Note that Cvac is absolute to the FTL frame while QC is relative to Cvac.
- - QC defines all velocity values within a local quantum space as being FTL velocities.

STL Matter Model. - All stable massed objects are gravitational singularities held together by a core at total curvature, accelerated to the speed of light with an internal time-stop.
- To remain stable all such objects must have some kind of locked internal equilibrium. They must be unable to release or lose binding energy, and also be unable to release Hawking or any kind of radiation asymmetrically.
- Newtonian Singularity. There is a mathematical observation from the Newtonian theory of gravity that at a distance of zero (direct contact) all gravitational fields become infinitely strong. This defines a model where all nuclear massed particles can be described in terms of gravitational singularities. (i.e. protons, neutrons, quarks, electrons, etc.)
- Supposition : All energy and mass ultimately resolve to ripples in space time.

-- -- -- -- -- -- -- -- -- -- -- -- -- -
-- -- -- -- -- -- -- -- -- -- -- -- -- -


Primary FTL-STL-Light Velocity Region Model

Within this FTL model the standard traditional map describing the manifold between velocity regions has been under question for some time and has evolved quite heavily. The standard traditional model assumes that STL matter has Positive mass, that EM fields and photons have Zero mass, and that FTL matter tentatively has Imaginary mass. (without describing what imaginary mass actually is) The traditional map makes arbitrary assumptions about the division between the different regions that conceal the flaws in the FTL region of the standard Special & General Relativity map.
With a basic map of imaginary numbers and other surrounding work, new and better maps can be constructed. A tentative final map has now been found. (A short sketch of where the traditional 'imaginary mass' comes from follows the first table below.)

Standard Physics Traditional STL / FTL Region Map. (Hypothetical Imaginary Mass) (Extrapolated from standard relativistic physics and imaginary numbers.)
Spatial Velocity Region | Spatial Manifold [*1] | Particle Name / Type | Mass Restriction | Velocity Restriction
STL Velocity (V < Cvac) | Contiguous | Tardyon | Positive Mass | STL Region and Speeds.
Light Velocity (V = Cvac) | Not Analyzed | Photon / Taxion | Zero Mass | The Speed of Light.
FTL Velocity (V > Cvac) | Not Analyzed | Tachyon | Imaginary Mass (Hypothetical) | FTL Speeds.
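
A short sketch of why the traditional map above is forced into 'imaginary mass' (standard tachyon algebra, given only as background): in E = m0 * Cvac^2 / sqrt(1 - V^2/Cvac^2), for V > Cvac the square root is imaginary, so E can only come out real if m0 is itself imaginary.

import cmath

c = 1.0                            # work in units where Cvac = 1
v = 1.5                            # an assumed FTL speed, 1.5c
gamma = 1 / cmath.sqrt(1 - (v / c)**2)
print(gamma)                       # purely imaginary (about -0.894j)

m0 = 2.0j                          # hypothetical imaginary rest mass
E = m0 * gamma * c**2
print(E)                           # real-valued energy (about 1.789)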


Unified STL - FTL - Light Region Map. Tentative Final Model. - [Q1 - 2017]
Spatial Velocity Region | Spatial Manifold [*1] | Particle Name / Type | Mass Restriction | Velocity Restriction
STL Velocity (V ≤ Cvac) | Contiguous | Tardyon | Net Positive Mass [*2] | STL Speeds or Light.
Light Velocity (V = Cvac) [*4] | FTL-STL | Taxion, Photon [*2] | Imaginary Mass (Net Zero) | The Speed of Light.
FTL Velocity (V ≥ Cvac) | FTL Only | Tachyon | Net Negative Mass [*3] | FTL Speeds or Light.

Note [*1] : The spatial manifold represents spatial causality. Note that for objects that move at the speed of light or faster the local manifold collapses from three dimensions to one (dimension = line). So rays of light at different angles are separated by FTL barriers. At V = C or V > C, the manifold is only continuous for FTL coherent objects.

Note [*2] : Photons are restricted to the speed of light and are strictly zero mass objects. However this description is incomplete because real photons always have a tiny positive mass allowing them to interact with the STL space. This factor resolves a long-standing problem in the map defining the behaviour of Fast Tachyons that travel faster than Cvac, which now must have net zero or net negative mass.

Note [*3] : Theoretically Negative mass generates negative KE resulting in a new 'Newtonian' type of behaviour at FTL speeds. (This may still work with imaginary mass KE models.)

Note [*4] : Technically the speed of light extends from Cvac, the traditional speed of light, all the way to zero at the quantum scale. This moves the minimal speed region for tachyons to zero as well.


-- -- -- -- -- -- -- -- -- -- -- -- -- -
-- -- -- -- -- -- -- -- -- -- -- -- -- -


FTL Matter Model : Four Types of Matter / Antimatter. ( Extrapolation & Preliminary Analysis)
For this FTL model the standard definition of antimatter has come to appear rather incomplete. The standard particle physics term needs to be modified to clarify the differences between matter that has negative mass & energy versus that with a positive or negative charge. (even this may still be somewhat incomplete) This model predicts four basic types of matter. -

Antimatter Classification
.. | Ordinary Matter | Traditional Antimatter | Negative Mass [*1] | Neg Mass & Charge [*1]
Antimatter Classification | Type 0 | Type 1 | Type 2 | Type 3
(Alternative REJECTED Classification [*3]) | (Type 1) | (Type 2) | (Type 3) | (Type 4)
Signs of Mass (m) and Charge (c) [*2] | (m+, c+) | (m+, c-) | (m-, c+) | (m-, c-)
Current / New Terminology | Matter | Charge Antimatter | Mass Antimatter | Mass-Charge Antimatter
Acronyms | (m, M) | (CAM) | (MAM) | (MCAM)
Alternative Terminology | Matter | Anti-Charge Matter | Antimatter | Mass-Charge Antimatter

Note [*1] : The existence of negative mass matter has long been speculated about theoretically but it has so far never been directly detected by experiment. Negative mass plays an important role in this model of FTL physics where it is predicted to exist physically under various rules.
Note [*2] : Electrons and Positrons : The proton has a positive charge and is classified as Matter while the anti-proton with a negative charge is classified as antimatter. However the electron with a negative charge is classified as matter and the positron with positive charge is classified as antimatter. - Is this just a standard formalism or is there something more? On this question research is needed. [RESEARCH NEEDED]
Note [*3] : (Number labels modified to be easier to use [14-04-15] - then modified back again [14-08-15].)

Antimatter Classification
.. | Pos Charge | Neg Charge
Pos Mass | Type 0 | Type 1
Neg Mass | Type 2 | Type 3
Predicted Reactions Between Different Types of Matter / Anti-matter.
Matter Pair | Mutual Charge Behaviour | Mutual Mass Behaviour | Predicted Outcome / Reaction | Forced Reaction
Type 0 & Type 0 | (+ +) Repulsion | (+ +) Attraction | No Reaction | Large Positive Energy Release.
Type 1 & Type 1 | (- -) Repulsion | (+ +) Attraction | No Reaction | Large Positive Energy Release.
Type 2 & Type 2 | (+ +) Repulsion | (- -) Unknown | No Reaction | Large Negative Energy Release.
Type 3 & Type 3 | (- -) Repulsion | (- -) Unknown | No Reaction | Large Negative Energy Release.
Type 0 & Type 1 | (+ -) Attraction | (+ +) Attraction | Large Positive Energy Release. | Large Positive Energy Release.
Type 0 & Type 2 | (+ +) Repulsion | (+ -) Unknown | No Reaction | Obliteration - Zero Energy Release.
Type 0 & Type 3 | (+ -) Attraction | (+ -) Unknown | Obliteration - Zero Energy Release. | Obliteration - Zero Energy Release.
Type 1 & Type 2 | (- +) Attraction | (+ -) Unknown | Obliteration - Zero Energy Release. | Obliteration - Zero Energy Release.
Type 1 & Type 3 | (- -) Repulsion | (+ -) Unknown | No Reaction | Obliteration - Zero Energy Release.
Type 2 & Type 3 | (+ -) Attraction | (- -) Unknown | Large Negative Energy Release. | Large Negative Energy Release.


Behaviour - By Charge.
- Collision between two particles of similar charge ((+ +) or (- -)) is very difficult to achieve because at close range both particles are strongly mutually repelled.
(RESULT = NO REACTION.)
- Collision between two particles of differing charge is very easy to achieve because at close range both particles are strongly mutually attracted.
(RESULT = STRONG REACTION / DEPENDS ON TYPE.)

Behaviour - By Mass.
- Collision between two particles of positive mass results in a large release of Positive Energy. (Explosion)
(+m1) + (+m2) : E = (+m1) * Cvac^2 + (+m2) * Cvac^2 = Large (positive).
- Collision between two particles of negative mass THEORETICALLY results in a large release of Negative Energy. (Negative Explosion?)
(-m1) + (-m2) : E = (-m1) * Cvac^2 + (-m2) * Cvac^2 = Large (negative).
- Collision between two particles of opposing mass THEORETICALLY results in the complete annihilation of both. (Obliteration / Disintegration)
(+m1) + (-m2) : E = (+m1) * Cvac^2 + (-m2) * Cvac^2 ≈ Zero.
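
A minimal Python sketch of the three mass cases above, using the document's own E = m * Cvac^2 bookkeeping with signed masses (the one-kilogram example masses are arbitrary):

C_VAC = 2.998e8   # speed of light, m/s

def reaction_energy(m1, m2):
    # The document's rule: total release E = (m1 + m2) * Cvac^2, signed masses.
    return (m1 + m2) * C_VAC**2

print(reaction_energy(+1.0, +1.0))   # ~ +1.8e17 J : large positive release
print(reaction_energy(-1.0, -1.0))   # ~ -1.8e17 J : large negative release
print(reaction_energy(+1.0, -1.0))   #        0.0  : 'obliteration', zero release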

-- -- -- -- -- -- -- -- -- -- -- -- -- -
-- -- -- -- -- -- -- -- -- -- -- -- -- -









PR-2 : FTL Physics : Detailed Analysis : Section 4 - The Politics of Relativity[edit]

[Semi clean draft.] [Current EDIT Overall 99% Complete 27-05-17]

-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - QUINTESSENCE - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --
DON'T PANIC! : FTL Physics will save humanity not destroy it. Even if the road of progress is a road often paved with a little blood.
-- -- -- -- -- -- -- -- -- -- -- -- -- --- -- - QUINTESSENCE - -- --- -- -- -- -- -- -- -- -- -- -- -- -- --


Part 1 : Where is the Truth? Even the most basic understanding of the FTL universe is enough to immediately discount the common model of Special and General Relativity as the correct solution for FTL physics.. Unfortunately the scientific establishment has never admitted this and allowed a new search for better answers to begin. Instead they have done the opposite. They have treated the whole of relativity as absolute fact and have aggressively attacked anything that has tried to contradict it or even to create the smallest of doubts. Ultimately this has come to the point of ridiculing and attacking and sidelining the very idea of an FTL physics. - This despite the fact that without an ordered FTL physics the universe and by extension reality itself simply do not exist.


Part 2 : Looking Behind The Propagandist's Curtain. It is a simple fact that Relativity is not just a scientific theory; there is heavy propaganda and dogma behind the theory going back for decades. This is not just a government conspiracy. (probably) Like most propaganda it seems to be largely self-created and self-reinforcing, based on self-belief, and is thus very hard to fight. Unfortunately while studying Strong AI and FTL physics I have been forced to become as much an expert on the intricacies of this type of propaganda as I am on almost anything else.
Based on this understanding, the propaganda at the heart of Relativity will have a core of very strong emotion, probably negative emotion. The general nature of propaganda suggests that the primary emotion behind this is probably either fear or panic. Territorial aggression and hate will play a part as smaller second-order elements. - The type of factors commonly triggered when absolute dogma is questioned.

Possible Reasons for The Propaganda Behind Relativity. (Based on Fear or Panic) -

  • Maybe the fear is of physics as a science being exposed to ridicule.
  • Maybe the fear is about personal fears about academic reputation and position.
  • Maybe the fear is connected to the development of nuclear weapons or by extension nuclear war. Both are rather deeply entwined with relativity and E=mc^2..
  • Maybe the fear is about future theoretical weapons many times more powerful than nuclear bombs. Certainly a possibility. Many of the machines envisioned by science fiction would require enormous energy to work. In reality this isn't that connected to the real theory and no more realistic than science fiction's views of surgery (Frankenstein's monster), nano-assemblers (like magic), space travel (as easy as flying), or aliens (people in rubber suits and face-paint)..
  • Maybe the fear is the rather bizarre fear of destroying the whole universe. If you take the FTL physics model described by general relativity then it is quite easy to imagine that even a small simple experiment that penetrates the FTL barrier could accidentally start a chain reaction that destroys the whole universe.. This is what a 'fragile' causality means.
  • Maybe it is the more abstract fear that a coherent FTL physics would be looking into the realm 'beyond'. A realm 'forbidden' to science by religion and 'sacred' to God.
  • Maybe the fear is that seeing into the FTL region could literally 'kill' God by proving that it doesn't exist. - I believed this myself for a number of years, and for the religious person it could certainly be a large fear.. [*1]
  • Maybe the fear is of the opposite and that the physics of the FTL could actually prove that god does exist. (it pretty much does - from a certain perspective) This certainly would not make many atheists happy either. [*1]
  • Maybe the fear comes from metaphysics. The FTL model creates some strange new philosophical problems, including some that have certain military and social and metaphysical implications. It (potentially) proves the materialistic model of existence over the psycho-spiritual, but also (potentially) leaves the door to the supernatural wide open. A paradigm that may take us down certain particular Transhumanist rabbit holes.
  • Maybe the fear is simply the fear of the unknown.

Note : [*1] The God question has already been partially answered by my work. By applying the FTL model to the Anthropic question. The wheel stops at double zero - nobody wins. God exists but it isn't the God anyone was expecting. Play again? (See Section : 'FTL Physics Project : The FTL Anthropic Question'.)


Part 3 : Truth Through Humour / Death by Humour.

  • The basic natural map of space time puts 99.9999...% of the universe and reality into the FTL region of space time. So to put it bluntly any astronomer who says that they believe in relativity is literally saying that they do not believe in the stars. [Pun intended]
  • In general science the universe is regarded as very old. In Relativity the universe is so fragile that even 5000 years old is unfeasibly old.
  • 'Space like time' is only experienced by people travelling very fast in a straight line... it's not like a line could be a dimension or something, is it?
  • 'Time is like a river.' In our minds we constantly sail up and down this 'river' of time and its tributaries in the gentle flow of our internal thoughts. However if we apply the logic of Occam's razor to the basic geometry of space time and the physical reality of a river of time then the idea becomes nonsense. Time is NOT like a river!
  • Time is like the hands on a traditional clock - strictly limited to one time at a time.
  • There is no Past, no Future, the only time that exists is the present and these other things are merely abstractions on reality.
  • The question is : Is Relativity a religion or part of science? Anyone who encounters many physicists on the subject will immediately realize that the answer is at best both. The worst become 'Relativity Salesmen' who will try to sell you their theory, and will not tolerate even the smallest criticism of the product. These true believers are also a bit like Islamic Jihadists - The 'Relativity Jihadists'.

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -









PR-2 : FTL Physics : Detailed Analysis : Section 5 - Order Based FTL Thermodynamics.[edit]

[Sorry in the past this section was a total mess. Still needs further work & improvement.]
[Research still preliminary and under development.] [Current EDIT ~75% Complete. - EDIT 27-05-17]

Order Based FTL Thermodynamics. (Preliminary - not complete)
In looking at order and causality in FTL systems in detail I have come to think that in some ways the standard model of thermodynamics is wrongly formulated. I believe a more precise and complete model can be created by using 'order' rather than 'disorder' as the primary measure. This allows the whole model of thermodynamics to be reduced to the simple primary rule that 'Energy always seeks states of equilibrium'. Secondary rules add in the relationship between temperature, order, energy, information, and entropy, and the slightly more abstract 'object orientation'.
The three (four) standard rules of thermodynamics are still obviously broadly valid, but a key point that is missed by standard thermodynamics is that ordered objects always contain bound energy, and that the same objects less ordered (more disordered) in general contain less bound energy. It is also clear that order is more complex: it is hierarchical, and is ultimately even subjective, relative to human interpretation. - The term 'object orientation' is a catch-all describing this complexity.


Energy and Order based Thermodynamics. Primary Rules. (PRELIMINARY ONLY)

Order = Coherent states defined by binding energy. Order is not a simple straightforward concept..
Entropy = Disorder in a system. Also not a simple and straightforward concept..
Energy = Primary driver of action and causality in physics. A simple concept but not easy to describe.

- Rule 1 : Energy seeks States of Equilibrium. The three (four) standard laws of thermodynamics and entropy can all be derived from or unified with this rule.
- Rule 2 : With enough energy you can do anything. Energy is the limiting factor in all things that are defined as 'impossible'.
- Rule 3 : There is a strict relationship between order and entropy and energy. (See detailed rules, Base Set)
[EDIT POINT]
- Rule 4 : FTL Causality is the bridge between energy and order.. [NEEDS Rewrite / Redesign...]
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Energy and Order : Primary Rules In Detail. (PRELIMINARY ONLY - Under Construction)
[Still under Development - Rewrite & Reorder - some tenses and descriptions may be TANGLED !!]
Rule 1 : Energy seeks States of Equilibrium.

- Rule 1.1 : Self Equilibrium. Energy cannot be created or destroyed against equilibrium. (Includes both STL and FTL contexts.)
- Rule 1.2 : Energy cannot climb energy wells against equilibrium.
- Rule 1.3 : ?Absolute zero temperature = minimum entropy. Minimum entropy = minimum free energy or 'noise' in system.
- Rule 1.4 : Heat Energy is commutable (transitive). If temp A = B & B = C then A = C. The Heat scale is absolute and fixed.
- Rule 1.5 : In systems where there is an imbalance of energy :- work can be done, entropy can be locally lowered, order can be locally increased.

- - Rule 1.11 : Matter and energy can be created or destroyed as long as equilibrium is maintained. Specifically positive and negative values must add up to net zero.
- - Rule 1.21 : Energy can be forced to climb energy wells against equilibrium, as long as equilibrium is maintained at any one point. This is achieved by creating local energy wells using barriers. (Energy Pumps.)
- - Rule 1.51 : Ordered machines powered by an imbalance of energy can act as entropy pumps creating local zones of increased order.
- - Rule 1.52 : Natural non-ordered processes with unbalanced energy can also create increased order. E.g. molten metals solidify, planetary bodies form in stable orbits, biochemistry can become life, etc..
- - Rule 1.53 : Normal thermodynamics predicts that all entropy pumps will always produce a net total increase in total entropy. (See the worked sketch after this rule set.)
- - Rule 1.54 : Ordered quantum systems (quantum manipulators / FTL manipulators) can potentially allow the direct manipulation of causality, and theoretically the laws of physics.
- - Rule 1.55 : Ordered quantum systems (quantum manipulators / FTL manipulators) may be able to violate rule 1.53, but this depends on the cost of creating and maintaining the coherent energy state.
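
Rules 1.51 and 1.53 can be illustrated with the ordinary textbook entropy bookkeeping for a heat pump (standard thermodynamics only, nothing FTL; the temperatures and heat values are arbitrary): the pump lowers entropy locally on the cold side, but the work it consumes means the total entropy change is still non-negative.

T_cold, T_hot = 275.0, 300.0   # reservoir temperatures, K (assumed values)
Q = 1000.0                     # heat pumped out of the cold reservoir, J
W = 200.0                      # work driving the pump, J (above the reversible minimum)

dS_cold = -Q / T_cold          # local entropy DROP: order increased here (rule 1.51)
dS_hot = (Q + W) / T_hot       # heat plus work dumped into the hot reservoir
print(dS_cold, dS_hot)
print(dS_cold + dS_hot)        # > 0 : the net total increase of rule 1.53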


Rule 2 : With enough energy you can do anything.
Energy is the limiting factor in all things that are defined as 'impossible'. (See Sub-section below : 'Thermodynamics : Hierarchy of Impossibility in an FTL Universe.'.) This rule specifically includes energy that is FTL Coherent.
- Rule 2.1 : Energy is the only direct determinant of causality. (Causality = Energy. Matter = Energy. Light = Energy. Everything is made of energy. Even the definition of Nothing is described in terms of Energy.)
- Rule 2.2 : All known forms of Energy require internal order to exist. (Energy = Order.)
- Rule 2.3 : Energy-Information states. Information is physically made of energy, and requires energy to exist. All ordered energy contains information.
- Rule 2.4 : As extrapolated by geometry all energy in the universe exists purely along the axis of the FTL Simultaneity.
- Rule 2.5 : Space. Energy is the defining factor in the structure of empty space. In the dimensionality and 3D structure of space, in the structure of the FTL Simultaneity, and in synchronous point time.
- Rule 2.6 : The Flow of Time. Time is a point but is ever changing. This is called Instantaneous Time or Point Time.
- - Rule 2.61 : The Future is an abstract extrapolation from the present, and is the energy-information state towards which the present moves.
- - Rule 2.62 : The Past is an abstract extrapolation from the present, and is the cumulative energy-information state which leads to the current present.
- - Rule 2.63 : At quantum scales point time relaxes to allow dimensional time. Quantum scale dimensional time (1D) defines quantum scale space time (4D).
- - Rule 2.64 : True space time requires a right angle between the 3-space and dimensional time creating a 4 space. Objects in such a space can move freely between the 4 dimensions.
- - Rule 2.65 : There is no evidence that dimensional time exists coherently above quantum scales. Coherent Dimensional Time exists as a potential on classical scales but remains non-coherent due to energy equilibrium and entropy.
- Rule 2.7 : STL Causality describes the observable universe with a 3D space, single superposition, and flow of instantaneous point time. The observed behaviour mode is 'Cause' then 'Effect'.. The basic hierarchy of the FTL physics model puts the STL causality as being the result of the overall FTL physics geometry.
- - Rule 2.71 : Order of STL Causality. In classical physics we observe a basic physical logic of 'Cause' then 'Effect'. We interpolate that this occurs because of the flow of time as in rule 2.6. Note that in the FTL model effect before cause does not directly violate the model because the past and future do not exist..
- - Rule 2.72 : Fast FTL Causality describes the area outside of observable physics outside of the ordinary STL causality. The exact behaviour of time at Fast FTL speeds is (assumed to be) identical to the behaviour expected at the normal speed of light Cvac. Time is at time-stop and all local physics is suspended.


[EDIT POINT] Rule 3 : (Base Set) There is a strict relationship between order and entropy and energy.
- Rule 3.1 : Order is a form of bound energy. All ordered systems contain bound energy, bound energy holds order together.
- Rule 3.2 : Information is a form of order and thus energy. It is impossible to store information without storing energy at some point.
- Rule 3.3 : Order is 'Object Orientated', this means that local rules can override standard behaviour. Order is complex, hierarchical and is connected to human will. We are part of the physics system. ('Object Orientation', also - 'Machine Theory' or 'Ordered Systems Model'.)
- Rule 3.4 : Order is Hierarchical. The most basic form of order as bound energy are the subatomic nuclear particles, above these are nucleons and atoms, above atoms are chemical bonds and molecules, and then above them stretch everything else.
- Rule 3.5 : All Observation requires order. Observation requires Information interception (sensors) and Information Storage (memory). To extract information requires free energy in the observed system. (Energy = Causality.)
- - Rule 3.55 : Observation creates an FTL Quantum causality bridge between the observed results and the experiment. Free energy can leak through this bridge and this is why the observer always potentially affects or interacts with the experiment. (Energy = Causality.)
- Rule 3.6 : Entropy is the absence of order. Thus the only system that can reach a state of perfect entropy is a perfect vacuum.
- Rule 3.7 : In material systems maximum entropy is achieved by energy stored as heat because heat represents maximum disorder. (Heat is disordered Kinetic Energy (KE). - Heat energy freely crosses the quantum barrier. )
- Rule 3.8 : In material systems with enough heat energy, the system will try to lose energy/entropy by converting heat to electromagnetic radiation. (This forms the heat E-M entropy cycle.)
- Rule 3.9 : FTL Causality is the ultimate form of order. (See detailed rules the Nature of FTL Causality Below.)


[In development / rewriting.. 30% to 70% Complete]
[EDIT POINT] Rule 4 : 'FTL Causality' is the bridge between energy and mass and order. In effect every object in the universe contains energy and an FTL causality.
- Rule 4.1 : All mass and energy are ultimately made of space time... The model leads to the assumption that all massed objects are ultimately massed point singularities governed by the physics of the quantum scale. The physics of the object are self reinforcing including suppressing the speed of light to zero at the singularity itself. This is still a work in the first stages of development.
- - Rule 4.11 : Each singularity is made up of a region of completely curved space time at total dilation. The curvature represents stored energy, and the event horizon and its interior represent a separate stable FTL causality.
- - Rule 4.12 : Taxonomy. [??Needs Work???] At a quantum scale protons and photons (EM waves) and all other nuclear particle objects - can all be defined as regions separated by FTL barriers.
- - Rule 4.13 : Local Absolute Rest Frame. Protons and other nuclear particles achieve a geometry where the total energy state creates a net zero point field... The absolute frame is defined by the speed of light, so this internal geometry represents energy slowed from the speed of light to zero.
- - Rule 4.14 : Photons travel through space time as ripples with a net zero total mass. The Newtonian inertia rule constantly forces the object to accelerate to infinite velocity, and the resonance of space time restricts this infinity to 3E8 m/s. The photon acts as a true tachyon (we have a complete set of mechanics for this) and is surrounded by an event horizon..
- - Rule 4.2 : FTL Causality is the ultimate universal lever in physics.. The whole of physics can be built from first principles using only a spatial geometry, a point time manifold, and FTL Causalities.
- Rule 4.3 : FTL Causality is by definition tied to the speed of light. Because of the complexity of the C parameter this rule leads to labyrinthine complexity and a plethora of definitions. This complexity is what allows FTL causalities to (theoretically) describe everything in physics.
- Rule 4.4 : An FTL Causality is able to explicitly lower the entropy in a closed system. This does not break traditional thermodynamics because there is ALWAYS a net entropy gain outside the system to create the FTL causality.
- Rule 4.5 : An FTL Causality is able to locally create or destroy energy. This is only a local STL rule, what happens in the overall FTL system is unclear (depends on geometry) but expect net gain or loss to always sum to zero. (for example Hawking Radiation.)
- Rule 4.6 : There is a Primary Causality which is an FTL causality. This 'Primary' Causality is in effect a zero point calliper which spans the universe and links reality together at point time by defining the universal m0 'reference frame' and the local absolute speed v = 0. We observe this (locally) as non-absolute because the local speed of light itself is non-absolute, and we are tied to that.
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


FTL Causality. In essence the basic definition is that an FTL causality is any object with a causal superposition greater than one. The critical key is that the internal causality is coherent at speeds faster than the local internal speed of light Cloc. Also note that for FTL objects Cloc is always a minimum and if the object is stable tends to be very close to zero. Cloc is a local speed and context defined and is different in different circumstances. For STL quantum objects the value is probably very close to zero, while for 'true tachyons' Cloc may be equal to or possibly even larger than the standard constant Cvac (2.99E8 m/s).. Note that almost by definition true tachyons are not stable.

The Nature of Infinite Energy. [EDIT POINT] In general it takes a region of Local 'Infinite' Energy to create any 'useful' form of FTL causality. This 'Local Infinity' is created by a scalar window violation - a barrier crossing at the speed of light - an event horizon. In classical systems this barrier crossing tends to create a 'large' infinity limited by the infinitesimal h or the window definition of zero (E ≈ m / (lim h -> 0)). In quantum systems the infinity may be 'small' or finite - or its value may be imaginary and locked to zero. However this is still largely an area for future research.
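
Read as a simple limit, the 'large infinity' in the expression above just grows without bound as the window h closes on zero; a tiny Python illustration (m = 1 is arbitrary):

m = 1.0
for h in (1e-1, 1e-3, 1e-6, 1e-9):
    print(h, m / h)   # the 'local infinity': unbounded growth as h -> 0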


Definition of FTL Causality : Features of FTL Causality In Detail [terminology under development]
An FTL Causality can be defined as any system where internal order crosses a local speed of light boundary.
- An FTL causality is generally defined by two or more points in space and or point time linked by an FTL superposition bridge. (In effect an FTL causality is a local space time, wormhole bridge, superposition tachyon, region of quantum coherence, sub-universe, etc - all are equivalent.)
- An FTL causality is defined as having infinite energy. This is because of total dilation at the internal FTL barrier crossing. The speed of light represents a point of scalar window violation creating a mathematical disjunction and a corresponding finite or non-finite infinity. (see Open Problem Below)
- Any or all of the various rules for C can apply and these create many different types of FTL causality with different geometries and behaviours. An FTL Causality bridge is defined by its local bulk speed of light vector-sphere volume which depends on a local manifold FTL/STL physics context. (Sorry if this is a very obtuse description, it is still a very complex vague definition. This is a major area for future research.)
- The FTL Quantum Environment. In quantum regions the primary speed of light C is clamped and defined at values close to zero and thus most quantum effects like Entanglement and Superposition can be described as/ or are directly equivalent to FTL causalities.
- Ultimate causality. Most Quantum or FTL effects are limited to extremely small scales and time spans. This limiter links back as part of the primary causality of our universe and the so called quantum limit. The ultimate heart of this limiter must be the FTL simultaneity itself.
- An FTL causality leaving the quantum region needs to maintain total coherency, and in effect this requires an absolutely perfect system with no entropy or interfering noise whatsoever. (This is the true defining limit on effects like FTL travel)

-- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -- -- -- -- -- -- -- -- -- -- -
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --


Thermodynamics : The Hierarchy of Impossibility in an FTL Universe.
In the FTL system we can roughly grade Impossibility by overall energy cost. - We relate this to the total energy cost of the Big Bang.

BUDGET : GREATER THAN THE TOTAL ENERGY OF THE BIG BANG.
- Travel to Other Universes. (Assumes that other universes exist & can be detected.)
- Computation of Total Deterministic Model. (Energy required to reduce the entire universe to a completely deterministic model.)
- Simulation Based Universe. (Simulation of the whole universe at the quantum event scale. Identical to Total Deterministic Model.)
BUDGET : EQUAL TO THE TOTAL ENERGY OF THE BIG BANG.
- Reset to the Big Bang. (This requires a time jump to point zero and is automatically the most expensive event in the universe.)
- Instantaneous Point to Point Travel. (This is assuming a speed comparable to the reference speed of the FTL Simultaneity.)
BUDGET : FINITE LOCAL FTL LIMIT.
- Umbrella 1 - Time Travel. (Time Jump) (Requires energy in line with the entire energy budget within the enclosing light cone.) [*1]
- Umbrella 2 - Interstellar FTL Travel. (FTL Jump) (Finite velocity scalar FTL Travel.) [*2]
- Umbrella 3 - Bubble Universes. (Constructed Space Time Geometries, Direct Manipulation of the Laws of Physics, Object Teleportation.)
- Umbrella 4 - Force Fields. (Singularity Manipulation, Gravity Manipulation, Free Air Holograms.)
- Umbrella 5 - Teleportation of (Kinetic) Energy. (Direct Energy Transformation, Indirect Gravity Manipulation, Zero Point Manipulation, 'Psychokinesis' [*3].)
- Umbrella 6 - Quantum Manipulation. (Room Temperature Superconductors, Quantum Computing, Quantum Discriminators, 'Direct Viewing' [*3], 'Precognition' [*3])
BUDGET : WITHIN THE REGION OF CURRENT PHYSICS.
- Umbrella 7 - Manipulating Electromagnetism. (Radio Broadcast, Lasers, X-rays, Radar, Optics, Relativistic Mechanics, etc. 'Telepathy' [*3])
- Umbrella 8 - Manipulating Matter. (Materials science, Newtonian Mechanics, Thermodynamics, etc.)

[Note *1] : Classical direct Time Travel in this model is fundamentally impossible because there is no coherent general time dimension on classical scales. Time is point-like and the past and future don't exist. However with enough FTL coherent energy and a causal link to a new (past or future) time, a spatial region can theoretically be transformed to create or recreate the physics state of that time. This is called a Time Jump. Note that each time jump is a single irreversible one way event.
[Note *2] : Note that potential budgets for interstellar FTL travel vary widely. The absolute worst case involves causally touching the entire enclosing light cone (similar to time travel) - this represents a reduction to something similar to the special Relativity FTL model. The FTL Absolute Frame Simultaneity model generally restricts allowed budgets for FTL travel to finite scalar values or to values that stay close to zero. The basic key to FTL travel is to reduce your FTL inertial 'slipstream' or 'causality wake' to as near to zero as possible, creating what we might call FTL efficiency. - The basic starting point for this is to either have a net mass of zero or to shield your internal mass from the surrounding causality.
[Note *3] : Psychic Phenomena are included for comparison.

-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --







PR-2 : FTL Physics : Detailed Analysis : Section 6 - The Lucien FTL Anthropic Principle[edit]

[Semi Clean Basic Draft / Last Edit [27-05-17] Edit 96% Complete]


Definition 1 : we exist -> from this we can assert that the Earth and the surrounding universe also exist.
Question 1 : This existence asks the question of prior cause, why do we exist?

The Anthropic question arises from the problem that physics shows that the laws of nature need to be very finely tuned so as to allow life to occur, namely us. Without such tuning life is unimaginably improbable, some 10^10^123 to 1 against. [*1] An improbability so unimaginably vast that in comparison even colossal numbers like the total number of atoms in the universe look like zero; 10^79 atoms ≈ 0. [*2]

  • The observer paradox : that an observer must exist to be able to observe reality. - Does reality exist without an observer? This is impossible to test without using observation, and the only method we are left with is projection using some form of logical prediction. The question is inherently unanswerable, and reduces back to the anthropic question and the nature of physical reality.
  • All anthropic questions tend to be subject to problems of self truism. They tend to become tautologies and circular arguments. (Tautology - all elements are automatically true by internal definition. Circular Argument - A leads to B leads to C leads back to A, eg We exist because we exist.)

[Note *1] : (Roger Penrose in 'The Emperor's New Mind' gives an astounding fine tuning improbability of 10^10^123.)
[Note *2] : (Atoms in universe - 10^22 stars * 10^30 Kg * 10^27 atoms per Kg ≈ 10^79 atoms in universe.)
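
The exponent arithmetic in note [*2] can be checked in one line of Python:

# 10^22 stars * 10^30 kg per star * 10^27 atoms per kg
print(10**22 * 10**30 * 10**27 == 10**79)   # True: the exponents add, 22 + 30 + 27 = 79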


Examining The Anthropic World Space
Let us put the problem in terms of the standard answers/propositions and a rebuttal to each -

Realist Arguments
1. That we and the universe exist simply by chance and causality alone, all fine tuning is done purely through 'luck'.
Rebuttal : Astronomically improbable, at around 10^10^123 to 1 against [*1] and vastly more improbable than (say) the theory that the Earth is a disk riding on the back of a giant turtle.
2. That a guiding ordered creator or 'God' designed and created the universe.
Rebuttal : Ugly, but actually a first semi-reasonable answer; a sentient god can fine tune the universe directly without the need for near infinite improbabilities. There are two principal problems with this solution. - Firstly there is the problem that something else must first create God, so we do not actually escape from tautology. Secondly the argument is deeply unsatisfying from a scientific and logical perspective for several reasons, including because God itself is hugely complex as a prior cause, and also because it adds a huge amount of overhead to the burden of proof versus reduction without any proof or testability whatsoever. (i.e. it is a matter of belief not proof.)
Another problem is that God must remain rigidly separate and outside our reality or all causality tends to collapse towards animism where every instant of reality is simply God - everything degenerates to either proposition 4 or 5. (Small finite Gods can exist within/inside ordinary causality without problems though the evidence level for this is pretty low.)
3. That there are many or infinite multiple universes that exist in parallel balancing the improbability of life existing back to unity.
Rebuttal : My primary argument against this is thermodynamics. Infinite realities by definition require infinite energy - where a single big bang only requires finite energy. If infinite energy exists anywhere then it can be shown that it folds the whole of causality into zero through any number of universes unless very carefully balanced - sending us back to proposition 1 or 2. Another problem is that multiple universes don't actually remove the secondary improbability of our own existence since we only live in one universe.
4. Reduction to Animism. The universe and physical reality exists but every quantum event is controlled by God directly.
Rebuttal : This is almost the simulation argument but the 'simulation' is physically real. In this case the anthropic question does not even exist because there are no separate laws of physics. However there is still the problem of propositions 2 and 5, the question of who created God. There is a strong argument against this proposition in the form of the observation of causality and reality from science and physics which seems to show a fixed causality. There is also the problem of the sheer size and immensity of the universe which shows the total insignificance of humanity and the Earth or even the whole Milky Way galaxy.
Anti-Realist Arguments
5. That reality does not exist but is some kind of virtual simulation or fake or 'dream'.
Rebuttal : A strong argument against a simulation argument is very difficult, however because it reduces the value of all arguments to zero it is a pretty pointless proposition. It still doesn't remove the need for a prior cause sending us back to proposition 2 and the question of who or what created the simulation. Maybe the only real answer to this proposition is 'I assert that I reject this argument because I exist' which of course may be completely futile.
6. Causality simply extends from the observation of our own existence. 'We're here because we're here.'
Rebuttal : Essentially a non-religious version of the animism argument, one that puts us humans in the seat as Gods. Still completely avoids the problem of answering the question of our own existence. Either devolves to proposition 5, or to proposition 7, or even to proposition 4 or 2.
7. Put simply, the question doesn't exist if you refuse to ask it.
Rebuttal : A pointless argument that simply avoids the question altogether, however one that's quite popular (from very limited observation) with many atheists and intellectuals given no better answer that they are willing to accept. For humans choosing between logic and instinct is difficult, and for the scientist the programming of their instincts guides them towards atheism. This anthropic question is in some ways the scientific version of a crisis of faith.


-- -- -- -- -- -- -- -- -- -- -- -- -- --


Looking For A New Solution, An FTL Approach to Solving the Anthropic Question.

Background - FTL Theory : Relativity taken to its logical extreme leads to a universe that either does not exist or is very young, and the only way to escape from this is to introduce some kind of Non-Relativistic 'FTL' physics. Without an FTL geometry the question 'can anything go faster than light' is actually meaningless because nothing can even exist outside of our own historical light cone (And we are all that exists). Once an FTL geometry is allowed Special Relativity is already well on the way to being broken. An FTL geometry allows multiple light cones to exist outside each other sensibly, and there is very strong indirect evidence that time is tied together in some way that approaches FTL simultaneity (infinite speeds). Extending physics to FTL geometries is the first step in giving us a stronger and more general model of causality that CAN survive for billions of years and thus can explain the universe being old.
The universe is so large that there simply isn't a coherent light cone - except in the extremely distant past (near the Big Bang).
The only way for the universe to be coherent in the present is through a complete FTL geometry and physics.


-- -- -- -- -- -- --

The 'Lucien' FTL Anthropic Principle
Proposition : That an ordered quantum state can self-assemble its own universe or existence by an act of non-causal observation of its own future state, creating an evolving feedback loop of causality that leads to a universe containing life - or containing sentience.

The whole theory essentially rests on the idea of the universal Turing Machine - can a Turing machine self-evolve? The answer seems to be yes. Any living cell is by definition a Turing Machine - its DNA behaves like a program, the cell runs it to live, and runs a separate program to replicate itself and reproduce. The question becomes, can something as simple as a quantum state self evolve - and the answer to this question is a new and largely unexplored area of physics.
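
As a toy analogy only (a generic hill-climbing sketch in Python, not the quantum mechanism proposed here): a system with nothing but a memory, random variation, and a rule that keeps the more ordered of two states will reliably evolve from pure noise towards near-total order.

import random

random.seed(1)
state = [random.randint(0, 1) for _ in range(32)]   # memory: a random bit string

def order(s):
    return sum(s)                  # crude 'order' measure: the count of 1-bits

for step in range(2000):
    trial = state[:]
    trial[random.randrange(len(trial))] ^= 1        # random variation
    if order(trial) >= order(state):                # guiding principle: keep order
        state = trial

print(order(state), "/", len(state))                # converges to 32 / 32: fully 'ordered'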

  • The primary requirement for life and evolution is memory and there is no doubt that a quantum system can act as a memory.
  • The second requirement is a guiding principle. The system must try to move towards more ordered states and again in principle quantum energy systems can do this.
  • The third requirement is the ability to send information through time, quantum fields are known to be able to do this as long as their energy and transience allows. - The primary field contains the entire energy budget of our universe so there is literally nothing in our causality that can stop it - it effectively is our causality.
  • The fourth requirement is the chicken and egg problem: which part of the loop starts the system? - This is the great unanswered question, and its answer ultimately determines the exact geometry of how the quantum system self-assembles. This is the least known and most open of all questions, but one thing we can say is that its answer is at least finite and limited in complexity - which puts it a vast way ahead of all the standard solutions to the anthropic question..

Comparison of FTL Solution to standard Anthropic Question Solutions [new r/w - 23-04-13]
1 . An FTL causality is compatible with the idea of simple chance, because the initial state of existence is elementally simple.
2 . An FTL causality is compatible with the idea of a complete God. - The initial state could even be described as 'God' directly, though it is a god that self-evolves out of almost nothing. The idea of biological evolution hits a similar problem at the first cell though here the probabilities are less insurmountable.
3 . Multiple universes are completely removed from the process, they are not ruled out but they are not needed. A different interpretation of quantum mechanics would unroll the process described into multiple universes for each cycle of the FTL evolution. A third interpretation based on Special Relativity has there being no single universe but an infinite number of atom-sized universes, each of which exists only for an instant.
4 . Reduction to Animism, in a way the process described is akin to animism, though it is simpler to call it 'physics'.
5 & 6 & 7. Simulation & Denial are unchanged though the need of these things is basically removed.

-- -- -- -- -- -- -- -- -- -- -- -- -- --








PR-2 : FTL Physics : Detailed Analysis : Section 7 - Mathematics.[edit]

[A lot of stuff in later parts of this section is now semi-obsolete! Some Rewriting Needed.]
[Edit - 27-05-17]
MATHEMATICS - The Fundamental Core of FTL Physics

'Anyone who tells you that mathematics doesn't lie doesn't understand higher or even ordinary mathematics.'
'Never wave Occam's Razor at a Mathematician, he will faint and fall over and might hurt himself.'


Behind these jokes is the fact that although advanced mathematics is an enormously powerful tool it is also a tool that can very easily produce wrong or inaccurate results, and worse, can actually conceal the truth. It is an absolutist delusion to think that mathematics exists on a higher plane of total abstraction and perfection separate from reality - that mathematical truth is absolute, perfect, total, and eternal. Real mathematics is an imperfect human construct and so errors are part of its very nature. When analyzed to first principles mathematics is only a form of evolved system, and so total perfection is to some extent ultimately impossible. When analyzed, higher and pure mathematics are incomplete, imperfect, and ultimately not pure.
Traditional Calculus is a very delicate beast that depends on curves and continuous variables and fails when it hits sharp edges or discontinuities or any number of other anomalies. Most higher maths depends one way or another on calculus, but reality contains sharp edges. The mathematics of physics is full of infinities and these are treated as pariah objects that must be hidden or denied or destroyed. The tool at the heart of this is renormalization, effectively a lie to self. Traditional calculus dances at the edge of infinity, but when it touches the beast itself it dies. Perhaps the closest science has come to infinity to date is in some of Riemann's work on the subject. However it has been suggested that infinity drove Riemann insane; it was work he left unfinished at the end of his life.
In the past FTL physics research has failed at the first hurdle again and again precisely because most researchers rely totally on mathematics, and standard higher mathematics does not handle infinity correctly. To understand FTL physics mathematically you need a mathematics that can look infinity in the eye directly. The solution is to rely on more basic methods like direct visualization and logic and repeatedly starting from first principles.. Ironically even the most basic logic from Euclidean geometry turns out to be fatal to special relativity's core argument. Simply treat a vector as defining a line and then treat the line as a dimension. Now anything moving maps time to space and creates its own local time dimension, allowing dilation to work in 3 dimensions instead of 4.. This completely undermines the need for a continuous 4D space time. The FTL model also rejects a continuous 4D space time, replacing it with a continuous 3D space and a non-continuous quantum scale 4D space time..

The faith and total dependence that has built up on higher mathematics has become one of the biggest weaknesses in Special and General Relativity and throughout physics and modern science. Higher mathematics is still an incredibly powerful and useful tool, but we must always remember that there must always be bridges to more basic solid methods.
- -- -- -- -- -- -- -- -- -- -- -- -


The Mathematics of The FTL.
The mathematics of the FTL is very different to pure or higher mathematics, it is not beautiful or elegant and does not pretend to be complete. It is ugly, disjointed, complex, contradictory, and full of errors. At the core of this maths is a direct geometrical permutation based approach which will seem to many higher mathematicians horribly crude and brutally simple and quite 'childish'. - It returns the core of physics to a point where with a bit of work even the able layman can learn most of the basics, and can even know almost as much as the professional. - And this is something that professional physicists haven't had to face for a very long time.

Keeping it Simple by Reduction. At the core of traditional physics and almost all science is a simple fundamental rule that is in direct contradiction with the philosophy of higher mathematics and particularly with modern physics and Relativity and Quantum Mechanics. This rule is called Occam's Razor, or sometimes 'K.I.S.S.' or Keep It Simple Stupid, or in more formal language 'The Reduction Principle' or Reductionism. - Reductionism and Observation are the two basic fundamental rules that are supposed to be at the core of all scientific and mathematical logic.

Observation and Infinity. One of the very first genuine steps on the road to FTL physics was an important observation about a seemingly quite unrelated subject - Fine Art. At the very heart of classical fine art are two rules - observation and reductionism.. When I applied these principles to vision (in Strong AI) I observed that in the real world we encounter infinity everywhere. The logic of human vision and visual analysis actually begins with infinity, in something that we encounter constantly in fine art painting - the vanishing point. If you can understand the subtleties and details of vanishing points then you are well on the way to understanding the maths of FTL physics. This is because vanishing points are both finite and infinite at the same time, and this makes the vanishing point an extensible non-finite variable. Where is the vanishing point? It can be 100 meters away or on the other side of the universe. This is the insight that finites and infinities are not rigidly separate but are often only separated by context.. In physics 'context' is probably the second most important mathematical rule, behind the logical tautology that reality exists.. The only exceptions to the context rule are true infinities like Tangent 90 or division by zero, which can themselves be tamed by another piece of FTL logic - imaginary numbers.
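As a small illustration of the vanishing point as an extensible non-finite variable, here is a minimal Python sketch (my own toy example, not part of the model): under a simple pinhole projection, points at ever greater finite depths land ever closer to a single finite point on the canvas, so the projection of an infinite depth sits at a perfectly finite image location - finite and infinite at the same time, separated only by context.

    # Pinhole projection: a world point at height y and depth z lands at y' = f * y / z.
    # Follow one edge of a straight road (y = 1) off toward the horizon.
    f = 1.0                                  # focal length (arbitrary units)
    for z in (10.0, 100.0, 1e6, 1e12):
        print(f"depth {z:.0e} -> image height {f * 1.0 / z:.2e}")
    # As z -> infinity the image height -> 0 : the vanishing point.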

Empty Space. To understand the FTL universe we need to reduce the world to its most simple basic physical geometry and ask what is one of the ultimate questions in physics :- 'What is the actual basic nature of empty space?' What creates the distances between the stars, or between atoms? A space time reduced by logical reduction to its simplest elements sheds dimensional time and returns to being simply space. - Something close to the original Galilean geometry but with a new correcting factor at the quantum scale to account for relativity. Space time is still part of the solution and a core element, but now space time is divided into two parallel definitions. -
Definition One : 4D space time is non-continuous at classical scales and continuous at quantum scales.
Definition Two : Continuous 4D space time is abstract (non-real) at classical scales.
We can define Special Relativity's projection of time as a general dimension as a 'psychic' perception: a magical logic that is an artifact of the internal geometry of the human brain and mind. Tesla failed because of this kind of 'psychic' logic, and so did Einstein.

Totality. [Future section - edit] A very complex subject. For now look up the section in the Strong AI project, Section 1 Part 4 Deep Glossary - Totality Matrix.
- -- -- -- -- -- -- -- -- -- -- -- -


The Imaginary and the Core Nature of Mathematics.
One of the stranger ideas that emerges from FTL physics is that the model puts a piece of physics right inside mathematics: the idea of numbers containing multiple superpositions. However on reflection it becomes quite clear that our so called 'Pure' mathematics is already full of many jagged little pieces of physics just like this. Trigonometry, Euclidean Geometry, Lines, Circles, Angles, Vectors, etc. This raises the question of the very nature of mathematics itself, which is the true heart of this section. Of course the answer is that just like other modes of logic - mathematics is ultimately a type of evolved system.
Like all evolved systems mathematics depends on ramps or 'ramp functions' to allow it to grow, and generally ramps only allow growth by 'hill climbing'. By itself hill climbing is a blind rule and as it stands this means that mathematics is fundamentally only a sophisticated collection of parts. An even more fundamental limitation is that mathematics evolves as a form of human language. These language elements themselves also evolve, and also define and limit all future growth and growth patterns. Today the language of mathematics has reached a state of quite high entropy, and this is now one of the biggest limitations on the future growth of the whole field.. Unfortunately you cannot have evolution without genetics and you cannot have mathematics without language..
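To make the 'blind rule' concrete, here is a minimal hill climbing sketch in Python (my own toy, with hypothetical names): the climber only ever accepts uphill steps, so it stalls on the first local peak it reaches and never finds the higher peak beyond - the sense in which growth by ramps is blind.

    def hill_climb(f, x, step=0.1, iters=1000):
        # Blind rule: accept a neighbouring point only if it is strictly uphill.
        for _ in range(iters):
            up = max((x - step, x + step), key=f)
            if f(up) <= f(x):
                return x                     # stuck on a local peak
            x = up
        return x

    # Two peaks: a low local one near x = 1 and the true maximum near x = 4.
    f = lambda x: -((x - 1) ** 2) * ((x - 4) ** 2) + 0.1 * x
    print(hill_climb(f, 0.0))                # stalls near x = 1, never sees x = 4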

In mathematics : Language High Entropy = confusing, complex, large pool, symbol obscurity, symbol overloading, symbol blurring, positional incoherency, imperfect self-consistency, not fully reductionist. Note how the language of genetics remains simple and tight throughout evolution, very like the binary logic at the heart of computer design. The same very probably also applies to the logical core of the brain - its language matrix.
Didactically (as it stands today) human mathematics can never be totally logically self-complete or self-consistent unless (occasionally) rebuilt from the ground up. - At least one solution is obvious: we need a completely new approach to mathematical systems, one with the capability to extend beyond blind hill climbing or the ingrained limits of human logic. - This implies a strong need for a new self complete mathematics.. and this of course requires a new and more efficient language to describe it...
Self completeness of this type is strongly connected to sentience, making the whole problem a future project for Strong AI research. More interestingly this can be applied the other way around: building such a language is a critical first step towards a working Strong AI..


Self Complete Mathematics - A mathematical system where all logic chains are complete and all operations always produce a correct result. - A self complete mathematics is any mathematical system that is functionally complete within its own space and is fully logically self consistent. Anyone who is familiar with them will realize that computer CPUs are always designed based on an ideal of self-completeness. Possibly the ultimate form of this is the Turing Machine concept model itself.
At this time no human derived mathematics (outside of computing or basic counting maths) is self complete or anywhere near it. Prime fault points in traditional mathematics are :- division by zero, root negative, tangent 90°, 3D angles, infinite numbers, calculus, bifurcation, differential equations, and so on. We note that there are a series of concentric circles of complexity extending outwards, and with increasing complexity we tend to get further and further away from any type of completeness. Getting to the root of the problem it turns out (as above) that mathematics is an ad-hoc evolved system - a heap of mismatched parts assembled together in an organized whole. Mathematical proof inside any non-complete system is itself a partial lie - because such proof can never be complete, and there may be hidden or unknown places where its truth fails. This produces a very important generalization rule - complexity is inversely proportional to logical safety.

Observation : Complexity is inversely proportional to Logical Safety.
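A small concrete contrast (my own example): exact arithmetic simply faults at a point like division by zero, while IEEE-754 floating point - the arithmetic actually built into CPUs - is engineered toward the self-completeness ideal and returns some defined result (inf, -inf, nan) for every operation.

    import math

    try:
        print(1 / 0)                         # exact arithmetic: a hard fault point
    except ZeroDivisionError as err:
        print("exact maths faults:", err)

    inf = math.inf                           # IEEE-754: closed over its fault points
    print(inf + 1.0, -1.0 * inf, inf - inf)  # inf -inf nan : always *a* result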


Terminal Core. At the heart of the problem is that our numbers themselves are arbitrary ad-hoc systems and not really self complete. A traditional model of number defines them as perfect abstractions - even this is not correct, because all numbers ultimately depend on physical energy. (e.g. 'one' finger, 'two' fingers, one bit, two bits, etc) Numbers also depend on permutation, which is itself a non-abstract physical process. Worse, permutation and numbers and logic are all totally dependent on terminology, which is almost by definition the point at the nadir - the heart of the problem. This returns us back to language, and a repeat of the argument above but focused to the level of individual digits. What is really being defined here is the quantum of mathematics, the units out of which everything else is created.


'Fully Self Complete Mathematics' - A fully self complete mathematics is a self complete mathematics that has a complete logical system for generating and solving proofs. In theory a fully self complete mathematics (run in a dynamic computational loop) can create a form of 'self awareness' that allows it to solve any problem presented to it 'automatically'. This is a kind of machine consciousness.
Obviously this kind of system is my specialty, and was the true starting point in this entire research way back in 1990. Building a sufficiently complex fully self complete system will lead to a sentient self aware machine, but we can also reverse the process to produce the ability to construct new self complete systems out of nothing. This is the very foundation of the whole of human logic, and is the key to almost unimaginable new possibilities. It is a place that opens the way to things like metaphysics and changes the whole basis of the game forming a cultural singularity.
The question now is where we can go next. The answer of course is that we can forge a new mathematics: we rebuild a new, more efficient terminology and a new logical paradigm that includes the knowledge of self-completeness and full self-completeness. This should allow a new higher mathematics to be created that admits its relationship to physics and actual reality. At this point we get closer to a real pure mathematics than we have ever been. A new algebra that makes it easier to control complex and multidimensional objects. A new paradigm of self-completeness that feeds back into Strong AI and ALU design.
Of course all this is still only a fantasy and is probably a work for many years in the future. It will also require a new level of organization between mathematicians, allowing a large group to function coherently as a single team. It will probably end up being a massive scale collaboration - but will need to be focused on the work of a single new genius or a very small super elite group.. Democracy, team, hierarchy, dictatorship. Pretty much the same model that built the Apollo program and the Manhattan project. The same model that all successful companies try to emulate one way or another. (The usual 'collaboration' model that modern academic science uses will probably be one of the worst models for such a program..)
- -- -- -- -- -- -- -- -- -- -- -- -


A Set of Rules for Mathematics, including Extensions for FTL Mathematics - (Preliminary - Under Development.)
Hierarchy of Complexity -
Observation : Numbers are the primary core of mathematics, and all computation operations and algebraic rules ultimately derive from numbers.
Observation : Numbers form an expanding hierarchy of increasing complexity. With increasing complexity comes increasing capability but also increasing vulnerability to intrinsic flaws and hidden errors.
- Number Types by Increasing Complexity : Simple counting numbers -> Composite Numbers -> Fractions & Exponential Numbers -> Coordinate Pairs -> Imaginary Numbers -> Infinites.
- Composite Numbers and all basic maths operations are based on logical permutation functions. Even basic counting numbers require logic - all maths is ultimately a function of logic. (A toy sketch of this reduction follows below.)
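As a toy illustration of that last point (my own sketch, with hypothetical names): counting built from nothing but a successor rule, with addition and multiplication as repeated applications of it.

    zero = ()
    succ = lambda n: (n,)                    # a number is just nested structure

    def add(a, b):                           # addition = repeated succession
        return a if b == zero else succ(add(a, b[0]))

    def mul(a, b):                           # multiplication = repeated addition
        return zero if b == zero else add(mul(a, b[0]), a)

    def to_int(n):                           # read the structure back as an int
        return 0 if n == zero else 1 + to_int(n[0])

    two = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two, three)), to_int(mul(two, three)))   # 5 6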
Imaginary and Complex Numbers -
- The rule of permutation solves the imaginary number paradox :- ( i = root(-1) = +1 x -1 at superposition 2 = (±1)SP2. Σi = Σ(+1,-1)SP2 = 0, i.i = -1 ).
- A minor extension includes complex numbers as non-symmetrical imaginaries (1,-3)SP2, (4,1)SP2 ; and also real numbers (2,0)SP2 or (2)SP1.
- In an SP1 space imaginary numbers appear exactly like reals (1,-1)SP2 = (1)SP1 = 1.
- Note that (1,-1)SP2 ≠ (-1,1)SP2 because the superposition is ordered.
- In the superposition solution Mathematics is irreversibly corrupted by physics. (Extrapolation backwards shows that maths is already irrevocably corrupted by ordinary geometry and many other things.) (A toy sketch of these identities follows below.)
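A toy Python sketch of the identities above (assumption-heavy: the (a,b)SP2 arithmetic is only partially specified here, so this models nothing beyond the stated rules for i itself):

    # i as the ordered superposition of the two factors of -1 : (+1,-1) = (±1)SP2
    i = (+1, -1)

    def collapse_sum(sp2):
        # Sigma rule : the superposed branches of i cancel, so Sigma i = 0.
        return sp2[0] + sp2[1]

    def square(sp2):
        # Squaring pairs one factor from each superposed branch :
        # (+1) x (-1) = -1, recovering i.i = -1.
        return sp2[0] * sp2[1]

    assert collapse_sum(i) == 0              # Sigma(+1,-1)SP2 = 0
    assert square(i) == -1                   # i.i = -1
    assert (1, -1) != (-1, 1)                # the superposition is ordered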
Infinite Numbers -
The division between the finite and the infinite is a point that is both finite and infinite and is different for each individual case.
Infinite or Nonfinite numbers can be described or defined by using a 'Scalar Window' rule. (See the sketch after this list.)
- Any value that exceeds the bounds of a given window will become non-finite in that window.
- All or most infinites also have potential finite values.
- The signs of infinite numbers follow the rules for imaginary numbers, either; positive, negative, both, neither. (4 signs)
- Zero is a special case and a special class of infinite. (1/0 = ∞)
- In most standard (static) mechanical numerical systems the allowed number windows are usually rigidly defined by the number system. (This excludes dynamic non-finite string based number systems such as COBOL.)
- In human mathematics the window is defined by logic, expediency, and ultimately the non-stopping Turing Machine rule.
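A minimal sketch of the Scalar Window rule (the class name and interface are my own, hypothetical): a value beyond the bounds of a given window is seen as non-finite inside that window, while its potential finite value is kept alongside - much as fixed-width machine number systems overflow to infinity.

    import math

    class ScalarWindow:
        # A rigidly bounded number window, as in static machine number systems.
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def view(self, x):
            # Return (value as seen in this window, potential finite value).
            if x > self.hi:
                return (math.inf, x)         # non-finite in this window
            if x < self.lo:
                return (-math.inf, x)
            return (x, x)                    # finite in this window

    w = ScalarWindow(-1000.0, 1000.0)
    print(w.view(42.0))                      # (42.0, 42.0) - finite here
    print(w.view(3.0e8))                     # (inf, 300000000.0) - infinite here, finite in a wider window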


FTL Specific Rules : have been removed because they are already covered in detail. See the Detailed Analysis section of FTL physics.
- -- -- -- -- -- -- -- -- -- -- -- -








PR-2 : FTL Physics : Detailed Analysis : Section 8 - The World-Space of FTL Physics[edit]

[Semi Rough Basic Draft / Edit 75% Complete / Edit [27-05-17] ]
Introduction .. Several Different Views of the world space of this FTL Model ... da da da ... [UNDER DEVELOPMENT]


Criticisms and Rebuttals of the FTL Model
Criticism : The problem with FTL physics is that it is not part of proper science and is not factually proved or accepted by peer review.
Rebuttal : I am not part of the formal science establishment. The qualifications needed to find and reduce FTL physics do not exist in any current (e.g. 2017) university degree - you have to be a Jack of all trades but also a master of quite a few as well. This FTL physics model is not yet at a point where it would be ready for peer review, and will not be easy to publish in small separate papers, which is the format normally required.
Despite the above the core of this model is very strong: it completes and solves various unsolved problems in physics and unifies relativity and quantum mechanics completely into a new and stronger single theory. - The only reason that ordinary physicists haven't already discovered all of this years ago is because they have been told for decades not to look. At least at some point in the past FTL physics seems to have been regarded by some as something military and frightening and very dangerous. - And it does have to be said that it opens a number of slightly frightening Pandora's boxes that may lead to places no one expects. - Almost no one predicted what computers or the internet - or nuclear physics - would evolve into.

Criticism : The FTL solution to the 'anthropic question' and other parts of this FTL model don't really look any different to magical or occult solutions.
Rebuttal : I'm afraid that in some ways this is correct - but this aspect of physics is already a big part of standard physics under the banner of Quantum Mechanics and even Relativity. From the perspective of rationalism, FTL absolute frame models (like this model) actually place considerable restrictions on what is possible and provide a more substantial model of direct causality. - In comparison, in the FTL region Special Relativity pretty much allows anything. It allows the whole universe to fold like a pretzel, and it's hard to imagine any 'magical' or 'occult' theory that could be any sillier or more ridiculous than that.
The really interesting thing is that the quantum 'magic' does not exist everywhere on the human scale. The scale that it does exist on is the scale of atoms, roughly a billion times smaller than the human scale. The quantum universe is almost certainly the basis of our reality but there is a very strong barrier between it and us that stops quantum causality from invading the human scale universe.
This is 'magic' but is not 'magic'. Science's aversion to magic comes largely from its long combative history against superstition, and from the scientific principle itself - repeatable proof and absolute reductionism are required first.
This question of 'magic' is also something that stands on the toes of many religious moralists in a way that almost no other question in science can. Not even evolution. Understand 'magic' - the FTL part of the quantum equation - and you can basically understand 'God'. - That breaks one of the most fundamental religious tenets, the 'mystery' of God.. It's something that could theoretically literally destroy many religions. - The 'God of the Anthropic Question' doesn't look even a tiny bit like any religious God; it is something mappable, quantifiable, explainable, a matter of a balance of proof not faith. A God that should eventually be testable in the lab, and a potential source of future technologies rather than spiritual guidance.

Criticism : Despite appearances this model doesn't actually achieve anything new.
Rebuttal : I'm not sure that I disagree with that; I'm not sure that I've ever achieved anything truly new. I have spent over twenty years working on a Machine Mind and am almost ready to bring it to a working system, but ultimately it is almost all based on work that already existed. Alan Turing and John von Neumann invented the machine mind. The FTL theory itself, or at least its seeds, is at least partly a replication of a body of work that seems to have once existed but is now largely lost.
As for my solution to the FTL Anthropic Question (see section above), well time travel is certainly not a new idea nor even the ability to see through time. However the key is energy - and nothing in the universe has enough energy for large scale time travel - except of course the big bang itself. - Which is exactly where this theory starts. Now that is something mostly new (I think).

Criticism : A final point that is very curious - from the FTL Anthropic Solution. If the Big Bang was created by a self evolving state that moved through time, then in some ways isn't it still going today? If so, wouldn't it still be vulnerable to modification - even modification by us?
Rebuttal : Yes to both questions, or maybe no. You must remember that in the FTL model time only exists as the present - a point. Time as we often tend to think of it simply does not exist. I don't want to reintroduce a superstitious model of the world back to reality, but in the FTL model both the past and the future are technically malleable. - Something actually has to stop the world from shifting, to stop the quantum noise from rising up the wall and destroying the universe. - This remains one of the biggest areas of mystery in the model: the universe is driven by point time, but how exactly is point time itself driven, and where do we go from there?...
It is maybe a scary thought but this physics allows the creation of an engine that generates its own FTL transience to manipulate time and space in a way that can only be described as God-like. I have even labeled this machine a 'Harmony Engine'. - Don't worry though, it's already quite clear that building one would dwarf the complexity of absolutely anything we have ever so far achieved, and from today's perspective (2017) it will probably be next to impossible. (A Harmony Engine requires at minimum a stable quark gluon plasma. - The only way to achieve this at present is with an artificial gravitational singularity.)

-- -- -- -- -- -- -- -- -- -- -- -- -- -
-- -- -- -- -- -- -- -- -- -- -- -- -- -


FTL Physics - Milestones. (A brief look at the basic milestones in this work.)
[Some Dates need further research or are beyond answer in the infinite wasteland that is my notes.]
-- -- -- -- -- -- -- -- -- -- -- -- -- -
1996 ... - Conception of Circular Light wave as the core of generic 'Atoms' or STL particles of matter..
1997 ... - Conception of circular numbers similar to Riemann sphere. Leads to theory of scalar windows to allow analysis of infinite values.
1997 ... - First Conception of positive-negative superpositions as the real number roots of imaginary numbers..
2002 ish - Conception of an Absolute frame as a stable 3D FTL space - based on ideas of sci-fi FTL space but also real astronomy and physics.
2007 ish - Conception / rediscovery of the core idea of FTL Simultaneity, while reading up on the relativity of simultaneity..
2008 ish - Rejection of standard Special Relativity as the final model of FTL physics.
2010 ish - Critical Point. Alignment of General Relativity with FTL model at quantum scales and the point time quantum manifold.
2011 ish - Invention of a basic FTL solution to the Anthropic Problem. First indirect but strong proof that the basic model is correct.
2012 ish - Firm mapping of FTL predicates - photon as tachyon, direct stellar FTL map at geometries where STL and FTL regions merge.
2015 ... - Mapping of Gravity as a negative mass FTL force with very fast coherency approaching FTL Simultaneity.
2016 ... - Spanner thrown into FTL gravity model by the detection of gravitational waves moving at the speed of light.
2016 ... - Gravity model now based on a quantum model of General Relativity. FTL Gravity model disproven by the detection of gravitational waves.
-- -- -- -- -- -- -- -- -- -- -- -- -- -


FTL Physics - Earlier Iterations.. (A brief look at two earlier versions of this work, from 2012 and 2006.)
-- -- -- -- -- -- -- -- -- -- -- -- -- -

Things That This FTL Model Predicts - (from compact description 2012)
(In most ways the theory at this point is basically complete. Still a few small details and work to do..)

Quantum Space
- Dimensional time is restricted to the region within the quantum limit.
- Dimensional time restricts physical space time to the region within the quantum limit.
- All physical reality derives from the region within the quantum limit.
- The metric of space time folds flat at the quantum limit forming an event horizon.
- The speed of light is set locally by the properties of the quantum space time.

FTL Space
- The universe's spatial backbone is an FTL Simultaneity. (reality exists - by inference)
- The FTL Simultaneity forms an unreachable absolute reference frame. Also called ‘Ether’ or ‘Zero Point Backbone’ or ‘Hyperspace’.
- The universe (at all FTL speeds) is three dimensional. (by inference)
- Gravitational wormholes do not obey the Einstein-Rosen bridge rules. (by inference)

Forces
- Gravity is an FTL expression of force creating a mutual exchange of kinetic energy between remote mass points.
- Magnetism is an FTL expression of electrical charge.
- Photon wave physics is FTL physics.
- Gravity forms a bridge between FTL and STL physics.
- FTL Coherent objects do not need to obey the laws of thermodynamics. (locally - energy can be created or destroyed, entropy is reversible)
- A causal STL wavefront forces FTL Coherency to obey thermodynamics. (This forms the quantum limit and causes FTL coherency to collapse.)

-- -- -- -- -- -- -- -- -- -- -- -- -- -
-- -- -- -- -- -- -- -- -- -- -- -- -- -


R-Shell Theory - [Written : 09-06-05] [Warning : Complex Obsolete Terminology.]
(WARNING : Some of the material in this section is seriously obsolete and its terminology has changed considerably. However many key points of FTL theory are basically as complete here as in the latest models. 'R-Shell Theory' represents a state hundreds of iterations of development in the past, near the beginning of serious research. Minor editing for English & Math.)

Statement 1
All you have to do to break General Relativity is to apply the theory to itself. The theory forbids travel faster than light, but if you apply this to the theory itself it creates a new theory, a new version of Relativity with a mathematical limit. Either Relativity obeys the limit, in which case it simply ignores anything faster than light, or it breaks the limit, in which case Relativity goes faster than light and violates itself. Either way General Relativity's rule (nothing can go faster than light) is broken.

Statement 2
One way of mapping the blank area created by the limit on Relativity above is to create or imagine a series of shells with higher and higher values of c. (c = speed of light) Ultimately you could see a shell where c is genuinely infinite; energy that spanned (travelled at) this c would cross the whole universe in an instant. This shell of ultimate c, when applied to the equations of Relativity, creates a universe with zero dilation; in other words it obeys Newton's laws exactly.
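A worked numeric illustration of Statement 2 (my own framing, not from the original notes): hold a velocity fixed and raise the shell's light speed C, and the dilation factor falls toward 1 - in the limit of infinite C the shell obeys Newton exactly.

    import math

    v = 2.0e8                                # a fixed velocity, m/s
    for C in (3.0e8, 3.0e9, 3.0e10, 3.0e12):
        gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
        print(f"C = {C:.0e}   dilation factor = {gamma:.12f}")
    # gamma -> 1 as C -> infinity : zero dilation, Newton's laws recovered.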

This is Robert Lucien’s theory and is the result of over ten years of work. (Although the work is largely an incidental result of other work, and its original focus was in creating a mathematical language for Strong AI.)


New Addition / Addendum (under construction)
- If we take our ultimate c from statement 2 and apply it to the relation E = m x c^2, we get E = m x infinity^2, so that matter that touches infinite c must contain infinite energy as well. There is a consequence to this - matter cannot exist in ultimate space at all, as a single particle would take infinite energy to create - m = E / (infinity^2).
- This is a profound statement. Extending the statement - all infinite velocities take an infinite energy to reach making infinite velocity impossible. Even the weakest entanglement also takes infinite energy there so no forces can exist there either. We now have a very basic map of our ultimate space which shows it as a simple clean and empty place.
- An interesting implication is that gravity does not work at infinite speeds either. This puts a finite cap on the ‘infinite’ gravitational energies associated with large singularities, and further implies that wormholes do not exist either or are unstable (because ultimate space cannot be folded).
- All possible velocities can be made positive from zero relative to ultimate c. This means that ultimate space could be Einstein’s 'ether', and would even create an impenetrable (infinite energy) barrier to time-travel.
- One should remember that our whole universe only contains a finite amount of energy, in fact our whole universe (including ultimate space itself) would have less energy than a single hypothetical impossible ‘atom’ of ultimate energy.
- An obvious point - the universe itself may have started as an ultimate atom, but how was it created? There is one way of creating ‘infinite’ energy and that is by compressing time to zero - either finite to zero, or infinite to finite.
- The best thing though is that Newton on the scale of the universe creates a very very strong space, so nothing on the small (like us or galaxies) can ever destroy the universe.

- -- --- --- --- --- --- --- --- --- --- --- --- - - --- --- --- --- ---
These notes take a dip into the many variants I have worked on.

Angle 1 - Extend c. Photonic theory, or 'infinite light' theory. (3x10^8 m/s = infinite speed) This was the first mode I discovered (in 1997): a mathematical model I was working on suggested that many finite numbers could also represent infinite values. I guessed this might include c and other constants.

Angle 2 - Standard 'Imaginary' Mass. (The relativistic mass is m = m0 / root(1 - v^2/c^2); for v > c the term under the root goes negative, so the mass becomes imaginary, m = root(-n) for some n >= 0.) My study of this model eventually suggested to me that if it is true then matter becomes a superposition and is destroyed if it accelerates beyond the speed of light. My new limit does away with imaginary mass completely.
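A quick numeric check of this angle (my own example): push the standard mass relation past c and the factor under the root goes negative, so the mass comes out imaginary - exactly the 'imaginary mass' this angle describes and the new limit rejects.

    import cmath

    m0, c = 1.0, 3.0e8                       # rest mass and speed of light
    for v in (0.5 * c, 2.0 * c):
        m = m0 / cmath.sqrt(1 - (v / c) ** 2)    # m = m0 / root(1 - v^2/c^2)
        print(f"v = {v / c:.1f}c   m = {m}")
    # v = 0.5c gives a real mass; v = 2.0c gives a purely imaginary one.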

Angle 3 - Allow time 'transience'. This allows Relativity to survive 'intact'. It also allows time travel; quite an interesting facet of this is that it allows Relativity and Quantum mathematics to coexist. An unpleasant side effect is that it undoes the stability of 'Reality'. I believe this theory has in the past been called the 'Field Theory of Matter', or 'Common (Atom) Theory'. [some details here are particularly 'muddled' but accidentally close to final model.]

Angle 4 - Break space time rule. There is no universal space time making ALL values of c strictly local.
This is essentially what GR itself says; unfortunately it also shows that Einstein simply didn't like the idea of going faster than light.

Angle 5 - Relativity is strictly true. As we look on larger and larger scales light moves more and more slowly, eventually we reach an upper quantum where c is effectively zero. On the scale of the universe time does not exist and nothing ever moves or happens, and everything becomes totally static. Astronomical observation suggests this is true - the sheer ugliness of this model suggests it is not true. To me it promotes ideas like ‘the universe doesn’t exist’ or even worse the idea that the universe is ‘young’.
A nice possibility is that the universe is only an instant old - and time doesn't 'exist'; less nice is the idea that it might be around 5000 years old. Worst of all though is to look at the future: if GR is strictly true then the universe on the big becomes an impossibly fragile thing, and a single particle could destroy it (at any time).

Angle 6 - For Large objects the base frame stays at zero. A large object's speed relative to itself is zero, so in effect it is creating its own space-time. In consequence this means that the object can travel at almost any speed without going faster than light, sort of. Everything depends on how the two spaces see each other. If one allows a slipping interface between the two (spaces) - a 'slip-space' or 'hyper-space' - then this allows the Relativistic and 'FTL' spaces to coexist. If the interface cannot slip then relativistic forces will instantly rip any conceivable object apart. A simple version of this idea was published as early as the 1920s. [by E. E. 'Doc' Smith]
In many sci-fi series these slip/hyper/warp spaces are created by incredible machines; what I am proposing is that they occur naturally for any fast object. (Requires) EXPERIMENT !!!

Angle 7 - Some of my work has suggested to me how everything fits together. If we take Quantum Mechanics and we take Relativity they appear to be fairly incompatible; however there is a way.
If we assume Relativity occurs only at certain points and that it is time invariant, and then we look at quantum, we find a place where it can be time invariant too - beneath the quantum limit. What I am suggesting is that we are seeing Relativity directly when we see quantum superposition. Relativity doesn't need to exist above the limit to work because the whole universe exists beneath the limit on the atomic scale. This allows time transience but ties it to small safe energies.

Angle 8 - (7 continued) A big part of my background in this work is my interest in gravity. I am an absolute believer in the relativistic version of gravity: it's the simplest theory and it's the one that makes the most sense. When we look at the wave particle duality for gravity it is very obvious that gravity itself is a wave, and the graviton or particle for gravity then must be an atom (particle of matter). The argument against this theory is that there is no sensible frequency for gravity; my guess is that gravity has a c of zero, or rather that its c is normal but has a 100% time dilation. In other words an atom is a tiny singularity.

END OF OBSOLETE SEQUENCE - [From : 09-06-05]


-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --
-- - -- --- - - -- -- --- -- --- -- -- - - --- -- - --







Worldview of Scientific Theory, A Basic Comparison of Proof.[edit]

[Interim Description] [Current Edit - 31-05-17]


A general comparison of Scientific Theories based on the classification by scientific rigor and strength, and logical integrity..

Highest Proof. - - -
- - Very Strong Proof. Directly Testable in Experiment. - -

- The STL portion of Special and General Relativity.
- Evolution. (the science of DNA proves evolution beyond all doubt, directly testable in the lab)
- Newtonian Physics.
- Periodic Table Chemistry.

- - Strong Proof. Scientifically Proven. Statistically or indirectly Tested. Statistically or logically Extrapolated. - -

- Thermodynamics.
- Big Bang Theory and Cosmology.
- The Placebo effect.

- - Weak or Zero Proof. New Hypotheses. Neutral, Not currently Scientifically Provable, Pseudoscience.. - -

- Psychology. (Part solid science and part pseudoscience. ) [*1]
- History. (Not unscientific but by definition generally inherently unprovable and untestable to scientific levels of proof.)
- Current Neurological Theories of Consciousness. (Fail on completeness, and on basic rules of logic, computability and generic logic.)
- Astrology. (classic Pseudoscience. 50% provable science (planets and motions), 50% arbitrary logic - not logically reducible.) [*2]
- Intelligent Design. (The theory that Evolution is steered by God.)

- - Scientifically Disproven. Strongly Disproven by standing logic. Directly contradicts Reductionism or Observation.. - -

- Four Element Chemistry. (Obsolete pre periodic table theory of chemical behavior.)
- Geo-centrism. (The Sun and planets orbit around the Earth.)
- Biblical Creationism. (Young universe, created in 7 days.)

- - Least Possible. Contradicts basic Observation or Logic very strongly. - -

- Hollow Earth Theory / Flat Earth Theory. (In simplest form a disc on the back of a giant turtle.)
- Non-Tuned Single Causality - Anthropic Question. (Existence by chance alone, unimaginably improbable at some 1 : 10^10^123 against!) [*3]
- Holographic Principle. (The universe is projected from a 2D hologram. Does not understand holograms or basic geometry.) [*3]
- The FTL portion of standard Special Relativity. [*3]
- Infinite Multiverse Theory. (The worst theory that's conceptually possible. ) [*3,*4]

Lowest Proof. - - -


[Note *1] : Psychology (especially connected to humanism & pop psychology) is classified here as half pseudoscience because large parts of it are not fully based on the scientific method and have not been purged by reductionism. Instead much psychology is based on arbitrary logic like - 'ethics', 'philosophy', or 'humanism' - and these do not provide a solid base for scientific theory. As a prime example we find that much psychology theory and advice simply does not take account of the basic fact that we humans are mammals and evolved animals. (human psychology should ultimately be based on the animal in a natural environment)

[Note *2] : Astrology is at least partly based on non-scientific or pre-scientific non-rational theological logic. However Astrology is primarily a psychological theory, so the placebo effect should apply and we should expect at least a (neutral) 30% correlation or higher. (The human subconscious exposed to astrology will instinctively try to follow it..) In the past scientific sceptics have tested astrology a number of times and found zero correlation; this is statistically nearly impossible and either implies a strong statistical proof of an effect, or experimental error, or experimental fraud.

[Note *3] : Note that physics theories can achieve a level of improbability massively greater than almost any theory in traditional pseudoscience. This is largely due to the precision of modern physics and its completeness. The area of FTL physics might still be largely incomplete but it is one of the very last unexplored areas.

[Note *4] : Infinite Multiverse Theory. In essence an Infinite Multiverse is the worst theory that's conceptually possible.
Arguments against an infinite multiverse :-
- Size Argument : To have an even probability of having a single untuned universe viable to life requires an immensely large multiverse containing some 10^10^123 universes... This number is so vast that it forces all wild tautologies to be correct, and so forces all other arguments. (except the visitor argument)
- Energy Argument : Infinite causality = Sum infinite energy = Sum infinite mass = Universal gravitational collapse.
- Tautology Argument : By self-tautology every single word in every book ever written is literally true in at least one universe.
- Causality Argument : Does not reduce the improbability of our own existence vis-à-vis the Anthropic Question, because we only exist in one universe.
- Visitor Argument : By self-tautology an infinite number of universes should supply an infinite, or at least a substantial, number of visitors to our own (and to the Earth); there is no recorded proof of this happening. Note that because of the immensely vast number of uninhabited universes, visiting only becomes possible if visitors can tune to only visit other inhabited universes.

'Imagine' : In an infinite multiverse in at least one universe Barney the purple dinosaur is literally real, so are the Tele Tubbies, and so is every word in the bible. In other universes Warhammer 40,000 is literally real and ruled over by strange shadowy Gods called 'players', and in others Geocentric suns and planets orbit crazily around central Earths. In others Lord Voldemort is seeking vengeance against Harry Potter... Maybe in some bizarre 2D universe Mario the plumber is jumping on pipes..

-- -- -- -- -- -- -- -- -- -- -- -- -- --








PR-3 : A Scientific Analysis of The Psychic - Section 1 (Big Project)[edit]

[Current Status - UNDER DEVELOPMENT! - First Draft - SEMI-ROUGH.] [Current Edit - 75% Complete - 15-06-17]
RESEARCH PROJECT - LONG TERM


Arthur C. Clarke - Clarke's Third Law : "Any sufficiently advanced technology is indistinguishable from magic."
Corollary [Lucien] : "Magic is simply physics we don't understand yet."
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


Introduction.
This document is an attempt at a 'first principles' rational analysis of the 'psychic'. Most modern people tend to approach the psychic with a presupposition that the subject is not real by definition. A smaller number take the opposite view that the psychic is real - either by faith or by world view or through some personal subjective experience. Neither approach can support a systematic unbiased analysis capable of delivering scientific proof either positive or negative.
A telling fact about the psychic is a lack of any definitive proof either way. By the standard rules of reductionism this basically rules out the existence of the psychic, but there is a critical caveat. The logic and the argument against the psychic are incomplete because critically no one has so far been able to reduce the whole thing to first principles. This allows almost any level of proof either way to hide in the gaps. In other words the only proper scientific position is that there is insufficient evidence either way - and that while the existence of the psychic seems very unlikely it is neither proven nor disproven.
Observation : There is insufficient evidence to either prove or disprove the existence of the psychic.
Observation : Reductionism by Omission basically rules out the existence of the psychic but the argument is incomplete.


A New Starting Point : Strong AI & FTL Physics.
Over the years my work has focused on two exceptional fringe scientific fields - Strong AI and FTL Physics. Both fields create paths that lead towards a scientific approach that can finally reduce the psychic to some form of first principles. From there we can create a set of experimental tests that can ultimately either prove or disprove the psychic categorically.

Strong AI. Strong AI looks at the mind and brain in sufficient detail that it allows extrapolations that can logically differentiate the existence or non-existence of the psychic by parts and can begin to map out its exact nature in some detail. From this approach the extrapolated prediction is that the brain/mind behaves as if at least small parts of the psychic are real. From that we can predict that there is very probably at least one physical mechanism. This map is complicated however by the very complex abstract structure of the systems in question (the core architecture of the human memory + the mind + the imagination) and from the current lack of conclusive data proving that any particular mind model is correct.
Observation : The Prediction of Strong AI is that small parts of the psychic do effectively exist - precognition, & quantum discriminators.

FTL Physics. FTL physics allows the completion of quantum mechanics in a form that maps directly to causality, and from that seems to allow a basic explanation for a set of effects that fit known descriptions and do not violate scientific limits. The FTL quantum manifold further defines a model compatible with fate 'manipulation' and effects like limited precognition. The two together describe a single broad mechanism based on molecular scale quantum coherency and ionic chemistry.
Background : The FTL predicate is introduced because a local quantum value sets a broad limit for the speed of light at quantum scales of zero (CQ = 0). This forces the whole quantum system to behave as a mixed FTL-STL system. The FTL quantum manifold is defined by a local discontinuous quantum scale 4D space time. This, combined with the zero quantum speed of light CQ, means that quantum information states can access FTL transient information such as potential past and future event thresholds..
Observation : The map created by the FTL Quantum Mechanics model predicts a physics compatible with - precognition, quantum discriminators, malleable FTL Fate, and certain other described 'phenomena'.

Initial Conclusions : Put Strong AI and FTL Quantum Physics together and you have the basis for building a complete science of the psychic that can ultimately either prove or disprove the subject definitively. From this perhaps we can move the psychic from the realm of speculation and delusion to a form of slightly bizarre hard science.

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


Analytical Trajectory. I have to state my own bias of course: from my own research and personal experience I am virtually 100% convinced that the psychic at some abstract level is real.. I have experienced and observed mild psychic effects personally. From this personal observation the real reason why the psychic has so long escaped scientific classification is obvious. The real effects are so small and everyday that they are generally quite hard to notice and can be extremely hard to separate out from the rest of reality.

Abstract Discriminators and Observation. The only way to really notice and observe such effects is to create a sufficiently sophisticated discriminator. This discriminator is simply a method of observation and knowledge and self awareness and can be learned by anybody of reasonable intelligence. Once you have the right discriminator the effects not only begin to become obvious but it becomes clear that virtually everyone experiences at least some of them most of the time. By extension the same effects seem to form a central part of human cognition and psychology.
So far all effects observed with any confidence are purely information based and emerge as side effects of everyday human cognition. All such effects can thus only be indirectly extracted from living human brains. No directly observable gross physical effects have so far been unambiguously observed.
A good example of a cognition based effect is our ability to 'project' and to 'sense' other people's energy and emotions, which can be described very effectively in terms of 'auras' or FTL fields. As with most such abstractions, building unambiguous proof for either positive or negative positions is very difficult.
The only partial exception is fate manipulation/precognition where significant results can theoretically be extracted using statistics. Even for this though we do not yet have the technology to observe any results directly.

The set of psychic effects that might be considered to have some (at least minimal) level of reality are limited to - precognition, auras, far viewing (scrying), fate manipulation (e.g. spell casting, prayer), and in a very limited way telepathy. The 'soul' itself is still currently an open question. An FTL coherent time based mechanism could produce many of these effects, FTL quantum fields could produce the rest. The FTL mechanism also introduces the idea of limited FTL transience, which produces a general map that fits pretty exactly with the extremely limited nature of the observed effects.
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


The Bizarre Mechanics and Logic of The Psychic Decoded.
A basic fact is that the psychic seems to be governed by an FTL causality. An FTL causality can explain all of the bizarre logic encountered in the psychic and its lack of any real seeming purpose. A basic observation of the psychic (through the imagination) shows that the output of the system can generally only confirm or modify information already existing within its own memory.
This is a predicted artifact of FTL causality. - Information cannot be directly transmitted faster than light (without an isolated bucket) but requires some form of existing intermediary on both sides. In the context of the psychic we can assume that the model is 'Slow FTL' and the relevant speed of light is CQ or quantum zero (CQ = 0). In this context the speed of light in vacuum forms a hard barrier, so our FTL physics all occurs in the STL space as part of ordinary quantum scale STL physics. (I.E. Cvac > Clocal ≥ 0.)

LOGIC LEVEL. In terms of logic the observations produced by the psychic are limited by local FTL context. This means that the system's logic is normally driven purely by its own existing memory and internal logical context. This sets a hard limit on psychic predictions that makes them mostly either useless or inaccurate, with genuinely useful predictions being very rare. Effects like precognition are in effect parasitic effects emerging from a system whose real purpose is entirely about driving the internal logic within the brain.
From this one rather accurate way of describing the psychic is as the 'master of lies'. Another phrase is 'The Base is Free' referring to the self-defining nature of the internal 'Base Logic' within the human mind.

Within the mind a central algorithmic core called the Totality Matrix provides a universal context. - From the totality matrix system we can extract not only a complete logical mechanism for the mind but for the psychic as well. This can be tested definitively by using the model to reverse engineer the entire brain algorithmically down to its genetic base then analyzing the resulting map. Even without a guiding psychic model genetics is already at a point now where this may happen by itself within about 20 to 40 years.

Neural Crossbar Testing Model.
In the Strong AI Human Model the universal context of an individual totality matrix and the resulting memory architecture forms another mechanism for testing. Humans can be extrapolated to have a memory architecture that requires a fixed crossbar logic throughout the brain. (divided into vast numbers of identical repeated units) For modern humans this creates a large effective information deficit. Initial analysis has resolved eight ways of bridging this gap.

1. Causal Signal. Direct Information transfer from mother to baby using some ordinary signaling mechanism.
2. DNA signal. A direct genetic mechanism through the mother's + father's DNA to the baby's DNA.
3. Adaptive Crossbar. The crossbar in the brains logic is an active adaptive system.
4. Fixed Indirection System. An alternative to the adaptive crossbar is a fixed core with an external indirection system.
5. Precognitive Tuning. A fixed FTL causality loop with the baby's brain's future structure.
6. Telepathy. Some form of direct telepathy with the mother or father.
7. Divine Interception. A complete fixed fulcrum in the totality matrix driven by some external source.
8. No Crossbar. There is no crossbar at all and human cognition exists without an overall memory architecture or generic logic model.


An unpalatable series of choices.
No 1 (Causal Signal) This is ruled out by the complexity of the required signal and because no such signal has ever been identified.
No 2 (DNA signal) This is ruled out by the complexity of the required signal and the speed of change required.
No 3 (Adaptive Crossbar) This provides a coherent mechanism but requires a high level of global coherency across the brain that seems to be almost unfeasible. It is a good candidate as a partial mechanism but not a complete one. The brain's variable ability to 'repair' itself suggests that such a mechanism exists more strongly in some people than others..
No 4 (Fixed Indirection System) . An alternative to the adaptive crossbar is a fixed core with an external indirection system.
No 5 (Precognitive Tuning) In theory this provides a basic mechanism though it will tend to run into severe problems with FTL transience and causality noise. (FTL transience gap increases exponentially with distance in time.) The time gap required to guide an effective crossbar mechanism probably rules precognition out as the single primary mechanism.
No 6 (Telepathy) This provides a potential mechanism and allows sufficient complexity, but may still encounter the same time latency problem as 1 and 2.
No 7 (Divine Interception) A mechanism based on some kind of external 'supernatural' agency. This agency would have to be both omnipotent and amoral because the overall programming is inherently unfair and divides very unequally between different people. This mechanism is probably of extremely low probability and is really only here for completeness.
No 8 (No Crossbar) A model with no crossbar at all is very hard to rule out completely but without some kind of crossbar the whole neural model of cognition begins to look virtually unfeasible. Without a physical crossbar there is no base for building a memory architecture or abstract logic architecture - leaving the brain to depend totally on local non-abstract logic. This directly contradicts our observed intelligence and abilities at symbolic language and general reasoning and logic. In this solution a completely psychic spirit based model of cognition probably completely replaces the neural model except at the most basic level - this also contradicts observation..


Conclusion. No solution is currently definitive and none seems to provide a complete answer by itself. Some complex mixture may be the answer, involving some combination of up to all six of the major solutions (Nos 1 to 6). The even more extreme solutions No 7 and No 8 can't actually yet be completely ruled out.


General Conclusions. (Early interim) From our brief look at the analysis we can conclude that the psychic is an after-effect, or rather an anachronism, of the evolution of the human brain. From this we can predict that we shouldn't expect direct usefulness or logic or common sense from the psychic. Saying this, the psychic does seem to provide routes to many of the missing pieces required to explain consciousness and overall human logic and intelligence.
From a technological perspective the psychic is perhaps one of the simpler and more direct routes to creating an ASI or machine super intelligence. Unfortunately this also presents a very large problem for Strong AI machines that don't use the psychic in some form or other, because it implies strongly that they will find making good real time decisions very difficult, especially those involving human emotions. The brain cheats, it's as simple as that. The cheat has gained the name 'psychic' and hidden itself in plain sight..
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


Extended Analysis : The Oddest Conspiracy. - [03-03-17]

I have been working on decoding the logic of the psychic for years but have now finally decoded a critical piece. This has finally given me a real explanation for the apparent failure of science over the last 100 years in trying to map, test, or categorize the psychic. This truth uncovers what has to be the oddest conspiracy in human history, a conspiracy with no participants. It is the psychic itself that fights against being mapped and is behind this conspiracy. This forms an ongoing subliminal attack against any attempt by science or scientists or anyone else to map or understand the psychic.
This attack is driven by self-recursive brainwashing - and like most brainwashing this is mostly or wholly self-inflicted. Everyone contaminated becomes part of the attack and is controlled and coerced by their own internal logic against the existence of the psychic. The psychic system itself is totally mindless and is driven by a kind of clockwork logic. Not only do the enemies of science take part in this attack but many otherwise completely rational scientists do as well.

An Explanation. The primary purpose of the psychic system is to steer and control cognition. - In some ways the psychic seems to be literally the core of consciousness itself though most of the system usually sits subliminally hidden deep beneath our normal level of awareness.
One of the primary evolutionary drivers behind the psychic system is primary threat and target perception (as hunters/prey). The problem is that the system detects knowledge about itself as a threat, in fact an ultimate terminal threat. It must be understood that this is purely an artifact within the system's own internal logic. The system works by 'edge' detection depending on FTL information transience. It has transience triggers called 'Permanent Super Loops' which work by detecting deviations in their own past and future states. The actual range of this though is usually limited to a matter of less than a second, or a few seconds at most. The problem comes because exposure to its own causality shorts out the system's transience logic, which in turn triggers all the body's strongest alarms - deathly fear, paranoia, panic, and the fight or flight response.

Background-Additional. The psychic as a system is not really self aware and not really intelligent, but it has access to the tools of human cognition. This allows the psychic to 'impersonate' or 'project' a form of cognition. This forms part of the mechanism of ordinary consciousness, but also plays a central role in many types of mental illness. The above conspiracy scenario is another byproduct of the same (mindless) logic.
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


Details of Glisten and PSL Mechanism [Early Analysis based on evolving model.]

Slow FTL Transience. The psychic seems to work by projecting 'Field Glistens', which are low energy 'Slow FTL' transient objects that can manipulate low energy 'Slow FTL' spaces. There are two basic types of glisten. There are 'fixed' glistens which form anchors or interfaces between STL and FTL causality, and there are 'mobile' projection glistens which remain anchored to fixed glistens but are mobile. Speculation is that fixed glistens are ordinary STL transient matter, probably in some very highly ordered form, that is charged to achieve some kind of quantum Slow FTL transience (entanglement). Mobile glistens are FTL superposition projections from fixed glistens.
Fixed glistens might be formed from a PSL state inside a quantum memory, for instance forming part of the memory core of consciousness, mobile projection glistens interact with the fixed glistens to create useful functionality.
(This mechanism is only partly developed at present and is 'A Work In Progress' .)

Fast FTL Transience. What if your glistens had access to Fast FTL transience? Well you would have the power to directly affect physical causality around you, at least in line with your overall energy well. Since Fast FTL transience is a technology that leads directly to things like force fields and gravity engines and FTL Travel it is a point where the line between science and magic begins to crumble completely. This has the small caveat that human brains are not really designed to handle ultra high energy physics or things like super high energy plasmas or to juggle with things like heavy massed singularities or wormholes.

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


[Old Material. Now Updated.]

A Systematic Analysis of the Psychic. [KEEP!]
A first step towards any kind of scientific analysis of the psychic begins with general data gathering and a first iteration logical reduction and general problem space analysis.. This is only the first step but is also a primary point where most psychic research begins to fall down. The problem space divides into two general primary categories. A third category is added by existing scientific experimentation on the psychic. A fourth category is added to include new or external science such as Strong AI or FTL physics. -

- Cat 1 : Physical Phenomena. This is the region of possible real physical effects. Anything found in this region is by definition part of physics and amenable to science. However physical effects involving human interactions are mutable and may be very difficult to separate out.
- Cat 2 : Psycho-Spiritual Theory. This is the region specifically excluding or apart from physical effects. Theories in this region in general depend on particular spiritual views of the universe, and so are not generally scientific. Most of these theories are 'anti-materialistic', that is they depend on belief in something 'behind' or 'beyond' physics and material reality.. This is often extended to claim that material reality itself is an illusion..
- Cat 3 : Indirect Experimental Extrapolation. Existing scientific research into the field of the psychic.. The results are generally regarded by science as either untrustworthy or inconclusive.. The psychic is also apocryphally plagued by being connected to clandestine military research. (By definition anything even slightly successful tends to be kept hidden, and only failed projects like the Philadelphia Experiment or Project Stargate ever tend to become public..)
- Cat 4 : Implied by Systematic Extrapolation. Areas of science external to research into the psychic may provide strong data either positively or negatively. The psychic for instance shares a primary signature with established quantum mechanics - described effects are very delicate and simple observation can disrupt them.
- -- --- -- - -- --- -- -


Extended Analysis : Human (Negative) Logic and the Psychic. [KEEP]
A basic problem for science is that by definition impartial observation itself is not and can not be a neutral factor in any experiment in this area. This is because of potential spurious causality links between subjects and observers. However another bigger problem lies much deeper within the subject itself. The psychic seems to be quite heavily connected with the negative side of the brain's internal base logic. At a level above the base logic, and by a curious twist, there seems to be a very strong negative logic impulse connected to the psychic. This negative impulse interferes with the mind of anyone who tries to broach or penetrate the hedge that exists around the subject. The irony is that the scientist denying any possibility that the psychic exists actually seems to be the true voice of the psychic itself.

Critical Point Conclusion
In other words (from the above) by far the easiest way to analyse the psychic scientifically or rationally is to completely suppress it first. Unfortunately, with our current technology achieving this seems to be extremely difficult or impossible, at least without destroying or killing the brain. We should understand that the psychic is not rational or logical in any way that we can understand; it is not even irrational. The psychic is governed by a set of rules and logic defined by its physics which, without an understanding of FTL causality, seem almost completely incomprehensible. - These rules appear to have evolved in a completely random, arbitrary way, but one that is also tangled up with every piece of logic throughout human history.
In short the psychic is a total mess.
- -- --- -- - -- --- -- -


The Psychic Part 1 - Conclusions.
- The 'psychic' as imagined in so many books and films and TV is obviously not real. The 'psychic' sold by charlatans to the gullible is obviously not real.. A set of extremely small effects based on quantum scale interaction and molecular chemistry and physics probably is real.

A critical detail in the real world is that direct STL causality breaking is not allowed. A real FTL mechanism requires that any key information already exists in the memory system before it starts. In telepathy, for instance, all information 'nodes' would have to exist in both 'transmitter' and 'receiver' first. This also applies to PSL devices; in fact it is the very foundation of their operation.

With the FTL Quantum model we have the basis of a semi-complete physics mechanism. - A quantum model of the brain using, for instance, permanent-super-loop PSL memory could achieve low level effects very similar to short term 'precognition'. This particular mechanism has physics that could also create something that is effectively identical to the traditional idea of the soul - a Permanent Super Loop. Souls may actually (probably) exist.
The corollary is that these 'real' souls are not (by extrapolation) the epitome of perfection in any way, being a hotchpotch of evolution and bodged, mixed-up biology. The soul as it is today is largely a parasitic anachronism and is very unlikely to be a useful mechanism for survival post death. The curious physics of the FTL, though, means that this is not an irreversible problem. With an advanced enough FTL technology and a complete blueprint of the human brain it is (potentially) quite possible to make souls that can survive death. Machine souls.


The quantum mechanism is extremely idiosyncratic and unreliable, and this unpredictability is a central part of its signature. This is also a big part of why science has had so much trouble with the psychic. Is the psychic real? At a personal level I strongly believe that it is. At a scientific level I am full of doubt and know that trustworthy objective proof - either way - will be incredibly hard to produce..

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -
The Psychic Part 1 - End. -
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -







PR-3 : A Scientific Analysis of The Psychic - Part 2 - Practical Issues & Analysis[edit]

[Current Status - Development Rough Draft.] [Current Edit - 78% Complete - 14-05-17]

- Scientific Analysis : Implications of the Psychic. (Whether Real or Not)
-- Question 1 : If the psychic is real then how would this affect ordinary science? (Answered science by science)
-- Question 2 : If the psychic is real then how would this change the arguments about 'spiritual' or 'psychic' models of the world?
-- Question 3 : How would the psychic being real affect God and Religion?
-- Question 4 : How would the psychic being real affect various Transhumanist Questions?
- Traditional Psychic Phenomena, Real or not Real? - (PRELIMINARY ANALYSIS)
- Scientific Mechanism : Quantum and FTL Information Transfer.
- The Gaia Adaptation Principle. (Tentative)


Scientific Analysis : Implications of the Psychic. (Whether Real or Not)
The idea that the psychic might be real is wildly controversial. This is a debate that covers an interesting and unusual world space. There are a number of primary arguments or questions. -

Question 1 : If the psychic is real then how would this affect ordinary science? (Answered science by science) -
- Physics : Does not affect most core theory except slightly. The psychic can already be described as a mechanism based on quantum biology. Exploration of physical mechanisms in biology could allow the discovery of new, more complete models of quantum mechanics. If it exists, 'precognition' requires a direct FTL causality, and this gives a potential look directly into the FTL manifold..
- Chemistry : Does not directly affect core theory in any known way. However the predicted quantum mechanism for the psychic is one essentially based on biochemistry at a molecular scale..
- Biology : Does not directly affect most basic core theory. Would push quantum biology to the forefront, and would open a new space for future exploration. The manipulation of FTL Fate would provide a direct evolutionary mechanism for the psychic. Vice versa the existence of the psychic would affect certain philosophical aspects of evolutionary theory considerably. (DNA itself could be a potential quantum psychic 'engine'.)
- Medicine : Strongly disruptive of belief systems. As with existing quantum biology this area presents a direct threat to certain long held beliefs and assumptions in medicine.. Conversely, the psychic mechanism offers the possibility of future medical technologies far beyond anything we can do today.
- Psychiatry / Neurology : Disrupts parts of the base of much of current theory. May help to improve current theory to bring these fields towards a more complete and scientific basis. As with ordinary medicine this will also allow the development of new and potentially revolutionary treatments.
- Psychology : Causes a complete breakdown in the validation of common theory. Almost any psychic component creates a disruptive information channel between observers and subjects which potentially interferes with almost all basic psychology experiments. Even worse, a psychic component adds a large extra dynamic to human mentality which must then be accounted for. Psychology also already fails to take account of neurology, genetic heritage, biological context and other factors..

- Scientific Philosophy : Proof that the psychic exists is a point where (many believe) the whole philosophy of science would be overturned, forcing a complete new order to emerge.. This is not true. A lot of humble pie must be eaten - but humble pie is actually extremely good for real science. At least one reason why the psychic has been neither proven nor disproven is that it is very hard to apply the scientific method in this area properly.. The scientific analysis and incorporation of the psychic can ultimately only improve the scientific method, while its more philosophical implications remain rather obscure at present.
- Human Scientists : The real truth is that while science itself is not heavily affected by the question of the psychic, scientists are. - Scientists are human beings with subjective thoughts and opinions. Many scientists have looked at the area of the psychic and failed to reach a basis for proof or a model for irrefutable experimentation. Beyond this is the social aspect, because the psychic is one of the major breach points in a long-term battle between science and rationalism on one side, and superstition and religion on the other. Again the commonly accepted truth is partly the opposite of the real truth, because a scientific understanding of the psychic is potentially one of the strongest tools against superstition and dogma that can exist.


Question 2 : If the psychic is real then how would this change the arguments about 'spiritual' or 'psychic' models of the world?
The answer from my work, based on the patterns and models projected, is that if the psychic is real then spiritual models are not. The simple truth is that a real mapping of the psychic only proves one thing: that the materialist model of the world is 100% correct and complete in and of itself at all levels. Ultimately all traditional spiritual and psychic models of the world are based on the historical precepts of religion, and this introduces an extra factor that turns everything based on them into garbage. ('garbage in, garbage out') Analysis suggests strongly that the psychic itself is an evolving system, and one that undergoes frequent and rapid periods of change. This virtually rules out most spiritual or 'moralistic' models, at least as currently conceived. When we get into the heart of how the psychic works we hit a substantial barrier in the abstract nature of human logic, which is very intimately tied to the psychic. The projected logic of the psychic does not have an absolute base, which means the psychic tends to create circles of self-deceit and makes any thorough objective analysis very difficult. Remember that we are all susceptible to psychic influence, so this problem exists for all human minds.


Question 3 : How would the psychic being real affect God and Religion?
This depends entirely on how much religions allow themselves to be affected. However one very strong result of a deeper analysis of the 'psychic' as described is that - it is a jumbled, randomly grown evolved system. This 'evolution' explains overall human development and many of the differences between people and basically all of the problems and illogical elements in normal human psychology. Evolution also does not require God, in fact it virtually rules a moralistic God out of this part of the 'system'.. and that is the problem of course. Many religions have a massive weight of ancient rules and stipulations about the psychic and that puts this and all and any real research into the psychic into direct conflict with that aspect of religion..
Religion Defined. The psychic model and the FTL Anthropic model applied together define a fairly strong set of constraints on God and religion.
There is a Primary creator God; the FTL solution to the anthropic question proves this extremely strongly. This God is or was a universe spanning mindless force. This God evolved, then destroyed itself during the Big Bang, during which it created our entire universe. The primary God required, and was or is ruled by, only three factors - energy, order, and evolution. At its core is a time spanning FTL signal which allowed it to use each resulting universe to tune and create new iterations of causality.
There MAY be small Gods. Small gods could include almost any definition of God or Gods that you can think of. Any God connected to Earth would by definition be a small God.
At a direct material level any directly projected Totality Matrix defines a non-finite information set, and this can itself be defined as a small God. This means that all humans carry a potential spark of godhood inside them. (as do other animals) The totality matrix is also a recipe for building a small God in a lab. This represents a potential point of intersection where religion, magic, science, spirituality, and technology all meet. - A true cultural singularity or 'God from the Box'.. 'Deus Ex Machina'!


Question 4 : How would the psychic being real affect various Transhumanist Questions?

Physical Immortality :-- Psychic is Real : Without dealing with the psychic question, achieving immortality will be virtually impossible. One primary predicted action of the psychic is to act as a biological quantum driven entropy pump. Solving the psychic question completely would open the road to several new methods of achieving immortality. Perhaps the ultimate extension of this would be a quantum 'frame transfer' entropy pump. (see below)
Physical Immortality :-- Psychic is Not Real : Current research into the problem will either solve the issue or not.. Human biology is definitely capable of immortality, though a more advanced understanding of evolution, genetics, and the overall biological system will be a basic prerequisite. Science is advancing rapidly in this area..

Mind to Mind Transfer :-- Psychic is Real : Transfer might be possible but will be very difficult. At a minimum some kind of frame level quantum manipulator or quantum signal 'teleporter' would be required.
Mind to Mind Transfer :-- Psychic is Not Real : Transfer is virtually impossible. The core element of the individual sentience is the core memory store itself (the physical memory units rather than their contents). The only viable method of transfer is probably a mechanism of direct brain tissue transfer.

Mind to Machine Transfer :-- Psychic is Real : Transfer impossible without a complete understanding of the psychic and of the brain. A failed transfer would kill the original and create a functionally identical but new individual.. A co-prerequisite of such a technology is the ability to create machines with genuine 'souls'..
Mind to Machine Transfer :-- Psychic is Not Real : Transfer is virtually impossible. The trick is to copy the essence of sentience, and again this seems to be the physical memory units themselves.. The same failure mode of copy-not-original would still exist. (eg maybe even an atom level accurate copy-transfer would not work.)

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


Traditional Psychic Phenomena, Real or Not Real?
(Percentage (n%) is estimated probability of being real. 'OBJE' is Objective Extrapolation, 'SUBJ' is Subjective opinion.)

[UNDER DEVELOPMENT - PRELIMINARY ANALYSIS ONLY]
Precognition - (95% OBJE, 99.9% SUBJ) Almost Certainly Real. A widely subjectively observed effect. Strong AI theory predicts a requirement for basic low level precognition as a component of top level real-time logic steering in the brain. Several essentially complete putative mechanisms for precognition can be constructed from quantum mechanics and either FTL or Relativistic theory. Severe limits are imposed by physics on precognition, which mean that the effect is only ever a curiosity producing predictions that are not allowed to be causally 'useful'. (IE Precognition cannot predict the balls of an upcoming lottery, but it very probably does give the brain steering cues that make our movements and speech more fluid and 'natural'.)
Telepathy - (55% OBJE, 70% SUBJ) May be Real. A minimum unreliable effect can be extrapolated based on observation and the same basic physics mechanism as precognition. Telepathy as a direct 'useful' event seems to be very rare, but indirect or hidden 'non-useful' interaction may be so common that it is the absolute norm.
A big problem with telepathy is that it should interfere with the brain's language matrix, making it fundamentally incompatible with higher sentience. This would make telepathy both essential and a dangerous parasitic effect that must be strongly shielded against. A failure of this shield would cause effects essentially identical to mental illness or schizophrenia.. Scientifically it is very hard to determine whether either telepathy or precognition is the basis of any real underlying mechanism.. A definitive scientific test which could answer this question may be possible, but would require the direct comparative physical analysis of the language matrices in separate human brains.
The Soul (1) - Strong AI Model Soul - (90% OBJE, 100% SUBJ) 1. Strong AI defines the 'soul' as a physical memory system in the brain which forms the core of self-awareness. Obviously by this definition the soul does exist.
The Soul (2) - Psychic Soul - (40% OBJE, 95% SUBJ), 2. The 'soul' may also exist as an emergent property of psychic - quantum or FTL coherency - mechanisms in the brain.
The Soul (3) - Spirit Model Soul - (20% OBJE, uu% SUBJ), 3. The soul can also be interpreted as being directly equivalent to the 'living spirit' type explanation..
The Soul - General - Level of reality depends on definition. However all the above mechanisms still require a living brain as a physical base, and so none provides a true mechanism for surviving death. It is theoretically possible for an FTL coherent 'soul' to survive death; see 'Multiple Reincarnation / Lives' below.
Auras - (60% OBJE, 95% SUBJ) Probably some level of reality. Effects are commonly so exaggerated that they become totally unbelievable and logically unacceptable. Some real effects seem to be extremely commonly observed, but are too small and uncertain to separate out. One putative mechanism is as an FTL component of the body's electrostatic field. (see below)
Far Viewing (1) - Far Vision - (30% OBJE, 25% SUBJ) May Be Real. Small effect observed by the US military but too weak and too unpredictable to be useful. A quantum mechanism for 'far viewing' can be extrapolated - this would be highly limited by FTL causality (by quantum coherence and transience) and this fits very closely with known evidence. This means that the effect is most likely to fail when needed most or when information is not otherwise already available. It may be possible to replicate far viewing in the lab in a machine.
Far Viewing (2) - As Weapon - (50% OBJE, 65% SUBJ) May Be Real. An extrapolation of far viewing used as a weapon seems very possible. This mode fits much better with what is 'known' by rumor about military research, and also fits better with certain observed effects.. Unfortunately the psychic used as a remote weapon is a far smaller reach than used as a remote spying tool.
The reason that this part of far viewing is not widely published is also fairly obvious - it potentially gives every living human a weapon to attack and hurt any public figure they don't like.. It should be seen as a weapon of distraction and consciousness disruption, a weapon to aim at enemy leaders rather than their troops.. A very basic way to 'aim' and 'fire' such a weapon is simply to focus public hatred against enemy figures.
Astral Projection - (30% OBJE, 50% SUBJ) Half Real / Not Real. Multiple effects with one name.. The primary mechanism is probably a projection generated through the imagination. (the same mechanism as far viewing) Effects would be limited by the maximum causality of the brain as a quantum machine.. Personal research suggests that astral projection is potentially very dangerous to mental health and should not be attempted.
Akashic Records - (75% OBJE, 98% SUBJ) Real/ Half Real. Strong AI theory suggests a direct non-psychic physical mechanism. In this the brain pushes its entire totality matrix into a temporary synchronous mode allowing a single point read-back. The same mechanism also explains religious experience and events like meeting God or Angels or Demons.. Adding a psychic element (quantum element) slightly increases the strength of this explanation.
Psychokinesis - (2% OBJE, 0.1% SUBJ) Very Probably Not Real. Basic energy physics as well as quantum mechanics and FTL physics all preclude this effect. Requires a way to tip the balance of momentum, plus more quantum coherence than the brain should be able to provide safely by at least two to five orders of magnitude.
Multiple Reincarnation / Lives - (< 1% OBJE, < 1% SUBJ) Not Real. Without a physical base to support it, the soul 'dies' on physical death. There is no clean mechanism of transfer.. FTL causality and entropy do not allow a clean transfer. A clean transfer would require a sophisticated FTL technology at minimum. Fragments of isolated memory may be 'transferred', but not coherently. [There are several uglier explanations for this..]
Ghosts (1) - Directly 'Real' - (1% OBJE, 0.1% SUBJ), 1. Direct images of dead people or visible animated 'spirits' are ruled out due to energy and FTL transience constraints.
Ghosts (2) - Imaginary Projection (70% OBJE, 50% SUBJ), 2. Ghosts as internal imaginary projections either from a psychic or internal source. One now obsolete theory postulated a direct information projection between past and present times through an FTL coherent link, but this now seems very unlikely. An FTL projection from the current time is a possibility.
Astral Ghost (1) - Physical Spirit (<1% OBJE, 2% SUBJ) Unknown / Probably not real. Direct projection of astral substance or true spirit (ectoplasm).
Astral Ghost (2) - Imaginary Projection (70% OBJE, 50% SUBJ) A simple plausible mode is a direct projection through the imagination.
Astral Ghost - General (Also 'spirit', incubus, succubus, vampire, ghost, angel, demon, etc.) More details withheld for brevity. Note that there may be some or total direct equivalence between far viewing, auras, astral projection, and astral ghosts..
Poltergeist/s - (1% OBJE, 0.5% SUBJ) Unknown / Very Probably not real. As for ordinary ghosts + psychokinesis. Direct poltergeist effects are beyond current scientific explanation.. Some evidence suggests that direct experience of such effects occurs through an indirect subsidiary process. This could be described as an induced dream/false memory state.. or imagination.
[UNDER DEVELOPMENT - PRELIMINARY ANALYSIS ONLY]
- -- --- -- - -- --- -- -


Scientific Mechanism : Quantum and FTL Information Transfer. (known or extrapolated)
- A reasonable physical model for the 'psychic' is quantum mechanics in the brain at a molecular and neural processing level. The 'FTL Quantum' Model extends the above quantum model considerably, potentially creating a complete mechanics for the mechanism, and providing potential mechanisms for future technologies.

There are two basic types of FTL communication, Direct and Indirect.. -
Direct Signal FTL Communication. A direct point to point link. Very hard to achieve. In general Direct FTL Signals are restricted to 'Slow' FTL speeds. The signal travels slower than the speed of light in vacuum (Cvac) but faster than some local 'reference' speed of light. (see the short worked example after these points)
-- It is impossible to send a direct FTL signal through vacuum. Very difficult to send a direct signal through air. Viable primary mediums include water and organic material.
-- All direct FTL signals tend to be very short range. This may either be within a skull or across a room or further, but is probably within about 5m to 1000m.
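For a sense of scale, this 'Slow FTL' window can be put in numbers using ordinary optics: the local reference speed of light in a medium is Cvac divided by its refractive index. The short calculation below (Python; the tissue index is my approximate assumption) shows the window for the two viable mediums named above:

# The 'Slow FTL' window described above : faster than the local reference
# speed of light (Cvac / refractive index) but slower than Cvac itself.
C_VAC = 299_792_458                                      # m/s, speed of light in vacuum
media = {"water": 1.33, "organic tissue (approx)": 1.4}  # refractive indices
for name, n in media.items():
    local_c = C_VAC / n
    print(f"{name}: window {local_c/1e6:.0f} to {C_VAC/1e6:.0f} million m/s")
# In vacuum n = 1 and the window closes - consistent with the claim above
# that a direct 'Slow FTL' signal cannot be sent through vacuum.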
Indirect Signal FTL Communication. No direct link but an indirect link via extended FTL information coherency between two end points.. Effectively the signal travels by moving backwards in time to and from a shared information source.
-- In theory indirect signal propagation should be virtually instantaneous over any distance. Indirect signals should be able to travel through or rather past vacuum.
-- Indirect FTL Communications may or may not be possible. In the FTL model dimensional time is normally NOT coherent above quantum scales, however a local quantum or FTL resonance in a constructed object may be able to override this.
-- The key to an indirect FTL link is a mutually shared, non-FTL-causal, information-containing memory. The memory source is shared between the two systems to create a shared FTL 'transience' between them. The actual signal is transmitted by jumping back through time to modify the originating shared memory; the modification immediately jumps forward again.
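Purely as an illustration of the information flow just described (not of any physics), here is a classical toy model in Python. All names are hypothetical; the point is only the logical shape of the link - both ends anchor to one pre-existing shared memory, a 'send' rewrites that shared source, and the change is immediately visible at the far end. The guard clause mirrors the rule from Part 1 that the information 'nodes' must already exist at both ends.

# Toy classical model of the 'indirect FTL link' logic described above.
# All names are hypothetical illustrations; no real physics is simulated.
class SharedMemory:
    """The mutually shared, pre-existing information source."""
    def __init__(self, seed_data):
        self.cells = dict(seed_data)  # information 'nodes' must exist up front

class Endpoint:
    def __init__(self, name, shared):
        self.name = name
        self.shared = shared          # both endpoints anchor to the same memory

    def send(self, key, value):
        # 'Jump back' : modify the originating shared memory...
        if key not in self.shared.cells:
            raise KeyError("node must already exist at both ends")  # no new causality
        self.shared.cells[key] = value

    def receive(self, key):
        # ...'jump forward' : the modification is immediately visible here.
        return self.shared.cells[key]

link = SharedMemory({"greeting": None})   # the shared, non-causal seed state
a, b = Endpoint("A", link), Endpoint("B", link)
a.send("greeting", "hello")               # A writes into the shared source
print(b.receive("greeting"))              # B reads it 'instantaneously' -> hello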

Specific : 'Auras' or 'Psychic Fields' - 'Auras' may have some level of reality.
-- At one level 'Auras' (as described) may fit quite strongly as a Slow FTL Field within ordinary physics.
-- Detecting such an Aura would require a quantum comparator or discriminator. The comparator itself would be deep in the brain, and would probably use information from the eyes and other ordinary senses as its input.
-- Limit of Observation : The direct 'conscious' observation by the mind of data from a quantum comparator would form a causal barrier that could immediately lock or destroy the actual comparator.. Auras as described must thus be indirect 'imaginary' extrapolations from the data..
-- Detection 1 : The mechanism described may give us some sense of the physical presence of nearby visual objects boosting analytical visual performance.
-- Detection 2 : The mechanism described may give us a strong sense of the emotional state of other 'nearby' (information coupled) people.
- -- --- -- - -- --- -- -


The Gaia Adaptation Principle. (Tentative)
A big problem comes when the psychic is taken beyond the human, out to other animals and to the rest of nature, and up to the scale of ecosystems.
The question has to arise: does the Gaia principle exist, and if so does it have a psychic mechanism? Standard science argues that the Gaia principle can be driven without any kind of psychic effect, driven simply by the system's long term coherence and genetic memory. However this does not exclude a psychic explanation as well.
A psychic explanation is possible, but this is one of many areas of the psychic that starts to become very fraught once you begin to look into it in detail. The adaptation principle argument tears itself to pieces and becomes an analysis of the supremacy of humans over nature. It seems there may be a cascade effect in the evolution of humanity - we evolved big brains which massively boosted our psychic capacity, but growing psychic capacity also forced our brains towards growing size. A side effect of this is that humans accidentally became the masters of the world in psychic terms, giving us the complete power to override and completely alter Gaia's logic. In other words any sane evolved logic gets replaced by a confused nightmare of tangled human psychology and religious belief. To put it more bluntly, if there is a psychic Gaia principle then humanity pretty much destroyed it centuries ago.
Another aspect of Gaia is that, like all living systems, it has almost certainly developed suicide switches. From this we can predict that, because humanity is such a damaging influence on nature, our suicide switches are already permanently in the on position. - So fixing a psychic Gaia might be an extremely bad idea for humanity.

In further extremis, Gaia may well have evolved its own core suicide switch - an ecosystem scale suicide switch. - We must remember that biology and evolution are still blind; evolution is random and creates many things that become extinct through poor genetics and inferior design. Given that Gaia's genetic base throughout its hierarchy is already littered with suicide switches, this does seem quite likely. Repairing Gaia without understanding it fully could be the most dangerous thing humanity could ever do. I only understand this because I have a strong perception and understanding of real evolution. Evolution is a balance between creating and killing, and we have distorted and corrupted and attacked that balance.
- -- --- -- - -- --- -- -

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -
The Psychic Part 2 - End. -
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -







SUPER PROJECTS : Promoting Long Term Scientific and Human Progress[edit]

[Sketch Draft! Many Rewrites!!] [UNDER DEVELOPMENT From 10-03-13, About 75% Complete, Current Edit - 20-03-17]

Global Strategy

  • Promoting Scientific Progress by creating Long term Goals.
  • Global long range targets to focus overall research into more efficient and effective paradigms.
  • World Wide Ideas to transform and literally SAVE the World by inventing a better future.

The lists in this document are only a starting point; anyone with an interesting idea can contribute. - What we are looking for are suggestions from scientific specialists and others. Scientific logic and experience show the obvious: that very often long term planning is the best way to achieve real long term goals. - Like the survival of humanity.. If any of my own projects ever gets funding and becomes successful then a substantial long term goal is to create a scientific charity, a global super project foundation, with the aim of achieving these goals. Investing money into long term scientific planning and futuristic projects isn't wasting money; over the long term it is vastly more efficient than doing everything by random short term reactive planning.
If (for instance) you really want to help the poor people in the third world then the real best solution is to put money into this kind of science. - Because science and scientific planning are the base of the pillar of modern society - everything we have comes from them. Scientific planning is probably the only path that can lead towards global social equalization and the end of human poverty without destroying the world. Without that top level thinking your attempts to help people may actually make things worse and create more suffering than doing nothing. Compassion is needed but so is intelligence.
No matter what or which long term goals we aspire to, scientific planning is the key to achieving them. We talk about long term, but long term is always relative. This document will look at both long term (100+ year) and medium/short term (10 to 40 year) targets and projects.


INTRO : SUPER PROJECTS. What are super projects? They are projects that aim to achieve massive or significant long term goals that really affect the future of the world. The reason these projects can be so far reaching is that they tend to act as massive umbrellas - focus points that can accelerate and harmonize future research in many areas simultaneously. For instance a project focusing science on, say, the development of starship technology could help us produce new and vastly superior methods of energy production, solve difficult environmental problems, find radical new solutions to medical problems, radically improve manufacturing, create new paradigms in computing, and so on.
As well as all this, on the way to developing starship technology we open up space to a far wider circle of people and make the technologies for things like Orbital or Lunar colonies practical.. (starship technology is a particularly good primary example and I will focus on it again and again.)

Previous Super Projects. Perhaps the best example of the positive influence of Super Projects is the development of rocketry - the Space Shuttle, the Apollo program, the Nazi V rocket program, even the US ICBM programs. All had a massive effect on the world. Modern health and safety emerged from the development of rocketry, along with a more precise & rigorous approach to research and manufacturing leading to ever better products; the miniaturization and advancement of computers and electronics; the internet; modern plastics and composites; machining involving advanced alloys and materials; carbon composites; large scale cryogenics; the formalization of medical science; etc, etc.
Not only all of this but rocketry has helped to inspire humanity hugely and has given us dreams beyond our own very limited world. A new belief in science and fact as the basis of reality, a more concrete understanding of the vast scale of the universe. Even a belief in scientific progress itself as a path to the future.

Rocketry is only one example. Quite simply, all 'Super Projects' can be game changers, and many may offer new possibilities that may deliberately or accidentally save humanity or even life on Earth itself. This will very probably be in ways that none of us can predict or even imagine until they happen. Naturally there is much overlap between many of the different targets, and some create much wider umbrellas than others. The furthest reaching often become the most widely embracing, but are also usually the most expensive.
Scientific Progress : The more money you spend now the more you tend to save later. As long as it is spent effectively.
A good example for the near future might be closed cycle nuclear rockets (eg NGCCC Liberty rocket) which would cost a lot of money to develop ($10 to 20 Billion), but would create a new and bigger umbrella for space travel. As for cost, if Nuclear rockets had been used (say) to build the ISS instead of the Space Shuttle they would have paid off their entire development costs on the first or second launch. The two are not comparable though because the ISS is a tinker toy compared to what nuclear rockets could put into orbit.
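A rough back-of-envelope check on that payoff claim, in Python. All inputs are my own round-number assumptions rather than figures from this document: a commonly cited ~$1.5 billion program-average cost per Shuttle flight, roughly 30 Shuttle assembly flights for the ISS, and a development bill in the middle of the $10 to 20 billion range quoted above:

# Back-of-envelope check; all inputs are rough assumptions, not source figures.
shuttle_cost_per_flight = 1.5e9   # ~$1.5B program-average per Shuttle launch (commonly cited)
iss_assembly_flights    = 30      # roughly 30 Shuttle assembly flights for the ISS
nuclear_dev_cost        = 15e9    # assumed dev cost, mid of the $10-20B range above

shuttle_total = shuttle_cost_per_flight * iss_assembly_flights
print(f"Shuttle ISS assembly bill : ${shuttle_total / 1e9:.0f}B")
print(f"Nuclear dev cost          : ${nuclear_dev_cost / 1e9:.0f}B")
# If a handful of heavy nuclear launches replaced those ~30 flights, the saving
# (~$45B vs ~$15B development) covers the development bill several times over -
# loosely consistent with the 'first or second launch' claim, under these assumptions.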

The Ugly Real World. What is the enemy of good scientific progress? Incompetence, inertia, and short term thinking at the top level in the global management of science and technology, and a more general lack of long term planning. This incompetence is endemic and centers on an inability to make strong long term extrapolations and decisions. Another endemic problem is the huge lack of marketing for the funding of the largest and most important science projects. Often this seems to totally ignore the critical importance of these projects to the world.

Two very obvious victims of this endemic failure are the long term development of space technology and the primary funding of research towards nuclear fusion energy production. The first has bound space travel to a primitive and ultra expensive & ultra small scale low yield model. This has cost the world many research years in space and a vast amount of wasted money. The second has held back what is probably the best medium long term solution to our rapidly growing energy needs. This has left the world with extra decades of polluting high carbon energy, held back the development of things like electric cars, and pushed the world further towards severe climate change.


(WARNING : Numbering systems and project ordering are in flux and will eventually be totally updated...)
LT - Long Term Focus Goals & Projects
(From about 20 years long to 100 years or over.)
LT-1 Starship Technology.
LT-2 Advanced Human 'Cybridization'.
LT-3 Molecular scale Assemblers.
LT-4 Development of 'Total Genetics' Program.
LT-5 Life Sustaining Medicine.
LT-6 Advanced Artificial Biomes.
LT-7 Utopia : Redesign and Rebuild Human Culture.
LT-8 Quantum Manipulation.
LT-9 Advanced SETI Survey and Technology.

MST - Medium/Short Term Focus Goals & Projects
(From about 5 years long up to a maximum of about 40 years.)
MST-1 Global Asteroid Defence System.
MST-2 Return to Scientifically planned Space Technology.
MST-3 Manned Mars Mission.
MST-4 Development of Artificial Wombs and Direct Life Sustaining Technologies.
MST-5 Development of Cybernetic Interfacing Program.
MST-6 Strong AI - Global Strong AI Consortium.
MST-7 Artificial Biomes - 'Eco Core' Industrial Ecological System.
MST-8 Clean Long Term Energy Production Program.
MST-9 Serious SETI Survey and Technology.
MST-10 Focus on Dementia and other Mental Degradation.
MST-11 Reduce third world poverty without destroying the world.

Many other future projects exist but most of them are further outside the remit of science or technology. For instance solving the global and local inequality between rich and poor, or dealing with religious intolerance and persecution, or solving the problem of endemic poor decision making by local, regional, national, and global governments..


LT - Long Term Focus Goals & Projects (LT-Long Term) - (From about 20 years long to 100 years or over.)
These are very large, broad brush projects aimed over longer time scales. By definition even the most modest of these have aims that are extreme compared to anything that exists today, and have absolute minimum time spans of around at least 20 years. An apt comparison is to think back to 1900 and then imagine extrapolating forward from then to today.

LT-1 : Starship Technology. If there is one ultimate all encompassing super project it is the development of starship technology. - Probably the most obvious and far reaching of all super projects.
Research Targets : Advanced Power tech, rocket engine and 'space drive' tech, long term survival in space, methods of continual long term life support, constrained utopian societies, efficient lightweight radiation shielding, human hibernation, advanced medicine, etc.
(See Separate Sections on Starship Development Below.)
LT-2 : Advanced Human 'Cybridization'. Another basic target already implicit in many people's thinking.
Research Targets : Advanced Medical Robotics, robot miniaturization, nanotechnology assemblers, Human neural system interfacing, brain & neural system repair, improving human cybernetic interfacing, artificially enhanced consciousness, an end to mental illness, total immersion VR, transferring consciousness, surviving death?.
LT-3 : Molecular Nanotechnology Assemblers.
Research Targets : Molecular scale manufacturing, self-autonomous machines, self-replication, molecular macro scale computing (vast increase in computer power), medical cellular scale repair, in-situ DNA repair and modification, direct neural interfacing and 'cybridization', neural repair, physical near immortality, etc.
LT-4 : Development of 'Total Genetics' Program.
Research Targets : Complete mapping and complete understanding (internal mapping) of genetic systems, safe advanced genetic engineering, vastly improved medicine, agriculture, 'synthetic' genetically engineered animals & people, etc.
LT-5 : Life Sustaining Medicine.
Research Targets : Full self contained primary (body internal) life sustaining systems - complete artificial life support (ie like Robocop) - to sustain a person in trauma or with organ failure, and to sustain or replace damaged organs. Artificial wombs, life extension, fighting dementia, curing obesity, etc..
LT-6 : Advanced Artificial Biomes.
Research Targets : Long term sustainable Artificial Biomes and Arcologies, 'Eco Core' Industrial Ecological System, Ultra scale Industrial Farming, Self Contained Bio-systems / Factories, ultra large scale engineering and manufacturing, ultra large scale robotics, primary ecosystem repair, etc.
LT-7 : Utopia : Redesign and Rebuild Human Culture.
Research Targets : True Utopianism, Rebuild society and culture from the ground up by rebuilding and reprogramming our own minds using scientific principles. (for instance developed within Strong AI) An end to destructive short term politics, reduced crime & social fission, an end to short term selfish suicide solutions, etc.
LT-8 : Quantum Manipulation.
Research Targets : Quantum Computing, room temperature superconductors, quantum discriminator sensors, Atomic Conversion, 'FTL' technology, 'Space Drive', machine prediction & precognition, technology that manipulates matter and physics directly, etc.
LT-9 : Advanced SETI Survey and Technology.
Research Targets : Advanced Laser detectors, Advanced Laser Beacons, advanced radio detectors, advanced gravity wave detectors, very advanced telescopes and large scale systematic search systems, research programs for FTL sense & detect technology, research for starship technology, preparations for interstellar contact. (either peace or otherwise) See SETI-Analysis Section Below.


MST - Medium/Short Term Focus Goals & Projects (MST-Medium/Short Term) - (From about 5 years long up to a maximum of about 40 years.)
Just as important as the longer term projects, there are many futurist projects that make more sense aimed over shorter and more immediate time scales. Medium/near term aims that guide science and technology in more efficient and beneficial directions.
- [Only a very short and partial List : Under Production - needs a new & upgraded list.]

MST-1 : Global Asteroid Defence System.
Research Targets : Both a way to protect the Earth directly and a way to boost general space technology.
(See Separate Section : 'SPACE PROJECTS : Global Asteroid Defence System'.)
MST-2 : Return to Scientifically planned Space Technology.
Research Targets : Nuclear rocket tech, Super heavy Earth orbit rocket. Large space stations. Human colony outposts in orbit, on the Moon, and beyond. Improving basic To-Orbit From-Orbit efficiency and reducing costs by maybe a factor of ten.
MST-3 : Manned Mars Mission.
Research Targets : Simply the next logical step in space exploration. A vast range of different possibilities and potential missions.
MST-4 : Development of Artificial Wombs and Direct Life Sustaining Technologies.
Research Targets : Allows the development of many new surgical and medical technologies. Ultimately potentially solves the problem of death through organ failure during trauma. May allow the long term live storage of transplant organs, and the survival of otherwise fatally injured patients.
MST-5 : Development of Cybernetic Interfacing Program.
Research Targets : Basic 'cybridization', human machine neural interfacing - Robocop style robotic interfacing.
MST-6 : Strong AI - Global Strong AI Consortium.
Research Targets : Developing Strong AI systems, making Strong and Weak AI safe, the politics of Strong AI, social integration of Strong AI, etc.
MST-7 : Artificial Biomes - 'Eco Core' Industrial Bio-farm Ecological System.
Research Targets : As with the long term version but with far shorter and simpler aims done on smaller scales.
MST-8 : Clean Long Term Energy Production Program.
Research Targets : Three Basic Aims :- Adequate Energy for growing global needs including industry. Clean Energy - low or zero net CO2 release, low in general pollution, minimum human death rate. Long term energy affordability, security, and maintainability.
MST-9 : Serious SETI Survey and Technology.
Research Targets : As for long term project LT-9 but with shorter term & less ambitious goals. See Separate Section SETI-Analysis.
MST-10 : Focus on Dementia and other Mental Degradation. -
Research Targets : Finding methods that reduce, slow, or eliminate the degradation of mental capacity associated with Dementia, Alzheimer's, and general aging.. A solution could increase the health and independence of many tens of thousands up to millions of elderly people, as well as creating an immense reduction in medical costs. This is a crisis that is expected to become critical within 10 to 20 years and a solution is needed. Also a critical focus towards improving long term human longevity.
MST-11 : Reduce third world poverty without destroying the world.
Research Targets : Increasing production of better quality and cheaper food, with minimized environmental impact. Improving economic fairness, increasing equality. Working to create sustainable and moral population control. (If we choose not to follow moral solutions then the only ones left will be amoral solutions.)

-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --


LT-2. Long Term Focus : Advanced Human 'Cybridization'.
Research Targets : Advanced Medical Robotics, robot miniaturization, nanotechnology assemblers, Human neural system interfacing, brain & neural system repair, better human cyborg interfacing, enhancing consciousness, an end to mental illness, total immersion VR, transferring consciousness, surviving death?.
At its core this is the technology of interfacing the human brain & nervous system directly with machines. An almost essential preliminary to this is the technology of microscopic scale robots able to operate and function within the living body, either temporarily or even permanently. A basis for such robots is a nanotechnology assembler development project - ie LT-3.
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --


LT-3. Long Term Focus : Molecular Nanotechnology Assemblers.
Research Targets : Molecular scale manufacturing, self-autonomous machines, self-replication, molecular macro scale computing (vast increase in computer power), medical cellular scale repair, in-situ DNA repair and modification, direct neural interfacing and 'cybridization', neural repair, physical near immortality, etc.
The development of assemblers would come in a number of stages, the first half of the project leading up to the development of a single working nano-scale assembler, the second half focusing first on self replication and then on mass production. This would have a parallel stage developing further construction techniques and beginning the assessment and development of a first raft of applications..
Example assembler : Primary material - carbon diamond composites. Rough scale 100,000 atoms long (~2 um), 50,000 atoms diameter (~1 um), ~2x10^13 atoms, mass 3x10^-17 grams. Environment high vacuum, estimated working temperature -100 C to -200 C. Estimated replication time 1 day to 30 days. Cost (based on 10 day replication) - Year 1 = 6.9E10 units = £10 billion per gram (Limit), Year 2 = 4.7E21 units = £1 million per gram, Year 3 = 2.2E32 units < £1 per gram. [EDIT POINT] ***
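The unit counts in the example follow from plain doubling arithmetic: at one replication per 10 days a single assembler doubles about 36 times a year. The sketch below (Python; my restatement of that arithmetic, with a hypothetical fixed program cost added for illustration) reproduces the Year 1 and Year 2 figures and shows why cost per gram collapses:

# Doubling growth for a self-replicating assembler with a 10-day replication time.
# Unit mass comes from the text; the program cost is a hypothetical illustration.
UNIT_MASS_G = 3e-17               # mass per assembler, grams (from the text)
DOUBLINGS_PER_YEAR = 36           # ~365 days / 10-day replication time
assumed_program_cost = 10e9       # hypothetical fixed cost in GBP, for illustration

for year in (1, 2, 3):
    units = 2 ** (DOUBLINGS_PER_YEAR * year)
    grams = units * UNIT_MASS_G
    print(f"Year {year}: {units:.1e} units, {grams:.1e} g, "
          f"~£{assumed_program_cost / grams:.1e} per gram")
# Year 1: ~6.9e10 units (matches the text), Year 2: ~4.7e21 (matches),
# Year 3: ~3.2e32 units -> ~1e16 grams, so cost per gram falls well below £1.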
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --


LT-5. Life Sustaining Medicine. Research Targets : Full self contained primary (body internal) life sustaining systems - to sustain individual organs, a person in trauma or with organ failure, to replace damaged organs, complete artificial life support (ie like Robocop), artificial wombs, life extension, fighting dementia, curing obesity, etc..
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --


LT-6. Long Term Focus : Artificial Biomes
A projection for the development of an ultra scale technological environmental support system solution. A condensation of ultra scale industrial technology, Artificial Biome technology, farming, robotics, strong AI, economics, and energy production. To allow large scale human survival in environments beyond the Earth or in the event of major ecosystem collapse or in aid of a failing eco-system or lack of sufficient resources.
Research Targets : Long term sustainable Artificial Biomes and Arcologies, 'Eco Core' Industrial Ecological System, Ultra Industrial scale Farming, Self Contained Biosystems / Factories, ultra large scale engineering and manufacturing, primary ecosystem repair, etc.
At the core of an eco-core system design is energy production on a massive scale. The design assumes this is through nuclear fusion, though any thermal energy production system could be used. Around the energy source we build a collection of machines to allow the construction and creation of a concentrated artificial biosystem. There is an atmosphere recycler to capture excess CO2 and control levels of oxygen and other gases. There are organic matter recyclers that, for instance, process waste and use it as feedstock for growing crops in concentrated farming units - probably vertical units using artificial lighting. There are concentrated water systems for growing and producing fish and other sea animals, and crops like rice. There is an overall purge and bio-control system to maintain a healthy balance of bacteria and maintain bio-system health. One of the key abilities a system like this depends on is being able to sterilize and reseed the limited scale bio-system units regularly, to prevent issues like degradation or runaway collapse. This is basically learning the lesson from projects like 'Biosphere 2': that small closed biosystems are very unnatural and need some form of constant active intervention and management.
As well as bio-systems the eco-core is intended to contain a set of bio-factories (and chemical synthesizers) that make things like fuels, plastics, synthetic materials, general chemicals. A central key is that the thermal energy from the energy production is used directly by the feedstock bacteria and algae, and in the chemical processes.

Cost to develop and build the core components of a working Eco-Core technology - £10 to £100 million.
Cost to build a small scale demonstrator Eco-Core Project - £10 to £50 million.
Cost to build a large full scale Eco-Core Project - £20 billion to £1,000 billion.
Very Tentative Cost to protect a substantial proportion of the world's population in the case of an Eco Catastrophe - (est) £100 trillion.
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --


LT-8. Long Term Focus : Quantum Manipulation. Research Targets : Quantum Computing, room temperature superconductors, quantum discriminator sensors, Atomic Conversion, 'FTL' technology, 'Space Drive', machine prediction & precognition, technology that manipulates matter and physics directly, etc.
The ultimate logical conclusion of quantum mechanics - is a physics that allows the direct manipulation of quantum mechanics on a human scale. Either with or without an FTL physics extension the ability to manipulate quantum mechanics is the opening of a door into a new realm of physics, a literal quantum leap. -
At a first level comes quantum computation. Many possibilities have already been mentioned for this in the standard literature, but another is so-called 'precognition' or 'quantum memory', which is the ability to predict your own future or past states and to use this information to accelerate your own internal logic. Quantum memory is actually simpler than systems using entangled bits, and there is strong indirect evidence that organic brains use this type of memory. (it more or less perfectly plugs the neural processing gap between the brain's predicted and observed capability, and also explains the unsolved problem of self-ordering self assembly)
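A loose classical analogy (nothing quantum about it) for using predictions of your own future states to accelerate internal logic is memoized lookahead: once a state transition has been computed, revisiting it costs nothing. A minimal Python sketch, purely illustrative:

# Classical toy analogy for 'quantum memory' : reuse already-seen future states
# to accelerate internal logic. Purely illustrative; nothing here is quantum.
from functools import lru_cache

@lru_cache(maxsize=None)
def next_state(state: int) -> int:
    # A stand-in for some expensive state-transition rule.
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def run(state: int, steps: int) -> int:
    for _ in range(steps):
        state = next_state(state)   # cached transitions are 'free' on revisits
    return state

print(run(42, 5))   # first run computes all 5 transitions
print(run(42, 5))   # second run 'predicts' every step from the cache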
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --


Medium / Short Term Focus Projects


MST-2. Short Term Focus : Return to Scientifically planned Space Technology. (See sections on 'Space Projects' below.)
(Nuclear Rocket tech, Heavy (500+ Ton LEO) Earth Orbit Rocket, large space stations, tech for human outposts on the Moon and beyond.)
Nuclear Rockets. In simple terms nuclear rocket motors based on high energy fission are the only logical choice for further and more extended space exploration. Ironically most of the technology was already pretty fully developed in the 1960s and 70s under the name of project NERVA (and other programs), to the point where it was almost ready to begin testing in space. Much of this work may be lost, but it would still be a basis for building a similar series of engines today. The book 'To the Ends of The Solar System' contains quite detailed notes on project NERVA, and there are other published sources.
Super Heavy Earth Orbit Rocket. (Payload 100, 200, 500, even 1000 metric tons to LEO.) The most basic capability of all for the expansion of almost all space technologies is the ability to lift larger payloads into Earth orbit more cheaply. The bigger the payload the more expansion that is possible and the lower the price per unit weight becomes.
The costs to develop Super Heavy (500 ton + to LEO) Lifter rockets and build the facilities for their manufacture and launch are estimated at £10 to 20 billion. The cost per launch for a Big Lifter rocket is estimated to be £100 million to £1.5 billion.
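Those figures imply a cost per kilogram to LEO that is worth working out directly; the short calculation below (Python, using only the payload and launch-cost ranges quoted above) gives the spread:

# Cost per kg to LEO implied by the Super Heavy Lifter figures above.
payload_kg = 500 * 1000                # 500 metric tons to LEO
for launch_cost in (100e6, 1.5e9):     # £100M to £1.5B per launch (from text)
    print(f"£{launch_cost / 1e6:,.0f}M launch -> £{launch_cost / payload_kg:,.0f} per kg")
# -> roughly £200 per kg at the low end, £3,000 per kg at the high end.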
Large space stations. Large permanently manned structures in orbit. A preliminary need is a super heavy lifter to put these large stations into orbit and to carry up crew, passengers, and cargo. A second preliminary is a high efficiency thruster system such as ionic thrust to maintain orbit for long periods at low cost.
Human colony outposts. Human colony outposts - in orbit, on the Moon, and beyond.
Improving Basic 'To Orbit' Efficiency. - There are 4 basic methods of increasing spacecraft efficiency :- Increased scale - Reusability - Single Stage to orbit - ground launched nuclear rockets. One possibility is to take the Skylon design and super-size it, going from say two engines to four larger engines and a much larger body.
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --


MST-4 : Development of Artificial Wombs and Direct Life Sustaining Technologies.
Research Targets : Allows the development of many new surgical and medical technologies. Ultimately potentially solves the problem of death through organ failure during trauma. May allow the long term live storage of transplant organs, and the survival of otherwise fatally injured patients.
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --


MST-5 : Development of Cybernetic Interfacing Program.

'Cybridization Today' - A process of uniting human mentality with machine. Surprisingly, this technique could be essentially possible with only a few years of development, and many of the basic techniques and foundations already exist. An initial primary target goal might be to offer people whose bodies are effectively permanently broken or dying a final choice against death or permanent paralysis. The same techniques should eventually lead towards improved methods for curing paralysis inside the body.

  • The first key ingredient is to be able to build a viable life support system that can protect, support, and maintain a living brain and central nervous system independently.
  • The second key ingredient is to be able to remove the brain and nervous system core from the body and install them in the machine without damaging them. This includes the retinas or whole eyes and other sensors like the ear canals.
  • The third key ingredient is the electronic interface between the nervous system and the machine, this is very complex but is really no more than an extension and combination of many technologies that already exist in prototype form.

- A primary need is to protect and defend the organic system from all forms of infection. This may be one of the most difficult problems but our knowledge of the human immune system and biology has grown immensely in recent decades and solving this problem is not impossible.
- For people with conditions like motor neuron disease the interface becomes far more complex, because the interface must be linked to the brain directly, but hopefully it should still be possible.
- One of the main barriers to building such an interface in a living person is the need to maintain the integrity of the skull and the vulnerability of external components as the body moves and bashes about in everyday life.
- The complexity issue in wiring up this kind of massive neural interface can be solved by the development of precise miniaturized surgical robots that can function autonomously in a sealed sterile system allowing them to complete the wiring procedures in-situ over days or weeks or even longer.
Test Procedures would begin with tests on artificial model organs, then progress to dead animals, then to live animals, and then, once perfected in animals, to humans. For someone with a terminal illness even a 10% to 20% chance of survival, with life as a robot, may be vastly better than death. - Especially for atheists like me who do not believe in a free magical afterlife.
-- --- -- - -- --- -- - -- --- -- - -- --- -- -


MST-6. Short Term Focus : Strong AI - Global Strong AI Consortium.
A surrounding global regulatory and advocacy agency implemented to deal with the legal and moral issues surrounding Strong AI and ASI machine intelligence... complementary to my own project and to all future AI research.
Aims (suggested) : Developing Strong AI, developing legal frameworks, defining the moral issues and politics surrounding Strong AI, global and national safety issues, social integration between humans and machines, furthering the implementation and use of Strong AI, etc.

- Almost infinite possibilities and infinite dangers await in the evolution of machines that can think.
Developing and overseeing the creation of a safe and stable Strong AI technology core is a primary critical issue. This means developing machine minds, sensors, robot interfaces, and other applications systems such as non-sentient or semi-sentient servitor Semi-Strong AI. Beyond creating the machines themselves we need to develop moral and legal frameworks, address the philosophical issues, and develop Strong AI within a global business framework. Also important is building a secure closed model development plan, with absolute unbreakable security to protect both the machines themselves and humanity.
[Still very preliminary - at the moment barely a list of suggestions.]

Perhaps the greatest thing we can gain from a powerful Strong AI Consortium is the ability to avoid the incipient disaster of poorly designed and dangerous Strong AIs becoming part of the world - or at least to ameliorate some of the worst effects. The countdown towards some random project becoming sophisticated enough to accidentally create a working Strong AI without a safe or stable design core gets shorter every day.
For Example :- The 'Terminator' films are a very good bad example of how spontaneous 'gestation' might happen. Remember that Skynet was born as a spontaneous accidental event in a non-sentient intelligent machine that became an uncontrollable snowball. I do not consider this an impossible or even unlikely scenario - it is one that may already have come very close to happening, maybe many times - but it is a scenario that we can still avoid, at least for today.

Minimum Cost to safely and securely build and develop the basic core of a Strong AI technology - £10 million to £50 million.
Minimum Cost to set up and implement an international global controlling Strong AI Consortium - £50 million to £20 billion.
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --


MST-7 : Artificial Biomes - 'Eco Core' Industrial Bio-farm Ecological System. Research Targets : As with the long term version, but with shorter, simpler aims pursued on smaller scales.
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --

-- --- -- - -- --- -- - -- --- -- - -- --- -- -









A Work In Progress : SETI Analysis[edit]

SETI - The Search for Extra Terrestrial Intelligence - [Semi Rough Version - EDIT 70% Complete / EDIT - 27-05-17]


Introduction.. Much of what is written or said about SETI is either totally unrealistic or ignores one piece or another of basic science. The simple truth is that SETI is not easy, and that detecting or transmitting signals to other potential civilizations is incredibly difficult. Today SETI really is a work barely begun. My work on Strong AI and on FTL physics has given me new and interesting perspectives on understanding and engineering for the SETI problem.
- Firstly, FTL physics is not that hard. Though it is not yet clear whether any kind of FTL technology will ever be possible or practical, it is also clear that FTL travel is almost certainly not completely impossible, at least at a theoretical level. Also, FTL travel and communication do NOT follow the rules laid down by special relativity. In any sane FTL model time remains point-like at FTL speeds, so the predicted problems with causality breaking and time reversal simply do not exist. (The Galilean geometry is approximately preserved.) This has massive implications for almost every aspect of the whole SETI debate.
- Secondly, Strong AI technology is hard. While a basic working machine seems to be relatively easy to achieve, building a machine or machine society that can survive and function in isolation for hundreds of years seems to be an incredibly difficult problem. From this perspective self-replicating Von Neumann Probe machines that travel throughout the galaxy or beyond at STL speeds (for tens of thousands of years) do not look like a very viable proposition. The basic problem is boredom and entropy - long term 'psychological' instability and hardware unreliability - in the extremely limited, unchanging, and high radiation environment of deep space.


Starting Point - 'The Fermi Paradox'. There is a common paradigm in SETI research called 'The Fermi Paradox', which asks the basic question: 'If the universe or galaxy is full of other alien life then why haven't we detected it?' The real answer to this is very simple - space is very, very big and full of an incredible number of stars, and by any reasonable set of calculations the number of civilizations out there is relatively very small. There are two critical bounding windows to the SETI communication space - the time window and the spatial window. These bounding windows are joined together by a ruling element - the physics of the space-time geometry, which in turn is defined by the very slow speed of light at interstellar scales. The resulting overall window sets a margin of some 10 to 10,000 'detectable' civilizations in our galaxy today. This sets the number of civilizations out there that we can detect, or that can receive us, at a ratio of very roughly between 1 in 100 million and 1 in 10 billion stars. This appears to be a totally untenable set of odds for simply finding an alien civilization in our galaxy at random. The only way to balance such odds is to make detailed scans of billions of stars, and such scans must not be momentary but go on for years or decades...

Fermi Extrapolation Example : Let us imagine a human interstellar civilization based on FTL travel. We have expanded our influence into a sphere of space with a radius of some 1000 light years and maybe 13 million stars (based on 3.1 stars per 10x10x10 light year cube (10x10x10 LY = 1000 LY^3) ). We have occupied roughly 100 star systems, and have robotically explored maybe 20,000 star systems. - In this model we only have a roughly 1 in 650 chance of encountering an equivalent alien civilization in the same space. (Like all such extrapolations this is a pretty vague calculation and the real odds may be considerably higher or lower.)
This Fermi Extrapolation Example is given to put our own efforts at SETI in perspective. At first glance the odds today look insurmountable, but that may not be true. If we use science and our minds and enough technology we can reduce even the current incredible odds of maybe ~ 1 in 1 billion to something we can surmount.
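As a check on the arithmetic, here is a minimal Python sketch, assuming the same figures quoted above (3.1 stars per 1000 LY^3 and 20,000 explored systems):
 import math
 # Assumptions taken directly from the extrapolation example above.
 star_density = 3.1 / 1000.0    # stars per cubic light year
 radius_ly = 1000.0             # radius of the sphere of influence
 explored = 20_000              # robotically explored star systems
 volume = (4.0 / 3.0) * math.pi * radius_ly ** 3   # cubic light years
 stars = star_density * volume                     # ~13 million stars
 print(f"Stars in sphere   : {stars:,.0f}")
 print(f"Chance of contact : 1 in {stars / explored:,.0f}")   # ~1 in 650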


Answering the Fermi Paradox with the Fermi Distance. If we take the Fermi Paradox we can extrapolate a basic defining value: the 'Fermi Distance' - the distance across space from which a civilization would be able to detect its own transmissions.
- For a directed 100 MW focused laser beacon the maximum Fermi distance may be (very roughly [*]) 10 to over 100 light years.
- For a directed 100 MW focused radio beacon the maximum Fermi distance may be (very roughly [*]) 5 to 30 light years.
- For general broadcast 100 MW radio beacons the maximum Fermi distance may be (very roughly [*]) 1 to 5 light years.
- With only ordinary current radio noise the maximum Fermi distance may be (very roughly [*]) 0.5 to 2 light years.
[NOTE * : All distance figures are guesstimates and need expert recalculation]
So, as our technology exists today, the chances of discovering a comparable alien civilization are basically zero. The basic truth is that any form of radio broadcast to interstellar space requires absolutely phenomenal amounts of broadcast power, and the power required grows with the square of the distance. Very roughly, for a focused radio beam 10 Megawatts of power ≈ a range of 1 light year, and 100 Gigawatts ≈ a range of 100 light years.
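A minimal sketch of this inverse-square scaling, taking the text's 10 MW at 1 light year as the reference point:
 # Broadcast power scales with the square of the range:
 # P(d) = P0 * (d / d0)^2, with P0 = 10 MW at d0 = 1 light year.
 P0_watts, d0_ly = 10e6, 1.0
 for d_ly in (1, 10, 30, 100):
     power = P0_watts * (d_ly / d0_ly) ** 2
     print(f"{d_ly:4d} ly : {power / 1e9:8.2f} GW")   # 100 ly -> 100 GW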

- - - - - - - - -- - v - -- -- -- -- --


Real SETI : Looking for Deliberate Beacons.
Beacons that we might look for that might be created by advanced civilizations. -

Destroying Stars In Patterns - For an interstellar civilization, exploding multiple stars to create large scale patterns in space could be one way to create a definite long term beacon. There are ~ 400 billion stars in our galaxy alone, so there are a large number of 'spares' available. Obviously it would almost certainly require a large advanced FTL civilization to coordinate the construction of such a beacon or to have the technology to achieve it. Such a 'Star Beacon' would have a long but finite lifespan of maybe 200,000 to 1 or 2 million years, because as the galaxy rotates the stars naturally move and shift until their original relationship is lost.
- Looking for Star Pattern Beacons. - The basic method would involve massively scanning the stars of the galaxy and their motions, then using the database to look for old remnants and other events that might look like large scale patterns. A basic method of looking for older star beacons is to extrapolate and reverse the galactic rotation to look for previous historical patterns.
Gravity Wave Beacons. - Gravitational waves have recently been detected. If an extremely advanced civilization could, for instance, take two heavy (e.g. 20 solar mass) singularities and shake them together, they could create a beacon based on gravity waves.
- Looking for Gravity Wave Beacons. - The technology required to detect a gravity wave beacon is quite advanced, but it is only a further advancement of gravity wave detectors such as LIGO that already exist. It would be expected that a beacon would have a highly unnatural signature, such as some kind of time varying repeating pulse.
Mirror System Beacons - Perhaps one of the cheapest and simplest methods of creating a bright interstellar beacon is to use the light of a local star itself with a ring of giant mirrors (or some more exotic structure). These could be designed to reflect polarized light, or to frequency tune or shift the light of the star so that it stands out and could be easily detected.
- Looking for Mirror System Beacons. - A basic method starts with current astronomy and search methods. A more focused approach might start by looking for things like specially polarized light, and unusual frequency patterns.
Targeted Laser Beam Beacons - Powerful focused Laser beams can be used as interstellar beacons. Either aimed point to point at interesting systems, or to selected stars, or to every star within more general regions (from dozens to tens of thousands of systems or more). The huge advantage of using lasers is that detecting or even transmitting laser beacons is already basically possible today - and the method can be extended easily to an enormous degree as technology advances. Lasers are a far more efficient method of transmission than a general broadcast, meaning that energy budgets can also be correspondingly smaller.
If we imagine an ideal human constructed laser beacon platform :- The first problem we must beat is the Earth's atmosphere, so the ideal construction will be in space. An ideal placement to stay in permanent sunlight and to avoid being occluded by the planet is to park the beacon station at one of the Earth's Lagrange points. From there the technology could go roughly from what is available today to enormous stations providing many megawatts of power, driving large arrays of lasers of whatever size. To be really effective the construction would be a giant space station, and might include giant solar power arrays and/or nuclear power, maybe even nuclear fusion. (assuming 20 to 50 years in the future)
- Looking for Laser Beacons. - The basic technology to look for laser beacons is within the remit of current space and optical technology. The system needs a filter that hunts out coherent light at different frequencies, plus the ability to look at many systems simultaneously, and it must repeat the search frequently over a prolonged period of time. Designs could be made that operate in space, or that operate on Earth using technologies like adaptive optics - probably using current telescopes.

General Broadcast FTL Technology? My FTL model suggests that creating long distance FTL communications of any type might be very difficult. In particular, creating any kind of long range FTL broadcast that is widespread or omni-directional, rather than being aimed point to point, might be particularly difficult and probably next to impossible. A simple truth is that it will probably be extremely difficult to send any kind of information through the FTL medium without some kind of intermediary object. Probably the best or even the only way to send data at FTL speeds is to put it inside some kind of FTL machine or projectile.
- Looking for FTL Beacons - Even if it is possible to send data at FTL speeds, we simply do not have sufficient basic knowledge or understanding to even begin designing a detector. Even after working on it for over ten years I only have a vague notion of how one might be constructed. A starting point might be the kind of quantum discriminators that the human brain seems to use, but magnified in scale and sensitivity by many orders of magnitude.
Using an FTL Transiator as a Beacon ? A different approach to FTL communication is possible. A 'Transiator' is a hypothetical device which potentiates a region of local STL space, making it easier for FTL ships to enter or leave the FTL 'hyper-space' at that point. However a powerful enough transiator just might potentially also be used as a long range FTL beacon. - Though an FTL transiator does not as such broadcast a signal, it might still be detectable at very long ranges by using quantum entanglement or 'potential'. This would require a special type of sensor directly targeted at, or scanning over, the location.
- Looking for FTL Transiator Beacons ? Building a basic transiator as a beacon is potentially 'relatively' easy, although the complexity mounts rapidly with growing power. Unfortunately building a sensor that can detect any kind of long distance transiator is vastly more difficult, and broadly at a level at least equivalent to fairly advanced FTL travel.

Building an FTL Transiator. Suggested hypothetical methods for building a 'basic' low energy transiator device. :-
- High energy quantum coherence. For instance by building specially shaped objects made out of superconducting material. (e.g. a superconducting alloy cooled by liquid helium)
- Creating large empty voids in space. Space does contain a thin spread of matter. Large super empty voids can act as transience disrupters or FTL interference lenses - Transiators.
- Tampering with heat energy. Heat and quantum mechanics interact, nano-materials could be designed to force heat to behave in different ways that create FTL transience.
- Lasers, using interference and the FTL properties of light. (In FTL parlance wave ≈ FTL mode)
- FTL radio. Projecting energy directly into the FTL space using specially designed radio masts.
- Nuclear Pulse radio. A slightly different approach to FTL radio is to use the 'peak plasmas' produced during nuclear explosions.
- Projecting energy into the quantum vacuum. (Equivalent to FTL radio.)
In reality a 'true' high energy transiator strong enough to allow FTL ships to leave the FTL space is a very different proposition, and is expected to be immensely more complex...(eg rings of captive singularities, force fields, manipulated tachyonic matter, etc, etc)

- - - - - - - - -- - v - -- -- -- -- --


Sending Spaceships As a Method of Interstellar Contact [Needs some Reorganization!!]

Interstellar STL Space Technology (Also See Separate Section :- SPACE PROJECTS : Advanced Concepts : Building an STL Interstellar Spacecraft.)
A basic observation is that human STL speed space travel between the stars is barely tenable even over 5 to 10 light years. Over larger distances it becomes effectively impossible. Five basic solutions exist to circumvent this - reengineer humanity, generational travel, hibernate a human crew, use a purely robotic machine, or 'robot + embryo' - use a robotic machine carrying human embryos.

- Reengineer Humanity. Perhaps the 'simplest' most direct method is to use genetic engineering to create a new breed of humans designed for long term space flight. The basic design points might be - strong constitution, no aging, strong psychology and the ability to tolerate indefinite boredom, resistance to radiation, an improved immune system, and reduced harm from zero gravity. Two obvious problems stand in the way of engineering humans. - The first is that this level of re-engineering requires genetic engineering of phenomenal complexity and sophistication, and achieving that will not be trivial. The second is the moral question: humanity will have to trust in and allow this kind of technology before it can be developed or used, and that will be a major step.
- Generational Travel. The basic idea is to create a complete and self sustaining human community inside a very large ship that then travels slowly over many generations. On the face of it this looks like a relatively simple solution, but on examination it has many problems, and I actually consider it one of the least tenable of all solutions. Basic problems - The human social mix has to be very carefully engineered and controlled to survive for many generations. The ship requires a complete self contained eco-system that must also be stable enough to survive for generations. The real problem is the ship itself, which must be huge and very heavy, must be multiply redundant and able to self repair, and must tolerate radiation for very long periods. The big killer for generational travel is that the size and weight of a generational ship mean that it will be far harder to achieve high velocities or to slow down again later. - This leads to a situation where it is possible for even a trip to Alpha Centauri to take thousands of years.
- Human Hibernation. On the face of it this is a really good idea, and the deeper the level of hibernation the better. The payoff is a machine that can spend a hundred or a thousand years in space but does not need to be huge, because it does not need huge life support systems. Of course the technology of hibernation itself is far from simple. We might go by degrees - hot hibernation, cold hibernation, and cryogenic hibernation. In hot hibernation the body stays alive but in a very low metabolic state - this might allow up to about 5 years of sleep but still requires a small life support system. In cold hibernation the body is shut down completely to a state of being 'dead'; this should be safer and allow longer term sleep, from maybe 5 years up to maybe 50 years, but is technically far more complex. Cryogenic hibernation would be even more complex than cold, but would hopefully allow sleep of up to 100 years or longer.
Potentially even larger extensions to all methods might be made by incorporating some FTL physics into the hibernation technology. In theory FTL coherence can allow the direct manipulation of entropy, and this could lead to a technology that could ultimately extend hibernation almost indefinitely. The same tech could be used to preserve all the machines in a ship and even repair minor damage. In extremis, and with enough power (e.g. 1E20 Joules), the technology could even theoretically resurrect an ordinary dead corpse if carefully enough preserved. In a negative form the same 'FTL interference' plays a direct role in causality and ordinary entropy, and is thus a major reason why humans age in the first place. - The level of technology required to achieve something like 'resurrection' at least approaches the difficulty of full FTL travel itself.
- Purely Robotic. Ultimately highly advanced robotic systems are almost essential for any method of interstellar STL travel. A prime example is work on the outside of the craft and on its engines and power systems which are all likely to be highly radioactive.
The simplest solution that solves all the human problems is to remove humans from the equation completely and have an entirely robotic machine. Fully robotic interstellar machines must be 100% fully autonomous, and this is only really possible with fully sentient self-aware machines.
An obvious extension to a fully robotic space craft is to extend the capacity for self repair to the point of full self replication, allowing it to build a whole species of explorer machines. These so-called 'Von Neumann probes' then slowly travel out, replicate, and spread out across interstellar space, in theory inevitably eventually encountering other alien civilizations.
There is an obvious fly in the robotic ointment. The levels of reliability required of the hardware and software needed for self-aware machines just do not look tenable in the harsh environments and over the long timeframes required. Hundreds of years bathed in radiation, in a permanently static unchanging environment, plus the sheer size and complexity of the self replicating system and the rest of the machine, all incur a huge number of potential points of failure. To achieve the maximum capability and chances of success we could combine a fully autonomous robotic system with a small human crew in hibernation or genetically engineered to survive space.
Another possible solution to robotic development is nano-technology assemblers. Of course assemblers also supply their own plentiful mountain of problems. Firstly, assemblers are another very complex and difficult technology to develop. Even once developed, assemblers are likely to take years or decades to actually build anything on larger scales, will require huge amounts of energy, and are potentially very fragile and vulnerable to radiation. The big advantage of course is that assemblers are very, very small - a Von Neumann probe built using assemblers might only have a core payload weight of a few kilograms, downscaling the whole travel system to sane levels. Another simpler though less extreme compromise might be to use so-called biological synthetics - machines constructed out of living cells, whether of artificial or natural origin.
- Robot + Embryo. - Identical to the fully robotic option but carrying a set of human embryos which the machine then raises at the destination to create a new human population. It does conjure up the possibility of aliens experimenting with humans if they find or capture such a machine before it has completed its cycle. (the real cycle may take 30 years or more once at a suitable destination)
Shockingly, combining the elements of both assemblers and embryonic systems, we have the theoretical potential to seed interstellar human colonies using STL travel within about 30 years.

- - - - - - - - -- - v - -- -- -- -- --


Interstellar FTL Space Technology (Also See Separate Section :- SPACE PROJECTS : Advanced Concepts : Building an FTL Drive Interstellar Spacecraft.)
So is interstellar FTL travel of any kind a viable future technology? The basic answer is theoretically probably yes. If special relativity is correct then the Alcubierre warp drive or wormhole travel become theoretically possible. However I will assume that a basic absolute frame FTL type physics model is correct instead. In this case there is a very good chance that true FTL travel will one day become possible, especially if negative mass antimatter can be made or acquired, as that would massively drop the energy requirement to the point where FTL travel would be possible with relatively simple technology.
That said, FTL travel is a very complex problem and carries a number of caveats that could make it either very difficult or impossible. (This applies mainly to the basic development of the technology; over time further advancements would be expected to solve most such problems.)
- One Way Trips. FTL travel divides into three basic elements - FTL insertion, FTL coasting phase, and FTL Escape. The part of the ship that reaches FTL speeds is far smaller than the jump/launch engine, and the part that leaves the FTL space may be smaller again. (exactly the same problem as with current rockets and Earth orbit.)
- No communication back except by radio. As said before, communication back from an FTL destination is basically limited to ordinary radio. The only real way to achieve FTL communications back is to send an FTL ship back, and this will at minimum require the construction of a new launch engine at the destination. (A comms ship could be very small, little more than a computer with a hard disk and a radio beacon to transmit the data signal to Earth once back in the Sol solar system.)
- 'Dead Stick' FTL Navigation. Navigation is by pure inertial vector - i.e. blind and dangerous. The destination angle is set before FTL insertion/launch, and the time to remain at FTL speeds is set by an onboard timer. Even making this timer work is a very difficult problem because of time lock / time dilation. While at FTL speeds a ship becomes completely blind and completely unable to manoeuvre, and must follow the vector it is already on.
- FTL Travel may not be possible for People. In short people may not be able to survive at various points in an FTL flight, either due to gravitational shock, acceleration, extreme dilation, radiation, or other factors. One suggested (early) solution is frozen state hibernation, another is some kind of structural webbing woven into the crews brains and bodies.
- Expect Low Top Speeds and Limited Ranges. Very little is yet known about potential FTL speeds and corresponding ranges but a basic rule suggests the faster a speed is the harder it will be to obtain. (I.e. Some form of KE still increases with increasing FTL speed. This is extrapolated future research.) An initial problem is likely to be that early designs are permanently limited to the speed of light. FTL travel range is defined by the cruising speed multiplied by the time spent at FTL speeds. - Unlike relativistic travel the core of the ship must almost certainly remain at zero dilation using technology to maintain its integrity and transience. The alternatives are death through time-stop or death through instant disintegration.
- - - - - - - - -- - v - -- -- -- -- --


FTL Direct Contact?
With FTL travel the obvious way to search for an alien civilization directly is to visit and explore as many solar systems as possible. Without FTL travel ourselves, the only way this can happen today is if an alien species visits our solar system and encounters us. However even with FTL travel there may be a potential problem with direct contact. To cross back from FTL space to ordinary STL space an FTL ship requires a sufficient 'STL Transience' at the destination 'landing' point. - Unless the ship can generate this transience itself it cannot exit the FTL space, and so may require that a special 'transiator' device be built at any given destination first. (as mentioned in the FTL ship section) The need for such 'gateways' would fundamentally limit the speed of alien or human FTL exploration, because new gateways could probably only be moved at much slower STL speeds, and no one could reach a new destination until a gateway had been built there. If this is true, and if we really do want to meet aliens, then maybe we should focus on building a transiator gateway here in the solar system. (See the section on building an FTL Transiator above.)

Malevolent Alien Invaders..? It may be obvious, but not having a transiator, or building an 'anti-transiator', could easily stop potentially malevolent aliens from invading our solar system and our space. Of course doing this might itself be considered an act of war. There is also the tiny problem that the more advanced an alien's technology, the less likely they are to need the help of artificial spatial transience. If a real malevolent alien species were to invade, it must be pointed out that to a species with FTL technology nuclear weapons and missiles would probably be no more threatening than bows and arrows, and surrender might be a very good option. A very good reason to start funding the development of FTL technology today.

- - - - - - - - -- - v - -- -- -- -- --
Hope You Enjoy this work : A Work in Progress.
-- --- -- - -- --- -- - -- --- -- - -- --- -- -








SPACE TECHNOLOGY : Preliminary Background Concepts to Space Flight - Interplanetary Travel[edit]

Interplanetary Travel : A general basic summary of the science of manned interplanetary travel. (planet to planet)
[ROUGH DRAFT - much work still needed...] [Current EDIT 60% Complete - 11-01-16]

A basic truism about space is that in reality space technology is always a compromise - a very complex, difficult, and potentially dangerous balancing act. No currently possible scenario is more complex or more difficult to balance than travel beyond Earth's orbit: manned missions to the planets, their moons, or to other smaller bodies like asteroids or comets.
Rocket science isn't just a science, it is also an art. You need the science of course - the equations and the computers and the simulations. But the 'Design Space' of space travel is just so large that by themselves these methods can get lost and will often fail to give the best answers. It is a real truism that mathematics can be almost as good at lying and concealing the truth as it is at telling the truth. (Like almost everything else mathematical systems grow by iteration, and the more you explore a given thought space the more iterations you get there.)
One of the key problems of the space industry in the current era is that it has lost a lot of the art and hubris side that was carried by people like Von Braun and Goddard and the old NASA. This is one of several reasons why the public have lost interest in space and progress in space technology has ground to a crawl.
Specifically, to use a central example : The overall design for a manned mission to Mars requires a very complex balance between maybe a dozen different variables. - This includes some very difficult compromises and choices, such as crew safety versus comfort.
This section is about trying to give a little of the flavor of this design space. The section below on rocket technologies fills out more details. (And as very often happens, even asking the question shows that the current answer is wrong.)


Primary variables-

Journey Time / Transit Time : The time it takes to complete a particular transit through space. Journey time is one of the most important variables and targets in mission design, and has a massive effect on overall planning both short term and strategic. A specifically defined journey time window locks a mission to a specific set of trajectory and thrust budget requirements. Equally a specific thrust budget locks a mission to a specific set of trajectories and minimum possible journey times.
Journey time - affects many other critical factors and for manned missions should obviously be minimized as much as possible.
-> Longer Journey Time : Cheaper in terms of thrust budget (primary). Crew supplies required are increased. Crew cumulative radiation exposure is increased. Crew cumulative psychological stress increases along with the potential for psychosis and mental breakdown. Increased wear and tear on overall ship and systems. Overall the general safety margins and the baseline for overall mission survival reduce as journey time increases.
-> Shorter Journey Time : More expensive in terms of thrust budget (primary). Crew supplies required are reduced. Crew cumulative radiation exposure is reduced. Crew psychological stress is reduced. Lower wear and tear on overall ship and systems. Overall survival margins tend to increase.
The increase in thrust budget with faster journey times is non-linear and non-simple because the environment and trajectory space for interplanetary space travel is complex and non-linear as well as being dynamic - always in motion.

Payload Mass : The payload mass defines the overall size and capabilities needed at the destination for any given mission.
In most manned missions there is an Outbound Payload Mass which includes the crew capsule, the crew, crew supplies, any landers, and everything else needed to do the mission, plus the propellant mass needed to return home.
There is also a Return Payload Mass, which is generally much smaller and includes only the crew capsule, crew, supplies, plus any mission samples, plus any braking propellant.
It is fairly obvious that a larger mission can carry more supplies and equipment and can have a larger crew and/or better longevity, which will all allow it to achieve more than a smaller simpler mission. (In very simple 'sailing' terms a large ship can carry more cargo than a one man rowboat.) Of course the payload does not always need to be delivered in just one mission, and it is generally regarded as easier and more efficient to use several (generally slower) one way cargo flights, plus one (faster) two way passenger flight. See 'Vehicle Design Space' below.

Thrust Budget / Delta V Budget : The overall Thrust Budget for a mission is generally the single most important variable in space travel. The size of thrust budgets forms a complex non-linear design space, based on - engine technology, fuel carried, thrust level, thrust efficiency. The size of requirements for thrust budgets is equally complex - chosen trajectory plan, mission weight, Journey time, the Oberth effect. Thrust and thrust budgets are covered in more detail below.

Vehicle Design Space. -
Vehicle Size. - As a general rule larger vehicles tend to be safer and more resilient (fault tolerant) and more effective. A higher mass allows a larger vehicle with more space on board for the crew and better levels of comfort, plus better safety features and radiation shielding, even artificial (rotational) gravity - all of which can add together to have a large positive effect on crew psychology and morale. - In theory larger vehicles allow much longer and slower transits.
Single Combined Lander versus Orbital Transit Vehicle plus Lander. - Current plans for Mars landings focus on designs where a single vehicle combines aero-braking with atmospheric descent and landing. In terms of the descent and landing this seems more efficient, but in terms of the return journey it is considerably less efficient. A more advanced approach is to leave the main vehicle in orbit (as in Apollo) while exploring the target planet using landers. If a main vehicle carries two out-and-return landers this also increases redundancy and safety considerably.
Reusability. - This is a potentially important question; in simple terms we can justify a much larger and more comfortable vehicle if the costs can be mitigated by its later reuse. Certain types of rockets, like nuclear, make this a better option because the reaction mass (RM) fraction and cost are a smaller component and the engines themselves have a relatively high natural reusability. Reusability is also a natural fit with orbital transit and separate landers.
Earth Return - is another issue. Once our vehicle arrives back at Earth the crew must somehow get back to the ground.
- Perhaps the simplest option is to carry an Earth return capsule the whole way attached to the transit vehicle. This has a good safety margin but a considerable penalty in weight and fuel.
- A second option is a combined system where the same vehicle basically does everything - crew quarters, outbound lander & lifter, and Earth return lander. The real problem here is that the vehicle has to survive the abuse of multiple launches and landings, making potential damage a big issue; the vehicle also cannot easily be reused after the mission.
- A third option is to have an orbital shuttle or capsule rendezvous with the vehicle on Earth return and carry the crew back to Earth. This is probably the most efficient solution and has a good basic degree of safety, but may be very psychologically uncomfortable for the crew.


Journey / Mission Type. The basic choices are either outbound one way, or several options for outbound and return. One of the heaviest supplies for any long range manned mission is the reaction mass (RM) rocket fuel needed for retro braking and for the return trip.
- If atmospheric aero-braking can be done at either the destination or the return then this cuts the total reaction mass for that leg by up to half. (aero-braking requires a useable atmosphere) The price is that aero-braking puts the vehicle and its occupants at considerable risk, and the faster the closing speed the greater this risk becomes.
- If the reaction mass fuel for a return trip can be made or found at the destination then this effectively removes the return leg fuel costs, potentially reducing the total outbound fuel cost by up to about 75%.
- If the fuel carried is insufficient for a two way trip, another solution is to send a separate fuel transport to the destination first. This will add about 30% to 40% to the fiscal cost of the mission.
- Another alternative is one way travel with semi-permanent life at the destination. This will probably require later supply missions, and may become very hard on the astronauts if they decide they do not like the destination, or if things go wrong.
Fuel Totals. These figures can add up in very different ways, so that the cheapest options may be many times cheaper in reaction mass fuel cost than the most expensive. (If each thrust cycle requires RM equal to the payload mass then the maximum cost differential is something like 16 to 1 - see the sketch below.) These costs can be so high that even the most efficient/powerful rocket technologies benefit significantly from refueling at the outbound point. With less efficient/powerful rocket technologies a lack of refueling possibilities makes manned missions to other planets almost untenable without extra supply missions, and very long slow journey times almost the only available option.
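One way to arrive at that differential, as a hedged sketch: assume four powered legs (outbound departure, outbound braking, return departure, return braking) and, per the text, reaction mass equal to the mass pushed on each leg (a mass ratio of 2 per leg). The leg count and the best-case scenario are illustrative assumptions, not figures from a real mission design.
 # Each powered leg doubles the stack it pushes, so carrying n legs of
 # propellant from Earth costs payload * (2^n - 1) in reaction mass.
 def propellant(payload, powered_legs):
     return payload * (2 ** powered_legs - 1)
 worst = propellant(1.0, 4)  # all four burns carried from Earth: 15 x payload
 best = propellant(1.0, 1)   # aero-braking plus destination refuelling
 print(f"Cost differential ~ {worst / best:.0f} to 1")  # ~15, i.e. "16 to 1"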

System Mass - defines the overall total size of the outgoing rocket system as it will leave Earth orbit. This differs from the payload mass as it includes the total outbound and return payload mass plus the rocket system including all its reaction mass fuel, plus any extra support structures. It forms the primary cost of putting the mission into orbit. The return payload reaction fuel mass may be part of the system mass, or included separately if it is transported separately.

Design Space Summary : The major cost of a larger system is that it will move more slowly (have a lower delta V) for any given thrust budget, but if the thrust system is scaled with the vehicle this is not a problem. - Larger systems tend to be considerably cheaper per unit of mass than smaller systems though the overall base cost of course rises. Four basic ways of bringing down the long term costs of space are - to build large, to make as many components as reusable as possible, to use higher energy more powerful rocket engines, and to build on principles of mass production and standardization. - On exactly the same principles and scales as airliners or large ships. (Comparatively a rocket costs broadly the same as a large ship or airliner but carries far less payload and critically today only gets used once.)
- - - - - - - -


Thrust and Acceleration - The thrust is the actual force produced by a rocket. Thrust is normally measured in newtons, the unit of force, or occasionally in terms of mass. A rocket's acceleration = thrust / mass. However the thrust to mass ratio is non-linear, because the rocket's mass reduces as its reaction mass fuel is used; the thrust may be reduced in step with the reducing fuel mass to compensate.
Weight vs. Mass. Weight is the force exerted on a mass by a particular gravity field. On the Earth's surface, weight (in newtons) = mass (in kg) x 9.8 m/s^2; elsewhere the weight is found by multiplying the mass by the local gravitational acceleration instead. Mass itself is the same everywhere - it is the weight that changes with the gravity field.
At any time, for a rocket to lift against any given gravity field its local thrust to weight ratio must be greater than 1.
[The calculation may also be done in terms of Newtonian force, where the rocket's thrust must be greater than its mass times the local gravitational acceleration: F > m x g]
Into orbit. A basic rule is that when escaping from a gravitational field a rocket must waste a part of its acceleration to constantly counteract the gravity field (near the Earth this would be 9.8 m/s^2). To stay in space a spacecraft must either continue to accelerate continuously against any local gravity field to maintain position, or reach a stable and sufficiently high orbital trajectory where rotational force counteracts the gravity field. Obviously the faster a rocket can reach a coasting orbital or suborbital trajectory the more efficient it will be (part of the Oberth effect), but the rougher and harder the ride. - For humans the maximum comfortable sustained acceleration is maybe 1.5 to 2 gravities, and this is what is experienced on most trips to orbit. For planetary insertion and transfer maneuvers thrust will tend to be about 1 gravity or lower. The highest accelerations will tend to be experienced during frictional aero-braking re-entry to a planet's surface, and these can reach 5 or more gravities. (with pulses of about 20 to 30 gravities approaching lethal - thanks Mythbusters.)
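A minimal sketch of the thrust-to-weight rule above (the 500 tonne / 7.5 MN lifter is a hypothetical example, not a real design):
 g = 9.8  # m/s^2, Earth surface gravity
 def twr(thrust_n, mass_kg, local_g=g):
     # Thrust-to-weight ratio; must exceed 1 for the rocket to lift.
     return thrust_n / (mass_kg * local_g)
 mass_kg, thrust_n = 500e3, 7.5e6     # hypothetical 500 t lifter
 print(f"TWR       : {twr(thrust_n, mass_kg):.2f}")        # ~1.53
 print(f"Net accel : {thrust_n / mass_kg - g:.1f} m/s^2")  # after the gravity tax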

Delta V / Delta V Budget - defines the total cost of a given journey and the capability of a rocket in terms of the total change in velocity required. Without direct simulation delta V costs are not always an entirely accurate or reliable variable for various reasons, but delta V allows general purpose, basically-accurate, back of envelope calculations and comparisons.
For interplanetary missions required Delta V changes non-linearly depending on a number of factors that are separate parts of the design process - such as desired journey time, available thrust or acceleration, system mass, reaction fuel mass, journey planning, and chosen/available trajectories. Delta V can only be calculated truly accurately by discrete time based simulation and even then small unknown factors can have a large impact. (for instance a small mass miscalculation can easily leave a rocket unable to reach its destination.)
Specific Impulse - is the efficiency of a rocket measured in terms of thrust produced per unit of propellant consumed per unit time, usually expressed in terms of time and defined as Isp = Exv / gE, where Exv = exhaust velocity in m/s and gE = 9.8 m/s^2, the gravity at the Earth's surface. The unit of specific impulse is the second. Specific impulse is a simple way of directly comparing different rockets or rocket technologies, however it is just as valid to compare them directly in terms of exhaust velocity in m/s.
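For back-of-envelope work, delta V follows from specific impulse via the standard ideal (Tsiolkovsky) rocket equation, delta V = Isp x gE x ln(m0/mf). A minimal sketch - the two Isp values and the mass ratio of 3 are illustrative, loosely drawn from the ranges quoted in the technology list below:
 import math
 g_e = 9.8  # m/s^2
 def delta_v(isp_s, m0, mf):
     # Ideal rocket equation: exhaust velocity times log of the mass ratio.
     return isp_s * g_e * math.log(m0 / mf)
 for isp in (450, 900):   # chemical-class vs solid-core-nuclear-class Isp
     print(f"Isp {isp} s -> {delta_v(isp, 3.0, 1.0) / 1000:.1f} km/s")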


Primary Rocket Technology - The rocket technology defines the exhaust velocity and therefore the overall thrust efficiency of a rocket, and also the amount of thrust that is available. - These two figures together with the system mass and available fuel define the thrust to weight ratio, the available acceleration, and the total Delta V budget of a spacecraft. From this the rocket technology chosen becomes one of the single most important factors in the design of any rocket system or mission plan.
At the ultimate opposite ends of the spectrum might be black powder rockets (fireworks) versus total atomic conversion powered rockets. - It is almost impossible for the first to even get close to reaching Earth orbit, while the second is theoretically capable of short range interstellar exploration, or hundreds of trips to Earth orbit and back without refueling. In number terms this might be a delta V budget of 50 m/s versus 50,000,000 m/s.
Rocket Efficiency. A rocket's thrust is defined by two competing equations that make it extremely difficult to build a truly efficient engine - the first is the balance of momentum, and the second is the balance of kinetic energy. The higher the exhaust velocity of a rocket the more efficient it will be in terms of reaction mass usage, but the less efficient it will be in terms of kinetic energy (wasted in the exhaust). Thus the best way to increase a rocket's thrust efficiency is to reduce its energy efficiency - greatly increasing the energy usage/demand, and/or reducing the overall thrust.
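The competition between the two balances can be made concrete: for a fixed thrust F, propellant flow is mdot = F / v_e while the kinetic power carried away in the exhaust is P = F x v_e / 2, so doubling the exhaust velocity halves the propellant used but doubles the power demand. A minimal sketch (the exhaust velocities are illustrative):
 thrust = 1e6  # newtons, held fixed
 for v_e in (4500, 9000, 18000):       # exhaust velocity in m/s
     mdot = thrust / v_e               # propellant consumed per second
     power = 0.5 * thrust * v_e        # kinetic power in the exhaust
     print(f"v_e {v_e:6d} m/s : {mdot:6.1f} kg/s, {power / 1e9:5.2f} GW")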
(For details on the individual technologies see the list of Rocket Technologies section below.)

- - - - - - - -
Trajectory / Trajectory Type (Course plotted and path followed)
[EDIT POINT / STILL NEEDS REWRITING] Ultimately the path a rocket follows (or intends to follow) is the most important factor in efficiency and journey time. - If a wrong trajectory is chosen, or a trajectory is beyond the rocket's Delta-V budget, the rocket will never be able to make its destination and will probably end up stranded in space ('forever'). Since the margins on many missions can be very thin this can actually be a very real danger - especially if there is no possibility of refueling. I will leave a detailed description of the different standard low-energy trajectories to others, but feel that something should be explained about orbit. -
- Orbit - In general orbit is a matter of a balance of forces - normally a balance between an inward gravitational force from a large gravitational center (called a Primary) and an outward centrifugal force generated by rotation around the same center. In the spatial environment the various orbital factors are very dominant - on the planetary scale, on a solar system scale, even slightly on the interstellar scale; they even remain important, or at least useful, on smaller scales such as orbits around asteroids.
- Orbital Flight Trajectories - Generally orbital type transfer trajectories start with an orbit around one planetary body and end up with an orbit around another planetary body, and depend on the much larger solar system scale orbit of the craft as it moves between them. This solar system orbit has a particular resonant velocity and gravitational gradient at any given radius, which acts as a kind of regulator that sets and limits the speed at which such transfers can be made. Orbital transfers are a pretty complex subject; two well known types are Hohmann transfers and hyperbolic transfers, and the most efficient types (such as for Voyager) are computed directly using complex computer simulation. (A minimal worked Hohmann example is sketched below.)
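As a hedged illustration of the low-energy class, here is the textbook Hohmann transfer delta V between Earth's and Mars' solar orbits (circular, coplanar orbits assumed; planetary escape and capture burns are ignored):
 import math
 MU_SUN = 1.327e20                      # m^3/s^2, solar gravitational parameter
 R_EARTH, R_MARS = 1.496e11, 2.279e11   # mean orbital radii in metres
 def hohmann(mu, r1, r2):
     a = (r1 + r2) / 2                  # semi-major axis of the transfer ellipse
     dv1 = math.sqrt(mu * (2 / r1 - 1 / a)) - math.sqrt(mu / r1)  # departure burn
     dv2 = math.sqrt(mu / r2) - math.sqrt(mu * (2 / r2 - 1 / a))  # arrival burn
     return dv1, dv2
 dv1, dv2 = hohmann(MU_SUN, R_EARTH, R_MARS)
 print(f"{dv1/1000:.2f} + {dv2/1000:.2f} = {(dv1 + dv2)/1000:.2f} km/s")  # ~5.6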

For an object in orbit the normal rules of motion get (partly) turned on their head: height is increased by accelerating in the direction of rotation, decreased by accelerating against it, while the orbit is precessed by accelerating downwards or upwards. It is not quite this simple, because the normal effects don't disappear and the two add together. These rules apply to orbits around planets and also to the orbital environment between the planets, or even the local interstellar environment between stars. - At any given point and velocity one orbital path will be dominant over the others, but there are 'bridges' between them where the different systems merge, and these can be used to navigate throughout the solar system and make interplanetary transits while using relatively little thrust.
- Non Orbital Flight Trajectories. - At the planetary solar system scale the distances are far greater than for close orbits around planets, and so the forces and orbital speeds are relatively far lower. The types of trajectories used by today's space technology depend on orbital rotation and synchronization, which basically locks them to specific thrust and velocity conditions. These trajectories don't work very well with larger Delta V budgets. In fact as the thrust budget increases it rapidly becomes more efficient to treat the system as effectively non-rotating and use direct line trajectories with forward triangulation. A rough guide is delta V budget per leg ≈ starting resonant velocity, so the further away from the sun, the lower the velocity at which this becomes possible. (At the Earth the orbital velocity is 29.8 km/s, Mars is 24 km/s, Jupiter is 13 km/s, and Pluto is 4.7 km/s, etc. - the sketch below reproduces these figures.) Hyperbolic trajectories are a mid-point between the two. At even higher velocities trajectories can even be plotted at any point in the orbital cycle, and lines approaching the sun can be divided away using a midline turn.
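The quoted resonant velocities follow directly from the circular orbit relation v = sqrt(mu / r), as this minimal sketch reproduces:
 import math
 MU_SUN = 1.327e20   # m^3/s^2
 radii_m = {"Earth": 1.496e11, "Mars": 2.279e11,
            "Jupiter": 7.785e11, "Pluto": 5.906e12}
 for body, r in radii_m.items():
     print(f"{body:8s}: {math.sqrt(MU_SUN / r) / 1000:5.1f} km/s")
 # Earth 29.8, Mars 24.1, Jupiter 13.1, Pluto 4.7 - matching the text.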
A good example of a technology that would benefit from non-orbital trajectories is a Super Orion machine, its (theoretical) nominal delta V flight velocity per leg could easily be over 300 km/s - ten times higher than the Earth's orbital velocity.

-- - - - - -

There are many more secondary variables, such as the various types of radiation encountered in space (nuclear, EM, ionic, neutrino), required life support capacity (especially food), required safety factors, construction materials and type, and so on.

-- - - - - + -- -- --- -- -- + - - - - --

SPACE TECHNOLOGY : List of the Primary Rocket Technologies.[edit]

[Basically complete, semi good Edit. Current Edit 90% Complete - 14-03-17]
All figures are estimates based on a mixture of basic research, basic calculation, and extrapolation. My sources include the 'Atomic Rockets' web site, Wikipedia itself, several Rocket text books, old extrapolations, and my own research.
This list focuses on manned interplanetary space travel, a strong primary goal in intermediary space travel. All the technologies listed are capable of relatively high thrust - ionic thrusters and VASIMR and other similar engines are not currently included.
The examples use numbers based on a single one way Earth to Mars transit.
Orbital Lifters - Our primary and currently only access to space. Building a large scale, low cost, mass transit capable lifter technology is a primary requirement to extend the scale and usage of all space travel, including interplanetary manned missions. The orbital lifter 'window' for Earth requires a powerful rocket technology that can achieve a sustained acceleration of at least 15 m/s^2 for around 10 minutes to go from ground to orbit - a short but very powerful pulse. A large scale lifter technology could make chemical rockets a somewhat more viable technology option, although it would possibly boost other methods even further. Orbital lifters are a special issue and really deserve their own section.
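A quick check of that lifter window - sustained acceleration times burn time gives the delivered velocity change:
 accel, burn_s = 15.0, 10 * 60   # 15 m/s^2 for ~10 minutes (from the text)
 print(f"Delivered delta V: {accel * burn_s / 1000:.1f} km/s")   # ~9 km/s
 # ~9 km/s is the right order for low Earth orbit: ~7.8 km/s orbital
 # velocity plus gravity and drag losses.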


Liquid or Solid Chemical (Thermal) Rocket.
+ STATUS : Advanced designs have existed since the 1960's. No complete or operational solution exists today for a manned Mars type mission, but the basic technology is sufficiently ready to construct and launch such a mission within as little as 5 to 10 years.
+ Development R&D : Time 2 to 15 years, Chance of success 98% to 100%, R&D Cost = $100 million to $10 Billion.
+ Base Per Mission Cost = $500 million to $50 billion; Core Efficiency Isp ≈ 300 to 500 seconds.
+ Example Earth to Mars : Journey time 180 to 540 days (6 to 18 months), Survival Safety Factor = 70 to 95 %.
While chemical rockets are seen by many current space technology 'professionals' as the only option for interplanetary travel, a more objective analysis puts them as probably the worst solution that is actually viable. Chemical rockets do work and can be used to construct a viable Earth to Mars manned technology with relatively little work; however, because of their low specific impulse they are generally restricted to low energy trajectories such as the Hohmann transfer, and hence to very long journey times. Current chemical rocket technology is basically untenable for manned exploration of most of the solar system beyond Mars because its use of reaction mass is simply not efficient enough.
The safety margins for chemical rocket design become quite thin even for Mars one-way missions, but especially for Mars return. The only viable methods are either sending extra 'fuel tanker' missions to Mars (which is very expensive and increases failure points), or using on-Mars fuel mining (an unknown quantity). All rocket technologies require some form of reaction mass such as liquid hydrogen, but chemical rockets require fuel and oxidizers such as liquid hydrogen + liquid oxygen, or hypergolic fuels such as hydrazine. These can increase the hazard and complexity of space vehicles considerably - volatile, corrosive, highly explosive when mixed, and some highly toxic. Chemical rockets are inherently more dangerous than most other technologies, and this creates a special layer of extra complexity and difficulty in long duration manned missions.
Basic Assessment : Works and is known, but is underpowered and dangerous. ~ (Transatlantic Comparison : Sails / Oars.)
-- -- -- -- -- --

Nuclear Solid Core (Thermal) Fission Rocket.
+ STATUS : Complete basic engine designs exist (from project NERVA). - Many preliminary designs and several generations of working reactors have been tested. Has the most complete design of all nuclear rocket types and was once at the verge of being tested in space. - Designs need to be recovered from old archives.
+ Development R&D : Time 5 to 10 years; Chance of success 98%; R&D Cost = $1 billion to $10 billion;
+ Base Per Mission Cost = $200 million to $10 billion; Core Efficiency Isp ≈ 800 to 1500 seconds.
+ Example Earth to Mars Journey time (estimate) 120 to 240 days (4 to 8 months); Survival Safety Factor = 85 to 98%.
The basic solid core nuclear rocket was a design in progress between about 1950 and 1973, and has hovered on the edges of NASA's and other space agencies' plans ever since. In a solid core thermal nuclear rocket the propellant gas flows directly through a hot reactor core, acting as a coolant and heating and expanding to generate thrust. NERVA's original design is very easily scalable by building clusters of engines, allowing machines to be as small or large as desired.
- All nuclear rockets work at very high temperatures, and all engine components must be able to handle the temperatures expected. (High temperature alloys and nuclear fuel alloys are the major technologies required.) For solid core engines the maximum temperature the fuel elements can tolerate without melting limits the engine's maximum power and efficiency. A special reactor called Kiwi-TNT was designed to test worst case scenarios, and showed that solid core nuclear rockets are a fundamentally safe technology.
Basic Assessment : A Minimal but reasonably capable and effective solution. ~ (Transatlantic Comparison : Early Steam Ships.)
-- -- -- -- -- --

Nuclear Liquid/Gas Core, Open Cycle, High Energy (Thermal) Fission Rocket.
+ STATUS : Basic design sketches exist. 'Simplest' basic design of any high energy nuclear rocket, but with no physical separation of nuclear material it is potentially very difficult to keep the exhaust clean (free of radioactive particles), and this makes the design potentially very dirty.
+ Development R&D : 10 to 20 years; Chance of success 40 to 60%; R&D Cost = $5 billion to $40 billion
+Base Per Mission Cost = $2 billion to $20 billion; Core Efficiency Isp ≈ 1500 to 3500+ seconds. (guesswork)
+ Earth to Mars : Journey time 30 to 120 days (4 weeks to 4 months); Survival Safety Factor = 70 to 98%.
Increasing the temperature of a nuclear reaction increases the efficiency of the reaction, using less propellant fuel and creating more thrust. The basic way to do this is to run the reactor with the nuclear fuel in a liquid or gas state, roughly doubling or even quadrupling the power and efficiency of the reaction. (by quoted estimation) The simplest high energy system is open cycle. However the price is that the liquid / gas state nuclear fuel is not strictly confined away from the rocket stream, and some is very likely to escape in the exhaust. In the design most often quoted the fuel is confined largely by aerodynamic forces. This is a design not really safe for use in the Earth's atmosphere.
Basic Assessment : Powerful but with Dirty Exhaust. ~ (Transatlantic Comparison : 1920's era steam turbine cruise liner.)
-- -- -- -- -- --

Nuclear Liquid/Gas Core, Closed Cycle, High Energy (Thermal) Fission Rocket.
+ STATUS : Basic design sketches exist, development is at the sketch stage. High energy type with zero emissions at the cost of lower efficiency.
+ Development R&D : 10 to 20 years; Chance of success 50 to 90%; R&D Cost = $5 billion to $30 billion
+ Base Per Mission Cost = $4 billion to $20 billion; Core Efficiency Isp ≈ 1500 to 3000 seconds theoretical (WP).
+ Example Earth to Mars : Journey time 45 to 160 days (6 weeks to 5 months); Survival Safety Factor = 90 to 99%.
Similar to the open cycle engine but without the containment issues. The reactor is run in a very hot state with the fuel in a fluid or gas state. The closed cycle engine completely seals the fuel chamber from the exhaust, making most of the physical design relatively simple but the materials design more complex. The reactor transfers energy to the rocket fluid by thermal EM radiation through a transparent wall, creating an effectively completely clean design. Some efficiency may be sacrificed, and there is the obvious danger of wall failure - but as long as these problems can be mitigated or solved this is almost certainly the best all round nuclear engine design on the near horizon.
As Anthony Tate has shown in the 'Nuclear Liberty GCNR' design gas cooled closed cycle engines could be particularly attractive for Earth ground to orbit lifters. - Potentially allowing a very large, single stage, and reusable machine with all the benefits that this brings. - Huge payloads, huge savings through single stage and reusability, high safety margins, high efficiency, savings through economy of scale - all leading to a near 'aircraft like' mass transit ability to Earth orbit and beyond, and massively reduced costs.
Basic Assessment : Powerful compromise with Clean Exhaust. ~ (Transatlantic Comparison : 1920's era steam turbine cruise liner.)
-- -- -- -- -- --

Nuclear Micro-Pulse Explosion (Thermal) Rocket. ('Project Orion')
+ STATUS : Semi-complete basic design exists, though it is some 50 years old and in need of 'modernization'.
+ Development R&D : Time 5 to 15 years, Chance of success 85%, R&D Cost = $5 billion to $20 billion.
+ Base Per Mission Cost = $4 billion to $20 billion; Core Efficiency Isp ~ 1000 to 1500 seconds. (total guestimate)
+ Example Earth to Mars : Journey time 40 to 120 days (6 weeks to 4 months), Survival Safety Factor = 95 to 99%.
Propulsion based on the idea of riding the wakes of a stream of miniature nuclear explosions. Although the idea is somewhat bizarre it is potentially one of the most versatile and useful space thrust systems. Ironically (because of their high payload mass limits) pulse nuclear rockets actually tend to deliver the lowest radiation load to crews of any technology on the current horizon for manned interplanetary travel; chemical rockets tend to deliver the highest. The original standard interplanetary Orion system would have used small nuclear bombs (0.1 Kt) to accelerate and is thus limited in absolute payload mass and thrust budget, though these limits should still allow system weights in the range of 200 to over 1000 tons.
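A minimal delta-V sketch for a standard Orion type ship using the Tsiolkovsky rocket equation - only the Isp range comes from the figures above; the masses and pulse unit count here are illustrative assumptions:

from math import log

g0 = 9.81
isp = 1200.0               # s, mid range of the 1000 to 1500 s guess above
v_e = isp * g0             # m/s, effective exhaust velocity
dry_mass = 400e3           # kg, ship structure plus payload (assumed)
pulse_unit_mass = 1000.0   # kg per bomb unit (assumed)
n_units = 600              # number of pulse units carried (assumed)

m0 = dry_mass + n_units * pulse_unit_mass   # full mass at departure
delta_v = v_e * log(m0 / dry_mass)          # Tsiolkovsky rocket equation
print(f"Delta-V ~ {delta_v / 1000:.1f} km/s")   # ~10.8 km/s with these numbers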
Micro warhead bomb units need much further design work, and their mass production is itself a complex issue. There are other issues with the potential declassification of very low radiation pure fusion warheads, and the standard problems with anti-nuclear propaganda and paranoia are exacerbated here.
Basic Assessment : Very Powerful but Rough ride. ~ (Transatlantic Comparison : 1945 era high altitude propeller plane.)
-- -- -- -- -- --

Nuclear High Energy-Pulse Explosion (Thermal) Rocket. ('Super Orion')
+ STATUS : Partial early stage design exists though it is some 50 years old and would need much work.
+ Development R&D : Time 10 to 20 years, Chance of success 60%, R&D Cost = $10 billion to $100 billion.
+ Base Per Mission Cost = $10 billion to $60 billion; Core Efficiency Isp ~ 1,500 to 10,000 seconds. (est)
+ Example Earth to Mars : Journey time 7 to 20 days, Survival Safety Factor = 90 to 98%.
This technology was designed explicitly for the manned exploration of the outer planets and short range interstellar travel. - Super Orion would use much larger and more powerful bombs than the standard Orion, with a power per bomb unit of maybe 1 Megaton (ie ~ 10,000 times more powerful than standard Orion). The larger bombs offer much greater efficiency, power, and power to weight ratio. (A 1 megaton bomb might only weigh as little as 10 times more than a 0.1 kiloton bomb, so its thrust efficiency can be as much as 1000 times greater.) Super Orion allows, and actually requires, a huge ship with a flight mass of maybe 50,000 to 100,000 tons or even more. This enables a Super Orion ship to carry a vast store of bombs and cargo, which gives it the absolutely enormous Delta V budgets needed for interstellar journeys. - However the price of this power is the need for the pusher plate and ship to survive vastly more destructive explosion wakes, with far higher levels of vibration and thermal and nuclear radiation. Super Orion is a considerably more complex and difficult technology than standard Orion and I do not know whether it will ever be possible as a practical system. As with standard Orion there are issues with the declassification of low radiation pure fusion warheads and with anti-nuclear paranoia, but these are exacerbated even further.
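A quick arithmetic check of the yield-per-mass scaling claim above (the 150 Kg small unit mass is the figure quoted later in the asteroid defence section; the ~10x mass for the large unit is the assumption from the paragraph above):

small_yield_kt, small_mass_kg = 0.1, 150.0    # standard Orion pulse unit
big_yield_kt, big_mass_kg = 1000.0, 1500.0    # 1 Mt Super Orion unit at ~10x the mass

small_ratio = small_yield_kt / small_mass_kg
big_ratio = big_yield_kt / big_mass_kg
print(f"Yield per kg of bomb unit: {big_ratio / small_ratio:.0f}x greater")   # -> 1000x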
A huge side application for a Super Orion system would be in the threat target propulsion side of asteroid defence, or even in asteroid mining. For details of either Orion system consult George Dyson's book 'Project Orion'.
Basic Assessment : Extremely Powerful but Frightening. ~ (Transatlantic Comparison : 1970's era Jet Airliner.)
-- -- -- -- -- --

Nuclear Fusion Rocket.
+ STATUS : Many basic theoretical designs exist, but the basic technology simply does not yet exist to make fusion based rockets practical.
+ Development R&D : 15 to 40+ years, Chance of success 30 to 60%, R&D Cost = $20 billion to $200 billion
+ Base Per Mission Cost = $10 billion to $40 billion; Core Efficiency Isp ≈ 5,000 to 70,000 seconds theoretical (WP).
+ Example Earth to Mars : Journey time 15 days to 120+ days (2 weeks to 4+ months), Survival Safety Factor = 90 to 99.9%.
In terms of energy generation fusion technology is almost on the verge of being able to power a rocket, but it has one huge problem - weight. The ITER reactor design under construction will have a reactor core weighing some 7000 tons; double that for the rest of the equipment needed to make it work, and then add even more for the initiator power supply, which itself must be extremely powerful. The system ends up weighing some 20,000 metric tons with roughly the same output power as the 38 year old 10 ton NERVA design fission rocket. To put it simply - as a rocket engine, nuclear fusion currently does not fly.
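A crude specific power (watts per kilogram) comparison using the masses quoted above, taking both systems as roughly gigawatt class - an assumption standing in for 'roughly the same output power':

nerva_power_w, nerva_mass_kg = 1.0e9, 10e3     # NERVA class fission rocket (~1 GW assumed)
fusion_power_w, fusion_mass_kg = 1.0e9, 20e6   # ITER derived system, 20,000 tons

nerva_pw = nerva_power_w / nerva_mass_kg       # ~100,000 W/kg
fusion_pw = fusion_power_w / fusion_mass_kg    # ~50 W/kg
print(f"Fission rocket ~{nerva_pw:.0f} W/kg, fusion system ~{fusion_pw:.0f} W/kg")
print(f"Mass penalty: roughly {nerva_pw / fusion_pw:.0f}x heavier per watt")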
Designing a fusion system that can work as a rocket turns out to be a phenomenally difficult task, but a starting point is to remove as many excess links as possible in the power chain that leads from the reactor to the exhaust. A design that might work is to use a power reactor for core energy generation and specialized thruster reactors to provide the actual propulsion. Fusion is potentially a large theoretical step up in power and capability from fission designs, and its very high intrinsic efficiency (= high exhaust velocity) makes it potentially ideal for commercial interplanetary tourism, colonization, or interstellar voyages. Fusion offers lower radiation and slightly better safety margins than nuclear fission.
Basic Assessment : Very Powerful but Complex and Ultra Heavy. ~ (Transatlantic Comparison : 1970's - supersonic airliner to Ballistic rocket (ICBM).)
-- -- -- -- -- --

Total Atomic Conversion (eg. Quantum Conversion or Matter-Antimatter Annihilation).
+ STATUS : Basic hypothetical theories exist but most methods are not yet even in their infancy.
+ Development R&D : Time 15 to 100 years, Chance of success 20 to 50%, R&D Cost = $5 billion to $1 trillion.
+ Base Per Mission Cost = $5 billion to $100 billion; Core Efficiency Isp est 2,000 to over 20,000 seconds. (Est)
+ Example Earth to Mars : Journey time 1 to 10 days, Survival Safety Factor = 99 to 100%.
An atomic conversion powered spacecraft could potentially deliver hundreds or thousands of tons of cargo almost anywhere in the solar system, and within days or weeks to, say, Mars. The basic technology for creating and handling antimatter in quantity is only in its earliest infancy. There are also problems with the types of energy release/radiation produced in matter-antimatter conversion, which may limit propulsion efficiency.
I have worked briefly on the theoretical side of atomic conversion under the guise of 'Quantum Conversion', but at the moment the technology simply does not exist to make it work - even at a theoretical level. Good starting points might be advanced nanotechnology engineering and a more complete understanding and model of quantum mechanics.
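For scale, a minimal sketch of the energy side - assuming perfect annihilation (real systems would lose much of the energy to neutrinos and hard gamma rays, as noted above) and a hypothetical 1000 ton ship:

c = 3.0e8                      # m/s
fuel_kg = 1.0                  # 1 kg of matter plus antimatter annihilated
energy_j = fuel_kg * c**2      # E = m*c^2, about 9e16 J

ship_mass_kg = 1000e3          # hypothetical 1000 ton ship
delta_v = (2.0 * energy_j / ship_mass_kg) ** 0.5   # if every joule became kinetic energy
print(f"E = {energy_j:.1e} J, ideal delta-V for 1000 tons ~ {delta_v / 1000:.0f} km/s")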
Basic Assessment : Incredibly Powerful - Apex of Technology. ~ (Transatlantic Comparison : Fast Ballistic Rocket (ICBM).)
-- -- --

Gravity Manipulation
+ STATUS : Totally speculative. Requires major or colossal advances in basic physics.
+ Development R&D : Time 15 to 100+ years, Chance of success 5 to 20%, R&D Cost = $200 million to unknown.
+ Base Per Mission Cost = totally unknown; Core Efficiency Limited by the speed of light to some 30 million seconds Isp.
+ Example Earth to Mars : Journey time (sane) 1 to 5 days, Survival Safety Factor = (assumed) 99 to 100%.
Currently purely hypothetical. There is a basic problem with all standard reaction mass engines: several factors limit their efficiency and capability severely at the huge 'relativistic' speeds required for sub-light interstellar travel (e.g. V ≈ 10% to 90% of C). Once we start to aim for medium to high relativistic speeds, putting resources into gravity engine research actually starts to make more sense than putting them into further reaction mass engine research. (See separate section : Gravity Engines, for a detailed analysis.) There is also likely to be a very large overlap between gravity and FTL technology.
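The Isp ceiling quoted above is just the speed of light divided by standard gravity - no reaction exhaust can leave faster than light:

c, g0 = 3.0e8, 9.81
print(f"Isp_max = c / g0 = {c / g0:.2e} s")   # ~3.06e7 - 'some 30 million seconds'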
Basic Assessment : Potentially Ultra Powerful - Currently Impossible. ~ (Transatlantic Comparison : Faster than Ballistic Rocket.)

-- -- -- -- -- -- -- -- -- -- --- -- -- -- -- -- -- --









SPACE PROJECTS : Global Asteroid Defence System[edit]

(Super Project (MST-1 : Medium Short Term Focus) :- Global Asteroid Defense Capability.) [EDIT - 14-11-16]

Global Asteroid Defense System. : A first order global defense of the Earth against the threat of asteroid collision.
Quite simply an asteroid defence program should be treated as a serious or even critical need for the whole world. There are very long term risks from asteroids (global annihilation) but there are also very short term risks which exist within the time span of only a few years - within a single human life. The fact is that we could be hit by another Tunguska scale event at any time, and statistically probably will be within the next 50 years; next time it might hit an inhabited area or even a major city. Like many space projects there are many small details and side issues, but a basic broad brush asteroid defence is possible at a reasonable cost and within a few years.
Reasonable cost is a relative term. - The project could be funded for full coverage and defence within a total budget of maybe $80 billion to $100 billion over about ten to fifteen years. Far more minimal versions are possible; for instance the very smallest minimal scale nuclear strike system might cost just $1 to 5 billion. However technological offset means that a full scale defence system should be able to return enough extra money to pay for itself completely within about 15 to 20 years.
(In comparison - the US for example spends something like $150 billion per year on war and military power, the UK spends very roughly ~ $50 to $80 billion per year on Christmas, and the world spends some $400 billion per year on computer software and about $70 billion per year just on computer games.)


The core first stage for building an asteroid defence program is a substantial R & D program for developing the various pieces and putting them together. Dividing the program into its basic core components produces the following set. -

1. A substantial long range detection system. - A system that can detect and track both long term and short term threats to the Earth in real time. Real time scanning is a critical requirement for short term imminent threat detection, and the scanning system needs to look over the entire celestial sphere around the Earth to a reasonable medium range. A secondary long range & wide angle system will also be used to scan, locate, and track longer term threats at any point within at least the inner solar system, and at tertiary ranges out into the outer solar system and beyond to the Oort cloud. Scanners will be based on optical and infrared space telescopes, ideally also with radio, and maybe radar interferometry. - Some threat objects will be very dark, small, extremely cold, and very difficult to detect.
2. Substantial low cost super-heavy orbital launch capacity. An essential sub-project is the development of a new low cost super-large ground to orbit lifter rocket to put the tugs and other components into orbit. This is a very substantial program, and to reduce costs the aim should be to sell launchers for general use. Other methods of recuperating costs include partial or total vehicle reusability, and maybe even the creation of an SSTO (Single Stage To Orbit) capacity. Designs which could achieve both include in-atmosphere nuclear rockets such as GCCC (Gas Core Closed Cycle), some super-sized variant of the Skylon design, or the old US Shuttle replacement program.
3. A substantial reusable Interception and Delivery system. The best solution to payload delivery out to an asteroid target is an interplanetary space tug with a high delta V thrust capacity for high speed target interception with a sufficiently large payload capacity to deliver a given defense solution to the target. For minimal small scale nuclear warhead asteroid destruction/diversion type systems a small one way craft with a payload capacity of maybe 0.5 to 5 tons would be adequate. For most other methods of asteroid diversion the payload capacity will need to be on the order of some 50 to over 200 tons. Ideally the machine should also be able to run manned or unmanned, and a primary specification is the use of nuclear thermal rockets to achieve the high Delta-V capacity desired. In these cases return ability and reusability become important. The ideal for tug deployment is to have at least one ship permanently on standby in high orbit, with at least one backup on the ground in a semi-ready state to allow for redundancy.
4. Asteroid Capture & Propulsion System. The most critical part of any asteroid defence system is the ability to actually change the orbit of, and divert or destroy, a given threat asteroid (or comet). The first stage of this is Object 'Capture', and the second stage is Object 'Propulsion'. There are two components to capture. The first is to reach the target object, then to match velocities and close to operating distance (eg 1 to 10 Km). The second component is 'Direct Capture', which is to land on the target object. This may or may not be needed depending on the propulsion method chosen.
The second stage, 'Object Propulsion', aims to push or move the target object onto its new trajectory. (See the list of methods below.) - For long and medium term threats the orbital changes may only need to be very small. For closing imminent threats the total acceleration required tends to be much larger and the time available to achieve it much shorter. For very small asteroids of 100 to maybe 10,000 tons, or for larger objects on long term threat trajectories, the tug's own engines might be able to provide enough thrust for the trajectory change. Where trajectories are harder to change, timespans are shorter, or target masses are higher, more powerful methods will be needed.
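A minimal impulsive deflection sketch showing why lead time dominates - miss distance is roughly delta-V times warning time (this ignores the orbital mechanics lever effects that can make early pushes even cheaper):

SECONDS_PER_YEAR = 3.156e7
EARTH_RADIUS_KM = 6371.0

def miss_distance_km(delta_v_m_s, lead_time_years):
    # Arrival point shifts by roughly delta_v * time (straight line approximation)
    return delta_v_m_s * lead_time_years * SECONDS_PER_YEAR / 1000.0

for dv, yrs in [(0.01, 10.0), (0.01, 1.0), (1.0, 0.1)]:
    d = miss_distance_km(dv, yrs)
    print(f"dv = {dv} m/s with {yrs} yr lead -> miss ~ {d:,.0f} km ({d / EARTH_RADIUS_KM:.2f} Earth radii)")

With ten years of warning a 1 cm/s nudge moves the arrival point by about half an Earth radius; with months of warning the required push is a hundred times larger.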
5. Commercialization. As well as the potential for saving millions or even billions of lives, or preventing the triggering of an accidental nuclear war, there are many gains to come from building an asteroid defence program. - Asteroid defence is only a very small step away from asteroid exploration and mining. It has about 80 to 90% commonality with manned missions to Mars, or even ultimately to the outer planets. It even makes a good base for industrial use and colonization missions in space. - For a start, imagine a permanent orbital outpost as the world dreamed of in the 1950's and 60's.
My estimated cost to build an Asteroid Defense Program including its sub-projects is $30 billion to over $100 billion. The cost extrapolation gain for the commercialization of space could easily outweigh this - making the final long term costs of the program ultimately neutral or positive.
-- - -- - -- - --


Methods of Asteroid Capture and Propulsion.
Land and Push. - The most basic method of asteroid (or object) capture is to match the target's velocity and surface rotation, land, and then use your ship's rockets to push the object. However this is often much easier said than done; some targets may have very high rates of rotation and/or may tumble in patterns that make landing difficult. In some extreme cases the rotational speed may even be close to or above orbital speed, requiring acceleration to reach the surface and strong pitons to hold on. Even once landed there is the secondary problem that rotation interferes with the ability to push the object in a straight line - either rotation must be zeroed first, which could be very expensive in fuel, or acceleration must be done in cycles, rotating the engines on gimbals to keep them in line with the desired direction of thrust. - Either method creates the need to anchor very firmly to the asteroid surface, not easy on an object that may be little more than a spinning pile of dirt or rocks.
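To give a feel for the cost of zeroing rotation, a small sketch - all the target numbers (mass, size, spin period, exhaust velocity) are illustrative assumptions for a fast spinning ~100 m rubble pile:

from math import pi

M = 1.0e9          # kg, ~100 m class rubble pile (assumed)
R = 50.0           # m, radius (assumed)
period_s = 300.0   # s, a fast 5 minute rotation period (assumed)
v_e = 9000.0       # m/s, nuclear thermal exhaust velocity (assumed)

w = 2.0 * pi / period_s          # spin rate in rad/s
I = 0.4 * M * R**2               # moment of inertia, uniform sphere approximation
impulse = I * w / R              # N*s a tangential surface thruster must deliver
print(f"Propellant just to despin ~ {impulse / v_e / 1000:.0f} tons")   # ~47 tons here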
Nuclear Powered Thermal Rockets. Nuclear rockets generate a great deal of heat using a high energy nuclear reactor, and use this to heat and accelerate a reaction mass. Note that nuclear rockets are generally at least twice as efficient in their use of reaction mass as chemical rockets. A major limitation of current designs is that the total burn time is only about ten to twenty hours at full power before the nuclear fuel is exhausted.
High Power Ionic Thrusters. For very long term thrust at relatively low energies nuclear rockets are not ideal. A better solution is a very powerful version of the ionic thrusters already in use. These thrusters achieve even higher levels of efficiency but require a lot of power as a result. There are two basic solutions. - The first is an extremely large solar voltaic array (much larger than the ISS arrays). The second is a full nuclear reactor driving a thermal power generator.
Catapults. For a reaction mass system anchored to a target asteroid or comet there is the obvious possibility of using material from the target itself as reaction mass - effectively giving the engine almost unlimited reaction mass fuel capacity. If water can be found it can be cracked and used for chemical or nuclear rockets or ionic thrusters, but even rocks could be used if fired by, say, an electromagnetic catapult. However whichever method is chosen the ultimate problem is still always energy.
Halfway-House Solar. A solution to the energy needs for asteroid propulsion is to stand a very large solar array a few Km away from the target and transmit power to anchored rocket systems using lasers or microwave beams. The need for ground based systems (or capture) can be avoided altogether by simply aiming the beam at the surface, superheating a small area and creating a jet of gas which produces thrust.
Methods that Avoid Direct Landing. - These use various different tricks to avoid the need for direct capture or landing - though there is almost always a price to pay in efficiency. Non-landing methods include :- gravity 'tractors', rockets using a bidirectional push, nuclear bomb pulse propulsion, and optical solar acceleration using massive independently flying mirrors.
Nuclear Bomb Pulse Propulsion. - Of a similar type to the old Project Orion design, but with the spacecraft standing off remotely from the blast region. This method solves many big problems with asteroid capture like landing or target rotation, has a relatively high thrust capacity, and can also push bodies like rubble piles very delicately with little difficulty.
(Nuclear pulse propulsion uses micro scale or larger nuclear bomb units that create a focused shockwave of plasma and flash heating of the target to do the actual pushing. The bomb units will be self guiding drones that will fly independently out to an optimum position for pushing the target before detonating. Small asteroids can be pushed with Orion type micro bomb units (0.1 to 5 kilotons, bomb unit mass 150 Kg) while much larger targets could be pushed using larger devices (50 to 1000 kilotons, bomb unit mass 1000 Kg). The beauty of this system is that it can be used in quite a delicate way (say with boulder piles) yet has the potential power to push asteroids that are even several kilometers across. The main problems involve nuclear paranoia on Earth, and the actual development work needed to produce a working system. A critical key question that has not yet been answered is how much acceleration this type of system can actually produce in space.)
Very Large Solar Sails. A commonly suggested solution. Very large solar sails certainly could push an asteroid but of course require very large complex structures to support them. Solar sails only produce a tiny thrust but can keep producing it over very long periods of time - they may be particularly suitable for gravity tractors. An alternative solution is to spray a solar foil coating or film onto the target, which should give a strong if less focused acceleration. - This could be enough to steer a threat object off a threat trajectory, and has the benefit that it could work even with very large asteroids up to maybe 10 Km in diameter.
Giant Solar Mirror Powered Rockets. This is an interesting idea that replaces solar sail propulsion by instead using giant solar mirrors as direct thermal energy sources to power rockets or to create thrust by selectively vaporizing the surface of the target.

- Direct Destruction using Super Large Nuclear Bombs. The ultimate brute force approach. In some cases it may not be a bad solution, but the two critical requirements are the accurate placement of the warhead(s) and the power of the warhead(s) required. As in the film Armageddon, a warhead could be more effective if buried within an object, though an alternative approach might be several warheads placed symmetrically around a body. The power and design of the warheads required is a more difficult question - obviously for smaller bodies smaller warheads would be adequate, but for larger bodies very powerful warheads requiring new technology might be needed. Like all methods, further research and experimentation is desperately required.
One very open question is how big a difference breaking up a large body on an imminent collision trajectory into smaller fragments will actually make. Some suggest it will make very little difference, but I believe the opposite - that it may potentially turn a lethal event into a far less lethal one.
Gravity Tractor. Another idea is to use a large mass on the spacecraft to act as a gravity 'tractor', towing the target asteroid using the mutual attraction of gravity - this is a complex solution that is discussed below.
Bidirectional Rocket. A similar idea to gravity tractors is to use a rocket with a bidirectional push. On one side the rocket fires an exhaust stream that pushes the asteroid, on the opposite side a second rocket fires to counteract the first holding the spacecraft in position.

-- --- -- - -- --- -- - -- --- -- - -- --- -- -


The Case for Gravity Tractors. [16-03-13] [New Ed 23-05-14]
Is an efficient asteroid gravity tractor possible? From being totally cynical about gravity tractors in the past, having done the maths I can now see that they are at least possible. But gravity tractors have some very serious problems with efficiency.

- Firstly, by Newton's third law the asteroid pulls on the tractor mass exactly as hard as the tractor mass pulls on the asteroid, and the tractor must thrust continuously just to hold its position. A shorter gap between the two gives greater gravitational attraction, but also requires greater compensating thrust. This means that (even in the best case) there is never an efficiency gain over using an engine fixed to the asteroid's surface. (Which should be obvious.)
- Secondly, a much bigger problem. To keep the rocket exhaust from hitting the asteroid and pushing it against the direction of tow, the engines need to be vectored, and the closer the craft is to the target the sharper the vector angles need to be. The sharper the angle the more thrust is wasted; at one radius above the surface the waste approaches 40 to 50% of thrust. (See the sketch after this list.)
- The third problem is that to be effective a gravity tractor needs a lot of onboard mass, and most of this cannot be used as reaction fuel while towing, so it is complete dead weight. This extra mass is terribly wasteful of fuel and makes a large reduction in the effective fuel load. An alternative is to land and take reaction mass from the surface, but that obviously involves all the problems of landing.
- The fourth problem is that the thrust system must be perfect and run semi-continuously for long periods. Even a temporary malfunction or loss of thrust (over say a few hours) could see the tractor colliding with the asteroid.
Partial solution to the need for long term station keeping : ionic thrusters, see above.
Partial solution to the vectoring problem. (A somewhat bizarre solution.) - If the overall thrust vector is on an imaginary horizontal line, then the machine is built as a long thin vertical structure, with two very carefully balanced rocket systems at the top and bottom and, if needed, an extra 'primary' towing mass in the center (the whole structure acts as a towing mass). The taller the structure, the less vectoring is needed, until it extends beyond the diameter of the asteroid and the angle approaches zero (full efficiency). Of course this depends on the size of the target asteroid.
The design question becomes how long a rigid structure can be while still transferring a given momentum load without collapsing. Thinking of lightweight materials, guyed masts, and low gravity, the answer could be several kilometers or more. - Of course we can adjust the lateral stress load on the machine to whatever it can support - but the higher the load, the more thrust can be transferred.
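A small sketch pulling the tractor numbers together - tow force from mutual gravity plus the thrust wasted by engine canting. All the specific values (asteroid and tug masses, hover distance, plume margin) are illustrative assumptions:

from math import asin, cos, degrees, radians

G = 6.674e-11            # gravitational constant
ast_mass = 1.0e9         # kg, small asteroid (assumed)
ast_radius = 50.0        # m (assumed)
tug_mass = 20e3          # kg, a 20 ton tractor (assumed)
hover_dist = 100.0       # m from the centre - one radius above the surface
plume_margin = 20.0      # degrees of extra cant for exhaust divergence (assumed)

tow_force = G * ast_mass * tug_mass / hover_dist**2       # mutual pull, ~0.13 N here
cant = asin(ast_radius / hover_dist) + radians(plume_margin)
wasted = 1.0 - cos(cant)                                  # lost component of thrust
print(f"Tow force ~ {tow_force:.2f} N, cant angle {degrees(cant):.0f} deg, thrust wasted ~ {wasted:.0%}")

With a 20 degree plume margin the waste at one radius altitude comes out around 36%, broadly in line with the 40 to 50% quoted in the list above.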
-- -- -- -- -- -- -- -- -- -- --- -- -- -- -- -- -- --






SPACE PROJECTS : Destroying Space Junk / Space Debris - the case for Super Lasers.[edit]

[Very rough draft] [21-03-17]
(STATUS : Concepts - Basic design sketches, with some details)

Space debris in close orbits around the Earth presents a real potential danger to present and future missions in space. There are various sources of debris and various sizes, and of course each piece has its own individual trajectory - from whole rocket stages to defunct satellites to flecks of paint, pieces of grit, or particles of dust.

Design Window for Catching Space Debris. Trajectory is the most complex and important factor. Obviously any given mission or craft will only be threatened by objects on trajectories intersecting with it in both time and position. The sheer scale of the orbital space around the Earth must be emphasized. - The space is absolutely vast, some 224 billion Km^3 for the most packed orbital band 100 to 500 Km high. In comparison even the largest threat objects are absolutely tiny, and each only occupies a single tiny chord of a particular orbit, so in theory the odds of collision are astronomically small. However every threat object is also moving many times faster than a bullet; threat objects are most common at quite low altitudes (120 to 500 Km), and are broadly restrained to a similar band of broadly equatorial orbits. - Closing velocities will generally be on the order of hundreds of meters per second or faster. There are easily enough objects to make intersection a real threat; counting the smallest objects there are probably several million or more. Real collisions, especially with microscopic objects, are a constant problem. Although much rarer, larger objects are much bigger threats - above as little as a few centimeters across or a few hundred grams they become almost invariably lethal.
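Checking the quoted orbital volume figure - the spherical shell between 100 and 500 Km altitude:

from math import pi

R_EARTH_KM = 6371.0

def shell_volume_km3(alt_lo_km, alt_hi_km):
    r1, r2 = R_EARTH_KM + alt_lo_km, R_EARTH_KM + alt_hi_km
    return 4.0 / 3.0 * pi * (r2**3 - r1**3)

print(f"100-500 Km shell ~ {shell_volume_km3(100.0, 500.0):.3e} Km^3")   # ~2.24e11, i.e. ~224 billion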

The other side of trajectory is that the different orbits and positions of individual objects make hunting down space junk extremely difficult and expensive. Changing from one orbit to another is expensive in fuel, and this makes the idea of a single machine that changes orbits to intercept one target after another simply too expensive.
Others have suggested 'vast' nets or 'sweepers', but again the sheer scale of the real orbital environment virtually rules this out. The idea of a robot hunting down one piece of junk after another is barely tenable at best, and a complete fantasy at worst. It is the kind of solution that would only really work with some kind of near 100% efficient ultra high energy rocket engine such as fusion or matter-antimatter.
One solution to the above problem is a ground based system that attacks and destroys space debris directly - a laser system designed to bring a pulse beam to a focus on a particular target. [EDIT POINT]


-- -- -- -- -- -- -- -- -- -- --- -- -- -- -- -- -- --






PR-04 : The Development of Starship Technology : Part 1 - Introduction and Analysis[edit]

[Semi-Rough EDIT - Est 80% complete - [22-03-17] ]


Introduction : The Development of Starship Technology. (Also Super Project LT-1.)
If there is one ultimate all encompassing scientific super project it is the development of starship technology - probably the most obvious and far reaching of all such development targets.
This is an incredibly futuristic project that encompasses many other super projects and sub projects, creating a huge web that pushes overall scientific and technological development in hundreds of different areas. Beyond that, the development of starship technology has the potential to save millions of human lives, or even save humanity itself - through medical projects, genetic engineering, social engineering, bio-system engineering, or other less obvious routes. - The core project has the direct potential to take humanity and Earth life out into the solar system and ultimately beyond, out to the stars. Starship technology might sound almost abstract in its scope, but it opens the way to many practical world shaking or world saving new technologies and scientific advances.

FTL Versus STL Travel. In this document we will assume a physics model that allows theoretical FTL travel and forbids wormhole travel. In a universe where wormhole bridges can exist, any type of FTL travel is likely to be completely impossible due to weak causality. Even if FTL travel isn't directly forbidden by physics, there are hundreds of possible points where practical considerations could still make it impossible. Part 2 will examine FTL travel in detail.
If FTL travel does in the long term prove to be impossible, this doesn't completely rule out visiting at least the nearest stars. However interstellar 'STL' travel is at best extremely limited (even slow FTL travel is still pretty limited). If we discount humans and send STL ships controlled by sentient Strong AI machines the limitation is somewhat removed, but even then the slow speed of such machines, the vast volume of space, and the immense number of stars even in our local vicinity make exploration extremely limited.


Energy Systems. At a first breakdown, the biggest and most obvious technology needed for starship development is a high density Energy Storage, Production, & Supply (E-SPS) system. At a very minimum an STL starship drive requires enough power to produce a very high velocity exhaust stream to accelerate and decelerate, and this requires a very large amount of power delivered over a sustained concentrated period. This requirement limits the primary power system choice to :- nuclear fission, nuclear fusion, some form of total nuclear conversion, or some form of ultra high density battery such as contained gravitational singularities. Critical requirements include that the energy source must be self-contained and reasonably lightweight, and that any required radiation shielding must be adequate while not being too heavy. An STL ship will also require a small secondary energy system to power the ship's systems during its usually long cruise phase. A good candidate for the secondary power system is a small fission reactor; out in deep space solar power is largely ineffective.
For an FTL starship drive the energy requirements might be quite similar to or very different from those for an STL drive, and the system may have very complex additional requirements. Unlike an STL drive, parts of an FTL engine will probably have to run during the entire cruise phase of a voyage to maintain the ship's FTL vector. This presents its own power supply requirements.
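For scale, a minimal sketch of the E-SPS problem - the kinetic energy of a hypothetical 1000 ton ship at a slow STL cruise speed (classical formula; at 10% of C the relativistic correction is under a percent):

c = 3.0e8              # m/s
ship_mass = 1.0e6      # kg, hypothetical 1000 ton ship
v = 0.10 * c           # slow STL cruise speed

ke = 0.5 * ship_mass * v**2    # J - and roughly the same again to brake at the far end
print(f"Kinetic energy at 10% of C ~ {ke:.1e} J")   # ~4.5e20 J, of order a year of world energy use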


Starship Design : Design Space.
There are often many different design paths to the same ends. - These are the main set of design choices for starship technology.
- Ship Cruising Speed : Almost all interstellar voyages will start with a brief acceleration phase, continue with a long coasting cruise phase, and end with a brief deceleration phase. Cruise speeds vary from very slow STL generational type drives (V < 0.1% C), to fast STL type drives (V < 50% C), to very fast STL type drives (V > 50% C), to photonic drive (V = 100% C), to FTL drive (V ≥ C).
- Maximum Duration / Voyage Duration : Defined by crew type, onboard ship resources, and overall cumulative system reliability. Durability may become a particular problem for FTL craft because the engines must run constantly to maintain the FTL vector.
- Cumulative Range : Defined by cruise speed x maximum duration. Starship ranges extend over a truly vast span :- from a minimum base of around five light years, up to sub-galactic scales ≈ 10,000 LY, then up to galactic scales ≈ 300,000 LY. (This is not even looking at the universe beyond the edge of the galaxy.)
- Trajectory Type : A basic choice is between one way and two way journeys. Returning from an STL interstellar voyage is essentially untenable unless fuel and resources can be mined at the destination. Without return the only choice is to stay permanently, and this raises its own problems. FTL technology will have its own problems here, probably being initially restricted to one way journeys.
- Delineation of Ship Size / Hull Size : From a minimal size solar sailor that can be held in one human hand, all the way up to vast generational machines a kilometer or more in length with crews of tens of thousands or more.
- Crew Type : Defines duration and other factors. Major types - robotic only, robotic with human hibernation, robotic storing frozen human embryos, human crew engineered for long life, human generational.


At one extreme, NASA's Voyager probes show that we have already made a first minimal step on the ladder of interstellar travel.
At the other extreme we can extrapolate and imagine ships that today most can barely dream of. Among the most realistic STL ships in science fiction is the relativistic STL drive ship in 'Avatar'. Looking at FTL drive ships - probably none are realistic - but C.J. Cherryh's ships are probably more 'accurate' than most.
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --


Technological Development Breakdown.
(Note connection to Super Projects Section.) General breakdown of the main spheres or focuses of technology or technological development that are connected to starship development. Focused on STL technologies, although many will carry across to FTL technology.
- STL Rocket Technology : Energy Systems for space drive, Advanced high energy rocket technologies, Scaling up of rocket tech.
- Space Assembly Capability : Ultra large scale high efficiency orbital lifter technology (space elevators, Lofstrom loops, kinetic towers, ultra large rockets, closed cycle nuclear rockets). Large scale space stations. Low gravity mining, materials processing, and manufacturing.
- Space Technology and Fabrication : Advanced high temperature materials, radiation resistant materials, advanced construction methods, fabrication in Space, Self repairing materials, nanotechnology fabrication.
- Radiation Shielding and object impact protection. : Advanced ablative shields, multilayer & lightweight radiation shielding, magnetic cosmic ray shielding. Personal shielding and safety equipment.
- Advanced Computing & Robotics : Strong AI & advanced Machine Autonomy, High reliability computing, human machine interfacing, radiation and EMP immune computers, self-repair systems & multiple-redundancy capability.
- Environment : Long term closed cycle air and water processing / recycling, Long term closed cycle food production, Organic waste recycling, inorganic materials recycling.
- Medical : Advanced self-contained medical systems/robotics, Human hibernation & life suspension, Advanced brain-mind manipulation & repair, Advanced anti-radiation medicine, advanced 'closed system' psychology, Human medical nanotechnology.
- Genetics : Advanced Genetic Engineering, DNA level repair, Controlled semi-immortality, disease resistance, Artificial 'zero-setup' human reproduction, modification/engineering of human minds to tolerate long periods of inactivity..

On the way to building a starship we develop a massive presence in space. We can create a strong asteroid defense capability, develop the ability to build space colonies, and send exploration teams throughout the solar system. We solve the medical obesity problem; we potentially solve dementia and Alzheimer's and many other mental health and medical problems. We use space life support technology to produce food to support human populations on Earth in time of disaster. And we learn far more about the environment and about how to engineer complete self contained bio-systems. We even learn how to develop psychology and entertainment catered for small closed long term isolated societies.
-- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - -- -- - -- - -- - --


Starship Technology : In More Detail.

A Basic Categorization of Interstellar Space Technology by Speed. -
For STL technologies a voyage is assumed to start with a brief acceleration phase, followed by a prolonged cruising phase, followed by a brief deceleration phase. Photonic and slow FTL journeys are assumed to be basically similar. To be possible, photonic and FTL travel both require dilation to be suppressed below 100% - the assumed level here is zero (0%).

Technology Base | Cruising Speed (1/2 of Delta V), % of Cvac | Time Dilation [*2] | Time to Reach 1 Light Year | Time to Reach 100 Light Years | Onboard Thrust Capacity (Journey Accel + Brake Cycles)
Current Space Technology | ~ 20 Km/s, 0.0066% | 1.000 x | 15,000 yrs | 1.5 million yrs | 1 Trip - 1 way (2 Cycles)
Generational STL Space Technology | ~ 3,000 Km/s, 1% | 1.000 x | 100 yrs | 10,000 yrs | 1 Trip - 1 way (2 Cycles)
Slow STL Space Technology [*3] | ~ 30,000 Km/s, 10% | 1.005 x | 10 yrs | 1,000 yrs | 1 Trip - 1 way (2 Cycles)
Fast STL Space Technology | ~ 150,000 Km/s, 50% | 1.155 x | 2 yrs | 200 yrs | 1 Trip - 1 way (2 Cycles)
Ultra Fast STL Technology | ~ 270,000 Km/s, 90% | 2.294 x | 1.1 yrs | 110 yrs | 1 Trip - 1 way (2 Cycles)
Fast Relativistic Technology | ~ 297,000 Km/s, 99% | 7.089 x | 1.01 yrs | 101 yrs | 1 Trip - 1 way (2 Cycles)
Photonic Space Technology [*4] | ~ 300,000 Km/s, 100% | 1 x (suppressed) | 1 yr | 100 yrs | 1 Trip - 1 way (2 Cycles)
Slow FTL Space Technology [*4] | ~ 3,000,000 Km/s, 1000% | 1 x (suppressed) | 0.1 yrs | 10 yrs | 1 Trip - 1 way (2 Cycles)
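The dilation and Earth-frame trip time columns can be reproduced with the standard special relativity formulas (the 'suppressed' photonic and FTL rows are outside this calculation by definition):

from math import sqrt

def dilation_and_trip(beta, dist_ly):
    gamma = 1.0 / sqrt(1.0 - beta**2)   # time dilation factor
    t_coord = dist_ly / beta            # Earth frame years to cover dist_ly
    return gamma, t_coord

for beta in (0.0000667, 0.01, 0.10, 0.50, 0.90, 0.99):
    gamma, t = dilation_and_trip(beta, 1.0)
    print(f"{beta:>8.4%} of C: dilation {gamma:.3f} x, {t:,.2f} yrs per light year")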

Basic Categorization of Interstellar Space Technology by Range.
The basic range of a given technology is given by its Cruising Speed multiplied by its Flight Durability.

Technology Base | Cruising Mass (metric tons) | Max Duration Manned [*2] | Max Range Manned | Max Duration Robotic [*2] | Max Range Robotic | Estimated Relative Technological Complexity N : (N = 1 = Apollo Moon Program [*1])
Current Space Technology | ~ 100 tons | 5 yrs | 0.00033 ly | 100 yr | 0.0066 ly | N x 2 : (Chemical Rockets)
Generational STL Space Technology | ~ 200,000 tons | 1000 yrs | 10 ly | 2000 yr | 20 ly | N x 8 : (Nuclear Thermal or Fusion)
Slow STL Space Technology [*3] | ~ 60,000 tons | 150 (151) yrs | 15 ly | 300 (302) yr | 30 ly | N x 4 : (Super Orion)
Slow STL Space Technology | ~ 5,000 tons | 40 (40) yrs | 4 ly | 200 (201) yr | 20 ly | N x 6 : (Nuclear Light Bulb GC-CC-NTR)
Fast STL Space Technology [*2] | ~ 1,000 tons | 20 (23) yrs | 11.5 ly | 200 (230) yr | 115 ly | N x 8 : (Nuclear Fusion)
Ultra Fast STL Technology [*2] | ~ 1,000 tons | 20 (46) yrs | 41.3 ly | 200 (459) yr | 413 ly | N x 16 : (Advanced Antimatter)
Fast Relativistic Technology [*2] | ~ 100 tons | 10 (71) yrs | 70.2 ly | 100 (709) yr | 702 ly | N x 18 : (Advanced Antimatter)
Photonic Space Technology [*4] | ~ 1,000 tons | 20 (20) yrs | 20 ly | 200 (200) yr | 200 ly | N x 12 : (Mixed Antimatter (net zero mass))
Slow FTL Space Technology [*4] | ~ 1,000 tons | 20 (20) yrs | 200 ly | 20 (20) yr | 200 ly | N x 14 : (Mixed Antimatter (net zero mass))

[Note *1] - N is the relative complexity of the technology against a nominal base - N = 1 is equivalent to the Apollo Moon Program.
[Note *2] - As speed increases the effects of relativistic time dilation also increase. This increases effective durability and range. Precise dilation effects for photonic and FTL technologies are not known and depend on the technology itself. What is known is that neither technology is possible unless time dilation is suppressed.
[Note *3] - 'Super Orion' is a slow STL technology, and is the fastest technology within today's current technology horizon. A Super Orion ship could theoretically be built and launched within about 10 to 15 years.
[Note *4] - Note that 'Photonic' and 'FTL' are very similar technologies. - Once (or if) the first is achieved, then the second theoretically becomes relatively easy. However the precise physics may mean that photonic speeds are possible but FTL speeds above the speed of light are impossible..
-- -- -- -- -- -- -- -- -- -- --- -- -- -- -- -- -- --






PR-04 : The Development of Starship Technology : Part 2 - Building an FTL Drive Spacecraft[edit]

[Current Edit 60% complete, Latest Edit 27-05-17]
[A new general rewrite is needed to restructure and update article. A lot of old semi obsolete material. The section needs much further work.]

Introduction.
We look up and we see the stars, and know that around many of them orbit other worlds. Some of those worlds will contain life - real alien life, something literally beyond the human imagination. Some will even contain sentient life, and a small fraction of those may have their own civilizations. A basic fundamental fact is that the size of interstellar space makes travel to those worlds virtually impossible without some type of FTL travel. Even slow FTL speeds can only take us a small limited distance, out to our nearest stellar neighbors, but that is still an immense leap beyond the solar system.

This description of FTL travel should be imagined as being a little like Leonardo Da Vinci's diagram of a helicopter. Leonardo's helicopter was probably far closer to flight than we are today to flying to the stars. However, while the gap is far bigger for FTL travel, the human mind is now evolving far faster than in Leonardo's time. In my own work over the last ten or fifteen years I have seen a massive evolution from a set of vague theories and beliefs to a solid and semi-complete FTL physics model - one which can form a solid foundation for further research. We have crossed many cultural singularities but we need to cross more. The human mind is now evolving millions of times faster than our genetic biology, and the nature of evolution always takes us down many paths, most wrong, but eventually we will find the right one. I hope this work is a genuine step forward in that evolution.

(Basic Terminology : STL = Slower Than Light, FTL = Faster Than Light.)


Attempting to create a successful FTL travel technology is almost certainly one of the most complex and difficult engineering challenges in human history; however, that does not mean it is impossible. The greatest barrier to research in this area is, and always has been, relativity. Until you confront and step beyond relativity you cannot even begin to solve the design problems required to achieve FTL travel.

I tried to solve the problem by scrying with some animal bones. The bones told me smugly that at least this was a more empirical method than trying to solve the problem using relativity.


Background to FTL Research. The obvious starting point for research is to focus on creating and improving models that describe FTL physics and its associated geometry in ever better detail. This is an area where the scientific establishment is asleep and not yet even at the starting line, dependent on its belief in the failed yet seemingly complete model of Special and General Relativity. Perhaps the biggest, hardest step in this whole process is to wake that scientific establishment up and make it face a reality without its mathematical beauty. The mathematics of real FTL physics is an ugly beast - not very amenable to calculus or higher mathematics, disjointed and dysfunctional, but actually very simple. Maybe this is the real problem: that the primary FTL world is simple enough for outsiders to understand it completely, and that potentially threatens a generations long elitism in physics created by relativity. Real physics is simple.

FTL Physics Model. In this document we will assume from the beginning that FTL travel is possible. This is only a starting point but is a prerequisite for doing any kind of logical analysis. We will not repeat the argument here (see the physics section above), but in this model Special Relativity fails at the speed of light and is replaced with an absolute frame FTL model. Dimensional time does not exist except on quantum scales, and general space cannot fold, making wormhole travel impossible. In theory at least FTL travel is possible, although some of the most critical problems are not substantially changed from the relativity model.


Focusing In On Relativistic Curvature.
The basic barrier to FTL travel is defined at the speed of light by special relativity (this part of the theory is extremely accurate and well proven). Below the speed of light, in STL space, we encounter the relativistic space time curvature and dilation experienced by objects travelling at relativistic speeds. Accelerating from STL speeds to the speed of light requires infinite energy and creates an infinite curvature in space time. The FTL model merely replaces the curved dimension of time with a local spatial dimension - the object's local vector of motion. The local vector makes the physics of relativistic objects purely local to the object, and this fits with the local physics of tachyons. Relativistic distortion due to gravity and to frame dragging are also local effects and also (by definition) contain local vectors of motion.
Time Dilation. From the perspective of a relativistic object itself the effects of time dilation are hidden, appearing as extra velocity. - Above about 71% of Cvac (V = Cvac/√2) an object appears to itself to be travelling faster than light. At the speed of light / FTL barrier itself, as an object's space time curvature becomes infinite, dilation also forces its local time to a complete stop.
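A small sketch of this 'appears to itself' effect using the standard proper velocity relation w = gamma x v (distance covered in the external frame per unit of onboard time - a standard SR quantity, used here purely as illustration):

from math import sqrt

for beta in (0.50, 0.707, 0.87, 0.99):
    gamma = 1.0 / sqrt(1.0 - beta**2)
    w = gamma * beta                    # proper velocity in units of C
    print(f"V = {beta:.3f} C -> appears to itself as {w:.2f} C")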

Light. The speed of light is obviously absolutely central to FTL physics, and to STL physics. Light carries positive energy and therefore mass, yet it travels at the speed of light. The answer of course is that light has no rest mass. In the FTL model light becomes a tachyon with a superposition of positive and negative mass that adds up to net zero. (Imaginary Number = (n+) + (n-) = 0.) The superposition itself is the vibrating wave aspect of the light particle. Note that the superposition is at right angles to the direction of motion, so light itself depends on a quantum space with a speed of light of zero.
Light is governed by the manifold : STL = V <= Cvac, FTL = V >= Cvac, LightV = STL & FTL = Cvac, with Cvac = 3E8 m/s. The constant CQ = 0 is the bulk speed of light and defines the speed of light at quantum scales.

"The FTL Space." The core of FTL physics is based on simple logic. - Space is three dimensional and stable. This geometry leads directly to a point of general FTL simultaneity across the universe which leads to a universal synchronous point time. Einstein's space time is reduced to an abstract projection but actually exists directly physically at the quantum scale.
FTL Tachyons. We can extend the model further to look in detail at the quantum geometry and the true nature of matter and mass and energy. FTL particles called Tachyons have imaginary mass, and this gives the basic mass equation FTLm = (m+) + (m-) = 0. FTL coherency requires a superposition of positive and negative mass adding up to net zero, and this becomes a primary signature of all FTL physics. The signature also often appears as wavelike forms or behaviour.


Accelerating to FTL Speeds. The FTL mass equation gives us a basic way of accelerating to FTL speeds by tuning the net mass of our interstellar ship to zero. We also need to find a way of achieving zero mass of course, and then of accelerating beyond the speed of light itself.

Sustained Travel at FTL speeds. Surviving in the FTL environment and maintaining FTL momentum for any extended period seems almost infinitely more difficult than reaching FTL speeds in the first place. An FTL ship must survive the 'pressure' of dilation at totality, which rises up from the quantum scale and tries to squeeze time on board to a complete stop. The machine must also fly through a constant stream of ordinary STL matter, even in the middle of 'empty' interstellar space. Even at the end of the journey there is the problem of decelerating back to Slower Than Light (STL) speeds without being torn to pieces and destroyed. Then there is the problem of allowing a ship and maybe even a crew to survive FTL acceleration and deceleration transitions - which may or may not be a problem.

FTL Navigation. To achieve real FTL travel another tiny issue is navigation. This involves choosing an FTL course, controlling FTL acceleration and trajectory, predicting speed, and controlling time of flight.
Choosing an FTL course. The map of space we see is defined by the speed of light, so it represents the past, while we need to choose a trajectory that projects into the future. We must create and rely on future projection maps using standard extrapolations - one part of FTL travel that is relatively easy. However the size of the galaxy, the number of stars, and the fact that the apparent geometry changes with time and position make more general navigation a non-trivial task.
Navigating an FTL course. Once an object is travelling at FTL speeds it can only travel in a straight line, and this makes initial course selection critical. A ship has to be aimed along a precise specific vector, and this of course is still an unsolved problem. There is also the issue of time dilation: without some kind of shield against time dilation an FTL ship cannot time the end of its journey. With no dilation shield at all, time on board will stop completely and the ship will sail forever. Many of these issues can only be approached and solved once much of the basic technology is already complete.

-- -- -- -- -- -- -- -- -- -- -- --


Part 1 : A Background to FTL Physics.
(Based on the Absolute Frame FTL Model mentioned above.)

  • The Geometry of Space : It turns out that for the universe to exist in any sensible way we need a stable coherent FTL geometry that extends throughout space. The defined FTL geometry is a 3D space that unifies with a synchronous point time through an FTL Simultaneity. (All are covered in detail in the physics section)
  • The Speed of Light Cvac. : The speed of light forms a region of intersection between both the STL and FTL velocity regions of space forming a single unified space-velocity manifold. Light moves in straight lines that are defined directly by the geometry of the FTL space. (A direct measurement) The local speed of light is governed by local geometry.
  • Point Time. : In the FTL Absolute Frame model time is defined very simply as point like. Time is zero dimensional and in effect is just a secondary property of space. Time Travel as such is impossible because 'dimensional time' and the past and future as special relativity defines them do not exist.
  • Supremacy of Quantum Mechanics. : In the FTL model the map connects point time directly to quantum space, so that all physics and reality itself extend directly from the quantum scale region. At quantum scales dimensional time and 4D space time exist physically. This is defined by the lower size limit of point time, which forms the quantum limit. The speed of light at quantum scales is defined as zero (the bulk or vector sum speed of light equals zero by definition), CQ = 0. This forces all physics at the quantum scale into an FTL manifold, and quantum physics is FTL physics. Quantum scale space time becomes the mechanism for all relativistic effects including gravity. The FTL Simultaneity combined with point time and quantum physics forms a complete model of FTL and STL causality. The FTL model defines causality directly, forming a complete physics model.
  • Quantum Continuum. : A continuum unifies the physics and geometry of Quantum Mechanics to form the coherent physics we experience at larger scales.
  • STL Positive Mass Objects. (Tardyons) : Massed objects are formed as small regions of curved space time, as point gravitational singularities held together by FTL coherency. For massed 'STL velocity' objects light forms a local infinity/maximum.
  • Imaginary Numbers & Mass. : The basic definition given by the Lorentz Factor gives objects travelling faster than light an imaginary mass. Imaginary numbers can be defined using superposition (n+) + (n-) = 0. This can be applied to tachyons, giving a superposition structure and a net mass of zero. (Net positive and negative mass superposition summing to zero.) As such all FTL objects have a basic mass of zero.
  • Waves & Particles. : STL interactions produce 'particle-like' behaviour and FTL interactions produce 'wave-like' behaviour..
  • Light Particles / Zero Mass Objects. The velocity of light is defined as the intersection of the FTL and STL spaces. Light particles must have a net zero mass to move at the speed of light, although all known light particles must carry a small positive mass to be detectable. Light therefore fits as both an STL and an FTL particle and has properties of both : wave-like = FTL, particle-like = STL.
  • Limit on Causality. : A standard rule of causality is that physical effects are not allowed to cross FTL barriers. This in general forbids normal interactions between objects moving at relative FTL velocities. Such non-interaction may actually play a crucial part in making real FTL travel possible, because it means that FTL ships should simply fly through intervening STL particles or objects without disruption or radiation producing collisions.
  • Tachyon decay. : The usual behaviour of FTL tachyons exposed to STL causality is to immediately decay into STL (V < C) tardyons. This is observed by science in nuclear physics and is called Cerenkov radiation. The FTL model also tentatively adds Hawking radiation, noise from the quantum vacuum, and even quantum causality breaking as other possible forms of tachyon decay.
  • Matter/Antimatter. : In the FTL model matter is divided into four basic types by charge and mass. Type 0 = normal matter, Type 1 = negative charge antimatter, Type 2 = negative mass antimatter, and Type 3 = negative mass and charge antimatter. If it is not possible to create or capture some form of negative mass matter then it becomes rather more unlikely that any kind of FTL travel will be possible.
  • Background Mathematics. : Mathematically FTL physics depends on three elements : an understanding of basic physical 3D Euclidean geometry; the ability to understand and map local infinities using context and windows; and the ability to understand imaginary numbers and their implications. The reason that people like Einstein rejected the idea of an FTL model was largely because its mathematics was not compatible with what they understood. The FTL model also demonstrates clearly that physics irredeemably and irreversibly corrupts the purity and harmony of higher mathematics, and that mathematics itself is merely an evolved ad-hoc system. (Like the universe.)

Types and Examples of Tachyons. -
Types : Fast Tachyons - local C exceeds Cvac. Slow Tachyons - local C is below Cvac. Photons - local C = Cvac.
Examples : Photons, Magnetic fields, and Cooper pairs can be described as types of tachyon.
FTL Time Causality. Local FTL spaces might be expected to experience time lock. Another possibility is that they experience time in reverse, or, if the maths is particularly twisted, time forwards. (Some say that negative matter should experience negative time; this might keep it pushed slightly out of normal reality, leading to non-interaction.) Because of the geometry of tachyons, negative mass matter may still interact with gravity, though the exact nature of this is unknown and may be outside any known physics today.

Dark Matter? : A possible candidate for Dark Matter is negative mass antimatter. This raises some very interesting possibilities. This will be analyzed further elsewhere.

-- -- -- -- -- -- -- -- -- -- -- --


Part 2 : Accelerating to FTL Speeds.. [EDIT POINT]
This topic is already largely covered. Basic assumption : FTL acceleration can be achieved by balancing total net mass to zero. Two basic methods of balancing a craft's mass to zero are using negative mass or some form of gravity 'shield'. -
Gravity Shields. An improving level of knowledge of FTL physics suggests that this type of shielding must act at the quantum scale and will be incredibly hard to achieve. Conversely a gravity shield and a dilation shield are virtually the same thing, so an FTL ship already needs one. Other ways of describing such gravity shields are as either Schrödinger box barriers or as event horizons. Event horizons plus the quantum FTL gravity model point pretty exactly to what a shield technology might look like: an atomic scale system probably not made of ordinary matter, but of something stronger that allows more density.
FTL transition space. Another problem is that the tachyon superposition model implies that simple tachyonic objects with imaginary net zero mass are trapped at the speed of light. (ie as photons) To enter the true FTL region a further manipulation seems to be required. - This may either involve reducing the net mass to a negative value, or manipulating the superposition in some complex way, or manipulating physics to create an environment around the ship that increases its local speed of light. This last in effect creates a slipstream between STL and FTL spaces. Again we must remember that all physics originates and ends at quantum scales. Another possible solution is to 'plough' through space in a way that clears out an absolutely matter free region around and in front of the ship..
A more brute force approach is to have a super dense ship that simply pushes external space out of the way. Such a ship would probably contain a number of heavy massed gravitational singularities (black holes). Internally the mass is arranged to reduce the overall compression of space time. This might allow Newtonian acceleration to super light speeds or arrangements like the Alcubierre warp drive.
-- -- -- -- -- -- -- -- -- -- -- --


Part 3 : Surviving In the Stellar FTL Environment. (Across the STL-FTL Cvac Barrier)

Even if we can get past the above problems and achieve the ability to accelerate massed objects to FTL velocities, this is still only half way to full FTL travel. Once an FTL ship crosses the FTL prime Cvac barrier it encounters what is likely to be one of the harshest environments in the universe. Surviving extended travel at FTL speeds is the really hard part of FTL travel. The following are the major effects that may be expected to be encountered. (Note that each effect blurs into the others, and that most are in effect various forms of tachyon decay..) -

  • The First Effect is Tachyon Decay. - This comes from the interaction between FTL objects and the external STL universe. - The natural behaviour of most or all FTL tachyons is to decay immediately into STL tardyon particles. (Basically identical to Hawking radiation or Cerenkov radiation.)
  • The Second Effect is Local Space Time Folding in on Itself. - By the rules defined by relativity, local space / space time becomes malleable at the speed of light and may tend to fold to lower dimensions or collapse in on itself. We now know this effect would arise from, and act at, quantum scales. This effect is expected and could ultimately crush a ship into a linear one dimensional object or force time on board into a dilation stop.
  • The Third Effect is the problem of FTL Transition or Acceleration. - Matter coherency itself is lower in energy than the energy gap between STL and FTL speeds, and this means that FTL transitions are potentially able to shred anything made of matter. In a very basic view, for an unprotected object the energy transition is effectively infinite. This means that the beginnings and ends of FTL journeys will be particularly difficult to survive, but every encounter with STL particles could cause similar problems.
  • The Fourth Effect can be called 'Tan 90' or destructive parallel acceleration. - One problem is that Tan 90 faces both ways (see the worked limit just after this list), and yes, this is yet another form of tachyon superposition breaking. - Meaning that potentially our ship will try to accelerate in two opposite directions at once, tearing it to pieces.
  • The Fifth Effect is 'Causal FTL Transience Stress'. - A ship's FTL 'transience' is in effect its causality link with the STL universe. The failure of FTL transience could break this link, meaning that the ship will never be able to slow down or re-enter ordinary space under any circumstances. There is a sharp peak of FTL transience stress at the moment that a ship tries to slow down and cross from FTL to STL velocities.
  • The Sixth Effect is 'Residual FTL Transience'. - Even after an object or person has left the FTL space their FTL transience (their FTL causality history) remains. If this transience is broken they may be pulled back into the FTL region forever, or simply evaporate into nothing, or be destroyed through causality and entropy. (Once you look into the FTL, 'Fate' is no longer a matter of conjecture, it is a matter of unpleasant fact.) The STL causality of an ordinary STL object is effectively infinite, but an object that has travelled at FTL velocities will have a lower, effectively finite, 'damaged' causality. Its matter itself will be directly 'contaminated' by the FTL causality. This FTL transience will gradually 'heal' with time as the light cones of the FTL and STL trajectories merge. This type of transience problem is likely to be one of the ultimate limits on all FTL travel. - The same effect can be observed in ordinary physics today in the form of quantum field collapse and superposition breaking.
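
To pin down the 'Tan 90' image in the Fourth Effect above, this is the standard two-sided limit it alludes to (plain textbook maths; the application to FTL acceleration is this page's own reading):

 \lim_{\theta \to 90^{\circ -}} \tan\theta = +\infty , \qquad \lim_{\theta \to 90^{\circ +}} \tan\theta = -\infty .

Approaching the same angle from the two sides gives opposite infinities - the sense in which an object sitting exactly on the barrier would be pulled in two opposite directions at once.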

-- -- -- -- -- -- -- -- -- -- -- --


Part 4 : FTL Machinery - Tentative Solutions to Accelerating to and Surviving FTL Travel.

Disks or Rings : One solution to effects one & two is to have a large disk shaped shield (axial to the direction of travel) around the ship that acts as an FTL 'aerofoil'. The shape of this aerofoil forces the ship through the FTL space without allowing it to be crushed, the edge of the disk gets eaten away by tachyon decay but the core ship survives intact. To increase the effect stacks of disks may be used.
Spikes : In contradiction to the disk above a sharp spike shape of super hard, super dense material may be ideal as a different type of FTL 'aerofoil'. This could allow a ship to push through space, either folding space time around it, or collapsing the ship inwards in a stable controllable way, or simply forcing material out of the ships way.
Space Time Bubbles : The basic key to surviving in an FTL environment is to create a bubble of separate space time around a ship - this can be seen as a type of 'force field' or may also contain more physical barriers. This would both shield the ship and allow it to hide its mass from external space time, avoiding problems like time dilation and total time stop, or self gravitational collapse. This requires a physics at the point where the creation and manipulation of gravitational singularities becomes easy.
Zero Point Calipers : With space time bubbles we can create a bubble of total space time curvature where the speed of light is essentially zero. This offers a basis for creating zero point calipers and low energy space times - which in turn..[large gap here]

Transiator. [*1] Most of the problems of surviving while in the true FTL can be tackled simultaneously by a single machine called a 'Transiator' that generates and amplifies a strong local FTL 'transience'. Currently this machine only exists on the very edge of theory, even as a fringe idea, but if it did become possible then many or all of the problems of FTL flight would begin to come within reach, or at least within the reach of further research. A transiator cannot be made of ordinary matter but requires something considerably stronger. Possibilities might include an ultra dense neutron condensate, stable Type II or Type III matter (ie made of Charm and Strange or Top and Bottom quarks), an ultra high energy quark-gluon plasma, some form of negative mass matter or antimatter, or a classic scale space-time region in total curvature.
Harmony Machine. [*1] This is a different approach to FTL mechanics. The harmony machine is in effect a type of transiator, though based on a slightly different premise. In effect the harmony machine attempts to create its own field of primal energy, either directly by creating a miniature 'Big Bang' inside the machine or by creating an FTL 'temporal' link to the original 'Big Bang'. The machine uses this primal energy to manipulate and completely control the laws of physics and local space time in a small region inside the machine. The initial fields would obviously be extremely delicate, but if they could be shielded and amplified in some way this self-reinforcement could build up to a much larger effect.
Glisten Device. [*1] A glisten is a theoretical device I came across (or invented while writing sci-fi) many years ago. In effect a glisten is a three dimensional object directly containing a region of a coherent fourth time-spatial dimension. One basic mechanism might be to create a 'glisten surface' where the speed of light is reduced to near zero, for instance held by a zero point field caliper. - The surface allows the direct manipulation of space and time, and particularly the compression and manipulation of time. One way to use a glisten is to create a basic shape - like a sharp spike - which causes the field to dissipate asymmetrically; as the field dissipates it creates a push that is in effect an asymmetric thrust. In effect a gravity engine, or more precisely a Kinetic Energy Transformer.

[*1 Note] : There is considerable overlap and tautology between all three devices. Also note that we are at the beginning of describing something that is a basis for building further parts, and then other devices with other geometries, and so on. Think of glistens and transiators as the resistors and diodes of FTL travel. The harmony circuit might be like an aerial, or maybe more like the resonant circuit in a radio. Work on this is still at such an early stage that free form extrapolation - as in sci-fi writing - still often achieves as much as deductive scientific reasoning.
-- -- -- -- -- -- -- -- -- -- -- --


Part 5 : Things You Might Expect to Find in a Real FTL Drive / Hyperdrive...

Speculative
- Ultra High Vacuums. : Matter interferes with FTL physics interactions so the most pure vacuums available are an essential first requirement.
- Ultra Low Temperatures. : Liquid helium in large volumes.. superconductors, and lasers for even cooler temperatures.
- Ultra Powerful Magnetic Fields. : [shhh...] Ultra powerful magnets and magnetic quench, and - maybe high or low frequency oscillating fields.
- Ultra Low Electrical and Magnetic Field. : The real key . . maybe.. Multi-layer superconducting Faraday cages.
- Plasma. : For all kinds of reasons plasmas are an obvious and almost essential ingredient. Plasmas can store vast amounts of energy as heat, and have potentially very useful electrical and magnetic properties. Plasmas are also potential candidates for future high temperature quantum computing devices.
- Quark Gluon Plasma. : The ideal plasma for many reasons would be a stable quark-gluon plasma - but we are very far from producing such a thing. A good candidate for brute force methods of acceleration to FTL speeds.. (We can already produce quark gluon plasmas using ultra high energy relativistic particle collisions, but these plasmas only exist for extremely brief moments before decaying.)
- Micro Singularities & Wormholes. : Sub-miniature singularities and wormholes are potentially so useful that the technology to penetrate FTL barriers often seems almost impossible without them. Micro singularity technology must be a primary goal in any preliminary FTL research. Of course vice versa, an FTL manipulation technology is probably a very good starting point / route towards building a micro singularity and micro wormhole technology, and other similar developments.. In the FTL quantum model I am developing all massed objects are potentially viewed as micro-singularities..
- Nuclear Bombs : Not the easiest things to work with, but a relatively simple and very compact method of generating pulses of ultra high energy plasma..


Very Speculative
- Negative Mass (Type 2) Antimatter : Negative mass matter is one of the most speculative components, but also one of the most critical starting points for many or most putative 'low-energy' paths to an FTL drive.
- Ultra High Pressure Pump Chamber. : High pressures and high intensity sound waves, plus supersonic air streams, plus high voltage electric fields, create possible conditions for quantum parallelism / high energy mechanical resonators.
- Ordered Heat Energy. : Ordering heat energy may be one of the keys to creating a quantum manipulation technology. One step towards ordering heat is molecularly ordered structures, maybe combined with carefully controlled EM radiation and structured heat flow interfaces.
- Electrostatic Auras, FTL Component. : One suspected cause behind superconductivity. In organic brains, one suspected possible mechanism for high temperature quantum coherency/memory, and one possible mechanism behind quantum field breaking.


'No matter how fast an FTL Ship travels its FTL transience always travels faster.'
-- -- -- -- -- -- -- -- -- -- --- -- -- -- -- -- -- --

-- -- -- -- -- -- -- -- -- -- -- --









A Work In Progress : Extrapolation Analysis : A Scientific Analysis of Gravity Engines[edit]

[Semi-Clean Draft] - [Current Edit 60% Complete. Last Update - 09-08-16]
This is an Extrapolation Analysis : The original version of this analysis was created to help produce a more accurate science and technology base for a science fiction work, and also for fun. It resulted in a breakthrough in my physics work that allowed me to complete a critical part of the FTL model above.

There are many potential possible routes to gravity manipulation out there,
most of them completely impractical or impossible or fantastical.
Some are little better than fairies attached to small harnesses,
others may just at the outside have the smallest possibility of success.

*Interim Note : Please note that much has changed in my FTL physics model since this was first written. Some material here is now very obsolete .. I may rewrite or I may not.
[Updated Physics Note] : (Examining the FTL model : As defined in the FTL model above, there is always a local FTL barrier between a space / space time (space-point-time) and all massed STL objects that occupy that space. This explains the limitation of the conservation of the balance of momentum, because there is no direct interaction between the space and its contents. Dilation, relativity, and gravity are complicating factors. In gravity, kinetic energy is transmitted from each object to its local space, then transmitted outward through the 'frame' of the space, and then to any/all receiving objects. This force transfer is always two way, and the conservation of momentum is always maintained. Local dilation is a strong candidate for the underlying mechanism, and the recent (reported 2016) detection of gravity waves now rules out models where gravity moves faster than light.. Note that the detection of gravity waves also proves that gravity, or rather space / space time, must attain some kind of resonance at FTL speeds, because this is required for black holes to have external gravity fields.) [END NOTE]



Introduction : Classical Gravity Engines are famously impossible and the basic reason is energy. -
We start by looking at the simplest, absolute worst case, direct brute-force model. In this model an engine that could lift a small car using gravity might require as much energy as produced by the simultaneous explosion of thousands of billions of nuclear bombs; this would need to be held in a stable form in some kind of capacitor - something unimaginably difficult to achieve. We actually know what kind of object such a capacitor would probably need to be: a gravitational singularity. This is not even considering the problems of directionality or of the conservation of the balance of momentum.
To have any real hope of building any kind of working engine, a vastly more subtle approach is required. Taking the absolute best case for lifting the small car above, if we assume an engine identical in efficiency to a 100% efficient reaction mass type system (eg treating the planetary surface as its reaction mass), the power required is as little as 50 to 100 kilowatts. - At a stretch the car's engine could power such a machine. In theory some machines do not use any energy at all unless the car moves upwards, and then they regain it if the car moves back down.

The previous version of this work focused purely on a generic analysis; the new version also focuses directly on FTL models and FTL technology based engines. Please read the background section Part 1 in the article on building an FTL drive spacecraft. Without considering FTL physics, the known STL physics suggests that true gravity engines may be next to impossible to implement. FTL coherence may allow direct interaction with the space time and so may change this. An old prediction is that gravity engines are only likely to work reliably in deep space, away from strong local gravity fields. (ie not anywhere near a planetary surface)

Special Notes

  • Rule 1 : Law of Conservation of Momentum. (Balance of Momentum) It is impossible to produce a truly unbalanced force. For every massed object that exists, every acceleration or change of momentum depends on the same law: there is always a counterbalancing reaction mass that moves equal and opposite to the primary direction of motion.
  • Rule 2 : Gravity Obeys the Conservation of Momentum. In nature as observed gravity always obeys the law of conservation of momentum because mutual acceleration always cancels out to net zero. (B is attracted to A as A is attracted to B).
  • Rule 3 : Even Black Holes (are assumed to) Obey the Conservation of Momentum. This implies that gravity crosses FTL barriers in some way, creating a substantial violation of General Relativity. This further extends by extrapolation to the point of infinite curvature at a black hole's singularity. An observation that hints strongly that gravitational resonance is extremely fast (Gv >> Cvac). - Note however that gravity has now been measured (as gravity waves) travelling through space at the speed of light.
  • One view of gravity engines sees them as a special class of reaction mass rocket that uses space-time as its reaction mass. In this case there is no violation of the laws of motion and no unbalanced force because the entire universe acts as the reaction mass fulcrum - and there is no closed system.
  • Conversely a traditional definition of rockets sees them as gravity engines because they can accelerate in free space in a vacuum. The reaction mass exhaust is the balancing element that closes the system, and outside of this rockets remain stubbornly acceleration neutral - incapable of acceleration.
  • There is one obvious example where a form of near inertialess acceleration does exist. Bodies where the net rest mass adds up to zero. The obvious example is light and other EM radiation. In developing a new FTL physics model I have uncovered a new model for imaginary mass objects that defines them as having net zero mass, which fits with massless 'photonic' particles. The model also predicts the existence of positive and negative mass.
  • An odd possibility is that many ordinary machines may already act as gravity engines, though the resulting forces they produce are so astronomically tiny that they are effectively completely undetectable. (eg ~1E-44 m/s^2 (total guess)) An acceleration that might get you across the width of a few hydrogen atoms since the beginning of the universe. (A back-of-envelope check of this figure follows this list.)
  • The name 'Gravity Engine' has become generic. This generic classification is generally taken to include any method that achieves any form of thrust without requiring or expending a direct counterbalancing reaction mass. This includes methods based on direct interaction with space time, methods based on any remote form of force or reaction mass transfer, methods that use some form of negative mass, methods based on 'inertialess' or 'reactionless' thrust, or other imagined or putative methods. - In many types of 'Gravity Engine' gravity itself may not be directly involved in any way ..
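
A minimal sketch checking the 'undetectably tiny acceleration' bullet above (Python, using the standard distance-under-constant-acceleration formula d = a.t^2/2; the acceleration value is the guess from the text, and the ~13.8 billion year age of the universe is an assumed standard figure):

 # Distance travelled from rest under a constant tiny acceleration: d = a * t**2 / 2
 AGE_OF_UNIVERSE_S = 13.8e9 * 3.156e7   # ~13.8 billion years in seconds (assumed)
 a = 1e-44                              # guessed acceleration from the text, m/s^2
 d = 0.5 * a * AGE_OF_UNIVERSE_S**2
 print(f"distance since the Big Bang: {d:.1e} m")   # ~9.5e-10 m, a few hydrogen atoms wide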

Extrapolation to a Working Engine? So what methods might lead to a real gravity engine? Well there are many potential paths - inertialess thrust, reaction mass recycling, asymmetrical gravity shielding, 'anti'-gravity fields, space time tractors, quantum pumps, electromagnetic fields, etc, etc, etc. - A starting point for some of the more realistic types of gravity engine research is either relativistic acceleration or hyper dense materials like 'neutronium condensate' or gravitational singularities. This of course is only one set of impossible problems replacing another, and then another. Steps down a tautology ladder that might just one day lead towards solutions that might just be possible.
-- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -


Gravity and Energy (How much energy is involved in gravity fields?)

We all know that it is impossible to produce a truly unbalanced force, but this isn't the only thing that makes gravity engines so difficult or 'impossible' to achieve. What really makes gravity so difficult to interact with is the sheer, immense amount of energy involved in ordinary gravity fields, and thus needed to interact with gravity in any meaningful way. To calculate the broad amounts of energy in gravity fields we simply take the mass energy equivalence E = m.C^2 together with the Newtonian gravity equation Am = G.M/r^2, and the result marks gravity as one of the most unreachable barriers in physics. -

(Acceleration of secondary mass Am (m/s^2), Energy E (J), speed of light C = 3x10^8 (m/s), gravitational constant G = 6.67x10^-11, mass of primary M (kg), radius r (m))

Am = G * M / r^2,
Am = G * E / (C^2 * r^2),

Acceleration Am = 7.4x10^-28 * E / r^2,
Energy E = Am * 1.35x10^27 * r^2.

However, saying all the above, gravity engines are not theoretically impossible, just very, very difficult. (Also see Energy and Distance below.)
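
A short numeric check of the two rearranged formulas above, and of the introduction's 'thousands of billions of bombs' versus '50 to 100 kilowatts' contrast (Python; the ~1000 kg car mass, the 5-10 m/s lift speed, and the ~4.2x10^15 J one-megaton bomb yield are illustrative assumptions of mine, not values from the text):

 G = 6.67e-11   # gravitational constant, m^3/(kg.s^2)
 C = 3e8        # speed of light, m/s

 # Mass-energy needed to source a 1 m/s^2 field at 1 m: E = Am * r^2 * C^2 / G
 E_unit = 1.0 * 1.0**2 * C**2 / G
 print(f"E per unit field at 1 m: {E_unit:.2e} J")   # ~1.35e27 J, matching the text

 # Brute-force case: a 9.8 m/s^2 field at 1 m, counted in one-megaton bombs
 print(f"one-megaton bombs: {9.8 * E_unit / 4.2e15:.1e}")   # ~3e12, 'thousands of billions'

 # Best case: hoisting a ~1000 kg car against gravity at 5-10 m/s, P = m * g * v
 for v in (5, 10):
     print(f"hoist power at {v} m/s: {1000 * 9.8 * v / 1000:.0f} kW")   # 49-98 kW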
-- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -


Resonance of Gravity Fields - (New [April 2015])
It is observed that gravity fields are regions of time dilation, and this can be used to calculate a basic signature of resonance. On examination there is a close correlation between this resonance and the escape velocity for the field; so for instance the Earth at its surface has a resonance of 11.2 Km/s, which is exactly equal and opposite to the escape velocity from the same point. When examined this may be another strong hint that gravity is an FTL force (because it projects the time coherency of the field to a point). It also creates an interesting graph -
Gravitational Resonance at Earth's surface :

At Earth's Surface          | Gravitational Resonance | Escape Velocity | Momentary Gravity
- From The Earth            | 11.2 Km/s               | 11.2 Km/s       | 9.8 m/s^2
- From The Sun (Sol)        | 42.2 Km/s               | 42.25 Km/s      | 0.0059 m/s^2
- From The Milky Way Galaxy | 1095 Km/s               | 1095 Km/s       | 0.0000000023 m/s^2
- From The General Universe | 777,000 Km/s            | 819,000 Km/s    | 0.000000278 m/s^2

An initial assumption was that interfacing with a gravity field might require tuning to its resonance or above. The calculation shows the resonance of the Earth's field as around 11 km per second, which is a very fast speed. However the Sun, the Galaxy, and the General Universe all have even higher levels of resonance, even though their momentary forces are far smaller. Also note that the general universe has a resonance which is faster than light, which seems to imply that: A. gravity machines really are impossible, at least without FTL technology, and B. the entire universe exists inside a single giant singularity.
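
A hedged check of the first two table rows (Python; it reproduces the Earth and Sun figures from the standard escape velocity and Newtonian field formulas - the identification of escape velocity with 'resonance' is this page's own hypothesis, not standard physics):

 import math

 G = 6.674e-11   # gravitational constant, m^3/(kg.s^2)

 def escape_velocity(M, r):
     """Standard escape velocity v = sqrt(2*G*M/r), in m/s."""
     return math.sqrt(2 * G * M / r)

 def local_g(M, r):
     """Newtonian field strength G*M/r^2, in m/s^2."""
     return G * M / r**2

 # (mass in kg, distance in m) - Earth at its surface, the Sun at 1 AU
 for name, M, r in [("Earth", 5.972e24, 6.371e6), ("Sun", 1.989e30, 1.496e11)]:
     print(f"{name}: v_esc = {escape_velocity(M, r) / 1000:.1f} km/s, "
           f"g = {local_g(M, r):.3g} m/s^2")
 # Earth: v_esc = 11.2 km/s, g = 9.82 m/s^2
 # Sun:   v_esc = 42.1 km/s, g = 0.00593 m/s^2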
-- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -
-- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -


Towards a Real Gravity Engine ?

Part 1 : Theories of Gravity - The first Question : What is Gravity? - There are many possible or potential routes to a working gravity engine but really at the heart of the problem is the simple fact that we do not have a single completely unified or fully complete 'explanatory' theory of gravity. - A basic logic is that finding & exploring the exact mechanism of gravity should be one of the very first starting points for any kind of real gravity engine research. Of course the opposite is also true, working on solving the problems of manipulating gravity seems to be a fairly good path towards finding & understanding the real underlying mechanism for gravity.

Here is a selected list of primary theories of gravity, some standard, some obsolete, some speculative. -
- Newtonian Gravity : The most basic Newtonian theory of gravity simply describes the phenomenon as a transmitter of force at a distance and leaves the actual mechanism completely mysterious. The Newtonian model begins to fail at extremely high velocities or energies (where V approaches C).
- Relativistic gravity : The theory of General Relativity uses local curved space time as a description of gravity. This allows gravity to be treated as a shortest world line through a four dimensional space time rather than an actual physical force.
Although extremely accurate at STL velocities, standard General Relativity fails at the speed of light and at FTL velocities and cannot explain certain phenomena that involve FTL or Quantum physics. A prime example is the gravity fields of black holes. Primary causality requires a stable FTL physics for the universe to exist and as such General Relativity fails completely for a general FTL model.
- Quantum Gravity : Quantum theories tend to treat gravity as a transmitter of force using massless force carriers - bosons (in the standard picture, the hypothetical graviton). - These may exist alongside but are separate from curved space time. A slightly more complete picture of quantum gravity emerges with the discovery of the 'Higgs boson', which is described as a perturbation in the Higgs gauge field and is sometimes regarded as the final piece of the standard model of particle physics. [Q?] It is not yet clear what impact the Higgs field model has for the more general model of gravity.
- FTL Gravity : In an FTL model (as I am working on) gravity can be fitted closely to the quantum explanation or to a slightly modified version of general relativity, but even simpler explanations exist. - For instance that gravity is a direct projection of Newtonian force through the FTL space. Another way to describe gravity is as a distortion in a 'zero point field' or 'ether' connected to the absolute frame / FTL space. (This can be seen as merely another interpretation of the general relativity mechanic.)
- 'Ether' Theories of Gravity : (most of these are now generally considered very obsolete or fringe) 'Ether' theories often treat the Ether as a substrate for gravity or as a direct mechanism for gravity, though some are far more complex. The 'Ether' is/was a postulated constant background against which the ordinary universe moves (discarded by Special Relativity), also called the absolute or non-accelerating frame. - The Ether finds a semi-direct analog in FTL physics as above. Some 'ether' theories also confuse or mix various models of the 'ether' with psychic auras or other things. This may or may not be correct; however, since there is much confusion and little substantiality to this work, it is generally of little scientific value. Various other methods exist which, for instance, connect gravity to magnetism, or use things like super symmetry or multi-dimensional hyperspaces. (eg Heim theory)

Notes :
- While general relativity conventionally defines gravity as not being a force the same mechanism of curved space time can also be applied to describe all forces. - So either no force is a force or maybe all forces are. (terminology is a sea of potholes.)
- An additional rule from General relativity that we should note is that in GR while the local speed of light is fixed the absolute speed of light and the speed of zero are not - this allows the mechanics of general relativity to accommodate various unspecified FTL theories.
- Einstein's Universal Field theory can be understood as that all matter and energy are ultimately regions of curved space time. This may be compatible with string theory, or FTL theories, or other theories. [???Check!!! future research..]
- An important idea from FTL physics and from Newtonian gravity is that all gravity fields originate and terminate in point singularities, points of zero size and infinite density. (see separate FTL Physics section)
-- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -


Part 2 : Methods Towards a Working/Workable Gravity Engine - Basic approaches to the science, technology, and nonsense of gravity engine design. This is only a small selective list, the different methods do not even share a single common core physics.
- Ultra Dense Materials : Creating, stabilizing and handling ultra dense materials. - Neutron condensates and point singularities are the main possibilities that achieve the enormous densities desired. As an example a point singularity of 10,000 tons generates a gravity field of some 6.67 m/s^2 at a distance of 1 cm. (Checked in the short sketch after this list.) Ultra dense materials allow the creation of 'Ordered Gravitational Substrates' at relatively small scales and masses, a basic key to real experimentation.
- Relativistic Acceleration : Gravity fields may have an intrinsic resonance, and interfacing with a gravity field might require tuning to or above this resonance, requiring relativistic forces. (See Resonance of Gravity Fields above.) Straight line relativistic acceleration might not be very easy to harness, but an object moving at relativistic speeds turned through an arc or circle will demonstrate a rotational relativistic acceleration. We can achieve relativistic accelerations already, but only with incredibly small masses in particle accelerators. To be useful we need to be able to accelerate far higher masses to low/medium relativistic speeds; perhaps the nearest the world comes to this today is in tokamak research for nuclear fusion.
- FTL and Quantum Coherency. : If a machine can be built that can achieve a large scale, stable, FTL or Quantum coherency with enough energy this could lead to relatively easy ways of building workable and potentially practical gravity engines without the need for ultra high energies or ultra dense matter. Unfortunately the research on this is barely even at a first hypothesis stage and today achieving such a field still looks monumentally difficult. - Maybe even impossible.
- A space time reaction mass engine : The reaction mass engine analogy is a great basis for thinking about designs and is already the basis of a well known design, the 'Alcubierre Warp Drive'. Even if not entirely correct the General Relativity model is a pretty good starting point for many approaches to gravity engine research.
- Lens Amplification : One potential route to gravity manipulation, though rather speculative, is through the construction of various types of Fresnel lens. There are three major problems. - First is the sheer size and mass of material needed, the best materials being the densest - like depleted uranium or tungsten. The second is the point-line focus nature of such lenses. (Diffuse fields do not work well with point-line focus systems; see below.) The third difficulty is the resonance of gravity, which is an important variable for building a Fresnel lens.
- Hocus-Pocus 'Ether' Engines. : Based on non-factual theories that emerged from the early days of relativity and elsewhere. A real 'ether' exists, as do zero points and electromagnetic auras - but none of them conveys a magic key to gravity. People attempting to understand these things will find themselves enmeshed in a tide of obsolete broken terminology and confused broken ideas. - Any real gravity engine is far more likely to emerge from ordinary solid physics, based on established theories and well designed experiments. (A first base for studying gravity is Newtonian mechanics, General Relativity, and Quantum Mechanics, and the best way to understand these is to do a physics degree or read the books - or Wikipedia.)
- Real Zero Points. : There are several places in physics where we can define zero points, and the one most pertinent to gravity is the zero velocity point relative to the speed of light. We cannot access such a zero point directly and either it is assumed to not exist or is across some kind of FTL barrier. In theory accessing this zero point is just as difficult as reaching the speed of light but there are two crucial differences. - Firstly that we are already locked at a variant of zero velocity (0% of the speed of light) by inertia. Secondly while the speed of light seems to form a fixed point, in relativistic terms zero velocity does not - but this is only the starting point of a complex and currently unsolved mystery. A machine that could actively hold a zero point would be able to interact with space time and should be able to manipulate gravity - though probably only on very small scales.
- Gravity Shields, Schrödinger Boxes, and Reference Frames. : The Schrödinger box is well known; what is less well known is that if such a machine were possible it would act as a gravity shield - and would be a substantial starting point for building many types of gravity engines. The walls of such a machine, 'Schrödinger barriers', can be found everywhere in nature in the form of bounding light cone boxes - so in effect any FTL barrier is also a Schrödinger barrier. Note the interaction of dimensional time creeping in here. - Would a Schrödinger barrier shield gravity? It is currently unknown, but basic logic suggests it depends ultimately on the speed of gravity.
- Gyroscopic Gravity Engines. : (Also See below.) With 'gyroscopic' engines the idea is basically to get Kinetic Energy to leak asymmetrically from a rotating system in a way that creates an asymmetric acceleration. Gravity is not actually directly involved, and the machines do not work because centrifugal force is a fictitious force. No unbalanced unidirectional force has or can escape from any known rotating machine. - However over the years quite a number of serious engineers have believed in or experimented with these machines and have tried to solve their problems.
- Biefeld-Brown Capacitive Effect. : Unknown and not entirely proven (or disproven) effect observed in very high voltage capacitors during the 1950's. [Needs Research.] Cause, effect, and mechanism are not generally explained by the documentation; a minimum voltage is given as approximately 50,000 volts. The Biefeld-Brown effect fits with certain descriptions in FTL models, including that the effect is transitory and disappears. Experimentation would be needed to confirm or exclude different models.
- Etcetera, In Summary. : There are many potential routes to gravity manipulation out there, most of them completely impractical and impossible or fantastical. Some are little better than fairies attached to small harnesses, others at the outside may just have the smallest possibility of success. The real truth though is that we simply do not have enough physics to even attempt building such a machine at the present day.
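
A one-line check of the 10,000 ton point singularity figure in the Ultra Dense Materials item above, using plain Newtonian gravity (Python):

 G = 6.674e-11                            # gravitational constant, m^3/(kg.s^2)
 M = 1e7                                  # 10,000 metric tons in kg
 r = 0.01                                 # 1 cm in m
 print(f"g = {G * M / r**2:.2f} m/s^2")   # ~6.67 m/s^2, matching the figure above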
-- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -


Part 3 : Special Problems with Gravity / Gravity engines - What are the pitfalls and limits to gravity engine research?
- Gravity Centres, Gravity Wells : One of the first and biggest problems for gravity engines is that one of the most difficult environments for them is the steep and complex gravity well at a planet's surface. - The environment we experience here on Earth. Firstly the field at a planet's surface has a very high strength and so has a high potential to interfere with or disrupt any putative machine. Secondly the field (at a planet's surface) is not focused in a single direction but is the sum of a broad cone of lines of force which extend over almost 180 degrees. The horizontal components of these vectors add together and cancel out, leaving only the familiar vertical unbalanced component we experience. This means that any gravity distorting lens or shield must work efficiently through a very broad range of angles to work even moderately effectively near a planet's surface. (Making an impossible problem even more difficult.)
The result of these factors is that machines which completely fail to work on Earth may well work in deep space. A semi-happy compromise is a medium Earth orbit of at least one planetary radius (6378 Km) above the surface; for Earth an orbit at r + 1r has a field of (2.45 m/s^2). (Remember that being in orbit does not directly reduce gravity but instead counteracts it with a rotational force.) The nearest true low energy gravity field and orbit we can achieve is a local Earth synchronous solar orbit outside of Earth's influence, yielding a field of (~0.0059 m/s^2).
Absolute zero gravity does not exist naturally anywhere inside our universe and the closest is either at the barycenters* of galaxies or galactic clusters or in the gaps between galactic groups. - It is possible that there is physics that we have not yet observed because we cannot achieve conditions of anywhere near true absolute zero gravity.
(*although barycenters have near zero field strength they also tend to be points of maximum field curvature, and for galaxies tend to be located at points near super large black holes.)
- Energy & Distance : (reappraised) The basic problem is that gravity requires a vast amount of mass to generate any significant field, and for a gravity engine the energy required for a given field is the required mass multiplied by the speed of light squared. - The resulting multiplier is 1.35x10^27 Joules per unit of field (1 m/s^2) at one meter distance. In any realistic engine any generated gravity field is likely to be very small or even microscopic and useable force will need to be extracted by some method like ultra dense or hyper dense 'Mass Hammers'.
- Space Time versus Force : Perhaps THE central problem in gravity engine research is that while gravity is precisely mapped by General Relativity, its precise cause, effect, and nature are still basically completely open questions, and gravity is poorly understood. There is a basic schism in standard physics over gravity, between curved space time and force carrier driven gravity. And there are other less mainstream though competitive models such as Heim theory, super symmetry, m-theory, string theory, and so on. FTL models are probably compatible (can be made compatible) with most or all of these, even general relativity. Though the FTL model / view suggests that the speed of gravity is either far faster than light or simultaneous / instantaneous (Gv > 1E29 m/s).
- Research Status/ Money : Perhaps the biggest obstacle of all in a field like gravity manipulation (before proof is found) is in being taken seriously and in acquiring even the most meager funding for even the most basic of serious research. The problem (as ever) is that there is no serious base of work upon which to ask for funding, and of course it is very difficult to build any kind of serious base until you have that funding. This is a problem that has plagued both manned flight and rocketry and many other fields - for years, decades, or even centuries before they proved themselves.
- - Modern Science. Despite the above, in the course of other ongoing scientific work small but significant progress is being made today towards gravity manipulation. In particle accelerators like the Large Hadron Collider or the Diamond Light facility, in fusion reactor experiments like ITER or DEMO or the US National Ignition Facility or Z-Pinch machines, and in astronomical observations and space experiments. All edge physics incrementally closer towards understanding gravity, and with it just maybe gravity manipulation. If not on any useful scale, at least on an experimental scale.
In the past there have been a number of government funded and other engineering programs (especially in the United States) focused directly on 'fringe' future fields like gravity engines, FTL drive, teleportation, and so on - but by published results most or all of these have been singularly unsuccessful.
- - Negative Reputation. On the negative side gravity manipulation research has been particularly plagued by any number of fringe inventors and crackpots and charlatans, plus the extreme hyperbole of some science fiction. This is one of the main reasons why the field has gained such a terrible reputation, and is one of the main reasons why there are no serious public sustained research programs anywhere in the world today. Some notable experiments and ideas have appeared in the past but separating the good from the bad is very difficult and the bad are much more common than the good. Conspiracy theories about gravity engines abound, and some few may even hold slight water - but given the dates involved most of this can be put down to simple military paranoia or subterfuge during the high heat of technological progress during nuclear weapons research in the 1950's. If the technology of gravity manipulation had been achieved then there is almost no way it could have ever remained secret - and very little point in keeping it secret. It is simply too useful to stay secret.
-- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -
-- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -


Aside : Gyroscopic 'Gravity' Engines - Over the years I have experimented with a number of Newtonian 'Super-gyro' type gravity engines. - These do not work as gravity engines but they do demonstrate rather conclusively and beautifully that centrifugal force is a fictitious force, and also that it is very difficult (impossible) to get an unbalanced force to leave a circular system unless it loses mass. (which I already knew, but before I re-taught myself physics I occasionally 'chose' to forget in moments of excessive hope.)
- It is unlikely but may just be possible to build a working gyroscopic gravity engine by using relativistic forces. This might be quite interesting to try, but would not be easy given that a 1 meter disk would need to spin at some 50 million revolutions per second. (Checked below.) One possible way of achieving a relativistic 'super' gyro would be to create the gyro disk out of a plasma held inside a very high energy tokamak.
Saying all the above I have never even been able to completely rule out the theory or test a super gyro at ordinary high mechanical speeds. This is because of the high cost and difficulty involved in building the complex mechanical system required, with sufficient robustness and precision. (These machines are potentially very dangerous and push mechanical systems to their absolute limits - so 'play' with EXTREME care....)
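
A quick check of the 50 million revolutions per second figure above, confirming that a 1 meter disk at that spin rate has a relativistic rim speed (Python; plain kinematics, ignoring the material impossibility the text already notes):

 import math

 C = 2.998e8                # speed of light, m/s
 f = 50e6                   # spin rate, revolutions per second
 d = 1.0                    # disk diameter, m
 v_rim = math.pi * d * f    # rim speed = circumference * frequency, m/s
 gamma = 1 / math.sqrt(1 - (v_rim / C)**2)
 print(f"rim speed = {v_rim:.3g} m/s ({v_rim / C:.2f} c), gamma = {gamma:.2f}")
 # ~1.57e8 m/s, about 0.52 c, gamma ~ 1.17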
-- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -
-- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -


Aside : Potential Future Experiments in Gravity - 2014/2015/2016 Over several years and with the continuing development of my FTL Physics Model I am gradually getting closer to being able to design actual experiments that may test actual FTL interaction with gravity. [Edit Point] …and if it works will not violate the laws of motion because the absolute frame connects every point in the universe - the device is not a closed system. … there is a large experiment space waiting to be explored. .. (-looking for future funding.)

-- -- -- -- -- -- -- -- -- -- -- -- -- - -- -- -- -








A Work In Progress : AFT SECTION. - - - - -[edit]

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -








A Work In Progress : Miscellaneous : Ideas, Inventions, Suggestions, Questions, Other..[edit]

(New Ideas, Ideas in Progress, Sketch Solutions, Abandoned Solutions, Etc, Etc.) [EDIT - 29-09-15]


Medical Ideas -

Reducing Obesity - This idea is a potential method for massive and relatively rapid weight reduction with less trauma than surgery. This solution uses a chemical (such as, for example, octopamine) to trick the body into releasing its fat stores rapidly, while at the same time the patient is connected to a dialysis type machine that directly removes the excess fat and sugars from their blood. The main key components are - finding the best trigger enzyme(s) to maximize fat release from fat cells safely, maximizing fat extraction from the blood in the dialysis part of the process, minimizing the extraction and loss of vital substances, and maintaining patient safety and health and metabolic stability during and after the whole process.
(I have a personal interest in this because I suffer from a medical condition that causes super obesity.)

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -

'Cybridization' article moved to section on Super Projects above.

-- --- -- - -- --- -- - -- --- -- - -- --- -- -


Premature baby care / Solving Problems with Artificial Wombs [semi-rough] - Premature babies often suffer from learning difficulties and poor health, and several animal experiments with artificial wombs have produced grotesquely deformed non-viable offspring. There are various factors that make current methods deeply unnatural for the baby.

Practical Solutions : At a very simple level the basic solution to making the experience more natural for the baby is to replicate as many elements of its normal experience as possible.
- Most simple action: keep the baby in the dark and protect it from exposure to light and noise as much as possible..
- Keep the baby at the natural womb body temperature, ie quite hot, hotter than we would think 'natural'.
- Keep the baby's skin wet (as in the uterus), and in contact with an enclosing compressive surface, some kind of warmed gel blanket or bag.
- An artificial placenta to oxygenate and process the baby's blood so that its underdeveloped lungs and other organs don't get damaged.
- Another facet might be tuning the chemical structure of the baby's environment, again to closely emulate the womb.
- An audio system to replicate the mother's body sounds: digestion (loudest), heartbeat, and replicated normal external noises and voices.
- A mechanical system to gently wobble and shake the baby in a replication of the mother's normal movement.

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -



Flight (Atmospheric) -

[From 01-11-12] Reboot Nazi Era Tail Sitting Aircraft. A curious idea is that the old WWII era tail sitter designs might actually just work with modern technology. - Computer stabilization and control, basic AI visual awareness, and high quality miniature cameras to make takeoff and landing safe and easy. In theory tail sitters have a very strong basic advantage over machines like the Harrier or F35 - the design is lighter, much simpler, potentially more powerful, takes up less space, and is theoretically cheaper.. Maybe in a small civilian version, or build a few models first.

-- --- -- - -- --- -- - -- --- -- - -- --- -- -











A Work In Progress : Profile - P2 : Politics & Beliefs[edit]

[NEEDS MAJOR REVISION WORK !!!] [Edit ??% - 08-11-16]

Pro-democracy, despite the fact that it hardly ever works and is usually a disaster. Although awful, no other political system devised so far seems to be any better. -
(From my Strong AI research (based on an in-depth analysis of the human mind) comes the idea that there is No Ultimate Base in human logic, and that by definition each system defines its own logical base. What this rule means is that each system defines its own rules of right and wrong and codes of life. To itself each system is (as a generalization) the 'best' and all other systems are (as a generalization) either 'inferior' or 'wrong' or even 'evil'. Over time most systems reinforce their own base until eventually they reach a state of high entropy and inertia where they start to lock out all new or independent thought and gradually become totally intolerant of all new or alternative solutions. In other words without revolution democracy and freedom are a fantasy. The UK is a good example of a country where the political system has entered a state of high entropy - and as a result has become very poor at choosing the right people or making good decisions, and is very intolerant of alternative solutions. We can only look on other more democratic countries with envy.)
Anti-democracy, democratic politics driven by popularity and electoral cycles tends to become extremely short term focused and unable to make high quality long-term political decisions. An entrenched political class forms who care more about their own power and infighting than about their own country or the future. - And those people with higher moral standards get pushed and driven out of politics, or their voices become suppressed. Such systems create intolerant political correctness and monsters like George W Bush or Blair or Cameron. When you don't care about the future for long enough - eventually you cease to have one..

Pro-freedom of speech, Without freedom of speech 'democracy' is a meaningless word and ideas like 'freedom' are meaningless concepts. In UK politics it seems like freedom of speech is under constant attack. In our society freedom of speech is frequently twisted to serve the media elites and effectively becomes its own opposite. - The freedom to chant the commands of the elite. One of the best things about the internet is that it has eroded this process and at least slowed it down, even though unfortunately hierarchical systems are becoming ever more powerful and entrenched everywhere on the internet.
Anti Propaganda / Brainwashing, Allowed to go too far 'mind control' becomes an absolute and direct form of slavery and is the absolute ultimate heart of the worst forms of fascism. (whether by the anti-smoking fascists, German Nazis, the Socialist Workers Party, The Conservative Party, or Sky/Fox, the anti-nuclear campaign CND, the BBC, or PETA.) Some types of propaganda are different and more subtle than others. When you understand how propaganda works and how to look for it you will find that our whole society is dominated by it... Yet they say slavery is illegal ???
[removed original here because it did not make sense and required much further explanation]
The argument gets more uncomfortable. Unfortunately sometimes brainwashing and mental enslavement are the only solutions that work, and ultimately our whole society actually really does depend on them - but I don't have to like it. To paraphrase a certain movie: 'they type it, you think it.'
Anti-Religion, religious dogma, fanaticism, and subservience are mutually exclusive with true freedom. Religion to me (in this context) has little to do with belief in God. Religion is about obedience and is a complete book of rules for life and a lazy way to avoid all the difficult questions and live in a simple comfortable fantasy world made by others. You might as well sell yourself as a slave, and at its worst religion is simply a legalized form of actual slavery.
- Christianity. Anyone with even a remote familiarity with the Bible will know that as it stands the book advocates the rule of the king and an elite aristocracy. By a simple extrapolation 'democracy' is the domain of Satan, and 'freedom' is simply vulnerability to the influence of evil. Take its worst crimes into account and Christianity is very squarely the progenitor of Nazism, and has at least several other charges of genocide or mass murder on its books. Compare the images of Christians burning witches and pagans with the German Nazis exterminating the Jews. Spot the difference? One is directly related to the other, and most Nazi anti-Semitic propaganda actually originated as earlier Christian propaganda. The Church's self-erasing memory has taken care of that by blaming it on the Atheists and other non-believers..

Anti-Communist, the ‘system’, social brainwashing, slavery, lack of ownership, and human farming part.
Pro-Communist, the justice and equality part, the liberty part, and the government being the servant of the people part.

Pro-Capitalism?, In the right place capitalism and money are a great thing, there are many right places and many wrong places. Right places include industrial and technological enterprises, small businesses, innovators, private companies making real products and creating real services. Wrong places include, the police, government services, politics (bribery and paid voting), certain types of basic utilities, healthcare, and banking. The rule of capitalism is profits first, not the most efficient or effective first, and in many places capitalist systems are less efficient than socialist ones. US Healthcare is a prime example; one of the most expensive, corrupt, and very often one of the most inefficient and most ineffective healthcare systems in the world. (Which country has the most fiscally efficient healthcare in the world in 2016? Cuba).
Saying the Above I am a capitalist : My moral archetype from the world of science fiction is (maybe) Tony Stark, an ultimate wish is to own and run the most advanced technology company on the planet.

Anti Global Capitalism (Globalization) The real problem with capitalism today is that a system of ownership based on shares was never intended to survive in the kind of totally open, global, computer driven markets that exist today. Companies, which should be the foundation of the system, have become very vulnerable, the victims of global financial predators, in a way that subverts the entire system into something that threatens as much as it supports society. At the heart of the problem is the nature of money itself, which was born long ago in the feudal and imperial eras. Money as it now exists has been subverted by the new open global market - free-flowing, poorly regulated, easily hidden, easily corrupted.. Often valued by incompatible systems that create and depend on intrinsic unfairness. The global money system is a bigger killer than smoking, AIDS, war, terrorism, and pollution combined.
The Global Elite. The fluidity of the global market has allowed a vanishingly small elite to gradually take control of the whole system. They avoid most local taxation and control and moral responsibility. They use massive legal resources to subvert legal process. They use control over politicians to subvert the law directly.. They constantly use their advantage to leverage a totally unfair advantage over almost all small or local business. They have been allowed -largely unopposed- to slowly take over the world. Today the whole world finds itself owing a vast debt but this debt is based on usury and privateering and is effectively a vast lie that these people and our politicians have sold us into. A circle of deceit that goes so deep that it is very difficult even to map - that is deliberately impossible for the common people to begin to understand.
Effectively people have borrowed from themselves, and now they are paying the money back and at a high rate of interest. The people at the top are now so rich that they have subverted the whole nature of the system, they have put themselves above the law and outside all morality. The only way they have left us that can stop them is to suspend legal process and use the military and secret services to 'destroy' them. ('Come in FBI Brownwerks, 'Wet Division', we need you!' Suddenly the rubber hoods, and electric shocks, and shallow graves don't look quite so bad.) I speak as a capitalist - who believes in small business.

- - - -

The Third World is the Perfect Capitalist System. (generalization) In the Third World the poor have little or nothing that they don't pay for; no health care, no schooling, not even policing or justice, not even basic food or shelter if they are starving or homeless. - Throughout the world we are all now gradually becoming part of this brave new global third world. (see Globalization above) - It is almost the old Cyber Punk dream, only it isn't a dream, it's a nightmare.

- - - -
- - - -

The Five Immutable Rules of Capitalism
1. You need money to make money. - In most places the poor can almost never become rich - unless they become criminals. This acts as an anchor against progress and holds the world in the hands of those who already have power.
2. Capitalism is about maximizing profits and minimizing costs. - Nothing else matters. Unfortunately this means that among the less moral, the ultimate perfect form of capitalism is simple theft. (It maximizes profits and minimizes costs.)
3. The harder you work the less money you earn. - The workers at the bottom do the work; the ones at the top spend the money. The man who digs the road earns less than the one who sells trinkets, who earns less than the man who owns the land.
4. Capitalism naturally destroys competition. - Capitalist systems (markets) generally have a finite lifespan before they reach a point of maximum entropy (become locked monopolies), after which real competition becomes effectively impossible. - The only way to break such locked markets is revolution: market breakup by government, or collapse, or obsolescence, or nationalization, or war.
5. In a true capitalist society everything is for sale. - If you can't buy slaves, pay the police to beat someone you don't like to death, or buy a pound of human flesh at the butcher's shop, then you are living in a socialist utopia.

- - - -

Capitalism is the law of the jungle but not the law of the jungle. I speak as a capitalist.

Anti-Monetarism (Anti-Thatcherism). Monetarism is a debased form of capitalism based on the principle that the only unit of morality in society is money - the philosophy of 'greed is good' and 'protect the rich and take from the poor'. Monetarists view money as the only goal in life; other people are seen merely as work/money units to exploit in every way you can, and morality, society, and human life are merely worthless impediments to increasing wealth. Ultimately the monetarist model always becomes a relentless search for the lowest (cheapest) common denominator in all things - and it debases every society it touches. - The USA is an excellent example: in many ways a burnt relic of what it was before monetarism and globalization touched it.
As a philosophy, monetarism is almost the credo of legalized crime for the rich. - Do what you like and exploit what and who you like. - There is no tomorrow and society does not exist. - The only crime is getting caught.


Anti-Internationalism. Many years ago I used to be an internationalist, but over the years I've watched the idea gain more and more power and seen its results. I now see internationalism as a disastrous system that is ultimately guaranteed to destroy the world. Quite simply, internationalism demands an equal division of power between the world's nations. - But a more mature attitude sees that they were never equal to begin with; they are very disparate in needs, power, and morality. The result is that international politics has become an ineffectual, corrupt, and self-serving talking shop, often almost incapable of any real action - and when international politics does make choices and take actions, the results are often bad or very bad. Like monetarism, internationalist politics is constantly swayed and governed by the lowest common denominator. Unfortunately the environmental debate highlights this problem very clearly. The need for action on climate change was indicated clearly and strongly by the mid-90s, and some 20 years later, looking globally, things are still getting worse. The window for action on climate change is probably already closed, and still, at a global level, we are doing almost nothing. - And climate change is a case where even 'a global nuclear war' is generally a better option than doing nothing.


Globalization : Long Term Analysis. Globalization (global capitalism) is the step too far that proves internationalism doesn't work. It is a unique system that keeps the main group of poor nations poor, while using cheap labour in a group of 'developing' poor nations to out-compete and slowly crush the wealth and power of the old rich nations. To see the really scary part, though, you need to extrapolate into the past and future.
Seen over multiple decades, globalization destroys the disparity in wealth between the lower and middle classes worldwide. However, this ultimately leaves a vast sea of mostly equal but basically weak and completely powerless people throughout the world, ruled over by an incredibly rich and untouchable ultra-elite who are already close to gaining almost total power. As the elite drain more and more money from the system, the old power blocs of government and democracy gradually become largely irrelevant and powerless to do anything.
Addendum : Of course globalization may not end up as a global third world - in some 30 to 50 years it may actually end up as a global communist state, almost certainly ruled over by China. Ultimately, short term thinking wins in the short term and long term thinking wins in the long term. - The masters of global capitalism almost only ever think in extremely short term windows... while the Chinese authorities are excellent at long term planning.


Anti-EU (European Union). Celebrate with great joy that the UK is finally on the way out of the EU!
To me, being against the EU is actually being pro-European, NOT anti-European.
Many of the basic ideas behind the European Union were good and noble, but the reality that has been created is not so wonderful. The EU is a huge, clumsy, monolithic bureaucracy: hugely slow to react and far too unwieldy to function as a proper decision-making system for government. This bureaucracy is socially suffocating and is ultimately a route to the end of freedom and democracy. My biggest complaint against the EU is that it often takes up to 10 years to enact a decision - and it is often a wrong decision. The EU system has also shown a willingness to use threats and social manipulation to force and cajole its citizens into the decisions it wants. The EU is slowly and subtly squashing democracy and individuality in every member country, a drip at a time. While the UK is now on an escape trajectory from the EU, unfortunately the rest of Europe is still trapped in its downward spiral.
The EU & Fish. One of the EU's worst policies was originally meant to 'protect' fish stocks, but has ended up with over a decade of fishermen throwing tens of thousands of tons of dead fish back into the sea. - It shows an ultimate disrespect for nature.
Not EU. Among the few positive things in European politics are ESA (the European Space Agency) and CERN - but neither is directly part of the EU anyway.

The Euro - Inequality Pump. Most people demonstrate that they don't understand mathematical patterns, don't understand money, and usually don't understand the writing on the wall. A single currency creates a point of unification in a financial system (economy). One of the less pleasant effects occurs because the poorest parts of the system drag the total value of the currency down. Reducing the value of the currency increases profitability in the richest areas and countries, while also increasing poverty in the poorest areas and countries. It is a self-reinforcing, self-resonant effect, and it virtually guarantees decades of increasing wealth for the richest areas in the system and decades of continuing or increasing poverty for the poorest. This can be observed within the regions of the UK, between the states and regions of the USA, within China, or within the Euro zone. As an opposite case, Japan does not have enough differential of wealth; this has held the value of its currency high and the country in permanent economic stagnation for decades.
The Euro represents a flow of wealth out of the poorer countries like Greece, Spain, Italy, and Ireland, and into the German heartland. Turkeys really do vote for Christmas...

-- --- --- - --- -- --- --- - --- -- --- --- - --- -- --- --- - --- -- --- --- - ---
-- --- --- - --- -- --- --- - --- -- --- --- - --- -- --- --- - --- -- --- --- - ---


Core Political Beliefs

Socialism - Reloaded. :- Socialism is one of the most fundamental belief systems in the modern world, and is actually an almost universal underpinning of all modern western democracies. Socialism has another name that describes it: 'Equalitarianism'. In its most basic and minimal form, a 'Hobbesian' socialist system is a police force and justice system funded out of taxation - a system based on the idea of 'universal' equal justice and fairness, and of equal protection for all. In more advanced socialist systems this philosophy of universality and morality is applied to a wider and wider circle of things: support for the destitute, support for education, a universal army, a universal voting franchise, free shares of ownership, employment rights, universal healthcare, unemployment support, support for the elderly, even support for food or housing - no favoritism. All these things are essentially given away for free to everybody and paid for through universal taxation. (At one level, even capitalist share ownership itself was born out of socialism. Note also that socialism ultimately depends on money, and so is ultimately a form of capitalism.)
Weaknesses in the System. Of course socialism does have some very serious problems. - A socialist system depends entirely upon the quality and efficiency of the government that runs and supports it, and very often that government is not so efficient. A new model is required to improve this and create a revolutionary idea called 'competent government'. A particularly large problem for socialist systems is excessive, encroaching bureaucracy, which can almost become a form of cancer that kills the moral base of the system, lacks mercy, and fosters corruption. Ordinary socialism is also full of obsolete terminology and ideas that dealt with the problems of earlier ages.
In practice, the biggest problem facing socialism today is probably the diminishing size of tax bases and the growing cost of socialist support. In places where employment is no longer universal, large underclasses can form that are totally dependent on the state. At the other extreme, the world's elites have become increasingly sophisticated at avoiding tax. As taxation becomes less universal and benefit costs rise, taxes also tend to rise, and often unfairly. We have not yet found an adequate or moral solution to this complex problem, and it is an area where new solutions need to be actively sought.
(At the same time as taxes increase, capitalist utilities calculate how much money can be squeezed from their customers. Together these become what might be called 'minimaxing', and they are the main reason why so many people today feel financially squeezed.)

There are also many advantages to socialist systems, which in many places are more efficient and effective than the capitalist alternatives. Such places tend to occur where there is no sensible market but there is a large scale universal need. - These include the military, healthcare, policing, large scale road building, electricity generation, schooling, public transport, postal services, and many other basic utilities. In many of these services an individual can choose an alternative private capitalist version at any time if they choose to pay. Socialist systems are inclusive, not exclusive. - The socialist ideal is a basic but quality service provided universally at minimum cost. It is also quite acceptable for socialist systems themselves to offer premium or faster services to those who wish to pay extra. (This is one of the many differences between socialism and communism.)


Long Term Scientifically Based Environmentalism, or 'Scientific Deep Green' (Also See Section below.)
As a scientist I am very much against the pseudo-scientific green protest groups who, while well meaning, often choose extremely bad solutions and are often all too willing to lie or exaggerate to make their points - thus damaging the whole green argument. At the end of the day, the environment and 'green issues' are complex systems that depend totally on science and scientific arguments, and ultimately (almost all) green solutions have to be scientific solutions.
There are only five or so broad primary routes that lead to a long term survivable future: Advanced Green Technology, Social Restriction, Population Reduction, Global Agrarianism, and Technological Supremacy.
Advanced Green Technology :- Adapt and create new advanced technologies that are less environmentally damaging, using science. This is the obvious choice favoured by 'progressives' and by scientists and technologists, but it is very expensive and requires the disruption of many current methods.
Social Restriction :- Limit the spread of damaging technology to small elites who thus do less damage. This is obviously brutally unfair, and is now only possible by trying to reverse something that in many areas of the world is already largely complete. Ultimately this path devolves into abandoning technology altogether and choosing or forcing agrarianism.
Population Reduction :- Reduce and restrict the overall human population to limit the damage. The slow solution is to restrict global reproduction and push for controlled small family sizes. Unfortunately it is probably already too late in 2015 for the slow solution to be enough by itself. The only population reduction solution that leaves is killing on a massive industrial scale, with all the obvious and horrible implications that implies. A baseline figure might be to reduce the total human population by some 15 to 30% - requiring the deaths of about 1.2 to 2.5 billion people.
Global Agrarianism :- A society that is largely rural, low impact, low population density, and low technology. Agrarianism on a small scale is an idealized dream of choice for many, but it does not extrapolate up easily. To implement agrarianism on a large, globally effective scale requires force and massive population reduction; the numbers who would need to die to achieve this are even higher than for population control. Here a baseline figure might be to reduce the total global human population to a maximum of some 2 to 3 billion people - (today) requiring the deaths of about 4.5 to 5.5 billion people. (This type of agrarianism was tried by the Khmer Rouge in Cambodia.)
Technological Supremacy :- Not really a green solution but its antithesis: the complete destruction of nature and the natural ecosystem, and its replacement by machines and engineered biology. Ultimately cost is the only limit in such a world, and it raises the population limit of Earth to maybe 30 billion people. This solution might not appeal (to the creative, the poet, or the soulful) but it is the overall trajectory that the world is on today, and it may be very hard to stop it coming. Technological Supremacy is the only solution that does not impose huge limits on development, population growth, greed, or capital growth.

I personally favour a path that steps somewhere midway between Advanced Green Technology and Technological Supremacy.
- Advanced green technology is the only (sane) choice, but even with it we may still end up having to use either social restriction or population reduction in limited forms. - It is a matter of scale and severity, and of having other options.
- Total fairness is, on the face of it, an impossible dream: for anyone to survive at all there have to be winners and losers. This is the essence of biological nature and evolution anyway. (Nothing is meaner than evolution.) The only two systems that achieve total fairness and equality between all humans are the total destruction and replacement of nature, or the complete extermination of humanity.
(SLOGAN : Life is an unbalanced Equation.)


Moral relativist - This means that I don't believe that humans are the centre of the universe, but acknowledge that ALL GOOD AND EVIL ARE RELATIVE. This doesn't mean that my morals are weak, but I do believe that people are far too self-centred and arrogant as a species, and as a result small minded. To me moral absolutism is a truly Hitlarian philosophy (in a bad way). An even worse example of absolutism is the insane moral certainty of fundamentalist religion. The fundamentalist claims: "My opinion X is by definition right, and anything opposed to me is by definition wrong or 'evil'!" No matter what belief system they touch with that absolutism, they corrupt it and turn it into a tool for evil. Even the democracies of the west are far too attached to this kind of thinking, and I fundamentally disagree with it.
I know that not seeing humans as the reason the universe exists makes me a sociopath, and it is true that, not having absolute good and evil, I am aware that human lives are not that much more valuable than those of other animals. So sometimes I maybe just might be tempted to feed a few of us through the slaughter lines to reduce the numbers. (From an ecological perspective, when you have too many of a species you cull to control the numbers.) :D [smiley]
. -- --- --- - ---

Technocrat and Utopian. [Still under development - a rather complex section.] I believe that we - the old west - should aim for a technocratic utopia.
[what I wrote is rubbish - I do that sometimes! DELETED]
I am a utopian, but either describing (designing) a utopia or creating one in the real world is not easy. Utopias can easily turn into fascist states, and in fact they require a conformity to certain norms that can make them intolerant. It might be noticed that I mention the German Nazis rather often - it might not be comfortable, but in many ways they were utopians who believed in a great white future and many of the ideals we call socialism or equalitarianism. - Of course they also believed in their own racial and cultural purity, and their own superiority over others. Nazism was also a failed utopia because it based itself on the old imperial model: expansion and growth through war and conquest. What happened to the Nazis was that the small wars of conquest that Germany wanted (to build itself a bigger empire and a stronger border) ignited into the global war it didn't want, which washed over it and pulverized it. Looked at more deeply, Nazism was even more flawed: much of its ultimate vision was based on the Roman Empire (hence the swastikas and salutes and so on), which is not a very good utopian model.

The 'Superman'. Utopia is compatible with democracy up to a point, and is compatible with anarchy and revolution up to a point, but the creator of a utopia must impose rules that contradict these, or the society will tear itself to pieces every time. (This is actually true of all societies, not just utopias.) The true utopia requires the same thing that Hitler wanted - the 'superman', or 'evolved' man, or superior moral man - though of course our definition of 'superman' is very different to Hitler's. Above all, the core of the utopian is a belief in self improvement and evolution - an innate moral sense and a basic moral code. - This is where every utopian goes to war with every other utopian: over what that code should actually be.

Human Psychology. I believe my work on machine cognition has given me a unique understanding of utopias and the programming of human minds. Let me ask you: what is a slave? Do you have a moral right to create a slave? People - everyone, you and I - are all slaves already. - We are slaves to our programming and our cultural imprinting. One of the biggest differences between humans and other animals is that part of our core programming has become flexible - and this flexibility is the part that we rewrite to make a 'culturalized' person with moral, ethical, and social codes.
Throughout the centuries these codes have evolved slowly, mostly naturally, and often by force. However, advanced psychology has changed all that and introduced the core of sentience to the flexible logic in the human brain itself, and the result is an explosion of madness (mental illness). One sharp over-reaction, followed by a counter-reaction, and then another, and another, explosion, and another. Because of the way this psychology works, the more that people in power try to control it, the more it explodes away and attacks them, again and again and again. There is a counter-swing, and then another. Reprogramming humanity to regain control of and repair this internal logic is the very thing that the species is most repulsed by, is terrified of, can't tolerate - but it is the very thing that it needs the most.
The only question is how to get there. We cling to the fragments we can hold onto: family, tradition, friends, religion, science, money, morals, power, sport, whatever - whatever we can. And mobile phones and the internet and social networking are accelerating this process of explosion and fragmentation even more.
The truth is that at the base of human logic is a command structure based on being an inclusive member of a group - and the loneliness and exclusion of modern society push it into overload and destructive behaviour cycles. The only cure is to give it the things it needs, and the only way to do that is... a New Utopia. [And for my next trick I shall create a Flying Pig!!]

-- --- --- - ---
I am a Modernist. [EDIT POINT]
I am a modernist through and through, and I still remember the maxim of progress: New is Better.
-- --- --- - ---








A Work In Progress : POLITICS, SLOGANS, and UGLY TRUTHS.[edit]

Idiocracy : Do you want to know the real secret of everything that is wrong with the world? Ordinary people (by statistical mean) will always vote for an idiot. They are stupid (by statistical mean), so they want someone stupid in charge. Stupid people make great politicians but terrible leaders. - There, simple isn't it: no great conspiracy, no great secret behind the mask, just a bunch of greedy, self-promoting, self-obsessed idiots.
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


'Morals are for soft hearted liberals.'
"Human Rights! Where we're going you don't need no human rights!" - Welcome to Cameron's brave new world.
'To Do a Nick Clegg' - Self Serving Treason that leads inevitably to your own downfall.
'Nick Clegg - I don’t hate him because he’s a Liberal, I hate him because I’m a Liberal.' [03-04-14]
An image of Ian Duncan Smith in Hell raping and eating the disabled.. The Swastika Logo of ATOS fluttering Behind him..

'Trickle Down' - method of economic fairness used in pre-revolution France. Let the poor eat the garbage of the rich, or take the scraps from their tables, or starve - that is trickle down.

(Imagine if Cameron were sent back in time to 1939 to replace Winston Churchill as war leader, but with no memory of the present. Question: how long would it take him to surrender to Hitler? Answer: A/ 3 months, B/ 6 months, or C/ 9 months?)
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -

Politics : Ordinary People Versus Democracy.

Ordinary People. 'Ordinary people' (by statistical rule) are so stupid that when someone tells them they are smart, they believe it. Politics in the UK and around the world is terminally corrupt and rotten to the core, and ordinary people must take a huge part of the blame for this: always voting for the best liar or the smarmiest smile, always falling for the sharpest suit, or the glittering bribe, or the simplest manipulation, or the scariest lie. The strongest argument against democracy is the fathomless stupidity of ordinary people voting en masse.
Don't like what I say? Well, there is the problem: I believe in telling the truth, and the truth hurts.
(Sadly, ordinary (dumb) people always seem to prefer to follow a liar who tells them how smart they are.)

- -- --- -- - -- --- -- - -- --- -- - -- --- -- -


Space Politics. The UN : Some Good, a Lot Bad.
In Short : The current UN Outer Space Treaty as it is formulated today is so badly thought out that it is simply a block to humanity having any real future in space.
The Reality of Space :
- In space (in general) there is no life. The planets in general are dead. Evolution is a complete explanation for almost everything about our biology except the first cell - the place where chemistry had to defeat entropy directly to become life 'spontaneously'. In short, according to current physics, life is probably very rare. (I have worked on the 'design' of living things directly, in the form of diamond based assemblers, and have studied evolution as well as thermodynamics.)
- In our solar system - and as we are alive and sentient this is 'our' solar system - the planets are ours to conquer. The resources are out there and are almost infinite, and there are a billion billion stars. The planets are one tiny grain of sand from a vast beach.
- Time. Time out in space is vast; everything is ancient, almost unchanging. But it is slowly spiraling down towards death; it will not live forever no matter what we do. Life on Earth will come to an end no matter what we do. Humanity is transient, and evolution/nature will eventually kill us. Why not try, why not choose life and fight against entropy, why not fight for life and a better future? I believe in nature, but I do not believe in nature. Let machine and man improve nature and conquer the solar system, and then the stars beyond.

Ossified law is a threat to the future of our species, a suffocating tourniquet around our collective throats. A dead voice and a dumb voice telling us what is 'right' and what is 'wrong', and what we can and cannot do. 'Life is the enemy.' Bureaucrats lose their humanity and replace it with paper, and live by that paper. And if the rules so decree, they will kill you with that paper and not feel a single emotion for you - except perhaps annoyance that you were there at all.
'Those international treaties should be used for what they were intended for - wiping bottoms.'
- -- --- -- - -- --- -- - -- --- -- - -- --- -- -









A Work In Progress : Environmental Politics / My Version of 'Scientific' Deep Green[edit]

[A complex and awkward article. Also See politics section.] [Edit 85% - 23-04-17]


Basic Environmental Beliefs -
I have been a passionate 'green', environmentalist, and ecologist since I was a teenager in the mid 1980s. This extends from my scientific background and ethos, and from my experience growing up in a rural society. As a country person I favour less urban primacy in society, lower population levels, and the protection of rural human environments. (While at the same time wanting to allow and encourage rural scientific and industrial development.)
I believe above all in long term thinking and the long term approach, and believe it is absolutely essential to the future survival of the Earth and of humanity. This is the exact opposite of the very short term thinking so common in today's society. I also obviously favour scientific and technological development, and see increasing technology - especially done in a less short term and consumerist way - as the only sane, rational approach to the Earth's environmental problems.
The biggest cause of pollution today is mass consumerism, and the basic arithmetic is simple:

- Total Net Pollution = Total number of people × Average pollution per person (consumer).
- Total Net CO2 = Total number of people × Average CO2 emitted per person (consumer).
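As a sanity check of the identities above, here is a minimal Python sketch (my addition; the per-person CO2 figure is an illustrative assumption, not a measured value):

 # Minimal sketch of the identity above: Total Net CO2 = people x CO2 per person.
 population = 7.5e9        # people - the 2017 figure used in this article
 co2_per_person = 4.9      # tonnes CO2 per person per year - assumed, illustrative
 total_net_co2 = population * co2_per_person
 print(f"Total net CO2: {total_net_co2 / 1e9:.1f} billion tonnes per year")

The point of the identity is that the total is linear in both factors: halving either the population or the per-person figure halves the total.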

I believe that to save humanity long term we need a total reorganization of the world along more long term and utopian lines, plus a reduced population and less selfish, money obsessed lives. This isn't agrarianism, which would require a massive global scale genocide first. Nor communism, which failed economically and on basic humanity. Nor is it fascism - either of the Hitlarian type, or any form of neo-fascism, or more modern types such as Political Correctness and the SJW movement.
We need a new type of utopia based on principles that allow freedom, personal ownership, individuality, and competition, but that is also based on both personal and collective morality and on fairness and social justice. This gives us a basis to build a technological solution to the big environmental problems like overpopulation and climate change.
-- -- -- -- -- -- -- -- -- --


Primary Environmental Problems -

Human Population (Growing) Vs Earth's Ecosystem Food Web (Finite or Shrinking)
- Overpopulation is a primary issue, possibly the biggest environmental problem facing the world over the next 100 years. But population is also so deeply controversial and such a huge issue that most people simply avoid it or deny there even is a problem.
Human population is currently about 7.5 billion and rising by some 225,000 per day, or 82 million per year, and this means that every positive act we do for the environment is being swamped by the sheer number of people in the world.
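A quick check of those growth figures (a minimal sketch, using only the numbers quoted above):

 # Check: 225,000 extra people per day, converted to a yearly figure.
 per_day = 225_000
 per_year = per_day * 365
 print(f"{per_year:,} per year")   # 82,125,000 - matches the ~82 million quoted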
Make no mistake, a huge percentage of this population is underfed and probably goes hungry on a regular basis. They are also very young, many have little or no education, and almost all are very poor. Population size, 'population cycling', and food deficit have complex implications for wealth and inequality, and solving all these problems together will be a very convoluted and very difficult problem.
Perhaps the biggest problem with overpopulation is that it makes rising wealth among the huge group of the poorest people one of the most dangerous factors in the present world. If we solved all the world's wealth and inequality problems today, it could potentially bring the whole world's ecosystem to the point of complete collapse within a matter of months or a few years. Extrapolated future climate change has the potential to cause widespread desert expansion and push the whole population situation critically towards disaster.

Basic Short Term Solutions : Sane short term solutions to increasing global population include - 1. Switching the world to a diet of insects, or to a largely vegetarian diet. 2. Moving the whole world towards equalization at the current third world food norm (so that everybody is left adequately fed but maybe slightly hungry most of the time). 3. The use of GM and intensive agriculture to massively boost world agricultural output. All have the basic problem of fighting human apathy, inertia, and stupidity.
Common sense suggests that it will be very hard to make insects, a vegetarian diet, or equalized food consumption acceptable to enough people without a massive global propaganda campaign - even though a 'third world' level of diet was the position that existed almost everywhere until relatively recently, maybe 100 to 120 years ago. Boosting agricultural output to the extent demanded is the 'simplest' global option, but it requires enormous and sustained R&D and investment. (Estimated at some $1 to 10 trillion globally over several decades.)

The big problem with all of these basic solutions is that population size and wealth are not static. World population is still growing. The global demand for food, thanks to increasing wealth, is growing even faster than the population. At the same time, while slow, the climate is showing every sign of moving towards a higher temperature normal. This means that the goal we are aiming for is constantly moving and growing harder to reach.
Today that huge investment in agriculture is simply not being made. The preparatory social commitments required to push the world to divide food more equally, or to live on insects or vegetables, are not being made. We humans like to live dangerously - especially when it is someone else who is facing the danger first.
So here we are, facing the sword of Damocles, and we are almost all in deep denial about it. The Malthusian prophecy is one that has failed again and again, but by its very nature it has an inevitable logic of doom, meaning it is almost inevitable that it will happen eventually - unless we actually do change.

We should not give up hope; there are still other ways out of the population calamity, though most of them may not be terribly pleasant. Above all, we cannot continue with our policy of universal 'humanism' without finding some way to add a new level of control or a new negative limit - a world of universal responsibilities and consequences as well as rights. Without this, humanism may just end up becoming the most murderous policy ever invented in human history.
Ecology is based on a balance of life and death, and we have put this balance out of alignment. The only two ways to correct it are to use forceful artificial control, or to allow nature to destroy itself until a new balance is found. The second is the option of virtual extinction for humanity. Above all we must force people to accept that the simple 'natural' approach is not a solution that humanity can survive, and that we must accept at least some method of mass population control. To balance out that population growth of 82+ million people per year, we need a corresponding reduction in population growth of at least 82 million per year. Actual population reduction would require around 100 to 150 million per year.


Actual Solutions to Human Population Balancing. (All must be done at a global scale)
1. Globally Adopt Population Control. Traditional population control methods such as birth control and family planning.
2. Globally Adapt Culture and Restrict Breeding. Re-engineer natural cycles and human cultures to limit breeding: e.g. set family size restrictions or child taxes, or use more extreme methods such as mass sterilization or artificial birth regulation - such as by lottery.
3. Global Population Reduction. Start methods of active population control by active reduction. In practice this involves mass killing on a truly epic scale. We would of course have to choose how merciful, how humane, and how those to be killed would be chosen. A fair solution is to select by random choice, maybe with individual statistical tuning. The numbers would be tuned individually for each country :- For example in the UK, with a population of 60 million, maybe 500,000 a year would need to die, with maybe about 30 to 50% of them being immigrants. (In the UK most growth is through immigration.)
The worst basic method of reduction is probably a global nuclear war - though this is still a much better solution than doing nothing.
4. Massive Scientific Solution A/ Emigration to Space : Get another planet. Large scale emigration to space. Technically unfeasible without trillions of dollars of investment in space technology over multiple decades. Extended large scale space travel might also pose extra pollution threats to the planet's ecosystem.
5. Massive Scientific Solution B/ Artificial Biospheres : Build the equivalent of another planet here: the large scale construction of sealed eco-habitats and agricultural systems on Earth. Technically unfeasible without trillions of dollars of investment in the required technologies over multiple decades - though probably about a third of the cost of large scale emigration to space.
6. Massive Scientific Solution C/ Mass Hibernation : Large scale human hibernation. Although it is not technically possible today, in the near future large scale human hibernation could offer one possible non-lethal way of reducing the effective population. - By having 50% or more of the population asleep at any one time, we could reduce the human load on the ecosystem as much as required. Again, costs would be expected to be spectacularly high.
7. Do Nothing. Engage hope, and ignore the problem until it goes away. Solves the problem by causing a general ecosystem and food web collapse, causing mass death and starvation on an almost unimaginable scale. This would probably kill at least 5 billion people, up to and including total human extinction. (This is the option currently selected.)
(Doing nothing is the 'stupid, sadistic & suicidal' solution - guaranteed to work, in the way the cow survives the slaughterhouse. (Only in a mince pie!) If you want a 'Mad Max' world with large scale cannibalism and general barbarism, then this is the option for you.)
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --


Climate Change & Energy Generation - The Big Problem Today

There is a simple need in modern society for continuous supplies of energy on a vast scale. Energy supports and underlies almost everything we do - everything from the internet and communications, food storage, agriculture, transport, heating and cooling, water and sewage, medical technology, and manufacturing, to a million other things. - Energy is probably the single most important foundation of modern life.
There is a big problem with energy: the most basic ways of acquiring it all involve burning organic chemical fuels, which produce vast amounts of CO2 and other greenhouse gasses. We have already put a huge amount of these into the air, and this is changing the balance of the environment, leading to global temperature rise and climate change. Making the problem worse is the potential for 'avalanche' events, which could make the whole process of global warming accelerate even further - potentially becoming a process we can no longer control or stop. The simple solution is to use methods of energy production that produce lower or zero CO2. But as anyone who has looked at this scientifically in even moderate detail will know, doing so is far from easy. Here are my favoured solutions -


Solution 1 : Nuclear Fission Power. - Nuclear fission is still the best and most realistic short to medium term power solution we have today for reducing CO2 production. It is that simple.
Firstly, we need to restart nuclear energy research and development. - Nuclear technology in its current guise is not totally green and not very efficient. There is much that can be done to improve efficiency; some methods could potentially more than double total efficiency compared to current methods.
Fighting Nuclear Panic. Our society has allowed itself to become cowed and scared stupid by silly stories about nuclear power. - We desperately need to undo this psychology. Fear and paranoia mark nuclear as more dangerous than oil or coal, which in actuality are statistically roughly 1000x more dangerous than nuclear power. The ignorance this fear engenders has already led to some 5 to 10 million extra human deaths. The key to reversing the situation seems to be as simple as telling the truth, teaching people about emotional propaganda, and getting them to listen. (See the Pro-Nuclear Anti-Nuclear-Protest Campaign above.)
Improvement paths for Nuclear Fission Power (see the sketch after this list) -
- Gas core reactors that run at ultra high temperatures. Potential for some 100% (2x) efficiency gain.
- 'Pebble bed' (gravel bed) technology that allows smaller, simpler, cheaper reactors that are practically immune to the problems of overheating or meltdown.
- Advanced fuel technology (e.g. natural uranium, thorium, plutonium) aimed at improved efficiency. Allows cheaper fuels, better safety, increased fuel recycling, and reduced waste.
- Advanced cooling technology. For a start, a wonderful primary coolant for nuclear reactors of all types is hydrogen, because it does not become radioactive under neutron bombardment (neutron capture on hydrogen yields stable deuterium). This creates a single stage cooling environment without the need for intermediary heat exchangers.
- Fast breeder & fuel reprocessing technology. In addition to advanced fuels, a combined uranium-plutonium fuel lifecycle where most primary waste can be cycled back into fuel and used again and again. Potential for up to approximately 300% (4x) efficiency gain.
- Nuclear CHP (Combined Heat and Power). This means using small reactors close to the points of demand, supplying energy in the form of hot water and steam as well as electricity. Potential for 30 to 100% (1.3 to 2x) efficiency gain.
- Self containment. Intrinsic cooling, intrinsic fail-safes, waterproofing, and underground environmental bunker design - making reactors earthquake proof, terrorist proof, flood proof, and able to survive power failures. This can alleviate many current concerns. Imagine a design where the reactor is buried in a vertical tunnel, and at the end of its life you simply disconnect it and seal the whole system up with earth and concrete - a system with near total intrinsic safety.
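A small note on how the 'efficiency gain' percentages in the list above map to output multipliers: a gain of 100% means 2x the output per unit of fuel. The sketch below simply restates the list's own figures; it does not claim the gains stack multiplicatively:

 # Convert the 'percent gain' figures listed above into output multipliers.
 # Figures come straight from the list; whether and how they combine is an
 # open engineering question, not assumed here.
 gains = {
     "gas core reactor": 1.00,            # +100% -> 2x
     "breeder + fuel reprocessing": 3.00, # +300% -> 4x
     "nuclear CHP (low end)": 0.30,       # +30%  -> 1.3x
     "nuclear CHP (high end)": 1.00,      # +100% -> 2x
 }
 for name, gain in gains.items():
     print(f"{name}: x{1.0 + gain:.1f}")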


Solution 2 : Nuclear Fusion Power. Nuclear fusion technology is one of the world's big hopes for medium and even long term energy production. Research in this area is now very close to a base for commercial energy production and is advancing - but often at a near glacial pace. The reasons for this are a long standing severe lack of funds, scientific inertia, the sheer scale of the project, and the excessive creeping bureaucracy often associated with large international projects.
However, a simple solution exists in the form of a properly funded investment program - plus a better mix of development projects along national and international lines. The best way to accelerate fusion is to do as much of the research and construction as possible in parallel. It is much, much faster to build and modify a partly finished design in progress than to completely plan the design down to the last minutiae before beginning work. This is the approach that was used during the German V-weapons program, the Manhattan Project, and the NASA Apollo moon program - three of the most advanced and successful technology projects in history. The method can squeeze decades of far slower work into just a few years. - Ironically, because this method increases efficiency it can actually substantially reduce overall costs as well. (Try telling the bureaucrats that.) This is a lesson the world has forgotten and continues to fail to learn in its obsession with excessive form and politics and short-term-focused, myopic budget control. Poorly focused 'health and safety' may also be a critical problem; H&S needs to be run with real finesse and practical intelligence in such programs.


Solution 3 : Total Atomic Conversion. - The idea in total atomic conversion is that the entire mass fraction of the fuel is converted to free energy such as heat, radiation, or kinetic energy. This has two main effects: firstly, extremely small amounts of fuel are needed to produce a given amount of energy; and secondly, there is little or no radioactive material left behind, making the process (potentially) very clean. The individual methods each have their own advantages and disadvantages.
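For a sense of scale, a worked figure (my addition, straight from E = mc²; the 1 GW plant is just an illustrative yardstick):

 # Energy released by total conversion of 1 kg of matter: E = m * c^2.
 c = 2.998e8                  # speed of light in m/s
 m = 1.0                      # kilograms of fuel, fully converted
 E = m * c**2                 # ~9.0e16 joules
 gw_year = 1e9 * 3.156e7      # joules a 1 GW plant delivers in one year
 print(f"E = {E:.2e} J, or about {E / gw_year:.1f} GW-years")

So one kilogram of fully converted fuel is worth roughly three years of output from a large power station.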
Quantum Conversion. - ('Quantum Conversion' is an interim name.) This is a theoretical approach, and my own baby - an idea for a method of achieving total atomic conversion. The idea is to achieve matter conversion using an FTL physics interaction, exploiting the fact that matter in the correct tachyonic state immediately decays into free energy. It is called 'quantum' conversion because the FTL effects involved are 'slow FTL' and are actually part of quantum mechanics. (The effect can be observed directly in the blue glow called Cerenkov radiation seen around highly radioactive material.) If the method can be made to work it has many advantages: the reaction is intrinsically safe and self-quenching, it only needs ordinary matter as fuel (the ideal fuel being ordinary hydrogen), and in theory reactors could eventually be made very compact and even portable. However, there are several critical problems - just making a real system that works will be incredibly complex and difficult and requires a great deal of new physics, and it may be even harder to make a system efficient enough to reach break-even. This is a theoretical technology that is still probably at least 40 years in the future.
Matter/Anti-Matter Annihilation. - Another, much better known method of total atomic conversion is matter-antimatter collision and annihilation. This method has the huge advantage that the reaction is always ready to go: all that is required is to bring the matter and antimatter together. Of course this also creates its own disadvantages - the need for a very sophisticated containment system, and the constant extreme risk of containment failure, which potentially leads to an instant catastrophic explosion. Matter-antimatter annihilation is not currently a useful method of energy production because of the huge amounts of energy needed to make the antimatter first. However, matter/antimatter conversion could be a very formidable method of energy storage, and if an energy-cheap source of antimatter can be found then energy production might become possible.
High Energy Proton-Proton Collision. - Another method is high energy proton-proton collision, as occurs in high energy particle accelerators. This method exists and works today. However, proton-proton collision is not viable for energy production because the method of electromagnetic acceleration is extremely inefficient (e.g. well under 0.01% end to end).
Conclusion. Matter-antimatter conversion is probably still decades away from being useful, and quantum conversion is even further away. However, if any method of total atomic conversion can be made to work, it could potentially generate or store almost limitless amounts of power from relatively small machines and tiny amounts of fuel compared to even nuclear fusion or fission. Any successful method of atomic conversion would be a total game changer for all technology - especially space travel - and would be a final answer to the need for clean, CO2 neutral energy.
-- -- -- -- -- -- -- -- -- --


Other Technologies -

Solution 4 : Synthetic Fuel. - Diesel or petrol, ethanol or methanol produced from CO2 by chemical and/or biological processing, or even synthetic biology. Plants will be complex and expensive to build and will need lots of energy and/or feedstocks like sugar, but this is potentially a very good solution for making the whole world carbon neutral while allowing widespread continuing 'CO2 neutral' use of current technology. Synthetic fuel is ideally produced on site at large sites of energy production - in particular those sited far away from the main areas of energy need, such as equatorial solar arrays, or aero-thermal or geothermal boreholes.
Solution 5 : Solar Power. - Has potential that is now being actualized. There may still be minor ecological problems with the large scale manufacture and end-of-life disposal of solar panels. The optimum way to maximize the potential of solar power is to build very large arrays in equatorial areas with maximum sunlight, with some method of transferring the energy efficiently over the long distances to the regions where it is most needed. One method is to transfer the energy through a very long range, super-high-voltage DC transmission network. An alternative approach is to convert the energy locally, on site, into synthetic fuels as in Solution 4.
Solution 6 : Geothermal Power. - A huge, ready, and seemingly almost limitless source of power that is generally environmentally quite clean and is CO2 neutral. However, geothermal is currently quite complex, expensive, and geographically limited. (Expensive to build but relatively cheap to run.) With enough investment and technological development, geothermal wells should be possible almost anywhere on Earth, on land or even at sea, and it should be possible to scale them up to enormous sizes; scaling up should also ultimately reduce costs.
Solution 7 : Hydroelectric. - Hydroelectric power uses the weight of water already in the natural rain cycle - through dams, water wheels, or river-bed/sea-bed turbines. Hydroelectric dams are obviously very effective but are notoriously expensive to build, and after a working service life (e.g. 50 years) many will require emptying and dredging, which is an extremely heavy and expensive process. Water wheels can be very effective, though they are generally quite limited in scale; they are effectively used for local energy generation.
Solution 8 : Hydro-thermal & Aero-thermal. - Hydrothermal (at its simplest) uses pumps and the vertical column of the sea to drive differential heat engines. Aero-thermal uses large (huge) vertical towers in which columns of air are cooled and sink down the shaft to drive turbines at the exit. Both are solutions with great potential, but both also have potentially very high set-up costs and medium to long term maintenance costs.
Hydrothermal and aero-thermal are both at an early stage of development. Both work much better in hotter areas of the world, especially aero-thermal. Small scale hydrothermal plants exist in some tropical regions and are running today. To be efficient, aero-thermal plants require construction on an enormous scale, ideally with towers of up to a kilometer high or more and 100 meters in diameter. As of 2017 no large scale aero-thermal towers exist.
-- -- -- -- -- -- -- -- -- --


Bad Non-Preferred Solutions

Fossil Fuels. - The problem rather than the solution. Organic fossil fuels produce large amounts of CO2, and many also produce large amounts of various other types of pollution and/or solid ash waste. Fossil fuels kill an estimated 1.5 to 2.5 million people per year through air pollution alone, giving a total death toll since WWII of some 80 to 120 million people. Despite this, the traditional 'green' movement has effectively promoted fossil fuels between the mid 1970s and the modern era, leading to an estimated 5 to 10 million extra deaths. (So far the anti-nuclear protest movement has killed more people than nuclear power and nuclear bombs combined.)
There is little doubt and wide agreement - within the scientific community - that CO2 production is the most serious long term environmental problem in the world today, and the primary source of introduced CO2 is the burning of fossil fuels.
Things are not all bad though: pollution filters and better design have reduced pollution, at least in the west, since maybe the early 90s, and the rate of increase in the use of fossil fuels around the world has slowed considerably. The switch away from coal and towards gas has also helped reduce the short term pollution load considerably. However, this good work on pollution reduction has less effect on CO2 production, and even it is very easily undone if the increasing use of 'dirty' fossil fuels in the developing and third world continues. The expansion of fracking in the first world should also be seen as a potential danger.
A 'final' solution has been proposed in the form of carbon capture and storage. So far this is only in its very early stages, but it looks like it may be very expensive and difficult to achieve on a large enough scale to make any real difference. There may also be problems with long term safety, and severe potential dangers from a sudden breach. For humans the lethal concentration of CO2 in the atmosphere is approximately 10%, and CO2 is heavy and can flow along the ground, so a leak in a populated area could kill vast numbers of people. (Natural CO2 releases have already demonstrated this ability to kill on a large scale.)


Large Scale Wind Turbines. - At first sight an obvious solution to the problem of energy production: a system that extracts energy from the wind. The problems come when we scale wind energy production up to power grid scales. In reality their cost (without subsidies), massive scale, and other problems make them a bad solution.
To become a major part of general energy generation, wind turbines have to be built on a colossal scale and in very large numbers. The resulting structures are huge and dominate the landscape, and by their nature they have quite severe component lifespan limits and relatively large environmental footprints. The limited amount of energy generated by each turbine means very large numbers must be built to achieve the substantial grid level outputs required.
Wind turbines generate very variable power, with a long-run summed average of about 30% of peak - a derating of 50 to 70% from nameplate, meaning a wind array will generate roughly a third of its physical rated capacity. For periods where grid level output falls too low, wind also requires a near 100% backup; in practice the best solutions for this are either diesel or natural-gas power stations.
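To make that arithmetic concrete, a minimal sketch (the array size is a hypothetical example; the 30% average and near-100% backup are the figures quoted above):

 # Capacity-factor arithmetic for a hypothetical wind array.
 nameplate_mw = 1000            # rated (peak) capacity - illustrative assumption
 capacity_factor = 0.30         # long-run average ~30% of peak, as quoted above
 average_output_mw = nameplate_mw * capacity_factor
 backup_mw = nameplate_mw       # near-100% dispatchable backup, as quoted above
 print(f"Average output: {average_output_mw:.0f} MW from {nameplate_mw} MW rated")
 print(f"Backup capacity required: ~{backup_mw} MW (diesel or gas)")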
Large wind turbines are often described as ugly and noisy, and they totally dominate their local landscape. When running they produce loud repetitive noise up to 24 hours a day, and during the day repetitive flickering blade shadows that can extend for hundreds of meters. These effects can induce depression or mental illness in people living near the turbines, chronic poor health, even suicide; the blade shadows can induce seizures and even cause car crashes. Turbines are usually built out in remote rural wildernesses where the wind is strongest, at the extreme being built out at sea. (This carries the old polluter's pledge - out of sight, out of mind.)
One of the potential problems with large turbines is that estimated blade lifespan is formally averaged at about 25 years, but in reality may be as low as 10 years. - The blades combine very large size with a requirement for high rigidity, very high and constant cyclic loads, and minimal weight to achieve maximum efficiency. These requirements lead to exotic materials like fiberglass composites and carbon fiber, which bring limited lifespans, high manufacturing toxicity, and low recyclability - not a happy combination.
-- -- -- -- -- -- -- -- -- --

Sorry that this section is rather rough - a lot of work is still needed: shorter synopsis chapters with longer essays on individual methods. I really am interested in creating a campaign to promote a scientific approach to energy production - and this demands long term thinking and funding for long term research like nuclear fusion or atomic conversion...

-- -- -- -- -- -- -- -- -- --








FinaL WHiNE, 1984 FOREVER[edit]

Original Research

Double Plus Good, Comrades! In the beginning the internet was seen as, and meant to be, a symbol of freedom - an ultimate form of democracy. That is a joke. Human nature hates freedom and likes to turn it into a fascist state, a police state. In places like Wikipedia or the BBC there are the strong and the weak, the less and the more, moderator and slave. The future really is a boot stamping on a human face - forever. [typical exaggeration ??]
"Work Makes One Free" 'Joy through work' etc.


The Failure of Science? : The curse of modern science is that (far too often) it has become too short sighted, too money orientated, too bureaucratic, and too over-specialized. There is a very rich vein of unsolved problems out there, lying hidden in the gaps.
Certain things lead to certain solutions - how many thousands of very able and capable mathematicians have looked at relativity for years or decades and utterly failed to get anywhere? It is sometimes the most simple, obvious step that is the most difficult to find. The key to breaking general relativity was to multiply one by minus one. No - the key was simply to doubt.
Far too many modern scientists have become the 'accountants' of reality and have forgotten how to truly think (to explore, to imagine, to question, to step outside the box of convention). They have become far too certain of themselves and have lost the spark needed to cross the gap. By the same means they have forgotten how to amuse or entertain, or - most important of all - how to inspire or enlighten people. Science should be a philosophy; it is almost a religion; it is the way of thinking that led to everything good we have today - to the modern world, to long lives free of illness and starvation, to democracy and a world of relative freedom.
- - - - - - - - - - - - -


'This isn't some communist Daycare Centre!' - "This isn't some Communist state here Sinner!" - POAAF - MM - 1994.
"The Death of One is a Tragedy, The Death of Millions just a Statistic.. - (Another reason you should never blindly trust human logic or instinct. - Robert Lucien..)
Quintel Point - Seattle Suburb Four. Cassandra Program. END. FBI Star Program. END. [BTW : Technically.. and...]


They turned something simple into an endless and seemingly inescapable labyrinth.. [The lost words .. now gone forever.]