Let’s Play Predict the Future: Where Is Science Going Over the Next 30 Years?

By Amos Zeeberg (Discover Web Editor) | September 14, 2010 11:50 am

As part of DISCOVER’s 30th anniversary celebration, the magazine invited 11 eminent scientists to look forward and share their predictions and hopes for the next three decades. But we also want to turn this over to Science Not Fiction’s readers: How do you think science will improve the world by 2040?

Below are short excerpts of the guest scientists’ responses, with links to the full versions:

Ken Caldeira: “…If you could directly produce chemical fuel from sunlight and do it affordably, that could really be a game changer…”

Jack Horner: “…If we want to see an animal like a velociraptor, we will be able to create one by genetic engineering. It might even be possible to make something that looks like a T. rex…”

Oliver Sacks: “…We thought that every part of the brain was predetermined genetically, and that was that. Now we know that enormous changes of function are possible…”

Sylvia Earle: “…We’ve explored only about 5 percent of the ocean. For us to have better maps of the moon, Mars, and Jupiter than of our own ocean floor is baffling…”

Rodney Brooks: “…The arguments we have about drugs and sports are minuscule compared with what’s coming, such as ‘What is the definition of human?’ We have the Paralympics now, but we’ll have the Augmented Olympics in the future…”

Debra Fischer: “…Every year since 1995, we have discovered more extrasolar planets than the year before. A parallel thing could happen with extraterrestrial life: After we find one example, we’ll hone our strategies to be smarter and more efficient…”

Tachi Yamada: “…I don’t believe just because you’re poor, you shouldn’t have access to lifesaving technology…”

Neil Turok: “…The science has reached the point where questions that used to be just philosophy could be observationally testable in 10 or 20 years…”

Ian Wilmut: “…We should be able to control degenerative disorders like Parkinson’s and heart disease…”

Sherry Turkle: “…Sometimes a citizenry should not ‘be good.’ You have to leave room for real dissent…”

Brian Greene: “…We may establish that there is not a unique universe—that ours is just one of many in a grand multiverse. That would be one of the most profound revolutions in thinking we have ever sustained…”


Comments (40)

  1. James F Milne

    Human decision making is highly emotional. We need to create an intelligent entity dedicated to the preservation of the total life system on earth.

  2. zhaphod

    I would like the following to happen:

    1. Nuclear energy becoming ubiquitous by moving to LFTRs.
    2. Owning cars becoming less and less attractive, with public transportation the main means of commuting.
    3. World population showing a clear trend toward dropping below the 6-to-7-billion mark by 2100.

  3. Daniel J Hesse

    1. Our understanding of the human body will gradually change, with a return to more whole foods and organics.
    2. Our mode and means of travel will change with decreased dependence on fossil fuels.
    3. Our understanding of molecular biology and biochemistry will challenge cancer and obesity as never before in our history.
    4. The number of Americans will increase, and the number of people of mixed ethnicity will grow, with the introduction of new strains of genetic engineering.
    5. Our approach to education in America will rapidly change in the next 10 years, with delivery almost in no time.

  4. Neurotransmitter manipulation will become increasingly important in the treatment of myriad diseases.

  5. scribbler

    I have only one prediction: that we will all sound as foolish in our predictions as people did 30 years ago.

    One thing is clear: we don’t know what we are really capable of. The Bible says that, unchecked, we can come to know all and control all. It says that knowledge will expand exponentially. Seeing even this small tip of that iceberg of technology should serve to let us know that we have no real idea where all this is headed, and certainly whether we will be at all able to handle it properly.

    In my fifty or so years here, I’ve seen some mighty impressive changes in our abilities as individuals and as societies. I’ve seen many encouraging things, like the increase in farming technology so that now we produce enough food to feed every man, woman and child on the planet. And yet, around 5 million starve to death a year, do they not?

    While technology balloons at an unimaginable rate, man is still, well man…

    Ten thousand years ago, if you wanted to travel 50 miles, you rode a horse. The same was true a thousand years ago, and five hundred years ago, and a little over a hundred years ago. All that history of mankind, enveloped by the same basic technology.

    And look at the last hundred years! Who could have seen that coming or known the shape of everyday life because of it?

    Equally, the next hundred years will be impossible to get our minds around, I think.

    When I was five, I first heard and understood the concept of Mutually Assured Destruction. I looked at my father and said, “Don’t worry Dad! No one is stupid enough to blow up the whole world!”

    So far I’ve been right, and yet as I look and understand that world annihilation will soon be in the hands of as few as one or two humans, humans just like you and I, my optimism is waning…

    With the coming control man will inevitably gain over time and space and energy, the chance of destruction by accident looms ever larger as well, does it not?

    Leaves me at a growing quandary to scientifically quantify “hope”…

  6. We will get rid of tires and wires. We will fly everywhere and cars will become relics. Wires will go because each house will produce its own power, heat, electricity, etc. Sewage gas is already being turned into a source of power, and cold nuclear fusion will further that change.

    YEA NO MORE WIRES. Everything will be wireless!

  7. The riches in space are endless…diamonds, energy, as far as the eye can see. We just have to reach up and touch it.

  8. Tom Toth

    Organic/silicon or totally organic computer chips that, based upon high-level human-supplied parameters, will develop their own instructions and grow how and when necessary.

  9. Bassam Ghanim

    Scientists should keep their hands off viruses and not tamper with them genetically! We have enough trouble handling naturally occurring mutations.

  10. Robert Crow
  11. 1. Physics will undergo a revolution when physicists start treating gravity as the interaction of objects and space-time instead of as a force. With the forces reduced to three (electromagnetism, the strong force, and the weak force), advances will be made in the Theory of Everything, and science will come to the conclusion that the standard model does not provide all the answers and needs work.

    2. Plate tectonics will undergo a revolution of its own when geologists learn that the Earth’s crust consists of a layer of oceanic crust upon which rests a discontinuous continental crust, which is carried along as the oceanic crust moves. The current model of 8 major and a few minor plates will be replaced by a model of dozens of minor oceanic plates, some of which the continental plates overlap.

  12. Armitage

    @Alan Kellog.

    1. General Relativity was published in 1915.

  13. Armitage, @12

    Not everybody got it where gravity is concerned.

  14. Only Hope: Thousands of Ole Miss Alumni Asked to Solve Superstring Theory of Everything

    Ole Miss Human-Based Superstring Research

    Hi! Type “Part III: If God is Light, What is Dark Energy?” in Google Advanced Search.

    Then at the top of the list, click on one of the links to this permanent link on the oldest largest astronomy network in the world and read a testable solution to the Superstring Theory of Everything! — By an Ole Miss grad!!!

    You will also find on this physics website four other permanent links and two excellent video lectures about Dark Energy by professors from Caltech and Berkeley.

    The National Science Foundation has already approved a pre-submission letter, which is required for grant proposals involving new concepts. Research organizations can now submit proposals for grants of up to $300,000 to test the bioscience of this new theory.

    Type “Redefining Reality” in Google and my jacobsladder.com website about the psychology and religion of this theory will be number one. Site number two by Dr. Timothy McGettigan at Central European University in Warsaw explains the sociology of this new theory.

    I hope to continue this research, under controlled conditions, as part of a multidisciplinary research team at Ole Miss!

    Please read Part I and Part II about this research at ArticlesBase.com and follow the article instructions to test this theory on yourself, your family members, your friends, and your work associates. By doing so, you can prove for yourself that Jesus and Saint John weren’t lying when they taught that God is Light!

    “This then is the message which we have heard of him, that God is light, and in him is no darkness at all.” – 1 John 1:5

    Please keep me informed about your own exploratory research.

    Henry Madison Jacobs, MA, ABD, DD
    Copyright 2010-08-30
    Trinity Leadership Research Center
    980 Gin Pond Drive
    Saulsbury, TN 38067-7487

  15. scribbler

    If God is Light then of course Dark Energy is Ignorance…

    While our knowledge grows exponentially, our awareness of our own ignorance about the rest of what’s out there grows by many magnitudes beyond that, does it not?

    If then our “brilliance” expands the Universe, would not our ignorance then expand it many fold?

  16. Patricia Torok

    In my opinion the biggest scientific breakthrough of the next 30 years will be the realization by my generation (50-60 baby boomers) that in our effort to be the so-called “better parents than the ones before us,” offering our children more than what we had, we accomplished just the opposite. Our own fathers actually did the better job in parenting and in matters of social and fiscal responsibility. What we did without was probably a good thing. We ate whole foods, home grown, and were better educated, disciplined by our teachers and not raised by them. Physicians still performed house calls, and we could send the payment in at a later date. We were asked the reason for the doctor’s appointment, not what insurance we carried. We knew the difference between right and wrong and the righteousness of humanitarianism, with a deep sense of community and loyalty.

    In my opinion what could go wrong would be for our children to do the very same thing by giving their children more than what they had. What a catastrophe!

  17. Dra.Guadalupe Mora

    If the human being is not able to stop the demographic explosion and create an organized society that harmonizes with nature, science and the human being will disappear, drowned in their own wastes.

  18. Bob Bowden

    Scribbler is substantially correct. Factor in the truly alien/ineffable strangeness of being; the possible yet unforeseeable; the replacement of Homo sapiens with novusapiens/bio-synthesis/DNA manipulation, and the future is as unknowable as the ending of any great novel. Enjoy.

  19. Mike Prosen

    Off the top of my head (I only have a few minutes):
    1. Toxicity of heavy metals (like mercury) in the brain will be determined to be the cause of certain diseases and ailments, like Alzheimer’s.
    2. Man’s knowledge of “The Known Universe” will expand … the size and what makes up the universe. Like dark matter and dark energy … we will determine that the empty space is filled. The Kuiper Belt is larger than expected … 100s of AU wide.
    3. Technology will make medicine a science instead of an art.
    4. Microchips will be atomic chips, because data will be stored at the atomic level.
    5. Nuclear fusion will become viable. Solar energy and energy storage will become efficient and used for transportation. Petroleum will not be a bad word; it will be just a source material.
    6. Man will live on Mars and will really start to explore our solar system, because we will determine how to escape the gravitational pull without expending so much energy and polluting the environment.

  20. Here’s a prediction and exploration of a possibility from my blog, posted recently:

    If you read my previous post on sci-fi books, you’ll know that lately, the topic of mind uploading, particularly as it relates to the technological singularity, has been present in much of my reading material. While the concept seems plausible, in at least a sci-fi sort of way (the Matrix being a case in point), most people scoff at the idea of it really happening, and openly laugh at the suggestion of it happening in the next 50 years.

    I tend to be on the other side of this particular fence. I think that it is not only likely, but almost a certainty in the next 50 years. So, I decided to post up an exploration and perform some calculations to figure out if it is plausible. First though, let me explain mind uploading and get some basic prerequisites out of the way.

    With mind uploading, the basic idea is that, due to Moore’s law and miniaturization, computers will soon be powerful enough to rival the human brain in sheer processing power. Once computing systems reach this level of miniaturization and efficiency, the theory is that they will be able to simulate human brains on either a virtualized level (“pretend” brains based on a simplified simulation of the underlying physics of reality) or a fully simulated level (brains where every possible event, down to the quantum level, is fully simulated).

    The first thing I had to tackle when thinking about this is to determine what the estimated processing capacity of the brain is. From that, it should be relatively straight-forward to determine how close we are.

    Luckily, many scientists have already pondered this exact question and arrived at an answer. Based on our current understanding of how the brain works (which I will tackle a little later on), the expected total processing power is in the range of 100,000,000 MIPS (10^14 instructions per second). This is helpful, but not hugely so. The reason this isn’t more helpful is that computational speeds are generally measured in FLOPS (Floating-point Operations Per Second) instead of MIPS (Millions of Instructions Per Second). Both of these are somewhat subjective, however, so a little work needs to be done to convert them. Let me dig into this a bit.

    For MIPS, the atomic unit is an instruction, which is pretty darn flexible. For example, an instruction could be writing data from one part of a chip to another (which is not a calculation at all), or it could be doing very simple math (2+3), or it could be calculating the precise position (X/Y/Z) of an object moving in 3-D space (in the case of a very specialized DSP). In this way, what an instruction actually represents depends on the chip’s core instruction set, so MIPS are only a valid comparison for chips with identical instruction sets.

    A FLOP, on the other hand, has an atomic unit of a floating-point operation. A floating-point operation is much more standardized, and is a calculation using a floating-point number (such as 2.56 × 10^47). This is a pretty good indicator of the raw processing ability of a digital computer, but it still doesn’t take into account some general-purpose tasks. Luckily, in most modern general-purpose processors, MIPS and FLOPS tend to run pretty much neck and neck.

    In any event, if we convert the processing power to FLOPS, we end up with 100 TeraFLOPS (10^14 FLOPS), and we have a number we can compare things to. And it also turns out that, based on this number, there are any number of computer systems today that are powerful enough to simulate the brain. For example, a Cray XT3 is almost identically powerful, and it’s a relatively old supercomputer (2004). The most powerful supercomputers today are approximately 1,000 times as powerful.
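    The conversion above can be sketched in a few lines of Python; note that the ~1-FLOP-per-instruction ratio is the post’s own “neck and neck” assumption, not a measured figure:

```python
# Sketch of the MIPS -> FLOPS conversion used in the post.
brain_mips = 100_000_000                       # ~1e8 MIPS, the cited brain estimate
instructions_per_sec = brain_mips * 1_000_000  # 1e14 instructions per second
flops = instructions_per_sec * 1               # assume ~1 FLOP per instruction
print(flops / 1e12)                            # 100.0 TeraFLOPS
```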

    So, if we can already (theoretically anyhow) simulate a brain, why haven’t we? Well, as the good people working on the Blue Brain project can attest, it’s not enough just to have the processing power. You also have to have an understanding of how the brain really works, and build software to emulate that. Power without direction is, at best, useless.

    Also, I speculate that we will find, as we dig into this, that the human brain is vastly more complex than we have given it credit for. In fact, I speculate that the brain is actually a largely quantum computer primarily aimed at calculating things that are non-computable functions in conventional computers. I also suspect that the brain has only rudimentary analog conventional computing capabilities.

    I am definitely not the first to have this idea, and research is still ongoing in this direction, but I believe that describing the brain as a quantum computer clears up a lot of contradictions inherent when you describe it as a classical computer.

    For example, classical or conventional computers, whether they be a handheld calculator or a supercomputer, are all exceptionally good at math. The slowest general purpose computer you can find today can calculate pi out to around a million digits faster than you can read this sentence. Even the most gifted human on the planet cannot begin to compete with the speed of a computer even one one-thousandth as powerful in overall “brain-power” for general number crunching.

    On the other hand, conventional computers are generally horrible at estimations. This is the nature of the computing platform. To estimate, it has to use an algorithm to approximate “fuzzy” logic and come up with a rough number. A human, on the other hand, can “eyeball” something and estimate it, often with a fairly high degree of accuracy, with very little effort.

    Some scientists completely disagree with this notion. They state that the problem is one of software: Human brains are good at probabilistic logic and bad at traditional or “crisp” logic because our brains are wired or programmed to be good at it. This is possible as well, but is a much more complex answer. I think the simpler answer is that the brain is simply a different kind of computer altogether, one that expresses states as a range of possibilities, not as an absolute.

    Furthermore, the brain does this automatically, estimating states without any conscious effort. In fact, it takes conscious effort to nail it down to a solid number. In our minds, we can very easily compare things (such as two glasses with different amounts of water) by unconsciously estimating without actually arriving at a value for each glass. If we are asked to give a percentage of fullness for each glass, that takes effort, but the actual comparison is effortless.

    In a computer, you must first estimate each value, which is a huge undertaking when only given visual evidence, then compare the values. The whole process takes an enormous amount of computing power, but in humans, it is done without a thought. To me, this is indicative of a very large disparity, and one that is more easily described by differences in platform than differences in software.

    Ultimately though, with a sufficiently powerful computer, all of this would be moot. We could simply simulate the underlying physics of the universe for the space a human brain occupies, and we don’t even have to know how it works. We just have to have an unerringly accurate snapshot of the brain down to the quantum level. This, however, presents a completely different problem, as Heisenberg’s uncertainty principle states that we cannot know both the exact speed and location of any single particle. Still, perhaps using “default” values for speed and position of underlying particles would suffice as long as the major structures are accurate. It is hard to know without trying.

    So, assuming that we can either figure out the underlying “software” of the brain, or we can get around Heisenberg’s uncertainty principle and create a molecular model of the brain, can we potentially simulate a brain with silicon in a smaller, more efficient space than the actual “meat brain” we were born with?

    To answer this question, the first question I had to ask myself was: Based on the physical laws as we understand them, what are the computational limits of matter in our universe? This question is actually relatively easy to answer, because someone has already answered it. Bremermann’s limit and the Bekenstein bound determine the maximum computational power of a self-contained computer with a given mass and the maximum uncompressed information storage capacity of an area of space, respectively.

    Bremermann’s limit states that the maximum information-processing ability of a gram of matter is roughly 2.56 × 10^47 bits per second. Now this is an interesting (and very large) number, but, unfortunately, it isn’t very useful. As we mentioned previously, FLOPS is really what we need in order to compare computational capability, so that is what we need to convert to. In order to convert this number to FLOPS, we first need to determine how many bits are going to be involved in each floating-point operation.

    Assuming 32-bit floating-point numbers (32 bits per FLOP), Bremermann’s limit for a 1-gram processor is about 2.56 PetaFLOPS (2.56 × 10^15 FLOPS). That’s a much smaller (and more useful) number. You can roughly estimate that a self-contained computer of this power would be about the size of a single cubic centimeter. For reference, today, a computer half this powerful (the Cray Jaguar) takes up over 340,000,000 cubic centimeters.

    To reach Bremermann’s limit using Moore’s law (doubling the processing capacity we can fit into a given space once every 18 months) is going to take about 42 years. This is assuming we can maintain Moore’s law for that long, of course, which is unlikely due to diminishing returns. Still, the possibility is definitely there.
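    The 42-year figure can be reproduced from the post’s own numbers: repeatedly halving the Cray Jaguar’s 340,000,000 cm^3 down to roughly 1.2 cm^3 takes about 28 halvings at 18 months each (a sketch using the post’s figures):

```python
import math

jaguar_cm3 = 340_000_000  # Cray Jaguar volume, per the post
target_cm3 = 1.2          # roughly a single cubic centimeter
halvings = math.log2(jaguar_cm3 / target_cm3)  # ~28.1 volume halvings
months = round(halvings) * 18                  # one halving per 18 months (Moore's law)
print(months, months / 12)                     # 504 42.0
```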

    At that point, we should theoretically be able to make something that has the mass of 1 nanogram (roughly the size of 1 cubic nanometer, or half the diameter of a helix of DNA) that can process at the rate of 2.56 MegaFLOPS (2,560,000 FLOPS). To put this in perspective, we should be able to create a molecular-sized computer that can process as fast as an Intel Core 2 Duo. You will literally be able to fit more processing power than the current most powerful supercomputer in the world in the lint on your clothing. Even if we do not reach Bremermann’s limit in 42 years, we should still be able to put a useful amount of computing power into dust-sized processors in the near future.
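    Scaling the post’s per-gram figure of 2.56 × 10^15 FLOPS down to a nanogram is straightforward arithmetic (a sketch that takes the per-gram number at face value):

```python
per_gram_flops = 2.56e15               # the post's Bremermann-derived figure for 1 gram
nanogram_flops = per_gram_flops / 1e9  # a nanogram is 1e-9 grams
print(nanogram_flops)                  # 2560000.0, i.e. 2.56 MegaFLOPS
```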

    So, now that we understand Bremermann’s limit, will we be able to simulate a human brain in less space than a real “meat brain” takes? That question takes a little more work.

    First, remember that we essentially have two major ways of simulating a brain. We can either do it by emulating the brain (i.e. creating software that operates on the same principles and runs on a computer system powerful enough to make a pseudo-brain) or by fully simulating the physics involved and using that, along with a particle-level map of the brain to completely simulate it.

    If we go the emulation route, we need a computer system with enough power to model the brain (roughly 100 TeraFLOPS) plus a little extra for overhead (a low-level OS and some type of virtualization/emulation engine). So, figure about 120 TeraFLOPS.

    We also need to consider storage (memory). The brain has, roughly, the equivalent of 100 TB (100 TeraBytes, or roughly 100,000 GB) of storage. Again, expect ~20% for overhead, so we actually need about 120 TB. So, for the emulation route, we need to pack processing power of 120 TeraFLOPS and storage of 120 TB into 1.5 kg of mass.
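    The emulation budget above is just the brain’s raw figures plus the assumed ~20% overhead:

```python
brain_flops = 100e12  # ~100 TeraFLOPS of processing
brain_tb = 100        # ~100 TB of storage
overhead = 0.20       # assumed OS + virtualization/emulation overhead

needed_flops = brain_flops * (1 + overhead)  # -> 120 TeraFLOPS
needed_tb = brain_tb * (1 + overhead)        # -> 120 TB
print(needed_flops / 1e12, needed_tb)        # 120.0 120.0
```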

    Can we build a system of this power that has an equal mass? Well, my calculations on Bremermann’s limit already answered this question: Yes. Furthermore, creating a system with the required processing power that fits in roughly half of the space of the brain (~600 cubic cm) should take us around 10.5 years. So that part is definitely feasible, assuming we can figure out the “software” aspect, and assuming that we actually do have brains that roughly resemble classical computers.
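    The 10.5-year estimate follows the same halving arithmetic, starting from the ~89,000 cm^3 of Opteron processors the post tallies at the end (a sketch using those figures):

```python
import math

processors_cm3 = 89_012  # total processor volume from the post's figures
target_cm3 = 600         # about half the brain's ~1,200 cm^3
halvings = math.log2(processors_cm3 / target_cm3)  # ~7.2 volume halvings
months = round(halvings) * 18                      # 18 months per halving
print(months / 12)                                 # 10.5 years
```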

    For the storage, I had to reuse the Bekenstein bound. The Bekenstein bound describes the maximum amount of information that can be stored, given a fixed amount of energy and space. For a space of 600 cubic cm (with a radius of 5.232 cm) and a mass of 0.75 kg, this works out to 1.01 x 10^42 bits (1.264 x 10^39 TB). This is much larger than what we need, so it’s definitely possible. However, how long will it take?
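    That figure follows from the general form of the bound quoted at the end of the post, I <= K * M * R, with K ~ 2.5769087 × 10^43 bits per kilogram-meter (a sketch; the radius is that of a 600 cm^3 sphere):

```python
K = 2.5769087e43    # Bekenstein constant used in the post, bits/(kg*m)
mass_kg = 0.75
radius_m = 0.05232  # radius of a 600 cm^3 sphere
max_bits = K * mass_kg * radius_m
print(f"{max_bits:.3g} bits")  # ~1.01e+42 bits
```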

    Luckily, storage has been increasing at a rate that’s actually faster than Moore’s law, and there are current techniques (such as electron quantum holography) that prove that it is possible to store up to 3 EB (3,000,000 TB) in a square inch. So, this part is actually possible today, and in a space that is considerably smaller than the 600 cubic cm that we allocated for it, meaning we could allocate more space to the processing unit.

    Ultimately, it seems brain emulation in a human brain-sized package should be feasible, at least theoretically, within 5-10 years. That’s pretty encouraging news, but we can’t lose sight of the likelihood that we really don’t understand how the brain operates well enough to emulate it with any degree of success.

    If we go with the particle-level simulation, however, we don’t have to understand how the brain works. We simply have to understand how the underlying physics works (which we presumably have a decent grasp on) and have an extremely detailed map or scan of the brain (which we do not have and may prove to be an impossible challenge).

    The other downside of a full simulation is that it requires much more in the way of processing and storage resources, and is ultimately basically impossible to do in less space than it is already being done. This calculation is also arrived at using the Bekenstein bound. In addition to describing the maximum amount of information you can store in a given space, the Bekenstein bound can also be used to describe the maximum amount of information needed to perfectly describe (down to the quantum level) a physical system. Since you would need a chunk of mass at least as large as the human brain to store the amount of information contained in the sub-atomic particles that make up the human brain, you can’t possibly fully simulate the brain in less space.

    This makes some sense; after all, the universe is simulating itself as fast as it possibly can. Put another way, you can’t possibly simulate the universe from inside the universe at faster than the universe is currently running, with the possible sole exception of a simulation that is running from inside the event horizon of a singularity. In fact, it is absolutely required that the simulated reality be, at least to some degree, slower than actual reality. However, this ultimately may not matter, as in simulated reality, time would be effectively meaningless.

    Even if we can only achieve 50% of the speed in the primary simulation of the brain as we can in a real brain, we gain a lot of functionality that simply doesn’t exist in our meat brains. For example:

    * Death can become a thing of the past
    * We can “back up” our brains, saving distinct states to restore to (effectively giving us “undo” points)
    * We can run multiple copies of ourselves, spawning new copies to do things like explore space
    * We can forego the “meatbody” support apparatus, and go with a slimmer power source (such as solar or nuclear power) that can run for years
    * We can do a lot of things non-destructively (since we can backup and spawn new copies) that we can only currently do destructively, like sever connections in the brain to try and understand what they do
    * We can, once we learn enough about how the brain works, easily expand the capacity of our minds by adding more computing resources

    This, of course, is just the tip of the iceberg, but it’s still pretty enticing.

    So, ultimately, do I think the technological singularity in the shape of mind uploading is possible? Yes, assuming we can cross a few major hurdles. The hardware is already at a point to where it is feasible, at least from an emulation standpoint – we just have to get the software in place. Do I think it is likely to happen in my lifetime? Hard to say, but I am hopeful. One thing’s for sure – the next 50 years are going to be very interesting.


    Bremermann’s limit: 2.56 × 10^47 bits per second per gram (3.2 x 10^46 bytes) (4 x 10^39 MIPS) (2.56 × 10^15 FLOPS)

    1000 Core i7’s needed to emulate brain (virtual simulation)

    5.9011596613984439986299359005725 × 10^32 Core i7’s needed to simulate brain (full simulation, assuming 5 FLOPS per instruction, 1 instruction per bit per second average)

    Time to gram-sized emulation of brain: 15 years

    Time to gram-sized simulation of brain: 160.5 years – Impossible based on Bremermann’s limit

    Time to Bremermann’s limit: 504 months (42 years); calculated by dividing 340,000,000 by 2 and then dividing that result by 2 and so on until reaching the number “1.2”. This process took 28 division cycles, which when multiplied by 18 (the number of months between each transistor doubling cycle, according to Moore’s law) gives you 504 months.

    Number of Cray XT5 cabinets needed to emulate brain: 17.45

    Number of AMD Opteron processors per cabinet: 187

    Rough processor dimensions (including ceramic slug): 34mm x 35mm x 4mm (4,760 mm^3)

    Total volume of necessary processors: 89,012 cm^3

    Time to ~600cm3 emulation of brain: 10.5 years

    Bekenstein bound: Human Brain: 3.01505 x 10^32 GiB (3.01505 x 10^41 bytes / 2.41204 x 10^42 bits)

    Bekenstein bound: General: 2.5769087 × 10^43 × (mass in kilograms) × (radius in meters) bits

    Bekenstein bound: 1 Gram object: 2.5769087 x 10^38 bits (32,211,358,750,000,000,000,000,000 TB)

    Brain weight: 1.5 kg

    Brain volume: ~1200 cm^3

    Brain processing power: 100,000,000 MIPS (10^14 instructions per second)

    Brain memory: 100 TB (10^5 GiB)

  21. John Merryman

    I think we will come to understand that time is a process by which motion constantly reconfigures what exists, such that it is the future becoming the past, rather than a dimension along which the present moves from past to future. It is an emergent effect of motion, rather than the fundamental basis for it.

    When we think of the progression of time, we view it in a historical sense, where one event leads to the next, such that the prior events are cause for subsequent effects. The problem is that the past is necessarily determined, as there is no way to go back and change it. Thus it would seem that the future is equally determined, since the present is a seemingly dimensionless point between the two, with no apparent ability to affect either.

    If we look at it as a function of process though, it is the multiple potentialities of the future, which intersect at the present, that are cause for the events which occur, such that the future is cause and the past is effect. In other words, it is not yesterday which determines what tomorrow holds, but the potentialities of tomorrow, when they collide as today, that are the cause of what that date will hold, as it recedes into the past.

    Consider how this affects the paradox of Schrodinger’s Cat. When we view the process as moving along a dimension from past events to future ones, the only way to reconcile this seemingly automated and deterministic process is to assume it must branch into separate realities when confronted by the inescapable probabilities of the quantum realm. When we look at it the other way, in which time is a process and not a dimension, then it is the very collapsing of these probabilities which creates the effect of time in the first place.

    This would also explain why time is a local, rather than universal effect, without having to discount the importance of the present, as does Relativity.

  22. cyclops

    My hope is that the mysticism and obscurantism of quantum physics will be revealed to be the tattered ragbag of mutually inconsistent half-truths and cover-ups that it really is. Reasonable people will be able to ask detailed questions and receive coherent answers on issues such as the nature of photons and the size of electrons. The true basis of the observed quantal behavior will be discovered, and physicists will learn to live by their own dictum of not postulating implicit factors to explain their partial understanding. Efforts to develop quantum computing will be abandoned when it is understood that they are based on the mirage of quantal uncertainty. Only when physics is firmly founded on a rational basis will it be able to go forward to meaningful advances in femtotechnology.

  23. There will not be any progress in basic science until the basic fact is understood that all universal spatial propagation has both an absolute (C=E) and a relative (0 through C) at-the-same-time velocity. This differential is manifested as various frequencies: energy, gravity, various forces, quantum/wave/field propagations, strings, branes, mass, matter, dark or otherwise, etc. This basic, unified understanding makes such traditional concepts as the Conservation, Equivalency & Interchangeability Laws natural outcomes of spatial/propagational densities rather than mysterious forces…VG



  26. Within the next 30 years:

    a: Human minds can be melded together, leading to a single universal mind.

    b: Selective areas of acquired knowledge can be transferred from one brain to another.
    For example, I have never been to (say) Budapest.
    From the brain of a person who was born and brought up in Budapest, I can transfer to my brain all that he knows about Budapest – roads, directions, buildings and their details, etc.
    Of course, this is Budapest as viewed and experienced by the donor; many personal viewpoints will be embedded in this transfer.

    c: Any experience will need to be gone through only once –
    like climbing Everest only once, diving from a height of 20 km only once, etc.
    The person’s experience can be tapped as it enters the brain at multiple points (vision, sound, taste, smell, touch, acceleration), recorded, and made available to anyone who can plug the recording into his brain. The recipient will feel exactly what the donor felt.


  27. John Porter

    Our consumption of oil will become significantly smaller than it is today. One of the big savings will come from revamping our national transportation system to make our railways more useful and desirable for passenger travel, so that our airlines will not be required to shuttle so many people back and forth; the trucking industry will be affected in a similar manner as the airlines. This will result in fewer airplanes, trucks, and resources being consumed, because the railways will be made more competitive and fuel-efficient than the airlines and trucks. Making this happen will require significant changes in the way the railway system is defined. Some of the changes that will occur are:
    1. The government will realize that the only way the rails can be made to function efficiently is to eliminate the private control of different sectors of the rail system that exists today. The rail track-ways will be consolidated under federal ownership, similar to the interstate highways, so that anyone playing under the new rules can utilize the railway system.
    2. Track and train maintenance will be subcontracted to private industry, so that the system will function much like the airlines, with different companies competing to provide train services.
    3. A new train system that allows passengers and luggage to be loaded and unloaded from the moving train will eliminate the need for trains to stop at terminals. This feature alone will allow a passenger to travel from Los Angeles to New York City by train, comfortably, in 24 hours without the use of high-speed trains, eliminating the need to spend large sums of money to upgrade the rails for high-speed service. It will also improve the safety and fuel efficiency of rail travel by keeping train speeds at more practical levels, while improving point-to-point travel times for passengers and luggage by a factor of 10 over today’s scheduled coast-to-coast service.
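    A rough, back-of-the-envelope check of the 24-hour claim (editorial sketch; the ~4,500 km route length is an assumed figure, roughly the road distance from Los Angeles to New York):

```python
# Sanity check: average speed needed for a nonstop coast-to-coast
# rail trip completed in 24 hours. The 4,500 km route length is an
# assumption, not a figure from the comment.
route_km = 4500
trip_hours = 24
avg_speed_kmh = route_km / trip_hours
print(f"Required average speed: {avg_speed_kmh:.1f} km/h")  # 187.5 km/h
```

    About 187 km/h sustained, below the roughly 250 km/h usually associated with high-speed rail, so the claim is arithmetically consistent, but only if the train genuinely never stops.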

  28. Absolute spatial propagation (C), with its instantaneous (from 0 to C) relative dimensional parameters, has no beginnings or endings – that is, to say it differently, the entire cosmic content propagates at every radio/spectral speed, forever. We observe their evolutionary structures on micro/macro scales and give them different names such as gravity, various forces, etc….VG…nothing will change this basic Cosmic Principle –VG– and there will be no future advance without understanding this fundamental principle…

  29. In the next 30 years, science could not only make the visions shown to us by Star Trek come true but it could surpass those visions, unimaginably by today’s standards. We could travel to any point in the universe, or in time, instantly – we could achieve physics’ holy grail of cosmic unification, and unify the large scale universe with small scale quantum mechanics – we could see eternal health for everyone who ever lived – and proper understanding of unification would not only show that a being called God must exist but humans would be unified with that God. If we can cast aside our emotional attachments to life as we know it, all this might happen by 2040. If we cannot cast aside our attachments, we’ll call the following “nonsense” and might have to wait hundreds of years to see it come true.

    1) In July 2009, electrical engineer Hong Tang and his team at Yale University in the USA demonstrated that, on silicon chip- and transistor- scales, light can attract and repel itself like electric charges/magnets (Discover magazine’s “Top 100 Stories of 2009 #83: Like Magnets, Light Can Attract and Repel Itself” by Stephen Ornes, from the January-February 2010 special issue; published online December 21, 2009). This is the “optical force”, a phenomenon that theorists first predicted in 2005 (this time delay is rather confusing since James Clerk Maxwell showed that light is an electromagnetic disturbance approx. 140 years ago). In the event of the universe having an underlying electronic foundation (a necessary precursor to scientific fulfilment of Star Trek’s “magic” which becomes clear as these steps are read), it would be composed of “silicon chip- and transistor- scales” and the Optical Force would not be restricted to microscopic scales but could operate universally. Tang proposes that the optical force could be exploited in telecommunications. For example, switches based on the optical force could be used to speed up the routing of light signals in fibre-optic cables, and optical oscillators could improve cell phone signal processing.

    2) If all forms of EM (electromagnetic) radiation can attract/repel, radio waves will also cause communication revolution e.g. with the Internet and mobile (cell) phones. There may be no more overexposure to ultraviolet or X-rays.

    3) In agreement with the wave-particle duality of quantum mechanics, EM waves have particle-like properties (more noticeable at high frequencies) so cosmic rays (actually particles) are sometimes listed on the EM spectrum beyond its highest frequency of gamma rays.

    4) If cosmic rays are made to repel, astronauts going to Mars or another star or galaxy would be safe from potentially deadly radiation.

    5) And if all particles in the body can be made to attract or repel as necessary, doctors will have new ways of restoring patients to health.

    6) From 1929 until his death in 1955, Einstein worked on his Unified Field Theory with the aim of uniting electromagnetism and gravitation. Future achievement of this means warps of space (gravity, according to General Relativity) between spaceships/stars could be attracted together, thereby eliminating distance. And “warp drive” would not only come to life in future science/technology … it would be improved tremendously, almost beyond imagination.

    7) Since Relativity says space and time can never exist separately, warps in space are actually warps in space-time. Eliminating distances in space also means “distances” between both future and past times are eliminated – and time travel becomes reality. This is foreseen by the Enterprise time-travelling back to 20th-century Earth in the 1986 movie “Star Trek IV: The Voyage Home” and by Star Trek’s “subspace communications”. Doing away with distances in space and time also opens the door to Star Trek-like teleportation (and the phrase “Beam me up, Scotty” could be used in real-life situations). Teleportation wouldn’t involve reproducing the original and there would be no need to destroy the original body – we would “simply” be here one moment, and there the next (wherever and whenever our destination is).

    8) Another step might be to think of “… the grand design of the universe, a single theory that explains everything” (words used by Stephen Hawking on the American version of Amazon, when promoting his latest book “The Grand Design”) in a different way than physicists who are presently working on science’s holy grail of unification. Recalling the manmade Genesis Planet in the 1982 movie “Star Trek II: The Wrath of Khan”, we might anticipate that the future will actually see a manmade planet (literally forming a planet is merely an advancement of terraforming, where a planet is engineered to be Earth-like and habitable). We might even free our minds from all restrictions and imagine science and technology creating every planet in the universe. The universe’s underlying electronic foundation (which makes our cosmos into a partially-complete unification, similar to 2 objects which appear billions of years or billions of light-years apart on a huge computer screen actually being unified by the strings of ones and zeros making up the computer code which is all in one small place) would make our cosmos into a complete unification if it enabled not only elimination of all distances in space and time, but also elimination of distance between (and including) the different sides of objects and particles. This last point requires the universe to not merely be a vast collection of the countless photons, electrons and other quantum particles within it; but to be a unified whole that has “particles” and “waves” built into its union of digital 1’s and 0’s (or its union of qubits – quantum binary digits). 
If we use the example of CGH (computer generated holography, which is reminiscent of the holographic simulation called the Holodeck in “Star Trek: The Next Generation”), these “particles” and “waves” would either be elements in a Touchable Hologram – demonstrated by Japanese researchers in August 2009 (search for “Touchable Holography” in Google or You Tube) – or elements produced by the interaction of electromagnetic and presently undiscovered gravitational waves, producing what we know as mass (in September 2008, renowned British astrophysicist Professor Stephen Hawking bet US$100 that the Large Hadron Collider would not find the Higgs boson, a theoretical particle supposed to explain how other particles acquire mass) and forming what we know as space-time. Einstein predicted the existence of gravitational waves, and measurements on the Hulse-Taylor binary-star system resulted in Russell Hulse and Joe Taylor being awarded the Nobel Prize in Physics in 1993 for their work, which was the first indirect evidence for gravitational waves. The feedback of the past and future universes into the unified cosmos’s electronic foundation would ensure that both past and future could not be altered. (A unified whole that has particles and waves built into its union disagrees with Einstein’s view of weights [mass] causing indentations in a malleable “rubber sheet” called space-time, but that system can yield exactly the same measurements as his and I think Einstein would welcome the chance to consider a different interpretation.) (Our brains and minds are part of this unification too, which must mean extrasensory perception and telekinetic independence from technology are possible.)

    9) Elimination of diseased matter and/or eliminating the distance in time between a patient and recovery from any adverse medical condition – even death – would also be a valuable way of restoring health. With literal time travel, people who have long since died could have their minds downloaded into reproductions of their bodies – a modification of ideas published by robotics/artificial-intelligence pioneer Hans Moravec, inventor/futurist Ray Kurzweil and others – allowing them to “recover” from death (establishing colonies throughout space and time would prevent overpopulation). Or if the distance between recovery and a patient is reduced to zero before illness or accident occurs (we might call this “eVaccination” – electronic vaccination); prevention of any adverse medical condition, including that of a second death for those resurrected, can occur. Science’s real-life conquering of all disease, and even death, would certainly make the technology employed by Leonard “Bones” McCoy, the Enterprise’s doctor, appear non-futuristic.

    10) These paragraphs imply the possibility of humans time-travelling to the distant past and using electronics to create this particular subuniverse’s computer-generated Big Bang (but there’s still room for God, because God would be a pantheistic union of the megauniverse’s material and mental parts, forming a union with humans in a cosmic unification). We’ve seen several examples of how science fact could equal, or surpass, science fiction. A final example of surpassing is that, in Star Trek, there are many military conflicts with Klingons, Romulans, the Borg, etc. In a real-life cosmic unification there are no wars between the stars; peace is normal – even on Earth – since nobody can attack anyone in any way without knowing they’re attacking themselves.


  31. Ken Leslie

    Science may one day achieve effective eradication of cold and flu viruses from our populations, but the result may not be better health. It may turn out they are symbiotic. Perhaps this will result in an explosion of lung and brain disease much worse than colds or flu, when they are no longer around to cause a good lung and sinus clean-out.

  32. I don’t know why those renegade scientists from sometime in the future (see “About Science Not Fiction” near the top of the page, on the right) are trying to spill the secrets of tomorrow. It’s obviously impossible because today’s scientists are firmly stuck in the 20th century. 100 years from now, science will have drastically changed the fundamentals of today’s science. Though future science’s discoveries are based on today’s science, no scientist in today’s world wants to know about these discoveries because tomorrow’s science is simply too much for them. Altering their preconceived fundamentals is equivalent to defiling the sacred. Anyhow, here’s a little article entitled “E=m ^ 1+0 is E=mc2 for the 22nd century”.

    Does the simple modification of E=mc2 (E=mc ^ 2) to E=m exponent 1+0 (E=m ^ 1+0) extend Albert Einstein’s genius, which he claimed was not genius but intense curiosity and imagination, infinitely beyond the 20th century?

    Removing E=m from both equations means c2 (to be precise, c ^ 2) = ^ 1+0
    Multiplying each side by base n (any number) gives us
    nc2 = n ^ 1+0 i.e. nc2 = n
    Dividing both sides by n gives c2 = 1, therefore c also equals 1

    Tradition says c is the speed of light. If c has the same value as c ^ 2 then the velocity of light in a vacuum must be a universal constant and since it cannot change, space-time has to warp: producing things like gravity, gravitational lenses, black holes and time travel.

    Solving E=mc2 for mass (m) results in m=E/c ^ 2
    Since c ^ 2 = ^ 1+0
    m = E/^ 1+0
    Multiplying each part of each element by base n: nm = nE/n ^ 1+0
    nm = nE/n
    m = E/1 = E
    Therefore, the mass of the expanding universe can be thought of as pure energy.

    If we interpret m=E (1m=1E) as meaning all the mass and energy in the universe forms a unit, we won’t be able to think of any of the masses and energies composing the universe as separate. Every planet, star, magnet, beam of light, etc. would be part of a unification comparable to a hologram (but a very special hologram, including all forms of electromagnetism as well as gravitational waves which give objects mass. In September 2008, renowned British astrophysicist Professor Stephen Hawking bet US$100 that the Large Hadron Collider would not find the Higgs boson, a theoretical particle supposed to explain how other particles acquire mass. Einstein predicted the existence of gravitational waves, and measurements on the Hulse-Taylor binary-star system resulted in Russell Hulse and Joe Taylor being awarded the Nobel Prize in Physics in 1993 for their work, which was the first indirect evidence for gravitational waves).

    The seeming fact that particles can communicate instantly over billions of light years (are entangled – a process that appears to have operated in the entire universe forever) also seems to support the holographic principle and makes these lines relevant – another effect of the universe being a unification having zero separation is that experiments in quantum mechanics would show that subatomic particles instantly share information even if physically separated by many light years (experiments conducted since the 1980s repeatedly confirm this strange finding). This is explicable as 2 objects or particles only appearing to be 2 things in an objective, “out there” universe. They’d actually be 1 thing in a unified, “everything is everywhere and everywhen” universe. If the universe is a hologram with each part containing information about the whole, the instant sharing of information over many light-years loses its mystery.

    Light can attract and repel itself like electric charges and magnets (according to Discover magazine’s “Top 100 Stories of 2009 #83: Like Magnets, Light Can Attract and Repel Itself” by Stephen Ornes, from the January-February 2010 special issue; published online December 21, 2009 – in July 2009, electrical engineer Hong Tang and his team at Yale University in the USA demonstrated that, on silicon chip- and transistor- scales, light can attract and repel itself like electric charges/magnets). Therefore, it must be true to say electrically charged particles and magnets can attract and repel like light (electric/magnetic attraction/repulsion would, similarly to light, occur only on microscopic scales if the universe did not have an electronic foundation in which it was composed of silicon chip- and transistor- scales: more will be said about this later). We have known for ages they attract/repel – but now we know they do it “like light”, can we extend this phenomenon from quantum mechanics’ wave-particle duality (in the case of electric charges and light) to universe-wide wave-particle duality (in the case of magnets and light)? If the magnets we can see and touch behave like light, is it not possible that every object in the universe (from a small magnet to an enormous planet or star) behaves like light – making the universe a hologram.

    Since m=E, we can think of c as not merely representing the speed of light but as symbolic of the speed of universal expansion (c=Hubble Constant or 299,792.458 kilometres per second = approx. 70 km/sec/megaparsec). What can it mean if c and c2 both equal 1 in the context of cosmic holographic expansion? Answering this is impossible unless we look back at the work of Albert Einstein. That work leads to the conclusion – if c has the same value as c ^ 2 then the velocity of light in a vacuum must be a universal constant and since it cannot change, space-time has to warp: producing things like gravity, gravitational lenses, black holes and time travel. Applied to cosmic holographic expansion, the conclusion is – if c has the same value as c ^ 2 then expansion (whether positive, zero or negative) obviously always exists and space-time’s warping produces the weird phenomena modern science proposes, like higher dimensions and hyperspace and time travel and parallel universes. Let’s see where things lead if we assume c and c2 both equalling 1 means that the future universe, whose rate of expansion is the square of today’s, is existing at the same time as today’s – and if we think of present expansion as c2, that the present universe whose rate of expansion is the square of one in the past is unified with the past one. For a start, such an assumption would be consistent with “dark energy” causing expansion to accelerate.

    We can, of course, write that c2 equals a number, any number (c2 = n)
    Then c = square root n (n ^ ½)
    But c = 1
    Therefore n ^ ½ = 1
    n = 1 ^ 2
    n = 1
    n = c
    and 1 = c ^ 2
    n = c ^ 2

    Since c and c2 both equal n, any past or future universe (whatever the rate of expansion, even if zero or negative) exists at the same time as ours. So a simple modification of Einstein’s E = mc ^ 2 to E = m ^ 1+0 implies that our holographic universe is generated and supported by binary digits (1’s and 0’s). What line of thinking could justify such an apparent leap? The universe’s underlying electronic foundation (which makes our cosmos into a partially-complete unification, similar to 2 objects which appear billions of years or billions of light-years apart on a huge computer screen actually being unified by the strings of ones and zeros making up the computer code which is all in one small place) would make our cosmos into physics’ holy grail of a complete unification if it enabled not only elimination of all distances in space and time, but also elimination of distance between (and including) the different sides of objects and particles. This last point requires the universe to not merely be a vast collection of the countless photons, electrons and other quantum particles within it; but to be a unified whole that has “particles” and “waves” built into its union of digital 1’s and 0’s (or its union of qubits – quantum binary digits). The feedback of the past and future universes into the unified cosmos’s electronic foundation would ensure that both past and future could not be altered.

    Carl Sagan (who was an American astronomer, astrophysicist, cosmologist and author) said there is “… no centre to the expansion, no point of origin of the Big Bang, at least not in ordinary three-dimensional space.” (p. 27 of “Pale Blue Dot” – Headline Book Publishing, 1995). Does this mean the Big Bang (or for our purposes, the binary 1’s and 0’s) would exist outside space-time in what we might call 5th-dimensional hyperspace? The revised equation also says this universe is a unification, permitting time travel into both past and future (because any past or future universe exists at the same time as ours – a twist on the concept of parallel universes). Repeated experimental verification of Einstein’s Relativity theory confirms its statement that space and time can never exist separately but form what is known as space-time. So space, like time, must also be a unification whose separation can be reduced to zero. This suggests that intergalactic travel might one day be completed extremely rapidly.

    From 1929 until his death in 1955, Einstein worked on his Unified Field Theory
    with the aim of uniting electromagnetism and gravitation. Future achievement of this means warps of space (gravity, according to General Relativity) between
    spaceships/stars could be attracted together, thereby eliminating distance. And “warp drive” would not only come to life in future science/technology … it would be improved tremendously, almost beyond imagination. This reminds me of the 1994 proposal by Mexican physicist Miguel Alcubierre of a method of stretching space in a wave which would in theory cause the fabric of space ahead of a spacecraft to contract and the space behind it to expand. Therefore, the ship would be carried along in a warp bubble like a person being transported on an escalator, reaching its destination faster than a light beam restricted to travelling outside the warp bubble. There are no practical known methods to warp space – however, this extension of the Yale demonstration in electrical engineering may provide one.

    Let’s return to Relativity’s statement that space and time can never exist separately, therefore warps in space are actually warps in space-time: Eliminating distances in space also means “distances” between both future and past times are eliminated – and time travel becomes reality. Can anything more specific about the mechanics of time travel be stated here? If we get into a spaceship and eliminate the distance between us and a planet 700 light-years away, it’ll not only be possible to arrive at the planet instantly but we’ll instantly be transported 700 years into the future. On page 247 of “Physics of the Impossible” by physicist Michio Kaku (Penguin Books – 2009), it’s stated “astronomers today believe that the total spin of the universe is zero”. This is bad news for mathematician Kurt Godel, who in 1949 found from Einstein’s equations that a spinning universe would be a time machine (p. 223 of “Physics of the Impossible”). Professor Hawking informs us that “all particles in the universe have a property called spin which is related to, but not identical with, the everyday concept of spin” (science is mystified by quantum spin which has mathematical similarities to familiar spin but it does not mean that particles actually rotate like little tops). Everyday spin might be identical to Godel’s hoped-for spinning universe. If the universe is a Mobius loop (a Mobius loop can be visualised as a strip of paper which is given a half-twist of 180 degrees before its ends are joined), the twisted nature of a Mobius strip or loop plus the fact that you have to travel around it twice to arrive at your starting point might substitute for the lack of overall spin. Then the cosmos could still function as a time machine. We’ve seen how it permits travel into the future. We can journey further and further into the future by going farther and farther around the Mobius Universe. We might travel many billions of years ahead – but when we’ve travelled around M.U. 
exactly twice, we’ll find ourselves back at our start i.e. we were billions of years in the future … relative to that, we’re now billions of years in the past.

    And according to Michio Kaku on p. 316 of “Physics of the Impossible” – Penguin Books, 2009 – “… the inverse-square law (of famous English scientist Isaac Newton [1642-1727]) says that the force between two particles is infinite if the distance of separation goes to zero”. Space-time’s being a unification whose separation can be reduced to zero also suggests the existence of an infinitely powerful, and infinitely intelligent (since those particles could be brain particles), God. Since the distance of separation is zero, the universe must be unified with each of its constituent subatomic particles and those particles must follow the rules of fractal geometry being similarly composed of space and time and hyperspace. Unification of the cosmos with its particles is an insurmountable challenge to our bodily senses and their extensions, scientific instruments – as is existence of zero separation between us and a star’s gravity, heat etc. If we could see the universe exclusively with our minds, we’d see that these insurmountable challenges are indeed possible if we live in a non-materialistic holographic universe (combining gravitational with electromagnetic waves) controlled by the magic of computers.

    Some people will criticise my mathematical approach. They’ll say my article is invalidated by my selective use of the equations which, they’ll contend, are too simple to convey anything of importance. But if you want to say something like “The sky is blue”, you need enough intelligence to mentally sort through an entire dictionary in a tiny fraction of a second and select the 4 little words that express what you know. This article is not wild speculation – it is a jigsaw … combining a recent demonstration in electrical engineering at Yale University, Professor Ed Fredkin’s belief that the universe is a computer, Professor David Bohm’s belief that the universe is a hologram, Professor Stephen Hawking’s lack of belief in the existence of the Higgs boson, the Large Hadron Collider, the work of Nobel Laureates Russell Hulse and Joe Taylor for their discovery of the first indirect evidence for gravitational waves, the work of Yakir Aharonov and John Cramer and John Dobson and Neil Turok and Paul Steinhardt, the discovery of dark energy, Carl Sagan’s statement that there is no point of origin of the Big Bang, Miguel Alcubierre’s “warp drive”, modern science’s popularising science in books for the public as well as its openness to higher dimensions and hyperspace and time travel and parallel universes, Isaac Newton’s religious belief, Benoit Mandelbrot’s fractal geometry, Edwin Hubble’s discovery of universal expansion, mathematician Kurt Godel who tried to use Einstein’s equations to turn the universe into a time machine, and Albert Einstein’s Theories of Special and General Relativity.

    Perhaps the atheists among my readers are thinking it can’t be denied that these paragraphs imply the possibility of humans from the distant future time-travelling to the distant past and using electronics to create this particular subuniverse’s computer-generated Big Bang. Maybe any limits on trips to the future or past are overcome by travelling to other universes and linking their “eliminated distances” to those in this universe. This linkage requires all laws of physics etc. to be identical everywhere. In a so-called multiverse consisting of parallel universes where things have the potential to be slightly different in each universe, the link could be broken because we might find ourselves trying to force a square peg into a round hole.

    An accomplishment such as this would be the supreme example of “backward causality” (effects influencing causes) promoted by Yakir Aharonov, John Cramer and others. However, recalling Isaac Newton’s inverse-square law and what it says about the force between two particles being infinite if the distance of separation goes to zero means there’s still room for God because God would be a pantheistic union of the megauniverse’s material and mental parts, forming a union with humans in a cosmic unification. Subuniverse? Megauniverse? What am I talking about?

    A megauniverse is hinted at by Einstein’s equations as well as cosmology’s Steady-State theory, which say the universe has always existed and will continue forever. Einstein spoke of a “static” universe (which accurately describes a megauniverse that has no limits in space and has always existed/will continue forever), but he thought of this local branch as static, and rightly called it his greatest mistake since the local universe (our subuniverse) is now known to have had a beginning and to be expanding. Each subuniverse and its region of space-time is created from a big bang, but the megauniverse they belong to has no beginning and no end. And it maintains its average density through continuous “creation” (actually, recycling) of matter via the small amount from a preceding universe which is used to initiate expansion of its successor. This steady-state, or static, megauniverse would have its tendency to collapse (from, according to the viewpoint that only one time exists at any instant, ever-increasing gravitational attraction) always exactly balanced by, again from the viewpoint that all times cannot exist at once, the ever-increasing expansion of the universes it contains. The notion that contained universes that are forever expanding would somehow “burst” a static, steady-state megauniverse mistakenly assumes the megauniverse possesses a finite size; and it also reverts to our everyday experience that only one time exists at any instant (forgetting that all times exist and the megauniverse therefore accommodates not just some, but all, extents of expansion). Expanding subuniverses reminds me of the claim by cosmologists Paul J. Steinhardt and Neil Turok that the Big Bang which created our universe was triggered by a collision between our cosmic brane (or membrane) and a neighbouring one. The only essential difference between our hypotheses is that I believe collisions between neighbouring universes are the result, not the cause, of big bangs.
We can regard the cosmic hologram and the megauniverse as examples of invariance (the quality of not changing), and the hologram’s relativistic property of appearing different from differing vantage points as represented by the expanding universes with their big bangs.


  33. A lot of religious people follow that flesh-and-blood person over God, the spirit, or the examples found in the Bible, namely Jesus.

  34. Great, thanks for sharing.

  35. I recently fell in love with using my computer’s Windows Movie Maker to express myself. You’ve heard of “Star Trek” – now meet my film “Time Trek” (also called “2011: A Space-time Odyssey”). This movie on Amazon Studios (you can choose between a 28-minute version and a 73-minute version) has two purposes: 1) to be an outlet for some ideas I have about science in the future and how its reconciliation with religion will be achieved, and 2) to combine those serious ideas with pure entertainment and a good story. To watch it now (for free), go to


    and scroll halfway down the page. Hope you like it!

  36. I was looking for information on that. I had written it off as yet another charge, but I’m going to examine it as before.

  37. Your website won’t display properly on my BlackBerry – you might want to try to fix that.

  38. Thank you for the really enlightening post; many of us could benefit from more blogs of this nature on the internet. Could you expand on the second paragraph, please? I’m a little baffled and uncertain whether I understand your point entirely. Many thanks.

  39. I do not even understand how I ended up here, but I thought this post was good. I don’t know who you are, but you’re certainly going to be a well-known blogger if you aren’t already 😉 Cheers!

  40. In this case, more colleagues should certainly examine this quandary.
