Ever since the discovery of quantum mechanics, “reality” has been hard to grasp, literally and figuratively. The laws of quantum mechanics seem to suggest that particles are neither here nor there: until measured, they behave as waves, existing everywhere and nowhere at once. Only when observed does a particle suddenly settle on where to be.
Does this mean that observing our world is what makes it real? Does it blink out of existence when we’re not looking?
The question of how to make sense of quantum mechanics and its strange implications has occupied many curious minds for decades. One such mind is that of Lucien Hardy, a theoretical physicist at the Perimeter Institute in Canada.
Hardy has proposed an experiment that may shed light on many unanswered questions, bringing us closer to understanding what the mind is, whether it’s made up of the same matter as the physical world and whether it is capable of affecting the physical world.
His experiment concerns one of the features of quantum mechanics known as quantum entanglement – what Einstein called “spooky action at a distance”. It refers to the fact that entangled particles affect one another regardless of the distance between them: the quantum state of each particle cannot be described independently of the other.
Hardy has proposed to use a variation on the Bell test, devised in 1964 to establish whether certain theoretical consequences of quantum entanglement actually occur in nature.
Futurism explains: “Simply put, the Bell test involves a pair of entangled particles: one is sent towards location A and the other to location B. At each of these points, a device measures the state of the particles. The settings in the measuring devices are set at random, so that it’s impossible for A to know the setting of B (and vice versa) at the time of measurement. Historically, the Bell test has supported the spooky theory”.
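The logic behind such a test can be sketched numerically. The snippet below is a simplified illustration using the standard CHSH formulation of the Bell test (not Hardy’s actual protocol): it computes the quantum-mechanical correlation between entangled particles at the textbook optimal measurement angles and compares the result to the bound of 2 that any “non-spooky” local hidden-variable theory must obey.

```python
import math

def quantum_correlation(angle_a, angle_b):
    """Quantum prediction E(a, b) = -cos(2*(a - b)) for singlet-state
    polarization measurements at analyser angles a and b (radians)."""
    return -math.cos(2 * (angle_a - angle_b))

# Standard CHSH angles that maximally violate the classical bound.
a, a2 = 0.0, math.pi / 4               # the two settings at location A
b, b2 = math.pi / 8, 3 * math.pi / 8   # the two settings at location B

# CHSH combination of the four correlation measurements.
S = abs(quantum_correlation(a, b) - quantum_correlation(a, b2)
        + quantum_correlation(a2, b) + quantum_correlation(a2, b2))

print(f"CHSH value S = {S:.3f}")   # 2*sqrt(2), approximately 2.828
print("Classical (local hidden variable) bound: 2")
print(f"Quantum theory violates the classical bound: {S > 2}")
```

The historical Bell tests mentioned above have consistently measured values near the quantum prediction of 2√2; Hardy’s proposal asks what happens when human minds, rather than random number generators, choose the settings.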
In Hardy’s experiment, humans will be used to decide the settings at each end.
“To get a sufficiently high rate of switching at both ends, I suggest an experiment over a distance of about 100km with 100 people at each end wearing EEG headsets, with the signals from these headsets being used to switch the settings.”
“The radical possibility we wish to investigate is that, when humans are used to decide the settings (rather than various types of random number generators), we might then expect to see a violation of Quantum Theory in agreement with the relevant Bell inequality. Such a result, while very unlikely, would be tremendously significant for our understanding of the world,” Hardy wrote in a paper published online.
If the experiment turns up results showing a correlation between the measurements that doesn’t match previous Bell tests, it would amount to a violation of quantum theory, suggesting a new influence on the A and B measurements outside the realm of standard physics.
Why does this matter?
This new influence might be the mind, or human consciousness. It could show that consciousness is not composed of matter, that it exists in a separate state and is not governed by the laws of physics.
Such a result would shed more light on human consciousness and likely unleash a new slew of debates and theories.
However, current trends in software-only artificial intelligence and deep learning technology raise serious doubts about the long-term plausibility of Elon Musk’s vision of brain-machine augmentation. The doubt is not only due to hardware limitations; it also has to do with the role the human brain would play in the match-up.
Musk’s thesis is straightforward: sufficiently advanced interfaces between brain and computer will enable humans to massively augment their capabilities by better leveraging technologies such as machine learning and deep learning.
But the exchange goes both ways. Brain-machine interfaces may help the performance of machine learning algorithms by having humans “fill in the gaps” for tasks that the algorithms are currently bad at, like making nuanced contextual decisions.
The idea in itself is not new. J. C. R. Licklider and others speculated on the possibility and implications of “man-computer symbiosis” in the mid-20th century.
However, progress has been slow. One reason is the difficulty of the hardware itself. “There is a reason they call it hardware – it is hard,” said Tony Fadell, creator of the iPod. And creating hardware that interfaces with organic systems is harder still.
Assuming that the hardware challenge is eventually solved, there are bigger problems at hand. The past decade of incredible advances in deep learning research has revealed that there are some fundamental challenges to be overcome.
The first is simply that we still struggle to understand and characterise exactly how these complex neural network systems function.
We trust simple technology like a calculator because we know it will always do precisely what we want it to do. Errors are almost always the result of mistaken entry by a fallible human.
One vision of brain-machine augmentation would be to make us superhuman at arithmetic. So instead of pulling out a calculator or smartphone, we could think of the calculation and receive the answer instantaneously from the “assistive” machine.
Where things get tricky is if we were to try and plug into the more advanced functions offered by machine learning techniques such as deep learning.
Let’s say you work in a security role at an airport and have a brain-machine augmentation that automatically scans the thousands of faces you see each day and alerts you to possible security risks.
Most machine learning systems suffer from an infamous problem whereby a tiny change in the appearance of a person or object can cause the system to catastrophically misclassify what it thinks it is looking at. Change a picture of a person by less than 1%, and the machine system might suddenly think it is looking at a bicycle.
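This fragility can be demonstrated even with a toy model. The sketch below uses made-up weights on a simple linear classifier (real attacks target deep networks, but the principle is the same): nudging each input by less than 1% in the direction that favours the wrong class is enough to flip the prediction from “person” to “bicycle”.

```python
def score(weights, bias, x):
    """Linear classification score: positive means "person",
    negative means "bicycle". Weights and bias are illustrative."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

weights = [0.9, -1.2, 0.4, 1.5, -0.3, 0.8, -0.7, 1.1]
bias = -1.2
image = [0.5] * 8   # a stand-in for pixel intensities in [0, 1]

# Adversarial perturbation in the spirit of the fast gradient sign
# method: push each pixel against the sign of its weight, driving the
# score towards the other class.
eps = 0.009  # each pixel changes by under 1% of its full range
adversarial = [xi - eps * (1 if w > 0 else -1)
               for w, xi in zip(weights, image)]

def label(s):
    return "person" if s > 0 else "bicycle"

print(label(score(weights, bias, image)))        # -> person
print(label(score(weights, bias, adversarial)))  # -> bicycle
```

The attack works because the classifier’s decision margin is small relative to the sum of its weight magnitudes, so many tiny coordinated changes add up to a decisive shift in the score.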
Terrorists or criminals might exploit such vulnerabilities to bypass security checks, a problem that already exists in online security. Humans, although limited in their own ways, might not be vulnerable to the same exploits.
Despite their reputation as being unemotional, machine learning technologies also suffer from bias in the same way that humans do, and can even exhibit racist behaviour if fed appropriate data. This unpredictability has major implications for how a human might plug into – and more importantly, trust – a machine.
Trust me, I’m a robot
Trust is also a two-way street. Human thought is a complex, highly dynamic activity. In this same security scenario, with a sufficiently advanced brain-machine interface, how will the machine know what human biases to ignore? After all, unconscious bias is a challenge everyone faces. What if the technology is helping you interview job candidates?
We can preview to some extent the issues of trust in a brain-machine interface by looking at how defence forces around the world are trying to address human-machine trust in an increasingly mixed human-autonomous systems battlefield.
There is a parallel between a robot warrior making an ethical decision to ignore an unlawful order by a human and what must happen in a brain-machine interface: interpretation of the human’s thoughts by the machine, while filtering fleeting thoughts and deeper unconscious biases.
In defence scenarios, the logical role for a human brain is in checking that decisions are ethical. But how will this work when the human brain is plugged into a machine that can make inferences using data at a scale that no brain can comprehend?
In the long term, the issue is whether, and how, humans will need to be involved in processes that are increasingly determined by machines. Soon machines may make medical decisions no human team can possibly fathom. What role can and should the human brain play in this process?
In some cases, the combination of automation and human workers could increase jobs, but this effect is likely fleeting. Those same robots and automation systems will continue to improve, likely eventually removing the jobs they created locally.
Likewise, while humans may initially play a “useful” role in brain-machine systems, as the technology continues to improve there may be less reason to include humans in the loop at all.
The idea of maintaining humanity’s relevance by integrating human brains with artificial brains is appealing. What remains to be seen is what contribution the human brain will make, especially as technology development outpaces human brain development by a million to one.
Elon Musk, Stephen Hawking and Bill Gates have all issued warnings about the increasing capabilities of artificial intelligence. Yet how real is the threat? Is it something we need to worry about in the short to medium term or something that’s so far away that we can’t even begin to prepare for it?
Many popular sci-fi movies such as 2001: A Space Odyssey, The Terminator, The Matrix, Transcendence and Ex Machina have played on our deeply rooted fears about the rapid evolution of artificial intelligence and how it may threaten the survival of humans. Yet the reality is that it’s very difficult to decipher how artificial intelligence is developing, and where we are at with current breakthroughs and applications of artificial intelligence.
This is why the perspective offered by Dr. Michio Kaku is so interesting. He is uniquely positioned: a theoretical physicist and futurist who has turned his attention to the future of the human mind. He not only understands the latest developments in artificial intelligence, but also has a deep understanding of human intelligence and what it would take for AI to develop to the point of threatening humans.
In a phone interview, Dr. Kaku was asked for his thoughts on the current state of artificial intelligence:
“The potential of A.I. is real in the sense that if artificially intelligent robots leave the laboratory it could signal the end of the human race, however let’s be practical.”
For the time being, Dr. Kaku believes we have little to fear from AI:
“Our most advanced robots have the intelligence of a cockroach; a retarded lobotomized cockroach. You put one of our robots in Fukushima, for instance, instead of cleaning up the mess they just fall over, they can barely walk correctly.
“The Pentagon even sponsored a challenge, the DARPA challenge, to create a robot that could clean up the Fukushima nuclear disaster. All the robots failed except one. They failed to do simple things like use a broom or turn a valve. That’s how primitive we are.”
Dr. Kaku foresees advances over the next few decades, but also believes we have enough time to prepare:
“However, in the coming decades I expect robots to become as smart as a mouse, then a rabbit, then a dog, and finally a monkey. At that point who knows? Perhaps by the end of the century, it could be dangerous. Not now. In which case, I think we should put a chip in their brains to shut them off. The conclusion is we have time, we have time in which to deal with robots as they gradually, slowly become more intelligent.”
He has a valuable analogy to share with anyone thinking the danger is already here:
“I think it’s good to alert people that this is happening, but the time frame is not years, the time frame is decades, or as one scientist said, ‘the probability that a sentient robot will be built soon is similar to the probability a 747 jetliner is assembled spontaneously by a hurricane,’ so we’re still children when it comes to harnessing artificial intelligence, not that it can’t happen.”
Despite the fears surrounding the evolution of artificial intelligence, Dr. Kaku believes our main concerns as a civilization should be focused on making the rapid technological growth seen in the past few decades sustainable.
Moore’s Law is a computing term which states that the processing power of computers will double roughly every two years. It is named after Gordon Moore, the co-founder of Intel, who observed in 1965 that the number of transistors on a chip was doubling at a steady exponential pace. Yet Dr. Kaku warns that we are reaching the limits of how much further we can go with the silicon-based technologies that launched the computer revolution.
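That doubling compounds quickly. A back-of-the-envelope calculation (the starting figure is illustrative, not a real chip’s specification) shows how a two-year doubling period multiplies transistor counts roughly a thousandfold over twenty years:

```python
def moores_law(initial_count, years, doubling_period=2):
    """Projected transistor count after `years`, assuming the count
    doubles every `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

# Illustrative starting point: a chip with one million transistors.
start = 1_000_000
for years in (2, 10, 20):
    projected = moores_law(start, years)
    print(f"after {years:2d} years: {projected:,.0f} transistors")
```

Twenty years of doubling every two years is ten doublings, i.e. a factor of 2¹⁰ = 1,024 – which is why Dr. Kaku calls any slowdown in this curve a civilizational problem.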
“Moore’s law is one of the pillars of modern civilization. It is the reason why we have such prosperity, and why we have such dazzling household electronic appliances, but it can’t last forever. Sooner or later Silicon Valley could become a rust belt, just like the Rust Belt in Pennsylvania.
“That’s because components inside a silicon chip are going down to the size of atoms. In your Pentium chip, in your laptop, there is a layer that is about 20 atoms across, tucked into some of the layers that are in a Pentium chip, we’re talking about atomic scale.”
At this point, the major challenge is how we’ll get to the next level. As Dr. Kaku says:
“However, in the coming decades it’ll go down to maybe 5 atoms across, and at that point two things happen; you have enormous heat being generated and second is leakage, electrons leak out because of the uncertainty principle.
“In other words, the quantum theory kills you. The quantum theory giveth and the quantum theory taketh away. The quantum theory makes possible the transistor to begin with, so it also spells the doom of the age of silicon.”
Although it’s almost impossible to predict what’s going to emerge as the next big technological breakthrough, Dr. Kaku is certain that silicon-based processors will go the way of vacuum tubes:
“One day the age of silicon will end just like the age of vacuum tubes. When I was a kid I remember TV sets all had vacuum tubes. That era is long gone. Historians today look at this age of silicon and wonder what’s next. The short answer is we don’t know. There are no viable candidates, none of them ready for prime time, to replace silicon power.
“So what this means is that in the future, we could see a slowing down of Moore’s Law. That at Christmas time, computers may not be twice as powerful as the previous Christmas, so we physicists are desperately looking for replacements like quantum computers or I personally think molecular computers will eventually replace silicon, but they’re not ready yet. I think that in principle, we could be in trouble.”
Very bad news for all of us: We will not survive another 1,000 years on Earth, says Stephen Hawking. One of the most brilliant minds warns that ‘we must continue to go into space for the future of humanity.’
‘Humans will not survive another 1,000 years on ‘fragile’ Earth.’
Professor Hawking, speaking at the Oxford University Union, also insisted on the importance of space travel.
“I don’t think we will survive another 1,000 years without escaping our fragile planet. I therefore want to encourage public interest in space, and I have been getting my training in early.
I believe that life on Earth is at an ever-increasing risk of being wiped out by a disaster, such as a sudden nuclear war, a genetically engineered virus, or other dangers. I think the human race has no future if it doesn’t go to space.”
High above the surface, Earth’s magnetic field constantly deflects incoming supersonic particles from the sun. These particles are disturbed in regions just outside of Earth’s magnetic field – and some are reflected into a turbulent region called the foreshock.
New observations from NASA’s THEMIS – short for Time History of Events and Macroscale Interactions during Substorms – mission show that this turbulent region can accelerate electrons up to speeds approaching the speed of light. Such extremely fast particles have been observed in near-Earth space and many other places in the universe, but the mechanisms that accelerate them are not yet fully understood.
The new results provide the first steps towards an answer, while opening up more questions. The research finds electrons can be accelerated to extremely high speeds in a near-Earth region farther from Earth than previously thought possible – leading to new inquiries about what causes the acceleration. These findings may change the accepted theories on how electrons can be accelerated not only in shocks near Earth, but also throughout the universe. Having a better understanding of how particles are energized will help scientists and engineers better equip spacecraft and astronauts to deal with these particles, which can cause equipment to malfunction and affect space travelers.
“This affects pretty much every field that deals with high-energy particles, from studies of cosmic rays to solar flares and coronal mass ejections, which have the potential to damage satellites and affect astronauts on expeditions to Mars,” said Lynn Wilson, lead author of the paper on these results at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.
The results, published in Physical Review Letters, on Nov. 14, 2016, describe how such particles may get accelerated in specific regions just beyond Earth’s magnetic field. Typically, a particle streaming toward Earth first encounters a boundary region known as the bow shock, which forms a protective barrier between the solar wind, the continuous and varying stream of charged particles flowing from the sun, and Earth. The magnetic field in the bow shock slows the particles, causing most to be deflected away from Earth, though some are reflected back towards the sun. These reflected particles form a region of electrons and ions called the foreshock region.
Image caption: One of the traditional proposed mechanisms for accelerating particles across a shock, called shock drift acceleration. Electrons (yellow) and protons (blue) move in the collision area where two hot plasma bubbles collide (red vertical line); cyan arrows represent the magnetic field, and light green arrows the electric field. Credits: NASA Goddard’s Scientific Visualization Studio/Tom Bridgman, data visualizer
Some of those particles in the foreshock region are highly energetic, fast moving electrons and ions. Historically, scientists have thought one way these particles get to such high energies is by bouncing back and forth across the bow shock, gaining a little extra energy from each collision. However, the new observations suggest the particles can also gain energy through electromagnetic activity in the foreshock region itself.
The observations that led to this discovery were taken by one of the five THEMIS satellites, which circled Earth to study how the planet’s magnetosphere captures and releases solar wind energy, in order to understand what initiates the geomagnetic substorms that cause aurora. The THEMIS orbits took the spacecraft across the foreshock boundary regions. The primary THEMIS mission concluded successfully in 2010, and two of the satellites now collect data in orbit around the moon.
Operating between the sun and Earth, the spacecraft found electrons accelerated to extremely high energies. These acceleration events lasted less than a minute, and the electron energies were much higher than the average energy of particles in the region – far higher than collisions alone can explain. Simultaneous observations from two additional heliophysics spacecraft, Wind and STEREO, showed no solar radio bursts or interplanetary shocks, so the high-energy electrons did not originate from solar activity.
“This is a puzzling case because we’re seeing energetic electrons where we don’t think they should be, and no model fits them,” said David Sibeck, co-author and THEMIS project scientist at NASA Goddard. “There is a gap in our knowledge, something basic is missing.”
The electrons also could not have originated from the bow shock, as had been previously thought. If the electrons were accelerated in the bow shock, they would have a preferred movement direction and location – in line with the magnetic field and moving away from the bow shock in a small, specific region. However, the observed electrons were moving in all directions, not just along magnetic field lines. Additionally, the bow shock can only produce energies at roughly one tenth of the observed electrons’ energies. Instead, the cause of the electrons’ acceleration was found to be within the foreshock region itself.
“It seems to suggest that incredibly small scale things are doing this because the large scale stuff can’t explain it,” Wilson said.
High-energy particles have been observed in the foreshock region for more than 50 years, but until now, no one had seen the high-energy electrons originate from within the foreshock region. This is partially due to the short timescale on which the electrons are accelerated, as previous observations had averaged over several minutes, which may have hidden any event. THEMIS gathers observations much more quickly, making it uniquely able to see the particles.
Next, the researchers intend to gather more observations from THEMIS to determine the specific mechanism behind the electrons’ acceleration.