However, current trends in software-only artificial intelligence and deep learning technology raise serious doubts about the plausibility of this claim, especially in the long term. This doubt is not only due to hardware limitations; it is also to do with the role the human brain would play in the match-up.
Musk’s thesis is straightforward: that sufficiently advanced interfaces between brain and computer will enable humans to massively augment their capabilities by being better able to leverage technologies such as machine learning and deep learning.
But the exchange goes both ways. Brain-machine interfaces may help the performance of machine learning algorithms by having humans “fill in the gaps” for tasks that the algorithms are currently bad at, like making nuanced contextual decisions.
The idea in itself is not new. J. C. R. Licklider and others speculated on the possibility and implications of “man-computer symbiosis” in the mid-20th century.
However, progress has been slow. One reason is the difficulty of developing hardware. “There is a reason they call it hardware – it is hard,” said Tony Fadell, creator of the iPod. And creating hardware that interfaces with organic systems is even harder.
Assuming that the hardware challenge is eventually solved, there are bigger problems at hand. The past decade of incredible advances in deep learning research has revealed that there are some fundamental challenges to be overcome.
The first is simply that we still struggle to understand and characterise exactly how these complex neural network systems function.
We trust simple technology like a calculator because we know it will always do precisely what we want it to do. Errors are almost always a result of mistaken entry by the fallible human.
One vision of brain-machine augmentation would be to make us superhuman at arithmetic. So instead of pulling out a calculator or smartphone, we could think of the calculation and receive the answer instantaneously from the “assistive” machine.
Where things get tricky is if we were to try and plug into the more advanced functions offered by machine learning techniques such as deep learning.
Let’s say you work in a security role at an airport and have a brain-machine augmentation that automatically scans the thousands of faces you see each day and alerts you to possible security risks.
Most machine learning systems suffer from an infamous problem, known as adversarial examples, whereby a tiny change in the appearance of a person or object can cause the system to catastrophically misclassify what it thinks it is looking at. Change a picture of a person by less than 1%, and the machine system might suddenly think it is looking at a bicycle.
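The mechanics can be illustrated with a deliberately tiny sketch. The hypothetical linear “classifier” below stands in for a real recognition system (which would be a deep network, not three weights): nudging every input feature slightly against the direction of the model’s weights is enough to flip its decision, which is the core idea behind fast gradient-style attacks.

```python
# Toy illustration of an adversarial perturbation. The classifier and its
# weights are invented for this sketch, not taken from any real system.

def classify(weights, bias, features):
    """Return 'person' if the linear score is positive, else 'bicycle'."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "person" if score > 0 else "bicycle"

def adversarial(weights, features, eps):
    """Nudge each feature by eps against the sign of its weight."""
    sign = lambda w: (w > 0) - (w < 0)
    return [x - eps * sign(w) for w, x in zip(weights, features)]

weights, bias = [0.5, -0.3, 0.8], 0.0
x = [0.10, 0.20, 0.05]                    # original input
x_adv = adversarial(weights, x, eps=0.1)  # each feature moved by only 0.1

print(classify(weights, bias, x))      # prints "person"
print(classify(weights, bias, x_adv))  # prints "bicycle"
```

A change of 0.1 per feature is small relative to the input, yet it is perfectly aimed at the model’s weakest direction, which is why such attacks are so effective against machines while being invisible or irrelevant to humans.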
Terrorists or criminals might exploit the different vulnerabilities of a machine to bypass security checks, a problem that already exists in online security. Humans, although limited in their own way, might not be vulnerable to such exploits.
Despite their reputation as being unemotional, machine learning technologies also suffer from bias in the same way that humans do, and can even exhibit racist behaviour if fed appropriate data. This unpredictability has major implications for how a human might plug into – and more importantly, trust – a machine.
Trust me, I’m a robot
Trust is also a two-way street. Human thought is a complex, highly dynamic activity. In this same security scenario, with a sufficiently advanced brain-machine interface, how will the machine know what human biases to ignore? After all, unconscious bias is a challenge everyone faces. What if the technology is helping you interview job candidates?
We can preview to some extent the issues of trust in a brain-machine interface by looking at how defence forces around the world are trying to address human-machine trust in an increasingly mixed human-autonomous systems battlefield.
There is a parallel between a robot warrior making an ethical decision to ignore an unlawful order by a human and what must happen in a brain-machine interface: interpretation of the human’s thoughts by the machine, while filtering fleeting thoughts and deeper unconscious biases.
In defence scenarios, the logical role for a human brain is in checking that decisions are ethical. But how will this work when the human brain is plugged into a machine that can make inferences using data at a scale that no brain can comprehend?
In the long term, the issue is whether, and how, humans will need to be involved in processes that are increasingly determined by machines. Soon machines may make medical decisions no human team can possibly fathom. What role can and should the human brain play in this process?
In some cases, the combination of automation and human workers could increase jobs, but this effect is likely fleeting. Those same robots and automation systems will continue to improve, likely eventually removing the jobs they created locally.
Likewise, while humans may initially play a “useful” role in brain-machine systems, as the technology continues to improve there may be less reason to include humans in the loop at all.
The idea of maintaining humanity’s relevance by integrating human brains with artificial brains is appealing. What remains to be seen is what contribution the human brain will make, especially as technology development outpaces human brain development by a million to one.
Modern societies are usually defined as relatively unreligious, dominated by money and power rather than belief in gods. This idea marks them out as modern when compared to traditional societies as well as highlighting the many issues of modernity including capitalism, growth, overproduction and climate change.
But why are we so sure that secularisation and the dominance of politics and economics are in the DNA of modern societies? Our answer to this question defines and confines our problem-solving ability.
A recent article in the journal Futures shows that most strategic management tools and models of the future have a strong bias towards politics, economics and science, thus systematically neglecting religion, law, art, or education.
Given this bias is unconscious and unjustified, we risk constantly looking for solutions to the wrong problems.
We undertook big data research on the digital database created by the Google Books project, which has scanned and digitised over 25 million of the estimated 130 million books ever published worldwide.
To systematically screen this huge collection of text, we used the Google Books Ngram Viewer, a free online graphing tool that charts annual word counts as found in the Google Books project. The Ngram Viewer comes with an intuitive interface where users can enter keywords, choose the sample period, define the desired language area, and modulate the shape of the graphical output.
One of our challenges was to find the right keywords. For this, we used an open source tool by Jan Berkel.
The result was a list of the 10,000 most frequently used words and strings in books published between 1800 and 2000. This period covers a considerable proportion of the era commonly referred to as modernity, and it is the period for which Google regards the data as reliable.
We repeated the procedure until we had compiled one list each for English, Spanish, Russian, French, German, and Italian. We then screened the word frequency lists for terms that make unambiguous and distinct keywords. Money or God make good examples of such keywords, whereas we omitted terms that straddle domains, such as tax (politics and the economy) or constitution (politics and law).
Finally, we entered stacks of the five most frequent religious, political, economic, and other pertinent keywords to run comparative analyses of word frequency time-series plots as displayed by the Google Ngram Viewer.
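The stacking step described above can be sketched in a few lines. The toy corpus and keyword lists below are illustrative assumptions, not the study’s actual lists: for each domain, the relative frequencies of its keywords are summed into a single series value, mirroring what entering a stack of keywords into the Ngram Viewer produces for one year.

```python
from collections import Counter

# Illustrative keyword stacks; the study used the five most frequent
# keywords per domain, which are not reproduced here.
DOMAINS = {
    "religion": ["god", "faith"],
    "politics": ["state", "war"],
    "economy":  ["money", "market"],
}

def relative_frequencies(tokens):
    """Word counts normalised by corpus size, like the Ngram Viewer's y-axis."""
    total = len(tokens)
    counts = Counter(tokens)
    return {word: count / total for word, count in counts.items()}

def domain_scores(tokens):
    """Sum each domain's keyword frequencies into one stacked value."""
    freqs = relative_frequencies(tokens)
    return {d: sum(freqs.get(k, 0.0) for k in kws) for d, kws in DOMAINS.items()}

# One toy "year" of text: politics outweighs religion, economy trails both.
corpus_1900 = "god faith god state money war state state".split()
print(domain_scores(corpus_1900))
```

Repeating this for every year and language yields the time-series plots discussed below; the real analysis, of course, ran over millions of scanned books rather than an eight-word corpus.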
The figure below shows word frequencies of combined religious, political, economic, scientific, and mass media-related keywords in the English-language Google Books corpus between 1800 and 2000.
Since we analysed a considerable proportion of humanity’s collective memory between 1800 and 2000, and since the outcomes of our research resemble classical electroencephalography (EEG) recordings (see figure), we also linked our research to the global brain discourse.
The basic idea here is that the worldwide network of information and communication technology acts as the global brain of planet Earth, an idea heralded by Peter Russell in his 1982 book The Global Brain. In this sense, our electroencephalographic big data internet research is the first example of a global brain wave measurement.
Secularisation, strong politics, and no capitalism
Looking at the global brain wave recordings (figure below), we find that our method performs well in capturing the expected decline of religion (orange line), which, by the way, is less significant in Spanish and Italian.
The chart for English language shows notable interactions between the two world wars and the significance of politics (blue line), and we find similar interactions in other areas, too.
In the Russian segment, the importance of politics is considerably increased during the (post-) October Revolution period and even more dramatically so in the context of the second world war.
In the French case, the years around the first world war see a steep rise in the importance of politics, whereas the interaction during the second world war is much more moderate. The German data follow a similar pattern to the French, but exhibit a dramatic rise of politics in the post-second world war era, peaking at around 1970.
The rise of the importance of the mass media corresponds to the timeline of the information age (green line). What we do not see, however, is the dominant position of economy (purple line) in modern societies. There is a short period between 1910 and 1950 where economy was second to a much stronger politics.
This image of a war economy is the closest approximation to a capitalist situation to be found in the English segment, in which the economy is outperformed by science (red line) soon after the second world war and by mass media in the 1990s, ranking fourth at the end of the sample period.
There’s no sign of a golden age of capitalism in the 19th century either, despite what narratives of the Industrial Revolution would lead us to believe.
The charts for the other languages also do not support the idea of modern societies as being capitalist or otherwise dominated by the economy. The only exception is the French segment, where economy has been second again to a much stronger politics since the end of the second world war.
Economy ranks third in the Russian segment only from the late 1950s to the 1990s and in the German segment not before the 1970s, and well below par in the Spanish and Italian segments.
New god of modern societies
Our big data research hence suggests that modern societies are heavily politicised; that most investigated societies are clearly secularised, with reservations applying to the societies where Spanish and Italian are spoken; and that science plays a remarkable role, ranking second in the English-, Russian-, and German-language areas in the second half of the 20th century.
None of the investigated societies is dominated by the economy, with the minor reservation discussed above applying only to French. Even this finding reflects only the idea of capitalism as an economy-based political ideology, not the idea of the primacy of the economy.
The major finding of our research is that political power rather than economics has dethroned religious faith as the dominant guiding principle between 1800 and 2000.
Our data strongly suggest that, despite all contradicting ideologies or habits of mind, the economy is of only moderate importance to modern societies. This implies that, in the future, we may wish to think twice before we continue to label our societies as money-driven, economy-biased, or simply capitalist.
One major shortcoming of our research is that it focused only on books. But this focus is adequate as the ideas of capitalism and the primacy of economy have been developed precisely in the books we investigated.
The idea that the definitions of modern societies as capitalist or economy-dominated are probably rooted in misconceptions rather than modern scientific worldviews may be counter-intuitive or even shocking for both capitalists and anti-capitalists.
Acknowledgement: Our research was first presented at 2016 City University of Hong Kong Workshop on Computational Approaches to Big Data in the Social Sciences and Humanities. I am grateful to Jonathan Zhu and the entire team of the Web Mining Lab at the CityU Department of Media and Communication for the invitation to Hong Kong as well as for valuable feedback.
Reliving and sharing our personal past is part of what makes us human. It creates a sense of who we are, allows us to plan for the future and helps us form relationships. But we don’t all remember our past in the same way. In fact, the nature and quality of memory differs considerably between people.
For instance, when asked to remember something about a party, one person might describe vividly their sixth birthday: how the gifts were laid out, the sweet, chocolatey taste of the hedgehog cake and going to bed really late. Another person might not recall this precise detail, but remember that their aunt despised parties and that hedgehog cakes were massive in the 80s.
So, our personal memories contain different types of information. Some of this is very specific about when and where things happened – and what it felt like. This collection of personal experiences is known as “episodic memory”. Other bits are general facts about the world, ourselves and the people we know. This is called “semantic memory”. A big question in neuroscience is whether these two memory types involve distinct parts of the brain.
Individuals who have suffered damage to a region called the hippocampus (involved in memory, learning and emotion) have been found to remember facts about their lives but lack the high-resolution, episodic detail. On the other hand, patients with a rare form of dementia, known as semantic dementia, can remember episodic information, but not the facts that glue it all together. Intriguingly, these individuals show early degeneration of another part of the brain called the anterior temporal lobe (thought to be critical for semantic memory).
Networks versus areas
But can we see a similar distinction in the healthy brain? As reflecting on our past is highly complex, it seems likely that different brain regions must work together to achieve it. And studies using functional MRI have shown that personal memories activate large networks in the brain.
So it appears that memory cannot be boiled down to one or two particular brain areas. We have to think more widely than that. The brain itself is made up of both grey and white tissue. The white part, known as “white matter”, contains fibres that allow information to travel between different areas of the brain. So could these connections themselves predict how we remember?
In our latest study, published in the journal Cortex, we explored this question by using a brain scanning technique known as diffusion MRI. This method uses the movement of water molecules to map out the brain’s white matter pathways.
We asked 27 college-aged volunteers to lie still in the scanner as we collected images of their brains. Using these images, we could identify specific pathways and pull out measures of their structure – indicating how efficiently information can travel between connected regions.
Outside the scanner, each volunteer was asked to describe memories from their past in response to cue words, such as “party” or “holiday”. By going through and painstakingly coding each memory, we could work out how “episodic” and “semantic” each person’s memory was. For instance, precise spatial statements would count toward the episodic score (“The Eiffel Tower was directly behind us”), and facts would count toward the semantic score (“Paris is my sister’s favourite city”).
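The coding step above amounts to labelling each statement in a transcript and tallying the labels per person. The snippet below is a hypothetical sketch of that bookkeeping; the statements and labels are invented examples, and the real coding scheme was far more fine-grained than a binary tag.

```python
# Hypothetical sketch of the memory-coding step: each statement in one
# cued memory is hand-coded as 'episodic' or 'semantic', then tallied.
coded_memory = [
    ("The Eiffel Tower was directly behind us", "episodic"),
    ("We arrived just as the sun was setting",  "episodic"),
    ("Paris is my sister's favourite city",     "semantic"),
]

def memory_scores(statements):
    """Count coded statements of each type for one cued memory."""
    scores = {"episodic": 0, "semantic": 0}
    for _text, label in statements:
        scores[label] += 1
    return scores

print(memory_scores(coded_memory))  # prints {'episodic': 2, 'semantic': 1}
```

Summing these tallies across all of a person’s cued memories gives the per-person episodic and semantic scores that were then related to white matter structure.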
We found that the amount of rich, episodic detail that volunteers remembered was related to the connectivity of an arch-shaped white matter pathway called the fornix, which links to the hippocampus. So, the more efficiently the fornix can relay information from the hippocampus to surrounding regions, the more episodic someone’s memory is.
A different white matter pathway – catchily named the inferior longitudinal fasciculus – strongly predicted how semantic people’s memories were. Interestingly, this long bundle of white matter is the major route from visual parts of the brain to the anterior temporal lobe – the same region that is affected in cases of semantic dementia.
Wired for memory
These findings suggest that differences in how we each remember our past are reflected in how our brains are wired. Historically, neuroscience has tended to see brain regions as singletons, working alone. These results suggest the alternative: that links between regions – and the networks they form – are critical for how we think and behave.
Our finding also supports the idea that there are separate memory “systems” in the brain. One for reliving time and place and another for pulling in general knowledge and personal facts.
Could these findings help people with memory problems? Not yet, but working out how memory works in healthy people may eventually help us understand exactly what goes wrong in the brain when we get diseases like Alzheimer’s – and help us treat it. For instance, people with damage to the “episodic” network, such as those with early Alzheimer’s disease, may benefit from semantic memory strategies to compensate. A recent study found that cuing memories with physical objects led to better episodic memory in people with Alzheimer’s.
There’s plenty we still don’t know about the brain’s white matter. A number of properties can affect how information travels along it, such as the density of fibres. In the future, we can use new and powerful scanning techniques to uncover the parts of white matter that drive these fascinating effects.
Sexual orientation has been a controversial issue since nations in the West began to recognize lesbian, gay, bisexual and transgender (LGBT) rights.
An important part of the current LGBT debate is the belief that sexual orientation is predetermined by biology. Therefore, if a person has no choice over whether to be gay or not, society cannot demand that he or she become ‘straight’. This is a sound and reasonable argument. Indeed, society cannot force people to change something they have no control over.
On the other hand, the ‘born that way’ argument has been disputed by some people. A recent cross-disciplinary study published in the journal New Atlantis has challenged the belief that human sexuality and gender identity are determined by biology and remain fixed. The New Atlantis journal focuses on political, societal and ethical ramifications of technological advances.
The study, carried out by two researchers from Johns Hopkins University in Baltimore, revealed that there is no scientific proof of sexual orientation being fixed. The researchers said the objective of their study is to draw the attention of the public to mental health problems of the LGBT community. The study cautioned against drastic medical treatment for transgender children.
According to the study, regardless of its political worth, the “born this way” notion by the LGBT community is not backed up by sufficient scientific data. But the study did not conclude or state that being gay is a choice. It merely said stating the opposite may be wrong.
The study, a 144-page paper, was written by Dr. Lawrence S. Mayer, an epidemiologist and biostatistician also trained in psychiatry, who is currently a scholar in residence at the Department of Psychiatry at Johns Hopkins School of Medicine. Dr. Paul R. McHugh, who also co-wrote the paper is a renowned psychiatrist, researcher, and educator and former chief of psychiatry at Johns Hopkins Hospital.
Dr. Mayer said many people who contributed to the study asked not to be identified. The anonymity they requested is to protect them from a potential backlash from those who would disagree with the study. He admitted the study may stir controversy among both pro and anti-LGBT people.
“Some feared an angry response from the more militant elements of the LGBT community; others feared an angry response from the more strident elements of religiously conservative communities. Most bothersome, however, is that some feared reprisals from their own universities for engaging such controversial topics, regardless of the report’s content—a sad statement about academic freedom,” Dr. Mayer said.
The paper’s three sections focus on sexual orientation, links between sexuality and mental health, and gender identity. Drawing on studies in fields varying from neurobiology to social sciences, the researchers wrote: “The understanding of sexual orientation as an innate, biologically fixed property of human beings – the idea that people are ‘born that way’ – is not supported by scientific evidence.”
The study stated that even the term ‘sexual orientation’ itself is ambiguous. The term, according to the study, is used to describe attraction, behavior or identity by different researchers. Sometimes, the same term refers to things such as belonging to a certain community or having certain fantasies.
The study said: “It is important, then, that researchers are clear about which of these domains are being studied, and that we keep in mind the researchers’ specified definitions when we interpret their findings.”
The researchers acknowledged in the study that there are biological factors associated with sexual behavior, but pointed out that there are no compelling causal biological explanations for human sexual orientation. For example, some studies have shown that there are differences in the brain structures of gay and straight people. But the study said the differences are not necessarily innate, and may be the result of environmental or psychological factors.
“The strongest statement that science offers to explain sexual orientation is that some biological factors appear, to an unknown extent, to predispose some individuals to a non-heterosexual orientation,” the study said.
Explaining further, the researchers revealed that LGBT individuals are statistically at greater risk of having mental health problems than the general population. The researchers said that in the United States, for example, the rate of lifetime suicide attempts across all ages of transgender individuals is estimated at 41%, compared to under 5% in the overall population of the country. The usually accepted explanation for this is social stress from discrimination and stigma, but the study said that those factors may not solely explain the disparity and that more scientific research on the issue is necessary.
The paper added that the notion that gender identity is fixed and determined by biological factors is also not backed up by data. More scientific data is also needed to back this claim, according to the researchers.
“In reviewing the scientific literature, we find that almost nothing is well understood when we seek biological explanations for what causes some individuals to state that their gender does not match their biological sex,” the authors said.
Concluding the study, the researchers warned against resorting to drastic medical treatment such as sex-reassignment surgery for people identified or identifying as transgender. The researchers said their warning applies especially to children, whose sexuality is mutable and for whom such treatments may do more harm than good.
“There is little scientific evidence for the therapeutic value of interventions that delay puberty or modify the secondary sex characteristics of adolescents, although some children may have improved psychological well-being if they are encouraged and supported in their cross-gender identification. There is no evidence that all children who express gender-atypical thoughts or behavior should be encouraged to become transgender,” the researchers said.
The researchers finally noted that their study touches upon controversial issues, insisting that first and foremost, it is about science and the need for additional evidence in the field. They encouraged western societies to assess the study from a scientific point of view, rather than through emotions and personal beliefs.
Human consciousness is perhaps one of the most complicated puzzles that scientists have been struggling to put together for ages. Despite incredible scientific advances, we have yet to get a grasp on it. But believe it or not, scientists may have pinpointed the physical origins of human consciousness.
Three brain regions are emerging as crucial to consciousness. A team of researchers at the Beth Israel Deaconess Medical Centre at Harvard Medical School has been working hard to pin them down.
Michael Fox, a lead researcher, said, “For the first time, we have found a connection between the brainstem region involved in arousal and regions involved in awareness, two prerequisites for consciousness.” He went on to say, “A lot of pieces of evidence all came together to point to this network playing a role in human consciousness.”
Science says that consciousness is made up of arousal and awareness. It has already been shown that arousal is normally regulated by the brainstem, the portion of the brain connected to the spinal cord; it governs our sleep-wake cycle along with our breathing and heart rate. Awareness hasn’t been as easy to pin down.
For quite a while, scientists thought that it might lie somewhere within the outer layer of the brain known as the cortex. But much to their surprise, two cortex regions appear to work as a team to produce human consciousness.
But how did they figure this out?
Well, 36 patients in a hospital with brain lesions were studied. 12 of them were unconscious or in a coma and 24 of them were conscious. They were analyzed to figure out why some patients had stayed conscious while others were unconscious, though they had similar injuries.
The rostral dorsolateral pontine tegmentum, a small area of the brainstem, was found to be associated with unconsciousness. Ten out of 12 unconscious individuals had damage in this area of the brain, whereas only one of the 24 conscious patients did. This suggests that this portion of the brain is important for consciousness.
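A 10-of-12 versus 1-of-24 split looks stark, and a quick back-of-the-envelope check confirms it is very unlikely to be coincidence. The sketch below computes a one-sided Fisher’s exact test from first principles using the hypergeometric distribution; the published study’s own statistical analysis may differ, so treat this only as an illustration of why the association is convincing.

```python
from math import comb

def fisher_one_sided(damaged_unconscious, unconscious, damaged_total, total):
    """One-sided Fisher's exact test: probability of seeing at least this
    many damaged patients among the unconscious group by chance alone,
    given the row and column totals of the 2x2 table."""
    other = total - damaged_total
    denom = comb(total, unconscious)
    upper = min(damaged_total, unconscious)
    p = 0.0
    for k in range(damaged_unconscious, upper + 1):
        p += comb(damaged_total, k) * comb(other, unconscious - k) / denom
    return p

# 10 of 12 unconscious patients had damage; 1 of 24 conscious patients did,
# so 11 of the 36 patients had damage in total.
p = fisher_one_sided(damaged_unconscious=10, unconscious=12,
                     damaged_total=11, total=36)
print(f"{p:.2e}")  # far below the conventional 0.05 threshold
```

In other words, if damage to this region were unrelated to consciousness, a split this lopsided would essentially never occur by chance in a sample of 36 patients.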
Researchers then looked at the connectome, also known as a brain map, to see all the various connections in our brains. Two specific areas were connected to the rostral dorsolateral pontine tegmentum. One of them was located in the ventral anterior insula and the other in the pregenual anterior cingulate cortex. In previous studies, both these areas have been known to play some part in arousal and awareness, but never before had they been connected to the brainstem.
More studies were conducted, all with the same conclusions:
“This is the most relevant if we can use these networks as a target for brain stimulation for people with disorders of consciousness,” said Michael Fox. This study could eventually lead to new treatments for individuals who are in comas or who otherwise cannot regain consciousness.
“If we zero in on the regions and network involved, can we someday wake someone up who is in a persistent vegetative state? That’s the ultimate question.”
This research could lead to a whole new world of possibilities in medical science.
Who knows? Maybe someday we’ll be able to cure someone who’s been in a coma for years. But for now, this is the beginning of exciting new medical developments in science.
Bad things happen to everyone. But how we react to the bad things in life reveals a lot about our brains. It might be obvious, but people who are happier are better able to regulate their emotions when dealing with unpleasant events.
How? There are a few theories.
One is that happier people are able to focus on positive things and filter out the negative. Another reason is that happier people could be better at savouring good moments and emotions to help them deal with negative events.
But why does this matter? Because this has implications for your perspective on life. Is it better to ignore the negatives, or strengthen your ability to focus on the good while acknowledging the bad?
Activity in the amygdala
The answer may lie in the amygdala—the primitive “fear centre” of the brain, which is always on the lookout for potential threats. In some people, increased amygdala activity has been linked to depression and anxiety.
That’s what psychologists William Cunningham at the University of Toronto and Alexander Todorov of Princeton University are exploring with their colleagues.
However, it’s not just the “fear centre” that they’re interested in. Their work paints a whole new picture of the amygdala, one they believe holds the key to human connection, compassion, and happiness. According to their research, the happiest people don’t ignore threats. They just might be better at seeing the good.
Happy people take the good with the bad
Cunningham and Kirkland recorded the amygdala activity of 42 participants as they viewed a series of positive, negative, and neutral pictures. Participants also filled out surveys to determine their subjective happiness levels.
The researchers found that, compared with less happy people, happier people showed greater amygdala activation in response to positive photographs. But the happier group did not show a decreased response to negative images, as the “rose-coloured glasses” view of happiness would predict.
According to the paper, this suggests that “happier people are not necessarily naïve or blind to negativity, but rather may respond adaptively to the world, recognizing both good and bad things in life.”
This is interesting because it suggests that being able to sense and respond to negative information may actually be an important component of happiness. The authors’ conclusion from this study: “Happy people are joyful, yet balanced.”