Parenthood. One of the biggest changes in life one can go through.
It’s no longer just yourself. You now have to protect and care for an innocent and defenseless little human being.
Dealing with this change is tough enough, but have you ever wondered what biochemical changes happen in the brain of the mother who carried this baby?
We all know giving birth takes a huge toll on the body, but we often don’t talk about the effects it has on the brain.
So today, we’re going to go over what science has found happens to a mother’s brain once she becomes pregnant.
What happens to the brain after pregnancy
A groundbreaking study recently found that being pregnant creates long-lasting effects in a mother’s brain, with MRI scans showing changes in grey matter volume that may actually help Moms look after their new babies.
What are these changes?
According to the researchers, gray matter concentrates in regions associated with social cognition and theory of mind – regions that are activated when women look at photos of their infants.
Here’s the definition of ‘theory of mind’:
“The ability to attribute mental states—beliefs, intents, desires, pretending, knowledge, etc.—to oneself and others and to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own.”
Also, activity increases in regions that control empathy, anxiety, and social interaction. These changes were still present two years after birth.
We all know that during pregnancy, there’s an enormous increase in hormones such as progesterone and estrogen to prepare a woman’s body to carry a child.
We produce similar amounts of these hormones during puberty, which is known to cause dramatic organizational changes in the brain. Boys and girls lose gray matter as the brain is pruned to become more efficient.
While it is not entirely clear why women’s gray matter concentrates during pregnancy, the lead researcher of the study, Hoekzema, thinks it may be because their brains are becoming better prepared to adapt to motherhood and respond to their babies.
Mel Rutherford, an evolutionary psychologist, summarizes it best:
“As a parent, you’re now going to be solving slightly different adaptive problems, slightly different cognitive problems than you did before you had children…You have different priorities, you have different tasks you’re going to be doing, and so your brain changes.”
But ask any Mother:
Some of the biggest changes that occur after giving birth are the intimate ones – the emotional changes. Feelings of empathy and love so deep they can’t be put into words. But, as it turns out, these too are largely neurological.
The researchers say that this more concentrated gray matter and increased activity in regions that control empathy, anxiety and social interaction, together with the flood of hormones from pregnancy, help draw a new mother to her baby.
In other words, the incredibly strong maternal feelings of love, fierce protectiveness and constant worry begin with neurological changes in the brain.
If you’re like most people, you probably drink alcohol regularly. According to statistics, there were 136.8 million drinkers in the US in 2013.
But have you wondered what alcohol is doing to your body and brain?
Sure, if you can stick to one drink a day, the effects aren’t that huge. But many people find it hard to stop once they’ve had a couple of drinks, and the cumulative effects can add up.
If you’re searching for a way to stop, perhaps listening to Dr. Samuel Ball explaining why alcohol is one of the most dangerous substances on the planet will motivate you. Check it out:
As soon as you consume alcohol, around 33% of it gets absorbed into the blood through the stomach lining. The remaining alcohol is absorbed more slowly into the blood through the small intestine.
Once it’s in the bloodstream, alcohol spreads into almost every biological tissue in the body, as cell membranes are highly permeable.
The recommended maximum intake of alcohol is 2 drinks per day for men and 1 drink per day for women. Consuming more than this can be problematic.
Five or more drinks per day for men (4 for women) is considered binge drinking.
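To make these guideline numbers concrete, here is a rough sketch using the classic Widmark formula for estimating blood alcohol concentration. The formula, the 14 g standard-drink size and the 0.015 %/hour elimination rate are textbook approximations rather than figures from this article, and real absorption varies widely between people.

```python
def widmark_bac(standard_drinks, weight_kg, hours_since_first_drink, sex="m"):
    """Rough blood-alcohol estimate (g/dL, expressed as %) via the Widmark formula."""
    alcohol_g = standard_drinks * 14.0       # one US standard drink is about 14 g of ethanol
    r = 0.68 if sex == "m" else 0.55         # Widmark body-water distribution ratio
    peak = alcohol_g / (weight_kg * 1000 * r) * 100
    metabolised = 0.015 * hours_since_first_drink  # average elimination of ~0.015 %/hour
    return max(peak - metabolised, 0.0)

# Five drinks over two hours for an 80 kg man – the binge-drinking threshold above:
print(round(widmark_bac(5, 80, 2), 3))       # roughly 0.099, over the US 0.08% limit
```

Even this crude estimate shows why the binge threshold matters: a single binge episode is enough to put an average-sized adult past the common legal driving limit.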
What alcohol is doing to your brain
The effects of alcohol on the brain are quite complex.
Just about any substance that we get pleasure from tends to affect the brain processes that are involved in reward and control.
And over time with addiction, the brain gets used to having alcohol and the dopamine receptors become less sensitive. Stopping results in problems because your brain isn’t ready for it. That’s what happens with withdrawal.
But it’s not just the reward system in the brain that alcohol affects. Alcohol affects many other areas of the brain, such as memory and motor coordination.
What alcohol is doing to your body
According to Dr. Samuel Ball, once someone becomes alcohol dependent, alcohol can be one of the most destructive drugs to various parts of the body and different organ systems.
From the top down, alcohol can cause significant cognitive impairment, memory loss and lack of motor coordination.
It can also cause esophageal problems, stomach damage and pancreatitis, as well as liver disease, because the liver helps clear toxins from the body.
When the liver starts shutting down, it can no longer process alcohol and get rid of its by-products, which can cause other organs to lose function.
Worst-case scenarios include alcohol-related dementia or delirium – serious problems that can lead to people going into nursing homes.
Have you ever been searching for your keys only to find they’ve been in your hand the whole time? Or forgotten where you put your glasses, even though you just put them down ten minutes earlier?
Instead of fearing you may be suffering from early signs of dementia, a new study shows this might be a great sign.
It means your brain is waiting for more useful information and is able to sort the junk from the prized possessions.
Although this idea has been questioned and dismissed before, the new study from the University of Toronto points out a few very important details.
1. The part of the brain associated with memory – the hippocampus – is just making room for more important details
It was found that the more new neuron growth that took place in this region, the more forgetful we become.
2. The point of memory is to be able to give an intelligent person the ability to make decisions under certain circumstances
Imagine you’re in danger and your brain is bogged down with useless junk. If only you could remember that escape tactic or self-defense move. Our memories are meant to serve us in times of danger, or in circumstances that we find questionable, and to guide us to the best possible decision.
3. Our brains start to generalize information rather than remember details
With memory overload our brains start to generalize information over time, making a unique experience dull and taking away the ability to focus in on specific details. Your morning walk to the train becomes the same each day, even though it’s not.
Michael Anderson of the University of Oregon states that “the process of forgetting serves a good, functional purpose”.
So next time you’re being hard on yourself for losing your glasses, take a breath – you could be making room for more important information coming your way.
Nova Johnstone is a ballet dancer, choreographer, freelance writer and mentor to many young students. Based in Ireland, she is the owner of Destination Dance Ireland, where she helps pre-professional students take steps towards a career.
However, current trends in software-only artificial intelligence and deep learning technology raise serious doubts about the plausibility of this claim, especially in the long term. This doubt is not only due to hardware limitations; it also has to do with the role the human brain would play in the match-up.
Musk’s thesis is straightforward: that sufficiently advanced interfaces between brain and computer will enable humans to massively augment their capabilities by being better able to leverage technologies such as machine learning and deep learning.
But the exchange goes both ways. Brain-machine interfaces may help the performance of machine learning algorithms by having humans “fill in the gaps” for tasks that the algorithms are currently bad at, like making nuanced contextual decisions.
The idea in itself is not new. J. C. R. Licklider and others speculated on the possibility and implications of “man-computer symbiosis” in the mid-20th century.
However, progress has been slow. One reason is the difficulty of developing hardware. “There is a reason they call it hardware – it is hard,” said Tony Fadell, creator of the iPod. And creating hardware that interfaces with organic systems is even harder.
Assuming that the hardware challenge is eventually solved, there are bigger problems at hand. The past decade of incredible advances in deep learning research has revealed that there are some fundamental challenges to be overcome.
The first is simply that we still struggle to understand and characterise exactly how these complex neural network systems function.
We trust simple technology like a calculator because we know it will always do precisely what we want it to do. Errors are almost always a result of mistaken entry by the fallible human.
One vision of brain-machine augmentation would be to make us superhuman at arithmetic. So instead of pulling out a calculator or smartphone, we could think of the calculation and receive the answer instantaneously from the “assistive” machine.
Where things get tricky is if we were to try and plug into the more advanced functions offered by machine learning techniques such as deep learning.
Let’s say you work in a security role at an airport and have a brain-machine augmentation that automatically scans the thousands of faces you see each day and alerts you to possible security risks.
Most machine learning systems suffer from an infamous problem whereby a tiny change in the appearance of a person or object can cause the system to catastrophically misclassify what it thinks it is looking at. Change a picture of a person by less than 1%, and the machine system might suddenly think it is looking at a bicycle.
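This failure mode can be sketched with a toy model. The linear classifier, the “person”/“bicycle” labels and the hand-set decision margin below are all invented for illustration; real systems are deep networks, and the fast-gradient-sign nudge used here is only the simplest form of such an attack.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)        # weights of a toy 1000-"pixel" linear classifier
x = rng.uniform(0, 1, 1000)          # an input image with pixel values in [0, 1]
b = 1.0 - float(w @ x)               # bias set by hand so the clean score is exactly +1.0

label = lambda v: "person" if float(w @ v) + b > 0 else "bicycle"

# Nudge every pixel by just 1% of its range against the weight's sign
# (the fast-gradient-sign idea): a visually negligible change.
eps = 0.01
x_adv = x - eps * np.sign(w)

print(label(x), "->", label(x_adv))  # person -> bicycle
```

Because the nudge is aligned with every weight at once, thousands of imperceptible per-pixel changes add up to a large swing in the classifier’s score, which is exactly the vulnerability the paragraph above describes.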
Terrorists or criminals might exploit the different vulnerabilities of a machine to bypass security checks, a problem that already exists in online security. Humans, although limited in their own way, might not be vulnerable to such exploits.
Despite their reputation as being unemotional, machine learning technologies also suffer from bias in the same way that humans do, and can even exhibit racist behaviour if fed appropriate data. This unpredictability has major implications for how a human might plug into – and more importantly, trust – a machine.
Trust me, I’m a robot
Trust is also a two-way street. Human thought is a complex, highly dynamic activity. In this same security scenario, with a sufficiently advanced brain-machine interface, how will the machine know what human biases to ignore? After all, unconscious bias is a challenge everyone faces. What if the technology is helping you interview job candidates?
We can preview to some extent the issues of trust in a brain-machine interface by looking at how defence forces around the world are trying to address human-machine trust in an increasingly mixed human-autonomous systems battlefield.
There is a parallel between a robot warrior making an ethical decision to ignore an unlawful order by a human and what must happen in a brain-machine interface: interpretation of the human’s thoughts by the machine, while filtering fleeting thoughts and deeper unconscious biases.
In defence scenarios, the logical role for a human brain is in checking that decisions are ethical. But how will this work when the human brain is plugged into a machine that can make inferences using data at a scale that no brain can comprehend?
In the long term, the issue is whether, and how, humans will need to be involved in processes that are increasingly determined by machines. Soon machines may make medical decisions no human team can possibly fathom. What role can and should the human brain play in this process?
In some cases, the combination of automation and human workers could increase jobs, but this effect is likely fleeting. Those same robots and automation systems will continue to improve, likely eventually removing the jobs they created locally.
Likewise, while humans may initially play a “useful” role in brain-machine systems, as the technology continues to improve there may be less reason to include humans in the loop at all.
The idea of maintaining humanity’s relevance by integrating human brains with artificial brains is appealing. What remains to be seen is what contribution the human brain will make, especially as technology development outpaces human brain development by a million to one.
Modern societies are usually defined as relatively unreligious, dominated by money and power rather than belief in gods. This idea marks them out as modern when compared to traditional societies as well as highlighting the many issues of modernity including capitalism, growth, overproduction and climate change.
But why are we so sure that secularisation and the dominance of politics and economics are in the DNA of modern societies? Our answer to this question defines and confines our problem-solving ability.
A recent article in the journal Futures shows that most strategic management tools and models of the future have a strong bias towards politics, economics and science, thus systematically neglecting religion, law, art, or education.
Given this bias is unconscious and unjustified, we risk constantly looking for solutions to the wrong problems.
We undertook big data research on the digital database created by the Google Books project, which has scanned and digitised over 25 million of the estimated 130 million books ever published worldwide.
To systematically screen this huge collection of text, we used the Google Books Ngram Viewer, a free online graphing tool that charts annual word counts as found in the Google Books project. The Ngram Viewer comes with an intuitive interface where users can enter keywords, choose the sample period, define the desired language area, and modulate the shape of the graphical output.
One of our challenges was to find the right keywords. For this, we used an open source tool by Jan Berkel.
The result was a list of the 10,000 most frequently used words and strings in books published between 1800 and 2000. This period covers a considerable proportion of the era commonly referred to as modernity and is regarded as reliable data by Google.
We repeated the procedure until we had compiled one list each for English, Spanish, Russian, French, German and Italian. We then screened the word-frequency lists for terms that make unambiguous and distinct keywords. “Money” or “God” are good examples of such keywords, whereas we omitted terms such as “tax” or “constitution” as they refer to both politics and the economy, or to law.
Finally, we entered stacks of the five most frequent religious, political, economic, and other pertinent keywords to run comparative analyses of word frequency time-series plots as displayed by the Google Ngram Viewer.
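In miniature, this pipeline amounts to counting how often a “stack” of keywords appears relative to all words published in a given year. The tiny corpus and the two keyword stacks below are invented stand-ins for Google Books and the study’s five-keyword lists, not the authors’ actual data.

```python
from collections import Counter

# Toy stand-in for the Google Books corpus: year -> list of book texts.
corpus = {
    1900: ["the church and god guide the state", "god save the king"],
    1950: ["the state economy grows", "market and money drive the economy"],
}

# Hypothetical keyword "stacks", mimicking the study's per-domain lists.
stacks = {"religion": {"god", "church"}, "economy": {"economy", "market", "money"}}

def stack_frequency(year, stack):
    """Share of all tokens in a year's books that belong to one keyword stack,
    roughly what the Ngram Viewer plots for a summed (stacked) query."""
    tokens = [t for text in corpus[year] for t in text.split()]
    counts = Counter(tokens)
    return sum(counts[k] for k in stacks[stack]) / len(tokens)

print(stack_frequency(1900, "religion"), stack_frequency(1950, "economy"))
```

Plotting such stack frequencies year by year, per language, yields the kind of time-series curves discussed in the results below.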
The figure below shows word frequencies of combined religious, political, economic, scientific, and mass media-related keywords in the English-language Google Books corpus between 1800 and 2000.
Since we analysed a considerable proportion of humanity’s collective memory between 1800 and 2000, and since the outcomes of our research resemble classical electroencephalography (EEG) recordings (see figure), we also linked our research into the global brain discourse.
The basic idea here is that the worldwide network of information and communication technology acts as the global brain of planet earth. In this sense, our electroencephalographic big data internet research is the first example of a global brain wave measurement, which was heralded by Peter Russell in his 1982 book The Global Brain.
Secularisation, much politics, and no capitalism
Looking at the global brain wave recordings (figure below), we find that our method performs well in capturing the expected decline of religion (orange line), which, by the way, is less significant in Spanish and Italian.
The chart for English language shows notable interactions between the two world wars and the significance of politics (blue line), and we find similar interactions in other areas, too.
In the Russian segment, the importance of politics is considerably increased during the (post-) October Revolution period and even more dramatically so in the context of the second world war.
In the French case, the years around the first world war see a steep rise in the importance of politics, whereas the interaction during the second world war is much more moderate. The German data follow a similar pattern as the French, but exhibit the dramatic rise of politics in the post-second world war era, peaking at around 1970.
The rise of the importance of the mass media corresponds to the timeline of the information age (green line). What we do not see, however, is the dominant position of the economy (purple line) in modern societies. There is a short period between 1910 and 1950 where the economy was second to a much stronger politics.
This image of a war economy is the closest approximation to a capitalist situation to be found in the English segment, in which the economy is outperformed by science (red line) soon after the second world war and by mass media in the 1990s, ranking fourth at the end of the sample period.
There’s no sign of the golden age of capitalism in the 19th century either, as narratives of the Industrial Revolution would lead us to believe.
The charts for the other languages also do not support the idea of modern societies as being capitalist or otherwise dominated by the economy. The only exception is the French segment, where the economy has again been second to a much stronger politics since the end of the second world war.
The economy ranks third in the Russian segment only from the late 1950s to the 1990s, and in the German segment not before the 1970s, and is well below par in the Spanish and Italian segments.
New god of modern societies
Our big data research hence suggests that modern societies are heavily politicised; that most investigated societies are clearly secularised, with reservations applying to the societies where Spanish and Italian are spoken; and that science plays a remarkable role, ranking second in the English-, Russian-, and German-language areas in the second half of the 20th century.
None of the investigated societies is dominated by the economy, with the minor reservation discussed above applying only to French. Even this finding reflects only the idea of capitalism as an economy-based political ideology, not the idea of the primacy of the economy.
The major finding of our research is that political power rather than economics has dethroned religious faith as the dominant guiding principle between 1800 and 2000.
Our data strongly suggests that, despite all contradicting ideologies or habits of mind, the economy is of only moderate importance to modern societies. This implies that, in the future, we may wish to think twice before we continue to label our societies as money-driven, economy-biased, or simply capitalist.
One major shortcoming of our research is that it focused only on books. But this focus is adequate as the ideas of capitalism and the primacy of economy have been developed precisely in the books we investigated.
The idea that the definitions of modern societies as capitalist or economy-dominated are probably rooted in misconceptions rather than modern scientific worldviews may be counter-intuitive or even shocking for both capitalists and anti-capitalists.
Acknowledgement: Our research was first presented at 2016 City University of Hong Kong Workshop on Computational Approaches to Big Data in the Social Sciences and Humanities. I am grateful to Jonathan Zhu and the entire team of the Web Mining Lab at the CityU Department of Media and Communication for the invitation to Hong Kong as well as for valuable feedback.
Reliving and sharing our personal past is part of what makes us human. It creates a sense of who we are, allows us to plan for the future and helps us form relationships. But we don’t all remember our past in the same way. In fact, the nature and quality of memory differs considerably between people.
For instance, when asked to remember something about a party, one person might describe vividly their sixth birthday: how the gifts were laid out, the sweet, chocolatey taste of the hedgehog cake and going to bed really late. Another person might not recall this precise detail, but remember that their aunt despised parties and that hedgehog cakes were massive in the 80s.
So, our personal memories contain different types of information. Some of this is very specific about when and where things happened – and what it felt like. This collection of personal experiences is known as “episodic memory”. Other bits are general facts about the world, ourselves and the people we know. This is called “semantic memory”. A big question in neuroscience is whether these two memory types involve distinct parts of the brain.
Individuals who have suffered damage to a region called the hippocampus (involved in memory, learning and emotion) have been found to remember facts about their lives but lack the high-resolution, episodic detail. On the other hand, patients with a rare form of dementia, known as semantic dementia, can remember episodic information, but not the facts that glue it all together. Intriguingly, these individuals show early degeneration of another part of the brain called the anterior temporal lobe (thought to be critical for semantic memory).
Networks versus areas
But can we see a similar distinction in the healthy brain? As reflecting on our past is highly complex, it seems likely that different brain regions must work together to achieve it. And studies using functional MRI have shown that personal memories activate large networks in the brain.
So it appears that memory cannot be boiled down to one or two particular brain areas. We have to think more widely than that. The brain itself is made up of both grey and white tissue. The white part, known as “white matter”, contains fibres that allow information to travel between different areas of the brain. So could these connections themselves predict how we remember?
In our latest study, published in the journal Cortex, we explored this question by using a brain scanning technique known as diffusion MRI. This method uses the movement of water molecules to map out the brain’s white matter pathways.
We asked 27 college-aged volunteers to lie still in the scanner while we collected images of their brains. Using these images, we could identify specific pathways and pull out measures of their structure – indicating how efficiently information can travel between connected regions.
Outside the scanner, each volunteer was asked to describe memories from their past in response to cue words, such as “party” or “holiday”. By going through and painstakingly coding each memory, we could work out how “episodic” and “semantic” each person’s memory was. For instance, precise spatial statements would count toward the episodic score (“The Eiffel Tower was directly behind us”), and facts would count toward the semantic score (“Paris is my sister’s favourite city”).
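As a toy illustration of that coding scheme, the sketch below tallies episodic-sounding cues (specific time, place and perception) against semantic-sounding ones (general facts). The study’s coding was done painstakingly by hand; these keyword lists are invented and far cruder than the real protocol.

```python
# Invented cue lists: phrases tied to a specific scene count as episodic,
# phrases stating general facts count as semantic.
EPISODIC_CUES = {"behind", "directly", "saw", "tasted", "that morning", "next to"}
SEMANTIC_CUES = {"always", "favourite", "is a", "every", "usually"}

def code_statement(statement):
    """Label a single memory statement as 'episodic' or 'semantic'."""
    s = statement.lower()
    episodic = sum(cue in s for cue in EPISODIC_CUES)
    semantic = sum(cue in s for cue in SEMANTIC_CUES)
    return "episodic" if episodic > semantic else "semantic"

print(code_statement("The Eiffel Tower was directly behind us"))   # episodic
print(code_statement("Paris is my sister's favourite city"))       # semantic
```

Summing such labels over every statement in a transcript gives one episodic score and one semantic score per person, which is the shape of data the analysis below relies on.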
We found that the amount of rich, episodic detail that volunteers remembered was related to the connectivity of an arch-shaped white matter pathway called the fornix, which links to the hippocampus. So, the more efficiently the fornix can relay information from the hippocampus to surrounding regions, the more episodic someone’s memory is.
A different white matter pathway – catchily named the inferior longitudinal fasciculus – strongly predicted how semantic people’s memories were. Interestingly, this long bundle of white matter is the major route from visual parts of the brain to the anterior temporal lobe – the same region that is affected in cases of semantic dementia.
Wired for memory
These findings suggest that differences in how we each remember our past are reflected in how our brains are wired. Historically, neuroscience has tended to see brain regions as singletons, working alone. These results suggest the alternative: that links between regions – and the networks they form – are critical for how we think and behave.
Our finding also supports the idea that there are separate memory “systems” in the brain. One for reliving time and place and another for pulling in general knowledge and personal facts.
Could these findings help people with memory problems? Not yet, but working out how memory works in healthy people may eventually help us understand exactly what goes wrong in the brain when we get diseases like Alzheimer’s – and help us treat it. For instance, people with damage to the “episodic” network, such as those with early Alzheimer’s disease, may benefit from semantic memory strategies to compensate. A recent study found that cuing memories with physical objects led to better episodic memory in people with Alzheimer’s.
There’s plenty we still don’t know about the brain’s white matter. A number of properties can affect how information travels along it, such as the density of fibres. In the future, we can use new and powerful scanning techniques to uncover the parts of white matter that drive these fascinating effects.