Does the brain stop processing data when the eyes move?


I was reading this question and remember back to the 70s when our local newspaper changed formats. In an article about why they made the changes that they did, one of the points they mentioned was that by making the columns narrower, you would retain more because the brain didn't process the words when the eyes were moving.

Over the past few years working in instructional design, I've often wanted to cite a reference for my newspaper's statement, or at the very least a reference that debunks it. I've been unsuccessful finding anything on google, but I'm not sure if I'm asking the right questions or not. It may very well be that this was a theory in the 70's, since disproven, junk science, or something very real for which I've not stumbled upon the right search terms.

In any case, it seems to me that I'm the only person in the world who's ever heard this idea, and I'd like to find some reference, positive or negative, to it. Can anyone direct me to some research on the topic?


I'm sorry you happened upon that claim from the 70s. If it were true, people with nystagmus (constant involuntary eye movement) would never learn anything, so in a general sense it must be false.

Here are some current thoughts and research on the subject:

Eye movement-related brain activity during perceptual and cognitive processing

For several decades researchers have been recording electrical brain activity associated with eye movements in an attempt to understand their neural mechanisms. However, recent advances in eye-tracking technology have allowed researchers to use eye movements as the means of segmenting the ongoing brain activity into episodes relevant to cognitive processes in scene perception, reading, and visual search. This opened doors to uncovering the active and dynamic neural mechanisms underlying perception, attention and memory in naturalistic conditions. The present eBook contains a representative collection of studies from various fields of visual neuroscience that use this cutting-edge approach of combining eye movements and neural activity.

The majority of the articles in the eBook combine the measurement of eye movements with the recording of the electroencephalogram (EEG) in human subjects performing various psychological tasks. The most common methodological approach is examination of the EEG activity time-aligned to certain eye movement events, such as the onset of a fixation or the start of a saccadic eye movement (Fischer et al., 2013; Frey et al., 2013; Henderson et al., 2013; Hutzler et al., 2013; Nikolaev et al., 2013; Richards, 2013; Simola et al., 2013). Several works employ time-frequency and synchrony analyses (Fischer et al., 2013; Hoffman et al., 2013; Ito et al., 2013; Nakatani and Van Leeuwen, 2013; Nakatani et al., 2013).


On TV and in movies, we’ve all seen doctors stick an X-ray up on the lightbox and play out a dramatic scene: “What’s that dark spot, doctor?” “Hm…”

In reality, though, a modern medical scan contains so much data that no single pair of doctor's eyes could possibly interpret it. The brain scan known as fMRI, for functional magnetic resonance imaging, produces a massive data set that can only be understood by custom data analysis software. Armed with this analysis, neuroscientists have used the fMRI scan to produce a series of paradigm-shifting discoveries about our brains.

Now, an unsettling new report, which is causing waves in the neuroscience community, suggests that fMRI’s custom software can be deeply flawed — calling into question many of the most exciting findings in recent neuroscience.

The problem researchers have uncovered is simple: the computer programs designed to sift through the images produced by fMRI scans have a tendency to suggest differences in brain activity where none exist. For instance, scans of humans who are resting, not thinking about anything in particular, not doing anything interesting, can deliver spurious differences in brain activity. The method has even been shown to indicate brain activity in a dead salmon, whose stilled brain lit up an MRI as if it were somehow still dreaming of a spawning run.

The report throws into question the results of some portion of the more than 40,000 studies that have been conducted using fMRI, studies that plumb the brainy depths of everything from free will to fear. And scientists are not quite sure how to recover.

“It’s impossible to know how many fMRI studies are wrong, since we do not have access to the original data,” says computer scientist Anders Eklund of Linköping University in Sweden, who conducted the analysis.

How it should have worked: Start by signing up subjects. Scan their brains while they rest inside an MRI machine. Then scan their brains again when exposed to pictures of spiders, say. Those subjects who are afraid of spiders will have blood rush to those regions of the brain involved in thinking and feeling fear, because such thoughts or feelings are suspected to require more oxygen. With the help of a computer program, the MRI machine then registers differences in hemoglobin, the iron-rich molecule that makes blood red and carries oxygen from place to place. (That’s the functional in fMRI.) The scan then looks at whether those hemoglobin molecules are still carrying oxygen to a given place in the brain, or not, based on how the molecules respond to the powerful magnetic fields. Scan enough brains and see how the fearful differ from the fearless, and perhaps you can identify the brain regions or structures associated with thinking or feeling fear.
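The group-comparison logic described above can be sketched as a toy per-voxel test. This is a hypothetical illustration with invented numbers, not the actual pipeline of any fMRI package (real analyses add motion correction, spatial smoothing, and multiple-comparison control):

```python
import math
import random

random.seed(0)

# Hypothetical BOLD signal change (%) at one voxel for two groups of subjects.
fearful  = [random.gauss(0.30, 0.10) for _ in range(20)]  # spider-phobic group
fearless = [random.gauss(0.10, 0.10) for _ in range(20)]  # control group

def two_sample_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

t = two_sample_t(fearful, fearless)
print(f"t = {t:.2f}")  # a large |t| suggests a real group difference at this voxel
```

A real scan repeats a comparison like this for hundreds of thousands of voxels, which is exactly where the multiple-comparison problem discussed below comes from.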

That’s the theory, anyway. In order to detect such differences in brain activity, it would be best to scan a large number of brains, but the difficulty and expense often make this impossible. A single MRI scan can cost around $2,600, according to a 2014 NerdWallet analysis. Further, the differences in the blood flow are often tiny. And then there’s the fact that computer programs have to sift through the images of the 1,200 or so cubic centimeters of gelatinous tissue that make up each individual brain and compare them to others, a big data analysis challenge.

Eklund’s report shows that the assumptions behind the main computer programs used to sift such big fMRI data have flaws, as turned up by nearly 3 million random evaluations of the resting brain scans of 499 volunteers from Cambridge, Massachusetts; Beijing; and Oulu, Finland. One program turned out to have a 15-year-old coding error (which has now been fixed) that caused it to detect too much brain activity. This highlights the challenge of researchers working with computer code that they are not capable of checking themselves, a challenge not confined just to neuroscience.

The brain is even more complicated than we thought. Worse, Eklund and his colleagues found that all the programs assume that brains at rest have the same response to the jet-engine roar of the MRI machine itself as well as whatever random thoughts and feelings occur in the brain. Those assumptions appear to be wrong. The brain at rest is “actually a bit more complex,” Eklund says.

More specifically, the white matter of the brain appears to be underrepresented in fMRI analyses while another specific part of the brain — the posterior cingulate, a region in the middle of the brain that connects to many other parts — shows up as a “hot spot” of activity. As a result, the programs are more likely to single it out as showing extra activity even when there is no difference. “The reason for this is still unknown,” Eklund says.

Overall, the programs had a false positive rate — detecting a difference where none actually existed — of as much as 70 percent.
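The 70 percent figure is a familywise error rate: even with a nominal 5 percent false-positive threshold per test, the chance of at least one spurious detection grows rapidly with the number of independent comparisons. A back-of-the-envelope calculation (idealized, assuming fully independent tests) shows how quickly:

```python
# Familywise error rate: with per-test false-positive probability alpha,
# the chance of at least one false positive across k independent tests
# is 1 - (1 - alpha)^k.
ALPHA = 0.05

def familywise_rate(k, alpha=ALPHA):
    return 1 - (1 - alpha) ** k

for k in (1, 10, 100, 1000):
    print(f"{k:5d} tests -> P(at least one false positive) = {familywise_rate(k):.3f}")
```

With 100 independent tests the familywise rate already exceeds 99 percent, which is why fMRI software must correct for the huge number of voxel comparisons, and why flawed corrections matter so much.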

Unknown unknowns: This does not mean all fMRI studies are wrong. Co-author and statistician Thomas Nichols of the University of Warwick calculates that some 3,500 studies may be affected by such false positives, and such false positives can never be eliminated entirely. But a survey of 241 recent fMRI papers found 96 that could have even worse false-positive rates than those found in this analysis.

“The paper makes an important criticism,” says Nancy Kanwisher, a neuroscientist at MIT (TED Talk: A neural portrait of the human mind), though she points out that it does not undermine those fMRI studies that do not rely on these computer programs.

Nonetheless, it is worrying. “I think the fallout has yet to be fully evaluated. It appears to apply to quite a few studies, certainly the studies done in a generic way that is the bread-and-butter of fMRI,” says Douglas Greve, a neuroimaging specialist at Massachusetts General Hospital. What’s needed is more scrutiny, Greve suggests.

Another argument for open data. Eklund and his colleagues were only able to discover this methodological flaw thanks to the open sharing of group brain scan data by the 1,000 Functional Connectomes Project. Unfortunately, such sharing of brain scan data is more the exception than the norm, which hinders other researchers attempting to re-create the experiment and replicate the results. Such replication is a cornerstone of the scientific method, ensuring that findings are robust. Eklund, for one, therefore encourages neuroimagers to “share their fMRI data, so that other researchers can replicate their findings and re-analyze the data several years later.” Only then can scientists be sure that the undiscovered activity of the human brain is truly revealed … and that dead salmon are not still dreaming.

About the author

David Biello is an award-winning journalist writing most often about the environment and energy. His book "The Unnatural World" publishes November 2016. It's about whether the planet has entered a new geologic age as a result of people's impacts and, if so, what we should do about this Anthropocene. He also hosts documentaries, such as "Beyond the Light Switch" and the forthcoming "The Ethanol Effect" for PBS. He is the science curator for TED.


Can adults get CVI?

Adults can also develop problems with their vision after a traumatic brain injury (such as a head injury or stroke that damages the brain). Veterans may be at higher risk for visual problems as a result of combat injuries.

These problems are sometimes called acquired CVI, but they aren't the same as CVI. A brain injury that happens later in life usually has different symptoms than CVI, which is caused by an injury early in life.

If you or a loved one has vision problems because of a brain injury, ask the doctor about vision rehabilitation and other support services. Vision rehabilitation can help people with brain injuries make the most of their vision.


It's (Not) All in the Eyes: Eye Movements Don't Indicate Lying

The direction of eye movements may not indicate lying, says new research.

Is There a Price for Fibbing?

July 12, 2012 -- Your eyes may not say it all when it comes to lying, according to a new study.

Despite the common belief that shifty eyes -- moving up and to the right -- indicate deception, researchers found no connection between where the eyes move and whether a person is telling the truth.

In three separate experiments, they tested whether people who lied tended to move their eyes up and to the right, more than people who were not lying. They found no association between which direction the eyes moved and whether participants were telling the truth.

"This is in line with findings from a considerable amount of previous work showing that facial clues (including eye movements) are poor indicators of deception," wrote the authors, led by Richard Wiseman of the University of Hertfordshire in the United Kingdom. The study is published in the online journal PLoS ONE.

Howard Ehrlichman, a professor emeritus of psychology at Queens College of the City University of New York, has done considerable research on eye movements, and said he also never found any link between the direction of eye movements and lying.

"This does not mean that the eyes don't tell us anything about what people are thinking," he said. "I found that while the direction of eye movements wasn't related to anything, whether people actually made eye movements or not was related to aspects of things going on in their mind."

He said that people tend to make eye movements -- about one per second on average -- when they are retrieving information from their long-term memory.

"If there's no eye movement during a television interview, I'm convinced that the person has rehearsed or repeated what they are going to say many times and don't have to search for the answer in their long-term memories."

He said he's not sure where the notion about directionality of eye movement and lying came from, but said it has spread despite little scientific evidence for it.

The study authors attribute part of the popularity of the belief that looking up to the right indicates lying while looking up to the left indicates truthfulness to many practitioners of neuro-linguistic programming (NLP). NLP -- controversial among scientists -- is a therapy approach that revolves around the connection between neurological processes, language and behavior.

"Many NLP practitioners claim that a person's eye movements can reveal a useful insight into whether they are lying or telling the truth," they wrote.

Some NLP practitioners dispute this assertion.

"We don't believe that eye movements are an indication of lying and have never taught it as such. I believe that someone started the idea as a marketing ploy. Perhaps they really believed it," said Steven Leeds, a co-director of the NLP Training Center of New York. "Eye movements, as we teach it, indicate how a person is processing information, whether it be visual, auditory, or kinesthetic, and whether it is remembered or created."

Others say they believe that the direction of eye movements can give away a liar.

Donald Sanborn is president of Credibility Assessment Technologies, a company that specializes in lie detection technology and recently licensed new technology -- called ocular-motor detection testing -- based on research done by psychologists from the University of Utah that utilizes a combination of eye tracking and other variables to determine whether a person is lying.

"When a person is lying, their emotional load goes up, which causes changes in pupil diameter and gaze position," said Sanborn. The device also measures how long it takes to read and answer certain questions. Pupil size, gaze position and the length of time it takes to respond to questions reflect that the brain is working harder, which the psychologists determined is a sign of lying.

This device is designed to be used for pre-employment screening, but it is not legal for use by private companies, so the company is working with the U.S. government and with foreign firms. He estimates its accuracy to be around 85 to 87 percent.

But the authors of the new study said their work offers proof that no relationship exists between the direction of eye movements and truthfulness.

"Future research could focus on why the belief has become so widespread," they wrote.


When It's Helpful — and When It's Not

In addition to getting us thinking about how we'd handle a potential disaster and the risk factors that increase the chance of being involved, Dr. Mayer says there are a few other ways that viewing destruction can actually be beneficial. "The healthy mechanism of watching disasters is that it is a coping mechanism," he explains. "We can become incubated emotionally by watching disasters and this helps us cope with hardships in our lives. Looking at disasters stimulates our empathy and we are programmed as humans to be empathetic — it is a key psychosocial condition that makes us social human beings."

We tend to think negatively to protect ourselves from the reality. If it turns out better, we're relieved. If it turns out to be worse, we're prepared.

However, as Dr. Stephen Rosenberg points out, empathy can also have a negative impact when following disasters — especially if you know someone who's being affected. "Being human and having empathy can make us feel worried or depressed," he says. "A patient of mine has his family trapped in Puerto Rico. He is following the news closely to monitor events and is waiting to hear from his family after the decimation of the island from the latest hurricane." Staying glued to this news coverage — especially when someone you know is affected, also activates our negativity bias. "We tend to think negatively to protect ourselves from the reality," Dr. Rosenberg explains. "If it turns out better, we're relieved. If it turns out to be worse, we're prepared."

The ability to empathize also plays a part in how we're affected when we see coverage of a disaster that we can relate to. "The more similar the viewer is to the victims of the disaster, the more likely he or she will be to experience anxiety, fear, vicarious trauma, physical complaints and illnesses and decreased daily functioning," Dr. Carr explains. For example, a study published by the American Psychological Association found that during the Ebola outbreaks, participants living in areas with a high demographic of West African-born residents experienced more symptoms of anxiety than those who didn't.


Even if you're not directly impacted by a disaster that's being widely covered on the news, there's evidence that repeated viewing can have a negative effect on mental health and well being. In a post 9/11 study published by The Journal of Anxiety Disorders, the television viewing habits of 166 children and 84 mothers who had no direct exposure to the attacks were studied. Sixty-eight percent of mothers and 48 percent of children reported increased television viewing in the days following the attacks. The study found that this uptick in viewing predicted an increased risk of PTSD symptoms.




The blind spot is a part of the retina where there are no photoreceptors. To demonstrate its existence to yourself, close your right eye, look at the + sign below with your left eye, then move your head toward or away from the screen slowly while continuing to watch the + sign. The big black dot will disappear as it passes through the blind spot of the retina of your left eye.

Thus there is a portion of your field of vision that you would expect to experience as missing. The reason this does not happen is that your brain fills in the blind spot with the colour and texture of the area surrounding it. In the above experiment, the black dot was replaced with the white background of this Web page. The following example works exactly the same way but is even more striking, because your brain fills in the break in the line.

If your visual cortex is capable of filling in the image in your blind spot in this way, then chances are good that it does the same thing throughout your field of vision. Consequently, what you are aware of seeing may not be exactly what is actually being imprinted on your retina, as if it were just a simple piece of film. Instead, what you are seeing may already have had several "special effects" added.


Perception as Inference

Predictive Processing has its roots in the work of the German physician and physicist Hermann von Helmholtz (Friston et al., 2006). Helmholtz recognized that incoming sensory data are ambiguous (Helmholtz, 1867). For a given sensation there are multiple potential causes in the world. For example, an orange scent could be caused by orange soda, air freshener, or an actual orange. And contrary to common sense, we do not have direct access to the world. Consider vision. Light does not enter the brain. The inside of the skull is dark. Instead the retina converts photons of light into the firing of neurons. In fact every sensory receptor, from vision, to touch, to smell, has the same type of output. This is true for the interoceptive senses such as proprioception, hunger, and thirst as well. In our experience of the world, all the brain has to work with are patterns of firing neurons. As Immanuel Kant suggested, all we can know is the “phenomenon,” that is, the effect of the world upon us, i.e., patterns of firing neurons. We can never know “the thing in itself,” that is, the actual causes in the world of the effects we experience (Kant, 1781). With this observation Kant anticipated the Markov blanket, a concept central to Predictive Processing. The Markov blanket is essentially the boundary between a system and everything else that is not that system, expressed in mathematical terms (Yufik and Friston, 2016).

Given the ambiguity of sensory data and the impossibility of knowing “the thing in itself,” Helmholtz concluded that perception is an act of unconscious inference. We cannot know directly what lies on the other side of the Markov blanket (i.e., sensory boundary) that is constituted by our sensory epithelia. When we perceive, the brain is making a guess about the state of the world. This process is automatic, rapid, and unconscious (Tenenbaum et al., 2011). As a result we are unaware that a sophisticated process has occurred. We are only aware of the product, what the brain has calculated to be the most likely cause, the best guess. However, we do not experience this as a probability or a guess, but rather as a fact (Dehaene and Changeux, 2011; Hohwy, 2013): “I see an orange.”

Helmholtz’ hypothesis of perception as inference has significant implications for brain function. If perception is an act of inference, the brain must have information that is used as the basis for inference. That is, it must have a model of the world, a priori, before it encounters the world. Dreaming during Rapid Eye Movement (REM) sleep illustrates the ability of the brain to generate perceptual hypotheses in the absence of any sensory data, an a priori model (Hobson et al., 2014). In the parlance of predictive processing this is called a prior probability or “prior” based on Bayes Theorem (Geisler and Diehl, 2003). Prior probability is the likelihood of a proposition before considering empirical data from the senses. But where does such a prior probability or model come from?
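Bayes' theorem, the formal backbone of the "prior" mentioned above, can be illustrated with the orange-scent example. The probabilities below are invented purely for illustration; they stand in for whatever priors and likelihoods a brain might have learned:

```python
# Hypothetical prior probabilities for the possible causes of an orange scent.
priors = {"orange": 0.5, "orange soda": 0.3, "air freshener": 0.2}

# Hypothetical likelihoods P(scent | cause): how strongly each cause
# tends to produce this scent.
likelihood = {"orange": 0.8, "orange soda": 0.6, "air freshener": 0.9}

# Bayes' theorem: posterior is proportional to prior x likelihood,
# normalized so the posteriors sum to 1.
unnormalized = {c: priors[c] * likelihood[c] for c in priors}
evidence = sum(unnormalized.values())
posterior = {c: p / evidence for c, p in unnormalized.items()}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 2))  # the brain's "best guess"
```

The perceiver experiences only the winning hypothesis ("I smell an orange"), not the underlying probabilities, which is exactly the point made above about perceiving guesses as facts.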


The Brain on Trial

Advances in brain science are calling into question the volition behind many criminal acts. A leading neuroscientist describes how the foundations of our criminal-justice system are beginning to crumble, and proposes a new way forward for law and order.

On the steamy first day of August 1966, Charles Whitman took an elevator to the top floor of the University of Texas Tower in Austin. The 25-year-old climbed the stairs to the observation deck, lugging with him a footlocker full of guns and ammunition. At the top, he killed a receptionist with the butt of his rifle. Two families of tourists came up the stairwell; he shot at them at point-blank range. Then he began to fire indiscriminately from the deck at people below. The first woman he shot was pregnant. As her boyfriend knelt to help her, Whitman shot him as well. He shot pedestrians in the street and an ambulance driver who came to rescue them.

The evening before, Whitman had sat at his typewriter and composed a suicide note.

By the time the police shot him dead, Whitman had killed 13 people and wounded 32 more. The story of his rampage dominated national headlines the next day. And when police went to investigate his home for clues, the story became even stranger: in the early hours of the morning on the day of the shooting, he had murdered his mother and stabbed his wife to death in her sleep.

Along with the shock of the murders lay another, more hidden, surprise: the juxtaposition of his aberrant actions with his unremarkable personal life. Whitman was an Eagle Scout and a former marine, studied architectural engineering at the University of Texas, and briefly worked as a bank teller and volunteered as a scoutmaster for Austin’s Boy Scout Troop 5. As a child, he’d scored 138 on the Stanford-Binet IQ test, placing in the 99th percentile. So after his shooting spree from the University of Texas Tower, everyone wanted answers.

For that matter, so did Whitman. He requested in his suicide note that an autopsy be performed to determine if something had changed in his brain—because he suspected it had.

Whitman’s body was taken to the morgue, his skull was put under the bone saw, and the medical examiner lifted the brain from its vault. He discovered that Whitman’s brain harbored a tumor the diameter of a nickel. This tumor, called a glioblastoma, had blossomed from beneath a structure called the thalamus, impinged on the hypothalamus, and compressed a third region called the amygdala. The amygdala is involved in emotional regulation, especially of fear and aggression. By the late 1800s, researchers had discovered that damage to the amygdala caused emotional and social disturbances. In the 1930s, the researchers Heinrich Klüver and Paul Bucy demonstrated that damage to the amygdala in monkeys led to a constellation of symptoms, including lack of fear, blunting of emotion, and overreaction. Female monkeys with amygdala damage often neglected or physically abused their infants. In humans, activity in the amygdala increases when people are shown threatening faces, are put into frightening situations, or experience social phobias. Whitman’s intuition about himself—that something in his brain was changing his behavior—was spot-on.

Stories like Whitman’s are not uncommon: legal cases involving brain damage crop up increasingly often. As we develop better technologies for probing the brain, we detect more problems, and link them more easily to aberrant behavior. Take the 2000 case of a 40-year-old man we’ll call Alex, whose sexual preferences suddenly began to transform. He developed an interest in child pornography—and not just a little interest, but an overwhelming one. He poured his time into child-pornography Web sites and magazines. He also solicited prostitution at a massage parlor, something he said he had never previously done. He reported later that he’d wanted to stop, but “the pleasure principle overrode” his restraint. He worked to hide his acts, but subtle sexual advances toward his prepubescent stepdaughter alarmed his wife, who soon discovered his collection of child pornography. He was removed from his house, found guilty of child molestation, and sentenced to rehabilitation in lieu of prison. In the rehabilitation program, he made inappropriate sexual advances toward the staff and other clients, and was expelled and routed toward prison.

At the same time, Alex was complaining of worsening headaches. The night before he was to report for prison sentencing, he couldn’t stand the pain anymore, and took himself to the emergency room. He underwent a brain scan, which revealed a massive tumor in his orbitofrontal cortex. Neurosurgeons removed the tumor. Alex’s sexual appetite returned to normal.

The year after the brain surgery, his pedophilic behavior began to return. The neuroradiologist discovered that a portion of the tumor had been missed in the surgery and was regrowing—and Alex went back under the knife. After the removal of the remaining tumor, his behavior again returned to normal.

When your biology changes, so can your decision-making and your desires. The drives you take for granted (“I’m a heterosexual/homosexual,” “I’m attracted to children/adults,” “I’m aggressive/not aggressive,” and so on) depend on the intricate details of your neural machinery. Although acting on such drives is popularly thought to be a free choice, the most cursory examination of the evidence demonstrates the limits of that assumption.

Alex’s sudden pedophilia illustrates that hidden drives and desires can lurk undetected behind the neural machinery of socialization. When the frontal lobes are compromised, people become disinhibited, and startling behaviors can emerge. Disinhibition is commonly seen in patients with frontotemporal dementia, a tragic disease in which the frontal and temporal lobes degenerate. With the loss of that brain tissue, patients lose the ability to control their hidden impulses. To the frustration of their loved ones, these patients violate social norms in endless ways: shoplifting in front of store managers, removing their clothes in public, running stop signs, breaking out in song at inappropriate times, eating food scraps found in public trash cans, being physically aggressive or sexually transgressive. Patients with frontotemporal dementia commonly end up in courtrooms, where their lawyers, doctors, and embarrassed adult children must explain to the judge that the violation was not the perpetrator’s fault, exactly: much of the brain has degenerated, and medicine offers no remedy. Fifty-seven percent of frontotemporal-dementia patients violate social norms, as compared with only 27 percent of Alzheimer’s patients.

Changes in the balance of brain chemistry, even small ones, can also cause large and unexpected changes in behavior. Victims of Parkinson’s disease offer an example. In 2001, families and caretakers of Parkinson’s patients began to notice something strange. When patients were given a drug called pramipexole, some of them turned into gamblers. And not just casual gamblers, but pathological gamblers. These were people who had never gambled much before, and now they were flying off to Vegas. One 68-year-old man amassed losses of more than $200,000 in six months at a series of casinos. Some patients became consumed with Internet poker, racking up unpayable credit-card bills. For several, the new addiction reached beyond gambling, to compulsive eating, excessive alcohol consumption, and hypersexuality.

What was going on? Parkinson’s involves the loss of brain cells that produce a neurotransmitter known as dopamine. Pramipexole works by impersonating dopamine. But it turns out that dopamine is a chemical doing double duty in the brain. Along with its role in motor commands, it also mediates the reward systems, guiding a person toward food, drink, mates, and other things useful for survival. Because of dopamine’s role in weighing the costs and benefits of decisions, imbalances in its levels can trigger gambling, overeating, and drug addiction—behaviors that result from a reward system gone awry. Physicians now watch for these behavioral changes as a possible side effect of drugs like pramipexole. Luckily, the negative effects of the drug are reversible—the physician simply lowers the dosage, and the compulsive gambling goes away.

The lesson from all these stories is the same: human behavior cannot be separated from human biology. If we like to believe that people make free choices about their behavior (as in, “I don’t gamble, because I’m strong-willed”), cases like Alex the pedophile, the frontotemporal shoplifters, and the gambling Parkinson’s patients may encourage us to examine our views more carefully. Perhaps not everyone is equally “free” to make socially appropriate choices.

Does the discovery of Charles Whitman’s brain tumor modify your feelings about the senseless murders he committed? Does it affect the sentence you would find appropriate for him, had he survived that day? Does the tumor change the degree to which you consider the killings “his fault”? Couldn’t you just as easily be unlucky enough to develop a tumor and lose control of your behavior?

On the other hand, wouldn’t it be dangerous to conclude that people with a tumor are free of guilt, and that they should be let off the hook for their crimes?

As our understanding of the human brain improves, juries are increasingly challenged with these sorts of questions. When a criminal stands in front of the judge’s bench today, the legal system wants to know whether he is blameworthy. Was it his fault, or his biology’s fault?

I submit that this is the wrong question to be asking. The choices we make are inseparably yoked to our neural circuitry, and therefore we have no meaningful way to tease the two apart. The more we learn, the more the seemingly simple concept of blameworthiness becomes complicated, and the more the foundations of our legal system are strained.

If I seem to be heading in an uncomfortable direction—toward letting criminals off the hook—please read on, because I’m going to show the logic of a new argument, piece by piece. The upshot is that we can build a legal system more deeply informed by science, in which we will continue to take criminals off the streets, but we will customize sentencing, leverage new opportunities for rehabilitation, and structure better incentives for good behavior. Discoveries in neuroscience suggest a new way forward for law and order—one that will lead to a more cost-effective, humane, and flexible system than the one we have today. When modern brain science is laid out clearly, it is difficult to justify how our legal system can continue to function without taking what we’ve learned into account.

Many of us like to believe that all adults possess the same capacity to make sound choices. It’s a charitable idea, but demonstrably wrong. People’s brains are vastly different.

Who you even have the possibility to be starts at conception. If you think genes don’t affect how people behave, consider this fact: if you are a carrier of a particular set of genes, the probability that you will commit a violent crime is four times as high as it would be if you lacked those genes. You’re three times as likely to commit robbery, five times as likely to commit aggravated assault, eight times as likely to be arrested for murder, and 13 times as likely to be arrested for a sexual offense. The overwhelming majority of prisoners carry these genes; 98.1 percent of death-row inmates do. These statistics alone indicate that we cannot presume that everyone is coming to the table equally equipped in terms of drives and behaviors.

And this feeds into a larger lesson of biology: we are not the ones steering the boat of our behavior, at least not nearly as much as we believe. Who we are runs well below the surface of our conscious access, and the details reach back in time to before our birth, when the meeting of a sperm and an egg granted us certain attributes and not others. Who we can be starts with our molecular blueprints—a series of alien codes written in invisibly small strings of acids—well before we have anything to do with it. Each of us is, in part, a product of our inaccessible, microscopic history. By the way, as regards that dangerous set of genes, you’ve probably heard of them. They are summarized as the Y chromosome. If you’re a carrier, we call you a male.

Genes are part of the story, but they’re not the whole story. We are likewise influenced by the environments in which we grow up. Substance abuse by a mother during pregnancy, maternal stress, and low birth weight all can influence how a baby will turn out as an adult. As a child grows, neglect, physical abuse, and head injury can impede mental development, as can the physical environment. (For example, the major public-health movement to eliminate lead-based paint grew out of an understanding that ingesting lead can cause brain damage, making children less intelligent and, in some cases, more impulsive and aggressive.) And every experience throughout our lives can modify genetic expression—activating certain genes or switching others off—which in turn can inaugurate new behaviors. In this way, genes and environments intertwine.

When it comes to nature and nurture, the important point is that we choose neither one. We are each constructed from a genetic blueprint, and then born into a world of circumstances that we cannot control in our most-formative years. The complex interactions of genes and environment mean that all citizens—equal before the law—possess different perspectives, dissimilar personalities, and varied capacities for decision-making. The unique patterns of neurobiology inside each of our heads cannot qualify as choices; these are the cards we’re dealt.

Because we did not choose the factors that affected the formation and structure of our brain, the concepts of free will and personal responsibility begin to sprout question marks. Is it meaningful to say that Alex made bad choices, even though his brain tumor was not his fault? Is it justifiable to say that the patients with frontotemporal dementia or Parkinson’s should be punished for their bad behavior?

It is problematic to imagine yourself in the shoes of someone breaking the law and conclude, “Well, I wouldn’t have done that”—because if you weren’t exposed to in utero cocaine, lead poisoning, and physical abuse, and he was, then you and he are not directly comparable. You cannot walk a mile in his shoes.

The legal system rests on the assumption that we are “practical reasoners,” a term of art that presumes, at bottom, the existence of free will. The idea is that we use conscious deliberation when deciding how to act—that is, in the absence of external duress, we make free decisions. This concept of the practical reasoner is intuitive but problematic.

The existence of free will in human behavior is the subject of an ancient debate. Arguments in support of free will are typically based on direct subjective experience (“I feel like I made the decision to lift my finger just now”). But evaluating free will requires some nuance beyond our immediate intuitions.

Consider a decision to move or speak. It feels as though free will leads you to stick out your tongue, or scrunch up your face, or call someone a name. But free will is not required to play any role in these acts. People with Tourette’s syndrome, for instance, suffer from involuntary movements and vocalizations. A typical Touretter may stick out his tongue, scrunch up his face, or call someone a name—all without choosing to do so.

We immediately learn two things from the Tourette’s patient. First, actions can occur in the absence of free will. Second, the Tourette’s patient has no free won’t. He cannot use free will to override or control what subconscious parts of his brain have decided to do. What the lack of free will and the lack of free won’t have in common is the lack of “free.” Tourette’s syndrome provides a case in which the underlying neural machinery does its thing, and we all agree that the person is not responsible.

This same phenomenon arises in people with a condition known as chorea, for whom actions of the hands, arms, legs, and face are involuntary, even though they certainly look voluntary: ask such a patient why she is moving her fingers up and down, and she will explain that she has no control over her hand. She cannot not do it. Similarly, some split-brain patients (who have had the two hemispheres of the brain surgically disconnected) develop alien-hand syndrome: while one hand buttons up a shirt, the other hand works to unbutton it. When one hand reaches for a pencil, the other bats it away. No matter how hard the patient tries, he cannot make his alien hand not do what it’s doing. The movements are not “his” to freely start or stop.

Unconscious acts are not limited to unintended shouts or wayward hands; they can be surprisingly sophisticated. Consider Kenneth Parks, a 23-year-old Canadian with a wife, a five-month-old daughter, and a close relationship with his in-laws (his mother-in-law described him as a “gentle giant”). Suffering from financial difficulties, marital problems, and a gambling addiction, he made plans to go see his in-laws to talk about his troubles.

In the wee hours of May 23, 1987, Kenneth arose from the couch on which he had fallen asleep, but he did not awaken. Sleepwalking, he climbed into his car and drove the 14 miles to his in-laws’ home. He broke in, stabbed his mother-in-law to death, and assaulted his father-in-law, who survived. Afterward, he drove himself to the police station. Once there, he said, “I think I have killed some people … My hands,” realizing for the first time that his own hands were severely cut.

Over the next year, Kenneth’s testimony was remarkably consistent, even in the face of attempts to lead him astray: he remembered nothing of the incident. Moreover, while all parties agreed that Kenneth had undoubtedly committed the murder, they also agreed that he had no motive. His defense attorneys argued that this was a case of killing while sleepwalking, known as homicidal somnambulism.

Although critics cried “Faker!,” sleepwalking is a verifiable phenomenon. On May 25, 1988, after lengthy consideration of electrical recordings from Kenneth’s brain, the jury concluded that his actions had indeed been involuntary, and declared him not guilty.

As with Tourette’s sufferers, split-brain patients, and those with choreic movements, Kenneth’s case illustrates that high-level behaviors can take place in the absence of free will. Like your heartbeat, breathing, blinking, and swallowing, even your mental machinery can run on autopilot. The crux of the question is whether all of your actions are fundamentally on autopilot or whether some little bit of you is “free” to choose, independent of the rules of biology.

This has always been the sticking point for philosophers and scientists alike. After all, there is no spot in the brain that is not densely interconnected with—and driven by—other brain parts. And that suggests that no part is independent and therefore “free.” In modern science, it is difficult to find the gap into which to slip free will—the uncaused causer—because there seems to be no part of the machinery that does not follow in a causal relationship from the other parts.

Free will may exist (it may simply be beyond our current science), but one thing seems clear: if free will does exist, it has little room in which to operate. It can at best be a small factor riding on top of vast neural networks shaped by genes and environment. In fact, free will may end up being so small that we eventually think about bad decision-making in the same way we think about any physical process, such as diabetes or lung disease.

The study of brains and behaviors is in the midst of a conceptual shift. Historically, clinicians and lawyers have agreed on an intuitive distinction between neurological disorders (“brain problems”) and psychiatric disorders (“mind problems”). As recently as a century ago, a common approach was to get psychiatric patients to “toughen up,” through deprivation, pleading, or torture. Not surprisingly, this approach was medically fruitless. After all, while psychiatric disorders tend to be the product of more-subtle forms of brain pathology, they, too, are based in the biological details of the brain.

What accounts for the shift from blame to biology? Perhaps the largest driving force is the effectiveness of pharmaceutical treatments. No amount of threatening will chase away depression, but a little pill called fluoxetine often does the trick. Schizophrenic symptoms cannot be overcome by exorcism, but they can be controlled by risperidone. Mania responds not to talk or to ostracism, but to lithium. These successes, most of them introduced in the past 60 years, have underscored the idea that calling some disorders “brain problems” while consigning others to the ineffable realm of “the psychic” does not make sense. Instead, we have begun to approach mental problems in the same way we might approach a broken leg. The neuroscientist Robert Sapolsky invites us to contemplate this conceptual shift with a series of questions:

Acts cannot be understood separately from the biology of the actors—and this recognition has legal implications. Tom Bingham, Britain’s former senior law lord, once put it this way:

In the past, the law has tended to base its approach … on a series of rather crude working assumptions: adults of competent mental capacity are free to choose whether they will act in one way or another; they are presumed to act rationally, and in what they conceive to be their own best interests; they are credited with such foresight of the consequences of their actions as reasonable people in their position could ordinarily be expected to have; they are generally taken to mean what they say.

Whatever the merits or demerits of working assumptions such as these in the ordinary range of cases, it is evident that they do not provide a uniformly accurate guide to human behaviour.

The more we discover about the circuitry of the brain, the more we tip away from accusations of indulgence, lack of motivation, and poor discipline—and toward the details of biology. The shift from blame to science reflects our modern understanding that our perceptions and behaviors are steered by deeply embedded neural programs.

Imagine a spectrum of culpability. On one end, we find people like Alex the pedophile, or a patient with frontotemporal dementia who exposes himself in public. In the eyes of the judge and jury, these are people who suffered brain damage at the hands of fate and did not choose their neural situation. On the other end of the spectrum—the blameworthy side of the “fault” line—we find the common criminal, whose brain receives little study, and about whom our current technology might be able to say little anyway. The overwhelming majority of lawbreakers are on this side of the line, because they don’t have any obvious, measurable biological problems. They are simply thought of as freely choosing actors.

Such a spectrum captures the common intuition that juries hold regarding blameworthiness. But there is a deep problem with this intuition. Technology will continue to improve, and as we grow better at measuring problems in the brain, the fault line will drift into the territory of people we currently hold fully accountable for their crimes. Problems that are now opaque will open up to examination by new techniques, and we may someday find that many types of bad behavior have a basic biological explanation—as has happened with schizophrenia, epilepsy, depression, and mania.

Today, neuroimaging is a crude technology, unable to explain the details of individual behavior. We can detect only large-scale problems, but within the coming decades, we will be able to detect patterns at unimaginably small levels of the microcircuitry that correlate with behavioral problems. Neuroscience will be better able to say why people are predisposed to act the way they do. As we become more skilled at specifying how behavior results from the microscopic details of the brain, more defense lawyers will point to biological mitigators of guilt, and more juries will place defendants on the not-blameworthy side of the line.

This puts us in a strange situation. After all, a just legal system cannot define culpability simply by the limitations of current technology. Expert medical testimony generally reflects only whether we yet have names and measurements for a problem, not whether a problem exists. A legal system that declares a person culpable at the beginning of a decade and not culpable at the end is one in which culpability carries no clear meaning.

The crux of the problem is that it no longer makes sense to ask, “To what extent was it his biology, and to what extent was it him?,” because we now understand that there is no meaningful distinction between a person’s biology and his decision-making. They are inseparable.

While our current style of punishment rests on a bedrock of personal volition and blame, our modern understanding of the brain suggests a different approach. Blameworthiness should be removed from the legal argot. It is a backward-looking concept that demands the impossible task of untangling the hopelessly complex web of genetics and environment that constructs the trajectory of a human life.

Instead of debating culpability, we should focus on what to do, moving forward, with an accused lawbreaker. I suggest that the legal system has to become forward-looking, primarily because it can no longer hope to do otherwise. As science complicates the question of culpability, our legal and social policy will need to shift toward a different set of questions: How is a person likely to behave in the future? Are criminal actions likely to be repeated? Can this person be helped toward pro-social behavior? How can incentives be realistically structured to deter crime?

The important change will be in the way we respond to the vast range of criminal acts. Biological explanation will not exculpate criminals; we will still remove from the streets lawbreakers who prove overaggressive, underempathetic, and poor at controlling their impulses. Consider, for example, that the majority of known serial killers were abused as children. Does this make them less blameworthy? Who cares? It’s the wrong question. The knowledge that they were abused encourages us to support social programs to prevent child abuse, but it does nothing to change the way we deal with the particular serial murderer standing in front of the bench. We still need to keep him off the streets, irrespective of his past misfortunes. The child abuse cannot serve as an excuse to let him go; the judge must keep society safe.

Those who break social contracts need to be confined, but in this framework, the future is more important than the past. Deeper biological insight into behavior will foster a better understanding of recidivism—and this offers a basis for empirically based sentencing. Some people will need to be taken off the streets for a longer time (even a lifetime), because their likelihood of reoffense is high; others, because of differences in neural constitution, are less likely to recidivate, and so can be released sooner.

The law is already forward-looking in some respects: consider the leniency afforded a crime of passion versus a premeditated murder. Those who commit the former are less likely to recidivate than those who commit the latter, and their sentences sensibly reflect that. Likewise, American law draws a bright line between criminal acts committed by minors and those by adults, punishing the latter more harshly. This approach may be crude, but the intuition behind it is sound: adolescents command lesser skills in decision-making and impulse control than do adults; a teenager’s brain is simply not like an adult’s brain. Lighter sentences are appropriate for those whose impulse control is likely to improve naturally as adolescence gives way to adulthood.

Taking a more scientific approach to sentencing, case by case, could move us beyond these limited examples. For instance, important changes are happening in the sentencing of sex offenders. In the past, researchers have asked psychiatrists and parole-board members how likely specific sex offenders were to relapse when let out of prison. Both groups had experience with sex offenders, so predicting who was going straight and who was coming back seemed simple. But surprisingly, the expert guesses showed almost no correlation with the actual outcomes. The psychiatrists and parole-board members had only slightly better predictive accuracy than coin-flippers. This astounded the legal community.

So researchers tried a more actuarial approach. They set about recording dozens of characteristics of some 23,000 released sex offenders: whether the offender had unstable employment, had been sexually abused as a child, was addicted to drugs, showed remorse, had deviant sexual interests, and so on. Researchers then tracked the offenders for an average of five years after release to see who wound up back in prison. At the end of the study, they computed which factors best explained the reoffense rates, and from these and later data they were able to build actuarial tables to be used in sentencing.

Which factors mattered? Take, for instance, low remorse, denial of the crime, and sexual abuse as a child. You might guess that these factors would correlate with sex offenders’ recidivism. But you would be wrong: those factors offer no predictive power. How about antisocial personality disorder and failure to complete treatment? These offer somewhat more predictive power. But among the strongest predictors of recidivism are prior sexual offenses and sexual interest in children. When you compare the predictive power of the actuarial approach with that of the parole boards and psychiatrists, there is no contest: numbers beat intuition. In courtrooms across the nation, these actuarial tests are now used in presentencing to modulate the length of prison terms.
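As a toy illustration of how such an actuarial instrument operates (the factor weights below are invented for this sketch, not values from any real sentencing table), a risk score is simply a weighted sum of recorded characteristics pushed through a logistic function to yield a reoffense probability:

```python
import math

# Hypothetical weights, illustrative only. Mirroring the text: factors with
# no predictive power get zero weight; prior sexual offenses and sexual
# interest in children get the largest weights.
WEIGHTS = {
    "low_remorse": 0.0,
    "denial_of_crime": 0.0,
    "abused_as_child": 0.0,
    "antisocial_personality": 0.4,
    "failed_treatment": 0.3,
    "prior_sexual_offenses": 1.2,
    "sexual_interest_in_children": 1.1,
}
BIAS = -2.5  # invented baseline log-odds of reoffense

def reoffense_probability(factors: dict[str, bool]) -> float:
    """Logistic model: probability = sigmoid(bias + sum of active weights)."""
    score = BIAS + sum(w for name, w in WEIGHTS.items() if factors.get(name))
    return 1 / (1 + math.exp(-score))

# Factors the study found non-predictive leave the baseline unchanged...
low_risk = reoffense_probability({"low_remorse": True, "denial_of_crime": True})
# ...while the strongest predictors push the probability sharply upward.
high_risk = reoffense_probability({"prior_sexual_offenses": True,
                                   "sexual_interest_in_children": True})
```

The point of the sketch is only the structure: the table "beats intuition" because the weights are fit to thousands of tracked outcomes rather than to an expert's gut feeling.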

We will never know with certainty what someone will do upon release from prison, because real life is complicated. But greater predictive power is hidden in the numbers than people generally expect. Statistically based sentencing is imperfect, but it nonetheless allows evidence to trump folk intuition, and it offers customization in place of the blunt guidelines that the legal system typically employs. The current actuarial approaches do not require a deep understanding of genes or brain chemistry, but as we introduce more science into these measures—for example, with neuroimaging studies—the predictive power will only improve. (To make such a system immune to government abuse, the data and equations that compose the sentencing guidelines must be transparent and available online for anyone to verify.)

Beyond customized sentencing, a forward-thinking legal system informed by scientific insights into the brain will enable us to stop treating prison as a one-size-fits-all solution. To be clear, I’m not opposed to incarceration, and its purpose is not limited to the removal of dangerous people from the streets. The prospect of incarceration deters many crimes, and time actually spent in prison can steer some people away from further criminal acts upon their release. But that works only for those whose brains function normally. The problem is that prisons have become our de facto mental-health-care institutions—and inflicting punishment on the mentally ill usually has little influence on their future behavior. An encouraging trend is the establishment of mental-health courts around the nation: through such courts, people with mental illnesses can be helped while confined in a tailored environment. Cities such as Richmond, Virginia, are moving in this direction, for reasons of justice as well as cost-effectiveness. Sheriff C. T. Woody, who estimates that nearly 20 percent of Richmond’s prisoners are mentally ill, told CBS News, “The jail isn’t a place for them. They should be in a mental-health facility.” Similarly, many jurisdictions are opening drug courts and developing alternative sentences; they have realized that prisons are not as useful for solving addictions as are meaningful drug-rehabilitation programs.

A forward-thinking legal system will also parlay biological understanding into customized rehabilitation, viewing criminal behavior the way we understand other medical conditions such as epilepsy, schizophrenia, and depression—conditions that now allow the seeking and giving of help. These and other brain disorders find themselves on the not-blameworthy side of the fault line, where they are now recognized as biological, not demonic, issues.

Many people recognize the long-term cost-effectiveness of rehabilitating offenders instead of packing them into overcrowded prisons. The challenge has been the dearth of new ideas about how to rehabilitate them. A better understanding of the brain offers new ideas. For example, poor impulse control is characteristic of many prisoners. These people generally can express the difference between right and wrong actions, and they understand the disadvantages of punishment—but they are handicapped by poor control of their impulses. Whether as a result of anger or temptation, their actions override reasoned consideration of the future.

If it seems difficult to empathize with people who have poor impulse control, just think of all the things you succumb to against your better judgment. Alcohol? Chocolate cake? Television? It’s not that we don’t know what’s best for us, it’s simply that the frontal-lobe circuits representing long-term considerations can’t always win against short-term desire when temptation is in front of us.

With this understanding in mind, we can modify the justice system in several ways. One approach, advocated by Mark A. R. Kleiman, a professor of public policy at UCLA, is to ramp up the certainty and swiftness of punishment—for instance, by requiring drug offenders to undergo twice-weekly drug testing, with automatic, immediate consequences for failure—thereby not relying on distant abstraction alone. Similarly, economists have suggested that the drop in crime since the early 1990s has been due, in part, to the increased presence of police on the streets: their visibility shores up support for the parts of the brain that weigh long-term consequences.

We may be on the cusp of finding new rehabilitative strategies as well, affording people better control of their behavior, even in the absence of external authority. To help a citizen reintegrate into society, the ethical goal is to change him as little as possible while bringing his behavior into line with society’s needs. My colleagues and I are proposing a new approach, one that grows from the understanding that the brain operates like a team of rivals, with different neural populations competing to control the single output channel of behavior. Because it’s a competition, the outcome can be tipped. I call the approach “the prefrontal workout.”

The basic idea is to give the frontal lobes practice in squelching the short-term brain circuits. To this end, my colleagues Stephen LaConte and Pearl Chiu have begun providing real-time feedback to people during brain scanning. Imagine that you’d like to quit smoking cigarettes. In this experiment, you look at pictures of cigarettes during brain imaging, and the experimenters measure which regions of your brain are involved in the craving. Then they show you the activity in those networks, represented by a vertical bar on a computer screen, while you look at more cigarette pictures. The bar acts as a thermometer for your craving: if your craving networks are revving high, the bar is high; if you’re suppressing your craving, the bar is low. Your job is to make the bar go down. Perhaps you have insight into what you’re doing to resist the craving; perhaps the mechanism is inaccessible. In any case, you try out different mental avenues until the bar begins to slowly sink. When it goes all the way down, that means you’ve successfully recruited frontal circuitry to squelch the activity in the networks involved in impulsive craving. The goal is for the long term to trump the short term. Still looking at pictures of cigarettes, you practice making the bar go down over and over, until you’ve strengthened those frontal circuits. By this method, you’re able to visualize the activity in the parts of your brain that need modulation, and you can witness the effects of different mental approaches you might take.
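The feedback loop just described can be caricatured in a few lines of code. Everything here is invented for illustration (the real system reads craving-network activity from an fMRI scanner, not from a formula), but the sketch shows the closed-loop structure: measure the bar, display it, let the subject adjust, repeat.

```python
def craving_signal(suppression_effort: float) -> float:
    """Toy stand-in for the fMRI craving-network readout (invented dynamics):
    more frontal suppression effort means lower measured craving activity."""
    return max(0.0, 1.0 - suppression_effort)

def neurofeedback_session(trials: int = 50,
                          learning_rate: float = 0.05) -> list[float]:
    """Closed loop: measure the 'bar', show it to the subject, and assume the
    subject reinforces whatever mental strategy pushed it down."""
    effort = 0.0
    bar_history = []
    for _ in range(trials):
        bar = craving_signal(effort)   # what the thermometer displays
        bar_history.append(bar)
        # Feedback: seeing the bar lets the subject strengthen the strategy
        # that lowered it, so suppression effort grows (capped at 1.0).
        effort = min(1.0, effort + learning_rate * bar)
    return bar_history

bars = neurofeedback_session()  # the bar sinks toward zero over the session
```

The essential feature is that the loop needs no insight into *which* mental strategy works; it only rewards whatever moves the bar, which is exactly why the method can work even when the mechanism is inaccessible to the subject.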

If this sounds like biofeedback from the 1970s, it is—but this time with vastly more sophistication, monitoring specific networks inside the head rather than a single electrode on the skin. This research is just beginning, so the method’s efficacy is not yet known—but if it works well, it will be a game changer. We will be able to take it to the incarcerated population, especially those approaching release, to try to help them avoid coming back through the revolving prison doors.

This prefrontal workout is designed to better balance the debate between the long- and short-term parties of the brain, giving the option of reflection before action to those who lack it. And really, that’s all maturation is. The main difference between teenage and adult brains is the development of the frontal lobes. The human prefrontal cortex does not fully develop until the early 20s, and this fact underlies the impulsive behavior of teenagers. The frontal lobes are sometimes called the organ of socialization, because becoming socialized largely involves developing the circuitry to squelch our first impulses.

This explains why damage to the frontal lobes unmasks unsocialized behavior that we would never have thought was hidden inside us. Recall the patients with frontotemporal dementia who shoplift, expose themselves, and burst into song at inappropriate times. The networks for those behaviors have been lurking under the surface all along, but they’ve been masked by normally functioning frontal lobes. The same sort of unmasking happens in people who go out and get rip-roaring drunk on a Saturday night: they’re disinhibiting normal frontal-lobe function and letting more-impulsive networks climb onto the main stage. After training at the prefrontal gym, a person might still crave a cigarette, but he’ll know how to beat the craving instead of letting it win. It’s not that we don’t want to enjoy our impulsive thoughts (Mmm, cake), it’s merely that we want to endow the frontal cortex with some control over whether we act upon them (I’ll pass). Similarly, if a person thinks about committing a criminal act, that’s permissible as long as he doesn’t take action.

For the pedophile, we cannot hope to control whether he is attracted to children. That he never acts on the attraction may be the best we can hope for, especially as a society that respects individual rights and freedom of thought. Social policy can hope only to prevent impulsive thoughts from tipping into behavior without reflection. The goal is to give more control to the neural populations that care about long-term consequences—to inhibit impulsivity, to encourage reflection. If a person thinks about long-term consequences and still decides to move forward with an illegal act, then we’ll respond accordingly. The prefrontal workout leaves the brain intact—no drugs or surgery—and uses the natural mechanisms of brain plasticity to help the brain help itself. It’s a tune-up rather than a product recall.

We have hope that this approach represents the correct model: it is grounded simultaneously in biology and in libertarian ethics, allowing a person to help himself by improving his long-term decision-making. Like any scientific attempt, it could fail for any number of unforeseen reasons. But at least we have reached a point where we can develop new ideas rather than assuming that repeated incarceration is the single practical solution for deterring crime.

Along any axis that we use to measure human beings, we discover a wide-ranging distribution, whether in empathy, intelligence, impulse control, or aggression. People are not created equal. Although this variability is often imagined to be best swept under the rug, it is in fact the engine of evolution. In each generation, nature tries out as many varieties as it can produce, along all available dimensions.

Variation gives rise to lushly diverse societies—but it serves as a source of trouble for the legal system, which is largely built on the premise that humans are all equal before the law. This myth of human equality suggests that people are equally capable of controlling impulses, making decisions, and comprehending consequences. While admirable in spirit, the notion of neural equality is simply not true.

As brain science improves, we will better understand that people exist along continua of capabilities, rather than in simplistic categories. And we will be better able to tailor sentencing and rehabilitation for the individual, rather than maintain the pretense that all brains respond identically to complex challenges and that all people therefore deserve the same punishments. Some people wonder whether it’s unfair to take a scientific approach to sentencing—after all, where’s the humanity in that? But what’s the alternative? As it stands now, ugly people receive longer sentences than attractive people; psychiatrists have no capacity to guess which sex offenders will reoffend; and our prisons are overcrowded with drug addicts and the mentally ill, both of whom could be better helped by rehabilitation. So is current sentencing really superior to a scientifically informed approach?

Neuroscience is beginning to touch on questions that were once only in the domain of philosophers and psychologists, questions about how people make decisions and the degree to which those decisions are truly “free.” These are not idle questions. Ultimately, they will shape the future of legal theory and create a more biologically informed jurisprudence.


How to Bliss Out on Your Beach Trip

As much as we’d happily set up residence beachside on a Hawaiian island, you don’t actually have to live by the water to reap the benefits. It’s all about taking advantage of the time you spend there by practicing mindfulness.

Mindfulness has tons of mental health benefits, including stress relief, says Nazari. A study published in the journal Psychiatry Research found that the brains of those who completed an eight-week meditation course changed in a few ways. For starters, the area of the brain responsible for stress shrank in size. “This means that we become more resilient to handling stress and that stress doesn’t rattle us as much,” explains Nazari.

On the flip side, the parts of the brain responsible for memory, reasoning and empathy grew. An easier time thinking, focusing and connecting? That’s a pretty sweet deal — and just might make for a more productive Monday morning.


Limitations

Measuring the behavior of two live participants, while rich in data, is not without its limitations. For instance, participants who wore the sunglasses (participant B) verbally mentioned that they were uncertain of the extent to which the sunglasses disguised their eyes to the other participant. Based on informal conversations, the majority of participants who wore the sunglasses assumed their eyes were quite visible and thus, they would believe they could send gaze information to the other person. However, this should have been consistent, with every participant either believing that their eyes were visible or not. Another potential limitation was the within-subjects design, such that participants took part in all three conditions—clear, degraded, and blocked. While the order of conditions was counterbalanced, previous (unpublished) research in our lab has shown that participants habituate to eye contact over time and show less and less arousal with repeated exposure. Thus, our data may have been stronger if we had enough participants to analyze the data as a between-subjects design. Lastly, in the blindfold condition it was assumed that the blindfold would prevent all gaze signals from being sent and received between dyads because participant B’s eyes were entirely concealed. Since both participants were expected to have no ability to send or receive gaze information, no differences in the SCR between partners were expected to emerge in the eye contact trials. As mentioned in the results, there was a difference between participants’ arousal levels within the blindfold condition, such that participant B (wearing the blindfold) showed significantly higher arousal across all gaze trials. One possibility is that simply being blindfolded increased arousal because of the knowledge of being the focus of someone’s attention. Thus, being the object of someone’s attention could have been driving the arousal response.


How Hurricanes Get Their Names

When It's Helpful — and When It's Not

In addition to getting us thinking about how we'd handle a potential disaster and the risk factors that increase the chance of being involved, Dr. Mayer says there are a few other ways that viewing destruction can actually be beneficial. "The healthy mechanism of watching disasters is that it is a coping mechanism," he explains. "We can become incubated emotionally by watching disasters and this helps us cope with hardships in our lives. Looking at disasters stimulates our empathy and we are programmed as humans to be empathetic — it is a key psychosocial condition that makes us social human beings."


However, as Dr. Stephen Rosenberg points out, empathy can also have a negative impact when following disasters — especially if you know someone who's being affected. "Being human and having empathy can make us feel worried or depressed," he says. "A patient of mine has his family trapped in Puerto Rico. He is following the news closely to monitor events and is waiting to hear from his family after the decimation of the island from the latest hurricane." Staying glued to this news coverage — especially when someone you know is affected, also activates our negativity bias. "We tend to think negatively to protect ourselves from the reality," Dr. Rosenberg explains. "If it turns out better, we're relieved. If it turns out to be worse, we're prepared."

The ability to empathize also plays a part in how we're affected when we see coverage of a disaster that we can relate to. "The more similar the viewer is to the victims of the disaster, the more likely he or she will be to experience anxiety, fear, vicarious trauma, physical complaints and illnesses and decreased daily functioning," Dr. Carr explains. For example, a study published by the American Psychological Association found that during the Ebola outbreaks, participants living in areas with a high demographic of West African-born residents experienced more symptoms of anxiety than those who didn't.


Even if you're not directly impacted by a disaster that's being widely covered on the news, there's evidence that repeated viewing can have a negative effect on mental health and well-being. In a post-9/11 study published in The Journal of Anxiety Disorders, the television viewing habits of 166 children and 84 mothers who had no direct exposure to the attacks were studied. Sixty-eight percent of mothers and 48 percent of children reported increased television viewing in the days following the attacks. The study found that this uptick in viewing predicted an increased risk of PTSD symptoms.




The blind spot is a part of the retina where there are no photoreceptors. To demonstrate its existence to yourself, close your right eye, look at the + sign below with your left eye, then move your head toward or away from the screen slowly while continuing to watch the + sign. The big black dot will disappear as it passes through the blind spot of the retina of your left eye.

Thus there is a portion of your field of vision that you would expect to experience as missing. The reason this does not happen is that your brain fills in the blind spot with the colour and texture of the area surrounding it. In the above experiment, the black dot was replaced with the white background of this Web page. The following example works exactly the same way but is even more striking, because your brain fills in the break in the line.

If your visual cortex is capable of filling in the image in your blind spot in this way, then chances are good that it does the same thing throughout your field of vision. Consequently, what you are aware of seeing may not be exactly what is actually being imprinted on your retina, as if it were just a simple piece of film. Instead, what you are seeing may already have had several "special effects" added.


The Brain on Trial

Advances in brain science are calling into question the volition behind many criminal acts. A leading neuroscientist describes how the foundations of our criminal-justice system are beginning to crumble, and proposes a new way forward for law and order.

On the steamy first day of August 1966, Charles Whitman took an elevator to the top floor of the University of Texas Tower in Austin. The 25-year-old climbed the stairs to the observation deck, lugging with him a footlocker full of guns and ammunition. At the top, he killed a receptionist with the butt of his rifle. Two families of tourists came up the stairwell; he shot at them at point-blank range. Then he began to fire indiscriminately from the deck at people below. The first woman he shot was pregnant. As her boyfriend knelt to help her, Whitman shot him as well. He shot pedestrians in the street and an ambulance driver who came to rescue them.

The evening before, Whitman had sat at his typewriter and composed a suicide note.

By the time the police shot him dead, Whitman had killed 13 people and wounded 32 more. The story of his rampage dominated national headlines the next day. And when police went to investigate his home for clues, the story became even stranger: in the early hours of the morning on the day of the shooting, he had murdered his mother and stabbed his wife to death in her sleep.

Along with the shock of the murders lay another, more hidden, surprise: the juxtaposition of his aberrant actions with his unremarkable personal life. Whitman was an Eagle Scout and a former marine, studied architectural engineering at the University of Texas, and briefly worked as a bank teller and volunteered as a scoutmaster for Austin’s Boy Scout Troop 5. As a child, he’d scored 138 on the Stanford-Binet IQ test, placing in the 99th percentile. So after his shooting spree from the University of Texas Tower, everyone wanted answers.

For that matter, so did Whitman. He requested in his suicide note that an autopsy be performed to determine if something had changed in his brain—because he suspected it had.

Whitman’s body was taken to the morgue, his skull was put under the bone saw, and the medical examiner lifted the brain from its vault. He discovered that Whitman’s brain harbored a tumor the diameter of a nickel. This tumor, called a glioblastoma, had blossomed from beneath a structure called the thalamus, impinged on the hypothalamus, and compressed a third region called the amygdala. The amygdala is involved in emotional regulation, especially of fear and aggression. By the late 1800s, researchers had discovered that damage to the amygdala caused emotional and social disturbances. In the 1930s, the researchers Heinrich Klüver and Paul Bucy demonstrated that damage to the amygdala in monkeys led to a constellation of symptoms, including lack of fear, blunting of emotion, and overreaction. Female monkeys with amygdala damage often neglected or physically abused their infants. In humans, activity in the amygdala increases when people are shown threatening faces, are put into frightening situations, or experience social phobias. Whitman’s intuition about himself—that something in his brain was changing his behavior—was spot-on.

Stories like Whitman’s are not uncommon: legal cases involving brain damage crop up increasingly often. As we develop better technologies for probing the brain, we detect more problems, and link them more easily to aberrant behavior. Take the 2000 case of a 40-year-old man we’ll call Alex, whose sexual preferences suddenly began to transform. He developed an interest in child pornography—and not just a little interest, but an overwhelming one. He poured his time into child-pornography Web sites and magazines. He also solicited prostitution at a massage parlor, something he said he had never previously done. He reported later that he’d wanted to stop, but “the pleasure principle overrode” his restraint. He worked to hide his acts, but subtle sexual advances toward his prepubescent stepdaughter alarmed his wife, who soon discovered his collection of child pornography. He was removed from his house, found guilty of child molestation, and sentenced to rehabilitation in lieu of prison. In the rehabilitation program, he made inappropriate sexual advances toward the staff and other clients, and was expelled and routed toward prison.

At the same time, Alex was complaining of worsening headaches. The night before he was to report for prison sentencing, he couldn’t stand the pain anymore, and took himself to the emergency room. He underwent a brain scan, which revealed a massive tumor in his orbitofrontal cortex. Neurosurgeons removed the tumor. Alex’s sexual appetite returned to normal.

The year after the brain surgery, his pedophilic behavior began to return. The neuroradiologist discovered that a portion of the tumor had been missed in the surgery and was regrowing—and Alex went back under the knife. After the removal of the remaining tumor, his behavior again returned to normal.

When your biology changes, so can your decision-making and your desires. The drives you take for granted (“I’m a heterosexual/homosexual,” “I’m attracted to children/adults,” “I’m aggressive/not aggressive,” and so on) depend on the intricate details of your neural machinery. Although acting on such drives is popularly thought to be a free choice, the most cursory examination of the evidence demonstrates the limits of that assumption.

Alex’s sudden pedophilia illustrates that hidden drives and desires can lurk undetected behind the neural machinery of socialization. When the frontal lobes are compromised, people become disinhibited, and startling behaviors can emerge. Disinhibition is commonly seen in patients with frontotemporal dementia, a tragic disease in which the frontal and temporal lobes degenerate. With the loss of that brain tissue, patients lose the ability to control their hidden impulses. To the frustration of their loved ones, these patients violate social norms in endless ways: shoplifting in front of store managers, removing their clothes in public, running stop signs, breaking out in song at inappropriate times, eating food scraps found in public trash cans, being physically aggressive or sexually transgressive. Patients with frontotemporal dementia commonly end up in courtrooms, where their lawyers, doctors, and embarrassed adult children must explain to the judge that the violation was not the perpetrator’s fault, exactly: much of the brain has degenerated, and medicine offers no remedy. Fifty-seven percent of frontotemporal-dementia patients violate social norms, as compared with only 27 percent of Alzheimer’s patients.

Changes in the balance of brain chemistry, even small ones, can also cause large and unexpected changes in behavior. Victims of Parkinson’s disease offer an example. In 2001, families and caretakers of Parkinson’s patients began to notice something strange. When patients were given a drug called pramipexole, some of them turned into gamblers. And not just casual gamblers, but pathological gamblers. These were people who had never gambled much before, and now they were flying off to Vegas. One 68-year-old man amassed losses of more than $200,000 in six months at a series of casinos. Some patients became consumed with Internet poker, racking up unpayable credit-card bills. For several, the new addiction reached beyond gambling, to compulsive eating, excessive alcohol consumption, and hypersexuality.

What was going on? Parkinson’s involves the loss of brain cells that produce a neurotransmitter known as dopamine. Pramipexole works by impersonating dopamine. But it turns out that dopamine is a chemical doing double duty in the brain. Along with its role in motor commands, it also mediates the reward systems, guiding a person toward food, drink, mates, and other things useful for survival. Because of dopamine’s role in weighing the costs and benefits of decisions, imbalances in its levels can trigger gambling, overeating, and drug addiction—behaviors that result from a reward system gone awry. Physicians now watch for these behavioral changes as a possible side effect of drugs like pramipexole. Luckily, the negative effects of the drug are reversible—the physician simply lowers the dosage, and the compulsive gambling goes away.

The lesson from all these stories is the same: human behavior cannot be separated from human biology. If we like to believe that people make free choices about their behavior (as in, “I don’t gamble, because I’m strong-willed”), cases like Alex the pedophile, the frontotemporal shoplifters, and the gambling Parkinson’s patients may encourage us to examine our views more carefully. Perhaps not everyone is equally “free” to make socially appropriate choices.

Does the discovery of Charles Whitman’s brain tumor modify your feelings about the senseless murders he committed? Does it affect the sentence you would find appropriate for him, had he survived that day? Does the tumor change the degree to which you consider the killings “his fault”? Couldn’t you just as easily be unlucky enough to develop a tumor and lose control of your behavior?

On the other hand, wouldn’t it be dangerous to conclude that people with a tumor are free of guilt, and that they should be let off the hook for their crimes?

As our understanding of the human brain improves, juries are increasingly challenged with these sorts of questions. When a criminal stands in front of the judge’s bench today, the legal system wants to know whether he is blameworthy. Was it his fault, or his biology’s fault?

I submit that this is the wrong question to be asking. The choices we make are inseparably yoked to our neural circuitry, and therefore we have no meaningful way to tease the two apart. The more we learn, the more the seemingly simple concept of blameworthiness becomes complicated, and the more the foundations of our legal system are strained.

If I seem to be heading in an uncomfortable direction—toward letting criminals off the hook—please read on, because I’m going to show the logic of a new argument, piece by piece. The upshot is that we can build a legal system more deeply informed by science, in which we will continue to take criminals off the streets, but we will customize sentencing, leverage new opportunities for rehabilitation, and structure better incentives for good behavior. Discoveries in neuroscience suggest a new way forward for law and order—one that will lead to a more cost-effective, humane, and flexible system than the one we have today. When modern brain science is laid out clearly, it is difficult to justify how our legal system can continue to function without taking what we’ve learned into account.

Many of us like to believe that all adults possess the same capacity to make sound choices. It’s a charitable idea, but demonstrably wrong. People’s brains are vastly different.

Who you even have the possibility to be starts at conception. If you think genes don’t affect how people behave, consider this fact: if you are a carrier of a particular set of genes, the probability that you will commit a violent crime is four times as high as it would be if you lacked those genes. You’re three times as likely to commit robbery, five times as likely to commit aggravated assault, eight times as likely to be arrested for murder, and 13 times as likely to be arrested for a sexual offense. The overwhelming majority of prisoners carry these genes; 98.1 percent of death-row inmates do. These statistics alone indicate that we cannot presume that everyone is coming to the table equally equipped in terms of drives and behaviors.

And this feeds into a larger lesson of biology: we are not the ones steering the boat of our behavior, at least not nearly as much as we believe. Who we are runs well below the surface of our conscious access, and the details reach back in time to before our birth, when the meeting of a sperm and an egg granted us certain attributes and not others. Who we can be starts with our molecular blueprints—a series of alien codes written in invisibly small strings of acids—well before we have anything to do with it. Each of us is, in part, a product of our inaccessible, microscopic history. By the way, as regards that dangerous set of genes, you’ve probably heard of them. They are summarized as the Y chromosome. If you’re a carrier, we call you a male.

Genes are part of the story, but they’re not the whole story. We are likewise influenced by the environments in which we grow up. Substance abuse by a mother during pregnancy, maternal stress, and low birth weight all can influence how a baby will turn out as an adult. As a child grows, neglect, physical abuse, and head injury can impede mental development, as can the physical environment. (For example, the major public-health movement to eliminate lead-based paint grew out of an understanding that ingesting lead can cause brain damage, making children less intelligent and, in some cases, more impulsive and aggressive.) And every experience throughout our lives can modify genetic expression—activating certain genes or switching others off—which in turn can inaugurate new behaviors. In this way, genes and environments intertwine.

When it comes to nature and nurture, the important point is that we choose neither one. We are each constructed from a genetic blueprint, and then born into a world of circumstances that we cannot control in our most-formative years. The complex interactions of genes and environment mean that all citizens—equal before the law—possess different perspectives, dissimilar personalities, and varied capacities for decision-making. The unique patterns of neurobiology inside each of our heads cannot qualify as choices; these are the cards we’re dealt.

Because we did not choose the factors that affected the formation and structure of our brain, the concepts of free will and personal responsibility begin to sprout question marks. Is it meaningful to say that Alex made bad choices, even though his brain tumor was not his fault? Is it justifiable to say that the patients with frontotemporal dementia or Parkinson’s should be punished for their bad behavior?

It is problematic to imagine yourself in the shoes of someone breaking the law and conclude, “Well, I wouldn’t have done that”—because if you weren’t exposed to in utero cocaine, lead poisoning, and physical abuse, and he was, then you and he are not directly comparable. You cannot walk a mile in his shoes.

The legal system rests on the assumption that we are “practical reasoners,” a term of art that presumes, at bottom, the existence of free will. The idea is that we use conscious deliberation when deciding how to act—that is, in the absence of external duress, we make free decisions. This concept of the practical reasoner is intuitive but problematic.

The existence of free will in human behavior is the subject of an ancient debate. Arguments in support of free will are typically based on direct subjective experience (“I feel like I made the decision to lift my finger just now”). But evaluating free will requires some nuance beyond our immediate intuitions.

Consider a decision to move or speak. It feels as though free will leads you to stick out your tongue, or scrunch up your face, or call someone a name. But free will is not required to play any role in these acts. People with Tourette’s syndrome, for instance, suffer from involuntary movements and vocalizations. A typical Touretter may stick out his tongue, scrunch up his face, or call someone a name—all without choosing to do so.

We immediately learn two things from the Tourette’s patient. First, actions can occur in the absence of free will. Second, the Tourette’s patient has no free won’t. He cannot use free will to override or control what subconscious parts of his brain have decided to do. What the lack of free will and the lack of free won’t have in common is the lack of “free.” Tourette’s syndrome provides a case in which the underlying neural machinery does its thing, and we all agree that the person is not responsible.

This same phenomenon arises in people with a condition known as chorea, for whom actions of the hands, arms, legs, and face are involuntary, even though they certainly look voluntary: ask such a patient why she is moving her fingers up and down, and she will explain that she has no control over her hand. She cannot not do it. Similarly, some split-brain patients (who have had the two hemispheres of the brain surgically disconnected) develop alien-hand syndrome: while one hand buttons up a shirt, the other hand works to unbutton it. When one hand reaches for a pencil, the other bats it away. No matter how hard the patient tries, he cannot make his alien hand not do what it’s doing. The movements are not “his” to freely start or stop.

Unconscious acts are not limited to unintended shouts or wayward hands; they can be surprisingly sophisticated. Consider Kenneth Parks, a 23-year-old Canadian with a wife, a five-month-old daughter, and a close relationship with his in-laws (his mother-in-law described him as a “gentle giant”). Suffering from financial difficulties, marital problems, and a gambling addiction, he made plans to go see his in-laws to talk about his troubles.

In the wee hours of May 23, 1987, Kenneth arose from the couch on which he had fallen asleep, but he did not awaken. Sleepwalking, he climbed into his car and drove the 14 miles to his in-laws’ home. He broke in, stabbed his mother-in-law to death, and assaulted his father-in-law, who survived. Afterward, he drove himself to the police station. Once there, he said, “I think I have killed some people … My hands,” realizing for the first time that his own hands were severely cut.

Over the next year, Kenneth’s testimony was remarkably consistent, even in the face of attempts to lead him astray: he remembered nothing of the incident. Moreover, while all parties agreed that Kenneth had undoubtedly committed the murder, they also agreed that he had no motive. His defense attorneys argued that this was a case of killing while sleepwalking, known as homicidal somnambulism.

Although critics cried “Faker!,” sleepwalking is a verifiable phenomenon. On May 25, 1988, after lengthy consideration of electrical recordings from Kenneth’s brain, the jury concluded that his actions had indeed been involuntary, and declared him not guilty.

As with Tourette’s sufferers, split-brain patients, and those with choreic movements, Kenneth’s case illustrates that high-level behaviors can take place in the absence of free will. Like your heartbeat, breathing, blinking, and swallowing, even your mental machinery can run on autopilot. The crux of the question is whether all of your actions are fundamentally on autopilot or whether some little bit of you is “free” to choose, independent of the rules of biology.

This has always been the sticking point for philosophers and scientists alike. After all, there is no spot in the brain that is not densely interconnected with—and driven by—other brain parts. And that suggests that no part is independent and therefore “free.” In modern science, it is difficult to find the gap into which to slip free will—the uncaused causer—because there seems to be no part of the machinery that does not follow in a causal relationship from the other parts.

Free will may exist (it may simply be beyond our current science), but one thing seems clear: if free will does exist, it has little room in which to operate. It can at best be a small factor riding on top of vast neural networks shaped by genes and environment. In fact, free will may end up being so small that we eventually think about bad decision-making in the same way we think about any physical process, such as diabetes or lung disease.

The study of brains and behaviors is in the midst of a conceptual shift. Historically, clinicians and lawyers have agreed on an intuitive distinction between neurological disorders (“brain problems”) and psychiatric disorders (“mind problems”). As recently as a century ago, a common approach was to get psychiatric patients to “toughen up,” through deprivation, pleading, or torture. Not surprisingly, this approach was medically fruitless. After all, while psychiatric disorders tend to be the product of more-subtle forms of brain pathology, they, too, are based in the biological details of the brain.

What accounts for the shift from blame to biology? Perhaps the largest driving force is the effectiveness of pharmaceutical treatments. No amount of threatening will chase away depression, but a little pill called fluoxetine often does the trick. Schizophrenic symptoms cannot be overcome by exorcism, but they can be controlled by risperidone. Mania responds not to talk or to ostracism, but to lithium. These successes, most of them introduced in the past 60 years, have underscored the idea that calling some disorders “brain problems” while consigning others to the ineffable realm of “the psychic” does not make sense. Instead, we have begun to approach mental problems in the same way we might approach a broken leg. The neuroscientist Robert Sapolsky invites us to contemplate this conceptual shift with a series of questions.

Acts cannot be understood separately from the biology of the actors—and this recognition has legal implications. Tom Bingham, Britain’s former senior law lord, once put it this way:

In the past, the law has tended to base its approach … on a series of rather crude working assumptions: adults of competent mental capacity are free to choose whether they will act in one way or another; they are presumed to act rationally, and in what they conceive to be their own best interests; they are credited with such foresight of the consequences of their actions as reasonable people in their position could ordinarily be expected to have; they are generally taken to mean what they say.

Whatever the merits or demerits of working assumptions such as these in the ordinary range of cases, it is evident that they do not provide a uniformly accurate guide to human behaviour.

The more we discover about the circuitry of the brain, the more we tip away from accusations of indulgence, lack of motivation, and poor discipline—and toward the details of biology. The shift from blame to science reflects our modern understanding that our perceptions and behaviors are steered by deeply embedded neural programs.

Imagine a spectrum of culpability. On one end, we find people like Alex the pedophile, or a patient with frontotemporal dementia who exposes himself in public. In the eyes of the judge and jury, these are people who suffered brain damage at the hands of fate and did not choose their neural situation. On the other end of the spectrum—the blameworthy side of the “fault” line—we find the common criminal, whose brain receives little study, and about whom our current technology might be able to say little anyway. The overwhelming majority of lawbreakers are on this side of the line, because they don’t have any obvious, measurable biological problems. They are simply thought of as freely choosing actors.

Such a spectrum captures the common intuition that juries hold regarding blameworthiness. But there is a deep problem with this intuition. Technology will continue to improve, and as we grow better at measuring problems in the brain, the fault line will drift into the territory of people we currently hold fully accountable for their crimes. Problems that are now opaque will open up to examination by new techniques, and we may someday find that many types of bad behavior have a basic biological explanation—as has happened with schizophrenia, epilepsy, depression, and mania.

Today, neuroimaging is a crude technology, unable to explain the details of individual behavior. We can detect only large-scale problems, but within the coming decades, we will be able to detect patterns at unimaginably small levels of the microcircuitry that correlate with behavioral problems. Neuroscience will be better able to say why people are predisposed to act the way they do. As we become more skilled at specifying how behavior results from the microscopic details of the brain, more defense lawyers will point to biological mitigators of guilt, and more juries will place defendants on the not-blameworthy side of the line.

This puts us in a strange situation. After all, a just legal system cannot define culpability simply by the limitations of current technology. Expert medical testimony generally reflects only whether we yet have names and measurements for a problem, not whether a problem exists. A legal system that declares a person culpable at the beginning of a decade and not culpable at the end is one in which culpability carries no clear meaning.

The crux of the problem is that it no longer makes sense to ask, “To what extent was it his biology, and to what extent was it him?,” because we now understand that there is no meaningful distinction between a person’s biology and his decision-making. They are inseparable.

While our current style of punishment rests on a bedrock of personal volition and blame, our modern understanding of the brain suggests a different approach. Blameworthiness should be removed from the legal argot. It is a backward-looking concept that demands the impossible task of untangling the hopelessly complex web of genetics and environment that constructs the trajectory of a human life.

Instead of debating culpability, we should focus on what to do, moving forward, with an accused lawbreaker. I suggest that the legal system has to become forward-looking, primarily because it can no longer hope to do otherwise. As science complicates the question of culpability, our legal and social policy will need to shift toward a different set of questions: How is a person likely to behave in the future? Are criminal actions likely to be repeated? Can this person be helped toward pro-social behavior? How can incentives be realistically structured to deter crime?

The important change will be in the way we respond to the vast range of criminal acts. Biological explanation will not exculpate criminals; we will still remove from the streets lawbreakers who prove overaggressive, underempathetic, and poor at controlling their impulses. Consider, for example, that the majority of known serial killers were abused as children. Does this make them less blameworthy? Who cares? It’s the wrong question. The knowledge that they were abused encourages us to support social programs to prevent child abuse, but it does nothing to change the way we deal with the particular serial murderer standing in front of the bench. We still need to keep him off the streets, irrespective of his past misfortunes. The child abuse cannot serve as an excuse to let him go; the judge must keep society safe.

Those who break social contracts need to be confined, but in this framework, the future is more important than the past. Deeper biological insight into behavior will foster a better understanding of recidivism—and this offers a basis for empirically based sentencing. Some people will need to be taken off the streets for a longer time (even a lifetime), because their likelihood of reoffense is high; others, because of differences in neural constitution, are less likely to recidivate, and so can be released sooner.

The law is already forward-looking in some respects: consider the leniency afforded a crime of passion versus a premeditated murder. Those who commit the former are less likely to recidivate than those who commit the latter, and their sentences sensibly reflect that. Likewise, American law draws a bright line between criminal acts committed by minors and those by adults, punishing the latter more harshly. This approach may be crude, but the intuition behind it is sound: adolescents command lesser skills in decision-making and impulse control than do adults; a teenager’s brain is simply not like an adult’s brain. Lighter sentences are appropriate for those whose impulse control is likely to improve naturally as adolescence gives way to adulthood.

Taking a more scientific approach to sentencing, case by case, could move us beyond these limited examples. For instance, important changes are happening in the sentencing of sex offenders. In the past, researchers have asked psychiatrists and parole-board members how likely specific sex offenders were to relapse when let out of prison. Both groups had experience with sex offenders, so predicting who was going straight and who was coming back seemed simple. But surprisingly, the expert guesses showed almost no correlation with the actual outcomes. The psychiatrists and parole-board members had only slightly better predictive accuracy than coin-flippers. This astounded the legal community.

So researchers tried a more actuarial approach. They set about recording dozens of characteristics of some 23,000 released sex offenders: whether the offender had unstable employment, had been sexually abused as a child, was addicted to drugs, showed remorse, had deviant sexual interests, and so on. Researchers then tracked the offenders for an average of five years after release to see who wound up back in prison. At the end of the study, they computed which factors best explained the reoffense rates, and from these and later data they were able to build actuarial tables to be used in sentencing.

Which factors mattered? Take, for instance, low remorse, denial of the crime, and sexual abuse as a child. You might guess that these factors would correlate with sex offenders’ recidivism. But you would be wrong: those factors offer no predictive power. How about antisocial personality disorder and failure to complete treatment? These offer somewhat more predictive power. But among the strongest predictors of recidivism are prior sexual offenses and sexual interest in children. When you compare the predictive power of the actuarial approach with that of the parole boards and psychiatrists, there is no contest: numbers beat intuition. In courtrooms across the nation, these actuarial tests are now used in presentencing to modulate the length of prison terms.
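The actuarial logic described here can be sketched as a simple weighted-factor model. Everything numeric below is invented for illustration: real instruments estimate weights and base rates by regression on large follow-up samples, like the roughly 23,000-offender cohort described above.

```python
import math

# Invented factor weights, for illustration only. The relative ordering
# mirrors the findings described in the text: prior offenses and sexual
# interest in children predict strongly; remorse and childhood abuse do not.
WEIGHTS = {
    "prior_sexual_offenses": 1.2,      # among the strongest predictors
    "sexual_interest_in_children": 1.0,
    "antisocial_personality": 0.4,     # somewhat predictive
    "failed_treatment": 0.3,
    "low_remorse": 0.0,                # no predictive power
    "abused_as_child": 0.0,            # no predictive power
}
BASELINE_LOG_ODDS = -2.0  # hypothetical base rate of reoffense

def reoffense_probability(factors):
    """Logistic model: sum the weights of the factors present,
    then squash the resulting log-odds into a probability."""
    log_odds = BASELINE_LOG_ODDS + sum(WEIGHTS[f] for f in factors)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Non-predictive factors leave the estimate at the base rate:
low = reoffense_probability(["low_remorse", "abused_as_child"])
# The strongest predictors push the estimate sharply upward:
high = reoffense_probability(["prior_sexual_offenses",
                              "sexual_interest_in_children"])
```

The point of the sketch is only that "numbers beat intuition" has a concrete mechanism: once weights are fit to outcome data, the same inputs always yield the same risk estimate, which is exactly what the expert guessers lacked.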

We will never know with certainty what someone will do upon release from prison, because real life is complicated. But greater predictive power is hidden in the numbers than people generally expect. Statistically based sentencing is imperfect, but it nonetheless allows evidence to trump folk intuition, and it offers customization in place of the blunt guidelines that the legal system typically employs. The current actuarial approaches do not require a deep understanding of genes or brain chemistry, but as we introduce more science into these measures—for example, with neuroimaging studies—the predictive power will only improve. (To make such a system immune to government abuse, the data and equations that compose the sentencing guidelines must be transparent and available online for anyone to verify.)

Beyond customized sentencing, a forward-thinking legal system informed by scientific insights into the brain will enable us to stop treating prison as a one-size-fits-all solution. To be clear, I’m not opposed to incarceration, and its purpose is not limited to the removal of dangerous people from the streets. The prospect of incarceration deters many crimes, and time actually spent in prison can steer some people away from further criminal acts upon their release. But that works only for those whose brains function normally. The problem is that prisons have become our de facto mental-health-care institutions—and inflicting punishment on the mentally ill usually has little influence on their future behavior. An encouraging trend is the establishment of mental-health courts around the nation: through such courts, people with mental illnesses can be helped while confined in a tailored environment. Cities such as Richmond, Virginia, are moving in this direction, for reasons of justice as well as cost-effectiveness. Sheriff C. T. Woody, who estimates that nearly 20 percent of Richmond’s prisoners are mentally ill, told CBS News, “The jail isn’t a place for them. They should be in a mental-health facility.” Similarly, many jurisdictions are opening drug courts and developing alternative sentences; they have realized that prisons are not as useful for solving addictions as are meaningful drug-rehabilitation programs.

A forward-thinking legal system will also parlay biological understanding into customized rehabilitation, viewing criminal behavior the way we understand other medical conditions such as epilepsy, schizophrenia, and depression—conditions that now allow the seeking and giving of help. These and other brain disorders find themselves on the not-blameworthy side of the fault line, where they are now recognized as biological, not demonic, issues.

Many people recognize the long-term cost-effectiveness of rehabilitating offenders instead of packing them into overcrowded prisons. The challenge has been the dearth of new ideas about how to rehabilitate them. A better understanding of the brain offers new ideas. For example, poor impulse control is characteristic of many prisoners. These people generally can express the difference between right and wrong actions, and they understand the disadvantages of punishment—but they are handicapped by poor control of their impulses. Whether as a result of anger or temptation, their actions override reasoned consideration of the future.

If it seems difficult to empathize with people who have poor impulse control, just think of all the things you succumb to against your better judgment. Alcohol? Chocolate cake? Television? It’s not that we don’t know what’s best for us, it’s simply that the frontal-lobe circuits representing long-term considerations can’t always win against short-term desire when temptation is in front of us.

With this understanding in mind, we can modify the justice system in several ways. One approach, advocated by Mark A. R. Kleiman, a professor of public policy at UCLA, is to ramp up the certainty and swiftness of punishment—for instance, by requiring drug offenders to undergo twice-weekly drug testing, with automatic, immediate consequences for failure—thereby not relying on distant abstraction alone. Similarly, economists have suggested that the drop in crime since the early 1990s has been due, in part, to the increased presence of police on the streets: their visibility shores up support for the parts of the brain that weigh long-term consequences.

We may be on the cusp of finding new rehabilitative strategies as well, affording people better control of their behavior, even in the absence of external authority. To help a citizen reintegrate into society, the ethical goal is to change him as little as possible while bringing his behavior into line with society’s needs. My colleagues and I are proposing a new approach, one that grows from the understanding that the brain operates like a team of rivals, with different neural populations competing to control the single output channel of behavior. Because it’s a competition, the outcome can be tipped. I call the approach “the prefrontal workout.”

The basic idea is to give the frontal lobes practice in squelching the short-term brain circuits. To this end, my colleagues Stephen LaConte and Pearl Chiu have begun providing real-time feedback to people during brain scanning. Imagine that you’d like to quit smoking cigarettes. In this experiment, you look at pictures of cigarettes during brain imaging, and the experimenters measure which regions of your brain are involved in the craving. Then they show you the activity in those networks, represented by a vertical bar on a computer screen, while you look at more cigarette pictures. The bar acts as a thermometer for your craving: if your craving networks are revving high, the bar is high; if you’re suppressing your craving, the bar is low. Your job is to make the bar go down. Perhaps you have insight into what you’re doing to resist the craving; perhaps the mechanism is inaccessible. In any case, you try out different mental avenues until the bar begins to slowly sink. When it goes all the way down, that means you’ve successfully recruited frontal circuitry to squelch the activity in the networks involved in impulsive craving. The goal is for the long term to trump the short term. Still looking at pictures of cigarettes, you practice making the bar go down over and over, until you’ve strengthened those frontal circuits. By this method, you’re able to visualize the activity in the parts of your brain that need modulation, and you can witness the effects of different mental approaches you might take.

If this sounds like biofeedback from the 1970s, it is—but this time with vastly more sophistication, monitoring specific networks inside the head rather than a single electrode on the skin. This research is just beginning, so the method’s efficacy is not yet known—but if it works well, it will be a game changer. We will be able to take it to the incarcerated population, especially those approaching release, to try to help them avoid coming back through the revolving prison doors.

This prefrontal workout is designed to better balance the debate between the long- and short-term parties of the brain, giving the option of reflection before action to those who lack it. And really, that’s all maturation is. The main difference between teenage and adult brains is the development of the frontal lobes. The human prefrontal cortex does not fully develop until the early 20s, and this fact underlies the impulsive behavior of teenagers. The frontal lobes are sometimes called the organ of socialization, because becoming socialized largely involves developing the circuitry to squelch our first impulses.

This explains why damage to the frontal lobes unmasks unsocialized behavior that we would never have thought was hidden inside us. Recall the patients with frontotemporal dementia who shoplift, expose themselves, and burst into song at inappropriate times. The networks for those behaviors have been lurking under the surface all along, but they’ve been masked by normally functioning frontal lobes. The same sort of unmasking happens in people who go out and get rip-roaring drunk on a Saturday night: they’re disinhibiting normal frontal-lobe function and letting more-impulsive networks climb onto the main stage. After training at the prefrontal gym, a person might still crave a cigarette, but he’ll know how to beat the craving instead of letting it win. It’s not that we don’t want to enjoy our impulsive thoughts (Mmm, cake), it’s merely that we want to endow the frontal cortex with some control over whether we act upon them (I’ll pass). Similarly, if a person thinks about committing a criminal act, that’s permissible as long as he doesn’t take action.

For the pedophile, we cannot hope to control whether he is attracted to children. That he never acts on the attraction may be the best we can hope for, especially as a society that respects individual rights and freedom of thought. Social policy can hope only to prevent impulsive thoughts from tipping into behavior without reflection. The goal is to give more control to the neural populations that care about long-term consequences—to inhibit impulsivity, to encourage reflection. If a person thinks about long-term consequences and still decides to move forward with an illegal act, then we’ll respond accordingly. The prefrontal workout leaves the brain intact—no drugs or surgery—and uses the natural mechanisms of brain plasticity to help the brain help itself. It’s a tune-up rather than a product recall.

We have hope that this approach represents the correct model: it is grounded simultaneously in biology and in libertarian ethics, allowing a person to help himself by improving his long-term decision-making. Like any scientific attempt, it could fail for any number of unforeseen reasons. But at least we have reached a point where we can develop new ideas rather than assuming that repeated incarceration is the single practical solution for deterring crime.

Along any axis that we use to measure human beings, we discover a wide-ranging distribution, whether in empathy, intelligence, impulse control, or aggression. People are not created equal. Although this variability is often imagined to be best swept under the rug, it is in fact the engine of evolution. In each generation, nature tries out as many varieties as it can produce, along all available dimensions.

Variation gives rise to lushly diverse societies—but it serves as a source of trouble for the legal system, which is largely built on the premise that humans are all equal before the law. This myth of human equality suggests that people are equally capable of controlling impulses, making decisions, and comprehending consequences. While admirable in spirit, the notion of neural equality is simply not true.

As brain science improves, we will better understand that people exist along continua of capabilities, rather than in simplistic categories. And we will be better able to tailor sentencing and rehabilitation for the individual, rather than maintain the pretense that all brains respond identically to complex challenges and that all people therefore deserve the same punishments. Some people wonder whether it’s unfair to take a scientific approach to sentencing—after all, where’s the humanity in that? But what’s the alternative? As it stands now, ugly people receive longer sentences than attractive people; psychiatrists have no capacity to guess which sex offenders will reoffend; and our prisons are overcrowded with drug addicts and the mentally ill, both of whom could be better helped by rehabilitation. So is current sentencing really superior to a scientifically informed approach?

Neuroscience is beginning to touch on questions that were once only in the domain of philosophers and psychologists, questions about how people make decisions and the degree to which those decisions are truly “free.” These are not idle questions. Ultimately, they will shape the future of legal theory and create a more biologically informed jurisprudence.


Can adults get CVI?

Adults can also develop problems with their vision after a traumatic brain injury (such as a head injury or stroke that damages the brain). Veterans may be at higher risk for visual problems as a result of combat injuries.

These problems are sometimes called acquired CVI, but it isn’t the same as CVI. A brain injury that happens later in life usually has different symptoms than CVI, which is caused by an injury early in life.

If you or a loved one has vision problems because of a brain injury, ask the doctor about vision rehabilitation and other support services. Vision rehabilitation can help people with brain injuries make the most of their vision.


On TV and in movies, we’ve all seen doctors stick an X-ray up on the lightbox and play out a dramatic scene: “What’s that dark spot, doctor?” “Hm…”

In reality, though, a modern medical scan contains so much data that no single pair of doctor’s eyes could possibly interpret it. The brain scan known as fMRI, for functional magnetic resonance imaging, produces a massive data set that can be understood only with custom data-analysis software. Armed with this analysis, neuroscientists have used the fMRI scan to produce a series of paradigm-shifting discoveries about our brains.

Now, an unsettling new report, which is causing waves in the neuroscience community, suggests that fMRI’s custom software can be deeply flawed — calling into question many of the most exciting findings in recent neuroscience.

The problem researchers have uncovered is simple: the computer programs designed to sift through the images produced by fMRI scans have a tendency to suggest differences in brain activity where none exist. For instance, humans who are resting, not thinking about anything in particular, not doing anything interesting, can deliver spurious results of differences in brain activity. It’s even been shown to indicate brain activity in a dead salmon, whose stilled brain lit up an MRI as if it were somehow still dreaming of a spawning run.

The report throws into question the results of some portion of the more than 40,000 studies that have been conducted using fMRI, studies that plumb the brainy depths of everything from free will to fear. And scientists are not quite sure how to recover.

“It’s impossible to know how many fMRI studies are wrong, since we do not have access to the original data,” says computer scientist Anders Eklund of Linkoping University in Sweden, who conducted the analysis.

How it should have worked: Start by signing up subjects. Scan their brains while they rest inside an MRI machine. Then scan their brains again when exposed to pictures of spiders, say. Those subjects who are afraid of spiders will have blood rush to those regions of the brain involved in thinking and feeling fear, because such thoughts or feelings are suspected to require more oxygen. With the help of a computer program, the MRI machine then registers differences in hemoglobin, the iron-rich molecule that makes blood red and carries oxygen from place to place. (That’s the functional in fMRI.) The scan then looks at whether those hemoglobin molecules are still carrying oxygen to a given place in the brain, or not, based on how the molecules respond to the powerful magnetic fields. Scan enough brains and see how the fearful differ from the fearless, and perhaps you can identify the brain regions or structures associated with thinking or feeling fear.

That’s the theory, anyway. In order to detect such differences in brain activity, it would be best to scan a large number of brains, but the difficulty and expense often make this impossible. A single MRI scan can cost around $2,600, according to a 2014 NerdWallet analysis. Further, the differences in the blood flow are often tiny. And then there’s the fact that computer programs have to sift through the images of the 1,200 or so cubic centimeters of gelatinous tissue that make up each individual brain and compare them to others, a big-data analysis challenge.

Eklund’s report shows that the assumptions behind the main computer programs used to sift such big fMRI data have flaws, as turned up by nearly 3 million random evaluations of the resting brain scans of 499 volunteers from Cambridge, Massachusetts; Beijing; and Oulu, Finland. One program turned out to have a 15-year-old coding error (which has now been fixed) that caused it to detect too much brain activity. This highlights the challenge of researchers working with computer code that they are not capable of checking themselves, a challenge not confined just to neuroscience.

The brain is even more complicated than we thought. Worse, Eklund and his colleagues found that all the programs assume that brains at rest have the same response to the jet-engine roar of the MRI machine itself as well as whatever random thoughts and feelings occur in the brain. Those assumptions appear to be wrong. The brain at rest is “actually a bit more complex,” Eklund says.

More specifically, the white matter of the brain appears to be underrepresented in fMRI analyses while another specific part of the brain — the posterior cingulate, a region in the middle of the brain that connects to many other parts — shows up as a “hot spot” of activity. As a result, the programs are more likely to single it out as showing extra activity even when there is no difference. “The reason for this is still unknown,” Eklund says.

Overall, the programs had a false positive rate — detecting a difference where none actually existed — of as much as 70 percent.

Unknown unknowns: This does not mean all fMRI studies are wrong. Co-author and statistician Thomas Nichols of the University of Warwick calculates that some 3,500 studies may be affected by such false positives, and such false positives can never be eliminated entirely. But a survey of 241 recent fMRI papers found 96 that could have even worse false-positive rates than those found in this analysis.

“The paper makes an important criticism,” says Nancy Kanwisher, a neuroscientist at MIT (TED Talk: A neural portrait of the human mind), though she points out that it does not undermine those fMRI studies that do not rely on these computer programs.

Nonetheless, it is worrying. “I think the fallout has yet to be fully evaluated. It appears to apply to quite a few studies, certainly the studies done in a generic way that is the bread-and-butter of fMRI,” says Douglas Greve, a neuroimaging specialist at Massachusetts General Hospital. What’s needed is more scrutiny, Greve suggests.

Another argument for open data. Eklund and his colleagues were only able to discover this methodological flaw thanks to the open sharing of group brain scan data by the 1,000 Functional Connectomes Project. Unfortunately, such sharing of brain scan data is more the exception than the norm, which hinders other researchers attempting to re-create the experiment and replicate the results. Such replication is a cornerstone of the scientific method, ensuring that findings are robust. Eklund, for one, therefore encourages neuroimagers to “share their fMRI data, so that other researchers can replicate their findings and re-analyze the data several years later.” Only then can scientists be sure that the undiscovered activity of the human brain is truly revealed … and that dead salmon are not still dreaming.

About the author

David Biello is an award-winning journalist writing most often about the environment and energy. His book "The Unnatural World" publishes November 2016. It's about whether the planet has entered a new geologic age as a result of people's impacts and, if so, what we should do about this Anthropocene. He also hosts documentaries, such as "Beyond the Light Switch" and the forthcoming "The Ethanol Effect" for PBS. He is the science curator for TED.


Limitations

Measuring the behavior of two live participants, while rich in data, is not without its limitations. For instance, participants who wore the sunglasses (participant B) verbally mentioned that they were uncertain of the extent to which the sunglasses disguised their eyes from the other participant. Based on informal conversations, the majority of participants who wore the sunglasses assumed their eyes were quite visible and thus believed they could send gaze information to the other person. However, this should have been consistent across participants, with each either believing that their eyes were visible or not. Another potential limitation was the within-subjects design, such that participants took part in all three conditions: clear, degraded, and blocked. While the order of conditions was counterbalanced, previous (unpublished) research in our lab has shown that participants habituate to eye contact over time and show less and less arousal with repeated exposure. Thus, our data may have been stronger if we had enough participants to analyze the data as a between-subjects design. Lastly, in the blindfold condition it was assumed that the blindfold would prevent all gaze signals from being sent and received between dyads because participant B’s eyes were entirely concealed. Since both participants were expected to have no ability to send or receive gaze information, no differences in the SCR between partners were expected to emerge in the eye contact trials. As mentioned in the results, there was a difference between participants’ arousal levels within the blindfold condition, such that participant B (wearing the blindfold) showed significantly higher arousal across all gaze trials. One possibility is that simply being blindfolded increased arousal because of the knowledge of being the focus of someone’s attention. Thus, being the object of someone’s attention could have been driving the arousal response.


It's (Not) All in the Eyes: Eye Movements Don't Indicate Lying

The direction of eye movements may not indicate lying, says new research.


July 12, 2012 -- Your eyes may not say it all when it comes to lying, according to a new study.

Despite the common belief that shifty eyes -- moving up and to the right -- indicate deception, researchers found no connection between where the eyes move and whether a person is telling the truth.

In three separate experiments, they tested whether people who lied tended to move their eyes up and to the right, more than people who were not lying. They found no association between which direction the eyes moved and whether participants were telling the truth.

"This is in line with findings from a considerable amount of previous work showing that facial clues (including eye movements) are poor indicators of deception," wrote the authors, led by Richard Wiseman of the University of Hertfordshire in the United Kingdom. The study is published in the online journal PLoS ONE.

Howard Ehrlichman, a professor emeritus of psychology at Queens College of the City University of New York, has done considerable research on eye movements, and said he also never found any link between the direction of eye movements and lying.

"This does not mean that the eyes don't tell us anything about what people are thinking," he said. "I found that while the direction of eye movements wasn't related to anything, whether people actually made eye movements or not was related to aspects of things going on in their mind."

He said that people tend to make eye movements -- about one per second on average -- when they are retrieving information from their long-term memory.

"If there's no eye movement during a television interview, I'm convinced that the person has rehearsed or repeated what they are going to say many times and don't have to search for the answer in their long-term memories."

He said he's not sure where the notion about directionality of eye movement and lying came from, but said it has spread despite little scientific evidence for it.

The study authors attribute part of the popularity of the belief that looking up to the right indicates lying while looking up to the left indicates truthfulness to many practitioners of neuro-linguistic programming (NLP). NLP -- controversial among scientists -- is a therapy approach that revolves around the connection between neurological processes, language and behavior.

"Many NLP practitioners claim that a person's eye movements can reveal a useful insight into whether they are lying or telling the truth," they wrote.

Some NLP practitioners dispute this assertion.

"We don't believe that eye movements are an indication of lying and have never taught it as such. I believe that someone started the idea as a marketing ploy. Perhaps they really believed it," said Steven Leeds, a co-director of the NLP Training Center of New York. "Eye movements, as we teach it, indicate how a person is processing information, whether it be visual, auditory, or kinesthetic, and whether it is remembered or created."

Others say they believe that the direction of eye movements can give away a liar.

Donald Sanborn is president of Credibility Assessment Technologies, a company specializing in lie-detection technology that recently licensed a new technique -- called ocular-motor detection testing -- based on research done by psychologists from the University of Utah that utilizes a combination of eye tracking and other variables to determine whether a person is lying.

"When a person is lying, their emotional load goes up, which causes changes in pupil diameter and gaze position," said Sanborn. The device also measures how long it takes to read and answer certain questions. Pupil size, gaze position and the length of time it takes to respond to questions reflect that the brain is working harder, which the psychologists determined is a sign of lying.

This device is designed to be used for pre-employment screening, but it is not legal for use by private companies, so the company is working with the U.S. government and with foreign firms. He estimates its accuracy to be around 85 to 87 percent.

But the authors of the new study said their work offers proof that no relationship exists between the direction of eye movements and truthfulness.

"Future research could focus on why the belief has become so widespread," they wrote.


Perception as Inference

Predictive Processing has its roots in the work of the German physician and physicist Hermann von Helmholtz (Friston et al., 2006). Helmholtz recognized that incoming sensory data are ambiguous (Helmholtz, 1867). For a given sensation there are multiple potential causes in the world. For example, an orange scent could be caused by orange soda, air freshener, or an actual orange. And contrary to common sense, we do not have direct access to the world. Consider vision. Light does not enter the brain. The inside of the skull is dark. Instead the retina converts photons of light into the firing of neurons. In fact every sensory receptor, from vision, to touch, to smell, has the same type of output. This is true for the interoceptive senses such as proprioception, hunger, and thirst as well. In our experience of the world, all the brain has to work with are patterns of firing neurons. As Immanuel Kant suggested, all we can know is the “phenomenon,” that is, the effect of the world upon us, i.e., patterns of firing neurons. We can never know “the thing in itself,” that is, the actual causes in the world of the effects we experience (Kant, 1781). With this observation Kant anticipated the Markov blanket, a concept central to Predictive Processing. The Markov blanket is essentially the boundary between a system and everything else that is not that system, expressed in mathematical terms (Yufik and Friston, 2016). Given the ambiguity of sensory data and the impossibility of knowing “the thing in itself,” Helmholtz concluded that perception is an act of unconscious inference. We cannot know directly what lies on the other side of the Markov blanket (i.e., sensory boundary) that is constituted by our sensory epithelia. When we perceive, the brain is making a guess about the state of the world. This process is automatic, rapid, and unconscious (Tenenbaum et al., 2011). As a result we are unaware that a sophisticated process has occurred.
We are only aware of the product, what the brain has calculated is the most likely cause, the best guess. However we do not experience this as a probability or a guess, but rather as a fact (Dehaene and Changeux, 2011 Hohwy, 2013) “I see an orange.”

Helmholtz’s hypothesis of perception as inference has significant implications for brain function. If perception is an act of inference, the brain must have information to serve as the basis for that inference: it must have a model of the world a priori, before it encounters the world. Dreaming during rapid eye movement (REM) sleep illustrates the brain’s ability to generate perceptual hypotheses in the absence of any sensory data, i.e., from an a priori model (Hobson et al., 2014). In the parlance of Predictive Processing this is called a prior probability, or “prior,” after Bayes’ theorem (Geisler and Diehl, 2003). The prior probability is the likelihood of a proposition before considering empirical data from the senses. But where does such a prior probability or model come from?
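The “unconscious inference” described above can be sketched as Bayesian inference: a prior over possible causes is combined with the likelihood of the sensory data under each cause to yield a posterior, and the perceived cause is the best guess. The sketch below uses the orange-scent example; the specific causes, prior values, and likelihood values are made-up numbers for illustration, not figures from the source.

```python
# Illustrative Bayesian inference over possible causes of an "orange scent"
# sensation. All probabilities here are hypothetical, chosen for demonstration.

# Prior probability of each cause, before any sensory data arrives.
priors = {
    "orange soda": 0.2,
    "air freshener": 0.3,
    "actual orange": 0.5,
}

# Likelihood P(orange scent | cause): how strongly each cause predicts the data.
likelihoods = {
    "orange soda": 0.6,
    "air freshener": 0.9,
    "actual orange": 0.95,
}

def posterior(priors, likelihoods):
    """Bayes' theorem: P(cause | data) = P(data | cause) * P(cause) / P(data)."""
    unnormalized = {c: priors[c] * likelihoods[c] for c in priors}
    evidence = sum(unnormalized.values())  # P(data), the normalizing constant
    return {c: p / evidence for c, p in unnormalized.items()}

post = posterior(priors, likelihoods)
# The "percept" is the maximum-a-posteriori cause -- the brain's best guess.
best_guess = max(post, key=post.get)
```

Note that the posterior depends on both terms: a strong prior can outweigh a moderately better likelihood, which is one way Predictive Processing explains why prior models shape what we perceive.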


How to Bliss Out on Your Beach Trip

As much as we’d happily set up residence beachside on a Hawaiian island, you don’t actually have to live by the water to reap the benefits. It’s all about taking advantage of the time you spend there by practicing mindfulness.

Mindfulness has tons of mental health benefits, including stress relief, says Nazari. A study published in the journal Psychiatry Research found that the brains of those who completed an eight-week meditation course changed in a few ways. For starters, the area of the brain responsible for stress shrank. “This means that we become more resilient to handling stress and that stress doesn’t rattle us as much,” explains Nazari.

On the flip side, the parts of the brain responsible for memory, reasoning and empathy grew. An easier time thinking, focusing and connecting? That’s a pretty sweet deal — and just might make for a more productive Monday morning.




Comments:

  1. Mehdi

    I apologize for interfering, but could you please describe this in a little more detail?

  2. Azizi

    I think you are mistaken. I can prove it. Write to me in a PM and we will talk.

  3. Mahpee

    Wonderful, very useful information.

  4. Bond

    Thank you so much for the information; this is really worth keeping in mind. By the way, I could not find anything sensible on this topic anywhere on the net, although in real life I have often come across situations where I did not know how to behave or what to say when it came to something like this.

