New Study Suggests You Can Reprogram Your Brain in Four Days!

Many previous studies have shown, through the use of neuroimaging, that meditation can change the brain. But most of those studies have looked at medium- to long-term meditators. Some looked at monks who had meditated for decades, and some looked at new meditators who had practiced daily for six to eight weeks. At least this much meditation practice was thought to be necessary to create measurable changes in the brain.

But a new study at the University of North Carolina at Charlotte suggests that brain changes may happen much more quickly, in as few as four days!

Student volunteers were randomly assigned to either practice mindfulness meditation or listen to a reading of J.R.R. Tolkien’s The Hobbit, for 20 minutes a day, for four days. The groups were tested using behavioral measures of mood, memory, visual attention, attention processing, and vigilance. The meditative practice was a simple mindfulness technique: participants were told to focus on their breath and, when thoughts distracted them, to notice the thought and then refocus on the breathing.

What were the results? Both groups improved in mood, but only the meditation group improved in cognitive measures. In one challenging mental task, the meditation group did 10 times better than the reading group. It appeared that meditation improved the ability to sustain attention and vigilance.

This is an exciting study, which hopefully will be replicated and expanded with neuroimaging to see whether there are functional or structural brain changes after brief meditation practice.

To summarize, it appears that a brief four-day practice of mindfulness meditation can significantly improve cognitive functioning that is related to attention and vigilance.

How lasting is this effect? Does it wear off in hours, days, or weeks? What is the dose-response relationship between meditation and improvement in cognitive functioning? For instance, would eight days of meditation practice create even more cognitive improvement?

In any case, it’s worth practicing meditation at least briefly to see its effects on your mind and your emotions. Commit to 20 minutes a day for one week, and see what happens for you.

Now I’m off to meditate…

Copyright © 2010 Andrew Gottlieb, Ph.D. /The Psychology Lounge/TPL Productions

How to Read Media Coverage of Scientific Research: Sorting Out the Stupid Science from Smart Science

Reading today’s headlines I saw an interesting title, “New Alzheimer’s Gene Identified.”

I was intrigued. Discovering a gene that caused late-onset Alzheimer’s would be a major scientific breakthrough, perhaps leading to effective new treatments. So I read the article carefully.

To summarize the findings, a United States research team looked at the entire genome of 2269 people who had late onset Alzheimer’s and 3107 people who did not. They were looking for differences in the genome.

In the people who had late-onset Alzheimer’s, 9% had a variation in the gene MTHFD1L, which lives on chromosome 6. Of those who did not have late-onset Alzheimer’s, 5% had this variant.

So is this an important finding? The article suggested it was. But I think this is a prime example of bad science reporting. For instance, they went on to say that this particular gene is involved with the metabolism of folate, which influences levels of homocysteine. It’s a known fact that levels of homocysteine can affect heart disease and Alzheimer’s. So is it the gene, or is it the level of homocysteine?

The main reason I consider this an example of stupid science reporting is that the difference is trivial. Let me give you an example of a better way to report it. The researchers could instead have reported that among people with late-onset Alzheimer’s, 91% had no variation in this gene, while among people without late-onset Alzheimer’s, 95% had the normal gene. But that doesn’t sound very impressive, and it calls into question whether measurement error could account for the difference.

So this very expensive genome test yields absolutely no predictive value in terms of who will develop Alzheimer’s and who will not. There is a known genetic variant, called APOE, which lives on chromosome 19. Forty percent of those who develop late-onset Alzheimer’s have this gene, while only 25 to 30% of the general population has it. So even this gene, which has a much stronger association with Alzheimer’s, isn’t a particularly useful clinical test.
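To see why a 9% versus 5% difference has so little predictive value, consider a quick Bayes’ rule sketch. The carrier rates come from the article; the 10% baseline lifetime risk is purely an assumed, illustrative number, not a figure from the study:

```python
# Rough Bayes' rule calculation: how much does carrying the MTHFD1L
# variant change the probability of developing late-onset Alzheimer's?
# The 9% and 5% carrier rates come from the article; the 10% baseline
# lifetime risk is a hypothetical round number, for illustration only.

p_variant_given_ad = 0.09       # carriers among late-onset Alzheimer's patients
p_variant_given_healthy = 0.05  # carriers among unaffected people
prevalence = 0.10               # assumed baseline risk (illustrative)

# Total probability of carrying the variant, then Bayes' rule.
p_variant = (p_variant_given_ad * prevalence
             + p_variant_given_healthy * (1 - prevalence))
p_ad_given_variant = p_variant_given_ad * prevalence / p_variant

print(f"Baseline risk:            {prevalence:.1%}")          # 10.0%
print(f"Risk if carrying variant: {p_ad_given_variant:.1%}")  # 16.7%
```

Even granting the assumed prevalence, a positive test for the variant moves someone’s estimated risk only from 10% to about 17%, which is nowhere near a useful clinical test.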

The other reason this is an example of stupid science is that basically, this is a negative finding. To scan the entire human genome looking for differences between normal elderly people and elderly people with Alzheimer’s, and discover only a subtle and tiny difference, must’ve been a huge disappointment for the researchers. If I had been the journal editor reviewing this study, I doubt I would’ve published it. Imagine a similar study of an antidepressant, which found that in the antidepressant group, 9% of people got better, and in the placebo group 5% got better. I doubt this would get published.

Interestingly enough, the study hasn’t been published yet, but is being presented as a paper at the April 14 session of the American Academy of Neurology conference in Toronto. This is another clue to reading scientific research. If it hasn’t been published in a peer-reviewed scientific journal, be very skeptical of the research. Good research usually gets published in top journals, and research that is more dubious often is presented at conferences but never published. It’s much easier to get a paper accepted for a conference than in a science journal.

It’s also important when reading media coverage of scientific research to read beyond the headlines, and to look at the actual numbers that are being reported. If they are very small numbers, or very small differences, be very skeptical of whether they mean anything at all.

As quoted in the article, “While lots of genetic variants have been singled out as possible contributors to Alzheimer’s, the findings often can’t be replicated or repeated, leaving researchers unsure if the results are a coincidence or actually important,” said Dr. Ron Petersen, director of the Mayo Alzheimer’s Disease Research Center in Rochester, Minnesota.

So to summarize, to be a savvy consumer of media coverage of scientific research:

1. Be skeptical of media reports of scientific research that hasn’t been published in top scientific journals. Good research gets published in peer-reviewed journals, which means that other scientists skeptically read the article before it’s published.

2. Read below the headlines and look for actual numbers that are reported, and apply common sense to these numbers. If the differences are very small in absolute numbers, it often means that the research has very little clinical usefulness. Even if the differences are large in terms of percentages, this doesn’t necessarily mean that they are useful findings.

An example would be a finding that drinking a particular type of bourbon increases a very rare type of brain tumor from one in 2,000,000 to three in 2,000,000. Reported in percentage terms, the headline would say that drinking this bourbon triples the risk of a brain tumor, a 200% increase, which would definitely put me and many other people off drinking bourbon. (By the way, this is a completely fictitious example.) But if you compared the risk to something that people do every day, such as driving, and revealed that driving is 1,000 times riskier than drinking this type of bourbon, it would paint the research in a very different light.
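The arithmetic behind that contrast is easy to check. Note that going from one case to three cases per two million is a tripling of risk, which is a 200% relative increase, but only a one-in-a-million absolute increase:

```python
# Relative vs. absolute risk for the (completely fictitious) bourbon example.
baseline_risk = 1 / 2_000_000   # brain tumor risk among non-drinkers
exposed_risk = 3 / 2_000_000    # risk among drinkers of this bourbon

relative_increase = (exposed_risk - baseline_risk) / baseline_risk
absolute_increase = exposed_risk - baseline_risk

print(f"Relative increase: {relative_increase:.0%}")            # 200% (tripled)
print(f"Absolute increase: 1 in {1 / absolute_increase:,.0f}")  # 1 in 1,000,000
```

The same data yield a terrifying relative headline and a negligible absolute one, which is exactly why it pays to look past the percentages.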

3. Be very skeptical of research that has not been reproduced or replicated by other scientists. There’s a long history in science of findings that cannot be reproduced or replicated by other scientists, and therefore don’t hold up as valid research findings.

4. On the web, be very skeptical of research that’s presented on sites that sell products. Unfortunately, a common strategy for selling products, particularly vitamin supplements, is to present pseudoscientific research that supports the use of the supplement. In general, any site that sells a product cannot be relied on for objective information about that product. It’s much better to go to primarily informational sites like WebMD or the Mayo Clinic site, or in some cases to go directly to the original scientific articles by using PubMed.

So be a smart consumer of science, so that you can tell the difference between smart science and stupid science.


Hacking Your Next Job Interview: The Real Secret to Getting Hired

This post is for my oldest niece, who told me she had an interview for a job, and wondered if there were any “psychological tricks” for doing well in an interview. I thought about it, and realized she wanted help with some Jobhacks™.

It turns out that there are some tricks. These are written about in a wonderful new book called 59 Seconds: Think a Little, Change a Lot by Richard Wiseman. I’ll be blogging more on the book, which is a concise, science-based set of tips for improving your life, and being happier, healthier, and more productive. I highly recommend the book. It’s a fun, easy read, full of great research and life tips.

(Full Disclosure: If you click on the link, and buy, PsychologyLounge will get a small payment, so you’ll be supporting this blog. If you don’t want to support this blog, just log into your own Amazon account, and search for the book.)

So let’s review conventional wisdom first. Job interviews are decided by academic training and work experience, right? The candidate who gets the job is the one with the best academic credentials and the most impressive work history, correct?

That’s what most people think, and they are wrong!

Chad Higgins and Timothy Judge did research looking at the factors that influence interviewers’ decisions about job candidates. I won’t bore you with the details of their research, but I will tell you what they found. First, they found that the qualifications and work experience of the candidate didn’t matter.

It turns out that the most important predictor of who will be offered the job was a magical and mysterious quality: the pleasantness and likability of the candidate!

So now you’re thinking: “Great, I need a personality transplant in order to become nicer and more likable. Thanks, Gottlieb, years of therapy for that one no doubt!”

No, you don’t need a personality transplant. You just need to follow a simple set of behavioral guidelines.

What were the behaviors that communicated likability? They were very simple:

1. Small talk. Talk about something that interests both you and the interviewer, even if it’s not about work. You notice a picture of them fishing, and you share fishing tales.

2. Praise. Find something you like about the organization they represent and compliment it. Or praise or compliment the interviewer in a genuine way.

3. Enthusiasm. Show your excitement about the job being offered and the company.

4. Connection. Smile and make eye contact.

5. Involvement. Show interest in the person interviewing you. Ask smart questions about the type of person they are looking for, and how the job fits into the organization.

That’s it. Do this and you will greatly increase your likability, and with it, your chance of getting a job. I suspect this would work pretty well in other interview situations too, like blind dates, but that’s more research…

P.S. Two more quick tips from 59 Seconds. If you have weaknesses that will most likely come up, bring them up early in the interview; this increases your credibility and gives you time to use likability to your advantage. If you have a particular strength, share it later in the interview, in order to look more humble and end on a strong note.


New Study Shows Antidepressant Medication Fails to Help Most Depressed Patients

A very interesting study recently published in the Journal of the American Medical Association (JAMA) demonstrated very clearly that when it comes to antidepressant medication, the Emperor is wearing few if any clothes! The researchers did what is called a meta-study, or meta-analysis. They searched the research literature for all placebo-controlled studies of antidepressants used for depression. That means the studies had to include random assignment to either a medication group or a placebo (sugar pill) group. They eliminated studies that used a placebo washout condition. (In such studies, patients are first given a placebo, and any patient who improves by 20% or more while taking the placebo is excluded.) After excluding all studies that didn’t meet their criteria, they were left with six studies of 738 people.

Based on scores on the Hamilton Depression Rating Scale (HDRS), the researchers divided the patients into mild to moderately depressed, severely depressed, and very severely depressed. The HDRS is a 17-item scale, filled out by a psychologist or psychiatrist, that measures various aspects of depression; it is used in most studies of depression. They then analyzed the response to antidepressant medication based on how severe the initial depression was.

The two antidepressants studied were imipramine and paroxetine (Paxil). Imipramine is an older, tricyclic antidepressant, and Paxil is a more modern SSRI antidepressant.

What did they find? They were looking at the size of the difference between the medication groups and the placebo groups. Rather than doing the typical thing of just looking at statistical significance, which is simply a measure of whether the difference could be explained by chance, they looked at clinical significance. They used the definition adopted by NICE (the National Institute for Health and Clinical Excellence, in England): an effect size of 0.50, or a difference of 3 points on the HDRS. This is defined as a medium effect size.
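For readers unfamiliar with effect sizes: the measure being used here is Cohen’s d, the difference between the group means divided by their pooled standard deviation. A minimal sketch, using made-up HDRS numbers for illustration (the paper reports the resulting effect sizes, not these raw values):

```python
# Cohen's d: standardized mean difference between two groups.

def cohens_d(mean_treatment, mean_control, pooled_sd):
    """Difference in means, expressed in units of the pooled standard deviation."""
    return (mean_treatment - mean_control) / pooled_sd

# Hypothetical example: drug patients drop 12 HDRS points, placebo patients
# drop 9, with a pooled standard deviation of 6 points.
print(cohens_d(12.0, 9.0, 6.0))  # 0.5 -- exactly the "medium" threshold
```

Expressing differences in standard-deviation units is what lets the researchers compare results across studies that used different samples and conditions.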

What they found was very disheartening to those who use antidepressant medications in their practices. They divided the patients into three groups based on their initial HDRS scores: mild to moderate depression (HDRS 18 or less), severe depression (HDRS 19 to 22), and very severe depression (HDRS 23 or greater).

For the mild to moderately depressed patients, the effect size was d = 0.11, and for severely depressed patients the effect size was d = 0.17. Both of these are below the standard threshold for even a small effect, which is 0.20. For the patients in the very severe group, the effect size was d = 0.47, which is just below the accepted value of 0.50 for a medium effect size.

When they did further statistical analysis, they found that in order to meet the NICE criterion of a 3-point difference, patients had to have an initial HDRS score of 25 or above. To meet the criterion of a 0.50, or medium, effect size, they likewise had to have a score of 25 or above, and for a large effect size, 27 or above.

What does this all mean for patient care? It means that for the vast majority of clinically depressed patients who fall below the very severely depressed range, antidepressant medications most likely won’t help. The sadder news is that even for the very severely depressed, medications have a very modest effect. On the HDRS, the normal, undepressed range is 0 to 7. The very severely depressed patients had scores of 25 or above, and a medium effect size was a drop of 3 or more points compared to placebo patients. Looking at the one graph in the paper that shows the actual drops in HDRS scores, the medication group had a mean drop of 12 points when their initial score was 25. That means they went from 25 to 13, which is still in the depressed range, although only mildly depressed. Patients who initially scored 38 dropped by roughly 20 points, ending at 18, which is still pretty depressed. And the placebo group had only slightly worse results.

One interesting thing is how strong the placebo effects are in these studies. It seems that for depressions less serious than very severe, placebo pills work as well as antidepressant medication.  Is this because antidepressants don’t work very well, or because placebos work too well? It’s hard to know. Maybe doctors should give their patients sugar pills, and call the new drug Eliftimood!

So in summary, here are the main observations I make from this study.

  • If you are very severely depressed, antidepressants may help, and are worth trying.
  • If you are mildly, moderately, or even severely depressed, there is little evidence that antidepressants will help more than a placebo. You would be better off with CBT (Cognitive Behavioral Therapy), which has a proven track record with less severe depressions and has no side effects.
  • Interestingly, CBT is less effective for the most severe depressions, so for these kinds of depressions medication treatment makes a lot of sense.
  • If you are taking antidepressants and having good results, don’t change what you are doing. You may be wired in such a way that you are a good responder to antidepressants.
  • If you have been taking antidepressants for mild to severe (but not very severe) depression and not getting very good results, this is consistent with the research, and you might want to discuss alternative treatments such as CBT with your doctor. Don’t just stop the medications, as this can produce withdrawal symptoms; work with your doctor to taper off them.
  • Even in very severely depressed patients, for whom antidepressants have some effects, they may only get the patient to a state of moderate depression, but not to “cure”. To get to an undepressed, normal state, behavioral therapy may be necessary in addition to medications.
  • How do you find out how depressed you are? Unfortunately there is no online version of the HDRS for direct comparison. You may want to see a professional psychologist or psychiatrist if you think you might be depressed, and ask them to administer the HDRS to you.  There are also online depression tests, such as here and here. If you score in the highest ranges you might want to consider trying antidepressant medications, if you score lower you might want to first try CBT.
  • The most important thing is not to ignore depression, as it tends to get worse over time. Get some help, talk to a professional.

I’m off to take my Obecalp pills now, as it’s been raining here in Northern California for more than a week, and I need a boost in my mood. (Hint: what does Obecalp spell backwards?)
