How to Meditate

Meditation is simple but not easy.


Meditation: Simple But Not Easy

  1. Sit comfortably, in a chair or on a pillow on the floor. Don’t lie down; most of us sleep-deprived human beings will fall asleep very quickly if we try to meditate lying down.
  2. Turn off your phone and other devices that might interrupt you. Close your eyes or at least let your gaze fall so that you’re not looking at anything in particular.
  3. Take several deep breaths and focus on the breath in one of two places: where it enters and leaves your nostrils, or the rise and fall of your chest. Whichever spot you choose will be the primary anchor of your mindful attention during meditation.
  4. Your mind will wander away from your breath. Be gentle with yourself about this. I often suggest that people simply note, in a gentle internal tone, the type of distraction. If it’s a thought, you can say to yourself, “thinking, thinking.” If it’s a sound, “hearing, hearing.” If it’s a sensation, “feeling, feeling.” Once you have gently noted the type of distraction, simply refocus your attention on the breath. Just watch your breath; don’t try to change or modify it.

When To Practice and For How Long

With meditation practice the key is to actually do it, so when you practice doesn’t really matter, as long as the time is convenient for you and encourages you to practice. Some say that right after a big meal is not ideal, and I’d probably agree, but other than that it doesn’t matter whether you practice early in the morning, late at night, or in the middle of the day.

In terms of how long you should practice, I would say start small. I’d begin with 10 minutes a day, and once you’re comfortable with that, push the time up to 15 or 20 minutes. I suspect there are diminishing returns beyond this, but up to 30 minutes a day is probably beneficial. Experiment with different time frames and see what works for you.

What you will find as you practice is that initially your mind is a “drunken monkey”: it wanders more than it focuses on the breath. This is completely normal, and you should not let yourself get frustrated by it. Meditation is a learned skill, and as you continue to practice you will find it easier and easier to focus on the breath, to notice distracting thoughts, and to return to the breath. Eventually you will be able to mostly hear silence in your mind, which is a very peaceful feeling.

You can also practice mindfulness in other situations, without doing formal meditation. For instance, when you take a shower, just feel the water on your body; don’t think about your to-do list. Or be mindful while doing mundane tasks like washing dishes: feel the water on your hands, notice the shape and sound of the dishes, and be completely immersed in the present moment.

That’s it: meditation practice made simple. Happy meditating!

 

Forgiveness and Happiness Researcher Fred Luskin Says Turn Off Your Smartphone If You Want to Be Happy

Earlier this year I had the good fortune to spend several morning hours listening to Stanford professor and researcher Fred Luskin talk about happiness. Dr. Luskin is a psychologist who has done groundbreaking research on forgiveness over many years. He’s the author of many books, and frequently lectures about forgiveness. I often recommend his book Forgive for Good: A Proven Prescription for Health and Happiness to clients suffering from anger and hurt.

But that morning he was discussing happiness. He came into the room with no pretense. His hair was wild and curly, partly dark and partly gray. He was wearing a puffy black down jacket, a T-shirt, running tights, and sneakers. Clearly this was a man comfortable with himself, not trying to impress.

He started off by doing something quite outrageous: he asked the audience of 30 people to turn off their cell phones. Not to lower the volume or silence the ringers, but to actually shut the phones down. This clearly caused some discomfort in the audience. He explained that he wanted people to turn off their phones so that they would truly focus on the present and on listening to him. He cited a statistic that people check email 79 times a day on average, and each time they check it they get a burst of adrenaline and stress. Clearly this is not conducive to genuine happiness.

He pointed out that you can’t really be happy unless you can sit still and relax. “We are all descended from anxious monkeys,” he said, and clearly most of us do not know how to sit still and relax. “Happiness is the state of ‘enough,’” he said, “and is not consistent with wanting more.”

He pointed out that wanting what you have equals being happy, and that wanting something other than what you have equals stress.

He talked about the beginnings of his career, when clinical psychology was focused on unhappiness and problems. There was no science of happiness. Now there is a huge area of research and writing on happiness called Positive Psychology.

He shared some simple techniques for enhancing happiness. One simple technique revolved around food. When you’re eating don’t multitask. Give thanks for the food, and really focus on tasting and savoring that food. One technique I have often used is to close my eyes while I savor food, which greatly intensifies the taste.

Another simple practice: whenever you are outside, take a few moments to feel the wind or sun on your skin.

He also talked about phones and how we use them. We are completely addicted to the little bursts of dopamine and adrenaline we get each time we check our email or receive a text. And rather than be present in most situations, we simply look at our phones. Go to any outdoor cafe and look at the people sitting alone: most of them are looking at their phones rather than experiencing their surroundings or interacting with other people. Even sadder, look at people who are with others at a cafe or a restaurant. Much of the time they too are lost in their smartphones.

He discussed how happiness is not correlated with achievement. Nor is it correlated with money once you have an adequate amount to cover basic needs. What happiness seems to be most correlated with is relationships. If you like yourself and connect with other people you will tend to be happy.

He reviewed the relationship between impatience, anger, frustration, judgment, and happiness. He pointed out that whenever we are impatient or in a hurry, all of our worst emotions tend to come out. When someone drives slowly in front of us, we get annoyed. When someone takes too much time in the checkout line ahead of us, we get angry.

I really liked his discussion of grocery stores. He pointed out what an incredible miracle a modern American grocery store really is. The variety of delicious foods that we can buy for a relatively small amount of money is truly staggering. But instead of appreciating this, we focus on the slow person in the line ahead of us, or the person who has 16 items in the 15-item express line. What a shame!

He pointed out we have a choice of what we focus on, and this choice greatly influences our happiness. We all have a choice to focus on what’s wrong with our lives, or what’s right with our lives. And we have a choice of whether to focus on how other people have treated us poorly, or how other people have treated us well. These choices of focus will determine how we feel.

We also have the choice of focusing on what we already have, or focusing on what we do not have and aspire to have. For instance, let’s imagine that you are currently living in a rental apartment. The apartment is quite nice, although there are things that could be better. The kitchen could be bigger, and the tile in the bathroom could be prettier.

Perhaps you imagine owning a house, and you feel bad about renting an apartment. Rarely do we appreciate what we have. Having a place to live is clearly infinitely better than being homeless, and even a flawed apartment is still a home.

All of us need to work on learning to emphasize generosity, awe, and gratitude in our lives if we want to be happy. Generosity means kindness and acceptance in contrast to anger and judgment. Awe is the ability to be astounded by the wonder and beauty in the world. Gratitude is appreciation for all the good things in your own life and in the world.

He cited one interesting study in which researchers observed a traffic crosswalk. They found that drivers of more expensive cars were less likely to stop for people in the crosswalk. Thus wealth often correlates with less generosity and more hostility. Other data show very little correlation between wealth and charitable giving, with much of the charitable giving in the USA coming from people of modest means.

He also talked about long-term, secular changes in our society. He quoted a statistic that empathy is down 40% since the 1970s, while narcissism has increased by roughly 40%. This has a huge negative impact on relationships.

I was impressed by this simple but profound message of Dr. Luskin’s talk. Slow down, smell the roses, turn off your phone, focus on relationships, appreciate what you have, and become happier.

It’s a simple message, but hard to actually do.

I’m off to go for a hike in the hills, without my phone!

How Reporters Screw Up Health and Medical Reporting (and How You Can Catch Them Doing So)

I’ve written before about common mistakes in interpreting medical research in my blog post How to Read Media Coverage of Scientific Research: Sorting out the Stupid Science from Smart Science. I recently read a very interesting post by Gary Schwitzer about the most common mistakes that journalists make when reporting health and medical findings.

The three mistakes he discusses (quoting from his post):

1. Absolute versus relative risk/benefit data

“Many stories use relative risk reduction or benefit estimates without providing the absolute data. So, in other words, a drug is said to reduce the risk of hip fracture by 50% (relative risk reduction), without ever explaining that it’s a reduction from 2 fractures in 100 untreated women down to 1 fracture in 100 treated women. Yes, that’s 50%, but in order to understand the true scope of the potential benefit, people need to know that it’s only a 1% absolute risk reduction (and that all the other 99 who didn’t benefit still had to pay and still ran the risk of side effects).

2. Association does not equal causation

A second key observation is that journalists often fail to explain the inherent limitations in observational studies – especially that they cannot establish cause and effect. They can point to a strong statistical association but they can’t prove that A causes B, or that if you do A you’ll be protected from B. But over and over we see news stories suggesting causal links. They use active verbs in inaccurately suggesting established benefits.

3. How we discuss screening tests

The third recurring problem I see in health news stories involves screening tests. … “Screening,” I believe, should only be used to refer to looking for problems in people who don’t have signs or symptoms or a family history. So it’s like going into Yankee Stadium filled with 50,000 people about whom you know very little and looking for disease in all of them. … I have heard women with breast cancer argue, for example, that mammograms saved their lives because they were found to have cancer just as their mothers did. I think that using “screening” in this context distorts the discussion because such a woman was obviously at higher risk because of her family history. She’s not just one of the 50,000 in the general population in the stadium. There were special reasons to look more closely in her. There may not be reasons to look more closely in the 49,999 others.”

Let’s discuss each of these in a little more depth. The first mistake is probably the most common one: statistically significant findings are not put into clinical perspective. Let me explain. Suppose we are looking at a drug that prevents a rare illness, which we will call Catachexia. The base rate of this illness is 4 in 10,000 people, and the drug reduces that to 1 in 10,000: a 75% decrease. Sounds good, right?

Not so fast. Let me add a few facts to this hypothetical case. Let’s imagine that the drug costs $10,000 a year and also has some bad side effects. To reduce the incidence from four cases to one case per ten thousand people, all ten thousand must be treated, including the 9,996 people who would never have developed this rare but serious illness. Treating those 9,996 people alone would cost $99,960,000 a year, and they would be unnecessarily exposed to the side effects.
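To make this arithmetic easy to check, here is a minimal Python sketch using the hypothetical Catachexia numbers above (nothing here is real data):

    # Hypothetical numbers from the Catachexia example above; not real data.
    baseline_risk = 4 / 10_000   # incidence without the drug
    treated_risk = 1 / 10_000    # incidence with the drug
    annual_cost = 10_000         # dollars per person per year (hypothetical)

    rrr = (baseline_risk - treated_risk) / baseline_risk  # relative risk reduction
    arr = baseline_risk - treated_risk                    # absolute risk reduction
    nnt = 1 / arr                                         # number needed to treat

    print(f"Relative risk reduction: {rrr:.0%}")                  # 75%
    print(f"Absolute risk reduction: {arr:.2%}")                  # 0.03%
    print(f"Number needed to treat: {nnt:,.0f}")                  # 3,333
    print(f"Cost per case prevented: ${nnt * annual_cost:,.0f}")  # about $33 million

The impressive headline number (75%) and the numbers that actually matter (a 0.03% absolute reduction, roughly 3,333 people treated per case prevented) are computed from the same two risks.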

So which headline sounds better to you?

New Drug Prevents 75% of Catachexia Cases!

Or

New Drug Lowers the Incidence of Catachexia by Three Cases per 10,000 People, at a Cost of Almost $100 Million

The first headline reports the relative risk reduction without any cost data; the second reports the absolute risk reduction and the costs. The second headline is the only one that should be written, but unfortunately the first is far more typical of science and medical reporting.

The second error, treating association or correlation as causation, is terribly common as well. The best example is the stream of studies on the health effects of coffee. Almost every week a different study claims either a health benefit or a negative health impact of coffee. These findings are typically reported in the active voice, such as “drinking coffee makes you smarter.”

So which headline sounds better to you?

Drinking Coffee Makes You Smarter

Or

Smarter People Drink More Coffee

Or

Scientists Find a Relatively Weak Association between Intelligence Levels and Coffee Consumption

Of course the first headline is the one that will get reported, even though the second headline is equally inaccurate. Only the third headline accurately reports the findings.

The fundamental problem with any correlational study of two variables is that we never know, nor can we ever know, what lurking third variables might be correlated with each. Let me give you a classic example. There is a high correlation between ice cream consumption in Iowa and the death rate in Mumbai, India. This sounds pretty disturbing; maybe those people in Iowa should stop eating ice cream. But of course the intervening variable here is summer heat. When the temperature goes up in Iowa, people eat more ice cream. And when the temperature goes up in India, people are more likely to die.
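To see how a lurking variable can manufacture a correlation out of thin air, here is a small simulation sketch, with entirely made-up numbers, in which temperature drives both series even though neither has any effect on the other:

    import random

    random.seed(42)

    # Made-up monthly temperatures: the lurking (intervening) variable.
    temps = [random.gauss(20, 10) for _ in range(120)]

    # Both series rise with heat, plus independent noise.
    # Neither one causes the other.
    ice_cream_sales = [2.0 * t + random.gauss(0, 5) for t in temps]
    death_rate = [0.5 * t + random.gauss(0, 3) for t in temps]

    def correlation(xs, ys):
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    print(f"correlation = {correlation(ice_cream_sales, death_rate):.2f}")
    # Prints a strong positive correlation (near 0.8), produced
    # entirely by the shared temperature variable.

A naive reading of that output would conclude that ice cream is deadly; the simulation knows better, because we built it with no causal link at all.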

The only way to actually verify a correlational finding is to do a follow-up study in which you randomly assign people to either consume or not consume the substance or food you wish to test. The problem is that you would have to get coffee drinkers to agree not to drink coffee and non-coffee drinkers to agree to drink coffee, which might be very difficult. But if you can do this with coffee, chocolate, broccoli, exercise, and so on, then at least you can demonstrate a real causal effect. (I’ve oversimplified some of the complexity of randomized controlled studies, but my point stands.)

The final distortion, confusion about screening tests, is also very common and, unfortunately, incredibly complex. The main point Schwitzer makes here, though, is simple: screening tests are only those tests applied to a general population that is not at high risk for any illness. The usefulness of a screening test must be evaluated in the context of a low-risk population, because that is how screening tests are used. Most people don’t get colon cancer, breast cancer, or prostate cancer, even after age 50. If you use a test only with high-risk individuals, then it’s not really a screening test.

There is a whole other issue with reporting on screening tests that I’ll only mention briefly, because it’s so complicated and controversial: many screening tests may do as much harm as good. Recently there has been a lot of discussion of screening for cancer, especially prostate and breast cancer. The dilemma is that once you find a cancer, you are almost always obligated to treat it, because of medical malpractice issues and psychological pressure (“Get that cancer out of me!”). The problem with this automatic treatment is that current screening doesn’t distinguish between fast-growing dangerous tumors and very slow-growing indolent tumors. Thus we may apply treatments with considerable side effects, or even mortality, to tumors that would never have harmed the person.

Another problem is that screening often misses the onset of fast-growing dangerous tumors, because they begin to grow between screening tests. The bottom line is that screening for breast cancer and prostate cancer may have relatively little impact on the only statistic that counts: the cancer death rate. If we had screening tests that could distinguish relatively harmless tumors from dangerous ones, screening might be more helpful, but we are not there yet.

One more headline test. Which headline do you prefer?

Screening for Prostate Cancer Leads to Detection and Cure of Prostate Cancer

Or

Screening for Prostate Cancer Leads to Impotence and Incontinence in Many Men Who Would Never Die from Prostate Cancer

The first headline is the one that will get reported even though the second headline is scientifically more accurate.

I suggest that every time you see a health or medical headline, you rewrite it in a more accurate way after you read the article. Remember to use absolute differences rather than relative differences, to report association instead of causation, and to add in the side effects and costs of any suggested treatment or screening test. This will give you practice in reading health and medical research accurately.

Also remember the most important rule: one small study does not mean anything. It’s actually quite humorous how the media will seize upon a study, even though it was based on 20 people and hasn’t been replicated by anybody. They also typically fail to put the results of one study into context against other studies of the same question. A great example is eggs and type 2 diabetes. The same researcher, Luc Djousse, published a study in 2008 (link) that showed a strong relationship between egg consumption and the occurrence of type 2 diabetes, but then in 2010 published another study finding no correlation whatsoever. Always be very skeptical, and most often you will be right.

I’m off to go make a nice vegetarian omelette…

 

Copyright © 2011 Andrew Gottlieb, Ph.D. /The Psychology Lounge/TPL Productions

How to Read Media Coverage of Scientific Research: Sorting Out the Stupid Science from Smart Science

Reading today’s headlines I saw an interesting title, “New Alzheimer’s Gene Identified.”

I was intrigued. Discovering a gene that caused late-onset Alzheimer’s would be a major scientific breakthrough, perhaps leading to effective new treatments. So I read the article carefully.

To summarize the findings: a United States research team examined the entire genomes of 2,269 people who had late-onset Alzheimer’s and 3,107 people who did not, looking for differences between the two groups.

Among the people who had late-onset Alzheimer’s, 9% had a variation in the gene MTHFD1L, which lives on chromosome 6. Of those who did not have late-onset Alzheimer’s, 5% had this variant.

So is this an important finding? The article suggested it was, but I think this is a prime example of bad science reporting. For instance, the article went on to say that this particular gene is involved in the metabolism of folate, which influences levels of homocysteine, and homocysteine levels are known to affect heart disease and Alzheimer’s. So is it the gene, or is it the level of homocysteine?

The main reason I consider this an example of stupid science reporting is that the difference is trivial. Let me give you an example of a better way to report it. The researchers could have instead reported that among people with late-onset Alzheimer’s, 91% had the normal version of the gene, while among people without late-onset Alzheimer’s, 95% did. But this doesn’t sound very impressive, and it raises the question of whether measurement error alone could account for the difference.

So this very expensive genome test yields essentially no predictive value in terms of who will develop Alzheimer’s and who will not. Compare a known genetic variant, APOE, which lives on chromosome 19: forty percent of those who develop late-onset Alzheimer’s have this gene, while only 25 to 30% of the general population has it. Even this gene, with its much stronger association with Alzheimer’s, isn’t a particularly useful clinical test.
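One way to see how weak the new finding is: treat carrying the variant as a diagnostic test and run it through Bayes’ rule. In this Python sketch, the 9% and 5% frequencies come from the article, but the 10% base rate is my own assumption, chosen purely for illustration:

    # Variant frequencies are from the article; the base rate is an
    # assumed, illustrative figure, not a number from the study.
    p_variant_given_alz = 0.09      # 9% of late-onset Alzheimer's patients
    p_variant_given_healthy = 0.05  # 5% of people without the disease
    base_rate = 0.10                # assumed lifetime risk (hypothetical)

    likelihood_ratio = p_variant_given_alz / p_variant_given_healthy  # 1.8
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    posterior = posterior_odds / (1 + posterior_odds)

    print(f"Likelihood ratio: {likelihood_ratio:.1f}")
    print(f"Risk before testing: {base_rate:.0%}")         # 10%
    print(f"Risk after a positive test: {posterior:.0%}")  # about 17%

Under that assumed base rate, carrying the variant nudges a 10% risk up to about 17%: far too small a shift to be useful in the clinic.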

The other reason this is an example of stupid science is that this is basically a negative finding. To scan the entire human genome looking for differences between normal elderly people and elderly people with Alzheimer’s, and to discover only a subtle and tiny difference, must have been a huge disappointment for the researchers. If I had been the journal editor reviewing this study, I doubt I would have published it. Imagine a similar study of an antidepressant that found 9% of people got better on the drug and 5% got better on placebo. I doubt it would get published.

Interestingly enough, the study hasn’t been published yet; it is being presented as a paper at the April 14 session of the American Academy of Neurology conference in Toronto. This is another clue for reading scientific research: if it hasn’t been published in a peer-reviewed scientific journal, be very skeptical. Good research usually gets published in top journals, while more dubious research is often presented at conferences and never published. It’s much easier to get a paper accepted at a conference than into a scientific journal.

It’s also important when reading media coverage of scientific research to read beyond the headlines and look at the actual numbers being reported. If they are very small numbers, or very small differences, be very skeptical about whether they mean anything at all.

As quoted in the article: “While lots of genetic variants have been singled out as possible contributors to Alzheimer’s, the findings often can’t be replicated or repeated, leaving researchers unsure if the results are a coincidence or actually important,” said Dr. Ron Petersen, director of the Mayo Alzheimer’s Disease Research Center in Rochester, Minnesota.

So to summarize, to be a savvy consumer of media coverage of scientific research:

1. Be skeptical of media reports of scientific research that hasn’t been published in top scientific journals. Good research gets published in peer-reviewed journals, which means that other scientists skeptically read the article before it’s published.

2. Read below the headlines and look for actual numbers that are reported, and apply common sense to these numbers. If the differences are very small in absolute numbers, it often means that the research has very little clinical usefulness. Even if the differences are large in terms of percentages, this doesn’t necessarily mean that they are useful findings.

An example would be a finding that drinking a particular type of bourbon increases the risk of a very rare type of brain tumor from one in 2,000,000 to three in 2,000,000. Reported in relative terms, the headline would say that drinking this bourbon triples the risk of a brain tumor, a 200% increase, which would definitely put me and many other people off that bourbon. (By the way, this is a completely fictitious example.) But if you compare the risk to something people do every day, such as driving, and reveal that driving is 1,000 times riskier than drinking this type of bourbon, it paints the research in a very different light.

3. Be very skeptical of research that has not been reproduced or replicated by other scientists. There’s a long history in science of findings that cannot be reproduced or replicated by other scientists, and therefore don’t hold up as valid research findings.

4. On the web, be very skeptical of research presented on sites that sell products. Unfortunately, a common strategy for selling products, particularly vitamin supplements, is to present pseudoscientific research that supports the use of the supplement. In general, any site that sells a product cannot be relied on for objective information about that product. It’s much better to go to primarily informational sites such as WebMD or the Mayo Clinic site, or in some cases to go directly to the original scientific articles using PubMed.

So be a smart consumer of science, so that you can tell the difference between smart science and stupid science.

Copyright © 2010 Andrew Gottlieb, Ph.D. /The Psychology Lounge/TPL Productions

New Study Finds the Best Pharmacological Stop-Smoking Solution (Hint: It’s Not What You’d Think)

A new study at the Center for Tobacco Research and Intervention, School of Medicine and Public Health, University of Wisconsin, Madison, compared all but one of the current drug treatments that help with quitting smoking. The researchers looked at the following treatments and combinations:

  • “bupropion SR (sustained release; Zyban, GlaxoSmithKline), 150 mg twice daily for 1 week before a target quit date and 8 weeks after the quit date;
  • nicotine lozenge (2 or 4 mg) for 12 weeks after the quit date;
  • nicotine patch (24-hour; 21, 14, and 7 mg, titrated down during 8 weeks after quitting);
  • nicotine patch plus nicotine lozenge;
  • bupropion SR plus nicotine lozenge; or
  • placebo (1 matched to each of the 5 treatments).”

Everyone received six 10- to 20-minute individual counseling sessions, with the first two sessions scheduled before quitting.

What were the results?

Three treatments worked better than placebo during the immediate quit period: the patch, bupropion plus lozenge, and patch plus lozenge.

At six months, only one treatment was still effective: the nicotine patch plus nicotine lozenge. The exact quit rates, as confirmed by carbon monoxide tests, were: “40.1% for the patch plus lozenge, 34.4% for the patch alone, 33.5% for the lozenge alone, 33.2% for bupropion plus lozenge, 31.8% for bupropion alone, and 22.2% for placebo.”
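Applying the absolute-versus-relative lesson from the earlier post on health reporting, here is a quick Python sketch (using the quit rates quoted above) that converts each treatment into its absolute improvement over placebo and a number needed to treat:

    # Six-month quit rates as reported in the study.
    quit_rates = {
        "patch + lozenge": 0.401,
        "patch alone": 0.344,
        "lozenge alone": 0.335,
        "bupropion + lozenge": 0.332,
        "bupropion alone": 0.318,
    }
    placebo = 0.222

    for treatment, rate in quit_rates.items():
        arr = rate - placebo  # absolute improvement over placebo
        nnt = 1 / arr         # smokers treated per extra quitter
        print(f"{treatment}: +{arr:.1%} over placebo, NNT ~ {nnt:.0f}")

By this rough arithmetic, about six smokers need to use the patch-plus-lozenge combination for one extra person to be smoke-free at six months who would not have quit on placebo.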

So we see that the combined nicotine-replacement therapy worked best, followed closely by either nicotine substitute alone. Zyban, also sold as Wellbutrin (bupropion), was a bust: no more effective than the simple nicotine lozenge. The only advantage of Zyban would be for someone who prefers not to use nicotine substitutes.

Now, I mentioned that they omitted one drug treatment: Chantix (varenicline). That is probably because the drug acts directly on nicotine receptors, blunting the effect of nicotine, so it wouldn’t have made sense to combine it with nicotine substitutes. Also, there have been some disturbing case reports of people having severe depressive reactions to Chantix.

Of course, there was one glaring omission that any card-carrying psychologist would spot in a moment: the lack of a true behavior therapy component. Six brief counseling sessions are hardly therapy. I would have liked to see real smoking-cessation behavior therapy combined with the drug treatments.

So, if you’re trying to quit smoking, combine nicotine patches with nicotine lozenges, both sold in any pharmacy. If you do, you will have about a 40% chance of still being smoke-free at six months.

Now I am off to have a cigarette….just kidding.

Study: http://cme.medscape.com/viewarticle/712074_print

Copyright © 2009/2010 Andrew Gottlieb, Ph.D. /The Psychology Lounge/TPL Productions