Are We Really Connected?

August 11, 2011

in wellness

you may have noticed something a bit different about this post’s title. it’s in title case. that’s because carol harnett—author, speaker and trendspotter—wrote it. after the new york times ran an article that calls into question christakis and fowler’s finding that health behavior is contagious, i asked carol to share her thoughts on what it meant. and i “let” her write the entire thing in traditional case.

One of the most powerful classes I took in graduate school was called “How to Lie with Statistics.” You must love a stats professor who knows how to market his curriculum.

No, he didn’t really teach us how to lie. Our instructor taught us how to discern: good design from bad, statistical significance from insignificance, and proper statistical conclusions from flawed ones.

That’s why, when the journal Statistics, Politics, and Policy published Russell Lyons’ paper, “The Spread of Evidence-Poor Medicine via Flawed Social-Network Analysis,” in May 2011, I was immediately captivated.

Okay, not immediately. I have geek-like tendencies, but I don’t subscribe to this journal.

I do, however, have a statistics Twitter list. And, within a couple of weeks of the Lyons paper hitting the proverbial newsstand, my Twitter list was chock-full of posts about the statistical flaws surrounding the research of Nicholas Christakis and James Fowler on the social contagion of obesity, smoking, happiness and loneliness.

So, in layman’s terms, what’s all the fuss about?

Essentially, Christakis and Fowler propose that our body weight, tendency to smoke cigarettes and states of happiness and loneliness are spread—like a virus—by up to three degrees of friends (friends, friends of friends, and friends of friends of friends, i.e., the next-door neighbor of your best friend’s college roommate).

The researchers assert that this extended network of friends causes these traits to multiply through a process similar to infectious disease transmission, and that this transmission is not limited by where these people live.

Causality is a pretty big claim—much stronger than saying these conditions are associated with your friends and their networks.

What’s the difference between causality and association? Let’s use some examples.

When we’re talking about health, rarely do we conclusively find that one thing causes another. Hand washing, as a way to reduce the spread of infectious disease, is an exception. It is widely accepted and supported by research that washing your hands cuts down the spread of infection.

Almost everything else we promote about healthy behaviors is an association. For example, obesity is associated with a greater risk for heart disease and cancer. However, obesity alone does not cause heart disease or cancer.

To get to this idea that obesity spreads like a virus, Christakis and Fowler had to rule out two concepts: homophily and shared environment.

Homophily boils down to one of my grandmother’s favorite sayings, “Show me your friends and I’ll tell you who you are.” In other words, we tend to associate with people who look like us and act like us.

Shared environment is exactly what it sounds like. We live in the same or similar places and those environments influence our personality traits and choices.

According to a growing list of statisticians who support Lyons’ work, the statistical techniques Christakis and Fowler used to rule out the influence of where we live and of choosing friends who are like us were deeply flawed.
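To make the homophily objection concrete, here is a toy simulation of my own (a sketch, not the researchers’ or Lyons’ actual models, and every number in it is made up): friends are paired purely by similarity on a hidden “lifestyle” trait, no one influences anyone, and yet having an obese friend still predicts being obese.

```python
import random

random.seed(0)

# Hypothetical toy model: each person has a hidden "lifestyle" trait
# that drives only their OWN chance of obesity. There is no influence
# between friends whatsoever.
traits = sorted(random.gauss(0, 1) for _ in range(10000))

def obese(trait):
    # Obesity depends only on one's own trait plus independent noise.
    return trait + random.gauss(0, 1) > 1.0

# Homophily: pair neighbors in the sorted list, so "friends" have
# nearly identical traits ("show me your friends...").
pairs = [(traits[i], traits[i + 1]) for i in range(0, len(traits), 2)]
outcomes = [(obese(a), obese(b)) for a, b in pairs]

base_rate = sum(a + b for a, b in outcomes) / (2 * len(outcomes))
friends_obese = sum(1 for a, b in outcomes if b)
rate_given_obese_friend = sum(1 for a, b in outcomes if a and b) / friends_obese

print(f"base rate: {base_rate:.2f}")
print(f"rate given an obese friend: {rate_given_obese_friend:.2f}")
# The second number comes out well above the first -- a pattern that
# looks like contagion but is produced entirely by homophily.
```

The point of the exercise: an association between friends’ outcomes is exactly what homophily alone predicts, which is why ruling it out properly matters so much.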

Their errors, according to Lyons, are summed up as follows:

“1. [Christakis and Fowler] use statistical models that contradict their data, as well as their conclusions.

“2. Even if one accepts [Christakis and Fowler’s] statistical models and tests, [Christakis and Fowler] interpret the results incorrectly.”

The bottom line? Statisticians say that Christakis and Fowler cannot make the claims they made using the statistical methods they chose. They also believe the researchers’ mistakes were inadvertent, not intentional.

So, is there anything you can take away from the great debate about the validity of Christakis and Fowler’s research?


First, the people we hang out with and where we live, work and play influence our choices and behaviors.

Second, no one is saying that social contagion doesn’t exist and have influence. Heck, the only reason Lyons’ paper made it beyond the statistics world to the New York Times was because people spread it on Twitter.

As Dave Johns wrote in Slate, “The very idea of contagion and connectedness seems to embody the spirit of today, from the upswell of support for a young, black Chicago politician to the Facebook-driven revolutions of the Middle East.”

Third, use discernment when reading research—even in venerable journals like the New England Journal of Medicine and the British Medical Journal. It’s no one’s intention to lie with statistics, but it’s easy to make mistakes.

you can find carol on twitter, youtube and human resources executive online.



Bob Merberg August 12, 2011 at 7:49 pm

Carol (and Fran?),

This is a fascinating topic, very close to my heart. I have to admit to having tried to help promulgate some of Fowler and Christakis’s (F & C) notions, online and in presentations. (I’m not the only one. I heard this research summarized at conferences by many highly regarded experts, and on webinars it was presented as documentation for the frameworks of some of the leading wellness vendors.) In fact, I had a revealing email exchange with Christakis, which I’ll have to detail in another space.

I appreciate, Carol, your raising the issue of using discernment when reading about research. I think this entire imbroglio raises issues regarding scientific and health literacy. Most of us do not have the skills to actually assess F & C’s sophisticated statistical models. So we accepted it as truth, based on the opinions of experts (such as those reviewing for the New England Journal of Medicine). Interestingly, I think that if you followed most of the guidelines you could find about how to diligently assess science news, you would have walked away confident in the accuracy of F & C’s findings. After all, the research:

— came from credentialed researchers at world-class institutions
— was published in the most esteemed peer-reviewed journals on the planet
— was replicated (though this may be arguable)
— was not (if I recall correctly) commercially sponsored

One test, generally omitted from scientific literacy guidelines, that the research does not pass is the common sense test. Sure, we influence each other’s behaviors and emotions. But that’s not all F & C concluded. They said that your chance of becoming obese was influenced by your friends’ friends’ friends — even if you don’t know them and they live some place far away. Three degrees of separation. And *that’s* the part that’s hard to swallow. But this is the paradox: The fact that the findings were so unlikely is what made them so appealing to the scientific journals, to the lay media, and to the public.

So maybe part of the lesson is: If the findings seem to defy common sense, be even more discerning than you usually would be.

It’s somewhat reminiscent of another lesson we all learned (the hard way) long ago: If it sounds too good to be true, it probably is.

(Probably. This particular research is all about probability. And I suspect the final word is not yet in.)


Bob Merberg August 13, 2011 at 12:52 pm

I didn’t want to make my Comments on this blog longer than the blog itself. I’ve offered some additional commentary on this controversy, and why it is important for wellness professionals, on my blog at .


Carol Harnett August 16, 2011 at 11:25 am

Great points, as always, Bob.

I’ve split my comments between your blog post and here.

What I will add on the free-range site is some information I left out of my original post because I didn’t want to distract from the lay explanation of the issues.

The most disturbing thing to me about the Christakis and Fowler research, which was published at least four times in highly regarded, peer-reviewed journals like the New England Journal of Medicine and the British Medical Journal, was this: from the first journal article, there were protests from the statistical community about the flawed methodology. They were ignored. Russell Lyons tried to submit his paper to those same journals. He was rejected by both the NEJM and the BMJ – not because his work wasn’t credible, but because it was a criticism of published work. The editors felt their journals’ focus was not to publish criticisms as research articles.

Here is some of the information that Dave Johns shared on Slate:

“The case against Christakis and Fowler has grown since then. The Lyons paper passed peer review and was published in the May issue of the journal Statistics, Politics, and Policy. Two other recent papers raise serious doubts about their conclusions. And now something of a consensus is forming within the statistics and social-networking communities that Christakis and Fowler’s headline-grabbing contagion papers are fatally flawed. Andrew Gelman, a professor of statistics at Columbia, wrote a delicately worded blog post in June noting that he’d ‘have to go with Lyons’ and say that the claims of contagious obesity, divorce and the like ‘have not been convincingly demonstrated.’

“Another highly respected social-networking expert, Tom Snijders of Oxford, called the mathematical model used by Christakis and Fowler ‘not coherent.’ And just a few days ago, Cosma Shalizi, a statistician at Carnegie Mellon, declared, ‘I agree with pretty much everything Snijders says.’

“Gelman argues that the papers might not have been accepted by top journals if these technical criticisms had been aired earlier. Indeed, Lyons posted damning quotes from two anonymous reviewers of his own work. ‘[Christakis and Fowler’s] errors are in some places so egregious that a critique of their work cannot exist without also calling into question the rigor of review process,’ one of them wrote. Slate has confirmed with an editor at Statistics, Politics, and Policy that the quote is authentic.”

I think the last statement gets to your point, Bob. It is difficult for non-statisticians like us to sift through a methodology that is not part of our expertise. We tend to trust the peer-review process in journals of high regard like the NEJM and the BMJ. They have let us down.

But, I believe, we hold some responsibility, too. To your point both here and on your blog, if it doesn’t sit right in our guts, if it seems too good to be true, we should find out more. And, we should be careful about promoting work we don’t fully understand or believe.

A case in point is the recent report that people who exercise even an hour a day cannot overcome the harmful effects of sitting all day – particularly at work. That information is an observational statement. As you know (because I saw that you tweeted the article), Australian researchers are now putting that observation through scientific rigor. They’re going to look at whether putting in sit-stand desks, etc., is enough to overcome the impact of sitting all day. They’ll have control and experimental groups. They’ll look at biologic markers as well as claims data.

We don’t have to wait until their research is completed and passes peer review to pass the sitting information along to others. However, we should be careful about promoting options like sit-stand desks as an intervention that will minimize the risk of sitting and deliver an ROI. We can still tell our clients that it’s worth a try and set up the intervention like an experiment – compare similar departments and/or locations where one group has the intervention and the other does not.
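A minimal sketch of what that department-level comparison might look like, with entirely made-up numbers (the departments, counts, and outcome below are hypothetical), using a simple two-proportion z-test:

```python
from math import sqrt, erf

# Hypothetical numbers: one department gets sit-stand desks
# (intervention); a similar department does not (control).
# Outcome: employees whose biologic markers improved over the period.
improved_intervention, n_intervention = 34, 80
improved_control, n_control = 22, 80

p1 = improved_intervention / n_intervention
p2 = improved_control / n_control

# Two-proportion z-test: is the difference bigger than chance alone
# would plausibly produce?
pooled = (improved_intervention + improved_control) / (n_intervention + n_control)
se = sqrt(pooled * (1 - pooled) * (1 / n_intervention + 1 / n_control))
z = (p1 - p2) / se
p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided

print(f"improved: {p1:.0%} vs {p2:.0%}; z = {z:.2f}; one-sided p = {p_value:.3f}")
```

Even a clean comparison like this only shows that the intervention department did better, not why; matching the departments on size, job type and baseline health is what does the real work.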

Ultimately, I think there are a lot of lessons to be learned from the Christakis and Fowler debate. The one I like best is the value of honest discussion and dialogue like the one we’re having here.

Warm regards,

Carol

