How to Balance Scientific Research with Practical Training

The lens of science has given us some incredibly useful information about running.

Scientific research has illuminated everything from injury prevention and rehabilitation to race-day preparation and best practices in training.

With the advent of well-designed scientific studies, we don’t have to rely only on folk wisdom to answer questions like “Is hip strength important for runners?”, “Should I take ibuprofen to treat all of my running injuries?”, or “Will a cup of coffee help me run faster on race day?” (The answers, by the way, are yes, no, and yes.)

I am continually impressed by the things that I learn through reading scientific studies on running-related topics.  Nevertheless, once in a while the scientific literature will come to a conclusion that’s the complete opposite of what I’ve found in my time as a runner and a coach.

Today, I’d like to examine a few examples of this and try to figure out why the science says one thing while “runner wisdom” says another.

Hard-to-believe scientific research results

To get started, I want to highlight some scientific findings that go against very common (and practical) running wisdom. Then we’ll look at how you can interpret these findings correctly and apply them to your training in the right way.

Should you run when you feel sick?

The first is the question of whether you should keep running when you start getting sick.  In my own running and coaching experience, trying to continue to train with a sore throat, fatigue, soreness, and all of the other symptoms of an upper respiratory infection is an unqualified mistake.

But scientific research claims otherwise: research done at Ball State University appears to show that moderate-intensity aerobic exercise doesn’t affect the length or severity of an upper respiratory infection.

The effects of pacing on running

Another case is the effects of your pacing on your normal mileage runs.

It’s fairly well-accepted in the running community that going too fast on your easy runs means you’re bound to get injured.  But several large studies of runners demonstrate otherwise: on the whole, runners who do their training runs fast don’t get injured any more often than those who train slowly.

Running with injuries

Finally, even something as basic as running when you’ve got an injury isn’t as simple as you might think.

While I’m always quick to warn athletes I coach not to run on a nagging injury, new research is showing that, at least for chronic soft-tissue injuries like runner’s knee or Achilles tendonitis, running through mild or moderate pain doesn’t leave you any worse off than resting. The caveats: the pain during running can’t exceed 5/10 on the pain scale, and it must have faded by the next morning.
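
That guideline is simple enough to state as a rule. Here’s a minimal sketch of the pain-monitoring check described above (the function name and inputs are mine, purely for illustration):

```python
def ok_to_keep_running(pain_during_run: int, pain_gone_next_morning: bool) -> bool:
    """Pain-monitoring rule of thumb for chronic soft-tissue injuries.

    Mirrors the criteria above: pain during the run must not exceed
    5 on a 0-10 scale, and it must have faded by the next morning.
    """
    return pain_during_run <= 5 and pain_gone_next_morning

# A 4/10 ache that was gone the next day: cautiously keep training
print(ok_to_keep_running(4, True))   # True
# A 7/10 pain: back off, regardless of how tomorrow feels
print(ok_to_keep_running(7, False))  # False
```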

Is science or our perception always right?

So what are we to make of this?

Taking both science and personal experience at face value can lead to some serious cognitive dissonance.  To resolve this, we’ll have to tease apart some of the subtleties of both scientific experiments and personal observations.

Flaws in scientific design – research participants

Even with rigorous experimental design, scientific studies often have limitations that restrict how well their results apply to a serious runner in training.

The research cited above on upper respiratory infections, for example, used moderately active undergraduate students, not distance runners training for a marathon.

So we might argue that hard workouts, high mileage, and racing put far greater stress on the already-taxed immune system of a competitive runner, which could account for why I and many others have found that trying to run through illness is a bad idea.

Flaws in scientific design – sample selection

Problems in sample selection can undermine scientific studies too. Perhaps many injury-prone runners intentionally run slow because they are aware of their heightened injury risk, yet still become injured for other reasons.

This would mask any protective effect of running slower to avoid injury.
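
To see how that kind of self-selection can hide a real effect, here’s a toy simulation with entirely made-up numbers. It’s purely illustrative, not data from any study:

```python
import random

random.seed(42)

runners = []
for _ in range(10_000):
    injury_prone = random.random() < 0.5          # half the population is fragile
    # Fragile runners deliberately train slower
    trains_fast = random.random() < (0.2 if injury_prone else 0.8)
    # Suppose fast training genuinely adds risk, but fragility adds more
    p_injury = 0.10 + (0.15 if trains_fast else 0.0) + (0.30 if injury_prone else 0.0)
    runners.append((trains_fast, random.random() < p_injury))

def injury_rate(fast: bool) -> float:
    group = [injured for trains, injured in runners if trains == fast]
    return sum(group) / len(group)

print(f"fast trainers injured: {injury_rate(True):.1%}")   # ~31%
print(f"slow trainers injured: {injury_rate(False):.1%}")  # ~34%
# Fast training carries real extra risk here (+15 points), yet the raw
# comparison shows fast and slow runners getting hurt at similar rates,
# because the injury-prone runners cluster in the slow group.
```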

And in the case of “running through” injuries, the scope of the current studies might be too narrow.  Perhaps we can’t generalize the “5/10 on the pain scale” rule to all injuries, as different body tissues likely have their own rates of healing.

Flaws in our perceptions: The danger of cause-and-effect

However, we should also be aware of the flaws in our own perception.

Our brains are quick to assign cause-and-effect relationships when two things happen sequentially, even if they might be unrelated.

Perhaps you would have gotten sicker even if you didn’t decide to do a hard workout when you were coming down with a sore throat, and maybe factors like muscular strength and running mechanics really do play a bigger role in your injury risk than how fast you run.

Our minds are also very bad at stepping outside of the immediate situation and looking at the big picture.

After running on a stiff Achilles or a sore knee, you are obviously not going to feel like you are progressing in your recovery.  But what you might miss is a longer-term trend of that stiffness or soreness gradually improving over the course of several weeks.
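
One way to make that longer-term trend visible is to smooth your day-to-day ratings, say with a seven-day average. A minimal sketch, using made-up soreness scores:

```python
# Daily soreness ratings (0-10) over three weeks: noisy day to day,
# but gradually improving underneath (made-up numbers for illustration)
soreness = [6, 7, 5, 6, 6, 4, 5,
            5, 6, 4, 5, 3, 4, 4,
            4, 3, 4, 2, 3, 3, 2]

window = 7
for day in range(window, len(soreness) + 1):
    week = soreness[day - window:day]
    print(f"day {day:2d}: 7-day average soreness = {sum(week) / window:.1f}")
# The rolling average falls steadily (roughly 5.6 -> 4.4 -> 3.0), a trend
# that's easy to miss when all you remember is how today's run felt.
```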

How do we know when to apply scientific research to our training?

With respect to scientific studies, we should be careful not to over-interpret the results.

Usually, the first studies on a specific topic like illness or injury are small and fairly specialized.  By refining experimental design, researchers can gradually reinforce or refute the findings of previous studies, eventually leading to a more rigorous understanding of the topic in question.

For example, one very legitimate criticism of early studies on the benefits of strength training for runners was that they included only modestly trained recreational runners, not experienced competitors.  But as more studies were conducted, researchers found that the benefits of strength training (especially plyometric-style explosive strength work) extend even to extremely fit national- and international-caliber distance runners.

Managing our own perception biases can be more difficult.

Even if heaps of research comes out saying that it’s okay to run when you’re sick, I’ll still have a hard time bringing myself to do it because I’ve had so many bad experiences in the past.

But making a system for analyzing your own running history can go a long way toward offsetting some of the problems described earlier.

You can get very technical if you’d like, using spreadsheets to analyze mileage and paces and so on, but for most runners, just keeping a daily log of your workouts and how you feel will open your eyes to some of the things that you’d otherwise miss.
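
If you do want to go the spreadsheet route, even a few lines of code can do the summarizing for you. Here’s a hypothetical sketch that totals weekly mileage from a simple CSV log; the file name and column names are my own assumptions, not a prescribed format:

```python
import csv
from collections import defaultdict
from datetime import date

# Assumed log format, one row per run:
#   date,miles,feel
#   2024-04-01,6.0,good
weekly_miles = defaultdict(float)
with open("running_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        year, week, _ = date.fromisoformat(row["date"]).isocalendar()
        weekly_miles[(year, week)] += float(row["miles"])

for (year, week), miles in sorted(weekly_miles.items()):
    print(f"{year} week {week:2d}: {miles:5.1f} miles")
```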

When new scientific research is hard to believe, we should rightfully be skeptical and analyze the methods of the study to see whether or not it’s really applicable for all runners.

But we shouldn’t stop there—we should also be skeptical and analytical about our own experiences and beliefs about running, because these can have flaws too!


References

1. Weidner, T.; Schurr, T., Effect of exercise on upper respiratory tract infection in sedentary subjects. British Journal of Sports Medicine 2003, 37, 304-306.
2. Weidner, T.; Cranston, T.; Schurr, T.; Kaminsky, L., The effect of exercise training on the severity and duration of a viral upper respiratory illness. Medicine & Science in Sports & Exercise 1998, 30 (11), 1578-1583.
3. Weidner, T.; Anderson, B.; Kaminsky, L.; Dick, E.; Schurr, T., Effect of a rhinovirus-caused upper respiratory illness on pulmonary function test and exercise responses. Medicine & Science in Sports & Exercise 1997, 29 (5), 604-609.
4. Messier, S. P.; Pittala, K. A., Etiologic factors associated with selected running injuries. Medicine & Science in Sports & Exercise 1988, 20 (5), 501-505.
5. Walter, S. D.; Hart, L. E.; McIntosh, J. M.; Sutton, J. R., The Ontario cohort study of running-related injuries. Archives of Internal Medicine 1989, 149 (11), 2561-2564.
6. Hreljac, A.; Marshall, R. N.; Hume, P., Evaluation of lower extremity overuse injury potential in runners. Medicine & Science in Sports & Exercise 2000, 32 (9), 1635-1641.
7. Silbernagel, K. G.; Thomeé, R.; Eriksson, B. I.; Karlsson, J., Continued sports activity, using a pain-monitoring model, during rehabilitation in patients with Achilles tendinopathy: a randomized controlled study. The American Journal of Sports Medicine 2007, 35 (6), 897-906.
8. Thomeé, R., A comprehensive treatment approach for patellofemoral pain syndrome in young women. Physical Therapy 1997, 77, 1690-1703.
9. Paavolainen, L.; Häkkinen, K.; Hämäläinen, I.; Nummela, A.; Rusko, H., Explosive-strength training improves 5-km running time by improving running economy and muscle power. Journal of Applied Physiology 1999, 86, 1527-1533.
10. Saunders, P. U.; Telford, R. D.; Pyne, D. B.; Peltola, E. M.; Cunningham, R. B.; Gore, C. J.; Hawley, J. A., Short-term plyometric training improves running economy in highly trained middle and long distance runners. Journal of Strength and Conditioning Research 2006, 20 (4), 947-954.


2 Responses

  1. Good job. I particularly like the section about the limitations of scientific studies. Even though scientific research is very important and illuminating, a lot of folks lionize scientific research and are quick to dismiss “traditional wisdom” because they don’t totally understand those limitations. In fact, I’ll add that a lot of athletes and coaches go so far as to think that traditional wisdom isn’t valid unless it’s substantiated by scientific study. Not everything can be operationalized, though; not everything can be made into a study. At those times, I think that traditional wisdom, borne of experience, is valid.

  2. You raise important points about [good] science versus anecdotal experience. The latter frequently has a sample size of 1; that is, it is based upon our own individual experience.

    Properly done science is about making inferences about larger groups, using a properly designed and executed study on a smaller sample, while recognizing that there will be natural exceptions to the findings, given biological diversity and factors we did not or cannot properly measure that may be relevant to explaining the findings.

    A key limitation within that framework is the ability to generalize findings from the study sample to the larger group. That ability is predicated upon the notion that the sample in the study is randomly selected from the population of interest and that the inclusion/exclusion criteria for the study subjects are reasonable and not overly narrow. It also requires that the sample size used in the study is appropriate to detect the magnitude of the outcome (or difference in outcomes) of interest, while controlling for the likelihood of Type I (false positive) and Type II (false negative) errors. The Type I error rate is typically set at 5% (0.05); the Type II error rate is commonly set at 20% (0.2). In other words, we make the prospective decision that we are willing to accept a greater risk of missing the outcome of interest than of falsely detecting one. This is the basis of formal null hypothesis testing in traditional statistics.
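
    (For concreteness, here is a minimal sketch of such a power calculation in Python, assuming a two-sample t-test, a medium effect size, and the statsmodels library; the error rates are just the conventional defaults described above.)

    ```python
    from statsmodels.stats.power import TTestIndPower

    # Subjects needed per group for a two-sample t-test to detect a
    # "medium" effect (Cohen's d = 0.5) at the conventional error rates:
    # alpha = 0.05 (Type I) and power = 1 - 0.2 = 0.8 (Type II)
    n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"required sample size per group: {n:.0f}")  # ~64
    ```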

    Unfortunately, there is a lot of bad science and a lot of it is approved by IRBs and is subsequently published, frequently with poorly implemented peer review by the relevant journals.

    A key problem is that many of these studies lack formally stated hypotheses, and no formal power/sample size calculation or simulation is performed by the investigators. The result is frequently underpowered studies, that is, studies with too small a sample size, resulting in Type I and Type II error rates that are markedly above commonly accepted values. This problem can also be compounded by the use of the wrong statistical techniques to analyze the data.

    This results in studies that are frequently not reproducible, which in turn yields conflicting results when multiple studies on similar outcomes are published. How is the naive reader, who is not trained in these methods, supposed to separate bad science from good, so that they can take that information and turn it into actionable knowledge? The reality is that frequently, they cannot. Thus, as you note here, they are left to make decisions based upon their own individual experience, or elect to utilize a study that fits their a priori bias.

    Unfortunately, this underlying problem is not limited to running, but is common across virtually all scientific disciplines.

    Science is certainly not perfect, but bad science subverts our ability to learn.

    Thanks for raising these important points.
