Guest Articles

Evidence-based medicine and the rational use of diagnostic investigations

Professor Rakesh K. Kumar

Every senior medical student and young doctor wants to be able to keep up with the latest advances in medicine. However, the output of published literature keeps rising, so that we are all in danger of drowning in data. It’s difficult enough to keep up with the latest in clinical practice, let alone in basic research.

To at least some extent, evidence-based medicine can help, because it offers approaches that help to turn the data into knowledge which can actually be applied. Notably, these include systematic reviews and meta-analyses, which yield evidence-based practice guidelines that can inform clinical decision-making. Of course, one must remember that guidelines are only generalisations. Achieving the best outcomes for any given patient requires a combination of:

  • skilled clinical observation
  • appropriate investigations
  • application of knowledge and expertise gained by experience
  • the best scientific evidence from the literature.

In this article, I will focus on the appropriate use of investigations. This is an important issue with respect to the care of individual patients, because unnecessary and inappropriate investigations may have adverse effects, while false-positive results may prompt further needless investigation. It is also important with respect to utilisation of resources, particularly in Australia where costs to the health care system are substantially borne by the taxpayer. Over the past decade, the use of laboratory tests has seen a modest annual increase of approximately 3% to 6% [1]. At the same time, requests for diagnostic imaging investigations have increased at approximately 9% per year, so that these services now account for approximately 15% of all Medicare outlays [2].

When looking at evidence-based medicine in the context of the rational use of investigations, it is easy to get lost in the arithmetic of predictive values, probabilities and likelihood ratios. An alternative simpler approach is to rely on the maxim “Only request a laboratory test if the result will change the management of the patient” [3]. This may be an oversimplification in that among other things, investigations are relevant to establishing a diagnosis, excluding differential diagnoses, assessing prognosis and guiding management. Nevertheless, focusing on investigations that matter is sound advice, which is unfortunately all too often ignored.
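The arithmetic the author alludes to is worth seeing once in concrete form. The sketch below is my own illustration, not material from the article; the sensitivity, specificity and prevalence figures are assumed values chosen to show why a "positive" result means little in a low-prevalence setting.

```python
# Illustrative sketch (assumed figures): how the positive predictive value
# of the same test changes with disease prevalence, via Bayes' theorem.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same test looks very different in low- vs high-prevalence settings:
for prevalence in (0.01, 0.30):
    ppv = positive_predictive_value(0.90, 0.95, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV = {ppv:.2f}")
```

With these assumed figures, a test that is 90% sensitive and 95% specific yields a PPV of only about 0.15 at 1% prevalence, but about 0.89 at 30% prevalence, which is the quantitative core of "only request a test if the result will change management".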

The quality of the evidence around overuse of diagnostic investigations is relatively low. In hospital settings, however, it has long been recognised that as many as two-thirds of requests for some common pathology tests may be avoidable, in that they fail to contribute to diagnosis or management [4]. Senior medical students and junior medical officers need to be especially aware of this, because most hospital pathology test requests are submitted by junior doctors. Among the factors that contribute to the uncritical overuse of investigations by JMOs are inexperience, lack of awareness of the evidence base for using a particular investigation, and lack of awareness of the cost of the test. Other significant factors are the desire to anticipate the expectations of one’s supervisor and the fear of missing something important. Perhaps the supervisors of PGY1/2 trainees themselves need to drive cultural change and better model the appropriate use of diagnostic investigations!

Some strategies targeted at the test-requesting behaviour of JMOs appear to be effective in at least some settings; for example, restricting the range of tests that junior doctors may request in emergency departments [5,6]. More generally, management systems with budgetary controls, as well as online systems with decision support, have been promoted [7]. Importantly, education also has a valuable role to play [8].

With funding support from the Commonwealth Department of Health, my colleagues and I developed an open-access website to educate JMOs about the rational use of diagnostic investigations. As a user, you interact with simulated cases and can request investigations as you attempt to establish a diagnosis, while being presented with a running tally of the costs of the tests sought. At the end of each case, you receive feedback via comparison with what an expert would have done. Try it by self-registering, without cost. The largest collection of cases is targeted at JMOs, but is also likely to be of interest to senior medical students. In addition, there are cases for trainee GPs, plus a few specifically created for advanced trainees in Respiratory Medicine. However, all cases are accessible to all users.

We have evidence that this educational approach can work: in a trial at a large Sydney hospital, we demonstrated that in the period immediately following active engagement of the cohort of junior doctors with this website, there were significant hospital-wide cost savings and an encouraging reduction in the number of blood samples collected from patients [9]. Unfortunately, in agreement with other studies of educational interventions, these changes in test-requesting behaviour were not sustained over the following months. However, there is additional evidence that routine requests for diagnostic investigations can be reduced if junior doctors are provided with cost data at the time of submitting a request [10]. We think a good case can be made for integrating this information into online systems in hospitals, to provide reinforcement.

Meanwhile, I encourage you to have a look at one of the few collections of guidelines about the use of investigations, available on the Australian Choosing Wisely website. These guidelines are supported by a number of specialist medical colleges, notably the Royal College of Pathologists of Australasia and the Royal Australian and New Zealand College of Radiologists. Also well worth reading is a thoughtful reflection on the “big picture” of overuse and the Choosing Wisely initiative, published late last year and targeted specifically at medical students and trainee doctors [11].



  1. National Coalition of Public Pathology. Encouraging quality pathology ordering in Australia’s public hospitals – Final Report, 2012 (last accessed January 2017).
  2. Australian National Audit Office. Diagnostic Imaging Reforms, 2014 (last accessed January 2017).
  3. Hawkins RC. The Evidence Based Medicine approach to diagnostic testing: practicalities and limitations. Clin Biochem Rev. 2005; 26:7-18.
  4. Hammett RJ, Harris RD. Halting the growth in diagnostic testing. Med J Aust 2002; 177:124-125.
  5. Stuart PJ, Crooks S, Porton M. An interventional program for diagnostic testing in the emergency department. Med J Aust 2002; 177:131-4.
  6. Chu KH, Wagholikar AS, Greenslade JH, O’Dwyer JA, Brown AF. Sustained reductions in emergency department laboratory test orders: impact of a simple intervention. Postgrad Med J 2013; 89:566-71.
  7. Janssens PMW. Managing the demand for laboratory testing: Options and opportunities. Clin Chim Acta 2010; 411:1596-602.
  8. Corson AH, Fan VS, White T, Sullivan SD, Asakura K, Myint M, Dale CR. A multifaceted hospitalist quality improvement intervention: Decreased frequency of common labs. J Hosp Med. 2015; 10:390-5.
  9. Ritchie A, Jureidini E, Kumar RK. Educating young doctors to reduce requests for laboratory investigations: opportunities and challenges. Med Sci Educ 2014; 24:161-3.
  10. Feldman LS, Shihab HM, Thiemann D, Yeh HC, Ardolino M, Mandell S, Brotman DJ. Impact of providing fee data on laboratory test ordering: a controlled clinical trial. JAMA Intern Med 2013; 173:903-8.
  11. Lakhani A, Lass E, Silverstein WK, Born KB, Levinson W, Wong BM. Choosing Wisely for medical education: six things medical students and trainees should question. Acad Med 2016; 91:1374-8.

Surgery: art or science?

Professor Ian Harris AM

It’s often said that surgery is more art than science. Rubbish. Too much emphasis is placed on surgeons’ technical skills and not enough on the decisions behind them.

Any good surgeon can operate; better surgeons know when to operate; the best surgeons know when not to. Knowing when to operate and when to hold off relies on weighing up the relative probabilities of success and failure between the alternatives.

Good decision makers (and therefore good surgeons) base such decisions on quality evidence, and this is where science comes in. The evidence we seek is evidence of the true effectiveness of an intervention, and it is the scientific method that provides us with the most accurate and reliable estimate of the truth. Faced with alternatives, surgeons can sometimes make the wrong choice by being unscientific.

Surgeons often decide to do certain procedures because it’s what’s usually done, because it’s what they were taught, because it sounds logical, or because it fits with their own observations. If the surgeon’s perception of effectiveness and the evidence from scientific studies align, there is little problem. It’s when the two conflict that there’s a problem: either the surgeon’s opinion or the evidence is wrong. Worse, sometimes there is no good quality evidence and we are left with the surgeon’s opinion.

There is abundant evidence that surgeons overestimate the effectiveness of surgery, and considerable evidence of seemingly effective operations (based on observational evidence) turning out to be ineffective on proper scientific testing.

So what evidence should we rely on? Put simply, when you are trying to determine true effectiveness, the best method is the one that is least wrong, i.e., the method that has the least error. The scientific method is constructed to reduce error – we rarely know the truth, but we can increase the likelihood of our estimates containing the truth and we can make those estimates more precise by reducing error. In other words, we can never be certain but we can reduce uncertainty.

There are two types of error: random error and systematic error. Random error is easy to understand. If you toss a coin ten times, you may get seven heads, but that doesn’t mean the coin is unbalanced. Toss it 100 times and if you get 70 heads then you have reduced random error (the play of chance in generating such a result) and it is now very likely (and we are more certain) that the coin is unbalanced.
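The coin-toss point can be made quantitative: the same proportion of heads becomes far harder to attribute to chance as the number of tosses grows. A minimal sketch using only Python’s standard library (the 70-of-100 figure is an illustrative choice):

```python
# Exact binomial tail probabilities: the chance that a FAIR coin would
# produce at least this many heads. As the sample grows, the same 70%
# proportion becomes much harder to explain as random error.
from math import comb

def prob_at_least(heads, tosses, p=0.5):
    """P(X >= heads) for a coin with P(heads) = p, computed exactly."""
    return sum(comb(tosses, k) * p**k * (1 - p)**(tosses - k)
               for k in range(heads, tosses + 1))

print(prob_at_least(7, 10))    # ~0.17: 7/10 heads is unremarkable for a fair coin
print(prob_at_least(70, 100))  # ~0.00004: 70/100 heads strongly suggests bias
```

The proportion is identical in both cases; only the number of observations has changed, which is exactly what "reducing random error" means here.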

Systematic error (bias) is when we consistently get the wrong answer because we are doing the experiment wrong. There are many causes of bias in science and many go unrecognised, like confirmation bias, selective outcome reporting bias, selective analysis bias, measurement bias, and confounding. Systematic error is poorly understood and a major reason for the difference between the true and the apparent effectiveness of many surgical procedures.

The best way to test the effectiveness of surgery and overcome bias (particularly when the outcome is subjective, such as with pain) is to compare it with a sham or placebo procedure and to keep the patients and those who measure the effectiveness ‘blinded’ to which treatment was given. Yet such studies, common in the drug world, are rare in surgery.

In a study that summarised the research that has compared surgery to sham or placebo procedures, it was shown that the surgery in most such studies was no better than pretending to do the procedure [1]. And in the studies where surgery was better than placebo, the difference was generally small.

It’s not always necessary to compare surgery to a sham – sometimes comparing it to non-surgical treatment is sufficient. This is particularly the case for objective outcomes (survival, recurrence of disease, anatomic corrections) where blinding is less important. But you still have to compare it to something – to merely report the results of an operation with no comparator provides no reference for effectiveness beyond some historical control (of different patients, with possibly different conditions, from another place and another time). Journals are littered with case reports showing that most people got better after receiving treatment X but such reports tell us nothing about what would have happened to the patients if they did not receive treatment X, or received some other treatment. These types of non-comparative studies continue to sustain many quack therapies as well as common medical and surgical therapies, just as they sustained the apparent effectiveness of bloodletting for thousands of years.

However, even when comparative studies are done, they are not always acted upon. In a study looking at the evidence base for orthopaedic surgical procedures, it was found that only about half of all orthopaedic procedures had been subjected to tests comparing them to not operating [2]. And for those procedures that had been compared to not operating, about half were shown to be no better than not operating, yet the operations were still being done. The other surgical specialties are unlikely to be much better.

So there are two problems in surgery: an evidence gap in which there’s a lack of high quality evidence to support current practice, and an evidence-practice gap where there’s high quality evidence that a procedure doesn’t work, yet it’s still performed.

Part of the problem is that operations are often introduced before there’s good quality evidence of their effectiveness in the real world. The studies comparing them to non-operative treatment or placebo often come much later – if at all.

Surgical procedures should not be introduced or funded until there’s high quality evidence showing their effectiveness, and it should be unethical to introduce a new technique without studying its effectiveness. Instead, the opposite is argued: that high quality comparative studies (placebo controlled trials) are unethical.

Often, procedures that surgeons consider to be obviously effective are later shown to be ineffective. In the US in the 1980s, a new procedure that removed some lung tissue was touted for emphysema. Animal studies and (non-comparative) results on humans were encouraging. So the procedure became commonplace. A comparative trial was called for but proponents argued that this would deprive many people of the benefits of the procedure, the effectiveness of which was obvious.

Medicare in the US decided only to fund the surgery if patients participated in a trial comparing it to non-surgical treatment. The trial was done and the surgery was found wanting. This cost Medicare some money, but much less than paying for the procedure for decades until someone else studied it. This type of solution should be considered in Australia – only introduce new procedures if they are being evaluated as part of a trial.

The current practice of surgery is not based on quality science. If you got a physicist from NASA to look at the quality of science supporting current surgical practice they would faint. But it is getting better. It is getting better because of advancements in our understanding, because of the spread of evidence based medicine (in teaching and in journal requirements, for example), and because surgeons are understanding science better. The trials are getting better, but the incorporation of the results of those trials into practice is slow and often meets resistance because of suspicions that stem from a lack of understanding of science and the biases that drive current practice.

Billions are spent worldwide on surgical procedures that may not be effective because in many areas of surgery we still rely on surgical opinions based on biased observations and tradition. It is time for surgery to be a real science and to rely on the kind of evidence on which other scientific endeavours rely; the kind of evidence that we demand of other medical specialties and of non-medical practitioners. It’s not too hard. It’s not unethical. It’s right, and it’s time.



[1] Wartolowska K, Judge A, Hopewell S, Collins GS, Dean BJF, Rombach I, et al. Use of placebo controls in the evaluation of surgery: systematic review. BMJ. 2014;348:g3253.

[2] Lim HC, Adie S, Naylor JM, Harris IA. Randomised trial support for orthopaedic surgical procedures. PLoS One. 2014;9(6):e96745.


Conversational EBM

Professor Frank Bowden

Medicine, to paraphrase LP Hartley, is a foreign country – they say things differently there [1]. When I started out, most of the anatomy, physiology, biochemistry and microbiology was, well, Greek to me. My undergraduate years were as much language lab as pathology lab but by the time I completed my final exams after 6 years of full immersion I was speaking Medicine in my dreams.

Then, in the 1990s, I met a tribe known as ‘Clinical Epidemiologists’ who spoke a medical dialect I had not previously encountered. Their words were familiar but the meanings were hard to translate exactly. I knew, for example, the common definition of ‘sensitive’ and ‘specific’ (indeed my wife said that at times I had too much of the latter and not enough of the former), but these strangers had something else in mind when they used the words. Some phrases seemed to be self-evident – what else could ‘positive predictive value’ be apart from the ‘predictive value of being positive’? And what on earth was a ‘meta-analysis’ or a ‘likelihood ratio’?
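For readers meeting this dialect for the first time, the likelihood ratio the author puzzled over has compact arithmetic behind it. The sketch below is my own illustration; the 0.90/0.95 test characteristics and the 10% pre-test probability are assumed values, not figures from the article:

```python
# Illustrative sketch (assumed figures): the positive likelihood ratio and
# how it converts a pre-test probability into a post-test probability.

def positive_likelihood_ratio(sensitivity, specificity):
    """LR+ = P(positive | disease) / P(positive | no disease)."""
    return sensitivity / (1 - specificity)

def post_test_odds(pre_test_prob, lr):
    """Pre-test odds multiplied by the likelihood ratio give post-test odds."""
    odds = pre_test_prob / (1 - pre_test_prob)
    return odds * lr

lr = positive_likelihood_ratio(0.90, 0.95)
odds = post_test_odds(0.10, lr)
print(f"LR+ = {lr:.1f}, post-test probability = {odds / (1 + odds):.2f}")
```

With these assumed figures the LR+ is 18, lifting a 10% pre-test probability to roughly a two-in-three post-test probability: a positive result is informative here precisely because the pre-test probability was not negligible.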

The Lancet, that bastion of all that is right and good in the medical world, wrote an editorial in 1995 expressing the view that the emerging EBM speakers were OK as long as they stayed ‘in their place’ [2]. Since then, two generations of medical students have learnt their trade in clinical environments that have only reluctantly and incompletely adopted EBM as the lingua franca. Some young doctors have entered the workforce truly bilingual, but most have EBM as a second language. The paucity of native speakers in hospitals and general practices means that many doctors never have enough time to practise their conversation skills adequately. Some have forgotten even the most basic vocabulary.

Critics – and they are many [3] – argue that evidence-based medicine focuses on groups and averages; that it is only about research and academia; that it is an excuse for cost-saving and external control; and that it is not really about individual patients. But from the outset David Sackett, the father of EBM, defined his newborn as ‘the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient’ [4]. Take each of the words in that sentence seriously and I believe that it would be hard to find a better way to live a medical life.

Like most doctors I struggle to stay up to date even in my area of specialty. (If they change the name of one more bacterium or fungus I will scream!) Yet it is hard to convey to people younger than 30 how precious information was in the time before the interweb. It is not surprising then, that after we graduated, virtually the only source of education about new treatments and diagnostics came from the people who made and sold them. We read clever advertisements in journals and we listened, over fine food and wine, to well-dressed experts talking about new advances. There was no Cochrane database, anything that was in Harrison’s textbook was unquestionably correct and Up to Date was something that we wanted to be, not log on to. Today we carry more information in our mobile phone than was ever imagined by Douglas Adams or Isaac Asimov.

But some things don’t change: I have observed that doctors, as a species, hate bureaucracy, administration and any form of external control, yet we are naively open to the influence of experts who look or sound like us. If a colleague we like says something, we are inclined to believe them. Even if we don’t like them, we tend to be more Mulder than Scully. If you think I’m exaggerating, consider the exponential rise of PSA testing in the 1990s [5], the explosion of thyroid cancer diagnoses in the last decade [6], the sunburst of unnecessary vitamin D measurement [7], the overuse and subsequent loss of every new antibiotic released in the last 50 years [8], the epidemic of unnecessary radiological investigations and the steely push for wider access to the unproven benefits of robotic surgery [8-10] – to name just a few examples. On the other hand, independent sources, such as the Australian Choosing Wisely program [11], almost exclusively recommend that we do fewer investigations and treat fewer people, rather than more.

If good medical practice is the offspring of a metaphorical marriage between expert, independent professionals and autonomous, informed patients, we have to acknowledge the risk that a third party presents to the relationship. My patients have the right to know where I get my facts and who is influencing my decision making.

So, how can doctors make sense of modern practice in a world that is overflowing with information, short on knowledge, long on potential for conflict of interest and sadly wanting for wisdom? Just teach them more evidence-based medicine? Would that it were so easy… Sorting out the treatments that really do make a difference to our health and well-being is much harder than it seems. If you want doctors who are able to tease out the complex arguments about the pros and cons of prostate or breast cancer screening [12], who can make an independent judgement about the role of early thrombolysis in stroke [13], and who can convey the difference between absolute risk and relative risk in a way that is understandable to the lay person, then EBM instruction has to be integrated into all levels of medical training.
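The absolute-versus-relative-risk distinction mentioned above is easy to demonstrate numerically. A minimal sketch, using assumed event rates: a treatment that "halves the risk" sounds equally impressive in both scenarios, yet the absolute benefit differs a hundredfold.

```python
# Illustrative sketch (assumed event rates): the same relative risk can
# correspond to very different absolute benefits.

def risk_summary(control_rate, treated_rate):
    rr = treated_rate / control_rate   # relative risk
    arr = control_rate - treated_rate  # absolute risk reduction
    nnt = 1 / arr                      # number needed to treat
    return rr, arr, nnt

# Common outcome (20% -> 10%) vs rare outcome (0.2% -> 0.1%):
for control, treated in ((0.20, 0.10), (0.002, 0.001)):
    rr, arr, nnt = risk_summary(control, treated)
    print(f"RR = {rr:.2f}, ARR = {arr:.3f}, NNT = {nnt:.0f}")
```

Both lines report a relative risk of 0.5, but the number needed to treat is 10 in the first case and 1000 in the second, which is exactly the distinction a lay person needs explained.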

I hate to admit this but I used to watch my students’ eyes glaze over when I tried to teach them certain things in evidence-based medicine. For example, and this will make the EBM purists cringe, it is very difficult to get undergraduate medical students excited about critical appraisal of research studies. It’s not that it isn’t important – understanding the fine details of clinical research methods is essential for doctors who are going to be creators of knowledge – it’s just that the vast majority of us are consumers, not makers. The well-informed consumer needs to know how to safely and effectively use the product they have, more than they need to know how to manufacture it. I worry that many medical students never learn the importance of EBM (and its parent – epidemiology) if the early focus of teaching is on the laborious dissection of the mechanisms of evidence-making rather than on a more general exploration of what evidence is and how it can be applied in the real world.

Medical facts change rapidly but the principles of EBM stay remarkably stable. The range of treatments that existed when I was a medical student was nothing like that which is available today and we can only guess at the progress that will occur over the next 30 years. Nevertheless, the design of the studies needed to prove the efficacy and safety of those new treatments will be almost identical to those of today and we will still use the tools of EBM to interpret the results.

Perhaps only a small group of doctors – the creators – need to be truly fluent in EBM. But the rest of us – the users – need to make the effort to learn the basics of the language of evidence. Those who don’t may find that they have been left out of the conversation altogether.


  1. Hartley LP. The Go-Between. 1967.
  2. Evidence-based medicine, in its place. Lancet 1995; 346: 785.
  3. Greenhalgh T, Howick J, Maskrey N, et al. Evidence based medicine: a movement in crisis? BMJ 2014; 348: g3725.
  4. Davidoff F, Haynes B, Sackett D, et al. Evidence based medicine. BMJ 1995; 310: 1085–1086.
  5. Zargar H, van den Bergh R, Moon D, et al. The Impact Of United States Preventive Services Task Force (USPSTF) Recommendations Against PSA Testing On PSA Testing In Australia. BJU Int. Epub ahead of print 2016. DOI: 10.1111/bju.13602.
  6. McCarthy M. US thyroid cancer rates are epidemic of diagnosis not disease, study says. BMJ 2014; 348: g1743.
  7. Bilinski K, Boyages S. The rise and rise of vitamin D testing. BMJ 2012; 345: e4743.
  8. Vincent J-L. Antibiotic resistance: understanding and responding to an emerging crisis. Lancet Infect Dis 2011; 11: 670.
  9. Mayor S. Robotic surgery for prostate cancer achieves similar outcomes to open surgery, study shows. BMJ 2016; i4150.
  10. Yaxley JW, Coughlin GD, Chambers SK, et al. Robot-assisted laparoscopic prostatectomy versus open radical retropubic prostatectomy: early outcomes from a randomised controlled phase 3 study. Lancet 2016; 388: 1057–1066.
  11. O’Callaghan G, Meyer H, Elshaug AG. Choosing wisely: the message, messenger and method. Med J Aust 2015; 202: 175–177.
  12. Hackshaw A. Benefits and harms of mammography screening. BMJ 2012; 344: d8279.
  13. Warlow C. Therapeutic thrombolysis for acute ischaemic stroke. BMJ 2003; 326: 233–234.