
A review of early intervention in youth psychosis

Early intervention in youth psychosis has been a topic of contentious discussion. In particular, there is a lack of consensus regarding how early to treat patients with a psychotic disorder. There has been a recent push to provide treatment early in the development of psychosis, specifically to patients in an ultra-high risk or prodromal stage. There is also debate about the types of interventions that should be used, such as psychoeducation, psychotherapy and pharmacotherapy. In Australia, these uncertainties have been reflected in the production of conflicting guidelines by key stakeholders in this area. There are significant arguments both for and against the practice of early intervention. This article explores these arguments and reviews current practices in Australia. A number of updated recommendations are also set out in light of these findings.

Introduction

Psychotic disorders are characterised by the presence of symptoms that reflect an excess or distortion of normal functions. For example, hallucinations, delusions, thought disorder and disorganised behaviour are symptoms characteristic of psychosis. Patients diagnosed with schizophrenia must demonstrate positive symptoms or severe negative symptoms (e.g. flattened affect, social withdrawal) in addition to deterioration in their social and vocational functioning. [1] Hence, the diagnosis is typically made after the onset of significant symptomatology.

McGorry et al. [2] argue that late-stage diagnosis of a psychotic illness leads to delayed and inconsistent management of these patients. The concept of “early intervention” refers to appropriately managing patients in the early stages of psychotic disease, to minimise long-term negative social and psychological outcomes. As such, it represents a secondary prevention strategy and a paradigm shift in the way schizophrenia and other psychotic disorders are viewed; rather than being seen as illnesses with an inevitably poor social and functional outcome, they are viewed as conditions whose course can be altered
by recognition of the early warning signs and application of timely intervention. [2] The proponents of early intervention argue that many of the recognised risk factors for the development and progression of a psychotic disorder (e.g. disrupted peer and family networks, substance use, depression) are recognisable in advance and can be acted upon. [2]

The clinical staging model [3] proposes that psychiatric illnesses should be viewed as a sequence of stages that increase in disease severity. Employing the appropriate treatment modality at a particular stage would allow regression of the disease to an earlier stage. The clinical stages of early psychosis include the ‘ultra-high risk’ stage, the ‘first psychotic episode’ stage and the ‘first 5 years after diagnosis’ stage. [2]

The ‘ultra-high risk’ stage is the stage preceding the first psychotic episode. Although the first psychotic episode is often the first recognised sign of a psychotic illness, retrospective analysis reveals many changes occur in an individual’s thoughts and behaviour in the period preceding the psychotic episode. This is known as the ‘prodromal phase’. To intervene at this stage, it is clearly necessary to be able to identify this period in advance, and a considerable research effort is being focused on developing prospective criteria for this purpose. Two tools currently in use are the Positive and Negative Syndrome Scale (PANSS) or Attenuated Positive Symptoms (APS) approach and the Basic Symptoms (BS) approach. [4] The PANSS is a 30-item questionnaire, with each item rated on a 7-point scale. It covers positive symptoms (e.g. delusions, hallucinations), negative symptoms (e.g. social withdrawal, blunted affect) and general symptoms of psychopathology (e.g. depression, poor insight, feelings of tension). [5] The Basic Symptoms approach focuses on subtler, self-experienced subclinical symptoms such as thought interference, disturbance of receptive language, inability to divide attention between tasks and derealisation. [6]
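As a point of reference on scoring (a standard property of the scale rather than a finding of the cited studies): with 30 items each rated from 1 (absent) to 7 (extreme), PANSS total scores range from 30 × 1 = 30, indicating no symptoms, to 30 × 7 = 210, indicating maximum severity.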

Intervention at the ‘first psychotic episode’ stage is largely aimed at reducing the duration of untreated psychosis (DUP), as a high DUP has been shown to result in poorer outcomes. Some authors have argued that untreated psychosis can lead to irreversible brain damage. [7,8] Although this theory has yet to receive widespread support, the personal, social and societal consequences of untreated psychosis can have a tremendous impact on the patient’s ability to recover from the episode. [2] Functional MRI (fMRI) studies have shown decreased activity during memory encoding in patients with schizophrenia and, interestingly, decreased posterior cingulate activity in patients with ongoing first-episode psychosis compared to those showing remission at one year. [9] Such alterations in brain activity in patients more likely to proceed to a significant psychotic illness have exciting implications for the use of fMRI as a screening tool to identify the patients most likely to benefit from early intervention.

The ‘first 5 years after diagnosis’ stage is a crucial period that determines a patient’s long-term outcome. It is the period of greatest risk for suicide, disengagement, relapse, [2] long-term treatment resistance and the breakdown and accumulation of disabilities in personal, social and occupational settings. [10] Mason et al. [11] suggest that the level of disability accumulated in the first 2 years of psychosis may in fact ‘set a ceiling for recovery in the long term’. Hence, intervention during this period is important. Maintaining a steady support structure tailored especially towards young people receiving a diagnosis of psychosis is likely to maximise the chances of engagement with mental health care, lifestyle modifications and adequate family involvement. [2]

Current Practice
There are currently a number of different practices and guidelines in Australia relating to early intervention in youth psychosis. The Royal Australian and New Zealand College of Psychiatrists (RANZCP) has produced clinical practice guidelines for schizophrenia, which include recommendations for patients at ultra-high risk (UHR). [12] Orygen Youth Health and headspace have also developed guidelines, ‘The Australian Clinical Guidelines for Early Psychosis’, now in their second edition. [13]

Australia established the first clinical and research clinic in the world for individuals considered to be at imminent risk of psychosis. The Personal Assessment and Crisis Evaluation (PACE) clinic was established by Orygen in Melbourne in 1994. [14] The clinic receives referrals from general practice, school counsellors and various health services. [14] It facilitates case management and provides a variety of in-house support services to families and carers, including group programs, vocational and educational assistance, and occupational therapy. [15] Orygen, in conjunction with the Australian General Practice Network, the Australian Psychological Society and the Brain and Mind Research Institute, also established headspace, a national youth mental health foundation. [16] The aim of headspace was to facilitate early intervention by increasing community awareness and clinician training, taking a youth-specific approach to management and utilising multidisciplinary care. [16,17] Another service is the network of Early Psychosis Prevention and Intervention Centres (EPPIC). In the 2010-11 and 2011-12 budgets, the Federal Government allocated $247m to the establishment of 16 of these centres across Australia, modelled upon Orygen’s EPPIC centre in Melbourne. [18] A more detailed summary of the current guidelines and practices for youth psychosis in Australia is given in Table 1.

Table 1. Current practice (guidelines and health services) in Australia for youth psychosis.

Guideline: RANZCP Clinical Practice Guidelines for the Treatment of Schizophrenia and Related Disorders (2005) [12]
Recommendation: Assessment and close monitoring every 2-4 weeks, along with the provision of information to the patient and their family about the risk and likelihood of progression. Other techniques such as cognitive behavioural therapy (CBT), stress management and vocational rehabilitation should be employed depending on any concurrent psychosocial difficulties. Antipsychotics are only to be prescribed when the patient has been frankly psychotic for over a week, or when milder symptoms are associated with a risk of self-harm or aggression (however, in patients without such a history who are treated with antipsychotics, the primary concern is that they may have a delirium or physical illness, which should be excluded first). [12]

Guideline: The Australian Clinical Guidelines for Early Psychosis [13]
Recommendation: Commencement of CBT is recommended for all patients identified as being at ultra-high risk. Family, vocational, educational and accommodation support should also be provided as required in a low-stigma setting. Antipsychotic medication should only be considered once full-threshold psychotic symptoms have been sustained for over a week, or if there is rapid deterioration accompanied by psychotic-like symptoms. [13]

Health service: The Personal Assessment and Crisis Evaluation (PACE) clinic
Nature of service provided: PACE provides information to individuals and their families about what it means to be at risk of psychosis. [14] It facilitates case management and provides a variety of in-house support services to families and carers, including group programs, vocational and educational assistance and occupational therapy. [15] Specific treatment is largely in the form of voluntary participation in clinical trials, such as those examining antipsychotic use or CBT in ultra-high risk individuals. [14]

Health service: headspace
Nature of service provided: These centres for 12-25 year olds combine specialist mental health, drug and alcohol and primary care services, vocational services and training, and employment support within a youth- and family-friendly environment. [16,17] headspace centres are also tasked with developing awareness campaigns for their local communities and providing training for primary care and other workers using an evidence-based approach. [16]

Health service: Early Psychosis Prevention and Intervention Centres (EPPIC)
Nature of service provided: Provide comprehensive inpatient and mobile components, aiming to identify patients as early as possible and deliver phase-specific, best-practice interventions to psychotic individuals aged 15 to 24. [19] This model has been adopted widely around the world, including in the UK [20] and the US. [21]

 

The early intervention model has also been subject to some criticism. The major bases for this criticism are a lack of evidence, especially with regard to the use of antipsychotics in the prodromal stages of psychotic illness, and the significant cost of creating clinical infrastructure for patients who may never proceed to a long-term psychotic illness.

Results and Discussion

Evidence for early intervention

There is evidence from several small studies that psychotherapy such as CBT [22] and pharmacotherapy [3,23] can reduce the progression of ultra-high risk individuals to first episode psychosis.

Wyatt et al. [8] reviewed 22 studies of varying designs, including contemporaneous control group studies, cohort studies, mirror-image studies and early intervention studies. In these studies, patients with schizophrenia were either given or not given neuroleptics at a specific time during the course of their illness. Nineteen of the studies looked specifically at patients experiencing their first psychotic episode. After re-analysing the data, Wyatt et al. [8] showed that early intervention with a neuroleptic in first-break schizophrenic patients improved the long-term course of the illness, commonly assessed by re-hospitalisation and relapse rates. The use of neuroleptics was also shown to reduce the length of the initial psychotic period. In addition, discontinuation of neuroleptics resulted in poorer outcomes: patients were unable to return to their previous level of functioning and relapsed more frequently. Neuroleptic medication has the strongest support for relapse prevention in schizophrenia and is the basis of most interventions.

It has been suggested that a longer duration of untreated psychosis directly correlates with less complete recovery, a higher rate of relapse and greater compromise of functioning, on the hypothesis that psychotic episodes have a toxic effect on the brain. [7,8,24-26] These studies, both retrospective and prospective, suggest that a longer DUP in the early stage of schizophrenia is associated with a longer time to remission, a lower level of recovery, a greater likelihood of relapse and a worse overall outcome.

Studies have shown that raising public awareness and using mobile outreach detection teams to identify candidate patients [27] have significantly reduced DUP, leading to beneficial outcomes, in particular a reduction in negative symptoms in schizophrenic patients.

Arguments against early intervention

Several arguments have been raised against early intervention. The first concerns cost-effectiveness, as resources may be diverted from treatment programs for patients who already have an established diagnosis of psychosis. Critics also argue that the great majority of high-risk patients do not in fact progress to frank psychosis, and that some patients seeking early intervention may not have ‘true prodromal’ features, thus inflating the numbers of those who appear to require early intervention. These arguments are discussed in more detail below.

Economic cost of early intervention may be infeasible

Those against early intervention believe the increased attention and funding given to early intervention diverts funding away from treatment of those with established psychosis. [28-30] Proponents have touted the cost-effectiveness of early intervention on the grounds that such programs utilise more outpatient than inpatient resources, thus reducing overall healthcare costs (outpatient services being much cheaper than inpatient treatment). However, critics point out that implementation of a cost-effective treatment actually increases total costs, [31,32] since a cheaper treatment has a much higher uptake than an expensive alternative, raising the total cost of treatment. In addition, Amos argues that total healthcare costs are further increased because inpatient costs are not reduced by early intervention. [33] This is because 80% or more of hospital costs are fixed; shifting psychosis treatment to largely outpatient settings in the community increases community costs without reducing hospital costs. [33] This is corroborated by previous studies showing an increase in total costs when hospitalisation rates were reduced. [34,35]
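To illustrate the fixed-cost argument with purely hypothetical figures: if a hospital spends $100m per year on psychosis care, of which 80% ($80m) is fixed, then shifting half of all treatment episodes into the community removes at most half of the $20m variable component ($10m) while adding the full cost of the new community services; if those services cost more than $10m per year, total expenditure rises even though each individual episode of care is cheaper.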

Most high-risk patients do not progress to frank psychosis

One possible explanation for this is that a subset of adolescents identified as UHR may simply be odd adolescents who become odd adults, with few progressing to frank psychosis. The prominent child psychiatrist Sula Wolff was the first to describe these odd adolescents, in her book Loners: The Life Path of Unusual Children. [36] Her research showed that while odd qualities such as those found in schizoid and schizotypal disorders are found pre-morbidly in patients with schizophrenia, very few children with such personality traits or disorders go on to develop schizophrenia. For example, in 1995 Wolff undertook a records survey of all psychiatric hospital admissions in Scotland. Overall, 5% of schizoid young people were affected by schizophrenia in adulthood, compared to a population prevalence rate in the UK of 0.31-0.49%. [36] These numbers suggest that while the risk of schizophrenia in schizoid children is higher than that of the general population, it is still low. In other words, a proportion of patients flagged as prodromal may actually have qualities consistent with schizoid personality disorder and will never progress to psychosis.

Recently, there has been a decline in the proportion of patients at high risk of psychosis actually progressing to frank psychosis

This decline has important ramifications for the practice of early intervention. A decline in the transition rate of patients identified as UHR has been reported within the PACE clinic (Melbourne, Australia) and in other UHR clinics. [37,38] For example, the PACE clinic has reported that the transition rate in each successive year between 1995 and 2000 was 0.8 times that of the previous year. [38] The reported decline was not due to differing patient characteristics across the years, such as gender, age, family history, baseline functioning or degree of psychopathology and psychiatric symptoms. [38] Additionally, the UHR criteria remained unchanged in the PACE clinic between 1995 and 2000. [38]
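To put this figure in perspective (an arithmetic illustration only, not additional data from the study): a year-on-year factor of 0.8 compounds over the five successive years from 1995 to 2000 to 0.8^5 ≈ 0.33, so the transition rate at the end of the period would be roughly one third of its starting value.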

There are a number of possible explanations for the declining transition rate to psychosis. Firstly, UHR patients are being detected more quickly than in the past (the duration of symptoms prior to detection is getting shorter). [38] However, it is unclear whether the resulting decline in transition rate is due to earlier treatment (which may be more effective than delayed treatment), the identification of increased numbers of false positives (those who were never going to progress to psychosis) or a combination of both. [38] There may also be an effect from clinicians becoming better at managing UHR patients. [38] Additionally, the decline in transition rate was more prominent for patients who met two of the UHR inclusion criteria simultaneously than for those who met only one. [38] This could be due to the increased emphasis placed on detection of patients who met both criteria, both in the UHR clinic and by referrers, leading to earlier detection and treatment. [38] This is also in keeping with the wider community shift towards recognition of early psychosis and the increase in available referral pathways.

The decline in transition rate also raises questions about the validity of intervention approaches, such as pharmacotherapy and psychosocial treatment, in patients who may never transition to psychosis. [38] Such intervention may be harmful and therefore unjustified in this context. The UHR concept, which is used extensively in psychosis research, may also have to be revisited if many of the identified patients are not transitioning. [38]

Due to the uncertainties regarding the basis for the declining transition rate, a review of the role of UHR clinics may be warranted. [38] It may be necessary to initially monitor patients and treat conditions such as depression, substance use problems and anxiety disorders while withholding antipsychotic treatment until features suggestive of transition occur, such as worsening of sub-threshold psychotic symptoms. [38] This may be prudent in the context of detecting increasing numbers of patients who were never destined to transition to psychosis. In any case, further research is needed to clarify the ongoing uncertainties in this area.

Bias in patient selection

Specialised teams set up to treat early psychosis engage with anyone who is seeking help. However, Castle [39] believes that this would skew the treatment group, as it would engage those with help-seeking behaviours rather than prodromal psychosis. It also raises the issue that those seeking help may have signs and symptoms of a normal developmental process or a ‘psychosis proneness’ that is part of a normal distribution within the general population. [40] Thus, these individuals may not require treatment for psychosis at all, as they would either grow out of their ‘psychosis proneness’ or would stabilise and never develop psychosis.

Prescribing antipsychotics to a population that is not psychotic: An ethical implication

The potential dangers of psychotropic drugs to young people are acknowledged in the United Nations Convention on the Rights of the Child, which recognises children as particularly deserving of protection from unnecessary exposure to psychotropic substances. [41] However, much of the research into early intervention includes administration of low-dose antipsychotics as a crucial and efficacious treatment option. [42] Furthermore, antipsychotics are known to have serious side effects, including sedation, weight gain, mild sexual dysfunction and disconcerting extrapyramidal symptoms (EPS) such as pseudoparkinsonism, akathisia, acute dystonia and tardive dyskinesia. [43] While these effects are more strongly associated with first-generation antipsychotics, there is increasing evidence that second-generation antipsychotics (SGAs) are associated with significant side effects such as weight gain, hyperprolactinaemia and EPS in the adolescent population.

Summary and recommendations

In view of the currently available literature, the authors make the following summary and recommendations with regards to early intervention in psychosis.

  • Psychosis is a highly disabling condition with detrimental impacts on patients’ relationships and occupational and social functioning
  • Possible interventions that delay or prevent transition from the prodromal period to psychosis are important, both clinically and economically
  • A systematic review in the Cochrane Database found limited evidence regarding interventions to prevent psychosis. Despite this, early intervention facilities such as headspace are widespread in Australia

Our recommendations

  1. We do not recommend the use of antipsychotics in children and adolescents who have been identified as at increased risk but who have not yet progressed to frank psychosis. Exposing children and adolescents to the serious side effects of antipsychotics is both unethical and inappropriate considering a proportion of these patients will not progress to psychosis.
  2. We recommend more research into safer, less harmful interventions such as omega-3 fatty acids and psychotherapy. For omega-3 fatty acids, evidence suggests a beneficial effect on transition rates compared to placebo. [44] However, this evidence comes from a single trial with few participants; a replication study with a larger sample size is needed to more definitively ascertain the merit of this intervention.
  3. As previously discussed, preliminary evidence shows that CBT may reduce the transition rate to psychosis. Further research should be undertaken to conclusively establish the benefit of psychotherapy in high-risk individuals, including its cost-effectiveness as an early intervention for youth psychosis. In addition, research should aim to identify any detrimental effects associated with providing psychotherapy to patients who do not progress to psychosis.
  4. Patients identified as being at risk of developing psychosis should be monitored closely by a multi-disciplinary team. Team members may include a general practitioner, social worker, psychiatrist and psychologist. By closely monitoring at-risk patients, their progression to frank psychosis can be detected earlier and appropriate treatment given in a timely manner. Prompt detection and treatment of psychosis is crucial, as a longer duration of untreated psychosis has been shown to result in poorer outcomes.

Table 2. Summary of the evidence supporting and arguments against early intervention in psychosis.

Evidence supporting early intervention
  • Evidence from small studies shows that psychotherapy (such as CBT) and pharmacotherapy can reduce the progression of ultra-high risk individuals to first-episode psychosis.
  • Studies show that raising public awareness and using mobile outreach detection teams to identify candidate patients significantly reduces the duration of untreated psychosis.

Arguments against early intervention
  • The economic cost of early intervention may be infeasible.
  • Most patients identified as being high risk do not progress to frank psychosis.
  • Treatment teams for early psychosis may disproportionately target patients with ‘help-seeking behaviour’ and thereby treat more patients who simply display signs and symptoms of a normal developmental process or ‘psychosis proneness’.
  • There are negative ethical implications associated with prescribing antipsychotics to a population that is not psychotic.

Acknowledgements

The authors would like to thank Professor Jeff Cubis and Professor David Harley for their guidance and expert opinion on the matter.

Conflict of interest

None declared.

Correspondence

H C Y Yu: u4788941@anu.edu.au


The role of the food industry in tackling Australia’s obesity epidemic

Whilst a number of factors contribute to Australia’s rapidly rising obesity rates, the role of fast food companies in addressing the epidemic remains controversial. This report discusses the contribution of fast food companies to high obesity rates, explores the notion of corporate social responsibility, and discusses a range of government policies that could be implemented to limit the contribution of fast food chains in promoting obesity.

Introduction

Obesity is a major concern in Australia, with 62.8% of the adult population classified as overweight (35.3%) or obese (27.5%). [1] Multiple factors contribute to these rising statistics, and whilst fast food companies undeniably contribute to obesity, defining the exact role they play and their responsibility remains controversial. Whilst difficult to define numerically, previous studies have offered various definitions of fast food, broadly describing it as food purchased from cafeterias, restaurants and ‘big brand’ companies (such as McDonalds) that is high in calories and low in nutritional value. [2-4] Food produced in restaurants, for example, is at least 20% higher in fat content than the home-cooked equivalent. [5] It has now been over a decade since the infamously termed ‘McLawsuit’, in which a group of overweight children in America filed legal action against the McDonalds corporation for their obesity-related health problems, first bringing the issue of corporate responsibility to a head. [6] Trends are now towards increasing regulation of the fast food industry, with recent debate over a fat tax in Australia [7] and New South Wales enforcing nutrition labelling of fast food products. [8] The obesity epidemic continues to contribute to the morbidity and health expenditure of many developed countries, with minimal resolution on the role that fast food companies should play in tackling it.

Obesity in Australia: The contribution of fast food companies

The complex array of factors contributing to obesity makes the issue of responsibility a difficult one. [9] Australia’s obesogenic environment comprises multiple factors, such as increasingly sedentary lifestyles, poor education regarding nutrition and the accessibility of fast food. [10] In this respect, fast food companies, government and the wider community are all stakeholders with differing degrees of responsibility.

Fast food companies are considered a key stakeholder in contributing to Australia’s obesogenic environment. This is attributed to factors such as their large portion sizes and marketing ploys that intensively promote the accessibility of unhealthy snack foods and target vulnerable groups such as low-income earners and children. [11] These factors make it difficult for consumers to make informed choices and resist unhealthy options, ultimately contributing to overeating and excess body weight. [12] Establishing a causal link between fast food companies and obesity is difficult due to the complexity of the relationship. An analysis by Chou, Rashad and Grossman (2005) indicated that a ban on fast food advertising would reduce the number of overweight children by 10% in the 3-11 year age group and by 12% amongst 12-18 year olds, suggesting a causal component to the relationship in this vulnerable population group. [13] Accessibility of fast food outlets is also a contributing factor, with Maddock (2004) showing a significant correlation between the prevalence of fast food restaurants and obesity. [14] Furthermore, ever-increasing portion sizes provide evidence for the influence of fast food companies on obesity, with Young and Nestle (2002) noting that portion sizes have paralleled the increase in average body weight. [15] Whilst some claim that fast food companies simply respond to consumer desires, and that the average consumer is well aware of the obesity epidemic, it can be argued that companies are still partly responsible for providing and promoting this supply.

Corporate Social Responsibility

Corporate Social Responsibility (CSR) is a form of self-regulation that corporations integrate into their business model. [16] It involves taking responsibility for the impact of the company’s decisions on society and the environment. [17] Guler and Crowther (2010) further describe CSR as honouring the triple bottom line of people, planet and profit rather than solely focusing on profit maximisation. [18]

Proponents of CSR claim that it maximises long-term profits by encouraging firms to operate sustainably, decreasing risks despite initial costs. [19] Wood (2010) argues that ‘strategic CSR’ rewards business for CSR activities via the positive responses of consumers, employees and shareholders. [20] Ethical business policy may lead to brand differentiation and customer loyalty, increasing purchase intention and willingness to pay. Similarly, employees may be attracted and motivated by strong CSR policies, potentially increasing recruitment, work ethic and employee loyalty. Successful CSR strategies can also improve a firm’s reputation, reduce external pressure from non-government organisations (NGOs) and attract shareholders. [21] For example, Becker-Olsen et al. (2006) argue that McDonald’s funding of programs such as Maths Online, Little Athletics and its Ronald McDonald House Charities acts as subconscious advertising to improve its reputation. [21-23]

However, as well as these incentives, firms also face challenges in establishing CSR policies. Some economists claim that CSR distracts from the role of business, which is to maximise profit. [24] The financial costs of introducing CSR policies may also be barriers for firms, particularly small businesses that lack the required resources. [20] Moreover, CSR does not necessarily equate to positive consumer perceptions, as the credibility of corporations is often doubted. [21] For example, partnerships between KFC and the McGraw foundation, McDonalds and WeightWatchers, and Nestle and Jenny Craig have been criticised as marketing ploys, termed ‘weightwashing’ by the Obesity Policy Coalition. [25]

In the context of Australia’s obesity epidemic, CSR policies in the food industry may have varied impacts. Fast food companies, at a time of increasing obesity rates, may see an opportunity in utilising health policy to establish consumer goodwill and brand value, creating a profit-driven incentive to engage in obesity prevention. Self-interest in CSR policy construction could, however, be detrimental to preventative health, with fast food companies, for example, shifting blame from ‘foods’ to ‘sedentarism’ in their marketing rather than altering the quality of their products. [20] Additionally, as a defensive response to avoid government regulation, the food industry has created an opening for itself in a health and sports promotion role. Whilst this contributes to preventative health programs in the short term, it may in time detract from the conventional governmental role in public health, relieving government of some responsibility without effectively satisfying community needs.

Despite its challenges, the potential benefits of CSR and the rise of privatisation and globalisation make self-regulation in the food industry an important, and perhaps inevitable, approach to consider in tackling obesity. [25]

Government Regulation

In light of steadily increasing obesity rates in many Western societies, a number of governments have implemented policies to reduce the impact of fast food companies in promoting overeating. [26,27] Outlined below are four categories of legislative change and their implications.

Restricting fast food advertising

Fast food advertising can send misleading messages to consumers, particularly those less informed. [28] Ethically, from a communitarian perspective, restricting advertising may denormalise fast food by making it less ubiquitous, helping to change social attitudes, which is key to combating obesity. [29] Conversely, restrictions on advertising limit choice by making consumers less aware of their options, contradicting the principle of autonomy. Whilst it could be said that continued advertising of healthy foods would preserve this autonomy, critics argue that fast food is not harmful in moderation and thus consumers should be able to make informed decisions about their purchases. Similarly, in alignment with narrative ethics, individuals have different approaches to eating, which may be compromised by eliminating the information delivered by fast food advertising. [30,31] It is important to note, however, that many of these concerns assume advertising delivers accurate information, which is often not the case. Critics also claim that restricting advertising is ineffective, as there are more important factors contributing to obesity. In addition, there are concerns about how the distinction between healthy and unhealthy foods would be made, and about the rights of companies to market their goods. [32]

Examples of restrictions on fast food advertising include banning fast food company sponsorship of sporting events and celebrity endorsement of unhealthy foods, as well as banning advertising that targets children, a population group particularly susceptible to marketing ploys. Bans on advertising to children during prime-time hours have already been successfully implemented in a number of countries. Quebec, for example, has had a 32-year ban on fast food advertising to children, leading to an estimated US$88 million reduction in fast food expenditure. [33] Australia has been moving towards restricting fast food advertising that targets children, with the Australian Food and Grocery Council resolving not to advertise fast foods in programs where at least 35% of the audience are children. [34] However, analyses of the difficulties of self-regulation in the food industry indicate that its effectiveness depends on the rate of engagement by individual companies and is not sufficient to adequately protect consumers. [35,36]

Cost measures

A ‘fat tax’ would involve taxing foods or beverages high in fat (other ‘unhealthy’ components such as sugar and salt could also be taxed). It aims to discourage consumers from purchasing unhealthy products and to offset their health costs with the tax revenue generated. [37] Subsidies for healthy food options are considered less practical, with a greater cost-burden for taxpayers. Critics argue that a ‘fat tax’ would disproportionately affect low socio-economic consumers unless healthy alternatives are made cheaper. [38] Some argue that obese individuals are also less responsive to price increases than consumers of average BMI, reducing the effectiveness of a tax. [39] A ‘fat tax’ could even exacerbate health problems: a tax only on saturated fat, for example, may increase salt intake, which increases cardiovascular risk. [38] Denmark became the first country to introduce a tax on fat in 2011; however, it has since resolved to repeal the legislation, claiming that the tax increased consumer prices and corporate administration costs and damaged Danish employment prospects without changing Danish eating habits. [40] This reduces the likelihood of the approach being trialled by other countries, including Australia. It should be noted, however, that country-specific differences may have contributed to its lack of success, including the ability of the Danish population to travel to neighbouring countries to maintain their eating habits despite the tax, which would not be feasible in Australia. Alternatively, the government could consider combining subsidies for healthy food options with a ‘fat tax’, as this approach would be more acceptable to the public than a tax alone and would yield a lower cost-burden for taxpayers than subsidies alone.

Nutrition labelling

Nutrition labelling aims to ensure consumers understand the nutritional value of foods. In Australia, all food labels must abide by the national Food Standards Code. [41] Options to simplify food labelling include traffic light labelling, which codes foods red, yellow or green based on their fat, sugar and salt content. [42] Another option is health warnings on unhealthy foods to deter consumption. [43] Australia-wide, a new star rating system for packaged foods has been developed by a working group that included industry and public health experts; as of June 2013, this has been approved by state and federal governments. [44] The scale will rate foods from half a star to five stars based on nutritional value, despite concerns raised by the Australian Food and Grocery Council over the cost to manufacturers and how nutritional value would be determined. Sacks et al. (2009) argue, however, that there is insufficient evidence to suggest that food labelling would reduce obesity. [45] Critics also argue that more restrictive practices, such as health warnings, are excessive and impractical considering the ubiquity of high fat foods. [46]

Limit physical accessibility of fast food

Easy accessibility of unhealthy foods makes them difficult to resist. Making fast food less accessible again denormalises it, helping to change social attitudes. This is supported by studies showing that obesity rates are higher in areas with a greater number of fast food outlets. [14] Zoning laws have been suggested as a policy tool to limit the accessibility of fast food, with findings suggesting success in reducing alcohol-related problems. [47] Other approaches to restrict access include removing fast food from high-accessibility shelves in supermarkets, banning fast food vending machines and banning fast food from school canteens. Victoria, for example, has imposed strict canteen rules restricting the sale of fast food to twice a term. [48] However, critics argue that restricting the accessibility of fast food may undermine consumer autonomy and choice, impinge on the legal rights of companies to market their goods, and set a precedent for government intervention in other areas. [49]

Conclusion

Whilst it is difficult to define the extent of the role that fast food companies play, there is no doubt that they significantly contribute to Australia’s obesity epidemic through their large portion sizes, low quality food, extensive fast food advertising and high accessibility. Ultimately, combating obesity will require a multi-faceted approach that denormalises unhealthy foods – a process that requires both consumers and government to take a role in regulating the quality, marketing practices and accessibility of unhealthy products produced by the food industry.

Conflict of interest

None declared.

Correspondence

S Bobba: samantha.bobba@gmail.com

References

[1] Australian Bureau of Statistics. Overweight and Obesity [Internet]. Canberra: ABS; 2012. [Cited 2013 Apr 11]. Available from: http://www.abs.gov.au/ausstats/abs@.nsf/Lookup/33C64022ABB5ECD5CA257B8200179437?opendocument

[2] Driskell, J, Meckna, B, Scales, N. Differences exist in the eating habits of university men and women at fast-food restaurants. Nutr Res. 2006; 26(10):524-530.

[3] Pereira, M, Karashov, A, Ebbeling, C, Van Horn, L, Slattery, M, Jacobs, D, Ludwig, D. Fast-food habits, weight gain, and insulin resistance (the CARDIA study): 15-year prospective analysis. Lancet. 2005; 365(9453):36-42.

[4] Duffy, B, Smith, K, Terhanian, G, Bremer, J. Comparing data from online and face-to-face surveys. Int J Market Res. 2005; 47(6):615-639.

[5] Seiders, K, Petty, RD. Obesity and the role of food marketing: a policy analysis of issues and remedies. J Public Policy Mark. 2004; 23(2):153-169.

[6] Mello, MM, Rimm, ER, Studdert, DM. The McLawsuit: The fast-food industry and legal accountability for obesity. Health Aff. 2003; 22(6):207-216.

[7] Lewis, S. Federal government backed study into fat tax on fast foods. News.com [Internet]. May 20 2013. [Cited 2013 Apr 15]; Available from: http://www.news.com.au/lifestyle/food/federal-government-backed-study-into-fat-tax-on-fast-foods/story-fneuz8wn-1226646283704

[8] Thompson, J. Forcing fast food chains to join the fight against obesity. ABC News [Internet]. November 8 2010. [Cited 2013 Apr 20]; Available from: http://www.abc.net.au/news/2010-11-08/forcing-fast-food-chains-to-join-the-fight-against/2328572

[9] Frewer, LJ, Risvik, E, Schifferstein, H. Food, people and society: A European perspective of consumers’ food choices. Heidelberg: Springer; 2001.

[10] Germov, J, Williams, L. A Sociology of food & nutrition: The social appetite. Melbourne: Oxford University Press; 2004.

[11] Brownell, KD. Fast food and obesity in children. Pediatrics. 2004; 113(1):132.

[12] Young, LR, Nestle, M. Portion sizes and obesity: Responses of fast-food companies. J Public Health Policy. 2007; 28(2):238-248.

[13] National Bureau of Economic Research. Fast-food restaurant advertising and its influence on childhood obesity. Cambridge: NBER; 2005. [Cited 2013 Apr 5]. Available from: http://www.nber.org/papers/w11879

[14] Maddock, J. The relationship between obesity and the prevalence of fast food restaurants: state-level analysis. Am J Health Promot. 2004; 19(2):137-143.

[15] Young, LR, Nestle, M. The contribution of expanding portion sizes to the US obesity epidemic. Am J Public Health. 2002; 92(2):246-249.

[16] McBarnet, DJ, Voiculescu, A, Campbell, T. The new corporate accountability: corporate social responsibility and the law. Cambridge: Cambridge University Press; 2007.

[17] Maloni, MJ, Brown, ME. Corporate social responsibility in the supply chain: An application in the food industry. J Bus Ethics. 2007; 68(1):35-52.

[18] Guler, A, Crowther, D. A handbook of corporate governance and social responsibility. Fanham: Gower; 2010.

[19] Porter, ME, Kramer, MR. Strategy and society: the link between competitive advantage and corporate social responsibility. Harv Bus Rev. 2006; 84(12):78-92.

[20] Wood, DJ. Measuring corporate social performance: A review. Int J Manag Rev. 2010; 12(1):50-84.

[21] Becker-Olsen, KL, Cudmore, AB, Hill, RP. The impact of perceived corporate social responsibility on consumer behaviour. J Bus Research. 2006; 59(1):46-63.

[22] McDonalds. McDonalds Corporation: Worldwide social corporate responsibility. 2009. [Cited 2013 Apr 16]. Available from: http://www.aboutmcdonalds.com/etc/medialib/csr/docs.Par.32488.File.dat/mcd063_2010%20PDFreport_v9.pdf

[23] Royle, T. Realism or idealism? Corporate social responsibility and the employee stakeholder in the global fast-food industry. Business Ethics: A European Review. 2005; 14(1):42-55.

[24] Peattie, K. Corporate social responsibility and the food industry. AIFST. 2006; 20(2):46-48.

[25] Martin, J. Rich pickings, fat or thin. Sydney Morning Herald [Internet]. 2010. [Cited 2013 Apr 27]; Available from: http://www.smh.com.au/lifestyle/wellbeing/rich-pickings-fat-or-thin-20110115-19rvj.html

[26] Mitchell, C, Cowburn, G, Foster, C. Assessing the options for local government to use legal approaches to combat obesity in the UK: Putting theory into practice. Obes Rev. 2011; 12(8):660-667.

[27] Diller, PA, Graff, S. Regulating food retail for obesity prevention: How far can cities go? J Law, Med Ethics. 2011; 39(1):89-93.

[28] Carter, OB, Patterson, LJ, Donovan, RJ, Ewing, MT, Roberts, CM. Children’s understanding of the selling versus persuasive intent of junk food advertising: Implications for regulation. Soc Sci Med. 2011; 72(6):962-968.

[29] Veerman, JL, Van Beeck, EF, Barendregt, JJ, MacKenbach, JP. By how much would limiting TV food advertising reduce childhood obesity. Eur J Public Health. 2009; 19(4):365-369.

[30] Caraher, M, Landon, J, Dalmeny, K. Television advertising and children: lessons from policy development. Public Health Nutr. 2006; 9(5):596-605.

[31] McNeill, P, Torda, A, Little, JM, Hewson, L. Ethics Wheel. Sydney: University of New South Wales; 2004.

[32] Henderson, J, Coveney, J, Ward, P, Taylor, A. Governing childhood obesity: Framing regulation of fast food advertising in the Australian print media. Soc Sci Med. 2009; 69(9):1402-1408.

[33] Dhar, T, Baylis, K. Fast-food consumption and the ban on advertising targeting children: the Quebec experience. J Marketing Research. 2011; 48(5).

[34] Collier, K. Crackdown on junk food advertising during shows children watch. Sydney Morning Herald [online newspaper]. November 02 2012. [Cited 2013 Apr 20]; Available from: http://www.news.com.au/lifestyle/parenting/crackdown-on-junk-food-advertising-during-shows-children-watch/story-fnet08ui-1226508730232

[35] Sharma, L, Teret, S, Brownell, K. The Food Industry and Self-Regulation: Standards to promote success and to avoid public health failures. Am J Pub Health. 2010; 100(2):240-246.

[36] King, L, Hebden, L, Grunseit, A, Kelly, B, Chapman, K, Venugopal, K. Industry self regulation of television food advertising: responsible or responsive? Int J Ped Obes. 2011; 6(2):390-398.

[37] Mytton, O, Gray, A, Rayner, M, Rutter, H. Could targeted food taxes improve health? J Epidemiol Community Health. 2007; 61(1):689-694.

[38] Clark, JS, Dittrich, OL. Alternative fat taxes to control obesity. Int Adv Econ Res. 2007; 16(4):388-394.

[39] Tiffin, R, Arnoult, M. The public health impacts of a fat tax. Eur J Clin Nutr. 2011; 65(1):427-433.

[40] Denmark to scrap world’s first fat tax. ABC News [online newspaper]. November 11 2012. [Cited 2013 Apr 16]; Available from: http://www.abc.net.au/news/2012-11-11/denmark-to-scrap-world27s-first-fat-tax/4365176

[41] Martin, T, Dean, E, Hardy, B, Johnson, T, Jolly, F, Matthews, F, et al. A new era for food safety regulation in Australia. Food Control. 2003; 14(6):429-438.

[42] Magnusson, R. Obesity prevention and personal responsibility: the case of front-of-pack labelling in Australia. National Institute of Health; 2010. [Cited 2013 Apr 24]; Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091573/

[43] Cinar, AB, Murtomaa, H. A holistic food labelling strategy for preventing obesity and dental cavities. Obes Rev. 2009; 10(3):357-361.

[44] Winter, C. Government touts star rating system for food to fight obesity epidemic, ABC News [online newspaper]. June 27 2013. [Cited 2013 Apr 21]; Available from: http://www.abc.net.au/news/2013-06-13/food-labelling-system-to-encourage-healthy-eating-options/4752032

[45] Sacks, G, Rayner, M, Swinburn, B. Impact of front-of-pack ‘traffic-light’ nutrition labelling on consumer food purchases in the UK. Health Promot Int. 2009; 24(4):344-352.

[46] Dunbar, G. Task-based nutrition labelling. Appetite. 2010; 55(3):431-435.

[47] Mair, JS, Pierce, MW, Teret, SP. The use of zoning to restrict fast food outlets: a potential strategy to combat obesity. 2005. [Cited 2013 Apr 22]; Available from: http://www.publichealthlaw.net/Zoning%20Fast%20Food%20Outlets.pdf

[48] Rout, M. Junk food bans at schools. Herald Sun [online newspaper]. October 16 2006. [Cited 2013 Apr 21]; Available from: http://www.heraldsun.com.au/news/victoria/junk-food-bans-at-schools/story-e6frf7kx-1111112365970

[49] Reynolds, C. Public health law and regulation. Annandale: Federation Press; 2004.

 


Chocolate, Cheese and Dr Chan: Interning at the World Health Organization Headquarters, Geneva

Introduction

In early 2013, Ban Ki-moon, Margaret Chan and Kofi Annan had something in common: they may be completely unaware of it, but I saw them speaking in Geneva, and not merely because I lurked around the Palais des Nations. Rather, wielding my very own blue United Nations (UN) ID card as an intern at the World Health Organization (WHO) headquarters, I became a de facto insider to events on the international stage.

Between January and March, I undertook an 11-week internship with the WHO Emergency and Essential Surgical Care (EESC) Program in the Clinical Procedures Unit, Department of Health Systems Policies and Workforce. The WHO is the directing and coordinating authority for health within the UN. It is responsible for providing leadership on global health issues, setting the research agenda, setting and articulating norms, standards and evidence-based policy options, providing technical support to countries, and monitoring and assessing health trends. [1] At some point during your studies, you will encounter material developed and disseminated by the WHO. You may have cited WHO policies in your assignments, looked up country statistics from the Global Health Observatory before your elective, or at least seen the ubiquitous Five Moments for Hand Hygiene posters, which emerged from WHO guidelines. [2] In other words, if you haven’t heard of the WHO or don’t recognize the logo, then I suggest taking a break from textbooks and clicking around the many fascinating corners of their website. Perhaps watch Contagion for a highly stylized (but filmed partially on location) view of an aspect of their work. [3]

The WHO and Surgery

The WHO established the EESC Program in 2005, in response to growing recognition of the unaddressed burden of mortality and morbidity caused by treatable surgical conditions. [4] This reflects the lack of prioritization of surgical care systems in national health plans and the ongoing public health misconception that surgical care is not cost-effective and impacts only upon a minority of the population. [5] These misconceptions apply as much to those in the field of public health as to surgeons on the ground and in the literature, let alone those at other agencies. This was reflected by the question I all too commonly faced when explaining my internship: “The WHO is involved in surgery?”

In reality, surgical conditions contribute an estimated 11% of the global burden of disease, and surgery is a field that cuts across a number of public health priorities. [6] For example, progress on many of the Millennium Development Goals demands the prioritization of surgical care systems, most obviously in connection with maternal and newborn care. [7] Timely access to surgical interventions, including resuscitation, pain management and caesarean section, is vital to reducing maternal mortality. [8] Even in its most basic forms, surgery can play a role in both preventative and curative therapies, from male circumcision in relation to HIV control to the aseptic suturing of wounds. Surgery also has a very real impact on poverty by addressing the underlying causes of disability that often contribute to unemployment and debt, including congenital and injury-incurred disabilities and preventable blindness. [9] Surgery is a complex intervention because it relies upon numerous elements of the health system functioning together. There is little point in having access to basic infrastructural amenities like electricity, running water and oxygen if they are unavailable at the moment of an emergency. Similarly, having equipment and supplies alone is insufficient when there is a shortage of a skilled workforce to wield them.

The WHO EESC is dedicated to supporting life-saving surgical care systems in the areas of greatest need, through collaborations between the WHO, Ministries of Health and other agencies. The WHO Global Initiative for Emergency and Essential Surgical Care (GIEESC) is an online network linking academics, policy makers, health care providers and advocates across 100 countries. Together, they developed the WHO Integrated Management for Emergency and Essential Surgical Care (IMEESC) toolkit to equip health and government workforces with WHO recommendations, skills and resources, focusing on emergency, trauma, obstetrics and anaesthesia, in order to improve the quality of, and access to, surgical services. [4]

In terms of my role, let me begin with the caveat that, as an intern, one can be called upon to conduct a wide variety of tasks within the huge scope of the WHO. My experiences differed greatly from those of colleagues and are not necessarily reflective of what one may encounter in other departments, or even in the same program at different times of year. I applied through the online internship application, but amongst my colleagues, this was in fact a rarity. [10] By far the majority of interns had applied directly or through their university to the specific areas of the WHO that aligned with their interests.

I undertook both administrative and research tasks, working very closely with my supervisor, Dr. Meena Cherian. There is a paucity of evidence capturing surgical capacity, including infrastructure, equipment, health workforce and surgical procedures provided, across facilities in low- and middle-income countries. Through the GIEESC network, the WHO Situation Analysis Tool captures the capacity of first-referral health facilities to provide emergency and essential surgical care. [4] One of my key roles was in data collation and analysis. In terms of tangible outcomes, over the span of my internship I was able to contribute to two research papers for submission. In terms of my education, however, it was the administrative roles that demonstrated many of the key lessons about working in international organizations, and for which I am most grateful. Although menial tasks like photocopying and editing PowerPoint presentations can seem futile, carrying those documents into meetings allows one to witness the behind-the-scenes exchanges that drive many international organizations. This deepened my understanding of the WHO’s functions, strengths and limitations. In my experience, with funding limitations exacerbated by the rise of game-changing new players from the non-government sector, such as the Bill and Melinda Gates Foundation, attracting and justifying resource allocation to marginalized areas like surgery becomes a full-time job in itself. Opportunities for collaboration with such NGOs can result in successful innovation and programming, as with the polio eradication campaign. However, although inter-sector and inter-department collaboration is seemingly an obvious win-win, the undercurrents of turf wars and the politicization of health issues can make such collaborations seem like a delicate diplomatic performance. Effective WHO engagement with external stakeholders cannot come at the cost of its intergovernmental nature or independence from those with vested interests. [1] Such are the limitations imposed on international organizations by the international community and the complex relationships between member states.

Ultimately, learning to collaborate with colleagues across cultural, economic, resource and contextual barriers under the tutelage of my supervisor has implications for any future endeavor in our increasingly globalized workplaces. Learning to navigate such competing social and political interests is as applicable to clinical practice as it is to public health. Furthermore, the Experts for Interns program initiated by the WHO Intern Board provides bi-weekly lunchtime seminars specifically designed to broaden the scope of the intern experience by facilitating discussion with experts in various fields. [11] These were particularly valuable as an opportunity to ask direct questions, challenge preconceived notions and re-examine the historical development of public health, global health and the role and scope of the WHO.

Personal Highlights and Challenges

Geneva is an international city, home not only to WHO HQ and the European headquarters of the United Nations, but also to a number of other UN agencies and international non-governmental organisations, including Médecins Sans Frontières and the International Committee of the Red Cross. This means that, at any given time, there are a huge number of international and cultural events occurring. During my stay alone, there was the WHO Executive Board Meeting, the Geneva Human Rights Film Festival, a number of conferences, the UN Human Rights Council and some truly high-profile speakers. Hans Rosling, the rock star of epidemiology and a founder of Gapminder, has put together a great TED Talk, but seeing him speak in person was one of my most lively, educative and stimulating experiences. [12,13] It is a memory I will treasure and return to for motivation, particularly when rote learning another fact for an exam seems impossible. For many of us who aspire towards a career in global health, seeing these famous faces and learning firsthand about their work and career pathways is more than just inspiring; it can become a raison d’être.

From a slightly more cavalier perspective, Switzerland is centrally located in Europe, and Geneva as an international city is a great base from which to travel. It is easy to find discounted flights to major European destinations, and the train to Paris takes only three hours. As an unpaid intern, your weekends are your own, and most supervisors are generous about allowing travel grace. The opportunity to explore a new city every weekend is alluring, and with options like the demi-tarif, the half-price fare card for Swiss trains, it is certainly a possibility. The Geneva and Lac Léman region features charming villages and cities with the sort of breathtaking mountain views that make everything look like a postcard, not least when covered in a blanket of pristine white snow. This is the other key attraction of Geneva in winter: if you are into snow sports, you will be based within an easy day trip of some of the best pistes in the world. Indeed, the Swiss penchant for such trips seems to be why Sundays find Geneva a ghost town of sorts. Aside from the odd museum, almost nothing is open on Sundays, to the point where, if you make my mistake of arriving on a Sunday, it may be difficult to even find food.

Even if you can find food on a Sunday, affording it and anything else in Geneva is not easy. The cost of living is high and, despite the high turnover of expat staff, finding a place to stay is extremely difficult. As Australians we have it easier than most, being able to make use of OS-HELP loans while studying overseas. There is an ongoing discussion about paid internships within the UN, and practices vary amongst agencies. Some, notably the International Labour Organization, pay interns, while the WHO and others do not. This has become an advocacy issue amongst interns, as the lack of pay severely limits access to the educative and career-oriented experiences internships provide, particularly for those from middle- and low-income nations. However, it seems unlikely that this will change in the near future. Nonetheless, with careful saving, planning and some basic austerity measures, finances should not deter you from this experience.

Another potential challenge is that Geneva is in francophone Switzerland and is surrounded on most sides by France. Many WHO staff and interns live, or at least shop, across the French border, where I discovered amazing supermarkets with entire aisles devoted to Swiss and French cheeses and chocolate. As a hopeless Francophile, this was a delicious highlight. I took classes and developed my French while safely working in a predominantly anglophone environment. If you have never studied French, learning the basics before arrival is recommended, though it is possible to get around Geneva without it, as the locals are very generous in this regard. However, there were often times when my linguistic limitations perpetuated anglophone dominance, forcing colleagues to switch into my language of choice. Such language barriers can also contribute to cultural misunderstandings. For example, early on I committed the faux pas of being too casual in a more hierarchical workplace, where professional titles were used even in personal conversations. Coming from the more egalitarian Australian context, this can come as something of a culture shock, though it varies considerably between different offices and departments.

As a result of my experiences, I am now both more cynical and more hopeful about the future of global health. The bureaucratic limitations of the WHO are also where its authority lies. Sifting through convoluted Executive Board meetings, it is easy to become skeptical about the relevance of this 65-year-old organization. However, this belies the power of health mandates supported by member state consensus, whether in regard to the Tobacco Free Initiative or the Millennium Development Goals. Such change and reform, though slow, is broad-reaching and invigorating.

Finally, the most significant and meaningful experiences I shared during my internship were not with the famous faces of global health, but with my peers. Across the various organizations based in Geneva, there are a huge number of interns from all over the world. Making connections with these kindred spirits, who shared my interests and a similar desire for an international career, was a privilege. Even when our areas of interest did not intersect, it was amazing to learn from the expertise of fellow interns and students. For example, a fascinating experience was meeting a student at CERN (Conseil Européen pour la Recherche Nucléaire, better known to us non-physicists as the home of the Large Hadron Collider), who gave me a sticky beak at the labs and lifestyles of modern physics' greatest thinkers. Just as he is progressing towards becoming a don of theoretical physics, at some point in the next few decades many of the friends I have made through this internship are going to become the next generation of global health leaders. More importantly, creating such networks across continents and across specializations has been instrumental in shaping my sense of self and perspective, and has left an indelible mark in the form of new education, career and lifestyle aspirations.

Where to from here for you

There is an online intern application through the WHO website. Although I completed my internship as my elective term, I also encountered a number of students from Australia who undertook it as a summer opportunity to experience the organization or as part of their research. I would strongly urge any student with interests in public health, global health, policy or research to consider applying for this opportunity, and to do so by contacting the departments of your interest directly.

In terms of surgery and global health, please visit the EESC program website at www.who.int/surgery for further information, and to become a GIEESC member.

Acknowledgements

The author wishes to acknowledge the generosity of the WHO Clinical Procedures Unit, particularly Dr. Meena Cherian of the WHO Emergency and Essential Surgical Care Program, who hosted her internship, and the financial support of the Australian government OS-HELP loan program.

Conflict of interest

None declared.

Correspondence

L Hashimoto-Govindasamy: laksmisg@gmail.com

References

[1] World Health Organization. About the WHO. [Internet]. 2013 [cited 2013 March 26]. Available from: http://www.who.int/about/en/

[2] World Health Organization. Clean care is safer care: five moments in hand hygiene. [Internet]. 2013 [cited 2013 March 26]. Available from: http://www.who.int/gpsc/tools/Five_moments/en/

[3] Internet Movie Database. Contagion. [Internet]. 2013 [cited 2013 March 26]. Available from: http://www.imdb.com/title/tt1598778/

[4] World Health Organization. Emergency and essential surgical care. [Internet]. 2013 [cited 2013 March 26]. Available from: http://www.who.int/surgery/en/

[5] Weiser TG, Regenbogen SE, Thompson KD, Haynes AB, Lipsitz SR, Berry WR, Gawande AA. An estimation of the global volume of surgery: a modeling strategy based on available data. Lancet. 2008;372:139–44.

[6] Debas HT, Gosselin R, McCord C, Thind A. Surgery. In: Jamison D, editor. Disease Control Priorities in Developing Countries. 2nd ed. New York: Oxford University Press; 2006.

[7] United Nations. Millennium Development Goals. [Internet]. 2013 [cited 2013 March 26]. Available from: http://www.un.org/millenniumgoals/

[8] Kushner A, Cherian M, Noel L, Spiegel DA, Groth S, Etienne C. Addressing the Millennium Development Goals from a surgical perspective: essential surgery and anesthesia in 8 low- and middle-income countries. Arch Surg. 2010;145(2):154-160.

[9] PLOS Medicine Editors. A crucial role for surgery in reaching the UN Millennium Development Goals. PLOS Med. 2008;5(8):e182. doi:10.1371/journal.pmed.0050182

[10] World Health Organization. WHO employment: WHO internships. [Internet]. 2013 [cited 2013 March 26]. Available from: http://www.who.int/employment/internship/interns/en/index1.html

[11] WHO Interns. Experts for interns (E-4-I). [Internet]. 2013 [cited 2013 March 26]. Available from: http://whointerns.weebly.com/experts-for-interns.html

[12] TED: Ideas worth spreading. Hans Rosling: Stats that reshape your worldview. [Internet]. 2006 [cited 2013 March 26]. Available from: http://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen.html

[13] Gapminder. Gapminder: for a fact-based worldview. [Internet]. 2013 [cited 2013 March 26]. Available from: http://www.gapminder.org/

Categories
Review Articles

A systematic review evaluating non-invasive techniques to diagnose genetic disorders in a human fetus and the ethical implications of their use

Introduction: Genetic disorders are a significant cause of neonatal morbidity and mortality. [1] Diagnosing a genetic disorder currently involves invasive tissue sampling, which carries an increased risk of miscarriage. The discovery of cell-free fetal DNA (cffDNA) in maternal plasma has enabled the development of non-invasive prenatal diagnostic tests (NIPD). [2,3] The scientific and ethical implications are examined. Methods: Medline, PubMed and Cochrane Library were searched for original research articles, review articles and meta-analyses focussed on screening and diagnosis of fetal genetic disorders. Results: 422 original research and review articles were assessed using processes in the Cochrane Handbook for Systematic Reviews of Interventions. [4] Using maternal plasma obtained during the second trimester, researchers were able to sequence the fetal genome with up to 98% accuracy. Clinicians reported the test will improve prenatal screening uptake, and reduce morbidity and mortality associated with genetic disorders. Ethicists argue it has implications for informed consent, rates of termination, reliability of future applications, inadvertent findings in clinical settings, commercial exploitation and inconsistent use of the technology internationally. Conclusions: Once NIPD tests utilising cffDNA are refined and costs reduced, it is likely that their implementation will affect both specialist genetic and routine antenatal services. However, given the complex set of ethical, legal and sociocultural issues raised by NIPD, professional education, public engagement, formal evaluation and the development of international standards are urgently needed. Health systems and policy makers must prepare to respond to cffDNA technology in a responsible and effective manner.

Introduction

Most pregnant women wish to be reassured that their unborn baby is healthy. [5] The aim of antenatal care is therefore to select screening and diagnostic tests that are accurate, safe and can be performed sufficiently early to allow parents to plan ahead or terminate the pregnancy in the event that a fetal abnormality is diagnosed. [6] Genetic disorders are a significant cause (20%) of neonatal mortality. [1] At present, maternal serum screening, alone or in combination with ultrasound, is used to identify fetuses at risk of aneuploidy and other disorders. [7] Unfortunately, neither maternal serum screening nor ultrasound provides information on the genetic constitution of a fetus or allows a definitive diagnosis to be made. [8] For this, fetal cells must be invasively sampled from the placenta (chorionic villus tissue), amniotic fluid or fetal blood – all of which increase the risk of miscarriage. [9,10] This increased risk makes the decision to use invasive prenatal diagnosis difficult, particularly as there are still only very limited treatment options. [11] As a result, the medical community has sought to develop reliable and safe methods for achieving non-invasive prenatal diagnosis (NIPD), in addition to future treatment options. [12] Through NIPD, researchers hope to improve screening uptake, and reduce morbidity and mortality associated with genetic disorders. [1] Ethicists argue that NIPD transects existing distinctions between screening and diagnostic tests, and has implications for informed consent or choice. [12]

Methods

MEDLINE, PubMed and Cochrane Library were searched weekly between September 2012 and April 2013 for original research articles, review articles and meta-analyses focussed on screening and diagnosis of fetal genetic disorders. MeSH headings used were: Genetics, Medical; Genetic Testing; and Fetus. Search terms used were: non-invasive, whole-genome and sequencing. Results were limited to human studies written in English between 1995 and 2013.

Results

The search resulted in 422 articles being identified; these were subsequently examined. The majority of publications were original research and review articles, although there was one meta-analysis by Alfirevic et al. (2003). [6] Many publications (217) were excluded for their limited scope or irrelevance.

Maternal serum screening and ultrasound are current methods of choice for screening pregnancies at risk of genetic disorders. [8,13] However, both methods rely on measuring epiphenomena rather than core pathology. Consequently, both tests have limited sensitivity and specificity and can only be used within a relatively narrow gestational period. [14] To achieve a definitive diagnosis chorionic villi sampling (CVS), amniocentesis or cordocentesis must be used. [6,8]

CVS is an invasive diagnostic procedure performed after 10 weeks gestation that is used for karyotyping when first trimester screening suggests a high risk of aneuploidy. [8] It is also used for fetal DNA analysis if the parents are known to be carriers of an identifiable gene mutation, such as cystic fibrosis or thalassaemia. [9] The procedure involves ultrasound-guided aspiration of trophoblastic tissue using either the trans-cervical or trans-abdominal routes. The tissue is then analysed with fluorescence in situ hybridisation polymerase chain reaction (FISH PCR). Like CVS, amniocentesis involves ultrasound-guided aspiration of amniotic fluid but is performed after 15 weeks gestation. [6] Cordocentesis involves direct sampling of fetal blood from the umbilical cord but is rarely performed and will not be discussed further in this article.

The benefit of CVS is that it can be performed at an earlier gestation, facilitating earlier diagnosis and providing the opportunity to terminate the pregnancy by suction curettage of the uterus. The benefits of amniocentesis include the lower background rate of miscarriage and the avoidance of isolated placental mosaicism, which occurs in 1% of samples. [8] The primary risk with CVS and amniocentesis is miscarriage. The level of risk is similar for the two tests (reported risk ranges from 1% to 1 in 1600) and is operator dependent. [6,7] Researchers have attempted to reduce this risk by developing NIPD methods that allow the direct analysis of fetal genetic material. [2,12,14-22] Better screening tests will achieve a higher detection rate combined with a lower false positive rate, resulting in less invasive testing and fewer procedure-related miscarriages.
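
To make the final point concrete, the short Python sketch below works through a hypothetical screening cohort. All figures – the cohort size, disorder prevalence, test characteristics and procedure-related miscarriage risk – are illustrative assumptions rather than values drawn from the cited studies; the point is simply how a lower false positive rate shrinks the pool of women referred for invasive testing.

# Illustrative arithmetic only: every figure below is a hypothetical assumption.
def invasive_workload(n, prevalence, sensitivity, fpr, miscarriage_risk):
    affected = n * prevalence
    true_positives = affected * sensitivity
    false_positives = (n - affected) * fpr
    referred = true_positives + false_positives  # screen-positives offered CVS/amniocentesis
    return round(referred, 1), round(referred * miscarriage_risk, 1)

# 10,000 pregnancies, 1-in-500 prevalence, 1% procedure-related miscarriage risk.
print(invasive_workload(10_000, 1 / 500, 0.85, 0.05, 0.01))   # conventional screening: (516.0, 5.2)
print(invasive_workload(10_000, 1 / 500, 0.98, 0.002, 0.01))  # cffDNA-style screening: (39.6, 0.4)

Under these assumed figures, the test with the lower false positive rate reduces invasive referrals more than ten-fold while missing fewer affected pregnancies.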

Much of the early work on NIPD focussed on the isolation of fetal nucleated cells that had entered the maternal blood. [14] However, the concentrations of these cells were low, meaning the tests had low sensitivity and specificity. [15,22] Later methods were inspired by the presence of tumour-derived DNA in the plasma of cancer patients. [23,24] In 1997, Lo et al. observed an analogous phenomenon in pregnancy by identifying Y chromosomal DNA sequences in the plasma of women carrying male foetuses. [25] Replication of this study has shown that approximately 10% of the cell-free DNA in a pregnant woman's plasma originates from the fetus she carries. [14,17,18,20] Since then, several groups have developed NIPD tests, but most were only capable of detecting gross abnormalities such as aneuploidies, and were limited by small sample sizes and substandard accuracy. [17,18,22,26,27] In June 2012, Kitzman et al. reconstructed the whole-genome sequence of a human fetus using samples obtained relatively non-invasively during the second trimester: paternal buccal DNA and maternal plasma containing both maternal DNA and cffDNA. [2] Predicting which genetic variants were passed from mother to fetus was achieved by resolving the mother's haplotypes – groups of genetic variants residing on the same chromosome – and combining this result with shotgun genome sequencing of the father's DNA and deep sequencing of maternal plasma DNA. [19] Comparing the results of this method with cord blood taken at delivery showed that inheritance was predicted with 98.1% accuracy. The study sequenced only two fetuses at a cost of $50,000 each, and is yet to be reproduced. Researchers from Stanford University were able to sequence the fetal genome without a paternal saliva sample, although this was less accurate than the method used by Kitzman et al. [18] This latter method forms the basis of commercially available NIPD tests being offered by laboratories. [28] In Australia, NIPD testing is currently limited to trisomies 21, 18 and 13 and abnormalities of the sex chromosomes, is not eligible for a Medicare rebate and costs upwards of $1,250. [29] It is anticipated that analysing samples for NIPD locally will reduce the cost and drive demand. [30,31]
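
The haplotype-counting idea at the heart of this approach can be illustrated with a toy sketch. This is emphatically not the published pipeline – the function, the read counts and the assumed ~10% fetal fraction are all hypothetical – but it shows why phasing the maternal genome matters: at maternal heterozygous sites, alleles on the transmitted maternal haplotype are expected in roughly half of informative plasma reads, while alleles on the untransmitted haplotype appear at only about (1 - f)/2 for a fetal fraction f, a difference too subtle to call site-by-site but detectable when aggregated across a phased block.

# Toy illustration only (hypothetical data): inferring which maternal haplotype
# was transmitted, from allele counts in maternal plasma sequencing reads.
def transmitted_haplotype(site_counts):
    """site_counts: (hap1_allele_reads, hap2_allele_reads) at each maternal
    heterozygous site within one phased block. If haplotype 1 was transmitted,
    its alleles sit at ~50% of informative reads versus ~(1 - f)/2 for
    haplotype 2, so the aggregate hap1 fraction drifts above 0.5; if haplotype
    2 was transmitted, it drifts below 0.5."""
    hap1_reads = sum(h1 for h1, _ in site_counts)
    hap2_reads = sum(h2 for _, h2 in site_counts)
    return 1 if hap1_reads / (hap1_reads + hap2_reads) > 0.5 else 2

# Five hypothetical sites in one block: hap1 alleles slightly overrepresented,
# consistent with haplotype 1 having been transmitted to the fetus.
block = [(52, 48), (55, 45), (50, 47), (53, 49), (51, 46)]
print(transmitted_haplotype(block))  # -> 1

The published method aggregates far more sites per block within a formal statistical model; the sketch only conveys why deep sequencing of plasma DNA and resolved maternal haplotypes together make the faint fetal signal readable.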

Discussion

Clinicians report that non-invasively diagnosing genetic disorders will reduce infant mortality and morbidity. [31] Ethicists argue the technology raises concerns for informed consent, rates of termination, reliability of future applications, inadvertent findings in clinical settings, commercial exploitation and inconsistent use of the technology internationally. [12,32-36]

Informed Consent and Informed Choice

Ethicists believe NIPD testing transects existing distinctions between screening and diagnostic tests and has implications for informed consent and choice. [12] An example is screening for Down's syndrome, a common genetic disorder. Although a significant number of women may not currently achieve informed choice for screening, a subsequent invasive diagnostic procedure at least provides another opportunity for reflection as they consent to the procedure (CVS or amniocentesis). [34,35] Replacing this multi-step screening process with highly predictive cffDNA testing may reduce opportunities for exercising informed choice. [12] In addition, despite the belief that introducing cffDNA testing will promote parental reproductive choice, it may in fact make proceeding with an affected pregnancy more difficult, for two reasons. First, the decreased risks associated with cffDNA testing might lead women to feel 'pressured' into agreeing to the tests, or to undergo testing without informed consent, even if testing potentially leads to outcomes with which they disagree. [33,36] Second, the lower risks might cause a shift in the extent to which society is supportive of those who choose to have disabled children. [10] In turn, worries over social disapprobation could prompt a feedback effect, where women feel more pressured to test and to terminate their pregnancies.

Termination of pregnancy (TOP)

In Australia, there is broad agreement that TOP is ethically and legally permissible in some circumstances. [11,33,37] However, the laws are notoriously unclear, outdated and inconsistent between states and territories. [38,39] In many jurisdictions it is legally defensible for a clinician to perform a TOP at any gestation if they can justify that the harms of continuing with the pregnancy outweigh the risks of termination. [40] For this reason, access to TOP is very much dependent on the clinician, which may be problematic if cffDNA testing becomes more widespread and moves outside the existing setting of medical genetics, where high standards of relevant ethical practice and the professional duty of non-directive counselling are firmly entrenched. [12]

Accuracy and reliability of NIPD

Despite the improved accuracy achieved by utilising fetal nucleic acids, the sensitivity and specificity of even the most accurate method are still less than 100%. [2] Maintaining an acceptably high sensitivity and specificity will also be a challenge as researchers discover an ever-increasing number of sequences associated with pre-existing diseases. [12] Achieving this will require careful monitoring within different applications. [33] Without such monitoring, the personal, sociocultural, legal and ethical ramifications of false positives and negatives may be devastating. For example, additional invasive testing may be undertaken, healthy fetuses may be terminated, and children may suffer psychologically should they discover their parents would have terminated the pregnancy had they known of the diagnosis. [34]

Inadvertent findings in clinical settings

The Kitzman et al. method requires paternal buccal DNA to sequence the fetal genome and may therefore inadvertently disclose misattributed paternity. [41] However, so too may the Stanford University method, which forms the basis of commercially available NIPD and does not require paternal buccal DNA. [18] In a trial of 18 subjects, researchers using the latter method were able to predict 70% of the paternally inherited haplotypes in the fetus with 94-97% accuracy. [18] Of course, translating these findings to the clinical setting would likely still require paternal buccal DNA to confirm paternity. The potential for inadvertent disclosure of misattributed paternity would be a particular concern if cffDNA testing were ever incorporated into routine antenatal screening, as a greater number of women who may not have been adequately forewarned would be exposed to the risks such information may bring.

Commercial and international uses

The likely increase in the accessibility of NIPD using cffDNA tests made available via the internet has major implications, particularly for fetal sex selection. [12] In China [42] and India, [43] population skewing has already been observed as a result of unlawful sex selection practices favouring male children. Some ethicists believe cffDNA could significantly aggravate or extend this problem. [44] The development of cffDNA technology within the commercial sector is also a concern, as some companies choose only to sell the service rather than invest in research and development (e.g. babygendermentor.com). The provision of testing direct to consumers raises a complex set of issues relating to the role of 'gatekeepers' in prenatal testing and access to non-clinical applications of the technology. [33] In addition, it may even impact upon the provision made through Medicare for ongoing care, including diagnostic confirmation, interventional procedures (such as TOP) and medical advice. [5] Having commercial players involved may result in elements of professional practice, including informed consent and counselling, being difficult to enforce given international legislative and regulatory boundaries. [12] The cultural context is also highly relevant to how consumers access cffDNA testing. For example, its use in countries where access to safe TOP is limited or absent is ethically questionable and could cause significant social and medical problems. [45]

Conclusion

The utilisation of cffDNA for safe and reliable NIPD has opened the way for accurate sequencing of the fetal genome and the ability to diagnose an ever-increasing number of genetic anomalies and their clinical disorders. Once methods such as those of Kitzman et al. and researchers at Stanford University are refined and costs reduced, it is likely the implementation of cffDNA testing will affect both specialist genetic and routine antenatal services, improve screening uptake, and reduce morbidity and mortality associated with genetic disorders. As a result of the pace of development, there is concern that cffDNA testing transects existing distinctions between screening and diagnostic tests, having implications for informed consent, termination rates and commercial exploitation. Given the complex set of ethical, legal and sociocultural issues raised by NIPD, both professional education and public engagement are urgently needed. Formal evaluation of each test should be required to determine its clinical accuracy, and laboratory standards should be developed alongside national best practice guidelines to ensure that cffDNA testing is only offered within agreed and well-supported pathways that take account of the aforementioned issues. This development has the potential to deliver tangible improvements in antenatal care within the next 5-10 years, and health systems and policy makers around the globe must now prepare to respond to further developments in cffDNA technology in a responsible, effective and timely manner.

Conflict of interest

None declared.

Correspondence

M Irwin: matt@irwinmd.com.au

References

[1] Bell, C.J., et al., Carrier testing for severe childhood recessive diseases by next-generation sequencing. Sci Transl Med, 2011. 3(65): p. 65ra4.

[2] Kitzman, J.O., et al., Noninvasive whole-genome sequencing of a human fetus. Sci Transl Med, 2012. 4(137): p. 137ra76.

[3] Allyse, M., et al., Cell-free fetal DNA testing for fetal aneuploidy and beyond: clinical integration challenges in the US context. Hum Reprod, 2012.

[4] The Cochrane Collaboration. Cochrane Handbook for Systematic Reviews of Interventions. Higgins J, Deeks J, editors. 2011.

[5] Tischler, R., et al., Noninvasive prenatal diagnosis: pregnant women’s interest and expected uptake. Prenat Diagn, 2011. 31(13): p. 1292-9.

[6] Alfirevic, Z., K. Sundberg, and S. Brigham, Amniocentesis and chorionic villus sampling for prenatal diagnosis. Cochrane Database Syst Rev, 2003(3): p. CD003252.

[7] Dugoff, L. and J.C. Hobbins, Invasive procedures to evaluate the fetus. Clin Obstet Gynecol, 2002. 45(4): p. 1039-53.

[8] Collins, S.L. and L. Impey, Prenatal diagnosis: types and techniques. Early Hum Dev, 2012. 88(1): p. 3-8.

[9] Mujezinovic, F. and Z. Alfirevic, Procedure-related complications of amniocentesis and chorionic villous sampling: a systematic review. Obstet Gynecol, 2007. 110(3): p. 687-94.

[10] Geaghan, S.M., Fetal laboratory medicine: on the frontier of maternal-fetal medicine. Clin Chem, 2012. 58(2): p. 337-52.

[11] van den Berg, M., et al., Accepting or declining the offer of prenatal screening for congenital defects: test uptake and women’s reasons. Prenat Diagn, 2005. 25(1): p. 84-90.

[12] Hall, A., A. Bostanci, and C.F. Wright, Non-invasive prenatal diagnosis using cell-free fetal DNA technology: applications and implications. Public Health Genomics, 2010. 13(4): p. 246-255.

[13] Wieacker, P. and J. Steinhard, The prenatal diagnosis of genetic diseases. Dtsch Arztebl Int, 2010. 107(48): p. 857-62.

[14] Lo, Y.M., Non-invasive prenatal diagnosis by massively parallel sequencing of maternal plasma DNA. Open Biol, 2012. 2(6): p. 120086.

[15] Bianchi, D.W., et al., PCR quantitation of fetal cells in maternal blood in normal and aneuploid pregnancies. Am J Hum Genet, 1997. 61(4): p. 822-9.

[16] Dan, S., et al., Prenatal detection of aneuploidy and imbalanced chromosomal arrangements by massively parallel sequencing. PLoS One, 2012. 7(2): p. e27835.

[17] Faas, B.H., et al., Non-invasive prenatal diagnosis of fetal aneuploidies using massively parallel sequencing-by-ligation and evidence that cell-free fetal DNA in the maternal plasma originates from cytotrophoblastic cells. Expert Opin Biol Ther, 2012. 12 Suppl 1: p. S19-26.

[18] Fan, H.C., et al., Non-invasive prenatal measurement of the fetal genome. Nature, 2012. 487(7407): p. 320-4.

[19] Kitzman, J.O., et al., Haplotype-resolved genome sequencing of a Gujarati Indian individual. Nat Biotechnol, 2011. 29(1): p. 59-63.

[20] Lo, Y.M., et al., Maternal plasma DNA sequencing reveals the genome-wide genetic and mutational profile of the fetus. Sci Transl Med, 2010. 2(61): p. 61ra91.

[21] Muller, S.P., et al., Cell-free fetal DNA in specimen from pregnant women is stable up to 5 days. Prenat Diagn, 2011. 31(13): p. 1300-4.

[22] Bianchi, D.W., et al., Fetal gender and aneuploidy detection using fetal cells in maternal blood: analysis of NIFTY I data. Prenatal Diagnosis, 2002. 22(7): p. 609-615.

[23] Chen, X.Q., et al., Microsatellite alterations in plasma DNA of small cell lung cancer patients. Nat Med, 1996. 2(9): p. 1033-5.

[24] Nawroz, H., et al., Microsatellite alterations in serum DNA of head and neck cancer patients. Nat Med, 1996. 2(9): p. 1035-7.

[25] Lo, Y.M., et al., Presence of fetal DNA in maternal plasma and serum. Lancet, 1997. 350(9076): p. 485-7.

[26] Chiu, R.W. and Y.M. Lo, Non-invasive prenatal diagnosis by fetal nucleic acid analysis in maternal plasma: the coming of age. Semin Fetal Neonatal Med, 2011. 16(2): p. 88-93.

[27] Chu, T., et al., Statistical model for whole genome sequencing and its application to minimally invasive diagnosis of fetal genetic disease. Bioinformatics, 2009. 25(10): p. 1244-50.

[28] Skirton, H. and C. Patch, Factors affecting the clinical use of non-invasive prenatal testing: a mixed methods systematic review. Prenat Diagn, 2013. 33(6): p. 532-41.

[29] Douglass Hanly Moir Pathology, Verifi Prenatal Test. Sydney: Douglass Hanly Moir Pathology; 2013. p. 1-2. Available from: http://dhm.com.au/media/21890505/verify_6pp_dhm_dl_fa_web.pdf

[30] Kelly, S.E. and H.R. Farrimond, Non-invasive prenatal genetic testing: a study of public attitudes. Public Health Genomics, 2012. 15(2): p. 73-81.

[31] Sayres, L.C., et al., Cell-free fetal DNA testing: a pilot study of obstetric healthcare provider attitudes toward clinical implementation. Prenat Diagn, 2011. 31(11): p. 1070-6.

[32] Dormandy, E., et al., Low uptake of prenatal screening for Down syndrome in minority ethnic groups and socially deprived groups: a reflection of women’s attitudes or a failure to facilitate informed choices? International Journal of Epidemiology, 2005. 34(2): p. 346-352.

[33] Newson, A.J., Ethical aspects arising from non-invasive fetal diagnosis. Semin Fetal Neonatal Med, 2008. 13(2): p. 103-8.

[34] Schmitz, D., C. Netzer, and W. Henn, An offer you can’t refuse? Ethical implications of non-invasive prenatal diagnosis. Nat Rev Genet, 2009. 10(8): p. 515.

[35] Seavilleklein, V., Challenging the rhetoric of choice in prenatal screening. Bioethics, 2009. 23(1): p. 68-77.

[36] van den Berg, M., et al., Are counsellors’ attitudes influencing pregnant women’s attitudes and decisions on prenatal screening? Prenat Diagn, 2007. 27(6): p. 518-24.

[37] Drabsch, T., Abortion and the law in New South Wales. Sydney: NSW Parliamentary Library Research Service; 2005. p. 1-33.

[38] de Costa, C.M., et al., Introducing early medical abortion in Australia: there is a need to update abortion laws. Sex Health, 2007. 4(4): p. 223-6.

[39] De Crespigny, L.J. and J. Savulescu, Abortion: time to clarify Australia’s confusing laws. Med J Aust, 2004. 181(4): p. 201-3.

[40] Cica, N., Abortion Law in Australia. Canberra: Department of the Parliamentary Library; 1998.

[41] Lucast, E.K., Informed consent and the misattributed paternity problem in genetic counseling. Bioethics, 2007. 21(1): p. 41-50.

[42] Chan, C.L., E. Blyth, and C.H. Chan, Attitudes to and practices regarding sex selection in China. Prenatal Diagnosis, 2006. 26(7): p. 610-613.

[43] George, S.M., Millions of missing girls: from fetal sexing to high technology sex selection in India. Prenat Diagn, 2006. 26(7): p. 604-9.

[44] Van Balen, F., Attitudes towards sex selection in the Western world. Prenat Diagn, 2006. 26(7): p. 614-8.

[45] Ballantyne, A., et al., Prenatal Diagnosis and Abortion for Congenital Abnormalities: Is It Ethical to Provide One Without the Other? The American Journal of Bioethics, 2009. 9(8): p. 48-56.

Categories
Letters

Management of high-grade vulvar intraepithelial neoplasia

Vulval intraepithelial neoplasia (VIN) is an increasingly prevalent condition, particularly in young women, [1] but one rarely touched upon in medical school. The following article reviews current treatment methods for VIN, both surgical and pharmacological, as well as promising new treatment modalities still being researched.

VIN is a condition in which pre-cancerous changes occur in the vulval skin. The incidence of diagnosed VIN is approximately 3/100,000, having increased more than fourfold since 1973. [2] Vulvar intraepithelial neoplasia is classified into two main groups based on morphologic and histologic features: VIN, usual type and VIN, differentiated type. VIN, usual type can be subdivided into basaloid and warty subtypes, typically occurs in younger, premenopausal women and is related to HPV infection and cigarette smoking. VIN, differentiated type typically occurs in postmenopausal women and is often associated with lichen sclerosus, which presents as white patches on vulval skin. The rate of progression to invasive vulvar cancer in women with untreated high-grade VIN is reported to range from 9.0 to 18.5%. [3] Half of women with VIN are symptomatic, with pruritus, perineal pain or burning, dysuria, a visible lesion or a palpable abnormality. The lesions themselves are often multifocal and raised, and can vary in colour from white to red, gray or brown. Diagnosis involves a colposcopic examination, where VIN appears as dense acetowhite lesions with or without punctation. The goals of treatment are prevention of progression to invasive vulvar cancer and symptom relief, as well as preservation of normal vulvar function and anatomy.

Current surgical therapies include excisional treatments or vulvectomy. The main advantage of excisional therapies over ablative or medical treatment is the ability to make a histopathological diagnosis based on the excised lesion, particularly as occult invasive squamous cell carcinoma is present in many of these women. [4]

Wide local excision is the preferred initial intervention for women in whom clinical or pathologic findings suggest invasive cancer, despite a biopsy diagnosis of VIN, to obtain a specimen for pathologic analysis. [4] Localised high-grade VIN lesions are best managed by superficial local excision of an individual lesion, with reported recurrence rates of 20 to 40%. [5]

Multifocal or extensive lesions that are not amenable to removal with local excision are best removed with a partial or simple vulvectomy. This involves removal of part of, or the entire, vulva respectively, together with subcutaneous tissue and perineal tissues if indicated. [5] It is a last resort, as neither normal function nor anatomy is preserved.

Laser ablation therapy is an alternative to excisional therapy, particularly for women with multifocal and extensive disease in whom cancer is not suspected. [6] CO2 laser vaporisation has been shown to be effective in eradicating VIN while achieving good cosmetic and functional results, with success rates of 40 to 75%. [6-7]

A systematic review showed that there were no significant differences in recurrence after vulvectomy, partial vulvectomy, local excision or laser vaporisation. [8]

Medical therapies aimed at preserving the vulvar anatomy are useful in younger patients, provided colposcopic examination and biopsies have excluded invasive disease. The primary medical treatment available is imiquimod 5% cream, which has antiviral and antitumour effects via stimulation of local cytokine production and cell-mediated immunity. [9] A Cochrane review [1] concluded that, for women with high-grade VIN, imiquimod was better than placebo in terms of reduction in lesion size and histologic regression. This conclusion was based on the findings of three randomised placebo-controlled trials, with the largest trial reporting a complete response rate of 35% and a partial response rate of 46%. [10] Common side effects reported were erythema, soreness, itching, burning, ulceration and flu-like symptoms; however, these side effects could be reduced by placing patients on an escalating dosing regimen. [1]

Agents such as cidofovir, 5-fluorouracil and photodynamic therapy are currently being investigated as treatment for vulval intraepithelial neoplasia. Cidofovir is an acyclic nucleoside analogue with antiviral activity, and a pilot study shows promising results. [11] 5-fluorouracil is a chemotherapeutic agent that inhibits DNA synthesis, with a review demonstrating a remission rate of 34%; [12] however, this agent is used less commonly in current practice. Photodynamic therapy, whereby a sensitizing agent is applied prior to irradiation of the vulva, has been demonstrated to cause complete response in 33 to 55% of patients with VIN 2-3. [7,13]

The major surgical interventions for VIN appear to be similarly effective and are appropriate when there is a desire for a histopathological specimen to exclude invasive cancer. Medical interventions are useful when occult cancer is unlikely and preservation of normal vulvar anatomy is desired. Evidence appears to be strongest for imiquimod as a conservative medical intervention for the treatment of high-grade VIN. Other promising agents include cidofovir, but further investigation through large-scale studies is required to characterise the efficacy of these therapies. Diligent follow-up is essential in detecting disease recurrence and monitoring the effectiveness of therapies. More research is needed to develop effective treatment strategies that preserve function and anatomy, particularly as the disease becomes more prevalent in young women.

Conflict of interest

None declared.

Correspondence

S Ai: sylvia.ai3@gmail.com

References

[1] Pepas L, Kaushik S, Bryant A, Nordin A, Dickinson HO. Medical interventions for high grade vulval intraepithelial neoplasia. Cochrane Database of Systematic Reviews 2011, Issue 4. Art. No.: CD007924. DOI: 10.1002/14651858.CD007924.pub2.
[2] Judson PL, Habermann EB, Baxter NN, Durham SB, Virnig BA. Trends in the incidence of invasive and in situ vulvar carcinoma. Obstet Gynecol 2006;107(5):1018-22.
[3] Joura EA. Epidemiology, diagnosis and treatment of vulvar intraepithelial neoplasia. Gynaecol Oncol Path 2002;14(1):39-43.
[4] NSW Department of Health. Best clinical practice gynaecological cancer guidelines. [Internet]. 2009 [cited 2012 April 28]. Available from: http://www.aci.health.nsw.gov.au/__data/assets/pdf_file/0010/154549/go_clinical_guidelines.pdf
[5] Holschneider CH. Vulval intraepithelial neoplasia. In: UpToDate, Basow DS, editor. UpToDate, Waltham, MA, 2012.
[6] Hillemanns P, Wang X, Staehle S, Michels W, Dannecker C. Evaluation of different treatment modalities for vulvar intraepithelial neoplasia (VIN): CO2 laser vaporisation, photodynamic therapy, excision and vulvectomy. Gynecol Oncol 2006;100(2):271-5.
[7] Sideri M, Spinaci L, Spolti N, Schettino F. Evaluation of CO2 laser excision or vaporisation for the treatment of vulvar intraepithelial neoplasia. Gynecol Oncol 1999;75:277-81.
[8] van Seters M, van Beurden M, de Craen AJM. Is the assumed natural history of vulvar intraepithelial neoplasia III based on enough evidence? A systematic review of 3322 published patients. Gynecol Oncol 2004;97(2):645-51.
[9] Mahto M, Nathan M, O'Mahony C. More than a decade on: review of the use of imiquimod in lower anogenital intraepithelial neoplasia. Int J STD AIDS 2010;21(1):8-16.
[10] van Seters M, van Beurden M, ten Kate FJW, Beckmann I, Ewing PC, Eijkemans MJC et al. Treatment of vulvar intraepithelial neoplasia with topical imiquimod. N Engl J Med 2008;358:1465-73.
[11] Tristram A, Fiander A. Clinical responses to cidofovir applied topically to women with high grade vulval intraepithelial neoplasia. Gynecol Oncol 2005;99(3):652.
[12] Sillman FH, Sedlis A, Boyce JG. A review of lower genital intraepithelial neoplasia and the use of topical 5-fluorouracil. Obstet Gynecol Survey 1985;40(4):190-220.
[13] Fehr MK, Hornung R, Schwarz VA, Haller SU, Wyss P. Photodynamic therapy of vulvar intraepithelial neoplasia III using topically applied 5-aminolevulinic acid. Gynecol Oncol 2001;80(1):62-6.

Categories
Book Reviews

Starlight stars bright

White T. Starlight: An Australian Army doctor in Vietnam. Brisbane: CopyRight Publishing; 2011.

RRP: $33.00

Not many of us dream of serving as a medical doctor on the front lines of war. War is, after all, the antithesis of everything the medical profession stands for. [1] In Starlight, Dr Tony White AM vividly recounts his tour of duty in South Vietnam between 1966 and 1967 through correspondence exchanged with his family. STARLIGHT was the radio call sign for the medical officer, and it bore the essence of what was expected of young White as a Regimental Medical Officer (RMO) in the 5th Battalion of the Royal Australian Regiment (5 RAR).

White was born in Perth, grew up in Kenya and read medicine at Clare College, Cambridge. After completing the first half of the six-year degree, he moved back with his family to Sydney, where the pivotal decision to join military service was made. White accepted a scholarship from the Australian government to continue at the University of Sydney in exchange for four years of service in the Australian Defence Force after a year of residency.

In May 1966, White's wartime duties commenced with 5 RAR in Vung Tau, southeast of Saigon, dubbed "Sufferer's Paradise". After a brief settling-in, the battalion moved to Nui Dat, their operational base for the year. The initial excitement of the 25-year-old's first visit to Asia quickly faded as the realities of war – the mud, the sweat and the blood – set in. Footnotes and explanations of military jargon and organisation were immensely helpful in acquainting the reader with the battalion's environment. As an RMO, White worked round-the-clock performing general practice duties such as sick parades and preventive medicine, emergency duties attending to acute trauma, and public health duties monitoring for disease outbreaks and maintaining hygiene. The stark difference from being a civilian doctor is candidly described: "You live, eat, dig, and [defecate] with your patients and, like them, get every bit as uncomfortable and frightened. There is no retreat or privacy."

From the early "friendly fire" incidents and booby traps to the horror of landmines, White's affecting letters offer a very raw view of war's savagery. It was a war fought against guerrillas, much like the present war in Afghanistan, where the enemy is unknown and threat may erupt into danger at any time. During the numerous operations 5 RAR conducted, White attended to and comforted many wounded. With every digger killed in action, a palpable sense of loss accompanies the narration. White clearly laments the "senseless killing of war", explaining: "You spend all that time – 20 years or so – making a man, preserving his health, educating and training him, to have him shot to death." White himself had close brushes with death. He was pinned down by sniper fire on one occasion and, in the worst of the tragedies encountered, even found himself in the middle of a minefield. The chapter "Going troppo" ruminates on the enduring psychological effects of these events as the year unfolded.

The insanity of war is balanced by many heartening acts. First and foremost is the remarkable resilience of the diggers, whose tireless disposition to work inspired White profoundly. White also voluntarily set up regular clinics in surrounding villages to provide care for civilians despite the threat of enemy contact. In an encouraging twist, both friendly and enemy (Viet Cong) casualties were rendered the same standard of care. Even more ironic were the congenial interactions between the two factions within the confines of the hospital. Perhaps the most moving of all were White's heartfelt words of appreciation to his family, who supported his spirits by sending letters and homemade goodies like fruitcakes, biscuits and smoked oysters.

So why should you read this book? Textbooks do not teach us empathy. In these 184 pages, White shares experiences that we all hope never to encounter ourselves. Yet countless veterans, refugees and abuse victims have faced such terror, and our understanding of their narratives is essential in providing care and comfort. In the final chapters of this book, White gives a rare physician perspective on post-traumatic stress disorder and how he reconciled with the profound impact of war to achieve success in the field of dermatology. These invaluable lessons shine through this book.

Conflicts of Interest

None declared.

References


[1] DeMaria AN. The physician and war. Journal of the American College of Cardiology. 2003;41(5):889-90.

Categories
Original Research Articles

Predicting falls in the elderly: do dual-task tests offer any added value? A systematic review

The issue of falls is a significant health concern in geriatric medicine and a major contributor to morbidity and mortality in those over 65 years of age. Gait and balance problems are responsible for up to a quarter of falls in the elderly. It is unclear whether dual-task assessments, which have become increasingly popular in recent years, have any added benefit over single-task assessments in predicting falls. A previous systematic review that included manuscripts published prior to 2006 could not reach a conclusion due to a lack of available data. Therefore, a systematic review was performed on all dual-task material published from 2006 to 2011 with a focus on fall prediction. The review included all studies published between 2006 and 2011 and available through PubMed, EMBASE, PsycINFO, CINAHL and the Cochrane Central Register of Controlled Trials databases that satisfied the inclusion and exclusion criteria utilised by the previous systematic review. A total of sixteen articles met the inclusion criteria and were analysed for qualitative and quantitative results. A majority of the studies demonstrated that poor performance during dual-task assessments was associated with a higher risk of falls in the elderly. Only three of the sixteen articles provided statistical data for comparison of single- and dual-task assessments. These studies provided insufficient data to demonstrate whether dual-task tests were superior to single-task tests in predicting falls in the elderly. Further head-to-head studies are required to determine whether dual-task assessments are superior to single-task assessments in their ability to predict future falls in the elderly.

Introduction

Many simple tasks of daily living such as standing, walking or rising from a chair can potentially lead to a fall. Each year one in three people over the age of 65 living at home will experience a fall, with five percent requiring hospitalisation. [1, 2] Gait and balance problems are responsible for 10-25% of falls in the elderly, only surpassed by ‘slips and trips,’ which account for 30-50%. [2] Appropriate clinical evaluation of identifiable gait and balance disturbances, such as lower limb weakness or gait disorders, has been proposed as an efficient and cost-effective practice which can prevent many of these falls. As such, fall prevention programs have placed a strong emphasis on determining a patient’s fall risk by assessing a variety of physiological characteristics. [2, 3]

Dual-task assessments have become increasingly popular in recent years, because they examine the relationship between cognitive function and attentional limitations, that is, a subject’s ability to divide their attention. [4] The accepted model for conducting such tests involves a primary gait or balance task (such as walking at normal pace) performed concurrently with a secondary cognitive or manual task (such as counting backwards). [4, 5] Divided attention whilst walking may manifest as subtle changes in posture, balance or gait. [5, 6] It is these changes that provide potentially clinically significant correlations, for example, detecting changes in balance and gait after an exercise intervention. [5, 6] However, it is unclear whether a patient’s performance during a dual-task assessment has any added benefit over a single-task assessment in predicting falls.

In 2008, Zijlstra et al. [7] produced a systematic review of the literature which attempted to evaluate whether dual-task balance assessments are more sensitive than single balance tasks in predicting falls. It included all studies published up to 2006 (inclusive), yet there were insufficient data for a conclusion to be drawn. This was followed by a review article by Beauchet et al. [8] in 2009 that included additional studies published up to 2008. These authors concluded that changes in performance while dual-tasking were significantly associated with an increased risk of falling in older adults. The purpose of the present study was to determine, using recently published data, whether dual-task tests of balance and/or gait have any added benefit over single-task tests in predicting falls. A related outcome of the study was to gather data to either support or challenge the use of dual-task assessments in fall prevention programs.

A systematic review of all published material from 2006 to 2011 was performed, focusing on dual-task assessments in the elderly. Inclusion criteria were used to ensure only relevant articles reporting on fall predictions were selected. The method and results of included manuscripts were qualitatively and quantitatively analysed and compared.

Methods

Literature Search

A systematic literature search was performed to identify articles which investigated the relationship between falls in older people and balance/gait under single-task and dual-task conditions. The electronic databases searched were PubMed, EMBASE, PsycINFO, CINAHL and the Cochrane Central Register of Controlled Trials. The search strategy utilised by Zijlstra et al. [7] was replicated. Individual search strategies were tailored to each database, adapted from the following strategy used in PubMed:

1. (gait OR walking OR locomotion OR musculoskeletal equilibrium OR posture)

2. (aged OR aged, 80 and over OR aging)

3. #1 AND #2

4. (cognition OR attention OR cognitive task(s) OR attention task(s) OR dual task(s) OR double task paradigm OR second task(s) OR secondary task(s))

5. #3 AND #4

6. #5 AND (humans)

Bold terms are MeSH (Medical Subject Headings) key terms.
The search was performed without language restrictions and results were filtered to produce all publications from 2006 to March 2011 (inclusive). To identify further studies, the author hand-searched reference lists of relevant articles, and searched the Scopus database to identify any newer articles which cited primary articles.

Selection of papers

The process of selecting manuscripts is illustrated in Figure 1. Only articles with publication dates from 2006 onwards were included, as all relevant articles prior to this were already contained in the review by Zijlstra et al. [7] Two independent reviewers then screened article titles for studies that employed a dual-task paradigm – specifically, a gait or balance task coupled with a cognitive or manual task – and included falls data as an outcome measure.

Article abstracts were then appraised to determine whether the dual-task assessment was used appropriately and within the scope of the present study; that is, to (1) predict future falls, or (2) differentiate between fallers and non-fallers based on retrospective data collection of falls. Studies were only considered if subjects' fall status was determined by actual fall events – the fall definitions stated in individual articles were accepted. Studies were included if participants were aged 65 years and older. Articles which focused on adult participants with a specific medical condition were also included. Studies that reported no results for dual-task assessments were included for descriptive purposes only. Interventional studies which used the dual-task paradigm to detect changes after an intervention were excluded, as were case studies, review articles or studies that used subjective scoring systems to assess dual-task performance.

Analysis of relevant papers

Information on the following aspects was extracted from each article: study design (retrospective or prospective collection of falls), number of subjects (including gender proportion), number of falls required to be classified a ‘faller’, tasks performed and the corresponding measurements used to report outcome, task order and follow up period if appropriate.

Where applicable, each article was also assessed for values and results which allowed comparison between the single and dual-task assessments and their respective falls prediction. The appropriate statistical measures required for such a comparison include sensitivity, specificity, positive and negative predictive values, odds ratios or likelihood ratios. [9] The dual-task cost, or difference in performance between the single and dual-task, was also considered.
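
For readers unfamiliar with these measures, the brief Python sketch below computes them from a generic 2x2 table of dual-task test result against observed fall status, together with one common formulation of the dual-task cost. The counts and gait speeds used are hypothetical, and definitions of dual-task cost vary between studies, so this is illustrative only.

# Hypothetical counts: test positive = poor dual-task performance;
# columns = whether the subject actually fell during follow-up.
def predictive_measures(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),        # fallers correctly flagged
        "specificity": tn / (tn + fp),        # non-fallers correctly cleared
        "ppv": tp / (tp + fp),                # flagged subjects who fell
        "npv": tn / (tn + fn),                # cleared subjects who did not fall
        "odds_ratio": (tp * tn) / (fp * fn),  # odds of falling, flagged vs not flagged
    }

def dual_task_cost(single, dual):
    """One common formulation: relative decline from single- to dual-task
    performance (e.g. gait speed in m/s), expressed as a percentage."""
    return (single - dual) / single * 100

print(predictive_measures(tp=28, fp=18, fn=12, tn=42))
print(dual_task_cost(single=1.20, dual=0.95))  # ~20.8% slower under dual-task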

Results

The database search of PubMed, EMBASE, PsycINFO, CINAHL and Cochrane produced 1154, 101, 468, 502 and 84 references respectively. As shown in Figure 1, filtering for publications between 2006 and 2011 almost halved the results, to a total of 1215 references. A further 1082 studies were omitted because they were duplicates, were not dual-task studies, or did not report falls as an outcome.

The 133 articles which remained reported on falls using a dual-task approach, that is, a primary gait or balance task paired with a secondary cognitive task. Final screening was performed to ensure that the mean age of subjects was at least 65, as well as to remove case studies, interventional studies and review articles. Sixteen studies met the inclusion criteria, nine retrospective and seven prospective fall studies, summarised by study design in Tables 1A and 1B respectively.

The number of subjects ranged from 24 to 1038, [10, 11] with half the studies having a sample size of 100 subjects or more. [11-18] Females typically predominated, comprising over 70% of the subject cohort in nine studies. [10, 13, 14, 16-21] Eight studies investigated community-dwelling older adults, [10-12, 14, 15, 19, 20, 22] four examined older adults living in senior housing/residential facilities [13, 16-18] and one focused on elderly hospital inpatients. [21] A further three studies exclusively investigated subjects with defined pathologies, specifically progressive supranuclear palsy, [23] stroke [24] and acute brain injury. [25]

Among the nine retrospective studies, the fall rate ranged from 10.0% to 54.2%. [12, 25] Fall rates were determined by actual fall events; five studies required subjects to self-report the number of falls experienced over the preceding twelve months, [10, 12, 20, 23, 24] three studies asked subjects to self-report over the previous six months [13, 22, 25] and one study utilised a history-taking approach, with subjects interviewed independently by two separate clinicians. [19] Classification of subjects as a ‘faller’ varied slightly, with five studies reporting on all fallers (i.e. ≥ 1 fall), [10, 19, 20, 22, 25] three reporting only on recurrent fallers (i.e. ≥ 2 falls), [12, 13, 23] and one which did not specify. [24]

The fall rate for the seven prospective studies ranged from 21.3% to 50.0%. [15, 21] The number of falls per subject was collected during the follow-up period, which was quite uniform at twelve months, [11, 14, 16-18, 21] except for one study which continued data collection for 24 months. [15] The primary outcome measure during the follow-up period was fall rate, based on either the first fall [16-18, 21] or the incidence of falls. [11, 14, 15]

The nature of the primary balance/gait task varied between studies. Five studies investigated more than one type of balance/gait task. [10, 12, 19, 20, 24] Of the sixteen studies, ten required subjects to walk along a straight walkway, nine at normal pace [10, 11, 14, 16-19, 21, 24] and one at fast pace. [23] Three studies incorporated a turn along the walkway [15, 22, 25] and a further study comprised both a straight walk and a separate walk-and-turn. [12] The remaining two studies did not employ a walking task of any kind, but rather utilised a voluntary step execution test [13] or a Timed Up & Go test together with a one-leg balance test. [20]

The type of cognitive/secondary task also varied between studies. All but three studies employed a cognitive task; one used a manual task [19] and two used both a cognitive and a manual task. [11, 14] Cognitive tasks differed greatly and included serial subtraction, [14, 15, 20, 22, 23] backward counting aloud, [11, 16-18, 21] memory tasks, [24, 25] Stroop tasks [10, 13] and visuo-spatial tasks. [12] The single and dual-tasks were performed in a random order in six of the sixteen studies. [10, 12, 16-18, 20]

Thirteen studies recorded walking time or gait parameters as a major outcome. [10-12, 14-17, 19, 21-25] Eleven of the sixteen studies reported that dual-task performance was associated with the occurrence of falls. A further two studies came to the same conclusion, but only in the elderly with high functional capacity [11] or during specific secondary tasks. [14] One prospective [17] and two retrospective studies [20, 25] found no significant association between dual-task performance and falls.

As described in Table 2, ten studies reported figures on the predictive ability of the single and/or dual-tasks; [11-18, 21, 23] some data were obtained from the systematic review by Beauchet et al. [8] The remaining six studies provided no fall prediction data. In predicting falls, dual-task tests had a sensitivity of 70% or greater, except in two studies which reported values of 64.8% [17] and 16.7%. [16] Specificity ranged from 57.1% to 94.3%. [16, 17] Positive predictive values ranged from 38.0% to 86.7%, [17, 23] and negative predictive values from 54.5% to 93.2%. [21, 23] Two studies derived predictive ability from the dual-task ‘cost’, [11, 14] defined as the difference in performance between the single-task and dual-task tests.
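For clarity, these measures all derive from the standard two-by-two classification of test result against observed fall status, where TP, FP, FN and TN denote true positives, false positives, false negatives and true negatives:

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Positive predictive value (PPV) = TP / (TP + FP)
Negative predictive value (NPV) = TN / (TN + FN)

Dual-task cost = dual-task performance − single-task performance

The dual-task cost is often normalised to the single-task baseline and expressed as a percentage, i.e. (dual − single) / single × 100; this normalisation is a common convention rather than necessarily the exact formula applied in the two studies concerned. [11, 14] On these definitions, a test with high sensitivity but modest specificity will identify most future fallers at the cost of falsely flagging some non-fallers.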

Only three studies provided statistical measures for the fall prediction of the single-task and the dual-task individually. [16, 17, 21] Increased walking time during single and dual-task conditions was similarly associated with risk of falling, OR = 1.1 (95% CI, 1.0-1.2) and OR = 1.1 (95% CI, 0.9-1.1), respectively. [17] Variation in stride time also predicted falls, OR = 13.3 (95% CI, 1.6-113.6) and OR = 8.6 (95% CI, 1.9-39.6) in the single and dual-task conditions respectively. [21] Walking speed predicted recurrent falls during single and dual-tasks, OR = 0.96 (95% CI, 0.94-0.99) and OR = 0.60 (95% CI, 0.41-0.85), respectively. [16] The latter study reported that a decrease in walking speed increased the risk of recurrent falls by a factor of 1.67 in the dual-task test compared to 1.04 during the single-task. All values given in these three studies, for both single and dual-task tests, were interpreted as significant in predicting falls by their respective authors.
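The relationship between the walking-speed odds ratios and the risk figures quoted above appears to be a simple reciprocal: an odds ratio below 1 per unit increase in speed inverts to an increased risk per unit decrease in speed. This reading is an inference from the reported figures rather than a calculation stated explicitly by the study:

1 / 0.96 ≈ 1.04 (single-task condition)
1 / 0.60 ≈ 1.67 (dual-task condition)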

Discussion

Only three prospective studies directly compared the individual predictive values of the single and dual-task tests. The first such study concluded that the dual-task test was in fact equivalent to the single-task test in predicting falls. [17] This particular study also reported the lowest positive predictive value of all dual-task tests at 38%. The second study [21] also reported similar predictive values for the single and dual-task assessments, as well as a relatively low positive predictive value of 53.9%. Given that all other studies reported higher predictive values, it may be postulated that, at the very least, dual-task tests are comparable to single-task tests in predicting falls. Furthermore, these two studies focused on subjects from senior housing facilities and hospital inpatients (187 and 57 participants respectively), and their results may therefore not represent elderly community-dwelling individuals. The third study [16] concluded that subjects who walked slower during the single-task assessment would be 1.04 times as likely to experience recurrent falls as subjects who walked faster; after a poor performance in the dual-task assessment, however, this risk rose to 1.67. This suggests that the dual-task assessment can offer a more accurate estimate of the risk of falling. Again, participants in this study were recruited from senior housing facilities, and the results may thus not be directly applicable to community-dwelling older adults.

Eight studies focused on community-dwelling participants, and all but one [20] suggested that dual-task performance was associated with falls. Evidence that dual-task assessments may be more suitable for fall prediction in elderly people who are healthier and/or living in the community, as opposed to those in poorer health, is provided by Yamada et al. [11] Participants were subdivided into groups by the results of a Timed Up & Go test, separating the ‘frail’ from the ‘robust’. The dual-task assessments were found to be associated with falls only in the groups with higher functional capacity. This intra-cohort variability may account, at least in part, for why three studies included in this review concluded that there was no benefit in performing dual-task assessments. [17, 20, 25] These findings conflicted with those of the remaining thirteen studies and may be explained by one or more of several possible reasons: (1) the heterogeneity of the studies; (2) the non-standardised application of the dual-task paradigm; or (3) the hypothesis that dual-task assessments are more applicable to specific subpopulations within the generalised group of ‘older adults’, or further, that certain primary and secondary task combinations must be used to produce favourable results.

The heterogeneity among the identified studies played a major role in limiting the scope of analysis and the conclusions that could be derived from this review. For example, the dichotomisation of the community-dwelling participants into frail versus robust [11] illustrates the variability within a supposedly homogenous patient population. Another contributor to the heterogeneity of the studies is the broad range of cognitive or secondary tasks used, which varied between manual tasks [19] and simple or complex cognitive tasks. [10-21, 23-25] The purpose of the secondary task is to reduce the attention allocated to the primary task. [5] Since the studies varied in the secondary task(s) used, each with a slightly different level of complexity, the attentional resources redirected away from the primary balance or gait task would also have varied. The predictive ability reported by each study is therefore expected to differ, and to be poorer in studies employing a secondary task that is not sufficiently challenging. [26] One important outcome of this review has been to highlight the lack of a standardised protocol for performing dual-task assessments. No combination of a primary and secondary task has yet been identified as superior in predicting falls. Variation in the task combinations, as well as in the participant instructions given prior to the completion of tasks, is a possible explanation for the disparity between results. To improve result consistency and comparability in this emerging area of research, [6] dual-task assessments should comprise a standardised primary and secondary task.

Sixteen studies were deemed appropriate for inclusion in this systematic review. Despite a thorough search strategy, it is possible that some relevant studies were overlooked. Based on the limited data from 2006 to 2011, the exact benefit of dual-task assessments in predicting falls compared to single-task assessments remains uncertain. A more comprehensive verdict would require further analysis combining the present findings with previous systematic reviews, [7, 8] which incorporate data from before 2006. Future dual-task studies should focus on fall prediction and report predictive values for the single-task and the dual-task individually in order to allow comparisons to be made. Such studies should also incorporate large sample sizes and assess the living conditions and health status of participants. Emphasis on the predictive value of dual-task assessments requires these studies to be prospective in design, as prospective collection of fall data is considered the gold standard. [27]

Conclusion

Due to the heterogeneous nature of the study population, the limited statistical analysis and a lack of direct comparison between single-task and dual-task assessments, the question of whether dual-task assessments are superior to single-task assessments for fall prediction remains unanswered. This systematic review has highlighted significant variability in study population and design that should be taken into account when conducting further research. Standardisation of dual-task assessment protocols and further investigation and characterisation of sub-populations where dual-task assessments may offer particular benefit are suggested. Future research could focus on different task combinations in order to identify which permutations provide the greatest predictive power. Translation into routine clinical practice will require development of reproducible dual-task assessments that can be performed easily on older individuals and have validated accuracy in predicting future falls. Ultimately, incorporation of dual-task assessments into clinical fall prevention programs should aim to provide a sensitive and specific measure of effectiveness and to reduce the incidence, morbidity and mortality associated with falls.

Acknowledgements

The author would like to thank Professor Stephen Lord and Doctor Jasmine Menant from Neuroscience Research Australia for their expertise and assistance in producing this literature review.

Conflict of interest

None declared.

Contact

M Sarofim: mina@student.unsw.edu.au

Categories
Letters

The hidden value of the Prevocational General Practice Placements Program

Medical students, prevocational doctors and general practitioners (GPs) may have little knowledge of the Prevocational General Practice Placements Program (PGPPP).

This article seeks to explore the value of such placements and provide an aspiring surgeon’s reflection on a PGPPP internship placement in rural New South Wales (NSW).

General practice placements for interns have been available for the past three decades in the United Kingdom, with the literature unanimously promoting their educational benefits. [1] The Australian PGPPP experiences in Western Australia and South Australia reinforce the feasibility of such placements, and propose cooperation between universities, postgraduate councils, training networks and specialist training colleges. [2] Semi-structured interviews with interns who had completed the PGPPP indicated experience in communication, counselling and procedural skills across a range of patient presentations. [3] The uptake of the PGPPP has varied between states, with NSW, until recently, having substantially fewer placements, particularly at intern level. [4]

Prevocational GP placements have the potential to alleviate some of the pressure of sourcing additional postgraduate placements for junior doctors. With the dramatic increase in Australian medical school graduates (81% in seven years) overwhelming traditional postgraduate training placements, [5] the growth of the PGPPP will continue. Despite available qualitative data, there is currently no published information that adequately describes the range and volume of patient and procedural experiences in PGPPP placements. In response, a prospective study of caseload data is currently underway to better inform medical students, prospective PGPPP doctors, GP supervisors and specialist training colleges of the potential value of such placements.


In April 2012, I undertook an eleven-week placement at Your Health Griffith, a medical centre in Griffith, NSW. The practice was staffed by seven GPs and two practice nurses. Two GPs alternated as clinical supervisors and a third GP, separate from the practice, conducted informal weekly tutorials and reviewed patient encounter and procedure logs. Both clinical supervision and teaching exceeded RACGP standards. [6]

Presentations during a single day included complex medical or surgical issues, paediatrics, obstetrics, gynaecology, dermatology, mental health, occupational health, preventative health and more. The workload included booked appointments and consenting patients from the GP supervisors’ bookings, thus ensuring a reasonable patient caseload. Patients often attended follow-up appointments during the term. The continuity of patient care in the PGPPP was in stark contrast to acute medicine and surgery in tertiary hospitals, and allowed establishment of greater rapport, with patients openly discussing intimate social or mental health issues during subsequent consultations.

The majority of tertiary hospitals have an established hierarchy of fellows, accredited and unaccredited registrars, residents and enthusiastic medical students vying for procedures. With the possible exception of emergency, most interns complete only a few procedures in hospital rotations. Hence, in the PGPPP, junior doctors value the opportunities to practise procedural skills including administration of local anaesthesia, skin lesion excisions and suturing.

The main source of frustration within the placement was administrative red tape. The restrictions placed upon interns with provisional medical registration meant that all scripts and referrals had to be counter-signed and issued under the GP supervisors’ provider numbers and prescription authority. Interns routinely prescribe medications and make referrals in the hospital system; that this authority in a supervised hospital setting has not been extended to the similarly supervised PGPPP is bewildering. The need to obtain formal consent prior to consultations, in contrast to the implied consent in hospital treatment, was reminiscent of medical school.

One of the main purposes for the development of the PGPPP was to promote general practice to prevocational junior medical officers. These placements provide junior doctors with valuable exposure to community medicine and a range of patient presentations, develop their confidence in dealing with diagnostic uncertainty, and improve their counselling and procedural skills. These skills and experiences are likely to be retained regardless of future specialisation. Perhaps it is just as important for GPs to play a role in educating future tertiary care specialists, so that all may better understand both the capabilities and limitations of community medicine. While I still wish to pursue a career in surgery, this placement has provided insight into the world of community medicine. The value of the PGPPP truly extends beyond attracting prevocational doctors to general practice careers.

Conflict of interest

None declared.

Acknowledgements

Dr Marion Reeves, Dr Jekhsi Musa Othman and Dr Damien Limberger for their supervision and guidance through this clinical rotation.


Categories
Feature Articles

Graded exposure to neurophobia: Stopping it affecting another generation of students

Neurophobia

Neurophobia has probably afflicted you at some stage during your medical school training, whether in figuring out how to correlate signs elicited on examination with a likely diagnosis, or in deciphering which tract has decussated at a particular level in the neuroaxis. The disease definition of neurophobia as the ‘fear of neural sciences and clinical neurology’ is a testament to its influence: it affects up to 50% of students and junior doctors at least once in their lifetime. [1] The severity of the condition ranges from simple dislike or avoidance of neurology to sub-par clinical assessment of patients with a neurological complaint. Neurophobia is often compounded by a crippling lack of confidence in approaching and understanding basic neurological concepts.

According to the World Health Organisation, neurological conditions contribute about 6.3% of the global health burden and account for as much as twelve percent of global mortality. [2] Given these figures, neurophobia persisting into the postgraduate medical years may adversely influence the treatment received by the significant proportion of patients who present with neurological complaints. This article will explore why neurophobia exists and some strategies for remedying it, from both a student and a teaching perspective.

Perceptions of neurology

One factor contributing to the existence of neurophobia is the perception of neurology within the medical community. The classic stereotype was vividly depicted by the editor of the British Medical Journal: ‘the neurologist is one of the great archetypes: a brilliant, forgetful man with a bulging cranium….who….talks with ease about bits of the brain you’d forgotten existed, adores diagnosis and rare syndromes, and – most importantly – never bothers about treatment.’ [3] Talley and O’Connor’s description of the neurologist, identified by the hatpins kept in their expensive suits and by their use of the keys of an expensive motor car to elicit plantar reflexes, further solidifies this mythology for another generation of Australian medical students. [4] Some have even proposed that neurologists thrive in a specialty known for its intellectual pursuits and exclusivity – a specialty where ‘only young Einsteins need apply.’ [5] Unfortunately, these stereotypes may only serve to perpetuate the reputation of neurology as a difficult specialty, complex and full of rare diagnoses (Figure 1).

Stereotypes, however, are rarely accurate; an important question is what students really think about neurology. Questionnaires posing this question to medical students across various countries have produced strikingly similar results. Neurology is considered by students to be the most difficult of the internal medicine specialties. Not surprisingly, it was also the specialty students perceived they had the least knowledge about and, understandably, were least confident in. [5-10] Such sentiments are also shared amongst residents, junior doctors and general practitioners in the United Kingdom (UK) and United States (US). [8-10] The persistence of this phenomenon after medical school is supported by the number of intriguing and difficult case reports published in The Lancet: neurological cases (26%) appeared at more than double the frequency of the next highest specialty, gastroenterology (12%), as a proportion of total case reports from 2003 to 2008. [11] However, this finding may also be explained by the fact that, in one survey, neurology was ranked as the most interesting of specialties by medical students, especially after they had completed a rotation within the specialty. [10] So whilst neurophobia exists, it is not outlandish to claim that many medical students do at least find neurology very interesting and would therefore actively seek to improve their understanding and knowledge.

The perception of neurological disease amongst students and the wider community can also be biased. Films such as The Diving Bell and the Butterfly (2007), an account of locked-in syndrome, not only capture the public imagination with a compelling portrayal of a peculiar neurological disease, but also highlight the absence of effective treatment following established cerebral infarction. Definitive cures for progressive illnesses, including multiple sclerosis and motor neuron disease, are also yet to be discovered, but the reality is that there are numerous effective treatments for a variety of neurological complaints and diseases. Some students will thus incorrectly perceive that the joy gained from neurology comes only from the challenge of arriving at a diagnosis rather than from providing useful treatment to patients.


Other causes of neurophobia

Apart from the perception of neurology, a number of other reasons for students’ neurophobia and the perceived difficulty of neurology have been identified. [5-10] Contributory factors can be divided into pre-clinical and clinical exposure factors. Pre-clinical factors include inadequate teaching in the pre-clinical years, insufficient knowledge of basic neuroanatomy and neuroscience, and difficulty in correlating the biomedical sciences with neurological cases (especially localising lesions). Clinical exposure factors include the length of the neurology examination, a perception of complex diagnoses stemming from inpatients being a biased sample of neurology patients, limited exposure to neurology and a paucity of bedside teaching.

Preventing neurophobia – student and teacher perspective

It is clearly much better to prevent neurophobia from occurring than to attempt to remedy it once it has become ingrained. Addressing pre-clinical exposure factors can prevent its development early during medical school. Media reports have quoted doctors and students bemoaning the lack of anatomy teaching contact hours in Australian medical courses. [12, 13] Common sense dictates that the earlier and more frequent the exposure students have to basic neurology in their medical programs (for example, in the form of introductory sessions on the brain, spinal cord and cranial nerves that are reinforced later down the track), the greater the chance of preventing neurophobia in their clinical years. It goes without saying that a fundamental understanding of neuroanatomy is essential to localising lesions in neurology. Clinically-relevant neurosciences should likewise receive emphasis in pre-clinical teaching.

Many neurology educators concur with students’ wishes for the teaching of basic science and clinical neurology to be effectively integrated. [14, 15] This needs to be a priority. The problem- or case-based learning model adopted by many undergraduate programs should easily accommodate this integration, using carefully selected cases that can be reinforced with continual assessment via written or observed clinical exams. [15] Neuroanatomy can be a complex science to comprehend. Therefore, more clinically-appropriate and student-focused rules or tricks should be taught to simplify the concepts. The ‘rule of fours’ for brainstem vascular syndromes is one delightful example of such a rule. [16] A teaching ‘rule’ of this kind is more useful to students than memorising anatomical mnemonics, which favour rote learning over developing a deeper understanding of anatomical concepts. Given the reliance on increasingly sophisticated neuroimaging in clinical neurology, correlating clinical neuroimaging with the relevant anatomical concepts must also be included in the pre-clinical years.

During the clinical years, medical students ideally want more frequent and improved bedside teaching in smaller tutorial groups. The feasibility of smaller groups is beyond the scope of this article, but I will emphasise one style of bedside teaching that is most conducive to learning neurology. Bedside teaching allows the student to carry out important components of a clinical task under supervision, test their understanding during a clinical discussion and then reflect on possible areas of improvement during a debrief afterwards. This century-old style of bedside teaching, more recently characterised in educational theory as the application of an experiential learning cycle (ELC) framework, works for students today as it did for their teachers when they themselves were students of neurology. [17, 18] The essential questions for a clinical teacher to ask during bedside tutorials are ‘why?’ and ‘so what?’ These inquiries gauge students’ deeper understanding of the interactions between an illness and its neuro-anatomical correlations, rather than simply testing recall of isolated medical facts. [19]

There is also the issue of the inpatient population within the neurology ward. The overwhelming majority of patients are people who have experienced a stroke and, in large tertiary teaching hospitals, there will also be several patients with rare diagnoses and syndromes. This selection of patients is unrepresentative of the broad nature of neurological presentations and especially excludes patients whose conditions are non-acute and who are referred to outpatient clinics. Students are sometimes deterred by patients with rare syndromes that would not even be worth mentioning in a differential diagnosis list in an objective structured clinical examination. More exposure to outpatient clinics would therefore assist students to develop skills in recognising common neurological presentations. The learning and teaching of neurology at outpatients should, like bedside tutorials, follow the ELC model. [18] Outpatient clinics should be made mandatory within neurology rotations and, rather than making students passive observers as is commonplace, students should be required to see the patient beforehand (especially if the patient is known to the neurologist and has signs or important learning points that can be garnered from their history). A separate clinic room for the student is necessary for this approach, with the neurologist coming in after a period of time, allowing the student to present their findings, followed by an interactive discussion of relevant concepts. Next, the consultant can conduct the consultation with the student observing. Following feedback, the student can consider what could be improved and plan the next consultation, as described in the ELC model (Figure 2). Time constraints make teaching difficult in outpatient settings. However, with this approach, while the student is seeing the known patient alone, the consultant can see other (less interesting) patients in the clinic, so in essence no time (apart from the teaching time) is lost. This inevitably means the student may miss seeing every second patient that comes to the clinic, but sacrificing quantity for quality of learning may be more beneficial in combating neurophobia long term.

Neurology associations in the US and UK have developed curricula as “must-know” guidelines for students and residents. [20, 21] The major benefits of these endeavours are to set a minimum standard across medical schools and to provide clear objectives to which students can aspire, helping to develop recognition of common neurological presentations and the acquisition of essential clinical skills. For this reason, the development of a uniform neurology curriculum adopted across all Australian medical school programs may also alleviate neurophobia.

The responsibility to engage with or seek learning opportunities in neurology, and so combat neurophobia, nevertheless lies with students. Students’ own motivation is vital in seeking improvement. It is often hardest to motivate students who find neurology boring and thus fail to engage with the subject; nevertheless, interest often picks up once students feel more competent in the area. To help improve their knowledge and skills in neurology, students can now use a variety of resources apart from textbooks and journals to complement their clinical neurology exposure. One growing trend in the US is the use of online learning and resources for neurology. A variety of online resources supplementing bedside learning and didactic teaching (e.g. lectures) is beneficial to students because of the active learning process they promote: integrating the acquisition of information, placing it in context and then using it practically in patient encounters. [9] Medical schools should therefore experiment with novel resources and teaching techniques that students will find useful – ‘virtual neurological patients’, video tutorials and neuroanatomy teaching computer programmes are all potential modern teaching tools. This new format of electronic teaching is one way to engage students who are otherwise uninterested in neurology.

In conclusion, recognising the early signs of neurophobia is important for medical students and teachers alike. Once it is diagnosed, it is the responsibility of both student and teacher to minimise the burden of disease.

Acknowledgements

The author would like to thank May Wong for editing and providing helpful suggestions for an earlier draft of this article.

Conflicts of interest

None declared.

Correspondence

B Nham: benjaminsb.nham@gmail.com

Categories
Feature Articles

The ethics of euthanasia

Introduction

The topic of euthanasia is shrouded in much ethical debate and ambiguity. Various types of euthanasia are recognised, with active voluntary euthanasia, assisted suicide and physician-assisted suicide eliciting the most controversy. [1] Broadly speaking, these terms describe the termination of a person’s life to end their suffering, usually through the administration of drugs. Euthanasia is currently illegal in all Australian states, reflecting the status quo of most countries, although there are a handful of countries and states where acts of euthanasia are legally permitted under certain conditions.

Advocates of euthanasia argue that people have a right to make their own decisions regarding death, and that euthanasia is intended to alleviate pain and suffering, hence its description as “mercy killing.” They hold the view that active euthanasia is not morally worse than the withdrawal or withholding of medical treatment, which they, erroneously, describe as “passive euthanasia.” Such views are contested by opponents of euthanasia, who raise the argument of the sanctity of human life, contend that euthanasia is equal to murder, and argue that it abuses autonomy and human rights. Furthermore, it is said that good palliative care can relieve patients’ suffering and that palliative care, not euthanasia, should be the answer in modern medicine. This article will define several terms relating to euthanasia in order to frame the key arguments used by proponents and opponents of euthanasia. It will also outline the legal situation of euthanasia in Australia and countries abroad.

Defining euthanasia

The term “euthanasia” is derived from Greek, literally meaning “good death”. [1] Taken in its common usage however, euthanasia refers to the termination of a person’s life, to end their suffering, usually from an incurable or terminal condition. [1] It is for this reason that euthanasia was also coined the name “mercy killing”.

Various types of euthanasia are recognised. Active euthanasia refers to the deliberate act, usually through the intentional administration of lethal drugs, to end an incurably or terminally ill patient’s life. [2] On the other hand, supporters of euthanasia use another term, “passive euthanasia”, to describe the deliberate withholding or withdrawal of life-prolonging medical treatment resulting in the patient’s death. [2] Unsurprisingly, the term “passive euthanasia” has been described as a misnomer: in Australia and most countries around the world, this practice is not considered euthanasia at all. Indeed, according to Bartels and Otlowski, [2] withholding or withdrawing life-prolonging treatment, either at the request of the patient or when it is considered to be in the best interests of the patient, “has become an established part of medical practice and is relatively uncontroversial.”

Acts of euthanasia are further categorised as “voluntary”, “involuntary” and “non-voluntary.” Voluntary euthanasia refers to euthanasia performed at the request of the patient. [1] Involuntary euthanasia describes the situation where euthanasia is performed when the patient does not request it, with the intent of relieving their suffering – which, in effect, amounts to murder. [3] Non-voluntary euthanasia relates to a situation where euthanasia is performed when the patient is incapable of consenting. [1] The term most relevant to the euthanasia debate is “active voluntary euthanasia”, which refers to the deliberate act of ending an incurably or terminally ill patient’s life, usually through the administration of lethal drugs, at his or her request. Assisted suicide is when a person intentionally assists a patient, at their request, to terminate his or her life. [2] Physician-assisted suicide refers to a situation where a physician intentionally assists a patient, at their request, to end his or her life, for example by the provision of information and drugs. [3] The main difference between active voluntary euthanasia and assisted suicide is that in assisted suicide and physician-assisted suicide, the patient performs the killing act. [2]

Another concept that is linked to end-of-life decisions and should be differentiated from euthanasia is the doctrine of double effect. The doctrine of double effect excuses the death of the patient that may result, as a secondary effect, from an action taken with the primary intention of alleviating pain. [4] Supporters of euthanasia may describe this as indirect euthanasia, but again, this term should be discarded when considering the euthanasia debate. [3]

Legal situation of active voluntary euthanasia and assisted suicide

In Australia, active voluntary euthanasia, assisted suicide and physician-assisted suicide are illegal (see Table 1). [1] In general, across all Australian states and territories, any deliberate act resulting in the death of another person is defined as murder. [2] The prohibition of euthanasia and assisted suicide is established in the criminal legislation of each Australian state, as well as the common law in the common law states of New South Wales, South Australia and Victoria. [2]

The prohibition of euthanasia and assisted suicide in Australia has been the status quo for many years. However, there was a period when the Northern Territory permitted euthanasia and physician-assisted suicide under the Rights of the Terminally Ill Act (1995). The Act came into effect in 1996 and made the Northern Territory the first place in the world to legally permit active voluntary euthanasia and physician-assisted suicide. Under this Act, competent terminally ill adults aged 18 or over were able to request a physician to help them in dying. The Act was short-lived, however: the Federal Government overturned it in 1997 with the Euthanasia Laws Act 1997, [1,2] which denied the territories the power to legislate to permit euthanasia or assisted suicide. [1] There have been a number of attempts in various Australian states, over the past decade and more recently, to legislate for euthanasia and assisted suicide, but all have failed to date, owing to a majority consensus against euthanasia. [1]

A number of countries and states around the world have permitted euthanasia and/or assisted suicide in some form; however, this is usually subject to specific conditions (see Table 2).

Arguments for and against euthanasia

There are many arguments that have been put forward for and against euthanasia; a few of the main ones are outlined below.

For

Rights-based argument

Advocates of euthanasia argue that a patient has the right to make the decision about when and how they should die, based on the principles of autonomy and self-determination. [1, 5] Autonomy is the concept that a patient has the right to make decisions relating to their life, so long as those decisions cause no harm to others. [4] Advocates relate this notion to the right of an individual to control their own body, arguing that patients should therefore have the right to make their own decisions concerning how and when they will die. Furthermore, it is argued that, as part of our human rights, there is a right to make our own decisions and a right to a dignified death. [1]

Beneficence

It is said that relieving a patient of their pain and suffering by performing euthanasia does more good than harm. [4] Advocates of euthanasia express the view that the fundamental moral values of society, compassion and mercy, require that no patient be allowed to suffer unbearably, and that mercy killing should therefore be permissible. [4]

The difference between active euthanasia and passive euthanasia

Supporters of euthanasia claim that active euthanasia is not morally worse than passive euthanasia – the withdrawal or withholding of medical treatment that results in a patient’s death. In line with this view, it is argued that active euthanasia should be permitted just as passive euthanasia is allowed.

James Rachels [12] is a well-known proponent of euthanasia who advocates this view. He argues, on utilitarian grounds, that there is no moral difference between killing and letting die, as the intention is usually similar. He illustrates this argument with two hypothetical scenarios. In the first, Smith anticipates an inheritance should anything happen to his six-year-old cousin, and ventures to drown the child while he takes his bath. In the second, Jones likewise stands to inherit a fortune should anything happen to his six-year-old cousin and intends to drown him, but witnesses the child drown on his own by accident and lets him die. Callahan [9] highlights the fact that Rachels uses a hypothetical case in which both parties are morally culpable, which fails to support Rachels’ argument.

Another of his arguments is that active euthanasia is more humane than passive euthanasia, as the former involves “a quick and painless” lethal injection whereas the latter can result in “a relatively slow and painful death.” [12]

Opponents of euthanasia argue that there is a clear moral distinction between actively terminating a patient’s life and withdrawing or withholding treatment which ends a patient’s life. Letting a patient die from an incurable disease may be seen as allowing the disease to be the natural cause of death without moral culpability. [5] Life-support treatment merely postpones death and when interventions are withdrawn, the patient’s death is caused by the underlying disease. [5]

Indeed, it is this view that is strongly endorsed by the Australian Medical Association, which opposes voluntary active euthanasia and physician-assisted suicide but does not consider the withdrawal or withholding of treatment that results in a patient’s death to be euthanasia or physician-assisted suicide. [1]

Against

The sanctity of life

Central to the argument against euthanasia is society’s view of the sanctity of life, and this can have both a secular and a religious basis. [2] The underlying ethos is that human life must be respected and preserved. [1]

The Christian view sees life as a gift from God, who ought not to be offended by the taking of that life. [1] Similarly, the Islamic faith holds that “it is the sole prerogative of God to bestow life and to cause death.” [7] The withholding or withdrawal of treatment is permitted when it is futile, as this is seen as allowing the natural course of death. [7]

Euthanasia as murder

Society views an action which has a primary intention of killing another person as inherently wrong, in spite of the patient’s consent. [8] Callahan [9] describes the practice of active voluntary euthanasia as “consenting adult killing.”

Abuse of autonomy and human rights

While autonomy is used by advocates for euthanasia, it also features in the argument against euthanasia. Kant and Mill [3] held that the principle of autonomy forbids the voluntary ending of the conditions necessary for autonomy, which would occur by ending one’s life.

It has also been argued that patients’ requests for euthanasia are rarely autonomous, as most terminally ill patients may not be of a sound or rational mind. [10]

Callahan [9] argues that the notion of self-determination requires that the right to lead our own lives is conditioned by the good of the community, and therefore we must consider risk of harm to the common good.

In relation to human rights, some critics of euthanasia argue that the act of euthanasia contravenes the “right to life”. The Universal Declaration of Human Rights affirms that “everyone has the right to life.” [3] Right-to-life advocates dismiss claims that there is a right to die, which would make suicide virtually justifiable in any case. [8]

The role of palliative care

It is often argued that the pain and suffering experienced by patients can be relieved by administering appropriate palliative care, making euthanasia an unnecessary measure. [3] According to Norval and Gwyther, [4] “requests for euthanasia are rarely sustained after good palliative care is established.”

The rights of vulnerable patients

If euthanasia were to become an accepted practice, it may give rise to situations that undermine the rights of vulnerable patients. [11] These include coercion of patients receiving costly treatments to accept euthanasia or physician-assisted suicide.

The doctor-patient relationship and the physician’s role

Active voluntary euthanasia and physician-assisted suicide undermine the doctor-patient relationship, destroying the trust and confidence built in such a relationship. A doctor’s role is to help and save lives, not end them. Casting doctors in the role of administering euthanasia “would undermine and compromise the objectives of the medical profession.” [1]

Conclusion

It can be seen that euthanasia is indeed a contentious issue, with the heart of the debate lying in active voluntary euthanasia and physician-assisted suicide. Its legal status in Australia is that of a criminal offence, attracting murder or manslaughter charges according to the criminal legislation and/or common law of the Australian states. Australia’s prohibition and criminalisation of euthanasia and assisted suicide reflects the legal status quo in most other countries around the world; only a few countries and states have legalised acts of euthanasia and/or assisted suicide. The many arguments that have been put forward for and against euthanasia, of which a handful have been outlined here, provide only a glimpse into the ethical debate and controversy surrounding the topic.

Conflicts of interest

None declared.

Correspondence

N Ebrahimi: nargus.e@hotmail.com

References

[1] Bartels L, Otlowski M. A right to die? Euthanasia and the law in Australia. J Law Med. 2010 Feb;17(4):532-55.

[2] Walsh D, Caraceni AT, Fainsinger R, Foley K, Glare P, Goh C, et al. Palliative medicine. 1st ed. Canada: Saunders; 2009. Chapter 22, Euthanasia and physician-assisted suicide; p.110-5.

[3] Goldman L, Schafer AI, editors. Goldman’s Cecil Medicine. 23rd ed. USA: Saunders; 2008. Chapter 2, Bioethics in the practice of medicine; p.4-9.

[4] Norval D, Gwyther E. Ethical decisions in end-of-life care. CME. 2003 May;21(5):267-72.

[5] Kerridge I, Lowe M, Stewart C. Ethics and law for the health professions. 3rd ed. New South Wales: Federation Press; 2009.

[6] Legemaate J. The Dutch euthanasia act and related issues. J Law Med. 2004 Feb;11(3):312-23.

[7] Bulow HH, Sprung CL, Reinhart K, Prayag S, Du B, Armaganidis A, et al. The world’s major religions’ points of view on end-of-life decisions in the intensive care unit. Intensive Care Med. 2008 Mar;34(3):423-30.

[8] Somerville MA. “Death talk”: debating euthanasia and physician-assisted suicide in Australia. Med J Aust. 2003 Feb 17;178(4):171-4.

[9] Callahan D. When self-determination runs amok. Hastings Cent Rep. 1992 Mar-Apr;22(2):52-55.

[10] Patterson R, George K. Euthanasia and assisted suicide: A liberal approach versus the traditional moral view. J Law Med. 2005 May;12(4):494-510.

[11] George RJ, Finlay IG, Jeffrey D. Legalised euthanasia will violate the rights of vulnerable patients. BMJ. 2005 Sep 24;331(7518):684-5.

[12] Rachels J. Active and passive euthanasia. N Engl J Med. 1975 Jan 9;292(2):78-80.