
Medical students in the clinical environment

Introduction

It is common for medical students to feel apprehension and uncertainty in the clinical environment. It can be a daunting setting, one in which students may feel firmly rooted to the bottom of the pecking order. However, there are many ways medical students can contribute to their healthcare teams. Whilst students are not able to formally diagnose patients or prescribe medications, they remain an integral part of the healthcare landscape and culture. The transition from being ‘just’ a medical student to being a confident, capable medical professional is a large but important step in our development from the textbook to the bedside. By being proactive and committed, students can be of genuine help and contribute to improved outcomes in the clinical setting. In this editorial we outline several ways to ease this transition.

Concerns of medical students

When faced with the clinical environment, most medical students will have some reservations about various aspects of clinical practice. Concerns described in the literature include being accepted as part of the team, [1] fatigue, [2, 3] potential mental abuse, [4, 5] poor personal performance and lifestyle issues. [6, 7] These concerns can largely be grouped into three categories: concerns regarding senior clinicians, concerns regarding the clinical environment, and concerns regarding patient interaction. [1] Practising clinicians hold the key to effective medical education, and their acceptance of medical students is often crucial for a memorable learning experience. [1] Given the hierarchical nature of most medical organisations, senior clinicians, as the direct ‘superiors’, are given the responsibility of assessing students. Concerns regarding the clinical environment relate to the demands placed on students during clinical years, such as on-call shifts, long hours, early starts and the pressure to gain practical knowledge. Anecdotally, it is common to hear of medical students becoming consumed by their study of medicine and rarely having time to pursue other interests in life.

Patient-student interaction is another common source of anxiety, as medical students are often afraid of causing harm to real patients. Medical students are often encouraged to perform invasive practical skills (such as venipuncture, intravenous cannulation, catheterisation, suturing, invasive clinical examinations, nasogastric tube insertion, airway management and arterial blood gas sampling) and to take sensitive histories. We have the ability to physically or psychologically hurt our patients, and Rees et al. [8] have recently reported intimate examinations being performed without valid consent by Australian medical students. This must be balanced against our need to learn as students so that we avoid making errors when we eventually enter clinical practice. These are pertinent issues that must be addressed to ensure that the average medical student can feel comfortable and contribute to the team in an ethical manner.

Attitudes towards medical students

Despite medical students’ concerns about the attitudes of clinicians, allied health professionals and patients towards them, most of these groups actually view having students in the clinical environment positively. Most studies have shown that the majority of patients are receptive to medical students and have no issues with disclosing personal information or being examined. [9-11] In particular, patients who are older or have had prior admissions tend to be more accepting of student participation in their care. [9, 12] These findings were consistent across a number of specialties, even those dealing with genitourinary issues. [13] On a cautionary note, students should bear in mind that a sizeable minority of patients prefer to avoid medical student participation, and under these circumstances it is important to respect patient autonomy and refrain from being involved in their care. [14] Graber et al. [14] have also reported that patients are quite apprehensive about having medical students perform procedures on them, particularly more invasive procedures such as central line placement or lumbar puncture. Interestingly, a sizeable minority (21%) preferred never to have medical students perform venipuncture, [14] a procedure often considered minor by medical professionals. This is a timely reminder that patient perspectives often differ from ours and that we need to respect their opinions and choices.

Ways we can contribute

As aspiring medical professionals, our primary objective is to actively seek ways to learn from experienced colleagues and real patients about the conditions they face. Being a proactive learner is a crucial aspect of being a student, and this in itself can benefit the clinical team through sharing new knowledge, promoting academic discussion or motivating senior clinicians. However, as medical students we can also contribute to the healthcare team in a variety of practical ways, including formulating differential diagnoses, assisting with data collection, preventing medical errors and attending to the emotional well-being of patients. These are simple yet effective ways of fulfilling one’s role as a medical student, with potentially meaningful outcomes for patients.

Preventing medical errors

As medical students, we can play an important role in preventing patient harm and picking up medical errors. Medical errors arise for a wide variety of reasons, ranging from miscommunication to lost documentation to a lack of physician time. [15-18] In these situations, medical students can be just as capable as medical professionals of noticing errors. Seiden et al. [19] report four cases in which medical students prevented medical errors and ensured patient safety, ranging from ensuring sterile technique in surgery to correcting a medication error to respecting a do-not-resuscitate order. These are all cases within the circle of competence of most medical students. Anecdotally, there are many more situations in which a medical student has contributed to reducing medical errors. Another study has shown that up to 76% of second-year medical students at the University of Missouri-Columbia observed a medical error; [17] however, only 56% reported the error to the resident-in-charge. Various factors contribute to this relatively low percentage: inexperience, lack of confidence, hesitancy to voice opinions, being at the bottom of the medical hierarchy and fear of conflict. [17] Whilst medical students should not be relied upon as primary gatekeepers of patient safety, we should be more forthcoming in voicing our opinions and concerns. By being involved and attuned to the fact that medical errors are common, we can make a significant difference to a patient’s well-being. In recognition of the need to educate medical students about the significance of medical errors, there have been efforts to integrate this formally into the medical curriculum. [20, 21]

Assistance with collecting data

Physicians in clinical environments are notoriously short of time. The average duration of a consultation may range from eight to nineteen minutes, [22-24] which is often insufficient to take a comprehensive history. There is also a range of administrative duties that reduce patient interaction time, such as ordering investigations, filling out drug charts, arranging referrals or finding a hospital bed. [25,26] Mache et al. [25,26] have reported that paediatricians and surgeons spent up to 27% and 21% of their time, respectively, on administrative duties and documentation. Medical students tend to have fewer administrative duties and are thus able to spend more time with individual patients. Medical students can be just as competent at taking medical histories or examining patients, [27,28] and they can uncover crucial pieces of information that have gone unnoticed, such as the presence of a ‘Do Not Resuscitate’ order in a seriously ill patient. [19] Students are also often encouraged to try their hand at practical skills such as venipuncture, history taking or clinical examination, all of which save physician time and contribute to the diagnostic process.

Emotional well-being of patients

Due to the unique nature of the hospital environment, patients often experience a range of negative emotions, from anxiety and apprehension to depression. [29-31] A patient’s journey through the hospital can be an unnerving and disorientating experience, with referrals from unit to unit and several different caregivers at each stage of the process. This is compounded by the fact that clinicians do not always have sufficient patient contact time to soothe patients’ fears and emotional turmoil; studies have shown that direct patient contact represents a small proportion of work time, as little as 4% in some cases. [25,26,32,33] Most patients feel comfortable with and enjoy their interactions with medical students, and some even feel that they benefit from having medical students in the healthcare team. [9,10,12,14,34] By being empathetic and understanding of our patients’ conditions, we can often alleviate the isolating and disorientating nature of the hospital environment. [12,35]

International health

Most medical students, particularly early in the course, are motivated by idealistic notions of making a difference to the welfare of their patients. [36,37] This often extends to the less fortunate in developing countries, and students often have a strong interest in global health and overseas electives. [38,39] This can be a win-win situation for both parties: healthcare systems in developing countries stand to benefit from the additional help and expertise provided by students, while students gain educational benefits (recognising tropical conditions, public health, alternative medicine), enhanced skills (clinical examination, performing investigations), cultural exposure and the fostering of values such as idealism and community service. [38] However, it is important to identify our limits as medical students and learn how to decline requests that are beyond our scope of knowledge, training and experience. This is an ethical dilemma that many students face whilst on electives in resource-poor areas, and it is often a fine line between providing help to those in desperate need and abusing one’s position. We have the potential to do more harm than good when exceeding our capabilities, and given the lack of clear guidelines it falls to the student to be aware of these ethical dilemmas and draw the line between right and wrong in these situations. [40,41]

Student-run clinics and health promotion activities

In other countries, such as the United States, student-run medical clinics play a crucial role in the provision of affordable healthcare. [42-45] These clinics number over 120 across the country and record up to 36 000 visits nation-wide. [43] In these clinics, students from a variety of disciplines (such as medicine, nursing, physiotherapy, dentistry, alternative medicine, social work, law and pharmacy) collaborate to manage patients from disadvantaged backgrounds. [46] Whilst this concept is still an emerging one in Australia (the first student-run clinic was initiated by Doutta Galla Community Health and the University of Melbourne this year, culminating in the REACH clinic – Realising Education, Access and Collaborative Health), [47] there has been a strong tradition of medical students being heavily involved in health promotion projects in their local communities. [48] It is not uncommon to hear of students being actively involved in community health promotion clinics, blood donation drives or blood pressure screening, [49] all of which have practical implications for public health. By modifying our own health behaviours and actively participating in local communities, students can have a tangible impact and influence others to lead healthier lifestyles.

Note of caution

Whilst medical students should actively participate and be an integral part of the medical team, care must be taken not to overstep the professional boundaries of our role. It is important to remember that our primary aim is to learn how to care for patients, not to be the principal team member responsible for patient care. There have been several ethical issues surrounding the behaviour of medical students in clinical settings in recent times. A prominent example is the lack of valid consent when observing or performing intimate examinations. The report by Rees et al. [8] generated widespread controversy and public outrage. [50] The study showed that most medical students complied with the instructions of more senior clinicians and performed sensitive examinations without explicit consent, sometimes whilst patients were under anaesthesia. A variety of reasons were given for this behaviour, ranging from the lack of similar learning opportunities to presumed pressure from supervising doctors. This is not a new issue; an earlier study by Coldicott et al. [51] had also highlighted it as a problem. As emerging medical professionals we must avoid being so carried away by the excitement of clinical practice that we ignore the vulnerability of our patients.

Conclusion

The clinical environment offers medical students limitless potential to develop their clinical acumen. As medical students we have the opportunity to participate in all stages of patient care, from helping formulate a diagnosis to proposing a management plan. Holistic care goes beyond the physical aspects of disease, and medical students can play an important role in ensuring that the psychosocial well-being of patients is not ignored. Our impact is not restricted to the hospital setting; we are limited only by our imagination and determination. By harnessing the idealism unique to medical students we can create truly inspirational projects that influence local and overseas communities. By experiencing a full range of clinical scenarios in different environments, we can develop a generation of doctors who are not only clinically astute, but also well-rounded individuals with the ability to connect with patients from all backgrounds. As medical students we have the potential to contribute in a practical manner with tangible outcomes, and we should aspire to that even as we make the fifth cup of coffee for the busy registrar on call.

Acknowledgements

Michael Thompson for his feedback and assistance in editing draft manuscripts.

Conflict of interest

None declared.

Correspondence

f.chao@amsj.org

 


The AMSJ leading the way in Australian medical student research

Welcome to Volume 3, Issue 2, another successful issue of the Australian Medical Student Journal. The current issue again provides our medical student and junior doctor readership with a broad range of intellectually intriguing topics.

Our medical student authors have continued to submit quality articles, demonstrating their important contribution to research. Publication of medical student research is an important part of the transition from being a medical student to a junior academic clinician and the AMSJ continues to provide this opportunity for Australian medical students.

Some key highlights from this issue include an editorial from our Associate Editor, Foong Yi Chao, who illustrates the important role that medical students have in clinical settings – in addition to their role in medical research. Continuing this theme are submissions describing an impressive Australian medical student-led project aimed at reducing the incidence of malaria in a Ugandan community, as well as a student elective project in India. Other submissions range from an examination of the traditional white coat in clinical medicine, the history and evolution of abdominal aortic aneurysm repair surgery, and case reports in paediatric surgery and infectious diseases, to a student perspective on palliative care medicine and the future role of direct-to-consumer genetic tests.

We also present a systematic review, “Predicting falls in the elderly,” by Mina Sarofim, which has won the award for the best article of Volume 3, Issue 2. This research was identified by our editors and reviewers as being of particularly high quality, with robust and rigorous methodology. Sarofim’s article analyses an important problem and cause of great morbidity in the elderly population.

The Hon. Jillian Skinner MP, New South Wales Minister for Health and Minister for Medical Research, provides us with an insightful discussion on the role of e-health and telemedicine programs in improving healthcare. Advances in e-health will be of significant value to the Australian community, especially in rural and regional areas where a lack of appropriate specialist care has been a major problem.

The AMSJ continues to support initiatives to encourage student research. We are pleased to be publishing a supplementary online issue of research abstracts presented at the Australasian Students’ Surgical Conference (ASSC) in June this year.

Another new addition to our website will be a database of all peer reviewers who have contributed in 2012. We are always on the lookout for new peer reviewers, who are welcome to submit their details via our website.

The AMSJ Blog is another initiative that we are excited to be redeveloping and revitalising. From November 2012, readers can look forward to regular fortnightly articles from the AMSJ staff. Upcoming topics include conference summaries, tips on professional networking and even a discussion of the coffee habits of medical students!

Since our inaugural issue in 2010, the AMSJ has continued to expand as a student publication. We received over 70 submissions for this issue and have continued to publish approximately 30-50% of submissions. We have also recently completed a new Australia-wide recruitment of staff members. Our nation-wide staff have continued to work together successfully through teleconference meetings and email.

The production of this journal is a major undertaking, with several staff members completing their final medical school exams while compiling this issue. We would like to extend our thanks to all of our voluntary staff members, as well as our external peer reviewers, for donating their time and effort to ensure a successful issue of the AMSJ.

In addition, we would like to thank you, our valued readers and authors, for your continued support and for providing the body of work that has made this publication possible. We hope that you enjoy the current issue as much as we have enjoyed compiling it.


The hidden value of the Prevocational General Practice Placements Program

Medical students, prevocational doctors and general practitioners (GPs) may have little knowledge of the Prevocational General Practice Placements Program (PGPPP).

This article seeks to explore the value of such placements and provide an aspiring surgeon’s reflection on a PGPPP internship placement in rural New South Wales (NSW).

General practice placements for interns have been available for the past three decades in the United Kingdom, with the literature unanimously promoting their educational benefits. [1] The Australian PGPPP experiences in Western Australia and South Australia reinforce the feasibility of such placements and propose cooperation between universities, postgraduate councils, training networks and specialist training colleges. [2] Semi-structured interviews with interns who had completed the PGPPP indicated experience in communication, counselling and procedural skills across a range of patient presentations. [3] The uptake of the PGPPP has varied between states, with NSW, until recently, having substantially fewer placements, particularly at intern level. [4]

Prevocational GP placements have the potential to alleviate some of the pressure of sourcing additional postgraduate placements for junior doctors. With the dramatic increase in Australian medical school graduates – 81% in seven years – overwhelming traditional postgraduate training placements, [5] the growth of the PGPPP will continue. Despite available qualitative data, there is currently no published information that adequately describes the range and volume of patient and procedural experiences in PGPPP placements. In response, a prospective study of caseload data is currently underway to better inform medical students, prospective PGPPP doctors, GP supervisors and specialist training colleges of the potential value of such placements.

 

In April 2012, I undertook an eleven-week placement at Your Health Griffith, a medical centre in Griffith, NSW. The practice was staffed by seven GPs and two practice nurses. Two GPs alternated as clinical supervisors, and a third GP, separate from the practice, conducted informal weekly tutorials and reviewed patient encounter and procedure logs. Both clinical supervision and teaching exceeded Royal Australian College of General Practitioners (RACGP) standards. [6]

Presentations during a single day included complex medical and surgical issues, paediatrics, obstetrics, gynaecology, dermatology, mental health, occupational health, preventative health and more. The workload included booked appointments and consenting patients taken from the GP supervisors’ bookings, ensuring a reasonable patient caseload. Patients often attended follow-up appointments during the term. This continuity of patient care was in stark contrast to acute medicine and surgery in tertiary hospitals, and it allowed greater rapport to be established, with patients openly discussing intimate social or mental health issues during subsequent consultations.

The majority of tertiary hospitals encompass an established hierarchy of fellows, accredited and unaccredited registrars, residents and enthusiastic medical students vying for procedures. With the possible exception of emergency rotations, most interns complete only a few procedures in hospital terms. Hence, in the PGPPP, junior doctors value the opportunity to practise procedural skills including administration of local anaesthesia, skin lesion excision and suturing.

The main source of frustration within the placement was administrative red tape. The restrictions placed upon interns with provisional medical registration meant that all scripts and referrals had to be counter-signed and issued under the GP supervisors’ provider numbers and prescription authority. Interns routinely prescribe medications and make referrals in the hospital system; that this authority has not been extended to the similarly supervised PGPPP is bewildering. The need to obtain formal consent prior to consultations, in contrast to the implied consent of hospital treatment, was reminiscent of medical school.

One of the main purposes of the development of the PGPPP was to promote general practice to prevocational junior medical officers. These placements provide junior doctors with valuable exposure to community medicine, develop confidence in dealing with diagnostic uncertainty and a range of patient presentations, and improve counselling and procedural skills. These skills and experiences are likely to be retained regardless of future specialisation. Perhaps it is just as important for GPs to play a role in educating future tertiary care specialists, so that all may better understand both the capabilities and limitations of community medicine. While I still wish to pursue a career in surgery, this placement has provided insight into the world of community medicine. The value of the PGPPP truly extends beyond attracting prevocational doctors to general practice careers.

Conflict of interest

None declared.

Acknowledgements

Dr Marion Reeves, Dr Jekhsi Musa Othman and Dr Damien Limberger for their supervision and guidance through this clinical rotation.

 


Modelling human development and disease: The role of animals, stem cells, and future perspectives

Introduction

The ‘scientific method’ begins with a hypothesis, which is the critical keystone in forming a well-designed study. As important as it is to ask the correct questions to form the hypothesis, it is equally important to be aware of the available tools to derive the answers.

Experimental models provide a crucial platform on which to interrogate cells, tissues, and even whole animals. They broadly serve two important purposes: investigation of biological mechanisms to understand diseases and the opportunity to perform preclinical trials of new therapies.

Here, an overview of the animal models commonly used in research is provided. Limitations which may impact the clinical translation of findings from animal experiments are discussed, along with strategies to overcome them. Additionally, stem cells present a novel human-derived model with great potential from both scientific and clinical viewpoints. These perspectives should draw attention to the incredible value of model systems in biomedical research, and provide an exciting view of future directions.

Animal models – a palette of choices

Animal models provide a ‘whole organism’ context in studying biological mechanisms, and are crucial in testing and optimising delivery of new therapies before the commencement of human studies. They may be commonly referred to under the classification of invertebrates (flies, worms) and vertebrates (fish, rodents, swine, primates); or small animal (fish, rodents) and large animal (swine, primates, sheep).

Whilst each organism has its own niche area of research, the most frequently used is the humble mouse. Its prominence is attested by the fact that it was only the second mammalian species, after humans, to have its genome sequenced, which demonstrated that the two species share 99% of their genes. [1] Mice are popular partly because they share many anatomical and physiological similarities with humans. They are also small, hardy, cheap to maintain and easy to breed, and their short lifespan (approximately three years) [2] allows experiments to yield results more quickly. Common human diseases such as diabetes, heart disease and cancer also affect mice, [3] hence complex pathophysiological mechanisms such as angiogenesis and metastasis can be readily demonstrated. [2] Above all, the extraordinary ease with which mice are manipulated has resulted in the widespread availability of inbred, mutant, knockout, transgenic or chimeric mice for almost every purpose conceivable. [3] By blocking specific genes or stimulating their overexpression, their roles in developmental biology and disease can be identified and even demonstrated in specific organs. [4]

Humanised mice represent a further step towards modelling what happens in the human body, thereby increasing the clinical value of knowledge gained from experiments. Humanised mice contain either human genes or human tissue, allowing investigation of human mechanisms whilst maintaining an in vivo context within the animal. Such approaches are also available in other organisms such as rats, but these are often adapted from initial advances in mice and rarely match the ease and diversity with which humanised mice are produced.

Aside from the mouse, invertebrates such as the vinegar fly Drosophila [5] and the worm Caenorhabditis elegans [6] are also widely used in genetics and developmental biology research. They are particularly easy to maintain and breed, and therefore large stocks can be kept. Furthermore, there are fewer ethical dilemmas, and these invertebrates have genomes simple enough to be investigated in their entirety without being cost-prohibitive or requiring an exhaustive set of experiments. Their anatomies are also distinct and simple, allowing developmental changes to be readily visualised.

Another alternative is the zebrafish, which shares many of the advantages offered by Drosophila and C. elegans. Additionally, it offers greater scope for investigating more complex conditions such as spinal cord injury and cancer, and, as a vertebrate, it possesses more advanced anatomical structures. [7] Given the zebrafish’s inherent capacity for cardiac regeneration, it is also of interest in regenerative medicine as we seek to harness this mechanism for human therapy. [8]

Large animals tend to be prohibitively expensive, time-consuming to manage and difficult to manipulate for basic science research. Instead, they have earned their place in preclinical trials. Their relatively large size and physiological similarity to humans provide the opportunity to perform surgical procedures and other interventions on a scale similar to that used clinically. Disease models created in sheep or swine are representative of the complex biological interactions present in highly evolved mammals, and hence may be suitable for vaccine discovery. [9] Furthermore, transgenic manipulation is now possible in non-human primates, presenting an opportunity to develop humanised models. [10] Despite this, there are obvious limitations confining their use to specialised settings: large animals need more space, are difficult to transport, require expert veterinary care, and their advanced psychosocial awareness raises ethical concerns. [9]

The clinical context of animal experimentation

A major issue directly relevant to clinicians is the predictive value of animal models. Put simply, how much of the research using animals is actually clinically relevant? Although most medical therapies in use today were initially developed using animal models, it is also recognised that many animal experiments fail to reproduce their findings when translated into clinical trials. [11] The reasons for this are numerous and require careful analysis.

The most obvious reason is that, despite some similarities, animals are still animals and humans are humans. Genetic similarities between species as seemingly disparate as humans and mice may lead to assumptions of conserved function that are not necessarily correct. Whilst comparing genomes can indicate similarities between two species, such studies are unable to capture differences in the expression or function of a gene across species that may occur at a molecular level. [12]

The effectiveness and clinical relevance of experimental animal trials are further complicated by epigenetics, the modification of gene expression by environmental or other cues without any change in DNA sequence. [13] These changes are now considered just as central to the pathogenesis of cancer and other conditions as genetic mutations.

It is also important to consider the multifactorial nature of human disease. Temporal patterns, such as asymptomatic or latent phases of disease, can further complicate matters. Patients have co-morbidities, risk factors and family histories, all of which contribute to disease in ways that we may still not completely understand. With such complexity, animal models cannot encapsulate the overall pathophysiology of human disease. Animals may be too young, too healthy, or too uniform in sex or genetics. [14] To obtain animals with specific traits, they are often inbred such that two animals in the same experiment have identical genetic make-up – like twins, hardly representative of the diversity present in nature. Understandably, it can be an extraordinary challenge to incorporate all these dimensions into one study, especially when the very principles of the scientific method dictate that variation in everything except the factor under investigation should be minimised as much as possible.

A second area of concern is the sub-optimal rigour and design of animal experiments. Scientists who conduct animal experiments and clinicians who conduct clinical trials often have different goals and perspectives. Due to ethical and cost concerns, the sample size of animal experiments is often kept to a minimum, and studies are not prolonged beyond what is necessary, often with arbitrarily determined end-points. [14] Randomisation, concealed allocation and blinded outcome assessment are poorly enforced, leading to concerns of experimental bias. [11] Additionally, scientific experiments are rarely repeated due to an emphasis on originality, whereas clinical trials are often repeated (sometimes as multi-centre trials) in order to assess the reproducibility of results. Furthermore, clinical trials are more likely to be published regardless of the nature of their results; in contrast, scientific experiments with negative findings or low statistical significance often go unreported. These gaps highlight that preclinical trials should be expected to adhere to the same standards and principles as clinical trials in order to improve the translatability of results between the two settings.

Although deficiencies in research conduct are a concern, the fundamental issue remains that even the best-designed preclinical study cannot overcome the inherent differences between animal models and ‘real’ human patients. However, it is reassuring that we are becoming better at manipulating animal models and enhancing their compatibility with their human counterparts. This drive towards increasingly sophisticated animal models will provide more detailed and clinically relevant answers. Additionally, with the recognition that a single animal model is inadequate on its own, experiments may be repeated in multiple models; each model provides a different perspective and leads to a more comprehensive and balanced conclusion. A suggested structure is to start initial proof-of-principle experiments in small, relatively inexpensive and easily manipulated animals, and then scale up to larger animal models.

‘Human’ experimental models – the revolution of stem cells

Given the intrinsic differences between animals and humans, it is crucial to develop experimental systems that simulate human biology as closely as possible. Stem cells are ‘master cells’ with the potential to differentiate into more mature cells, and are involved in the development and maintenance of organs through all stages of life, from the embryo (embryonic stem cells) to the adult (tissue-specific stem cells). [15] With the discovery of human embryonic stem cells [16] and other tissue-specific stem cells, [17] it is now possible to study the developmental biology of human tissues and organs in the laboratory. Stem cells may be studied under various controlled conditions in a culture dish, or even implanted into an animal to recapitulate in vivo conditions. Furthermore, stem cell transplantation has been used in animal models of disease to replace lost or damaged tissue. These methods are now entering high-profile clinical trials with both embryonic stem cells [18] and tissue-specific stem cells. [19] Although stem cells hold great potential, translating this into the clinical environment has been hindered by several obstacles. Chiefly, tissue-specific stem cells are rare and difficult to isolate, while embryonic stem cells can only be created by destroying an embryo. To generate personalised embryonic stem cells for cell therapy or disease modelling, they would need to be created via ‘therapeutic cloning’. The considerable ethical quandary associated with this left the field mired in controversy and political debate, bringing research almost to a standstill. Fortunately, stem cell research was rejuvenated in 2007 with the revolutionary discovery of induced pluripotent stem (iPS) cells – a discovery notable enough to be awarded the 2012 Nobel Prize in Physiology or Medicine.

Induced pluripotent stem (iPS) cells are created by reprogramming mature cells (such as skin fibroblasts) back into a pluripotent ‘stem cell’ state, from which they can re-differentiate into cells of any of the three germ layers, irrespective of their original lineage. [20] Cells from patients with various diseases can be reprogrammed into iPS cells, then examined and compared to cells from healthy individuals to understand disease mechanisms and identify therapeutic opportunities. Rather than relying on models created in animals, this approach represents a ‘complete’ model in which all genes contributing to a specific disease are present. Crucially, it enables the previously inconceivable notion of deriving patient-specific ‘disease in a dish’ models, which could be used to test therapeutic response. [21] It also provides unprecedented insight into conditions such as those affecting the heart [22] or brain, [23] which have been difficult to study due to limited access to tissue specimens and the constraints of conducting experiments in live patients.

However, a model system resting on stem cells alone would be relegated to in vitro analysis, without the whole-organism outlook that animal experiments afford us. Accordingly, by combining stem cells with rapidly evolving cell transplantation techniques, it is possible to derive stem cell-based animal models. Although this field is flourishing at an exponential rate, it is still in its infancy. It remains to be seen how iPS technology will translate to the pharmaceutical industry, and whether personalised drug screening assays will be adopted clinically.

Conclusion

Experimental models provide us with insight into human biology in ways that are more detailed and innovative than ever before, with a dazzling array of choices now available. Although the limitations of animal models can be sobering, they remain highly relevant in biomedical research. Their contribution to clinical knowledge can be strengthened by refining models to mimic human biology as closely as possible, and by modifying research methods to include protocols similar to those used in clinical trials. Additionally, the emergence of stem cells has shifted current paradigms by introducing patient-specific models of human development and disease. Stem cells should not, however, be seen as rendering animal models obsolete, but rather as a complementary methodology that should improve the predictive power of preclinical experiments as a whole.

Understanding and awareness of these advances is imperative in becoming an effective researcher. By applying these models and maximising their potential, medical students, clinicians and scientists alike will enter a new frontier of scientific discovery.

Conflict of interest

None declared.

Correspondence

k.yap@amsj.org

 


Where to from here for Australian childhood obesity?

Aim: At least one in twenty Australian school children is obese. [1] The causes and consequences of childhood obesity are well documented. This article examines the current literature on obesity management in school-aged Australian children. Methods: A systematic review was undertaken to examine the efficacy of weight management strategies for obese Australian school-aged children. Search strategies were implemented in the Medline and PubMed databases. The inclusion criteria required original data of Australian origin, school-aged children (4 to 18 years), BMI-defined populations and publication between January 2005 and July 2011. Reviews, editorials and publications with inappropriate focus were excluded. Thirteen publications were analysed. Results: Nine of the thirteen papers reviewed focused on general practice (GP)-mediated interventions, with the remainder utilising community, school or tertiary hospital management. Limitations identified in GP-led interventions included difficulties recognising obese children, difficulties discussing obesity with families, poor financial reward, time constraints, and a lack of proven management strategies. A school-based program was investigated, but was found to be ineffective in reducing obesity. Successful community-based strategies focused on parent-centred dietary modifications or exercise alterations in children. Conclusion: Obesity-specific management programs for children are scarce in Australia. As obesity remains a significant problem in Australia, this topic warrants further focus and investigation.

Introduction

In many countries the level of childhood obesity is rising. [2] Whilst the popular press has painted Australia as being in a similar situation, research has failed to identify significant increases in the level of childhood obesity since 1997, and in fact recent data suggest a small decrease. [2,3] Nonetheless, an estimated four to nine percent of school-aged children are obese. [1,4] Consequently, the Australian government has pledged to reduce the prevalence of childhood obesity. [5]

In this review, articles defined Body Mass Index (BMI) as weight (in kilograms) divided by the square of height (in metres). [1] BMI was then compared to age- and gender-specific international set points. [6] Obesity was defined as a BMI at or above the 95th percentile for children of the same age and gender. [6] The subjects of this review, Australian school-aged children, were defined as those aged 4 to 18 years in order to include most children from preschool to the completion of secondary school throughout Australia. As evidence suggests that obese children have significantly worse outcomes than overweight children, this review focused on obesity rather than on overweight and obese individuals combined. [1]
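To make the definition above concrete, the following is a minimal sketch in Python of how a BMI-based obesity classification could be computed. The 95th-percentile cut-off values shown are hypothetical placeholders, not the international reference set points cited above.

```python
# Minimal sketch of the BMI-based obesity definition used in this review.
# The percentile cut-offs below are hypothetical placeholders; a real
# implementation would look up age- and gender-specific reference values.

def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) divided by the square of height (m)."""
    return weight_kg / (height_m ** 2)

# Hypothetical 95th-percentile BMI cut-offs keyed by (age in years, gender).
P95_CUTOFFS = {
    (10, "male"): 21.4,    # placeholder value
    (10, "female"): 21.8,  # placeholder value
}

def is_obese(weight_kg: float, height_m: float, age: int, gender: str) -> bool:
    """A child is classified as obese if their BMI is at or above the
    95th percentile for their age and gender."""
    cutoff = P95_CUTOFFS[(age, gender)]
    return bmi(weight_kg, height_m) >= cutoff

if __name__ == "__main__":
    # Example: a 10-year-old boy weighing 48 kg at 1.40 m tall.
    print(round(bmi(48, 1.40), 1))          # 24.5
    print(is_obese(48, 1.40, 10, "male"))   # True under the placeholder cut-off
```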

The aim of this article was to examine the recent Australian literature on childhood obesity management strategies.

Background

Causes of obesity

A myriad of causes of childhood obesity are well established in the literature. Family and culture influence a child’s eating habits, their level of physical activity and ultimately their weight status. [4,7,8] Parental attributes such as maternal obesity and dismissive or disengaged fathers also play a role. [9] Notably, maternal depression and inappropriate parenting styles appear to have little effect on obesity. [10] Children of lower socio-economic status (SES) are at greater risk of being obese. [9,11-13]

Culture and genetic inheritance also influence a child’s chance of being obese. [8] Evidence suggests that culture influences an individual’s beliefs regarding body size, food and exercise. [7,14] O’Dea (2008) found that Australian children of European and Asian descent had higher rates of obesity when compared with those of Pacific Islander or Middle Eastern heritage. [8] Interestingly, there is conflicting evidence as to whether being an Indigenous Australian is an independent risk factor for childhood obesity. [7,9]

A child’s nutritional knowledge has little impact on their weight. Several authors have shown that while obese and non-obese children have different eating styles, they possess a similar level of knowledge about food. [4,13] Children with a higher BMI had lower quality breakfasts and were more likely to omit meals in comparison to normal weight children. [4,7,13]

The environment in which a child lives might be expected to impact their weight status; however, existing literature suggests that the built environment has little influence over dietary intake, physical activity and ultimately weight status, [15,16] although research in this area remains limited.

Consequences of obesity

Obesity significantly impacts a child’s health, resulting in poorer physical and social outcomes. [4,17] Obese children are at greater risk of becoming obese in adulthood. [4,18] Venn et al. (2008) estimate that obese children are at a four- to nine-fold risk of becoming obese adults. [18] Furthermore, obese children have an increased risk of developing type 2 diabetes, sleep apnoea, fatty liver disease, arthritis and cardiovascular disease. [4,19]

An individual’s social health is detrimentally affected by childhood obesity. Obese children have significantly lower self-worth, body image and perceived social acceptance amongst their peers. [7,20,21] Indeed, overall social functioning is reduced in obese children. [17] Interestingly, some studies identify no difference in rates of mental illness or emotional functioning between obese and non-obese children. [12,17,22,23]

Method

Using Medline and PubMed, searches were undertaken with the following MeSH terms: child, obesity and Australia. Review and editorial publication types were excluded, as only original data were sought for analysis. Further limits to the search were: literature available in English, a focus on school-aged children (4 to 18 years), a BMI-based definition of obesity in the study population, relevance to the research question (management of childhood obesity), and recent publication. Recent literature was defined as articles published from 1 January 2005 until 31 July 2011. This restriction was placed in part due to resource constraints, but January 2005 was specifically chosen as it marked the introduction of several Australian government strategies to reduce childhood obesity. [5]

In total, 280 publications were identified in the PubMed and Medline searches. The abstracts of these articles were manually assessed by the investigator for relevance to the research question and the described inclusion and exclusion criteria. As a result of inappropriate topic focus, population, publication type, publication date or duplication, 265 articles were excluded. Ten articles were identified as pertinent via PubMed. Medline searches revealed five relevant articles, all of which were duplicates of those found in the PubMed search. Hence, ten publications were examined. Additionally, a search of relevant publications’ reference lists identified three further articles for analysis. Consequently, this paper reviews thirteen articles.

Publications included in this study were either randomised controlled trials or cross-sectional analyses. The papers collected data from a variety of sources, including children, parents, clinicians and simulated patients. Consequently, population sizes varied greatly throughout the literature.

Results

Much of the Australian literature on childhood weight management does not specifically focus on the obese; instead, it combines the outcomes of obese and overweight children, sometimes including normal weight children.

Thirteen intervention articles were identified in the literature: nine employed GP-mediated interventions, with the remainder using community-based, school-based or tertiary hospital-mediated obesity management.

General practitioner intervention

The National Health and Medical Research Council (NHMRC) guidelines recommend biannual anthropometric screening for children; however, many studies illustrate that few GPs regularly weigh and measure children. [24,25] Whilst Dettori et al. (2009) reported that 79% of GPs interviewed measured children’s weight and height, only half of their respondents regularly converted these figures to determine whether a child was obese. [26] A possible reason for the low rate of BMI calculation may be that many GPs find it difficult to initiate discussions about weight status in children. [24-27] A number of authors have identified that some GPs fear losing business, or alienating or offending their clients. [24,25,27]

There was wide variability in the tools GPs used to screen children, which may ultimately have led to incorrect weight classification. [24] Spurrier et al. (2006) investigated this further, identifying that GPs may use visual cues to identify normal weight children; however, using visual cues alone, GPs are not always able to distinguish an obese child from an overweight child, or an overweight child from a normal weight child. [28] Hence, GPs may fail to identify obese children if appropriate anthropometric testing is not performed.

There is mixed evidence regarding the willingness of GPs to manage obese children. McMeniman et al. (2007) identified that GPs felt there was a lack of clear management guidelines, with the majority of participants feeling they would not be able to successfully treat an obese child. [27] Some studies identified that GPs see their role as gatekeepers for allied health intervention. [24,25] Another study showed that GPs preferred shared care, in which they provided the primary support for obese children by offering advice on nutrition, weight and exercise, whilst also referring on to other health professionals such as nutritionists, dieticians and physicians. [11]

Other factors impeding GP-managed programs are time and financial constraints. The treatment of childhood obesity in general practice is time consuming. [11,26,27] Similarly, McMeniman et al. [27] highlighted that the majority of respondents (75%) felt there was not adequate financial incentive to identify and manage obese children.

Evidence suggests that providing education to GPs on identifying and managing obesity could be useful in building their confidence. [26] One publication found that over half of the GPs receiving education were better able to identify obese children. [26] Similarly, Gerner et al. (2010) illustrated, using simulated patients, that GPs felt they had improved their competence in the management of obese children. [29] In the Live, Eat and Play (LEAP) trial, patient outcomes at nine months were compared to GPs’ self-rated competence, simulated patient ratings and parent ratings of consultations. [29] Interestingly, simulated patient ratings were shown to be a good predictor of real patient outcomes, with higher simulated patient marks correlating with a larger drop in a child’s BMI. [29]

Unfortunately, no trials have illustrated an effective GP-led childhood obesity management strategy. The LEAP trial, a twelve week GP-mediated intervention focused on nutrition, physical exercise and the reduction of sedentary behaviour, failed to show any significant decrease in BMI in the intervention group compared with the control group. [30] Notably, the LEAP trial did not separate the data of obese and non-obese children. [30]

Further analysis of the LEAP trial illustrated that the program was expensive, with the cost to an intervention family being $4094 greater than that to a control family. [31] This is a significant burden on families, with an additional fiscal burden of $873 per family falling on the health sector. [31] Whilst these amounts are likely to be inflated by the small number of children involved, program delivery is costly for both families and the health care sector. [31]

Community-based programs

Literature describing community-based obesity reduction was sparse. Two publications were identified, both pertaining to the HICKUP trial. These articles illustrated that parent-centred dietary programs and child-focused exercise approaches can be efficacious in weight reduction in a population of children that includes the obese. [32,33] In this randomised controlled trial, children were divided into three groups: i) a parent-focused dietary program, ii) child-centred exercise, and iii) a combination of the two. [32,33] The dietary program focused on improving parenting skills to produce behavioural change in children, whilst the physical activity program involved improving children’s fundamental skills and competence. [32,33] A significant limitation of the study was that children were recruited through responses to advertising in school newsletters and GP practices, exposing the investigation to volunteer bias. Additionally, the outcome data in these studies failed to delineate obese children from overweight or normal weight children.

School-based programs

Evidence suggests that an education and exercise-based program can be implemented within a school system. [34] The Peralta et al. (2009) intervention involved a small sample of twelve to thirteen year old boys, who were either normal weight, overweight or obese, and were randomised to a control or intervention group. [34] The program’s curriculum focused on education as well as increasing physical activity. Education sessions were based on dietary awareness and goal setting.


Predicting falls in the elderly: do dual-task tests offer any added value? A systematic review

The issue of falls is a significant health concern in geriatric medicine and a major contributor to morbidity and mortality in those over 65 years of age. Gait and balance problems are responsible for up to a quarter of falls in the elderly. It is unclear whether dual-task assessments, which have become increasingly popular in recent years, have any added benefit over single-task assessments in predicting falls. A previous systematic review that included manuscripts published prior to 2006 could not reach a conclusion due to a lack of available data. Therefore, a systematic review was performed on all dual-task material published from 2006 to 2011 with a focus on fall prediction. The review included all studies published between 2006 and 2011 and available through the PubMed, EMBASE, PsycINFO, CINAHL and Cochrane Central Register of Controlled Trials databases that satisfied the inclusion and exclusion criteria utilised by the previous systematic review. A total of sixteen articles met the inclusion criteria and were analysed for qualitative and quantitative results. A majority of the studies demonstrated that poor performance during dual-task assessments was associated with a higher risk of falls in the elderly. Only three of the sixteen articles provided statistical data for comparison of single- and dual-task assessments. These studies provided insufficient data to demonstrate whether dual-task tests were superior to single-task tests in predicting falls in the elderly. Further head-to-head studies are required to determine whether dual-task assessments are superior to single-task assessments in their ability to predict future falls in the elderly.

Introduction

Many simple tasks of daily living such as standing, walking or rising from a chair can potentially lead to a fall. Each year one in three people over the age of 65 living at home will experience a fall, with five percent requiring hospitalisation. [1, 2] Gait and balance problems are responsible for 10-25% of falls in the elderly, only surpassed by ‘slips and trips,’ which account for 30-50%. [2] Appropriate clinical evaluation of identifiable gait and balance disturbances, such as lower limb weakness or gait disorders, has been proposed as an efficient and cost-effective practice which can prevent many of these falls. As such, fall prevention programs have placed a strong emphasis on determining a patient’s fall risk by assessing a variety of physiological characteristics. [2, 3]

Dual-task assessments have become increasingly popular in recent years, because they examine the relationship between cognitive function and attentional limitations, that is, a subject’s ability to divide their attention. [4] The accepted model for conducting such tests involves a primary gait or balance task (such as walking at normal pace) performed concurrently with a secondary cognitive or manual task (such as counting backwards). [4, 5] Divided attention whilst walking may manifest as subtle changes in posture, balance or gait. [5, 6] It is these changes that provide potentially clinically significant correlations, for example, detecting changes in balance and gait after an exercise intervention. [5, 6] However, it is unclear whether a patient’s performance during a dual-task assessment has any added benefit over a single-task assessment in predicting falls.

In 2008, Zijlstra et al. [7] produced a systematic review of the literature which attempted to evaluate whether dual-task balance assessments are more sensitive than single-task balance assessments in predicting falls. It included all studies published up to and including 2006, yet there was insufficient available data for a conclusion to be made. This was followed by a review article by Beauchet et al. [8] in 2009 that included additional studies published up to 2008. These authors concluded that changes in performance while dual-tasking were significantly associated with an increased risk of falling in older adults. The purpose of the present study was to determine, using recently published data, whether dual-task tests of balance and/or gait have any added benefit over single-task tests in predicting falls. A related outcome of the study was to gather data to either support or challenge the use of dual-task assessments in fall prevention programs.

A systematic review of all published material from 2006 to 2011 was performed, focusing on dual-task assessments in the elderly. Inclusion criteria were used to ensure only relevant articles reporting on fall predictions were selected. The method and results of included manuscripts were qualitatively and quantitatively analysed and compared.

Methods

Literature Search

A systematic literature search was performed to identify articles which investigated the relationship between falls in older people and balance/gait under single-task and dual-task conditions. The electronic databases searched were PubMed, EMBASE, PsycINFO, CINAHL and the Cochrane Central Register of Controlled Trials. The search strategy utilised by Zijlstra et al. [7] was carried out. Individual search strategies were tailored to each database, being adapted from the following, which was used in PubMed:

1. (gait OR walking OR locomotion OR musculoskeletal equilibrium OR posture)

2. (aged OR aged, 80 and over OR aging)

3. #1 AND #2

4. (cognition OR attention OR cognitive task(s) OR attention task(s) OR dual task(s) OR double task paradigm OR second task(s) OR secondary task(s))

5. #3 AND #4

6. #5 AND (humans)

Bold terms are MeSH (Medical Subject Headings) key terms.
The search was performed without language restrictions and results were filtered to produce all publications from 2006 to March 2011 (inclusive). To identify further studies, the author hand-searched reference lists of relevant articles, and searched the Scopus database to identify any newer articles which cited primary articles.
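For illustration only, a combined query of this form could be reproduced programmatically. The following minimal Python sketch uses Biopython's Entrez utilities; the free-text query string (an approximation of the combined PubMed strategy above), the e-mail address, the date filter and the retrieval limit are assumptions made for the example and are not part of the original search protocol.

from Bio import Entrez  # Biopython wrapper around the NCBI E-utilities

Entrez.email = "reviewer@example.org"  # hypothetical address; NCBI requires one to be set

# Approximation of the combined strategy (#6) as free text
query = (
    "(gait OR walking OR locomotion OR musculoskeletal equilibrium OR posture) "
    "AND (aged OR aging) "
    "AND (cognition OR attention OR cognitive task OR attention task "
    "OR dual task OR double task paradigm OR second task OR secondary task) "
    "AND humans[MeSH Terms]"
)

# Restrict to the review period (2006 to March 2011) by publication date
handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2006/01/01", maxdate="2011/03/31", retmax=5000)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records retrieved")
print(record["IdList"][:10])  # first ten PubMed identifiers

In practice the tailored strategies for EMBASE, PsycINFO, CINAHL and the Cochrane register would still need to be run separately through each database's own interface.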

Selection of papers

The process of selecting manuscripts is illustrated in Figure 1. Only articles with publication dates from 2006 onwards were included, as all relevant articles prior to this were already contained in the mini-review by Zijlstra et al. [7] Two independent reviewers then screened article titles for studies that employed a dual-task paradigm – specifically, a gait or balance task coupled with a cognitive or manual task – and included falls data as an outcome measure.

Article abstracts were then appraised to determine whether the dual-task assessment was used appropriately and within the scope of the present study; that is to: (1) predict future falls, or (2) differentiate between fallers and non-fallers based on retrospective data collection of falls. Studies were only considered if subjects’ fall status was determined by actual fall events – the fall definitions stated in individual articles were accepted. Studies were included if participants were aged 65 years and older. Articles which focused on adult participants with a specific medical condition were also included. Studies that reported no results for dual-task assessments were included for descriptive purposes only. Interventional studies which used the dual-task paradigm to detect changes after an intervention were excluded, as were case studies, review articles or studies that used subjective scoring systems to assess dual-task performance.

Analysis of relevant papers

Information on the following aspects was extracted from each article: study design (retrospective or prospective collection of falls), number of subjects (including gender proportion), number of falls required to be classified a ‘faller’, tasks performed and the corresponding measurements used to report outcome, task order and follow up period if appropriate.

Where applicable, each article was also assessed for values and results which allowed comparison between the single and dual-task assessments and their respective falls prediction. The appropriate statistical measures required for such a comparison include sensitivity, specificity, positive and negative predictive values, odds ratios or likelihood ratios. [9] The dual-task cost, or difference in performance between the single and dual-task, was also considered.
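For reference, the standard definitions of these measures, expressed in terms of true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN), are sketched below. The percentage form of the dual-task cost is one common convention and is shown here as an assumption, since the included studies did not share a single definition.

\text{Sensitivity} = \frac{TP}{TP+FN}, \qquad \text{Specificity} = \frac{TN}{TN+FP}

\text{PPV} = \frac{TP}{TP+FP}, \qquad \text{NPV} = \frac{TN}{TN+FN}, \qquad \text{OR} = \frac{TP \times TN}{FP \times FN}

\text{Dual-task cost (\%)} = \frac{\text{single-task performance} - \text{dual-task performance}}{\text{single-task performance}} \times 100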

Results

The database search of PubMed, EMBASE, PsycINFO, CINAHL and Cochrane produced 1154, 101, 468, 502 and 84 references respectively. As illustrated in Figure 1, filtering for publications between 2006 and 2011 almost halved the results to a total of 1215 references. A further 1082 studies were omitted because they were duplicates, were not dual-task studies, or did not report falls as an outcome.

The 133 articles which remained reported on falls using a dual-task approach, that is, a primary gait or balance task paired with a secondary cognitive task. Final screening was performed to ensure that the mean age of subjects was at least 65, as well as to remove case studies, interventional studies and review articles. Sixteen studies met the inclusion criteria, nine retrospective and seven prospective fall studies, summarised by study design in Tables 1A and 1B respectively.
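The screening flow implied by these figures (the overall total of identified records is not stated explicitly in the text) can be summarised as:

1154 + 101 + 468 + 502 + 84 = 2309 \;\text{references identified} \;\rightarrow\; 1215 \;\text{(date filter)} \;\rightarrow\; 1215 - 1082 = 133 \;\rightarrow\; 16 \;\text{(final screening)}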

The number of subjects ranged from 24 to 1038, [10, 11] with half the studies having a sample size of 100 subjects or more. [11-18] Females were typically the dominant participants, comprising over 70% of the subject cohort on nine occasions. [10, 13, 14, 16-21] Eight studies investigated community-dwelling older adults, [10-12, 14, 15, 19, 20, 22] four examined older adults living in senior housing/residential facilities [13, 16-18] and one focused on elderly hospital inpatients. [21] A further three studies exclusively investigated subjects with defined pathologies, specifically progressive supranuclear palsy, [23] stroke [24] and acute brain injury. [25]

Among the nine retrospective studies, the fall rate ranged from 10.0% to 54.2%. [12, 25] Fall rates were determined by actual fall events; five studies required subjects to self-report the number of falls experienced over the preceding twelve months, [10, 12, 20, 23, 24] three studies asked subjects to self-report over the previous six months [13, 22, 25] and one study utilised a history-taking approach, with subjects interviewed independently by two separate clinicians. [19] Classification of subjects as a ‘faller’ varied slightly, with five studies reporting on all fallers (i.e. ≥ 1 fall), [10, 19, 20, 22, 25] three reporting only on recurrent fallers (i.e. ≥ 2 falls), [12, 13, 23] and one which did not specify. [24]

The fall rate for the seven prospective studies ranged from 21.3% to 50.0%. [15, 21] The number of falls per subject was collected during the follow-up period, which was quite uniform at twelve months, [11, 14, 16-18, 21] except for one study which continued data collection for 24 months. [15] The primary outcome measure during the follow-up period was fall rate, based on either the first fall [16-18, 21] or incidence of falls. [11, 14, 15]

The nature of the primary balance/gait task varied between studies. Five studies investigated more than one type of balance/gait task. [10, 12, 19, 20, 24] Of the sixteen studies, ten required subjects to walk along a straight walkway, nine at normal pace [10, 11, 14, 16-19, 21, 24] and one at fast pace. [23] Three studies incorporated a turn along the walkway [15, 22, 25] and a further study comprised both a straight walk and a separate walk-and-turn. [12] The remaining two studies did not employ a walking task of any kind, but rather utilised a voluntary step execution test [13] or a Timed Up & Go test and a one-leg balance test. [20]

The type of cognitive/secondary task also varied between studies. All but three studies employed a cognitive task; one used a manual task [19] and two used both a cognitive and a manual task. [11, 14] Cognitive tasks differed greatly and included serial subtractions, [14, 15, 20, 22, 23] backward counting aloud, [11, 16-18, 21] memory tasks, [24, 25] Stroop tasks [10, 13] and visuo-spatial tasks. [12] The single and dual-tasks were performed in a random order in six of the sixteen studies. [10, 12, 16-18, 20]

Thirteen studies recorded walking time or gait parameters as a major outcome. [10-12, 14-17, 19, 21-25] Of all studies, eleven reported that dual-task performance was associated with the occurrence of falls. A further two studies came to the same conclusion, but only in the elderly with high functional capacity [11] or during specific secondary tasks. [14] One prospective [17] and two retrospective studies [20, 25] found no significant association between dual-task performance and falls.

As described in Table 2, ten studies reported figures on the predictive ability of the single and/or dual-tasks; [11-18, 21, 23] some data were obtained from the systematic review by Beauchet et al. [8] The remaining six studies provided no fall prediction data. In predicting falls, dual-task tests had a sensitivity of 70% or greater, except in two studies which reported values of 64.8% [17] and 16.7%. [16] Specificity ranged from 57.1% to 94.3%. [16, 17] Positive predictive values ranged from 38.0% to 86.7%, [17, 23] and negative predictive values from 54.5% to 93.2%. [21, 23] Two studies derived predictive ability from the dual-task ‘cost’, [11, 14] which was defined as the difference in performance between the single and dual-task test.

Only three studies provided statistical measures for the fall prediction of the single task and the dual-task individually. [16, 17, 21] Increased walking time during single and dual-task conditions was similarly associated with risk of falling, OR = 1.1 (95% CI, 1.0-1.2) and OR = 1.1 (95% CI, 0.9-1.1), respectively. [17] Variation in stride time also predicted falls, OR = 13.3 (95% CI, 1.6-113.6) and OR = 8.6 (95% CI, 1.9-39.6) in the single and dual-task conditions respectively. [21] Walking speed predicted recurrent falls during single and dual-tasks, OR = 0.96 (95% CI, 0.94-0.99) and OR = 0.60 (95% CI, 0.41-0.85), respectively. [16] The latter study reported that a decrease in walking speed increased the risk of recurrent falls by 1.67 in the dual-task test compared to 1.04 during the single-task. All values given in these three studies, for both single and dual-task tests, were interpreted as significant in predicting falls by their respective authors.
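The 1.04 and 1.67 figures quoted for decreasing walking speed are consistent with taking the reciprocal of the per-unit odds ratios reported for increasing walking speed; this interpretation is an assumption, as the derivation is not stated explicitly in the source data.

\frac{1}{\text{OR}_{\text{single}}} = \frac{1}{0.96} \approx 1.04, \qquad \frac{1}{\text{OR}_{\text{dual}}} = \frac{1}{0.60} \approx 1.67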

Discussion

Only three prospective studies directly compared the individual predictive values of the single and dual-task tests. The first such study concluded that the dual-task test was in fact equivalent to the single-task test in predicting falls. [17] This particular study also reported the lowest positive predictive value of all dual-task tests at 38%. The second study [21] also reported similar predictive values for the single and dual-task assessments, as well as a relatively low positive predictive value of 53.9%. Given that all other studies reported higher predictive values, it may be postulated that at the very least, dual-task tests are comparable to single-task tests in predicting falls. Furthermore, the two studies focused on subjects from senior housing facilities and hospital inpatients (187 and 57 participants respectively), and therefore results may not represent all elderly community-dwelling individuals. The third study [16] concluded that subjects who walked slower during the single-task assessment would be 1.04 times more likely to experience recurrent falls than subjects who walked faster. However, after a poor performance in the dual-task assessment, their risk may be increased to 1.67. This suggests that the dual-task assessment can offer a more accurate figure on risk of falling. Again, participants tested in this study were recruited from senior housing facilities, and thus results may not be directly applicable to the community-dwelling older adult.

Eight studies focused on community-dwelling participants, and all but one [20] suggested that dual-task performance was associated with falls. Evidence that dual-task assessments may be more suitable for fall prediction in the elderly who are healthier and/or living in the community, as opposed to those with poorer health, is provided by Yamada et al. [11] Participants were subdivided into groups by the results of a Timed Up & Go test, separating the ‘frail’ from the ‘robust’. It was found that the dual-task assessments were associated with falls only in groups with a higher functional capacity. This intra-cohort variability may account, at least in part, for why three studies included in this review concluded that there was no benefit in performing dual-task assessments. [17, 20, 25] These findings conflicted with the remaining thirteen studies and may be explained by one or more of several possible reasons: (1) the heterogeneity of the studies, (2) the non-standardised application of the dual-task paradigm, or (3) the hypothesis that dual-task assessments are more applicable to specific subpopulations within the generalised group of ‘older adults’, or further, that certain primary and secondary task combinations must be used to produce favourable results.

The heterogeneity among the identified studies played a major role in limiting the scope of analysis and the potential conclusions derived from this review. For example, the dichotomisation of the community-dwelling participants into frail versus robust [11] illustrates the variability within a supposedly homogenous patient population. Another contributor to the heterogeneity of the studies is the broad range of cognitive or secondary tasks used, which varied between manual tasks [19] and simple or complex cognitive tasks. [10-21, 23-25] The purpose of the secondary task is to reduce attention allocated to the primary task. [5] Since the studies varied in the secondary task(s) used, each with a slightly different level of complexity, the attentional resources redirected away from the primary balance or gait task would also vary. Hence, the fall-predictive ability of each study is expected to differ, and may be poorer in studies employing a secondary task which is not sufficiently challenging. [26] One important outcome from this review has been to highlight the lack of a standardised protocol for performing dual-task assessments. There is currently no identified combination of a primary and secondary task which has proven superiority in predicting falls. Variation in the task combinations, as well as in the participant instructions given prior to the completion of tasks, is a possible explanation for the disparity between results. To improve result consistency and comparability in this emerging area of research, [6] dual-task assessments should comprise a standardised primary and secondary task.

Sixteen studies were deemed appropriate for inclusion in this systematic review. Despite a thorough search strategy, it is possible that some relevant studies may have been overlooked. Based on the limited data from 2006 to 2011, the exact benefit of dual-task assessments in predicting falls compared to single-task assessments remains uncertain. For a more comprehensive verdict, further analysis is required to combine the previous systematic reviews, [7, 8] which incorporate data from before 2006. Future dual-task studies should focus on fall prediction and report predictive values for both the single-task and the dual-task individually in order to allow comparisons to be made. Such studies should also incorporate large sample sizes, and assess the living conditions and health status of participants. Emphasis on the predictive value of dual-task assessments requires these studies to be prospective in design, as prospective collection of fall data is considered the gold standard. [27]

Conclusion

Due to the heterogeneous nature of the study population, the limited statistical analysis and a lack of direct comparison between single-task and dual-task assessments, the question of whether dual-task assessments are superior to single-task assessments for fall prediction remains unanswered. This systematic review has highlighted significant variability in study population and design that should be taken into account when conducting further research. Standardisation of dual-task assessment protocols and further investigation and characterisation of sub-populations where dual-task assessments may offer particular benefit are suggested. Future research could focus on different task combinations in order to identify which permutations provide the greatest predictive power. Translation into routine clinical practice will require development of reproducible dual-task assessments that can be performed easily on older individuals and have validated accuracy in predicting future falls. Ultimately, incorporation of dual-task assessments into clinical fall prevention programs should aim to provide a sensitive and specific measure of effectiveness and to reduce the incidence, morbidity and mortality associated with falls.

Acknowledgements

The author would like to thank Professor Stephen Lord and Doctor Jasmine Menant from Neuroscience Research Australia for their expertise and assistance in producing this literature review.

Conflict of interest

None declared.

Contact

M Sarofim: mina@student.unsw.edu.au


The therapeutic potentials of cannabis in the treatment of neuropathic pain and issues surrounding its dependence

Cannabis is a promising therapeutic agent, which may be particularly beneficial in providing adequate analgesia to patients with neuropathic pain intractable to typical pharmacotherapy. Cannabinoids are the lipid-soluble compounds that mediate the analgesic effects associated with cannabis by interacting with the endogenous cannabinoid receptors CB1 and CB2, which are distributed along neurons associated with pain transmission. Of the 60 or so different cannabinoids found in cannabis plants, delta-9-tetrahydrocannabinol (THC) and cannabidiol are the most important with regard to analgesic properties. Whilst cannabinoids are effective in providing diminished pain responses, their therapeutic use is limited by psychotropic side effects mediated via CB1, which may lead to cannabis dependence. Cannabinoid ligands also interact with glycine receptors and selectively with CB2 receptors, and act synergistically with opioids and non-steroidal anti-inflammatory drugs (NSAIDs) to attenuate pain signals; these interactions may be of therapeutic potential because they lack psychotropic effects. Clinical trials of cannabinoids in neuropathic pain have shown efficacy in providing analgesia; however, the small number of participants involved in these trials has greatly limited their significance. Although the medicinal use of cannabis is legal in Canada and some parts of the United States, its use as a therapeutic agent in Australia is not permitted. This paper will review the role cannabinoids play in providing analgesia, the pharmacokinetics associated with various routes of administration and the dependence issues that may arise from its use.

Introduction

Compounds derived from plants have long been found to be beneficial, and now contribute to many of the world’s modern medicines. Delta-9-tetrahydrocannabinol (THC), the main psychoactive cannabinoid derived from cannabis plants, mediates its analgesic effects by acting at both central and peripheral cannabinoid receptors. [1] The analgesic properties of cannabis were first observed by Ernest Dixon in 1899, who discovered that dogs failed to react to pin pricks following the inhalation of cannabis smoke. [2] Since that time, there has been extensive research into the analgesic properties of cannabis, including whole plant and synthetic cannabinoid studies. [3-5]

Although the use of medicinal cannabis is legal in Canada and parts of the United States, every Australian jurisdiction currently prohibits its use.[6] Despite this, Australians lead the world in the illegal use of cannabis for both medicinal and recreational reasons. [7]

Although the analgesic properties of cannabis could be beneficial in treating neuropathic pain, the use of cannabis in Australia is a controversial, widely debated subject. The issue of dependence to cannabis arising from medicinal cannabis use is of concern to both medical and legal authorities. This review aims to discuss the pharmacology of cannabinoids as it relates to analgesia, and also the dependence issues that may arise from the use of cannabis.

Medicinal cannabis can be of particular benefit in the treatment of neuropathic pain that is intractable to the typical agents used, such as tricyclic antidepressants, anticonvulsants and opioids. [3,8] Neuropathic pain is caused by disease affecting the somatosensory nervous system and produces pain that is unrelated to peripheral tissue injury. Treatment options are limited. The prevalence of chronic pain in Australia has been estimated at 20% of the population, [9] with neuropathic pain estimated to affect up to 7% of the population. [10]

The role of cannabinoids in analgesia

Active compounds found in cannabis

Cannabis contains over 60 cannabinoids, with THC being the principal mediator of analgesia and the only psychoactive constituent found in cannabis plants. [11] Another cannabinoid, cannabidiol, also has analgesic properties; however, instead of interacting with cannabinoid receptors, its analgesic effects are attributed to inhibition of anandamide degradation. [11] Anandamide is the most abundant endogenous cannabinoid in the CNS and acts as an agonist at cannabinoid receptors. Inhibiting the breakdown of anandamide prolongs its time in the synapse and perpetuates its analgesic effects.

Cannabinoid and Vanilloid receptors

Distributed throughout the nociceptive pathway, cannabinoid receptors are a potential target for the administration of exogenous cannabinoids to suppress pain. Two known types of cannabinoid receptors, CB1 and CB2, are involved in pain transmission. [12] The CB1 cannabinoid receptor is highly expressed in the CNS as well as in peripheral tissues, and is responsible for the psychotropic effects produced by cannabis. There is debate regarding the location of the CB2 cannabinoid receptor, previously found to be largely distributed in peripheral immune cells. [12-13] Recent studies, however, suggest that CB2 receptors may also be found on neurons. [12-13] The CB2 receptor is a metabotropic G-protein-coupled receptor, negatively coupled to adenylate cyclase and positively coupled to mitogen-activated protein kinase. [14] The cannabinoid receptors are also coupled to pre-synaptic voltage-gated calcium channel inhibition and inward-rectifying potassium channel activation, thus depressing neuronal excitability, eliciting an inhibitory effect on neurotransmitter release and subsequently decreasing pain transmission. [14]

Certain cannabinoids have targets other than cannabinoid receptors through which they mediate their analgesic properties. Cannabidiol can act at vanilloid receptors, where capsaicin is active, to produce analgesia. [15] Recent studies have found that cannabinoids administered to mice act synergistically with the response to glycine, an inhibitory neurotransmitter, which may contribute to their analgesic effects. Analgesia was absent in mice that lacked glycine receptors, but not in those lacking cannabinoid receptors, indicating an important role for glycine in the analgesic effect of cannabis. [16] In this study, the cannabinoid compound was modified to enhance binding to glycine receptors and diminish binding to cannabinoid receptors, an approach that may be of therapeutic potential for achieving analgesia without psychotropic side effects. [16]

Mechanism of action in producing analgesia and side effects

Cannabinoid receptors also play an important role in the descending inhibitory pathways via the midbrain periaqueductal grey (PAG) and the rostral ventromedial medulla (RVM). [17] Pain signals are conveyed by primary afferent nociceptive fibres to ascending pain pathways that synapse in the dorsal horn of the spinal cord. The descending inhibitory pathway modulates pain transmission in the spinal cord and medullary dorsal horn via the PAG and RVM before noxious stimuli reach a supraspinal level and are interpreted as pain. [17] Cannabinoids activate the descending inhibitory pathway via gamma-aminobutyric acid (GABA)-mediated disinhibition, thus decreasing GABAergic inhibition and enhancing impulses responsible for the inhibition of pain; this is similar to opioid-mediated analgesia. [17]

Cannabinoid receptors, in particular CB1, are distributed throughout the cortex, hippocampus, amygdala, basal ganglia outflow tracts and cerebellum, which corresponds to the capacity of cannabis to produce motor and cognitive impairment. [18] These deleterious side effects limit the therapeutic use of cannabinoids as analgesics. Since ligands binding to CB1 receptors are responsible for mediating the psychotropic effects of cannabis, studies have been undertaken on the effectiveness of CB2 agonists; these were found to attenuate neuropathic pain without producing CB1-mediated CNS side effects. The discovery of a suitable CB2 agonist may therefore be of therapeutic potential. [19]

Synergism with commonly used analgesics

Cannabinoids also act synergistically with non-steroidal anti-inflammatory drugs (NSAIDs) and opioids to produce analgesia; cannabis could thus be of benefit as an adjuvant to typical analgesics. [20] A major central target of NSAIDs and opioids is the descending inhibitory pathway. [20] The analgesia produced by NSAIDs through their action on the descending inhibitory pathway requires simultaneous activation of the CB1 cannabinoid receptor. In the presence of an opioid antagonist, cannabinoids remain effective analgesics; whilst cannabinoids do not act via opioid receptors, cannabinoids and opioids show synergistic activity. [20] Similarly, Telleria-Diaz et al. reported that the analgesic effects of non-opioid analgesics, primarily indomethacin, in the spinal cord can be prevented by a CB1 receptor antagonist, thus highlighting synergism between the two agents. [21] Although no controlled studies in pain management have used cannabinoids with opioids, anecdotal evidence suggests synergistic benefits in analgesia, particularly in patients with neuropathic pain. [20] Whilst the interaction between opioids, NSAIDs and cannabinoids is poorly understood, numerous studies suggest that they act in a synergistic manner in the PAG and RVM via GABA-mediated disinhibition to enhance the descending flow of impulses that inhibit pain transmission. [20]

Route of Administration

Clinical trials of cannabis as an analgesic in neuropathic pain have shown cannabis to reduce the intensity of pain. [5,22] The most common route of administration of medicinal cannabis is inhalation via smoking. Two randomised clinical trials assessing smoked cannabis showed that patients with HIV-associated neuropathic pain achieved significantly reduced pain intensity (34% and 46%) compared to placebo (17% and 18%, respectively). [5,22] One of the studies enrolled participants whose pain was intractable to first-line analgesics used in neuropathic pain, such as tricyclic antidepressants and anticonvulsants. [22] The number needed to treat (NNT = 3.5) was comparable to that of agents already in use (gabapentin: NNT = 3.8; lamotrigine: NNT = 5.4). [22] All of the studies undertaken on smoked cannabis have been short-term studies and do not address the long-term risks of cannabis smoking. An important benefit associated with smoking cannabis is that its pharmacokinetic profile is superior to that of orally ingested cannabinoids. [23] After smoking one cannabis cigarette, peak plasma levels of THC are reached within 3-10 minutes and, owing to its lipid solubility, levels quickly decrease as THC is rapidly distributed throughout the tissues. [23] While the bioavailability of inhaled THC is much higher than that of oral preparations, which undergo first pass metabolism, the obvious harmful effects associated with smoking have warranted the study of other means of inhalation, such as vapourisation. In medicinal cannabis therapy, vapourisation may be less harmful than smoking as the cannabis is heated below the point of combustion at which carcinogens are formed. [24] A recent study found that the transition from smoking to vapourising in cannabis smokers improved lung function measurements and, following the study, participants refused to participate in a reverse design in which they would return to smoking. [24]
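As a rough check on the reported NNT of 3.5, and assuming the quoted response rates are the proportions from which it was derived (an assumption, as the original calculation is not shown here), the number needed to treat follows from the absolute risk reduction:

\text{NNT} = \frac{1}{\text{ARR}} = \frac{1}{0.46 - 0.18} \approx 3.6

which is consistent with the reported value of 3.5.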

Studies undertaken on the efficacy of oro-mucosal cannabinoid preparations (Sativex) showed a 30% reduction in pain as opposed to placebo; the NNT was 8.6. [4] Studies comparing oral cannabinoid preparations (Nabilone) to dihydrocodeine in neuropathic pain found that dihydrocodeine was a more effective analgesic. [25] The effects of THC from ingested cannabinoids lasted for 4-12 hours, with a peak plasma concentration at 2-3 hours. [26] The effects of oral cannabinoids were variable due to first pass metabolism, whereby significant amounts of cannabinoids are metabolised by cytochrome P450 mixed-function oxidases, mainly CYP 2C9. [26] First pass metabolism is very high and the bioavailability of THC is only 6% for ingested cannabis, as opposed to 20% for inhaled cannabis. [26] The elimination of cannabinoids occurs via the faeces (65%) and urine (25%), with a clinical study showing that 90% of the total dose was excreted after five days. [26]

The issue of cannabis dependence

One of the barriers to the use of medicinal cannabis is the controversy regarding cannabis dependence and the adverse effects associated with chronic use. Cannabis dependence is a highly controversial but important topic, as dependence may increase the risk of adverse effects associated with chronic use. [27] Adverse effects resulting from long-term use of cannabis include short-term memory impairment, mental health problems and, if smoked, respiratory diseases. [28] Some authors report that cannabis dependence and subsequent adverse effects upon cessation are only observed in non-medical cannabis users, while other authors report that dependence is an issue for all cannabis users, whether use is for medicinal purposes or not. An Australian study assessing cannabis use and dependence found that one in 50 Australians had a DSM-IV cannabis use disorder, predominantly cannabis dependence. [27] It also found that cannabis dependence was the third most common lifetime substance dependence diagnosis, following tobacco and alcohol dependence. [27] Cannabis dependence can develop; however, the risk factors for dependence come predominantly from studies that involve recreational users, as opposed to medicinal users under medical supervision. [29]

A diagnosis of cannabis dependence, according to DSM-IV, is made when three of the following seven criteria are met within the last 12 months: tolerance; withdrawal symptoms; cannabis used in larger amounts or for a longer period than intended; persistent desire or unsuccessful efforts to reduce or cease use; a disproportionate amount of time spent obtaining, using and recovering from use; social, recreational or occupational activities were reduced or given up due to cannabis use; and use continued despite knowledge of physical or psychological problems induced by cannabis. [29] Unfortunately, understanding of cannabis dependence arising from medicinal use is limited due to the lack of studies surrounding cannabis dependence in the context of medicinal use. Behavioural therapies may be of use; however, their efficacy is variable. [30] A recent clinical trial indicated that orally-administered THC was effective in alleviating cannabis withdrawals, which is analogous to other well-established agonist therapies including nicotine replacement and methadone. [30]

The pharmacokinetic profiles also affect cannabis dependence. Studies suggest that the risk of dependence seems to be marginally greater with the oral use of isolated THC than with the oral use of combined THC-cannabidiol. [31] This is important because hundreds of cannabinoids can be found in whole cannabis plants, and cannabidiol may counteract some of the adverse effects of THC; however, more studies are required to support this claim. [31]

The risk of cannabis dependence in the context of long-term and supervised medical use is not known. [31] However, some authors believe that the pharmacokinetic profiles of preparations used for medicinal purposes differ from those used recreationally, and that the risks of dependence and chronic adverse effects therefore differ greatly between the two. [32]

Conclusion

Cannabis appears to be an effective analgesic and provides an alternative to the analgesic pharmacotherapies currently in use for the treatment of neuropathic pain. Cannabis may be of particular use in neuropathic pain that is intractable to other pharmacotherapy. The issue of dependence and of adverse effects arising from medicinal cannabis use, including short-term memory impairment, mental health problems and, if smoked, respiratory diseases, is a highly debated topic and more research needs to be undertaken. The ability of cannabinoids to modulate pain transmission by enhancing the activity of descending inhibitory pathways and acting as a synergist to opioids and NSAIDs is important, as it may decrease the therapeutic doses of opioids and NSAIDs required, thus decreasing the likelihood of side effects. The possibility of a cannabinoid-derived compound with analgesic properties free of psychotropic effects is appealing, and its discovery could potentially lead to a less controversial and more suitable analgesic in the future.

Conflict of interest

None declared.

Correspondence

S Sargent: stephaniesargent@mail.com



A biological explanation for depression: The role of interleukin-6 in the aetiology and pathogenesis of depression and its clinical implications

Depression is one of the most common health problems addressed by general practitioners in Australia. It is well known that biological, psychosocial and environmental factors play a role in the aetiology of depression. Research into the possible biological mechanisms of depression has identified interleukin-6 (IL-6) as a potential biological correlate of depressive behaviour, with proposed contributions to the aetiology and pathogenesis of depression. Interleukin-6 is a key proinflammatory cytokine involved in the acute phase of the immune response and a potent activator of the hypothalamic-pituitary-adrenal axis. Patients with depression have higher than average concentrations of IL-6 compared to non-depressed controls, and a dose-response correlation may exist between circulating IL-6 concentration and the degree of depressive symptoms. Based on these insights, the ‘cytokine theory of depression’ proposes that proinflammatory cytokines, such as IL-6, act as neuromodulators and may mediate some of the behavioural and neurochemical features of depression. Longitudinal and case-control studies across a wide variety of patient cohorts, disease states and clinical settings provide evidence for a bidirectional relationship between IL-6 and depression. Thus IL-6 represents a potential biological intermediary and therapeutic target for the treatment of depression. Recognition of the strong biological contribution to the aetiology and pathogenesis of depression may help doctors to identify individuals at risk and implement appropriate measures, which could improve the patient’s quality of life and reduce disease burden.

Introduction

Our understanding of the immune system has grown exponentially within the last century, and more questions are raised with each new development. Over the past few decades, research has emerged to suggest that the immune system may be responsible for more than just fighting everyday pathogens. The term ‘psychoneuroimmunology’ was first coined by Dr Robert Ader and his colleagues in 1975 as a conceptual framework to encompass the emerging interactions between the immune system, the nervous system and psychological functioning. Cytokines have since been found to be important mediators of this relationship. [1] There is considerable research supporting the hypothesis that proinflammatory cytokines, in particular interleukin-6 (IL-6), play a key role in the aetiology and pathophysiology of depression. [1-5] While both positive and negative results have been reported in individual studies, a recent meta-analysis supports the association between depression and circulating IL-6 concentration. [6] This review will explore the impact of depression in Australia, the role of IL-6, the proposed links to depression and the clinical implications of these findings.

Depression in Australia and its diagnosis

Depression belongs to a group of affective disorders and is one of the most prevalent mental illnesses in Australia. [7] It is responsible for one of the highest disease burdens in Australia, closely following cancers and cardiovascular diseases. [7] Most of the burden of mental illness, measured as disability adjusted life years (DALYs), is due to years of life lost through disability (YLD) as opposed to years of life lost to death (YLL). This makes mental disorders the leading contributor (23%) to the non-fatal burden of disease in Australia. [7] Specific populations, including patients with chronic diseases such as diabetes, cancer, cardiovascular disease and end-stage kidney disease, [1,3,4,10] are particularly vulnerable to this form of mental illness. [8,9] The accurate diagnosis of depression in these patients can be difficult due to the overlap between symptoms inherent to the disease or its treatment and the diagnostic criteria for major depression. [10-12] Nevertheless, accurate diagnosis and treatment of depression is essential and can result in real gains in quality of life for patients with otherwise incurable and progressive disease. [7] Recognising the high prevalence and potential biological underpinnings of depression in patients with chronic disease is an important step in deciding upon appropriate diagnosis and treatment strategies.

Role of IL-6 in the body

Cytokines are intercellular signalling polypeptides produced by activated cells of the immune system. Their main function is to coordinate immune responses; however, they also play a key role in providing information regarding immune activity to the brain and neuroendocrine system. [13] Interleukin-6 is a proinflammatory cytokine primarily secreted by macrophages in response to pathogens. [14] Along with interleukin-1 (IL-1) and tumour necrosis factor-alpha (TNF-α), IL-6 plays a major role in fever induction and initiation of the acute-phase response. [14] The latter response involves a shi


Seasonal influenza vaccination in antenatal women: Views of health care workers and barriers in the delivery of the vaccine

Background: Pregnant women are at an increased risk of developing influenza. The National Health and Medical Research Council recommends seasonal influenza vaccination for all pregnant women who will be in their second or third trimester during the influenza season. The aim of this review is to explore the views of health care workers regarding seasonal influenza vaccination in antenatal women and to describe the barriers in the delivery of the vaccine. Methods: A literature search was conducted using MEDLINE for the terms: “influenza,” “pregnancy,” “antenatal,” “vaccinations,” “recommendations,” “attitudes,” “knowledge” and “opinions”. The review describes findings of publications concerning the inactivated influenza vaccination only, which has been proven safe and is widely recommended. Results: No studies have addressed the knowledge and attitudes of Australian primary health care providers towards influenza vaccination, despite their essential role in immunisations in Australia. Overseas studies indicate that the factors contributing to low vaccination rates are: 1) a lack of general knowledge of influenza and its prevention amongst health care workers (HCWs); 2) variable opinions and attitudes regarding the vaccine; 3) a lack of awareness of the national guidelines; and 4) a lack of discussion of the vaccine by the HCW. Lack of maternal knowledge regarding the safety of the vaccine and the cost-burden of the vaccine are significant barriers to the uptake of the vaccination. Conclusion: Insufficient attention has been given to the topic of influenza vaccination in pregnancy. Significant efforts are required in Australia to obtain data about the rates of influenza vaccination of pregnant women.

Introduction

Seasonal influenza results in annual epidemics of respiratory diseases. Influenza epidemics and pandemics increase hospitalisation rates and mortality, particularly among the elderly and high-risk patients with underlying conditions. [1-3] All pregnant women are at an increased risk of developing influenza due to progressive suppression of Th1-cell-mediated immunity and other physiological changes, with morbidity peaking towards the end of pregnancy. [4-7]

Annual influenza vaccination is the most effective method for preventing influenza virus infection and its complications. [8] Trivalent inactivated influenza vaccine (TIV) has been proven safe and is recommended for persons aged ≥6 months, including those with high-risk conditions such as pregnancy. [8-10] A randomised controlled study in Bangladesh demonstrated that TIV administered in the third trimester of pregnancy resulted in reduced maternal respiratory illness and reduced infant influenza infection. [11, 12] Another randomised controlled trial has shown that influenza immunisation of pregnant women reduced influenza-like illness by more than 30% in both the mothers and the infants, and reduced laboratory-proven influenza infections in 0- to 6-month-old infants by 63%. [13]

The current Australian Immunisation Guidelines recommend routine administration of influenza vaccination for all pregnant women who will be in the second or third trimester during the influenza season, including those in the first trimester at the time of vaccination. [4,14,15] The seasonal influenza vaccination has been available free of charge to all pregnant women in Australia since 2010. [4] However, the Royal Australian and New Zealand College of Obstetricians and Gynaecologists (RANZCOG) statement on ‘Pre-pregnancy Counselling and routine Antenatal Assessment in the absence of pregnancy Complications’ does not explicitly mention routine delivery of influenza vaccination to healthy pregnant women. [16] RANZCOG recently published a college statement on swine flu vaccination during pregnancy, advising that pregnant women without complications or a recent travel history should weigh the risk-benefit ratio before deciding whether to take up the H1N1 influenza immunisation. [17] It is therefore evident that there is conflicting advice in Australia about the routine delivery of influenza vaccination to healthy pregnant women. In contrast, a firm recommendation for routine influenza vaccination of pregnant women was established in 2007 by the National Advisory Committee on Immunization (NACI) in Canada, with minimal conflict from the Society of Obstetricians and Gynaecologists of Canada (SOGC). [6] Following the 1957 influenza pandemic, the rate of influenza immunisation increased significantly, with more than 100,000 women receiving the vaccination annually in the United States between 1959 and 1965. [8] Since 2004, the American Advisory Committee on Immunization Practices (ACIP) has recommended influenza vaccination for all pregnant women, at any stage of gestation. [9] This is supported by the American College of Obstetricians and Gynaecologists’ Committee on Obstetric Practice. [18]

A recent literature review performed by Skowronski et al. (2009) found that TIV is warranted to protect women against influenza-related hospitalisation during the second half of normal pregnancy, but evidence is otherwise insufficient to recommend routine TIV as the standard of practice for all healthy women beginning in early pregnancy. [6] Similarly, another review looked at the evidence for the risks of influenza and the risks and benefits of seasonal influenza vaccination in pregnancy and concluded that data on influenza vaccine safety in pregnancy is inadequate. [19] However, based on the available literature, there was no evidence of serious side effects in women or their infants, including no indication of harm from vaccination in the first trimester. [19]

We aim to review the literature published on the delivery and uptake of influenza vaccination during pregnancy and to identify the reasons for low adherence to guidelines. The review will improve our understanding of how influenza vaccination is perceived by health care providers and by pregnant women.

Evidence of health care provider’s attitude, knowledge and opinions

Several published studies have revealed data indicating deficits in health care providers’ knowledge of the significance of the vaccine and of the national guidelines, suggesting a low rate of vaccine recommendation and uptake by pregnant women. [20] A research project in 2006 performed a cross-sectional study of the knowledge and attitudes towards influenza vaccination in pregnancy amongst all levels of health care workers (HCWs) working at the Department for Health of Women and Children at the University of Milan, Italy. [20] A strength of this study was that it included 740 HCWs: 48.4% worked in obstetrics/gynaecology, 17.6% in neonatology and 34% in paediatrics, of whom 282 (38.1%) were physicians, 319 (43.1%) nurses and 139 (18.8%) paramedics (health aides/healthcare assistants). The respondents were given a pilot-tested questionnaire about their perception of the seriousness of influenza, their general knowledge of influenza recommendations and preventive measures, and their personal use of influenza vaccination, which was self-completed within 20 minutes in an isolated room. Descriptive analysis of the 707 (95.6%) HCWs who completed the questionnaire revealed that the majority (83.6%) of HCWs in obstetrics/gynaecology never recommended the influenza vaccination to healthy pregnant women. Esposito et al. (2007) highlighted that, in each speciality, only a small number of nurses and paramedics regarded influenza as serious in comparison to the physicians. [20] Another study investigating the practices of midwives found that only 37% believed that the influenza vaccine is effective and 22% believed that the vaccine posed a greater risk than influenza. [21] The results from these studies clearly indicate deficiencies in the general knowledge of influenza and its prevention amongst health care staff.

In contrast, a study by Wu et al. (2006) suggested an unusually high vaccination uptake rate among fellows of the American College of Obstetricians and Gynaecologists (ACOG) who live and practice in Nashville, Tennessee. [22] The survey focussed on physician knowledge, practices and opinions regarding influenza vaccination of pregnant women. Results revealed that 89% of practitioners responded that they routinely recommend the vaccine to pregnant women and 73% actually administered the vaccination to pregnant and postpartum women. [21] Sixty-two percent responded that the earliest administration of the vaccine should be the second trimester, while 32% reported that it should be offered in the first trimester. Interestingly, 6% believed that it should not be delivered at all during pregnancy. Despite the national recommendation to administer the vaccination routinely to all pregnant women, [4] more than half of the obstetricians preferred to withhold it until the second trimester due to concerns regarding vaccine safety, association with spontaneous abortion and the possibility of disruption of embryogenesis. [22] Despite the high uptake rate identified by the respondents, there are a few major limitations in this study. First, the researchers excluded family physicians and midwives practising obstetrics from their survey, which prevents a true representation of the sample population. Second, the vaccination rates were reported by the practitioners and not validated, which increases the likelihood of reporting bias.

It is evident that HCWs attending to pregnant women and children have limited and frequently incorrect beliefs concerning influenza and its prevention. [20,23] A recent study by Tong et al. (2008) demonstrated that only 40% of the health care providers at the three hospitals studied in Toronto were aware of the high-risk status of pregnant women and only 65% were aware of the NACI recommendations. [23] Furthermore, obstetricians were less likely than family physicians to indicate that it was their responsibility to discuss, recommend or provide influenza vaccination. [23] Tong et al. (2008) also demonstrated that high levels of provider knowledge about influenza and maternal vaccination, positive attitudes towards influenza vaccination, increased age, being a family physician, and having been vaccinated against influenza were associated with recommending the influenza vaccine to pregnant women. [23] These data are also supported by Wu et al. and Esposito et al.

Silverman et al. (2001) concluded that physicians were more likely to recommend the vaccine if they were aware of current Centers for Disease Control and Prevention guidelines, gave vaccinations in their offices and had been vaccinated against influenza themselves. [24] Similarly, Lee et al. (2005) showed that midwives who had received the immunisation themselves and firmly believed in its benefits were more likely to offer it to pregnant women. [21] Wallis et al. (2006) conducted a multisite interventional study involving educational sessions with physicians and the use of “Think Flu Vaccine” notes on active obstetric charts, demonstrating a fifteen-fold increase in the rate of influenza vaccination in pregnancy. [25] This study also demonstrated that the increase in uptake was greater in family practices than in obstetric practices, and greater in small practices than in large practices.

Overall, the literature here is derived mostly from American and Canadian studies, as there is no data available for Australia. Existing data suggest that there is a significant lack of understanding regarding influenza vaccine safety, benefits and recommendations amongst HCWs. [20-27] These factors may lead to incorrect assumptions and infrequent vaccine delivery.

Barriers in delivering the influenza vaccinations to pregnant women

Aside from the gaps in health care providers’ understanding of vaccine safety and national guidelines, several other barriers to delivering the influenza vaccine to pregnant women have been identified. A study published in 2009, based on CDC analysis of data from the Pregnancy Risk Assessment and Monitoring System from Georgia and Rhode Island over the period 2004-2007, showed that the most common reasons for not receiving the vaccination were, “I don’t normally get the flu vaccination” (69.4%), and “my physician did not mention anything about a flu vaccine during my pregnancy” (44.5%). [28] Lack of maternal knowledge about the benefits of the influenza vaccination has also been demonstrated by Yudin et al. (2009), who conducted a cross-sectional in-hospital survey of 100 postpartum women during the influenza season in downtown Toronto. [29] This study concluded that 90% of women incorrectly believed that pregnant women have the same risk of complications as non-pregnant women and 80% incorrectly believed that the vaccine may cause birth defects. [29] Another study highlighted that 48% of physicians listed patient refusal as a barrier to administering the vaccine. [22] These results were supported by Wallis et al. (2006), who focused on using simple interventions such as chart reminders to surmount the gaps in women’s knowledge. [25] ‘Missed opportunities’ by obstetricians and family physicians to offer the vaccination have been suggested as a major obstacle in the delivery of the influenza vaccination during pregnancy. [14,23,25,28]

During influenza season, hospitalised pregnant women with respiratory illness had significantly longer lengths of stay and higher odds of delivery complications than hospitalised pregnant women without respiratory illness. [5] In some countries the cost-burden of the vaccine to women is another major barrier that contributes to lower vaccination rates among pregnant women. [22] This is not an issue in Australia, where the vaccination is free for all pregnant women. Provision of free vaccination to all pregnant women is likely to confer a significant advantage when considering the cost-burden of influenza on the health-care sector. However, the cost-burden on the patient can also manifest as a lack of access, as reported by Shavell et al. (2012), who found that patients who lacked insurance and transportation were less likely to receive the vaccine. [30]

Several studies have also shown that the vaccine is comparatively cost-effective when considering the financial burden of influenza-related morbidity. [31] A 2006 study based on decision analysis modelling revealed that a vaccination rate of 100% in pregnant women would save approximately 50 dollars per woman, resulting in a net gain of approximately 45 quality-adjusted hours relative to providing supportive care alone in the pregnant population. [32] Beigi et al. (2009) demonstrated that maternal influenza vaccination using either the single- or 2-dose strategy is a cost-effective approach when influenza prevalence is 7.5% and influenza-attributable mortality is 1.05%. [32] As the prevalence of influenza and/or the severity of the outbreak increases, the incremental value of vaccination also increases. [32] Moreover, a study in 2006 demonstrated the cost-effectiveness to the health sector of single-dose influenza vaccination for influenza-like illness. [31] Therefore, patient education about the relative cost-effectiveness of the vaccine and adequate reimbursement by the government are required to alleviate this barrier in other nations, though not in Australia, where the vaccination is free for all pregnant women.

Lack of vaccine storage facilities in physicians’ offices is an important barrier preventing the recommendation and uptake of the vaccine by pregnant women. [23,33] A recent study monitoring immunisation practices amongst practising obstetricians found that less than 30% store the influenza vaccine in their office. [18] One study reported an acceptance rate of 71% among 448 eligible pregnant women who were offered the influenza vaccine at a routine prenatal visit, made possible by the availability of storage facilities at the practice; this suggests that the uptake of vaccination can be increased by simply overcoming logistical and organisational barriers such as vaccine storage, inadequate reimbursement and patient education. [34]

Conclusion

From the limited data available, it is clear that there is a variable level of knowledge of influenza and its prevention amongst HCWs, as well as a general lack of awareness of the national guidelines in their countries. However, there is no literature for Australia to compare with other nations. There is some debate regarding the trimester in which the vaccine should be administered. There is a further lack of clarity in terms of who is responsible for the discussion and delivery of the vaccine – the general practitioner or the obstetrician. These factors contribute to a lack of discussion of vaccine use and amplify the number of ‘missed opportunities.’

Lack of maternal knowledge about the safety of the vaccine and its benefits is also a barrier that must be overcome by the HCW through facilitating an effective discussion about the vaccine. Since the vaccine has been rendered free in Australia, cost should not prevent vaccination. Regular supply and storage of vaccines especially in remote towns of Australia is likely to be a logistical challenge.

There is limited Australian literature exploring the uptake of influenza vaccine in pregnancy and the contributing factors such as the knowledge, attitude and opinion of HCWs, maternal knowledge of the vaccine and logistical barriers. A reasonable first step would be to determine the rates of uptake and prevalence of influenza vaccination in antenatal women in Australia.

Conflict of interest

None declared.

Correspondence

S Khosla: surabhi.khosla@my.jcu.edu.au

 


Spontaneous regression of cancer: A therapeutic role for pyrogenic infections?

Spontaneous regression of cancer is a phenomenon that is not well understood. While the mechanisms are unclear, it has been hypothesised that infections, fever and cancer are linked. Studies have shown that infections and fever may be involved in tumour regression and are associated with improved clinical outcomes. This article will examine the history of and evidence for pyrogenic infections as an explanation for spontaneous regression, and how they may be applied to future cancer treatments.

Introduction

Spontaneous regression of cancer is a phenomenon that has been observed since antiquity. [1] It can be defined as a reversal or reduction of tumour growth in instances where treatment has been lacking or ineffectual. [2] Little is known about its mechanism, but two observations in cancer patients are of particular interest: first, infections have been shown to halt tumour progression; second, the development of fever has been associated with improved prognosis.

Until recently, fever and infections have been regarded as detrimental states that should be minimised or prevented. However, in the era preceding the use of antibiotics and antipyretics, these observations were common and formed the basis of crude yet strikingly effective immune-based treatments. The promise of translating that success to modern cancer treatment is a tempting one and should be examined further.

History: Spontaneous Regression & Coley’s Toxins

Spontaneous regression of cancers was noted as early as the 13th century. The Italian Peregrine Laziosi was afflicted with painful leg ulcers which later developed into a massive cancerous growth. [3] The growth broke through the skin and became badly infected. Miraculously, the infection induced a complete regression of the tumour and surgery was no longer required. He later became the patron saint of cancer sufferers.

Reports associating infections with tumour regression continued to grow. In the 18th century, Trnka and Le Dran reported cases of breast cancer regression which occurred after infection of the tumour site. [4, 5] These cases were often accompanied by signs of inflammation, and fever and gangrene were common. [3]

In the 19th century, such observations became the basis of early clinical trials by physicians such as Tanchou and Cruveilhier. Although highly risky, they attempted to replicate the same conditions artificially by applying septic dressings to the wound or injecting patients with pathogens such as the malaria parasite. [1] The results were often spectacular and, suddenly, this rudimentary form of ‘immunotherapy’ seemed to offer a genuine alternative to surgery.

Until then, the only option for cancer was surgery, and outcomes were at times very disappointing. Dr. William Coley (a 19th century New York surgeon) related his anguish after his patient died despite radical surgery to remove a sarcoma of the right hand. [3] Frustrated by the limitations of surgery, he sought an alternative form of treatment and came across the work of the medical pioneers Busch and Fehleisen. They had earlier experimented with erysipelas, injecting or physically applying the causative pathogen, Streptococcus pyogenes, onto the tumour site. [6] This was often followed by a high fever which correlated with a concomitant decrease in tumour size in a number of patients. [3] Coley realised that using live pathogens was very risky, and he eventually modified the approach using a mixture of killed S. pyogenes and Serratia marcescens. [7] The latter potentiated the effects of S. pyogenes such that a febrile response could be induced safely without an ‘infection’, and this mixture became known as Coley’s toxins. [1]

A retrospective study in 1999 showed that there was no significant difference in cancer death risk between patients treated using Coley’s toxins and those treated with conventional therapies (i.e. chemotherapy, radiotherapy and surgery). [8] Data for the second group were obtained from the Surveillance, Epidemiology, and End Results (SEER) registry in the 1980s. [3] This observation is remarkable given that Coley’s toxins were developed at a fraction of the cost and resources afforded to current conventional therapies.

Researchers also realised that Coley’s toxins have broad applicability and are effective across a range of cancers, including those of mesodermal embryonic origin such as sarcomas and lymphomas, as well as carcinomas. [7] One study comparing the five-year survival rates of patients with inoperable sarcomas or carcinomas found that those treated with Coley’s toxins had a survival rate as high as 70-80%. [9]

Induction of a high-grade fever proved crucial to the success of this method. Patients with inoperable sarcoma who were treated with Coley’s toxins and developed a fever of 38–40 °C had a five-year survival rate three times higher than that of afebrile patients. [10] As cancer pain can be excruciating, pain relief is usually required. Upon administration of Coley’s toxins, an immediate and profound analgesic effect was often observed, allowing the discontinuation of narcotics. [9]

Successes related to ‘infection’-based therapies are not isolated. In the early 20th century, Nobel laureate Dr. Julius Wagner-Jauregg used tertian malaria injections in the treatment of neurosyphilis-induced dementia paralytica. [3] This approach relied on the induction of prolonged and high-grade fevers. Considering the high mortality rate of untreated patients in the pre-penicillin era, he was able to achieve an impressive remission rate of approximately one in two patients. [11]

More recently, the Bacillus Calmette-Guérin (BCG) vaccine has been used in the treatment of superficial bladder cancers. [12] BCG consists of live attenuated Mycobacterium bovis and is commonly used in tuberculosis vaccination. [12,13] Its anti-tumour effects are thought to involve a localised immune response stimulating production of inflammatory cytokines such as tumour necrosis factor α (TNF-α) and interferon γ (IFN-γ). [13] Similar to Coley’s toxins, it uses a bacterial formulation and requires regular localised administration over a prolonged period. BCG has been shown to reduce bladder cancer recurrence rates in nearly 70% of cases, and recent clinical trials suggest a possible role in colorectal cancer treatment. [14] From these examples, we see that infections or immunisations can have broad and effective therapeutic profiles.

Opportunities Lost: The End of Coley’s Toxins

After the early success of Coley’s toxins, momentum was lost when Coley died in 1936. The emergence of chemotherapy and radiotherapy overshadowed its development, while aseptic techniques gradually gained acceptance. After World War II, large-scale production of antibiotics and antipyretics also allowed better suppression of infections and fevers. [1] Opportunities for further clinical studies using Coley’s toxins were lost when, despite decades of use, it was classified as a new drug by the US Food and Drug Administration (FDA). [15] The tightening of regulations on clinical trials of new drugs after the thalidomide incidents of the 1960s meant that Coley’s toxins were highly unlikely to pass the stringent safety requirements. [3]

With fewer infections, spontaneous regressions became less common. An estimated yearly average of over twenty cases in the 1960s-80s decreased to fewer than ten cases in the 1990s. [16] It gradually came to be believed that the body’s immune system had a negligible role in tumour regression, and focus was placed on chemotherapy and radiotherapy. Despite initial promise, these therapies have not fulfilled their potential and effective treatment for certain cancers remains out of reach.

In a curious turn of events, advances in molecular engineering have now provided the tools to transform immunotherapy into a viable alternative. Coley’s toxins provided the foundations for early immunotherapeutic approaches and may yet contribute significantly to the success of future immunotherapy.

Immunological Basis of Pyrogenic Infections

The most successful cases treated with Coley’s toxins are attributed to three factors: successful infection of the tumour, induction of a febrile response, and daily intra-tumoural injections over a prolonged period.

Successful infection of tumour

Infection of tumour cells results in infiltration of lymphocytes and antigen-presenting cells (APCs) such as macrophages and dendritic cells (DCs). Binding of pathogen-associated molecular patterns (PAMPs) (e.g. lipopolysaccharides) to toll-like receptors (TLRs) on APCs induces activation and antigen presentation. The induction process also leads to the expression of important co-stimulatory molecules such as B7, and cytokines such as interleukin-12 (IL-12), required for optimal activation of B and T cells. [17] In some cases, pathogens such as the zoonotic vesicular stomatitis virus (VSV) have oncolytic properties and selectively lyse tumour cells to release antigens. [18]

Tumour regression or progression depends on the state of the immune system. A model of duality has been proposed, in which the immune system performs either a defensive or a reparative role. [1, 3] In the defensive mode, tumour regression occurs as immune cells are produced, activated and mobilised against the tumour. In the reparative mode, tumour progression is favoured and invasiveness is promoted via immunosuppressive cytokines, growth factors, matrix metalloproteinases and angiogenesis factors. [1, 3]

The defensive mode may be activated by external stimuli during infections; this principle can be illustrated by the example of M1/M2 macrophages. M1 macrophages are involved in resistance against infections and tumours and produce pro-inflammatory cytokines such as IL-6, IL-12 and IL-23. [19, 20] M2 macrophages promote tumour progression and produce anti-inflammatory cytokines such as IL-10 and IL-13. [19, 20] M1 and M2 macrophage polarisation is dependent on transcription factors such as interferon response factor 5 (IRF5). [21] Inflammatory stimuli such as bacterial lipopolysaccharides induce high levels of IRF5, which commits macrophages to the M1 lineage while also inhibiting expression of M2 macrophage markers. [21] This two-fold effect may be instrumental in facilitating the defensive mode.

Induction of febrile response

In Matzinger’s ‘danger’ hypothesis, the immune system responds to ‘danger signals’ produced during cellular distress, including inflammatory factors released from dying cells. [22] T cells remain anergic unless both danger signals and tumour antigens are provided. [23] A febrile response is advantageous because fever is thought to facilitate the production of inflammatory factors. Cancer cells are also more vulnerable to heat than normal cells, and the elevated body temperature during fever may promote cell death and the massive release of tumour antigens. [24]

Besides a physical increase in temperature, fever encompasses profound physiological effects. An example is the induction of heat-shock protein (HSP) expression on tumour cells. [16] Studies have shown that Hsp70 expression on carcinoma cells promotes lysis by natural killer T (NKT) cells in vitro, while tumour expression of Hsp90 may play a key role in DC maturation. [25, 26] Interestingly, HSPs also associate with tumour peptides to form immunogenic complexes involved in NK cell activation. [25] This is important because NK cells help overcome subversive strategies used by cancer cells to avoid T cell recognition. [27] Downregulation of major histocompatibility complex (MHC) expression on cancer cells results in increased susceptibility to NK cell attack. [28] These observations show that fever is adept at stimulating both innate and adaptive responses.

Route and duration of administration

The systemic circulation poses a number of obstacles to the successful delivery of infectious agents to the tumour site. Neutralisation by pre-immune Immunoglobulin M (IgM) antibodies and complement activation impede pathogens. [18] Infectious agents may bind non-specifically to red blood cells and undergo sequestration by the reticuloendothelial system. [29] In the liver, specialised macrophages called Kupffer cells can also be activated by pathogen-induced TLR binding and cause inflammatory liver damage. [29] An intratumoural route therefore has the advantage of circumventing most of these obstacles, increasing the probability of successful infection. [18]

It is currently unclear whether innate or adaptive immunity is predominantly responsible for tumour regression. Coley observed that shrinkage often occurred hours after administration, whereas if daily injections were stopped, even for brief periods, the tumour continued to progress. [30] Innate immunity may therefore be important, and this is consistent with insights from vaccine development, in which adjuvants enhance vaccine effectiveness by targeting innate immune cells via TLR activation. [1]

Although T cell numbers in tumour infiltrates are substantial, tolerance is pervasive, and attempts to target specific antigens have been difficult due to antigenic drift and the heterogeneity of the tumour microenvironment. [31] A possible explanation for the disparity between T cell numbers and the anti-tumour response is that the predominant adaptive immune responses are humoral rather than cell-mediated. [32] Clinical and animal studies have shown that spontaneous regressions in response to pathogens like malaria and Aspergillus are mainly antibody mediated. [3] Further research will be required to determine if this is the case for most infections.

Both innate and adaptive immunity are probably important at specific stages, with sequential induction holding the key to tumour regression. In acute inflammation, innate immunity is usually activated optimally, which in turn induces efficient adaptive responses. [33] Conversely, chronic inflammation involves a detrimental positive feedback loop that acts reversibly and over-activates innate immune cells. [34] Instability of these immune responses can result in suboptimal anti-tumour responses.

Non-immune considerations and constructing the full picture

Non-immune mechanisms may be partly responsible for tumour regression. Oestrogen is required for tumour progression in certain breast cancers, and attempts to block its receptors with tamoxifen have proved successful. [35] Natural disturbances in hormone production may similarly inhibit cancerous growth and promote regression in hormone-dependent malignancies. [36]

Genetic instability has also been suggested as a possible mechanism. In neuroblastoma patients, telomere shortening and low levels of telomerase have been associated with tumour regression. [37] This may be because telomerase activity is required for cellular immortality. Other potential considerations include stress, hypoxia and apoptosis, but these are beyond the scope of this review. [38]

As non-immune factors tend to relate to specific subsets of cancers, they are unlikely to explain tumour regression as a whole. They may instead serve as secondary mechanisms that support a primary immunological mechanism. During tumour progression, these non-immune factors may either malfunction or become the target of subversive strategies.

A simplified outline of the possible role of pyrogenic infections in tumour kinetics is illustrated below (Figure 1).

Discussion

The intimate link between infections, fever and spontaneous regression is slowly being recognized. While the incidence of spontaneous regression is steadily decreasing due to circumstances in the modern clinical se