
Medical students in Aboriginal Community Controlled Health Services: identifying the factors involved in successful placements for staff and students

Abstract

Background: This study aimed to identify the facilitators and barriers to positive medical student placements at Aboriginal Community Controlled Health Services (ACCHSs).

Materials and Methods: A total of 15 focused interviews were conducted with medical students from Victorian universities and staff from two Victorian ACCHSs. Staff and students were asked about their expectations of student placements; the learning outcomes for students; the structural elements that influence student placements; and the overall benefits and challenges of placements within these settings. These data were then thematically analysed.

Results: The study found that student placements in ACCHSs were of benefit to both the student and the organisation. However, areas for improvement were identified, including avenues for administrative assistance from universities in managing placements and clarifying expectations with regard to learning objectives. Overall, it was the opinion of participants that placements in this setting should be encouraged as a means of medical and cultural education.

Conclusion: The study contributes to building an understanding of the elements that lead to good practice in student placement design, and developing relationships between medical schools and ACCHSs. The study provides grounding for further research into the development of a framework for assisting successful student placements in the ACCHS setting.

 

Introduction

Medical education can be a powerful tool for social reform [1]. The teaching that occurs within medical schools, and the manner and context in which it is delivered, has the potential to influence the practice of future doctors and to help address social inequities. One of the greatest health inequities in Australia is between Indigenous and non-Indigenous Australians [2].

In an effort to address this health disparity, there has been increasing emphasis on the teaching and learning of Indigenous health issues in medical schools within Australia, with a range of initiatives guiding the development and improvement of the medical curriculum and associated activities [3,4]. One of the most significant is the inclusion in 2006 of Indigenous health in the Australian Medical Council’s (AMC) guidelines for Assessment and Accreditation of Medical Schools [5]. An important element of the Standards for Accreditation is the emphasis on offering student placements in Aboriginal Community Controlled settings and the development of relationships between medical schools and Aboriginal Community Controlled Health Services (ACCHSs) to facilitate this (see Standards 1.6.2, regarding effective community partnerships, and 8.3.3, regarding exposure to culturally competent healthcare) [6].

Student placements in such a setting offer an opportunity for students to develop cultural competency in the area of Indigenous health. This was outlined in the National Best Practice Framework for Indigenous Cultural Competency in Australian Universities as a critical area of need, and defined as:

“Student and staff knowledge and understanding of Indigenous Australian cultures, histories and contemporary realities and awareness of Indigenous protocols, combined with the proficiency to engage and work effectively in Indigenous contexts congruent to the expectations of Indigenous Australian peoples” [7].

While ACCHSs have hosted medical students for some time, there has been little formal research regarding ACCHSs as a setting for student placements, either locally or elsewhere in Australia [8-11]. The purpose of this study is to investigate the key facilitators of, and barriers to, positive medical student placements in this sector.

 

Methods and analysis

Participants for this research included Victorian medical students who had completed a placement at an ACCHS and staff members of Victorian ACCHSs who had been involved in medical student placements. Students were recruited on a voluntary basis by responding to an electronic noticeboard announcement; ‘snowball’ sampling was also employed. A total of seven student interviews were recorded. Of these students, six had been involved in placements in ACCHSs, and one in a remote Aboriginal community government-run health service. Placements ranged from one to six weeks in duration and were conducted in ACCHSs located in Queensland, Victoria, New South Wales, Western Australia, and the Northern Territory.

The ACCHSs involved in this study were all located in Victoria and were selected on the basis of having a pre-existing relationship with the University of Melbourne. Each organisation approved its involvement in the research following internal protocols, and staff members were nominated by the ACCHSs on the basis of their direct involvement in medical student placements. A total of eight interviews were conducted with staff members from Victorian ACCHSs.

Data for this project were collected through a series of one-on-one semi-structured interviews with participants, conducted by the first author either in participants’ workplaces or in university campus interview rooms. The context and purpose of the research were explained to interviewees prior to the interviews. Interview questions focused on the benefits and challenges both groups experienced during student placements at the services. Transcripts were returned to participants for comment and correction.

The data gathered from the transcribed interviews were arranged according to the questions asked, and then further under thematic headings. Shared themes were derived from the data without the use of supporting software.

This project was conducted as part of the Scholarly Selective program of the University of Melbourne Doctor of Medicine. At the time of writing, the first author was a fourth-year postgraduate medical student, supervised by two experienced researchers. Ethics approval for this project was obtained through the School of Population and Global Health Human Ethics Advisory Group of the University of Melbourne (approval no. 1443395).

 

Results

In total, 15 interviews were recorded for this research. All students were studying medicine at universities in Victoria. The ACCHS placements were undertaken as either GP placements or electives. Staff from the ACCHSs had a variety of roles including general practitioner, nurse, Aboriginal health worker, medical director, clinical director, and executive director of health services. Points of discussion arising from the data fell largely under six major themes:

  • Student exposure
  • Community benefits
  • Burden on health services
  • Educational value
  • Student engagement
  • Interpersonal value

All participants, on direct questioning, agreed that medical student placements in ACCHSs are important. The data were therefore considered on the basis that there is strong support from both students and staff for making these placements a positive and constructive experience for all.

Student exposure

A strong theme that emerged from the responses of both groups was that these placements offered medical students practical exposure to Indigenous health, culture, and community, with several students stating that placements provided an insight into Indigenous health that was not possible through the theoretical teaching delivered elsewhere in the curriculum:

“I mean, you hear it, you read it, and so you know it superficially, but when you’re sitting in front of multiple people who can tell you the details of their story, you get a much better understanding as to why these families have had opportunities denied to them” (Student 6).

Students and staff also recognised that placements provided an opportunity to teach students about the ACCHS model of healthcare, which involves not only the delivery of medical services, but also preventive health, social outreach, and advocacy programs that address the social determinants of health [12-15]. For one Aboriginal health worker, the value lay in teaching the principles of self-determination upon which ACCHSs are founded [15]:

“I just like the fact that they’re in our setting, our community, and learning from us, not being told by someone else that this is how it is” (Staff member 6).

Community benefits

Staff and students cited the potential benefits for Indigenous communities, such as the recruitment of medical staff and stronger ties between the medical profession and Indigenous communities, as a primary advantage of student placements:

“… we see it as an opportunity to expose people to what it’s like working in Aboriginal health, and that helps us with recruitment” (Staff member 8).

Several staff and students commented on the role of placements in promoting awareness of ACCHSs amongst the medical community, thereby increasing the likelihood of referrals and support for the services:

“… it’s very good for the organisation and the community to see that students come here to learn because it gives them the message that this is a place of excellence … I think that builds confidence on their part in the service” (Staff member 8).

In addition, placements provided ACCHSs and their patients with the opportunity to engage in the medical education process:

“… it makes medical education more transparent for Aboriginal people … and in turn I think that has the potential to create more trust between the patient and the doctor in Aboriginal health centres” (Student 6).

Participants also saw that placements could have a broader impact on the healthcare system beyond the ACCHS setting, in that students who had this experience would go on to work in practices and hospitals across the country in a more culturally appropriate way. As such, these placements are “… seeding the medical workforce with people who have some understanding and experience in Aboriginal health” (Staff member 4).

Burden on health services

Participants recognised that the administrative and organisational duties required for placements were very time-consuming, and that supervising students put pressure on practitioners’ time, increasing delays for patients and the overall demand on the practice. The administrative duties for ACCHS staff included scheduling time for teaching, co-ordinating the student’s timetable to allow them to spend time in various parts of the organisation, and working through requests for placements from different universities and faculties.

Many of the challenges that students experienced in their placements related to how well the organisation was able to handle these tasks. This was, as several students noted, a feature of clinical placements that is not unique to the ACCHS setting. Challenges for some students included an apparent lack of structure to the placement, staff being unaware in advance of the student’s arrival, finding the clinic to be underprepared for the student or understaffed, or doctors simply not having the time available to teach the student. As one student commented, the service was, “… definitely very welcoming … but they were very space-limited and time-limited in terms of how much attention they can pay to students” (Student 6).

Several students mentioned the value of a careful introduction and orientation to the practice as a way of helping students to feel comfortable in the new environment, and as a result, improving student engagement and relieving some of the administrative stress on the organisation:

“If the host organisation gives a good introduction to the student, it will be easier for them right the way through the placement because the student will know what they’re doing and where they fit, so they won’t be constantly having to direct them” (Student 6).

Educational value

Responses regarding the educational value of the placements varied both between and within the two groups. Most staff at the ACCHSs were generally very happy with the educational experience they provided, not only in terms of general practice knowledge, but also holistic care, community medicine, and Aboriginal culture. Several staff, however, recognised that the emphasis placed on cultural and holistic care may not have been in line with what students expected from placements:

“… I don’t know if they come with that same perspective of the holistic model of care … yes, the clinical side is important, but that’s not the whole reason why they’re coming to [ACCHS]s” (Staff member 2).

Conversely, some of the staff interviewed said that some students were surprised by the degree of emphasis placed on the general practice aspect of the placement.

While all students reported that the placement had been a valuable learning experience, more than half of those interviewed commented that in terms of examinable material for a general practice rotation, the ACCHS placement was perhaps not as rewarding as a placement in a ‘mainstream’ practice:

“I don’t think I learned a lot of examinable material” (Student 3).

One student noted that the longer consultations, which staff regarded as a virtue, meant fewer patients were seen, so that the opportunity for learning through repetition was diminished on a purely quantitative basis.

In contrast to the opinions of some of their peers, several students stated explicitly that they believed the educational experience was better for being in an ACCHS setting, and many said that the cultural and community teachings had enriched their learning:

“I can only say that I think if anything it was an advantage because not only did I get the clinical experience I also got the community, social aspect of it as well which might be harder to grasp if you hadn’t done that” (Student 1).

This discrepancy in opinions to some extent reflects differing expectations, both within the student group and between students and staff.

Student engagement

Participants were asked how they defined a ‘successful’ placement. Responses from students varied, and largely focused on basic principles of medical education such as patient contact and fulfilling curriculum requirements, but also included having clear expectations and an orientation to the ACCHS.

While staff responses also varied, the majority of comments related to student engagement—with the staff, the service as a whole, and with the community:

“If … I get a sense that they’re starting to integrate with the broader team … that sort of marrying in with the team well, I think, is a very good sign” (Staff member 8).

Several staff commented that students who were confident in the ACCHS and able to seek out their own learning opportunities were ‘easier’, more engaged and more likely to be active learners:

“Some of them are much easier and more outgoing. Whereas some of them you have to spend a bit of time engaging and making them feel confident…that’s not a bad thing but it’s harder work” (Staff member 5).

Interpersonal value

The value of the human interactions that arose from placements emerged as a common theme in the interviews. Several students spoke of the relationships with staff, and the trust that developed with community members returning to the clinic, as particularly rewarding experiences:

“I got to see a number of patients quite a few times so that made it a very good learning experience, and a lot of the patients were very trusting, and so I got to do a lot in terms of their care. That … was really rewarding” (Student 6).

Staff from the ACCHSs spoke enthusiastically of having engaged students around the clinics and the organisations more broadly:

“It’s enjoyable, honestly, to see someone who wants to come here and work with Aboriginal people” (Staff member 7).

They cited the benefits of a fresh perspective on health, a new skill set, at times a helping hand, and, importantly, a sense of goodwill toward the Indigenous community and the health organisation, demonstrated through the students’ interest in Indigenous people and their health.

 

Discussion

Major benefits and challenges

This study highlights strong support for student placements in ACCHSs. The most commonly cited reasons for this support centre on the ability to offer students first-hand experience in an Aboriginal community health setting, and the reciprocal benefit to the community in creating a more culturally educated workforce.

The challenges reported by staff and students emphasise areas in need of improvement in the placement process, and provide a foundation for refinement. Foremost among these are the administrative and organisational burden on the health services, how the co-ordination of placements can be improved, and the implications for relationships between universities and ACCHSs in this process. Nelson et al. [10] suggest that university-appointed administrators can play a role in assisting ACCHSs with the processes required to ensure that students and the ACCHSs themselves are adequately prepared for placements. Their study highlights the positive feedback received when such appointments have been made, and the interviews here reinforce the message that good preparation and coordination improve the experience of both staff and students [10].

Orientation

Ensuring that students feel both socially and culturally oriented in the placement environment is an important element of a successful experience for both staff and students. Students who feel at ease, or more confident in the environment, tend to be more proactive in their learning and to place fewer demands on the organisation. An important way of fostering this is through a formal orientation.

At the sites where an orientation was undertaken and involved specific cultural awareness training, students felt more confident and engaged. While this responsibility sat with the ACCHS, several participants noted that cultural awareness training should be a core part of medical education in the university environment. Preliminary training would then be the basis for, and be complementary to, the localised and more specific learning provided once students are in the ACCHS setting. Improved coordination between the universities and the ACCHSs is therefore important to ensure that appropriate training and orientation is completed before the student begins their work in the clinical environment.

Educational value of placements in ACCHSs

A successful placement requires that all parties have a clear understanding of the nature and purpose of the placement, with shared expectations of learning objectives. Most placements are either part of general practice rotations or student-initiated electives. While the interviews included positive accounts of both types of placement, the flexibility of student-initiated electives was noted as an advantage in the ACCHS context. Electives, as distinct from other in-semester rotations, are not intended to fulfil precise curriculum requirements, and allow students to engage more freely in learning about Indigenous health and culture and the broader healthcare services provided by ACCHSs. However, participants also noted the importance of ACCHSs being included in general practice rotations. It must also be recognised that the medical curriculum is not limited to clinical decision-making, and the educational value of these placements should not be judged on clinical learning alone.

Selection of students

The administrative burden and over-demand for student placements in ACCHSs raise the issue of whether students should be required to demonstrate an interest in Indigenous health to be granted a placement, a requirement that already exists in some ACCHSs. The interview data clearly identified that the burden on the health service was greater if students were unenthusiastic, uninterested, or unable to self-manage. Approximately half the respondents agreed that an expression of interest should be a requisite. The remainder suggested that students who do not express an interest in Indigenous health placements might have the most to gain from the experience. Adequate orientation may provide a solution in terms of familiarising the student, managing expectations, and facilitating a positive experience for both the student and the health service.

Limitations

This study was limited in its breadth by the nature of the research as a University of Melbourne Scholarly Selective project. The study therefore had limited scope and a small sample size, and while strongly shared themes arose from the data, the interviews did not reach saturation. The authors also acknowledge that the students interviewed had all voluntarily selected Aboriginal health placements, so a selection bias may exist with regard to their views of the value of these placements. The authors further acknowledge that while the students interviewed were placed in ACCHSs across Australia, the ACCHS staff were from Victorian ACCHSs only, and therefore the placements they describe are not necessarily shared experiences. No community members visiting the ACCHSs were interviewed; their opinions on the presence of students in the organisations may form a basis for further research.

Conclusion

For ACCHSs to continue to be an active part of medical education, as mandated by the AMC, it is important to ensure that they have the resources to provide a good learning environment, and that the presence of students is not an impediment to the organisations. Placements should contribute to cultivating trust between Indigenous communities and the medical profession, and this is more likely with careful planning and co-ordination of placements. It is hoped that the findings of this research will help guide student placements into the future and contribute to ensuring a mutually beneficial system. Further research and larger studies in this area may include investigation of the perspectives of community members on the presence and engagement of students in ACCHSs, as well as a deeper exploration of the effects of student placements in other settings, including remote areas.

 

Conflict of interest
None declared.

Abbreviations and notes

ACCHS – Aboriginal Community Controlled Health Service

* Note: the term ‘Indigenous’ is used in this article to refer to the Aboriginal and Torres Strait Islander peoples of Australia.

Student x – student no. x

Staff member x – staff member no. x

 

References

[1] Murray RB, Larkins S, Russell H, Ewen S, Prideaux D. Medical schools as agents of change: socially accountable medical education. Med J Aust. 2012;196(10):653.

[2] Australian Bureau of Statistics. Experimental life tables for Aboriginal and Torres Strait Islander Australians [Internet]. 2007 [updated 2013; cited 2015 Oct 10].

[3] Mackean T, Mokak R, Carmichael A, Phillips GL, Prideaux D, Walters TR. Reform in Australian medical schools: a collaborative approach to realising Indigenous health potential. Med J Aust. 2007;186(10):544-6.

[4] Phillips G. CDAMS Indigenous health curriculum framework [Internet]. Melbourne: VicHealth Koori Health Research and Community Development Unit; 2004 [cited 2015 Jan 5]. Available from: http://www.limenetwork.net.au/files/lime/cdamsframeworkreport.pdf

[5] Australian Medical Council. Assessment and accreditation of medical schools: standards and procedures [Internet]. 2006 [cited 2011 Nov 10]. Available from: http://www.amc.org.au/forms/Guide2006toCouncil.pdf

[6] Australian Medical Council. Standards for assessment and accreditation of primary medical programs by the Australian Medical Council [Internet]. 2012 [cited 2015 Jan 6]. Available from: https://www.amc.org.au/files/d0ffcecda9608cf49c66c93a79a4ad549638bea0_original.pdf

[7] National best practice framework for Indigenous cultural competency in Australian universities [Internet]. Universities Australia Indigenous Higher Education Advisory Council (IHEAC); 2011 [cited 2015 Jan 6]. Available from: https://www.universitiesaustralia.edu.au/uni-participation-quality/Indigenous-Higher-Education/Indigenous-Cultural-Compet

[8] Weightman M. The role of Aboriginal community controlled health services in Indigenous health. Australian Medical Student Journal. 2013;4(1).

[9] Ross S, Whaleboat D, Duffy G, Woolley T, Sivamalai S, Solomon S. A successful engagement between a medical school and a remote North Queensland Indigenous community: process and impact. LIME Good Practice Case Studies. 2013;2:39-43.

[10] Nelson A, Shannon C, Carson A. Developing health student placements in partnerships with urban Aboriginal and Torres Strait Islander Community Controlled Health Services. LIME Good Practice Case Studies. 2013;2:29-34.

[11] Patel A, Underwood P, Nguyen HT, Vigants M. Safeguard or mollycoddle? An exploratory study describing potentially harmful incidents during medical student placements in Aboriginal communities in Central Australia. Med J Aust. 2011;194:497-500.

[12] Marles E, Frame C, Royce M. The Aboriginal Medical Service Redfern – improving access to primary care for over 40 years. Aust Fam Physician. 2012;41(6):433-6.

[13] Panaretto KS, Wenitong M, Button S, Ring IT. Aboriginal community controlled health services: leading the way in primary care. Med J Aust. 2014;200(11):649-52.

[14] Bartlett B, Boffa J. Aboriginal Community Controlled comprehensive primary health care: the Central Australian Aboriginal Congress. Aust J Prim Health. 2001;7(3):74-82.

[15] Davis M. Community control and the work of the National Aboriginal Community Controlled Health Organisation: putting meat on the bones of the ‘UNDRIP’. Indigenous Law Bulletin. 2013;8(7):11.

 

 


Mistreatment in Australian medical education: a student-led scoping of experiences

Abstract

Background:
Evidence of bullying and harassment of medical students and junior doctors has existed for over 30 years. However, there has been little attempt to explore the dimensions of this issue in Australia to date. Given evidence that experiencing abusive behaviour has a detrimental effect on professional identity formation and on mental health, the Australian Medical Students’ Association (AMSA) undertook a national scoping study to better understand the experiences of Australian medical students.

Methods:
We conducted a mixed methods survey of the 16,959 students enrolled in a medical degree at an Australian university in 2015. An anonymous, voluntary online questionnaire was distributed through AMSA’s social media, email newsletter and website, and medical students’ societies.

Results:
We received 519 responses, including 194 (37%) detailing at least one incident of bullying or harassment. Of the respondents, 335 (65%) were women and 345 (67%) were in the clinical years of their training. Overall, 60% of respondents reported experiencing or witnessing mistreatment during their medical education. The most common theme in the free text was belittlement of the student’s competence and capacity to be a good doctor. Some respondents gave details about how universities failed to prevent or appropriately respond to students’ experiences of bullying and harassment.

Conclusion:
In line with international data, this study shows that many Australian medical students perceive mistreatment as an important problem that is not always managed well by faculties. Multi-pronged policy and practice responses are needed to instigate cultural change in Australian medical education.

 

Introduction

“How can we care for our patients, man, if nobody cares for us?”
— Chuck the Intern, The House of God, Samuel Shem [1].

Evidence of bullying of medical students and junior doctors has existed for over 30 years in the United States and the United Kingdom [2-7]. In Australia over the past two years, bullying, teaching by humiliation, and sexual harassment in medical training have attracted attention both from the mainstream news media and within the profession. There is also some formal evidence about the extent of this problem nationally: a recent local study of “teaching by humiliation” found that 74% of medical students reported experiencing this practice and 84% reported witnessing it [5].

Worldwide, similar studies have shown that any student can be affected, regardless of gender or race [8-10]. The most common form of mistreatment reported is covert, mostly belittling, exclusion, or humiliation, rather than overt yelling or violence [4-6,11]. Sexual harassment is the most commonly documented form of incident [12-14]. The perpetrators of bullying are most commonly senior male clinicians [4,15]. Under-reporting is the norm, especially when the perpetrator of mistreatment is the student’s clinical supervisor, due to fears about the possible impact on career progression [4,10].

There is evidence that experiencing abusive behaviour causes harm to the student and, later, potentially to their patients and colleagues. Other research has demonstrated how a student’s developing identity affects their subsequent career progress, employability, and performance [17-19].

Mistreatment may be a contributor to the high levels of psychological distress found among medical students. Studies have shown that rates of such distress are approximately three times higher among medical students than in the general population (9.2% and 3.1% respectively), and that female medical students were more likely than male students to have considered suicide in the previous twelve months, with 4.5% having attempted suicide. In particular, Indigenous students found bullying to be a substantial source of stress [16].

The exact nature of bullying and discrimination can be difficult to define. Through this research, we determined which incidents students found distressing, and what they considered to be bullying or discriminatory behaviour.

Prompted by an increase in reports of mistreatment from Australian medical students following media attention to the issue, the Australian Medical Students’ Association (AMSA), in association with the Sydney School of Public Health (SSPH), undertook scoping research to better understand the experiences of Australian medical students.

Rather than seeking to confirm the prevalence of abuse, which has been demonstrated by previous research and by the Report of the Senate Inquiry into Medical Complaints Processes (2016) [20], we aimed to explore aspects of students’ experiences and their responses.

Methods

Study design and sample
As the only previous survey of this issue in Australia was distributed to students from just two medical schools [5], our study aimed to reach a wide variety of students from all Australian medical schools and to confirm that previously published findings were generalisable nationally. A survey was chosen as the medium for this research because it could easily be disseminated nationally online.

Medical students aged 18 years and older who were enrolled at an Australian university were surveyed between 25 August and 5 November 2015. The survey was an anonymous, voluntary online questionnaire administered using REDCap survey software (Vanderbilt University, Tennessee, USA) (see Supplementary Materials online).

The survey link and description were distributed through AMSA’s official Facebook page, Twitter account, website, and email newsletter (“Embolus”). Some medical schools’ student societies, as well as individual participants, also shared the survey link.

The questionnaire contained four parts. Part one collected demographic information. In part two, respondents rated their perception of the extent of five categories of mistreatment — “general bullying”, “sexism”, “disability discrimination” (including mental illness), “racism”, and “homophobia” — in Australian medical education, by moving a pointer on a scale from 0 to 100. In part three, respondents rated the attributes of incidents they had witnessed or experienced and were then invited to use free text boxes to describe these incidents. They were also asked about their response to these incidents. In part four, students could describe what actions they felt AMSA could take in response.

The project received ethics approval from the Human Research Ethics Committee (HREC) of the University of Sydney [Protocol number 2015/642].

Data and statistical analysis
Basic demographic information about the respondents was reported, along with the proportions who reported experiencing or witnessing mistreatment. We tested whether experiencing or witnessing mistreatment was associated with gender, age, enrolment, sexuality, and training stage using Fisher’s exact test and Pearson’s chi-squared test. Boxplots were created based on the levels of agreement scales in part two. Differences in levels of agreement between subgroups of respondents were tested using two-sided exact Wilcoxon rank sum tests (gender, enrolment, and sexuality) and Kruskal-Wallis test (age), as the data were negatively skewed (Figure 2). These analyses were performed in R version 3.2.4 [21]. No adjustment has been made for multiple statistical comparisons.
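To illustrate the analyses described above, the following is a minimal R sketch run on mock data. The data frame and all variable names (d, mistreated, gender, age_group, agreement) are hypothetical and the distributions are invented for illustration; the study’s actual analysis code and dataset were not published.

```r
# Minimal sketch of the statistical tests described above, run on mock data.
# All variable names are hypothetical; the study's actual code and dataset
# were not published.
set.seed(1)
n <- 519
d <- data.frame(
  mistreated = sample(c(TRUE, FALSE), n, replace = TRUE, prob = c(0.6, 0.4)),
  gender     = factor(sample(c("female", "male"), n, replace = TRUE, prob = c(0.65, 0.35))),
  age_group  = factor(sample(c("<20", "20-24", "25-34", "35+"), n, replace = TRUE)),
  agreement  = pmin(100, pmax(0, round(rnorm(n, mean = 70, sd = 20))))  # 0-100 slider
)

# Association between reporting mistreatment and gender (Fisher's exact test)
fisher.test(table(d$gender, d$mistreated))

# Association between reporting mistreatment and age group (Pearson's chi-squared test)
chisq.test(table(d$age_group, d$mistreated))

# Difference in 0-100 agreement ratings between genders (Wilcoxon rank sum test).
# Base R falls back to a normal approximation when ties are present; an exact
# test with ties requires, for example, coin::wilcox_test(..., distribution = "exact").
wilcox.test(agreement ~ gender, data = d)

# Agreement ratings across age groups (Kruskal-Wallis test)
kruskal.test(agreement ~ age_group, data = d)

# Boxplots of agreement by gender, in the style of Figure 2
boxplot(agreement ~ gender, data = d, ylab = "Agreement (0-100)")
```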

Authors A-KS, EB and KI independently conducted an initial close coding of the open text responses with advice from CH. The taxonomies and categories developed in this process were then reviewed by the research team for comparison and reliability, and a primarily taxonomic thematic coding structure was agreed upon [22,23]. This was then applied to the free text data.

Results

We received 531 completed surveys. Twelve surveys (2%) were excluded as they contained demonstrably unreliable answers or answers unrelated to medical education, leaving a sample of 519 surveys (Figure 1). The respondents were predominantly female (65%), young (median age of 24 years), local students (90%), and at the clinical stage of their training (67%) (Table 1). Each Australian medical school was represented.

Figure 1. Number of surveys included and number of incidents of mistreatment described in check box responses or free text.

Sixty percent of all respondents reported that they had witnessed or experienced adverse treatment (Table 1). Adverse treatment was more likely to be reported by female than male students (64% vs 53%), older than younger students (79% for those aged 35 years and older vs 55% for those aged 20-24 years), non-heterosexual than heterosexual students (75% vs 58%), and clinical than pre-clinical students (70% vs 40%).

For the five categories of mistreatment (general bullying, sexism, disability discrimination including mental illness, racism, and homophobia), females reported greater problems in medical education than males (p≤0.004) (Figure 2). Non-heterosexuals tended to report greater problems than heterosexuals, particularly regarding homophobia (p<0.001). International students believed mistreatment was less of an issue in medical education than local students for all categories, except racism, though differences were small (p>0.05), and there were no consistent patterns with age (data not presented).

Figure 2. Boxplots of responses of 519 medical students to statements that general bullying, sexism, disability discrimination, racism and homophobia are a problem in medical education.

Information about 301 incidents involving mistreatment was given by 194 students (Figure 1). In 92% of incidents, the victim was a student (Table 2). Respondents nominated consultants as the primary instigator in 46% of incidents. Belittlement, condescension, or humiliation was present in 65% of incidents. Most students (68%) reported that they did not react (that is, take action in response) to the event; two major reasons for not reacting were not knowing what to do and fear of repercussions. Most students were bothered by the incident, with only 4% moving the slider scale past “a little” bothered towards “not at all”, and over a third moving the slider to the lowest tenth of the scale, described as “very much”.

Table 1. Demographics of 519 survey respondents and the proportion who witnessed or experienced mistreatment in medical education.
Reported p values are for tests of independence between experiencing/witnessing mistreatment and gender, age, enrolment, sexuality and training. Fisher’s exact test was used for gender and training, and Pearson’s chi-squared test for age, enrolment and sexuality.
*There is 1 missing value.

Of the 519 respondents, 168 submitted text descriptions of individual events (Figure 1). Of these, 41 described two events, 14 described three events, and ten described four events. In total, 267 events were described. Themes captured by coding the text responses included the type of event, the perpetrator, the situation and context, aspects that compounded the situation, and any potential outcomes of the event.

The most common theme was the denigration of the student’s competence and capacity to be a good doctor.

“The senior registrar in this instance verbally abused the student regularly, claiming that it was inconceivable that she would one day be a doctor and would cause great harm to potentially anyone she would treat.”

A commonly used framing motif was that the recipient was unworthy, should not have been allowed entry into medical school, or should make way for those who are actually fit to be doctors. The stories included examples of discrimination in all the social categories we investigated. One of the most common was the perceived incompetence or unworthiness of women.

“When we got it wrong, he would tell us we were stupid, we should drop out of medicine because we’d never make a good doctor, or there was no point trying, because we’d quit later to have babies like women should.”

“I was a new mother… and was told by another student I should be at home looking after my child instead of wasting a place at medical school that would have been better off given to someone else.”

A minority of the comments were on non-medical themes such as attractiveness, racial stereotypes, or the perceived promiscuousness of the student.
“I would hear jibes about ‘Indians taking over the healthcare system of Australia’ and how ‘No one could understand their curry accent so they shouldn’t be able to work in this country’.”

“All the women in our class [were] being scaled on ‘crazy versus hot.’ [The respondent was] followed into a women’s toilet and told to get down on my knees and ‘suck my dick’ while [a male medical student] grabbed his crotch.”

The more the abuse was related to medical practice or competence, the more respondents constructed it as acceptable or understandable.
“The taunts were often unrelated to medicine which made it even more unprofessional.”

Harm and suffering
Implicitly or explicitly, almost all of the 267 free text stories indicated that the recipient(s) of mistreatment were negatively affected as a result. Some accounts directly indicated that teaching by humiliation inhibited rather than enhanced medical learning, decreasing confidence and stopping the student from seeking out further educational opportunities from medical staff.
“Instead of attempting to teach the student in any way, she would harangue the student with increasingly difficult questions — lambasting her further with every question she answered incorrectly… this destroyed the confidence of the student in question quite quickly, to the point where she was afraid and unwilling to go to her clinical placement and learn for fear of the treatment she would receive the next day.”

“I understand that his motivation is to encourage us to be thorough, safe doctors. However, I was so scared at being yelled at for getting an answer wrong in his tutorials that I didn’t learn anything.”

Perpetrators

Students
Of the 38 responses indicating students as perpetrators, the same framing of medical incompetence and unworthiness was common.
“A fellow student kept on telling me that I was stupid and inept and kept saying things like if you don’t know this that (sic) you [don’t] belong in medicine … he threatened to hit me if I continue (sic) doing idiotic things.”

Faculty
We did not include faculty as perpetrators in our questionnaire. Nevertheless, we received 22 responses citing medical school faculty (including both non-clinical staff and clinical staff in non-clinical roles) as perpetrators. These most often related to the mishandling of reports of bullying; refusal or inability to make reasonable allowances for mental illness or disability; and instances of discrimination against students.
“I was told that I was at risk of failing, had my depression called ‘your condition’ the whole time, making me think it was dirty or bad since it couldn’t even be addressed … They said to ask them any questions and the school of medicine would do everything it could to support me, but when I directly asked what they could offer they had nothing.”

Some respondents expressed disappointment and frustration at feeling unsupported by staff.
“The direct inaction by my university nearly led to my suicide … For me the only way I thought the Uni would notice my problem would be if I was to kill myself. Thankfully I pulled myself out of it and am still fighting week in week out to keep myself going.”

Often, students reported that faculty promised support to the student but provided no support or enacted no change. Other responses cited direct bullying or discriminatory action from faculty towards the student.
“We get mistreated by the very people that are in control of our assessment/progression. How can you complain against the very person that controls your future? It’s just easier to endure it.”


Table 2. Details of 301 incidents of mistreatment reported by 194 medical students.
*The number of incidents with this question answered. **More than one response could be selected therefore the percentages do not sum to 100. *** “Other” includes (from most to least frequently reported) intrusive/unwanted questions, refusal to make reasonable allowance for the needs of others, threatening failure/low grade, receiving lower evaluations/grades, asking to perform personal/inappropriate tasks, spreading malicious rumours, being coerced into unprofessional behaviour, other, and actual/threatened physical punishment. **** “Other” includes (from most to least frequently reported) not there at the time, other, chalked it up to experience, told not to respond, it wasn’t that bad, didn’t have the time, my fault, didn’t care, not intentional, not important, and the data field not being filled in.

Discussion

Our study indicates that a substantial proportion of medical students across the country experience or witness mistreatment, extending existing evidence of this issue, which had previously been confined to only two Australian medical schools. In line with other studies [4,11,24], our study shows that bullying and discrimination are commonly “medically themed”, in ways that belittle a student’s core identity and competency. This fits with sociologies of medicine that have shown that surviving humiliating treatment is often a ritual of socialisation into the profession [25]. Our study also extends our understanding of which students are affected and identifies a wider range of perpetrators than earlier studies did [5,14], including professional and support staff in clinical schools and other students.

A limitation of our study was the sliding response scale, for which the default setting was in the middle of the scale. If a respondent did not touch the slider, we could not be sure whether they had elected to forego answering the question entirely (because they did not have an opinion or did not want to answer) or were agreeing with the default mid-range answer.

While our low response rate and methods of recruitment mean that our results cannot be regarded as necessarily representative of the whole Australian medical student population, our data strengthen worldwide trends and provide confirmation that Australian medical students often experience serious mistreatment. Our findings also reflect those of the Inquiry into Medical Complaints Processes, released in December 2016 [20,26].

Our study underscored that these behaviours can be damaging to students’ mental health. Our data also confirmed the widespread reluctance to disclose, report, or confront mistreatment as students fear direct educational and professional disadvantage as a result.

This research demonstrates that mistreatment is justified by the idea of upholding professional competence in medical students. It has also shown that, for some students, mistreatment has a negative effect on their mental health and their willingness to perform. While there will be no single intervention that solves this problem, the authors suggest that clinical teaching staff may find an evidence-based short course on adult education and effective, constructive criticism useful training for teaching medical students. This could include clear guidelines for both staff and students on the difference between effective teaching and bullying. Indeed, as teaching is often considered an integral part of clinical medicine, targeted preparation for this could commence in medical school. This should be paired with effective policies ensuring that staff who have been repeatedly reported for bullying behaviours are removed from teaching positions and receive appropriate training to improve their skills before resuming work.

We also saw that under-reporting is often due to fear of educational and professional disadvantage. We can address this by encouraging the production of university and hospital policies ensuring anonymity and protection for those who report, and providing alternatives such as switching the student to a different rotation.

Another finding was how some students felt that universities were failing to take action to support them. This was particularly linked to students with a disability or mental health conditions. The reports detail either a perceived choice by the university to not support the student or the admission that university staff did not have the ability or resources to support the student.

The authors suggest that universities enact stronger policies with safety nets available for struggling students (such as changing rotations, alternative exam arrangements, or taking time off) and ensure they have the necessary resources to do so. We also suggest the creation of policies that monitor how many students are struggling, detail the issues, and take steps to ensure the problems do not continue. The authors suggest that medical school accreditation processes should include a more rigorous examination of institutional performance on this issue.

These recommendations run alongside those handed down by the Inquiry into Medical Complaints Processes, which specifically identified the government, hospitals, colleges, and universities as parties with responsibility for addressing bullying and harassment in the medical profession [26].

Changes in policy, and training educators in effective criticism, would be strengthened by gradually incorporating cultural change through positive professionalism training. Such programs use creative techniques, such as acting skills, to build core professional values and behaviours. They can also reveal the impact of bullying on others without directly shaming perpetrators or exposing victims [27-29].

Further research is required to determine the effectiveness of these approaches to change. It is as yet unknown whether a pre-emptive educational approach or more capacity to remove perpetrators from teaching roles would be most effective in reducing mistreatment. Further qualitative research would better capture the dimensions and effects of mistreatment, which may be experienced differently by male and female students, on the basis of mental health status, or with respect to sexuality or ethnicity. Such research could assist in identifying institutional barriers to managing poor behaviour among teaching and non-clinical staff, and identify the best strategies by which the effects of mistreatment in medical education can be ameliorated.

Acknowledgements
We thank Rita Shackel for her assistance with the ethics approval process.

Conflict of interest
None declared.

Correspondence
A Szubert: anna.szubert64@gmail.com

References

[1] Shem S. The House of God. New York: Dell Publishing Company; 1978.
[2] Baldwin Jr D, Daugherty SR, Eckenfels EJ. Student perceptions of mistreatment and harassment during medical school: a survey of ten United States schools. West J Med. 1991;155(2):140.
[3] Neville AJ. In the age of professionalism, student harassment is alive and well. Med Educ. 2008;42(5):447-8. doi:10.1111/j.1365-2923.2008.03033.x
[4] Rees CE, Monrouxe LV. A morning since eight of just pure grill: a multischool qualitative study of student abuse. Acad Med. 2011;86(11):1374-82. doi:10.1097/ACM.0b013e3182303c4c
[5] Scott K, Caldwell P, Barnes E, Barrett J. Teaching by humiliation and mistreatment of medical students in clinical rotations: a pilot study. Med J Aust. 2015;203(4):185. doi:10.5694/mja15.00189
[6] Silver HK. Medical students and medical school. JAMA. 1982;247(3):309-10.
[7] Ulusoy H, Swigart V, Erdemir F. Think globally, act locally: understanding sexual harassment from a cross-cultural perspective. Med Educ. 2011;45(6):603-12. doi:10.1111/j.1365-2923.2010.03918.x
[8] Fnais N, Soobiah C, Chen MH, Lillie E, Perrier L, Tashkhandi M, et al. Harassment and discrimination in medical training: a systematic review and meta-analysis. Acad Med. 2014;89(5):817-27. doi:10.1097/ACM.0000000000000200
[9] Wear D, Aultman JM, Borges NJ. Retheorizing sexual harassment in medical education: women students’ perceptions at five U.S. medical schools. Teach Learn Med. 2007;19(1):20-9. doi:10.1080/10401330709336619
[10] Babaria P, Abedin S, Berg D, Nunez-Smith M. I’m too used to it: a longitudinal qualitative study of third year female medical students’ experiences of gendered encounters in medical education. Soc Sci Med. 2012;74(7):1013-20. doi:10.1016/j.socscimed.2011.11.043
[11] Sheehan KH, Sheehan DV, White K, Leibowitz A, Baldwin DC. A pilot study of medical student ‘abuse’: student perceptions of mistreatment and misconduct in medical school. JAMA. 1990;263(4):533-7.
[12] McDonald P. Workplace sexual harassment 30 years on: a review of the literature. Int J Manag Rev. 2012;14(1):1-17. doi:10.1111/j.1468-2370.2011.00300.x
[13] Nora LM, McLaughlin MA, Fosson SE, Stratton TD, Murphy-Spencer A, Fincher R-ME, et al. Gender discrimination and sexual harassment in medical education: perspectives gained by a 14‐school study. Acad Med. 2002;77(12, Part 1):1226-34.
[14] White GE. Sexual harassment during medical training: the perceptions of medical students at a university medical school in Australia. Med Educ. 2000;34(12):980-6.
[15] Crebbin W, Campbell G, Hillis DA, Watters DA. Prevalence of bullying, discrimination and sexual harassment in surgery in Australasia. ANZ J Surg. 2015;85(12):905-9. doi:10.1111/ans.13363
[16] Wu F, Ireland M, Hafekost K, Lawrence D. National mental health survey of doctors and medical students [Internet]. Beyond Blue; 2013 [cited 2015 Aug 25]. Available from: https://www.beyondblue.org.au/docs/default-source/research-project-files/bl1132-report—nmhdmss-full-report_web
[17] Foster C. Factors influencing notions of professionalism: insights from established practitioner narratives [dissertation]. Sydney (NSW): University of Sydney; 2012.
[18] Frost H, Regehr G. I am a doctor: negotiating the discourses of standardization and diversity in professional identity construction. Acad Med. 2013;88(10):1570-7. doi:10.1097/ACM.0b013e3182a34b05
[19] Monrouxe LV. Identity, identification and medical education: why should we care? Med Educ. 2010;44(1):40-9. doi:10.1111/j.1365-2923.2009.03440.x
[20] Medical complaints process in Australia [Internet]. Canberra: Community Affairs References Committee; 2016 Nov [cited 2017 Jan 16].
[21] R: A language and environment for statistical computing [Internet]. R Foundation for Statistical Computing; 2014 [cited 2015 Aug 25]. Available from: http://www.R-project.org/
[22] Bradley EH, Curry LA, Devers KJ. Qualitative data analysis for health services research: developing taxonomy, themes, and theory. Health Serv Res. 2007;42(4):1758-72.
[23] Creswell JW. Research design: qualitative, quantitative and mixed methods approaches. 4th ed. California: SAGE Publications; 2014.
[24] Gan R, Snell L. When the learning environment is suboptimal: exploring medical students’ perceptions of mistreatment. Acad Med. 2014;89(4):608-17. doi:10.1097/ACM.0000000000000172
[25] Timm A. It would not be tolerated in any other profession except medicine: survey reporting on undergraduates’ exposure to bullying and harassment in their first placement year. BMJ Open. 2014;4(7). doi:10.1136/bmjopen-2014-005140
[26] Buisson E. We know the way, but is there the will to stop bullying? [Internet]. MJA Insight; 2017 Jan 21 [cited 2017 Jun 10]. Available from: https://www.doctorportal.com.au/mjainsight/2017/1/we-know-the-way-but-is-there-the-will-to-stop-bullying/
[27] Scott KM, Berlec Š, Nash L, Hooker C, Dwyer P, Macneill P, et al. Grace under pressure: a drama-based approach to tackling mistreatment of medical students. Med Humanit. 2017. 43(1):68-70. doi:10.1136/medhum-2016-011031
[28] Heru AM. Using role playing to increase residents’ awareness of medical student mistreatment. Acad Med. 2003;78(1):35-8. doi:10.1097/00001888-200301000-00008
[29] Heru AM. Teaching psychosomatic medicine using problem-based learning and role-playing. Acad Psychiatry. 2011;35(4):245-8. doi:10.1176/appi.ap.35


Knowledge needs and coping with atopic dermatitis: perspectives of patients and healthcare professionals in Singapore

Abstract

Background: Atopic dermatitis (AD) is a common chronic skin condition with a significant disease burden. It is therefore important to understand the knowledge needs and coping of patients with AD.

Materials and Methods: This study was conducted in a dermatology outpatient clinic in Singapore. Qualitative, semi-structured interviews were conducted with patients, dermatologists, dermatology nurses, and a medical social worker (MSW). A sample of patients with AD was recruited. Dermatologists and dermatology nurses who regularly worked with patients with AD were selected. Interviews were recorded and transcribed verbatim. The framework method was employed for data analysis.

Results:
A total of 22 participants were recruited, comprising eight patients with AD, eight dermatologists, five dermatology nurses, and one MSW. The main patient needs identified were knowledge about AD and coping with the psychosocial aspects of the disease. Regarding knowledge about AD, patients wanted to know more about the underlying causes and management of AD. On coping with psychosocial aspects, patients expressed their appreciation for both the concern shown by their healthcare professionals and the opportunity to share their experiences. Some patients had difficulty coping with rashes on visible areas of their body.

Conclusion: It is essential to include education surrounding AD pathophysiology and the psychosocial aspects of coping with AD during counselling of these patients. Itch management, knowledge of possible triggers, and discussion of complementary and alternative medicine should be included as components of counselling. With respect to psychosocial counselling, patients could be given strategies to cope with both the changes in appearance and the frustration associated with undesired outcomes.

 

Introduction

Atopic dermatitis (AD), also known as atopic eczema, is a common chronic skin condition prevalent in people who have a family history of atopy, including asthma, eczema, or allergic rhinitis [1]. In the United States, the prevalence of AD has been reported to be 10.7% in children and 10.2% in adults [2]. AD is also the most common skin disease in the Asian population [3,4]. In Singapore alone, 20.8% of children between the ages of seven and 16 have been diagnosed with AD [4].

AD is characterised by intermittent periods of exacerbation and remission. Patients with AD have pruritic rashes, erythema, lichenified patches, and excoriations due to scratching of the skin. These symptoms often affect the patient’s sleep and mood, resulting in a decreased overall quality of life [5,6].

Due to its significant disease burden, understanding the education needs of patients is important for developing a holistic program to help patients manage their condition. A review of the literature indicates that the education needs of AD patients and their caregivers include disease pathophysiology, awareness of trigger factors, skin care (including the application of topical treatments such as steroids, moisturisers, and wet wraps), the range of treatment modalities, management of symptoms such as itch and sleep disturbances, nutritional aspects, and coping strategies [7,8]. A good education program has been found to result in a significantly lower dermatitis severity index, increased use of emollients and wet wraps, decreased use of steroids, reduced itching and irritability, and improved sleep [7,9].

It is important to understand knowledge needs and coping mechanisms from both the patient’s and the healthcare professional’s perspectives. Studies of the education needs of AD patients have mainly been conducted in Western countries. Conducting such a study in an Asian context will enable AD education programmes to be tailored to these populations [10]. This study aims to build an understanding of these issues through interviews with patients and their multidisciplinary healthcare team.

 

Materials and Methods

Table 1. The guide used to focus the group discussions on atopic dermatitis for patients and healthcare professionals.

Design
This study was conducted in a tertiary dermatological centre in Singapore between June and December 2015, after ethical approval was obtained from the Domain Specific Review Board (DSRB) of the National Healthcare Group (NHG) (study reference 2015/00236). Qualitative semi-structured interviews were conducted with AD patients and healthcare professionals. Both patients and healthcare professionals were included to obtain the perspectives of both groups and to identify any conflicts between them. In a semi-structured interview, an interview guide with broad questions was used to focus the discussion (Table 1). Patients and nurses were interviewed in groups of three to five participants, while dermatologists and a medical social worker (MSW) were interviewed individually. All interviews were conducted in English by the same primary investigator to ensure consistency.

Participants
A purposive sample of patients with AD was recruited for the focus group interviews. Recruited patients had had AD for at least twelve months and were all older than 21 years. Dermatologists and dermatology nurses who regularly worked with patients with AD were selected for the interviews.

Data collection
Patients who met the inclusion criteria and had clinic visits during the study period were recruited and written consent was obtained. Demographic data obtained from the patients included age, gender, race, level of education, occupation, smoking and drinking status, areas of skin affected by AD, age of onset, disease duration, previous treatments, and history of inpatient admission due to their skin condition.

The demographic data obtained for dermatologists and nurses included age, position, number of years practicing, and academic qualifications. The focus group interviews lasted around 60 minutes, and the individual interviews lasted 20-45 minutes. Data was collected until saturation was achieved, which meant that no new information was obtained from subsequent interviews.

Data analysis
Recorded interviews were transcribed verbatim. The framework method was employed for data analysis. Briefly, this involved the analysts familiarising themselves with the interview content, coding the transcripts, and categorising the data into themes [11]. Trustworthiness was achieved using strategies suggested by Lincoln and Guba [12], which included credibility, transferability, dependability, and confirmability. Credibility was achieved through triangulation and critical self-reflection. Triangulation, which was used to ensure validity through exploring multiple perspectives, was achieved through interviewing both patients and their healthcare professionals [13]. Critical self-reflection (the reflection of one’s viewpoints) was used in data collection and analysis to reduce bias from self-imposed viewpoints [14]. Transferability (generalisability of results) was achieved through transcription of the entire interview to provide context and meaning [15]. Dependability (the reliability of the results) was ensured by developing an audit trail consisting of raw data, audio recordings, products of data analysis and synthesis, and interview guides, to increase the overall transparency of the research process [14]. Finally, confirmability was achieved when the criteria of credibility, transferability, and dependability were established [12]. The preliminary analysis was completed by a single researcher, who then presented the selected pre-codes and themes to the other team members.
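The framework method is, at its core, a matrix-based way of charting coded excerpts against themes and cases so that views from different participant groups can be compared side by side. As a purely illustrative sketch of the charting step (the codes, themes, and excerpts below are hypothetical placeholders, not data from this study), it might look like the following in Python:

```python
from collections import defaultdict

# Hypothetical coded excerpts produced during the coding stage:
# (participant, code, excerpt) triples. None of these are study data.
coded_excerpts = [
    ("Patient 1", "itch management", "I know I should not scratch, but..."),
    ("Patient 2", "triggers", "Heat and dust seem to set it off."),
    ("Nurse 1", "itch management", "We suggest habit reversal techniques."),
    ("Doctor 1", "support groups", "A good support group is critical."),
]

# Hypothetical mapping from codes to higher-level themes.
code_to_theme = {
    "itch management": "Knowledge about AD",
    "triggers": "Knowledge about AD",
    "support groups": "Coping with psychosocial aspects",
}

# Charting step: build a framework matrix (theme -> participant -> excerpts).
matrix = defaultdict(lambda: defaultdict(list))
for participant, code, excerpt in coded_excerpts:
    theme = code_to_theme.get(code, "Uncategorised")
    matrix[theme][participant].append((code, excerpt))

for theme, rows in sorted(matrix.items()):
    print(theme)
    for participant, entries in sorted(rows.items()):
        for code, excerpt in entries:
            print(f"  {participant} [{code}]: {excerpt}")
```

Laying the data out this way is what allows patient and healthcare professional perspectives on the same code to be triangulated directly.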

Results

A total of 22 participants were recruited: eight patients with AD, eight doctors, five nurses, and one MSW. Two focus groups were conducted for the patients, and one for the nurses. There were nine face-to-face interviews with the doctors and MSW.

Patient demographics and clinical data
The mean age of the patients was 30.9±7.8 years, and the gender distribution was equal (11 (50%) males, 11 (50%) females). AD affected numerous body regions, including the scalp (n=4, 18%), face (n=4, 18%), trunk (n=5, 23%), upper limbs (n=6, 27%), and lower limbs (n=8, 36%). The mean age of onset of AD was 13.9±12.1 years, and the mean duration since AD diagnosis was 17.0±9.1 years. Topical steroids (n=8), prednisolone (n=6), and phototherapy (n=7) were common treatments received.

Healthcare professional demographics
The mean age of the healthcare professionals was 42.1±12.2 years. The mean duration of specialisation in dermatology was 12.1±10.4 years.

Themes
The main needs of patients could be broadly divided into two themes: knowledge about AD and coping with the psychosocial aspects of the disease (Table 2).

Table 2. Themes, sub-themes and codes arising from the patient and healthcare professional interviews on atopic dermatitis.

Knowledge about AD
This theme includes the knowledge needs of patients with AD comprising the underlying pathophysiology of AD and management of the disease.

Pathophysiology of AD
Most healthcare professionals believed that patients required only the most basic information on the nature of AD, so as not to overwhelm them. However, some patients were interested to know about the various subtypes of eczema, and tips to help them identify the severity and status of their condition.

“They just need to know two or three key points of information. Otherwise they forget everything which is said. Firstly, I tell them the genetic causes. The gene makes good skin that’s why they have poor skin. Because of the poor skin, they have poor skin barrier. Water is lost and a lot of allergens or infectious agents can come in. [This is] good enough.” (Healthcare professional 3)

“Personally I will like (sic) to have more information. But I can see how sometimes more information gets you more worried especially if they tell you some eczema are more dangerous and it can last forever (sic), for example. It would have been nice if I knew ‘here are different severities of eczema’ for example. Not just types, but more serious, less serious and some kind of sense of where you are along the spectrum. That will be useful.” (Focus group 2)

Management of AD
Healthcare professionals felt that the importance of the use of moisturisers and topical steroids could not be overemphasised among patients with AD. Patients were often very concerned with their itch, and wanted better strategies to alleviate their symptoms. They were also very keen to discuss the role of complementary and alternative medicines (CAMs) as part of their overall management, however, these options were often not addressed or quickly dismissed.

“I think they need to know the importance of moisturiser. I think when you ask the patients if they put (sic) moisturiser, most of them will say, ‘maybe once’ or sometimes, ‘forget’. They always think that steroid is the main thing. So moisturiser is very important for eczema because you want to resolve the barrier function. So most of the time I spend quite a long time telling them how important the moisturiser is as a maintenance.” (Healthcare professional 6)

“Maybe for them [healthcare professionals] to be more open to alternative treatment. You [another participant] mentioned is (sic) gut health and all that. Things like having more holistic treatment options instead of just dismissing it as, ‘Ah, doesn’t work’. They need to be able to discuss with you.” (Focus group 1)

Patients had variable preferences with respect to the amount and type of treatment-related information they received. Some preferred to know all the possible treatment options, while others believed that the doctors would make the best decision for them.

“[The doctors can] outline the different treatment methods and what are the pros and cons of each.” (Focus group 1)

“I think I will put myself in the hands of the doctor. Because they know our condition better. They have seen a lot of patients with similar conditions. So maybe they know what is the best for us.” (Focus group 1)

Coping with the psychosocial aspects of AD
Patients expressed feeling frustrated and stressed by the supposedly well-intended opinions of relatives, friends, and strangers who did not understand that there was no cure for their eczema, yet still continued to provide advice. Patients also expressed that they appreciated the concern shown by their healthcare professionals, and also the opportunity to talk about their eczema exacerbations and how to prevent them. Some patients had difficulties coping with the unsightly rashes on visible areas of the body, such as the face, arms and legs.

“Some uncles and aunties will say, ‘you must do this, do that’. But I think sometimes these kind (sic) of things make us feel a bit down. As I mentioned, my friend’s son has severe eczema on his face. So she also has people coming up to her and telling her things. She feels very upset about it. So I feel that, it’s a kind of a stress in a way.” (Focus group 1)

“I think it’s showing concern for you. Because when you come up for your routine check-ups, it is good that they give you a chance to share about any flare ups that you experienced, and discuss what might have caused it, and what you can do to prevent it.” (Focus group 1)

Both patients and healthcare professionals agreed that having support groups for AD patients is essential for enabling them to share their challenges and provide support for one another.

“I do think such support groups are good for patients to come together and share. Because they [patients] do trial and errors for different kind of remedies (sic). So sharing experience will help different patients to spot each other needs (sic).” (Focus group 2)

“Showing them support groups. So it’s just not the nurses [only], but you organising a good support group. I think that is very critical for them.” (Healthcare professional 7)

Discussion

Due to the chronic nature of AD and its impact on patients’ physical and psychosocial health, education is critical to ensure successful long-term management of the disease and adherence to treatment. Barbarot and colleagues [10] emphasised the importance of tailoring AD education programmes to the sociocultural context of the patient. In this study, we explored the education needs of patients with AD in an Asian context. Both patients and healthcare professionals identified two main components pertinent to AD counselling: knowledge about AD, and coping with the psychosocial aspects of the disease.

Knowledge about AD
Although both patients and healthcare professionals agreed that knowledge of AD pathophysiology was important, patients wanted to know more about the different subtypes and severities of AD, whereas healthcare professionals believed that basic information was sufficient. Patients felt that this knowledge could help them manage an impending exacerbation when, for example, they noticed subtle changes in their skin condition. Although the majority of patients in this study wanted more information on their disease, one patient also acknowledged that having more knowledge might generate unnecessary worry and could therefore have a negative impact. Hence, it is important to tailor the amount and type of information provided to the needs of the patient.

Healthcare professionals tend to emphasise the use of moisturisers and topical steroids in the management of AD, and this emphasis also features prominently in nurse-led eczema counselling programmes [7,9]. However, patients did not feel that they needed more information on the use of topical treatments, possibly indicating that sufficient information is already being provided in this respect.

Regarding medical treatments, patients wanted healthcare professionals to be more open to discussions surrounding CAMs, and not simply discount them as unscientific or ineffective. A recent study similarly found that the majority of patients considered it important for healthcare professionals to know about CAMs for the treatment of AD [16]. Education and counselling regarding CAMs may prove to be an important part of patient counselling, particularly given the chronic nature of AD and the limitations of current therapies [17]. It has also been found that, in addition to using their prescribed therapies, patients who were more familiar with the Internet were likely to search online for complementary therapies, including homoeopathy, ingestion of essential fatty acids, Chinese herbal therapy, phytotherapy, acupuncture, autologous blood therapy, and bioresonance [18]. Small trials have shown that these therapies may have some positive effects, but the evidence is not yet sufficient to support their use [19]. Despite the lack of scientific and clinical evidence supporting the effectiveness of CAMs, healthcare professionals need to be able to address these issues with their patients.

Symptomatic itch was a major concern for all patients included in this study, and they expressed a desire for more information relating to its management. Although patients knew that they should not scratch their skin, as scratching would worsen their AD, many found this hard to avoid. This highlights the importance of itch management as a component of AD counselling. Besides antihistamines, the current (and often unsuccessful) first-line therapy for controlling itch, patients could be taught distraction and habit reversal techniques [20].

Coping with the psychosocial aspects of AD
Both healthcare professionals and patients agreed that a support group could be a platform for patients to share their AD coping methods. Weber and colleagues [21] found that support groups helped improve patients’ quality of life, personal relationships, and participation in leisure activities. The impact of AD on body image has been documented in the literature [6]. Because changes in body image can impair socialisation, support groups could provide a platform for overcoming these issues.

Participants also found it stressful and frustrating to receive advice from relatives and friends who had little knowledge of AD. Participants reported that most people believed the rashes were caused by a food allergy and told them to avoid certain foods, or offered suggestions to cure their AD that had no effect. The participants in this study were all adults above 21 years of age, meaning they were unlikely to outgrow their disease. On a community level, there could be more education on the various types of skin rashes and on the possibility of AD continuing into adulthood.

Study limitations
This was a cross-sectional study; therefore, changes in patients’ needs and coping over time could not be captured. In addition, participants’ accounts relating to their initial diagnosis were based on recall, which may be inaccurate or subject to bias.

Practical implications
Itch management and management of exacerbations are essential for helping AD patients cope with their disease. AD is a chronic skin condition with no cure; hence, it is common for patients to seek alternative methods of treatment, and CAMs are widely used. Furthermore, people who are more familiar with the Internet may search for information on these therapies online [18]. Healthcare professionals need to be able to discuss the use of these therapies with patients, including explaining that, while there may not be evidence to support their use, CAMs may be used if the components of the therapy are identified and not known to cause any serious adverse effects.

Support groups could be used to help patients cope with the psychosocial aspects of their disease. Patients may also benefit from support in managing the stress and frustration arising from well-intentioned, but unhelpful comments from family members and friends.

Potential future directions
This study highlighted some conflicts between the information patients wanted and what healthcare professionals perceived they needed. Most patients wanted more information on the nature of AD, which healthcare professionals believed was unnecessary. Regarding treatments, beyond the use of topical steroids and moisturisers, patients wanted more information on CAMs, which healthcare professionals did not believe were beneficial. Despite the lack of scientific and clinical evidence to support the effectiveness of CAMs, healthcare professionals need a basic knowledge of these therapies, as discussing them was important to patients. With respect to psychosocial issues, patients could be taught how to cope with the changes in appearance associated with AD, and with the stress and frustration arising from the advice given by family and friends. A counselling programme should be developed to address these patient needs.

Acknowledgements
The study was funded by a National Healthcare Group – Health Outcomes and Medical Education Research (NHG-HOMER) grant (FY15/A02). The authors would like to thank the participants in the study for their time and input.

Conflict of interest

None declared.

References

[1] Darsow U, Wollenberg A, Simon D, Taïeb A, Werfel T, Oranje A, et al. ETFAD/EADV eczema task force 2009 position paper on diagnosis and treatment of atopic dermatitis. J Eur Acad Dermatol Venereol. 2010;24(3):317-28. doi:10.1111/j.1468-3083.2009.03415.x

[2] Silverberg JI, Hanifin JM. Adult eczema prevalence and associations with asthma and other health and demographic factors: a US population–based study. J Allergy Clin Immunol. 2013;132(5):1132-8. doi:10.1016/j.jaci.2017.05.009

[3] Goon T, Goh C, Yong A. Endogenous eczema. In: Chua S, Goh C, Ng S, Tan S, editors. Asian skin: a reference color atlas of dermatology and venereology. 2nd ed. Singapore: McGraw-Hill Education (Asia); 2015.

[4] Tay Y, Kong K, Khoo L, Goh C, Giam Y. The prevalence and descriptive epidemiology of atopic dermatitis in Singapore school children. Brit J Dermatol. 2002;146(1):101-6. doi:10.1046/j.1365-2133.2002.04566.x

[5] Agner T. Compliance among patients with atopic eczema. Acta Derm Venereol. 2005;215(33). doi:10.1080/03658340510012471

[6] Ang S, Teng C, Monika T, Wee H. Impact of atopic dermatitis on health-related quality of life among infants and children in Singapore: a pilot cross-sectional study. Proceedings of Singapore Healthcare. 2014;23(2):100-7. doi:10.1177/201010581402300203

[7] Moore E, Williams A, Manias E, Varigos G. Nurse‐led clinics reduce severity of childhood atopic eczema: a review of the literature. Brit J Dermatol. 2006;155(6):1242-8. doi:10.1111/j.1365-2133.2006.07534.x

[8] Jackson K, Ersser S, Dennis H, Farasat H, More A. The eczema education programme: intervention development and model feasibility. J Eur Acad Dermatol Venereol. 2014;28(7):949-56. doi:10.1111/jdv.12221

[9] Cork M, Britton J, Butler L, Young S, Murphy R, Keohane S. Comparison of parent knowledge, therapy utilization and severity of atopic eczema before and after explanation and demonstration of topical therapies by a specialist dermatology nurse. Br J Dermatol. 2003;149(3):582-9. doi:10.1046/j.1365-2133.2003.05595.x

[10] Barbarot S, Bernier C, Deleuran M, Raeve L, Eichenfield L, El Hachem M, et al. Therapeutic patient education in children with atopic dermatitis: position paper on objectives and recommendations. Pediatr Dermatol. 2013; 30(2):199-20. doi:10.1111/pde.12045

[11] Gale N, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13(1):117. doi:10.1186/1471-2288-13-117

[12] Lincoln Y, Guba E. Naturalistic inquiry (Vol. 75). Beverly Hills, CA: Sage; 1984.

[13] Polit D, Beck C. Essentials of nursing research: appraising evidence for nursing practice. United States: Lippincott Williams & Wilkins; 2013.

[14] Macnee C, McCabe S. Understanding nursing research: using research in evidence-based practice. United States: Lippincott Williams & Wilkins; 2008.

[15] Shenton A. Strategies for ensuring trustworthiness in qualitative research projects. Edu Info. 2004;22(2):63-75. doi:10.3233/EFI-2004-22201

[16] Munidasa D, Lloyd-Lavery A, Burge S, McPherson T. What should general practice trainees learn about atopic eczema? J Clin Med. 2015;4(2):360-8. doi:10.3390/jcm4020360

[17] Lim M, Sadarangani P, Chan H, Heng J. Complementary and alternative medicine use in multiracial Singapore. Complement Ther Med. 2005;13(1):16-24. doi:10.1016/j.ctim.2004.11.002

[18] Ring J, Alomar A, Bieber T, Deleuran M, Fink‐Wagner A, Gelmetti C, et al. Guidelines for treatment of atopic eczema (atopic dermatitis) part II. J Eur Acad Dermatol Venereol. 2012;26(9):1176-93. doi:10.1111/j.1468-3083.2012.04636.x

[19] Vieira BL, Lim NR, Lohman ME, Lio PA. Complementary and alternative medicine for atopic dermatitis: an evidence-based review. Am J Clin Dermatol. 2016;17(6):557-81. doi:10.1007/s40257-016-0209-1

[20] Mochizuki H, Kakigi R. Itch and brain. J Dermatol. 2015;42(8):761-7. doi:10.1111/1346-8138.12956

[21] Weber MB, Prati C, Soirefman M, Mazzotti NG, Barzenski B, Cestari T. Improvement of pruritus and quality of life of children with atopic dermatitis and their families after joining support groups. J Eur Acad Dermatol Venereol. 2008;22(8):992-7. doi:10.1111/j.1468-3083.2008.02697.x

Categories
Original Research Articles

Routine blood tests in hospital patients: a survey of junior doctors’ cost awareness and appropriate ordering

Abstract

Background: Excessive and redundant ordering of pathology tests contributes to increasing healthcare costs. Common blood tests, such as full blood counts, liver function tests, serum electrolytes, and C-reactive protein, are frequently ordered with little consideration of purpose or intent. Most commonly, the ordering of ‘routine’ blood tests is the responsibility of the most junior member of the medical team (the intern). We hypothesise that overutilisation of pathology tests exists due to an under-appreciation of the costs of testing.

Materials and Methods: We surveyed 50 interns regarding their comprehension of the cost of four commonly ordered pathology tests. We also identified the proportion of participants that had ordered an investigation inappropriately.

Results: Full blood counts, serum electrolytes, liver function tests, and C-reactive protein were, on average, overestimated in cost by 9%, 32%, 36%, and 71%, respectively. Costs were underestimated in only a minority of cases: 32% for full blood counts, 14% for serum electrolytes, 16% for liver function tests, and 18% for C-reactive protein. All participants recalled circumstances in which they had inappropriately ordered an investigation.

Conclusion: On the whole, junior doctors did not underestimate the cost of pathology tests. Junior doctors are poorly informed about the cost of tests; however, this does not appear to influence their ordering, with 100% of participants reporting that they had inappropriately ordered investigations.

 

Introduction

The use of diagnostic testing is essential in the accurate diagnosis, monitoring, and screening of various diseases [1], with an estimated 70% of clinical decisions being substantially based on the results of such investigations [2]. Over the past 20 years, the number of laboratory tests available to clinicians has more than doubled [3], with most clinical laboratories in Australia reporting a 5-10% increase in their annual workload [4]. Similarly, the uptake of imaging-based diagnostics has grown at a rate of 9% annually [5]. Laboratory medicine is the single highest-volume activity in healthcare, with demand increasing disproportionately to other medical activities [6].

Unfortunately, these increased volumes of testing have not always resulted in clinically relevant or useful patient interventions. Indeed, numerous studies [3,7-9] have investigated the impact of inappropriate pathology testing. While definitions of inappropriate use vary, it can generally be understood as testing whose findings have no impact on the clinical decision-making pathway. Estimating the size of the issue is difficult. Miyakis et al [10] found that 68% of a panel of 25 investigations failed to contribute to a patient’s clinical management. Sarkar et al [11] reviewed the cases of 200 patients with haemostatic disorders, and found that 78% of investigations ordered did not influence patient management, representing an avoidable cost of $200,000. Rogg et al [12] found that repeat investigations are redundantly ordered in 40% of patients transferred from the emergency department to inpatient wards.

Rates of overuse reported in other studies ranged from 40-65%, depending on how ‘appropriate use’ was defined [13-17]. Van Walraven and Naylor [19] reported, in a systematic review of laboratory clinical audits, overuse ranging from 4.5-95%. A more recent meta-analysis by Zhi et al [20] estimated the overall prevalence of overuse at 20.6%. In Canada, redundant test ordering is estimated to cost $36 million (CAD) annually [21], funds that could otherwise be redirected to other essential areas of healthcare.

The impact of inappropriate testing cannot, however, be quantified simply in terms of monetary cost. Even high-value and high-quality investigations have limitations. False positive results can lead to unnecessary, anxiety-provoking, and costly follow-up investigations [22-24]. Appropriate ordering decreases the likelihood of false positive results, thereby reducing the physical and emotional stress associated with them.

Improving the ordering of laboratory diagnostics is a challenging issue that has been widely studied, with variable success. The consensus across these studies is that education, audit, and feedback regarding appropriate investigations can limit the demand for diagnostic investigations. Miyakis et al [10] observed a 20% reduction in avoidable testing after clinicians were educated about their test ordering behaviours, the costs of ordering, and the factors that contributed to overuse. Feldman et al [25] reported an 8.6% reduction in the number of tests ordered after fee data were attached to routinely ordered pathology investigations. A similar study by Tierney et al [26] reported a 7.7% reduction in the number of tests ordered. Hampers et al [27] found that listing the individual charges of diagnostic tests at the time of ordering resulted in a 27% reduction in the total ordering of diagnostic tests.

Miyakis et al [10] found that junior medical staff were 20% more likely to order unnecessary investigations than senior staff. This observation is important because, in public teaching hospitals, junior medical staff are most often responsible for ordering investigations, often under a degree of self-direction. It is in this group that education regarding cost awareness would be most impactful in reducing inappropriate ordering. The few past studies available suggest a knowledge gap regarding cost comprehension among junior medical staff. Khromova and Gray [28] found that 82 (70%) respondents at a single institution felt they needed further education in the ordering of appropriate tests. Stanfliet et al [30] found that all interns interviewed (n=61) across two South African hospitals reported that they would benefit from further education in the appropriate ordering of investigations.

The aim of this pilot study was to evaluate the awareness that junior medical staff (interns) at the Gold Coast University Hospital have of the costs of various commonly requested blood tests. It was hypothesised that systematic over-ordering may be accounted for by underestimation of cost. If this was confirmed, it would be possible to devise educational interventions designed to manage these deficiencies, which may subsequently promote more cost-effective and appropriate investigation. The efficacy of this process has been suggested in previous studies [10,25-27].

 

Materials and Methods

Table 1. Example of questions asked of survey participants to gauge their understanding of the costs associated with pathology testing in hospitals

Study design

The study utilised an observational design, based on a questionnaire developed to assess the cost comprehension of interns at the Gold Coast University Hospital (Table 1). The questionnaire included questions relating to some of the most commonly ordered investigations at the hospital: full blood count (FBC), liver function tests (LFTs), serum electrolytes (UES), and C-reactive protein (CRP). Additionally, we asked participants to report whether they had ever requested a pathology test that they felt was not clinically indicated, or was inappropriate.

Ethics approval to perform this survey was granted by the Human Research and Ethics Committee of the Gold Coast University Hospital (HREC/16/QGC/320).

 

Participant selection and setting

Medical staff at the classification of intern (first-year medical graduates) were approached for inclusion. These staff represented the most junior members of their respective medical and surgical teams. The centre in which this project was conducted is the largest facility of the Gold Coast Health district, which, across its Southport and Robina campuses, comprises over 750 beds and receives over 100,000 emergency presentations annually. Both campuses are major teaching hospitals, and the majority of interns were graduates of Queensland universities.

The questionnaires were completed during mandatory teaching sessions, which all interns were required to attend. Each participant from the study population had an equal likelihood of being involved in the study. A total of 88 interns were present at these education sessions. Participants were approached randomly with requests for their participation until a sample of 50 participants was reached.

To enhance the response rate and ensure reliability, all surveys were completed during face-to-face meetings with the principal investigator, ensuring that responders could not have advance knowledge of the specific questions and prepare by accessing reference materials.

Data collection

The actual cost of the four commonly ordered pathology tests (FBC, CRP, UES, LFTs), according to hospital financial records, was used as the comparator for participant estimates. These values are expressed as a total dollar value without a breakdown of individual costs, and cover labour, consumables, processing, and reporting.

Questionnaire responses were de-identified, and no personal or identifying information was retained. Participation and completion of the questionnaire was completely voluntary. This process was repeated until a minimum of 50 completed questionnaires had been collected. It was thought that this number would allow for an equal distribution of uncontrolled variables amongst the study sample.

Statistical analysis

Data was collated using Microsoft Excel 2016 (Microsoft Corporation, Redmond, WA, USA) and statistical analysis was performed using SPSS version 23 (SPSS Inc, Chicago, IL, USA). Continuous data were analysed for normality using the Kolmogorov-Smirnov method. The mean estimated cost provided by participants was compared to the true cost of the relevant test using a one-sample t-test, with p<0.05 considered statistically significant. Simple graphical representations were used to visualise the number of participants who overestimated or underestimated the cost of each test. Responses within 25% of the actual cost were regarded as accurate; estimates more than 25% above the true cost were considered overestimates, and estimates more than 25% below the true cost were considered underestimates. These thresholds were suggested by a previous systematic review examining physician cost awareness [32].
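As a rough illustration of the two analyses described above — the one-sample t-test against the true cost, and the ±25% accuracy banding — the following Python sketch uses scipy; the true cost and the estimates are hypothetical placeholder values, not the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical intern estimates (AUD) for one test; the true cost is a
# placeholder, not the hospital's actual figure.
true_cost = 15.00
estimates = np.array([10.0, 12.5, 15.0, 18.0, 20.0, 25.0, 30.0, 40.0])

# One-sample t-test: does the mean estimate differ from the true cost?
t_stat, p_value = stats.ttest_1samp(estimates, popmean=true_cost)
print(f"mean estimate = {estimates.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")

# Classify each estimate using the +/-25% accuracy band from the study.
def classify(estimate: float, true_cost: float, band: float = 0.25) -> str:
    if estimate > true_cost * (1 + band):
        return "overestimate"
    if estimate < true_cost * (1 - band):
        return "underestimate"
    return "accurate"

labels = [classify(e, true_cost) for e in estimates]
for label in ("underestimate", "accurate", "overestimate"):
    share = labels.count(label) / len(labels)
    print(f"{label}: {share:.0%}")
```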

 

Results

A total of 50 interns at the Gold Coast University Hospital were included in this study. For every test, the mean estimated cost was higher than the true cost.

For all tests except the FBC, costs were routinely overestimated: by 50% of participants for UES, 56% for LFTs, and 68% for CRP testing (Figure 1, Table 2). The FBC was the most accurately predicted test, with 40% of respondents accurately estimating the true cost.

Figure 1. Proportion of candidates to overestimate (grey), accurately estimate (orange) or underestimate (blue) the true cost of pathology tests. Estimates within 25% of the true cost were regarded as accurate. Estimates more than 25% above the true cost were regarded as overestimates. Estimates more than 25% below the true cost were regarded as underestimates
Figure 2. Comparison of the mean estimates of pathology test costs amongst interns (blue), compared with the actual cost of the test (orange).

Comparing the mean estimated cost and the true value directly, we observed a statistically significant overestimation of cost for LFTs, UES, and CRP testing. LFTs, UES, and CRP were overestimated by 35.5% ($20.87±10.53, p<0.001), 31.5% ($19.76±12.55, p=0.001), and 70.6% ($39.97±38.20, p<0.001), respectively, when compared to the true costs. FBC testing was overestimated by only 9% ($17.25±13.43, p=0.442).
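For clarity, the percentage figures above follow from the relation between the mean estimate and the true cost (the true costs themselves are not reported in the text); the implied true cost shown below is our own back-calculation from the published figures, not a value taken from the hospital records:

\[
\text{overestimation (\%)} = \frac{\bar{x}_{\text{estimate}} - c_{\text{true}}}{c_{\text{true}}} \times 100,
\qquad \text{e.g. for LFTs:}\quad
c_{\text{true}} \approx \frac{\$20.87}{1.355} \approx \$15.40.
\]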

Table 2. Proportion of respondents who underestimated costs, accurately predicted costs and overestimated costs of commonly ordered blood tests. Estimates within 25% of the true cost were regarded as accurate. Estimates more than 25% above the true cost were regarded as overestimates. Estimates more than 25% below the true cost were regarded as underestimates.
Table 3. Actual and estimated cost of pathology tests.

Of note, 100% of respondents reported having ordered an inappropriate pathology test during their clinical practice. We had hypothesised that this inappropriate ordering would be explained by an assumption that tests were cheaper than their true value; however, this was not the case, as the majority of participants overestimated costs for most investigations (Figure 2, Table 3).


Discussion

The results of this study suggest that understanding of the cost of common pathology tests is highly variable between individuals, with a clear lack of consensus amongst the study group as a whole. Surprisingly, the estimated cost of pathology testing was, on average, higher than the true cost of testing. In this study, 100% of individuals reported having ordered a pathology test inappropriately, and previous studies [7-11] have documented the prevalence of test over-ordering. This suggests that factors other than an underappreciation of cost are driving excessive ordering amongst medical staff.

It was not surprising that the majority of interns would admit to ordering unnecessary blood tests. This could be because it is often easier to perform the tests with onsite phlebotomy services. Due to the high workload of interns, ordering “routine blood tests” is convenient, time-efficient, and often an expectation of senior staff.

In agreement with previous studies [29,30], interns at the Gold Coast University Hospital demonstrated a poor understanding of the cost of pathology investigations. They also reported knowingly ordering inappropriate or unnecessary investigations. We propose three potential explanations for this. First, some participants may have had prior experience with or knowledge of commercial pathology testing, which tends to carry higher costs than in-house hospital pathology testing. Second, due to clinical inexperience, the perceived clinical value of the unnecessary tests may have been judged greater than the monetary cost of performing them. Finally, it is possible that cost reduction is not perceived to be the responsibility of the most junior member of the management team. A study by Tilburt et al [29] in 2013 found that only 36% of physicians considered themselves responsible for reducing healthcare costs. Simply put, many clinicians do not acknowledge or accept their own role in rationalising healthcare costs.

Miyakis et al [10] found that junior staff ordered inappropriate investigations 20% more frequently than senior staff (in a single academic medical department). However, the same study did not suggest cost comprehension as a driving force for this difference. Schilling [31] found that only 28% of Swedish emergency department physicians correctly predicted the cost of investigations used to investigate pulmonary emboli, concluding that level of experience did not imply better knowledge of the costs of investigation. A systematic review by Allan et al [32] of 14 studies of diagnostic and non-drug therapy cost estimates reported that clinicians of various nationalities estimated costs to within 25% of the test’s correct value only 33% of the time, and that year of study, level of training, and specialty did not appear to affect this accuracy. These studies covered mixed specialties in various European and American institutions. Broadwater-Hollifield et al [33] found that only 20% of emergency physicians correctly predicted the costs of common medical tests (within 25% of the true cost) across eleven emergency departments in Utah, USA. For comparison, in aggregate, interns in our study correctly predicted cost (within 25% of the true value) for 29.5% of proposed tests. Although the populations and settings varied across these studies, the resounding consensus is that clinicians, in general, predict the cost of investigations poorly.
While experienced clinicians may have limited knowledge of the costs of the investigations they order, they may request more relevant investigations, likely as a consequence of experience and a better understanding of the specific indications and limitations of particular tests [33]. However, seniority does not always correlate with a reduced volume of testing. For example, a recent study by Magin et al [34] found that among Australian GP registrars, the number of investigations ordered increased by 11% for every six months of cumulative training, indicating that the relationship between ordering and experience may be more complex. It may be that, with greater comprehension of potential pathology, registrars in later stages of training have greater concern for potential missed diagnoses, or a lower acceptance of ambiguity in general.

Although unnecessary testing is often associated with a net detrimental effect, there are situations where high-volume ordering of low-yield investigations can capture significant pathology, allowing early management of conditions that might otherwise have led to significant mortality and morbidity. Such screening programs usually undergo rigorous cost-benefit analyses to ensure that the net benefits outweigh the risks and costs of implementation; examples include routine screening for breast cancer [35] and colorectal cancer [36,37]. In these tests, despite a low pretest probability of disease, a positive result can alter patient mortality and morbidity to a degree that justifies routine testing. Another example is routine screening for inborn errors of metabolism, which is performed for every child born in Australia. Although these illnesses are rare, the screening tests have high sensitivity and specificity, allowing early intervention and substantially better outcomes for affected patients [38]. While we acknowledge that this ‘shotgun’ approach can occasionally have positive outcomes, clinicians face an ethical conundrum: maximising the use of resources in every patient risks eroding and diluting the overall effectiveness of the healthcare system, and each additional investigation increases the risk of a false positive result or adverse event. We do not advocate compromising patient safety in favour of cost savings, but as 100% of the junior doctors surveyed in this study had ordered inappropriate tests, some degree of cost containment must be considered.

Targeted interventions to curtail unnecessary investigations may assist in this regard. Given the overestimation of costs found in this study, it is unlikely that providing fee data for investigations would significantly affect ordering behaviours. A better approach would be to understand what factors more senior clinicians take into consideration when ordering tests, given their tendency to order fewer inappropriate investigations than interns. Further studies comparing interns with more senior medical staff could establish which behaviours in senior staff result in more appropriate test ordering. Targeted education based on these findings may reduce inappropriate test ordering.

Study limitations and future directions

Our study analysed only awareness of costs, and did not attempt to ascertain the degree of inappropriate usage. Based on our results, educating staff about true costs is unlikely to yield savings, as participants generally overestimated rather than underestimated test costs.

In future studies, it may be beneficial to include additional questions incorporating a Likert scale, in which participants rank the factors most important to them when ordering a blood test (for example, the cost of the test, expectations from a superior, a desire for completeness, and expectations from patients). This would allow identification of the traits most likely to lead to excessive ordering, and future interventions could then be developed to address them. As discussed, it may also be beneficial to compare interns with more senior clinicians to establish the behaviours that most strongly correlate with rational test ordering.

Another limitation of this study was that we did not ascertain how much previous education regarding pathology testing costs each participant had received. Previous studies [26,27] suggest that limited exposure to cost information may be widespread. It would also be valuable to ascertain how many tests participants order, to establish whether those who underestimate the cost of tests tend to order more frequently, or vice versa. Such data could be linked to administrative data to assess for clustering and to determine whether ordering behaviours vary between departments.

 

Conclusion

Junior doctors frequently report ordering inappropriate tests and, in general, overestimate the costs of these pathology tests. This has a financial impact on the health system. We advocate that pathology services develop educational strategies for reducing inappropriate testing. Cost awareness does not appear to be a highly relevant factor in test ordering; further study is needed to identify the specific factors that contribute to systematic over-ordering.

Acknowledgements

I would like to extend my thanks to Robert Ellis and Miranda Rue-Duffy, who were invaluable in providing advice on appropriate statistical methods.

 

Conflict of interest

None declared.

 

References

[1] Lippi G, Guidi GC, Plebani M. One hundred years of laboratory testing and patient safety. Clin Chem Lab Med. 2007;45:797-8.

[2] Rohr UP, Binder C, Dieterle T, Giusti F, Messina CG, Toerien E, et al. The value of in vitro diagnostic testing in medical practice: a status report. PLoS One. 2016;11:e0149856.

[3] Hickner J, Thompson PJ, Wilkinson T, Epner P, Sheehan M, Pollock AM, et al. Primary care physicians: challenges in ordering in clinical laboratory tests and interpreting results. J Am Board Fam Med. 2014;27:268-74.

[4] National Coalition of Public Pathology. Encouraging quality pathology ordering in Australia’s public hospitals [Internet]. 2011 [cited 2017 Jul]. Available from:

[5] The Royal Australian and New Zealand College of Radiologists. Review of funding for diagnostic imaging services [Internet]. 2011 [cited 2017 Apr]. Available from:

[6] Freedman DB. Towards better test utilisation – strategies to improve physician ordering and their impact on patient outcome. EJIFCC. 2015;26(1):15-30.

[7] Hogg W, Baskerville N, Lemelin J. Cost savings associated with improving appropriate and reducing inappropriate preventive care: cost consequences analysis. BMC Health Serv Res. 2005;5:20.

[8] Hickner JM, Fernald DH, Harris DM, Poon EG, Elder NC, Mold JW. Issues and initiatives in the testing process in primary care physician offices. Jt Comm J Qual Patient Saf. 2005;31:81-9.

[9] Weydert JA, Nobbs ND, Feld R, Kemp JD. A simple, focused, computerized query to detect overutilization of laboratory tests. Arch Pathol Lab Med. 2005;129(9):1141-3.

[10] Miyakis S, Karamanof G, Liontos M, Mountokalakis TD. Factors contributing to inappropriate ordering of tests in an academic medical department and the effect of an educational feedback strategy. Postgrad Med J. 2006;82:823-9.

[11] Sarkar MK, Botz CM, Laposata M. An assessment of overutilization and underutilization of laboratory tests by expert physicians in the evaluation of patients for bleeding and thrombotic disorders in clinical context and in real time. Diagnosis. 2017;4(1):21-6.

[12] Rogg JG, Rubin JT, Hansen P, Liu SW. The frequency and cost of redundant laboratory testing for transferred ED patients. Am J Emerg Med. 2013;31(7):1121-3.

[13] Cassel CK, Guest JA. Choosing wisely: helping physicians and patients make smart decisions about their care. JAMA. 2012;307(17):1801-2.

[14] Institute of Medicine Roundtable on Evidence-Based Medicine. The National Academies Collection: reports funded by National Institutes of Health. In: Yong PL, Saunders RS, Olsen LA, editors. The healthcare imperative: lowering costs and improving outcomes: workshop series summary. Washington: National Academies Press; 2010.

[15] Bates DW, Boyle DL, Rittenberg E, Kuperman GJ, Ma’Luf N, Menkin V, et al. What proportion of common diagnostic tests appear redundant? Am J Med. 1998;104(4):361-8.

[16] Spiegel JS, Shapiro MF, Berman B, Greenfield S. Changing physician test ordering in a university hospital. An intervention of physician participation, explicit criteria, and feedback. Arch Intern Med. 1989;149(3):549-53.

[17] Schroeder SA, Myers LP, McPhee SJ, Showstack JA, Simborg DW, Chapman SA, et al. The failure of physician education as a cost containment strategy. Report of a prospective controlled trial at a university hospital. JAMA. 1984;252(2):225-30.

[19] Van Walraven C, Naylor CD. Do we know what inappropriate laboratory utilization is? A systematic review of laboratory clinical audits. JAMA. 1998;280(6):550-8.

[20] Zhi M, Ding EL, Theisen-Toupal J, Whelan J, Arnaout R. The landscape of inappropriate laboratory testing: a 15-year meta-analysis. PLoS One. 2013;8(11):e78962.

[21] Van Walraven C, Raymond M. Population-based study of repeat laboratory testing. Clin Chem. 2003;49:1997-2005.

[22] Moynihan R, Doust J, Henry D. Preventing over diagnosis: how to stop harming the healthy. BMJ. 2012;344:e3502.

[23] Laposata, M. Putting the patient first – using the expertise of laboratory professionals to produce rapid and accurate diagnoses. Lab Med. 2014;45:4-5.

[24] Epner PL, Gans JE, Graber ML. When diagnostic testing leads to harm: new outcomes-based approach for laboratory medicine. BMJ Qual Safe. 2013:22:ii6-10.

[25] Feldman LS, Shihab HM, Thiemann D, Yeh HC, Ardolino M, Mandell S, et al. Impact of providing fee data on laboratory test ordering: a controlled clinical trial. JAMA Intern Med. 2013;17:903-8.

[26] Tierney WM, Miller ME, McDonald CJ. The effect on test ordering of informing physicians of the charges for outpatient diagnostic tests. N Engl J Med. 1990;322(21):1499-1504.

[27] Hampers LC, Cha S, Gutglass DJ, Krug SE, Binns HJ. The effect of price information on test-ordering behaviour and patient outcomes in a paediatric emergency department.  Paediatrics. 1999;103(4 pt 2):877-82.

[28] Khromova V, Gray T. Learning needs in clinical biochemistry for doctors in foundation years. Ann Clin Biochem. 2008;45:33-8.

[29] Tilburt JC, Wynia MK, Sheeler RD, Thorsteinsdottir B, James KM, Egginton JS, et al. Views of US physicians about controlling health care costs. JAMA. 2013;310(4):380-8.

[30] Stanfliet JC, Macauley J, Pillay TS. Quality of teaching in chemical pathology: ability of interns to order and interpret laboratory tests. J Clin Pathol. 2009;62:664-6.

[31] Schilling UM. Cost Awareness among Swedish physicians working at the emergency department. Eur J Emerg Med. 2009;16(3):131-4.

[32] Allan GM, Lexchin J. Physician awareness of diagnostic and nondrug therapeutic costs: a systematic review.  Int J Technol Assess Health Care. 2008;24(2):158-65.

[33] Broadwater-Hollifield C, Gren LH, Porucznik CA. Emergency physician knowledge of reimbursement rates associated with emergency medical care. Am J Emerg Med. 2014;32(6):498-506.

[34] Magin PJ, Tapley A, Morgan S. Changes in pathology test ordering by early career general practitioners: a longitudinal study. Med J Aust. 2017;207(2):70-4.

[35] Mackenzie F, Christoph L, Joann E. Breast cancer screening: an evidence-based update. Med Clin North Am. 2015;99(3):451–68.

[36] Beck D. The importance of colorectal cancer screening. Ochsner J. 2012;12(1):7–8.

[37] Lin JS, Piper MA, Perdue LA, Rutter CM, Webber EM, O’Connor E, et al. Screening for colorectal cancer: updated evidence report and systematic review for the US Preventive Services Task Force. JAMA. 2016;315(23):2576-94.

[38] Geelhoed EA, Lewis B, Hounsome D, O’Leary P. Economic evaluation of neonatal screening for phenylketonuria and congenital hypothyroidism. J Paediatr Child Health. 2005;41(11):575-9.

 

Categories
Original Research Articles

Combined epiretinal membrane and cataract surgery: visual outcomes

Abstract

Introduction: This study compares visual outcomes between patients undergoing a single surgery combining cataract and epiretinal membrane (ERM) peel versus cataract surgery preceding ERM surgery as separate procedures.

Materials and Methods: A retrospective review was undertaken of electronic medical records for patients undergoing ERM surgery by vitrectomy, performed by a single surgeon. Peri-operative, three month, and twelve month follow-up visual data were collected. Three groups were identified: 1) Cataract surgery prior to ERM peel; 2) Combined ERM and cataract surgery; and 3) Cataract surgery post-ERM peel. Post-operative complications and mean change in visual acuity (VA) were investigated in the cataract surgery prior to ERM group compared to the combined surgery group.

Results: A total of 271 eyes underwent ERM peel either before or after cataract surgery, or combined with cataract surgery. Sixty-two eyes were excluded as they did not have follow-up data available. Of the 209 included eyes, 62 had cataract surgery prior to ERM peel, 105 had combined ERM peel and cataract surgery, and 28 had cataract surgery post-ERM peel. Analysis of outcomes in the cataract surgery pre-ERM versus combined surgery groups found improvements in VA in both groups at three months (mean logMAR (logarithm of the minimum angle of resolution) improvement -0.10 vs. -0.08, p=0.87) and twelve months post-operative follow-up (mean -0.18 vs. -0.22, p=0.54), with no significant difference between the groups. There was no difference in the proportion of eyes in either group with peri-operative (9.5% vs. 4.8%, p=0.28) or post-operative complications (5.9% vs. 1.8%, p=0.42).

Conclusion: Combined cataract and ERM vitrectomy is as effective as consecutive operations for improving VA, whilst reducing patient exposure to the risks associated with two separate procedures.

Introduction

The epiretinal membrane (ERM), also referred to as macular pucker or cellophane maculopathy, is a sheet of fibrous cells on the surface of the retina. Proliferation of the fibrous cells and subsequent contraction of the membrane lead to visual symptoms, in particular distortion and blurred vision. ERM is a relatively common occurrence with ageing, with a prevalence ranging from 8.6% to 12.1% across various ethnic groups and countries [1-3], typically affecting patients over 60 years old [3]. Often symptoms are minimal and ERM requires no intervention unless progression occurs. In some instances, however, ERM adversely affects vision and requires surgical treatment with vitrectomy and an ERM peel procedure. Patients requiring treatment for ERM may have other comorbid eye conditions, such as cataract, which also need to be addressed. The question therefore arises whether these two problems are best addressed simultaneously or sequentially. The current literature offers only small case series addressing this question, and results have been mixed, although most papers have found no significant differences [5-7]. There is also a paucity of literature comparing combined surgery with cataract surgery prior to ERM peel.

 

Aim

In this study, we sought to identify the number of patients who underwent ERM vitrectomy before or after cataract surgery, and the number who received combined phacovitrectomy (ERM peel and cataract surgery), at a single site. Secondly, we sought to determine complication rates and visual outcomes in patients who had combined ERM and cataract surgery compared to those who had cataract surgery before or after ERM vitrectomy.

 

Materials and Methods

Following approval from the University of Tasmania Human Research Ethics Committee (H0015009), we conducted a retrospective review of the medical records of patients who were diagnosed with ERM and underwent an ERM peel vitrectomy by a single surgeon. The surgeries were conducted in an Australian regional town (Launceston) between July 1, 2005 and May 12, 2014. Inclusion criteria were patients with pre-operative visual data and twelve month post-surgery follow-up visual data.

Information retrieved from patient medical records consisted of:

  • Baseline demographics
  • Medical comorbidities (pre-operative and post-operative)
  • Lens status (phakic or pseudophakic)
  • Date of cataract surgery (cataract surgery pre-ERM, combined, or cataract surgery post-ERM)
  • Surgical complications during ERM removal
  • Best corrected visual acuity (VA) pre-operatively as well as at three months and twelve months post-operatively
  • Central macular thickness (CMT) (pre-operatively and twelve months post-operatively)

Patients were classified according to the date of cataract surgery in relation to their ERM vitrectomy: cataract surgery before ERM vitrectomy, combined phacovitrectomy, or cataract surgery after ERM vitrectomy. Patients who did not return for twelve month follow-up were excluded.

Snellen acuity measurement of best-corrected visual acuity (BCVA) was performed and the results were converted into a logarithm of the minimal angle of resolution (logMAR) before analysis. To utilise a stable BCVA, the twelve month post-operative follow-up BCVA was used in the final analysis. The surgeries were performed by one surgeon. Details on posterior vitreous detachment (PVD) and peri-operative complications were also collected.
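The Snellen-to-logMAR conversion used here is the standard one: logMAR is the base-10 logarithm of the minimum angle of resolution, which equals the negative log of the decimal acuity. A minimal Python sketch (the example acuities are arbitrary):

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g. 6/12) to logMAR.

    logMAR = -log10(decimal acuity) = log10(denominator / numerator).
    """
    return math.log10(denominator / numerator)

# 6/6 vision -> logMAR 0.00; 6/12 -> ~0.30; 6/60 -> 1.00.
for num, den in [(6, 6), (6, 12), (6, 60)]:
    print(f"{num}/{den} -> logMAR {snellen_to_logmar(num, den):.2f}")
```

Note that on this scale a *negative* change in logMAR represents an improvement in acuity, which is why the mean changes reported below are negative.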

Data from patient records were extracted into an Excel 2010 (Microsoft, Redmond, WA, USA) spreadsheet and imported into Stata 14.1 (StataCorp, College Station, TX, USA) for analysis. Continuous data distributions were investigated and one-way analyses of variance (ANOVA) were used to investigate baseline differences between the three groups (cataract surgery pre-ERM, combined, or cataract surgery post-ERM) with Tukey-Kramer post-hoc tests. For categorical data, crosstabs with Chi-square and Fisher’s exact tests were utilised to investigate differences among the three groups at baseline. These were also used to investigate complications and the proportion of eyes with VA improvements of one chart line or greater at three month and twelve month follow-ups for the cataract surgery pre-ERM group versus the combined surgery group. Independent t-tests were used for the investigation of mean change in VA (logMAR) and mean change in CMT from baseline to three month follow-up, and twelve month follow-up between the pre-ERM group versus the combined surgery group. All tests were two-sided and differences were accepted as significant at p<0.05.
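As an illustrative sketch of this analysis pipeline in Python rather than Stata — with placeholder values standing in for the study data — the ANOVA, Tukey-Kramer post-hoc comparisons, and independent t-test could be run as follows:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical ages for the three groups (placeholder values, not study data).
pre_erm  = np.array([74, 76, 78, 80, 82])
combined = np.array([70, 72, 74, 76, 78])
post_erm = np.array([62, 64, 66, 68, 70])

# One-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(pre_erm, combined, post_erm)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Tukey-Kramer post-hoc pairwise comparisons (handles unequal group sizes).
ages = np.concatenate([pre_erm, combined, post_erm])
groups = (["pre-ERM"] * len(pre_erm) + ["combined"] * len(combined)
          + ["post-ERM"] * len(post_erm))
print(pairwise_tukeyhsd(ages, groups, alpha=0.05))

# Independent t-test for mean change in logMAR VA between two groups
# (placeholder values; negative change = improvement).
delta_va_pre  = np.array([-0.10, -0.05, -0.20, -0.08, -0.12])
delta_va_comb = np.array([-0.08, -0.15, -0.05, -0.10, -0.06])
t_stat, p_t = stats.ttest_ind(delta_va_pre, delta_va_comb)
print(f"t-test: t = {t_stat:.2f}, p = {p_t:.3f}")
```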

 

Results

A total of 271 eyes underwent ERM peel either before or after cataract surgery, or combined with cataract surgery. Sixty-two eyes were excluded as they did not have follow-up data available. Twelve month follow-up data was available for 209 eyes from 199 patients (93 women and 106 men), and these were selected for analysis. There were 108 (51.7%) right eyes and 101 (48.3%) left eyes (Table 1). There were 62 (29.7%) eyes with cataract surgery prior to ERM peel, 28 (13.4%) eyes proceeding to cataract surgery post-ERM peel, 105 (50.2%) eyes with combined ERM peel and cataract surgery, and 14 (6.7%) eyes with ERM peel alone that had not yet required cataract surgery (Table 1).

Table 1. Baseline characteristics of all eyes.

There was no significant difference between the groups for pre-operative CMT or cystoid macular oedema (CMO). There was a significant difference in mean age between the groups (F(2,192) = 7.8, p = 0.001), with the Tukey-Kramer post-hoc test indicating a significant difference between the cataract surgery prior to ERM peel group and the cataract surgery post-ERM peel group (p < 0.0001), and between the combined group and the post-ERM peel group (p = 0.02), but not between the cataract surgery prior to ERM peel group and the combined group (p = 0.13). The combined surgery group had better mean VA (logMAR) pre-operatively compared with the cataract surgery prior to ERM or post-ERM groups (p = 0.04). In the cataract surgery pre-ERM group, there was a mean of 1347±1179 days between cataract surgery and ERM peel.

Table 2. Baseline characteristics of cataract surgery pre-ERM, combined surgery and cataract surgery post-ERM groups.
*Unable to calculate due to small expected cell sizes. ns: not significant.

Overall there were 18 peri-operative complications: 17 retinal tears or breaks (8.1%) and one lens touch (0.48%). There was no difference in the proportion of eyes with peri-operative complications in the cataract surgery pre-ERM group compared with the combined group (4.8% vs. 9.5%, p=0.38), or in the proportion with post-operative complications (1.8% vs. 5.9%, p=0.42).

In the post-operative period, CMO developed in five eyes that had combined surgery and three eyes that had cataract surgery after ERM peel; no eyes that had cataract surgery pre-ERM developed CMO post-operatively (Table 3). Comparing the cataract surgery after ERM peel group with the combined surgery group, there was a trend for a larger proportion of eyes receiving cataract surgery after ERM to develop post-operative CMO, although this was not statistically significant (11.1% vs. 4.8%, p=0.36).

There were improvements in VA in both the cataract surgery pre-ERM group and the combined group at the three month and twelve month post-operative follow-up appointments (Table 3). There was no difference between the groups in mean change in VA (logMAR) from baseline to the three month (p=0.87) or twelve month follow-up (p=0.54), nor in mean change in CMT from baseline to the three month (p=0.07) or twelve month follow-up (p=0.20).

Table 3. Visual acuity at three month and twelve month follow-up in pre-ERM versus combined surgery group.
*Note: Five pre-ERM eyes and ten combined eyes did not have three month follow-up data available. †Three eyes with missing twelve month follow-up VA excluded.
‡There were 41 eyes with twelve month follow-up PVD information not available.
ns: not significant.

Discussion

BCVA is the main tool for predicting visual outcomes in ERM vitrectomy and is a widely used and reliable prognostic factor [4]. Our study utilised results from a single vitreoretinal surgeon at one centre, which provided consistency and reduced variation. We investigated outcomes in the combined surgery and cataract surgery prior to ERM peel groups, given the limited published research comparing these approaches. For these outcome analyses, we excluded the cataract surgery post-ERM group.

Dawson et al [10] found that pre-operative VA may predict visual improvement at follow-up. Our results support this finding in both groups, with pre-operative VA being a stronger predictor of follow-up VA in the consecutive surgery group than in the combined surgery group. In our study, the combined surgery group had better mean pre-operative BCVA (logMAR), due to a greater proportion of eyes within the 6/15 to 6/24 range. To control for the better baseline VA in the combined group, we analysed change in BCVA (logMAR) at follow-up between the two groups.

Our finding of no significant difference in mean VA improvement between the cataract surgery pre-ERM group and the combined group is similar to previous research. In a 2010 study by Dugas et al [5], 174 eyes were compared for surgical outcomes between combined and consecutive cataract extraction and ERM vitrectomy. At twelve months follow-up, the groups did not demonstrate a statistically significant difference in VA improvements [5]. In another similar study conducted by Yiu et al [6] in 2013, 81 eyes from 79 patients were grouped into combined cataract and ERM vitrectomy and ERM vitrectomy alone. Yiu et al [6] found no statistically significant differences between the groups on VA improvements at six month and twelve month follow-up. We also found no significant difference in mean improvement in CMT between the cataract surgery pre-ERM group and the combined group. This finding is also consistent with both Dugas et al [5] and Yiu et al [6].

Eyes that had cataract surgery after ERM peel had approximately double the incidence of post-operative CMO compared with the combined surgery group. While this finding was not statistically significant, the small number of eyes that developed CMO in each group means the study lacked power to detect a difference between the groups. Among the combined group in our study, 4.8% developed CMO, which concurs with previous studies reporting incidences of 3.6% to 8.1% in combined surgical cases [7,8]. A separate study also found no difference in the incidence of CMO between combined surgery and pre-ERM groups [9]. Because of the small sample size, and because no eyes in the cataract surgery pre-ERM group developed CMO post-operatively, we were unable to determine whether the incidence of CMO would have differed had consecutive surgery (either cataract surgery before or after ERM peel) been performed. A further study with a larger sample would allow a clearer understanding of this.
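
The power point can be made concrete. Using the approximate figures reported above (11.1% vs. 4.8% CMO, with roughly 27 and 105 eyes), a standard two-proportion power calculation suggests the comparison was underpowered; an illustrative sketch (not part of the study's analysis):

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Approximate figures from the text: 11.1% vs. 4.8% post-operative
    # CMO, with roughly 27 post-ERM eyes and 105 combined-surgery eyes.
    effect = proportion_effectsize(0.111, 0.048)  # Cohen's h
    power = NormalIndPower().power(effect_size=effect, nobs1=27,
                                   ratio=105 / 27, alpha=0.05)
    print(f"approximate power: {power:.2f}")  # well below the usual 0.8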

ERM peel is known to be a safe and effective procedure. Our study indicated no statistically significant difference between the combined and sequential groups in the proportion of patients with peri-operative or post-operative complications. Peri-operative complications included retinal tears and lens touch; post-operative complications included CMO and posterior capsule opacification. The rate of complications we observed was consistent with similar studies. As such, we conclude that there was no evidence of a difference in safety between combined and sequential surgery. The benefits of a combined operation include avoiding the risks of two separate operations, reduced costs, and increased convenience for the patient. Furthermore, a cataract operation after an ERM peel may be more challenging for the surgeon, due to the lack of vitreous support and increased difficulty of intraocular lens placement, potentially increasing peri-operative complications [11]. Both ERM and cataract affect vision, so conducting a cataract operation alone might not provide optimal VA improvement, as the ERM may progress. Considerations against combined surgery include an increased post-operative inflammatory response, CMO, and increased rates of posterior capsule opacification and posterior synechiae formation [12].

Limitations of this study include its retrospective design. This design was necessary to identify a substantial number of patients (requiring an eight year inclusion period), as the study recruited patients from the only cataract and vitreoretinal surgeon in the region. A prospective study including patients from several surgeons would have less bias and greater generalisability. We excluded patients who did not return for post-surgery follow-up, which removed 29 combined surgery eyes and 33 pre-ERM eyes. Had these eyes been included, 21.6% of the combined surgery group and 34.7% of the pre-ERM group would have been categorised as lost to follow-up. Patients lost to follow-up may have had better improvements in their VA; however, it is unknown to what extent their exclusion may have biased the results, particularly the comparison of the two groups. The patients who underwent combined phacovitrectomy had a better mean VA (logMAR), and a greater proportion had a pre-operative VA of 6/24 or better. Given the presence of both pathologies, these patients underwent combined surgery. However, some of these patients may have achieved an acceptable VA improvement from cataract surgery alone, such that they would not ordinarily have proceeded to epiretinal surgery. In contrast, patients in the cataract surgery prior to ERM peel group had selected themselves into that group because they were sufficiently affected by ERM to require surgery. Thus, patients in the consecutive surgery group may not have presented for surgery unless they already had a worse pre-operative VA than the combined surgery cohort.

We acknowledge that an analysis of twelve month follow-up BCVA would include various confounders, such as new ophthalmic pathologies or worsening of pre-existing pathologies; these factors were not taken into account in our analysis. As post-operative conditions tend to stabilise by three months after the operation, we also analysed our patients at the three month follow-up, and the results were similar at the three and twelve month intervals. Although we assumed that patients were stable at three months, they appeared to continue improving after this time, as demonstrated by the twelve month results. Additionally, our comparison did not include patients who were diagnosed with ERM but underwent only cataract surgery, as they might not present for ERM peel owing to acceptable visual improvement. Our consecutive group involved patients who underwent cataract surgery and then ERM peel; some of these patients might have developed ERM only after the cataract surgery. Finally, our study is limited by the absence of baseline comorbidity data, so we were unable to assess the impact, if any, of comorbidities on VA improvements.

Conclusion

This study has shown that combined cataract and ERM vitrectomy is at least as effective as consecutive operations for improving VA. As such, it may be prudent to conduct combined surgery, as it spares the patient the risks of two separate operations and is more convenient.

Acknowledgements

We would like to thank the staff at the Launceston Eye Institute for their technical assistance in obtaining the dataset required for this research.

Conflict of interest

None declared.

 

References

[1] Cheung N, Tan S, Lee S, Cheung G, Tan G, Kumar N, et al. Prevalence and risk factors for epiretinal membrane: the Singapore epidemiology of eye disease study. Br J Ophthalmol. 2017;101:371-6.

[2] Noda Y, Yamazaki S, Kawano M, Goto Y, Otsuka S, Ogura Y. Prevalence of epiretinal membrane using optical coherence tomography. Nippon Ganka Gakkai Zasshi. 2016;119(7):445-50.

[3] Aung K, Makeyeva G, Adams M, Chong E, Busija L, Giles G, et al. The prevalence and risk factors of epiretinal membranes. Retina. 2013;33(5):1026-34.

[4] Kauffmann Y, Ramel JC, Lefebvre A, Isaico R, De Lazzer A, Bonnabel A, et al. Preoperative prognostic factors and predictive score in patients operated on for combined cataract and idiopathic epiretinal membrane. Am J Ophthalmol. 2015;160(1):185-92.

[5] Dugas B, Ouled-Moussa R, Lafontaine PO, Guillaubey A, Berrod JP, Hubert I, et al. Idiopathic epiretinal macular membrane and cataract extraction: combined versus consecutive surgery. Am J Ophthalmol. 2010;149(2):302-6.

[6] Yiu G, Marra KV, Wagley S, Krishnan S, Sandhu H, Kovacs K, et al. Surgical outcomes after epiretinal membrane peeling combined with cataract surgery. Br J Ophthalmol. 2013;97(9):1197-201.

[7] Kim KN, Lee HJ, Heo DW, Jo YJ, Kim JY. Combined cataract extraction and vitrectomy for macula-sparing retinal detachment: visual outcomes and complications. Korean J Ophthalmol. 2015;29(3):147-54.

[8] Wensheng L, Wu R, Wang X, Xu M, Sun G, Sun C. Clinical complications of combined phacoemulsification and vitrectomy for eyes with coexisting cataract and vitreoretinal diseases. Eur J Ophthalmol. 2009;19(1):37-45.

[9] Savastano A, Savastano MC, Barca F, Petrarchini F, Mariotti C, Rizzo S. Combining cataract surgery with 25-gauge high-speed pars plana vitrectomy: results from a retrospective study. Ophthalmology. 2014;121(1):299-304.

[10] Dawson SR, Shunmugam M, Williamson TH. Visual acuity outcomes following surgery for idiopathic epiretinal membrane: an analysis of data from 2001 to 2011. Eye (Lond). 2014;28(2):219-24.

[11] Cole CJ, Charteris DG. Cataract extraction after retinal detachment repair by vitrectomy: visual outcome and complications. Eye (Lond). 2009;23(6):1377-81.

[12] Smith M, Raman SV, Pappas G, Simcock P, Ling R, Shaw S. Phacovitrectomy for primary retinal detachment repair in presbyopes. Retina. 2007;27:462-7.


Educational outcomes for children with moderate to severe acquired brain injury

Background: Acquired brain injury (ABI) in childhood can have serious physical, cognitive, and social consequences, although its specific impact on schooling attendance and provision of aid for children is often uncertain. We described educational and neuropsychological outcomes for a population of children with moderate to severe ABI.

 

Methods: A retrospective cohort study was performed of children with moderate to severe ABI attending a paediatric brain injury service at The Children’s Hospital at Westmead between January 2003 and December 2007. The children were aged 8-16 years at the time of injury, and information on school attendance, provision of an aide, and neuropsychological test results was collected at 6, 18, and 30 months post-injury. Children with previous moderate to severe ABI, neurological disorders, or learning difficulties were excluded.

 

Results: 104 children were included (mean age 12.4 years, 62.5% male). 48 had severe ABI (Glasgow Coma Scale ≤ 8 or Post Traumatic Amnesia ≥ 7 days). The proportion who had returned to full-time schooling improved from 56% to 89.7% between the 6 month and 30 month follow-ups. A majority of children had an impairment recorded on neuropsychological testing. Regression analysis found that severity of injury and language deficit were predictors of attendance in the first six months post-injury. During the 30-month follow-up, 18% of children attended special classes or received a classroom aide.

 

Conclusion: Time is important in recovery from ABI in children. Neuropsychological deficits influence delivery of classroom aides or modified curricula. Children with severe injury are more likely to have poorer cognitive and educational outcomes.

 

What is already known about this topic

  • Acquired brain injury can lead to serious physical, mental, and social problems for school-aged children
  • These deficits can often extend years after the initial injury
  • Severity of injury is correlated with poorer outcomes

 

What this paper adds

  • An Australian perspective of educational outcomes for children with moderate to severe brain injury
  • Information on deficits experienced by children over two-and-a-half years of follow-up
  • A better understanding of the importance of time, neuropsychological deficits, and physical injuries in transition back to school

 

Introduction

Acquired brain injury (ABI) includes a range of disabilities affecting the brain after birth including traumatic brain injury and haemorrhage. Children with moderate to severe ABI often experience long-term physical, cognitive, or behavioural impairments [1,2]. During discharge planning for these children, families often want to know what to expect from the future. In particular, they worry about the transition from hospital to the home and school environment [3]. Schooling is an important forum for childhood learning, as well as emotional and social development [4]. As such, parents often worry about how and when their children may return to school [5]. These concerns are important to address but are difficult to answer due to the great heterogeneity of outcomes following ABI.

Research has indicated that transition of children with ABI back into school is a challenging time for families. After brain injury, students may need to change their educational and vocational goals to accommodate changes in their abilities [6]. Interviews with children returning to school after ABI raise many issues, including social isolation, missed schoolwork, difficulties adjusting to physical and cognitive changes, and the support provided by schools [7]. Children find it more challenging to participate in school activities than at home and this may be due to the familiarity and greater support provided by the home environment [8].

It has been clearly established by prospective longitudinal studies that severity of injury is associated with poorer physical or cognitive outcomes [1,9-11]. Younger children are also more vulnerable to the ongoing consequences of brain injury due to their larger head-to-body ratio, ongoing brain tissue myelination, and thinner cranial bones [12]. Other factors such as type of injury, socioeconomic status, and provision of family support are also known to affect outcomes following childhood ABI [1,5,7]. Time plays a particularly important role in recovery from ABI; however, some deficits may also become more apparent over time.

Neuropsychological testing may also be an early predictor of educational performance and special education requirements: in a study by Kinsella et al., severity of injury and verbal memory and fluency at three months post-injury predicted the requirement for special education at 24 months post-injury. Similar findings on the importance of verbal memory for educational performance at two years post-injury were made by Catroppa and Anderson as well as by Miller and Donders [13,14]. Arnett et al also found that measures of executive functioning and verbal memory predicted educational competency, but did not find these measures predictive of the provision of special education [15]. Many studies of educational and schooling outcomes for children with ABI do not look specifically at school attendance, and studies of educational outcomes are also limited by small patient numbers and limited follow-up [16].

This study aims to use retrospective data to provide a better understanding of specific neuropsychological and schooling outcomes for children with moderate to severe ABI over a two-and-a-half-year period of follow-up. In particular, it aims to provide a picture of the time taken to return to schooling and the likelihood of requiring a classroom aide or special education. It also explores whether neuropsychological factors such as attention, memory, information processing, and executive function, and co-morbidities such as fatigue and reduced motor capacity, influence return to school and provision of an aide. This information may give parents of children with ABI a better understanding of what to expect and could improve school engagement in the rehabilitation process [7].

 

Methods

Participants

Eligible cases were identified from the 2003-2007 database of a paediatric brain injury service at The Children’s Hospital at Westmead, New South Wales, Australia.

Inclusion criteria were age at injury of 8-16 years, moderate or severe ABI, and admission to hospital for ABI. Moderate ABI was defined as Glasgow Coma Scale (GCS) ≤ 12 or Post Traumatic Amnesia (PTA) ≥ 1 day; severe ABI was defined as GCS ≤ 8 or PTA ≥ 7 days [17]. Eight cases were judged to represent moderate or severe ABI but had unclear GCS and PTA data; these were included in order to represent the patient population more accurately and were classified as “undefined” in severity.

Exclusion criteria were previous moderate or severe ABI, previously documented behavioural or developmental difficulties, or previously documented special education support.

Medical records were searched and data extracted from neuropsychological and brain injury clinic reports, discharge summaries, and other hospital records. Data were collected for 0-6, 6-18, and 18-30 months post-injury. Data on educational outcomes of school attendance, provision of classroom aide, and whether children changed school were collected. Data on neuropsychological outcomes was taken from reports written by clinical neuropsychologists at the service. Patient demographics were taken from medical notes. Information on co-morbidities was collected primarily from brain injury clinic reports.

 

Measures

The neuropsychological testing variables measured were attention, memory, information processing, and executive functioning. The neuropsychological profile was considered intact when reported as “low average” or above. Where terms such as “difficulty”, “reduced”, “borderline”, or “impaired” were used as descriptors in reports, they were coded as a deficit. Where children had no deficit on initial neuropsychological testing and were subsequently discharged without further testing, it was assumed that they did not develop a deficit later. The coding rule is sketched below.
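
A hedged sketch of this coding rule in Python; the deficit descriptors are taken from the text, while the "intact" list beyond "low average" is our assumption for illustration:

    # Descriptors signalling a deficit are quoted in the methods above;
    # intact descriptors other than "low average" are assumed.
    DEFICIT_TERMS = ("difficulty", "reduced", "borderline", "impaired")
    INTACT_TERMS = ("low average", "average", "high average", "superior")

    def code_descriptor(descriptor: str) -> str:
        d = descriptor.lower()
        if any(term in d for term in DEFICIT_TERMS):
            return "deficit"
        if any(term in d for term in INTACT_TERMS):
            return "intact"
        return "unknown"

    assert code_descriptor("borderline") == "deficit"
    assert code_descriptor("low average") == "intact"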

This research also collected data on variables concerning other sequelae of ABI, including mood/behaviour, fatigue, gross and fine motor deficits, receptive and expressive language deficits, visual impairment, and hearing impairment. These deficits were recorded where they were mentioned as ongoing issues in clinical letters and other medical notes during the set follow-up periods.

 

Statistical analysis

Quantitative analysis was undertaken using STATA 11 SE. Where possible, variables were coded dichotomously and analysed using Fisher’s exact test to look for a relationship with school attendance or provision of an aide. Ordered logistic regression examined which variables (severity, neurological findings, or co-morbidities) were predictive of school attendance.
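
For illustration, the same analyses could be run as follows in Python (the study itself used STATA 11 SE); the data layout and variable names are hypothetical:

    import pandas as pd
    from scipy import stats
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    # Hypothetical layout: one row per child, attendance coded as an
    # ordered category (none < part-time < full-time) at 0-6 months.
    df = pd.read_csv("abi_outcomes.csv")
    df["attendance"] = pd.Categorical(
        df["attendance"], categories=["none", "part_time", "full_time"],
        ordered=True)

    # Fisher's exact test: dichotomous predictor vs. dichotomised attendance.
    table = pd.crosstab(df["language_deficit"],
                        df["attendance"] == "full_time")
    _, p_fisher = stats.fisher_exact(table.values)

    # Ordered logistic regression of attendance on severity and a
    # co-morbidity indicator.
    model = OrderedModel(df["attendance"],
                         df[["severity", "language_deficit"]],
                         distr="logit")
    result = model.fit(method="bfgs", disp=False)
    print(result.summary())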

 

Ethics approval

Ethics approval was obtained from the Services Improvement Unit at The Children’s Hospital at Westmead, NSW, Australia, approval number: QIE-2011-02-09.

 

Results

Participant demographics (Table 1)

Of the 158 identified cases, 104 met the inclusion criteria. Age at the time of injury was between 8 and 16 years, with a mean of 12.4 years. There were 48 children with severe injury, 48 with moderate injury, and 8 with non-traumatic injury of undefined severity, mostly haemorrhage from rupture of arteriovenous malformations (Table 1). 62.5% were male and three quarters came from urban residences. 37.5% of injuries were due to falls and 31.7% of children were involved as passengers or pedestrians in motor vehicle accidents. CT and MRI data was collected for 85.6% of patients, of which 82% showed abnormalities.

Table 1. Patient demographics of children with moderate to severe acquired brain injury.†
† Note that information is only reported for those cases where it was available.
Undefined cases are cases that were clinically moderate to severe but GCS and PTA were not clearly recorded.

Outcomes

Neuropsychological deficit (Table 2)

Sex and age at onset were not associated with significant differences in neuropsychological outcomes. As expected, severe ABI showed a trend towards more deficits compared with moderate ABI. Children often had deficits in more than one domain, and children with severe injuries had higher rates of reported deficits. Almost all children with no deficits on neuropsychological testing had moderate ABI. Over time, the number of children with reported deficits in attention, memory, information processing, and executive functioning declined, with no increase in the incidence of deficits; many children with deficits recorded at 0-6 months had recovered by 18 or 30 months of follow-up.

Table 2. Number of children with moderate to severe acquired brain injury with neuropsychological deficits at follow up. †Non-traumatic cases had consequences considered to reflect moderate to severe ABI but there was insufficient information on GCS for status to be clearly defined. Note that information is only reported for those cases where it was available. This table therefore does not report on the entire sample of 104. Undefined cases are cases that were clinically moderate to severe but GCS and PTA were not clearly recorded.

Co-morbidities

The most common complaints were headache, fatigue, and dizziness. At 0-6 months, 62 children reported fatigue. Mood and behavioural problems were also common, with 61 children reporting problems at 0-6 months, 38 at 6-18 months, and 25 at 18-30 months. The persistence of mood and behavioural problems raised by parents and children at rehabilitation clinics even two-and-a-half years after injury reflects the ongoing difficulties faced by children with ABI after physical injuries have healed.

Fine motor deficits were slightly more common than gross motor deficits. At 0-6 months, more children with impaired mobility required a mobility aid than walked unaided, but between 6-30 months the majority of children with impaired mobility were able to walk without an aid. Over a fifth of children had initial brain injury clinic reports describing receptive or expressive language problems, but two thirds of these had resolved by 30 months of follow-up. Between 2-8% of children experienced vision or hearing problems after ABI. Except for fine motor deficits, co-morbidities were most frequently recorded during the first 6 months. The frequencies of co-morbidities at each follow-up time point are recorded in Table 3.

Table 3. Frequency of co-morbidities reported for children with moderate to severe ABI at follow-up.†
†Note that information is only reported for those cases where it was available for all co-morbidities. This table therefore does not report on the entire sample of 104.

School attendance

Attendance improved over time; most part-time students transitioned to full-time schooling by 18 months (Figure 1). At the end of the 18-30 month follow-up, 6.9% (of the 87 children with attendance data) remained unable to return to full-time schooling.

Figure 1. School attendance for 104 children with moderate to severe brain injury over follow up.

Ordered logistic regression was performed to identify predictors of school attendance. As expected, injury severity was negatively associated with full-time school attendance at 0-6 months post-injury: a child with severe ABI was five times less likely to attend school within six months post-injury than a child with moderate ABI (Table 4). There was a significant difference in school attendance at 18 months post-injury between children with moderate and severe injury (p < 0.05), but no relationship was found at 30 months (p > 0.2). No statistically significant association was found between individual neuropsychological measures and school attendance.

Table 4. Ordered logistic regression of attendance at 0-6 months for 63 children with moderate to severe ABI.†
†SE= Standard Error.
Likelihood ratio χ²(2) = 24.58, p < 0.0001; log likelihood = −52.06; pseudo-R² = 0.191.

Of the co-morbidities measured, it was found that injury severity and language deficit (independently and in combination) were negatively associated with full-time school attendance at 0-6 months post-injury. A child with a receptive or expressive language deficit was ten times less likely to attend school within six months post-injury than a child without a known language deficit.
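
For readers interpreting these effect sizes: in an ordered logit model, a coefficient β corresponds to an odds ratio of exp(β), so "five times less likely" corresponds to an odds ratio near 0.2. An illustrative check (the coefficient shown is hypothetical, not the study's estimate):

    import math

    # Hypothetical coefficient for severe vs. moderate ABI; an odds
    # ratio of ~0.2 corresponds to roughly five-fold lower odds of
    # attending school.
    beta = -1.6
    print(f"odds ratio = {math.exp(beta):.2f}")  # ~0.20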

 

School aide and change of school

A classroom aide was received by 3.3% of children at 0-6 months, 12.8% at 6-18 months, and 13.4% at 18-30 months. There was a significant difference according to injury severity in the provision of a teaching aide at 18-30 months (p < 0.03). Special classes or educational programs were provided for 1.1% of children at 0-6 months, 5.3% at 6-18 months, and 7.2% at 18-30 months. There was some overlap, with some children both receiving aide assistance and attending a special class. During follow-up, seven children required a change of school for reasons relating to their ABI; five of these had experienced severe ABI.

 

Discussion

This study describes the patterns of school attendance and of access to special education or aide support following ABI. Extended absences from school are one of the initial challenges facing children after ABI; 17.6% of children in our study population did not attend school in the first six months post-ABI. Whilst hospital and home schooling were sometimes available, this represents a considerable absence during which children with ABI may fall behind their peers. A combination of severity of injury and language deficit was predictive of attendance in the first six months after injury. The involvement of language as a predictive factor is important, as it is modifiable. Language is important to complex learning and adaptation and contributes to understanding shared meanings in contexts such as school [18,19]. Language intervention programs may therefore be able to facilitate earlier transition back to school. This study shows that the great majority (93%) of children with moderate or severe ABI will be able to return to full-time schooling, and that the majority are not provided with classroom aides, special classes, or educational programs.

Attention to classroom instructions, reasoning and expression of ideas, and self-monitoring are all important features of good reintegration to schooling [20]. Children with severe ABI accounted for a greater proportion of neurological deficits in every domain measured (intellect, attention, memory, executive function, and information processing), and 44 of the 45 children with no reported neuropsychological deficits on testing had only moderate ABI. Our study reinforces that there is great variability in the way that ABI affects children, but severe ABI generally has a poorer prognosis, and such children may experience greater challenges when returning to school. It is reassuring to note that time can help reduce the burden of ABI, with the prevalence of neuropsychological deficits generally improving during follow-up. Longer-term studies suggest that intellect and personality problems may resolve by adulthood, but that reduced quality of life in relation to education and employment can persist [1]; further long-term follow-up of these patients may be valuable in investigating this. Our study also found that attendance improves with time, with 89.7% of children able to resume full-time schooling by 30 months post-injury.

The provision of classroom aides and modified learning programs is important in exploring whether the ongoing needs of children with ABI are met by schools. Our study found that 13% of children were provided with a classroom aide during the 30 months of follow-up, and that provision increased over time. This may be accounted for by the inability of children with severe injuries to return to school early, but another possible explanation is a delay in the processing and provision of support.

Quality of aide provision and the satisfaction children and their families had with the schooling system were not measured in this study. This is a possible avenue for future research, as general school educators and also special education teachers often do not have specialised training for working with children with ABI.  TBI Consulting Team and BrainSTARS are two promising models currently available for improving professional development of educators in caring for children with ABI, but both require further studies to show objective improvement [21].

In our study, some children reported needing to repeat a year of school. Grade repetition is known to be a de-motivating process that can affect homework completion and predict greater amounts of school absence [22]. A possible direction for future research would be to examine how common grade repetition is amongst the ABI population.

 

Strengths and limitations of this study

This study addresses the need for a better understanding of educational outcomes for children with moderate to severe ABI. The follow-up time of 30 months also provides a clearer understanding of how outcomes change over time. Additionally, this study deals specifically with school attendance and provision of an aide, two outcomes which are often overlooked in studies describing participation of children in the community following ABI.

The study also provides important information regarding predictors of attendance in the first six months of schooling. Whilst severity has been a known predictor, language has not been a focus for research previously. This new information may help guide health and education professionals in providing appropriate resources to ensure the best educational outcomes for children with ABI [23].

This study had a number of methodological limitations. Due to the highly variable nature of ABI and the small sample size, subgroup analysis was limited. As the study was retrospective, there were a number of missing data fields. The results may underestimate the true incidence of neuropsychological deficits, as standard clinical practice does not comprehensively test children at all follow-up points if no changes are expected or testing is not considered necessary. A larger, prospective study of educational outcomes would be needed to confirm our results [24].

The study did not include a control group; instead, confounding was minimised by excluding children with previous intellectual deficits, moderate to severe brain injury, schooling problems, or behavioural difficulties.

This study was unable to detect differences for children who were previously above average, but dropped into an average category on neuropsychological testing. Unfortunately, pre-morbid capabilities are difficult to quantify without formal testing. This study would not consider these children to have a deficit even though they have experienced a change in abilities. Any changes in abilities should not be discounted as they can still negatively impact the expectations and lifestyle of children and their families.

 

Conclusion

Children with moderate to severe ABI experience a wide range of neuropsychological and physical co-morbidities that can persist for at least 30 months following injury. Greater severity of injury and the presence of a language deficit are negatively associated with school attendance in the first six months following ABI. 13% of children required additional aide support or involvement in special classes, and over a third of children still reported fatigue and behavioural problems at 30 months follow-up. This study shows that whilst patients and families experience a long and difficult process of recovery, they may be able to expect improvements over time, and children are very likely to have returned to full-time schooling by 30 months post-injury.

 

Acknowledgements

I would like to thank Dr Angela Morrow for her supervision and guidance throughout this research project. I would further wish to express my gratitude to Dr Barzi for great assistance with the statistics and to Julie-Anne Macey, who came up with the research concept. I would also like to thank Dr Patrina Caldwell for her encouragement and invaluable feedback during the editing process.

 

References

[1] Anderson V, Brown S, Newitt H, Hoile H. Long-term outcome from childhood traumatic brain injury: intellectual ability, personality, and quality of life. Neuropsychology. 2011;25(2):176-84.

[2] Anderson V, Le Brocque R, Iselin G, Eren S, Dob R et al. Adaptive ability, behavior and quality of life pre and posttraumatic brain injury in childhood. Disabil Rehabil. 2012.

[3] Aitken ME, Mele N, Barrett KW. Recovery of injured children: parent perspectives on family needs. Arch Phys Med Rehab. 2004;85(4):567-73.

[4] Catalano RF, Oesterle S, Fleming CB, Hawkins JD. The importance of bonding to school for healthy development: findings from the social development research group. J School Health. 2004;74(7):252-61.

[5] Beaulieu CL. Rehabilitation and outcome following pediatric traumatic brain injury. The Surgical Clinics of North America. 2002;82(2):393-408.

[6] Stewart-Scott AM, Douglas JM. Educational outcome for secondary and postsecondary students following traumatic brain injury. Brain Injury. 1998;12(4):317-31.

[7] Sharp NL, Bye RA, Llewellyn GM, Cusick A. Fitting back in: adolescents returning to school after severe acquired brain injury. Disabil Rehabil. 2006;28(12):767-78.

[8] Galvin J, Froude EH, McAleer J. Children’s participation in home, school and community life after acquired brain injury. Aust Occup Ther J. 2010;57(2):118-26.

[9] Anderson V, Catroppa C, Morse S, Haritou F, Rosenfeld J. Functional plasticity or vulnerability after early brain injury? Pediatrics. 2005;116(6):1374-82.

[10] Anderson VA, Catroppa C, Haritou F, Morse S, Rosenfeld JV. Identifying factors contributing to child and family outcome 30 months after traumatic brain injury in children. J Neurol Neurosur PS. 2005;76(3):401-8.

[11] Catroppa C, Anderson VA, Morse SA, Haritou F, Rosenfeld JV. Outcome and predictors of functional recovery 5 years following pediatric traumatic brain injury (TBI). J Pediatr Psychol. 2008;33(7):707-18.

[12] Pinto PS, Poretti A, Meoded A, Tekes A, Huisman TA. The unique features of traumatic brain injury in children. Review of the characteristics of the pediatric skull and brain, mechanisms of trauma, patterns of injury, complications and their imaging findings – part 1. J Neuroimaging. 2012;22(2):e1-e17.

[13] Catroppa C, Anderson V. Recovery in memory function, and its relationship to academic success, at 24 months following pediatric TBI. Child Neuropsychol. 2007;13(3):240-61.

[14] Miller LJ, Donders J. Prediction of educational outcome after pediatric traumatic brain injury. Rehabil Psychol. 2003;48:237-41.

[15] Arnett AB, Peterson RL, Kirkwood MW, Taylor HG, Stancin T, et al. Behavioral and cognitive predictors of educational outcomes in pediatric traumatic brain injury. J Int Neuropsychol Soc. 2013;19(8):881-9.

[16] Australian Institute of Health and Welfare. Disability in Australia: trends in prevalence, education, employment and community living. Canberra: AIHW; 2008.

[17] Teasdale G, Jennett B. Assessment of coma and impaired consciousness. A practical scale. Lancet. 1974;2(7872):81-4.

[18] Marlowe WB. An intervention for children with disorders of executive functions. Dev Neuropsychol. 2000;18(3):445-54.

[19] Ewing-Cobbs L, Barnes M. Linguistic outcomes following traumatic brain injury in children. Semin Pediat Neurol. 2002;9(3):209-17.

[20] Semrud-Clikeman M. Pediatric Traumatic Brain injury: rehabilitation and transition to home and school. Appl Neuropsychol. 2010;17(2):116-22.

[21] Glang A, Todis B, Sublette P, Brown BE, Vaccaro M. Professional development in TBI for educators: the importance of context. J Head Trauma Rehab. 2010;25(6):426-32.

[22] Martin AJ. Holding back and holding behind: grade retention and students’ non-academic and academic outcomes. Brit Educ Res J. 2010;37(5):739-63.

[23] Hawley CA, Ward AB, Magnay AR, Mychalkiw W. Return to school after brain injury. Arch Dis Child. 2004;89(2):136-42.

[24] Slomine B, Locascio G. Cognitive rehabilitation for children with acquired brain injury. Dev Disabil Res Rev. 2009;15(2):133-43.


Evaluating women’s knowledge of the combined oral contraceptive pill in an Australian rural general practice setting

Background: In addition to the contraceptive action of the combined oral contraceptive pill (COCP), there are a number of other benefits to its use, such as menstrual cycle regulation. However, COCP use is also associated with a higher risk of thromboembolism. Despite the prevalence of COCP use, studies have indicated that women overall have poor knowledge of the COCP.

Aim: To evaluate women’s knowledge of the COCP in a rural general practice setting. The extent of knowledge was assessed in several domains including: COCP use and effectiveness, mechanism of action, and the risks and benefits of COCP use.

Methods: An observational study design was utilised. Women aged 18-50 years self-selected to complete an anonymous survey at a general practice in rural NSW. Women who were currently using, had previously used, or had never used the COCP were invited to participate. Women using a progesterone-only contraceptive were excluded. A total knowledge score on the usage and effects of the COCP was calculated for each participant by assessing responses to 34 questions for an overall score out of 34.

Results: A total of 80 surveys were completed, revealing that 98% of respondents had used the COCP at some time in their lives, with almost 29% being current users. The mean total knowledge score for all participants was 14.4 (SD = 4.9) out of a possible 34 (range: 5-26). There was no significant difference in total knowledge score between current and previous users (p = 0.56).

Conclusion: The women surveyed in this study appear to have substantial gaps in their knowledge of the COCP. This study provides insight into specific knowledge areas that require further education and clarification during COCP counselling sessions (especially those conducted by a GP) to encourage improved knowledge of the COCP by women in this particular setting.

Introduction

The combined oral contraceptive pill (COCP) is an oral hormonal contraceptive that contains synthetic oestrogen and progesterone. Since it was first made available in Australia in 1961, the COCP has become the principal contraceptive method of choice among Australian women [1]. Contraceptive management is a common reason for GP consultation, with the COCP being the most frequently prescribed contraceptive [1].

Though it is well known for its contraceptive action, there are a number of additional benefits associated with COCP use [2-11]. There is decreased risk of ovarian and endometrial cancers [2,3,5,6] and reduced risk of benign breast disease, functional ovarian cysts, ectopic pregnancies, and pelvic inflammatory disease [2-4,7]. The COCP is also beneficial in that it helps to regulate the menstrual cycle, and reduces dysmenorrhoea, menorrhagia, and endometriosis-associated pain [2,3,8,9]. Acne and the effects of hyperandrogenism may also be minimised with COCP use [2-4,10].

Despite these many benefits, there are several risks associated with COCP use. The introduction of low-dose COCPs brought a significant improvement in the safety profile, particularly a reduction in thromboembolism [2,3]. Nonetheless, COCP use does increase the risk of thromboembolism, stroke, and myocardial infarction [2,3,12,13], although these complications are rare in otherwise healthy women [2]. Women over the age of 35, smokers, and women who are obese have a higher risk of thromboembolism with COCP use [11,14]. The evidence is mixed as to whether the COCP increases the risk of breast cancer [2,15]. The current consensus is that the COCP does increase risk, but that this risk is very small (approximately one extra case per year for every 100,000 women) and becomes negligible ten years after cessation of use [15,16]; research is ongoing.

The COCP has been shown to be a very effective contraceptive with perfect use (a failure rate of 0.3%); however, its typical-use failure rate is as high as 9% [17,18]. These figures were generated by an American study by Trussell [18] and are frequently cited in the Australian literature. The higher typical-use failure rate is most commonly attributed to incorrect or inconsistent use [2]. Thus, unplanned pregnancy is an important risk for women taking the COCP to consider. There is little data to suggest that sound knowledge of the COCP correlates with improved behavioural changes and related outcomes such as fewer unintended pregnancies [19,20]. Nevertheless, a better understanding of this common medication is likely to be a significant contributing factor in reducing the current failure rate, which is why studies assessing women’s knowledge of the COCP are important.
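
The gap between perfect-use and typical-use failure rates compounds over time. Assuming, for illustration, a constant annual failure rate and independence across years (a simplification), the cumulative risk can be sketched as:

    # Cumulative probability of at least one unintended pregnancy over
    # n years, assuming a constant annual failure rate and independence
    # across years (an illustrative simplification).
    def cumulative_failure(annual_rate: float, years: int) -> float:
        return 1 - (1 - annual_rate) ** years

    for rate, label in [(0.003, "perfect use"), (0.09, "typical use")]:
        print(f"{label}: {cumulative_failure(rate, 5):.1%} over 5 years")
    # perfect use: ~1.5%; typical use: ~37.6%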

Research conducted in a diverse range of settings has indicated that women’s knowledge of the COCP is generally poor [11,19,21-24]. A comprehensive search of the literature, however, revealed a paucity of studies examining such knowledge amongst rural Australian women; only one Australian study has examined women’s knowledge of the COCP from a national perspective [11], and no international studies have focused on rural study populations. As such, this study aimed to evaluate the level of knowledge that women attending an Australian rural general practice have regarding the COCP. The extent of knowledge was assessed across several domains: COCP use and effectiveness, mechanism of action, and the potential risks and benefits of COCP use.

Methods

Participants

Participants eligible for inclusion were women of reproductive age, between 18 and 50 years, who were patients of a New South Wales rural general practice, and who attended the practice during the study period. Women who were currently using or had previously used the COCP were invited to participate, as were women who had never taken the COCP. Male patients and women taking a progesterone-only oral contraceptive were excluded from this study due to the nature of the research question. A total of 80 responses were collected and all were used in data analysis.

Study design and survey

This study utilised an observational study design through the provision of a survey to participants. The survey included two basic demographic questions (age and level of education) and five questions assessing personal COCP usage patterns. The questions assessing knowledge covered several domains including: COCP use and effectiveness, mechanism of action, and the potential risks and benefits of COCP use. Additionally, participants were asked about their information sources regarding the COCP.

Recruitment and data collection

Women attending the medical practice self-selected to complete the survey. Participant information sheets were attached to each survey and were made available at the reception desk of the practice. Posters advertising the survey were also displayed in the waiting room area. Participation was entirely voluntary and anonymous, with consent being implied from completion of the survey. Completed surveys were returned to a secure box at the reception desk, with access to returned surveys and subsequent generated data being limited exclusively to the lead researcher. Data collection occurred between October and December 2014.

Ethics approval was granted by the University of Wollongong (UoW) Human Research Ethics Committee in collaboration with the UoW Graduate School of Medicine.

Statistical analysis

Survey data was processed using Microsoft Excel™. P-values comparing the proportions of correct responses between current and previous COCP users were calculated using z-tests, with a significance level of ≤ 0.05.
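
A two-proportion z-test of this kind can be sketched as follows; as a check, the counts below are taken from the “minor alcohol consumption” item in Table 3 and reproduce the reported p-value of 0.38 (illustrative code, not the study's own):

    from statsmodels.stats.proportion import proportions_ztest

    # Correct responses for the "minor alcohol consumption" item:
    # 17/23 current users vs. 35/55 previous users (Table 3).
    count = [17, 35]
    nobs = [23, 55]
    z, p = proportions_ztest(count, nobs)
    print(f"z = {z:.2f}, p = {p:.2f}")  # p ≈ 0.38, as reported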

A “total knowledge score” was also calculated for each participant by summing the marks for questions 8, 10, and 11 of the survey, with one mark awarded for each correct response. Question 8 comprised 6 sub-questions, question 10 comprised 13, and question 11 comprised 15; the maximum possible score was therefore 34. The mean total knowledge score was calculated by averaging the scores of all participants. The total knowledge scores of current versus previous COCP users were compared using the Mann-Whitney U test with a significance level of ≤ 0.05.
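
A sketch of the scoring and comparison in Python (illustrative only; the column layout is hypothetical):

    import pandas as pd
    from scipy import stats

    # Hypothetical layout: one row per participant, with items from
    # questions 8, 10, and 11 coded 1 (correct) or 0 (incorrect).
    df = pd.read_csv("cocp_survey.csv")
    item_cols = [c for c in df.columns
                 if c.startswith(("q8_", "q10_", "q11_"))]
    df["total_score"] = df[item_cols].sum(axis=1)  # maximum of 34

    current = df.loc[df["current_user"] == 1, "total_score"]
    previous = df.loc[df["current_user"] == 0, "total_score"]
    u_stat, p = stats.mannwhitneyu(current, previous,
                                   alternative="two-sided")
    print(f"Mann-Whitney U = {u_stat:.0f}, p = {p:.2f}")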

For the purpose of this study, a score of 80% or above for each individual response item was designated as an adequate level of knowledge.

Results

Sample characteristics

In total, 80 responses were received during the study period. Table 1 shows basic demographic information of the study participants. The mean age of the sample was 32.1 years (standard deviation = 8.8).

Table 1: Demographic information of sample (n = 80)

Variable n (%)
Q.1 Age (years)
18-20 8 (10%)
21-24 17 (21%)
25-30 10 (13%)
31-34 17 (21%)
35-40 11 (14%)
40-50 17 (21%)
Q.2 Education level
Year 10 18 (23%)
Year 12 24 (30%)
Undergraduate degree 16 (20%)
Postgraduate degree 6 (8%)
TAFE qualification 12 (15%)
Other 4 (5%)

Personal COCP usage information

Of the respondents, 98% (n = 78) had taken the COCP at some point in their lives (question 3 of the survey). Further information regarding usage for women who had previously or were currently taking the COCP is listed in Table 2. Women who had never taken the COCP were not required to complete these questions (questions 4 to 7).

Table 2: Usage information for women who are currently using or have previously used the COCP

Variable n (%*)
Q.4 Current COCP usage (n = 78)
Yes 23 (29%)
No 55 (69%)
Q.5 Duration of COCP usage (n = 77)
< 1 year 5 (6%)
1 – 5 years 29 (36%)
5 – 10 years 17 (21%)
> 10 years 26 (33%)
Q.6 Has an active tablet ever been missed? (n = 78)
Yes 64 (80%)
No 13 (16%)
Don’t Know 1 (1%)
Q.7 Frequency of missing an active tablet (n = 77)
Never 12 (15%)
Only one time 4 (5%)
Once a year 11 (14%)
Once every few months 30 (38%)
Once a month 16 (20%)
Once a week 4 (5%)

*Percentages calculated using the total sample (n = 80)

Knowledge domains

COCP use and effectiveness

Participants were asked to complete questions that assessed their general knowledge of the COCP and of the factors that may reduce the COCP’s contraceptive effect.

In terms of general knowledge (question 8 of the survey), 96% of participants correctly identified that the COCP needs to be taken every day to be an effective contraceptive, with 94% correctly identifying that it should be taken at the same time every day. Only 28% of women were aware that the COCP is not the most effective contraceptive currently available, with 13% of current COCP users selecting the correct answer (compared with 35% of previous users).

Of the factors that may reduce the contraceptive effect of the COCP (question 10), missing one active pill by more than 12 hours and missing more than one active pill were correctly identified by 84% and 94% of women, respectively. Other factors that potentially reduce the contraceptive effect (with the percentage of participants selecting the correct response in brackets) were: St John’s wort (20%), epilepsy medications (14%), vomiting (79%), and severe diarrhoea (61%). Two-thirds of women incorrectly indicated that antibiotics other than rifampicin and rifabutin may reduce the contraceptive benefit. There was no significant difference between current and previous COCP users in the number of participants selecting the correct response for any of the factors investigated. Participant responses are further detailed in Table 3.

Table 3: Participant responses to general knowledge questions relating to the COCP and factors that may reduce its contraceptive action

  Yes No Don’t know No response Number of current COCP users correct (n = 23) Number of previous COCP users correct (n = 55) P-value (significance ≤ 0.05)

Q.8 General knowledge
The pill needs to be taken every day to be an effective contraceptive *77 (96%) 1 (1%) 2 (3%) 22 (96%) 53 (96%) 0.88
The pill should be taken at approximately the same time every day *75 (94%) 3 (4%) 2 (3%) 23 (100%) 51 (93%) 0.18
It is acceptable to continue taking active tablets without taking the inactive tablets in between *43 (54%) 16 (20%) 21 (26%) 12 (52%) 31 (56%) 0.73
The pill is the most effective form of contraception currently available when used correctly 44 (55%) *22 (28%) 14 (18%) 3 (13%) 19 (35%) 0.054
It is possible to fall pregnant while taking the pill even with perfect use *63 (79%) 10 (13%) 6 (8%) 1 (1%) 16 (70%) 45 (82%) 0.23
It is important to take a break from using the pill 26 (33%) *20 (25%) 34 (43%) 6 (26%) 14 (25%) 0.95
Q.10 Factors that may reduce the contraceptive benefit of the COCP
Missing one active pill by less than 12 hours 25 (31%) *34 (43%) 17 (21%) 4 (5%) 11 (48%) 23 (42%) 0.62
Missing one active pill by more than 12 hours *67 (84%) 6 (8%) 7 (9%) 19 (83%) 48 (87%) 0.58
Missing more than one active pill *75 (94%) 1 (1%) 3 (4%) 1 (1%) 23 (100%) 51 (93%) 0.18
Missing one or more inactive pill/s 22 (28%) *43 (54%) 14 (18%) 1 (1%) 14 (61%) 28 (51%) 0.42
St John’s Wort herbal preparation *16 (20%) 9 (11%) 55 (69%) 5 (22%) 11 (20%) 0.86
Epilepsy medications such as phenytoin or carbamazepine *11 (14%) 3 (4%) 66 (83%) 3 (13%) 8 (15%) 0.86
Vomiting *63 (79%) 5 (6%) 12 (15%) 19 (83%) 44 (80%) 0.79
Severe diarrhoea *49 (61%) 10 (13%) 21 (27%) 15 (65%) 34 (62%) 0.78
Smoking 6 (8%) *36 (45%) 38 (48%) 9 (39%) 26 (47%) 0.51
Antibiotics such as rifampicin and rifabutin *53 (66%) 3 (4%) 24 (30%) 14 (61%) 38 (69%) 0.48
Other antibiotics (when taken without side-effects like vomiting/diarrhoea) 53 (66%) *2 (3%) 25 (31%) 0 (0%) 2 (4%) 0.35
Minor alcohol consumption (e.g. an occasional alcoholic drink/s not on a regular basis) 6 (8%) *52 (65%) 22 (28%) 17 (74%) 35 (64%) 0.38
Excessive alcohol consumption (e.g. drinking amounts that cause vomiting, diarrhoea, poor concentration or memory, or significant liver damage) *43 (54%) 13 (16%) 24 (30%) 10 (43%) 32 (58%) 0.23

*Indicates the correct answer

Mechanism of action

Only 58% of women surveyed correctly identified that the COCP acts to prevent ovulation; this represented 44% of current COCP users and 64% of previous COCP users. Furthermore, only 3% of the study sample correctly identified all three mechanisms of action (preventing ovulation, thickening of cervical mucus, and helping to prevent adherence of the embryo to the endometrium).

Risks and benefits of COCP use

Frequencies of responses to questions assessing knowledge of the potential risks and benefits of the COCP are shown in Table 4. The conditions in which COCP use may be beneficial (with the percentages of participants selecting the correct responses listed in brackets) were as follows: menstrual disturbances (60%), acne (56%), endometriosis-associated pain (28%), ectopic pregnancy (9%), and ovarian and endometrial cancer (6%). Fifty-nine percent of women correctly identified that the COCP has no effect on the risk of contracting a sexually transmitted infection (STI). Weight gain was incorrectly identified as a risk associated with taking the COCP by 75% of women, with only 5% of participants selecting the correct answer of “no effect”. COCP use increases the risk of cardiovascular disease, which 39% of women correctly identified. For the majority of these questions, “don’t know” was the response selected by a large proportion of participants.

Table 4: Participant responses regarding effects of the COCP on level of risk for various conditions

Q.11 Decreases No effect Increases Don’t know No response Number of current COCP users correct (n = 23) Number of previous COCP users correct (n = 55) P-value (significance ≤ 0.05)

Ectopic pregnancy *7 (9%) 18 (23%) 11 (14%) 44 (55%) 2 (9%) 4 (7%) 0.83
Birth defects 2 (3%) *33 (41%) 8 (10%) 37 (46%) 4 (17%) 29 (53%) 0.004
Infertility 3 (4%) *30 (38%) 14 (18%) 33 (41%) 9 (39%) 21 (38%) 0.94
Cardiovascular disease (stroke, hypertension, clots) 2 (3%) 14 (18%) *31 (39%) 33 (41%) 10 (43%) 21 (38%) 0.66
Benign breast disease *4 (5%) 17 (21%) 16 (20%) 42 (53%) 1 (1%) 0 (0%) 4 (7%) 0.18
Functional ovarian cysts *9 (11%) 12 (15%) 11 (14%) 47 (59%) 1 (1%) 4 (17%) 5 (9%) 0.29
Endometriosis-associated pain *22 (28%) 9 (11%) 3 (4%) 45 (56%) 1 (1%) 8 (35%) 14 (25%) 0.41
Breast cancer 4 (5%) 17 (21%) *18 (23%) 41 (51%) 6 (26%) 13 (22%) 0.82
Ovarian cancer *5 (6%) 18 (23%) 11 (14%) 46 (58%) 2 (9%) 3 (5%) 0.59
Endometrial cancer *5 (6%) 18 (23%) 7 (9%) 50 (63%) 1 (4%) 4 (7%) 0.63
Menstrual disturbances *48 (60%) 6 (8%) 9 (11%) 13 (16%) 4 (5%) 12 (52%) 35 (64%) 0.35
Acne *45 (56%) 4 (5%) 14 (18%) 16 (20%) 1 (1%) 11 (48%) 33 (60%) 0.32
Weight gain 1 (1%) *4 (5%) 60 (75%) 13 (16%) 2 (3%) 1 (4%) 3 (5%) 0.84
Pelvic inflammatory disease *6 (8%) 14 (18%) 9 (11%) 50 (63%) 1 (1%) 2 (9%) 4 (7%) 0.83
Sexually transmitted infections 4 (5%) *47 (59%) 7 (9%) 20 (25%) 2 (3%) 11 (48%) 36 (65%) 0.12

*Indicates the correct answer
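
The per-item p-values in Tables 3 and 4 compare correct-response rates between current and previous COCP users. As a minimal sketch, the birth defects result (p = 0.004) can be reproduced from the tabulated counts with a Pearson chi-squared test of independence; the original analysis method is not restated in this section, so the choice of test (and the omission of a continuity correction) is an assumption.

  # Sketch only: assumes a Pearson chi-squared test without continuity
  # correction; the study's actual analysis method is not restated here.
  from scipy.stats import chi2_contingency

  # Birth defects item (Table 4): correct vs incorrect responses among
  # current (n=23) and previous (n=55) COCP users.
  table = [
      [4, 23 - 4],    # current users: 4 correct, 19 incorrect
      [29, 55 - 29],  # previous users: 29 correct, 26 incorrect
  ]

  chi2, p, dof, expected = chi2_contingency(table, correction=False)
  print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # ~8.3 and ~0.004, as in Table 4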

Question 12 of the survey asked women to identify factors that can potentially increase a woman’s risk of thromboembolism while taking the COCP. The most frequently identified risk factors were smoking and obesity (selected by 74% and 69% of participants, respectively). Only 38% correctly identified all three risk factors, which also include age greater than 35 years [11,14].

Information sources

Participants were asked in question 13 of the survey where they source information regarding the COCP. “General practitioner” was the most frequently selected option at 90% (n = 72). Further response details are shown in Figure 1.


Figure 1: Survey participants’ information sources regarding the COCP

Total knowledge score

The mean total knowledge score for all participants was 14.4 (SD = 4.86) out of a possible 34 (range = 5 to 26). The mean total knowledge score for current COCP users was 14.0 (SD = 4.81), with previous COCP users achieving a mean score of 14.8 (SD = 4.75). Women who had never used the COCP achieved a mean total knowledge score of 6.5. There was no significant difference in total knowledge score between current and previous users of the COCP (p = 0.56).
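
As a minimal sketch, the comparison between current and previous users can be reconstructed from the reported summary statistics alone; the study’s actual test is not stated in this section, so an independent-samples t-test with equal variances is assumed here (it yields p ≈ 0.5, consistent with the reported p = 0.56).

  # Sketch only: assumes an independent-samples t-test computed from the
  # reported means, SDs, and group sizes; the study's actual test may differ.
  from scipy.stats import ttest_ind_from_stats

  t, p = ttest_ind_from_stats(
      mean1=14.0, std1=4.81, nobs1=23,  # current COCP users
      mean2=14.8, std2=4.75, nobs2=55,  # previous COCP users
  )
  print(f"t = {t:.2f}, p = {p:.2f}")  # no significant difference between groups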

Discussion

This study has found deficiencies in women’s knowledge of the COCP in all domains that were assessed. This finding is consistent with the available literature [11,19,21-24]. For the purpose of this study, a score of 80% or above for each individual response item was designated as an adequate level of knowledge. The choice of this arbitrary threshold was informed by a recent systematic review by Hall et al. [19]. Though many studies concluded women have a poor level of knowledge regarding oral contraceptives, Hall et al. noted that among the studies they reviewed, “what constituted deficient or adequate knowledge was not clearly defined”. For most items, the proportion of correct responses fell below the 80% required to be considered adequate knowledge. No significant differences were found in the number of correct responses per question between current COCP users and previous users, except for one question regarding whether the COCP has an effect on the risk of birth defects (p = 0.004). Furthermore, the mean total knowledge score for both current and previous COCP users was less than 50% of the possible maximum score.

Several key findings discussed below stand out as being important focus areas for improved contraceptive counselling.

COCP use and effectiveness

This study revealed that 55% of women believe the COCP is the most effective form of contraception currently available when used correctly, with only 13% of current COCP users correctly identifying that it is not. Examples of contraceptives with lower failure rates than the COCP include long-acting reversible contraceptives (LARCs) such as the implantable rod (typical and perfect-use failure rate 0.05%) and intrauterine devices such as the Mirena (typical and perfect-use failure rate 0.2%) [17].

Women were largely unaware that antibiotics (other than rifampicin and rifabutin) are no longer considered to have a negative impact on the contraceptive effect of the COCP [25]: 66% of women indicated that taking antibiotics (without side effects such as vomiting and diarrhoea) would reduce the contraceptive effect of the COCP.

There were mixed results regarding whether it is important to take a break from the COCP, with only 25% of women correctly identifying that there is no requirement for a break. Interestingly, Philipson, Wakefield, and Kasparian [11] found that 25.6% of their participants thought it was healthy to stop COCP use for a while (the length of time was not stipulated in the question).

Mechanism of action

Only 58% of women correctly identified the main mechanism by which the COCP works, with 3% correctly identifying all three mechanisms. A systematic review by Hall et al. [19] found that understanding of the mechanism of action is infrequently assessed in similar studies. A study by Rajasekar and Bigrigg [23], not included in the aforementioned review, found that 81.5% of women understood that the oral contraceptive prevented ovulation every month, but that 32% also thought that it killed sperm.

Risks of COCP use

Of the study participants, 39% correctly identified that the COCP increases the risk of cardiovascular disease (hypertension, stroke, and other thromboembolic events). Similarly, Philipson, Wakefield, and Kasparian [11] found that 46.5% of women identified an increased risk of blood clots. Although 74% of women in our study identified smoking as a factor that, when combined with the COCP, further increases thromboembolism risk, only 38% of women correctly identified all three risk factors (obesity, age over 35 years, and smoking).

Women appear to erroneously believe that the COCP causes weight gain (75% of respondents), although a causal relationship has never been established. A Cochrane review found no significant difference in weight change between those taking placebo and those taking combined contraceptives, though further research was indicated [26]. Previous studies report similar findings. Fletcher, Bryden, and Bonin [27] found that 30.6% of respondents were concerned about weight gain on the pill, with 23.4% of respondents reporting weight gain as an experienced side effect. Gaudet et al. [28] found that 51.5% of respondents thought weight would increase on the pill.

Only 59% of women were aware that the COCP has no effect on the risk of contracting STIs, with 48% of current COCP users identifying the correct answer. This result is lower than that found by Philipson, Wakefield, and Kasparian [11], with 81.3% of their respondents identifying the correct answer.

Benefits of COCP use

There was a low level of understanding regarding decreased ovarian and endometrial cancer risk, but a better (though still low) understanding that COCP use can improve acne and menstrual disturbances. Poor understanding of COCP benefits appears consistent among studies, with Philipson, Wakefield, and Kasparian [11] finding that 13.7% correctly identified that COCP use decreases ovarian cancer risk and 10% identified a decreased risk of endometrial cancer.

Study limitations

Noting that approximately 29% of this study sample was currently taking the COCP, knowledge may have been forgotten after ceasing the COCP or changing contraceptive methods. Additionally, it cannot be expected that women will remember all details relating to the COCP, as with any medication. Significant limitations of this study include the low response rate and the likelihood of self-selection bias. Very few women who had never taken the pill completed the survey. We must consider whether this is a true representation, or whether it reflects the fact that women who have previously taken or are currently taking the COCP are more likely to complete the survey (perhaps due to a perceived familiarity with the topic). As the study was conducted in a general practice, a bias may also exist towards women who are likely to attend such medical facilities. An additional limitation is that data was generated from a single general practice, and the results may therefore reflect specific factors associated with the GPs working there. Due to how the study was implemented, it cannot be determined whether participants had ever received contraceptive counselling from the practitioners within this centre, or whether one or multiple GPs from this practice were involved in the counselling and prescribing of the COCP. At the time the study was conducted, seven GPs were working within the practice, so participants are likely patients of a number of these GPs with no particular focus on an individual practitioner’s patient list. Both the self-selection and the single-centre nature of this study mean that the results cannot be generalised. The survey was developed after a review of current literature and was not a validated instrument. Assessment of the reading level of the survey and a pilot study prior to data collection would have improved the validity of the findings. Additionally, statistical analysis was limited to current and previous COCP users, as the sample of participants who had never used the COCP was too small to allow reliable calculations.

Implications for clinical practice and future directions

A recent analysis of the Bettering the Evaluation and Care of Health (BEACH) data by Mazza et al. [1] found that COCP prescribing is a common focus of GP consultations concerning contraceptive management. Our study also indicated that GPs are the main source of information regarding the COCP. Given that the COCP is a prescription medication, routine medical consultations are required and offer ample opportunity for medical practitioners to ensure appropriate use and knowledge of the COCP. This is especially so since 54% of participants in our study indicated they had used or were using the pill for more than five years. In their study assessing Australian women’s knowledge of the COCP, Philipson, Wakefield, and Kasparian [11] found a positive correlation between duration of pill usage and level of knowledge.

Although our study suggested that GPs are the main source of information regarding the COCP, many other information sources were identified, so we cannot assume the level of knowledge of the surveyed participants is the result of GP intervention alone. Therefore, other healthcare professionals who may provide COCP counselling have a role in helping to improve women’s knowledge of the COCP. Given that the Internet, friends, and family members were also important information sources for women regarding the COCP, awareness and appropriate counselling are also necessary to identify and address any misinformation that women may have obtained from these sources.

This study provides a unique perspective in that it assesses rural Australian women’s knowledge of the COCP. The aforementioned study by Philipson, Wakefield, and Kasparian [11], which sampled women randomly from each state, was the only other Australian study identified in our review of the literature. As our study was limited to a rural general practice setting, future research may wish to expand on this data by investigating other rural practices or by comparing results to metropolitan practices.

Rural Australians experience poorer health outcomes compared to their metropolitan counterparts [29,30]. Health literacy is likely a contributing factor to such outcomes [30]. An analysis of the 2006 Adult Literacy and Life Skills Survey data by the Australian Bureau of Statistics shows that health literacy levels are low across the board: 42% of Australian urban populations demonstrated a literacy level of 3 (considered adequate) or greater, while 38% and 39% of inner regional and remote populations, respectively, did so. Outer regional populations had the lowest proportion demonstrating a literacy level of 3 or greater, at 36% [31]. In the context of the clinical environment there is a paucity of literature available, but one recent study by Wong et al. [32], comparing the health literacy of patients attending a rural and an urban rheumatology practice, found no significant difference between these groups. Despite research showing differences in health outcomes between rural and metropolitan populations of Australia [29,30], studies comparing the knowledge and health literacy of rural and metropolitan patients, particularly in relation to medications, proved difficult to find, so we cannot extrapolate the findings of the current study to comment on whether a general knowledge deficit exists.

Since this study was designed only to assess women’s level of knowledge about the COCP, and not the factors associated with that level of knowledge, further studies regarding what factors influence knowledge are also important. These may include factors relating to the primary care setting, such as: the impact of consultation timing; exploring the discussions and resources used during COCP consultations and whether counselling deficiencies exist; and assessing what information healthcare professionals deem clinically relevant on an individual patient basis, and whether this affects what information is provided to patients and therefore what knowledge they retain. Additional studies may wish to investigate the effectiveness of the product information sheet for the COCP, or whether women believe COCP information is easily accessible and where it can be obtained (for example, what limits women’s access to information from pharmacies or community health clinics). Future studies may also wish to explore whether rural-specific issues (for example, more limited access to healthcare providers) play a role.

Furthermore, additional studies that evaluate practical strategies for improving knowledge and information retention should also be undertaken. In the systematic review by Hall et al. [19] only four studies assessed interventions and their impact on contraceptive pill knowledge. Three of the four studies noted improved knowledge in at least one domain, highlighting that an array of additional educational materials may be beneficial in improving counselling sessions [19].

As more Australian-specific data accumulates about women’s knowledge of the COCP, better public health initiatives and education strategies can be implemented to improve outcomes. The results of this study may encourage healthcare professionals to better understand and review areas of their own counselling sessions. Improvements may be achieved through better addressing how to use the COCP, what will affect its contraceptive benefit, and common misconceptions. Additionally, healthcare professionals can be assured they have provided appropriate informed consent by discussing risks, benefits, and alternative options [33]. In the long term, this may eventually lead to improvements in the typical failure rate of the COCP and reduce the rate of unintended pregnancies.

Conclusion

The women surveyed in this study appear to have substantial gaps in their knowledge of the COCP despite a high prevalence and duration of usage. Although many other sources were also utilised for information on the COCP, GPs were the main source of information. As such, this study provides insight into specific knowledge areas that require further education and clarification during COCP counselling sessions to encourage improved knowledge of the COCP by women, particularly those in the rural Australian general practice setting.

Acknowledgements

The authors would like to thank the staff at the medical centre where this research was conducted for their support in facilitating this project.

Conflicts of interest

None declared.

References

[1] Mazza D, Harrison C, Taft A, Brijnath B, Britt H, Hussainy S, et al. Current contraceptive management in Australian general practice: an analysis of BEACH data. Med J Aust. 2012;197(2):110-4.

[2] Dragoman M. The combined oral contraceptive pill – recent developments, risks and benefits. Best Pract Res Clin Obstet Gynaecol. 2014;28(6):825-34.

[3] D’Souza R, Guillebaud J. Risks and benefits of oral contraceptive pills. Best Pract Res Clin Obstet Gynaecol. 2002;16(2):133-54.

[4] Schindler A. Non-contraceptive benefits of oral hormonal contraceptives. Int J Endocrinol Metab. 2013;11(1):41-7.

[5] Collaborative Group on Epidemiological Studies on Endometrial Cancer. Endometrial cancer and oral contraceptives: an individual participant meta-analysis of 27,276 women with endometrial cancer from 36 epidemiological studies. Lancet Oncol. 2015;16(9):1061-70.

[6] Havrilesky L, Moorman P, Lowery W, Gierisch J, Coeytaux R, Myers E, et al. Oral contraceptive pills as primary prevention for ovarian cancer. Obstet Gynecol. 2013;122(1):139-47.

[7] Vessey M, Yeates D. Oral contraceptives and benign breast disease: an update of findings in a large cohort study. Contraception. 2007;76(6):418-24.

[8] Harada T, Momoeda M, Taketani Y, Hoshiai H, Terakawa N. Low-dose oral contraceptive pill for dysmenorrhea associated with endometriosis: a placebo-controlled, double-blind, randomized trial. Fertil Steril. 2008;90(5):1583-8.

[9] Wong C, Farquhar C, Roberts H, Proctor M. Oral contraceptive pill for primary dysmenorrhoea. Cochrane Database Syst Rev. 2009.

[10] Arowojolu A, Gallo M, Lopez L, Grimes D. Combined oral contraceptive pills for treatment of acne. Cochrane Database Syst Rev. 2012.

[11] Philipson S, Wakefield C, Kasparian N. Women’s knowledge, beliefs, and information needs in relation to the risks and benefits associated with use of the oral contraceptive pill. Int J Womens Health. 2011;20(4):635-42.

[12] de Bastos M, Stegeman B, Rosendaal F, Van Hylckama Vlieg A, Helmerhorst F, Stijnen T, et al. Combined oral contraceptives: venous thrombosis. Cochrane Database Syst Rev. 2014.

[13] Roach R, Helmerhorst F, Lijfering W, Stijnen T, Algra A, Dekkers O. Combined oral contraceptives: the risk of myocardial infarction and ischemic stroke. Cochrane Database Syst Rev. 2015.

[14] Royal College of Obstetricians and Gynaecologists (RCOG). Green-top guideline No.40: Venous thromboembolism and hormonal contraception [Internet]. RCOG; 2010 [cited 2015 March 24]. Available from: https://www.rcog.org.uk/en/guidelines-research-services/guidelines/gtg40/.

[15] Royal Australian and New Zealand College of Obstetricians and Gynaecologists (RANZCOG). College Statement C-Gyn 28: Combined hormonal contraceptives [Internet]. RANZCOG; 2012 [cited 2015 March 24]. Available from: https://www.ranzcog.edu.au/college-statements-guidelines.html#gynaecology.

[16] Cancer Council Australia. Position Statement: Combined oral contraceptives and cancer risk [Internet]. Cancer Council Australia; 2006 [cited 2015 Feb 7]. Available from: http://www.cancer.org.au/policy-and-advocacy/position-statements/oral-contraceptives.html.

[17] eTG Complete [Internet]. Melbourne (Vic): Therapeutic Guidelines Limited; 2015. Hormonal contraception: introduction; [cited 2015 March 24]. Available from: http://online.tg.org.au.ezproxy.uow.edu.au/ip/desktop/index.htm.

[18] Trussell J. Contraceptive failure in the United States. Contraception. 2011;83(5):397-404.

[19] Hall K, Castaño P, Stone P, Westhoff C. Measuring oral contraceptive knowledge: a review of research findings and limitations. Patient Educ Couns. 2010;81(3):388-94.

[20] Black K, Bateson D, Harvey C. Australian women need increased access to long-acting reversible contraception. Med J Aust. 2013;199(5):317-8.

[21] Bryden P, Fletcher P. Knowledge of the risks and benefits associated with oral contraception in a university-aged sample of users and non-users. Contraception. 2001;63(4):223-7.

[22] Davis T, Fredrickson D, Potter L, Brouillette R, Bocchini A, Parker R, et al. Patient understanding and use of oral contraceptive pills in a southern public health family planning clinic. South Med J. 2006;99(7):713-8.

[23] Rajasekar D, Bigrigg A. Pill knowledge amongst oral contraceptive users in family planning clinics in Scotland: facts, myths and fantasies. Eur J Contracept Reprod Health Care. 2000;5(1):85-90.

[24] Schrager S, Hoffmann S. Women’s knowledge of commonly used contraceptive methods. WMJ. 2008;107(7):327-30.

[25] Family Planning New South Wales, Family Planning Queensland, Family Planning Victoria. Contraception: an Australian clinical practice handbook. 3rd ed. Canberra: Family Planning New South Wales, Family Planning Queensland, Family Planning Victoria; 2012.

[26] Gallo M, Lopez L, Grimes D, Carayon F, Schulz K, Helmerhorst F. Combination contraceptives: effects on weight. Cochrane Database Syst Rev. 2014.

[27] Fletcher P, Bryden P, Bonin E. Preliminary examination of oral contraceptive use among university-aged females. Contraception. 2001;63(4):229-33.

[28] Gaudet L, Kives S, Hahn P, Reid R. What women believe about oral contraceptives and the effect of counselling. Contraception. 2004;69(1):31-6.

[29] Australian Institute of Health and Welfare (AIHW). Rural, regional and remote health: Indicators of health status and determinants of health [Internet]. AIHW; 2008 [cited 2016 May 4]. Available from: http://www.aihw.gov.au/publication-detail/?id=6442468076.

[30] Australian Institute of Health and Welfare. Australia’s health 2014 [Internet]. AIHW; 2014 [cited 2016 May 4]. Available from: http://www.aihw.gov.au/WorkArea/DownloadAsset.aspx?id=60129548150.

[31] Australian Bureau of Statistics (ABS). Health literacy Australia 2006 [Internet]. ABS; 2008 [cited 2016 May 4]. Available from: http://www.abs.gov.au/ausstats/abs@.nsf/Latestproducts/4233.0Main%20Features22006?opendocument&tabname=Summary&prodno=4233.0&issue=2006&num=&view=.

[32] Wong P, Christie L, Johnston J, Bowling A, Freeman D, Bagga H, et al. How well do patients understand written instructions? Health literacy assessment in rural and urban rheumatology outpatients. Medicine. 2014;93(25):1-9.

[33] Vogt C, Schaefer M. Disparities in knowledge and interest about benefits and risks of combined oral contraceptives. Eur J Contracept Reprod Health Care. 2011;16(3):183-93.



Clavicle fractures: An audit of current management practices at a tertiary hospital, a review of the literature and advice for junior staff

Abstract

Background: The clavicle is one of the most commonly fractured bones in the body, and interns are often delegated to treat these cases in the emergency department. This audit assesses adherence to a tertiary hospital’s clavicle fracture protocol and reviews the literature to suggest updates based on current evidence.

Methods: A retrospective case note and radiograph audit was undertaken to assess adherence to current protocols for the calendar years 2012 and 2013. A literature search was performed to find the most up-to-date evidence for future clavicle fracture management.

Results: There were 131 clavicle fractures reviewed. An AP x-ray was taken in 120/122 cases (98.3%). The orthopaedic registrar was notified for 6/7 (86%) cases with respiratory, neurovascular or skin compromise. Up to 83/131 (63%) patients were provided with a broad arm sling. Mean initial follow-up was at ten days (range 3-20 days) and 39/95 (41%) followed x-ray protocol at this review. Appropriate rehabilitation advice was documented in 12/82 (14.6%) cases and the mean duration until discharge was 52.25 days.

Conclusion: Despite the high frequency of clavicle fractures, there are still significant errors that can be, and are being, made in their management. It is important for all medical students and junior doctors to become familiar with this orthopaedic condition, as it is a common presentation that is often initially managed by junior medical staff.

Introduction

Clavicle fractures are among the most common fractures in adults, with an incidence of 29-64 per 100,000 people per year. [1] Fractured clavicles account for up to 5% of all fractures and up to 44% of fractures to the shoulder girdle. [1,2]

Clavicle fractures are commonly managed by junior staff, and the current adult fracture protocol at our institution guides this management (Figure 1). The protocol was issued in July 2006 and reviewed in 2009. However, it remains based on evidence the most recent of which was published in 1997. [3-8] There has been an influx of published literature on clavicle fractures over the last decade, including two well-designed multi-centre randomised controlled trials, providing recommendations on which an updated protocol can be based. [1,9-13] These articles demonstrate the shift from conservative management to surgical management for displaced and comminuted fractures of the adult clavicle.

This retrospective case note and radiograph audit firstly assesses the adherence of management practices at a tertiary hospital to the current institutional protocol for the calendar years 2012 and 2013. Secondly, it discusses the standards of best practice in the current literature, with the view to providing recommendations for alterations to hospital protocol and management practice.

Methods

Process

In preparing for this audit a literature review was conducted to identify the gold standard and best practice guidelines for the investigation, management, and rehabilitation of clavicle fractures. Additionally, the hospital intranet was searched for any further documents including protocols, information sheets, and patient handouts. Consultation with the physiotherapy (PT) and occupational therapy (OT) departments was undertaken to assess any current gold standards, best practice or unwritten guidelines.

The case notes and radiographs of patients identified with a clavicle fracture were reviewed and adherence to the current protocol was assessed. Specifically, adherence to the following aspects of the protocol was scrutinised (Figure 1).

  1. All patients are to receive an anterior to posterior (AP) x-ray
  2. The orthopaedic registrar must be notified if there is respiratory, neurovascular or overlying skin compromise
  3. All patients are to receive a broad arm sling for acute management
  4. Outpatient follow up is to be booked for two weeks post injury
  5. Only postoperative patients are to have an x-ray on arrival (XROA) at the two week follow up
  6. All patients are to begin pendulum exercises immediately, range of motion (ROM) from two weeks, full active ROM (AROM) from 6 weeks or after clinically healed
  7. Return to sport (RTS) should be delayed for at least 4-6 months.
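
The audit operationalised each of these criteria as a pass/fail check per case. As a minimal sketch of how per-criterion adherence percentages could then be tallied, consider the following; the record fields (ap_xray, bas_given, followup_days) are hypothetical illustrations, not the audit’s actual data dictionary.

  # Sketch only: hypothetical case records and field names illustrating
  # how per-criterion adherence percentages could be computed.
  cases = [
      {"ap_xray": True, "bas_given": True, "followup_days": 14},
      {"ap_xray": True, "bas_given": False, "followup_days": 10},
  ]

  criteria = {
      "AP x-ray taken": lambda c: c["ap_xray"],
      "Broad arm sling provided": lambda c: c["bas_given"],
      "Two-week follow-up booked": lambda c: c["followup_days"] == 14,
  }

  for name, rule in criteria.items():
      met = sum(rule(c) for c in cases)
      print(f"{name}: {met}/{len(cases)} ({100 * met / len(cases):.1f}%)")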


Figure 1. The current adult clavicle fracture protocol against which adherence was audited.

Ethical approval

This audit was reviewed and approved by the local clinical human research ethics committee. No identifiable patient data was collected and all records were viewed on site in the medical records department. A retrospective case note and electronic record audit was performed for the calendar year of 2013. As insufficient data was available to make reliable conclusions, an additional calendar year, 2012, was included.

Patient recruitment

With the assistance of the orthopaedic department and the support of the project manager, all patients with clavicle fractures who presented to the emergency department (ED) or who were admitted to the wards in the calendar years 2012-13 were included. The hospital coding system, Inpatient Separations Information System (ISIS), was searched using the World Health Organisation (WHO) International Classification of Diseases, version 10 (ICD-10), codes for all clavicle fracture admissions (S4200-3 inclusive). In addition, the ED database for the two calendar years was hand-searched for provisional diagnoses relevant to clavicle fractures; hand-searching limited the selection bias that spelling errors would have introduced into an electronic search. These searches provided a list of 141 patient unit record numbers (URNs) that were provided to medical records for retrieval.

Data retrieval

All case notes were reviewed as soon as they became available to minimise loss due to the removal of records. Missing records were re-requested and viewed on multiple occasions until all records had been accounted for. Despite multiple searches, two case notes could not be retrieved, being listed on the system as in stock but unable to be located by staff. For these two cases the electronic records were viewed to minimise selection bias and to ensure all patients were analysed.

Each patient file was meticulously studied and cross-referenced against the electronic discharge summaries and encounters; radiographs were analysed using a picture archiving and communication system.

Data analysis

Data was collected and stored in a Microsoft Excel (Copyright Microsoft Corporation 2010) spreadsheet. Simple descriptive statistical analysis was performed using IBM SPSS version 22 (Copyright IBM Corporation and other(s) 1989, 2013).

Standards for adherence

In consultation with the orthopaedic department it was determined that 90% compliance with the current protocol would be deemed acceptable. Adherence was analysed collectively for the entire cohort but also separately for the two calendar years, surgical versus non-surgical patients, and then again against current literature recommendations. Only the collective data will be presented.

Results

Recruitment and demographics

The database searches resulted in the retrieval of 141 patient URNs. Of these, 131 were new clavicle fractures. There were 99 males and 42 females; 47 fractures were on the right and 84 on the left (Table 1). The dominant arm was affected in 17 cases, the non-dominant arm in 34, and hand dominance was not documented in 80 cases. There were three medial (2.3%), 93 central (71.0%) (Figure 2), 34 lateral (25.9%), and one combined middle and lateral fracture (other, 0.8%). This is almost identical to the fracture pattern distribution reported by Robinson. [11] Associated injuries were documented in 34/131 (26%) cases. This is slightly less than the 36% reported by Nowak, Mallmin and Larsson; however, there are differences in which skin abrasions were included in each study. [14]


Radiological adherence

An AP radiograph was taken for 120/122 (98.3%) patients (nine had outside films). The two patients who did not have an x-ray both re-presented, subsequently had an x-ray identifying a clavicle fracture, and were therefore included in this audit. Only 97/122 (79.5%) had two views taken of their clavicle fracture on presentation, despite current literature and orthopaedic dogma dictating that every fracture should be viewed from two angles and include two joints. [15,16]

At the two-week review, 39/95 (41%) cases followed protocol regarding XROA. The conservative group was not required to have an x-ray at two weeks, with adherence in 30/85 (35.3%); however, 13 of these had x-rays at subsequent appointments. For the conservative cases that did have x-rays at their first outpatient department (OPD) appointment, 3/55 (5.5%) resulted in a change of management toward surgery. Nine out of ten (90%) surgical patients had an x-ray to check the position of the metalwork at two weeks, as required.


Figure 2. The typical middle third clavicle fracture that presents a management dilemma.

Broad arm sling

Compliance with the protocol regarding broad arm sling application is summarised in Figure 3. Of note, 6/32 (18.8%) patients who were provided with a collar and cuff for acute management showed progressive displacement, and five of these required surgical fixation. If the benefit of the doubt is given and all those documented as given a ‘sling’ are combined with the broad arm sling (BAS) group, then 83/131 (63.3%) patients were correctly treated.


Figure 3. Adherence of staff to providing a broad arm sling as first-line management. BAS = Broad Arm Sling, C+C = Collar and Cuff, #HOH = Fractured Head of Humerus, OPD = Outpatient Department.

Registrar notification

The orthopaedic registrar was notified 46 times for the 131 cases analysed (35%), although according to the protocol notification is only required if there is respiratory, neurovascular, or skin compromise. For these indications, the registrar was notified on six out of seven occasions (86%). However, if notification were also required for fractures displaced >20 mm or shortened >15 mm, in addition to the associated injuries in the protocol, then the registrar was notified in 27/51 (53%) cases. Cases of which the orthopaedic registrar was not informed include one floating shoulder, three acromioclavicular joint (ACJ) separations, one head of humerus (HOH) fracture, and one patient with ipsilateral fractures of ribs 1-5.

Prior to the outpatient follow-up, six patients re-presented to ED, two of them on multiple occasions. Of these, two were treated with a collar and cuff and one with a sling.

Complications or associated injuries were present in 18/131 cases (13.7%): five with tented/compromised skin, six ACJ separations, four floating shoulders, one ipsilateral HOH fracture, one ulnar nerve paraesthesia, and one patient with multiple ipsilateral rib fractures. Of these, six underwent surgical fixation.

There were 14 surgeries (11 middle third, three lateral), and eleven patients received private orthopaedic management (Table 2).


Rehabilitation

There were 82 patients followed up in the orthopaedic OPD clinic. Twelve of these (14.6%) had sufficient documentation to suggest the patient had been provided with appropriate rehabilitation advice, comprising 4/9 (44%) surgical cases and 8/73 (11%) conservative cases. This left a large cohort of patients who had been given incomplete or no advice on what rehabilitation they could perform. Six patients attended their six-week review having been immobilised in their sling for the entire duration, leading to stiff, painful shoulders. Eight case notes mention seeking physiotherapy treatment, of which two cases were treated by the hospital’s physiotherapy department. There are no current physiotherapy handouts or protocols, and the occupational therapy department has no involvement in the management or rehabilitation of clavicle fractures.

Discussion

Clavicle fractures are a common presentation to any emergency department, representing 5% of all fractures, and are often managed by junior staff. This audit demonstrates that there is still some mismanagement and that further education of junior staff is required.

The gold standard for initial radiological review of clavicle fractures remains to be elucidated, but current evidence agrees on a standard AP radiograph plus a second radiograph tilted at an angle. [15-17] Only 97/131 (74%) patients received two different views of their clavicle fracture on presentation. The angle of the second view ranged from 5-30 degrees of AP cephalic tilt, with each five-degree increment in between represented. The optimal angle and direction of this second radiograph varies among authors, with some recommending a posterior to anterior (PA) 15-degree cephalic tilt while others recommend an AP 15-degree caudal tilt. [15-17] This variation makes it difficult to compare fracture patterns between clavicles when determining clinical management, and also hampers future retrospective analysis. It may be preferable to have two views from the same side to limit the manoeuvring of patients and reduce the time demand on the radiology department. A second AP view with 20 degrees of cephalic tilt has been recommended and would be the technically easiest view with minimal changes to current practices. [9]

Conservative treatment remains the management of choice for isolated non-displaced clavicle fractures. [12,18] The broad arm sling is recommended for clavicle fractures, [19] as the collar and cuff allows traction on the arm that risks further displacement of fracture segments. [20,21] Figure-of-eight slings have not been shown to be superior to the broad arm sling and are more uncomfortable and difficult to use. [22] Our results demonstrate that the collar and cuff is still being provided in many cases over the preferred broad arm sling. This may be due to confusion with the management of fractured neck of humerus, which requires distracting forces for fracture alignment. One-fifth of patients provided with a collar and cuff demonstrated significant progression of fracture displacement, with many of these requiring surgical fixation.

While most isolated non-displaced fractures of the clavicle are managed conservatively, it is important to know when to refer to the orthopaedic department for review. Clear operative indications include compromise to the skin, nerves, or vasculature, and grossly displaced or comminuted fractures. [23] Some authors also advocate for primary fixation of clavicle fractures with multi-trauma, ipsilateral shoulder injuries, ACJ involvement, high velocity mechanisms, or in young active individuals. [24,25] For these patients, the surgeon’s preference continues to dictate treatment. [26] As such, it is difficult to create a protocol that mandates management; the protocol is rather a guide to prompt management considerations.

Furthermore, over the past ten years there has been a growing body of literature expanding the relative and absolute surgical indications for clavicle fractures. [13,23,24,27] Newer studies have highlighted that previously used outcomes may not be the best end points for assessing management. Previously, radiological union was the sole primary outcome for assessment of fracture management. [18,28] Recent studies focussing on patient-centred outcomes such as pain and functional capacity, however, have highlighted a disparity between radiological union and satisfactory objective and subjective patient-centred outcomes. [9-14] Furthermore, older studies included children in outcome assessments, despite the greater regenerative capacity of this younger age group prior to fusion of the medial growth plate, leading to an overestimation of adult clavicle fracture recovery in these older studies. [20]

Recent studies have demonstrated sequelae including pain and neurological deficits occurring in up to 46% of clavicle fractures. [29] These sequelae are more likely in non- or mal-union of the fracture. Non-union rates for adult patients are as high as 15% and symptomatic mal-union up to 20-25%. [13] Risk factors for non-union include:

  • Smoking (33.3%)
  • Increasing age and female gender (often less symptomatic in this population)
  • Shortening >20 mm (increasing shortening increases non-union/mal-union)
  • Displacement >15 mm (27%)
  • Comminuted fractures (21.3%). [24,30]

The non-union rate for surgically treated patients is around 1-2%. [9,10,13] The number needed to treat (NNT) if all displaced clavicle fractures were operated on to prevent one symptomatic non-union would be 7.5; this number is reduced to 4.6 if symptomatic mal-unions are included. [13,24] The NNT drops to 1.7 if only those with multiple risk factors for non-union were surgically fixed. [24] Surgical risks must be considered, however, including infection, implant irritation, neurological damage, and even death. [9,10,31] For widely displaced middle third clavicle fractures, surgical plate fixation has been shown to be superior to conservative management. [9,10]
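
The quoted figures follow from the standard NNT arithmetic, NNT = 1/ARR, where ARR is the absolute risk reduction. As a rough reconstruction (the exact event rates behind the published 7.5 come from the cited trials), taking a non-union risk of about 15% with conservative management and 1-2% with fixation gives:

  NNT = \frac{1}{ARR} = \frac{1}{p_{\text{conservative}} - p_{\text{surgical}}} \approx \frac{1}{0.15 - 0.015} \approx 7.4

Including symptomatic mal-unions raises the baseline event rate, which shrinks the NNT towards the quoted 4.6.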

In this audit, the orthopaedic registrar was notified for most cases in which there was neurovascular, respiratory, or skin compromise. However, the current protocol also mentions displacement and shortening, with a decision to be made on whether the fracture is stable or unstable and whether the patient receives surgical or conservative management. If the orthopaedic registrar was required to be notified to make this decision, then they were notified in only half the cases. Requiring such reviews may be seen as an increased demand on the ED, with patients waiting in beds for an orthopaedic review that may be considered unnecessary.

Murray and colleagues have shown that shortening and displacement are risk factors for non-union, and it has been shown that delaying surgery results in worse functional outcomes. [24,32,33] Hence, it would be prudent to have an orthopaedic review in ED so that the clinical decision regarding management can be made immediately. While this would increase the demand on orthopaedic staff to review x-rays and/or patients from ED, it also has the potential to decrease outpatient demand. By providing patients with the correct treatment immediately, early (<2 weeks) outpatient follow-up can be avoided, as potentially could the six re-presentations to ED. Additionally, if more accurate predictions of probable outcomes are made, workloads can be reduced by minimising the number of patients who are treated for extended periods or who fail conservative treatment and require additional surgical fixation.

There was a large variation in time to outpatient follow-up after ED discharge. This potentially reflects the uncertainty of ED staff in their management of these patients. Rather than discharging the patient with an appropriate sling, pain medication, and early rehabilitation advice, they are requesting very early orthopaedic outpatient follow-up, essentially doubling the patient’s visits.

Radiological evaluation at the two-week review was over-utilised, with correct use in less than half of the patients. This again could reflect uncertainty among medical staff about management. Of the 55 conservative patients who underwent x-ray evaluation at this review, a change in management was initiated in only three cases. Progressive displacement of the fracture ends has been recognised in one third of cases over a 5-year period by Plocher et al., who recommend serial x-rays for the first three weeks. [25] However, repeated x-rays are not useful unless the information is going to be used to guide clinical decision-making. Again, identifying patients at risk of progressive displacement (such as high velocity injuries) or those on the cusp of surgical intervention, and providing early decision-making, could remove the need to wait and watch.

Advice on rehabilitation was poorly documented, which may reflect that it was not provided adequately in many cases. Documentation of education provided is an important part of keeping legal records, yet only 14.6% of patients had documented evidence that rehabilitation advice was provided. It was evident that no advice had been given to the six patients who returned to the six-week review having remained immobilised in the sling. Rehabilitation timeframes are largely based on expert opinion and are generally consistent with current practices. [9,20,34] There has been limited research into the optimal timeframe for return to sport; however, one study followed 30 patients after plate fixation and 20 had returned to sport after twelve weeks. [35] Three conservative patients in our audit population re-fractured within 3-5 months, suggesting that return to full contact sport should be delayed for more than six months.

The reliability of the results provided is limited by the dependence on multiple steps in accurately identifying and documenting clavicle fractures. These factors include:

  • Staff in ED diagnosing the clavicle fractures and documenting accordingly
  • Administration clerk in ED transcribing appropriate provisional diagnoses
  • Hospital coding officers applying the correct ICD10 codes
  • Human error in identifying data in the notes
  • Human error in transcribing data into SPSS
  • Dependence on the accuracy of the doctors’ written case notes and discharge summaries, both for the above steps and for audit data collection.

Every effort was taken to ensure that the data collected was accurate, including cross-referencing across multiple platforms. Nevertheless, data collection for the variables dependent on documentation can only be as accurate as the written information, and may not be a true representation of actual events. This would impact most significantly on the reporting of the sling provided, orthopaedic registrar notification, and rehabilitation advice given.

The methods used do not capture cases that were referred directly to the outpatient department without first presenting to ED or being admitted as an inpatient. However, the majority of this audit examined initial management, as per the protocol, and therefore these cases were not required.

Conclusion

Clavicle fractures are a common presentation to any emergency department and are often managed by junior staff. This audit demonstrates that there is still some mismanagement, particularly in radiological assessment, sling prescription, and knowledge of protocol for registrar notification, outpatient follow-up and rehabilitation. Furthermore, new evidence indicates that the current protocol at this institution requires updating to clarify the requirements for referral and allow earlier intervention or rehabilitation. In summary, the recommended radiological views are a standard AP and a second AP with 20 degrees of cephalic tilt. Isolated non-displaced fractures of the clavicle are almost always managed conservatively; however, it is important to know when to refer to the orthopaedic department for review. Referral is always necessary if there are associated injuries or pending complications, and it is also recommended that all displaced or comminuted fractures be referred for an orthopaedic opinion. The broad arm sling, not the collar and cuff, is the immobilisation technique of choice for clavicle fractures, because the collar and cuff allows distracting forces that risk further displacement of the fracture segments. Early rehabilitation is required to prevent painful, stiff shoulders.

Acknowledgements

The authors would like to thank Katharina Denk, orthopaedic research assistant, for her work in identifying patient records for inclusion in this audit.

Conflict of Interest

None declared.

Correspondence

D M George: daniel.george@health.sa.gov.au

References

[1] Khan LA, Bradnock TJ, Scott C, Robinson CM. Fractures of the clavicle. J Bone Joint Surg Am. 2009;91(2):447–60.

[2] Postacchini F, Gumina S, De Santis P, Albo F. Epidemiology of clavicle fractures. J Shoulder Elb Surg. 2002;11(5):452–6.

[3] Andersen K, Jensen PO, Lauritzen J. Treatment of clavicular fractures. Acta Orthop Scand. 1987;58(1):71–4.

[4] Bostman O, Manninen M, Pihlajamaki H. Complications of plate fixation in fresh displaced midclavicular fractures. J Trauma Inj Infect Crit Care. 1997;43(5):778–83.

[5] Nordqvist A, Redlund-Johnell I, von Scheelel A, Petersson CJ. Shortening of clavicle after fracture: Incidence and clinical significance, a 5-year follow-up of 85 patients. Acta Orthop Scand. 1997;68(4):349–51.

[6] Hill JM, McGuire MH, Crosby LA. Closed treatment of displaced middle-third fractures of the clavicle gives poor results. J Bone Joint Surg Br. 1997;79-B(4):537–9.

[7] Davids PHP, Luitse JSK, Strating RP, van der Hart CP. Operative treatment for delayed union and nonunion of midshaft clavicular fractures: ao reconstruction plate fixation and early mobilization. J Trauma Inj Infect Crit Care. 1996;40(6):985–6.

[8] Hutchinson MR, Ahuja GS. Diagnosing and Treating Clavicle Injuries. Phys Sportsmed. 1996;24(3):26–36.

[9] Canadian Orthopaedic Trauma Society. Nonoperative treatment compared with plate fixation of displaced midshaft clavicular fractures. A multicenter, randomized clinical trial. J Bone Joint Surg Am. 2007;89(1):1–10.

[10] Robinson CM, Goudie EB, Murray IR, Jenkins PJ, Ahktar MA, Foster CJ, et al. Open reduction and plate fixation versus nonoperative treatment for displaced midshaft clavicular fractures: a multicentre, randomized, controlled trial. J Bone Joint Surg Am. 2013;95-A(17):1576–84.

[11] Robinson CM. Fractures of the clavicle in the adult: epidemiology and classification. J Bone Joint Surg Br. 1998;80-B(3):476–84.

[12] Lenza M, Buchbinder R, Johnston R, Belloti J, Faloppa F. Surgical versus conservative interventions for treating fractures of the middle third of the clavicle. Cochrane Database Syst Rev. 2013;(6):CD009363.

[13] McKee RC, Whelan DB, Schemitsch EH, McKee MD. Operative versus nonoperative care of displaced midshaft clavicular fractures: a meta-analysis of randomized clinical trials. J Bone Joint Surg Am. 2012;94-A(8):675–84.

[14] Nowak J, Mallmin H, Larsson S. The aetiology and epidemiology of clavicular fractures. A prospective study during a two-year period in Uppsala, Sweden. Injury. 2000;31(5):353–8.

[15] Axelrod D, Safran O, Axelrod T, Whyne C, Lubovsky O. Fractures of the clavicle: which x-ray projection provides the greatest accuracy in determining displacement of the fragments? J Orthop Trauma. 2013;3(electronic):1–3.

[16] Sharr JR, Mohammed KD. Optimizing the radiographic technique in clavicular fractures. J Shoulder Elbow Surg. 2003;12(2):170–2.

[17] Smekal V, Deml C, Irenberger A, Niederwanger C, Lutz M, Blauth M, et al. Length determination in midshaft clavicle fractures: validation of measurement. J Orthop Trauma. 2008;22(7):458–62.

[18] Neer C. Nonunion of the clavicle. JAMA. 1960;172:1006–11.

[19] Lenza M, Belloti JC, Andriolo RB, Gomes Dos Santos JB, Faloppa F. Conservative interventions for treating middle third clavicle fractures in adolescents and adults. Cochrane Database Syst Rev. 2009;(2):CD007121.

[20] Jeray KJ. Acute midshaft clavicular fracture. J Am Acad Orthop Surg. 2007;15(4):239–48.

[21] McKee MD, Wild LM, Schemitsch EH. Midshaft malunions of the clavicle. J Bone Joint Surg Am. 2003;85-A(5):790–7.

[22] McKee MD. Clavicle fractures in 2010: sling/swathe or open reduction and internal fixation? Orthop Clin North Am. 2010;41(2):225–31.

[23] Lenza M, Belloti JC, Gomes Dos Santos JB, Matsumoto MH, Faloppa F. Surgical interventions for treating acute fractures or non-union of the middle third of the clavicle. Cochrane Database Syst Rev. 2009;(4):CD007428.

[24] Murray IR, Foster CJ, Eros A, Robinson CM. Risk factors for nonunion after nonoperative treatment of displaced midshaft fractures of the clavicle. J Bone Joint Surg Am. 2013;95(13):1153–8.

[25] Plocher EK, Anavian J, Vang S, Cole PA. Progressive displacement of clavicular fractures in the early postinjury period. J Trauma. 2011;70(5):1263–7.

[26] Heuer HJ, Boykin RE, Petit CJ, Hardt J, Millett PJ. Decision-making in the treatment of diaphyseal clavicle fractures: is there agreement among surgeons? Results of a survey on surgeons’ treatment preferences. J Shoulder Elb Surg. 2014;23(2):e23–e33.

[27] Zlowodzki M, Zelle BA, Cole PA, Jeray K, McKee MD. Treatment of acute midshaft clavicle fractures: systematic review of 2144 fractures. J Orthop Trauma. 2005;19(7):504–7.

[28] Rowe CR. An atlas of anatomy and treatment of midclavicular fractures. Clin Orthop Relat Res. 1968;58:29–42.

[29] Nowak J, Holgersson M, Larsson S. Can we predict long-term sequelae after fractures of the clavicle based on initial findings? A prospective study with nine to ten years of follow-up. J Shoulder Elb Surg. 2004;13(5):479–86.

[30] Robinson CM, Court-Brown CM, McQueen MM, Wakefield AE. Estimating the risk of nonunion following nonoperative treatment of a clavicular fracture. J Bone Joint Surg Am. 2004;86-A(7):1359–65.

[31] Wijdicks FJ, Van der Meijden OA, Millett PJ, Verleisdonk EJ, Houwert RM. Systematic review of the complications of plate fixation of clavicle fractures. Arch Orthop Trauma Surg. 2012;132(5):617–25.

[32] Potter JM, Jones C, Wild LM, Schemitsch EH, McKee MD. Does delay matter? The restoration of objectively measured shoulder strength and patient-oriented outcome after immediate fixation versus delayed reconstruction of displaced midshaft fractures of the clavicle. J Shoulder Elb Surg. 2007;16(5):514–8.

[33] George DM, McKay BP, Jaarsma RL. The long-term outcome of displaced mid-third clavicle fractures on scapular and shoulder function: variations between immediate surgery, delayed surgery, and nonsurgical management. J Shoulder Elb Surg. 2015; 24(5):669-76.

[34] Kim W, McKee MD. Management of acute clavicle fractures. Orthop Clin North Am. 2008;39(4):491–505.

[35] Meisterling SW, Cain EL, Fleisig GS, Hartzell JL, Dugas JR. Return to athletic activity after plate fixation of displaced midshaft clavicle fractures. Am J Sports Med. 2013;41(11):2632–6.


Adequacy of anticoagulation according to CHADS2 criteria in patients with atrial fibrillation in general practice – a retrospective cohort study

Abstract

Background: Atrial fibrillation (AF) is a common arrhythmia associated with an increased risk of stroke. Strategies to reduce stroke incidence involve identification of at-risk patients using scoring systems such as the CHADS2 score (Congestive Heart Failure, Hypertension, Age ≥75 years, Diabetes or Stroke) to guide pharmacological prophylaxis.

Aim: The aim of this research project was to determine the prevalence and management of AF patients within the general practice (GP) setting and to assess the adequacy of anticoagulation or antiplatelet prophylaxis according to the CHADS2 score.

Methods: This study was a retrospective cohort study of 100 AF patients ≥50 years conducted at a South Coast NSW medical centre over a 3-year period. Data was obtained from existing medical records. CHADS2 scores were determined at baseline, 12 months and 3 years and were compared with medications to assess whether patients were under-treated, adequately treated or over-treated according to their CHADS2 score.

Results: Prevalence of AF in patients >50 years was 5.8%. At baseline, 65% of patients (n=100) were at high risk of stroke (CHADS2 score ≥2). This increased to 75.3% of patients at 12 months (n=89) and 78.4% of patients at 3 years (n=60). Adequate treatment occurred in 79.0% of patients at baseline and 83.1% and 76.7% at 12 months and 3 years, respectively. There were three instances of stroke or transient ischaemic attack during the study period.

Conclusion: GPs play a critical role in the prevention of stroke in patients with AF. Adequate pharmacological intervention occurred in the majority of cases; however, identification and treatment of at-risk patients could be further improved.


Introduction

Atrial fibrillation (AF) is the most common cardiac arrhythmia in Australia, affecting 8% of the population over the age of 80 years. [1,2]  The morbidity and mortality associated with AF is primarily due to an increased risk of thromboembolic events such as stroke, with studies reporting up to a five-fold increase in the annual risk of stroke among patients with AF who have not received prophylaxis with either anticoagulant or antiplatelet therapies. [3,4]

It has been demonstrated that the incidence of stroke in patients with AF can be significantly reduced with the use of pharmacological agents, such as anticoagulant and antiplatelet medications including warfarin and aspirin, respectively. [5] More recently, newer oral anticoagulant (NOAC) medications such as dabigatran and rivaroxaban have also been approved for use in patients with AF. [6] However, several studies indicate that the use of anticoagulants and antiplatelets for the prevention of thromboembolic events is often underutilised. [7,8] It is estimated that up to 51% of patients eligible for anticoagulant therapy do not receive it. [9] Furthermore, an estimated 86% of patients who suffer from AF and have a subsequent stroke were not receiving adequate anticoagulation therapy following their AF diagnosis. [10]

In contrast, pharmacological treatments for stroke prophylaxis have been associated with an increased risk of intracerebral haemorrhage, particularly amongst the elderly. [11]  A study of 170 patients with AF over the age of 85 years demonstrated that the rate of haemorrhagic stroke was 2.5 times higher in those receiving anticoagulant therapy compared to controls (OR=2.5, 95% CI: 1.3-2.7). [12]  Therefore, the need to optimise the management of patients with AF in the general practice (GP) setting is of high importance for stroke prevention and requires an individualised pharmacological approach in order to achieve a balance between stroke reduction and bleeding side effects.

Consequently, the development of validated risk stratification tools such as the CHADS2 score (Congestive Heart Failure, Hypertension, Age ≥75 years, Diabetes, Previous Stroke or Transient Ischaemic Attack (TIA)) has enabled more accurate identification of AF patients who are at an increased risk of stroke, by assessing co-morbidities and additional risk factors to determine the appropriateness of anticoagulation or antiplatelet prophylaxis to reduce the risk of thromboembolic events. [13]

The aim of this research project was to determine the prevalence of AF among patients within a GP cohort and to assess the adequacy of pharmacological stroke prophylaxis according to the CHADS2 criteria. The results of this study will enable GPs to determine whether the current management of patients with AF is adequate and whether closer follow-up of these patients needs to occur in order to minimise associated bleeding and stroke complications.

Methods

Study design and ethics

This study was a retrospective cohort study of the prevalence, patient characteristics and adequacy of anticoagulation according to the CHADS2 score in GP patients with AF over a 3-year period. The study was approved by the University of Wollongong Human Research Ethics Committee (Appendix 1, HREC 13/031).

Participants

Participants were identified using the search tool of the practice database (Best Practice, Version 1.8.3.602, Pyefinch Software Pty Ltd) at a South Coast NSW medical centre. Search criteria included any patient (recorded as alive or deceased) aged ≥50 years who attended the practice with a recorded diagnosis of AF over a 3-year period (November 2010 to November 2013). This included both patients with long-term AF diagnosed before the study period and those newly diagnosed with AF during the study period. The total number of patients aged ≥50 years who attended the practice at least once during the same period was recorded to determine the prevalence of AF at the practice.

Exclusion Criteria

Exclusion criteria included patients <50 years of age, patients with incomplete medical records, and those diagnosed with AF who subsequently moved from the practice during the study period.

CHADS2  score

The CHADS2 score was chosen for the purpose of this study as it is a validated risk-stratification tool for patients with AF. [13-15] The scoring system assigns one point each for the presence of Congestive Heart Failure, Hypertension, Age ≥75 years or Diabetes, and assigns two points if a patient has a history of previous Stroke or TIA. AF patients with a CHADS2 score of 0 are considered to be at low risk of a thromboembolic event (0.5-1.7% per year stroke rate); a score of 1 indicates intermediate risk (2.0% per year stroke rate); and a score ≥2 indicates high risk (4.0% per year stroke rate). [16]
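
To make the scoring rule concrete, a minimal Python sketch of the CHADS2 calculation and risk banding described above follows; the helper and variable names are our own illustration and are not drawn from the study.

```python
# Minimal sketch of the CHADS2 calculation and risk banding described above.
# Helper names are illustrative only, not the study's analysis code.

def chads2_score(chf: bool, hypertension: bool, age: int,
                 diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """One point each for CHF, hypertension, age >=75 years and diabetes;
    two points for a history of stroke or TIA."""
    score = int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
    if prior_stroke_or_tia:
        score += 2
    return score

def risk_band(score: int) -> str:
    if score == 0:
        return "low"           # ~0.5-1.7% per year stroke rate
    if score == 1:
        return "intermediate"  # ~2.0% per year stroke rate
    return "high"              # ~4.0% per year stroke rate (score >= 2)

# Example: a 78-year-old with hypertension and diabetes scores 3 -> high risk.
assert chads2_score(False, True, 78, True, False) == 3
assert risk_band(3) == "high"
```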

Data Search and Extraction

Patient data was manually extracted from individual patient records, coded and recorded in a spreadsheet (Microsoft Excel, 2007). Basic data including date of birth and sex were recorded. Date of AF diagnosis (assessed as the first documented episode of AF within the patient record) and co-morbidities including hypertension, congestive heart failure, diabetes, stroke or TIA were included if documented within the patient medical record. Correspondence from specialists and hospital discharge summaries were also analysed for any diagnosis made outside of the medical centre and not subsequently recorded in the medical record.

Lifestyle factors were recorded from the practice database, including alcohol use (light/moderate/heavy or none) and smoking status (non-smoker, ex-smoker or current smoker). Complications arising from pharmacological prophylaxis (including any documented bleeding or side-effects) and discontinuation of treatments were included. Individual patient visits were analysed for any documented non-compliance with medications. Where possible, cause of death was also recorded.

Adequacy of Anticoagulation

Individual CHADS2 scores were determined for each patient at baseline, 12 months and 3 years. At each of these time points, CHADS2 scores were compared to each patient's medication regime (i.e. no medication use, an anticoagulant agent or an antiplatelet agent). The use of other medications for the treatment of AF (for example, agents for rate or rhythm control) was not assessed. Patients were then classified as being undertreated, adequately treated or over-treated according to the CHADS2 score obtained at baseline, 12 months and 3 years, as per the current therapeutic guidelines (Figure 1). [17]

Adequate treatment was considered to be patients receiving treatments in accordance with the therapeutic guidelines. [17] Undertreated patients included those who received no treatment when an oral anticoagulant was indicated (CHADS2 score ≥2). Over-treated patients included those treated with an oral anticoagulant where it was not indicated according to the current guidelines (CHADS2 score = 0).
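
A minimal sketch of this classification follows, assuming the simplified guideline logic stated above (anticoagulation indicated at a score ≥2, not indicated at a score of 0); it also treats antiplatelet-only therapy at a score ≥2 as undertreatment, consistent with the worked examples in the Results. The function and label names are illustrative, not the study's analysis code.

```python
# Illustrative classification of treatment adequacy against the CHADS2 score,
# under a simplified reading of the guideline logic described above.

def treatment_adequacy(chads2: int, therapy: str) -> str:
    """therapy is one of 'none', 'antiplatelet' or 'anticoagulant'."""
    if chads2 >= 2 and therapy != "anticoagulant":
        return "undertreated"      # oral anticoagulant indicated but not given
    if chads2 == 0 and therapy == "anticoagulant":
        return "over-treated"      # anticoagulant given where not indicated
    return "adequately treated"

# Example: aspirin alone at a CHADS2 score of 3 counts as undertreatment.
assert treatment_adequacy(3, "antiplatelet") == "undertreated"
```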

Statistical Analysis

Results are presented as mean ± standard deviation. A p-value of <0.05 was considered to be statistically significant. One-way ANOVA was used to assess between-group differences in CHADS2 scores at each time point (baseline, 12 months and 3 years). Descriptive data is presented where relevant. Prevalence of AF at the practice was calculated as: (number of patients with AF aged ≥50 years ÷ total number of patients aged ≥50 years at the practice) × 100.
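
To make these calculations concrete, the sketch below reruns them in Python (the study's statistical package is not named, and SciPy is used here purely for illustration); the CHADS2 score lists and the practice denominator are hypothetical stand-ins, the denominator chosen only so that the prevalence matches the reported 5.8%.

```python
# Hypothetical illustration of the analyses named above; the score lists and
# the practice denominator are invented stand-ins, not the study's data.
from scipy import stats

baseline_scores = [1, 2, 2, 3, 1, 4, 2, 3]  # hypothetical CHADS2 scores
twelve_month    = [2, 2, 3, 3, 2, 4, 3, 3]
three_year      = [2, 3, 3, 4, 2, 5, 3, 4]

f_stat, p_value = stats.f_oneway(baseline_scores, twelve_month, three_year)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Prevalence formula from the text: (AF patients >=50 / all patients >=50) x 100.
af_patients = 346      # identified in the Results
all_patients = 5966    # hypothetical denominator, chosen to give ~5.8%
prevalence = af_patients / all_patients * 100
print(f"Prevalence of AF: {prevalence:.1f}%")
```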

Results

A total of 346 patients with AF aged ≥50 years were identified. Of these, 246 were excluded (n=213 due to insufficient data within their medical record; n=33 had left the practice during the study period), leaving a total of 100 patients for inclusion in the analysis (Figure 2). Due to the nature of the search strategy (which identified any patient with AF during the period November 2010 to November 2013), both newly-diagnosed patients and patients with long-term AF were included in the analysis. Therefore, long-term data was available for n=89 participants at 12 months, and n=60 participants at 3 years. There were no statistically significant differences in age (p=0.91) or sex (p=0.86) between the included and excluded participants.

Including all patients initially identified with AF (n=346), the overall prevalence of AF among patients at the practice was 5.8%. Participant characteristics are presented in Table 1. The mean age of participants at diagnosis was 74.9 ± 10.0 years, and more males suffered from AF (60%) than females (40%). Over half of patients had a history of smoking (57%), and hypertension was the most common co-morbidity (74%). Thirteen per cent of participants were listed within the practice database as deceased.

At baseline, 65.0% of patients were classified as being at high risk of stroke (CHADS2 score ≥2). This increased to 75.3% and 78.4% of patients at 12 months and 3 years, respectively (Graph 1). There were no patients with a CHADS2 score of 6 at any of the study time points. Analysis of participants who had 3-year follow-up data available (n=60) demonstrated a statistically significant increase in average CHADS2 scores between baseline and 12 months (p<0.05) and between baseline and 3 years (p<0.01). There was no statistically significant difference in CHADS2 scores between 12 months and 3 years (p=0.54).

Graph 2 demonstrates changes in treatment adequacy over time based on patients' initial treatment group allocation at baseline. Patients initially identified as undertreated at baseline trended towards adequate treatment by 3 years. For patients initially identified as over-treated at baseline, the trend towards adequate treatment occurred more rapidly (on average by 12 months), although this difference was not statistically significant.

Patient pharmacological treatments and adequacy of treatment at baseline, 12 months and 3 years are shown in Table 2.

There were several reported side-effects and documented instances of cessation of anticoagulation and antiplatelet therapy. A total of eight patients were non-compliant and ceased warfarin during the study period, and eight patients had their warfarin ceased by their treating doctor (reason for cessation unknown). A further eight patients ceased warfarin therapy due to side-effects: intracranial haemorrhage (n=1), gastrointestinal bleeding (n=3), haematuria (n=1) and unknown bleeding (n=3). One patient ceased aspirin due to oesophageal irritation. No other pharmacological therapies were ceased due to side-effects. Warfarin was ceased in one case due to an elective surgical procedure.

A total of two patients suffered an embolic or haemorrhagic stroke, and a further two patients suffered a TIA, during the study period. Prior to their thromboembolic event, one patient was undertreated on aspirin (CHADS2 score = 2), one was adequately treated with clopidogrel (CHADS2 score = 1), and one was undertreated on aspirin (CHADS2 score = 3). Cause of death was unknown in six patients. No patients had stroke or TIA listed as their cause of death in their medical record.

Discussion

It has been suggested that Australian patients with AF may not be receiving optimal prophylactic anticoagulant and antiplatelet medications for the prevention of thromboembolic events. [7,8] The aims of this retrospective cohort study were to assess stroke risk and the adequacy of anticoagulation in 100 AF patients aged ≥50 years over a 3-year period in a GP setting.

Results from the current study indicate that, overall, the use of anticoagulant and antiplatelet strategies for stroke prophylaxis was appropriate in the majority of cases and consistent with published therapeutic guidelines. [17] The prevalence of AF at the practice of 5.8% was similar to that in other studies, which report a prevalence of AF in the GP setting of between 4-8%. [18,19] In the current study, there were more males with AF than females, a trend also found in several other studies reporting a higher prevalence of AF amongst males. [15,18]

CHADS2 scores increased between baseline and 12 months, and between baseline and 3 years. This increase was to be expected, as patients are likely to gain additional risk factors as they age. The majority of patients at all time points were at high risk of stroke (CHADS2 score ≥2), with warfarin or similar anticoagulation therapy being indicated.

Overall, treatment adequacy increased between baseline and 12 months (79.0% versus 83.1%), then decreased by 3 years (83.1% versus 76.7%). This trend is likely to reflect aggressive management of AF at initial diagnosis, followed by a decline in optimal stroke prophylaxis as patients age, develop additional side-effects or become at increased risk of falls. Additionally, older patients (those >70 years) were more likely to be undertreated. This may be due to several factors, including patient non-compliance with warfarin therapy, doctor reluctance to prescribe warfarin to patients at risk of falls, and the incidence of side-effects such as bleeding. Similar causes of under-treatment of elderly patients with AF have been outlined in other studies. [20,21] In younger patients, there was a trend towards over-treatment at the time of diagnosis.

In the current study, one patient suffered an embolic stroke and two patients had a TIA during the study period. Appropriately, all three of these patients were subsequently changed to warfarin. One patient who was adequately treated on warfarin with a CHADS2 score of 1 was changed to aspirin following an intracranial haemorrhage (and consequently remained classified as adequately treated). Although these were isolated cases within the study, it should be noted that the life-long morbidity of stroke for these individuals is significant.

Strengths of the current study include the large number of patients and the comprehensive assessment of medical records for the main study outcomes of CHADS2 scores and anticoagulation or antiplatelet therapies. By assessing individual medical records directly, detailed patient data were available for inclusion in the study analysis.

There are some limitations to the current study. As data was extracted from an existing database of patient medical records (which was not kept for the purpose of conducting research), there were some instances of missing or incomplete data. However, the majority of missing data related to patients' social history (such as smoking and alcohol use), which was not central to the main research aims and is unlikely to have influenced the results.

A thorough assessment of medication regimes was carried out for the purpose of this study. As all medication changes are automatically recorded by the Best Practice program at each visit, the author is confident that this aspect of the data is accurate. However, it is possible that some patients may have been taking over-the-counter aspirin that was not recorded on their medication list, and consequently some patients may have been incorrectly assessed as 'undertreated'. An additional consideration relates to the use of warfarin and whether patients prescribed warfarin were within the therapeutic range; however, the assessment of multiple INR readings for each patient over a 3-year period was beyond the scope of this study. Only two patients at the practice had been prescribed NOACs (dabigatran) for anticoagulation, therefore analysis of this medication was limited.

CHADS2 scores were able to be calculated for all patients. Although most co-morbidities were well documented, there may have been some limitations with regard to the identification of co-morbidities such as hypertension, diabetes and congestive heart failure in some patients. For example, some patients did not have a recorded diagnosis of hypertension, but a review of blood pressure readings demonstrated several high systolic readings that could have been diagnostic for hypertension. Where this occurred, patients were not considered to have hypertension or congestive heart failure and were not assigned an additional CHADS2 point.

The CHADS2 score was chosen for this study due to its simplicity and its validation for the identification of patients at risk of stroke. [13-15] More recently, refinements to the CHADS2 score have led to the development of the CHA2DS2-VASc score, which assigns additional points to higher age groups, female patients and patients with vascular disease. [22] The CHA2DS2-VASc score provides a more comprehensive overview of stroke risk factors in an individual and has also been validated for the purpose of determining the need for pharmacological stroke prophylaxis. Studies have shown that application of the CHA2DS2-VASc score is most useful for clarifying the stratification of patients within the low-intermediate stroke risk categories (i.e. determining those with CHADS2 scores of 0-1 who are truly at low risk and do not require aspirin). [23] Because the aims of the current study were to identify patients at high risk of stroke and determine the appropriateness of their treatment, the CHA2DS2-VASc score was not utilised. However, it should be noted that the CHA2DS2-VASc score may provide additional clarification in the assessment of patients with low-intermediate CHADS2 scores.
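
For comparison with the CHADS2 sketch in the Methods, the CHA2DS2-VASc weighting can be expressed as follows. This is again an illustrative sketch rather than study code; the component weights follow the published scheme cited above. [22]

```python
# Sketch of the CHA2DS2-VASc refinement: extra points for age 65-74 (1) or
# >=75 (2), female sex (1) and vascular disease (1), per the published scheme.

def cha2ds2_vasc(chf: bool, hypertension: bool, age: int, diabetes: bool,
                 prior_stroke_or_tia: bool, vascular_disease: bool,
                 female: bool) -> int:
    score = sum(map(int, [chf, hypertension, diabetes,
                          vascular_disease, female]))
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 2 if prior_stroke_or_tia else 0
    return score

# Example: a 68-year-old woman with hypertension scores 3 here but only 1 on
# CHADS2, illustrating the finer stratification at low-intermediate risk.
assert cha2ds2_vasc(False, True, 68, False, False, False, True) == 3
```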

An additional consideration relates to the nature of the AF experienced by patients. Although patients were included if they had a known diagnosis of AF, it is almost impossible to determine how long patients had been in AF prior to their diagnosis. In addition, it was not possible to determine whether patients had paroxysmal or sustained/chronic AF. However, it has been demonstrated that there may be little difference in outcomes for patients with paroxysmal versus persistent AF, [24,25] with a large cohort study comparing stroke rates in patients with paroxysmal versus sustained AF reporting no significant difference in rates of stroke (3.2% versus 3.3%, respectively). [24] Therefore, it is unlikely that determination of paroxysmal and sustained AF patterns would have influenced the results of the current study.

Conclusion

The results obtained from this study will allow GPs to optimise the management of patients with AF in the community setting. Although this study found that the management of patients with AF at the practice was consistent with the current guidelines in the majority of cases, further improvements can be made to minimise the risk of stroke among patients with AF, especially with regard to targeting undertreated patients. Additionally, the current study may raise greater awareness of the incidence of AF within the practice and the need to assess stroke risk and treat patients accordingly, especially as CHADS2 scores were rarely recorded formally at the time of diagnosis. GPs are well placed to optimise the treatment of AF and prevent strokes through treatment of co-morbidities and implementation of lifestyle interventions, such as encouraging smoking cessation and minimising alcohol use, which may further reduce the incidence of stroke and TIA in patients with AF.

Acknowledgements

The author would like to acknowledge Dr Darryl McAndrew, Dr Brett Thomson, Prof Peter McLennan, Dr Judy Mullan and Dr Sal Sanzone for their contribution to this research project.

Conflict of interest

None declared.

Correspondence

S Macleod: dm953@uowmail.edu.au

References

[1] Furberg, C, Psaty, B, Manolio, T, Gardin, J, Smith, V, Rautaharju, P. Prevalence of atrial fibrillation in elderly subjects (The Cardiovascular Health Study). Am J Cardiol. 1994; 74 (3): 236-241.

[2] Wong, C, Brooks, A, Leong, D, Roberts, K, Sanders, P. The increasing burden of atrial fibrillation compared with heart failure and myocardial infarction: A 15-year study of all hospitalizations in Australia. Arch Intern Med. 2012; 172 (9): 739-741.

[3] Lip, G, Boos, C. Antithrombotic treatment in atrial fibrillation. Heart. 2006; 92 (2): 155-161.

[4] Medi, C, Hankey, G, Freedman, S. Stroke risk and antithrombotic strategies in atrial fibrillation. Stroke. 2010; 41: 2705-2713.

[5] Gould, P, Power, J, Broughton, A, Kaye, D. Review of the current management of atrial fibrillation. Expert Opin Pharmacother. 2003; 4 (11): 1889-1899.

[6] Brieger, D, Curnow, J. Anticoagulation: A GP primer on the new anticoagulants. Aust Fam Physician. 2014; 43 (5): 254-259.

[7] Gladstone, D, Bui, E, Fang, J, Laupacis, A, Lindsay, P, Tu, J, et al. Potentially preventable strokes in high-risk patients with atrial fibrillation who are not adequately anticoagulated. Stroke. 2009; 40: 235-240.

[8] Olgilvie, I, Newton, N, Welner, S, Cowell, W, Lip, G. Underuse of oral anticoagulants in atrial fibrillation: A systematic review. Am J Med. 2010; 123 (7): 638-645.

[9] Pisters, R, van Oostenbrugge, R, Knottnerus, I. The likelihood of decreasing strokes in atrial fibrillation patients by strict application of guidelines. Europace. 2010; 12: 779-784.

[10] Leyden, J, Kleinig, T, Newbury, J, Castles, S, Cranefield, J, Anderson, C, et al. Adelaide Stroke Incidence Study: Declining stroke rates but many preventable cardioembolic strokes. Stroke. 2013; 44: 1226-1231.

[11] Vitry, A, Roughead, E, Ramsay, E, Preiss, A, Ryan, P, Gilbert, A, et al. Major bleeding risk associated with warfarin and co-medications in the elderly population. Pharmacoepidemiol Drug Saf. 2011; 20 (10): 1057-1063.

[12] Fang, M, Chang, Y, Hylek, E, Rosand, J, Greenberg, S, Go, A, et al. Advanced age, anticoagulation intensity, and risk for intracranial hemorrhage among patients taking warfarin for atrial fibrillation. Ann Intern Med. 2004; 141: 745-752.

[13] Gage, B, Waterman, A, Shannon, W. Validation of clinical classification schemes for predicting stroke: Results from the National Registry of Atrial Fibrillation. JAMA. 2001; 285: 2864-2870.

[14] Khoo, C, Lip, G. Initiation and persistence on warfarin or aspirin as thromboprophylaxis in chronic atrial fibrillation in general practice. Thromb Haemost. 2008; 6: 1622-1624.

[15] Rietbrock, S, Heeley, E, Plumb, J, Van Staa, T. Chronic atrial fibrillation: Incidence, prevalence, and prediction of stroke using the congestive heart failure, hypertension, age >75, diabetes mellitus, and prior stroke or transient ischemic attack (CHADS2) risk stratification scheme. Am Heart J. 2008; 156: 57-64.

[16] UpToDate. Antithrombotic therapy to prevent embolization in atrial fibrillation [Internet]. 2013 [cited 2014 Mar 9]. Available from: http://www.uptodate.com/contents/antithrombotic-therapy-to-prevent-embolization-in-atrial-fibrillation

[17] e-Therapeutic Guidelines. Prophylaxis of stroke in patients with atrial fibrillation [Internet]. 2012 [cited 2014 Mar 9]. Available from: http://etg.hcn.com.au/desktop/index.htm?acc=36422

[18] Fahridin, S, Charles, J, Miller, G. Atrial fibrillation in Australian general practice. Aust Fam Physician. 2007; 36.

[19] Lowres, N, Freedman, S, Redfern, J, McLachlan, A, Krass, I, Bennett, A, et al. Screening education and recognition in community pharmacies of atrial fibrillation to prevent stroke in an ambulant population aged ≥65 years (SEARCH-AF Stroke Prevention Study): A cross-sectional study protocol. BMJ Open. 2012; 2 (Online).

[20] Hobbs, R, Leach, I. Challenges of stroke prevention in patients with atrial fibrillation in clinical practice. Q J Med. 2011; 104: 739-746.

[21] Hickey, K. Anticoagulation management in clinical practice: Preventing stroke in patients with atrial fibrillation. Heart Lung. 2012; 41: 146-156.

[22] van Staa, T, Setakis, E, Di Tanna, G, Lane, D, Lip, G. A comparison of risk stratification schemes for stroke in 79,884 atrial fibrillation patients in general practice. Thromb Haemost. 2011; 9: 39-48.

[23] Lip, G. Atrial fibrillation and stroke prevention: Brief observations on the last decade. Expert Rev Cardiovasc Ther. 2014; 12 (4): 403-406.

[24] Hart, R, Pearce, L, Rothbart, R, McAnulty, J, Asinger, R, Halperin, J. Stroke with intermittent atrial fibrillation: Incidence and predictors during aspirin therapy. J Am Coll Cardiol. 2000; 35: 183-187.

[25] Nattel, S, Opie, L. Controversies in atrial fibrillation. Lancet. 2006; 367: 262-272.

Categories
Original Research Articles

General practitioner awareness of pharmacogenomic testing and drug metabolism activity status amongst the Black-African population in the Greater Western Sydney region

Background: Individuals of black-African background have high variability in drug-metabolising enzyme polymorphisms. Consequently, unless these patients are tested for these polymorphisms, it is difficult to predict which patients may have a sub-therapeutic response to medications (such as antidepressants) or experience an adverse drug reaction. Given the increasing population of black-Africans in Australia, GPs are on the front line of this issue, especially in Greater Western Sydney (GWS), one of the country's most rapidly growing regions due to migration. Aim: To ascertain the awareness of GPs in the GWS community regarding drug-metabolising enzyme polymorphisms in the black-African population and pharmacogenomic testing. Methods: A descriptive, cross-sectional study was conducted in GWS by analysing GP responses to a questionnaire consisting of closed and open-ended questions. Results: A total of 46 GPs completed the questionnaire. It was found that 79.1% of respondents were unaware of the high variability in drug-metabolising enzyme activity in the black-African population, and 79.5% were unaware of pharmacogenomic testing. No respondents had ever utilised pharmacogenomic testing. Only a small proportion of GPs "always" considered a patient's genetic factors (13.9%) and enzyme metaboliser status (11.1%) in clinical practice. Preferred education media for further information included written material, direct information from other health professionals (such as pharmacists) and verbal teaching sessions. Conclusion: There was a low level of awareness of enzyme metaboliser status and pharmacogenomic testing amongst GPs in GWS. A future recommendation to ameliorate this includes further education provision through the variety of media noted in the study.

Introduction

Depression accounts for 13% of Australia’s total disease burden, making it an important health issue in the current context. [1] General Practitioners (GPs) are usually the first point of contact for patients seeking help for depression. [2,3] Antidepressant prescription is the most common treatment form for depression in Australia with GPs prescribing an antidepressant to treat up to 40% of all psychological problems. [2] This makes GP awareness of possible treatment resistance or adverse drug reactions (ADRs) to these medications vital.

Binder et al. [4] described pharmacogenomics as “the use of genome- wide approaches to elucidate individual differences in the outcome of drug therapy”. Detecting clinically relevant polymorphisms in genetic expression can potentially be used to identify susceptibility to ADRs. [4] This would foster the application of personalised medicine by  encouraging  an  inter-individual  approach  to  medication  and dose prescriptions based on an individual’s predicted response to medications. [4,5]

Human DNA contains genes that code for 57 cytochrome P450 (CYP) isoenzymes; these are a clinically important family of hepatic and gastrointestinal isoenzymes responsible for the metabolism of over 70% of clinically prescribed drugs. [5-10] The CYP family of enzymes is susceptible to polymorphisms as a result of genetic variations, influenced by factors such as ethnicity. [5,6,10] Research has shown that polymorphisms in certain CYP drug-metabolising enzymes can result in phenotypes that class individuals as "ultrarapid metabolisers (UMs), extensive metabolisers (EMs), intermediate metabolisers (IMs) and poor metabolisers (PMs)". [6,10] These categories are clinically important as they determine whether or not a drug stays within the therapeutic range. Individuals with PM status may be susceptible to ADRs as a result of toxicity, and conversely, those with UM status may not receive a therapeutic effect. [5,6,10,11]
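
The four phenotype classes above can be condensed into code form; the mapping below is a simplified, illustrative summary only (the names are ours, and real genotype-to-phenotype calling is considerably more involved than a lookup table).

```python
# Simplified mapping of CYP metaboliser phenotypes to their broad clinical
# implications, as described in the text; illustrative only.
PHENOTYPE_IMPLICATION = {
    "UM": "ultrarapid metaboliser: drug cleared quickly; "
          "risk of sub-therapeutic response",
    "EM": "extensive metaboliser: expected ('normal') metabolism",
    "IM": "intermediate metaboliser: reduced clearance",
    "PM": "poor metaboliser: drug accumulates; elevated risk of ADRs",
}

def implication(phenotype: str) -> str:
    return PHENOTYPE_IMPLICATION.get(phenotype, "unknown phenotype")
```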

When considering the metabolism of antidepressants, the highly polymorphic CYP enzymes CYP2C19 and CYP2D6 are known to be involved. [5,10,12] A study by Xie et al. [13] showed that, for the CYP2D6 enzyme alone, allelic variations induce polymorphisms that result in a PM phenotype in "~1%" of Asian populations and "0-5%" of Caucasians, with a variation of between "0-19%" in black-African populations. This large disparity of polymorphism phenotypes was reproduced in a recent study, which also showed that the variation is not exclusive to the CYP2D6 enzyme. [6] It has been reported that the incidence of ADRs among PMs treated with drugs such as antidepressants is 44%, compared to 21% in other patients. [5,14] Consequently, increased costs have been associated with the management of UM or PM patients. [5]

The black-African population in Australia, and specifically in Sydney (where GWS is one of the fastest growing regions), continues to rise through migration and humanitarian programs. [15-18] Almost 30% of Africans settling in Australia in the decade leading to 2007 did so under humanitarian programs, including under refugee status. [15-17] As refugees are at a higher risk of mental health problems including depression, due to their traumatic histories and post-migratory difficulties, GPs in GWS face increased clinical interactions with black-Africans at risk of depression. [19,20] Considering the high variability of enzyme polymorphisms in this population, pharmacogenomic testing may play a role in the primary care of these patients. We therefore conducted a study to assess GP awareness of pharmacogenomic testing and the differences in enzyme metaboliser status (drug metabolism phenotypes). We also investigated GP preferences regarding media for future education on these topics.

Methodology

Study Design and Setting

This is a descriptive, cross-sectional study. Ethics approval was granted by the Human Research Ethics Committee.

As GWS is the fastest growing region in Sydney, we focussed on particular suburbs within it (the Blacktown, Parramatta and Holroyd Local Government Areas). [17-20] Using geographical cluster sampling, a list of GP practices was identified with the aim of recruiting 50 participants.

Study tool

Data was collected using a questionnaire validated by university supervisors and designed to elicit the level of understanding and awareness among GPs. The main themes of the questionnaire were: questions regarding basic demographic information; questions aimed at determining the level of GP awareness regarding differences in drug-metabolising phenotypes and pharmacogenomic testing; and open-ended questions eliciting the preferred methods of education with respect to pharmacogenomic testing.

Data Collection

We invited 194 GPs between April and May 2014 to participate in the study. The questionnaire and participant information sheet were either given to the practice managers or to the GPs in person. Questionnaires were collected in person within the following two weeks.

Data Analysis

Data was analysed using SPSS (version 22, IBM Australia). Descriptive statistics were used to summarise findings, with p-values calculated using Chi-square analysis (with Yates' correction) to compare two sets of data. A p-value of <0.05 indicated statistical significance.
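
As a concrete illustration, the sketch below reruns a Yates-corrected chi-square on a 2×2 table whose counts are reconstructed from the percentages reported in the Results (4/19 vs. 2/15 GPs aware); it is written in Python for illustration and is not the study's SPSS output.

```python
# Yates-corrected chi-square on a 2x2 table, as described above. Counts are
# reconstructed from the Results (21.1% of 19 vs 13.3% of 15 GPs aware) and
# should be treated as illustrative.
from scipy.stats import chi2_contingency

#           aware  unaware
table = [[4, 15],   # GPs who had treated black-African patients
         [2, 13]]   # GPs who had not
chi2, p, dof, expected = chi2_contingency(table, correction=True)
print(f"chi-square = {chi2:.2f}, p = {p:.2f}")  # p ~ 0.89, i.e. no difference
```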

Results

The overall response rate was 23.7% (46/194). Our respondents included 27 females and 19 males. The mean number of years of experience in general practice was 13.9, and most GPs (93.4%, 43/46) had received some form of training in antidepressant prescription in the last 5 years. The number of patients of black-African background seen in the last 6 months ranged from 0 to greater than 100. Only 26.1% (12/46) of GPs reported no consultations with a patient of black-African background within this timeframe. Of the 73.9% (34/46) of GPs who had seen at least one patient from this cohort, 55.9% (19/34) had treated at least one patient for depression with antidepressants.

GPs experience of ADRs in patients of black-African background treated for depression

Of the 46 participants, 19 had treated a patient of black-African background with antidepressants; 18 of these 19 reported having identified at least one ADR (Figure 1).

GP awareness and consideration of drug metabolism activity status and genetic factors

Awareness amongst GPs of the different drug metabolism activity phenotypes in black-Africans was low, with 79.1% (34/43) being unaware. Patients' genetic factors and enzyme metaboliser status were "always" considered by only 13.9% (5/36) and 11.1% (4/36) of GPs, respectively. There was no statistically significant difference in awareness between GPs who had treated black-African patients and those who had not (21.1% vs 13.3% respectively, p=0.89).

GP awareness and use of pharmacogenomic testing

Awareness of methods for testing a patient's key drug-metabolising enzymes, also known as pharmacogenomic testing, was extremely low, with 79.5% (35/44) of GPs being unaware of the testing methods available in Australia. Of the 20.5% of GPs (9/44) who were aware, none had utilised pharmacogenomic testing for their black-African patients. These nine GPs then nominated factors that would influence their utilisation of pharmacogenomic testing in these individuals, from which three main categories of influence emerged (Table 1). When specifically asked whether they would be more inclined to utilise pharmacogenomic testing in black-African patients who had previously experienced ADRs, 88.9% (8/9) of GPs stated that they would be.

Preferred education media

GPs who were aware of pharmacogenomic testing were asked, through an open-ended question, how they obtained information regarding these methods. Three main categories were identified based on their responses (Table 2). All GPs were then asked to note their preferred medium of education about pharmacogenomic testing (Table 3). Multiple responses were allowed.

Discussion

This study showed that there is a low level of awareness regarding pharmacogenomic testing and the differences in drug metabolism phenotypes among GPs. Additionally, we identified the preferred education media for providing information to GPs (Table 3). Awareness of pharmacogenomic testing and of the differences in drug enzyme metaboliser status (phenotype) could be valuable in the clinical setting. Improved patient outcomes have been noted when doctors are able to personalise management based on information from pharmacogenomic testing, [21] with Hall-Flavin et al. [21] noting significantly greater improvement from baseline depression scores amongst patients with depression whose doctors were provided with information on pharmacogenomics.

A previous study reported that a high proportion (97.6%) of physicians agreed that differences in genetic factors play a major role in drug responses. [22] Whilst knowledge that genetic factors broadly play a role in drug response may be near-universal, our study specifically focussed on knowledge of differences in enzyme metaboliser status. It was found that 79.1% of GPs (34/43) were unaware, with only a small number of GPs "always" considering enzyme metaboliser status (11.1%) in their management. Given the aforementioned importance of genetic factors and the potential to reduce ADRs using personalised medicine, this is an area for improvement.

When considering pharmacogenomic testing, we found 79.5% (35/44) of GPs to be unaware of testing methods. No GP had ever utilised pharmacogenomic testing; this low rate of utilisation has also been reported in several other studies. [22-24] A lack of utilisation and awareness arguably forms a barrier against the effective incorporation of personalised medicine in the primary care setting. These low figures reflect a lack of education regarding pharmacogenomics and its clinical applications, an issue that has been recognised since the arrival of these testing methods. [25] McKinnon et al. [25] highlighted that this lack of education across healthcare professionals is significant enough to be considered a "barrier to the widespread uptake of pharmacogenomics". To ameliorate the situation, the International Society of Pharmacogenomics issued recommendations in 2005 for pharmacogenomics to be incorporated into medical curricula. [26] Another contributing factor to the low utilisation of testing could be the lack of subsidised tests available through Medicare. Pathology laboratories (such as Douglas Hanly Moir and Healthscope) do currently provide pharmacogenomic testing; however, this is largely at the patient's expense, as only two tests are subsidised by Medicare. [23,27,28]

Amongst those aware of pharmacogenomic testing, eight out of nine GPs answered that they would be more likely to utilise pharmacogenomic testing in black-African patients who had previously experienced ADRs; this is consistent with findings noted by van Puijenbroek et al. [29] Among these GPs, factors noted as potential influences on their utilisation of testing included patient factors such as compliance, the reliability of the test, and factors affecting the clinical picture (as described in Table 1). This is consistent with studies that have also identified cost and a patient's individual response to drugs as influential factors in a physician's decision making. [29,30]

Considering that the majority of information regarding enzyme metabolism and pharmacogenomic testing has been published in pharmacological journals, [6,8-14,30-32] much of this knowledge may not have reached GPs. In order to understand GPs' preferred media of information, we posed open-ended questions and discovered that the majority of GPs who answered the question (32/39) would prefer information in written form (Table 3). This could be either online sources (such as guidelines, summaries, the National Prescribing Service or the Monthly Index of Medical Specialities) or peer-reviewed journal articles. Current literature also reflects this preference for GPs to gain education regarding pharmacogenomics through journal articles. [22] The other preferred medium of education was verbal teaching, peer discussions and presentations (Table 3), with specific interest in information being disseminated by clinical pathology laboratories; this is also reflected in the literature. [22,29]

Strengths and limitations

A small sample size is a limitation of this study, with possible contributing factors including the short time allowed for data collection and the low response rate due to GP time constraints. Strengths of the study include the use of a validated questionnaire catered to our target population and open-ended questions, which gave us further insight into GP preferences.

Implications and future research

Currently, anticoagulants provide an example of the clinical application of considering enzyme polymorphisms in patient management. [33,34] Warfarin is a particular example where variability in INR has been associated with enzyme polymorphisms, leading to the utilisation of dosage algorithms to optimise clinical outcomes. [34] Similarly, when using antidepressants, pharmacogenomic testing could play a role in clinical decision making, with Samer et al. [5] suggesting dose reductions and serum monitoring for those with known PM status. However, as identified in our study, there is an overall lack of awareness regarding the differences in enzyme metaboliser status and the methods available for pharmacogenomic testing.

Future studies should focus on the clinical practicality of utilising these tests, and should determine the effectiveness of the GP-preferred education modalities identified here in raising awareness.

Conclusion

There is a low awareness among GPs regarding both the differences in enzyme metaboliser status in the black-African community and the methods of pharmacogenomic testing.

To optimise clinical outcomes in black-African patients with depression, it may be useful to inform GPs of the availability and application of pharmacogenomic testing. We have highlighted the preferred education modalities through which this may be possible.

Acknowledgements

We would like to acknowledge and thank Dr. Irina Piatkov for her support as a supervisor during this project.

Conflict of interest

None declared.

Correspondence

Y Joshi: 17239266@student.uws.edu.au

References

[1] Australian Institute of Health and Welfare. The burden of disease and injury in Australia 2003 [Internet]. 2007 [cited 2014 April 25]. Available from: http://www.aihw.gov.au/publication-detail/?id=6442467990

[2] Charles J, Britt H, Fahridin S, Miller G. Mental health in general practice. Aust Fam Physician. 2007;36(3):200-1.

[3] Pierce D, Gunn J. Depression in general practice: consultation duration and problem solving therapy. Aust Fam Physician. 2011;40(5):334-6.

[4]  Binder  EB,  Holsboer  F.  Pharmacogenomics  and  antidepressant  drugs.  Ann  Med. 2006;38(2):82-94.

[5] Samer CF, Lorenzini KI, Rollason V, Daali Y, Desmeules JA. Applications of CYP450 testing in the clinical setting. Mol Diagn Ther. 2013;17(3):165-84.

[6] Alessandrini M, Asfaha S, Dodgen MT, Warnich L, Pepper MS. Cytochrome P450 pharmacogenetics in African populations. Drug Metab Rev. 2013;45(2):253-7.

[7] Yang X, Zhang B, Molony C, Chudin E, Hao K, Zhu J et al. Systematic genetic and genomic analysis of cytochrome P450 enzyme activities in human liver. Genome Res. 2010;20(8):1020-36.

[8] Zanger UM, Schwab M. Cytochrome P450 enzymes in drug metabolism: Regulation of gene expression, enzyme activities and impact of genetic variation. Pharmacol Therapeut. 2013;138(1):103-41.

[9]  Guengerich  FP.  Cytochrome  P450  and  chemical  toxicology.  Chem  Res  Toxicol. 2008;21(1):70-83.

[10] Ingelman-Sundberg M. Genetic polymorphisms of cytochrome P450 2D6 (CYP2D6): clinical consequences, evolutionary aspects and functional diversity. Pharmacogenomics J. 2005;5:6-13.

[11] Zhou S. Polymorphism of human cytochrome P450 2D6 and its clinical significance. Clin Pharmacokinet. 2009;48(11):689-723.

[12] Li-Wan-Po A, Girard T, Farndon P, Cooley C, Lithgow J. Pharmacogenetics of CYP2C19: functional and clinical implications of a new variant CYP2C19*17. Br J Clin Pharmacol. 2010;69(3):222-30.

[13] Xie HG, Kim RB, Wood AJJ, Stein CM. Molecular Basis of ethnic differences in drug disposition and response. Ann Rev Pharmacol Toxicol. 2001;41:815-50.

[14] Chen S, Chou WH, Blouin RA, Mao Z, Humphries LL, Meek QC et al. The cytochrome P450 2D6 (CYP2D6) enzyme polymorphism: screening costs and influence on clinical outcomes in psychiatry. Clin Pharmacol Ther. 1996;60(5):522-34.

[15] Hugo G. Migration between Africa and Australia: a demographic perspective – Background paper for African Australians: A review of human rights and social inclusion issues. Australian Human Rights Commission [Internet]. 2009 Dec [cited 2014 April 26]. Available from: https://www.humanrights.gov.au/sites/default/files/content/Africanaus/papers/Africanaus_paper_hugo.pdf

[16] Joint Standing Committee on Foreign Affairs, Defence and Trade. Inquiry into Australia's relationship with the countries of Africa [Internet]. 2011 [cited 2014 April 26]. Available from: http://www.aph.gov.au/Parliamentary_Business/Committees/House_of_Representatives_Committees?url=jfadt/africa%2009/report.htm

[17] Census 2006 – People born in Africa [Internet]. Australian Bureau of Statistics; 2008 August 20 [updated 2009 April 14; cited 2014 April 26]. Available from: http://www.abs.gov.au/AUSSTATS/abs@.nsf/Lookup/3416.0Main+Features32008

[18] Greater Western Sydney Economic Development Board. Some national transport and freight infrastructure priorities for Greater Western Sydney [Internet]. Infrastructure Australia; 2008 [cited 2014 April 25]. Available from: http://www.infrastructureaustralia.gov.au/public_submissions/published/files/368_greaterwesternsydneyeconomicdevelopmentboard_SUB.pdf

[19] Furler J, Kokanovic R, Dowrick C, Newton D, Gunn J, May C. Managing depression among ethnic communities: a qualitative study. Ann Fam Med. 2010;8:231-6.

[20] Robjant K, Hassan R, Katona C. Mental health implications of detaining asylum seekers: systematic review. Br J Psychiatry. 2009;194:306-12.

[21] Hall-Flavin DK, Winner JG, Allen JD, Carhart JM, Proctor B, Snyder KA et al. Utility of integrated pharmacogenomic testing to support the treatment of major depressive disorder in a psychiatric outpatient setting. Pharmacogenet Genomics. 2013;23(10):535-48.

[22] Stanek EJ, Sanders CL, Taber KA, Khalid M, Patel A, Verbrugge RR et al. Adoption of pharmacogenomics testing by US physicians: results of a nationwide survey. Clin Pharmacol Ther. 2012;91(3):450-8.

[23] Sheffield LJ, Phillimore HE. Clinical use of pharmacogenomics tests in 2009. Clin Biochem Rev. 2009;30(2):55-65.

[24] Corkindale D, Ward H, McKinnon R. Low adoption of pharmacogenetic testing: an exploration and explanation of the reasons in Australia. Pers Med. 2007;4(2):191-9.

[25]  McKinnon  R,  Ward  M,  Sorich  M.  A  critical  analysis  of  barriers  to  the  clinical implementation of pharmacogenomics. Ther Clin Risk Manag. 2007;3(5):751-9.

[26] Gurwitz D, Lunshof J, Dedoussis G, Flordellis C, Fuhr U, Kirchheiner J et al. Pharmacogenomics education: International Society of Pharmacogenomics recommendations for medical, pharmaceutical, and health schools deans of education. Pharmacogenomics J. 2005;5(4):221-5.

[27] Pharmacogenomics [Internet]. Healthscope Pathology; 2014 [cited 2014 October 22]. Available from: http://www.healthscopepathology.com.au/index.php/advancedpathology/pharmacogenomics/

[28] Overview of pharmacogenomic testing [Internet]. Douglas Hanly Moir Pathology; 2013 [cited 2014 October 22]. Available from: http://www.dhm.com.au/media/21900626/pharmacogenomics_brochure_2013_web.pdf

[29] van Puijenbroek E, Conemans J, van Grootheest K. Spontaneous ADR reports as a trigger for pharmacogenetic research: a prospective observational study in the Netherlands. Drug Saf. 2009;32(3):225-64.

[30] Rogausch A, Prause D, Schallenberg A, Brockmoller J, Himmel W. Patients' and physicians' perspectives on pharmacogenetic testing. Pharmacogenomics. 2006;7(1):49-59.

[31] Aklillu E, Persson I, Bertilsson L, Johansson I, Rodrigues F, Ingelman-Sundberg M. Frequent distribution of ultrarapid metabolizers of debrisoquine in an Ethiopian population carrying duplicated and multiduplicated functional CYP2D6 alleles. J Pharmacol Exp Ther. 1996;278(1):441-6.

[32] Bradford LD. CYP2D6 allele frequency in European Caucasians, Asians, Africans and their descendants. Pharmacogenomics. 2002;3:229-43.

[33] Cresci S, Depta JP, Lenzini PA, Li AY, Lanfear DE, Province MA et al. Cytochrome P450 gene variants, race, and mortality among clopidogrel-treated patients after acute myocardial infarction. Circ Cardiovasc Genet. 2014;7(3):277-86.

[34] Becquemont L. Evidence for a pharmacogenetic adapted dose of oral anticoagulant in routine medical practice. Eur J Clin Pharmacol. 2008;64(10):953-60.