
Is cancer a death sentence for Indigenous Australians? The impact of culture on cancer outcomes

Aim: Indigenous Australian cancer patients have poorer outcomes than non-Indigenous cancer patients after adjusting for age, stage at diagnosis and cancer type. This is not exclusive to the Indigenous population of Australia. The aim of this review is to explore the reasons why Indigenous Australians face a higher cancer mortality rate than their non-Indigenous counterparts. Methods: A literature search was conducted using PubMed and Medline to identify articles with quantitative research on the differing survival rates and cancer epidemiology, and qualitative data on postulated reasons for this discrepancy. Qualitative studies, non-systematic topic reviews, quality improvement projects and opinion pieces were also reviewed, in the belief that they may hold key sources of Indigenous perspectives yet are undervalued in the scientific literature. Results: Although all-cause cancer incidence is lower among Indigenous Australians, their probability of death was approximately 1.9 times higher than that of non-Indigenous patients. The distribution of cancer types differs slightly in the Indigenous population, with a higher incidence of smoking-related cancers such as oropharyngeal and lung cancers, and of cancers amenable to screening, such as cervical cancer. Indigenous patients generally have a later stage at diagnosis and are less likely to receive curative treatment. This discrepancy has been attributed to health service delivery issues, low uptake of screening and preventative behaviours, communication barriers, socioeconomic status and non-biomedical beliefs about cancer. Conclusion: The implication of these findings for the future of Indigenous cancer care is that fundamental social, cultural and service-based change is required for long-term, sustainable reduction in Indigenous mortality rates. To ‘close the gap’ we need to make further collaborative system changes based on Indigenous cultural preferences.

Introduction

Indigenous Australian cancer patients have much poorer outcomes than non-Indigenous cancer patients after adjusting for age, stage at diagnosis and cancer type. [1] Statistics from 2005 show that cancer was the third highest cause of death in Indigenous people, as it is for all Australians, causing 17% and 30% of all deaths respectively. [1,3] However, after adjusting for age and sex, Indigenous people had a 50% higher cancer death rate. [4] Indigenous Australians have a higher incidence of rapidly fatal cancers that are preventable or amenable to screening, particularly lung and other smoking-related cancers. [2] One of the major contributors to increased mortality is the advanced stage at cancer diagnosis. In addition, Indigenous people are less likely to receive adequate treatment. The aim of this review is to explore the reasons why Indigenous Australians face a higher cancer mortality rate than their non-Indigenous counterparts. This review will present the epidemiology of cancer types and discuss the grounds for this discrepancy, with a focus on the causes of advanced stage at diagnosis, the geographical distribution of the population, socioeconomic status, service delivery and cultural beliefs about cancer.

A literature search was conducted using PubMed and Medline to identify articles with quantitative research on the differing survival rates and cancer epidemiology, and qualitative data on postulated reasons for this discrepancy. Qualitative studies, non-systematic topic reviews, quality improvement projects and opinion pieces were also reviewed, in the belief that they may hold key sources of Indigenous perspectives yet are undervalued in the scientific literature. Combinations of key words such as ‘Indigenous’, ‘cancer’, ‘incidence’, ‘mortality’, ‘non-Indigenous’ and ‘cultural beliefs’ were used, with criteria limiting articles to those published after 2000 and within Australia, although some key international references were included.

Indigenous Australians have a life expectancy seventeen years shorter than their non-Indigenous counterparts, and a burden of chronic disease 2.5 times higher. [5] This is not exclusive to the Indigenous population of Australia; similar findings have been reported for the Indigenous peoples of Canada, New Zealand and the United States. [6,7,8] The Aboriginal and Torres Strait Islander people of Australia, who account for 2.4% of the total population, [9] will be referred to as Indigenous people for the purpose of this review, although their separate cultural identities are recognised.

Epidemiology

Indigenous people in the Northern Territory diagnosed with cancer between 1991 and 2000 were 1.9 times more likely to die than other Australians, after adjusting for cancer site, age and sex (Figure 1). [10] The distribution of cancer types differs in the Indigenous population, with a higher incidence of, and mortality from, smoking-related cancers such as oropharyngeal and lung cancers, and cancers amenable to screening, such as cervical and bowel cancer. [10,11] In addition, studies from New South Wales, the Northern Territory and Queensland have found that Indigenous people are more likely to have advanced disease at diagnosis for all cancers combined. [2,12,13] Notably, lung cancer is diagnosed earlier in Indigenous people; this is thought to be due to the high prevalence of lung conditions such as tuberculosis and chronic lung disease among the Indigenous population. [2] Only 11% of Indigenous bowel cancer patients in the Northern Territory, compared with 32% of non-Indigenous patients, were diagnosed early; this has potential for improvement through faecal occult blood programs as a cost-effective screening tool. [6] In addition to the late stage at diagnosis, the low rate of cancer survival in Indigenous patients can be partly attributed to the prevalence of high-fatality cancers, treatment-limiting comorbidities and high uptake of palliative or non-aggressive treatment options. [2]

Studies from a number of Australian states have shown that Indigenous patients are less likely to undergo treatment. In a Western Australian study reported by Hall et al., 26 (9.5%) of 274 Indigenous lung cancer patients diagnosed between 1982 and 2001 underwent surgery, compared with 1,693 (12.9%) of 13,103 non-Indigenous lung cancer patients. [16] Over the same period, one (1.5%) of 64 Indigenous prostate cancer patients underwent surgery, versus 1,787 (12.7%) of 12,123 non-Indigenous prostate cancer patients. The study concluded that Indigenous people with prostate or lung cancer were less likely to undergo surgery than their non-Indigenous counterparts. [14] A Queensland study by Valery et al. also reported that Indigenous cancer patients were less likely to undergo surgical treatment. [9] This may partly be explained by advanced stage at diagnosis; however, the difference remained statistically significant after adjustment for stage, demonstrating under-treatment. Treatment choice and barriers to care were identified as important contributors to this discrepancy. [14]

Longitudinal trends from 1995 to 2005 in the Northern Territory showed a downward trend in all-cancer incidence among non-Indigenous people, as opposed to an increase among Indigenous people. All-cancer mortality declined significantly among non-Indigenous people, while there was little change in the death rate of Indigenous people. [10] Nation-wide, between 1982 and 2007 the incidence of all cancers combined increased from 383 cases per 100,000 to 485 per 100,000, an increase of roughly 27%. [15]

Rural and Remote Locations

Lower survival rates have been observed among Indigenous cancer patients from remote communities in Queensland, Western Australia and the Northern Territory. [4,16] Indigenous people are ten times more likely than non-Indigenous people to live in remote areas of Australia. [17] This has implications for the delivery of screening, diagnostic and treatment services, as well as access to preventative health education. Rural and remote Australia has a shortage of healthcare providers and of adequate primary health care facilities to cover the vast geographical distances, and it is difficult to ensure transport links to major centres for patients requiring referral. These factors probably contribute to the poorer outcomes.

Socioeconomic Status

Like other Indigenous populations, Indigenous Australians are over-represented in the low socioeconomic strata. [1,2] Since the colonisation of Australia, Indigenous people have progressively lost their cultural expression and practices, resulting in disempowerment. [8] Subsequent ‘welfare dependency’, with continuing loss of skills, unemployment and hopelessness, has been suggested as a contributory factor. [12] The many reasons why disempowerment has manifested in poor levels of education, employment and health outcomes are beyond the scope of this discussion. In addition, Indigenous Australians are more likely, to varying degrees, to be exposed to poor environmental health and disadvantaged living conditions. These include overcrowding, poor nutrition and obesity, tobacco use, excessive alcohol and other drug use, and higher rates of human papillomavirus (HPV) infection, which is linked to the aetiology of cervical and head and neck cancers. [1,11,17] Behavioural risk factors linked to low socioeconomic status may also contribute to higher levels of comorbidities.

Culture

Cultural isolation, power imbalances and differing health beliefs about cancer causation are patient factors that also contribute to poorer prognosis. Indigenous people are sensitive to power imbalances in their interactions with healthcare providers. [12] Psychological stress, common to many vulnerable populations, has been consistently associated with sub-optimal health outcomes for Indigenous people and is an important obstacle to accessing healthcare. Peiris et al. believe that ‘cultural safety’ within healthcare facilities is paramount in addressing this problem. [12] Open-door policies, welcoming waiting rooms and reception staff who know the community are means of reorienting health services and preventing cultural disconnect. [12] However, many areas lack community-controlled health services, and there is a relative shortage of skilled Indigenous people in the health workforce. Improving these factors would greatly enhance cultural safety and community-specific delivery of health care. [4,12] Studies comparing Maori and Pacific Islander people in New Zealand have identified similar causes of ethnic inequality in access to culturally acceptable health services. [6,7,8]

Language

In 2002, 66% of Indigenous Australians in the Northern Territory reported speaking a language other than English at home; in Western Australia, South Australia and Queensland the proportion of Indigenous language speakers was eleven to fourteen per cent. [18] A study by Condon et al. reported that cancer survival was strongly associated with the patient’s first language. [2] After adjusting for treatment, cancer stage and site, the risk of death for Indigenous native-language speakers was almost double that of Indigenous English-speaking and non-Indigenous patients. [2] It is postulated that communication difficulties, social and cultural ‘disconnect’ from mainstream health services, and poor health literacy may underlie this association. [9] This finding reinforces the importance of using Aboriginal Health Workers and interpreters in clinical practice.

Beliefs about Cancer

Attitudes to cancer and to medical services strongly influence the use of diagnostic and curative care. Shahid et al. interviewed Indigenous people from various geographical areas of Western Australia about their beliefs and attitudes towards cancer. [3] The findings were striking: many participants believed cancer was contagious, and attributed it to spiritual curses, bad spirits or punishment for a past misdeed. Blaming others, or one’s own wrongdoing, as a cause of cancer or illness was found to be widespread within Aboriginal communities, where spiritual beliefs about wellbeing predominate. [3] Shahid et al. suggested that attributing cancer to spiritual origins led to acceptance of the disease without seeking healthcare. [3] In addition, Indigenous cancer sufferers may feel ashamed of their ‘wrongdoings’ and hide their symptoms, delaying diagnosis. [3]

Fatalistic attitudes towards a cancer diagnosis in the general Australian population have changed in recent times, with the dissemination of information about curative cancer treatments, a shifting focus towards the biological basis of cancer, and public education about screening and preventative measures such as bowel cancer screening and HPV vaccination. However, the low socioeconomic status and poor educational background of many Indigenous Australians have limited their access to such information, [3] and in many Indigenous communities fatalistic expectations of a cancer diagnosis remain. Such fatalistic beliefs, compounded by the spiritual attributions described above, are associated with delays in cervical cancer screening, late presentation of cancer symptoms, and loss of patients to follow-up. For example, some Indigenous women with cervical cancer in Queensland blamed their cancer on the loss of a traditional lifestyle. [19] Other misconceptions include the beliefs that screening protects against cancer and that cancer is contagious. Studies from New Zealand, Canada and the US have shown similar themes concerning non-biomedical Indigenous beliefs about cancer. [6]

In addition to the view that “cancer means death” were attitudes of over-reliance on, or mistrust of, doctors. Personal stories of an individual’s unmet expectations of the medical system often spread within the community and influenced others’ attitudes; examples include patients who had finished treatment, thought they had been cured of cancer, and were then lost to follow-up, or the idea that screening prevents cancer. [20] Traditional healing retains a role in many Indigenous communities for health and wellbeing, and is particularly important during palliation of cancer patients, often as a link to their country and ancestral roots. [3]

Recommendations

Culturally-appropriate service delivery

Diagnosing cancers earlier in the Indigenous population would increase the chance of curative treatment and reduce overall mortality. Increasing primary health care services and their culturally appropriate delivery would address this need; however, improving access to and use of relevant services for Indigenous people remains a challenge. For women’s health issues, there may be stigma, shame and embarrassment associated with sexually transmitted infections and cervical cancer, as well as cultural factors associated with denial of symptoms and the gender of healthcare workers. [20] Service delivery failures relate to inadequate or inappropriate recall systems, lack of privacy during screening (especially in small communities), the sex of the healthcare provider, the timing and location of screening, discontinuity of care, and difficulty maintaining the cold chain and promoting vaccinations such as Gardasil™. [17] National data for breast and cervical cancer screening reveal that Indigenous women participate at about two-thirds of the national rate. However, the implementation of the culturally acceptable “Well Women’s Screening Program” in the Northern Territory substantially improved Indigenous participation in Pap test screening, from 33.9% in 1998 to 44% in 2000. [19] Similar initiatives have been successfully implemented in Queensland. [22] This highlights the efficacy of culturally appropriate services tailored to the population.

Education

Programs to reduce tobacco use and improve other behavioural risk factors need to be designed for use in the setting of communication difficulty and poor health literacy, and they need to address the cultural role of smoking among Indigenous people. [1] In addition, health service delivery improvements such as health education, promotion, screening programs and cultural safety, as demonstrated in the successful “Well Women’s Screening Program”, [19] will also contribute to successful intervention.

Conclusion

Australia’s national ‘Closing the Gap’ targets include the aim “to halve the life expectancy gap [between Indigenous and non-Indigenous Australians] within a generation.” [5] Language, cultural barriers, geographical distance, low socioeconomic status, high-risk health behaviours, and traditional and non-biomedical beliefs about cancer all contribute to Indigenous Australians’ worse cancer outcomes. The implication of these findings for the future of Indigenous cancer care, and for meeting the national targets, is that fundamental social, cultural and service-based changes are required for long-term, sustainable reduction in Indigenous mortality rates. Underlying cultural beliefs and individual perceptions about cancer must be specifically addressed to develop effective screening and treatment approaches, and educational material must be designed to better engage Indigenous people. In addition, Aboriginal cancer support services, and opportunities for Aboriginal cancer survivors to act as advocates within their communities, may increase Indigenous peoples’ willingness to accept modern oncological treatment. Through these improvements, a tailored approach to Indigenous cancer patients can meet the spiritual, cultural and physical needs that are imperative for holistic management.

Conflicts of interest

None declared.

Correspondence

S Koefler: sophia.koefler@gmail.com

References

[1] Cunningham J, Rumbold A, Zhang X, Condon J. Incidence, aetiology and outcomes of cancer in Indigenous people of Australia. Lancet Oncol. 2008;9:585-95.

[2] Condon J, Barnes T, Armstrong B, Selva-Nayagam S, Elwood M. Stage at diagnosis and cancer survival for Indigenous Australians in the Northern Territory. Med J Aust. 2004;182(6):277-80.

[3] Shahid S, Finn L, Bessarab D, Thompson S. Understanding, beliefs and perspectives of Aboriginal people in Western Australia about cancer and its impact on access to cancer services. BMC Health Services Research. 2009;9:132-41.

[4] Roder D, Currow D. Cancer in Aboriginal and Torres Strait Islander people of Australia. Asian Pacific J Cancer Prev. 2008;9(10):729-33.

[5] Anderson I. Closing the indigenous health gap. Aust Fam Physician. 2008;37(12):982.

[6] Shahid S, Thompson S. An overview of cancer and beliefs about the disease in Indigenous people of Australia, New Zealand and the US. Aust NZ J Public Health. 2009;33:109-18.

[7] Paradies Y, Cunningham J. Placing Aboriginal and Torres Strait Islander mortality in an international context. Aust NZ J Public Health. 2002;26(1):11-6.

[8] Jeffreys M, Stevanovic V, Tobias M, Lewis C, Ellison-Loschmann L, Pearce N et al. Ethnic inequalities in cancer survival in New Zealand: linkage study. Am J Public Health. 2005;95(5):834-7.

[9] Valery P, Coory M, Stirling J, Green A. Cancer diagnosis, treatment, and survival in Indigenous and non-Indigenous Australians: a matched cohort study. The Lancet. 2006;367:1842-8.

[10] Zhang X, Condon J, Dempsey K, Garling L. Cancer incidence and mortality in Northern Territory, 1991-2005. Department of Health and Families; Darwin 2006:1-65.

[11] Condon J, Barnes T, Cunningham J, Armstrong B. Long-term trends in cancer mortality for Indigenous Australians in the Northern Territory. Med J Aust. 2004;180:504-7.

[12] Peiris D, Brown A, Cass A. Addressing inequities in access to quality health care for Indigenous people. CMAJ. 2008;179(10):985-6.

[13] Supramaniam R, Grindley H, Pulver LJ. Cancer mortality in Aboriginal people in New South Wales, Australia, 1994-2001. Aust NZ J Public Health. 2006;30(5):453-6.

[14] Hall S, Bulsara C, Bulsara M, Leahy T, Culbong M, Hendrie D et al. Treatment patterns for cancer in Western Australia, does being Indigenous make a difference? Med J Aust. 2004;181(4):191-4.

[15] Australian Institute of Health and Welfare & Australasian Association of Cancer Registries. Cancer in Australia: an overview, 2010. Cancer series no. 60. Canberra: AIHW; 2010. p.14-5.

[16] Hall S, Holman C, Sheiner H. The influence of socio-economic and locational disadvantage on patterns of surgical care for lung cancer in Western Australia 1982-2001. Aust Health Rev. 2004;27(2):68-79.

[17] Jong K, Smith D, Yu X, O’Connell D, Goldstein D, Armstrong B. Remoteness of residence and survival from cancer in New South Wales. Med J Aust. 2004;180:618-21.

[18] Condon J, Cunningham J, Barnes T, Armstrong B, Selva-Nayagam S. Cancer diagnosis and treatment in the Northern Territory: assessing health service performance for Indigenous Australians. Intern Med J. 2006;36:498-505.

[19] Binns P, Condon J. Participation in cervical screening by Indigenous women in the Northern Territory: a longitudinal study. Med J Aust. 2006;185(9):490-4.

[20] Lykins E, Graue L, Brechting E, Roach A, Cochett C, Andrykowski M. Beliefs about cancer causation and prevention as a function of personal and family history of cancer: a national, population-based study. Psycho-Oncology. 2008;17:967-74.

[21] Henry B, Houston S, Mooney G. Institutional racism in Australian healthcare: a plea for decency. Med J Aust. 2004;180:517-9.

[22] Augus S. Queensland Aboriginal and Torres Strait Islander women’s cervical screening strategy. Population Health Branch Queensland Health 2010. 10-27.


Control of seasonal influenza in healthcare settings: Mandatory annual influenza vaccination of healthcare workers

Introduction: The aims of this review are to emphasise the burden and transmission of nosocomial seasonal influenza; discuss the influenza vaccine and the need for annual influenza vaccination of all healthcare workers; discuss common attitudes and misconceptions regarding the influenza vaccine among healthcare workers and means of overcoming them; and highlight the need for mandatory annual influenza vaccination of healthcare workers. Methods: A literature review was carried out; Medline, PubMed and The Cochrane Collaboration were searched for primary studies, reviews and opinion pieces pertaining to influenza transmission, the influenza vaccine, and common attitudes and misconceptions. Key words used included “influenza”, “vaccine”, “mandatory”, “healthcare worker”, “transmission” and “prevention”. Results: Seasonal influenza is a serious disease that is associated with considerable morbidity and mortality and imposes an enormous economic burden on society. Healthcare workers may act as vectors for nosocomial transmission of seasonal influenza. This risk to patients can be reduced by safe, effective annual influenza vaccination of healthcare workers, which has been specifically shown to significantly reduce morbidity and mortality. However, traditional strategies to improve uptake consistently fail, with only 35 to 40% of healthcare workers vaccinated annually. Mandatory influenza vaccination programs with medical and religious exemptions have successfully increased annual influenza vaccination rates of healthcare workers to >98%. Exemption requests often reflect misconceptions about the vaccine and influenza, underscoring the importance of continuous education programs and the need for a better understanding of the reasons for compliance with influenza vaccination. Conclusion: Mandatory annual influenza vaccination of healthcare workers is ethically justified and, if implemented appropriately, will be acceptable. Traditional strategies to improve uptake are minimally effective, expensive and inadequate to protect patient safety. Low voluntary vaccination rates therefore leave only one option to protect the public: mandatory annual influenza vaccination of healthcare workers.

Introduction

Each year, between 1,500 and 3,500 Australians die from seasonal influenza and its complications. [1] The World Health Organization (WHO) estimates that seasonal influenza affects five to fifteen per cent of the world’s population annually, causing three to five million cases of serious illness and 250,000-500,000 deaths. [2] In Australia, seasonal influenza is estimated to cause 18,000 hospitalisations and over 300,000 general practitioner (GP) consultations every year. [3] Nosocomial seasonal influenza is associated with considerable morbidity and mortality among the elderly, neonates, the immunocompromised and patients with chronic diseases. [4] The most effective way to reduce or prevent nosocomial transmission of seasonal influenza is annual influenza vaccination of all healthcare workers. [5,6] The Centers for Disease Control and Prevention (CDC) has recommended annual influenza vaccination of all healthcare workers since 1981, and the provision and administration of the vaccine to healthcare workers at the work site, free of charge, since 1993. [7] Despite this, only 35% to 40% of healthcare workers are vaccinated annually. [8]

Transmission of seasonal influenza

The influenza virus attaches to and invades the epithelial cells of the upper respiratory tract. [8] Viral replication in these cells leads to the release of pro-inflammatory cytokines and to epithelial cell necrosis. [8] Influenza is primarily transmitted from person to person by droplets generated when an infected person breathes, coughs, sneezes or speaks. [8] These droplets settle on the mucosal surfaces of the upper respiratory tract of susceptible persons; transmission therefore occurs primarily between people in close proximity. [8]

The influenza vaccine

The influenza vaccines currently available in Australia are inactivated, split-virion or subunit vaccines, produced using viral strains propagated in fertilised hens’ eggs. [9] The inactivated virus is incapable of replicating inside the human body, and thus incapable of causing infection. [10] Influenza vaccines are trivalent, i.e. they protect against three different strains of influenza. [9] Because influenza viruses continually undergo antigenic change, the vaccine is adapted annually to ensure protection against the strains likely to be circulating during the influenza season. [9] The 2011 influenza vaccine covered the pandemic H1N1 2009 (‘swine flu’), H3N2 and B strains. [11] Influenza vaccines are included in the Australian National Immunisation Program only after evaluation of their quality, safety, effectiveness and cost-effectiveness for their intended use in the Australian population. [9] The only common adverse effect is minor injection-site soreness for one to two days. [10] Vaccine effectiveness depends on the age and immune status of the individual being vaccinated, and on the match between the strains included in the vaccine and those circulating in the community. [12] The influenza vaccine is 70 to 90% effective in preventing influenza infection in healthy individuals under 65 years of age, a category into which the majority of healthcare workers fall. [12] Influenza vaccination has been shown to be 88% effective in preventing laboratory-confirmed influenza in healthcare workers. [13]

The need for annual influenza vaccination

Transmission of influenza has been reported in a variety of healthcare settings, and healthcare workers are often implicated in outbreaks. [13] Healthcare workers are at increased risk of acquiring seasonal influenza because of exposure to the virus in both healthcare and community settings. [13] However, simply staying home from work during symptomatic illness is not an effective strategy to prevent nosocomial transmission. [10] The incubation period ranges from one to four days; the contagious period begins before symptoms appear, and the virus may be shed for at least one day prior to symptomatic illness. [4,10] Fewer than 50% of infected people show classic signs of influenza; asymptomatic healthcare workers may fail to recognise that they are infected, yet can shed the virus for five to ten days. [13,14] Symptomatic healthcare workers also often continue to work despite influenza symptoms. [10,15] In one study, 23% of serum samples from healthcare workers contained specific antibody suggesting seasonal influenza infection during a single season; however, 59% of those infected could not recall an influenza-like illness and 28% were asymptomatic. [13] The direct implication is that healthcare workers may act as vectors for nosocomial transmission of seasonal influenza to patients who are at increased risk of morbidity and mortality from the disease. [10] Many of these patients do not mount an adequate immune response to influenza vaccination, making vaccination of healthcare workers especially important; [16] only 50% of residents in long-term care settings develop protective vaccination-induced antibody titres. [17] Influenza vaccination of healthcare workers may reduce the risk of seasonal influenza outbreaks in all types of healthcare settings and has been specifically shown to significantly reduce morbidity and mortality. [12] A randomised controlled trial found that annual influenza vaccination of healthcare workers was associated with a 43% reduction in influenza-like illness and a 44% reduction in mortality among geriatric patients in long-term care settings. [12] Furthermore, an algorithm modelling the effect of healthcare worker vaccination on patient outcomes predicted that if all healthcare workers were vaccinated annually, approximately 60% of patient influenza infections could be prevented. [18]

Although a number of factors contribute to the overall burden of seasonal influenza, the economic burden to society results primarily from the loss of working time and productivity associated with influenza-related absence, and from the increased use of medical resources required to treat patients with influenza and its complications. [19] Typically, the indirect costs associated with lost working time and productivity account for the greater proportion (>80%) of the economic burden of seasonal influenza. [19] One study reported that healthcare workers who received the influenza vaccine had 25% fewer episodes of respiratory illness, 43% fewer days of sickness absenteeism due to respiratory illness and 44% fewer visits to physicians’ offices for upper respiratory illness than those who received placebo. In a review of studies that confirmed seasonal influenza infection with laboratory evidence, the mean reported sickness absenteeism per episode of seasonal influenza ranged from 2.8 to 4.9 days for adults. [19] Furthermore, a retrospective cohort study of emergency department healthcare workers found that a significantly larger proportion of vaccine non-recipients than recipients took sick leave because of influenza-like illness (55% versus 30.3%). [20]
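
As a minimal illustration of the size of that last effect, the quoted proportions translate into roughly a 1.8-fold relative risk of influenza-like-illness sick leave among unvaccinated staff. The short Python sketch below is illustrative arithmetic only, using the figures quoted above; it is not taken from the cited study's analysis.

```python
# Illustrative arithmetic only: sick-leave proportions quoted from the
# retrospective cohort study above (Chan, reference [20]).
p_unvaccinated = 0.55   # proportion of vaccine non-recipients taking sick leave
p_vaccinated = 0.303    # proportion of vaccine recipients taking sick leave

relative_risk = p_unvaccinated / p_vaccinated            # ~1.82
absolute_risk_reduction = p_unvaccinated - p_vaccinated  # ~0.25

print(f"Relative risk (unvaccinated vs vaccinated): {relative_risk:.2f}")
print(f"Absolute risk reduction with vaccination: {absolute_risk_reduction:.1%}")
```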

Attitudes and misconceptions

Self-protection, rather than protection of patients, is often the dominant motivation for influenza vaccination, and many healthcare workers report they would be more willing to be vaccinated against pandemic influenza, which is perceived as more dangerous than seasonal influenza. [15] One study found that the most common reason for receiving the influenza vaccine among healthcare workers (cited by 100% of those surveyed) was self-protection against influenza; [21] seventy per cent were also motivated by preventing cross-infection of colleagues, patients and the community. [21] Common reasons for declining the vaccine included “trust in, or the wish to challenge, natural immunity”, “physician’s advice against the vaccine for medical reasons”, “severe localised effects from the vaccine” and “not believing the vaccine to have any benefit.” [21] A multivariate analysis in a separate study found that “older age”, “believing that most colleagues had been vaccinated” and “having cared for patients suffering from severe influenza” were significantly associated with compliance with influenza vaccination, with the main motivation being “individual protection”. [22] Lack of information about the vaccine’s effectiveness, recommended use, adverse effects and composition again reflects the importance of continuous education programs and the need for a better understanding of the reasons for compliance with influenza vaccination. [22]

Major issues

Analysis of interviews with healthcare workers indicated that successfully adding mandatory annual influenza vaccination to the current policy directive would require four major issues to be addressed: providing and communicating a solid evidence base supporting the policy; addressing staff concerns about the influenza vaccine; ensuring staff understand the need to protect patients; and managing the logistical challenges of enforcing an annual vaccination campaign. [23] A systematic review of influenza vaccination campaigns for healthcare workers found that combining education or promotion with improved access to the vaccine yielded greater increases in coverage, and that campaigns with legislative or regulatory components, such as mandatory declination forms, achieved higher rates than other interventions. [24]

Influenza vaccination is currently viewed as a public health initiative centred on the personal choice of employees. [12] However, a shift in focus is appropriate: seasonal influenza vaccination of healthcare workers is a patient health and safety initiative. [12] In 2007, the CDC’s Advisory Committee on Immunization Practices added a recommendation that healthcare settings implement policies encouraging influenza vaccination of healthcare workers, with informed declination. [25] A switch from voluntary to mandatory influenza vaccination of healthcare workers should be considered by all public health bodies. [4]

Mandatory annual influenza vaccination

Fifteen states in the USA now have laws requiring annual influenza vaccination of healthcare workers, although these permit informed declination, and at least five states require it of all healthcare workers. Many individual medical centres have instituted policies requiring influenza vaccination, with excellent results. [26]

A year-long study of approximately 26,000 employees at BJC HealthCare found that a mandatory influenza vaccination program successfully increased vaccination rates to >98%. [27] Influenza vaccination was made a condition of employment for all healthcare workers, and those who remained unvaccinated and non-exempt after one year were terminated. [27] Medical or religious exemptions could be sought; accepted medical grounds included hypersensitivity to eggs, prior hypersensitivity reaction to the influenza vaccine and a history of Guillain-Barré syndrome. [27] Exemption requests often reflected misconceptions about the vaccine and influenza. [27] Several requests cited chemotherapy or immunocompromise as a reason not to receive the influenza vaccine, even though these groups are at high risk of complications from influenza and are specifically recommended to be vaccinated. [27] Several requests cited pregnancy, although the influenza vaccine is recommended during pregnancy. [27]

Similarly, a five-year study of mandatory influenza vaccination of approximately 5,000 healthcare workers at Virginia Mason Medical Centre sustained vaccination rates of more than 98% during 2005-2010. [28] Fewer than 0.7% of healthcare workers were granted exemptions for medical or religious reasons, and these staff were required to wear a mask at work during influenza season; fewer than 0.2% refused vaccination and left the centre. [28]

Conclusion

Mandatory annual influenza vaccination of healthcare workers raises complex professional and ethical issues. However, the arguments in favour are clear:

1. Seasonal influenza is a serious and potentially fatal disease, associated with considerable morbidity and mortality among the elderly, neonates, the immunocompromised and patients with chronic diseases. [4]

2. The influenza vaccine has been evaluated for safety, quality, effectiveness and cost-effectiveness for its intended use in the Australian population. [9]

3. Healthcare workers may act as vectors for nosocomial transmission of seasonal influenza, and this risk to patients can be reduced by safe, effective annual influenza vaccination of healthcare workers. [10]

4. The contagious period of seasonal influenza begins before symptoms appear, and the virus may be shed for at least one day prior to symptomatic illness. [14]

5. Influenza vaccination of healthcare workers may reduce the risk of seasonal influenza outbreaks in all types of healthcare settings and has been specifically shown to significantly reduce morbidity and mortality. [12]

6. Seasonal influenza imposes an enormous economic burden on society through the loss of working time and productivity associated with influenza-related absence and the increased use of medical resources required to treat patients with influenza and its complications. [19]

7. Traditional strategies to improve uptake by healthcare workers consistently fail, with only 35% to 40% of healthcare workers vaccinated annually. [8]

8. Mandatory influenza vaccination programs with medical and religious exemptions have successfully increased annual influenza vaccination rates of healthcare workers to >98%. [27,28]

9. Exemption requests often reflect misconceptions about the vaccine and influenza, underscoring the importance of continuous education programs and the need for a better understanding of the reasons for compliance with influenza vaccination. [22,27]

These facts suggest that mandatory annual influenza vaccination of healthcare workers is ethically justified and, if implemented appropriately, will be acceptable. [15] For this to occur, a mandatory program needs leadership by senior clinicians and administrators; consultation with healthcare workers and professional organisations; appropriate education; free, easily accessible influenza vaccine; and adequate resources to deliver the program efficiently. It also requires provision for exemptions on medical and religious grounds, and appropriate sanctions for those who refuse annual influenza vaccination, for example a requirement to wear a mask during influenza season, or termination of employment. [15] Healthcare workers accept a range of moral and professional responsibilities, including duties to protect patients in their care from unnecessary harm, to do good, to respect patient autonomy and to treat all patients fairly. They also accept reasonable, but not unnecessary, occupational risks such as exposure to infectious diseases. [15] Vaccination is often seen as something that people have a right to accept or refuse; however, freedom to choose also depends on the extent to which that choice affects others. [15] In the healthcare setting, the autonomy of healthcare workers must be balanced against patients’ right to protection from avoidable harm and the moral obligation of healthcare workers not to put others at risk. [15] Mandatory annual influenza vaccination of healthcare workers is consistent with the public’s right to expect that healthcare workers will take all necessary and reasonable precautions to keep them safe and minimise harm. [15] Traditional strategies to improve uptake by healthcare workers are minimally effective, expensive and inadequate to protect patient safety. Low voluntary vaccination rates therefore leave only one option to protect the public: mandatory annual influenza vaccination of healthcare workers.

Conflicts of interest

None declared.

Correspondence

K Franks: kathryn.franks@my.jcu.edu.au

References

[1] Australian Bureau of Statistics. 3303.0 – Causes of death, Australia. 2007.

[2] World Health Organization. Fact sheet no. 211. Revised April 2009.

[3] Williams U, Finch G. Influenza specialist group – influenza fact sheet. Revised March 2011.

[4] Maltezou H. Nosocomial influenza: new concepts and practice. Curr Opin Infect Dis. 2008;21:337-43.

[5] Weber D, Rutala W, Schaffner W. Lessons learned: protection of healthcare workers from infectious disease risks. Crit Care Med. 2010;38(8):306-14.

[6] Ling D, Menzies D. Occupation-related respiratory infections revisited. Infect Dis Clin North Am. 2010;24:655-80.

[7] Centers for Disease Control and Prevention. Influenza vaccination of healthcare personnel: recommendations of the healthcare infection control practices advisory committee and the advisory committee on immunization practices. MMWR Morb Mortal Wkly Rep. 2006;55:1-41.

[8] Beigel J. Influenza. Crit Care Med. 2008;36(9):2660-6.

[9] Horvath J. Review of the management of adverse effects associated with Panvax and Fluvax: final report. Department of Health and Ageing; 2011. p.1-58.

[10] McLennan S, Gillett G, Celi L. Healer, heal thyself: health care workers and the influenza vaccination. AJIC. 2008;36(1):1-4.

[11] Bishop J. Seasonal influenza vaccination 2011. Canberra: Department of Health and Ageing; 2011.

[12] Schaffner W, Cox N, Lundstrom T, Nichol K, Novick L, Siegel J. Improving influenza vaccination rates in health care workers: strategies to increase protection for workers and patients. In: NFID, editors. 2004. p.1-19.

[13] Goins W, Talbot H, Talbot T. Health care-acquired viral respiratory diseases. Infect Dis Clin North Am. 2011;25(1):227-44.

[14] Maroyka E, Andrawis M. Health care workers and influenza vaccination. AJHP. 2010;67(1):25.

[15] Gilbert GL, Kerridge I, Cheung P. Mandatory influenza immunisation of health-care workers. Lancet Infect Dis. 2010;10:3-4.

[16] Carlson A, Budd A, Perl T. Control of influenza in healthcare settings: early lessons from the 2009 pandemic. Curr Opin Infect Dis. 2010;23:293-9.

[17] Lee P. Prevention and control of influenza. Southern Medical Journal. 2003;96(8):751-7.

[18] Ottenburg A, Wu J, Poland G, Jacobson R, Koenig B, Tilburt J. Vaccinating health care workers against influenza: the ethical and legal rationale for a mandate. AJPH. 2011;101(2).

[19] Keech M, Beardsworth P. The impact of influenza on working days lost: a review of the literature. TPJ. 2008;26(1):911-24.

[20] Chan SS-W. Does vaccinating ED health care workers against influenza reduce sickness absenteeism? AJEM. 2007;25:808-11.

[21] Osman A. Reasons for and barriers to influenza vaccination among healthcare workers in an Australian emergency department. AJAN. 2010;27(3):38-43.

[22] Takayanagi I, Cardoso M, Costa S, Araya M, Machado C. Attitudes of health care workers to influenza vaccination: why are they not vaccinated? AJIC. 2007;35(1):56-61.

[23] Leask J, Helms C, Chow M, Robbins SC, McIntyre P. Making influenza vaccination mandatory for health care workers: the views of NSW Health administrators and clinical leaders. New South Wales Public Health Bulletin. 2010;21(10):243-7.

[24] Lam P-P, Chambers L, MacDougall DP, McCarthy A. Seasonal influenza vaccination campaigns for health care personnel: systematic review. CMAJ. 2010;182(12):542-8.

[25] Centers for Disease Control and Prevention. Prevention and control of influenza, recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR Recomm Rep. 2007;56(RR-6):1-54.

[26] Tucker S, Poland G, Jacobson R. Requiring influenza vaccination for health care workers: the case for mandatory vaccination with informed declination. AJN. 2008;108(2):32-4.

[27] Babcock H, Gemeinhart N, Jones M, Dunagan WC, Woeltje K. Mandatory influenza vaccination of health care workers: translating policy to practice. CID. 2010;50:259-64.

[28] Rakita R, Hagar B, Crome P, Lammert J. Mandatory influenza vaccination of healthcare workers: a 5-year study. ICHE. 2010;31(9):881-8.


Suxamethonium versus rocuronium in rapid sequence induction: Dispelling the common myths

Rapid sequence induction (RSI) is a technique used to facilitate endotracheal intubation in patients at high risk of aspiration and in those who require rapid securing of the airway. In Australia, RSI protocols in emergency departments usually dictate a predetermined dose of an induction agent and a neuromuscular blocker given in rapid succession. Suxamethonium, also known as succinylcholine, is a depolarising neuromuscular blocker (NMB) commonly used in RSI. Although it has a long history of use and is known for producing good intubating conditions in minimal time, suxamethonium has certain serious side effects and contraindications (which are beyond the scope of this article).

If no alternative NMB existed, the contraindications associated with suxamethonium would be irrelevant; however, a suitable alternative does exist. Rocuronium, a non-depolarising NMB introduced into Australia in 1996, has no known serious side effects or contraindications (excluding anaphylaxis). Unfortunately, many myths surrounding the properties of rocuronium have propagated through the anaesthesia and emergency medicine communities, leaving some clinicians hesitant to embrace this drug as a suitable alternative to suxamethonium for RSI. This essay aims to dispel a number of these myths by presenting the currently available evidence, allowing physicians to make informed clinical decisions that have the potential to significantly alter patient outcomes. It is not intended to provide a definitive answer to the choice of NMB in RSI, but rather to encourage further debate and discussion on this controversial topic under the guidance of evidence-based medicine.

One of the more noteworthy differences between these two agents is their duration of action. The paralysis induced by suxamethonium lasts five to ten minutes, while rocuronium has a duration of action of 30-90 minutes, depending on the dose used. Clinicians often cite the significantly shorter duration of action of suxamethonium as a major reason for choosing it. Indeed, some believe that using suxamethonium builds a ‘safety margin’ into the RSI protocol, on the assumption that the NMB will ‘wear off’ in time for the patient to resume spontaneous breathing in the event of a failed intubation. Benumof et al. (1997) [1] explored this concept by methodically analysing haemoglobin desaturation (SpO2) following administration of suxamethonium 1.0mg/kg in patients with a non-patent airway, and found that critical haemoglobin desaturation will occur before functional recovery (that is, return of spontaneous breathing).

In 2001, Heier et al. [2] conducted a study of twelve healthy volunteers aged 18 to 45 years, all pre-oxygenated to an end-tidal oxygen concentration >90% (after breathing a FiO2 of 1.0 for three minutes). Following the administration of thiopental and suxamethonium 1.0mg/kg, no assisted ventilation was provided and oxygen saturation was closely monitored. One third of the participants desaturated to SpO2 <80% (at which point they received assisted ventilation). As the authors noted, the participants were all young, healthy, slim individuals who received optimal pre-oxygenation, yet a significant proportion still suffered critical haemoglobin desaturation before spontaneous ventilation resumed. In a real-life scenario, particularly in the patient population requiring RSI, an even higher proportion would be expected to desaturate significantly, given their underlying illness and the limited time available for pre-oxygenation. Although one might argue that reducing the dose of suxamethonium would alter these results, Naguib et al. [3] found that, while reducing the dose from 1.0mg/kg to 0.6mg/kg did slightly reduce the incidence of SpO2 <90% (from 85% to 65%), it did not shorten the time to spontaneous diaphragmatic movement. The notion that the short duration of action of suxamethonium improves safety in RSI is therefore not supported, and it should not be relied upon to rescue a “cannot intubate, cannot ventilate” situation.

Having established that the difference in duration of action should not foster a false sense of safety in RSI, let us compare the two drugs’ effects on oxygen saturation if apnoea occurs following their administration. As a depolarising agent, suxamethonium causes muscle fasciculations after administration, whereas rocuronium, a non-depolarising agent, does not. It has long been questioned whether the fasciculations associated with suxamethonium alter the time to onset of haemoglobin desaturation if the airway cannot be secured promptly and prolonged apnoea ensues.

This concept was explored by Taha et al., [4] who divided study participants into three groups: lidocaine/fentanyl/rocuronium, lidocaine/fentanyl/suxamethonium and propofol/suxamethonium. Measuring the time to onset of haemoglobin desaturation (defined as SpO2 <95%), they found that both groups receiving suxamethonium desaturated significantly faster than the group receiving rocuronium. Comparing the two suxamethonium groups reveals a considerable difference: the lidocaine/fentanyl group had a longer time to desaturation than the propofol group. Since lidocaine and fentanyl are recognised to decrease (but not completely attenuate) the intensity of suxamethonium-induced fasciculations, these results suggest that the fasciculations associated with suxamethonium do hasten the onset of desaturation compared with rocuronium.

A more recent study by Tang et al. [5] provides further clarification. Overweight patients (BMI 25-30) undergoing elective surgery requiring RSI were given either suxamethonium 1.5mg/kg or rocuronium 0.9mg/kg, and no assisted ventilation was provided after induction until SpO2 fell below 92% (the interval designated the ‘Safe Apnoea Time’). This interval was measured together with the time required to return the patient to SpO2 >97% once assisted ventilation with a FiO2 of 1.0 was introduced. The authors concluded that suxamethonium not only shortened the ‘Safe Apnoea Time’ but also prolonged the recovery time to SpO2 >97% compared with rocuronium. In summary, current evidence suggests that suxamethonium produces a faster onset of haemoglobin desaturation than rocuronium, most likely because of the increased oxygen consumption associated with muscle fasciculations.

Since RSI is typically used where the patient is at high risk of aspiration, the underlying goal is to secure the airway in the minimum possible time. The time required for the NMB to provide adequate intubating conditions is therefore of great importance, with a shorter time translating into better patient outcomes, all other factors being equal. Suxamethonium has long been regarded as the ‘gold standard’ in this regard, yet recent evidence suggests that rocuronium’s poor reputation for speed of onset is primarily due to inadequate dosing. Recommended doses for suxamethonium are consistently stated as 1.0-1.5mg/kg, [6] whereas rocuronium doses have often been quoted as 0.6mg/kg, which, as will be established below, is inadequate for RSI.
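
To make the weight-based dosing concrete, the short Python sketch below computes absolute doses from the mg/kg figures quoted in this article (including the 1.2mg/kg rocuronium dose discussed below). The 80 kg patient weight is a hypothetical example for illustration only; this is a sketch, not a dosing reference.

```python
# Illustrative sketch only: absolute RSI neuromuscular-blocker doses from
# the per-kilogram figures quoted in this article. The 80 kg weight is a
# hypothetical example; drug and dose selection are clinical decisions.
DOSES_MG_PER_KG = {
    "suxamethonium (quoted RSI range, low)": 1.0,
    "suxamethonium (quoted RSI range, high)": 1.5,
    "rocuronium (commonly quoted, argued inadequate)": 0.6,
    "rocuronium (RSI dose supported by the Cochrane review)": 1.2,
}

def absolute_doses(weight_kg):
    """Return {drug description: dose in mg} for a given patient weight."""
    return {drug: round(per_kg * weight_kg, 1)
            for drug, per_kg in DOSES_MG_PER_KG.items()}

for drug, dose_mg in absolute_doses(80.0).items():
    print(f"{drug}: {dose_mg} mg")
```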

A prospective, randomised trial published by Sluga et al. [7] in 2005 compared intubating conditions following administration of either suxamethonium 1.0mg/kg or rocuronium 0.6mg/kg, and found significantly better conditions with suxamethonium at 60 seconds post-administration. Another study [8] examined the frequency of good and excellent intubating conditions with rocuronium (0.6mg/kg or 1.0mg/kg) or suxamethonium (1.0mg/kg). Of the rocuronium groups, the 1.0mg/kg group had a consistently greater frequency of both good and excellent intubating conditions at 50 seconds. While the rocuronium 1.0mg/kg and suxamethonium 1.0mg/kg groups had similar frequencies of acceptable intubating conditions, excellent conditions were more common in the suxamethonium group. A subsequent study [9] confirmed this finding, with intubating physicians reporting higher overall satisfaction with the paralysis provided by suxamethonium 1.7mg/kg than by rocuronium 1.0mg/kg. In other words, rocuronium 1.0mg/kg produces better intubating conditions than 0.6mg/kg, yet still does not match suxamethonium.

If no evidence were available comparing an even higher dose of rocuronium, the articles presented above would definitely strengthen the argument for using suxamethonium in RSI. However, a retrospective evaluation of RSI and intubation from an emergency department in Arizona, United States provides further compelling evidence. [10] The median doses used were suxamethonium 1.65mg/kg (n=113) and rocuronium 1.19mg/kg (n=214), and the authors reported “no difference in success rate for first intubation attempt or number of attempts regardless of the type of paralytic used or the dose administered.” Adding further weight, a 2008 Cochrane Review titled “Rocuronium versus succinylcholine for rapid sequence induction intubation” combined 37 studies for analysis and concluded that “no statistical difference in intubating conditions was found when [suxamethonium] was compared to 1.2mg/kg rocuronium.” [11] There is thus sufficient evidence that, with adequate dosing, rocuronium (1.2mg/kg) is comparable to suxamethonium in time to onset of adequate intubating conditions, so this argument cannot be used to select between the two agents for RSI.

In recent times, particularly in Australia, questions have been raised about a supposedly increased risk of anaphylaxis with rocuronium. Rose et al. [12] from Royal North Shore Hospital in Sydney addressed this question in 2001. They found that the incidence of anaphylaxis to any NMB is largely determined by its market share: as the market share (that is, number of uses) of rocuronium increases, cases of anaphylaxis also increase, but importantly only “in proportion to usage.” Of note, the authors state that rocuronium should still be considered a drug of “intermediate risk” of anaphylaxis, compared with suxamethonium, which is “high risk”. Although not addressed in that paper, additional factors have the potential to alter the incidence of anaphylaxis, such as geographical variation that may be related to the availability of pholcodine in cough syrup. [13]

Before the focus of this paper shifts to a novel agent with the potential to significantly alter the choice between suxamethonium and rocuronium in RSI, one pertinent issue remains to be discussed. One of the key cited properties of suxamethonium is its brief duration of action of only five to ten minutes, and many clinicians quote this as an important advantage, with the Cochrane Review itself stating that “succinylcholine was clinically superior as it has a shorter duration of action,” despite finding no statistical difference otherwise. [11]

The question that needs to be posed is whether this is truly an advantage in a NMB used for RSI. Patients who require emergency intubation often have a dire need for a secure airway; simply allowing the NMB to ‘wear off’ and the patient to resume spontaneous breathing does nothing to alter their situation. One must also consider that, even if the clinician is aware of the evidence against relying on suxamethonium’s short duration of action to rescue a failed intubation, the decision to initiate further measures (that is, progress to a surgical airway) may be delayed in such a scenario. If rocuronium, with its longer duration of action, were used, would clinicians feel more compelled to ‘act’ rather than ‘wait’ in this rare situation, knowing that the patient would remain paralysed? If rescue techniques such as a surgical airway were instigated, would the patient awakening (as suxamethonium wears off) be a hindrance? Although the use of rocuronium carries the risk that a patient will require prolonged measures to maintain oxygenation and ventilation in a “cannot intubate, can ventilate” scenario, paralysis would be reliably maintained if a surgical airway were required.

No discussion on the debate of suxamethonium versus rocuronium would be complete without mentioning a new drug that appears to hold great potential in this arena – sugammadex. A γ-cyclodextrin specifically designed to encapsulate rocuronium, it causes dissociation of the drug from the acetylcholine receptor and thereby reverses rocuronium-induced neuromuscular blockade. In addition to its action on rocuronium, sugammadex also appears to have some crossover effect on vecuronium, another steroidal non-depolarising NMB. While acetylcholinesterase inhibitors are often used to reverse NMBs, they act non-specifically on both muscarinic and nicotinic synapses and cause many unwanted side effects. If they are given before there is partial recovery (>10% twitch activity) from neuromuscular blockade, they do not shorten the time to 90% recovery and are thus ineffective against profound block.

Sugammadex was first administered to human volunteers in 2005 with minimal side effects. [14] It displayed great potential in achieving recovery from rocuronium-induced paralysis within a few minutes. Further trials were conducted, including by de Boer et al. [15] in the Netherlands. Neuromuscular blockade was induced with rocuronium 1.2mg/kg, and doses of sugammadex ranging from 2.0 to 16.0mg/kg were given. With recovery of the train-of-four ratio to 0.9 designated as the primary outcome, the authors found that successive increases in the dose of sugammadex decreased the time required to reverse profound blockade five minutes after administration of rocuronium, with sugammadex 16mg/kg giving a mean recovery time of only 1.9 minutes compared with a placebo recovery time of 122.1 minutes. In a review article, Mirakhur [16] further supported the use of high-dose sugammadex (16mg/kg) in situations requiring rapid reversal of neuromuscular blockade.

With an effective reversal agent for rocuronium presenting a possible alternative to suxamethonium in rapid sequence inductions, Lee et al. [17] closely examined the differences in time to termination of effect. They studied 110 patients randomised to either rocuronium 1.2mg/kg or suxamethonium 1mg/kg. At three minutes following administration of rocuronium, 16mg/kg sugammadex was given. The results of this study confirmed the potential of sugammadex and its possible future role in RSI, as the group given rocuronium and sugammadex (at three minutes) recovered significantly faster than those given suxamethonium (mean recovery time to first twitch 10%: 4.4 versus 7.1 minutes). The evidence therefore suggested that administering sugammadex 16mg/kg at three minutes after rocuronium 1.2mg/kg resulted in a shorter time to reversal of neuromuscular blockade compared to spontaneous recovery with suxamethonium. While sugammadex has certainly shown great potential, it remains an expensive drug and there still exist uncertainties regarding repeat dosing with rocuronium following reversal with sugammadex, [18] as well as the need to suitably educate and train staff on its appropriate use, as demonstrated by Bisschops et al. [19] It is also important to note that for sugammadex to be of use in situations where reversal of neuromuscular blockade is required, the full reversal dose (16mg/kg) must be readily available. Nonetheless, it appears as if sugammadex may revolutionise the use of rocuronium not only in RSI, but also for other forms of anaesthesia in the near future.
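The practical weight of these doses is easiest to appreciate as simple arithmetic. The sketch below (Python; the helper function and the 70kg example patient are illustrative assumptions, though the per-kilogram doses are those used in the trials cited above) computes the absolute drug quantities involved in an RSI with rocuronium and a full rescue reversal with sugammadex.

```python
def rsi_doses(weight_kg: float) -> dict:
    """Weight-based doses from the trials discussed above (illustrative
    sketch only -- not a clinical dosing reference)."""
    return {
        "rocuronium_mg": 1.2 * weight_kg,     # RSI dose: 1.2mg/kg
        "suxamethonium_mg": 1.0 * weight_kg,  # comparator dose: 1mg/kg
        "sugammadex_mg": 16.0 * weight_kg,    # full reversal dose: 16mg/kg
    }

# For a hypothetical 70kg patient: 84mg rocuronium, 70mg suxamethonium,
# and 1120mg sugammadex for full reversal.
print(rsi_doses(70))
```

The 1120mg of sugammadex required for a 70kg patient illustrates why the full reversal dose must be physically on hand before induction, and why cost remains a genuine barrier to routine use.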

As clinicians, we should strive to achieve the best patient outcomes possible. Without remaining abreast of the current literature, our exposure to new therapies will be limited and, ultimately, patients will not always be provided with the high level of medical care they desire and deserve. I urge all clinicians who are tasked with the difficult responsibility of establishing an emergency airway with RSI to consider rocuronium as a viable alternative to suxamethonium and to strive to understand the pros and cons associated with both agents, in order to ensure that an appropriate choice is made on the basis of solid evidence-based medicine.

Conflicts of interest

None declared.

Correspondence

S Davies: sjdav8@student.monash.edu

References

[1] Benumof JL, Dagg R, Benumof R. Critical haemoglobin desaturation will occur before return to an unparalysed state following 1 mg/kg intravenous succinylcholine. Anesthesiology. 1997; 87:979-82.

[2] Heier T, Feiner JR, Lin J, Brown R, Caldwell JE. Hemoglobin desaturation after succinylcholine-induced apnea. Anesthesiology. 2001; 94:754-9.

[3] Naguib M, Samarkandi AH, Abdullah K, Riad W, Alharby SW. Succinylcholine dosage and apnea-induced haemoglobin desaturation in patients. Anesthesiology. 2005; 102(1):35-40.

[4] Taha SK, El-Khatib MF, Baraka AS, Haidar YA, Abdallah FW, Zbeidy RA, Siddik-Sayyid SM. Effect of suxamethonium vs rocuronium on onset of oxygen saturation during apnoea following rapid sequence induction. Anaesthesia. 2010; 65:358-61.

[5] Tang L, Li S, Huang S, Ma H, Wang Z. Desaturation following rapid sequence induction using succinylcholine vs. rocuronium in overweight patients. Acta Anaesthesiol Scand. 2011; 55:203-8.

[6] El-Orbany M, Connolly LA. Rapid sequence induction and intubation: current controversy. Anesth Analg. 2010; 110(5):1318-24.

[7] Sluga M, Ummenhofer W, Studer W, Siegemund M, Marsch SC. Rocuronium versus succinylcholine for rapid sequence induction of anesthesia and endotracheal intubation: a prospective, randomized trial in emergent cases. Anesth Analg. 2005; 101:1356-61.

[8] McCourt KC, Salmela L, Mirakhur RK, Carroll M, Mäkinen MT, Kansanaho M, Kerr C, Roest GJ, Olkkola KT. Comparison of rocuronium and suxamethonium for use during rapid sequence induction of anaesthesia. Anaesthesia. 1998; 53:867-71.

[9] Laurin EG, Sakles JC, Panacek EA, Rantapaa AA, Redd J. A comparison of succinylcholine and rocuronium for rapid-sequence intubation of emergency department patients. Acad Emerg Med. 2000; 7:1362-9.

[10] Patanwala AE, Stahle SA, Sakles JC, Erstad BL. Comparison of Succinylcholine and Rocuronium for First-attempt Intubation Success in the Emergency Department. Acad Emerg Med. 2011; 18:11-14.

[11] Perry JJ, Lee JS, Sillberg VAH, Wells GA. Rocuronium versus succinylcholine for rapid sequence induction intubation. Cochrane Database Syst Rev. 2008:CD002788.

[12] Rose M, Fisher M. Rocuronium: high risk for anaphylaxis? Br J Anaesth. 2001; 86(5):678-82.

[13] Florvaag E, Johansson SGO. The pholcodine story. Immunol Allergy Clin North Am. 2009; 29:419-27.

[14] Gijsenbergh F, Ramael S, Houwing N, van Iersel T. First human exposure of Org 25969, a novel agent to reverse the action of rocuronium bromide. Anesthesiology. 2005; 103:695-703.

[15] De Boer HD, Driessen JJ, Marcus MA, Kerkkamp H, Heeringa M, Klimek M. Reversal of rocuronium-induced (1.2 mg/kg) profound neuromuscular block by sugammadex. Anesthesiology. 2007; 107:239-44.

[16] Mirakhur RK. Sugammadex in clinical practice. Anaesthesia. 2009; 64:45-54.

[17] Lee C, Jahr JS, Candiotti KA, Warriner B, Zornow MH, Naguib M. Reversal of profound neuromuscular block by sugammadex administered three minutes after rocuronium. Anesthesiology. 2009; 110:1020-5.

[18] Cammu G, de Kam PJ, De Graeve K, van den Heuvel M, Suy K, Morias K, Foubert L, Grobara P, Peeters P. Repeat dosing of rocuronium 1.2 mg/kg after reversal of neuromuscular block by sugammadex 4.0 mg/kg in anaesthetized healthy volunteers: a modelling-based pilot study. Br J Anaesth. 2010; 105(4):487-92.

[19] Bisschops MM, Holleman C, Huitink JM. Can sugammadex save a patient in a simulated ‘cannot intubate, cannot ventilate’ situation? Anaesthesia. 2010; 65:936-41.


Ear disease in Indigenous Australians: A literature review

Introduction

The mortality gap between Indigenous and non-Indigenous Australians is wider than in any other Organisation for Economic Co-operation and Development nation with a disadvantaged Indigenous population, including Canada, New Zealand, and the USA. [1] This gap reached a stark peak of seventeen years in 1996-2001. [2] Otitis media, one of the most common diseases of childhood, affects 80% of Australian children by the age of three years. [3]

Whilst ear diseases and their complications are now rarely a direct cause of mortality, especially since the advent of antimicrobial therapy and the subsequent reduction in extracranial and intracranial complications, [4] the statistics of ear disease nevertheless illustrate the unacceptable disparity between the health status of these two populations cohabiting a developed nation, and are an indictment of the poor living conditions in Indigenous communities. [5] Moreover, the high prevalence of ear disease among Aboriginal and Torres Strait Islanders is associated with secondary complications that represent significant morbidity within this population, most notably conductive hearing loss, which affects up to 67% of school-age Australian Indigenous children. [6]

This article aims to illustrate the urgent need for the development of appropriate strategies and programs, founded on evidence-based research and integrating cultural consideration for, and design input from, Indigenous communities, in order to reduce the medical and social burden of ear disease among Indigenous Australians.

Methodology

This review covered recent literature concerning studies of ear disease in the Australian Indigenous population. Medical and social science databases were searched for recent publications from 2000-2011. Articles were retrieved from The Cochrane Library, PubMed, Google Scholar and BMJ Journals Online. Search terms aimed to capture a broad range of relevant studies. Medical textbooks available at the medical libraries of Notre Dame University (Western Australia) and The University of Western Australia were also used. A comprehensive search was also made of internet resources; these sources included the websites of The Australian Department of Health and Ageing, the World Health Organisation, and websites of specific initiatives targeting ear disease in the Indigenous Australian population.

Peer reviewed scientific papers were excluded from this review if ear disease pertaining to Indigenous Australians was not a major focus of the paper. Studies referred to in this review vary widely in type by virtue of the multi-faceted topic addressed and include both qualitative and quantitative studies. For the qualitative studies, those that contributed new information or covered areas that had not been fully explored in quantitative studies were included. Quantitative studies with weaknesses arising from small sample size, few factors measured or weak data analysis were included only when they provided insights not available from more rigorous studies.

Overview and epidemiology

The percentage of Australian Indigenous children suffering otitis media and its complications is disproportionately high; up to 73% by the age of twelve months. [7] In the Australian primary healthcare setting, Aboriginal and Torres Strait Islander children are five times more likely to be diagnosed with severe otitis media than non-Indigenous children. [8]

Chronic suppurative otitis media (CSOM) is uncommon in developed societies and is generally perceived as being a disease of poverty. The World Health Organisation (WHO) states that a prevalence of CSOM greater than or equal to 4% indicates a massive public health problem warranting urgent attention in targeted populations. [9] CSOM affects Indigenous Australian children at up to ten times this threshold (a prevalence approaching 40%), [5] and at fifteen times the proportion of non-Indigenous Australian children, [8] reflecting an unacceptable disparity in the prevalence and severity of ear disease and its complications between Indigenous and non-Indigenous Australians.

Comparisons of the burden of mortality and the loss of disability-adjusted life years (DALYs) have been attempted between otitis media (all types grouped together) and illnesses of importance in developing countries. These comparisons show that the burden of otitis media is substantially greater than that of trachoma, and comparable with that of polio, [9] with permanent hearing loss accounting for a large proportion of this DALY burden.

Whilst there are some general indications that the health of Indigenous Australian children has improved over the past 30 years, such as increased birth weight and lower infant mortality, there is evidence to suggest that morbidities associated with infections such as respiratory infections and otitis media have not changed. [10-12]

Middle Ear disease: Pathophysiology and host risk factors

The disease process of otitis media is a complex and dynamic continuum. [10] Hence there is inconsistency throughout the medical community regarding definitions and diagnostic criteria for this disease, and controversy regarding what constitutes “gold standard” treatment. [7,13] In order to form a discussion about the high prevalence of middle ear diseases in Indigenous Australians, one must first establish an understanding of their aetiology and pathogenicity. Host-related risk factors for otitis media include young age, high rates of nasopharyngeal colonisation with potentially pathogenic bacteria, eustachian tube dysfunction and palato-facial abnormalities, lack of passive immunity and acquisition of respiratory tract infections in the early stages of life. [7,9,10,14,15]

Streptococcus pneumoniae, Haemophilus influenzae and Moraxella catarrhalis are the recognised major pathogens of otitis media. However, this disease has a complex, polymicrobial aetiology, with at least fifteen other genera having been identified in middle ear effusions. [11] The organisms involved in CSOM are predominantly opportunistic organisms, especially Pseudomonas aeruginosa, which is associated with approximately 20-50% of CSOM in Aboriginal and Torres Strait Islander as well as non-Indigenous children. [10]

Relatively new findings in otitis media pathogenicity have included the identification of Alloiococcus otitidis and human metapneumovirus. [13] A. otitidis in particular, a slow-growing aerobic gram positive bacterium, has been identified in as many as 20-30% of middle ear effusions in children with CSOM. [13,16,17] The importance of interaction between viruses and bacteria (with the major identified viruses being adenovirus, rhinovirus, polyomavirus and more recently human metapneumovirus) is well recognised in the pathogenicity of otitis media. [13,18,19] High identification rates of viral-bacterial co-infection found in asymptomatic children with otitis media (42% Indigenous and 32% non-Indigenous children) underscore the potential value in preventative strategies targeted at specific pathogens. [19] The role of biofilms in otitis media pathogenesis has been of great interest since a fluorescence in-situ hybridisation study detected biofilms in 92% of middle ear mucosal biopsies from 26 children with recurrent otitis media or otitis media with effusion. [20] This suggested an explanation for the persistence and recalcitrance of otitis media, as bacteria growing in biofilm are more resistant to antibiotics than planktonic cells. [20]

However, translating all this knowledge into better health outcomes – by means of individual clinical treatment and community preventative strategies – is not straightforward. A more thorough understanding of the polymicrobial pathogenesis is needed if more effective therapies for otitis media are to be achieved.

Some research has explored the possibility of a genetic predisposition to otitis media, based on its high prevalence across several Indigenous populations around the world, including the Indigenous Australian, Inuit, Maori and Native American peoples. [10] However, whilst the suggestion that genetic factors may play a role in otitis media susceptibility is a worthwhile area for further research, its emphasis should not overshadow the significance of poverty, which exists throughout colonised Indigenous populations worldwide and is a major public health risk factor. It should be remembered that socioeconomic status is a major determinant of disparities in Indigenous health, irrespective of genetics or ethnicity.

Environmental risk factors

The environmental risk factors for otitis media are well recognised and extensively documented. They include season, inadequate housing, overcrowding, poor hygiene, lack of breastfeeding, pacifier use, poor nutrition, exposure to cigarette or wood-burning smoke, poverty and inadequate or unavailable health care. [5,7,9,10,21]

Several recent studies have examined the impact of overcrowding and poor housing conditions on the health of Indigenous children, with a particular focus on upper respiratory tract infections and ear disease. [22-24] The results of these studies reinforced the belief that elements of the household and community environment are important underlying determinants of common childhood conditions, which impair child growth and development and contribute to the risk of chronic disease and to the seventeen-year gap in life expectancy between Aboriginal and Torres Strait Islander people and non-Indigenous Australians. [22,23] Interestingly, one study’s findings identified the potential for interventions targeting factors that negatively impact the psychosocial status of carers, as well as health-related behaviour, including maintenance of household and personal hygiene. [22]

Raised levels of stress and poor mental health associated with the psycho-spatial elements of overcrowded living (that is, increased interpersonal contact, lack of privacy, loss of control, high demand, noise, lack of sleep) may therefore be considered as having a negative impact on the health of dwellers, especially those whose health largely depends on care from others, such as the elderly and young children, who are more susceptible to disease. Urgent attention is needed to improve housing and access to clean running water, nutrition and quality of care, and to give communities greater control over these improvements.

Exposure to environmental smoke is another significant, yet potentially preventable, risk factor for respiratory infections and otitis media in Indigenous children. [25,26] Of all the environmental risk factors for otitis media mentioned above, environmental smoke exposure is arguably the most readily amenable to modification. A recent randomised controlled trial tested the efficacy of a family-centred tobacco control program aimed at reducing the incidence of respiratory disease among Indigenous children in Australia and New Zealand. It found that interventions encouraging smoking cessation and reducing the exposure of Indigenous children to environmental smoke had the potential for significant benefit, especially when the intervention designs included culturally sound, intensive family-centred programs that emphasised capacity-building of the Indigenous community. [25] Such studies testify to the high levels of interest, cooperativeness, pro-activeness and compliance that Indigenous communities can demonstrate towards public health interventions, provided the study design is culturally appropriate and Indigenous people are meaningfully engaged in preventative health efforts.

Preventative strategies

The advent of the 7-valent pneumococcal conjugate vaccine has seen a substantial decrease in invasive pneumococcal disease. However, changing patterns of antibiotic resistance and pneumococcal serotype replacement have been documented since the introduction of the vaccine, and large randomised controlled trials have shown its reduction of risk of acute otitis media and tympanic membrane perforation to be minimal. [13,27] One retrospective cohort study’s data suggested that the pneumococcal immunisation program may be unexpectedly increasing the risk of acute lower respiratory infection (ALRI) requiring hospitalisation among vaccinated children, especially after administration of the 23vPPV booster at eighteen months of age. [28] These findings warrant re-evaluation of the pneumococcal immunisation program and further research into alternative medical prevention strategies.

Swimming pools in remote communities have been associated with reduced prevalence of tympanic membrane perforations (as well as pyoderma), indicating the long-term benefits associated with a reduction in chronic disease burden and improved educational and social outcomes. [6] No outbreaks of infectious disease have occurred in the swimming pool programmes to date, and water quality is regularly monitored according to government regulations. On the condition that adequate funding continues to maintain high safety and environmental standards of community swimming pools, their net effect on community health will remain positive and worthwhile.

Treatment: Current guidelines and practices, potential future treatments

Over the last ten years there has been a general tendency to reduce immediate antibiotic treatment for otitis media in children aged over two years, with the “watchful waiting” approach having become more customary among primary care practitioners. [7] The current therapeutic guidelines note that antibiotic therapy provides only modest benefit for otitis media, with sixteen children requiring treatment at first presentation to prevent one child experiencing pain at two to seven days. [29] Routine antibiotics are recommended only for infants less than six months of age and for all Aboriginal and Torres Strait Islander children at the initial presentation of acute otitis media. [8] Current guidelines acknowledge that suppurative complications of otitis media are common among Indigenous Australians; hence specific therapeutic guidelines apply to these patients. [30] For those patients in whom antibiotics are indicated, a five-day course of twice-daily amoxicillin is the regimen of choice. Combined therapy with a seven-day course of higher-dose amoxicillin and clavulanate is recommended for patients with a poor response to amoxicillin or from populations at high risk of amoxicillin-resistant Streptococcus pneumoniae. For CSOM, topical ciprofloxacin drops are now approved for use in Aboriginal and Torres Strait Islander children, after a 2003 study supported their use in the treatment of CSOM. [31,32]
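The branching guideline logic described above can be summarised schematically. The sketch below (Python) is a deliberately simplified illustration of the regimens named in this paragraph, not a substitute for the published therapeutic guidelines; in particular, the handling of children who fall outside the groups the paragraph singles out is an assumption.

```python
def aom_antibiotic_plan(age_months: float, indigenous: bool,
                        poor_amoxicillin_response: bool = False,
                        high_resistance_risk: bool = False) -> str:
    """Simplified schematic of the guideline logic described above
    (illustrative only; consult the current therapeutic guidelines)."""
    if age_months < 6 or indigenous:
        # Routine antibiotics at the initial presentation of acute otitis media.
        if poor_amoxicillin_response or high_resistance_risk:
            return "higher-dose amoxicillin plus clavulanate, 7-day course"
        return "twice-daily amoxicillin, 5-day course"
    if age_months > 24:
        # 'Watchful waiting' has become customary for children over two years.
        return "watchful waiting with review"
    # Not specified in the paragraph above; assumed to rest on clinical judgement.
    return "individual clinical assessment"
```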

Treatment failure with antibiotics has been observed in some Aboriginal and Torres Strait Islander communities due to poor adherence to the twice-daily regimen of five and seven day courses of amoxicillin. [33] The reasons for non-adherence remain unclear. They may relate to language barriers (misinterpretation or non-comprehension of instructions regarding antibiotic use), storage (lacking a home fridge in which to keep the antibiotics), shared care of the child patient (rather than one guardian) or remoteness (reduced access to healthcare facilities and reduced likelihood of follow-up). Treatment failure with antibiotics has also been noted in cases of optimal compliance in Indigenous communities, indicating that poor clinical outcomes may also be due to organism resistance and/or pathogenic mechanisms. [11]

A recent study compared the clinical effectiveness of a single-dose azithromycin treatment with the recommended seven day course of amoxicillin among Indigenous children with acute otitis media in rural and remote communities in the Northern Territory. [33] Whilst azithromycin was found to be more effective at eradicating otitis media pathogens than amoxicillin, azithromycin treatment was associated with an increase in carriage of azithromycin-resistant Streptococcus pneumoniae. Another recent study investigated the antimicrobial susceptibility of Moraxella catarrhalis isolated from a cohort of children with otitis media in the Kalgoorlie-Boulder region of Western Australia. [34] It was found that a large proportion of strains were resistant to ampicillin and/or co-trimoxazole. Findings from studies such as these indicate that the current therapeutic guidelines, which recommend amoxicillin as the antibiotic of choice for treatment of otitis media, may require revision.

Overall, further research is needed to determine which antibiotics best eradicate otitis media pathogens and reduce bacterial load in the nasopharynx in order to achieve better clinical outcomes. Recent studies indicate that currently recommended antibiotics may need to be reviewed in light of increasing rates of resistant organisms and emerging evidence of new organisms.

Social ramifications associated with ear disease

There is substantial evidence to demonstrate that ear disease has a significant negative impact on the developmental future of Aboriginal and Torres Strait Islander children. [35] Children who are found to have early-onset otitis media (under twelve months) are at high risk of developing long-term speech and language problems secondary to conductive hearing loss, with the specific areas of cognition thought to be affected being auditory processing, attention, behaviour, speech and language. [36] Between 10% and 67% of Indigenous Australian school age children have perforated tympanic membranes, and 14% to 67% have some degree of hearing loss. [37]

Sub-optimal hearing can be a serious handicap for Indigenous children who begin school with delayed oral skills, especially if English is not their first language. Learning the phonetics and grammar of a second language with the unrecognised disability of impaired hearing renders the classroom experience a difficult and unpleasant one for the student, resulting in reduced concentration and increased distractibility, boredom and non-attendance. Truancy predisposes to anti-social behaviour, especially among adolescents, who by this age tend to no longer have infective ear disease but do have established permanent hearing loss. [38] Poor engagement in education and employment, alcohol-fuelled interpersonal violence, domestic violence, and communication difficulties with police and in court have all been linked to the disadvantage of hearing loss and the eventuation of becoming involved in the criminal justice system. [39]

In the Northern Territory, where the Indigenous population accounts for only 30% of the general population, 82% of the 1100 inmates in Northern Territory correctional facilities in 2010 were found to be Aboriginal or Torres Strait Islander. [40] Two studies conducted within the past two years investigated the prevalence of hearing loss among inmates in Northern Territory correctional facilities, finding that more than 90% of Australian Indigenous inmates had a significant hearing loss of >25dB. [39] A third study, in a Northern Territory youth detention centre, demonstrated that as many as 90% of Australian Indigenous youth in detention may have hearing loss, [41] whilst yet another study found that almost half the female Indigenous inmates at a Western Australian prison had significant hearing loss, almost ten-fold the rate of non-Indigenous inmates. [37]

The Northern Territory study of adult inmates also showed a comparatively low prevalence of hearing loss among Indigenous persons who were not imprisoned (33% compared with 94% of those imprisoned), [39] demonstrating a strong correlation between hearing loss and the over-representation of Indigenous people in Australian correctional facilities. Although this area warrants further research, the data from these studies suggest that ear disease and hearing loss may have played a role in many Aboriginal and Torres Strait Islander people becoming inmates.

Changes and developments for the future

As we have discussed throughout this article, the unacceptably high burden of ear disease among Indigenous Australians is due to a myriad of medical, biological, socio-cultural, pedagogical, environmental, logistical and political factors. All of these contributing factors must be addressed if a reduction in the morbidity and social ramifications associated with ear disease among Indigenous Australians is to be achieved. The great dichotomy in health service provision could eventually be eradicated if there is the political will and sufficient, specific funding.

Addressing these factors will require the integration of multi- disciplinary efforts from medical researchers, health care practitioners, educational professionals, correctional facilities, politicians, and most importantly the members of Indigenous communities. The latter’s active involvement in, and responsibility for, community education, prevention and medical management of ear disease are imperative to achievement of these goals.

The Government’s response to a recent federal Senate inquiry into Indigenous ear health included $47.7 million over four years to support changes to the Australian Government’s Hearing Services Program (HSP). This was in addition to other existing funds available to eligible members of the hearing-impaired community, such as the More Support for Students with Disabilities Initiative and the Better Start for Children With a Disability intervention. [42] Whilst this addition to the federal budget may be seen as a positive step in the Government’s agenda to ameliorate the burden of ear disease among the Indigenous Australian population, it will serve little purpose unless the funding is sustainably invested and effectively implemented along the appropriate avenues, which should:

1. Specifically target and reduce identified risk factors of otitis media.

2. Support the implementation of effective, evidence-based, public health prevention strategies, and encourage community control over improvements to education, employment opportunities, housing infrastructure and primary healthcare services.

3. Support constructive and practical multidisciplinary research into the areas of pathogenicity, diagnosis, treatment, vaccines, risk factors and prevention strategies of otitis media.

4. Support and encourage training and employment for healthcare and educational professionals in regional and remote areas. These professionals include doctors, audiologists, speech pathologists, and teachers, and all of these professions should offer programs that increase the number of practising Aboriginal and Torres Strait Islander clinicians and teachers.

5. Adequately fund ear disease prevention and medical treatment programs, including screening programs, so that they may expand, increase in their number and their efficacy. Such services should concentrate on prevention education, accurate diagnosis, antibiotic treatment, surgical intervention (where applicable) and scheduled follow-up of affected children. An exemplary program is Queensland’s “Deadly Ears” program. [43]

6. Support the needs of students and inmates with established hearing loss in the educational and correctional environments, for example, through provision of multidisciplinary healthcare services and the use of sound field systems with wireless infrared technology.

7. Support community and family education regarding the effects of hearing loss on speech, language and education.

All of these objectives should be fulfilled by cost-effective, sustainable, culturally-sensitive means. It is of paramount importance that these objectives be well received by, and include substantial input from, Indigenous members of the community. Successful implementation of these objectives at the grass-roots level (thus avoiding the so-called “trickle-down” effect) will require not only substantially increased resources, but also the involvement of Indigenous community members in intervention design and delivery.

Conclusion

Whilst there remains a continuous need for valuable research in the area of ear disease, it appears that failure to apply existing knowledge is currently more of a problem than a dearth of knowledge. The design, funding and implementation of prevention strategies, community education, medical services and programs, and modifications to educational and correctional settings should be the current priorities in the national agenda addressing the burden of ear disease among Aboriginal and Torres Strait Islander people.

Acknowledgements

Thank you to Dr Matthew Timmins and Dr Greg Hill for providing feedback on this review.

Conflicts of interest

None declared.

Correspondence

S Hill: shillyrat@hotmail.com

 


The future of personalised cancer therapy, today

With the human genome sequenced a decade ago and the concurrent development of genomics, pharmacogenetics and proteomics, the field of personalised cancer treatment appears to be a maturing reality. It is recognised that the days of ‘one-size-fits-all’ and ‘trial and error’ cancer treatment are numbered, and such conventional approaches will be refined. The rationale behind personalised treatment is to target the genomic aberrations driving tumour development while reducing drug toxicity due to altered drug metabolism encoded by the patients’ genome. That said, a number of key challenges, both scientific and non-scientific, must be overcome if we are to fully exploit knowledge of cancer genomics to develop targeted therapeutics and informative biomarkers. The progress of research has yet to be translated to substantial clinical benefits, with the exception of a handful of drugs (tamoxifen, imatinib, trastuzumab). It is only recently that new targeted drugs have been integrated into the clinical armamentarium. So the question remains: Will there be a day when doctors no longer make treatment choices based on population-based statistics but rather on the specific characteristics of individuals and their tumours?

Introduction

In excess of 100,000 new cases of cancer were diagnosed in Australia in 2010, and the impact of cancer care on patients, their carers and Australian society is hard to ignore. Cancer care itself consumes $3.8 billion per year in Australia, constituting close to one-tenth of the annual health budget. [1] As such, alterations to our approach to cancer care will have wide-spread impacts on the health of individuals as well as on our economy. The first ‘golden era’ of cancer treatment began in the 1940s, with the discovery of the effectiveness of the alkylating agent nitrogen mustard against non-Hodgkin’s lymphoma. [2] Yet the landmark paper demonstrating that cancer development requires more than one gene mutation was published only 25 years ago. [3] With the sequencing of the human genome, [4] numerous genes have been implicated in the development of cancer. Data from The Cancer Genome Atlas (TCGA) [5] and the International Cancer Genome Consortium (ICGC) [6] reveal that even within a cancer subtype, the mutations driving oncogenesis are diverse.

The more we learn about the molecular basis of carcinogenesis, the more the traditional paradigm of chemotherapy ‘cocktails’ classified by histomorphological features appears inadequate. In many instances, this classification system correlates poorly with treatment response, prognosis and clinical outcome. Patients within a given diagnostic category receive the same treatment despite biological heterogeneity, meaning that some with aggressive disease may be undertreated, and some with indolent disease may be overtreated. In addition, these generalised cytotoxic drugs have many side effects, low specificity and poor delivery of drug concentrations to tumours, and face the development of resistance, which is an almost universal feature of cancer cells.

In theory, personalised treatment involves targeting the genomic aberrations driving tumour development while reducing drug toxicity due to altered drug metabolism encoded by the patient’s genome. The outgrowth of innovations in cancer biotechnology and computational science has enabled the interrogation of the cancer genome and examination of variation in germline DNA. Yet there remain many unanswered questions about the efficacy of personalised treatment and its applicability in clinical practice, which this review will address. The transition from morphology-based to a genetics-based taxonomy of cancer is an alluring revolution, but not without its challenges.

This article aims to outline the current methods in molecular profiling, explore the range of biomarkers available, examine the application of biomarkers in cancers common in Australia, such as melanoma and lung cancer, and investigate the implications and limitations of personalised medicine in a 21st century context.

Genetic profiling of the cancer genome

We now know that individual tumour heterogeneity results from the gradual acquisition of genetic mutations and epigenetic alterations (changes in DNA expression that occur without alterations in DNA sequence). [7,8] Chromosomal deletions, rearrangements, and gene mutations are selected for during tumour development. These defects, known as ‘driver’ mutations, ultimately modify protein signalling networks and create a survival advantage for the tumour cell. [8-10] As such, pathway components vary widely among individuals, leading to different genetic defects in individuals with the same type of cancer.

Such heterogeneity necessitates the push for a complete catalogue of genetic perturbations involved in cancer. This need for large-scale analysis of gene expression has been realised by current high-throughput technologies such as DNA array technology. [11,12] Typically, a DNA array comprises multiple rows of complementary DNA (cDNA) samples lined up in dots on a small silicon chip. Today, arrays for gene expression profiling can accommodate over 30,000 cDNA samples. [13] Pattern recognition software and clustering algorithms promote the classification of tumour tissue specimens with similar repertoires of expressed genes. This has led to an explosion of genome-wide association studies (GWAS), which have identified new chromosomal regions and DNA variants. This information has been used to develop multiplexed tests that hunt for a range of possible mutations in an individual’s cancer, to assist clinical decision-making. The HapMap aims to identify the millions of single nucleotide polymorphisms (SNPs), single nucleotide differences in the DNA sequence that may confer individual differences in susceptibility to disease. The HapMap has identified low-risk genes for breast, prostate and colon cancers. [14] TCGA and ICGC have begun cataloguing significant mutation events in common cancers. [5,6] OncoMap provides such an example, where alterations in multiple genes are screened by mass spectrometry. [15]
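To make the clustering step concrete, the sketch below (Python with NumPy and SciPy; the expression matrix is randomly generated stand-in data, not real microarray output) groups tumour samples by the similarity of their gene-expression profiles, which is conceptually what the pattern recognition software applied to DNA array data does.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Stand-in data: 20 tumour samples x 500 genes of expression values.
rng = np.random.default_rng(0)
expression = rng.normal(size=(20, 500))

# Hierarchical clustering of samples by correlation of their expression
# profiles, then partitioning the tree into three putative tumour classes.
tree = linkage(expression, method="average", metric="correlation")
labels = fcluster(tree, t=3, criterion="maxclust")
print(labels)  # cluster assignment for each tumour sample
```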

The reproducibility and accuracy of microarray data need to be treated cautiously. ‘Noise’ from analysing thousands of genes can lead to false predictions and, as such, it is difficult to compare results across microarray studies. In addition, cancer cells alter their gene expression when removed from their native environment, potentially yielding misleading results. The clinical utility of microarrays is difficult to determine, given the variability of the assays themselves as well as the variability between patients and between the laboratories performing the analyses.
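The scale of this ‘noise’ problem is easy to quantify. A minimal sketch (Python; the 30,000-gene figure echoes the array capacity mentioned above, and the 5% significance threshold is a conventional assumption) shows why uncorrected testing across an entire array inevitably yields false predictions.

```python
# Expected false positives when every gene on an array is tested at a
# fixed threshold. Both figures are illustrative assumptions: 30,000
# genes (the array capacity cited above) and a conventional alpha of 0.05.
n_genes = 30_000
alpha = 0.05

print(n_genes * alpha)  # 1500.0 genes 'significant' by chance alone
print(alpha / n_genes)  # ~1.7e-06: the Bonferroni-corrected per-gene threshold
```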

Types of cancer biomarkers

This shift from entirely empirical cancer treatment to stratified and eventually personalised approaches requires the discovery of biomarkers and the development of assays to detect them (Table 1). With recent technological advances in molecular biology, the range of cancer biomarkers has expanded, which will aid the implementation of effective therapies into the clinical armamentarium (Figure 1). However, during the past two decades, fewer than twelve biomarker assays have been approved by the US Food and Drug Administration (FDA) for monitoring response, surveillance or the recurrence of cancer. [16]

Early detection biomarkers

Most current methods of early cancer detection, such as mammography or cervical cytology, are based on anatomic changes in tissues or morphologic changes in cells. Various molecular markers, such as protein or genetic changes, have been proposed for early cancer detection. For example, PSA is secreted by prostate tissue and has been approved for the clinical management of prostate cancer. [17] CA-125 is recognised as an ovarian cancer-specific protein. [18]

Diagnostic biomarkers

Examples of commercial biomarker tests include the Oncotype DX biomarker test and MammaPrint test for breast cancer. Oncotype DX is designed for women newly diagnosed with oestrogen-receptor (ER) positive breast cancer that has not spread to lymph nodes. The test calculates a ‘recurrence score’ based on the expression of 21 genes. Not covered by Medicare, the test costs US$4,075 per patient. One study found that this test persuaded oncologists to alter their treatment recommendations for 30% of their patients. [19]
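The details of the commercial algorithm are proprietary, but the general shape of a multi-gene ‘recurrence score’ can be sketched as a weighted sum of expression values mapped onto risk bands. The example below (Python) is entirely hypothetical: the gene names, weights and cut-offs are invented for illustration and bear no relation to the actual Oncotype DX calculation.

```python
# Hypothetical multi-gene 'recurrence score': a weighted sum of expression
# values mapped to risk bands. All names, weights and cut-offs are invented.
WEIGHTS = {"GENE_A": 0.47, "GENE_B": -0.34, "GENE_C": 1.04}

def recurrence_score(expression: dict) -> str:
    score = sum(WEIGHTS[gene] * expression[gene] for gene in WEIGHTS)
    if score < 18:
        return f"low risk (score {score:.1f})"
    if score < 31:
        return f"intermediate risk (score {score:.1f})"
    return f"high risk (score {score:.1f})"

# Example: expression values for the three invented genes.
print(recurrence_score({"GENE_A": 12.0, "GENE_B": 8.5, "GENE_C": 20.1}))
```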

Prognostic biomarkers

The tumour, node, metastasis (TNM)-staging system is the standard for prediction of survival in most solid tumours based on clinical, gross and pathologic criteria. Additional information can be provided with prognostic biomarkers, which indicate the likelihood that the tumour will return in the absence of any further treatment. For example, for patients with metastatic nonseminomatous germ cell tumours, serum-based biomarkers include α-fetoprotein, human chorionic gonadotropin, and lactate dehydrogenase.

Predictive biomarkers

Biomarkers can also prospectively predict response (or lack of response) to specific therapies. The widespread clinical usage of ER and progesterone receptors (PR) for treatment with tamoxifen, and human epidermal growth factor receptor-2 (HER-2) for treatment with trastuzumab, is evidence of the usefulness of predictive biomarkers. Epidermal growth factor receptor (EGFR) is overexpressed in multiple cancer types. EGFR mutation is a strong predictor of a favourable outcome if treated with EGFR tyrosine kinase inhibitors such as gefitinib in non-small cell lung carcinoma (NSCLC) and anti-EGFR monoclonal antibodies such as cetuximab or panitumumab in colorectal cancer. [20] Conversely, the same cancers with KRAS mutations are associated with primary resistance to EGFR tyrosine kinase inhibitors. [21,22] This demonstrates that biomarkers, such as KRAS mutation status, can predict which patients may or may not benefit from anti-EGFR therapy (Figure 2).
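Reduced to its essentials, this predictive-biomarker logic is a simple decision rule. The sketch below (Python) is a deliberately simplified rendering of the colorectal cancer example in this paragraph, not a treatment protocol.

```python
def anti_egfr_recommendation(kras_mutant: bool, egfr_overexpressed: bool) -> str:
    """Simplified predictive-biomarker rule from the paragraph above:
    KRAS mutation predicts primary resistance to anti-EGFR therapy."""
    if kras_mutant:
        return "anti-EGFR therapy not indicated (predicted resistance)"
    if egfr_overexpressed:
        return "consider anti-EGFR therapy (e.g. cetuximab or panitumumab)"
    return "biomarker support for anti-EGFR therapy lacking"

print(anti_egfr_recommendation(kras_mutant=True, egfr_overexpressed=True))
```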

Pharmacodynamic biomarkers

Determining the correct dosage for the majority of traditional chemotherapeutic agents presents a challenge because most drugs have a narrow therapeutic index. Pharmacodynamic biomarkers, in theory, can be used to guide dose selection. The magnitude of BCR–ABL kinase activity inhibition was found to correlate with clinical outcome, possibly justifying the personalised selection of drug dose. [23]

The role of biomarkers in common cancers

Biomarkers currently have a role in the prediction or diagnosis of a number of common cancers (Table 2).

Breast Cancer

Breast cancer can be used to illustrate the contribution of molecular diagnostics to personalised treatment. Discovered in the 1970s, tamoxifen was the first targeted cancer therapy against the oestrogen signalling pathway. [8] Approximately three quarters of breast cancer tumours express hormone receptors for oestrogen and/or progesterone. Modulating either the hormone ligand or the receptor has been shown to be effective in treating hormone receptor-positive breast cancer for over a century. Although quite effective for a subset of patients, this strategy has adverse partial oestrogenic effects in the uterus and vascular system, resulting in an increased risk of endometrial cancer and thromboembolism. [9,10] Targeting ligand production instead of the ER itself was hypothesised to be more effective with fewer side effects. Recent data suggest that the use of specific aromatase inhibitors (anastrozole, letrozole and exemestane), which block the formation of endogenous oestrogen, may be superior in both the adjuvant [24] and advanced disease settings. [25]

Lung Cancer

Lung cancer is the most common cause of cancer-related mortality in both genders in Australia. [26] Many investigators are using panels of serum biomarkers in an attempt to increase sensitivity of prediction. Numerous potential DNA biomarkers, such as the overactivation of oncogenes, including K-ras, myc, EGFR and Met, or the inactivation of tumour suppressor genes, including p53 and Rb, are being investigated. Gefitinib was found to be superior to carboplatin–paclitaxel in EGFR-mutant non-small cell lung cancer cases [20] and to improve progression-free survival, with acceptable toxicity, when compared with standard chemotherapy. [27]

Melanoma

Australia has the highest skin cancer incidence in the world. [28] Approximately two in three Australians will be diagnosed with skin cancer before the age of 70. [29] Currently, the diagnosis and prognosis of primary melanoma is based on histopathologic and clinical factors. In the genomic age, the number of modalities for identifying and subclassifying melanoma is rapidly increasing. These include immunohistochemistry of tissue sections and tissue microarrays and molecular analysis using RT-PCR, which can detect relevant multidrug resistance-associated protein (MRP) gene expression and characterisation of germ-line mutations. [30] It is now known that most malignant melanomas have a V600E BRAF mutation. [31] Treatment of metastatic melanoma with PLX4032 resulted in complete or partial tumour regression in the majority of patients. Responses were observed at all sites of disease, including the bone, liver, and small bowel. [32]

Leukaemia

Leukaemia has progressed from being seen merely as a disease of the blood to one that consists of 38 different subtypes. [33] Historically a fatal disease, chronic myeloid leukaemia (CML) has been redefined by the presence of the Philadelphia chromosome. [34] Imatinib, a tyrosine kinase inhibitor first trialled in patients in 1998, has proven so effective that patients with CML now have mortality rates comparable to those of the general population. [35]

Colon Cancer

Cetuximab was the first anti-EGFR monoclonal antibody approved in the US for the treatment of colorectal cancer, and the first agent with proven clinical efficacy in overcoming resistance to irinotecan, a topoisomerase I inhibitor. [22] In 2004, bevacizumab was approved for use in the first-line treatment of metastatic colorectal cancer in combination with 5-fluorouracil-based chemotherapy. Extensive investigation since that time has sought to define bevacizumab’s role in different chemotherapy combinations and in early stage disease. [36]

Lymphoma

Another monoclonal antibody, rituximab, is an anti-human CD20 antibody. Rituximab alone has been used as the first-line therapy in patients with indolent lymphoma, with overall response rates of approximately 70% and complete response rates of over 30%. [37,38] Monoclonal antibodies directed against other B-cell-associated antigens and new anti-CD20 monoclonal antibodies and anti-CD80 monoclonal antibodies (such as galiximab) are being investigated in follicular lymphoma. [39]

Implication and considerations of personalised cancer treatment

Scientific considerations

Increasing information has revealed the incredible complexity of the cancer tumourigenesis puzzle; there are not only point mutations, such as nucleotide insertions, deletions and SNPs, but also genomic rearrangements and copy number changes. [40-42] These studies have documented a pervasive variability of these somatic mutations, [7,43] so that thousands of human genomes and cancer genomes need to be completely sequenced to have a complete landscape of causal mutations. And what about epigenetic and non-genomic changes? While there is a lot of intense research being conducted on the sorts of molecular biology techniques discussed, none have been prospectively validated in clinical trials. In clinical practice, what use is a ‘gene signature’ if it provides no more discriminatory value than performance status or TNM-staging?

Much research has so far been focused on primary cancers; what about metastatic cancers, which account for considerable mortality? The inherent complexity of genomic alterations in late-stage cancers, coupled with interactions that occur between tumour and stromal cells, means that most often we are not measuring what we are treating. If we choose therapy based on the primary tumour, but we are treating the metastasis, we are likely giving the wrong therapy. Despite our increasing knowledge about metastatic colonisation, we still hold little understanding of how metastatic tumour cells behave as solitary disseminated entities. Until we identify optimal predictors for metastases and an understanding of the establishment of micrometastases and activation from latency, personalised therapy should be used sagaciously.

In addition, it is difficult, costly and time-consuming to take a genomic discovery and deliver to patients a new targeted therapy with suitable pharmacokinetic properties, safety and demonstrable efficacy in randomised clinical trials. The first cancer-related gene mutation was discovered nearly thirty years ago – a point mutation in the HRAS gene that causes a glycine-to-valine substitution at codon twelve. [44,45] The subsequent identification of similar mutations in the KRAS family [46-48] ushered in a new field of cancer research activity. Yet it is only now, three decades later, that KRAS mutation status is affecting cancer patient management as a ‘resistance marker’ of tumour responsiveness to anti-EGFR therapies. [21]

Ethical and Moral Considerations

The social and ethical implications of genetic research are significant; indeed, 3% of the budget for the Human Genome Project was allocated to addressing them. These worries range from “Brave New World-esque” fears about the beginnings of “genetic determinism” to invasions of “genetic privacy”. An understandable qualm regarding predictive genetic testing is discrimination. For example, if a person is discovered to be genetically predisposed to developing cancer, will employers be allowed to make such individuals redundant? Will insurance companies deny claims on the same basis? In Australia, the Law Reform Commission’s report details the protection of privacy, protection against unfair discrimination and the maintenance of ethical standards in genetics, the majority of which was accepted by the Commonwealth. [49,50] In addition, the Investment and Financial Services Association states that no applicant will be required to undergo a predictive genetic test for life insurance. [51] Undeniably, the potentially negative psychological impact of testing needs to be balanced against the benefits of detecting low, albeit significant, genetic risk. For example, population-based early detection testing for ovarian cancer is hindered by the inappropriately low positive predictive value of existing testing regimes.

As personalised medicine moves closer to becoming a reality, it raises important questions about health equality. Such discoveries are magnifying the disparity in the accessibility of cancer care for minority groups and the elderly, evidenced by their higher incidence rates and lower rates of cancer survival. This is particularly relevant in Australia, given the pre-existing pitfalls of access to medical care for Indigenous Australians. Even when adjusting for later presentations and remoteness, significant survival disparities remain between the Indigenous and non-Indigenous populations. [52] Therefore, a number of questions remain. Will personalised treatment serve only to exacerbate the health disparities between the developing and developed world? Even if effective personalised therapies are proven through clinical trials, how will disadvantaged populations access this care given their difficulties in accessing the services that are currently available?

Economic Considerations

The next question that arises is: Who will pay? At first glance, stratifying patients may seem unappealing to the pharmaceutical industry, as it may mean trading the “blockbuster” drug offered to the widest possible market for a diagnostic/therapeutic drug that is highly effective but only in a specific patient cohort. Instead of drugs developed for mass use (and mass profit), drugs designed through pharmacogenomics for a niche genetic market will be exceedingly expensive. Who will cover this prohibitive cost – the patient, their private health insurer or the Government?

Training Considerations

The limiting factor in personalised medicine could be the treating doctor’s familiarity with utilising genetic information. This can be addressed by enhancing genetic ‘literacy’ amongst doctors. The role of genetics and genetic counselling is becoming increasingly recognised, and genetics is now a subspecialty within the Royal Australasian College of Physicians. If personalised treatment improves morbidity and mortality, the proportion of cancer survivors requiring follow-up and management will also rise, and delivery of this service will fall on oncologists and general practitioners, as well as other healthcare professionals. To customise medical decisions for a cancer patient meaningfully and responsibly on the basis of the complete profile of his or her tumour genome, a physician needs to know which specific data points are clinically relevant and actionable. For example, the discovery of BRAF mutations in melanoma [32] has shown us the key first step in making this a reality, namely the creation of a clear and accessible reference of somatic mutations in all cancer types.

Downstream of this is the education that medical universities provide to their graduates in the clinical aspects of genetics. In order to maximise the application of personalised medicine it is imperative for current medical students to understand how genetic factors for cancer and drug response are determined, how they are altered by gene-gene interactions, and how to evaluate the significance of test results in the context of an individual patient with a specific medical profile. Students should acquaint themselves with the principles of genetic variation and how genome-wide studies are conducted. Importantly, we need to understand that the same principles of simple Mendelian genetics cannot be applied to the genomics of complex diseases such as cancer.

Conclusion

The importance of cancer genomics is evident in every corner of cancer research. However, its presence in the clinic is still limited. It is undeniable that much important work remains to be done in the burgeoning area of personalised therapy; from making sense of data collected from genome-wide association studies and understanding the genetic behaviour of metastatic cancers to regulatory and economic issues. This leaves us with the parting question: are humans just a sum of their genes?

Conflicts of interest

None declared.

Correspondence

M Wong: may.wong@student.unsw.edu.au


Is Chlamydia trachomatis a cofactor for cervical cancer?

Introduction

The most recent epidemiological publication on the worldwide burden of cervical cancer reported that cervical cancer (0.53 million cases) was the third most common female cancer in 2008, after breast (1.38 million cases) and colorectal cancer (0.57 million cases). [1] Cervical cancer is the leading source of cancer-related death among women in Africa, Central America, South-Central Asia and Melanesia, indicating that it remains a major public health problem in spite of effective screening methods and vaccine availability. [1]

The age-standardised incidence of cervical cancer in Australian women (20-69 years) has decreased by approximately 50% from 1991 (the year the National Cervical Screening Program was introduced) to 2006 (Figure 1). [2,3] Despite this drop, the Australian Institute of Health and Welfare estimated an increase in cervical cancer incidence and mortality for 2010 by 1.5% and 9.6% respectively. [3]

Human papillomavirus (HPV) is required but not sufficient to cause invasive cervical cancer (ICC). [4-6] Not all women with a HPV infection progress to develop ICC. This implies the existence of cofactors in the pathogenesis of ICC such as smoking, sexually transmitted infections, age at first intercourse and number of lifetime sexual partners. [7] Chlamydia trachomatis (CT) is the most common bacterial sexually transmitted infection (STI) and has been associated with the development of ICC in many case-control and population-based studies. [8-11] However, a clear cause-and-effect relationship between CT infection, HPV persistence and progression to ICC as an end stage has not been elucidated. This article aims to review the literature for evidence that CT acts as a cofactor in the development of ICC and HPV establishment. Understanding CT as a risk factor for ICC is crucial, as it is amenable to prevention.

Aim: To review the literature to determine if an infection with Chlamydia trachomatis (CT) acts as a confounding factor in the pathogenesis of invasive cervical cancer (ICC) in women. Methods: Web-based Medline and the Australian Institute of Health and Welfare (AIHW) search for key terms: cervical cancer (including neoplasia, malignancy and carcinoma), chlamydia, human papillomavirus (HPV) and immunology. The search was restricted to English language publications on ICC (both squamous and adenocarcinoma) and cervical intraepithelial neoplasia (CIN) between 1990-2010. Results: HPV is essential but not sufficient to cause ICC. Past and current infection with CT is associated with squamous cell carcinoma of the cervix of HPV-positive women. CT infection induces both protective and pathologic immune responses in the host that depend on the balance between Type-1 helper cells versus Type-2 helper cell-mediated immunity. CT most likely behaves as a cervical cancer cofactor by 1) invading the host immune system and 2) enhancing chronic inflammation. These factors increase the susceptibility of a subsequent HPV infection and build HPV persistence in the host. Conclusion: Prophylaxis against CT is significant in reducing the incidence of ICC in HPVposi tive women. GPs should be raising awareness of the association between CT and ICC in their patients.

Evidence for the role of HPV in the aetiology and pathogenesis of cervical cancer

HPV is a species-specific, non-enveloped, double-stranded DNA virus that infects squamous epithelia; its capsid consists of the major protein L1 and the minor protein L2. More than 130 HPV types have been classified based on their genotype, and HPV 16 (50-70% of cases) and HPV 18 (7-20% of cases) are the most important players in the aetiology of cervical cancer. [6,12] Genital HPV is usually transmitted via skin-to-skin contact during sexual intercourse but does not require vaginal or anal penetration, which implies that condoms offer only partial protection against CIN and ICC. [6] The risk factors for contracting HPV infection are early age at first sexual activity, multiple sexual partners, early age at first delivery, increased number of pregnancies, smoking, immunosuppression (for example, human immunodeficiency virus infection or medication), and long-term oral contraceptive use. Social customs in endemic regions such as child marriages, polygamy and high parity may also increase the likelihood of contracting HPV. [13] More than 80% of HPV infections are cleared by the host’s cellular immune response, which starts about three months after inoculation of the virus. HPV can be latent for 2-12 months post-infection. [14]

Molecular Pathogenesis

HPV particles enter basal keratinocytes of the mucosal epithelium via binding of virions to the basement membrane of disrupted epithelium. This is mediated via heparan sulfate proteoglycans (HSPGs) found in the extracellular matrix and on the surface of most cells. The virus is then internalised to establish an infection, mainly via a clathrin-dependent endocytic mechanism, although some HPV types may use alternative uptake pathways, such as a caveolae-dependent route or tetraspanin-enriched domains as a platform for viral uptake. [15] The virus replicates in non-dividing cells that lack the necessary cellular DNA polymerases and replication factors; HPV therefore encodes proteins that reactivate cellular DNA synthesis in non-cycling cells, inhibit apoptosis, and delay the differentiation of the infected keratinocyte to allow viral DNA replication. [6] Integration of the viral genome into the host DNA causes deregulation of the E6 and E7 oncogenes of high-risk HPV (HPV 16 and 18) but not of low-risk HPV (HPV 6 and 11). This results in expression of the E6 and E7 oncogenes throughout the epithelium, producing the aneuploidy and karyotypic chromosomal abnormalities that accompany keratinocyte immortalisation. [5]

Natural History of HPV infection and cervical cancer

Low-risk HPV infections are usually cleared by cellular immunity coupled with seroconversion and antibodies against the major coat protein L1. [5,6,12] Infection with high-risk HPV is strongly associated with the development of squamous cell and adenocarcinoma of the cervix, an association modified by cofactors such as smoking and STIs. [4,9,10] The progression of cervical cancer in response to HPV is schematically illustrated in Figure 2.

Chlamydia trachomatis and the immune response

CT is a true obligate intracellular pathogen and the most common bacterial cause of STIs. It is associated with sexual risk-taking behaviour and, because of its slow growth cycle, often causes asymptomatic and therefore undiagnosed genital infections. [16] A CT infection is targeted by innate immune cells, T cells and B cells. Protective immune responses control the infection, whereas pathological responses lead to chronic inflammation that causes tissue damage. [17]

Innate immunity

The mucosal epithelium of the genital tract provides the first line of host defence. If CT succeeds in entering the mucosal epithelium, the innate immune system is activated through recognition of pathogen-associated molecular patterns (PAMPs) by pattern-recognition receptors such as the Toll-like receptors (TLRs). Although CT lipopolysaccharide can be recognised by TLR4, TLR2 is more important for signalling pro-inflammatory cytokine production. [18] This leads to the production of pro-inflammatory cytokines such as interleukin-1 (IL-1), IL-6, tumour necrosis factor-alpha (TNF-α) and granulocyte-macrophage colony-stimulating factor (GM-CSF). [17] In addition, chemokines such as IL-8 increase recruitment of innate immune cells such as macrophages, natural killer (NK) cells, dendritic cells (DCs) and neutrophils, which in turn produce more pro-inflammatory cytokines to restrict CT growth. Infected epithelial cells release matrix metalloproteases (MMPs) that contribute to tissue proteolysis and remodelling; neutrophils also release MMPs and elastases that contribute to tissue damage. NK cells produce interferon (IFN)-gamma, which drives CD4 T cells toward a Th1-mediated immune response. The infected tissue is infiltrated by a mixture of CD4 cells, CD8 cells, B cells and plasma cells (PCs). [17,19,20] DCs are essential for processing and presenting CT antigens to T cells, thereby linking innate and adaptive immunity.

Adaptive Immunity

Both CD4 and CD8 cells contribute to the control of CT infection. In 2000, Morrison et al. showed that B cell-deficient mice depleted of CD4 cells are unable to clear CT infection. [21] However, another study in 2005 showed that passive transfer of chlamydia-specific monoclonal antibodies into B cell-deficient, CD4-depleted mice restored their ability to control a secondary CT infection. [22] This indicates a strong synergy between CD4 cells and B cells in the adaptive immune response to CT, with B cells producing CT-specific antibodies to combat the pathogen. In contrast, CD8 cells produce IL-4, IL-5 and IL-13, which do not appear to protect against chlamydia infection and may even indirectly enhance chlamydial load by inhibiting the protective CD4 response. [23] A similar observation was made by Agrawal et al., who examined cervical lymphocyte cytokine responses of 255 CT antibody-positive women with or without fertility disorders (infertility and multiple spontaneous abortions) and of healthy control women negative for CT serum IgM or IgG. [20] The study revealed a significant increase in CD4 cells in the cervical mucosa of fertile women compared with women with fertility disorders and with negative control women; there was only a very small increase in CD8 cells in the cervical mucosa of CT-infected women in both groups. Cervical cells from the women with fertility disorders secreted higher levels of IL-1β, IL-6, IL-8 and IL-10 in response to CT, whereas cervical cells from antibody-positive fertile women secreted significantly higher levels of IFN-gamma and IL-12. This suggests that an immune response skewed toward Th1 prevalence protects against chronic infection. [20]

The pathologic response to CT can result in inflammatory damage within the upper reproductive tract, due either to failed or weak Th1 action resulting in chronic infection, or to an exaggerated Th1 response. Alternatively, chronic infection can occur if the Th2 response dominates the Th1 response, resulting in autoimmunity and direct cell damage that in turn enhances tissue inflammation. Inflammation also increases expression of human heat shock protein (HSP), which induces production of IL-10 via autoantibodies, leading to CT-associated pathology such as tubal blockage and ectopic pregnancy. [24]

Evidence that Chlamydia trachomatis is a cofactor for cervical cancer

Whilst it has been established that HPV is a necessary factor in the development of cervical cancer, it is still unclear why the majority of women infected with HPV do not progress to ICC. Several studies in the last decade have focused on the role of STIs in the pathogenesis of ICC and found that CT infection is consistently associated with squamous cell ICC.

In 2000, Koskela et al. performed a large-scale case-control study within a cohort of 530,000 Nordic women to evaluate the role of CT in the development of ICC. [10] One hundred and eighty-two women with ICC (diagnosed during a mean follow-up of five years after serum sampling) were identified by linking the data files of three Nordic serum banks with the cancer registries of Finland, Norway and Sweden. Microimmunofluorescence (MIF) was used to detect CT-specific IgG, and HPV16-, 18- and 33-specific IgG antibodies were determined by standard ELISAs. Serum antibodies to CT were associated with an increased risk of cervical squamous cell carcinoma (HPV- and smoking-adjusted odds ratio (OR), 2.2; 95% confidence interval (CI), 1.3-3.5). The association also remained after adjustment for smoking in both HPV16-seronegative and HPV16-seropositive cases (OR, 3.0; 95% CI, 1.8-5.1 and OR, 2.3; 95% CI, 0.8-7.0, respectively). This study provided sero-epidemiologic evidence that CT could cause squamous cell ICC; however, the authors were unable to explain the biological association between CT and squamous cell ICC.
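To make the reported effect sizes concrete, the short sketch below shows how an unadjusted odds ratio and its approximate 95% confidence interval are computed from a two-by-two table of case-control counts. The counts are entirely hypothetical (chosen only so that the result lands near the reported OR of 2.2) and are not the Koskela et al. data, which were additionally adjusted for HPV and smoking.

```python
import math

# Hypothetical case-control counts (NOT the Koskela et al. data):
# rows = CT serostatus, columns = ICC cases vs. controls
a, b = 60, 140    # CT-seropositive: cases, controls
c, d = 122, 608   # CT-seronegative: cases, controls

# Odds ratio: odds of exposure among cases / odds of exposure among controls
odds_ratio = (a * d) / (b * c)

# Approximate 95% CI via the standard error of the log-odds (Woolf method)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```

An interval that excludes 1.0, as in the studies above, is what allows an association to be called statistically significant.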

Many more studies emerged in 2002 to investigate this association between CT and ICC further. Smith et al. performed a hospital-based case-control study of 499 women with ICC from Brazil and 539 from Manila, which revealed that CT-seropositive women had a twofold increase in risk of squamous ICC (OR, 2.1; 95% CI, 1.1-4.0) but not of adenocarcinoma or adenosquamous ICC (OR, 0.8; 95% CI, 0.3-2.2). [8] Similarly, Wallin et al. conducted a population-based prospective study of 118 women who developed cancer an average of 5.6 years after having a normal Pap smear, with follow-up extending over 26 years. [25] PCR analysis for CT and HPV DNA showed that the relative risk for ICC associated with past CT infection, adjusted for concomitant HPV DNA positivity, was 17.1. They also concluded that the presence of CT and of HPV was not interrelated.

In contrast, another study examining the association between CT and HPV in women with cervical intraepithelial neoplasia (CIN) found an increased rate of CT infection in HPV-positive women compared with HPV-negative women (29/49 versus 10/80; p<0.001). [26] However, no correlation between HPV and CT co-infection was found, and the authors suggested that the increased CT infection rate in HPV-positive women is presumably due to HPV-related factors, including modulation of the host’s immunity. In 2004, a case-control study of 1,238 women with ICC and 1,100 control women in seven countries, coordinated by the International Agency for Research on Cancer (IARC) in France, also supported the findings of previous studies. [7]

Strikingly, a very recent study in 2010 reported no association between CT infection, as assessed by DNA or IgG, and the risk of cervical premalignancy after controlling for carcinogenic HPV-positive status. [11] The authors justified the difference from previous results by criticising the retrospective nature of the IARC study, in which HPV and CT status at the relevant times were not available. [7] However, prospective studies have also identified an association between CT and ICC development. [9,25] The results of this one study therefore stand apart from practically every other study that has found an association between CT and ICC in HPV-infected women.

Consequently, it is evident that CT infection has a role as a cofactor in squamous cell ICC in HPV-infected women, but it is not an independent cause of ICC as previously suggested by Koskela et al. [10] Previously reported cause-and-effect associations between CT and HPV most likely reflect CT infection increasing susceptibility to HPV. [9,11,27] The mechanisms by which CT can act as a cofactor for ICC relate to CT-induced inflammation (associated with metaplasia) and evasion of the host immune response, which increase susceptibility to HPV infection and enhance HPV persistence in the host. CT can directly degrade the RFX-5 and USF-1 transcription factors, which induce expression of MHC class I and MHC class II respectively. [17,28] This prevents recognition of both HPV and CT by CD4 and CD8 cells, thus preventing T-cell effector functions. CT can also suppress IFN-gamma-induced MHC class II expression by selective disruption of IFN-gamma signalling pathways, hence evading host immunity. [28] Additionally, as discussed above, CT induces inflammation and metaplasia of infected cells, which predisposes them as target cells for HPV. CT infection may also increase access of HPV to the basal epithelium and increase HPV viral load. [16]

Conclusion

There is sufficient evidence to suggest that CT infection can act as a cofactor in squamous cell ICC development, given the consistent positive correlations between CT infection and ICC in HPV-positive women. CT evades the host immune response and establishes chronic inflammation, and it is presumed that this prevents the clearance of HPV from the body, thereby increasing the likelihood of developing ICC. More studies are needed to establish the biological pathway linking CT to ICC and so support the positive correlation found in epidemiological studies. An understanding of the significant role played by CT as a cofactor in ICC development should be harnessed to maximise efforts in CT prophylaxis, starting at the primary health care level. Novel public health strategies must be devised to reduce CT transmission and raise awareness among women.

Conflicts of interest

None declared.

Correspondence

S Khosla: surkhosla@hotmail.com

Categories
Case Reports Articles

Ovarian torsion in a 22-year old nulliparous woman

Ovarian torsion is the fifth most common gynaecological emergency, with a reported prevalence of 2.7% among all cases of acute abdominal pain. [1] It is defined as the partial or complete rotation of the adnexa around its ovarian vascular axis, which may interrupt ovarian blood flow. [2] Ischaemia is therefore a possible consequence, and may lead to subsequent necrosis of the ovary and necessitate resection. As the symptoms of ovarian torsion are non-specific and variable, the condition remains a diagnostic challenge, with potential implications for future fertility. [3] Consequently, clinical suspicion and timely intervention are crucial for ovarian salvage.

This case report illustrates the multiple diagnoses that may be incorrectly ascribed to the variable presentations of ovarian torsion. Furthermore, a conservative treatment approach is described in a 22-year-old nulliparous woman, with the aim of preserving her fertility.

Case report

A 22-year-old nulliparous woman presented to the emergency department in the middle of her regular 28-day menstrual cycle with sudden-onset right iliac fossa pain. The pain began post-coitally, had been present for a few hours and radiated to the back. It was described as constant, severe and sharp, and was associated with episodes of emesis. Similar episodes of pain had been experienced in the previous few weeks; these were, however, shorter in duration and resolved spontaneously. She was otherwise well and had no associated gastrointestinal or genitourinary symptoms. She had no past medical or surgical history and, specifically, was not using the oral contraceptive pill as a form of contraception. She was in a two-year monogamous relationship, did not experience any dyspareunia and denied any prior sexually transmitted diseases. Her cervical smears were up to date and had been consistently reported as normal.

On examination, she was afebrile with a heart rate of 90 beats per minute (bpm) and a blood pressure of 126/92 mmHg. Her abdomen was described as “soft”, but she displayed voluntary guarding, particularly in the right iliac fossa. There was no renal angle tenderness and bowel sounds were present.

Speculum examination did not reveal any vaginal discharge, and bimanual pelvic examination demonstrated cervical excitation with significant discomfort in the right adnexa.

Urinalysis did not suggest a urinary tract infection, as there was no protein or blood in the urine sample. The urine pregnancy test was negative, and this was confirmed by her blood tests. There was a mild leukocytosis, and the CRP was normal.

Pelvic ultrasound demonstrated bilaterally enlarged ovaries containing multiple echogenic masses measuring 31mm, 14.4mm and 2mm on the right side, and 6mm, 17mm and 2mm on the left side. Blood supply to both ovaries was demonstrated on Doppler flow studies. There was a small amount of free fluid in the pouch of Douglas. The report noted no features suggestive of acute appendicitis and interpreted the findings as bilateral endometriomas. Initially her pain was unresponsive to narcotic analgesics, but she was later discharged home with simple analgesics as her symptoms improved.

Two days later she re-presented to the hospital with an episode of post-coital vaginal bleeding and ongoing severe, uncontrolled lower abdominal pain. She was now febrile with a temperature of 38.2°C, a heart rate of 92 bpm and a blood pressure of 114/66 mmHg. Repeat blood tests revealed a raised CRP of 110 mg/L and a WCC of 11.5 × 10⁹/L. Abdominal and pelvic examinations elicited guarding and severe tenderness. On this occasion endocervical and high vaginal swabs were taken, and she was treated for pelvic inflammatory disease on the basis of her raised temperature and elevated CRP.

Subsequently, a repeat pelvic ultrasound showed bilaterally enlarged ovaries similar to the previous scan. On this occasion, the findings were interpreted as bilateral ovarian dermoids. No comment was made on ovarian blood flow, but in the right iliac fossa a blind-ended, non-compressible, hyperaemic tubular structure measuring up to 8mm in diameter was described. These latter findings were considered consistent with appendicitis.

The patient was admitted and the decision was made for an emergency laparoscopy.

Intraoperative findings revealed a 6cm diameter partially torted left ovary containing multiple cysts, and an 8cm dark, haemorrhagic, oedematous, torted right ovary (Figure 1). There was a haemoperitoneum of 100 mL. Of note, there was a normal-appearing appendix and no evidence of adhesions, infection or endometriosis throughout the pelvis.

Laparoscopically, the right ovary was untwisted, and three cystic structures suggestive of ovarian teratomas were removed intact from the left ovary; their nature was confirmed by the subsequent histopathology report of mature cystic teratomas. During this time the colouration of the right ovary was re-established. Even though the ultrasound scan had suggested cystic structures within the right ovary, no attempt was made at this time to reduce the size of that ovary by cystectomy, because of its oedematous state and the haemoperitoneum that appeared to have arisen from it.

The postoperative period was uneventful and she was discharged home the following day. She was well two weeks post-operation and her port sites had healed as expected. Because of the possibility of further cystic structures within the right and left ovaries, a repeat pelvic ultrasound was organised for four months later. The patient was reminded of her high risk of re-torsion and advised to re-present early if there were any further episodes of abdominal pain.

The repeat ultrasound scan confirmed the presence of two cystic structures within the left ovary, measuring 3.5cm and 1.3cm in diameter, as well as a 5.5cm cystic structure in the right ovary. The ultrasound features of these structures were consistent with ovarian dermoids. She is currently awaiting an elective laparoscopy for bilateral ovarian cystectomy of these dermoid structures.

Discussion

Ovarian torsion can occur at any age, with the greatest incidence in women 20-30 years of age. [4] About 70% of ovarian torsion occurs on the right side, which is hypothesised to be due to the longer utero-ovarian ligament on this side; the limited space created by the sigmoid colon on the left side is also thought to contribute to this laterality. [1] This is consistent with the present case, in which there was partial torsion on the left side and complete torsion on the right side.

Risk factors for ovarian torsion include pregnancy, ovarian stimulation, previous abdominal surgery and tubal ligation. [1,4] However, torsion is most frequently associated with ovarian pathologies that result in enlarged ovaries. The most frequently encountered pathology is the ovarian dermoid, although other causes include paramesonephric (paratubal) cysts, follicular cysts, endometriomas and serous/mucinous cystadenomas. [5] In this case, despite the initial suggestion of endometriomas and of tubo-ovarian masses secondary to presumed pelvic inflammatory disease, bilateral ovarian dermoids were the actual cause of ovarian enlargement. The incidence of bilateral ovarian dermoids is 10-15%. [6,7]

The diagnosis of ovarian torsion is challenging, as the clinical parameters yield low sensitivity and specificity. Abdominal pain is reported in the majority of patients with ovarian torsion, but its characteristics are variable: sudden-onset pain occurs in 59-87% of patients, sharp or stabbing pain in 70%, and pain radiating to the flank, back or groin in 51%. [4,8] Patients with incomplete torsion may present with severe pain separated by asymptomatic periods. [9] Nausea and vomiting occur in 59-85% of cases and a low-grade fever in 20%. [4,8] Other non-specific findings include non-menstrual vaginal bleeding and leukocytosis, reported in about 4.4% and 20% of cases, respectively. [4] In this case the patient presented with such non-specific symptoms, which are common to many other differential diagnoses of an acute abdomen, including ectopic pregnancy, ruptured ovarian cyst, pelvic inflammatory disease, gastrointestinal infection, appendicitis and diverticulitis. [4] In fact, the patient was initially incorrectly diagnosed with bilateral endometriomas and, on the basis of the later ultrasound features, appendicitis was then considered.

Acute appendicitis is the most common differential diagnosis in patients with ovarian torsion. Fortunately, this usually results in operative intervention, so if a misdiagnosis has occurred the gynaecologist is usually summoned to deal with the ovarian torsion. Conversely, gastrointestinal infection and pelvic inflammatory disease are non-surgical misdiagnoses that may result in delayed surgical intervention. [10] Consequently, it is not surprising that in one study ovarian torsion was only considered in the admitting differential diagnosis of 19-47% of patients with actual ovarian torsion. [4] In the present case the patient had variable symptoms during the course of her presentations, and ovarian torsion was not initially considered.

Imaging is frequently used in the management of an acute abdomen. In gynaecology, ultrasound has become the routine investigation for potential pelvic pathologies, and colour Doppler studies have been used to assess ovarian blood supply. However, the contribution of ultrasound to the diagnosis of ovarian torsion remains controversial. [2] Non-specific ultrasound findings include heterogeneous ovarian stroma, the “string of pearls” sign and free fluid in the cul-de-sac. [2,12] Ovarian enlargement of more than 4cm is the most consistent ultrasound feature of ovarian torsion, with the greatest risk occurring in cysts measuring 8-12cm. [2,11]

Furthermore, the use of ultrasound Doppler yields highly variable interpretations, and studies disagree on its usefulness. [1,2] Because cessation of venous flow precedes interruption of arterial flow, the presence of blood flow on Doppler studies indicates probable viability of the ovary rather than the absence of ovarian torsion. [2,13] In the presented case, both ovaries demonstrated blood flow two days before the operation to de-tort the right ovary; however, it is possible that complete torsion actually occurred after the last ultrasound was performed.

Other imaging modalities, such as contrast CT and MRI, are rarely useful when the ultrasound findings are inconclusive. Thus, direct visualisation by laparoscopy or laparotomy is the gold standard to confirm the diagnosis of ovarian torsion.

Laparoscopy is the surgical approach of choice, as it has the advantages of a shorter hospital stay and reduced postoperative analgesic requirements. [14,15] Although laparoscopy is frequently preferred in younger patients, the surgical skill required to deal with these ovarian masses may necessitate a laparotomy. Furthermore, in patients in whom there is a suspicion of malignancy, for example a raised CA125 (tumour marker) in the presence of endometriomas, a laparotomy may be appropriate. [16] Eitan et al. reported a 22% incidence of malignancy in 27 postmenopausal patients with adnexal torsion. [16,17]

Traditionally, radical treatment by adnexectomy was the standard approach to ovarian torsion in cases of ovarian decolouration/necrosis. This was due to the fear of pulmonary embolism from untwisting of a potentially thrombosed ovarian vein. This approach obviously resulted in the loss of the ovary and potential reduction in fertility. More recently this approach has been challenged. A more conservative treatment that consists of untwisting the adnexa followed by cystectomy or cyst aspiration has been reported. [1]

Rody et al. [5] suggest conservative management of ovarian torsion regardless of the macroscopic appearance of the ovary. Their large literature review reported no severe complications, such as embolism or infection, even after the detorsion of “necrotic-looking” ovaries. In support of this, animal studies suggest that reperfusion of ischaemic ovaries even after 24 hours, up to a limiting interval of 36 hours, results in ovarian viability as demonstrated histologically. [18]

This ovary-sparing approach after detorsion of ischaemic ovaries is considered safe and effective in both adults and children. [19,20] A cystectomy is usually performed on suspected organic cysts for histological examination. Where cystectomy is difficult because of an ischaemic, oedematous ovary, some authors recommend re-examination 6-8 weeks after the acute episode and secondary surgery at this later time if necessary. [5,19,20] In this case, detorsion alone of the haemorrhagic right ovary was sufficient to resolve the pain, allowing a second laparoscopic procedure to be arranged to remove the causative pathology.

Summary points on ovarian torsion

1. Ovarian torsion is difficult to diagnose clinically and on ultrasound.

2. Clinical suspicion of ovarian torsion determines the likelihood of operation.

3. Laparoscopy is the surgical approach of choice.

4. Detorsion is safe and may be preferred over excision of the torted ovary.

What did I learn from this case and my reading?

1. Accurate diagnosis of ovarian torsion is difficult.

2. Suspicion of ovarian torsion should be managed, like testicular torsion, as a surgical emergency.

3. An early laparoscopy/laparotomy should be considered in order to avoid making an inaccurate diagnosis that may significantly impact on a woman’s future fertility.

Acknowledgements

The authors would like to acknowledge the Graduate School of Medicine, University of Wollongong for the opportunity to undertake a selective rotation in the Obstetrics and Gynaecology Department at the Wollongong Hospital. In addition, we would like to extend a special thank you to Ms. Sandra Carbery (Secretary to A/Prof Georgiou) and the Wollongong Hospital library staff for their assistance with this research project.

Consent declaration

Consent to publish this case report (including figure) was obtained from the patient.

Conflict of interest

None declared.

Correspondence

H Chen: hec218@uowmail.edu.au

Categories
Case Reports Articles

Use of olanzapine in the treatment of acute mania: Comparison of monotherapy and combination therapy with sodium valproate

Introduction: The aim of this article is to review the literature and outline the evidence, if any, for the effectiveness of olanzapine as a monotherapy for acute mania in comparison with its use as a combined therapy with sodium valproate. Case study: GR, a 55-year-old male with no previous psychiatric history, was assessed by the Consultation and Liaison team and diagnosed with an acute manic episode. He was placed under an involuntary treatment order and was prescribed olanzapine 10mg once daily (OD). After failing to respond adequately to this treatment, sodium valproate 500mg twice daily (BD) was added to the regimen. Methods: A literature search was conducted using the Medline Ovid and NCBI Pubmed databases. The search terms mania AND olanzapine AND valproate; acute mania AND pharmacotherapy; and olanzapine AND mania were used. Results: Two studies were identified that addressed the efficacy and safety of olanzapine for the treatment of acute mania. Both confirmed the superior efficacy of olanzapine in the treatment of acute mania in comparison to placebo. No studies were identified that directly addressed whether combination therapy with olanzapine and sodium valproate is more efficacious than olanzapine monotherapy. Conclusion: There is no evidence currently available to support the use of combination olanzapine/sodium valproate as a more efficacious treatment than olanzapine alone.

Case report

GR is a 55-year-old Vietnamese male with no previous psychiatric history who was seen by the Consultation and Liaison Psychiatry team at a Queensland hospital after referral from the Internal Medicine team. He had been brought into the Emergency Department the previous day by his ex-wife after she noticed increasingly bizarre behaviour and aggressiveness. He had undergone bilateral knee replacement surgery twenty days before the current admission and had been discharged from hospital one week earlier. GR was assessed thoroughly for delirium caused by a general medical condition, with all investigations showing normal results.

GR was previously working as an electrician, but is currently unemployed and on a disability benefit due to a prior back injury. He currently acts as a carer for his ex-wife, who resides with him at the same address. He was reported to be irritable, excessively talkative with bizarre ideas, and sleeping for less than two hours each night for the past four nights. He has no other past medical history apart from hypertension, which is currently well controlled with candesartan 10mg OD. He is allergic to meloxicam, with an unspecified reaction.

On assessment, GR was dressed in his nightwear, sitting on the edge of his bed. He was restless and erratic in his behaviour, with little eye contact. Speech was loud, rapid and slightly pressured. Mood could not be established, as GR did not respond to direct questioning. Affect was expansive, elevated and irritable. Grandiose thought content was displayed, with flight of ideas. There was no evidence of perceptual disturbances, in particular hallucinations, and no delusions were elicited. Insight and judgement were extremely poor. GR was assessed to have a moderate risk of violence; there was no risk of suicide or self-harm, and no risk of vulnerability.

After a request and recommendation for assessment, GR was diagnosed with an acute manic episode in accordance with Diagnostic and Statistical Manual of Mental Disorders, 4th Edition, Text Revision (DSM-IV-TR) criteria and placed under an involuntary treatment order. He was prescribed olanzapine 10mg OD. After failing to respond adequately to this treatment, sodium valproate 500mg BD was added to the regimen. Improvement with the addition of the new medication was seen within a number of days.

Introduction

A manic episode, as defined by the DSM-IV-TR, is characterised by a distinct period of abnormally and persistently elevated, expansive or irritable mood lasting at least one week (or any duration if hospitalisation is required), associated with a number of other persistent symptoms including grandiosity, decreased need for sleep, talkativeness, distractibility and psychomotor agitation, causing impaired functioning and not accounted for by another disorder. [1] Mania tends to have an acute onset, and it is these episodes that define the presence of bipolar disorder: Bipolar I Disorder is characterised by mania and major depression, or mania alone, while Bipolar II Disorder is defined by hypomania and major depression. [1] The pharmacological management of acute mania involves primary treatment of the pathologically elevated mood. A number of medications are recommended, including lithium; the anti-epileptics sodium valproate and carbamazepine; and second-generation antipsychotics such as olanzapine, quetiapine, risperidone or ziprasidone. [2] Suggested approaches to patients with mania who fail to respond to a single medication include optimising the current drug, switching to a different drug, or using drugs in combination. [2] GR was initially managed with olanzapine 10mg OD; after he failed to respond adequately, sodium valproate 500mg BD was added. This raises the following question: is the use of combination therapy with olanzapine and sodium valproate more efficacious than olanzapine monotherapy?

Objective

The objective of this article was to review the literature and outline the evidence, if any, for the effectiveness of olanzapine monotherapy for acute mania in comparison with its use as a combined therapy with sodium valproate. The issue of the long-term outcome and efficacy of these two therapies is outside the scope of this report.

Data collection

In order to address the question identified in the objective, a literature search was conducted using the Medline Ovid and NCBI Pubmed databases, with limits set to include only articles written in English and available in full-text journals subscribed to by James Cook University. The search terms mania AND olanzapine AND valproate; acute mania AND pharmacotherapy; and olanzapine AND mania were used. A number of articles were also identified through the related-articles link provided by the NCBI Pubmed database. A number of articles, including randomised controlled trials (Level II evidence) and meta-analyses (Level I evidence), were reviewed; however, no study was found that compared olanzapine monotherapy with combined therapy of olanzapine and sodium valproate.

Discussion

Efficacy of olanzapine as a monotherapy

Two studies were identified that addressed the efficacy and safety of olanzapine for the treatment of acute mania. The first, by Tohen et al. in 1999 [3], was a randomised, double-blind, placebo-controlled, parallel-group study involving a sample of 139 patients who met the DSM-IV-TR criteria for either a mixed or manic episode, with 70 assigned to olanzapine 10mg OD and 69 to placebo. Both treatment groups were similar in their baseline characteristics and severity of illness, and therapy lasted for three weeks. After the first day of treatment, the daily dosage could be increased or decreased by 5mg each day within the allowed range of 5-20mg/day, and lorazepam was allowed as a concurrent medication up to 4mg/day. [3] Patients were assessed at baseline and at the end of the study, with the change in total score on the Young Mania Rating Scale from baseline to endpoint as the primary efficacy measure.

The study found that those treated with olanzapine showed a greater mean improvement in total scores on the Young Mania Rating Scale, with a difference of -5.38 points (95% CI, -10.31 to -0.93). [3] Clinical response (a decrease of 50% or more from the baseline score) was also seen in 48.6% of patients receiving olanzapine compared to 24.2% of those assigned to placebo. [3] Improvement was also seen in other measures, such as the severity-of-mania rating on the Clinical Global Impression – Bipolar version and the total score on the Positive and Negative Symptom Scale. [3]
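For readers unfamiliar with trial response criteria, the short sketch below illustrates how the ‘decrease of 50% or more from baseline’ rule classifies individual patients. The Young Mania Rating Scale (YMRS) scores are invented for illustration and are not data from Tohen et al.

```python
# Hypothetical Young Mania Rating Scale (YMRS) scores; not trial data.
patients = [
    {"id": 1, "baseline": 28, "endpoint": 12},  # 57% fall -> responder
    {"id": 2, "baseline": 30, "endpoint": 18},  # 40% fall -> non-responder
    {"id": 3, "baseline": 26, "endpoint": 13},  # exactly 50% -> responder
]

for p in patients:
    fall = (p["baseline"] - p["endpoint"]) / p["baseline"]
    responder = fall >= 0.5  # trial criterion: >=50% decrease from baseline
    print(f"Patient {p['id']}: {fall:.0%} reduction, responder={responder}")
```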

A second randomised, double-blind, placebo-controlled study was conducted by Tohen et al. in 2000. [4] This four-week trial had a similar methodology, with identical inclusion criteria, primary efficacy measure and criteria for clinical response. It was, however, designed to address some of the limitations of the first trial, particularly the short treatment period, and to further establish the efficacy and safety of olanzapine in the treatment of acute mania. [4] The study design, method and assessment were clearly outlined. The study involved 115 patients; the olanzapine group showed a 6.65-point greater mean improvement in Young Mania Rating Scale score and a statistically significant greater clinical response than the placebo group. [4] Both studies confirmed the superior efficacy of olanzapine over placebo in the treatment of acute mania across a number of subgroups, including manic versus mixed episode and psychotic versus non-psychotic manic episode. [3,4]

The efficacy of olanzapine monotherapy has also been compared with that of a number of other first-line medications, including lithium, haloperidol and sodium valproate. Two studies were identified that evaluated the efficacy of olanzapine and sodium valproate for the treatment of acute/mixed mania, and both demonstrated olanzapine to be an effective treatment. [5,6] Tohen et al. (2002) [5] showed olanzapine to produce superior improvement in mania rating scores and clinical response when compared to sodium valproate; however, this may have been affected by differences between the study dosages and mean modal dosages. [7] Zajecka (2002) [6] described no significant differences between the two medications. In comparison with lithium, a small trial by Berk et al. in 1999 [8] described no statistically significant differences between the two medications. Similar rates of remission and response were shown in a twelve-week double-blind study comparing olanzapine and haloperidol for the treatment of acute mania. [9]

The evidence from these studies suggests that olanzapine at a dosage of 5-20mg/day is an efficacious therapy for acute manic episodes when compared with placebo and a number of other medications.

Efficacy of combination therapy of olanzapine and sodium valproate

As mentioned previously, no studies were identified that directly addressed the question of whether combination therapy with olanzapine and sodium valproate is more efficacious than olanzapine monotherapy. One study by Tohen et al. in 2002 [10] investigated the efficacy of olanzapine in combination with sodium valproate for the treatment of mania; however, the comparison was with sodium valproate monotherapy rather than with olanzapine.

This study was a six-week, double-blind, placebo-controlled trial that evaluated patients who had failed to respond to two weeks of monotherapy with sodium valproate or lithium. A total of 344 patients were randomised to receive either combination therapy with olanzapine or continued monotherapy with placebo. [10] Efficacy was measured with the Young Mania Rating Scale, and combination therapy with olanzapine and sodium valproate showed greater improvement in total scores as well as significantly better clinical response rates when compared to sodium valproate monotherapy. [10] This improvement was demonstrated by almost all measures used in the study. However, assignment to valproate or lithium therapy was not randomised, and a larger number of patients received valproate monotherapy; this was noted as a limitation of the study. [10] The lack of an olanzapine monotherapy arm also prevents exploration of a postulated synergistic effect between olanzapine and mood stabilisers such as sodium valproate. [10]

The study by Tohen et al. (2002) [10] does show that olanzapine combined with sodium valproate has superior efficacy for the treatment of manic episodes compared with sodium valproate alone, which may indicate that combination therapy is more effective than monotherapy. Whilst this suggests that a patient not responding to initial therapy may benefit from the addition of a second medication, the results cannot be generalised to a comparison of olanzapine monotherapy with sodium valproate/olanzapine combination therapy.

Conclusion

When first-line monotherapy for the treatment of acute manic episodes fails, the therapeutic guidelines recommend combination therapies as an option to improve response. [2] However, there is no evidence currently available to support or refute the use of combination olanzapine/sodium valproate as a more efficacious treatment than olanzapine alone. As no studies have been conducted addressing this specific question, the ability to comment on the appropriateness of the management of GR’s acute manic episode is limited.

This review has revealed a need for further studies evaluating the effectiveness of combination therapy for the treatment of acute manic episodes. In order to answer the question raised, a large, placebo-controlled trial comparing olanzapine monotherapy with combination therapy is needed to ascertain which approach is most effective. Another potential area for future research is to assess the best approach for patients who fail to respond to initial monotherapy (increasing the current dose, changing drugs or adding medications), and then to identify whether patient characteristics, such as experiencing a manic versus a mixed episode, influence the effectiveness of particular pharmacotherapies. This information would provide more evidence on which to base future recommendations.

There is clear evidence supporting the efficacy of olanzapine monotherapy in the treatment of acute mania, as well as evidence suggesting that combined therapy with sodium valproate is also effective; however, a comparison between the two approaches to management could not be made. When evidence is lacking, it becomes appropriate to consider the progress of the patient in order to assess the efficacy of the current management plan; as GR experienced considerable improvement, this may indicate that his current therapy is suitable for his condition.

Consent declaration

Informed consent was obtained from the patient for the original case report.

Conflicts of interest

None declared.

Correspondence

H Bennet: hannah.bennett@my.jcu.edu.au


Categories
Articles Guest Articles

Global inequities and the international health scene – Gustav Nossal


All young people should be deeply concerned at the global inequities that remain, and nowhere are these more clearly seen than in international health. We in the lucky country particularly need to be mindful of this, as we enjoy some of the best health standards in the world (with the notable exception of Aboriginal and Torres Strait Islander Australians). After decades of neglect, rays of hope have emerged over the last 10-15 years. This brief essay seeks to outline the dilemma and to give some pointers to future solutions.

Mortality statistics

A stark example of the health gap is shown in Table 1: life expectancy at birth has risen markedly in the richer countries in the last 50 years, but has actually gone backwards in some countries, the situation being worst in Sub-Saharan Africa, where life expectancy in the hardest-hit countries is now less than half of that in industrialised countries.

The death rate in children under five is widely used as a rough-and-ready measure of the health of a community, and also of the effectiveness of its health services. Table 2 shows some quite exceptional reductions in the richer countries over half a century, but a bleak picture in many developing countries. India is doing reasonably well, presumably as a result of rapid economic growth, although the good effects are slow to trickle down to the rural poor. The table also shows the toll of communicable diseases, and it is clear that at least two-thirds of these premature deaths are preventable.

We can total up these deaths, and note that the total comes to 20 million in 1960 and less than 8 million in 2010. Much of this improvement is due to international aid. One can do some optimistic modelling: if we project the downward trend to 2025, deaths will be around 4.5 million. This would mean a total of 27 million extra child deaths prevented, chiefly through better treatment of pneumonia, diarrhoea and malaria; better newborn care practices; and the introduction of several new vaccines.
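The projection arithmetic can be made explicit. A minimal sketch, assuming a simple straight-line decline between the two figures quoted above (my assumption for illustration; the essay’s underlying model is not specified), reproduces both the roughly 4.5 million projection and the 27 million deaths averted:

```python
# Under-five deaths quoted in the essay, in millions.
deaths_1960, deaths_2010 = 20.0, 8.0

# Straight-line decline implied by the two quoted figures
# (an illustrative assumption, not the essay's own model).
per_year = (deaths_1960 - deaths_2010) / (2010 - 1960)  # 0.24 million/year

deaths_2025 = deaths_2010 - per_year * (2025 - 2010)
print(f"Projected under-five deaths in 2025: {deaths_2025:.1f} million")  # ~4.4

# Deaths averted between 2010 and 2025 relative to deaths holding steady
# at the 2010 level: the triangle between the flat line and the trend.
averted = 0.5 * (2025 - 2010) * (deaths_2010 - deaths_2025)
print(f"Extra child deaths averted by 2025: {averted:.0f} million")  # ~27
```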

A final chilling set of statistics is presented in Table 3, concerning the risk of a mother dying in childbirth. As can be seen, this is now exceedingly rare in industrialised countries; with few exceptions, those rare deaths occur in mothers who have some underlying serious disease not connected with their pregnancy. In contrast, deaths in childbirth are still common in poor countries. Once again, the chief causes, obstructed labour, haemorrhage and sepsis, are largely preventable. It is unconscionable that a woman in the highest-risk countries is 400 times more likely to die in childbirth than one in the safest country. In some villages with high birth rates and high death rates, a woman’s lifetime chance of dying from a pregnancy complication is one in seven!
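The ‘one in seven’ lifetime figure is the cumulative effect of a per-pregnancy risk compounded over many births. A minimal sketch with assumed illustrative values (neither number is taken from Table 3) shows the arithmetic:

```python
# Illustrative assumptions only; neither value comes from Table 3.
risk_per_pregnancy = 0.015  # assumed 1.5% chance of dying per pregnancy
births = 10                 # assumed births per woman in a high-fertility village

# Probability of surviving every pregnancy, then its complement
lifetime_risk = 1 - (1 - risk_per_pregnancy) ** births
print(f"Lifetime risk: {lifetime_risk:.2f} (about 1 in {1/lifetime_risk:.0f})")
```

Even a modest per-pregnancy risk, repeated across a reproductive lifetime, accumulates to the startling figures quoted.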

International aid is increasing but must go higher

Properly deployed and in full partnership with the developing country, international aid can really help. At the prompting of the former Prime Minister of Canada, Lester Pearson, the United Nations mandated that rich countries should devote 0.7% of their gross national income (GNI) to development assistance. Only five countries have reached or exceeded that goal, namely Denmark, Norway, Sweden, The Netherlands and Luxembourg. The global total is only 0.32% of GNI, or US$128.5 billion in 2010. Australia, presently at $4.8 billion, has pledged to reach 0.5% of GNI by 2015. The health component of aid varies between 7% and 15%.
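As a rough consistency check, the two quoted figures imply the size of the donor-country income base, and hence what aid would total if the 0.7% target were met. The back-of-the-envelope arithmetic below is my derivation, not a figure from the essay:

```python
total_aid_2010 = 128.5e9  # US$, as quoted above
share_of_gni = 0.0032     # the quoted 0.32% of gross national income

# GNI base implied by the two quoted figures
implied_gni = total_aid_2010 / share_of_gni
print(f"Implied donor-country GNI: ${implied_gni / 1e12:.0f} trillion")  # ~$40tn

# What total aid would be if every donor met the UN's 0.7% target
target_aid = implied_gni * 0.007
print(f"Aid at the 0.7% target: ${target_aid / 1e9:.0f} billion")  # ~$281bn
```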

Major new programmes speed progress in health

In the last 10-15 years, and for the first time, major health programmes have come forward whose budgets are measured in billions rather than millions. One with which I am particularly familiar is the GAVI Alliance, a global alliance for vaccines and immunisation. I had the honour of being involved in the “pre-history” of GAVI when I acted as the Chairman of the Strategic Advisory Council of the Bill and Melinda Gates Children’s Vaccine Program from 1997-2003. Alerted to the fact that Bill and Melinda Gates wished to make a major donation in the field of vaccines, a working party with representatives from the World Health Organization (WHO), UNICEF, The World Bank, and the Gates and Rockefeller Foundations engaged in a series of intense discussions with all stakeholders throughout 1998 and 1999, prominently including the Health Ministers of developing countries. GAVI was launched at the World Economic Forum in Davos in January 2000 with an initial grant of $750 million from the Gates Foundation. Its purpose is to bring vaccines, including newer vaccines, to the 72 poorest countries in the world, and to sponsor research and development of still further vaccines. As regards the six traditional childhood vaccines, namely those against diphtheria, tetanus, whooping cough, poliomyelitis, measles and tuberculosis (BCG), 326 million additional children have been immunised and coverage has increased from 66% to 82%. Some 5.5 million deaths have been averted. Sturdy progress has been made in deploying vaccines against hepatitis B, one form of meningitis and yellow fever. More ambitiously, programmes are now being rolled out against pneumonia, the worst form of viral diarrhoea, cervical cancer and German measles. The budget of the GAVI Alliance is now over $1 billion per year, but it will have to rise as further vaccines are included. There are still 19 million children unimmunised each year. One GAVI strategy is to require some co-payment from the recipient country, encouraging it to give a higher priority to health and promoting sustainability.

Two separate large programmes are addressing the problem of HIV/AIDS, arguably the worst pandemic the world has ever faced: the Global Fund to Fight AIDS, Tuberculosis and Malaria, and PEPFAR, the US President’s Emergency Plan for AIDS Relief. Together these programmes spend an astonishing US$12 billion per year. As a result, highly active antiretroviral therapy (HAART) is reaching 6.5 million people in low- and middle-income countries, not only prolonging their lives indefinitely but also lowering the viral load in their blood, thus diminishing their capacity to transmit the virus. There is good evidence that the epidemic has peaked, with the number of new cases going down each year. In addition, special effort is going into the prevention of mother-to-child transmission of HIV.

The search for an AIDS vaccine continues. An encouraging but vexing result emerged from a clinical trial of Sanofi-Pasteur’s vaccine in Thailand, involving 16,000 volunteers. The vaccine gave 31.2% protection from HIV infection, clearly not sufficient to justify mass immunisation, but enough to warrant further investigation in what has previously been a rather discouraging field.

Progress in malaria has been substantial. Insecticide-impregnated bednets have turned out to be a powerful weapon, producing a 5% lowering of mortality where they are used. The Global Fund has distributed 240 million of these, and it is planned to reach a total of 700 million, an astonishing effort. Chemotherapy has been expanded, including intermittent preventive therapy (IPT), in which a whole population of children receives antimalarials every six months; IPT is also useful in pregnant women. A malaria vaccine is in the late phases of clinical trials. Produced by GlaxoSmithKline, it is known as RTS,S and has proven about 50% effective. It is targeted at the surface of the life-form of the parasite known as the sporozoite, which leaves the mosquito’s salivary gland and is injected under the skin when the mosquito feeds. Most experts believe that the final, definitive malaria vaccine will also need to target the liver cell stage, where the parasite goes underground; the blood cell stage, where it multiplies extensively in red blood cells; and perhaps the sexual stages. Good progress is being made in research in all these areas.

Tuberculosis remains a formidable foe, particularly as resistance to anti-tuberculous drugs is developing. That being said, the Global Fund is treating 8.7 million tuberculosis patients with DOTS (directly observed treatment, short-course, to assure compliance). Sadly, “short-course” means six months, which is quite a burden, and extensive research is seeking newer drugs able to act in a shorter time frame. As regards vaccines, it is unfortunately clear that the birth dose of BCG, which does a good job of preventing the infant manifestations of TB, namely tuberculous meningitis and widespread miliary tuberculosis, is ineffective in preventing the much more common pulmonary tuberculosis of adolescents and young adults. An impressive body of research is attempting to develop new TB vaccines: three are in Phase II clinical trials and at least eight in Phase I. The chronic nature of tuberculosis makes this a slow and expensive exercise.

The challenge of global eradication of poliomyelitis

Following the triumph of the global eradication of smallpox, WHO set itself the challenge of eradicating poliomyelitis. When I was young, this was a most feared disease, with its capacity to kill and maim. The Salk vaccine, and later the oral Sabin live attenuated vaccine, brought the disease under control in the industrialised countries with remarkable speed, and a dedicated effort in Latin America did the same. But in Africa and the Indian subcontinent it was a different story. For this reason, a major partnership was launched in 1988 between the voluntary organisation Rotary International, WHO and UNICEF, with help from many others, to eradicate polio globally. Five strategies underpinned the venture. The Sabin oral polio vaccine was used to cut costs and ease administration, since oral drops rather than an injection were all that was needed. High routine infant immunisation rates were encouraged. To reach the hard-to-reach children, national immunisation days were instituted, at which all children under five were lined up and given the drops, regardless of previous immunisation history. Strong emphasis was placed on surveillance of all cases of paralysis, with laboratory confirmation of suspected cases.

Finally, as control approached, a big effort was made to quell every little outbreak, with two extra doses of vaccine given two weeks apart around the index case. As a result of this work, polio cases have been reduced by over 99%; in 2011, there were only 650 confirmed cases in the whole world. India deserves special praise: despite its large population and widespread poverty, the last case in India occurred on 13 January 2011. There are now only three countries in which polio transmission has never been interrupted, namely Pakistan, Afghanistan and Nigeria. Unfortunately, three countries have re-established polio transmission after prior eradication: Chad, DR Congo and Angola. Furthermore, sporadic cases are occurring in other countries following importation, though most of these mini-outbreaks are quickly controlled. We are at a pivotal point in this campaign. It is costing about $1 billion per year to maintain the whole global apparatus while the public health burden is currently quite small. Cessation of transmission was targeted for the end of 2012; this deadline is unlikely to be met. But failure to reach the end goal would constitute the most expensive public health failure in history. If we can get there, the economic benefits of eradication have been estimated at US$40-50 billion.

Some further vaccine challenges are listed in Table 4. In a twenty-year framework, success in most of these is not unrealistic. The dividends would be enormous; finding the requisite funds will be a daunting task.

Conclusions

This essay focuses on infections and vaccines, my own area of expertise, but plentiful pathways for progress exist in other areas: new drugs for all the above diseases; clever biological methods of vector control; staple crops improved in micronutrients and protein through genetic technologies; stratagems for improved antenatal care and obstetrics; a wider array of contraceptive measures tailored to particular cultures; in time, thrusting approaches to non-communicable diseases including cardiovascular disease, diabetes, obesity, hypertension and their consequences; and greater recognition of the importance of mental health, with depression looming as a very grave problem. As young people contemplating a career in medicine, I commend all of these areas to you. In particular, consider spending some months or a few years joining this battle to provide better health to all the world’s citizens. There are plenty of opportunities, and the relevant travel will certainly prove enriching. A new breeze is blowing through global health. The thought that we can build a better world has taken firm hold. It is your generation, dear readers, who can turn dreams into realities and make the twenty-first century one truly to remember.



The role of medical students in innovation – Fiona Wood

When thinking about the role of medical students in innovation, my mind drifts back to my early days at St Thomas’ Hospital Medical School, London, in 1975. It was exciting because I could see for the first time that I had a role in the world that was useful. Let’s face it, until then it had been pretty well a one-way street, with education being handed to us and us taking it; but here I could see a chance to grow and to contribute in a way I had not been able to before. Then came the big question: how?

My “research career” started early: I knew that I needed a CV that was interesting to get a sniff of a surgical job, so it was not all out of pure curiosity. My Bachelor of Medical Science equivalent was both interesting and frustrating, but most importantly, it taught me that I could question. More to the point, that I could work out the strategies to answer the questions. Exploring the evolution of the brain from worms to elephants was fascinating, but did I find the point where the central nervous system (CNS) and peripheral nervous system (PNS) transitioned? No! But I did get to go on an anthropology field trip to Kenya and Tanzania and work collecting fossils at the Leakey camp, which was a huge bonus.

So, when I got to my Obstetrics and Gynaecology term, I decided that India was a good place to investigate malnutrition in pregnancy. Looking back, I find it hard to figure out how I put the trip together with the funding and the equipment. But to cut a long, shaggy-dog story short, my friend Jenny and I tripped off to a government hospital, where we took anthropometric measurements and measured HbA1c, as a marker of carbohydrate metabolism, and retinol-binding protein (RBP), a protein with a high essential amino acid content, as a marker of protein metabolism (run on gel plates by yours truly when I got back). It was a great trip and a huge learning experience on lots of levels, and it still remains with me.

I submitted the work to the British Journal of Nutrition, and it was sent back for revisions. However, I didn’t have the understanding to realise that this was good news and that I just needed to revise and send my changes back. Instead, totally deflated, I put it in a file and tried hard to forget that I had failed at the last fence. I clearly was never successful in forgetting! Finishing is key; otherwise you have used resources, yours and others’, and selfishly not added to the body of knowledge. It sounds harsh, but even negative results should be published: how many resources are wasted in repetition? Finishing is essential for all of us. Anyone can start, but learn the value of finishing. I learnt that early, and it is with me still.

I will fast-forward a little to the early days of my surgical training, when I saw amazing things being done by the plastic surgeons, and I was hooked. At the time, microsurgery was gaining momentum, tissue expansion was being explored, and tissue culture for clinical skin replacement was in the mix. Yes, heady times indeed! One surgeon told me, “Medicine is 5% fact. The rest, well, opinion based on experience. The aim is to find the facts, hold on to them, and build the body of evidence.” Someone else once told me, “Believe nothing of what you hear and only half of what you see!”

Regarding tissue expansion, my consultant remarked casually that, “It’s all well and good to create skin with characteristics similar to the defect, but the nerves are static, and so the quality of innervation will decrease as the skin is expanded.” Really? I wasn’t so sure. So, off I went to find a friendly neurophysiologist, who helped me design an experiment and taught me how to do single-axon recordings at T11 in a rodent model. We proved for the first time that the peripheral nerve field was not static but responded to the changes in the skin as forces were applied.

We are now familiar with neural plasticity. Understanding CNS and PNS plasticity remains a cornerstone of my research efforts exploring its role in healing the skin. How do we harness the capacity to self-organise back to our skin shape, not scar shape, on a microscopic level? How can we think ourselves whole? Questions that stretch and challenge us are always the best ones!

The spray-on skin cell story for me started in 1985 at Queen Victoria Hospital, East Grinstead, when I saw scientists growing skin cells. Another wow moment. So, I read all I could get my hands on. That is the starting point: to know what is out there. There is no point in reinventing the wheel! In 1990, as a Registrar in Perth, with the help of the team at Monash, Professor John Masterton and Joanne Paddle, our first patient was treated in Western Australia. By 1993, Marie Stoner and I had a lab in Perth, funded by Telethon, exploring how to shorten the time taken to expand a patient’s skin cells so that wounds could be healed as rapidly as possible. By 1995, we were delivering the cells as a spray instead of in sheets. We then developed a kit to harvest cells for bedside use, using the wound itself as the tissue culture environment. This work is now in clinical trials around the world, some funded by the US Armed Forces Institute of Regenerative Medicine. That is another lesson: the work never stops, it simply evolves.

So, asking questions is the starting point. Then come the practical questions: how to answer it, who can help and support the work, who will do it, and who will fund it. BUT you must also ask: which direction do you go in when there are so many questions? Go in the direction that interests you. Follow your passion. Haven’t got one? Then expose yourself to clinical problems until you meet a patient you want to help like no other, in a subject area that gets you out of bed in the morning. Then you will finish, maybe not with the answer, but with a contribution to the body of knowledge such that we all benefit.

Do medical students have a role? A group of highly competitive intellectual problem-solvers? Absolutely, if they choose to. I would say start now, link with positive energy, keep your ears and eyes open, and always learn from today to make sure tomorrow is better, and that we pass on medical knowledge and systems we are proud of.