Oncology Nurses' Narratives About Ethical Dilemmas and Prognosis-Related Communication in Advanced Cancer Patients

Susan M. McLennon, PhD, RN; Margaret Uhrich, BSN, RN; Sue Lasiter, PhD, RN; Amy R. Chamness, BA; Paul R. Helft, MD


Cancer Nurs. 2013;36(2):114-121. 


The details of the construction of the instrument, mailing and consent processes, and quantitative findings have been previously reported.[14] In the original study, a survey was mailed to Oncology Nursing Society members. At the end of the survey, 2 open-ended questions invited respondents to share an experience of feeling ethically conflicted regarding prognosis-related communications with a patient or family and to comment on any related concerns. Handwritten responses were transcribed verbatim. A content analysis of the narratives was performed,[15,16] and the findings are reported here.


Of the 394 nurse participants in the original study, 137 (34.7%) provided a total of 173 narrative comments in response to 1 or both of the open-ended questions. There were 134 female and 3 male nurses, with a mean age of 49.9 years (range, 24–72 years), who responded with comments. Most were white/Caucasian (88%) and had an average of 14.7 years of oncology experience; the majority were practicing as staff nurses (68%) in outpatient or inpatient settings. Details are provided in Table 1.

Data Analysis

Using the content analysis method adapted from Elo and Kyngäs[15] and Krippendorff,[16] narrative data were analyzed to gain a greater understanding of the nurses' perceptions of ethical dilemmas surrounding prognosis-related communication. Phrases and sentences were selected as the unit of analysis because, compared with single words, they more fully represented the concepts of interest. Concepts were then condensed into categories of similar meaning with the goal of drawing conclusions related to the specific situation.[16]

Data were hand coded without the use of an electronic database because the limited volume of text was manageable in printed copy. Consistent with inductive content analysis, each response was broken down into phrases that contained a single meaning, and each phrase was labeled as a concept. Similar concepts were grouped together, and the researchers determined which groupings were distinct. Concepts were then clustered into subcategories and finally into 4 higher-order categories. Not all concepts fit completely within a discrete category; in some cases, there was overlap between categories. In those situations, both coders agreed on the "best fit" category, and a notation was made that there was some overlap with other categories. Labels for the concepts were drawn from within the data to support face validity. Exemplars of data for the concepts and categories are reported below. Two authors (S.M.M., S.L.) with experience in qualitative data analysis and coding performed the primary analyses, which contributed to the reliability of the coding. A content expert (P.R.H.) reviewed the categories to provide additional verification. Once the data were coded, grouped, and categorized, frequency counts for each of the main categories and subcategories were performed to more fully describe the data. Details are provided in Table 2.