Transferrin Saturation, Dietary Iron Intake, and Risk of Cancer

Arch G. Mainous III, PhD; James M. Gill, MD, MPH; Charles J. Everett, PhD


Ann Fam Med. 2005;3(2):131-137. 

This cohort study followed up persons aged 25 to 74 years at the time of the index interview in the National Health and Nutrition Examination Survey (NHANES) I (1971–1974). The NHANES I baseline data were merged with the NHANES I Epidemiologic Follow-Up Study (NHEFS) data (1982–1984, 1986, 1987, and 1992).

NHANES I was designed to collect extensive demographic, medical history, nutritional, clinical, and laboratory data representative of the noninstitutionalized civilian US population. The survey was a multistage, stratified probability sample of clusters of persons aged 1 to 74 years. It was conducted from 1971 to 1975. The NHANES I survey design included oversampling of certain population subgroups, including persons living in poverty areas, women of childbearing age (aged 25–44 years), and elderly persons (aged 65 years and older).

The NHEFS is a national longitudinal data set that allows investigation of the relationships among clinical, nutritional, and behavioral factors assessed at baseline NHANES I and subsequent morbidity, mortality, and institutionalization. The NHEFS initial population includes the 14,407 participants who were aged 25 to 74 years when first examined in NHANES I. More than 98% of the participants in the initial NHANES I cohort were traced and supplied data in the 1992 NHEFS.

The follow-up information was gathered in 1 of 3 ways. Interviewed study participants were those who could be contacted and were able to participate. Surviving participants were always administered the subject questionnaire. If the original participant was alive but incapacitated, a slightly modified version of the subject questionnaire was administered to a proxy respondent; a separate proxy questionnaire was used only when the participant had died. Finally, for participants who had died between the NHANES I index interview and the follow-up interview, information from death certificates was recorded.

A total of 1,681 proxy respondents were interviewed in the 1992 NHEFS. Of these, 551 responded for an incapacitated participant and were administered a modified version of the subject questionnaire, and 1,130 responded for a deceased participant and thus were administered the proxy questionnaire.

The total number of persons with recorded transferrin saturation, dietary iron intake, smoking status, and complete follow-up data was 7,772. For this study, we excluded participants who had physician-diagnosed malignant tumors before the NHANES I examination (n = 91). To provide the most stable estimate of dietary iron intake, we also excluded individuals who reported that the 24-hour dietary history did not represent the way they usually eat (n = 1,372). These exclusions resulted in an unweighted cohort of 6,309 persons. This cohort was weighted, and the design effect was controlled for, so that the population used for all analyses in this report represented the US adult noninstitutionalized population at baseline: 62,720,183 persons.

In the original NHANES I, serum transferrin saturation was measured. The percentage of serum transferrin saturation was calculated by National Center for Health Statistics personnel by dividing the serum iron level by the total iron-binding capacity. We defined increased serum transferrin saturation at several cutoff levels, based both on the literature and on our investigation of how low transferrin saturation could be and still yield an increased cancer risk in the presence of high dietary iron intake.[9] The first cutoff value, more than 45%, had previously been proposed or used in population-based studies as one of the lowest levels of serum transferrin saturation that might be considered increased.[10,11] This level is substantially lower than the transferrin saturation of 60% (found in 1.1% of the adult population) previously shown to be associated with cancer risk without knowledge of dietary iron intake.[6,7] In addition to the 45% cutoff, we also investigated transferrin saturation levels below 45%, which are not normally considered elevated, to see whether they carried an increased risk in the presence of high dietary iron intake. Compared with persons at the 60% level, persons at these lower levels of transferrin saturation represent a much larger population at potentially increased risk.
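For readers implementing a similar analysis, the saturation calculation and cutoff check described above can be sketched in Python as follows (function names and defaults are ours for illustration, not from the study's code):

```python
def transferrin_saturation(serum_iron_ug_dl: float, tibc_ug_dl: float) -> float:
    """Percent transferrin saturation: serum iron divided by
    total iron-binding capacity (TIBC), expressed as a percentage."""
    return 100.0 * serum_iron_ug_dl / tibc_ug_dl

def is_elevated(saturation_pct: float, cutoff_pct: float = 45.0) -> bool:
    """Flag saturation above a chosen cutoff; the study's primary
    cutoff is more than 45%, with lower cutoffs also examined."""
    return saturation_pct > cutoff_pct
```

For example, a serum iron of 150 µg/dL against a TIBC of 300 µg/dL gives a saturation of 50%, which exceeds the 45% cutoff.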

Iron ingestion was measured by using the 24-hour dietary recall in the NHANES I. Total iron intake (milligrams per day) was estimated by the National Center for Health Statistics for this 24-hour period. The US recommended dietary allowance for iron is 8 mg/d for postmenopausal women, 8 mg/d for all men, and 18 mg/d for premenopausal women.[12] Previous studies calculated a potential risk from excessive iron intake of 25 to 50 mg of iron per day.[12,13] For this study, more than 18 mg of iron per day was considered "high intake."
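The intake classification above reduces to a simple threshold test; a minimal sketch (the dictionary labels are ours, not NHANES variable names) might look like:

```python
# US recommended dietary allowances cited in the text (mg/day);
# labels are illustrative, not NHANES codes.
IRON_RDA_MG = {
    "postmenopausal_women": 8,
    "men": 8,
    "premenopausal_women": 18,
}

def is_high_iron_intake(intake_mg_per_day: float, threshold_mg: float = 18.0) -> bool:
    """The study classifies more than 18 mg/day from the
    24-hour dietary recall as 'high intake'."""
    return intake_mg_per_day > threshold_mg
```

Note that the 18 mg/d study threshold coincides with the highest RDA (that for premenopausal women), so intake is "high" only when it exceeds the RDA for every demographic group.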

Cancer events included reports of both diagnosis and mortality and were determined from answers to interview questions in the 1982–1984, 1986, 1987, and 1992 NHEFS interviews. The date of cancer diagnosis was reported in the interviews. Information from death certificates was used only in cases in which the occurrence of cancer had never been reported during an NHEFS interview; because no other date was reported for the individual, the time of death was used for the cancer event. The International Classification of Diseases, Ninth Revision (ICD-9) codes associated with cancer mortality were 140 through 239. Nonmelanoma skin cancer (ICD-9 code 173.XX) was not classified as a cancer event.

We examined the independent relationship of increased serum transferrin saturation and dietary iron intake to cancer events by controlling for potential confounders. Control variables that were available in the NHANES I baseline were age, sex, and race. Four age categories were defined (25–34, 35–52, 53–69, and > 69 years). Race followed the NHANES I designations (white, black, and other). Because of our focus on cancer events, we also included body mass index (BMI) and self-reported smoking status (ever- or never-smokers) at baseline or during the 1982–1984 interview. BMI was calculated from measured height and weight information, and values more than 30 kg/m2 defined obesity.
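The derived control variables above (age strata, obesity from measured BMI) are straightforward to code; the following sketch reflects the categories as stated in the text, with function names of our own choosing:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index from measured weight and height (kg/m2)."""
    return weight_kg / height_m ** 2

def is_obese(bmi_value: float) -> bool:
    """The study defines obesity as BMI more than 30 kg/m2."""
    return bmi_value > 30.0

def age_category(age_years: int) -> str:
    """The four baseline age strata used as control variables."""
    if 25 <= age_years <= 34:
        return "25-34"
    if 35 <= age_years <= 52:
        return "35-52"
    if 53 <= age_years <= 69:
        return "53-69"
    if age_years > 69:
        return ">69"
    raise ValueError("cohort is restricted to ages 25-74 at baseline")
```

For example, a participant weighing 95 kg at 1.75 m has a BMI of about 31.0 kg/m2 and would be classified as obese.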

In an effort to control for severity of illness, we included comorbidity conditions. A variety of conditions were assessed in NHANES I. Comorbidities were positive responses in the baseline interview to questions regarding whether a doctor ever told the patient that he or she had 1 of 40 different conditions. The Charlson Comorbidity Index was calculated from the responses to these questions.[14]
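Computing the Charlson Comorbidity Index from self-reported conditions amounts to summing published condition weights. The sketch below uses an illustrative subset of the standard Charlson weights from the literature (the full index covers more conditions, and the study's exact mapping of the 40 NHANES I questions to Charlson conditions is not specified here):

```python
# Illustrative subset of standard Charlson condition weights;
# the mapping of NHANES I questionnaire items to these
# conditions is an assumption for demonstration only.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "chronic_pulmonary_disease": 1,
    "diabetes": 1,
    "hemiplegia": 2,
    "moderate_severe_renal_disease": 2,
    "metastatic_solid_tumor": 6,
}

def charlson_index(reported_conditions: set) -> int:
    """Sum the weights of conditions the participant reported
    a doctor ever diagnosing; unweighted conditions score 0."""
    return sum(CHARLSON_WEIGHTS.get(c, 0) for c in reported_conditions)
```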

We classified the population into 4 groups based on normal or increased transferrin saturation and low or high iron intake. For analysis of the NHEFS, we used sampling weights to calculate prevalence estimates for the civilian noninstitutionalized US population; thus, the population used in the analysis represented more than 62 million people. Because of the complex sampling design of the survey, we performed all analyses with SUDAAN (RTI International, Research Triangle Park, NC). To control for potential misclassification of persons as cancer-free at baseline who actually had undiagnosed cancer, we excluded any cancer event occurring during the first 3 years of the cohort.
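The 4-group classification and the 3-year event exclusion described above can be sketched as follows (group labels are ours; the cutoffs are the study's >45% saturation and >18 mg/day intake):

```python
def exposure_group(saturation_pct: float, iron_mg_per_day: float) -> str:
    """Assign one of the study's 4 groups by crossing the
    transferrin-saturation cutoff (>45%) with the dietary
    iron cutoff (>18 mg/day). Labels are illustrative."""
    ts = "high_ts" if saturation_pct > 45.0 else "normal_ts"
    fe = "high_iron" if iron_mg_per_day > 18.0 else "low_iron"
    return f"{ts}/{fe}"

def include_event(years_from_baseline: float) -> bool:
    """Exclude cancer events in the first 3 years of follow-up to
    reduce misclassification of prevalent, undiagnosed cancer."""
    return years_from_baseline > 3.0
```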

Using the population estimates generated by SUDAAN, we graphically show cumulative cancer incidence as the unadjusted relationship between cancer events and normal or increased serum transferrin saturation combined with low or high iron ingestion. We performed Cox proportional hazards analyses of time to cancer event for each group, controlling for age, sex, race, smoking status, BMI, and Charlson Comorbidity Index. Although we believed that the interaction between transferrin saturation and dietary iron intake depended on having high levels of both, necessitating the categories described previously, we also computed a model with a statistical interaction between the 2 variables coded as continuous. In these models, cancer-free survival time was a continuous variable measured in 1-year increments up to 18 years from baseline. We evaluated the proportionality of the hazards by examining the Schoenfeld residuals.[15]
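Coding the continuous-by-continuous interaction mentioned above simply adds a product term to the covariate set passed to the survival model. A minimal sketch (dictionary keys are our labels, not SUDAAN variable names):

```python
def continuous_interaction(ts_pct: float, iron_mg_per_day: float) -> dict:
    """Code transferrin saturation and dietary iron as continuous
    covariates plus their product term, as in the supplementary
    interaction model; the product carries the interaction effect."""
    return {
        "transferrin_saturation": ts_pct,
        "dietary_iron": iron_mg_per_day,
        "ts_x_iron": ts_pct * iron_mg_per_day,
    }
```

A significant coefficient on the product term would indicate that the effect of saturation on cancer risk varies with the level of dietary iron intake, rather than the two acting additively on the log hazard.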

