Hi. I'm Art Caplan. I run the Division of Medical Ethics at the NYU School of Medicine.
A really interesting study appeared in the journal Pediatrics recently. It was a study in New Zealand of risk factors for child abuse. Researchers put together a profile from a variety of measures, none of them surprising: how many kids were in the household, the parents' criminal record, drug abuse by the parents. They basically tried to forecast the chance that a particular child would suffer neglect or abuse at the hands of his or her parents.
What was interesting is that the researchers were able to put together a fairly predictive formula, perhaps as accurate as 65%, for identifying which parents, given a certain profile and a certain set of factors, were likely to harm, abuse, or neglect their child.
In one sense, that's great. It would help us a good deal to be able to predict who might be a high-risk candidate for child neglect or abuse and intervene early, perhaps by having social services step in or by enrolling people in support programs. Remember that New Zealand is a small country with a robust public health and hospital system. The researchers were able to keep tabs on these families in a way that would be hard in a big country like the United States, where people move all the time, from doctor to doctor and from health system to health system, and can be difficult to track down.
With all of that said, this raises a different set of ethical issues. If we had such an algorithm, we could forecast almost anything about parental behavior: perhaps whose kid is more likely to use drugs, or whose child is more likely to be abusive or violent, in addition to whose child is likely to be abused or neglected. How do we feel about that in terms of privacy, and in terms of parents, if you will, being stereotyped? Remember, not everybody who scored high on the risk-factor list wound up doing anything to their child. The formula was predictive but not 100% accurate.
Should we look forward, in this day of information, to more and more tests and predictions of the sort that the New Zealanders have started to explore? Should we start to say, I think that family is at high risk of going on welfare, or getting food stamps, or needing Medicaid support, or any other set of problems? Is there any limit to what we would say is reasonable to try to predict?
I would say this: Unless we're ready to mount an intervention to stop the problem, I have a lot of ethical heartburn about doing the forecast. That is to say, if I don't have a program to help somebody learn how not to be abusive, if I don't have an intervention to teach a child how to get along better with his or her peers, then simply forecasting trouble seems to me to bring stigma and penalty to a child and family. It does the family no good to have this knowledge, and it does no one else any good to possess the knowledge, if something bad is simply going to happen to this child or this family anyway.
Accurate forecasting has to come with a commitment to intervention: attempts to minimize the risk and do something about what you can foresee. Just knowing that something bad is coming is not enough. Knowing that something bad is coming and trying to prevent it is, I think, the moral requirement for doing forecasts in the future, built on the growing amounts of data that we're collecting in our healthcare system.
I'm Art Caplan at the Division of Medical Ethics at the NYU School of Medicine. Thanks for watching.
Medscape Business of Medicine © 2018 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: Arthur L. Caplan. Should We Try to Predict Child Abuse--and Proactively Prevent It? - Medscape - Mar 23, 2018.