COMMENTARY

Hudis on CancerLinQ: Goal Is Real-Time 'Accident Avoidance System'

Kathy D. Miller, MD; Clifford A. Hudis, MD


January 22, 2019

Kathy D. Miller, MD: Hi. I'm Dr Kathy Miller, professor and associate director of clinical research at the Indiana University Simon Cancer Center in Indianapolis. Welcome to Medscape Oncology Insights. Joining me today is Clifford Hudis, chief executive officer of the American Society of Clinical Oncology (ASCO). Welcome, Cliff.

Clifford A. Hudis, MD: Thanks for having me.

Miller: I know you've been passionate about big data and the potential for big data in oncology for a long time. I thought it was time to get you in here to talk more about that. ASCO really got us started in this field quite a few years ago with a project known as CancerLinQ. Remind us all of the goal of CancerLinQ and where we are now with that project.

Hudis: We have to rewind to 2008 and the market crash. As we came out of the crash, the federal government incentivized the wholesale conversion of record-keeping in medicine—not just oncology—to electronic format. Somewhere north of $30 billion in TARP (the Troubled Asset Relief Program) funds were used to accelerate this conversion. [Editor's note: The EHR incentive payments were authorized by the HITECH Act, part of the American Recovery and Reinvestment Act of 2009, rather than TARP.]

There was a disconnect between the kinds of research results we get, especially in oncology, and the patients we actually treat.

Beginning in 2008, and in a relatively short period of time, the majority of record-keeping in America converted from paper and pen to electronic. The early view was that this would give us a new heretofore nonexistent resource, big piles of data that we could dive into. The talking points are well known. Three percent of adults go on to clinical trials. The vast majority of people who go on clinical trials do not represent the patients we actually treat in our clinics. There was a disconnect between the kinds of research results we get, especially in oncology, and the patients we actually treat.

In theory, this big pile of data from the real world could provide insights. I would never suggest that it will provide insights into new cures for cancer; that's not happening out there. But it might provide insights into unrecognized toxicities, interactions, subtle outcome differences with therapies, and off-label use, especially in the old days when drugs were frequently given off-label. That was the concept behind the push to aggregate big data. We can talk more about the challenges.

Miller: We've seen the power of big data applied to other fields. Amazon is probably everybody's favorite example. They are able to give me 2-hour delivery because they know before I order what I'm going to order, so they can stock the regional distribution centers with the right products. They do have the advantage, though, that all of that data is electronic and it's all in one system. The funds from TARP enabled much more of our data to become electronic. But we have a system problem.

Hudis: You've very quickly gotten to one of the many complex challenges here. You can summarize it as interoperability, but it boils down to that we don't consistently record specific data elements in the same standard way, both within our systems and across systems. There are all manner of tweaks. My favorite example from CancerLinQ is that we've identified more than 60 ways to record the single number you want for a white blood cell count.

Now, think about the transformation of all of those disparate data points and data sources into a single standard so you can begin your analytics. On top of that, I just gave you a hard number that's always in a grid in every system, even though they're represented differently. Now, go to the prose, to the free text records that contain the rich details about the course and outcomes per patient. Try to discern electronically the progression of disease or [presence of] stable disease in a wheelchair-bound 80-year-old receiving palliative therapy. It is not so simple. We are making real progress. I am excited about this, or we wouldn't be investing in it, but I believe that the complexity of what we have here is not immediately appreciated by people who have not been involved already.
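The transformation Dr Hudis describes—collapsing 60-plus spellings of a white blood cell count into one standard element—amounts to alias mapping plus unit conversion. A minimal sketch of such a rule follows; the field names, aliases, and unit table here are invented for illustration and are not actual CancerLinQ code:

```python
# Hypothetical illustration of one data-transformation rule: normalize
# disparate source-system representations of a white blood cell count
# into a single canonical field and unit. All names/units are invented.

# Map the many source-system spellings onto one canonical field name.
FIELD_ALIASES = {
    "wbc": "wbc",
    "wbc_count": "wbc",
    "white_blood_cells": "wbc",
    "leukocytes": "wbc",
}

# Convert source units to a single canonical unit (10^9 cells/L).
UNIT_FACTORS = {
    "10^9/L": 1.0,    # canonical
    "10^3/uL": 1.0,   # numerically identical
    "cells/uL": 1e-3, # e.g., 7500 cells/uL -> 7.5 x 10^9/L
}

def normalize_wbc(field_name, value, unit):
    """Return (canonical_field, canonical_value), or None if unmapped."""
    canonical = FIELD_ALIASES.get(field_name.strip().lower())
    factor = UNIT_FACTORS.get(unit)
    if canonical is None or factor is None:
        return None  # an unrecognized variant needs a new rule authored
    return canonical, float(value) * factor
```

In a real pipeline, every unrecognized variant that returns None becomes another authored rule—which is how a system accumulates tens of thousands of them.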

Miller: It sounds elegantly simple, although I realize it's not. If all of our data are electronic, put it all together; find some whiz kid from MIT to figure out how to merge 60 into one, and then we could learn all sorts of things about toxicities in different populations, what works in the real world, and how that might be different from our clinical trial data.

Hudis: We've done this, Kathy. The last I heard—and this number is probably higher—within just CancerLinQ we have more than 80,000 rules for transforming data that we've had to author. It's worth going back to something fundamental, though. Congress thought they were creating an interoperable system. Actually, within Congress, some expressed anger about the fact that we have these towers of Babel rather than a coordinated single system.

We would not have had to build the infrastructure pipes, the tubing, the software of CancerLinQ if we actually had a unified data standard across the entire field. But that horse has left the barn, and now we're essentially reverse-engineering standardization.

Miller: Just because it's difficult doesn't mean that it's not important or that we shouldn't invest in it and do it. Since 2008, what progress has CancerLinQ made? A lot of it involves building the pipes and whistles to make things talk to each other.

Hudis: My predecessor, Allen Lichter, had this idea, derived from work at the then Institute of Medicine, aimed at creating from this data what we call a rapid learning system. That's what you're describing—a system that sees or detects signs and signals in the data and alerts the community in some way. It could be about unsuspected toxicities or it could be trends and outcomes. It could be any number of uses at the clinical level. It could be measuring quality metrics, which is becoming increasingly important and needs to be electronic for efficiency.

Miller: It could be things we didn't think about asking.

Hudis: Right, all true. In 2011, as a small project within ASCO, we created CancerLinQ. We proved that we could take data—it wasn't real data at the time—but we could strip it down to deidentified data so that we could start to learn from it and generate some outcomes that looked like what we knew we saw in the real world. Over the next couple of years, we built, essentially, a small nonprofit business, a limited liability company, or LLC, wholly owned by ASCO, called CancerLinQ. The Q at the end stands for quality.

With that, we assembled a board of governors. They are a diverse group of people from industry and Silicon Valley. They're not our usual list of ASCO volunteers. CancerLinQ has its own CEO; we hired our second CEO this summer. Cory Wiegert came to us from IBM. It is a lean, small group that aims to conquer these challenges as quickly and efficiently as possible.

We have a small office in San Francisco that performs a whole lot of the technical work and then a larger group in Alexandria, Virginia, at our headquarters. They run semi-independently right now. Where have we gotten to? We've hooked up practices around the country, which provide a very diverse set of patient data. We have onboarded data from almost a million patients at this point. I'm very cautious about this, though, and a million is a big splashing number. Make no mistake about the limits therein. These records are the real-world records. They have all the gaps and errors and omissions...

Miller: And inconsistencies.

Hudis: All of that. The number of usable cases for specific purposes will always be dramatically less than that big headline number. Our intention, therefore, is to keep growing and onboarding more and more practices. We have a big backlog of practices waiting to onboard.

I want to clarify where we're going, though, because there's a fair bit of, if not confusion, then uncertainty about the goals. The reason ASCO, your nonprofit professional society, is conducting this project is to improve quality of care. That's the goal. We do that by measuring things that matter. ASCO often defines the measures and writes the metrics, reporting them back so docs like you and me can see where we might be doing less well than we've thought.

ASCO can then provide support to improve that quality of care, by far the first and most important goal. Our goal for this coming year is to deploy metrics that are firing, we say, or reading out of the dataset that we have so that the participating practices can start to get the return on their investment.

Miller: Let me ask you about safety and confidentiality. These are real patients' records. They're coming through electronic pipes and whistles to a big data warehouse. Hardly a week goes by that we don't hear about Walmart or Costco getting hacked or an airline getting hacked. Suddenly, personal data is out in places where it should not be. That's got to be a huge concern that I'm sure you have thought about deeply.

Hudis: Certainly. In my position, that's one of the two or three top causes of insomnia in my life. I'm not able to give technical answers on this. I can simply say that we subscribe to, adhere to, and continually upgrade to the highest standards of care available to us. If we have this data, we have risk. There is no such thing as zero risk, obviously, but we take every precaution we can. One small point is that one part of what we do is deidentifying this data. So for certain purposes, the big visible purposes for the projects our community wanted to do that we've already started—we have a lung cancer dataset; we have a colon cancer dataset; we have a few others—those are deidentified datasets [to begin with].
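For readers unfamiliar with what "deidentifying" record-level data involves, a rough sketch in the spirit of HIPAA Safe Harbor-style stripping follows. The field names and rules here are hypothetical illustrations, not a description of CancerLinQ's actual process:

```python
# Hypothetical sketch of record-level deidentification, loosely modeled
# on HIPAA Safe Harbor rules: drop direct identifiers, coarsen dates to
# the year, and bucket extreme ages. Not actual CancerLinQ code.

DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "address", "phone", "email"}

def deidentify(record):
    """Return a copy of `record` with identifying fields removed or coarsened."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # drop direct identifiers outright
        if key.endswith("_date"):
            out[key[:-5] + "_year"] = value[:4]  # keep only the year
        elif key == "age" and value > 89:
            out[key] = 90  # bucket ages over 89, per Safe Harbor
        else:
            out[key] = value
    return out
```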

Miller: How about feeding data back to the practices? I could imagine that happening in a couple of different ways, as practices might want. It could be aggregate data based on quality measures for individual physicians or the practice as a whole. I could imagine individual ticklers as well. I practice in a system that had one of the original homegrown electronic medical records (EMRs) more than 30 years ago. They did a lot of studies building in ticklers, so that if a primary care physician saw a woman at an age when she should have a mammogram, and the system didn't see a mammogram report when you opened her chart, it would send you a little tickler to remind you. They showed that those sorts of strategies worked. Are those individualized quality measures possible or on the horizon for CancerLinQ?

Hudis: The answer is very much yes. I'll stipulate that everybody who uses EMRs has already crossed to the other side of this Rubicon and now has alert fatigue. That's been well documented.

Miller: Yes. I am among them as well.

Hudis: Let's back up a half-step. Aviation safety has been a model that we in medicine continually come back to because, in certain ways, it's about culture as opposed to logistics or technology. We need to capture some of that in medicine, the blameless recognition of a near-miss, so that we can then address it, for example.

For years, ASCO has certified practices by assessing their quality in a program we call the Quality Oncology Practice Initiative, or QOPI. It's voluntary and free to all ASCO members. If you score above a certain threshold, you can take part in QOPI certification.

With QOPI certification, in addition to meeting metrics, you get an inspection and a careful review of certain soft processes within your office. Those practices that are certified are recognized as such. You probably see little tweets and notices from ASCO on a regular basis about high-performing practices.

We have taken that program globally. CancerLinQ is one more part of the foundation for an electronic future where those quality measures don't require manual abstraction by nurses. It doesn't happen once every 6 months looking back 4 months, which is the current model. Instead, to your point, it is happening in real time. You are seeing dashboards. We've configured them as donuts—how green or gray is your donut, meaning 0% to 100% filled. We've already begun to deploy that for some basic measures, which helps practices see where they have good data and where they don't.

I'll give you a very concrete example. Practices already are getting these reports on their data. You may think you must know the age of every patient you take care of. That donut will thus be all green all the way around.

But you'd be surprised that a codified, machine-readable cancer stage may not exist so consistently.
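The completeness "donut" described above is, at bottom, a per-field fill rate across a practice's records. A minimal sketch of that computation follows; the record structure and field names are invented for illustration:

```python
# Hypothetical sketch: the number behind a "how green is your donut"
# completeness dashboard is the fraction of records with a usable value
# for a given field. Field names here are invented.

def field_completeness(records, field):
    """Return the fraction (0.0-1.0) of records with a usable value for `field`."""
    if not records:
        return 0.0
    filled = sum(1 for r in records
                 if r.get(field) not in (None, "", "unknown"))
    return filled / len(records)
```

Run against real practice data, age completeness might come back near 1.0 while a codified TNM stage comes back far lower—exactly the surprise Dr Hudis describes.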

Miller: I am not surprised at all.

Hudis: TNM (tumor, node, metastases) doesn't look so good. But I see hope because if you think about the meaningful-use parameters, things like accurate smoking history, the compliance on them is nearly 100% in practices because, of course, they're linked to payment.

Miller: In many areas, we make progress and we pay attention to the things that are measured and the measurements we see. Putting data in front of people is a way to drive quality.

Hudis: My dream has always been this. I don't really want a quality program like QOPI to say, "Dr Hudis, in 2017, you only missed 7% of the patients you were supposed to give a bisphosphonate or an aromatase inhibitor to" or whatever it is. It doesn't matter. Yes, I missed them. Maybe I learned from that and I improved, but those individual patients aren't helped by the fact that I recognized [what I omitted] a year later.

What I want is an accident avoidance system—not a post-hoc analysis but avoidance—to the degree that we bring this into real time, start to have measures firing in real time, start to report your performance in real time or almost real time. Start to see patients, for example, where windows of eligibility for the right therapy are diminishing and you get the "tickler" you asked about. This indeed is our dream and this is what we're building toward. I'm very careful to say that we're not there yet. It will take a while longer. This is a huge, expensive, cumbersome, and complex project, but it's also really fun and rewarding to make this happen.

Miller: It will require a big cultural change in how we approach medicine, how we approach looking at performance and metrics. While the goal is forward-looking, we do have to be able to look at mistakes and near-misses to understand them and avoid them. That I think is still something we struggle with.

Hudis: Absolutely. There's a positive side to this. We started with the cacophony of medical records, systems, and standards. But coming out of this work, we now have such concrete evidence for the need for standards. This year's ASCO president, Monica Bertagnolli, has pushed forward with a project you probably haven't heard about yet, called M-CODE. It stands for Minimal Cancer Data Elements, or something like that. [Editor's note: M-CODE stands for Minimal Common Oncology Data Elements.]

This project is a multi-stakeholder meeting, and our goal is to establish a core of standards for the recording of basic data and then build out from that. We have quasi-governmental partnerships and others involved in this. Again, it's reengineering to make up for an unforced error in the past. The price we're paying for the lack of standardization, the way it's slowing us down, is motivating us all.

Just to point to your Amazon model, bear in mind that, yes, they and others can do all of the things you talk about. But they force the coding of the incoming data into very strict fields from the first click. They don't have this worry about going back and figuring out what you meant.
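The contrast drawn here—coercing data into strict fields at the moment of entry rather than cleaning it up retrospectively—is essentially schema validation at the point of capture. A minimal sketch follows; the schema and its rules are invented for illustration:

```python
# Hypothetical sketch of validate-on-entry: reject a bad value at the
# "first click" instead of reverse-engineering it later. The fields and
# allowed values below are invented for this illustration.

SCHEMA = {
    "wbc": lambda v: isinstance(v, (int, float)) and 0 <= v <= 500,
    "tnm_stage": lambda v: isinstance(v, str) and v.startswith("T"),
    "smoking_status": lambda v: v in {"never", "former", "current"},
}

def validate_entry(field, value):
    """Accept a value only if the field exists and the value passes its rule."""
    rule = SCHEMA.get(field)
    if rule is None:
        raise KeyError(f"unknown field: {field}")
    if not rule(value):
        raise ValueError(f"invalid value for {field}: {value!r}")
    return value
```

Data captured this way never needs the 80,000 after-the-fact transformation rules—the cost is moved up front, to the person entering it.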

Miller: Cliff, it's always fun to talk to you, and this is such an important area. We will look forward to those other developments from the CancerLinQ program.

This is Dr Kathy Miller for Medscape.
