For drug treatment decisions, machine learning must be more than a black box

The industry has its work cut out to rebuild trust in the tech following doubts about one pioneer, IBM Watson.

The concept of using machine learning to guide drug treatment has come under fire recently, with IBM Watson accused of giving bad advice about what therapies cancer patients should receive.

Others in the space are convinced that artificial intelligence could still be a valuable tool to help doctors make prescribing decisions, but trust in this kind of technology is currently low. Crucially, none of the currently available tools seek to replace doctors – Vantage spoke to several companies that stressed that the final decision about therapy always rested with the clinician.

Still, there are things that could improve confidence in these products. One is greater transparency: a common criticism of such tools is that they are black boxes, giving little insight into how they generate their treatment recommendations.

Another would be a seal of approval from regulators. But this might be harder to achieve, according to one analyst, Alan Louie of IDC Health Insights.

Expensive second opinion

Clinical support tools use machine learning to compare patients’ genomic data against the medical literature, for example to ascertain which therapies might work best in patients with certain mutations. Watson’s flagship oncology product ranks therapies in order of preference, while some others merely provide a list of potentially suitable treatments.
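
To make this concrete, below is a minimal sketch of the kind of lookup such a tool performs. Everything in it is a hypothetical placeholder: the mutations, therapies, scores and citations are invented for illustration, and a real system would mine curated literature and trial databases at far greater scale.

```python
# Toy sketch of a literature-matching clinical support lookup.
# All mutations, therapies, scores and citations are invented
# placeholders, not real clinical evidence.

# Hypothetical evidence base: mutation -> list of (therapy, score, citation)
EVIDENCE = {
    "EGFR L858R": [("erlotinib", 0.90, "PMID:0000001"),
                   ("gefitinib", 0.85, "PMID:0000002")],
    "BRAF V600E": [("vemurafenib", 0.88, "PMID:0000003")],
}

def rank_therapies(patient_mutations):
    """Collect supporting evidence for each candidate therapy and
    return therapies ranked best-first, with citations attached so
    the recommendation is not a black box."""
    scored = {}
    for mutation in patient_mutations:
        for therapy, score, citation in EVIDENCE.get(mutation, []):
            best, cites = scored.get(therapy, (0.0, []))
            scored[therapy] = (max(best, score), cites + [citation])
    return sorted(scored.items(), key=lambda kv: kv[1][0], reverse=True)

for therapy, (score, cites) in rank_therapies(["EGFR L858R", "BRAF V600E"]):
    print(f"{therapy}: score {score:.2f}, evidence {cites}")
```

A Watson-style tool would return the ranked list; simpler tools would stop at the unranked set of candidates. Keeping the citations attached to each suggestion is one concrete form the transparency discussed above could take.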

At present, such products do not require FDA approval as they are, strictly speaking, intended for research only, though there is little to stop doctors from using them in clinical practice to help make treatment decisions.

Advocates of artificial intelligence say machines are more efficient at this kind of research, speeding up searches that doctors would otherwise carry out themselves.

But this means the tools amount to little more than an “expensive second opinion”, IDC’s Mr Louie told Vantage. Watson for Oncology cost $800-1,000 per patient, he noted, but added: “I’m not sure how often they were getting that amount of money.”

Allegations that Watson’s algorithms led to incorrect treatment recommendations raise the question of whether these products are a waste of cash – and, more worryingly, of whether they could also be putting patients at risk. If so, should the FDA step in?

How to regulate

However, regulating these kinds of products would not be simple, according to Mr Louie. “The problem is that these tools don’t just look at a single cancer for which a company could submit evidence to the FDA saying this is the suggested approach for an individual – it doesn’t apply generically.”

He likened machine learning-based clinical support tools to genomic tests that fall under the Clinical Laboratory Improvement Amendments (CLIA), the framework covering lab-developed diagnostics.

These tests are also currently outside FDA oversight, but the agency has made noises about tightening up their regulation. So could machine learning-based tools be next?

“You’re talking about the FDA coming in and putting some grand stamp on Watson, and they would never put a stamp on everything – it would only be on an individual, per-cancer basis,” Mr Louie replied.

More transparent

What else, then, could industry do to improve trust in Watson for Oncology and other products like it?

Mr Louie opined that greater openness might help, a view shared by Roche and GE Healthcare. These companies have teamed up to develop clinical support tools combining their respective expertise in in vitro diagnostics and imaging.

Their first products, in oncology and critical care, should be available next year. Initially the companies are creating tools that will not require FDA approval under the current regulatory framework, but further down the line they hope to market products that recommend the best course of therapy for a given patient, which would fall into the regulated realm.

The training of artificial intelligence tools can be particularly problematic, John Lin, global strategy leader for GE Healthcare, told Vantage: “We’ve learned, through our own R&D efforts, that algorithms trained on one patient population or dataset don’t always extend readily to another population or dataset.”

He said the companies wanted to resolve such issues before bringing a product to the market, and that “publishing evidence around these kinds of things is a really important step that we’ll be taking”.
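
The generalisation problem Mr Lin describes can be demonstrated with a basic external-validation check: fit a model on one cohort, then score it on a cohort drawn from a different population. The sketch below uses synthetic cohorts and a generic classifier as stand-ins; it is not GE's or Roche's actual pipeline.

```python
# Sketch of external validation across patient cohorts, illustrating
# the dataset-shift problem described above. The cohorts and model
# are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
WEIGHTS = np.array([1.0, -0.5, 0.8, 0.2, 0.3])

def make_cohort(n_patients, feature_shift, outcome_shift):
    """Synthetic cohort: `feature_shift` mimics demographic drift,
    `outcome_shift` mimics a changed feature-outcome relationship."""
    X = rng.normal(loc=feature_shift, scale=1.0, size=(n_patients, 5))
    logits = X @ WEIGHTS + outcome_shift + rng.normal(size=n_patients)
    return X, (logits > 0).astype(int)

X_train, y_train = make_cohort(2000, 0.0, 0.0)   # development population
X_ext, y_ext = make_cohort(2000, 1.0, -2.0)      # different population

model = LogisticRegression().fit(X_train, y_train)

# The internal hold-out looks fine; the external cohort exposes the shift.
X_hold, y_hold = make_cohort(500, 0.0, 0.0)
print("internal accuracy:", model.score(X_hold, y_hold))
print("external accuracy:", model.score(X_ext, y_ext))
```

Publishing internal-versus-external comparisons like this is one form the evidence Mr Lin mentions could take.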

Everybody else is doing it, so why can’t we?

Still, this might not reassure people worried about the validity of available clinical support tools like Watson for Oncology.

One company already commercialising a research-use-only product is Sophia Genetics; the group is working towards FDA approval, but this could take time.

The Swiss group’s vice-president of marketing, Tarik Dlala, admitted that the field had come under attack recently but added: “There are companies doing it the right way. We’re one example and there are others.”

Still, GE’s John Lin concluded: “It’s important not to over-promise. We are already seeing some great benefits, but AI in healthcare is still at an early stage.”

Perhaps Watson tried to run before it could walk. As IDC’s Mr Louie put it: “Watson is aspiring to hit the home run while people are still at first base.”

This story has been amended for clarity.
