One Step Closer Towards Personalized Epilepsy Management

Zhibin Chen; Alison Anderson; Zongyuan Ge; Patrick Kwan

Brain. 2021;144(6):1624-1626. 

Antiseizure medications (ASMs) have been the mainstay of epilepsy treatment for over a century, suppressing the occurrence of seizures without modifying the underlying pathology. The past 30 years represent an era of particularly rapid drug development, with nearly 20 new ASMs approved.[1] However, there is no reliable method to predict which medication will be most effective for a given patient, and one-third of patients continue to have seizures despite treatment.[2] Further, ASMs are associated with a range of adverse effects.[3]

The current standard of care relies on a trial-and-error approach with sequential regimens of ASMs, guided by limited comparative data. An ASM is selected mainly on the basis of a patient's reported seizure type(s) or epilepsy syndrome. However, for a given seizure type, a number of ASMs have demonstrated similar efficacy.[4] The physician typically narrows down the options by taking into consideration tolerability and safety in an attempt to 'individualize' treatment. With no reliable surrogate biomarker of treatment response, the patient can only wait for the passage of time to know whether the epilepsy is under control. If seizures recur or the patient experiences intolerable adverse effects, another drug regimen is tried, either as a substitute for or in combination with the original drug. Non-drug options are considered only when the epilepsy is regarded as 'pharmacoresistant'. This sequential treatment approach risks damage to the brain from uncontrolled seizures while the patient endures trials of ineffective treatments, and a more reliable way to select the most effective drug for an individual patient is needed. Recent advances in artificial intelligence (AI) are raising hopes that personalized epilepsy management could soon be a viable alternative to this trial-and-error approach (Figure 1). In line with this vision, in this issue of Brain, De Jong and co-workers[5] propose using machine learning to predict the response to an adjunctive ASM.

Figure 1.

Conceptual view of how machine learning may be applied for personalized drug selection in epilepsy. Instead of the present trial-and-error approach, a patient's relevant medical record is entered into a machine learning model, which acts as a clinical decision support tool for the physician to select the most effective medication. Adapted from Chen et al.[6]

Specifically, the authors developed and retrospectively validated machine learning models that aim to predict response to brivaracetam, using clinical and genomic data from two placebo-controlled phase III clinical trials [N01358 (NCT01261325) and N01252 (NCT00490035)]. Brivaracetam is a recently approved add-on therapy for children and adults with focal-onset seizures. The drug is believed to bind to the SV2A protein in synaptic vesicles and to alter neurotransmitter release into the synapse. The authors used the data from the first trial (N01358) to train the prediction models, and then validated the models using the data from the second trial (N01252). To maximize correspondence between the two datasets, the authors included only the 235 patients randomized to brivaracetam 100–200 mg/day and the 235 randomized to placebo in the first trial, and the 47 patients randomized to brivaracetam 100 mg/day in the second trial. Treatment response was defined as a >50% reduction in seizure frequency relative to baseline after 12 weeks.
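
As a concrete illustration, this responder definition maps onto a simple computation over seizure diary counts. The minimal sketch below assumes hypothetical per-patient baseline and treatment-period frequency columns; the trial's actual derivation may normalize diary windows differently.

```python
import pandas as pd

def label_responders(df: pd.DataFrame) -> pd.Series:
    """Label each patient as responder (1) or non-responder (0).

    A responder is defined, as in the trials, as a patient whose seizure
    frequency over the 12-week treatment period fell by more than 50%
    relative to baseline. The column names ('baseline_freq',
    'treatment_freq', both per 28 days) are hypothetical.
    """
    reduction = (df["baseline_freq"] - df["treatment_freq"]) / df["baseline_freq"]
    return (reduction > 0.5).astype(int)

# Example: a patient going from 8 to 3 seizures per 28 days is a responder.
patients = pd.DataFrame({"baseline_freq": [8, 6], "treatment_freq": [3, 4]})
print(label_responders(patients))  # 1, 0
```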

The authors systematically integrated four data modalities: (i) gene set-wise mutational load scores; (ii) a polygenic risk score; (iii) SV2A structural variation; and (iv) clinical data. These were fed into both traditional machine learning models, i.e. linear models and decision tree models, and a novel deep learning multimodal neural network for predicting response to brivaracetam. The decision tree classifiers outperformed the other machine learning approaches, with a single gradient-boosted trees classifier trained jointly on all data modalities achieving the highest area under the receiver operating characteristic curve (AUC): 0.76 in the training (discovery) dataset and 0.75 (95% confidence interval: 0.6–0.9) in the independent testing (validation) dataset.
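
To make the modelling recipe concrete, the sketch below shows the general pattern: a single gradient-boosted trees classifier trained on features concatenated across modalities and evaluated by AUC on a held-out trial. The random placeholder data, feature counts and scikit-learn defaults are assumptions standing in for the study's actual features and hyperparameter tuning.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical feature matrix standing in for the four modalities
# (mutational load scores, polygenic risk score, SV2A structural
# variation, clinical covariates) concatenated per patient.
n_train, n_test, n_features = 235, 47, 40
X_train = rng.normal(size=(n_train, n_features))
y_train = rng.integers(0, 2, size=n_train)
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.integers(0, 2, size=n_test)

# A single gradient-boosted trees classifier trained jointly on all
# modalities; hyperparameters are illustrative defaults, not the
# authors' settings.
model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Validation AUC: {auc:.2f}")  # ~0.5 here, since the data are random
```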

Given the complex associations between the data modalities and outcomes, it is to be expected that linear methods would fail to approximate the complex non-linear relationships, and indeed both linear models performed poorly, with AUC < 0.7. Disappointingly, the novel multimodal neural network performed just as poorly, with an AUC of 0.63. In general, deep learning methods can outperform traditional machine learning methods owing to their depth of architecture, with multiple successive layers and hierarchical representation learning (that is, the ability to learn complex representations from the data samples). However, their complex structure and large number of parameters also require large amounts of data to avoid overfitting and achieve the desired performance, and such data were unfortunately not available in the present study. Future large-scale studies will thus be required to identify the best performing methods for predicting ASM treatment outcome.
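
A back-of-the-envelope parameter count illustrates why overfitting looms here. Assuming a small multimodal network with one hidden layer per modality feeding a shared fusion layer (all input dimensions and layer widths below are hypothetical, not the authors' architecture), the parameter count already dwarfs the 235 training patients:

```python
# Back-of-the-envelope parameter count for a small multimodal network:
# one hidden layer per modality feeding a shared fusion layer.
modality_dims = {"mutational_load": 10, "polygenic": 1, "sv2a": 5, "clinical": 24}
hidden = 16   # per-modality hidden units (assumed)
fusion = 32   # fusion layer width (assumed)

per_modality = sum((d + 1) * hidden for d in modality_dims.values())  # weights + biases
fusion_params = (hidden * len(modality_dims) + 1) * fusion
output_params = fusion + 1
total = per_modality + fusion_params + output_params
print(total)  # ~2,800 parameters against only 235 training patients
```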

Notably, prediction by the integrated model was significantly improved compared with the clinical model alone. This finding encourages further application and enhancement of the methodology. Optimization of the feature selection step represents a critical challenge for the integrative modelling approach. In particular, the high dimensionality (number of features) of genomic data makes it challenging to incorporate. De Jong et al.[5] used a variety of information resources and a scoring system to derive a workable list of 40 features. The feature selection was highly tailored, taking into consideration what is known about the molecular underpinnings of both the disease and brivaracetam's mode of action. It is plausible that variations in this approach, and in the scoring system devised, could yield quite different results. Structural variants overlapping SV2A were found to be highly informative, highlighting that a drug's mechanism of action is an important consideration and that the lack of such knowledge for some ASMs might be a limiting factor.
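
One way to picture such knowledge-guided feature selection is to blend a data-driven relevance score with a biological prior and keep a fixed-size shortlist. The sketch below is illustrative only: the mutual-information score, the prior and the blending weight are assumptions, not the authors' actual scoring system.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_features(X, y, prior_scores, k=40, alpha=0.5):
    """Rank features by a weighted blend of a data-driven score
    (mutual information with the outcome) and a knowledge-driven prior
    in [0, 1] (e.g. relevance to epilepsy biology or the drug's
    target), then keep the top k. The blend weight alpha and the prior
    itself are hypothetical.
    """
    mi = mutual_info_classif(X, y, random_state=0)
    mi = mi / (mi.max() + 1e-12)             # normalize to [0, 1]
    combined = alpha * mi + (1 - alpha) * prior_scores
    return np.argsort(combined)[::-1][:k]    # indices of the top-k features
```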

As the four data modalities used in the integrated model may also affect placebo response, the model could be further validated using the placebo group to assess its robustness. Unfortunately, genomic data were available only for patients randomized to brivaracetam treatment and not for those randomized to placebo. It was therefore not possible to fully validate the integrated model in a control cohort. Nonetheless, the authors rigorously assessed the placebo response in the model trained only on clinical data, showing that its prediction of placebo response was no better than a random guess. The authors also performed univariate tests of the association between placebo response and the main clinical determinants of the model, and found no significant features. This provides further assurance that the model trained only on clinical data was not capturing the placebo response.
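
Such a univariate screen can be pictured as testing each clinical feature against placebo response one at a time, with correction for multiple comparisons. The sketch below assumes binary-encoded features and uses Fisher's exact test with Benjamini-Hochberg adjustment; the authors' exact statistical procedure may differ.

```python
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def univariate_screen(X_binary, y, alpha=0.05):
    """Test each binary clinical feature against placebo response (y)
    with Fisher's exact test, then adjust the p-values for multiple
    comparisons (Benjamini-Hochberg false discovery rate).
    """
    pvals = []
    for j in range(X_binary.shape[1]):
        # 2x2 contingency table: feature present/absent x responder/non-responder
        table = np.array([
            [np.sum((X_binary[:, j] == 1) & (y == 1)),
             np.sum((X_binary[:, j] == 1) & (y == 0))],
            [np.sum((X_binary[:, j] == 0) & (y == 1)),
             np.sum((X_binary[:, j] == 0) & (y == 0))],
        ])
        pvals.append(fisher_exact(table)[1])
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return reject, p_adj
```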

How could models like this be used in clinical research or practice? De Jong and colleagues[5] propose that the prediction model could be used to reduce sample size requirements in clinical trials by screening for potential responders to increase the overall response rate. This would be good news in terms of reducing the cost of drug development. Ultimately, patients would be empowered by the greater certainty of treatment outcome and be able to return to normal life more quickly. However, how might such trial design impact subsequent approval labelling by regulators? Would the drug be approved only for use in patients with a certain predicted probability of response? Given the 'black box' nature of machine learning models, would there be ethical concerns over denying patients with low predicted probability of response the possibility of participating in clinical trials and of subsequently receiving the treatment? Is there a risk that the models would take away the autonomy of physicians, or would they be used primarily as decision support tools? While the incorporation of genomic information improved the performance of the model in this study, the time and cost associated with genetic sequencing risk delaying treatment and reducing its accessibility in resource-limited settings. Careful health economics evaluation will be needed to determine the cost-effectiveness of such models prior to widespread implementation in public healthcare systems.
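
The potential sample-size saving from responder enrichment can be roughed out with the standard two-proportion formula. The responder rates below are hypothetical, chosen only to show the direction and rough magnitude of the effect:

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(p_treat, p_placebo, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for comparing two responder
    proportions (normal approximation, two-sided test)."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    var = p_treat * (1 - p_treat) + p_placebo * (1 - p_placebo)
    return ceil((z_a + z_b) ** 2 * var / (p_treat - p_placebo) ** 2)

# Hypothetical numbers: an unenriched trial with 38% vs 22% responder
# rates, versus an enriched trial where pre-screening raises the treated
# responder rate to 55% while placebo response stays at 22%.
print(n_per_arm(0.38, 0.22))  # ~125 per arm without enrichment
print(n_per_arm(0.55, 0.22))  # ~31 per arm with enrichment
```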

Nonetheless, while many questions remain to be addressed, the study by De Jong and co-workers represents an important step towards personalized epilepsy management. With concerted effort on the part of researchers, clinicians, patients, industry and other stakeholders, it is hoped that this vision will become a reality in the not-too-distant future.
