Clinical Applications of Artificial Intelligence in Urologic Oncology

Sharif Hosein; Chanan R. Reitblat; Eugene B. Cone; Quoc-Dien Trinh

Curr Opin Urol. 2020;30(6):748-753. 

Diagnostic Imaging

Recognizing suspicious patterns on imaging requires skill and intuition that may take decades of experience to develop. There is potential to advance the field of diagnostic imaging by leveraging artificial intelligence to mitigate gaps in skill and experience.

Prostate Cancer

Accurately identifying areas of concern within the prostate is critical to guide fusion or in-bore targeted prostate biopsies.[10] The Prostate Imaging Reporting and Data System (PI-RADS) was developed to establish a common nomenclature for describing prostate MRIs.[11] Although the widespread adoption of PI-RADS has improved the detection of high-grade lesions and minimized the detection of low-grade lesions, a degree of intraobserver and interobserver discrepancy in PI-RADS scoring remains.[12] To address these issues, Ishioka et al. created a convolutional neural network (CNN), a type of algorithm commonly applied to image recognition, to evaluate MRIs and determine which regions warrant a biopsy, mimicking the work of a radiologist assigning PI-RADS scores. Two algorithms were trained on images from 301 patients confirmed as true positive or negative for malignancy via biopsy, and were then assessed on images from 34 patients who underwent MRI for suspicion of prostate cancer.[13] Although the CNN's sensitivity and specificity were lower than those of the radiologists, further training on larger datasets may improve detection of cancerous lesions enough for the algorithm to act in tandem with clinicians as a second opinion.

Hectors et al. similarly applied artificial intelligence to MRI interpretation for prostate cancer diagnosis by using radiomic features to assign Gleason scores to suspicious lesions. Radiomic data from 64 patients who had undergone MRI were correlated with their respective Gleason scores and used to train a machine learning model, which performed well (area under the curve 0.72) at predicting Gleason scores of 8 or higher.[14] However, as this study was conducted on a small population at a single center, the algorithm is likely overfit to that dataset and may not perform as well on external validation. Furthermore, the algorithm was not optimized for, and may therefore miss, lower-grade yet clinically significant lesions.

Yang et al. used multiparametric MRI (mpMRI) metrics to train a machine learning model to identify targets for radiation boost. Their model analyzed radiomic features, including geometric descriptors and gray-level intensity, to predict which regions would benefit from dose escalation.[15]
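The studies above frame lesion detection as an image classification problem. As a purely illustrative sketch, and not the architecture published by Ishioka et al. or Hectors et al., the following Python/PyTorch code shows the kind of small CNN that scores MRI patches as suspicious or not; the patch size, channel count, and layer dimensions are assumptions chosen for brevity.

# Minimal sketch (not the authors' published architecture): a small CNN that
# scores 2D MRI patches as "suspicious" vs. "not suspicious", illustrating the
# kind of per-region classifier described above. All sizes are illustrative.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel MRI patch (assumption)
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 1)     # logit for "region warrants biopsy"

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)
        return self.classifier(x)

if __name__ == "__main__":
    model = PatchClassifier()
    dummy_batch = torch.randn(8, 1, 64, 64)   # 8 synthetic 64x64 patches
    probs = torch.sigmoid(model(dummy_batch))  # probability each patch is suspicious
    print(probs.shape)                         # torch.Size([8, 1])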

Kidney Cancer

The incidence of renal cancer increased from 1975 to 2016, primarily because of increased incidental detection of tumors on imaging studies.[16,17] These tumors are often treated, but increased treatment has not translated into a significant decrease in renal cancer mortality, raising concerns of overtreatment.[18,19] Clinicians would therefore benefit from improved risk stratification of incidentally discovered tumors. Baghdadi et al. developed a machine learning method that differentiated between clear cell renal cell carcinoma (CCRCC) and oncocytomas on computed tomography (CT) scans with 95% accuracy. Using 192 renal masses, they developed their model by first segmenting normal and abnormal kidney tissue and then assigning a peak early-phase enhancement ratio (PEER) to each tumor. Their algorithm identified regions of interest and subclassified histologically confirmed tumors with 100% sensitivity and 89% specificity.[20] Zabihollahy et al. created a CNN that also classified solid renal tumors from CT scans; their system differentiated renal cell carcinoma (RCC) from benign masses with 83.5% accuracy and 89.05% precision.[21] Artificial intelligence has also been used in preoperative planning to construct 3D models of kidney and tumor morphology from CT scans. Heller et al. sought to optimize surgical results by training an algorithm to automatically segment tumors using data from retrospective CT scans and patient outcomes. Their model's segmentation overlapped well with manually segmented kidney scans.[22]
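Segmentation quality of the kind reported by Heller et al. is typically expressed as the overlap between automated and manual masks. The sketch below, which uses synthetic masks standing in for real CT segmentations and is not code from the study itself, shows how the commonly used Dice similarity coefficient quantifies that overlap.

# Minimal sketch of an overlap metric commonly used to compare automated and
# manual kidney/tumor segmentations: the Dice similarity coefficient.
# The masks below are synthetic; in practice they would come from CT volumes.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

if __name__ == "__main__":
    manual = np.zeros((128, 128), dtype=np.uint8)
    manual[40:90, 40:90] = 1       # manually segmented tumor region (synthetic)
    automated = np.zeros((128, 128), dtype=np.uint8)
    automated[45:95, 45:95] = 1    # model-predicted region (synthetic)
    print(f"Dice overlap: {dice_coefficient(automated, manual):.3f}")  # ~0.81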

Bladder Cancer

In patients with suspected bladder cancer, CT or MRI urography and cystoscopy are the primary methods of diagnosis and staging.[23,24] Assessing CT urography scans, however, is labor-intensive and error-prone.[25] Garapati et al. designed machine learning models that stage bladder cancer from CT images to address the mis-staging of indolent cancers. They used CT scans from 76 patients to train several artificial intelligence algorithms to classify tumors as stage T2 or greater versus less than T2, a threshold that corresponds to the need for neoadjuvant chemotherapy. Their most successful algorithm classified tumors effectively on two independent testing sets when compared with radiologist-classified lesions.[26] As an adjunct to white light cystoscopy, Shkolyar et al. designed a CNN, CystoNet, capable of performing image analysis and detecting tumors on cystoscopy videos. In a prospective cohort of 54 patients, CystoNet's per-frame sensitivity and specificity were 90.9% and 98.6%, respectively.[27] Eminaga et al. proposed a similar CNN capable of diagnosing 44 different urological findings on cystoscopy, including normal tissues, prostate cancer, and urothelial cancer, with 99.45% accuracy. However, their model was trained and validated on static images from a cystoscopy atlas published in 1985 and has not been tested on a patient cohort.[28]
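For frame-level detectors such as CystoNet, performance is summarized by per-frame sensitivity and specificity. The sketch below, using synthetic labels and predictions rather than data from the study, shows how those two figures are computed from frame-by-frame binary calls.

# Minimal sketch of per-frame sensitivity and specificity for a frame-level
# tumor detector evaluated on cystoscopy video. Labels and predictions are
# synthetic stand-ins for frame-by-frame model output.
from typing import Sequence, Tuple

def per_frame_metrics(predictions: Sequence[int], labels: Sequence[int]) -> Tuple[float, float]:
    """Return (sensitivity, specificity) for binary per-frame calls (1 = tumor)."""
    tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    tn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 0)
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

if __name__ == "__main__":
    labels      = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]   # ground-truth tumor frames (synthetic)
    predictions = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]   # detector output (synthetic)
    sens, spec = per_frame_metrics(predictions, labels)
    print(f"Sensitivity: {sens:.2f}  Specificity: {spec:.2f}")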
