Peter Chang: Importantly, AI, artificial intelligence, machine learning is really a technology, right? It doesn’t really stand on its own. What it does is it oftentimes interfaces with and injects itself into other types of innovation and technology, like virtual reality or fusion biopsy. All of these types of things can in fact be enhanced or augmented in some way using artificial intelligence, and I think that’s the right way to think about this tool. And so that’s what I’m going to try to talk about today: highlight some of the opportunities and ways we might imagine this technology can help transform and disrupt urology.

Now, these are some of my disclosures, none of which I will reference at any point during this talk. All right, so to begin, my guess is that most of you in the audience over the past several months or years have likely heard of some innovative, newer, or rather impressive technology leveraging artificial intelligence in some way outside of medicine, things like self-driving cars or facial recognition. These are technologies that we almost take for granted today, but if you had asked an engineer or an expert 10 or 15 years ago, these are technologies that really wouldn’t have been possible. It’s remarkable how new and how rapid the innovation in this field has been.

As an example, take voice recognition. Just 10 years ago, a decade ago, this was a military-grade research endeavor, hundreds of millions of dollars spent, and it’s now a technology we can all access on our cell phones, each and every one of us. Take, for example, also the game of Go: there are more possible moves in this game than atoms in the universe. So it was really impossible to properly master with any of our traditional machine learning and AI techniques. It really needs some technology that can intuit and play the game properly.

Of course, the natural extension is that we’re now seeing this technology seep into healthcare. Headlines, some hype and some more realistic, about artificial intelligence perhaps replacing the job of physicians or perhaps augmenting it in certain ways. Anyway, we’ll talk about some of that during the course of this particular session.

Now before I dive into the opportunities, I want to take a step back and look at the history of artificial intelligence and really what brought us to the current state that we’re in right now. In doing so, I want to define several oftentimes similar, related, and interchangeably used terms and sort of highlight again the evolution over time. So first we have the term artificial intelligence. This is the most generic of all terms. It really references any type of software that’s created to mimic or reproduce human behavior of any type, so very, very generic. The term, as you might imagine, is quite old. So just about a decade after the first computers were made publicly available, a team of very ambitious scientists here in the summer of 1955 declared that they would get together and largely solve the problem of artificial intelligence in just one summer. Now, of course, they did not solve artificial intelligence in those several months, but what they did do was lay the groundwork for what would be decades of research and innovation to come.

Now in those early days, artificial intelligence, again, really could be anything under the sun, and because of a lack of hardware and proper algorithms, most of these types of tools were rule-based. In other words, I as a human would come up with a set of rules, a set of if-then statements, that if properly executed one after another would in some ways simulate or reproduce intelligent behavior. A good example of this I oftentimes give is a hypothetical agent which needs to learn how to play tic-tac-toe. It’s a relatively simple game, and as you might imagine, with just these six rules, I can play the perfect game of tic-tac-toe every time, every single time.
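
To make the rule-based idea concrete, here is a minimal sketch of such an agent. The rules and their priority ordering below are illustrative stand-ins, not the exact six rules from the slide:

```python
# A rule-based (if-then) tic-tac-toe agent: every decision comes from
# rules a human wrote down ahead of time, not from learned parameters.
# The board is a list of 9 cells containing "X", "O", or " ".

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winning_move(board, player):
    """Return a square that completes a line for `player`, else None."""
    for line in WIN_LINES:
        vals = [board[i] for i in line]
        if vals.count(player) == 2 and vals.count(" ") == 1:
            return line[vals.index(" ")]
    return None

def choose_move(board, player="X"):
    """Apply hand-written rules in priority order."""
    opponent = "O" if player == "X" else "X"
    move = winning_move(board, player)        # Rule 1: win if you can
    if move is not None:
        return move
    move = winning_move(board, opponent)      # Rule 2: block an opponent win
    if move is not None:
        return move
    if board[4] == " ":                       # Rule 3: take the center
        return 4
    for i in (0, 2, 6, 8):                    # Rule 4: take a corner
        if board[i] == " ":
            return i
    for i in (1, 3, 5, 7):                    # Rule 5: take a side
        if board[i] == " ":
            return i
    return None                               # board is full
```

For example, `choose_move(["X", "X", " ", "O", "O", " ", " ", " ", " "], "X")` returns square 2, the winning move, because Rule 1 fires before any other rule is considered.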

Now, even though this might seem rudimentary, it’s true even today that this is actually the most common type of artificial intelligence, very, very commonly seen. And so if you’re trying to evaluate a tool, evaluate how a particular algorithm might work, think of this first because it’s quite common. Now, of course, there are some problems where I as a human may not know ahead of time, a priori, what the right rules are to make a particular decision or prediction.

Take, for example, a number of prostate lesions on MRI and the attempt to predict the pathology of that lesion on imaging alone, so Gleason score or malignancy, however you want to call it. Now, there are certainly some things that I as a human might suspect should contribute: the amount of dark signal on ADC or the amount of dark signal on T2, perhaps the blurriness of the margin. So there are some things that I as a human can essentially program in and write rules for and write hypotheses for, maybe even mathematical formulas that capture texture or something about the image. And that’s where machine learning really starts to evolve. I can take those hypotheses, and by finding lots of data, lots of examples of a particular entity of interest, I can test my hypotheses and let a machine essentially figure out what combination of those features might lead to a successful prediction, a big-data-driven approach.
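
As a sketch of this classical recipe, the toy example below trains a simple logistic-regression classifier on hypothetical hand-crafted lesion features. The feature names and the synthetic data are invented purely for illustration; the point is that the human chooses the features and the machine only learns how to weigh them:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical human-chosen features per lesion (arbitrary units):
# ADC darkness, T2 darkness, margin blurriness.
X = rng.normal(size=(n, 3))

# Synthetic ground truth: lesion is "significant" when a weighted
# combination of the features is high, plus a little noise.
y = (X @ np.array([1.5, 1.0, 0.5]) + 0.3 * rng.normal(size=n) > 0).astype(float)

# Logistic regression via gradient descent: the machine finds the
# weights, but the features themselves came from a human hypothesis.
w = np.zeros(3)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability
    w -= 0.5 * (X.T @ (p - y) / n)           # gradient step on weights
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == y)           # training accuracy
```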

Now again, while this really did represent quite a leap forward in the technology, there might oftentimes be cases, as you might imagine, where I as a human may not know all the features that I want to test for. I think that there’s some correlation, I think there are some features that make sense to me, but there might be something else that would be completely novel and completely impossible for me to come up with on my own, and that’s really where deep learning and neural networks come into play. It’s essentially a technology where, for the first time, no human intervention needs to be injected during the training process at all. I don’t have to come up with assumptions on my own. I can simply feed the algorithm paired training images with a particular outcome and let the system itself determine what the necessary features are for a successful prediction.

Another analogy I like to give to the engineers and coders out there is that traditionally, I would write tens of thousands of lines of code to try to capture some sort of decision-making process or some sort of features or hypotheses I have in my head. With neural networks, my code is now based on the simple premise of creating a virtual network of neurons. I just create this virtual network, I give it the capacity to reorganize as I show it data, and through that process alone, all of the intelligent behavior and all sorts of interesting patterns emerge naturally, with no other work for me as a human. So a really remarkable, really recent advance in technology that, again, has enabled the types of applications we saw in that initial slide.
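
The “virtual network of neurons” premise can be sketched in a few dozen lines. This toy network learns XOR, a pattern no single linear rule can capture, purely from input/output pairs; the layer sizes and learning rate are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Paired examples only; no human-written rules or features.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])          # XOR outcome

# A tiny two-layer "network of neurons" with random starting weights.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p):
    """Binary cross-entropy loss against the targets y."""
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

initial_loss = bce(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2))

# Backpropagation: the network "reorganizes" as it is shown data.
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                    # hidden neurons
    p = sigmoid(h @ W2 + b2)                    # output prediction
    d2 = (p - y) / len(X)
    d1 = (d2 @ W2.T) * (1 - h ** 2)
    W2 -= h.T @ d2; b2 -= d2.sum(0)
    W1 -= X.T @ d1; b1 -= d1.sum(0)

final_loss = bce(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2))
```

No rule in this code mentions XOR; whatever internal features the hidden neurons need, they discover on their own as the loss falls.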

So what can we do with this technology? What can we do in medicine? Well, the list here is quite broad. The answer’s really almost anything. If I have a problem where I have some type of input that I want to match to a particular output and I have enough of that data available, those are really problems that I can try to use AI to solve in some way. And as you can see here in our center at Irvine, in just the past nine months, we’ve really tackled problems from head to toe, including those in urology, which we’ll talk about today. Now before I go into the specific details there, I’m going to show you one example. This is not a urology example, but it sort of highlights the idea of things that are possible, and it also goes back to the theme of this conference, which is translation.

We want to not only create these tools and test them in a research bubble but see how these tools actually work in the everyday workflow. So this is a particular tool for detection of hemorrhage on head CT. It actually now runs on every single head CT that’s done in the emergency room at our institution. In real-time, you’ll see a little green dot here indicating cases of head CTs that have already been successfully cleared for hemorrhage by our AI engine. As a new patient is added to the list, the AI system begins to run and identifies the proper sequences it needs to interpret, and in the case of a positive hemorrhage, the status turns red, notifying the radiologist that that study needs to be looked at immediately. By clicking on that patient’s exam, you can also, in the same system, scroll through the head CT and in fact confirm that there is a hemorrhage in the brainstem, and with the click of a single additional button, quantify that volume of hemorrhage, all in real-time.

Nope, I don’t need to play it again. And again, quickly, just to highlight the fact that though I’m a radiologist, these tools are not unique or special to radiology itself. We’ve done plenty of projects in the realm of pathology, counting cells and counting different morphologies, and even in non-imaging time-series data sets, manometry readings in the esophagus, for example. But anyway, we’re all here because we’re interested in urology applications. So let’s go through a couple of examples, and the way I’m going to present the next several slides is to really create more of a framework for understanding the types of applications. I’m not going to list everything that’s possible, because that’s just too much, but these are some of the main categories to think about as you’re coming up with potential applications.

The first category I’m going to lead off here with is quantification, so tools to precisely, objectively measure some sort of disease entity or normal physiology. In this category, the most common application by far is this idea of segmentation, image segmentation on radiology images. For those that are not aware, it’s simply going through a particular image, looking at each pixel or voxel, and trying to classify it into one of a number of categories. Again, this can be either a disease process or something normal. In doing so, as you might imagine, that will allow me to quantify volumes, certainly to some extent. What it will also enable you to do is, again, use AI as a bridge to many other technologies that we heard about today. Take the example with virtual and augmented reality: it takes one to two hours for a human to manually label each one of these anatomic structures; for a neural network, this would be done on the order of seconds.
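
The quantification step that follows segmentation is simple enough to sketch directly: once every voxel is classified, organ or lesion volume falls out of the voxel count and the voxel spacing. The mask and spacing below are synthetic placeholders standing in for a network’s output:

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary 3D segmentation mask in milliliters.

    mask       : 3D array of 0/1 voxel labels (e.g. from a neural network)
    spacing_mm : (dz, dy, dx) voxel dimensions in millimeters
    """
    voxel_mm3 = float(np.prod(spacing_mm))      # volume of a single voxel
    return mask.sum() * voxel_mm3 / 1000.0      # mm^3 -> mL

# Synthetic example: a 10 x 10 x 10 voxel "lesion" at 1 mm isotropic spacing.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:30, 20:30, 20:30] = 1
volume = mask_volume_ml(mask, (1.0, 1.0, 1.0))  # 1000 voxels * 1 mm^3 = 1.0 mL
```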

All right, I’ll say that this problem is virtually solved by AI, which is really a remarkable feat. Just 10 or 20 years ago, this idea of image segmentation was very extensively researched. There were all sorts of novel, interesting techniques that people came up with, but really, prior to neural networks, the problem remained an open challenge. With neural networks in recent years, we’ve now seen over and over again, repeatedly, that as long as you have enough annotated data, the algorithm is able to universally reach and/or exceed human performance in, again, every application we’ve looked at.

This is in part mediated also by a number of very well-described algorithms, both tools that we’ve designed at our center and also at many other research labs in the country, and compared to some of the other types of applications we’ll see here, this type of tool needs relatively little data to train, and I’m going to have a slide on that in just a second. So what are some examples we’ve looked at? Well, renal cell carcinoma, RCC on CT, has been an interest of our group. In this particular project, we trained an AI system first to identify the renal fascia; it’s a big triphasic CT exam, thousands of images, but the kidneys are really only a small portion of that. So we had the tool first crop out the kidneys of interest, then a second tool go through and identify the entire renal contour, including tumor, and then finally a tool that identifies the cancer itself, so a three-step algorithm. While this is shown in two dimensions, the algorithm is in fact a full 3D algorithm, so again, in a few seconds, you get all this data done automatically with the deep learning system.

Prostate segmentation has been another very, very popular application, not just in our lab but across the country. And suffice it to say that we can approach near-human accuracy on this task very, very easily. In this particular example, we actually did a special analysis to look at the number of cases you had for training and what the effect on algorithm accuracy was, to again highlight the fact that this is actually not a very data-intensive type of task. You’ll notice that after just about 50 or 100 cases, the algorithm has really started to plateau, reaching about an 85% to 90% Dice score, and that the last few hundred cases we added to our caseload gave us just an incremental benefit. So really not that much data is needed to train these algorithms to high performance.
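
The Dice score mentioned above is just a measure of overlap between a predicted and a reference mask. A minimal sketch, using small synthetic 2D masks in place of real segmentations:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Reference mask: a 4 x 4 square (16 pixels); prediction: the same
# square shifted down one row, so 12 pixels overlap.
truth = np.zeros((8, 8), dtype=np.uint8); truth[2:6, 2:6] = 1
pred  = np.zeros((8, 8), dtype=np.uint8); pred[3:7, 2:6] = 1
score = dice(pred, truth)   # 2 * 12 / (16 + 16) = 0.75
```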

Moving aside from a 3D volumetric approach, sometimes there are things that you just simply need to measure. So you have landmark A on one side and a second landmark B on the other. Those two landmarks can be easily detected by a deep learning system using regression-type networks. And then of course, from there, the measurement in three dimensions can be easily calculated between the two points. I’ll also point out that, again, compared to other applications, this is another relatively simple task for the neural network to learn and does not require nearly as much data as some of the more challenging problems you can find.
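
Once a regression network has placed the two landmarks, the measurement itself is just a Euclidean distance scaled by the voxel spacing. The coordinates and spacing below are illustrative:

```python
import numpy as np

def landmark_distance_mm(a_vox, b_vox, spacing_mm):
    """3D distance in mm between two landmarks given as (z, y, x) voxel indices."""
    a = np.asarray(a_vox, dtype=float) * np.asarray(spacing_mm, dtype=float)
    b = np.asarray(b_vox, dtype=float) * np.asarray(spacing_mm, dtype=float)
    return float(np.linalg.norm(a - b))

# Two landmarks on the same slice, 30 pixels apart at 0.5 mm in-plane
# resolution and 3 mm slice thickness.
d = landmark_distance_mm((10, 40, 40), (10, 40, 70), (3.0, 0.5, 0.5))  # 15.0 mm
```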

The second category here is the objective characterization of some type of disease process. Volumes, of course, are one way to quantify things, but sometimes there’s a finding: how dark is this blob here that I see on the image, or how blurry are the edges? Those are things that, as a radiologist, I know I’m really bad at. I know I’m oftentimes just taking a stab in the dark, but they’re something an AI system can very objectively and easily do. One example I have here is a tool we essentially made to automate PI-RADS scoring. So building off of the prostate segmentation algorithm that we have, we run that first and generate cropped images of the prostate itself. We then have a second algorithm go through and detect the actual lesions in the entire prostate, so some areas of questionable concern here. Eventually, the algorithm rules those out and then goes in and finds a dominant lesion over here on the right.

The lesion is then fully segmented in three dimensions and fed to a final algorithm that attempts to score the PI-RADS. Now this is a very interesting algorithm, because the system has essentially seen thousands and thousands of lesions over time, and not only does it make a classification or regression of what it thinks the lesion should be, but it can actually show you, based on its repository of previous lesions, what it expects a lower- or higher-grade lesion to look like, and it gives that to you visually so that, as a human, if you don’t agree, you can certainly slide the needle one way or the other. That kind of enables the radiologist to also have the final say.

The next category I’ll include here is prediction, so trying to essentially prognosticate either underlying pathology or some sort of outcome, therapeutic outcome or perhaps patient outcome. When I say prediction, the first question, of course, is, “Well, what are you using to make that prediction?” And you might think, I’ve shown all imaging examples here, so right, we’re going to use imaging, but I’ll take a step back and point out that non-imaging data is in fact even easier for neural networks to ingest, things like routine clinical risk factor scores that ingest very simple numeric data: age, PSA, for example, things like that. Any type of risk factor model we have that fits in one of these risk calculators is something that we can reformulate as a deep learning or neural network-based question, taking these combinations and feeding them through a neural network to train.

What this allows you to do: compared to a regular linear classifier, which essentially just tries to determine whether something is directly or inversely correlated with the outcome, a neural network is now able to model very sophisticated nonlinear patterns. So we can say, in the presence of risk factor A, if I have this and this, then they may synergize and really add together; or maybe these two risk factors are typically not good, but in the presence of a third risk factor, they’re actually okay or you can ignore them. You can come up with any number of these permutations. Really, any underlying pattern that might exist is something the neural network can find. And I will again emphasize that this one is relatively simple in architecture, so whereas most of our tools need to be trained on very expensive, big GPU hardware clusters, this is something you could train on your laptop in a matter of hours, so very easy to do.
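
A tiny sketch of why the hidden layer matters for this kind of interaction. The weights below are set by hand, just to show that a two-layer network can represent a pattern like “either factor alone raises risk, but the two together cancel out,” which no single linear score of the inputs can express; in a real model these weights would of course be learned from data:

```python
def relu(z):
    """Rectified linear unit, the standard hidden-layer nonlinearity."""
    return max(z, 0.0)

def interaction_risk(a, b):
    """Hand-weighted two-layer network over two binary risk factors.

    Returns 1.0 (high risk) when exactly one factor is present, 0.0
    when neither or both are present, an XOR-style interaction.
    """
    h1 = relu(a + b - 0.5)   # hidden unit 1: fires if at least one factor
    h2 = relu(a + b - 1.5)   # hidden unit 2: fires only if both factors
    return 1.0 if h1 - 3.0 * h2 > 0.0 else 0.0

outputs = [interaction_risk(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# outputs == [0.0, 1.0, 1.0, 0.0]
```

The second hidden unit exists only to detect the co-occurrence and subtract it back out; that combination of simple units is exactly the kind of pattern gradient descent discovers on its own during training.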

Of course, the flexibility of neural networks means that I don’t have to limit myself to routine clinical data. I can throw in images: exact same architecture, exact same type of system. I can just take a cropped image of an RCC on CT and feed that directly into the neural network to predict whether that lesion might be benign or malignant. So it’s very flexible, very powerful. And as you might imagine, and this is something we haven’t done yet in the urology world, if I have a system that can take in clinical data, imaging data, pathology data, really any type of data stream, I can then synthesize all of those together using the same network and actually combine predictions based on a number of different multimodal inputs.
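
Structurally, the multimodal idea is just concatenation: each data stream gets its own branch, and the fused representation feeds a shared head. The sketch below uses random placeholder weights and invented inputs purely to show the wiring; in practice every branch would be trained jointly, end to end:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical inputs: pooled features from an imaging branch (e.g. a
# CNN applied to a cropped lesion) plus simple clinical numbers.
image_features = rng.normal(size=64)
clinical = np.array([65.0, 8.2])            # e.g. age, PSA (illustrative)

# Placeholder (untrained) branch and head weights.
W_img = rng.normal(size=(64, 16))
W_cli = rng.normal(size=(2, 16))
W_out = rng.normal(size=32)

# Each modality is embedded separately, then concatenated into one
# fused representation that a single shared head scores.
h = np.concatenate([np.tanh(image_features @ W_img),
                    np.tanh(clinical @ W_cli)])      # 32-dim fused vector
risk = 1.0 / (1.0 + np.exp(-(h @ W_out)))            # single probability
```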

All right. And then the final category that I’ll touch on here is the idea of discovery: not only predicting a particular outcome or finding a diagnosis, but trying to tell us as humans something the algorithm learned along the way. The first example I’m going to give here is much more of a pure imaging type of example, but the question we oftentimes have with MRI is how do I make the exam faster. It’s a very long, tedious study. If I could shorten my exam time, that would be very significant, and one idea that’s been very popular is to simply subsample the original data. Subsample k-space, acquire not all the data, just part of the data, which makes my MRIs faster, and simply reconstruct that data, fill in the missing data, using a deep learning algorithm, taking into account, in other words, the natural symmetry that we see here in k-space as well as the properties of a normal image that we might expect.

Images are not random; they have specific, consistent patterns. Blobs tend to organize in certain ways. So these are all things that a deep learning system can learn over time to properly, again, go from subsampled, accelerated MRIs back to our original clean image.
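
The subsampling itself is easy to simulate. The sketch below stands in for the acquisition side only: it keeps one in four k-space lines of a synthetic “slice” and shows the naive zero-filled reconstruction. In the approach described above, a deep network, not zero-filling, would fill in the missing lines:

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.normal(size=(64, 64))     # synthetic stand-in for an MR slice

# Fully sampled k-space. For a real-valued image, k-space has conjugate
# symmetry, one of the structural properties a reconstruction can exploit.
kspace = np.fft.fft2(image)

# Acquire only every 4th phase-encode line: a 4x faster (simulated) scan.
mask = np.zeros(64, dtype=bool)
mask[::4] = True
undersampled = kspace * mask[:, None]

# Naive reconstruction: inverse transform with the missing lines zeroed.
# This is aliased; a learned model would replace this zero-filling step.
zero_filled = np.fft.ifft2(undersampled).real
kept_fraction = mask.mean()           # 0.25 of the data acquired
```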

And then this is an example I’m going to spend a few seconds just to elaborate on. So in this project, we trained an AI system first to detect brain cancer, brain tumors, GBMs, and essentially classify each lesion as belonging to a number of different mutation categories. That’s actually how we in the neuro world now characterize brain tumors; we don’t really use the WHO classification anymore. Anyway, so the algorithm was able to learn this, which is not surprising. We’ve sort of given some examples before where you can take some paired data and predict a number of permutations. But more importantly, after the algorithm trained and converged and predicted with about 90% accuracy, we asked, “Well, out of these thousands and thousands of cases that you’ve seen, what is it that you learned? What is it that made you tell me with high certainty that this is an IDH-mutant or wild-type case or a 1p/19q co-deleted case?”

This is interesting because really, outside of medicine, when you’re at Google classifying dogs from cats, that’s not a question you ask yourself, because you know what a dog and a cat look like. You’re not interested in that problem. But here in medicine, this is actually a big opportunity for the AI to really teach us humans something about what it learned, and so that’s what we did. We asked it, “Tell us the most prototypical features, the most common things that you’ve seen that make you lean one way or the other,” and what we were able to generate is an atlas, a completely AI-derived atlas of common imaging features, completely without human supervision.

What’s interesting about this project is that as we were writing up the discussion and doing a lit review, what we encountered were single case reports that oftentimes highlighted one or two of these things, one by one, and cumulatively, over 10 or 20 case reports, almost all of the features we found were corroborated in some way in the literature. But this was really the first time we were able to combine all that information in sort of one very elegant and efficient solution.

All right, so that more or less summarizes and highlights the dominant topics I wanted to talk about. Before I finish here, I just want to throw in a few notes about the center, because as you saw, we’re doing many different projects, working from head to toe, and I’m just a neuroradiologist, so how do we make this all happen? Well, the reality is that if you try to build a deep learning algorithm for the first time yourself, completely brand new, that first project may take you several years to get off the ground, and it’s not because the deep learning coding component is challenging.

It is. It’s hard. But I’m sure the folks in this room are more than savvy enough to pick that up. It turns out that the network design component is really just a very, very small part of a larger pipeline needed to do these projects properly. In other words, I need to first come up with some sort of experimental design, then download thousands of cases from my PACS, figure out how to archive them, figure out how to annotate them properly, figure out how I’m going to store them. And then after I get all that taken care of, I would write my algorithm and have to figure out if I have any GPUs available to me, and whether they’re connected in the proper way. And if I wanted to test it out in the clinical setting, I’d have to figure out how to reconnect things back into PACS, give them to the clinical teams, and present it all in a relatively user-friendly interface.

That part is the part that almost universally becomes a bottleneck for these types of projects, and these are the things that we’ve already built at our center, which makes doing new projects very easy. And so that’s actually my pitch now to the rest of you in this room. If you’re thinking, “Hey, I have a pretty interesting project and I think it could maybe use a little bit of artificial intelligence in some way,” we highly encourage you to simply reach out to us, please. I’m going to host one of the lunch sessions, I think, in just a few hours, so you can come find me there, or you can contact me here. We work with groups both internally at Irvine and across the country. And if you have a data set, you have some problem, you have something that you think could use some of this technology, let us know, and we have a team of engineers that can help translate it and give it right back to you. So with that, I’ll conclude. I think I have some time, but I think we’re running pretty late, right? So we’ll save maybe questions for… so anyway, thank you for your attention. I appreciate the invitation again, and I hope you learned something.