Life Sciences 360

Can AI Help Doctors Find Cancer Faster?

• Harsh Thakkar • Season 3 • Episode 76

Have you ever wondered if AI could help doctors find cancer faster and more accurately? In this episode, we explore OmniScreen, an AI-powered biomarker tool that is revolutionizing cancer diagnostics.

Joe Oakley, a pathologist and expert in digital pathology, explains how OmniScreen was trained on 3 million digital slides to detect cancer biomarkers with higher precision and speed. He shares how AI is reducing testing time, preserving tissue samples, and improving early cancer detection.

We also tackle the big question: Will AI replace pathologists? Joe breaks down the role of AI in pathology and why it should be seen as a tool to assist rather than replace doctors. Plus, learn how AI is helping hospitals, pharmaceutical companies, and researchers in drug development and clinical trials.

🔹 Key Topics Covered:
✔️ How OmniScreen is transforming cancer diagnostics
✔️ AI vs. traditional cancer detection methods
✔️ The role of AI in hospitals, pharma, and research
✔️ Will AI replace pathologists? The expert’s take
✔️ The future of AI-driven cancer research

🎙️ Guest: Joe Oakley, Lead Consultant / Founder at Oakley Pathology Consultants, LLC
🔗 Connect with Joe Oakley: LinkedIn

📌 Chapters:
00:00 Introduction
00:49 The problem: Can AI find cancer faster?
01:19 How OmniScreen speeds up cancer detection
01:58 The AI behind OmniScreen: Trained on 3 million slides
02:28 How cancer diagnosis worked before AI
04:31 The evolution of pathology and diagnostic tools
06:10 Why AI can detect cancer better than traditional tests
07:24 How OmniScreen works step by step
08:34 The real impact: Saving time, tissue, and costs
10:04 Who are the early adopters of OmniScreen?
11:59 The role of hospitals, pharma, and researchers
14:12 The cost of AI-driven cancer detection
15:52 The big debate: Will AI replace pathologists?
17:10 Why AI is a tool, not a replacement
19:35 Future impact on drug development and clinical trials
22:09 AI’s role in personalized medicine
24:42 The future of AI in cancer research
27:10 How AI is changing clinical trials
30:05 Key takeaways from the episode
32:40 Where to find Joe Oakley & final thoughts
36:00 Closing remarks

Subscribe for more insights on AI and healthcare!


For transcripts, check out the podcast website - www.lifesciencespod.com

Harsh Thakkar (00:08)
Hi, welcome to the show. Have you ever wondered if there's a faster way to find the right cancer treatment? I have always been searching for different AI companies and AI tools because I'm really passionate about this topic. And guess what I found? Sometime in like August 2024, I came across this AI-driven biomarker tool that really speeds up how fast and how accurately you can detect cancers.

Before I tell you who the guest is and how they're related to this technology, let me just give you some stats. This technology can look at 500 genes and find more than 1200 important biomarkers. It was trained on three million different images of slides, and it was built by a team at a company called Paige. The technology is called OmniScreen, and my guest today is...

one of the pathologists who was really involved in building this technology. So I wanted to bring him on here, talk to him, understand how this works and what are the implications for clinical trials and drug development. With that said, let's dive in and have a chat with Joe Oakley. Welcome to the show, Joe.

Joe Oakley (01:24)
Hi, Harsh. Thanks for having me on.

Harsh Thakkar (01:27)
Yeah, before we talk all about OmniScreen, I want to ask you: rewind back to when you were defining the problem statement and trying to build this tool. What was life before OmniScreen? Like, what were the older ways of detecting cancer through these slides, and what were some of the inefficiencies there?

Joe Oakley (01:50)
Well, there's actually a couple different eras that we can go back to. And it's interesting, I would actually argue that in some ways, OmniScreen is a way to make things that were old new again. So as a pathologist, you know, the knock on our field, or some would even say the advantage of our field, is that our main piece of technology for diagnosing cancer

Harsh Thakkar (02:05)
Hmm.

Joe Oakley (02:16)
hasn't changed in 200 years. We still use a light microscope looking at H&E-stained sections of either biopsies or tumor resections to make a diagnosis of cancer. And in the beginning, it was just, you know, pathologists who were looking at the patterns of the cells that they were seeing on the microscope. And you could associate those patterns with clinical behavior. That's how we know, for instance, what a breast cancer looks like. You know, we know that there's a certain appearance of the cells that

Harsh Thakkar (02:43)
Mmm.

Joe Oakley (02:45)
means that they're going to be malignant, and that every time we've seen this type of phenotype among these cells, this is a patient who is going to eventually have metastasis, if we don't do something about it. Now, as we got better in chemistry over the next couple centuries, we started to be able to add new techniques. And sort of the first revolution was actually proteomics-based. This would be immunohistochemistry.

where now we can do stains where we have an antibody looking for a specific protein except now it's got a signal that we can see on a light microscope. So it can tell us if not only is that a breast cancer that we can recognize off the H &E, but we can take additional tissue, do additional chemistry on it, and now see if it is expressing particular proteins.

And so this has become integral not only for diagnosis, but there are some very well-known drugs that are entirely dependent on using this technology to highlight the cases that should be getting these particular drugs. So staying just within sort of our breast cancer example, HER2 is the classic example of this, where we're going to be testing every breast cancer for HER2 expression by immunohistochemistry primarily. And if it's expressing a high enough level of HER2, it would qualify for...

Harsh Thakkar (03:46)
Mm-hmm.

Joe Oakley (03:59)
trastuzumab, or Herceptin. And then now we have even better therapeutics in Enhertu, which apparently, as we've seen, can attack cancers that have a much lower expression level of that protein. And so the diagnostic cutoff for us has now become a challenge using that classic technique, that immunohistochemistry, because Enhertu is just a drug that can hit a lower level of...

HER2 receptor, use it to find, bind, and then deliver the payload to those particular tumor cells. And then on top of that, again, as our chemistry got even better, we were able to add nucleic acid testing on top of this. And there's a couple different ways that we can do that. You can do in situ hybridization, where you're still looking for a signal, actually by a light microscope, for certain, you know, RNA expression, or, you know, you can sometimes probe for particular...

Harsh Thakkar (04:33)
Hmm.

Joe Oakley (04:55)
DNA targets; that's a little bit less common. But typically with nucleic acid testing, what we're doing very frequently is what we call kind of a grind and bind, where again, you're taking tissue, which you hope is at least approximately close to what you were looking at when you were making the diagnosis of cancer on the H&E slide. And you're scraping it off, and you're going to perform PCR for particular nucleic acid targets, DNA-specific mutations.

Harsh Thakkar (05:09)
Mm-hmm.

Joe Oakley (05:26)
Or next-generation sequencing relies on this, where you're going to be taking the tissue off of the slide or off the curls, you're going to be doing nucleic acid extraction from it, and then again, it's an entirely separate chemistry reaction now to go looking for, you know, the sequence of the DNA in that particular tumor. And that's now, you know, very important for entirely new classes of therapies that are targeted to specific mutations. So the problem with doing all of that,

you know, as it sounds, is it's all very involved, right? You know, you have multiple different chemistries, multiple different technologies, and every single one of them is requiring additional tissue. And particularly for, like, a biopsy specimen, you can really run yourself out of tissue very quickly if you're having to look for a bunch of different targets, especially if there are multiple IHCs that you need to look at either diagnostically or therapeutically, as well as, you know, additional nucleic acid testing.

Harsh Thakkar (05:57)
Yeah.

Hmm.

Hmm.

Joe Oakley (06:22)
You know, lung biopsies are notorious for this, because they're always trying to get the minimum amount of tissue so that they don't cause a pneumothorax going in, you know, with a giant needle or stabbing the patient in the lungs multiple times, right? So that was really the problem that we set out to solve with AI. And that was really sort of the genesis of OmniScreen and the paper that you were talking about there.

Harsh Thakkar (06:33)
Yeah.

Mm.

Yeah, so I know the article had come out in August 2024, when it was publicly announced how this model was built by the company, Paige. And you were a VP at the time, I think you were still working at Paige at that time, is that correct? And now you're a consultant in this field, right?

Joe Oakley (07:10)
Yes, now I'm a consultant in the field. So I was the vice president of biomarker development, which is how I got involved and why I'm a co-author on that particular paper helping develop that model. And now, of course, I do consulting on digital pathology as well as all of those traditional techniques for various pharmaceutical companies.

You know, digital pathology in general.

Harsh Thakkar (07:34)
Yeah, so for our audience that is maybe from a very technical or scientific discipline and may understand some of the stuff you're saying, or for our audience who are maybe not that technical, or who work in life sciences but don't really understand and want it simplified: how would you explain to them how OmniScreen works?

Joe Oakley (08:00)
So what OmniScreen does is it starts at the bare minimum that the pathologist is working with. So even today, we still get a biopsy or a resection specimen. And the first stain that is made of it is the H &E stain.

Because again, that's what we're using primarily for diagnosis. We're looking for a particular pattern, a phenotype, of those cells to say, you know, this is a cancer, this is the kind of cancer that we think it's going to be. Except now we can take that H&E slide and turn it into a digital image. And once we've turned it into a digital image, we can now basically train a computer to start to look for certain features that can now, you know, at least screen for, if not take the place of, some of these

Harsh Thakkar (08:34)
Mm.

Joe Oakley (08:47)
you know, traditional chemistry methods that I've spent the last couple of minutes describing. And that's really what OmniScreen is able to do, and do best. It's able to screen for, like you mentioned, the 500 genes that it's been validated against, because we were using the MSK-IMPACT next-generation sequencing panel as the ground truth when we were training OmniScreen. And it can identify, you know, as you mentioned, thousands of different mutations that are in the MSK-IMPACT database

that we have enough cases to validate its performance against. And so it's a really easy way to say, you know, in non-small cell lung cancer, for example, you can have that slide, that biopsy slide that you're looking at, and run it through the OmniScreen analysis. And OmniScreen can tell you, hey, I think this is a case that's highly likely to harbor an alteration in EGFR or KRAS or ALK.

And so that's now a case that would be high priority for some form of nucleic acid testing, most likely next-generation sequencing. And because the algorithm can at least make a best guess as to which gene it thinks is likely responsible for that, if it thinks, hey, I think this really harbors an EGFR mutation, you can now save some additional time, tissue, and money by even potentially screening after OmniScreen with some of the single-gene

Harsh Thakkar (09:43)
I'm gonna hit.

Joe Oakley (10:12)
methods that are out there. You may not need a full NGS panel for some of the targets from OmniScreen. You might be able to go with a smaller, more tailored solution really for the gene of interest, rather than having to spend all of that money, all of that tissue, and, to this day still, you know, a two-to-eight-week turnaround time, depending on who you're sending to, for a full next-generation sequencing panel. And I think it's particularly useful for... Oh, go ahead.
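
To make the triage workflow Joe describes here concrete, here is a minimal sketch in Python of how a lab might wire per-gene likelihood scores from an OmniScreen-style screen into its reflex-testing decisions. The function names, score format, and cutoffs are illustrative assumptions for this episode page, not Paige's actual API or recommended thresholds.

```python
# Hypothetical reflex-testing triage driven by an H&E screening model.
# Assumption: the model returns a per-gene probability that the tumor on a
# slide harbors an actionable alteration, e.g. {"EGFR": 0.91, "KRAS": 0.12}.
from dataclasses import dataclass


@dataclass
class TriageDecision:
    slide_id: str
    recommended_test: str  # "single_gene_assay", "ngs_panel", or "routine"
    rationale: str


def triage_slide(slide_id: str, gene_scores: dict[str, float],
                 single_gene_cutoff: float = 0.85,
                 ngs_cutoff: float = 0.50) -> TriageDecision:
    top_gene, top_score = max(gene_scores.items(), key=lambda kv: kv[1])
    if top_score >= single_gene_cutoff:
        # Confident single-gene call: a faster, cheaper targeted assay can
        # confirm it while preserving tissue.
        return TriageDecision(slide_id, "single_gene_assay",
                              f"high likelihood of {top_gene} alteration ({top_score:.2f})")
    if any(score >= ngs_cutoff for score in gene_scores.values()):
        # Something actionable is probably present but not clearly one gene:
        # prioritize this case for the full NGS panel.
        return TriageDecision(slide_id, "ngs_panel",
                              "one or more genes above the NGS-priority cutoff")
    return TriageDecision(slide_id, "routine",
                          "no gene above cutoff; follow the standard reflex policy")


print(triage_slide("NSCLC-0042", {"EGFR": 0.91, "KRAS": 0.12, "ALK": 0.04}))
```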

Harsh Thakkar (10:34)
Yeah.

No, I was going to say I'm glad you mentioned the MSK database, because for our listeners who are trying to understand where you're going: from what I understand researching this, Paige was a spin-off from Memorial Sloan Kettering Cancer Center back in 2021 as a computational pathology startup, is that correct? That's how Paige was born.

Joe Oakley (11:03)
Yes, that's correct. Paige was a computational pathology spin-off from Memorial Sloan Kettering. And that's what really powers OmniScreen more than anything else: that partnership with MSK gives them access to all of the metadata, including the MSK-IMPACT results as well as clinical results, de-identified, of course, that they can tie to IHC results, and the millions of

Harsh Thakkar (11:06)
Yeah, okay.

Yeah.

Joe Oakley (11:32)
slides that MSK has already scanned in, which already form the basis for the training sets.

Harsh Thakkar (11:35)
So

Yeah.

Yeah. So that's what I wanted to ask you about: the training set. So it's three million slides, or three million digital images. Were these images available in the MSK database, or were these some that you had to collect from different sources? How did you come to that number of three million images?

Joe Oakley (12:07)
Yes, for OmniScreen it's three million scanned whole-slide images, because MSK had really been a leader in digital pathology for a number of years now. They have been routinely scanning all of their, at least the H&E, and a lot of the IHC as well, to create these whole-slide images and build up this massive database. So it's a massive internal database.

And so OmniScreen was based off of taking the 14 most common tumor histologies that MSK sees, which are, broadly speaking, the 14 most common tumor histologies, organ-system-wise, in humanity, just because Memorial Sloan Kettering is a cancer center, a well-known international cancer center. And then.

Harsh Thakkar (12:50)
Thank you.

Joe Oakley (12:57)
Among those 14 different indications, there were 3 million total slides among all of the cases. And that was sort of the set that's been published on. Now they've actually been able to expand that with more in some of the recent iterations. You know, they're continuing to develop the models that are powering OmniScreen and the underlying Virchow foundation model as well. So OmniScreen is really just an application of what they call the foundation model, the Virchow foundation model.

That's based on those millions and millions of slides. I mean, in terms of it being confined to one institution, the nice thing is, again, everybody does an H&E stain. So the hidden secret of the MSK database is that somewhere around 30% of those cases are referrals from across the world. So there are examples of everybody's version of an H&E stain there, which, along with the sheer number,

tends to make the model a little bit more robust.
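
For listeners who want a picture of the foundation-model-plus-application split Joe describes, here is a minimal sketch of that two-stage design: a pathology foundation model (like Virchow) turns image tiles into embeddings, and a much smaller biomarker head (playing the role of OmniScreen) is trained on top of those embeddings against sequencing ground truth. The class names, dimensions, and pooling scheme are illustrative assumptions, not Paige's published architecture.

```python
# Sketch only: a small per-gene classification head sitting on top of a
# frozen foundation model's tile embeddings.
import torch
import torch.nn as nn


class BiomarkerHead(nn.Module):
    """Predicts per-gene alteration probabilities from tile embeddings."""

    def __init__(self, embed_dim: int = 1024, n_genes: int = 500):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_genes),
        )

    def forward(self, tile_embeddings: torch.Tensor) -> torch.Tensor:
        # tile_embeddings: (n_tiles, embed_dim), produced by the frozen
        # foundation model. Mean-pool tiles into one slide-level vector;
        # real systems often use attention-based pooling instead.
        slide_embedding = tile_embeddings.mean(dim=0)
        return torch.sigmoid(self.classifier(slide_embedding))


# The foundation model stays frozen and only this small head is trained
# against the NGS labels, which is part of why new biomarker tasks can be
# trained quickly relative to building the foundation model itself.
head = BiomarkerHead()
fake_tiles = torch.randn(2000, 1024)   # stand-in for foundation-model output
gene_probabilities = head(fake_tiles)  # shape: (500,)
```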

Harsh Thakkar (13:58)
Okay. And you mentioned having access to this database, and you also mentioned why this is better than the other methods: you don't want to use too much tissue, you don't want to run all these tests, and every different test is going to take longer. Whereas now you can get more information from this one digital scanned image. So we talked about

shortage of tissue or faster turnaround times. But for somebody that's listening to this and thinking about affordability: how much is this test with AI gonna cost compared to what it was before? Do you have any quick numbers to share?

Joe Oakley (14:46)
I mean, of course you'd have to talk to the

business development folks at Paige for a precise quote. But certainly, where they are hoping to price it, it's not going to be any more expensive than doing an IHC slide. And it is certainly going to be much quicker and less expensive than the thousands of dollars that you're going to get quoted for a full next-generation sequencing panel. And that, I think, is its real power, both for the researcher,

Harsh Thakkar (14:52)
Yep, yep.

Okay.

Hmm. Hmm.

Joe Oakley (15:18)
the clinical trialist, as well as the patient ultimately: its ability to save time and save money by really helping you prioritize some of those longer-turnaround-time and bigger-ticket tests. And then, if you let me prattle on here for a minute, I think there's another sneaky advantage in what the computer actually does as well.

Harsh Thakkar (15:44)
Yeah, say that again. Say that again.

Joe Oakley (15:44)
So the reason

that, yeah, the reason that OmniScreen ultimately works is that it's identifying phenotypic features. And so where I mentioned that this is making everything that was old new again: the reason that, again, I can look, as a pathologist, at an H&E-stained slide and say, okay, this is a breast cancer, is because hundreds of years ago, the

Harsh Thakkar (15:56)
Mm-hmm.

Peace.

Joe Oakley (16:11)
earliest generations of pathologists were taking a look at these stained sections and saying, okay, every time we see cells that look like this, that are from a breast sample, you know, this is a patient who clinically has cancer. They're making that association with the clinical outcome. What OmniScreen is now able to do is essentially the same thing. It's looking at the H&E, except now it can do that much quicker and in much greater detail than my mere human eyes. Like,

I'm sure if you handed me a stack of 10,000 breast cases and said, find all of the ones that have a BRCA mutation, or if you handed me 10,000 non-small cell lung carcinoma cases and said, find me all of the ones that have an EGFR mutation, and I could look through them over a decade, I might eventually be able to pick out patterns where I could say, I think this is a case that's now very likely to harbor an EGFR mutation.

Harsh Thakkar (16:39)
Meh.

Joe Oakley (17:06)
What we're able to do with AI, though, is allow the computer to do that for us. You know, it can train on those thousands of cases, except it's not going to take it a decade. It could train to identify phenotypes associated with EGFR mutation or no EGFR mutation, or HER2 overexpression in breast cancer or no HER2 overexpression in breast cancer, you know, in as little as a weekend, maybe a week. And it can pick out those types of phenotypic associations that we ordinarily would not have seen.

Harsh Thakkar (17:11)
Mm-hmm.

Joe Oakley (17:35)
And I think one of the hidden superpowers of this technology now is really the ability to marry that sort of OmniScreen interrogation of the phenotype on the H&E, the features that are associated with an EGFR mutation that is active and driving that tumor, versus those mutations that we know from clinical experience are simply along for the ride. Because there's a certain percentage of patients, lung cancer for example, who have an EGFR-activating mutation. You give them the drug, but they don't really respond

terribly well. And my argument would be, I think we can now start to look for those types of phenotype-genotype correlations. You know, I suspect that once OmniScreen is in more common practice and we're really starting to line up, you know, phenotype to genotype to outcome with targeted therapies in particular, you'll be able to better identify those cases where, you know, the

Harsh Thakkar (18:06)
Yeah.

Joe Oakley (18:30)
The algorithm says, yes, it has this phenotype of an activated EGFR mutation because I've trained against what that looks like. And you can confirm that now with a faster targeted assay. And in some cases, you need that because different drugs have different targets in EGFR to identify the right EGFR therapy to give to that patient. And that patient is probably more likely to respond than the patient who has the mutation, but

Harsh Thakkar (18:37)
Yep.

Joe Oakley (18:55)
does not have the phenotype consistent with activation. That's probably one of those tumors where the mutation is along for the ride. It's not really driving. And I think that's really kind of an exciting future direction for this kind of technology beyond the cost savings that it gives you right now, particularly for relatively rare mutations or in healthcare systems that are really, you know, they don't have a whole lot of sequencing capacity, so they tend to reserve it. It gives them a way to triage really effectively now those cases that really do

Harsh Thakkar (19:03)
I

Joe Oakley (19:24)
justify space on the limited number of sequencers that they have available. Or in some cases, there might be healthcare systems that would not have sequenced. They would have said, hey, this mutation, it's too rare in our patient population. We're not sure we really want to spend thousands looking at every breast cancer, every renal cell cancer for this particular mutation. Whereas with OmniScreen, now they can look at the H&E very quickly, very cheaply, and say, okay, this is a case that actually is likely to harbor that rare mutation.

We can go ahead and sequence that one now, which lets that drug get to patients it might not have been able to reach had it just been an argument of, well, we need you to sequence all of them to find the tiny percentage that actually have it.

Harsh Thakkar (19:56)
Mmm.

Yep. So the other question I had for you is, who are the early adopters of this technology? I know maybe you can or cannot share names, but will this be used by hospitals? Will this be used by academia? Will this be used by pharmaceutical and biotech companies in their R&D? Who are going to be the early adopters of OmniScreen?

Joe Oakley (20:34)
So of OmniScreen, again, I can't get too far into specifics. There definitely has been a lot of interest generated by the paper that you found. You're definitely not the only ones we've spoken to about this particular paper and its use.

And some of them certainly are in industry. There are pharmaceutical companies that are interested. Some of them are also CROs that are interested in being able to offer their clients a way to pre-screen. There are definitely some, I would say, hospitals that have been interested. I would say they tend to be more academic centers, because of course, you know, I should stress OmniScreen is not

Harsh Thakkar (21:00)
Hmm.

Interesting.

Joe Oakley (21:21)
It's research use only at the moment. So the academic centers that have, you know, a bunch of cases and a real interest in a particular project, where OmniScreen can help reduce their cost in terms of interrogating something, or starting to make some of those phenotype-genotype correlations that I mentioned, which I think could be very powerful and are almost certainly lurking out there, that tends to be their primary interest at the moment. And then, of course, once you can generate a full

clinical package that can be turned in to the FDA for approval as a formal screening tool, I think you'll see increased adoption within hospitals and see it made directly available to patients. But that's sort of down the road. It still needs the full validation for the claims in the paper.

Harsh Thakkar (22:04)
Yep. Yeah.

Yep. No, the CRO one, that's a genius one. Like, you know, using that for pre-screening. I wasn't thinking about it, but that's really interesting. Yeah, because, I mean, this sounds like, so then is Paige the only one in this, quote unquote, AI-powered digital biomarker space, or are there other players? How is Paige different from them? Can you share some?

I'm sure you've studied the competition long enough to know what's happening elsewhere.

Joe Oakley (22:46)
Yeah, so it is a very competitive space, I will say. There are a number of players who have

Harsh Thakkar (22:51)
Mm.

Joe Oakley (22:54)
actually a number of different foundation models, which is, again, sort of like Virchow; Paige has Virchow as their foundation model, and OmniScreen is developed from that. You have a number of other players that are out there. So for example, Modella, actually some of my former colleagues at Paige are now working very heavily with Modella; that's a spin-out of Harvard. There's even some big names that have

tried to develop their own pathology models. I mean, Google, for example, has like a 500,000-image model that's out there that I've seen show up in some of the comparisons between the two. Yeah, there are a few others who also have some different metadata. There's one company, I'm drawing a blank on the name, unfortunately, just off the top of my head here, but they've been using a little bit more expression data

as one of their ground truths that they have been training to as well. Yeah, I mean, I would say the biggest differentiator right now for Paige is, you know, Paige is a more mature company. You know, again, mature is in air quotes. Like you mentioned, it was, you know, formed just a few years ago as a spin-out of Memorial Sloan Kettering. Some of the others that are coming on, foundation models, are even earlier than that.

Paige, I think, still has a significant advantage just in terms of the sheer amount of data that they have access to through that partnership with Memorial Sloan Kettering. Memorial Sloan Kettering has been able to provide millions of slides that are exclusive to Paige. And I think that shows in some of the studies that are out there. So just this past December, there have been some efforts to kind of benchmark these various models across a couple of the different publicly available data sets or other

Harsh Thakkar (24:20)
Yep.

Nice.

Joe Oakley (24:43)
data set applications that are out there and have been published on. And, you know, Paige is still in the lead across the vast majority of those and is, you know, still the best overall performer. There are a few others that are getting close, including some of the ones that I named there.

Harsh Thakkar (25:01)
Hmm.

Nice, yeah.

Joe Oakley (25:05)
And then of

course, there are other applications that I would consider more niche as well. So if you have a focused enough project, sometimes you don't necessarily need a foundation model to be able to do it.

Harsh Thakkar (25:21)
Yeah.

Joe Oakley (25:23)
Now, I don't want to monopolize the questions. That gets into a slight difference in how the methods are trained. I don't know if you want me to get into the weeds on that. As you can tell, I can sometimes talk for a while once I get going.

Harsh Thakkar (25:28)
Yeah.

Yeah, no, I think that was good, the few examples that you shared. We'll obviously cut this part out from where you stopped there. If you want to pause, or you want me to just give you an angle on my next question, which was going to be: will AI take my job, or will AI take pathologists' jobs? So I'm going to frame that up. But in the meantime, you can...

Joe Oakley (25:47)
Mm-hmm.

Harsh Thakkar (26:04)
have something interesting to say about that. So Joe, thanks for mentioning those examples. Those are really interesting. And as with anything with AI, the first question everyone has is, is AI gonna take my job? When people are creating content on social media, copywriters

Joe Oakley (26:08)
Yeah.

Harsh Thakkar (26:23)
and editors were like, is AI gonna take my job? So I'm sure there are people listening to this episode

thinking, is AI going to replace pathologists? Like, what's your take on that? You are a pathologist.

Joe Oakley (26:37)
I am a pathologist. I'm a pathologist who's worked extensively with digital pathology and AI. And I am not afraid that AI is coming to take my job anytime soon. If anything, I think it's

Harsh Thakkar (26:38)
you

Yeah.

Joe Oakley (26:52)
You know, as I mentioned, it's another tool in the toolbox. It's a way of making the things that are old new again. You know, the generations of pathologists before us, even before there was immunohistochemistry, I was fortunate enough, without completely dating myself, to have trained with some of those, you know, those men and women...

who were doing pathology before we had nucleic acid testing, before we had immunohistochemistry. And it was amazing to sit with them around the microscope because they could tell things just by looking at the H &E that my generation of pathologists has become dependent on doing by IHC or nucleic acid testing because it's simply easier for us to just order the diagnostic test. They would still do it, but they could usually predict with high accuracy.

I was amazed that some of them could look at a case and go, okay, this one is going to turn out to be positive for this, this, and this. And sure enough, that's exactly what it would be. And that's just through sheer association with the H&E. Now we have a computer that can make some of those same correlations for us. And I think it really is going to unlock, particularly for pathologists, the ability to look at the phenotype that we're all trained on in new ways and make some of these new connections, particularly with genotype,

particularly with patient outcome, in a way that the biopsy and our analysis of it as pathologists, aided by AI in the right settings, either as the tool of choice itself or as the screen par excellence for what additional diagnostic testing we should be doing, can make for the best patient care decisions. It's going to, if anything, make us more valuable as members of the treatment team going forward.

And that's not even counting some of the really nice diagnostic applications of AI, which we haven't even touched on; that's an entirely separate category from OmniScreen. But there are algorithms out there in the wild already being used by pathologists because they're really good at saving us time. You know, finding small foci of metastatic cells in lymph nodes, finding small foci of prostate cancer in biopsies, so that we're wasting less time staring at the slides and

Harsh Thakkar (28:33)
Yep.

Yeah, yeah.

Joe Oakley (29:00)
the computer can steer us to the areas that are of greatest concern.

Harsh Thakkar (29:06)
Yeah, yeah, definitely. From what it sounds like, what you're saying is, you know, it's not going to replace the pathologist. It's going to give the pathologist a different angle or different pattern or trends or whatever you want to call it to look at. And it's going to give that to them really fast rather than having to wait and look at the slides and look for the results of the test to come back.

this is going to present those, you know, slice and dice those data sets or data points much faster. Ultimately, the pathologist is still going to be responsible in the workflow to make certain decisions, and that's not going to change.

Joe Oakley (29:50)
That's not going to change. And honestly, if I'm an academic pathologist, I'm even more excited about it because one of the cool things about AI is when it does find an unexpected association with either a genotype or outcome, we can have the model tell us where it was looking on the slide to make its determination. And sometimes it's not necessarily everything that you would expect. It might get really intrigued by

Harsh Thakkar (29:58)
Mm.

Joe Oakley (30:19)
you know, a particular type of inflammatory response at the border of the tumor. And digging out what exactly it was looking for, what that relationship was that is now, you know, clearly so important and so strongly associated with either the genotype or a particular outcome that the algorithm was able to find it. And every time it finds it, you know, it's dead on that this patient is going to have this outcome or it's going to harbor this particular mutation.

figuring out that why, I think, is going to be really exciting work for some of my academic colleagues.

Harsh Thakkar (30:49)
Amazing. Yeah.

Yeah, it's amazing how it can actually go back and like recheck its own work. That's fascinating.

Joe Oakley (31:07)
Mm-hmm.

Harsh Thakkar (31:09)
So with all of these advancements that are happening with OmniScreen and all the other AI digital biomarker tools and models out there, what is the big-picture implication for either clinical trials or drug development? What is this going to bring? Obviously, I'm guessing there are going to be faster cycles, but, like,

Where exactly do you see this helping the industry?

Joe Oakley (31:44)
So I think, particularly in oncology, there are a couple key places that these types of tools can make a difference now. I mean, first and foremost is something like OmniScreen, where, particularly if you have a relatively low-prevalence gene target, and it turns out to have a fairly robust phenotype that the computer can...

recognize, that unlocks some indications and some studies that you might not have considered before, simply because you would have had to screen too many patients to find it. You can really speed enrollment and make drugs available to patients who would definitely benefit from them in a way that you might not have necessarily been able to do before. It just would have been cost- and time-prohibitive. And now I think you have a tool to help unlock that.
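
To put rough numbers on the enrollment point Joe is making, here is a back-of-the-envelope sketch of how an H&E prescreen changes the sequencing burden for a low-prevalence target. Every figure below (prevalence, prescreen sensitivity, flag rate) is an illustrative assumption, not data from the episode or from Paige.

```python
# Hypothetical screening math: how many patients must be screened, and how
# many NGS tests run, to enroll a target number of mutation-positive patients.
def sequencing_burden(target_enrollment: int, prevalence: float,
                      prescreen_sensitivity: float = 1.0,
                      prescreen_flag_rate: float = 1.0) -> dict:
    # Without a prescreen, every screened patient gets sequenced.
    # With one, only flagged slides go to NGS, at the cost of missing
    # (1 - sensitivity) of the true positives.
    screened = target_enrollment / (prevalence * prescreen_sensitivity)
    sequenced = screened * prescreen_flag_rate
    return {"screened": round(screened), "sequenced": round(sequenced)}


# No prescreen: 2% prevalence, 100 positive patients needed -> ~5,000 NGS tests.
print(sequencing_burden(100, prevalence=0.02))
# With a prescreen that flags 10% of slides and catches 90% of true positives,
# you read ~5,556 slides but run only ~556 NGS tests.
print(sequencing_burden(100, prevalence=0.02,
                        prescreen_sensitivity=0.9, prescreen_flag_rate=0.10))
```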

Harsh Thakkar (32:20)
Mm-hmm.

Joe Oakley (32:39)
I think you can also start to interrogate more directly the ability of AI to either predict outcome as its own separate end goal or, as I mentioned, with that phenotype-genotype correlation, particularly for targeted therapies, that may improve your probability of technical success simply by being better able to identify that

Harsh Thakkar (32:59)
Mm.

Joe Oakley (33:03)
population of patients who not only have the mutation you're after, but whose tumor is probably being driven by that mutation. And you're not having an unexpectedly low response rate simply based off of the genotype alone, because you have a certain proportion of that particular cancer where the mutation is along for the ride; it's not really driving that particular tumor. So I think that you can save yourself some overall enrollment numbers there,

Harsh Thakkar (33:11)
I see.

Mm.

Joe Oakley (33:31)
as well as provide, you know, really better, more personalized medicine to the patient on the back end. And then I think that for certain drug classes, particularly the ADCs, I think an AI-based tool has the best chance to be sort of a one-stop shop for tailoring those. Because, of course, the challenge there is that, you know, again, going back to Enhertu and Herceptin and HER2 testing in breast cancer:

Herceptin works because it's overexpression of the HER2 receptor that's its target. You know, it wants to interfere with a tumor that's being driven by overexpression, and that's how, you know, trastuzumab is going to work in that setting. For Enhertu, the mechanism of action is very different. It still is looking for HER2 to be able to identify breast cancers, but that means it doesn't care if it's overexpressed or not. That's been a challenge for the immunohistochemistry diagnostic, because that wasn't our original diagnostic cutoff.

Harsh Thakkar (34:07)
Hmm.

Joe Oakley (34:30)
And that wasn't really what the IHC kits that are out there are designed for. And I think that's what's been emerging in some of these more recent clinical trials with the 1+ cutoff, which is a real hassle because it's kind of a gray area for us pathologists to begin with already in treating with Enhertu. But then on top of that, you would also potentially have phenotypic indicators of either resistance or susceptibility to the payload. So not only does Enhertu have to find, you know, a HER2-expressing breast cancer cell,

Harsh Thakkar (34:45)
Hmm.

Joe Oakley (34:58)
it's got to deliver that payload. And if the tumor cell is resistant to that payload, you know, Enhertu is still probably not the best option. And you may be able to better identify with AI some of those phenotypic indications of a breast cancer that's not likely to respond to the payload as well. So you can train a model to recognize not only is there enough expression of the target that the ADC can even find it and bind to it, but also how likely it is to respond to the payload.

And then there's also a discussion as well in the ADCs about the bystander effect. Like, you can have tumor cells that are sitting next to normal structures that express the target, where the drug will find the normal structure and release the payload close enough that the tumor can still be impacted. And again, AI, because it is looking at the digital image of that slide, can make some of those spatial associations as you train to that as well. Whereas if we tried to do that with our current...

Harsh Thakkar (35:29)
Okay.

Yeah.

Joe Oakley (35:54)
you know, chemistry methods: I would need H&E to measure the spatial relationship, and I need a human to be doing that; that's going to be slow. I would need an immunohistochemistry assay, typically at least, to identify if there's enough expression of the target of that particular drug. And then I've got to pray to God that there is either an immunohistochemistry assay, probably a separate one, or some other genotypic or RNA-based indicator of resistance to the payload. So

Harsh Thakkar (36:01)
Mm-hmm.

Mm.

Joe Oakley (36:21)
AI, I think, has the possibility with some of these ADCs to replace at least three different separate tests that you'd have to design to best tailor that particular class of therapeutic.

Harsh Thakkar (36:29)
Yeah.

Yeah, I think that's a good ending statement to sum up the big-picture benefit of this. So to our audience that's listening in or watching this, we covered a lot of stuff today. We talked with Joe, who explained to us how this was done, how cancer diagnostics and all these other things were done before AI was in the picture. We also talked about how OmniScreen works,

Joe Oakley (36:42)
Mm-hmm.

Harsh Thakkar (37:02)
how it was trained, who are the early adopters or who's going to be using this. We also talked about the other players in the industry or what they're doing in this space and how they're developing different tools and models. And we also ended with the big picture implication of all this for the industry. So I hope you've enjoyed this conversation. Joe, thank you so much for coming on, being so transparent and sharing everything.

Before we drop off, where can the listeners find you or chat with you or work with you? Do you want to share the information?

Joe Oakley (37:41)
Yeah, easiest place to find me is on LinkedIn.

You can either message me directly there or use the email that's associated with it. Email is generally a little bit faster, but you can find it. And then I'm Joe Oakley on LinkedIn and you can find me under Oakley Pathology Consultants.

Harsh Thakkar (38:01)
All right, we'll add those links to the show notes of this episode. And thanks again, Joe, thanks for coming on. And for our listeners and viewers, if you enjoyed this episode, I highly, highly recommend checking out one of our other episodes with Dr. Arnon Chait. He is the CEO of Cleveland Diagnostics, and they have a very innovative prostate cancer diagnostic test. So we'll put that link somewhere here,

and you can check out that episode if you enjoyed this content. That's it from me. Thank you, Joe, and I'll see you in the next one.

Joe Oakley (38:37)
Thank you, Harsh. Appreciate it.
