Life Sciences 360

Data, Decisions, and AI: The Power of Analytics in Life Sciences with Elizabeth Smalley

January 18, 2024 Harsh Thakkar Season 1 Episode 32

Episode 032: Harsh Thakkar (@harshvthakkar) interviews Elizabeth Smalley (@elizabeth-smalley-product-leader), the Director of Product Management, Data and Analytics at Aris Global. 

Elizabeth discusses proactive safety signal detection in the context of drug development and the challenges faced in this space. She highlights the importance of utilizing real-world data and changing the methods of detection to overcome these challenges, along with limitations in reporting adverse events and the potential of AI in drug repurposing.

Harsh and Elizabeth touch on ethical considerations in AI and the potential for AI to be used in various emerging sectors of life sciences. They provide tips on staying up-to-date with the latest developments in the field and offer advice for working in life sciences. They emphasize the need for democratizing data and AI to make it accessible to all users.

-----
Links:

*Elizabeth's LinkedIn
*Aris Global LinkedIn
*Aris Global
*Would you rather watch the video episode? Subscribe to full-length videos on our YouTube channel.

-----
Show Notes:

(7:30) Using AI to analyze EMR data for pharmaceutical insights.

(11:43) AI in life sciences and healthcare.

(18:15) Ethical considerations in AI and transparency.

(22:47) Learning resources and staying up to date.

(25:59) The impact and challenges of working in life sciences.


For more, check out the podcast website - www.lifesciencespod.com


Elizabeth Smalley:

We know that AI does a really good job of mimicking our own analytical abilities at scale. The primary way, the bread and butter of how this is done today, is...

Harsh Thakkar:

What's up, everybody? This is Harsh from Qualtivate.com, and you're listening to the Life Sciences 360 podcast. On this show, I chat with industry experts and thought leaders to learn about their stories, ideas, and insights, and how their roles help bring new therapies to patients. Thanks for joining us. Let's dive in. All right, welcome to another episode of Life Sciences 360. My guest today is Elizabeth Smalley. She is the Director of Product Management, Data and Analytics at Aris Global. Welcome to the show, Elizabeth.

Elizabeth Smalley:

Good to be here.

Harsh Thakkar:

Yeah, I'm really excited to have a data analytics person, because I've been working on a lot of projects in the data management and analytics space. So the first question I have for you is: how did you end up here? Did you find data, or did data find you? How did you end up in this field?

Elizabeth Smalley:

I would say data found me. And I think that's true for many, many people. Early on in my career, maybe 20 years ago, I found it was impossible to do anything without getting into the data side of the world. So ever since then, I've been working in data and analytics, moving into AI, and then moving into health tech and life sciences.

Harsh Thakkar:

And I know you have a lot of expertise in the proactive safety signal detection space. So just for listeners who are new to this concept, including myself, because I don't have much expertise in that area: what is that concept, and what is its significance, especially in the context of drug development?

Elizabeth Smalley:

So to understand proactive safety signal detection, a lot of people need to understand signal detection first and how it's done today, which will lead us into why proactive signal detection is such a breakthrough. So today, when a new drug is developed, we have clinical trials, and we test its efficacy and safety. But no matter how big your clinical trial is, it can't cover the variability in the human genome. So when that drug comes to market, all of our drug manufacturers monitor its safety, trying to understand: are there things about the safety or the efficacy that we didn't uncover in clinical trials? And it takes some time for a really robust risk-benefit profile to emerge. Now, the way that post-marketing safety signal detection is done today, the bread and butter of it, depends on something called an individual case safety report, or ICSR. So here's what this is: you are taking a new drug for headaches, let's say, and you go and see your doctor. Hey, I'm taking this drug, it's really helping my headaches, but I can't sleep. I haven't been able to sleep for weeks, it's affecting my life, and I don't know what to do. The doctor says: well, it could be this new drug you're taking, I've heard about this, let's have you pause and see what happens. Now, if he has time, and if he knows about the process, he may also file a case safety report on your behalf, letting the manufacturer know you experienced this adverse event. Or he may not; by some reports, these are underreported by anywhere from 40 to 95%. You can file these on your own as well. And so we take those case safety reports, and we just look for disproportionality: is insomnia occurring more often with this particular drug than I would expect compared to the background rates? And if it is, let's investigate it. There are all kinds of limitations with that approach. The first being the data source itself: it takes a long time for any patterns to emerge. And the second being those statistical methods, which, while directionally correct, can be really imprecise. It's like trying to measure height with a meter stick. So I haven't met you in person, but I'm going to say you're somewhere between one and two meters tall. Am I right?

Harsh Thakkar:

Close enough.

Elizabeth Smalley:

Close enough? Usually I'm right on that. It tends to be a very accurate statement, but an imprecise one, and that's where we are with the status quo, the bread and butter of signal detection. And it's very reactive.
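The disproportionality screen Elizabeth describes is often implemented as a proportional reporting ratio (PRR) over a 2x2 table of case safety reports. Here is a minimal sketch; the function and the counts are invented for illustration and are not from the episode:

```python
# Illustrative only: a basic PRR-style disproportionality screen over
# ICSR counts. All names and numbers here are invented for the example.

def proportional_reporting_ratio(a, b, c, d):
    """a: reports with this drug AND the event (e.g. insomnia)
       b: reports with this drug, without the event
       c: reports for all other drugs with the event
       d: reports for all other drugs, without the event"""
    rate_drug = a / (a + b)          # event rate among this drug's reports
    rate_background = c / (c + d)    # event rate among everything else
    return rate_drug / rate_background

# 30 insomnia reports out of 400 for the new drug, versus
# 500 out of 90,000 across all other drugs:
prr = proportional_reporting_ratio(30, 370, 500, 89_500)
print(round(prr, 1))  # 13.5 -- far above background, so worth a look
```

A PRR well above 1 flags the drug-event pair for review, which is exactly the "accurate but imprecise" behavior described above: it ranks hypotheses, but it says nothing about causality and produces many false positives.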

Harsh Thakkar:

So yeah, it is. And then, like you mentioned, one of the challenges was reporting: you said the provider may or may not report on behalf of the patient or the clinical trial participant. What are some of the other challenges in this space that you've seen in your work?

Elizabeth Smalley:

So these reports, which we rely on as a primary source: in clinical trials they can be really robust and enriched data sets, but in post-marketing especially, they can be just a slice in time, and they don't tell the longitudinal patient story. So maybe you're experiencing this insomnia, and you experienced insomnia in the past; that may or may not get included. Are you taking other drugs that could be causing it? That may or may not get included. So it can be a very sparse data source, and again, it means it takes a long time for these patterns to emerge. There are also a lot of papers written about the fact that it's not particularly good at detecting subtle changes in what we know about drug reactions. And I would say it's completely focused on adverse events. It's important for us to look at that risk side, but what about the benefit side? Is it possible that there is a way to repurpose some of these drugs?

Harsh Thakkar:

Interesting. And in your opinion, what can companies or people working in this space do to overcome some of these challenges? Like, have you seen maybe a company or a team doing something that others are not, that you can share with the listeners?

Elizabeth Smalley:

Yeah, so these limitations are well known. And one of the ways we start to solve them is getting proactive about it. So instead of waiting for that report to be filed, why don't I go out and survey all of the available data and see if these reactions are happening, either good or bad? Now, to do that, you need to go beyond those reports; you need a different data source, right? And one of the great data sources for that is real-world data: specifically EMRs, claims, and, for rare diseases, registry data. Another thing that helps is changing up that method of detection. So again, disproportionality is just a really simple statistic, directionally correct, but what happens is you tend to get a lot of noise, a lot of false positives, with it. And what does that mean for patient safety? It means that the teams who are analyzing this to try to keep you safe are spending a lot of time looking at things that don't matter. So if we can cut through that noise by using different methods, then we can make a lot of progress in this area.

Harsh Thakkar:

Yep, agreed. And it's interesting, because I was talking to a consultant on my team maybe a month ago, and in his previous project he was doing some work in this space. I don't know the exact technical details of the entire project, but I do know that he and his team were looking at data from the EMRs, like unstructured data, and using different AI and data analytics tools to analyze it. That's all I know. But he was also talking about the same problem, where it's so hard to, first of all, get access to these files, or, even if you have access, to make sense of what you're looking at, because you can't just sit and read all of these documents. You have to have a strategy of, like you mentioned, slicing and dicing the data so you only look at what matters to you and ignore everything else. So that's a very interesting example you shared.

Elizabeth Smalley:

Yeah, so it's a goldmine of data, right, for life sciences. And what we're seeing is this rise of real-world data providers who do all of that work, right? So they work with the providers, they get the data privacy agreements in place, they normalize the data, they clean it up. And with the rise of that in the industry, it really allows organizations to say: I'm not in data prep, that's a whole business, but what I do want to do is use that data. So now you can buy that ready-to-use, fit-for-purpose data to use in some of these applications.

Harsh Thakkar:

Yeah, that's a great pain point that I've also seen from many other groups: they maybe know where the raw data or the source data is, but that data is not readily available for them to make any decisions. And that's where these service providers come in, where they're actually taking the data, either cleaning it up or reviewing it to find inconsistencies, and then making it into a standardized format so the end users can actually go and use it. I know this is happening more with, like, external service providers. But do you feel that the end users themselves, in the future, can use AI to do this on their own?

Elizabeth Smalley:

Yes, yes, yes. So something our teams have worked on is this idea of democratizing the data. A lot of times this EMR data is really only available to data science and epidemiology teams with very, very specialized skills, and the prevailing concept is that no one else can or should touch it. So my team has used AI to go across this EMR and claims data, and the ICSR data, and the literature data, all at once, in a holistic way, and elicit out of it all of the things we would normally look for when trying to determine if there's a causal relationship between a drug and an event. Things like: what was the time to onset? Did you start to have insomnia two days after taking this new medication, or was it three months later? There's a difference, right? Or is there an increasing trend, an increasing frequency? Am I seeing this go up and up and up? If I am, that could be a causal indicator. And so by using an AI model to elicit all of these factors and return them to the user in graphs and charts, with explainability (this is where we got it, this is how we got to it), it really democratizes the data and allows the everyday user to be a citizen data scientist.
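Two of the causality factors named here, time to onset and an increasing frequency trend, are simple to sketch over longitudinal records. These helper functions are hypothetical illustrations, not the actual product logic:

```python
# Hypothetical helpers for two of the causality factors mentioned above.
from datetime import date

def time_to_onset(drug_start: date, event_date: date) -> int:
    """Days between starting the drug and the first reported event."""
    return (event_date - drug_start).days

def is_increasing_trend(monthly_event_counts: list) -> bool:
    """Crude check: does the event count never decrease month over month?"""
    return all(later >= earlier
               for earlier, later in zip(monthly_event_counts,
                                         monthly_event_counts[1:]))

# Insomnia reported two days after starting the drug...
print(time_to_onset(date(2024, 1, 1), date(2024, 1, 3)))   # 2
# ...and reports climbing month over month: both lean toward causality.
print(is_increasing_trend([1, 2, 2, 4]))                   # True
```

In a real system these factor scores would be extracted by models from unstructured EMR, claims, ICSR, and literature data, then surfaced back with the explainability Elizabeth describes.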

Harsh Thakkar:

And I agree with that. But one of the things I'm skeptical about is: yes, like you mentioned, AI can be used to democratize the data, but what would be your take on end users of the data who are not data savvy? So, like, we have tech savvy and non-tech savvy. Even within data, there are a lot of people who inherently have good data management practices, but there are others who, despite all the training that companies are providing, will just say, quote unquote, I don't know how to handle data, right? So do you think putting AI into the mix will make this problem even worse? What would you have to say to them?

Elizabeth Smalley:

I mean, I hope not. And maybe we can take a page from the financial industry, right? So the financial industry came up with the credit score, and I bet all your listeners know what a credit score is. I bet they have an idea of what theirs is and where it falls on the scale of good or bad. And even if they have a credit score lower than they would like, they probably know why, because they've applied for credit and they got a letter that said: you had too many open accounts, or too many delinquent accounts. So they know why. So even as consumers, we can consume and understand this credit score that's created out of AI. Now, your underwriters are using it as well, right? Before the credit score, the underwriter was going to take a look at: have you been delinquent? Do you really make enough money to repay this loan? Do you own your own home? Now all of that's put into this credit score, and instead there's a new objective measure they can anchor on. They don't have to understand how it all works, how the data worked. They use it as an objective point in their analysis when they're trying to decide whether to approve that loan for you. And I think the credit industry has done a great job of socializing that and making it understandable to all of us. There's a lot of magic underneath that credit score.

Harsh Thakkar:

Hmm, interesting. Yeah, that's a great example; I didn't think of that. But as you were explaining, I was thinking about it, and it does make sense, because the end result is still used for decision making, but the understanding of how you got to that result is not shared. Or maybe it's there, but not everybody knows it.

Elizabeth Smalley:

And the fact that...

Harsh Thakkar:

Not even the nerds would know it.

Elizabeth Smalley:

The fact that this isn't an example you think of often is a sort of indication of how accessible and relatable it's become. It's just part of everyday life, or at least when you're applying for credit or buying a house.

Harsh Thakkar:

Yes, yes, I just did that a few months ago, so I'm well aware. So in this space, using data analytics, or seeing how AI is helping in different areas of life sciences: can you share some examples? Like, I know in drug discovery research there are already a lot of use cases; I've heard speakers from different companies at different conferences. So I know it's already being used there. But what I'm not sure about is what other areas it can be expanded to. So do you have any ideas of emerging sectors or departments in life sciences that could benefit from AI or data and analytics more than they do today?

Elizabeth Smalley:

So as you said, it's being used in a lot of different places, especially in drug discovery; that's been out there for a while. Even in administrative things, like predicting how many people we might see in the emergency room and how we might want to staff for that. But where it hasn't been used as much as I would like to see is in that safety signal detection space, which is a little crazy, because it's all about analyzing data, and we know that AI does a really good job of mimicking our own analytical capabilities at scale. So as I mentioned, the primary way, the bread and butter of how this is done today, is statistics. Look at the methods: there's a denominator and a numerator, and is it happening more often? And that's it. There are a couple of nuances and a little more to it, but really, it's just an equation you're running against the data. Using AI across those large data sets, creating something like a signal strength, which would be akin to your credit score, just sort of this probability that these two things are causally related, is really the next thing for life sciences and AI. And it's a place where it just hasn't been used as much as it could be.
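Purely as a thought experiment on the credit-score analogy: a "signal strength" could fold several causality factors into one 0-to-1 score. The factor choices, weights, and bias below are invented for illustration and are not Aris Global's model:

```python
import math

def signal_strength(factors, weights, bias=-2.0):
    """Hypothetical: combine 0-1 causality factor scores (e.g. short time
    to onset, increasing trend, disproportionality) into one number via a
    logistic squash, the way a credit score bundles many inputs."""
    z = bias + sum(w * f for w, f in zip(weights, factors))
    return 1 / (1 + math.exp(-z))  # squash to a probability-like 0-1 score

# All three invented factors fully present:
score = signal_strength([1.0, 1.0, 1.0], [1.5, 1.0, 1.2])
print(round(score, 2))  # 0.85 -- a single credit-score-like number
```

Like a credit score, the single number is only useful if it comes with the explainability discussed earlier, so a reviewer can see which factors drove it.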

Harsh Thakkar:

And you brought up the example of the credit score and the financial industry, and I had a question on my mind but forgot to ask you; it just came back to me. So that's a good example of how a different industry is using AI and how we can sort of piggyback on that and apply it. Do you know of any other examples, from your career in healthcare and life sciences or from other industries, where you've seen something work, and you have an idea of how it could be used in life sciences, but you're not seeing anyone do it?

Elizabeth Smalley:

Well, I'm going to give you an example of where life sciences has taken a page from an industry playbook, and this one, I think, we can expand quite a bit. So we know that social media companies have done a really good job of changing behaviors, right? Predicting what you might like to see, helping you see and engage with that content, and then having things that really play on human psychology to keep you engaged and keep you coming back. And we see remote patient monitoring companies do this with patients to drive healthy behaviors: to keep you measuring your biometrics, which, as someone living with diabetes or living with heart failure, it's really important for you to measure; to keep you coming back; to nudge you towards healthier habits. And anytime we can leverage our natural human behavior to nudge you in a healthier direction, I think it's a win. And I would say that life sciences learned how to do that from social media companies.

Harsh Thakkar:

So, any other examples that come to your mind, where you haven't seen it used in life sciences yet, but maybe you would like to see it in the next few years?

Elizabeth Smalley:

Well, I think generative AI is coming into play in all domains, and I don't think life sciences will be last on this one, the way life sciences usually is. From what I've seen, life sciences tends to be at the forefront of trying to embrace generative AI as of right now. There are all kinds of uses, such as safety signal profiling and drug discovery, where this technology is being piloted. So maybe next time we talk, the question will be: what can other industries learn from life sciences?

Harsh Thakkar:

Yep, interesting. And I hope life sciences is not last, because I personally am very passionate about this area, but I mostly work in quality and regulatory. So I'm championing, just like you said, using AI, which can sometimes be taken as a buzzword, but really just good data management, and understanding whether you can make decisions using data. I personally feel, and I'm very strong on this opinion, that it's not used to the extent that it should be in quality and regulatory, and I work in that space; those are the projects I work on. And I get a lot of pushback, because there are compliance requirements and everything has to be validated and tested. I get it, right? But it shouldn't stop you from using something that is way better than any manual process, or even an electronic system that just takes inputs and produces outputs. This is at scale; you can do massive amounts of work in a fraction of the time. So that's my stance on it.

Elizabeth Smalley:

Yeah, yeah. And as far as validated systems go, I feel like regulators are really open and are encouraging us to find ways to make this work. When you think about classical AI, we've done a pretty good job of putting reasonable systems in place to allow it to work at scale, but to still have this quality check and make sure it's working as expected. When you get into the world of generative AI, it's going to be a different game. And the biggest hurdle to me right now is the fact that we have these foundation models trained on who knows how much data, with some kind of under-the-hood rules about prompt refusal, about what kinds of prompts the LLM will not respond to. And none of that's visible to the rest of the world. So we now need to figure out how to work with black-box AI, as opposed to the kind of AI where I built the model, I know exactly what's in it and exactly what it's trained on. Now I need to find a way to validate this new kind of AI that's coming out as foundation models.

Harsh Thakkar:

Yeah, that's a challenge. And the other challenge I've heard from many people working in this space is: how do you maintain ethics? What are the ethical considerations if you're building this type of model? So do you have any ideas about what is the right way or the wrong way to build an AI model? Let's just put it that way.

Elizabeth Smalley:

Yeah, I mean, again, with classical AI, we've done a good job of norming around data privacy, data rights and usage, and checking your training data for biases. Maybe 10 years ago that seemed very difficult to think about, but now, with generative AI on the scene, it all seems very straightforward. And the teams that do this best build in ethics by design; they have an ethicist on the team to make sure the model is growing up with ethics built in. With generative AI, I think it's going to be a whole new ballgame, because again, we're leveraging foundation models that are not transparent to us. And that lack of transparency is going to be one of the most difficult things to figure out how to build ethics into.

Harsh Thakkar:

Yeah, it's definitely a challenge. Because even using generative AI for simple examples, like content creation or other non-regulated use cases, there are still issues with transparency: how you ask the question, what words you're using, whether you're going to get a favorable response, right? It depends on how you frame your question and how the AI analyzes that.

Elizabeth Smalley:

It's very positive, too. So I've asked ChatGPT to give me some feedback on things I've written, and it's always great feedback. I think it's going to help my self-esteem.

Harsh Thakkar:

Yeah. So outside of work, what are some of the resources, or what do you do, to keep yourself up to date or just learn more on this topic? Do you get your insights from conferences? Do you talk to people? Do you read books? Can you share some?

Elizabeth Smalley:

All of those things, all of those things. So conferences are a great resource; it's always good to talk to people face to face, and they give us things to talk about, because there are always talks there. I read a lot of scientific journals, especially in the safety signal detection space. As far as AI goes, and the new AI hitting the scene, there are all kinds of online resources to learn about what's coming out in each area of AI. So all of those things. And I think leveraging every resource you can get your hands on is one of the best ways to do well-rounded learning.

Harsh Thakkar:

Yeah, it definitely is. Because the challenge I'm having on this topic is that sometimes, when I get to material like a blog post or an article, it's hard to figure out whether it's going to be just filler or whether it's going to have some substance, just because of the hype around AI.

Elizabeth Smalley:

Sure.

Harsh Thakkar:

How do you deal with that? Because I'm sure there are other people who are like: oh, here we go, another AI article, another blog on AI.

Elizabeth Smalley:

So at Aris Global we partner with a lot of different vendors, and I find the education they're putting out tends to be a pretty good source; vendors that work in data, like Snowflake, tend to be a pretty good source. You can always ask ChatGPT or Bard for some resources, which is an interesting thought experiment: asking an LLM about itself.

Harsh Thakkar:

Yeah, I've tried, and the answers were funny, or not, I don't know; it was weird. So, what's one piece of advice that you received in your career?

Elizabeth Smalley:

So I think the most important thing I've learned in my career, especially in life sciences, is the need to share early and often, and to have thick skin. Back to that concept of transparency: the more transparent we can be with the studies we're doing and with the models we have, the better they'll become, and the quicker they'll become more efficient and safer models.

Harsh Thakkar:

And this is also a question I've asked many people, and I want to know your answer. What is it that you love about working in life sciences, and maybe what's one thing that you, I don't want to say don't love, but wish was better or different than how it is?

Elizabeth Smalley:

So, life sciences is a great place to work, especially with AI, because we can make real impacts on real people and really embrace this idea of AI for good. If you can, with remote patient monitoring, keep somebody out of the hospital, that's a real human impact you've made. If you can use AI to bring safer medicines to market faster, then that's a whole set of people who have access to that medicine now, instead of five years from now, and who get real-life improvements. So that impact is why I work in life sciences. I think most technologists would agree that being able to move faster is what we would change about it. I think you said that yourself.

Harsh Thakkar:

I would as well, yes. Yeah, faster. I've heard from most people that because we're in a regulated space, regulation tends to slow things down. Other examples I've heard are quality, or the wrong stakeholders on the project; I could go on and on about why something would slow down. But in general, I agree with you that we need to find ways to make it faster, or maybe more seamless, right? Like, we should not be discussing whether we need to do something or not for, like, five meetings; we should know in one meeting if it's right or wrong. And if it's wrong, we just move on and do something else.

Elizabeth Smalley:

Yeah, yeah. I mean, I would push back on regulators or regulation slowing things down, because there are many regulated industries that prove that wrong. So I think, no, there's something to that; if we can find a way to speed it up, I don't think there's anything standing in our way.

Harsh Thakkar:

I agree. Yeah. I mean, it's one of those arguments, right? I'm sure whoever is making that case has their own experience. Companies who maybe had regulatory challenges in the past are probably going to be on the fence, and other companies who haven't had that, who've had favorable inspections and audits and good relationships, might be a little more ambitious to take the risk, right? So every company and every person is coming with their own story, their own experience in the industry, and taking that stance. So, yeah.

Elizabeth Smalley:

And, to your point, we see a really big range in the acceptance of AI, especially generative AI, in life sciences, among the different market authorization holders I've spoken to. Some of them say: I don't want generative AI anywhere in my organization. And some are jumping in feet first: how can we use this, where can we implement it? It's a broad spectrum.

Harsh Thakkar:

Yeah, I'm optimistic. I think it's a great time to be working in life sciences, like you said, and we'll see how the next 10 years shape up. So before we wrap this up, any final thoughts? And how can people connect with you to learn more about what you're doing, or to exchange ideas or thoughts with you?

Elizabeth Smalley:

Yeah, so I'm on LinkedIn: Elizabeth Smalley, the girl with the glasses; my picture is there. Final thoughts: I would say life sciences touch all of us. At some point in our lives we are all patients, our friends and family are patients, and most of us will experience a medical event, either directly or through the ones we love. So I think it's incumbent upon all of us to be informed patients and advocates for our own health. And today, that will mean understanding AI and how it's being used by your doctor, by your nurses, by your pharmacist.

Harsh Thakkar:

Right. Thank you. Yeah, thank you for that, and wishing you all the best for all the work you're doing, and a happy holiday season. Thank you for coming on the show.

Elizabeth Smalley:

Thank you very much.

Harsh Thakkar:

Appreciate it. Thank you so much for listening. I hope you enjoyed today's episode. Check out the show notes in the description for a full episode summary with all the important links. Share this with a friend on social media, and leave us a review on Apple Podcasts, Spotify, or wherever you listen to your favorite podcasts.
