
Life Sciences 360
Life Sciences 360 is an interview show that educates anyone on challenges, trends, and insights in the life-sciences industry. Hosted by Harsh Thakkar, a life-sciences industry veteran and CEO and co-founder of Qualtivate, the show features subject-matter experts, business leaders, and key life-science partners contributing to bringing new therapies to patients worldwide. Harsh is passionate about advancements in life sciences and tech and is always eager to learn from his guests— making the show both informative and useful.
Continuous Validation: The Key to Successful Life Sciences Software with Nagesh Nama
Episode 019: Harsh Thakkar (@harshvthakkar) interviews Nagesh Nama (@nageshnama), the Chief Executive Officer at xLM.
Nagesh shares his perspective on continuous validation and emphasizes the unique capabilities of his platform, which includes the generation of digitally signed PDF reports.
Harsh and Nagesh discuss xLM's platform and its approach to software validation and test automation, as well as challenges related to incorporating AI technology and the importance of data governance.
-----
Links:
* Continuous Validation
* Do you love LS 360 and want to see Harsh's smiling face? Subscribe to our YouTube channel.
-----
Show Notes:
(3:28) The concept of continuous validation.
(7:12) What happens if you already work on the platform?
(11:32) The production pipeline and VM builds.
(15:00) Manual vs. AI.
(19:27) Log analysis and audit trails.
(25:59) Man vs. machine.
(30:48) Data Governance and audits.
For transcripts, check out the podcast website - www.lifesciencespod.com
Nagesh Nama 00:00
If not 100%, 80% of test cases can be automatically generated. It just looks at the pattern of usage. These are not plain vanilla "go from point A to point B, done" scripts. No, it constantly monitors how the user is using the software, what features they're using, what order they're doing things in, and not just from one user.
Harsh Thakkar 00:18
What's up everybody, this is Harsh from Qualtivate, and you're listening to the Life Sciences 360 podcast. All right, we're live on another episode of Life Sciences 360, and today I have with me Nagesh Nama. Nagesh is the CEO of xLM. So welcome to the show, Nagesh.

Harsh, good to be here. Good afternoon.
Nagesh Nama 00:51
We're going to talk a lot about software validation in life sciences, because I know that's where you have products and services through both your companies. But I want to start with this: I've seen your LinkedIn profile, and you have this catchphrase on it, which I've heard you mention. When you talked to me the first time, you even said the words "continuous validation." I know you're trying to hammer that point home. So how did you come about deciding that continuous validation was the thing you wanted to build your company around, or even explore?

Yeah, good question. To be honest with you, I was thinking of this concept way back in 2011, but xLM was born in 2016. So the idea was there. The reason I bring up 2011: if you remember, life science companies were not using the cloud that much; the cloud itself was a baby, and on top of that, life science companies were not using it. But internally I bet that the cloud was probably going to become a big deal, and I saw that with AWS being born and Azure in its infancy. So I used those products. Then I said, you know what, if this thing comes to the validation world... half of me said it will, the other half said maybe not, because life sciences is very conservative; they may not give the data out, meaning it won't be in the public cloud, for example. So I asked, what if my job is eliminated, meaning my job as a consultant is eliminated because the machine does the validation? Maybe there will be a day when it does. So can I make the best of both worlds? I had already worked in consulting since 1996, to be exact, so I had seen the validation world in many parts of the real world. So I thought, what can I do? It was just an idea I had, and I was mulling over it. As I started using more and more of the cloud products, I said, I need to come up with something that will be the future of validation, in my opinion. Then I looked around, and not many people were talking about continuous validation in our space, in the life sciences space. So I said, okay, now is the time.
So the idea is very simple. In a traditional validation, the user requirements are correct, and then the entire lifecycle is about proving that those user requirements are met. The design, the testing, the validation on an ongoing basis, everything is about showing that the user requirements are met. Now I'm thinking: the platform is changing all the time, for example Azure, or GCP, or AWS. And the software also may change, not to that extent, since maybe you can control something at the software level, but it will still change, probably on a monthly or quarterly basis. How about reversing that thing a little bit? The concept is still correct, the validation has to be done. So I was thinking: instead of knowing the changes, can I constantly prove that the requirements have been met? Ultimately the FDA wants to make sure that requirements are being met; that's why we have change control and all that good stuff. But if changes are happening that are beyond my control, what can I do? I can look at it and say, can I constantly show that my software is performing the way it's supposed to perform, irrespective of the changes? Of course, you still need good practices of configuration management and all the good stuff. But in a public cloud, no matter what kind of configuration management you have, even if you know all the changes, it's impossible to validate each one; it keeps coming, like a train pushing through the tunnel. So that's where the concept of continuous validation was born. My idea was that it could be run intermittently, maybe every week, every month, after a patch or a release, or on an almost continuous basis. It cannot be literally continuous, meaning it's not constantly running; it's a discrete event. Let's say every hour you can run it, every day you can run it. That was the idea, and it's a little bit more than that, but the concept was: can I build something where my requirements are frozen, and I constantly prove that my requirements are met, no matter what happens to the platform, no matter what happens to the software? That was the premise, and I defined continuous validation that way: in spite of the unknown changes, can I prove my requirements have been met? It could be hybrid, maybe I can meet you halfway, but that was the idea. Even if I don't know the changes, I need to prove that my requirements are met.
Okay, so from what you explained, it sounds like xLM is a suite or a platform where you can do, as you define it, continuous validation for a wide range of SaaS tools. Is that correct? And which were the first three or four that you started out building in xLM?

Yeah. Our platform can technically do any software validation. I can take the same engine, put it into a closed network, and it can test, say, manufacturing operations, for example custom HMIs and all that; we have done that for large enterprises. But typically we want to target the cloud platforms. Our first platform was TraceLink, the serialization platform. Then we partnered with a company called EO Docs, which has a QMS suite, and we completely built the testing suite for their product validation and also for the customer instance validation. Then we did AWS services, various services that we qualify rather than validate, so we built that automation for service qualification of AWS as well as Azure. Then we moved on to Veeva, for example, and so it went on after that. We have many, many platforms, I think over 20-plus platforms that we have done since 2016.
Interesting. And I'm sure you've come across this situation, so I don't want to ask whether you've come across it. What I'm going to ask is: how do you approach it when you work with a client that wants to implement a SaaS tool, let's say anything, like DocuSign, whatever, but it's a new SaaS tool that nobody has used in life sciences, and they want you to build that? If it's a newer tool, can you?

Yep, yep. We really started somewhere like that. With TraceLink, we never had the suite; the first software platform, or SaaS platform, that we validated was TraceLink, so we had to build it from scratch. What happens is, if you already have the suite, even if it's for a different customer, and you've already worked on the platform, it just becomes easier, because you know the nuances of the platform; we are quicker to respond, and we can do it faster. Because no matter how much you know, you have to get your hands dirty. Everybody says it will work nicely, but until you go through the lifecycle, you just don't know what you're getting into, right? It's like getting married: yes, the date was good, and now I want to get married, when I don't really know what I'm getting into. I guess that's a good analogy. So anyway, a customer comes, let's say, with platform X that was never validated by us. The approach would be similar; the only thing is we have to invest a little bit more, because otherwise they might find us cost prohibitive, since we need to build the suite out. That's the only difference, and the turnaround time will be a little bit longer. If I already have a suite, like ServiceNow, which we did a long time ago as one of our first few platforms, then it's easy, because we know the architecture, my team knows how the system is configured and everything else. Otherwise the time is a little bit longer and we have to invest a little bit more, because we hope that platform will attract more customers, so the third, fourth, fifth customer becomes a little more profitable than the first one, and faster and faster. But other than that, the approach is very similar, the same; there is no difference.

And then, given that in life sciences we can't really do anything without documentation: is the xLM software, the product, capable of generating documents, or is that still a manual task for which you have consultants?
I would say it's 100% automation, meaning, first of all, there is no paper; we just don't use it. We have a platform not only for the test automation but also for lifecycle management. So we are getting away from the concept of documents. A URS is not a document for us: every requirement is a task, for example, or an issue, or a record. That's what we manage individually, because each one will be mapped to a test automation suite or a specification. So a URS is basically a bunch of requirements, and each requirement is a record for us. But we can make it look like a document; we have templates that take all the requirements and the logic and create a document.
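As a rough illustration of the "requirements as records" idea Nagesh describes, here is a minimal Python sketch. The field names, IDs, and rendering format are hypothetical placeholders, not xLM's actual data model: each requirement lives as an individual record traced to automated tests, and a document-style view is only generated from those records when needed.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    """One URS requirement managed as a record, not as a line in a document."""
    req_id: str                  # e.g. "URS-001" (identifier scheme is illustrative)
    text: str                    # the requirement statement
    linked_tests: List[str] = field(default_factory=list)  # automated test suite IDs

def render_urs_document(title: str, requirements: List[Requirement]) -> str:
    """Assemble a document-style view from the individual records on demand."""
    lines = [f"# {title}", ""]
    for req in requirements:
        trace = ", ".join(req.linked_tests) or "UNMAPPED"
        lines.append(f"{req.req_id}: {req.text}  [trace: {trace}]")
    return "\n".join(lines)

urs = [
    Requirement("URS-001", "Users must authenticate before accessing records",
                linked_tests=["TS-LOGIN-01"]),
    Requirement("URS-002", "All record changes must be captured in an audit trail",
                linked_tests=["TS-AUDIT-01", "TS-AUDIT-02"]),
]
print(render_urs_document("User Requirements Specification", urs))
```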
My business analysts or validation analysts will start with a plan. Once the plan is done, they will build these requirements working with the customer. In most cases the customer will give us the requirements, but in other cases we work collaboratively; it's like a back-of-the-napkin kind of thing, we give them something to look at and then they enhance it. That's all we need from our customer: if they can give us the requirements, great. Other than that, we don't need anything else from the customer except access to the tool or the SaaS platform. Just like a human being accesses it, we can access it, but we can also access it via APIs; that's the only difference, not just the UI, we can go through APIs as well. But we still need the security access. That's it: URS requirements and access.

So what we do is take that and figure out the software. If we have already done it, it's easy; if we haven't, it takes a little bit longer while we go through and figure out how the software works. Then we document the configuration in a config spec. Again, it's record by record, and we trace each record back to the requirements. Once that is done, the sandbox is kind of frozen, the customer is happy, the design is steady, and the configuration freeze happens. Then we get the config qualification, or the config spec, done. In the meantime, my validation analyst will build the workflows, meaning which workflows are to be validated. For a customer we typically only do the UAT and the config qualification; there is no typical IQ, OQ, PQ. When doing a product validation, it's typically the IQ and OQ, and maybe even a PQ with a hypothetical customer, an "xLM Pharma" kind of company, and we do the whole thing. That's typically how we work, and everything is electronic, so the traceability is very easy for us to build. And the test automation script, we know how to do it irrespective of the target technology, because mostly it's browser based, or API based, or a hybrid combination. If there is a thick client, like a desktop client, we can also incorporate that into the testing workflow. Once that is done, my developers don't freeze their code until all the testing is done and the draft automation scripts are reviewed, meaning the output is reviewed by my QA team. Then the code reviews happen and they promote that code.
The code is promoted into the production pipeline, and the developers don't have access to the production pipeline. The pipeline is step by step: it knows exactly which code to retrieve; we use Azure DevOps for that, and it retrieves the code from the Git repository and documents the pull request that the code corresponds to. Then it builds the VM on the fly and loads all the tools that the VM needs, all done by scripting with no human intervention. You can say I want Linux, I want macOS; whatever the end-client operating system and browser version are, we can build that, plus any tools we've added. And all the logs are there: how the VM itself is built, every single thing is logged. When the VM is built, the test code, the scripting, is injected into the VM automatically. Then it actually performs the test. We can even record a video of the test if you want it; we normally take screenshots anyway, but video is an option. Screenshots are taken and all the validation testing is completed at that time. Once that is done, it puts everything in a JSON format: the actual results, including the screenshots, any reports, everything goes into JSON. The PDF rendering takes that as the last step and generates a PDF. If there are attachments, say, for example, API responses, they are attached as reports. And all of this is built into a digitally signed PDF package; everything is in one package. A typical suite might have multiple PDFs, depending on how it's organized, but that's the package. Then either somebody can take that, review it, and manually send it to the customer via DocuSign or Adobe Sign or one of these signature platforms, or it can be sent by email; once they are okay with it, they can DocuSign it and give us a signed copy, which we upload. Or we can actually use APIs to send that document directly; DocuSign is very easy for us, we can automatically route it to DocuSign if there is no failure. If there is a failure, then a ticket is opened and root cause analysis is performed; we perform the RCA within our own lifecycle.
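To make the flow concrete, here is a minimal Python sketch of that kind of push-button pipeline. It is only an outline under assumptions: the stage names, repo URL, and report format are illustrative placeholders, not xLM's actual implementation. The shape is the point: retrieve the approved code, build a disposable VM, execute the suite, and render the JSON evidence into a signed PDF.

```python
"""Illustrative sketch of a hands-off validation pipeline like the one described.
Function names and stages are hypothetical, not xLM's code."""
import json
from datetime import datetime, timezone

def checkout_reviewed_code(repo_url: str, pull_request: str) -> str:
    # Retrieve the exact commit tied to an approved pull request (placeholder).
    return f"workspace-for-{pull_request}"

def build_test_vm(os_image: str, browser: str) -> str:
    # Provision a throwaway VM with the requested OS/browser, fully scripted.
    return f"vm-{os_image}-{browser}"

def run_suite(vm: str, workspace: str) -> dict:
    # Execute the injected test scripts; collect results and screenshot paths.
    return {
        "vm": vm,
        "workspace": workspace,
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "results": [{"test": "TS-LOGIN-01", "status": "PASS",
                     "screenshots": ["login_step1.png"]}],
    }

def render_signed_pdf(results: dict) -> str:
    # Last step: render the JSON results (plus attachments) into a digitally
    # signed PDF package. Real signing would use a PDF/signature library.
    payload = json.dumps(results, indent=2)
    return f"validation_report.pdf ({len(payload)} bytes of evidence)"

if __name__ == "__main__":
    ws = checkout_reviewed_code("https://example/repo.git", "PR-123")
    vm = build_test_vm("ubuntu-22.04", "chrome-120")
    print(render_signed_pdf(run_suite(vm, ws)))
```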
So in short, once the suite is built, it's promoted to production, the validation production environment. Everything is push-button, there is absolutely no human intervention; we can do 100% automation. And the end product is this digitally signed PDF with all the timestamps, all the screenshots, all the reports. It looks kind of like your manually executed package; the only difference is that the machine is doing all the work, and there is no human involved at all. The reason we did that is that a lot of people do test automation, but the output doesn't match what life science companies expect. So you need to produce the output the correct way, and also build it into a format that your auditors will accept. That's what we have done: we built that rendering engine as well, and that standard is consistent across customers. It's very configurable; we can change certain parameters, like adding their logo or their confidentiality statement or instructions, and we can adjust that for them. We can even add sequence charts, showing how the software navigates a particular workflow. So it's pretty modern compared to manual testing, and very easy to read. And it's fast: a manual test might take, let's say, eight hours, and we could probably finish that in 15 or 20 minutes, maybe even less.
You already answered the question I was going to ask you, about your analysis of the numbers, manual versus yours. But you already answered: eight hours manual versus 15 minutes.

Perhaps, yeah, pretty crazy. And if you really bump up the horsepower, sometimes what happens is the target application slows down. Let's say we are fast but the target application cannot keep up; then we need to wait, to slow down a little bit so a page gets loaded. That's UI testing; API testing is typically very fast. But we can't do everything through APIs, right? It has to be through the UI, because the human interacts with the software via the UI. So I can't just use the API; I can't justify using the API in all cases. Some cases, maybe, but not in our case.

Very interesting. I knew a little bit about what the product does, but this last five minutes or so, how you explained it right from the requirements all the way to getting the PDF report and how the software works, I knew it in my mind a little bit, but you just explained it in a much nicer and more elaborate way. So thanks for that. You are in the validation space, and for the last decade or so you've been working on SaaS systems. So now I want to ask you a forward-looking question. You figured out continuous validation, you invested resources in building the platform and building the company, and now the new twist is the influx of artificial intelligence tools. Slowly, as the industry accepts more technology, or accepts more software with AI capabilities, and it gets into the QMS, the validation, and all the different use cases of the life sciences world, your challenge is now going to be not just SaaS systems but SaaS with AI features. So have you thought about how you will morph your platform to cater to even those kinds of use cases with AI coming in?
Sure. Let me answer your question in two sections. One is using AI within test automation, assuming the software is still typical software. That's one aspect of it. Next is, obviously, the target software itself having AI components: how are we going to validate that?

I meant the second one, but you can answer both.

Yeah, both; it gives you the whole picture. From a test automation point of view, there are many pain points. One is that the UI changes very often, especially with the cloud vendors; they might move a field, to give a very simple example, and it could be other changes also. So computer vision is really helping, because when you run computer vision models, or the modern test automation software, it can look at a data point or a screen object in many different ways. Traditionally, it always looks at the XPath, meaning it has a pointer to the object: the testing robot captures the document object model of the screen you're seeing, and then it knows, okay, this is the name of this particular object I need to test. It doesn't care where the object is placed; it goes, okay, this is username, this is password, it knows the handle to that particular object and uses it. But what happens a lot of times is that the attribute is dynamic, not static, and if the page is not coded correctly it's a mess; you just cannot use it. And I cannot use bitmaps: in the olden days, we're talking 20 or 30 years ago, they would use bitmaps to know the object's position on the screen. These days you can't do that; it's a very archaic way of doing it. So computer vision is solving a little bit of the problem, and we are also experimenting with it, like a lot of the modern tools. We're completely upgrading our platform to the latest xLM platform, and one thing we're considering is computer vision, because we want minor changes to be self-healing; the script can be self-healing, because that's a big maintenance overhead for us. Even if it's not self-healing, at least it tells us why it failed on this particular page, and that also makes it easier than rerunning the whole script to debug it. It's a little bit challenging, but that's one thing that will really help. One is computer vision.
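As a generic illustration of the brittleness Nagesh describes (not xLM's code), the Selenium snippet below tries a stable semantic handle first and only falls back to a position-dependent XPath; computer-vision-based tools add a further fallback by recognizing the element visually. The URL and field names are placeholders.

```python
"""Illustrative Selenium locator strategy: stable attribute first, brittle XPath last."""
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Chrome()              # assumes a local Chrome/driver is available
driver.get("https://example.com/login")  # placeholder URL

def find_username_field(drv):
    # Preferred: a semantic handle that survives layout changes.
    try:
        return drv.find_element(By.NAME, "username")
    except NoSuchElementException:
        pass
    # Fallback: absolute XPath tied to page structure; breaks when the UI moves.
    return drv.find_element(By.XPATH, "/html/body/div[1]/form/input[1]")

field = find_username_field(driver)
field.send_keys("test.user")
driver.quit()
```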
And also there is log analysis, because we generate a lot of data. Let's say we are testing every week, just as an example; that means 52 runs in a year, just going by that. It could be because of changes, meaning new releases, it could be because of patches, or because of nothing at all, you just ran a regression test. So when you have the data for different types of data points across the lifecycle of a software in one year, you have a lot of historical data. It's still not big data, but compared to the manual validation world it's far more data, because manual testing will not give me that data. Now I can use the data to see whether, whenever a new patch is released, we have more failures, meaning our discrepancy rate is higher compared to normal when there are no changes. Or maybe there are always bugs, we don't know; maybe when the number of users at logon exceeds 100, you get a lot of bugs, or tickets are opened because something is not working. So that is a different kind of assurance: you can look at software assurance in a different way now, because you have more testing data. In the manual world you could never do that. And also audit trail monitoring, which is a big problem, right? Just looking at audit trails and raising tickets or notifications the day some abnormal activity happens becomes very easy now with log analysis; it's very easy to do with the current toolset. So you can include all these things, computer vision, log analysis, audit trail monitoring, historical data, in a totally new way.
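As a toy illustration of the kind of trending this makes possible, the sketch below (entirely hypothetical numbers, not xLM's analytics) compares the failure rate of runs executed right after a patch against the baseline from routine regression runs and flags the outliers.

```python
"""Toy example: mine historical test-run results to flag runs whose failure
rate is unusually high, e.g. right after a patch. Data is made up."""
import pandas as pd

runs = pd.DataFrame({
    "run_date":    ["2024-01-07", "2024-01-14", "2024-01-21", "2024-01-28"],
    "after_patch": [False, True, False, True],
    "tests_run":   [120, 120, 120, 120],
    "failures":    [1, 9, 0, 7],
})
runs["failure_rate"] = runs["failures"] / runs["tests_run"]

baseline = runs.loc[~runs["after_patch"], "failure_rate"].mean()
threshold = baseline + 0.03          # illustrative tolerance above the baseline
flagged = runs[runs["failure_rate"] > threshold]

print(f"Baseline failure rate: {baseline:.2%}")
print(flagged[["run_date", "after_patch", "failure_rate"]])
```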
Now, let's say even before you go live, you can have 10 users using the software, just using it like they normally would; call it UAT, a preview, whatever it is. You can actually put in agents that study how the software is being used, and based on that I can generate the test cases. If not 100%, 80% of test cases can be automatically generated. It just looks at the pattern of usage. These are not plain vanilla "go from point A to point B, done" scripts. No, it constantly monitors how the user is using the software, what features they're using, what order they're doing things in, and not just one user, but one user over time and many users over time. It takes that and suggests what kind of scripts you should be thinking about. So that is happening.
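A minimal sketch of that idea, with hypothetical session data rather than anything xLM has published: count the most common navigation sequences across recorded user sessions and propose them as candidate test paths.

```python
"""Rank recorded user workflows by frequency and suggest them as test cases."""
from collections import Counter

# Each session is the ordered list of screens/features a user touched.
sessions = [
    ["login", "search", "open_record", "edit", "save"],
    ["login", "search", "open_record", "edit", "save"],
    ["login", "dashboard", "reports", "export"],
    ["login", "search", "open_record", "edit", "save"],
]

# Treat each full session path as a candidate workflow and rank by frequency.
path_counts = Counter(tuple(s) for s in sessions)

print("Suggested test cases, most common first:")
for path, count in path_counts.most_common():
    print(f"  seen {count}x: {' -> '.join(path)}")
```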
And with our latest platform, let's say I'm a validation analyst. Now I can just go through and say, okay, this is the path I want to follow, walk through the workflow, and the code is generated. This was done before also, but it was never that robust, meaning it didn't work well time and time again. Now it's almost like English: you can look at the code the system generates, it's very close to English, and it works very nicely, and it is translated into complex Selenium code in the back. But in the front you don't even need a coder; basically you need an analyst typing in the scripts, using the recorder, getting most of the script, then making some minor changes, and you're done; you can run the script, it's that easy. So now I don't need a senior developer to do it; the analyst can do it. In our current model, on the current platform, our business analysts write the feature file. What that means is that your test script is quasi-English: log in with parameters username and password, go here, go here. It's kind of like that, and every line has test automation code behind it, which the developers write. That's how they do it: they design the feature file, every line has code, and they make sure the code gets executed. Now that second piece is completely eliminated: when these folks write a feature file, that's pretty much the code I need. So these are many of the changes coming in test automation; I could go on, but these are the salient points.
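For readers unfamiliar with feature files, here is a generic Gherkin/pytest-bdd example of the pattern Nagesh is describing, where each quasi-English line maps to one step definition. This is not xLM's platform; the file names, steps, and in-memory app state are made up, and running it assumes a login.feature file containing the scenario shown in the comment.

```python
# login.feature (the quasi-English feature file the analyst writes):
#   Feature: Login
#     Scenario: Valid user can log in
#       Given the login page is open
#       When the user signs in as "qa.analyst" with password "s3cret"
#       Then the dashboard is displayed

# test_login_steps.py - each line of the feature maps to one step definition below.
import pytest
from pytest_bdd import scenarios, given, when, then, parsers

scenarios("login.feature")   # binds the scenario above to these steps

@pytest.fixture
def app_state():
    return {"page": None, "user": None}

@given("the login page is open")
def open_login(app_state):
    app_state["page"] = "login"       # a real suite would drive Selenium here

@when(parsers.parse('the user signs in as "{user}" with password "{password}"'))
def sign_in(app_state, user, password):
    app_state["user"] = user
    app_state["page"] = "dashboard"   # stand-in for the real UI interaction

@then("the dashboard is displayed")
def dashboard_shown(app_state):
    assert app_state["page"] == "dashboard"
```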
Now, the second part of your question. You know, this came to me even three years ago, when ChatGPT was not that well known. So this is definitely a challenge. My CTO is working on it. He comes from the e-commerce field, where he dealt with fraud and all that; his old company supported the infrastructure for fraud detection, and you're talking about transactions executed in a fraction of a second, thousands of transactions going through credit card processing with a lot of analysis to be done. So he is used to AI being used in that space; he has built software models, software programs, that actually provided those services. We're very lucky to have him on board. So now we are looking at it internally: if a biotech company, or any company, came to us saying we have an AI model, how are we going to validate it? From a concept point of view, you need to go back and see how the data model is built, what kind of data is fed in initially and how it's connected; that becomes a very important thing. Then after that, what kind of guardrails are you going to build? Meaning it has to work within certain guardrails; it cannot go beyond those guardrails. And what are those guardrails? Again, that depends on each area or domain. This is something the industry is still figuring out; it's not that I have answers to all these things. But those things will be part of the validation. We may not create those guardrails, but we will review them. So one is the data model and how the data model is built. The next is the guardrails: after the data model is working to a certain extent, you've got the guardrails that have to be in place. Now that's where my test automation comes into play. I can create many, many scenarios, and I'm able to do that, and I already know the expected results, like how the data modelers do it, right? They teach the software: when it gives an output that is not acceptable, you correct it. That's how even ChatGPT was done; thousands of people kept feeding it and cleaning it, basically. Now I can use the training data, meaning, just as in the case of training, I'm going to be testing, but I know even before it gives me the output what output to expect. We are also figuring this out, but obviously once we do some 10 or 20 different models that we experiment with in terms of testing, we will definitely have a better answer, maybe a year from now, and better answers for you. But my CTO is already working on it, and we'll be making some announcements in this area as well.
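A very rough sketch of what "testing instead of training" could look like in practice, with a hypothetical scenario set and a dummy model rather than anything Nagesh described in detail: a scheduled suite of input/expected-output cases with guardrail bounds, rerun every time the model or its data changes.

```python
"""Highly simplified sketch: run a fixed set of scenarios against a model and
check each output against expected values and guardrail bounds, so drift after
a model change is caught automatically. All data and bounds are made up."""
from typing import Callable, List, Tuple

# (input features, expected label, allowed score range)
SCENARIOS: List[Tuple[dict, str, Tuple[float, float]]] = [
    ({"dose_mg": 10, "age": 45}, "approve", (0.7, 1.0)),
    ({"dose_mg": 500, "age": 45}, "reject", (0.0, 0.3)),
]

def run_guardrail_suite(predict: Callable[[dict], Tuple[str, float]]) -> List[str]:
    failures = []
    for features, expected_label, (low, high) in SCENARIOS:
        label, score = predict(features)
        if label != expected_label or not (low <= score <= high):
            failures.append(f"{features}: got ({label}, {score:.2f}), "
                            f"expected {expected_label} in [{low}, {high}]")
    return failures

# Stand-in model so the sketch runs; a real suite would call the deployed model.
def dummy_model(features: dict) -> Tuple[str, float]:
    return ("reject", 0.1) if features["dose_mg"] > 100 else ("approve", 0.9)

if __name__ == "__main__":
    problems = run_guardrail_suite(dummy_model)
    print("PASS" if not problems else "\n".join(problems))
```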
Interesting. Yeah, this is an area that I'm really passionate about, and I like learning. Obviously, like you said, we don't have all the answers yet, because it's still new. Most of the use cases I've heard are in e-commerce, or social media, or content creation, and in those cases there is not much risk involved in using AI tools. Now, even if these tools are capable of doing tasks that you would do in a life science company, that doesn't mean we start using them. I've seen many posts on LinkedIn and other social media platforms of people talking about "oh, I use ChatGPT for doing X, Y, and Z in life sciences," but without understanding: hey, you might have just exposed some data, intentionally or unintentionally, to this AI tool. Because it's a black box, you don't know who has access to it, you don't know the challenges, you don't even know how the machine is going to process that.

I agree.

Right? So to me, the modern struggle is not man versus machine. The modern struggle is man with machine versus man without machine.

I agree with you, and I would like to add one more thing. That's why I think continuous validation becomes a play here. As the models change, you don't know what answers they'll give, so we need to constantly test them. Especially in critical areas, when a model is making critical decisions, you just cannot say "I'm going to test it once"; those days are gone. It has to be like real-time testing; your testing models are constantly running. That way, the data scientists know: if I feed this kind of input, what output can I expect? And based on that, it can be a little bit flexible, or it can be very rigid, as long as the guardrails hold. So I think continuous validation becomes even more important, given where we are heading in terms of AI and constant learning, constant output being given.
But in genomics, they're already using it.

Yes, yeah. Genomics and drug discovery, and even forecasting during mergers and acquisitions and market analysis. A lot of the fintech use cases, like Big Pharma scouting smaller companies and doing their financial analysis, their stock analysis. I've talked to one person who is into investing in life sciences, and he helps life science companies go through these analyses of, hey, this company at 5 billion is projected to make 8 billion over the next two years, or whatever. In those cases, yes, AI is amazing for that kind of analysis, looking at a company's past five years and its current numbers. So I have heard of that as well. But validation, quality, compliance, regulatory affairs: in these areas I still haven't heard of any use cases, though I'm sure they will come. In clinical trials I've heard of many, but not in clinical operations within a company. So yeah, I do agree with your point.

And this is kind of a good segue, because I know you mentioned having those guardrails around data. To me it's like this: even with SaaS systems, or let's even go back before SaaS systems, when it was mostly on-premise or desktop-based systems, data was still a challenge. Then with SaaS systems it became a bigger challenge, because now a third party basically has access to your data; you're just paying a subscription to use the software, and while you're using it there's no way to fully prevent somebody from getting access to your data. So with AI, I feel like before a company even decides to invest in technologies with AI capabilities, they really have to tighten up their data governance strategy. Because when they do come to somebody like you, who is a partner in helping them, you can't work with a client that is using an AI tool with a bad data governance strategy; that's going to set you up for failure. What do you think about that?
No, even without AI, data governance in the cloud is a challenge. Just to validate your vendor, meaning qualify your vendor, making sure you understand all aspects of their platform and how the data is managed: sometimes we are signing contracts very quickly. I'm just talking about life sciences here; as a consumer it's one thing, as a life sciences company it's different. I've been to some audits like that, where they brought us in after the fact, after they had already signed up, and now they're finding it challenging, they don't understand it. They said, oh my gosh, can you just join our team and lead the audit? That itself is becoming a challenge. So they need to have data governance, forget AI, just with using the cloud software itself. They should have a very good way to analyze their vendors and make sure that contractually they've buttoned it up, because data is important to us. Ultimately, what does my software do? It generates data, correct? It's creating records, files, all kinds of data. So they need to have that discipline, and I have not seen it in all instances. It's very important. Some of the mature pharma companies and some of the mature software vendors already have all this in the contract; the data governance strategies are clearly laid out, there are third-party audits performed on those data governance strategies, and they are certified by third parties as well. But for some of the smaller ones, customers have hired us to work on the data governance strategy, just to understand how the data lifecycle works. It's not just creating the data: in real time, is it replicated? What is the point of failure? What about your cybersecurity, and who has access to the data? There are many aspects. Even if something is broken, how do you get notified? How do you know that somebody is in there? What are the monitoring systems? So it's easier said than done, but we normally do those audits as well, to make sure the SaaS provider has this governance framework in place and that it will work for GxP customers. That's the way to do it.

Now you add AI, and it's even more. The model itself: we don't allow using ChatGPT via its public APIs, right? You're giving your data away, so I cannot accept it. You need to have it within your own instance.

Correct. Yeah, the data cannot leave your environment.

Either you start from scratch and train it, or if it comes as a pre-trained model, then you cut it off after that. You cannot just put your data into some public space and expect everything to be okay. I'm pretty sure these providers will learn. It has to be a controlled environment: they have to deploy that pre-trained model within a closed space, whether it's in the cloud or not doesn't matter, some kind of space that only I control, so that the data doesn't leave. So those things become even more important with data governance: not just how records are created, but also where is the model, how is the data being ingested, where is the training data? All of that becomes very important, I think.
Yeah, I definitely agree with you, because it's still a challenge. Like you said, even without AI it's been a challenge. And fundamentally, every computer system or software is either creating, processing, or doing something with data, and each data element might have different risks associated with it. Some data, if it's corrupted, you could probably live with it, or you could trace it back. But there are other elements that are very sensitive, and if they're corrupted, or go into the wrong environment or to the wrong person, that could be disastrous for the company, or for whoever is using it. So yeah, it's a very interesting space, and obviously we'll see how it evolves over the next five or ten years. Thank you so much for coming on. I have to ask you a few more questions. From your posts on LinkedIn, I see you're based in Florida.

Yes, we are headquartered in Jacksonville, Florida, and we have offices in Berwyn, Pennsylvania, and Mumbai, and we are very close to starting one in Singapore also. So these are the four main locations right now, but we serve a global clientele, because for us, especially in the continuous validation space, it doesn't matter where we are if it's a SaaS application. My team is in Mumbai; the software development and software testing happen out of Mumbai. Even within India it's mostly concentrated in Mumbai, but we also work remotely as well.
Harsh Thakkar 35:02
Great, great. So, are there any exciting features or items you're working on that you can share?
Nagesh Nama 35:09
Yeah, one thing we are trying to build right now is a platform where the data lives in a highly controlled environment, on one platform. If you're a smaller biotech, for example. I'm not talking about enterprises; they already have many applications, it's too complex, and one single application will never be enough for them, like the Johnson & Johnsons of the world. I'm talking about smaller to medium-sized biotech, medical device, and pharma companies. When they go from Phase 2 to Phase 3, that's the catch: they start getting serious, right? Now validation becomes serious, because they're going to go commercial and everything becomes a big deal, because one snafu and you can get delayed, which you don't want to happen. So for that clientele, our goal is to develop a data center in a box, meaning a cloud data center. We have come up with about 20 services, some of them core, some of them optional, but together they should give you the data governance. So cybersecurity, data backup, archiving, all of that is covered; data monitoring, auditing, and alerting, all covered by the data governance layer. And all the data resides in one place; you can say one database, though it's not exactly a database, but one storage location that every app can access. On top of this you build many apps: it could be document management, IT ticket management, risk management, audit trail monitoring, app validation, meaning all the apps get validated, or operations monitoring, infrastructure monitoring. So we are coming up with about 20 services, and we are in the process of building that right now. Every app will have its own validation module that constantly tests and makes sure the app is working properly. So the customer will get all these apps. If you come to me and say, I need 10 services, these are the ones out of your 20, the others are not important to me, build a data center for me, we can build that in three weeks, with best practices baked in, and we'll deliver it validated. That's what we're working on right now. We call it the data integrity suite. It includes continuous validation, but it also includes all the apps, in a box, basically.
Harsh Thakkar 37:19
Wow, interesting. Yeah, whenever you have that close to a demo, I would love to see it, because it sounds very, very interesting. Thank you so much for coming. I know a lot of the stuff we talked about today is mainly going to resonate with the digital transformation and software validation and testing folks, because you have so much experience in that area and in everything you're doing with xLM. So thank you so much for coming on to the show, and I wish you all the success with xLM. Feel free to reach out if I can help you with anything or if we can collaborate, because I feel like even my consulting projects are very similar to the kind of work you're doing, so I would love to maybe work on a project with you for one of our clients.
Nagesh Nama 38:04
That would be fantastic, Harsh. Thank you so much, and thanks for having me here.
Harsh Thakkar 38:08
Yep, appreciate it. Thank you. Take care. Bye.

Thank you so much for listening. I hope you enjoyed today's episode. Check out the show notes in the description for a full episode summary with all the important links. Share this with a friend on social media, and leave us a review on Apple Podcasts, Spotify, or wherever you listen to your favorite podcasts.