Life Sciences 360

Digital Health Expert: THIS is the No.1 Roadblock to Healthcare AI Adoption

Harsh Thakkar Season 3 Episode 79

Healthcare AI adoption is transforming the way we address risk, confidentiality, and patient care. In this episode, RJ Kedziora, co-founder of Estenda Solutions, talks about the practical steps to safely integrate AI into clinical workflows.

Learn how to manage data privacy, mitigate algorithmic bias, and keep a human in the loop to prevent misdiagnoses. Discover real-world strategies for using AI ethically, from ambient listening to second-opinion checks, and why it’s irresponsible not to harness AI’s potential.

The discussion also highlights how AI can enhance the roles of healthcare professionals, ultimately improving patient outcomes.

🎙️ Guest: Richard Kedziora | co-founder/COO of Estenda Solutions
🔗 Connect with Richard Kedziora: LinkedIn 

📌 Chapters:
00:00 Introduction to AI in Healthcare
03:01 Understanding Risks in AI Implementation
06:02 Confidentiality and Data Protection
09:03 Ethics and Explainability in AI
11:52 Bias in AI and Its Implications
15:09 Mitigating Risks in AI Usage
17:51 The Role of Humans in AI Decision Making
21:04 AI Enhancing Healthcare Professionals' Roles
24:10 Future of AI in Healthcare
26:57 Final Thoughts and Resources

Subscribe for more insights on AI in Healthcare!


For transcripts, check out the podcast website - www.lifesciencespod.com

Harsh Thakkar (00:08)
All right, so last week I was having coffee with one of the people that had reached out on LinkedIn, and they work in life sciences, in medical affairs to be specific. And after like 20 minutes of conversation, we started, obviously, talking about AI, generative AI. So I gave him some examples of some of the projects that we had done, and I set the stage and just asked him a question: what do you think?

Is generative AI ready to be brought into your company? And he had a very long pause. He was like, I don't know. And I was like, tell me more, why? And the first reason that he gave me was confidentiality. He and his team and his management were worried that they didn't have enough confidence about who had access to the data, where the servers were located, you know, what were other...

things going on with the use of AI, what were the other risks like hallucination, bias, all of that stuff. And despite knowing that there is potential with GenAI, they haven't really pulled the trigger, right? So I wanted to bring on a guest today who knows a lot about this topic. He is a trusted voice in the AI in healthcare space. And I want...

to ask him these questions and get his thoughts on what he has seen with some of the clients he's working with. My guest today is RJ Kedziora. He co-founded Estenda Solutions, a leading company specializing in building custom software and data analytics for healthcare and medical companies. And he's done a lot of projects to improve patient outcomes and implement better AI adoption with his clients. So without further ado,

I'm not going to waste any more time. Let's go in and have a chat with RJ. Welcome to the show, RJ.

RJ Kedziora (02:12)
Harsh, thanks for having me today. And what you were just saying there is fascinating to me from so many perspectives, because I think we're not approaching AI projects the right way in healthcare. We're not thinking about them properly. And that's a big challenge, because there's so much opportunity in AI, and particularly GenAI, in what it can bring to the table. We need to change our thinking.

Harsh Thakkar (02:27)
Hmm.

Yeah, and for someone that's clicked on this video, either by scrolling through their podcast platform, watching on YouTube, or through a search recommendation, why should they stick around for the next 30 minutes or so? What are we gonna talk about that's so important?

RJ Kedziora (03:01)
I think there's two things, and it's very risk related. So everything in healthcare is about risk. We are, after all, dealing with the health and wellness of people, but there are underappreciated risks and some that are overhyped. You were just speaking about the confidentiality of data. That is a very well understood concept. We can approach that. We do have to worry about confidentiality, but there are very well known methodologies for addressing it.

Harsh Thakkar (03:03)
Yes.

Hmm.

Mmm.

RJ Kedziora (03:30)
And that's why I

think we're not focused on the right things in AI and how we bring it to the table. So let's talk about those.

Harsh Thakkar (03:39)
Yeah, yeah, and let's talk about those, right? So you mentioned that we're not focusing on the right things. You also mentioned we're not approaching AI the right way. And beyond the story that I gave you, I have like seven other stories like that. But the point is that 85 to 90% of the people I talk to, their immediate reaction is, no, there's too much risk, I'm not ready, right? So my question to you is,

can we, as consultants like you are, or even people creating content on this topic or educating people, what can we do to sort of move these people more towards, okay, I'm ready to listen? Is this something that we have to do, or do we just have to wait for the AI tech companies to raise that confidence level for these clients?

RJ Kedziora (04:34)
I think there's multiple ways of approaching this. One, there is an education factor here: people just need to be more aware of what these technologies are capable of. And how do we apply the things that we're very comfortable with, like HIPAA and business associate agreements, to protecting that data? So your example earlier was the idea of, I'm worried about the confidentiality of my data.

And where does this go? If I input this into a large language model, the OpenAIs and the ChatGPTs, what happens to that? Well, those organizations will sign business associate agreements just like any other vendor that you have been working with for 20 years now. So that's your first level of protection when you think about confidentiality. Okay, let's put this in place: those trusted frameworks that we know are well accepted

Harsh Thakkar (05:12)
Hmm. Hmm.

RJ Kedziora (05:32)
by the hospitals, the health systems, by doctors, by providers. We can overcome barrier one. And even if you don't agree with that kind of thing, you can now take these and build them in-house and have control over these systems. There are various models out there that you can deploy yourself in-house, so then you know exactly what servers they're on. So we start having these considerations and concerns, and it's like, OK,

we have approaches to these problems, we just have to apply them to the AI. And even in the example of a patient: if a physician is using one of these systems and wondering, what's the right diagnosis for this patient? What am I not thinking of? You don't need to enter the patient's name into the system. You're gonna walk down the corridor and run into a colleague.

Harsh Thakkar (06:25)
Hmm.

RJ Kedziora (06:31)
ask them for a second opinion, maybe they think of something different that you didn't; you don't have to tell that physician the patient's name. Likewise, you don't have to tell the AI the individual patient's name either. So you can follow the HIPAA guidelines and protect the integrity of that data very easily.

Harsh Thakkar (06:51)
That's a good example, because we were in a similar situation with a client that wanted to experiment with different AI data extraction tools, but they didn't want us to share their actual product data or manufacturing batch records for their commercial products until they knew that the system we were working with was stable and we had done an assessment of those tech companies,

and we basically uploaded a pseudo record. So we told them, just show us what your real record looks like. And we changed the lot number, we changed the name of the company, like we changed everything so that there were no confidential details, but the template was still very accurate to the real data set. And yeah, if you're worried and you wanna just test the waters, that's definitely one way to go about it.
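
(A minimal Python sketch of the pseudo-record approach Harsh describes here: confidential fields are swapped for placeholders while the template stays intact. The record schema and placeholder values are hypothetical, not the client's actual data.)

```python
import copy

# Hypothetical confidential fields and the placeholders that stand in
# for them; this schema is invented for illustration.
CONFIDENTIAL_FIELDS = {
    "patient_name": "TEST PATIENT",
    "company_name": "Example Pharma (test)",
    "lot_number": "LOT-0000-TEST",
}

def make_pseudo_record(real_record: dict) -> dict:
    """Copy a record and swap confidential fields for placeholders,
    leaving the template and non-identifying data intact."""
    pseudo = copy.deepcopy(real_record)
    for field, placeholder in CONFIDENTIAL_FIELDS.items():
        if field in pseudo:
            pseudo[field] = placeholder
    return pseudo

real = {
    "company_name": "Real Biotech Inc.",
    "lot_number": "LOT-2481-A",
    "assay_result": "98.7% purity",  # non-confidential data stays as-is
}

# The pseudo record mirrors the real layout, so an extraction tool can
# be evaluated without exposing confidential details.
print(make_pseudo_record(real))
```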

What are some other... you mentioned confidentiality, and we've talked about that for a few minutes now, but are there any other things that you are hearing from people that are blockers when AI implementation is on the table?

RJ Kedziora (08:04)
I think it's interesting. One thing that comes to mind is a lot of projects start out with ethics. Like, let's write ethics guidelines for this project. I think the ethics are in place. If you look at the medical profession, what does everyone talk about? First, do no harm. Okay, there's our base ethics for how we're going to approach this project. And when you get into ethics statements, then you get into massive discussions, and it drags things out for weeks and months kind of thing.

Harsh Thakkar (08:12)
Mmm.

RJ Kedziora (08:33)
And I'm not saying that you don't have to apply ethics. I think they're in place already. We don't need a new ethics statement for AI projects. And in those ethics statements, a lot of things come up which I put in that category of we're not thinking about them the right way. One of the big ones is the idea of explainability. So you look at a large language model, and generally we don't know

how it's coming up with its decisions, whether you're asking it about diagnoses or asking it to generate a patient note. We don't know, and that's okay. Because what's interesting is, my father tells this joke from time to time: the person who graduates last from medical school, what do you call them? Doctor.

Harsh Thakkar (09:30)
Doctor.

RJ Kedziora (09:33)
And yes,

that individual still goes through extensive medical training. It's not like they just slid through. And there are genius-level physicians out there, but think of the bell curve: there's a lot of physicians, healthcare professionals, that are in the middle of the bell curve. When they make decisions, I don't know how they're making those decisions. And yes, I can question them on what went into the process, but you can question

Harsh Thakkar (09:41)
Hmm.

RJ Kedziora (10:03)
the large language models as well and ask them how they came to this. And as you look at the more advanced reasoning-based systems that are available now, they do provide that stepwise thinking of how they came up with the decision. But there's also this idea of second opinions. And there's a reason for those second opinions. It's like, OK, I hear what you're saying. Let me make sure that what you're saying

Harsh Thakkar (10:23)
Hmm.

RJ Kedziora (10:32)
Makes sense, and get another person's opinion. So yeah, when we think about explainability, I'm challenged by that one as well. It shouldn't be a barrier that people put up and make it into one.

Harsh Thakkar (10:48)
Yeah, and also I would argue that if you are not using technology and let's just even forget AI. If you're not using technology, if you're doing everything on paper and manual records, then you're relying solely on people and their professional expertise. But you could also work, let's say, in a company where there's high turnover. So every two or three years that person leaves and when they leave,

the knowledge is gone, except for what is written on a piece of paper and stored in their file room, everything else goes with that person, right? So the point I'm trying to make is we don't dissect a human brain and go to this level of, how did you make this decision? I mean, yes, we ask questions, we have slide decks, we have presentations, whatever stuff is stored, but we're not going to that level of detail. And I feel like...

with AI, we're expecting that level of precision, which is not fair.

RJ Kedziora (11:52)
Yeah, we are. And as I've been thinking about this over the weeks and months, I found one company that has some statistics in a research report, a company which I won't name, that said 14% of daily medical decisions aren't backed by evidence-based guidelines. 14%. So, you know, they're not just making it up. They do have experience,

Harsh Thakkar (12:14)
That's huge. Yeah.

RJ Kedziora (12:20)
but they're not looking at what the current literature, what the current medical guidelines are. There was another, an actual research report, by a physician at the Children's Hospital up in Boston associated with Harvard, where they ran a process over the course of a week where physicians were asked about the decisions they were making throughout the day. And they came to the conclusion that only 3%

of those decisions were based on some specific information that the physician could point to. So again, I'm not saying they're doing a bad job; they're probably doing a really good job. It is a good hospital. Everybody is well intentioned, but it is really difficult to figure out, okay, what is current medical practice, whereas the computers, the technology, have that at their fingertips.

Harsh Thakkar (13:14)
Hmm. I want to ask you, sorry, I saw, is my audio video clear? Because I saw an error on the top saying trying to reconnect.

RJ Kedziora (13:27)
Oh yeah, your video's a little blurry, but you're good. I can hear you.

Harsh Thakkar (13:28)
Okay, all right.

Yeah, where was I? I wanted to ask you about how to mitigate the risk, but before that I wanted to ask you about another risk. Yeah, so before we go into talking about how to mitigate these risks and what advice you give to your clients, I wanna talk about one more topic, which is introducing risk intentionally, right? So there's this thing called bias.

And what if you are only training your AI on good data or favorable data so you get the good output, but you don't want to exclude a lot of the outliers? And even without AI, in pharma and life sciences there's also this phrase, testing into compliance, where you keep testing until you get the right results and you discard everything else. And there's a lot of companies that get into trouble for that kind of stuff.

How do you, how do we address that? Because this has nothing to do with AI. This is somebody who is using it deciding that that's the way they want to go about it.

RJ Kedziora (14:39)
It's interesting because bias is definitely a challenge. It is a challenge because of where that data is coming from. That data is coming from us, us humans. We have inherent biases and that then is built into the data that the systems are trained on. But what I find fascinating about the use of AI is you can ask it about its biases.

And it is going to be much more open and much more aware of those biases than us as individuals. And we each have biases. That doesn't mean that we have evil intentions. It's just the way the brain works and the way we think about the world. Like you're not out to get that person or provide bad medical care. Like that is not the intention.

but it's how the brain processes information to make it easier to understand the world; the biases are inherent in that. And as you work with different systems, think about two examples. You have an academic medical center in a major city: how are they recording data? How are they capturing information, versus a small rural community health center, which is

historically understaffed and just doesn't have the resources? The data they have access to is going to be different between those different institutions. So as you're developing these AI systems, there's definitely a challenge there in making sure they work for all the different populations. We know today, and it's being worked on, that a bunch of the wearables, the medical devices out there that are using sensors to look through the skin,

don't work as well with people with darker pigmentation. It's a known bias in how these devices work. All the companies out there are working on addressing this and making it better. But that's a reality of these technologies. It's not talked about enough. We need to talk about it more and make people aware of it so that we can address it. But again, to the idea of the ethics and what we're talking about, biases exist in

Harsh Thakkar (16:38)
Hmm.

Yeah.

Hmm.

RJ Kedziora (17:04)
humans. Bias exists in clinical decision support algorithms already. It's not unique to AI. So we have developed and put in place processes, you know, to make sure that we address those biases. And we just have to continue that process as we develop the AI systems.

Harsh Thakkar (17:24)
Yeah, and we've talked a lot about the different risks: confidentiality, training, hallucinations, decision making. We also talked about bias. I want to switch gears and move to how do you mitigate these risks, right? So when I work with most of the clients on things like software implementations, or

if they're bringing new equipment to their manufacturing facility and the equipment is connected to a piece of software, or sending data to a piece of software through sensors, we do a lot of risk assessment. And even in the FDA and EU guidelines, you'll see risk-based validation or risk-based system testing, right? There are templates; I've seen everything from a single

two-page document to giant spreadsheets where everyone's sitting down feature by feature and going through the risks. Like, what if this functionality fails, or what if this data is put in incorrectly? What is our backup plan, right? Is this going to stop production? Is this going to hurt a patient? Is this going to mess up our data? Those are the three that we look at. Have you seen any example of a risk assessment or risk

mitigation analysis done on AI, and can you share how you or that client went about it?

RJ Kedziora (18:54)
It's a lot of the same ideas; the same risks apply that you have to think about: that there are biases in the data, that there are biases in the answers that can come out of the solution, and that, likewise, errors can occur in the form of hallucinations. The AI system can hallucinate.

A big risk mitigation factor at this point in time is you're not letting the AI make the final decision. Everything is going through a physician. That is probably your biggest risk mitigation from an overall people-and-process perspective, which is much like cybersecurity and other systems that we've historically developed over the years: there is that

person involved in the decision making. It's not just the AI saying, okay, here's what we're going to do. And that helps mitigate the risk. Transparency also is one of those things that helps mitigate the risk, particularly as you make something available on the market. If you are transparent as a developer about what data you used, how the system was trained, what that population was, what are the

Harsh Thakkar (20:09)
Hmm.

RJ Kedziora (20:20)
What are the metrics on the data that you used, such that the next person that's picking this up is like, okay, this makes sense for our system, or knows the things they have to be aware of as they implement it in their system. The interesting thing is the FDA director has said in public, we cannot police, we cannot look at all of these AI systems. We just don't have the capacity. Really, as an industry, we have to move to

Harsh Thakkar (20:43)
Mm.

RJ Kedziora (20:49)
more of a partnership model. How does industry help focus on addressing those risks and concerns? I like the idea various different people have created of giving a recipe for the AI, called model cards, and the Coalition for Health AI has put one out there that I particularly like. And it gets into those...

Harsh Thakkar (20:52)
Right.

Hmm. Yep.

RJ Kedziora (21:14)
You know, how was this system developed? What are the potential biases in the data? Who is it intended to target, the demographics of the population? That provides a lot of value. I think they still have to be made a little simpler, you know; some of the stuff you look at, you still need a PhD in AI to understand how it was trained. That makes it less useful. So how do we continue to simplify and explain that to the general population?

Harsh Thakkar (21:29)
Mm.

Yep.

RJ Kedziora (21:44)
But one of the interesting things is we can use the AI to do that.

Harsh Thakkar (21:48)
So are these recipes that you're talking about, is that a set of instructions that you would give to AI with, let's say, different conditions embedded? If this happens, give this instruction. If this happens, ask these three questions. Is that how it looks?

RJ Kedziora (22:14)
Yeah, it's not that prescriptive. It's more the background information about how the system was trained, who the intended audience is for use of that algorithm. So if you are that academic medical center in a major city, it was developed with this population of patients in mind, with this methodology of capturing the data. These were the lab values and, you know,

Harsh Thakkar (22:17)
Okay.

Mm-mm.

RJ Kedziora (22:44)
the high and low values used in assessing different parameters as part of creating that model. And then if you go to a rural community health clinic, you can look at that and be like, okay, we're not gonna match the expectation that was used to develop this algorithm. Or you can use it to develop the tests to see that it does work with your data within your institution.
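
(To make the model card idea concrete, here is a minimal sketch of the background information RJ describes a card capturing; the fields and values are hypothetical illustrations, not the actual Coalition for Health AI format.)

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical minimal model card: how a model was built and for whom."""
    name: str
    intended_use: str
    intended_population: str    # demographics the model targets
    training_data_source: str  # where and how the data was captured
    lab_value_ranges: dict = field(default_factory=dict)  # highs/lows assumed in training
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    name="example-risk-model-v1",
    intended_use="Flagging inpatients for early clinical review",
    intended_population="Adults at an urban academic medical center",
    training_data_source="Single health system EHR, hourly vitals cadence",
    lab_value_ranges={"lactate_mmol_per_L": (0.5, 4.0)},
    known_limitations=["Not validated at rural community health clinics"],
)

# A downstream clinic can check the card against its own setting before
# adopting, or use it to design validation tests on its own data.
print(card.intended_population)
```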

Harsh Thakkar (22:55)
Hmm.

Yeah.

No, that's an interesting statistic, or sorry, that's an interesting example that you gave. I was thinking more along the lines of how you have a system SOP or a work instruction where you basically have these different steps and troubleshooting. When you said recipe, I was thinking something along those lines. You mentioned that having the human in the loop is...

RJ Kedziora (23:32)
Right, yeah, that makes sense.

Harsh Thakkar (23:41)
There's no question about that. That's always going to be there, especially for regulated use cases like working in healthcare or life sciences. I don't see the human element going anywhere in the near future. Maybe some of the jobs will change. You know, like you mentioned in something, I don't know if you wrote it on LinkedIn or talked about it at some event, but you had this phrase about AI helping

doctors be doctors and nurses be nurses. You want to expand on what you meant by that phrase?

RJ Kedziora (24:17)
I'm trying to remember where I saw that, but yes, it is exactly that. So doctors, nurses, healthcare professionals got into the profession to help individuals, to help people. And unfortunately, what has happened with the advent particularly of electronic medical records is that physician is like turned around facing the computer, trying to type away at that keyboard to enter the notes and make sure all the information is captured.

Harsh Thakkar (24:21)
Yeah.

RJ Kedziora (24:47)
for meaningful use, make sure all the check boxes are checked. And are they adding a lot of value? No, they'd much rather be talking and interacting with that patient. And that's where we can use the power of this technology. Ambient listening is becoming a huge use case in healthcare. Now let the AI listen to the conversation, record that information.

Harsh Thakkar (24:50)
Yeah.

Hmm.

RJ Kedziora (25:11)
Probably a secondary use case is chart summarization. You know, as more and more information gets buried, we bring more information to the table with wearables. You know, so much of your health happens outside of those four walls. I wear one of the rings; various watches can track all sorts of metrics. You can share that with your physician, healthcare practitioner, as meaningful information, not just dumping data on them, because they don't have time to consume it. You can help

Harsh Thakkar (25:41)
right.

RJ Kedziora (25:41)
make a difference. And so now you take the ambient listening, you take the summarization capabilities, and the fact that the computer, the tech, the AI understands current medical guidelines, is aware of those. It can bring up interesting questions. The physician can use it to better explore what this patient's diagnosis may be. When you think about people with rare diseases,

a doctor's not going to see a particular condition very often, so it's going to be harder to diagnose. Well, the computer is aware of all of the rare diagnoses out there and can help you think through that process and just make it a better experience for everybody. Let doctors get back to being doctors.

Harsh Thakkar (26:29)
Yeah, and you gave a good example when you mentioned ambient listening. And I know a bit about medical records and stuff like that because my wife is a physician assistant and she's written her fair share of notes in her career. Like, the most common way is to type in your notes, which pretty much a lot of medical professionals have to do.

I know there is some software with dictation capabilities that is really good. So maybe you're taking some workload off because you don't have to sit and type, but then you're spending as much time correcting all of this stuff. If a patient has a really hard last name or first name or whatever, there are gonna be a lot of errors there, and you're spending more time correcting than if you had just typed everything out. But...

The scary thing is, I have a friend in healthcare who mentioned some tool. I can't remember the name of it, but it basically works only on Apple MacBooks and Apple devices, and it does exactly what you said. So that tool can listen, it can record stuff, it can also record what you're saying. So it's not like you have to invite it to a meeting or anything. It

works in the background. You just have it on your computer, and it remembers which browsers you've gone to, it remembers what you've typed, it remembers everything, and you can search it. You can say, when did I tell Richard, or when did I tell RJ, that I was gonna give him this project plan in this meeting? And it would say, you told him you were gonna give it on the third of May, right? So it goes to that level. And...

But again, the reality is, will somebody ever use that kind of stuff in healthcare? And if they even use it, they have to get consent from the patient: we have these tools that are listening. So I'm just sitting here, you are talking, you're explaining whatever problem you have with your health condition, or you're telling me your, you know,

whatever it is, and I'm not typing; I'm just listening to you, and I go back after and fix all of that. So I was really surprised when he told me that. Yeah.

RJ Kedziora (29:01)
It's interesting, and a challenge: do you have to tell the patient? So yes, full transparency is a good idea. But do I really need to tell the patient? I am personally challenged by that. And I challenge others: is it necessary to tell them that there's an AI listening?

Harsh Thakkar (29:08)
Exactly. That's... I don't know the answer. Yep.

Mm, mm.

RJ Kedziora (29:28)
I

think those fears are overblown, and that helps promote the idea of fear: okay, we have to let them know. So if you're using a video recorder kind of thing, you're not always telling them this is being recorded. Sometimes you do. And you do want to be upfront with your patients, et cetera. But I think this just has to become part of our standard world. Think about it from a marketing perspective: there is

Harsh Thakkar (29:43)
Yep. Yep.

RJ Kedziora (29:58)
so much information known about you as an individual when you walk into a store. You know, the odds are you're going to turn left first and not to the right. The optimal place for product placement is on the middle shelves, because that's where you're looking first, not up top, not down at the bottom. There is so much information, which some people think of as an invasion of privacy, but it's making my experience better.

Harsh Thakkar (30:09)
Yeah.

Yeah.

RJ Kedziora (30:28)
So how do we take those things that are making that experience better and bring them into healthcare, to make the healthcare experience better, more efficient, improve my health? Like, when that physician has those seven to 10 minutes with you, it's not a lot of time. So how do we optimize that? And I think there's huge opportunity to make the system much more efficient. And again, in terms of those challenges of AI and...

Harsh Thakkar (30:28)
Mm, mm.

Yeah.

Hmm.

RJ Kedziora (30:57)
I've seen various statistics of 7% error rates in speech recognition transcriptions, 9% error rates. There was one study where patients looked at their medical chart, and one in five found issues; 40% of those were classified as serious mistakes in the medical record. None of those statistics were because of AI. They were because of people, people making mistakes.

Harsh Thakkar (31:14)
Hmm.

right.

RJ Kedziora (31:27)
We have systems and processes in place to handle those, to address those. So with speech recognition, a second person does look at it to make sure, and then the error rates drop significantly. If we apply those same time-proven processes to AI and its use, the whole system becomes much more efficient.

Harsh Thakkar (31:39)
Hmm.

Yeah, and you know, I still think about it like when you... How about now? Is that better?

RJ Kedziora (31:53)
Yeah,

you're back now.

Harsh Thakkar (31:59)
Okay, all right. Yeah, no, that's... when you asked that question about, do I have to tell them, right? Like, the friend that I was talking about, he was in a similar situation. He was working at a big medical practice or a hospital network, and he was like, you know, I don't know if I should, you know, because obviously his laptop and all his devices are monitored by IT and...

You know, just by the way, this is not, we're not saying go install these kinds of tools and do it. Everyone that's listening to this or watching this, please, this is a disclaimer: check your company's policies, check your privacy laws in whichever state or country you're living in. This is just meant to be an open discussion of, you know, what kind of tools and technologies are out there. So make sure you understand before you start implementing these tools in your work.

But that was his question, that was his ultimate question. Do I need to tell, or should I just start using it and be a much better medical provider and more efficient with my time?

RJ Kedziora (33:07)
Yeah,

think of it, and here's an example. So when you're prescribing medications or, you know, delivering something through an IV, and there's a complicated calculation, particularly in children, based on weight, does someone do it in their head, or do you have the computer do it? The computer is most likely doing it to make sure it's right, or at least it's being double-checked. But do you go to the patient and go,

Harsh Thakkar (33:15)
Mm.

Yeah.

RJ Kedziora (33:36)
I'm using algorithm X, Y, and Z from this company? No, it's like, this is standard practice. I want the computer, the technology that doesn't get tired, that has that knowledge, and that has been tested and proven out, to help make those decisions. Yes, today, GenAI is still very new. There's still always a human in that loop. But at some point,

it is going to be irresponsible to not use the AI. Today, it's still in that questionable category, but five years from now, it might be standard medical practice to use the AI.
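
(A toy Python illustration of the weight-based calculation RJ mentions; the per-kilogram rate and dose cap below are invented placeholders, not clinical guidance.)

```python
def weight_based_dose_mg(weight_kg: float, mg_per_kg: float, max_dose_mg: float) -> float:
    """Compute a weight-based dose, capped at a maximum single dose.

    All parameters are illustrative placeholders, not clinical values.
    """
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return min(weight_kg * mg_per_kg, max_dose_mg)

# Hypothetical example: 15 mg/kg with a 500 mg cap for a 12 kg child.
print(weight_based_dose_mg(12.0, 15.0, 500.0))  # -> 180.0
```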

Harsh Thakkar (34:21)
Yeah, so we've covered a lot today. We've covered a lot of the risks, you know, confidentiality, hallucinations, how to train the AI, making sure you're documenting all the right training sets. We also talked about bias and how to, you know, prevent that. You also shared examples of how to go about risk mitigation and, you know,

basically, the last few minutes we were talking about, I see that as, you know, to summarize it: how can humans and AI coexist, let's just put it that way, is what we've been talking about for the last five, 10 minutes or so. It's been great, RJ, like, this is amazing. I know you're here a second time. The first time we talked about digital health, so this was a different conversation, and I'm so glad that you

decided to come back and share everything that you are doing in this space. For people that want to connect with you or talk more about any of the projects they're working on, where can they find you?

RJ Kedziora (35:31)
LinkedIn is usually the best place, and RJ Kedziora will find me. Estenda, my company, estenda.com, is out there kind of thing. And fascinatingly enough, I'm on a journey writing my first book, about productivity and how we shouldn't focus on time management. You really need to think about energy management. So sleep, eat better, move more, and your productivity is going to...

Harsh Thakkar (35:47)
Awesome. Nice.

Hmm.

Nice, yeah, that's an interesting take, and I personally have been reflecting a lot on that topic, just because doing content and doing consulting and all of this stuff, and personal life, I just had my second child three months ago, so I'm not getting enough sleep. So yeah, I definitely need that. Let me know when the book is out; I'll be one of the first to grab it, yeah.

RJ Kedziora (36:20)
wow.

Harsh Thakkar (36:31)
All right, RJ, it's been great, and this was a very interesting conversation. Thank you for coming on. Before we drop off, any final words for whoever's listened to this episode and is thinking about either exploring possibilities with AI or just getting themselves educated or upskilled? Where can they start after they finish listening to this episode?

RJ Kedziora (36:59)
If you haven't used it, go start using it. Like that is the biggest thing. Start educating yourself on what it's capable of. And if you're not impressed, ask it how to use it better. It will help you become a better user of the technology.

Harsh Thakkar (37:16)
Mm.

Interesting. All right, that's it for this podcast, and I'll see you in the next one. Thank you.
