Augmented Reality in Surgery - What is the Future
Archan Khandekar, MD

Good morning everybody. Thank you very much for joining. I think it is 7, so I'll just get started; we have another presentation to follow me. Today we'll be talking about AR in urology, mainly surgery in urology. I have absolutely no disclosures for this presentation. So let's begin from the absolute basics. This is the first slide; this is what I'll mainly be talking about. AR as a concept was first defined back in 1994. This came out of a Toronto and Japanese collaboration, and they defined a whole spectrum, going from a fully virtual environment all the way to the real environment, and how it could be divided into VR, MR, and AR. It's a very technical but nice paper to read. When you look at it in the real world, this is how we are set up at the moment: the spectrum gets divided into VR, MR, and AR today. If you look at real-world examples, you have a lot of examples of VR, with virtual headsets where you can immerse yourself completely. A common example of VR would be, say, the da Vinci simulator, where you have absolutely no contact with the real world. Then comes something like augmented reality, where you cannot actually interact with objects, but you can place them in your world. The commonest example we see today is Amazon: you look at any product and you can just place it in your room. That's the most common example of augmented reality. Mixed reality is what we'll mostly be describing today, where you have a HoloLens, one of the proprietary MR devices, through which you can interact with the objects you're placing. So where did we start? The whole concept started back in 1968 with Dr. Ivan Sutherland. This device is called the Sword of Damocles, and the whole idea of what a sword of Damocles is, is very interesting; it's beyond the scope of this particular lecture, but it's definitely worth reading about. Back in 1968 he imagined that you could have video that could interact with what was going on outside, and this is the lens he devised. The term AR was actually coined much, much later. That was at Boeing, where they were assembling airplane parts with AR to form the back end of engines, and that's where the term came in. Over the last few years, especially over the last decade, the biggest thing that actually pushed AR into the real world was Pokémon Go. Pokémon Go was the first game with AR in it that had over a million downloads. And subsequently, over at least the last 5 years, you have the Apple Vision Pro, the Meta Quest 3, the HoloLens; they have landed in a lot of people's living rooms, and this has become a consumer product now.
Now, so far as medicine is concerned, so far as surgery is concerned, which are the specialties that have really taken this up? The ones where AR has really taken off are neurosurgery, orthopedics, and ENT / head and neck surgery, and there is a very specific reason why these particular branches of surgery have taken off, and a reason why urology is not leading the way. For a technology like this to flourish and function, a few things need to be very clear, and one of them is stable anatomy. In neurosurgery, orthopedics, and skull base surgery, the patient stays absolutely still; there is essentially no movement. That stable anatomy provides a great segue for a technology like this, whereas in any soft tissue surgery, where we are doing kidneys, bladders, prostates, there is a lot of respiration, you have to do a lot of gating, and during the actual surgery there is a lot of manipulation of tissues. There is also a phenomenon we'll be talking about later called registration drift; we'll be talking about registration, and about why registration drift is a problem. Also, the landmarks in neurosurgery or ENT are mainly bones, so it's much easier for the images to get registered and for the intervention to happen. These are also branches that have traditionally relied a lot on image guidance during their procedures, whereas we have relied a lot more on tactile feedback, visual cues, and things like that. So just to put this together, this is an example from neurosurgery, from UM itself. This is a neurosurgery OR at UM, and you can see they are planning a cranial case, a left parietal lobectomy. You can see how easy it is for them to mark out the landmarks on the skull and then replicate the same thing on the patient. It's very easy for an algorithm to point out exactly where things are on the MRI and where things are on the skull, because there is absolutely no movement, and this stays the same throughout the course of the surgery. So for an intervention on the skull or inside the ear, it's very straightforward: no movement, no registration drift, and the actual procedure can take place. This is one of the ways that registration can occur. We'll be talking more about registration, but this is just to give you an idea of how it takes place. What we mean by registration is that you're making the system aware of where things are placed in the outside world as compared to the image, which is the MRI in this case; here you can see it's actually a combination of a CT as well as an MRI. And once you have that registration, you can see how well you can use it for any sort of intervention. Anyway, moving over to urology; we'll come back to those slides later on. In urology, this technology has been tried in outpatient counseling, which we have also been involved in, and a lot of work has been done in the partial nephrectomy space.
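To make the registration idea concrete, here is a minimal sketch of paired-point rigid registration, the kind of landmark matching the neurosurgery example relies on. This is an illustrative implementation of the standard Kabsch/SVD method with made-up landmark coordinates, not the actual navigation system's code.

```python
import numpy as np

def rigid_register(image_pts, patient_pts):
    """Estimate rotation R and translation t mapping image-space landmarks
    onto patient-space landmarks (least-squares, Kabsch/SVD method).
    Both inputs are (N, 3) arrays of corresponding points."""
    img_c = image_pts.mean(axis=0)
    pat_c = patient_pts.mean(axis=0)
    H = (image_pts - img_c).T @ (patient_pts - pat_c)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = pat_c - R @ img_c
    return R, t

# Example: landmarks picked on the MRI vs. the same landmarks touched on the skull
mri_landmarks = np.array([[0, 0, 0], [50, 0, 0], [0, 60, 0], [0, 0, 40]], float)
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
skull_landmarks = mri_landmarks @ R_true.T + np.array([5.0, -3.0, 12.0])
R, t = rigid_register(mri_landmarks, skull_landmarks)
print(np.allclose(R @ mri_landmarks.T + t[:, None], skull_landmarks.T))  # True
```

With rigid anatomy like the skull, a handful of such landmark pairs is enough; the soft tissue problems described above come from the fact that this single transform stops being valid the moment the tissue moves or deforms.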
A lot of that has come from the European programs, but what we'll be talking about more is how we use it in outpatient care, and we'll also briefly be talking about how we are trying to develop our augmented reality prostate biopsy program; that will be the crux of the last part of the talk. AR has also been used in robotic radical prostatectomies. Because it's a procedure that's similar to, say, IR procedures or neurointervention procedures, the same thing has been tried for PCNLs, although respiratory gating has been a big problem, along with deformation of tissues and changing the position of the patient from prone to supine, so it has not really taken off there. Again, in training and simulation, like we discussed earlier with the da Vinci robot, VR itself is a big part, but AR has not really taken off here either. So when we talk about this, is this really a new idea? The first paper about AR in partial nephrectomies came out about 16 years ago, from a European group. Ever since then there has been talk of having an AR-guided approach to doing partial nephrectomies, to removing tumors, looking at margins, things like that. Porpiglia's group from Italy has published and published over the last 5 or 6 years, but it's been difficult to make it standard of care. Even today, you do not see AR as part of the standard of care in any surgery. And what is the reason for that? For that, we will go back to the absolute basics of how something like this is created. The way something like this is actually done is a four-step process. You have the imaging in the form of a CT or MRI on your PACS, then that is brought over into software and segmented, and a 3D model is made out of it. That is only the initial part: the first part is creating the actual 3D image for you to see. Even this was not possible until maybe 5 or 6 years ago; there used to be a lot of manual work that had to be done. But with AI and machine learning algorithms, a lot of the segmentation has been completely automated, and that is what makes it doable for us to view something like this in real time. You also now have the hardware: the mixed reality headsets have the low latency and high data throughput through which something like this can be viewed. As we discussed, you have the algorithms to dissect through all these images and create a 3D model in good time. And regulation has also been helping quite a lot: the FDA has made specific rules for MR/AR software, and a lot of approvals have been received for this hardware and software integration recently. So how does this work? This is the image being taken up from PACS, the picture archiving and communication system, on a computer. This is something that we did a couple of weeks back. This is an anonymized scan coming from a patient, and you can see it coming straight into our system. Then there is the segmentation; when I say segmentation, it is the division of the soft tissues separately. Here you can see the tumor sitting on the kidney, and you can see the collecting system and the ureter.
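As an illustration of that first step, here is a minimal sketch of loading a CT or MRI series that has been exported from PACS into a local DICOM folder, using SimpleITK; the folder path is hypothetical, and the actual PACS retrieval at any site would go through its own interface.

```python
import SimpleITK as sitk

# Assumed: the CT/MRI series has already been exported from PACS to a local
# DICOM folder; the path below is illustrative, not the actual system's.
series_dir = "/data/exports/case_001/ct_abdomen"

reader = sitk.ImageSeriesReader()
dicom_files = reader.GetGDCMSeriesFileNames(series_dir)  # sort slices into one series
reader.SetFileNames(dicom_files)
volume = reader.Execute()                                # 3D image with spacing/origin

print("size:", volume.GetSize(), "spacing (mm):", volume.GetSpacing())

# Hand the voxel array (z, y, x) to whatever segmentation model is in use.
voxels = sitk.GetArrayFromImage(volume)
```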
The vein as well. This is automatic segmentation that has been done by the system itself, through an ML architecture called U-Net, and we'll talk briefly about what U-Net is. Compare this with what used to happen previously: even, say, about 2 years ago, when you had something loaded up like this, this is what you got. This was the CT scan that was loaded, and although you could look at it completely, you had to actually identify the area that you wanted to look at properly, going from the 2D to the 3D space. This is even a segmented model, but even with a segmented model where soft tissues are segmented out automatically, you had to go ahead and separately select out your arteries, your veins, your collecting system, your tumors. Now, with all these algorithms, it just tells you that these are the separate parts with different consistencies, and you can select them or color them or modify them the way you want. This is still semi-automatic segmentation; what used to be done maybe 6 years ago was completely manual, which made it almost impossible for this to be used in the OR. So what is U-Net? What is this model? This is something that came out of Google DeepMind with some Austrian and German collaboration. The paper came out about 6 years ago and defined an AI algorithm for how something like this could be done. And when I say segmentation, for all practical purposes it is the conversion of your 2D CT or MRI slices into a 3D volume. The reason it is called a U-Net is literally how it looks. The way it looks at a picture is that it looks at the broad picture first. Suppose you have a map of the city and you want to point out where you live: it will look at the whole city and how everything is arranged, then it will pick out your house itself and demarcate it. Basically, that's what it does on an MRI or a CT scan. It looks at the whole picture, it looks at what you have asked it to segment (and this depends on what that particular model has been trained on), then it goes deep into it, separates out the nitty-gritty, separates out the particular tissue or bone or substance that you want, and then goes back up the U, decodes it, returns to full pixel resolution, and produces the image. That's how the U is formed, and that's literally why it is called a U-Net. Once you have the image segmented from the U-Net (we could discuss this in much more detail, but that would take a long time), there is something called the marching cubes algorithm, another algorithm through which the segmentation is converted into a 3D surface model. And then all of this is put into an STL format. STL is a format accepted by most AR and 3D modeling programs; it's even accepted by a lot of Adobe programs, even Illustrator picks up STL, so you can either manually or automatically use these 3D models to, say, color code them or actually use them in surgery.
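To illustrate the segmentation-to-STL step described above, here is a minimal sketch that runs marching cubes on a binary mask and writes the resulting surface out as STL. The use of scikit-image and trimesh is an assumed tool choice for the sketch, and the spherical "tumor" volume is synthetic.

```python
import numpy as np
from skimage import measure
import trimesh

def mask_to_stl(mask, spacing, out_path):
    """Turn a binary segmentation mask (z, y, x) into a surface mesh and
    write it as STL, the format most AR/3D tools accept."""
    verts, faces, _, _ = measure.marching_cubes(mask.astype(np.uint8), level=0.5,
                                                spacing=spacing)
    trimesh.Trimesh(vertices=verts, faces=faces).export(out_path)

# Toy example: a spherical "tumor" mask in a 64^3 volume with 1 mm voxels
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
tumor = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2
mask_to_stl(tumor, spacing=(1.0, 1.0, 1.0), out_path="tumor.stl")
```

In a real pipeline the mask would be the per-structure output of the segmentation network (kidney, tumor, artery, vein, collecting system), each exported as its own STL so it can be colored and toggled independently.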
U-Net is obviously not the only model. It was the model released back in the day, but it has been built upon quite a bit, and there have been various versions, with nnU-Net being the one used most of the time so far as medical use is concerned. For different types of segmentation with different purposes, there are different U-Net variants available. This is the work we presented last year at the AUA, where we used this segmentation; let me just show you what we did. After images were pulled from PACS in real time, basically after the patient showed up to the clinic, we pulled their images from PACS and showed them to the patients, and this is what the patient was able to see. This is a patient who had a left-sided adrenal tumor. We were able to point out exactly where the tumor was, and the patient could wear the headset, hold the scan in his hand, manipulate it by himself, and have a much better understanding than from a 2D picture. Eventually, what we found was that there was a much better understanding of the anatomy and pathology of the tumor for patients with renal tumors. We are trying to replicate this for benign conditions also. And then, coming to the OR, this is from only last week. We are trying to do something similar to what that group did. This is us setting up a procedure, a nephrectomy. In the da Vinci picture-in-picture view, this is the view coming in from our segmented image of the tumor. So this is only step one; this is only the segmented image that you can see on the left side. This can be manipulated intra-op; as the surgery is starting, you can keep eyeing it, you can look at it at different steps of the surgery, and you can plan out your surgery according to that. But again, this is only the first step. So where have we reached? This is the more cutting-edge work, coming out of Dr. Mottrie's group from Belgium, and what they have managed to do is not just segment this, but register it as well. When I say register, registration basically means that the image itself is placed on the actual patient pathology, while also making sure that the overlay moves adequately as the pathology moves with respiration and with soft tissue manipulation. But to do something like this, there is a lot of compute power that goes in. You have data coming in from the da Vinci, and it goes through a whole separate system; this is a proprietary system built and sold by NVIDIA itself, and it's probably the only company in this space bigger than Intuitive, so I do expect it to make some headway. There is a separate RTX 6000 GPU dedicated to the AR overlay. It takes up data from the da Vinci, it takes the segmented data from your MRI or CT, it overlaps both of them, and then it sends the signal back to the da Vinci surgeon console in the picture-in-picture mode. This is used in maybe 3 or 4 institutions around the world, and when I say 3 or 4, I mean urology institutions; a lot of neurosurgical institutions use it as well.
It's based on NVIDIA Holoscan, and NVIDIA also has its own complete portfolio for healthcare services that this is a part of; it's called NVIDIA Isaac. It has a whole portfolio, and AR is just one of the capabilities; it has solutions for the whole healthcare system, but this is the one we are looking at. And eventually, this is again Dr. Mottrie's group's work, and this is what it eventually looks like. In the bottom left corner you can actually see the segmentation of the tumor being toggled on and off. They call it a virtual ultrasound; they have their own proprietary name for it that I don't have on this slide, but they basically use the ultrasound and the AR overlay to create a virtual ultrasound through which you can actually see the tumor, and it also stays there once the tumor is cut. So they have looked at the margins, and they have shown in a very small series that the margin rates are better when doing something like this. On this front, there has been a small systematic review looking at about 8 very heterogeneous studies that has shown that doing something like this decreases blood loss and ischemia time and gives better enucleation rates. But again, a lot of metrics have not really changed: no change in post-op GFR in this particular systematic review, and no real change in surgical margins or complication rates. So again, very early stages for something like this. The fact that it has only started building up over the last 3 or 4 years tells you how much compute power really drives something like this. Moving on to the second part of the talk, we will be talking about prostate biopsies. As we all know, over the last 20 years, ever since MRI has come into play, and over the last decade, ever since the Ahdoot paper, we do have MRI-guided fusion biopsies pretty much becoming the standard of care as part of all guidelines. But there's something new that we would like to propose, and this is something that we have been trying for the last couple of years now: the AR-guided biopsy. Why would you do an AR-guided biopsy? That is the first question. The first easy answer is a heads-up, target-overlaid view where you don't have to keep looking at the screen. You have direct spatial alignment, where you feel like you're going straight into the prostate while looking at the patient, and we'll show what that looks like. Fewer passes: you don't have to look back at the screen, you don't have to look back at where the needle is going. You also have better visualization, that is what we believe, and it's easier to teach. This is how a conventional fusion biopsy is done today: you load the prostate MRI, you load and transfer the segmentation that is already done, then the fusion occurs, where you fuse the ultrasound with the MRI; then you use the fused view to navigate where the needle is going into the targets, and then you sample. And this is what we are proposing; this is actually the workflow that we have been trying: we load the prostate MRI, we generate the segmentation.
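To show the basic idea of that overlay step, here is a minimal sketch that alpha-blends a projected tumor mask onto a single video frame with OpenCV. This is a generic illustration only; it is not NVIDIA Holoscan code, and the frame, mask, and blending weight are all made up.

```python
import numpy as np
import cv2

def overlay_segmentation(frame_bgr, mask, color=(0, 0, 255), alpha=0.4):
    """Alpha-blend a projected tumor mask onto one video frame.
    `mask` is a 2D boolean array in the frame's pixel grid -- here it stands in
    for the segmented model already projected through the camera/endoscope pose."""
    overlay = frame_bgr.copy()
    overlay[mask] = color
    return cv2.addWeighted(overlay, alpha, frame_bgr, 1 - alpha, 0)

# Illustrative frame and mask (a real system would pull frames from the robot's
# video feed and render the registered 3D model each frame on the GPU).
frame = np.full((480, 640, 3), 80, np.uint8)
yy, xx = np.mgrid[0:480, 0:640]
tumor_mask = ((yy - 240) ** 2 + (xx - 320) ** 2) < 60 ** 2
blended = overlay_segmentation(frame, tumor_mask)
cv2.imwrite("pip_overlay.png", blended)
```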
The segmentation is based on the U-Net platform, similar to what we just showed. After that, we select or load the lesion points. We have been trying to generate a full AI-to-AR pipeline through HRS, or we also have a method of conventionally loading the PI-RADS lesion points that are contoured on the MRI. Then we connect our AR headset and place the custom references, and these references are important for registration, as I'll be showing. Then we connect the ultrasound and collect the ultrasound frames, which is the sweep that we also do on the UroNav. Then we register the volume, adjust the registration, and finally use the biopsy needle, which is again tracked, to get samples. The main differences are the segmentation that we already discussed, connecting the headset, and then the registration, which is what we'll be talking about. So this is what a segmented prostate actually looks like. I'm sorry, I don't know why that didn't play on the main screen, but this is the MRI-segmented prostate, similar to what we showed on the kidney. You have the prostate MRI that is put into the system, and we have now trained it to segment out not just the prostate but also the structures around it, and we are using those for registration purposes; I'll come back to this as we go along. The reason I'm showing the segmentation of not just the prostate but the things around it is registration. When I say registration, it's basically attaching the MRI to the actual patient's body; you need more references than just the prostate, and what we have seen is that using the structures around the prostate really helps us do that. So this is the whole AI-to-AR platform that we are trying to develop. This is something that's coming from the RATA group, where they have the HRS algorithm through which they look at specific points on the prostate along with the PI-RADS lesions. The HRS output also comes along as an STL file, so it's easy for the software to recognize. So along with the PI-RADS lesions that are pointed out on a conventional MRI, you also have the STL file taken from the HRS. Then you have the alignment of the HRS with the MRI, and then we not only take samples from the PI-RADS lesions, but we are also looking at the HRS targets, similar to how we have been doing it on the UroNav in the office. This is how it actually looks; it's pretty straightforward. This is again a Windows-based software, so we have an STL file along with the T2 small-field-of-view file, and you just pick it up, and on the prostate you can actually look at the HRS lesion. So how is this all done? Right now, this is the commercial option among the AR headsets. The Apple Vision Pro is also there, but a lot of groups, because Microsoft had about a 3-year head start, have been using the HoloLens for doing something like this. It has an RGB and depth-sensing camera right on top of it, and it is able to look at what we call trackers. These trackers are devices that are placed on top of every biopsy needle and on top of the probe, and they have pinpoint, pixel-level resolution capture.
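To make the tracked-needle idea concrete, here is a minimal sketch of how a registration transform and a tracker pose can be composed so that a needle tip and an MRI lesion end up in the same coordinate frame. All transform names, offsets, and coordinates are illustrative assumptions, not the actual system's values.

```python
import numpy as np

def to_hom(R, t):
    """Pack rotation R (3x3) and translation t (3,) into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed transforms (all illustrative numbers):
#   T_world_mri     -- result of registration: maps MRI coordinates into headset/world space
#   T_world_tracker -- pose of the needle tracker as reported by the headset
#   tip_in_tracker  -- fixed offset from the tracker to the needle tip (calibration)
T_world_mri = to_hom(np.eye(3), np.array([100.0, -20.0, 300.0]))
T_world_tracker = to_hom(np.eye(3), np.array([120.0, -25.0, 310.0]))
tip_in_tracker = np.array([0.0, 0.0, 150.0, 1.0])   # 150 mm along the needle shaft

lesion_in_mri = np.array([12.0, -4.0, 33.0, 1.0])   # lesion centroid marked on the MRI
lesion_world = T_world_mri @ lesion_in_mri
tip_world = T_world_tracker @ tip_in_tracker

distance_to_target = np.linalg.norm(tip_world[:3] - lesion_world[:3])
print(f"needle tip is {distance_to_target:.1f} mm from the target")
```

The point of the sketch is simply that every tracked object lives in the headset's world frame, so the quality of T_world_mri (the registration) limits how meaningful the tip-to-target distance is.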
There are 4 grayscale cameras and a depth-sensing camera on top of the HoloLens, and those can recognize every small movement made by the instrumentation. This is what the instrumentation looks like. It is pretty much the same instrumentation that you would use for a conventional biopsy, but you can see the biopsy gun, you can see the tracker on it, and you can see the micro-ultrasound probe, the ExactVu probe, on the right side. We have developed these small trackers that basically need to be seen by the HoloLens to establish, in 3D space, where each object actually is. And this is also one of the small banes of this whole thing: when you're doing the procedure, all of these trackers need to be visible to the HoloLens, and they need to be in the same field of view. There have been a lot of issues with developing these trackers, especially regarding how they need to be positioned when you're doing this, the fact that all of them need to be tracked, and where they should actually be located so that they don't interfere with the actual biopsy procedure. I think this is the 6th iteration that we are using at the moment over the last 2 years. Starting off, these were the issues that we had with segmentation. Segmentation was still the easy part, as we showed you; getting it from the MRI onto the actual patient was harder. It looked like we would make great progress very fast, but that was 2 years ago, and we have struggled moving our steps forward. This is one of the advantages that is there from day one, I feel, and that's just getting the ergonomics right. When I say ergonomics, you don't really have to turn around and look at the screen. If you wear the headset, you can see the actual ultrasound screen right in front of you, and because of that, as you can see, the turn that you have to do to look at the screen and adjust your probe every time you take a sample goes away. I think that itself is beneficial to the surgeon and also saves a lot of time. Now, the reason you sometimes feel that this whole AR program needs to be thrown in the trash is almost always the registration issue, and I think the registration issue is what causes most AR programs to fail. Beyond the initial registration, even if you have gotten the MRI to fit the actual anatomy of the patient, there are things like drift, where any small movement of the patient totally sets it off. It's similar to what happens with the UroNav in the office, but even more so. There have also been issues with latency: even if the tracking is delayed by a few milliseconds, it really shows and throws off the whole process. And tissue deformation is again an issue, as it is with the UroNav and with conventional biopsies also, but I don't feel that's an issue we'll be able to solve with this. The other thing is field of view and ergonomics: when you're looking at the actual image, you always need to make sure that everything is in view, otherwise you're not able to go ahead with the procedure.
So I've been talking a lot about registration; what is registration, actually? Registration is the procedure that aligns the virtual data, the MRI and the segmented data coming out of the U-Net, so that it exactly fits the real-world object. The registration transformation requires matching points between the virtual world, which is your MRI, and the physical world, which is the patient, and this has to be done by actually placing marker points on the patient. These have to be marker points that do not move with the patient, and they also need to be something that can be tracked by your HoloLens. These are the very small trackers that we are currently using, for the HoloLens to see exactly where these things are, and these are the markers that we attach on the pelvis as we go ahead. When we started with the procedure, this was done in the lab, and we thought it was going to be pretty straightforward, because we were able to segment out the prostate nicely and very easily in the lab. But it turned out, like most things, that when we took it to the real world, even the segmentation was not that easy, and we were not really able to find the shape of the prostate; even the algorithm that worked so nicely in the lab really struggled to do its work inside the OR. So this is the procedure for the landmark registration, for actually matching the MRI images with the prostate in the OR. This is how we started off: we started by annotating all the MRIs with specific points around the prostate, and our idea was to match them with specific points picked again during the actual procedure. Along with this, just to make sure that the headset and the system understood where things lay, you had points placed on the pelvis, and an ultrasound sweep, similar to what we do for a UroNav-based biopsy, was done to make sure that these lesions were actually matching and that the registered image did not move from where it was taken. This is the initial way we tried to register. As you can see on the top right, we have specific points, 1, 2, 3, 4, where we initially just went through the prostate; we first match the points on the axial view, then on the sagittal view, and then on the coronal as well. Then we go ahead and try to match the exact same points that we marked out on the MRI in the real sweep. You can see here the custom patient reference where we did an ultrasound sweep; you can see them lighting up. They don't actually light up; it's just the way it looks on the ultrasound. And this is the sweep that we are collecting, which is similar to what we do in the office for the UroNav biopsy. The sweep is done both ways, and then, following that, like we showed on the previous slide, we mark out points on the prostate, on all the edges, so that the machine basically recognizes exactly where the prostate is.
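One simple way to quantify how well that landmark matching worked is to apply the registration transform to the MRI points and measure the residual distance to the matching points picked in the sweep (the fiducial registration error). A minimal sketch, with an assumed transform and synthetic points:

```python
import numpy as np

def registration_error(T_world_mri, mri_pts, world_pts):
    """Root-mean-square distance between registered MRI landmarks and the same
    landmarks located in the ultrasound/world sweep (fiducial registration error)."""
    mri_h = np.c_[mri_pts, np.ones(len(mri_pts))]          # to homogeneous coords
    mapped = (T_world_mri @ mri_h.T).T[:, :3]
    return np.sqrt(np.mean(np.sum((mapped - world_pts) ** 2, axis=1)))

# Illustrative check: 4 points annotated around the prostate on the MRI vs. the
# matching points picked in the sweep, with a couple of millimetres of noise added.
T = np.eye(4); T[:3, 3] = [2.0, -1.0, 5.0]                  # assumed registration result
mri_pts = np.array([[0, 0, 0], [30, 0, 0], [0, 25, 0], [15, 15, 20]], float)
world_pts = (np.c_[mri_pts, np.ones(4)] @ T.T)[:, :3] + np.random.normal(0, 1.5, (4, 3))
print(f"FRE = {registration_error(T, mri_pts, world_pts):.2f} mm")
```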
But again, it has been a struggle. The idea of all of this, obviously, is that once you have these points registered, you want that same image to come back onto the ultrasound, and if it comes back on the ultrasound, that's the only way you'll actually be able to go and target a lesion. You obviously also have the micro-ultrasound assisting you in taking the samples. We haven't reached the point where we'd be taking the samples, but this is how we are trying to adjust it. This is what we tried last week. Moving on from using specific points on the MRI, we thought that maybe another way of registration would be to take the whole pelvic bony anatomy and just align the bony anatomy with the patient; once you have registered that way, you would have the MRI image going back onto the ultrasound. And this is the actual PI-RADS lesion that we have marked out. But as you can see, it's a struggle, because the physician is himself seeing this image in 3D at the bottom that he's trying to adjust, and along with that, you also have the prostate ultrasound image in front of you that you also want to adjust, which also gives you an idea of how far off you are. As you can see at the bottom of the screen, the image is almost loaded exactly on the pelvis and the orientation is matching, but even if it's off by a few millimeters, which you can't really appreciate on the pelvis image down there, it is off by quite a bit on the actual ultrasound. So it is not an easy problem to solve, but we are trying different ways of registration to solve it. This is the future, this is what we would be trying to do with this, and this is something we believe is one step ahead of conventional biopsies. You do have various methods of tracking the needle right now, but this would be exact tracking, since you would have tracking devices on the needle itself, where you can see exactly how deep your sampling is, what the angles and trajectories of your sample are, and where exactly in the prostate it is going. But again, we are slightly far off from there. It's also important to acknowledge that this is something that has been tried before. The most recent case report of something like this came out of the NIH only last month, from Dr. Pinto's group, where they used similar software for a completely non-rectal, transperineal biopsy, but it's only a case report. There has been another study of 10 biopsies that came out of the UK, but again, most people are struggling with registration and trying to find a suitable solution for that. For adoption of AR in general, not just for prostate biopsies, the main problems are, as we discussed, registration accuracy and drift. Even the smallest of movements throws it off by quite a bit, and the whole idea of using AR is to get pinpoint precision; if you have even a little movement, it throws off the whole concept and just makes it a fancy idea.
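As a small illustration of the needle-tracking metrics mentioned here (depth, angle, trajectory), this sketch computes insertion depth and angular deviation from a planned path to a lesion, assuming all points have already been expressed in one registered coordinate frame; the numbers are invented.

```python
import numpy as np

def trajectory_metrics(entry, tip, target):
    """Depth along the needle and angular deviation from the planned path.
    All points are in the same (registered) coordinate frame, in millimetres."""
    needle_vec = tip - entry
    planned_vec = target - entry
    depth = np.linalg.norm(needle_vec)
    cos_dev = np.dot(needle_vec, planned_vec) / (
        np.linalg.norm(needle_vec) * np.linalg.norm(planned_vec))
    angle_deg = np.degrees(np.arccos(np.clip(cos_dev, -1.0, 1.0)))
    return depth, angle_deg

# Illustrative numbers only: perineal entry point, current tracked tip, lesion centroid
entry = np.array([0.0, 0.0, 0.0])
tip = np.array([2.0, 1.0, 38.0])
lesion = np.array([4.0, 3.0, 62.0])
depth, off_axis = trajectory_metrics(entry, tip, lesion)
print(f"insertion depth {depth:.1f} mm, {off_axis:.1f} deg off the planned trajectory")
```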
The ergonomics are also a problem, because the headset, if you actually wear it, is pretty heavy, and you would not really be able to do this for more than maybe an hour on a good day. And then, if you have multiple biopsies, doing this on and off does affect your workflow, and your whole workflow integration is thrown off if you do something like this. And again, this is just starting off, so getting people to adopt a technology like this, where you hardly have any evidence and you're still trying to build it, is also difficult. There are medicolegal issues with the HoloLens and with AR in general, because it's also like having another camera in the room. What that camera records, where the data goes, how that data is handled: there are not enough policies looking at something like this. Also, a lot of the traditional pathways and how they are worked need to be changed to adopt and use this technology, so that is a problem. I do feel that going forward, these current AR systems, which are essentially just maps and overlays, will gradually show the way, especially so far as using AR in robotic surgery is concerned, since Intuitive has also started developing interest; they have their own version of 3D models coming in, and you have the NVIDIA support. So as compute speeds keep going up, I think AI along with AR will become mainstream. I was told to finish 15 to 20 minutes early, so I am done. Any ideas, proposals, questions are welcome. And this last image is from Nano Banana, for anybody who wants to give it a try; I think it's very interesting.

Hey, Archan, this is super cool work. Congratulations.

Thank you.

Really enjoyed it. In terms of the registration, have you guys ever thought of using actual fiducials in the prostate as trackers?

No. I mean, the whole idea is that it should be something that does not move with the patient. The markers that we are using currently, the devices like I showed, I don't know if you had a chance to look, sit right on the pelvis of the patient, so that part is covered at the moment. What happens with fiducials, or any of these markers, is that when they get very close to the patient's anatomy, for somebody who is actually wearing the glasses and manipulating the image, it becomes very tough to move them. So even if you are putting in fiducials or having those markers, there has to be some area where I can physically reach for them and actually move them if I want to change the registration. And it's been an issue, because more than the final registration and how it sits, it's the manipulation to get to that point that has been the problem, in that, say, 4 or 5 minutes that you are allowed before you actually start the procedure. Otherwise, I do believe that registration is a problem that can be solved with slightly more time, but I don't think we have the time to do, say, 15 or 16 minutes of registration before we do this. And to your point, it's a great idea.
I mean, using fiducials is something that we could do, but the problem with fiducials, and I'm just thinking as we speak, is that the fiducials also need to be recognized by the HoloLens, which would not be possible, because the HoloLens has to have line of sight to all the trackers that we are using for registration purposes. I think that would be the biggest hurdle: you would not have any line of sight to the fiducials, as they are completely engulfed in the prostate. So I think that would be a bigger issue so far as fiducials are concerned.

Yeah, that makes sense. The other quick comment, and then a question: we had talked about this over the phone the other day. The AUA is looking closely at all these policies, like you mentioned, and the implications. One of the big implications is, you're doing all this great work, and you, Bruno Sonaj, do these biopsies, and a lot of these learning tools are using your techniques to develop their algorithms. The question is, when you're a surgeon and you're showing them these techniques, who owns the rights to all of these things? That's one of the things the AUA is looking at. Similar to, if you're an artist, let's say you were Picasso starting out today and AI existed: it could look at your brush strokes and say, wow, you painted something beautiful, I'm going to keep following you, and then I'm going to use your brush strokes to make something of my own. But he gets cut out of the ultimate deal. So I don't know if, when you talk to these companies, they ask you to sign anything, or if they discuss what happens to your quote-unquote techniques when you're doing this.

Yeah, that's a great point. This is something where we do have soft clauses in the IP agreements with these companies; there are soft clauses, but it's difficult to pin down anything hard on these things, because it's about the same kind of issue as the New York Times suing ChatGPT for using its newspapers. These are issues that exist in the broader world also. But for very specific things, specific techniques, if this eventually becomes commercial, if they are selling it, how that works out, we are having those sorts of discussions even now. This is at least 2 or 3 years before it actually becomes commercial, but this is the time, when you see something that could become commercial or marketable in the future, to have exact clauses regarding how future sales would work, which company would own the rights, where the patent would belong, whether it would belong to that company, and how much ownership of the patent the university where this was developed would have. There are lots of implications, like we discussed that day, but I think there's a lot of gray area; like you said, it's not really clear cut.

Awesome. Thank you. Congrats again.

Thank you.

Archan, great talk. I think that was a really good question, Chad.
I mean, there are some places where you have contracts covering the work that you're doing. For example, the grant that we got with Exosome to look at a prostate cancer detection test is going to involve a lot of genomic sequencing and things like that, so there are things related to IP in the contracts done in relation to that grant that kind of cover this, but yes, we did have conversations with them very, very early on. The same way, even in this project, Archan has really been having a lot of conversations between the company, with Radka and us, in terms of how we would do this, because we're even thinking about looking at this in a potential academic-industry grant, which is another way of further developing the software and things like that. Archan, for you: I know, for example, that MIM is a good place that does a lot of registration, and Radka has done a lot of work with them, but are there other areas, and I know Ash was on the call too, that you could look at that have been doing registration, even within or outside of medicine, that we could get support from to take this a little bit further?

Yeah, I mean, there have been other places, genuinely, that I have looked at and tried to borrow ideas from. But another issue with something like this is that we do this maybe once a month, so it's very difficult to track progress when you go back in for the second time. And we almost feel that sometimes you go in there, you almost solve the puzzle, and then you realize that there's a very small problem. Like last time, as I was showing you, and I think we discussed this last time also, we almost thought that we had solved the registration issue when we used the pelvic bones and things like that, a concept that comes straight from neurosurgery: just align it to the bones and the rest of it will fall into place. But not being able to access the whole pelvis when you're sitting there wearing the headset, not being able to manipulate the whole MRI sequence that you have segmented onto the actual pelvis, these are small things. But to your point, I agree that looking outside, seeing how these things are solved outside of the small medicine ecosystem, might be the way to go. I honestly have to explore and see how we can do that.

Yeah, no, but I mean, it's very good work and it's a cool journey.

Thank you.