100 Episodes and Your Case is Still on Hold!

Your Case Is On Hold | February 17, 2026 | 00:44:03

Hosted by Antonia Chen, MD, Andrew Schoenfeld, MD, and Ayesha Abdeen, MD
Show Notes

In this episode, Antonia and Andrew discuss the February 18, 2026 issue of JBJS, along with an added dose of entertainment and pop culture. Listen at the gym, on your commute, or whenever your case is on hold!

Link:

JBJS website: https://jbjs.org/issue.php

Sponsor:

This episode is brought to you by JBJS Clinical Classroom.

Subspecialties:

Knee, Oncology, Pediatrics, Shoulder, Hand & Wrist, Orthopaedic Essentials, Trauma, Spine

Chapters

  • (00:00:03) - Case is On Hold
  • (00:00:45) - Episode 100
  • (00:03:03) - Sneak Preview: Miller Review Course
  • (00:03:42) - AI Generated Text in Orthopedics
  • (00:05:36) - AI in Orthopedics: The Promised Land
  • (00:13:44) - Artificial Intelligence in Orthopedic and Sports Medicine
  • (00:16:27) - Orthopedic and Sports Medicine Editorial Policies on AI
  • (00:24:42) - How to Write a Paper With a Computer
  • (00:25:16) - Deep Learning Model for Differentiating Neoplastic Fractures from Non-Pathological Fractures
  • (00:31:36) - The Ms. Cleo Phone Paradigm
  • (00:32:34) - Machine Learning and Neoplastic Fractures
  • (00:37:05) - AI-driven CT MRI Image Fusion and Automatic ACL Reconstruction
  • (00:39:05) - 100 Episodes of JBJS: Thank You!
  • (00:40:46) - Ayesha Abdeen Is The Next Co-Host!

Episode Transcript

[00:00:03] Speaker A: Welcome to Your Case Is On Hold, a JBJS podcast hosted by Antonia Chen and Andrew Schoenfeld. [00:00:10] Speaker B: Here we discuss the science of each issue of JBJS with an additional dose of entertainment and pop culture. [00:00:17] Speaker A: Take us with you in the gym, on the commute, or most certainly whenever your case is on hold. [00:00:27] Speaker B: Welcome back to the hundredth episode of Your Case Is On Hold. For those of you who've been with us from the very beginning, we thank you for being loyal listeners and for listening to all the different takes, antics, and fun social commentary we've had throughout all of Your Case Is On Hold. I'm Antonia Chen, executive editor at JBJS, and I have here. [00:00:49] Speaker A: I am Andrew Schoenfeld, associate editor for methods. And we've got the whole gang here for episode 100. Keyser Söze is here and the Sopranos are here and Jackie Childs is here and Donnie Brasco is here. [00:01:05] Speaker B: Another good one, right? We've got a lot, actually. We had college football trivia, too. [00:01:13] Speaker A: College football trivia. We've had a lot of good ones. What's your favorite episode? Like, what was your favorite thing that we did? Not necessarily an episode, but just. [00:01:23] Speaker B: So I'm a Usual Suspects fan. Have always been. That's always been my favorite movie. So Keyser Söze is always a win for me. How about for you? [00:01:31] Speaker A: We gave away a lot of secret stuff over the years. We gave away the secret meaning to the Sopranos ending. We gave away, you know, the hidden secrets. And if you watch The Usual Suspects and then listen to Ghostface Killah's Ironman album, there are special secrets that you learn. On that front, I really enjoyed, back in 2022, when we talked about orthopedic case volume and relative perspectives on case volume. And one that I've listened to so many times, because I just thought it was really great, is a repeat listen for me. It was on my Spotify Wrapped as my most-listened-to podcast: the one where we covered cardiac clearance for total joint arthroplasty. There were so many great points covered in that one. And that was a recent one; it was definitely 2025, I think September or October or something like that. [00:02:48] Speaker B: That was a good one. I forgot we did Monty Python references as well, too. [00:02:52] Speaker A: I mean, the references are, yeah, untold. The number of references that we've gone through. A lot of good times, more to come, but probably different going forward. [00:03:02] Speaker B: A little bit different. [00:03:03] Speaker A: Sneak preview, sneak preview. As per usual. [00:03:06] Speaker B: These opinions are our own, so all our wonderful reminiscing is clearly ours and not representative of anyone else at JBJS. The JBJS editorial board and everything like that is separate from the views expressed here. This is brought to you by the Miller Review Course. I've had the pleasure of virtually teaching at one of the Miller Review courses, and it's a great way to get your learn on and get a lot of information. Step 1, Step 2. Great setting and great instructors and great people. So if you have the chance, use Miller Review to help you with your board preparation.
Without further ado, we're going to go into the hundredth episode. For Top of the Pile, we have "AI-Generated Text in Orthopedic Articles: A Cross-Sectional Analysis." You will see a lot of similarities here, or interesting common threads, throughout the different articles, because this is an AI topic, which is obviously a very hot and up-and-coming topic, and this one looks specifically at orthopedics. It's by Sweeney, with a commentary, and it's free for 30 days. "The Future of AI in Orthopedics: Boon or Bust," by Stresslow, and it's permanently free. "What's New in Machine Learning and Generative Artificial Intelligence in Orthopedics," also by Stresslow; this is a highlight article and it's also permanently free. "Conducting Systematic Reviews in a Day: Enter Artificial Intelligence." Remember when those used to take weeks and even months to do, reviewing all these articles? It's crazy to think about it in one day. By Kao, and it's a highlight article. "AI-Based Medical Decision Support: Exploring the Data Gap," by Schwab. "An Algorithmic Scalpel: Realistic Expectations for Artificial Intelligence in Orthopedic Practice," by Stresslow. "A Humanist View of Artificial Intelligence in Orthopedic Surgery," by Wallace; this is permanently free. "The Power of AI to Turn Words Into Images," by Gert; this is also permanently free. And "The Transformative Potential of Artificial Intelligence in Latin American Research," by Becker, and "The Application of Agentic Artificial Intelligence in Orthopedics," by Billy. A lot of really cool AI articles here. Without further ado, we're going to have your feature article: "Minimizing Misdiagnoses of Tibial Plateau Fracture: The Role of AI in Radiographic Evaluation," by Chen (not me) et al., permanently free with a commentary. [00:05:36] Speaker A: Okay, so AI is going to lead us to the promised land. That's the subtitle of this special issue. If you've read through all of those articles in the Top of the Pile, you're getting a lot of different perspectives, all of them, I think, uniformly pretty positive and optimistic about what AI can do for us. And obviously in orthopedics, there's a lot of different areas in which AI can play a role. And clinical applications of AI, I think, are very different from research applications. And then, as a subset of clinical applications, there's radiographic evaluation, which is where, and I've said this before on the podcast, I uniformly think AI has the greatest immediately actionable potential, just because of the ubiquity of radiographs and how many are taken and how many are available in different practices, in different centers, in different cities, in different states and regions across the country. You just have very robust data with which to deploy artificial intelligence. And that's really where artificial intelligence does what it is advertised to do, which is pick up on things that other folks are going to miss, just because it has the benefit of working off of so many other experiential inputs that go well beyond what one single individual can do. But as with any kind of study, from a diagnostic standpoint, and our avid listeners will know, when you're talking about diagnostic studies, you need to have several important parameters for the study itself. It doesn't matter that you're using AI. If you trained AI with one example, the only thing that AI could learn would be normal. There's no way it could tell you abnormal. But that's probably it.
It just says, this doesn't look like what I understand normal is. That doesn't necessarily mean that there's a fracture or a tumor or something like that. It just says, no, not what I think is normal. But you have to then teach it about all the other kinds of abnormalities. [00:07:54] Speaker B: And that's key in these cases, right? [00:07:57] Speaker A: Yeah, absolutely. And so this study aimed to develop an artificial intelligence diagnostic tool for identifying tibial plateau fractures on radiographs. So we're going to talk about this a little philosophically, and I think this is a really cool study. And that's why I picked it as my headline, because there's a lot of layers to unpack here, and I think some take-home messages that are more translatable. It's not just about the tibial plateau fracture, or fractures, or orthopedic trauma. I think it applies to a lot of different contexts in terms of the talking points, the learning points that I would emphasize. This study was done from 2018 to 2020, in terms of when the radiographs were obtained, and that's neither here nor there. It could have been done from 2010 to 2020. I mean, we're talking about the modern period of imaging. So they have 1,800-plus radiographs, with an equal distribution of male to female. That's fine. It's nice, but it doesn't add that much, really. What we really want to see is the clinical variation in the fracture patterns. That's really what it comes down to. And having appropriate clinical variation across male and female adults, that's value added. But just having male and female adults, like, if all your fractures were in females and you just have, you know, 900 males that all have normal imaging, well, that's going to be a problem. Not saying that that's what they did here, but just moving on. You know, essentially they're looking to basically have an AI tool that can look at radiographs and identify those that show abnormal, that show fracture. Now, of course, we know, and for some this is going back to residency, for others it's more immediately actionable, that there are six fracture patterns in the Schatzker classification. So I was, like, really into classifications when I was a resident. [00:09:55] Speaker B: Like, it doesn't surprise me. [00:09:56] Speaker A: I love knowing all the classifications, but. Well, some of it was gamesmanship because, like, you know, if you call up an attending and you're like, yeah, this is a, you know, Frykman 4 distal radius fracture, and they have to be like, oh, what's that? Then you're like, ah, telling you something you don't know. So. [00:10:13] Speaker B: And I was the type who was like, one is not too bad and six is really bad. That's really all I need to know. [00:10:20] Speaker A: No, but I mean, the Schatzker is a little bit more descriptive, and I guess, you know, in some ways it's like, you know, one is the lateral split, and then two is the split depression, and three is the pure depression, and four is the medial condyle, and five is the bicondylar. And then six is like just really devastated, obliterated. No, no, no. But I mean, I think technically it's like bicondylar with shaft involvement. Basically the tibial plateau is no longer articulating. That's probably not the best term. No longer in contiguity. That's a better term. No longer in contiguity with the tibial shaft in some way. And there's going to be a lot of heterogeneous variation within each of these fracture patterns. Right.
Like, you're going to have displaced and not displaced and hairline and, again, completely destroyed, open, closed. There's so much that goes into this. And the real important point here on this front is that about 17% of the fractures in this cohort were type 5, the bicondylar, and the three-column injuries they report as 27%. So, you know, right there they don't have that many fractures to begin with, and the plurality, easily the plurality, are Schatzker five or three-column injuries. These are probably not the injuries that get missed. Yeah, I mean, can, like, a hairline, you know, completely non-displaced bicondylar tibial plateau fracture be missed? Sure. And they're comparing to CT as the gold standard. So, you know, for a study like this, the other important part is that if you need the CT as the gold standard, from a Bayesian standpoint, and we talked about Bayesian analyses last time, if you had somebody who didn't get a CT scan of their knee subsequently, well, they can't be included in this testing paradigm. Right. So it's not identifying fractures that ultimately were probably not picked up. There could be some people out there that didn't get a CT scan and were put in a knee immobilizer or whatever, and they said, oh, you have a contusion. And maybe they did have one. You know, I would say that the biggest risk are the ones with cortical depression, and probably type three, because that's the pure depression. Right. Like, that's the one that is your most likely. If it's just, like, a central superior-to-inferior kind of divot, so to speak, in the center of the bone, you know, right in the tibial plateau, right in the center where the medial meniscus or the lateral meniscus sits, whatever it may be, those are the ones that I think are most likely to be missed. If you have the fracture itself and it's displaced, you're probably going to see it on an X-ray. If it's an open fracture and the bone is sticking out of the body, I don't think you need an X-ray. Right. There's a problem here. Right. So doing a bunch of testing with obvious fractures is not really. Yeah. But I mean, if you can see the fracture with the naked eye, you probably don't need an AI machine learning algorithm to run to make sure that you're identifying it. And that's the real problem here: they've developed a model, they ran the analytics of the model, and the performance of the model is really going to be based on the substrate. In this case, I'm just questioning whether the substrate is really conforming with fractures that are going to be missed. They say the model has a newfound ability for identifying tibial plateau fractures. I don't think it's newfound. I think that the idea that AI could do this is readily known already and accepted. Maybe it hasn't been studied specifically in tibial plateaus in this context or this way. I'm sure there are lots of ways to rationalize that statement. But at the end of the day, my biggest concern is, you know, how much of what really is going to be missed in a clinical setting is included in this analytic platform. [00:14:51] Speaker B: That's why you need more data to put into it, to actually feed these models. And that's where it shows humans are still very useful, even in the context of radiology. [00:15:00] Speaker A: Yeah, absolutely.
[00:15:02] Speaker B: All right. Mine's looking at "Exploring the Endorsement and Implementation of Artificial Intelligence Guidelines in Leading Orthopedic and Sports Medicine Journals: A Cross-Sectional Study," by Major et al. There's a commentary, an infographic, and it's free for 30 days. So this is something real quick. Yeah. [00:15:19] Speaker A: You know, I'm sure there's nothing new under the sun, and JBJS has been publishing since 1876, so since the Battle of Little Bighorn, but I just had to remark that this study was conducted in the Department of Psychiatry and Behavioral Sciences. And if it's not an absolute first, and it probably isn't, it certainly is rare enough that it's a first for the current time. I can't recall that we've covered an article conducted in a Department of Psychiatry and published in JBJS. [00:15:53] Speaker B: I agree. Unless it's about a psychiatric topic. Right. Like depression in patients, or, you know, other major disorders and things like that. But to do this in the context of papers, that's pretty impressive. What I'm curious about, and I have to admit I did not go and look at this, but it is possible that these authors have done this in multiple different settings, not just orthopedics, but also general surgery or dermatology or plastic surgery. So I'm curious, but I've actually not looked into that. I'm hoping that they did it in psychiatry because that's their mainstay, but we'll see. So we know that more and more people are using AI. You've already just talked about one use of it in radiology, and that's probably the biggest area in medicine it's been used in. In clinics, I don't know. Have you been using the AI dictation device when seeing patients? That's something I've been using in my clinic and I find it really helpful. Do you use it at all? [00:16:45] Speaker A: No, I'm too old school. Like, I don't want to do something and then have to read what someone else said. Like, I can just do it myself. Thanks. I know what I want to say. [00:16:59] Speaker B: Well, what you say is good, so that makes a difference. But I do have to say that it probably has helped me in my clinic setting, for sure. And obviously, as our reliance on AI increases, there are critical concerns, especially in publishing, regarding transparency, ethical considerations, and reproducibility. So this study systematically evaluated editorial policies of leading orthopedic and sports medicine journals concerning AI usage. That was the first aim, and the second aim was to evaluate AI-specific reporting guidelines. So they did a cross-sectional review in accordance with STROBE guidelines, and they looked at the top 100 peer-reviewed orthopedic and sports medicine journals using the 2023 SCImago Journal Rank system; journals that provided instructions for authors in English and actively published relevant clinical research were eligible.
Data extraction happened on August 29, 2024, and included title, geographic region, the SJR (SCImago Journal Rank) quartile, the 2023 journal impact factor, publishing company, International Committee of Medical Journal Editors (ICMJE) acknowledgement, the one that a lot of us have filled out, Committee on Publication Ethics (COPE) acknowledgement, World Association of Medical Editors (WAME) acknowledgement, AI-related policies within instructions for authors, and references to AI-specific reporting guidelines, of which there are actually 11. So I didn't actually know that there were AI-specific reporting guidelines out there, but there are 11 of them, and they're listed in this article. Data was collected in a masked, duplicate fashion, with discrepancies resolved through consensus. During the initial search they had 319 orthopedic and sports medicine journals using the SJR. From these, the top 110 were selected. Upon review, the instructions for authors were inaccessible for five of the top 100, so they were excluded and the next five highest-ranking ones were taken; the remaining five from the original selection of 110 were outside the top 100 and were excluded. So all 100 journals met the predefined inclusion criteria. Most are published in Europe: 48% were from Europe, and North America was 45%. If you do the math, that's not 100%, but there are others in other areas as well, too. The median impact factor was 2.65. Elsevier and Springer Nature were the most common publishers. Of the hundred journals analyzed, 94 referenced AI in their editorial policies, 84% included an ICMJE statement about AI, 82% mentioned COPE, and 11% mentioned the WAME one. All that referenced AI in their editorial processes explicitly prohibited AI authorship, required the disclosure of AI use in manuscript preparation, and permitted AI use in manuscript preparation. AI-generated content was permitted in 82% of journals. AI-assisted image generation was permitted by 60% of journals but was explicitly prohibited by 34% of journals, with no mention in 6%. Despite these policies, only 1% of journals referenced AI-specific reporting guidelines, with the Checklist for Artificial Intelligence in Medical Imaging being the sole guideline mentioned. So while most of the orthopedic journals had established policies on AI usage, i.e., don't use it as an author, but you can use it, you just have to disclose it, there was a notable lack of standardization with respect to AI-generated images and not really good uptake of AI-specific reporting guidelines. So there is a gap in methodological guidance. Ideally, what we can do is standardize AI policy and encourage the adoption of reporting guidelines. This could hopefully increase the transparency and the reproducibility of articles being generated. But we have to be really, really careful about the ethical integrity of using AI in everything that we do. Sometimes being a little old school, like you're saying: I edit my own papers, I write my own papers, I don't like to generate them through AI, any of that stuff. I think that's what original thought is all about. But it is interesting that this is being addressed in this field, obviously orthopedics and sports medicine, as well as all of academic medicine where publishing is happening. But it's something to keep an eye on in the future, because more and more of it will be used as time goes on. What are your thoughts? [00:21:09] Speaker A: Yeah, I mean, AI is pervasively integrating into, like, everything we do. You know, I won't belabor the point.
I've touched on it before. I'm in firm agreement, and I think most journals are clear on this: they do not want people using AI as essentially a shortcut to write papers, because that's what it is. Like, you're abrogating your responsibility as an author, and the effort that goes into putting a paper together, to just have some computer write it for you. That's just like having your friend do your homework when you were in eighth grade. It's the same kind of thing. [00:21:47] Speaker B: Wait, that's a problem? [00:21:50] Speaker A: I think that everyone is pretty much on the same page on that front. I think where it is useful, and where you do see these permissions, is that a lot of people use it where English is not the first language of the authors, and they're using it to, you know, sort of help smooth or correct textual language usage issues that otherwise might impair readability. Another area where I think it is very helpful and useful is in coding. Some people, when they're working with large-scale databases and there's a lot of moving pieces and parts, they'll use AI to kind of write the code. And image generation, you know, image generation can mean a lot of different things. How I'm understanding where I think it's permissive is you want the AI to create a graph for you, right? So you're outsourcing what normally, in the past, was a job that was given to, like, a junior member of the author team. You're, you know, cutting them out of a job, I guess, and giving it to AI to do it faster, and you have immediate action instead of, you know, waiting for the resident or the research assistant to put that together and send it to you, and you're like, oh, I don't like it. You can just tell AI, change this, change that. So graphs and images on that front, good. If it's just making up, like, an X-ray or something like that, that's probably not good. Like, you know, show a total joint arthroplasty with a periprosthetic fracture, and it gives you, you know, something that then has, like, you know, a sixth finger or something. [00:23:28] Speaker B: It's an elbow. [00:23:30] Speaker A: Yeah, right. Don't do that. So I mean, I think those are the areas where there is some permissibility, and journals are open to that, and it basically comes down to: you just need to disclose it and be upfront about it and say this is where it was used. And I see lots of people doing it, and it's not a problem. It's just, you know, put in there in the historical memorial, you know, generally, like, in the fine print below the title page. It's included there for memorialization purposes. What we don't need is more checklists. I don't love the checklist thing. We don't need that. Especially where it's like, you know, we don't need a checklist to say, did you tell us what it did? Like, just do it. We don't need a checklist. [00:24:23] Speaker B: Or guidelines. [00:24:26] Speaker A: You know, I think, like, you don't need a guideline when the guideline is just: disclose what you did. [00:24:30] Speaker B: Very fair. [00:24:31] Speaker A: Just be honest up front and let it be known. I mean, again, I think it should be universally accepted and a standard academic red line: you're not outsourcing the composition of your work to AI. I think it's also not ideal when you're letting AI do your literature search and stuff like that, too.
But I guess at least if AI is pulling articles, you should be reading them and then deciding what you want to include in your paper or not. [00:25:02] Speaker B: Great. [00:25:03] Speaker A: But definitely the red line is writing the paper. [00:25:07] Speaker B: Great, done. Well, we'll see where guidelines go. And I'm curious, actually, a few years from now, if that will change and if this type of article will change. All right, now for the Your Case Is On Hold feature: "Deep Learning Model for Differentiating Between Neoplastic Pathological Fracture and Non-Pathological Fracture Using Hip Radiographs," another AI use of hip radiographs, by Kim et al. This is a lead article with 30 days free and a commentary. Most hip fractures are diagnosed using X-rays, but it's difficult to differentiate between neoplastic pathologic fractures and non-pathologic hip fractures on X-rays. This study aimed to develop and evaluate a deep learning model capable of distinguishing neoplastic pathological fractures from non-pathological fractures on hip radiographs, to hopefully enhance diagnostic accuracy. This is a retrospective multicenter study conducted in Korea. It was performed using AP hip radiographs for patients who visited the emergency room at four different institutions. Now, what they did is the deep learning model was trained on and tested using data from 338 patients at a single institution and then externally validated with data from 488 patients across the three other institutions, validated in other facilities, not the facility it was done in. The inclusion criteria for selection of hip radiographs were patients greater than or equal to 18 years who came to the emergency room and had either a diagnostic image of a neoplastic pathological fracture or a non-pathological proximal femur fracture. Patients were excluded if there was poor image quality, missing radiographs, a periprosthetic fracture (hopefully not one generated by AI), a diagnosis of osteonecrosis of the femoral head, a bisphosphonate-associated atypical fracture, a history of surgery involving the proximal femur, healed multiple myeloma of the proximal femur, heterotopic ossification, or a diagnosis of septic arthritis. The model was implemented using a vision transformer architecture, and it was enhanced by pretrained weights from DINO (self-distillation with no labels). The authors applied class-balanced weighting by assigning greater weight to less common samples during training, which helped the model learn to recognize the features more effectively. What happened in the original cohort of 338 patients from a single institution? The mean age of the patients was 71, and there were neoplastic pathological fractures in 33.4% of patients, which I find to be a pretty high percentage, although they could have a tertiary referral center like we do here, where we have a lot of neoplastic pathological fractures. Most patients with neoplastic pathological fractures in the derivation and external validation cohorts were diagnosed with bone metastases as opposed to primary bone tumors. The model had an overall accuracy of 0.88, with 1.0 being perfect, a sensitivity of 0.882, and a specificity of 0.8979 on the internal test set.
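As an aside for readers who want to see what that class-balanced weighting looks like in code, here is a minimal sketch. It assumes a PyTorch-style setup and back-calculates class counts from the figures quoted above (33.4% of 338 patients, roughly 113 pathologic vs. 225 non-pathologic); it is not the study's published implementation.

```python
# Minimal sketch of class-balanced loss weighting (assumed approach, not the
# study's published code). Counts are back-calculated from the episode:
# ~33.4% of 338 patients, i.e., roughly 113 pathologic vs. 225 non-pathologic.
import torch
import torch.nn as nn

counts = torch.tensor([225.0, 113.0])  # [non-pathologic, neoplastic pathologic]

# Inverse-frequency weights: the rarer class contributes more to the loss,
# so the model is penalized more for missing pathologic fractures.
weights = counts.sum() / (len(counts) * counts)

criterion = nn.CrossEntropyLoss(weight=weights)

# Dummy forward pass: logits from any classifier head (e.g., a vision
# transformer) over a batch of 8 radiographs, with binary labels.
logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
print("class weights:", weights)
print("weighted loss:", criterion(logits, labels).item())
```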
Then, using the externally validated data, they had 67 neoplastic pathological fractures and 421 non-pathological fractures, and the model achieved an overall accuracy of 0.84, a sensitivity of 0.91, slightly higher, and a specificity of 0.786, which is a little lower. Now, they did another comparison of that to expert evaluators. The internal set was evaluated by three board-certified general orthopedic surgeons. Interestingly, the interrater reliability between experts was 0.49, with 1.0 being perfect. This is using the Fleiss kappa, indicating only fair agreement between the experts on all these images. So not with AI, but just among the experts, it was 0.49. Expert consensus was determined by majority rule and achieved an accuracy of 0.8, an F1 score of 0.72, a sensitivity of 0.7, and a specificity of 0.82. The model's performance was comparable with that of the general orthopedic surgeons. There were four cases misclassified by all expert evaluators on the basis of radiographs: two neoplastic pathological fractures were incorrectly identified by experts as being non-pathologic, and two non-pathologic fractures were erroneously judged as pathologic. The model, however, identified three of the four correctly, and there was only one misclassified case. It was described as a subtle permeative lesion that was difficult to detect even on expert review, with minimal cortical changes visible on radiography. That sounds a little subjective, if you ask me. In conclusion, they said the developed deep learning model is a reliable and valid tool for distinguishing neoplastic pathological fractures from non-pathological fractures on hip radiographs. I've been on call for the last few days. Just for fun, I took a screenshot of one of my path fractures that I actually fixed today. Sorry, not path fractures. One of my hip fractures that I fixed today. I screenshotted it and put it in their model. So they have a few things that are available to the public. One, if you want to use the code for image pre-processing, model training, and prediction, along with detailed training protocols, it's available online. And the model is publicly available at a .org address. So I ran this image through, and when I ran it as a whole bone, it said there was a neoplastic pathological fracture present. But then, when I narrowed down the window to the fracture site, it said non-pathologic fracture. I got a little bit of a mixed message just from using the model already. But that said, when I really narrowed down and screenshotted the area where the fracture did occur, it did say non-pathological fracture, which it was. It was a non-pathological fracture. Ideally, as you say, in all these algorithms you just need more images. You don't need the obvious, non-subtle images to go in there. It's the subtle images that really train the system and make a difference. So more images should be added to strengthen this algorithm, especially from different sites. It's nice to have the external validation of 488 images, but that's just not a lot in the grand scheme of the hip fractures that we see. All the images we'd be able to put through there would hopefully really strengthen this. Hopefully it's a good tool, but we can't rely on AI 100%. [00:31:21] Speaker A: Yes, I mean, I think certainly right for right now. And as was illustrated by your anecdote regarding the utilization of this publicly available website, this is in the. I'm sure we've touched on it before. We didn't touch on it when we were talking about the paradigms earlier. But this is the Ms. Cleo keeping-it-real paradigm.
I know we've talked about this before. It has to do with, like, you know, children of the 80s and 90s, the 1-900 number phenomenon, where you would call Ms. Cleo. She was, like, a fortune teller. And they were always saying, like, you know, "for entertainment purposes only." It was just, like, a legal disclaimer. You know, like, get a message from Santa, get a message from the New Kids on the Block, you know, they'll sing you a song. Corey Feldman and Corey Haim had, you know, "call us" numbers. It was always, like, for entertainment purposes only. For entertainment purposes only. Right? This is for entertainment purposes only. Like, this is not informing patient care. Just like you did: you put this in and you were like, well, that's not right. Right. So I mean, the first thing is that the study talks about the fractures. Pathologic fractures were distinguished from non-pathologic fractures using CT, MRI, medical records, clinical presentation, operative notes, follow-up documentation, histopathologic findings. But all of these things have layered Bayesian effects, like we touched on previously. You're not going through CT and MRI for everyone, and it depends how routine sending things for pathology might be; I guess it depends on how you fix the hip fracture. But the more you're sending and the more you're imaging, the more you're concerned that this is not a normal fracture. There's a Bayesian effect there, right off the bat. They don't have that many patients. When you're talking about leveraging machine learning and AI, they had 338 patients in one cohort and 488 patients in the other, with relatively small numbers of actual pathologic fractures. So right there you have problems with representation across the spectrum of disease states. There's probably inadequate, just by definition, clinical variation in terms of what this is seeing. It doesn't have enough reps to really cover all the bases of the entire universe of the spectrum of what neoplastic pathologic fractures can look like, and its performance is. It's okay, but I think there are some concerning signals. So the overall accuracy is 88%, and the sensitivity is 88 and the specificity is 88 in the cohort that was used for development. Then, when they test it, the accuracy goes down because the specificity goes down, but the sensitivity goes up. So what you really want is, you want very high. This is a screening test. X-rays are screening tests. The confirmatory tests are the ones that are supposed to have the high specificity. So you want very high sensitivity. And neither 88% nor 91% sensitivity, when the overall accuracy is below 90%, is really that awesome. What that says to me is that, again, you saw this where they ran it in a relatively small number of confirmatory studies with a lower prevalence of fractures. [00:35:13] Speaker A: It had a harder time with the specificity and the accuracy. So what if, right, what if you did it with thousands and thousands of numbers? [00:35:22] Speaker B: Even worse. [00:35:23] Speaker A: Right. Likely, because the training set was just not big enough. [00:35:28] Speaker B: Right. So should they have made the training set bigger to start off with and then validated it with other ones afterwards? [00:35:34] Speaker A: I guess.
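To make that screening-test arithmetic concrete, here is a small worked sketch in Python. The confusion-matrix counts are back-calculated from the rounded rates quoted above (67 pathologic and 421 non-pathologic external-validation cases, sensitivity ≈0.91, specificity ≈0.786) and are illustrative assumptions rather than figures from the paper; note that they yield an accuracy near 0.80 rather than the quoted 0.84, presumably a rounding artifact in the reported rates.

```python
# Worked sketch of the screening-test arithmetic discussed above. Counts are
# back-calculated from the episode's rounded rates and are illustrative only.
def screening_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, accuracy) from confusion counts."""
    sensitivity = tp / (tp + fn)   # fraction of true fractures flagged
    specificity = tn / (tn + fp)   # fraction of normals correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# External validation: 67 pathologic at ~0.91 sensitivity -> ~61 caught, 6 missed;
# 421 non-pathologic at ~0.786 specificity -> ~331 cleared, 90 false alarms.
sens, spec, acc = screening_metrics(tp=61, fn=6, tn=331, fp=90)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} accuracy={acc:.3f}")

# With low prevalence (67/488), overall accuracy mostly tracks specificity,
# while the clinically costly errors are the 6 missed pathologic fractures --
# which is why a screening test wants sensitivity as high as possible.
```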
But, you know, at the end of the day, I think that what's probably better, rather than, for example, UT Southwestern or Mass General Brigham using the FXDX from this university in Korea, right, is: do you just develop your own? [00:35:54] Speaker B: Yeah, true. I mean, they do provide the sources, which is good, I'll give them that. [00:35:58] Speaker A: No, I know, but you don't need a study for that. Like, we know that this is a proof of concept. AI can do it. Okay. You know, whether that was really necessary or not. It's just, your guys's radiology system has its own, you know, basically, like, its own internally built AI that then can flag it for the radiologist and be like, ah, something doesn't look completely right here. You know, and then you get the radiology message that says, like, you know, more imaging is necessary, or something like that. And again, you want high sensitivity. That doesn't mean that it has to be right. It just means that that's what high sensitivity is: you're doing more testing, more definitive testing, to work it up further. [00:36:38] Speaker B: Yeah. [00:36:39] Speaker A: You don't need to use their platform. And in fact, their platform is probably not going to perform that well in Boston or Dallas, again because of restricted clinical variation, a different patient substrate, a different population of patients whose bones may look different in some respects. [00:36:59] Speaker B: Yeah. And there you go, guys. AI with a grain of salt, I would say. And then the last one is honorable mentions. "AI-Driven CT-MRI Image Fusion and Segmentation for Automatic Preoperative Planning of ACL Reconstruction: Development and Application." This is an honorable mention, by You et al. There's a commentary and a visual summary of this. The AI-driven segmentation of CT-MRI fusion images and automatic preoperative ACL reconstruction planning in this study demonstrated the capability to automatically, precisely, and reproducibly generate plans for nearly identical tunnel entry and exit points with isometric, anatomical, and individualized characteristics. The CT-MRI image fusion was able to generate an individualized 3D model with high segmentation accuracy that only required approximately 192 seconds in each case, so faster than what we're able to template. In the bone model validation, the mean deviation between the planned and executed values was less than 1 millimeter for the femoral and tibial tunnel lengths and graft lengths between the tunnels. So it's useful in the planning of ACL reconstruction, and I foresee that they'll use this in other fields as well, too, besides just ACL reconstruction: any sort of surgeries where we need some planning and, hopefully, more accurate execution of what we've planned. So this is the hard part. [00:38:26] Speaker A: We left the audience with a cliffhanger on the last episode that was worse than Stranger Things on Netflix. [00:38:33] Speaker B: I'll just say that that has been the one that everyone has been talking about. I have not heard more discussion about any other show than about Stranger Things, I think, ever. I don't know, maybe there are comparisons. Lost was one ending that people hated, and the ending of the Sopranos. Yeah, there have been some really disappointing endings out there, I have to say. [00:38:56] Speaker A: Is this going to be a disappointing ending of Your Case Is On Hold?
[00:38:58] Speaker B: So this is an end to one era, but a start of a new era. So for everyone who's listening here, and for my co-host, Andrew Schoenfeld: I want to thank you for being an awesome co-host to work with. This idea was conceived, I think, either in a clinic room discussing patients or in a conference room at JBJS, where you said, I have an idea for you for a podcast. And I was like, I think you're crazy. At the same time, it's actually been one of the most enjoyable things that I've done: being able to discuss articles in depth, really learn from them, and have fun bantering with you to discuss them. So thank you, Andrew, for this opportunity to do 100 episodes of a podcast talking about JBJS articles. I want to say thank you to JBJS for letting us do this podcast. I think there were a few mentions in the beginning where we were like, well, thank you for letting us do this; we'll see if you'll let us keep doing this in the midst of all the craziness and all the things that we've said. But we appreciate all the support of everyone who's been incredibly instrumental in making this a reality, for uploading these, for editing these, for getting them on board. And to our listeners, thank you. We wouldn't be here without you. So we hope that this has been entertaining. It's been fun. I've truly enjoyed all the episodes. We have some favorite ones, some less favorite ones, depending on the articles that were there. But through it all, the banter was fun, the education was great, and just the camaraderie was fantastic. So thank you for letting me do this with you, Andrew, and for selecting me to work with you for 100 episodes of fun, podcast-filled entertainment. So, without further ado, we'll be announcing our next co-host, Dr. Ayesha Abdeen. She's at BMC, Boston Medical Center, fulfilling the female arthroplasty role again in a different person. She's going to be bigger, better, and more advanced than what we've covered. But it was a true pleasure working with you, Andrew, in making this a reality. And I know that it'll only get better from here. [00:41:13] Speaker A: No, well, you can't be replaced. There's no question about that. And it's really been a lot of fun, and I've looked forward to every episode that we've recorded over, now, more than four years. Those of you who have been listening to the last few episodes have heard from Dr. Abdeen. She was a guest host on the two episodes prior to this one. As Dr. Chen mentioned, she is an associate professor of orthopedics at Boston University and the chief of arthroplasty at BMC, Boston Medical Center. She is local and internationally known, and recognized from the state capital to the pineapple, from the Big Apple to the nation's capital. [00:42:03] Speaker B: Well done. [00:42:04] Speaker A: Yeah. So we are looking forward to the next phase of Your Case Is On Hold, with Dr. Abdeen and I continuing on this journey together. And, you know, maybe you'll come back once in a while for a guest episode. I know you'll be monitoring from the sidelines in your executive editor role. [00:42:28] Speaker B: And I'll be listening to Your Case Is On Hold. [00:42:31] Speaker A: You'll be listening and being like, I can't believe they said that. This was so much better when I was on. [00:42:35] Speaker B: No, no, I couldn't listen to it. When I'm speaking, I can only listen to you guys doing it, because it's someone else doing it. I can't listen to my own podcast. [00:42:44] Speaker A: Your own sections.
[00:42:46] Speaker B: I can't listen to my own voice. It's very strange to me. So I will enjoy listening to Your Case Is On Hold with you two hosting it. [00:42:53] Speaker A: All right, sounds good. Sounds good. It's not an uncommon paradigm. Like, Leonardo DiCaprio doesn't watch his movies. [00:43:01] Speaker B: He's never seen Titanic, and everyone has seen it, like, 60 times. Very interesting. Well, then I will go into the realms of Leonardo DiCaprio. So thank you again. And here's to more of Your Case Is On Hold, and hopefully another 100 episodes. And hopefully your case is not on hold anymore. [00:43:27] Speaker A: In another 100 episodes, your case will still be on hold. [00:43:31] Speaker B: That's a long time. I mean, we're talking four years. [00:43:33] Speaker A: We've been waiting four years for this case to go, and it's just like, oh, the lactate levels were 0.001. I'm not sure we need the third line in for the patient. [00:43:43] Speaker B: We're going to go in the foot now, like, yeah. So. Well, thank you so much, and thank you, everyone, for listening to us.
