Microsoft Says Its New AI Diagnosed Patients 4 Times More Accurately Than Human Doctors (microsoft.ai)
from Pro@programming.dev to technology@lemmy.world on 30 Jun 20:21
https://programming.dev/post/33180634

The Microsoft AI team shares research that demonstrates how AI can sequentially investigate and solve medicine’s most complex diagnostic challenges—cases that expert physicians struggle to answer.

Benchmarked against real-world case records published each week in the New England Journal of Medicine, we show that the Microsoft AI Diagnostic Orchestrator (MAI-DxO) correctly diagnoses up to 85% of NEJM case proceedings, a rate more than four times higher than a group of experienced physicians. MAI-DxO also gets to the correct diagnosis more cost-effectively than physicians.

#technology


Pro@programming.dev on 30 Jun 20:35 next collapse

I know that I might be the only Lemmy user happy with this, but AI applications in the medical field seem very promising for lowering costs and improving accuracy.

Fiivemacs@lemmy.ca on 30 Jun 20:45 next collapse

more accurate.

Until it’s not… then what? Who’s liable? Google… Amazon… Microsoft… ChatGPT… Look, I like AI because it’s fun to make stupid memes and pictures without any effort, but I do not trust this nonsense to do ANYTHING with accuracy, especially my medical care.

This thing will 100% be designed to diagnose people in order to sell you drugs, not to fix your health. Corporations control this. Currently they need to bribe doctors to push their drugs… this will circumvent that entirely. You’ll end up paying drastically more, for less.

The sheer fact that it’s telling people to kill themselves to end suffering should be proof enough that it’s dogshit.

Saik0Shinigami@lemmy.saik0.com on 30 Jun 20:57 collapse

And the risk is that if we rely on AI in any meaningful capacity, it will eventually erode away the expertise of the people who would be knowledgeable enough to detect the problems that future AI may create or ignore. And that assumes the best case, where the AI isn’t being specifically tampered with.

Imgonnatrythis@sh.itjust.works on 01 Jul 00:29 collapse

I agree with you. I think this will likely happen to some degree. At the same time, that kind of argument could be made against many new technologies, and it isn’t a valid reason not to use new tech at all.

Saik0Shinigami@lemmy.saik0.com on 01 Jul 01:06 collapse

Simply using AI isn’t an issue… Allowing it to take over in a way that accelerates the loss of knowledge from our collective pool is a problem. Allowing companies to use AI as a direct replacement for actual medical professionals will remove knowledge from society. We already know that we can’t use AI output to fuel more AI training… the models implode. In order to keep learning more about medicine, we need to keep pushing for human learning and understanding.

Funny that you agree with me and apparently see useful discussion to have here… but downvote me even though the comment certainly added to the discussion.

Oh, and next time don’t put words into someone’s mouth; that’s very much a bad-faith action that harms meaningful discussion. I never said we should ban it or never use it. A better answer would be to legislate that doctors must still oversee care, or must be the approving authority; that AI can never have the final say in someone’s care; and that research must never be sourced from AI outputs. All I said is that if we continue what we’re doing and rely on AI in any meaningful capacity, we will run into problems. Especially in the context of the comment I responded to, which was about corporation-controlled AI.

FFS… they can’t even run a vending machine. www.anthropic.com/research/project-vend-1

Oh… and actually I would consider the 85% that it gets to be pretty poor, considering that the AI was likely trained on the full breadth of NEJM information. Doctors don’t have the ability to retain and train on 100% of all NEJM knowledge, so making mistakes makes sense for them. It doesn’t make sense for something that was trained on NEJM data to screw up on an NEJM case.

My stance is the same for all AI. I’ll use it to generate basic code for me, but I’ll never run that code without review. Or to jumpstart research into a topic… and then validate the information presented against direct outside sources.

TL;DR: Tool is good… Source is bad.

comador@lemmy.world on 30 Jun 21:17 collapse

People don’t realize how much doctors rely on cracking open old books, reading subscription journal articles, and reviewing case files to help their patients.

Anything that can aide in the diagnosis and treatment of patients is a good thing, even if it’s AI.

Source: I am in IT, and my wife’s two siblings are a general practitioner and an otolaryngologist (ear, nose, and throat specialist). In many ways there’s not much difference between being a systems administrator and being a doctor.

PancakesCantKillMe@lemmy.world on 30 Jun 21:26 collapse

Have you tried swapping out the part (CPU/videocard/memory/random component) whilst the patient is still running?

Doctors do this all the time! ;)

Arghblarg@lemmy.ca on 30 Jun 21:41 collapse

AI for pattern recognition (statistical stuff) is IMHO fine; that’s different from expecting original thought, reasoning, or understanding, which the new ‘AI’ does not do, despite the constant hype.