Ways AI is plugging in to doctor’s offices and emergency rooms

by Inkhabar webdesk

Checking scans for cancer, telling surgeons when they can operate or reminding staff when tests need a follow-up: the use of AI in medical settings is transformative. But the fast-evolving tools are prompting philosophical questions, too, that may take more time to untangle.

Dallas, Texas (tca/dpa) – An extra set of eyes on radiology scans, double-checking for signs of prostate cancer. A green light telling surgeons when it's safe to operate on trauma patients. A reminder system that finds and flags test results that need to be followed up on.

Artificial intelligence has bounded into the mainstream, into personal lives, classroom assignments and work meetings, so it should be no surprise to find it in doctors' offices and emergency rooms, too. Nationwide, according to a federal brief, hospitals' use of AI tools is growing rapidly. In 2023, 66% of hospitals used predictive AI tools in their electronic record systems. A year later, that number was up to 71%.

As AI saturates nearly every aspect of the modern world, some medical applications run directly parallel to the types of tools we're already familiar with. Many doctors, for instance, are using AI tools to listen to, transcribe and summarize their patient visits. Medical offices are using automated scheduling tools to navigate patient appointments and cancellations. These administrative tools, while not the most exciting, are proving to be hugely important. By reducing medical providers' workloads, they can help curb physician burnout, a problem that has plagued the medical field for years.

But in medicine, there are also much more dynamic, and controversial, applications. Artificial intelligence tools can be used in clinical processes and decision-making, too, interfacing either directly with patients or with those patients' care plans.

The people working most closely with the development and implementation of these tools are excited. There are many backstops AI can provide, they say, to keep medical providers from making mistakes and to help understaffed emergency rooms respond more effectively to patient needs. Many of these tools are either in use or under development at North Texas hospitals, too. The people who are most excited about AI in health care describe the technology as "transformative."

As AI continues to evolve, day by day, the central question for health care leaders, including those in Texas, is no longer whether they'll use the technology in their medical facilities. The question, now, is how they'll make sure the technology is improving patient care instead of imperiling it.

The caveats

AI tools of all sorts come with caveats. ChatGPT, among the most popular consumer-facing AI tools, carries one at the bottom of its webpage: "ChatGPT can make mistakes," the message says. "Check important info." Google's AI tool contains a caveat, too, in a sidebar: "Generative AI is a work in progress and info quality may vary."

AI mistakes, or hallucinations, may not have life-altering consequences when a user is looking for help rewriting emails or optimizing a to-do list. When AI tools are applied to medical diagnosis and decision-making, though, the stakes are significantly higher. That's part of why doctors and other health leaders emphasize that AI tools, at this stage in their evolution, are meant to assist medical professionals, not replace them.
Xiaoqian Jiang, a researcher and the director of the Center for Secure Artificial Intelligence for Healthcare at the University of Texas Health Houston, said many of the existing tools perform well in straightforward medical cases. The same isn't yet true for complex cases.

"I think we are on the edge, but many of the models we currently have are still not actually to the level of the expert," Jiang said. "A lot of the time, sophisticated scenarios still need human judgment."

Even tools that do work well can still make mistakes or erroneous connections, which a human eye may be able to suss out before any damage is done.

AI is evolving rapidly, though, and in many ways it's developing outside the boundaries of existing rules and regulations. Dr. Ryan Choudhury, a hospice and geriatrics physician at the University of North Texas Health Fort Worth, said he thinks AI has outpaced governmental and safety regulations. "It feels like the government is probably five years behind on where they need to be in terms of legislating and helping guide what this looks like," Choudhury said.

A number of health experts pointed to liability law as one protection mechanism. From a legal perspective, doctors remain responsible for the care they provide, no matter what outside tools they're using.

Angela Clark is the director of the Urology Research & Education Foundation. The organization was created by Dr. Pat Fulgham, a urologist who practiced at Texas Health Presbyterian Hospital Dallas for 35 years. Clark and Fulgham said doctors' legal liability is a built-in protection mechanism, preventing providers from leaning too heavily on AI tools. "The providers are still held accountable, liable, for whatever they diagnose," Clark said. "Or fail to diagnose," Fulgham added.

Even with those caveats, experts say there are myriad ways the tools can help doctors do their jobs better. And there are some things, AI proponents say, that these tools can do even better than a human doctor can.

The applications

Looking just at clinical and patient care applications, there are more potential uses of AI tools than could be covered in any one article. But Dallas-Fort Worth doctors and health care leaders gave some examples of applications they're focusing on, to give a sense of what role AI could increasingly play in the U.S. health care system.

Fulgham said there are AI tools that double-check radiology scans to identify risk factors a human radiologist might have missed. That could help ensure accurate diagnosis of prostate cancer, he said. "It's not meant to replace the radiologist," Fulgham said, "but it may point out something that was inobvious to them."

Similarly, there are tools that can look over a biopsy and assist a pathologist in determining how aggressive a patient's cancer is. That information can then be used to inform a treatment plan.

Dr. Brett Moran, the chief health officer at Parkland Health, Dallas County's public hospital system, pointed to another soon-to-be-implemented tool, which has its roots in a problem he's seen firsthand. Years ago, Moran said, a patient came into the emergency room with chest pain. The medical staff sent the patient for a CT scan, primarily to look for blood clots. The scan turned up no blood clots, but there was a small nodule in the patient's lung. Separately from the chest pain, the staff told the patient, he should follow up on that nodule.
"In all the hoopla of the ER," Moran said, "it didn't sink in." The patient didn't go for follow-up scans. A year later, the patient was admitted to Moran's care. He had cancer and, by then, it had spread through his body.

"It's a story that we've seen too often, and it really bothered me and it stuck with me," Moran said. "This isn't a single doctor that failed, this is a system failure."

Parkland now has a team that follows up manually with patients, based on flags raised by radiologists and other medical providers. But when a provider is treating a specific problem, and trying to juggle a large number of patients, they may forget to go back through scans and flag unrelated issues. "What we needed was a more automated solution," Moran said.

Soon, Moran said, Parkland will switch on an AI tool built by the Parkland Center for Clinical Innovation. The tool will look through the interpretations of medical scans and flag potential follow-ups…

