
For surgery patients, AI could help reduce alcohol-related risks

Using Artificial Intelligence to scan surgery patients’ medical records for signs of risky drinking might help spot those whose alcohol use raises their risk of problems during and after an operation, a new study suggests.

The AI record scan tested in the study could help surgery teams know in advance which patients might need more education about such risks, or treatment to help them reduce their drinking or stop drinking for a period of time before and after surgery.

The findings, published in Alcohol: Clinical and Experimental Research by a team from the University of Michigan, show that using a form of AI called natural language processing to analyse a patient’s entire medical record can spot signs of risky drinking documented in the chart, such as in doctors’ notes, even when the patient has no formal diagnosis of an alcohol problem.

Past research has shown that having more than a couple of drinks a day on average is associated with a higher risk of infections, wound complications, pulmonary complications and prolonged hospital stays in people having surgery.

Many people who drink regularly do not have a problem with alcohol, and even those who do may never receive a formal diagnosis of alcohol use disorder or addiction, which would be easy for a surgical team to spot in a chart.

Scouring Records and Notes

The researchers, from Michigan Medicine, U-M’s academic medical centre, trained their AI model by letting it review 100 anonymous surgical patients’ records to look for risky drinking signs, and comparing its classifications with those of expert human reviewers.

Overall, the AI model matched the human experts’ classification most of the time: it found signs of risky drinking in the notes of 87% of the patients whom the experts had identified as risky drinkers.

Meanwhile, only 29% of these patients had a diagnosis code related to alcohol in their list of diagnoses. So, many patients with higher risk for complications would have slipped under the radar for their surgical team.
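For readers curious what that comparison looks like mechanically, here is a minimal sketch in Python. The study’s actual NLP model is not described in this article, so a simple TF-IDF plus logistic-regression classifier stands in for it, and the chart notes and labels below are invented toy examples.

```python
# Minimal sketch: train a stand-in text classifier on expert-labelled notes
# and measure how often it agrees with the human reviewers.
# The study's actual NLP model is not specified here; TF-IDF + logistic
# regression is only an illustrative substitute, and all data are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.pipeline import make_pipeline

# Toy chart notes with expert labels: 1 = risky drinking documented, 0 = not
train_notes = [
    "Reports 4-5 beers nightly; counselled on cutting back before surgery.",
    "Social drinker, about one glass of wine per week. No concerns noted.",
    "Heavy weekend drinking documented by primary care; no AUD code on file.",
    "Denies alcohol use. Former smoker, quit several years ago.",
]
train_labels = [1, 0, 1, 0]

test_notes = [
    "Spouse reports a six-pack most evenings; patient declined referral.",
    "Occasional alcohol at social events only.",
]
expert_labels = [1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_notes, train_labels)

# Sensitivity: the share of expert-identified risky drinkers the model also
# flags (the paper reports 87% on this measure for its model).
print("sensitivity vs. expert review:",
      recall_score(expert_labels, model.predict(test_notes)))
```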

The researchers then allowed the AI model to review more than 53,000 anonymous patient medical records compiled through the Michigan Genomics Initiative. The AI model identified three times more patients with risky alcohol use through this full-text search than the researchers found using diagnosis codes. In all, 15% of patients met criteria via the AI model, compared to 5% via diagnosis codes.
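A rough sketch of that comparison, continuing the stand-in classifier above: the record layout and the alcohol-related ICD-10 codes below are assumptions for illustration, not details taken from the study.

```python
# Sketch: compare how many patients the note classifier flags with how many
# carry an alcohol-related diagnosis code. The record format and code list
# are illustrative assumptions, not details from the study.
ALCOHOL_CODES = {"F10.10", "F10.20", "K70.30"}  # illustrative ICD-10 codes only

def flag_rates(records, model):
    """records: list of dicts with 'note_text' (str) and 'diagnosis_codes' (set)."""
    nlp_flags = model.predict([r["note_text"] for r in records])
    code_flags = [bool(ALCOHOL_CODES & set(r["diagnosis_codes"])) for r in records]
    n = len(records)
    return sum(nlp_flags) / n, sum(code_flags) / n

# In the study, the full-text scan flagged about 15% of ~53,000 patients,
# versus about 5% flagged by diagnosis codes alone.
```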

“This evaluation of natural language processing to identify risky drinking in the records of surgical patients could lay the groundwork for efforts to identify other risks in primary care and beyond, with appropriate validation,” said V. G. Vinod Vydiswaran, Ph.D., lead author of the new paper and an associate professor of learning health sciences at the U-M Medical School. “Essentially, this is a way of highlighting for a provider what is already contained in the notes made by other providers, without them having to read the entire record.”

“Given the excess surgical risk that can arise from even a moderate amount of daily alcohol use, and the challenges of implementing robust screening and treatment in the pre-op period, it’s vital that we explore other options for identifying patients who could most benefit from reducing use by themselves or with help, beyond those with a recorded diagnosis,” said senior author Anne Fernandez, Ph.D., an addiction psychologist at the U-M Addiction Center and Addiction Treatment Services and an associate professor of psychiatry.

The new data suggest that surgical clinics that simply review the diagnosis codes listed in their incoming patients’ charts, flagging conditions such as alcohol use disorder, alcohol dependence or alcohol-related liver conditions, would miss many patients with elevated risk.

Alcohol + Surgery = Added Risk

In addition to known risks of surgical complications, Fernandez and colleagues recently published data from a massive Michigan surgical database showing that people who both smoke and have two or more drinks a day were more likely to end up back in the hospital, or back in the operating room, than others. Those with risky drinking who didn’t smoke also were more likely to need a second operation.

In a review of detailed questionnaire data from participants in two studies that enroll patients from Michigan Medicine surgery clinics, she and colleagues also found that 19% of people having surgery may have risky levels of alcohol use.

The new study used the NLP form of AI not to generate new information, but to look for clues in the pages and pages of provider notes and data that make up a person’s entire medical record.

After validation, Vydiswaran said, the tool could potentially be run on a patient’s record before they are seen in a pre-operative appointment and identify their risk level. Just knowing that a person has a potentially risky level of drinking isn’t enough, of course. 

Fernandez is leading an effort to test a virtual coaching approach to help people scheduled for surgery understand the risks related to their level of drinking and support them in reducing their intake.


China Restricts Use of AI in Scientific Research

New guidelines issued by the Ministry of Science and Technology prohibit researchers from using generative AI to directly generate declaration materials for their research, or from listing AI as a co-author of research results.

Released on Dec. 21, the research code of conduct applies to researchers in scientific institutions, higher education institutions, medical institutions, and enterprises.

The ministry said the guidelines are a response to new challenges in research data processing and intellectual property rights that have arisen from the rapid development of AI.

The guidelines require all AI-generated content to be clearly labeled as such, with information provided on how the content was generated.

Zhang Xin, director of the Digital Economy and Legal Innovation Research Center at the University of International Business and Economics in Beijing, believes the guidelines will help promote more responsible use of generative AI in scientific research. 

“If researchers use reference materials generated by AIGC (AI-generated content) without verification, it may not only jeopardize the quality of research outcomes but also intensify the spread of false information, posing various risks to society,” Zhang told Sixth Tone.

Prohibiting generative AI from being listed as a co-author aligns with current academic practice in China more broadly, Zhang added.

In September, the Institute of Scientific and Technical Information of China, a research institute under the Ministry of Science and Technology, collaborated with world leading academic publishers Elsevier, Springer Nature, and John Wiley & Sons to release guidelines on the use of AI-generated content in academic papers, which also required clear labeling of such content. 

In August, authorities released an updated draft law on academic degrees specifying that students caught using AI to write dissertations will have their degrees revoked. While the draft has yet to be finalized, some domestic academic journals are already rejecting papers produced with the help of generative AI.  

Further clarification of rules surrounding the use of AI in research is needed as AI becomes an important research tool, said Zhang. 

As AI tools continue to proliferate in China, authorities have been proactive in regulating various applications, including recommendation algorithms and “deepfakes” — fake videos or recordings of people manipulated through AI. 

In April 2023, the Cyberspace Administration of China unveiled specific rules for generative AI, becoming the first in the world to do so.

Meanwhile, major social media platforms such as Douyin, the Chinese version of TikTok, and video streaming platform Bilibili have also started requiring labeling of AI-generated videos.


AI’s Biggest Challenges Are Still Unsolved

The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

2023 was an inflection point in the evolution of Artificial Intelligence and its role in society. The year saw the emergence of Generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that’s already galloping along.

We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.


Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.

One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so – in ways that often do more harm than good.

However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don’t think we should be revamping education to put AI at the center of everything, but if students don’t learn about how AI works, they won’t understand its limitations – and therefore how it is useful and appropriate to use and how it’s not. This isn’t just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.

So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” The challenge with generative artificial intelligence is that, in contrast to ELIZA’s very basic pattern matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the AI magic crumble away.
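To make the contrast concrete, the entire mechanism behind an ELIZA-style exchange fits in a few lines of Python; the rules below are illustrative stand-ins, not Weizenbaum’s original script.

```python
# A few ELIZA-style rules, showing how little machinery was behind the
# original "dazzle": match a pattern, reflect the words back.
import re

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."

print(eliza_reply("I am worried about my exams"))
# -> "How long have you been worried about my exams?"
```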

I think it’s possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.


Kentaro Toyama, Professor of Community Information, University of Michigan

In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.” With the singularity – the moment artificial intelligence matches and begins to exceed human intelligence – still not here, it’s safe to say that Minsky was off by at least a factor of 10. It’s perilous to make predictions about AI.

Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.

The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of deep learning – what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so I expect some headway in 2024.

Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire – comically, tragically or both. Deepfakes, AI-generated images and videos that are difficult to detect are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago.

Speaking of problems, the very people sounding the loudest alarms about AI – like Elon Musk and Sam Altman – can’t seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for 2024 – though it seems slow in coming – is stronger AI regulation, at national and international levels.


Anjana Susarla, Professor of Information Systems, Michigan State University

In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to the ChatGPT of a year ago, which took in textual prompts as inputs and produced textual output, the new class of generative AI models is trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.

Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these lightweight LLMs and open source LLMs could usher in a world of autonomous AI agents – a world that society is not necessarily prepared for.

These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine. My chief concern is that such advanced capabilities will pose new challenges for distinguishing between human-generated content and AI-generated content, as well as pose new types of algorithmic harms.

The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.

The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there’s a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy & Protection Act.

A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but to consider the contexts the algorithms operate in: people, processes and society.

This article was originally published on The Conversation. Read the original article.


Emotional Intelligence Must Guide Artificial Intelligence — Together they can ensure quality and holistic patient care

By: Arthur Lazarus, MD, MBA

Arthur Lazarus is an adjunct professor of psychiatry and a regular commentator on the practice of medicine.

I don’t understand the brouhaha about artificial intelligence (AI). It’s artificial — or augmented — but in either case, it’s not real. AI cannot replace clinicians. AI cannot practice clinical medicine or serve as a substitute for clinical decision-making, even if AI can outperform humans on certain exams. When put to the real test — for example, making utilization review decisions — the error rate can be as high as 90%.

Findings presented at the 2023 meeting of the American Society of Health-System Pharmacists showed that the AI chatbot ChatGPT provided incorrect or incomplete information when asked about drugs, and in some cases invented references to support its answers. Researchers said the AI tool is not yet accurate enough to answer consumer or pharmacist questions. Of course it’s not. AI is only as smart as the people who build it.

What do you expect from a decision tree programmed by an MBA and not an actual doctor? Or a large language model that is prone to fabricate or “hallucinate” — that is, confidently generate responses without backing data? If you try to find ChatGPT’s sources through PubMed or a Google search you often strike out.

The fact is the U.S. healthcare industry has a long record of problematic AI use, including algorithms that embedded racial bias in patient care. In a recent study that sought to assess ChatGPT’s accuracy in providing educational information on epilepsy, ChatGPT provided correct but insufficient responses to 16 of 57 questions, and one response contained a mix of correct and incorrect information. Research involving medical questions in a wide range of specialties has suggested that, despite improvements, AI should not be relied on as a sole source of medical knowledge because it lacks reliability and can be “spectacularly and surprisingly wrong.”

It seems axiomatic that the development and deployment of any AI system would require expert human oversight to minimize patient risks and ensure that clinical discretion is part of the operating system. AI systems must be developed to manage biases effectively, ensuring that they are non-discriminatory, transparent, and respect patients’ rights. Healthcare companies relying on AI technology need to input the highest-quality data and monitor the outcomes of answers to queries.

What we need is more emotional intelligence (EI) to guide artificial intelligence.

EI is fundamental in human-centered care, where empathy, compassion, and effective communication are key. Emotional intelligence fosters empathetic patient-doctor relationships, which are fundamental to patient satisfaction and treatment adherence. Doctors with high EI can understand and manage their own emotions and those of their patients, facilitating effective communication and mutual understanding. EI is essential for managing stressful situations, making difficult decisions, and working collaboratively within healthcare teams.

Furthermore, EI plays a significant role in ethical decision-making, as it enables physicians to consider patients’ emotions and perspectives when making treatment decisions. Because EI enhances the ability to identify, understand, and manage emotions in oneself and others, it is a crucial skill set that can significantly influence the quality of patient care, physician-patient relationships, and the overall healthcare experience.

AI lacks the ability to understand and respond to human emotions, a gap filled by EI. Despite the advanced capabilities of AI, it cannot replace the human touch in medicine. From the doctors’ perspective, many still believe that touch makes important connections with patients.

Simon Spivack, MD, MPH, a pulmonologist affiliated with Albert Einstein College of Medicine and Montefiore Health System in New York, remarked, “touch traverses the boundary between healer and patient. It tells patients that they are worthy of human contact … While the process takes extra time, and we have precious little of it, I firmly believe it’s the least we can do as healers — and as fellow human beings.”

Spivack further observed: “[I]n our increasingly technology-driven future, I am quite comfortable predicting that nothing — not bureaucratic exigencies, nor virtual medical visits, nor robots controlled by artificial intelligence — will substitute for this essential human-to-human connection.”

Patients often need reassurance, empathy, and emotional support, especially when dealing with severe or chronic illnesses. These are aspects that AI, with its current capabilities, cannot offer. I’m reminded of Data on Star Trek: The Next Generation. Data is an artificially intelligent android who is capable of touch but lacks emotions. Nothing in Data’s life is more important than his quest to become more human. However, when Data acquires the “emotion chip,” it overloads his positronic relays and eventually the chip has to be removed. Once artificial, always artificial.

Harvard medical educator Bernard Chang, MD, MMSc, remarked: “[I]f the value that physicians of the future will bring to their AI-assisted in-person patient appointments is considered, it becomes clear that a thorough grounding in sensitive but effective history-taking, personally respectful and culturally humble education and counseling, and compassionate bedside manner will be more important than ever. Artificial intelligence may be able to engineer generically empathic prose, but the much more complex verbal and nonverbal patient-physician communication that characterizes the best clinical visits will likely elude it for some time.”

In essence, AI and EI are not competing elements but complementary aspects in modern medical practice. While AI brings about efficiency, precision, and technological advancements, EI ensures empathetic patient interactions and effective communication. The ideal medical practice would leverage AI for tasks involving data analysis and prediction, while relying on EI for patient treatment and clinical decision-making, thereby ensuring quality and holistic patient care.

There was a reason Jean-Luc Picard was Captain of the USS Enterprise and Data was not.

Data had all the artificial intelligence he ever needed in his computer-like brain and the Enterprise’s massive data banks, but ultimately it was Picard’s intuitive and incisive decision-making that enabled the Enterprise crew to go where no one had gone before.
