Written by Sunny

Published: 23 Dec 2024

Facts About AI in Mental Health

A recent study, Patient Perspectives on AI for Mental Health Care: Cross-Sectional Survey Study, investigates patients' and the general public's perspectives on artificial intelligence in health care. This is a marked departure from earlier studies and surveys, which assessed only the attitudes of professionals in the field.

Understanding the benefits and usage of artificial intelligence in the healthcare field is essential for those pursuing a Master's in Clinical Mental Health Counseling online. AI is a valuable tool that can be used alongside professional expertise to improve outcomes.

This study is a one-time cross-sectional survey of a nationally representative sample of 500 US-based adults. It is also one of the first studies to explore the public's perspective on artificial intelligence for mental health-related applications.

Just under half of the respondents in the study, 49.3%, reported believing that artificial intelligence could make mental health care 'better or somewhat better'. This is similar to a study conducted in Germany, in which 53% of respondents reported positive attitudes toward AI; however, that survey was not specific to AI in mental health.

The world is currently facing a mental health crisis, with at least 10% of the global population affected. Almost 15% of adolescents around the world are experiencing a mental health condition. Overcoming this crisis will require deploying every available tool, including AI.


Findings From the Study

This study revealed that different demographics were associated with different perceptions of the benefits of AI in healthcare. For example, people with lower self-rated health literacy and Black or African American respondents reported more positive perceptions.

In comparison, responses from women in this study were linked to lower perceived benefits of AI for mental health. Interestingly, the German study cited above had similar findings, with women reporting lower perceived benefit.

This study also covered the respondents’ concerns regarding the use of artificial intelligence in mental health care. Participants in the study cited concerns regarding:

  • Decreased human communication and connection
  • Confidentiality and privacy
  • Risk of harm, for instance, a wrongful diagnosis or inappropriate treatment

These findings were consistent with previous studies reporting worries about AI accuracy. The study also revealed that the general public is concerned about the performance and cost of AI tools.

AI & Mental Health Tasks

Results from this study also showed that patients' comfort with artificial intelligence in healthcare depends on the task the AI performs. Participants were least comfortable with AI delivering diagnoses.

Despite the growth of chatbots, less than half of the participants, 47.4%, were comfortable discussing their mental health information with one. This suggests that chatbots should be offered only on an opt-in basis, as a supplement to seeing a professional.

Artificial Intelligence in Mental Health

The findings from the study on patient perception of artificial intelligence in mental health reveal that a plurality of participants consider AI beneficial, depending on how it is used. AI tools are already being adopted in the mental health sector. These include:

  • Therapeutic chatbots in the form of smartphone applications and computer programs.
  • AI-powered games that serve as therapy tools and help patients express emotions.
  • AI-assisted virtual reality, used to treat phobias safely through controlled, immersive simulated environments.

The use of artificial intelligence has also been explored for administrative tasks for mental health care providers, for mental health monitoring (such as via wearables), for personalizing treatment, and for suicide prevention.

For instance, the social media giant Facebook has started using artificial intelligence to scan accounts for signs of imminent self-harm. The system analyzes users' content and how other users respond to it, including replies to videos such as livestreams.

Depending on those responses, the AI flags the user for Facebook to review manually and decide whether to contact local authorities. AI can be a valuable tool for suicide intervention and for helping people access appropriate mental health care.

Potential Risks

Artificial intelligence is still a growing technology. While it has the potential to revolutionize the mental health sector and the healthcare industry overall, there are several risks and challenges it’ll need to overcome. These include:

  • Data privacy and security, especially in a field like healthcare, where patient confidentiality is critical.
  • AI systems need to be trained on unbiased data. Currently, it is too easy for artificial intelligence to give less-than-accurate responses for minority groups and people with rarer medical conditions.
  • There is currently a lack of transparency in AI, also known as the 'black box' problem. Regulations need to be implemented to improve transparency and trust in AI tools.

Guardrails for AI

The Australian Government has proposed mandatory guidelines for artificial intelligence in high-risk settings, which would include the mental health sector. Currently, the guidelines are voluntary, with plans to make them mandatory for all developers and deployers of AI tools.

These guidelines include tackling issues like the black box problem, establishing a clear line of accountability for artificial intelligence-related problems, and reducing bias in training data.

Artificial intelligence is developing rapidly, so it has been hard for regulatory bodies to establish a clear set of rules and standards. More countries should follow the Australian Government's proactive lead.

Alongside implementing guardrails for artificial intelligence development and use, the Australian Government has taken an adaptive stance. They plan to develop the guardrails and laws over time to adapt to the development of AI, taking advice from industry professionals.
