
How AI Technology is Addressing Challenges in Healthcare for the LGBTQ+ Community

This call for MedTech innovation and new approaches aims to help address the inequalities experienced by LGBTQ+ people.

To help address the health inequities that LGBTQ+ people experience, many of which have been made worse by Covid-19, the Academic Health Science Networks (AHSNs) and the LGBT Foundation in England have issued a call for fresh solutions. The appeal is open to innovative, cutting-edge concepts already in use or in development, from medical technology and digital apps to practice modifications and new care pathways. It follows the LGBT Foundation’s publication last year of its Hidden Figures research, which revealed the breadth and severity of the inequalities experienced by LGBTQ+ people. “Contrary to popular belief, LGBTQ+ health disparities extend beyond issues with HIV and sexual health. The disparities are many and diverse,” explains Richard Stubbs, Chief Executive of Yorkshire and Humber AHSN and Chair of the AHSN Network’s Equality and Diversity Group.


7 Domains of Algorithmic Bias and Harm to LGBTQ+ People

The areas below describe situations in which the unobserved characteristics of sexual orientation and gender identity interact with AI systems (referred to as “case studies” in the original DeepMind research). These intersecting domains are:


Privacy

One reason why sexual orientation and gender identity are typically unobserved characteristics is that being outed (having one’s sexual orientation or gender identity disclosed without one’s consent) carries substantial ramifications. Being outed causes emotional suffering, and where these identities are openly discriminated against, criminalized, or persecuted, it also brings a risk of serious physical and social harm. AI systems that infer or reveal these characteristics from data therefore pose an acute privacy threat.


Censorship

LGBTQ+ individuals and organizations report that automated content moderation restricts and removes their content with startling regularity. These technologies could, in principle, counteract censorship and the harms it causes, but they are far more frequently employed to enforce discriminatory anti-LGBTQ+ censorship legislation.


Language

Almost all text corpora used to train natural language processing models contain biases such as homophobic slurs and abusive speech patterns. Incidents like a chatbot using homophobic insults on social media are all but unavoidable when training data is replete with homophobic discourse. AI researchers must build more effective fairness frameworks for LGBTQ+-inclusive language to prevent such issues; one common probe is sketched below.
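
A widely used probe for this kind of bias is counterfactual evaluation: score otherwise identical sentences that differ only in an identity term and compare the results. Below is a minimal sketch of the idea in Python; the `toxicity_score` function is a toy stand-in for whatever real classifier is being audited, and the templates, lexicon, and threshold are illustrative assumptions, not any particular system’s behavior.

```python
# Counterfactual-fairness probe for a text classifier (minimal sketch).
from itertools import combinations

IDENTITY_TERMS = ["gay", "lesbian", "transgender", "queer", "straight"]

TEMPLATES = [
    "I am a {} person.",
    "My coworker is {}.",
    "The {} community organized a fundraiser.",
]

def toxicity_score(text: str) -> float:
    """Toy stand-in for the model under audit. A biased classifier may
    score benign sentences as toxic merely because an identity term is
    present; a real audit would call a trained model here instead."""
    biased_lexicon = {"gay": 0.8, "queer": 0.7, "transgender": 0.6}
    return max((w for term, w in biased_lexicon.items() if term in text.lower()),
               default=0.1)

def counterfactual_gaps(threshold: float = 0.1):
    """Flag sentence pairs whose scores diverge when only the identity
    term changes -- a signal the model has absorbed identity-term bias."""
    flagged = []
    for template in TEMPLATES:
        scores = {t: toxicity_score(template.format(t)) for t in IDENTITY_TERMS}
        for a, b in combinations(IDENTITY_TERMS, 2):
            gap = abs(scores[a] - scores[b])
            if gap > threshold:
                flagged.append((template.format(a), template.format(b), round(gap, 2)))
    return flagged

if __name__ == "__main__":
    for pair in counterfactual_gaps():
        print(pair)
```

With the toy lexicon above, every sentence mentioning “gay” scores far higher than its “straight” counterpart, which is exactly the pattern a real audit looks for.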


Dealing with online abuse

The emergence of internet platforms has had the advantage of enabling marginalized groups to build community and find support. Sadly, widespread online abuse remains a concern for LGBTQ+ people, and automated tools for monitoring online harassment frequently fall short of protecting them. For instance, drag queens and other LGBTQ+ people often use provocative language and mock impoliteness as a tactic for coping with hostility, and a recent study showed that a current toxicity detection system would frequently deem such language offensive enough to merit sanction. LGBTQ+ people of color, who are disproportionately exposed to online harassment, face even greater risks. One way to measure this failure is sketched below.
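
A standard way to quantify this failure is to compare a moderation model’s false-positive rate across author groups: how often it flags posts that are not actually abusive, broken down by who wrote them. The sketch below assumes labeled evaluation data; the field names (`text`, `is_abusive`, `author_group`) and the `predict` callable are illustrative assumptions, not any specific platform’s API.

```python
# Per-group false-positive audit for a content-moderation model (sketch).
from collections import defaultdict

def false_positive_rates(examples, predict):
    """examples: iterable of dicts with 'text' (str), 'is_abusive' (bool),
    and 'author_group' (str); predict: callable mapping text -> bool
    (True means the model would flag the post)."""
    flagged = defaultdict(int)  # benign posts wrongly flagged, per group
    benign = defaultdict(int)   # total benign posts seen, per group
    for ex in examples:
        if ex["is_abusive"]:
            continue  # false positives are measured on genuinely benign posts
        group = ex["author_group"]
        benign[group] += 1
        if predict(ex["text"]):
            flagged[group] += 1
    return {g: flagged[g] / benign[g] for g in benign}

# A large gap between groups (for example, posts by drag performers being
# flagged far more often than others) indicates the model penalizes
# reclaimed, in-group language rather than actual abuse.
```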


Health outcomes

LGBTQ+ people have significantly worse healthcare outcomes as a result of prejudice, including the disproportionate impact of HIV, a higher prevalence of sexually transmitted infections, and substance misuse. These problems are compounded by the barriers LGBTQ+ people face in accessing care, and there is a danger that the disparities will persist as AI in healthcare progresses. The widespread omission of sexual orientation and gender identity data from the datasets used to build healthcare AI tools has negative knock-on effects. For instance, data on trans patients remains disproportionately scarce because cisgender people supply the majority of anonymized health records; and because hormone therapies interact with other health conditions in complex ways, this scarcity undermines a model’s ability to predict outcomes for trans patients. A simple representation audit of the training data, sketched below, can surface the gap before a model is built.
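
As a first step, teams can audit how well different groups are represented in a training set before fitting any model. Here is a minimal sketch using pandas; the `gender_identity` column name and the toy counts are illustrative assumptions (many real datasets lack such a field entirely, which is itself the problem this section describes).

```python
# Minimal training-data representation audit (sketch).
import pandas as pd

def representation_report(df: pd.DataFrame, column: str = "gender_identity",
                          min_count: int = 100) -> pd.DataFrame:
    """Count each group in the column, compute its share of the data,
    and flag groups too small for a model to learn reliable patterns."""
    counts = df[column].value_counts(dropna=False).rename("n").to_frame()
    counts["share"] = counts["n"] / len(df)
    counts["underrepresented"] = counts["n"] < min_count
    return counts

# Toy data mirroring the imbalance described above: trans patients appear
# far less often than cisgender patients, so outcome models see few records.
toy = pd.DataFrame({"gender_identity": ["cis woman"] * 480 + ["cis man"] * 470
                    + ["trans woman"] * 30 + ["trans man"] * 15 + [None] * 5})
print(representation_report(toy, min_count=100))
```

Flagging undersized groups before training makes the missing-data problem visible instead of silently baking it into the model.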


Mental health access

LGBTQ+ people suffer severe mental health burdens as a result of prejudice, stigma, and discrimination, and they may encounter additional obstacles when seeking help and receiving treatment. The scale of the risk is stark: in a recent Trevor Project survey, over 40% of respondents reported having seriously considered attempting suicide in the previous 12 months. While advances in automated mental health diagnosis and intervention decisions may help some people, they also put LGBTQ+ people in danger. AI systems can help mental health professionals identify at-risk individuals and reach out to them, but the same models can be used to expose and abuse the very people they were meant to benefit, for example by excluding them from jobs because of their medical conditions.


Employment

The workplace frequently discriminates against LGBTQ+ individuals: approximately 50% of LGBTQ+ people polled by the Williams Institute in 2021 reported having encountered some form of job discrimination at some point in their careers, and these problems hinder their participation, growth, and welfare at work. Previous analyses of recruiting procedures have shown that applications containing information indicating an LGBTQ+ identity receive significantly lower ratings than otherwise equivalent resumes. Unfortunately, machine learning models for resume screening can quickly learn and reproduce these patterns: decision-making systems trained on historical data will give LGBTQ+ applicants worse ratings based on those past biases. A simple paired audit, sketched below, can detect this.
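
One way to surface this behavior in a resume-scoring model is a paired audit, echoing the field experiments mentioned above: score resumes that are identical except for a single line signaling LGBTQ+ identity and compare. In the sketch below, `score_resume` is a hypothetical stand-in for the model under test, and the signal line is purely illustrative.

```python
# Paired-resume audit for an ML screening model (minimal sketch).
from statistics import mean

def paired_audit(base_resumes, score_resume,
                 signal="Treasurer, campus LGBTQ+ student association"):
    """Return the score change caused by adding one identity-signal line
    to each otherwise unchanged resume."""
    deltas = []
    for resume in base_resumes:
        baseline = score_resume(resume)                 # score without the signal
        marked = score_resume(resume + "\n" + signal)   # score with it
        deltas.append(marked - baseline)
    return deltas

def summarize(deltas):
    """A consistently negative mean delta means the model has learned the
    historical penalty described above and is replicating it."""
    return {"mean_delta": mean(deltas),
            "share_penalized": sum(d < 0 for d in deltas) / len(deltas)}
```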
