
AI in Healthcare: The Strange Case of Teen Pregnancy ‘Predicted’ by a Creepy Algorithm


AI in healthcare is expanding, but in 2018 reports surfaced of a creepy algorithm that could supposedly predict teen pregnancy.

In 2018, while the Argentine Congress was debating whether to decriminalize abortion, the Ministry of Early Childhood in the northern Argentine province of Salta approved the deployment of a futuristic Microsoft algorithm that could reportedly predict which low-income “future teens” were likely to become pregnant.

“With technology, you can see which girl, future teenager, five or six years ahead, with first name, last name, and address, is 86 percent predestined to have a teenage pregnancy,” Juan Manuel Urtubey, the province’s then-governor, proudly declared on national television. The idea was to use an algorithm to anticipate which low-income girls would become pregnant over the following five years. It was never clear what would happen once a girl or young woman was labeled “predestined” for motherhood, or how this information would help prevent teenage pregnancy.

The AI system’s social assumptions, like its algorithms, were opaque. The algorithm analyzed demographic data such as age, race, disability, place of origin, and whether or not a home had hot water in its bathroom to decide which of the women and girls in a small Argentine town were “predestined” for parenthood. According to Wired’s findings, the women and girls labeled as would-be adolescent parents by Microsoft’s algorithm were often marginalized in multiple ways, coming from poor backgrounds, migrant families, and indigenous communities. The Technology Platform for Social Intervention algorithm is notable because it was developed by an American company, Microsoft, and deployed in a country with a long history of surveillance and population-control strategies.
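Nothing about the model itself was ever published, so its workings can only be guessed at. As a purely illustrative sketch, though, the kind of pipeline described above, a handful of demographic features feeding a binary “risk” classifier, takes only a few lines to assemble. The Python example below uses synthetic data, hypothetical feature names, and a scikit-learn logistic regression; none of it reflects the actual Salta system.

```python
# Illustrative sketch only: the Salta system's internals were never
# disclosed. This shows how easily a generic demographic "risk"
# classifier of the kind described above can be built. All features
# and labels here are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical features mirroring those reported: age, disability,
# migrant family, indigenous heritage, hot water in the bathroom.
X = np.column_stack([
    rng.integers(10, 19, n),   # age
    rng.integers(0, 2, n),     # disability (0/1)
    rng.integers(0, 2, n),     # migrant family (0/1)
    rng.integers(0, 2, n),     # indigenous heritage (0/1)
    rng.integers(0, 2, n),     # hot water in bathroom (0/1)
])
y = rng.integers(0, 2, n)      # random labels: no real signal at all

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# A single headline number like "86 percent" could be reported from a
# line like this, yet it says nothing about label quality, fairness,
# or real-world validity. (On random labels it hovers near chance.)
print("accuracy:", model.score(X_test, y_test))
```

The point of the sketch is that the hard part was never the code; it is the unexamined data, labels, and consequences, which is exactly what the program’s opacity concealed.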

According to a Microsoft spokesperson, Salta was “one of the leading cases in the use of AI data” in state technology programs, although the program hardly introduced anything new. What follows demonstrates how Argentine feminists exposed this misuse of artificial intelligence.

This type of eugenic thinking tends to mold itself to new scientific paradigms and political circumstances, according to historian Marisa Miranda, who studies Argentina’s attempts to control its population through science and technology. Take immigration, for example. Argentina’s history alternates between celebrating immigration as a tool to “improve” the population and anxiously monitoring and managing immigrants as an undesirable political threat.

While Salta’s government described the AI system for “predicting pregnancy” as futuristic, it can only be understood within the context of this long history, particularly the persistent eugenic impulse that “refers to the future” and anticipates reproduction, as Miranda put it: “The powerful should be in charge.”

The lack of transparency and accountability surrounding the program reflects those leanings. For one thing, the Argentine government has never properly evaluated the algorithm’s impact on girls and women.

Worse, according to Wired, the operation involved deploying “territorial agents” who visited the people the AI identified as predestined for pregnancy, photographed them, and even logged their GPS coordinates. It is still unclear what the provincial and national governments did with the data, or how, if at all, it was tied to the abortion debate. Everything we know about the system comes from grassroots audits of the flawed and dangerous AI undertaken by feminist activists and journalists. By quickly mobilizing a well-oiled machine of community organizing, these activists drew national media attention to how an unvalidated, unregulated technology had been used to violate the privacy of girls and women.

Argentina voted to decriminalize abortion in 2020, marking a watershed moment in the South American country’s history, but the program’s existence should remain a cause for alarm.

The episode should serve as a warning about the potentially hazardous intersection of American AI technology and authoritarian governance, as well as a timely reminder that, for the time being, we have far more to fear from the humans behind the algorithms than from the algorithms themselves.
