Artificial Intelligence (AI) Can Do More Damage At Its Current Stage

@ireti · 2025-08-11 07:29 · StemSocial
We have seen plenty of sci-fi movies in which androids with artificial intelligence become self-aware and attack humans, either because they want to take over the world or because humans tried to shut them down. While many people fear this possibility, there is a real-life danger associated with artificial intelligence that does not require machines to become self-aware before they can cause great harm.

https://upload.wikimedia.org/wikipedia/commons/c/ca/Artificial_Intelligence_-_Resembling_Human_Brain.jpg

wikimedia

Artificial intelligence has become a part of our lives: powering our homes, helping with repetitive and non-repetitive tasks, driving, automation, organization, piloting military drones, and so on. It does not need to be self-aware to be dangerous, and as of now, there is no evidence that these systems are becoming conscious in any way.

When fed bad data, you should expect bad results. Since an AI system learns from the data it is trained on, flaws in that data carry over into its decisions. In 2019, researchers found that an algorithm used by Optum to identify patients who would benefit from extra medical care was picking patients based on their predicted medical expenses rather than the severity of their illness. Its decisions were driven by the information it was fed, which included past insurance claims, previous diagnoses, age, sex, and demographics.

server-ai.jpg

sectorlink

The algorithm even went on to discriminate against Black patients, not because it was designed to be racist, but because it was never given data about patients' race and relied on past health-care spending as a proxy for need; since Black patients who needed more care had historically spent less on it, often due to unequal access, the algorithm could not identify them as high-risk. The issue with patients with higher predicted medical expenses arose from the same flaw: the algorithm was trained on how much patients had previously spent on health care, not on their actual physical health. Just like this one, any AI that is not fed accurate, fair, representative, and reliable data will behave in similarly flawed ways.
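To see how a cost proxy goes wrong, here is a minimal sketch on invented numbers (this is not Optum's actual model or data, just the general pattern): a model trained to predict spending will score an under-served patient lower than an equally sick patient who had better access to care.

```python
# Hypothetical illustration of proxy-label bias (all numbers invented;
# this is NOT Optum's real algorithm or data). We care about health NEED,
# but the label we train on is health-care COST.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per patient: [illness_severity, past_yearly_spending].
# Past spending reflects ACCESS to care as much as it reflects need.
X = np.array([
    [1.0,  1000.0],
    [3.0,  5000.0],
    [5.0, 12000.0],   # very sick, good access  -> high historical cost
    [5.0,  4000.0],   # equally sick, poor access -> low historical cost
])
# Training label: next year's cost, which closely tracks past spending.
y_cost = np.array([1200.0, 5500.0, 13000.0, 4500.0])

model = LinearRegression().fit(X, y_cost)

# Two EQUALLY sick patients, differing only in historical access to care.
well_served  = np.array([[5.0, 12000.0]])
under_served = np.array([[5.0,  4000.0]])
print(model.predict(well_served))   # high predicted cost -> gets extra care
print(model.predict(under_served))  # low predicted cost -> passed over,
                                    # despite identical illness severity
```

The model is doing exactly what it was asked, predicting cost fairly accurately; the harm comes entirely from treating cost as a stand-in for medical need.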

One thing about AI systems is that they have to be taught: whatever you train them on is what they learn, and whatever instructions are built into their algorithms are what they follow. Now consider weapons. Powerful companies are building AI-powered missiles and drones to target enemies, including buildings and people, yet there is not enough public knowledge about these systems and their capabilities.

If AI already worries us on simple tasks, bringing it into real warfare without proper understanding is far more dangerous. Failures with these algorithms are all but guaranteed, since realistic training data is very hard to obtain in matters of war. Whether a drone can reliably distinguish military personnel from civilians, or bandits and rebels from nomads, is a question that needs serious scrutiny.
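As a loose illustration of one failure mode (a toy sketch on synthetic data with made-up classes, not a model of any real targeting system): ordinary classifiers keep reporting high confidence even on inputs unlike anything they were trained on, so a system cannot rely on a confidence score alone to know when it is out of its depth.

```python
# Entirely synthetic sketch of the distribution-shift problem: a
# classifier trained on clean, well-separated data stays CONFIDENT on
# inputs far outside its training distribution. Classes here are
# abstract labels A/B, not any real targets.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two tight, well-separated clusters.
class_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
class_b = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(100, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)

# An out-of-distribution input, far from anything seen in training.
odd_input = np.array([[-40.0, -40.0]])
print(clf.predict_proba(odd_input))  # ~[1.0, 0.0]: near-total certainty
                                     # on an input it has never seen
                                     # anything remotely like
```

A deployed system would need explicit out-of-distribution detection or a human-in-the-loop abstain path; a high confidence score by itself is not evidence of a correct identification.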



References

https://www.science.org/doi/pdf/10.1126/science.aax2342
https://arstechnica.com/cars/2019/11/how-terrible-software-design-decisions-led-to-ubers-deadly-2018-crash/
https://www.hsph.harvard.edu/news/hsph-in-the-news/study-widely-used-health-care-algorithm-has-racial-bias/
https://www.mdpi.com/1999-5903/16/9/308
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9122957/
https://www.tandfonline.com/doi/full/10.1080/00396338.2017.1375263#d1e269
https://www.wired.com/story/ubers-fatal-self-driving-car-crash-saga-over-operator-avoids-prison/
https://www.tandfonline.com/doi/full/10.1080/15027570.2018.1481907
https://arxiv.org/pdf/1712.05846
https://arxiv.org/pdf/1706.05125

#hive-196387 #science #neoxian #waivio #cent #appreciator #stemgeeks #ocd #proofofbrain #piotr