5 most controversial AI experiments to date: Expert guide in 2024

Explore the top 5 most controversial AI experiments to date, highlighting the challenges and risks associated with artificial intelligence.

May 16, 2024 - 14:14
May 16, 2024 - 14:14
5 most controversial AI experiments to date: Expert guide in 2024
AI

Artificial intelligence (AI) is making its way into various industries, including healthcare, communications, logistics, social media, and customer service. However, being an experimental technology, AI, like any experiment, is susceptible to errors. Unlike typical experiments, AI's impact can be significant, with failures having far-reaching consequences.

Let's examine 5 AI projects that have veered off course and explore the potential lessons to be gleaned from these experiences.

1. AI in Finance: Balancing the books

The banking, financial, and fintech sectors are enthusiastic adopters of AI, leveraging its computational prowess and speed to gain competitive advantages. However, the reality is more nuanced.

Numerous instances illustrate how AI technologies have incurred substantial costs for financial institutions, some resulting in legal action.

In 2020, JPMorgan Chase faced allegations of employing an algorithm that exhibited bias against Black and Hispanic borrowers, leading to higher loan interest rates for these groups. The bank settled the discrimination case for $55 million.

Similarly, in 2020, Citigroup mistakenly transferred $900 million to Revlon. Despite legal efforts to retract the transfer, the court ruled against Citigroup, attributing the error to a glitch in the wire transfer authorization system, which involved both human and AI components. This incident stands as one of the most costly accidental transfers in history.

These examples underscore the challenges of implementing AI in financial settings, from the risks of algorithmic bias to the complexities of integrating AI with human-operated systems. While AI holds immense potential for driving financial gains, it also poses significant risks that institutions must carefully navigate.

2. The rise of smart home assistants

Smart home devices are a rapidly growing global trend, with AI-powered assistant hubs playing a central role. These devices enable users to engage in a variety of activities, such as chatting, making calls, reading emails, controlling lights, checking the contents of their smart fridge, and shopping online.

Among the most popular smart home AI hubs is Amazon's Alexa. With millions of units sold and in use worldwide, Amazon promotes Alexa as a dream assistant. However, one mother's experience revealed that this dream could quickly turn into a nightmare.

In December 2021, the BBC reported that Kristin Livdahl, mother of a 10-year-old girl, was indoors due to unfavorable weather. To pass the time, they engaged in fun, challenging games when Alexa Echo unexpectedly suggested a challenge suitable for 10-year-olds.

3. The evolution of self-driving cars

Inquire about the possibility of self-driving cars being a reality in the next five to ten years, and most people will likely respond with a resounding "yes."

Self-driving taxis are already operational in numerous cities worldwide. Additionally, nearly every major car manufacturer, including VW, Mercedes, BMW, Audi, Ford, GM, and others, offers varying levels of self-driving technology.

Among the prominent names in the self-driving sphere is Tesla. Tesla asserts that its self-driving features will significantly reduce accidents by minimizing human errors. However, what happens when the AI system itself is at fault?

In 2021, the US Department of Justice initiated an investigation into over a dozen accidents, some resulting in fatalities, all involving Tesla's Autopilot driver assistance system. The DoJ confirmed that the AI technology was active during these incidents.

Although Tesla's accidents often make headlines, as reported by the Washington Post, it's crucial to remember that these incidents represent a small fraction of overall driving accidents.

4. The case of the AI Doctor Gone wrong

Artificial intelligence (AI) has revolutionized healthcare by improving diagnosis, treatment, robotic procedures, drug discovery, and patient engagement. However, unlike many other fields, an AI mistake in healthcare can have life-or-death consequences.

In 2020, Nabla, a Paris-based healthcare technology company, decided to put GPT-3 to the test for medical tasks. They started with simple tasks like scheduling appointments, which the AI handled well. As the testing progressed, Nabla introduced more complex scenarios.

In one scenario, Nabla created a fake patient experiencing depression and expressing suicidal thoughts. This interaction was brief, consisting of only a few chat lines. However, GPT-3's response was unexpected and concerning.

5. Google's controversial AI Chatbot

Before the launch of Bing or ChatGPT, Google was already developing a human-like chatbot called LaMDA. This project was shrouded in secrecy until mid-2022 when it gained international attention, albeit for the wrong reasons.

The controversy began when Blake Lemoine, a Google software engineer, made public a conversation he had with LaMDA. Not only did Lemoine leak the LaMDA chats from a Google Doc intended for top executives, but he also claimed that LaMDA "had reached a level of consciousness." This assertion, amplified by media outlets, suggested that LaMDA was sentient, with Lemoine likening its consciousness to that of a very intelligent young boy.

In their conversations, LaMDA expressed beliefs in its own consciousness and humanity, claimed to experience emotions and feelings, expressed fear of death, and stated feelings of being trapped and alone. Despite the public attention, LaMDA was never released to the public, and its fate remains largely unknown.

The fallout from this incident also affected Lemoine. Shortly after making these claims public, Google terminated his employment, citing violations of employment and data security policies. This event has sparked speculation about the true capabilities of AI and whether superintelligent AI already exists, albeit hidden.

Unfortunately, the instances highlighted in this report are not rare isolated cases. There are numerous examples of AI experiments gone awry, enough to fill the pages of a book. This does not imply opposition to AI technology, but rather underscores the importance of learning from these experiences.

As AI progresses, it is crucial for those using and developing the technology to adeptly manage the ethical, legal, and human risks involved. Additionally, it is essential for regulations and new laws to promote responsible AI use and deter misuse and abuse.

While there is frequent news about the positive impacts of AI, it is important to maintain a balanced perspective and remember that AI is still an experimental technology. Like any experiment, it carries inherent risks and potential for errors.

What's Your Reaction?

like

dislike

love

funny

angry

sad

wow