AI Is Dangerous — But Not for the Reasons You Think

In a world captivated by dramatic headlines about artificial intelligence potentially ending humanity, AI ethics researcher Sasha Luccioni offers a grounded and urgent perspective. In her TED Talk “AI Is Dangerous, but Not for the Reasons You Think”, delivered at TEDWomen 2023 and released in October 2023, Luccioni — a climate lead at Hugging Face with over a decade of experience in AI — argues that the real threats from AI are not speculative future catastrophes, but the very real harms unfolding right now.

The talk opens with a personal anecdote: Luccioni recounts receiving an unusual email from a stranger claiming that her work in AI was destined to “end humanity.” While she understands the cultural fascination with doomsday scenarios — fueled by science fiction and sensational media coverage — she believes this fixation distracts from pressing, tangible issues. “AI won’t kill us all,” she asserts, “but that doesn’t make it trustworthy.” Instead of chasing hypothetical existential risks, society should confront the concrete negative impacts already caused by today’s AI systems.

Luccioni identifies three primary ways AI is inflicting harm in the present day.

First, its massive environmental footprint. Training and deploying large-scale AI models requires enormous amounts of computational power and electricity, often powered by fossil fuels. This contributes significantly to carbon emissions and accelerates climate change. Even simple interactions with models like ChatGPT generate notable CO₂ output, and the energy demands of massive data centers continue to grow unchecked. Luccioni stresses that this environmental cost is frequently overlooked amid the excitement surrounding AI’s capabilities.

Second, copyright infringement and exploitation of creators. Many generative AI systems are trained on vast datasets scraped from the internet, incorporating artworks, photographs, books, and other creative works without the consent, compensation, or attribution of the original artists and authors. This practice amounts to intellectual property theft on an industrial scale, undermining livelihoods and raising serious ethical questions about fairness in the digital age.

Third, the amplification of bias and harmful information. AI models can perpetuate and magnify existing societal biases embedded in their training data, leading to discriminatory outputs in areas such as hiring, criminal justice, lending, and facial recognition. These systems may produce misleading, inaccurate, or outright harmful content, spreading misinformation and disproportionately affecting marginalized communities.

Luccioni emphasizes that obsessing over long-term, apocalyptic risks — like rogue superintelligence — diverts attention and resources from addressing these immediate problems. By focusing on today’s harms, we can develop practical solutions: greater transparency in AI development, robust regulations, tools to measure and mitigate environmental impacts, fairer data practices that respect creators’ rights, and bias-detection mechanisms to ensure equitable outcomes.

She advocates for responsible AI that is inclusive, sustainable, and accountable. Interestingly, in a follow-up TED Talk delivered in September 2025 titled “We’re Doing AI All Wrong. Here’s How to Get It Right”, Luccioni builds on these ideas, critiquing the current trajectory of ever-larger models controlled by a few corporations and proposing a shift toward smaller, more efficient, open-source AI systems. These “small but mighty” models, she argues, could deliver strong performance while drastically reducing energy consumption, making AI both more accessible and environmentally viable.

As AI continues to reshape society in 2026, Luccioni’s message remains strikingly relevant: the danger lies not in a distant robot uprising, but in the unchecked environmental, ethical, and social consequences of the technology we are deploying today. By redirecting our focus to these real-world issues and implementing meaningful safeguards, we have the opportunity to build a future where AI serves humanity and the planet rather than harming them. The time to act is now, before the costs become even more irreversible.
