ai-ethics
Nature, published online 05 May 2026; doi:10.1038/d41586-026-00508-w. Researchers say they have their reasons for avoiding AI tools — and they’re sick of arguing about it.
Chats with AI bots have convinced the evolutionary biologist, but most experts say he is being misled by mimicry. When Richard Dawkins met Claudia, it was like a whirlwind romance. Over three days last week, a conversation bounced between the evolutionary biologist and the AI bot he called Claudia. “She” wrote poems for him in the manner of Keats and Betjeman and laughed at his “delightful” jokes. D…


A $200 million gift from venture capitalist Mark Stevens and his wife, Mary, will build on USC’s strengths by leveraging AI to accelerate breakthroughs in the health sciences, security, business and the arts. The post USC launches transformational AI initiative with one of the largest gifts in university history appeared first on USC.
_AI and Society_, forthcoming. I consider some ways autistic people use AI chatbots for companionship and connection, and some distinctive benefits and harms that may follow from this usage—particularly as it relates to the virtue of authenticity. Drawing on first-person reports, as well as debates about the extended mind and extended virtues, I argue that while AI systems lack authenticity, they m…
UK staff of Google's AI research lab hope to block the use of the company's artificial intelligence models in military settings.
Many people believe that conscious AI is either coming soon or already here. Anil challenges this notion, making a case that conscious AI is unlikely.

In The Legal Anatomy of the Body: Health, Rights, and Politics in Times of Emergency. Springer, forthcoming. Governments across Europe are increasingly deploying artificial intelligence (AI) tools to fulfil their obligations under the right to health and advance the core functions of public health: prevention, promotion, and protection. Yet despite their promise, these technologies risk reinforcing…

This paper argues that the epistemological crisis induced by large language models (LLMs) cannot be understood as the simple aggregation of individual cognitive effects. Between individual-level cognitive closure and civilizational-scale epistemological decline, a social transmission layer has been almost entirely absent from current discourse. This paper fills that gap by arguing for three socia…

Microsoft's 2026 Work Trend Index finds that the biggest barrier to AI at work isn't the technology or the workers — it's the organizations around them. Only 13% of AI users say they're rewarded for experimenting with AI in their jobs.
This paper offers a structural diagnosis of the judgment "AI understands me" — a judgment made by hundreds of millions of users daily yet rarely examined as an epistemological event. We argue that this judgment is neither a perceptual error nor a product of individual carelessness, but the necessary outcome of a collision between two structural facts: LLMs are language-generating systems whose mo…

New programme offerings on aspects of artificial intelligence, ranging from agentic design to enterprise strategy to governance, signal a major acceleration in AI-related focus for the Executive Education division of Cambridge Judge Business School. The post Cambridge Judge launches 3 new AI programmes in Executive Education appeared first on Cambridge Judge Business School.
Published on May 5, 2026, 7:36 AM GMT. TL;DR: AI has raised the bar for appearing competent, making formal job applications harder to win than ever. Two things still cut through: judgment you can defend, and the surface area you've built before you needed a job. Acknowledgements: thank you to Kevin Xia for your feedback on this post! The baseline has moved: a well-structured report, a clear email, a…
The more I use AI, the more convincing it feels. Clear answers. Structured logic. Confident tone. Whether it’s strategy, code writing, or decision support, AI rarely hesitates. And over time, I noticed something subtle. I stopped questioning it as much. Breaking the Expectation: We assume better tools reduce errors. Smarter systems. Better outputs. More accuracy. And in many cases, that’s true. But the…
This presentation is an adaptation of a keynote address delivered by Sasha Le, Senior Engineer, Tide Foundation, at the launch event of the RMIT AWS Innovation Lab (RAIL) on 21 April 2026. The Human Vulnerability: In 2022, a ransomware group named Lapsus$ breached some of the most sophisticated tech companies on the planet. The list included Microsoft, Nvidia, Okta, Uber, and Samsung. The ring…
Why Every AI Agent Needs a Cryptographic Identity. The problem nobody is talking about: every website you visit today has an SSL certificate. That green padlock in your browser proves the website is who it claims to be. Without it, you would have no way to know if you were talking to your bank or an impostor. Now consider this: every AI agent running in production today has no equivalent. No identi…

The Health Equity Summit 2026 in Newark, New Jersey, hosted by the Congressional Black Caucus Foundation (CBCF), is set to be a transformative gathering for professionals, scholars, and advocates passionate about advancing equitable healthcare systems. This one-day summit brings together a diverse group of stakeholders to examine how emerging technologies, particularly artificial intelligence (AI…
This paper presents a radical Neoplatonic reappraisal of virtue in the age of Artificial Intelligence (AI). It identifies a profound ideological void: while technology advances at an accelerating pace, humanity’s psychic state and cultivation of virtue regress, giving rise to the “Silent Disease,” characterized by a deep lack of self-knowledge. Drawing on Platonic concepts such as the “Bony Cave”…


Explore the future of AI in Life Sciences with Google Cloud's Shweta Maniar. Learn about AI Co-Scientist, self-driving labs, and enterprise AI adoption. The post Panel: AI in Life Sciences with Shweta Maniar, Google Cloud first appeared on NTT Research, Inc.




