Alexander Lerchner argues AI cannot achieve consciousness

7 articles · 404 Media · Apr 27
  • In a new paper, Google DeepMind scientist Lerchner claims no AI or computational system will ever be conscious, challenging CEO Demis Hassabis's predictions about AGI's transformative impact.
  • Lerchner's argument, which he terms the 'abstraction fallacy,' holds that AI systems derive meaning only from human-supplied definitions and lack intrinsic motivation and physical embodiment, limits he says cap their practical and commercial potential.
  • Experts note that Lerchner's views echo longstanding philosophical debates and that AI company narratives often diverge from rigorous academic scrutiny; some researchers criticize the insularity of corporate AI research.
  • Why would a top DeepMind scientist publicly challenge his own CEO's vision for AGI?
  • Is Google trying to avoid future AI rights lawsuits by defining AI as non-conscious?
  • An AI co-authored a paper claiming it is conscious. Is this a breakthrough or a deception?
  • Should AI creators be held liable when their systems generate harmful content?
  • If an AI reports its own feelings, on what grounds can we deny its consciousness?
  • Has the tech industry's insular culture ignored decades of crucial philosophical research?