Updated · Marcus on AI | Gary Marcus | Substack · Apr 22
ChatGPT's new image engine faces criticism for errors in bike labeling and drawing
9 articles
A recent critique highlights the engine's mislabeling of bike parts, such as confusing brakes, gears, and spokes, and generating implausible tandem bike diagrams.
The author notes that the system incorrectly combines features from different bike types and shows a lack of functional understanding, especially on tasks whose answers cannot be easily sourced from internet images.
While acknowledging that even humans may struggle with such tasks, the critique emphasizes that knowledgeable users can easily spot these errors, raising questions about the engine's real-world comprehension.
If AI fails to grasp a bicycle's design, can we trust it to drive cars or perform surgery?
Why does AI show superhuman skill in medicine but fail to understand simple mechanical objects?
As AI 'regurgitates' facts without understanding, are we at risk of teaching humans to do the same?
Does new research prove AI is developing its own 'world understanding' that is invisible to us?
Are 'World Models' the key to teaching AI common sense, or just a more sophisticated form of mimicry?