Researchers develop Talkie AI trained on pre-1930 data
5 articles · Updated · Futurism · May 2
The 13-billion-parameter model, described by University of Toronto professor David Duvenaud, mimics old-timey language and is said to be the largest known “vintage” AI.
Researchers said Talkie is unaware of its historical cutoff and shows “temporal leakage,” sometimes giving anachronistic answers, such as references to Franklin D. Roosevelt’s 1933–37 presidency.
Early tests found it could write simple one-line programs and make playful forecasts, while raising questions about whether historically limited models can extrapolate, predict discoveries, or learn modern concepts.
Could an AI trained only on pre-1931 texts ever independently discover modern science or technology without exposure to recent data?
How does Talkie’s “old-timey” worldview challenge our assumptions about AI’s ability to reason, predict, and avoid anachronisms?