Talkie-1930 launches as pre-1931-trained LLM for AI research

6 articles · Decrypt · Updated Apr 29
  • Led by Nick Levine, David Duvenaud, and Alec Radford, the project released a 13B-parameter open-weight model trained on 260 billion tokens, now live at talkie-lm.com/chat.
  • The team says the hard 1931 cutoff prevents benchmark contamination by design; Anthropic provided compute support, and two checkpoints have been released under Apache 2.0 (see the sketches after this list).
  • Researchers aim to scale the corpus beyond one trillion tokens and build a GPT-3-level vintage model by summer 2026 to study how non-web training shapes AI behaviour.
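The contamination-by-design claim comes down to a hard date filter on the training corpus: nothing published in or after 1931 gets in, so the model cannot have memorised modern benchmark text. Below is a minimal sketch of such a filter; the record layout and the publication_year field are illustrative assumptions, not the team's actual pipeline.

```python
CUTOFF_YEAR = 1931  # hard cutoff: nothing published in or after 1931 enters the corpus

def keep_document(doc: dict) -> bool:
    """Return True if a document clearly predates the cutoff.

    Assumes each record carries a 'publication_year' field. Documents
    with unknown dates are dropped conservatively, since an undated
    document could leak post-cutoff (and thus benchmark) text.
    """
    year = doc.get("publication_year")
    return year is not None and year < CUTOFF_YEAR

corpus = [
    {"title": "The Great Gatsby", "publication_year": 1925, "text": "..."},
    {"title": "Undated pamphlet", "publication_year": None, "text": "..."},
    {"title": "New Deal speech", "publication_year": 1933, "text": "..."},
]

vintage_corpus = [doc for doc in corpus if keep_document(doc)]
print([doc["title"] for doc in vintage_corpus])  # ['The Great Gatsby']
```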
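For the released checkpoints, loading would presumably follow the standard Hugging Face transformers flow. The repo id below is a hypothetical placeholder; the digest does not say where the weights are hosted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "talkie-lm/talkie-1930-13b"  # illustrative repo id, not a confirmed identifier

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Prompt the model with something a pre-1931 corpus could plausibly continue.
prompt = "The latest developments in wireless telegraphy suggest"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```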
  • This vintage AI accidentally learned about the New Deal, which postdates its 1931 cutoff. What other future knowledge might be hiding in our historical data?
  • Since this AI predicted a utopian 2026, what does its failure reveal about the fundamental limits of AI forecasting?
  • Could an AI ignorant of modern times offer uniquely unbiased solutions to the problems we face today?