Gemma 4 release highlights local AI models challenging large providers

14 articles · Updated · O'Reilly Media · May 1
  • Google’s open-weight Gemma 4 comes in 2B, 4B, 26B, and 31B variants, offers multimodal support, and can be deployed locally starting at roughly $500 for a GPU.
  • The report says local models are now viable for production, especially where privacy, GDPR-style data sovereignty rules, and high API bills make cloud services less attractive.
  • It adds that developers outside the US are driving adoption, while open-weight rivals from China and elsewhere are broadening multilingual fine-tuning, despite trade-offs in security, auditing, and concurrency.
Will the AI 'chip winter' make powerful local models too expensive for the average developer?
Are we trading Big Tech's privacy risks for the hidden security threats of local AI models?
Is the local AI boom a tech revolution or a new front in the US-China geopolitical contest?