Mistral AI launches Mistral Medium 3.5, remote coding agents, and Le Chat Work mode
15 articles · Updated · Mistral AI · Apr 29
Mistral Medium 3.5 is a 128B dense model with a 256k context window, released as open weights under a modified MIT license and available via API and Hugging Face.
The model powers new cloud-based coding agents in Vibe and a Work mode in Le Chat, enabling parallel, long-running, multi-step tasks and integration with tools like GitHub, Jira, and Slack.
Mistral Medium 3.5 outperforms earlier Mistral models on coding and agentic benchmarks, can be self-hosted on four GPUs, and is now the default model for Mistral Vibe and for Le Chat Pro, Team, and Enterprise plans.
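Since the model is available via API, a minimal sketch of what a request might look like follows. Mistral's platform accepts an OpenAI-compatible chat-completions payload; the exact model identifier string for Mistral Medium 3.5 is an assumption here and should be checked against the platform's model list before use.

```python
import json

# Hypothetical model identifier -- the exact API name for Mistral Medium 3.5
# is assumed, not confirmed; consult the platform's model listing.
MODEL_ID = "mistral-medium-3.5"

def build_chat_request(prompt: str, max_tokens: int = 256) -> str:
    """Serialize a chat-completions request body in the OpenAI-compatible
    shape (model, messages, max_tokens) used by Mistral's API."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

# The serialized body would be POSTed to the chat-completions endpoint
# with an API key; network details are omitted in this sketch.
body = build_chat_request("Write a Python function that reverses a string.")
```

The same payload shape works for self-hosted deployments that expose an OpenAI-compatible endpoint, which is one practical upside of the open-weight release.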
Will Mistral Medium 3.5’s open-weight approach finally give enterprises true control over their AI workflows, or introduce new challenges?
How do Mistral’s remote coding agents compare in reliability and security to established solutions like GitHub Copilot or Amazon CodeWhisperer?
As Mistral expands its sovereign AI infrastructure, could this reshape Europe’s global AI competitiveness and influence data privacy standards?
Can Leanstral’s formal proof verification agent drive a shift toward mathematically guaranteed software, or will adoption hurdles persist?
What risks might enterprises face by rapidly adopting agentic AI platforms, especially regarding decision quality and tech debt?
With studies showing mixed productivity results, what hidden factors determine whether AI agents help—or hinder—experienced developers?