Amazon Bedrock Launches Prompt Optimizer Across 5 Models, Aiming to Ease AI Model Migration
5 articles · Updated · AWS Blog · May 14
Amazon Bedrock rolled out Advanced Prompt Optimization, a new tool that rewrites and tests prompts across up to five models in one job to improve performance or support model migration.
The service runs a metric-driven feedback loop using prompt templates, sample inputs and optional ground-truth answers, then returns optimized prompts with evaluation scores, cost estimates and latency data.
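A loop of this kind can be sketched roughly as follows. This is a minimal illustration of the metric-driven idea, not Bedrock's actual implementation: the function names, the `{input}` template convention, and the exact-match scorer are all assumptions made for the example.

```python
# Illustrative sketch of a metric-driven prompt-optimization loop.
# All names and conventions here are hypothetical; they do not
# reflect Amazon Bedrock internals.

def exact_match_score(output: str, truth: str) -> float:
    """Return 1.0 when the model output matches the ground-truth answer."""
    return 1.0 if output.strip().lower() == truth.strip().lower() else 0.0

def optimize_prompt(candidates, samples, run_model):
    """Pick the candidate prompt template with the best average score.

    candidates: list of prompt-template strings containing "{input}"
    samples:    list of (input_text, ground_truth) pairs
    run_model:  callable(prompt) -> model output string
    """
    best_template, best_score = None, -1.0
    for template in candidates:
        total = 0.0
        for text, truth in samples:
            output = run_model(template.format(input=text))
            total += exact_match_score(output, truth)
        avg = total / len(samples)
        if avg > best_score:
            best_template, best_score = template, avg
    return best_template, best_score
```

In the real service the candidates would be generated rewrites, the model call would be a Bedrock inference request, and the scorer would be one of the evaluation methods described below; the loop structure is the point of the sketch.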
Users can score results with one method per prompt template: a custom AWS Lambda metric, an LLM-as-a-judge rubric, or natural-language steering criteria; multimodal inputs including PNG, JPG and PDF are supported.
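A custom Lambda metric along these lines could look like the sketch below. The event fields (`modelOutput`, `groundTruth`) and the returned `score` key are assumptions about the payload shape for illustration only; the actual contract is defined by the Bedrock documentation.

```python
def handler(event, context):
    """Hypothetical custom-metric Lambda returning a 0-1 quality score.

    The event shape (modelOutput / groundTruth keys) is an assumed
    example payload, not a documented Bedrock contract.
    """
    output = event.get("modelOutput", "")
    truth = event.get("groundTruth", "")
    # Simple token-overlap metric: fraction of ground-truth tokens
    # that appear in the model output.
    out_tokens = set(output.lower().split())
    truth_tokens = set(truth.lower().split())
    if not truth_tokens:
        return {"score": 0.0}
    overlap = len(out_tokens & truth_tokens) / len(truth_tokens)
    return {"score": round(overlap, 4)}
```

Any scoring logic that fits in a Lambda handler would work here; token overlap is just a compact stand-in for a real task-specific metric.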
The feature is available now in 15 regions spanning the US, Europe, Asia Pacific, Canada and Brazil, with charges based on Bedrock inference tokens at standard per-token rates.
With AI spending soaring, can automated tools finally get runaway operational costs under control?
As AI automates prompt design, is the creative 'art' of prompt engineering now obsolete?
When one AI judges another's work, whose biases are being encoded into our automated future?