AWS Launches Fully Managed AI Models: Qwen3 and DeepSeek-V3.1
AWS Announces Fully Managed Open Weight Models Qwen3 and DeepSeek-V3.1, Expanding Its AI Model Portfolio
Amazon Web Services (AWS) today announced the launch of fully managed open weight models Qwen3 and DeepSeek-V3.1, which have been added to its AI model portfolio.
These newly introduced models offer greater flexibility to customers utilizing the Amazon Bedrock generative AI service, enabling them to better meet evolving business needs.
Open weight models provide developers with higher transparency into model weights, making it easier to customize models for specific use cases. The new open weight models in Amazon Bedrock join other models from leading developers, including Meta Platforms, Mistral AI, and OpenAI.
Each of these models has its unique strengths across different domains. Qwen3 from Alibaba offers model options for complex coding and general reasoning, while DeepSeek-V3.1 excels in mathematics, coding, and agent tasks. Qwen3 marks the first fully managed model from the Qwen family to be included in the Amazon Bedrock portfolio.
Although the model weights themselves are freely available, running these models within Bedrock gives customers access to AWS enterprise-grade security features, such as data encryption and strict access controls, helping ensure data privacy and compliance. Customers retain full control over their data: AWS does not share model inputs or outputs with the model providers, nor are they used to improve the underlying foundation models.
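For developers, "fully managed" means these models are called through Bedrock's standard runtime API rather than self-hosted infrastructure. The sketch below uses the boto3 Converse API; the model ID shown is a placeholder assumption, not a confirmed identifier, so check the Bedrock console for the models actually enabled in your account and region.

```python
# Sketch: invoking a Bedrock-hosted open weight model via the Converse API.
# Assumption: the model ID below is a placeholder, not a documented identifier.

def build_converse_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Assemble keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

request = build_converse_request(
    "qwen.qwen3-32b-v1:0",  # hypothetical model ID
    "Draft a three-step plan for migrating a reporting job to a smaller model.",
)

# With AWS credentials configured, the call would look like:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.converse(**request)
# print(response["output"]["message"]["content"][0]["text"])
```

Because Converse is model-agnostic, the same request shape works across Qwen3, DeepSeek-V3.1, and Bedrock's other models, so swapping models is typically a one-line change.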
Expanding Customer Choice Across Global Regions
Prior to the announcement, I spoke with Shaown Nandi, a technical director at AWS, about the value these new models bring to customers. Nandi, who served as Chief Information Officer at Dow Jones, a division of News Corp, before joining AWS six years ago, said that AWS plans to roll out the new models in key global markets, including Asia, Latin America, Europe, and North America.

He explained that general-purpose AI models may be larger than many companies with narrower use cases actually need. "You might need a smaller or more cost-effective model, and that's perfectly fine, given the variety of use cases," Nandi said. "What we're seeing with open weight models is a cost advantage and a choice advantage. Additionally, with models like Llama, AWS supports model distillation, allowing Bedrock users to train these models down to smaller sizes and retain most of the accuracy at up to 30 times lower runtime costs after distillation."

"Whether it's choosing a narrower model, a distilled model, or simply avoiding the high licensing costs of some proprietary models, such as for agent use cases, this is where open weight models truly shine," he added.

Nandi noted that customers in Latin America and parts of Asia show particular interest in adapting models to local needs, which is more feasible with open weight models. "I see strong demand from both international startups and companies in the U.S.," he said.

Open weight models also offer the speed and flexibility organizations require. "What makes open weight models unique is the ability to fine-tune and customize them," Nandi explained. "We're seeing customers experiment with these models, distilling them or fine-tuning them at different scales, and effectively building SLMs (small language models) that reflect their own industry or business needs."

Qwen3 Capabilities
According to AWS, customers can now access four new open weight models from the Qwen3 family. These multilingual models can plan multi-step workflows, integrate tools and APIs, and handle long context windows within tasks. Two general-purpose models offer both "reasoning" and "non-reasoning" inference modes. The announcement added that if the Qwen3 models "were human," they could "fluently speak dozens of languages and share encyclopedic knowledge across diverse topics—from explaining scientific concepts to writing creative stories."

DeepSeek-V3.1 Features
The DeepSeek-V3.1 models excel at hybrid reasoning, allowing customers to switch between modes depending on the type of problem they aim to solve, effectively balancing quick responses with deep, transparent thinking. These models are also highly efficient, rarely turning basic queries into lengthy discussions while maintaining a high level of expertise for strategic decision-making. They also clearly explain their reasoning process, making it easier to understand how they arrive at recommendations.

Leveraging Customer Feedback
No one has a crystal ball to tell AWS or any model developer exactly which models to launch in which markets. Instead, AWS listens to customer feedback, analyzes usage patterns, and makes informed decisions about deployment and update plans.

"This last point is important," Nandi emphasized. "We want to fill the gaps. We want customers to have full choice. Right now, there are many new agent use cases emerging. We're constantly under pressure to add more models." Today, AWS offers hundreds of models and continues to expand into new regions.

Another source of customer feedback is Bedrock's model evaluation tool. "It uses large language models as judges," Nandi explained. "Based on the parameters you input, it tells you which model fits your needs best. This is a scalable way we provide automated feedback to customers in Bedrock. For them, this is a game-changer."

These new models, along with AWS's strategy of bringing Amazon Bedrock models to more global customers, represent a smart business move. They should give current and future customers stronger foundation model options to support business growth.
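The LLM-as-judge evaluation Nandi describes can be sketched in miniature: a judge prompt bundles the task, the scoring criteria, and each candidate model's output, and the judge's scores drive the recommendation. This is a conceptual illustration only, not Bedrock's actual evaluation API; every name here is invented for the sketch.

```python
# Conceptual sketch of LLM-as-judge evaluation (not Bedrock's actual API).
# A judge prompt bundles task, criteria, and candidate responses; the judge
# model's scores then determine which model is recommended.

def judge_prompt(task: str, criteria: list[str], responses: dict[str, str]) -> str:
    """Build the prompt a judge model would receive."""
    lines = [
        f"Task: {task}",
        "Score each response from 1-5 on: " + ", ".join(criteria),
    ]
    for name, text in responses.items():
        lines.append(f"--- {name} ---")
        lines.append(text)
    return "\n".join(lines)

def pick_best(scores: dict[str, float]) -> str:
    """Return the model the judge rated highest."""
    return max(scores, key=scores.get)

prompt = judge_prompt(
    "Summarize a support ticket",
    ["accuracy", "brevity"],
    {"model-a": "Customer reports login failures since Tuesday's deploy.",
     "model-b": "The user cannot log in."},
)

# Hypothetical scores parsed from the judge model's reply:
best = pick_best({"model-a": 4.5, "model-b": 3.0})
```

The appeal of this pattern, as Nandi notes, is scale: the same judge prompt can rank any number of candidate models against criteria the customer supplies.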