OpenAI landed on Amazon Web Services on April 28, 2026 — and for millions of developers already running infrastructure on AWS, the AI infrastructure map has not looked the same since. The expanded AWS-OpenAI partnership, backed by a $50 billion Amazon investment, brings GPT-5.5, OpenAI Codex, and Amazon Bedrock Managed Agents powered by OpenAI to AWS in limited preview. For the first time, AWS-native teams can access OpenAI frontier models without stepping outside their existing IAM security policies, AWS PrivateLink perimeter, or CloudTrail audit logs. This guide covers what shipped at the "What's Next with AWS" 2026 event, how each product works, how to request access, and what this means for teams deciding between Azure OpenAI Service and Bedrock as the foundation for production AI applications.
Why This Partnership Rewires the Cloud AI Landscape
For years, OpenAI's tightest cloud relationship was with Microsoft Azure. Azure OpenAI Service gave enterprise customers early access to GPT models alongside the compliance controls and enterprise agreements they needed — and for a long time, Azure was the only cloud where you could access OpenAI models with enterprise SLAs. AWS customers who wanted GPT-4 or GPT-5.4 had to call openai.com APIs directly, routing traffic outside their existing cloud security perimeters, billing systems, and compliance frameworks.
The April 28, 2026 announcement at Amazon's "What's Next with AWS" event changed that. The new partnership includes:
- A $50 billion Amazon investment in OpenAI ($15 billion initial, $35 billion conditional on scaling milestones)
- GPT-5.5 and GPT-5.4 available on Amazon Bedrock in limited preview
- OpenAI Codex available through the Bedrock API, Codex CLI, Codex desktop app, and VS Code extension — authenticated with AWS credentials
- Amazon Bedrock Managed Agents powered by OpenAI — production-ready agent infrastructure built around the OpenAI harness, running entirely within AWS
The strategic implication is significant. OpenAI moves from a de facto Azure-exclusive distribution arrangement to multi-cloud availability. AWS customers can now consolidate OpenAI workloads under existing IAM policies, PrivateLink connectivity, and CloudTrail audit logs — the same controls they apply to every other AWS service. The Register called it OpenAI "jumping out of Microsoft's bed, into Amazon's Bedrock" — an oversimplification, since Azure OpenAI Service continues to exist, but an accurate reflection of how much this shifts the competitive dynamics between AWS and Azure as platforms for enterprise AI development.
GPT-5.5 and GPT-5.4 on Amazon Bedrock
The first piece of the partnership is model availability. GPT-5.5 — released by OpenAI on April 23, 2026, and positioned as its most capable frontier model for coding, research, and agentic workflows — is now accessible through the Amazon Bedrock Model Catalog alongside GPT-5.4, the production-stable version that preceded it.
From a developer perspective, the API surface is identical to every other Bedrock model. You call GPT-5.5 using the same InvokeModel and Converse APIs you use today for Anthropic Claude or Amazon Nova. This matters for teams that have already built Bedrock abstractions for model selection, prompt logging, cost tagging, and guardrails — switching between Claude Opus 4.7 and GPT-5.5 becomes a one-line configuration change:
import boto3

# Same bedrock-runtime client and Converse API used for every other Bedrock model
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="openai.gpt-5.5-v1",  # swap this ID to change model providers
    messages=[
        {
            "role": "user",
            "content": [{"text": "Explain the tradeoffs between RAG and fine-tuning for production agents."}]
        }
    ]
)

print(response["output"]["message"]["content"][0]["text"])
Model IDs follow the provider.model-version convention Bedrock uses for all third-party models. Pricing follows the Bedrock on-demand model: you pay per input and output token with no minimums, and usage counts toward your existing AWS compute commitments and Enterprise Discount Programs (EDPs).
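Because Converse returns token counts in its usage block, per-request cost tracking needs no extra instrumentation. A minimal sketch, continuing from the response above; the per-token prices are illustrative placeholders, since AWS has not published preview rates:

# Rough cost estimate from the usage metadata Converse returns.
# The prices below are hypothetical placeholders, not published Bedrock rates.
PRICE_PER_1K_INPUT = 0.005   # assumed USD per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.015  # assumed USD per 1,000 output tokens

usage = response["usage"]  # present on every Converse response
cost = (usage["inputTokens"] / 1000) * PRICE_PER_1K_INPUT \
     + (usage["outputTokens"] / 1000) * PRICE_PER_1K_OUTPUT
print(f"{usage['inputTokens']} in / {usage['outputTokens']} out -> ~${cost:.6f}")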
OpenAI Codex on Bedrock: What Is Different
Codex on Bedrock is not simply the Codex API proxied through AWS. The integration is deeper: Codex runs its inference through Amazon Bedrock infrastructure, which means session data stays within your AWS account boundary, logs flow into CloudTrail, and network traffic never traverses the public internet if you configure AWS PrivateLink. For organizations with strict data residency requirements or SOC 2 compliance mandates, this is the difference between Codex being available and Codex being deployable in production environments.
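If your VPC does not already have a Bedrock interface endpoint, creating one is a single EC2 API call. A minimal sketch with placeholder network IDs; OpenAI model traffic rides the same endpoint as every other Bedrock model:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint so Bedrock traffic stays inside the VPC.
# All resource IDs below are placeholders for your own network.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc123",
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0abc123"],
    SecurityGroupIds=["sg-0abc123"],
    PrivateDnsEnabled=True,  # standard Bedrock hostname resolves privately
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])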
For developers, the surface change is minimal. You authenticate to Codex using your existing AWS credentials rather than an OpenAI API key:
# Configure Codex to use Amazon Bedrock for inference
codex --provider bedrock --model openai.codex-v2 "refactor this function to use async/await"
# Or set via environment variable
export CODEX_PROVIDER=bedrock
export CODEX_MODEL=openai.codex-v2
codex "add comprehensive error handling to the payment module"
The Codex CLI, desktop app, and VS Code extension all support Bedrock as an inference provider. For VS Code users, Codex-on-Bedrock completions appear in the same usage analytics dashboard as completions from other Bedrock models, giving engineering managers a unified view of AI coding tool usage across the organization — important for teams tracking AI tool spend under a single budget center.
Codex usage on Bedrock also counts toward AWS credits and enterprise discount agreements. For organizations with large AWS commitments, this is a meaningful cost optimization: rather than maintaining a separate OpenAI billing relationship, Codex spend consolidates under existing AWS invoicing.
Amazon Bedrock Managed Agents Powered by OpenAI
The most architecturally significant piece of the partnership is Amazon Bedrock Managed Agents powered by OpenAI — a new agent infrastructure tier that combines the OpenAI agent harness with Amazon Bedrock's managed infrastructure, IAM security model, and CloudWatch observability.
Before this announcement, building production-grade agents on OpenAI models required calling openai.com APIs directly (fast to start, hard to operationalize at scale) or using an orchestration framework like the OpenAI Agents SDK and deploying your own infrastructure. Bedrock Managed Agents powered by OpenAI is a third option: a fully managed agent runtime where AWS handles provisioning, scaling, fault recovery, and observability, while you define agent behavior through a simple configuration object:
{
  "agentName": "customer-support-agent",
  "foundationModel": "openai.gpt-5.5-v1",
  "instruction": "You are a customer support agent for Acme Corp. Answer questions about orders, returns, and product availability. Escalate complex refund disputes to a human agent.",
  "actionGroups": [
    {
      "actionGroupName": "order-lookup",
      "apiSchema": "s3://my-bucket/order-api-schema.json",
      "actionGroupExecutor": {
        "lambda": "arn:aws:lambda:us-east-1:123456789:function:order-lookup"
      }
    }
  ],
  "memoryConfiguration": {
    "enabledMemoryTypes": ["SESSION_SUMMARY"],
    "storageDays": 30
  }
}
The agent runtime manages the OpenAI function-calling loop, retries on tool failures, session memory across conversations, and integration with AWS Lambda action groups. Each agent action is logged to CloudTrail with the invoking identity, action type, tool called, and input/output tokens consumed. This makes Bedrock Managed Agents powered by OpenAI the highest-governance option currently available for GPT-5.5-based agents — more auditable than the OpenAI Assistants API and more production-ready than self-managed frameworks for teams already invested in AWS.
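Invoking a deployed agent should look like invoking any other Bedrock agent. A minimal sketch against the existing bedrock-agent-runtime API, with placeholder agent and alias IDs; the OpenAI-powered tier is still in preview, so treat the details as subject to change:

import uuid
import boto3

agents = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Agent and alias IDs are placeholders from your own deployment.
response = agents.invoke_agent(
    agentId="AGENT1234",
    agentAliasId="ALIAS1234",
    sessionId=str(uuid.uuid4()),  # reuse the same ID to continue a session
    inputText="Where is order #48213, and is it eligible for return?",
)

# invoke_agent streams the reply back as chunked events.
for event in response["completion"]:
    if "chunk" in event:
        print(event["chunk"]["bytes"].decode("utf-8"), end="")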
Teams migrating existing OpenAI Agents SDK implementations to managed infrastructure will find the configuration model familiar: the same tool definitions, memory settings, and model parameters carry over directly to the Bedrock Managed Agents configuration format.
Enterprise Security: IAM, PrivateLink, and CloudTrail
The enterprise security story for OpenAI on Bedrock is consistent with every other Bedrock model — and that consistency is the point. Access is controlled through IAM policies. Traffic flows through AWS PrivateLink, never touching the public internet. All model invocations appear in CloudTrail logs with full request metadata. Bedrock Guardrails — content filtering, PII redaction, prompt injection detection — apply to OpenAI models identically to how they apply to Anthropic Claude or Amazon Nova models.
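As a concrete illustration, attaching an existing guardrail to a GPT-5.5 call uses the same guardrailConfig parameter as any other Converse request; the guardrail ID and version here are placeholders:

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# The guardrail identifier and version are placeholders for your own guardrail.
response = bedrock.converse(
    modelId="openai.gpt-5.5-v1",
    messages=[{"role": "user", "content": [{"text": "Summarize this support ticket."}]}],
    guardrailConfig={
        "guardrailIdentifier": "gr-abc123",
        "guardrailVersion": "1",
    },
)
print(response["output"]["message"]["content"][0]["text"])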
For organizations already using Bedrock for Claude or Titan workloads, adding OpenAI models requires no new security architecture. You extend your existing IAM policies to include the OpenAI model ARNs (note that the Converse API is authorized through the bedrock:InvokeModel action):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
      "Resource": [
        "arn:aws:bedrock:us-east-1::foundation-model/openai.gpt-5.5-v1",
        "arn:aws:bedrock:us-east-1::foundation-model/openai.gpt-5.4-v1",
        "arn:aws:bedrock:us-east-1::foundation-model/openai.codex-v2"
      ]
    }
  ]
}
AWS PrivateLink endpoints for Bedrock extend automatically to cover OpenAI models — no separate endpoint to configure. CloudTrail events include the model ID in the requestParameters field, making it straightforward to build cost allocation reports that distinguish OpenAI model usage from other Bedrock models and feed into existing financial governance workflows.
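A sketch of what such a report could look like, counting recent InvokeModel events per model ID straight out of CloudTrail (this relies only on the standard CloudTrail event layout, nothing specific to the preview):

import json
from collections import Counter
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Look up recent InvokeModel calls and tally them per model ID.
counts = Counter()
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}],
    MaxResults=50,
)
for event in events["Events"]:
    detail = json.loads(event["CloudTrailEvent"])
    model_id = (detail.get("requestParameters") or {}).get("modelId", "unknown")
    counts[model_id] += 1

for model_id, n in counts.most_common():
    print(f"{model_id}: {n} invocations")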
Getting Access: The Limited Preview Process
As of early May 2026, OpenAI models on Bedrock are in limited preview. Access is request-based rather than self-serve, and the approval process takes two to five business days for most commercial AWS accounts:
- Open the Amazon Bedrock console and navigate to Model Catalog, then Model Access
- Locate the OpenAI model family using the provider filter
- Click Request access for each model (GPT-5.5, GPT-5.4, Codex) — requests can be submitted simultaneously
- Complete the use case attestation form, which is required for all frontier models on Bedrock and is not specific to the OpenAI integration
- Receive an email confirmation when access is approved; the model becomes immediately available in the Bedrock API with no additional configuration
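Once the approval email arrives, you can confirm access programmatically rather than clicking through the console. A minimal sketch using a one-token probe request, which costs effectively nothing; the model ID is the preview identifier used earlier in this guide:

import boto3
from botocore.exceptions import ClientError

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# A one-token probe: succeeds once model access is granted, raises
# AccessDeniedException while the request is still pending.
try:
    bedrock.converse(
        modelId="openai.gpt-5.5-v1",
        messages=[{"role": "user", "content": [{"text": "ping"}]}],
        inferenceConfig={"maxTokens": 1},
    )
    print("Access granted")
except ClientError as err:
    if err.response["Error"]["Code"] == "AccessDeniedException":
        print("Access not yet approved")
    else:
        raise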
Enterprise accounts with existing AWS Premier Support or committed spend above the enterprise tier threshold may be eligible for expedited preview access through their AWS account team. Organizations on AWS GovCloud should note that the OpenAI model preview is currently limited to commercial regions; GovCloud availability is on the roadmap but has not been announced with a date.
Model Selection: GPT-5.5 vs. Claude Opus 4.7 on Bedrock
The practical effect of OpenAI on Bedrock for most AWS-native development teams is that GPT-5.5 becomes a first-class option in their model selection matrix — no longer separated by an organizational security boundary from the rest of the stack. Teams can now run structured capability evaluations using Bedrock's A/B model testing feature, routing the same prompts to both models simultaneously and comparing outputs before committing to a production choice.
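Even before you reach for the console tooling, a side-by-side run is a few lines of code, because both models sit behind the same Converse API. A minimal sketch; the Claude model ID here is illustrative:

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

PROMPT = "Draft a retry policy for a payment webhook that must never double-charge."
MODELS = ["anthropic.claude-opus-4-7", "openai.gpt-5.5-v1"]  # IDs are illustrative

# Send the identical prompt to each model and print both answers for review.
for model_id in MODELS:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": PROMPT}]}],
    )
    print(f"--- {model_id} ---")
    print(response["output"]["message"]["content"][0]["text"])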
For agentic workloads specifically — which represent the most consequential model selection decision in 2026 — the tradeoffs to evaluate are:
- Claude Opus 4.7 via Bedrock: Strongest benchmark scores on multi-step tool use and instruction-following in complex workflows, lower per-token cost at high volume, longer track record on Bedrock with mature tooling. For teams using the Bedrock AgentCore runtime, Claude integrates with the native Bedrock agent infrastructure.
- GPT-5.5 via Bedrock Managed Agents powered by OpenAI: OpenAI's own agent harness with a different tool-selection profile, stronger scores on pure code generation benchmarks, and compatibility with existing OpenAI Agents SDK implementations you want to migrate to managed infrastructure without re-engineering agent logic.
Neither model is universally superior. The right choice depends on task domain, latency requirements, and whether your team has existing OpenAI Agents SDK code you want to carry forward. Running a two-week parallel evaluation using Bedrock's model comparison tooling before committing to a production architecture is the lowest-risk path.
What This Does Not Change
Some fundamentals remain the same despite the partnership announcement. OpenAI models on Bedrock are still OpenAI models: inference runs on OpenAI's infrastructure, not on AWS servers. AWS acts as the security boundary, distribution layer, and billing intermediary — the actual computation happens at OpenAI endpoints. This matters for compliance scenarios where data processing location, not just data transit, is regulated under frameworks like GDPR or HIPAA. Review your specific requirements with legal and security teams before routing regulated data through OpenAI Bedrock endpoints.
Azure OpenAI Service is also not going away. Microsoft's existing agreement with OpenAI remains in place, and Azure continues to offer GPT-5.5 with its own enterprise controls. Organizations with existing Azure OpenAI deployments should evaluate migration on workload-specific merits — not on the assumption that AWS Bedrock is now the preferred channel. It is one of two enterprise channels, and the right choice depends on where your other infrastructure lives.
Bottom Line
The April 28, 2026 AWS-OpenAI partnership closes the gap that has existed for AWS-native teams since GPT-4 launched: you can now use GPT-5.5 and Codex inside the same IAM, PrivateLink, and CloudTrail security perimeter you use for everything else in your AWS environment. Bedrock Managed Agents powered by OpenAI adds a managed agent runtime that makes production deployment straightforward and audit trails complete. Limited preview access is available now through the Bedrock Model Catalog; most commercial AWS accounts should expect approval within two to five business days. For teams that have been waiting for a compliant path to OpenAI frontier models on AWS, the wait is over.