How can I configure Cross-Region VPC endpoints for an AWS S3 bucket?

Manish Sharma
6 min read · Oct 22, 2022


With AWS PrivateLink for Amazon S3, we can provision interface VPC endpoints (interface endpoints) in our virtual private cloud (VPC). These endpoints are directly accessible from applications that are

  • on premises over VPN and AWS Direct Connect, or
  • in a different AWS Region over VPC peering.

The problem statement:

I want to configure cross-Region Amazon Virtual Private Cloud (Amazon VPC) endpoints so that I can access an AWS resource, such as an Amazon Simple Storage Service (Amazon S3) bucket, over a private link. How do I do this?

Resolution

Here is the high-level design, which should give you enough clarity to implement the overall solution.

For a detailed step-by-step explanation, follow the points below.

  1. Create an AWS account in which to create all these resources
  2. For this article I have chosen the N. California (us-west-1) and Mumbai (ap-south-1) AWS Regions

Tasks to complete in AWS N. California Region

3. Create a VPC with CIDR 172.0.0.0/18

4. Create two Public Subnets

5. Create a minimum of two Private Subnets

6. Create an Internet Gateway to route traffic to the internet

7. Create a Route Table and associate it with the Public Subnets.

Steps #3 to #7 can be completed in one simple flow: click Create VPC, select VPC and more, specify all the details, and click Create VPC. This way, AWS creates the basic network pieces for us.

Fig: Create VPC & Its components
Fig: Public and Private Subnets
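If you prefer the CLI, steps #3 to #7 can be sketched with the AWS CLI as below. This is a minimal sketch assuming placeholder resource IDs (vpc-aaa, igw-aaa, and so on); repeat create-subnet once per public and private subnet.

# All commands run against the N. California region (us-west-1)
aws ec2 create-vpc --region us-west-1 --cidr-block 172.0.0.0/18
# Repeat for each public and private subnet, varying the CIDR and AZ
aws ec2 create-subnet --region us-west-1 --vpc-id vpc-aaa --cidr-block 172.0.0.0/24 --availability-zone us-west-1a
aws ec2 create-internet-gateway --region us-west-1
aws ec2 attach-internet-gateway --region us-west-1 --internet-gateway-id igw-aaa --vpc-id vpc-aaa
aws ec2 create-route-table --region us-west-1 --vpc-id vpc-aaa
aws ec2 associate-route-table --region us-west-1 --route-table-id rtb-pub --subnet-id subnet-pub1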

8. Initiate a peering request to the AWS Mumbai region VPC (CIDR 10.0.0.0/18) to peer the N. California and Mumbai region VPCs

Fig: Peering request to Mumbai Region VPC
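The same peering request can be initiated from the CLI; a minimal sketch, assuming vpc-aaa is the N. California VPC ID and vpc-bbb is the Mumbai VPC ID:

# Initiated from us-west-1, targeting the VPC in ap-south-1
aws ec2 create-vpc-peering-connection --region us-west-1 --vpc-id vpc-aaa --peer-vpc-id vpc-bbb --peer-region ap-south-1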

9. Add 2 routes to the public subnet route table.

  • one for local traffic within the VPC (172.0.0.0/18)
  • and a 2nd for traffic to the internet (0.0.0.0/0) via the internet gateway created in step #6
Fig: Public Subnet Route Table
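A CLI sketch of step #9, assuming the placeholder IDs from earlier. Note that the local 172.0.0.0/18 route exists on every route table in the VPC by default, so only the internet route has to be added explicitly:

# Send internet-bound traffic from the public subnet via the internet gateway
aws ec2 create-route --region us-west-1 --route-table-id rtb-pub --destination-cidr-block 0.0.0.0/0 --gateway-id igw-aaa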

10. Create a Route Table and associate it with the Private Subnets.

11. Add 2 routes to the private subnet route table.

  • one for local traffic within the VPC (172.0.0.0/18)
  • and a 2nd for traffic to the Mumbai region VPC (10.0.0.0/18) via the VPC peering connection created in step #8
Fig: One of the private subnet route tables
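Steps #10 and #11 as a CLI sketch, assuming placeholder IDs and pcx-aaa for the peering connection from step #8:

aws ec2 create-route-table --region us-west-1 --vpc-id vpc-aaa
aws ec2 associate-route-table --region us-west-1 --route-table-id rtb-priv --subnet-id subnet-priv1
# Route traffic destined for the Mumbai VPC over the peering connection
aws ec2 create-route --region us-west-1 --route-table-id rtb-priv --destination-cidr-block 10.0.0.0/18 --vpc-peering-connection-id pcx-aaa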

12. Create an EC2 instance in the Public Subnet using an Amazon Linux AMI. Amazon Linux comes with the SSM and CloudWatch agents preinstalled.

This instance has a public IP associated, so it can be accessed from anywhere; it uses the default security group.

Fig: Public EC2 Instance

13. Create an EC2 instance in the Private Subnet using an Amazon Linux AMI. Amazon Linux comes with the SSM and CloudWatch agents preinstalled.

This machine has only a private IP associated, so for this article it can be accessed only from the public instance.

Fig: Private EC2 instance
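Steps #12 and #13 as a CLI sketch; the AMI ID, key pair name, and subnet IDs below are placeholders:

# Public instance: request a public IP at launch
aws ec2 run-instances --region us-west-1 --image-id ami-aaa --instance-type t2.micro --key-name my-key --subnet-id subnet-pub1 --associate-public-ip-address
# Private instance: private IP only
aws ec2 run-instances --region us-west-1 --image-id ami-aaa --instance-type t2.micro --key-name my-key --subnet-id subnet-priv1 --no-associate-public-ip-address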

14. From the SSM Session Manager console you can access this machine directly from the AWS Console, but this gives you only command-line access to the machine.
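The same session can also be opened from a local terminal, assuming the Session Manager plugin for the AWS CLI is installed and the instance ID below is replaced with your own:

aws ssm start-session --region us-west-1 --target i-0123456789abcdef0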

Tasks to complete in AWS Mumbai Region

15. Create a VPC in the AWS Mumbai Region with CIDR 10.0.0.0/18

16. Accept the peering request initiated from the AWS N. California region VPC in step #8

Fig: VPC Peering in Mumbai Region
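Accepting the request can also be done from the CLI, assuming pcx-aaa is the peering connection ID from step #8:

aws ec2 accept-vpc-peering-connection --region ap-south-1 --vpc-peering-connection-id pcx-aaa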

17. Create private subnets

Fig: Private and Public Subnets

18. Create a Route Table and associate it with the Private Subnets.

19. Add 2 routes to this private subnet route table.

  • one for local traffic within the VPC (10.0.0.0/18)
  • and a 2nd for traffic to the N. California region VPC (172.0.0.0/18) via the VPC peering connection created in step #8
Fig: One of the private subnet route tables

20. Create an AWS S3 VPC Interface endpoint to access the S3 bucket over the AWS internal/private backbone.

Select the Interface endpoint type and the private subnets to create the S3 Interface endpoint. Also select the default security group, which allows all traffic. I am okay with this security group for the purposes of this article, but you should create a custom security group that allows only restricted access.

The following image shows the Endpoints console Details tab, where you can find the DNS name of a VPC endpoint. In this example, the VPC endpoint ID (vpce-id) is vpce-07ada758f42d9b493 and the DNS name is *.vpce-07ada758f42d9b493-wh8ljg3j.s3.ap-south-1.vpce.amazonaws.com. Remember to replace * when using the DNS name. For example, to access a bucket, the DNS name would be bucket.vpce-07ada758f42d9b493-wh8ljg3j.s3.ap-south-1.vpce.amazonaws.com.

Fig: Interface endpoint
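A CLI sketch of step #20; the VPC, subnet, and security group IDs are placeholders, and com.amazonaws.ap-south-1.s3 is the S3 interface endpoint service name in the Mumbai region:

aws ec2 create-vpc-endpoint --region ap-south-1 --vpc-id vpc-bbb --vpc-endpoint-type Interface --service-name com.amazonaws.ap-south-1.s3 --subnet-ids subnet-mum1 subnet-mum2 --security-group-ids sg-default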

21. Create an AWS S3 bucket (data-bucket-s3-endpoint) in the ap-south-1 Mumbai region. Remove block public access from the bucket permissions.

Fig: Create S3 Bucket

Add the bucket policy below only if you have a specific use case, such as:

I want to block any traffic that isn't coming from a specific Amazon Virtual Private Cloud (VPC) endpoint. The bucket must be accessible only from specific VPC endpoints or IP addresses.

{
  "Version": "2012-10-17",
  "Id": "DenyAccessToBucket",
  "Statement": [
    {
      "Sid": "VPCe",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::data-bucket-s3-endpoint",
        "arn:aws:s3:::data-bucket-s3-endpoint/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-07ada758f42d9b493"
        }
      }
    }
  ]
}
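Creating the bucket and attaching the policy can also be done from the CLI; this sketch assumes the policy above is saved locally as bucket-policy.json:

aws s3api create-bucket --bucket data-bucket-s3-endpoint --region ap-south-1 --create-bucket-configuration LocationConstraint=ap-south-1
aws s3api put-bucket-policy --bucket data-bucket-s3-endpoint --policy file://bucket-policy.json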

Access S3 Bucket

  1. Access the bucket from the N. California private instance
  • First, log in to the public subnet EC2 instance, and from there log in to the private subnet EC2 instance
  • After logging in, execute this command to list the bucket objects:
aws s3 --region ap-south-1 --endpoint-url https://bucket.vpce-07ada758f42d9b493-wh8ljg3j.s3.ap-south-1.vpce.amazonaws.com ls s3://data-bucket-s3-endpoint/

Because you are accessing the bucket through the S3 VPC Interface endpoint, you will not get a permission denied error, and the bucket objects will be listed in the output.

2. Access the bucket from the N. California public instance

  • Log in to the public subnet EC2 instance
  • After logging in, execute this command to list the bucket objects:
aws s3 --region ap-south-1 ls s3://data-bucket-s3-endpoint/

As per the bucket policy applied, the bucket can be accessed only through the VPC endpoint, so you will get the error below when accessing the bucket.
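The failure typically looks like the following (the exact wording can vary with the AWS CLI version):

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied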

Thanks for reading this article. Follow me if you liked it.


Manish Sharma

I am a technology geek and keep pushing myself to learn new skills. I am AWS Solutions Architect (Associate and Professional) and Terraform Associate certified.