"Moving" my S3 backup bucket to a new AWS region

I have moved the Techroads server from an AWS US region to an AWS EU region. This server and its chums are backing up to an S3 bucket, which has been left behind in the US.

"Moving" my S3 backup bucket to a new AWS region

I have moved the Techroads server from an AWS US region to an AWS EU region. This server and its chums are backing up to an S3 bucket, which has been left behind in the US. The backups are still working fine, but the bucket is now halfway around the planet, and it'll start costing me a bit of cash for data transfer.

My objective is to move the bucket to a new region. But "move" is in quotes, because such a thing is not possible. I need to create a temporary bucket, copy the data, delete the old bucket, create a new one with the old name in the new region, and copy the data again.

To do this, you can simply follow the AWS process here, which I have done. I am presenting my journey in the form of a walkthrough so you can see a few commands and enjoy my insightful commentary.

My backups are scheduled and configured as per this article. In a nutshell, I have cron jobs which regularly kick off a "sync" via the AWS CLI. In theory, if I do this right, I shouldn't need to change anything on the servers, which BTW are Ubuntu Linux 18.04, on AWS Lightsail.
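
For context, the cron side of that is nothing fancier than an entry along these lines (the schedule here is just an illustration, and the paths mirror the manual command further down rather than the original article exactly):

# Weekly sync of /data to the backup bucket (runs as root so it can read everything)
30 2 * * 0 aws s3 sync /data s3://linux123-backup-skhvynirme/myserver --exclude '*.log*'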

I should point out you could also just make a new bucket, reconfigure all your backups, and update the IAM permissions. If you have only a few bucket consumers, that's probably easier.

Create the new S3 bucket

I am going to reanimate the fictional bucket from the original article, linux123-backup-skhvynirme, and bung temp on the end for my interim bucket, so it shall be linux123-backup-skhvynirme-temp.

Find your way to the AWS S3 console and begin creating the temp bucket. Optionally, if you have any customisations you want to migrate, such as settings, tags, or bucket policy, you can choose to copy settings from the origin bucket (and later from the temporary bucket when creating the new one).

Select the new bucket name and the new region. I am going to import the settings of the original bucket, and commence creation.
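
If you'd rather do this bit from the CLI than the console, creating the temp bucket is a one-liner (the EU region here is just an example, substitute your own):

$ aws s3 mb s3://linux123-backup-skhvynirme-temp --region eu-west-2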

Transfer data from original -> temporary S3 bucket

Now we wield the AWS CLI for the transfer. If a basic CLI "ls" doesn't work and show you your new bucket, you'll need to get that working before having any chance of success. E.g.

$ aws s3 ls

Our sync syntax is basically from-old to-new. The --dryrun option is great for checking in advance what's going to happen. As this will output squillions of files, I'm piping to more, as I just want a quick look to sanity check. I'm on Windows, so for me:

$ aws s3 sync s3://linux123-backup-skhvynirme s3://linux123-backup-skhvynirme-temp --dryrun | more

It's looking good, so I remove the --dryrun and let it go at it. I have a lot of old junk in the bucket and a slow connection, so it takes hours.
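
For the record, the real run is just the same command minus the --dryrun (and the pipe to more):

$ aws s3 sync s3://linux123-backup-skhvynirme s3://linux123-backup-skhvynirme-temp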

Transfer data from temporary -> new S3 bucket

Once you are satisfied the temporary bucket has all you need in it, remove the original bucket, then create the new bucket with your original name, in the new region.
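
In CLI terms, that delete-and-recreate is roughly this pair of commands; --force empties the bucket before removing it, and the region is again just an example:

$ aws s3 rb s3://linux123-backup-skhvynirme --force
$ aws s3 mb s3://linux123-backup-skhvynirme --region eu-west-2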

Warning: I encountered the error "A conflicting conditional operation is currently in progress against this resource. Please try again." even after the delete operation was complete. It took a full hour after the deletion before I could finally execute the re-creation.
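
If you don't fancy retrying by hand, a crude shell loop like this (same assumed region) will just keep attempting the creation until AWS releases the name:

$ until aws s3 mb s3://linux123-backup-skhvynirme --region eu-west-2; do sleep 300; done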

There is more on bucket creation and IAM setup in the CLI backup post, should you need it.

It is worthwhile double checking the ARN in the IAM user policy for the CLI user that runs your backups. It shouldn't have changed, but it's worth a look.
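
One way to eyeball it is via the IAM CLI; the user and policy names below are placeholders for whatever yours are called, and the second command prints the policy document so you can check the bucket ARN:

$ aws iam list-user-policies --user-name backup-cli-user
$ aws iam get-user-policy --user-name backup-cli-user --policy-name backup-s3-policy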

Once you have created the new bucket, running the CLI sync again, this time from the temporary bucket to the new one, should bring you up to speed. In my example that looks like this:

$ aws s3 sync s3://linux123-backup-skhvynirme-temp s3://linux123-backup-skhvynirme

Of course, once done, test the backup functions end-to-end from your server.

I am not actually going to do this last step. I have changed a lot on my servers (different paths and folders, some container stacks moved off to a separate server, etc.), so I will take the opportunity to mothball the backups in the "temp" bucket longer term, clean up all my origin directories, and use the new bucket for fresh backups.

Manually running a backup

Because I am going bananas and initiating a new sync, I would like to try it from the command line as per the original post. I just prefer to see errors and fix them, rather than wait until my weekly automated run.

I'll have to sudo it to read everything.

$ sudo aws s3 sync /data s3://linux123-backup-skhvynirme/myserver --exclude '*.log*' --dryrun

It looks good, so I drop the --dryrun, and it fires up and backs up /data very quickly.
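
A quick way to sanity check what actually landed is an ls with totals (--summarize prints the object count and overall size at the end):

$ aws s3 ls s3://linux123-backup-skhvynirme/myserver/ --recursive --summarize --human-readable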

Best of luck with your bucket move.

Main photo courtesy of Jordan Crawford on Unsplash.
