Well, after procrastinating for around 8 months, I finally figured out a way to migrate a DigitalOcean Spaces bucket to Amazon S3. I will be sharing the steps I followed.
Why Migrate?
It varies from use case to use case. In my case, most of my project’s infrastructure already lives on AWS, so I wanted everything in one place.
Initial Struggles?
I didn’t find any proper documentation or tutorials on how to do this, and the available tools looked a little scary at first.
Solution
Step 1- Install Rclone: If you are on Windows, install WSL2 first.
curl https://rclone.org/install.sh | sudo bash
Now, when you run rclone in the WSL2 Ubuntu shell, it should print all the available commands and options.
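To verify the installation, you can also print the installed version (the exact version string on your machine will differ):
rclone version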
Step 2- Get Access Credentials from DigitalOcean and AWS
a) Getting credentials from DigitalOcean: In the DigitalOcean control panel, go to API → Spaces Keys and generate a new key pair; note down the access key and the secret.
b) Getting credentials from AWS: In the AWS console, create an IAM user with programmatic access and S3 permissions, then note down its access key ID and secret access key.
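As a side note, if you would rather not hardcode keys in the config file, Rclone's S3 backend can also pick up credentials from the standard AWS environment variables when env_auth = true. A minimal sketch (we will hardcode the keys below for simplicity):
export AWS_ACCESS_KEY_ID=your-access-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-access-key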
Step 3- Rclone configuration: Now, we can create the configuration directory and open up a configuration file:
mkdir -p ~/.config/rclone
nano ~/.config/rclone/rclone.conf
We will now configure our DigitalOcean Spaces endpoint and our AWS S3 endpoint as Rclone “remotes” in the configuration file. For DigitalOcean Spaces, paste the following section into the configuration file to define the first remote (replace the placeholder keys with the credentials from Step 2, and adjust the endpoint to your Space’s region if it is not fra1):
[spaces-endpoint]
type = s3
env_auth = false
access_key_id = access-key-id
secret_access_key = secret-access-key
endpoint = fra1.digitaloceanspaces.com
acl = private
Then define the AWS S3 remote in the same file. Note that the section name is the remote name we will refer to later (aws-s3-endpoint), not the bucket name:
[aws-s3-endpoint]
type = s3
provider = AWS
env_auth = false
access_key_id = access-key-id
secret_access_key = secret-access-key
region = eu-central-1
endpoint =
location_constraint =
acl =
server_side_encryption =
storage_class =
Now save the file with Ctrl+O and exit with Ctrl+X.
Be sure to lock down the permissions of the configuration file, since it contains your credentials.
chmod 600 ~/.config/rclone/rclone.conf
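Optionally, you can ask Rclone to print back the parsed configuration to confirm the file is valid (note that this prints your secrets in plain text, so run it in a private terminal):
rclone config show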
Step 4- Listing Objects from the S3 and Spaces Remotes
Now that our configuration is ready, we can start working with the files.
Let’s check which remotes are available. Remotes are the endpoints you configured in the previous step; in our case, that is the Spaces endpoint and the S3 endpoint.
rclone listremotes
Output
spaces-endpoint:
aws-s3-endpoint:
Both of the remotes that we defined are available.
Now, let’s ask Rclone to list the directories (buckets) associated with each remote. (Important: make sure to add a colon to the end of the remote name.)
rclone lsd spaces-endpoint:
Output
-1 2021-05-29 14:38:32 -1 spaces-bucket-name
Now let’s check the contents of the S3 remote.
rclone lsd aws-s3-endpoint:
Output
-1 2020-06-12 19:06:11 -1 aws-s3-bucket-name
-1 2020-06-12 19:39:09 -1 aws-s3-bucket-name2
-1 2020-12-21 22:01:44 -1 aws-s3-bucket-name
Let’s take a look at the contents of a Spaces or S3 bucket. We can use the tree command and pass in the remote name, followed by a colon and the name of the bucket we wish to list:
rclone tree spaces-endpoint:spaces-bucket-name
Output
/
└── Images
├── image1.jpg
├── image2.jpg
├── image3.jpg
├── image4.jpg
└── image5.jpg
1 directories, 5 files
Now let’s check the contents of the S3 bucket.
rclone tree aws-s3-endpoint:aws-s3-bucket-name
Output
/
└── Images
└── image6.jpg
1 directories, 1 file
Step 5- Copying Objects from Spaces to S3
First, let’s try downloading the contents of the Spaces bucket to the local machine.
cd /mnt/d/spacesbucket/
rclone sync spaces-endpoint:spaces-bucket-name .
It will take a little while, and once it finishes you will see all the content in the D:/spacesbucket directory.
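If you want to watch the transfer while it runs, Rclone accepts a --progress (or -P) flag with any transfer command, for example:
rclone sync spaces-endpoint:spaces-bucket-name . --progress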
Similarly, you can optionally download all the S3 bucket’s content to a local directory:
cd /mnt/d/s3bucket/
rclone sync aws-s3-endpoint:aws-s3-bucket-name .
Now it’s time for the final step: copying all the content from the Spaces bucket to S3.
rclone sync spaces-endpoint:spaces-bucket-name aws-s3-endpoint:aws-s3-bucket-name
It will take a while; once it finishes, you can navigate to your S3 bucket (aws-s3-bucket-name) and see all the content from the Spaces bucket there.
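A word of caution: sync makes the destination match the source, so any files that exist only in the destination bucket will be deleted. If you only want to add and update files without deleting anything, rclone copy is the safer choice. You can also preview a sync without changing anything, and verify the result afterwards:
rclone sync spaces-endpoint:spaces-bucket-name aws-s3-endpoint:aws-s3-bucket-name --dry-run
rclone check spaces-endpoint:spaces-bucket-name aws-s3-endpoint:aws-s3-bucket-name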
Bonus: Migrate from S3 to Spaces
If you ever need to migrate in the other direction, from S3 to Spaces, simply swap the source and destination:
rclone sync aws-s3-endpoint:aws-s3-bucket-name spaces-endpoint:spaces-bucket-name
Feel free to let me know in case of any questions.