I asked someone at AWS for tips on speeding up S3 transfer speeds

My name is Ito and I am an infrastructure engineer.
Amazon S3 is a highly scalable storage service with 99.99% availability
(and 99.999999999%, or "eleven nines", durability).
Some people use S3 to host static sites, others simply for storage.
Either way, what really matters is the transfer speed.
Today I would like to introduce a "trick to speed up transfer speeds" that someone from AWS mentioned!
S3 is not a directory-file structure
Before we get into the trick to speed up transfer speeds, a bit of background:
when you upload files, S3 copies them to three data centers in the same region.
In S3 there are buckets and objects, and we tend to think of buckets as folders and objects as files, but that is not actually how it works.
To quote:
The underlying technology of Amazon S3 is nothing more than a simple KVS (Key-Value data store). For example, let's say we have the following folder structure (as we perceive it). (In this entry, we will simply consider that bar.txt contains the character bar, and baz.txt contains the character baz.)
(Root)
└ foo/
  └ bar.txt
However, this is just how we perceive it, and from an S3 perspective, it simply stores the following information. In S3, / basically has no special meaning.
| Key (full path name) | Value (file contents) |
| --- | --- |
| foo/bar.txt | bar |

Reference site: Shattering the illusion of "folders" in Amazon S3 and revealing their true nature | Developers.IO
Amazon S3 supports buckets and objects, but it has no hierarchy. However, prefixes and delimiters in object key names can imply a hierarchy, which is how the Amazon S3 console and the AWS SDKs introduce the concept of folders.
Reference: Object Keys and Metadata - Amazon Simple Storage Service
Buckets and objects make it look like there are folders and files, but those exist only as concepts.
Because the data is stored as key-value pairs, retrieving it comes down to a simple key lookup.
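To make that concrete, here is a minimal sketch (my own illustration, not code from the AWS person) of the "folder illusion" using boto3. The bucket name example-bucket is a placeholder, and it assumes boto3 and credentials are already set up.

```python
import boto3

s3 = boto3.client("s3")

# "foo/bar.txt" is a single flat key; no folder "foo/" is actually created.
s3.put_object(Bucket="example-bucket", Key="foo/bar.txt", Body=b"bar")

# Prefix and Delimiter are what make listings look hierarchical:
# S3 just filters the flat keys by string prefix.
resp = s3.list_objects_v2(Bucket="example-bucket", Prefix="foo/", Delimiter="/")
for obj in resp.get("Contents", []):
    print(obj["Key"])  # -> foo/bar.txt
```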
Also, if you use similar bucket names, the data tends to be stored in the same data center and then copied from there, which tends to slow down the transfer speed.
Prepend a hash value to the bucket name
By prepending a hash value of a few characters to the bucket name, you can prevent the data from all being written to the same data center. So instead of names like:
- test01
- test02
- test03
use names like:
- abctest01
- yjctest02
- ckttest03
That's the idea.
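As a rough illustration (my own sketch, not something from AWS), one way to generate such a prefix is to hash the base name and take the first few characters. The base names and prefix length below are placeholders.

```python
import hashlib

def prefixed_bucket_name(base_name: str, length: int = 3) -> str:
    """Prepend the first `length` hex characters of an MD5 hash of the
    base name, so similar base names get dissimilar prefixes."""
    prefix = hashlib.md5(base_name.encode("utf-8")).hexdigest()[:length]
    return f"{prefix}{base_name}"

for name in ("test01", "test02", "test03"):
    print(prefixed_bucket_name(name))
    # e.g. "a1btest01" (the actual prefix depends on the hash)
```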
However, not just any bucket name is acceptable;
there are naming rules, so be careful.
- Bucket names must be between 3 and 63 characters long
- Specify a bucket name as a single label or a series of labels, with adjacent labels separated by a single period. Bucket names can contain lowercase letters, numbers, and hyphens (-). Each label must start and end with a lowercase letter or number
- Bucket names cannot be in the format of an IP address (for example, 192.168.5.4)
- When using virtual hosted-style buckets with SSL, the SSL wildcard certificate only matches buckets that do not contain periods. To work around this, use HTTP or write your own certificate verification logic. We recommend that you do not use periods (".") in bucket names
Source: Bucket Constraints and Limitations - Amazon Simple Storage Service
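As a quick sanity check, here is a small sketch (my own, and it does not cover every AWS rule) that validates a candidate name against the rules listed above: 3-63 characters, labels separated by single periods, lowercase letters / numbers / hyphens with each label starting and ending with a letter or number, and no IP-address-style names.

```python
import re

LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")
IP_LIKE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def is_valid_bucket_name(name: str) -> bool:
    if not 3 <= len(name) <= 63:
        return False
    if IP_LIKE.match(name):
        return False
    return all(LABEL.match(label) for label in name.split("."))

print(is_valid_bucket_name("ckttest03"))    # True
print(is_valid_bucket_name("192.168.5.4"))  # False (IP address format)
print(is_valid_bucket_name("Test_Bucket"))  # False (uppercase / underscore)
```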
This way, the data gets spread across different data centers within the same region,
which should make transfers faster than writing everything to the same data center.
I don't remember exactly how many characters the prefix had to be... I think it was at least three.
Well, have a good S3 life!