I asked people at AWS about tricks to speed up AWS S3 transfer speeds.
My name is Ito and I am an infrastructure engineer.
Amazon S3 is a highly scalable storage service with 99.99% availability.
(By the way, its durability is 99.999999999%, the famous eleven nines.)
Some of you may be using S3 for static sites,
others for storage.
What I am concerned about is the transfer speed.
This time I would like to introduce a "trick to increase transfer speed" that someone at AWS told me!
S3 is not a directory-file structure.
Before we get into the trick for speeding up transfers, a bit of background.
When a file is uploaded, S3 copies it to three data centers in the same region at the same time.
S3 has buckets and objects, and although you may think "folder = bucket" and "file = object", that is not actually the case.
Let me quote:
The basic technology of Amazon S3 is a simple KVS (key-value data store). For example, suppose we have a folder structure (as we recognize it) like the one shown below. (In this entry, we will simply assume that bar.txt contains the characters "bar" and baz.txt contains the characters "baz".)
(Root)
└ foo/
└ bar.txt
However, that is only how we recognize it; S3 itself simply stores the information below. The "/" character basically has no special meaning in S3.
| Key (full path name) | Value (file content) |
| --- | --- |
| foo/bar.txt | bar |

Reference site: Destroying the illusion of "folders" in Amazon S3 and revealing their reality | Developers.IO
Although Amazon S3 supports buckets and objects, Amazon S3 does not have hierarchies. However, object key name prefixes and delimiters allow you to imply hierarchy and introduce the concept of folders in the Amazon S3 console and AWS SDKs.
Reference site: Object keys and metadata - Amazon Simple Storage Service
Buckets and objects are made to look like folders and files, but the hierarchy exists only as a concept.
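To make this concrete, here is a minimal boto3 sketch (the bucket name "example-bucket" is a placeholder I made up) showing that a key like foo/bar.txt is just a flat string, and that the "folder" view is simulated with Prefix and Delimiter:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # hypothetical bucket name for illustration

# "foo/bar.txt" is just a key string; no folder named "foo" is created.
s3.put_object(Bucket=BUCKET, Key="foo/bar.txt", Body=b"bar")

# The console's "folder" view is simulated with Prefix and Delimiter:
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="foo/", Delimiter="/")
for obj in resp.get("Contents", []):
    print(obj["Key"])  # -> foo/bar.txt
```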
Since the data is stored as key-value pairs, retrieving an object is a simple lookup by its full key.
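For example, fetching a file is a single call with the complete key, not a walk down a directory tree (again assuming the hypothetical "example-bucket"):

```python
import boto3

s3 = boto3.client("s3")

# Retrieval is one lookup by the complete key string:
obj = s3.get_object(Bucket="example-bucket", Key="foo/bar.txt")
print(obj["Body"].read())  # -> b"bar"
```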
Also, if you use similar bucket names, the data will be stored in the same data center before being replicated, which tends to slow down transfer speed.
Add a hash value to the beginning of the bucket name
Adding a hash value to the beginning of the bucket name helps prevent the data from being written to the same data center.
Instead of:

- test01
- test02
- test03

use something like:

- abctest01
- yjctest02
- ckttest03

That's what I was told. (A sketch of generating such names follows below.)
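Here is a minimal sketch of what generating such names might look like. The helper hashed_bucket_name is hypothetical, and the three-character prefix length is only a guess based on the note at the end of this article:

```python
import hashlib

def hashed_bucket_name(base: str, length: int = 3) -> str:
    # Hypothetical helper: prepend a short hash fragment so similar
    # base names no longer share a common leading prefix.
    digest = hashlib.md5(base.encode("utf-8")).hexdigest()
    return digest[:length] + base

for name in ["test01", "test02", "test03"]:
    print(hashed_bucket_name(name))
# Each base name gets a different leading fragment, like the
# abctest01 / yjctest02 / ckttest03 examples above.
```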
However, this does not mean you can use just any bucket name; there are naming rules, so be careful (a rough validation sketch follows the list):
- Bucket names must be between 3 and 63 characters.
- Specify the bucket name as a label or a series of labels. Separate adjacent labels with a single period. Bucket names can contain lowercase letters, numbers, and hyphens (-). Each label must begin and end with a lowercase letter or number.
- Bucket names cannot be in the form of IP addresses (for example, 192.168.5.4).
- When using virtual hosting style buckets with SSL, SSL wildcard certificates only match buckets that do not contain periods. To work around this issue, use HTTP or write your own certificate validation logic. We recommend that you avoid using periods (".") in bucket names.
Source: Bucket Constraints and Limitations - Amazon Simple Storage Service
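As a rough illustration, here is a sketch that checks a candidate name against the rules quoted above. It is deliberately simplified and not an exhaustive implementation of the official rules:

```python
import re

# One "label": starts and ends with a lowercase letter or digit,
# may contain hyphens in between.
LABEL = r"[a-z0-9]([a-z0-9-]*[a-z0-9])?"
BUCKET_RE = re.compile(rf"{LABEL}(\.{LABEL})*")

def is_valid_bucket_name(name: str) -> bool:
    if not 3 <= len(name) <= 63:                       # 3-63 characters
        return False
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):   # IP-address form
        return False
    return BUCKET_RE.fullmatch(name) is not None

print(is_valid_bucket_name("abctest01"))    # True
print(is_valid_bucket_name("192.168.5.4"))  # False
print(is_valid_bucket_name("-badstart"))    # False
```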
This allows various data to be written to different data centers within the same region,
which can be expected to be faster than writing within the same data center.
I wonder how many characters the hash at the beginning needs to be... I think it was three or more.
Have a nice S3 life!