Edge Storage
Preview
Azion Edge Storage is a scalable and secure storage service designed to integrate object storage with the Azion Edge Platform using the S3 standard for object operations.
Edge Storage allows you to create buckets, which can be used as origins for edge applications or as directories for real-time object upload. Alongside bucket creation, you have complete control over storage allocation and over bucket and object access management, as well as the ability to upload, modify, and delete objects.
Implementation
Scope | Resource |
---|---|
Manage a bucket | How to create and modify an Edge Storage bucket |
Upload and download objects | How to upload and download objects from an Edge Storage bucket |
Use bucket as origin | How to use an Edge Storage bucket as the origin of a static edge application |
Set up the S3 protocol | How to access an Edge Storage bucket using the S3 protocol |
Runtime API | Edge Storage API |
Buckets
Buckets are the system used to organize stored objects. Similar to folders, buckets are the top-level containers to store objects. Buckets can be created using the Azion API.
All buckets created with Azion Edge Storage are stored in the us-east cloud region.
Bucket names are unique across all Azion accounts. Names must be between 6 and 63 characters and must not start with azion. Alphanumeric characters and hyphens (-) are accepted.
Best practices for naming buckets include specifying what types of objects are stored and the type of permissions for the objects. For example, a bucket for an edge application Banking App in read-only mode could be named banking-app-ro.
Objects
Objects, or files, can be uploaded, modified, downloaded, and removed from buckets using the Azion API, Azion Runtime, and the S3 protocol.
Object key
An object key is a string of characters that composes a unique identifier for objects stored in Edge Storage buckets. Through the available tools, users can retrieve a file stored in a bucket by using its object key.
The object key isn’t required to match the original file path or name from the local storage from which it was retrieved, nor to contain the original file extension. However, when uploading a local file to a bucket, it’s recommended to name the object key after the file to match local storage conventions. For example, for the local file folder/file.png, the object key should be the same.
The object key cannot be changed. Uploading a different object or modifying object contents using an existing key replaces the object. Once an object is replaced, earlier versions can’t be retrieved.
Prefix
A prefix is a combination of paths that simulates a folder hierarchy. Since buckets can’t be organized into folders, you can use the forward slash (/) when creating keys to categorize objects in your bucket into a prefix.
For instance, the list of keys below represents the simulated hierarchy of an application stored in a bucket with prefixes:
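```
README.md
src/index.js
src/index.html
src/assets/styles.css
src/assets/images/image.png
```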
The object README.md is located at the root of the bucket. The src prefix corresponds to a folder and contains the objects index.js and index.html. Additionally, the src/assets prefix contains a styles.css object and the src/assets/images prefix, in turn, contains the image.png object.
When creating an edge storage origin, you can set a prefix to serve to the edge. For instance, using the example above, you can create an origin that only serves the image.png object by setting the prefix to src/assets/images.
Origin
With Edge Storage, you can use buckets as an origin in Azion Edge Application to retrieve the content of an edge application.
You can determine if the content is retrieved from the root of the bucket or from a prefix within the bucket.
Operations
An operation refers to any data exchange between a client and Edge Storage. All actions related to buckets and objects, such as create, delete, list, and update, are considered operations. Each time one of these actions is performed through the API or the S3 protocol, an operation is registered.
The current release of Edge Storage offers the operations listed below, which vary depending on whether the Azion API or the S3 protocol is used.
Azion API operations
Class | Operation name | HTTP method |
---|---|---|
A | ListObjects | GET |
A | CreateBucket | POST |
A | ListBuckets | GET |
A | UpdateBucket | PATCH |
B | GetObject | GET |
C | PostObject | POST |
C | PutObject | PUT |
C | DeleteObject | DELETE |
C | DeleteBucket | DELETE |
ListObjects
Retrieves a list of objects loaded into a bucket.
This operation returns details of all objects in the bucket, including the size in bytes and the timestamp of the last modification.
CreateBucket
Creates a new bucket for an account.
ListBuckets
Retrieves a list of buckets associated with an account.
UpdateBucket
Modifies bucket information.
Use this operation to change the access permissions to the objects in the bucket. Buckets cannot be renamed with this operation.
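As a sketch, assuming the bucket endpoint used for creation accepts the bucket name appended to its path, an UpdateBucket request that changes the bucket’s permission could look like the following; the path and the edge_access value are assumptions to verify against the Azion API reference.

```python
import requests

# Assumed endpoint pattern: /v4/storage/buckets/{bucket_name} (verify in the API reference).
API_URL = "https://api.azion.com/v4/storage/buckets/banking-app-ro"
TOKEN = "<your-azion-personal-token>"  # placeholder

# Switch the bucket from read-only to read-write; the bucket name itself cannot be changed.
response = requests.patch(
    API_URL,
    json={"edge_access": "read_write"},  # assumed value
    headers={"Authorization": f"Token {TOKEN}", "Accept": "application/json"},
)
print(response.status_code, response.json())
```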
GetObject
Retrieves an object from a bucket.
PostObject
Uploads an object to a bucket. Objects are limited to a maximum size of 20 MB.
For the Azion API, you can specify the MIME type of the object being sent in the body using the Content-Type header. For example, objects with the .txt extension should contain the Content-Type: text/plain header. If the MIME type isn’t specified, Edge Storage will attempt to interpret the file type based on the file extension. Alternatively, use the application/octet-stream MIME type to indicate that the data is a binary stream and the server should handle it as raw binary data.
Sending a new object with an object key already in the bucket will replace the previous object.
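For illustration, a PostObject upload with an explicit MIME type could be sketched as follows; the object endpoint path is an assumption based on the operations listed here, so confirm it against the Azion API reference.

```python
import requests

# Assumed endpoint pattern: /v4/storage/buckets/{bucket_name}/objects/{object_key}
API_URL = "https://api.azion.com/v4/storage/buckets/banking-app-ro/objects/folder/file.txt"
TOKEN = "<your-azion-personal-token>"  # placeholder

with open("folder/file.txt", "rb") as f:  # objects are limited to 20 MB
    response = requests.post(
        API_URL,
        data=f.read(),
        headers={
            "Authorization": f"Token {TOKEN}",
            "Content-Type": "text/plain",  # or application/octet-stream for raw binary data
        },
    )
print(response.status_code, response.json())
```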
PutObject
Uploads an object to a bucket.
Sending a new object with an object key that already exists in the bucket will replace the previous object.
DeleteObject
Removes an object from a bucket.
When you delete an object being served on the edge, it’ll immediately stop being served and will no longer be listed in the bucket.
DeleteBucket
Removes a bucket from an account.
Buckets that contain objects cannot be deleted. After removing the final object from a bucket, there is a 24-hour period before the bucket can be deleted.
S3 operations
If listBuckets is enabled, attempting to retrieve files that aren’t in the bucket using an S3 credential returns the proper 404 Not Found status response instead of a 403 Forbidden status. Find out more about S3 capabilities in S3 protocol compatibility.
ListBuckets
Retrieves a list of buckets associated with an account.
S3cmd command: s3cmd ls
HeadBucket
Checks bucket existence and permissions, returning 200 OK if it does exist or 404 Not Found if it doesn’t.
S3cmd command: s3cmd info s3://BUCKET
ListMultipartUploads
Lists in-progress multipart uploads in a bucket. An in-progress multipart upload is one for which a Create Multipart Upload request has been initiated but that hasn’t yet been completed or aborted.
S3cmd command: s3cmd multipart s3://BUCKET
ListObjects
Returns a list of up to 1,000 objects in the bucket, sorted alphabetically by key. You can use the query parameters to filter the search.
S3cmd command: s3cmd ls s3://BUCKET
For more than 1,000 results, it’s recommended to use ListObjectsV2.
ListObjectsV2
Returns a list of up to 1,000 objects in the bucket, sorted alphabetically by key. You can use the query parameters to filter the search.
This limit is the default setup. However, if the search results exceed the maximum result set size, the first set is returned in the initial response, the <IsTruncated> response element contains the value true, and the <NextContinuationToken> element contains a token to retrieve the next set of results.
Use this token as the continuation-token query parameter in another request to retrieve the next set of results.
S3cmd command: s3cmd ls s3://BUCKET
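Because these operations follow the S3 standard, the pagination flow can be sketched with an S3-compatible client such as boto3. The bucket name and credentials below are placeholders; the endpoint and region come from the S3 protocol compatibility section of this document.

```python
import boto3

# S3-compatible client pointed at the Edge Storage endpoint (see S3 protocol compatibility).
s3 = boto3.client(
    "s3",
    aws_access_key_id="<access-key>",      # placeholder
    aws_secret_access_key="<secret-key>",  # placeholder
    endpoint_url="https://s3.us-east-005.azionstorage.net",
    region_name="us-east-005",
)

keys = []
kwargs = {"Bucket": "banking-app-ro"}
while True:
    page = s3.list_objects_v2(**kwargs)
    keys += [obj["Key"] for obj in page.get("Contents", [])]
    if not page.get("IsTruncated"):
        break
    # More than 1,000 results: follow the continuation token to the next page.
    kwargs["ContinuationToken"] = page["NextContinuationToken"]

print(len(keys), "objects found")
```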
CopyObject
Creates a copy of an object that is already stored.
S3cmd command: s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2/OBJECT2
GetObject
Retrieves an object from a bucket.
S3cmd command: s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
HeadObject
Retrieves metadata from an object without returning the object itself.
S3cmd command: s3cmd info s3://BUCKET/OBJECT
DeleteObject
Removes an object from a bucket entirely.
S3cmd command: s3cmd del s3://BUCKET/OBJECT
DeleteObjects
Deletes multiple objects from a bucket in a single request. In the XML body, provide the object keys and, optionally, version IDs if you want to delete a specific object version.
S3cmd command: s3cmd del s3://BUCKET/PREFIX --recursive
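As a minimal sketch with an S3-compatible client such as boto3, several placeholder keys can be removed in a single DeleteObjects request:

```python
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="<access-key>", aws_secret_access_key="<secret-key>",  # placeholders
    endpoint_url="https://s3.us-east-005.azionstorage.net", region_name="us-east-005",
)

response = s3.delete_objects(
    Bucket="banking-app-ro",  # placeholder bucket
    Delete={
        "Objects": [
            {"Key": "src/assets/styles.css"},
            {"Key": "src/assets/images/image.png"},
        ],
        "Quiet": True,  # omit per-key confirmations from the response
    },
)
print(response.get("Errors", []))
```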
AbortMultipartUpload
Aborts a multipart upload.
S3cmd command: s3cmd abortmp s3://BUCKET/OBJECT Id
CompleteMultipartUpload
Completes a multipart upload by assembling previously uploaded parts.
This operation is part of the S3 multipart upload process and is handled automatically during the put process once all parts of a large file have been successfully uploaded.
CreateMultipartUpload
Initiates a multipart upload and returns an uploadId. This uploadId is used to associate all of the parts in the specific multipart upload.
This operation is part of the S3 multipart upload process and is handled automatically when you use the put command with a large file. There’s no need to explicitly call this operation.
ListParts
Lists parts that have been uploaded for a given multipart upload. The number of parts returned per request is limited by default. However, if the results exceed the maximum result set size, the first set is returned in the initial response, the <IsTruncated> response element contains the value true, and the <NextPartNumberMarker> element contains a token to retrieve the next set of results. Use this token as the part-number-marker query parameter in another request to retrieve the next set of results.
S3cmd command: s3cmd listmp s3://BUCKET/OBJECT Id
PutObject
Uploads an object to a bucket.
S3cmd command: s3cmd put FILE s3://BUCKET/OBJECT
UploadPart
Uploads a part of the file in a multipart upload. s3cmd automatically splits large files into parts and uploads them, so you don’t need to manage the parts or call this operation manually.
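s3cmd drives this flow automatically, but a sketch of the underlying calls with an S3-compatible client such as boto3 makes the sequence explicit; the bucket, key, file name, and part size below are placeholders.

```python
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="<access-key>", aws_secret_access_key="<secret-key>",  # placeholders
    endpoint_url="https://s3.us-east-005.azionstorage.net", region_name="us-east-005",
)

bucket, key = "banking-app-ro", "backups/large-file.bin"
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]  # CreateMultipartUpload

try:
    parts = []
    part_number = 1
    with open("large-file.bin", "rb") as f:
        while chunk := f.read(8 * 1024 * 1024):  # 8 MB parts
            part = s3.upload_part(                                          # UploadPart
                Bucket=bucket, Key=key, UploadId=upload_id,
                PartNumber=part_number, Body=chunk,
            )
            parts.append({"PartNumber": part_number, "ETag": part["ETag"]})
            part_number += 1

    s3.complete_multipart_upload(                                           # CompleteMultipartUpload
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": parts},
    )
except Exception:
    s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)   # AbortMultipartUpload
    raise
```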
Authentication and permissions
You can manage the following permissions for Edge Storage:
- Azion Teams Permissions: operations involving buckets, such as uploading, listing, and deleting objects, always require authentication through the Azion account. Go to Teams Permissions to know more about the available Edge Storage account permissions.
- Bucket permissions: manage access from the edge and other users to buckets and objects within buckets; related to the edge_access attribute.
- S3 credentials: manage access for Azion account users through capabilities and assign operation permissions exclusive to S3 protocol access.
Bucket permissions
In addition to the required authentication and necessary permissions, some API operations can be restricted by bucket permissions. The permissions available are:
- Read-only: objects in the bucket can be read but not modified by the Azion Edge Platform.
- Read-write: objects in the bucket can be modified by the Azion Edge Platform.
- Restricted: objects in the bucket can be modified and read but can’t be accessed by the Azion Edge Platform. Restricted buckets can’t be modified using Azion Runtime and can’t be used as an origin for edge applications.
These permissions are related to the way the edge can access the bucket. For instance:
- If the bucket is set to read-only, the Azion Edge Platform can retrieve objects from the bucket but cannot upload or modify them. However, authorized users can continue writing to Edge Storage through the API or the S3 protocol.
- If the bucket is set to read-write, the Azion Edge Platform and other users can both retrieve and modify objects within the bucket.
- If the bucket is set to restricted, the Azion Edge Platform cannot access the bucket’s content. In this case, only authorized users can continue writing to Edge Storage through the API or the S3 protocol.
For example, when an external user attempts to send a POST or PUT request to an edge application using a bucket configured with read-only or restricted permissions as its origin, the edge will deny access and return an error message.
S3 credentials
Edge Storage offers compatibility with the S3 protocol through credentials.
Credentials can be created for any bucket that you own or for your account as a whole to manage all your buckets. With them, you can control permissions for operations associated with that credential. The permissions for the credential are exclusive to access through the S3 protocol.
To create an S3 credential, you must use an Azion personal token and run a POST request via the API. However, after the credential is created, it works independently from your Azion token. This way, even if the token expires, the credential remains valid.
Once a credential is created, an access key and a secret key are generated, which can be used to set up access to the bucket through the S3 protocol. For security reasons, the secret key won’t be available after the credential is created. Existing credentials can’t be modified in any way.
Once a user’s access is verified, they’re allowed to make operations depending on the capabilities and permissions set for the credential.
Capabilities
You can assign the following capabilities to S3 credentials:
- listFiles: equivalent to ListObjects, returns a list of objects within the bucket.
- readFiles: equivalent to GetObject, returns an object from the bucket through the object key.
- writeFiles: equivalent to PutObject, allows modifying files in the bucket through the object key.
- deleteFiles: equivalent to DeleteObject, allows object deletion through the object key.
- listAllBucketNames: equivalent to ListBuckets, allows you to list all buckets associated with the account.
- listBuckets: if enabled, returns the proper 404 Not Found response when attempting to retrieve files that aren’t in the bucket using the credential.
S3 protocol compatibility
After an S3 credential is created for a bucket, you can use the S3 protocol (s3://) to execute operations according to the list of capabilities.
The S3 protocol allows you to access buckets and objects using an Edge Storage URL. This configuration facilitates file operations through command line interface (CLI) tools, such as s3cmd, database services, or functions.
You can use the access and secret keys provided by the S3 credentials API to set up a connection using the S3 protocol.
Learn more in How to access Edge Storage using the S3 protocol. To do so, you’ll need the following information:
Data | Description |
---|---|
Access key | The credential’s access key generated upon creating the S3 credential with the Azion API |
Secret key | The credential’s secret key generated upon creating the S3 credential with the Azion API. This information is confidential and will only be available at the moment of creation |
Region | The assigned server’s region, which is us-east-005 |
S3 endpoint | The default S3 address for all operations, which is s3.us-east-005.azionstorage.net |
DNS-style template | The host name template to access the bucket and objects. Can be bucket+hostname:port/object-key or hostname:port/bucket . For example, for a file.txt object in the my-bucket bucket, the host names could be: my-bucket.s3.us-east-005.azionstorage.net/file.txt s3.us-east-005.azionstorage.net/my-bucket/file.txt |
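Putting this data together, a connection with an S3-compatible Python client such as boto3 can be sketched as below; the access and secret keys are placeholders from an existing S3 credential.

```python
import boto3

# Client configured with the region and S3 endpoint from the table above.
s3 = boto3.client(
    "s3",
    aws_access_key_id="<access-key>",      # from the S3 credential
    aws_secret_access_key="<secret-key>",  # only shown at credential creation time
    endpoint_url="https://s3.us-east-005.azionstorage.net",
    region_name="us-east-005",
)

# Requires the listAllBucketNames capability; equivalent to `s3cmd ls`.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```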
Limits
These are the default limits:
Scope | Limit |
---|---|
Buckets | 100 per account |
Region | us-east |
S3 credential access keys | 100,000 per account |