How to install the Send Event to Endpoint integration through Azion Marketplace

Azion Send Event to Endpoint is a serverless integration available at Azion Marketplace. This integration enables you to stream request data to an HTTP endpoint: it captures the request data and transmits it to a user-defined endpoint via the JavaScript Fetch API.

The integration also permits you to specify what kind of data you wish to capture by editing a JSON file.

After sending the collected data, the integration allows the request to proceed through the Rules Engine.
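The snippet below is a minimal, illustrative sketch of that pattern, not the integration’s actual source code: it assumes a hypothetical helper named streamEvent and an args object that mirrors the JSON Args documented later in this guide. It only shows the general flow of serializing request data, posting it with fetch, and then letting the request proceed.

// Minimal sketch of the streaming pattern described above, NOT the
// integration's real source code. `streamEvent` and `args` are hypothetical
// names; `args` mirrors the JSON Args structure documented below.
async function streamEvent(request, args) {
  // Serialize a few fields from the incoming request (simplified).
  const payload = {
    request_url: request.url,
    headers: Object.fromEntries(request.headers),
  };

  // POST the payload to the user-defined endpoint with the Fetch API,
  // adding the user-defined headers plus a JSON content type.
  await fetch(args.http_connection_args.endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...args.http_connection_args.headers,
    },
    body: JSON.stringify(payload),
  });

  // After the data is sent, the original request is allowed to proceed
  // through the Rules Engine as described above.
}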


To use the Send Event to Endpoint integration, you have to:

  1. Access Azion Console > Marketplace.
  2. On the Marketplace homepage, select the Send Event to Endpoint card.
  3. Once the integration’s page opens, click the Get It Now button at the bottom-right corner of the page.

You’ll see a message indicating that your integration was successfully installed.


To set up the Send Event to Endpoint integration, you first need an edge firewall to run it. Follow these steps:

  1. Under the Products menu, select Edge Firewall in the SECURE section.
  2. Click the Add Rule Set button.
  3. Give your edge firewall an easy-to-remember name.
  4. Select the domains you want to protect with the function.
  5. Turn the Edge Functions switch on.
  6. Click the Save button.

Done. Now you have an edge firewall configured to run your function.

To instantiate the Send Event to Endpoint integration, while still on the Edge Firewall page, select the Functions tab and follow these steps:

  1. Click the Add Function button.
  2. Give your instance an easy-to-remember name.
  3. On the dropdown menu, select the Send Event to Endpoint function.

After you select the integration, a Code form containing the integration’s source code will load. The code is available for study and can’t be modified. In the same form, you’ll find another tab: the Args tab. On the Args tab, you’ll pass the parameters that configure your integration.

The JSON Args form for this integration will look like this:

{
  "metadata": ["remote_addr"],
  "headers": ["x-hello"],
  "body": ["message", "user.id"],
  "http_connection_args": {
    "endpoint": "http://example_api:3000/test",
    "headers": {
      "Authorization": "FakeAuth",
      "X-Provider": "Azion Cells"
    }
  }
}

Where:

| Field | Required | Data type | Notes |
|---|---|---|---|
| metadata | No | Null or Array | Defines which metadata fields will be streamed. When null (or not set), all metadata fields will be streamed. If you don’t want to stream any metadata, set an empty array [ ] as the value of this field. |
| headers | No | Null or Array | Defines which request headers will be streamed. When null (or not set), all request headers will be streamed. If you don’t want to stream any header, set an empty array [ ] as the value of this field. |
| body | No | Null or Array | Defines which request body fields will be streamed. When null (or not set), all request body fields will be streamed. If you don’t want to stream any body field, set an empty array [ ] as the value of this field. To filter multi-level fields, use dot notation. For example, the string ‘user.name’ makes the function look for the field ‘name’ within the object ‘user’ in the request body. |
| http_connection_args | Yes, unless s3_connection_args is supplied | Object | Defines the data that will be used to stream the request data to the HTTP endpoint. The endpoint defines the URL to which the data will be posted. The headers specify which headers will be included in the fetch request. An additional ‘Content-Type: application/json’ header is always added. |
| s3_connection_args | No | Object | Defines the arguments used to connect to the S3 bucket. |
| s3_connection_args.full_host | Only when using s3_connection_args | String | Defines the full host of the S3 bucket. |
| s3_connection_args.region | Only when using s3_connection_args | String | Defines the region of the S3 bucket. |
| s3_connection_args.access_key | Only when using s3_connection_args | String | Defines the access key used in the connection to the S3 bucket. |
| s3_connection_args.secret_key | Only when using s3_connection_args | String | Defines the secret key used in the connection to the S3 bucket. |
| s3_connection_args.file_path | No | String | Defines the path where the file created by the function must be stored. Default value: / |
| s3_connection_args.use_date_prefix | No | Boolean | When enabled, a subfolder with the current date (in the format YYYY-MM-DD) is added to the file path. Default value: true |
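For reference, a JSON Args file that streams events to an S3 bucket in addition to an HTTP endpoint, and that uses dot notation to select a nested body field, could look like the example below. Every value here is a placeholder for illustration only, not a working endpoint or credential:

{
  "metadata": ["remote_addr"],
  "headers": ["x-hello"],
  "body": ["message", "user.id"],
  "http_connection_args": {
    "endpoint": "http://example_api:3000/test",
    "headers": {
      "Authorization": "FakeAuth"
    }
  },
  "s3_connection_args": {
    "full_host": "mybucket.s3.amazonaws.com",
    "region": "us-east-1",
    "access_key": "EXAMPLE_ACCESS_KEY",
    "secret_key": "EXAMPLE_SECRET_KEY",
    "file_path": "/my-data/",
    "use_date_prefix": true
  }
}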

The data streamed by this integration is delivered as a JSON file that will look like this:

{
  "body": {
    "field_a": <data>,
    ...
  },
  "geoip_asn": <data>,
  "geoip_city": <data>,
  "geoip_city_continent_code": <data>,
  "geoip_city_country_code": <data>,
  "geoip_city_country_name": <data>,
  "geoip_continent_code": <data>,
  "geoip_country_code": <data>,
  "geoip_country_name": <data>,
  "geoip_region": <data>,
  "geoip_region_name": <data>,
  "headers": {
    "x-header-a": <data>,
    ...
  },
  "remote_addr": <data>,
  "remote_port": <data>,
  "remote_user": <data>,
  "request_id": <data>,
  "request_url": <data>,
  "server_protocol": <data>,
  "ssl_cipher": <data>,
  "ssl_protocol": <data>
}

Notice how the request_id, request_url, and metadata fields are delivered at the root of the JSON file, whereas the body fields and request headers are nested inside the body and headers objects.
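As an illustration, with the JSON Args shown earlier (metadata limited to remote_addr, headers limited to x-hello, and body limited to message and user.id), and assuming a request that actually carries that header and a message body field, the streamed payload could look roughly like the example below. The values are made up, and the exact set of root-level fields depends on which metadata is available for the request:

{
  "body": {
    "message": "hello"
  },
  "headers": {
    "x-hello": "world"
  },
  "remote_addr": "203.0.113.10",
  "request_id": "abcd-1234",
  "request_url": "https://example.com/login"
}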

Important: you can also use a “catch-all” JSON Args file, like this:

{
  "http_connection_args": {
    "endpoint": "http://example_api:3000/test"
  }
}

For each new function run, a new file will be produced in the given S3 bucket. The file will be named after the request ID that initiated the function.

Example: if the s3_connection_args.file_path parameter is set to /my-data/ and the function runs on May 9th, 2023, with the request ID abcd-1234, the resulting file will be saved at /my-data/2023-05-09/abcd-1234.json. If the s3_connection_args.use_date_prefix parameter is set to false, the resulting file will be saved as /my-data/abcd-1234.json.

If neither http_connection_args nor s3_connection_args is supplied in the JSON Args, the function has no valid connection arguments to use. In that case, the request is terminated and a JSON error message is returned to indicate the cause of the issue:

{
  "error": "A001",
  "detail": "The function instance is missing or has invalid required arguments."
}

If the function is unable to connect to the HTTP endpoint or the S3 provider, the user request won’t be blocked. However, an error log will be created, which the client can access via Data Stream.

For example, if an invalid access key is used, the following error message will be logged:

[Send event to endpoint] S3 connection error;
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
  <Code>InvalidAccessKeyId</Code>
  <Message>The key 'DAKEY' is not valid</Message>
</Error>

Finally, if you supply valid connection arguments for both the HTTP endpoint and the S3 bucket, the function will deliver event data to both destinations at the same time.

To finish, you have to set up the Rules Engine to configure the behavior and the criteria to run the integration.

Still on the Edge Firewall page, select the Rules Engine tab and follow these steps:

  1. Click the New Rule button.
  2. Give the rule an easy-to-remember name.
  3. Select a criteria to match the domain you want to run the integration on. For example: if Hostname is equal to xxxxxxxxxxxx.map.azionedge.net.
  4. Below, select a behavior for the criteria. In this case, it’ll be Run Function. Then, select the appropriate Send Event to Endpoint function, according to the name you gave it in the instantiation step.
  5. Click the Save button.

Done. Now the Send Event to Endpoint integration is running for every request made to the domain you indicated.

