Automate the building and deployment of static frontend applications using an Argo Workflows pipeline and AWS CloudFront.
Don’t run your frontend applications on servers or containers. Move your project to a content delivery network (CDN) and gain the following benefits:
- Increased application responsiveness
- Reduced operational overhead
- Improved security
Back Story
Frontend projects were split out of our centralized UI project to increase agility and simplify deployments. Originally, all of our “micro” frontend applications used Vite both to build and to serve the application. This was a poor setup: the site was generated on the fly, none of the files were chunked, so it was not performant, and on top of that we needed the entire Node runtime and its packages.
Our next intermediate step was to serve the files statically with Nginx. We still used Vite, but built a static application and then copied it into Nginx. This was a big improvement: a minimal base image, a smaller image size, and reduced memory/CPU usage.
Today we serve the applications entirely from CloudFront. Response times improve greatly because AWS caches the files at its edge locations. Operational overhead drops since we no longer run individual pods for each frontend. It is also more secure: we don’t have to worry about an exploit in the runtime granting internal access to our clusters.
Code
The code is written in Golang and can be found here on GitHub.
The application can be broken down into four steps:
- Pull the modules and build the application
- Generate environment-specific configuration
- Update the files in S3
- Invalidate old CloudFront files
The program assumes that the source frontend code is already checked out in a shared directory. Environment-specific information (bucket, CloudFront distribution & domain) is read from the envMapping.yaml file.
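As a rough illustration, the mapping file might look like the following. The field names and values here are hypothetical, not the tool's actual schema:

```yaml
# Hypothetical envMapping.yaml sketch; keys and values are illustrative.
main:
  bucket: example-spa-frontend
  cloudfrontDistribution: ABCDEFGHIJK
  domain: svc.example.com
dev:
  bucket: example-spa-frontend-dev
  cloudfrontDistribution: LMNOPQRSTUV
  domain: svc.dev.example.com
```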
The production (non-dev) modules are pulled using npm. Then Vite, which should be installed from those packages, builds the frontend application. Another frontend build tool can be swapped in by changing this function to run a different command.
The configuration file (config.yaml) is read and used to inject custom environment-specific information into a standardized path in the destination web code. Additional context, such as the release stage and the API/app/logging URLs, is also contained in the file.
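Based on the fields mentioned above, such a file might look like this. The field names are a guess for illustration, not the actual schema:

```yaml
# Hypothetical config.yaml sketch; field names are illustrative.
releaseStage: main
apiUrl: https://api.svc.example.com
appUrl: https://project-1.svc.example.com
loggingUrl: https://logs.svc.example.com
```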
The tag expire = true is added to all old frontend objects. The path is determined from the release stage and project name, for example s3://bucket-name/&lt;releaseStage&gt;/&lt;project&gt;/*
Then the new objects are uploaded, overwriting the old files with the same names. There is a little work I could do here to compare file checksums so we don’t re-upload unchanged files, but that might break the expiration tagging.
Finally, frontend resources that should change between revisions are invalidated in CloudFront.
Running Locally
To run the application locally, all you need is the code checked out and Go and Node installed.
go run cmd/main.go publish \
--project PROJECT_1 \
--stage=main \
--output /tmp/output \
--environment-config envMapping.yaml \
--config config.yaml \
--workspace /app
Terraform
Deploy the AWS resources with infrastructure as code. All files can be found here.
S3
There are only a few things to note:
- A lifecycle configuration expires objects tagged expire = true after 1 day
- Origin access control grants CloudFront access to the bucket
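The lifecycle rule above could be expressed in Terraform roughly as follows. This is a sketch under assumed resource names, not the repository's actual code:

```hcl
# Hypothetical sketch; resource names are illustrative.
resource "aws_s3_bucket_lifecycle_configuration" "frontend" {
  bucket = aws_s3_bucket.frontend.id

  rule {
    id     = "expire-tagged-objects"
    status = "Enabled"

    # Only objects carrying the expire = true tag are affected.
    filter {
      tag {
        key   = "expire"
        value = "true"
      }
    }

    expiration {
      days = 1
    }
  }
}
```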
CloudFront
The CDN is set up with a wildcard ACM certificate for *.svc.DOMAIN and forwards everything to the backend S3 bucket on the /main path. (The path is based on the release stage.)
There is a response headers policy implementing CORS to restrict which methods and domains may make requests.
Function
To support multiple frontend applications, a CloudFront Function is attached to the distribution that does the following:
- Splits off the first subdomain, which is the project name
- Determines whether the URI has a file extension
- If there is no extension, rewrites the URI to the project’s index.html file in S3
- If there is an extension, prefixes the URI with the project name
function handler(event) {
    let request = event.request;
    // The first subdomain is the project name, e.g. project-1.svc.example.com
    const subdomain = request.headers.host.value.split('.')[0];
    if (!/\..+/.test(request.uri)) {
        // No file extension: route to the project's SPA entry point
        request.uri = `/${subdomain}/index.html`;
    } else {
        // File extension present: prefix the URI with the project name
        request.uri = `/${subdomain}${request.uri}`;
    }
    return request;
}
The reason we need this extension-based routing is that our projects use React Router and do not have all routes converted into static files, so any extension-less path must fall back to the project’s index.html.
Inspiration for this solution was found on Christian Johansen’s blog
CI/CD Deployment
The final step is to run the build container in our CI system. In my case that is Argo Workflows, but this can be configured similarly in other setups.
Example workflow code can be found here.
The workflow template requires the code to be checked out and accessible in a shared volume.
It also requires four parameters:
- Build path: location of the code inside the volume
- Release stage: which environment to deploy to
- Project: name of the frontend project
- Source workflow: used to find the source code volume
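A hypothetical invocation of such a template might look like the following; the template and parameter names are illustrative, not the actual workflow code:

```yaml
# Hypothetical Workflow referencing the template; names are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: frontend-publish-
spec:
  workflowTemplateRef:
    name: frontend-publish
  arguments:
    parameters:
      - name: build-path
        value: /src/project-1
      - name: release-stage
        value: main
      - name: project
        value: project-1
      - name: source-workflow
        value: project-1-ci-abc12
```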
The workflow also has steps to send webhooks on a successful build or trigger a failure notification.
Service Account Permissions
The workflow runs under a service account that assumes an AWS role via OIDC.
It is granted access to the Slack webhook secrets as well as the following AWS permissions:
{
  "Statement": [
    {
      "Action": [
        "s3:PutObject",
        "s3:PutObjectTagging",
        "s3:TagResource"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::example-spa-frontend/*",
      "Sid": "UpdateObjects"
    },
    {
      "Action": "s3:ListBucket",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::example-spa-frontend",
      "Sid": "ListBucket"
    },
    {
      "Action": "cloudfront:CreateInvalidation",
      "Effect": "Allow",
      "Resource": "arn:aws:cloudfront::0123456789:distribution/ABCDEFGHIJK",
      "Sid": "InvalidateCloudFrontCache"
    }
  ],
  "Version": "2012-10-17"
}