
The true cost of deploying on EC2

2025-07-18

Naturally, when you are working on a project, be it python, go, or rust, you write code and make sure it works as expected. For deployments, packaging your project into a docker image and deploying it as a container guarantees that it works the same regardless of which machine it runs on.

Initial deployment

It would be understandable for people to think "I need a VM to deploy my stuff," since that mirrors their development workflow. It would go like this (a shell sketch follows the list):

  1. Spin up a VM
  2. Install docker
  3. Clone source repository
  4. Run docker build command
  5. Run docker run or docker compose up
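
Concretely, the steps above might look like the sketch below. This is only a sketch: the repo URL, image name, and port are hypothetical, and docker is installed via the convenience script from get.docker.com.

    # on a fresh VM: install docker via the convenience script
    curl -fsSL https://get.docker.com | sh

    # clone a hypothetical repo, build the image, and run it
    git clone https://github.com/example/my-app.git && cd my-app
    docker build -t my-app:latest .
    docker run -d --name my-app -p 8080:8080 my-app:latest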

For simplicity, let's assume there are no databases involved. At this point the app runs normally; however, you wouldn't be able to access your service, because you haven't set up ingress rules. To do that, you would need to fiddle with the vpc and the security groups attached to your ec2. If done correctly, you should be able to access your service at http://${EC2_PUBLIC_IP}:${SERVICE_PORT}.
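
For example, opening the service port on the instance's security group can be done with the aws cli; the group ID and port below are placeholders:

    # allow inbound tcp traffic to the service port from anywhere
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 8080 \
      --cidr 0.0.0.0/0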

Note that if you stop and later start your ec2 instance, it gets a new public IP address, which means you have to change the IP you use to access your service. If you want to map your service to a domain, this poses an issue because the IP keeps changing, leaving your A record pointing at a stale address. To fix this, you need to reserve a static IP (an elastic ip on aws) and assign it to your ec2.
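
Reserving and attaching an elastic ip is two cli calls; the instance and allocation IDs below are placeholders:

    # reserve a static IP, then attach it to the instance
    aws ec2 allocate-address --domain vpc
    aws ec2 associate-address \
      --instance-id i-0123456789abcdef0 \
      --allocation-id eipalloc-0123456789abcdef0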

SSL strikes back

Keen readers will notice that this is plain http, which is not ideal because traffic to and from the service is unencrypted. To solve this, you need to put a reverse proxy in front of your service. You can achieve this in two ways:

  1. Add apache2, nginx or caddy in front of your service. This can run as another container on the ec2, or as a systemd service (a caddy sketch follows this list).
  2. Create an AWS load balancer and use the ec2 as its backend. You also have to provision a certificate via AWS (or import one) and attach it to the load balancer.
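
As a sketch of option 1: a minimal Caddyfile (the domain and port are hypothetical) proxies traffic to the container, and caddy obtains and renews a Let's Encrypt certificate for the domain automatically:

    # Caddyfile: terminate TLS and forward traffic to the app container
    app.example.com {
        reverse_proxy localhost:8080
    }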

You probably don't need a dedicated load balancer for a small project, but at a certain scale it becomes helpful when you have a lot of incoming requests, because an LB can distribute load (assuming you point it at a fleet of ec2 instances serving the same service), in turn allowing you to scale out your service.

What about CI/CD

The manual setup would be: for every code change, you ssh into your ec2, do a git pull, then docker build, then docker compose down && docker compose up. If you think this is very tedious, that's because it is.
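
Spelled out, each release looks something like this; the host, path, and image name are placeholders:

    # from your machine, log in to the instance
    ssh ubuntu@ec2-host

    # on the instance: fetch changes, rebuild, and restart
    cd /opt/my-app && git pull
    docker build -t my-app:latest .
    docker compose down && docker compose up -d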

You have a few possible approaches to achieve ci/cd:

  1. Set up a cronjob to periodically pull your repo, rebuild, and update the running image -> has downtime
    • Alternatively, you can replace the cronjob with a ci/cd system that sshes into the ec2 and runs the same commands
  2. Use a ci/cd system to build and push a docker image to aws ecr, then set up a cronjob on the ec2 to periodically pull and update the running image (see the sketch after this list) -> you need to set up aws iam so your ec2 can pull from a private ecr repo, and this still involves downtime
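
The cronjob half of option 2 could look like the script below, assuming the image lives in a private ecr repo; the account ID, region, and paths are placeholders:

    #!/usr/bin/env bash
    # update.sh: pull the latest image from ecr and recreate the container
    aws ecr get-login-password --region ap-southeast-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.ap-southeast-1.amazonaws.com
    cd /opt/my-app
    docker compose pull && docker compose up -d

    # crontab entry to run it every 5 minutes:
    # */5 * * * * /opt/my-app/update.sh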

Both solutions still result in downtime; if that is not acceptable for your application, keep reading...

Containers as a service

At this point you can probably tell that deploying to ec2 is simple, until ci/cd gets involved. But since you are using containers, aws has ecs, where you can spin up a deployment from a supplied ecr image. Alternatively, you can also use eks, but if you only have a single service, eks is overkill.

For ci/cd, it's as simple as: build and push an image to ecr, then update your ecs task definition and roll the service. All of this can be done via cli commands, as long as you supply the correct aws iam credentials.
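
A sketch of that pipeline with the aws cli; the registry, cluster, and service names are placeholders, and --force-new-deployment simply re-pulls the :latest tag instead of registering a new task definition revision:

    # build and push the image to a private ecr repo
    docker build -t 123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/my-app:latest .
    aws ecr get-login-password --region ap-southeast-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.ap-southeast-1.amazonaws.com
    docker push 123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/my-app:latest

    # trigger a rolling deployment of the service
    aws ecs update-service --cluster my-cluster --service my-app --force-new-deployment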

You still need to create a load balancer, because an ecs deployment update works on a rolling basis: a new deployment spins up, and only once it passes health checks does the load balancer switch traffic to it, after which the previous revision is torn down. This translates to zero downtime.

Using EC2 but with a twist

The good thing about aws ecs is that you can use ec2 as the backend as well; in some cases this is cheaper than the default backend (fargate, which is serverless). Essentially you add container orchestration on top of ec2.
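
Wiring this up means registering an ec2 auto scaling group as an ecs capacity provider. A sketch, with the group ARN, names, and scaling settings as placeholders:

    # register an existing auto scaling group as an ecs capacity provider
    aws ecs create-capacity-provider \
      --name my-ec2-capacity \
      --auto-scaling-group-provider "autoScalingGroupArn=$ASG_ARN,managedScaling={status=ENABLED,targetCapacity=100}"

    # attach it to the cluster and make it the default
    aws ecs put-cluster-capacity-providers \
      --cluster my-cluster \
      --capacity-providers my-ec2-capacity \
      --default-capacity-provider-strategy capacityProvider=my-ec2-capacity,weight=1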


Conclusion

While deploying on ec2 is simple at a glance, it introduces a lot of issues, namely unavoidable downtime and a complex ci/cd setup. By using containers-as-a-service, ci/cd becomes easier to achieve, and rolling updates avoid service downtime during upgrades.