Logically, an ECS Cluster is a virtual unit that has computing resources attached to it and receives tasks to schedule. Technically, an ECS Cluster is nothing but a service running on regular EC2 instances.
The computing resources here are EC2 instances, which can be created either manually or via Auto Scaling Groups.
When you create an ECS Cluster from the AWS Console you can also specify instance requirements, and it will automatically create a Launch Configuration, an Auto Scaling Group, and a cloud-config record (user data) that links the group to the ECS Cluster.
When you do this with Terraform you have to create the Launch Configuration, Auto Scaling Group, and cloud-config yourself, as regular resources.
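To make this concrete, here is a minimal Terraform sketch of those three pieces plus the cluster itself. All names are hypothetical, the AMI ID is a placeholder, and `var.subnet_ids` / `var.ecs_instance_profile` are assumed to exist elsewhere in your configuration:

```hcl
resource "aws_ecs_cluster" "demo" {
  name = "demo-cluster"
}

resource "aws_launch_configuration" "ecs" {
  name_prefix   = "demo-ecs-"
  image_id      = "ami-0123456789abcdef0" # placeholder: an ECS-optimized AMI for your region
  instance_type = "t3.small"

  # Assumed: an instance profile whose role allows the ECS agent to register.
  iam_instance_profile = var.ecs_instance_profile

  # The "cloud config" part: user data that joins the instance to the cluster.
  user_data = <<-EOF
    #!/bin/bash
    echo "ECS_CLUSTER=${aws_ecs_cluster.demo.name}" >> /etc/ecs/ecs.config
  EOF

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "ecs" {
  name                 = "demo-ecs-asg"
  launch_configuration = aws_launch_configuration.ecs.name
  min_size             = 1
  max_size             = 3
  vpc_zone_identifier  = var.subnet_ids # assumed variable: subnets for the instances
}
```

Note that the link between the Auto Scaling Group and the cluster is nothing more than that one line of user data: instances run the ECS agent, which reads `/etc/ecs/ecs.config` and registers with the named cluster.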
At this point you have a generic ECS Cluster with computing resources attached, but you still need at least two more units in place: an ECS Task Definition and an ECS Service.
A Task Definition (similar to the template spec of a Kubernetes Deployment) defines container requirements: CPU, RAM, ports, and so on. This is what the scheduler needs in order to find the right place for a container on the EC2 instances. You can have multiple containers in the same Task Definition, which covers the sidecar use case.
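In Terraform a Task Definition might look like the sketch below: two containers in one task, with per-container CPU and memory reservations and a port mapping. All names and images are hypothetical:

```hcl
resource "aws_ecs_task_definition" "web" {
  family = "web"

  # Two containers in one task: the app plus a log-shipping sidecar.
  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "nginx:1.25"
      cpu       = 256
      memory    = 512
      essential = true
      # hostPort = 0 asks for a dynamic host port, which plays well
      # with load balancer target groups.
      portMappings = [{ containerPort = 80, hostPort = 0 }]
    },
    {
      name      = "log-router"
      image     = "fluent/fluent-bit:latest"
      cpu       = 64
      memory    = 128
      essential = false
    }
  ])
}
```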
The slightly more complicated part is the ECS Service (similar to a Kubernetes Deployment plus a Service). At minimum a Service defines which Task Definition to run and how many copies of it, but it can also be linked to container autoscaling and load balancing.
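A minimal Service sketch in Terraform, assuming a cluster and task definition named as in the examples on this page and a pre-existing target group exposed as `var.target_group_arn`:

```hcl
resource "aws_ecs_service" "web" {
  name            = "web"
  cluster         = aws_ecs_cluster.demo.id
  task_definition = aws_ecs_task_definition.web.arn
  desired_count   = 2 # "how many": the replica count

  # Optional: register the tasks' containers with a load balancer target group.
  load_balancer {
    target_group_arn = var.target_group_arn # assumed to exist
    container_name   = "app"
    container_port   = 80
  }
}
```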
Compared to Kubernetes
So if we compare ECS with Kubernetes:
- An ECS Cluster is similar to a Kubernetes cluster, except that it has no namespaces
- An ECS Task Definition is similar to the template spec part of a Kubernetes Deployment
- An ECS Service is similar to a Kubernetes Deployment plus a Service, as it also carries the load balancing configuration
- An ECS Service also has placement strategies and constraints, which define scheduling rules (where and how to spread tasks); in Kubernetes this is spread across several mechanisms
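In Terraform these placement rules live directly on the service resource. The fragment below (resource name and constraint expression are hypothetical; other service arguments omitted) spreads tasks across availability zones and restricts them to t3 instances:

```hcl
resource "aws_ecs_service" "web" {
  # name, cluster, task_definition, desired_count, etc. omitted

  # Spread tasks evenly across availability zones.
  ordered_placement_strategy {
    type  = "spread"
    field = "attribute:ecs.availability-zone"
  }

  # Only place tasks on instances matching this expression.
  placement_constraints {
    type       = "memberOf"
    expression = "attribute:ecs.instance-type =~ t3.*"
  }
}
```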
More to come.