Amazon Web Services (AWS) offers a very attractive limited-period free tier package for those who, like me, want to taste a bit of AWS's fantastic features; I tried to make use of its services to put my docker compose application online. Of course the application is just a playground for me; my main purpose was to try out how good AWS would be in my situation. After a week of hard work I finally made it work as I expected, but it came with a series of pains along the way.
Background
Like I mentioned in the preface, I made a web application which contains 4 containers:
- Frontend (a web interface for users to interact with)
- LoopBack framework (a RESTful API handling framework)
- MongoDB (database container)
- Mongo Express (database management container)
And the following is the docker compose file I composed.
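The embedded file is not reproduced here, so the following is only a minimal sketch of what such a development compose file could look like. The service names follow the list above, but the build paths, image tags and ports are my assumptions, not the original file:

version: "2"
services:
  frontend:
    build: ./frontend            # custom web interface image (assumed path)
    ports:
      - "3000:3000"
  loopback:
    build: ./loopback            # custom LoopBack REST API image (assumed path)
    ports:
      - "3001:3001"
    depends_on:
      - mongodb
  mongodb:
    image: mongo                 # official image, no custom build needed
    volumes:
      - mongo-data:/data/db
  mongo-express:
    image: mongo-express         # official database management UI
    ports:
      - "8081:8081"
    depends_on:
      - mongodb
volumes:
  mongo-data: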
My target is to get the above 4 containers up and running, with AWS as the platform to run and maintain them.
Procedures & Experiences
I developed using the above compose file, and after the development I created another branch and changed the docker compose settings to fit the production environment, e.g. changing the Node environment variable to "production", changing the port back to 80, etc. Hence a new docker-compose-prod.yml was created. My strategy was to at least get the production yml file up and running locally first; I knew that, to a certain extent, this would make it easier to kick off my AWS journey.
The following is the docker-compose-prod.yml file I used
This docker-compose-prod.yml is nothing special, just a copy of docker-compose.yml that basically:
- Changes the ports to production-ready ports
- Removes the frontend and loopback containers' volume dependencies (we include all production files in the docker image instead)
- Changes some build contexts to point to their docker-prod files (illustrated in the sketch below)
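As a rough illustration of those three changes, the production override for one service could look like the sketch below; the paths and the Dockerfile name are assumptions of mine, not the original file:

frontend:
  build:
    context: ./frontend
    dockerfile: Dockerfile-prod   # hypothetical production Dockerfile
  environment:
    - NODE_ENV=production
  ports:
    - "80:80"                     # production port instead of the dev port
  # no volumes entry: production files are baked into the image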
So far so good. I got the production containers up and running with the production yml on my machine. Happy! Let's see how to accomplish the same thing on AWS!
By the way, I used this docker command to bring those containers up:
docker-compose -f ./docker-compose-prod.yml up
I thought working on AWS would be as easy as working on my local machine, but it turned out it was not. After 2 weeks of struggling with AWS, here are some of the experiences and thoughts I would like to share.
In the Preface section I mentioned the attractive free tier trial. Indeed there are quite a number of "traps". Maybe that is unfair to Amazon, but I have to say being an AWS newbie is hard: AWS is comprehensive in a good way, but being too complicated to get familiar with is its drawback.
Why hard? Because Amazon confuses us with tons of unfamiliar names: EC2, ECS, ECR, EBS, t2.micro, Amazon CloudWatch, Fargate and so on. You may ask why these terms are so important that I need to care. First of all, money. Remember the 750-hour free tier usage? If you are not familiar with how the AWS game works, you will probably be "tricked" by those "free" ads, and finally find charges on the bill that you never noticed.
With traditional hosting we only pay a specific amount monthly or yearly: we pick a plan, get 10 GB of web file storage, a certain number of email accounts, a domain name and a control panel, and we are good to go with our web application!
In AWS, however, the game is not played this way. Amazon divides the different parts of the hosting service into many, many tiny sub-services. In my case, to put my docker compose application in front of the public through AWS, I needed the following components.
- Amazon EC2 (Elastic Compute Cloud): Consider it the utmost base service (actually a remote virtual machine) to execute your cloud application, the root of all the services, e.g. booting up the server instances in order to run your web server, RESTful API service, MongoDB and so on. A prerequisite in most cases.
- Amazon EBS (Elastic Block Store): Nothing special, just general storage space to store your application files.
- Amazon ECS (Elastic Container Service): As the name implies, it provides services to run docker containers. At first I confused EC2 with ECS; I thought ECS was necessary for running docker containers, but indeed it isn't. ECS manages a cluster (a group of EC2 instances). So a very simple question: why do we need ECS? ECS acts in a "proxy-like" role: we run ecs-cli commands to tell ECS to launch containers on EC2. But why can't I launch them directly on the EC2 instances? Why do I need one more layer to do so?
Think of ECS as your mom and EC2 as yourself. I can cook for myself (launch containers), but why do we still need mom (to manage the cooking)? Because mom knows how to cook delicious food: she knows how the ingredients work with each other, the quantities, the combinations, and how to use resources most effectively to cook a delicious meal. That's why we need mom (ECS).
ECS knows very well how to manage EC2 effectively and organizes resources (CPU, RAM, storage...) optimally, and ECS uses docker to initialize the containers in the EC2 virtual machines. So there is nothing magic about ECS: it is meaningless if no EC2 instances are associated with it. On the other hand, you can live without ECS, but not without EC2. (Amazon's own documentation has a diagram that illustrates these relationships.)
- Amazon ECR (Elastic Container Registry): Nothing special, just a place to store docker images. A docker image repository.
The above is pretty much all the components. The following are the procedures and the difficulties I experienced. In the procedures I skipped some of the details (e.g. IAM role setup) for a simpler presentation; detailed tutorials are hyperlinked as references.
1. Sign Up for an AWS Account - Nothing special, but you must put your credit card down first... :)
2. Install and Configure AWS CLI
To access the different AWS services, and if you plan to manage your services through the command line interface, you must install and set up the AWS CLI.
Log in to the AWS console, search "security", pick "IAM", choose Users in the left panel, choose the designated user who should access the AWS services, choose the "Security Credentials" tab, then click "Create Access Key" and copy out the credential data, which is needed for authentication before accessing any of the AWS services. For details:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration
Basically, following the instructions for the installation & the credential creation will be fine (my installed version is v1.x). After installation, run aws configure, and the AWS CLI will ask for the following:
aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
Some things need attention:
- The region name is important; fill in exactly the same one as you chose in the control panel
- Fill in the access key and secret with the ones created above
- The region list can be found here: https://docs.aws.amazon.com/general/latest/gr/rande.html#endpoint-tables
After the setup, we have the right to run authenticated (signed) AWS commands for every action we take through the aws cli.
3. Create IAM (Identity and Access Management)
- The aws cli needs access rights to your services, like running docker commands on your behalf on ec2 instances, so it needs your permission grant; some security setup has to be performed first.
- Create an IAM group and attach a policy to that group
https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html#getting-started_create-admin-group-cli
- Create a user and add the user to the IAM group
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_cliwpsapi
Amazon strongly recommends not using the AWS account root user for tasks that do not require root access. Instead, create an administrator group and put newly created users into that group for management.
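For reference, a minimal sketch of that group/user setup via the aws cli; the group and user names here are placeholders of mine:

aws iam create-group --group-name Administrators
aws iam attach-group-policy --group-name Administrators --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-user --user-name deployer
aws iam add-user-to-group --user-name deployer --group-name Administrators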
3b. However, if your first target is ECS, following the tutorial at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html (which indeed suits my case) will guide you through all the IAM, group policy and credential setup needed to use the ECS services. In that case, just follow the link for the whole setup and skip step 3, as that guide basically covers step 3, but specifically for ECS (for docker containers).
Indeed, I completely followed the CLI tutorial in this step's URL to create the IAM group, users, policies and credentials. And if you have already attached the "AdministratorAccess" policy to your IAM group, you have already gained access to all the AWS services; there is no need to create extra ones unless you want to grant access to other users.
4. Using the ECS CLI to Setup the ECS Cluster and the EC2 Instances
Actually, if you have no docker compose file, you can follow the first-run guide, where the panel will guide you through the setup process of the containers: https://us-east-2.console.aws.amazon.com/ecs/home?region=us-east-2#/firstRun. However, I had already defined all the docker containers through docker compose, so this method did not suit me.
Instead, I used the ECS CLI to create the cluster, attach the ec2 instance to the cluster and run my containers (through my docker-compose file).
First, install and configure the ECS CLI
- Installation of ECS-CLI
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI.html
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI_installation.html
- Configure the ECS-CLI
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI_Configuration.html
I used the following commands to create the ecs cli connection profile. The access key ID and secret key are the ones created in step 2.
First, create an ECS credential profile to tell ECS how to connect to the remote machine (by providing the credentials); the profile is saved to a file for later use by the "ecs-cli up" command:
ecs-cli configure profile --profile-name ec2-mw-good-man-eat --access-key $AWS_ACCESS_KEY_ID --secret-key $AWS_SECRET_ACCESS_KEY
Then the following command creates a cluster configuration (for the future creation of the cluster). The region must be the same as the one chosen in the AWS console in step 2. This defines some simple cluster information: the name and the related region (which in turn hosts the EC2 instances). The --config-name parameter is later referred to by the "ecs-cli up" command so that ECS knows how to initialize the cluster:
ecs-cli configure --cluster ec2-mw-good-man-eat --default-launch-type EC2 --config-name mw-good-man-eat-conf --region us-east-2
And finally, bring up our ECS cluster using the profiles we have created. The keypair is the one generated in https://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html#create-a-key-pair; note that it is not a local keypair, but the name of the keypair located under "EC2 > Key pairs":
ecs-cli up --keypair keypairName --capability-iam --size 1 --instance-type t2.micro --cluster-config mw-good-man-eat-conf --ecs-profile ec2-mw-good-man-eat
Note that the instance type should be specified as the remote machine type you prefer; for the free tier, t2.micro is suitable. The --cluster-config parameter refers to the configuration we made with "ecs-cli configure".
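To double check that the instance actually registered itself to the cluster, you can list its container instances (using the cluster name created above):

aws ecs list-container-instances --cluster ec2-mw-good-man-eat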
Now that ECS has created and initialized the cluster and the related EC2 instances, we can arrange for the docker containers to run on those EC2 instances.
5. Setup ECR
Before arranging the docker containers to run on the EC2 instances, one tricky thing here is that the AWS ECS CLI cannot docker compose build from scratch like we did locally. ECS will NOT build the images "locally" on the EC2 instances for you; you must build them locally and push them to Amazon ECR first. So we need to set up an image registry beforehand to store the custom images for the ecs-cli compose up command, following the ECR tutorial below.
https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_GetStarted.html
So basically, "official" images means images that can be fetched directly from the official docker repository; if you use an official image without modifying it, there is no need to create an image repository for it.
In my case, I have 2 customized images, so I need to create 2 container registries (repositories) to store the images. Note that the images should be built locally using the docker-compose command and pushed using docker-compose push.
Here are the related steps / commands I used
5.1. Create container registry in AWS
Navigate to the ECR service and choose "Create repository" to create 2 different repositories to store the 2 images.
Or you can follow https://stackoverflow.com/questions/44052999/docker-compose-push-image-to-aws-ecr to manually create the repositories through the command line.
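For example, with repository names of my own choosing (not the original ones):

aws ecr create-repository --repository-name good-man-eat/frontend
aws ecr create-repository --repository-name good-man-eat/loopback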
5.2. Modify the docker compose file
Modify the docker compose file as follows
https://docs.google.com/document/d/1Y8fvJiV_YYbwZsoFD9A92eP54-1wzrHdicmXAijd1co/edit?usp=sharing
Points to Note:
- Change the docker-compose version to 2, as ECS does not support minor versions
- Update the image tags to the repository URLs created in ECR, to tell ECS which repository in the AWS registry each image needs to be pushed to
- Change depends_on to links for inter-container identification, as ECS does not recognize depends_on in the docker-compose file
- Add a logging tag to capture logs, so that in case of errors we have some logs for debugging (see the sketch below)
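Put together, a modified service entry could look like the following sketch. The account ID, repository name and log group are placeholders of mine, not the original file:

version: "2"
services:
  frontend:
    image: 123456789012.dkr.ecr.us-east-2.amazonaws.com/good-man-eat/frontend:latest
    ports:
      - "80:80"
    links:
      - loopback               # replaces depends_on, which ECS does not recognize
    logging:
      driver: awslogs          # ship container logs to Amazon CloudWatch Logs
      options:
        awslogs-group: good-man-eat-logs
        awslogs-region: us-east-2
        awslogs-stream-prefix: frontend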
5.3. Prepare and Push Images to AWS
aws ecr get-login --region us-east-2 --no-include-email
Use the output from the command: directly copy it, paste it into the command prompt and run it. We are now logged in to AWS ECR with our docker client and ready to push the images to the AWS cloud to serve our containers.
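The output is a ready-made docker login command; it looks roughly like this (the token is elided and the account ID is a placeholder):

docker login -u AWS -p <very-long-auth-token> https://123456789012.dkr.ecr.us-east-2.amazonaws.com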
We should rebuild the images locally to make sure they are all up to date:
docker-compose -f ./docker-compose-prod.yml build
And finally, all we need to do is push all the images:
docker-compose -f ./docker-compose-prod.yml push
6. Start up Containers
Basically, we have reached the very last step: run docker compose through the ecs-cli compose command, with the up subcommand, to initialize the containers on the ec2 instance.
One of the differences between local compose and ecs compose is that we have to specify the resources needed for each container using an ecs parameters file. Here is my parameter file:
https://docs.google.com/document/d/1PyogvbSTPtarrQyH4MEUBh26CzGRXcW95ezxNKdEXtU/edit?usp=sharing
It specifies how much CPU and RAM each container is limited to; AWS will follow the specified information to distribute the resources accordingly.
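The linked file is not reproduced here, but an ecs-params.yml of that kind could look like the sketch below. The service names must match the compose file; the share and limit numbers are my assumptions:

version: 1
task_definition:
  services:
    frontend:
      cpu_shares: 256
      mem_limit: 268435456    # bytes, roughly 256 MB
    loopback:
      cpu_shares: 256
      mem_limit: 268435456
    mongodb:
      cpu_shares: 256
      mem_limit: 268435456
    mongo-express:
      cpu_shares: 128
      mem_limit: 134217728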
Run ecs-cli compose up command as follows
ecs-cli compose --file ./docker-compose-prod.yml --cluster cluster-name --ecs-profile ecs-profile --cluster-config cluster-config-file --project-name project-name --ecs-params ./ecs-params.yml service up
ECS will parse the docker compose file using its customized docker rules, and if everything goes well you can see the 4 containers up and running.
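You can verify this with ecs-cli ps, reusing the profile and cluster config created in step 4:

ecs-cli ps --cluster-config mw-good-man-eat-conf --ecs-profile ec2-mw-good-man-eat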
And that's it. Your servers are up and running now and are ready to serve. Of course we also need to subscribe to the Amazon Route 53 service for a domain name, but I haven't tried that yet; maybe later I will fire off another post to talk about it.
Other AWS Features
Usually we would be satisfied after step 6, as all the desired containers are up and running. However, a responsible and "talented" developer should also think about what comes afterwards: a domain name for the deployed project, performance, maintenance and continuous development, which makes this section useful.
1. AWS Backup
Backups are definitely a must-do routine for our cloud server to prevent loss of data. Here is what I have done.
1a. Search and navigate to AWS Backup in service search box
1b. Create the backup plan
Usually I would choose a pre-defined backup plan for the sake of time saving, but I will take a custom backup plan as an example. Basically, just following the guide to fill in the backup details is fine, except for the tags, which are really important for identifying which service's docker volumes you are backing up.
1c. Assigning the backup plan to current resources
This is the most important part of the backup process: after 1b, the backup will not start until you assign resources to the backup plan. Think of the backup plan as a piece of clothing; it does not function (keep you warm) until someone wears it.
So the point is, how can we locate the resources to back up? BY TAGS. After creating the backup plan nothing happens yet; you then need to navigate to the backup plans and select the name of the backup plan you have just created.
Click "Resource assignments > Assign resources". You can choose to identify resources by tags or by resource IDs; this time I chose tags. Navigate to the EC2 dashboard, then to volumes (located under EBS (Elastic Block Store) > Volumes), then to tags, and create the tag yourself. This tag is the one to fill in under AWS Backup > Assign resources.
1d. Confirm the backup
If the backup plan was successfully created, you should be able to see some backup operations running on the AWS Backup dashboard page.
2. Configuring SSL over HTTP and Dynamic Port Mapping
https://jackygeek.blogspot.com/2020/03/aws-dynamic-port-mapping-ssl-equipment.html
3. Register domain name via Route 53
To be updated...
Epilogue
You can see how complicated it is to start from zero and get your containers up and running. I would say the whole AWS ecosystem is not so user friendly, especially when you are not quite familiar with linux commands, docker, etc. And most importantly, Amazon packages its services and sells them in quite a number of tiny pieces, with a bunch of hard-to-understand terms and not-so-well-organized documentation; I found their documentation a bit... confusing, or somehow difficult to follow. Sometimes one tutorial seems to perform the same thing as another with a different method, which cost me a lot of time to make the mechanism clear to myself.
I have already made this tutorial as simple as I could, but it is still complicated. I will try to improve it, and create another one specifically about AWS concepts, after I get more familiar with this "game".
References
Thanks to the references below; without them I definitely could not have made it.
- Free Tier AWS Services & How They Are Billed: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-hour-billing/
- Basic EC2 FAQs: https://aws.amazon.com/ec2/faqs/
- What is ECS: https://aws.amazon.com/ecs/
- EC2 and ECS, the differences: https://stackoverflow.com/questions/40575584/what-is-the-difference-between-amazon-ecs-and-amazon-ec2
- Creating Cluster: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-ec2.html
- Getting Started with Amazon ECS: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_GetStarted_EC2.html
- ECS CLI Command
- References: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli.html
- References 2: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI_reference.html
- ECS CLI Compose Command
- ECS params
- References: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-ecsparams.html
- ECR
- Introduction: https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_GetStarted.html
- Use Case: https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_AWSCLI.html
- Push Images to ECR: https://stackoverflow.com/questions/44052999/docker-compose-push-image-to-aws-ecr
- Route 53
- Routing to an EC2 instance: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-ec2-instance.html
- Configuring a new domain: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring-new-domain.html
- DNS configuring: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring.html
- AWS Backup
- Detailed introduction and how-to: https://docs.aws.amazon.com/aws-backup/latest/devguide/getting-started.html
- Others
- AWS ECS Compose Not Support "depends_on": https://github.com/aws/amazon-ecs-cli/issues/708
- Anonymous Volume Fixed in MongoDB:
- https://stackoverflow.com/questions/53509236/mongo-authentication-inside-docker
- https://stackoverflow.com/questions/51169488/docker-mongodb-cannot-sign-in-as-root
- Making nginx container for lightweight react server: https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
- Dockerizing a react app: https://mherman.org/blog/dockerizing-a-react-app/
- Formatting code to blogger: http://hilite.me/