Sunday, December 15, 2019

[AngularJS] ng-click and ui-sref

Problems
I am using AngularJS UI-Router with three ui-views: a left bar showing the menu, a top bar showing the application title and actions, and a content view displaying the actual content.


Clicking a link in the left bar triggers a state change, which in turn updates the content view accordingly. At the same time, the currently active menu item should be highlighted.


The problem is that the highlight is wrong: browsing content B highlights the menu item of title A.


Reasons and Solutions
The issue is caused by mixing "ng-click" and "ui-sref" on the menu bar items. I used ng-click to change the router state and update the view at the same time, but when the router state changes, the bindings are re-evaluated and every view binding resets to its initial state. This explains why the highlight becomes correct when I click the link a second time.


To fix this, I changed the implementation: I bind $stateParams and $state to $rootScope as follows


$rootScope.$stateParams = $stateParams;
$rootScope.$state = $state;

After the assignment, I can access the current state directly in the view template, which saves the effort of changing the state in ng-click.
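
A minimal sketch of this approach (the module name, state name and CSS class below are illustrative, not taken from my actual project):

angular.module('app').run(function ($rootScope, $state, $stateParams) {
  // expose the router state so every template can read it
  $rootScope.$state = $state;
  $rootScope.$stateParams = $stateParams;
});

and in the menu template the highlight is driven purely by the current state, no ng-click needed:

<li ng-class="{active: $state.includes('contentA')}">
  <a ui-sref="contentA">Content A</a>
</li>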

References

Saturday, November 16, 2019

[Docker] Console Log Problem

Background
I am building a Docker application with Docker Compose, but the log terminal does not show any of the console output from my Node server.


Cause & Solution
It turns out the problem comes from my mistake of wrapping the commands in a "command substitution" in the ENTRYPOINT of the Dockerfile. Here is my entrypoint command:

ENTRYPOINT /bin/bash -c "npm install && if [ $NODE_ENV == '"development"' ]; then \
$(nodemon --ignore node_modules/ mongoEngine.js); else $(node mongoEngine.js); fi"

Output produced inside a command substitution is captured by the shell rather than written to stdout, so it never reaches the Docker log. Removing the $(...) wrapping solves the problem.
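
For reference, the corrected entrypoint is identical to the one above with only the command substitutions removed:

ENTRYPOINT /bin/bash -c "npm install && if [ $NODE_ENV == '"development"' ]; then \
nodemon --ignore node_modules/ mongoEngine.js; else node mongoEngine.js; fi"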

References



Thursday, November 14, 2019

[System] Remove Backup Volume from Laptop

Background
I bought a new laptop, but the disk is chopped into several pieces: the 512 GB SSD is split roughly in half between a system partition and a data partition, and some space is occupied by an Acer backup partition, which is not what I wanted.


Solution
Following this guideline, https://macrorit.com/wipe-hard-drive/delete-oem-partition.html, I first list the current volumes and remove the unwanted one using the command line interface. Restart the laptop, and the backup partition is gone.
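
Roughly, the diskpart flow from that guide, run in an elevated command prompt, looks like this (the volume number is just an example; pick the Acer backup volume shown by "list volume", and note that protected OEM partitions may need "select partition" / "delete partition override" instead):

diskpart
list volume
select volume 4
delete volume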

Tuesday, November 12, 2019

Putting a Docker Compose application online on AWS - a painful journey

Preface
Amazon Web Services (AWS) offers a very attractive limited-period free tier for people like me who want a taste of its features, so I tried to use its services to put my Docker Compose application online. Of course the application is just a playground; my main purpose was to find out how well AWS would fit my situation. After a week of hard work I finally made it work as expected, but the process came with a series of pains.


Background
Like I mentioned in the preface, I made a web application which consists of 4 containers:
  • Frontend (a web interface for the user to interact with)
  • LoopBack framework (a RESTful API framework)
  • MongoDB (database container)
  • Mongo Express (database management container)
The following is the docker-compose file I composed.

My target is to get the above 4 containers up and running, with AWS as the platform to run and maintain them.

Procedures & Experiences 
I developed using the above compose file. After development, I created another branch and changed the Docker Compose settings to fit the production environment, e.g. changing the Node environment variable to "production", changing the port back to 80, etc. Hence a new docker-compose-prod.yml was created. My strategy was to get the production YAML up and running locally first, knowing that to some extent this would make it easier to kick off my AWS journey.

The following is the docker-compose-prod.yml file I used.
This docker-compose-prod.yml is nothing special but a copy of docker-compose.yml that basically
  • Changes the ports to production-ready ports
  • Removes the frontend and LoopBack containers' volume dependencies (all production files are included in the Docker image instead)
  • Changes some build contexts to their docker-prod Dockerfiles
So far so good: I got the production containers up and running with the production YAML on my machine. Happy! Let's see how to accomplish the same thing on AWS.

By the way, I use this command to bring those containers up:
docker-compose -f ./docker-compose-prod.yml up

I thought working on AWS would be as easy as working on my local machine, but it turned out otherwise. After 2 weeks of struggling with AWS, here are some of the experiences and thoughts I would like to share.

In the Preface I mentioned the attractive free tier trial. Indeed there are quite a number of "traps"; maybe that is unfair to Amazon, but I have to say being an AWS newbie is hard. AWS is comprehensive in a good way, but being too complicated to get familiar with is its drawback.

Why hard? Because Amazon confuses us with tons of unfamiliar names: EC2, ECS, ECR, EBS, t2.micro, Amazon CloudWatch, Fargate and so on... You may ask why these terms are so important that you need to care. First of all, money: remember the 750 hours of free tier usage? If you are not familiar with how the AWS game works, you will probably be "tricked" by those "free" ads, and eventually find charges on the bill that you never noticed.

In traditional hosting, we simply pay a fixed monthly or yearly amount to a hosting provider: we pick a plan and get, say, 10 GB of web file storage, a certain number of email accounts, a domain name and a control panel, and we are good to go for our web application!

On AWS, however, the game is not played this way. Amazon divides the hosting service into many tiny sub-services. In my case, to put my Docker Compose application online through AWS, I need the following components:

  • Amazon EC2 (Elastic Compute Cloud): Consider it the utmost base service (actually a remote virtual machine) for executing your cloud application, the root of all the services, e.g. booting up server instances to run your web server, RESTful API service, MongoDB and so on. A prerequisite in most cases.
  • Amazon EBS (Elastic Block Store): Nothing special; think of it as general storage space for your application files, i.e. the virtual disks attached to your EC2 instances.
  • Amazon ECS (Elastic Container Service): As the name implies, it provides a service for running Docker containers. I first confused EC2 and ECS: I thought ECS was necessary for running Docker containers, but it isn't. An ECS cluster is a group of EC2 instances. So, a very simple question: why do we need ECS? ECS plays a "proxy-like" role: we run ecs-cli commands to tell ECS to launch containers on EC2. But I was confused: why can't I launch them directly on the EC2 instances? Why do I need one more layer to do so?

    An analogy: ECS is your mom, EC2 is yourself. I can cook for myself (launch containers), so why do we still need mom (someone to manage the cooking)? Because mom knows how to cook delicious food: she knows how the ingredients work with each other, the quantities, the combinations, and so on, making the most effective use of resources to cook a delicious meal. That's why we need mom (ECS).

    ECS knows very well how to manage EC2 effectively, organizing resources optimally (CPU, RAM, storage...), and it uses Docker to start containers on the EC2 virtual machines. So there is nothing magic about ECS: it is meaningless if no EC2 instances are associated with it. On the other hand, you can live without ECS, but not without EC2. Amazon's own diagram (not reproduced here) illustrates the relationship.

  • Amazon ECR (Elastic Container Registry): Nothing special but a place to store Docker images, i.e. a Docker image registry.

The above are pretty much all the components. The following are the procedures and the difficulties I experienced. I skip some of the details (e.g. IAM role setup) for a simpler presentation; detailed tutorials are hyperlinked as references.

1. Sign Up for an AWS Account - Nothing special, but you must put your credit card down first... :)

2. Install and Configure AWS CLI
To access the different AWS services and manage them through the command line interface, you must install and set up the AWS CLI.

Log in to the AWS console, search for "security", pick "IAM", choose Users in the left panel, select the user that should access the AWS services, open the "Security Credentials" tab, then click "Create Access Key" and copy the credential data out; it is needed for authentication before accessing any of the AWS services. For details:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration

Basically, following the instructions for the installation and the credential creation will be fine (my installed version is v1.x). After installation, run
aws configure

AWS CLI will ask for the following
aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

Some things need attention:
- The region name is important; fill in exactly the same one as you chose in the console
- Fill in the access key and secret created above
- The region list can be found here: https://docs.aws.amazon.com/general/latest/gr/rande.html#endpoint-tables

After the setup, every action we perform through the AWS CLI is an authenticated (signed) AWS command.
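
A quick sanity check that the credentials and signing work is to query your own identity:

aws sts get-caller-identity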

3. Create IAM (Identity and Access Management) 
This allows the AWS CLI to access your services on your behalf, for example running Docker commands on EC2 instances, so it needs your permission grant. Some security setup has to be performed first.

- Create IAM group and attach policy to that group
https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html#getting-started_create-admin-group-cli

- Create user and add user to the IAM group
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_cliwpsapi

Amazon strongly recommends not using the AWS account root user for tasks that do not require root access. Instead, create an administrator group and put newly created users into that group for management.
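
Roughly, the CLI flow behind those two links looks like this (the group and user names are just examples):

aws iam create-group --group-name Admins
aws iam attach-group-policy --group-name Admins --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-user --user-name deploy-user
aws iam add-user-to-group --group-name Admins --user-name deploy-user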

3b. However, if your first target is ECS, following the tutorial at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html (which indeed suits my case) will guide you through all the IAM users, group policies and credentials needed to use the ECS services. In that case, just follow that link for the whole setup and skip step 3; it covers essentially the same ground, but specifically for ECS (for Docker containers).

Indeed, I completely followed the CLI tutorial in this step's URL to create the IAM group, users, policies and credentials. If you have already attached the "AdministratorAccess" policy to your IAM group, you already have access to all AWS services; there is no need to create anything extra unless you want to grant access to other users.

4. Using the ECS CLI to Set Up the ECS Cluster and the EC2 Instances
Actually, you can follow the first-run guide (if you have no Docker Compose file); the panel will guide you through the container setup process: https://us-east-2.console.aws.amazon.com/ecs/home?region=us-east-2#/firstRun. However, I have already defined all my containers through Docker Compose, so this method does not suit me.

Instead, I use the ECS CLI to create the cluster, attach the EC2 instance to the cluster and run my containers (through my docker-compose file).

First install and configure the ECS CLI

- Installation of ECS-CLI
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI.html
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI_installation.html

- Configure the ECS-CLI
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI_Configuration.html

I used the following commands to create the ECS CLI connection profile. The access key ID and secret key are the ones created in step 2.

Create an ECS credential profile to tell the ECS CLI which credentials to use when talking to AWS; the profile is saved to a file and referenced later by the "ecs-cli up" command:
ecs-cli configure profile --profile-name ec2-mw-good-man-eat --access-key $AWS_ACCESS_KEY_ID --secret-key $AWS_SECRET_ACCESS_KEY

The following command creates a cluster configuration (for the later creation of the cluster). The region must be the same as the one chosen in the AWS console in step 2. This defines some basic cluster information: the name and the related region (which in turn will hold the EC2 instances). The --config-name parameter is referenced by the "ecs-cli up" command so that the ECS CLI knows how to initialize the cluster.
ecs-cli configure --cluster ec2-mw-good-man-eat --default-launch-type EC2 --config-name mw-good-man-eat-conf --region us-east-2

Finally, bring up the ECS cluster using the profiles we have created. The key pair is the one generated in https://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html#create-a-key-pair; note that it is not a local file, but the name of the key pair listed under "EC2 > Key Pairs".
ecs-cli up --keypair keypairName --capability-iam --size 1 --instance-type t2.micro --cluster-config mw-good-man-eat-conf --ecs-profile ec2-mw-good-man-eat

Note that the instance type should be the remote machine type you prefer; for the free tier, t2.micro is suitable. The --cluster-config parameter refers to the configuration we made with "ecs-cli configure".

Now that ECS has created and initialized the cluster and the related EC2 instances, we can arrange for the Docker containers to run on those instances.

5. Set Up ECR
Before arranging the containers to run on the EC2 instances, one tricky thing is that the AWS ECS CLI cannot build images from a compose file from scratch the way we do locally. Instead, we need to set up an image registry beforehand to store the custom images for the "ecs-cli compose up" command, following the ECR tutorial below. ECS will NOT build the images on the EC2 instances for you: you must build them locally and push them to Amazon ECR first.
https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_GetStarted.html

Basically, "official" images are images that can be fetched directly from the official Docker registry; if you use an official image without modification, there is no need to create an image repository for it.

In my case, I have 2 customized images, so I need to create 2 repositories to store them. Note that the images should be built locally with docker-compose before being pushed with docker-compose push.

Here are the related steps / commands I used

5.1. Create the container repositories in AWS
Navigate to the ECR service and choose "Create repository" twice to create 2 different repositories for the 2 images.

Or you can follow https://stackoverflow.com/questions/44052999/docker-compose-push-image-to-aws-ecr to create the repositories manually through the command line.
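
For example (the repository names are just illustrations, one per custom image):

aws ecr create-repository --repository-name my-frontend
aws ecr create-repository --repository-name my-loopback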

5.2. Modify the docker compose file 
Modify the docker compose file as follows
https://docs.google.com/document/d/1Y8fvJiV_YYbwZsoFD9A92eP54-1wzrHdicmXAijd1co/edit?usp=sharing

Points to Note:
- Change the docker-compose version to 2, as ECS does not support minor versions
- Update the image tags to the repository URIs created in ECR, to tell ECS which AWS repository each image should be pushed to
- Change depends_on to links for inter-container references, as ECS does not recognize depends_on in the docker-compose file
- Add a logging section so logs are captured; if there are errors we then have something to debug with
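
A rough sketch of what one service entry might look like after these changes (the account ID, region, names and ports are placeholders, not my real values):

version: '2'
services:
  frontend:
    image: 123456789012.dkr.ecr.us-east-2.amazonaws.com/my-frontend:latest   # ECR repository URI
    links:
      - loopback                       # replaces depends_on
    ports:
      - "80:80"
    logging:
      driver: awslogs
      options:
        awslogs-group: my-app-logs
        awslogs-region: us-east-2
        awslogs-stream-prefix: frontend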

5.3. Prepare and Push Images to AWS
aws ecr get-login --region us-east-2 --no-include-email

Copy the output of that command, paste it into the command prompt and run it. We are now logged in to AWS ECR with our Docker client and are ready to push the images to the AWS cloud to serve our containers.
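
Alternatively, the output can be executed in one step via command substitution (a common pattern with the v1 CLI):

$(aws ecr get-login --region us-east-2 --no-include-email)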

We should rebuild the images locally to make sure they are all up to date:
docker-compose -f ./docker-compose-prod.yml build

And finally, all we need to do is push all the images:
docker-compose -f ./docker-compose-prod.yml push

6. Start up Containers
We have basically reached the final step: feed the compose file to the ECS CLI and run its up command to start the containers on EC2.

One difference between local compose and ECS compose is that we have to specify the resources needed for each container in an ECS parameters file. Here is my parameter file:

https://docs.google.com/document/d/1PyogvbSTPtarrQyH4MEUBh26CzGRXcW95ezxNKdEXtU/edit?usp=sharing

It specifies the CPU and RAM limits for each container; AWS follows this information to distribute resources accordingly.
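
The real file is linked above; the general shape of an ecs-params.yml is roughly the following (service names and values are illustrative):

version: 1
task_definition:
  services:
    frontend:
      cpu_shares: 100
      mem_limit: 262144000      # bytes, roughly 250 MB
    loopback:
      cpu_shares: 100
      mem_limit: 262144000
    mongodb:
      cpu_shares: 100
      mem_limit: 393216000
    mongo-express:
      cpu_shares: 50
      mem_limit: 131072000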

Run the ecs-cli compose service up command as follows:
ecs-cli compose --file ./docker-compose-prod.yml --cluster cluster-name --ecs-profile ecs-profile --cluster-config cluster-config-file --project-name project-name --ecs-params ./ecs-params.yml service up

ECS parses the docker-compose file using its own customized rules, and if everything goes well you can see the 4 containers up and running.
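
To verify, you can list the running containers with something like the following (reusing the profile names from step 4):

ecs-cli ps --cluster-config mw-good-man-eat-conf --ecs-profile ec2-mw-good-man-eat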

And that's it: your servers are up and running and ready to serve. Of course we also need the Amazon Route 53 service for a domain name, but I haven't tried that yet; maybe I will write another post about it later.

Other AWS Features
Usually we will be satisfied after step 6, as all the desired containers are up and running. However, a responsible and "talented" developer should also think about what comes afterwards: a domain name for the deployed project, performance, maintenance and continuous development. That is what this section is about.

1. AWS Backup
Definitely a must-do routine for our cloud server, to prevent loss of data. Here is what I have done.

1a. Search for AWS Backup in the service search box and navigate to it

1b. Create the backup plan
Usually I would choose a pre-defined backup plan to save time, but I will take a custom backup plan as an example. Basically, just following the guide to fill in the backup details is fine, except for the tags, which are really important: they identify which service's Docker volumes you are backing up.

1c. Assigning the backup plan to current resources
This is the most important part of the backup process: after 1b, the backup will not start until you assign resources to the backup plan. Think of a backup plan as a piece of clothing: it does not function (keep you warm) until someone wears it.

So the point is: how do we locate the resources to back up? BY TAGS. After creating the backup plan nothing happens yet; you then need to navigate to the backup plans list and select the name of the plan you have just created.

Click "Resource assignments > Assign resources", find "Assign resources", and you can choose identified by tags or resources ids, this time I choose "tags", we need to navigate to the EC2 dashboard,  navigate to "volume" (Located under EBS (Elastic Block Storage) > Volumes), navigate to tags, and make the tag ourselves. This tag will be our tag to be filled out in our AWS backup > Assign resources field.

1d. Confirm the backup
If the backup plan is set up successfully, you should see some backup jobs appearing on the AWS Backup dashboard page.

2. Configuring SSL over HTTP and Dynamic Port Mapping
https://jackygeek.blogspot.com/2020/03/aws-dynamic-port-mapping-ssl-equipment.html

3. Register domain name via Route 53
To be updated...

Epilogue
You can see how complicated it is to go from zero to having your containers up and running. I would say the whole AWS ecosystem is not very user friendly, especially when you are not familiar with Linux commands, Docker and so on. Most importantly, Amazon packages its services and sells them as quite a number of tiny pieces, with a bunch of hard-to-understand terms and not very well organized documentation; I found the docs a bit confusing, or somehow difficult to follow. Sometimes one tutorial seems to do the same thing as another with different methods, which cost me a lot of time to get the mechanism clear in my head.

I have already made this tutorial as simple as I could, but it is still complicated. I will try to improve it and create another one specifically about AWS concepts once I get more familiar with this "game".


References
Thanks to all the references; without them, I definitely could not have made it.

Sunday, November 3, 2019

[JS] Something About Bubbling And Capturing

Background
While implementing a React application, I needed to accomplish the following effect.

Let's say we have an application with a list of items. The user can interact with the items, and each click triggers a calculation in another React component. More precisely, think of an app where the user clicks to add products to a cart, and another component immediately recalculates the total price and shows it in the menu bar. To better illustrate, I borrowed the following image from Google to present the idea.

(Image: shopping cart example borrowed from Google.)

When the user scrolls (either on a smartphone or on a desktop), I want the floating menu to disappear.


Problem
Following the thread https://medium.com/@pitipatdop/little-neat-trick-to-capture-click-outside-react-component-5604830beb7f, I first add a ref to the floating cart component and add a click event listener to the whole document DOM in componentDidMount. Then, for each user click event, I use the "contains" method to check whether the clicked DOM node is a child of the floating cart component.

The idea seems nice, but the contains method always evaluates the clicked DOM element as "false", i.e. not contained, even when I click on the designated PayBox element. If this is not solved, the hide-the-menu functionality will not work.


Findings and Solutions
It took me an hour, but the problem turned out to be caused by JavaScript's bubbling and capturing behavior. Put simply, the document's click listener is invoked only after the child's click listener has been handled.

The tricky point here is addEventListener's third argument, useCapture. It determines the order in which click listeners respond when both the parent and the child element have one.

In my case, I have a "PayBox" DOM element inside the document DOM element; both of them have click event listeners, and the interesting thing is that I attach the document listener using


componentDidMount() {
  const {getProductList} = this.props;
  getProductList();
  // Add scroll event to whole app, when scroll, hide the order meal list
  document.addEventListener('click', this.handleDocumentScroll, false);
}

Here I set the useCapture parameter to false, so bubbling applies. That means the PayBox DOM's click listener is evaluated first, and the document element's listener comes after. The following illustrates the arrangement of the elements.

(Image: DOM tree showing the PayBox element nested inside the document element.)

What's more, I render the Box element inside the render function, which means that whenever the component's props change, a re-render may be triggered, which in turn re-renders the Box element.

Here is a capture of the target element, the parent element, and the result of the contains function.

(Screenshot: console output of event.target, its parent element, and the contains result.)

The interesting thing here is that the parent of the target element is different from the one wrapped by the Box element (note the ***-root-290*** and ***-root-291*** class names).

The reason is that when the user clicks, bubbling applies: JS handles the event listener of the inner DOM first (i.e. the Box element's), and because that onclick handler changes some props, it triggers a re-render of the Box element, which results in a renewal of the associated class name.

The document's click event listener is handled afterwards, but event.target still keeps a reference to the old element (the one with the ***-root-290*** class name), so that outdated element is not contained in the updated Box element. This explains why the dropdown menu never pops up.

The solution is simple: just change the useCapture parameter from false to true. Capturing then applies, the document's event listener is handled before the child's, and the contains check returns the desired result.
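
The registration then becomes identical to the snippet above except for the third argument:

componentDidMount() {
  const {getProductList} = this.props;
  getProductList();
  // useCapture = true: the document listener runs in the capture phase,
  // before the PayBox click handler can trigger a re-render
  document.addEventListener('click', this.handleDocumentScroll, true);
}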

References
https://blog.othree.net/log/2007/02/06/third-argument-of-addeventlistener/
https://www.w3schools.com/jquery/tryit.asp?filename=tryjquery_event_stoppropagation
https://www.w3schools.com/jquery/event_stoppropagation.asp
https://stackoverflow.com/questions/34522931/example-for-bubbling-and-capturing-in-react-js

Tuesday, September 10, 2019

Configuring a GitLab Account on Linux & Windows

Background
I needed to create a Git working environment on my Windows 10 desktop. I set up the Git account and added the machine's SSH key through the GitLab control panel, but GitLab replied with no access rights when I tried to git pull my project.

I tried

  • Re-generating the RSA key and loading it again
But it did not work...

Solution
It turns out the username and the user account must match the one created in GitLab; the Git client uses that account for the verification process with the Git server. I had misunderstood that the user account could be arbitrarily created on the fly.

Procedure

  1. Configure the username and user email (they MUST match the ones in GitLab / GitHub; skip the --global parameter if you are managing multiple users)
    • git config --global user.name "yourname"
    • git config --global user.email "your@email.com"
  2. Remove known_hosts (may not be necessary, it depends)
  3. ssh-keygen -t rsa -C "whateverName"
  4. Copy all text in /Users/your_user_directory/.ssh/id_rsa.pub
  5. Go to setting in gitlab and add the public key for authentication
  6. Set the git remote url with "git remote set-url origin https://gitUsername:gitPassword@gitlab.com/yourRepo.git"
Points to Note
Sometimes, login still fails with

sign_and_send_pubkey: signing failed: agent refused operation
Permission denied (publickey).
fatal: Could not read from remote repository.

This happens because ssh-agent cannot find the attached keys; in that case we need to add the private key identities by running


ssh-add
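
If plain ssh-add does not pick up the key, pass the private key path explicitly (the default path from step 4; adjust it if your key lives elsewhere):

ssh-add ~/.ssh/id_rsa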


Repeat step 6 and the remote origin should now be readable.



But in case...
If it still fails to log in with error messages like having no access rights to the repo, navigate to ~/.ssh/config and add an entry as follows.


Note: You can also specify multiple IdentityFile entries if you use different key pairs with the same host name; just add one more IdentityFile line. Remember, the IdentityFile option should point to the private key.
Host gitlab.com
  PubkeyAcceptedKeyTypes +ssh-rsa
  HostName gitlab.com
  IdentityFile ~/.ssh/id_rsa_anywhere   # private key path here!!
  User git

Then test the login, letting ssh try all the configured keys (verbose output helps debugging):



ssh git@gitlab.com -vv
After the configuration, GitLab should accept the login and fetching updates should work again.

References


Wednesday, September 4, 2019

[OneNote] Setting the Default Mail Client When Sending Pages by Email

Background
I usually use OneNote's "Email Page" function, but the default email client used to send is always Thunderbird.

I tried

  • Setting the mailto default in the Windows 10 default-apps settings
  • Setting Outlook as the preferred default mail client
Neither worked.

Solution
Good job, M$Soft. I finally used the following method to solve it (removing registry keys).


Just remove the keys under the Mail and MailTo registry folders, and that's it; no need to restart.