Monday, November 8, 2021

Cannot install npm modules "cross-env"

Problems

Installing the npm module "cross-env" fails with the following error:

npm error on every command: EEXIST: file already exists, mkdir 'c:\users\user\appdata\Roaming\npm'

Troubleshoot

  1. Try re-installing the module -> no luck
  2. Try installing with "npm install --force --save-dev cross-env" -> no luck
  3. Try removing npm cache -> no luck

Solutions

It turns out I had put "cross-env" into the npm start script before actually installing the module, which was the cause of the error. Do NOT update the script with the "cross-env" command before installing it.
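For reference, the intended end state is: install the module first, then reference it in the scripts. A sketch of the resulting package.json (the script value and version are illustrative, not from this project):

```json
{
  "scripts": {
    "start": "cross-env NODE_ENV=development node server.js"
  },
  "devDependencies": {
    "cross-env": "^7.0.3"
  }
}
```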

References

Thursday, September 23, 2021

Pipeline failed after changing the desktop machine and upgrading Ubuntu

Background

Recently I changed my desktop to a mini PC and upgraded the Ubuntu system from v18 to v20, which caused the GitLab runner to fail.

The annoying part is that it only outputs "Job failed! exit status 1", which provides no clue for debugging.


Troubleshooting and Solutions

Retrieve more debugging information using "journalctl -u gitlab-runner", which did provide some clues:
But that still was not enough for debugging. With the help of the reference links I then found out that the ".bash_logout" script of the user "gitlab-runner" was the cause of the issue. Every time the runner logs out of the gitlab-runner account, the logout script calls clear_console, for which the gitlab-runner user does not have any privileges. Commenting out the clear_console call and re-running the runner solved the issue.
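A sketch of the fix, assuming the stock Ubuntu ~/.bash_logout for the gitlab-runner user (your file may differ slightly):

```shell
# /home/gitlab-runner/.bash_logout
# ~/.bash_logout: executed by bash(1) when a login shell exits.
# The stock Ubuntu version calls clear_console, which the
# gitlab-runner user cannot run -- comment it out:

#if [ "$SHLVL" = 1 ]; then
#    [ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
#fi
```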


References

Sunday, September 12, 2021

Printing images on non-POSIX systems

Background

In a project that needs to print receipts using a thermal printer, I successfully printed a receipt for each takeaway order, like below

The existing version of the receipt contains only text; using the node module "printer" I was able to print the text properly, but recently a new request requires printing a QR code, which comes in the form of an image that this node module does not support.

Somehow only CUPS printers support image printing, which is not what Windows supports...


Solutions

Luckily, I kept searching the issue thread of the node printer plugin and found this thread:

From the replies, I agreed that there was no need to spend time hunting for node plugins; instead I followed the suggested path of using a Windows application, "printhtml", to first generate an HTML file and then print it.


Although it is a bit more complicated, as I need to create a child process in my node application and execute the remote command, it finally works.


Steps

  1. The loopback application sends a socket message to the remote node daemon on the PC that is connected to the printer.

  2. The node daemon receives the socket message and generates the HTML file (with the QR code image).

  3. The daemon executes a child process to run the printhtml executable, and the printout succeeds.
References

Friday, September 3, 2021

Clone system disk from larger to smaller size HDD

Items

  1. Windows 10 USB (for fixing MBR)
  2. Gparted USB

Procedure

  1. Boot up the GParted USB
  2. Shrink the volume on the large HDD so the used space fits on the smaller HDD
  3. Copy all partitions from the source HDD to the destination HDD
  4. Reboot the system
  5. Boot up the Windows 10 USB
  6. Repair the Windows boot files
    1. Choose Troubleshoot > Command Prompt
    2. Type "diskpart"
    3. "list disk" > "sel disk X", where X is the new HDD
    4. "list vol"
    5. "sel vol X", where X is the EFI partition (the one with ~500MB volume)
    6. "assign letter=V"
    7. Type "exit"
    8. Type "V:"
    9. Type "bcdboot X:\windows /s V: /f UEFI"

References

Saturday, July 17, 2021

Setting up ARC between Onkyo amplifier and Panasonic TV

Background

I wanted to try using ARC to pass TV audio through to the Onkyo 626 amplifier for sound output.

Problems

After configuring the settings on both the amplifier and the TV, the audio still does not play through my speakers.

Solutions

One of the devices (the EGreat A5 media player) was blocking the ARC signal; disconnecting it from the Onkyo amplifier made things work normally.

Set-up sharing

Panasonic P55VT50H

  • Press remote button "Option", configure VIERA Link as "On"
  • You will find "Home theatre" selection in Application section

Onkyo NR626

  • The input selector should be set to "TV/CD"
  • The main TV out of the amplifier (Out 1) should connect to the Panasonic TV's input (TV in 2)
  • Advanced Settings > Input / Output Assign > Digital Audio Input > TV / CD, configure it as "--"
  • Advanced Settings > Source Setup > Audio Selector configured as "ARC"
  • Advanced Settings > Hardware Setup > HDMI,HDMI CEC (RIHD) configured as "ON" and Audio Return Channel configured as "Auto"

References

Saturday, July 10, 2021

React concepts and learning

1. Usage of useRef

  • Syntax: const refObj = useRef(initialVal), where refObj = {current: initialVal}
  • refObj is mutable, and mutating refObj.current does not trigger a re-render
  • The value persists for the whole lifetime of the component (even after you change refObj.current)
  • Use cases
    • Managing the DOM directly, e.g. text field focus, integrating with 3rd-party DOM libraries
      • <input ref={textInput} type="text" />, where const textInput = useRef(null) is declared at the top of the functional component; by accessing textInput.current you retrieve the input DOM element for further processing
    • Storing instance variables
      • e.g. storing a setInterval or setTimeout ID for clearing at the unmount stage:
        • const intervalRef = useRef(null);
        • intervalRef.current = setInterval(() => { /* something... */ });
        • clean up in the function returned from useEffect: return () => { clearInterval(intervalRef.current) }
    • Use as a render counter when you want to know how many re-renders React has made (since ref values persist across renders)
2. Usage of useMemo

3. Usage of useCallback
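The useMemo and useCallback sections above are still empty. As a placeholder, the core idea behind useMemo (and useCallback, which memoizes a function instead of a value) can be sketched in plain JS: keep the last result and recompute only when a dependency changes, comparing each dependency with Object.is the way React compares dependency arrays. This is a conceptual sketch, not React's actual implementation.

```javascript
// Conceptual sketch of useMemo's caching rule (not React's real source):
// remember the last (deps, value) pair and recompute only when a dep changed.
function createMemo() {
  let lastDeps = null;
  let lastValue;
  return function memo(compute, deps) {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => !Object.is(d, lastDeps[i]));
    if (changed) {
      lastValue = compute();
      lastDeps = deps;
    }
    return lastValue;
  };
}

module.exports = { createMemo };
```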

References

4. About useEffect and rendering timing
useEffect always runs after the UI render is committed; React checks the dependency array against the values from the last render and runs the useEffect callback when any of them differ.

References

Saturday, June 12, 2021

Anywhere maintenance & troubleshooting cheat sheet

Useful Command

1. Remove node's log  

rm -rf /mnt/logs

2. Stream content of p2_ha log continuously

tail -f /mnt/logs/p2ha.log

3. Dump troubleshooting statistics for bug fixing

dumpStat.sh

4. Get neighbor from neighbor table

cat /sys/kernel/p2_nbr/nbr_tbl_dump

5. Restart syslogd when log is not outputting for some reasons

/etc/init.d/syslogd restart

6. View p2 routing protocol information

p2rp_cli -u -D

7. Capture packets to and from the eth1 interface, filtered to port 16001 only

tcpdump -i eth1 port 16001

Useful Meta Data

1. Log path of launcher in Windows

C:\Program Files\Anywhere Node Manager Launcher\user_data\logdata


Troubleshooting Notes

Scenario 1: In A-NM, node information for some nodes does not show up in the mesh topology after performing node recovery (given the lost node is physically on)

Procedure

1. Check console log http response output to compare the result with UI behavior

2. Check whether the node is in the managed device list (Note: a node recovery failure may cause the lost node to become unmanaged)

3. If step 1 fails to retrieve, get the reason for the failure, try to map the failure object to the error object documented in "doc_ui_p2_controller", and try to resolve it through the detailed error message returned from the controller

4. Try to isolate the problem from the UI using "controller_restful_tester": run the get-nodeinfo command against the remote node from the controller; if it succeeds, the controller <--> node connection should be good

5. Try to further isolate the issue from the controller by directly retrieving node information using the protobuf tool. Remember to run the meshTopology command first to obtain the access port of the remote node

& '<pythonEXE_dir>' .\cli\ha.py --pw mgnt_pwd mgnt_ip -p access_port 1 > mesh_topology.txt

The "1" refers to the request type; there is no need to write the whole action name

6. If in step 5 we still cannot get the result, troubleshoot on the host node using tcpdump, a data-network packet analyzer, to at least confirm packets are flowing normally between controller <---> host_node and host_node <--> remote_node on their respective interfaces and ports

e.g.:

Let's say we have a controller (10.240.2.34), a host node (10.240.222.224) and a remote node. 

First we run "tcpdump -i eth1 port 16001" on the host node to ensure packets are flowing to the remote node through the host node (16001 is the NAT port of the remote node). If packets appear on port 16001, there is traffic from 10.240.222.224 to the remote node through the ethernet port connected to the controller.

Then we run "tcpdump -i mesh0 port 12381" on the remote node to make sure packets are coming to and from the remote node.

If the result is positive, we can be sure the layer 3 and layer 4 connectivity is working as expected.

7. Next, use command 4 to make sure the neighbor link can be discovered on both nodes.


Scenario 2: Capturing logs and output to other parties for bug fixing

1. Before capturing any logs, we need to make sure the time stamps of 1) the nodes, 2) the controller and 3) the UI are in sync.

2. For the nodes to sync time, navigate to cluster configuration and set the timezone to Hong Kong; reboot the nodes in the cluster to apply the changes.

3. Navigate to system settings and configure the ntp server (IP only, same L2 environment). We should therefore find a PC (preferably Linux based), install ntpd and configure it with the HK time server first before proceeding to step 4.

4. Log in to the cluster and run the "date" command on all nodes to make sure their time is in sync with the time server you configured in step 3.

5. Since the controller and the UI share the same time (the Windows system time), just making sure they are in sync with the Windows time should be fine.

6. Navigate to /mnt/logs and remove all logs from the target nodes.

7. Remove all controller logs in "C:\Program Files\Anywhere Node Manager Launcher\user_data\logdata"

8. 

Miscellaneous Materials

1. SNAT, DNAT and masquerade

https://www.huaweicloud.com/articles/90a13a644803d0efcd024df76fb130ae.html

Saturday, June 5, 2021

Frequently used web programming / design techniques

1. Use "data-*" to pass data to event listener

There are many times we need to pass parameters to event listeners but may not want to use bind or an arrow function, to avoid frequent unnecessary re-rendering in React. In this case we can add a "data-*" HTML attribute to accomplish this.

e.g.: 

<a href="#someLink" onClick={onClickHandler}>
  <span data-idx={valToOnclickListener}>someSpanText</span>
</a>

Then in the "onClickHandler" function, we can use the event object "e" to retrieve the data-idx value we want for further processing, in this way:

const onClickHandler = (e) => {
  const valToOnclickListener = e.target.dataset.idx;

  // do anything here with valToOnclickListener

}

Sometimes the listener will capture a child element even though you put the "data-idx" attribute on the parent; in this case use e.currentTarget (e.g. e.currentTarget.dataset.idx), since currentTarget is always the element the listener is attached to.


2. Specify child DOM using JSS

Use > to specify the child, use together with the class name, e.g.:

{
  someClassName: {
    '& > :first-child': {
      borderLeft: '2px solid transparent',
    },
  },
}


References

Tuesday, May 18, 2021

Multi-container restaurant project

Background

A restaurant owner wants to deploy our online ordering system to all of his restaurants (8 in total). Currently only a static page, goldenthumb.com.hk, is running (with https enabled). Our target is to use a single AWS Lightsail instance to serve:

1. The portal page (built with Gatsby JS), currently goldenthumb.com.hk, which acts as a portal to let users quickly select different restaurants for ordering; reference site: https://www.maximsmx.com.hk/takeaway_promotion/?utm_source=eatizen

2. Subdomains to distinguish different restaurants, e.g. lck.goldenthumb.com.hk is the ordering system of the restaurant located at Lai Chi Kok

3. api.lck.goldenthumb.com.hk, the API URL served by the loopback JS container to provide a RESTful service for accessing the mongoDB data

4. Run in https and able to check and renew the cert automatically

5. ftp access for updating the static goldenthumb.com.hk files for site maintenance

Problems

This project setup is very similar to another post, but more complicated, as it includes 2 more containers, certbot and a static portal page container, which co-exist with the nginx, loopback, mongo-express and mongoDB containers (the ordering web application formation), as well as the ftp container and another nginx container.

The following are the problems I have faced during the system configuration

1. The lck.goldenthumb.com.hk requests are not forwarded to the desired nginx container, showing "502 bad gateway"

It turned out I had been restarting a container that was never meant to be restarted. Normally the static web container "goldenthumbStatic" is used as the main server receiving client requests, so the proxy settings should be configured inside "goldenthumbStatic" as follows:

server {
  # Serving api url for internal use
  server_name lck.goldenthumb.com.hk;
  
  ssl_certificate /etc/letsencrypt/live/goldenthumb.com.hk/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/goldenthumb.com.hk/privkey.pem;
  
  listen 80 ;
  listen 443 ssl ;
  
  # access_log /var/log/nginx/access.loopback.log;
  # We redirect all outside request to appropriate loopback container
  location / {
    proxy_pass http://lck.goldenthumb.com.hk;
  }

  location /admin {
    proxy_pass http://lck.goldenthumb.com.hk;
    try_files $uri /index.html;
  }

  location /.well-known/acme-challenge/ {
    root /var/www/certbot;
  }
}

The location / block is important: it tells the server that when a request comes in as "lck.goldenthumb.com.hk", it should forward the request to the upstream "http://lck.goldenthumb.com.hk", which is defined below.

# Groups of server for proxy_pass usage in below server block
upstream lck.goldenthumb.com.hk {
  # We put docker container service name (internal docker IP) for pointing to right API server (8082:80)
  server frontend;
}

Here the upstream block defines a group of server(s) that proxy_pass directives can refer to; the value in proxy_pass resolves to this upstream block, and the request is forwarded to the "frontend" service.

Although it was configured correctly, I kept restarting the wrong docker container, so the settings were never applied.

2. The api.lck.goldenthumb.com.hk requests are not forwarded to the desired loopback container, showing "Connection refused"

This turned out to be a silly mistake: I forgot to update the loopback configuration file (config.json); port 80 should be configured to accept API connections from the other containers.
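A sketch of the relevant part of the loopback config.json; the exact keys depend on the LoopBack version, so treat this as an assumption rather than the project's actual file:

```json
{
  "restApiRoot": "/api",
  "host": "0.0.0.0",
  "port": 80
}
```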

3. The admin management page lck.goldenthumb.com.hk/admin is not working, showing "Connection refused"

The nginx conf was not configured correctly; below is the corrected config. The "try_files" directive is important: it guides the nginx server to serve the original index.html when lck.goldenthumb.com.hk/admin is navigated to, letting React Router handle the rest of the routing instead of the server itself.

  location /admin {
    proxy_pass http://lck.goldenthumb.com.hk;
    try_files $uri /index.html;
  }

4. The APIs used in admin page all fired as "lck.goldenthumb.com.hk/xxx" which is different from expected "api.lck.goldenthumb.com.hk/xxx"

This was due to a wrongly configured Dockerfile: a step to copy the compiled js files into the production docker container was missing, so the updated code did not reflect the changes.
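A sketch of the missing step in a typical multi-stage Dockerfile; stage names, base images and paths are assumptions for illustration, not the project's actual file:

```dockerfile
# build stage
FROM node:14 AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# production stage
FROM nginx:alpine
# the missing step: copy the compiled js bundle into the web root
COPY --from=build /app/build /usr/share/nginx/html
```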

5. The https certbot challenges for multiple subdomains

HTTPS needs to be applied for every sub-domain: lck.goldenthumb.com.hk and api.lck.goldenthumb.com.hk must be applied for separately. Update the certbot script (init-letsencrypt.sh) first, with the domains variable set to all desired sub-domains:

domains=(goldenthumb.com.hk www.goldenthumb.com.hk lck.goldenthumb.com.hk api.lck.goldenthumb.com.hk)

Then in the docker compose file, for every container that needs https certificates, add a /var/www/certbot volume to accept the challenges:

./app/certbot/www:/var/www/certbot


Sunday, May 16, 2021

MySQL learning notes

Queries

1. Create simple tables

CREATE TABLE IF NOT EXISTS Menus (
    id VARCHAR(20) NOT NULL,
    filename VARCHAR(30),
    PRIMARY KEY(id)
) DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS Restaurant (
    id VARCHAR(20) NOT NULL,
    name VARCHAR(20) NOT NULL,
    address VARCHAR(100),
    isOpen BOOLEAN,
    menu VARCHAR(20),
    PRIMARY KEY (id),
    FOREIGN KEY(menu) REFERENCES Menus(id)
) DEFAULT CHARSET=utf8;

2. Update an existing column (from NOT NULL to NULL)

ALTER TABLE Menus MODIFY filename VARCHAR(30) NULL;

Monday, January 11, 2021

Building web server of multiple docker instances with ssl (https) protection in AWS lightsail

Background

Continuing from the last tutorial on creating an http server with ssl protection, this time I want something a bit more advanced. Here is the situation.

I have a takeaway web application consisting of 5 servers: a frontend server powered by React, a middleware server powered by loopback (a nodeJS framework), mongo-express for monitoring mongoDB in a UI, the mongoDB database, and an ftp server for updating menu and meal information. The system architecture is as follows.

So there are 5 docker containers running in the same local network, and I want to put them together as one web application to serve my client. The problem is the SSL cert: I want secure transactions, so I need to register and deploy certificates for my React front-end and the middleware loopback server. How can I deploy all of them (each using the same internal port 80) to the Internet?


Problems

The following are the requirements & problems needed to solve

1. Getting a signed certificate from trusted party

2. Allow "api.goodmaneat.com" to be reached by the front-end server to access middleware functionality over port 443 (https) (the middleware co-exists with the front-end container)

3. Can the cert be shared by *.goodmaneat.com and goodmaneat.com?

4. The following is what needed to achieve

- www.goodmaneat.com -> goodmaneat.com

- www.goodmaneat.com/admin -> goodmaneat.com/admin

- http://goodmaneat.com -> https://goodmaneat.com

So basically, I want the server to strip the www prefix and force a redirect to https.


Solutions

The below items are what I have done to tackle the problems

1. Create an AWS Lightsail instance of type Amazon Linux 2, which is good if any of your services need the aws cli

2. Install docker and docker compose to the Amazon Linux (https://gist.github.com/npearce/6f3c7826c7499587f00957fee62f8ee9)

    - Note: Log out / restart the instance after installing, or docker cannot function properly

3. Assume you deploy your docker images in AWS, configure your login credential first before accessing Amazon registry (https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html)

4. Git clone or upload all files to the server and use docker compose to build the docker images into containers. I have the following directories for making the whole application function. Check that all 5 containers are up and running.

Docker-compose file reference: (https://docs.google.com/document/d/1SPWOBeLL23E75W_D9U38jZACVNR9jZVwbZqtljIWX2I/edit?usp=sharing)

Note for some important points for the docker-compose yml file

- The loopback container's port setting should be 8082:80 (host:container); we cannot have 80:80 as the frontend has already occupied port 80

- Add 443:443 port settings to frontend container to serve for https connection

- Add nginx and letsencrypt folders to store the nginx configuration (which we will deal with in later steps) and the certificates (Note: the whole /etc/letsencrypt folder should be mounted for https to work properly)

- Update the Dockerfile of the frontend container as the following (https://docs.google.com/document/d/1hJyFyWSazhE_qx9G2dBGc0of9BlDuBgdk0VdKGRRGwk/edit?usp=sharing); here, we install certbot for obtaining certificates
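The docker-compose points above can be sketched as follows. Service names and image names are assumptions for illustration; the real file lives in the linked Google Doc:

```yaml
services:
  frontend:
    image: goodmaneat/frontend        # hypothetical image name
    ports:
      - "80:80"
      - "443:443"                     # added to serve https connections
    volumes:
      - ./nginx-conf:/etc/nginx/conf.d
      - ./letsencrypt:/etc/letsencrypt  # mount the whole folder
  loopback:
    image: goodmaneat/loopback        # hypothetical image name
    ports:
      - "8082:80"                     # 80:80 is already taken by frontend
```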

5. docker exec into the frontend container and follow steps 2 to 6 to complete the certificate retrieval process (https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-using-lets-encrypt-certificates-with-wordpress)

Note: 

- Here we assume you already have a domain name registered with full control, as you need to add TXT records to complete the letsencrypt challenges. You should also have configured the Route 53 records (assuming you use AWS as your DNS provider) to map "www.goodmaneat.com", "goodmaneat.com" and any other required sub-domains to the instance IP.

6. Still in the frontend container, assuming you are using nginx as the web server, head to /etc/nginx/conf.d/nginx.conf (https://docs.google.com/document/d/18LuvdXzsE1qsyBP3fFpP1A1v528MLiYA_Ime-_yptfY/edit?usp=sharing) and update the configuration file to

- Locate the certificate registered in step 5

- Update port settings to only accept https connections

- 301 permanent redirect when www.goodmaneat.com/* is detected; the first server block accomplishes this by recognizing the domain name "www.goodmaneat.com" and returning a 301 redirect to the https, www-stripped URL.

- The second server block serves only https URLs

- The third server block is the trickiest part: it detects the server name "api.goodmaneat.com". We still need to provide the certificate files here because we accept only https connections, even for internal docker container access.

- The "proxy_pass" in the location block is important. It works with the upstream block: when api.goodmaneat.com (http/https) is accessed, nginx reverse-proxies the request to the requested server (here our loopback container, which can be referred to by its docker service name), fetches the response, and sends it back to the client. The upstream block provides the group of servers the proxy_pass directive refers to, e.g. "proxy_pass http://api.goodmaneat.com" is resolved as "proxy_pass http://(internal docker IP of the loopback container)".
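The three server blocks described above can be sketched as follows. Server names come from this post, but the certificate paths, root directory and upstream service name are assumptions; the actual file is in the linked Google Doc:

```nginx
# 1) strip www and force https
server {
    listen 80;
    listen 443 ssl;
    server_name www.goodmaneat.com;
    ssl_certificate     /etc/letsencrypt/live/goodmaneat.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/goodmaneat.com/privkey.pem;
    return 301 https://goodmaneat.com$request_uri;
}

# 2) main https site
server {
    listen 443 ssl;
    server_name goodmaneat.com;
    ssl_certificate     /etc/letsencrypt/live/goodmaneat.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/goodmaneat.com/privkey.pem;
    root /usr/share/nginx/html;
}

# 3) API reverse proxy to the loopback container
upstream api.goodmaneat.com {
    server loopback;   # docker service name of the loopback container
}
server {
    listen 443 ssl;
    server_name api.goodmaneat.com;
    ssl_certificate     /etc/letsencrypt/live/goodmaneat.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/goodmaneat.com/privkey.pem;
    location / {
        proxy_pass http://api.goodmaneat.com;
    }
}
```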

7. After all the nginx configuration, reload the nginx server with "nginx -s reload -c /etc/nginx/conf.d/nginx.conf"


Finally, the file structure of the host server (not the docker container) should be as follows

- The letsencrypt folder volume mount stores the certificate and key files for the frontend nginx web server container

- The nginx-conf folder maps and permanently stores the server configuration files from step 6 in the frontend nginx web server container


Note

When the cert is renewed, some cert files' ownership and permissions change to root, which makes reading the certificate fail. Change them to the appropriate user and permissions before deployment whenever the cert is renewed.


References

Nginx multiple server blocks listening to same port

Update: Using Free Let’s Encrypt SSL/TLS Certificates with NGINX

Multiple docker containers accessible by nginx reverse proxy 

How to Host Multiple Docker Containers on a Single Droplet with Nginx Reverse Proxy?

How to proxy_pass to a node docker container on port 80 with nginx container

How To Redirect HTTP To HTTPS In Nginx

nginx with Let’s Encrypt in Docker container

Multiple SSL certificates for a single domain on different servers

How nginx processes a request

NGINX multiple server blocks with reverse proxy

Module ngx_http_upstream_module

Differences Between A and CNAME Records

Secure your site with HTTPS

http directive error in nginx.conf

Example for a reverse multi-domain proxy using nginx and docker

Automated nginx proxy for Docker containers using docker-gen

Using Amazon ECR with the AWS CLI


Friday, January 1, 2021

[Apache] Updating http server to https

Background

No one will deny the importance of https for securing data transactions between client and server. However, in the past, when https was not very common on the Internet, we developers suffered from the registration cost of a certificate and the complicated setup of the Apache server. One of our clients, although not specifically requesting it, needs https at all times on their official web site.


Solutions

The steps to complete the https setup are not difficult nowadays; here is my system setup.

- A LAMP docker image (mattrayner/lamp:latest-1804, https://hub.docker.com/r/mattrayner/lamp) which is already set up and working in production.

- Amazon lightsail service for hosting

- A valid hostname registered at Hosting Speed, with full control of the domain name through the domain name panel

Steps

1. Register the SSL certificate (FREE) at Let's Encrypt (https://letsencrypt.org/) through the Lightsail web terminal or any other way you can access the virtual server. Install software-properties-common and certbot accordingly (for details refer to https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-using-lets-encrypt-certificates-with-wordpress)

2. Specify the domain name and the wildcard in environment variables, then use certbot to request the new certificate from Let's Encrypt.

3. Use the following command to start certbot in interactive mode, follow the instruction to complete the registration.

sudo certbot -d $DOMAIN -d $WILDCARD --manual --preferred-challenges dns certonly

Note: There is a step where Let's Encrypt verifies ownership of the domain, and you as the domain owner need to add a TXT DNS record to complete the challenge. I got stuck here because I misread Let's Encrypt's unclear instructions. The TXT record to fill into your DNS panel is _acme-challenge.example.com with a long string value, but I wrongly put the whole address into the name field, which caused the challenge to fail. It needs only the first part, "_acme-challenge", since ".goldenthumb.com.hk" is already appended for you during the DNS query, so there is NO NEED to put the whole address into the name field of the DNS record panel.

4. Complete the challenges in the interactive shell and your certificate will be issued. Note down the directory where the certificates are stored.

5. Update the docker-compose volumes configuration so the certificate files are mapped into the docker container; they will be used as the https certificate afterwards. At the same time, add a 443:443 port mapping in the ports configuration so https can function correctly.

6. You can also map the path /etc/apache2/sites-available locally for easy access when configuring the apache web server settings.

7. Update the 000-default.conf to read the certificate files in step 5, the file sample is as follows

https://drive.google.com/file/d/1usQ9kHb38SyCxW8oqYT1fMAja1_Xj54S/view

The sample configuration includes pointing to the certificate files, setting up a 443-port virtual host, and permanently redirecting all non-https requests to the https URL.
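A minimal sketch of what such a 000-default.conf might contain; the ServerName, DocumentRoot and certificate paths are assumptions, and the actual file is in the linked sample:

```apache
<VirtualHost *:80>
    ServerName example.com
    # permanently redirect all plain-http traffic to https
    Redirect permanent / https://example.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName example.com
    DocumentRoot /var/www/html

    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>
```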

8. Update the docker-compose file again: add a build configuration and remove the image configuration, because we are adding custom commands / scripts via a new Dockerfile.

9. Create a new Dockerfile based on the LAMP image (mattrayner/lamp:latest-1804), and add the following custom commands to it:

- a2enmod ssl

- service apache2 restart

10. Navigate to the Lightsail management panel and open inbound port 443 used by https.

11. Stop the running containers, rebuild, and bring the containers up again; your server is now https protected.


Certificate Renewal

1. Run "/usr/bin/certbot renew >> /var/log/certbot"

2. Reply to the interactive terminal: provide email, agree to terms, etc.

3. Add DNS record (TXT) for the challenges  

4. Create a file to accept the second challenge  

5. Restart the web server

Congrats, certificate renewed

References