Measuring how much time vagrant up takes to complete orchestration

After a bit of searching on the net, the solution turned up: PowerShell offers a simple way to measure the time spent executing a command. In a Linux environment, the time command can be used, but this post is limited to the Windows environment.

Based on the documentation of PowerShell's Measure-Command, the command would be as follows:

Measure-Command {vagrant up}

However, when running the above PowerShell command, the console output of vagrant up is not shown on the screen.

Hence, more searching on the internet (source: "time – Timing a command's execution in PowerShell" – Stack Overflow) leads to the complete command below:

Measure-Command {vagrant up|Out-Default}
Console output is shown because the command being run is piped into Out-Default.
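If you also want the elapsed time in a friendlier unit, a minimal sketch (assuming vagrant is on your PATH) is to capture the TimeSpan that Measure-Command returns and print its TotalMinutes property:

# Capture the TimeSpan while still showing console output
$elapsed = Measure-Command { vagrant up | Out-Default }

# Print the duration in minutes, one decimal place
"vagrant up took {0:N1} minutes" -f $elapsed.TotalMinutes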

Based on the previous post on Vagrant orchestration of GitLab, the orchestration takes about 50 minutes on my computer with a TIME dotCom broadband connection.

Total time for the command to finish: a little over 50 minutes.

In conclusion, Measure-Command is an easy approach but lacks detail such as CPU time, idle time and time spent on network operations. It is not suitable when more detail is needed.

How to use Hashicorp Vagrant to quickstart GitLab docker compose sample

The code for the project is available at chowkarmeng/vagrant_gitlab (github.com). The Docker Compose file is based on the sample provided in GitLab Docker images | GitLab.

The improvement made was to change the VirtualBox folder sync to Docker volumes.

First, git clone the repository https://github.com/chowkarmeng/vagrant_gitlab.git

Fire up the quickstart by running "vagrant up" in the localdev folder, as sketched below.
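A rough sketch of the full flow (assuming localdev is a folder inside the cloned repository; adjust the path to wherever the Vagrantfile lives):

git clone https://github.com/chowkarmeng/vagrant_gitlab.git
cd vagrant_gitlab/localdev
vagrant up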

The process will take hours depending on the speed of your computer and speed of your internet connection.


Making your Windows RDP accessible from the internet

It is simple to make a computer on your home LAN accessible from the internet.

The prerequisite is understanding how Network Address Translation (NAT) works. Since your ISP gives your home connection a dynamic public IP, it is worth researching Dynamic DNS providers; this is easier than repeatedly checking websites like whatismyip.com. You then just fire up the RDP client and use the Dynamic DNS FQDN, which will always point at your current public IP.
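For example, from a command prompt (the hostname below is a placeholder for your own Dynamic DNS name):

mstsc /v:myhome.ddns.example.net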

Depending on the network equipment used in your home or provided by your Internet Service Provider (ISP), the setting may be called NAT or, as in my case, IPv4 port mapping.

The screenshot above shows how to create a NAT mapping (IPv4 port mapping) for a computer with the LAN IP 192.168.100.12. Take note that RDP uses port 3389 over TCP; hence, the internal port must be 3389.

The external port is set to 3389 for the sake of simplicity.

If more hosts on your LAN need to be reachable from the internet over RDP, there is no need to change the default RDP port on each host.

Instead, use an external port that is not blocked by your ISP. For example, assuming higher port numbers are not blocked, you could use 13389, 23389, 33389, 43389, 53389 or 63389, as long as the number is no more than 65535 and no less than 1024. Those numbers are used as the external port while the internal port stays at 3389, as in the example below.
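For example (the LAN IPs below are illustrative), the mappings could look like this:

External TCP 3389  -> 192.168.100.12:3389
External TCP 13389 -> 192.168.100.13:3389
External TCP 23389 -> 192.168.100.14:3389

On the client side, you would then reach the second host with something like mstsc /v:myhome.ddns.example.net:13389.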

Setting up a custom domain in Zoho Mail for free

This is a follow-up to the previous post, to enable my vanity email.

A friend recommended that I try out Zoho Mail, as it is free and offers the use of custom domains.

Here are the results and how to get started.

First, sign up for a free plan with Zoho at Zoho Mail Pricing and Editions – Free for 5 Users.

Do not panic; scroll down to find the free plan sign-up.


DNS routing traffic for subdomains using AWS Route 53

A longer explanation is available in Routing traffic for subdomains – Amazon Route 53.

GROUND RULE (or rule of thumb) of DNS: NEVER change the NS records of your domain at your domain registrar before completing the setup of your DNS server. Doing so can result in up to 72 hours of outage.

Using my domain karmeng.my as an example, the objective is to make chow.karmeng.my a valid domain for an email host that offers custom domains.
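Once the hosted zone for chow.karmeng.my exists in Route 53 and its NS records have been added to the parent karmeng.my zone, the delegation can be verified from any machine (the name servers returned will be whatever Route 53 assigned):

dig NS chow.karmeng.my +short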


Vagrant to orchestrate Ubuntu in VirtualBox, installing boto3 and Ansible

At the time of this post, the compatibility matrix of vagrant and VirtualBox is as follows:

Vagrant version    VirtualBox version
2.3.7              7.0.10, 7.0.12

Vagrant and VirtualBox compatibility matrix

Unfortunately, Vagrant 2.4.0 does not work well with VirtualBox 7.0

This post was created using Vagrant 2.3.7 and VirtualBox 7.0.10

To get started, after installing Vagrant from the HashiCorp webpage, a Vagrantfile needs to be created. The most basic setup Vagrant needs in order to work is a folder containing a Vagrantfile.
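As a minimal sketch (the folder and box names below are assumptions; any Vagrant box will do):

mkdir vagrant-demo && cd vagrant-demo
vagrant init ubuntu/focal64   # generates a basic Vagrantfile in the current folder
vagrant up                    # boots the VM described by that Vagrantfile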

Additional post-startup scripts used in this example to complete the installations are setup.sh, install_ansible.sh, install_boto3.sh and install_python3.sh.


Setting up OpenVPN using Amazon Lightsail

Pre-requisites:
1. Download "openvpn-install.sh" from GitHub – angristan/openvpn-install: Set up your own OpenVPN server on Debian, Ubuntu, Fedora, CentOS or Arch Linux (see the sketch after this list).
2. Have Amazon Lightsail activated with quota in your AWS account.
* Credits to cyberciti for the instructions and the introduction to the script: Ubuntu 20.04 LTS Set Up OpenVPN Server In 5 Minutes – nixCraft (cyberciti.biz)
3. An SSH key pair is created and added to your AWS account.
4. Make sure an OpenVPN client is installed on your computer: OpenVPN Connect – Client Software For Windows | OpenVPN
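A sketch of prerequisite 1, to be run on the instance once it is reachable over SSH (the raw URL is an assumption based on the repository's default branch; check the GitHub page for the current link):

curl -fsSL -o openvpn-install.sh https://raw.githubusercontent.com/angristan/openvpn-install/master/openvpn-install.sh
chmod +x openvpn-install.sh
sudo ./openvpn-install.sh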

Steps:
Creating an Ubuntu AWS instance
1. Click on “Create Instance”

2. Select OS only, and choose your favorite Linux distro; in my case Ubuntu was chosen.
At the time this blog post was created, the OpenVPN install script works on both Ubuntu 22.04 and 20.04.


Setting up AWS to use Amazon Lightsail

Requirements:

  1. Resides in a country that is not sanctioned
  2. Credit card
  3. Enabling and Setting up MFA

Problem statement:

The misadventures started when attempting to migrate this blog from a web host to AWS. Amazon Lightsail was used as it was the easier option and it is friendly to the wallet.

The request to have AWS increase the quota for my AWS account started on 31st July 2023. After logging into the AWS web console dashboard, click on your user name at the upper right, then click Service Quotas.
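If you prefer the AWS CLI, a rough equivalent is the Service Quotas API (the quota code and desired value below are placeholders; look up the real code with list-service-quotas first):

# List Lightsail quotas to find the quota code you need
aws service-quotas list-service-quotas --service-code lightsail

# Request the increase (quota code and value are placeholders)
aws service-quotas request-service-quota-increase \
    --service-code lightsail \
    --quota-code L-XXXXXXXX \
    --desired-value 5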


Bitnami: installing Let's Encrypt SSL for a custom fancy domain

Based on the link https://docs.bitnami.com/aws/how-to/generate-install-lets-encrypt-ssl/

For users with a custom domain similar to mine, chow.karmeng.my, it is important to change the --tls switch in the lego command to --http:

sudo /opt/bitnami/letsencrypt/lego --http --email="youremail@yourdomain.com" --domains="domain.com" --domains="fancy.domain.com" --path="/opt/bitnami/letsencrypt" run
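Note that --http uses the HTTP-01 challenge on port 80, so the web server bundled with the Bitnami image may need to be stopped first. On most Bitnami images this is done with ctlscript.sh (the path may differ on your image):

sudo /opt/bitnami/ctlscript.sh stop
# run the lego command above, then bring the services back up
sudo /opt/bitnami/ctlscript.sh start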

Working with AWS EC2 burstable Instances

Technically inclined folks may choose to use the "free" tier for the first 2 months of AWS Lightsail. Free comes at the cost of burstable CPU consumption. Lightsail users need to get familiar with the overview of Average CPU Utilization and Remaining CPU burst capacity, as shown in the screenshot below.

Once an AWS EC2 burstable instance has started, AWS adds a small amount of burst credit as long as the instance's CPU usage does not exceed 10%. If deploying software onto the instance pushes CPU usage above 10%, CPU burst capacity is deducted. When the remaining CPU burst capacity reaches 0%, the instance's performance is capped.

For the CPU burst capacity to build back up to 100%, the instance's CPU utilization must stay at or below 10%. This allows the burst capacity to accumulate to 100% over a period of roughly 27 hours.
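For a plain EC2 T-series instance (as opposed to Lightsail, which shows burst capacity in its own console), the credit balance can be checked through CloudWatch; the instance ID and dates below are placeholders:

aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUCreditBalance \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time 2023-07-31T00:00:00Z \
    --end-time 2023-08-01T00:00:00Z \
    --period 3600 \
    --statistics Average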

Therefore, developers and system architects need to understand this burstable-CPU behaviour of EC2 and allow ample time for the burst capacity to build up. In the real world, when deploying applications, the software provisioning may use up all of the CPU burst capacity and cause a poor user experience of software performance in the AWS cloud. If I were an enterprise solutions architect, I would not use EC2 instances with burstable CPU capacity.