How to Install htop on Oracle Linux 7

I wrote an article on how to install htop on Oracle Linux before. Thanks to Markus, I learned that installing htop on Oracle Linux 8 is just a matter of enabling a repo. I have an Oracle Linux 7 host that I use for a customer and I wanted to install htop on it. I looked for the EPEL repo in /etc/yum.repos.d/oracle-linux-ol7.repo but could not find it, so the only option for me was to add the EPEL repo under /etc/yum.repos.d.

I looked for the EPEL repo for Oracle Linux 7 and added the following to /etc/yum.repos.d/oracle-epel-ol7.repo:

[ol7_developer_EPEL]
name=Oracle Linux $releasever EPEL Packages for Development ($basearch)
baseurl=https://yum$ociregion.$ocidomain/repo/OracleLinux/OL7/developer_EPEL/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
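
Before installing anything, you can confirm that the new repo is picked up (a quick sanity check; the exact output will vary):

sudo yum repolist enabled | grep -i epel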

Then run the following commands.

sudo yum update
sudo yum install htop

That's it: htop is now installed on Oracle Linux 7. 🙂

Uploading Backup File to OCI’s Object Storage via Jenkins

I needed to upload a backup zip file from a Windows agent to Oracle Cloud Infrastructure’s Object Storage. Here is what I did.

First, install the OCI CLI for Windows; please follow this link to install it. Then install the Jenkins agent on the same machine; I have step-by-step instructions on how to do it. Once it is installed, make sure to change the account the agent runs as to the account you used to install the OCI CLI. Otherwise, it won’t work.
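
Before wiring it into a job, it is worth verifying that the OCI CLI works under the account the agent runs as. A quick check from that account (the namespace call doubles as an authentication test):

oci --version
oci os ns get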

In the Jenkins job, you can use the Compress-Archive cmdlet to zip up the directories you want to back up ($zipPaths) into a single zip file ($zipFile).

Compress-Archive -Path $zipPaths -DestinationPath $zipFile

Please note that Compress-Archive has a 2 GB file size limitation. I heard it's a limitation of the underlying API.

Now that you have the zip file, you can upload it to Object Storage like the following.

oci os object put -bn backup --file $zipFile -ns "yournamespace" `
	--parallel-upload-count 5 --part-size 20 --verify-checksum

I am recommending this method to a customer because Object Storage is relatively cheap and secure storage on OCI. It also supports retention rules and replication. Great features for a reasonably priced service.

Cheapest Way to Blog with Your Own Domain

Most hosting services want you to buy a domain and host your site with them. As I was working on my blog site, I learned how I could change DNS records to point to my free tier host on Oracle Cloud Infrastructure. I wanted to do it because iPage.com was too slow for me.

Then I thought: what if I could use a service that lets me just buy domains and manage my own DNS records, without any hosting, and host my site on OCI’s free tier?
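
In practice, that just means creating an A record for the domain that points at the OCI instance's public IP. A quick way to verify the record resolves where you expect (hypothetical domain):

dig +short yourdomain.com A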

When I was watching one of Scott Hanselman’s YouTube videos, I noticed something: he was using DNSimple for his DNS management. It's a service where you can buy domains and manage DNS records and SSL certs.

I pay $6 every month for the service and $16 every year for my domain: $6 × 12 + $16 = $88. My blog is hosted on OCI’s free tier, so the hosting itself costs nothing. That means I can run my own blog on my own domain for $88 per year, which I think is quite reasonable.

Of course, this method requires pretty good knowledge of DNS, web servers, and SSL, but if you are an engineer or planning to be one, I'd highly recommend it.

How to Add an Additional Public Key to an Existing Instance on OCI

You may want to access an existing instance on Oracle Cloud Infrastructure from another client machine that has a different public/private key pair. I looked around the net and could not find solid documentation on how to do it. Basically, the OCI console itself does not support adding another public key to an existing instance.

I thought of my previous post about passwordless SSH. Basically, you add a public key to ~/.ssh/authorized_keys so that anyone holding the matching private key can SSH into the instance. I tried adding a new public key to ~/.ssh/authorized_keys on an existing instance and it worked. I will describe the steps below.

  1. On a new client machine, execute ssh-keygen to generate a new public/private key pair. If you have already done it, you can skip this step.
  2. It generates ~/.ssh/id_rsa (private key) and ~/.ssh/id_rsa.pub (public key).
  3. Print the public key on your terminal by executing cat ~/.ssh/id_rsa.pub
  4. Copy the public key to a machine that already has access to the instance. You could email it to yourself or use something like Dropbox.
  5. SSH into your existing instance.
  6. Open authorized_keys file. vim ~/.ssh/authorized_keys
  7. Copy the new public key and paste it on a new line at the end of the file.
  8. Save the file and exit vim. (:wq)
  9. Go back to the machine where you generated the public key.
  10. SSH into the instance from the new machine; you now have access.

This method can probably be applied to other cloud services as well, as long as the instance is running a Linux distro.
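
If you prefer not to edit authorized_keys by hand, steps 5 through 8 can be collapsed into a single command run from the machine that already has access (hypothetical key file name, user, and IP):

cat new_client_id_rsa.pub | ssh opc@<instance-ip> 'cat >> ~/.ssh/authorized_keys'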

How to Install htop on Oracle Linux

Since I want to monitor resources on the Oracle Linux host running this blog, I wanted to install htop. top does its job, but I prefer htop because it has more features I'd like to use.

First add the yum repo that has htop.

sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

And then install htop.

sudo yum install htop

So here is what it looks like.

From what I see, it looks like there are 2 cores on one CPU, and memory shows 687 MB even though the OCI UI says 1 GB. That said, this site is running fine without resource contention.

New Server

I’ve migrated my blog to yet another host on OCI’s free tier. It was much easier this time because I had all my content in one zip file and simply expanded it on the new host. Dockerizing the whole site completely separates the data from the server and makes migration so much easier.
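
A minimal sketch of what that move looks like, assuming the whole site lives in a single directory (hypothetical paths, user, and host):

zip -r blog.zip blog/                 # on the old host
scp blog.zip opc@<new-host-ip>:~      # copy the archive over
unzip blog.zip                        # on the new host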

As a blogger (though a mediocre one) who maintains the whole thing by himself, I cannot live without this anymore. It’s really convenient and much more maintainable.

OCI Free Tier Means Free

This tech blog has been hosted on OCI's free tier for a few weeks now and Oracle has not charged me a dime. It really has been free. My WordPress site is also much faster on OCI than it was at iPage.com.

One caveat is that I still have to pay for my own domain and hosting at iPage.com but it’s a cheap service so it doesn’t hurt me too much.

Everything has been running very well on fully Dockerized components, which is very cool. It's much faster, and I have control over every single detail of my site. As an engineer, this is exactly what I wanted. Though it was quite a bit of work to figure out how to get it all working, it has been totally worth it.

Dockerized WordPress Architecture

For the currently running WordPress site, I envisioned and applied the architecture below.

All the components in the rectangle are Docker containers. The NGINX container accepts HTTP(S) requests and routes them to the WordPress container, which listens on port 80 only within the Docker internal network. I could have exposed the WordPress container on port 80 directly, but then it would not have been able to handle HTTPS requests, so I decided to have the NGINX container handle the SSL cert.

version: "3.9"
    
services:
  db:
    image: mysql:5.7
    volumes:
      - ./db_data:/var/lib/mysql 
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: hogehogehogehoge
      MYSQL_DATABASE: mydatabase
      MYSQL_USER: buhibuhi
      MYSQL_PASSWORD: foobar
    networks:
      proxynet:
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - '8080:80'
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: hogehogehogehoge
    networks:
      proxynet:
  wordpress:
    image: wordpress:latest
    container_name: wordpress
    depends_on:
      - db
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: buhibuhi
      WORDPRESS_DB_PASSWORD: foobar
      WORDPRESS_DB_NAME: mydatabase
      WORDPRESS_DEBUG: 'true'
    volumes:
      - ./html:/var/www/html
      - ./wp-content:/var/www/html/wp-content
    networks:
      proxynet:
  reverse:
    image: nginx:latest
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf.d:/etc/nginx/conf.d
    ports:
      - "80:80"
      - "443:443"
    restart: always
    networks:
      proxynet:
volumes:
  db_data: {}
  wordpress: {}
networks:
  proxynet:

Paste the whole YAML into a docker-compose.yaml file. There are a few more steps to do before you can spin up the containers.
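
The compose file bind-mounts several host directories, so create them next to docker-compose.yaml first (the names come straight from the volumes entries above):

mkdir -p db_data html wp-content nginx/conf.d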

Let’s set up a firewall on the OS side. Ubuntu utilizes ufw. Execute the following command to install it.

sudo apt install ufw

Let’s open a few necessary ports…

sudo ufw allow 80 # for NGINX http request redirect handling
sudo ufw allow 443 # for SSL (HTTPS) requests
sudo ufw allow 8080 # for phpMyAdmin access
sudo ufw allow 22 # for SSH access

You will want the firewall to start when the OS starts, so execute the following command to enable it.

sudo ufw enable
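
You can confirm that the firewall is active and the rules are in place with:

sudo ufw status verbose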

In the directory where you created the docker-compose.yaml file, create an nginx/nginx.conf file with the following configuration.

user  nginx;
worker_processes  2;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
   worker_connections  1024;
   use epoll;
   accept_mutex off;
}

http {
   include       /etc/nginx/mime.types;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

   default_type  application/octet-stream;

   log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';

   access_log  /var/log/nginx/access.log  main;

   sendfile        on;
   #tcp_nopush     on;

   keepalive_timeout  65;

   client_max_body_size 300m;
   client_body_buffer_size 128k;

   gzip  on;
   gzip_http_version 1.0;
   gzip_comp_level 6;
   gzip_min_length 0;
   gzip_buffers 16 8k;
   gzip_proxied any;
   gzip_types text/plain text/css text/xml text/javascript application/xml application/xml+rss application/javascript application/json;
   gzip_disable "MSIE [1-6]\.";
   gzip_vary on;

   server {
       listen       80 default_server;
       listen       [::]:80 default_server;
       server_name  _;
       sendfile        on;
       #tcp_nopush     on;

       keepalive_timeout  65;

      client_max_body_size 300m;
      client_body_buffer_size 128k;

      gzip  on;
      gzip_http_version 1.0;
      gzip_comp_level 6;
      gzip_min_length 0;
      gzip_buffers 16 8k;
      gzip_proxied any;
      gzip_types text/plain text/css text/xml text/javascript application/xml application/xml+rss application/javascript application/json;
      gzip_disable "MSIE [1-6]\.";
      gzip_vary on;
      return 301 https://hayato-iriumi.net$request_uri;
   }
    include /etc/nginx/conf.d/*.conf;
}


You need another configuration file at nginx/conf.d/ssl.conf. As you can see, this file handles HTTPS requests with the certificate and routes them to the wordpress container listening on port 80 within the Docker network, so port 80 of the WordPress container is never exposed on the host side.

upstream wordpress_upstream {
    server wordpress:80;
}

server {
    server_name hayato-iriumi.net;
    listen 443 ssl;
    ssl_certificate /etc/nginx/conf.d/ssl/certificate.crt;
    ssl_certificate_key /etc/nginx/conf.d/ssl/private.key;

    location / {
        proxy_set_header        Host $host:$server_port;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto $scheme;
        resolver 127.0.0.11;
        proxy_redirect http:// https://;
        proxy_pass http://wordpress_upstream;
        # Required for new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_buffering off; # Required for HTTP-based CLI to work over SSL
    }
}


Obviously, you need to have your SSL cert ready. I have blogged about a free SSL solution here, so please refer to it for your SSL cert. Place the SSL cert files at the locations specified in ssl.conf.
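
Since ./nginx/conf.d is bind-mounted to /etc/nginx/conf.d in the compose file, the cert and key just need to sit under ./nginx/conf.d/ssl on the host (the source paths below are placeholders for wherever your cert files live):

mkdir -p nginx/conf.d/ssl
cp /path/to/certificate.crt nginx/conf.d/ssl/certificate.crt
cp /path/to/private.key nginx/conf.d/ssl/private.key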

Finally, go back to the directory where you created the docker-compose.yaml file and execute docker-compose up -d; you should then see the initial WordPress setup UI.

Once you run through the WordPress installation process, you are halfway there: it creates the necessary tables in the MySQL database. With the basic data structure created automatically, you need to export the data from the following tables in your old database and import it into the new one. It is easy to export and import data using the phpMyAdmin UI even if you don’t know anything about SQL (knowing the basics of SQL helps when there are problems importing data). A command-line alternative is sketched after the list.

  • wp_posts
  • wp_terms
  • wp_term_relationships
  • wp_comments
  • wp_commentmeta
  • wp_postmeta
  • wp_term_taxonomy
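
If you would rather do the export from the command line instead of the phpMyAdmin UI, something like the following works against the db service defined in docker-compose.yaml (the credentials are the ones from the compose file; -T keeps the dump from being garbled by a pseudo-TTY):

docker-compose exec -T db mysqldump -u buhibuhi -pfoobar mydatabase \
    wp_posts wp_terms wp_term_relationships wp_comments wp_commentmeta wp_postmeta wp_term_taxonomy \
    > wp_tables.sql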

Your WordPress site will still lack its images. Back up all the files under wp-content/uploads from the old site and restore them into the wp-content/uploads directory next to docker-compose.yaml. I also had to run the following command to be able to upload files from the WordPress UI.

sudo chmod 777 wp-content

This may be too open. I may tighten it up a little more later.
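
A tighter alternative sketch, assuming the official wordpress image, where Apache runs as www-data (UID 33) inside the container:

sudo chown -R 33:33 wp-content
sudo find wp-content -type d -exec chmod 755 {} \;
sudo find wp-content -type f -exec chmod 644 {} \;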

I have put too much information in this blog post already. I will write more about Dockerizing WordPress as I find time.

Moved to OCI’s Free Tier

I decided to move to OCI’s free tier because the charges were going up faster than I expected. If you can see this blog post, it is being served from the free tier host.

I had already gone through the migration process once, so it wasn’t too hard, but it was still some work.

I was starting to get charged 67 cents a day. The ads on this blog don’t earn that much, so I was starting to go negative on the budget.

So far, from June 3 to June 17, it has only been $3.13, but you can see the cost was starting to climb in the last few days. The shape of the host was VM.Standard.E3.Flex with 2 AMD CPUs and 2 GB of memory. $3.13 is nothing for the technical knowledge I gained migrating my WordPress site to my own cloud, but I wanted to make it economical so it would be more sustainable. I figured 1 AMD CPU and 1 GB of memory would do, so I moved everything to the free tier host. For more details about OCI’s free resources, click here.

As far as I know, anyone who has an OCI account can have a free tier host in the us-ashburn-1 region, Availability Domain 3, with an AMD CPU. I saw a free tier host with an ARM CPU just recently, but I will wait and see whether docker-compose releases bits for the ARM processor. I don’t know if they will charge me for anything, but we will see. Once DNS propagates to this new host, I am going to terminate the old one.

Edit: I was just digging through OCI’s UI and found that the combination is free. I just need docker-compose for the ARM processor…

Restoring MySQL Database for WordPress

Backing up is one thing; restoring the data is another important piece of migrating a WordPress site. Restoring the data and being able to access it from the Dockerized WordPress was not easy for me.

I restored the entire database and tried to access it from the newly Dockerized WordPress instance, but it would not work; it raised a “page took too long to load” type of error. What I resorted to was restoring the database as blog_bak, exporting data from several tables, and then importing them into the new database, and the site worked. I will describe it in detail.

Open your backed-up database file (.sql) in a text editor and change the part where it creates the database with the original database name to blog_bak, so that the database name does not conflict with the new instance. If you don’t see a CREATE DATABASE statement at the beginning of the file, you may have to re-export your database using a custom export with the Add CREATE DATABASE option.

--
-- Database: `hayato_iriumi_db`
--
CREATE DATABASE IF NOT EXISTS `blog_bak` DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci;
USE `blog_bak`;

Zip the database backup into [yourfilename].sql.zip to get it ready to be restored on the target MySQL server. It is much faster to restore a database using a zipped file.
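
For example (hypothetical file name):

zip blog_backup.sql.zip blog_backup.sql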

Navigate to the Import tab on the target MySQL server and upload the zip file to restore your old database as blog_bak.

This will restore your blog database as blog_bak. I am going to stop here because we haven’t gone over spinning up the new MySQL server with Docker yet. You can get ready up to the point of editing the exported SQL file.

Will continue on this topic later because there is a lot more to cover.