Copy Drives with rsync on Windows from WSL

I have an E: drive that I have been using for years to dump my personal files. Fearing the drive might fail, I just purchased a 4TB drive, and I want to copy everything on E: to the new drive (F:). I could have dragged and dropped everything from E: to F:, but I wanted a geekier (well, maybe more robust) way to copy the files from the E: drive to the F: drive. Here is how I started the process with rsync from WSL.

rsync -avz --progress /mnt/e/* /mnt/f

I have about 1TB of data (that’s it?) and it will take hours for the copy to finish.

VPN Server After a Month Usage

It’s been more than a month since I provisioned a VPN server in Tokyo on Oracle Cloud Infrastructure. I’ve been watching TV shows and movies from Japan whenever I have time. I previously analyzed how much a semi-permanent VPN server in Japan on OCI would cost me, and I estimated it at $1.8 per month. Has it really been that little? Here is the actual cost.

So for the entire month of March (31 days), it was $1.98. At $0.06 per day it would have been $1.86, not $1.98. It turns out the resource cost, down to the fraction of a cent, is $0.064 per day, which comes out to $1.98. Either way, a stable and reliable VPN server with no noisy-neighbor problem for less than $5 is very, very reasonable.
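The arithmetic can be double-checked with awk: the rounded $0.06/day figure undershoots, while the actual $0.064/day rate reproduces the bill.

```shell
awk 'BEGIN { printf "31 days at $0.060/day = $%.2f\n", 31 * 0.060 }'   # $1.86
awk 'BEGIN { printf "31 days at $0.064/day = $%.2f\n", 31 * 0.064 }'   # $1.98
```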

One thing I found is that if you connect multiple devices at the same time, it can get unstable. That’s no problem for me because I can’t watch multiple TVs at once. If I wanted multiple devices connected, I’d provision one or two more VPN servers, since one ARM host is only around $2 after all.

Validating Downloaded File with File Size from Object Storage on OCI

This is a note for myself.

#!/usr/bin/env python3

import os

import oci.object_storage


def download_backup(bucket_name, file_name, local_dir):
    signer = oci.auth.signers.InstancePrincipalsSecurityTokenSigner()
    object_client = oci.object_storage.ObjectStorageClient(config={}, signer=signer)

    # stream the object to disk in 1 MiB chunks instead of reading it all into memory
    obj = object_client.get_object('id4qji14rv70', bucket_name, file_name)
    restored_file = os.path.join(local_dir, file_name)
    with open(restored_file, 'wb') as f:
        for chunk in obj.data.raw.stream(1024 * 1024, decode_content=False):
            f.write(chunk)

    # the content-length header tells us the object's size on the service side
    object_meta = object_client.head_object('id4qji14rv70', bucket_name, file_name)
    content_length = object_meta.headers['content-length']

    file_stats = os.stat(restored_file)

    if file_stats.st_size == int(content_length):
        print(f"Validated {content_length}")
    else:
        print(f"Validation failed. Expected: {content_length} Actual: {file_stats.st_size}")


if __name__ == '__main__':
    download_backup('backup', '', '/home/opc')

How to Validate a Big Downloaded File from Object Storage (OCI)

When you upload a relatively big file to Object Storage in OCI, it doesn’t have the MD5 hash ready for you. That’s because a big file is split into multiple parts that are uploaded separately. Then, when you download the file, the parts are downloaded sequentially and assembled into one file on the client side. Object Storage does not calculate the MD5 hash of the reassembled file on the service side, presumably due to the sheer processing power it would require. So when you view the file’s information on Object Storage, you don’t see the actual MD5 hash.

Though opc-multipart-md5 looks promising, it only covers the individual parts, not the reassembled file. To illustrate the point: when I uploaded a small file, the MD5 hash was calculated and available on the service side.

Now, how do we solve this problem? The best way is to calculate the MD5 hash with md5sum before you upload the file, and then attach the hash as metadata when uploading the file to Object Storage.

You can get the data by executing the following command.

 oci os object head --auth instance_principal -bn backup --name

Here is the data you get as JSON.

{
  "accept-ranges": "bytes",
  "access-control-allow-credentials": "true",
  "access-control-allow-methods": "POST,PUT,GET,HEAD,DELETE,OPTIONS",
  "access-control-allow-origin": "*",
  "access-control-expose-headers": "accept-ranges,access-control-allow-credentials,access-control-allow-methods,access-control-allow-origin,content-length,content-type,date,etag,last-modified,opc-client-info,opc-client-request-id,opc-meta-md5hash,opc-multipart-md5,opc-request-id,storage-tier,version-id,x-api-id",
  "content-length": "217286450",
  "content-type": "application/octet-stream",
  "date": "Sun, 20 Mar 2022 04:42:05 GMT",
  "etag": "500168df-c90a-4d35-b4f6-c7a6c99d5969",
  "last-modified": "Sun, 20 Mar 2022 04:40:27 GMT",
  "opc-client-request-id": "92C495DFAA8647C4B230B10580FED145",
  "opc-meta-md5hash": "1eed774bb61c15f8c50c7771e71bbb24",
  "opc-multipart-md5": "i1Ap4X2OnVAU7aK8RwxgMg==-2",
  "opc-request-id": "iad-1:mHbU0Aq4kW9abs3NCSv77cOKPDYdcQ74lsT4sPgfDI44xXLWLwYk8MKcX3WmPE7L",
  "storage-tier": "Standard",
  "version-id": "29ba4883-5fb3-4316-acbc-ceb218b5e3d1",
  "x-api-id": "native"
}

So the process would be: run oci os object head on the object you are going to download and keep the MD5 hash in a variable. Once you download the file, have md5sum calculate the MD5 hash of the downloaded file. Then check whether the calculated MD5 matches the one from oci os object head.
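The download side of that process can be sketched like this. The verify_md5 helper is the core of it; the surrounding oci calls follow the head command shown earlier, and the object name used in the usage comments is a made-up example, not a real backup of mine.

```shell
# verify_md5 FILE EXPECTED: recompute the hash locally and compare it to
# the md5hash metadata fetched from Object Storage.
verify_md5() {
    actual=$(md5sum "$1" | awk '{ print $1 }')
    if [ "$actual" = "$2" ]; then
        echo "MD5 validated"
    else
        echo "MD5 mismatch: expected $2, got $actual"
    fi
}

# Usage against Object Storage (object name is a hypothetical example):
# expected=$(oci os object head --auth instance_principal -bn backup --name 2022-03-20.zip \
#     | python3 -c 'import json,sys; print(json.load(sys.stdin)["opc-meta-md5hash"])')
# oci os object get --auth instance_principal -bn backup --name 2022-03-20.zip --file 2022-03-20.zip
# verify_md5 2022-03-20.zip "$expected"
```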

Here is the bash script I came up with to upload the file with the metadata.

filename=`date +%Y-%m-%d`.zip
rm -f /home/opc/$filename
zip -r /home/opc/$filename /home/opc/wordpress/*

md5=`md5sum /home/opc/$filename | awk '{ print $1 }'`
# the md5hash key shows up as opc-meta-md5hash on the object
json='{"md5hash": "'$md5'"}'
oci os object put --auth instance_principal -bn backup --file /home/opc/$filename --name $filename --force --metadata "$json"

I have cron’ed the bash script to run every day so the file backup is automated.
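For reference, a crontab entry along these lines drives the daily run; the 3 AM schedule and the script path are assumptions for illustration, not my actual crontab.

```
# m h dom mon dow  command
0 3 * * * /home/opc/backup.sh
```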

Bash Scripts

Bash scripting is my weakness as an engineer. I don’t like bash. Here are the reasons I don’t like it.

  • Bash depends on standard output. If you need to write a complex script, it quickly gets ugly dealing with a bunch of string mumbo jumbo and regexes.
  • Commands are each independent executables.
  • Functions only take positional parameters ($1, $2, …), and variables are global unless you remember to declare them local.
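A small illustration of the scoping gripe above: function arguments are positional only, and any variable not declared local lands in the global scope.

```shell
greet() {
    local name="$1"   # positional argument, scoped to the function
    leaked="oops"     # forgot "local": this escapes to the global scope
    echo "hello, $name"
}
greet world           # prints "hello, world"
echo "$leaked"        # prints "oops" -- the function's variable escaped
```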

So as a strong alternative, I use Python most of the time. Because I prefer not to use bash scripts, my bash scripting skill is limited. I believe bash has its place, but I tend to avoid it.

That said, I think I should spend more time on Bash to become at least fluent with it. I don’t consider myself fluent yet.

I used this bash script to install and configure OpenVPN. It is one of the most amazing scripts I have seen. There is a lot to learn from it.


Now I want to learn C++ on Linux… But it’s OK because this is my own space. I am just going over some basics of C++ coding.

#include <iostream>

using namespace std;

double square(double x)
{
    return x*x;
}

void print_square(double x)
{
    cout << "the square of " << x << " is " << square(x) << "\n";
}

int main()
{
    print_square(5);
}
I saved it as helloworld.cpp. Now to compile it, I run…

g++ helloworld.cpp -o helloworld

It generates a binary file helloworld. When I execute it, it shows something like this…

the square of 5 is 25

Very simple, but I love something like this. When I start learning a new language, I don’t use an IDE. That makes me learn the language and how it works better.

Tokyo VPN Server Cost

I provisioned an ARM64 VM in Tokyo last weekend to create a VPN server in Japan. I noticed that I was starting to get charged for it. Here is how much…

So far only $0.09. It looks like only $0.06 per day, which means $0.06 x 30 = $1.80 a month. A full-blown VPN server just for myself for $1.80 a month. The boot volume is what’s costing me; its size is 47GB, which is the default I picked.

I thought up to 4 ARM64 hosts were free, but that seems to apply only in the home region, which is us-ashburn-1 in my case. Still, $1.8 per month for my own VPN server in Japan is very, very cheap, and I have no problem keeping it running. I have been using the VPN server to watch movies and content from Japan and I have been very happy with it.

I used to have a VPN server in Japan on Azure and it cost me around $20 a month, mostly for data transfer, but OCI seems very generous with the amount of data transferred.

My Own VPN Server in Japan

I’m from Japan and I want to watch Japanese movies and TV programs from time to time. I subscribe to Amazon Prime in Japan, but my US IP address prevents me from watching movies on it. In my opinion, that kind of restriction really kills the advantage of the Internet, but there must be business reasons why they want to filter traffic by source IP address.

To get around it, you can use a VPN connection: connect to a server in Japan and watch content there as if you were in Japan. Yes, there are VPN services out there and you can easily get decent service at a reasonable price, but as an engineer, I thought: why not create my own VPN host in Japan?

I provisioned a host in Japan on OCI. It is an ARM64 Ubuntu host. After some Googling, I found a nice article that walked me through the steps to configure a VPN server. After 20 to 30 minutes, I was able to use the VPN server. It was a breeze.

As far as I can tell, the ARM64 Ubuntu host in Japan has been free so far, so as long as you are willing to go through some steps yourself, you can get a free VPN server in the country you want.

Migrated Yet Again

It didn’t feel right that I had to clone the boot volume and recreate the blog instance from it to recover my SSH key, so I created an ARM instance from scratch again.

It was very easy to install and configure the Docker containers this time because I already had an Ansible project to automate it.

If you are seeing this article, you are seeing it on yet another ARM host with Dockerized WordPress.

Recovering SSH Key

I stupidly reinstalled Ubuntu on my desktop, which had been running Linux Mint, just because I wanted to try it, but I ended up going back to Linux Mint again. I’m writing this blog post from Linux Mint. I casually formatted the hard drive and did a little distro hopping. When I tried to SSH into my blog host on OCI, I realized that I had lost the SSH key and no other host could access the blog host. Crap!

However, I was able to recover it relatively quickly. Here is the list of what I did.

  1. Cloned the existing boot volume.
  2. Created an instance from the cloned boot volume. When creating the instance, I had a chance to enter a new public key.
  3. Since it was a cloned volume, everything was already on it. Since the new instance got a different public IP, I just changed the DNS A record to point to it.

It’s all back up and I am able to SSH into the host again.