How to Download SSL Certificate from a URL Using openssl

This is a note to myself because I don’t do this often enough to remember the whole thing…

openssl s_client -showcerts -connect example.com:443 < /dev/null | openssl x509 -outform DER > cert.der

(example.com:443 and cert.der are placeholders; substitute your server's host:port and your output file name.)

You obviously need openssl on your machine. If you are on Windows, installing Cygwin will give you the ability to run openssl.
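To sanity-check what you downloaded, openssl x509 can convert between formats and print the certificate's fields. Here is a self-contained sketch; it generates a throwaway self-signed certificate as a stand-in for the one you actually downloaded.

```shell
# Stand-in for a downloaded certificate: a throwaway self-signed cert.
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 1 -nodes -subj "/CN=demo.example.com" 2>/dev/null

# PEM -> DER, the same binary format the command above produces.
openssl x509 -in cert.pem -outform DER -out cert.der

# Inspect the DER file: subject and validity dates.
openssl x509 -in cert.der -inform DER -noout -subject -dates
```

The same -inform/-outform flags work in either direction, so you can turn a downloaded DER file back into PEM the same way.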

How to Assign Static IP Address on CentOS 7

When provisioning a server, it’s usually good practice to assign a static IP address. I have provisioned a CentOS 7 server and run yum update -y to update the default packages.

Checking Your Current IP Address

I have logged in to the console as root so that I can check the current IP address. Enter the following command.

# ip addr

You can see the IP address this machine obtained from the DHCP server. I’m going to give it a static one instead. Fortunately, CentOS 7 comes with a utility that makes it easy to assign a static IP address.

Assign a Static IP Address

Enter nmtui in your terminal.

# nmtui

You will see a UI like the image below.

Hit the Enter key and you will see one or more network interfaces to configure. Select the network interface and hit Enter.

Navigate to <Show> for IPv4 and hit Enter. Then change Automatic to Manual.

Now navigate your cursor to Addresses <Add…> and hit Enter.

Enter the static IP address and its gateway (these depend on your environment); for DNS servers I used OpenDNS. Navigate your cursor to OK at the lower right corner of the screen and hit Enter.

Restart the network daemon by entering the following command.

# systemctl restart network

At this point the terminal might look frozen if you SSH’ed into the machine, because its IP address has changed. Reconnect and check the IP address again with ip addr. You should now see the new static IP address you just configured.
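For reference, nmtui saves these settings through NetworkManager into an ifcfg file under /etc/sysconfig/network-scripts. A sketch of what the result might look like (the interface name and addresses are examples for illustration; the DNS entries are the OpenDNS resolvers):

```
# /etc/sysconfig/network-scripts/ifcfg-ens192  (example interface name)
BOOTPROTO=none        # no DHCP; use the static addresses below
ONBOOT=yes
IPADDR=192.168.1.50   # example static address
PREFIX=24
GATEWAY=192.168.1.1
DNS1=
DNS2=
```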

Creating an AD User on Windows Server Core

Windows Server Core has been around for a while, but I have not used it as much as I should. I love headless Linux because it doesn’t have unnecessary GUI overhead, and Windows Server Core is supposed to be the headless Windows Server.

I have installed Windows Server 2019 in Server Core mode and promoted it to a domain controller. There are many articles out there on promoting Windows Server to a domain controller if you look it up.

What I want to do in this article is summarize the steps to create an AD user and add it to the correct AD group.

Listing AD Groups

I want to make sure I know which AD group the new user should belong to. Let’s see how we can list them.

Let’s log in to Server Core and type powershell to start a PowerShell console.

Get-ADGroup -Filter * | Select name | more

You will see a result that shows all the AD groups on the domain controller.

Get-ADGroup result

Let’s take a look at Domain Admins group by entering the following command.

Get-ADGroup -Filter {name -eq "Domain Admins"}

Then you will get details of the group.

Adding a New AD User

I intend to create a user that belongs to the Domain Admins group. Here is the script for it.

$pass = "YourPassword" | ConvertTo-SecureString -AsPlainText -Force
$givenName = "FirstName"
$surName = "LastName"
$fullName = "$givenName $surName"

$username = "Your SamAccountName e.g. hiriumi"

New-ADUser -Name $fullName -GivenName $givenName -Surname $surName -SamAccountName "$username" -UserPrincipalName "$username@[Your Domain e.g.]" -AccountPassword $pass -Enabled $true

Add-ADGroupMember -Identity "Domain Admins" -Members "$username"

The script above creates a user with a password and then adds the user to the Domain Admins group. This allows the user to do pretty much all administrative work, such as joining computers to the domain, managing accounts, and so forth.

Finally, let’s check that the user I just created is actually a member of the Domain Admins group by executing the following command.

Get-ADGroupMember -Identity "Domain Admins"

It definitely shows my user in the group I wanted it to belong to.


Creating users in the appropriate AD groups is the first thing to do before you can start managing a domain controller, and it’s important to be able to manage them with PowerShell.

I will write later on my blog about how Linux machines can join a Windows domain.

Buying a Server

I just ordered a server. Not a desktop PC, but a server. I never thought of buying one, but I have been wanting either a Mac or a PC where I can test things.

This morning I searched for “PC server” and found this one. It turned out that a PC server is a much better deal. Now that my older son has come back home with a computer science diploma, I thought he and I should have a home lab.

Here is the spec.

  • 2 × 2.93 GHz Intel Xeon 6-core processors = 12 cores
  • 2TB hard drive
  • 6 SATA slots
  • 64GB DDR3 memory

All of this for $520. I was going to get a Mac mini, but this is a far better deal. I intend to install ESXi and make it a VMware server. Sure, I could do proof-of-concept work on AWS, Azure, or whatever cloud solution I may choose, but on-prem is a cheaper way to do experiments.

I can’t wait to share my experiments with it here on my blog!

How to Force Code Deploy to Puppet Master

I previously wrote about how to set up Git integration with Puppet, but the way it’s set up, code changes are not deployed to the Puppet master right away. We need to do something about this.

First of all, it’s quite easy to set up a webhook in GitLab to tell Puppet “code has been pushed, so deploy now”. Please see this documentation on how to set it up. This is the best way to go about deploying code changes to the Puppet master.

What if your source control, such as GitHub or GitLab, cannot reach your Puppet master because it is behind a firewall or a router? Here is a poor man’s solution.

Generate API Token

We will utilize Puppet’s API to get the code deployed. Let’s generate an API token.

  1. ssh into Puppet master.
  2. Execute the following command to generate the token. --lifetime 365d in the command below means the token will expire in 365 days.
    $ puppet-access login --lifetime 365d
  3. The API token is generated at the following location.

Deploy the Code

  1. Now execute the command to deploy the code from Git.
    $ puppet-code deploy --all --wait
  2. On any agent or on the Puppet master, execute the following command to apply the change (prefix with sudo if you are not root).
    $ sudo /opt/puppetlabs/bin/puppet agent -t


As I mentioned, the best way to get code changes deployed to Puppet is a webhook: the Git server sends an HTTPS request to the Puppet master to let it know there was a code change, and Puppet syncs the code. Otherwise, you can use the technique above. To take it a little further, a bash script that executes puppet-code deploy and puppet agent -t could be run as a cron job. It’s really up to you.
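As a sketch of that cron-job idea, a single crontab entry can chain the two commands. The schedule and file name here are hypothetical, the paths assume a default Puppet Enterprise install, and puppet-code needs a non-expired token for the root account:

```
# /etc/cron.d/puppet-code-deploy  (hypothetical)
# Every hour: deploy code from Git, then apply it on this node.
0 * * * * root /opt/puppetlabs/bin/puppet-code deploy --all --wait && /opt/puppetlabs/bin/puppet agent -t
```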

How to Integrate Puppet Master with Git

Pretty much all of us engineers want to manage Puppet code in source control for traceability and manageability. I’m going to write step-by-step documentation on how to do just that, based on this document, this documentation, and some other documentation I found by Googling. I had a hard time finding a single document that takes me all the way to Puppet working out of Git source control, so here it is.

Create a Control Repo from the Puppet template

We will create a Git repo based on a template that Puppet offers on GitHub. Here is a picture of how it works.

Getting Puppet Master Ready to Sync

First of all, ssh into the Puppet master you installed as root and generate an SSH private/public key pair using ssh-keygen.

# ssh-keygen -t rsa -b 2048 -C 'code_manager' -f /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa -q -N ''

It generates two files under /etc/puppetlabs/puppetserver/ssh: id-control_repo.rsa (the private key) and (the public key; ssh-keygen appends .pub). Let’s print the content of the public key to get the text.

# cat /etc/puppetlabs/puppetserver/ssh/

Next, make sure that the pe-puppet user created by the Puppet installation owns the directory /etc/puppetlabs/puppetserver/ssh. Execute the following command.

# chown -R pe-puppet:pe-puppet /etc/puppetlabs/puppetserver/ssh

Next, make sure that the pe-puppet account has read, write, and execute permissions on the files in the SSH key directory.

# chmod 755 /etc/puppetlabs/puppetserver/ssh/

Getting a Git repo Ready

The Puppet master needs a Git repo to pull code from; in this example I will use GitLab as my source. This process does not need to be performed on the Puppet master. Please skip this section if you already have access to a GitLab (or other Git) repository.

SSH Key to GitLab

  1. Open your terminal or console.
  2. Generate a SSH key.
    $ ssh-keygen -t ed25519
  3. Print the public key on your console.
    $ cat ~/.ssh/
  4. Navigate to GitLab in your browser.
  5. Click on the icon at the upper right corner of the screen and select Settings.
  6. Click SSH Keys from the menu on the left.
  7. Copy the public key printed in the terminal earlier, paste it into the Key textarea, and click the Add key button.
  8. Now you are ready to access GitLab.

Create a New Repo on GitLab

In this section, we will create a repo on GitLab for the Puppet master to pull code from. The source control does not have to be GitLab; any Git server will do, as long as your Puppet master can reach it.

  1. Navigate to GitLab in your browser and log in.
  2. Click New Project button.
  3. Enter control-repo as the Project name (or whatever you like). Keep the project private if you don’t want to expose the code, but I am making this one public because it is just an example and I would like to share the code later on. Click the Create project button.

Cloning control-repo from GitHub to GitLab

Puppet provides us with a template source on GitHub, so we will copy that repo to our own repo on GitLab. The following process can be done from your desktop. As long as you end up with a copied repo on GitLab, our mission in this section is accomplished.

  1. Open your terminal and navigate to the directory where you want to store the code. (e.g. C:\Users\[myaccount]\Dev)
  2. Clone the source repo from GitHub.
    $ git clone
  3. We don’t want to push any changes to GitHub, so we will remove the origin remote.
    $ git remote remove origin
  4. Add the URL of the GitLab repo we created in the previous section as the new origin. Please change the URL accordingly.
    $ git remote add origin
  5. Push the code to the GitLab repo.
    $ git push --set-upstream origin production
  6. You can verify the URL of the remote by executing the following command.
    $ git remote get-url origin
  7. When you open the GitLab UI, you will see that a production branch was created automatically, because the original repo has a production branch. All code must be pushed to the production branch for it to take effect.
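The steps above can be run end-to-end as a script. The sketch below is self-contained: it uses two local bare repositories as stand-ins for the GitHub template and your GitLab repo, so both URLs are illustrative rather than real.

```shell
set -e
cd "$(mktemp -d)"

# Stand-ins for GitHub (the template) and GitLab (your repo).
git init --bare --quiet template.git
git init --bare --quiet my-control-repo.git

# Seed the "template" with a production branch, like Puppet's control-repo has.
git clone --quiet template.git seed
(cd seed \
  && git checkout --quiet -b production \
  && git -c user.name=demo -c user.email=demo@example.com \
         commit --quiet --allow-empty -m "initial commit" \
  && git push --quiet origin production)

# The actual flow: clone the template, retarget origin, push to your own repo.
git clone --quiet --branch production template.git control-repo
cd control-repo
git remote remove origin                        # stop tracking the template
git remote add origin ../my-control-repo.git    # point at your own repo
git push --quiet --set-upstream origin production
```

In real use, the first clone URL would be Puppet's control-repo template on GitHub and the new origin would be your GitLab SSH URL.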

Configure Puppet Master

Now we need to tell Puppet master where to pull the source code from. We will do this from the UI.

  1. Log in to https://puppet (or wherever your Puppet console is installed).
  2. Click Classification on the menu.
  3. Expand PE Infrastructure and click PE Master.
  4. Click Configuration tab.
  5. Navigate to Class: puppet_enterprise::profile::master, select r10k_remote from the dropdown list, and paste in the SSH URL of the GitLab repo configured in the previous section. Click Add parameter.
  6. From the same dropdown list under Class:puppet_enterprise::profile::master, select r10k_private_key and enter /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa.
  7. From the same dropdown list, select code_manager_auto_configure and set the value to true. Click the Commit 3 changes button at the lower right corner of the screen.
  8. Let’s test the configuration by executing the following command.
    # puppet agent -t
  9. You will see the output of the agent run.
  10. Login to get a token to execute puppet-code.
    # puppet-access login --lifetime 2h
    This means the token to execute puppet-code will last for 2 hours.
  11. Next deploy the environment with puppet-code.
    # puppet-code deploy production
  12. The command above should report that the production environment was deployed.

  13. Execute puppet agent -t again.
  14. All the code pulled from the source control can be located at /etc/puppetlabs/code/environments.


This document explained how to copy the existing template repo and apply it to your own environment. Puppet can now talk to the Git server. However, we still need to understand how to create groups and different environments for testing. I will talk more about that on my blog later.

PowerShell Custom Object to and from JSON

Imagine a situation where you need to keep some complex data and want to be able to work with it in PowerShell. Traditionally we used XML, but JSON is a much lighter way to do it.

There are situations where you want to deal with complex data within your script. Let’s try a basic PSCustomObject.

$data = [PSCustomObject]@{
    attr1 = "value1"
    attr2 = "value2"
    attr3 = @{
        attr4 = "value4"
        attr5 = "value5"
    }
}

Write-Host $data

When you execute the code above, you will see an output like the following.

@{attr1=value1; attr2=value2; attr3=System.Collections.Hashtable}

So it’s quite easy to nest objects. Let’s see if we can have a collection.

# sample2
$data2 = [PSCustomObject]@{
    attr1 = "value1"
    attr2 = "value2"
    attr3 = @(
        "val1", "val2", "val3"
    )
}

Write-Host $data2

Executing the code will give us an output like the following.

@{attr1=value1; attr2=value2; attr3=System.Object[]}

This example may look useless, but it is useful in the sense that you can deal with parameters as one object containing complex data. To make it more useful, we can serialize and deserialize the data to and from a JSON file. Let’s give it a try. I will reuse the second sample in this article to generate a JSON file from the PSCustomObject.

# sample3
$data3 = [PSCustomObject]@{
    attr1 = "value1"
    attr2 = "value2"
    attr3 = @(
        "val1", "val2", "val3"
    )
}

$data3 | ConvertTo-Json | Set-Content -Path "data3.json"

The $data3 object is carried over the pipe and converted to JSON format, and then the string data is carried over the second pipe and saved to disk by the Set-Content cmdlet. The content of data3.json looks like the following.

    "attr1":  "value1",
    "attr2":  "value2",
    "attr3":  [

If you give the -Compress option to ConvertTo-Json, it minifies the output. It’s harder to read, but it saves a lot of disk space, especially when you have to deal with big data.


Converting the data to JSON format and saving it to a file is only half of it. The best part is being able to deserialize the data back into a PSObject and start working with it again. Let’s try it.

# sample 4
$data4 = Get-Content "data3.json" | ConvertFrom-Json
Write-Host $data4.attr1
Write-Host $data4.attr2
ForEach ($v in $data4.attr3)
{
    Write-Host $v
}

How easy it is to read the data back from a file and create a useful object in memory in just one line!

This technique can be used for any object. I will try it on Get-ChildItem.

# sample 5
Get-ChildItem | ConvertTo-Json -Compress | Set-Content -Path "file-system-data.json"

Get-Service, Get-Process, or any other command that returns objects can be used the same way: save the data as JSON and deserialize it later.