Jenkins Pipeline

I have seen and used Jenkins Pipeline over the last few years. What I have realized is that we should be careful not to overuse or misuse Jenkins Pipeline.

When you click on New Item in Jenkins, you see a bunch of job types. If you have more plugins installed, there can be many more. I personally like the MultiJob plugin.

Although Pipeline can do many things and it’s tempting to use it for everything, I think a Freestyle project should be used for most of the automation jobs you have. Pipeline should be used to connect the dots, as the name indicates.

I have seen Pipeline misused in places, and it made the jobs very complex. Pipeline is there not only to connect jobs together, but to connect dev, QA, and ops, carrying the code all the way through those departments to production systems. Pipeline is not there just to run a bunch of jobs sequentially, yet that is exactly how I see people misusing it.

I believe it’s beneficial for any company to move code developed by devs to production as quickly as possible. So Jenkins jobs should be designed so that they can run in parallel on multiple slaves, and Pipeline is there to support exactly that.
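As a sketch of that idea, a declarative Jenkinsfile can stay thin and just fan out to existing Freestyle jobs, running them in parallel on whatever slaves their labels match. The job names here (`unit-tests`, `lint`, `deploy-staging`) are hypothetical placeholders, not jobs from this article:

```groovy
// Hypothetical Jenkinsfile: the pipeline only "connects the dots" —
// the real work lives in ordinary Freestyle jobs triggered via build().
pipeline {
    agent none
    stages {
        stage('Verify in parallel') {
            parallel {
                stage('Unit tests') {
                    steps {
                        // Runs the Freestyle job "unit-tests" on whatever
                        // slave matches that job's own label restriction.
                        build job: 'unit-tests'
                    }
                }
                stage('Lint') {
                    steps {
                        build job: 'lint'
                    }
                }
            }
        }
        stage('Deploy to staging') {
            steps {
                build job: 'deploy-staging'
            }
        }
    }
}
```

The point of the sketch is that the Pipeline stays short and readable because each stage delegates to a Freestyle job instead of embedding all the build logic itself.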

I could be wrong about this, and I’m willing to accept it if I’m proven wrong, but that’s what I believe right now.

Jenkins Slaves on Azure

The beauty of having Jenkins on Azure is that you really don’t have to configure Jenkins slaves manually unless you have complex requirements. By creating a job with a label such as “linux” or “win”, Azure automatically creates a slave for you and executes the job. Let’s see a very simple case.

  1. Click New Item.
  2. Create a Freestyle project with the name “linux-test”. Click the OK button at the bottom of the screen to proceed.
  3. In the configuration screen, enter “linux” in “Restrict where this project can be run”.
  4. Select the “Execute shell” task from the Build section.
  5. Enter the following script in Command.
    echo "Hello from Azure Linux Jenkins slave!"
    cat /etc/os-release

  6. Click Save to commit the change.
  7. Click the “Build Now” button.
  8. When you do, you will see a message in the Build History section saying “(pending–‘Jenkins’ doesn’t have label ‘linux’)”. Don’t be deceived by the message. Jenkins and Azure are actually working hard in the background to provision a Linux VM slave, and the new slave will show up after a little while.
  9. After provisioning is done, the job is executed successfully. If you take a look at the console log, you will see something like the following.
Started by user hogehogehoge
Building remotely on linux-agentfbcb70 (linux) in workspace /home/agentadmin/workspace/linux-test
[linux-test] $ /bin/sh -xe /tmp/
+ echo Hello from Azure Linux Jenkins slave!
Hello from Azure Linux Jenkins slave!
+ cat /etc/os-release
VERSION="16.04.6 LTS (Xenial Xerus)"
PRETTY_NAME="Ubuntu 16.04.6 LTS"
Finished: SUCCESS
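The same job can also be created without clicking through the UI, by POSTing a Freestyle config.xml to Jenkins’ `createItem` REST endpoint. The following is only a sketch under assumptions: the Jenkins URL and the user/token credentials are placeholders, the config.xml is a minimal hand-written example rather than one exported from a real job, and the actual curl call is commented out because it needs a running Jenkins master.

```shell
#!/bin/sh
# Sketch: create the "linux-test" Freestyle job via the Jenkins REST API.
# JENKINS_URL and the admin:api-token credentials are placeholders.
JENKINS_URL="http://localhost:8080"

# Minimal Freestyle config: label restriction "linux" plus the shell step.
cat > linux-test-config.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<project>
  <assignedNode>linux</assignedNode>
  <canRoam>false</canRoam>
  <builders>
    <hudson.tasks.Shell>
      <command>echo "Hello from Azure Linux Jenkins slave!"
cat /etc/os-release</command>
    </hudson.tasks.Shell>
  </builders>
</project>
EOF

# Requires a running Jenkins master; uncomment to actually create the job.
# curl -s -X POST "$JENKINS_URL/createItem?name=linux-test" \
#      --user admin:api-token \
#      -H "Content-Type: application/xml" \
#      --data-binary @linux-test-config.xml

# Show the label restriction we just wrote.
grep '<assignedNode>' linux-test-config.xml
```

This can be handy once you have more than a handful of jobs to keep in sync, since the config.xml files can live in version control.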

The agents provisioned by the Jenkins job stay around for a while and accept requests as needed, but when there is no work for some time, the system automatically decommissions them.

How is this possible? How does Jenkins know how to provision Linux and Windows agents? Let’s take a look at the Manage Jenkins section.

However you want to provision the VM agents, it’s configurable on this screen.

By default, Jenkins on Azure is configured to provision Linux and Windows agents automatically. This means that the more agents it needs, the more it automatically adds to the pool, and as requests die down, the system automatically decommissions them. This is really cool in the sense that you are not wasting resources on Azure when the VM agents are not being used much.
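You can also watch the pool grow and shrink from the command line: Jenkins lists the current agents at the `/computer/api/json` REST endpoint. The sketch below is hedged: the live curl call uses a placeholder URL and token and is commented out, and a canned sample response (with the agent name taken from the console log above) stands in so the extraction step runs offline.

```shell
#!/bin/sh
# Against a live master (placeholder URL and credentials):
# curl -s --user admin:api-token \
#   "http://localhost:8080/computer/api/json?tree=computer[displayName,offline]"

# Canned sample of what the endpoint returns, so this sketch runs offline.
cat > computers.json <<'EOF'
{"computer":[{"displayName":"master","offline":false},
             {"displayName":"linux-agentfbcb70","offline":false}]}
EOF

# List agent names (crude text extraction; a real script would use jq).
grep -o '"displayName":"[^"]*"' computers.json | cut -d'"' -f4
```

Running this against a real master during a build shows the automatically provisioned slaves appearing in, and later disappearing from, the list.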

If you take a look at Cloud Statistics in Manage Jenkins section, you will see the history of the automatically provisioned VM agents.

Managing slaves can be troublesome depending on the load, but Jenkins on Azure makes it very easy and scalable.