Clean up Jenkins Workspaces

Problem:

When you use Jenkins for continuous integration, you can quickly run into disk usage problems on slave nodes, and sometimes on the master node too. Disk usage problems happen when you have many jobs. For example, one of our jobs takes up to 2 GB, so wherever Jenkins sits we may run out of disk space.

Delete Workspace when build is done

For my projects, I often use this option to delete the workspace after the build:

Disadvantage:

After a build, it is sometimes useful to keep the workspace in order to understand failures. You can always disable this option temporarily when there is a problem.

Clean up Slave Workspaces

Slave workspaces are not deleted by this method. Therefore I also use a script to delete them:

To execute the script on the master node, go to “Manage Jenkins” -> “Script Console”.

Comment out the line “workspacePath.deleteRecursive()” if you want to verify which folders are going to be deleted.

Write a clean up job for Jenkins:

It is good practice to clean up Jenkins’s workspaces.

Write jobs for your maintenance tasks, such as cleanup operations, to avoid full-disk problems.

  • First, install the Groovy plugin: go to “Manage Jenkins” -> “Manage Plugins”. After a reboot of Jenkins the plugin can be used.
  • Secondly, set up and configure Groovy. Go to “Manage Jenkins” -> “Configure System”, where you can configure Groovy. Then follow the steps indicated on the wiki of the Groovy plugin (previous link).

A few tips on how I installed Groovy while following the steps of the wiki:

Use the repository URL extracted from the zip file:

Choose a label for the Groovy installation; for example, it can be Groovy.

Download URL for the binary archive:

  • Then you can just create your Jenkins job to clean up workspaces daily. In the configuration of the job, choose “Execute system Groovy script”.
Put the previous command from gist.github here
  • This Jenkins job saved me a lot of time! Since then I have not had disk problems, and I no longer need to clean up the disk manually.

Clean up Master Workspace

From time to time the master node is full, and it is necessary to clean up the workspace before jobs can be launched. In order to do so, first go to the console of the master node.

Then choose the option ‘Script Console’ on the master node, and simply look for the jobs which are taking up the most space.

In order to do so, first find the JENKINS_HOME directory, for example with this command:

println "env".execute().text

Find out which jobs are taking up the most space with:

println "du -mh [JENKINS_HOME]/jobs".execute().text

Delete the directory taking up the most space (you need admin rights).

Example:

println "rm -rf   [JENKINS_HOME]/jobs/[my_job_to_clean_up]/builds".execute().text
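To find the biggest candidates quickly from a shell on the master (rather than the script console), here is a small sketch; the JENKINS_HOME default path is an assumption:

```shell
#!/bin/sh
# List job directories under JENKINS_HOME by size, biggest first,
# to pick cleanup candidates. The default path is an assumption.
JENKINS_HOME=${JENKINS_HOME:-/var/lib/jenkins}
du -m -d 1 "$JENKINS_HOME/jobs" | sort -rn | head -n 10
```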

Link to the book “The Pragmatic Programmer” on Amazon

The Pragmatic Programmer: From Journeyman to Master

I recommend this book because it helped me understand the big picture of software development:

Other useful commands

Check where the slave workspace directories are located:

def hi = hudson.model.Hudson.instance
hi.getItems(hudson.model.Job).each { job ->
    // assumes freestyle (AbstractProject) jobs, which expose getWorkspace()
    println job.workspace
}

Delete unused jobs:

How to create a groovy script which cleans up workspaces:

List of groovy scripts for Jenkins:

Wipe out the workspace of a specific job:


Audit security of software

Introduction


I had the opportunity to analyse the security of some applications.

Basically I decompose the analysis into three steps, taking inspiration from the OWASP reference. Firstly, I need to know what the important data of the application are. It is best to ask questions to the team who built the application. I also need to test the application and understand it: get access to the code, and read the specification.


Then I will use a tool to do a pen test. The aim is to evaluate the application by trying to exploit vulnerabilities. Zap Proxy is a popular and free tool. I also recommend Acunetix, but you need to pay for it. I also used FindBugs with all security options ticked to detect some security flaws.


I was given a deadline to make an audit of an application, so the aim was not to find all the security problems of the application but the most important ones.
Over the years the top ten security problems have not changed much, so we need to look particularly at these potential problems.



I Know the application

At first we want to know if it is really worth the effort to do a full or partial audit. Read the OWASP reference as a guideline for the code review [1]. Before starting a full audit we need to know some important aspects of the application!


  1. Code: The language(s) used, the features and issues of that language from a security perspective. The issues one needs to look out for and best practices from a security and performance perspective.

  2. Context: The working of the application being reviewed. All security is in the context of what we are trying to secure.

  3. Audience: The intended users of the application. Is it externally facing or internal to “trusted” users? Does this application talk to other entities (machines/services)? Do humans use this application?

  4. Importance: The availability of the application is also important. Shall the enterprise be affected in any great way if the application is “bounced”? [1]

Before meeting the team, we need to make a checklist to get to know the application better:

The checklist should cover the most critical security controls and vulnerability areas such as:

  • Data Validation
  • Authentication
  • Session management
  • Authorization
  • Cryptography
  • Error handling
  • Logging
  • Security Configuration
  • Network Architecture

Input, for example, can be:

  • Browser input
  • Cookies
  • Property files
  • External processes
  • Data feeds
  • Service responses
  • Flat files
  • Command line parameters
  • Environment variables

Exploring the attack surface includes dynamic and static data flow analysis: where and when variables are set, how the variables are used throughout the workflow, and how attributes of objects and parameters might affect other data within the program. It determines whether the parameters, method calls, and data exchange mechanisms implement the required security.

Read the documentation about the product. Are there any security requirements? Is there any sensitive data? How can we communicate with the product (graphical interface? web services? etc.)?
Talk to the team and go through a checklist. The OWASP code review guide gives a good idea of what to ask the team. Example questions:



– What is the programming language of the application?

– What are the security requirements?

– What are the sensitive data? Do you have documentation about the architecture of the application?

– How can we access the application? Web services? Graphical interfaces?

– Are the data inputs being validated?

– From where can we access the application? If the application is only accessible on a private network with no internet connection, the need for security is not as important as for an application accessible by everybody on the web.

– Who are the users of the application? The general public, or users with special rights?

– How do users authenticate to the application?



In the end, talking to the team and reading the documentation is not enough: the code of the application is the best source of information. When you understand the application better, you can evaluate what kind of security audit you should do.


For example, it is crucial for a bank application on the web to be very secure. However, it might not be useful to do a full audit of an application with no important data.

II Tools to scan the application

Pen test tool

In order to perform a pen test, there are a number of tools you can use. Zap Proxy is a popular and free tool. Acunetix can find more security bugs, but it is not free.


For some websites it is difficult to reach all the links automatically, but it is important to cover the majority of the links of the website. Therefore it is sometimes necessary to crawl all the links manually.


This link is very useful for manually crawling all links in Acunetix:

The procedure is similar with other tools on the market:


Configure the web browser

Presuming that the web browser is running on the same machine where the tool is installed, set the proxy server IP to and the proxy server port to 8080.

  1. Start the HTTP Sniffer and browse the website using the previously configured web browser.
  2. Once ready, stop the HTTP sniffer. Save captured data by selecting ‘Save Logs’ from the Actions drop down menu.

In the Site Crawler node, click the ‘Build Structure from HTTP Sniffer log’ button to import the captured data into the Site Crawler.

It is also possible to import HTTP Sniffer logs into an already existing scan, or import multiple HTTP Sniffer logs into the same crawl. To do so, simply tick the option “Merge the log(s) with the currently opened crawl results” in the HTTP Sniffer Log import window.


  3. Import the logs into the Crawler.
  4. Save the crawler import results by selecting ‘Save Results’ from the Actions drop down menu.
  5. Launch the scan.


Click on the New Scan button to launch the Scan Wizard. In the first step of the Scan Wizard, select the option ‘Scan using saved crawling results’. Proceed with completing the Scan Wizard to launch the automated scan against the manually browsed website.


Bug detection tool

Sonar: it can detect important security problems.

Findbugs: tick all options before scanning (by default the security options are deactivated).


III Manual testing

Read the OWASP reference as a guideline for the security verification [2].

This phase can be the longest if there are a lot of problems with the application. We need to verify some requirements manually to certify the level of security of the application.

There are different levels of security, from 0 to 3 (low to high). In order to meet one of these levels, an application needs to pass some manual tests. [2]


In detail, this is the list of the security requirements:


V2. Authentication

V3. Session Management

V4. Access Control

V5. Malicious Input Handling

V7. Cryptography at Rest

V8. Error Handling and Logging

V9. Data Protection

V10. Communications


V13. Malicious Controls

V15. Business Logic

V16. File and Resource

V17. Mobile


In practice, I verify manually the results of the scans. The scans I refer to can be done with the tools previously mentioned (for example FindBugs, Sonar and Zap Proxy). The scans will show a list of problems; we need to make sure these security problems are real and not false positives. Concentrate on the major problems before going through the minor security breaches. Sometimes several minor security problems can lead to serious security holes.

IV Reporting


The last step of an audit is to produce a document that helps the application team fix the main security bugs. In the report I recommend putting the main security breaches. From my experience, teams do not have much time to spend fixing security issues, so it is best to focus first on the big security breaches and the ones that are easy to fix.




[1] Code Review: OWASP_Code_Review_Guide-V1_1.pdf

Available to download here:

[2] Security Verification: OWASP Application Security Verification Standard (ASVS)

Available to download here:

Using Jmeter with Jenkins

The problem:


A performance problem was found on a pre-production platform while running JMeter manually. This problem was found after the release was built, which postponed our release.


The solution:

Performance tests should be run daily as part of continuous integration. That way we can discover performance problems easily and fix major problems earlier.
Assuming you already have a jmx file (a JMeter file), you need to call this file from Maven with Jenkins. The second step is to parameterize variables in JMeter so that some of them can be modified from Jenkins (very useful).


Call Maven from Jenkins


The configuration of the Jenkins job to call Maven from Jenkins:

Configuration Build


Some variables have been parameterized, such as NUMBER_ITERATION and NUMBER_USER. These Jenkins variables are passed to Maven variables such as number.user.jenkins and number.iteration.jenkins.


Therefore it is possible to modify those variables when launching JMeter from Jenkins manually. The default values are used when the job is launched periodically.
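In a Jenkins build step, this forwarding might look like the following sketch. Treat it as a build-configuration fragment: the Maven goal and the second property name are assumptions, not the exact job setup.

```shell
# Hypothetical Jenkins build step: forward the job parameters to the
# Maven properties read by the JMeter plugin.
mvn verify \
  -Dnumber.user.jenkins="${NUMBER_USER}" \
  -Dnumber.iteration.jenkins="${NUMBER_ITERATION}"
```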



I used the plugin “Publish Performance test result report” to report performance and detect problems. There are other reporting tools in Jenkins for JMeter.


Call JMeter from Maven to pass variables


This article was very useful for mavenifying JMeter:

I chose the plugin jmeter-maven-plugin to do it, and used the same method as in the article. I just added commons-logging as a dependency to make it work:



This list of variables is passed to the JMX file:


Modify JMX file for parametrization


The final piece of work is to parameterize the actual JMX file. I found information on how to do this here:


After a few tests I managed to find the exact wording to pass variables from Maven to JMeter. An example will tell more: you can define the number of threads and loops in “Thread Group”, for example Number of Threads = ${__P(number.user,7)}. The default value is 7 if number.user is not passed to the JMX file.
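JMeter's __P function behaves like a parameter with a default. As a quick analogue (not JMeter itself, just the same lookup-with-fallback idea expressed in shell):

```shell
#!/bin/sh
# Analogue of ${__P(number.user,7)}: use the value handed in by the
# caller, or fall back to the default 7 when nothing was provided.
NUMBER_USER=${NUMBER_USER:-7}
echo "number of threads: $NUMBER_USER"
```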


Here is the actual SOAP request with the URL and port parameterized:

As you can see, some variables are inside the request: ${input1} and ${input2}. They are actually being fed from a CSV file. Tutorial:


In JMeter, the CSV file is loaded from a “CSV Data Set Config”. Example of CSV configuration for JMeter:

Variable Names (comma-delimited): input1,input2

Filename: resources/${__P(csvfile,default.csv)}

Filename is where the CSV file is located in my directory tree.

The CSV file contains the data used when calling the SOAP request. Example of a CSV file:





At the first iteration, the first line of the CSV file will be used, so a SOAP request will be sent with input1=data1 and input2=test1. Then the next line will be used, and so on.
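The iteration JMeter performs over the CSV file can be pictured with a small shell loop (JMeter does this internally; the file name and the data after the first line are illustrative):

```shell
#!/bin/sh
# Illustration of how the CSV Data Set Config walks the file: one line
# per iteration, comma-split into input1 and input2.
printf 'data1,test1\ndata2,test2\n' > /tmp/default.csv
while IFS=, read -r input1 input2; do
    echo "SOAP request with input1=$input1 input2=$input2"
done < /tmp/default.csv
# prints:
# SOAP request with input1=data1 input2=test1
# SOAP request with input1=data2 input2=test2
```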





At first I did not know why variables were not being passed to the JMX file from Jenkins. There were no errors in the logs “target\jmeter\logs”, and the jtl file was not generated. I had to look in target\jmeter\bin to check whether my variables were passed in the files.


My problem was that the names of my variables were different from the variable names in the JMX file. I resolved my issue by checking the user file.


User variables defined in Maven as <propertiesUser> are passed in this file: target\jmeter\bin\


SSH Timeout problem with Jenkins and maven ant plugin

The problem:

Recently a Jenkins job reported a problem from a particular script:

Remote command failed with exit status -1.

This problem happened when using the sshexec task from the maven ant plugin:

<sshexec host="${host}" username="${login}" password="${pwd}" command="sh" trust="true"/>

After a long investigation I realised the problem was not coming from the script but from the remote server. Indeed, the connection with the server would stop when a process ran for more than 200 seconds.

The solution

Ssh configuration solution

At first I decided to modify the ssh configuration of the remote server directly.

I modified the value ClientAliveInterval from 200 to 2000 directly in the file /etc/ssh/sshd_config, then restarted the ssh daemon: /etc/init.d/sshd restart. It fixed the issue, but the solution was refused.
Therefore I decided to modify the ssh client configuration instead. But I did not find how to configure the sshexec task that way, and I don’t have access to /etc/ssh/ssh_config on the client machine either.

Workaround for running a long command remotely

This solution worked for me:!msg/rundeck-discuss/iK55if9Vk9E/skNPHAfF3qgJ

Instead of running:


I run:

truncate -s 0 /mydir/log.out
nohup longcommand > /mydir/log.out 2>&1 &
PID=$!
echo "$PID"
for (( ; ; ))
do
    sleep 30
    if ps -p $PID > /dev/null
    then
        echo "longcommand running"
    else
        echo "longcommand executed"
        break
    fi
done
cat /mydir/log.out

If you compare the original solution to mine, I just deleted 0<&-.

Automate the installation of a product with bash scripts

The problem

I was given the repetitive task of making an archive and installing its content on a machine every 3 weeks. This archive contained SQL files and shell scripts. A shell script would install the product and create a database.

The procedure was entirely manual and repetitive. I decided step by step to automate the whole procedure with Maven and the ant plugin. With Jenkins I launched the procedure daily, so I could detect problems early.
I have been using Maven to create the archive. Then I used the Maven plugin “maven-antrun-plugin” to install the archive automatically on a remote server.

Generate the archive

Content of the pom.xml

First of all I had to generate an archive with the maven-assembly plugin. The following plugin configuration will call the file assembly.xml to generate an archive of type zip.


Content of the assembly.xml

This is the header of the assembly. The variable ${name} is defined in the properties of the pom.


The filesets add files from the local code into the archive. I can also exclude all files of type txt, for example, if I don’t want them in the archive.


Below I select the module child1 in the tree of my modules. Then I retrieve the content of the directory src/main in the module child1. I include all files of types war, properties, etc.

More information on moduleSets :


Then I want to retrieve some dependencies of the pom into my archive file :


That’s it: I have created a basic archive.
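Putting these pieces together, a minimal assembly.xml of the kind described above might look like this sketch (the id, group/artifact ids, directories and include patterns are illustrative, not the exact file from this project):

```xml
<assembly>
  <id>${name}</id>
  <formats>
    <format>zip</format>
  </formats>
  <!-- local files, excluding txt -->
  <fileSets>
    <fileSet>
      <directory>src/main/scripts</directory>
      <excludes>
        <exclude>**/*.txt</exclude>
      </excludes>
    </fileSet>
  </fileSets>
  <!-- content of src/main from the module child1 -->
  <moduleSets>
    <moduleSet>
      <includes>
        <include>com.example:child1</include>
      </includes>
      <sources>
        <includeModuleDirectory>false</includeModuleDirectory>
        <fileSets>
          <fileSet>
            <directory>src/main</directory>
            <includes>
              <include>**/*.war</include>
              <include>**/*.properties</include>
            </includes>
          </fileSet>
        </fileSets>
      </sources>
    </moduleSet>
  </moduleSets>
  <!-- dependencies of the pom copied into the archive -->
  <dependencySets>
    <dependencySet>
      <outputDirectory>lib</outputDirectory>
    </dependencySet>
  </dependencySets>
</assembly>
```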

Install the archive in some distant server

Content of the pom.xml

In order to install the archive on a remote server I used the plugin maven-antrun-plugin:

<property name="compile_classpath" refid="maven.compile.classpath"/>
<property name="runtime_classpath" refid="maven.runtime.classpath"/>
<property name="test_classpath" refid="maven.test.classpath"/>
<property name="plugin_classpath" refid="maven.plugin.classpath"/>
<property name="remote_host" value="${remoteHost}"/>
<property name="remote.dir" value="${remote.dir}"/>
<property name="remote.login" value="${remote.login}"/>
<property name="remote.pwd" value="${remote.pwd}"/>
<ant antfile="${antdir}">
    <target name="deploy"/>
</ant>

The Maven plugin will call build.xml. All variables set in <tasks> are sent to the build.xml file.


The default target of build.xml is in the header. Here I have defined some properties and loaded some constants from the file

<?xml version="1.0" encoding="UTF-8"?>
<project name="Build" basedir="." default="deploy">

<property name="baseDirectory" location="./target" />

<property name="myworkspace" location="${baseDirectory}" />
<property file="${baseDirectory}/" />

Here the target deploy is defined. If remote.dir is not defined, the value /remote/dir is taken by default.

<target name="deploy">
    <!-- if/then/else requires the ant-contrib tasks -->
    <if>
        <equals arg1="${remote.dir}" arg2="" />
        <then>
            <property name="remote_dir" value="/remote/dir" />
        </then>
        <else>
            <property name="remote_dir" value="${remote.dir}" />
        </else>
    </if>

Another ant file is called. This is useful when we want to factor out some ant code.

<ant antfile="src/main/resources/ant/build_common.xml" target="deploy"/>

Copy the zip file generated (scp) from the current directory to the remote server remote_host in the remote_dir folder:

 <scp todir="${remote_login}:${remote_pwd}@${remote_host}:${remote_dir}/" file="${baseDirectory}/${archive}.zip" trust="true" />

Execute an unzip command on the remote server with sshexec:

    <sshexec host="${remote_host}" username="${remote_login}" password="${remote_pwd}" command="cd ${remote_dir};unzip -o ${remote_dir}/colis/${name.archive.full}.zip;" trust="true" />

Execute the bash script, which will install the product, create the database, etc.:

  <sshexec host="${remote_host}" username="${remote_login}" password="${remote_pwd}" command="cd ${remote_dir};${remote_dir}/" trust="true" />

After the installation I need to import a dump on the remote server. As you can see, I defined the variables before calling the script:

  <sshexec host="${remote_host}" username="${remote_login}" password="${remote_pwd}" command="export ORACLE_HOME=${oracle_home};export ORACLE_SID=${oracle_sid};${oracle_home}/bin/imp login/pass file=/mydir/mydump.dump tables=* commit=y ignore=y" trust="true" />


From spending two to four days installing manually every three weeks, I now spend only half a day on this process.

Furthermore, I also had to manually install and test a patch at each delivery. This patch is now created and installed automatically. I did it in the same pom, so it was a bit tricky; maybe I will explain it in another post.

Unit Testing of Jsp Custom Tag Before Version Spring 2.5

As described in this link, it is easier to mock custom tags with Spring 2.5:

If for whatever reason you are stuck with a version of Spring before 2.5, you can still test JSP custom tags. This is the class I would like to unit test:

public final class CustomTag extends javax.servlet.jsp.tagext.BodyTagSupport {

    public void writeMyTag() throws JspException, IOException {
        pageContext.getOut().write("<th> … text here <th>");
    }
}



My problem is: how do I mock the pageContext object? My solution is to use Mockito.

I wrote the following unit test, mocking pageContext and jspWriter:

public class CustomTagTest {

    @Mock
    PageContext pageContext;

    @Mock
    JspWriter jspWriter;

    @Before
    public void setup() {
        MockitoAnnotations.initMocks(this);
        // return the mocked writer when the tag asks for the JSP output
        Mockito.when(pageContext.getOut()).thenReturn(jspWriter);
    }

    @Test
    public void testwriteMyTag() throws JspException, IOException {
        CustomTag tag = new CustomTag();
        tag.setPageContext(pageContext);
        tag.writeMyTag();
    }
}

Here we go: I covered the custom tag code. However, it does not test the output written to the JspWriter.

The power of Unit Testing

Why is Unit Testing important?

Some people think unit tests are a waste of time: for them, you should spend that time developing real code instead. But I disagree. I am more at ease when unit tests run every day without failure, because then I know my code works as I intend. Personally, I develop faster because I test my development incrementally, and in my experience my code tends to have fewer bugs too.

I find that Unit testing has many advantages:

– Gain time when reproducing a problem or testing functionality before integration. Instead of launching the entire application to test your changes, you can just test the small part you changed.

– Regression testing. The code is easier to refactor when there are lots of unit tests. If there are big changes in the code, the unit tests help us avoid regressions.

Unit tests are great, but if they are not run daily it is as if they did not exist. That is why it is very important to have a platform that launches them every day. For Java projects, for example, Jenkins is a popular platform.

You should automatically test not only code in programming languages such as Java, C# and JavaScript, but also the scripts used for the installation of the product. Ideally the installation from scratch should be run daily on a machine.