Audit security of software

Introduction


I had the opportunity to analyse the security of some applications.

Basically I decompose the analysis into three steps, taking the OWASP guides as my reference. First, I need to know what the important data of the application are; it is best to ask questions of the team who built the application. I also need to test the application and understand it, get access to the code, and look at the specification and the application at work.


Then I will use a tool to do a pen test. The aim is to evaluate the application by trying to exploit vulnerabilities. ZAP Proxy is a popular free tool; I also recommend Acunetix, but it is commercial. I also used FindBugs with all security options ticked to detect some security flaws.


I was given a deadline to make an audit of an application, so the aim was not to find all the security problems of the application but the most important ones.
Over the years the top ten security problems have not changed much, so we need to look particularly at these potential problems.



I Know the application

At first we want to know if it is really worth the effort to do a full or partial audit. Read the OWASP Code Review Guide [1] as a guideline. Before starting a full audit we need to know some important aspects of the application:


  1. Code: the language(s) used, and their features and issues from a security perspective. The pitfalls one needs to look out for, and best practices from a security and performance perspective.

  2. Context: the working of the application being reviewed. All security is in the context of what we are trying to secure.

  3. Audience: the intended users of the application. Is it externally facing or internal to “trusted” users? Does this application talk to other entities (machines/services)? Do humans use this application?

  4. Importance: the availability of the application is also important. Would the enterprise be affected in any great way if the application were “bounced”? [1]

Before meeting the team we need to prepare a checklist so as to know the application better. The checklist should cover the most critical security controls and vulnerability areas, such as:

  • Data Validation
  • Authentication
  • Session management
  • Authorization
  • Cryptography
  • Error handling
  • Logging
  • Security Configuration
  • Network Architecture

Input, for example, can be:

  • Browser input
  • Cookies
  • Property files
  • External processes
  • Data feeds
  • Service responses
  • Flat files
  • Command line parameters
  • Environment variables
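Every one of these channels deserves the same distrust until validated. As a tiny illustration, here is how a script receives two of them, an environment variable and a command-line parameter (the names are invented for this demo):

```shell
# Read two of the input channels above; NAME and arg1 are demo values.
# Both arrive as attacker-influenced strings until they are validated.
NAME="World" sh -c 'echo "env input: $NAME"; echo "cli input: $1"' demo arg1
```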

Exploring the attack surface includes dynamic and static data flow analysis: where and when variables are set, how the variables are used throughout the workflow, and how attributes of objects and parameters might affect other data within the program. This determines whether the parameters, method calls, and data exchange mechanisms implement the required security.

Read the documentation about the product. Are there any security requirements? Is there any sensitive data? How can we communicate with the product (graphical interface? web services? etc.)?
Talk to the team and go through a checklist. The OWASP Code Review Guide gives a good idea of what to ask the team. Example questions:



–  What is the programming language of the application?

–  What are the security requirements?

–  What are the sensitive data? Do you have documentation about the architecture of the application?

–  How can we access the application? Web services? Graphical interfaces?

–  Are data inputs being validated?

–  From where can we access the application? If the application is only accessible on a private network with no internet connection, the need for security is not as great as for an application accessible by everybody on the web.

–  Who are the users of the application? The general public, or users with special rights?

–  What is the authentication scheme of the application?



In the end, talking to the team and reading the documents is not enough: the code of the application is the best source of information. Once you understand the application better, you can evaluate what kind of security audit you should do.


For example, it is crucial for a bank application on the web to be very secure. However, it might not be useful to do a full audit for an application with no important data.

II Tools to scan the application

Pen test tool

In order to perform a pen test there are a number of tools you can use. ZAP Proxy is a popular free tool; Acunetix can find more security bugs, but it is not free.


For some websites it is difficult for the scanner to reach all the links, yet it is important to retrieve the majority of the links of the website. Therefore it is sometimes necessary to crawl the links manually.
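One cheap sanity check on coverage, independent of any scanner, is to extract the href targets from a saved page and compare them with what the crawler reached. The page below is a stand-in:

```shell
# Save-page-then-grep: list link targets to compare with crawler output.
cat > /tmp/page.html <<'EOF'
<a href="/home">Home</a>
<a href="/admin/users">Admin</a>
EOF
grep -o 'href="[^"]*"' /tmp/page.html
```

Links missing from the crawler's result are the ones to visit by hand through the proxy.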


The Acunetix documentation explains how to crawl links manually. The procedure is similar with the other tools on the market:


Configure the web browser

Presuming that the web browser runs on the same machine where the tool is installed, set the proxy server IP to that machine's address (e.g. 127.0.0.1) and the proxy server port to 8080.

  1. Start the HTTP Sniffer and browse the website using the previously configured web browser.
  2. Once ready, stop the HTTP Sniffer. Save the captured data by selecting ‘Save Logs’ from the Actions drop-down menu.
  3. In the Site Crawler node, click the ‘Build Structure from HTTP Sniffer log’ button to import the captured data into the Site Crawler.

It is also possible to import HTTP Sniffer logs into an already existing scan, or to import multiple HTTP Sniffer logs into the same crawl. To do so, simply tick the option ‘Merge the log(s) with the currently opened crawl results’ in the HTTP Sniffer Log import window.


  1. Import the logs into the Crawler.
  2. Save the crawler import results by selecting ‘Save Results’ from the Actions drop-down menu.
  3. Launch the scan.


Click on the New Scan button to launch the scan wizard. In the first step of the Scan Wizard, select the option ‘Scan using saved crawling results’. Proceed through the scan wizard to launch the automated scan against the manually browsed website.


Bug detection tools

Sonar: it can detect important security problems.

FindBugs: tick all the security options before scanning (by default the security options are deactivated).


III Manual testing

Read the OWASP reference as a guideline for the security verification [2].

This phase can be the longest if there are a lot of problems in the application. We need to perform some verification requirements manually to certify the level of security of the application.

There are different levels of security, from 0 to 3 (low to high). In order to meet one of these levels, an application needs to pass a set of manual tests [2].

In detail, this is the list of the security requirement areas:


V2. Authentication

V3. Session Management

V4. Access Control

V5. Malicious Input Handling

V7. Cryptography at Rest

V8. Error Handling and Logging

V9. Data Protection

V10. Communications


V13. Malicious Controls

V15. Business Logic

V16. File and Resource

V17. Mobile


In practice, I verify manually the results of the scans performed with the tools previously mentioned (FindBugs, Sonar, ZAP Proxy). The scans show a list of problems, and we need to make sure these security problems are real and not false positives. Concentrate on the major problems before going through the minor security breaches; note that sometimes several minor security problems can combine into a serious security hole.

IV Reporting


The last step of an audit is to produce a document that helps the application team fix the main security bugs. In the report I recommend putting the main security breaches first. From my experience, teams do not have much time to spend fixing security issues, so it is best to focus on the big security breaches and the easy-to-fix ones first.




[1] OWASP Code Review Guide: OWASP_Code_Review_Guide-V1_1.pdf

[2] OWASP Application Security Verification Standard (ASVS)


Using JMeter with Jenkins

The problem :


A performance problem was found on a pre-production platform while running JMeter manually. The problem was only found after the release was built, which postponed our release.


The solution :

Performance tests should be run daily as part of continuous integration. That way we can discover performance problems easily and fix major problems earlier.
Assuming you already have a .jmx file (a JMeter test plan), you need to call this file from Maven with Jenkins. The second step is to parametrize variables in JMeter in order to modify them from Jenkins (very useful).


Call Maven from Jenkins


Here is the configuration of the Jenkins job that calls Maven:

Configuration Build


Some variables have been parameterized, such as NUMBER_ITERATION and NUMBER_USER. These Jenkins variables are passed to Maven properties such as number.user.jenkins.


Therefore it is possible to modify those variables when launching JMeter from Jenkins manually. The default values are used when the job is launched periodically.
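The wiring can be sketched as a Jenkins shell build step. NUMBER_USER and NUMBER_ITERATION are the job parameters named above; the property name number.user.jenkins comes from this setup, while number.iteration.jenkins is assumed by analogy. The Maven command is only echoed here so the sketch is runnable anywhere:

```shell
# Jenkins exposes job parameters as environment variables; a shell build
# step forwards them to Maven as system properties.
NUMBER_USER=5 NUMBER_ITERATION=10 sh -c \
  'echo mvn verify -Dnumber.user.jenkins="$NUMBER_USER" -Dnumber.iteration.jenkins="$NUMBER_ITERATION"'
```

In the real job, drop the echo so the Maven build actually runs with those properties.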



I used the plugin “Publish Performance test result report” to report performance and detect problems. There are other reporting tools for JMeter in Jenkins.


Call Jmeter from Maven  to pass variables


An article I found online was very useful for mavenifying JMeter. I chose the jmeter-maven-plugin and used the same method as in the article; I just added commons-logging as a dependency to make it work.



This list of variables is passed to the JMX file.


Modify JMX file for parametrization


The final piece of work is to parametrize the actual JMX file.


After a few tests I managed to find the exact syntax to pass variables from Maven to JMeter. An example will tell more: in a “Thread Group” you can define the number of threads and loops, for example Number of Threads = ${__P(number.user,7)}. The default value is 7 if number.user is not passed to the JMX file.
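The __P fallback behaves like shell default expansion, which makes it easy to demo locally. NUMBER_USER_DEMO is an invented variable; the real JMeter property is set on the command line with -J:

```shell
# ${__P(number.user,7)} in JMeter means "use the property, else 7".
# Shell equivalent of that fallback (NUMBER_USER_DEMO is unset here):
threads="${NUMBER_USER_DEMO:-7}"
echo "threads=$threads"
# The real property would be supplied to a non-GUI run as:
#   jmeter -n -t plan.jmx -Jnumber.user=10
```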


Here is the actual SOAP request with the URL and port parametrized. As you can see, some variables appear inside the request: ${input1} and ${input2}. They are fed from a CSV file.


In JMeter, the CSV file is loaded by a “CSV Data Set Config” element. Example of CSV configuration in JMeter:

Variable Names (comma-delimited): input1,input2

Filename: resources/${__P(csvfile,default.csv)}

Filename indicates where the CSV file is located in my directory tree.

The CSV file contains the data used when calling the SOAP request. Example of a CSV file:





At the first iteration, the first line of the CSV file is used, so a SOAP request is sent with input1=data1 and input2=test1. Then the next line is used, and so on.
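What the CSV Data Set Config does per iteration can be mimicked in shell: read one line, split it on commas into input1 and input2, build the request. The file content below is a stand-in matching the example values above:

```shell
# One CSV line per iteration; the columns map to input1 and input2.
cat > /tmp/default.csv <<'EOF'
data1,test1
data2,test2
EOF
while IFS=',' read -r input1 input2; do
  echo "SOAP request with input1=$input1 input2=$input2"
done < /tmp/default.csv
```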





At first I did not know why variables were not being passed to the JMX file from Jenkins. There were no errors in the logs under target\jmeter\logs and the JTL file was not generated. I had to look in target\jmeter\bin to check whether my variables were present in the generated files.


My problem was that the names of my variables were different from the variable names in the JMX file. I resolved the issue by checking the user properties file.


User variables defined in Maven as <propertiesUser> are passed to a file under target\jmeter\bin\


SSH Timeout problem with Jenkins and maven ant plugin

The problem:

Recently a Jenkins job started reporting a problem from a particular script:

Remote command failed with exit status -1.

This problem happened when using the sshexec task from the maven ant plugin:

<sshexec host="${host}" username="${login}" password="${pwd}" command="sh" trust="true"/>

After a long investigation I realised the problem was not coming from the script but from the remote server: the connection with the server was dropped whenever a process ran for more than 200 seconds.

The solution

    SSH configuration solution

At first I decided to modify the SSH configuration of the remote server directly.

I changed the value of ClientAliveInterval from 200 to 2000 directly in the file /etc/ssh/sshd_config, then restarted the SSH daemon: /etc/init.d/sshd restart. It fixed the issue, but the solution was refused.
Therefore I decided to modify the SSH client configuration instead, but I did not find how to configure the sshexec task, and I don't have access to /etc/ssh/ssh_config on the client machine either.
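For completeness, OpenSSH has a client-side counterpart to ClientAliveInterval: ServerAliveInterval makes the client send keepalives so the server never sees an idle session. Note this only helps for commands run through a plain ssh client; the Ant sshexec task is based on the JSch library and does not read OpenSSH config files. The file path here is a demo location:

```shell
# Write client-side keepalive options to a demo config file.
cat > /tmp/ssh_config_demo <<'EOF'
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 10
EOF
# It would be used explicitly like: ssh -F /tmp/ssh_config_demo login@host 'sh'
cat /tmp/ssh_config_demo
```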

Workaround for running a long command remotely

This solution, found in a rundeck-discuss Google Group thread, worked for me.

Instead of running the long command directly through sshexec, I run the following, where <longcommand> stands for the actual command (omitted here):

truncate -s 0 /mydir/log.out
nohup <longcommand> > /mydir/log.out 2>&1 &
PID=$!
echo "$PID"
for (( ; ; )); do
  sleep 30
  if ps -p "$PID" > /dev/null; then
    echo "longcommand running"
  else
    echo "longcommand executed"
    cat /mydir/log.out
    break
  fi
done

If you compare the original solution to mine, I just deleted the 0<&- redirection.
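The pattern can be tried locally without any server: sleep 2 stands in for the long command, so the loop can be watched finishing:

```shell
# Local dry run of the background-and-poll pattern; `sleep 2` plays the
# role of the long command.
: > /tmp/log.out
nohup sleep 2 > /tmp/log.out 2>&1 &
PID=$!
while ps -p "$PID" > /dev/null; do
  echo "longcommand running"
  sleep 1
done
echo "longcommand executed"
```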

Automate the installation of a product with bash scripts

The problem

I was given the repetitive task of making an archive and installing its content on a machine every 3 weeks. This archive contained SQL files and shell scripts; a shell script would install the product and create a database.

The procedure was entirely manual and repetitive, so I decided, step by step, to automate the whole procedure with Maven and the Ant plugin. With Jenkins I launched the procedure daily, so I could detect problems early.
I used Maven to create the archive, then the Maven plugin “maven-antrun-plugin” to install the archive automatically on a remote server.

Generate the archive

Content of the pom.xml

First of all I had to generate an archive with the maven-assembly-plugin. The following plugin configuration calls the file assembly.xml to generate an archive of type zip.


Content of the assembly.xml

This is the header of the assembly. The variable ${name} is defined in the properties of the pom.


The filesets add files from the local code to the archive. I can also exclude files, for example all files of type txt, if I don't want them in the archive.


Below I select the module child1 in my module tree. Then I retrieve the content of the directory src/main in the module child1. I include files of types war, properties, etc.

More information on moduleSets can be found in the maven-assembly-plugin documentation.


Then I want to retrieve some dependencies of the pom into my archive file :


That’s it: I have created a basic archive.

Install the archive in some distant server

Content of the pom.xml

In order to install the archive on a remote server I used the plugin maven-antrun-plugin:

<property name="compile_classpath" refid="maven.compile.classpath"/>
<property name="runtime_classpath" refid="maven.runtime.classpath"/>
<property name="test_classpath" refid="maven.test.classpath"/>
<property name="plugin_classpath" refid="maven.plugin.classpath"/>
<property name="remote_host" value="${remoteHost}"/>
<property name="remote.dir" value="${remote.dir}"/>
<property name="remote.login" value="${remote.login}"/>
<property name="remote.pwd" value="${remote.pwd}"/>
<ant antfile="${antdir}">
    <target name="deploy"/>
</ant>

This Maven plugin calls build.xml. All variables set in <tasks> are passed to the build.xml file.


The default target of build.xml is declared in its header. Here I define some properties and load some constants from a properties file:

<?xml version="1.0" encoding="UTF-8"?>
<project name="Build" basedir="." default="deploy">

<property name="baseDirectory" location="./target" />

<property name="myworkspace" location="${baseDirectory}" />
<property file="${baseDirectory}/" />

Here the target deploy is defined. If remote.dir is not set, the value /remote/dir is taken by default:

<target name="deploy">
    <condition property="remote_dir" value="/remote/dir">
        <equals arg1="${remote.dir}" arg2="" />
    </condition>
    <property name="remote_dir" value="${remote.dir}" />

Another Ant file is called. This is useful when we want to factor some Ant code.

<ant antfile="src/main/resources/ant/build_common.xml" target="deploy"/>

Copy the generated zip file (scp) from the current directory to the remote server remote_host into the remote_dir folder:

 <scp todir="${remote_login}:${remote_pwd}@${remote_host}:${remote_dir}/" file="${baseDirectory}/${archive}.zip" trust="true" />

Execute an unzip command on the remote server with sshexec:

    <sshexec host="${remote_host}" username="${remote_login}" password="${remote_pwd}" command="cd ${remote_dir};unzip -o ${remote_dir}/colis/${name.archive.full}.zip;" trust="true" />

Execute the bash script which will install the product, create the database, etc.:

  <sshexec host="${remote_host}" username="${remote_login}" password="${remote_pwd}" command="cd ${remote_dir};${remote_dir}/" trust="true" />

After the installation I need to import a dump on the remote server. As you can see, I define the Oracle environment variables before calling imp:

  <sshexec host="${remote_host}" username="${remote_login}" password="${remote_pwd}" command="export ORACLE_HOME=${oracle_home};export ORACLE_SID=${oracle_sid};${oracle_home}/bin/imp login/pass file=/mydir/mydump.dump tables=* commit=y ignore=y" trust="true" />


From spending two to four days on manual installation every three weeks, I now spend only half a day on this process.

Furthermore, at each delivery I also had to manually install and test a patch. This patch is now created and installed automatically. I did it in the same pom, so it was a bit tricky; maybe I will explain it in another post.

Unit Testing of JSP Custom Tags Before Spring 2.5

Starting with Spring 2.5 it is easier to mock custom tags, as described in the Spring documentation.

If for whatever reason you are stuck with a version of Spring before 2.5, you can still test JSP custom tags. This is the class I would like to unit test:

public final class CustomTag extends javax.servlet.jsp.tagext.BodyTagSupport {

    public void writeMyTag() throws JspException {
        try {
            pageContext.getOut().write("<th> … text here </th>");
        } catch (IOException e) {
            throw new JspException(e);
        }
    }
}



My problem is: how do I mock the pageContext object? My solution is to use Mockito.

I wrote the following unit test, mocking pageContext and jspWriter:

import static org.mockito.Mockito.*;

public class CustomTagTest {

    @Mock
    PageContext pageContext;

    @Mock
    JspWriter jspWriter;

    @Before
    public void setup() {
        MockitoAnnotations.initMocks(this);
        when(pageContext.getOut()).thenReturn(jspWriter);
    }

    @Test
    public void testwriteMyTag() throws JspException, IOException {
        CustomTag tag = new CustomTag();
        tag.setPageContext(pageContext);
        tag.writeMyTag();
        verify(jspWriter).write(anyString());
    }
}

Here we go: I covered the custom tag code. However, this does not test the output written to the JspWriter.

The power of Unit Testing

Why is unit testing important?

Some people think unit tests are a waste of time: for them, you should spend more time developing real code instead of writing unit tests. I disagree. I am more at ease when unit tests run every day without failure, because then I know my code works the way I want. Personally, I develop code faster because I test my development incrementally, and in my experience my code tends to have fewer bugs too.

I find that unit testing has many advantages:

– Gain time when reproducing a problem or testing functionality before integration. Instead of launching the entire application to test your changes, you can test just the small part you modified.

– Regression testing. Code is easier to refactor when there are lots of unit tests. If there are big changes in the code, the unit tests help us avoid regressions.

Unit tests are great, but if they are not run daily it is as if they did not exist. That is why it is very important to have a platform that launches them every day; for Java projects, Jenkins is a popular platform.

You should automatically test not only code written in programming languages such as Java, C#, and JavaScript, but also the scripts used for the installation of the product. Ideally the installation from scratch should be run daily on a machine.

Review of the book Pragmatic Programmer

Cover book of The Pragmatic Programmer

Comments about the book

To my mind this book is mind-blowing. The first time I read it, it was a revelation: thanks to it, I grasped the value of systematic unit testing.

Before reading it, I wrote unit tests only while bug fixing, because of the strong policy of my first company.

Afterwards, I also understood many other programming practices. I realised that unit testing, or automatic testing in general, is one of the most important tasks while programming: it allows automatic regression tests, and easier maintenance and bug fixing.

Another idea important to me: the code-only-once policy, with no duplication allowed. This is what I have been implementing ever since.

Additionally, the use of design patterns is really useful for maintaining the code. Before reading the book, as a young engineer fresh from university, I barely knew what they were useful for. Design patterns are a skill you can apply in whatever object-oriented language you use: C++, C#, Java, etc. I have used them with these languages with success; they allow better extensibility and maintainability of the code.

Whenever there is a choice between different algorithms or a possibility of extension, use a design pattern. The factory pattern is the one I ended up using the most.

Advice for a young programmer

I would advise a young engineer to start by reading this book. It introduces many of the skills you will need to learn if you want to be an excellent programmer. To my mind, the first things you will need to learn are:

  • Systematically add unit tests to your code. If you want to go further, use the TDD method.
  • Have a platform that launches your unit tests every day. Be proactive: if it does not exist, suggest one to your manager. This is very important, because without it your unit tests are not run, and therefore they are useless.
  • When you resolve a bug, first reproduce it. The best way is to reproduce the bug with a unit test: make the test fail at first, and once you have found the fix, the test should pass.
  • Apply the ‘no duplication’ code policy. It is better to factorise the code to avoid duplication; in the long run, the maintenance of the code is easier.
  • Use design patterns to make the code more extensible. When you develop a solution, you should always think long term. For example, you may need a simple algorithm now but a different one in the future: use the factory method pattern so that a different algorithm can be called easily later.
  • Learn the MVC design pattern. It teaches how to separate the UI into several layers. It is important to have a user interface clearly separated into different modules in order to make a later change of technology easier.
  • Use Sonar to improve your coding skills. This tool, when used with Jenkins, will show the problems in the code.

All these tips come from the book, so don't hesitate to read it. It teaches a lot of important skills that an experienced programmer would otherwise only learn after years of development.