Speaking at JavaOne San Francisco 2016

This year, for the third year in a row, I’ve been honoured to speak at the JavaOne San Francisco conference.

I will be joining forces with my colleagues Julio Palma (@restalion), Mariano Rodriguez (@locoporf1), Vicente Gonzalez (@viarellano) and Kevin Hooke (@kevinhooke) to deliver four sessions on Java ME, Java SE on constrained devices, face recognition using open standards, and web application testing with Selenium.

The full conference schedule is already available, so you can start looking at the fantastic sessions and building your own agenda for the five conference days. These are our sessions, in case you would like to join us: https://oracle.rainfocus.com/scripts/catalog/oow16.jsp?event=javaone&search=accenture&search.event=javaone

  • Monday, 19 Sep, 12:30-13:30, Hilton – Golden Gate 6/7/8
    • Session CON3189: Introduction to Java ME 8
    • Speaking: Julio Palma and Kevin Hooke
  • Tuesday, 20 Sep, 14:30-15:30, Hilton – Golden Gate 6/7/8
    • Session CON3187: Java ME and Single-Board Computers for Creating Industrial Middleware
    • Speaking: Julio Palma and Jorge Hidalgo
  • Wednesday, 21 Sep, 13:00-14:00, Hilton – Continental Ballroom 7/8/9
    • Session CON3080: Testing Java Web Applications with Selenium: A Cookbook
    • Speaking: Jorge Hidalgo and Vicente Gonzalez
  • Wednesday, 21 Sep, 15:00-16:00, Hilton – Golden Gate 6/7/8
    • Session CON6217: All Your Faces Belong to Us: Building an Open Face Recognition Platform
    • Speaking: Jorge Hidalgo and Mariano Rodriguez

Looking forward to meeting you there!


Pitest: Measure the Quality of your Unit Tests with Mutation Testing

It is not uncommon for developers to discuss the quality of automated unit tests: are they testing enough of the application code? And, more importantly, are they really verifying the expected behavior?

The first question has a relatively simple answer: use automated code coverage tools that track which lines of code and which branches in the execution flow are being tested. Code coverage reports are very helpful to 1) determine which portions of the application code are not being tested; and 2) if measuring code coverage per individual test, determine whether each test is effectively testing the appropriate piece of application code. If you are interested in techniques for that, you may want to look at this other blog post: https://deors.wordpress.com/2014/07/04/individual-test-coverage-sonarqube-jacoco/

However, no matter how useful measuring code coverage is, these reports will not tell you one fundamental thing about your tests: which behavior is actually being verified!

Simply put, your test code may be passing through every single line of your application code without verifying anything. If you are familiar with the JUnit framework: your test code may not contain a single assertion!
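
As a contrived illustration (the class and method names here are made up, not taken from any real project), the following JUnit test executes every line of a simple method and will show up as 100% coverage, while verifying absolutely nothing:

import org.junit.Test;

public class DiscountCalculatorTest {

    @Test
    public void testCalculateDiscount() {
        DiscountCalculator calculator = new DiscountCalculator();
        // the method is executed, so its lines are reported as covered...
        calculator.calculateDiscount(100.0);
        // ...but without any assertion, nothing about the result is verified
    }
}

A code coverage report will happily paint that method green.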

To overcome this limitation of automated unit testing, one technique that can be of great help is Mutation Testing.

Mutation Testing… Explained

Let’s assume you have your application code and your test code as usual. A mutation testing tool will take your application code and make small, surgical changes to it, one at a time; each change is a so-called “mutation”. It could be changing a relational operator in an if statement (e.g. > is changed to <=), removing a service call, changing a for loop condition, altering a return value, and so forth.

Mutation testing is, therefore, based on a simple assumption: if you are testing your code and making the right assertions to verify its behavior, then re-executing your unit tests against application code containing a mutation should make at least one of them fail.
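
As a made-up example (not taken from any particular code base), consider a method with a boundary check, the kind of mutation a tool could apply to it, and a test assertion that would detect, or “kill”, the mutant:

// original application code
public boolean isEligible(int age) {
    return age >= 18;
}

// conceptually, the mutated version the tests are re-executed against
public boolean isEligible(int age) {
    return age < 18;
}

// a test with a proper assertion fails against the mutant and thus kills it
@Test
public void eligibleAtEighteen() {
    assertTrue(checker.isEligible(18));
}

A test that merely called isEligible without asserting the result would pass against both versions, and the mutation would survive.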

Pitest – A Mutation Testing Framework for Java

Although very interesting, such a technique would be of little use without the proper tools. There are a few, for different languages, such as Jester, Jumble or NinjaTurtles, but probably the most mature and powerful we’ve seen to date is Pitest (http://pitest.org).

Working with Pitest is very simple and requires minimal effort to start. It can be integrated with build tools like Maven, Ant or Gradle, with IDEs like Eclipse (Pitclipse plug-in) or IntelliJ, and with quality tools like SonarQube.

Regardless of how you execute it, Pitest will analyze the application byte code and decide which mutations to introduce (for a full list and description of the mutators available in Pitest, check their site here: http://pitest.org/quickstart/mutators/).

To optimize test execution as much as possible, Pitest gathers code coverage metrics in a “normal execution” and then, for each mutation, re-executes only the test cases that cover the mutated code. Even so, total execution time is noticeably longer than a normal unit test run, basically because of the sizeable test harness that Pitest adds even to the simplest of code bases.

As a result, Pitest generates a fully detailed report showing which mutations “lived” after the execution, that is, which mutations were not detected by any existing assertion. These “lived” mutations are your main focus, because they mean that there is some logic, some return value, or some call that is not being verified.

Of course, not all of the mutations will be meaningful. Some may produce out-of-memory errors or infinite loops. Pitest does its best to detect those cases and remove them from the resulting reports. This behaviour can be fine-tuned if needed, for example by adjusting time-outs and other parameters, but the sensible defaults work really well to start with.

Pitest in Action

Seeing is believing, so we put Pitest to work on a simple 10-class Java library. We decided to use the Maven plug-in, as this method requires zero configuration to start. We opened a command prompt in the project directory and just executed this command:

> mvn org.pitest:pitest-maven:1.0.0:mutationCoverage

After a few minutes (5 to 6 for this project) and lots of iterations showing in the console, the build finished and the reports were generated in the target directory:

> target\pit-reports\201408181908\index.html

When the report loaded in the browser, the first thing that caught our attention was that one class we had worked hard to test fully, AbstractContext, showed one lived mutation despite having 100% code coverage. Oops, something was not properly verified. Was Pitest right?

[Screenshot: mutation-1]

After clicking the class name, we could see the detail of where the lived mutation was found:

[Screenshot: mutation-2]

Pitest was right! Although that method is fully tested, and there are test cases for every single execution flow, we were missing the proper assertion for that if statement. Really hard to catch without a good tool helping us find out more about our unit tests.
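
We cannot reproduce the actual AbstractContext code here, but the gap was of this general shape (a hypothetical reconstruction, with invented names): a branch that is executed by a test case, so it counts as covered, while its outcome is never asserted:

// application code (hypothetical): returns a default when the property is missing
public String getProperty(String key) {
    if (properties.containsKey(key)) {
        return properties.get(key);
    }
    return DEFAULT_VALUE;
}

// test code (hypothetical): the branch is executed, so coverage is complete,
// but the returned value is never checked, so a mutation in the if statement survives
@Test
public void testGetMissingProperty() {
    String value = context.getProperty("missingKey");
    // missing assertion, e.g.: assertEquals(DEFAULT_VALUE, value);
}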

Of course, the next step was to add the forgotten assertion to the relevant test method. Once done, we re-launched Pitest. After a few minutes, a new set of reports was created and, once loaded in the browser… a clean result for that class!

[Screenshot: mutation-4]

Conclusion

Although we were arguably a bit fortunate to obtain such a fabulous result on the first try, a more thorough inspection of the reports did reveal many other places where assertions were missing.

Our view is that Pitest is a very valuable tool for writing really meaningful and truly useful automated unit test suites, and it should be standard gear for Java projects going forward. It is simple to use, requires zero or minimal configuration, and produces valuable results that directly impact the quality of the tests we create, and therefore the quality of our deliverables.

To mutate, or not to mutate: that is the question.
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous unit tests.

Code Coverage of Individual Tests with SonarQube and JaCoCo

This post explains how to enable SonarQube to gather test code coverage metrics for individual tests. Code coverage tools typically produce a report showing the code coverage (by line, branch, etc.) for the combined effect of all the tests executed during a given test session. This is the case, for example, when you run unit tests in continuous integration. With the help of SonarQube and JaCoCo, it is possible to gather coverage metrics split at the level of the individual test case (a test method in JUnit or TestNG). Enabling this requires some special configuration, which we show in this post.

The Environment

The following process has been verified with SonarQube versions 4.1.2 and 4.3.2, but it should work with SonarQube 3.7.x (the latest LTS release), too. The application code we have used to verify the setup is the familiar Spring Pet Clinic application, enhanced to support Tomcat 7 and Spring 3 (see this post for reference on the updates needed in Pet Clinic: https://deors.wordpress.com/2012/09/06/petclinic-tomcat-7/). The code can be downloaded from GitHub in this repository: https://github.com/deors/deors.demos.petclinic

The Instructions

The instructions are really simple, once you’ve figured out how to connect all the dots. All that is required is to add some specific configuration to the Maven Surefire plug-in (Surefire is the plug-in tasked with unit test execution, and it supports both JUnit and TestNG). As this specific configuration should not impact the regular unit test execution, it is recommended to put it in a separate profile that will be activated only when the SonarQube analysis is performed. Let’s describe the required changes in the pom.xml file, section by section.

Build Section

No changes are needed here. However, you should take note of any customised Maven Surefire configuration to be sure it is also applied to the profile we are going to create. In the case of Spring Pet Clinic, this is the relevant portion of the POM, written down here for reference:

<build><plugins>
...
 <plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.13</version>
  <configuration>
   <argLine>-XX:-UseSplitVerifier</argLine>
   <includes>
    <include>**/*Test.java</include>
    <include>**/*Tests.java</include>
   </includes>
   <excludes>
    <exclude>**/it/*IT.java</exclude>
   </excludes>
  </configuration>
 </plugin>
...
</plugins></build>

This piece of configuration tells Surefire to: 1) exclude the integration tests from the unit test execution (integration tests are covered by Surefire’s twin plug-in, Failsafe); and 2) disable the byte code verifier, preventing runtime errors when classes are instrumented (e.g. when adding mocks, or TopLink enhancements).

Dependencies Section

Again, no changes are needed in this section. We just wanted to note that if your project is already leveraging JaCoCo to gather integration test coverage metrics, and is explicitly referring to the JaCoCo artefact in this section, it can be left in place; no conflicts have been identified so far. Anyway, it should not be needed here, so it’s probably safer to remove it from this section.

Profiles Section

All the required changes come in this section, and they are very clean, as they only require adding a new profile to the POM. This profile will configure a special listener for Surefire that ensures that coverage metrics for each individual test case are appropriately gathered. To guarantee a successful test execution, we will keep here the same configuration that appears in the build section of the POM. Finally, the profile will add a new dependency on the artefact that contains the listener code. The result is this:

<profile>
 <!-- calculate coverage metrics per test with SonarQube and JaCoCo -->
 <id>coverage-per-test</id>
 <build>
  <plugins>
   <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.13</version>
    <configuration>
     <!-- same configuration as in the regular test execution goal -->
     <!-- plus argLine parameter configured by JaCoCo prepare-agent -->
     <argLine>${argLine} -XX:-UseSplitVerifier</argLine>
     <includes>
      <include>**/*Test.java</include>
      <include>**/*Tests.java</include>
     </includes>
     <excludes>
      <exclude>**/it/*IT.java</exclude>
     </excludes>
     <!-- new configuration needed for coverage per test -->
     <properties>
      <property>
       <name>listener</name>
       <value>org.sonar.java.jacoco.JUnitListener</value>
      </property>
     </properties>
    </configuration>
   </plugin>
  </plugins>
 </build>
 <dependencies>
  <dependency>
   <groupId>org.codehaus.sonar-plugins.java</groupId>
   <artifactId>sonar-jacoco-listeners</artifactId>
   <version>2.3</version>
   <scope>test</scope>
  </dependency>
 </dependencies>
</profile>

A word of warning about the JaCoCo listener artefact version: although it is unclear in the documentation, it seems that the best results are obtained when the listener version matches that of the Java plug-in installed in SonarQube. In this case, as the Java plug-in we have installed in SonarQube is version 2.3, we have used listener artefact version 2.3. We also tested with listener 1.2 with the same good results, but to prevent any future conflict, we recommend keeping the versions aligned.

Running the Analysis

Once the changes in the project configuration are done, you just need to re-execute a SonarQube analysis to see the new reports.

Depending on which version of the SonarQube Java plug-in you have installed, the configuration differs a bit.

Running the Analysis in Older Versions

When the Java plug-in version in use is 2.1 or earlier, the profile should be enabled when the analysis executes, and only then. This means that it is now a requirement to launch the sonar:sonar goal as a separate Maven build (it was already recommended to do so, but in many cases you could execute all the targets in one run). In the case of our version of Pet Clinic:

>mvn clean verify -P cargo-tomcat,selenium-tests,jmeter-tests
>mvn sonar:sonar -P coverage-per-test

If your build is triggered by a Jenkins job, then the new profile should be added to the post-build action, as can be seen in this screenshot:

[Screenshot: sonar-post-build]

Running the Analysis in Newer Versions

When the Java plug-in version in use is 2.2 or newer, code coverage is no longer executed during the analysis. Therefore you should configure the build to gather the code coverage metrics first:

>mvn clean org.jacoco:jacoco-maven-plugin:0.7.0.201403182114:prepare-agent verify -P coverage-per-test,cargo-tomcat,selenium-tests,jmeter-tests
>mvn sonar:sonar -P coverage-per-test

If your build is triggered by a Jenkins job, then the JaCoCo prepare agent goal and the new profile should be added to the build action as can be seen in this screenshot:

[Screenshot: sonar-maven-modern]

Analysis Results

Once the analysis is completed, the code coverage reports get some interesting new views. When clicking on any test in the test view, a new column labelled ‘Covered Lines’ shows the individual hits for each test method in the class:

[Screenshot: sonar-test-summary]

When the link on the Covered Lines value is followed, a new widget appears containing all the classes hit by that test method, and the touched lines per class:

[Screenshot: sonar-test-detail]

When the link under each of the classes is followed, a new widget appears showing the class source coloured with the actual line/branch hits:

[Screenshot: sonar-test-code]

Users can also get to this view by navigating through other views, such as the components or violations drill-downs. Once the class level is reached, users can use the ‘Coverage’ tab to get this information:

[Screenshot: sonar-class-coverage]

By default, the decoration shown is ‘Lines to cover’, showing the code coverage from all tests combined. Use the drop-down list and select ‘Per test -> Covered lines’, then select the right test case in the new drop-down list that will appear:

[Screenshot: sonar-class-select-decoration]

[Screenshot: sonar-class-select-testcase]

[Screenshot: sonar-class-final]

Conclusion

Measuring code coverage of individual tests is a very useful feature to have in development projects. Code coverage metrics alone may not be sufficient to confirm that the right tests are being executed and that they are touching the right functionality. With the ability to identify which portions of the code are executed by each test case, developers and testers can ensure that the expected code logic is tested, versus what can be obtained with other code coverage tools that only give a combined coverage report.

Next-generation IDEs talk at OpenSlava 2013 conference

On October 11th, 2013, I participated in the first-ever OpenSlava conference (www.openslava.sk).

[Image: openslava]

Held in Bratislava, Slovakia, this conference was devoted to the latest and greatest around Java and open source technologies. With talks about dynamic languages, Node.js, data architectures and automated infrastructure provisioning with Chef, OpenSlava set out to establish itself as a reference event in the region, and I really think it exceeded all expectations.

Excellent organization, an impressive selection of topics and a warm, crowded reception by academics, students and professionals from the region. I feel honoured to have been part of it.

The talk I contributed was “Next-generation IDEs”: a journey through IDE features and characteristics from the point of view of three people: Lisa, an undergraduate student; Stefan, an enterprise developer with 2 years of IT experience; and Adam, a hardcore developer with many years of experience, passionate about coding and a contributor to open source projects. While each of them looks at the IDE with different priorities, their collective experiences and needs help us determine which characteristics a next-generation IDE should have.

The deck and session recordings will be made public on the conference site shortly, but in the meantime you can get the slides directly from the link below:

OpenSlava – Next-Gen IDEs v2 – Jorge Hidalgo

Edit: the session recording is now available on YouTube: http://www.youtube.com/watch?v=QipY0vcgVA8

Check the OpenSlava channel for many other awesome presentations. Don’t miss them!

Idiom for Browser-Selectable Selenium Tests

For some time I’ve wanted to share an idiom I personally use and recommend when building Selenium tests. This idiom makes it possible to control which browsers are used to run the tests without needing to update test sources or configuration.

The simple ideas behind this idiom are:

  • Test code and configuration should not depend on the test environment.
  • Tests can be executed in any given browser, independently from others.
  • To change the browsers used for test execution, there should be no need to update test sources or configuration.
  • Selenium Grid URL and application URL are also configurable.
  • Both environment variables and Java system properties can be used.
  • All settings have sensible defaults.

I call this idiom ‘Browser-Selectable Tests’. I promise I’ll keep thinking of a better name 🙂
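
To give a feel for it, here is a minimal sketch of the idiom (names and defaults are illustrative, not necessarily the ones used in my actual test suites): every setting is resolved from a Java system property first, then from an environment variable, and finally falls back to a sensible default:

public final class TestEnvironment {

    // resolve a setting: system property first, then environment variable, then default
    private static String getConfigValue(String propertyName, String envVariableName, String defaultValue) {
        String value = System.getProperty(propertyName);
        if (value == null || value.isEmpty()) {
            value = System.getenv(envVariableName);
        }
        return (value == null || value.isEmpty()) ? defaultValue : value;
    }

    // browsers to run the tests with, e.g. -Dtest.target.browsers=chrome,firefox
    public static final String BROWSERS =
        getConfigValue("test.target.browsers", "TEST_TARGET_BROWSERS", "firefox");

    // Selenium Grid URL and application base URL, overridable the same way
    public static final String GRID_URL =
        getConfigValue("test.grid.url", "TEST_GRID_URL", "http://localhost:4444/wd/hub");

    public static final String BASE_URL =
        getConfigValue("test.target.url", "TEST_TARGET_URL", "http://localhost:8080/petclinic");
}

Test classes then read these values to decide which RemoteWebDriver instances to create against the Grid URL, so switching browsers is just a matter of passing a different property or environment variable, with no changes to sources or configuration files.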

Continue reading “Idiom for Browser-Selectable Selenium Tests”

Selenium WebDriver: Waiting for an application to be fully loaded

While working with Selenium WebDriver to automate web application tests across multiple browsers and application platforms, we found some cases in which the application under test was not fully loaded when our tests executed, causing the test with the first browser to fail (but not the subsequent ones).

For example, when using Cargo to provision an embedded JBoss container, the ‘server ready’ flag was sent once the HTTP services were available, but the application was not loaded until the first request hit the server.

After trying some Cargo settings without success, we turned to adapting the test script to handle this.

To our surprise, the WebDriver API provided us with a very elegant, minimally disruptive way of handling this situation.

In previous posts we were simply using a pattern like this one to load a page, find a link and click on it:

driver.get(baseUrl);
driver.findElement(By.linkText("Find owner")).click();

In those posts we also used another pattern to wait for a page to be loaded after a link click or form submit:

(new WebDriverWait(driver, 5)).until(new ExpectedCondition<Boolean>() {
    public Boolean apply(WebDriver d) {
        return d.getCurrentUrl().startsWith(baseUrl + "/owners/search");
    }
});

Up to here, nothing new. WebDriverWait and ExpectedCondition allow for defining a wide range of conditions, like waiting for a new page to become available, waiting for a field to be enabled or waiting for some AJAX response to be received. Combining the previous two snippets, we can write a condition that reads as: keep trying to load this page until it contains a link with the text “Find owner”, but do not wait for more than five seconds. Here is the code we used:

// wait for the application to get fully loaded
WebElement findOwnerLink = (new WebDriverWait(driver, 5)).until(new ExpectedCondition<WebElement>() {
    public WebElement apply(WebDriver d) {
        d.get(baseUrl);
        return d.findElement(By.linkText("Find owner"));
    }
});

findOwnerLink.click();

With this small change in code, the test script waits until the application is fully loaded and the first page in the test sequence is available.

... Cargo output provisioning the server and JBoss starting up

[INFO] [talledLocalContainer] 09:20:06,004 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015876: Starting deployment of "petclinic.war"
[INFO] [talledLocalContainer] 09:20:06,004 INFO [org.jboss.as.server.deployment] (MSC service thread 1-3) JBAS015876: Starting deployment of "cargocpc.war"
[INFO] [talledLocalContainer] 09:20:07,779 INFO [org.jboss.web] (MSC service thread 1-5) JBAS018210: Registering web context: /cargocpc
[INFO] [talledLocalContainer] JBoss 7.1.1.Final started on port [8180]

... JBoss Cargo adapter sends the 'ready' flag at this point

[INFO]
[INFO] --- maven-failsafe-plugin:2.8.1:integration-test (integration-test) @ org.springframework.samples.petclinic-rhc ---
[INFO] Failsafe report directory: C:\projects\deors.demos\petclinic\org.springframework.samples.petclinic-rhc\target\failsafe-reports
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.springframework.samples.petclinic.it.NewPetFirstVisitIT

... Test script is waiting!

[INFO] [talledLocalContainer] 09:20:12,521 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-1) JBAS010404: Deploying non-JDBC-compliant driver class com.mysql.jdbc.Driver (version 5.1)
[INFO] [talledLocalContainer] 09:20:12,605 INFO [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/petclinic]] (MSC service thread 1-2) Initializing Spring root WebApplicationContext
[INFO] [talledLocalContainer] 09:20:12,608 INFO [org.springframework.web.context.ContextLoader] (MSC service thread 1-2) Root WebApplicationContext: initialization started
[INFO] [talledLocalContainer] 09:20:12,648 INFO [org.springframework.web.context.support.XmlWebApplicationContext] (MSC service thread 1-2) Refreshing Root WebApplicationContext: startup date [Fri Jan 11 09:20:12 CET 2013]; root of context hierarchy
[INFO] [talledLocalContainer] 09:20:12,704 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] (MSC service thread 1-2) Loading XML bean definitions from ServletContext resource [/WEB-INF/classes/applicationContext-jdbc.xml]
[INFO] [talledLocalContainer] 09:20:13,037 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] (MSC service thread 1-2) Loading XML bean definitions from ServletContext resource [/WEB-INF/classes/applicationContext-dataSource.xml]

... Rest of JBoss and Pet Clinic initialization - test is waiting for the app to be available

[INFO] [talledLocalContainer] 09:20:16,743 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "petclinic.war"
[INFO] [talledLocalContainer] 09:20:16,744 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "cargocpc.war"
Tests run: 6, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 11.745 sec

... The test executed successfully (some browsers skipped)

Curiously, this does not happen with Cargo and Tomcat/Jetty, but adding the wait did no harm!

Happy testing!

P.S.: More on WebDriver waits here:  http://seleniumhq.org/docs/04_webdriver_advanced.jsp

Installing Sonar in OpenShift as a DIY application

Note: this is an excerpt extracted from my talk at Red Hat Developer Day London. You can see more about the talk in my post here:

https://deors.wordpress.com/2012/10/03/developer-day/

Sonar is a popular code quality analysis tool and dashboard that excels when used alongside a Continuous Integration engine:

  • Seamless integration with Maven.
  • Leverages best-of-breed tools such as Checkstyle, PMD or FindBugs.
  • Configurable quality profiles.
  • Re-execution of tests and test code coverage (UT, IT).
  • Design Structure Matrix analysis.
  • Flexible and highly customisable dashboard.
  • Action plans / peer reviews.
  • Historic views / run charts.
  • Can be used with Java, .Net, C/C++, Groovy, PHP,…

Continue reading “Installing Sonar in OpenShift as a DIY application”