Provision an OpenStack instance with Terraform


A quick way to create an instance on OpenStack is to use Terraform, an open-source Infrastructure-as-Code (IaC) tool developed by HashiCorp.
Here is my environment, quickly drawn 😉 :

This instance will be provisioned with Linux CirrOS.
The Terraform plan that I used :

# Creating an SSH key pair resource
resource "openstack_compute_keypair_v2" "MaCle2" {
  provider   = openstack.hello # Provider name declared in provider.tf
  name       = "MaCle2" # Name of the SSH key to use for creation
  public_key = file("~/.ssh/id_rsa.pub") # Path to your previously generated SSH key
}

# Creating the instance
resource "openstack_compute_instance_v2" "create_an_instance" {
  name        = "MonInstance2" #Instance name
  provider    = openstack.hello  # Provider name
  image_name  = "cirros-0.5.2-x86_64-disk" # Image name
  flavor_name = "m1.nano" # Instance type name
  key_pair    = openstack_compute_keypair_v2.MaCle2.name
  security_groups = ["default"]
  network {
    name      = "Shared" # Adds the network component to reach your instance
  }
}
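To avoid digging through the OpenStack dashboard after the apply, an output block can print the address of the new instance. A minimal sketch, assuming the `access_ip_v4` exported attribute of `openstack_compute_instance_v2` (check the provider documentation linked below for the exact attribute name in your provider version) :

```hcl
# Prints the instance IPv4 address after "terraform apply"
output "instance_ip" {
  value = openstack_compute_instance_v2.create_an_instance.access_ip_v4
}
```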

The provider used to interact with OpenStack is terraform-provider-openstack :

# Define providers and set versions
terraform {
  required_version = ">= 0.14.0"
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.42.0"
    }
  }
}

# Configure the OpenStack provider 
provider "openstack" {
  auth_url    = "http://10.0.2.15/identity" # Authentication URL
  domain_name = "default" 
  alias       = "hello" 
}

The documentation for terraform-provider-openstack can be found here :
https://registry.terraform.io/providers/terraform-provider-openstack/openstack/latest/docs

Commands to run :
1) terraform init : to initialize the working directory and download the provider plugin

2) terraform validate : to validate/verify the configuration

3) terraform plan : to preview the changes

4) terraform apply : to actually apply/run the actions

5) terraform destroy : to destroy everything and return to the initial state. In this case, that means both the instance and the key will be deleted.

This is part of the output after running the “terraform apply” command :

Of course, the OpenStack environment variables need to be set first :

source admin-openrc.sh

OpenStack compute service : error when trying to access the instances section

In the Horizon dashboard, I tried to access the instances section, which is part of the compute service. I got this error :

JSONDecodeError at /project/instances/
Expecting value: line 1 column 1 (char 0)
Request Method: 	GET
Request URL: 	http://localhost/dashboard/project/instances/
Django Version: 	3.2.10
Exception Type: 	JSONDecodeError
Exception Value: 	
Expecting value: line 1 column 1 (char 0)
Exception Location: 	/usr/lib/python3.8/json/decoder.py, line 355, in raw_decode

I checked all the running DevStack services : all of them were running except one :

I tried to start just that service, but it was not enough. I had to restart all of the services :

sudo systemctl restart devstack@*

Installing OpenStack on Ubuntu


Here are the steps to install OpenStack with DevStack on Ubuntu 20.04.3 LTS.
DevStack really simplifies the installation of OpenStack.
I followed the steps described here : https://docs.openstack.org/devstack/latest/
1) First you need to create the user “stack” :
sudo useradd -s /bin/bash -d /opt/stack -m stack
2) That user should have no password and have sudo privileges :
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
3) Login as the user stack :
sudo -u stack -i
4) Then clone the DevStack files :
git clone https://opendev.org/openstack/devstack
5) Go to the devstack folder and create a file called local.conf :
cd devstack
touch local.conf
Add the following content :
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

And finally launch the script that will install OpenStack :
./stack.sh

2021-12-15 20:03:55.748 | stack.sh completed in 1444 seconds. (that is a bit more than 24 minutes)

You can then login to the Horizon dashboard. Just go to the URL http://localhost. User/password : admin/secret

You can check the version installed :

And also launch OpenStack CLI :

Spring Boot and Graylog

Graylog is a log management platform (with free and enterprise editions) for collecting and aggregating log data. It is built on top of Elasticsearch and MongoDB.
It can display messages and histograms.

To send data to a Graylog server through an HTTP appender with Log4j 2, I used the GELF layout with the following configuration :

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="debug" monitorInterval="30">
	<Properties>
		<Property name="LOG_PATTERN">
			%d{yyyy-MM-dd HH:mm:ss.SSS} %5p ${hostName}
			--- [%15.15t] %-40.40c{1.} : %m%n%ex
		</Property>
	</Properties>
	<Appenders>
		<Console name="ConsoleAppender" target="SYSTEM_OUT" follow="true">
			<PatternLayout pattern="${LOG_PATTERN}" />
		</Console>

        <!--  Appender for GRAYLOG -->     
		<Http name="Http" url="http://xxx.yyy.zz.aa/gelf" >		
			<GelfLayout host="localhostGraylog" compressionType="OFF" includeStacktrace="true">
				<KeyValuePair key="environment" value="myappGraylog" />
			</GelfLayout>
		</Http>
		
	</Appenders>

	<Loggers>
		<Logger name="com.celinio.app" level="debug"
			additivity="false">
			<AppenderRef ref="ConsoleAppender" />
 			<AppenderRef ref="Http" /> 
		</Logger>

		<!-- Spring Boot -->
		<Logger name="org.springframework.boot" level="debug"
			additivity="false">
		        <AppenderRef ref="ConsoleAppender" />
 			<AppenderRef ref="Http" /> 
		</Logger>

		<Logger name="org.hibernate" level="debug" additivity="false">
			<AppenderRef ref="ConsoleAppender" />
 			<AppenderRef ref="Http" /> 
		</Logger>

		<Root level="debug">
			<AppenderRef ref="ConsoleAppender" />
 			<AppenderRef ref="Http" /> 
		</Root>
	</Loggers>
</Configuration>

I had to use the latest version of Log4j 2 at the time (2.11.2) and I also had to comment out the Spring Boot starter for Log4j 2 :

<properties>
        ...
    	<log4j.version>2.11.2</log4j.version>
</properties>
<dependencies>
        ...
	<!-- 
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-log4j2</artifactId>
	</dependency>
	 -->
	<dependency>
		<groupId>org.apache.logging.log4j</groupId>
		<artifactId>log4j-api</artifactId>	
		<version>${log4j.version}</version>
	</dependency>
	<dependency>
		<groupId>org.apache.logging.log4j</groupId>
		<artifactId>log4j-core</artifactId>
		<version>${log4j.version}</version>
	</dependency>
</dependencies>

Version of Spring Boot : 1.5.21.RELEASE

Delete a file from a project generated through a custom maven archetype if a condition is met

With the help of Groovy, we can delete a file from a generated project if a condition is met (for instance, if a Maven property equals a certain value).

Here is the procedure :
1) Create the archetype. To do that, type the following command from the root of a Maven project :
mvn archetype:create-from-project
result ==> [INFO] Archetype project created in E:\eclipse\workspace\celinio\target\generated-sources\archetype

The archetype project is here :
/celinio/target/generated-sources/archetype

Copy-paste this folder structure into another folder in the workspace and import it as an existing maven project.
By default, I have decided that this archetype will generate a project which contains a file /src/main/resources/META-INF/configuration.xml. This is the file that I do not want to include if the maven property configurable is set to no.

2) Add the following required property to /celinioArchetype/src/main/resources/META-INF/maven/archetype-metadata.xml

  <requiredProperties>
    <requiredProperty key="configurable">
      <defaultValue>yes</defaultValue>
    </requiredProperty>
  </requiredProperties>

Also add this property to /celinioArchetype/src/test/resources/projects/basic/archetype.properties :
configurable=yes

3) Add the following groovy script archetype-post-generate.groovy under the folder /celinioArchetype/src/main/resources/META-INF/
It will be executed upon creating a project from this archetype.
https://maven.apache.org/archetype/maven-archetype-plugin/advanced-usage.html

/**
* CF : This script will be executed upon creating a project from this archetype.
* https://maven.apache.org/archetype/maven-archetype-plugin/advanced-usage.html
* It deletes the configuration.xml file if the value of the "configurable" property
* is not set to "yes".
*/

import java.nio.file.Files
import java.nio.file.Path
import java.nio.file.Paths

println "artifactId: " + artifactId
println "request: " + request
println "archetypeArtifactId: " + request.getArchetypeArtifactId()
println "archetypeGroupId: " + request.getArchetypeGroupId()
println "archetypeVersion: " + request.getArchetypeVersion()
println "archetypeName: " + request.getArchetypeName()
println "artifactId: " + request.getArtifactId()
println "groupId: " + request.getGroupId()
println "version: " + request.getVersion()

Path projectPath = Paths.get(request.outputDirectory, request.artifactId)
Properties properties = request.properties
String configurableProperty = properties.get("configurable")
println "configurableProperty : " + configurableProperty

if (!configurableProperty.equals("yes")) {
   println "Deleting the configuration.xml file"
   
   Path configPath = projectPath.resolve("src/main/resources/META-INF")   
   String configurationFile = "configuration.xml";    
   Path cxfConfigPath = configPath.resolve(configurationFile)
   println "cxfConfigPath " + cxfConfigPath  
   Files.deleteIfExists cxfConfigPath
}
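The conditional delete performed by the Groovy script can be sketched in plain Java as well (a hypothetical standalone helper, written by me for illustration; the class and method names are not part of the archetype plugin) :

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class PostGenerateSketch {

    /**
     * Mirrors the post-generate logic: removes
     * src/main/resources/META-INF/configuration.xml from the generated
     * project unless the "configurable" property equals "yes".
     * Returns true if the file was actually deleted.
     */
    public static boolean cleanUp(Path projectPath, Properties props) throws IOException {
        if ("yes".equals(props.getProperty("configurable"))) {
            return false; // property is "yes": keep the file
        }
        Path config = projectPath.resolve("src/main/resources/META-INF/configuration.xml");
        return Files.deleteIfExists(config);
    }
}
```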

To test it in Eclipse, you will need to create a project from a run configuration, as the Groovy script is not executed if you go through the wizard (File > New > Maven Project > select archetype, etc.).
See this link for details :
https://bugs.eclipse.org/bugs/show_bug.cgi?id=514993

I have created a repository on github to host the code for this basic maven archetype :
https://github.com/longbeach/mavenArchetypeGroovy

I added two run configuration files which will help with testing it.

Add UML diagrams to the JavaDoc

It is possible to add UML diagrams to the JavaDoc generated during the build phase.

The first thing to do is to install Graphviz which is an open source graph visualization software.
After installation, add the bin folder (D:\Graphviz\bin for instance) to the PATH environment variable.
Then configure the pom.xml :

<build>
	<plugins>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-javadoc-plugin</artifactId>
			<version>3.0.1</version>
			<configuration>
				<doclet>org.umlgraph.doclet.UmlGraphDoc</doclet>
				<docletArtifact>
					<groupId>org.umlgraph</groupId>
					<artifactId>umlgraph</artifactId>
					<version>5.6.6</version>
				</docletArtifact>
				<additionalparam>-views -all</additionalparam>
				<doclint>none</doclint>
				<useStandardDocletOptions>true</useStandardDocletOptions>
			</configuration>
			<executions>
				<execution>
					<id>attach-javadocs</id>
					<goals>
						<goal>jar</goal>
					</goals>
				</execution>
			</executions>
		</plugin>
	</plugins>
</build>

Here I chose to add the maven-javadoc-plugin to the Maven build. In the configuration of the plugin, I added the UmlGraphDoc doclet.
UMLGraph allows the declarative specification and drawing of UML class and sequence diagrams.

Run mvn install and then check the generated JavaDoc under the folder target/site/apidocs/
Here is a sample :

The code of this project is available on my github repository :
https://github.com/longbeach/eclipseumlgraph

Modularity and plug-ability features in Servlet 3.0 specification with Tomcat 9

Modularity and plug-ability features in the Servlet 3.0 specification are also called web fragments. JSR 315 (the Java Servlet 3.0 specification) is already 9 years old. The final release dates back to December 10, 2009.
https://jcp.org/en/jsr/detail?id=315
Web fragments are one of the main features of this JSR.
A web fragment is an XML file named web-fragment.xml and located inside the META-INF folder of a jar library.
The root element of this file must be :

<web-fragment>

Web fragments are XML files that carry part of the configuration that would otherwise go into the web.xml file.

Here is a short and quick demonstration of it. First, create a web app project with the help of the maven-archetype-webapp archetype.

Here is the content of the web.xml of this first project :

<!DOCTYPE web-app PUBLIC
 "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
 "http://java.sun.com/dtd/web-app_2_3.dtd" >

<web-app>
  <display-name>Archetype Created Web Application</display-name>
</web-app>

Now create a second project with the help of the maven-archetype-quickstart archetype. Add a resources folder under src/main and then create a META-INF folder under that resources folder. Finally add a new file called web-fragment.xml to that META-INF folder with the following content :

<?xml version="1.0" encoding="UTF-8"?>
<web-fragment xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://java.sun.com/xml/ns/javaee"
              xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
              xsi:schemaLocation="http://java.sun.com/xml/ns/javaee/web-fragment_3_0.xsd"
              id="WebAppFragment_ID" version="3.0">
    <name>authentication-fragment</name>

    <security-constraint>
        <web-resource-collection>
            <web-resource-name>Client</web-resource-name>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <role-name>client</role-name>
        </auth-constraint>
    </security-constraint>

    <security-role>
        <role-name>client</role-name>
    </security-role>

    <login-config>
        <auth-method>FORM</auth-method>
        <realm-name>Client</realm-name>
        <form-login-config>
            <form-login-page>/login.jsp</form-login-page>
            <form-error-page>/login-error.jsp</form-error-page>
        </form-login-config>
    </login-config>

    <error-page>
        <error-code>404</error-code>
        <location>/home.jsp</location>
    </error-page>

</web-fragment>

Add this second project as a dependency in the first project :

...
	<dependency>
			<groupId>com.celinio</groupId>
			<artifactId>secondproject</artifactId>
			<version>0.0.1-SNAPSHOT</version>
		</dependency>
...

Now if we deploy the artifact mainproject.war to a web server such as Tomcat, the following web.xml will be produced :

15-Dec-2018 18:42:12.960 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive [E:\devJava\apache-tomcat-9.0.13\webapps\mainproject.war]
15-Dec-2018 18:42:13.194 INFO [main] org.apache.catalina.startup.ContextConfig.webConfig web.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE web-app PUBLIC
  "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
  "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>  <display-name>Archetype Created Web Application</display-name>




  <servlet>
    <servlet-name>default</servlet-name>
    <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
    <init-param>
      <param-name>listings</param-name>
      <param-value>false</param-value>
    </init-param>
    <init-param>
      <param-name>debug</param-name>
      <param-value>0</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet>
    <servlet-name>jsp</servlet-name>
    <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
    <init-param>
      <param-name>fork</param-name>
      <param-value>false</param-value>
    </init-param>
    <init-param>
      <param-name>xpoweredBy</param-name>
      <param-value>false</param-value>
    </init-param>
    <load-on-startup>3</load-on-startup>
  </servlet>

...
<security-constraint>
    <web-resource-collection>
      <web-resource-name>Client</web-resource-name>
      <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
      <role-name>client</role-name>
    </auth-constraint>
    <user-data-constraint>
      <transport-guarantee>NONE</transport-guarantee>
    </user-data-constraint>
  </security-constraint>

  <login-config>
    <auth-method>FORM</auth-method>
    <realm-name>Client</realm-name>
    <form-login-config>
      <form-login-page>/login.jsp</form-login-page>
      <form-error-page>/login-error.jsp</form-error-page>
    </form-login-config>
  </login-config>

  <security-role>
    <role-name>client</role-name>
  </security-role>



</web-app>
With Tomcat 9, to view the content of the generated web.xml, it is necessary to add the logEffectiveWebXml attribute to the Context tag in the context.xml file located under the conf folder, and set it to true :
<?xml version="1.0" encoding="UTF-8"?>
<!-- The contents of this file will be loaded for each web application -->
<Context  logEffectiveWebXml="true">

    <!-- Default set of monitored resources. If one of these changes, the    -->
    <!-- web application will be reloaded.                                   -->
    <WatchedResource>WEB-INF/web.xml</WatchedResource>
    <WatchedResource>WEB-INF/tomcat-web.xml</WatchedResource>
    <WatchedResource>${catalina.base}/conf/web.xml</WatchedResource>

    <!-- Uncomment this to disable session persistence across Tomcat restarts -->
    <!--
    <Manager pathname="" />
    -->
</Context>
During deployment, the web server scans and processes the various web fragments that it finds, and concatenates all of them into the main web.xml file. This is a simple demonstration. You might be wondering what happens if there is more than one web fragment file, and more precisely in which order they are added to the generated web.xml. To deal with that, the absolute-ordering tag was added :
<web-app>
...
<absolute-ordering>
   <name>MyFirstFragment</name>
   <name>MySecondFragment</name>
</absolute-ordering>
...
</web-app>

Finally, the web fragments can also contain ordering themselves with the use of the ordering tag :
<web-fragment>
<name>MyFirstFragment</name>
...
<ordering>
   <before>
   		<name>MySecondFragment</name>
   </before>
</ordering>
...
</web-fragment>

The code of this small demonstration is on my github account :
https://github.com/longbeach/webfragments.git


Batch insert data with JPA, Hibernate and PostgreSQL

So you coded the data layer of your application using JPA (with, for instance, the Hibernate implementation) and when the application reaches the staging environment, you hear it is too slow !
Too many queries, queries that take too long, etc.
What do you do ? You try to optimize your data layer !
One possible optimization is to batch insert/update statements.
The following properties must be set in persistence.xml :

...
      <property name="hibernate.jdbc.batch_size" value="50" />
      <property name="hibernate.order_inserts" value="true" />
      <property name="hibernate.order_updates" value="true" />
...

Now the big question is : how do I know it works ?
Checking the Hibernate logs still shows that several insert queries were generated instead of one (version used is hibernate-core-4.3.6.Final.jar) :


02 nov. 2017 19:02:42,190 DEBUG SQL(109) -
insert
    into
        lambda
        (id_lambda, color, city)
    values
        (?, ?, ?)
02 nov. 2017 19:02:42,193 DEBUG AbstractBatchImpl(145) - Reusing batch statement
02 nov. 2017 19:02:42,193 DEBUG SQL(109) -
    insert
    into
        lambda
        (id_lambda, color, city)    
  values
        (?, ?, ?)
02 nov. 2017 19:02:42,196 DEBUG AbstractBatchImpl(145) - Reusing batch statement
02 nov. 2017 19:02:42,196 DEBUG SQL(109) -
    insert
    into
        lambda
        (id_lambda, color, city)    
    values
        (?, ?, ?)
02 nov. 2017 19:02:42,199 DEBUG AbstractBatchImpl(145) - Reusing batch statement
02 nov. 2017 19:02:42,199 DEBUG SQL(109) -
    insert
    into
        lambda
        (id_lambda, color, city)    
    values
        (?, ?, ?)
02 nov. 2017 19:02:42,201 DEBUG BatchingBatch(110) - Executing batch size: 4
02 nov. 2017 19:02:42,253 DEBUG Expectations(78) - Success of batch update unknown: 0
02 nov. 2017 19:02:42,254 DEBUG Expectations(78) - Success of batch update unknown: 1
02 nov. 2017 19:02:42,254 DEBUG Expectations(78) - Success of batch update unknown: 2
02 nov. 2017 19:02:42,254 DEBUG Expectations(78) - Success of batch update unknown: 3
02 nov. 2017 19:02:42,254 DEBUG SQL(109) -

The thing to keep in mind is that the Hibernate logs are one thing, and what ACTUALLY happens in the database is another.

The PostgreSQL logs (C:\Program Files\PostgreSQL\9.4\data\pg_log\postgresql-2017-11-06_100916.log) show a batched insert statement :


2017-11-02 19:02:42 CET LOG:  execute <unnamed>: insert into lambda (id_lambda, color, city) values ($1, $2, $3),($4, $5, $6),($7, $8, $9),($10, $11, $12)
2017-11-02 19:02:42 CET DETAIL:  parameters : $1 = '1', $2 = 'blue', $3 = 'Paris', $4 = '2', $5 = 'red', $6 = 'Dakar', $7 = '3', $8 = 'red', $9 = 'Lisbon', $10 = '4', $11 = 'violet', $12 = 'Helsinki'

To make it work with PostgreSQL, it is necessary to use a recent version of the PostgreSQL JDBC driver :

...
 <dependency>
      <groupId>org.postgresql</groupId>
      <artifactId>postgresql</artifactId>
      <version>42.1.4.jre7</version>
  </dependency>
...

That version of the driver supports batched insert statements through the use of the reWriteBatchedInserts property :

...
<bean id="poolPostgresql" class="org.apache.tomcat.jdbc.pool.PoolProperties">
...
<property name="dbProperties">
     <value>reWriteBatchedInserts=true</value>
</property>
</bean>
...
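As an alternative to the pool's dbProperties, the same property can, to my knowledge, be passed directly as a parameter on the JDBC connection URL (the host, port and database name below are placeholders) :

```
jdbc:postgresql://localhost:5432/mydb?reWriteBatchedInserts=true
```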

More info on that property here :
https://github.com/pgjdbc/pgjdbc/blob/master/README.md

Variable which is “effectively final” in Java 8

Java 8 has an interesting feature that I learnt recently :
a variable can become “effectively final”.
Very useful.
For instance, the following class would not compile in Java 1.7.
It would return the following error :
“Cannot refer to the non-final local variable myVocabulary defined in an enclosing scope”.
Fixing it would require adding the final modifier to the myVocabulary variable.
In Java 8, there is no need to make it final, as long as you do not change its value once it is assigned.

package com.celinio.training.threads;

import java.util.Hashtable;

public class MainApplication {
	public static void main(String[] args) {
         // No need to declare this variable FINAL
		Hashtable<String, Integer> myVocabulary = new Hashtable<String, Integer>();

		Thread t = new Thread() {
			public void run() {
				try {
					Thread.sleep(2000);
					myVocabulary.put("Hello", 1);
					myVocabulary.put("Bye", 2);
                   // Compilation error in Java 8
                   // myVocabulary = new Hashtable<String, Integer>();
					Thread.sleep(1000);
				} catch (InterruptedException e) {
					e.printStackTrace();
				}
			}
		};
		t.start();
	}
}

Assigning a new value to the myVocabulary variable (see the commented-out line inside run()) would produce an error at compilation time :
“Local variable myVocabulary defined in an enclosing scope must be final or effectively final”
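The same rule applies to lambda expressions, which also capture only final or effectively final local variables. A minimal sketch (class and method names are mine, for illustration only) :

```java
import java.util.function.Supplier;

public class EffectivelyFinalDemo {

    public static String greet() {
        String name = "world";           // assigned exactly once: effectively final
        Supplier<String> s = () -> "Hello " + name; // OK in Java 8+
        // name = "other";               // uncommenting this breaks the capture:
        //                               // name would no longer be effectively final
        return s.get();
    }

    public static void main(String[] args) {
        System.out.println(greet()); // prints "Hello world"
    }
}
```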

How is the cyclomatic complexity of an algorithm calculated ?

When checking the issues detected by Sonar, I noticed a few issues related to cyclomatic complexity.
The usual message is clear :

The Cyclomatic Complexity of this method “checkSomething” is 22 which is greater than 12 authorized.

Indeed, the method checkSomething has far too many if conditions :

private static SomeDto checkSomething(AnotherDto anotherDto, String reference)
{
SomeDto someDto = new SomeDto();

// condition 1
if (!someDto.getA())
    return new SomeDto("bla1", "blabla");

// condition 2
if (someDto.getName2() == null || checkSurName(anotherDto.getName()))
    return new SomeDto("bla2", "blabla");

// condition 3
if (someDto.getName3() == null || checkSurName(anotherDto.getName()))
    return new SomeDto("bla3", "blabla");

// condition 4
if (someDto.getName4() == null && checkSurName(anotherDto.getName()))
    return new SomeDto("bla4", "blabla");

// condition 5
if (someDto.getName5() == null || checkSurName(anotherDto.getName()))
    return new SomeDto("bla5", "blabla");

// condition 6
if (someDto.getName6() == null && checkSurName(anotherDto.getName()))
    return new SomeDto("bla6", "blabla");

// condition 7
if (someDto.getName7() == null && checkSurName(anotherDto.getName()))
    return new SomeDto("bla7", "blabla");

// condition 8
if (someDto.getName8() == null && checkSurName(anotherDto.getName()))
    return new SomeDto("bla8", "blabla");

// condition 9
if (someDto.getName9() == null && checkSurName(anotherDto.getName()))
    return new SomeDto("bla9", "blabla");

// condition 10
if (someDto.getName10() == null && checkSurName(anotherDto.getName()))
    return new SomeDto("bla10", "blabla");

// condition 11
if (someDto.getName11() == null && checkSurName(anotherDto.getName()))
    return new SomeDto("bla11", "blabla");

return someDto;
}

Question : how is the cyclomatic complexity of an algorithm calculated ?
Cyclomatic complexity is the number of independent paths through the program.
A program can be seen as a connected graph.
The formula is the following :
v(G) = e – n + 2P
where
e = number of edges
n = number of nodes
P = the number of connected components (usually 1 for a single method)

An edge connects 2 nodes. It represents a branching or non-branching link between nodes.
Nodes correspond to statements or blocks of statements ; decision points are conditional statements like if, if … else, switch, while, etc.
This metric was developed by Thomas J. McCabe in 1976.

Well, after further investigation, and according to this link (the Checkstyle tool), I have concluded that the pure McCabe formula is not what is actually applied to calculate the cyclomatic complexity of a Java program.

The complexity is equal to the number of decision points + 1. Decision points are : if, while, do, for, ?:, catch, switch and case statements, and the operators && and || in the body of the target.

So if I apply this rule to the previous sample code :


private static SomeDto checkSomething(AnotherDto anotherDto, String reference)  // 1
{
SomeDto someDto = new SomeDto();

// condition 1
if (!someDto.getA())                                 // 2
return new SomeDto("bla1", "blabla");

// condition 2
if (someDto.getName2() == null || checkSurName(anotherDto.getName()))  // 4
return new SomeDto("bla2", "blabla");

// condition 3
if (someDto.getName3() == null || checkSurName(anotherDto.getName()))  // 6
return new SomeDto("bla3", "blabla");

// condition 4
if (someDto.getName4() == null && checkSurName(anotherDto.getName()))  // 8
return new SomeDto("bla4", "blabla");

// condition 5
if (someDto.getName5() == null || checkSurName(anotherDto.getName()))  // 10
return new SomeDto("bla5", "blabla");

// condition 6
if (someDto.getName6() == null && checkSurName(anotherDto.getName())) // 12
return new SomeDto("bla6", "blabla");

// condition 7
if (someDto.getName7() == null && checkSurName(anotherDto.getName()))  // 14
return new SomeDto("bla7", "blabla");

// condition 8
if (someDto.getName8() == null && checkSurName(anotherDto.getName()))  // 16
return new SomeDto("bla8", "blabla");

// condition 9
if (someDto.getName9() == null && checkSurName(anotherDto.getName()))  // 18
return new SomeDto("bla9", "blabla");

// condition 10
if (someDto.getName10() == null && checkSurName(anotherDto.getName())) // 20
return new SomeDto("bla10", "blabla");

// condition 11
if (someDto.getName11() == null && checkSurName(anotherDto.getName())) // 22
return new SomeDto("bla11", "blabla");

return someDto;
} 

Anyway, the result of 22 makes sense : the method contains 11 if statements and 10 && or || operators, so 21 decision points + 1 = 22.
This method has an awful number of successive if conditions and something should be done about it to improve its maintainability. If we applied the McCabe formula strictly, the result would be strange. At least according to this plugin for Eclipse : Eclipse Control Flow Graph Generator
The flow chart generated is the following :

And the McCabe result would be 1 !
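The "decision points + 1" counting rule is simple enough to sketch in a few lines of Java. The regex below is my own rough approximation (real tools parse the syntax tree ; this sketch would miscount keywords appearing inside strings or comments) :

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CyclomaticCounter {

    // Decision points per the Checkstyle-style rule quoted above:
    // branching keywords, the ternary '?', and the && / || operators.
    // (Approximation: does not skip string literals or comments.)
    private static final Pattern DECISION = Pattern.compile(
            "\\b(if|while|do|for|case|catch)\\b|\\?|&&|\\|\\|");

    /** Complexity = number of decision points + 1. */
    public static int complexity(String methodBody) {
        Matcher m = DECISION.matcher(methodBody);
        int count = 0;
        while (m.find()) {
            count++;
        }
        return count + 1;
    }

    public static void main(String[] args) {
        // One "if" plus one "||" operator: 2 decision points, so complexity 3.
        System.out.println(complexity("if (a == null || check(b)) { return x; }"));
    }
}
```

Applied to each `if (x == null || checkSurName(...))` line of the method above, every such line contributes 2 decision points, which is exactly how the annotated running count reaches 22.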

Proxy setting in Docker Toolbox

If you are behind a corporate proxy, you might end up with a connection error while running this command :

docker search jenkins

Here is how to set up Docker so that it works behind a corporate proxy:
1) Edit the file /var/lib/boot2docker/profile inside the Docker VM :

$ docker-machine ssh default
docker@default:~$ sudo vi /var/lib/boot2docker/profile 

2) Add these lines at the end of the file :

# replace PROXY and PORT with the corporate proxy and port values
export "HTTP_PROXY=http://PROXY:PORT"
export "HTTPS_PROXY=http://PROXY:PORT"
# add the IPs and URLs which do not need the proxy
export "NO_PROXY=192.168.99.*,*.local,169.254/16,*.example.com,192.168.59.*"

3) Finally, restart Docker :

sudo /etc/init.d/docker restart
 

4) Try again to search for the jenkins image :

docker search jenkins

JMS with Weblogic Server and EJB MDB

Here is a short, complete example of asynchronous messaging using JMS and a Message-Driven Bean (MDB).
First the procedure to configure a JMS server inside WebLogic Server in order to create a queue, then the code for the producer/sender of a message to the queue, and finally the code for a consumer (an MDB).

1) Weblogic configuration
a) Create the JMS server

JMS_Weblogic1

JMS_Weblogic2

JMS_Weblogic3

JMS_Weblogic4

b) Create a JMS module
JMS_Weblogic5

JMS_Weblogic6

JMS_Weblogic7

JMS_Weblogic8

JMS_Weblogic9

JMS_Weblogic10

JMS_Weblogic11

JMS_Weblogic12

c) Create a destination (queue or topic)

JMS_Weblogic13

JMS_Weblogic14

JMS_Weblogic15

JMS_Weblogic16

d) Create a connection factory

JMS_Weblogic17

JMS_Weblogic18

JMS_Weblogic19

2) Create a producer

For the producer, I was strongly inspired by the code found in this blog. It is a clean example of a producer/sender, so no need to reinvent the wheel here.

import java.util.Hashtable;

import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JMSProducer {

	private static InitialContext initialContext = null;
	private static QueueConnectionFactory queueConnectionFactory = null;
	private static QueueConnection queueConnection = null;
	private static QueueSession queueSession = null;
	private static Queue queue = null;
	private static QueueSender queueSender = null;
	private static TextMessage textMessage = null;
	private static final String CONNECTIONFACTORY_NAME = "ConnectionFactory-Test";
	private static final String QUEUE_NAME = "jms/QueueTest";

	public JMSProducer() {
		super();
	}

	public static void sendMessageToDestination(String messageText) {
		// create InitialContext
		Hashtable<String, String> properties = new Hashtable<>();
		properties.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
		properties.put(Context.PROVIDER_URL, "t3://localhost:7001");
		properties.put(Context.SECURITY_PRINCIPAL, "weblogic");
		properties.put(Context.SECURITY_CREDENTIALS, "weblogic1");
		try {
			initialContext = new InitialContext(properties);
		} catch (NamingException ne) {
			ne.printStackTrace(System.err);
		}
		System.out.println("InitialContext : " + initialContext.toString());
		// create QueueConnectionFactory
		try {
			queueConnectionFactory = (QueueConnectionFactory) initialContext.lookup(CONNECTIONFACTORY_NAME);
		} catch (NamingException ne) {
			ne.printStackTrace(System.err);
		}

		System.out.println("QueueConnectionFactory : " + queueConnectionFactory.toString());
		// create QueueConnection
		try {
			queueConnection = queueConnectionFactory.createQueueConnection();
		} catch (JMSException jmse) {
			jmse.printStackTrace(System.err);
		}

		System.out.println("QueueConnection : " + queueConnection.toString());
		// create QueueSession
		try {
			queueSession = queueConnection.createQueueSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);
		} catch (JMSException jmse) {
			jmse.printStackTrace(System.err);
		}

		System.out.println("QueueSession : " + queueSession.toString());
		// lookup Queue
		try {
			queue = (Queue) initialContext.lookup(QUEUE_NAME);
		} catch (NamingException ne) {
			ne.printStackTrace(System.err);
		}

		System.out.println("Queue : " + queue.toString());
		// create QueueSender
		try {
			queueSender = queueSession.createSender(queue);
		} catch (JMSException jmse) {
			jmse.printStackTrace(System.err);
		}

		System.out.println("QueueSender : " + queueSender.toString());
		// create TextMessage
		try {
			textMessage = queueSession.createTextMessage();
		} catch (JMSException jmse) {
			jmse.printStackTrace(System.err);
		}

		System.out.println("TextMessage : " + textMessage.toString());
		try {
			textMessage.setText(messageText);
		} catch (JMSException jmse) {
			jmse.printStackTrace(System.err);
		}

		System.out.println("TextMessage : " + textMessage.toString());
		// send message
		try {
			queueSender.send(textMessage);
		} catch (JMSException jmse) {
			jmse.printStackTrace(System.err);
		}

		System.out.println("Message sent.");
		// clean up
		try {
			textMessage = null;
			queueSender.close();
			queueSender = null;
			queue = null;
			queueSession.close();
			queueSession = null;
			queueConnection.close();
			queueConnection = null;
			queueConnectionFactory = null;
			initialContext = null;
		} catch (JMSException jmse) {
			jmse.printStackTrace(System.err);
		}

		System.out.println("Clean-up done.");
	}

	public static void main(String args[]) {
		sendMessageToDestination("test");
	}
}
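Since Java 7, a long cleanup chain like the one above can be expressed with try-with-resources, which closes resources in reverse order of opening even when an exception is thrown. A minimal sketch of the pattern, with a hypothetical Resource class standing in for the JMS objects (JMS 1.1 types do not implement AutoCloseable; JMS 2.0 ones do):

```java
import java.util.ArrayList;
import java.util.List;

public class CleanupSketch {
    static final List<String> LOG = new ArrayList<>();

    // Hypothetical stand-in for QueueConnection/QueueSession/QueueSender.
    static class Resource implements AutoCloseable {
        private final String name;
        Resource(String name) { this.name = name; LOG.add("open " + name); }
        @Override public void close() { LOG.add("close " + name); }
    }

    public static void main(String[] args) {
        // Resources are closed automatically, in reverse order of opening.
        try (Resource connection = new Resource("connection");
             Resource session = new Resource("session");
             Resource sender = new Resource("sender")) {
            LOG.add("send message");
        }
        System.out.println(LOG);
    }
}
```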

3) Create a consumer

This MDB consumer just prints out something in the log when it consumes the message in the queue.
The lifecycle methods are empty but they could be used to clean up resources for instance.

import javax.ejb.ActivationConfigProperty;
import javax.ejb.EJBException;
import javax.ejb.MessageDriven;
import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@MessageDriven(mappedName = "jms/QueueTest", activationConfig = {
		@ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
		@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue") })
public class TestJMSSABean implements MessageDrivenBean, MessageListener {

	/** The Constant LOGGER. */
	protected static final Logger LOGGER = LoggerFactory.getLogger(TestJMSSABean.class);

	public TestJMSSABean() {
	}

	@Override
	public void onMessage(Message message) {
		try {
			if (message instanceof TextMessage) {
				System.out.println(((TextMessage) message).getText());
				LOGGER.trace("***************** : " + ((TextMessage) message).getText());
			} else {
				System.out.println(message.getJMSMessageID());
				LOGGER.trace("***************** : " + message.getJMSMessageID());
			}
		} catch (JMSException ex) {
			LOGGER.trace("***************** ERROR : " + ex.getMessage());
		}
	}

	@Override
	public void ejbRemove() throws EJBException {
		// TODO Auto-generated method stub
	}

	@Override
	public void setMessageDrivenContext(MessageDrivenContext arg0) throws EJBException {
		// TODO Auto-generated method stub
	}
}

Prevent WebLogic Server from wrapping data types

By default, WebLogic Server wraps data types such as Struct, Blob, Array, Clob, etc.

For instance, oracle.sql.STRUCT becomes weblogic.jdbc.wrapper.Struct_oracle_sql_STRUCT.

That can become a problem if, for instance, your code expects a specific data type :


oracle.sql.STRUCT objSTRUCT = (oracle.sql.STRUCT) callableStatement.getObject(2);

This would raise the following error :


java.lang.ClassCastException: weblogic.jdbc.wrapper.Struct_oracle_sql_STRUCT
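Schematically, the cast fails because the wrapper class exposes the generic JDBC behavior but is not a subclass of the vendor class the code expects. A plain-Java sketch with stand-in classes (not the real WebLogic/Oracle types):

```java
public class WrapSketch {
    static class OracleStruct {}   // stands in for oracle.sql.STRUCT
    static class WrappedStruct {}  // stands in for weblogic.jdbc.wrapper.Struct_oracle_sql_STRUCT

    // The check the failed cast performs, made explicit.
    static boolean isExpectedType(Object fromDriver) {
        return fromDriver instanceof OracleStruct;
    }

    public static void main(String[] args) {
        // With wrapping enabled, getObject() returns the wrapper type:
        Object fromDriver = new WrappedStruct();
        // false -> casting it to OracleStruct would throw ClassCastException
        System.out.println(isExpectedType(fromDriver));
    }
}
```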

To avoid it, simply disable that wrapping in the WebLogic administration console :

Go to Services > Data sources > select your data source > “Connection pools” tab > Advanced :

uncheck “Wrap data types”

[Screenshot: WrapDataTypes_WeblogicWrapDataTypes2]

Note : the WebLogic Server version used is 10.3.5

Persistence.xml and the Oracle hint FIRST_ROWS

I found out that it is important to specify the exact version of Oracle in the persistence.xml file. I am currently using EclipseLink as the persistence provider.

Oracle hints were introduced to speed up and optimize SQL queries. Starting with Oracle 10, some of them have become almost unnecessary because the optimizer was greatly improved. Also, depending on the structure of the queries and the database design (use of primary keys or not, whether indexes are where they should be, etc.), hints can be counterproductive, slowing queries down instead of speeding them up.

SELECT * FROM (
SELECT /*+ FIRST_ROWS */ a.*, ROWNUM rnum  FROM (
SELECT col1, col2 from table1)
a WHERE ROWNUM <= 20)
WHERE rnum > 0;

This SQL query is used for pagination, was coded with JPA criteria and was generated by EclipseLink using ORACLE as the target database as specified in the following persistence.xml file :

<property name="eclipselink.target-database" value="Oracle" />

We use Oracle 11g. To tell EclipseLink to produce SQL code for Oracle 11, the target database should be Oracle11, not Oracle:

<property name="eclipselink.target-database" value="Oracle11" />

In that case, the SQL query generated no longer contains the FIRST_ROWS hint :

SELECT * FROM (
SELECT a.*, ROWNUM rnum  FROM (
SELECT col1, col2 from table1)
a WHERE ROWNUM <= 20)
WHERE rnum > 0;
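Both queries implement the same window: the inner SELECT numbers the rows with ROWNUM and the outer query keeps rows 1 through 20. For intuition, the same windowing in plain Java (a sketch with a hypothetical page method, not generated code):

```java
import java.util.ArrayList;
import java.util.List;

public class PaginationSketch {
    // Keep rows with first < rownum <= last, i.e. WHERE rnum > 0 ... ROWNUM <= 20.
    static List<Integer> page(List<Integer> rows, int first, int last) {
        List<Integer> out = new ArrayList<>();
        for (int rownum = 1; rownum <= rows.size(); rownum++) {
            if (rownum > first && rownum <= last) {
                out.add(rows.get(rownum - 1));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> all = new ArrayList<>();
        for (int i = 1; i <= 50; i++) all.add(i);
        System.out.println(page(all, 0, 20)); // rows 1..20
    }
}
```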

If we examine the source code of EclipseLink, we can see in OraclePlatform.java the initial value of the HINT variable :

protected String HINT = "/*+ FIRST_ROWS */ ";

In the Oracle10Platform class, which inherits from OraclePlatform, that value is overridden :

public Oracle10Platform() {
    super();
    // bug 374136: override setting the FIRST_ROWS hint as this is not needed on Oracle10g
    HINT = "";
}
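The mechanism boils down to a few lines of plain Java: the subclass constructor overwrites the protected field inherited from the base class. The class names below mirror the EclipseLink platform classes, but this is a simplified sketch, not the real source:

```java
public class PlatformSketch {
    static class OraclePlatform {
        // Base platform sets the hint for all Oracle versions.
        protected String HINT = "/*+ FIRST_ROWS */ ";
        String paginationHint() { return HINT; }
    }

    static class Oracle10Platform extends OraclePlatform {
        Oracle10Platform() {
            super();
            HINT = ""; // not needed on Oracle 10g and later
        }
    }

    public static void main(String[] args) {
        System.out.println("'" + new OraclePlatform().paginationHint() + "'");
        System.out.println("'" + new Oracle10Platform().paginationHint() + "'");
    }
}
```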

Unit testing EJBs with PowerMockito

I had to find a way to test a method of a stateful session bean (EJB PersonSABean implements PersonSF) defined in a service layer, which calls a method of another stateful session bean (EJB HouseSFBean implements HouseSF) defined in a business layer.

To call that method from EJB HouseSFBean, the EJB PersonSABean needs to perform a lookup to get a reference to the EJB HouseSF. In that case, a static method is often created inside a static or final class. For instance :

public final class EjbUtil {

public static <T> T lookup(final Class<T> clazz, final String name) throws SomeException {
  InitialContext context = null;
  try {
    context = new InitialContext(new PropertiesLoader().loadProperties("myProperties.txt"));
    return clazz.cast(context.lookup(name));
  } catch (NamingException e) {
    throw new SomeException(e.getMessage());
  } finally {
...
  }
}

...

}
@TransactionManagement(TransactionManagementType.CONTAINER)
@Stateless(name = "PersonSA")
@Remote(value = { PersonSA.class })
@Interceptors(HandlerExceptionInterceptorSA.class)
public class PersonSABean extends BasicSABean implements PersonSA {
 ...
 private HouseSF houseSF;

 public HouseSF getHouseSF() {
    houseSF = EjbUtil.lookup(HouseSF.class, "ejb.company.project.application.HouseSF");
    return houseSF;
 }
}

But unit tests, unlike integration tests, are meant to test a specific method (or class), not the methods it calls. So in my case, I do not need to make, and should not make, a JNDI lookup to get a reference to the EJB HouseSF. So I have to mock the EjbUtil.lookup(...) static method.

Fortunately, PowerMockito can mock a static method so the solution is to use both Mockito and PowerMockito :

import static org.mockito.Matchers.any;
import static org.mockito.Matchers.anyInt;
import static org.mockito.Matchers.anyListOf;
import static org.mockito.Matchers.anyString;
import static org.mockito.Mockito.stub;

import java.util.ArrayList;
import java.util.List;

import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.MockitoAnnotations;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@Category(UnitTest.class)
@RunWith(PowerMockRunner.class)
@PrepareForTest(EjbUtil.class)
public class PersonSABeanTest {

@Mock
private HouseSF houseSF;

private PersonSABean personSABean;

@Before
public void setUp() throws Exception {
  MockitoAnnotations.initMocks(this);
  personSABean = new PersonSABean();
  personSABean.setHouseSF(houseSF);
  PowerMockito.mockStatic(EjbUtil.class);
}

@Test
public void testSomeMethod() throws SomeBusinessException {
  ...
  SomeDTO someDTO = new SomeDTO();
  stub(houseSF.someMethod(anyListOf(A.class), anyListOf(BDTO.class), any(CDTO.class), anyInt())).toReturn(someDTO);
  Mockito.when(EjbUtil.lookup(Mockito.eq(HouseSF.class), anyString())).thenReturn(houseSF);
  ...
}
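As an aside, a design that avoids static mocking altogether (my sketch, not from the original code) is to inject the lookup as a function: production code supplies a function that delegates to the JNDI lookup, while the test supplies a stub, with no PowerMock needed.

```java
import java.util.function.Function;

public class InjectableLookupSketch {
    interface HouseSF { String describe(); }

    // The bean receives the lookup function instead of calling a static EjbUtil.lookup.
    static class PersonSABean {
        private final Function<Class<HouseSF>, HouseSF> lookup;
        PersonSABean(Function<Class<HouseSF>, HouseSF> lookup) { this.lookup = lookup; }
        String describeHouse() { return lookup.apply(HouseSF.class).describe(); }
    }

    public static void main(String[] args) {
        // Test-side stub: no JNDI, no static mocking.
        PersonSABean bean = new PersonSABean(clazz -> () -> "stub house");
        System.out.println(bean.describeHouse());
    }
}
```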

Link :
https://code.google.com/p/powermock/wiki/MockitoUsage13



[Spring Batch] Order of execution of the reader, the processor and the writer

It seems to me that the order of execution of the reader, the processor and the writer, inside a step and a chunk, is not always clear in everybody’s mind (including mine when I started with Spring Batch).

So I made the following two tables, which should, I hope, help clarify things.

1) First scenario : we read, process and write all the data at once. There is no commit interval defined.

Here is an excerpt of a job with one step. The reader reads the data from a datasource such as a database.


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns:b="http://www.springframework.org/schema/batch"
xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/batch
http://www.springframework.org/schema/batch/spring-batch-2.1.xsd
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
...

<b:job id="doSomething" incrementer="idIncrementer" >
<b:step id="mainStep" >
<b:tasklet>
<b:chunk reader="doSomethingReader" processor="doSomethingProcessor"
writer="doSomethingWriter"    chunk-completion-policy="defaultResultCompletionPolicy" />
</b:tasklet>
</b:step>
</b:job>

</beans>

If the total number of lines (items) returned from the database is 6, then here is how Spring Batch will process each item :

Order of execution with defaultResultCompletionPolicy

| Execution order | Reader   | Processor | Writer                       | Transactions |
|-----------------|----------|-----------|------------------------------|--------------|
| 1               | 1st item |           |                              | T1           |
| 2               | 2nd item |           |                              |              |
| 3               | 3rd item |           |                              |              |
| 4               | 4th item |           |                              |              |
| 5               | 5th item |           |                              |              |
| 6               | 6th item |           |                              |              |
| 7               |          | 1st item  |                              |              |
| 8               |          | 2nd item  |                              |              |
| 9               |          | 3rd item  |                              |              |
| 10              |          | 4th item  |                              |              |
| 11              |          | 5th item  |                              |              |
| 12              |          | 6th item  |                              |              |
| 13              |          |           | the 6 items at the same time |              |

In that configuration, there is a single transaction. The items are all written at once.

2) Second scenario : we define a size of 4 items for each chunk. So that means there will be a commit every 4 items.


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns:b="http://www.springframework.org/schema/batch"
xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/batch
http://www.springframework.org/schema/batch/spring-batch-2.1.xsd
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

...

<b:job id="doSomething" incrementer="idIncrementer" >
<b:step id="mainStep" >
<b:tasklet>
<b:chunk reader="doSomethingReader" processor="doSomethingProcessor"
writer="doSomethingWriter"    commit-interval="4" />
</b:tasklet>
</b:step>
</b:job>

</beans>

Then chunk processing will occur in that order (supposing there are 6 items) :

Order of execution with chunk processing

| Execution order | Reader   | Processor | Writer                              | Transactions |
|-----------------|----------|-----------|-------------------------------------|--------------|
| 1               | 1st item |           |                                     | T1           |
| 2               | 2nd item |           |                                     |              |
| 3               | 3rd item |           |                                     |              |
| 4               | 4th item |           |                                     |              |
| 5               |          | 1st item  |                                     |              |
| 6               |          | 2nd item  |                                     |              |
| 7               |          | 3rd item  |                                     |              |
| 8               |          | 4th item  |                                     |              |
| 9               |          |           | the first 4 items, at the same time |              |
| 10              | 5th item |           |                                     | T2           |
| 11              | 6th item |           |                                     |              |
| 12              |          | 5th item  |                                     |              |
| 13              |          | 6th item  |                                     |              |
| 14              |          |           | the last 2 items, at the same time  |              |
There are 2 transactions, one for each chunk.
That means if there is a problem with the 6th item, the first 4 items (1st chunk) will already have been processed and committed.
A rollback will occur for all items of the 2nd chunk (items 5 and 6).
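The ordering in these tables can be reproduced with a small plain-Java simulation of chunk processing (a sketch, not Spring Batch code): each chunk is read item by item, then processed item by item, then written, and committed, in one go.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkOrderSketch {
    // Simulates chunk-oriented processing: read up to commitInterval items,
    // process them, then write the whole chunk at once (one commit per chunk).
    public static List<String> run(int items, int commitInterval) {
        List<String> order = new ArrayList<>();
        int next = 1;
        while (next <= items) {
            List<Integer> chunk = new ArrayList<>();
            while (chunk.size() < commitInterval && next <= items) {
                order.add("read " + next);
                chunk.add(next++);
            }
            for (int item : chunk) order.add("process " + item);
            order.add("write " + chunk); // one write (and one commit) per chunk
        }
        return order;
    }

    public static void main(String[] args) {
        run(6, 4).forEach(System.out::println);
    }
}
```

With 6 items and a commit interval of 4, the printed sequence matches the second table: four reads, four processes, one write, then two reads, two processes, one write.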

Asynchronously run a method with the Async and Await keywords

I wrote this small class to illustrate the use of the async and await keywords, which were added in .NET 4.5.
It should quickly help anyone curious about this easier way of writing asynchronous programs in C#.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleApplication3
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Synchronous method");
            Console.WriteLine("Before calling DisplaySum");
            DisplaySum();
            Console.WriteLine("After calling DisplaySum");
            Console.WriteLine("Press a key to continue.");
            Console.ReadKey();

            Console.WriteLine("************************");
            Console.WriteLine("Asynchronous method");

            Console.WriteLine("Before calling DisplaySumAsync");
            DisplaySumAsync();
            Console.WriteLine("After calling DisplaySumAsync");
            Console.WriteLine("Press a key to quit.");
            Console.ReadKey();
        }

        public static double Calculate()
        {
            Console.WriteLine("Method Calculate()");
            double x = 1;
            // Long-running method
            for (int i = 1; i < 100000000; i++)
            {
                x += Math.Tan(x) / i;
            }
            return x;
        }

        // Synchronous method
        private static void DisplaySum()
        {
            Console.WriteLine("Method DisplaySum()");
            double result = Calculate();
            Console.WriteLine("DisplaySum DisplaySum - result : " + result);
        }

        // ***************************************

        public static Task<double> CalculateAsync()
        {
            Console.WriteLine("Method CalculateAsync()");
            return Task.Run(() =>
            {
                double x = 1;
                // Long-running method
                for (int i = 1; i < 100000000; i++)
                {
                    x += Math.Tan(x) / i;
                }
                return x;
            });
        }

        // Asynchronous method
        private static async void DisplaySumAsync()
        {
            Console.WriteLine("Method DisplaySumAsync()");
            double result = await CalculateAsync();
            Console.WriteLine("Method DisplaySumAsync - result : " + result);
        }
    }
}

The output below (DOS window) clearly shows the difference between calling a synchronous method and calling an asynchronous method.
The message “After calling DisplaySumAsync” is printed immediately after the call to DisplaySumAsync(), whether that method has finished or not.
The important thing to note is : the statements in the calling thread run immediately, while the asynchronous method executes on another thread. An asynchronous method does not block the calling function, which can continue executing.
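For comparison (not part of the original C# sample), Java 8 offers a similar pattern with CompletableFuture: supplyAsync plays the role of Task.Run, and the caller continues immediately.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    // Long-running computation, like the C# Calculate() method.
    static double calculate() {
        double x = 1;
        for (int i = 1; i < 1_000_000; i++) x += Math.tan(x) / i;
        return x;
    }

    public static void main(String[] args) {
        // Runs calculate() on another thread; the caller is not blocked.
        CompletableFuture<Double> future = CompletableFuture.supplyAsync(AsyncSketch::calculate);
        System.out.println("After calling supplyAsync"); // printed immediately, like the C# sample
        System.out.println("result : " + future.join()); // join() blocks until the result is ready
    }
}
```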

[Screenshot: AsyncAwait]

EclipseLink error : Null primary key encountered in unit of work clone

With EclipseLink used as the JPA provider, the error message “Null primary key encountered in unit of work clone” may appear if the value 0 is used for a primary key field.

To resolve it, there are a few solutions :

1) start your primary key values at 1 instead of 0, which in my opinion is the best solution.

2) add the following annotation to the entity :


import javax.persistence.Entity;
import javax.persistence.Table;

import org.eclipse.persistence.annotations.IdValidation;
import org.eclipse.persistence.annotations.PrimaryKey;


/**
 * The persistent class for the HOUSE database table.
 * 
 */
@Entity
@Table(name = "HOUSE")
@PrimaryKey(validation=IdValidation.NULL)
public class House implements Serializable {

...

}

3) add the following property to the persistence.xml file :


<?xml version="1.0" encoding="UTF-8" ?>
<persistence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
	xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">

	<persistence-unit name="myUnit">

		<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
		<jta-data-source>jdbc/MySource</jta-data-source>
		
		<class>com.blabla.myEntity</class>
						
		<properties>
                    <property name="eclipselink.allow-zero-id" value="true"/> 
		</properties>

	</persistence-unit>

</persistence>
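A common reason the value 0 shows up in the first place is a primitive id field: on a new, not-yet-persisted entity it defaults to 0, which EclipseLink by default treats as an invalid id. A minimal plain-Java illustration (hypothetical House class, unrelated to the real entity):

```java
public class ZeroIdSketch {
    static class House {
        long id;          // primitive: defaults to 0, indistinguishable from a "real" id 0
        Long idAsObject;  // wrapper: defaults to null, an unambiguous "not assigned yet"
    }

    public static void main(String[] args) {
        House h = new House();
        System.out.println(h.id);          // 0
        System.out.println(h.idAsObject);  // null
    }
}
```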

Links :
http://eclipse.org/eclipselink/documentation/2.4/jpa/extensions/a_primarykey.htm

Super Dev Mode with GWT 2.5.1-rc1 and Chrome 26.0

Here are the steps to run Super Dev Mode, a new feature of GWT 2.5.
The versions I used are GWT 2.5.1-rc1 and Chrome 26.0. That version of Chrome requires the latest version of GWT, otherwise it will not work : I had previously tested with GWT 2.5 and could not see the source code in the Chrome browser.

  • Step 1 :

Modify the module.gwt.xml file to add these lines :


<add-linker name="xsiframe"/>
<set-configuration-property name="devModeRedirectEnabled" value="true"/>

  • Step 2 :

Compile your code as you usually do : mvn install

  • Step 3 :

Launch Super Dev Mode. That can easily be done with the gwt-maven-plugin and its run-codeserver goal :

[Screenshot: SuperDevMode]

The following lines appear at the end of the console in Eclipse :

[INFO] The code server is ready.
[INFO] Next, visit: http://localhost:9876/
  • Step 4 :

Open the URL http://localhost:9876/ in Chrome :

[Screenshot: GWTCodeServerChrome]

And do as it says : create two bookmarks in Chrome, “Dev Mode On” and “Dev Mode Off”, from the provided bookmarklets (which I had already done).
Then run the application.
Since I did not use an external server, I ran the application in development mode (gwt:run), so it runs with Jetty. But I could have run it directly under WebLogic, for instance.
Then go to the URL of the application :
http://127.0.0.1:8888/PlanetEarth.html?gwt.codesvr=127.0.0.1:9997
In fact, remove the “?gwt.codesvr=127.0.0.1:9997” part of the URL : it is no longer necessary to install the Google Web Toolkit Developer Plugin in the browser. So the URL to go to is http://127.0.0.1:8888/PlanetEarth.html :

[Screenshot: SuperDevModeChrome1]

  • Step 5 :

While on that page, click on the “Dev Mode On” bookmark. A popup asking to compile the module will show up :

[Screenshot: SuperDevModeChrome2]

Click on Compile. This will recompile the application.
Then, in Chrome, go to Tools > Developer Tools > Settings and check the “Enable source maps” checkbox :

[Screenshot: SuperDevModeChrome3]

Refresh the page and you should see the Java code in the “Sources” tab :

[Screenshot: SuperDevModeChrome4]

You can then directly debug in Chrome by adding breakpoints :

[Screenshot: SuperDevModeChrome5]

Link :
https://developers.google.com/web-toolkit/articles/superdevmode

About some of the new features of Java 8 …

Just want to point out something that a lot of Java developers seem to be unaware of :
a few of the upcoming JDK 8 features have been available in C# for some time now.

| Feature                       | .NET / C#              | Java         |
|-------------------------------|------------------------|--------------|
| Lambda expressions (closures) | .NET 3.5 - C# 3 - 2007 | JDK 8 - 2013 |
| Virtual extension methods     | .NET 3.5 - C# 3 - 2007 | JDK 8 - 2013 |
| ...                           | ...                    | ...          |
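For instance, the two features in the table, as they look in Java 8 (a quick sketch):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Java8Sketch {
    // A default method ("virtual extension method"): behavior added to an
    // interface without breaking existing implementors.
    interface Greeter {
        String name();
        default String greet() { return "Hello, " + name(); }
    }

    public static void main(String[] args) {
        Greeter g = () -> "world"; // a lambda implements the single abstract method
        System.out.println(g.greet());

        List<String> upper = Arrays.asList("a", "b").stream()
                .map(String::toUpperCase) // lambdas/method references, LINQ-like style
                .collect(Collectors.toList());
        System.out.println(upper);
    }
}
```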

C# was inspired by Java and C++, and now it rather feels like Java 8 is catching up with C# !
But it is a good thing when languages influence each other.
That is the case with C# and Java, but also with Windows Forms / WPF + XAML and JavaFX 2 / GWT, or ASP.NET MVC and Java MVC-based frameworks, etc.

That is another good reason to be open to both worlds.

[JPA 2] Adding support to WebLogic 10.3.5 for JPA 2

WebLogic 10.3.5 is Java EE 5 certified. However, JPA 2 is not part of the Java EE 5 (2006) specification ; it is part of the Java EE 6 (2009) specification.
So, since WebLogic Server implements the Java EE 5 specification, it is not required to support JPA 2. It is, however, possible to add JPA 2 support to WebLogic 10.3.5. Look no further, the answer is of course on the Oracle WebLogic website: Using Oracle TopLink with Oracle WebLogic Server

A default WebLogic installation already contains the files needed :

javax.persistence_1.0.0.0_2-0-0.jar
com.oracle.jpa2support_1.0.0.0_2-0.jar

So if you choose to configure things manually, you do not need to install a patch.
It is just a matter of declaring a PRE_CLASSPATH environment variable in the commEnv.cmd (Windows) or commEnv.sh (Linux) file located in the WL_HOME/common/bin/ folder.

Under Windows :

@rem JPA 2 activation

set PRE_CLASSPATH=%BEA_HOME%\modules\javax.persistence_1.0.0.0_2-0-0.jar;%BEA_HOME%\modules\com.oracle.jpa2support_1.0.0.0_2-0.jar

Under Linux :

PRE_CLASSPATH=${BEA_HOME}/modules/javax.persistence_1.0.0.0_2-0-0.jar:${BEA_HOME}/modules/com.oracle.jpa2support_1.0.0.0_2-0.jar
export PRE_CLASSPATH

If you do not configure WebLogic 10.3.5 to support JPA 2 you will get an error message like this one :

nested exception is:
	javax.ejb.EJBException: what do i do: seems an odd quirk of the EJB spec.
  The exception is:java.lang.NoSuchMethodError: javax.persistence.EntityManager.createQuery(Ljava/lang/String;Ljava/lang/Class;)Ljavax/persistence/TypedQuery;;
nested exception is: javax.ejb.EJBException: what do i do: seems an odd quirk of the EJB spec.
The exception is:java.lang.NoSuchMethodError: javax.persistence.EntityManager.createQuery(Ljava/lang/String;Ljava/lang/Class;)Ljavax/persistence/TypedQuery;

javax.ejb.EJBException: what do i do: seems an odd quirk of the EJB spec.
 The exception is:java.lang.NoSuchMethodError: javax.persistence.EntityManager.createQuery(Ljava/lang/String;Ljava/lang/Class;)Ljavax/persistence/TypedQuery;

There is no need to edit the weblogic-application.xml file to add an extra preferred application package. This one, for instance, works as-is :

<?xml version="1.0" encoding="UTF-8" ?>
<weblogic-application xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:wls="http://www.bea.com/ns/weblogic/90"
	xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/j2ee_1_4.xsd http://www.bea.com/ns/weblogic/90 http://www.bea.com/ns/weblogic/90/weblogic-application.xsd">
	<prefer-application-packages>
		<package-name>org.apache.commons.lang.*</package-name>
		<package-name>org.eclipse.persistence.*</package-name>
	</prefer-application-packages>
	<application-param>
		<param-name>webapp.encoding.default</param-name>
		<param-value>UTF-8</param-value>
	</application-param>
	<session-descriptor>
		<persistent-store-type>replicated_if_clustered</persistent-store-type>
	</session-descriptor>
</weblogic-application>


[GWT] How to print a widget or a page

To print a widget or an entire page with GWT, you can use JSNI and the JavaScript print function.

A good practice is to put all the content you want to print inside a DIV tag, for instance :

<!DOCTYPE ui:UiBinder SYSTEM "http://dl.google.com/gwt/DTD/xhtml.ent">
<ui:UiBinder xmlns:ui="urn:ui:com.google.gwt.uibinder" xmlns:g="urn:import:com.google.gwt.user.client.ui" xmlns:p1="urn:import:com.mycompany.gwtproject.client.common.ui.composite">
	<ui:style>
		/* Add CSS here. See the GWT docs on UI Binder for more details */
		.important {
			font-weight: bold;
		}
	</ui:style>
...
<div id="printAll">
	<g:Label text="Some title" styleName="pageTitle" />		
	<div style="height:30px"></div>
		<p1:DetailAgency ui:field="detailAgency" />
	<div style="height:30px"></div>
		<p1:DetailCountry ui:field="detailCountry" />
	<div style="height:15px"></div>
</div>
<div style="float:left;margin-right:20px">
		<g:Button text="Print" ui:field="printBtn" />
</div>

An id is assigned to that DIV tag, and a button is added to the page. The button calls the Java print method from the Print class:


import com.mycompany.gwtproject.client.service.print.Print;

import com.google.gwt.core.client.GWT;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.uibinder.client.UiBinder;
import com.google.gwt.uibinder.client.UiField;
import com.google.gwt.uibinder.client.UiHandler;
import com.google.gwt.user.client.DOM;
import com.google.gwt.user.client.Element;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.Composite;
import com.google.gwt.user.client.ui.Widget;
...

public class MyViewImpl extends Composite implements MyView {
...
@UiField
Button printBtn;

@UiHandler("printBtn")
void onPrintBtnClick(ClickEvent event) {
	Element element = DOM.getElementById("printAll");
	Print.it(element);
}

This Print.java class contains several methods that help you print the window, a widget, etc. In the end, they call the JavaScript print() function :

...
public static native void printFrame() /*-{
        var frame = $doc.getElementById('__printingFrame');
        frame = frame.contentWindow;
        frame.focus();
        frame.print();
    }-*/;
...

You can apply CSS to the printed page. If you work with UiBinder and want to print a widget that has no id defined in the XML template, you can assign one in the corresponding Java class :

...
<g:VerticalPanel ui:field="myDetails">
	<p1:DetailAgency ui:field="detailAgency" />
	<p1:DetailCountry ui:field="detailCountry" />
</g:VerticalPanel>
...

import com.mycompany.gwtproject.client.service.print.Print;

import com.google.gwt.core.client.GWT;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.uibinder.client.UiBinder;
import com.google.gwt.uibinder.client.UiField;
import com.google.gwt.uibinder.client.UiHandler;
import com.google.gwt.user.client.DOM;
import com.google.gwt.user.client.Element;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.Composite;
import com.google.gwt.user.client.ui.VerticalPanel;
import com.google.gwt.user.client.ui.Widget;
...

public class MyView2Impl extends Composite implements MyView2 {

@UiField
VerticalPanel myDetails;

@UiHandler("printBtn")
void onPrintBtnClick(ClickEvent event) {
	myDetails.getElement().setId("details");
	Element element = DOM.getElementById("details");
	Print.it(element);
}

This will bring up the print dialog :
[Screenshot: Print]

Links :
http://code.google.com/p/gwt-print-it/source/browse/trunk/src/br/com/freller/tool/client/Print.java

[Book review] Programmation GWT 2, second edition

Here is my review of the second edition of the book Programmation GWT 2 (author : Sami Jaber), which is written in French.

This second edition has 21 chapters (516 pages), whereas the first edition had 17 chapters (461 pages), so about 50 additional pages.
It covers the new features introduced from version 2.0 up to version 2.5.
French-speaking developers can certainly thank the author for giving them a book in French that is this complete and up to date on GWT.

Chapter 1
The first chapter clearly presents the structure of a GWT project as well as the different modes : development mode and production mode. With a short introduction to a new feature of version 2.5 : Super Dev Mode.

Chapters 2 and 3
In chapter 2, I appreciated the honesty of the author, who points out from the start “the simplicity and sobriety” of the widgets available out of the box in the framework. The main widgets are reviewed, with supporting usage examples (Java and CSS code, as well as screenshots).

Chapter 4
I think this chapter gives a good overview of third-party libraries (Sencha Ext-GWT, SmartGWT, GWT-DnD, GChart, GWT HighCharts) and complementary frameworks (Vaadin, among others).

Chapter 5
The use of JavaScript in Java code is studied in detail in this chapter : inserting JavaScript code in a Java method, integrating an external JavaScript file, type mapping between Java and JavaScript, etc. The author also explains overlay types, which are rather little known.

Chapter 6
Creating custom components is often necessary in a project, and this book does not fail to devote an entire chapter to it. It covers in detail the GWT event mechanism and the widget model.

Chapter 7
This chapter describes RPC services and presents the best practices to follow when using them.

Chapter 8
This chapter focuses on Java EE integration, with an example using EJB 3 and JPA. I found the example relevant and sufficiently illustrated with code.

Chapters 9, 10 and 11
These chapters explain and extensively describe code splitting, deferred binding and resource management (the ClientBundle API), with plenty of code, compilation reports, browser screenshots, etc.

Chapter 12
This chapter goes deep into the internals of GWT : the various facets of the compiler, the files it produces, code pruning and optimizations.

Chapter 13
Internationalization with GWT is covered in this chapter. The author presents the use of the i18n API, dictionaries, messages, type conversions and the tooling.

Chapter 14
Testing is not neglected, since an entire chapter is dedicated to it. It covers the use of GWTTestCase, HTMLUnit and Selenium. The latter is used quite a bit in the chapter, notably for writing functional tests (description of Selenium IDE and the WebDriver module). Finally, mocking is not forgotten : it is illustrated with an example using JMock and EasyMock.

Chapter 15
This chapter on design patterns is in my opinion one of the most important, because it lists best practices for architecture and design. And in my opinion the author did a very good job, providing plenty of advice : how to manage the session, browser history, long-running operations with the Timer and Scheduler classes, the Command, MVC and MVP patterns, etc.

Chapter 16
I found the chapter dedicated to UiBinder very thorough. It covers managing styles and resources, embedding images, handling events, referencing composite widgets inside another composite widget…

Chapter 17
A very short chapter that takes the time to give an overview of the Eclipse plug-in for GWT. I think it shows its capabilities well.

Chapter 18
A chapter is devoted to the CellWidget components. I really appreciated the description and usage (numerous code excerpts) of the CellTable widget, widely used for building tables. Google’s official documentation is very good on the subject, but additional explanations, in French no less, are never too much !

Chapter 19
Finally, a chapter dedicated to the Activities and Places API. At the moment there are still few books that cover this API in detail, and that is a big gap. In this chapter the author explains this complex API at length and illustrates it with an example and a diagram describing the whole execution chain of the Activities and Places API.

Chapters 20 and 21
These last two chapters cover the RequestFactory, AutoBean and Editors APIs. Both are rich in code, explanations, advice and warnings !

Overall, I greatly appreciated the thoroughness of the explanations provided in most chapters, as well as the advice and warnings on certain complex topics.
I also really appreciated the addition of a chapter entirely dedicated to Activities and Places. The importance of this framework in the development of a GWT application indeed deserves a chapter of its own, and the author not only included one but also made a fine effort to explain this difficult API in detail, to make it easier to understand and absorb.
In hindsight, I noted only one thing missing from this book : a slightly more detailed presentation of Super Dev Mode, which had been announced as a very interesting new feature of version 2.5. But that feature was still quite new, particularly at the time this second edition was written.

Table of contents

Introduction to GWT
Chapter 1 : The development environment
Chapter 2 : The controls
Chapter 3 : The CSS layout model
Chapter 4 : Third-party libraries
Chapter 5 : Integrating JavaScript code
Chapter 6 : Creating custom components
Chapter 7 : RPC services
Chapter 8 : Java EE integration
Chapter 9 : Code splitting
Chapter 10 : Deferred binding
Chapter 11 : Resource management
Chapter 12 : Under the hood of GWT
Chapter 13 : Internationalization
Chapter 14 : The testing environment
Chapter 15 : GWT design patterns
Chapter 16 : Building interfaces with UiBinder
Chapter 17 : The Eclipse plug-in for GWT
Chapter 18 : CellWidget components
Chapter 19 : Activities and places
Chapter 20 : The Request Factory API
Chapter 21 : The Editors API