#1 gradlecast – getting started with gradle

By breskeby | October 27, 2010

A week ago someone asked me on twitter whether there were any screencasts about gradle (http://www.gradle.org) available. As far as I know there were none yet, so I've started to create one. This is my first screencast about gradle (a GradleCast). I hope you enjoy it and will be forgiving of my english skills.

This screencast was done with gradle 0.9-rc1. Feedback, comments, linking, voting and flattering are (as always) appreciated.

Let me know about gradle related topics you want to see in another gradle screencast.
regards,
René

Write a custom Caching AST Transformation with Groovy

By breskeby | June 21, 2010

At the last JAX in Mainz I attended a talk by Hamlet D'Arcy called "code generation on the jvm". The title wasn't that inviting, but since I knew him and his groovy addiction, I knew it would be worth it. Besides a tiny introduction to Spring Roo and another library I don't remember, he gave a nice introduction to Groovy AST Transformations. BTW, AST is an abbreviation for Abstract Syntax Tree.

What is an AST Transformation?

In short:

The purpose of AST Transformations is to let developers hook into the compilation process and modify the AST before it is turned into the bytecode that will be run by the JVM.

Groovy ships with several built-in AST Transformations. If you still have no clue what an AST Transformation is, or what it can do for you, have a look at the singleton example, which explains how a simple (groovy) class is converted into a singleton using an AST Transformation.
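
As a minimal sketch of that idea, here is Groovy's built-in @Singleton transformation in action; at compile time the annotation generates the private constructor and the static instance field for you:

@Singleton
class Registry {
    String lookup(String key) {
        "value for $key"
    }
}

//the transformation added a static `instance` property
assert Registry.instance.is(Registry.instance)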

Adding Caching

I won't discuss the general pros and cons of caching here. In this post I want to show how to create a custom AST Transformation that caches method calls. Think of an expensive method call, like any kind of remote call or some image processing that depends on one input parameter:

class SomeServiceClass {

   public String getRemoteValue(String input) {
        //...
        //make expensive remote call
        //or do a lot of calculations here
        //and store the result in value
        //...

        return value
   }
}

In some cases it would be nice to cache the method results. There are different ways to do this:

  • in 1995 – The Java developer would change the implementation of SomeServiceClass into something like this:

    public class SomeServiceClass {
       private Map<String, String> cachedRemoteValues = new HashMap<String, String>();

       public String getRemoteValue(String input) {
            String returnValue = cachedRemoteValues.get(input);
            if(null == returnValue){
                //...
                //make expensive remote call
                //or do a lot of calculations here
                //and store value locally in returnValue
                //...

                //store calculated value in hashmap
                cachedRemoteValues.put(input, returnValue);
            }

            return returnValue;
       }
    }

  • In 2004 – The smart guys would have written an aspect that does this for them and compiled their code with the ajc compiler
  • In 2006 – The state-of-the-art guys would have written an aspect as well, but woven it into their code at runtime
  • In 2008 – Today (thanks to the OSGi hype), those of you who want to code on the bleeding edge would use equinox aspects ( http://www.eclipse.org/equinox/incubator/aspects/ ) to weave different versions of a caching aspect into your service bundles.

But what sexy solution could we use in 2010 to get this done? What is sexier than:

1. using a sexy modern language like groovy
2. using a DSL (Domain Specific Language) to describe what you really want
3. hooking into the compilation, juggling with AST nodes and telling the compiler directly what you want?

So let's get into the details. What is our target? I think it would be nice to mark all methods I want to be cacheable with an annotation named "@Cached". The example above would then look like this (no surprises here):

class SomeServiceClass {

   @Cached
   public String getRemoteValue(String input) {
        //...
        //make expensive remote call
        //or do a lot of calculations here
        //and store the result in value
        //...

        return value
   }
}

Writing an annotation that works as a marker for AST Transformations doesn't differ much from writing a normal annotation. All it needs is some additional metadata:

package com.breskeby.example

import org.codehaus.groovy.transform.GroovyASTTransformationClass
import java.lang.annotation.ElementType
import java.lang.annotation.Target
import java.lang.annotation.RetentionPolicy
import java.lang.annotation.Retention

@Retention (RetentionPolicy.SOURCE)
@Target ([ElementType.METHOD])
@GroovyASTTransformationClass (["com.breskeby.example.CachedTransformation"])
@interface Cached {

}

The first annotations should be familiar. RetentionPolicy.SOURCE means that the annotation is discarded by the compiler and not available at runtime or in the generated class. Since this annotation is only needed as a marker during compilation, this is pretty obvious. ElementType.METHOD as the parameter of @Target indicates that our annotation is only applicable to methods.
The really interesting part of the code snippet above is

@GroovyASTTransformationClass (["com.breskeby.example.CachedTransformation"])

This annotation indicates that an ASTTransformation is linked to this annotation. As its parameter you need to add the fully qualified class name of the associated ASTTransformation. The class c.b.e.CachedTransformation implements the ASTTransformation interface.

import org.codehaus.groovy.ast.ASTNode
import org.codehaus.groovy.ast.AnnotationNode
import org.codehaus.groovy.ast.ClassNode
import org.codehaus.groovy.ast.MethodNode
import org.codehaus.groovy.control.CompilePhase
import org.codehaus.groovy.control.SourceUnit
import org.codehaus.groovy.transform.ASTTransformation
import org.codehaus.groovy.transform.GroovyASTTransformation

@GroovyASTTransformation(phase = CompilePhase.INSTRUCTION_SELECTION)
class CachedTransformation implements ASTTransformation {

  void visit(ASTNode[] astNodes, SourceUnit sourceUnit) {
    if(!astNodes) return
    if(!astNodes[0]) return
    if(!astNodes[1]) return
    if(!(astNodes[0] instanceof AnnotationNode)) return
    if(!(astNodes[1] instanceof MethodNode)) return

    //validate the annotated method
    MethodNode annotatedMethod = astNodes[1]
    if(annotatedMethod.parameters.length != 1) return
    if(annotatedMethod.returnType.name == "void") return

    ClassNode declaringClass = annotatedMethod.declaringClass
    makeMethodCached(declaringClass, annotatedMethod)
  }
}

The @GroovyASTTransformation annotation provides information about how and when to apply the transformation. Further information about compile phases can be found here. The AST Transformation itself is implemented via the visitor pattern.
Our implementation of the visit method checks that the annotated method has exactly one parameter and that its return type isn't void. The transformation cannot know what to cache if the method returns nothing. After all these checks are done we call makeMethodCached to make the method cached (surprise! surprise!). The method makeMethodCached does the real work. We should take a look at it, shouldn't we? The whole method is shown in the following listing:

void makeMethodCached(ClassNode classNode, MethodNode methodNode) {
   // add field of hashmap for cached objects
   def cachedFieldName = methodNode.getName();
   FieldNode cachedField =
    new FieldNode("cache$cachedFieldName", Modifier.PRIVATE, new ClassNode(Map.class), new ClassNode(classNode.getClass()),
      new ConstructorCallExpression(new ClassNode(HashMap.class), new ArgumentListExpression()));
    classNode.addField(cachedField)

    //augment method with cache calls
    Parameter[] params = methodNode.getParameters()
    String parameterName = params[0].getName()
    List<Statement> statements = methodNode.getCode().getStatements();
    Statement oldReturnStatement = statements.last();
    def ex = oldReturnStatement.getExpression();
    def ast = new AstBuilder().buildFromSpec  {
      expression{
          declaration {
                variable "cachedValue"
                token "="
                methodCall {
                    variable "cache$cachedFieldName"
                    constant 'get'
                    argumentList {
                      variable parameterName
                    }
                }
          }
      }
      ifStatement {
          booleanExpression {
              variable "cachedValue"
          }
          //if block
          returnStatement {
              variable "cachedValue"
          }
          //else block
          empty()
      }
      expression{
          declaration {
            variable "localCalculated$cachedFieldName"
            token "="
            {-> delegate.expression << ex}()
          }
        }
        expression {
          methodCall {
            variable "cache$cachedFieldName"
            constant 'put'
            argumentList {
              variable parameterName
              variable "localCalculated$cachedFieldName"
            }
          }
        }
        returnStatement {
              variable "localCalculated$cachedFieldName"
        }
    }

    statements.remove(oldReturnStatement)
    statements.add(0,ast[0]);
    statements.add(1,ast[1]);
    statements.add(ast[2])
    statements.add(ast[3])
    statements.add(ast[4])
  }

First we add a FieldNode to our ClassNode. This is the private Map we use to store our cached elements. After that we temporarily store the name of the parameter and the expression of the return statement. Trust me, we need both later…

Now it's time to create some AST nodes. For that groovy has a built-in AstBuilder. This builder offers different capabilities; in this example we use the buildFromSpec method. Maybe this is a more verbose way than buildFromCode or buildFromString, but it's a nice exercise to get a better understanding of an Abstract Syntax Tree. To get a feeling for the relationship between written code and the corresponding Abstract Syntax Tree in different compile phases you can use the groovy console and its "inspect AST" feature. The best documentation of the AST specification DSL I found on the internet was the AstBuilderFromSpecificationTest class in groovy trunk.
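
For comparison, here is a minimal sketch of the buildFromString alternative (the snippet reuses the cache/input names from this post): it parses plain source text at a given compile phase and returns the resulting AST nodes, which is also handy for exploring what a piece of code looks like as a tree:

import org.codehaus.groovy.ast.builder.AstBuilder
import org.codehaus.groovy.control.CompilePhase

//parse a snippet and get back its AST as a list of nodes (statements only)
def nodes = new AstBuilder().buildFromString(CompilePhase.SEMANTIC_ANALYSIS, true,
        'def cachedValue = cache.get(input)')
nodes.each { println it.class.simpleName }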

Using AstBuilder.buildFromSpec we create five nodes here. Let's take a look at each of them:

  1.  // def cachedValue =  cacheMethodName.get("parameter")
    expression{
        declaration {
           variable "cachedValue"
           token "="
           methodCall {
              variable "cache$cachedFieldName"
              constant 'get'
              argumentList {
                 variable parameterName
              }
           }
        }
     }
    

    This calls get on the hashmap, passing the value of the method parameter.

  2.  // if(cachedValue) return cachedValue
     ifStatement {
        booleanExpression {
           variable "cachedValue"
        }
        //if block
        returnStatement {
           variable "cachedValue"
        }
        //else block
        empty()
     }
    

    This is a simple if statement: if cachedValue is not null, return cachedValue.

  3.  // def localCalculatedCachedField = ...
     expression{
        declaration {
           variable "localCalculated$cachedFieldName"
           token "="
           {-> delegate.expression << ex}()
        }
     }
    

    The third expression assigns the expression of the return statement we stored at the beginning to a local variable. Doing this via the specification is a bit tricky: we have to bring our stored expression into the spec. What makes it work is that for "declaration {}", the 3rd call has to be a closure execution that pushes one expression (type = Expression) onto AstSpecificationCompiler's expression list. (Roshan Dawrani told me that. Ask him for further details...)

  4.  expression {
        methodCall {
           variable "cache$cachedFieldName"
           constant 'put'
           argumentList {
              variable parameterName
              variable "localCalculated$cachedFieldName"
           }
        }
     }
    

    The 4th expression puts the value stored in the local variable from expression three into the hashmap.

  5.  returnStatement {
        variable "localCalculated$cachedFieldName"
     }
    

    The last node is a simple return statement. After storing the calculated value in the hashmap (see expression 4), we return it.

  6. After creating our AST nodes we have to rearrange the list of statements of the method we want to cache. First we remove the old return statement, then we add the nodes above to the statement list of the method:

        statements.remove(oldReturnStatement)
        statements.add(0,ast[0]);
        statements.add(1,ast[1]);
        statements.add(ast[2])
        statements.add(ast[3])
        statements.add(ast[4])
    

    Now we’re done with adding caching to a method via AST Transformations. I pushed the whole example including tests to github.
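
    To make this concrete, here is a hand-written sketch of what the transformed method effectively does once the five generated statements are spliced in (computeRemoteValue is just a placeholder for the original return expression):

        class SomeServiceClass {
            private Map cachegetRemoteValue = new HashMap()

            public String getRemoteValue(String input) {
                def cachedValue = cachegetRemoteValue.get(input)
                if (cachedValue) {
                    return cachedValue
                }
                def localCalculatedgetRemoteValue = computeRemoteValue(input) //placeholder for the expensive computation
                cachegetRemoteValue.put(input, localCalculatedgetRemoteValue)
                return localCalculatedgetRemoteValue
            }
        }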

    Limitations of this example
    This example uses the simplest approach to caching. Introducing caching to your application can bring performance improvements, but it can also introduce new problems. We didn't care about cache invalidation in the example above. Furthermore, using a simple HashMap can be a problem too: you should always use a SoftReference-based map for caching (see kabutz). Maybe I'll change this in a later post.
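
    As a minimal sketch of that idea (a hypothetical helper, not part of the example on github), values wrapped in soft references can be reclaimed by the garbage collector under memory pressure instead of growing without bound:

        import java.lang.ref.SoftReference

        class SoftCache {
            private final Map map = new HashMap()

            void put(key, value) {
                map[key] = new SoftReference(value)
            }

            def get(key) {
                def ref = map[key]
                def value = ref?.get()
                if (ref != null && value == null) {
                    map.remove(key) //the referent was garbage collected
                }
                value
            }
        }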


Griffon in Action (a.k.a. hackergarten #2)

By Administrator | May 3, 2010

Last Friday we met in Basel for the second hackergarten. Since Andres joined the canoo team last month, a griffon jam was the obvious choice. The goal of the evening was to write at least one griffon plugin and release it the same evening. I have to admit, I haven't written swing apps in years, and writing a plugin for a swing framework without much swing knowledge seemed very ambitious for just one evening. Though I'm the maintainer of the griffon port at macports and have already played a bit with the framework, I don't consider myself a griffon guy. Last week Manning released the "plugin" chapter of "Griffon in Action" (GiA) in the manning early access program (MEAP). Great timing for our griffon jam, wasn't it? Since I had the chance to take a look at the Griffon in Action MEAP, I'd like to share my thoughts about the first chapters.

I skipped the chapter about installation. I'm on a mac and use the macports griffon port (latest version is griffon 0.3). If you're on a mac too, have a look at a previous blog entry about groovy tools and macports. Those of you who are familiar with the grails framework will notice lots of similarities. In fact, griffon was inspired by grails and reuses lots of its approaches.

Even without groovy knowledge, the examples in the book are easy to understand. The final Griffon in Action will contain an appendix with some further explanation of the groovy foo you need to get started with griffon.
What was always hard for me while developing swing was the correct usage and implementation of the Model-View-Controller pattern. Even if you don't end up using griffon in production, GiA is a nice documentation of a well-structured swing application.

What I didn't like while following the examples in the book is that some code snippets are not copy & paste ready. Some listings are polluted with footnotes (e.g. Listing 1.2), which IMHO should be avoided in general. I even had issues with the "clean" code snippets: somehow the pasted snippets were cluttered and I always had to rearrange the code manually. But maybe that isn't an issue of the book and more an issue of Mac OS X's pdf preview. Another thing I didn't like about the snippets is the mixed and unexplained footnote syntax (A, B or 1, 2, 3).

As mentioned above, GiA contains a complete chapter about writing plugins for griffon. As we discovered during the griffon jam last Friday, extracting your custom components into griffon plugins is really simple. We (mostly griffon newbies) released nearly four griffon plugins in about 5-6 hours; I think that's the best evidence of how easy griffon plugin development is. Maybe you'll complain that having the project lead there is a bit of cheating, and maybe you're right about that. But GiA chapter 11 is all you need to start plugin development on your own. The chapter is well structured and gives a really nice intro to plugins. This easy plugin environment enables great reusability of your custom components across different griffon projects.

I learned a lot about swing and structuring swing apps while flying over (I didn't read all chapters in depth) the first 12 chapters.

regards,
René

add emma code coverage reporting to your gradle build

By breskeby | April 15, 2010

I'm currently porting an ant based build to gradle. The (deprecated) ant build contains different reporting tools like pmd, checkstyle, findbugs and much more. Since a colleague of mine just asked for code coverage statistics on our build, this post describes how to add EMMA support to your gradle build. EMMA is a free code coverage tool for java. If you haven't heard about it, have a look at http://emma.sourceforge.net/. It is shipped with custom ant tasks, and as we know, gradle works fine with ant. So,
How complicated can it be to get EMMA working with my gradle build?

Let’s have a look at a very simple java project gradle build file:

apply plugin:'java'

repositories{
    mavenCentral()
}

dependencies{
  testCompile "junit:junit:4.7"
}

This is all you need to get all java sources in src/main/java compiled and to run all junit tests stored in src/test/java.
After running this build by executing "gradle test" you can take a look at your test results in build/reports/test-results. But besides the test results we want to know which code we covered with our tests and which we didn't. There are different code coverage tools for java available. For years I've been happy with EMMA, for two reasons:

  1. the generated reports contain enough information for me
  2. there is a plugin available for the hudson ci server

What must be done to get EMMA code coverage working for your test task? EMMA in general has two modes:

  • on-the-fly instrumentation mode
  • offline class instrumentation

On-the-fly instrumentation can be used to add emma support on demand for any java application. For further information have a look at the EMMA reference.

To get EMMA working with our junit tests we use offline class instrumentation. This means that EMMA instruments the compiled classes with emma-specific information.

We need to resolve the additional emma artifacts. Luckily, like junit, they are available in the central maven repo. To manage these additional artifacts, we add a custom configuration called emma and add the emma core and emma ant modules to it:

configurations{
    emma
}

dependencies{
  emma "emma:emma:2.0.5312"
  emma "emma:emma_ant:2.0.5312"

  testCompile "junit:junit:4.7"
}

Now we need to configure our test task to do bytecode instrumentation on our compiled classes just before running the tests. We use the doFirst{} closure for that:

test{
   jvmArgs "-Demma.coverage.out.file=build/tmp/emma/metadata.emma", "-Demma.coverage.out.merge=true"

   doFirst{
      ant.taskdef( resource:"emma_ant.properties", classpath: configurations.emma.asPath)
      ant.path(id:"run.classpath"){
          pathelement(location:sourceSets.main.classesDir.absolutePath )
      }
      ant.emma(verbosity:'info'){
          instr(merge:"true", destdir:'build/tmp/emma/instr', instrpathref:"run.classpath", metadatafile:'build/tmp/emma/metadata.emma'){
              instrpath{
                  fileset(dir:sourceSets.main.classesDir.absolutePath, includes:"*.class")
              }
          }
      }
      setClasspath(files("$buildDir/tmp/emma/instr") + configurations.emma + getClasspath())
   }
}

We do four things here:

  1. add EMMA related JVM args to our tests
  2. define the custom EMMA ant tasks
  3. instrument our compiled classes and store them at $buildDir/tmp/emma/instr
  4. update the test classpath with the instrumented classes and the emma libs

Running your build script now via gradle test should produce output similar to this:

rene-groschkes-macbook-pro:sample Rene$ gradle test
:compileJava
:processResources
:classes
:compileTestJava
:processTestResources
:testClasses
:test
processing instrumentation path ...
instrumentation path processed in 235 ms
[1 class(es) instrumented, 0 resource(s) copied]
metadata merged into [/Users/Rene/workspaces/gradle/github/gradleplugins/emmaPlugin/sample/build/tmp/emma/metadata.emma] {in 2 ms}

BUILD SUCCESSFUL

Total time: 7.709 secs

Now EMMA works with our junit tests. What's still missing are the generated code coverage reports. The report should be generated directly after the tests are done, so we add a doLast{} closure:

test{
   jvmArgs "-Demma.coverage.out.file=build/tmp/emma/metadata.emma", "-Demma.coverage.out.merge=true"

   doFirst{
      ant.taskdef( resource:"emma_ant.properties", classpath: configurations.emma.asPath)
      ant.path(id:"run.classpath"){
          pathelement(location:sourceSets.main.classesDir.absolutePath )
      }
      ant.emma(verbosity:'info'){
          instr(merge:"true", destdir:'build/tmp/emma/instr', instrpathref:"run.classpath", metadatafile:'build/tmp/emma/metadata.emma'){
              instrpath{
                  fileset(dir:sourceSets.main.classesDir.absolutePath, includes:"*.class")
              }
          }
      }
      setClasspath(files("$buildDir/tmp/emma/instr") + configurations.emma + getClasspath())
   }

   doLast{
      ant.emma(enabled:"true"){
          report(sourcepath:"src/main/java"){
              fileset(dir:"build/tmp/emma"){
                  include(name:"*.emma")
              }
              txt(outfile:"build/reports/emma/coverage.txt")
              html(outfile:"build/reports/emma/coverage.html")
              xml(outfile:"build/reports/emma/coverage.xml")
          }
      }
   }
}

We create three types of reports here (txt, html, xml). Running the build script again should result in output like this:

rene-groschkes-macbook-pro:sample Rene$ gradle test
:compileJava
:processResources
:classes
:compileTestJava
:processTestResources
:testClasses
:test
processing instrumentation path ...
instrumentation path processed in 298 ms
[1 class(es) instrumented, 0 resource(s) copied]
metadata merged into [/Users/Rene/workspaces/gradle/github/gradleplugins/emmaPlugin/sample/build/tmp/emma/metadata.emma] {in 2 ms}
processing input files ...
1 file(s) read and merged in 3 ms
writing [txt] report to [/Users/Rene/workspaces/gradle/github/gradleplugins/emmaPlugin/sample/build/reports/emma/coverage.txt] ...
writing [html] report to [/Users/Rene/workspaces/gradle/github/gradleplugins/emmaPlugin/sample/build/reports/emma/coverage.html] ...
writing [xml] report to [/Users/Rene/workspaces/gradle/github/gradleplugins/emmaPlugin/sample/build/reports/emma/coverage.xml] ...

BUILD SUCCESSFUL

Total time: 8.221 secs

That wasn't that complicated, was it? I shared an emma gradle plugin with a sample project on github.
This reduces our build script to the following:

apply plugin:'java'
apply from:'http://github.com/breskeby/gradleplugins/raw/master/emmaPlugin/emma.gradle'

repositories{
    mavenCentral()
}

dependencies{
  emma "emma:emma:2.0.5312"
  emma "emma:emma_ant:2.0.5312"

  testCompile "junit:junit:4.7"
}

Have fun with that. Comments appreciated, as always!

regards,
René

keep your groovy based tools up to date with macports

By breskeby | April 1, 2010

Some months ago (about a year I guess) I was looking for an easy way to keep my groovy and grails installations on my mac up to date. Downloading and installing each time I wanted to switch versions was annoying and no proper solution for me. Other *nix based OSes offer packaging tools out of the box: Debian has its apt-get, redhat has its rpm packages, but macos doesn't offer such a tool out of the box. Searching the web for a macos based packaging tool guided me to macports.

The plain macports installation offers a command line tool named "port" to handle your different port installations. A port is a software package which is installable and maintainable via macports. These ports are based on Tcl scripts. In short, these scripts describe where to get the desired software, how to build it and how to install it on your system.

But how can you get the latest groovy release working on your mac? Instead of searching for an available download on the internet, unpacking the distribution to a favoured place on your hard disk and setting the symlinks, the only thing you have to do is open your shell and type:

 sudo port install groovy

After hitting enter macports does the following tasks for you:

  1. check if ant is installed (it is needed to build groovy from source; if it's missing, the apache-ant port is installed first)
  2. get the latest source distribution of groovy from their download site (in this case from http://dist.codehaus.org/groovy/distributions/)
  3. validate the downloaded archive via md5, sha1 and rmd160 checksums
  4. build groovy from source via ant
  5. move the built distribution to the dist directory
  6. patch the permissions of the executable scripts
  7. link the groovy scripts to your bin path

There are some gui based macports tools available for those of you who don't like the shell (though that's hard to believe). My favourite one is porticus.

Currently the following groovy related ports are available:

  • groovy
  • grails
  • griffon
  • gradle
  • gant

If you have installed a port (in my example here the groovy port) and want to update it to the latest version, you have to run two commands:

port selfupdate
port upgrade groovy

The selfupdate command syncs your local port tree with the one hosted at macports to get the newest entries.
If you have installed several ports (such as the ones above) and want to update all outdated ports, you can do this easily by running

port selfupdate
port upgrade outdated

Besides the latest released versions, there are several ports available to keep track of milestone/rc distributions. Assume you have installed the grails port (version 1.2.2) and want to take a look at the latest release candidate (currently 1.3.0.RC1). Switching can be done via two commands in your shell:

sudo port uninstall grails
sudo port install grails-devel

That's it! Switching versions was never easier. The currently available groovy related developer ports are:

  • gradle-devel
  • groovy-devel
  • grails-devel
  • griffon-devel

Sometimes it can take a day or two until the latest version of these tools is available via macports, since the ports are currently maintained manually. I'm looking for a more automatic way to do this, but couldn't find the time yet.

I know there are other packaging tools for mac available. As a convinced git user, I find the approach of homebrew awesome. But since I have a day job to pay my rent and still need some hours of sleep, I don't have the time to additionally maintain the groovy tools at homebrew. Any homebrew/groovy fanboys out there who could take this over? That would be nice too.

speed up your build with gradle

By breskeby | March 12, 2010

In my last post, I explained how to add aspectj support to gradle. Some tasks (like the shown iajc task) in complex projects take a lot of time, even though nothing has changed (e.g. neither the sources, nor the dependencies, nor the compiler settings). In my opinion fast builds are a key factor in CI environments. An (up to now) underestimated feature of gradle is its ability to keep your builds fast by making them incremental.
To speed up your build jobs, gradle offers an easy way to check whether executing a task is really necessary. Let's speed up our aspectj project build by skipping the compileJava task if it's up to date.

Let's take a look at the original aspectj compile task:

task compileJava(dependsOn: JavaPlugin.PROCESS_RESOURCES_TASK_NAME, overwrite: true) << {
    ant.taskdef( resource:"org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties", classpath: configurations.ajc.asPath)
    ant.iajc(source:sourceCompatibility, target:targetCompatibility, destDir:sourceSets.main.classesDir.absolutePath, maxmem:"512m", fork:"true",
        aspectPath:configurations.aspects.asPath, inpath:configurations.ajInpath.asPath, sourceRootCopyFilter:"**/.svn/*,**/*.java",classpath:configurations.compile.asPath){
       
        sourceroots{
            sourceSets.main.java.srcDirs.each{
                pathelement(location:it.absolutePath)
            }      
        }
    }
}

When can the task be skipped? Gradle differentiates between the inputs and outputs of a task. The task must be executed if:

  • the output was deleted or otherwise manipulated
  • the input files have changed (e.g. sources and classpaths)
  • a property the task relies on has changed

In the compileJava task above

  • the output directory (the task result) is the classes directory (sourceSets.main.classesDir)
  • the input files are sourceSets.main.java and all classpaths (iajc, compile, aspectpath, inpath)
  • the properties the task relies on are the compiler properties sourceCompatibility and targetCompatibility

To mark the respective parts, gradle adds the properties inputs and outputs to every task. Have a look at the refactored code snippet:

task compileJava(dependsOn: JavaPlugin.PROCESS_RESOURCES_TASK_NAME,
                overwrite: true) {
   outputs.files sourceSets.main.classesDir
   inputs.files sourceSets.main.java.srcDirs, configurations.aspects, configurations.ajInpath, configurations.compile
   inputs.properties(["sourceCompatibility":sourceCompatibility, "targetCompatibility":targetCompatibility])

   doLast{
    ant.taskdef( resource:"org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties", classpath: configurations.ajc.asPath)
    ant.iajc(source:sourceCompatibility, target:targetCompatibility, destDir:sourceSets.main.classesDir.absolutePath, maxmem:"512m", fork:"true",
        aspectPath:configurations.aspects.asPath, inpath:configurations.ajInpath.asPath, sourceRootCopyFilter:"**/.svn/*,**/*.java",classpath:configurations.compile.asPath){
        sourceroots{
            sourceSets.main.java.srcDirs.each{
                pathelement(location:it.absolutePath)
            }      
        }
    }  
   }
}

Speeding up our build was really easy, wasn't it? The example in this post uses one of the latest gradle 0.9 snapshots available at http://snapshots.dist.codehaus.org/gradle/. As Hans mentioned on the mailing list (and I definitely agree), they are of good quality. For further information about this feature have a look at the latest javadoc for the Task, TaskOutputs and TaskInputs interfaces. Unfortunately this feature isn't explained in the gradle userguide yet.
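
To see the same idea outside the aspectj context, here is a minimal, hypothetical sketch (the task name and paths are made up) using the same inputs/outputs API: gradle skips the action when neither the inputs nor the outputs have changed since the last run.

task generateDocs {
    inputs.files 'src/docs'
    inputs.properties(["docVersion": "1.0"])
    outputs.files "$buildDir/docs"

    doLast{
        //plain ant copy; the interesting part is the up-to-date check above
        ant.copy(todir: "$buildDir/docs"){
            fileset(dir: 'src/docs')
        }
    }
}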

Have a nice day and keep comments about my english skills under your hat. ;-)

regards,
René

using gradle with aspectj

By breskeby | February 22, 2010

In this post I want to show you how easy it is to build your aspectj projects with gradle. IMHO gradle is the most flexible and versatile build tool for JVM based projects. It fully supports ant and integrates well into maven environments.
But let's dive into the example I prepared.

The example project is based on the "Bean Example" provided by the ajdt plugin for eclipse. It consists of three parts:

We have a basic bean class named Point:

package com.breskeby.bean;

class Point {

    private int x = 0;
    private int y = 0;

    public int getX(){
        return x;
    }
    public int getY(){
        return y;
    }

     public void setX(int newX) {
        this.x = newX;
    }

    public void setY(int newY) {
        this.y = newY;
    }
}

Now I want to use this bean with full property change listener support without polluting my Point source code. So I create an aspect which weaves the property change support into the bean:

/*
 * Copyright (c) 1998-2002 Xerox Corporation.  All rights reserved.
 *
 * Use and copying of this software and preparation of derivative works based
 * upon this software are permitted.  Any distribution of this software or
 * derivative works must comply with all applicable United States export
 * control laws.
 *
 * This software is made available AS IS, and Xerox Corporation makes no
 * warranty about the software, its performance or its conformity to any
 * specification.
 */


package com.breskeby.bean;

import java.beans.*;
import java.io.Serializable;

/*
 * Add bound properties and serialization to point objects
 */


aspect PointAspect {
  /*
   * privately introduce a field into Point to hold the property
   * change support object.  `this' is a reference to a Point object.
   */

  private PropertyChangeSupport Point.support = new PropertyChangeSupport(this);

  /*
   * Introduce the property change registration methods into Point.
   * also introduce implementation of the Serializable interface.
   */

  public void Point.addPropertyChangeListener(PropertyChangeListener listener){
    support.addPropertyChangeListener(listener);
  }

  public void Point.addPropertyChangeListener(String propertyName,
                                              PropertyChangeListener listener){

    support.addPropertyChangeListener(propertyName, listener);
  }

  public void Point.removePropertyChangeListener(String propertyName,
                                                 PropertyChangeListener listener) {
    support.removePropertyChangeListener(propertyName, listener);
  }

  public void Point.removePropertyChangeListener(PropertyChangeListener listener) {
    support.removePropertyChangeListener(listener);
  }

  public void Point.hasListeners(String propertyName) {
    support.hasListeners(propertyName);
  }

  declare parents: Point implements Serializable;

  /**
   * Pointcut describing the set<property> methods on Point.
   * (uses a wildcard in the method name)
   */

  pointcut setter(Point p): execution( public void Point.set*(*) ) && target(p);

  /**
   * Advice to get the property change event fired when the
   * setters are called. It's around advice because you need
   * the old value of the property.
   */

  void around(Point p): setter(p) {
        String propertyName =
      thisJoinPointStaticPart.getSignature().getName().substring("set".length());
        int oldX = p.getX();
        int oldY = p.getY();
        proceed(p);
        if (propertyName.equals("X")){
      firePropertyChange(p, propertyName, oldX, p.getX());
        } else {
      firePropertyChange(p, propertyName, oldY, p.getY());
        }
  }

  /*
   * Utility to fire the property change event.
   */

  void firePropertyChange(Point p, String property, double oldval, double newval) {
        p.support.firePropertyChange(property, new Double(oldval), new Double(newval));
  }
}

To test the correct behaviour of this aspect we use a junit test:

package com.breskeby.bean;

import java.beans.PropertyChangeEvent;
import java.beans.PropertyChangeListener;

import org.junit.Before;
import org.junit.Test;
import static org.mockito.Mockito.*;

public class PointTest {

    //class under test
    private Point cut;
   
    @Before public void setup(){
        cut = new Point();
    }
   
    @Test
    public void testPropertyChangeEventFired(){
        PropertyChangeListener changeListenerMock =
                   mock(PropertyChangeListener.class); 
        cut.addPropertyChangeListener(changeListenerMock);
        cut.setX(1);
        verify(changeListenerMock,times(1))
                   .propertyChange((PropertyChangeEvent) anyObject());
    }
}

After explaining the boring part of the example, we can focus on the automated build now.

As a starting point we use a simple build file that uses the java plugin:

apply id:'java'

repositories {
    mavenCentral()
}

dependencies{
    compile "aspectj:aspectjlib:1.5.3"
    testCompile "junit:junit:4.7"
    testCompile "org.mockito:mockito-all:1.8.2"
}

When running "gradle test", gradle tells us that something went wrong:

Example/src/test/java/com/breskeby/bean/PointTest.java:26: cannot find symbol
symbol  : method addPropertyChangeListener(java.beans.PropertyChangeListener)
location: class com.breskeby.bean.Point
        cut.addPropertyChangeListener(changeListenerMock);
           ^
1 error

FAILURE: Build failed with an exception.

To get this running we need to replace the compileJava task of the java plugin with a custom task that uses the aspectj compiler. The easiest way to get this working is to use the iajc ant task. The aspectj compiler demands additional configurations to set up the classpath for the iajc task and the classpaths for the inpath and aspectpath. To add custom configurations we simply add the following lines to our build file:

configurations {
    ajc
    aspects
    ajInpath
}

In our dependency block we add the aspectj tools (which contain the iajc ant task) to ajc. Since we don't use external dependencies for the inpath and aspectpath, we needn't add anything here. The complete dependency section for our tiny example is shown here:

dependencies{
    ajc "aspectj:aspectjtools:1.5.3"
    compile "aspectj:aspectjrt:1.5.3"
    testCompile "junit:junit:4.7"
    testCompile "org.mockito:mockito-all:1.8.2"
}

As mentioned, we have to replace the compileJava task with our own. Our custom task looks like the following:

task compileJava(dependsOn: JavaPlugin.PROCESS_RESOURCES_TASK_NAME, overwrite: true) << {
    ant.taskdef( resource:"org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties", classpath: configurations.ajc.asPath)
    ant.iajc(source:sourceCompatibility, target:targetCompatibility, destDir:sourceSets.main.classesDir.absolutePath, maxmem:"512m", fork:"true",
        aspectPath:configurations.aspects.asPath, inpath:configurations.ajInpath.asPath, sourceRootCopyFilter:"**/.svn/*,**/*.java",classpath:configurations.compile.asPath){  
       
        sourceroots{
            sourceSets.main.java.srcDirs.each{
                pathelement(location:it.absolutePath)
            }      
        }
    }
}

Running "gradle test" now should work and the test succeeds. The complete working build.gradle file looks like this:

apply id:'java'

repositories {
    mavenCentral()
}

configurations {
    ajc
    aspects
    ajInpath
}

dependencies{
    ajc "aspectj:aspectjtools:1.5.3"
    compile "aspectj:aspectjrt:1.5.3"
    testCompile "junit:junit:4.7"
    testCompile "org.mockito:mockito-all:1.8.2"
}

task compileJava(dependsOn: JavaPlugin.PROCESS_RESOURCES_TASK_NAME, overwrite: true) << {
    ant.taskdef( resource:"org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties", classpath: configurations.ajc.asPath)
    ant.iajc(source:sourceCompatibility, target:targetCompatibility, destDir:sourceSets.main.classesDir.absolutePath, maxmem:"512m", fork:"true",
        aspectPath:configurations.aspects.asPath, inpath:configurations.ajInpath.asPath, sourceRootCopyFilter:"**/.svn/*,**/*.java",classpath:configurations.compile.asPath){  
       
        sourceroots{
            sourceSets.main.java.srcDirs.each{
                pathelement(location:it.absolutePath)
            }      
        }
    }
}

I wrapped all aspectj specific parts of the build script into an aspectj plugin available at
http://github.com/breskeby/gradleplugins/raw/0.9-upgrade/aspectjPlugin/aspectJ.gradle

If you’re already running gradle version 0.9+ you can use this plugin and the build file looks like the following:

apply url:'http://github.com/breskeby/gradleplugins/raw/0.9-upgrade/aspectjPlugin/aspectJ.gradle'

repositories {
    mavenCentral()
}

dependencies{
    ajc "aspectj:aspectjtools:1.5.3"
    compile "aspectj:aspectjrt:1.5.3"
    testCompile "junit:junit:4.7"
    testCompile "org.mockito:mockito-all:1.8.2"
}

This sample project is available at github; comments and suggestions are appreciated.

regards,
René

Demagogy 1.5

By breskeby | August 18, 2009

What follows is the recording of a speech by Fräulein [tag]zensursula[/tag]. A prime example of shameless lying, agitation, demagogy and disgusting, anachronistic rhetoric of the 30s/40s. I chose version 1.5 for the title because it definitely wasn't worth a 2.0 to me. On behalf of the poor politicians in office and dignity, who would effectively remove themselves from whatever office with such statements, I hereby pull the Nazi comparison out of my nonexistent headgear and point to certain leading figures of the 1930s who likewise publicly paraded their child-blessed family bliss before the rabble while, in the Sportpalast, rolling the "R" almost even more beautifully than this lady here. When she says "ruck zuck" (in a flash), a chill really runs down my spine. But first, the video:

It can really frighten you how she agitates against the CCC and the Pirate Party here. And while we're at it: since the good lady, besides the abuse victims, also misuses poor Antoine de Saint-Exupéry for her purposes, I have to follow up with a better quote from the very same author, one that comes to my mind at her words "HIMMEL NOCHMAL" (good heavens):

When faith dies out, God dies and from then on proves to be unnecessary.

And one more thing, Ursel: I think the leftists believe you're capable of anything by now. Surely even that left-wing, unwashed black bloc from the constitutional court in Karlsruhe can't seriously come at you with freedom of speech or separation of powers. SHOW them, Ursel, fight your fight.

Pritlove fills the summer slump

By breskeby | August 11, 2009

That has probably never happened before at chaosradio. Yesterday Tim Pritlove wrote in his blog that he is currently in the comfortable situation of having several episodes of the great Chaosradio Express (CRE) stockpiled. In order not to overload the inclined listener, he is releasing the recordings only bit by bit, and first of all he wishes for the long overdue homage from the blogosphere (a word I actually don't like at all).
It is really difficult to single out one episode as a favourite here; among the cre episodes recorded over the last few years I would have to list at least two dozen. Among the shows of the last few weeks I would personally crown cre126 as the best one. The many great shows of the last year prove that the Bahncard 100 campaign has paid off for everyone involved. ... Anyway, I'm getting tired now, so that's it from me with the cre advertising for today. So I bow once more and then go to bed. Faster than me was, among others, antiblau. But I was faster than http://soupeter.soup.io/post/25241060/chaosradio-express-133-mp3
[tags]blog4cre[/tags]
regards, brs

Update your IDE

By breskeby | July 7, 2009

It's just a few days ago that the eclipse foundation released version 3.5 (a.k.a. galileo) of the best (opensource) IDE. The new [tag]eclipse[/tag] release is called [tag]galileo[/tag], traditionally named after a jupiter moon. I was a bit sceptical, because the first release of eclipse ganymede (3.4) last year wasn't that stable and it took two minor updates to get a really stable version of ganymede.

On the download page of galileo at http://eclipse.org/downloads/ you can find 9 different galileo based IDE packages. I think (and the download counter affirms this) the JEE package is the most popular eclipse package. That download page also makes clear that eclipse isn't a pure [tag]Java[/tag] IDE anymore. In the meantime eclipse has also become a great IDE for C/C++ and PHP developers.

There is no predefined package for python developers. No need to cry ;-) . With the [tag]pydev[/tag] plugin (http://pydev.sourceforge.net/ ), [tag]python[/tag] and/or jython developers should also feel comfortable with eclipse.

Given my daily development work, my first choice to download was the jee package. In comparison to the ganymede release in 2008, downloading the galileo release seems to be much faster: the download of 187.8 MB for the cocoa version for my mac was done in less than 5 minutes. Galileo is the first release which supports the cocoa api on the mac. Although galileo is also available as a carbon version, I think the cocoa version is the future: since carbon was just an API collection to support mac developers in porting mac-os software to mac-os x, its days are numbered.

Working on java projects with jdt hasn't changed a lot. JDT offers just minor improvements, like the improved java comparison editor, which adopted a lot of the features you already know from the plain java editor. The most noticeable change while importing my old workspace was the update of the integrated junit from version 4.4 to 4.5. In some circumstances junit 4.5 works differently from junit 4.4, so it wouldn't go amiss to give the developer the opportunity to choose between junit 4.4 and 4.5.

The PDE project has improved a lot since ganymede. The equinox runtime that galileo is based on implements the OSGi specification draft version 4.2. Among other things, the OSGi specification v4.2 addresses improvements to the OSGi security layer, transactions in osgi, a bundletracker modeled on the already known servicetracker, and a common command line interface.

The enhancements of the OSGi tooling support ensure that eclipse remains the cutting edge tool for OSGi developers. Primarily the target platform management has made good progress. Working with targets was a lot of pain in the past; now you can manage different target platforms, define targets in one single file, share that file with your colleagues and even populate your target definitions using p2.

After starting galileo for the first time I missed a lot of plugins I had used in ganymede. Since we made heavy use of the dropins folder to manage (manually) added plugins, I just copied the dropins folder of my ganymede installation to the galileo directory. I was pleasantly surprised to be able to run my galileo dist with nearly all old plugins working. Even the JProfiler plugin, which officially only supports ganymede, works without any problems. Only the ajdt plugin for ganymede wasn't working anymore. But despite problems with mirrored update sites on the first galileo day and no available release version of ajdt, the ajdt guys fixed these issues in just a couple of days. By now a release version of [tag]ajdt[/tag] v2.0 is available and the update sites are working again. The second plugin I had serious trouble with was the google appengine for java plugin. Unfortunately this plugin isn't available for galileo yet.

To sum up, one could say that updating your eclipse IDE to the latest galileo should be very easy, because the eclipse guys have definitely done their homework.

regards,
René

ps: comments are welcome!