Presentation of the Vert.x-Swagger project

by phiz71 at May 22, 2017 12:00 AM

This post is an introduction to the Vert.x-Swagger project, and describes how to use the Swagger-Codegen plugin and the SwaggerRouter class.

Eclipse Vert.x & Swagger

Vert.x and Vert.x Web are very convenient for writing REST APIs, and the Router in particular is very useful for managing all the resources of an API.

But when I start a new API, I usually take the “design-first” approach, and Swagger is my best friend for defining what my API is supposed to do. Then comes the “boring” part of the job: converting the Swagger file content into Java code. It is always the same: resources, operations, models…

Fortunately, Swagger provides a codegen tool: Swagger-Codegen. With this tool, you can generate a server stub based on your Swagger definition file. However, even though this generator supports many different languages and frameworks, Vert.x is missing.

This is where the Vert.x-Swagger project comes in.

The project

Vert.x-Swagger is a Maven project providing two modules.


It is a Swagger-Codegen plugin, which adds to the generator the capability of generating a Java Vert.x web server.

The generated server mainly contains:

  • POJOs for definitions
  • one verticle per tag
  • one MainVerticle, which manages the other API verticles and starts an HttpServer.

The MainVerticle uses vertx-swagger-router.


The main class of this module is SwaggerRouter. It is more or less a factory (and maybe I should rename the class) that can create a Router, using the Swagger definition file to configure all the routes. For each route, it extracts parameters from the request (query, path, header, body, form) and sends them on the event bus, using either the operationId as the address or a computed id (controlled by a parameter in the constructor).

Let’s see how it works

For this post I will use a simplified Swagger file, but you can find a more complex example here, based on the petstore Swagger file.

Generating the server

First, choose your Swagger definition. Here’s a YAML file, but it could be a JSON file:

Then, download these libraries :

Finally, run this command:

java -cp /path/to/swagger-codegen-cli-2.2.2.jar:/path/to/vertx-swagger-codegen-1.0.0.jar io.swagger.codegen.SwaggerCodegen generate \
  -l java-vertx \
  -o path/to/destination/folder \
  -i path/to/swagger/definition \
  --group-id \

For more information about how Swagger-Codegen works, you can read this.

You should see something like this in your console:

[main] INFO io.swagger.parser.Swagger20Parser - reading from ./wineCellarSwagger.yaml
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/model/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/model/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/resources/swagger.json
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/resources/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/pom.xml
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/.swagger-codegen-ignore
And this in your destination folder:

Generated sources

What has been created?

As you can see in 1, the vertx-swagger-codegen plugin has created one POJO per definition in the Swagger file.

Example: the bottle definition
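Since the generated example is not reproduced here, below is a hypothetical sketch of the kind of definition POJO the plugin produces. The field names are illustrative assumptions, not the actual generated code:

```java
// Hypothetical sketch of a generated definition POJO.
// Field names are assumptions for illustration only.
public class Bottle {
    private String name;
    private String appellation;
    private Integer vintage;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getAppellation() { return appellation; }
    public void setAppellation(String appellation) { this.appellation = appellation; }

    public Integer getVintage() { return vintage; }
    public void setVintage(Integer vintage) { this.vintage = vintage; }
}
```

The real generated POJOs also carry Jackson annotations so they can be (de)serialized from the request and response bodies.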

In 2a and 2b you can find :

  • an interface containing one function per operation
  • a verticle that defines all the operationIds and creates the EventBus consumers

Example: the Bottles interface

Example: the Bottles verticle

… and now ?

On line 23 of the generated verticle, you can see this:

BottlesApi service = new BottlesApiImpl();
This line will not compile until the BottlesApiImpl class is created.

In all the XXXAPIVerticles, you will find a variable called service. It is of type XXXAPI and is instantiated with a XXXAPIImpl constructor. This class does not exist yet, since it holds the business logic of your API.

So you will have to create these implementations yourself.

Fine, but what if I don’t want to build my API like this?

Well, Vert.x is unopinionated, but the way vertx-swagger-codegen creates the server stub is not. So if you want to implement your API the way you want, while still enjoying dynamic routing based on a Swagger file, the vertx-swagger-router library can be used standalone.

Just import this jar into your project:

You will be able to create your Router like this :

FileSystem vertxFileSystem = vertx.fileSystem();
vertxFileSystem.readFile(YOUR_SWAGGER_FILE, readFile -> {
    if (readFile.succeeded()) {
        Swagger swagger = new SwaggerParser().parse(readFile.result().toString(Charset.forName("utf-8")));
        Router swaggerRouter = SwaggerRouter.swaggerRouter(Router.router(vertx), swagger, vertx.eventBus(), new OperationIdServiceIdResolver());
    } else {
        // the Swagger file could not be read: log the error and fail fast
    }
});

You can omit the last parameter in SwaggerRouter.swaggerRouter(...). In that case, addresses will be computed instead of using the operationIds from the Swagger file. For instance, GET /bottles/{bottle_id} will become GET_bottles_bottle-id.
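To make the computed-address convention concrete, here is a small, hypothetical re-implementation of the naming rule (HTTP method plus path segments joined with underscores, with underscores inside path parameters turned into hyphens). The actual SwaggerRouter logic may differ in its details:

```java
public class AddressSketch {
    // Approximates the computed event-bus address for a route:
    // "GET" + "/bottles/{bottle_id}" -> "GET_bottles_bottle-id"
    static String computeAddress(String httpMethod, String path) {
        StringBuilder address = new StringBuilder(httpMethod);
        for (String segment : path.split("/")) {
            if (segment.isEmpty()) {
                continue; // skip the empty segment before the leading '/'
            }
            if (segment.startsWith("{") && segment.endsWith("}")) {
                // path parameter: strip the braces and use '-' instead of '_'
                segment = segment.substring(1, segment.length() - 1).replace('_', '-');
            }
            address.append('_').append(segment);
        }
        return address.toString();
    }
}
```

Whichever naming scheme is in effect, your service verticles simply register EventBus consumers on those addresses.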


Vert.x and Swagger are great tools for building and documenting an API, but using both in the same project can be painful. The Vert.x-Swagger project was made to save that time, letting developers focus on business code. It can be seen as an API framework over Vert.x.

You can also use the SwaggerRouter in your own project without using Swagger-Codegen.

In future releases, more information from the Swagger file will be used to configure the router, and other languages will certainly be supported.

Although Vert.x is polyglot, the Vert.x-Swagger project currently supports only Java. If you want to contribute support for more languages, you’re welcome :)

Thanks for reading.

by phiz71 at May 22, 2017 12:00 AM

JBoss Tools and Red Hat Developer Studio Maintenance Release for Eclipse Neon.3

by jeffmaury at May 19, 2017 05:10 PM

JBoss Tools 4.4.4 and Red Hat JBoss Developer Studio 10.4 for Eclipse Neon.3 are here waiting for you. Check it out!



JBoss Developer Studio comes with everything pre-bundled in its installer. Simply download it from our JBoss Products page and run it like this:

java -jar jboss-devstudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) JBoss Developer Studio require a bit more:

This release requires at least Eclipse 4.6.3 (Neon.3), but we recommend using the latest Eclipse 4.6.3 Neon JEE bundle, since it comes with most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat JBoss Developer Studio".

For JBoss Tools, you can also use our update site directly.

What is new?

Our main focus for this release was improvements for container based development and bug fixing.

Improved OpenShift 3 and Docker Tools

We continue to work on providing better experience for container based development in JBoss Tools and Developer Studio. Let’s go through a few interesting updates here.

OpenShift Server Adapter enhanced flexibility

OpenShift server adapter is a great tool that allows developers to synchronize local changes in the Eclipse workspace with running pods in the OpenShift cluster. It also allows you to remote debug those pods when the server adapter is launched in Debug mode. The supported stacks are Java and NodeJS.

As pods are ephemeral OpenShift resources, the server adapter definition was based on an OpenShift service resource, and the pods were then dynamically computed from the service selector.

This had a major drawback: the feature could be used only for pods that are part of a service, which may be logical for web-based applications, as a route (and thus a service) is required in order to access the application.

So, it is now possible to create a server adapter from the following OpenShift resources:

  • service (as before)

  • deployment config

  • replication controller

  • pod

If a server adapter is created from a pod, it will be created from the associated OpenShift resource, in the preferred order:

  • service

  • deployment config

  • replication controller

As the OpenShift explorer used to display only OpenShift resources linked to a service, it has been enhanced as well: it now displays resources linked to a deployment config or replication controller. Here is an example of a deployment with no service, i.e. only a deployment config:

server adapter enhanced

So, as an OpenShift server adapter can be created from different kinds of resources, the kind of the associated resource is displayed when creating the OpenShift server adapter:

server adapter enhanced1

Once created, the kind of OpenShift resource backing the adapter is also displayed in the Servers view:

server adapter enhanced2

This information is also available from the server editor:

server adapter enhanced3

Security vulnerability fixed in certificate validation database

When you use the OpenShift tooling to connect to an OpenShift API server, the certificate of the OpenShift API server is first validated. If the issuer authority is a known one, the connection is established. If the issuer is unknown, a validation dialog is first shown to the user with the details of the OpenShift API server certificate as well as the details of the issuer authority. If the user accepts it, the connection is established. There is also an option to store the certificate in a database, so that the next time a connection is attempted to the same OpenShift API server, the certificate will be considered valid and no validation dialog will be shown again.

certificate validation dialog

We found a security vulnerability: the certificate was stored incorrectly. It was only partially stored (not all attributes were kept), so a different certificate could be interpreted as validated when it should not be.

We had to change the format of the certificate database. As the certificates stored in the previous database were not entirely stored, there was no way to provide a migration path. As a result, after the upgrade, the certificate database will be empty, so if you had previously accepted some certificates, you will need to accept them again and refill the certificate database.
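To illustrate why partial storage is dangerous (this is an illustration, not the actual JBoss Tools code): two different certificates can share individual attributes such as the subject, so a lookup keyed on a subset of attributes can wrongly match, whereas a digest of the full encoded certificate cannot:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class CertCheckSketch {
    // Naive scheme: treat a certificate as "already accepted" if the
    // stored subject matches, ignoring every other attribute.
    static boolean naiveMatch(String storedSubject, String presentedSubject) {
        return storedSubject.equals(presentedSubject);
    }

    // Safer scheme: fingerprint the full encoded certificate, so any
    // difference anywhere in the certificate changes the result.
    static byte[] fingerprint(byte[] encodedCert) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(encodedCert);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }
}
```

With the naive scheme, an attacker presenting a different certificate with the same subject would be accepted silently; with the full fingerprint, it would trigger the validation dialog again.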

CDK 3 Server Adapter

The CDK 3 server adapter has been around for quite a long time. It used to be Tech Preview, as CDK 3 was not officially released, but it is now officially available. While the server adapter itself has limited functionality, it is able to start and stop the CDK virtual machine via its minishift binary. Simply hit Ctrl+3 (Cmd+3 on OS X) and type CDK; that will bring up a command to set up and/or launch the CDK server adapter. You should see the old CDK 2 server adapter along with the new CDK 3 one (labeled Red Hat Container Development Kit 3).

cdk3 server adapter5

All you have to do is set the credentials for your Red Hat account, the location of the CDK’s minishift binary file, and the type of virtualization hypervisor.

cdk3 server adapter1

Once you’re finished, a new CDK Server adapter will then be created and visible in the Servers view.

cdk3 server adapter2

Once the server is started, Docker and OpenShift connections should appear in their respective views, allowing the user to quickly create a new OpenShift application and begin developing their AwesomeApp in a highly replicable environment.

cdk3 server adapter3
cdk3 server adapter4

OpenShift Container Platform 3.5 support

OpenShift Container Platform (OCP) 3.5 has been announced by Red Hat. JBossTools 4.4.4.Final has been validated against OCP 3.5.

OpenShift server adapter extensibility

The OpenShift server adapter has long supported EAP/WildFly and NodeJS based deployments. It does a great deal of work synchronizing local workspace changes to remote deployments on OpenShift, which have been standardized through image metadata (labels). But each runtime has its own specifics; for example, WildFly/EAP deployments require that a re-deploy trigger be sent after the files have been synchronized.

In order to reduce the technical debt and allow support for other runtimes (there are lots of them in the microservices world), we have refactored the OpenShift server adapter so that each runtime’s specifics are now isolated, making it easy and safe to add support for new runtimes.

For a full in-depth description, see the following wiki page.

Pipeline builds support

Pipeline-based builds are now supported by the OpenShift tooling. When creating an application from a template, if one of the builds is pipeline-based, you can view the details of the pipeline:

pipeline wizard

When your application is deployed, you can see the details of the build configuration for the pipeline based builds:

pipeline details

More to come as we are improving the pipeline support in the OpenShift tooling.

Update of Docker Client

The underlying com.spotify.docker.client plug-in used to access the Docker daemon has been upgraded to 3.6.8.

Run Image Network Support

A new page has been added to the Docker Run Image Wizard and Docker Run Image Launch configuration that allows the end user to specify the network mode to use. A user can choose from Default, Bridge, Host, None, Container, or Other. If Container is selected, the user must choose an active container whose network will be shared. If Other is specified, a named network can be used.

Network Mode
Network Mode Configuration

Refresh Connection

Users can now refresh the entire connection from the Docker Explorer View. Refresh can be performed two ways:

  1. using the right-click context menu from the Connection

  2. using the Refresh menu button when the Connection is selected

Refresh Connection

Server Tools

API Change in JMX UI’s New Connection Wizard

While hardly something most users will care about, extenders may need to be aware that the API for adding connection types to the 'New JMX Connection' wizard in the 'JMX Navigator' has changed. Specifically, the relevant extension point has been changed: while it previously had a child element called 'wizardPage', it now requires a 'wizardFragment'.

A 'wizardFragment' is part of the 'TaskWizard' framework first used in WTP’s ServerTools, which has for many years been used throughout JBoss Tools. This framework allows wizard workflows where the set of pages to be displayed can change based on the selections made on previous pages.

This change was made as a direct result of a bug caused by the addition of the Jolokia connection type in which some standard workflows could no longer be completed.

This change only affects adopters and extenders, and should have no noticeable impact for the user, other than that the below bug has been fixed.

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.

Hibernate Runtime Provider Updates

The Hibernate 5.0 runtime provider now incorporates Hibernate Core version 5.0.12.Final and Hibernate Tools version 5.0.5.Final.

The Hibernate 5.1 runtime provider now incorporates Hibernate Core version 5.1.4.Final and Hibernate Tools version 5.1.3.Final.

The Hibernate 5.2 runtime provider now incorporates Hibernate Core version 5.2.8.Final and Hibernate Tools version 5.2.2.Final.

Forge Tools

Forge Runtime updated to 3.6.1.Final

The included Forge runtime is now 3.6.1.Final. Read the official announcement here.


What is next?

With JBoss Tools 4.4.4 and Developer Studio 10.4 out, we are already working on the next release, for Eclipse Oxygen.


Jeff Maury

by jeffmaury at May 19, 2017 05:10 PM

N4JS Becomes an Eclipse Project

by Brian Smith ( at May 19, 2017 03:34 PM

We’re proud to announce that N4JS has been accepted as an Eclipse Project and the final official steps are underway. Our team have been working very hard to wrap up the Initial Contribution and are excited to be part of Eclipse. The project will be hosted at, although this currently redirects to the project description while our pages are being created. In the meantime, N4JS is already open source - our GitHub project pages are located at which contains articles, documentation, the source for N4JS and more.

Some background information about us:
N4JS was developed by Enfore AG, founded in 2009 as NumberFour AG by Marco Boerries. Enfore’s goal is to build an open business platform for 200+ million small businesses and to provide those businesses with the tools and solutions they need to stay competitive in a connected world.

Initially, JavaScript was intended as the main language for third-party developers to contribute to our platform; it runs directly in the browser and it’s the language of the web! One major drawback is the absence of a static type system; this turned out to be an essential requirement for us. We wanted to ensure reliable development of our platform and our own applications, as well as making life easier for third-party contributors to the Enfore platform. That’s the reason why we developed N4JS, a general-purpose programming language based on ECMAScript 5 (commonly known as JavaScript). The language combines the dynamic aspects of JavaScript with the strengths of Java-like types to facilitate the development of flexible and reliable applications.

N4JS is constantly growing to support many new modern language features as they become available. Some of the features already supported are concepts introduced in ES6 including arrow functions, async/await, modules and much more. Our core team are always making steady improvements and our front end team make use of the language and IDE daily for their public-facing projects. For more information on how the N4JS language differs from other JavaScript variants introducing static typing, see our detailed FAQ.

Why Eclipse?
For us, software development is much more than simply writing code, which is why we believe in IDEs and Eclipse in particular. We were looking for developer tools which leverage features like live code validation, content assist (aka code completion), quick fixes, and a robust testing framework. Contributors to our platform can benefit from these resources for their own safe and intuitive application development.

We tried very hard to design N4JS so that Java developers feel at home when writing JavaScript without sacrificing JavaScript’s support for dynamic and functional features. Our vision is to provide an IDE for statically-typed JavaScript that feels just like JDT. This is why we strongly believe that N4JS could be quite interesting in particular for Eclipse (Java) developers. Aside from developers who are making use of N4JS, there are areas in the development of N4JS itself which would be of particular interest to committers versed in type theory, semantics, EMF, Xtext and those who generally enjoy solving the multitude of challenges involved in creating new programming languages.

What’s next?
While we are moving the project to Eclipse, there are plenty of important checks that must be done by the Eclipse Intellectual Property Team. The Initial Contribution is under review with approximately thirty Contribution Questionnaires created. This is a great milestone for us and reflects the huge effort involved in the project to date. We look forward to joining Eclipse, taking part in the ecosystem in an official capacity and seeing what the community can do with N4JS. While we complete these final requirements, we want to extend many thanks to all at Eclipse who are helping out with the process so far!

by Brian Smith ( at May 19, 2017 03:34 PM

Open Testbeds, DB Case Study, and IoT Events

by Roxanne on IoT at May 19, 2017 01:02 PM

The Eclipse IoT community has been working hard on some pretty awesome things over the past few months! Here is a quick summary of what has been happening.

Open Testbeds

We recently announced the launch of Eclipse IoT Open Testbeds. Simply put, they are collaborations between vendors and open source communities that aim to demonstrate and test commercial and open source components needed to create specific industry solutions.

The Asset Tracking Management Testbed is the very first one! It is a collaboration between Azul Systems, Codenvy, Eurotech, Red Hat, and Samsung’s ARTIK team. It demonstrates how assets with various sensors can be tracked in real-time, in order to minimize the cost of lost or damaged parcels. You can learn more about the Eclipse IoT Open Testbeds here.

Watch Benjamin Cabé present the Asset Tracking testbed demo in the video below. It was recorded at the Red Hat Summit in Boston this month.⬇

Case Study

We have been working with Deutsche Bahn (DB) and DB Systel to create a great case study that demonstrates how open source IoT technology is being used on their German railway system. They are currently using two Eclipse IoT projects, Eclipse Paho and Eclipse Mosquitto, among other technologies. In other words, if you’ve taken a DB train in Germany, you might have witnessed the “invisible” work of Eclipse IoT technology at the station or on board. How awesome is that?!

Case Study — Eclipse IoT and DB

Upcoming IoT Events

I am currently working on the organization of two upcoming Eclipse IoT Days that will take place in Europe this fall! 🍂 🍁 🍃 We are currently accepting talks for both events. Go on, submit your passion! I am excited to read your proposal :)

Eclipse IoT Day @ Thingmonk
September 11 | London, UK
📢 Email us your proposal iot at eclipse dot org

Eclipse IoT Day @ EclipseCon Europe
October 24 | Ludwigsburg, Germany
📢 Propose a talk

I look forward to meeting you in person at both events!

— Roxanne (Yes, I decided to sign this blog post.)

by Roxanne on IoT at May 19, 2017 01:02 PM

Installing Red Hat Developer Studio 10.2.0.GA through RPM

by jeffmaury at May 19, 2017 12:23 PM

With the release of Red Hat JBoss Developer Studio 10.2, it is now possible to install Red Hat JBoss Developer Studio as an RPM. It is available as a tech preview. The purpose of this article is to describe the steps you should follow in order to install Red Hat JBoss Developer Studio.

Red Hat Software Collections

The JBoss Developer Studio RPM relies on Red Hat Software Collections. You don’t need to install Red Hat Software Collections, but you do need to enable the Red Hat Software Collections repositories before you start the installation of Red Hat JBoss Developer Studio.

Enabling the Red Hat Software Collections base repository

The identifier for the repository is rhel-server-rhscl-7-rpms on Red Hat Enterprise Linux Server and rhel-workstation-rhscl-7-rpms on Red Hat Enterprise Linux Workstation.

The command to enable the repository on Red Hat Enterprise Linux Server is:

sudo subscription-manager repos --enable rhel-server-rhscl-7-rpms

The command to enable the repository on Red Hat Enterprise Linux Workstation is:

sudo subscription-manager repos --enable rhel-workstation-rhscl-7-rpms

For more information, please refer to the Red Hat Software Collections documentation.

JBoss Developer Studio repository

As this is a tech preview, you need to manually configure the JBoss Developer Studio repository.

Create a file /etc/yum.repos.d/rh-eclipse46-devstudio.repo with the following content:


Install Red Hat JBoss Developer Studio

You’re now ready to install Red Hat JBoss Developer Studio through RPM.

Enter the following command:

sudo yum install rh-eclipse46-devstudio

Answer 'y' when the transaction summary is ready, to continue the installation.

Answer 'y' one more time when you see the request to import the GPG public key:

Public key for rh-eclipse46-devstudio .rpm is not installed
      Retrieving key from
      Importing GPG key 0xA5787476:
       Userid     : "Red Hat, Inc. (development key) <>"
       Fingerprint: 2d6d 2858 5549 e02f 2194 3840 08b8 71e6 a578 7476
       From       :
      Is this ok [y/N]:

After all required dependencies have been downloaded and installed, Red Hat JBoss Developer Studio is available on your system through the standard update channel!

You should see messages like the following:

rh eclipse46 devstudio.log

Launch Red Hat JBoss Developer Studio

From the system menu, mouse over the Programming menu, and the Red Hat Eclipse menu item will appear.

programming menu

Select this menu item, and the Red Hat JBoss Developer Studio user interface will appear:



Jeff Maury

by jeffmaury at May 19, 2017 12:23 PM

EcoreTools: user experience revamped thanks to Sirius 5.0

by Cédric Brun ( at May 19, 2017 12:00 AM

Every year, the Eclipse M7 milestone acts as a very strong deadline for the projects that are part of the release train: it’s then time for polishing and refining!

When your company is responsible for a number of inter-dependent projects (some of them core technologies like EMF Services and the GMF Runtime, others user-facing tools like Acceleo, Sirius or EcoreTools, plus packaging- and integration-oriented projects like Amalgam or the Eclipse Packaging Project) and all of these releases need to be coordinated, then May is a busy month.

I’m personally involved in EcoreTools, which puts me in the position of a consumer of the other technologies, and my plan for Oxygen was to make use of the Property Views support included in Sirius. This support allows me, as the maintainer of EcoreTools, to specify every tab displayed in the properties view directly through the .odesign file. Just like the rest of Sirius, it is 100% dynamic: no need for code generation or compilation, and complete flexibility thanks to the ability to use queries in every part of the definition.

Before Oxygen, EcoreTools already had property editors. Some of them were coded by hand and were developed more than 8 years ago; when I replaced the legacy modeler with Sirius, I made sure to reuse those highly tuned property editors. Others I generated using the first generation of the EEF framework, so that I could cover every type in Ecore and benefit from the dialogs to edit properties using double-click. The intent at that time was to make the modeler usable in fullscreen, when no other view is visible.

Because of this requirement, I had to wait for the Sirius team to work its magic: the properties views support was ready for production with Sirius 4.1, but it did not yet include any support for dialogs and wizards.

Then magic happened: the support for dialogs and wizards is now completely merged in Sirius, starting with M7. In EcoreTools, the code responsible for those property editors represents more than 70% of the total code base, which peaks at 28K lines.

Lines of Java code subject to deletion in EcoreTools

In gray are the plugins which are subject to removal once I use this new feature; as a developer, one can only rejoice at the idea of deleting so much code!

I went ahead and started working on this. The schedule was tight, but thanks to the ability to define reflective rules using Dynamic Mappings, I could quickly cover everything in Ecore and get those new dialogs working.

New vs old dialogs

Just by using a dozen reflective rules and adding specific Pages or Widgets when needed.

The tooling definition in ecore.odesign

It went so fast that I could add new tools for the generation settings through a specific tab.

Genmodel properties exposed through a specific tab

And even introduce a link to directly navigate to the Java code generated from the model:

Link opening the corresponding generated Java code.

Even support for EAnnotations could be implemented in a nice way:

Tab to add, edit or delete any EAnnotation

As a tool provider, I could focus on streamlining the experience: providing tabs and actions so that the end user doesn’t have to leave the modeler to adapt the generation settings or launch the code generation, and giving visual clues when something is invalid. I went through many variants of these UIs just to get the feel of them; as I get instant feedback, I only need minutes to rule out an option. I have a whole new dimension I can use to make my tool super effective.

This is what Sirius is about, empowering the tool provider to focus on the user experience of its users.

It is just one of the many changes we have been working on since last year to improve the user experience of modeling tools. Mélanie and Stéphane will present a talk on this very subject during EclipseCon France in Toulouse: “All about UX in Sirius”.

All of these changes are landing in Eclipse Oxygen starting with M7. As they are newly introduced, I have no doubt I’ll have some polishing and refining to do, and I’m counting on you to report anything suspicious.

EcoreTools: user experience revamped thanks to Sirius 5.0 was originally published by Cédric Brun at CTO @ Obeo on May 19, 2017.

by Cédric Brun ( at May 19, 2017 12:00 AM

Case Study: Deploying Eclipse IoT on Germany's DB Railway System

May 18, 2017 08:55 AM

We worked with Deutsche Bahn (DB) to find out how they use Eclipse IoT technology on their railway system!

May 18, 2017 08:55 AM

New blog location

by Kim Moir ( at May 17, 2017 09:12 PM

I moved my blog to WordPress.

New location is here

by Kim Moir ( at May 17, 2017 09:12 PM

What can Eclipse developers learn from Team Sky’s aggregation of marginal gains?

by Tracy M at May 17, 2017 01:36 PM

The concept of marginal gains, made famous by Team Sky, has revolutionized some sports. The principle is that if you make 1% improvements in a number of areas, the cumulative gains will, in the long run, be hugely significant. In the same vein, a 1% decline here and there will lead to significant problems further down the line.
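The arithmetic behind the principle is simple compounding. A quick sketch in Java (the 1%-per-step figures are illustrative, not Team Sky’s actual numbers):

```java
public class MarginalGains {
    // Repeated small gains (or declines) compound multiplicatively:
    // final multiplier = (1 + gainPerStep) ^ steps
    static double compound(double gainPerStep, int steps) {
        return Math.pow(1.0 + gainPerStep, steps);
    }
}
```

A 1% improvement every day for a year multiplies performance by roughly 37x, while a 1% daily decline shrinks it to under 3% of the starting point.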

So how could we apply that principle to the user experience (UX) of Eclipse C/C++ Development (CDT) tools? What would happen if we continuously improved lots of small things in Eclipse CDT? Such as the build console speed? Or a really annoying message in the debugger source window? It is still too soon to analyse the impact of these changes but we believe even the smallest positive change will be worth it. Plus it is a great way to get new folks involved with the project. Here’s a guest post from Pierre Sachot, a computer science student at IUT Blagnac who is currently doing open-source work experience with Kichwa Coders. Pierre has written an experience report on fixing his very first CDT UX issue.


This week I worked with Yannick on fixing the CDT CSourceNotFoundEditor problem: the unwanted error message that Eclipse CDT shows when users run the debugger and jump into a function which is in another project file. When Eclipse CDT users were running the debugger on a C project, a window would open on screen. This window was both alarming in appearance and obtrusive. In addition, the message itself was unclear. For example, it could display “No source available for 0x02547”, which is irrelevant to the user because he/she does not have access to this memory address. Several users had complained about it and expressed a desire to disable the window (see: stack overflow: “Eclipse often opens editors for hex numbers (addresses?) then fails to load anything”). In this post I will show you how we replaced CSourceNotFoundEditor with a better user experience.

Problem description:

1- The problem we faced was that CSourceNotFoundEditor was displayed on several occasions. For example:

  • When the source file was not found
  • When the memory address was known but not the function name
  • When the function name was known

2- We also wanted to tackle that red link! Red lettering is synonymous with big problems – yet the error message was merely informing the user that the source could not be found, so we felt a less alarmist style of text would be more appropriate.

CSourceNotFoundEditor Dialog:

(screenshots: previous version vs. new version)

CSourceNotFoundEditor Preferences:

(screenshots: previous version vs. new version)

How to resolve the problem?


CSourceNotFoundEditor is the class called by the openEditor() function. Yannick added a link to the debug preferences page inside it:

  • The first thing to do was to create the “Preferences…” button and a text to go with it. Yannick did this in the createButtons() function.
  • Next, we made it possible for the user to open the Preferences on the correct page – in our case, the Debug page – using this code:
PreferencesUtil.createPreferenceDialogOn(parent.getShell(), "org.eclipse.cdt.debug.ui.CDebugPreferencePage", null, null).open();

“org.eclipse.cdt.debug.ui.CDebugPreferencePage” is the ID of the debug preference page we want to open.


This class, CDebugPreferencePage, is the one which contains the debug preferences page. I set about modifying it so that the CSourceNotFound preferences could be reset and access to them enabled. This included declaring the String values of the buttons and using them. The last thing we did was to create a global value in CCorePreferenceConstants to get and set the display preferences. This we did in 4 stages:

  • First we created a group for the radio buttons. This is in the function createContents().
  • Second we created the variable intended to store the preference value. This value is a String stored in the CCorePreferenceConstants class. To get a preference String value, you need to use:
DefaultScope.INSTANCE.getNode(CDebugCorePlugin.PLUGIN_ID).get(CCorePreferenceConstants.YOUR_PREFERENCE_NAME, null);

And to store it:

InstanceScope.INSTANCE.getNode(CCorePlugin.PLUGIN_ID).put(CCorePreferenceConstants.YOUR_PREFERENCE_NAME, "Your text");

Here we created a preference named SHOW_SOURCE_NOT_FOUND_EDITOR which can take 3 values, defined at the beginning of the CDebugPreferencePage class:

/**
 * Use to display by default the source not found editor
 * @since 6.3
 */
public static final String SHOW_SOURCE_NOT_FOUND_EDITOR_DEFAULT = "all_time"; //$NON-NLS-1$

/**
 * Use to display all the time the source not found editor
 * @since 6.3
 */
public static final String SHOW_SOURCE_NOT_FOUND_EDITOR_ALL_THE_TIME = "all_time"; //$NON-NLS-1$

/**
 * Use to display sometimes the source not found editor
 * @since 6.3
 */
public static final String SHOW_SOURCE_NOT_FOUND_EDITOR_SOMETIMES = "sometimes"; //$NON-NLS-1$

/**
 * Use to never display the source not found editor
 * @since 6.3
 */
public static final String SHOW_SOURCE_NOT_FOUND_EDITOR_NEVER = "never"; //$NON-NLS-1$
  • Third, we needed to find where to set the values and where to get them. To set the values on your components, use the setValues() function. To store a value, add your code in storeValues(); as its name suggests, it will store the value inside the global preferences variable.
  • The fourth and final stage is really important: you need to put the default value of the preference you want to add in setDefaultValues(), to allow access to the original value of the preferences.


This is the class which calls CSourceNotFoundEditor, so here, in the openEditor() function, we needed to check the preference options in order to know whether to display CSourceNotFoundEditor. These checks need to be carried out in openEditor() because this is the function which opens the editor. To do that, we distinguished the cases:

  • First case, in which the user wants to display the editor all the time
  • Second case, in which the user only wants to display it if the source file is not found
  • The last case (“never”) needs no explicit check, because nothing is done in this case.

To do that, we did it like this:
(code screenshot: how to display CSourceNotFoundEditor)
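The decision logic described above can be sketched in plain Java like this. Note that the class and method names here are hypothetical, chosen only for illustration; the actual CDT code is structured differently, but the preference values mirror those defined in CDebugPreferencePage:

```java
// Hypothetical sketch of the preference check performed before opening
// CSourceNotFoundEditor. Names are illustrative, not the actual CDT code.
public class SourceNotFoundPolicy {

    // preference values as defined in CDebugPreferencePage
    public static final String ALL_THE_TIME = "all_time";
    public static final String SOMETIMES = "sometimes";
    public static final String NEVER = "never";

    // Returns true if CSourceNotFoundEditor should be opened.
    public static boolean shouldOpenEditor(String preference, boolean sourceFileFound) {
        if (ALL_THE_TIME.equals(preference)) {
            return true; // first case: the user always wants the editor
        }
        if (SOMETIMES.equals(preference)) {
            return !sourceFileFound; // second case: only if the source was not found
        }
        return false; // "never": nothing is done
    }
}
```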


Now users have the capacity to disable the CSourceNotFoundEditor window altogether or to choose for themselves when to display it, saving time and improving the user experience of the Eclipse debugger. This is a great example of how working on an open source project can really benefit a whole community of users. But, a word of warning: the CDT project isn't the easiest program to develop for or the easiest to master. You need to understand other people's code, and if you change it you need to retain its original logic and style. Fiddly perhaps, but well worth it! The user community will appreciate your efforts, and future coding work will flow more smoothly and efficiently. A better user experience for everyone.

by Tracy M at May 17, 2017 01:36 PM

EclipseCon Europe 2017 | Call for Papers Open

May 17, 2017 01:29 PM

Submissions are now open for EclipseCon Europe 2017, October 24 - 26, in Ludwigsburg, Germany.

May 17, 2017 01:29 PM

Theia – One IDE For Desktop & Cloud

by Sven Efftinge at May 17, 2017 11:54 AM

Today, I want to point you at a GitHub repository we have been contributing to for the last couple of weeks. Theia is a collaborative and open effort to build a new IDE framework in TypeScript.

“Yet another IDE?”, you might think. Let me explain the motivation behind it and how its scope is unique compared to existing open-source projects.

Single-Sourcing Desktop & Browser (Cloud) Tools

Let’s start with the unique selling point: Theia targets IDEs that should run as native desktop applications (using Electron) as well as in modern browsers (e.g. Chrome).

So you would build one application and run it in both contexts. Theia even supports a third mode, which is a native desktop app connecting to a remote workspace. No matter if you target primarily desktop or cloud, you can leverage the goodness of web technology and will be well prepared for the future. Although implemented using web technologies, neither VSCode nor Atom support execution in a browser with a remote backend.


Theia is an open framework that allows users to compose and tailor their Theia-based applications as they want. Any functionality is implemented as an extension, so it is using the same APIs a third-party extension would use. Theia uses the dependency injection framework Inversify.js to compose and configure the frontend and backend application, which allows for fine-grained control of any used functionality.

Since in Theia there is no two-class treatment between core code and extensions, any third-party code runs in the main application processes with the same rights and responsibilities the core application has. This is a deliberate decision to support building products based on Theia.

Dock Layout

Theia focuses on IDE-like applications. That includes developer tools but extends to all kinds of software tools for engineers. We think splitting an editor alone is not enough. For such applications, you want to allow representing data in different ways (not only textually) and give the user more freedom in how to use the screen real estate.

Theia uses the layout manager library phosphor.js. It supports side panels similar to what JetBrains’ products do and allows the user to lay out editors and views as they want in the main area.


Language Server Protocol

Another goal of this effort is to reuse existing components when sensible. The language server protocol (LSP) is, therefore, an important, central concept. Theia uses Microsoft’s Monaco code editor, for which I already found some positive words last week. That said, Theia has a thin generic editor API that shields extensions from using Monaco-specific APIs for the most common tasks. Also, other components, like Eclipse Orion’s code editor, could be utilized as the default editor implementation in Theia as well.

To showcase the LSP support, Theia comes with Eclipse’s Java Language Server, which also nicely shows how to add protocol extensions. For instance, the Java LS has a particular URI scheme to open source files from referenced jars, which Theia supports.



TypeScript

The JavaScript (JS) language is evolving, but the different targeted platforms lag behind. The solution to this is to write code in tomorrow’s language and then use a transpiler to ‘down-level’ the source code to what the targeted platforms require. The two popular transpilers are Babel and TypeScript. In contrast to Babel, which supports the latest versions of JavaScript (ECMAScript), TypeScript goes beyond that and adds a static type system on top.

Furthermore, the TypeScript compiler exposes language services to provide advanced tool support, which is crucial to read and maintain larger software systems. It allows navigating between references and declarations, gives you smart completion proposals and much more. Finally, we are not the only ones believing TypeScript is an excellent choice (read ‘Why TypeScript Is Growing More Popular’).

Status Quo & Plans

Today we have the basic architecture in place and know how extensions should work. In the Theia repository, there are two examples (one runs in a browser, the other on Electron), which you can try yourself. They allow you to navigate within your workspace and open files in code editors. We also have a command registry with the corresponding menu and keybinding services. Depending on whether you run in Electron or a browser, the menus will be rendered natively (Electron) or using HTML. The language server protocol is working well, and there are two language servers integrated already: Java and Python. We are going to wrap the TypeScript language service in the LSP, so we can start using Theia to implement Theia. Furthermore, a terminal gives you access to the workspace’s shell.

Don’t treat this as anything like a release as this is only the beginning. But we have laid out a couple of important fundamentals and now is a good time to make it public and get more people involved. The CDT team from Ericsson have already started contributing to Theia and more parties will join soon.

Theia might not be ready for production today, but if you are starting a new IDE-like product or looking into migrating the UI technology of an existing one (e.g. Eclipse-based), Theia is worth considering. Let me know what you think or whether you have any questions.

by Sven Efftinge at May 17, 2017 11:54 AM

OSGi Event Admin – Publish & Subscribe

by Dirk Fauth at May 16, 2017 06:49 AM

In this blog post I want to write about the publish & subscribe mechanism in OSGi, provided via the OSGi Event Admin Service. Of course I will show this in combination with OSGi Declarative Services, because this is the technology I currently like very much, as you probably know from my previous blog posts.

I will start with some basics and then show an example as usual. At last I will give some information about how to use the event mechanism in Eclipse RCP development, especially related to the combination between OSGi services and the GUI.

If you want to read further details on the Event Admin Service Specification have a look at the OSGi Spec. In Release 6 it is covered in the Compendium Specification Chapter 113.

Let’s start with the basics. The Event Admin Service is based on the Publish-Subscribe pattern. There is an event publisher and an event consumer. Both do not know each other in any way, which provides a high decoupling. Simplified you could say, the event publisher sends an event to a channel, not knowing if anybody will receive that event. On the other side there is an event consumer ready to receive events, not knowing if there is anybody available for sending events. This simplified view is shown in the following picture:


Technically both sides are using the Event Admin Service in some way. The event publisher uses it directly to send an event to the channel. The event consumer uses it indirectly by registering an event handler to the EventAdmin to receive events. This can be done programmatically. But with OSGi DS it is very easy to register an event handler by using the whiteboard pattern.


An Event object has a topic and some event properties. It is an immutable object to ensure that every handler gets the same object with the same state.

The topic defines the type of the event and is intended to serve as first-level filter for determining which handlers should receive the event. It is a String arranged in a hierarchical namespace. And the recommendation is to use a convention similar to the Java package name scheme by using reverse domain names (fully/qualified/package/ClassName/ACTION). Doing this ensures uniqueness of events. This is of course only a recommendation and you are free to use pseudo class names to make the topic better readable.

Event properties are used to provide additional information about the event. The key is a String and the value can be technically any object. But it is recommended to only use String objects and primitive type wrappers. There are two reasons for this:

  1. Other types cannot be passed to handlers that reside external from the Java VM.
  2. Other classes might be mutable, which means any handler that receives the event could change values. This breaks the immutability rule for events.
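To illustrate why mutable property values are problematic, here is a small plain-Java sketch (this is not the actual org.osgi.service.event.Event implementation, just an illustration) showing how an immutable event protects itself with a defensive copy of the properties:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: an immutable event copies the given properties,
// so later modifications by the sender are not visible to any handler, and
// the unmodifiable view prevents handlers from changing values themselves.
final class ImmutableEvent {

    private final String topic;
    private final Map<String, Object> properties;

    ImmutableEvent(String topic, Map<String, Object> properties) {
        this.topic = topic;
        // defensive copy: detach from the caller's (possibly mutable) map
        this.properties = Collections.unmodifiableMap(new HashMap<>(properties));
    }

    String getTopic() {
        return topic;
    }

    Object getProperty(String key) {
        return properties.get(key);
    }
}
```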

Common Bundle

It is some kind of best practice to place common stuff in a common bundle on which both the event publisher bundle and the event consumer bundle can depend. In our case this will only be the definition of the supported topics and property keys in a constants class, to ensure that both implementations share the same definition without being dependent on each other.

  • Create a new project org.fipro.mafia.common
  • Create a new package org.fipro.mafia.common
  • Create a new class MafiaBossConstants
public final class MafiaBossConstants {

    private MafiaBossConstants() {
        // private default constructor for constants class
        // to avoid that someone extends the class
    }

    public static final String TOPIC_BASE = "org/fipro/mafia/Boss/";
    public static final String TOPIC_CONVINCE = TOPIC_BASE + "CONVINCE";
    public static final String TOPIC_ENCASH = TOPIC_BASE + "ENCASH";
    public static final String TOPIC_SOLVE = TOPIC_BASE + "SOLVE";
    public static final String TOPIC_ALL = TOPIC_BASE + "*";

    public static final String PROPERTY_KEY_TARGET = "target";
}

  • PDE
    • Open the MANIFEST.MF file and on the Overview tab set the Version to 1.0.0 (remove the qualifier).
    • Switch to the Runtime tab and export the org.fipro.mafia.common package.
    • Specify the version 1.0.0 on the package via Properties…
  • Bndtools
    • Open the bnd.bnd file
    • Add the package org.fipro.mafia.common to the Export Packages

In MafiaBossConstants we specify the topic base with a pseudo class org.fipro.mafia.Boss, which results in the topic base org/fipro/mafia/Boss. We specify action topics that start with the topic base and end with the actions CONVINCE, ENCASH and SOLVE. And additionally we specify a topic that starts with the base and ends with the wildcard ‘*’.

These constants will be used by the event publisher and the event consumer soon.

Event Publisher

The Event Publisher uses the Event Admin Service to send events synchronously or asynchronously. Using DS this is pretty easy.

We will create an Event Publisher based on the idea of a mafia boss. The boss simply commands a job execution and does not care who is doing it. Also it is not of interest if there are many people doing the same job. The job has to be done!

  • Create a new project org.fipro.mafia.boss
  • PDE
    • Open the MANIFEST.MF file of the org.fipro.mafia.boss project and switch to the Dependencies tab
    • Add the following dependencies on the Imported Packages side:
      • org.fipro.mafia.common (1.0.0)
      • org.osgi.service.component.annotations (1.3.0)
      • org.osgi.service.event (1.3.0)
    • Mark org.osgi.service.component.annotations as Optional via Properties…
    • Add the upper version boundaries to the Import-Package statements.
  • Bndtools
    • Open the bnd.bnd file of the org.fipro.mafia.boss project and switch to the Build tab
    • Add the following bundles to the Build Path
      • org.apache.felix.eventadmin
      • org.fipro.mafia.common

Adding org.osgi.service.event to the Imported Packages with PDE on a current Equinox target will provide a package version 1.3.1. You need to change this to 1.3.0 if you intend to run the same bundle with a different Event Admin Service implementation. In general it is a bad practice to rely on a bugfix version. Especially when thinking about interfaces, as any change to an interface typically is a breaking change.
To clarify the statement above. As the package org.osgi.service.event contains more than just the EventAdmin interface, the bugfix version increase is surely correct in Equinox, as there was probably a bugfix in some code inside the package. The only bad thing is to restrict the package wiring on the consumer side to a bugfix version, as this would restrict your code to only run with the Equinox implementation of the Event Admin Service.

  • Create a new package org.fipro.mafia.boss
  • Create a new class BossCommand
@Component(
    property = {
        "osgi.command.function=boss" },
    service = BossCommand.class)
public class BossCommand {

    @Reference
    EventAdmin eventAdmin;

    @Descriptor("As a mafia boss you want something to be done")
    public void boss(
        @Descriptor("the command that should be executed. "
            + "possible values are: convince, encash, solve")
        String command,
        @Descriptor("who should be 'convinced', "
            + "'asked for protection money' or 'finally solved'")
        String target) {

        // create the event properties object
        Map<String, Object> properties = new HashMap<>();
        properties.put(MafiaBossConstants.PROPERTY_KEY_TARGET, target);
        Event event = null;

        switch (command) {
            case "convince":
                event = new Event(MafiaBossConstants.TOPIC_CONVINCE, properties);
                break;
            case "encash":
                event = new Event(MafiaBossConstants.TOPIC_ENCASH, properties);
                break;
            case "solve":
                event = new Event(MafiaBossConstants.TOPIC_SOLVE, properties);
                break;
            default:
                System.out.println("Such a command is not known!");
        }

        if (event != null) {
            eventAdmin.postEvent(event);
        }
    }
}

The code snippet above uses the annotation @Descriptor to specify additional information for the command. This information will be shown when executing help boss in the OSGi console. To make this work with PDE you need to import the package org.apache.felix.service.command with status=provisional. Because the PDE editor does not support adding additional information to package imports, you need to do this manually in the MANIFEST.MF tab of the Plugin Manifest Editor. The Import-Package header would look like this:

Import-Package: org.apache.felix.service.command;status=provisional;version="0.10.0",

With Bndtools you need to add org.apache.felix.gogo.runtime to the Build Path in the bnd.bnd file so the @Descriptor annotation can be resolved.

There are three things to notice in the BossCommand implementation:

  • There is a mandatory reference to EventAdmin which is required for sending events.
  • The Event objects are created using a specific topic and a Map<String, Object> that contains the additional event properties.
  • The event is sent asynchronously via EventAdmin#postEvent(Event)

The BossCommand will create an event using the topic that corresponds to the given command parameter. The target parameter will be added to a map that is used as event properties. This event will then be sent to a channel via the EventAdmin. In the example we use EventAdmin#postEvent(Event), which sends the event asynchronously. That means we send the event but do not wait until available handlers have finished the processing. If it is required to wait until the processing is done, you need to use EventAdmin#sendEvent(Event), which sends the event synchronously. But sending events synchronously is significantly more expensive, as the Event Admin Service implementation needs to ensure that every handler has finished processing before it returns. It is therefore recommended to prefer asynchronous event processing.
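The difference between the two delivery styles can be illustrated with a small plain-Java analogy. This is not the Event Admin implementation, just a sketch using an executor: post() hands the event to a worker thread and returns immediately, while send() blocks until the handler has finished:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Analogy only: a mini event bus that delivers String "events" to a handler.
class MiniEventBus {

    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    // like EventAdmin#postEvent: fire and forget
    void post(String event, Consumer<String> handler) {
        worker.submit(() -> handler.accept(event));
    }

    // like EventAdmin#sendEvent: wait until the handler has finished
    void send(String event, Consumer<String> handler) {
        try {
            worker.submit(() -> handler.accept(event)).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        }
    }

    void shutdown() {
        worker.shutdown();
    }
}
```

Because send() waits for the handler's completion, it is the more expensive call, mirroring the cost argument above.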

The code snippet uses the Field Strategy for referencing the EventAdmin. If you are using PDE this will work with Eclipse Oxygen. With Eclipse Neon you need to use the Event Strategy. In short, you need to write the bind-event-method for referencing EventAdmin because Equinox DS supports only DS 1.2 and the annotation processing in Eclipse Neon also only supports the DS 1.2 style annotations.

Event Consumer

In our example the boss does not have to tell someone explicitly to do the job. He just mentions that the job has to be done. Let's assume we have a small organization without hierarchies, so we skip the captains etc. and simply implement some soldiers. They are specialized, so we have three soldiers, each listening to one specific topic.

  • Create a new project org.fipro.mafia.soldier
  • PDE
    • Open the MANIFEST.MF file of the org.fipro.mafia.soldier project and switch to the Dependencies tab
    • Add the following dependencies on the Imported Packages side:
      • org.fipro.mafia.common (1.0.0)
      • org.osgi.service.component.annotations (1.3.0)
      • org.osgi.service.event (1.3.0)
    • Mark org.osgi.service.component.annotations as Optional via Properties…
    • Add the upper version boundaries to the Import-Package statements.
  • Bndtools
    • Open the bnd.bnd file of the org.fipro.mafia.soldier project and switch to the Build tab
    • Add the following bundles to the Build Path
      • org.apache.felix.eventadmin
      • org.fipro.mafia.common
  • Create a new package org.fipro.mafia.soldier
  • Create the following three soldiers Luigi, Mario and Giovanni
@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_CONVINCE)
public class Luigi implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Luigi: "
            + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET)
            + " was 'convinced' to support our family");
    }
}

@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_ENCASH)
public class Mario implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Mario: "
            + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET)
            + " paid for protection");
    }
}

@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_SOLVE)
public class Giovanni implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Giovanni: We 'solved' the issue with "
            + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET));
    }
}

Technically we have created special EventHandler for different topics. You should notice the following facts:

  • We are using OSGi DS to register the event handler using the whiteboard pattern. On the consumer side we don’t need to know the EventAdmin itself.
  • We need to implement org.osgi.service.event.EventHandler
  • We need to register for a topic via service property event.topics, otherwise the handler will not listen for any event.
  • Via Event#getProperty(String) we are able to access event property values.

The following service properties are supported by event handlers:

  • event.topics – Specify the topics of interest to an EventHandler service. This property is mandatory.
  • event.filter – Specify a filter to further select events of interest to an EventHandler service. This property is optional.
  • event.delivery – Specify the delivery qualities requested by an EventHandler service. This property is optional.

The property keys and some default keys for event properties are specified in org.osgi.service.event.EventConstants.

Launch the example

Before moving on and explaining further, let’s start the example and verify that each command from the boss is only handled by one soldier.

With PDE perform the following steps:

  • Select the menu entry Run -> Run Configurations…
  • In the tree view, right click on the OSGi Framework node and select New from the context menu
  • Specify a name, e.g. OSGi Event Mafia
  • Deselect All
  • Select the following bundles
    (note that we are using Eclipse Oxygen, in previous Eclipse versions org.apache.felix.scr and org.eclipse.osgi.util are not required)

    • Application bundles
      • org.fipro.mafia.boss
      • org.fipro.mafia.common
      • org.fipro.mafia.soldier
    • Console bundles
      • org.apache.felix.gogo.command
      • org.apache.felix.gogo.runtime
      • org.eclipse.equinox.console
    • OSGi framework and DS bundles
      • org.apache.felix.scr
      • org.eclipse.equinox.ds
      • org.eclipse.osgi
      • org.eclipse.osgi.util
    • Equinox Event Admin
      • org.eclipse.equinox.event
  • Ensure that Default Auto-Start is set to true
  • Click Run

With Bndtools perform the following steps:

  • Open the launch.bndrun file in the org.fipro.mafia.boss project
  • On the Run tab add the following bundles to the Run Requirements
    • org.fipro.mafia.boss
    • org.fipro.mafia.common
    • org.fipro.mafia.soldier
  • Click Resolve to ensure all required bundles are added to the Run Bundles via auto-resolve
  • Click Run OSGi

Execute the boss command to see the different results. This can look similar to the following:

osgi> boss convince Angelo
osgi> Luigi: Angelo was 'convinced' to support our family
boss encash Wong
osgi> Mario: Wong paid for protection
boss solve Tattaglia
osgi> Giovanni: We 'solved' the issue with Tattaglia

Handle multiple event topics

It is also possible to register for multiple event topics. Say Pete is a tough guy who is good in CONVINCE and SOLVE issues. So he registers for those topics.

@Component(
    property = {
        EventConstants.EVENT_TOPIC + "=" + MafiaBossConstants.TOPIC_CONVINCE,
        EventConstants.EVENT_TOPIC + "=" + MafiaBossConstants.TOPIC_SOLVE })
public class Pete implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Pete: I took care of "
            + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET));
    }
}

As you can see the service property event.topics is declared multiple times via the @Component annotation type element property. This way an array of Strings is configured for the service property, so the handler reacts on both topics.

If you execute the example now and call boss convince xxx or boss solve xxx you will notice that Pete is also responding.

It is also possible to use the asterisk wildcard as last token of a topic. This way the handler will receive all events for topics that start with the left side of the wildcard.
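The matching rule can be sketched as follows. This is a simplified re-implementation for illustration only; the real matching is done inside the Event Admin Service implementation:

```java
// Simplified sketch of topic matching with a trailing '*' wildcard.
final class TopicMatcher {

    private TopicMatcher() {
    }

    static boolean matches(String topicSpec, String eventTopic) {
        if (topicSpec.endsWith("/*")) {
            // wildcard as last token: match every topic below the prefix
            String prefix = topicSpec.substring(0, topicSpec.length() - 1);
            return eventTopic.startsWith(prefix);
        }
        // otherwise the topic must match exactly
        return topicSpec.equals(eventTopic);
    }
}
```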

Let’s say we have a very motivated young guy called Ray who wants to prove himself to the boss. So he takes every command from the boss. For this we set the service property event.topics=org/fipro/mafia/Boss/*

@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_ALL)
public class Ray implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        String topic = event.getTopic();
        Object target = event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET);

        switch (topic) {
            case MafiaBossConstants.TOPIC_CONVINCE:
                System.out.println("Ray: I helped in punching the shit out of " + target);
                break;
            case MafiaBossConstants.TOPIC_ENCASH:
                System.out.println("Ray: I helped getting the money from " + target);
                break;
            case MafiaBossConstants.TOPIC_SOLVE:
                System.out.println("Ray: I helped killing " + target);
                break;
            default:
                System.out.println("Ray: I helped with whatever was requested!");
        }
    }
}

Executing the example again will show that Ray is responding on every boss command.

It is also possible to filter events based on event properties by setting the service property event.filter. The value needs to be an LDAP filter. For example, although Ray is a motivated and loyal soldier, he refuses to handle events that target his friend Sonny.

The following snippet shows how to specify a filter that excludes event processing if the target is Sonny.

@Component(
    property = {
        EventConstants.EVENT_TOPIC + "=" + MafiaBossConstants.TOPIC_ALL,
        EventConstants.EVENT_FILTER + "=" + "(!(target=Sonny))" })
public class Ray implements EventHandler {

Execute the example and call two commands:

  • boss solve Angelo
  • boss solve Sonny

You will notice that Ray will respond on the first call, but he will not show up on the second call.

The filter expression can only be applied on event properties. It is not possible to use that filter on service properties.

At last it is possible to configure in which order the event handler wants the events to be delivered: either ordered in the same way they were posted, or unordered. The service property event.delivery can be used to change the default behavior, which is to receive the events from a single thread in the same order as they were posted.

If an event handler does not need to receive events in the order in which they were posted, you need to specify the service property event.delivery with the value async.unordered:

@Component(
    property = {
        EventConstants.EVENT_TOPIC + "="
            + MafiaBossConstants.TOPIC_ALL,
        EventConstants.EVENT_FILTER + "="
            + "(!(target=Sonny))",
        EventConstants.EVENT_DELIVERY + "="
            + EventConstants.DELIVERY_ASYNC_UNORDERED })

The value for ordered delivery is async.ordered, which is the default. Both values are also defined in EventConstants.


By using the event mechanism the code is highly decoupled. In general this is a good thing, but it also makes it hard to identify issues. One common issue in Eclipse RCP, for example, is forgetting to automatically start the bundle org.eclipse.equinox.event. Things will simply not work in such a case, without any errors or warnings shown on startup.

The reason for this is that the related interfaces like EventAdmin and EventHandler are located in a separate API bundle. The bundle wiring therefore shows that everything is ok on startup, because all interfaces and classes are available. But we require a bundle that contains an implementation of EventAdmin. If you remember my Getting Started Tutorial, such a requirement can be specified by using capabilities.

To show the implications, let’s play with the Run Configuration:

  • Uncheck org.eclipse.equinox.event from the list of bundles
  • Launch the configuration
  • execute lb on the command line (or ss on Equinox if you are more familiar with that) and check the bundle states
    • Notice that all bundles are in ACTIVE state
  • execute scr:list (or list on Equinox < Oxygen) to check the state of the DS components
    • Notice that org.fipro.mafia.boss.BossCommand has an unsatisfied reference
    • Notice that all other EventHandler services are satisfied

That is of course the correct behavior. The BossCommand service has a mandatory reference to EventAdmin and there is no such service available, so it has an unsatisfied reference. The EventHandler implementations do not have such a dependency, so they are satisfied. And that is even fine when thinking in the publish & subscribe pattern: they can be active and waiting for events to process, even if there is nobody available to send an event. But it makes it hard to find the issue. And when using Tycho and the Surefire Plugin to execute tests, it will never work, because nobody tells the test runtime that org.eclipse.equinox.event needs to be available and started in advance.

This can be solved by adding the Require-Capability header to require an osgi.service for objectClass=org.osgi.service.event.EventAdmin.

Require-Capability: osgi.service;
 filter:="(objectClass=org.osgi.service.event.EventAdmin)"

By specifying the Require-Capability header like above, the capability will be checked when the bundles are resolved. So starting the example after the Require-Capability header was added will show an error and the bundle org.fipro.mafia.boss will not be activated.

If you add the bundle org.eclipse.equinox.event again to the Run Configuration and launch it again, there are no issues.

As p2 does still not support OSGi capabilities, the p2.inf file needs to be created in the META-INF folder with the following content:

requires.1.namespace = osgi.service
requires.1.name = org.osgi.service.event.EventAdmin

Typically you would specify the Require-Capability to the EventAdmin service with the directive effective:=active. This implies that the OSGi framework will resolve the bundle without checking if another bundle provides the capability. It then serves more as documentation of which services are required, visible by looking into the MANIFEST.MF.
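For illustration, such a documentation-only requirement could be written like this (a sketch; the filter is derived from the objectClass mentioned earlier):

```manifest
Require-Capability: osgi.service;
 filter:="(objectClass=org.osgi.service.event.EventAdmin)";
 effective:=active
```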

Important Note:
Specifying the Require-Capability header and the p2 capabilities for org.osgi.service.event.EventAdmin will only work with Eclipse Oxygen. I contributed the necessary changes to Equinox for Oxygen M1 with Bug 416047. With a org.eclipse.equinox.event bundle in a version >= 1.4.0 you should be able to specify the capabilities. In previous versions the necessary Provide-Capability and p2 capability configuration in that bundle are missing.

Handling events in Eclipse RCP UI

When looking at the architecture of an Eclipse RCP application, you will notice that the UI layer is not created via OSGi DS (actually that is not a surprise!). And we can not simply say that our view parts are created via DS, because the lifecycle of a part is controlled by other mechanics. But as an Eclipse RCP application is typically an application based on OSGi, all the OSGi mechanisms can be used. Of course not as convenient as with using OSGi DS directly.

The direction from the UI layer to the OSGi service layer is pretty easy. You simply need to retrieve the service you want to use. With Eclipse 4 you simply get the desired service injected using @Inject, or @Inject in combination with @Service since Eclipse Oxygen (see OSGi Declarative Services news in Eclipse Oxygen). With Eclipse 3.x you needed to retrieve the service programmatically via the BundleContext.

The other way to communicate from a service to the UI layer is something different. There are two ways to consider from my point of view: the Observer pattern and the Publish & Subscribe pattern.

This blog post is about the event mechanism in OSGi, so I don’t want to go into detail on the observer pattern approach. It simply means that you extend the service interface to accept listeners to perform callbacks. Which in turn means you need to retrieve the service in the view part for example, and register a callback function from there.

With the Publish & Subscribe pattern we register an EventHandler that reacts on events. It is a similar approach to the Observer pattern, with some slight differences. But this is not a design pattern blog post, we are talking about the event mechanism. And we already registered an EventHandler using OSGi DS. The difference to the scenario using DS is that we need to register the EventHandler programmatically. For OSGi experts that used the event mechanism before DS came up, this is nothing new. For all others that learn about it, it could be interesting.

The following snippet shows how to retrieve a BundleContext instance and register a service programmatically. In earlier days this was done in an Activator, as there you have access to the BundleContext. Nowadays it is recommended to use the FrameworkUtil class to retrieve the BundleContext when needed, and to avoid Activators to reduce startup time.

private ServiceRegistration<?> eventHandler;


// retrieve the bundle of the calling class
Bundle bundle = FrameworkUtil.getBundle(getClass());
BundleContext bc = (bundle != null) ? bundle.getBundleContext() : null;
if (bc != null) {
    // create the service properties instance
    Dictionary<String, Object> properties = new Hashtable<>();
    properties.put(EventConstants.EVENT_TOPIC, MafiaBossConstants.TOPIC_ALL);
    // register the EventHandler service
    eventHandler = bc.registerService(
        EventHandler.class,
        new EventHandler() {

            @Override
            public void handleEvent(Event event) {
                // ensure to update the UI in the UI thread
                Display.getDefault().asyncExec(() -> handlerLabel.setText(
                        "Received boss command "
                            + event.getTopic()
                            + " for target "
                            + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET)));
            }
        },
        properties);
}
This code can technically be added anywhere in the UI code, e.g. in a view, an editor or a handler. But of course you should be aware that the event handler also needs to be unregistered once the connected UI class is destroyed. For example, you implement a view part that registers a listener similar to the above to update the UI every time an event is received. That means the handler has a reference to a UI element that should be updated. If the part is destroyed, the UI element is destroyed as well. If you don’t unregister the EventHandler when the part is destroyed, it will still be alive and react on events, and will probably cause exceptions without proper disposal checks. It is also a cause for memory leaks, as the EventHandler references a UI element instance that is already disposed but can not be cleaned up by the GC as it is still referenced.

The event handling is executed in its own event thread. Updates to the UI can only be performed in the main or UI thread, otherwise you will get a SWTException for Invalid thread access. Therefore it is necessary to ensure that UI updates performed in an event handler are executed in the UI thread. For further information have a look at Eclipse Jobs and Background Processing.
For the UI synchronization you should also consider using asynchronous execution via Display#asyncExec() or UISynchronize#asyncExec(). Using synchronous execution via syncExec() will block the event handler thread until the UI update is done.

If you stored the ServiceRegistration object returned by BundleContext#registerService() as shown in the example above, the following snippet can be used to unregister the handler if the part is destroyed:

if (eventHandler != null) {
    eventHandler.unregister();
}
In Eclipse 3.x this needs to be done in the overridden dispose() method. In Eclipse 4 it can be done in the method annotated with @PreDestroy.

Ensure that the bundle that contains the code is in ACTIVE state so there is a BundleContext. This can be achieved by setting Bundle-ActivationPolicy: lazy in the MANIFEST.MF.

Handling events in Eclipse RCP UI with Eclipse 4

In Eclipse 4 the event handling mechanism is provided to the RCP development via the EventBroker. The EventBroker is a service that uses the EventAdmin and additionally provides injection support. To learn more about the EventBroker and the event mechanism provided by Eclipse 4 you should read the related tutorials, like

We are focusing on the event consumer here. In addition to registering the EventHandler programmatically, in Eclipse 4 it is possible to specify a method that is called on event handling, by additionally providing support for injection.

Such an event handler method looks similar to the following snippet:

@Inject
@Optional
void handleConvinceEvent(
        @UIEventTopic(MafiaBossConstants.TOPIC_CONVINCE) String target) {
    e4HandlerLabel.setText("Received boss CONVINCE command for " + target);
}

By using @UIEventTopic you ensure that the code is executed in the UI thread. If you don’t care about the UI thread, you can use @EventTopic instead. The handler that is registered behind the scenes will also be automatically unregistered if the containing instance is destroyed.

While the method gets directly invoked as event handler, the injection does not work without modifications on the event producer side. For this the data that should be used for injection needs to be added to the event properties for the key org.eclipse.e4.data. This key is specified as a constant in IEventBroker. But using the constant would also introduce a dependency to the bundle that contains IEventBroker, which is not always intended for event producer bundles. Therefore modifying the generation of the event properties map in BossCommand will make the E4 event handling injection work:

// create the event properties object
Map<String, Object> properties = new HashMap<>();
properties.put(MafiaBossConstants.PROPERTY_KEY_TARGET, target);
properties.put("org.eclipse.e4.data", target); // IEventBroker.DATA

The EventBroker additionally adds the topic to the event properties for the key event.topics; in Oxygen it does not seem to be necessary anymore to add it manually.

The sources for this tutorial are hosted on GitHub in the already existing projects:

The PDE version also includes a sample project org.fipro.mafia.ui which is a very simple RCP application that shows the usage of the event handler in a view part.

by Dirk Fauth at May 16, 2017 06:49 AM

Moving On - Part 2

by Sebastian Zarnekow at May 15, 2017 07:08 PM

A big thank you for all the nice feedback and encouraging words that I received after my announcement to leave SMACC. Now, that I’ve had my last day at the company, I think it’s time to raise the curtain. And there aren’t too many surprises behind it, I guess.
From 01 June 2017 on, I’ll be a freelancer and professional consultant. I will build solutions for software developers and solve language engineering problems for my customers. My goal is to help developers and domain experts sharpen their tools, so that they can tackle their business challenges more efficiently.
Also I will work closely with the great people and friends from itemis and be part of the growing team in the Berlin branch. Of course I’m looking forward to contributing to Xtext again. After being absent for more than 15 months, a few things changed in the project, but there are plenty of interesting topics to tackle in the framework, for sure. Time to get my hands dirty!
Long story short: I’m happy to be back :)

by Sebastian Zarnekow at May 15, 2017 07:08 PM

Extract eclipse svg images

by Christian Pontesegger at May 15, 2017 05:44 PM

When creating new icons for applications I like browsing existing eclipse svg images. The repository structure is nice when you know what to look for. But with all its subfolders it is not suited for interactive browsing.

While I am not the world’s greatest bash script kiddie, I assembled a script that clones the repo and sorts its svg images. After execution you end up with a folder eclipse_images that hosts the svg files.

If you improve the script, please post it here so others can benefit.


# create working dir
mkdir eclipse_images
cd eclipse_images/

# get images
git clone git://

# extract all svg images
for line in `find eclipse.platform.images/ -iname "*.svg"`; do
	echo "$line" | awk -v source="$line" '{str=source; gsub(/\//, "_", str); gsub(/eclipse.platform.images_org.eclipse.images_eclipse-svg_/, "", str); gsub(/icons_full_/, "", str); gsub(/_icons_/, "_", str); print "mv \"" source "\" \"" str "\""}' | bash -sx
done

# remove rest of repository
rm -rf eclipse.platform.images

# extract subtype 'wizard banner'
mkdir "wizban"
for line in `find . -maxdepth 1 -iname "*_wizban_*.svg"`; do
	mv "$line" "wizban"
done

# extract overlay images
mkdir "overlay"
for line in `find . -maxdepth 1 -regextype posix-extended -regex "^.*_ovr(16_.*)?.*.svg"`; do
	mv "$line" "overlay"
done

# extract progress indicators
mkdir "progress"
for line in `find . -maxdepth 1 -regextype posix-extended -regex "^.*_(prgss|progress)_.*.svg"`; do
	mv "$line" "progress"
done

# extract view images
mkdir "views"
for line in `find . -maxdepth 1 -regextype posix-extended -regex "^.*_e?view(16)?_.*.svg"`; do
	mv "$line" "views"
done

# ... and all the rest
declare -a arr=("obj16" "elcl16" "clcl16" "etool16" "ctool16" "obj")
mkdir "images"
for token in "${arr[@]}"; do
	for line in `find . -maxdepth 1 -iname "*_${token}_*.svg"`; do
		mv "$line" "images"
	done
done

cd ..

by Christian Pontesegger at May 15, 2017 05:44 PM

Specification-By-Example for Model Transformations

by Andreas Graf at May 15, 2017 02:02 PM

At itemis one of our core development activities in a lot of our projects is the specification and implementation of model-to-model transformations. In a large project in the automotive domain we have been implementing a huge code base of transformations to and from a common domain model. 

Two key points from this and other projects are:

  1. Providing a written prose specification for model transformations is often moot: The effort it takes to achieve in natural language the degree of detail and formality that is required for implementation is high. It is useful to use a formal specification for the transformation – which means that you almost have an implementation ready at this point.
  2. Domain models and the transformations are very complex. In the end, for eliciting the requirements, detailing the specification and providing documentation, examples of the source and target models for a transformation are vital.

Specification-by-example for an AUTOSAR model

So in this project we chose to enhance or even replace the specs with "Specification-By-Example". For all transformation steps and relevant source model variations, we keep a combination of source model and expected target model. For convenience, these are implemented in the test model language that I blogged about recently. The transformations are then implemented in Xtend (Xtend fragments are also directly written in discussion meetings).

Assume that we have an inplace-transformation for an AUTOSAR model (containing software components and ports), which should merge all the ports and interfaces of a component into one port and one merged interface. Our specification-by-example-files could look like this:

Source Model:


Target Model:


Note that our specification-by-example is based on Xtext models and can be used for any (EMF-based) meta-model. Xtext provides comfortable editing features that make it possible, to create such models during meetings. Since they are text files, they can be easily added to version control systems (such as git), and we can place them in the developer workspace for easy access.

After having such an example-based specification, the next step would be obviously to use the same approach for testing. The source file is the input and the target file is the expected output. Our test framework reads in the source model, applies the transformations and then uses the EMF Compare framework from Eclipse to compare the actual result to the expected result. If there are differences, the test will fail.

We have added a textual / HTML based formatting for the results of EMF Compare, so that we can easily see which elements have changed based on the output of the build system, such as Jenkins, without having to reproduce the transformation on the local machine first.

Improve your understanding of the transformation

The specification-by-example approach improves the team’s understanding of the transformation and eases writing relevant tests. It also supports some of the approaches that some of my colleagues blogged about:

  • Test Driven Development: As we directly use the specifications-by-example for test cases (and expand on them), we do have a set of test cases that is actually implemented before the first line of code is written and can directly be used to validate the code against. Usually, they will be supplemented by additional tests (e.g. unit tests).
  • Agile Tests: It is in our "Definition of Done" to provide test cases for each transformation implementation of a sprint. In the sprint review the "before" and "after" models can be reviewed to discuss what has actually been implemented. 

You want to learn more about our projects in the automotive domain?

Check our blog for more information

by Andreas Graf at May 15, 2017 02:02 PM

Save the date: Eclipse DemoCamp Oxygen 28.06.2017

by Maximilian Koegel and Jonas Helming at May 15, 2017 12:50 PM

Please save the date for the Eclipse DemoCamp Oxygen 2017 on June 28th. We will offer ~120 seats, but we usually receive around 200 registrations. To give everybody the same chance, registration for the event will open exactly on May 24th 2017 at 2pm.

You will find more details on the event and the possibility to register here.

The DemoCamp Munich is one of the biggest DemoCamps worldwide and therefore an excellent opportunity to showcase all the cool, new and interesting technology being built by the Eclipse community. This event is open to Eclipse enthusiasts who want to show demos of what they are doing with Eclipse. It aims to create an opportunity for you to meet other Eclipse enthusiasts in Munich in an informal setting.

Seating is limited, so please register on May 24th if you plan to attend.

We look forward to meeting you at the Eclipse DemoCamp Munich 2017!


Tagged with democamp, eclipse

by Maximilian Koegel and Jonas Helming at May 15, 2017 12:50 PM

Devoxx4Kids Ottawa June 2017

by waynebeaton at May 15, 2017 02:56 AM

We all had such a great time at the Devoxx4Kids session in San Jose this past March that we’ve decided to try running a session here in Ottawa.

The goals and mission of Devoxx4Kids is to:

  1. Teach children Computer Programming while having fun and introduce them to concepts of robotics, electronics and generally being creative with these kinds of devices.
  2. Inspire not only children but also the classical education system, so they too can start including computer science in their curriculum.
  3. Demystify programming for girls and introduce them to computer science in order to improve gender equality in that field.

The full manifesto is on the Devoxx4Kids website. There’s also all sorts of information about the programme, including links to the workshops.


Don’t let this picture fool you. Plenty of young women attended the session in San Jose, but we were so caught up in the fun that we didn’t take all that many pictures…

For this first attempt, we’re going to keep it simple and run only two workshops (they ran eight in four parallel streams in San Jose). Since we’re new at this, we’re going to stick to coding workshops with a plan to branch out and maybe try some of the hardware workshops in a future session (these workshops require that we acquire some supplies and equipment that we don’t have readily at hand).

For this first run, the good people at Carleton University have offered up some space. Registration will open at 9:00 am on Saturday, June 3, 2017; we’ll be in room 5345 of the Herzberg Physics building.

5345 Herzberg Physics
Carleton University
1125 Colonel By Dr
Ottawa, ON K1S 5B6

The target age range for attendees is between ten and fourteen years of age (close counts). We’ll post more information, including how to register, on our event page. Registration includes lunch. We’re charging a modest fee of $30 to cover our expenses. Attendees will need to bring their own laptop computer to complete the exercises (we may be able to bring a few spares).

If you’re interested in helping to mentor the session, please send us a note. We’ll get the mentors together in late May to go through the exercises and make sure that everybody is ready to hit the ground running.

The first workshop will focus on a simple game written in Javascript and HTML using a game engine called Phaser. Participants are shown some basic JavaScript expressions and are then invited to use their new knowledge to modify the game. The beautiful thing about this exercise is that it requires virtually no set-up: the code is all self-contained, any text editor (including Notepad) can be used for modifications, and it all runs in a browser. Further, it can be run successfully without requiring an Internet connection.

The second workshop is concerned with Minecraft Modding using Forge for Minecraft and Eclipse IDE as the development environment. This workshop has a few more moving parts than the first and so will require a bit more effort to set up and most certainly does require a stable Internet connection to at least assemble the initial development environment via a Gradle build. There’s certainly a lot of opportunities in this workshop to explain all sorts of interesting concepts without getting bogged down in too many details (which will be good if we end up having attendees with prior experience).


We’ll send out setup instructions a week or so ahead of the session; we can hit the ground running faster if everybody has the software that we’re going to need already downloaded.

by waynebeaton at May 15, 2017 02:56 AM

Xtext LSP vs. Xtext Web

by Sven Efftinge at May 12, 2017 10:03 AM

The Eclipse Xtext language development framework can be used to implement domain-specific languages (DSLs) as well as fully blown programming languages. In addition to a modern compiler architecture, it comes with tool support for different platforms, like Eclipse, IntelliJ and Web.

Since supporting all these different editor platforms is a lot of effort, we are strong supporters of the Language Server Protocol (LSP). The LSP defines a set of requests and notifications for editors to interact with language servers. A language server essentially is a smart compiler watching a workspace and exposing services for an editor. Such services cover things like content assist, find references, rename refactoring and so on. So the big question is:

When should I use Xtext LSP instead of a native editor integration?

As of today if you are looking for an Eclipse Plug-in my answer is clearly, go with the traditional Xtext Eclipse Plug-in. With Eclipse LSP4E there is Eclipse support for language servers, but it is not even close to what our native Eclipse support does. I also doubt that this will change any time in the future. The native Eclipse integration of Xtext is here to stay.

For IntelliJ IDEA the situation is different. Neither has the Xtext integration been updated with the last release, nor has JetBrains started to work on LSP support yet. The code for the IDEA integration is quite extensive and deep. So deep that we regularly get broken because we use non-public API. Since the demand for IDEA integration is not high, maintaining it doesn’t make sense to us. That is why I asked JetBrains to work on LSP integration last year already. So far they don’t seem to be convinced, but you could add your 2 cents or +1 to this ticket if you think LSP would be a good IDEA.

For the rest of this post, I want to talk about Xtext Web and why you should not use it anymore and prefer the LSP integration instead.

The Xtext Web support was our first attempt to generalize language features over multiple editors. At that time we only abstracted over the web editors Ace, CodeMirror and Eclipse Orion (the editor widget, not the IDE). We did it over a REST interface and focused on single code editors only. The LSP integration works with any editor supporting it, and while Eclipse Orion is still working on supporting it, the Monaco code editor from Microsoft fully supports it already. So here are my four reasons why you should use LSP for web applications:

Monaco Is Awesome

Our team has been working with Monaco since it came out last summer. For instance, we are developing a data science IDE for (you can try it for free :-)), where we use Monaco with language servers (currently Python and R). The R language server has been implemented in Xtext using the brand new LSP support. Please have a look at this article to learn more about its features.

So far working with Monaco has been a decent experience. The code is well written and organized, and the quality is very high. Microsoft uses TypeScript, which we do, too, when working on a JavaScript stack. It is to JavaScript what our Xtend programming language is to Java :).

Feature-wise I can say that it has all the things other editors have, but also comes with additional nice features like code lenses, peek definition or the integrated find references. Moreover, it is very extensible, letting you inline any kind of HTML, for instance.

Multiple Editor Support

Monaco directly supports working with multiple editors on a single website and connecting them, e.g. for navigation. This is also a big difference between Xtext LSP and Xtext Web. Xtext LSP is built on top of our incremental builder infrastructure, so it can naturally deal with multiple documents and even projects and dependencies. This doesn’t mean that you need to serve your files from a file system or need to deal with complicated project setups. It just supports this once you want to do it.

Xtext Web, on the other hand, can only handle a single document, and the underlying resource set needs to be provided programmatically.

Write Once, Run Everywhere

Having a fully compliant language server for your Xtext DSL will allow you to use it in other contexts, too. Single-sourcing your language implementation and being able to run it in all different LSP-supporting editors is a huge plus. You decouple the work that you put into your language from the decisions you make regarding in which editors or applications you integrate it.

Future Proof

When it comes to integrating Xtext languages in web applications all our passion and love goes to the LSP. Our customers use either Eclipse or LSP, and we are happy helping people to migrate their existing Xtext Web solutions to LSP and Monaco. Going forward we won’t invest into the Xtext Web support but likely will deprecate it soon. In the future, given the adoption of the LSP, there will be even more tools and editors that can run your Xtext languages.

Final Words

So for me, the main focus in Xtext will be the traditional Eclipse support and the LSP support for everything else. The Eclipse support will benefit from the LSP support as well since we plan to implement new tool features in a way such that it can be used from Eclipse as well as from LSP.

Please get in touch if you have questions or any doubts whether your use case is well covered by this focus.


by Sven Efftinge at May 12, 2017 10:03 AM

Dimitris Kolovos at IncQuery Labs Academy

by Csenge Kolozsvari at May 11, 2017 03:50 PM

IncQuery Labs Academy – an IT engineering professional educational platform – was founded by IncQuery Labs Ltd. in April 2016. It aims to present the most interesting and challenging projects, results and experiences of our specializations. The presentations cover diverse topics in software development, validation and verification, model-driven engineering, open-source technologies, etc.

IncQuery Labs Academy continues with a special speaker: Dimitris Kolovos, Senior Lecturer (Associate Professor) at the Department of Computer Science of the University of York, where he researches and teaches automated and model-driven software engineering. He is also an active Eclipse Foundation committer, leading the development of the open-source Epsilon platform under the Eclipse Modelling project.

His talk’s title: Model-Based Engineering in Industry: An Academic Toolsmith's Perspective


In this talk he will reflect on lessons learnt through developing and evangelising Eclipse-based open-source technologies for Model-Based Engineering for more than a decade now. He will focus on different states of maturity and practice he has encountered in industry, on the external perception of Eclipse-based MBE technologies, and on opportunities and challenges involved in bridging heterogeneous open-source and proprietary MBE tools.

Join us online on the following link and leave your questions, comments in IncQuery Labs’ Facebook page!

by Csenge Kolozsvari at May 11, 2017 03:50 PM

Time scheduling with Chime

by LisiLisenok at May 09, 2017 12:00 AM

Time scheduling.

Eclipse Vert.x executes periodic and delayed actions with periodic and one-shot timers. This is the base for time scheduling, and a rich feature extension must be rather interesting: be notified at a certain date / time, take into account holidays, repeat notifications until a given date, apply a time zone, take into account daylight saving time etc. There are a lot of useful features a time scheduler may introduce to the Vert.x stack.


Chime is a time scheduler verticle which works on the Vert.x event bus and provides:

  • scheduling with cron-style, interval or union timers:
    • at a certain time of day (to the second);
    • on certain days of the week, month or year;
    • with a given time interval;
    • with nearly any combination of all of above;
    • repeating a given number of times;
    • repeating until a given time / date;
    • repeating infinitely
  • proxying event bus with conventional interfaces
  • applying time zones available on JVM with daylight saving time taken into account
  • flexible timers management system:
    • grouping timers;
    • defining a timer start or end times
    • pausing / resuming;
    • fire counting;
  • listening and sending messages via event bus with JSON;
  • publishing or sending timer fire event to the address of your choice.

Chime is written in Ceylon and is available at Ceylon Herd.


Ceylon users.

Deploy Chime using the vertx.deployVerticle method.

import io.vertx.ceylon.core {vertx}
import herd.schedule.chime {Chime}

vertx.vertx().deployVerticle(Chime());

Or with vertx.deployVerticle("ceylon:herd.schedule.chime/0.2.1"); but ensure that the Ceylon verticle factory is available at the class path.

Java users.

  1. Ensure that the Ceylon verticle factory is available at the class path.
  2. Make the Ceylon versions consistent. For instance, Vert.x 3.4.1 depends on Ceylon 1.3.0 while Chime 0.2.1 depends on Ceylon 1.3.2.
  3. Deploy the verticle, like:

An example with Maven is available on GitHub.


Well, the Chime verticle is deployed. Let’s see its structure.
In order to provide flexible and broad ways to manage timing, a two-level architecture is adopted. It consists of schedulers and timers. A timer is a unit which fires at a given time, while a scheduler is a set or group of timers and provides the following:

  • creating and deleting timers;
  • pausing / resuming all timers working within the scheduler;
  • info on the running timers;
  • default time zone;
  • listening to the event bus at the given scheduler address for requests.

Any timer operates within some scheduler, and one or several schedulers have to be created before scheduling can start.
When the Chime verticle is deployed it starts listening to the event bus at the chime address (which can be configured). In order to create a scheduler, send a JSON message like the following to this address:

{
    "operation": "create",
    "name": "scheduler name"
}

Once a scheduler is created it starts listening to the event bus at the scheduler name address. Sending messages to the chime address or to the scheduler name address is rather equivalent, except that the chime address provides services for every scheduler, while the scheduler address provides services for this particular scheduler only.
The request sent to Chime has to contain operation and name keys. The name key provides the scheduler or timer name, while the operation key shows the action Chime has to perform. There are only four possible operations:

  • create - create new scheduler or timer;
  • delete - delete scheduler or timer;
  • info - request info on Chime or on a particular scheduler or timer;
  • state - set or get scheduler or timer state (running, paused or completed).
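For example, a request that deletes a previously created scheduler follows the same operation / name convention (a sketch; the scheduler name is a placeholder):

```json
{
    "operation": "delete",
    "name": "scheduler name"
}
```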


Now we have a scheduler created, and timers can be run within it. There are two ways to access a given timer:

  1. Sending message to chime address with ‘name’ field set to scheduler name:timer name.
  2. Sending message to scheduler name address with ‘name’ field set to either timer name or scheduler name:timer name.

The timer request is rather complicated and contains a lot of details. In this blog post only basic features are considered:

    "operation": "create",
    "name": "scheduler name:timer name",
    "description": {}

This is quite similar to the request sent to create a scheduler; the only difference is the added description field. This description is a JSON object which identifies the particular timer type and its details.
The other fields, not shown here, are optional and include:

  • initial timer state (paused or running);
  • start or end date-time;
  • number of repeating times;
  • whether the timer message is to be published or sent;
  • timer fire message and delivery options;
  • time zone.
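
As an illustration, a create request using some of these optional fields might look like the sketch below; the exact key names ("state", "max count", "time zone") are assumptions on my part, so check the Chime documentation for the authoritative spelling:

```json
{
    "operation": "create",
    "name": "scheduler name:timer name",
    "state": "paused",
    "max count": 10,
    "time zone": "Europe/Paris",
    "description": {}
}
```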

Timer descriptions.

Currently, three types of timers are supported:

  • Interval timer which fires after each given time period (minimum 1 second):

      {
          "type": "interval",
          "delay": "timer delay in seconds, Integer"
      }
  • Cron style timer which is defined with a cron-style expression:

      {
          "type": "cron",
          "seconds": "seconds in cron style, String",
          "minutes": "minutes in cron style, String",
          "hours": "hours in cron style, String",
          "days of month": "days of month in cron style, String",
          "months": "months in cron style, String",
          "days of week": "days of week in cron style, String, optional",
          "years": "years in cron style, String, optional"
      }

    The cron timer is rather powerful and flexible; see the specification for the complete list of features.

  • Union timer which combines a number of timers into one:

      {
          "type": "union",
          "timers": ["list of the timer descriptions"]
      }

    Union timer may be useful to fire at a list of specific dates / times.
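
Putting the pieces together, a complete create request for, say, an interval timer firing every 10 seconds combines the request format shown earlier with one of the timer descriptions above:

```json
{
    "operation": "create",
    "name": "scheduler name:timer name",
    "description": {
        "type": "interval",
        "delay": 10
    }
}
```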

Timer events.

Once a timer is started, it sends or publishes messages in JSON format to the scheduler name:timer name address. Two types of events are sent:

  • fire event which occurs when the time reaches the next timer value:

      {
          "name": "scheduler name:timer name, String",
          "event": "fire",
          "count": "total number of fire times, Integer",
          "time": "ISO formatted date / time, String",
          "seconds": "number of seconds since last minute, Integer",
          "minutes": "number of minutes since last hour, Integer",
          "hours": "hour of day, Integer",
          "day of month": "day of month, Integer",
          "month": "month, Integer",
          "year": "year, Integer",
          "time zone": "time zone the timer works in, String"
      }
  • complete event which occurs when the timer is exhausted by some criteria given in the timer create request:

      {
          "name": "scheduler name:timer name, String",
          "event": "complete",
          "count": "total number of fire times, Integer"
      }

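As a concrete illustration, a fire event published by a timer named my scheduler:my timer might look like this (all values below are made up for the example):

```json
{
    "name": "my scheduler:my timer",
    "event": "fire",
    "count": 3,
    "time": "2017-04-30T16:30:00",
    "seconds": 0,
    "minutes": 30,
    "hours": 16,
    "day of month": 30,
    "month": 4,
    "year": 2017,
    "time zone": "Europe/Paris"
}
```
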
Basically, we now know everything needed to be happy with Chime: schedulers and requests to them, timers and timer events. We will see some examples in the next sections.


Ceylon example.

Let’s consider a timer which has to fire at 16:30 on the last Sunday of every month.

// listen to the timer events
eventBus.consumer (
    "my scheduler:my timer",
    (Throwable|Message<JsonObject?> msg) {
        if (is Message<JsonObject?> msg) { print(msg.body()); }
        else { print(msg); }
    }
);
// create scheduler and timer
eventBus.send (
    "chime",
    JsonObject {
        "operation" -> "create",
        "name" -> "my scheduler:my timer",
        "description" -> JsonObject {
            "type" -> "cron",
            "seconds" -> "0",
            "minutes" -> "30",
            "hours" -> "16",
            "days of month" -> "*",
            "months" -> "*",
            "days of week" -> "SundayL"
        }
    }
);

‘*’ means any, ‘SundayL’ means last Sunday.

If a ‘create’ request is sent to the chime address with the name set to ‘scheduler name:timer name’ and the corresponding scheduler hasn’t been created before, then Chime creates both the new scheduler and the new timer.

Java example.

Let’s consider a timer which has to fire at 8:30 every Monday and at 17:30 every Friday.

// listen to the timer events
MessageConsumer<JsonObject> consumer = eventBus.consumer("my scheduler:my timer");
consumer.handler (
    message -> {
        System.out.println(message.body());
    }
);
// description of timers
JsonObject mondayTimer = (new JsonObject()).put("type", "cron")
    .put("seconds", "0").put("minutes", "30").put("hours", "8")
    .put("days of month", "*").put("months", "*")
    .put("days of week", "Monday");
JsonObject fridayTimer = (new JsonObject()).put("type", "cron")
    .put("seconds", "0").put("minutes", "30").put("hours", "17")
    .put("days of month", "*").put("months", "*")
    .put("days of week", "Friday");
// union timer - combines mondayTimer and fridayTimer
JsonArray combination = (new JsonArray()).add(mondayTimer).add(fridayTimer);
JsonObject timer = (new JsonObject()).put("type", "union")
    .put("timers", combination);
// create scheduler and timer
eventBus.send (
    "chime",
    (new JsonObject()).put("operation", "create")
        .put("name", "my scheduler:my timer")
        .put("description", timer)
);

Ensure that the Ceylon verticle factory with the right version is available on the classpath.

At the end.

The herd.schedule.chime module provides some features not mentioned here:

  • convenient builders useful to fill in JSON description of various timers;
  • proxying the event bus with conventional interfaces;
  • reading JSON timer event into an object;
  • attaching JSON message to the timer fire event;
  • managing time zones.

There are also some ideas for the future:

  • custom or user-defined timers;
  • limiting the timer fire time / date with a calendar;
  • extracting timer fire message from external source.

This has been a very quick introduction to Chime. If you are interested, you can read more in the Chime documentation, or even contribute to the project.

Thanks for reading and happy coding!

by LisiLisenok at May 09, 2017 12:00 AM