e(fx)clipse 2.2.0 – New features overview

by Tom Schindl at December 01, 2015 11:07 PM

This blog post collects all the new and shiny features provided with the e(fx)clipse 2.2.0 release, which will be shipped on December 11th 2015.


e(fx)clipse 2.2.0 – Loading Media files from OSGi-Bundles – sponsored by 360T

by Tom Schindl at December 01, 2015 11:04 PM

As JavaFX is not only a UI toolkit but also provides a “full-blown” multimedia stack, you can play sound files and videos without resorting to other APIs/libraries. Unfortunately, you cannot feed just any URL into it.

Only the following protocols are supported:

  • http:
  • file:
  • jar:file:

For most developers those protocols are enough, but if you are running in an Equinox OSGi environment and want to play a sound file packaged within your bundle, you are in trouble: the URL you’ll get on Equinox is e.g. bundleresource://..., and you can’t pass this URL directly to the JavaFX Media API because it will fail to resolve the resource.

Even if you know how to fix this and translate the URL to one of the above protocols, your business code should never have a direct dependency on OSGi, so you cannot deal with this problem yourself but need a library to do it for you.
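To illustrate the protocol limitation, here is a small, self-contained sketch (illustrative only, not e(fx)clipse API) that checks whether a URL's scheme is one the Media API can resolve directly:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative helper (not part of e(fx)clipse): checks whether an absolute
// URL uses one of the schemes the JavaFX Media API can resolve directly.
class MediaUrlCheck {

    // "jar" covers the jar:file: case, since the outer scheme is "jar"
    private static final List<String> SUPPORTED = Arrays.asList("http", "file", "jar");

    static boolean isDirectlySupported(String url) {
        String scheme = url.substring(0, url.indexOf(':'));
        return SUPPORTED.contains(scheme);
    }
}
```

An Equinox URL such as bundleresource://21.fwk/mysound.mp3 fails this check, which is exactly the gap the API below closes.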

One of our JavaFX customers (360T) approached us a few days ago, so we implemented an API giving you exactly this feature; you can/should now always create Media instances like this:

public void playSound() {
  URL url = getClass().getResource("mysound.mp3");
  MediaLoader.createMedia( url )
    .map( MediaPlayer::new )
    .ifPresent( MediaPlayer::play );
}


Visualizing Java 9 Module Relationships

by waynebeaton at December 01, 2015 10:39 PM

As I stated in an earlier post, I’ve been running Eclipse Neon on the Java 9 + Jigsaw builds for a little while and haven’t had any issues. I spent a few hours over the weekend tinkering with some modularity tool ideas. I wrote a little plug-in for Eclipse Neon that visualizes the connections between the various modules.


The implementation is a little rough right now as I was focused on just sorting out how to get the various bits of technology to work. The Modules Dependencies View updates whenever the user highlights a Java project (e.g. in the Package Explorer, Project Explorer, or Navigator): it queries the Java runtime associated with the project for its list of modules and readability associations between them, and then uses Eclipse GEF Zest to visualize the results.

The part where I query the Java runtime is a huge hack right now, but the good news is that in playing with the visualization, I may have sorted out a better, less hacky way of getting this done. More on this later.

I’m also just relearning how to use Zest (not that I was ever much of an expert with it), so there’s probably more that I can do with the layout algorithm. One thing that I did do was make it so that you can drill down into any single module with a double-click.


While I was at it, I threw together a quick tree view that lets you drill down through the reads relationships between modules.


It’s all very ugly of course, but it’s been pretty helpful for me to see all the parts. I need to test this with a custom jimage (perhaps I’ll build one based on one of the compact profiles).

It would also be cool to include the full build path of the Java project (e.g. workspace contents and JAR files) in the visualization. I also completely disregarded layers and should probably put some thought into representing them.

I haven’t pushed out the code yet; it will appear in Eclipse Labs when it’s a little more fully-baked.

Note that JDK 9 is going to slip a bit: with less than two weeks to go before the “feature complete” deadline, a proposal has been put forward to extend the release date by six months.


Eclipse Newsletter - Model your World

December 01, 2015 03:26 PM

Four Eclipse Modeling projects have great things up their sleeve! Read the newsletter now.


IAdaptable - GEF4's Interpretation of a Classic

by Alexander Nyßen (noreply@blogger.com) at December 01, 2015 03:24 PM

Adaptable Objects as a Core Pattern of Eclipse Core Runtime

The adaptable objects pattern is probably the most important one used by the Eclipse core runtime. Formalized by the org.eclipse.core.runtime.IAdaptable interface, an adaptable object can easily be queried by clients (in a type-safe manner) for additional functionality that is not included within its general contract.
public interface IAdaptable {
   /**
    * Returns an object which is an instance of the given class
    * associated with this object. Returns <code>null</code> if
    * no such object can be found.
    *
    * @param adapter the adapter class to look up
    * @return an object castable to the given class,
    *    or <code>null</code> if this object does not
    *    have an adapter for the given class
    */
   public Object getAdapter(Class adapter);
}
From another viewpoint, if an adaptable object properly delegates its getAdapter(Class) implementation to an IAdapterManager (most commonly Platform.getAdapterManager()) or provides a respective proprietary mechanism on its own, it can easily be extended with new functionality (even at runtime), without any need for local changes, and adapter creation can flexibly be handled through a set of IAdapterFactory implementations.
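As a plain-Java illustration of the pattern (a simplified stand-in, not the Eclipse implementation), an adaptable object boils down to:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the adaptable-objects pattern; the names are
// illustrative and deliberately not the org.eclipse.core.runtime types.
interface Adaptable {
    <T> T getAdapter(Class<T> adapterType);
}

class AdaptableDocument implements Adaptable {
    private final Map<Class<?>, Object> adapters = new HashMap<>();

    <T> void registerAdapter(Class<T> type, T adapter) {
        adapters.put(type, adapter);
    }

    @Override
    public <T> T getAdapter(Class<T> adapterType) {
        // returns null if no adapter can be found, as in the Eclipse contract
        return adapterType.cast(adapters.get(adapterType));
    }
}
```

In the real platform, getAdapter() would typically delegate to Platform.getAdapterManager() rather than a local map, so adapter creation can be handled by IAdapterFactory implementations.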

Why org.eclipse.core.runtime.IAdaptable is not perfectly suited for GEF4 

As it has proven its usefulness in quite a number of places, I considered the adaptable objects pattern to be quite a good candidate to deal with the configurability and flexibility demands of a graphical editing framework as well. I thus wanted to give it a major role within the next generation API of our model-view-controller framework (GEF4 MVC).

GEF4 MVC is the intended replacement of GEF (MVC) 3.x as the core framework, from which to build up graphical editors and views. As it has a high need for flexibility and configurability it seemed to be the ideal playground for adaptable objects. However, the way the Eclipse core runtime interprets the adaptable objects pattern does not make it a perfect match to fulfill our requirements, because:
  • Only a single adapter of a specific type can be registered at an adaptable. Registering two different Provider implementations at a controller (e.g. one to specify the geometry used to display feedback and one to depict where to layout interaction handles) is for instance not possible.
  • Querying (and registering) adapters for parameterized types is not possible in a type-safe manner. The class-based signature of getAdapter(Class) does for instance not allow to differentiate between a Provider<IGeometry> and a Provider<IFXAnchor>.
  • IAdaptable only provides an API for retrieving adapters, not for registering them, so (re-) configuration of adapters at runtime is not easily possible. 
  • Direct support for 'binding' an adapter to an adaptable object, i.e. to establish a reference from the adapter to the adaptable object, is not offered (unless the adapter explicitly provides a proprietary mechanism to establish such a back-reference).

    Adaptable Objects as Interpreted by GEF4 Common

    I thus created my own interpretation of the adaptable objects pattern, formalized by org.eclipse.gef4.common.adapt.IAdaptable. It is provided by the GEF4 Common component and can thus be easily used standalone, even for applications that have no use for graphical editors or views (GEF4 Common only requires Google Guice and Google Guava to run).

    AdapterKey to combine Type with Role

    Instead of a simple Class-based type-key, adapters may now be registered by means of an AdapterKey, which combines (a Class- or TypeToken-based) type key (to retrieve the adapter in a type-safe manner) with a String-based role.

    The combination of a type key with a role allows retrieval of several adapters of the same type under different roles. Two different Provider implementations can for instance now easily be retrieved (to provide independent geometric information for selection feedback and selection handles) through:
    getAdapter(AdapterKey.get(new TypeToken<Provider<IGeometry>>(){}, "selectionFeedbackGeometryProvider"))
    getAdapter(AdapterKey.get(new TypeToken<Provider<IGeometry>>(){}, "selectionHandlesGeometryProvider"))
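The effect of such a combined key can be sketched in plain Java (a simplified stand-in for GEF4's AdapterKey, using Class instead of TypeToken):

```java
import java.util.Objects;

// Simplified stand-in for GEF4's AdapterKey: a type key combined with a
// String-based role; value semantics so it can be used as a map key.
final class RoleKey<T> {
    final Class<T> type;
    final String role;

    private RoleKey(Class<T> type, String role) {
        this.type = type;
        this.role = role;
    }

    static <T> RoleKey<T> get(Class<T> type, String role) {
        return new RoleKey<>(type, role);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof RoleKey)) return false;
        RoleKey<?> other = (RoleKey<?>) o;
        return type.equals(other.type) && role.equals(other.role);
    }

    @Override
    public int hashCode() {
        return Objects.hash(type, role);
    }
}
```

With keys like these, a Map<RoleKey<?>, Object> can hold two adapters of the same type under different roles, which is precisely what a Class-only key cannot express.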

    TypeToken instead of Class

    The second significant difference is that a com.google.common.reflect.TypeToken (provided by Google Guava) is used as a more general concept instead of a Class, which enables parameterized adapters to be registered and retrieved in a type-safe manner as well. A geometry provider can for instance now be easily retrieved through getAdapter(new TypeToken<Provider<IGeometry>>(){}), while an anchor provider can alternatively be retrieved through getAdapter(new TypeToken<Provider<IFXAnchor>>(){}). For convenience, retrieving adapters by means of Class-based type keys is also supported (which will internally be converted to a TypeToken).

    IAdaptable as a local adapter registry

    In contrast to the Eclipse core runtime interpretation, an org.eclipse.gef4.common.adapt.IAdaptable has the obligation to provide means to not only retrieve adapters (getAdapter()) but also register or unregister them (setAdapter(), unsetAdapter()). This way, the 'configuration' of an adaptable can easily be changed at runtime, even without providing an adapter manager or factory.

    Of course this comes at the cost that an org.eclipse.gef4.common.adapt.IAdaptable is itself responsible for maintaining the set of registered adapters. This (and the fact that the interface contains a lot of convenience functions) is balanced by the fact that a base implementation (org.eclipse.gef4.common.adapt.AdaptableSupport) can easily be used as a delegate to realize the IAdaptable interface.

    IAdaptable.Bound for back-references

    If adapters need to be 'aware' of the adaptable they are registered at, they may implement the IAdaptable.Bound interface, which is used to establish a back-reference from the adapter to the adaptable. It is part of the IAdaptable contract that an adapter implementing IAdaptable.Bound will be provided with a back-reference during registration (if an adaptable uses org.eclipse.gef4.common.adapt.AdaptableSupport to internally realize the interface, this contract is of course guaranteed).
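The back-reference contract can be sketched in plain Java (illustrative only, not the GEF4 source):

```java
// Illustrative sketch of the IAdaptable.Bound idea: adapters implementing
// this interface receive a back-reference to their adaptable at registration.
interface BoundAdapter<A> {
    void setAdaptable(A adaptable);
}

class HoverTool implements BoundAdapter<String> {
    String adaptable; // back-reference, set by the adaptable during registration

    @Override
    public void setAdaptable(String adaptable) {
        this.adaptable = adaptable;
    }
}
```

A registering adaptable would check instanceof BoundAdapter inside its setAdapter() and call setAdaptable(this), which is the behavior AdaptableSupport guarantees.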

    IAdaptables and Dependency Injection

    While the possibility to re-configure the registered adapters at runtime is quite helpful, proper support to create an initial adapter configuration during instantiation of an adaptable is also of importance. To properly support this, I integrated the GEF4 Common adaptable objects mechanism with Google Guice. 

    That is, the adapters that are to be registered at an adaptable can be configured in a Guice module, using a specific AdapterMap binding (which is based on Guice's multi-bindings). Registering an adapter of type VisualBoundsGeometryProvider at an FXGeometricShapePart adaptable can for instance be performed using the following Guice module configuration:
    protected void configure() {
      // enable adapter map injection support
      install(new AdapterInjectionSupport());
      // obtain map-binder to bind adapters for FXGeometricShapePart instances
      MapBinder<AdapterKey<?>, Object> adapterMapBinder =
          AdapterMaps.getAdapterMapBinder(binder(), FXGeometricShapePart.class);
      // bind geometry provider for selection handles as adapter on FXGeometricShapePart
      adapterMapBinder.addBinding(
          AdapterKey.get(new TypeToken<Provider<IGeometry>>(){},
              "selectionHandlesGeometryProvider"))
          .to(VisualBoundsGeometryProvider.class);
    }

    This will not only inject a VisualBoundsGeometryProvider instance as an adapter into all direct instances of FXGeometricShapePart but also into all instances of its sub-types, which may be seen as a sort of 'polymorphic multi-binding'.

    Two prerequisites have to be fulfilled in order to make use of adapter injections:
    1. Support for adapter injections has to be enabled in your Guice module by installing an org.eclipse.gef4.common.inject.AdapterInjectionSupport module as outlined in the snippet above.
    2. The adaptable (here: FXGeometricShapePart.class) or any of its super-classes has to provide a method that is eligible for adapter injection:
    public <T> void setAdapter(TypeToken<T> adapterType, T adapter, String role) {
      // TODO: implement (probably by delegating to an AdaptableSupport)
    }

    GEF4 MVC makes use of this mechanism quite intensively for the configuration of adapters (and indeed, within the MVC framework, more or less everything is an adapter). However, similar to the support for adaptable objects itself, the related injection mechanism is easily usable in a standalone scenario. Feel free to do so!


    ECF 3.12.0 Released

    by Scott Lewis (noreply@blogger.com) at November 30, 2015 09:18 PM

    ECF 3.12.0 was released today.

    The new work was primarily on improving the Remote Services/Remote Service Admin Tooling.

    The new RSA Manager view, along with an enhanced Endpoint Discovery view, support easier OSGi Remote Services testing and debugging.


    Upcoming Virtual IoT meetups

    by Benjamin Cabé at November 30, 2015 04:05 PM

    We have some great Virtual IoT meetups lined up for the next couple of months! They are a great opportunity to learn about existing and emerging IoT tech, as well as to engage with some of the key people that are actually building the Internet of Things. Please make sure to check them out and register so you’ll be reminded in time to attend!

    Also, if you’re interested in presenting something cool that you’re doing in the field of IoT, please contact me!

    Introducing the Arduino C++ IDE for Eclipse

    Wednesday, Dec 2, 2015, 8:00 AM

    No location yet.

    47 IoT enthusiasts Attending

    This is a virtual Meetup occurring at 8AM Pacific time (11am Eastern, 5pm Central European Time). For help with your timezone calculation, refer to this. The meetup will be held on Google Hangouts and you will be able to watch the live stream directly on YouTube: http://www.youtube.com/watch?v=4gCprxHFeuw. The Arduino IDE from arduino.cc provides a …



    Smart Charging of Electric Vehicles with RISE V2G Project

    Wednesday, Dec 16, 2015, 8:00 AM

    No location yet.

    2 IoT enthusiasts Attending

    This is a virtual Meetup occurring at 8AM Pacific time (11am Eastern, 5pm Central European Time). For help with your timezone calculation, refer to this. The meetup will be held on Google Hangouts and you will be able to watch the live stream directly on YouTube: http://www.youtube.com/watch?v=ImXnDLHyZbE. RISE V2G is a Reference Implementation Supp…



    How to Clone Git Repositories with JGit

    by Rüdiger Herrmann at November 30, 2015 08:00 AM

    Written by Rüdiger Herrmann

    Whatever you plan to do with an existing repository, first a clone has to be created. Whether you plan to contribute or just want to peek at its history, a local copy of the repository is needed.

    While cloning a repository with JGit isn’t particularly difficult, there are a few details that might be worth noting. And because there are few online resources on the subject, this article summarizes how to use the JGit API to clone from an existing Git repository.

    Cloning Basics

    To make a local copy of a remote repository, the CloneCommand needs at least to be told where the remote is to be found:

    Git git = Git.cloneRepository()
      .setURI( "https://github.com/eclipse/jgit.git" )
      .call();

    The Git factory class has a static cloneRepository() method that returns a new instance of a CloneCommand. setURI() advises it where to clone from and like with all JGit commands, the call() method actually executes the command.

    Though remote repositories – like the name suggests – are usually stored on a remote host, the location given in setURI() can also be a path to a local resource.

    If no more information is given, JGit will choose the directory in which the cloned repository will be stored for you. Based on the current directory and the repository name that is derived from its URL, a directory name is built. In the example above it would be ‘/path/to/current/jgit’.
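The derived name is essentially the last path segment of the URL with a trailing .git stripped; roughly (a sketch of the observable behavior, not JGit's internal code):

```java
// Rough sketch of how a clone destination name can be derived from a URL;
// this mirrors the observable behavior described above, not JGit's own code.
class CloneNames {
    static String humanishName(String uri) {
        String name = uri.substring(uri.lastIndexOf('/') + 1);
        if (name.endsWith(".git")) {
            name = name.substring(0, name.length() - ".git".length());
        }
        return name;
    }
}
```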

    But usually you would want to have more control over the destination directory and explicitly state where to store the local clone.

    The setDirectory() method specifies where the work directory should be, and with setGitDir() the location of the metadata directory (.git) can be set. If setGitDir() is omitted, the .git directory is created directly underneath the work directory.

    The example below

    Git git = Git.cloneRepository()
      .setURI( "https://github.com/eclipse/jgit.git" )
      .setDirectory( new File( "/path/to/repo" ) )
      .call();

    will create a local repository whose work directory is located at ‘/path/to/repo’ and whose metadata directory is located at ‘/path/to/repo/.git’.

    However the destination location is chosen, explicitly through your code or by JGit, the designated directory must either be empty or must not exist. Otherwise an exception will be thrown.

    The settings for setDirectory(), setGitDir() and setBare() (see below) are forwarded to the InitCommand that is used internally by the CloneCommand. Hence, more details on them are explained in Initializing Git Repositories with JGit.

    The Git instance that is returned by CloneCommand.call() provides access to the repository itself (git.getRepository()) and can be used to execute further commands targeting this repository. When finished using the repository it must be closed (git.close()) or otherwise the application may leak file handles.

    To later regain a Repository (or Git) instance, the path to the work directory or .git directory is sufficient. The article How to Access a Git Repository with JGit has detailed information on the subject.

    Upstream Configuration

    As a last step, the clone command updates the configuration file of the local repository to register the source repository as a so-called remote.

    When looking at the configuration file (.git/config) the remote section looks like this:

    [remote "origin"]
      url = https://github.com/eclipse/jgit.git
      fetch = +refs/heads/*:refs/remotes/origin/*

    If no remote name is given, the default ‘origin’ is used. In order to have the CloneCommand use a certain name under which the remote repository is registered, use setRemote().

    The refspec given by ‘fetch’ determines which branches should be exchanged when fetching from or pushing to the remote repository by default.

    Cloning Branches

    By default, the clone command creates a single local branch. It looks at the HEAD ref of the remote repository and creates a local branch with the same name as the remote branch referenced by it.

    But the clone command can also be told to clone and checkout certain branch(es). Assuming that the remote repository has a branch named ‘extra’, the following lines will clone this branch.

    Git git = Git.cloneRepository()
      .setURI( "https://github.com/eclipse/jgit.git" )
      .setDirectory( new File( "/path/to/repo" ) )
      .setBranchesToClone( singleton( "refs/heads/extra" ) )
      .setBranch( "refs/heads/extra" )
      .call();

    With setBranchesToClone(), the command clones only the specified branches. Note that the setBranch() directive is necessary to also check out the desired branch. Otherwise, JGit would attempt to check out the ‘master’ branch. While this isn’t a problem from a technical point of view, it is usually not what you want.

    If all branches of the remote repository should be cloned, you can advise the command like so:

    Git git = Git.cloneRepository()
      .setURI( "https://github.com/eclipse/jgit.git" )
      .setDirectory( new File( "/path/to/repo" ) )
      .setCloneAllBranches( true )
      .call();

    To prevent the current branch from being checked out at all, the setNoCheckout() method can be used.

    Listing Remote Branches

    If you want to know which branches a remote repository has to offer, the LsRemoteCommand comes to the rescue. To list all branches of a JGit repository, use Git’s lsRemoteRepository() as shown below.

    Collection<Ref> remoteRefs = Git.lsRemoteRepository()
      .setHeads( true )
      .setRemote( "https://github.com/eclipse/jgit.git" )
      .call();

    If you also want to list tags, advise the command with setTags( true ).

    For reasons I rather don’t want to know, JGit requires a local repository for certain protocols in order to be able to list remote refs. In this case Git.lsRemoteRepository() will throw a NotSupportedException. The workaround is to create a temporary local repository and use git.lsRemote() instead of Git.lsRemoteRepository() where git wraps the temporary repository.

    Cloning Bare Repositories

    If the local repository does not need a work directory, the clone command can be instructed to create a bare repository.

    By default non-bare repositories are created, but with setBare( true ) a bare repository is created as shown below:

    Git git = Git.cloneRepository()
      .setBare( true )
      .setURI( "https://github.com/eclipse/jgit.git" )
      .setGitDir( new File( "/path/to/repo" ) )
      .call();

    Here the destination directory is specified via setGitDir() instead of using setDirectory().
    The resulting repository’s isBare() will return true, getGitDir() will return /path/to/repo and since there is no work directory getWorkTree() will throw a NoWorkTreeException.

    Note that ‘bare’ here only applies to the destination repository. Whether the source repository is bare or not doesn’t make a difference when cloning.

    Cloning Submodules

    If the remote repository is known to have submodules or if you wish to include submodules in case there are any, the clone command can be instructed to do so:

    Git git = Git.cloneRepository()
      .setCloneSubmodules( true )
      .setURI( "https://github.com/eclipse/jgit.git" )
      .setDirectory( new File( "/path/to/repo" ) )
      .call();

    The above example advises the clone command to also clone any submodule that is found.

    If setCloneSubmodules( true ) wasn’t specified while cloning the repository, you can catch up on the missing submodules later. For more details see the article How to manage Git Submodules with JGit.

    Cloning with Authentication

    Of course, JGit also allows you to access repositories that require authentication. Common protocols like SSH and HTTP(S) and their authentication methods are supported. A detailed explanation of how to use authentication support can be found in the JGit Authentication Explained article.

    Concluding How to Clone Git Repositories with JGit

    For almost all features of the native Git clone command there is an equivalent in JGit. Even a progress monitor, which may be useful when JGit is embedded in interactive applications, exists. And for the missing mirror option a workaround apparently exists. Only the often asked-for shallow clones (e.g. git clone --depth 2) aren’t yet supported by JGit.

    The snippets shown throughout this article are excerpts from a learning test that illustrates the common use cases of the CloneCommand. The full version can be found here:

    If you still have difficulties or questions, feel free to leave a comment or ask the friendly and helpful JGit community for assistance.

    The post How to Clone Git Repositories with JGit appeared first on Code Affine.


    CodeRASPIDe to go bright with Sirius

    by Its_Me_Malai (noreply@blogger.com) at November 30, 2015 06:05 AM

    Having released a stable GPIO version of CodeRASPIDe, our plans for the next release were falling into place.
    1. Graphical Editor to design the Hardware Wiring of RaspberryPI
    2. Support for Python and C Code Generation

    A graphical editor to design -> the immediate thought goes to Sirius, as we had been hearing about Sirius, and when starting CodeRASPIDe we also came across the Arduino Designer by Obeo, built using Sirius. So we decided yesterday to get our hands dirty with Sirius.

    We ran through the 2 tutorials on the Eclipse Wiki:
    1. Basic Sirius Tutorial
    2. Advanced Sirius Tutorial

    And this is what we got for CodeRASPIDe. I was amazed at what Sirius could do on the first attempt. Definitely a framework that a lot of managers would love [I love it for CodeRASPIDe]. Yet to see how it works out for developers [will post soon as I move further in using Sirius].

    Looking forward to playing with Sirius a little bit more to conclude on its use-case limitations.
    But for now, "SIRIUS is AWESOME" is the word.
    We will also have our Sirius tutorials out soon to support this great piece of work by Obeo.


    Combine vert.x and mongo to build a giant

    by cescoffier at November 30, 2015 12:00 AM

    This blog post is part of the introduction to vert.x series. Last time, we saw how to use the vertx-jdbc-client to connect to a database using a JDBC driver. In this post, we are going to replace this JDBC client with the vertx-mongo-client, and thus connect to a MongoDB database.

    If you don’t understand the title, check the MongoDB website.

    But before going further, let’s recap.

    Previously in ‘introduction to vert.x’

    1. The first post has described how to build a vert.x application with Maven and execute unit tests.
    2. The second post has described how this application can become configurable.
    3. The third post has introduced vertx-web, and a small collection management application has been developed. This application offers a REST API used by a HTML/JavaScript frontend.
    4. The fourth post has presented how you can run integration tests to ensure the behavior of your application.
    5. The last post has presented how you can interact with a JDBC database using the vertx-jdbc-client.

    This post shows another client that lets you use MongoDB in a vert.x application. This client provides a vert.x API to asynchronously access a Mongo database. We won’t debate whether JDBC is superior to Mongo; they both have pros and cons, and you should use the one that meets your requirements. Vert.x lets you choose, that’s the point.

    The vertx-mongo-client documentation is available here.

    The code developed in this blog post is available in the branch post-6. Our starting point is the code from the post-5 branch.

    Asynchronous data access

    One of the vert.x characteristics is being asynchronous. With an asynchronous API, you don’t wait for a result; instead, you are notified when the result is ready. Thanks to vert.x, this notification happens in the same thread (understand: the event loop) as the initial request:

    Asynchronous data access

    Your code (on the left) is going to invoke the mongo client and pass a callback that will be invoked when the result is available. The invocation to the mongo client is non blocking and returns immediately. The client is dealing with the mongo database and when the result has been computed / retrieved, it invokes the callback in the same event loop as the request.

    This model is particularly powerful as it avoids the synchronization pitfalls. Indeed, your code is only called by a single thread, no need to synchronize anything.
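The threading model can be mimicked with a plain-Java single-threaded executor standing in for the event loop (illustrative only, not vert.x API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// A single-threaded executor stands in for the vert.x event loop: every
// callback runs on the same thread, so handlers never race each other.
class EventLoopSketch {
    private final ExecutorService loop = Executors.newSingleThreadExecutor();

    void count(Consumer<Long> callback) {
        // simulate an async data-store call whose result is delivered
        // back to the caller on the event-loop thread
        loop.submit(() -> callback.accept(0L));
    }

    void shutdown() {
        loop.shutdown();
        try {
            loop.awaitTermination(1, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Because the callback is invoked on the single loop thread, the caller's state needs no synchronization, which is the property described above.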

    As with every Maven project….

    … we need to update the pom.xml file first.

    In the pom.xml file, replace the vertx-jdbc-client by the vertx-mongo-client:
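The dependency snippet was lost here; it presumably looked like this (artifact name taken from the text; the version is an assumption from the vert.x 3.x era, so check it against your build):

```xml
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-mongo-client</artifactId>
  <version>3.1.0</version>
</dependency>
```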


    Unlike JDBC, where we were instantiating a database on the fly, here we need to explicitly start a MongoDB server. In order to launch a Mongo server in our tests, we are going to add another dependency:
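The snippet is missing here; the embedded-Mongo test dependency is likely de.flapdoodle's embedded mongo, roughly (groupId/artifactId from memory, version assumed):

```xml
<dependency>
  <groupId>de.flapdoodle.embed</groupId>
  <artifactId>de.flapdoodle.embed.mongo</artifactId>
  <version>1.50.0</version>
  <scope>test</scope>
</dependency>
```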


    This dependency will be used in our unit tests, as it lets us start a mongo server programmatically. For our integration tests, we are going to use a Maven plugin that starts and stops the mongo server before and after the integration tests. Add this plugin to the <plugins> section of your pom.xml file.


    Notice the port we use here (37017), we will use this port later.

    Enough XML for today

    Now that we have updated our pom.xml file, it’s time to change our verticle. The first thing to do is to replace the jdbc client by the mongo client:

    mongo = MongoClient.createShared(vertx, config());

    This client is configured with the configuration given to the verticle (more on this below).

    Once done, we need to change how we start the application. With the mongo client there is no need to acquire a connection, as it handles this internally. So our startup sequence is a bit simpler:

        createSomeData(
            (nothing) -> startWebApp(
                (http) -> completeStartup(http, fut)
            ), fut);

    As in the previous post, we need to insert some predefined data if the database is empty:

    private void createSomeData(Handler<AsyncResult<Void>> next, Future<Void> fut) {
        Whisky bowmore = new Whisky("Bowmore 15 Years Laimrig", "Scotland, Islay");
        Whisky talisker = new Whisky("Talisker 57° North", "Scotland, Island");
        // Do we have data in the collection ?
        mongo.count(COLLECTION, new JsonObject(), count -> {
          if (count.succeeded()) {
            if (count.result() == 0) {
              // no whiskies, insert data
              mongo.insert(COLLECTION, bowmore.toJson(), ar -> {
                if (ar.failed()) {
                  fut.fail(ar.cause());
                } else {
                  mongo.insert(COLLECTION, talisker.toJson(), ar2 -> {
                    if (ar2.failed()) {
                      fut.fail(ar2.cause());
                    } else {
                      next.handle(Future.succeededFuture());
                    }
                  });
                }
              });
            } else {
              next.handle(Future.succeededFuture());
            }
          } else {
            // report the error
            fut.fail(count.cause());
          }
        });
    }

    To detect whether or not the database already contains some data, we retrieve the number of documents from the whiskies collection. This is done with mongo.count(COLLECTION, new JsonObject(), count -> {}). The second parameter is the query. In our case, we want to count all documents. This is done using new JsonObject(), which creates a query accepting all documents from the collection (it’s the equivalent of a SELECT * FROM ...).

    Also notice the insert calls. Documents are passed as JSON objects, so to insert an object, just serialize it to JSON and use mongo.insert(COLLECTION, json, completionHandler).

    Mongo-ize the REST handlers

    Now that the application boot sequence has been migrated to mongo, it’s time to update the code handling the REST requests.

    Let’s start with the getAll method that returns all stored products. To implement this, we use the find method. As we saw with the count method, we pass an empty JSON object to describe a query accepting all documents:

    private void getAll(RoutingContext routingContext) {
        mongo.find(COLLECTION, new JsonObject(), results -> {
          List<JsonObject> objects = results.result();
          List<Whisky> whiskies = objects.stream().map(Whisky::new).collect(Collectors.toList());
          routingContext.response()
              .putHeader("content-type", "application/json; charset=utf-8")
              .end(Json.encodePrettily(whiskies));
        });
    }

    The query results are passed as a list of JSON objects. From this list we can create our product instances, and fill the HTTP response with this set.

    To delete a specific document we need to select the document using its id:

    private void deleteOne(RoutingContext routingContext) {
        String id = routingContext.request().getParam("id");
        if (id == null) {
          routingContext.response().setStatusCode(400).end();
        } else {
          mongo.removeOne(COLLECTION, new JsonObject().put("_id", id),
              ar -> routingContext.response().setStatusCode(204).end());
        }
    }

    The new JsonObject().put("_id", id) describes a query selecting a single document (selected by its unique id, so it’s the equivalent of SELECT * WHERE id=...). Notice the _id, which is a mongo trick to select a document by id.

    Updating a document is a bit less trivial:

    private void updateOne(RoutingContext routingContext) {
        final String id = routingContext.request().getParam("id");
        JsonObject json = routingContext.getBodyAsJson();
        if (id == null || json == null) {
          routingContext.response().setStatusCode(400).end();
        } else {
          mongo.update(COLLECTION,
              new JsonObject().put("_id", id), // Select a unique document
              // The update syntax: {$set, the json object containing the fields to update}
              new JsonObject()
                  .put("$set", json),
              v -> {
                if (v.failed()) {
                  routingContext.response().setStatusCode(404).end();
                } else {
                  routingContext.response()
                      .putHeader("content-type", "application/json; charset=utf-8")
                      .end(Json.encodePrettily(
                          new Whisky(id, json.getString("name"),
                              json.getString("origin"))));
                }
              });
        }
    }

    As we can see, the update method takes two JSON objects as parameters:

    1. The first one denotes the query (here we select a single document using its id).
    2. The second object expresses the change to apply to the selected document. It uses a mongo syntax. In our case, we update the document using the $set operator.
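For comparison, the equivalent command in the Mongo shell would look like the following sketch (the document id and the fields inside $set are hypothetical; they depend on the request body):

```
db.whiskies.update(
  { "_id": "some-document-id" },
  { "$set": { "name": "Talisker 10 Years" } }
)
```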

    Replace document
    In this code we update the document and replace only a set of fields. You can also replace the whole document using mongo.replace(...).

    I definitely recommend having a look at the MongoDB documentation.

    Time for configuration

    Well, the code is migrated, but we still need to update the configuration. With JDBC we passed the JDBC url and the driver class in the configuration. With Mongo, we need to configure the connection_string - the mongodb:// url to which the application connects - and db_name - the name of the database to use.

    Let’s start with the unit test. Edit the MyFirstVerticleTest file and add the following code:

    private static MongodProcess MONGO;
    private static int MONGO_PORT = 12345;

    @BeforeClass
    public static void initialize() throws IOException {
      MongodStarter starter = MongodStarter.getDefaultInstance();
      IMongodConfig mongodConfig = new MongodConfigBuilder()
          .version(Version.Main.PRODUCTION)
          .net(new Net(MONGO_PORT, Network.localhostIsIPv6()))
          .build();
      MongodExecutable mongodExecutable = starter.prepare(mongodConfig);
      MONGO = mongodExecutable.start();
    }

    @AfterClass
    public static void shutdown() { MONGO.stop(); }

    Before our tests, we programmatically start a Mongo database on port 12345. When all our tests have been executed, we shut down the database.

    So now that the Mongo server is managed, we need to give the right configuration to our verticle. Update the DeploymentOptions instance with:

    DeploymentOptions options = new DeploymentOptions()
            .setConfig(new JsonObject()
                .put("http.port", port)
                .put("db_name", "whiskies-test")
                .put("connection_string",
                    "mongodb://localhost:" + MONGO_PORT)
            );

    That’s all for the unit tests.

    For the integration-test, we are using an externalized json file. Edit the src/test/resources/my-it-config.json with the following content:

    {
      "http.port": ${http.port},
      "db_name": "whiskies-it",
      "connection_string": "mongodb://localhost:37017"
    }

    Notice the port we are using for the mongo server. This port was configured in the pom.xml file.

    Last but not least, we still have a configuration file to edit: the configuration you use to launch the application in production:

    {
      "http.port": 8082,
      "db_name": "whiskies",
      "connection_string": "mongodb://localhost:27017"
    }

    Here you would need to replace localhost:27017 with the right URL for your Mongo server.

    Some changes in the integration tests
    Because Mongo document ids are Strings and not integers, we have to slightly change the document selection in the integration test.

    Time for a run

    It’s time to package and run the application and check that everything works as expected. Let’s package the application using:

    mvn clean verify

    And then to launch it, start your mongo server and launch:

    java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar \
      -conf src/main/conf/my-application-conf.json

    If you are, like me, using docker / docker-machine for almost everything, edit the configuration file to refer to the right host (localhost for docker, the docker-machine ip if you use docker-machine) and then launch:

    docker run -d -p 27017:27017 mongo
    java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar \
      -conf src/main/conf/my-application-conf.json
    # or
    java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar \
      -conf src/main/conf/my-application-conf-docker-machine.json

    The application live and running

    That’s all folks !

    We are reaching the end of this post. We saw how you can use the vertx-mongo-client to asynchronously access data stored in a Mongo database, as well as insert and update this data. Now you have the choice between JDBC and Mongo. In addition, Vert.x provides a client for Redis.

    Next time, we will see how the verticle class can be split in two verticles in order to better organize your code. The interaction between the two verticles will use services.

    Stay tuned & happy coding!

    by cescoffier at November 30, 2015 12:00 AM

    Enhancements and Fixes on CodeRASPIDe

    by Its_Me_Malai (noreply@blogger.com) at November 28, 2015 04:02 AM

    Further fixes are happening on CodeRASPIDe. We have stabilised our earlier release, added a few enhancements and fixed most of the known bugs. We also have improved web documentation covering installation, usage, and contact details in case of bugs or interest in the project.

    Our Website for CodeRASPIDe : http://www.ancitconsulting.com/coderaspide/

    All the functionalities are available through our UpdateSite or the Eclipse Marketplace:
    UpdateSite : http://www.ancitconsulting.com/coderaspide/updatesite/
    Marketplace : http://marketplace.eclipse.org/content/code-raspide-ide-raspberry-pi

    Functionalities Added
    1. Improved Project Creation Wizard

    2. Support for Multiple RaspberryPI Boards [40 Pins and 26 Pins]

    3. GPIO Provisioning of Pins : Input Configuration, Output Configuration and MultiPIN Configuration

    Bugs Fixed:
    1. Option to Delete
    2. Generate Code Action on FormPage Header
    3. Few NPEs and Other Bugs Fixed.

    by Its_Me_Malai (noreply@blogger.com) at November 28, 2015 04:02 AM

    Oomph changes the way you handle multiple eclipse installations

    November 27, 2015 07:53 AM

    I work on many Eclipse projects and projects based on Eclipse technologies. I have of course the project I work on for our customer (daily job), but I also try to contribute to different open source projects and I also like trying new stuff and playing around with new technologies.

    All these different tasks require different Eclipse installations. When I experiment with Xtend I use “Eclipse IDE for Java and DSL Developers”. Sometime I need an IDE with Maven support, but not always. After the EclipseCon I just installed an Eclipse supporting Gradle and Groovy development.

    So far, I have never tried to install everything in the same Eclipse IDE. I feared the RAM consumption, overloaded menu trees and an IDE containing too many plugins. So I ended up with a folder containing different eclipse installations (one for each domain):

    Such a folder structure consumes disk space, and maintaining each installation is time consuming and repetitive. Take the Mars.1 update release as an example. I have so many bad reasons for not updating my IDEs: it is boring, it takes time, it takes disk space, ….

    At the last EclipseCon Europe I finally found the time to get into Oomph, and the great news is: with Eclipse Oomph (a.k.a. the Eclipse Installer), I am convinced that I can do much better than in the past. When you look at a classic Eclipse installation, it looks like this:

    I wasn’t really aware of it, but this folder contains the result of 3 different mechanisms: an Eclipse installation containing the executable and some configuration, a p2 bundle pool where the plugins are effectively stored and a p2 agent that keeps everything together by storing the installation details into profiles. Oomph takes advantage of the flexibility offered by P2 and does not create a standalone installation.

    The main advantage of this approach is that the Bundle Pool can be shared across your Eclipse installations. This drastically reduces the disk space for your eclipse installations. It also reduces the time it takes to install and to update each of your eclipse installations.

    When you install something in one IDE, files are stored in the bundle pool. Installing the same plugin again in another IDE is straightforward: Oomph just updates the profile and reuses the files already present in your bundle pool.

    In addition, Oomph is all about setting up your Eclipse environment. Automation is the key word here. You can easily define setup tasks that are shared across all your Eclipse installations. This way you no longer lose time modifying the default Eclipse settings to obtain the installation you want.

    The Oomph approach is really great. In my opinion you can now have one Eclipse IDE setup for each project you work on. On this blog I will continue to explain the advantages of Oomph and what it changes when you work with it. The Scout team will of course also contribute a setup task to facilitate contributions on our framework.

    Feedback: please use this forum thread.

    Project Home, Forum, Wiki, Twitter, Google+

    November 27, 2015 07:53 AM

    Vert.x ES6 back to the future

    by pmlopes at November 25, 2015 12:00 AM

    On October 21st, 2015 we all rejoiced with the return from the past of Marty McFly with his flying car and so on; however, in the Vert.x world we were quite sad that our JavaScript support was still using a technology released in December 2009. The support for ES5 is not something that the Vert.x team controls but something that is inherited from running on top of Nashorn.

    With all these nostalgic thoughts on my mind I’ve decided to bring us back to the future, and by future I mean: let’s start using a modern JavaScript, or more correctly, let’s start using ECMAScript 6.

    It turned out to be quite simple to achieve this so I’ll pick the hello world example and write it in ES6 just to show how you can port your code to ES6 and still use the current Vert.x APIs. Note that Vert.x internals still are ES5 and have not been touched or modified to support any of ES6 features.


    Traditionally your main.js file would reside in the root of your module (this is where NPM will look for it by default); however as we are going to transpile to ES5 you’ll want to put your index file in /src/main.js.

    However, because we are transpiling to ES5, your package.json’s main entry should point to the transpiled main.js file in the /lib directory.

    {
      "name": "vertx-es6",
      "version": "0.0.1",
      "private": true,
      "main": "lib/main.js",
      "scripts": {
        "build": "rm -Rf lib && ./node_modules/.bin/babel --out-dir lib src",
        "start": "./node_modules/.bin/vertx run lib/main.js"
      },
      "dependencies": {
        "vertx3-full": "3.1.0",
        "babel-cli": "6.2.0",
        "babel-preset-es2015": "6.1.18"
      }
    }

    As you can see, the main idea is to invoke the transpiler (Babel) when we are building our project, and run it using the generated files. This is roughly equivalent to the compilation step you would have with a compiled language.
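To see what the build step actually does, here is a small sketch: Babel rewrites ES6-only constructs into their ES5 equivalents (the greet function is an invented example, not part of the project):

```javascript
// ES6 source, as you would write it in src/ (runs as-is on an ES6-capable runtime):
let greet = (name) => `Hello ${name}!`;

// After "npm run build", Babel emits roughly the ES5 equivalent:
//   var greet = function (name) { return "Hello " + name + "!"; };

console.log(greet("ES6 World")); // prints "Hello ES6 World!"
```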


    If you’re planning to deploy your package to npm, either local or private, you should be aware that npm will exclude anything listed in your .gitignore. Since we want git to ignore the generated code but npm to keep it, we need to tell npm to override that rule and keep the lib directory. The .gitignore should be something like:


    And the .npmignore:


    Hello fat arrows and let keywords

    So all the heavy work has been done, in order to create our hello world we just need to code some ES6 in our src/main.js file:

    var Router = require("vertx-web-js/router");

    var server = vertx.createHttpServer();
    var router = Router.router(vertx);

    router.get("/").handler((ctx) => {
      let response = ctx.response();
      response.putHeader("content-type", "text/plain");
      response.end("Hello ES6 World!");
    });

    server.requestHandler(router.accept).listen(8080);

    As you can see, we’re using fat arrows instead of writing a function closure, and scoped variables using the let keyword. If you now compile your project:

    npm run build

    And then start it:

    npm start

    You have your first back to the future ES6 verticle!

    by pmlopes at November 25, 2015 12:00 AM

    Eclipse Key Binding: Select Enclosing Element

    by waynebeaton at November 24, 2015 04:03 PM

    Here’s an Eclipse command that’s pretty handy: Select Enclosing Element (key binding: Shift+Alt+Up on Linux).


    Every time you hit the key combination, it expands the selection to the enclosing element. In this example, it starts with the method selector, and expands to include the parameters, the receiver, the statement, the block, the method, … all the way up to the entire compilation unit.

    To go the other way (back towards the original selection), use the Restore Last Selection command (key binding Shift+Alt+Down on Linux).

    It is, of course, up to you to decide what to do with the selection once you have it just right: maybe use Alt+Up or Alt+Down to move the selected lines around in the file…

    You can find this and more commands by hitting Ctrl+3 and just typing a bit of what you’re looking for, or hit Shift+Ctrl+L to open a pop-up with the full list of key bindings.

    by waynebeaton at November 24, 2015 04:03 PM

    Andmore 0.5-M3 available.

    by kingargyle at November 24, 2015 03:33 PM


    The third stable milestone is ready for you, the user community, to kick the tires and use for your development. This is primarily a bug-fix and stability milestone. The one big addition is that Andmore now supports multi-dexing of APK files. Also, starting with the next Neon milestone, an Android Developers EPP package will be available. This effort is being led by the newest committer, Kaloyan Raev.

    Also, this release of Andmore can be installed on versions of Eclipse older than Mars. There is now an Eclipse Marketplace entry for the project as well, to make installing the tooling even easier.



    The latest version can always be obtained from the following p2 url:


    by kingargyle at November 24, 2015 03:33 PM

    Exploring TypeScript Support in Eclipse

    by aaronlonin at November 24, 2015 02:34 PM

    TypeScript is an open source superset of JavaScript that adds class-based objects that compile to plain JavaScript code. Currently there are two main options to support TypeScript in Eclipse. I’m going to discuss their features and the pros and cons of each. Palantir’s TypeScript (Version 1.6.0.v20151006): This plugin offers minimal support for TypeScript and uses the official TypeScript […]

    The post Exploring TypeScript Support in Eclipse appeared first on Genuitec.

    by aaronlonin at November 24, 2015 02:34 PM

    AnyEdit 2.6.0 for beta testers

    by Andrey Loskutov (noreply@blogger.com) at November 22, 2015 09:09 PM

    I made some bigger changes in the AnyEdit plugin related to the "Compare To/Replace With" menu entries.

    They are now contributed differently to avoid duplicated menu entries and to better support workspace-external files. Together with the latest nightly build of EGit, one can see these menus for the first time in the Git Repositories view:

    This beta offline update site contains the not yet released 2.6.0 version of AnyEdit.

    It would be really nice if you could test it and report possible regressions. If no one complains, I will release this in a week or so.

    P.S.: be aware that I’ve dropped support for Eclipse 3.7 in AnyEdit. While technically it should still work, I do not plan to support Eclipse 3.7 anymore. Eclipse 3.8 is now the minimal platform version for AnyEdit.

    by Andrey Loskutov (noreply@blogger.com) at November 22, 2015 09:09 PM

    IncQuery and VIATRA at EclipseCon Europe 2015

    by István Ráth at November 21, 2015 06:53 PM

    This year, István Ráth and Ákos Horváth have attended EclipseCon Europe and represented our team. IncQuery and VIATRA have been featured in two talks:

    • "IoT Supercharged: Complex Event Processing for MQTT with Eclipse Technologies" (session, slides) - featuring a live demo squeezed into a 10 minute lightning talk - luckily everything worked like a charm :-) This talk is about VIATRA-CEP, a new, high-level framework for complex event processing over runtime models. CEP is a hot topic in IoT for obvious reasons: it is one of the key technologies you want to use to process data/event streams - preferably as close to the data source as possible. In an IoT context, this would typically mean your gateway - which is now possible via new technologies such as Eclipse Kura. VIATRA-CEP is part of our Eclipse.org model transformation framework VIATRA, completely open source and licensed under the EPL.
    • "IncQuery gets Sirius: Faster and Better Diagrams" (session, slides, video). This talk is about a new innovation coming from the IncQuery team, namely the integration between EMF-IncQuery and Sirius. Once this is publicly released, you will be able to use IncQuery patterns in Odesign diagram definitions (next to the traditional options and the new AQL), and enjoy the performance benefits. Additionally, we also provide an integration with IncQuery Viewers, which allows you to define "live diagrams" that are synchronized to the semantic model automatically by incremental transformations.

    Both talks were well received; we were asked quite a few interesting questions and even received suggestions for future development ideas. The entire conference schedule was very strong this year, with hot topics such as the "IoT Day", "Project Quality Day", and the "LocationTech Day", all of which provided high quality talks on interesting topics. Here are my favorite picks:

    • The first day keynote by Stefan Ferber of Bosch Software Innovations was one of the best keynotes I have seen at EclipseCons. It is good to see such a powerful entity joining the Eclipse ecosystem.
    • I really liked two Xtext talks: Business DSLs in Web Applications and The Future of Xtext. As both IncQuery and VIATRA rely on Xtext, it is good to see this framework moving ahead at such a high pace and quality.
    • GEF4 - Sightseeing Mars was one of the highlights of the Conference to me. I was very pleased to see such an important piece of technology gaining new momentum and fresh ideas. The demos were impressive too!
    • Processing GeoSpatial Data at Scale was very useful to anyone interested in this topic, as it provided a thorough overview of the domain, including challenges and technologies. Stay tuned for some new innovation coming from the IncQuery team in this area in the near future.
    • Finally, I'm very happy to see GS Collections joining the Eclipse family under the name Eclipse Collections framework. The talk was one of the best at the conference. In fact, we are planning to evaluate this technology for use in IncQuery, to optimize the memory footprint.

    We would like to thank the Budapest University of Technology and Economics, the MONDO and CONCERTO EU FP7 Projects, and IncQuery Labs Ltd. for supporting our talks at the EclipseCon Europe 2015 conference.

    by István Ráth at November 21, 2015 06:53 PM

    EMF Forms goes AngularJS

    by Maximilian Koegel and Jonas Helming at November 20, 2015 01:13 PM

    Over three years ago, we started the explicit development of EMF Forms as a sub component of the EMF Client Platform. The goal was to ease the development of data-centric form-based UIs based on a given EMF data model. Rather than manually coding UIs (e.g. in SWT), the approach is to describe them declaratively in a simple model language, which is focussed on the specification of forms – the View Model. A view model instance is then interpreted by a flexible and extensible rendering component to create a working UI at the end.
    The approach has been a major success and shows significant advantages over manual UI programming or the usage of WYSIWYG editors. Besides the lower development effort and the higher quality of the resulting form-based UI, another advantage is the technological flexibility of the approach. The view model, which specifies the UI, does not depend on a certain UI toolkit (e.g. SWT, GWT or JavaFX). Implementing new renderers allows you to switch the UI technology without respecifying the concrete UI itself. With renderers for the Remote Application Platform and Vaadin, EMF Forms is already used for web applications.
    EMF Forms has grown into a very active, frequently used project. This success motivates us to continuously drive the technology forward, extend its use cases, and, in doing so, attract new users. An obvious new field for applying the concepts of EMF Forms is the implementation of data-centric web applications. More and more complex business applications are developed as single-page web applications using JavaScript and frameworks such as AngularJS and EmberJS. These clients are then connected to backends using RESTful services. This JavaScript based area opens the gate for a large group of new users and use cases. Therefore, it is a logical next step to implement an AngularJS based renderer for EMF Forms.
    The former eponym “EMF” is rather unknown in the web area. Additionally, it loses its central role for defining the data schema and the UI schema. Therefore, a new name was required: JSON Forms. However, it is not difficult to guess which technology takes over the role of EMF.

    What happened before?

    Before we talk about the world of JavaScript, we would like to take a brief look back on the main concepts of EMF Forms. While with JSON Forms we switch to a new technology stack, the basic idea and the basic concepts remains the same as in EMF Forms. If you are already familiar with EMF Forms, you can probably skip this section.

    Both frameworks are based on the fact that typical UI toolkits are not focussed on the implementation of form-based UIs. Therefore, they make things unnecessarily complicated and require too much effort. For a typical input field displaying a String attribute, you need to implement a label and a text field, possibly validation, as well as the binding to the underlying data entity. This has to be repeated for all required fields in a form.

    In a declarative language, like the one provided by EMF/JSON Forms, you only need to specify the existence of a control, which references an attribute of the data schema. The whole creation of a fully functional UI, including validation and binding is then done by a rendering component.

    Analogously, layouts are specified, in which controls can be embedded. As for controls, the declarative approach focuses on concepts, which are typically required in form-based UIs. Instead of complex layouts, such as GridLayout, the declarative language provides simple concepts such as groups or columns. This abstraction makes the specification of UIs much simpler and more efficient. Additionally, the central rendering component, which replaces a great deal of the manually written UI code, improves the adaptability and maintenance. Any desired change to the UI must be applied only once on the renderer. Further information on EMF Forms can be found here.

    Last but not least, the declarative description of the UI is technology independent. Only the renderer component is bound to a specific toolkit. Therefore, it is possible to migrate existing UIs to new technologies. JSON Forms provides a complete new technology stack, based on HTML, CSS, JavaScript, JSON, JSON Schema and AngularJS. However, our goal is to keep the declarative description compatible, so it is also possible to reuse existing “view models” from EMF Forms on the new stack.

    A new technology stack

    As mentioned before, the general concept of EMF Forms has been maintained. The declarative way of describing UIs and the central rendering approach provide the same advantages and can be efficiently implemented in a client-server oriented browser application. To display a form, JSON Forms still needs three artefacts: the definition of the displayed entity (Data Model), the declarative description of the UI (View Model) and finally the data entity to be displayed (Data Model Instance). In EMF Forms, all those artefacts are modelled in EMF. As the name already implies, we use JSON instead of EMF for JSON Forms. More precisely, we use JSON Schema for the data model, and JSON objects for the view model and the data entity. Therefore, in JSON Forms, we use the terms “data schema” and “ui schema” instead of “data model” and “View Model”.
    The following example shows a very simple data schema, which defines a data object with only two attributes. The specified entity shall be displayed in a form-based UI later on. That means there are controls to enter both attributes, “Name” and “Gender”. The schema defines the possible values for “Gender”, which is an enumeration. The “Name” field is specified as mandatory. Both constraints are considered by JSON Forms.

    {
      "type": "object",
      "properties": {
        "name": {
          "type": "string"
        },
        "gender": {
          "type": "string",
          "enum": [ "Male", "Female" ]
        }
      },
      "required": [ "name" ]
    }

    The schema only describes the data to be displayed, it does not specify how the data should be rendered in the form. This is specified in a second JSON-based artifact, the “ui schema” (see listing below). The ui schema references the data schema, more precisely the attributes defined in the data schema. The ui schema element “Control” specifies that a certain attribute shall be rendered at a specific location in the UI. Therefore, it references the specific attribute from the data schema (see the JSON attribute “scope” in the following example). The renderer is now responsible for displaying the attribute; accordingly, it also selects a UI element, e.g. a text field for a string attribute. Therefore, you do not need to specify that there should be a drop down for the “Gender” attribute or which values it should contain. Also, you do not need to specify additional labels if the labels should just show the name of the attribute. If you want to show another label, you can optionally specify it in the ui schema. As you can see, JSON Forms derives a lot of information directly from the data schema and therefore avoids duplicate specification.
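The "derived from the data schema" behaviour can be illustrated with a few lines of plain JavaScript. This is a conceptual sketch only (the function pickWidget and the returned descriptors are invented for illustration, not part of the JSON Forms API):

```javascript
// Conceptual sketch: pick a widget for a property of the data schema.
// This is NOT the JSON Forms API, just an illustration of the idea.
function pickWidget(propertySchema) {
  if (propertySchema.enum) {
    // an enumeration is rendered as a drop down filled with its values
    return { widget: "dropdown", options: propertySchema.enum };
  }
  if (propertySchema.type === "string") {
    return { widget: "textfield" };
  }
  return { widget: "label" };
}

var gender = { type: "string", enum: ["Male", "Female"] };
var name = { type: "string" };

console.log(pickWidget(gender).widget); // prints "dropdown"
console.log(pickWidget(name).widget);   // prints "textfield"
```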

    The ui schema also allows structuring controls with container elements. This allows you to specify a logical layout. A simple example is the HorizontalLayout, which displays all children elements next to each other. As in EMF Forms, there are more complex layouts in practice, for example groups, stacks, or multi-page forms. However, the basic concept remains the same: the ui schema specifies, in a simple way, how the form-based UI is structured; the rendering component takes care of creating a concrete UI. The following example ui schema shows two controls structured in a horizontal layout:

    {
      "type": "HorizontalLayout",
      "elements": [
        {
          "type": "Control",
          "scope": { "$ref": "#/properties/name" }
        },
        {
          "type": "Control",
          "scope": { "$ref": "#/properties/gender" }
        }
      ]
    }

    This simple and concise specification of the data schema and the ui schema is already enough to render a fully functional form-based UI. This is done by the JSON Forms rendering component described in the following section.


    The two artifacts, the data schema and the ui schema, are now rendered to a UI. That is done by the rendering component. In JSON Forms, it uses HTML for defining the visible elements of the UI and JavaScript for the implementation of any behavior, such as data binding and validation. To ease the implementation and also offer state-of-the-art features such as bi-directional data binding, we additionally use the AngularJS framework. It has had a growing user base since its publication in 2012, which is already a long period in the volatile area of web frameworks.

    The rendering component consists of several registered renderers. Every renderer is responsible for rendering a specific element of the ui schema. This allows the modular development of new renderers. Additionally, specific renderers can be replaced.
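The registry idea can be sketched in a few lines of plain JavaScript. Again, this is only an illustration of the concept (register, renderElement and the tester/score convention are invented names, not the actual JSON Forms API):

```javascript
// Conceptual sketch of a renderer registry (not the actual JSON Forms API).
var renderers = [];

// Each renderer registers a tester (applicability check; higher score = more
// specific, negative = not applicable) and a render function.
function register(tester, render) {
  renderers.push({ tester: tester, render: render });
}

function renderElement(uiSchemaElement) {
  // pick the applicable renderer with the highest tester score
  var best = renderers
    .map(function (r) { return { score: r.tester(uiSchemaElement), render: r.render }; })
    .filter(function (r) { return r.score >= 0; })
    .sort(function (a, b) { return b.score - a.score; })[0];
  return best.render(uiSchemaElement);
}

// generic renderer for all controls
register(function (e) { return e.type === "Control" ? 1 : -1; },
         function (e) { return "<input data-ref='" + e.scope.$ref + "'>"; });

// a more specific renderer, overriding only controls bound to #/properties/gender
register(function (e) {
  return e.type === "Control" && e.scope.$ref === "#/properties/gender" ? 2 : -1;
}, function (e) { return "<radio-group data-ref='" + e.scope.$ref + "'>"; });

console.log(renderElement({ type: "Control", scope: { $ref: "#/properties/name" } }));
// prints "<input data-ref='#/properties/name'>"
```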


    A frequently raised question is why AngularJS alone is not enough to efficiently develop form-based web UIs. AngularJS definitely provides a lot of support for the development of web UIs; however, implementing the desired features manually is still required. To implement the example above, you have to complete several manual steps in AngularJS. First you have to create an HTML template defining all UI elements such as the text box, the labels and the drop down element. This can lead to complex HTML documents, especially for larger forms. Then, you have to manually set the AngularJS directives on those elements to bind the data and to add validations. Finally, to achieve a homogenous look & feel, you need to layout and align all created elements, typically with CSS. That means, even for defining a very simple form, you have to deal with three different languages: JavaScript, HTML and CSS. In contrast, when using JSON Forms, you just have to define the data schema and the ui schema, both done in a simple JSON format. The rendered form just needs to be embedded into an existing web page using a custom directive.

    The declarative approach of JSON Forms especially pays off in case the underlying data schema is extended or changed. In this case, you just need to slightly adapt the ui schema, e.g. by adding a new control. Another advantage is that there is one implementation per UI element, enabled by the renderer registry. This allows you to adapt the form-based UI at a central place in a homogenous way. As an example, if you want to add a specific behavior to text fields, you just need to adapt the renderer for string attributes.

    Therefore, JSON Forms allows you to adapt and replace existing renderers. Renderers can be registered for the complete application, e.g. for all text fields, or alternatively for specific attributes, e.g. only for the string attribute “name”. As an example, if you want to show the enumeration “Gender” from before as radio buttons instead of a drop down box, you could adapt the renderer for enumerations. Alternatively, you can make a renderer depend on a certain condition, e.g. you can adapt the renderer for all enumerations with only two possible values.


    After the renderer has created a ready-to-use UI, the question remains how this can be used in an existing application. In the following section, we describe how to embed JSON Forms into any web application.


    To use the rendered form at the end, it is typically embedded into an existing application. As for EMF Forms, our focus is on a non-invasive integration. That means it is possible to easily integrate the framework into an existing application. Additionally, it should integrate well with existing frameworks, such as Bootstrap.

    For embedding JSON Forms into an HTML page, we use a specific directive (as often done in AngularJS). The JSON Forms directive specifies the data schema, the data entity as well as the ui schema to be shown (see the following code example). The values of the attributes “schema”, “ui-schema” and “input” must be in the scope of the current controller. Therefore, they can be retrieved from any kind of source, which allows full flexibility in connecting a backend. In a typical application, the data schema and the ui schema would be static content, while the data entity is retrieved from a REST service.


    <jsonforms schema="mySchema" ui-schema="myUiSchema" input="myDataObject"/>
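    For illustration, the three values referenced by the directive could look like the following plain objects. The concrete schemata shown here are made-up examples (the attribute names and the Control/VerticalLayout structure are assumptions based on the concepts described above); in an AngularJS controller they would be assigned to the scope as “mySchema”, “myUiSchema” and “myDataObject”:

```javascript
// Illustrative example values for the three directive attributes.
// In a real application the data object would typically be fetched
// from a REST service, while both schemata are static content.
var mySchema = {
  type: 'object',
  properties: {
    name:   { type: 'string' },
    gender: { type: 'string', enum: ['Male', 'Female'] }
  }
};

var myUiSchema = {
  type: 'VerticalLayout',
  elements: [
    { type: 'Control', scope: { $ref: '#/properties/name' } },
    { type: 'Control', scope: { $ref: '#/properties/gender' } }
  ]
};

var myDataObject = { name: 'John Doe', gender: 'Male' };
```

    Note that the ui schema only references the data schema via JSON pointers; it contains no layout-independent information about the data itself, which is what keeps the two schemata independently maintainable.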


    In contrast to EMF Forms, JSON Forms uses JSON to represent the ui schema. The default JSON serialization is much easier to read than the EMF one. Further, JSON is a de-facto standard in the JavaScript world. A first ui schema can easily be created using any kind of text editor. However, good tooling for creating and modifying View Models (called ui schemata in JSON Forms) has been an important success factor for EMF Forms. As an example, the EMF Forms tooling allows one to generate a complete default view model based on a given data schema. This default model can then be iteratively adapted. This feature reduces the initial effort for creating a form-based UI even more.

    Of course, we also want to support the user as much as possible when specifying ui schemata in JSON Forms. The first good news is that you can directly export existing view models from EMF Forms to JSON Forms. This allows you to reuse any existing view model as well as the existing and well-known tooling for the creation of new ui schemata. Besides the ui schema, we also provide an export from an existing EMF model to a JSON data schema. Therefore, you can reuse the existing features of the EMF Forms and EMF/Ecore tooling.

    To export view models and Ecore models to JSON Forms, EMF Forms provides a new option in the right-click menu on both artifacts. This has been introduced in the 1.8.x development stream. Further documentation on how the exported ui schema can then be used with JSON Forms can be found on the JSON Forms homepage and in the EMF Forms / JSON Forms integration guide.

    In the medium term, we of course also want to address user groups outside of the Eclipse ecosystem with JSON Forms. Therefore, we are working on a ui schema editor as a pure web application. As with EMF Forms, the editor is of course based on JSON Forms itself; the framework thus bootstraps itself.


    With JSON Forms, EMF Forms enters the JavaScript world and the area of typical single-page web applications. The well-proven concepts of EMF Forms are transferred seamlessly, while the implementation is adapted to the new context and the requirements of typical web applications. Technically, JSON Forms is based on HTML, CSS, JavaScript, JSON, JSON Schema and AngularJS. JSON Forms can be used independently of Eclipse. However, you can reuse the data models and view models from EMF Forms, so it should be easy for existing EMF Forms users to get started with JSON Forms. The most important concepts of EMF Forms are already supported in JSON Forms, and we are actively working on completing the framework.

    The development of JSON Forms does not mean we will lose our focus on EMF Forms. We plan to continuously drive it forward and extend its vital user base.

    One advantage of the declarative approach is especially relevant on the web, i.e. for JSON Forms: the independence from a specific UI toolkit. If you manually develop web UIs, you bind the application to a specific JavaScript framework, e.g. AngularJS or Ember.js. This is a technical risk, especially in the area of web frameworks. Angular has been quite popular for the past three years, which is already a long time for a JavaScript framework. However, at the end of 2015, the next major version will be published, which is not fully backwards compatible. With JSON Forms, you partially mitigate this risk: the actual specification of your custom forms is done in a declarative, UI-technology-independent JSON format. By adding new renderers, you can reuse this specification with new JavaScript frameworks or new versions of them.

    If you want to try JSON Forms, please visit our website; it provides tutorials and a running live example where you can specify a ui schema online. Further information about EMF Forms and first steps with JSON Forms can also be found on the EMF Forms website. Finally, we will soon start a blog series which provides detailed tutorials on how to implement complex forms with JSON Forms based on an example application. If you want to learn about the ongoing development of JSON Forms, please follow us on Twitter.


    3 Comments. Tagged with emf, emfforms, JSON, jsonfo

    by Maximilian Koegel and Jonas Helming at November 20, 2015 01:13 PM

    New Releases of Eclipse IoT Projects Advance IoT Open Source Technology

    November 19, 2015 02:00 PM

    These projects and the Eclipse IoT ecosystem provide open source IoT technology for developers to build IoT solutions.
