Configuring OSGi Declarative Services

by Dirk Fauth at September 26, 2016 06:40 AM

In my blog post about Getting Started with OSGi Declarative Services I provided an introduction to OSGi declarative services. How to create them, how they behave at runtime, how to reference other services, and so on. But I left out an important topic there: configuring OSGi components. Well to be precise I mentioned it, and one sort of configuration was also used in the examples, but it was not explained in detail. As there are multiple aspects with regards to component configuration I wanted to write a blog post that is dedicated to that topic, and here it is.

After reading this blog post you should have a deeper understanding of how OSGi components can be configured.

Basics

A component can be configured via Component Properties. Properties are key-value pairs that can be accessed via Map<String, Object>. With DS 1.3, Component Property Types were introduced for type-safe access to Component Properties.

Component Properties can be defined in different ways:

  • inline
  • via Java properties file
  • via OSGi Configuration Admin
  • via argument of the ComponentFactory.newInstance method
    (only for factory components, and as I didn’t cover them in the previous blog post, I won’t cover that topic here either)

Component Properties that are defined inline or via properties file can be overridden by using the OSGi Configuration Admin or the ComponentFactory.newInstance argument. Basically the property propagation is executed sequentially. Therefore it is even possible to override inline properties with properties from a properties file, if the properties file is specified after the inline properties.

The SCR (Service Component Runtime) always adds the following Component Properties that can’t be overridden:

  • component.name – The component name.
  • component.id – A unique value (Long) that is larger than all previously assigned values. These values are not persistent across restarts.

In a life cycle method (activate/modified/deactivate) you can get the Component Properties via method parameter. The properties that are retrieved in event methods for referencing other services (bind/updated/unbind) are called Service Properties. The SCR performs a property propagation in that case, which means that all non-private Component Properties are propagated as Service Properties. To mark a property as private, the property name needs to be prefixed with a full stop (‘.’).

First I will explain how to specify Component Properties in different ways. I will use a simple example that inspects the properties in a life cycle method. After that I will show some examples on the usage of properties of service references.

Let’s start to create a new project for the configurable components:

  • Create a new Plug-in Project via File -> New -> Plug-in Project. (Plug-in Perspective needs to be active)
    • Set the Plug-in name to org.fipro.ds.configurable
    • Press Next
    • Ensure that no Activator is generated, no UI contributions will be added and that no Rich Client Application is created
    • Press Finish
  • Open the MANIFEST.MF file and switch to the Dependencies tab
  • Add the following dependency on the Imported Packages side:
    • org.osgi.service.component.annotations (1.2.0)
  • Mark org.osgi.service.component.annotations as Optional via Properties… to ensure there are no runtime dependencies. We only need this dependency at build time.
  • Create the package org.fipro.ds.configurable

Inline Component Properties

You can add Component Properties to a declarative service component via the @Component annotation property type element. The value of that annotation type element is an array of Strings, which need to be given as key-value pairs in the format
<name>(:<type>)?=<value>
where the type information is optional and defaults to String.

The following types are supported:

  • String (default)
  • Boolean
  • Byte
  • Short
  • Integer
  • Long
  • Float
  • Double
  • Character

There are typically two use cases for specifying Component Properties inline:

  • Define default values for Component Properties
  • Specify some sort of meta-data that is examined by referencing components

Of course the same applies for Component Properties that are applied via Properties file, as they have an equal ranking.

  • Create a new class StaticConfiguredComponent like shown below.
    It is a simple Immediate Component with the Component Properties message and iteration, where message is a String and iteration is an Integer value. In the @Activate method the Component Properties will be inspected and the message will be printed out to the console as often as specified in iteration.
    Remember that it is an Immediate Component, as it doesn’t implement an interface and it doesn’t specify the service type element.
package org.fipro.ds.configurable;

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

@Component(
    property = {
        "message=Welcome to the inline configured service",
        "iteration:Integer=3"
    }
)
public class StaticConfiguredComponent {

    @Activate
    void activate(Map<String, Object> properties) {
        String msg = (String) properties.get("message");
        Integer iter = (Integer) properties.get("iteration");

        for (int i = 1; i <= iter; i++) {
            System.out.println(i + ": " + msg);
        }
        System.out.println();
    }
}

Now execute the example as a new OSGi Framework run configuration (please have a look at Getting Started with OSGi Declarative Services – 6. Run to see how to setup such a configuration). If you used the same property values as specified in the above example, you should see the welcome message printed out 3 times to the console.

It is for sure not a typical use case to inspect the inline specified properties at activation time. But it should give an idea on how to specify Component Properties statically inline via @Component.

Component Properties from resource files

Another way to specify Component Properties statically is to use a Java Properties File that is located inside the bundle. It can be specified via the @Component annotation properties type element, where the value needs to be an entry path relative to the root of the bundle.

  • Create a simple properties file named config.properties inside the OSGI-INF folder of the org.fipro.ds.configurable bundle.
message=Welcome to the file configured service
iteration=4
  • Create a new class FileConfiguredComponent like shown below.
    It is a simple Immediate Component like the one before, getting the Component Properties message and iteration from the properties file.
package org.fipro.ds.configurable;

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

@Component(
    properties="OSGI-INF/config.properties"
)
public class FileConfiguredComponent {

    @Activate
    void activate(Map<String, String> properties) {
        String msg = (String) properties.get("message");
        String iter = (String) properties.get("iteration");

        if (msg != null && iter != null) {
            Integer count = Integer.valueOf(iter);
            for (int i = 1; i <= count; i++) {
                System.out.println(i + ": " + msg);
            }
            System.out.println();
        }
    }
}
  • Add the OSGI-INF/config.properties file to the build.properties to include it in the resulting bundle jar file. This is of course only necessary in case you haven’t added the whole directory to the build.properties.
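
A sketch of how the corresponding build.properties entry could look (assuming the default PDE project layout; alternatively the whole OSGI-INF/ directory can be listed):

bin.includes = META-INF/,\
               .,\
               OSGI-INF/config.properties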

On executing the example you should now see the console outputs for both components.

I’ve noticed two things when playing around with the Java Properties File approach:

  • Compared with the inline properties it is not possible to specify a type. You can only get Strings, which leads to manual conversions (at least before DS 1.3 – see below).
  • The properties file needs to be located in the same bundle as the component. It cannot be added via a fragment.

Having these two facts in mind, there are not many use cases for this approach. IMHO this approach was intended to support client specific properties that are for example placed inside the bundle in the build process.

Bndtools vs. PDE

  • Create the config.properties file in the project root
  • Add the -includeresource instruction to the bnd.bnd file
    This is necessary to include the config.properties file to the resulting bundle jar file. The instruction should look similar to the following snippet to specify the destination and the source.

    -includeresource: OSGI-INF/config.properties=config.properties

    Note:
    The destination is on the left side of the assignment and the source is on the right.
    If only the source is specified (that means no assignment), the file is added to the bundle root without the folder where it is included in the sources.

Component Properties via OSGi Configuration Admin

Now let’s have a look at the dynamic configuration by using the OSGi Configuration Admin. For this we create a new component, although it would not be necessary, as we could also use one of the examples before (remember that we could override the statically defined Component Properties dynamically via the Configuration Admin). But I wanted to start with creating a new component, to have a class that can be directly compared with the previous ones.

To specify properties via Configuration Admin it is not required to use any additional type element. You only need to know the configuration PID of the component to be able to provide a configuration object for it. The configuration PID (Persistent IDentity) is used as a key for objects that need a configuration dictionary. With regards to the Component Configuration this means, we need the configuration PID to be able to provide the configuration object for the component.

The PID can be specified via the configurationPid type element of the @Component annotation. If not specified explicitly it is the same as the component name, which is the fully qualified class name, if not explicitly set to another value.

Via the configurationPolicy type element it is possible to configure the relationship between component and component configuration, e.g. whether there needs to be a configuration object provided via Configuration Admin to satisfy the component. The following values are available:

  • ConfigurationPolicy.OPTIONAL
    Use the corresponding configuration object if present, but allow the component to be satisfied even if the corresponding configuration object is not present. This is the default value.
  • ConfigurationPolicy.REQUIRE
    There must be a corresponding configuration object for the component
    configuration to become satisfied. This means that there needs to be a configuration object that is set via Configuration Admin before the component is satisfied and therefore can be activated. With this policy it is for example possible to control the startup order or component activation based on configurations.
  • ConfigurationPolicy.IGNORE
    Always allow the component configuration to be satisfied and do
    not use the corresponding configuration object even if it is present. This basically means that the Component Properties can not be changed dynamically using the Configuration Admin.

If a configuration change happens at runtime, the SCR needs to take actions based on the configuration policy. Configuration changes can be creating, modifying or deleting configuration objects. Corresponding actions can be for example that a Component Configuration becomes unsatisfied and therefore Component Instances are deactivated, or to call the modified life cycle method, so the component is able to react on a change.

To be able to react on a configuration change at runtime, a method to handle the modified life cycle can be implemented. Using the DS annotations this can be done by using the @Modified annotation, where the method parameters can be the same as for the other life cycle methods (see the Getting Started Tutorial for further information on that).

Note:
If you do not specify a modified life cycle method, the Component Configuration is deactivated and afterwards activated again with the new configuration object. This is true for the configuration policy require as well as for the configuration policy optional.

Now create a component similar to the previous ones, which should only be satisfied if a configuration object is provided via the Configuration Admin. It should also be prepared to react to configuration changes at runtime. Specify an alternative configuration PID so it is not necessary to use the fully qualified class name of the component.

  • Create a new class AdminConfiguredComponent like shown below.
    It is an Immediate Component that prints out a message for a specified number of iterations.

    • Specify the configuration PID AdminConfiguredComponent so it is not necessary to use the fully qualified class name of the component when trying to configure it.
    • Set the configuration policy REQUIRE, so the component will only be activated once a configuration object is set by the Configuration Admin.
    • Add life cycle methods for modified and deactivate to be able to play around with different scenarios.
package org.fipro.ds.configurable;

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ConfigurationPolicy;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Modified;

@Component(
    configurationPid = "AdminConfiguredComponent",
    configurationPolicy = ConfigurationPolicy.REQUIRE
)
public class AdminConfiguredComponent {

    @Activate
    void activate(Map<String, Object> properties) {
        System.out.println();
        System.out.println("AdminConfiguredComponent activated");
        printMessage(properties);
    }

    @Modified
    void modified(Map<String, Object> properties) {
        System.out.println();
        System.out.println("AdminConfiguredComponent modified");
        printMessage(properties);
    }

    @Deactivate
    void deactivate() {
        System.out.println("AdminConfiguredComponent deactivated");
        System.out.println();
    }

    private void printMessage(Map<String, Object> properties) {
        String msg = (String) properties.get("message");
        Integer iter = (Integer) properties.get("iteration");

        if (msg != null && iter != null) {
            for (int i = 1; i <= iter; i++) {
                System.out.println(i + ": " + msg);
            }
        }
    }
}

If we now execute our example, we will see nothing new. The reason is of course that there is no configuration object yet provided by the Configuration Admin.

Before we are able to do this we need to prepare our environment. That means that we need to install the Configuration Admin Service to the Eclipse IDE or the used Target Platform, as it is not part of the default installation.

To install the Configuration Admin to the Eclipse IDE you need to perform the following steps:

  • Select Help -> Install New Software… from the main menu
  • Select the Neon – http://download.eclipse.org/releases/neon repository
    (assuming you are following the tutorial with Eclipse Neon, otherwise use the matching update site)
  • Disable Group items by category
  • Filter for Equinox
  • Select the Equinox Compendium SDK
  • Click Next
  • Click Next
  • Accept the license agreement and Finish
  • Restart the Eclipse IDE to safely apply the changes

Now we can create a Gogo Shell command that will be used to change a configuration object at runtime.

  • Open MANIFEST.MF of org.fipro.ds.configurable
    • Add org.osgi.service.cm to the Imported Packages
  • Create a new package org.fipro.ds.configurable.command
  • Create a new class ConfigureServiceCommand in that package that looks similar to the following snippet.
    It is a Delayed Component that will be registered as a service for the ConfigureCommand class. It has a reference to the ConfigurationAdmin service, which is used to create/get the Configuration object for the PID AdminConfiguredComponent and updates the configuration with the given values.
package org.fipro.ds.configurable.command;

import java.io.IOException;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    property = {
        "osgi.command.scope=fipro",
        "osgi.command.function=configure"
    },
    service=ConfigureCommand.class
)
public class ConfigureCommand {

    ConfigurationAdmin cm;

    @Reference
    void setConfigurationAdmin(ConfigurationAdmin cm) {
        this.cm = cm;
    }

    public void configure(String msg, int count) throws IOException {
        Configuration config =
            cm.getConfiguration("AdminConfiguredComponent");
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("message", msg);
        props.put("iteration", count);
        config.update(props);
    }
}

Note:
The ConfigurationAdmin reference is a static reference. Therefore it doesn’t need an unbind method. If you follow the example with Eclipse Neon you will probably see an error mentioning the missing unbind method. Either implement the unbind method for now or disable the error via Preferences. This is fixed with Eclipse Oxygen M2.
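
A minimal sketch of such an unbind method, as counterpart to the setConfigurationAdmin method shown above:

void unsetConfigurationAdmin(ConfigurationAdmin cm) {
    // release the reference when the service goes away
    this.cm = null;
}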

Note:
The two Component Properties osgi.command.scope and osgi.command.function are specified inline. These are necessary so the Apache Gogo Shell recognizes the component as a service that can be triggered by entering the corresponding values as a command to the console. This shows the usage of Component Properties as additional meta-data that is examined by other components. Also note that we need to set the service type element, as only services can be referenced by other components.

To execute the example you need to include the org.eclipse.equinox.cm bundle to the Run configuration.

On executing the example you should notice that the AdminConfiguredComponent is not activated on startup, although it is an Immediate Component. Now execute the following command on the console: configure foo 2

As a result you should get an output like this:

AdminConfiguredComponent activated
1: foo
2: foo

If you execute the command a second time with different parameters (e.g. configure bar 3), the output should change to this:

AdminConfiguredComponent modified
1: bar
2: bar
3: bar

The component gets activated after we created a configuration object via the Configuration Admin. The reason for this is ConfigurationPolicy.REQUIRE, which means that there needs to be a configuration object for the component configuration in order to be satisfied. Subsequent executions change the configuration object, so the modified method is called then. Now you can play around with the implementation to get a better feeling. For example, remove the modified method and see how the component life cycle handling changes on configuration changes.

Note:
To start from a clean state again you need to check the option Clear the configuration area before launching in the Settings tab of the Run configuration.

Using the modified life cycle event makes it possible to react to configuration changes inside the component itself. To be able to react to configuration changes inside components that reference the service, the updated event method can be used.

  • Create a simple component that references the AdminConfiguredComponent to test this:
package org.fipro.ds.configurable;

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Modified;
import org.osgi.service.component.annotations.Reference;

@Component
public class AdminReferencingComponent {

    AdminConfiguredComponent component;

    @Activate
    void activate() {
        System.out.println("AdminReferencingComponent activated");
    }

    @Modified
    void modified() {
        System.out.println("AdminReferencingComponent modified");
    }

    @Deactivate
    void deactivate() {
        System.out.println("AdminReferencingComponent deactivated");
    }

    @Reference
    void setAdminConfiguredComponent(
        AdminConfiguredComponent comp, Map<String, Object> properties) {
        System.out.println("AdminReferencingComponent: set service");
        printMessage(properties);
    }

    void updatedAdminConfiguredComponent(
        AdminConfiguredComponent comp, Map<String, Object> properties) {
        System.out.println("AdminReferencingComponent: update service");
        printMessage(properties);
    }

    void unsetAdminConfiguredComponent(
        AdminConfiguredComponent comp) {
        System.out.println("AdminReferencingComponent: unset service");
    }

    private void printMessage(Map<String, Object> properties) {
        String msg = (String) properties.get("message");
        Integer iter = (Integer) properties.get("iteration");
        System.out.println("[" + msg + "|" + iter + "]");
    }
}
  • Configure the AdminConfiguredComponent to be a service component by adding the attribute service=AdminConfiguredComponent.class to the @Component annotation. Otherwise it can not be referenced.
@Component(
    configurationPid = "AdminConfiguredComponent",
    configurationPolicy = ConfigurationPolicy.REQUIRE,
    service=AdminConfiguredComponent.class
)
public class AdminConfiguredComponent {

Now execute the example and call the configure command two times. The result should look similar to this:

osgi> configure blubb 2
AdminConfiguredComponent activated
1: blubb
2: blubb
AdminReferencingComponent: set service
[blubb|2]
AdminReferencingComponent activated
osgi> configure dingens 3
AdminConfiguredComponent modified
1: dingens
2: dingens
3: dingens
AdminReferencingComponent: update service
[dingens|3]

Calling the configure command the first time triggers the activation of the AdminConfiguredComponent, which then can be bound to the AdminReferencingComponent, which is satisfied and therefore can be activated afterwards. The second execution of the configure command triggers the modified life cycle event of the AdminConfiguredComponent and the updated event method of the AdminReferencingComponent.

If you ask yourself why the AdminConfiguredComponent is still immediately activated, although we made it a service now, the answer is, because it is referenced by an Immediate Component. Therefore the target services need to be bound, which means the referenced services need to be activated too.

This example is also helpful in getting a better understanding of the component life cycle. For example, if you remove the modified life cycle method from the AdminConfiguredComponent and call the configure command subsequently, both components get deactivated and activated, which results in new instances. Modifying the @Reference attributes will also lead to different results then. Change the cardinality, the policy and the policyOption to see the different behavior. Making the service reference OPTIONAL|DYNAMIC|GREEDY results in only re-activating the AdminConfiguredComponent while keeping the AdminReferencingComponent in active state. Changing it to OPTIONAL|STATIC|GREEDY will lead to re-activation of both components, while with OPTIONAL|STATIC|RELUCTANT any configuration changes will be ignored; effectively nothing happens, as the reference in the AdminReferencingComponent never gets satisfied and therefore the AdminConfiguredComponent never gets activated.
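
As a sketch, this is how the reference in the AdminReferencingComponent could be changed to OPTIONAL|DYNAMIC|GREEDY for that experiment (the additional enums need to be imported from org.osgi.service.component.annotations, the event method bodies stay the same):

@Reference(
    cardinality = ReferenceCardinality.OPTIONAL,
    policy = ReferencePolicy.DYNAMIC,
    policyOption = ReferencePolicyOption.GREEDY
)
void setAdminConfiguredComponent(
        AdminConfiguredComponent comp, Map<String, Object> properties) {
    System.out.println("AdminReferencingComponent: set service");
    printMessage(properties);
}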

The correlation between cardinality, reference policy and reference policy option is explained in detail in the OSGi Compendium Specification (table 112.1 in chapter 112.3.7 Reference Policy Option in Specification Version 6).

Location Binding

Some words about location binding here. The example above created a configuration object using the single parameter version of ConfigurationAdmin#getConfiguration(String). The parameter specifies the PID for which a configuration object is requested or should be created. In this case the configuration is bound to the location of the calling bundle and can therefore not be consumed by other bundles. So this method is used to ensure that only the components inside the same bundle are affected.

A so-called bound configuration object is sufficient for the example above, as all created components are located in the same bundle. But there are also other cases where for example a configuration service in another bundle should be used to configure the components in all bundles of the application. This can be done by creating an unbound configuration object using the two argument version of ConfigurationAdmin#getConfiguration(String, String). The first parameter is the PID and the second parameter specifies the bundle location string.

Note:
The location parameter only becomes important if a configuration object will be created. If a configuration for the given PID already exists in the ConfigurationAdmin service, the location parameter will be ignored and the existing object will be returned.

You can use different values for the location argument:

  • Exact bundle location identifier
    In this case you explicitly specify the location identifier of the bundle to which the configuration object should be bound. The location identifier is set when a bundle is installed and typically it is a file URL that points to the bundle jar. It is therefore impossible to hard code it in a way that works across multiple installations. But you could retrieve it via a snippet similar to this:

    Bundle adminBundle =
        FrameworkUtil.getBundle(AdminConfiguredComponent.class);
    adminBundle.getLocation()

    But doing this introduces a dependency to the bundle that should be configured, which is typically not a good practice.

  • null
    The location value for the binding will be set when a service with the corresponding PID is registered the first time. Note that this could lead to issues if you have multiple services with the same PID in different bundles. In that case only the services in the first bundle that requests a configuration object would be able to get it because of the binding.
  • Multi-locations
    By using a multi-location binding, the configurations are dispatched to any target that has visibility to the configuration. A multi-location is specified with a leading question mark. It is possible to use only the question mark or adding a multi-location name behind the question mark, e.g.

    Configuration config =
        cm.getConfiguration("AdminConfiguredComponent", "?");
    Configuration config =
        cm.getConfiguration("AdminConfiguredComponent", "?org.fipro");

    Note:
    The multi-location name only has importance in case security is turned on and a ConfigurationPermission is specified. Otherwise it doesn’t have an effect. That means it cannot be used to restrict the targets based on the bundle symbolic name without security turned on.

Note:
The Equinox DS implementation has some bugs with regards to location binding. Basically the location binding is ignored. I had a discussion on Stackoverflow (thanks again to Neil Bartlett) and created the ticket Bug 493637 to address that issue. I also created Bug 501898 to report that multi-location binding doesn’t work.

To get familiar with the location binding basics create two additional bundles:

  • Create the bundle org.fipro.ds.configurator
    • Open the MANIFEST.MF file and switch to the Dependencies tab
    • Add the following dependencies on the Imported Packages side:
      • org.osgi.service.cm
      • org.osgi.service.component.annotations (1.2.0)
      • Mark org.osgi.service.component.annotations as Optional
    • Create the package org.fipro.ds.configurator
    • Create the class ConfCommand
      • Copy the ConfigureCommand implementation
      • Change the property value for osgi.command.function to conf
      • Change the method name from configure to conf to match the osgi.command.function property
  • Create the bundle org.fipro.ds.other
    • Open the MANIFEST.MF file and switch to the Dependencies tab
    • Add the following dependency on the Imported Packages side:
      • org.osgi.service.component.annotations (1.2.0)
      • Mark org.osgi.service.component.annotations as Optional
    • Create the package org.fipro.ds.other
    • Create the class OtherConfiguredComponent
      • Copy the AdminConfiguredComponent implementation
      • Change the console outputs to show the new class name
      • Ensure that it is an Immediate Component (i.e. remove the service property or add the immediate property)
      • Ensure that configurationPid and configurationPolicy are the same as in AdminConfiguredComponent (see the annotation sketch below)
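
A sketch of how the @Component annotation of the OtherConfiguredComponent could then look (the class body is a copy of AdminConfiguredComponent with adjusted console outputs):

@Component(
    configurationPid = "AdminConfiguredComponent",
    configurationPolicy = ConfigurationPolicy.REQUIRE,
    immediate = true
)
public class OtherConfiguredComponent {
...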

Test the following three scenarios by adapting the getConfiguration() call in the ConfCommand (a sketch of the variants is shown after the list):

  1. Use the single parameter getConfiguration(String)
    Calling the conf command on the console will result in nothing. As the configuration object is bound to the bundle of the command, the other bundles don’t see it and the contained components don’t get activated.
  2. Use the double parameter getConfiguration(String, String) where location == null
    Only the component(s) of one bundle will receive the configuration object, as it will be bound to the bundle that first registers a service for the corresponding PID.
  3. Use the double parameter getConfiguration(String, String) where location == “?”
    The components of both bundles will receive the configuration object, as it is dispatched to all bundles that have visibility to the configuration. And as we didn’t configure any permissions, all our bundles receive it.
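
A sketch of the three getConfiguration variants inside ConfCommand (only one of them is used per test run):

// 1. bound to the bundle that contains the ConfCommand
Configuration config = cm.getConfiguration("AdminConfiguredComponent");

// 2. bound to the bundle that first registers a service for this PID
Configuration config = cm.getConfiguration("AdminConfiguredComponent", null);

// 3. multi-location binding, visible to all bundles
//    (as long as no ConfigurationPermission restricts it)
Configuration config = cm.getConfiguration("AdminConfiguredComponent", "?");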

Note:
Because of the location binding issues in Equinox DS (see above), the examples don’t work with it. For testing I replaced Equinox DS with Apache Felix SCR in the Run Configuration, which worked well. To make this work just download SCR (Declarative Services) from the Apache Felix Download page and put it in the dropins folder of your Eclipse installation. After restarting the IDE you are able to select org.apache.felix.scr as a bundle in the Run Configuration. Remember to remove org.eclipse.equinox.ds to ensure that only one SCR implementation is running.

Bndtools vs. PDE

For the org.fipro.ds.configurable bundle you need to add the package org.fipro.ds.configurable.command to the Private Packages in the bnd.bnd file. Otherwise it will not be part of the resulting bundle.

While we needed to add the Import-Package statement for org.osgi.service.cm manually in PDE, that import is automatically calculated by Bndtools. So at that point there is no action necessary. Only the launch configuration needs to be updated manually to include the Configuration Admin bundle.

  • Open the launch.bndrun file
  • On the Run tab click on Resolve
  • Verify the values shown in the opened dialog in the Required Resources section
  • Click Finish

If you change a component class while the example is running, you will notice that the OSGi framework automatically restarts and the values set before via Configuration Admin are gone. This is because the Bndtools OSGi Framework launch configuration has two options enabled by default on the OSGi tab:

  • Framework: Update bundles during runtime.
  • Framework: Clean storage area before launch.

To test the behavior of components in case of persisted configuration values, you need to disable these settings.

DS 1.3

A new feature added in the DS 1.3 specification is Component Property Types. They can be used as an alternative to the component property Map<String, Object> parameter for retrieving the Configuration Properties in a life cycle method. The Component Property Type is specified as a custom annotation type that defines the property names, property types and default values. The following snippet shows the definition of such an annotation for the above examples:

package org.fipro.ds.configurable;

public @interface MessageConfig {
    String message() default "";
    int iteration() default 0;
}

Most of the examples found on the web show the definition of the annotation inside the component class. But of course it is also possible to create a public annotation in a separate file so it is reusable in multiple components.

The following snippet shows one of the examples above, modified to use a Component Property Type instead of the property Map<String, Object>.

package org.fipro.ds.configurable;

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

@Component(
    property = {
        "message=Welcome to the inline configured service",
        "iteration:Integer=3"
    }
)
public class StaticConfiguredComponent {

    @Activate
    void activate(MessageConfig config) {
        String msg = config.message();
        int iter = config.iteration();

        for (int i = 1; i <= iter; i++) {
            System.out.println(i + ": " + msg);
        }
    }
}

Note:
If properties are needed that are not specified in the Component Property Type, you can have both as method arguments. Since DS 1.3 there are different method signatures supported, including the combination of Component Property Type and the component property Map<String, Object>.
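
As a sketch, such a combined DS 1.3 activate method could look like this (some.other.property is just a made-up key for illustration):

@Activate
void activate(MessageConfig config, Map<String, Object> properties) {
    // type safe access to the properties known by the Component Property Type
    String msg = config.message();
    int iter = config.iteration();

    // generic access to any additional property, e.g. one set via the Configuration Admin
    Object other = properties.get("some.other.property");

    System.out.println(msg + " / " + iter + " / " + other);
}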

Although the Component Property Type is defined as an annotation type, it is not used as an annotation. The reasons for choosing annotation types are:

  • The limitations on annotation type definitions (only no-argument methods with a limited set of return types) match the needs of component property types
  • Support of default values

As Component Property Types are intended to be type safe, an automatic conversion happens. This is also true for Component Properties that are specified via Java Properties files.
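
As a sketch, the FileConfiguredComponent from above could therefore drop the manual String-to-Integer conversion when using the Component Property Type:

@Component(
    properties = "OSGI-INF/config.properties"
)
public class FileConfiguredComponent {

    @Activate
    void activate(MessageConfig config) {
        // iteration is a String in the properties file,
        // but is converted automatically by the SCR
        for (int i = 1; i <= config.iteration(); i++) {
            System.out.println(i + ": " + config.message());
        }
    }
}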

To set configuration values via the ConfigurationAdmin service you still need to operate on a Dictionary, which means you need to know the property names. But when reading the values via the Component Property Type, the access is type safe.

Another new feature in DS 1.3 is that you can specify multiple configuration PIDs for a component. This way it is for example possible to specify configuration objects for multiple components that share a common PID, while at the same time having a specific configuration object for a single component. To specify multiple configuration PIDs and still keep the default (that is the component name), the placeholder “$” can be used. By adding the following property to the StaticConfiguredComponent and the FileConfiguredComponent created before, the execution of the configure command will update all three components at once.

@Component(
    configurationPid = {"$", "AdminConfiguredComponent"},
    ...
)

Note that we don’t update the configurationPid value of AdminConfiguredComponent. The reason for this is that we use the configuration policy REQUIRE, which means that the component only gets satisfied if there are configuration objects available for BOTH configuration PIDs. And our example does not create a configuration object for the default PID of the AdminConfiguredComponent.

The order of the configuration PIDs matters with regards to property propagation. The configuration object for a PID at the end overrides values that were applied by another configuration object for a PID before. This is similar to the propagation of inline properties or property files. The processing is sequential and therefore later processed instructions override previous ones.

Service Properties

As initially explained there is a slight difference between Component Properties and Service Properties. Component Properties are all properties specified for a component, which can be accessed in life cycle methods via method parameter. Service Properties can be retrieved via event methods (bind/updated/unbind) or, since DS 1.3, via field strategy. They contain all public Component Properties, which means all excluding those whose property names start with a full stop. Additionally some properties are added that are intended to give further information about the service. These properties are prefixed with service., set by the framework, and specified in the OSGi Core Specification (service.id, service.scope and service.bundleid).

To play around with Service Properties we set up another playground. For this create the following bundles to simulate a data provider service:

  • API bundle
    • Create the bundle org.fipro.ds.data.api
    • Add the following service interface
      package org.fipro.ds.data;
      
      public interface DataService {
      
          /**
           * @param id
           * The id of the requested data value.
           * @return The data value for the given id.
           */
          String getData(int id);
      }
    • Modify the MANIFEST.MF to export the package
  • Online data service provider bundle
    • Create the bundle org.fipro.ds.data.online
    • Add the necessary package import statements to the MANIFEST.MF
    • Create the following simple service implementation, that specifies the property fipro.connectivity=online for further use
      package org.fipro.ds.data.online;
      
      import org.fipro.ds.data.DataService;
      import org.osgi.service.component.annotations.Component;
      
      @Component(property="fipro.connectivity=online")
      public class OnlineDataService implements DataService {
      
          @Override
          public String getData(int id) {
              return "ONLINE data for id " + id;
          }
      }
  • Offline data service provider bundle
    • Create the bundle org.fipro.ds.data.offline
    • Add the necessary package import statements to the MANIFEST.MF
    • Create the following simple service implementation, that specifies the property fipro.connectivity=offline for further use
      package org.fipro.ds.data.offline;
      
      import org.fipro.ds.data.DataService;
      import org.osgi.service.component.annotations.Component;
      
      @Component(property="fipro.connectivity=offline")
      public class OfflineDataService implements DataService {
      
          @Override
          public String getData(int id) {
              return "OFFLINE data for id " + id;
          }
      }

Note:
Following Java best practices you would of course specify the property name and the possible values as constants in the API bundle to prevent typos.
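
A minimal sketch of such a constants class in the API bundle (the class and constant names are assumptions, not taken from the tutorial sources):

package org.fipro.ds.data;

public final class DataServiceConstants {

    private DataServiceConstants() {
        // constants only, no instances
    }

    public static final String CONNECTIVITY = "fipro.connectivity";
    public static final String CONNECTIVITY_ONLINE = "online";
    public static final String CONNECTIVITY_OFFLINE = "offline";
}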

To be able to interact with the data provider services, we create an additional console command that references the services and shows the retrieved data on the console on execution. Add it to the bundle org.fipro.ds.configurator or create a new bundle if you skipped the location binding example.

package org.fipro.ds.configurator;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.fipro.ds.data.DataService;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

@Component(
    property= {
        "osgi.command.scope:String=fipro",
        "osgi.command.function:String=retrieve"},
    service=DataRetriever.class
)
public class DataRetriever {

    private List<DataService> dataServices = new ArrayList<>();

    @Reference(
        cardinality=ReferenceCardinality.MULTIPLE,
        policy=ReferencePolicy.DYNAMIC
    )
    void addDataService(
            DataService service, Map<String, Object> properties) {
        this.dataServices.add(service);
        System.out.println(
            "Added " + service.getClass().getName());
    }

    void removeDataService(DataService service) {
        this.dataServices.remove(service);
        System.out.println(
            "Removed " + service.getClass().getName());
    }

    public void retrieve(int id) {
        for (DataService service : this.dataServices) {
            System.out.println(service.getData(id));
        }
    }
}

Add the new bundles to an existing Run Configuration and execute it. By calling the retrieve command on the console you should get an output similar to this:

osgi> retrieve 3
OFFLINE data for id 3
ONLINE data for id 3

Nothing special so far. Now let’s modify the example to verify the Service Properties.

  • Modify DataRetriever#addDataService() to print the given properties to the console
    @Reference(
        cardinality=ReferenceCardinality.MULTIPLE,
        policy=ReferencePolicy.DYNAMIC
    )
    void addDataService(
            DataService service, Map<String, Object> properties) {
        this.dataServices.add(service);
    
        System.out.println("Added " + service.getClass().getName());
        properties.forEach((k, v) -> {
            System.out.println(k+"="+v);
        });
        System.out.println();
    }
  • Start the example and execute the retrieve command. The result should now look like this:
    osgi> retrieve 3
    org.fipro.ds.data.offline.OfflineDataService
    fipro.connectivity=offline
    component.id=3
    component.name=org.fipro.ds.data.offline.OfflineDataService
    service.id=51
    objectClass=[Ljava.lang.String;@1403f0fa
    service.scope=bundle
    service.bundleid=5
    
    org.fipro.ds.data.online.OnlineDataService
    fipro.connectivity=online
    component.id=4
    component.name=org.fipro.ds.data.online.OnlineDataService
    service.id=52
    objectClass=[Ljava.lang.String;@c63166
    service.scope=bundle
    service.bundleid=6
    
    OFFLINE data for id 3
    ONLINE data for id 3

    The Service Properties contain the fipro.connectivity property specified by us, as well as several properties that are set by the SCR.

    Note:
     The DataRetriever is not an Immediate Component and therefore gets activated when the retrieve command is executed the first time. The target services are bound at activation time, therefore the setter is called at that time and not before.

  • Modify the OfflineDataService
    • Add an Activate life cycle method
    • Add a property with a property name that starts with a full stop
    package org.fipro.ds.data.offline;
    
    import java.util.Map;
    
    import org.fipro.ds.data.DataService;
    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    
    @Component(
        property= {
            "fipro.connectivity=offline",
            ".private=private configuration"
        }
    )
    public class OfflineDataService implements DataService {
    
        @Activate
        void activate(Map<String, Object> properties) {
            System.out.println("OfflineDataService activated");
            properties.forEach((k, v) -> {
                System.out.println(k+"="+v);
            });
            System.out.println();
        }
    
        @Override
        public String getData(int id) {
            return "OFFLINE data for id " + id;
        }
    }

    Execute the retrieve command again and verify the console output. You will notice that the output from the Activate life cycle method contains the .private property but no properties with a service prefix. The output from the bind event method on the other hand does not contain the .private property, as the leading full stop marks it as a private property.

    osgi> retrieve 3
    OfflineDataService activated
    objectClass=[Ljava.lang.String;@c60d42
    component.name=org.fipro.ds.data.offline.OfflineDataService
    component.id=3
    .private=private configuration
    fipro.connectivity=offline
    
    org.fipro.ds.data.offline.OfflineDataService
    fipro.connectivity=offline
    component.id=3
    component.name=org.fipro.ds.data.offline.OfflineDataService
    service.id=51
    objectClass=[Ljava.lang.String;@2b5d77a6
    service.scope=bundle
    service.bundleid=5
    
    ...

Service Ranking

In case multiple services of the same type are available, the service ranking is taken into account to determine which service will get bound. In case of multiple bindings the service ranking affects the order in which the services are bound. The ranking order is defined as follows:

  • Sorted on descending ranking order (highest first)
  • If the ranking numbers are equal, sorted on ascending service.id property (oldest first)

As service ids are never reused and handed out in order of their registration time, the ordering is always complete.

The property service.ranking can be used to specify the ranking order and in case of OSGi components it can be specified as a Component Property via @Component where the value needs to be of type Integer. The default ranking value is zero if the property is not specified explicitly.

Modify the two DataService implementations to specify the initial service.ranking property.

@Component(
    property = {
        "fipro.connectivity=online",
        "service.ranking:Integer=7"
    }
)
public class OnlineDataService implements DataService {
...
@Component(
    property = {
        "fipro.connectivity=offline",
        "service.ranking:Integer=5",
        ".private=private configuration
    }
)
public class OfflineDataService implements DataService {
...

If you start the application and execute the retrieve command now, you will notice that the OnlineDataService is called first. Change the service.ranking of the OnlineDataService to 3 and restart the application. Now executing the retrieve command will first call the OfflineDataService.

To make this more obvious and show that the service ranking can also be changed dynamically, create a new DataGetter command in the org.fipro.ds.configurator bundle:

package org.fipro.ds.configurator;

import java.util.Map;

import org.fipro.ds.data.DataService;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferencePolicy;
import org.osgi.service.component.annotations.ReferencePolicyOption;

@Component(
    property= {
        "osgi.command.scope:String=fipro",
        "osgi.command.function:String=get"
    },
    service=DataGetter.class
)
public class DataGetter {

    private DataService dataService;

    @Reference(
        policy=ReferencePolicy.DYNAMIC,
        policyOption=ReferencePolicyOption.GREEDY
    )
    void setDataService(DataService service,
            Map<String, Object> properties) {
        this.dataService = service;
    }

    void unsetDataService(DataService service) {
        if (service == this.dataService) {
            this.dataService = null;
        }
    }

    public void get(int id) {
        System.out.println(this.dataService.getData(id));
    }
}

This command has a MANDATORY reference to a DataService. The policy option is set to GREEDY which is necessary to bind to a higher ranked service if available. The policy is set to DYNAMIC to avoid re-activation of the DataGetter component if a service changes. If you change the policy to STATIC, the binding to the higher ranked service is done by re-activating the component.

Note:
For dynamic references the unbind event method is mandatory. This is necessary because the component is not re-activated if the bound services change, which means there will be no new Component Instance. Therefore the state of the Component Instance needs to be kept consistent in the unbind method. In our case we check whether the currently bound service is the same one that should be unbound. Only then do we set the reference to null; otherwise another service is already bound.

Finally create a toggle command, which dynamically toggles the service.ranking property of OnlineDataService.

package org.fipro.ds.configurator;

import java.io.IOException;
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    property= {
        "osgi.command.scope:String=fipro",
        "osgi.command.function:String=ranking"
    },
    service=ToggleRankingCommand.class
)
public class ToggleRankingCommand {

    ConfigurationAdmin admin;

    @Reference
    void setConfigurationAdmin(ConfigurationAdmin admin) {
        this.admin = admin;
    }

    public void ranking() throws IOException {
        Configuration configOnline =
            this.admin.getConfiguration(
                "org.fipro.ds.data.online.OnlineDataService",
                null);
        Dictionary<String, Object> propsOnline = null;
        if (configOnline != null
                && configOnline.getProperties() != null) {
            propsOnline = configOnline.getProperties();
        } else {
            propsOnline = new Hashtable<>();
        }

        int onlineRanking = 7;
        if (configOnline != null
                && configOnline.getProperties() != null) {
            Object rank =
                configOnline.getProperties().get("service.ranking");
            if (rank != null) {
                onlineRanking = (Integer)rank;
            }
        }

        // toggle between 3 and 7
        onlineRanking = (onlineRanking == 7) ? 3 : 7;

        propsOnline.put("service.ranking", onlineRanking);
        configOnline.update(propsOnline);
    }
}

Starting the example application the first time and executing the get command will return the ONLINE data. After executing the ranking command, the get command will return the OFFLINE data (or vice versa dependent on the initial state).

Note:
Equinox DS will log an error or warning to the console every second time. Probably an issue with processing the service reference update in Equinox DS. The example will still work, and if you replace Equinox DS with Felix SCR the message does not come up. So it looks like another Equinox DS issue.

Reference Properties

Reference Properties are special Component Properties that are associated with specific component references. They are used to configure component references more specifically. With DS 1.2 the target property is the only supported Reference Property. The reference property name needs to follow the pattern <reference_name>.<reference_property> so it can be accessed dynamically. The target property can be specified via the @Reference annotation on the bind event method via the target annotation type element. The value needs to be an LDAP filter expression and is used to select target services for the reference. The following example specifies a target property for the DataService reference of the DataRetriever command to only select target services which specify the Service Property fipro.connectivity with value online.

@Reference(
    cardinality=ReferenceCardinality.MULTIPLE,
    policy=ReferencePolicy.DYNAMIC,
    target="(fipro.connectivity=online)"
)

If you change that in the example and execute the retrieve command in the console again, you will notice that only the OnlineDataService will be selected by the DataRetriever.

Specifying the target property directly on the reference is a static way of defining the filter. The registering of custom commands to the Apache Gogo Shell seems to work that way, as you can register any service to become a console command when the necessary properties are specified.

In a dynamic environment it needs to be possible to change the target property at runtime as well. This way it is possible to react to changes in the environment, for example whether there is an active internet connection or not. To change the target property dynamically you can use the ConfigurationAdmin service. For this the reference property name needs to be known. Following the pattern
    <reference_name>.<reference_property>
this means for our example where
    reference_name = DataService
    reference_property = target
the reference property name is
    DataService.target

To test this we implement a new command component in org.fipro.ds.configurator that allows us to toggle the connectivity state filter on the DataService reference target property.

package org.fipro.ds.configurator;

import java.io.IOException;
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    property= {
        "osgi.command.scope:String=fipro",
        "osgi.command.function:String=toggle"
    },
    service=ToggleConnectivityCommand.class
)
public class ToggleConnectivityCommand {

    ConfigurationAdmin admin;

    @Reference
    void setConfigurationAdmin(ConfigurationAdmin admin) {
        this.admin = admin;
    }

    public void toggle() throws IOException {
        Configuration config =
            this.admin.getConfiguration(
                "org.fipro.ds.configurator.DataRetriever");

        Dictionary<String, Object> props = null;
        Object target = null;
        if (config != null
                && config.getProperties() != null) {
            props = config.getProperties();
            target = props.get("DataService.target");
        } else {
            props = new Hashtable<String, Object>();
        }

        boolean isOnline = (target == null
            || target.toString().contains("online"));

        // toggle the state
        StringBuilder filter =
            new StringBuilder("(fipro.connectivity=");
        filter.append(isOnline ? "offline" : "online").append(")");

        props.put("DataService.target", filter.toString());
        config.update(props);
    }
}

Some things to notice here:

  1. We use the default PID org.fipro.ds.configurator.DataRetriever to get a configuration object.
  2. We check if there is already an existing configuration. If there is an existing configuration we operate on the existing Dictionary. Otherwise we create a new one.
  3. We try to get the current state from the Dictionary.
  4. We create an LDAP filter String based on the retrieved information (or default if the configuration is created) and set it as reference target property.
  5. We update the configuration with the new values.

From my observation the reference policy and reference policy option don’t matter in that case. On changing the reference target property dynamically, the component gets re-activated to ensure a consistent state.

DS 1.3

With DS 1.3 the Minimum Cardinality Reference Property was introduced. Via this reference property it is possible to modify the minimum cardinality value at runtime. While the @Reference cardinality attribute only allows specifying the optionality (that means 0 or 1), via the minimum cardinality property you can specify any positive number for MULTIPLE or AT_LEAST_ONE references. It can be used, for example, to specify that at least 2 services of a special type need to be available in order to satisfy the Component Configuration.

The name of the minimum cardinality property is the name of the reference appended with .cardinality.minimum. In our example this would be
DataService.cardinality.minimum

Note:
In the component description the minimum cardinality can only be specified via the cardinality attribute of the reference element, which only covers the optionality (0 or 1). To specify a higher minimum cardinality, the minimum cardinality reference property needs to be applied via the Configuration Admin.

Create a command component in org.fipro.ds.configurator to modify the minimum cardinality property dynamically. It should look like the following example:

package org.fipro.ds.configurator;

import java.io.IOException;
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    property = {
        "osgi.command.scope=fipro",
        "osgi.command.function=cardinality"
    },
    service=ToggleMinimumCardinalityCommand.class
)
public class ToggleMinimumCardinalityCommand {

    @Reference
    ConfigurationAdmin admin;

    public void cardinality(int count) throws IOException {
        Configuration config =
            this.admin.getConfiguration(
                "org.fipro.ds.configurator.DataRetriever");

        Dictionary<String, Object> props = null;
        if (config != null
                && config.getProperties() != null) {
            props = config.getProperties();
        } else {
            props = new Hashtable<String, Object>();
        }

        props.put("DataService.cardinality.minimum", count);
        config.update(props);
    }
}

Launch the example and execute retrieve 3. You should get a valid response like before from a single service (online or offline, depending on the target property that is set). Now if you execute cardinality 2 and afterwards retrieve 3, you should get a CommandNotFoundException. Checking the components on the console via scr:list will show that org.fipro.ds.configurator.DataRetriever now has an unsatisfied reference. Calling cardinality 1 afterwards will resolve that again.

Now you can play around and create additional services to test if this is also working for values > 1.

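If you want to verify a minimum cardinality of 2, a second online service could look like the following sketch. The DataService interface, its getData(int) method and the fipro.connectivity property are assumptions based on the services created in the earlier blog post.

package org.fipro.ds.data.online;

import org.fipro.ds.data.DataService;
import org.osgi.service.component.annotations.Component;

// hypothetical additional service whose fipro.connectivity property
// matches the target filter used by the toggle command above
@Component(property = "fipro.connectivity=online")
public class AlternativeOnlineDataService implements DataService {

    @Override
    public String getData(int id) {
        return "Alternative online data for id " + id;
    }
}
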
While I was writing this blog post, finding and reporting some issues in Equinox DS, the ticket Bug 501950 was created. If everything works out, Equinox DS will be replaced with Felix SCR. This would solve several issues and finally bring DS 1.3 to Eclipse as well. So I cross my fingers that this ticket will be resolved for Oxygen (which on the other hand means some work for the DS Annotations, @pnehrer).

That’s it for this blog post. It again got much longer than I intended. But while writing it I again learned a lot that wasn’t clear to me before. I hope you could also take something out of it and use declarative services even more in your projects.

Of course you can find the sources of this tutorial in my GitHub account:


by Dirk Fauth at September 26, 2016 06:40 AM

New Eclipse feature: See return value during debugging

by leoufimtsev at September 23, 2016 07:16 PM

Selection_121.png

With a recent patch, Eclipse can now show you the return value of a method during a debug session.

For years, when I was debugging and I needed to see the return value of a method, I would change code like:

return function();

 

To:

String retVal = function();
return retVal;

And then step through the code and inspect the value of “retVal”.

Recently [September 2016] a patch was merged to support this feature. Now, when you return from a method, the Variables view in the calling method shows the return value of the call that just finished:

Selection_120.png

As a side note, the reason this was not implemented sooner is that the Java virtual machine debugger did not provide this information until Java 1.6.

If your version of Eclipse doesn’t yet have that feature, try downloading a recent integration or nightly build.

Happy debugging.

 

 



by leoufimtsev at September 23, 2016 07:16 PM

JBoss Tools and Red Hat Developer Studio Maintenance Release for Eclipse Neon

by jeffmaury at September 21, 2016 04:04 PM

JBoss Tools 4.4.1 and Red Hat JBoss Developer Studio 10.1 for Eclipse Neon are here waiting for you. Check it out!

devstudio10

Installation

JBoss Developer Studio comes with everything pre-bundled in its installer. Simply download it from our JBoss Products page and run it like this:

java -jar jboss-devstudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) JBoss Developer Studio require a bit more:

This release requires at least Eclipse 4.6 (Neon) but we recommend using the latest Eclipse 4.6 Neon JEE Bundle since then you get most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat JBoss Developer Studio".

For JBoss Tools, you can also use our update site directly.

http://download.jboss.org/jbosstools/neon/stable/updates/

What is new?

Our main focus for this release was improvements for container based development and bug fixing.

Improved OpenShift 3 and Docker Tools

We continue to work on providing better experience for container based development in JBoss Tools and Developer Studio. Let’s go through a few interesting updates here.

Support for Container Labels

Users can now specify labels when running a container. The labels are saved in the launch configuration and can also be edited before relaunching the container.

Container Labels

Automatically detect known Docker daemon connections

When the Docker Explorer view is opened, the list of existing connections (saved from a previous session) is reloaded. In addition to this behaviour, the view will also attempt to find new connections using default settings such as the 'unix:///var/run/docker.sock' Unix socket or the 'DOCKER_HOST', 'DOCKER_CERT_PATH' and 'DOCKER_TLS_VERIFY' environment variables. This means that by default, in a new workspace, if a Docker daemon is reachable using one of those methods, the user does not have to use the "New Connection" wizard to get a connection.

Extension point for Docker daemon connection settings

An extension point has been added to the Docker core plugin to allow for custom connection settings provisioning.

Support for Docker Compose

Support for Docker Compose has finally landed!

Users can select a docker-compose.yml file and start Docker Compose from the context menu, using the Run > Docker Compose launcher shortcut.

The Docker Compose process displays its logs (with support for text coloring based on ANSI escape codes) and provides a stop button to stop the underlying process.

Docker Compose

Also, as with the support for building and running containers, a launch configuration is created after the first call to Docker Compose on the selected docker-compose.yml file.

Docker Image Hierarchy View Improvements

The new Docker Image Hierarchy view not only shows the relationships between images (which is particularly interesting when an image is built using a Dockerfile), but it also includes containers based on those images in the tree view, while providing all relevant commands for containers and images in the context menu.

Docker Image Hierarchy View

Server templates can now be displayed / edited

Server templates are now displayed in the property view under the Templates tab:

property view template

You can access/edit the content of the template with the Edit command.

Events can now be displayed

Events generated as part of the application lifecycle are now displayed in the property view under the Events tab (available at the project level):

property view event

You can refresh the content of the event with the Refresh command or open the event in the OpenShift web console with the Show In → Web Console command.

Volume claims can now be displayed

Volume claims are now displayed in the property view under the Storage tab (available at the project level):

property view storage1

You can create a new volume claim using a resource file like the following:

{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "claim1"
  },
  "spec": {
    "accessModes": [ "ReadWriteOnce" ],
    "resources": {
      "requests": {
        "storage": "1Gi"
      }
    }
  }
}

If you deploy such a resource file with the New → Resource command at the project level, the Storage tab will be updated:

property view storage2

You can access/edit the content of the volume claim with the Edit command or open the volume claim in the OpenShift web console with the Show In → Web Console command.

Server Tools

QuickFixes now available in runtime detection

Runtime detection has been a feature of JBoss Tools for a long while; however, it would sometimes create runtime and server adapters with configuration errors without alerting the user. Now, the user has an opportunity to execute quick fixes before completing the creation of their runtimes and servers.

JBIDE 15189 rt detect 1

To see this in action, we can first open up the runtime-detection preference page. We can see that our runtime-detection will automatically search three paths for valid runtimes of any type.

JBIDE 15189 rt detect 2

Once we click search, the runtime-detection’s search dialog appears, with results it has found. In this case, it has located an EAP 6.4 and an EAP 7.0 installation. However, we can see that both have errors. If we click on the error column for the discovered EAP 7.0, the error is expanded, and we see that we’re missing a valid / compatible JRE. To fix the issue, we should click on this item.

JBIDE 15189 rt detect 3

When we click on the problem for EAP 7, the new JRE dialog appears, allowing us to add a compatible JRE. The dialog helpfully informs us of what the restrictions are for this specific runtime. In this case, we’re asked to define a JRE with a minimum version of Java-8.

JBIDE 15189 rt detect 4

If we continue along with the process by locating and adding a Java 8 JRE, as shown above, and finish the dialog, we’ll see that all the errors will disappear for both runtimes. In this example, the EAP 6.4 required a JRE of Java 7 or higher. The addition of the Java 8 JRE fixed this issue as well.

JBIDE 15189 rt detect 5

Hopefully, this will help users preemptively discover and fix errors before being hit with surprising errors when trying to use the created server adapters.

Support for WildFly 10.1

The WildFly 10.0 Server adapter has been renamed to WildFly 10.x. It has been tested and verified to work for WildFly 10.1 installations.

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.

New Hibernate 5.2 Runtime Provider

With final releases available in the Hibernate 5.2 stream, the time was right to make available a corresponding Hibernate 5.2 runtime provider. This runtime provider incorporates Hibernate Core version 5.2.2.Final and Hibernate Tools version 5.2.0.Beta1.

hibernate 5 2
Figure 1. Hibernate 5.2 is available
Other Runtime Provider Updates

The Hibernate 4.3 runtime provider now incorporates Hibernate Core version 4.3.11.Final and Hibernate Tools version 4.3.5.Final.

The Hibernate 5.0 runtime provider now incorporates Hibernate Core version 5.0.10.Final and Hibernate Tools version 5.0.2.Final.

The Hibernate 5.1 runtime provider now incorporates Hibernate Core version 5.1.1.Final and Hibernate Tools version 5.1.0.CR1.

Forge Tools

Added Install addon from the catalog command

From Forge 3.3.0.Final onwards it is now possible to query and install addons listed in the Forge addons page.

addon install from catalog

Forge Runtime updated to 3.3.1.Final

The included Forge runtime is now 3.3.1.Final. Read the official announcement here.

startup

Freemarker

Freemarker 2.3.25

The FreeMarker library included in the FreeMarker IDE was updated to the latest available version, 2.3.25.

flth / fltx file extensions added

The new flth and fltx extensions have been added and associated with Freemarker IDE. flth stands for HTML content whereas fltx stands for XML content.

Overhaul of the plugin template parser

The parser that FreeMarker IDE uses to extract IDE-centric information (needed for syntax highlighting, related tag highlighting, auto-completion, outline view, etc.) was overhauled. Several bugs were fixed, and support for the newer template language features was added. Also, the syntax highlighting is now more detailed inside expressions.

Fixed the issue where the (by default) yellow highlighting of the related FTL tags shifts away from under the tag as you type.

Showing whitespace, block selection mode

The standard "Show whitespace characters" and "Toggle block selection mode" icons are now available when editing a template.

Improved automatic finishing of FreeMarker constructs

When you type <#, <@, ${, #{ or <#--, the FreeMarker editor now automatically closes them.

When a FreeMarker exception is printed to the console, the error position in it is a link that navigates to the error. This worked long ago, but had been broken for quite a while.

Fixed auto-indentation

When hitting Enter, the new line sometimes didn’t inherit the indentation of the previous line. This has been fixed.

Updated the "database" used for auto completion

Auto completion now knows all directives and "built-ins" up to FreeMarker 2.3.25.

What is next?

Having JBoss Tools 4.4.1 and Developer Studio 10.1 out we are already working on the next maintenance release for Eclipse Neon.1.

Enjoy!

Jeff Maury


by jeffmaury at September 21, 2016 04:04 PM

Native browser for GTK on linux

by Christian Pontesegger (noreply@blogger.com) at September 21, 2016 09:14 AM

Support for the internal browser often does not work out of the box on Linux. You can check the status by opening your Preferences/General/Web Browser settings. If the radio button Use internal web browser is enabled (not necessarily activated), internal browser support is working; otherwise it is not.

Most annoyingly, without internal browser support the help hovers in your text editors use a fallback mode that does not render links or images.

To solve this issue you may first check the SWT FAQ. For me working on gentoo linux the following command fixed the problem:
emerge net-libs/webkit-gtk:2
It is important to install this slot explicitly and not only the latest version of webkit-gtk, which will not be recognized by Eclipse. After the installation, restart Eclipse and your browser should work. Verified on Eclipse Neon.

by Christian Pontesegger (noreply@blogger.com) at September 21, 2016 09:14 AM

Creating My First Web App with Angular 2 in Eclipse

by dimitry at September 20, 2016 02:00 PM

Angular 2 is a framework for building desktop and mobile web applications. After hearing rave reviews about Angular 2, I decided to check it out and take my first steps into modern web development. In this article, I’ll show you how to create a simple master-details application using Angular 2, TypeScript, Angular CLI and Eclipse […]

The post Creating My First Web App with Angular 2 in Eclipse appeared first on Genuitec.


by dimitry at September 20, 2016 02:00 PM

Eclipse 4.7 M2 is out with a focus on usability

by Lars Vogel at September 19, 2016 08:23 AM

Eclipse 4.7 M2 is out with a focus on usability.

From simplified filter functionality in the Problems, Bookmarks and Tasks views, improved color usage for popups, simplified editor assignments for file extensions, enhancements to quick access, a configurable compare direction in the compare editor, and more, you will find lots of nice goodies which will increase your love for the Eclipse IDE.

Also, the background jobs API has been improved, so jobs still run fast even if you do a lot of status updates in your job implementation.

Check out the Eclipse 4.7 M2 New and Noteworthy for the details.


by Lars Vogel at September 19, 2016 08:23 AM

Eclipse basics for Java development

by leoufimtsev at September 19, 2016 03:10 AM

Just a basic intro to Eclipse, aimed at people who are new to Java. Covers creating a new project, debugging, common shortcuts/navigation, and git.

Workspace

A workspace contains your settings, e.g. your keyboard shortcut preferences and the list of your open projects. You can have multiple workspaces.

Selection_097.png

You can switch between workspaces via File -> Switch Workspace

Projects

A project is essentially an application, or a library used by an application. Projects can be opened or closed. Contents of closed projects don’t appear in searches.

Hello world Project

To run some basic Java code:

  • File -> New -> Java project
  • Give the project some name ->  finish.
    Selection_085.png
  • Right click on src -> New -> Class
    Selection_086.png
  • Give your Class some name, check “Public static void main(String [] args)”
    Selection_087.png
  • Add a “Hello World” print line (the complete class is sketched right after this list):
    System.out.println("Hello world");
    Selection_088.png
  • Right click on “SomeName.java” -> run as -> Java Application
    Selection_089.png
  • Output printed in Console:
    Selection_090.png
  • Next time you can run the file via run button:Selection_092.png
  • Or via “Ctrl+F11”

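A minimal sketch of the resulting class, assuming you named it SomeName and accepted the defaults suggested by the wizard:

public class SomeName {

    // entry point invoked by "Run As -> Java Application"
    public static void main(String[] args) {
        System.out.println("Hello world");
    }
}
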
Debugging

Set a breakpoint by double-clicking on the line numbers in the margin, then click on the bug icon, or right click and choose “Debug As” -> “Java Application”
Selection_098.png

For more info on debugging, head over to Vogella:
http://www.vogella.com/tutorials/EclipseDebugging/article.html

Switching perspectives

Eclipse has the notion of Perspectives. One is for Java development, one for debugging (others could be C++ development, task planning, etc.). It’s basically a customisation of features and layout.

When you finish debugging, you can switch back to the java perspective:
Selection_099.png

Common keyboard shortcuts

  • Ctrl+/    – comment code “//”
    Selection_093.png
  • Ctrl+shift+/    – comment code ‘/* … */’
    Selection_094.png
  • Ctrl+F11   – Run last run configuration
  • Ctrl+Shift+L  Keyboard reminder cue sheet. (Type to search)
    Selection_095.png
  • Ctrl+Shift+L, then Ctrl+Shift+L again, open keyboard preferences.
  • Ctrl+O – Java quick method Outline:
    Selection_096.png
    Note: Regex and case search works. Ex “*Key” will find “getBackgroundColorKey()”, so will  “gBFCK”.
  • Ctrl+Shift+r – search for resource (navigate between your classes).
  • Ctrl+Shift+f – automatically format the selected code. (Or all code if no block selected).

For more on shortcuts, head over to Vogella:
http://www.vogella.com/tutorials/EclipseShortcuts/article.html

Source code navigation

Right click on a method/variable to bring up a context menu, from there select:

Open Declaration (F3)

This is one of the most used functions. It’s a universal “jump to where the method/variable/class/constant is defined”.

 

Open Call hierarchy

See where a variable or method is called.

Tip: For variables, you can narrow down the Field Access so that it only shows where a field is read/written.

Selection_105.png

Quick outline (Ctrl+O)

The quick outline is a quick way to find a method in your class. It supports regex and camel-case search. E.g. “*size” will find any method with ‘size’ in it, and “cSI” will find ‘computeSizeInPixels’.
Tip: Press Ctrl+O again and you will also be shown methods inherited from parent classes.

Selection_102.png

Navigate to super/implementation (Ctrl+click)

Sometimes you may want to see which sub-classes override a method. Hold Ctrl, hover over the method and click, then select “Open Implementation”.

Selection_106.png

You will be presented with a list of sub-implementations.

Selection_108.png

You can similarly navigate to parent classes.

Code completion

Code completion predicts variable names, method names, and more.

Start typing something, then press “Ctrl+Space”:

Selection_101.png

It can also complete by camel case, e.g. if you type “mOF” and press Ctrl+Space, it will expand to “myOtherFunction()”.

Templates

Typing “System.out.println();” is tedious. Instead you can type “syso” and then press Ctrl+Space; Eclipse fills in the template code. Selection_084.png
You can find more on templates in Eclipse Preferences.

Git integration

99% of my git workflow happens inside Eclipse.

You will want to open three useful views:

Window -> Show view -> others

  • Team -> History
  • git -> Git Repositories
  • git -> Git Staging

You can manage git repositories in the “Git Repositories” view:

Selection_109.png

You can add changed files in the “Git Staging View” via drag and drop, and fill in the commit message. You can view your changes by double clicking on the files:

Selection_110.png

In the “History” view, you can create new branches, cherry-pick commits, check out older versions, compare current files to previous versions, etc.

Selection_112.png

Selection_113.png

More on Eclipse

If you want to know more about the Eclipse interface, feel free to head over to Vogella’s in-depth Eclipse tutorial:
http://www.vogella.com/tutorials/Eclipse/article.html

Also feel free to leave comments with questions.



by leoufimtsev at September 19, 2016 03:10 AM

Pushing the Eclipse IDE Forward

by Doug Schaefer at September 17, 2016 06:40 PM

It’s been a crazy week if you follow the ide-dev mailing list at Eclipse. We’ve had many posts over the years discussing our competitive relationship with IntelliJ and the depression that sets in when we try to figure out how to make Eclipse better so people don’t hate on it so much, and then how nothing changes.

This time, though, it was sparked by what seemed to be an innocent post by Mickael Istria about yet another claim that IntelliJ has better content assist (which, from what I’ve seen, it actually does). It sparked a huge conversation with many Eclipse contributors chiming in with their thoughts about where we are with the Eclipse IDE and what needs to be done to make things better. A great summary of the last few days has been captured in a German-language Jaxenter article.

The difference this time is that it has actually sparked action. Mickael, Pascal Rapicault, and others have shifted some of their focus to the low-hanging user experience issues and are providing fixes for them. The community has been activated and I love seeing it.

Someone asked why the Architecture Council at Eclipse doesn’t step in and help guide some of this effort and after discussing it at our monthly call, we’ve decided to do just that. Dani Megert and I will revive the UI Guidelines effort and update the current set and extend it to more general user experience guidance. We’ll use the UI Best Practices group mailing list to hold public discussions to help with that. Everyone is welcome to participate. And I’m sure the ide-dev list will continue to be busy as contributors discuss implementation details.

Eclipse became the number one Java IDE with little marketing. Back in the 2000s developers were hungry for a good Java IDE, and since Eclipse was free, easy to set up (yes, unzipping the IDE wasn’t that bad an experience), worked well, and had great static analysis and refactoring, they fell in love with it.

Other IDEs have caught up and in certain areas passed Eclipse and, yes, IntelliJ has become more popular. It’s not because of marketing. Developers decide what they like to use by downloading it and trying it out. As long as we keep our web presence in shape that developers can find the IDE, especially the Java one, and then keep working to make it functionally the best IDE we can, we’ll be able to continue to serve the needs of developers for a long time.

Our best marketing comes from our users. That’s the same with all technology these days. I’d rather hear from someone who’s tried Docker Swarm than believe what the Docker people are telling me (for example). That’s how we got Eclipse to number one, and where we need to focus to keep the ball rolling. And as a contributor community, we’re working hard to get them something good to talk about.


by Doug Schaefer at September 17, 2016 06:40 PM

Me as text?

by tevirselrahc at September 16, 2016 07:05 AM

Over the last few days, a large group of my minions and admirers met in Sweden at EMD2017 to talk about me…in all my incarnations.

One of the most polarizing discussions was about whether I should stay graphical or whether I also needed to be textual. For those who do not know, I am a UML-based modeling tool and therefore graphical by nature.

However, some of my minions think that I would be more usable if I also allowed them to create/edit models using text (just like this posting, but in a model instead of a blog post).

During the meeting, there was a lot of discussion about whether it was a good idea or not, whether it was useful or not, and whether I was even able to support this!

The main point made by the pro-text minions was that many things are simply easier to do by writing text rather than drawing images, but that both could be supported. Other minions were saying that it was simply impossible.

Now, this is all a bit strange to me. After all, when I look at my picture, I am an image, but then I can express myself in text (again, like in this posting).

Regardless, any new capability given me makes me happy!

And I wonder how I would look as text…

papyrus-logo-asciiart

I think I like myself better as an image, but it’s good to have a choice. In the end, I trust my minions.

 


Filed under: Papyrus, Textual Tagged: modeling, Textual, uml

by tevirselrahc at September 16, 2016 07:05 AM

Install "Plug-in Spy" in your Eclipse Neon IDE

September 15, 2016 10:00 PM

There is a lot of documentation about the Eclipse "Plug-in Spy" feature (Plug-in Spy for UI parts or Eclipse 3.5 - Plug-in Spy and menus). In my opinion one piece of information is missing: what you need to install to use the Spy feature in your Eclipse Neon IDE. Here is my small how-to.

Select "Install new Software…​" in the "Help" Menu. In the dialog, switch to the "The Eclipse Project Updates" update site (or enter its location http://download.eclipse.org/eclipse/updates/4.6). Filter with "PDE" and select the "Eclipse PDE Plug-in Developer Resources". Validate your choices with "Next" and "Finish", Eclipse will install the feature and ask for a Restart.

2016 09 16 install dialog
Figure 1. Install new Software in Eclipse

If you prefer the Oomph way, you can paste the snippet contained in Listing 1 in your installation.setup file (Open it with the Menu: Navigate ▸ Open Setup ▸ Installation).

<?xml version="1.0" encoding="UTF-8"?>
<setup.p2:P2Task
    xmi:version="2.0"
    xmlns:xmi="http://www.omg.org/XMI"
    xmlns:setup.p2="http://www.eclipse.org/oomph/setup/p2/1.0">
  <requirement
      name="org.eclipse.pde.source.feature.group"/>
  <repository
      url="http://download.eclipse.org/eclipse/updates/4.6"/>
</setup.p2:P2Task>

Your Oomph editor should look like Figure 2. Save the file and select "Perform Setup Task…" (in the Help menu). Oomph will update your installation and will ask for a restart.

2016 09 16 installation oomph editor
Figure 2. Oomph setup Editor: installation.setup File

In both cases, after the restart you can press Alt+Shift+F1 and use the Plug-in Spy as shown in Figure 3.

2016 09 16 plugin spy
Figure 3. Plug-in Spy in Eclipse Neon

September 15, 2016 10:00 PM

Oomph 04: P2 install tasks

by Christian Pontesegger (noreply@blogger.com) at September 15, 2016 02:15 PM

From this tutorial onwards we are going to extend our project setup step by step. Today we are looking at how to install additional plug-ins and features with our setup.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.  

For a list of all Oomph related tutorials see my Oomph Tutorials Overview.

Step 1: Add the Repository

Open your Oomph setup file and create a new task of type P2 Director. To install components we need to provide a p2 site location and the components from that location to install. So create a new Repository child for our task. When it is selected, the Properties view will ask for a URL. Point it to the p2 location you want to install from. Leave Type set to Combined. If you do not know about repository types, you definitely do not need to change this setting.
When you are working with a p2 site provided by Eclipse, Oomph can help to identify update site locations.

Step 2: Add features

Once you have added the repository you can start to add features to be installed from that site. The manual way requires you to create a new child node of type Requirement. Go to its Properties and set Name to the feature id you want to install. You may add version ranges or make installs optional (which means no error is thrown when the feature cannot be installed).
The tricky part is finding out the name of the feature you want to install. I like to use the target platform editor from Mickael Barbero with its nice code completion features. An even simpler way is to use the Repository Explorer from Oomph:

Right click on your Repository node and select Explore. The Repository Explorer view comes up and displays features the same way as you might know it from the eclipse p2 installer. Now you can drag and drop entries to your P2 Director task.



by Christian Pontesegger (noreply@blogger.com) at September 15, 2016 02:15 PM

Keynotes, Tracks and Sponsors Announced for the 11th Annual EclipseCon Europe Conference

September 15, 2016 01:19 PM

The Eclipse Foundation is pleased to announce EclipseCon Europe 2016.

September 15, 2016 01:19 PM

Contribute to the vogella Android tutorials via Github pull requests

by Lars Vogel at September 15, 2016 06:25 AM

If you want to contribute an improvement to the vogella Android tutorials, we are providing our Android tutorials Asciidoc source code via Github. Please clone and send your Pull Requests.

vogella Android tutorial at Github.

Thanks for your contributions and lets kill all these typos. :-)


by Lars Vogel at September 15, 2016 06:25 AM

Java 9 module-info Files in the Eclipse IDE

by waynebeaton at September 14, 2016 07:31 PM

Note that this post is not intended to be a status update; it’s just a quick update based on some experimenting that I’ve been doing with the beta code.

It’s been a while, but I’m back to experimenting with Java 9 support in the Eclipse IDE.

For testing purposes, I downloaded the most recent Oxygen (4.7) integration build (I20160914-0800) from the Eclipse Project downloads and the latest Java 9 JRE build (135).

I configured the Eclipse IDE to run on the Java 9 JVM. This still requires a minor change in the eclipse.ini file: to launch successfully, you must add --add-modules=java.se.ee to the vmargs section (I expect this to be resolved before Java 9 support is officially released; see Bug 493761 for more information). I then used the Install new software… dialog to pull in updates from the BETA_JAVA9 SDK builds repository (see the Java9 Eclipsepedia page for more information).

I created a very simple Java application with a module-info.java file. Content assist is available for this file.

screenshot-from-2016-09-14-14-27-24

Note that there is an error indicated on the import of java.awt.Frame. This error exists because the module info file does not provide visibility to that class (AWT is not included with java.base).

If we change that requires statement, the visibility issue is resolved and the compiler is happy. Well, mostly happy. Apparently not using declared variables gets you a stern warning (this is, of course, configurable).

screenshot-from-2016-09-14-14-27-51

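For illustration, a minimal module-info.java with the adjusted requires statement might look like the following sketch (the module name is made up for this example; java.awt.Frame is provided by the java.desktop module):

module org.example.frames {
    // java.base is required implicitly; java.desktop makes java.awt.Frame visible
    requires java.desktop;
}
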
The Eclipse Project is planning to ship support as part of an Eclipse Neon update release that coincides with the official release date of Java 9. I’ll be talking a bit about this during my JavaOne talk and demonstrating this (and more Java topics) at the Eclipse Foundation’s booth.

Conference: JavaOne
Session Type: Conference Session
Session ID: CON6469
Session Title: Developing Java Applications with Eclipse Neon
Room: Hilton—Continental Ballroom 6
Date and Time: 09/19/16, 11:00:00 AM – 12:00:00 PM

The call for papers for Devoxx US is open. Devoxx is a community conference from developers for developers. Submit your proposal now.



by waynebeaton at September 14, 2016 07:31 PM

Vert.x 3.3.3 is released !

by cescoffier at September 12, 2016 12:00 AM

We have just released Vert.x 3.3.3, a bug fix release of Vert.x 3.3.x.

Since the release of Vert.x 3.3.2, quite a few bugs have been reported. We would like to thank you all for reporting these issues.

Vert.x 3.3.3 release notes:

The event bus client used with the SockJS bridge is available from NPM, Bower and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding !


by cescoffier at September 12, 2016 12:00 AM

Qt World Summit 2016 San Francisco Conference App: Behind The Scenes

by ekkescorner at September 09, 2016 12:02 PM

Qt World Summit 2016

Meet me at this year’s Qt World Summit 2016 in San Francisco

qtws16_sfo

I’ll speak about the development of the upcoming Qt World Summit Conference App running on

  • BlackBerry 10 (Qt 4.8, Cascades)
  • Qt 5.7 (Qt Quick Controls 2)
    • Android
    • iOS
    • Windows 10

My Session

See how easy it is to develop cross-platform mobile Apps using Qt 5.7+ and new Qt QuickControls 2

qtws16_session_ekke

BlackBerry 10 Cascades Development?

Already have BlackBerry 10 Apps (Cascades)? Learn how to save your investment: most C++ code for business logic, REST / web services, and persistence (SQLite, JSON) can be re-used, and the app architecture is similar thanks to the Qt SIGNALS – SLOTS concept.

cu in San Francisco


Filed under: BB10, C++, Cascades, mobile, Qt

by ekkescorner at September 09, 2016 12:02 PM

Running nightly Eclipse for the impatient

by leoufimtsev at September 08, 2016 06:21 PM

 

Eclipse-Icon.png

If you’re an Eclipse developer, you might consider running a nightly version of Eclipse so that you can easily test out the latest patches. Bleeding edge is the cool stuff, right? It’s actually surprisingly stable.

The advantage of this setup is that you won’t have to re-download a new version and re-download all the plugins over and over. You just configure the thing once and just click on ‘check for updates’ once in a while.

The setup is a little bit counter intuitive. This article is not just ‘follow these steps’, but more about understanding the mechanism and workflow.

My Experience with using nightly for 4 months

Eclipse doesn’t actually auto-update on its own. You manually trigger an update by going to Help -> Check for Updates. So you never really have the situation where one day Eclipse randomly stops working.
I don’t actually update my Eclipse every day. Perhaps only once every 2-3 weeks, or when I want to run the latest patch. I’ve never had Eclipse break on me during the process, but nevertheless I tend to back up my Eclipse before every update via a bash script.

Pre-requisite: Understanding update sites

Instead of re-downloading Eclipse each time, you can simply configure it to pull its packages from update sites. There are different update sites for different parts of Eclipse.

The list of core update sites can be found via:
Google: “Eclipse update sites” ->
https://wiki.eclipse.org/Eclipse_Project_Update_Sites

Types of update sites

There are two important update sites that you have to be aware of: “Update” and “Release”.
“Update” contains core Eclipse components, e.g. platform.ui:
http://download.eclipse.org/eclipse/updates/4.7-N-builds/

“Release” contains additional plugins like Mylyn, etc.:
http://download.eclipse.org/releases/neon/

But there are others: CDT, Orbit, etc. You can often find them by googling “PLUGIN-NAME update site”.

Which update sites to pick for ‘nighties’?

In general when you develop version N+1 (where N+1 is not released yet), you point to the update sites of N, because the N+1 repositories are not released yet. Once N+1 is released, it becomes the new N and you change your update sites accordingly.

For example, I currently work on Oxygen, but I’m pointing my update sites to pull from ‘Neon’ (where Neon is older than Oxygen, N < O).

Setup

Now you’re ready to roll. You can install your desired plugins (e.g. Mylyn, SWT Tools, git integration, etc.).

To update everything, make a backup of your downloaded Eclipse (and maybe your workspace), then go to Help -> Check for Updates.

If you have questions / feedback / suggestions, please post comments.

For more details on update, check out Vogella:
http://www.vogella.com/tutorials/Eclipse/article.html#updates-and-installation-of-plug-ins



by leoufimtsev at September 08, 2016 06:21 PM

Centralized logging for Vert.x applications using the ELK stack

by ricardohmon at September 08, 2016 12:00 AM

This post describes a solution to achieve centralized logging of Vert.x applications using the ELK stack, a set of tools including Logstash, Elasticsearch, and Kibana that are well known to work together seamlessly.

Table of contents

Preamble

This post was written in the context of the project titled “DevOps tooling for Vert.x applications“, one of the Vert.x projects taking place during the 2016 edition of Google Summer of Code, a program that aims to bring students together with open source organizations in order to help them gain exposure to software development practices and real-world challenges.

Introduction

Centralized logging is an important topic when building a Microservices architecture, and it is a step forward in adopting the DevOps culture. Having an overall solution partitioned into a set of services distributed across the Internet can represent a challenge when trying to monitor the log output of each of them; hence, a tool that helps to accomplish this is very helpful.

Overview

As shown in the diagram below, the general centralized logging solution comprises two main elements: the application server, which runs our Vert.x application; and a separate server, hosting the ELK stack. Both elements are linked by Filebeat, a highly configurable tool capable of shipping our application logs to the Logstash instance, i.e., our gateway to the ELK stack.

Overview of centralized logging with ELK

App logging configuration

The approach described here is based on a Filebeat + Logstash configuration. That means we first need to make sure our app logs to a file, whose records will be shipped to Logstash by Filebeat. Luckily, Vert.x provides the means to configure alternative logging frameworks (e.g., Log4j, Log4j2 and SLF4J) besides the default JUL logging. However, we can use Filebeat independently of the logging framework chosen.

Log4j Logging

The demo that accompanies this post relies on Log4j2 as the logging framework. We instructed Vert.x to use this framework following the guidelines, and we made sure our logging calls are asynchronous, since we don’t want them to block our application. For this purpose, we opted for the AsyncAppender and included it in the Log4j2 configuration, together with the log output format, in an XML configuration file available in the application’s resources folder.

<Configuration>
  <Appenders>
    <RollingFile name="vertx_logs" append="true" fileName="/var/log/vertx.log" filePattern="/var/log/vertx/$${date:yyyy-MM}/vertx-%d{MM-dd-yyyy}-%i.log.gz">
      <PatternLayout pattern="%d{ISO8601} %-5p %c:%L - %m%n" />
    </RollingFile>
    <Async name="vertx_async">
      <AppenderRef ref="vertx_logs"/>
    </Async>
  </Appenders>
  <Loggers>
    <Root level="DEBUG">
      <AppenderRef ref="vertx_async" />
    </Root>
  </Loggers>
</Configuration>

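For orientation, if the application logs through the Vert.x logging API, switching to Log4j2 essentially means setting the logger delegate factory before the first logger is created. The following is a rough sketch assuming Vert.x 3.x and its io.vertx.core.logging API; in practice the property is often passed as a JVM argument instead of being set programmatically.

import io.vertx.core.logging.Logger;
import io.vertx.core.logging.LoggerFactory;

public class LoggingSetup {

    public static void main(String[] args) {
        // route Vert.x logging through Log4j2 instead of the default JUL logging
        System.setProperty("vertx.logger-delegate-factory-class-name",
            "io.vertx.core.logging.Log4j2LogDelegateFactory");

        Logger logger = LoggerFactory.getLogger(LoggingSetup.class);
        logger.info("Log4j2 logging configured");
    }
}
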
Filebeat configuration

Now that we have configured the log output of our Vert.x application to be stored in the file system, we delegate to Filebeat the task of forwarding the logs to the Logstash instance. Filebeat can be configured through a YAML file containing the log output location and the pattern to interpret multiline logs (i.e., stack traces). Also, the Logstash output plugin is configured with the host location, and a secure connection is enforced using the certificate from the machine hosting Logstash. We set the document_type to the type of instance this log belongs to, which can later help us while indexing our logs inside Elasticsearch.

filebeat:
  prospectors:
    -
      document_type: trader_dashboard
      paths:
        - /var/log/vertx.log
      multiline:
        pattern: "^[0-9]+"
        negate: true
        match: after
output:
  logstash:
    enabled: true
    hosts:
      - elk:5044
    timeout: 15
    tls:
      insecure: false
      certificate_authorities:
        - /etc/pki/tls/certs/logstash-beats.crt

ELK configuration

To take fully advantage of the ELK stack with respect to Vert.x and our app logs, we need to configure each of its individual components, namely Logstash, Elasticsearch and Kibana.

Logstash

Logstash is the component within the ELK stack that is in charge of aggregating the logs from each of the sources and forwarding them to the Elasticsearch instance.
Configuring Logstash is straightforward with the help of the specific input and output plugins for Beats and Elasticsearch, respectively. In the previous section we mentioned that Filebeat can be easily coupled with Logstash. Now we see that this is done by specifying beats as the input plugin and setting the parameters needed to be reached by our shippers (listening port, SSL key and certificate location).

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}

Now that we are ready to receive logs from the app, we can use Logstash’s filtering capabilities to specify the format of our logs and extract the fields so they can be indexed more efficiently by Elasticsearch.
The grok filtering plugin comes in handy in this situation. This plugin lets us declare the log format using predefined and customized patterns based on regular expressions, and declare new fields from the information extracted from each log line. In the following block, we instruct Logstash to recognize our Log4j pattern inside the message field, which contains the log message shipped by Filebeat. After that, the date filtering plugin parses the timestamp extracted in the previous step and uses it to replace the timestamp set by Filebeat when reading the log output file.

filter {
  grok {
    break_on_match => false
    match =>  [ "message", "%{LOG4J}"]
  }
  date{
    match => [ "timestamp_string", "ISO8601"]
    remove_field => [ "timestamp_string" ]
  }
}

The Log4j pattern is not included with Logstash; however, we can specify it using the predefined data formats shipped with Logstash and adapt it to the specific log format required by our application, as shown next.

# Pattern to match our Log4j format
SPACING (?:[\s]+)
LOGGER (?:[a-zA-Z$_][a-zA-Z$_0-9]*\.)*[a-zA-Z$_][a-zA-Z$_0-9]*
LINE %{INT}?
LOG4J %{TIMESTAMP_ISO8601:timestamp_string} %{LOGLEVEL:log_level}%{SPACING}%{LOGGER:logger_name}:%{LINE:loc_line} - %{JAVALOGMESSAGE:log_message}

Finally, we take a look at Logstash’s output configuration. This simply points to our Elasticsearch instance, instructs it to provide a list of all cluster nodes (sniffing), defines the name pattern for our indices, assigns the document type according to the metadata coming from Filebeat, and allows us to define a custom index template for our data.

output {
  elasticsearch {
    hosts => ["localhost"]
    sniffing => true
    manage_template => true
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    template => "/etc/filebeat/vertx_app_filebeat.json"
    template_overwrite => true
  }
}

Elasticsearch

Elasticsearch is the central component that enables the efficient indexing and real-time search capabilities of the stack. To take the most advantage of Elasticsearch, we can provide an indexing template for our incoming logs, which can help to optimize the data storage and match the queries issued by Kibana at a later point.
In the example below, we see an index template that would be applied to any index matching the pattern filebeat-*. Additionally, we declare our new log fields type, host, log_level, logger_name, and log_message, which are set as not_analyzed, except for the last two, which are set as analyzed, allowing queries based on regular expressions that are not restricted to the full text.

{
  "mappings": {
    "_default_": {
      "_all": {
        "enabled": true,
        "norms": {
          "enabled": false
        }
      },
      "dynamic_templates": [
        {
          "template1": {
            "mapping": {
              "doc_values": true,
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "{dynamic_type}"
            },
            "match": "*"
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "offset": {
          "type": "long",
          "doc_values": "true"
        },
        "type": { "type": "string", "index": "not_analyzed" },
        "host": { "type": "string", "index": "not_analyzed" },
        "log_level": { "type": "string", "index": "not_analyzed" },
        "logger_name": { "type": "string", "index": "analyzed" },
        "log_message": { "type": "string", "index": "analyzed" }
      }
    }
  },
  "settings": {
    "index.refresh_interval": "5s"
  },
  "template": "filebeat-*"
}

Kibana

Although we could fetch all our logs from Elasticsearch through its API, Kibana is a powerful tool that allows more friendly querying and visualization. Besides the option to query our data through the available indexed field names and search boxes that accept specific queries, Kibana allows creating our own Visualizations and Dashboards. Combined, they represent a powerful way to display data and gain insight in a customized manner. The accompanying demo ships with a couple of sample dashboards and visualizations that take advantage of the log fields specified in our index template and provide valuable insight. This includes: visualizing the number of log messages received by ELK, observing the proportion of messages that each log source produces, and directly finding out the sources of error logs.

Kibana Dashboard

Log shipping challenge

The solution presented here relies on Filebeat to ship log data to Logstash. However, if you are familiar with the Log4j framework you may be aware that there is a SocketAppender that allows writing log events directly to a remote server using a TCP connection. Although including the Filebeat + Logstash combination may sound like unnecessary overhead in the logging pipeline, it provides a number of benefits in comparison to the Log4j socket alternative:

  • The SocketAppender relies on the specific serialization of Log4j’s LogEvent objects, which is not an interchangeable format like the JSON used by the Beats solution. Although there are attempts to output the logs in a JSON format for Logstash, they don’t support multiline logs, which results in messages being split into different events by Logstash. On the other hand, there is no official or stable input plugin for Log4j version 2.
  • While enabling Log4j’s async logging mode in an application delegates logging operations to separate threads, given their coexistence in the same JVM there is still the risk of data loss in case of a sudden JVM termination without proper log channel closing.
  • Filebeat is a data shipper designed to deal with many constraints that arise in distributed environments in a reliable manner, therefore it provides options to tailor and scale this operation to our needs: the possibility to load balance between multiple Logstash instances, specify the number of simultaneous Filebeat workers that ship log files, and specify a compression level in order to reduce the consumed bandwidth. Besides that, logs can be shipped in specific batch sizes, with maximum amount of retries, and specifying a connection timeout.
  • Lastly, although Filebeat can forward logs directly to Elasticsearch, using Logstash as an intermediary offers the possibility to collect logs from diverse sources (e.g., system metrics).

Demo

This post is accompanied by a demo based on the Vert.x Microservices workshop, where each microservice is shipped in a Docker container, simulating a distributed system composed of independently addressable nodes.
Also, the ELK stack is provisioned using a preconfigured Docker image by Sébastien Pujadas.

Following the guidelines in this post, this demo configures each of the Microservices of the workshop and sets up a Filebeat process on each of them to ship the logs to a central container hosting the ELK stack.

Installation

In order to run this demo, it is necessary to have Docker installed, then proceed with:

  • Cloning or downloading the demo repository.
  • Separately, obtaining the source code of the branch of the Microservices workshop adapted for this demo.

Building the example

The Docker images belonging to the Vert.x Microservices workshop need to be built separately before this project can be launched.

Building the Vert.x Microservices workshop Docker images.

Build the root project and the Trader Dashboard followed by each of the modules contained in the solution folder. Issue the following commands for this:

mvn clean install
cd trader-dashboard
mvn package docker:build
cd ../solution/audit-service
mvn package docker:build
cd ../compulsive-traders
mvn package docker:build
cd ../portfolio-service
mvn package docker:build
cd ../quote-generator/
mvn package docker:build

Running the example

After building the previous images, build and run the example in vertx-elk using the following command:

docker-compose up

The demo

You can watch the demo in action in the following screencast:

Conclusion

The ELK stack is a powerful set of tools that ease the aggregation of logs coming from distributed services into a central server. Its main pillar, Elasticsearch, provides the indexing and search capabilities of our log data. Also, it is accompanied by the convenient input/output components: Logstash, which can be flexibly configured to accept different data sources; and Kibana, which can be customized to present the information in the most convenient way.

Logstash has been designed to work seamlessly with Filebeat, the log shipper which represents a robust solution that can be adapted to our applications without having to make significant changes to our architecture. In addition, Logstash can accept varied types of sources, filter the data, and process it before delivering to Elasticsearch. This flexibility comes with the price of having extra elements in our log aggregation pipeline, which can represent an increase of processing overhead or a point-of-failure. This additional overhead could be avoided if an application would be capable of delivering its log output directly to Elasticsearch.

Happy logging!


by ricardohmon at September 08, 2016 12:00 AM

ECF 3.13.2

by Scott Lewis (noreply@blogger.com) at September 05, 2016 07:16 PM

ECF 3.13.2 is now available.

This is a maintenance/bug fix release, but it includes new documentation on the growing set of ECF distribution providers that support our implementation of OSGi Remote Services.

New and Noteworthy here.





by Scott Lewis (noreply@blogger.com) at September 05, 2016 07:16 PM

Back to school update on FEEP

September 05, 2016 10:00 AM

You remember the Friends of Eclipse Enhancement Program, right? It is a program that uses all the donations made through the Friends of Eclipse program to make significant and meaningful improvements and enhancements to the Eclipse IDE/Platform. I think it is a good time for me to provide you with an update about what we have done with this program in the last quarter.

One of the major efforts we have focused on is the triage of the key Eclipse Platform UI bugs. The bid was awarded to Patrik Suzzi, and I must say that the Eclipse Platform team and the Eclipse Foundation have been delighted to work with Patrik. Since the beginning of April, he has triaged about 400 bugs and fixed or contributed to fixing 70 bugs in the Platform. This earned him the status of Eclipse Platform UI committer. Congratulations!

Among others, Patrik has fixed some very annoying bugs like the broken feedback on drag and drop of overflowing editor tabs, and the inclusion of an Eclipse help search in the quick access field. By the way, if you don’t know what quick access is, I urge you to have a look at it.

image

Another area I’ve been working on is progress monitor performance. When projects did heavy progress reporting, the reporting itself slowed down the running task by a huge factor. Have a look at how much faster and smoother it runs now.

image

Many more fixes and improvements can be done to the Eclipse IDE/Platform with this program. Obviously, the development depends on the amount of donations received. You can help improve the Eclipse Platform and make a difference. You only need to donate today!


September 05, 2016 10:00 AM