XtextSummit goes EclipseCon France

by Holger Schill (schill@itemis.com) at January 23, 2017 03:11 PM

Xtext is a well-known framework and highly represented at conferences like EclipseCon around the world. It is always fun to get together with people who build great software and to talk about their experiences with the framework. In 2014 itemis decided to host a dedicated conference on Xtext – the XtextCon – to present more advanced talks on this special topic. We have come a long way since then...

From XtextCon to XtextSummit

The conference attracted around 100 people, which meant a lot to us. It was a great event, the attendees enjoyed it, and people from around the globe came to the city of Kiel. It was such a success that we decided to hold another XtextCon in 2015, and so the story continued with even more attendees.

In 2016 we decided to take a break. We took a deep breath to come back in 2017 with a new idea:

Together with Sven Efftinge from TypeFox and Lorenzo Bettini, we are more than happy to announce that we are hosting a brand new event called XtextSummit at EclipseCon France in lovely Toulouse this June.

We’ll start with a full day at the unconference, where the Xtext community can exchange thoughts and experiences. At the conference we’ll have a full track for talks about advanced Xtext-related technical aspects, as we did before at XtextCon.

Be part of the Xtext story

By submitting a talk you can help make it a great event and let the story continue. We are looking forward to your talks and to seeing you at EclipseCon France!


by Holger Schill (schill@itemis.com) at January 23, 2017 03:11 PM

openHAB 2 has arrived!

by Kai Kreuzer at January 23, 2017 12:00 AM

Three years after initiating the Eclipse SmartHome project, I am proud to finally announce the general availability of openHAB 2.0, the first openHAB release that is based on Eclipse SmartHome!

This release replaces openHAB 1.8 as the officially recommended runtime; of the 1.x line, only the openHAB 1 compatible add-ons will be maintained further. They are now available in version 1.9.0 and can be used on existing openHAB 1.8 installations. This package has by now grown to far more than 200 add-ons, and most of them can also be used in conjunction with openHAB 2 (see below).

Back in 2014, our goals for openHAB 2 were the following:

  1. Better support for low-end embedded hardware
  2. Simpler setup and configuration possibilities for “regular”, i.e. not tech-savvy, users

So what have we achieved?

Embedded Use

With respect to embedded systems, hardware evolved more quickly than we did, which made this goal almost obsolete. In 2014 many users were running openHAB on a Raspberry Pi 1, which was not ideal in terms of system performance. Now in 2017, almost everyone has upgraded to more powerful boards like the Raspberry Pi 2/3 or the PINE64, where CPU power is no longer a problem. As a matter of fact, openHAB 2 runs very decently on such boards, even for large installations.

Powerful hardware options for openHAB

Constrained hardware resources are therefore now mainly an issue for commercial platforms. Besides the free openHAB, there are other solutions built on Eclipse SmartHome, such as QIVICON of Deutsche Telekom. The slimmed-down core framework of Eclipse SmartHome proves its worth in such solutions. As a demonstrator, there is a sample solution packaging available that uses Eclipse Concierge as a low-footprint OSGi framework; it has an overall size of only 20 MB and requires less than 128 MB of Java heap.

Usability Improvements

Usability has been addressed on many different levels, and version 2.0 only marks the beginning of these efforts; upcoming 2.x versions will continue on this path.

Setup Through a User Interface

All newly introduced APIs and features are designed in a way that they enable setup and configuration through user interfaces.

  • After the first start, the user can choose an initial installation package to start with. These packages define common sets of functionality for different user types. Additional add-ons can be installed at any time through the UI with a single click.
  • A major feature of all newly introduced bindings is device discovery, i.e. the bindings themselves scan the network/system for supported devices and offer them to the user. This greatly simplifies the integration of devices in openHAB.
  • A new rule engine has been created, which allows building simple rules through a UI, similar to IFTTT, but with the big difference that no cloud connection is required as everything is executed locally.
  • A new UI called HABPanel has been introduced, which provides flexible dashboards for tablets that can be created and modified entirely through the UI. HABPanel is especially well-suited for wall-mounted displays.

Usability Improvements: Initial Setup - Discovery - Rule Editor - HABPanel

These features together allow, for the first time, a purely UI-driven setup of openHAB - but it must be said that this only covers a small fraction of the capabilities of openHAB. A core strength of openHAB is its flexibility and the possibility to cover all kinds of special - sometimes really weird - individual use cases. For those, the textual configuration known from openHAB 1 is still required and recommended; a small example follows below.
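
To give a flavor of this textual style, here is a minimal, illustrative sketch of an items file and a rule in the openHAB DSL (the item and rule names are made up, and any binding configuration is omitted):

// demo.items: a switch item with a label and an icon
Switch Light_Kitchen "Kitchen Light" <light>

// demo.rules: turn the light off every night at 23:00
rule "Goodnight"
when
    Time cron "0 0 23 * * ?"
then
    sendCommand(Light_Kitchen, OFF)
end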

1.x Compatibility

While these new features for beginners have been introduced, a focus has also been on causing as little disruption as possible for existing openHAB users and on keeping, and even extending, the textual configuration options. No one is forced to use the UIs in the future; they should merely be seen as an optional alternative. Most functionality of openHAB 1 has therefore been retained, and only minor changes need to be made to personal configurations and rules. Specifically, most openHAB 1.9 add-ons can be used with openHAB 2, as it has a built-in compatibility layer.

Installation

A major obstacle in the past for many users was also the setup required around openHAB, e.g. installing Linux on the Raspberry Pi, configuring users, shares, ssh, etc. This is all much simpler now with openHABian - a self-configuring Raspberry Pi setup, which starts off from a minimal SD card image. It automatically installs Java, openHAB, Samba and more, and optionally even KNXd, Homegear, Mosquitto and others. This is definitely the best choice for Raspberry Pi users!

Feature Overview

So what is new in general in openHAB 2? Let me give you a rough overview:

Bindings

openHAB 2 comes with 130 bindings for different devices, technologies or protocols. 57 of them are using the new 2.0 APIs, so that they support discovery and graphical configuration. The rest are bindings from openHAB 1, which are included in the 2.0 distribution.

Many of these bindings support technologies that are not supported by openHAB 1. There are e.g. bindings for AllPlay, Miele@home, Minecraft, Russound, Z-Way and even Tesla, just to name a few.

Some sample products & technologies that are supported by openHAB 2

Many others are currently under development or queued for review, so we can expect to reach 200 bindings by the end of this year.

User Interfaces

  • The visually outdated Classic UI (which is still available as an option) has been replaced by the much more modern Basic UI.
  • While being an external project for openHAB 1, HABmin has meanwhile become an official part of the project and is a very powerful administration tool, especially suited for Z-Wave users.
  • The new Paper UI is the main interface for doing UI-driven setup and configuration.
  • HABPanel is another new web UI, which specifically focuses on nifty dashboards on tablets.
  • Besides Android and iOS, there is now a native client for Windows 10 (Mobile) available in the official Microsoft App Store.

New UIs: Basic UI - HABPanel - HABmin - Windows 10 App

Further Integrations

Besides the bindings, which integrate external systems into openHAB, the opposite is possible as well: including openHAB in an external system:

  • A very simple integration in Amazon Echo is possible through the Hue emulation add-on. This mimics a Philips Hue bridge and can also fool some other 3rd party apps that exist for Philips Hue.
  • The HomeKit add-on makes openHAB appear in iOS as a non-certified device that, once added, allows the use of any hardware from within HomeKit/iOS.
  • IFTTT integration is now offered through the new myopenHAB service that is operated by the openHAB Foundation.
  • Text-to-Speech and other audio playback can now be directed to remote devices (such as connected speakers). Already supported are e.g. Sonos speakers, Onkyo receivers, Chromecast and the Kodi media center.

Distribution Packages

Besides the classic zip archives as well as the APT packages for Linux, there are by now many further ways to get hold of openHAB. (Please note that only openHABian is available from today on; the other packagings of the final 2.0 version will follow in the next days.)

  • openHABian: A hassle-free setup for the Raspberry Pi, starting off a minimal SD card image.
  • PINE64 image: A pre-installed openHAB setup as an SD card image.
  • Docker: A Docker container, available for both x86 and ARM architectures.
  • Synology-NAS: Packages for the popular Synology DiskStations.
  • QNAP-NAS: Likewise, there are installation packages available for QNAP.
  • Ubuntu Core Snap: Snaps for the new Ubuntu Core.

Overall, openHAB 2.0 is a huge step forward, and I would like to thank all the new contributors and maintainers who have joined the project recently, as without them none of this would have been possible. Today's 2.0 release is just a first step, and many more things are to be introduced with upcoming 2.x releases - so stay tuned!


by Kai Kreuzer at January 23, 2017 12:00 AM

Eclipse Neon.2: quick demo of three improvements

by howlger at January 19, 2017 02:30 PM

In December 2016 Neon.2 was released with only a few, but nonetheless very helpful, improvements. My video Eclipse Neon.2: quick demo of 3 improvements shows three of them:

  1. IDE – Compare Editor: Swap Left and Right View
  2. Java – Open Projects from File System: new Java and JDK detectors
  3. Arduino C++ Tools

Eclipse Neon.2: quick demo of 3 improvements

Opening a Java project that has not been created with Eclipse becomes a no-brainer with the new Java detector used by File > Open Projects from File System. The Arduino Downloads Manager of the Arduino C++ Tools also shows how simple things can be: just choose your Arduino board or compatible system and the libraries you want to use. Everything required, e.g. the C++ compiler, will be downloaded and configured for you. Watch Doug‘s 11-minute video for more details.

There are also Eclipse IDE Git integration improvements, but EGit and JGit forgot to contribute their versions 4.5 (I like auto-stage selected files on Commit…) and 4.6 to Neon.2. To get the latest Git improvements, add the update site http://download.eclipse.org/egit/updates under Window > Preferences > Install/Update > Available Software Sites.

If you missed the last two releases, here are my quick demos of 10 Neon.1 and 22 Neon.0 improvements:

Eclipse Neon: 5:30-minute demo of 22 nice improvements

The next and last Neon update, Neon.3, will be released on March 23, before the next annual main release, Oxygen, on June 28.



by howlger at January 19, 2017 02:30 PM

Eclipse Newsletter | Exploring New Technologies

January 19, 2017 10:42 AM

Jumpstart an Angular project, develop microservices w/fabric8, build a blog app w/JHipster 4, and test Java microservices.

January 19, 2017 10:42 AM

2017 Board Elections | Nominations Open

January 18, 2017 03:40 PM

Nominations for the 2017 Eclipse Foundation Board Election are open for Committer & Sustaining Member representatives.

January 18, 2017 03:40 PM

How to improve your GUI design with usability tests

by Rainer Klute (rainer.klute@itemis.de) at January 18, 2017 09:45 AM

Developing and actually using a graphical user interface (GUI) are two sides of a coin. As a software developer, have you ever wondered how users of your application are getting along with the GUI you created for them?

Someone who knows an application and its domain inside out certainly has a different view than someone who is just taking their first steps. This article shows, by example, how to improve your GUI with the help of usability tests.

The challenge: designing a supportive user interface

If you are aware of how a suboptimally designed GUI might hamper the user: congratulations! However, how can you find out which of several possible user interface variants supports the user experience best? Your own experience in the field will not necessarily contribute a good piece of advice, because quite likely you are professionally blinkered – ironically due to your very experience.

You need a method that offers a lot of insights and can help you make the right choices. Such a method is to sketch a few different GUI variants and have potential or actual users check them out. You don't need to implement these variants in your software. A wireframe tool to set up some mockups is sufficient, preferably with some scriptable interactivity.

A wizard page for Xtext

Our case deals with a wizard page for Xtext, a tool for creating domain-specific languages and associated language infrastructures. Don't worry, you do not need to understand in detail what Xtext is and what it does to follow this use case and the basic principle behind usability tests. It suffices to know that Xtext is Eclipse-based, has a wizard to create a new Xtext project, and that this wizard includes a particular page to configure some advanced options for a new Xtext project. Fig. 1 shows the implementation of this dialog in the current Xtext release 2.10.

Fig. 1: Original wizard page

There has been some discussion going on among Xtext core developers on whether this wizard page's design really satisfies users' needs or not. There wasn't a clear conclusion, so itemis' usability experts were asked to investigate further and run a usability test on the current GUI and some alternatives.

They used Balsamiq Mockups to draw wireframe models of the original wizard page and two alternative versions. Balsamiq Mockups is able to mimic some dynamic behavior. You can configure it e.g. to toggle a certain option from disabled to enabled state if some checkbox has been checked. This is really nice, because this way users can play around with mockup interfaces and get a more realistic impression of how the real implementation would behave. Fig. 2 shows a wireframe version of the screenshot in fig. 1.

Fig. 2: Wireframe of the original user interface (variant 1)

Running usability tests

In the relaxed atmosphere of a usability dinner, five software developers were asked to perform a simple task with all three user interface variants: creating a new Xtext project with support for an Eclipse editor front-end. The participants' Xtext experience ranged from none to moderate, and their Eclipse experience from none to senior-expert level. The "New Project" wizard, or at least the “Advanced Xtext Configuration” wizard page, was new to all of them.

While performing their task, they were asked to think aloud and comment on what they saw, what they liked, what they disliked, what irritated them, etc.

Hidden option dependencies

The most prominent test result: all users stumbled over an unexpected dependency between wizard options. The background: if you want to support a front-end in your Xtext project, you have to check "Generic IDE Support" first and then select the kind of front-end you want, i.e. Eclipse Plugin, IntelliJ IDEA Plugin or Web Integration. By default, "Generic IDE Support" is preselected in the wizard. However, the user could get into a situation where the option is disabled, e.g. because they unchecked it inadvertently.

No user was able to spot this dependency by just looking at the wizard page. Everyone at first checked "Eclipse Plugin" – only to run into an error message shown at the bottom of the page (see fig. 1 or fig. 2). Not everyone noticed that message immediately, and not everyone was immediately able to tell what to do next. Sure, sooner or later everyone managed to find out that they had to enable "Generic IDE Support" first in order to activate Eclipse support. However, this is a severe usability issue, because everyone was irritated by the unexpected dependency and by the "unavoidable" necessity of dealing with an error message.

Fig. 3: Automatically setting the "Generic IDE Support" option (variant 2)

A second wizard page variant (fig. 3) copes with this deficiency. It looks almost identical to the first version, but its dynamic behavior is different: if the user checks "Eclipse Plugin", the "Generic IDE Support" option is automatically checked too, i.e. the user interface essentially does by itself what the user would otherwise have to do. "Generic IDE Support" is also disabled, so it cannot be unchecked as long as "Eclipse Plugin" or one of the other front-end options is checked. Users liked this behaviour very much and had no issues fulfilling their task. This holds even though the dependency was still not explicitly visible in the GUI.

Fig. 4: Making option dependencies visible (variant 3)

A third variant of the wizard page visualized the options in a dependency hierarchy (fig. 4). Users were now able to see that

  • "Generic IDE Support" is a requirement for "Eclipse Plugin", "IntelliJ IDEA Plugin", and "Web Integration", and
  • there is no dependency between IDE issues and the remaining options like e.g. testing support or source layout.

On the other hand, some users found it confusing that they could not select "Eclipse Plugin" right away but instead had to check "Generic IDE Support" first.

Overall, users felt that variant 2 supported them best, followed by variant 3. Nobody preferred the original variant 1.

Explain your options!

As an additional result, it turned out that users don't necessarily understand what the options offered by the wizard actually mean. Our testers had quite different ideas – or no idea at all – of what "Eclipse Plugin", "IntelliJ IDEA Plugin", "Web Integration", "Generic IDE Support", "Testing Support", "Preferred Build System" and "Source Layout" might mean. The "Source Layout" option really took the cake: not a single user explained it correctly without seeing the available options. As a consequence, the developers should add tooltips to the options. These tooltips could explain each option in some detail, link to the appropriate section of the documentation, or even include it.

Bottom line

Consider drafting some alternative GUI variants or have usability experts do that for you and run usability tests! It will be a benefit to your users and might help propagate your software.

And please don't take for granted that everyone knows the terms you are familiar with! Explain them to your users!

by Rainer Klute (rainer.klute@itemis.de) at January 18, 2017 09:45 AM

EclipseCon France 2017 | Call for Papers

January 18, 2017 08:40 AM

Time to send us your proposals! Submissions close March 29. Submit by March 15 to be an early-bird pick.

January 18, 2017 08:40 AM

Eclipse Infrastructure Support for IP Due Diligence Types

by waynebeaton at January 16, 2017 10:01 PM

The Eclipse Foundation’s Intellectual Property (IP) Policy was recently updated and we’re in the process of updating our processes and support infrastructure to accommodate the changes. With the updated IP Policy, we introduced the notion of Type A (license certified) and Type B (license certified, provenance checked, and scanned) due diligence types for third-party dependencies that projects can opt to adopt.

With Type A, we assert only that third-party content is license compatible with a project. For Type B third-party content, the Eclipse Foundation’s IP Team invests considerable effort to also assert that the provenance is clear and that the code has been scanned to ensure that it is clear of all sorts of potential issues (e.g. copyright or license violations). The type of due diligence applies at the release level. That is, a project team can decide the level of scrutiny that they’d like to apply on a release-by-release basis.

For more background, please review License Certification Due Diligence and What’s Your (IP Due Diligence) Type?

All new projects at the Eclipse Foundation now start configured to use Type A by default. We envision that many project teams will eventually employ a hybrid solution where they have many Type A releases with periodic Type B releases.

The default due diligence type is recorded in the project’s metadata, stored in the Project Management Infrastructure (PMI). Any project committer or project lead can navigate to their project page, and click the “Edit” button to access project metadata.

project_edit.png

In the section titled “The Basics” (near the bottom), there’s a place where the project team can specify the default due diligence type for the project (it’s reported on the Governance page). If nothing is specified, Type B is assumed. Specifying the value at the project level is basically a way for the project team to make a statement that their releases tend to employ a certain type of due diligence for third-party content.

project_dd_type.png

Project teams can also specify the due diligence type for third-party content in the release record. Again, a project committer or project lead can navigate to a release record page, and click “Edit” to gain access to the release metadata.

release_dd_type.png

As with projects, the metadata for the IP due diligence type is found in the section titled “The Basics”. The field’s description is not exactly correct: if not specified in the release record, our processes all assume the value specified in the project metadata. We’ll fix this.

When the time comes to create a request to the Eclipse Foundation’s IP Team to review a contribution (a contribution questionnaire, or CQ), committers will see an extra question on the page for third-party content.

create_cq.png

As an aside, committers that have tried to use our legacy system for requesting IP reviews (the Eclipse Developer Portal) will have noticed that we’re now redirecting those requests to the PMI-based implementation. Project committers will find a direct link to this implementation under the Committer Tools block on their project’s PMI page.

We’ve added an extra field, Type, to the record that gets created in our IP tracking system (IPZilla); it will be set to Type_A or Type_B (in some cases, it may be empty, “–“). We’ve also added a new state, license_certified, which indicates that the license has been checked and the content can be used by the project in any Type A release.

Any content that is approved can be assumed to also be license certified.

There are many other questions that need to be answered, especially with regard to IP Logs, mixing IP due diligence types, and branding downloads. I’ll try to address these topics and more in posts over the next few days.



by waynebeaton at January 16, 2017 10:01 PM

Using MQTT-SN over BLE with the BBC micro:bit

by Benjamin Cabé at January 16, 2017 11:11 AM

The micro:bit is one of the best IoT prototyping platforms I’ve come across in the past few months.

The main MCU is a Nordic nRF51822 with 16K RAM and 256K Flash. A Freescale KL26Z is used to conveniently implement a USB interface as well as a mass storage driver, so that deploying code onto the micro:bit is as simple as copying a .hex file over USB (if you're familiar with the mbed ecosystem, this will sound familiar :-)).

The board is packed with all the typical sensors and actuators you need for prototyping an IoT solution: accelerometer, compass, push buttons, an LED matrix, … What’s really cool is the built-in BLE support, combined with the battery connector, making it really easy to have a tetherless, low-power [1] IoT testing device.

So how does one take the micro:bit and turn it into an IoT device? Since there is no Internet connectivity, you need to rely on some kind of gateway to bridge the constrained device that is the micro:bit to the Internet. You can of course implement your own protocol to do just that, but then you basically have to reinvent the wheel. That’s the reason why I thought the micro:bit would be ideal to experiment with MQTT-SN.

You can jump directly to the video tutorial at the end of the post, and come back later for more in-depth reading.

What is MQTT-SN and why you should care

If I were to oversimplify things, I would just say that MQTT-SN (which stands for “MQTT for Sensor Networks”, by the way) is an adaptation of the MQTT protocol to deal with constrained devices, both from a footprint/complexity standpoint and to adapt to the fact that constrained devices may not have TCP/IP support.

MQTT-SN is designed to make the packets as small as possible. An example is the fact that an MQTT-SN client registers the topic(s) it wishes to use with the server; this way, further PUBLISH or SUBSCRIBE exchanges only have to deal with a 2-byte ID, as opposed to a possibly very long UTF-8 string.
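
To make this concrete, here is a hedged sketch (not from the original post) of that REGISTER/PUBLISH exchange using the Eclipse Paho MQTT-SN serializer discussed below; the topic name, packet ID, payload and topic ID are illustrative, and transport_sendPacketBuffer is the send helper shown later in this post:

unsigned char buf[200];
int buflen = sizeof(buf);
unsigned short packetid = 1;

/* 1. Register the full topic name once */
MQTTSNString topicstr;
topicstr.cstring = "microbit/accelerometer";
int len = MQTTSNSerialize_register(buf, buflen, 0, packetid, &topicstr);
transport_sendPacketBuffer(buf, len);
/* ... the gateway replies with a REGACK carrying a 2-byte topic ID ... */
unsigned short topicid = 42; /* illustrative: would be taken from the REGACK */

/* 2. All further publishes only carry the short topic ID */
MQTTSN_topicid topic;
topic.type = MQTTSN_TOPIC_TYPE_NORMAL;
topic.data.id = topicid;
unsigned char payload[] = "{\"x\":123}";
len = MQTTSNSerialize_publish(buf, buflen, 0, 0, 0, packetid,
                              topic, payload, sizeof(payload) - 1);
transport_sendPacketBuffer(buf, len);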

Like I said before, you really don’t want to reimplement your own protocol, and using MQTT-SN just makes lot of sense since it bridges very naturally to good ol’ MQTT.

Setting up an MQTT-SN client on the micro:bit

The micro:bit runtime supports the BLE UARTService from Nordic, which essentially mimics a classical UART by means of two BLE characteristics, for RX and TX. This is what we’ll use as our communication channel.

The Eclipse Paho project provides an MQTT-SN embedded library that turns out to be really easy to use. It allows you to serialize and deserialize MQTT-SN packets; the only remaining thing for you to do is to actually transmit them (send or receive) over your communication channel – BLE UART in our case.

In order to show you how simple the library is to use, here’s an example of how you would issue a CONNECT:

/* Scratch buffer used to serialize and read MQTT-SN packets */
unsigned char buf[200];
int buflen = sizeof(buf);

MQTTSNPacket_connectData options = MQTTSNPacket_connectData_initializer;
options.clientID.cstring = microbit_friendly_name();
int len = MQTTSNSerialize_connect(buf, buflen, &options);
int rc = transport_sendPacketBuffer(buf, len);

/* wait for connack */
rc = MQTTSNPacket_read(buf, buflen, transport_getdata);
if (rc == MQTTSN_CONNACK)
{
    int connack_rc = -1;

    if (MQTTSNDeserialize_connack(&connack_rc, buf, buflen) != 1 || connack_rc != 0)
    {
        return -1;
    }
    else {
        // CONNECTION OK - continue
    }
} else {
    return -1;
}

Now what’s behind the transport_sendPacketBuffer and transport_getdata functions? You’ve guessed correctly: this is where we either send or read a buffer to/from the BLE UART.
Using the micro:bit UART service API, the code for transport_getdata is indeed very straightforward:

/* Read up to 'count' bytes from the BLE UART service into 'buf' */
int transport_getdata(unsigned char* buf, int count)
{
    int rc = uart->read(buf, count, ASYNC);
    return rc;
}
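
The write side is symmetrical. Here is a minimal sketch of what it could look like, assuming the micro:bit UART service exposes a send() call mirroring read() above (check the MicroBitUARTService API for the exact signature):

/* Push 'len' serialized bytes out over the BLE UART TX characteristic */
int transport_sendPacketBuffer(unsigned char* buf, int len)
{
    return uart->send(buf, len, ASYNC);
}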

You can find the complete code for publishing the micro:bit accelerometer data over BLE on my GitHub. Note that, for the sake of simplicity, I’ve disabled Bluetooth pairing so that connecting to a BLE/MQTT-SN gateway just works out of the box.

MQTT-SN gateway

There are a few MQTT-SN gateways available out there, and you should feel free to use the one that floats your boat. Some (most?) MQTT-SN gateways will also behave as regular MQTT brokers, so you won’t necessarily have to bridge the MQTT-SN devices to MQTT strictly speaking, but can rather use the gateway directly as your MQTT broker.
For my tests, I’ve been pretty happy with RSMB, an Eclipse Paho component, which you can get from GitHub.

The README of the project is pretty complete and you should be able to have your RSMB broker compiled in no time. The default configuration file for RSMB should be named broker.cfg (you can specify a different configuration file on the command line, of course).
Below is an example of a configuration file that makes RSMB behave both as a good ol’ MQTT broker and as an MQTT-SN gateway, bridged to iot.eclipse.org’s MQTT sandbox broker. Note that in my example I only care about publishing messages, so the bridge is configured in out mode, meaning that messages only flow from my MQTT-SN devices to iot.eclipse.org, and not the other way around. Your mileage may vary if you also want your MQTT-SN devices to be able to subscribe to messages, in which case the bridging mode should be set to both.

# will show you packets being sent and received
trace_output protocol

# MQTT listener
listener 1883 INADDR_ANY mqtt

# MQTT-S listener
listener 1884 INADDR_ANY mqtts

# QoS 2 MQTT-S bridge
connection mqtts
  protocol mqtt
  address 198.41.30.241:1883
  topic # out

Bridging the BLE device(s) to the MQTT-SN gateway

Now there is still one missing piece, right? We need some piece of software for forwarding the messages coming from the BLE link, to the MQTT-SN gateway.

I’ve adapted an existing Node.js application that does just that. For each BLE device that attaches to it, it creates a UDP socket to the MQTT-SN gateway, and transparently routes packets back and forth. When the micro:bit “publishes” an MQTT-SN packet, it is just as if it were directly talking to the MQTT-SN gateway.

The overall architecture chains these pieces together: micro:bit (MQTT-SN over BLE UART) → Node.js bridge (BLE to UDP) → RSMB (MQTT-SN gateway and MQTT broker) → iot.eclipse.org (MQTT).

Note that it would be more elegant (and would also avoid some nasty bugs [2]) to leverage MQTT-SN’s encapsulation mechanism so as to make the bridge even more straightforward, and not have to maintain one UDP socket per BLE device. To quote the MQTT-SN specification:

The forwarder simply encapsulates the MQTT-SN frames it receives on the wireless side and forwards them unchanged to the GW; in the opposite direction, it decapsulates the frames it receives from the gateway and sends them to the clients, unchanged too.

Unfortunately RSMB does not support encapsulated packets at this point, but you can rely on this fork if you want to use encapsulation: https://github.com/MichalFoksa/rsmb.

Visualizing the data: mqtt-spy to the rescue!

As in my previous article about Android Things, I used mqtt-spy to visualize the data coming from the sensors.

Note that publishing sensor data in JSON might not be the best idea in production: the MTU of a BLE packet is just 20 bytes, and those extra curly braces, commas, and double quotes are all bytes you won’t be able to use for your MQTT payload. You may want to look at something like CBOR for creating small, yet typed, binary payloads.
However, JSON is of course pretty convenient, since there’s a plethora of libraries out there that allow you to easily manipulate the data…

Using mqtt-spy, it’s very easy to visualize the values we’re collecting from the accelerometer of the micro:bit, either in “raw” form, or on a chart, using mqtt-spy’s ability to parse JSON payloads.

Video tutorial and wrap-up

I’ve wanted to give MQTT-SN a try for a long time, and I’m really happy I took the time to do so. All in all, I would summarize my findings as follows:

  • The Eclipse Paho MQTT-SN embedded client just works! Similarly to the MQTT embedded client, it is very easy to take it and port it to your embedded device, and no matter what actual transport layer you are using (Bluetooth, Zigbee, UDP, …), you essentially just have to provide an implementation of “transport_read” and “transport_write”.
  • You may want to be careful when doing things like “UART over BLE”. The main point of BLE is that it’s been designed to be really low-power, so if you tend to communicate excessively or to remain paired with the gateway all the time, you will likely kill your battery in no time!
  • The nRF5x series from Nordic is very widely available on the market, so it would be really interesting to run a similar MQTT-SN stack on devices other than the micro:bit, thereby demonstrating how it truly enables interoperability. If you build something like this, I really want to hear from you!
  • Although it’s true that there are not quite as many MQTT-SN libraries and gateways available out there as there are for MQTT, the protocol is pretty straightforward and that shouldn’t prevent you from giving it a try!


Notes:

  1. You should keep in mind that the micro:bit, like other similar boards, is meant to be a prototyping platform; for example, having the KL26Z core taking care of the USB controller might not be ideal battery-wise if you only care about doing tetherless BLE communications.
  2. RSMB expects the first packet received on an incoming UDP connection to be a CONNECT packet. If the bridge forwards everything to the gateway transparently, that may not always be the case. If, instead, it takes care of encapsulating all MQTT-SN packets properly, you then need only one UDP socket from your BLE/UDP bridge to the gateway.

by Benjamin Cabé at January 16, 2017 11:11 AM

ECF 3.13.4 now available

by Scott Lewis (noreply@blogger.com) at January 15, 2017 06:13 PM

ECF 3.13.4 is now available. This is a maintenance release, with bug fixes for the Eclipse tooling for OSGi Remote Services and an update of the Apache HttpClient filetransfer provider contributed to Eclipse.

by Scott Lewis (noreply@blogger.com) at January 15, 2017 06:13 PM

What’s Your (IP Due Diligence) Type?

by waynebeaton at January 13, 2017 07:01 PM

Long-time Eclipse committer Ian Bull initiated an interesting short chat on Twitter yesterday about one big challenge when it comes to intellectual property (IP) management. Ian asked about the implications of somebody forking an open source project, changing the license in that fork, and then distributing the work under that new license.

We can only surmise why somebody might do this (at least in the hypothetical case), but my optimistic nature tends toward assuming that this sort of thing isn’t done maliciously. But, frankly, this sort of thing does happen and the implications are the same regardless of intent.

Even-longer-time Eclipse committer Doug Schaefer offered an answer.

The important takeaway is that changing a license on intellectual property that you don’t own is probably bad, and everybody who touches it will potentially be impacted (e.g. may face litigation). I say “probably bad”, because some licenses actually permit relicensing.

Intellectual property management is hard.

The Eclipse Foundation has a dedicated team of intellectual property analysts that do the hard work on behalf of our open source project teams. The IP Team performs analysis on the project code that will be maintained by the project and on third-party libraries that are maintained elsewhere. It’s worth noting that there is no such thing as zero risk; the Eclipse IP Team’s work is concerned with minimising, understanding, and documenting risk. When they reject a contribution or third-party library use request, they do so to the benefit of the project team, adopters of the project code, and everybody downstream.

In yesterday’s post, I introduced the notion of Type A (license certified) and Type B (license certified, provenance checked, and scanned) due diligence. The scanned part of Type B due diligence includes—among many other things—the detection of the sort of relicensing that Ian asked about.

Since we don’t engage in the same sort of deep dive into the code, we wouldn’t detect this sort of thing with the license certification process that goes with Type A. That is, of course, not to say that it’s okay to use inappropriately relicensed third-party code in a Type A release; we just wouldn’t detect it via Type A license certification due diligence. This suggests a heightened risk to consider with Type A as compared to Type B.

Type B due diligence is more resource intensive and so potentially takes a long time to complete. One of the great benefits of Type A is that the analysis is generally faster, enabling a project team to get releases out quickly. For this reason, I envision a combined approach (some Type A releases mixed with less frequent Type B releases) being appealing to many project teams.

So project teams need to decide, for themselves and for their downstream consumers, what sort of due diligence they require. I’ve already been a part of a handful of these discussions and am more than happy to participate in more. Project teams: you know how to find me.

It’s worth noting that the Eclipse Foundation’s IP Team still does more due diligence review with Type A analysis than any other open source software foundation and many commercial organisations. If a committer suspects that shenanigans may be afoot, they can ask the IP Team to engage in a deeper review (a Type A project release can include Type B approved artifacts).

April wrapped up the Twitter conversation nicely.

Indeed. Kudos to the Eclipse Intellectual Property Team.

If you want to discuss the differences between the types of due diligence, our implementation of the Eclipse IP Policy changes, or anything else, I’ll be at Eclipse Converge and Devoxx US. Register today.




by waynebeaton at January 13, 2017 07:01 PM

License Certification Due Diligence

by waynebeaton at January 12, 2017 08:02 PM

With the changes in the Eclipse Intellectual Property (IP) Policy made in 2016, the Eclipse Foundation now offers two types of IP Due Diligence for the third-party software used by a project. Our Type A Due Diligence involves a license certification only and our Type B Due Diligence provides our traditional license certification, provenance check, and code scan for various sorts of anomalies. I’m excited by this development at least in part because it will help new projects get up to speed more quickly than they could have in the past.

Prior to this change, project teams would have to wait until the full application of what we now call Type B Due Diligence was complete before issuing a release. Now, a project team can opt to push out a Type A release after having all of their third-party libraries license certified.

A project team can decide what level of IP Due Diligence they require for each release. Hypothetically, a project team could opt to make several Type A releases followed by a Type B release, and then switch back. I can foresee project teams that need to engage in short release cycles doing this.

We’ve solicited a few existing projects to try out the new IP Due Diligence type and have already approved a handful of third-party libraries as Type A. The EMO has also started assuming that all new projects use Type A (license certification) by default. As we move forward, we expect that all new projects will employ Type A Due Diligence for all incubation releases and then decide whether or not to switch to Type B (license certification, provenance check, and code scan) for their graduation. There is, of course, no specific requirement to switch at graduation or ever, but we’re going to encourage project teams to defer the decision of whether or not to switch from Type A until that point.

After graduation, project teams can decide what they want to do. We foresee at least some project teams opting to issue multiple regular Type A releases along with an annual Type B release (at this point in time, there is no specific requirement to be Type A or Type B to participate in the simultaneous release).

We’ve started rolling out some changes to the infrastructure to support this update to the IP Due Diligence process. I’ll introduce those changes in my next post.

Update: Based on some Tweets, I changed my intended topic for the next post. Please see What’s Your (IP Due Diligence) Type?




by waynebeaton at January 12, 2017 08:02 PM

Making @Service annotation even cleverer

by Tom Schindl at January 12, 2017 06:45 PM

As some of you might know, e(fx)clipse provides an Eclipse DI extension supporting more powerful features when dealing with OSGi services:

  • Support for dynamics (e.g. if a higher-ranked service comes along, you get it injected, …)
  • Support for service list
  • ServiceFactory support because the request is made from the correct Bundle

Since tonight’s build, the @Service annotation has support to define:

  • A static, compile-time-defined LDAP filter expression
    public class MySQLDIComponent {
      @Inject
      public void setDataSource(
        @Service(filterExpression="(dbtype=mysql)") 
        DataSource ds) {
         // ...
      }
    }
    
    public class H2DIComponent {
      @Inject
      public void setDataSource(
        @Service(filterExpression="(dbtype=h2)") 
        DataSource ds) {
         // ...
      }
    }
    
  • A dynamic LDAP filter expression that is calculated at runtime and can change at any time
    public class CurrentDatasource extends BaseValueObservable<String> implements OString {
      @Inject
      public CurrentDatasource(
        @Preference(key="database",defaultValue="h2") String database) {
        super("(dbtype="+database+")");
      }
    
      @Inject
      public void setDatabase(
        @Preference(key="database",defaultValue="h2") String database) {
        setValue("(dbtype="+database+")");
      }
    }
    
    public class DIComponent {
      @Inject
      public void setDataSource(
        @Service(dynamicFilterExpression=CurrentDatasource.class)
        DataSource ds) {
        // ...
      }
    }
    

    Notice that the dynamic filter provider itself is fully integrated into the DI story 😉
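
For completeness, here is a hedged sketch of how a DataSource service carrying the dbtype property (which the filter expressions above select on) could be registered using OSGi Declarative Services; the H2 base class and component name are illustrative assumptions, not part of e(fx)clipse:

import javax.sql.DataSource;

import org.h2.jdbcx.JdbcDataSource;
import org.osgi.service.component.annotations.Component;

// Publishes an H2-backed DataSource with the 'dbtype' service property
// that the @Service filter expressions above match against.
@Component(service = DataSource.class, property = "dbtype=h2")
public class H2DataSourceComponent extends JdbcDataSource {
}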



by Tom Schindl at January 12, 2017 06:45 PM

JBoss Tools 4.4.3.AM1 for Eclipse Neon.2

by jeffmaury at January 11, 2017 09:51 PM

Happy to announce the 4.4.3.AM1 (Developer Milestone 1) build for Eclipse Neon.2.

Downloads available at JBoss Tools 4.4.3 AM1.

What is New?

Full info is at this page. Some highlights are below.

OpenShift 3

Although our main focus is bug fixes, we continue to work on providing a better experience for container-based development in JBoss Tools and Developer Studio. Let’s go through a few interesting updates here; you can find more details on the What’s New page.

Scaling from pod resources

When an application was deployed to OpenShift, it used to be possible to scale the pod resources only from the service resource.

scale command from service

However, that was not a very logical choice, so the command is now also available at the pod level, leading to better usability.

scale command from pod

Enjoy!

Jeff Maury


by jeffmaury at January 11, 2017 09:51 PM

JSON Forms – Day 3 – Extending the UI Schema

by Maximilian Koegel and Jonas Helming at January 11, 2017 09:15 AM

JSON Forms is a framework to efficiently build form-based web UIs. These UIs are targeted at entering, modifying and viewing data and are usually embedded within an application. JSON Forms eliminates the need to write HTML templates and JavaScript for manual data binding to create customizable forms, by leveraging the capabilities of JSON and JSON Schema as well as by providing a simple and declarative way of describing forms. Forms are then rendered within a UI framework – currently based on AngularJS. If you would like to know more about JSON Forms, the JSON Forms homepage is a good starting point.

In this blog series, we would like to introduce the framework based on a real-world example application, a task tracker called “Make It Happen”. On days 0 and 1 we defined our first form, and on day 2 we introduced the UI schema and adapted it for our sample application.

Day 2 resulted in a functional form looking like this:

jsonforms_blogseries_day2_form

If you would like to follow this blog series, please follow us on Twitter. We will announce every new blog post on JSON Forms there.

The goal of our third iteration on the “Make It Happen” example is to enhance the data schema with additional attributes and update the UI schema accordingly. 

So far, the JSON Schema for our data entity defined three attributes:

  • “Name” (String) – mandatory
  • “Description” (multi-line String)
  • “Done” (Boolean).

In our third iteration, we add two additional attributes to the Task entity. These additional attributes are:

  • “Due Date” (Date)
  • “Rating” (Integer)

As JSON Forms uses JSON Schema as the basis for all forms, we start by enhancing the data schema with the new attributes. The following listing shows the complete data schema; only due_date and rating had to be added, though:

{
    "type": "object",
    "properties": {
      "name": {
        "type": "string"
      },
      "description": {
        "type": "string"
      },
      "done": {
        "type": "boolean"
      },
      "due_date": {
        "type": "string",
        "format": "date"
      },
      "rating": {
        "type": "integer",
        "maximum": 5
      }
    },
    "required": ["name"]
}

Based on the extended data schema, we also need to extend the UI schema to add the new properties to our rendered form:

{
  "type": "VerticalLayout",
  "elements": [
    {
      "type": "Control",
      "scope": {
        "$ref": "#/properties/name"
      }
    },
    {
      "type": "Control",
      "label": false,
      "scope": {
        "$ref": "#/properties/done"
      }
    },
    {
      "type": "Control",
      "scope": {
        "$ref": "#/properties/description"
      },
      "options": {
        "multi":true
      }
    },
    {
      "type": "Control",
      "scope": {
        "$ref": "#/properties/due_date"
      }
    },
    {
      "type": "Control",
      "scope": {
        "$ref": "#/properties/rating"
      }
    }
  ]
}

Based on those two schemas, the JSON Form renderer will now automatically produce this form:

jsonforms_blogseries_day3_form

Note that JSON Forms automatically creates the correct widgets for the new attributes: a date picker for “due date” and an input field for “rating”. For rating, it would be nice to have a more specialized control, though. This is possible with JSON Forms and will be described later in this series. Please also note that those controls are automatically bound to the underlying data and provide the default features such as validation.

Another interesting feature often required in forms is to control the visibility of certain controls based on the current input data. This is supported in JSON Forms; we will describe this rule-based visibility next week.

If you are interested in trying out JSON Forms, please refer to the Getting-Started tutorial. It explains how to set up JSON Forms in your project and how you can try the first steps out yourself. If you would like to follow this blog series, please follow us on twitter. We will announce every new blog post on JSON Forms there.

We hope to see you soon for the next day!





by Maximilian Koegel and Jonas Helming at January 11, 2017 09:15 AM

First Papyrus IC Research/Academia webinar of 2017

by tevirselrahc at January 10, 2017 05:29 PM

If you’ve been following this blog, you already know that I have an Industry Consortium.

And if you looked at the Papyrus Industry Consortium’s (PIC) website, you also know that it has a Research and Academia Committee!

And that committee is known to hold very interesting webinars about various aspects of modeling, open source, and, of course, ME!

Well, the first webinar of the year will happen this Friday, January 13th, at 16:00 – 17:00 CET, 15:00 – 16:00 GMT, 10:00 – 11:00 EST.

Our first speaker of 2017 is none other than Jordi Cabot, ICREA Research Professor at IN3 (Open University of Catalonia), a well-known member of our community with many years of experience as a researcher in Model Driven Engineering and in open-source software and the driving force behind the MOdeling LAnguages blog.

Jordi will be talking about some of the key factors in the success of open-source software projects. His talk is titled:

Wanna see your OSS project succeed? Nurture the community

I hope you will join us for this very interesting talk.

You can find the connection information in the Papyrus IC wiki.



by tevirselrahc at January 10, 2017 05:29 PM

Use the Eclipse Java Development Tools in a Java SE application

January 09, 2017 11:00 PM

Stephan Herrmann has announced that some libraries of the Eclipse Neon.2 release are now available on Maven Central.

Some Eclipse jars are now available in the central repository

It is now easy to reuse pieces of Eclipse outside any Eclipse-based application. Let me share with you this simple example: using the Java code formatter of Eclipse JDT in a simple Java main class.

Step 1: create a very simple maven project. You will need org.eclipse.jdt.core as a dependency.

Listing 1. Example pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>example</groupId>
  <artifactId>java-formatter</artifactId>
  <version>1.0.0-SNAPSHOT</version>

  <dependencies>
    <dependency>
      <groupId>org.eclipse.jdt</groupId>
      <artifactId>org.eclipse.jdt.core</artifactId>
      <version>3.12.2</version>
    </dependency>
  </dependencies>
</project>

Step 2: write a java class with a main method.

Listing 2. Example main class
import java.util.Properties;

import org.eclipse.jdt.core.JavaCore;
import org.eclipse.jdt.core.ToolFactory;
import org.eclipse.jdt.core.formatter.CodeFormatter;
import org.eclipse.jdt.internal.compiler.impl.CompilerOptions;
import org.eclipse.jface.text.BadLocationException;
import org.eclipse.jface.text.Document;
import org.eclipse.jface.text.IDocument;
import org.eclipse.text.edits.TextEdit;

public class MainFormatter {

  public static void main(String[] args) {
    String result;

    String javaCode = "public class MyClass{ "
                        + "public static void main(String[] args) { "
                        + "System.out.println(\"Hello World\");"
                        + " }"
                        + " }";

    Properties prefs = new Properties();
    prefs.setProperty(JavaCore.COMPILER_SOURCE, CompilerOptions.VERSION_1_8);
    prefs.setProperty(JavaCore.COMPILER_COMPLIANCE, CompilerOptions.VERSION_1_8);
    prefs.setProperty(JavaCore.COMPILER_CODEGEN_TARGET_PLATFORM, CompilerOptions.VERSION_1_8);

    CodeFormatter codeFormatter = ToolFactory.createCodeFormatter(prefs);
    IDocument doc = new Document(javaCode);
    try {
      TextEdit edit = codeFormatter.format(CodeFormatter.K_COMPILATION_UNIT | CodeFormatter.F_INCLUDE_COMMENTS,
                                             javaCode, 0, javaCode.length(), 0, null);
      if (edit != null) {
        edit.apply(doc);
        result = doc.get();
      }
      else {
        result = javaCode;
      }
    }
    catch (BadLocationException e) {
      throw new RuntimeException(e);
    }

    System.out.println(result);
  }
}

Step 3: there is no step 3! You can just run your code in your IDE or from the command line using maven to compute your classpath.
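
For instance, assuming the class above lives in the default package, a command along these lines should work (the exec-maven-plugin is resolved on the fly):

mvn compile exec:java -Dexec.mainClass=MainFormatter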

Console output

The code used in this example is a simplification of what you can find in another great open-source project: JBoss Forge Roaster.


January 09, 2017 11:00 PM

Eclipse Neon.2 is on Maven Central

by Stephan Herrmann at January 09, 2017 10:21 PM

It’s done, finally!

Bidding farewell to my pet peeve

In my job at GK Software I have the pleasure of developing technology based on Eclipse. But the colleagues consuming my technology work on software that has no direct connection to Eclipse or OSGi. Their build technology of choice is Maven (without Tycho, that is). So whenever their build touches my technology, we are facing a “challenge”. It doesn’t make a big difference whether they are just invoking a code generator built using Xtext etc. or whether some Eclipse technology should actually be included in their application runtime.

Among many troubles, I recall one situation that really opened my eyes: one particular build had been running successfully for some time, until one day it was fubar. One Eclipse artifact could no longer be resolved. Long nights of searching followed to find out why that artifact might have disappeared, but we reassured ourselves: nothing had disappeared. Quite to the contrary, somewhere on the wide internet (Maven Central to be precise) a new artifact had appeared. So what? Well, that artifact was the same that we also had on our internal servers. Well, if it’s the same, what’s the buzz? It turned out it had a one-char difference in its version: instead of 1.2.3.v20140815 its version was 1.2.3-v20140815. Yes, take a close look, there is a difference. Bottom line: with both almost-identical versions available, Maven couldn’t figure out what to do; maybe each was considered worse than the other, to the effect that Maven simply failed to use either. Go figure.

More stories like this and I realized that relying on Eclipse artifacts in Maven builds was always at the mercy of some volunteers, who typically don’t have a long-term relationship to Eclipse, who filled in a major gap by uploading individual Eclipse artifacts to Maven Central (thanks to you volunteers, please don’t take it personally: I’m happy that your work is no longer needed). Anybody who has ever studied the differences between Maven and OSGi (wrt dependencies and building that is) will immediately see that there are many possible ways to represent Eclipse artifacts (OSGi bundles) in a Maven pom. The resulting “diversity” was one of my pet peeves in my job.

At this point I decided to be the next volunteer who would screw up other people’s builds – strike that – who would collaborate with the powers that be at Eclipse.org to produce the official uploads to Maven Central.

As of today, I can report that this dream has become reality: all relevant artifacts of Neon.2 that are produced by the Eclipse Project are now “officially” available from Maven Central.

Bridging between universes

I should like to report some details of how our artifacts are mapped into the Maven world:

The main tool in this endeavour is the CBI aggregator, a model-based tool for transforming p2 repositories in various ways. One of its capabilities is to create a Maven repository (a dual-use repo actually, but the p2 side of this is immaterial to this story). That tool does a great job of extracting metadata from the p2 repo in order to create “meaningful” pom files, the key feature being: it copies all dependency information, which is originally authored in MANIFEST.MF, into corresponding declarations in the pom file.

Still, a few things had to be settled, either by improving the tool, by fine-tuning the input to the tool, or by some steps of post-processing the resulting Maven repo.

  • Group IDs
    While OSGi artifacts only have a single qualified Bundle-SymbolicName, Maven requires a two-part name: groupId x artifactId. It was easy to agree on using the full symbolic name for the artifactId, but what should the groups be? We settled on these three groups for the Eclipse Project:

    • org.eclipse.platform
    • org.eclipse.jdt
    • org.eclipse.pde
  • Version numbers
    In Maven land, release versions have three segments; in OSGi we maintain a fourth segment (qualifier) also for releases. To play by Maven rules, we decided to use three-part versions for our uploads to Maven Central. This emphasizes the strategy to only publish releases, for which the first three parts of the version are required to be unique. (See the example dependency after this list.)
  • 3rd party dependencies
    All non-Eclipse artifacts that we depend on should be referenced by their proper coordinates in Maven land. By default, the CBI aggregator assigns all artifacts to the synthetic group p2.osgi.bundle, but if someone depends on p2.osgi.bundle:org.junit this doesn’t make much sense. In particular, it must be avoided that projects consuming Eclipse artifacts get the same 3rd party library under two different names (perhaps in different versions?). We identified 16 such libraries and their proper coordinates.
  • Source artifacts
    Eclipse plug-ins have their source code in corresponding .source plug-ins. Maven has a similar convention, just using a “classifier” instead of appending to the artifact name. In Maven we conform to their convention, so that tools like m2e can correctly pick up the source code from any dependencies.
  • Other meta data
    Then followed a hunt for project URLs, SCM coordinates, artifact descriptions and related data. Much of this could be retrieved from our MANIFEST.MF files; some information is currently mapped using a static, manually maintained mapping. Other information like licences and organization is fully static during this process. In the end, everything was approved by the validation on the OSSRH servers.
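
To illustrate what this mapping means for consumers, depending on a platform bundle now looks like any other Maven dependency. The coordinates below follow the scheme just described; the artifact and version shown are only an example:

<dependency>
  <groupId>org.eclipse.platform</groupId>
  <artifactId>org.eclipse.core.runtime</artifactId>
  <version>3.12.0</version>
</dependency>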

If you want to browse the resulting wealth, you may start at the three groups listed above on Maven Central.

Everything with fully qualified artifact names in these groups (and date of 2017-01-07 or newer) should be from the new, “official” upload.

This is just the beginning

The bug on which all this has been booked is Bug 484004: Start publishing Eclipse platform artifacts to Maven central. See the word “Start”?

Two follow-up tasks are already on the board:

(1) Migrate all the various scripts, tools, and models to the proper git repo of our releng project. At the end of the day, this process of transformation and upload should become a routine operation to be invoked by our favourite build meisters.

(2) Fix any quirks in the generated pom files. E.g., we already know that the process did not handle fragments in an optimal way. As a result, consuming SWT from the new upload is not straightforward.

Both issues should be handled in or off bug 510072, in the hope that, when we publish Neon.3, the new “official” Maven coordinates of Eclipse artifacts will fit all real-world uses. So: please test, and report in the bug any problems you might find.

(3) I was careful to say “Eclipse Project”. We don’t yet have the magic wand to apply this to literally all artifacts produced in the Eclipse community. Perhaps someone will volunteer to apply the approach to everything from the Simultaneous Release? If we can publish 300+ artifacts, we can also publish 7000+, can’t we? 🙂

happy building!



by Stephan Herrmann at January 09, 2017 10:21 PM

EMF Forms 1.11.0 Feature: Grid Table and more

by Maximilian Koegel and Jonas Helming at January 02, 2017 01:28 PM

With Neon.1, we released EMF Forms 1.11.0. EMF Forms makes it really simple to create forms that edit your data based on an EMF model. To get started with EMF Forms, please refer to our tutorial. In this post, we wish to outline the improvements in release 1.11.0: an alternative table renderer based on the Nebula Grid table.

EMF Forms allows you to describe a form-based UI in a simple and technology-independent model, which in turn is translated by a rendering component to create the actual UI. Besides controls for simple values and layouts, EMF Forms also supports tables. So, instead of manually implementing columns, data binding, validation, and all other typical table features, you only need to specify which attributes of which elements shall be displayed in the table. Like all other controls, this is specified in the view model. The following screenshot shows a simple view with one table containing elements of type “Task”.

image13

Please note the property “Detail Editing”, which is set to “WithPanel”. This is already a more advanced option of the table renderer: it will display a detail panel below the table when you click on an entry (see the following screenshot). The default, of course, is to edit the values directly in the table cells.

image16

Now imagine how long it would have taken you to implement the table above. In EMF Forms, you can literally do this within a minute. However, there is another scenario in which the approach is even more powerful.

Imagine you have manually developed a few tables for your UI using the default SWT table. Now, you want to enable “single cell selection”, meaning you can only select a single cell instead of a complete row. This is not possible with the SWT Table, so you must switch to another table implementation, e.g. Nebula Grid or NatTable. Had you manually implemented your tables, you would have to port all of your code to the new API. With EMF Forms, however, you simply need to provide a new renderer: the component responsible for turning the view model’s table specification into the running UI. The renderer is then used for all tables in your application, so you only need to do this work once. For tables, EMF Forms already provides an alternative renderer out of the box. As you can see in the following screenshot, it uses Nebula Grid to render the same table, but enables single cell selection. To use it, just include the new renderer feature (org.eclipse.emf.ecp.view.table.ui.nebula.grid.feature) in your application, and you are again done in less than a minute.

image15

As shown with the example of tables, enhancing the existing renderers enables all kinds of customization. Please note that the framework already includes a variety of renderers, but it is also simple to write your own. If you miss any feature or way to adapt it, please provide feedback by submitting bugs or feature requests, or contact us if you are interested in enhancements or support.





by Maximilian Koegel and Jonas Helming at January 02, 2017 01:28 PM

New year resolution for using Eclipse – Hiding the toolbar

by Lars Vogel at January 01, 2017 01:12 PM

Happy 2017.

For this year I plan to use Eclipse without the toolbar. I think this will force me to use more shortcuts, e.g. for perspective switching, for starting the last launched program and the like. It also gives me more “real estate” in the IDE for the code.

If you want to do the same, select Window -> Appearance -> Hide Toolbar from the menu.


by Lars Vogel at January 01, 2017 01:12 PM