EMF Forms 1.17.0 Feature: Table Detail Panes

by Jonas Helming and Maximilian Koegel at August 10, 2018 10:06 AM

EMF Forms makes it easy to create forms that are capable of editing your data based on an EMF model. To get started with EMF Forms, please refer to our tutorial. If you are an adopter of EMF Forms, please note that we have recently published 1.17.1, an update to 1.17.0. The update fixes three bugs which occurred when using EMF Forms on Photon. Please see here for details, and visit our download page to get the new release.

In this post, we would like to outline a new feature in the 1.17.0 release: The improved table detail panes.

While EMF Forms is well known for supporting form-based UIs with classic input fields, such as text controls or checkboxes, it also supports showing lists of elements in tables and list views, as well as hierarchies in trees.

As an example, you can very easily create a tree like this:

[Screenshot: a tree rendered with EMF Forms]

Or a table like this:

[Screenshot: a table rendered with EMF Forms]

With 1.17.0, we have updated the documentation; please see this tutorial for tables and this tutorial for tree views.

Any control showing several elements can allow inline editing (like the EMF Forms table does), show a detail pane (like the tree does), or both. As an example, if the elements shown in a table have many attributes, you could show some of them in the table and all of them in a detail pane. To do so, just enable the detail pane on the TableControl in the view model:

[Screenshot: enabling the detail pane on the TableControl in the view model]

The result, after removing most columns from the table, would then look like this:

[Screenshot: a table with a detail pane]

Alternatively, you can set the DetailEditing property to “WithDialog”. As a result, the renderer opens a separate window showing the details when an element is double-clicked. With 1.17.0, both options are supported by all table renderers, including the table renderer based on Nebula Grid.

You might wonder where the layout of the detail pane comes from. The detail itself is rendered with EMF Forms: the framework retrieves the view model for the selected element, so if you already have a view model for the type “User”, it will be used in the detail pane as well. For this to work, you need to register the view model with EMF Forms, by default via an extension point.

Another minor improvement that comes with 1.17.0: you can also try out those detail panes with a separate view model in the preview provided by the EMF Forms tooling. To do so, add the additional view models to the preview using the “Manage Additional Views” button in the toolbar of the preview.

[Screenshot: the “Manage Additional Views” button in the preview toolbar]

Any view model added here will be picked up by the preview when a detail pane is to be rendered.

As with all EMF Forms features, the detail panes and the respective tooling are of course adaptable to even more custom requirements. If there are any features you miss or ways you wish to adapt them, please provide feedback by submitting bugs or feature requests, or contact us if you are interested in enhancements or support.


by Jonas Helming and Maximilian Koegel at August 10, 2018 10:06 AM

We are hiring 2 Eclipse developers

by Andrey Loskutov (noreply@blogger.com) at August 09, 2018 12:47 PM

We are hiring again!

We have 2 open positions for Eclipse developers in our main office in Böblingen, Germany (no, it is not a remote job).

The job focus is Java/Eclipse development in the context of a very complex Eclipse-based IDE used as the front end for a semiconductor tester.

We speak English and Java here. If you are interested, just drop me a mail.


by Andrey Loskutov (noreply@blogger.com) at August 09, 2018 12:47 PM

Modeling Symposium @ EclipseCon Europe 2018

by Jonas Helming and Maximilian Koegel at August 08, 2018 11:45 AM

We are happy to announce that Ed, Philip and Jonas are organizing the Modeling Symposium for EclipseCon Europe 2018 in Ludwigsburg. The symposium aims to provide a forum for community members to present a brief overview of their work. We offer 10-minute lightning slots (including set-up and questions) to facilitate a broad range of speakers. The primary goal is to introduce interesting new technological features. This mainly targets modeling projects which are otherwise not represented at the conference.

If you are interested in giving a talk, please send a short description (a few sentences) to munich@eclipsesource.com. Depending on the number of submissions, we might have to select among them.

Deadline for submission: Wednesday, September 5th, 2018

Acceptance/decline notification: Monday, September 10th, 2018

Please adhere to the following guidelines:

  • Please provide sufficient context. Talks should start with a concise overview of what the presenter plans to demonstrate or what a certain framework offers. Even more importantly, explain how and why this is relevant.
  • Do not bore us! Get to the point quickly. You do not have to use your full allocation: an interesting 3-minute talk will have a bigger impact than a boring 10-minute one. We encourage you to plan for a 5-minute talk, leaving room for 5 minutes of discussion.
  • Keep it short and sweet; focus on the most important aspects. A conference offers the major advantage of getting in contact with people who are interested in your work, so consider the talk more as a teaser to prompt follow-up conversations than as a forum to demonstrate or discuss technical details in depth.
  • A demo is worth a thousand slides. We prefer to see how your stuff works rather than be told about it with illustrative slides. Please restrict the slides to summarizing your introduction or conclusion.

Looking forward to your submissions!


by Jonas Helming and Maximilian Koegel at August 08, 2018 11:45 AM

Supporting OpenJFX 11 from JDK11 onwards in e(fx)clipse

by Tom Schindl at August 04, 2018 09:42 PM

Starting with JDK 11, OpenJFX is no longer part of any downloadable JDK distribution. As JavaFX is designed to run on the module path (and is tested only there), you have 2 options to run JavaFX inside OSGi:
* You create your own JDK distribution using jlink
* You launch the VM with the JavaFX modules added to the module path
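
For the second option, the VM invocation would look roughly like this (the SDK path and module list are illustrative):

java --module-path /path/to/javafx-sdk/lib --add-modules javafx.controls,javafx.fxml -jar application.jar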

While the second solution is doable for RCP applications, it is not a nice one, and for integrating into external frameworks (like the Eclipse IDE) it is not possible at all. So we need a different solution to satisfy both use cases.

The solution to this problem is that e(fx)clipse installs a classloader hook using the Equinox AdapterHook framework (you can do crazy stuff with that), spins up a new Java module layer on the fly containing all the JavaFX modules, and uses the classloader from that module layer to load the JavaFX classes.
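
To illustrate the mechanism, here is a minimal sketch of spinning up such a module layer with the plain JDK API; the directory and root module names are placeholders, and the actual e(fx)clipse hook does considerably more:

import java.lang.module.Configuration;
import java.lang.module.ModuleFinder;
import java.nio.file.Path;
import java.util.Set;

public class FxModuleLayerSketch {
    public static ClassLoader createFxClassLoader(Path fxModulesDir) {
        // Locate the JavaFX modules shipped with the application
        ModuleFinder finder = ModuleFinder.of(fxModulesDir);
        // Resolve them against the boot layer's configuration
        Configuration cf = ModuleLayer.boot().configuration()
            .resolve(finder, ModuleFinder.of(), Set.of("javafx.controls"));
        // Define a new layer; its classloader can then load the JavaFX classes
        ModuleLayer layer = ModuleLayer.boot()
            .defineModulesWithOneLoader(cf, ClassLoader.getSystemClassLoader());
        return layer.findLoader("javafx.controls");
    }
    // Usage (hypothetical path):
    // createFxClassLoader(Path.of("plugins/openjfx/mods")).loadClass("javafx.application.Platform")
}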

With this strategy, you can supply the JavaFX modules (including the native bits) required for your application as part of your p2 repository.


by Tom Schindl at August 04, 2018 09:42 PM

New improvements to the Eclipse Packaging website

August 02, 2018 02:30 PM

In my previous blog post, we announced a new look and feel for the Eclipse Foundation website. The plan was to roll out our new design to eclipse.org first and then gradually migrate our other web properties.

Since then, we have migrated our Hugo theme, Eclipsepedia, the Eclipse Community Forums, and a few other Drupal sites, such as the Eclipse User Profile and the Eclipse Foundation Blog, to the Quicksilver look and feel!

This week, I am happy to announce an update to the Eclipse Packaging website. For those who don’t know, the Eclipse Packaging website is used to publish download links for the Eclipse Installer and Eclipse Packages.

I am very proud of the work done here since the original site desperately needed some TLC. I’m hoping the new look and feel will improve the way the Eclipse IDE is downloaded by the community!

[Screenshot: the new Eclipse Packaging home page]

New features include:

  • A website redesign based on the Quicksilver look and feel.
  • The links to the Eclipse Installer, Eclipse Packages and Eclipse Developer Builds are more accessible via a new submenu beneath our breadcrumbs.
  • A new Eclipse Installer download page with instructions.
  • Improved breadcrumb links which allow users to easily find every Eclipse release on the Eclipse Packaging site.
  • The More Downloads sidebar includes links to Eclipse Packages instead of the release train landing page.
  • Links to the Eclipse Installer are available in the sidebar.

Finally, this migration is also a win for the Eclipse Foundation staff. These changes to the Eclipse Packages site allow us to streamline the Eclipse release process and no longer require us to manually submit Gerrit patches to publish a release.


August 02, 2018 02:30 PM

We Are Open

August 02, 2018 01:00 PM

The We Are Open campaign provides a peek into the Eclipse community's openness, innovation, and collaboration.

August 02, 2018 01:00 PM

We Are Open

by Thabang Mashologu at August 01, 2018 06:31 PM

Back in April, our Executive Director Mike Milinkovich blogged about a new logo and redesigned website for the Eclipse Foundation. Our new branding is meant to reflect the Foundation’s role beyond the Eclipse IDE. We are proud of our heritage and successfully launched the Eclipse Photon release recently to a global base of over 4 million active users. But clearly the Eclipse Foundation and its 350+ open source projects represent more than the Eclipse IDE. 

The fact is, we are a leading platform and environment for global developers and organizations to collaborate on open technologies that solve complex problems and enable value creation. 

From enterprise Java to IoT and autonomous vehicles, we are increasingly becoming the open source foundation of choice for digital companies looking for a vendor-neutral governance model to help them to accelerate market adoption of technologies and standards, increase the pace of innovation, and to reduce development costs. In fact, we are supported by over 275 organizations who see the strategic, operational and financial value of open source software development at the Eclipse Foundation.

For thousands of developers around the world, we offer great opportunities to contribute to game-changing technologies, demonstrate expertise, and participate in our vibrant Eclipse community, among many other benefits. At the time of writing, we have over 1,550 committers and counting who power Eclipse projects spanning many technology domains.

The Foundation marketing team has the fun job of sharing the stories and successes of our community with the world. To that end, we developed the We Are Open video campaign to provide a quick peek into how the Eclipse community represents openness, innovation, and collaboration. We hope you like it; please share it and subscribe to our various channels!
 


by Thabang Mashologu at August 01, 2018 06:31 PM

Accepted Sessions Announced

by Anonymous at July 31, 2018 08:14 PM

It was a lot of work for the program committee, but they got it done! And thank you again to all the community members who sent in a talk proposal.

Visit this page to see the list of accepted tutorials and talks. We expect to have the schedule done by mid-August.


by Anonymous at July 31, 2018 08:14 PM

Eclipse Foundation Announces Jakarta EE Committee Election Results

July 31, 2018 02:10 PM

The results are in for Participant and Committer Member elections for representatives to the Jakarta EE Working Group!

July 31, 2018 02:10 PM

Eclipse Newsletter | Embedded Development

July 26, 2018 01:30 PM

This month's newsletter features five articles that focus on Embedded Development. Read it now.

July 26, 2018 01:30 PM

Eclipse Newsletter on Papyrus UML Light

by tevirselrahc at July 26, 2018 01:23 PM

Back in June, I reported that a new variant of Papyrus was being funded for development by the Papyrus Industry Consortium.

Well there’s no turning back with an official article in this month’s Eclipse Newsletter!


by tevirselrahc at July 26, 2018 01:23 PM

We scaled IoT – Eclipse Hono in the lab

by Jens Reimann at July 25, 2018 12:03 PM

Working for Red Hat is awesome. Not only can you work on amazing things, you also get the tools you need in order to do just that. We wanted to test Eclipse Hono (yes, again) and see how far we can scale it, and of course which limits and issues we encounter on the way. So we took the current development version of Hono (0.7) from Eclipse IoT, backed by EnMasse 0.21, and ran it on an OpenShift 3.9 cluster.

Note: This blog post presents an intermediate result of the whole test, as it is still ongoing. Want to know more? We put in a talk for EclipseCon Europe about this scale test. With a bit of luck we can show you more in person at the end of October in Ludwigsburg.

The lab

From the full test cluster, we received an allocation of 16 nodes with a bit of storage (mostly HDDs), Intel Xeon E5-2620 CPUs with 2×6 cores (24 threads) each, and a mix of 64 GB/128 GB RAM. 12 nodes were assigned to the IoT cluster, running Eclipse Hono, EnMasse and OpenShift. The remaining 4 nodes made up the simulation cluster for generating the IoT workload. For the simulation cluster, we also deployed OpenShift, simply to re-use the same features for scaling, deploying and building as on the IoT cluster. Both clusters are single-master setups. For the IoT cluster, we went with GlusterFS as the storage provider, as we wanted dynamic provisioning for the broker deployments. Everything is connected by a 1 GBit Ethernet link. In the IoT cluster, we allocated 3 nodes for infrastructure-only purposes (like the Docker registry and the OpenShift router), which left 8 general-purpose compute nodes that Hono could make use of.

Node distribution

The test

The focus of this test was on telemetry data using HTTP as the transport. For this, we simulated devices sending one message per second. In the context of IoT, you have a larger number of senders (devices), but they send smaller payloads and send less frequently than, e.g., a cloud-side enterprise system might. It is also unlikely that an IoT device would send once each second over HTTP, but "per second" is easier to process, and, at least in theory, you could trade 1.000 devices sending once per second for 10.000 devices sending once every 10 seconds.

The simulator cluster consisted of three main components: an InfluxDB to store some metrics, a "consumer" deployment, and an "HTTP simulator" deployment. The consumer consumed directly from the EnMasse Qpid dispatch router instance via AMQP 1.0, as fast as possible. The HTTP simulator tries to simulate 2.000 devices with a message rate of 1 message per second per device; if the HTTP adapter stalls, it waits for requests to complete. For the HTTP client, we used the Vert.x Web Client, as it turned out to be the most performant Java HTTP client (aside from having a nice API). So scaling up by a single pod means that we increase the IoT workload by 2.000 devices (meaning 2.000 additional messages per second).
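
To give an idea of what one simulated device does, here is a rough sketch using the Vert.x Web Client; host, port, credentials and payload are placeholders, not our actual simulator code:

import io.vertx.core.Vertx;
import io.vertx.core.buffer.Buffer;
import io.vertx.ext.web.client.WebClient;
import java.util.Base64;

public class DeviceSimulatorSketch {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        WebClient client = WebClient.create(vertx);
        // Hono's HTTP adapter authenticates devices via HTTP Basic auth
        String credentials = Base64.getEncoder()
            .encodeToString("sensor1@DEFAULT_TENANT:hono-secret".getBytes());
        // Send one telemetry message per second, as in the scale test
        vertx.setPeriodic(1000, timerId ->
            client.post(8080, "hono-http-adapter.example.com", "/telemetry")
                .putHeader("Authorization", "Basic " + credentials)
                .putHeader("Content-Type", "application/json")
                .sendBuffer(Buffer.buffer("{\"temp\": 5}"), ar -> {
                    // A 503 response means the adapter cannot accept more messages right now
                    if (ar.failed() || ar.result().statusCode() >= 400) {
                        System.err.println("telemetry message rejected");
                    }
                }));
    }
}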

Testing architecture

To the max

As a first exercise, we tried out a few configurations to see how far we could get. In the end, we were able to saturate the Ethernet ports of our (initially) two ingress nodes, so we decided to re-allocate one node from Eclipse Hono to the OpenShift infrastructure, giving us 3 ingress nodes and 8 compute nodes. This did reduce the capacity available for Hono and let us run into a limit of processing messages. However, it seemed better to run into a limit with Hono than into a limit of network throughput: adding an additional ingress node would be a simple task, and if we could improve Hono during the test, we would actually see more throughput, as the third node gave us some reserve in network throughput.

The final setup processed something around 80.000 devices at 1 message/second each. There was a bit of room above that, but our DNS round-robin "load balancer" was not optimal, so we kept that reserve for further testing.

Note: this number may be quite different on other machines and in other environments. We simply used it as a baseline for further testing.

Scaling up

The first automated scenario we ran was a simple scale-up test. For that, we scaled down all producers and consumers and then slowly started to scale the producers back up. After adding a new pod, the test waited until the message flow had settled. If the failure rate was too high, it scaled up an additional protocol adapter; otherwise, it scaled up another producer and continued.

As the acceptable failure rate, this test used 2% of the messages over the last 3 minutes. A "failure" is actually a rejection of the message at the current point in time; devices may re-try at a later time to submit their data. For telemetry data, it may be fine to drop some information (with QoS 0) every now and then, or to use QoS 1 instead, but be aware that the current request was rejected and re-try at a later time. In any case, if Hono responds with a failure of 503, the adapter cannot handle any more requests at the moment, leading to an increased failure rate in the simulator.
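
The scale-up scenario described in the last two paragraphs boils down to a simple loop; in pseudo-Java, with entirely hypothetical method names, it looks like this:

// Hypothetical sketch of the automated scale-up scenario
scaleDownAllProducersAndConsumers();
while (clusterHasFreeResources()) {
    waitUntilMessageFlowSettles();
    if (failureRateOverLastThreeMinutes() > 0.02) {
        scaleUpProtocolAdapters(1);   // compensate with one more HTTP adapter
    } else {
        scaleUpProducers(1);          // add 2.000 more simulated devices
    }
}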

Initial results

So let’s have a quick look at the results of this test:

Eclipse Hono scale testing results, number of pods

This chart shows the scale-up of the simulator pods and the accompanying scale-up of the Eclipse Hono protocol adapter pods. You can also see the number of messages each protocol adapter instance processes. It looks like, once we push a few messages into the system, this evens out at around 5.000 msgs/s, meaning that each additional Hono HTTP adapter instance can serve 5.000 more messages/s: 5.000 devices sending one message per second, or 50.000 devices sending one message every 10 seconds. Each time we fire up a new instance, the whole system can handle 5.000 msgs/s more.

In the second chart we can see the failure rate:

Eclipse Hono scale testing results, failure rate

Now the rule for the test was that the failure rate had to be below 2% in order for the test to continue scaling up. What the test didn't do well was to wait a bit longer and see if the failure rate declined even more; the failure rate is a moving average over 3 minutes. For that reason, this behavior has been changed in succeeding tests: the scenario now waits a bit longer before recording the final result of the current step.

So what you can see is that the failure rate stays below that "magic" 2% line, as required. Except, of course, for the last entry, where the test ended because there were no more resources to scale up and compensate with.

Yes it scales

Does Eclipse Hono scale? With charts and numbers, there is always room for interpretation. 😉 But to me, it definitely looks that way. When we increase the IoT workload, we can compensate by scaling up protocol adapters in a linear way, settling at around 5.000 msgs/s per protocol adapter instance and keeping that figure until the end of the test, when we ran out of computing resources.

Want more?

More background? You can have a look at the source code for this test on GitHub at redhat-iot/hono-simulator and redhat-iot/hono-scale-test. But please remember that this setup might be very specific to our infrastructure and test.

More details? Come to our talk at EclipseCon Europe, if we get accepted, and learn more about how we did the test: what improvements we tried out, which issues we ran into, and how we set up our infrastructure. And maybe have a chat with us in person about the gory details of IoT testing.

More throughput? Come and join the Eclipse Hono community and bring in your ideas about performance improvements.

The post We scaled IoT – Eclipse Hono in the lab appeared first on ctron's blog.


by Jens Reimann at July 25, 2018 12:03 PM

Eclipse IoT Day Singapore Announced

July 24, 2018 11:00 AM

The very first Eclipse IoT Day Singapore will take place Sept. 18 in co-location with IoT World Asia 2018.

July 24, 2018 11:00 AM

EC by Example: Collectors2

by Donald Raab at July 23, 2018 02:26 AM

Learn how to transition to Eclipse Collections types using Collectors2 with any Java Stream.

Visualizing Collectors2

Anatomy of a Collector

One of the many great additions to Java 8 was the interface named Collector. A Collector can be used with the collect method on the Stream interface. The collect method will allow you to reduce a Stream to any type you want. Java 8 included a set of stock Collector implementations which are part of the Collectors utility class. Eclipse Collections includes another set of Collector implementations that return Eclipse Collections types. The name of the utility class in Eclipse Collections is Collectors2.

So what is a Collector? Let’s take a look at the interface to find out. There are five public instance methods on a Collector.

  • supplier → Supplier<A>
  • accumulator → BiConsumer<A, T>
  • combiner → BinaryOperator<A>
  • finisher → Function<A, R>
  • characteristics → Set<Characteristics> → Enum(CONCURRENT, UNORDERED, IDENTITY_FINISH)

There are also two static of methods on Collector which can be used to easily create your own Collector implementations.

So let’s see how we can create a Collector to better understand what these individual components are used for.

@Test
public void collector()
{
    Collector<String, Set<String>, Set<String>> toCOWASet =
        Collector.of(
            HashSet::new,        // supplier
            Set::add,            // accumulator
            (set1, set2) -> {    // combiner
                set1.addAll(set2);
                return set1;
            },
            CopyOnWriteArraySet::new); // finisher
    List<String> strings = Arrays.asList("a", "b", "c");
    Set<String> set =
        strings.stream().collect(toCOWASet);
    Assert.assertEquals(new HashSet<>(strings), set);
}

Here I use the static of method which takes five parameters. I leave the varargs final parameter for characteristics empty here. The supplier creates a new HashSet. The accumulator specifies what operation to apply on the object created by the supplier: the items in the Stream will be passed to the add method of the Set. The combiner specifies how collections should be merged in the case where a parallelStream is used. I cannot use a method reference for the combiner here because one of the collections must be returned, and the addAll method on Collection returns a boolean. Finally, the finisher converts the final result to a CopyOnWriteArraySet.

Building a reusable Collector

The Collector example above would not be very useful if it needed to be inlined directly in code as it is rather verbose. It would be much more useful if it could handle any type instead of just String. This can be done easily by moving the construction of this Collector to a static method and giving it a name like toCopyOnWriteArraySet.

public static <T> Collector<T, ?, Set<T>> toCopyOnWriteArraySet()
{
    return Collector.<T, Set<T>, Set<T>>of(
        HashSet::new,        // supplier
        Set::add,            // accumulator
        (set1, set2) -> {    // combiner
            set1.addAll(set2);
            return set1;
        },
        CopyOnWriteArraySet::new,             // finisher
        Collector.Characteristics.UNORDERED); // characteristics
}

@Test
public void reusableCollector()
{
    List<String> strings = Arrays.asList("a", "b", "c");
    Set<String> set1 =
        strings.stream().collect(toCopyOnWriteArraySet());
    Verify.assertInstanceOf(CopyOnWriteArraySet.class, set1);
    Assert.assertEquals(new HashSet<>(strings), set1);

    List<Integer> integers = Arrays.asList(1, 2, 3);
    Set<Integer> set2 =
        integers.stream().collect(toCopyOnWriteArraySet());
    Verify.assertInstanceOf(CopyOnWriteArraySet.class, set2);
    Assert.assertEquals(new HashSet<>(integers), set2);
}

Now I’ve created a reusable Collector that can be used with a Stream of any type. I’ve additionally specified a Collector.Characteristics in the reusable implementation. These characteristics can be used by the Stream collect method to optimize the reduction implementation. Since I am accumulating to a Set which is unordered in this case, it makes sense to use the UNORDERED characteristic.

Filtering with Collectors2

In order to filter with Collectors2, you will need three things:

  • A select, reject, or partition Collector
  • A Predicate
  • A target collection Supplier

Here are examples using select, reject, and partition with Collectors2.

@Test
public void filtering()
{
    List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
    Predicate<Integer> evens = i -> i % 2 == 0;

    MutableList<Integer> selectedList = list.stream().collect(
        Collectors2.select(evens, Lists.mutable::empty));
    MutableSet<Integer> selectedSet = list.stream().collect(
        Collectors2.select(evens, Sets.mutable::empty));

    MutableList<Integer> rejectedList = list.stream().collect(
        Collectors2.reject(evens, Lists.mutable::empty));
    MutableSet<Integer> rejectedSet = list.stream().collect(
        Collectors2.reject(evens, Sets.mutable::empty));

    PartitionList<Integer> partitionList = list.stream().collect(
        Collectors2.partition(evens, PartitionFastList::new));
    PartitionSet<Integer> partitionSet = list.stream().collect(
        Collectors2.partition(evens, PartitionUnifiedSet::new));

    Assert.assertEquals(selectedList, partitionList.getSelected());
    Assert.assertEquals(rejectedList, partitionList.getRejected());

    Assert.assertEquals(selectedSet, partitionSet.getSelected());
    Assert.assertEquals(rejectedSet, partitionSet.getRejected());
}

Transforming with Collectors2

There are several methods which provide different transformations using Collectors2. The most basic transformation is available through the collect method. In order to use collect, you will need two things:

  • A Function
  • A target collection Supplier

The other transforming Collectors I will demonstrate below are makeString, zip, zipWithIndex, chunk, and flatCollect.

@Test
public void transforming()
{
    List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
    MutableList<String> strings = list.stream().collect(
        Collectors2.collect(Object::toString, Lists.mutable::empty));

    String string = list.stream().collect(Collectors2.makeString());

    Assert.assertEquals(string, strings.makeString());

    MutableList<Pair<Integer, String>> zipped =
        list.stream().collect(Collectors2.zip(strings));

    Assert.assertEquals(Tuples.pair(1, "1"), zipped.getFirst());
    Assert.assertEquals(Tuples.pair(5, "5"), zipped.getLast());

    MutableList<ObjectIntPair<Integer>> zippedWithIndex =
        list.stream().collect(Collectors2.zipWithIndex());

    Assert.assertEquals(
        PrimitiveTuples.pair(Integer.valueOf(1), 0),
        zippedWithIndex.getFirst());
    Assert.assertEquals(
        PrimitiveTuples.pair(Integer.valueOf(5), 4),
        zippedWithIndex.getLast());

    MutableList<MutableList<Integer>> chunked =
        list.stream().collect(Collectors2.chunk(2));

    Assert.assertEquals(
        Lists.mutable.with(1, 2), chunked.getFirst());
    Assert.assertEquals(
        Lists.mutable.with(5), chunked.getLast());

    MutableList<Integer> flattened = chunked.stream().collect(
        Collectors2.flatCollect(e -> e, Lists.mutable::empty));

    Assert.assertEquals(list, flattened);
}

Converting with Collectors2

There are two sets of converting Collector implementations available in Collectors2. One set converts to MutableCollection types. The other converts to ImmutableCollection types.

Collectors converting to Mutable Collections

@Test
public void convertingToMutable()
{
    List<Integer> source = Arrays.asList(2, 1, 4, 3, 5);
    MutableBag<Integer> bag = source.stream().collect(
        Collectors2.toBag());
    MutableSortedBag<Integer> sortedBag = source.stream().collect(
        Collectors2.toSortedBag());
    Assert.assertEquals(
        Bags.mutable.with(1, 2, 3, 4, 5), bag);
    Assert.assertEquals(
        SortedBags.mutable.with(1, 2, 3, 4, 5), sortedBag);

    MutableSet<Integer> set = source.stream().collect(
        Collectors2.toSet());
    MutableSortedSet<Integer> sortedSet = source.stream().collect(
        Collectors2.toSortedSet());
    Assert.assertEquals(
        Sets.mutable.with(1, 2, 3, 4, 5), set);
    Assert.assertEquals(
        SortedSets.mutable.with(1, 2, 3, 4, 5), sortedSet);

    MutableList<Integer> list = source.stream().collect(
        Collectors2.toList());
    MutableList<Integer> sortedList = source.stream().collect(
        Collectors2.toSortedList());
    Assert.assertEquals(
        Lists.mutable.with(2, 1, 4, 3, 5), list);
    Assert.assertEquals(
        Lists.mutable.with(1, 2, 3, 4, 5), sortedList);

    MutableMap<String, Integer> map =
        source.stream().limit(4).collect(
            Collectors2.toMap(Object::toString, e -> e));
    Assert.assertEquals(
        Maps.mutable.with("2", 2, "1", 1, "4", 4, "3", 3),
        map);

    MutableBiMap<String, Integer> biMap =
        source.stream().limit(4).collect(
            Collectors2.toBiMap(Object::toString, e -> e));
    Assert.assertEquals(
        BiMaps.mutable.with("2", 2, "1", 1, "4", 4, "3", 3),
        biMap);
}

Collectors converting to Immutable Collections

@Test
public void convertingToImmutable()
{
    List<Integer> source = Arrays.asList(2, 1, 4, 3, 5);
    ImmutableBag<Integer> bag = source.stream().collect(
        Collectors2.toImmutableBag());
    ImmutableSortedBag<Integer> sortedBag = source.stream().collect(
        Collectors2.toImmutableSortedBag());
    Assert.assertEquals(
        Bags.immutable.with(1, 2, 3, 4, 5), bag);
    Assert.assertEquals(
        SortedBags.immutable.with(1, 2, 3, 4, 5), sortedBag);

    ImmutableSet<Integer> set = source.stream().collect(
        Collectors2.toImmutableSet());
    ImmutableSortedSet<Integer> sortedSet = source.stream().collect(
        Collectors2.toImmutableSortedSet());
    Assert.assertEquals(
        Sets.immutable.with(1, 2, 3, 4, 5), set);
    Assert.assertEquals(
        SortedSets.immutable.with(1, 2, 3, 4, 5), sortedSet);

    ImmutableList<Integer> list = source.stream().collect(
        Collectors2.toImmutableList());
    ImmutableList<Integer> sortedList = source.stream().collect(
        Collectors2.toImmutableSortedList());
    Assert.assertEquals(
        Lists.immutable.with(2, 1, 4, 3, 5), list);
    Assert.assertEquals(
        Lists.immutable.with(1, 2, 3, 4, 5), sortedList);

    ImmutableMap<String, Integer> map =
        source.stream().limit(4).collect(
            Collectors2.toImmutableMap(Object::toString, e -> e));
    Assert.assertEquals(
        Maps.immutable.with("2", 2, "1", 1, "4", 4, "3", 3),
        map);

    ImmutableBiMap<String, Integer> biMap =
        source.stream().limit(4).collect(
            Collectors2.toImmutableBiMap(Object::toString, e -> e));
    Assert.assertEquals(
        BiMaps.immutable.with("2", 2, "1", 1, "4", 4, "3", 3),
        biMap);
}

The Collector implementations that convert to ImmutableCollection types use the finisher to convert from a mutable container to an immutable container. Here is the example of the Collector implementation for toImmutableList().

public static <T> Collector<T, ?, ImmutableList<T>> toImmutableList()
{
    return Collector.<T, MutableList<T>, ImmutableList<T>>of(
        Lists.mutable::empty,     // supplier
        MutableList::add,         // accumulator
        MutableList::withAll,     // combiner
        MutableList::toImmutable, // finisher
        EMPTY_CHARACTERISTICS);   // characteristics
}

The finisher here is the MutableList::toImmutable method reference. This is the final step that converts the MutableCollection with the results into an ImmutableCollection.

Eclipse Collections API vs. Collectors2

My preference is always to use the Eclipse Collections API directly if I can. If I need to operate on a JDK Collection type or if I am only given a Stream, then I will use Collectors2. As you can see in the examples above, Collectors2 is a natural gateway to the Eclipse Collections types and their functional, fluent, friendly and fun APIs.
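
For illustration, here is a small sketch contrasting the two approaches:

// Working directly on an Eclipse Collections type: no Stream needed
MutableList<Integer> ecList = Lists.mutable.with(1, 2, 3, 4, 5);
MutableList<Integer> evens = ecList.select(i -> i % 2 == 0);

// Working from a JDK List via a Stream: Collectors2 bridges into Eclipse Collections
List<Integer> jdkList = Arrays.asList(1, 2, 3, 4, 5);
MutableList<Integer> evens2 = jdkList.stream()
    .collect(Collectors2.select(i -> i % 2 == 0, Lists.mutable::empty));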

Check out this presentation to learn more about the origins, design and evolution of the Eclipse Collections API.

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


by Donald Raab at July 23, 2018 02:26 AM

New Working Group and Charter at the Eclipse Foundation: OpenMobility

July 20, 2018 05:00 PM

OpenMobility will drive the evolution and broad adoption of mobility modelling and simulation technologies.

July 20, 2018 05:00 PM

RHAMT Eclipse Plugin 4.1.0.Final has been released!

by josteele at July 18, 2018 12:06 PM

Happy to announce version 4.1.0.Final of the Red Hat Application Migration Toolkit (RHAMT) is now available.

Getting Started

Downloads available through JBoss Central and from the update site.

RHAMT in a Nutshell

RHAMT is an application migration and assessment tool. The supported migrations include application platform upgrades, migrations to a cloud-native deployment environment, and migrations from several commercial products to Red Hat JBoss Enterprise Application Platform.

What is New?

Eclipse Photon

The tooling now targets Eclipse Photon.

Photon

Ignoring Patterns

Specify locations of files to exclude from analysis (using regular expressions).

Ignore Patterns

External Report

The generated report has been moved out of Eclipse and into the browser.

Report View

Improved Ruleset Schema

The XML ruleset schema has been relaxed, providing more flexible rule structures.

Schema

Custom Severities

Custom severities are now included in the Issue Explorer.

Custom Category

Stability

A good amount of time has been spent on ensuring the tooling functions consistently across Windows, OSX, and Linux.

You can find more detailed information here.

Our goal is to make the RHAMT tooling easy to use. We look forward to your feedback and comments!

Have fun!
John Steele
github/johnsteele


by josteele at July 18, 2018 12:06 PM

JBoss Tools and Red Hat Developer Studio for Eclipse Photon

by jeffmaury at July 17, 2018 03:43 PM

JBoss Tools 4.6.0 and Red Hat Developer Studio 12.0 for Eclipse Photon are here waiting for you. Check it out!

devstudio12

Installation

Red Hat Developer Studio comes with everything pre-bundled in its installer. Simply download it from our Red Hat Developer product page and run it like this:

java -jar devstudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) Developer Studio require a bit more:

This release requires at least Eclipse 4.8 (Photon), but we recommend using the latest Eclipse 4.8 Photon JEE Bundle, since then you get most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat Developer Studio".

For JBoss Tools, you can also use our update site directly.

http://download.jboss.org/jbosstools/photon/stable/updates/

What is new?

Our main focus for this release was on adoption of Java 10, improvements for container-based development, and bug fixing. Eclipse Photon itself has a lot of cool new stuff, but let me highlight just a few updates in both Eclipse Photon and the JBoss Tools plugins that I think are worth mentioning.

OpenShift 3

Enhanced Spring Boot support for server adapter

The Spring Boot runtime was already supported by the OpenShift server adapter. However, it had one major limitation: files and resources were synchronized between the local workstation and the remote pod(s) only for the main project. If your Spring Boot application had dependencies that were present in the local workspace, any change to a file or resource of one of these dependencies was not handled. This is not true anymore.

Server tools

WildFly 13 Server Adapter

A server adapter has been added to work with WildFly 13. It adds support for Servlet 4.0.

Fuse Tooling

Camel Rest DSL from WSDL wizard

There is a new "Camel Rest DSL from WSDL" wizard. It wraps the wsdl2rest tool, now included with the Fuse 7 distribution, which takes a WSDL file for a SOAP-based (JAX-WS) web service and generates a combination of CXF-generated code and a Camel REST DSL route to make the service accessible using REST operations.

To start, you need an existing Fuse Integration project in your workspace and access to the WSDL for the SOAP service. Then use File→New→Other…​ and select Red Hat Fuse→Camel Rest DSL from WSDL wizard.

On the first page of the wizard, select your WSDL and the Fuse Integration project in which to generate the Java code and Camel configuration.

SOAP to REST Wizard page 1

On the second page, you can customize the Java folder path for your generated classes, the folder for the generated Camel file, plus any customization for the SOAP service address and destination REST service address.

SOAP to REST Wizard page 2

Click Finish and the new Camel configuration and associated Java code are generated in your project. The wizard determines whether your project is Blueprint, Spring, or Spring Boot based, and it creates the corresponding artifacts without requiring any additional input. When the wizard is finished, you can open your new Camel file in the Fuse Tooling Route Editor to view what it created.
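
To give an impression of the result, a generated configuration looks roughly like the following Camel Java DSL sketch; the paths, service names and chosen component are illustrative, not the wizard's literal output:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.rest.RestBindingMode;

public class GeneratedRestRouteSketch extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // REST configuration: component, context path, port and binding mode
        restConfiguration()
            .component("servlet")
            .contextPath("/rest")
            .port(8080)
            .bindingMode(RestBindingMode.json);

        // A REST operation that delegates to the CXF/SOAP-backed implementation
        rest("/customers")
            .get("/{id}")
            .to("direct:getCustomer");

        from("direct:getCustomer")
            .to("bean:customerServiceImpl?method=getCustomer");
    }
}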

Fuse Tooling editor Rest Tab

That brings us to another new functionality, the REST tab in the Fuse Tooling Route Editor.

Camel Editor REST tab

The Fuse Tooling Route Editor provides a new REST tab. For this release, the contents of this tab are read-only and include the following information:

  • Details for the REST Configuration element including the component (jetty, netty, servlet, etc.), the context path, the port, binding mode (JSON, XML, etc.), and host. There is only one REST Configuration element.

  • A list of REST elements that collect REST operations. A configuration can have more than one REST element. Each REST element has an associated property page that displays additional details such as the path and the data it consumes or produces.

Fuse Tooling Rest Elements Properties View
  • A list of REST operations for the selected REST element. Each of the operations has an associated property page that provides details such as the URI and output type.

Fuse Tooling Rest Operations Properties View

For this release, the REST tab is read-only. If you want to edit the REST DSL, use the Route Editor Source tab. When you make changes and save them in the Source tab, the REST tab refreshes to show your updates.

Camel URI completion with XML DSL

As announced here, it was already possible to get Camel URI completion with the XML DSL in the Source tab of the Camel Route editor by installing the Language Support for Apache Camel in your IDE.

This feature is now installed by default with Fuse Tooling!

Camel URI completion in source tab of Camel Editor

Maven

Maven support updated to M2E 1.9.1

The Maven support is based on Eclipse M2E 1.9.1, bringing the following features:

Advanced classpath isolation

Thanks to Eclipse Photon, there are now two different classpaths: the main and the test classpath. The main classes will no longer see the test classes and dependencies.

Embedded Maven runtime

The embedded Maven runtime is now based on Apache Maven 3.5.3.

Archetype catalog management

It is now possible to disable an archetype catalog.

Java 9/10 support

Support for Java 9/10 has been improved: bug fixes and better handling of the module path.

Java Development Tools (JDT)

Support for Java™ 10

Quick fix to change project compliance and JRE to 10

A quick fix Change project compliance and JRE to 10 is provided to quickly change the current project to be compatible with Java 10.

quickfix change compliance 10

Java Editor

Quick Fix to add @NonNullByDefault to packages

A new quick fix is offered for issues that are reported when the Missing '@NonNullByDefault' annotation on package warning is enabled. If the package already has a package-info.java, the quick fix can be invoked from the editor:

add nnbd existing packageinfo

Otherwise, the quick fix must be invoked from the problems view, and will create a package-info.java with the required annotation:

add nnbd create packageinfo

When invoked from the problems view, both variations of the quick fix can fix the problem for multiple packages simultaneously.
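
For reference, the created package-info.java essentially just carries the annotation; the package name here is illustrative:

// package-info.java created by the quick fix
@org.eclipse.jdt.annotation.NonNullByDefault
package com.example.mypackage;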

You can now Ctrl+click or use Open Declaration (F3) on case or default keywords to quickly navigate to the beginning of the switch statement.

navigate to switch
Escape non-ASCII characters when pasting into a string literal

The Java > Editor > Typing > Escape text when pasting into a string literal preference option now has a suboption Use Unicode escape syntax for non-ASCII characters:

escape non ascii settings

When enabled, characters outside the visible ASCII range will be replaced by Unicode escape sequences when pasted into a string:

escape non ascii example
Improved Java syntax coloring in the dark theme

To improve readability in the dark theme, bold style usage has been reduced and some colors that were too close to each other have been altered.

java syntax dark

The colors of links in the code element information control now take the Hyperlink text color and Active hyperlink text color settings from the Colors & Fonts preference page into account. This improves readability in the dark theme a lot.

Before:

element info before

After:

element info after
Improved coloring of inherited members in the Quick Outline in the dark theme

The Eclipse default dark theme now includes styling of inherited members in JDT’s Quick Outline. This improves readability in the dark theme a lot. The color can be configured via the Java > Inherited Members color definition on the Colors and Fonts preference page.

Before:

inherited before

After:

inherited after

Java Views and Dialogs

Test sources

In the Java Build Path project settings, there is now an attribute Contains test sources to configure that a source folder contains test sources. (Note: test sources must have their own output folder). Similarly, for projects and libraries there is an attribute Visible only for test sources. This setting also exists for classpath containers, and if it is set to Yes for one of these, this value will be used for all contained libraries and projects.

1 sourcefolder settings 521330

Test source folders and dependencies are shown with a darker icon in the build path settings, the package explorer and other locations. This can be disabled in Preferences > Java > Appearance:

1a modified test icon preferences 530179

Referenced projects can contain test sources and have test dependencies themselves. Usually, when test sources are compiled, the test code in projects on the build path will be visible. As this is not always desirable, it can be changed by setting the new build path attribute Without test code, that is available for projects, to Yes.

2 without test code 526858

Build path entries configured like this have a decoration [without test code] after the project name, which can be disabled in Preferences > General > Appearance > Label Decorations:

2a without test code decorator 530179

For each project, compilation is now done in two phases: first all main sources (which cannot see any test code on the build path), then all test sources.

3 visibilities 224708

As a consequence, if the project is a modular Java 9 project, test dependencies like JUnit cannot be referenced in the module-info.java, as they will not be visible while compiling it. The solution used to handle this is the same one that Maven uses: when test dependencies are put on the classpath, the module being compiled will automatically be configured to read the unnamed module during the compilation of the test sources, so the test dependencies will be visible.
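
For illustration, the module-info.java of such a project would then only declare the main dependencies (module and dependency names here are made up):

module my.app {
    // Only main dependencies are declared; JUnit stays on the classpath and
    // becomes visible through the unnamed module when the test sources are compiled.
    requires java.sql;
}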

Of course, code completion will not suggest test code in main sources:

4a completion in main 521331

There are now two dynamic Java working sets, Java Main Sources and Java Test Sources, containing the source folders grouped according to the value of the Contains test sources attribute. This can, for example, be used to remove warnings in test sources from the problems view:

5a problems view 521336

To achieve this, create a new filter that shows warnings for the Java Main Sources working set and select it with the All Errors on Workspace filter:

5b problems view filters 521336

There are also dedicated filters to quickly remove hits in main code or test code from Java search results:

6 filter search result 521332

Similarly, there is a filter to remove test code from call hierarchies:

7 filter call hierarchy 521335

Another filter to remove test code exists for Quick type hierarchies:

8 filter quick type hierarchy 521333

Test source folders will be preselected in the New JUnit Test Case wizard:

9 new junit 332602

In Run and Debug configurations, the Classpath tab (or Dependencies tab when launching with Java 9) contains a new option Exclude Test Code, that is automatically preselected when launching a Java Application from a source folder that is not marked to contain test sources:

10 launching 529321

When launching with Java 9 and this option is not selected, command line options will automatically be added so that modules with a non-empty classpath read the unnamed module. These command line options are part of what can be overridden using the new Override Dependencies button.

Sort library entries alphabetically in Package Explorer

The content of libraries is displayed in the order of the classpath. This makes it difficult to find specific libraries by name, especially when projects have many dependencies. The library entries can now be sorted alphabetically by setting the preference Sort library entries alphabetically in Package Explorer on the Java > Appearance preference page:

jdt sort library pref
jdt library entries unsorted

The default for this preference is OFF.

Generate dialogs use verbs instead of OK

The Generate…​ dialogs of the Java tools have been adapted to use verbs instead of OK.

Java Compiler

This is experimental support for using regular expressions in the search field when searching for module declarations. It can be considered a wrapper around the corresponding API change.

To invoke the regular expression search from the search field under Java Search, start the expression with "/r ", i.e. a slash '/', the letter 'r' and a blank ' ' (not a tab), followed by a regex, an example of which is shown below:

mod.regex.trap

In the above example, all the characters trailing "/r " form a Java regular expression denoting a module name which starts with zero or more 'n's, followed by the string ".ver", followed again by zero or more arbitrary characters.

Another example would be to search for all modules that start with java.x followed by zero or more characters, which is given by the regular expression /r java\.x.* (note the backslash before the '.' to treat it as a "normal" character instead of a special regex character).

Yet another example would be to search for all module names that start with j, followed by zero or more characters, ending with .xml, which in regex language translates to /r j.*\.xml. Please note that here the first '.' is the special regex character, while the second '.' is escaped to denote a normal character.

Note: You should use this only for the Declarations search for modules, as it is not implemented for module references. Selecting All occurrences in conjunction with a regex will default to finding only the declarations matching the regex, ignoring references.

@NonNullByDefault per module

If a module is annotated with @NonNullByDefault, the compiler will interpret this as the global default for all types in this module:

@org.eclipse.jdt.annotation.NonNullByDefault
      module my.nullsafe.mod { ...

Note, however, that this requires an annotation type declared either with target ElementType.MODULE, or with no explicit target at all. Versions 2.2.0 and greater of bundle org.eclipse.jdt.annotation use the latter strategy and hence support a module-wide non-null default.

@NonNullByDefault improvements

When using annotation-based null analysis, there are now more ways to define which unannotated locations are implicitly assumed to be annotated as @NonNull:

  • @NonNullByDefault annotations based on enum DefaultLocation can also be used if the primary nullness annotations are declaration annotations (previously this was supported only for TYPE_USE annotations).

  • Support for @NonNullByDefault annotations that are targeted at parameters has been implemented.

  • Multiple different @NonNullByDefault annotations (especially with different default values) may be placed at the same target, in which case the sets of affected locations are merged.

  • Annotations which use a meta annotation @TypeQualifierDefault instead of a DefaultLocation-based specification are now understood, too, e.g. @org.springframework.lang.NonNullApi.

Version 2.2.0 of bundle org.eclipse.jdt.annotation contains an annotation type NonNullByDefault that can be applied to parameter and module declarations (in addition to the previously allowed targets).

Test sources

There is now support for running Java annotation processors on test sources. The output folder for the generated files can be configured in the project properties in Java Compiler > Annotation Processing as Generated test source directory.

testsources apt 531072
New preference added "Compiler Compliance does not match used JRE"

A new preference Compiler Compliance does not match used JRE is added to Compiler Preference Building Page.

This preference indicates the severity of the problem reported when a project's JRE does not match the selected compiler compliance level (e.g. a project using JRE 1.8 as the JRE System Library while the compiler compliance is set to 1.7).

The value of this preference is WARNING by default.

If the JRE being used is 9 or above and the --release option is selected, this preference will be ignored, even if the compiler compliance does not match the used JRE.

This preference can be set as shown below:

jdt compiler compliance mismatch JRE

Java Formatter

New formatter profile page

The formatter profile preference page (Java > Code Style > Formatter > Edit…​) has a new look which makes it much easier to set preferences for formatting Java code. Instead of multiple tabs, all preferences are presented in an expandable tree.

formatter profile overview

You can use filtering to display only the settings with names matching a specific phrase. Filtering by values is also possible (prefix a value filter with a tilde).

formatter profile filtering

Most sections have a Modify all button in their header that lets you set all their preferences to the same value with one click.

formatter profile modify all

Some preferences have more convenient controls. For example, number values can be easily modified with arrow buttons. Wrap policy settings are controlled by simple toolbars so that you can see and compare multiple policies at once.

formatter profile wrap settings

In the preview panel you can now use your own code to immediately see how it will be affected by the modified settings. You can also see the raw form of standard preview samples and make temporary modifications to them.

formatter profile preview
Formatter: align Javadoc tags in columns

The formatter can now align names and/or descriptions in Javadoc tags in new ways. The settings are available in the formatter profile editor under Comments > Javadoc.

formatter javadoc prefs

For example, the Align descriptions, grouped by type setting is now used in the built-in Eclipse profile.

formatter javadoc preview

The setting previously known as Indent Javadoc tags is now called Align descriptions to tag width. The two settings related to @param tags also had their labels changed to better describe what they do.

Java code formatter preferences now styled for the dark theme

The formatter preferences tree styling has been fixed to work properly in the dark theme.

New Cleanup Action "Remove redundant modifiers"

The new cleanup action "Remove redundant modifiers" removes unnecessary modifiers on types, methods and fields. The following modifiers are removed:

  • Interface field declarations: public, static, final

  • Interface method declarations: public, abstract

  • Nested interfaces: static

  • Method declarations in final classes: final

The cleanup action can be configured as save action on the Unnecessary Code page.

jdt remove redundant modifiers
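
For illustration, here is a hypothetical interface before and after running the cleanup:

// Before: all of these modifiers are redundant in an interface
public interface Service {
    public static final int TIMEOUT = 30;
    public abstract void run();
}

// After "Remove redundant modifiers"
public interface Service {
    int TIMEOUT = 30;
    void run();
}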

Debug

Launch configuration prototypes for Java Launch Configurations

A Java Launch Configuration can now be based on a prototype.

prototype java launch configuration

A prototype seeds attributes in its associated Java Launch Configurations with the settings specified in the Prototype tab.

prototype tab java launch configuration 1

Once a Java Launch Configuration has been created, you can override any initial settings from the prototype. You can also reset the settings of a Java Launch Configuration with the ones from its prototype. A Java Launch Configuration maintains a link to its prototype, but is a complete stand-alone launch configuration that can be launched, exported, shared, etc.

prototype tab java launch configuration 2
Advanced source lookup implementation

More precise advanced source lookup implementation, particularly useful when debugging applications that load classes dynamically at runtime.

The new org.eclipse.jdt.launching.workspaceProjectDescribers extension point can be used to enable advanced source lookup for projects with a non-default layout, like PDE Plug-in projects.

The new org.eclipse.jdt.launching.sourceContainerResolvers extension point can be used to download source jar files from remote artifact repositories, like Maven Central or Eclipse P2.

Advanced source lookup affects debug launches only and can be enabled or disabled with Java > Debug > Use advanced source lookup (JRE 1.5 and higher) preference option:

advanced source lookup
Debugger listens to thread name changes

The Debug view now automatically updates thread names if they are changed in the debuggee JVM. This shows live information, for example for worker instances.

Technically speaking, the Java debugger automatically adds a new (user-invisible) breakpoint in the JVM and notifies clients (like the Debug view) on a breakpoint hit. If this behavior is undesired for some reason, product owners can disable it via product customization.

The property value is: org.eclipse.jdt.debug.ui/org.eclipse.jdt.debug.ui.javaDebug.ListenOnThreadNameChanges=false

Value displayed for method exit and exception breakpoints

When a method exit breakpoint is hit, the value being returned is now shown in the variables view.

returningvalue

Similarly, when an exception breakpoint is hit, the exception being thrown is shown.

throwingexception
Display view renamed to Debug Shell

The Display view has been renamed to Debug Shell to better match the features and purpose of this view. Also, a Java comment is shown in the Debug Shell on fresh open that explains when and how to use it.

debugShell

And more…​

You can find more noteworthy updates on this page.

What is next?

With JBoss Tools 4.6.0 and Red Hat Developer Studio 12.0 out, we are already working on the next release, for Eclipse 2018-09.

Enjoy!

Jeff Maury


by jeffmaury at July 17, 2018 03:43 PM

Xtext editors for binary files

by Arne Deutsch (adeutsch@itemis.de) at July 13, 2018 12:10 PM

What does "4 + 1" mean? Well, for example, itemis employees have been developing a Java bytecode editor with Xtext. This editor allows the contents of .class files to be made visible and editable.
In the first part of this article I explained how the JBC editor is used. In this second part I want to discuss the technical problems that arise when you want to make a binary file editable with an Xtext-based editor.

The first issue to solve for a text editor for binary files is to convert the binary data into a textual format before the text editor gets involved. This is done by replacing the editor's IDocumentProvider, which then performs the appropriate transformations when loading and saving. As usual in Xtext, this is done via dependency injection and a binding registered in the UI module:

package com.itemis.jbc.ui

import com.itemis.jbc.ui.custom.JBCDocumentProvider
import org.eclipse.xtend.lib.annotations.FinalFieldsConstructor
import org.eclipse.xtext.ui.editor.model.XtextDocumentProvider

@FinalFieldsConstructor
class JBCUiModule extends AbstractJBCUiModule {
    def Class<? extends XtextDocumentProvider> bindXtextDocumentProvider() {
        JBCDocumentProvider
    }
}

 
The JBCDocumentProvider now overrides the two methods setDocumentContent and doSaveDocument. The first converts the binary stream into text when loading; the second writes back binary content, created from the model the editor obtained from the XtextDocument, when saving.

package com.itemis.jbc.ui.custom

import com.itemis.jbc.binary.ByteCodeWriter
import com.itemis.jbc.jbc.ClassFile
import java.io.ByteArrayInputStream
import java.io.InputStream
import org.eclipse.core.runtime.CoreException
import org.eclipse.core.runtime.IProgressMonitor
import org.eclipse.jface.text.IDocument
import org.eclipse.ui.IFileEditorInput
import org.eclipse.xtext.resource.XtextResource
import org.eclipse.xtext.ui.editor.model.XtextDocument
import org.eclipse.xtext.ui.editor.model.XtextDocumentProvider
import org.eclipse.xtext.util.concurrent.IUnitOfWork

class JBCDocumentProvider extends XtextDocumentProvider {
    override protected setDocumentContent(IDocument document, InputStream contentStream,
            String encoding) throws CoreException {
        document.set(new JBCInputStreamContentReader().readContent(contentStream, encoding))
    }

    override protected doSaveDocument(IProgressMonitor monitor, Object element,
            IDocument document, boolean overwrite) throws CoreException {
        if (element instanceof IFileEditorInput) {
            if (document instanceof XtextDocument) {
                if (element.file.exists && element.file.name.endsWith(".class")) {
                    document.readOnly(new IUnitOfWork.Void<XtextResource>() {
                        override process(XtextResource resource) throws Exception {
                            val ast = resource.parseResult.rootASTElement
                            element.file.setContents(new ByteArrayInputStream(
                                ByteCodeWriter.writeClassFile(ast as ClassFile)),
                                true, true, monitor)
                        }
                    })
                    return
                }
            }
        }
        super.doSaveDocument(monitor, element, document, overwrite)
    }
}

 
This is enough to fool the Xtext-based editor, as it provides it with a plain text file, but the result is not quite satisfactory, because the editor compares the textual content with the binary data obtained from the .class file in order to highlight changed regions. This happens because the comparison algorithm does not get the file content directly from the editor, but instead obtains an IStorage from the IFileEditorInput via its getStorage method and reads the file content as an InputStream from there.
 

NoProxyForIFileInput

To make the comparison meaningful, this stream also has to be transformed in the same way as was done when creating the IDocument. To do this, the doSetInput(IEditorInput input) method is overridden in the JBCEditor, so that the input being set is wrapped in a dynamic proxy.

package com.itemis.jbc.ui.custom

import java.io.InputStreamReader
import java.lang.reflect.InvocationHandler
import java.lang.reflect.Method
import java.lang.reflect.Proxy
import org.eclipse.core.resources.IEncodedStorage
import org.eclipse.core.resources.IStorage
import org.eclipse.core.runtime.CoreException
import org.eclipse.ui.IEditorInput
import org.eclipse.ui.IFileEditorInput
import org.eclipse.xtext.ui.editor.XtextEditor
import org.eclipse.xtext.util.StringInputStream

class JBCEditor extends XtextEditor {
    override protected doSetInput(IEditorInput input) throws CoreException {
        if (input instanceof IFileEditorInput) {
            if (input.file.name.endsWith(".class")) {
                super.doSetInput(input.proxy)
                return
            }
        }
        super.doSetInput(input)
    }

    def private IFileEditorInput proxy(IFileEditorInput editorInput) {
        Proxy.newProxyInstance(this.class.classLoader, #[IFileEditorInput],
            new IFileEditorInputHandler(editorInput)) as IFileEditorInput
    }
}


The latter returns another dynamic proxy for the getStorage query, which converts the file content supplied by getContents into textual format.
 

package class IFileEditorInputHandler implements InvocationHandler {
    private final IFileEditorInput original

    new(IFileEditorInput original) {
        this.original = original
    }

    override invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (method.name.equals("getStorage")) {
            return (method.invoke(original, args) as IStorage).proxy
        } else {
            return method.invoke(original, args)
        }
    }

    def private IStorage proxy(IStorage storage) {
        Proxy.newProxyInstance(this.class.classLoader, #[IStorage],
            new IStorageHandler(storage)) as IStorage
    }
}

package class IStorageHandler implements InvocationHandler {
    private final IStorage original

    new(IStorage original) {
        this.original = original
    }

    override invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (method.name.equals("getContents") && method.parameterCount === 0) {
            // Read the binary stream and convert it to the same textual format
            // that is shown in the editor
            val contents = original.contents
            try {
                val content = new JBCInputStreamContentReader().readContent(contents,
                    (original as IEncodedStorage).charset)
                return new StringInputStream(content)
            } finally {
                contents.close()
            }
        } else {
            return method.invoke(original, args)
        }
    }
}

 
As a result, the editor.getStorage().getContents() call returns the same content as supplied by document.get(), and the comparison of the document content with that from the file now yields the expected results.
 

WithProxyForIFileInput

The editor implemented here is quite simple in that each .class file is considered individually: there is no global scope that would allow references between multiple files to be resolved and validated. This means that it isn't easy to develop an entire project directly in class file format.

However, this is not a fundamental problem, merely a design decision: the editor is explicitly intended for editing individual .class files. There is nothing wrong, however, with the idea of extending these techniques to other binaries in order to create useful editors for them without an explicit intermediate textual format. These could be stored in files, and the files could be linked by references within a global scope.


by Arne Deutsch (adeutsch@itemis.de) at July 13, 2018 12:10 PM