CheConf18

February 16, 2018 12:00 PM

A small post to announce that I will be speaking at CheConf18, a one-day conference dedicated to Eclipse Che, an extensible cloud development platform. I am really excited to participate in this event! I will co-present with Stevan Le Meur, a Che maintainer from Red Hat, about Building Extensibility and Community for Che. During our session, you’ll get a preview of the work we prototyped at Obeo to bring the modeling stack to the web, and of the kinds of tools one can envision.

Perhaps you hadn’t heard about CheConf so far? No problem: it’s not too late to participate. Indeed, there’s no need to negotiate with your boss, no need to book a flight and a hotel: this conference happens entirely online and is free!

You just need to subscribe on the web site, and you’ll be able to join the conference whenever you want. Have a look at the great schedule. And do not forget our session on Wednesday, February 21, 2018, 17:30-18:00 Paris time, wherever you are in the world!

Stay tuned!


February 16, 2018 12:00 PM

Cloud Native and Serverless Landscape

by Chris Aniszczyk at February 16, 2018 10:21 AM

For the last year or so, the CNCF has been exploring the intersection of cloud native and serverless through the CNCF Serverless WG:

As the first artifacts of the working group, we are happy to announce a whitepaper and landscape to bring some clarity to this early and evolving technology space. The CNCF Serverless WG is also working on a draft specification for describing event data in a common way to ease event declaration and delivery, focused on the serverless space. The goal is to eventually present this project to the CNCF TOC to formalize it as an official CNCF project:

It’s still early days, but in my opinion, serverless is one application/programming model built on cloud native technology. There are some open source efforts out there for serverless, but they tend to be focused on specific projects (e.g., OpenFaaS, kubeless) rather than on collaboration across cloud providers and startups. The CNCF is looking to enable collaboration/projects in this space that adhere to our values. What are our values? See these from our charter:

  • Fast is better than slow. The foundation enables projects to progress at high velocity to support aggressive adoption by users.
  • Open. The foundation is open and accessible, and operates independently of specific partisan interests. The foundation accepts all contributors based on the merit of their contributions, and the foundation’s technology must be available to all according to open source values. The technical community and its decisions shall be transparent.
  • Fair. The foundation will avoid undue influence, bad behavior or “pay-to-play” decision-making.
  • Strong technical identity. The foundation will achieve and maintain a high degree of its own technical identity that is shared across the projects.
  • Clear boundaries. The foundation shall establish clear goals, and in some cases, what the non-goals of the foundation are to allow projects to effectively co-exist, and to help the ecosystem understand where to focus for new innovation.
  • Scalable. Ability to support all scales of deployment, from small developer centric environments to the scale of enterprises and service providers. This implies that in some deployments some optional components may not be deployed, but the overall design and architecture should still be applicable.
  • Platform agnostic. The specifications developed will not be platform specific such that they can be implemented on a variety of architectures and operating systems.

Anyways, if you’re interested in this space, I highly recommend you attend the CNCF Serverless WG meetings which are public and currently happen on a weekly basis.


by Chris Aniszczyk at February 16, 2018 10:21 AM

Presentation: Spring Tools 4 - Eclipse and Beyond

by Martin Lippert, Kris De Volder at February 15, 2018 06:27 PM

Martin Lippert and Kris De Volder introduce and demo a new generation of Spring tools including Spring Tool Suite for Eclipse (STS4), STS4 VS Code and STS4 Atom.

By Martin Lippert

by Martin Lippert, Kris De Volder at February 15, 2018 06:27 PM

JBoss Tools 4.5.3.AM1 for Eclipse Oxygen.2

by jeffmaury at February 13, 2018 08:44 PM

Happy to announce 4.5.3.AM1 (Developer Milestone 1) build for Eclipse Oxygen.2.

Downloads available at JBoss Tools 4.5.3 AM1.

What is New?

Full info is at this page. Some highlights are below.

OpenShift

Minishift Server Adapter

A new server adapter has been added to support upstream Minishift. While the server adapter itself has limited functionality, it is able to start and stop the Minishift virtual machine via its minishift binary. From the Servers view, click New and then type minishift; that will bring up a command to set up and/or launch the Minishift server adapter.

minishift server adapter

All you have to do is set the location of the minishift binary file, the type of virtualization hypervisor and an optional Minishift profile name.

minishift server adapter1

Once you’re finished, a new Minishift Server adapter will then be created and visible in the Servers view.

minishift server adapter2

Once the server is started, Docker and OpenShift connections should appear in their respective views, allowing the user to quickly create a new OpenShift application and begin developing their AwesomeApp in a highly replicable environment.

minishift server adapter3
minishift server adapter4

Fuse Tooling

New shortcuts in Fuse Integration perspective

Shortcuts for the Java, Launch, and Debug perspectives and basic navigation operations are now provided within the Fuse Integration perspective.

The result is a set of buttons in the Toolbar:

New Toolbar action

All of the associated keyboard shortcuts are also available, such as Ctrl+Shift+T to open a Java Type.

Performance improvement: Loading Advanced tab for Camel Endpoints

The loading time of the "Advanced" tab in the Properties view for Camel Endpoints is greatly improved.

Advanced Tab in Properties view

Previously, for Camel components that have a lot of parameters, it took several seconds to load the Advanced tab. For example, for the File component, it used to take ~3.5 s; it now takes ~350 ms, a tenfold reduction in load time. (See this interesting article on response time.)

If you notice other places showing slow performance, you can file a report by using the Fuse Tooling issue tracker. The Fuse Tooling team really appreciates your help. Your feedback contributes to our development priorities and improves the Fuse Tooling user experience.

Enjoy!

Jeff Maury


by jeffmaury at February 13, 2018 08:44 PM

Eclipse tested with a few Gnome themes

by Lorenzo Bettini at February 13, 2018 09:53 AM

In this small blog post I’ll show what Eclipse looks like in Linux Gnome (Ubuntu 17.10) with a few Gnome themes.

First of all, the default Ubuntu theme, Ambiance, does not make Eclipse look very nice… see the icons, which are “packed” and “compressed” in the toolbar, not to mention the cut-off “Filter Files” textbox in the “Git Staging” view:

Numix has similar problems:

Adwaita (the default Gnome theme), instead, makes it look great:

The same holds for alternative themes; the following screenshots are based on Arc, Pop and Matcha, respectively:

So, in the end, stay away from the Ubuntu default theme 😉


by Lorenzo Bettini at February 13, 2018 09:53 AM

Python 3 and Import Hooks for OSGi Services

by Scott Lewis (noreply@blogger.com) at February 13, 2018 02:15 AM

In a previous post I described using Python for implementing OSGi services. This Python<->Java service bridge allows Python-provided/implemented OSGi services to be called from Java, and Java-provided/implemented OSGi services to be called from Python. OSGi Remote Services provides a standardized way of communicating service meta-data (e.g. service contracts, endpoint meta-data) between Java and Python processes.

As this Java<->Python communication conforms to the OSGi Remote Services specification, everything is completely inter-operable with Declarative Services and/or other frameworks based upon OSGi Services.  It will also run in any OSGi R5+ environment, including Eclipse, Karaf, OSGi-based web servers, or other OSGi-based environments.
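
To make the Java side concrete, here is a minimal sketch of what a consumer can look like. The HelloService interface and its method are hypothetical stand-ins for the tutorial’s actual service contract; because the bridge is a standard OSGi Remote Service, a Declarative Services component consumes a Python-implemented service exactly as it would a local Java one.

// HelloService.java -- hypothetical service contract; the tutorial's actual interface differs
public interface HelloService {
    String sayHello(String from);
}

// HelloConsumer.java -- a plain Declarative Services consumer
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component
public class HelloConsumer {
    // DS injects the service whether it is implemented in this JVM or
    // exported from a Python process via the Py4j distribution provider.
    @Reference
    void bindHelloService(HelloService hello) {
        System.out.println(hello.sayHello("java-consumer"));
    }
}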

Recently, Python 3 introduced the concept of an Import Hook. An import hook allows the Python path and the behavior of the Python import statement to be dynamically modified or extended.

In the most recent version (2.7) of the ECF Py4j Distribution Provider, we use import hooks so that Python module imports are resolved by a Java-side OSGi ModuleResolver service. For example, as described in this tutorial, this Python statement

from hello import HelloServiceImpl

imports the hello.py module as a string loaded from within an OSGi bundle. Among other things, this allows OSGi dynamics to be used to add and remove modules from the Python path without stopping and restarting either the Java or the Python process.



by Scott Lewis (noreply@blogger.com) at February 13, 2018 02:15 AM

Eclipse Vert.x 3.5.1 released!

by vietj at February 13, 2018 12:00 AM

We have just released Vert.x 3.5.1!

Fixes first!

As usual this release fixes bugs reported in 3.5.0, see the release notes.

JUnit 5 support

This release introduces the new vertx-junit5 module.

JUnit 5 is a rewrite of the famous Java testing framework that brings interesting new features, including:

  • nested tests,
  • the ability to give a human-readable description of tests and test cases (and yes, even use emojis),
  • a modular extension mechanism that is more powerful than the JUnit 4 runner mechanism (@RunWith annotation),
  • conditional test execution,
  • parameterized tests, including from sources such as CSV data,
  • the support of Java 8 lambda expressions in the reworked built-in assertions API,
  • support for running tests previously written for JUnit 4.

Suppose that we have a SampleVerticle verticle that exposes an HTTP server on port 11981. Here is how we can test its deployment as well as the result of 10 concurrent HTTP requests:

@Test
@DisplayName("🚀 Deploy a HTTP service verticle and make 10 requests")
void useSampleVerticle(Vertx vertx, VertxTestContext testContext) {
  WebClient webClient = WebClient.create(vertx);
  Checkpoint deploymentCheckpoint = testContext.checkpoint();

  Checkpoint requestCheckpoint = testContext.checkpoint(10);
  vertx.deployVerticle(new SampleVerticle(), testContext.succeeding(id -> {
    deploymentCheckpoint.flag();

    for (int i = 0; i < 10; i++) {
      webClient.get(11981, "localhost", "/")
        .as(BodyCodec.string())
        .send(testContext.succeeding(resp -> {
          testContext.verify(() -> {
            assertThat(resp.statusCode()).isEqualTo(200);
            assertThat(resp.body()).contains("Yo!");
            requestCheckpoint.flag();
          });
        }));
    }
  }));
}

The test method above benefits from the injection of a working Vertx context, a VertxTestContext for dealing with asynchronous operations, and the guarantee that the execution time is bounded by a timeout, which can optionally be configured using a @Timeout annotation.

The test succeeds when all checkpoints have been flagged. Note that vertx-junit5 is agnostic of the assertions library being used: you may opt for the built-in JUnit 5 assertions or use a 3rd-party library such as AssertJ as we did in the example above.

You can check out the source on GitHub, read the manual, and learn from the examples.

Web API Contract enhancements

The vertx-web-api-contract package includes a variety of fixes, from schema $ref resolution to revamped documentation. You can have a look at the list of all fixes/improvements here and all breaking changes here.

Starting with 3.5.1, to load the OpenAPI spec and instantiate the Router you should use the new method OpenAPI3RouterFactory.create(), which replaces the old methods createRouterFactoryFromFile() and createRouterFactoryFromURL(). This new method accepts relative paths, absolute paths, local URLs with file:// and remote URLs with http://. Note that if you want to refer to a file relative to your jar’s root, you can simply use a relative path and the parser will look outside the jar and inside the jar for the spec.

Starting with 3.5.1, all settings that control OpenAPI3RouterFactory behaviour during router generation are gathered in a new object called RouterFactoryOptions. From this object you can:

  • Configure if you want to mount a default validation failure handler and which one (methods setMountValidationFailureHandler(boolean) and setValidationFailureHandler(Handler))
  • Configure if you want to mount a default 501 not implemented handler and which one (methods setMountNotImplementedFailureHandler(boolean) and setNotImplementedFailureHandler(Handler))
  • Configure if you want to mount ResponseContentTypeHandler automatically (method setMountResponseContentTypeHandler(boolean))
  • Configure if you want to fail during router generation when security handlers are not configured (method setRequireSecurityHandlers(boolean))

After initializing the router factory, you can mount the RouterFactoryOptions object with the method routerFactory.setOptions() whenever you want, before calling getRouter().
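
Putting the pieces together, here is a minimal sketch of what this can look like in 3.5.1. The petstore.yaml spec location and the listPets operation id are assumptions made for this example; the option setters are the ones listed above.

import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.api.contract.RouterFactoryOptions;
import io.vertx.ext.web.api.contract.openapi3.OpenAPI3RouterFactory;

public class ApiServer {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    // Relative path, absolute path, file:// or http:// URL all work here
    OpenAPI3RouterFactory.create(vertx, "petstore.yaml", ar -> {
      if (ar.succeeded()) {
        OpenAPI3RouterFactory factory = ar.result();
        // Configure router generation before asking for the Router
        factory.setOptions(new RouterFactoryOptions()
          .setMountValidationFailureHandler(true)
          .setRequireSecurityHandlers(false));
        factory.addHandlerByOperationId("listPets",
          ctx -> ctx.response().end("[]"));
        Router router = factory.getRouter();
        vertx.createHttpServer().requestHandler(router::accept).listen(8080);
      } else {
        ar.cause().printStackTrace();
      }
    });
  }
}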

RxJava deprecation removal

It is important to know that 3.5.x will be the last release with the legacy xyzObservable() methods:

@Deprecated()
public Observable<HttpServer> listenObservable(int port, String host);

has been replaced since Vert.x 3.4 by:

public Single<HttpServer> rxListen(int port, String host);

The xyzObservable() deprecated methods will be removed in Vert.x 3.6.
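
For migration, here is a short sketch of the replacement style with the RxJava 1 rxified API; the port and handler below are only illustrative:

import io.vertx.rxjava.core.Vertx;
import io.vertx.rxjava.core.http.HttpServer;
import rx.Single;

public class RxListenExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    Single<HttpServer> single = vertx.createHttpServer()
      .requestHandler(req -> req.response().end("Yo!"))
      .rxListen(8080, "localhost");
    // The returned Single is lazy: the server only starts listening on subscribe
    single.subscribe(
      server -> System.out.println("Listening on " + server.actualPort()),
      Throwable::printStackTrace);
  }
}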

Wrap up

Vert.x 3.5.1 release notes and breaking changes:

The event bus client using the SockJS bridge is available from NPM, Bower and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding!


by vietj at February 13, 2018 12:00 AM

EclipseCon France 2018 Call for Papers

February 12, 2018 02:15 PM

The call for paper submissions is now open until March 19. We'll see you June 13 - 14 in Toulouse!

February 12, 2018 02:15 PM

The Eclipse Committer Election Workflow

by waynebeaton at February 08, 2018 03:30 PM

In the world of open source, committers are the ones who hold the keys. Committers decide what code goes into the code base, they decide how a project builds, and they ultimately decide what gets delivered to the adopter community. With awesome power comes awesome responsibility, and so it’s no mistake that the Open Source Rules of Engagement described by the Eclipse Development Process put Meritocracy on equal footing with Transparency and Openness: becoming a committer isn’t necessarily hard, but it does require a demonstration of commitment (committer… commitment… see what I did there?).

There are two ways to become an Eclipse Committer. The first way is to be listed as an initial committer on a new project proposal. When projects come to the Eclipse Foundation we need them to actually start with committers, and so we include this as part of the bootstrapping. As part of the community vetting of a new project proposal, the committers listed are themselves vetted by the community. That’s why we include space for a merit statement for every committer listed on a proposal (in many cases, the merit statement is an implied “these are the people who worked on the code that is being contributed”). In effect, the project proposal process also acts as a committer election that’s open to the entire community.

The second way to become a committer is to get voted in via a Committer Election. This starts with a nomination by an existing committer that includes a statement of merit, which usually takes the form of a list of the various contributions that the individual has made to the project. What constitutes a sufficient demonstration of merit varies by project team and PMC. Generally, though, after an individual has made a small number of high-quality contributions that demonstrate that they understand how the project works, it’s pretty natural for them to be invited to join the team.

There’s actually a third way. In cases where a project is dysfunctional, the project leadership has an option to add and remove committers and project leads. In the rare cases where this option is exercised, it is first discussed in the corresponding Project Management Committee‘s (PMC) mailing list.

Last week, we rolled out some new infrastructure to support Committer Elections.

Every project page in the Project Management Infrastructure (PMI) includes a block of Committer Tools on the right side of the page. From this block, project committers can perform various actions, including the new Nominate a Committer action.

Screenshot from 2018-02-06 10-34-43

Committer Tools

Clicking this will bring up the nomination form where the existing committer will provide the name and email address of the nominee along with the statement of merit.

Screenshot from 2018-02-06 10-35-15

What the committer sees when they nominate a new committer.

When you click the Nominate button, the Committer Election begins by sending a note to the project mailing list inviting existing project committers to vote. Committers visit the election page to cast their vote and—since this is a transparent process—everybody else can watch the election unfold.

According to our election rules, an election ends when either everybody votes in the affirmative or seven days have passed. If at the end of the election we have at least three affirmative votes and no negative votes, the vote is considered successful and it is passed on to the PMC for approval (note that when a project has fewer than three committers, success is declared if everybody votes in the affirmative). The PMC will validate that the merit statement is sufficient and that the election was executed correctly, and either approve or veto it. PMC-approved elections get passed into the next piece of the workflow: Committer Paperwork.

Regardless of how a developer becomes a committer (by vote, by proposal, or by appointment), they are required to complete legal paperwork before we can grant them write access to project resources. The Eclipse Foundation needs to ensure that all committers with write access to the code, websites, and issue tracking systems understand their role in the intellectual property process, and that we have accurate records of the people who are acting as change agents on the projects. Committers must provide documentation asserting that they have read, understood, and will follow the committer guidelines; and must gain their employer’s consent to their participation in Eclipse Foundation open source projects.

Our Committer Paperwork process is initiated whenever a developer joins us as a new committer, or—since paperwork is tied to a specific employer—when a committer changes employers.

Screen Shot 2018-02-07 at 11.54.35 AM

The exact nature of the type of paperwork required varies based on the individual’s employment status and the Eclipse Foundation membership status of their employer. Again, a full discussion of this is out-of-scope for this post, but we need to have either an Individual Committer Agreement or a Member Committer Agreement on file for every committer. The workflow guides the new committer through the options.

Note that we’ve just gotten approval on an update to the Individual Committer Agreement that eliminates the need for the companion Eclipse Foundation Committer Employer Consent Form. This should make it easier for new committers to get started. We’re rolling the new version out now.

We owe this great new implementation of this workflow to the tireless efforts of the entire Eclipse IT Team, and especially Eric, Chris, and Matt. Big Thanks!


by waynebeaton at February 08, 2018 03:30 PM

Starting an open source program office?

by Chris Aniszczyk at February 08, 2018 02:17 PM

To make good on my new year’s resolution to write more, I recently wrote an article for opensource.com on starting an open source program at your company:

Please check it out and let me know if you have any comments. I’d really like to see us build a future where more companies have formal open source programs; that’s a key path towards making open source sustainable for everyone.


by Chris Aniszczyk at February 08, 2018 02:17 PM

Becoming Xtext Co-Project Lead

by Christian Dietrich (dietrich@itemis.de) at February 07, 2018 09:07 AM

I started using Xtext more than 10 years ago. Back then it was a small part of the openArchitectureWare framework. I began using it heavily after the move to Eclipse and became a power user and supporter in the newsgroups and forum. In 2016 I joined the Xtext committer team and have worked on the framework for about 50% of my time since.

At roughly the same time, parts of the Xtext team moved away from itemis, so the people working on Xtext, and their main focus, changed. I still think Xtext is a very valuable framework, and it deserves more attention than it currently gets. This is why I stepped up to become a co-project lead for the project: to ensure its management rests on broader shoulders.

whats-new


As Xtext committer and co-project lead, my main goals are the following:

  • Ensure that Xtext and Xtend are actively maintained and will work with future versions of the Eclipse Platform and JDT as well as with future versions of Java itself (for example the Java 9 support we are currently working on).
  • Ensure that relevant bugs and performance problems keep being addressed and fixed in a reasonable manner and timespan.
  • Enable more users to contribute to Xtext.
  • Develop new features that make our users’ lives easier, and keep track of trends and developments inside and outside the Eclipse ecosystem.
  • Make sure Xtext will continue to be supported and to run smoothly both in standalone modes such as LSP and inside the Eclipse IDE.
  • Make sure we have regular releases and keep up with the release process currently planned for the Eclipse Platform, though we may slow down from the release cadence you were used to in the past.

It is not only the TypeFox guys or our Xtext team at itemis (Karsten, Sebastian, Holger and other colleagues) that drive Xtext. I invite you, the Xtext community, to actively contribute to the framework: not only by filing bugs or giving feedback. I warmly welcome everybody who is willing to contribute fixes or features to Xtext. Get in contact with us and let us work together on a great future for Xtext.

 


by Christian Dietrich (dietrich@itemis.de) at February 07, 2018 09:07 AM

EE.next working group - community review process

February 06, 2018 01:00 PM

Announcing the EE.next working group to support the EE4J projects. Join the review process of the charter.

February 06, 2018 01:00 PM

Introducing the EE.next Working Group

by Mike Milinkovich at February 05, 2018 07:43 PM

As part of our continuing adventures in migrating Java EE to the Eclipse Foundation, I am pleased to announce that the draft of the EE.next Working Group charter has now been posted for community review. Comments and feedback are welcome on the ee4j-community@eclipse.org mailing list. But please, please, pretty please make sure you read the FAQ (also copied below) before you do.

You can think of this EE.next group as the replacement for the Java Community Process for Java EE. It will be the body that the ecosystem can join and participate in at a corporate level. Individuals can also join if they are committers on EE4J projects. EE.next will also be the place where the new specification process will be created and managed, and where specs will be reviewed and approved.

Under the process for establishing Eclipse Foundation working groups, there will now be a community review period lasting a minimum of 30 days.

 

FAQ

What is the purpose of a working group?

An Eclipse Foundation working group is a special-purpose consortium of Eclipse Members interested in supporting a technology domain. They are intended to complement the activities of a collection of Eclipse Foundation open source projects. Open source projects are excellent for many things, but they typically do not do a great job with activities such as marketing, branding, specification and compliance processes, and the like.

What is the role of the PMC versus the working group or the working group Steering Committee?

Eclipse Foundation projects are self-governing meritocracies that set their own technical agendas and plans. The Project Management Committee for an Eclipse top-level project oversees the day-to-day activities of its projects through activities such as reviewing and approving plans, accepting new projects, approving releases, managing committer elections, and the like.

Working groups and their steering committees are intended to complement the work happening in the open source projects with activities that lead to greater adoption, market presence, and momentum. Specifically the role of the working group is to foster the creation and growth of the ecosystem that surrounds the projects.

Working groups do not direct the activities of the projects or their PMC. They are intended to be peer organizations that work in close collaboration with one another.

Who defines and manages technical direction?

The projects manage their technical direction. The PMC may elect to coordinate the activities of multiple projects to facilitate the release of software platforms, for example.

Because the creation of roadmaps and long term release plans can require market analysis, requirements gathering, and resource commitments from member companies, the working group may sponsor complementary activities to generate these plans. However, ultimately it is up to the projects to agree to implement these plans or roadmaps. The best way for a working group to influence the direction of the open source projects is to ensure that they have adequate resources. This can take the form of developer contributions, or under the Member Funded Initiatives programs, working groups can pool funds to contract developers to implement the features they desire.

Why are there so many levels of membership?

Because the Java EE ecosystem is a big place, and we want to ensure that there are roles for all of the players in it. We see the roles of the various member classes to roughly align as follows:

  • Strategic members are the vendors that deliver Java EE implementations. As such they are typically putting in the largest number of contributors, and are leading many of the projects.
  • Influencer members are the large enterprises that rely upon Java EE today for their mission critical application infrastructure, and who are looking to EE.next to deliver the next generation of cloud native Java. They have made strategic investments in this technology, have a massive skills investment in their developers, and want to protect these investments as well as influence the future of this technology.
  • Participant members are the companies that offer complementary products and services within the Java EE ecosystem. Examples include ISVs which build products on Java EE, or system integrators that use these technologies in delivering solutions to their customers.
  • Committer members are comprised of the committers working on the various EE4J projects who are also members of the Eclipse Foundation. While the Eclipse bylaws define the criteria for committers to be considered members, in essence any committer members are either a) a committer who is an employee of an EE.next member company or b) any other committer who has explicitly chosen to join as a member. Giving Committer members a role in the working group governance process mimics the governance structure of the Eclipse Foundation itself, where giving committers an explicit voice has been invaluable.

What makes this different from the Java Community Process (JCP)?

The EE.next working group will be the successor organization to the JCP for the family of technologies formerly known as Java EE. It has several features that make it a worthy successor to the JCP:

  1. It is vendor neutral. The JCP was owned and operated first by Sun and later by Oracle. EE.next is designed to be inclusive and diverse, with no organization having any special roles or rights.
  2. It has open intellectual property flows. At the JCP, all IP flowed to the Spec Lead, which was typically Oracle. We are still working out the exact details, but the IP rights with EE.next and EE4J will certainly not be controlled by any for-profit entity.
  3. It is more agile. This is an opportunity to define a 21st century workflow for creating rapidly evolving Java-based technologies. We will be merging the best practices from open source with what we have learned from over 15 years of JCP experience.

Is the WG steering committee roughly equivalent to the JCP Executive Committee?

No, not really. The JCP EC always had two mixed roles: as a technical body overseeing the specification process, and as an ecosystem governance body promoting Java ME, SE, and EE. In EE.next the Steering Committee will be the overall ecosystem governance body. The EE.next Specification Committee will focus solely on the development and smooth operation of the technical specification process.

Does a project have to be approved as a spec before it can start?

That is actually a decision which will be made by the EE4J PMC, not the working group. However, it is a goal of the people and organizations working on creating this working group that the Java EE community move to more of a code-first culture. We anticipate and hope that the EE4J PMC will embrace the incubation of technologies under its banner. Once a technology has been successfully implemented and adopted by at least some in the industry, it can then propose that a specification be created for it.

In addition to the Steering Committee, what other committees exist?

There are four committees comprising the EE.next governance structure – the Steering Committee, the Specification Committee, the Marketing and Brand Committee, and the Enterprise Requirements Committee. A summary of the make-up of each of the committees is in the table below.

| Committee | Strategic Member | Influencer Member | Participant Member | Committer Member |
| --- | --- | --- | --- | --- |
| Steering Committee | Appointed | Elected | Elected | Elected |
| Specification Committee | Appointed | Elected | Elected | Elected |
| Marketing Committee | Appointed | Elected | Elected | Elected |
| Enterprise Requirements Committee | Appointed | Appointed | N/A | N/A |

by Mike Milinkovich at February 05, 2018 07:43 PM

A DSL with Transitive Imports in Xtext in 5 Minutes

by Christian Wehrheim (cwehrheim@itemis.de) at February 05, 2018 02:51 PM

Xtext offers several ways to reference elements in a DSL. One option is to import elements via namespaces. This is done with the ImportedNamespaceAwareLocalScopeProvider and allows the "import" of individual elements or, using wildcards (.*), of all elements of a namespace.

There are languages, however, in which this behavior is not desired. In these languages the user explicitly imports one or more resource files in order to access their contents.

A simple DSL with import behavior, thanks to Xtext

A DSL with such import behavior is quite easy to build with Xtext (https://www.itemis.com/en/xtext/) by adding a parser rule with the special attribute name importURI to the DSL. The following example shows a simple DSL that allows defining names in arbitrary resources and using them in greeting messages.

grammar org.xtext.example.mydsl.MyDsl with org.eclipse.xtext.common.Terminals
generate myDsl "http://www.xtext.org/example/mydsl/MyDsl"
Model:
	includes+=Include*
	names+=Name*
	greetings+=Greeting*;
Include:
	'import' importURI=STRING
	;
Name:
	'def' name=ID
	;
Greeting:
	'Hallo' name=[Name] '!'
	;

We want to send greeting messages to colleagues in our company. But since the company is big and consists of many colleagues working in different areas, we want to create a separate file for each business area, containing the names of the respective colleagues. This improves clarity and maintainability.

Only an explicit import of a resource should bring its name definitions into scope, and this should be as fast and as resource-friendly as possible.

The approach here is to use the index, which makes the unnecessary and (for large models) time-consuming loading of resources superfluous. As a first step we have to write the information about the imported resources into the index. To do so, we implement a class MyDslResourceDescriptionStrategy that extends DefaultResourceDescriptionStrategy. The URI strings of the resources imported in the Model parser rule are joined into a comma-separated string and stored under the key includes in the userData map of the object description in the index.

package org.xtext.example.mydsl

import com.google.inject.Inject
import java.util.HashMap
import org.eclipse.xtext.naming.QualifiedName
import org.eclipse.xtext.resource.EObjectDescription
import org.eclipse.xtext.resource.IEObjectDescription
import org.eclipse.xtext.resource.impl.DefaultResourceDescriptionStrategy
import org.eclipse.xtext.scoping.impl.ImportUriResolver
import org.eclipse.xtext.util.IAcceptor
import org.xtext.example.mydsl.myDsl.Model
import org.eclipse.emf.ecore.EObject

class MyDslResourceDescriptionStrategy extends DefaultResourceDescriptionStrategy {
	public static final String INCLUDES = "includes"
	@Inject
	ImportUriResolver uriResolver

	override createEObjectDescriptions(EObject eObject, IAcceptor<IEObjectDescription> acceptor) {
		if(eObject instanceof Model) {
			this.createEObjectDescriptionForModel(eObject, acceptor)
			return true
		}
		else {
			super.createEObjectDescriptions(eObject, acceptor)
		}
	}

	def void createEObjectDescriptionForModel(Model model, IAcceptor<IEObjectDescription> acceptor) {
		val uris = newArrayList()
		model.includes.forEach[uris.add(uriResolver.apply(it))]
		val userData = new HashMap<String, String>
		userData.put(INCLUDES, uris.join(","))
		acceptor.accept(EObjectDescription.create(QualifiedName.create(model.eResource.URI.toString), model, userData))
	}
}

To use our ResourceDescriptionStrategy, we still have to bind it in the MyDslRuntimeModule.

 

package org.xtext.example.mydsl

import org.eclipse.xtext.resource.IDefaultResourceDescriptionStrategy
import org.eclipse.xtext.scoping.IGlobalScopeProvider
import org.xtext.example.mydsl.scoping.MyDslGlobalScopeProvider

class MyDslRuntimeModule extends AbstractMyDslRuntimeModule {
	def Class<? extends IDefaultResourceDescriptionStrategy> bindIDefaultResourceDescriptionStrategy() {
		MyDslResourceDescriptionStrategy
	}
}


So far we have only collected information and stored it in the index. To make use of it, we additionally need our own IGlobalScopeProvider. For that we implement a class MyDslGlobalScopeProvider that extends ImportUriGlobalScopeProvider, and override the method getImportedUris(Resource resource). This method returns a LinkedHashSet that ultimately contains all URIs to be imported into the resource.

Reading the imported resources from the index is handled by the method collectImportUris. The method asks the IResourceDescription.Manager for the resource's IResourceDescription. From it, for each Model element, the string with the URIs of the imported resources stored under the key includes is read from the userData map, split up, and the individual URIs are stored in a set.


package org.xtext.example.mydsl.scoping

import com.google.common.base.Splitter
import com.google.inject.Inject
import com.google.inject.Provider
import java.util.LinkedHashSet
import org.eclipse.emf.common.util.URI
import org.eclipse.emf.ecore.resource.Resource
import org.eclipse.xtext.EcoreUtil2
import org.eclipse.xtext.resource.IResourceDescription
import org.eclipse.xtext.scoping.impl.ImportUriGlobalScopeProvider
import org.eclipse.xtext.util.IResourceScopeCache
import org.xtext.example.mydsl.MyDslResourceDescriptionStrategy
import org.xtext.example.mydsl.myDsl.MyDslPackage

class MyDslGlobalScopeProvider extends ImportUriGlobalScopeProvider {
	private static final Splitter SPLITTER = Splitter.on(',');

	@Inject
	IResourceDescription.Manager descriptionManager;

	@Inject
	IResourceScopeCache cache;

	override protected getImportedUris(Resource resource) {
		return cache.get(MyDslGlobalScopeProvider.getSimpleName(), resource, new Provider<LinkedHashSet<URI>>() {
			override get() {
				val uniqueImportURIs = collectImportUris(resource, new LinkedHashSet(5))

				val uriIter = uniqueImportURIs.iterator()
				while(uriIter.hasNext()) {
					if (!EcoreUtil2.isValidUri(resource, uriIter.next()))
						uriIter.remove()
				}
				return uniqueImportURIs
			}

			def LinkedHashSet<URI> collectImportUris(Resource resource, LinkedHashSet<URI> uniqueImportURIs) {
				val resourceDescription = descriptionManager.getResourceDescription(resource)
				val models = resourceDescription.getExportedObjectsByType(MyDslPackage.Literals.MODEL)
				
				models.forEach[
					val userData = getUserData(MyDslResourceDescriptionStrategy.INCLUDES)
					if(userData !== null) {
						SPLITTER.split(userData).forEach[uri |
							var includedUri = URI.createURI(uri)
							includedUri = includedUri.resolve(resource.URI)
							uniqueImportURIs.add(includedUri)
						]
					}
				]
				return uniqueImportURIs
			}
		});
	}
}


To use our MyDslGlobalScopeProvider, we in turn have to bind it in the MyDslRuntimeModule.

package org.xtext.example.mydsl

import org.eclipse.xtext.resource.IDefaultResourceDescriptionStrategy
import org.eclipse.xtext.scoping.IGlobalScopeProvider
import org.xtext.example.mydsl.scoping.MyDslGlobalScopeProvider

class MyDslRuntimeModule extends AbstractMyDslRuntimeModule {
	def Class<? extends IDefaultResourceDescriptionStrategy> bindIDefaultResourceDescriptionStrategy() {
		MyDslResourceDescriptionStrategy
	}
	override Class<? extends IGlobalScopeProvider> bindIGlobalScopeProvider() {
		MyDslGlobalScopeProvider;
	}
}


We start the editor for our little language and begin creating the model files. Here we have the idea not to import the resources of the different business areas one by one, but to create a single resource that contains all the imports and to import just that one. So we create the following resources:

Ressourcen-Agile.png

 Ressourcen-Xtext.png

Ressource-Kollegen.png
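
(The screenshots are not reproduced here. Using the grammar above, the three resources could look roughly like this; the file and colleague names are invented for illustration:)

// Agile.mydsl
def Holger
def Karsten

// Xtext.mydsl
def Sebastian
def Christian

// Kollegen.mydsl
import "Agile.mydsl"
import "Xtext.mydsl"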


When creating the resource with the greeting messages, we notice that the names cannot be resolved.

Ressource-Greetings.png


Why is that? After all, we wrote all imported resources into the index.

That is true as far as it goes: all directly imported resources are written into the index. The imports inside an imported resource, however, are ignored. The feature we want is known as transitive imports: importing a resource implicitly imports all the resources it imports itself.

To enable transitive imports in our language, we have to adapt our MyDslGlobalScopeProvider. Instead of only storing the URI of an imported resource in the set, we additionally call the method collectImportUris for the imported resource, so that the resources it imports are processed as well.

package org.xtext.example.mydsl.scoping

import com.google.common.base.Splitter
import com.google.inject.Inject
import com.google.inject.Provider
import java.util.LinkedHashSet
import org.eclipse.emf.common.util.URI
import org.eclipse.emf.ecore.resource.Resource
import org.eclipse.xtext.EcoreUtil2
import org.eclipse.xtext.resource.IResourceDescription
import org.eclipse.xtext.scoping.impl.ImportUriGlobalScopeProvider
import org.eclipse.xtext.util.IResourceScopeCache
import org.xtext.example.mydsl.MyDslResourceDescriptionStrategy
import org.xtext.example.mydsl.myDsl.MyDslPackage

class MyDslGlobalScopeProvider extends ImportUriGlobalScopeProvider {
	private static final Splitter SPLITTER = Splitter.on(',');

	@Inject
	IResourceDescription.Manager descriptionManager;

	@Inject
	IResourceScopeCache cache;

	override protected getImportedUris(Resource resource) {
		return cache.get(MyDslGlobalScopeProvider.getSimpleName(), resource, new Provider<LinkedHashSet<URI>>() {
			override get() {
				val uniqueImportURIs = collectImportUris(resource, new LinkedHashSet(5))

				val uriIter = uniqueImportURIs.iterator()
				while(uriIter.hasNext()) {
					if (!EcoreUtil2.isValidUri(resource, uriIter.next()))
						uriIter.remove()
				}
				return uniqueImportURIs
			}

			def LinkedHashSet<URI> collectImportUris(Resource resource, LinkedHashSet<URI> uniqueImportURIs) {
				val resourceDescription = descriptionManager.getResourceDescription(resource)
				val models = resourceDescription.getExportedObjectsByType(MyDslPackage.Literals.MODEL)
				
				models.forEach[
					val userData = getUserData(MyDslResourceDescriptionStrategy.INCLUDES)
					if(userData !== null) {
						SPLITTER.split(userData).forEach[uri |
							var includedUri = URI.createURI(uri)
							includedUri = includedUri.resolve(resource.URI)
							if(uniqueImportURIs.add(includedUri)) {
								collectImportUris(resource.getResourceSet().getResource(includedUri, true), uniqueImportURIs)
							}
						]
					}
				]
				
				return uniqueImportURIs
			}
		});
	}
}


When we reopen our resource with the greeting messages after this small change, we see that the names can now be resolved through the transitive imports.

The example project can be downloaded here.


by Christian Wehrheim (cwehrheim@itemis.de) at February 05, 2018 02:51 PM

Use the eclipse-settings-maven-plugin to synchronize prefs files across projects

February 04, 2018 11:00 PM

The question « Should the meta files related to an IDE be committed in the git repository? » is a never-ending debate. According to Aurelien Pupier, the answer to this question is YES (Talk from 2015 - slides and video). I totally agree with him, because settings files like org.eclipse.core.resources.prefs, org.eclipse.jdt.core.prefs, org.eclipse.jdt.ui.prefs or org.eclipse.m2e.core.prefs can contain valuable configuration information that will be shared among all Eclipse IDE users working on the project: code formatting rules, save actions, automated code cleanup tasks, compiler settings…

Enable project specific settings

Even today a lot of people still prefer not to have the IDE metadata files in their git repository. This means that every coworker needs to configure their IDE and, more importantly, everybody needs to keep the configuration in sync with the team over time.

In both cases (having the settings files in your repo or not), the eclipse-settings-maven-plugin can be interesting for you. The idea is to use Maven to replicate the same prefs files across multiple Maven modules. This way you can distribute the prefs files if they are missing in the git repository. Another use case is distribution across multiple teams (for example at organization level).

The source for the settings files is a simple Maven artifact located in a Maven repository. With a single Maven command, you can synchronize the prefs files.

Using eclipse-settings-maven-plugin to copy prefs files

If you want to see what the setup looks like, you can refer to my sync-eclipse-settings-example page and the associated GitHub project. I have updated it to use the latest version, published last week by my former colleagues at BSI Business Systems Integration AG.


February 04, 2018 11:00 PM

Last Week to Submit for FOSS4G NA 2018!

February 02, 2018 01:30 PM

Submissions close Feb 8, so propose your talk now for FOSS4G NA 2018, May 14-16 in St. Louis

February 02, 2018 01:30 PM

The Sum of all Reductions

by Donald Raab at February 02, 2018 02:46 AM

The reduction of all sums

Belmar, NJ during the Deep Freeze of 2018

During the Deep Freeze of 2018, everything in the world seemed to be reduced to ice and snow, including the sky. Even the sun seemed to be reduced as it ran away from my camera with a shiver.

This got me thinking about different kinds of reductions we have available in Java with Eclipse Collections and Java Streams. I wondered how many ways we could define sum with the various methods available.

Summing an array of ints

Let’s consider several ways we can sum the values in an int array in Java.

Here is the data we will use.

int[] ints = {1, 2, 3, 4, 5};
// This supplier will generate an IntStream on demand
Supplier<IntStream> intStream = () -> Arrays.stream(ints);
// This creates an IntList from Eclipse Collections
IntList intList = IntLists.immutable.with(ints);

For Loop

int sumForLoop = 0;
for (int i = 0; i < ints.length; i++)
{
    sumForLoop += ints[i];
}
Assert.assertEquals(15, sumForLoop);

forEach (IntStream) / each (IntList)

// sumForEach will be effectively final
int[] sumForEach = {0};
intStream.get().forEach(e -> sumForEach[0] += e);
Assert.assertEquals(15, sumForEach[0]);
// sumEach will be effectively final
int[] sumEach = {0};
intList.each(e -> sumEach[0] += e);
Assert.assertEquals(15, sumEach[0]);

injectInto (IntList)

// injectInto boxes on IntList as there is no primitive version
int sumInject =
    intList.injectInto(Integer.valueOf(0), Integer::sum).intValue();
Assert.assertEquals(15, sumInject);

reduce (IntStream)

// reduce does not box on IntStream
int sumReduce =
    intStream.get().reduce(Integer::sum).getAsInt();
Assert.assertEquals(15, sumReduce);

sum (IntStream / IntList)

int sum1 = intStream.get().sum();
Assert.assertEquals(15, sum1);

long sum2 = intList.sum();
Assert.assertEquals(15, sum2);

Clearly, the sum methods available on IntStream and IntList are the simplest solutions. The minor difference with IntList is that the result is widened to a long which means you can add very large ints without overflowing.

Summarizing an array of ints

When we summarize using the IntSummaryStatistics class that was added in Java 8, we get the count, sum, min, max and average calculated at the same time. This saves you from iterating multiple times. We will use the same data as before.

For Loop

IntSummaryStatistics statsForLoop = new IntSummaryStatistics();
for (int i = 0; i < ints.length; i++)
{
    statsForLoop.accept(ints[i]);
}
Assert.assertEquals(15, statsForLoop.getSum());
Assert.assertEquals(1, statsForLoop.getMin());
Assert.assertEquals(5, statsForLoop.getMax());

forEach (IntStream) / each (IntList)

IntSummaryStatistics statsForEach = new IntSummaryStatistics();
intStream.get().forEach(statsForEach::accept);

Assert.assertEquals(15, statsForEach.getSum());
Assert.assertEquals(1, statsForEach.getMin());
Assert.assertEquals(5, statsForEach.getMax());

IntSummaryStatistics statsEach = new IntSummaryStatistics();
intList.each(statsEach::accept);

Assert.assertEquals(15, statsEach.getSum());
Assert.assertEquals(1, statsEach.getMin());
Assert.assertEquals(5, statsEach.getMax());

injectInto (IntList)

IntSummaryStatistics statsInject =
    intList.injectInto(
        new IntSummaryStatistics(),
        (iss, each) -> {iss.accept(each); return iss;});

Assert.assertEquals(15, statsInject.getSum());
Assert.assertEquals(1, statsInject.getMin());
Assert.assertEquals(5, statsInject.getMax());

collect (IntStream)

IntSummaryStatistics statsCollect =
    intStream.get().collect(
        IntSummaryStatistics::new,
        IntSummaryStatistics::accept,
        IntSummaryStatistics::combine);

Assert.assertEquals(15, statsCollect.getSum());
Assert.assertEquals(1, statsCollect.getMin());
Assert.assertEquals(5, statsCollect.getMax());

Note: I could not use reduce because it requires the accumulator's parameters and result to be the same type. I had to use collect instead, which is a mutable reduction. The collect method on primitive Streams does not take a Collector; instead it takes a Supplier, an ObjIntConsumer (accumulator) and a BiConsumer (combiner).

summaryStatistics (IntStream / IntList)

IntSummaryStatistics stats1 = intStream.get().summaryStatistics();

Assert.assertEquals(15, stats1.getSum());
Assert.assertEquals(1, stats1.getMin());
Assert.assertEquals(5, stats1.getMax());

IntSummaryStatistics stats2 = intList.summaryStatistics();

Assert.assertEquals(15, stats2.getSum());
Assert.assertEquals(1, stats2.getMin());
Assert.assertEquals(5, stats2.getMax());

Again, the summaryStatistics methods are the simplest solutions.

Summing the lengths of an array of Strings

Let’s say we want to sum the lengths of Strings in an array. This approach could be used for summing any int attribute of an object.

Here is the data we will use.

String[] words = {"The", "Quick", "Brown", "Fox", "jumps", "over", "the", "lazy", "dog"};

Supplier<Stream<String>> stream = () -> Stream.of(words);
ImmutableList<String> list = Lists.immutable.with(words);

For Loop

int sumForLoop = 0;
for (int i = 0; i < words.length; i++)
{
    sumForLoop += words[i].length();
}
Assert.assertEquals(35, sumForLoop);

forEach (Stream) / each (ImmutableList)

int[] sumForEach = {0};
stream.get().forEach(e -> sumForEach[0] += e.length());
Assert.assertEquals(35, sumForEach[0]);

int[] sumEach = {0};
list.each(e -> sumEach[0] += e.length());
Assert.assertEquals(35, sumEach[0]);

collectInt (ImmutableList) + injectInto (IntList)

int sumInject = list
    .collectInt(String::length)
    .injectInto(Integer.valueOf(0), Integer::sum)
    .intValue();
Assert.assertEquals(35, sumInject);

collect (Stream) + reducing (Collectors)

int sumReducing = stream.get()
    .collect(Collectors.reducing(0,
        String::length,
        Integer::sum)).intValue();
Assert.assertEquals(35, sumReducing);

mapToInt (Stream) + reduce (IntStream)

int sumReduce = stream.get()
    .mapToInt(String::length)
    .reduce(Integer::sum)
    .getAsInt();
Assert.assertEquals(35, sumReduce);

mapToInt (Stream) + sum (IntStream)

int sum1 = stream.get()
    .mapToInt(String::length)
    .sum();
Assert.assertEquals(35, sum1);

collectInt (ImmutableList) + sum (IntList)

long sum2 = list
    .collectInt(String::length)
    .sum();
Assert.assertEquals(35, sum2);

collect (Stream) + summingInt (Collectors)

Integer summingInt = stream.get()
    .collect(Collectors.summingInt(String::length));
Assert.assertEquals(35, summingInt.intValue());

sumOfInt (ImmutableList)

long sumOfInt = list.sumOfInt(String::length);
Assert.assertEquals(35, sumOfInt);

I think in these examples, sumOfInt is the simplest solution.

Summing the lengths of Strings grouped by the first character

In this problem we will group strings by their first character and sum the lengths of the strings for each character. I prefer to use primitive maps here for the grouping where possible.

Here is the data.

String[] words = {"The", "Quick", "Brown", "Fox", "jumps", "over", "the", "lazy", "dog"};

Supplier<Stream<String>> stream =
() -> Stream.of(words).map(String::toLowerCase);
ImmutableList<String> list =
Lists.immutable.with(words).collect(String::toLowerCase);

The Stream and ImmutableList strings are converted to lowercase using map and collect, respectively. We will do this manually in the for loop example.

For Loop

MutableCharIntMap sumByForLoop = new CharIntHashMap();
for (int i = 0; i < words.length; i++)
{
    String word = words[i].toLowerCase();
    sumByForLoop.addToValue(word.charAt(0), word.length());
}
Assert.assertEquals(35, sumByForLoop.values().sum());
Assert.assertEquals(6, sumByForLoop.get('t'));

forEach (Stream) / each (ImmutableList)

MutableCharIntMap sumByForEach = new CharIntHashMap();
stream.get().forEach(
e -> sumByForEach.addToValue(e.charAt(0), e.length()));
Assert.assertEquals(35, sumByForEach.values().sum());
Assert.assertEquals(6, sumByForEach.get('t'));

MutableCharIntMap sumByEach = new CharIntHashMap();
list.each(
e -> sumByEach.addToValue(e.charAt(0), e.length()));
Assert.assertEquals(35, sumByEach.values().sum());
Assert.assertEquals(6, sumByEach.get('t'));

injectInto (ImmutableList)

MutableCharIntMap sumByInject =
    list.injectInto(
        new CharIntHashMap(),
        (map, each) -> {
            map.addToValue(each.charAt(0), each.length());
            return map;
        });
Assert.assertEquals(35, sumByInject.values().sum());
Assert.assertEquals(6, sumByInject.get('t'));

reduce (Stream)

MutableCharIntMap sumByReduce = stream.get()
    .reduce(
        new CharIntHashMap(),
        (map, e) -> {
            map.addToValue(e.charAt(0), e.length());
            return map;
        },
        (map1, map2) -> {
            map1.putAll(map2);
            return map1;
        });

Assert.assertEquals(35, sumByReduce.values().sum());
Assert.assertEquals(6, sumByReduce.get('t'));

aggregateBy (ImmutableList)

ImmutableMap<Character, Long> aggregateBy = list.aggregateBy(
    word -> word.charAt(0),
    () -> new Long(0),
    (sum, each) -> sum + each.length());
Assert.assertEquals(35,
    aggregateBy.valuesView().sumOfLong(Long::longValue));
Assert.assertEquals(6, aggregateBy.get('t').longValue());

aggregateInPlaceBy (ImmutableList)

ImmutableMap<Character, LongAdder> aggregateInPlaceBy =
    list.aggregateInPlaceBy(
        word -> word.charAt(0),
        LongAdder::new,
        (adder, each) -> adder.add(each.length()));
Assert.assertEquals(35,
    aggregateInPlaceBy.valuesView()
        .sumOfLong(LongAdder::longValue));
Assert.assertEquals(6, aggregateInPlaceBy.get('t').longValue());

collect (Stream)

MutableCharIntMap sumByCollect = stream.get().collect(
    CharIntHashMap::new,
    (map, e) -> map.addToValue(e.charAt(0), e.length()),
    CharIntHashMap::putAll);

Assert.assertEquals(35, sumByCollect.values().sum());
Assert.assertEquals(6, sumByCollect.get('t'));

collect (Stream) + groupingBy (Collectors) + summingInt (Collectors)

Map<Character, Integer> sumByCollectSummingInt =
    stream.get()
        .collect(Collectors.groupingBy(
            word -> word.charAt(0),
            Collectors.summingInt(String::length)));
Assert.assertEquals(
    35,
    sumByCollectSummingInt
        .values().stream().mapToInt(Integer::intValue).sum());
Assert.assertEquals(
    Integer.valueOf(6), sumByCollectSummingInt.get('t'));

collect (Stream) + sumByInt (Collectors2)

ObjectLongMap<Character> sumByCollectors2 =
    stream.get().collect(
        Collectors2.sumByInt(
            word -> word.charAt(0), String::length));
Assert.assertEquals(35, sumByCollectors2.values().sum());
Assert.assertEquals(6, sumByCollectors2.get('t'));

reduceInPlace (ImmutableList) + sumByInt (Collectors2)

ObjectLongMap<Character> reduceInPlaceCollectors2 =
    list.reduceInPlace(
        Collectors2.sumByInt(
            e -> e.charAt(0), String::length));
Assert.assertEquals(35, reduceInPlaceCollectors2.values().sum());
Assert.assertEquals(6, reduceInPlaceCollectors2.get('t'));

sumByInt (ImmutableList)

ObjectLongMap<Character> sumByInt =
    list.sumByInt(e -> e.charAt(0), String::length);
Assert.assertEquals(35, sumByInt.values().sum());
Assert.assertEquals(6, sumByInt.get('t'));

The simplest solution here is sumByInt.

Conclusion

We’ve covered a lot of different approaches you can use to sum or summarize values using Java and Eclipse Collections. In the case of summing, using a method with sum in the name will probably give you the simplest solution. You can solve almost any problem using methods like injectInto and reduceInPlace (Eclipse Collections) or collect (Java Stream). Methods like reduce are less useful when your result needs to be different from your input. Methods like aggregateBy and aggregateInPlaceBy give you a more specific result than collect because they always return a Map. Using Collectors2 can be helpful if you want to iterate over a Stream and get a primitive map result easily using collect.

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


by Donald Raab at February 02, 2018 02:46 AM

Eclipse — Still the Best IDE!

by Brian Fernandes at January 31, 2018 07:07 AM

As a developer, you’ve probably grown to love using an IDE — but in the modern coding universe of choices, how do you choose the best IDE? While some might try to convince you that the “cool kids” are using IntelliJ, whether you’re looking for your first IDE, or being pressured into switching, here are some reasons […]

The post Eclipse — Still the Best IDE! appeared first on Genuitec.


by Brian Fernandes at January 31, 2018 07:07 AM

Blockchain Smart Contracts are the new Serverless!

by David Bosschaert (noreply@blogger.com) at January 30, 2018 09:20 PM

Smart Contracts (thanks to Michael Bacina)
Over the recent past I've been experimenting with Smart Contracts for blockchain implementations. Smart Contracts are essentially programs running on the blockchain infrastructure. For example, Ethereum supports Smart Contracts written in Solidity. EOS is another example of a blockchain that will support smart contracts. I've been looking at EOS and its smart contracts in more detail; here the smart contracts can be written in C/C++, so you don't have to learn a new language for it.


A CryptoKitty
So what can you do with a smart contract? Smart contracts are designed to perform some sort of computation and store the result (immutably) on the blockchain. The computation is a custom program that applies to your domain. For example, if you rent out holiday accommodation, your website might compute the rental contract for a given period of time, including the price, optional extras, insurance and so on, and store the result on the blockchain. In most cases smart contracts can also handle the payment: let's say the rental home costs 500 Euro per week, then the equivalent in Ether (ETH), or whatever the relevant blockchain's cryptocurrency is, can be transferred to the owner as part of the smart contract execution.
Or, more creatively, as has been done on the Ethereum network, your contract could compute a unique CryptoKitty for you, a cute-looking creature created just for you to look at, and store the result on the blockchain.

You can even take this a little bit further. As the CryptoKitty shows, a smart contract does not need to have anything to do with transferring money from A to B or writing some sort of financial contract. In theory you could use a smart contract for anything you might be able to use asynchronous computing power for.

After playing briefly with Solidity, the smart contract language for Ethereum, I moved on to play a bit more with EOS smart contracts. Why EOS? EOS as a blockchain is still in its early phase and under heavy development, although they do have a test network up and running at this stage. I find EOS interesting because of a couple of notable aspects:
  • First of all, it's quite easy to run a test EOS node on your own machine, which allows you as a developer to play with it and understand it in a sandbox-type environment.
  • EOS aims to provide much higher transaction rates than the current major blockchains can. It promises up to 50,000 transactions per second, which is big enough to handle payment volumes similar to those of major credit card companies like Visa and Mastercard.
  • Smart contracts on EOS can be written in C/C++, which is really nice, as you don't need to learn a new programming language for it.
So let's take a look at how I got my example EOS contract deployed to my own test EOS node.

Build the EOS code

I tried this on Ubuntu 16.04. Compile the EOS code base:

First, clone the EOS code:
  ~$ git clone https://github.com/eosio/eos ~/eos
  ~$ cd eos

Then build the whole lot:
  ~/eos$ ./build.sh ubuntu full
This takes a while but once it's finished you should be able to run your EOS node:
  ~/eos/build/programs/eosiod$ ./eosiod

At first it exits with an error: you need to set up data-dir/config.ini as described in the EOS docs: https://github.com/EOSIO/eos

Once the config is in place, start it again and your EOS node should be happily up and running, which is really neat: you've got an EOS blockchain node running for development purposes on your local machine!

~/eos/build/programs/eosiod $ ./eosiod 


Before you can use your EOS node you need to create a wallet and an account. Since our smart contract will be computing the Fibonacci sequence, I'm going to call the account fibonacci. The following commands do that for you. They use a demo account inita that is created in the config.ini file when it's set up as described above.

Here we use the eosioc program which is a client to the EOS network:
~/eos/build/programs/eosioc $ ./eosioc wallet create
~/eos/build/programs/eosioc $ ./eosioc wallet open

Import the inita demo key:
~/eos/build/programs/eosioc $ ./eosioc wallet import 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3

Create two key pairs for the fibonacci account:
~/eos/build/programs/eosioc $ ./eosioc create key
Private key:###
Public key: ###
~/eos/build/programs/eosioc $ ./eosioc create key
Private key:###
Public key: ###

Create the fibonacci account, using the two public keys generated above:
~/eos/build/programs/eosioc $ ./eosioc create account inita fibonacci {public key1} {public key2}

Create the smart contract

I created a little test project called fibonacci, which computes the Fibonacci sequence up to a certain number of iterations in the EOS smart contract and stores the result in the EOS database.

The code can be found in github here: https://github.com/coderthoughts/fibonacci

It consists of two components. The first is the external interface, in the fibonacci.abi file, which defines how the application communicates with the outside world; the actual communication typically happens in JSON:

{
  "structs": [{
      "name": "compute",
      "base": "",
      "fields": {
        "iterations": "uint64"
      }
  },{
    "name": "result",
    "base": "",
    "fields": {
      "id": "name",
      "value": "uint64"
    }
  }],
  "actions": [{
    "action_name": "compute",
    "type": "compute"
  }],
  "tables": [{
    "table_name": "results",
    "type": "result",
    "index_type": "i64",
    "key_names": ["id"],
    "key_types": ["name"]
  }]
}

The second component is the fibonacci.cpp file, which contains the C++ source code of the contract. The main bit of the contract is the apply() method, which gets invoked when the EOS smart contract receives a message (the compute struct and Results table it uses are defined in the full source in the repository linked above):
    uint64_t fibonacci(uint64_t iterations) {
        uint64_t first = 0;
        uint64_t second = 1;

        if (iterations == 0)
            return 0;

        eosio::print("1 ");
        for (uint64_t i=1; i < iterations; i++) {
            uint64_t res = first + second;
            first = second;
            second = res;
            eosio::print(res, " ");
        }
        return second;
    }

    /// The apply method implements the dispatch of events to this contract
    void apply(uint64_t code, uint64_t action) {
        if (action == N(compute)) {
            auto message = eosio::current_message<compute>();
            eosio::print("Calling fibonacci\n");
            uint64_t num = fibonacci(message.iterations);

            result res(eosio::name(code), num);
            Results::store(res, res.id);
            eosio::print("Stored result in database\n");
        }
    }

There are also EOS-specific libraries available; the documentation is here: https://eosio.github.io/eos/group__contractdev.html

Deploy your own smart contract

An interesting part of the EOS smart contract development lifecycle is that these contracts don't get compiled into regular machine language, as C++ compilers normally produce, but into a WebAssembly .wast file. This is a kind of assembly language that is platform independent, and it is what EOS uses at runtime.

Once deployed, you can execute your contract by sending a message to it. The contract executes and can write its output to a database location for later retrieval by a client.

The easiest way to compile the fibonacci source is to put the files alongside the other example smart contracts in the EOS codebase, in the contracts directory. Then everything is on the path where the compiler expects it.

Compile the project:
~/eos/contracts$ ../build/tools/eoscpp -o fibonacci/fibonacci.wast fibonacci/fibonacci.cpp
Upload the fibonacci smart contract to the EOS node:
~/eos/contracts$ eosioc set contract fibonacci fibonacci/fibonacci.wast fibonacci/fibonacci.abi
Now we can start executing it. Let's run the fibonacci compute action for 8 iterations and store the result:
~/eos/contracts$ eosioc push message fibonacci compute '{"iterations":8}' -S fibonacci

On the EOS daemon console you can see some debug output from the smart contract:

> Calling fibonacci
> 1 1 2 3 5 8 13 21 Stored result

However, a real user will obviously never see this. So to obtain the result of the computation, we look it up in the EOS database:

~/eos/build/programs/eosioc$ ./eosioc get table fibonacci fibonacci results


{
  "rows": [{
      "id": "fibonacci",
      "value": 21
    }
  ],
  "more": false
}


So the result is 21, the eighth Fibonacci number, matching the debug output above. We've executed our smart contract and obtained the result on the EOS blockchain!

Conclusion

I'm pretty excited by the possibilities that smart contracts, once matured, can provide. This becomes similar to what today is labeled 'serverless' computing: things that are currently possible through large providers such as AWS Lambda and Microsoft Azure Functions will also be provided via blockchain networks. One difference is that the computation is not done by a single cloud entity, but rather by a collection of nodes that are run by individuals who have mining machines for that cryptocurrency. In my eyes it's still early days and certain things can be improved, e.g. the APIs usable from within the smart contracts are still fairly limited, but that will probably get better over time. The fun thing is: it's pretty easy to get started experimenting and writing smart contracts, even from a simple Linux box, so you can learn and develop your smart contracts while the blockchain teams are working on maturing the infrastructure.

by David Bosschaert (noreply@blogger.com) at January 30, 2018 09:20 PM

Winding Down an Open Source Project

by Chris Aniszczyk at January 30, 2018 02:29 PM

For 2018, I've made a commitment to myself to simply WRITE AND SHARE more. I used to be really good at cranking out posts, but I've been so heads-down in running and building out open source foundations that I've neglected sharing what I've learned over the years. I recently wrote a post about the process of starting an open source program for your company.

Also, yesterday we posted a new open source program guide in the TODO Group about what to do when you unfortunately have to wind down an open source project.

While some open source foundations have well-defined project lifecycles with notions of an “attic” or “archive”, many companies that open source projects generally do not.

Anyways, I hope these articles are useful to you and you learn something new 🙂


by Chris Aniszczyk at January 30, 2018 02:29 PM