Feed aggregator

Interview with IOTA Co-Founder Dominik Schiener [blockcentric #4]

codecentric Blog - Thu, 11-Jan-18 01:29

Dominik Schiener is one of the co-founders of IOTA – a German non-profit organization that implements a completely new Distributed Ledger Technology competing with blockchain. I talked to Dominik about his early days in the world of Distributed Ledger Technologies, Tangle technology, the IOTA ecosystem, and the future.

Jannik Hüls: Thank you for your time. You are traveling a lot. Where am I catching you right now?

Dominik Schiener: I’m in the headquarters of Deutsche Telekom in Bonn. We are set up internationally, which involves a lot of traveling. We have offices in Berlin, Oslo, Chicago, and are in the process of opening one in Singapore.

Let’s talk about your history: The world of Distributed Ledger Technologies (DLT) is not that old yet. When was your first encounter with that topic – how did you hear about blockchain?

My first point of contact was in 2011, when I came across the Bitcoin whitepaper and tried to understand it. Back then, I was 14 or 15 years old, my English was not that good yet, and I didn’t fully grasp the concept. But I was quickly fascinated – I’ve always been the curious type – and I have worked on it full time since 2012. I started by mining. Everyone kept telling me this was how you made big money, and of course that made me really ambitious from the outset. I wanted to become an entrepreneur and had started my own projects at the age of 14. I mined using AWS credits and earned quite a lot of money, which I then used to implement my first own projects.

So basically mining was your main field of application. Or did you actually implement something back then, did you see other options besides cryptocurrencies?

The main focus back then was on cryptocurrency. It was not until 2013 that I really understood the added value: applications that are interesting not just from a financial point of view, but where blockchain itself creates added value.

Before we talk about IOTA: currently, the topic of DLT covers a wide range of things. To you, what defines a use case that lends itself to applying DLT?

If we look at time-to-implement or time-to-market, a blockchain solution for supply chain management is the one most likely to be integrated within the next two to five years. In supply chains, it creates the greatest added value. Thanks to the underlying transparency, we can identify inefficient processes and improve things like insurance.

“A blockchain solution for supply chain management is the one most likely to be integrated within the next two to five years.”

Tangle is the foundation of IOTA. How did the idea for the Tangle come about?

Since we had a startup developing new hardware for IoT – Fog Computing, to be precise – the basic idea was to have the machines pay each other: machine payments to buy and sell resources. As blockchain experts, we had the technical know-how to realize that none of the existing blockchain architectures could cater to the demands of the IoT space. Many had fundamental technical flaws and were too slow or too costly. So we did some research on Directed Acyclic Graphs to solve the existing problems. We did the mathematical proofs, and that’s how we developed IOTA.

Can you briefly outline the differences between IOTA and blockchain?

There are two major differences: The architecture is no longer a chain, but a Directed Acyclic Graph. And the way consensus is reached is different: [with blockchain,] miners use the Competitive Proof of Work or another consensus algorithm that validates transactions in cycles. In IOTA there are no cycles like in blockchain. When transactions are executed in IOTA, two older transactions must be confirmed. There are no more miners, but whenever someone in the network performs a transaction, they also contribute to the consensus of the network, which is one of the main benefits.

Another advantage is scalability. IOTA scales horizontally: the more network participants there are, the faster transactions are confirmed. Plus, there are no transaction fees, because there are no miners who would need to be compensated; everyone participates in the validation process, so no expenses have to be paid. Other benefits include partition tolerance and the resistance of our hashes to attacks by quantum computers.

You mentioned the IoT as possible use case – is there a difference between Full Nodes and Small Nodes? Does every IoT device need to store the entire tangle? And how big is it when it’s stored?

Right now there are only Full Nodes, but we are also developing Light Nodes and Small Nodes. Small Nodes are clusters of devices that are combined. Such a cluster will use a Full Node that meets, for example, the more demanding requirements. However, the answers to these questions depend very much on the architecture of future systems. What will they really look like? How will the IoT devices interact?

That’s why the concept of Fog Computing is so relevant to us. The interesting thing about IOTA is that you can outsource all processes involved in running a transaction to different devices. Every IoT device can have its own signature, which means every IoT device can have a wallet. Even my coffee machine. The second step is Tip Selection: two transactions that need to be confirmed have to be found, which requires a Full Node or a Light Node. This Tip Selection algorithm can be executed by the node, and thus the coffee machine can also make a transaction by interacting with the Full Node. We imagine this as a SmartHub within our own four walls. The final process is the proof of work, which is a bit more computation-intensive. That’s why we are working on special hardware – especially for Fog Computing.

You said that the big advantage of IOTA is that there are no transaction costs. What’s the incentive for the miner then – why should I run a Full Node?

That’s one of the biggest misconceptions: There is no incentive to run a Full Node in Bitcoin and Ethereum. A Full Node is not necessarily a miner. There are about 5,000 to 6,000 Full Nodes in Bitcoin, but only a small fraction of these Full Nodes are miners. The advantage of IOTA is that the effort related to the validation process is much lower compared to Ethereum or Bitcoin. This means it’s better to run an IOTA Full Node than a Bitcoin or Ethereum Full Node. A node runs to be able to participate in the network.

“The advantage of IOTA is that the effort related to the validation process is much lower compared to Ethereum or Bitcoin. This means it’s better to run an IOTA Full Node than a Bitcoin or Ethereum Full Node.”

In terms of business organization, IOTA deliberately sets itself apart from other DLT startups. You are a German non-profit organization. What’s the rationale behind this step?

It was of course a strategic move. We realized that the potential of the technology is simply too big to be limited by patents – that’s a conflict of interest. This is why this foundation idea makes so much sense, because the base layer, i.e. IOTA, should be free to use and open source. It should be used as widely as possible. In our opinion, the foundation is the best way to promote adoption. That’s why non-profit makes sense. Our goal is to bring together big companies, startups, and governments to build an ecosystem and invest. Since the foundation is agnostic and independent, other companies are much more interested in working with us than with, for example, IBM.

Initially you also did an ICO [Initial Coin Offering], but you sold all IOTA tokens. How is the foundation funded?

We sold 100 percent of the tokens and then said to the community: if you want a foundation, you’ll need to donate money. As a result, the community got together and donated five percent of the tokens, which is currently worth about 200 million Euros. This is how the foundation is funded.

In other words, the value of the currency IOTA is also fundamental to the financial resources of the foundation.

Exactly. Now we bring companies on board, who then donate to the foundation. And we work with governments.

For example, the Data Marketplace is currently implementing a use case based on IOTA. There, data can be paid for with micropayments.

Interesting. How do you explain the volatility of the market? How can I sell this use case when something I buy today costs, let’s say, 1,000 IOTA – perhaps 5 Euros – and tomorrow it might cost 50 Euros?

This is one of the biggest problems in IOTA and one of the biggest problems of cryptocurrencies in general. Volatility is in direct conflict with usability. One could think about an additional layer in which the use of the cryptocurrency is abstracted and that allows for payment in Euros, for example. However, our vision for IOTA is that the tokens are actually used. With other cryptocurrencies, the token is useless. We do not want a network in which each institution maintains its own token. This leads to a way too fragmented ecosystem. Nevertheless, the usability of the token is problematic and remains an unresolved issue.

Currently there is a lot of news about you, after you announced Masked Authentication Messaging, Payment Channels, the Data Marketplace, and many other things. Can you roughly tell us what direction you are headed in? I assume you have more things in the pipeline.

We focus on the announcements, especially the partnerships we start with big companies. There, we are able to integrate IOTA into large-scale existing ecosystems. This is how we really get a scaling effect – where we can deploy thousands of nodes. But I can’t say more about this right now.

At codecentric, we are developers. Are there any SDKs [Software Development Kits] for IOTA?

As a matter of fact, we are working on this, especially for the modules that we develop. In terms of IOTA development, we are currently at the point where we have the IRI client to join the network and execute transactions. Over the past few months, we’ve been working on a completely new system architecture which implements microservices and is enterprise-oriented. You see, right now we just have one monolithic client, just like Ethereum or Bitcoin.

In addition, the IRI client is becoming much more modular. So as a company you can decide what communication protocol you want to use, and what database, be it SAP Hana or Redis. This really is the future of IOTA and it will be one of the best releases ever. Hopefully it’ll come in February, right now it’s still being developed and thoroughly tested.

There is a sandbox and a test net for developers. Are these the best ways to validate a proof of concept, or what’s the easiest way to go?

We are presently improving the entire sandbox environment. Our goal is that developers only need to send out an API call, and we then take care of the deployments. We are currently cooperating with some companies because they are so interested in IOTA that they also help the network by managing the deployments.

Looking at the Data Marketplace as use case for application developers: what is really stored in the Tangle? Or is it just about paying for the sensor data? Can you roughly outline the architecture?

Of course, the sensor data are represented in the Tangle. In IOTA, a transaction can contain about 1.2 kilobytes of data. In other words, if I have a sensor just for temperature logging or a small dataset, I can use IOTA for data transfer. This is how IOTA also ensures the integrity of the data. If someone wants to buy the data of a sensor, this is billed directly by micropayment. The data is then read not from the sender, but from the Tangle.

So basically the sensor pushes data into the Tangle, and I as a consumer can use the data from the Tangle to read sensor data. Pretty cool.

Finally, to reiterate: you talked about the IRI earlier. I’ve already seen that’s open source. What else is?

Everything. The Data Marketplace will also be open source. We also want to make other use cases, such as SatoshiPay, available to the community. We are not done with that, though, we are still working on it.

Thank you for your time. Cool stuff you’re working on, and fun to follow. Have a great time in Bonn!


Thank you, Jannik!


The interview was conducted in German and then translated into English.

Our article series “blockcentric” discusses blockchain-related technology, projects, organizational and business concerns. It contains knowledge and findings from our 20% time work, as well as news from the area.

We are looking forward to your feedback on the column and exciting discussions about your use cases.

Previously published blockcentric posts

The post Interview with IOTA Co-Founder Dominik Schiener [blockcentric #4] appeared first on codecentric AG Blog.

Categories: Agile, Java, TDD & BDD

Spring, Reactor, and ElasticSearch: From Callbacks to Reactive Streams

Javalobby Syndicated Feed - Thu, 11-Jan-18 01:01

Spring 5 (and Boot 2, when it arrives in a couple of weeks) is a revolution. Not the "annotations over XML" or "Java classes over annotations" type of revolution. It's truly a revolutionary framework that enables writing a brand new class of applications.

In recent years, I became a little bit intimidated by this framework. Spring Cloud is a framework that simplifies the usage of Spring Boot, which is a framework that simplifies the usage of Spring, which is a framework that simplifies enterprise development.

Categories: Java

A Practical Guide to Java 9 Migration

Javalobby Syndicated Feed - Wed, 10-Jan-18 22:01

This article is a practical example of migrating a Java application from Java 8 to Java 9. It walks through the steps, problems, and solutions of migrating the application to Java 9 and to modules. For clarification, this article is not supposed to cover all aspects of Java 9 adoption, but rather focuses on this example and the problems related to it.

The example used in this article is a simple implementation of the CQRS design pattern that I have extracted from the repository java-design-pattern.

Categories: Java

Chain of Responsibility Design Pattern in Java: 2 Implementations

Javalobby Syndicated Feed - Wed, 10-Jan-18 12:01

The Chain of Responsibility (COR) design pattern is used when more than one object can handle a request, each performing its corresponding responsibility to complete the whole task.

The pattern can be used to achieve loose coupling in software design, where a request is passed through a chain of request handlers for processing. Based on some criteria, each handler object either handles the request or passes it on to the next handler. Following is a representation of the COR pattern:
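That flow can be sketched in plain Java; the handler names and the request format below are made up for this example, not taken from the article:

```java
// Each handler either processes the request or passes it to its successor.
abstract class Handler {
    private Handler next;

    Handler setNext(Handler next) {
        this.next = next;
        return next;
    }

    String handle(String request) {
        if (canHandle(request)) {
            return process(request);
        }
        return next != null ? next.handle(request) : "unhandled";
    }

    abstract boolean canHandle(String request);
    abstract String process(String request);
}

class AuthHandler extends Handler {
    boolean canHandle(String request) { return request.startsWith("auth:"); }
    String process(String request) { return "authenticated " + request.substring(5); }
}

class LogHandler extends Handler {
    boolean canHandle(String request) { return request.startsWith("log:"); }
    String process(String request) { return "logged " + request.substring(4); }
}

public class ChainDemo {
    public static void main(String[] args) {
        Handler chain = new AuthHandler();
        chain.setNext(new LogHandler());
        System.out.println(chain.handle("log:event"));   // logged event
        System.out.println(chain.handle("auth:alice"));  // authenticated alice
        System.out.println(chain.handle("other"));       // unhandled
    }
}
```

Each handler only knows its successor, so handlers can be added, removed, or reordered without touching the others.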

Categories: Java

Spring Boot - HATEOAS for RESTful Services

Javalobby Syndicated Feed - Wed, 10-Jan-18 08:01

This guide will help you implement HATEOAS for a REST API/Service with Spring Boot.

You Will Learn

  • What is HATEOAS?
  • Why do you need HATEOAS?
  • How do you implement HATEOAS with Spring Boot?
  • What are the HATEOAS best practices?

10 Step Reference Courses

Project Code Structure

The following screenshot shows the structure of the project we will create.
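Setting Spring Boot aside for a moment, the core HATEOAS idea is that a response embeds hypermedia links alongside the data, telling the client what it can do next. A minimal plain-Java sketch (the resource, field, and link names are illustrative; a real Spring Boot service would use the Spring HATEOAS library rather than hand-built JSON):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A bare-bones "resource with links", as returned by a HATEOAS-style API.
class StudentResource {
    final int id;
    final String name;
    final Map<String, String> links = new LinkedHashMap<>();

    StudentResource(int id, String name) {
        this.id = id;
        this.name = name;
        // Alongside the data, the response advertises related actions.
        links.put("self", "/students/" + id);
        links.put("courses", "/students/" + id + "/courses");
    }

    String toJson() {
        return "{\"id\":" + id + ",\"name\":\"" + name + "\"," +
               "\"_links\":{\"self\":\"" + links.get("self") + "\"," +
               "\"courses\":\"" + links.get("courses") + "\"}}";
    }
}

public class HateoasSketch {
    public static void main(String[] args) {
        System.out.println(new StudentResource(1, "Jane").toJson());
    }
}
```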

Categories: Java

Setting Up JRebel for WebSphere AS in a Docker Environment

Javalobby Syndicated Feed - Wed, 10-Jan-18 03:02

Getting any Java application server up and running in the development environment is usually a fairly simple task. You can just download the ZIP archive and start the container either from the command line or via IDE integration. Configuring a JRebel agent for the server is also quite straightforward. However, there are some exceptions to that. For instance, if you'd like to try JRebel on WebSphere AS and you are using MacOS, then you will have to take another route.

WebSphere Application Server is available for Linux and Windows platforms, but not for MacOS. The good news is that there is a WebSphere Docker image that you can use for development.

Categories: Java

This Week in Spring: Bootstrapping, Spring Cloud, and Microservices

Javalobby Syndicated Feed - Tue, 09-Jan-18 23:02

Hi, Spring fans! Welcome to another installment of This Week in Spring! This week, I’m off to Germany, where I’ll be speaking at the Java User Group in Münster on Wednesday night. Then, it’s off to Solingen for a Cloud Native day on the 12th (this Friday) where I’ll be presenting all afternoon — register now! And, if you’re closer to the Pacific Ocean than the Atlantic Ocean, join me next Monday in Hawaii and we’ll talk about all things Spring at the very promising LavaOne conference.

As usual, we’ve got a lot to cover, so let’s get to it.

Categories: Java

Multidimensional Arrays vs. Method References

Javalobby Syndicated Feed - Tue, 09-Jan-18 14:01

Playing with multidimensional arrays and method references can be tricky sometimes.

Referencing an Array’s Constructor

Let’s say we want to create a function that takes an integer value and returns an array with a size initialized to that value.
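A short sketch of what such a function looks like with an array constructor reference, including the multidimensional pitfall the title hints at (variable names are ours):

```java
import java.util.function.IntFunction;

public class ArrayRefs {
    public static void main(String[] args) {
        // A constructor reference to a one-dimensional array:
        IntFunction<int[]> makeRow = int[]::new;
        int[] row = makeRow.apply(5);
        System.out.println(row.length); // 5

        // The same kind of reference for a two-dimensional array only
        // sizes the FIRST dimension; the inner arrays stay null.
        IntFunction<int[][]> makeGrid = int[][]::new;
        int[][] grid = makeGrid.apply(3);
        System.out.println(grid.length);      // 3
        System.out.println(grid[0] == null);  // true
    }
}
```

Note that `int[][]::new` leaves the inner arrays uninitialized; they must still be created one by one.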

Categories: Java

Create Your Own Constraints With Bean Validation 2.0

Javalobby Syndicated Feed - Tue, 09-Jan-18 10:01

Data integrity is an important part of application logic. Bean Validation is an API that provides a facility for validating objects, objects members, methods, and constructors. This API allows developers to constrain once, validate everywhere. Bean Validation 2.0 is part of Java EE 8, but it can be used with plain Java SE.

Bean Validation 2.0 brings several built-in constraints, perhaps the most commonly used ones in applications of all sizes; some of them are @NotNull, @Size, @Max, @Min, and @Email.

Categories: Java

Looking beyond accuracy to improve trust in machine learning

codecentric Blog - Tue, 09-Jan-18 04:00

Traditional machine learning workflows focus heavily on model training and optimization; the best model is usually chosen via performance measures like accuracy or error and we tend to assume that a model is good enough for deployment if it passes certain thresholds of these performance criteria. Why a model makes the predictions it makes, however, is generally neglected. But being able to understand and interpret such models can be immensely important for improving model quality, increasing trust and transparency and for reducing bias. Because complex machine learning models are essentially black boxes and too complicated to understand, we need to use approximations to get a better sense of how they work. One such approach is LIME, which stands for Local Interpretable Model-agnostic Explanations and is a tool that helps understand and explain the decisions made by complex machine learning models.

Accuracy and Error in Machine Learning

A general Data Science workflow in machine learning consists of the following steps: gather data, clean and prepare data, train models and choose the best model based on validation and test errors or other performance criteria. Usually we – particularly we Data Scientists or Statisticians who live for numbers, like small errors and high accuracy – tend to stop at this point. Let’s say we found a model that predicted 99% of our test cases correctly. In and of itself, that is a very good performance and we tend to happily present this model to colleagues, team leaders, decision makers or whoever else might be interested in our great model. And finally, we deploy the model into production. We assume that our model is trustworthy, because we have seen it perform well, but we don’t know why it performed well.

In machine learning we generally see a trade-off between accuracy and model complexity: the more complex a model is, the more difficult it will be to explain. A simple linear model is easy to explain because it only considers linear relationships between variables and predictor. But since it only considers linearity, it won’t be able to model more complex relationships and the prediction accuracy on test data will likely be lower. Deep Neural Nets are on the other end of the spectrum: since they are able to deduce multiple levels of abstraction, they are able to model extremely complex relationships and thus achieve very high accuracy. But their complexity also essentially makes them black boxes. We are not able to grasp the intricate relationships between all features that lead to the predictions made by the model so we have to use performance criteria, like accuracy and error, as a proxy for how trustworthy we believe the model is.

Trying to understand the decisions made by our seemingly perfect model usually isn’t part of the machine learning workflow.
So why would we want to invest the additional time and effort to understand the model if it’s not technically necessary?

One way to improve understanding and explain complex machine learning models is to use so-called explainer functions. There are several reasons why, in my opinion, model understanding and explanation should become part of the machine learning workflow with every classification problem:

  • model improvement
  • trust and transparency
  • identifying and preventing bias

Model Improvement

Understanding the relationship between features, classes, and predictions – and thereby why a machine learning model made the decisions it made and which features were most important in those decisions – can help us decide whether the model makes intuitive sense.

Let’s consider the following poignant example from the literature: we have a deep neural net that learned to distinguish images of wolves from huskies [1]; it was trained on a number of images and tested on an independent set of images. 90 % of the test images were predicted correctly. We could be happy with that! But what we don’t know without running an explainer function is that the model based its decisions primarily on the background: wolf images usually had a snowy background, while husky images rarely did. So we unwittingly trained a snow detector… Just by looking at performance measures like accuracy, we would not have been able to catch that!

Having this additional knowledge about how and based on which features model predictions were made, we can intuitively judge whether our model is picking up on meaningful patterns and if it will be able to generalize on new instances.

Trust and Transparency

Understanding our machine learning models is also necessary to improve trust and provide transparency regarding their predictions and decisions. This is especially relevant given the new General Data Protection Regulation (GDPR) that will go into effect in May of 2018. Even though it is still hotly discussed whether its Article 22 includes a “right to explanation” of algorithmically derived decisions [2], it probably won’t be acceptable for much longer to have black box models make decisions that directly affect people’s lives and livelihoods, like loans [3] or prison sentences [4].

Another area where trust is particularly critical is medicine; here, decisions can have life-or-death consequences for patients. Machine learning models have been impressively accurate at distinguishing malignant from benign tumors of different types. But as the basis for deciding for or against medical intervention, we still require a professional’s explanation of the diagnosis. Providing the explanation for why a machine learning model classified a certain patient’s tumor as benign or malignant would go a long way to help doctors trust and use machine learning models that support them in their work.

Even in everyday business, where we are not dealing with quite so dire consequences, a machine learning model can have very serious repercussions if it doesn’t perform as expected. A better understanding of machine learning models can save a lot of time and prevent lost revenue in the long run: if a model doesn’t make sensible decisions, we can catch that before it goes into deployment and wreaks havoc there.

Identifying and Preventing Bias

Fairness and bias in machine learning models is a widely discussed topic [5, 6]. Biased models often result from biased ground truths: if the data we use to train our models contains even subtle biases, our models will learn them and thus propagate a self-fulfilling prophecy! One such (in)famous example is the machine learning model that is used to suggest sentence lengths for prisoners, which obviously reflects the inherent bias for racial inequality in the justice system [4]. Other examples are models used for recruiting, which often show the biases our society still harbors in terms of gender-associations with specific jobs, like male software engineers and female nurses [5].

Machine learning models are a powerful tool in different areas of our life and they will become ever more prevalent. Therefore, it is our responsibility as Data Scientists and decision makers to understand how the models we develop and deploy make their decisions so that we can proactively work on preventing bias from being reinforced and removing it!


LIME

LIME stands for Local Interpretable Model-agnostic Explanations and is a tool that helps understand and explain the decisions made by complex machine learning models. It was developed by Marco Ribeiro, Sameer Singh, and Carlos Guestrin in 2016 [1] and can be used to explain any classification model, whether it is a Random Forest, Gradient Boosting Tree, Neural Net, etc. It works on different types of input data, like tabular data (data frames), images, or text.

At its core, LIME follows three concepts:

  • explanations are not given globally for the entire machine learning model, but locally and for every instance separately
  • explanations are given on original input features, even though the machine learning model might work on abstractions
  • explanations are given for the most important features by locally fitting a simple model to the prediction

This allows us to get an approximate understanding of which features contributed most strongly to a single instance’s classification and which features contradicted it and how they influenced the prediction.

The following example showcases how LIME can be used:
I built a Random Forest model on a data set about Chronic Kidney Disease [7]. The model was trained to predict whether a patient had chronic kidney disease (ckd) or not (notckd). The model achieved 99 % accuracy on validation data and 95 % on test data. Technically, we could stop here and declare victory. But we want to understand why certain patients were diagnosed with chronic kidney disease and why others weren’t. A medical professional would then be able to assess whether what the model learned makes intuitive sense and can be trusted. To achieve this, we can apply LIME.

As described above, LIME works on each instance individually and separately. So first, we take one instance (in this case the data from one patient) and permute it; i.e. the data is replicated with slight modifications. This generates a new data set consisting of similar instances, based on one original instance. For every instance in this permuted data set we also calculate how similar it is to the original instance, i.e. how strong the modifications made during permutation are. Basically, any type of statistical distance and similarity metric can be used in this step, e.g. Euclidean distance converted to similarity with an exponential kernel of specified width.
Next, our complex machine learning model, which was trained before, will make predictions on every permuted instance. Because of the small differences in the permuted data set, we can keep track of how these changes affect the predictions that are made.
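The permutation and similarity steps might be sketched as follows; the feature values, noise level, and kernel width are illustrative choices for this sketch, not LIME’s actual defaults:

```java
import java.util.Random;

public class LimePermute {
    // Convert Euclidean distance to similarity with an exponential kernel.
    static double similarity(double[] a, double[] b, double kernelWidth) {
        double sq = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sq += d * d;
        }
        return Math.exp(-sq / (kernelWidth * kernelWidth));
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double[] original = {0.42, 1.30, 0.07}; // one instance's feature values

        // Replicate the instance with slight modifications...
        double[][] permuted = new double[5][original.length];
        for (double[] p : permuted) {
            for (int i = 0; i < original.length; i++) {
                p[i] = original[i] + rng.nextGaussian() * 0.1;
            }
        }

        // ...and record how similar each copy is to the original.
        for (double[] p : permuted) {
            System.out.printf("similarity = %.3f%n", similarity(original, p, 0.75));
        }

        // The unmodified instance is maximally similar to itself.
        System.out.println(similarity(original, original, 0.75) == 1.0);
    }
}
```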

And finally, we fit a simple model (usually a linear model) to the permuted data and its predictions using the most important features. There are different ways to determine the most important features: we typically define the number of features we want to include in our explanations (usually around 5 to 10) and then either

  • choose the features with highest weights in the regression fit on the predictions made by the complex machine learning model
  • apply forward selection, where features are added to improve the regression fit on the predictions made by the complex machine learning model
  • choose the features whose coefficients shrink least under the regularization of a lasso fit on the predictions made by the complex machine learning model
  • or fit a decision tree with at most as many branch splits as the number of features we have chosen

The similarity between each permuted instance and the original instance feeds into the simple model as a weight, so that higher importance is given to instances which are more similar to the original instance. This restricts our choice of explainer to simple models that can take weighted input, e.g. a ridge regression.
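The weighted fit at the heart of this step can be illustrated with a one-feature weighted least-squares regression; the “complex model” below is a stand-in linear function chosen only so the sketch stays self-contained:

```java
import java.util.Locale;

public class WeightedFit {
    public static void main(String[] args) {
        // Permuted values of one feature around an original instance...
        double[] x = {0.8, 0.9, 1.0, 1.1, 1.2};
        // ...the complex model's predictions for them (stand-in function)...
        double[] y = new double[x.length];
        for (int i = 0; i < x.length; i++) y[i] = 2 * x[i] + 0.5;
        // ...and similarity-to-original weights (closest instances weigh most).
        double[] w = {0.6, 0.9, 1.0, 0.9, 0.6};

        // Closed-form weighted least squares: slope and intercept.
        double sw = 0, swx = 0, swy = 0, swxx = 0, swxy = 0;
        for (int i = 0; i < x.length; i++) {
            sw += w[i]; swx += w[i] * x[i]; swy += w[i] * y[i];
            swxx += w[i] * x[i] * x[i]; swxy += w[i] * x[i] * y[i];
        }
        double slope = (sw * swxy - swx * swy) / (sw * swxx - swx * swx);
        double intercept = (swy - slope * swx) / sw;

        // The local surrogate recovers the model's local behavior; the slope
        // plays the role of the feature's weight in the explanation.
        System.out.printf(Locale.ROOT, "slope=%.2f intercept=%.2f%n", slope, intercept);
    }
}
```

In the real procedure this fit is done over the selected top features at once, weighted by the similarities computed earlier.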

Now, we can interpret the prediction made for the original instance. With the example model described above, you can see the LIME output for the eight most important features for six patients/instances in the figure below:

LIME explanations of example machine learning model

Each of the six facets shows the explanation for the prediction of an individual patient or instance. The header of each facet gives the case number (here the patient ID), which class label was predicted and with what probability. For example, the top left instance describes case number 4 which was classified as “ckd” with 98 % probability. Below the header we find a bar-plot for the top 8 most important features; the length of each bar shows the weight of the feature, positive weights support a prediction, negative weights contradict it. Again described for the top left instance: the bar-plot shows that the hemoglobin value was between 0.388 and 0.466, which supports the classification as “ckd”; packed cell volume (pcv), serum creatinine (sc), etc. similarly support the classification as “ckd” (for a full list of feature abbreviations, see http://archive.ics.uci.edu/ml/datasets/Chronic_Kidney_Disease). This patient’s age and white blood cell count (wbcc), on the other hand, are more characteristic of a healthy person and therefore contradict the classification as “ckd”.

Links and additional resources

This article is also available in German: https://blog.codecentric.de/2018/01/vertrauen-und-vorurteile-maschinellem-lernen/

  1. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16). ACM, New York, NY, USA, 1135-1144. DOI: https://doi.org/10.1145/2939672.2939778
  2. Edwards, Lilian and Veale, Michael, Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For (May 23, 2017). 16 Duke Law & Technology Review 18 (2017). Available at SSRN: https://ssrn.com/abstract=2972855
  3. http://www.insidertradings.org/2017/12/18/machine-learning-as-a-service-market-research-report-2017-to-2022/
  4. http://mitsloan.mit.edu/newsroom/press-releases/mit-sloan-professor-uses-machine-learning-to-design-crime-prediction-models/ and https://www.nytimes.com/2017/05/01/us/politics/sent-to-prison-by-a-software-programs-secret-algorithms.html
  5. https://www.bloomberg.com/news/articles/2017-12-04/researchers-combat-gender-and-racial-bias-in-artificial-intelligence
  6. https://www.engadget.com/2017/12/21/algorithmic-bias-in-2018/
  7. http://archive.ics.uci.edu/ml/datasets/Chronic_Kidney_Disease

The post Looking beyond accuracy to improve trust in machine learning appeared first on codecentric AG Blog.

Categories: Agile, Java, TDD & BDD

Staring Into My Java Crystal Ball for 2018

Javalobby Syndicated Feed - Tue, 09-Jan-18 03:01

Happy New Year!

Last year, I wrote a blog with my predictions for what would happen in the world of Java in 2017. Now that 2017 has ended and we’re starting 2018, I thought it would be interesting to look back at what I had predicted and make some new predictions for this year.

Categories: Java

Silly Code: Can We Try to Do a Bit Better in 2018?

Javalobby Syndicated Feed - Tue, 09-Jan-18 01:01

The new year has officially started and around the world, individuals are working to meet goals and resolutions that were made in the late stages of 2017. If I could cast a wish across our entire software engineering industry, I would ask if we could maybe try a little harder to avoid writing silly code in 2018.

For a majority of the year, I returned to the role of a consultant, but even in those months where I was a corporate employee, I still encountered what I would call "silly code."

Categories: Java

Jlink in Java 9

Javalobby Syndicated Feed - Mon, 08-Jan-18 22:01

Jlink is Java 9’s new command-line tool for creating your own customized JRE.

Usually we run our programs on the default JRE, but if you want to build a trimmed-down runtime of your own, jlink is the tool for the job.
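As a minimal sketch (the module selection and output directory name are just examples), the same jlink you invoke from the command line can also be driven from Java code via the ToolProvider SPI introduced in Java 9:

```java
import java.util.spi.ToolProvider;

public class JlinkDemo {
    public static void main(String[] args) {
        // jlink is exposed through the ToolProvider SPI since Java 9,
        // so a build step can drive it without forking a process.
        ToolProvider jlink = ToolProvider.findFirst("jlink")
                .orElseThrow(() -> new IllegalStateException("jlink not available"));

        // Equivalent to the command line:
        //   jlink --add-modules java.base --output custom-jre
        // ("custom-jre" is an arbitrary output directory name.)
        int exitCode = jlink.run(System.out, System.err,
                "--add-modules", "java.base",
                "--output", "custom-jre");
        System.out.println("jlink finished with exit code " + exitCode);
    }
}
```

The resulting image contains a java launcher that can only load the modules you listed, which is what makes jlink runtimes so much smaller than a full JRE.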

Categories: Java

Server-Side Validators Using Functional Interfaces

Javalobby Syndicated Feed - Mon, 08-Jan-18 15:14

Note: This post is inspired by and is an attempt at creating an extension to this post on Medium. As such, it will use some of the code in that post by Joel Planes.

As a Java developer, one of the most common tasks we face is writing server-side validations for our model data, to validate the objects coming into our application. Sure, there are frameworks like Hibernate Validator that can perform these validations, but sometimes they are just not an option.
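To illustrate the idea (a minimal sketch, not the code from the post referenced above; the User record and rule messages are made up), a validator can be modeled as a functional interface that maps an object to a list of error messages, with a default method to chain rules:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// A validator maps a model object to a list of error messages;
// an empty list means the object is valid.
interface Validator<T> extends Function<T, List<String>> {

    static <T> Validator<T> of(Predicate<T> rule, String errorMessage) {
        return t -> rule.test(t) ? List.of() : List.of(errorMessage);
    }

    // Chain validators by concatenating their error lists.
    default Validator<T> and(Validator<T> next) {
        return t -> {
            List<String> errors = new ArrayList<>(this.apply(t));
            errors.addAll(next.apply(t));
            return errors;
        };
    }
}

record User(String name, int age) {}

public class ValidatorDemo {
    public static void main(String[] args) {
        Validator<User> userValidator = Validator
                .of((User u) -> u.name() != null && !u.name().isBlank(), "name must not be blank")
                .and(Validator.of(u -> u.age() >= 0, "age must not be negative"));

        System.out.println(userValidator.apply(new User("Alice", 30))); // []
        System.out.println(userValidator.apply(new User("", -1)));
    }
}
```

Because each rule is just a lambda, new validations can be composed at runtime without any framework machinery.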

Categories: Java

IntelliJ IDEA 2017.3: Debugger Improvements

Javalobby Syndicated Feed - Mon, 08-Jan-18 10:01

As usual, the latest release of IntelliJ IDEA comes with improvements to help with debugging applications.

New Overhead Tab

Debugging an application comes with an inevitable cost. While we may know this, it's not always obvious what this might be. IntelliJ IDEA 2017.3 comes with a way to visualize this cost. There's now a new tab, Overhead, which gives a view of the cost of debugging. If you don't see this in the Debugger Tool Window, you may need to click the "Restore Overhead View" button:

Categories: Java

Morning Java: The Blunt Fundamentals

Javalobby Syndicated Feed - Mon, 08-Jan-18 08:02

With the holidays behind us and a new year freshly being rung in, it's time to see what's been going on in the world of Java! This ended up being a very blunt compilation of articles and news (although the headlines were pretty enjoyable). "Java 8: The Bad Parts" still makes me chuckle, and we'll even cover something as fundamental as a name in the news section. But this compilation also dives into some fundamental aspects of programming and considers a higher-level view of what's coming ahead.

It's Java'clock

By the way, if you're interested in writing for your fellow DZoners, feel free to check out our Writers' Zone, where you can also find some current hot topics and our Bounty Board, which has writing prompts coupled with prizes.

Categories: Java

Understanding flatMap

Javalobby Syndicated Feed - Mon, 08-Jan-18 04:01

Have you ever scrolled through someone’s code and bumped into this weird method called flatMap, not knowing what it actually does from the context? Or maybe you compared it with the map method but didn’t really see much difference? If that is the case, then this article is for you.

flatMap is extremely useful when you try to do proper functional programming. In Java, it usually means using Streams and Optionals — concepts introduced in version 8. These two types can be thought of as a kind of wrapper over a type that adds some extra behaviour — Stream<T> wraps type T, allowing you to store any number of elements of type T inside, whereas Optional<T> wraps some type T to explicitly say that the element of that type may or may not be present inside.
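A small self-contained example of the difference (the numbers and strings are arbitrary): mapping with a function that itself returns a wrapper would nest the wrappers, while flatMap flattens them away:

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Stream;

public class FlatMapDemo {
    public static void main(String[] args) {
        // map would produce a Stream<List<Integer>>;
        // flatMap flattens the inner lists into one Stream<Integer>.
        List<Integer> flat = Stream.of(List.of(1, 2), List.of(3, 4))
                .flatMap(List::stream)
                .toList();
        System.out.println(flat); // [1, 2, 3, 4]

        // With Optional, mapping with a function that returns an Optional
        // would yield Optional<Optional<Integer>>; flatMap keeps it flat.
        Optional<Integer> length = Optional.of("hello")
                .flatMap(s -> s.isEmpty() ? Optional.empty() : Optional.of(s.length()));
        System.out.println(length); // Optional[5]
    }
}
```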

Categories: Java

Spring-Based Apps: Migrating to JUnit 5

Javalobby Syndicated Feed - Mon, 08-Jan-18 01:01

This is a quick write-up on migrating a Gradle-based Spring Boot app from JUnit 4 to the shiny new JUnit 5. JUnit 4 tests continue to work thanks to JUnit 5's Test Engine abstraction, which provides support for tests written in different programming models. In this instance, JUnit 5 supports JUnit 4 tests through a Vintage Test Engine.

Here is a sample project with JUnit 5 integrations already in place along with sample tests in JUnit 4 and JUnit 5.

Categories: Java

Spring Cloud Service Discovery with Dynamic Metadata

codecentric Blog - Sun, 07-Jan-18 23:00

Spring Cloud Service Discovery

If you are running applications consisting of a lot of microservices depending on each other, you are probably using some kind of service registry. Spring Cloud offers a set of starters for interacting with the most common service registries.

For the rest of this article, let’s assume you are familiar with the core Spring Boot ecosystem.

In our example we will use the service registry Eureka from Netflix running on localhost:8761. We will run two services, service-1 on port 8080 and service-2 on port 8081, both implemented with Spring Boot. In order to register itself and look up other services, each service includes the Eureka client starter (org.springframework.cloud:spring-cloud-starter-netflix-eureka-client) as a build dependency.
Each service has to configure its unique name and where the Eureka server is located. We will use the YAML format in src/main/resources/application.yml:

spring:
  application:
    name: service-1

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka

The configuration of service-2 will look similar.

Service Discovery

The implementation of service-2 contains a simple REST controller to access the service registry:

@RestController
public class RegistryLookup {

   @Autowired
   private DiscoveryClient discoveryClient;

   @RequestMapping("/lookup/{serviceId}")
   public List<ServiceInstance> lookup(@PathVariable String serviceId) {
      return discoveryClient.getInstances(serviceId);
   }
}


In order to use the DiscoveryClient API, you have to use the annotation @EnableDiscoveryClient somewhere within your configuration classes.

A lookup of service-1 via http://localhost:8081/lookup/service-1 yields a lot of information on that service:

{
    "host": "SOME.IP.ADDRESS",
    "port": 8080,
    "secure": false,
    "instanceInfo": {
        "instanceId": "SOME.IP.ADDRESS:service-1:8080",
        "app": "SERVICE-1",
        ...
    }
}

This information is used by REST clients like Feign to discover the HTTP endpoint of services by symbolic names like service-1.

Static Service Metadata

Each service can define arbitrary metadata within its configuration. With Eureka, this metadata can be a map of values defined in application.yml:

eureka:
  instance:
    metadata-map:
      fixed-s1: "value_1"

This metadata can be used by clients for various use cases. A service can describe SLAs, quality of data, or whatever you want. But this metadata is fixed in the sense that it has to be known at build or deployment time. The latter is the case when you override your Spring environment with JVM or OS environment variables. Anyway, the metadata shows up within the lookup http://localhost:8081/lookup/service-1:

{
    "host": "localhost",
    "port": 8080,
    "uri": "http://localhost:8080",
    "metadata": {
        "fixed-s1": "value_1"
    },
    "secure": false,
    "serviceId": "SERVICE-1",
    ...
}

Dynamic Service Metadata

If a service wants to detect and register its metadata at runtime, you have to use the client API of the service registry in use. With the Eureka client this may look like this:

@Component
public class DynamicMetadataReporter {

   @Autowired
   private ApplicationInfoManager aim;

   @PostConstruct
   public void init() {
      Map<String, String> map = aim.getInfo().getMetadata();
      map.put("dynamic-s1", "value_2");
   }
}

The class ApplicationInfoManager is from the Eureka client API and allows you to dynamically add metadata that shows up if we query the service registry for service-1:

{
    "host": "localhost",
    "port": 8080,
    "uri": "http://localhost:8080",
    "metadata": {
        "dynamic-s1": "value_2",  // dynamic metadata
        "fixed-s1": "value_1"
    },
    "secure": false,
    "serviceId": "SERVICE-1",
    ...
}

The post Spring Cloud Service Discovery with Dynamic Metadata appeared first on codecentric AG Blog.

Categories: Agile, Java, TDD & BDD

Introducing Picocog: A Lightweight Code Generation Library

Javalobby Syndicated Feed - Sun, 07-Jan-18 22:01

Picocog is a new, lightweight open-source Java library created to make programmatic code generation easy.

Code generation is the process by which source code can be automatically generated based on inputs.

Categories: Java
