Feed aggregator

Agile, oven-fresh documentation. Part 2: PlantUML with JBake

codecentric Blog - Wed, 07-Mar-18 01:00

After looking at the configuration of JBake in the first part, I will now extend the scenario described in the previous post by adding PlantUML. PlantUML makes it possible to describe UML diagrams in simple textual notation.

For example, the following one-line description already produces a complete activity diagram:

:Hello world;

First of all, we integrate PlantUML into the existing Gradle build file. To do so, we need asciidoctorj-diagram as an external library for the build script; with the upcoming release of the Gradle plugin, this will no longer be necessary. The configuration of JBake also needs to be adjusted so that the asciidoctorj-diagram library can be used. The library then allows us to use PlantUML diagrams within the Asciidoctor document.

buildscript {
    repositories {
        jcenter() // assumption: any repository hosting asciidoctorj-diagram works
    }
    dependencies {
        classpath 'org.asciidoctor:asciidoctorj-diagram:1.5.4.1' // assumed version
    }
}

plugins {
    id 'org.jbake.site' version '1.0.0'
}

group 'de.codecentric.dk.afgd'
version '1.0-SNAPSHOT'

apply plugin: 'groovy'
apply plugin: 'java'

sourceCompatibility = 1.8

repositories {
    jcenter()
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.3.11'
    testCompile group: 'junit', name: 'junit', version: '4.12'
}

jbake {
    srcDirName = 'src/documentation'
    destDirName = 'documentation'
    configuration['asciidoctor.option.requires'] = "asciidoctor-diagram"
    configuration['asciidoctor.attributes'] = [
            'imagesdir=assets' // assumption: diagram images end up in the assets folder, as described below
    ]
}

build.dependsOn bake

We now create a new file, architecture.adoc, within the content folder. The following code block shows the content of the file.

= Architecture documentation
Daniel Kocot
:jbake-type: page
:jbake-tags: documentation, manual
:jbake-status: published

== First draft

[plantuml, "first_draft", "png"]
----
node "Confluence as content repository" as nodeconf
folder Intranet {
    together {
        node "WordPress Node 1" as nodeiwp1
        node "WordPress Node 2" as nodeiwp2
        node "WordPress Node 3" as nodeiwp3
        node "WordPress Node 4" as nodeiwp4
    }
}
folder Extranet {
    together {
        node "WordPress Node 1" as nodeewp1
        node "WordPress Node 2" as nodeewp2
        node "WordPress Node 3" as nodeewp3
        node "WordPress Node 4" as nodeewp4
    }
}
node "LoadBalancer / nginx Intranet" as lbinginx
node "LoadBalancer / nginx Extranet" as lbenginx
node "Content Delivery Network" as cdn
cloud "Intranet" as intranet
cloud "Extra/Internet" as internet
actor "Internal User" as internal
actor "External User" as external
nodeconf --> nodeiwp1
nodeconf --> nodeiwp2
nodeconf --> nodeiwp3
nodeconf --> nodeiwp4
nodeconf --> nodeewp1
nodeconf --> nodeewp2
nodeconf --> nodeewp3
nodeconf --> nodeewp4
nodeewp1 <--> lbenginx
nodeewp2 <--> lbenginx
nodeewp3 <--> lbenginx
nodeewp4 <--> lbenginx
nodeiwp1 <--> lbinginx
nodeiwp2 <--> lbinginx
nodeiwp3 <--> lbinginx
nodeiwp4 <--> lbinginx
cdn <--> nodeconf
cdn --> nodeewp1
cdn --> nodeewp2
cdn --> nodeewp3
cdn --> nodeewp4
cdn --> nodeiwp1
cdn --> nodeiwp2
cdn --> nodeiwp3
cdn --> nodeiwp4
lbinginx <--> intranet
lbenginx <--> internet
external <--> internet
internal <--> intranet
----

After writing the first draft of our architectural diagram into the document, there is still the question of how the text content is turned into a graphic.

For this purpose, when the entire documentation is built, the first step is to generate a picture – in our case a PNG file – via PlantUML. This picture is saved in the assets folder under the file name first_draft with the file extension .png. In a second step, JBake then creates the corresponding HTML file based on the Asciidoctor file.

The PlantUML integration enables us to also maintain the important architecture diagrams in close alignment with the source code.

In the third part of this series, we will take a look at test reports and their integration in JBake.

Check out the demo in my Git Repository: agile-fresh-baked-docs

The post Agile, oven-fresh documentation. Part 2: PlantUML with JBake appeared first on codecentric AG Blog.

Categories: Agile, Java, TDD & BDD

The Developer's Guide to Collections: Queues

Javalobby Syndicated Feed - Tue, 06-Mar-18 22:01

Queues are an essential part of most software applications, whether in an isolated system or a widely-distributed, networked application. In recent years, queues have taken on an entirely new meaning with the acceptance of the Advanced Message Queuing Protocol (AMQP) and many of its implementations, such as RabbitMQ. While these are important aspects of the Java ecosystem, the same importance of queues can be found at the micro-application level, as well. For example, many concurrent (multi-threaded) applications require queues to process data or execute tasks. This category of problem is so prolific, it has even been codified into a well-known problem for decades: The Producer-Consumer problem.
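
To ground the Producer-Consumer problem before we begin, here is a minimal sketch using a JCF BlockingQueue (the names and sizes are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        // Producer: blocks on put() when the queue is full
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Consumer: blocks on take() when the queue is empty
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    System.out.println("Consumed: " + queue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}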

In this last installment of The Developer's Guide to Collections, we will explore the queue portion of the Java Collections Framework (JCF), focusing particularly on two types of data structures: queues and double-ended queues (deque; pronounced "deck"). Starting with an explanation of the concepts behind queues, we will work our way into the actual code of the Queue and Deque interfaces, looking at the intent and logic behind each of these abstractions of the queue concept. Lastly, we will delve into the various queue implementations Java offers and how each differs from the others. With this understanding, we will then be able to make grounded decisions about when and when not to use each implementation. Before diving into these details, though, we must first gain a solid grasp of the concept of a queue.
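
As a concrete preview of the distinction (a sketch, not code from the series), ArrayDeque implements Deque (and therefore Queue), so one class can serve as both a FIFO queue and a LIFO stack:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Queue;

public class QueueVsDeque {
    public static void main(String[] args) {
        // FIFO: offer() enqueues at the tail, poll() dequeues from the head
        Queue<String> queue = new ArrayDeque<>();
        queue.offer("first");
        queue.offer("second");
        System.out.println(queue.poll()); // first

        // LIFO: push() and pop() both operate on the head
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first");
        stack.push("second");
        System.out.println(stack.pop()); // second
    }
}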

Categories: Java

This Week in Spring: Spring Boot 2.0

Javalobby Syndicated Feed - Tue, 06-Mar-18 10:01

Hi, Spring fans and welcome to another installment of This Week in Spring! As I write this it’s early morning Tuesday in Sydney, Australia, where I’ve been visiting with some of Pivotal’s amazing customers, and I’m now preparing for my flight to Dubai, in six short hours, where I’ll visit some more of Pivotal’s amazing customers. Later this week I’ll be in Bangalore, India, for the amazing Agile India conference, and then — early next week on Tuesday — I’ll be in Boston, MA for the first SpringOne Tour event. If you’re around don’t hesitate to say hi, as usual!

This week we’ve got a lot of great content celebrating the wonderful release of Spring Boot 2.0, so without further ado let’s get to it!

Categories: Java

Variance, Immutability, and Strictness in Kotlin

Javalobby Syndicated Feed - Mon, 05-Mar-18 22:01

Variance is the way parameterized types relate regarding inheritance of their type parameter. This article will first offer a reminder of how variance works, and then elaborate how strictness and mutability interfere with variance — and how to deal with this problem. This is a preview of my upcoming book The Joy of Kotlin, published by Manning.

Let’s start with an example object hierarchy. As you can see in the following figure, Gala extends Apple, which in turn extends Fruit. In other words, a Gala is an Apple and an Apple is a Fruit.
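
The hierarchy translates directly to Java, where the variance problem is visible at the use site (a sketch in Java rather than Kotlin): a List<Apple> is not a List<Fruit>, but a wildcard type accepts both:

import java.util.List;

class Fruit {}
class Apple extends Fruit {}
class Gala extends Apple {}

public class VarianceDemo {
    // Covariant use-site: accepts List<Fruit>, List<Apple>, List<Gala>, ...
    static Fruit first(List<? extends Fruit> fruits) {
        return fruits.get(0);
    }

    public static void main(String[] args) {
        List<Apple> apples = List.of(new Gala());
        // List<Fruit> fruits = apples; // does not compile: generics are invariant
        Fruit f = first(apples);        // fine, thanks to the wildcard
        System.out.println(f);
    }
}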

Categories: Java

IntelliJ IDEA 2018.1 Public Preview

Javalobby Syndicated Feed - Mon, 05-Mar-18 14:01

Good news everyone! IntelliJ IDEA 2018.1 is now ready for Public Preview! The upcoming IntelliJ IDEA 2018.1 brings a lot of important improvements: support for Git partial commits, inline external annotations, merged features from Android Studio 3.0 and many more. We are excited about all these new features, and we encourage you to take the preview build for a ride right away!

Enhancements in Code Completion

In the upcoming IntelliJ IDEA 2018.1, code completion has been enhanced. Completion in Stream API chains is now aware of type casts: after an existing call such as filter(String.class::isInstance), the code completion suggests not only a matching completion item but also an automatically typecast one.
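
For illustration, the kind of chain this completion targets (a sketch):

import java.util.List;
import java.util.stream.Collectors;

public class StreamCast {
    public static void main(String[] args) {
        List<Object> input = List.of("a", 42, "b");
        List<String> strings = input.stream()
                .filter(String.class::isInstance)
                .map(String.class::cast) // the typecast item the IDE can now suggest
                .collect(Collectors.toList());
        System.out.println(strings); // [a, b]
    }
}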

Categories: Java

Java: BlockingQueues and Continuous Monitoring

Javalobby Syndicated Feed - Mon, 05-Mar-18 10:01

In Java, the BlockingQueue interface is available in the java.util.concurrent package. BlockingQueue implementations are designed to be used primarily for producer-consumer queues with thread-safety. They can safely be used with multiple producers and multiple consumers.

We can find many BlockingQueue examples in various forums and articles. In this article, we are going to explain how to monitor a queue for requests continuously and how to process them immediately whenever a request arrives in the queue.
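
A minimal sketch of that pattern (class and method names are illustrative): take() blocks until a request is available, so the consumer processes each request immediately, without polling:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class RequestMonitor {
    private final BlockingQueue<String> requests = new LinkedBlockingQueue<>();

    public void submit(String request) throws InterruptedException {
        requests.put(request); // safe to call from many producer threads
    }

    public void startMonitoring() {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // take() blocks until a request arrives: no busy-waiting
                    String request = requests.take();
                    System.out.println("Processing: " + request);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public static void main(String[] args) throws InterruptedException {
        RequestMonitor monitor = new RequestMonitor();
        monitor.startMonitoring();
        monitor.submit("req-1");
        monitor.submit("req-2");
        Thread.sleep(100); // give the daemon worker time to print
    }
}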

Categories: Java

Java Engineering at Microsoft: Interview with Rikki Gibson

Javalobby Syndicated Feed - Mon, 05-Mar-18 04:01

Today I have another interview to share! Following my interview with Yoshio Terada, a Java evangelist at Microsoft, today I have an interview with Rikki Gibson, a software engineer at Microsoft, working exclusively on Java-related projects. I am quite envious of his role, as engineering and solving fun problems is always what excites me most! So, to everyone reading – enjoy this post, and I’ll work on getting more stories about Java people at Microsoft up every week or two.

Hi Rikki – can you please introduce yourself to everyone?
Hi there! I’m Rikki Gibson. I come from Corvallis, Oregon, and I’m a 2017 Computer Science graduate of Oregon State University. When I was in school I worked part-time for a few years on .NET-based systems for the Oregon state government, and I have some Java background from a small foray into Android development. When I joined Microsoft in July 2017, I was brought onto the Azure SDKs team within the Microsoft Developer Division, working on Java and .NET libraries.

Categories: Java

Rust for Java Devs: Structs

Javalobby Syndicated Feed - Mon, 05-Mar-18 01:01

Next up in Rust for Java Devs, we have structs. They are used to hold data within a logical unit that can be passed to other functions or execute their own methods on the values that they store. Now, this sounds very familiar… Java objects do the same thing. For example, if you take a POJO (Plain Old Java Object), you can also pass it to other methods or call the object's own methods. In this respect they are alike, but they do have their differences. In this post, we will look into creating structs, retrieving their values, defining their own methods, and how to execute them.
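
For reference, the Java side of that comparison: a small POJO that stores values and exposes a method on them (an illustrative sketch, not code from the post):

public class Point {
    private final double x;
    private final double y;

    public Point(double x, double y) {
        this.x = x;
        this.y = y;
    }

    // A method operating on the values the object stores
    public double distanceFromOrigin() {
        return Math.sqrt(x * x + y * y);
    }
}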

Creating a Struct

Let’s start with creating a struct.

Categories: Java

Cloud Launcher for MongoDB in the Google Compute Engine

codecentric Blog - Sun, 04-Mar-18 23:00

In this post you will learn how to use Google’s Cloud Launcher to set up instances for a MongoDB replica set in the Google Compute Engine.

Replication in MongoDB

A minimal MongoDB replica set consists of two data bearing nodes and one so-called arbiter that only takes part in the election of a new primary in case of failure.

Unlike other distributed databases, MongoDB does not offer auto-discovery of cluster nodes. If all nodes are up, you have to initialize the cluster via an administrative command inside the Mongo CLI that takes a list of all replica set members. This fact makes it hard (but not impossible) to script this with the cloud provisioning tool of your choice.
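
For those who do want to script it, here is a sketch of issuing that administrative command (replSetInitiate) through the MongoDB Java driver's legacy 3.x API instead of the Mongo CLI; the host names match the instances created below:

import com.mongodb.MongoClient;
import org.bson.Document;
import java.util.Arrays;

public class InitReplicaSet {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("mongo-0-servers-vm-0", 27017);
        try {
            Document config = new Document("_id", "rs0")
                    .append("members", Arrays.asList(
                            new Document("_id", 0).append("host", "mongo-0-servers-vm-0:27017"),
                            new Document("_id", 1).append("host", "mongo-0-servers-vm-1:27017"),
                            new Document("_id", 2).append("host", "mongo-0-arbiters-vm-0:27017")
                                    .append("arbiterOnly", true)));
            // replSetInitiate takes the full list of replica set members
            client.getDatabase("admin").runCommand(new Document("replSetInitiate", config));
        } finally {
            client.close();
        }
    }
}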

Cloud Launcher for MongoDB

With the help of Google’s Cloud Launcher for MongoDB, the provisioning of a replica set is done in just a few steps.

First, we define a deployment name (that will prefix the names of the instances), a zone and the name of the replica set.

cloud launcher mongodb

Then, we set up the instances that will run in the Compute Engine. We define a minimal replica set with two server instances and one arbiter. All other parameters will use the defaults for now (the server instances will use a n1-highmem-4 machine type).

cloud launcher mongodb

This will finally lead to three running instances.

Compute Engine Instances

The instances will show up in the Compute Engine dashboard where they can be managed:

compute engine instances

If you prefer your CLI, you can list the instances with the gcloud tool:

$ gcloud compute instances list
mongo-0-arbiters-vm-0  europe-west3-b  n1-standard-1  RUNNING
mongo-0-servers-vm-0   europe-west3-b  n1-highmem-4   RUNNING
mongo-0-servers-vm-1   europe-west3-b  n1-highmem-4   RUNNING

Replica Set Status

In order to check if everything is up and running, we open an SSH window to one of the instances from the dashboard and start the mongo CLI:

compute engine ssh

After connecting to MongoDB, we run rs.status() to get the status of the replica set:

  "set" : "rs0",
  "date" : ISODate("2018-02-12T12:52:47.562Z"),
  "myState" : 7,
  "members" : [
      "_id" : 0,
      "name" : "mongo-0-servers-vm-0:27017",
      "health" : 1,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 787,
      "_id" : 1,
      "name" : "mongo-0-servers-vm-1:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 787,
        "_id" : 2,
        "name" : "mongo-0-arbiters-vm-0:27017",
        "health" : 1,
        "state" : 7,
        "stateStr" : "ARBITER",
        "uptime" : 801,
        "configVersion" : 3,
        "self" : true
  "ok" : 1

(I omitted some of the output so we can focus on the important things.)

We can see the name of the replica set, rs0, and our three instances:

  • mongo-0-servers-vm-0
  • mongo-0-servers-vm-1
  • mongo-0-arbiters-vm-0

Restore Data Dump from Google Storage

We’ll restore a data dump from a file located in Google Storage. Since only the primary node can perform write operations, we connect to the instance mongo-0-servers-vm-0, which has the role PRIMARY right now.

Inside that instance, we download a BSON dump from the storage by using the gsutil tool, assuming your storage bucket is called [BUCKET_ID]:

mongo-0-servers-vm-1:/tmp$ cd /tmp
mongo-0-servers-vm-1:/tmp$ gsutil cp gs://[BUCKET_ID].appspot.com/pois.* ./
Copying gs://[BUCKET_ID].appspot.com/pois.bson...
Copying gs://[BUCKET_ID].appspot.com/pois.metadata.json...             
/ [2 files][  5.6 KiB/  5.6 KiB]                                                
Operation completed over 2 objects/5.6 KiB. 

Now the dump is on our disk and we restore its data to the MongoDB replica set:

mongo-0-servers-vm-1:/tmp$ mongorestore pois.bson
2018-02-12T13:12:33.402+0000    checking for collection data in pois.bson
2018-02-12T13:12:33.402+0000    reading metadata for test.pois from pois.metadata.json
2018-02-12T13:12:33.418+0000    restoring test.pois from pois.bson
2018-02-12T13:12:33.481+0000    restoring indexes for collection test.pois from metadata
2018-02-12T13:12:33.590+0000    finished restoring test.pois (7 documents)
2018-02-12T13:12:33.590+0000    done

Finally, we perform a simple query from the OS command line:

mongo-0-servers-vm-0:/tmp$ mongo --quiet --eval "db.pois.count()" test

to check if our 7 imported documents are really there.


We created a MongoDB replica set and deployed it to the Compute Engine. Then we checked the replica set status and imported some data.

In one of my next posts I will show you how to build a REST-ful microservice in the AppEngine that will access our MongoDB backend in the Compute Engine.

The post Cloud Launcher for MongoDB in the Google Compute Engine appeared first on codecentric AG Blog.

Categories: Agile, Java, TDD & BDD

About Immutability in OOP

Javalobby Syndicated Feed - Sun, 04-Mar-18 22:01

To explain what immutability is, first we need to understand what mutability is. Mutability means the possibility to change the state of an entity while its identity remains the same. Obviously, immutability is the opposite. Once an entity exists, its state will never change.

How are these concepts represented in OOP? Well, in OOP, the entities are the objects. So, the concept of immutability in OOP is embodied by immutable objects: once an object is constructed, it will have the same state for as long as it lives.
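
A minimal sketch of what this looks like in Java: all fields final, no setters, and "modifications" producing new objects:

public final class Temperature {
    private final double celsius;

    public Temperature(double celsius) {
        this.celsius = celsius;
    }

    public double celsius() {
        return celsius;
    }

    // "Modification" returns a new object; the original never changes
    public Temperature plus(double delta) {
        return new Temperature(celsius + delta);
    }
}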

Categories: Java

Iterating Over a Bitset in Java

Javalobby Syndicated Feed - Sat, 03-Mar-18 23:01

How fast can you iterate over a bitset? Daniel Lemire published a benchmark recently in support of a strategy using the number of trailing zeroes to skip over empty bits. I have used the same technique in Java several times in my hobby project SplitMap and this is something I am keen to optimize. I think that the best strategy depends on what you want to do with the set bits, and how sparse and uniformly distributed they are. I argue that the cost of iteration is less important than the constraints your API imposes on the caller, and whether the caller is free to exploit patterns in the data.
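
For context, the trailing-zeroes technique under discussion looks like this for a single 64-bit word:

import java.util.function.IntConsumer;

public class BitIteration {
    // Invokes the consumer with the index of every set bit, lowest first
    static void forEachSetBit(long word, IntConsumer consumer) {
        while (word != 0) {
            consumer.accept(Long.numberOfTrailingZeros(word));
            word &= word - 1; // clear the lowest set bit
        }
    }

    public static void main(String[] args) {
        forEachSetBit(0b10110L, System.out::println); // prints 1, 2, 4
    }
}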

C2 Generates Good Code

If you think C++ is much faster than Java, you either don’t know much about Java or do lots of floating point arithmetic. This isn’t about benchmarking C++ against Java, but comparing the compilation outputs for a C++ implementation and a Java implementation shows that there won’t be much difference if your Java method gets hot. Only the time to performance will differ, and this is amortised over the lifetime of an application. The trailing zeroes implementation is probably the fastest technique in Java as well as in C++, but that is to ignore the optimisations you can’t apply to the callback if you use it too literally.

Categories: Java

109 New Features in JDK 10

Javalobby Syndicated Feed - Fri, 02-Mar-18 22:01

It feels like it's only been a few weeks since JDK 9 was launched, and that's because it was. Things move on, however, and with the new release cadence for OpenJDK, JDK 10 has already reached the release candidate milestone.

I've seen a variety of blog posts on the subject of what's new in JDK 10, but they tend to stick just to the big items defined through the JDK Enhancement Proposals (JEPS). For this blog, I thought I'd see if I can list out absolutely everything that has changed (both added and removed) in JDK 10.
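
The best known of those JEPs is JEP 286, local-variable type inference; a quick taste:

import java.util.ArrayList;

public class Jep286 {
    public static void main(String[] args) {
        var list = new ArrayList<String>(); // inferred as ArrayList<String>
        list.add("JDK 10");
        for (var item : list) {             // inferred as String
            System.out.println(item);
        }
    }
}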

Categories: Java

Scala: Option Type (Part 2)

Javalobby Syndicated Feed - Fri, 02-Mar-18 10:51

In the previous article, I had given a very basic introduction to the Option Type here. We saw that one way to check if a value is present is by means of the option's isDefined method and, if it is present, to get the value via the get method. However, using the get method is not an elegant way to access an Option, because you might forget to check with isDefined first, leading to an exception at runtime — which is as good as getting a NullPointerException. It is recommended to stay away from this way of accessing Options whenever possible.
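
The same pitfall exists with Java's Optional, which makes for a compact parallel (a Java sketch, since this is a Scala article):

import java.util.Optional;

public class OptionPitfall {
    public static void main(String[] args) {
        Optional<String> maybe = Optional.empty();

        // Risky: get() without isPresent() throws NoSuchElementException
        // String value = maybe.get();

        // Safer: transform and supply a default in one expression
        String value = maybe.map(String::toUpperCase).orElse("DEFAULT");
        System.out.println(value); // DEFAULT
    }
}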

There are several ways to access the result from the Option, but the three most common ways are:

Categories: Java

Spring Boot 2.0 Goes GA

Javalobby Syndicated Feed - Fri, 02-Mar-18 10:01

On behalf of the team, it is my very great pleasure to announce that Spring Boot 2.0 is now generally available as 2.0.0.RELEASE from repo.spring.io and Maven Central!

This release is the culmination of 17 months work and over 6800 commits by 215 different individuals. A massive thank you to everyone that has contributed, and to all the early adopters that have been providing vital feedback on the milestones.

Categories: Java

ETL with Kafka

codecentric Blog - Fri, 02-Mar-18 05:04

“ETL with Kafka” is a catchy phrase that I purposely chose for this post instead of a more precise title like “Building a data pipeline with Kafka Connect”.


You don’t need to write any code for pushing data into Kafka, instead just choose your connector and start the job with your necessary configurations. And it’s absolutely Open Source!

Kafka Connect


Before getting into the Kafka Connect framework, let us briefly sum up what Apache Kafka is in a couple of lines. Apache Kafka was built at LinkedIn to meet requirements that the message brokers already on the market did not meet: scalability, distribution, and resilience with low latency and high throughput. Currently, i.e. in 2018, LinkedIn processes about 1.8 petabytes of data per day through Kafka. Kafka offers a programmable interface (API) for a lot of languages to produce and consume data.

Kafka Connect

Kafka Connect has been built into Apache Kafka since version 0.9 (11/2015), although the idea existed before this release as a project named Copycat. Kafka Connect is basically a framework around Kafka for getting data from different sources into Kafka and out of Kafka into other systems (sinks), e.g. Cassandra, with automatic offset management; as a user of a connector you don't need to worry about offsets, but can rely on the developer of the connector.

Besides that, in discussions I have often come across people who thought that Kafka Connect was part of Confluent Enterprise and not part of open-source Kafka. To my surprise, I have even heard this from a long-term Kafka developer. That confusion might be due to the fact that if you google the term Kafka Connect, the first few pages on Google are by Confluent, along with the list of certified connectors.

Kafka Connect basically has three main components that need to be understood for a deeper grasp of the framework:

  • Connectors are, in a way, the “brains” that determine how many tasks will run with which configurations and how the work is divided between these tasks. For example, the JDBC connector can decide to parallelize consuming data from a database (see figure 2).
  • Tasks contain the main logic of getting data into Kafka from external systems, e.g. by connecting to a database (source task), or of consuming data from Kafka and pushing it to external systems (sink task).
  • Workers abstract away from the connectors and tasks in order to provide a REST API (the main interaction point), reliability, high availability, scaling, and load balancing.


Kafka Connect can be started in two different modes. The first mode is called standalone and should be used only in development, because offsets are maintained on the file system. This would be really bad if you were running this mode in production and your machine became unavailable: the state could be lost, which means the offset is gone and you as a developer don't know how much data has already been processed.

# connect-standalone.properties


The second mode is called distributed. There, the configuration, state, and status are stored in Kafka itself in different topics, which benefit from all Kafka characteristics such as resilience and scalability. Workers can be started on different machines, and those sharing the same group.id attribute in the .properties file eventually form the Kafka Connect cluster, which can be scaled up or down.

# connect-distributed.properties

So let’s look at the content of the pretty self-explanatory topics used in the configuration file:

// TOPIC => connect-configs
 "twitter", "twitter.consumersecret":"XXXXXX", 

// TOPIC => connect-offsets

// TOPIC => connect-status

The output shown here contains just the message values; the key of each message is used to identify the different connectors.

Interaction pattern

There is also a difference in the typical interaction pattern between standalone and distributed mode. In a non-production environment, where you just want to try out a connector and perhaps set an offset of your choice manually, you can start the standalone mode and pass in the sink or source connectors that you want to use, e.g. bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/other-connector.properties.

On the other hand, you can start a Kafka Connect worker in distributed mode with the following command: bin/connect-distributed.sh config/connect-distributed.properties. After that, you can list all available connectors, start them, change configurations on the fly, restart, pause, and remove connectors via the framework's exposed REST API. A full list of supported endpoints can be found in the official Kafka Connect documentation.
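
As a sketch of that REST interaction, the following Java snippet registers the file source connector that ships with Kafka via plain HttpURLConnection; it assumes a worker listening on the default port 8083, and the connector name, file, and topic are illustrative:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        String body = "{\"name\":\"file-source\",\"config\":{"
                + "\"connector.class\":\"org.apache.kafka.connect.file.FileStreamSourceConnector\","
                + "\"tasks.max\":\"1\","
                + "\"file\":\"/tmp/input.txt\","
                + "\"topic\":\"file-topic\"}}";

        // POST /connectors creates a new connector on the worker
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:8083/connectors").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode()); // 201 on success
    }
}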


So let’s have a closer look at an example of a running data pipeline where we get some real-time data from Twitter and use the kafka-console-consumer to consume and inspect the data.


Here is the complete example shown in the terminal recording: Github repository. You can download and play around with the example project.


In this blog post, we covered the high-level components that are the building blocks of the Kafka Connect framework. Kafka Connect is part of the open-source Apache Kafka distribution and allows data engineers or business departments to move data from one system to another without writing any code, building on Apache Kafka's great characteristics, of which we have barely scratched the surface in this post. So happy connecting…

The post ETL with Kafka appeared first on codecentric AG Blog.

Categories: Agile, Java, TDD & BDD

SLF4J: How to Properly Use It [Video]

Javalobby Syndicated Feed - Fri, 02-Mar-18 01:01

If you are coming from other logging frameworks like Log4j V1 or Commons Logging, it is good to know a few useful tricks when logging with SLF4J.

I compiled a small screencast for you that shows the three most common tricks or features of SLF4J that you will be using on a day-to-day basis. Enjoy and make sure to leave your feedback below!
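
One trick SLF4J is best known for, and a likely candidate among the three shown, is parameterized logging; a small sketch:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ParamLogging {
    private static final Logger log = LoggerFactory.getLogger(ParamLogging.class);

    public static void main(String[] args) {
        String user = "alice";
        // Placeholders are only substituted if DEBUG is enabled: no wasted concatenation
        log.debug("User {} logged in", user);
        // A trailing Throwable is logged as the exception, outside the placeholder list
        log.error("Lookup failed for {}", user, new IllegalStateException("boom"));
    }
}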

Categories: Java

Creating Immutable Sets, Lists, and Maps in Java 9

Javalobby Syndicated Feed - Thu, 01-Mar-18 22:01

Today, you'll learn about my favorite Java 9 feature, "factory methods for collections", which was introduced as part of JEP 269.

If you have worked in Groovy or Kotlin, then you know how easy it is to create a list with elements using collection literals, e.g. to create a list of 1, 2, 3, you can simply write val items = listOf(1, 2, 3).
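
With the JEP 269 factory methods, Java 9 gets nearly as terse:

import java.util.List;
import java.util.Map;
import java.util.Set;

public class FactoryMethods {
    public static void main(String[] args) {
        List<Integer> items = List.of(1, 2, 3);        // immutable list
        Set<String> names = Set.of("a", "b");          // immutable set
        Map<String, Integer> ages = Map.of("bob", 42); // immutable map

        // All three reject modification at runtime:
        // items.add(4); // throws UnsupportedOperationException
        System.out.println(items + " " + names + " " + ages);
    }
}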

Categories: Java

This Week in Spring: Welcome Java 11 and Spring Boot 2.0

Javalobby Syndicated Feed - Thu, 01-Mar-18 10:01

Hi, Spring fans and welcome to another installment of This Week in Spring! This is a super exciting week! Spring Boot 2.0 is coming! Keep your eyes on the Spring Initializr or you’ll miss it!

Today I was at the Okta Iterate conference talking to developers who are using Spring and Okta, thanks to my buddy Matt Raible. High point? I got to meet Jeff Atwood, the co-creator of Stack Overflow!

Categories: Java

Java Quiz 12: Unary Operators

Javalobby Syndicated Feed - Thu, 01-Mar-18 07:02

Before we start with this week's quiz, here is the answer to Java Quiz 11: Branching Statements

  1. The body of the while loop while(i < 3) is executed for the first time when i = 0. The statement x++; increments the value of x by one. So, x = 0 + 1 = 1.
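
A minimal sketch consistent with that first step (the quiz code itself is not reproduced in this excerpt, so the variable setup and the rest of the loop body are assumptions):

public class Quiz11Sketch {
    public static void main(String[] args) {
        int i = 0;
        int x = 0;
        while (i < 3) {
            x++; // first iteration: i = 0 and x becomes 0 + 1 = 1
            i++;
        }
        System.out.println(x);
    }
}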

Categories: Java

Java Getting Back to the Browser?

Javalobby Syndicated Feed - Thu, 01-Mar-18 04:01

Betteridge's law of headlines applies.

This article talks about WebAssembly and can be read to get a first glimpse of it. At the same time, I articulate my opinion and doubts. The summary is that WebAssembly is an interesting approach and we will see what it will become.

Categories: Java
