Feed aggregator

Tail Recursion in Scala [Video]

Javalobby Syndicated Feed - Mon, 11-Dec-17 04:01

Recursion is quite common in the programming world. As you probably know, it's the process of solving a problem by breaking it down into smaller subproblems. You can easily spot recursion if you see a method calling itself with a smaller subset of inputs.

Why Recursion?

Many programmers consider recursion tough and error-prone, but in Scala, we use recursion because:

Categories: Java

10 Talented Women in the Java/JVM Community

Javalobby Syndicated Feed - Mon, 11-Dec-17 01:01

A couple of weeks ago, Duchess, a global organization for women in Java technology, celebrated its 10th anniversary:

This got me thinking of the women in the Java/JVM community from whom I have learned a lot, whether through their books, courses, or presentations.

Categories: Java

Developing modern offline apps with ReactJS, Redux and Electron – Part 4 – Electron

codecentric Blog - Sun, 10-Dec-17 23:00

The previous part of this series showed the beautiful interplay of React and Redux. In this part, we are going to take a look at a technology called Electron. Electron, an essential technology in our recent projects, differs greatly from the subjects of the previous two parts of this blog series. React and Redux are solely used to implement the application logic. Electron, on the other hand, is used to implement both structure and application logic to create real cross-platform desktop apps. It is a wrapper that contains a Chromium browser in a NodeJS environment. This technology enables the combination of pure web frontend technologies and additionally gives your application full access to the underlying operating system via NodeJS. In the following, we will introduce the basic concepts using a simple Electron app and show how this technology solves the everlasting single-threaded obstacle of non-responsive JavaScript applications.

  1. Introduction
  2. ReactJS
  3. ReactJS + Redux
  4. Electron framework
  5. ES5 vs. ES6 vs. TypeScript
  6. WebPack
  7. Build, test and release process

The Core Parts

An Electron app consists of a few main parts. The basic concept is that you have two or more concurrently running processes. First, there is the main process of your application. In this process you have access to NodeJS, and thus to all of your operating system’s power, as well as to a large, distinct subset of the Electron API. Furthermore, the main process creates browser windows. These have one or more render processes and share an important property with your normal browser: the processes are contained in a sandbox, because they are responsible for rendering the DOM of our web app. Render processes have access to the NodeJS API and a distinct subset of the Electron API, but not to the operating system.

A few functionalities of Electron can even be used in both the main and the render processes. By default, JavaScript execution in NodeJS and Chromium is single-threaded and therefore still limited, even though both processes are operating-system-level processes.

Electron core parts

OS Integration

Since Electron is a JavaScript technology, the final app can be deployed to common desktop operating systems like Windows, macOS and Linux in 32- and 64-bit versions. To do so, you can use the electron-packager, which is developed by the community. The packager creates installers for various operating systems, which makes it easy to deploy Electron apps in enterprise environments. Furthermore, Electron provides essential OS integration on its own: menu bars, OS-level notifications, file dialogs and many other features for nearly all operating systems.

In our projects we used the file dialog to import files from the file system. The allowed properties depend on the operating system. Please check out the API for more details [DIALOG].

const {dialog} = require('electron');
const properties = ['openFile', 'openDirectory'];
dialog.showOpenDialog({ properties });

We also created custom Electron menu bars for production and development mode. During development we could toggle the developer tools from Chromium. For production you can remove that feature from the final Electron app.

const electron = require('electron');

const createMenu = () => {
  const { Menu } = electron;
  const template = [
    {
      label: 'Edit',
      submenu: [
        { role: 'cut' },
        { role: 'copy' },
        { role: 'paste' },
        { role: 'selectall' }
      ]
    },
    {
      label: 'View',
      submenu: [
        { role: 'reload' },
        { role: 'forcereload' },
        { role: 'toggledevtools' }
      ]
    }
  ];
  const menu = Menu.buildFromTemplate(template);
  Menu.setApplicationMenu(menu);
};

Electron native menu

To see a full list of all native Electron features, go to [ELECTRON].

IPC Communication

In the previous section we talked about the awesome OS integration of Electron. But how can we harness the full potential of our operating system and backend languages like NodeJS to unleash the power of JavaScript? We can do this with Electron’s built-in inter-process communication. The modules that handle that communication, ipcMain and ipcRenderer, are part of Electron’s core. ipcMain lets the main process receive and answer messages from render processes; ipcRenderer handles the opposite direction, from render to main.

“The ipcRenderer module is an instance of the EventEmitter class. It provides a few methods so you can send synchronous and asynchronous messages from the render process (web page) to the main process. You can also receive replies from the main process.” [IPCRENDERER]

In the following example, we register an event listener with ipcMain using the channel name LOAD_FILE_WITH_PATH. Once the event listener finishes, we send an event back to the React app. Depending on the result, we add a “success” or “error” suffix to the channel name. This allows us to handle the response differently inside React [IPCMAIN].

In the React app, we use ipcRenderer.send to send messages asynchronously to the event listener, using the identical channel name. To send messages synchronously, use ipcRenderer.sendSync. After that, we add a one-time listener function for the event using ipc.once. To distinguish IPC calls, we add a unique UUID to the channel name [IPCRENDERER].

electron.js
const ipc = require('electron').ipcMain;
// Assumed helper modules: shared channel name constants and the NodeJS file service
const ipcConstants = require('./ipcConstants');
const fileService = require('./electronFileService');
ipc.on(ipcConstants.LOAD_FILE_WITH_PATH, async (event, request) => {
  try {
    const fileContent = await fileService.readFileAsync(request.path);
    event.sender.send(
      `${ipcConstants.LOAD_FILE_WITH_PATH}-success-${request.uuid}`, fileContent);
  } catch (error) {
    event.sender.send(
      `${ipcConstants.LOAD_FILE_WITH_PATH}-error-${request.uuid}`, error.message);
  }
});
fileService.js
const ipc = require('electron').ipcRenderer;
const uuidV4 = require('uuid/v4');
// Shared channel name constant (module path assumed)
const { LOAD_FILE_WITH_PATH } = require('./ipcConstants');
export function readFileContentFromFileSystem(path) {
  const uuid = uuidV4();
  ipc.send(LOAD_FILE_WITH_PATH, { uuid, path });
  return new Promise((resolve, reject) => {
    ipc.once(`${LOAD_FILE_WITH_PATH}-success-${uuid}`,
      (event, xml) => {
        resolve(xml);
      });
    ipc.once(`${LOAD_FILE_WITH_PATH}-error-${uuid}`,
      (event, args) => {
        reject(args);
      });
  });
}

To debug the IPC communication between your React application and Electron, you need to install the Electron DevTools Extension.

npm install --save-dev devtron

Afterwards, run the following command from the console tab of your application. This will add another tab with the Devtron tools.

require('devtron').install()

Under the Devtron tab you get all kinds of details about your Electron application. Devtron displays all default event listeners from Electron as well as your own custom listeners. Under the IPC link you can record all IPC calls from your application. The Lint tab allows you to do Lint checks and the Accessibility tab checks your web application against the Accessible Rich Internet Applications Suite (ARIA) standard.

Devtron event listener

Here is an example of what the IPC communication in our project looks like.

Devtron IPC call

Remember that we claimed that Electron is the end of the everlasting single-threaded obstacle? Using IPC, we can move CPU-intensive work to Electron and outsource these tasks using electron-remote. With a single line we can create a task pool that will actually create a new browser window in the background and execute our code (electronFileService.js) in a separate OS process / browser window. Here is an example of how to set up the task pool for the file service.

const { requireTaskPool } = require('electron-remote');
const fileService = requireTaskPool(require.resolve('./electronFileService'));

Offline and Storage

When developing an offline desktop application with Electron, you have several options for where to store and read data.

Option 1: Electron / NodeJS

In Electron you can execute NodeJS commands. Therefore you can use almost any module from npmjs.org to read and store data on your local operating system. We recommend this option when you need to persist and process a lot of data.

  • SQLite3 (relational database) [SQLITE]
  • MongoDB (document database) [MONGODB]
  • Neo4J (graph database) [NEO4J]

Electron app

Option 2: React & Redux / Web Browser

In the second option we persist and process data inside the browser. Modern browsers offer a range of APIs that allow for persisting browser data, e.g. LocalStorage, IndexedDB, SessionStorage, WebSQL and Cookies. We recommend this approach for small datasets that need to be persisted locally. This can be done with any web technology. In our case, the React web application uses Redux as a store for the application state. You can use the redux-persist module to automatically persist the Redux store to IndexedDB or LocalStorage. In case your web app crashes or you restart the browser, you can configure redux-persist [REDUXP] to automatically rehydrate the Redux store.

React WebApp

Modern browsers also support the Service Worker API to spawn threads for processing data. If there is information that you need to persist and reuse across restarts, service workers have access to the various browser storage technologies.

Option 3: Combination of Option 1 and 2

There might be times when your desktop client is online and can retrieve data from a backend server. With our proposed stack you have full freedom in choosing how to access the backend services. You can either call the backend services via the web application layer (i.e. the React web app) or via the Electron/NodeJS layer. Which way you choose is up to you and might depend on security restrictions, the existence of NodeJS modules you can reuse, or other aspects.

Electron React App

Summary

Electron is an extremely powerful technology that enables you and your team to create beautiful, responsive, OS-independent and maintainable desktop applications. Because there is so much more to Electron, we highly recommend reading https://electronjs.org/docs for the parts that you are interested in or need in your projects. Stay tuned for our next article.

References

The post Developing modern offline apps with ReactJS, Redux and Electron – Part 4 – Electron appeared first on codecentric AG Blog.

Categories: Agile, Java, TDD & BDD

Automatic-Module-Name: Calling All Java Library Maintainers

Javalobby Syndicated Feed - Sun, 10-Dec-17 22:01

Creating modular applications using the Java module system is an enticing prospect. Modules have module descriptors in the form of module-info.java, declaring which packages they export and which dependencies they have on other modules. This means we finally have explicit dependencies between modules at the language level and can strongly encapsulate code within these modules. The book Java 9 Modularity (O'Reilly), written by Paul Bakker and me, explains these mechanisms and their benefits in detail.

That, however, is not what this post is about. Today, we'll talk about what needs to be done to move the Java library ecosystem toward modules. In the ideal world where all libraries have module descriptors, all is well.

Categories: Java

Memory Leaks: Fallacies and Misconceptions

Javalobby Syndicated Feed - Sat, 09-Dec-17 23:01

In the years we have spent building Plumbr, we have detected and solved so many memory leaks that I have actually lost count. Interestingly, during these years, we have encountered even more situations where a memory leak was nowhere in sight, but somehow our users were convinced that there had to be one. The pressure from such users has been high enough for us to even come up with a specific term that we use internally: “memory anxiety.”

With Java memory management being a complex domain, I do understand the background of this anxiety. When your software does not perform the way it should, the current state of root cause detection forces you to apply different kinds of dark arts to really understand what is going on. Often enough, the process involves a lot of guesswork. One frequent guess seems to take the form of, “Gosh, I have a memory leak.” In this post, I would like to give some examples of different situations and suggest patterns that you can follow to verify whether or not you actually are a victim of a memory leak.

Categories: Java

Spring 5, Embedded Tomcat 8, and Gradle

Javalobby Syndicated Feed - Fri, 08-Dec-17 22:01

In this article, we are going to learn how to use Gradle to structure a Spring 5 project with Tomcat 8 embedded. We will start from an empty directory and will analyze each step needed to create an application that is distributed as an über/fat jar. This GitHub repository contains a branch called complete with the final code that we will have after following the steps described here.

Why Spring

Spring is the most popular framework available for the Java platform. Developers using Spring can count on a huge, thriving community that is always ready to help. For example, the framework's repository has more than 11k forks on GitHub, and more than 120k questions asked on StackOverflow are related to it. Besides that, Spring provides extensive and up-to-date documentation that covers the inner workings of the framework.

Categories: Java

After 10 Years, Effective Java 3rd Edition Is Coming Soon

Javalobby Syndicated Feed - Fri, 08-Dec-17 14:01

Hello guys, I have some interesting news to share with you today. After a long wait of almost 10 years, Effective Java 3rd edition is finally coming this year.

The Effective Java 2nd Edition was released in May 2008 and updated for Java SE 6, but it has been a good 10 years now and there is a lot of interest from Java developers around the world for Effective Java 3rd edition, especially after Java SE 8's release. I am very happy to inform you all that, finally, all our wishes have been granted and Effective Java 3rd edition is set to arrive this year.

Categories: Java

This Week in Spring: Tool Suites and Cloud Gateways

Javalobby Syndicated Feed - Fri, 08-Dec-17 10:01

Hi, Spring fans and welcome to This Week in Spring from the premier JVM-language event SpringOne Platform 2017! There is a massive amount of stuff to cover, especially in light of SpringOne Platform, so let’s get to it!

Categories: Java

Refer to a Connector Configuration From a Java Component [Snippet]

Javalobby Syndicated Feed - Fri, 08-Dec-17 04:01

When using a Java component to send a message to a VM inbound endpoint, if there is more than one connector configuration defined, then it is necessary to specify the connector reference to use. Otherwise, an error similar to the one below will be shown:

org.mule.transport.service.TransportFactoryException: 
There are at least 2 connectors matching protocol "vm", 
so the connector to use must be specified on the endpoint 
using the 'connector' property/attribute. Connectors in your 
configuration that support "vm" are: VM-1, VM-2, 
(java.lang.IllegalStateException). Component that caused 
exception is: DefaultJavaComponent{vm-javaFlow.component.2140635066} 


Categories: Java

Choosing the Right GC

Javalobby Syndicated Feed - Fri, 08-Dec-17 01:01

Size matters when it comes to software. It has become clear that using small pieces within a microservices architecture delivers more advantages compared to the big monolith approach. Jigsaw, shipped in the recent Java 9 release, helps decompose legacy applications or build new cloud-native apps from scratch.

This approach reduces disk space, build time, and startup time. However, it doesn’t help enough with RAM usage management. It is well-known that Java consumes a large amount of memory in many cases. At the same time, many have not noticed that Java has become much more flexible in terms of memory usage and provided features to meet the requirements of microservices.

Categories: Java

The Power of the Gradle Kotlin DSL

Javalobby Syndicated Feed - Thu, 07-Dec-17 22:01

The following is based on Gradle 4.3.1.

A few weeks ago, I started migrating most of my Groovy-based build.gradle scripts to Kotlin-backed build.gradle.kts scripts using the Kotlin DSL.

Categories: Java

Scala: The Option Type (Part 1)

Javalobby Syndicated Feed - Thu, 07-Dec-17 14:01

Developers familiar with Java will have experienced NullPointerExceptions at some point. null is mainly used to indicate that no value is assigned to a reference variable or an Object.

Different languages treat null in different ways. Scala tries to solve the problem of nulls by getting rid of null values altogether and by providing a type to represent an optional/unknown value, i.e. Option[Employee].

Categories: Java

Uncommon Java Syntax: Ellipses…

Javalobby Syndicated Feed - Thu, 07-Dec-17 10:01

Existing since Java SE 5.0, the ellipsis, also known as varargs, is one of those rarely utilized features of Java. My guess is many novice programmers, and indeed even some experienced ones, have yet to meet Mr. Ellipsis — "…". I for one didn’t come across this elegant feature until after a year of full-time programming with Java. So what is an ellipsis?

Defining Varargs

I could not find any clear definition in the Javadocs, but from what I could gather online, ellipses, officially known as varargs (variable arguments), are a Java syntax describing a method parameter that can take zero or many arguments. Confusing? Let’s look at an example.
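For illustration, here is a minimal varargs method that joins any number of Strings:

// The String... parameter gathers zero or more arguments into a String[]
static String join(String separator, String... parts) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < parts.length; i++) {
        if (i > 0) sb.append(separator);
        sb.append(parts[i]);
    }
    return sb.toString();
}

join(", ");                 // zero varargs passed -> ""
join(", ", "a", "b", "c");  // -> "a, b, c"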

Categories: Java

Java Quiz 6: Calling Constructors by Using the Keyword This

Javalobby Syndicated Feed - Thu, 07-Dec-17 07:01

Before we start with this week's quiz, here is the answer to Java Puzzle 5: Static Variables and Object Instantiation.

  1. We actually need to invoke the method intMethod to assign a value to the variable i and invoke the strMethod to assign a value to the object str.

Categories: Java

Expert Advice: NoSQL Options for Java Devs (Part 2)

Javalobby Syndicated Feed - Thu, 07-Dec-17 01:01

A few months ago, I wrote about NoSQL Options for Java Developers. To create that post, I analyzed data from a variety of sources (Indeed jobs, GitHub stars, Stack Overflow tags) to choose the top five options: MongoDB, Redis, Cassandra, Neo4j, and PostgreSQL. For this follow-up post, I shared my findings with a few experts I know in the Java and NoSQL communities and asked them the following questions:

  1. Do you agree with my choices of the top 5 NoSQL options (MongoDB, Redis, Cassandra, Neo4j, and PostgreSQL with its JSON support)?
  2. Do you have any good or bad stories about using any of these databases in production?
  3. Have you found any of these databases particularly difficult to get started with or maintain over time?
  4. What is your favorite NoSQL database and why?
  5. Anything else you’d like to share?

Today, I’m happy to share their answers with you. But first, some introductions.

Categories: Java

Validating Topic Configurations in Apache Kafka

codecentric Blog - Thu, 07-Dec-17 00:08

Messages in Apache Kafka are appended to (partitions of) a topic. Topics have a partition count, a replication factor and various other configuration values. Why do those matter and what could possibly go wrong?

Why does Kafka topic configuration matter?

There are three main parts that define the configuration of a Kafka topic:

  • Partition count
  • Replication factor
  • Technical configuration

The partition count defines the level of parallelism of the topic. For example, a partition count of 50 means that up to 50 consumer instances in a consumer group can process messages in parallel. The replication factor specifies how many copies of a partition are held in the cluster to enable failover in case of broker failure. And in the technical configuration, one can define the cleanup policy (deletion or log compaction), flushing of data to disk, maximum message size, permitting unclean leader elections and so on. For a complete list, see https://kafka.apache.org/documentation/#topicconfigs. Some of these properties are quite easy to change at runtime. For others this is a lot harder, though.

Let’s take the partition count. Increasing it is easy – just run

bin/kafka-topics.sh --alter --zookeeper zk:2181 --topic mytopic --partitions 42

This might be sufficient for you. Or it might open the fiery gates of hell and break your application. The latter is the case if you depend on all messages for a given key landing on the same partition (to be handled by the same consumer in a group), or, for example, if you run a Kafka Streams application. If that application uses joins, the involved topics need to be copartitioned, meaning that they need to have the same partition count (and producers using the same partitioner, but that is hard to enforce). Even without joins, you don’t want messages with the same key to end up in different KTables.

Changing the replication factor is serious business. It is not a case of simply saying “please increase the replication factor to x” as it is with the partition count. You need to completely reassign partitions to brokers, specifying the preferred leader and n replicas for each partition. It is your task to distribute those well across your cluster. This is no fun for anyone involved. Practical experience with this has actually led to this blog post.
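For illustration, such a reassignment is passed as a JSON file to the standard kafka-reassign-partitions.sh tool and must list the desired replica set for every single partition (topic name and broker IDs invented here):

{
  "version": 1,
  "partitions": [
    { "topic": "mytopic", "partition": 0, "replicas": [1, 2, 3] },
    { "topic": "mytopic", "partition": 1, "replicas": [2, 3, 1] }
  ]
}

bin/kafka-reassign-partitions.sh --zookeeper zk:2181 --reassignment-json-file reassign.json --execute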
The technical configuration has an impact as well. It could be for example quite essential that a topic is using compaction instead of deletion if an application depends on that. You also might find the retention time too small or too big.

The Evils of Automatic Topic Creation

In a recent project, a central team managed the Kafka cluster. This team kept a lot of default values in the broker configuration. This is mostly sensible as Kafka comes with pretty good defaults. However, one thing they kept was auto.create.topics.enable=true. This property means that whenever a client tries to write to or read from a non-existing topic, Kafka will automatically create it. Defaults for partition count and replication factor were kept at 1.

This led to the situation where the team forgot to set up a new topic manually before running producers and consumers. Kafka created that topic with default configuration. Once this was noticed, all applications were stopped and the topic deleted – only to be created again automatically seconds later, presumably because the team didn’t find all clients. “Ok”, they thought, “let’s fix it manually”. They increased the partition count to 32, only to realize that they had to provide the complete partition assignment map to fix the replication factor. Even with tool support from Kafka Manager, this didn’t give the team members a great feeling. Luckily, this was only a development cluster, so nothing really bad happened. But it was easy to conceive that this could also happen in production as there are no safeguards.

Another danger of automatic topic creation is the sensitivity to typos. Let’s face it – sometimes we all suffer from butterfingers. Even if you took all necessary care to correctly create a topic called “parameters”, a single slip in one client might leave you with something like this (topic listing invented for illustration):
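# invented listing – note the typo twin of the intended topic
bin/kafka-topics.sh --list --zookeeper zk:2181
parameters
paramaters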

Automatic topic creation means that your producer thinks everything is fine, and you’ll scratch your head as to why your consumers don’t receive any data.

Another conceivable issue is that a developer who is not yet that familiar with the Producer API might confuse the String parameters of the send method.
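A sketch of that mix-up, using the standard producer API (variable names invented for illustration); note that the first String parameter of ProducerRecord is the topic, not the key:

// Intended: send the payload with a random key to the topic "parameters"
producer.send(new ProducerRecord<>("parameters", UUID.randomUUID().toString(), payload));

// Accidentally written: the random UUID lands where the topic name belongs
producer.send(new ProducerRecord<>(UUID.randomUUID().toString(), payload));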

So while our developer meant to assign a random value to the message key, he accidentally set a random topic name. Every time a message is produced, Kafka creates a new topic.

So why don’t we just switch automatic topic creation off? Well, if you can: do it. Do it now! Sadly, the team didn’t have that option. But an idea was born – what would be the easiest way to at least fail fast at application startup when something is different than expected?

How to automatically check your topic configuration

In older versions of Kafka, we basically used the code called by the kafka-topics.sh script to programmatically work with topics. To create a topic, for example, we looked at how to use kafka.admin.CreateTopicCommand. This was definitely better than writing straight to Zookeeper because there is no need to replicate the logic of “which ZNode goes where”, but it always felt like a hack. And of course we got a dependency on the Kafka broker in our code – definitely not great.

Kafka 0.11 implemented KIP-117, thus providing a new type of Kafka client – org.apache.kafka.clients.admin.AdminClient. This client enables users to programmatically execute admin tasks without relying on those old internal classes or even Zookeeper – all Zookeeper tasks are executed by brokers.

With AdminClient, it’s fairly easy to query the cluster for the current configuration of a topic. For example, this is the code to find out if a topic exists and what its partition count and replication factor is:
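A minimal sketch using the AdminClient API (bootstrap server and topic name are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.*;

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

try (AdminClient adminClient = AdminClient.create(props)) {
    DescribeTopicsResult result = adminClient.describeTopics(Collections.singleton("mytopic"));
    // The future completes exceptionally if the topic does not exist
    TopicDescription description = result.values().get("mytopic").get();
    int partitionCount = description.partitions().size();
    int replicationFactor = description.partitions().get(0).replicas().size();
}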

The DescribeTopicsResult contains all the info required to find out if the topic exists and how partition count and replication factor are set. It’s asynchronous, so be prepared to work with Futures to get your info.

Getting configs like cleanup.policy works similarly, but uses a different method:
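Again a sketch, reusing the adminClient from the previous snippet:

import org.apache.kafka.common.config.ConfigResource;

ConfigResource resource = new ConfigResource(ConfigResource.Type.TOPIC, "mytopic");
// describeConfigs returns one future per resource, including broker-side defaults
Config config = adminClient.describeConfigs(Collections.singleton(resource))
    .values().get(resource).get();
String cleanupPolicy = config.get("cleanup.policy").value();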

Under the hood there is the same Future-based mechanism.

A first implementation attempt

If you are in a situation where your application depends on a certain configuration for the Kafka topics you use, it might make sense to fail early when something is not right. You get instant feedback and have a chance to fix the problem. Or you might at least want to emit a warning in your log. In any case, as nice as the AdminClient is, this check is not something you should have to implement yourself in every project.

Thus, the idea for a small library was born. And since naming things is hard, it’s called “Club Topicana”.

With Club Topicana, you can check your topic configuration every time you create a Kafka Producer, Consumer or Streams client.

Expectations can be expressed programmatically or via configuration files. Programmatically, it uses a builder:
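A sketch of what this can look like; the builder and method names here are assumptions based on the description below, so check the GitHub repository for the authoritative API:

// Builder and method names assumed for illustration
ExpectedTopicConfiguration expected =
    new ExpectedTopicConfiguration.ExpectedTopicConfigurationBuilder("test_topic")
        .withPartitionCount(32)
        .withReplicationFactor(3)
        .with("cleanup.policy", "delete")
        .with("retention.ms", "30000")
        .build();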

This basically says “I expect the topic test_topic to exist. It should also have 32 partitions and a replication factor of 3. I also expect the cleanup policy to be delete. Kafka should retain messages for at least 30 seconds.”

Another option to specify an expected configuration is YAML (parser is included):
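An illustrative sketch of such a YAML file (the exact schema is an assumption; see the project's documentation):

# hypothetical schema – expected configuration for one topic
- name: test_topic
  partitions: 32
  replicationFactor: 3
  config:
    cleanup.policy: delete
    retention.ms: 30000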

What do you do with those expectations? The library provides factories for all Kafka clients that mirror their public constructors and additionally expects a collection of expected topic configurations. For example, creating a producer can look like this:
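A sketch with an assumed factory name (the factories mirror the public KafkaProducer constructors and additionally take the expected topic configurations):

Properties producerProps = new Properties();
producerProps.put("bootstrap.servers", "localhost:9092");
producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

// Factory name assumed for illustration – checks the expectations, then creates the producer
KafkaProducer<String, String> producer =
    ClubTopicana.createProducer(producerProps, Collections.singletonList(expected));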

The last line throws a MismatchedTopicConfigException if the actual configuration does not meet expectations. The message of that exception lists the differences. It also provides access to the computed result so users can react to it in any way they want.

The code for consumers and streams clients looks similar. Examples are available on GitHub. If all standard clients are created using Club Topicana, an exception will prevent creation of a client and thus auto creation of a topic. Even if auto creation is disabled, it might be valuable to ensure that topics have the correct configuration.

There is also a Spring client. The @EnableClubTopicana annotation triggers Club Topicana to read the YAML configuration and execute the checks. You can configure whether mismatches are merely logged or make the creation of the application context fail.
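Usage might look like this minimal sketch (class name invented; where the YAML file lives is configured as described in the library's documentation):

@SpringBootApplication
@EnableClubTopicana
public class MyKafkaApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyKafkaApplication.class, args);
    }
}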

This is all on GitHub and available on Maven Central.

Caveats

Club Topicana will not notice when someone changes the configuration of a topic after your application has successfully started. And of course it cannot guard against other clients doing whatever they like on Kafka.

Summary

The configuration of your Kafka topics is an essential part of running your Kafka applications. Wrong partition count? You might not get the parallelism you need or your streams application might not even start. Wrong replication factor? Data loss is a real possibility. Wrong cleanup policy? You might lose messages that you depend on later. Sometimes, your topics might be auto-generated and come with bad defaults that you have to fix manually. With the AdminClient introduced in Kafka 0.11, it’s simple to write a library that compares actual and desired topic configurations at application startup.

The post Validating Topic Configurations in Apache Kafka appeared first on codecentric AG Blog.

Categories: Agile, Java, TDD & BDD

Java Concurrency in Depth (Part 1)

Javalobby Syndicated Feed - Wed, 06-Dec-17 22:01

Java comes with strong support for multi-threading and concurrency, which makes it easy to write concurrent applications. But multi-threaded applications are usually tricky to debug, troubleshoot, and sometimes to scale. From my experience with concurrent applications, most issues are found when they run at scale – which, in many cases, means when they go live. In order to make this easier, it is better to understand how things work under the hood and the pros and cons of every choice.

This article is the first in a series of articles discussing the internals of Java concurrency. 

Categories: Java

Stubbing With Mockito ArgumentCaptor [Snippets]

Javalobby Syndicated Feed - Wed, 06-Dec-17 14:01

Consider a scenario where we are testing a method that depends on a collaborator. This collaborator takes an argument while calling one of its methods. 

Now, there can be two scenarios.

Categories: Java

Calling All Speakers for Index - San Francisco

Javalobby Syndicated Feed - Wed, 06-Dec-17 13:35

Calling all speakers!

Index - San Francisco is an open developer community event designed for those who write code for a living. By speaking at this new event, you’ll get the opportunity to educate and inspire your peers while establishing yourself as a leader in the developer community.

Categories: Java

Testing Your Spring App for Thread Safety

Javalobby Syndicated Feed - Wed, 06-Dec-17 10:01

In the following post, I want to show you how to test if your Spring application is thread-safe. As an example application, I use the Spring PetClinic project.

To detect concurrency bugs during our tests, we use vmlens. vmlens traces the test execution and analyzes the trace afterward. It detects deadlocks and race conditions during the test run.

Categories: Java
