Rust for Java Devs: Creating Variables

Javalobby Syndicated Feed - 4 hours 8 min ago

I’m going to do something different today and write about Rust instead of Java. This is something that I want to give a try, and if it all goes well, I will continue writing some posts about Rust in the future.

I will be writing about learning Rust from the perspective of a Java developer by making comparisons where appropriate to help you (and me) understand what is going on. These will most likely be shorter posts and, honestly, might not even contain that much information — but hopefully, someone finds these sort of posts helpful and makes it all worth it.

Categories: Java

Effective Java 3rd Edition: A Must-Read for Every Developer

Javalobby Syndicated Feed - 8 hours 8 min ago

Joshua Bloch finally updated his popular book Effective Java for Java 7, 8, and 9. The previous edition was one of the most popular books among professional Java developers, and I couldn’t wait to finally read the updated 3rd edition.

I got this book two weeks ago, and it more than fulfilled my expectations. It is packed with best practices and detailed descriptions of the finer details of the Java language. Every developer should at least read the chapters about generics and lambdas.

Categories: Java

Closures in Scala [Video]

Javalobby Syndicated Feed - 14 hours 8 min ago

Before we begin discussing closures in Scala, let us have a quick look at a function. The function literal bill below accepts only one input parameter: netBill. However, the function calculates the final bill by adding a service charge to netBill.

val bill = (netBill: Double) => netBill*(1 + serviceCharge/100)
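For comparison, Java lambdas can close over (effectively final) variables from the enclosing scope in much the same way. A minimal Java sketch of the same bill calculation, with a hypothetical serviceCharge of 15 percent:

```java
import java.util.function.DoubleUnaryOperator;

public class BillExample {
    // Returns a closure that captures serviceCharge (in percent) as a free variable
    static DoubleUnaryOperator bill(double serviceCharge) {
        return netBill -> netBill * (1 + serviceCharge / 100);
    }

    public static void main(String[] args) {
        DoubleUnaryOperator bill = bill(15.0);
        System.out.println(bill.applyAsDouble(100.0));
    }
}
```

The difference is that Java only allows capturing effectively final variables, while Scala closures may also capture mutable ones.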

Categories: Java

An Introduction to Hollow JARs

Javalobby Syndicated Feed - 17 hours 8 min ago

I have written in the past about the difference between an application server and an UberJAR. The short version of that story is that an application server is an environment that hosts multiple Java EE applications side by side, while an UberJAR is a self-contained executable JAR file that launches and hosts a single application.

There is another style of JAR file that sits in-between these two styles called a Hollow JAR.

Categories: Java

BigchainDB – The lightweight blockchain framework [blockcentric #5]

codecentric Blog - 19 hours 9 min ago

With BigchainDB we see one of the first complete yet simple blockchain frameworks. The project strives to make blockchain usable for a large number of developers and use cases without requiring special knowledge of cryptography or distributed systems.

According to benchmarks (whose scripts are included in the repository), a simple BigchainDB network can accept and validate 800 transactions per second (compared to 3–10 tx/s for Bitcoin). This high data throughput is due to the Big Data technologies selected for data persistence. Database systems from this environment use proven Paxos-based mechanisms to reach consensus on the state of the data.
MongoDB or RethinkDB can be selected as the underlying database for a BigchainDB cluster. Both options are document-oriented NoSQL databases that can be scaled horizontally via replication and sharding. They are also schema-free, so data can be stored in them without first defining a uniform global schema.
On their own, both systems are generally capable of more than 80,000 write operations per second.

Transactions in BigchainDB

The framework makes use of the consensus mechanism of the underlying database cluster. In order to validate transactions, it is also possible to implement your own validation mechanisms.
Two types of transactions are available in BigchainDB. CREATE transactions create an asset, i.e. a collection of data, and link it to at least one owner (its public key). In addition, all assets are “divisible” and can therefore be broken down into different shares for different owners.
With TRANSFER transactions, complex instructions can be created that link existing assets (and their shares) to conditions of further transactions. In this way, assets and parts thereof can easily be moved between participants in the network.
As usual in a blockchain, there is no way to delete an asset or modify its properties.

For a transaction to be validly processed in the network, several conditions must be met.
Once a transaction has been received, the nodes check it for the correct structure. For example, CREATE transactions must contain their asset data, while TRANSFER transactions reference the ID of an existing asset. Both types also differ in how they deal with inputs and outputs. Of course, each transaction must be signed and hashed before it is transmitted to the network.
If the structure of a transaction is valid, the validity of the contained data is checked. In short, this step prevents so-called “double spending”: a transaction cannot repeatedly transfer the same asset or spend assets that other transactions have already spent.
In addition, you can implement your own validation mechanisms that check asset creation and transfers for domain-specific correctness, for example whether realistic dimensions have been assigned to a car part or whether the color code of an automotive paint actually exists.

For a deeper understanding of the idea as well as the technical and architectural details of the project, it is recommended to read the BigchainDB Whitepaper, which was maintained until June 2016.

Scenarios and operation of a blockchain

Since the consensus in a BigchainDB is not implemented via public mechanisms such as Proof of Work, Proof of Stake or similar, the technology is more suitable for private blockchains. This means that some parties will form a consortium in order to jointly execute their transactions among themselves without the need for an intermediary. To this end, each participant of this association adds some infrastructure on which at least one node of the blockchain solution is operated. Therefore, each transaction that occurs in the network must be validated and confirmed by a technical and organizational party of the consortium. This approach is very lightweight and does not require participants to be rewarded for their validations. The reward for the participants is, after all, to build up a trusting network without questionable and costly middlemen.

Due to these circumstances, the operation of a private blockchain based on BigchainDB is relatively easy. Each member of the consortium must take care of setting up and maintaining a distributed database and BigchainDB cluster in its infrastructure. In addition, each member of course holds a private key to sign its messages to the network. Each organization participating in the network can be identified and verified by built-in certificate management and registry.

One example would be a merger of a number of banks operating payment transactions and information exchange among themselves. Usually, these participants do not fully trust each other and must involve third parties to verify the transactions. However, if this network of banks were to form a consortium that would automate and cryptographically secure each transaction, a third instance would be superfluous and could therefore be excluded.
With a BigchainDB solution in place, each bank would operate its own cluster in its infrastructure that is linked to the network.

BigchainDB is therefore particularly suitable for private blockchain networks with high activity and data volume. The stack is also suitable for archiving solutions in which many data records have to be stored in a trustworthy way for many years. This can be used, for example, to make systems obsolete that charge high service, hardware, and license fees for legally compliant data archiving.
The BigchainDB transaction model is also an excellent fit for tracking the steps of a supply chain.

Getting started on the local machine or in IPDB

The Interplanetary Database (IPDB) is now also offered as a managed BigchainDB blockchain network, with which one can interact as a registered organization.

Locally, BigchainDB can either be installed directly on the host or operated in Docker. The Docker variant is well suited for getting started and testing.

In order to develop client applications against the started network, some official and community-maintained drivers are available. From the wide range of Python, JavaScript, Java, Ruby, Haskell and Go, every developer will probably find the right library.

We wish you a lot of fun trying it out.

Our article series “blockcentric” discusses blockchain-related technology, projects, organizational and business concerns. It contains knowledge and findings from our 20% time work, but also news from the field.

We are looking forward to your feedback on the column and exciting discussions about your use cases.

Previously published blockcentric-Posts

The post BigchainDB – The lightweight blockchain framework [blockcentric #5] appeared first on codecentric AG Blog.

Categories: Agile, Java, TDD & BDD

Application Design the Java 9 Way

Javalobby Syndicated Feed - 20 hours 8 min ago

Java 8 brought many great improvements to the language. Java 9 enhanced them further! A major paradigm shift now needs to occur. The tools that Java 8 and 9 provide for application design are immensely improved. Java 8 allows static methods on interfaces. Java 9 provides a level of organization above the package: the module. Putting these two techniques together results in better designs and stronger object-oriented guarantees.

Interfaces Rule!

It feels like interfaces passed out of favor in the Java world. There was a time when classes not implementing interfaces were considered suspicious and discussed in code review. Now, major IDEs provide class templates that automatically declare classes public and don't provide or expect an "implements" clause.
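A sketch of what this looks like in practice (the Greeter names are illustrative, not from the article): a Java 8 static method on the interface acts as a factory, so client code programs purely against the interface while the implementing class stays hidden.

```java
public interface Greeter {
    String greet(String name);

    // Java 8 static interface method acting as a factory:
    // callers never reference the implementing class directly
    static Greeter create() {
        return new ConsoleGreeter();
    }
}

class ConsoleGreeter implements Greeter {
    @Override
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}
```

In a Java 9 module, the package containing ConsoleGreeter could simply not be exported, making the interface the only visible API and strengthening the encapsulation guarantee.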

Categories: Java

JUnit4, JUnit5, and Spock: A Comparison

Javalobby Syndicated Feed - Mon, 15-Jan-18 14:01

Recently, I gave a talk in my local Java User Group about unit testing. Some of the content of the talk was about some popular libraries you can use in your Java project. I’ve reviewed JUnit4, JUnit5, and the Spock framework. Many of the attendees were quite surprised with the differences. In this post, I will summarize asserts, parametrized tests, and mocking.

I always like to demonstrate concepts with examples and live coding, so I chose a simple algorithm: a Fibonacci number calculator. If you don’t know it, it simply generates numbers where each is the sum of the two previous ones in the series: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377.
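As a sketch of the kind of code under test (my wording, not the talk’s actual example), a simple iterative Fibonacci calculator might look like this:

```java
public class Fibonacci {
    // Returns the n-th Fibonacci number (1-indexed: fib(1) = 1, fib(2) = 1)
    static long fib(int n) {
        long prev = 0, curr = 1;
        for (int i = 1; i < n; i++) {
            long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 12; i++) {
            System.out.print(fib(i) + " "); // 1 1 2 3 5 8 13 21 34 55 89 144
        }
    }
}
```

A method like this is a good candidate for parametrized tests, since the same assertion runs over many input/output pairs.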

Categories: Java

Java Annotated Monthly: January 2018

Javalobby Syndicated Feed - Mon, 15-Jan-18 10:01

Hopefully, everyone has recovered from the holidays. November is probably one of the craziest months, a sharp contrast to late December and early January, which are some of the quietest ones, at least in my experience. People are wrapping up the year’s results, making plans for the year ahead, and of course, enjoying the holidays. If you had holidays like we did, this January digest might be helpful for updating you on what has happened in the community during the last month. If not, I hope you find the news and articles interesting anyway.

If you skipped the champagne and followed the blogs and social accounts more closely than we did, please feel free to share any missing news in the comments. Since Trisha, your regular host, is still on maternity leave, I will do my best to fill in for her.

Categories: Java

Kotlin: Tail Recursion Optimization

Javalobby Syndicated Feed - Mon, 15-Jan-18 04:01

The Kotlin compiler optimizes tail recursive calls — with a few catches. Consider a ranking function to search for the index of an element in a sorted array, implemented the following way using tail recursion (and a test for it):

import org.junit.Assert.assertEquals

fun rank(k: Int, arr: Array<Int>): Int {
    tailrec fun rank(low: Int, high: Int): Int {
        if (low > high) {
            return -1
        }
        val mid = (low + high) / 2
        return when {
            (k < arr[mid]) -> rank(low, mid - 1)
            (k > arr[mid]) -> rank(mid + 1, high)
            else -> mid
        }
    }
    return rank(0, arr.size - 1)
}

fun rankTest() {
    val array = arrayOf(2, 4, 6, 9, 10, 11, 16, 17, 19, 20, 25)
    assertEquals(-1, rank(100, array))
    assertEquals(0, rank(2, array))
    assertEquals(2, rank(6, array))
    assertEquals(5, rank(11, array))
    assertEquals(10, rank(25, array))
}

Categories: Java

Spring, Reactor, and ElasticSearch: Benchmarking With Fake Test Data

Javalobby Syndicated Feed - Mon, 15-Jan-18 01:01

In the previous article, we created a simple adapter from ElasticSearch's API to Reactor's Mono, which looks like this:

import reactor.core.publisher.Mono;

private Mono<IndexResponse> indexDoc(Doc doc) {

Categories: Java

Change Streams in MongoDB 3.6

codecentric Blog - Sun, 14-Jan-18 23:00

MongoDB 3.6 introduces an interesting API enhancement called change streams. With change streams you can watch for changes to certain collections by means of the driver API. This feature replaces all the custom oplog watcher implementations out there, including the one I used in the article on Near-Realtime Analytics with MongoDB.

For a start, we need to install MongoDB 3.6.0 or a higher version. After setting up a minimal replica set, we connect to the primary and set the feature compatibility to 3.6 to be able to use change streams (this will hopefully be the default in future versions):

use admin
db.adminCommand( { setFeatureCompatibilityVersion: "3.6" } )

I will use the Java driver for the following examples. In order to be able to watch for document changes, we write a simple program that inserts a bunch of documents into a collection:

MongoCollection<Document> eventCollection =
    new MongoClient(
        new MongoClientURI("mongodb://localhost:27001,localhost:27002,localhost:27003/test?replicaSet=demo-dev")
    ).getDatabase("test").getCollection("events");

long i = 0;
while (true) {
  Document doc = new Document();
  doc.put("i", i++);
  doc.put("even", i%2);
  eventCollection.insertOne(doc);
  System.out.println("inserted: " + doc);
  Thread.sleep(2000L + (long)(1000*Math.random()));
}

The output of this Java process looks like this:

inserted: Document{{i=1, even=0, _id=5a31187a21d65707e8282fa7}}
inserted: Document{{i=2, even=1, _id=5a31187d21d65707e8282fa8}}
inserted: Document{{i=3, even=0, _id=5a31187f21d65707e8282fa9}}
inserted: Document{{i=4, even=1, _id=5a31188121d65707e8282faa}}

In another Java process, we use the same code for retrieving the collection. On that collection we call the method watch, which takes a list of aggregation stages, just like the aggregate operation:

ChangeStreamIterable<Document> changes = eventCollection.watch(asList(
    Aggregates.match( and( asList(
      in("operationType", asList("insert")),
      eq("fullDocument.even", 1L))))));

We register only for insert operations on the collection and additionally filter for documents with the field even being equal to 1.


When we iterate over the cursor we just print out the matching documents:

changes.forEach(new Block<ChangeStreamDocument<Document>>() {
  @Override
  public void apply(ChangeStreamDocument<Document> t) {
    System.out.println("received: " + t.getFullDocument());
  }
});

The result looks like this:

received: Document{{_id=5a311e2021d657082268f38a, i=2, even=1}}
received: Document{{_id=5a311e2521d657082268f38c, i=4, even=1}}
received: Document{{_id=5a311e2a21d657082268f38e, i=6, even=1}}

With change streams, the MongoDB API grows even wider. Now you can quite easily build things that resemble triggers you know from traditional databases. There is no need for external event processing or your own oplog watcher implementation anymore.

The full source code can be found at GitHub.

The post Change Streams in MongoDB 3.6 appeared first on codecentric AG Blog.

Categories: Agile, Java, TDD & BDD

Converting Collections to Maps With JDK 8

Javalobby Syndicated Feed - Sun, 14-Jan-18 22:01

I have run into situations several times where it is desirable to store multiple objects in a Map instead of a Set or List because there are some advantages from using a Map of unique identifying information to the objects. Java 8 has made this translation easier than ever with streams and the Collectors.toMap(...) methods.

One situation in which it has been useful to use a Map instead of a Set is when working with objects that lack or have sketchy equals(Object) or hashCode() implementations, but do have a field that uniquely identifies the objects. In those cases, if I cannot add or fix the objects' underlying implementations, I can gain better uniqueness guarantees by using a Map of the uniquely identifying field of the class (key) to the class's instantiated object (value). Perhaps a more frequent scenario when I prefer Map to List or Set is when I need to lookup items in the collection by a specific uniquely identifying field. A map lookup on a uniquely identifying key is speedy and often much faster than depending on iteration and comparing each object with an invocation to the equals(Object) method.
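A minimal sketch of the pattern (the User class and its id field are illustrative, not from the article):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ToMapExample {
    static class User {
        final String id;
        final String name;
        User(String id, String name) { this.id = id; this.name = name; }
        String getId() { return id; }
    }

    // Maps each object's uniquely identifying field (the key) to the object itself
    static Map<String, User> byId(List<User> users) {
        return users.stream()
                .collect(Collectors.toMap(User::getId, Function.identity()));
    }

    public static void main(String[] args) {
        Map<String, User> map = byId(Arrays.asList(
                new User("u1", "Ada"), new User("u2", "Linus")));
        System.out.println(map.get("u2").name); // prints Linus
    }
}
```

Note that this two-argument Collectors.toMap throws an IllegalStateException on duplicate keys; if duplicates are possible, the three-argument overload with a merge function is the safer choice.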

Categories: Java

Java 9 Module Services

Javalobby Syndicated Feed - Sat, 13-Jan-18 23:01

Java has had a ServiceLoader class for a long time. It was introduced in Java 1.6, but a similar technology had been in use since around Java 1.2. Some software components used it, but its use was never widespread. It can be used to modularize an application (even more) and provide a means to extend an application using plug-ins that the application does not depend on at compile time. Also, the configuration of these services is very simple: just put them on the class/module path. We will examine the details.

The service loader can locate implementations of some interfaces. In EE environments, there are other methods to configure implementations. In non-EE environments, Spring has become ubiquitous, and it has a similar, though not identical, solution to a similar, but not identical, problem.
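The lookup itself is a one-liner. A minimal sketch (the Plugin interface is illustrative; since no provider is registered under META-INF/services here, the loader simply finds nothing):

```java
import java.util.ServiceLoader;

public class ServiceLoaderExample {
    // Illustrative service interface (not from the article)
    public interface Plugin {
        String name();
    }

    public static void main(String[] args) {
        // Locates implementations listed in META-INF/services/<interface-name>
        // (or declared with `provides` in a module-info). With no provider
        // registered, the iteration finds nothing.
        ServiceLoader<Plugin> loader = ServiceLoader.load(Plugin.class);
        int count = 0;
        for (Plugin p : loader) {
            System.out.println("found plugin: " + p.name());
            count++;
        }
        System.out.println("providers found: " + count); // prints 0 here
    }
}
```

Dropping a JAR on the class path that contains an implementation plus the services file is all it takes to extend the application.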

Categories: Java

Java 9 Modules Introduction (Part 1)

Javalobby Syndicated Feed - Fri, 12-Jan-18 23:01

In this post, we will introduce the Java Platform Module System (JPMS), which is the biggest change in the Java 9 release. In this post, we will take a look at some basics of the JPMS (Why do we need modules? What has changed in the JDK?). After that, we will take a look at how a single-module application can be created, compiled, and executed. At the end, we will take a look at how a multi-module application can be created, compiled, and executed as well. In this post, we will only use command line tools. The examples used can be found on GitHub.
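As a taste of what is to come, a single-module application is declared by a module-info.java at the root of the source tree. A minimal sketch (module and package names are illustrative):

```java
// src/com.example.app/module-info.java (hypothetical names)
module com.example.app {
    requires java.logging;       // declare a dependency on a platform module
    exports com.example.app.api; // make exactly one package visible to other modules
}
```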


Let's start with the fundamental question: Why do we need modules anyway? In order to answer that question, we take a look at JSR-376. The following bullets are copied from the JSR:

Categories: Java

Spring Boot With Ehcache 3 and JSR-107

Javalobby Syndicated Feed - Fri, 12-Jan-18 14:01

Here we are going to cover how to use Ehcache 3 for caching in Spring Boot based on JSR-107. We will tackle how to do operations on the cache itself (besides the well-known annotation usage).

Before we start, let's highlight JSR-107.

Categories: Java

7 Techniques for Thread-Safe Classes

Javalobby Syndicated Feed - Fri, 12-Jan-18 10:01

Almost every Java application uses threads. A web server like Tomcat processes each request in a separate worker thread, fat clients process long-running requests in dedicated worker threads, and even batch processes use the java.util.concurrent.ForkJoinPool to improve performance.

It is, therefore, necessary to write classes in a thread-safe way, which can be achieved by one of the following techniques.
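One such technique is delegating to the java.util.concurrent.atomic classes instead of locking; a minimal sketch (the HitCounter class is illustrative):

```java
import java.util.concurrent.atomic.LongAdder;

public class HitCounter {
    // Thread-safe without synchronized: LongAdder handles contention internally
    private final LongAdder hits = new LongAdder();

    void record() { hits.increment(); }

    long total() { return hits.sum(); }

    public static void main(String[] args) throws InterruptedException {
        HitCounter counter = new HitCounter();
        Runnable task = () -> { for (int i = 0; i < 1000; i++) counter.record(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.total()); // prints 2000, no updates lost
    }
}
```

The same counter built on a plain long field would lose updates under concurrent access, since ++ is not atomic.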

Categories: Java

Gamma-TicTacToe – Neural Network and Machine Learning in a simple game

codecentric Blog - Fri, 12-Jan-18 06:52

This post is about implementing a – quite basic – Neural Network that is able to play the game Tic-Tac-Toe. For sure there is not really a need for any Neural Network or Machine Learning model to implement a good – well, basically perfect – computer player for this game. This could be easily achieved by using a brute-force approach. But as this is the author’s first excursion into the world of Machine Learning, opting for something simple seems to be a good idea.


The motivation to start working on this post and the related project can be summed up in one word: AlphaGo. The game of Go is definitely the queen of competitive games. Before the age of AlphaGo it was assumed that it would take a really long time until any computer program could beat the best human players, if ever. But unlike the predominant chess programs, AlphaGo is based on some super-advanced – the word really fits here – Neural Network implementation. With this it simply swept away any human top player in the world. Depending on the viewpoint this is amazing, sad, scary or a bit of all.

If this is about the game of Go then why is there a video embedded about playing chess? The engine behind AlphaGo has been developed further. Its latest incarnation is called AlphaZero and it is so generic that it can teach itself different games based only on the rules. There is no human input required anymore, but learning is completely performed using self-play. This is really fascinating, isn’t it? AlphaZero had already easily defeated all its predecessors in the game of Go when it was trained to conquer the world of chess. After only four hours (!) of self-training it crushed the best chess engine around, which in turn would beat any human chess player.

So much for the motivation to start this project, which obviously cannot – and is not intended to – even scratch the surface of what has been achieved with AlphaZero. Though the project name is clearly inspired by it ;-).


Then what should be achieved? Learning about and implementing a Neural Network with some kind of self-learning approach to start with. As the rules of Tic-Tac-Toe are very simple – just play on an empty field – not too much time must be spent implementing the game mechanic as such. This allows focusing on the Neural Network and the learning approach.

Ideally the program should play the game perfectly in the end. This would mean it never loses to any human player and wins if that player does not play the best moves. Tic-Tac-Toe cannot be won by either player if both play decent moves.

The basic – and a little bit less ambitious – objective is that it wins a fair amount of games when playing against a random computer player after some amount of self-learning.

Playing a random computer player

Playing a random computer player is the first assessment of what has been achieved. Then we are going to take a closer look at the implementation, the ideas that did not work and the ideas that worked out in the end.

The complete implementation of Gamma-Tic-Tac-Toe can be found here; that page also includes instructions on how to compile and run it.

Self-play against the random computer player is implemented in a way that allows independent matches with any number of games. The Neural Network is re-initialized and trained again between two matches. By default each match consists of 10,000 games and 50 matches are performed. All these values are configurable. The number of training games is of course also configurable, as this is an interesting parameter for testing the Neural Network’s ability to learn.


The match between two random computer players is used to crosscheck the implementation. It is expected that the results are almost totally even as can be also seen in the following chart.

It is easy to make mistakes when validating the results using self-play. In the beginning the Neural Network always played the first move. In a game like Tic-Tac-Toe this of course led to skewed results. With two random computer players playing each other, this could be detected, as it was quite suspicious that one random player won far more often than the other.

chart showing the results of two random players


The next match is the random computer player vs. an untrained gamma-engine (the fancy name used instead of writing “the Neural Net playing Tic-Tac-Toe”). This is interesting as the matches are going back and forth, but without a clear overall winner or loser. The individual matches are often won quite clearly in comparison to the games played between two random computer players.

chart showing the results of a random computer player vs. an untrained gamma-engine


Now we have a gamma-engine that is trained with 50 games against the random computer player before each match. It can be seen that the number of matches won clearly increases compared to the untrained version. But quite a few matches are still lost, sometimes even pretty clearly.

chart showing results of a trained gamma-engine


With 250 training games things improve a lot. All but one of the matches are won, often quite clearly.

chart showing results of an increasingly trained gamma-engine


Interestingly, with 500 training games the results are pretty much the same as with 250. This time even two matches are lost. Still, it is obvious that the training has a positive effect on playing.

chart showing results from a gamma-engine with 500 trained games


So let’s perform 1500 training games before each match. The result is again not changing dramatically, but there is still some improvement.

charts showing some improvement after 1500 training games


Finally let’s make a huge jump to 15000 training runs before each match. With this amount of training the Neural Network is winning very consistently and on a high level. This result has been double-checked by executing it several times. The same is true for the other results as well.

chart showing high winning rate after 15000 training runs

The journey to gamma-engine stage-1

The results presented in the previous chapter are based on stage-1 of the gamma-engine. The different stages are intended to differ in the amount of learning that is applied. This is not related to the number of training runs, but to the factors used to learn something about the game. In stage-1 the “learning algorithm” is based on the following rule: if a game is won, the first and the last move of that game are “rewarded”.

This “rewarding of the right decisions” is a kind of backpropagation, which is often used to train Neural Networks, even though what has been done here is a bit simpler than what is described in that article.

Therefore the output weights of the corresponding neurons triggering those moves are increased. This does not seem to be a lot of learning at all, but it is enough for the results shown above. Of course this is only possible due to the fact that Tic-Tac-Toe is such a trivial game to play.

There are a lot of articles dealing with Neural Networks and Machine Learning. The corresponding Wikipedia page for example is quite extensive. Therefore this article is focusing on the practical approach towards the specific problem at hand and not so much on the theoretical side of Neural Networks. Still we need some theoretical background to start with.

A Neural Network is composed of different layers. It has an input layer, any number of hidden layers and an output layer. Theoretically each layer can have any number of neurons. But the number of input and output nodes is dictated by the data and the task at hand. Thus in practice only hidden layers can have an arbitrary number of nodes. In this implementation dense layers are used, where each neuron of one layer is connected to each neuron of the next layer. There are lots of other layer types where this is not the case. The input to a neuron is an input value representing (a part of) the task to be solved and a weight assigned to that connection. By modifying those weights the Neural Network can learn. The following diagram shows an example of such a Neural Network.

Different layers of a Neural Network

The input layer and the output layer are defined by the values to be processed and the result to be produced. It is pretty clear that there will be one output neuron as we need to generate one move in the end. That move will be the output of that single output neuron.

For the input neurons things are not that straightforward. It is clear that the game state must be passed to the input neurons. At first sight, the different possible board representations after making each valid move were considered as the input. But it is hard to do something meaningful with this in the hidden layer. Furthermore, the input neurons would have a different semantic every time, which makes learning difficult. The next idea was to map one field of the board to one input neuron. That worked to some extent. The final solution has three input neurons for each field on the board, representing the possible field states: empty, occupied by the computer player, and occupied by the opponent. With this approach it is important that the same field – with its corresponding state – is assigned to the same input neuron every time. This is depicted in the following diagram.

Input Layer with 27 Input Neurons

In addition some input value is required. This is defined based on the different fields and whether or not that field is empty, occupied by the computer player or occupied by the opponent.
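The described encoding can be sketched as a simple one-hot scheme with three neurons per field (the state constants and layout are assumptions based on the description above):

```java
public class BoardEncoder {
    static final int EMPTY = 0, COMPUTER = 1, OPPONENT = 2;

    // Maps each of the 9 fields to 3 input neurons: exactly one of the
    // three is set to 1.0, depending on the field's state.
    static double[] encode(int[] board) {
        double[] inputs = new double[27];
        for (int field = 0; field < 9; field++) {
            inputs[field * 3 + board[field]] = 1.0;
        }
        return inputs;
    }

    public static void main(String[] args) {
        int[] board = {COMPUTER, EMPTY, EMPTY, EMPTY, OPPONENT, EMPTY, EMPTY, EMPTY, EMPTY};
        double[] inputs = encode(board);
        System.out.println(inputs.length); // prints 27
    }
}
```

Because field number and field state always map to the same input neuron, the semantics of each neuron stay stable across games, which is what makes learning possible.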

Neurons in the hidden layer calculate a “positional score” for the candidate moves, based on the field values and the input weights. Each neuron in the hidden layer always represents exactly one move to a certain field on the board.

In the beginning every neuron in the hidden layer was calculating a candidate move out of all possible moves. But this approach felt too much like an algorithmic solution through the backdoor.

That’s why there are nine neurons in the hidden layer, as there is at any time a maximum of nine possible moves.

Nine neurons in the hidden layer

Thus the first neuron in the hidden layer stands for a move on the first field, neuron two for a move on the second field, and so on. This implies that some neurons cannot “fire” a valid move because the corresponding field is already occupied. This is equivalent to a threshold that decides whether or not a neuron is activated (fires). If no neuron in the hidden layer can be activated, the game is finished anyway, as there are no valid moves left.

Activation functions

Activation functions are a vital part of any Neural Network implementation. They use input values and input weights to calculate output values, which in turn are the input to the neurons of the next layer or the result computed by the Neural Network. Common to all layers is the randomized generation of output weights when the Neural Network is (re-)initialized. All neurons of one layer share the same implementation of the activation function.

Input layer

The activation function of this layer is rather simple. It stores the field information it has received as input and calculates a value depending on the field state and its location on the board. Basically this includes a kind of threshold function: only one of the three neurons reflecting one field is used as an input in the hidden layer.

Hidden layer

For each neuron in the hidden layer a so-called positional value Z is calculated from the input weights and values. This is done by summing over all input neurons, so that the complete board state is considered:

Z = Σ wᵢ · xᵢ

For every set of three input neurons reflecting the same field, only the weight wᵢ and value xᵢ of the neuron matching that field’s state (empty, owned by the computer, or owned by the opponent) are used. Thus, of the 27 input neurons, only the nine relevant ones contribute to the sum.

Then the sigmoid function, which is commonly used as an activation function in Neural Networks, is applied to Z:

S(Z) = 1 / (1 + e^(-Z))

The resulting value is the positional score for this neuron and thus for this candidate move.
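Putting the weighted sum and the sigmoid together, the hidden-layer activation can be sketched like this (names and weights are illustrative, not taken from the original implementation):

```java
// Minimal sketch of the hidden-layer activation described above:
// Z is the weighted sum over the relevant input neurons, and the
// sigmoid S(Z) = 1 / (1 + e^-Z) turns it into a positional score.
class HiddenNeuron {
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    /** Positional score of one candidate move for the current board state. */
    static double positionalScore(double[] inputValues, double[] inputWeights) {
        double z = 0.0;
        for (int i = 0; i < inputValues.length; i++) {
            z += inputWeights[i] * inputValues[i]; // Z = sum of w_i * x_i
        }
        return sigmoid(z);
    }
}
```

Note that the sigmoid squashes any Z into the range (0, 1), which makes the positional scores of different candidate moves directly comparable.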

Output layer

In the output layer, a value Z is calculated again, but this time not as a sum: one Z is computed for each of the candidate moves,

Z = wᵢ · xᵢ

where wᵢ is the output weight and xᵢ the positional score of candidate move i. Then the sigmoid function is applied to Z once more, and the candidate move with the maximum S(Z) is chosen as the move to execute.
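A hedged sketch of this move selection, assuming one output weight per candidate move (all names are illustrative):

```java
// Sketch of the output layer: per candidate move, Z is the weighted
// hidden-layer score, and the move with the largest S(Z) is chosen.
class OutputLayer {
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    /** Index of the candidate move with the maximum S(Z). */
    static int chooseMove(double[] hiddenScores, double[] outputWeights) {
        int best = -1;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < hiddenScores.length; i++) {
            double s = sigmoid(outputWeights[i] * hiddenScores[i]);
            if (s > bestScore) {
                bestScore = s;
                best = i;
            }
        }
        return best;
    }
}
```

Since the sigmoid is strictly monotonic, this picks the move with the largest Z as well; applying S(Z) mainly keeps the scores in a bounded range.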

Summary and Outlook

This has been one of the most fun projects in quite some time. Not having any idea where to start was a really interesting experience. Playing around with different parameters, like the number of neurons and the weight changes applied, and then seeing how this affects the outcome of the game was really fascinating.

Luckily, there is still plenty of room for improvement. First of all, a more thorough training algorithm could be applied, such as rewarding all moves that lead to a win and not only the first and the last one. Another idea is to decrease the output weight of neurons whose move has led to a loss.

Then the structure of the Neural Network can be evolved by introducing additional hidden layers, thus increasing the number of neurons and the connections between them.

Pretty sure there will be a follow-up to this blog post, as one of the main objectives is not yet achieved: a Neural Network that learns to play Tic-Tac-Toe flawlessly :).

The post Gamma-TicTacToe – Neural Network and Machine Learning in a simple game appeared first on codecentric AG Blog.

Categories: Agile, Java, TDD & BDD

The Highly Useful Java ChronoUnit Enum

Javalobby Syndicated Feed - Fri, 12-Jan-18 04:01

Several years ago, I published the blog post "The Highly Useful Java TimeUnit Enum" that looked at the TimeUnit enum introduced with JDK 5. JDK 8 introduced a newer enum, ChronoUnit, that is better suited than TimeUnit for contexts other than concurrency such as date/time manipulations.

Located in the java.time.temporal package, the ChronoUnit enum implements the TemporalUnit interface, an interface used extensively in the highly anticipated Date/Time API introduced with JDK 8. The blog post "Days Between Dates in Java 8" demonstrates use of this enum to calculate periods of time between two instances of Temporal.
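The date-difference use case mentioned above is a one-liner with ChronoUnit; the class and method names here are illustrative:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Small example of ChronoUnit for date arithmetic: DAYS.between()
// accepts two Temporal instances and returns the elapsed count.
public class ChronoUnitDemo {
    static long daysBetween(LocalDate start, LocalDate end) {
        return ChronoUnit.DAYS.between(start, end);
    }
}
```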

Categories: Java

Jackson Mix-In Annotations

Javalobby Syndicated Feed - Fri, 12-Jan-18 01:01

Prior to Jackson 1.2, the only way to serialize or deserialize JSON using Jackson was by using one of the following two methods:

  • Adding annotations to modify the POJO classes
  • Writing custom serializers and deserializers

Now imagine you want to serialize or deserialize a third-party POJO whose source code you don’t have access to. What would you do?
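This is exactly what mix-in annotations solve: you declare the Jackson annotations on a separate class and register it with the ObjectMapper. A minimal sketch, where ThirdPartyUser stands in for a hypothetical library class you cannot modify:

```java
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.databind.ObjectMapper;

// Imagine this class comes from a third-party library (hypothetical).
class ThirdPartyUser {
    public String name = "Ann";
    public String password = "secret";
}

// The mix-in class carries the Jackson annotations "from outside".
abstract class UserMixIn {
    @JsonIgnore public String password;
}

public class MixInDemo {
    public static String serialize(ThirdPartyUser user) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Apply UserMixIn's annotations to ThirdPartyUser at runtime.
        mapper.addMixIn(ThirdPartyUser.class, UserMixIn.class);
        return mapper.writeValueAsString(user); // password is omitted
    }
}
```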

Categories: Java

Bootiful Development With Spring Boot and React

Javalobby Syndicated Feed - Thu, 11-Jan-18 22:01

React has been getting a lot of positive press in the last couple years, making it an appealing frontend option for Java developers! Once you learn how it works, it makes a lot of sense and can be fun to develop with. Not only that, but it’s wicked fast! If you’ve been following me, or if you’ve read this blog for a bit, you might remember my Bootiful Development with Spring Boot and Angular tutorial. Today, I’ll show you how to build the same application, except with React this time. Before we dive into that, let’s talk some more about what React is great for, and why I chose to explore it in this post.

First of all, React isn’t a full-fledged web framework. It’s more of a toolkit for developing UIs, a la GWT. If you want to make an HTTP request to fetch data from a server, React doesn’t provide any utilities for that. However, it does have a huge ecosystem that offers many libraries and components. What do I mean by huge? Put it this way: According to npmjs.com, Angular has 17,938 packages. React has almost three times as many at 42,428!

Categories: Java
