Recently, I've been working on a new project using Spring. It's been a great chance to try out new things, gather the best practices, and see what comes out of it. One of the things that struck me was how confusing Spring's naming is when you try to do things the right way.
Let's say I want a class responsible for creating instances of my business classes. Something like:
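The original snippet is not reproduced here; as a hedged sketch, such a class might look like the following (all names — OrderFactory, Order — are illustrative, not from the original):

```java
// Hypothetical factory responsible for creating instances of business classes.
public class OrderFactory {
    private final String defaultCurrency;

    public OrderFactory(String defaultCurrency) {
        this.defaultCurrency = defaultCurrency;
    }

    // Each call produces a fresh, fully configured business object.
    public Order createOrder(String customerId) {
        return new Order(customerId, defaultCurrency);
    }

    public static class Order {
        final String customerId;
        final String currency;

        Order(String customerId, String currency) {
            this.customerId = customerId;
            this.currency = currency;
        }
    }

    public static void main(String[] args) {
        OrderFactory factory = new OrderFactory("EUR");
        Order order = factory.createOrder("c-42");
        System.out.println(order.customerId + " " + order.currency); // c-42 EUR
    }
}
```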
Sometimes it is very useful to clean or build a selected set of files. For this I select the file(s) in the Eclipse Project Explorer and use the context menu:
Build Selected Files
This article explains the concept of Dependency Injection (DI) and how it works in Spring Java application development. You will learn about the advantages, disadvantages, and basics of DI, with examples. Read on for more information.
DI gives a client the flexibility of being configurable: only the client's behavior is fixed, while the dependencies it relies on can be swapped from the outside.
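A minimal constructor-injection sketch of that idea (the names MessageService, EmailService, and Notifier are illustrative, not from the article):

```java
// The client (Notifier) has fixed behavior but a configurable dependency.
interface MessageService {
    String send(String text);
}

class EmailService implements MessageService {
    public String send(String text) { return "email: " + text; }
}

class Notifier {
    private final MessageService service; // injected, never instantiated here

    Notifier(MessageService service) {    // constructor injection
        this.service = service;
    }

    String notifyUser(String text) {
        return service.send(text);        // fixed behavior, swappable dependency
    }
}

public class DiDemo {
    public static void main(String[] args) {
        // The caller decides which implementation the client receives.
        Notifier notifier = new Notifier(new EmailService());
        System.out.println(notifier.notifyUser("hello")); // email: hello
    }
}
```

In a Spring application the container plays the role of the caller here, wiring an implementation into the constructor for you.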
A user recently made us aware of the Skynet benchmark, a microbenchmark for “extreme” multithreading (1M threads). We are generally wary of such microbenchmarks because they are often tailored to measure a specific strength of a particular platform, without taking into account how relevant that strength is for real applications. For example, a platform with a 1000x faster implementation of sqrt would be hard pressed to yield even a 0.01% improvement in performance when running real applications. With threads the situation is a bit different: when many threads are active (say, over 10K) processing transactions in short bursts, the kernel thread scheduling overhead might become onerous, and your application may then spend a significant portion of its time waiting for the kernel to schedule your code. Lightweight thread (AKA fiber) implementations, like those provided by Go, Erlang, and, on the JVM, Quasar (and Kilim), can reduce this overhead by two orders of magnitude. This may be the difference between your server application being able to handle 500 or 5000 requests per second (some benchmarks can be found here and here).
However, once the threading overhead is reasonably low – say, less than 1% of the total time – differences in a particular implementation matter less and less: if there’s no overhead at all, the performance improvement will be only 1%. Because the JVM does not yet have built-in fibers, Quasar is required to implement them in a way that adds more overhead than platforms with native implementations. This is why in a microbenchmark that tests scheduling overhead alone, a generally slow runtime like Erlang’s BEAM may outperform a very fast runtime like HotSpot, even though once there’s any actual workload, the JVM quickly makes up the difference and then some (and then a lot, really). To further confuse the picture, some classical scheduling benchmarks like the ring benchmark actually reward schedulers that are only good at single-core scheduling and penalize schedulers that are good at sharing load among many cores.
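To make the structure of the benchmark concrete, here is a hedged sketch of the Skynet task shape — spawn a 10-ary tree of tasks down to 1,000,000 leaves, each leaf returning its ordinal, each parent summing its children. This sketch uses plain ForkJoinPool tasks rather than Quasar fibers, so it illustrates the task tree only, not fiber scheduling overhead:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Skynet-style task tree: split into 10 children until size == 1,
// then sum the leaf ordinals back up the tree.
class Skynet extends RecursiveTask<Long> {
    private final long num, size;

    Skynet(long num, long size) { this.num = num; this.size = size; }

    @Override
    protected Long compute() {
        if (size == 1) return num;           // leaf: return its own number
        long child = size / 10;
        Skynet[] tasks = new Skynet[10];
        for (int i = 0; i < 10; i++) {
            tasks[i] = new Skynet(num + i * child, child);
            tasks[i].fork();                 // spawn a child task
        }
        long sum = 0;
        for (Skynet t : tasks) sum += t.join();
        return sum;
    }

    public static void main(String[] args) {
        long result = new ForkJoinPool().invoke(new Skynet(0, 1_000_000));
        System.out.println(result);          // sum of 0..999999 = 499999500000
    }
}
```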
This is an evaluation on how the React toolset works as part of a modern web development project and why it is particularly suitable for scalability.
React is currently unique because it originated at, and is used successfully by, the biggest social network in the world, Facebook. Nothing comes close in terms of size and complexity: back in 2012 they already had 700 developers, a huge number of them working on their frontend.
Everybody uses these very dynamic, well-designed webpages. This is why users and designers expect this level of sophistication today, and developers have to keep up.
One alternative is to only use code and skip templating by operating directly on the DOM, but it is verbose and feels unnatural because in the end we create HTML without any abstraction in place.
To improve this, approaches like Angular and Web Components provide APIs for custom tags to encapsulate behaviour in a single component. But those just hide a lower level of templating and are also often unnecessarily complex.
Wouldn’t it be nice to compose components like functions while keeping the expressiveness of HTML?
React does that with a virtual DOM and JSX.
Now, to protect you from the real, terrifying DOM, React keeps its own virtual one in memory; every delta is checked first and then applied to the parsed HTML via direct DOM references. This is an extremely efficient way to apply changes, and it also adds a layer that can be virtualized on the server for testing and HTML rendering.
We got rid of the DOM and can now look at how the React library supports scalable development.
Components encourage a unidirectional data flow and make state explicit. Data can only be entered into JSX elements through descriptive properties and nothing else, so every component forces the creator to think about his interface. Inside the component you usually access the properties (or child JSX elements for more powerful components) and they are immutable. If you want some complex logic or just change the UI, you can use the state attribute, but this information has no effect on the parent elements.
Let’s look at an example with a simple component that has one property:
React rendering of CombinedComponents:
Hello Class loop: i=1 i=2 i=3
This is a very basic example that displays the different kinds of components: a class, a function, and one that is just imported.
Please note that JSX elements are just part of regular code, so all the existing helper methods like map can be reused. They can also have descriptive variable names and be organised for readability.
This perfectly encapsulates the underlying functionality, so this way you can easily have a big number of developers working “behind” one component without being affected.
Maybe this gives you some insight into why React is so popular: it's very comfortable even for a small number of developers to work with, and it helps build robust and maintainable software.
Today many companies use React, and its growing ecosystem makes it a safe bet for your stack.
In this post, we will see how we can use Jackson's YAMLFactory to read YAML files into Java beans.
YAML is a human-friendly data serialization standard for all programming languages.
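A hedged sketch of the approach, assuming the jackson-dataformat-yaml module is on the classpath (the Person bean and its fields are illustrative, not from the post):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;

// A simple bean matching a small YAML document.
public class Person {
    public String name;
    public int age;

    public static void main(String[] args) throws Exception {
        String yaml = "name: Alice\nage: 30\n";
        // An ObjectMapper built on a YAMLFactory parses YAML instead of
        // JSON; the rest of the databind API stays exactly the same.
        ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
        Person p = mapper.readValue(yaml, Person.class);
        System.out.println(p.name + " " + p.age); // Alice 30
    }
}
```

In practice you would pass a File or InputStream to readValue instead of an inline string.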
Golang (as opposed to Java) does not have exceptions or try/catch/finally blocks. Instead it has strict error handling, with functions called panic and recover and a statement named defer. It is a totally different approach. Is it a better approach than the one Java takes? (Sorry that I keep comparing it to Java; I come from the Java world.)
When we handle exceptions in Java we enclose the commands into a ‘try’ block denoting that something may happen that we want to handle later in a ‘catch’ block. Then we have the ‘finally’ block that contains all the things that are to be executed no matter what. The problem with this approach is that it separates the commands that belong to the same concern.
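A small Java sketch of that separation — acquiring, using, handling, and releasing a resource end up in four different places, even though they all belong to one concern (the file name and method are illustrative only):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReadFirstLine {
    public static String firstLine(String path) {
        BufferedReader reader = null;
        try {
            reader = new BufferedReader(new FileReader(path)); // acquire
            return reader.readLine();                          // use
        } catch (IOException e) {
            return null;                       // handle, far from the failing call
        } finally {
            if (reader != null) {
                // release, in yet another block
                try { reader.close(); } catch (IOException ignored) { }
            }
        }
    }

    public static void main(String[] args) {
        // No such file exists, so the catch branch returns null.
        System.out.println(firstLine("no-such-file.txt"));
    }
}
```

Go's defer, by contrast, lets the release statement sit right next to the acquisition.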
tl;dr: Thank you to some lovely people for translating or graphic recording some of my work.
One of the nicest compliments you can receive as a writer is someone choosing to translate your work to make it available to a new audience. I am enormously grateful to the people who have translated my articles and blog posts over the years, most recently Julia Kadauke, who translated What’s In A Story into German.
However nothing prepared me for the email from Denis Oleynik, CTO of 1service.ru, telling me he had translated no fewer than eight posts and articles into Russian!
– Introducing Deliberate Discovery [Russian]
– Are You Ready for the Truth? [Russian]
– Continuous Build is not Continuous Integration [Russian]
– Blink Estimation [Russian]
– The Art of Misdirection [Russian]
– The Perils of Estimation [Russian]
– What’s in a Story? [Russian]
– A Classic Introduction to SOA [Russian]
While I am thanking people, I had the privilege of giving the opening talk earlier this month at Craft conference in Budapest. The talk is about Embracing Uncertainty and two graphic artists, Dóra Matyus and Márti Frigyik, recorded it in the following images:
I first encountered graphic recording via ImageThink and quite apart from the wonderful experience of having your ideas rendered as images, it is a great way to learn what someone heard from your talk rather than what you think you said.
Eclipse has a cool feature which might not be known to everyone: the "To-Do" (or Tasks) List which keeps track of what I have to do:
C# has in many ways inherited its relationship with Builder from Java, where it was usually called by the more degenerate term “Factory” or “Factory pattern”. (Technically, what Java calls a “Factory pattern” is typically one of Builder, Factory Method, or Abstract Factory, depending on what precisely looks to be varied and/or encapsulated.) C#, however, never fell quite as deeply in love with the “Factory pattern” as the Java development crowd did, and as such it wasn’t as widely used.
We start with the target Product:
The Anypoint platform provides various components and built-in functions for integration purposes. Most of our needs in a project can be fulfilled easily, especially when we use the Enterprise Edition. Not only does this include the availability of ready-to-use connectors, but also a quick response from MuleSoft support.
Let's say we have to develop an integration project using the Community Edition, obviously with limited support from MuleSoft. Although we can rely on the community forum, with its tons of ideas and discussions, sometimes we face a common impediment from our internal developers. In particular, they resist adopting the Anypoint platform and keep reaching for their preferred programming language. For example, they do not want to use MEL or the Expression component, preferring to write a Java class instead.
Java has long had a relationship with Builder, usually calling it by the more degenerate term “Factory” or “Factory pattern.” (Technically, what Java calls a “Factory pattern” is typically one of Builder, Factory Method, or Abstract Factory, depending on what precisely looks to be varied and/or encapsulated.)
We start with the target Product:
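The original Product listing is not reproduced here; a minimal sketch of a typical target Product for a Builder might look like this (the Product fields are illustrative only):

```java
// An immutable value type whose construction we want to encapsulate
// behind a fluent Builder.
public class Product {
    private final String name;
    private final double price;

    private Product(Builder b) {       // only the Builder can construct it
        this.name = b.name;
        this.price = b.price;
    }

    public static class Builder {
        private String name = "";
        private double price = 0.0;

        public Builder name(String name)   { this.name = name; return this; }
        public Builder price(double price) { this.price = price; return this; }
        public Product build()             { return new Product(this); }
    }

    @Override
    public String toString() { return name + " @ " + price; }

    public static void main(String[] args) {
        Product p = new Product.Builder().name("Widget").price(9.99).build();
        System.out.println(p); // Widget @ 9.99
    }
}
```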
Thread dumps are vital artifacts for diagnosing CPU spikes, deadlocks, memory problems, unresponsive applications, poor response times, and other system problems. There are great online thread dump analysis tools, such as http://fastthread.io/, that can analyze them and spot problems. But you need to provide proper thread dumps as input to those tools. Thus, in this article, I have documented 7 different options to capture thread dumps.
‘jstack’ is an effective command-line tool to capture thread dumps. The jstack tool ships in the JDK_HOME\bin folder. Here is the command that you need to issue to capture a thread dump:
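Alongside external tools like jstack, a thread dump can also be captured programmatically from inside the JVM via the standard ThreadMXBean API — a hedged sketch, useful when you cannot attach a tool to the process:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Capture a thread dump of the current JVM using ThreadMXBean.
public class ThreadDump {
    public static String capture() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        StringBuilder dump = new StringBuilder();
        // (true, true) -> include lock and synchronizer information
        for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
            dump.append(info.toString());
        }
        return dump.toString();
    }

    public static void main(String[] args) {
        String dump = capture();
        System.out.println(dump.contains("main")); // the main thread appears
    }
}
```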
For anyone who pays close attention to Java EE, it has become clear in the past six months that there has been a decline of activity...especially in those JSRs for which Oracle maintains the lead. What's the deal? There has been a lot of conversation in the Java EE community in this regard lately, and I think it is important that the developer community be given a fair timeline of what we can expect for the future of Java EE. The uncertainty is becoming long in the tooth, and the community is becoming more concerned with the future of Java SE and Java EE as time goes on.
Let me give you a bit of background. I'm an expert group member on a couple of JSRs targeted for Java EE 8, those being JSR 372 (JavaServer Faces 2.3), and JSR 378 (Portlet 3.0 Bridge for JavaServer Faces 2.2). At the beginning of 2016, I had noticed that since October 2015 the number of emails on the Expert Group list for JSR 372 had really slowed down. In fact, in the final quarter of 2015, the activity on JSR 372 had slowed down to a near halt, whereas it should be picking up momentum as time moves forward closer to the eventual final release. In late January, I was contacted by a couple of members of the Java EE community, indicating that they also had seen a slowdown of activity and were very concerned. I was then asked to join a community of concerned Java EE advocates on a Slack community...and when I joined and read the backlog of messages I could clearly see that it looked as though Oracle had stopped activity in just about every area of Java EE, specifically work on all of the JSRs that were Oracle-led.
The verdict is in once again on the Oracle vs Google lawsuit regarding Google's use of Java in Android, and no sooner was Google declared the winner than Oracle announced their intent to appeal (at least) once more. The reaction to the verdict has been widespread in technology circles as one would expect, although the reaction is fairly one-sided (see thread examples here and here).
In reading some of the reaction on a range of sites, I'm not seeing much (any?) sympathy for Oracle's loss, which might seem odd in a case like this. If you think about high-profile copyright/patent cases or questions of ownership in other industries, the public sympathy tends to be with the owner/creator. Remember Ice Ice Baby? (I'm trying to forget). The public was happy to see Vanilla Ice paying Queen royalties for using their song. Human nature in these types of cases would make one inclined to be behind Oracle, but this is a different case.
This is the first in the data structure reviews and likely the simplest: the humble array. The first issue is the term "array" itself; its meaning differs depending on who uses it, but we will get to that a bit later.
Generally, I think of an array like this:
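The original illustration is missing here; as a sketch, I mean a fixed-size, contiguous block of elements addressed by a zero-based index:

```java
public class ArrayDemo {
    public static void main(String[] args) {
        int[] numbers = new int[5];       // allocate 5 contiguous ints
        for (int i = 0; i < numbers.length; i++) {
            numbers[i] = i * i;           // constant-time indexed write
        }
        System.out.println(numbers[3]);   // constant-time indexed read: 9
    }
}
```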
In the last post, Code Smells - Part I, I talked about the bloaters: they are code smells that can be identified as Long Methods, Large Classes, Primitive Obsessions, Long Parameter List and Data Clumps. In this one, I would like to dig into the Object-Orientation Abusers and the Change Preventers.
This type of code smell usually appears when object-oriented principles are applied incompletely or incorrectly.
In the Java EE umbrella, every piece of technology is standardized under a JSR (Java Specification Request). The Expert Groups of the JSRs have to deliver the specification, a Reference Implementation, and a TCK (Technology Compatibility Kit). For most of the JSRs, the TCK is licensed as closed-source software and is not available to the public.
The implication is that any vendor who wants to implement a specific technology has to explicitly apply for, ask for, or buy the TCK in order to test their implementation and get officially certified. The problem with this situation is clearly that it raises the barrier for potential vendors. Or, in other words, it would be a benefit if vendors could join the game more easily. That said, it would then also be more favorable for users of Java EE to have more competition among implementations.
Eclipse-based IDEs have a powerful feature for making ‘variants’ of the same project: Build Configurations. The variants share the project's common parts, and I can simply tweak things one way or the other, for example to produce a ‘release’ or a ‘debug’ binary of my application, without duplicating the project.
Build configurations are managed through either the context menu on the project or with the top menu: