It’s a funny thing to say that delivering business value is the most important thing when developing software. It doesn’t matter that a framework like Scrum is far from efficient, because we focus on value. We deliver business value. Working software. Every sprint. Period.
But in the end we often just don’t know the impact of the things we make. What do we really know about the business value we supposedly created? Is it a one-time validation? Is it a one-way validation?
We keep adding stuff to our products because that is what we are expected to do. We always need more features. But do we know the difference these features make, except that there are more features now? More stuff. What if one feature undoes the effect of another? Wouldn’t that be valuable to know? Wouldn’t it be nice to remove it?
Software development has everything to do with validations. There are at least three rounds of validation from ideas to maintaining the software in a production environment.
The first round of validation is about vision, business goals and problems to solve. Is the vision of the product (still) right? Are the right personas defined? Is the desired impact of the software defined? Are the right activities defined? Are the goals clear? What is the desired ROI?
The second round of validation is about technical and behavioural validation. Features are being designed, built, tested, accepted and deployed.
Hooray! Your software is live! Well, let’s NOT sit back and relax. This is usually the moment where we tend to run back to the product backlog and get started on new features. Burn down those points! Move post-its around like there is no tomorrow! Like dropping your children off at school without ever speaking to the teachers.
Let’s make sure that this is what the world really needs. Let’s also make sure we keep it that way.
Monitoring is often limited to raising the alarms when something is seriously wrong. With continuous validation you are monitoring something else entirely: you want to know the effects new features or changes have on the system and be able to act on them. Use the feedback you get from your users, that is, from the behaviour of your users. For instance:
You make your software with a purpose, right? You want to reach business goals or you want to solve problems. But to make sure your software achieves what you intended it to do, and keeps doing so, you need to validate this continuously. If your goal was to sell more items and you implemented functionality to reach this goal, you'll need to validate its effectiveness. And then you need to keep validating it, because software is not a concrete structure: it will change.
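To make "keep validating" concrete, here is a minimal sketch in Java. All names are hypothetical, not taken from any particular product: the idea is simply to count how often a feature is shown and how often it leads to the goal it was built for, so its effectiveness can be re-checked long after launch.

```java
import java.util.concurrent.atomic.AtomicLong;

public class FeatureValidation {
    private final AtomicLong shown = new AtomicLong();
    private final AtomicLong converted = new AtomicLong();

    public void recordShown() { shown.incrementAndGet(); }
    public void recordConversion() { converted.incrementAndGet(); }

    // Conversion rate of the feature; NaN until it has been shown at least once.
    public double conversionRate() {
        long s = shown.get();
        return s == 0 ? Double.NaN : (double) converted.get() / s;
    }

    public static void main(String[] args) {
        FeatureValidation recommendations = new FeatureValidation();
        for (int i = 0; i < 1000; i++) recommendations.recordShown();
        for (int i = 0; i < 25; i++) recommendations.recordConversion();
        // If the rate drops below its target over time, the feature is a
        // candidate for rework -- or for removal.
        System.out.println("conversion rate: " + recommendations.conversionRate());
    }
}
```

In practice you would feed these counters from your analytics or logging pipeline, but the principle is the same: the metric behind a feature stays observable for the whole life of the feature, not just during its launch sprint.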
I cannot emphasise this enough: remove functionality that no longer meets the criteria. The world is full of bloated products that keep expanding until they become unusable. Like the gate in the picture. I'm sure there was a good reason to build it, and it worked very well while there was a fence. Now it only costs us, because it still needs maintenance.
Does your product have gates without fences? Are you aware? Not all as visible as the one in the picture I’m sure, but they’ll cost you money just the same.
The post After continuous integration there is continuous validation appeared first on codecentric Blog.
I'm working on our first ever research guide that focuses on a language platform rather than a major trend like continuous delivery or IoT. The Java Ecosystem guide is going to be pretty awesome for a number of reasons, one of which will be the survey data from 400+ Java developers that we already have. Make sure your situation is recorded too:
JRebel is indisputably the industry-leading class-reloading software. It is a useful product that has earned its reputation by helping to expedite Java development for many organisations. How this product works is a mystery to most. I'd like to explain how I think it works and provide a basic prototype (with source code).
I like going to conferences. One of my regular conferences remains Devoxx, but I’ve done a lot of other conferences the last couple of years. However, over the years, I’ve noticed a very unsettling trend: the prices of conferences have risen each year. And not by a little. Whether the content quality has equally risen is debatable, but it seems like there’s no stopping in the rise of the price of admission to one of the most crucial parts of IT: learning.
I’ll take a couple of examples.
In JAX-RS, both client and server can specify what content type they expect to consume or are meant to produce. Technically speaking, the content type is the data format. For instance, JSON and XML are the two most well-known data formats commonly used in RESTful web services. This feature helps server and client developers to be more flexible in design and implementation.
As in the HTTP protocol, content types in JAX-RS are expressed as MIME types. MIME formatting is the standard way to represent and categorise different content types. For instance, the text of this article is categorised as text/plain in MIME.
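A minimal sketch of how this looks on the server side, using the standard JAX-RS 2.0 annotations; the resource itself and its paths are hypothetical:

```java
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource: the server declares which MIME types it produces
// and consumes; the JAX-RS runtime matches these declarations against the
// client's Accept and Content-Type request headers.
@Path("/articles")
public class ArticleResource {

    @GET
    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
    public String list() {
        // Sent back as JSON or XML, depending on the client's Accept header.
        return "...";
    }

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public void create(String articleJson) {
        // Only invoked for requests with Content-Type: application/json;
        // other content types are rejected with 415 Unsupported Media Type.
    }
}
```

The client side mirrors this: a JAX-RS client sets the Accept header via `request(MediaType.APPLICATION_JSON)` and the runtime performs the same MIME-type negotiation.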
The first early draft of JSON-P 1.1 (JSON-P 1.1 EDR1), released on the 3rd of August, is covered in this first-look article. As you probably know, the Java API for JSON Processing 1.0 (JSR 353) is the standard JSON processing API added in Java EE 7. Now JSON-P is being updated to version 1.1, also known as JSR 374. The current build comes with new JSON-based features such as JSON Patch, JSON Pointer, JSON Merge Patch, and Java SE 8 support, each of which is scheduled to be part of Java EE 8. We will briefly document the noteworthy additions along with small examples, so that you can try the draft version yourselves and/or get an idea of the newly implemented functionality.
In examining the JSON-P 1.1 API, we have found five noteworthy classes, all located in the javax.json package:
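As a quick taste of two of the new features, here is a sketch of JSON Pointer (RFC 6901) and JSON Patch (RFC 6902). Note that the method names below follow the JSON-P 1.1 API as it eventually shaped up; the EDR1 draft may differ in detail, so treat this as an illustration rather than a definitive reference:

```java
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonPatch;
import javax.json.JsonPointer;

public class JsonP11Demo {
    public static void main(String[] args) {
        JsonObject person = Json.createObjectBuilder()
                .add("name", "Duke")
                .add("age", 20)
                .build();

        // JSON Pointer: address a single value inside a JSON document by path.
        JsonPointer agePointer = Json.createPointer("/age");
        System.out.println(agePointer.getValue(person));

        // JSON Patch: describe a change as a list of operations and apply it,
        // producing a new (immutable) JSON structure.
        JsonPatch patch = Json.createPatchBuilder()
                .replace("/age", 21)
                .build();
        System.out.println(patch.apply(person));
    }
}
```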
Allen Wirfs-Brock gave the following defense of OOP a few days ago in a series of six posts on Twitter:
A young developer approached me after a conf talk and said, "You must feel really bad about the failure of object-oriented programming." I was confused. I said, "What do you mean, object-oriented programming was a failure? Why do you think that?"
One of my hobbies is photography. I took my first steps as a small boy with the camera my father gave to my mother as a present in the 1960s.
This camera didn’t have an exposure meter, so my father gave me some hints as to which aperture to adjust depending on the lighting conditions. He stuck a small note with instructions for my mother into the camera’s protection cap.
This note is nearly 50 years old, and if you take a look at my father’s handwriting you can imagine why he is still the one who always writes the place cards for our family celebrations!
Later I was permitted to use my father’s first Nikon camera, and again I learned a lot about shutter speed, picture composition, depth of field or ISO numbers.
Apart from regular motifs and settings, I loved to experiment with new things, e.g. long exposures, double exposures or black and white, to mention just a few.
When in the 1980s one of my favourite bands – U2 – released their fourth studio album The Unforgettable Fire, I saw something I had never seen before.
You need to click on the link to see the cover. For legal reasons I will refrain from linking to it directly.
On the cover you can see the band standing in front of a castle in Ireland. The picture is a black-and-white infrared photo taken by the Dutch photographer Anton Corbijn, and it showed me for the very first time what is commonly known as the Wood Effect.
It is named after its discoverer, Robert Williams Wood, and it makes us aware that all plants reflect sunlight, which they need for photosynthesis. Otherwise they would get burned. Speaking of "burned": the album was named after an art exhibition that showed pictures taken by survivors of the atomic bombing of Hiroshima and Nagasaki.
I realised, though, that experimenting with infrared pictures would be very expensive. After all I was just a teenager with average pocket money, and I also had football on my mind!
Eventually, digital photography arrived, and it is amazing to see the high quality of photos taken with today’s mobile phones. Along with digital photography came new interesting aspects such as panoramic photography, High Dynamic Range (HDR) or time-lapse photography. The current generation of mobile phones can handle all these aspects quite well.
The cover I looked at in 1984 was on an LP record. It is amazing that today many analogue things are en vogue again. This applies not only to vinyl, but also to analogue photography, which is seeing a renaissance.
When a new model of the Raspberry Pi was released early this year, its accessories and in particular its NoIR camera caught my attention. The camera is called "NoIR" because it is able to take infrared photos. NoIR means that there is no notch filter that blocks infrared light.
Light is electro-magnetic radiation, but only a limited range can be perceived by humans. The shorter ultraviolet wavelength and the longer infrared wavelength are invisible to us.
But a camera's sensor can catch these wavelengths. Therefore a notch filter is used to block them and only let through the wavelengths we can perceive, so that a photo looks exactly like the scene we saw before.
Coming back to the Pi 2. I ordered the Pi 2 and accessories and planned to take infrared photos with it. The equipment included the Pi 2 itself, a microSD card for the operating system, the camera, a case, and a power supply. Taking photos outside at a later point in time would additionally require a power bank and the already existing WiFi adapter.
Compared to the previous model, the Pi 2 comes with a 900 MHz quad-core ARM Cortex-A7 processor and one gigabyte of main memory. The previous model only features a 700 MHz single-core ARM1176JZF-S processor and half as much main memory. Due to the higher performance, power consumption rises to a maximum of 4 W, whereas the previous model consumes 2.5 to 3 W. Its dimensions are the same as the previous model's:
85mm x 56mm x 17mm (L x W x H)
I decided to use the operating system that is officially supported by the Raspberry Pi Foundation – Raspbian.
An image of nearly one gigabyte can be downloaded here: http://downloads.raspberrypi.org/raspbian_latest
In order to write it to the microSD card, we need a tool. On Windows we can use the SourceForge project Win32 Disk Imager:
The unpacked image is written to the microSD card with Win32 Disk Imager.
Afterwards we can boot for the first time. In order to do so, the Pi 2 needs to be connected to a router with an ethernet cable and, of course, to the mains.
To establish a connection to the Pi 2 over SSH, I use PuTTY. In the administrator’s user interface of my router I can read out the IP address the Pi 2 was assigned.
After opening the connection I need to log in with the standard user pi and the standard password raspberry.
The configuration tool raspi-config is useful for the first and most important settings.
sudo is used for executing commands with higher user authorisation. It stands for superuser do.
We need to choose this first menu item in order to assign the entire disk space to Raspbian. This task is executed during the next boot sequence.
We choose the option to boot directly to the desktop so we can connect to the Pi 2 with RDP later but also still use PuTTY.
In an English system we don't need to change anything. But in a German system we set the locale to de_DE.UTF-8 and make it the default locale.
We set the timezone to Europe/Berlin.
We can find this entry in /etc/timezone.
We choose this menu item to change the keyboard layout.
We choose the given option Pi2 to overclock the Pi 2.
The settings can be configured manually in boot/config.txt:
#uncomment to overclock the arm. 700 MHz is the default.
And further below:
# Additional overlays and parameters are documented /boot/overlays/README
There are two settings important to us.
We can set the portion of main memory the GPU is to be assigned.
The setting can be configured manually in /boot/config.txt:
We update this tool to the current version. After that we leave the tool and perform a reboot:
APT is the package manager of Debian and thus of Raspbian, too. The list of available packages and their current versions is updated as follows:
sudo apt-get update
In order to update the installed packages, we use the command as follows:
sudo apt-get upgrade
You can combine both commands. They are then processed successively:
sudo apt-get update && sudo apt-get upgrade
Next we install the package xrdp, using the package manager, to access the Pi 2 via remote desktop:
sudo apt-get install xrdp
After a reboot we are able to access the Pi 2 from Windows via RDP.
There are two camera models for the Pi 2: the "normal" one and the NoIR version. As mentioned before, the NoIR lacks a notch filter and is thus able to detect the infrared spectrum. At first glance the name may be a bit deceptive.
More information: https://www.raspberrypi.org/documentation/hardware/camera.md
The camera is connected by means of a flat cable. Before connecting, you need to make sure that the camera was activated with raspi-config.
After that, you need to reboot.
There are two programmes that you can control via scripts (bash or Python): raspivid records videos with the camera, while raspistill takes photos. Thus I confine myself to raspistill.
To just take a simple snapshot, you can use the following command:
raspistill -o click.jpg
To get an overview of the possible parameters, you can use the command:
raspistill -?
Some important parameters in a nutshell:
| Parameter short | Parameter long | Description |
|---|---|---|
| -? | --help | Display help information |
| -w | --width | Width of the picture |
| -h | --height | Height of the picture |
| -q | --quality | JPG quality 0 to 100 |
| -r | --raw | Embeds raw data into the metadata of the JPG file |
| -o | --output | Name of the output file |
| -tl | --timelapse | Timelapse mode in milliseconds |
| -sh | --sharpness | Sharpness of the picture, -100 to 100 |
| -co | --contrast | Contrast of the picture, -100 to 100 |
| -hf | --hflip | Flip the picture horizontally |
| -vf | --vflip | Flip the picture vertically |
To obtain raw data from the camera and process it digitally later, we can take photos with the parameter -r. The camera doesn't offer direct raw file output, but the JPG has the raw data embedded in its metadata. This is why we initially get a much larger JPG file than before.
But there is a converter raspi_dng, that is able to convert JPG+RAW to DNG. In order to transfer the exif data from JPG to DNG we also need an exif tool, a library and Git.
All these packages can be installed in one go:
sudo apt-get install git libimage-exiftool-perl libjpeg62-dev
Then raspiraw needs to be downloaded from GitHub and compiled.
git clone https://github.com/illes/raspiraw.git
Cloning into 'raspiraw'...
remote: Counting objects: 14, done.
remote: Total 14 (delta 0), reused 0 (delta 0), pack-reused 14
Unpacking objects: 100% (14/14), done.
A subdirectory raspiraw was created. We change into this directory and execute the make utility.
Subsequently we copy the compilation to /usr/local/bin, so that we can use the command directly in the console next time.
sudo mv raspi_dng /usr/local/bin/raspi_dng
The process is as follows. First we take a photo with raw data:
raspistill -r -o raw.jpg
With the compiled raspiraw we can extract the raw data from the metadata of the JPG and convert it into the Adobe DNG format:
raspi_dng raw.jpg raw.dng
During this step no exif data has been copied, which is why we can use the previously downloaded exif tool to do so.
exiftool -tagsFromFile raw.jpg raw.dng -o raw.exif.dng
For outdoor projects a little bash script had to be written. It was supposed to take three photos with EV -10, 0 and +10. Using these three photos, we could later create an HDR image.
#!/bin/bash
if (( $EUID != 0 )); then
  echo 'Please run this script as root!'
  exit 1
fi
# Distinct filenames so the three exposures don't overwrite each other.
timestamp=$(date +%Y%m%d-%H%M%S)
echo 'Taking photo #1'
raspistill -w 2592 -h 1944 -ev -10 -o photo-$timestamp-1.jpg
echo 'Taking photo #2'
raspistill -w 2592 -h 1944 -ev 0 -o photo-$timestamp-2.jpg
echo 'Taking photo #3'
raspistill -w 2592 -h 1944 -ev +10 -o photo-$timestamp-3.jpg
There are no limits for the developer; e.g. you could upload your photos directly to your Fritz!Box at home.
In order to implement an outdoor project, I needed a mobile power supply, i.e. a power bank, a WiFi adapter to remotely control the Pi 2 from my mobile phone, and an empty filmstrip as a replacement for a filter. The whole thing looked as follows:
In order to enable access to a WiFi network, we have to connect to the Pi 2 via remote desktop and then use a tool called wpa_gui. If you want to take photos outside you need to connect the Pi 2 to a WiFi hotspot on your mobile phone.
Just click on Scan to see the available networks in reach.
We choose the network with the desired SSID and double-click on it. We fill in the password on the following property page and click on Add.
Now we are able to connect to the corresponding WiFi network.
Finally we need an SSH client app, which will allow us to access the Pi 2 and let us execute scripts. I use JuiceSSH for that purpose.
And now I present my first infrared photo, nearly 30 years after I saw the U2 cover. Unfortunately I didn’t have a castle and a band available, so our garden had to suffice. I took the filmstrip as a filter.
Then I used a small tripod that is actually used for a microphone, and instead of the negative strip I used an 850 nm screw filter, which I held in front of the camera’s lens. The result looked as follows:
Here we see the Pi 2 with a tripod during an outdoor shooting:
And this is the photo it took:
Unfortunately, the joy about the results was clouded by the great and constant effort it took to bring the camera and myself into position. However, the results were so good that I didn't want to stop.
So I started investigating a permanent camera conversion, in which the notch filter is removed and replaced by a defined filter. From that moment on, such a camera can only take infrared photos. This is why I needed a second Nikon camera body, so that I could keep using my Nikon lenses. I purchased a smaller model that had been refurbished by the manufacturer.
I found an alternative for converting the camera that was half as expensive, so I decided to convert my camera to super-colour infrared (580 nm). The decision which filter to take was difficult: in the beginning I just wanted to take black-and-white infrared photos, but I realised that colour infrared photography also has its merits. Since you can digitally edit photos and transform them into black and white, I decided on this filter.
All infrared photos need to be edited digitally, because the camera with a 580 nm filter produces pictures with a reddish tint. You need to correct the white balance manually and swap the blue and red channels with a channel-mixer tool. Not all software handles this well, and as Adobe products are too expensive, I looked for open-source software. By the way, with Adobe Lightroom it is not possible to adjust the white balance manually far enough; you need to create a DNG camera profile for infrared photos to be able to do so.
The photographer who converted my camera develops his photos digitally and exclusively with open source software, for example darktable. So I set up a virtual machine with Linux, took photos – and marveled at the outcome.
Here are my first results after one week.
Stahlwerk Becker, 500 m away from home:
Stahlwerk Becker again, alongside the water axis, colour infrared:
BuGa Area, Düsseldorf:
BuGa Area Düsseldorf, workshop for handicapped workers:
The Pi 2 project truly fulfilled my long-cherished wish. It is an ideal playground for small and big kids, and I believe that single-board computers should be part of every good school education. Let’s see what I will use it for next time.
If you are interested, you can visit me on Instagram. I am also looking forward to your enquiries or comments.
To all photographers: May there always be sufficient light, wherever you are.
The difference between men and boys is the price of their toys!
Thanks to Diana Kupfer for supporting me with the English translation!
The post How the Raspberry Pi 2 Fulfilled a Long-Cherished Wish appeared first on codecentric Blog.
A few weeks after including our community in Oracle's survey for Java version usage, we finally have results. The survey had a respectable 1,147 respondents, so the numbers should be fairly representative. I'll bulletpoint the version usage results for you:
Java SE 6 — 21%
At this stage, it is probably safe to say that a new CDI scope will be introduced in Java EE 8 as the MVC Expert Group (JSR 371) is in the process of introducing a new CDI custom scope (@RedirectScoped) to support redirect.
The idea is simple: an MVC controller can decide to redirect a client to a specific URL, i.e. to a specific controller method. A typical example: an anonymous user hits a page that requires them to be logged in, so the controller redirects the user to a login page. To do this, the controller uses HTTP 302 ('moved temporarily') to invite the client to make a new request to a different URL and thus a different controller. That second request implies a new (and thus different) request-response cycle. And that's the raison d'être of this new @RedirectScoped scope: it allows maintaining some state between the two request-response cycles of a redirect. Note that this new scope is for the MVC API only. The new scope is sometimes referred to as the 'Flash scope'. You can see how both redirect and @RedirectScoped work here.
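A sketch of how such a controller and scoped bean might look. The annotations follow the JSR 371 draft, so package names, the @RedirectScoped location, and the 'redirect:' view prefix may still change before the final release; the bean and paths here are hypothetical:

```java
import java.io.Serializable;
import javax.inject.Inject;
import javax.mvc.annotation.Controller;
import javax.mvc.annotation.RedirectScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

// Hypothetical bean: lives exactly from the redirecting request until the
// request that follows the redirect, then goes away.
@RedirectScoped
class FlashMessage implements Serializable {
    String text;
}

@Controller
@Path("account")
public class AccountController {

    @Inject
    FlashMessage message;

    @GET
    public String view() {
        // State stored here survives the 302 round-trip and is available
        // to the controller that renders the login form.
        message.text = "Please log in first";
        return "redirect:login";
    }
}
```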
I don't know if events became part of software engineering with graphical user interface interactions, but they are certainly a very convenient way of modelling them. With more and more interconnected systems, asynchronous event management has become an important issue to tackle. With functional programming also on the rise, this gave birth to libraries such as RxJava. However, modelling a problem as a stream of events shouldn't be restricted to handling system events. It can also be used in testing in many different ways.
One common use case for a testing setup is to launch a program, for example an external dependency such as a mock server. In this case, we need to wait until the program has been launched successfully. Conversely, the test should stop as soon as the external program fails to launch. If the program has a Java API, that's easy. However, this is rarely the case, and more basic APIs are generally used, such as