Membership Emails
Below is a sample of the emails you can expect to receive when signed up to Java Specialists.
Dynamic Proxies Cover
Our cover is not just a random image. It has meaning. We tried to create an abstract image of millions of code statements disappearing into a box. That is the power of the dynamic proxy. We can use it to replace a huge volume of code with just a few lines of dynamic proxy code.
We decided to publish the book for free as an e-book in partnership with InfoQ. You can download it from here.
DOWNLOAD FROM INFOQ
"I would like to congratulate you for your book, it is easy to read and understand and very well written so you don't get lost. You have created a book I will definitely recommend to other JVM programmers." - A.P.
Thank you for being part of the JavaSpecialists community. We created this book together!
Heinz Kabutz Director Cretesoft Limited heinz@javaspecialists.eu +306975595262 - Work www.javaspecialists.eu
Hi William Thomas, I recorded a message for you, check it out!
This video contains important information recorded personally for you by Dr Heinz M. Kabutz
Sent by Dr Heinz M. Kabutz
Watch the video I recorded for you
--
Bonjoro is an app for sending individual personalised welcome and thank you videos. This message was sent to @ If you don't want to receive these emails from Bonjoro in the future, please Unsubscribe
Thank you for joining The Java Specialists' Newsletter
To show our appreciation, William, here is a personalized link that gives you a 50% discount on Data Structures in Java 9 (Late 2017 Edition).
The Data Structures Course is an action-packed 8 hours of tips and tricks that professional Java programmers have used for the past 20 years to produce code that is robust and fast. Every lecture is followed by a short quiz to test your learning. Sometimes the questions are easy, others require some research on your side. Over 130 quiz questions in total will help you see how well you understood the various data structures.
The 50% discount is available to you for 24 hours.
P.S. If you have already purchased the Data Structures Course, please don't despair! Just reply to this email and we will give you an equivalent discount on another course of your choice :-)
[279] Upgrading ReadWriteLock
Author: Dr Heinz M. Kabutz | Date: 2020-05-28 | Category: Concurrency | Java Version: 11+ | Read Online
Abstract: The Java ReentrantReadWriteLock can never ever upgrade a read lock to a write lock. Kotlin's extension function ReentrantReadWriteLock.write() cheats a bit by letting go of the read lock before upgrading, thus opening the door for race conditions. A better solution is StampedLock, which has a method to try to convert the lock to a write lock.
Welcome to the 279th edition of The Java(tm) Specialists' Newsletter, sent to you from the Island of Crete. Thank you for reading my newsletter, either via email or online. I appreciate your support so much. A lot has changed since I sent my first newsletter almost twenty years ago, but the Java classes from then still (mostly) run on a Java 15 JVM. That's an amazing feat!
I am still waiting to get bored during the lockdown. With four children at home, at various stages of emotional and physical development, we have had our hands full, in a satisfying way. But boredom? Nope, not yet. And Java also keeps on giving us new areas to explore. There''s never a dull moment.
javaspecialists.teachable.com: Please visit our new self-study course catalog to see how you can upskill your Java knowledge.
Upgrading ReadWriteLock
In Java 5, we got the ReadWriteLock interface, with an implementation ReentrantReadWriteLock. It had the sensible restriction that we could downgrade a write lock to a read lock, but we could not upgrade a read lock to a write lock. When we tried, we would immediately get a deadlock. The reason for this restriction is that if two threads both had a read lock, what if they both tried to upgrade at the same time? Only one could succeed - but what about the other thread? To be safe, it consistently deadlocks any thread that attempts an upgrade.
Downgrading the ReentrantReadWriteLock works fine and we can in this case hold a read and a write lock simultaneously. A downgrade means that whilst holding a write lock, we also lock the read lock and then release the write lock. This means that we do not allow any other threads to write, but they may read.
import java.util.concurrent.locks.*;

// This runs through fine
public class DowngradeDemo {
    public static void main(String... args) {
        var rwlock = new ReentrantReadWriteLock();
        System.out.println(rwlock); // w=0, r=0
        rwlock.writeLock().lock();
        System.out.println(rwlock); // w=1, r=0
        rwlock.readLock().lock();
        System.out.println(rwlock); // w=1, r=1
        rwlock.writeLock().unlock();
        // at this point other threads can also acquire read locks
        System.out.println(rwlock); // w=0, r=1
        rwlock.readLock().unlock();
        System.out.println(rwlock); // w=0, r=0
    }
}
Attempting to upgrade a ReentrantReadWriteLock from read to write causes a deadlock:
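For example (a minimal sketch; the class name UpgradeDemo is mine, and I use a timed tryLock so the demo terminates instead of hanging forever):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.*;

public class UpgradeDemo {
    public static void main(String... args) throws InterruptedException {
        var rwlock = new ReentrantReadWriteLock();
        rwlock.readLock().lock();
        // A plain writeLock().lock() here would block forever: the write
        // lock waits for all read locks to be released, including the one
        // held by this very thread. A timed tryLock makes the failure visible.
        boolean upgraded =
            rwlock.writeLock().tryLock(100, TimeUnit.MILLISECONDS);
        System.out.println("Upgraded to write lock? " + upgraded); // false
        rwlock.readLock().unlock();
    }
}
```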
One of the fun things I've managed to do whilst in lockdown is to learn Kotlin. Of all the resources I looked at, the book I liked most was Kotlin in Action by Dmitry Jemerov and Svetlana Isakova. Highly recommended, especially for Java programmers.
Having studied Kotlin for a couple of months, I can believe that it has a positive impact on application development time. There are a lot of time savers in there. I can see why it has so many fans. I've looked at a lot of languages besides Java in the last two decades, from Ruby to Clojure to Scala to Swift. However, Kotlin has been the clear winner among my second languages so far. I can imagine rewriting part of my JavaSpecialists website infrastructure with Kotlin, just for kicks.
On a seemingly off-topic note, in my opinion the only time that we are really reading a book is when we spot mistakes. Thus the highest compliment I can give an author is to send a long list of mistakes. OK, some publishing houses are incredibly bad at their editorial process and every page is littered with mistakes. I'm not talking about those, but rather about excellent books like Effective Java, Mastering Lambdas, and Head First Design Patterns.
In the same way, I know that I am learning a new computer language properly when I start spotting mistakes in the actual language or API. After all, isn't that what The Java(tm) Specialists' Newsletter is all about? For twenty years I've been poking fun at all the weird anomalies in the Java Programming Language.
However, it is more challenging with Kotlin. Normally when I point out an error in a book, the authors will thank me, admit they made a mistake and try to fix it in a future version. With Kotlin I find it far more difficult to get my arguments across. It might be because I am a well-known "Java Guy" that the Kotlin community feels they need to defend their language. It could also be that I am not experienced enough in Kotlin to add something useful to the discussion. I was even a bit apprehensive writing this newsletter, in the fear that it would be misunderstood.
After that long defense, let us have a look at how Kotlin manages ReadWriteLock.
But before we do, one last glimpse at Java. A lot of programmers have asked me why ReentrantReadWriteLock does not support try-with-resource to automatically unlock. I wrote about this in newsletters 190 and 190b. Then lambdas came along and it would have been sensible to create an idiomatic implementation of locking/unlocking with the body contained inside a lambda. However, Java 8 lambdas do not support the throwing of checked exceptions all that well. Thus in Java we need to write all of this locking/unlocking code by hand. Tedious and error prone. A hint of a problem is that IntelliJ IDEA has predefined live templates to generate that code. Our IDE generating code for us (getter/setter, toString, equals/hashCode, constructors, locking/unlocking) is a sign of a language smell.
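To make the idea concrete, here is a sketch of what such a lambda-based locking idiom could look like (the Locks class and withLock method are hypothetical, not JDK API; note that the Supplier cannot throw checked exceptions, which is exactly the limitation mentioned above):

```java
import java.util.concurrent.locks.*;
import java.util.function.Supplier;

// Hypothetical helper: runs an action whilst holding the given lock.
// Checked exceptions are the pain point: Supplier cannot throw them,
// which is one reason an idiom like this never made it into the JDK.
public class Locks {
    public static <T> T withLock(Lock lock, Supplier<T> action) {
        lock.lock();
        try {
            return action.get();
        } finally {
            lock.unlock(); // always released, even if the action throws
        }
    }

    public static void main(String... args) {
        var rwlock = new ReentrantReadWriteLock();
        int result = withLock(rwlock.readLock(), () -> 40 + 2);
        System.out.println(result); // 42
    }
}
```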
In Kotlin, lambdas are compiled in a slightly different way to Java. There are advantages and disadvantages, but I will not go into that here. Kotlin also has a great feature of extension functions, to allow us to supposedly add functionality to existing classes. It is a sleight-of-hand, but in a good way.
The following Kotlin code is similar to our DowngradeDemo. The only difference is with the fourth println(), which in our Java version shows // w=0, r=1 and in Kotlin shows // w=1, r=0. In Java we did not unlock the read and write locks in the same order that they were locked. As soon as we did the downgrade from write to read, other threads would have been able to acquire the read lock. The Kotlin version, still holding the write lock at that point, does not allow other threads to get the read lock. It is not a true lock downgrade.
However, look at the status of each println(). In the Java DowngradeDemo we had // w=1, r=1 in the middle. But not this time: we only hold the write lock, not the read lock. If we peek into the implementation of the Kotlin extension function ReentrantReadWriteLock.write() we see the following:
/**
* Executes the given [action] under the write lock of this lock.
*
* The function does upgrade from read to write lock if needed,
* but this upgrade is not atomic as such upgrade is not
* supported by [ReentrantReadWriteLock].
* In order to do such upgrade this function first releases all
* read locks held by this thread, then acquires write lock, and
* after releasing it acquires read locks back again.
*
* Therefore if the [action] inside write lock has been initiated
* by checking some condition, the condition must be rechecked
* inside the [action] to avoid possible races.
*
* @return the return value of the action.
 */
@kotlin.internal.InlineOnly
public inline fun <T> ReentrantReadWriteLock.write(action: () -> T): T {
    val rl = readLock()
    val readCount = if (writeHoldCount == 0) readHoldCount else 0
    repeat(readCount) { rl.unlock() }
    val wl = writeLock()
    wl.lock()
    try {
        return action()
    } finally {
        repeat(readCount) { rl.lock() }
        wl.unlock()
    }
}
An equivalent version of UpgradeDemoKotlin.kt in Java would look like this:
import java.util.concurrent.locks.*;

public class UpgradeDemoKotlinAsJava {
    public static void main(String... args) {
        var rwlock = new ReentrantReadWriteLock();
        System.out.println(rwlock); // w=0, r=0
        rwlock.readLock().lock();
        try {
            System.out.println(rwlock); // w=0, r=1
            int readCount = rwlock.getWriteHoldCount() == 0
                ? rwlock.getReadHoldCount() : 0;
            for (int i = 0; i < readCount; i++)
                rwlock.readLock().unlock();
            rwlock.writeLock().lock();
            try {
                System.out.println(rwlock); // w=1, r=0
            } finally {
                for (int i = 0; i < readCount; i++)
                    rwlock.readLock().lock();
                rwlock.writeLock().unlock();
            }
            System.out.println(rwlock); // w=0, r=1
        } finally {
            rwlock.readLock().unlock();
        }
        System.out.println(rwlock); // w=0, r=0
    }
}
The documentation of the Kotlin function explicitly states that the read locks will be released before acquiring the write lock, and that any condition checked beforehand has to be rechecked inside the write lock. This is due to a (sensible) limitation in the ReentrantReadWriteLock, already mentioned at the beginning of this newsletter.
I would be surprised to see code like this in the JDK. In Java we are more careful about avoiding race conditions. A thread deadlock is preferable to a mystery race condition. Writing a warning into the documentation is not good enough IMHO. Who reads that anyway? Principle of least astonishment FTW (POLA).
Upgrading with StampedLock
The Java 8 StampedLock gives us more control over how a failed upgrade should be handled. A few things before we start.
The StampedLock is not reentrant, which means that we cannot hold both a read and a write lock at the same time. A stamp is not tied to a particular thread, thus we also cannot hold two write locks at the same time from one thread. We can hold lots of read locks at the same time, each with a different stamp. But we can only get a single write lock. Here is a demo:
import java.util.*;
import java.util.concurrent.locks.*;

public class StampedLockDemo {
    public static void main(String... args) {
        var sl = new StampedLock();
        var stamps = new ArrayList<Long>();
        System.out.println(sl); // Unlocked
        for (int i = 0; i < 42; i++) {
            stamps.add(sl.readLock());
        }
        System.out.println(sl); // Read-Locks:42
        stamps.forEach(sl::unlockRead);
        System.out.println(sl); // Unlocked
        var stamp1 = sl.writeLock();
        System.out.println(sl); // Write-Locked
        var stamp2 = sl.writeLock(); // deadlocked
        System.out.println(sl); // Not seen...
    }
}
Since StampedLock does not know which thread owns the locks, the DowngradeDemo would deadlock:
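A minimal sketch of that deadlock (the class name is mine; I use a timed tryReadLock so the demo terminates instead of blocking forever):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.StampedLock;

public class StampedLockDowngradeDeadlock {
    public static void main(String... args) throws InterruptedException {
        var sl = new StampedLock();
        long wstamp = sl.writeLock();
        // StampedLock is not reentrant: whilst we hold the write lock,
        // a plain readLock() would block forever, even in the same thread.
        // tryReadLock with a timeout returns 0 instead of blocking.
        long rstamp = sl.tryReadLock(100, TimeUnit.MILLISECONDS);
        System.out.println("Read lock acquired? " + (rstamp != 0)); // false
        sl.unlockWrite(wstamp);
    }
}
```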
However, StampedLock does allow us to try to upgrade or downgrade our locks. This will also convert the stamp to the new type. For example, here is how we could do the downgrade correctly. Note that we do not need to unlock the write lock, since the stamp was converted from write to read.
import java.util.concurrent.locks.*;

public class StampedLockDowngradeDemo {
    public static void main(String... args) {
        var sl = new StampedLock();
        System.out.println(sl); // Unlocked
        long wstamp = sl.writeLock();
        System.out.println(sl); // Write-locked
        long rstamp = sl.tryConvertToReadLock(wstamp);
        if (rstamp != 0) {
            System.out.println("Converted write to read");
            System.out.println(sl); // Read-locks:1
            sl.unlockRead(rstamp);
            System.out.println(sl); // Unlocked
        } else { // this cannot happen (famous last words)
            sl.unlockWrite(wstamp);
            throw new AssertionError("Failed to downgrade lock");
        }
    }
}
One little story that might amaze you. My friend Victor Grazi discovered a bug in an early version of StampedLock. When we downgraded a write to a read, threads waiting for a read lock stayed blocked until the read lock was finally released. The amazing part of the story is that he discovered this bug whilst clicking around in his Java Concurrent Animated program.
We can also try to convert a read lock to a write lock. Unlike the Kotlin ReentrantReadWriteLock.write() extension function, this will do the conversion atomically. However, it may still fail, for example if another thread currently holds the read lock as well. In that case, a reasonable approach would be to bail out and try again, or perhaps start with a write lock. Let's first have a look at the simple case of converting read to write:
import java.util.concurrent.locks.*;

public class StampedLockUpgradeDemo {
    public static void main(String... args) {
        var sl = new StampedLock();
        System.out.println(sl); // Unlocked
        long rstamp = sl.readLock();
        System.out.println(sl); // Read-locks:1
        long wstamp = sl.tryConvertToWriteLock(rstamp);
        if (wstamp != 0) {
            // works if no one else has a read-lock
            System.out.println("Converted read to write");
            System.out.println(sl); // Write-locked
            sl.unlockWrite(wstamp);
        } else {
            // we do not have an exclusive hold on read-lock
            System.out.println("Could not convert read to write");
            sl.unlockRead(rstamp);
        }
        System.out.println(sl); // Unlocked
    }
}
The StampedLock Javadoc documentation shows several idioms of how the StampedLock could be used. Two of these demonstrate how upgrades could be done, either from a pessimistic or an optimistic read. The upgrade idioms perform best when we have a relatively small chance of needing to upgrade to write and when that upgrade has a high chance of succeeding.
The idioms take some getting used to. At first they look a bit obscure, with labelled breaks and seemingly misconstructed for-loops. The optimistic read idioms in Java 8 were simpler to understand. However, the benefit of the more modern code is that we have less repetition of our reading code. I am not convinced that the check for if (stamp == 0L) continue retryHoldingLock; makes the code faster. Usually with optimistic reads, we want to go from tryOptimisticRead() to validate() as quickly as possible, to minimize the chances of another thread writing in the meantime. I did have a benchmark to prove this, but it was for an old version of StampedLock and I will have to redo that research.
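The best known of these idioms is the optimistic read from the StampedLock Javadoc. Here is a condensed version of the Javadoc's own Point example, showing the tryOptimisticRead()/validate() pair with a pessimistic fallback:

```java
import java.util.concurrent.locks.StampedLock;

// Condensed from the Point example in the StampedLock Javadoc.
public class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = sl.writeLock();
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead();
        double curX = x, curY = y; // read quickly, then validate
        if (!sl.validate(stamp)) {
            // a writer got in between; fall back to a pessimistic read
            stamp = sl.readLock();
            try {
                curX = x;
                curY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.hypot(curX, curY);
    }

    public static void main(String... args) {
        var p = new Point();
        p.move(3, 4);
        System.out.println(p.distanceFromOrigin()); // 5.0
    }
}
```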
To see the optimistic read idiom in action, have a look at today's commit of jdk.internal.foreign.MemoryScope. (Complete coincidence that this was checked in today whilst I am busy writing a Java newsletter featuring StampedLock. Thank you Doug Lea for pointing it out :-))
Kind regards from Crete
Heinz
Java Specialists Superpack 2020
Our entire Java Specialists Training in One Huge Bundle
If you no longer wish to receive our emails, click the link below: Unsubscribe
Just checking you received the video message I sent through a few days ago : https://www.bonjoro.com/t/710e2d4e-5ec2-46b0-8339-9739d5e7340b/open
Just click the link above to watch it in case you missed it in your inbox.
Thanks!
[285] I/O Stream Memory Overhead
Author: Dr Heinz M. Kabutz | Date: 2020-10-30 | Category: Performance | Java Version: 8+ | Read Online
Abstract: Each PrintStream uses about 25kb of memory. This might seem reasonable if we only have System.out and System.err. But what happens if we try to create millions? And why do they use so much memory?
Welcome to the 285th edition of The Java(tm) Specialists' Newsletter, sent to you from the rollin' and shakin' Island of Crete. Our house was rocking this afternoon from the strong earthquake near Samos. No matter how many times I feel the earth move under my feet in Crete, it always leaves me with a weird feeling. I know of at least one newsletter subscriber in Izmir - hope they are OK!
javaspecialists.teachable.com: Please visit our new self-study course catalog to see how you can upskill your Java knowledge.
I/O Stream Memory Overhead
A couple of weeks ago, my colleague John Green and I were experimenting with virtual threads (Project Loom). Our server would receive text messages, change their case, and echo them back. Our client simulated loads of users. We had spun the experiment up to 100k sockets per JVM, which worked out at a total of 200k virtual threads. Both server and client components were humming along fine, but we did notice that the memory usage on the client was magnitudes higher. But why? The server task looked like this:
import java.io.*;
import java.net.*;

class TransmogrifyTask implements Runnable {
    private final Socket socket;

    public TransmogrifyTask(Socket socket) throws IOException {
        this.socket = socket;
    }

    public void run() {
        try (socket;
             InputStream in = socket.getInputStream();
             OutputStream out = socket.getOutputStream()
        ) {
            while (true) {
                int val = in.read();
                if (val == -1) break; // orderly close by the client
                if (Character.isLetter(val))
                    val ^= ' '; // change case of all letters
                out.write(val);
            }
        } catch (IOException e) {
            // connection closed
        }
    }
}
The client side task conveniently used PrintStream and BufferedReader to communicate with the server:
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

class ClientTaskWithIOStreams implements Runnable {
    private final Socket socket;
    private final boolean verbose;

    public ClientTaskWithIOStreams(Socket socket, boolean verbose) {
        this.socket = socket;
        this.verbose = verbose;
    }

    private static final String message = "John 3:16";

    public void run() {
        try (socket;
             BufferedReader in = new BufferedReader(
                 new InputStreamReader(
                     socket.getInputStream()));
             PrintStream out = new PrintStream(
                 socket.getOutputStream(), true)
        ) {
            while (true) {
                out.println(message);
                TimeUnit.SECONDS.sleep(2);
                String reply = in.readLine();
                if (verbose) System.out.println(reply);
                TimeUnit.SECONDS.sleep(2);
            }
        } catch (Exception consumeAndExit) {}
    }
}
After running jmap's histogram on both JVMs, we noticed that the biggest memory hog was the PrintStream, followed by the BufferedReader. We thus changed the client task to instead send and receive individual bytes. Not all the clients are verbose, and thus we only create a StringBuilder when it is necessary. Furthermore, by default each ClientTask shares the same static Appendable, which returns a StringBuilder if it is a verbose client.
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

class ClientTask implements Runnable {
    private final Socket socket;
    private final boolean verbose;

    public ClientTask(Socket socket, boolean verbose) {
        this.socket = socket;
        this.verbose = verbose;
    }

    private static final byte[] message = "John 3:16\n".getBytes();

    private static final Appendable INITIAL = new Appendable() {
        public Appendable append(CharSequence csq) {
            return new StringBuilder().append(csq);
        }
        public Appendable append(CharSequence csq, int start, int end) {
            return new StringBuilder().append(csq, start, end);
        }
        public Appendable append(char c) {
            return new StringBuilder().append(c);
        }
    };

    public void run() {
        Appendable appendable = INITIAL;
        try (socket;
             InputStream in = socket.getInputStream();
             OutputStream out = socket.getOutputStream()
        ) {
            while (true) {
                for (byte b : message) {
                    out.write(b);
                }
                out.flush();
                TimeUnit.SECONDS.sleep(2);
                for (int i = 0; i < message.length; i++) {
                    int b = in.read();
                    if (verbose) {
                        appendable = appendable.append((char) b);
                    }
                }
                if (verbose) {
                    System.out.print(appendable);
                    appendable = INITIAL;
                }
                TimeUnit.SECONDS.sleep(2);
            }
        } catch (Exception consumeAndExit) {}
    }
}
This worked much better and the memory usage on the server and the client was roughly the same. We ran our experiment a bit longer and eventually had 2 million sockets open on the server JVM, serviced by 2 million virtual threads, serviced by just 12 carrier threads. Our client simulation had the same number of sockets and virtual threads, with a total of 4 million sockets and threads. The memory usage of all that came to under 3GB per JVM. Incredible technology and I cannot wait until it becomes mainstream in Java.
We performed another experiment to determine how much memory each of the Input- and OutputStreams, as well as the Readers and Writers, used. This was on our machine and your mileage might vary.
OutputStream
PrintStream 25064
BufferedOutputStream 8312
DataOutputStream 80
FileOutputStream 176
GZIPOutputStream 768
ObjectOutputStream 2264
InputStream
BufferedInputStream 8296
DataInputStream 328
FileInputStream 176
GZIPInputStream 1456
ObjectInputStream 2256
Writer
PrintWriter 80
BufferedWriter 16480
FileWriter 8608
OutputStreamWriter 8480
Reader
BufferedReader 16496
FileReader 8552
InputStreamReader 8424
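For the curious, a rough way to estimate such numbers yourself (this is my own sketch, not the measurement harness we used; the figures depend on JVM version, buffer defaults and GC timing, so treat them as ballpark values only):

```java
import java.io.*;

// Rough per-instance estimate: allocate many instances and divide
// the heap growth by the count. Results are approximate.
public class StreamMemory {
    private static final int COUNT = 1_000;

    public static void main(String... args) {
        var streams = new PrintStream[COUNT];
        long before = usedMemory();
        for (int i = 0; i < COUNT; i++) {
            streams[i] = new PrintStream(OutputStream.nullOutputStream(), true);
        }
        long after = usedMemory();
        System.out.printf("~%d bytes per PrintStream%n",
            (after - before) / COUNT);
    }

    private static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        // encourage a GC so free memory is reasonably stable
        for (int i = 0; i < 5; i++) System.gc();
        return rt.totalMemory() - rt.freeMemory();
    }
}
```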
As convenient as virtual threads are, we will need to change our coding practices. Who would have imagined that one day we would be able to create millions of threads in our JVMs? Even the Phaser has a maximum limit of 65535 parties. It is possible to compose Phasers, but I can imagine the inventors thinking that no one would ever have more than 64k threads. The ForkJoinPool has a similar limitation on the maximum length of their work queues. These numbers are reasonable when we have thousands of threads, but not so much when we have millions.
Kind regards from a wobbly Crete
Heinz
P.S. I have not answered the obvious question of why these objects use so much memory. It is mostly empty space in the form of buffers. For example, the BufferedReader has an 8k char[]. Since each char is two bytes, this comes to 16kb. The PrintStream contains an OutputStreamWriter (8kb) and a BufferedWriter (16kb), resulting in its roughly 25kb. Just lots and lots of empty nothingness.
Thanks for asking to receive information from Cretesoft Limited, publishers of The Java™ Specialists' Newsletter. Before we start sending you the information, we want to make sure we have your permission.
To confirm your request, please
click here.
Clicking the link above will confirm your email address and allow you to receive the information you requested. If you do not want to receive any communication, please ignore this message.
Thank you,
Kind regards
Heinz -- Dr Heinz M. Kabutz (PhD CompSci) heinz@javaspecialists.eu Author of "The Java™ Specialists'' Newsletter" Java Champion Oracle Developer Champion JavaOne Rock Star Skype: kabutz
[284] java.util.PrimitiveIterator.OfInt
Author: Dr Heinz M. Kabutz | Date: 2020-09-30 | Category: Tips and Tricks | Java Version: 8+ | Read Online
Abstract: Java 8 Streams were the first time that Java deliberately split utility classes into multiple versions to be used for Object, int, long and double. This design was also applied to Iterator, which now has specialized types for these primitives in the form of the PrimitiveIterator. In this newsletter we have a look at how we can use that in our own primitive collections.
Welcome to the 284th edition of The Java(tm) Specialists' Newsletter, sent to you from the beautiful Island of Crete. Whenever possible, I volunteer for the morning school drop-off, then head down to Kalathas Beach for a run on the soft sand and a dip in the sea afterwards. This morning, as I arrived, I saw one of the regulars doing his morning run, and a 70-year-old grandmother from Ireland. I knew that the guy is usually faster than me, but he does not run as far. It seemed that he was faster than the granny, but it left me wondering: where do I fit into the speed of things? I waited until they were on the other side of the beach before starting my run. That way, they would have a long way to catch up and I could rest assured that even though I was slow, it was unlikely they would make up an entire stretch of beach. My plan was working, and I was feeling pretty confident that I would not get embarrassed. After about a mile of running, granny popped into the sea to cool off. All good. I kept on running. When she emerged from the waves, she was about 30 meters behind me. How can a 48-year-old Java programmer feel threatened by someone whose grandson is already 24? I picked up the pace a bit. I got to the end of the beach before her and turned around. She had gained about 15 meters on me. I upped the pace even more. After more running, and with the end in sight, I saw her pulling past me. If I am in a race, and the rest of the racers are all pensioners, you can place a safe bet on me coming in last. But that won't stop me from running every day :-) Today was #720 in a row. I'm not trying to break land-speed records, but rather am aiming for a consistent focus on health and exercise. Still, it was funny when she pulled past me and I'm sure it made her day. Amazing!
javaspecialists.teachable.com: Please visit our new self-study course catalog to see how you can upskill your Java knowledge.
java.util.PrimitiveIterator.OfInt
Last week I taught the abstract class pattern to students of my Java Design Patterns Course. This pattern is used extensively in the Java Development Kit. For example, the ArrayDeque inherits a partial implementation from AbstractCollection and implements the Collection interface. BTW, due to the current COVID-19 pandemic, we are no longer delivering our courses in person, but instead via remote delivery. So far we have had very happy students. Our calendar is filling up and we only have a couple of weeks available until the end of 2020. So if you would like to treat your team to an awesome Java course, please let me know :-) Lots of topics available, and also short one-day workshops.
The students then had to apply their knowledge. I had written an IntArrayList and an IntArrayDeque, as the name would suggest, primitive int versions of ArrayList and ArrayDeque. However, these did not have a common superclass and so my students had to integrate everything into a common hierarchy. I wrote these classes about two years ago. The IntArrayList is almost exactly the same as ArrayList, but based on an int[]. However, when I tried to convert the ArrayDeque I got stuck. They were using null values in the array to check for concurrent modification errors. At the time I left the array as an Integer[], wanting to one day get back to it.
During one of my recent remote design patterns courses, I had given the students 15 minutes to solve the above-mentioned exercise, and thought that should suffice to quickly do the conversion. Whilst looking at it, I realized that what I was really missing was a thorough test for the ArrayDeque. Whenever we refactor code, we need to have a test suite to make sure we do not break anything. Talking of refactoring, I have also completely rewritten my Refactoring to Streams Course. We take an existing 300k LOC ERP application and go nuts rewriting the code into modern Java. It is more fun than should be allowed. During the refactoring course we don't bother testing, because none of our refactorings are pushed to production. But in real life we need good tests whenever we work on code.
Two days ago I was again looking at my IntArrayDeque, feeling like Captain Ahab, wondering when I would finally conquer my Moby Dick. I started writing some tests, but then started scratching around in the OpenJDK. After some searching, I found the TCK suite for the java.util.concurrent package. After a bit (ok a lot) of fiddling, I managed to convert it to work with my IntArrayDeque. As so often happens when one looks into the abyss of one's own code, I found a little bug in my IntArrayList#equals() method, thanks to the TCK. Oh, and I found a bug in the TCK :-)
During the process of refactoring the code over and over, I went through quite a few approaches to designing these primitive collections. Before we go on, I think I need to state the obvious: this is an intellectual curiosity, rather than of great practical consequence. In the last 23 years of coding Java, I am not sure that I ever truly needed a primitive ArrayDeque. But I did learn something new about what Java has to offer, and would like to share that with you today.
When I first tried creating a primitive collection, the first interface I created was an IntIterator, like so:
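A sketch of such an interface pair (this is my reconstruction; the exact original IntIterator may have differed, but it must at least have had hasNext(), next() and a forEachRemaining(IntConsumer), since the IntIteratorSpliterator below calls that method):

```java
import java.util.NoSuchElementException;
import java.util.function.IntConsumer;

// Reconstructed primitive iterator interfaces (names from the text).
interface IntIterator {
    boolean hasNext();
    int next();
    default void forEachRemaining(IntConsumer action) {
        while (hasNext()) action.accept(next());
    }
}

interface IntIterable {
    IntIterator iterator();
    // no enhanced for-in support, so we iterate by hand
    default void forEach(IntConsumer action) {
        for (IntIterator it = iterator(); it.hasNext(); ) {
            action.accept(it.next());
        }
    }
}

public class IntIteratorDemo {
    public static void main(String... args) {
        IntIterable oneToThree = () -> new IntIterator() {
            private int next = 1;
            public boolean hasNext() { return next <= 3; }
            public int next() {
                if (!hasNext()) throw new NoSuchElementException();
                return next++;
            }
        };
        oneToThree.forEach(i -> System.out.print(i + " ")); // 1 2 3
    }
}
```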
You might observe two weaknesses with this approach. Firstly, the enhanced for-in loop takes either an array or a java.lang.Iterable. Our IntIterable is neither. We thus cannot use the enhanced for-in loop for the body of our forEach() function. Back to the old iterator code idiom. The second issue is that we need to create an IntIteratorSpliterator, similar to the spliterator that is created for an iterator with Spliterators.spliteratorUnknownSize(Iterator).
import java.util.*;
import java.util.function.*;
publicclass IntIteratorSpliterator implements Spliterator.OfInt {
staticfinalint BATCH_UNIT = 1 << 10; // batch array size incrementstaticfinalint MAX_BATCH = 1 << 25; // max batch array size;privatefinal IntIterator it;
privatefinalint characteristics;
privatelong est; // size estimateprivateint batch; // batch size for splitspublic IntIteratorSpliterator(IntIterator iterator, int characteristics) {
this.it = iterator;
this.est = Long.MAX_VALUE;
this.characteristics = characteristics &
~(Spliterator.SIZED | Spliterator.SUBSIZED);
}
public OfInt trySplit() {
long s = est;
if (s > 1 && it.hasNext()) {
int n = batch + BATCH_UNIT;
if (n > s)
n = (int) s;
if (n > MAX_BATCH)
n = MAX_BATCH;
int[] a = new int[n];
int j = 0;
do { a[j] = it.next(); }
while (++j < n && it.hasNext());
batch = j;
if (est != Long.MAX_VALUE)
est -= j;
returnnew IntArraySpliterator(a, 0, j, characteristics);
}
returnnull;
}
public void forEachRemaining(IntConsumer action) {
if (action == null) throw new NullPointerException();
it.forEachRemaining(action);
}
public boolean tryAdvance(IntConsumer action) {
if (action == null) throw new NullPointerException();
if (it.hasNext()) {
action.accept(it.next());
return true;
}
return false;
}
public long estimateSize() {
return est;
}
public int characteristics() {
return characteristics;
}
private static final class IntArraySpliterator implements OfInt {
private final int[] array;
private int index; // current index, modified on advance/split
private final int fence; // one past last index
private final int characteristics;
public IntArraySpliterator(int[] array, int origin, int fence,
int additionalCharacteristics) {
this.array = array;
this.index = origin;
this.fence = fence;
this.characteristics = additionalCharacteristics |
Spliterator.SIZED | Spliterator.SUBSIZED;
}
public OfInt trySplit() {
int lo = index, mid = (lo + fence) >>> 1;
return (lo >= mid)
? null
: new IntArraySpliterator(array, lo, index = mid, characteristics);
}
public void forEachRemaining(IntConsumer action) {
if (action == null)
throw new NullPointerException();
int[] a;
int i, hi; // hoist accesses and checks from loop
if ((a = array).length >= (hi = fence) &&
(i = index) >= 0 && i < (index = hi)) {
do { action.accept(a[i]); } while (++i < hi);
}
}
public boolean tryAdvance(IntConsumer action) {
if (action == null)
throw new NullPointerException();
if (index >= 0 && index < fence) {
action.accept(array[index++]);
return true;
}
return false;
}
public long estimateSize() {
return (long) (fence - index);
}
public int characteristics() {
return characteristics;
}
public Comparator<? super Integer> getComparator() {
if (hasCharacteristics(Spliterator.SORTED))
return null;
throw new IllegalStateException();
}
}
}
}
This bothers me. Not as much as that sweet granny running past me on the beach, but almost as much. I knew that there were four different types of Spliterator. One for Object and then one for the primitives int, long and double. Could we also get different types of Iterator?
Some more digging brought up the java.util.PrimitiveIterator. It contains three inner interfaces OfInt, OfLong and OfDouble. Each of these has a specialized method for their type. For example PrimitiveIterator.OfInt has a nextInt() method. They are also preconfigured to use the correct consumer for their type.
I thus deleted my IntIterator and changed IntIterable to the following:
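The listing is again not reproduced above; a plausible reconstruction, based on the text's description of PrimitiveIterator.OfInt and Iterable<Integer>, might look like this (treat the details as my assumption):

```java
import java.util.NoSuchElementException;
import java.util.PrimitiveIterator;
import java.util.function.IntConsumer;

// Reconstruction of the elided listing - the exact interface from the
// newsletter is not shown, so treat the names here as an assumption.
interface PrimitiveIterable {
    interface OfInt extends Iterable<Integer> {
        PrimitiveIterator.OfInt iterator(); // covariant override

        default void forEach(IntConsumer action) {
            // nextInt() avoids the boxing that the enhanced for-in
            // loop over an Iterable<Integer> would cause
            for (PrimitiveIterator.OfInt it = iterator(); it.hasNext(); )
                action.accept(it.nextInt());
        }
    }
}

public class PrimitiveIterableSketch {
    static PrimitiveIterable.OfInt of(int... values) {
        return () -> new PrimitiveIterator.OfInt() {
            private int index;
            public boolean hasNext() { return index < values.length; }
            public int nextInt() {
                if (!hasNext()) throw new NoSuchElementException();
                return values[index++];
            }
        };
    }

    public static void main(String... args) {
        int sum = 0;
        for (int i : of(3, 1, 4)) // enhanced for-in loop now works
            sum += i;
        System.out.println(sum); // prints 8
    }
}
```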
Since our PrimitiveIterable.OfInt is also an Iterable<Integer>, we can use it in the enhanced for-in loop. In addition, Java should be smart enough to eliminate the object creation that happens due to the boxing and unboxing. However, I did not confirm that.
I started trying to also "primitivize" this interface, but got stuck on some of the methods, so I abandoned that. However, if we created an IntQueue interface, we could return an OptionalInt from the poll() and peek() methods. That would be a better design than the current Queue, which returns null when the queue is empty.
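As a sketch of that proposed design (IntQueue is not a JDK interface, and this bounded ring-buffer implementation is purely my own illustration):

```java
import java.util.OptionalInt;

// Sketch of the proposed design - IntQueue is not part of the JDK,
// and this minimal bounded ring buffer is my own illustration.
interface IntQueue {
    boolean offer(int value);
    OptionalInt poll(); // empty queue -> OptionalInt.empty(), not null
    OptionalInt peek();
}

public class IntQueueSketch implements IntQueue {
    private final int[] elements;
    private int head, size;

    public IntQueueSketch(int capacity) {
        elements = new int[capacity];
    }

    public boolean offer(int value) {
        if (size == elements.length) return false; // bounded for simplicity
        elements[(head + size++) % elements.length] = value;
        return true;
    }

    public OptionalInt poll() {
        if (size == 0) return OptionalInt.empty();
        int value = elements[head];
        head = (head + 1) % elements.length;
        size--;
        return OptionalInt.of(value);
    }

    public OptionalInt peek() {
        return size == 0 ? OptionalInt.empty()
                         : OptionalInt.of(elements[head]);
    }

    public static void main(String... args) {
        IntQueue queue = new IntQueueSketch(4);
        queue.offer(42);
        System.out.println(queue.peek()); // OptionalInt[42]
        System.out.println(queue.poll()); // OptionalInt[42]
        System.out.println(queue.poll()); // OptionalInt.empty
    }
}
```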
Initially I was going to post the entire IntArrayDeque implementation, but it got a bit too long. Also, there are at least a dozen subtleties in the code that would need some explaining. Perhaps one day I will offer it as a code walk-through webinar. But now I need to get ready for my jconf-dev talk starting in 30 minutes :-)
Kind regards from Crete
Heinz
Java Specialists Superpack 2020
Our entire Java Specialists Training in One Huge Bundle
If you no longer wish to receive our emails, click the link below: Unsubscribe
Black Friday @ JavaSpecialists.EU
Black Friday is the time for companies to get rid of old unwanted stock. Not so at JavaSpecialists.eu! We give you our best, at excellent prices. While bits last! Offers expire on the 4th of December 2020 at 3pm Eastern Time.
JGym.IO Subscriptions - 10% Discount
We launched a new Java learning program in November 2020 - JGym.IO. Choose between three programs - Live, Gold and Diamond. "Live" gives full access to all our live courses. "Gold" includes "Live", plus bundled self-study courses. "Diamond" is our premium product with access to all our self-study material, live courses and 12 one-on-one coaching sessions. Availability of the "Diamond" membership is limited. First come, first served.
JGym.IO Live 10% Off
JGym.IO Gold 10% Off
JGym.IO Diamond 10% Off
Individual Courses - 20% Discount
Our design patterns and dynamic proxies courses will change your way of thinking in Java. Good design leads to code that is more maintainable over a longer time. Dynamic code helps to avoid copy and paste programming.
Dynamic Proxies in Java 20% Off
Design Patterns in Java 20% Off
Superpack 2020 Bundle - 35% Discount
Get everything we have produced so far at a 35% discount. The discount applies both to once-off payments and to paying in 10 installments. In addition, you will get a free upgrade to the Superpack 2021 bundle once it becomes available. The Superpack does not include our live courses.
Superpack 35% Off
Inspirational - Life Skills - 20% Discount
The current pandemic has impacted most of our lives. We have spoken to a lot of programmers who are thinking of becoming freelancers at this time. But before you quit your job to do your own thing, please please please take this course. Included is a 15-minute consultation with the authors.
Entrepreneurially 20% Off
All prices exclude EU VAT.
Enjoy and happy Black Friday!
Learn More
Opt-Out of Course Adverts
Heinz Kabutz Director Cretesoft Limited heinz@javaspecialists.eu +306975595262 - Work www.javaspecialists.eu
[282] Biased locking a goner, but better things Loom ahead
Author: Dr Heinz M. Kabutz | Date: 2020-07-21 | Category: Concurrency | Java Version: 8+ | Read Online
Abstract: Biased locking has made unnecessary mutexes cheap for over a decade. However, it is disabled by default in Java 15, slated for removal. From Java 15 onwards we should be more diligent to avoid synchronized in places where we do not need it.
Welcome to the 282nd edition of The Java(tm) Specialists' Newsletter, sent to you from ... the Island of Crete (good guess :-)). This month I did several live Java streams. The first six were accidental ;-) My friend David sent me frantic messages on WhatsApp:
[13:31, 7/13/2020] David Gomez Garcia: Hey Heinz.
[13:31, 7/13/2020] David Gomez Garcia: I'm not sure if you are
streaming online in Facebook and periscope on purpose.
[13:32, 7/13/2020] David Gomez Garcia: It seems like you are
recording clips for your courses... and not really meant for
a live stream.
I was trying to record a "sales pitch" for my new Juppies 2 course. I have no problem speaking about technical things for hours. But marketing stuff - that is hard. My little "Go Live" button sent it to Restream.io, which then diligently broadcast my antics to three Facebook accounts, Periscope/Twitter, YouTube, Twitch and a few others. This was not for public consumption, and one of the preview images had me digging for diamonds. It took me an hour to delete them all.
But then I thought - this is fun, let us do more. I announce them on Twitter and the recordings are here.
Another thing. I have moved my Java consulting offerings onto Teachable as well, to make purchasing easier. You can buy single hours or bundles of consulting over here.
javaspecialists.teachable.com: Please visit our new self-study course catalog to see how you can upskill your Java knowledge.
Biased locking a goner, but better things Loom ahead
Last month, I sent a puzzle showing how single-threaded access of Vector had slowed down in Java 15. The first to send the correct explanation was Ulrich Grepel. With JEP 374, biased locking has been disabled and deprecated. Turn it on with -XX:+UseBiasedLocking and Java 15 runs as fast as the previous versions.
My second puzzle showed further evidence that biased locking, or rather its absence, was to blame. The IdentityHashMap calls System.identityHashCode() on the vectors, thus disabling biased locking on those individual objects (see newsletter 222). Well done to Bas de Bakker for being the first to figure out that weird behavior.
I also mentioned in the puzzle that the results were a bit different for Java 10. No one picked up that subtlety. Here are the biased locking JVM flags for Java 9:
Biased locking got a bad rap in the Java performance world. Many years ago, one of the engineers at Azul Systems wrote a benchmark that seemed to indicate that biased locking could cause a long time-to-safepoint. However, he left, and apparently his colleagues struggled to reproduce his results. Perhaps it is true, perhaps not. Or did confirmation bias make programmers blame biased locking? That would be ironic.
When Java 5 was released, programmers moved en masse to ReentrantLock, following the promise of better performance and richer functionality. However, code with ReentrantLock was also harder to write and certainly more challenging to debug. Since Java 8, there has been a shift back to synchronized. For example, ConcurrentHashMap was rewritten and now locks internally with synchronized instead of ReentrantLock. CopyOnWriteArrayList changed to synchronized in Java 9, with this comment capturing the thinking nicely:
/**
 * The lock protecting all mutators. (We have a mild preference
 * for builtin monitors over ReentrantLock when either will do.)
 */
final transient Object lock = new Object();
Synchronized is, in my experience, easier to analyze, more performant under low contention, and more robust. The coding idioms are also much easier than with ReentrantLock or StampedLock.
The only disadvantage that I know with synchronized is that virtual threads, as found in Project Loom, do not play nicely with monitor locks. Project Loom promises to be a game changer and should make coding in Java so much easier. It took me 2.5 hours to explain the basics of non-blocking IO. With Project Loom I could create the same functionality in one little class and in about 10 minutes of explanation, including time for questions.
If I had to choose which I want in Java 17, biased locking or virtual threads, I would definitely take virtual threads.
Back to biased locking. In JEP 374 they state: Furthermore, many applications that benefited from biased locking are older, legacy applications that use the early Java collection APIs, which synchronize on every access (e.g., Hashtable and Vector). Newer applications generally use the non-synchronized collections (e.g., HashMap and ArrayList), introduced in Java 1.2 for single-threaded scenarios, or the even more-performant concurrent data structures, introduced in Java 5, for multi-threaded scenarios.
True, it is unlikely that I would use Vector in modern code. Instead, I would use Collections.synchronizedList(new ArrayList<>()) if I needed a thread-safe list. Most of the time, I would write my code so that I would not have to synchronize my list and thus an ArrayList would do. However, for maps I follow the advice by Jack Shirazi, to use the ConcurrentHashMap as my default map. It is as sensible as wearing a seat belt. Most likely you will be just fine never wearing a seat belt, but you just need one accident to ruin your life. Similarly, the advice that I have been following and promulgating for the last few decades is to make our Java code correct and then let HotSpot optimize it for us. If it is fast enough then great, otherwise we profile and fix the bottlenecks. Synchronized was easy to fix. If a lock was contended, we could find it quickly with the available tooling.
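As a minimal sketch of that synchronizedList alternative (the example data is mine; note that iteration over the wrapper still requires manual synchronization, as its Javadoc states):

```java
import java.util.*;

// Minimal illustration (my own snippet) of the synchronizedList
// alternative mentioned above.
public class SafeListDemo {
    static int sumSafely() {
        List<Integer> list = Collections.synchronizedList(new ArrayList<>());
        Collections.addAll(list, 1, 2, 3);
        int sum = 0;
        // individual calls are thread-safe, but iteration must be
        // manually synchronized on the wrapper, per its Javadoc
        synchronized (list) {
            for (int i : list) sum += i;
        }
        return sum;
    }

    public static void main(String... args) {
        System.out.println(sumSafely()); // prints 6
    }
}
```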
With Java 15, this advice might be dangerous to follow. As we saw, our demo ran twice as slowly as in Java 14. All we did was use a class that happened to be synchronized. Furthermore, since each list is thread confined, the lock is never contended. Thus the threads would not go into the BLOCKED state. Our usual toolset for finding lock contention would not help us.
The same issue can also happen with ConcurrentHashMap, which sometimes uses synchronized on put().
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;
public class ConcurrentHashMapBench {
public static void main(String... args) {
for (int i = 0; i < 10; i++) {
test(false);
test(true);
}
}
private static void test(boolean parallel) {
IntStream range = IntStream.range(1, 100_000_000);
if (parallel) range = range.parallel();
long time = System.nanoTime();
try {
ThreadLocal<Map<Integer, Integer>> maps =
ThreadLocal.withInitial(() -> {
Map<Integer, Integer> result =
new ConcurrentHashMap<>();
for (int i = 0; i < 1024; i++)
result.put(i, i * i);
return result;
});
range.map(i -> maps.get().put(i & 1023, i)).sum();
} finally {
time = System.nanoTime() - time;
System.out.printf("%s %dms%n",
parallel ? "parallel" : "sequential",
(time / 1_000_000));
}
}
}
Here are the results for different versions of Java running on my 1-6-2 MacBook Pro Late 2018 model.
The degradation in performance when putting into a ConcurrentHashMap is not as bad in Java 15 as it was with Vector, but it is still easily observable:
When we explicitly turn biased locking on with -XX:+UseBiasedLocking, we get better performance:
OpenJDK 64-Bit Server VM warning: Option UseBiasedLocking was
deprecated in version 15.0 and will likely be removed in a future
release.
openjdk version "15-ea" 2020-09-15
OpenJDK Runtime Environment (build 15-ea+30-1476)
OpenJDK 64-Bit Server VM (build 15-ea+30-1476, mixed mode, sharing)
sequential 2237ms
parallel 490ms
sequential 2315ms
parallel 468ms
sequential 2285ms
parallel 444ms
sequential 2277ms
parallel 451ms
sequential 2222ms
parallel 461ms
sequential 2183ms
parallel 474ms
sequential 2236ms
parallel 455ms
sequential 2218ms
parallel 459ms
sequential 2192ms
parallel 437ms
sequential 2222ms
parallel 438ms
I have been consulting on Java for more than two decades. This change in Java 15 might add some wonderful new opportunities ;-) Jokes aside, for now there is an easy way to test. If the performance of your system is not good enough in Java 15, turn biased locking on and see if it improves to acceptable levels. Most likely it will not make a difference. If it does, then chances are that you are overusing synchronized. We would then need to use profilers to find the offending unnecessary mutexes. Good luck :-)
Kind regards from Crete
Heinz
[283] Four Billion Changes
Author: Dr Heinz M. Kabutz | Date: 2020-08-28 | Category: Tips and Tricks | Java Version: 9+ | Read Online
Abstract: A nice puzzle to brighten your day - how can we make the Iterator think that the List has not been changed?
Welcome to the 283rd edition of The Java(tm) Specialists' Newsletter, sent to you from the beautiful Island of Crete. We have had a rather odd summer. Warm weather, of course, but few tourists. Chania looks like it does in winter, just warmer. The beaches are void of organized sun umbrellas. Cretans, famous for their hospitality, are cautious of outsiders. But this too will pass.
I am speaking at the JVM Community Virtual Conference in Nigeria tomorrow (29th August) (more info here).
Four Billion Changes
A few days ago, I tweeted a #Java puzzle, asking how we could let an iterator continue to work, even after its collection had changed.
Here is the puzzle code:
import java.util.*;
public class ListSurprise {
public static void main(String[] args) {
// Make ListSurprise print 3.14159
System.setSecurityManager(new SecurityManager());
List<Integer> numbers = new ArrayList<>();
Collections.addAll(numbers, 3, 1, 4, 1, 5, 5, 9);
Iterator<Integer> it = numbers.iterator();
System.out.print(it.next()); // 3
System.out.print('.');
System.out.print(it.next()); // 1
System.out.print(it.next()); // 4
System.out.print(it.next()); // 1
System.out.print(it.next()); // 5
doSomething(numbers); // should make the next output be 9
System.out.println(it.next());
if (!numbers.equals(List.of(3, 1, 4, 1, 5, 9)))
throw new AssertionError();
}
private static void doSomething(List<Integer> list) {
// how???
}
}
The modCount field in AbstractList helps to discover concurrent updates to a list. Whenever the list is changed, the modCount is also incremented. When the Iterator is created, it makes a copy of the current modCount. If this changes during iteration, then the backing list must have changed. However, if we remove an item with the iterator itself, then the iterator's expectedModCount is also changed to match the list's modCount. The trick is thus to remove the element at index 5, and to then change the list another 4294967295 times, thus looping the int back to the starting point. My friend Olivier Croisier wrote about this trick ten years ago. Here is what it would look like:
private static void doSomething(List<Integer> list) {
// Remove element at index 5 and modify list 4 billion times
list.remove(5);
for (int i = Integer.MIN_VALUE; i < Integer.MAX_VALUE; i++) {
((ArrayList<Integer>) list).trimToSize();
}
}
The trimToSize() method increments the modCount even if it does not really change the structure of the list. It is thus a fairly quick way of spinning the modCount back to its original value.
Another interesting approach was with threads. The list.set() method does not increment modCount, since it does not change the structure of the list. Furthermore, when we look at System.out.println(it.next());, it is tempting to read from left to right. However, we all know that it.next() is executed first and then the System.out.println(). The second solution spawns a thread that synchronizes on System.out, thus preventing the main thread from continuing until we have removed the element at index 6.
I saw several solutions following similar approaches using Thread.sleep(). Here is mine, using Phaser and a spin loop that monitors the main thread's state. Once it is BLOCKED, we continue with removing the last element.
private static void doSomething(List<Integer> list) {
// Set item 5 to 9; block main thread as we remove last item
list.set(5, 9);
Phaser phaser = new Phaser(2);
Thread main = Thread.currentThread();
new Thread(() -> {
synchronized (System.out) {
phaser.arriveAndDeregister();
while(main.getState() != Thread.State.BLOCKED)
Thread.onSpinWait();
list.remove(6);
}
}).start();
phaser.arriveAndAwaitAdvance();
}
The third approach was even more obscure. We take advantage of type erasure to insert a magical object into the list. When toString() is called, which would be after the call to it.next(), but before the call to System.out.println(), we remove this, leaving behind the desired list of Integer objects.
private static void doSomething(List<Integer> list) {
// Replace 5 with object that removes itself and returns "9"
((List)list).set(5, new Object() {
public String toString() {
list.remove(this);
return"9";
}
});
}
The first and third solutions would work even with a stricter security manager installed. The second might not, since it is possible to prevent thread construction.
Here is one more that is the simplest solution, but also one that I like the least:
Deep reflection was not possible due to the security manager, although one of my respondents set up a policy file to allow that.
The lesson to learn from this is that the ConcurrentModificationException is best-effort. It might happen if a collection is modified, but we can also see other strange behaviour. We need to eradicate this exception from our systems.
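For contrast with the puzzle, here is a minimal example of the fail-fast check firing the usual way (my own snippet, not from the original solutions):

```java
import java.util.*;

// My own minimal example of the fail-fast check firing normally,
// in contrast to the puzzle above where it was defeated.
public class FailFastDemo {
    static String iterateWhileModifying() {
        List<Integer> numbers = new ArrayList<>(List.of(3, 1, 4));
        Iterator<Integer> it = numbers.iterator();
        numbers.add(1); // structural change bumps modCount
        try {
            it.next(); // expectedModCount no longer matches
            return "no exception";
        } catch (ConcurrentModificationException e) {
            return "ConcurrentModificationException";
        }
    }

    public static void main(String... args) {
        System.out.println(iterateWhileModifying());
    }
}
```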
Kind regards from Crete
Heinz
[281] Puzzle 2: Is it garbage?
Author: Dr Heinz M. Kabutz | Date: 2020-06-24 | Category: Performance | Java Version: 8+ | Read Online
Abstract: In our next puzzle, we up the ante a bit. We prevent GC during the test() method by storing a strong reference to all our Vectors. Cock and bull story, or are we struggling to identity our biases?
Welcome to the 281st edition of The Java(tm) Specialists' Newsletter. Yes, I'm still in Crete, where the sun is shining and the cicadas are finally starting to emerge from the ground. In a few days' time their deafening song will make all human communication impossible. But it's the sound of summer, and that's a good time of year :-)
Puzzle 2: Is it garbage?
A special thank you to all my wonderful Java Specialist readers for sending me a lot of great explanations to yesterday's puzzle. Most answers were correct, although there was one subtlety that so far everyone has missed. Here is a follow-up puzzle, but be careful as our identity should not be in our biases, yet it is. Or is it the other way round? Have fun :-)
In our VectorBench2, instead of only storing the Vector inside the ThreadLocal, we also store it inside a shared set. Since Vector does not implement Comparable, we cannot use TreeSet or ConcurrentSkipListSet. Also, the hashCode() method of Vector is calculated on all the elements, and similarly equals() compares the contents. Since a Vector may change, it is not a good idea to use it as a key in any hash set. In our particular case, all the Vectors contain the same elements, thus only one of them would remain in the Set. Neither HashSet nor ConcurrentHashMap.newKeySet() is an appropriate set for our use case. The only set that would work is the IdentityHashMap, converted with Collections.newSetFromMap() to a Set. The IdentityHashMap is not thread-safe, but we can solve that problem by wrapping it with Collections.synchronizedMap(). Since we now hold a reference to the Vector until we leave the method, we can be sure that it cannot somehow be garbage collected early. (My dad would have called all this a cock and bull story, but I assure you that it does affect the outcome.)
import java.util.*;
import java.util.stream.*;
public class VectorBench2 {
public static void main(String... args) {
for (int i = 0; i < 10; i++) {
test(false);
test(true);
}
}
private static void test(boolean parallel) {
Set<List<Integer>> vectors = Collections.newSetFromMap(
Collections.synchronizedMap(
// should not rely on a mutating hashCode()
new IdentityHashMap<>()
)
);
IntStream range = IntStream.range(1, 100_000_000);
if (parallel) range = range.parallel();
long time = System.nanoTime();
try {
ThreadLocal<List<Integer>> lists =
ThreadLocal.withInitial(() -> {
List<Integer> result = new Vector<>();
vectors.add(result); // avoid GC during run
for (int i = 0; i < 1024; i++) result.add(i);
return result;
});
range.map(i -> lists.get().get(i & 1023)).sum();
} finally {
time = System.nanoTime() - time;
System.out.printf("%s %dms%n",
parallel ? "parallel" : "sequential",
(time / 1_000_000));
}
}
}
Master new Java skills this June - Patterns, dynamic proxies and threading.
A wonderful good afternoon from a balmy Island of Crete. I run every day, even in the hottest months. And these days I'm on a new streak - to see how many nights in a row I can sleep in my own bed.
And I'm using this time at home to learn a ton of new skills. The beauty of our current situation is that suddenly, everyone is clamouring to move their events into cyberspace. From the comfort of my own home, I can listen to podcasts and online courses, usually at 2x.
Remote learning is not at all like being in a normal classroom. It has many advantages:
Lecturer speaking too softly? Simply up the volume.
Screaming toddler needs attention? Simply watch the replay.
Too shy to ask a question? Simply send a text message.
Lecturer too slow for incredible brain? Simply listen later at 2x.
Fonts too small? Simply zoom in.
Forgot what was said? Simply go over the lesson again.
We are running three LIVE courses from the 1-19 June, with sessions Mondays through Fridays:
Design Patterns
We take a deep dive into the 10 most useful design patterns.
June 1-5 2020, $197
Sign Up
Dynamic Proxies
We explore how to use dynamic proxies to get rid of countless LOC.
June 8-12 2020, $197
Sign Up
Mastering Threads
We learn how to work with concurrency constructs in Java.
June 15-19 2020, $197
Sign Up
Each session starts at 9am Los Angeles time (12pm New York, 5pm London, 6pm Berlin, 9:30pm Bangalore, 2am Sydney (sorry Australia!)) and runs for up to two hours.
At the end of each session, you get fun exercises to prepare for the next day.
The training is highly interactive and you can ask as many questions as you like. You get access to the recordings of each session so that you can review the day's lessons.
LIVE Bundle June 2020
Even better, get all three and score a $94 discount! It makes sense. Patterns are used in the dynamic proxies book, so you will be better prepared. And well, concurrency is a topic that we can all learn something more about. Besides, it's only 2 hours (maximum) a day. At least you'll come out of those first three weeks of June with some new skilllzzzz.
June 1-19 2020, $497
Sign Up
Bulk Discounts? 30% off
We offer a 30% discount on purchases of 50 licenses or more by one company. Please contact me by simply responding to this message and I will be delighted to personally assist you.
Hope to see you in two weeks time for our first bunch of LIVE courses :-)
Heinz Kabutz Director Cretesoft Limited heinz@javaspecialists.eu +306975595262 - Work www.javaspecialists.eu
[278] Free Memory
Author: Dr Heinz M. Kabutz | Date: 2020-04-30 | Category: Performance | Java Version: 15 | Read Online
Abstract: How much memory was wasted when an additional boolean field was added to java.lang.String in Java 13? None at all. This article explains why.
Welcome to the 278th edition of The Java(tm) Specialists' Newsletter, sent to you from the stunning Island of Crete. During the lockdown period, we are fortunately still allowed to go out for exercise. Thus my daily runs are continuing. I regularly share the lovely views on @heinzkabutz.
My book "Dynamic Proxies in Java" has now been published and you can get your free copy of the e-book from InfoQ.
Free Memory
Last month, in newsletter 277, I wrote about a change in Java 13 that prevented having to recalculate the hash code of a String in the unlikely case that it was 0. I saw several objections to the change, asking why Oracle had added another field to String, thus increasing its memory consumption.
Object size in Java is somewhat hard to determine. We do not have a sizeof operator. It also varies by system. For example, in a 64-bit JVM with compressed OOPS, we use 4 bytes for a reference and 12 bytes for the object header. If our JVM is configured with a maximum heap of 32 GB or more, then a reference is 8 bytes and the object header is 16 bytes.
One thing that is consistent with all JVM systems I have looked at, is that objects are aligned on 8 byte boundaries. This means that the actual memory usage of an object will always be a multiple of 8. Thus the java.lang.Boolean class is 12 bytes for the object header and one byte for the boolean, totalling 13 bytes. However, it will use 16 bytes, wasting 3 bytes due to object alignment.
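The padding arithmetic above can be sketched in a few lines (my own back-of-envelope helper, not JOL):

```java
// Back-of-envelope padding arithmetic (my own sketch, not JOL itself):
// round a raw object size up to the next 8-byte boundary.
public class AlignmentMath {
    static long align(long rawSize, long alignment) {
        return (rawSize + alignment - 1) / alignment * alignment;
    }

    public static void main(String... args) {
        // 12-byte header + 1 byte boolean = 13 raw bytes
        System.out.println(align(12 + 1, 8)); // prints 16
        // 16-byte header (32 GB+ heap) + 1 byte boolean = 17 raw bytes
        System.out.println(align(16 + 1, 8)); // prints 24
    }
}
```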
In the past, I used all sorts of trickery for guessing the object size. Nowadays I use JOL (Java Object Layout). For example, here is the output when we look at the internals of java.lang.Boolean:
java.lang.Boolean object internals:
OFFSET SIZE TYPE DESCRIPTION
0 4 (object header)
4 4 (object header)
8 4 (object header)
12 1 boolean Boolean.value
13 3 (loss due to the next object alignment)
Instance size: 16 bytes
Space losses: 0 bytes internal + 3 bytes external = 3 bytes total
As we see, the instance size is 16 bytes and we have three bytes that are unused space.
If we create a JVM with a 32GB heap (-Xmx32g), then the object header uses 16 bytes and thus the size is 17 bytes. However, the actual size is 24 bytes, due to object alignment:
java.lang.Boolean object internals:
OFFSET SIZE TYPE DESCRIPTION
0 4 (object header)
4 4 (object header)
8 4 (object header)
12 4 (object header)
16 1 boolean Boolean.value
17 7 (loss due to the next object alignment)
Instance size: 24 bytes
Space losses: 0 bytes internal + 7 bytes external = 7 bytes total
Let''s get back to String and consider the object sizes over the versions of Java. We are ignoring the size of the char[] or byte[] that contain the actual text.
Java 6 used 32 bytes, since they were storing the offset and count:
# java version "1.6.0_65"
OFFSET SIZE TYPE DESCRIPTION
0 4 (object header)
4 4 (object header)
8 4 (object header)
12 4 char[] String.value
16 4 int String.offset
20 4 int String.count
24 4 int String.hash
28 4 (loss due to the next object alignment)
Instance size: 32 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total
(Incidentally, when the cached hash was added to String in Java 1.3, most JVMs were 32-bit and the object header was just 8 bytes. In those days, the extra hash field fitted into the wasted space. Another interesting factoid from 2001 - in those days every field took at least 4 bytes, even boolean and byte. That changed in Java 1.4. Enough ancient history!)
Java 7 decreases this to 24 bytes. The hash32 field was an optimization to reduce DOS attacks on hash maps. It was "free" in terms of memory usage, since without that we would have had 4 unused bytes anyway.
# openjdk version "1.7.0_252" (Zulu 7.36.0.5-CA-macosx)
java.lang.String object internals:
OFFSET SIZE TYPE DESCRIPTION
0 4 (object header)
4 4 (object header)
8 4 (object header)
12 4 char[] String.value
16 4 int String.hash
20 4 int String.hash32
Instance size: 24 bytes
Space losses: 0 bytes internal + 0 bytes external = 0 bytes total
Java 8 gets rid of the hash32 field, which they replaced with a generalized solution inside java.util.HashMap. This did not save any memory in String, since those 4 bytes are now "wasted" due to the next object alignment.
# openjdk version "1.8.0_242" (Zulu 8.44.0.11-CA-macosx)
java.lang.String object internals:
OFFSET SIZE TYPE DESCRIPTION
0 4 (object header)
4 4 (object header)
8 4 (object header)
12 4 char[] String.value
16 4 int String.hash
20 4 (loss due to the next object alignment)
Instance size: 24 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total
Java 9 changed the array type to byte[] and added a coder. However, the String object still uses 24 bytes, with 3 lost due to object alignment.
# java version "9.0.4" build 9.0.4+11
java.lang.String object internals:
OFFSET SIZE TYPE DESCRIPTION
0 4 (object header)
4 4 (object header)
8 4 (object header)
12 4 byte[] String.value
16 4 int String.hash
20 1 byte String.coder
21 3 (loss due to the next object alignment)
Instance size: 24 bytes
Space losses: 0 bytes internal + 3 bytes external = 3 bytes total
Java 13 added the hashIsZero boolean field, which in Java uses 1 byte. However, we still do not use any additional memory. Thus, as stated in the abstract, adding this new field did not cost any additional memory.
# openjdk version "13.0.2" 2020-01-14 build 13.0.2+8
java.lang.String object internals:
 OFFSET  SIZE     TYPE  DESCRIPTION
      0     4           (object header)
      4     4           (object header)
      8     4           (object header)
     12     4   byte[]  String.value
     16     4      int  String.hash
     20     1     byte  String.coder
     21     1  boolean  String.hashIsZero
     22     2           (loss due to the next object alignment)
Instance size: 24 bytes
Space losses: 0 bytes internal + 2 bytes external = 2 bytes total
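The hashIsZero flag solves an old wart: since the hash field's default value 0 also means "not yet computed", strings whose real hash is 0 used to be rehashed on every hashCode() call. A simplified sketch of the caching logic, not the actual java.lang.String source:

```java
// Sketch of a cached hash with a hashIsZero flag. Field names mirror
// java.lang.String's, but this is a simplified illustration, not JDK code.
public class CachedHash {
    private final byte[] value;
    private int hash;            // cached hash; 0 may mean "not yet computed"
    private boolean hashIsZero;  // true when the real hash is genuinely 0

    public CachedHash(byte[] value) { this.value = value; }

    public int hashCode() {
        int h = hash;
        if (h == 0 && !hashIsZero) {
            for (byte b : value) h = 31 * h + (b & 0xff);
            if (h == 0) hashIsZero = true; // remember, so we never recompute
            else hash = h;
        }
        return h;
    }

    public static void main(String... args) {
        // Latin-1 bytes for "hi": 31 * 104 + 105 == 3329, same as "hi".hashCode()
        System.out.println(new CachedHash(new byte[] {104, 105}).hashCode()); // 3329
        System.out.println(new CachedHash(new byte[0]).hashCode());           // 0
    }
}
```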
When I ran the test in Java 15, I noticed a slight change in the object layout. After some searching, I found Shipilev's "Java Objects Inside Out" article, which includes a link to an enhancement that went into Java 15. Since Java 15, the field layout is a bit different: fields can be packed across class hierarchies. This has a whole bunch of implications for high-performance Java. I would encourage you to read Shipilev's article.
Kind regards from Crete
Heinz
Java Specialists Superpack 2020
Our entire Java Specialists Training in One Huge Bundle
[280] Puzzle: What's up with Vector?
Author: Dr Heinz M. Kabutz | Date: 2020-06-23 | Category: Performance | Java Version: 8+ | Read Online
Abstract: For today's puzzle, we are getting elements from Vector, sequentially and in parallel. But why is the performance so much worse in Java 15-ea+28?
Welcome to the 280th edition of The Java(tm) Specialists' Newsletter, sent to you from the Island of Crete. We spent the past three weeks doing live classes on topics like concurrency, design patterns and dynamic proxies. It was a lot of fun and the feedback from the students was amazing. Even though we repeated the content in the afternoons, some of the students liked the sessions so much that they came to both. Please let me know by email if you would like me to teach private live classes to your team of programmers.
javaspecialists.teachable.com: Please visit our new self-study course catalog to see how you can upskill your Java knowledge.
Puzzle: What's up with Vector?
So I wrote this little class this afternoon:
import java.util.*;
import java.util.stream.*;

public class VectorBench {
    public static void main(String... args) {
        for (int i = 0; i < 10; i++) {
            test(false);
            test(true);
        }
    }

    private static void test(boolean parallel) {
        IntStream range = IntStream.range(1, 100_000_000);
        if (parallel) range = range.parallel();
        long time = System.nanoTime();
        try {
            ThreadLocal<List<Integer>> lists =
                ThreadLocal.withInitial(() -> {
                    List<Integer> result = new Vector<>();
                    for (int i = 0; i < 1024; i++) result.add(i);
                    return result;
                });
            range.map(i -> lists.get().get(i & 1023)).sum();
        } finally {
            time = System.nanoTime() - time;
            System.out.printf("%s %dms%n",
                parallel ? "parallel" : "sequential", (time / 1_000_000));
        }
    }
}
Here are the results for different versions of Java running on my 1-6-2 MacBook Pro Late 2018 model.
Java 8 and 9 have similar performance characteristics: