How does the JVM manage different types of locks?

Detailed Explanation: The JVM doesn't jump straight to a "heavyweight" OS mutex when you write synchronized. It walks an optimization path to keep overhead low:

  1. Biased Locking: When a thread first acquires a lock, the JVM "stamps" that thread's ID into the object header (the Mark Word). If the same thread re-acquires the lock, the acquisition is essentially free—no atomic operation is needed. (Note: biased locking has been deprecated and disabled by default since JDK 15, per JEP 374.)

  2. Lightweight Locking: If a second thread tries to acquire the lock, the bias is revoked. The JVM copies the Mark Word into a lock record on the acquiring thread's stack and uses a CAS (Compare-And-Swap) to point the object's Mark Word at that record. Under brief contention the competing thread may spin—fast, but it burns CPU if the lock is held too long.

  3. Heavyweight Locking: If contention persists (threads keep spinning), the lock inflates: the JVM allocates a full OS-backed monitor and puts waiting threads to sleep (the BLOCKED state) so they stop burning CPU.
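The inflation step can be observed indirectly through thread states: a thread parked on an inflated monitor reports BLOCKED. A minimal sketch (class and method names are illustrative, and the sleep durations are arbitrary timing assumptions):

```java
public class LockInflationDemo {
    private static final Object lock = new Object();

    // Returns the waiter's state while another thread owns the monitor.
    static Thread.State contendedState() throws InterruptedException {
        // Holder grabs the monitor and keeps it for a while.
        Thread holder = new Thread(() -> {
            synchronized (lock) {
                try { Thread.sleep(500); } catch (InterruptedException ignored) {}
            }
        });
        holder.start();
        Thread.sleep(100); // let the holder win the race for the lock

        // Waiter contends for the same monitor, forcing inflation;
        // the JVM parks it instead of letting it spin indefinitely.
        Thread waiter = new Thread(() -> {
            synchronized (lock) { /* no-op */ }
        });
        waiter.start();
        Thread.sleep(100); // give the waiter time to block

        Thread.State state = waiter.getState();
        holder.join();
        waiter.join();
        return state;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(contendedState()); // typically prints BLOCKED
    }
}
```

The sleeps only make the interleaving predictable for the demo; in real code you would never rely on timing to sequence lock acquisition.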

Real-World Example: Imagine a single-threaded event loop (as in Vert.x, or Node.js-style logic running on the JVM). You might have synchronized methods for safety, but 99% of the time only the event-loop thread calls them, so biased locking makes those calls practically free. However, the moment a background worker thread touches those objects, the JVM revokes the bias (which requires a safepoint) and upgrades to lightweight locking.
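The event-loop scenario above boils down to this pattern—a synchronized method that is, in practice, only ever called from one thread, so every acquisition takes the uncontended fast path. A minimal sketch (the class is hypothetical, not a Vert.x API):

```java
public class EventLoopCounter {
    private long count = 0;

    // synchronized for safety, but in an event-loop design only the
    // loop thread calls this, so the lock is never contended.
    public synchronized void increment() { count++; }

    public synchronized long get() { return count; }

    public static void main(String[] args) {
        EventLoopCounter c = new EventLoopCounter();
        // Single-threaded hot path: every acquisition is uncontended.
        for (int i = 0; i < 1_000_000; i++) {
            c.increment();
        }
        System.out.println(c.get()); // prints 1000000
    }
}
```

Introducing a second thread that calls increment() would not change the result, but it would force the JVM off this cheap path, exactly as described above.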
