Thread Contention and Lock Optimization in High-Concurrency Java Systems
Thread contention is the silent killer of Java performance. A service that runs flawlessly at 50 requests per second can collapse to near-zero throughput at 500 req/s — not because of CPU or memory pressure, but because every thread is blocked waiting for the same lock. Diagnosing and eliminating contention is one of the highest-leverage skills in the Java performance engineering toolkit.
Part of the Java Performance Engineering Series.
Introduction
Modern Java applications are built around shared state. Connection pools, caches, rate limiters, counters, session stores — all require coordinated access from multiple threads. The JVM's memory model guarantees safety through synchronization primitives: synchronized blocks, ReentrantLock, ReadWriteLock, and the classes in java.util.concurrent. These primitives are correct, but correctness and performance are not the same thing.
When multiple threads compete for the same lock, only one wins. The rest block — they are suspended by the OS scheduler, their CPU core goes idle, and they sit in the BLOCKED or WAITING state consuming memory but producing nothing. At low concurrency, the probability of two threads colliding on the same lock is low. As request rate scales, the probability approaches certainty, and throughput collapses in a pattern that looks nothing like the smooth degradation curves engineers expect from capacity planning models.
This post covers the full lifecycle of a contention problem: from the production incident that reveals it, through the diagnostic tools that locate it, to the concrete code changes that eliminate it. Code examples are drawn from real patterns seen in high-traffic Java services.
The Real-World Problem: The Payment Service That Slowed to a Crawl
A fintech startup runs a payment processing service on a 16-core AWS instance. Load testing up to 100 req/s shows excellent P99 latency of 45ms. The team deploys to production with confidence. During the first week, traffic grows steadily. At 200 req/s, P99 climbs to 180ms — noticeable, but acceptable. At 350 req/s, something unexpected happens: P99 spikes to 4,200ms. At 500 req/s, the service is effectively dead — throughput measured at the load balancer drops to under 30 req/s even though the JVM is running, the GC is healthy, CPU is at 12%, and heap usage is normal.
The engineering team pulls a thread dump using jstack <pid>. What they see is startling: 194 of the 200 threads in the HTTP worker pool are in state BLOCKED, all waiting to enter the same method:
"http-nio-8080-exec-47" #89 prio=5 os_prio=0 cpu=2.31ms elapsed=41.22s tid=0x00007f... nid=0x... waiting for monitor entry [0x...]
java.lang.Thread.State: BLOCKED (on object monitor)
at com.payments.RateLimitService.checkAndIncrement(RateLimitService.java:34)
- waiting to lock <0x00000007b2c1e890> (a com.payments.RateLimitService)
at com.payments.PaymentController.processPayment(PaymentController.java:58)
at ...
"http-nio-8080-exec-3" #45 prio=5 os_prio=0 cpu=1.45ms elapsed=41.55s tid=0x00007f... nid=0x... locked <0x00000007b2c1e890>
java.lang.Thread.State: RUNNABLE
at com.payments.RateLimitService.checkAndIncrement(RateLimitService.java:34)
at ...
The root cause is a synchronized method in RateLimitService that reads from a Redis cache, checks a counter, and increments it — a chain of operations that takes 8–12ms per call under production load. With 200 threads each wanting to hold this lock for 10ms, the math is brutal: maximum theoretical throughput is 1000ms / 10ms = 100 calls per second, regardless of how many CPU cores or threads are available. Adding more threads makes it worse — more contenders means more time in the lock queue, and the OS scheduler overhead of suspending and resuming hundreds of blocked threads adds its own tax.
Understanding Thread Contention
Thread contention occurs whenever two or more threads compete for a shared resource that can only be held exclusively. It manifests in three primary forms.
Lock contention is the most common form in Java applications. A synchronized method, ReentrantLock, or database row lock serializes all access through a single gate. The thread holding the lock runs; all others block. Lock contention is observable in thread dumps as threads in state BLOCKED (for synchronized) or WAITING (for Lock.lock()).
CPU contention occurs when more runnable threads exist than available CPU cores. The OS scheduler time-slices CPU execution, introducing context-switch overhead. Unlike lock contention, CPU contention is addressed by reducing active thread count, not by eliminating locks. Profilers show high time in OS scheduler code when CPU contention dominates.
Memory contention occurs at the hardware level through cache-line invalidation. Two threads writing to variables that share the same 64-byte CPU cache line cause repeated cache coherence traffic between CPU cores, dramatically reducing throughput even without any explicit locking. This is the false sharing problem, and it can be eliminated by padding shared variables to separate cache lines — @Contended in Java 8+ achieves this.
Lock contention has a distinctive non-linear performance signature: throughput scales normally up to the lock's saturation point, then plateaus or collapses regardless of additional resources. This makes it particularly dangerous in capacity planning — a service that handles 10× load in load tests (with a single client) may collapse at 2× production load (with many concurrent clients).
Diagnosing Contention: Tools and Techniques
Correct diagnosis is everything. Optimizing the wrong lock wastes engineering time and may introduce new bugs. Use a graduated diagnostic approach: start with the fastest tool, progress to deeper analysis only when needed.
jstack thread dumps are the fastest first step. Run jstack <pid> three times, 5 seconds apart, and compare output. Threads that appear in BLOCKED state in all three dumps, waiting on the same monitor address, are your primary contention hotspot. The thread that holds the lock is identified by the locked <address> annotation — its stack trace tells you exactly what work it is doing while holding the lock.
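This three-dump workflow can be scripted. A minimal shell sketch — the PID argument, output paths, dump count, and interval are all illustrative assumptions:

```shell
# Capture several thread dumps at an interval and report how many threads
# are BLOCKED in each; a stable high count across dumps points to
# persistent lock contention rather than a transient spike.
dump_and_count() {
  pid="$1"; n="${2:-3}"; interval="${3:-5}"
  i=1
  while [ "$i" -le "$n" ]; do
    jstack "$pid" > "/tmp/td-$i.txt"
    [ "$i" -lt "$n" ] && sleep "$interval"
    i=$((i + 1))
  done
  i=1
  while [ "$i" -le "$n" ]; do
    blocked=$(grep -c 'java.lang.Thread.State: BLOCKED' "/tmp/td-$i.txt")
    echo "dump $i: $blocked BLOCKED threads"
    i=$((i + 1))
  done
}
# Usage: dump_and_count <pid> 3 5
```

If the BLOCKED count stays high across all dumps, diff the monitor addresses to confirm the threads are waiting on the same lock.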
async-profiler is the most powerful tool for production lock profiling. It uses Linux perf events and Java's AsyncGetCallTrace to generate lock contention flame graphs without significant overhead:
# Profile lock contention for 30 seconds and output a flame graph
./asprof -e lock -d 30 -f /tmp/lock-flamegraph.html <pid>
# Alternatively, using the agent JAR for containerized environments
java -agentpath:/opt/async-profiler/libasyncProfiler.so=start,event=lock,\
file=/tmp/locks.html,duration=30 -jar app.jar
The resulting flame graph shows which classes and methods are hotspots for lock contention, ranked by the number of blocked samples. A wide bar at RateLimitService.checkAndIncrement immediately confirms your thread dump findings with quantitative data.
Java Flight Recorder (JFR) provides deep JVM-level visibility with under 1% production overhead. Enable it and parse contention events:
# Start JFR recording with thread contention enabled
jcmd <pid> JFR.start duration=60s filename=/tmp/recording.jfr \
settings=profile
# Print Java monitor blocked events with stack traces. Each event is a
# multi-line record, so inspect the output directly (or use
# `jfr print --json` if you need to sort events by duration programmatically)
jfr print --events jdk.JavaMonitorEnter --stack-depth 12 \
/tmp/recording.jfr
VisualVM provides a GUI-based thread timeline that visualizes BLOCKED vs RUNNABLE states over time — valuable for communicating findings to stakeholders who are not comfortable reading raw thread dumps.
Micrometer / JMX metrics expose contention indicators at runtime. Monitor jvm.threads.states with tag state=blocked. A rising baseline of blocked threads — even without a spike in errors — is an early warning of lock saturation approaching. Set an alert when blocked thread count exceeds 10% of the total pool size.
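The same signal can be sampled in-process via ThreadMXBean, which is what the Micrometer gauge reads under the hood. A minimal sketch — the class name and the 10% threshold follow the alerting rule above, but are otherwise assumptions:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Polls ThreadMXBean and computes the fraction of live threads currently
// in BLOCKED state — the raw input behind a jvm.threads.states-style alert.
public class BlockedThreadMonitor {
    static final double ALERT_THRESHOLD = 0.10; // alert above 10% blocked

    public static double blockedFraction() {
        ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
        ThreadInfo[] infos = tmx.getThreadInfo(tmx.getAllThreadIds());
        int total = 0, blocked = 0;
        for (ThreadInfo info : infos) {
            if (info == null) continue; // thread may have exited since the id snapshot
            total++;
            if (info.getThreadState() == Thread.State.BLOCKED) blocked++;
        }
        return total == 0 ? 0.0 : (double) blocked / total;
    }

    public static void main(String[] args) {
        double f = blockedFraction();
        System.out.printf("blocked fraction: %.2f (alert=%b)%n", f, f > ALERT_THRESHOLD);
    }
}
```

In a real service you would schedule this on a background executor and emit it as a gauge rather than printing it.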
Common Contention Patterns and Fixes
a. Synchronized Method Holding a Lock Too Long
The most common contention pattern is a synchronized method that performs expensive work while holding the lock — I/O, remote calls, complex computation. The fix is to reduce lock scope to only the genuinely shared mutable state.
// BEFORE: entire method synchronized — Redis call holds the lock
public class RateLimitService {
private final Map<String, Integer> counters = new HashMap<>();
public synchronized boolean checkAndIncrement(String clientId) {
// This Redis call takes 8-12ms and holds the lock the entire time
int currentCount = redisClient.get("rate:" + clientId);
if (currentCount >= LIMIT) return false;
redisClient.increment("rate:" + clientId);
counters.put(clientId, currentCount + 1);
return true;
}
}
// AFTER: lock scope reduced to only the local counter update
public class RateLimitService {
private final ConcurrentHashMap<String, AtomicInteger> localCounters
= new ConcurrentHashMap<>();
public boolean checkAndIncrement(String clientId) {
// Redis call outside any lock
int redisCount = redisClient.get("rate:" + clientId);
if (redisCount >= LIMIT) return false;
// Only the local in-memory counter update is synchronized (via atomic)
AtomicInteger local = localCounters.computeIfAbsent(
clientId, k -> new AtomicInteger(0));
int localVal = local.incrementAndGet();
if (localVal > LOCAL_BUDGET) {
local.decrementAndGet();
return false;
}
return true;
}
}
By moving the Redis call outside the critical section, the time spent holding any lock drops from 10ms to microseconds. Throughput scales proportionally to the number of cores rather than being capped by a single serialization point.
b. Shared Mutable Counter: AtomicLong vs LongAdder
Incrementing a shared counter is a ubiquitous pattern — request counts, error rates, active sessions. Using a synchronized counter or even AtomicLong becomes a contention hotspot at high concurrency because every increment is a compare-and-swap (CAS) operation that invalidates the cache line holding the counter on all other cores.
LongAdder (Java 8+) solves this with striped cells — each thread updates a thread-local cell, and the true sum is computed only when sum() is called. Under high contention, LongAdder can be 10–100× faster than AtomicLong:
// AtomicLong: all threads CAS the same memory location
private final AtomicLong requestCount = new AtomicLong(0);
// Under 500 concurrent threads, CAS failures cause retry loops.
// Profile shows hot CAS loop in AtomicLong.incrementAndGet().
requestCount.incrementAndGet();
// LongAdder: threads update independent cells, no CAS contention
private final LongAdder requestCount = new LongAdder();
// Each increment is effectively uncontended in most cases.
requestCount.increment();
// Reading the sum is slightly more expensive (aggregates cells),
// so use LongAdder when writes >> reads.
long total = requestCount.sum();
Use AtomicLong when you need compare-and-set semantics (e.g., "increment only if below threshold"). Use LongAdder when you simply need a high-throughput counter and read frequency is low relative to write frequency.
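A small harness makes the equivalence concrete: both counters converge to the same total under concurrency, and only their scalability differs. Thread and iteration counts below are arbitrary:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

// Increment an AtomicLong and a LongAdder from many threads and confirm
// both report the same total — the trade-off is throughput, not correctness.
public class CounterDemo {
    public static long[] run(int threads, int perThread) throws InterruptedException {
        AtomicLong atomic = new AtomicLong();
        LongAdder adder = new LongAdder();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < perThread; i++) {
                    atomic.incrementAndGet(); // single CAS target, contended
                    adder.increment();        // striped cells, mostly uncontended
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return new long[] { atomic.get(), adder.sum() };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] totals = run(8, 100_000);
        System.out.println(totals[0] + " " + totals[1]); // both 800000
    }
}
```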
c. Single HashMap → ConcurrentHashMap.compute()
Replacing a synchronized HashMap with ConcurrentHashMap is the most impactful single-line change in many Java services. Java 7's ConcurrentHashMap used lock striping with 16 segment locks by default; the Java 8 rewrite replaced segments with per-bucket node locking plus CAS for inserts into empty bins, so operations on independent keys almost never contend.
// BEFORE: entire map locked on every read and write
private final Map<String, SessionData> sessions = new HashMap<>();
public SessionData getOrCreate(String sessionId) {
synchronized (sessions) {
return sessions.computeIfAbsent(sessionId, SessionData::new);
}
}
// AFTER: ConcurrentHashMap with atomic compute — lock only on the specific key's bucket
private final ConcurrentHashMap<String, SessionData> sessions
= new ConcurrentHashMap<>();
public SessionData getOrCreate(String sessionId) {
// computeIfAbsent locks only the bucket for sessionId,
// not the entire map. 16 threads with different keys run in parallel.
return sessions.computeIfAbsent(sessionId, SessionData::new);
}
// For update-or-insert patterns, compute() provides atomic read-modify-write:
public void recordRequest(String clientId, long latencyMs) {
sessions.compute(clientId, (key, existing) -> {
if (existing == null) return new SessionData(latencyMs);
existing.recordLatency(latencyMs);
return existing;
});
}
d. StampedLock for Read-Heavy Workloads
For data structures where reads vastly outnumber writes, ReadWriteLock allows concurrent readers while serializing writers. StampedLock (Java 8+) goes further with optimistic reads — a reader checks whether a write occurred during its read, and only falls back to a full read lock if a write was detected. Under low-write workloads, the optimistic path is almost always taken, giving read performance close to an uncontended lock:
public class ExchangeRateCache {
private final StampedLock lock = new StampedLock();
private double usdToEur = 1.08;
private double usdToGbp = 0.79;
// Optimistic read: no lock acquisition in the common case
public double getUsdToEur() {
long stamp = lock.tryOptimisticRead();
double result = usdToEur;
if (!lock.validate(stamp)) {
// A writer was active during our read — fall back to read lock
stamp = lock.readLock();
try {
result = usdToEur;
} finally {
lock.unlockRead(stamp);
}
}
return result;
}
// Write lock: exclusive access for updates (infrequent)
public void updateRates(double newEur, double newGbp) {
long stamp = lock.writeLock();
try {
usdToEur = newEur;
usdToGbp = newGbp;
} finally {
lock.unlockWrite(stamp);
}
}
}
Important: StampedLock is not reentrant and does not support condition variables. Do not use it as a drop-in replacement for ReentrantReadWriteLock in code that re-enters the same lock — doing so will deadlock. Note also that under Java 21 virtual threads the pinning problem discussed later is specific to synchronized blocks; java.util.concurrent locks such as StampedLock and ReentrantLock park cooperatively and do not pin carrier threads.
e. Lock Striping for High-Throughput Maps
When ConcurrentHashMap's built-in striping is insufficient (e.g., the value computation under compute() is expensive and long-running), implement your own lock striping with a fixed array of locks:
public class StripedCache<K, V> {
private static final int STRIPE_COUNT = 64; // power of 2 for cheap modulo
private final Object[] stripes = new Object[STRIPE_COUNT];
private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
public StripedCache() {
for (int i = 0; i < STRIPE_COUNT; i++) stripes[i] = new Object();
}
private Object stripeFor(K key) {
// The mask alone handles negative hash codes safely; Math.abs would be
// redundant here and breaks on Integer.MIN_VALUE
return stripes[key.hashCode() & (STRIPE_COUNT - 1)];
}
public V getOrLoad(K key, Function<K, V> loader) {
V existing = map.get(key);
if (existing != null) return existing; // Fast path: no lock needed
synchronized (stripeFor(key)) {
// Double-check inside lock: another thread may have loaded it
return map.computeIfAbsent(key, loader);
}
}
}
Lock-Free Programming with java.util.concurrent
The best lock is no lock at all. The java.util.concurrent package provides lock-free and wait-free data structures built on hardware compare-and-swap (CAS) operations that execute atomically at the CPU level without OS scheduler involvement.
ConcurrentHashMap is the workhorse for shared maps. Its compute(), computeIfAbsent(), and merge() operations provide atomic read-modify-write semantics without external locking.
CopyOnWriteArrayList is optimal when reads vastly outnumber writes and the list is small. Every write creates a new copy of the underlying array, making writes expensive but reads completely lock-free. Ideal for listener/observer lists, configuration values, or allowlists that are read thousands of times per second but updated rarely:
// Listener registration — written once at startup, read on every event
private final List<MetricsListener> listeners = new CopyOnWriteArrayList<>();
// Safe iteration without any synchronization — reads a snapshot of the array
public void publishMetric(Metric m) {
for (MetricsListener listener : listeners) {
listener.onMetric(m); // No ConcurrentModificationException ever
}
}
LinkedBlockingQueue vs ArrayBlockingQueue: For producer-consumer queues, LinkedBlockingQueue uses separate head and tail locks, allowing concurrent put and take operations from different threads without contention. ArrayBlockingQueue uses a single lock for both head and tail — simpler but higher contention under bidirectional load. Prefer LinkedBlockingQueue for balanced producer-consumer throughput; prefer ArrayBlockingQueue when memory predictability matters (no heap allocation per element).
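A minimal producer-consumer sketch over LinkedBlockingQueue — the capacity, item count, and the poison-pill convention for shutdown are illustrative assumptions:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One producer, one consumer. Because LinkedBlockingQueue uses separate
// head and tail locks, put() and take() proceed concurrently without
// contending on a single lock.
public class PipelineDemo {
    public static int drain(int items) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(1024);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) queue.put(i);
                queue.put(-1); // poison pill signals end of stream
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int consumed = 0;
        while (true) {
            int v = queue.take();
            if (v == -1) break;
            consumed++;
        }
        producer.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(drain(10_000)); // prints 10000
    }
}
```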
AtomicReference for compare-and-swap logic: When you need to atomically update a complex object reference only if it matches an expected value, AtomicReference provides lock-free conditional update:
private final AtomicReference<Config> currentConfig = new AtomicReference<>(Config.defaults());
// Lock-free atomic config reload — safe under concurrent reads
public boolean reloadConfig(Config expected, Config newConfig) {
// Only updates if currentConfig still equals 'expected' at the moment of swap.
// If another thread updated it first, compareAndSet returns false — no lost updates.
return currentConfig.compareAndSet(expected, newConfig);
}
public Config getConfig() {
return currentConfig.get(); // Fully lock-free read
}
Deadlock: Detection and Prevention
Deadlock occurs when two threads each hold a lock the other needs, and neither can proceed. Unlike liveness issues caused by contention, deadlocks are permanent — the affected threads never make progress without external intervention (process restart).
The classic deadlock pattern involves two threads acquiring the same two locks in reverse order:
// Thread 1: acquires accountA lock, then tries accountB
public void transferAtoB(Account accountA, Account accountB, double amount)
throws InterruptedException {
synchronized (accountA) {
Thread.sleep(10); // Simulates processing delay; Thread 2 acquires B in this window
synchronized (accountB) {
accountA.debit(amount);
accountB.credit(amount);
}
}
}
// Thread 2: acquires accountB lock, then tries accountA — DEADLOCK
public void transferBtoA(Account accountB, Account accountA, double amount) {
synchronized (accountB) {
synchronized (accountA) { // Waits for accountA — held by Thread 1
accountB.debit(amount);
accountA.credit(amount);
}
}
}
Thread 1 holds accountA and waits for accountB. Thread 2 holds accountB and waits for accountA. Both wait forever.
Detection: The JVM's ThreadMXBean can detect deadlocks programmatically at runtime:
ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
long[] deadlockedThreadIds = tmx.findDeadlockedThreads();
if (deadlockedThreadIds != null) {
ThreadInfo[] infos = tmx.getThreadInfo(deadlockedThreadIds, true, true);
for (ThreadInfo info : infos) {
log.error("DEADLOCK DETECTED: Thread '{}' is deadlocked. " +
"Waiting for lock owned by thread '{}'",
info.getThreadName(), info.getLockOwnerName());
}
// Trigger alert / heap dump for post-mortem
}
You can also use Thread.holdsLock(object) in debug code to assert lock ownership invariants during testing.
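A sketch of that assertion pattern — the class and method names are illustrative, and assertions must be enabled with -ea for the check to fire:

```java
// Guard an internal method that must only run while the caller holds the
// instance lock. If a future refactor calls applyDelta without the lock,
// the assertion fails fast in tests instead of silently racing.
public class LedgerStore {
    private long balance;

    public synchronized void deposit(long amount) {
        applyDelta(amount);
    }

    private void applyDelta(long delta) {
        assert Thread.holdsLock(this) : "applyDelta requires the instance lock";
        balance += delta;
    }

    public synchronized long balance() {
        return balance;
    }
}
```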
Prevention — lock ordering: The simplest deadlock prevention strategy is to always acquire multiple locks in a consistent global order. For accounts, use the natural ordering of account IDs:
public void transfer(Account from, Account to, double amount) {
// Always acquire locks in account ID order regardless of transfer direction
Account first = from.getId() < to.getId() ? from : to;
Account second = from.getId() < to.getId() ? to : from;
synchronized (first) {
synchronized (second) {
from.debit(amount);
to.credit(amount);
}
}
}
Prevention — tryLock with timeout: ReentrantLock.tryLock(timeout, unit) allows a thread to give up if it cannot acquire the lock within a bounded time, breaking the deadlock cycle at the cost of a retry:
public boolean transfer(ReentrantLock lockA, ReentrantLock lockB,
double amount) throws InterruptedException {
if (lockA.tryLock(100, TimeUnit.MILLISECONDS)) {
try {
if (lockB.tryLock(100, TimeUnit.MILLISECONDS)) {
try {
performTransfer(amount);
return true;
} finally {
lockB.unlock();
}
}
} finally {
lockA.unlock();
}
}
return false; // Caller retries with backoff
}
Virtual Threads and Contention (Java 21)
Java 21's virtual threads (Project Loom) dramatically change the economics of concurrency. A JVM can host millions of virtual threads at near-zero memory overhead, and blocking I/O operations no longer consume an OS thread — the virtual thread is unmounted from its carrier, which is freed to run other virtual threads. For I/O-bound workloads, virtual threads effectively eliminate the thread pool sizing problem.
However, virtual threads introduce a new contention issue: carrier thread pinning. When a virtual thread executes a synchronized block, it pins its carrier OS thread for the duration of the block. If the synchronized block performs blocking I/O, the carrier is stuck, unable to run other virtual threads. Under high concurrency with many pinned carriers, the virtual thread executor's OS thread pool fills up, and the system degrades to behavior similar to traditional thread pool exhaustion.
// PROBLEMATIC with virtual threads: synchronized pins the carrier thread
public class TokenBucket {
private long tokens = CAPACITY;
public synchronized boolean tryConsume() {
if (tokens > 0) { tokens--; return true; }
return false;
// If this method did I/O while holding the lock, the carrier would be pinned
}
}
// CORRECT for virtual threads: ReentrantLock does NOT pin carrier threads
public class TokenBucket {
private final ReentrantLock lock = new ReentrantLock();
private long tokens = CAPACITY;
public boolean tryConsume() {
lock.lock();
try {
if (tokens > 0) { tokens--; return true; }
return false;
} finally {
lock.unlock();
}
}
}
Detect pinning in production with the JFR event jdk.VirtualThreadPinned, which by default records only pinning episodes longer than its 20ms threshold. The JVM also prints a stack trace for each pinning event to stderr when the system property -Djdk.tracePinnedThreads=full is set. Migrating from synchronized to ReentrantLock in hot paths is the recommended remediation for virtual-thread-heavy applications.
Also be aware that in Java 21, Object.wait() and any blocking call made while inside a synchronized block (including LockSupport.park()) cause pinning. Design virtual-thread-aware code to avoid blocking operations inside synchronized regions.
Architecture: From Shared Mutable State to Partitioned State
The deepest fix for lock contention is architectural: eliminate shared mutable state by design rather than managing it with finer-grained locks.
Partitioned state with lock striping: Instead of one shared data structure protected by one lock, partition data by a natural key (user ID, tenant ID, shard key) and assign each partition an independent state object and lock. Threads operating on different partitions never contend. This is the architectural generalization of ConcurrentHashMap's internal design.
Actor model / message passing: The actor model (exemplified by Akka, or approximated in plain Java with per-shard single-threaded executors draining bounded queues) takes partitioning further by making each state shard an independent actor that processes messages sequentially from its own mailbox. No shared mutable state exists — all coordination is through message passing. Contention becomes a queue depth problem (actors with full mailboxes) rather than a lock problem, and the system degrades gracefully by throttling producers rather than collapsing into deadlock.
For the payment service example, the architectural fix would be to partition rate-limit counters by client ID shard, with each shard owning its Redis counter independently. Threads for client IDs in different shards never share state. Within a shard, contention is limited to a small subset of threads, capping the worst-case lock queue depth regardless of total concurrency.
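A hedged sketch of that sharded design — the shard count, per-shard limit semantics, and names are assumptions, and a real implementation would also reconcile each shard with its Redis counter:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Rate-limit state sharded by client ID hash: threads serving clients in
// different shards never touch the same state object, capping worst-case
// contention at the shard level regardless of total concurrency.
public class ShardedRateLimiter {
    private static final int SHARDS = 16; // power of two for cheap masking
    private final AtomicInteger[] counts = new AtomicInteger[SHARDS];
    private final int limitPerShard;

    public ShardedRateLimiter(int limitPerShard) {
        this.limitPerShard = limitPerShard;
        for (int i = 0; i < SHARDS; i++) counts[i] = new AtomicInteger();
    }

    private int shardFor(String clientId) {
        return clientId.hashCode() & (SHARDS - 1); // mask handles negative hashes
    }

    public boolean tryAcquire(String clientId) {
        AtomicInteger c = counts[shardFor(clientId)];
        if (c.incrementAndGet() > limitPerShard) {
            c.decrementAndGet(); // roll back the over-limit increment
            return false;
        }
        return true;
    }
}
```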
Failure Scenarios
Lock escalation after seemingly unrelated refactor: Adding a logging statement, metric emission, or lazy-load inside a synchronized block that previously held the lock for microseconds can inflate lock hold time to milliseconds if the added code performs I/O. Always profile after refactors that touch synchronized code, even if the change appears trivial.
ConcurrentHashMap.computeIfAbsent() deadlock in Java 8: A known bug (JDK-8062841) causes Java 8's computeIfAbsent() to hang when the mapping function recursively calls computeIfAbsent() on the same map. Java 9+ detects the recursion and throws IllegalStateException instead of hanging — an improvement, but the real fix is to avoid recursive computeIfAbsent() calls entirely.
ReadWriteLock write starvation: Under continuous reader pressure, a ReentrantReadWriteLock in default (non-fair) mode may starve writers indefinitely — readers keep acquiring the shared lock, and no gap appears for the writer to get through. Use new ReentrantReadWriteLock(true) (fair mode) or switch to StampedLock if write latency is critical.
False sharing with AtomicLong array: An array of AtomicLong counters intended for lock-free per-key counting may suffer severe false sharing — adjacent array elements share a cache line, so writes to index 0 and index 1 by two threads contend at the hardware level. Use @Contended (with -XX:-RestrictContended) or pad the array with dummy fields to ensure each counter occupies its own cache line.
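One portable mitigation that avoids @Contended and its JVM flag is to space counters a full cache line apart inside an AtomicLongArray. A sketch assuming 64-byte cache lines and 8-byte longs (class name and lane size are illustrative):

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Each logical counter occupies slot index * 8, so adjacent counters sit
// 64 bytes apart and land on different cache lines, eliminating false
// sharing between threads that update neighboring counters.
public class PaddedCounters {
    private static final int LANE = 8; // 8 longs * 8 bytes = 64 bytes
    private final AtomicLongArray slots;

    public PaddedCounters(int counters) {
        slots = new AtomicLongArray(counters * LANE);
    }

    public void increment(int index) {
        slots.incrementAndGet(index * LANE);
    }

    public long get(int index) {
        return slots.get(index * LANE);
    }
}
```

The cost is 8× the memory per counter — a reasonable trade for hot write paths, wasteful for rarely-touched counters.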
When NOT to Optimize
Lock optimization carries real costs: increased code complexity, harder-to-reason-about behavior, and new failure modes. Apply the discipline of profiling-first rigorously.
"Premature optimization is the root of all evil." — Donald Knuth. Measure before changing. A synchronized method with 50 req/s load is not a contention problem. The same method at 5,000 req/s may be. Profile under production-representative load, then optimize exactly what the profiler shows.
Do not replace synchronized with ReentrantLock speculatively. ReentrantLock requires explicit finally unlock blocks, which are easy to omit during future maintenance, resulting in lock leaks (threads permanently blocked). synchronized is automatically released on exception, making it safer for code that may throw.
Do not use CopyOnWriteArrayList for large collections that are modified frequently. Each write copies the entire array — at 1,000 elements and 1,000 writes/second, you are allocating 1 million elements per second of garbage. Measure the GC pressure before choosing this data structure.
Do not over-stripe. Striping with 1,024 stripes when only 10 threads will ever contend wastes memory and makes debugging harder. Match stripe count to actual thread concurrency, typically the number of CPU cores or the HTTP worker pool size.
Key Takeaways
- Profile before optimizing: Use jstack thread dumps, async-profiler lock flame graphs, and JFR jdk.JavaMonitorEnter events to find the actual contention hotspot before writing a single line of optimization code.
- Reduce lock scope first: Move I/O, computation, and remote calls outside synchronized blocks. This is often the highest-ROI change and requires no data structure replacement.
- Prefer java.util.concurrent classes: ConcurrentHashMap, LongAdder, AtomicReference, and StampedLock are battle-tested, well-documented, and cover 95% of shared-state patterns without custom lock code.
- Use lock ordering or tryLock to prevent deadlocks: Consistent global lock ordering is simpler and more reliable than timeout-based approaches for most use cases.
- Migrate to ReentrantLock for virtual-thread workloads: In Java 21 applications using virtual threads, any synchronized block that performs blocking operations pins the carrier thread and degrades throughput. Replace with ReentrantLock in those paths.
- Think architecturally: When per-key lock striping feels complex, consider whether the data should be partitioned by design — separate state objects owned by separate components, coordinated through messages rather than shared locks.
- Monitor blocked thread count in production: A rising baseline of threads in BLOCKED state is a leading indicator of lock saturation, visible before it impacts P99 latency. Alert on it.