Java Caching with Spring
Alexander Stasiak
Feb 10, 2026・12 min read
Table of Contents
Understanding Caching in Java Applications
Getting Started: Enabling Caching in a Spring Boot Project
Using Spring Boot Cache Starter
Using Spring Core Caching Without Spring Boot
Enabling and Configuring Spring’s Cache Abstraction
Defining Cache Names and Regions
Using Caching with Spring Annotations
@Cacheable: Caching Method Results
@CacheEvict: Removing Stale Entries
@CachePut: Forcing Cache Updates
@Caching: Combining Multiple Cache Operations
@CacheConfig: Centralizing Cache Settings
Conditional Caching and Advanced Key Strategies
Using the condition Attribute
Using the unless Attribute
Integrating Java-Based Cache Configurations
Configuring an In-Memory Cache (e.g., Caffeine)
Configuring a Distributed Cache (e.g., Redis)
Best Practices, Pitfalls, and Conclusion
If your Spring Boot application repeatedly fetches the same data from a database or external API, you’re likely wasting milliseconds—or even seconds—on every request. Java caching with Spring solves this by storing frequently accessed results in fast storage, so subsequent calls skip the expensive operation entirely.
Spring’s cache abstraction, introduced in Spring Framework 3.1 around 2011 and refined significantly with Spring Boot 1.0 in 2014, provides a declarative, annotation-driven approach to caching. The beauty of this abstraction is that it decouples your business logic from the underlying cache provider. Whether you’re using a simple in-memory cache for development or Redis for production, your service code remains unchanged.
This guide walks you through enabling, configuring, and using caching in a modern Spring Boot 3 / Java 17 project. Here’s what you’ll gain:
- Better performance: Reduce response times from hundreds of milliseconds to under a millisecond for cached operations
- Simpler cache management: Add caching with annotations instead of writing boilerplate cache logic
- Easier provider migration: Switch between ConcurrentMap, Caffeine, Redis, or Ehcache without touching business code
- Production-ready patterns: Learn conditional caching, cache eviction, and multi-cache strategies
Understanding Caching in Java Applications
Caching stores frequently used data in fast storage—typically RAM—to avoid repetitive expensive operations. In Java backends, this means intercepting method calls and returning previously computed results instead of executing the same logic repeatedly.
Consider these concrete scenarios where caching shines:
- Product details fetched via JPA: An e-commerce service calls productRepository.findById(productId) thousands of times per hour for popular items
- User profile data from an external REST API: Each profile lookup takes 150-300ms due to network latency
- Configuration values from a remote service: Feature flags and settings that rarely change but get requested on every page load
The performance impact is significant. A database call typically takes 200-300ms when you factor in connection overhead, query execution, and result mapping. A cache lookup completes in under 1ms. For a high-traffic endpoint handling 10,000 requests per hour, that difference compounds into hours of saved compute time daily.
Caching addresses several common issues:
- High latency on repeated reads of the same data
- Database bottlenecks under load when many requests hit identical queries
- Throttling on third-party APIs with rate limits
- Unnecessary compute cycles for deterministic calculations
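To make the mechanics concrete, here is a hand-rolled sketch of method-level caching in plain Java. The product lookup is hypothetical; this is roughly the pattern Spring's abstraction automates for you:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hand-rolled method caching: the pattern that Spring's cache abstraction
// automates. findProductName stands in for a repository or remote call.
class ManualCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private int databaseCalls = 0;

    String findProductName(String productId) {
        // computeIfAbsent runs the loader only on a cache miss
        return cache.computeIfAbsent(productId, id -> {
            databaseCalls++;              // simulate the expensive operation
            return "Product-" + id;
        });
    }

    int getDatabaseCalls() {
        return databaseCalls;
    }
}
```

The `computeIfAbsent` call is the whole story: on a hit the stored value is returned, on a miss the loader runs once and its result is stored. Spring moves this bookkeeping out of your code and behind an annotation.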
Spring’s cache abstraction specifically targets method results. This is different from HTTP caching or CDN caching handled by a content delivery network. Those layers cache responses closer to the client, while Spring caching operates within your web application’s service layer.
Getting Started: Enabling Caching in a Spring Boot Project
This section shows how to go from a plain Spring Boot 3 application—created with Spring Initializr in 2025—to one with basic caching enabled. The setup takes just a few minutes.
Before proceeding, ensure your project uses:
- Java 17 or higher
- Spring Boot 3.x
- Maven (pom.xml) or Gradle (build.gradle.kts) as your build tool
The spring-boot-starter-cache dependency must be added to your project. This starter brings in the spring-context-support module and everything needed for caching infrastructure. For a basic demo, the default ConcurrentMapCacheManager works without any additional provider dependency.
The core step is adding @EnableCaching to your main Spring Boot application class or a dedicated configuration class. Here’s what that looks like:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;

@SpringBootApplication
@EnableCaching
public class BookstoreApplication {

    public static void main(String[] args) {
        SpringApplication.run(BookstoreApplication.class, args);
    }
}

With those two elements in place—the dependency and the annotation—caching is ready to use.
Using Spring Boot Cache Starter
Adding spring-boot-starter-cache activates Spring Boot auto-configuration for the caching infrastructure. This is the recommended approach for most projects.
Add the following dependency to your pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>

For Gradle, add implementation 'org.springframework.boot:spring-boot-starter-cache' to your dependencies block.
What auto-configuration does for you:
- Registers a default cache manager based on what’s available on the classpath
- Scans for caching annotations once @EnableCaching is present
- Wires caches based on properties defined in application.yml or application.properties
- Provides sensible defaults that work out of the box
This starter is the same entry point regardless of whether your final cache provider is Caffeine, Redis, or a simple in-memory map. The only difference is which additional dependencies you include.
Using Spring Core Caching Without Spring Boot
Caching works in plain Spring applications too—for example, Spring Framework 6.x without Boot. In this case, you manually add the spring-context and spring-context-support dependencies.
Without Boot’s auto-configuration, you must explicitly define a cache manager bean in a configuration class:
- Create a Java config class annotated with @EnableCaching
- Add a @Bean method returning your chosen CacheManager implementation
- For example, return new ConcurrentMapCacheManager("books", "users") with predefined cache names
- Optionally configure a more sophisticated manager like CaffeineCacheManager
This approach is common in legacy Java EE deployments, standalone Spring-based libraries, or scenarios where Spring Boot’s opinions don’t fit your requirements.
Enabling and Configuring Spring’s Cache Abstraction
Once dependencies are in place, the next step is enabling caching behavior and defining how your cache manager operates.
The @EnableCaching annotation registers a post-processor that scans Spring beans for caching annotations and creates proxies around them. These proxies intercept method calls to check for cached results before execution.
Spring Boot selects the default configuration for CacheManager based on classpath detection:
| Provider on Classpath | CacheManager Used |
|---|---|
| None (just starter) | ConcurrentMapCacheManager |
| Caffeine | CaffeineCacheManager |
| Redis (spring-boot-starter-data-redis) | RedisCacheManager |
| Ehcache 3 (via JCache/JSR-107) | JCacheCacheManager |
For production, explicit configuration is recommended. Define a CaffeineCacheManager or RedisCacheManager bean to control:
- Time to live (TTL) for cache entries
- Maximum size before eviction
- Eviction policies (LRU, LFU, size-based)
- Per-cache configurations with different policies
Spring also provides CacheManagerCustomizer<T extends CacheManager> as a hook to fine-tune caches created by auto-configured managers without fully replacing them.
Defining Cache Names and Regions
Caches in Spring are grouped by logical names—strings like "products", "users", or "exchangeRates". These names map to regions or spaces in the underlying provider.
Best practices for naming:
- Choose descriptive names that reflect the cached data (e.g., "productById" rather than "cache1")
- Keep names consistent across annotations and configuration files
- Use singular or plural consistently throughout your application
- Consider prefixing with service names in larger systems ("catalog-products", "pricing-rates")
For example, a ProductService class might use:
- "productById" cache for single-item lookups by ID
- "allProducts" cache for listing endpoints that return collections
- "productSearch" cache for search results with query parameters as keys
Some supported cache providers like Redis and Ehcache allow you to define cache configurations per region in an XML file or application.yml, specifying different TTLs and sizes for each cache name.
Using Caching with Spring Annotations
Spring’s method-level annotations are the primary way to cache data in your services. The framework provides all the caching annotations you need:
| Annotation | Purpose |
|---|---|
| @Cacheable | Cache method results; skip execution on cache hit |
| @CachePut | Always execute method; update cache with result |
| @CacheEvict | Remove entries from cache |
| @Caching | Combine multiple annotations on one method |
| @CacheConfig | Set shared cache settings at class level |
These annotations sit on public methods of Spring beans—typically classes annotated with @Service or @Repository. The proxies created when you enable caching intercept calls and apply the caching behavior.
By default, method parameters form the cache key. For a method getBook(String isbn), the ISBN value becomes the key. You can override this with Spring Expression Language (SpEL) for more control.
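As a plain-Java illustration of that default strategy, the hypothetical cache below combines multiple parameters into one composite key with value equality, much as Spring's SimpleKeyGenerator does when a method has more than one argument:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the default key strategy for multi-parameter methods: all
// parameters combine into one composite key. Spring uses SimpleKeyGenerator;
// a record with value-based equals/hashCode behaves the same way.
class CompositeKeyCache {
    record SearchKey(String query, int page) {}

    private final Map<SearchKey, List<String>> cache = new HashMap<>();

    void put(String query, int page, List<String> result) {
        cache.put(new SearchKey(query, page), result);
    }

    List<String> get(String query, int page) {
        // Two calls with equal (query, page) pairs map to the same entry
        return cache.get(new SearchKey(query, page));
    }
}
```

This is why two calls with the same arguments hit the same entry, while any difference in any parameter produces a distinct key.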
The following example demonstrates these annotations in practice using a BookService.
@Cacheable: Caching Method Results
The @Cacheable annotation is designed for read operations where the return value can be reused. Think findById, getDetails, or any lookup method that returns the same data for the same parameters.
@Service
public class BookService {

    private final BookRepository bookRepository;

    public BookService(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @Cacheable(cacheNames = "books", key = "#isbn")
    public Book getBookByIsbn(String isbn) {
        // This method body only executes on cache miss
        simulateSlowService();
        return bookRepository.findByIsbn(isbn);
    }

    private void simulateSlowService() {
        try {
            Thread.sleep(3000L);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

Here’s how it works:
- First call with ISBN “978-0134685991”: Cache miss → method executes → result stored in “books” cache
- Second call with same ISBN: Cache hit → method body skipped → cached result returned immediately
- Call with different ISBN: Cache miss for that key → method executes again
The above annotation triggers a cache lookup before method execution. If an entry exists for the key, the underlying database caching or computation is avoided entirely.
When to use @Cacheable:
- Catalog data that changes infrequently
- User settings and preferences
- Reference data like countries, currencies, or categories
When to avoid @Cacheable:
- Write-heavy methods that modify state
- Methods with side effects beyond returning data
- Real-time data that must always be fresh values
@CacheEvict: Removing Stale Entries
The @CacheEvict annotation invalidates cache entries when underlying data changes. Use it on update or delete methods to prevent stale data from being served.
@CacheEvict(cacheNames = "books", key = "#isbn")
public void updateBookPrice(String isbn, BigDecimal price) {
    bookRepository.updatePrice(isbn, price);
}

After this method executes, the entry for that specific ISBN is removed from the cache. The next read will hit the database and cache fresh values.
For bulk operations—like nightly syncs or imports—clear the entire cache:
@CacheEvict(cacheNames = "books", allEntries = true)
public void refreshAllBooks() {
    // Bulk import logic
    bookRepository.syncFromExternalCatalog();
}

The beforeInvocation option controls timing:
- beforeInvocation = false (default): Cache eviction happens after successful method execution
- beforeInvocation = true: Cache eviction happens before execution, ensuring removal even if the method throws an exception
Always pair data modification operations with appropriate cache eviction. Forgetting this step is one of the most common sources of stale data bugs in cached applications.
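Stripped of annotations, the eviction behavior amounts to the following plain-Java sketch (the ISBN and price values are illustrative, and the database write is stubbed out):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// What @CacheEvict does under the hood, sketched by hand: a write path
// that invalidates the stale entry after the update succeeds.
class EvictingCache {
    private final Map<String, String> prices = new ConcurrentHashMap<>();

    void cachePrice(String isbn, String price) {
        prices.put(isbn, price);
    }

    // Mirrors @CacheEvict(key = "#isbn"): persist first, then invalidate
    void updatePrice(String isbn, String newPrice) {
        // ... persist newPrice to the database here ...
        prices.remove(isbn);    // next read repopulates with fresh data
    }

    // Mirrors @CacheEvict(allEntries = true)
    void evictAll() {
        prices.clear();
    }

    String getCachedPrice(String isbn) {
        return prices.get(isbn);
    }
}
```

Note that the sketch evicts after the write, matching the default beforeInvocation = false ordering described above.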
@CachePut: Forcing Cache Updates
The @CachePut annotation always executes the method and updates the cache with the return value. It’s ideal for write operations that should refresh the cache without skipping any logic.
@CachePut(cacheNames = "books", key = "#result.isbn")
public Book saveBook(Book book) {
    return bookRepository.save(book);
}

Notice the key uses #result.isbn—this references the return value, which is the saved entity with any generated fields populated.
Key differences from @Cacheable:
| Aspect | @Cacheable | @CachePut |
|---|---|---|
| Method execution | Skipped on cache hit | Always executed |
| Cache update | Only on cache miss | Always updated |
| Typical use case | Read operations | Write operations |
Avoid putting both @Cacheable and @CachePut on the same method. The conflicting behaviors create confusion. Instead, separate your read and write methods clearly.
Use @CachePut when you want to:
- Keep the cache warm after saves or updates
- Avoid a subsequent cache miss after writing new data
- Ensure the cache always reflects the latest state after mutations
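In plain Java, the @CachePut pattern reduces to "always execute, then refresh". The sketch below uses a hypothetical saveBook to show both properties at once:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// @CachePut sketched by hand: the method body always runs, and the cache
// is refreshed with the returned value so the next read is an immediate hit.
class PutThroughCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    int saves = 0;

    String saveBook(String isbn, String title) {
        saves++;                   // the "method body" always executes
        String saved = title;      // stand-in for bookRepository.save(...)
        cache.put(isbn, saved);    // refresh keyed by the result, like #result.isbn
        return saved;
    }

    String cached(String isbn) {
        return cache.get(isbn);
    }
}
```

Contrast this with the @Cacheable sketch earlier: here the counter increments on every call, because the write logic is never skipped.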
@Caching: Combining Multiple Cache Operations
Java doesn’t allow repeating a non-repeatable annotation on a single method, and Spring’s caching annotations are not marked @Repeatable. The @Caching annotation solves this by grouping multiple caching annotations together.
@Caching(
    evict = {
        @CacheEvict(cacheNames = "books", key = "#book.isbn"),
        @CacheEvict(cacheNames = "bestsellers", allEntries = true)
    },
    put = {
        @CachePut(cacheNames = "books", key = "#result.isbn")
    }
)
public Book updateBookAndRefreshCaches(Book book) {
    return bookRepository.save(book);
}

This method:
- Evicts the specific book from the “books” cache
- Clears the entire “bestsellers” cache (since rankings might change)
- Puts the updated book back into “books” with fresh data
Common scenarios for @Caching:
- A single update affects multiple cache regions
- You need multiple annotations of the same type (e.g., evicting from three caches)
- Complex workflows where eviction and refresh happen together
While powerful, heavy use of @Caching can make methods harder to read. Reserve it for complex but well-documented cases, and add comments explaining why multiple cache operations are necessary.
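The three operations above can be sketched without Spring as one coordinated write (names and values are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// One write touching several cache regions, as @Caching coordinates:
// evict the single entry, clear a dependent cache, re-put fresh data.
class MultiCacheUpdate {
    final Map<String, String> books = new ConcurrentHashMap<>();
    final Map<String, String> bestsellers = new ConcurrentHashMap<>();

    String updateBook(String isbn, String title) {
        String saved = title;      // stand-in for bookRepository.save(...)
        books.remove(isbn);        // @CacheEvict on "books" for this key
        bestsellers.clear();       // @CacheEvict(allEntries = true) on "bestsellers"
        books.put(isbn, saved);    // @CachePut keyed by the result
        return saved;
    }
}
```

Seeing the operations spelled out like this makes the annotation version easier to review: each nested annotation maps to one line of cache manipulation.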
@CacheConfig: Centralizing Cache Settings
The @CacheConfig annotation at class level defines shared attributes so you don’t repeat them on every method. This reduces duplication in services that use the same cache across many methods.
@Service
@CacheConfig(cacheNames = "books")
public class BookService {

    private final BookRepository bookRepository;

    public BookService(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @Cacheable(key = "#isbn")
    public Book getBookByIsbn(String isbn) {
        return bookRepository.findByIsbn(isbn);
    }

    @CacheEvict(key = "#isbn")
    public void deleteBook(String isbn) {
        bookRepository.deleteByIsbn(isbn);
    }

    @CachePut(key = "#result.isbn")
    public Book saveBook(Book book) {
        return bookRepository.save(book);
    }
}

With @CacheConfig(cacheNames = "books") at class level, individual methods only specify the key. The cache name is inherited.
You can also configure:
- keyGenerator: A custom key generator bean name for all methods
- cacheResolver: A custom cache resolver for selecting caches dynamically
- cacheManager: A specific cache manager bean if you run multiple cache managers for different providers
@CacheConfig does not activate caching by itself. You still need @EnableCaching at the configuration level, and individual methods still need their @Cacheable, @CachePut, or @CacheEvict annotations.
Conditional Caching and Advanced Key Strategies
Not every method call should be cached. Sometimes caching makes sense only for certain inputs or when the result meets specific criteria. Spring provides conditional attributes to fine-tune caching behavior.
All major caching annotations support two SpEL-based attributes:
| Attribute | Evaluated | Purpose |
|---|---|---|
| condition | Before method execution | Decide if caching logic should apply at all |
| unless | After method execution | Decide if the result should be cached |
This conditional caching capability matters for scenarios like:
- Only caching expensive lookups for valid, well-formed IDs
- Skipping cache for anonymous or test users
- Avoiding cache pollution with null results or error states
Key design also deserves attention. Default keys based on all parameters work for simple cases, but complex methods benefit from explicit key definitions that avoid collisions and handle nulls gracefully.
Using the condition Attribute
The condition attribute is evaluated before method execution to decide if caching logic should even be considered. If it evaluates to false, the method runs without any cache interaction.
@Cacheable(
    cacheNames = "books",
    key = "#isbn",
    condition = "#isbn != null and #isbn.length() == 13"
)
public Book getBookByIsbn(String isbn) {
    return bookRepository.findByIsbn(isbn);
}

This caches only valid 13-digit ISBNs. Malformed or null ISBNs bypass caching entirely—no lookup, no storage.
Another example based on customer status:
@Cacheable(
    cacheNames = "pricing",
    key = "#productId",
    condition = "#customer.status == 'PREMIUM'"
)
public PricingDetails getPremiumPricing(Long productId, Customer customer) {
    return pricingService.calculatePremiumPrice(productId, customer);
}

The condition attribute also works with @CacheEvict and @CachePut. For instance, in a multi-tenant system, you might evict only for certain tenants:
@CacheEvict(
    cacheNames = "tenantData",
    key = "#tenantId",
    condition = "#tenantId != 'system'"
)
public void updateTenantData(String tenantId, TenantConfig config) {
    // Update logic
}

Using the unless Attribute
The unless attribute is evaluated after method execution and can inspect the return value via #result. This allows decisions like “cache only if the result is non-null or meets size criteria.”
@Cacheable(
    cacheNames = "books",
    key = "#isbn",
    unless = "#result == null"
)
public Book getBookByIsbn(String isbn) {
    return bookRepository.findByIsbn(isbn);
}

This prevents caching missing records. Without it, a lookup for a non-existent ISBN would cache null, causing future requests to return null even after the book is added to the database.
Another example based on result size:
@Cacheable(
    cacheNames = "searchResults",
    key = "#query",
    unless = "#result.size() > 1000 or #result.isEmpty()"
)
public List<Book> searchBooks(String query) {
    return bookRepository.search(query);
}

This avoids caching:
- Empty results (which might just mean no matches yet)
- Overly large results that would consume too much memory in the cache
The condition and unless attributes can be used simultaneously:
- condition: “Should caching be attempted at all?” (pre-execution)
- unless: “Given this result, should we store it?” (post-execution)
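The two checks can be sketched together in plain Java; the hypothetical loadFromDb stands in for the repository call, and the boolean names mirror the annotation attributes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// condition and unless as plain guards around a lookup: condition is a
// pre-check on the arguments, unless is a post-check on the result.
class ConditionalCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    String getBook(String isbn) {
        boolean condition = isbn != null && isbn.length() == 13;  // pre-execution
        if (condition && cache.containsKey(isbn)) {
            return cache.get(isbn);       // cache hit, only when condition holds
        }
        String result = loadFromDb(isbn);
        boolean unless = (result == null);                        // post-execution
        if (condition && !unless) {
            cache.put(isbn, result);      // store only valid, non-null results
        }
        return result;
    }

    private String loadFromDb(String isbn) {
        // Hypothetical lookup: one known book, everything else missing
        return "9780134685991".equals(isbn) ? "Effective Java" : null;
    }

    boolean isCached(String isbn) {
        return cache.containsKey(isbn);
    }
}
```

A malformed ISBN fails the condition guard and never touches the cache; a well-formed ISBN with no matching record runs the lookup but is kept out of the cache by the unless guard.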
Integrating Java-Based Cache Configurations
While Spring Boot auto-configures many things, defining cache configurations in Java code gives strong control over policies like TTL, maximum size, and eviction behavior.
The common pattern involves creating configuration classes with @Bean methods returning configured cache manager instances:
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager manager = new CaffeineCacheManager();
        manager.setCaffeine(Caffeine.newBuilder()
            .maximumSize(10_000)
            .expireAfterWrite(Duration.ofMinutes(10))
            .recordStats());
        return manager;
    }
}

With Java config, you can define different policies per cache. For example:
- "customers" cache with 10-minute TTL for frequently changing data
- "countries" cache with 24-hour TTL for reference data
- "exchangeRates" cache with 5-minute TTL for external API results
The cache manager connects your annotated service methods to actual storage. A CustomerDataService with @Cacheable("customers") will use the settings defined for that cache name.
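Under the hood, a per-cache TTL policy boils down to tracking write timestamps. This plain-Java sketch passes the clock in explicitly instead of sleeping, which is what makes expiry behavior testable:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Time-to-live by hand: roughly what expireAfterWrite gives you for free.
// The caller supplies the clock so expiry can be verified without waiting.
class TtlCache {
    private record Entry(String value, long writtenAtMillis) {}

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    void put(String key, String value, long nowMillis) {
        entries.put(key, new Entry(value, nowMillis));
    }

    String get(String key, long nowMillis) {
        Entry e = entries.get(key);
        if (e == null || nowMillis - e.writtenAtMillis() > ttlMillis) {
            entries.remove(key);   // expired: evict lazily on read
            return null;
        }
        return e.value();
    }
}
```

Real providers add atomicity, background cleanup, and size-based eviction on top of this idea, which is why delegating to Caffeine or Redis beats rolling your own.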
Configuring an In-Memory Cache (e.g., Caffeine)
Caffeine is a high-performance local cache for Spring applications running on a single JVM. It’s the recommended choice when you don’t need distributed caching across multiple instances.
Add the Caffeine dependency:
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
</dependency>

Configure CaffeineCacheManager with policies:
@Bean
public CacheManager cacheManager() {
    CaffeineCacheManager manager = new CaffeineCacheManager("products", "categories");
    manager.setCaffeine(Caffeine.newBuilder()
        .maximumSize(10_000)
        .expireAfterWrite(Duration.ofMinutes(5))
        .recordStats());
    return manager;
}

This configuration:
- Creates named caches for “products” and “categories”
- Limits each cache to 10,000 entries maximum (preventing unused data from consuming memory)
- Expires entries 5 minutes after being written
- Records hit/miss statistics for monitoring
For simpler setup, Spring Boot offers direct integration via application.yml:
spring:
  cache:
    type: caffeine
    caffeine:
      spec: maximumSize=10000,expireAfterWrite=5m

Caffeine excels in scenarios with:
- Single-node deployments or small clusters
- Low-latency requirements (sub-microsecond lookups)
- Read-heavy workloads where 95%+ hit rates are achievable
Configuring a Distributed Cache (e.g., Redis)
Redis is widely used in production as a distributed, in-memory data store. It’s the go-to choice for caching in microservices or scaled-out Spring Boot deployments where multiple instances need to share cached data.
Add the Spring Data Redis starter dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

Configure connection details in application.yml:
spring:
  data:
    redis:
      host: localhost
      port: 6379
      password: ${REDIS_PASSWORD:}
  cache:
    type: redis
    redis:
      time-to-live: 600s

Note that Spring Boot 3 moved the connection properties under spring.data.redis. Spring Boot auto-configures RedisCacheManager, or you can define your own bean for per-cache configurations:
@Bean
public RedisCacheManagerBuilderCustomizer cacheManagerCustomizer() {
    return builder -> builder
        .withCacheConfiguration("shortLived",
            RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(1)))
        .withCacheConfiguration("longLived",
            RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofHours(24)));
}

Concrete use cases for Redis caching:
- Authentication tokens shared across application instances behind a load balancer
- Session-like data in stateless deployments
- API response caching for future requests across the cluster
- Feature flags and configuration that must be consistent across nodes
Trade-offs to consider:
- Network latency adds 5-10ms compared to in-process Caffeine
- Requires serialization (typically JSON via Jackson) which adds overhead
- Better scalability and shared state across services
- Redis handles 100k+ operations per second per shard

Best Practices, Pitfalls, and Conclusion
You’ve now covered the essentials of Spring Boot caching: enabling the cache abstraction, using @Cacheable/@CachePut/@CacheEvict and the other caching annotations, applying conditional caching, and configuring cache provider options like Caffeine and Redis.
Here are the key best practices to follow:
- Choose appropriate TTLs: Balance freshness against performance. Too short means constant cache misses; too long means stale data
- Avoid caching highly volatile data: If data changes every second, caching adds complexity without benefit
- Design clear cache names: Use descriptive, consistent naming that reflects the cached domain objects
- Pair writes with invalidation: Every method that modifies data should evict or update relevant caches
- Monitor hit/miss metrics: Use Spring Boot Actuator’s /actuator/caches endpoint to track cache effectiveness
Common pitfalls to avoid:
| Pitfall | Problem | Solution |
|---|---|---|
| Self-invocation | Calling a cached method from the same class bypasses the proxy | Call through the injected bean or refactor to separate classes |
| Forgetting eviction | Writes happen but cache serves old data | Add @CacheEvict to all mutation methods |
| Unbounded caches | Memory grows until OOM | Always set maximumSize or entry limits |
| Caching nulls | Missing records cached indefinitely | Use unless = "#result == null" |
| Over-caching | Every method gets @Cacheable | Cache selectively where latency matters |
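The self-invocation pitfall from the table is easiest to see with a minimal decorator standing in for Spring's proxy (all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Why self-invocation bypasses caching: the proxy wraps the bean from the
// outside, so a call made via 'this' never passes through it. CachingProxy
// is a hand-written stand-in for the proxy that @EnableCaching creates.
interface BookLookup {
    String byIsbn(String isbn);
}

class BookLookupImpl implements BookLookup {
    int executions = 0;

    public String byIsbn(String isbn) {
        executions++;              // counts real method executions
        return "Book:" + isbn;
    }

    // Self-invocation: calls this.byIsbn directly, skipping any proxy
    String viaSelf(String isbn) {
        return byIsbn(isbn);
    }
}

class CachingProxy implements BookLookup {
    private final BookLookupImpl target;
    private final Map<String, String> cache = new HashMap<>();

    CachingProxy(BookLookupImpl target) {
        this.target = target;
    }

    public String byIsbn(String isbn) {
        return cache.computeIfAbsent(isbn, target::byIsbn);
    }
}
```

Callers holding the proxy get caching; the internal viaSelf path invokes the target directly and executes the body every time, which is exactly what happens when a @Cacheable method is called from another method in the same class.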
To see the impact firsthand, try this exercise:
- Create a simple endpoint like /api/books/{isbn} that calls a slow service method
- Add @Cacheable to the service method
- Measure response times before and after using Spring Boot Actuator metrics or a tool like JMeter
- Watch the logs—you’ll see method execution only on cache misses
Spring’s cache abstraction provides a consistent, annotation-driven layer that works across providers. You can start with a simple memory cache in development, switch to Caffeine for production on a single node, and move to Redis when you scale to multiple instances—all without changing your service code.
For further reading, check the official documentation on Spring Boot Caching and explore more advanced patterns like cache-aside with reactive support in newer Spring Boot 3.x releases.
The most popular frameworks don’t stay popular by accident—Spring’s caching is battle-tested, flexible, and ready for whatever scale your application demands. Start small, measure the difference, and expand your caching strategy as your performance requirements grow.