Declarative Annotation-based Caching
For caching declaration, the Infra caching abstraction provides a set of Java annotations:
- @Cacheable: Triggers cache population.
- @CacheEvict: Triggers cache eviction.
- @CachePut: Updates the cache without interfering with the method execution.
- @Caching: Regroups multiple cache operations to be applied on a method.
- @CacheConfig: Shares some common cache-related settings at class-level.
The @Cacheable Annotation
As the name implies, you can use @Cacheable
to demarcate methods that are cacheable — that is, methods for which the result is stored in the cache so that, on subsequent
invocations (with the same arguments), the value in the cache is returned without
having to actually invoke the method. In its simplest form, the annotation declaration
requires the name of the cache associated with the annotated method, as the following
example shows:
@Cacheable("books")
public Book findBook(ISBN isbn) {...}
In the preceding snippet, the findBook method is associated with the cache named books.
Each time the method is called, the cache is checked to see whether the invocation has
already been run and does not have to be repeated. While in most cases only one
cache is declared, the annotation lets multiple names be specified, so that more than one
cache is used. In this case, each of the caches is checked before invoking the
method — if at least one cache is hit, the associated value is returned.
All the other caches that do not contain the value are also updated, even though the cached method was not actually invoked.
The following example uses @Cacheable
on the findBook
method with multiple caches:
@Cacheable({"books", "isbns"})
public Book findBook(ISBN isbn) {...}
Default Key Generation
Since caches are essentially key-value stores, each invocation of a cached method
needs to be translated into a suitable key for cache access. The caching abstraction
uses a simple KeyGenerator
based on the following algorithm:
- If no parameters are given, return SimpleKey.EMPTY.
- If only one parameter is given, return that instance.
- If more than one parameter is given, return a SimpleKey that contains all parameters.
This approach works well for most use-cases, as long as parameters have natural keys
and implement valid hashCode()
and equals()
methods. If that is not the case,
you need to change the strategy.
To provide a different default key generator, you need to implement the
cn.taketoday.cache.interceptor.KeyGenerator
interface.
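A minimal sketch of such an implementation, assuming the interface exposes the usual generate(target, method, params) contract, might look like the following (the class name and keying strategy are illustrative only):
import java.lang.reflect.Method;
import java.util.Arrays;

import cn.taketoday.cache.interceptor.KeyGenerator;

public class MethodAwareKeyGenerator implements KeyGenerator {

	@Override
	public Object generate(Object target, Method method, Object... params) {
		// Prefix the parameters with the target class and method name so that
		// different methods sharing a cache do not produce colliding keys.
		return target.getClass().getSimpleName() + "." + method.getName()
				+ Arrays.deepToString(params);
	}
}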
Custom Key Generation Declaration
Since caching is generic, the target methods are quite likely to have various signatures that cannot be readily mapped on top of the cache structure. This tends to become obvious when the target method has multiple arguments out of which only some are suitable for caching (while the rest are used only by the method logic). Consider the following example:
@Cacheable("books")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed);
At first glance, while the two boolean arguments influence the way the book is found,
they are of no use for the cache. Furthermore, what if only one of the two is important
while the other is not?
For such cases, the @Cacheable
annotation lets you specify how the key is generated
through its key
attribute. You can use SpEL to pick the
arguments of interest (or their nested properties), perform operations, or even
invoke arbitrary methods without having to write any code or implement any interface.
This is the recommended approach over the
default generator,
since methods tend to be quite different in signatures as the code base grows. While the
default strategy might work for some methods, it rarely works for all methods.
The following examples use various SpEL declarations (if you are not familiar with SpEL, do yourself a favor and read Infra Expression Language):
@Cacheable(cacheNames="books", key="#isbn")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)
@Cacheable(cacheNames="books", key="#isbn.rawNumber")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)
@Cacheable(cacheNames="books", key="T(someType).hash(#isbn)")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)
The preceding snippets show how easy it is to select a certain argument, one of its properties, or even an arbitrary (static) method.
If the algorithm responsible for generating the key is too specific or if it needs
to be shared, you can define a custom keyGenerator
on the operation. To do so,
specify the name of the KeyGenerator
bean implementation to use, as the following
example shows:
@Cacheable(cacheNames="books", keyGenerator="myKeyGenerator")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)
The key and keyGenerator parameters are mutually exclusive and an operation
that specifies both results in an exception.
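A keyGenerator bean such as the myKeyGenerator referenced above could be declared as follows; the configuration class and the MethodAwareKeyGenerator implementation sketched earlier are illustrative only:
@Configuration
public class CacheKeyConfiguration {

	@Bean
	KeyGenerator myKeyGenerator() {
		// The bean name (the method name) is what the keyGenerator attribute refers to.
		return new MethodAwareKeyGenerator();
	}
}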
Default Cache Resolution
The caching abstraction uses a simple CacheResolver
that retrieves the caches
defined at the operation level by using the configured CacheManager.
To provide a different default cache resolver, you need to implement the
cn.taketoday.cache.interceptor.CacheResolver
interface.
Custom Cache Resolution
The default cache resolution fits well for applications that work with a
single CacheManager
and have no complex cache resolution requirements.
For applications that work with several cache managers, you can set the
cacheManager
to use for each operation, as the following example shows:
@Cacheable(cacheNames="books", cacheManager="anotherCacheManager") (1)
public Book findBook(ISBN isbn) {...}
(1) Specifying anotherCacheManager.
You can also replace the CacheResolver
entirely in a fashion similar to that of
replacing key generation.
The resolution is requested for every cache operation, letting the implementation
actually resolve the caches to use based on runtime arguments. The following example
shows how to specify a CacheResolver:
@Cacheable(cacheResolver="runtimeCacheResolver") (1)
public Book findBook(ISBN isbn) {...}
(1) Specifying the CacheResolver.
Since Infra 4.1, the value attribute of the cache annotations is no longer mandatory, since this information can be provided by the CacheResolver regardless of the content of the annotation.
Similarly to key and keyGenerator, the cacheManager and cacheResolver parameters are mutually exclusive, and an operation that specifies both results in an exception.
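A sketch of such a resolver, assuming the cn.taketoday.cache.interceptor.CacheResolver contract exposes the invoked target through a CacheOperationInvocationContext and resolves to Cache instances obtained from a CacheManager (the class name and per-type routing are illustrative only):
public class RuntimeCacheResolver implements CacheResolver {

	private final CacheManager cacheManager;

	public RuntimeCacheResolver(CacheManager cacheManager) {
		this.cacheManager = cacheManager;
	}

	@Override
	public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> context) {
		// Route each operation to a cache named after the target class at runtime.
		String cacheName = context.getTarget().getClass().getSimpleName();
		return Collections.singleton(cacheManager.getCache(cacheName));
	}
}
Registering such a bean under the name runtimeCacheResolver makes it available to the cacheResolver attribute shown above.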
Synchronized Caching
In a multi-threaded environment, certain operations might be concurrently invoked for the same argument (typically on startup). By default, the cache abstraction does not lock anything, and the same value may be computed several times, defeating the purpose of caching.
For those particular cases, you can use the sync
attribute to instruct the underlying
cache provider to lock the cache entry while the value is being computed. As a result,
only one thread is busy computing the value, while the others are blocked until the entry
is updated in the cache. The following example shows how to use the sync
attribute:
@Cacheable(cacheNames="foos", sync=true) (1)
public Foo executeExpensiveOperation(String id) {...}
(1) Using the sync attribute.
This is an optional feature, and your favorite cache library may not support it.
All CacheManager implementations provided by the core framework support it. See the
documentation of your cache provider for more details.
Caching with CompletableFuture and Reactive Return Types
As of 6.1, cache annotations take CompletableFuture
and reactive return types
into account, automatically adapting the cache interaction accordingly.
For a method returning a CompletableFuture
, the object produced by that future
will be cached whenever it is complete, and the cache lookup for a cache hit will
be retrieved via a CompletableFuture:
@Cacheable("books")
public CompletableFuture<Book> findBook(ISBN isbn) {...}
For a method returning a Reactor Mono
, the object emitted by that Reactive Streams
publisher will be cached whenever it is available, and the cache lookup for a cache
hit will be retrieved as a Mono (backed by a CompletableFuture):
@Cacheable("books")
public Mono<Book> findBook(ISBN isbn) {...}
For a method returning a Reactor Flux
, the objects emitted by that Reactive Streams
publisher will be collected into a List
and cached whenever that list is complete,
and the cache lookup for a cache hit will be retrieved as a Flux
(backed by a
CompletableFuture
for the cached List
value):
@Cacheable("books")
public Flux<Book> findBooks(String author) {...}
Such CompletableFuture
and reactive adaptation also works for synchronized caching,
computing the value only once in case of a concurrent cache miss:
@Cacheable(cacheNames="foos", sync=true) (1)
public CompletableFuture<Foo> executeExpensiveOperation(String id) {...}
(1) Using the sync attribute.
In order for such an arrangement to work at runtime, the configured cache
needs to be capable of CompletableFuture -based retrieval. The Infra-provided
ConcurrentMapCacheManager automatically adapts to that retrieval style, and
CaffeineCacheManager natively supports it when its asynchronous cache mode is
enabled: set setAsyncCacheMode(true) on your CaffeineCacheManager instance.
@Bean
CacheManager cacheManager() {
CaffeineCacheManager cacheManager = new CaffeineCacheManager();
cacheManager.setCacheSpecification(...);
cacheManager.setAsyncCacheMode(true);
return cacheManager;
}
Last but not least, be aware that annotation-driven caching is not appropriate
for sophisticated reactive interactions involving composition and back pressure.
If you choose to declare @Cacheable
on specific reactive methods, consider the
impact of the rather coarse-granular cache interaction which simply stores the
emitted object for a Mono or even a pre-collected list of objects for a Flux.
Conditional Caching
Sometimes, a method might not be suitable for caching all the time (for example, it might
depend on the given arguments). The cache annotations support such use cases through the
condition
parameter, which takes a SpEL
expression that is evaluated to either true
or false
. If true
, the method is cached. If not, it behaves as if the method is not
cached (that is, the method is invoked every time no matter what values are in the cache
or what arguments are used). For example, the following method is cached only if the
argument name
has a length shorter than 32:
@Cacheable(cacheNames="book", condition="#name.length() < 32") (1)
public Book findBook(String name)
(1) Setting a condition on @Cacheable.
In addition to the condition
parameter, you can use the unless
parameter to veto the
adding of a value to the cache. Unlike condition
, unless
expressions are evaluated
after the method has been invoked. To expand on the previous example, perhaps we only
want to cache paperback books, as the following example does:
@Cacheable(cacheNames="book", condition="#name.length() < 32", unless="#result.hardback") (1)
public Book findBook(String name)
(1) Using the unless attribute to block hardbacks.
The cache abstraction supports java.util.Optional
return types. If an Optional
value
is present, it will be stored in the associated cache. If an Optional
value is not
present, null
will be stored in the associated cache. #result
always refers to the
business entity and never a supported wrapper, so the previous example can be rewritten
as follows:
@Cacheable(cacheNames="book", condition="#name.length() < 32", unless="#result?.hardback")
public Optional<Book> findBook(String name)
Note that #result
still refers to Book
and not Optional<Book>
. Since it might be
null
, we use SpEL’s safe navigation operator.
Available Caching SpEL Evaluation Context
Each SpEL expression evaluates against a dedicated context.
In addition to the built-in parameters, the framework provides dedicated caching-related
metadata, such as the argument names. The following table describes the items made
available to the context so that you can use them for key and conditional computations:
Name | Location | Description | Example |
---|---|---|---|
methodName | Root object | The name of the method being invoked | #root.methodName |
method | Root object | The method being invoked | #root.method.name |
target | Root object | The target object being invoked | #root.target |
targetClass | Root object | The class of the target being invoked | #root.targetClass |
args | Root object | The arguments (as array) used for invoking the target | #root.args[0] |
caches | Root object | Collection of caches against which the current method is run | #root.caches[0].name |
Argument name | Evaluation context | Name of any of the method arguments. If the names are not available (perhaps due to having no debug information), the argument names are also available under the #a<#arg> names, where #arg stands for the argument index (starting from 0). | #isbn or #a0 (you can also use the #p0 or #p<#arg> notation as an alias) |
result | Evaluation context | The result of the method call (the value to be cached). Only available in unless expressions, cache put expressions (to compute the key), or cache evict expressions (when beforeInvocation is false). For supported wrappers (such as Optional), #result refers to the actual object, not the wrapper. | #result |
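For instance, root object metadata can be combined with argument values in a key expression; the cache name and method below reuse the earlier examples for illustration:
@Cacheable(cacheNames="books", key="#root.methodName + ':' + #isbn.rawNumber")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)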
The @CachePut Annotation
When the cache needs to be updated without interfering with the method execution,
you can use the @CachePut
annotation. That is, the method is always invoked and its
result is placed into the cache (according to the @CachePut
options). It supports
the same options as @Cacheable
and should be used for cache population rather than
method flow optimization. The following example uses the @CachePut
annotation:
@CachePut(cacheNames="book", key="#isbn")
public Book updateBook(ISBN isbn, BookDescriptor descriptor)
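For instance, a repository might pair a @Cacheable lookup with a @CachePut update on separate methods so that updates refresh the same entry that lookups consume; the methods and key expressions below are illustrative:
@Cacheable(cacheNames="book", key="#isbn")
public Book findBook(ISBN isbn) {...}

@CachePut(cacheNames="book", key="#isbn")
public Book updateBook(ISBN isbn, BookDescriptor descriptor) {...}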
Using @CachePut and @Cacheable annotations on the same method is generally
strongly discouraged because they have different behaviors. While the latter causes the
method invocation to be skipped by using the cache, the former forces the invocation in
order to run a cache update. This leads to unexpected behavior and, with the exception
of specific corner-cases (such as annotations having conditions that exclude them from each
other), such declarations should be avoided. Note also that such conditions should not rely
on the result object (that is, the #result variable), as these are validated up-front to
confirm the exclusion.
As of 6.1, @CachePut
takes CompletableFuture
and reactive return types into account,
performing the put operation whenever the produced object is available.
The @CacheEvict Annotation
The cache abstraction allows not just population of a cache store but also eviction.
This process is useful for removing stale or unused data from the cache. As opposed to
@Cacheable
, @CacheEvict
demarcates methods that perform cache
eviction (that is, methods that act as triggers for removing data from the cache).
Similarly to its sibling, @CacheEvict
requires specifying one or more caches
that are affected by the action, allows a custom cache and key resolution or a
condition to be specified, and features an extra parameter
(allEntries
) that indicates whether a cache-wide eviction needs to be performed
rather than just an entry eviction (based on the key). The following example evicts
all entries from the books
cache:
@CacheEvict(cacheNames="books", allEntries=true) (1)
public void loadBooks(InputStream batch)
(1) Using the allEntries attribute to evict all entries from the cache.
This option comes in handy when an entire cache region needs to be cleared out. Rather than evicting each entry (which would take a long time, since it is inefficient), all the entries are removed in one operation, as the preceding example shows. Note that the framework ignores any key specified in this scenario as it does not apply (the entire cache is evicted, not only one entry).
You can also indicate whether the eviction should occur after (the default) or before
the method is invoked by using the beforeInvocation
attribute. The former provides the
same semantics as the rest of the annotations: Once the method completes successfully,
an action (in this case, eviction) on the cache is run. If the method does not
run (as it might be cached) or an exception is thrown, the eviction does not occur.
The latter (beforeInvocation=true
) causes the eviction to always occur before the
method is invoked. This is useful in cases where the eviction does not need to be tied
to the method outcome.
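For example, the following eviction always runs before the method, so the cache is cleared even if the reload fails part-way through (the method name is illustrative):
@CacheEvict(cacheNames="books", allEntries=true, beforeInvocation=true)
public void reloadBooks(InputStream batch)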
Note that void
methods can be used with @CacheEvict
- as the methods act as a
trigger, the return values are ignored (as they do not interact with the cache). This is
not the case with @Cacheable
which adds data to the cache or updates data in the cache
and, thus, requires a result.
As of 6.1, @CacheEvict
takes CompletableFuture
and reactive return types into account,
performing an after-invocation evict operation whenever processing has completed.
The @Caching Annotation
Sometimes, multiple annotations of the same type (such as @CacheEvict
or
@CachePut
) need to be specified — for example, because the condition or the key
expression is different between different caches. @Caching
lets multiple nested
@Cacheable
, @CachePut
, and @CacheEvict
annotations be used on the same method.
The following example uses two @CacheEvict
annotations:
@Caching(evict = { @CacheEvict("primary"), @CacheEvict(cacheNames="secondary", key="#p0") })
public Book importBooks(String deposit, Date date)
The @CacheConfig Annotation
So far, we have seen that caching operations offer many customization options and that
you can set these options for each operation. However, some of the customization options
can be tedious to configure if they apply to all operations of the class. For
instance, specifying the name of the cache to use for every cache operation of the
class can be replaced by a single class-level definition. This is where @CacheConfig
comes into play. The following example uses @CacheConfig
to set the name of the cache:
@CacheConfig("books") (1)
public class BookRepositoryImpl implements BookRepository {
@Cacheable
public Book findBook(ISBN isbn) {...}
}
(1) Using @CacheConfig to set the name of the cache.
@CacheConfig is a class-level annotation that allows sharing the cache names,
the custom KeyGenerator, the custom CacheManager, and the custom CacheResolver.
Placing this annotation on the class does not turn on any caching operation.
An operation-level customization always overrides a customization set on @CacheConfig.
Therefore, this gives three levels of customizations for each cache operation:
- Globally configured, e.g. through CachingConfigurer: see next section.
- At the class level, using @CacheConfig.
- At the operation level.
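The following sketch shows the class and operation levels together; the cache names and the archive lookup method are examples only:
@CacheConfig(cacheNames="books")
public class BookRepositoryImpl implements BookRepository {

	@Cacheable // uses the class-level "books" cache
	public Book findBook(ISBN isbn) {...}

	@Cacheable(cacheNames="archivedBooks") // operation-level override of the cache name
	public Book findArchivedBook(ISBN isbn) {...}
}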
Provider-specific settings are typically available on the CacheManager bean,
e.g. on CaffeineCacheManager . These are effectively also global.
Enabling Caching Annotations
It is important to note that declaring the cache annotations does not automatically trigger their actions - like many things in Infra, the feature has to be declaratively enabled (which means that, if you ever suspect caching is to blame, you can disable it by removing only one configuration line rather than all the annotations in your code).
To enable caching annotations, add the @EnableCaching annotation
to one of your
@Configuration
classes:
@Configuration
@EnableCaching
public class AppConfig {
@Bean
CacheManager cacheManager() {
CaffeineCacheManager cacheManager = new CaffeineCacheManager();
cacheManager.setCacheSpecification(...);
return cacheManager;
}
}
Alternatively, for XML configuration you can use the cache:annotation-driven
element:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:cache="http://www.springframework.org/schema/cache"
xsi:schemaLocation="
http://www.springframework.org/schema/beans https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/cache https://www.springframework.org/schema/cache/spring-cache.xsd">
<cache:annotation-driven/>
<bean id="cacheManager" class="cn.taketoday.cache.caffeine.CaffeineCacheManager">
<property name="cacheSpecification" value="..."/>
</bean>
</beans>
Both the cache:annotation-driven
element and the @EnableCaching
annotation let you
specify various options that influence the way the caching behavior is added to the
application through AOP. The configuration is intentionally similar to that of @Transactional.
The default advice mode for processing caching annotations is proxy , which allows
for interception of calls through the proxy only. Local calls within the same class
cannot get intercepted that way. For a more advanced mode of interception, consider
switching to aspectj mode in combination with compile-time or load-time weaving.
For more detail about advanced customizations (using Java configuration) that are
required to implement CachingConfigurer , see the
javadoc.
XML Attribute | Annotation Attribute | Default | Description |
---|---|---|---|
cache-manager | N/A (see the CachingConfigurer javadoc) | cacheManager | The name of the cache manager to use. A default CacheResolver is initialized behind the scenes with this cache manager (or cacheManager if not set). For more fine-grained management of the cache resolution, consider setting the 'cache-resolver' attribute. |
cache-resolver | N/A (see the CachingConfigurer javadoc) | A SimpleCacheResolver using the configured cacheManager | The bean name of the CacheResolver that is to be used to resolve the backing caches. This attribute is not required and needs to be specified only as an alternative to the 'cache-manager' attribute. |
key-generator | N/A (see the CachingConfigurer javadoc) | SimpleKeyGenerator | Name of the custom key generator to use. |
error-handler | N/A (see the CachingConfigurer javadoc) | SimpleCacheErrorHandler | The name of the custom cache error handler to use. By default, any exception thrown during a cache related operation is thrown back at the client. |
mode | mode | proxy | The default mode (proxy) processes annotated beans to be proxied by using the framework's AOP infrastructure (following proxy semantics, as discussed earlier, applying to method calls coming in through the proxy only). The alternative mode (aspectj) instead weaves the affected classes with the AspectJ caching aspect, modifying the target class byte code to apply to any kind of method call. AspectJ weaving requires load-time weaving (or compile-time weaving) to be enabled. |
proxy-target-class | proxyTargetClass | false | Applies to proxy mode only. Controls what type of caching proxies are created for classes annotated with the @Cacheable or @CacheEvict annotations. If the proxy-target-class attribute is set to true, class-based proxies are created. If proxy-target-class is false or if the attribute is omitted, standard JDK interface-based proxies are created. |
order | order | Ordered.LOWEST_PRECEDENCE | Defines the order of the cache advice that is applied to beans annotated with @Cacheable or @CacheEvict. No specified ordering means that the AOP subsystem determines the order of the advice. |
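As an illustration, and assuming @EnableCaching exposes the proxyTargetClass and order attributes listed above, the annotation counterparts can be set directly on the configuration class (the chosen values are examples only):
@Configuration
@EnableCaching(proxyTargetClass = true, order = 0)
public class CachingConfig {
}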
<cache:annotation-driven/> looks for @Cacheable/@CachePut/@CacheEvict/@Caching
only on beans in the same application context in which it is defined. This means that,
if you put <cache:annotation-driven/> in a WebApplicationContext for a
MockDispatcher , it checks for beans only in your controllers, not your services.
See the MVC section for more information.
Infra recommends that you only annotate concrete classes (and methods of concrete
classes) with the @Cache* annotations, as opposed to annotating interfaces.
You certainly can place an @Cache* annotation on an interface (or an interface
method), but this works only if you use the proxy mode (mode="proxy" ). If you use the
weaving-based aspect (mode="aspectj" ), the caching settings are not recognized on
interface-level declarations by the weaving infrastructure.
In proxy mode (the default), only external method calls coming in through the
proxy are intercepted. This means that self-invocation (in effect, a method within the
target object that calls another method of the target object) does not lead to actual
caching at runtime even if the invoked method is marked with @Cacheable . Consider
using the aspectj mode in this case. Also, the proxy must be fully initialized to
provide the expected behavior, so you should not rely on this feature in your
initialization code (that is, @PostConstruct ).
Using Custom Annotations
The caching abstraction lets you use your own annotations to identify what method
triggers cache population or eviction. This is quite handy as a template mechanism,
as it eliminates the need to duplicate cache annotation declarations, which is
especially useful if the key or condition are specified or if the foreign imports
(cn.taketoday
) are not allowed in your code base. Similarly to the rest
of the stereotype annotations, you can
use @Cacheable
, @CachePut
, @CacheEvict
, and @CacheConfig
as
meta-annotations (that is, annotations that
can annotate other annotations). In the following example, we replace a common
@Cacheable
declaration with our own custom annotation:
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD})
@Cacheable(cacheNames="books", key="#isbn")
public @interface SlowService {
}
In the preceding example, we have defined our own SlowService
annotation,
which itself is annotated with @Cacheable
. Now we can replace the following code:
@Cacheable(cacheNames="books", key="#isbn")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)
The following example shows the custom annotation with which we can replace the preceding code:
@SlowService
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)
Even though @SlowService
is not an Infra annotation, the container automatically picks
up its declaration at runtime and understands its meaning. Note that, as mentioned
earlier, annotation-driven behavior needs to be enabled.