Query caches
Out of the box, the set of query caches is per database. That means that a new set of caches is initialized for each new database.
The maximum number of entries per cache is configured with `server.memory.query_cache.per_db_cache_num_entries`. This setting determines the cache size only when `server.memory.query_cache.sharing_enabled` is set to `false`.
Query caches may consume a lot of memory, especially when running many active databases. To address this and make memory consumption more predictable, you can configure the DBMS to use a single set of caches for all databases. For more information, see Unifying query caches.
Configure caches
The following is a summary of the query cache configurations. For more information, see Operations Manual → Configuration settings.
| Setting | Description | Default |
|---|---|---|
| `server.memory.query_cache.sharing_enabled` | Enterprise only. Enables sharing cache space between different databases. With this option turned on, databases share cache space, but not cache entries. | `false` |
| `server.memory.query_cache.shared_cache_num_entries` | Enterprise only. The number of cached queries for all databases. This setting determines the cache size only when `server.memory.query_cache.sharing_enabled` is set to `true`. | |
| `server.memory.query_cache.per_db_cache_num_entries` | The number of cached queries per database. This setting determines the cache size only when `server.memory.query_cache.sharing_enabled` is set to `false`. | |
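As an illustration, unifying the query caches across all databases might look like the following neo4j.conf sketch. The shared-cache entry setting and the entry count shown here are examples based on the table above, not recommendations for your deployment:

```
# Share one query cache across all databases (Enterprise only)
server.memory.query_cache.sharing_enabled=true

# Total number of cached queries across all databases when sharing is enabled
# (example value only)
server.memory.query_cache.shared_cache_num_entries=2000
```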
Query size limit
Introduced in 2026.01
The default query size limit for a query to be considered for query caching is 128 KiB of query text.
This limit prevents large generated query text strings from occupying too much memory in the query cache. Such query strings often contain inlined data, and are unlikely to be reusable before auto-parameterization of literals is applied.
To override the 128 KiB limit, you can prefix a query with either `CYPHER cache=force` to always cache it, or with `CYPHER cache=skip` to never cache it and skip the query cache lookup.
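For example, a large generated query could be prefixed as follows. The queries and the data model (`Product`, `sku`) are illustrative:

```cypher
// Always cache this query, even if its text exceeds the 128 KiB limit
CYPHER cache=force
MATCH (p:Product)
WHERE p.sku IN ['A-001', 'A-002', 'A-003']
RETURN p;

// Never cache this one-off query and skip the cache lookup entirely
CYPHER cache=skip
MATCH (p:Product)
WHERE p.sku = 'A-001'
RETURN p;
```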
As a best practice, avoid passing data in the query text string and use parameters instead.
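For instance, instead of inlining a generated list of values into the query text, the same query can take a parameter. The sketch below uses Cypher Shell's `:param` command to set the value; drivers have equivalent parameter-passing mechanisms, and the data model is again illustrative:

```cypher
// Set the parameter once, e.g. in Cypher Shell or via a driver
:param skus => ['A-001', 'A-002', 'A-003'];

// The query text stays small and identical across runs,
// so its cached plan can be reused for different parameter values
MATCH (p:Product)
WHERE p.sku IN $skus
RETURN p;
```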