Graph analytics in Aura

Graph analytics uses graph algorithms to uncover patterns, detect anomalies, and predict behavior within connected data. Typical use cases include:

  • Fraud detection

  • Supply chain optimization

  • Recommendation engine development

Aura provides the cloud infrastructure to run complex graph algorithms at scale using the Neo4j Graph Data Science (GDS) library.

Aura Graph Analytics

Aura Graph Analytics is an on-demand, ephemeral environment for running graph analytics on data stored anywhere, from AuraDB and self-managed Neo4j instances to non-Neo4j data sources.

With Aura Graph Analytics, you can create and delete sessions on demand. Sessions run as isolated Aura instances, with no memory or compute resources shared with your data store.

You can configure parameters like memory size and Time-To-Live (TTL) for each session according to your workload. See the Aura Graph Analytics page for more details.

If you plan to use Aura Graph Analytics from AuraDB, verify the following requirements:

  • AuraDB tier: AuraDB Professional, Business Critical, or Virtual Dedicated Cloud

  • Neo4j version: 5 or later

If you plan to use Aura Graph Analytics outside of an Aura environment (for example via the Python client), you also need to create Aura API credentials.
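Outside Aura, sessions are typically managed through the GDS Python client. The following is a minimal sketch of the create-and-delete lifecycle described above, assuming the `graphdatascience` package is installed and you have valid Aura API and database credentials; the session name and the 8 GB memory size are illustrative values, not requirements:

```python
from datetime import timedelta


def run_ephemeral_session(client_id, client_secret, db_uri, db_user, db_password):
    """Create an on-demand GDS session, run work against it, then delete it."""
    # Third-party client: pip install graphdatascience
    # (imported inside the function so the sketch stays readable without it)
    from graphdatascience.session import (
        AuraAPICredentials,
        DbmsConnectionInfo,
        GdsSessions,
        SessionMemory,
    )

    sessions = GdsSessions(api_credentials=AuraAPICredentials(client_id, client_secret))

    # Create (or reuse) a session with an explicit memory size and TTL;
    # an idle session is cleaned up automatically once the TTL expires.
    gds = sessions.get_or_create(
        session_name="my-session",                 # illustrative name
        memory=SessionMemory.m_8GB,                # size according to your workload
        db_connection=DbmsConnectionInfo(db_uri, db_user, db_password),
        ttl=timedelta(minutes=30),
    )
    try:
        # Run graph algorithms through the returned GDS client here,
        # e.g. project a graph from AuraDB and call algorithms on it.
        pass
    finally:
        # Delete the session explicitly rather than waiting for the TTL.
        sessions.delete(session_name="my-session")
```

Since the session runs as an isolated Aura instance, deleting it (or letting the TTL lapse) stops billing without affecting the source database.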

Alternative deployment options

If the on-demand ephemeral architecture of Aura Graph Analytics is not suitable for your specific workload, Aura offers two alternative graph analytics solutions:

  • Enable the Graph Analytics plugin on an existing AuraDB instance (Professional) for lightweight data exploration, where performance isolation is not critical.

  • Create an AuraDS instance (Professional, Enterprise) when you need a long-running, persistent environment for collaboration or large-scale model hosting.

Aura deployment comparison

Each entry below lists the value for the three options in order: Aura Graph Analytics (recommended), the Graph Analytics plugin (AuraDB Pro), and AuraDS.

Primary use cases

  • Aura Graph Analytics: Analytical data pipelines (ETL), ephemeral workloads, interactive experimentation

  • Graph Analytics plugin: Lightweight exploration

  • AuraDS: Persistent model serving, concurrent experiments on shared physical resources

Compute architecture

  • Aura Graph Analytics: Ephemeral / on-demand (spin up, run, spin down)

  • Graph Analytics plugin: Shared instance (runs on the same resources as the database)

  • AuraDS: Dedicated instance (always on, pausing possible)

Billing

  • Aura Graph Analytics: Pay-as-you-go (per session minute)

  • Graph Analytics plugin: Included in the AuraDB cost

  • AuraDS: Flat hourly rate (per instance size)

Data sources

  • Aura Graph Analytics: AuraDB, self-managed Neo4j, Pandas DataFrames

  • Graph Analytics plugin: AuraDB (integrated storage), Pandas DataFrames

  • AuraDS: AuraDS (integrated storage), Pandas DataFrames

Resources for analytics

  • Aura Graph Analytics: Full isolation (no impact on database performance[1])

  • Graph Analytics plugin: Shared (can slow down the transactional database)

  • AuraDS: Full isolation or shared[2]

Machine learning

  • Aura Graph Analytics: Train and predict (session-scoped)

  • Graph Analytics plugin: Limited

  • AuraDS: Train, store, and serve models

Maximum memory

  • Aura Graph Analytics: Up to 128 GB (Professional, Business Critical) or 512 GB (Virtual Dedicated Cloud)

  • Graph Analytics plugin: Up to 128 GB

  • AuraDS: Up to 384 GB

Number of concurrent GDS sessions

  • Aura Graph Analytics: Up to 3 (Professional trial) or 100 (Professional, Business Critical, Virtual Dedicated Cloud)

  • Graph Analytics plugin: Not applicable

  • AuraDS: Not applicable

Restart behavior: downtime

  • Aura Graph Analytics: None

  • Graph Analytics plugin: None

  • AuraDS: Short

Restart behavior: projected graphs

  • Aura Graph Analytics: Unaffected

  • Graph Analytics plugin: Not retained

  • AuraDS: Restored automatically[3]

Restart behavior: trained models

  • Aura Graph Analytics: Unaffected

  • Graph Analytics plugin: Not retained

  • AuraDS: Restored automatically[3]


1. The graph projection and result write-back can impact the source database. In a cluster environment, for example, graph projections should not be performed on the cluster leader.
2. If the transactional database that serves as the data source runs on the same AuraDS instance, graph analytics can impact its performance.
3. Graphs and models created or updated after the instance update/upgrade process has started are not guaranteed to be restored upon restart.