This guide covers the Senzing APIs; if you are looking for details on the Senzing App, please see the App System Requirements.
The guidelines herein are approximations of the hardware required to ingest the outlined number of input records from your data source(s). The exact hardware specifications and load time will vary depending on the data characteristics of your data sources.
Note: these figures are guidelines only and are estimates for the initial historical ingestion of your source data. Senzing performs Entity Resolution in real time and constantly reevaluates prior analytical assertions and outcomes as data is ingested. Once the initial ingestion of your historical input records is complete, the Entity Resolution processing is also complete; there are no subsequent analytical processes to run.
To increase the ingestion rate of a large historical data set, a Senzing system can initially be deployed on more substantial hardware to complete the ingestion faster. If ongoing production demands (e.g., additions, delta changes, searches) don't require as much hardware, the provisioned hardware can be reduced to match. For additional details, please contact us.
Single Node Deployments
Small - Up to 10 million records
Requirements: 8 cores, 48GB of RAM, 100GB of SSD or NVMe storage
Example AWS instance: i3.2xlarge ~$0.63/hr
At an approximate ingestion rate of 100 records per second for typical data, 10 million records load in about 1 day.
Medium - Up to 50 million records
Requirements: 16 cores, 96GB of RAM, 500GB of SSD or NVMe storage
Example AWS instance: i3.4xlarge ~$1.25/hr
At an approximate ingestion rate of 200 records per second for typical data, 50 million records load in under 3 days.
Large - Up to 100 million records
Requirements: 32 cores, 192GB of RAM, 1TB of SSD or NVMe storage
Example AWS instance: i3.8xlarge ~$2.50/hr
At an approximate ingestion rate of 400 records per second for typical data, 100 million records load in under a week.
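The load-time estimates above are simple rate arithmetic, which you can reuse for your own record counts. A minimal sketch (the function name and rates-as-inputs are illustrative, not part of any Senzing API):

```python
def load_time_days(records: int, records_per_second: float) -> float:
    """Estimate historical-load wall time in days at a sustained ingestion rate."""
    seconds = records / records_per_second
    return seconds / 86_400  # 86,400 seconds per day

# Guideline tiers from this guide (rates are approximations for typical data):
print(round(load_time_days(10_000_000, 100), 1))   # Small:  ~1.2 days
print(round(load_time_days(50_000_000, 200), 1))   # Medium: ~2.9 days
print(round(load_time_days(100_000_000, 400), 1))  # Large:  ~2.9 days
```

Actual rates vary with data characteristics, so treat the output as a planning estimate rather than a commitment.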
Multi-node deployments readily support billions of input records. Please contact us for further information and sizing guidance to meet your requirements.
The general guideline on storage planning is to allocate 10KB of flash-based storage per input record. This equates to approximately 1TB of storage per 100 million records.
You will also need to account for additional system software, logs, source record files, etc.; these can be placed on general-purpose storage.
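The 10KB-per-record guideline can be turned into a quick storage estimate. A minimal sketch (function name is illustrative; decimal units are assumed, matching the guide's 1TB-per-100M figure):

```python
def flash_storage_gb(records: int, kb_per_record: float = 10) -> float:
    """Flash storage estimate at the ~10KB-per-record guideline (decimal GB)."""
    return records * kb_per_record / 1_000_000  # KB -> GB

print(flash_storage_gb(10_000_000))   # Small tier:  100 GB
print(flash_storage_gb(100_000_000))  # Large tier:  1000 GB (~1TB)
```

Remember this covers only the Senzing data itself; budget separately for system software, logs, and source files on general-purpose storage.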
A Senzing deployment includes both a database engine and Senzing itself; the single node deployments above assume both the database and Senzing are installed on the same machine.
In addition to hardware configurations that achieve significantly faster performance and handle even larger data sets (into the billions), Senzing can scale both the database and Senzing horizontally.
What to think about in cloud environments
Latency, latency, and less latency!
Cloud environments are great at providing elasticity, easing resource allocation, and decreasing the time to provision new environments. Senzing fits this well: its nodes scale horizontally and share only the database engines, so Senzing nodes can be added or removed to meet current or expected demand. At the same time, this ease means certain details are harder to control. Senzing performance is sensitive to latency and IOPS, so there are a few things to watch out for.
Co-location: If your systems sit far away from each other in the data center, or in different data centers, network latencies will be much higher than if they are co-located on the same switch.
Local flash on the database: A single locally attached NVMe drive can achieve more than 100k IOPS, whereas a remote SAN may only achieve 2k IOPS. In cloud environments, be particularly aware of the random read/write IOPS capabilities of your database nodes.
Burstable and tier limits: Understand carefully, especially with IO systems, any burstable or tier limits that may apply to resources you provision. Are those guaranteed 10k IOPS available only for a burst period? Do you only get them up to a set throughput, after which they drop substantially?
Why Flash Storage is required
To perform real-time entity resolution, you must read from the database more than you write to it. Flash is much faster than traditional spinning disks and has become so affordable that it is now standard equipment. You can still use spinning disks, but it may take 10x longer to load your data.
The performance expectations above are based upon typical person or company data sets such as master customer lists, prospect lists, employee lists, watch lists, national registries, etc.
You may run into data sets that have extra-large records or highly related data, meaning everybody is related to everybody else. The nice thing is that you can increase performance by adding more cores and RAM at 6GB per core. If you prefer to size per thread, you will need 1.5GB of RAM per thread, with a minimum of 6GB per node.
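The per-thread RAM guideline is easy to apply directly. A minimal sketch (function name is illustrative; the 1.5GB-per-thread and 6GB-floor values come from this guide):

```python
def min_ram_gb(threads: int, gb_per_thread: float = 1.5, floor_gb: float = 6.0) -> float:
    """RAM guideline: 1.5GB per thread, with a minimum of 6GB per node."""
    return max(threads * gb_per_thread, floor_gb)

print(min_ram_gb(2))   # below the floor -> 6.0 GB
print(min_ram_gb(8))   # 12.0 GB
```

Note this is consistent with the 6GB-per-core guidance above when running multiple threads per core (e.g., the Small tier's 8 cores and 48GB of RAM).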
If you run into slow data sets, please feel free to contact us, as this often means the data was mis-mapped or could be mapped differently to achieve your performance needs. We are constantly improving how we guide you toward proper mapping, as well as how Senzing automatically tolerates ineffective mapping or data.