Common latency pitfalls

Common factors that impact latency while testing or integrating

Low latency is critical to customers who use Spade’s data to make decisions in the authorization flow. Spade’s API is highly performant and able to process enrichment requests in <50ms. However, the network and systems outside of Spade’s service can also impact latency. Below we outline common pitfalls that can hurt performance, along with testing and integration recommendations that help mitigate them and deliver the lowest round-trip latencies possible.

1. Geography

The most important factor impacting latency is the distance between the client and server, and the performance of the network in between. Spade offers environments in the eastern (Virginia) and western (Oregon) United States to reduce the distance from wherever you are making requests.

Recommendation: Integrate and test with the closest Spade environment to you. Run tests remotely if needed, but be aware of cloud throughput limits (see below).
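
A quick way to verify which environment is closest is to time a handful of requests against each region and compare the medians. The sketch below uses hypothetical health-check URLs; substitute the actual base URLs from your Spade onboarding materials.

```python
import statistics
import time

import requests

# Hypothetical regional endpoints -- replace with the real Spade base URLs.
ENVIRONMENTS = {
    "us-east": "https://api.us-east.example.com/health",
    "us-west": "https://api.us-west.example.com/health",
}


def median_latency_ms(url: str, samples: int = 10) -> float:
    """Time several GET requests and return the median round trip in milliseconds."""
    timings = []
    with requests.Session() as session:  # reuse one connection per environment
        for _ in range(samples):
            start = time.perf_counter()
            session.get(url, timeout=5)
            timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)


if __name__ == "__main__":
    results = {name: median_latency_ms(url) for name, url in ENVIRONMENTS.items()}
    for name, ms in sorted(results.items(), key=lambda kv: kv[1]):
        print(f"{name}: {ms:.1f} ms")
    print(f"Closest environment: {min(results, key=results.get)}")
```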

2. Network Impact

There will always be variability in network latency, driven by factors including network congestion, transmission medium (fibre-optic, copper, wireless, etc.), routing efficiency, and Wi-Fi signal strength. You may even see noticeable differences in latency at different times of the day as network congestion ebbs and flows. Please bear these factors in mind when measuring latency, especially if you are running tests from your home or office internet.

Recommendation: Understand that the network itself is usually the most variable contributor to round-trip latency. Test and integrate under the best network conditions possible.
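
Because individual measurements are easily skewed by transient congestion, it helps to collect many samples and look at percentiles rather than a single number. The sketch below is a minimal example of that approach; the endpoint URL is a placeholder.

```python
import statistics
import time

import requests

# Placeholder endpoint -- replace with the Spade environment you are testing against.
URL = "https://api.us-east.example.com/health"


def report_latency_percentiles(url: str, samples: int = 100) -> None:
    """Collect many round-trip timings and report p50/p95/p99 in milliseconds."""
    timings = []
    with requests.Session() as session:
        for _ in range(samples):
            start = time.perf_counter()
            session.get(url, timeout=5)
            timings.append((time.perf_counter() - start) * 1000)
    q = statistics.quantiles(timings, n=100)  # 99 cut points
    print(f"p50={q[49]:.1f} ms  p95={q[94]:.1f} ms  p99={q[98]:.1f} ms")
```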

3. Short-Lived Connections & Repeated DNS Lookups

The process of creating and closing connections adds substantial overhead when you are making many requests rapidly. Many request libraries also perform a DNS lookup for every connection they open, which adds further latency.

If you are attempting to achieve extremely high throughput, you may also run into limitations using a single connection: a conventional HTTP/1.1 connection cannot handle multiple requests in parallel.

Recommendation: Use a client session when making repeated requests so you avoid repeated, costly DNS lookups, and avoid opening and closing connections unnecessarily. If you are enriching high volumes, use multiple parallel connections.
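
As a rough illustration, the sketch below reuses a single requests.Session, so connections stay open between calls and repeated DNS lookups and TCP/TLS handshakes are avoided, and fans a batch of requests across several pooled connections via a thread pool. The endpoint URL, header, and payload shape are placeholders, not Spade's actual API.

```python
import concurrent.futures

import requests

# Placeholder endpoint and credentials -- substitute your Spade enrichment URL and API key.
ENRICH_URL = "https://api.us-east.example.com/enrich"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}


def enrich(session: requests.Session, transaction: dict) -> dict:
    """POST one transaction over an already-open connection."""
    response = session.post(ENRICH_URL, json=transaction, headers=HEADERS, timeout=5)
    response.raise_for_status()
    return response.json()


def enrich_batch(transactions: list[dict], workers: int = 8) -> list[dict]:
    """Fan a batch out across several pooled connections in parallel."""
    with requests.Session() as session:
        # Size the connection pool to match the number of worker threads,
        # so each thread can hold its own open connection.
        adapter = requests.adapters.HTTPAdapter(pool_connections=workers,
                                                pool_maxsize=workers)
        session.mount("https://", adapter)
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(lambda txn: enrich(session, txn), transactions))
```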

4. Cloud Throughput Limits

If you are integrating or testing from a cloud instance (e.g. EC2), it is not uncommon for the cloud provider to impose limits on the amount of data you can transfer or the number of requests you can make, especially on lower-cost instances. Some instances also have burstable traffic limits, so you may observe throughput declining over time as burst allowances are used up.

Recommendation: Understand the limits of your instances and scale up or out appropriately to avoid hitting bottlenecks.

5. VPN

Routing traffic through a VPN can substantially increase latency, even when your VPN exit point is located close to the enrichment server. This is especially true if you are testing from a local machine.

Recommendation: Don't route traffic through a VPN or any other unnecessary paths to improve latency.