I get asked fairly frequently for metrics on Pervasive performance, particularly as it relates to integrating SaaS applications such as NetSuite and RightNow. Just today, a customer asked me what Pervasive's performance metrics for Salesforce were. That is a very difficult question to answer.
Performance depends heavily on multiple factors. With web services, reliable performance measurements can be difficult, sometimes even impossible, to obtain because of the wide range of factors that affect data transfer rates: time of day, network bandwidth, and available RAM are just a few. The best way to get an accurate estimate is to test speed directly in your own environment, preferably at various times of day so you can see how system load varies.
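Pervasive doesn't ship a harness for this, but the "test it yourself, at different times" advice is easy to sketch. Here's a minimal Python timer you could wrap around a real transfer call; `fake_transfer` is a hypothetical stand-in for your actual upload or download, and you'd run the script at different times of day to compare the stats.

```python
import statistics
import time

def time_transfer(transfer, runs=5):
    """Run a transfer callable several times and report elapsed seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        transfer()
        samples.append(time.perf_counter() - start)
    return {
        "min": min(samples),
        "max": max(samples),
        "mean": statistics.mean(samples),
    }

def fake_transfer():
    # Hypothetical stand-in for a real web-service call; replace with
    # your actual request against the SaaS endpoint you care about.
    time.sleep(0.01)

stats = time_transfer(fake_transfer, runs=3)
print(f"mean {stats['mean']:.3f}s over 3 runs")
```

Running multiple samples and keeping min/mean/max, rather than a single number, is what exposes the time-of-day and load variation described above.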
Performance is most significantly affected by two variables: the frequency of data and its volume. If you test a file of a specific size at 1 p.m. on the same day of the week, odds are fair that the processing speed will not change significantly. However, if you don't have a good estimate of file size (small, medium, large) or of how many files arrive (ten, hundreds, thousands), your numbers will rarely match up. You can run dozens of tests, and if the frequency and volume are never consistent, you won't get consistent results. In most projects, frequency and volume may not be controllable, since they're driven by external workflows. You may have to set up special workflows just for the test, and that can skew the results, since they won't match your production situation.
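Since volume and frequency interact, one way to get comparable numbers is to sweep both dimensions in a small test matrix: many small files, a few large ones, and points in between, all against the same workload. A rough sketch, where `fake_process` is a hypothetical stand-in for the real per-record work:

```python
import time

def measure(process, file_count, records_per_file):
    """Seconds to push `file_count` files of `records_per_file` records each."""
    start = time.perf_counter()
    for _ in range(file_count):
        for i in range(records_per_file):
            process(f"row-{i}")
    return time.perf_counter() - start

def fake_process(record):
    # Stand-in for parsing/mapping/writing one record.
    _ = record.split("-")

# Same total record count, very different frequency/volume mix.
for files, size in [(1000, 10), (100, 100), (10, 1000)]:
    elapsed = measure(fake_process, files, size)
    print(f"{files:>5} files x {size:>5} rows: {elapsed:.3f}s")
```

Holding the total record count constant while shifting the file-count/file-size mix is what lets you see whether per-file overhead or per-record cost dominates in your situation.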
You can see why giving a simple answer to the question of performance isn’t simple at all. That said, there are times when it’s clear that there is a performance issue, and you need a good strategy to solve it.
So, how can you combat this issue? Parallelizing the workflow within the Pervasive process designer is one option that has been used time and time again with excellent results. That alone works for a large percentage of cases.
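The details of parallelizing inside the Pervasive process designer are product-specific, but the underlying idea is just splitting the data into slices and running them concurrently. A minimal sketch in Python, where `run_batch` is a hypothetical stand-in for invoking one integration process on a slice:

```python
from concurrent.futures import ThreadPoolExecutor

def run_batch(batch):
    # Hypothetical stand-in for one integration-process invocation;
    # here it just sums record lengths so the sketch is self-contained.
    return sum(len(rec) for rec in batch)

records = ["record-%d" % i for i in range(1000)]
batches = [records[i::4] for i in range(4)]  # split into 4 interleaved slices

# Run the four slices concurrently instead of one long sequential pass.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_batch, batches))

total = sum(results)
print(total)  # → 9890
```

Because each slice is independent, the same pattern scales from four workers on one server to workers spread across several, which is where the next question comes in.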
Now consider this: if you parallelize the process, do you know how many connections one network card on one server can manage? The network and the server themselves can become the bottleneck. It may look like the software is causing the issue, but the nature of hardware and networks makes the real cause difficult to pinpoint.
If you look at our performance benchmarks, you'll find that Pervasive Data Integrator can process enough data to handle most, if not all, of the data volumes you're likely dealing with.
To help with performance issues, Pervasive offers multiple ways to invoke integration processes. This allows load balancing across multiple servers, which can provide the throughput needed to scale. Currently, our Integration Server SDK allows web services to invoke Pervasive integration processes directly. With the addition of a low-cost network load balancer, that gives us the ability to spread integration work across more servers. Instead of one stream of data hitting a web service such as Salesforce, we can have ten or twenty. This proves to be a very cost-effective and easy way to increase throughput and reduce the delays brought on by peaks in data frequency and volume. Since each server doesn't require a large amount of RAM or CPU power, you can attack the problem with smaller servers and still expand your throughput.
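In practice the network load balancer does the distribution, but the simplest policy it applies, round-robin, is easy to sketch. The server names and file names below are hypothetical; the point is that incoming batches fan out evenly across the pool:

```python
from itertools import cycle

# Hypothetical pool of integration servers behind the load balancer.
servers = ["intg-01", "intg-02", "intg-03"]

def assign(batches, servers):
    """Map each incoming batch to a server in round-robin order."""
    pool = cycle(servers)
    return [(batch, next(pool)) for batch in batches]

batches = [f"file-{i}.csv" for i in range(7)]
plan = assign(batches, servers)
for batch, server in plan:
    print(batch, "->", server)
```

With seven batches and three servers, no server sees more than three streams, which is exactly the smoothing effect described above when peaks in frequency and volume arrive.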
In addition, if you would like a more robust load balancer, the management utilities in Integration Hub can provide one. Integration Hub includes load balancing among its major features, as well as a web services API. The difference between the basic Integration Server and Integration Hub is that Integration Server exposes an API for one server, while Integration Hub can manage it across all of its servers.
So, once you've gotten as good an idea as you can of your bandwidth and network limitations, and you're concerned that one server may not be able to do the job, consider how spreading the connections across servers can tackle the issue.