
I am looking to make one of my applications more efficient from a resource-utilization perspective and would like your input on the problem below.

This application connects to a number of databases. On every db it runs a query, brings a bunch of records into memory, and performs some operations on them.

I am using Executors.newFixedThreadPool(n) to spawn multiple threads, each handling the task corresponding to one db. However, depending on the number of records fetched for the dbs being processed at any given time, the memory footprint fluctuates.
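For context, the current setup looks roughly like the sketch below. `fetchRecords` and `process` are illustrative stand-ins for the real query and calculation logic, which are not shown here:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the current setup: one task per database on a fixed-size pool.
public class PerDbProcessor {

    // Placeholder for "run a query, bring records into memory".
    static List<Integer> fetchRecords(String dbUrl) {
        return List.of(1, 2, 3);
    }

    // Placeholder for the aggregate-level calculation over all records.
    static int process(String dbUrl) {
        List<Integer> records = fetchRecords(dbUrl);
        return records.stream().mapToInt(Integer::intValue).sum();
    }

    public static List<Integer> runAll(List<String> dbUrls, int nThreads) {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        List<Future<Integer>> futures = new ArrayList<>();
        for (String url : dbUrls) {
            futures.add(pool.submit(() -> process(url)));
        }
        List<Integer> results = new ArrayList<>();
        try {
            for (Future<Integer> f : futures) results.add(f.get());
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return results;
    }
}
```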

Ideally, I would want to shrink the thread pool (not supported in my current setup) when available memory drops below a threshold. The scheduler could essentially defer picking up the next task until sufficient memory is available again.

My question is whether such memory-aware scheduling logic is already available somewhere that I can use, or whether I need to build it from scratch.
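One detail worth noting: `ThreadPoolExecutor` (the class `newFixedThreadPool` returns) does expose `setCorePoolSize`/`setMaximumPoolSize`, even though the `ExecutorService` interface does not. I have not found a standard JDK facility that gates scheduling on memory, though, so below is a minimal sketch of what I mean: each task waits until heap headroom exceeds a threshold before running. The threshold, poll interval, and class name are all illustrative, and the `freeMemory`-based check is only a heuristic since it moves with GC activity:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical wrapper: defers each task while heap headroom is low.
public class MemoryGatedExecutor {
    private final ExecutorService pool;
    private final long minFreeBytes;

    public MemoryGatedExecutor(int nThreads, long minFreeBytes) {
        this.pool = Executors.newFixedThreadPool(nThreads);
        this.minFreeBytes = minFreeBytes;
    }

    // Headroom = unallocated heap the JVM may still claim + free space in the current heap.
    private static long freeHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.maxMemory() - rt.totalMemory() + rt.freeMemory();
    }

    public void submit(Runnable task) {
        pool.submit(() -> {
            try {
                // Defer the task until enough memory is available.
                while (freeHeap() < minFreeBytes) {
                    Thread.sleep(500);
                }
                task.run();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }

    public void shutdown() { pool.shutdown(); }

    public boolean awaitTermination(long millis) throws InterruptedException {
        return pool.awaitTermination(millis, TimeUnit.MILLISECONDS);
    }
}
```

Is there an existing library that does something like this properly, ideally accounting for GC behaviour rather than polling `Runtime` the way this sketch does?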

Thanks.

Santosh Kewat
  • Have you considered retrieving from the db in batches (e.g. pull 50 records at a go from the db)? – Taylor Nov 20 '13 at 19:19
  • Thanks Taylor. The specific operations to be performed on the records need to have a full view of the results, since these operations need to do aggregate level calculations. These calculations are not simple enough to allow me to use SQL aggregate clauses in the queries. So, as much as possible, I would prefer fetching all the records together. – Santosh Kewat Nov 20 '13 at 19:38
  • Could you fetch in batches and incrementally calculate? I have no idea what the calculations are, so I'm just spitballing, but fetching large amounts of data into memory is going to be problematic. Another option would be a SQL function but again, this is spitballing. Can you post your calculation code? (or some scaled down version) – Taylor Nov 20 '13 at 20:12
  • Are you sure this would help? How much memory are you going to free by reducing the number of threads? – Alexei Kaigorodov Nov 21 '13 at 04:48
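The commenters' suggestion of fetching in batches and calculating incrementally could be sketched as below. Since the actual calculations are not posted, a running mean is used here purely as a stand-in for an aggregate that can be folded one value at a time in O(1) memory; whether the real calculations admit such an incremental form is an open question:

```java
// Stand-in for an aggregate computed incrementally over batched fetches,
// so that no batch of records needs to stay in memory after it is folded in.
public class RunningMean {
    private long n;
    private double mean;

    // Fold one value into the aggregate: mean += (x - mean) / n.
    public void accept(double value) {
        n++;
        mean += (value - mean) / n;
    }

    public double mean() { return mean; }
}
```

The calling loop would then fetch a batch (e.g. 50 rows), call `accept` per row, discard the batch, and fetch the next one.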
