General question about Server programming (and scalability for Data Center versions): How are we supposed to handle large batches of data?
- We support importing requirements (that's our data) from Excel. How much data should we accept in a single Excel file?
- Are we supposed to handle extremely large files? That may require a lot of work with the streaming APIs, and not everything can be done that way (see the streaming sketch after this list).
- Are we supposed to let an OutOfMemoryError happen, or should we set an artificial limit to prevent it? If so, how do we pick that magic number? Should it be proportional to the JVM's Xmx / the machine's RAM, based on measurements from our own machines (see the sizing sketch after this list)? And why impose a limit at all, given that some servers can handle more and we risk blocking a feature that would actually work there?
- We have a report that takes a long time to build, and we've already optimized/cached everything we could. How can we limit its impact? Do administrators have a way of capping expensive requests to a certain percentage of the CPU? (The throttling sketch at the end of this post shows what we can do on our side today.)
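
For context, here is a minimal sketch of the kind of streaming read we mean, using Apache POI's XSSF event (SAX) API so only one row is in memory at a time. The class names and the RowHandler mapping are ours/hypothetical; the point is how much more work this is compared to just opening an XSSFWorkbook:

```java
import java.io.File;
import java.io.InputStream;

import javax.xml.parsers.SAXParserFactory;

import org.apache.poi.openxml4j.opc.OPCPackage;
import org.apache.poi.xssf.eventusermodel.ReadOnlySharedStringsTable;
import org.apache.poi.xssf.eventusermodel.XSSFReader;
import org.apache.poi.xssf.eventusermodel.XSSFSheetXMLHandler;
import org.apache.poi.xssf.eventusermodel.XSSFSheetXMLHandler.SheetContentsHandler;
import org.apache.poi.xssf.usermodel.XSSFComment;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;

public class StreamingRequirementImport {

    /** Reads the first sheet of an .xlsx file row by row instead of loading the whole workbook. */
    public void importFile(File file) throws Exception {
        try (OPCPackage pkg = OPCPackage.open(file)) {
            XSSFReader xssfReader = new XSSFReader(pkg);
            ReadOnlySharedStringsTable sharedStrings = new ReadOnlySharedStringsTable(pkg);

            XMLReader parser = SAXParserFactory.newInstance().newSAXParser().getXMLReader();
            parser.setContentHandler(new XSSFSheetXMLHandler(
                    xssfReader.getStylesTable(), sharedStrings, new RowHandler(), false));

            // Only the first sheet, for brevity; real code would iterate over all sheets.
            try (InputStream sheet = xssfReader.getSheetsData().next()) {
                parser.parse(new InputSource(sheet));
            }
        }
    }

    /** Receives cells one at a time; we build one requirement per row and persist it immediately. */
    private static class RowHandler implements SheetContentsHandler {
        @Override public void startRow(int rowNum) { /* start a new requirement */ }
        @Override public void endRow(int rowNum)   { /* save the requirement, then drop the reference */ }
        @Override public void cell(String cellRef, String value, XSSFComment comment) {
            // map the cell to a requirement field
        }
        @Override public void headerFooter(String text, boolean isHeader, String tagName) { /* ignored */ }
    }
}
```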
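And to make the Xmx question concrete, this is the kind of sizing heuristic we have in mind. The heap fraction and per-row estimate are made-up numbers from our own testing, which is exactly what makes us uncomfortable:

```java
/** Sketch of sizing an import limit against the JVM heap instead of hard-coding a magic number. */
public final class ImportLimits {

    // Rough upper bound on memory one parsed requirement row may retain (assumption, measured by us).
    private static final long ESTIMATED_BYTES_PER_ROW = 2_000L;
    // Share of the heap we are willing to spend on a single import (assumption).
    private static final double HEAP_FRACTION_FOR_IMPORTS = 0.10;

    /** Maximum number of rows we would accept in one Excel import on this JVM. */
    public static long maxRowsPerImport() {
        long maxHeap = Runtime.getRuntime().maxMemory(); // reflects -Xmx
        return (long) (maxHeap * HEAP_FRACTION_FOR_IMPORTS / ESTIMATED_BYTES_PER_ROW);
    }
}
```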
All of this boils down to one single question, as you've guessed: how are we supposed to limit the impact of some users managing a lot of data, so that the other users can still use Confluence/Jira properly?
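For the report case, the only throttle we can apply ourselves today seems to be capping concurrency, not CPU. A rough sketch of that idea (class name, cap, and wait time are hypothetical, not tuned values):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

/** Caps how many expensive reports run at once, failing fast when the cap is reached. */
public class ReportThrottle {

    private static final int MAX_CONCURRENT_REPORTS = 2; // assumption, not a recommended value
    private final Semaphore slots = new Semaphore(MAX_CONCURRENT_REPORTS);

    public <T> T buildReport(Callable<T> report) throws Exception {
        // Wait briefly for a slot, then give up so the request thread is not blocked forever.
        if (!slots.tryAcquire(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("Too many reports are running; please retry later");
        }
        try {
            return report.call();
        } finally {
            slots.release();
        }
    }
}
```

This only limits how many reports run concurrently, not what share of the CPU they consume, which is why we are asking whether the platform offers something better.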