If you’re modelling your data into business information in the universe, then surely creating an analytic database as well is overkill?
It’s a distinct possibility. If your data is in reasonably good shape, then let the universe do what it does best.
However, complex computations, such as a validation rule that calculates a customer’s eligibility for a loyalty bonus, may be better pre-processed.
Taking some dreamed-up criteria as an example:
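As a sketch of what such a rule might look like, here is a minimal Python version. The criteria themselves (spend, order count, recent returns) and every threshold are invented for illustration; a real rule would come from the business.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    spend_last_12m: float   # total spend in the last 12 months
    orders_last_12m: int    # number of orders in the last 12 months
    returns_last_90d: int   # returns raised in the last 90 days

def eligible_for_bonus(c: Customer) -> bool:
    # All three (hypothetical) criteria must hold for the loyalty bonus.
    return (c.spend_last_12m >= 500
            and c.orders_last_12m >= 3
            and c.returns_last_90d == 0)
```

Even a rule this simple fans out into several aggregations over the order history when written as SQL, which is what makes it expensive at query time.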
Whilst the computation could be run at query time, it would probably be better pre-processed, so that at query time the only computation is a test of true or false.
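A minimal sketch of that split, assuming an invented nightly batch and made-up thresholds: the expensive rule runs once, off-line, and the reporting query only reads the stored flag.

```python
def precompute_flags(customers):
    """Run the expensive eligibility rule once, in a nightly batch,
    returning {customer_id: eligible?} ready to be stored as a flag
    column in the analytic database. Thresholds are hypothetical."""
    return {cid: spend >= 500 and orders >= 3
            for cid, (spend, orders) in customers.items()}

# Batch step (e.g. overnight load into the analytic database):
flags = precompute_flags({"c1": (600.0, 4), "c2": (120.0, 1)})

# Query time: no computation beyond reading true or false.
c1_eligible = flags["c1"]   # True
c2_eligible = flags["c2"]   # False
```

The design point is simply where the work happens: the batch pays the cost once per day, instead of every report paying it on every refresh.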
The universe doesn’t store data. You can sit it on top of a transactional system and let users see who’s bought what products in the last five seconds. But ask that same system which customers are eligible for a loyalty bonus and you might make two people unhappy: firstly the next customer, waiting for the system to respond so they can buy one of your shiny products; but also the humble account manager wanting to know which of her customers she can delight with news of their eligibility. If the customer loyalty bonus object were selected alongside a few other objects of similar complexity, the resulting database query could take tens of seconds, or over a minute, to run. After waiting a minute for that information, her mind may wander to that spreadsheet she keeps of her own sales to her key customers, and she may decide her time is better spent maintaining her own little island of data.
So by trimming the query run time from tens of seconds, or a minute plus, down to milliseconds, what will the impact be? Probably a transactional system that stays responsive for the next customer, and an account manager who trusts the reports instead of retreating to her spreadsheet.
After all, data is the new gold!