Hi,
I’m currently working with a dataset of approximately 3 to 3.5 million rows retrieved from SQL, which I need to load into a pivot grid. However, the load fails after running for some time.
As a test, I tried loading a smaller subset of up to 1.5 million rows, but the issue persisted: the error indicates an out-of-memory exception during grid loading.
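To make the setup concrete, here is a rough sketch of the load path in Python/pandas terms (the actual pivot grid component is different, but the data flow is the same; the connection string, table, and column names below are placeholders, not my real schema):

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string, for illustration only.
engine = create_engine(
    "mssql+pyodbc://user:pass@server/db?driver=ODBC+Driver+17+for+SQL+Server"
)

# Pull every raw detail row into client memory, then pivot.
# With ~3.5M rows, this full materialization is where memory runs out.
raw = pd.read_sql("SELECT region, product, amount FROM sales", engine)

pivot = raw.pivot_table(
    index="region",
    columns="product",
    values="amount",
    aggfunc="sum",
)
```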
Given that each row contains only a few fields, should the pivot grid theoretically support this volume of data? If not, could you recommend efficient solutions or alternative approaches for handling such large datasets?
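One direction I have been considering, though I’m not sure it is the right one, is pushing the aggregation into SQL so that only summarized rows ever reach the client. A rough sketch, again with placeholder names:

```python
# Let the database do the heavy lifting: GROUP BY collapses millions of
# detail rows into a much smaller summary before anything leaves the server.
summary = pd.read_sql(
    """
    SELECT region, product, SUM(amount) AS total_amount
    FROM sales
    GROUP BY region, product
    """,
    engine,
)

pivot = summary.pivot_table(
    index="region",
    columns="product",
    values="total_amount",
    aggfunc="sum",  # rows are already aggregated; this just reshapes
)
```

Would pre-aggregating like this be a reasonable workaround, or is there a better way to handle it on the grid side?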
I would greatly appreciate any insights or suggestions.
Thanks,