Hi,
There appears to be a memory leak issue with XtraReports. We have two separate projects (built a little differently) that exhibit it. When our service runs in a Docker container (Linux) and we monitor it, we see that the Working Set increases continually until it reaches a critical point, after which the container stops with a "Segmentation fault (11)" and restarts automatically.
The problem does not occur when launching the application locally on Windows with Visual Studio. The Working Set remains stable at around 200 MB.
If we comment out the XtraReports "ExportToPdfAsync(stream)" call, the Working Set also remains stable in the container.
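For context, the endpoint boils down to something like the following sketch (the class and route names here are simplified; the complete code is in the attached project):

```csharp
// Simplified sketch of our export endpoint; the full implementation is in the attached sample.
[HttpGet("api/v1/payStubs/report")]
public async Task<IActionResult> GetPayStubReport()
{
    using var report = new PayStubReport();   // an XtraReport descendant (illustrative name)
    using var stream = new MemoryStream();
    await report.ExportToPdfAsync(stream);    // commenting this line out keeps the Working Set stable
    return File(stream.ToArray(), "application/pdf", "payStub.pdf");
}
```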
I have attached a sample project that reproduces the memory leak, along with a Dockerfile that builds a Docker image containing the SDK and the Microsoft monitoring tools. Simply:
- Unzip project and open command window at the project root
- dotnet build
- docker build . -t test:1
- docker run -d -p 8181:8181 -e "ASPNETCORE_URLS=http://+:8181" --name test-api test:1
- docker exec -it test-api /tools/dotnet-counters monitor -p 1
- Batch call the API at http://localhost:8181/api/v1/payStubs/report (see the sketch after this list)
- Observe the Working Set increasing continually
* You may have to configure the DevExpress packages
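To batch call the endpoint, I simply run a small loop like the one below (any load tool, such as curl in a loop, works just as well; the URL matches the docker run command above):

```csharp
// Illustrative loop to batch call the report endpoint.
using var client = new HttpClient();
for (int i = 0; i < 4000; i++)
{
    using var response = await client.GetAsync("http://localhost:8181/api/v1/payStubs/report");
    response.EnsureSuccessStatusCode();
}
```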
Versions:
.NET 6
DevExpress 23.2.4
Could you please tell us if we are doing something wrong or if it's actually a bug in XtraReports?
Thanks a lot,
Nathalie Richard
Hi Nathalie,
Thank you for the demo. I don't see anything wrong with your code. Indeed, the dotnet-counters tool reports an ever-increasing Working Set value. However, it appears to be quite different from what Docker Desktop shows or from what I get when running scripts such as ps_mem.py. Could you please clarify whether you also see the same discrepancy?
Hi Yaroslav,
I do indeed see discrepancies that I can't explain between the results of the different tools I use.
I have captured the output of the "top -p", "docker stats" and "ps_mem" commands.
Here is the result when the container has just started (no requests yet):
<top start.JPG>
<docker stats start.JPG>
<ps_mem start.JPG>
Here is the result after 4,000 requests:
<top 4000 requests.JPG>
<docker stats 4000 requests.JPG>
<ps_mem 4000 requests.JPG>
And finally, the result after 12,000 requests:
<top 12000 requests.JPG>
<docker stats 12000 requests.JPG>
<ps_mem 12000 requests.JPG>
As you can see, the top command shows the memory increasing (like dotnet-counters), but ps_mem doesn't.
As for docker stats, the small increase in memory usage seems to match the difference between the top command's RES and SHR values.
Thank you very much
Hello Nathalie,
I hope this message finds you well. Before I respond to your question, a word or two about your subscription and DevExpress Support policies.
As you may know, our products are sold on a subscription basis. While you are free to use your licensed DevExpress product indefinitely, support services are limited to individuals/organizations with active product subscriptions. For more information, please refer to our End User License Agreement/Licensing FAQ webpage: Licensing: EULAs and FAQ | DevExpress.
I mention this because, unfortunately, your DevExpress subscription has expired. If you require tech support services from DevExpress in the future, please take a moment to renew your license. Should you require purchase assistance, please contact our Client Services team via email (clientservices@devexpress.com) or submit your purchase-related question here.
Refer to the following help topic for more information on DevExpress support policies: Support Services.
Regarding this specific ticket, thank you for your update. Now your situation is clear to me. However, I would like to note that growing memory usage in a .NET application does not always indicate a memory leak. Due to the specifics of the .NET garbage collector, memory may not be cleared immediately after all objects have been released by the code. So, to catch memory leaks, we recommend using dedicated .NET memory profiling tools.
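As a quick sanity check (illustrative only, not something you need to keep in your project), you can force a full, compacting collection and log the managed heap size; if that number stays flat while the Working Set keeps growing, the growth is not caused by leaked managed objects:

```csharp
// Force a full, blocking, compacting collection and report the managed heap size.
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, blocking: true, compacting: true);
long managedBytes = GC.GetTotalMemory(forceFullCollection: true);
Console.WriteLine($"Managed heap after full GC: {managedBytes / (1024.0 * 1024.0):F1} MB");
```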
For example, use the dotnet-gcdump and dotnet-dump tools on your specific Docker container. The dotnet-gcdump tool should trigger garbage collection and records only the managed memory. We recommend calling dotnet-gcdump several times to make sure that a full garbage collection has completed. The dumps generated by this tool can be analyzed in Visual Studio, so you can compare a dump collected after 4,000 requests with a dump collected after 12,000 requests to check which .NET objects are leaking. If you see DevExpress objects leaking in these dumps, you can share the dumps with us so that we can review this issue more thoroughly.
Also, to analyze the entire memory used by your application, you can use the dotnet-dump tool. This tool may help you catch leaks in unmanaged memory (which is not controlled by the garbage collector). Such leaks are usually caused by third-party libraries that use unmanaged memory. For example, check the following thread where a leak in unmanaged memory was discussed: MemoryLeak in XtraReports using XRChart running on Ubuntu Linux.
Finally, I would like to note that the memory leaks you encounter may not even be related to our components. For more details, check Dennis's reply in the following thread where a similar issue was reported: High memory usage (memory leak?) I clicked on the reports several times and the memory usage jumped to 1.6GB and after 20 minutes (XAF Blazor Kubernetes).
Please let me know if you have any further questions on this topic. We will be happy to help you.
Hello Vasily,
Thank you very much for your reply and recommendations.
I have extracted 4 result files with dotnet-gcdump:
- 20240312_080635_1.gcdump: after API startup, before any requests
- 20240312_083705_1.gcdump: after 4,000 requests
- 20240312_091907_1.gcdump: after 12,000 requests
- 20240312_093331_1.gcdump: simply another GC dump collected 15 minutes later (no additional requests)
If I compare the 12,000-request dump with the 4,000-request dump, I see that the vast majority of the added objects come from DevExpress.
If I compare the last collection with the 4,000-request dump, it's the same thing (although there are somewhat fewer DevExpress objects).
I have attached the result files so you can analyze them on your side too.
For dotnet-dump, I have extracted 3 files:
- core_20240312_134957 (246 MB): after API startup, before any requests
- core_20240312_140826 (2.7 GB): after 4,000 requests
- core_20240312_143629 (6.92 MB): after 12,000 requests
When I compare the "dumpheap -stat" command results for the 4,000-request and 12,000-request dumps, I'm surprised to see that the number and size of objects seem to have decreased. Check the "dotnet-dump 4000 vs 12000 requests.JPG" image for the comparison. I cannot attach the core files since they exceed the permitted 30 MB.
I'm not sure how to use those dotnet-dump results effectively, though. Could you please help me do so?
Thank you again
Hello Nathalie,
Thank you for your update. We need some additional time to discuss this with our team. We will update this thread as soon as we have any news.
Hello Nathalie,
Thank you for your patience. We thoroughly reviewed all the memory dumps that you shared and tested your sample project on our side. Our conclusion is that the results that you observe do not indicate a memory leak. Let me address your findings:
Yes, that is true. When we compared these dumps on our side, we confirmed that there are uncleared objects in memory. However, we assume that this occurs because the dump was collected before all the unused objects were cleared.
Yes, when we compared these dumps, there were fewer DevExpress objects. So, this confirms our assumption that the previous dump was collected before garbage collection had finished.

We thoroughly researched all the objects that appeared in this dump, but our final conclusion is that this difference does not indicate a memory leak. Refer to the image below, which demonstrates the comparison of these dumps in Visual Studio:
Most of the memory is occupied by a list of WeakReference objects. This indicates that this is a queue of objects that should be cleared by garbage collection. Also, the memory difference that we observe in our comparisons is about 3.7 MB, so I doubt that such a difference could cause your Docker container to crash.
To sum up, we do not see any confirmation that the behavior you observe is directly related to our Reporting components. Also, we were not able to replicate major memory leaks while testing your report on our side.
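To illustrate why such entries are not a leak: a WeakReference does not keep its target alive, so these entries disappear as soon as a full collection actually runs (an illustrative snippet, not code from your project):

```csharp
// A WeakReference target becomes collectible as soon as no strong references remain.
var weak = new WeakReference(new byte[1024 * 1024]);
Console.WriteLine(weak.IsAlive);    // typically True right after creation

GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();

Console.WriteLine(weak.IsAlive);    // False once the target has been collected
```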
I recommend you check Dennis's reply in the following thread where a similar issue was reported: High memory usage (memory leak?) I clicked on the reports several times and the memory usage jumped to 1.6GB and after 20 minutes (XAF Blazor Kubernetes). Perhaps you encountered a similar issue.
Hello Vasily,
Thank you for your investigation.
You are right, the difference in the managed memory is negligible and will probably be freed by the GC after some time or when it needs to. All right. So the problem resides in the unmanaged memory, which is even more nebulous to me.
I checked the last thread you gave me, but there is no real conclusion about the problem's source or a solution. There are some suggestions about settings to toggle, but they affect the managed memory, which is not our problem (I still tried them, without success).
The other link you sent me (https://supportcenter.devexpress.com/ticket/details/t1123238) talks about a memory leak in the libgdiplus drawing engine. The solution there was to use the DevExpress wrapper around SkiaSharp. However, we are already using it; we even call the ForceSkia() method, as you can see in the project I gave you initially.
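For reference, our startup code calls it roughly like this (if I recall the fully qualified name correctly; the exact code is in the project I sent you):

```csharp
// Sketch of our Program.cs; the exact code is in the sample project.
// ForceSkia() forces the Skia-based drawing engine instead of libgdiplus.
DevExpress.Drawing.Internal.DXDrawingEngine.ForceSkia();

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
```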
Other links I consulted pointed to newer DevExpress versions, but we are using the latest (23.2). Same thing for SkiaSharp (2.88.7).
However, I'm surprised you say that you can't reproduce it on your side. Don't you see the top command showing virtual memory increasing by around 2 GB for every 4,000 requests? And the core dump file (from dotnet-dump) growing by close to 3 GB for every 4,000 requests?
And I would like to point out again that if I remove the ExportToPdfAsync method call, I don't have any problem. Therefore, I find it hard to believe that it doesn't come from DevExpress (or one of its dependencies, but those are still part of the product/software solution you provide). On the other hand, I don't think we are doing anything unusual in the project; it's pretty simple. So I would expect other people to have reported the issue as well, but that doesn't seem to be the case…
Thank you again
Hello Nathalie,
Thank you for your update. We need some additional time to review this case more thoroughly, make sure we did not miss anything, and discuss the results with our team. We will let you know as soon as we have any news.
Hello Nathalie,
Thank you for your patience. We have thoroughly researched the case once again, but we did not find any leaks related to our Reporting components. Let me share the steps we went through while diagnosing the issue:
We started our research by following the steps you shared in your initial post and measured memory usage with your test project.
We used three different tools to collect memory dumps:
- dotnet-gcdump - to review managed memory usage.
- dotnet-dump - to review the complete dump.
- gcore - to analyze unmanaged memory usage.
We executed your API multiple times and compared the different dumps to analyze memory usage.
These are the results we obtained after researching all of the dumps:
Thus, we conclude that the "Segmentation fault (11)" failure that you got in your container cannot be caused by a memory leak.
But we found another issue that might cause this error: while testing your project on our side, we found that the .NET Core threading code calls the pthread_getspecific function of the libc library, and this function crashes after a large number of calls. As a result, .NET fails to get handles for new threads, which may cause a crash. Since this crash occurs in Microsoft code, we cannot diagnose the issue further.
The reason this crash may occur while you use our ExportToPdfAsync method may be that our Async method implementations start several threads to run an operation asynchronously, and the .NET thread creation code may fail while creating new threads, as per our research.
Since the same code works correctly on Windows, we assume that this may be a bug in the .NET Core implementation for Linux. The only recommendation that we can give to work around the issue is to try replacing the ExportToPdfAsync method in your API with the synchronous ExportToPdf implementation, which should not start new threads. Please try this and let us know if it resolves the initial issue in your container.
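In other words, the change we suggest is along these lines (the controller action and report names are illustrative, mirroring the endpoint from your sample):

```csharp
// Use the synchronous export so that the export itself does not start additional threads.
[HttpGet("api/v1/payStubs/report")]
public IActionResult GetPayStubReport()
{
    using var report = new PayStubReport();   // illustrative XtraReport descendant
    using var stream = new MemoryStream();
    report.ExportToPdf(stream);               // synchronous export
    return File(stream.ToArray(), "application/pdf", "payStub.pdf");
}
```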
Hi Vasily,
I'm very grateful for your deep investigation. It's much appreciated.
I understand that it could be on Microsoft's side, particularly in the asynchronous handling. Unfortunately, I tried using ExportToPdf instead of ExportToPdfAsync, but I get pretty much the same result.
What surprises/confuses me about this issue is that it implies everyone running XtraReports (or at least ExportToPdf) on Linux would have this problem. So I can't be the only one to have reported it to you?
I found an example on GitHub from what looks to be an official (?) DevExpress examples collection (https://github.com/DevExpress-Examples/reporting-asp-net-core-print-without-preview) using ExportToPdfAsync. I modified it a little to add the ForceSkia() call, remove the HTTPS requirement, and use ExportToPdf (instead of ExportToPdfAsync), but I end up with the same issue: the unmanaged memory increases until there is nothing left and the container fails.
Don't you have a working solution on Linux? Is it that XtraReports (or again, at least ExportToPdf) has never worked on that OS?
Thank you
Hello Nathalie,
Thank you for sharing your findings with us. I need some additional time to check this and discuss it with the team. We will update this thread as soon as we have any news.