
Can I set a memory limit for the .NET Core garbage collector to collect objects more aggressively?

+6
−0

I have an ASP.NET Core 5 web application that provides functionality involving file uploads followed by rather memory-intensive processing. After each processing run (successful or not), the memory could be reclaimed.

I stress-tested the endpoint with large files and checked the memory of the w3wp.exe process (the application runs in IIS), which evolves something like this:

  • step 1: 400 MB
  • step 2: 800 MB
  • step 3: 1200 MB
  • step 4: 1800 MB
  • step 5: 1300 MB

So there seems to be no memory leak, but the GC kicks in very late. I would like it to begin the cleanup sooner, but I cannot seem to find a way to do this. What I have found/tried:

  • why the delay in collection? - objects larger than 85 KB are considered large by the GC and are collected less frequently than smaller objects
  • forcing the GC collection after each operation - this can be done using GC.Collect(), but it is not recommended to do so
  • Runtime configuration options for garbage collection - I checked the GC settings and applied a limit via System.GC.HeapHardLimit, with the final configuration looking like this:
{
  "runtimeOptions": {
    "tfm": "net5.0",
    "framework": {
      "name": "Microsoft.AspNetCore.App",
      "version": "5.0.0"
    },
    "runtimeOptions": {
      "configProperties": {
        "System.GC.HeapHardLimit": 1000000000
      }
    },
    "configProperties": {
      "System.GC.Server": true,
      "System.Reflection.Metadata.MetadataUpdater.IsSupported": false,
      "System.Runtime.Serialization.EnableUnsafeBinaryFormatterSerialization": false
    }
  }
}

The memory still goes beyond 1 GB and stays there until I process more data, when it is finally reclaimed.

  • IIS application pool memory limit - setting a memory limit on the application pool causes it to be recycled rather than forcing the GC to act sooner

The application runs on a shared internal server, and I would like it to have a reasonable peak memory footprint.


1 comment thread

Why is that a problem? (4 comments)

2 answers

+4
−0

Generally speaking, the frequency of garbage collection is a space / time tradeoff:

              collection effort   live object size
GC overhead ~ ----------------- = ---------------- 
                 memory freed     dead object size 

By defaulting to collect garbage only when memory starts to get scarce, the runtime maximizes dead object size, and thus the efficiency of each collection.

When determining whether memory is scarce, the runtime assumes that the operating system's (or container's) free memory is available for use. In your case it is not, because you want to share that memory with many other applications.

The typical response would be to tell the runtime how much memory it is allowed to use by setting System.GC.HeapHardLimit (or System.GC.HeapHardLimitPercent).

It is worth noting that these settings cap the size of the managed heap, which is only part of the total memory used by your process. In particular, the just-in-time-compiled executable code, as well as any native memory used, does not count towards that limit, so if you are aiming for 1 GB of total memory, you should set the heap limit quite a bit lower than that (as a rough guideline: if not specified, the max heap size defaults to 75% of available memory).
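For illustration, a runtimeconfig.json fragment capping the managed heap well below a 1 GB total-memory target might look like this (the 700 MB figure is an assumption for the sketch, not a recommendation; note that configProperties sits directly under runtimeOptions):

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true,
      "System.GC.HeapHardLimit": 734003200
    }
  }
}
```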

For applications with very dynamic live object sizes, better collection efficiency may be achieved by manually triggering collection when the live object size is particularly low (and the dead object size is considerable). Whether this holds for your file upload depends on how significantly the upload affects the overall live object size, and how easy it is to identify times when no upload is running (which may be non-trivial due to concurrency). It may be easier to simply set a heap size limit instead.
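A minimal sketch of such a manual trigger, assuming you can detect an idle moment (the TrimMemory name and the idle detection are placeholders, not part of any API). It also requests a one-off compaction of the large object heap, where big upload buffers end up:

```csharp
using System;
using System.Runtime;

public static class GcTrim
{
    // Hypothetical helper: call this at a moment when no upload/processing
    // is in flight (detecting that moment is up to the application).
    public static void TrimMemory()
    {
        // Buffers larger than 85,000 bytes live on the large object heap,
        // which is not compacted by default; request a one-off compaction
        // so the freed space can actually be returned.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced,
                   blocking: true, compacting: true);
    }

    public static void Main()
    {
        // Simulate one large processing run, then trim.
        var buffer = new byte[100 * 1024 * 1024];
        buffer[0] = 1;   // touch the buffer so the memory is committed
        buffer = null;
        TrimMemory();
        Console.WriteLine("trimmed");
    }
}
```

Note that after the compaction runs, LargeObjectHeapCompactionMode reverts to its default, so the flag must be set again before each forced collection.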


0 comment threads

+1
−0
  • forcing the GC collection after each operation - this can be done using GC.Collect(), but it is not recommended to do so

...unless you have good reason. Here you have good reason.

Other, even more extreme, options would be things like doing the processing in a separate AppDomain or spawning a new process for it.
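A minimal sketch of the separate-process option: when the child process exits, the operating system reclaims all of its memory immediately, regardless of what the GC did. The worker executable and its arguments are placeholders (here `dotnet --version` stands in for a hypothetical processing executable):

```csharp
using System;
using System.Diagnostics;

public static class OutOfProcDemo
{
    // Sketch: run the memory-hungry step in a child process so its memory
    // is fully returned to the OS on exit. workerPath is a placeholder for
    // a hypothetical companion executable that does the actual processing.
    public static int RunWorker(string workerPath, string args)
    {
        var psi = new ProcessStartInfo
        {
            FileName = workerPath,
            Arguments = args,
            UseShellExecute = false,
            RedirectStandardOutput = true,
        };
        using var proc = Process.Start(psi);
        Console.WriteLine(proc.StandardOutput.ReadToEnd());
        proc.WaitForExit();
        return proc.ExitCode;
    }

    public static void Main()
    {
        // Stand-in for the real worker: any short-lived process will do.
        int code = RunWorker("dotnet", "--version");
        Console.WriteLine($"worker exited with {code}");
    }
}
```

The trade-off is the cost of process start-up and of passing the input (e.g. via a temp file or a pipe) to the worker.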


1 comment thread

.NET Core does not support AppDomain unloading (1 comment)
