I’m now working more often with IEnumerables and IQueryables, and the more code I write the more nervous I get about the performance issues related to the traversal of these sequences. When writing code to fulfill a purpose I often forget about the multiple enumerations hiding below LINQ’s syntactic sugar, but the feeling that something is not the way it is supposed to be emerges right after the first self-code review.
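To make that worry concrete, here is a minimal sketch (the names are mine) of how innocent-looking code silently re-enumerates a deferred query:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class HiddenEnumerations
{
    static void Main()
    {
        int trips = 0; // counts how often the "expensive" source is read

        IEnumerable<string> names = Enumerable.Range(1, 3)
            .Select(i => { trips++; return $"user{i}"; });

        // Innocent-looking code, but Any() and First() each start a
        // fresh enumeration of the deferred query above.
        if (names.Any())
            Console.WriteLine(names.First());

        Console.WriteLine(names.Count()); // a third, full pass

        Console.WriteLine(trips); // 5: 1 (Any) + 1 (First) + 3 (Count)
    }
}
```

If `names` were backed by a database or a file instead of `Enumerable.Range`, each of those passes would repeat the real I/O.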
Today is a good day to talk about this, since I just read an article about Caching LINQ Results (edit: removed the link, the blog doesn’t exist anymore). In this article Chris Eargle talks about three techniques to use with LINQ:
- Delayed Execution (Lazy Loading)
- Materialization (Eager Loading)
- Memoization (Lazy Loading + Caching)
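The three techniques can be sketched side by side. Note that the BCL has no built-in memoizing wrapper for `IEnumerable<T>` (libraries such as Ix provide one), so the `Memoize` helper below is my own minimal, single-threaded sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class LinqLoadingDemo
{
    static void Main()
    {
        int calls = 0;
        IEnumerable<int> source = Enumerable.Range(1, 4)
            .Select(n => { calls++; return n * 10; });

        // 1. Deferred execution (lazy loading): the query re-runs on
        //    every enumeration.
        source.Count(); source.Count();
        Console.WriteLine(calls); // 8: 4 per enumeration

        // 2. Materialization (eager loading): ToList() runs the query
        //    once; later enumerations read the stored list.
        calls = 0;
        List<int> list = source.ToList();
        list.Count(); list.Count();
        Console.WriteLine(calls); // 4: only the ToList() pass

        // 3. Memoization (lazy loading + caching): elements are computed
        //    on first demand and replayed from a cache afterwards.
        calls = 0;
        IEnumerable<int> memo = Memoize(source);
        memo.Take(2).Count();     // computes only the first 2 elements
        memo.Count();             // replays 2 from cache, computes 2 more
        Console.WriteLine(calls); // 4 in total
    }

    // Minimal memoizing wrapper: shares one cache and one source
    // enumerator across all enumerations of the returned sequence.
    static IEnumerable<T> Memoize<T>(IEnumerable<T> source)
    {
        var cache = new List<T>();
        IEnumerator<T> it = source.GetEnumerator();

        IEnumerable<T> Replay()
        {
            int i = 0;
            while (true)
            {
                if (i == cache.Count)
                {
                    if (!it.MoveNext()) yield break;
                    cache.Add(it.Current);
                }
                yield return cache[i++];
            }
        }
        return Replay();
    }
}
```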
All these techniques have their own specific goals, but my purpose for this post is not to discuss when to use each one. My real concern is about the limits of caching these kinds of sequences.
I know that memory nowadays is cheap, but if everyone made that assumption every day we’d run out of memory on our servers faster than people can tweet “earthquake!” when the ground starts to move. What is the threshold for this kind of caching? Should there be a dedicated data structure to host this information? What if the issue is not the memory that we use on this kind of caching, but the time it stays in memory before the GC kicks in?
I will be experimenting with a few situations and searching to see if anyone has done the same. I’m thinking about exploring the following possibilities:
- Create a data structure to host the IEnumerable cache, limited to a specific size, with a ring-buffer-like strategy (evicting the oldest entries) or perhaps some usage metric to decide which entries to evict.
- Use caching in several spots when needed, but apply strategies like weak references, or timeouts when a cache entry isn’t used within a specific time limit, to free up space as soon as possible.
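As a starting point for the first idea, here is a hypothetical sketch (all names are mine, not from any library) of a cache for materialized sequences that is bounded to a fixed number of entries and evicts the oldest one when full, ring-buffer style:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Bounded cache for materialized sequences. Single-threaded sketch:
// when the capacity is reached, the oldest entry is evicted first.
class BoundedSequenceCache<TKey, TItem> where TKey : notnull
{
    private readonly int capacity;
    private readonly Dictionary<TKey, IReadOnlyList<TItem>> entries = new();
    private readonly Queue<TKey> insertionOrder = new();

    public BoundedSequenceCache(int capacity) => this.capacity = capacity;

    public IReadOnlyList<TItem> GetOrAdd(TKey key, Func<IEnumerable<TItem>> produce)
    {
        if (entries.TryGetValue(key, out var cached))
            return cached;

        if (entries.Count == capacity)              // evict the oldest entry
            entries.Remove(insertionOrder.Dequeue());

        var materialized = produce().ToList();      // enumerate exactly once
        entries[key] = materialized;
        insertionOrder.Enqueue(key);
        return materialized;
    }

    public int Count => entries.Count;
}
```

Swapping the insertion-order queue for last-access timestamps would turn this into an LRU cache, and storing `WeakReference<List<TItem>>` instead of the lists themselves would let the GC reclaim entries under memory pressure, which connects to the second idea above.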
I’d like to hear from you on these topics: do you have any experience in this field?