Lesson 5: Modern High-Performance C# & Real-World Scenarios
Explore zero-allocation data structures, pagination patterns, and the N+1 query problem — common interview and architecture topics.
Span<T> — Zero-Allocation Memory Access
Introduced in C# 7.2, Span<T> is a ref struct that provides a type-safe, bounds-checked window into a contiguous region of memory. It can point into arrays, stack-allocated buffers, or even unmanaged memory — all without allocating anything on the garbage-collected heap.
Why Does This Matter?
Consider how string.Substring() works: it allocates an entirely new string object on the heap, copies the characters, and returns it. In a hot loop processing millions of strings (like a log parser or CSV reader), these allocations create enormous pressure on the garbage collector.
Span<T> solves this by creating a view (a pointer + length) into the existing memory. No copy, no allocation:
```csharp
// Traditional approach — allocates a NEW string on the heap
string logLine = "2024-01-15|ERROR|Connection timeout to db-primary";
string level = logLine.Substring(11, 5); // "ERROR" — heap allocation!

// Span approach — zero allocations, just a view into the same memory
ReadOnlySpan<char> logSpan = logLine.AsSpan();
ReadOnlySpan<char> levelSpan = logSpan.Slice(11, 5); // "ERROR" — no allocation!

// You can compare spans without allocating strings
if (levelSpan.SequenceEqual("ERROR"))
{
    // Handle error — still zero allocations
}
```
Span<T> Rules and Constraints
Span<T> is a ref struct, which means the compiler enforces strict rules to ensure it always lives on the stack:
- Cannot be a field in a class (classes live on the heap).
- Cannot be boxed (boxing puts a value on the heap).
- Cannot be used in async methods (the state machine captures variables on the heap).
- Cannot be captured in a lambda or local function.
- Cannot implement any interfaces (including IEnumerable<T>).
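These constraints surface as compile-time errors, not runtime failures. A minimal sketch of what the compiler rejects (the invalid lines are shown as comments so the snippet compiles):

```csharp
public class Holder
{
    // Span<char> _span;  // ERROR: a ref struct cannot be a field of a class

    public void Demo()
    {
        Span<char> local = stackalloc char[16]; // fine — lives on the stack
        local[0] = 'x';

        // Func<int> f = () => local.Length;
        // ERROR (pre-C# 13): a ref struct cannot be captured by a lambda
    }
}
```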
C# 13 introduced the allows ref struct anti-constraint, which relaxes several of these restrictions: Span<T> can now be used in certain lambda expressions, in async methods (before the first await), and as a generic type argument for type parameters marked with allows ref struct. However, most production codebases are still on C# 12 or earlier, so interviewers will typically expect you to know the original restrictions listed above. If asked, mentioning C# 13's relaxations demonstrates up-to-date knowledge.
Because Span<T> cannot implement IEnumerable<T>, you cannot use standard LINQ methods on it. Instead, use manual loops or the specialized extension methods on the MemoryExtensions class (such as Contains, IndexOf, Trim, and StartsWith).
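Those MemoryExtensions helpers cover most parsing needs without ever leaving "span world". A short sketch on a ReadOnlySpan<char> (the log-line format here is just an illustrative assumption):

```csharp
ReadOnlySpan<char> line = "  ERROR|Connection timeout  ".AsSpan();

// All of these operate on the span directly — no intermediate strings
ReadOnlySpan<char> trimmed = line.Trim();             // strips the padding
bool isError = trimmed.StartsWith("ERROR");           // true
int pipe = trimmed.IndexOf('|');                      // 5
ReadOnlySpan<char> message = trimmed.Slice(pipe + 1); // "Connection timeout"
```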
Span<T> with Arrays
```csharp
int[] bigArray = new int[1_000_000];

// Create a span over just the first 100 elements
Span<int> slice = bigArray.AsSpan(0, 100);

// Modify through the span — this modifies the ORIGINAL array
slice[0] = 42;
Console.WriteLine(bigArray[0]); // 42

// Stack-allocated span (no heap at all)
Span<int> stackBuffer = stackalloc int[128];
stackBuffer[0] = 99;
```
Memory<T> — The Heap-Safe Cousin
When you need span-like slicing but in contexts where Span<T> can't be used (async methods, class fields, lambdas), use Memory<T>. It is not a ref struct, so it can live on the heap. You convert it to a Span<T> when you need to do actual work:
```csharp
public class BufferProcessor
{
    private Memory<byte> _buffer; // OK — can be a class field

    public BufferProcessor(byte[] data)
    {
        _buffer = data.AsMemory();
    }

    public async Task ProcessAsync()
    {
        Memory<byte> chunk = _buffer.Slice(0, 256);
        await SomeAsyncOperation(chunk); // OK — Memory works in async

        // When you need fast, synchronous access, get a Span
        Span<byte> span = chunk.Span;
        span[0] = 0xFF;
    }
}
```
Quick Comparison
| Feature | Span<T> | Memory<T> |
|---|---|---|
| Type | ref struct (stack only) | Regular struct (heap OK) |
| Class fields | Not allowed | Allowed |
| Async methods | Not allowed | Allowed |
| Lambdas | Not allowed | Allowed |
| Performance | Fastest (no indirection) | Slightly slower (extra layer) |
| LINQ support | No | No |
| Best for | Synchronous, hot-path code | Async pipelines, buffering |
Pagination with Skip and Take
Pagination is one of the most common patterns in web APIs and database-backed applications. LINQ's Skip and Take operators map directly to SQL OFFSET and FETCH.
```csharp
// Generic pagination method
public static IQueryable<T> GetPage<T>(
    IQueryable<T> source, int pageNumber, int pageSize)
{
    return source
        .Skip((pageNumber - 1) * pageSize)
        .Take(pageSize);
}

// Usage with Entity Framework
int pageSize = 10;
int pageNumber = 3; // Get the 3rd page

var page = dbContext.Products
    .OrderBy(p => p.Name)               // IMPORTANT: always order before Skip
    .Skip((pageNumber - 1) * pageSize)  // Skip first 20 items
    .Take(pageSize)                     // Take items 21-30
    .ToList();

// This generates SQL like:
// SELECT * FROM Products ORDER BY Name
// OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY
```
Always apply OrderBy before Skip/Take. Without an explicit ordering, SQL Server may return rows in any order, so your pages can contain duplicate or missing items. This is a common gotcha in interviews.
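A related subtlety: even with OrderBy, a non-unique sort key (duplicate product names, say) leaves the relative order of ties undefined, so page boundaries can still shift between requests. A sketch of the fix, assuming the Products entity has an Id primary key:

```csharp
// Add a unique tie-breaker column so page boundaries are deterministic
var page = dbContext.Products
    .OrderBy(p => p.Name)
    .ThenBy(p => p.Id)   // unique key breaks ties — stable pagination
    .Skip((pageNumber - 1) * pageSize)
    .Take(pageSize)
    .ToList();
```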
Building a Complete Paged Response
```csharp
public class PagedResult<T>
{
    public List<T> Items { get; set; }
    public int TotalCount { get; set; }
    public int PageNumber { get; set; }
    public int PageSize { get; set; }

    public int TotalPages => (int)Math.Ceiling(TotalCount / (double)PageSize);
    public bool HasPrevious => PageNumber > 1;
    public bool HasNext => PageNumber < TotalPages;
}

public static async Task<PagedResult<T>> ToPagedResultAsync<T>(
    this IQueryable<T> query, int pageNumber, int pageSize)
{
    int totalCount = await query.CountAsync();

    List<T> items = await query
        .Skip((pageNumber - 1) * pageSize)
        .Take(pageSize)
        .ToListAsync();

    return new PagedResult<T>
    {
        Items = items,
        TotalCount = totalCount,
        PageNumber = pageNumber,
        PageSize = pageSize
    };
}
```
The N+1 Query Problem
This is one of the most critical performance pitfalls when using LINQ with an ORM like Entity Framework, and a very popular interview topic.
What Happens
Imagine you have Order entities, each with a collection of OrderItem child entities. If you load the orders and then iterate over their items, EF fires one query for the orders and then one additional query per order to load its items. For 100 orders, that's 101 database round-trips:
```csharp
// N+1 PROBLEM: This fires 1 + N queries!
var orders = dbContext.Orders.ToList(); // Query 1: Get all orders

foreach (var order in orders)
{
    // Each access to .Items fires a NEW query (lazy loading)
    foreach (var item in order.Items) // Query 2, 3, 4... N+1
    {
        Console.WriteLine(item.ProductName);
    }
}
```
The Fix: Eager Loading with Include()
```csharp
// FIXED: One query with a JOIN — everything loaded upfront
var orders = dbContext.Orders
    .Include(o => o.Items)          // Eager load the child collection
    .ThenInclude(i => i.Product)    // Can go deeper into nested entities
    .ToList();                      // Single query with JOINs

// Now iterating is free — all data is already in memory
foreach (var order in orders)
{
    foreach (var item in order.Items)
    {
        Console.WriteLine(item.ProductName); // No additional queries
    }
}
```
Other Solutions
- Explicit Loading: Load related data manually with dbContext.Entry(order).Collection(o => o.Items).Load(). Gives you precise control but still requires awareness.
- Projection (Select): Instead of loading full entities, project into a DTO with .Select(o => new { o.Id, Items = o.Items.Select(...) }). This generates optimized SQL that only fetches the columns you need.
- Split Queries (.AsSplitQuery()): EF Core 5+ can split a complex Include into multiple simpler queries instead of one massive JOIN, which can be faster when the JOIN produces a cartesian explosion.
```csharp
// Projection approach — most efficient, fetches only what's needed
var orderSummaries = dbContext.Orders
    .Select(o => new
    {
        o.OrderId,
        o.OrderDate,
        ItemCount = o.Items.Count(),
        Total = o.Items.Sum(i => i.Price * i.Quantity)
    })
    .ToList();
// Generates a single, optimized SQL query with subqueries
```
How do you detect N+1 in the first place? Enable EF Core query logging (e.g., optionsBuilder.LogTo(Console.WriteLine)) during development and watch for repeated queries. Tools like MiniProfiler and the EF Core "query tags" feature also help. In production, monitor your database's query count per request.
Bonus: Modern C# Collection Features
Collection Expressions (C# 12)
C# 12 introduced a new terse syntax for creating collections:
```csharp
// Old way
List<int> nums = new List<int> { 1, 2, 3 };

// C# 12 collection expression (equivalent; renamed so both compile together)
List<int> nums2 = [1, 2, 3];
int[] arr = [4, 5, 6];

// Spread operator — combine collections
List<int> combined = [..nums, ..arr, 7, 8]; // [1, 2, 3, 4, 5, 6, 7, 8]
```
FrozenDictionary and FrozenSet (.NET 8)
When you have a dictionary or set that is populated once and then only read, FrozenDictionary and FrozenSet (in System.Collections.Frozen) optimize the internal structure at creation time for the fastest possible reads:
```csharp
using System.Collections.Frozen;

var config = new Dictionary<string, string>
{
    ["host"] = "localhost",
    ["port"] = "5432",
    ["db"] = "myapp"
};

// Freeze it — optimized for reads, immutable afterward
FrozenDictionary<string, string> frozenConfig = config.ToFrozenDictionary();

string host = frozenConfig["host"]; // Faster than Dictionary for reads
```
Immutable Collections
For thread-safe, immutable collections that still allow "modifications" (by returning new instances), use the types in System.Collections.Immutable:
```csharp
using System.Collections.Immutable;

var list = ImmutableList<int>.Empty;
var list2 = list.Add(1); // Returns a NEW list; 'list' is still empty
var list3 = list2.Add(2); // list2 is [1], list3 is [1, 2]
```
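Because every Add returns a fresh instance, building a large immutable list one element at a time churns through intermediate allocations. The builder pattern in System.Collections.Immutable amortizes this; a short sketch:

```csharp
using System.Collections.Immutable;

// Builder: mutate cheaply in a loop, then freeze once at the end
ImmutableList<int>.Builder builder = ImmutableList.CreateBuilder<int>();
for (int i = 0; i < 1_000; i++)
{
    builder.Add(i); // no new list per call — the builder mutates in place
}
ImmutableList<int> result = builder.ToImmutable(); // single immutable snapshot
```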
Coding Challenge 1: Pagination
Write a generic pagination function that takes an IQueryable<T>, a page number, and a page size. It should return a PagedResult<T> object containing the items, total count, and navigation properties (has next, has previous, total pages).
View Solution
Reusing the PagedResult<T> class defined earlier in this lesson:
```csharp
// Using the PagedResult<T> class from the section above
public static PagedResult<T> Paginate<T>(
    IQueryable<T> source, int pageNumber, int pageSize)
{
    if (pageNumber < 1) pageNumber = 1;
    if (pageSize < 1) pageSize = 10;

    int totalCount = source.Count();

    List<T> items = source
        .Skip((pageNumber - 1) * pageSize)
        .Take(pageSize)
        .ToList();

    return new PagedResult<T>
    {
        Items = items,
        TotalCount = totalCount,
        PageNumber = pageNumber,
        PageSize = pageSize
    };
}

// Usage:
var page3 = Paginate(dbContext.Products.OrderBy(p => p.Name), 3, 10);
```
Coding Challenge 2: Span-Based Domain Extraction
Use ReadOnlySpan<char> to extract the domain name from a list of 10 email addresses without calling .Split() or .Substring(). The function should process each email and print the domain.
View Solution
```csharp
public static ReadOnlySpan<char> ExtractDomain(ReadOnlySpan<char> email)
{
    int atIndex = email.IndexOf('@');
    if (atIndex == -1) return ReadOnlySpan<char>.Empty;

    // Slice from the character AFTER '@' to the end
    return email.Slice(atIndex + 1);
}

// Usage
string[] emails =
{
    "alice@gmail.com", "bob@outlook.com", "charlie@company.org",
    "diana@university.edu", "eve@startup.io", "frank@enterprise.com",
    "grace@research.net", "henry@design.co", "iris@consulting.biz",
    "jack@engineering.dev"
};

foreach (string email in emails)
{
    ReadOnlySpan<char> domain = ExtractDomain(email.AsSpan());

    // Print without allocating a new string
    Console.Write("Domain: ");
    foreach (char c in domain) Console.Write(c);
    Console.WriteLine();
}

// Output:
// Domain: gmail.com
// Domain: outlook.com
// Domain: company.org
// ... etc.

// NOTE: If you need the domain as a string (e.g., for a dictionary key),
// you can call .ToString() on the span — but that DOES allocate.
// The power of Span is when you can stay in "span world" for the
// entire operation without materializing strings.
```
Key points: IndexOf and Slice on ReadOnlySpan<char> are both zero-allocation operations. The entire pipeline processes all 10 emails without creating a single new string object on the heap. In a real log-processing or data-parsing scenario, this can reduce GC pressure by orders of magnitude.
Where to Go From Here
You've now covered the complete arc from foundational interfaces through high-performance modern C#. Here are some recommended next steps to deepen your practice:
- Benchmark your code using BenchmarkDotNet — it's the gold standard for .NET micro-benchmarks and will reveal surprising performance truths about your assumptions.
- Explore System.IO.Pipelines — the next evolution beyond Span<T> for high-throughput I/O processing (used internally by Kestrel, the ASP.NET Core web server).
- Practice LeetCode problems focusing on Dictionary, HashSet, two-pointer techniques, and sliding window — these map directly to the data structures covered in this course.
- Read EF Core's generated SQL — enable logging on every query you write and verify the SQL matches your expectations. This habit alone prevents most N+1 issues.