TDD Fundamentals - Comprehensive Study Notes
- Red-Green-Refactor Cycle
- TDD vs Test-First vs Test-Last
- Unit Testing with xUnit in .NET
- Arrange-Act-Assert (AAA) Pattern
- Mocking with Moq and NSubstitute
- Test Doubles (Stubs, Mocks, Fakes, Spies, Dummies)
- Integration Testing in ASP.NET Core (WebApplicationFactory)
- Testing Async Code
- Code Coverage Strategies
- BDD with SpecFlow and Gherkin
- Testing Anti-Patterns (Brittle Tests, Testing Implementation Details)
- Additional Interview Questions
- Summary
"TDD is not about testing. It is about design, confidence, and sustainable pace."
These notes cover the full landscape of Test-Driven Development and testing practices in .NET, from foundational cycles to advanced techniques. Use them to prepare for senior-level interview discussions where you must demonstrate depth, not just definitions.
Related: TDD Index | Testing Strategies | Dependency Inversion (DIP)
Red-Green-Refactor Cycle
The core loop of TDD is three discrete steps repeated in tight iterations:
- Red: Write a small test that describes the next increment of behavior. Run it. It must fail for the right reason (not a compile error or wrong assertion).
- Green: Write the simplest code that makes the test pass. Do not over-engineer. Hardcoding a return value is acceptable if the test allows it.
- Refactor: Improve the structure of production and test code while keeping all tests green. Extract methods, rename variables, remove duplication.
RED (write a failing test) --> GREEN (write just enough code to pass) --> REFACTOR (improve design, remove duplication; tests stay green) --> repeat
// === STEP 1: RED - Write a test for a calculator that adds two numbers ===
[Fact]
public void Add_TwoPositiveNumbers_ReturnsSum()
{
var calc = new Calculator();
var result = calc.Add(2, 3);
Assert.Equal(5, result);
}
// This fails because Calculator does not exist yet.
// === STEP 2: GREEN - Simplest implementation ===
public class Calculator
{
public int Add(int a, int b) => a + b;
}
// Test passes. Move on.
// === STEP 3: REFACTOR - Nothing to improve yet. Add next behavior: ===
// RED: new test
[Fact]
public void Add_NegativeAndPositive_ReturnsCorrectSum()
{
var calc = new Calculator();
Assert.Equal(-1, calc.Add(-3, 2));
}
// GREEN: already passes with current implementation โ no new code needed.
// RED: drive out subtraction
[Fact]
public void Subtract_TwoNumbers_ReturnsDifference()
{
var calc = new Calculator();
Assert.Equal(4, calc.Subtract(7, 3));
}
// GREEN: add Subtract
public class Calculator
{
public int Add(int a, int b) => a + b;
public int Subtract(int a, int b) => a - b;
}
// REFACTOR: extract common pattern if needed, rename for clarity, etc.
A: Red means writing a failing test first to define expected behavior. Green means writing the minimum code to pass. Refactor means improving design while tests stay green. The order matters because Red ensures you write only necessary code, Green prevents over-engineering, and Refactor maintains quality. Skipping any phase leads to either untested code, gold-plating, or accumulating technical debt.
A: Writing too large a test in the Red phase (covering multiple behaviors at once), skipping the Green phase and jumping to the "real" implementation immediately, skipping Refactor entirely so passing tests accumulate over messy code, and refactoring while tests are red, which removes the safety net.
A: Each test should describe a single behavioral increment. If you find yourself writing a test that requires implementing more than a few lines of production code, the step is too large. Break it down. A useful heuristic: each Red-Green cycle should take 1-5 minutes.
A: Yes. If the test only checks one specific case, hardcoding is the simplest code that passes. The next test will force you to generalize. This is called "Triangulation": using multiple examples to drive out the real algorithm. It keeps you honest about not writing code that is not justified by a test.
// First test: Add(2, 3) == 5
// Green (hardcoded):
public int Add(int a, int b) => 5; // passes!
// Second test forces generalization: Add(1, 1) == 2
// Green (real):
public int Add(int a, int b) => a + b; // now it must be general
TDD vs Test-First vs Test-Last
These three approaches differ in when tests are written and how they influence design.
| Approach | When Tests Are Written | Design Influence | Feedback Speed |
|---|---|---|---|
| Test-Driven (TDD) | Before production code, one behavior at a time | High - tests shape the API and dependencies | Immediate |
| Test-First | Before production code, but often in larger batches | Moderate - tests verify a pre-planned design | Fast |
| Test-Last | After production code is complete | Low - tests are retrofitted onto existing design | Delayed |
A: In TDD, tests are written one at a time and each test drives a single behavior increment. The design emerges from the pressure of making code testable. In Test-First, you write a suite of tests before implementing, often based on a specification; it is less iterative. TDD gives tighter feedback loops. Test-First risks writing tests that reflect assumptions that change during implementation.
A: Test-Last is acceptable for exploratory work, spikes, or prototypes where the design is highly uncertain. It is also common in legacy codebases where retrofitting tests is the only practical option. The risk is that code written without tests in mind is often hard to test (tight coupling, static dependencies, hidden state). The key is that you still ship code with high-confidence test coverage regardless of the order.
A: TDD creates strong pressure toward dependency injection, loose coupling, small functions, and clear interfaces. Code that is hard to test (static dependencies, deep inheritance, hidden global state) resists TDD. Over time, TDD pushes you toward the same design principles (SOLID, DI, composition over inheritance) that lead to maintainable architectures.
// Without TDD pressure - hard to test:
public class OrderProcessor
{
public void Process(Order order)
{
var db = new SqlConnection("connstring"); // hidden dependency
var now = DateTime.UtcNow; // non-deterministic
Logger.Log("Processing"); // static call
}
}
// With TDD pressure - testable:
public class OrderProcessor
{
private readonly IOrderRepository _repo;
private readonly TimeProvider _clock;
private readonly ILogger<OrderProcessor> _logger;
public OrderProcessor(IOrderRepository repo, TimeProvider clock, ILogger<OrderProcessor> logger)
{
_repo = repo;
_clock = clock;
_logger = logger;
}
public async Task ProcessAsync(Order order, CancellationToken ct)
{
order.ProcessedAt = _clock.GetUtcNow();
await _repo.SaveAsync(order, ct);
_logger.LogInformation("Processed order {OrderId}", order.Id);
}
}
Unit Testing with xUnit in .NET
A: xUnit is the most widely used test framework in modern .NET. [Fact] marks a test with no parameters. [Theory] with [InlineData] creates parameterized / table-driven tests. The constructor runs before each test (replaces [SetUp]). IDisposable.Dispose() runs after each test (replaces [TearDown]). IClassFixture<T> shares expensive setup across tests in a class. ICollectionFixture<T> shares setup across multiple test classes. Tests run in parallel by default.
public class OrderValidatorTests
{
// [Fact] marks a test with no parameters
[Fact]
public void Validate_NullOrder_ThrowsArgumentNullException()
{
var validator = new OrderValidator();
Assert.Throws<ArgumentNullException>(() => validator.Validate(null!));
}
// [Theory] + [InlineData] for parameterized / table-driven tests
[Theory]
[InlineData(0, false)]
[InlineData(-1, false)]
[InlineData(100, true)]
[InlineData(1, true)]
public void Validate_Amount_ReturnsExpected(decimal amount, bool expected)
{
var order = new Order { Amount = amount };
var validator = new OrderValidator();
var result = validator.Validate(order);
Assert.Equal(expected, result);
}
}
Q: What is the difference between [Fact] and [Theory] in xUnit?
A: [Fact] is a test that takes no parameters and runs once. [Theory] is a parameterized test that runs once per data set. Data is supplied via [InlineData] (inline values), [MemberData] (a method or property returning IEnumerable<object[]>), or [ClassData] (a class implementing IEnumerable<object[]>). Use [Theory] when you want to test the same logic with many different inputs.
// [MemberData] example - more complex data than InlineData can handle
public class DiscountCalculatorTests
{
public static IEnumerable<object[]> DiscountScenarios =>
new List<object[]>
{
new object[] { CustomerType.Regular, 100m, 5m },
new object[] { CustomerType.Premium, 100m, 20m },
new object[] { CustomerType.VIP, 100m, 30m },
};
[Theory]
[MemberData(nameof(DiscountScenarios))]
public void CalculateDiscount_ReturnsExpected(
CustomerType type, decimal price, decimal expectedDiscount)
{
var calc = new DiscountCalculator();
var result = calc.Calculate(type, price);
Assert.Equal(expectedDiscount, result);
}
}
A: xUnit uses the constructor for per-test setup and IDisposable.Dispose() for per-test teardown. For shared expensive resources (like a database connection), implement IClassFixture<T>; the fixture is created once and shared across all tests in the class. For sharing across multiple classes, use ICollectionFixture<T> with a [Collection] attribute. This is more explicit than NUnit's [SetUp]/[TearDown] attributes.
// Per-test setup via constructor
public class AccountServiceTests : IDisposable
{
private readonly AccountService _sut;
private readonly FakeAccountRepository _repo;
public AccountServiceTests()
{
// Runs before EACH test
_repo = new FakeAccountRepository();
_sut = new AccountService(_repo);
}
[Fact]
public void Deposit_PositiveAmount_IncreasesBalance()
{
_sut.Deposit("acct-1", 100m);
Assert.Equal(100m, _repo.GetBalance("acct-1"));
}
public void Dispose()
{
// Runs after EACH test - clean up resources
_repo.Clear();
}
}
// Shared fixture across all tests in the class
public class DatabaseFixture : IAsyncLifetime
{
public AppDbContext DbContext { get; private set; } = null!;
public async Task InitializeAsync()
{
var options = new DbContextOptionsBuilder<AppDbContext>()
.UseInMemoryDatabase($"TestDb-{Guid.NewGuid()}")
.Options;
DbContext = new AppDbContext(options);
await DbContext.Database.EnsureCreatedAsync();
}
public async Task DisposeAsync()
{
await DbContext.DisposeAsync();
}
}
public class ProductRepositoryTests : IClassFixture<DatabaseFixture>
{
private readonly AppDbContext _db;
public ProductRepositoryTests(DatabaseFixture fixture)
{
_db = fixture.DbContext;
}
[Fact]
public async Task Add_Product_PersistsToDatabase()
{
_db.Products.Add(new Product("Widget", 9.99m));
await _db.SaveChangesAsync();
var count = await _db.Products.CountAsync();
Assert.True(count >= 1);
}
}
| Feature | xUnit | NUnit |
|---|---|---|
| Test method attribute | [Fact], [Theory] | [Test], [TestCase] |
| Setup/teardown | Constructor / IDisposable | [SetUp] / [TearDown] |
| Parameterized tests | [InlineData], [MemberData], [ClassData] | [TestCase], [TestCaseSource] |
| Parallel by default | Yes | No (opt-in) |
| Assertion style | Assert.Equal, Assert.Throws | Assert.That (constraint model) |
| Shared fixtures | IClassFixture<T> | [OneTimeSetUp] / [OneTimeTearDown] |
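To share one fixture instance across multiple test classes, xUnit pairs ICollectionFixture<T> with a named collection. A minimal sketch, reusing the DatabaseFixture defined earlier (the collection name and test class are illustrative):

```csharp
// The definition class is never instantiated; it only binds the
// collection name to the fixture type.
[CollectionDefinition("Database collection")]
public class DatabaseCollection : ICollectionFixture<DatabaseFixture> { }

// Every class tagged with the collection receives the same fixture
// instance, and xUnit does not run these classes in parallel with
// each other.
[Collection("Database collection")]
public class OrderQueryTests
{
    private readonly AppDbContext _db;
    public OrderQueryTests(DatabaseFixture fixture) => _db = fixture.DbContext;
}
```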
A: Both FluentAssertions and Shouldly provide fluent, readable assertion syntax:
// FluentAssertions
result.Should().Be(5);
order.Should().NotBeNull();
action.Should().Throw<InvalidOperationException>()
.WithMessage("*insufficient*");
// Shouldly
result.ShouldBe(5);
order.ShouldNotBeNull();
Should.Throw<InvalidOperationException>(() => action());
Arrange-Act-Assert (AAA) Pattern
AAA is the standard structure for unit tests. Each test has three clearly separated phases.
A: AAA structures tests into setup (Arrange), invoking the behavior (Act), and verifying outcomes (Assert). Violating it (multiple Act steps, assertions mixed with arrangement, or conditional logic) makes tests harder to read, debug, and maintain. When a test fails, AAA makes it immediately clear whether the problem is in setup, execution, or the assertion itself.
[Fact]
public async Task TransferFunds_SufficientBalance_DebitsCreditsBothAccounts()
{
// Arrange - set up the system under test and its dependencies
var sourceAccount = new Account("src", balance: 500m);
var targetAccount = new Account("tgt", balance: 100m);
var repo = new InMemoryAccountRepository(sourceAccount, targetAccount);
var sut = new TransferService(repo);
// Act - invoke the behavior under test
await sut.TransferAsync("src", "tgt", amount: 200m);
// Assert - verify the expected outcome
var updatedSource = await repo.GetAsync("src");
var updatedTarget = await repo.GetAsync("tgt");
updatedSource.Balance.ShouldBe(300m);
updatedTarget.Balance.ShouldBe(300m);
}
A: One Act per test: multiple Act steps mean you are testing multiple behaviors, so split them. Minimize Arrange: use builders or AutoFixture to reduce boilerplate. Explicit Assert: assert on the outcome that matters, not on implementation details. No conditional logic in tests: tests should be linear with no if, switch, or loops.
A: Use names that describe the scenario and expected outcome:
MethodUnderTest_Scenario_ExpectedBehavior
// or
Given_Scenario_When_Action_Then_ExpectedResult
Examples:
- Withdraw_InsufficientFunds_ThrowsOverdraftException
- CalculateDiscount_PremiumCustomer_Returns20Percent
- GivenExpiredToken_WhenAuthenticate_ThenReturnsUnauthorized
A: Use the Builder pattern or AutoFixture to construct objects with sensible defaults and override only what matters for each test. Centralize builders alongside the domain model so they evolve together.
public class OrderBuilder
{
private string _id = "default-id";
private decimal _amount = 100m;
private OrderStatus _status = OrderStatus.Pending;
public OrderBuilder WithId(string id) { _id = id; return this; }
public OrderBuilder WithAmount(decimal amount) { _amount = amount; return this; }
public OrderBuilder WithStatus(OrderStatus status) { _status = status; return this; }
public Order Build() => new Order(_id, _amount, _status);
}
// Usage in tests - only override what matters
[Fact]
public void Validate_ZeroAmount_ReturnsFalse()
{
var order = new OrderBuilder().WithAmount(0).Build();
var validator = new OrderValidator();
var result = validator.Validate(order);
result.ShouldBeFalse();
}Mocking with Moq and NSubstitute
Mocking frameworks create test doubles for interfaces so you can isolate the system under test.
A: Moq creates mock objects from interfaces. Use Setup() to configure return values (stubs) and Verify() to assert interactions (mocks). The .Object property gives you the concrete instance to inject.
public class NotificationServiceTests
{
private readonly Mock<IEmailSender> _emailSender = new();
private readonly Mock<IUserRepository> _userRepo = new();
private readonly NotificationService _sut;
public NotificationServiceTests()
{
_sut = new NotificationService(_emailSender.Object, _userRepo.Object);
}
[Fact]
public async Task NotifyUser_ActiveUser_SendsEmail()
{
// Arrange - stub the repository
_userRepo.Setup(r => r.GetByIdAsync("u1"))
.ReturnsAsync(new User("u1", "alice@test.com", isActive: true));
// Act
await _sut.NotifyAsync("u1", "Hello");
// Assert - verify the email sender was called
_emailSender.Verify(
s => s.SendAsync("alice@test.com", "Hello", It.IsAny<CancellationToken>()),
Times.Once);
}
[Fact]
public async Task NotifyUser_InactiveUser_DoesNotSendEmail()
{
_userRepo.Setup(r => r.GetByIdAsync("u2"))
.ReturnsAsync(new User("u2", "bob@test.com", isActive: false));
await _sut.NotifyAsync("u2", "Hello");
_emailSender.Verify(
s => s.SendAsync(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<CancellationToken>()),
Times.Never);
}
}
A: Setup() / ReturnsAsync() configure return values. Verify() / Times assert that methods were called with expected arguments. It.IsAny<T>() and It.Is<T>(predicate) are argument matchers. Callback() captures arguments for deeper inspection. MockBehavior.Strict throws on unexpected calls (use sparingly, since it creates brittle tests).
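A short sketch of predicate matchers and strict behavior, reusing the Mock<IEmailSender> and Mock<IUserRepository> fields from the NotificationServiceTests example above (the argument values are illustrative):

```csharp
// It.Is<T> verifies against a predicate instead of matching any value.
_emailSender.Verify(
    s => s.SendAsync(
        It.Is<string>(to => to.EndsWith("@test.com")), // predicate matcher
        It.IsAny<string>(),
        It.IsAny<CancellationToken>()),
    Times.Once);

// MockBehavior.Strict makes any call without a matching Setup throw a
// MockException, so every expected interaction must be declared up front.
var strictRepo = new Mock<IUserRepository>(MockBehavior.Strict);
strictRepo.Setup(r => r.GetByIdAsync("u1"))
          .ReturnsAsync(new User("u1", "alice@test.com", isActive: true));
// Any call on strictRepo.Object other than GetByIdAsync("u1") now throws.
```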
A: NSubstitute uses a more natural syntax without .Object indirection:
public class NotificationServiceNSubTests
{
private readonly IEmailSender _emailSender = Substitute.For<IEmailSender>();
private readonly IUserRepository _userRepo = Substitute.For<IUserRepository>();
private readonly NotificationService _sut;
public NotificationServiceNSubTests()
{
_sut = new NotificationService(_emailSender, _userRepo);
}
[Fact]
public async Task NotifyUser_ActiveUser_SendsEmail()
{
// Arrange
_userRepo.GetByIdAsync("u1")
.Returns(new User("u1", "alice@test.com", isActive: true));
// Act
await _sut.NotifyAsync("u1", "Hello");
// Assert
await _emailSender.Received(1)
.SendAsync("alice@test.com", "Hello", Arg.Any<CancellationToken>());
}
[Fact]
public async Task NotifyUser_InactiveUser_DoesNotSendEmail()
{
_userRepo.GetByIdAsync("u2")
.Returns(new User("u2", "bob@test.com", isActive: false));
await _sut.NotifyAsync("u2", "Hello");
await _emailSender.DidNotReceive()
.SendAsync(Arg.Any<string>(), Arg.Any<string>(), Arg.Any<CancellationToken>());
}
}
| Feature | Moq | NSubstitute |
|---|---|---|
| Create mock | new Mock<T>() | Substitute.For<T>() |
| Access mock instance | .Object | direct reference |
| Setup return | .Setup().Returns() | .Returns() directly |
| Verify call | .Verify(expr, Times.Once) | .Received(1).Method() |
| Argument matching | It.IsAny<T>() | Arg.Any<T>() |
| Syntax feel | Explicit and verbose | Concise and natural |
A: NSubstitute has a more concise, natural syntax without the .Object indirection. It is easier for teams to read and onboard onto. Moq offers MockBehavior.Strict and more granular verification options. I choose NSubstitute for greenfield projects where readability is paramount and Moq when I need strict verification or am joining a team already using it.
A: Use the Callback() method to capture arguments passed to a mocked method:
[Fact]
public async Task ProcessOrder_SetsTimestampBeforeSaving()
{
Order? capturedOrder = null;
var repo = new Mock<IOrderRepository>();
repo.Setup(r => r.SaveAsync(It.IsAny<Order>()))
.Callback<Order>(order => capturedOrder = order)
.Returns(Task.CompletedTask);
var sut = new OrderProcessor(repo.Object);
await sut.ProcessAsync(new Order("o1", 100m));
capturedOrder.ShouldNotBeNull();
capturedOrder.ProcessedAt.ShouldNotBe(default);
capturedOrder.ProcessedAt.ShouldBeLessThanOrEqualTo(DateTimeOffset.UtcNow);
}
Test Doubles (Stubs, Mocks, Fakes, Spies, Dummies)
Understanding the taxonomy of test doubles is a common interview topic. Each serves a different purpose.
A: A stub provides canned answers to calls made during a test; it controls indirect inputs. A mock verifies that specific interactions occurred; it checks indirect outputs. In Moq terms, Setup().Returns() creates a stub; Verify() creates a mock. The distinction matters because over-reliance on mocks couples tests to implementation, while stubs keep tests focused on outcomes.
A: A Dummy is an object passed to satisfy a parameter but never actually used:
// The logger is required by the constructor but irrelevant to this test
var dummyLogger = new Mock<ILogger<OrderService>>().Object;
var sut = new OrderService(realRepo, dummyLogger);
A: A Stub returns predetermined data. It controls indirect inputs to the system under test:
// Stub: always returns a fixed exchange rate
var stubRateProvider = new Mock<IExchangeRateProvider>();
stubRateProvider.Setup(r => r.GetRate("USD", "EUR")).Returns(0.85m);
var sut = new CurrencyConverter(stubRateProvider.Object);
var result = sut.Convert(100m, "USD", "EUR");
Assert.Equal(85m, result);
A: A Mock verifies that specific interactions occurred. It focuses on behavior verification:
// Mock: verify that the audit log was written
var mockAuditLog = new Mock<IAuditLog>();
var sut = new PaymentProcessor(mockAuditLog.Object);
sut.ProcessPayment(payment);
mockAuditLog.Verify(a => a.RecordAsync(
It.Is<AuditEntry>(e => e.Action == "PaymentProcessed")),
Times.Once);
A: A Spy records calls for later inspection. Useful when you need to verify complex interaction sequences:
// Manual spy implementation
public class SpyEmailSender : IEmailSender
{
public List<(string To, string Body)> SentEmails { get; } = new();
public Task SendAsync(string to, string body, CancellationToken ct)
{
SentEmails.Add((to, body));
return Task.CompletedTask;
}
}
// Usage
[Fact]
public async Task NotifyAll_SendsEmailToEachActiveUser()
{
var spy = new SpyEmailSender();
var users = new[] {
new User("u1", "alice@test.com", isActive: true),
new User("u2", "bob@test.com", isActive: true)
};
var sut = new NotificationService(spy, new FakeUserRepository(users));
await sut.NotifyAllAsync("Welcome");
Assert.Equal(2, spy.SentEmails.Count);
Assert.Equal("alice@test.com", spy.SentEmails[0].To);
Assert.Equal("bob@test.com", spy.SentEmails[1].To);
}
A: A Fake is a working implementation with simplified behavior. Not suitable for production but functionally correct for testing:
// Fake: in-memory repository with real collection behavior
public class FakeOrderRepository : IOrderRepository
{
private readonly Dictionary<string, Order> _store = new();
public Task SaveAsync(Order order)
{
_store[order.Id] = order;
return Task.CompletedTask;
}
public Task<Order?> GetByIdAsync(string id)
{
_store.TryGetValue(id, out var order);
return Task.FromResult(order);
}
public Task<IReadOnlyList<Order>> GetAllAsync()
=> Task.FromResult<IReadOnlyList<Order>>(_store.Values.ToList());
}
| Double | Purpose | Verifies behavior? | Has logic? |
|---|---|---|---|
| Dummy | Fill parameters | No | No |
| Stub | Provide canned answers | No | Minimal |
| Mock | Verify interactions | Yes | No |
| Spy | Record interactions | Yes (after the fact) | Minimal |
| Fake | Lightweight substitute | No (but could) | Yes |
Integration Testing in ASP.NET Core (WebApplicationFactory)
Q: What is WebApplicationFactory and when do you use it?
A: WebApplicationFactory<TEntryPoint> boots your ASP.NET Core application in-memory with the real middleware pipeline, DI container, and routing. You use it for integration tests that verify HTTP behavior end-to-end without a real network. You can swap specific services (database, external APIs) in ConfigureServices while keeping everything else production-like. This catches wiring bugs that unit tests miss.
public class ProductsApiTests : IClassFixture<WebApplicationFactory<Program>>
{
private readonly HttpClient _client;
private readonly WebApplicationFactory<Program> _factory;
public ProductsApiTests(WebApplicationFactory<Program> factory)
{
_factory = factory;
_client = factory.WithWebHostBuilder(builder =>
{
builder.ConfigureServices(services =>
{
// Replace real database with in-memory
services.RemoveAll<DbContextOptions<AppDbContext>>();
services.AddDbContext<AppDbContext>(opts =>
opts.UseInMemoryDatabase("TestDb"));
// Replace external HTTP client with a stub
services.RemoveAll<IPaymentGateway>();
services.AddSingleton<IPaymentGateway>(new StubPaymentGateway());
});
}).CreateClient();
}
[Fact]
public async Task GetProducts_ReturnsOkWithJsonArray()
{
var response = await _client.GetAsync("/api/products");
response.StatusCode.ShouldBe(HttpStatusCode.OK);
var products = await response.Content.ReadFromJsonAsync<List<ProductDto>>();
products.ShouldNotBeNull();
}
[Fact]
public async Task CreateProduct_ValidPayload_Returns201AndLocationHeader()
{
var payload = new { Name = "Widget", Price = 9.99 };
var content = JsonContent.Create(payload);
var response = await _client.PostAsync("/api/products", content);
response.StatusCode.ShouldBe(HttpStatusCode.Created);
response.Headers.Location.ShouldNotBeNull();
}
}
A: Subclass WebApplicationFactory<Program> and override ConfigureWebHost to swap services once for all tests that use the factory:
public class CustomApiFactory : WebApplicationFactory<Program>
{
protected override void ConfigureWebHost(IWebHostBuilder builder)
{
builder.UseEnvironment("Testing");
builder.ConfigureServices(services =>
{
// Swap real dependencies for test doubles
services.RemoveAll<DbContextOptions<AppDbContext>>();
services.AddDbContext<AppDbContext>(opts =>
opts.UseInMemoryDatabase($"TestDb-{Guid.NewGuid()}"));
services.RemoveAll<IMessageBus>();
services.AddSingleton<IMessageBus, FakeMessageBus>();
});
}
}
// Usage โ all tests in this class share the same factory
public class OrdersApiTests : IClassFixture<CustomApiFactory>
{
private readonly HttpClient _client;
public OrdersApiTests(CustomApiFactory factory)
{
_client = factory.CreateClient();
}
[Fact]
public async Task PlaceOrder_ValidOrder_ReturnsAccepted()
{
var order = new { Symbol = "AAPL", Quantity = 10, Side = "Buy" };
var response = await _client.PostAsJsonAsync("/api/orders", order);
response.StatusCode.ShouldBe(HttpStatusCode.Accepted);
}
}
A: TestServer gives you lower-level control when you need to test middleware or handlers directly:
[Fact]
public async Task RateLimitingMiddleware_ExceedsLimit_Returns429()
{
using var host = await new HostBuilder()
.ConfigureWebHost(builder =>
{
builder.UseTestServer();
builder.ConfigureServices(services =>
{
services.AddRateLimiting(opts => opts.MaxRequestsPerMinute = 2);
});
builder.Configure(app =>
{
app.UseRateLimiting();
app.MapGet("/", () => "OK");
});
})
.StartAsync();
var client = host.GetTestClient();
// First two requests succeed
(await client.GetAsync("/")).StatusCode.ShouldBe(HttpStatusCode.OK);
(await client.GetAsync("/")).StatusCode.ShouldBe(HttpStatusCode.OK);
// Third request is rate-limited
(await client.GetAsync("/")).StatusCode.ShouldBe(HttpStatusCode.TooManyRequests);
}
A: When in-memory fakes are insufficient, use Testcontainers to spin up real infrastructure in Docker:
public class PostgresIntegrationTests : IAsyncLifetime
{
private readonly PostgreSqlContainer _postgres = new PostgreSqlBuilder()
.WithImage("postgres:16-alpine")
.Build();
public async Task InitializeAsync() => await _postgres.StartAsync();
public async Task DisposeAsync() => await _postgres.DisposeAsync();
[Fact]
public async Task Repository_SaveAndRetrieve_RoundTrips()
{
var options = new DbContextOptionsBuilder<AppDbContext>()
.UseNpgsql(_postgres.GetConnectionString())
.Options;
await using var ctx = new AppDbContext(options);
await ctx.Database.MigrateAsync();
ctx.Products.Add(new Product("Widget", 9.99m));
await ctx.SaveChangesAsync();
var loaded = await ctx.Products.FirstAsync();
loaded.Name.ShouldBe("Widget");
}
}
A: Integration tests verify that components work together within the application boundary (DI wiring, middleware, database access), typically using WebApplicationFactory or Testcontainers with swapped external dependencies. End-to-end tests verify complete user journeys across the entire deployed system including real external services, UI, and infrastructure. Integration tests run in seconds; E2E tests run in minutes and are more prone to flakiness.
Testing Async Code
A: Use async Task test methods with await instead of .Result or .Wait(), which can deadlock. For timeouts, use CancellationTokenSource with pre-cancellation or short timeouts. Avoid Task.Delay for synchronization; use TaskCompletionSource or SemaphoreSlim to signal between test and production code deterministically. Mock async dependencies with ReturnsAsync() or .Returns(Task.FromResult(...)).
// Basic async test
[Fact]
public async Task GetUserAsync_ExistingId_ReturnsUser()
{
// Arrange
var repo = new Mock<IUserRepository>();
repo.Setup(r => r.GetByIdAsync("u1"))
.ReturnsAsync(new User("u1", "Alice"));
var sut = new UserService(repo.Object);
// Act
var user = await sut.GetUserAsync("u1");
// Assert
user.ShouldNotBeNull();
user.Name.ShouldBe("Alice");
}A: Use Assert.ThrowsAsync<T>() which properly awaits the task and captures the exception:
[Fact]
public async Task GetUserAsync_NonExistentId_ThrowsNotFoundException()
{
var repo = new Mock<IUserRepository>();
repo.Setup(r => r.GetByIdAsync("missing"))
.ReturnsAsync((User?)null);
var sut = new UserService(repo.Object);
await Assert.ThrowsAsync<NotFoundException>(
() => sut.GetUserAsync("missing"));
}
A: Create a pre-cancelled CancellationTokenSource and verify the code throws OperationCanceledException:
[Fact]
public async Task ProcessAsync_CancellationRequested_ThrowsOperationCanceled()
{
var cts = new CancellationTokenSource();
cts.Cancel(); // pre-cancel
var sut = new DataProcessor();
await Assert.ThrowsAsync<OperationCanceledException>(
() => sut.ProcessAsync(cts.Token));
}
// Also verify that the token is passed through to dependencies
[Fact]
public async Task ProcessAsync_PassesCancellationTokenToRepository()
{
var repo = new Mock<IDataRepository>();
var sut = new DataProcessor(repo.Object);
var cts = new CancellationTokenSource();
await sut.ProcessAsync(cts.Token);
repo.Verify(r => r.LoadAsync(cts.Token), Times.Once);
}
A: Stub the dependency to be slow and assert that the caller enforces its timeout:
[Fact]
public async Task SlowOperation_ExceedsTimeout_ThrowsTimeoutException()
{
var slowDependency = new Mock<IExternalService>();
slowDependency.Setup(s => s.CallAsync(It.IsAny<CancellationToken>()))
.Returns(async (CancellationToken ct) =>
{
await Task.Delay(TimeSpan.FromSeconds(30), ct);
return "result";
});
var sut = new ResilientCaller(slowDependency.Object,
timeout: TimeSpan.FromMilliseconds(100));
await Assert.ThrowsAsync<TimeoutException>(
() => sut.CallWithTimeoutAsync(CancellationToken.None));
}
- Never use .Result or .Wait() in tests; they can deadlock. Use async Task test methods instead.
- Never use async void in test methods; the framework cannot catch exceptions.
- Use ConfigureAwait(false) in library code, but it is unnecessary in test code.
- Avoid Task.Delay for synchronization; use SemaphoreSlim, TaskCompletionSource, or ManualResetEventSlim.
A: Go async all the way; never block on a task, and use deterministic signaling instead of delays:
// BAD: deadlock risk
[Fact]
public void GetUser_Bad_DeadlockRisk()
{
var result = _sut.GetUserAsync("u1").Result; // DEADLOCK in some contexts
Assert.NotNull(result);
}
// GOOD: async all the way
[Fact]
public async Task GetUser_Good_AsyncAllTheWay()
{
var result = await _sut.GetUserAsync("u1");
Assert.NotNull(result);
}
// Using TaskCompletionSource for deterministic synchronization
[Fact]
public async Task BackgroundWorker_ProcessesItemWhenSignaled()
{
var tcs = new TaskCompletionSource<bool>();
var sut = new BackgroundWorker(onComplete: () => tcs.SetResult(true));
sut.Enqueue("item-1");
var completed = await Task.WhenAny(tcs.Task, Task.Delay(5000));
Assert.Equal(tcs.Task, completed); // ensure it completed, not timed out
}
Code Coverage Strategies
A: Code coverage tools (Coverlet, dotCover, OpenCover) report what percentage of code was executed during test runs:
- Line coverage - was each line executed?
- Branch coverage - was each if/else/switch path taken?
- Method coverage - was each method called?
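For example, with the coverlet.msbuild package referenced by the test project, coverage can be collected straight from dotnet test (property names per the Coverlet documentation; the threshold values here are illustrative):

```shell
# Collect coverage during the test run and fail the build
# if branch coverage drops below 70%
dotnet test /p:CollectCoverage=true \
            /p:CoverletOutputFormat=cobertura \
            /p:Threshold=70 \
            /p:ThresholdType=branch
```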
A: Chasing 100% leads to testing trivial code (DTOs, auto-properties), testing framework behavior, and writing shallow tests that execute lines without meaningful assertions. It creates maintenance overhead and false confidence. A better approach is targeting coverage on high-risk code (auth, payments, domain rules), using branch coverage over line coverage, and supplementing with mutation testing.
// This code has 100% line coverage but tests nothing meaningful
[Fact]
public void Constructor_SetsProperties()
{
var dto = new OrderDto { Id = "1", Amount = 50 };
Assert.Equal("1", dto.Id);
Assert.Equal(50, dto.Amount);
}
// Testing auto-properties on a DTO adds maintenance cost without value.
- Target risk, not percentages. Focus coverage on critical paths: authentication, payment processing, domain rules, error handling.
- 70-80% line coverage is a healthy starting point for most teams. The last 20% often requires disproportionate effort.
- Branch coverage matters more than line coverage for complex logic with many conditional paths.
- Use coverage as a diagnostic tool, not a target. Low coverage in a file signals it may be untested. High coverage does not guarantee correctness.
- Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."
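The branch-coverage point is easiest to see with a hypothetical method where a single test executes every line yet exercises only one of the two branches:

```csharp
public static decimal ApplyCap(decimal amount, decimal cap)
{
    var result = amount;
    if (result > cap)
        result = cap; // one test with amount > cap runs every line...
    return result;    // ...but never takes the "not capped" path, so line
                      // coverage is 100% while branch coverage is only 50%
}
```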
A: Mutation testing introduces small changes (mutants) to production code, such as changing > to >= or replacing a return value, and runs your tests. If tests still pass, the mutant survived, revealing a gap in your assertions. The mutation score (killed/total mutants) is a stronger quality metric than line coverage because it measures whether tests actually detect behavioral changes, not just whether code was executed.
// Production code
public decimal CalculateDiscount(decimal price, bool isPremium)
{
if (isPremium)
return price * 0.20m; // 20% discount
return price * 0.05m; // 5% discount
}
// A mutation tool might change 0.20m to 0.21m.
// If no test fails, the mutant survives: you have a gap in your assertions.
// Good tests would catch this:
[Theory]
[InlineData(100, true, 20)] // catches mutation of 0.20m
[InlineData(100, false, 5)] // catches mutation of 0.05m
public void CalculateDiscount_ReturnsExactDiscount(
decimal price, bool isPremium, decimal expected)
{
var sut = new PriceService();
sut.CalculateDiscount(price, isPremium).ShouldBe(expected);
}
- Mutation score (Stryker.NET) is a stronger indicator of test quality.
- Coverage of changed lines in PRs is more actionable than overall project coverage.
- Test-to-code ratio combined with defect rates gives a more holistic view.
A: Stryker.NET configuration example:
{
"stryker-config": {
"project": "MyApp.csproj",
"test-projects": ["MyApp.Tests.csproj"],
"reporters": ["html", "progress"],
"mutate": ["src/**/*.cs"],
"thresholds": {
"high": 80,
"low": 60,
"break": 50
}
}
}
BDD with SpecFlow and Gherkin
A: BDD uses Gherkin (Given/When/Then) scenarios written in natural language, making them readable by non-technical stakeholders. Step definitions bind scenarios to code. BDD excels when requirements are complex and involve business collaboration. The key difference is that BDD scenarios serve as living documentation of business rules, while unit tests focus on technical correctness.
Feature: Account Transfers
As a bank customer
I want to transfer funds between accounts
So that I can manage my money conveniently
Scenario: Successful transfer with sufficient balance
Given an account "A" with balance $500
And an account "B" with balance $100
When I transfer $200 from "A" to "B"
Then account "A" should have balance $300
And account "B" should have balance $300
Scenario: Transfer fails with insufficient balance
Given an account "A" with balance $50
And an account "B" with balance $100
When I transfer $200 from "A" to "B"
Then the transfer should fail with "Insufficient funds"
And account "A" should have balance $50
And account "B" should have balance $100
Scenario Outline: Various transfer amounts
Given an account "A" with balance $<start>
When I transfer $<amount> from "A" to "B"
Then account "A" should have balance $<remaining>
Examples:
| start | amount | remaining |
| 1000 | 100 | 900 |
| 500 | 500 | 0 |
| 300 | 50 | 250 |
A: Step definitions bind each Gherkin step to code:
[Binding]
public class TransferSteps
{
// The Given/Then steps must share state with the service under test,
// so both use the same in-memory repository instance.
// (Add/GetByName are illustrative members of the in-memory fake.)
private readonly InMemoryAccountRepository _repo = new();
private readonly TransferService _sut;
private Exception? _caughtException;
public TransferSteps() => _sut = new TransferService(_repo);
[Given(@"an account ""(.*)"" with balance \$(.*)")]
public void GivenAnAccountWithBalance(string name, decimal balance)
{
_repo.Add(new Account(name, balance));
}
[When(@"I transfer \$(.*) from ""(.*)"" to ""(.*)""")]
public async Task WhenITransfer(decimal amount, string from, string to)
{
try
{
await _sut.TransferAsync(from, to, amount);
}
catch (Exception ex)
{
_caughtException = ex;
}
}
[Then(@"account ""(.*)"" should have balance \$(.*)")]
public void ThenAccountShouldHaveBalance(string name, decimal expected)
{
_repo.GetByName(name).Balance.ShouldBe(expected);
}
[Then(@"the transfer should fail with ""(.*)""")]
public void ThenTheTransferShouldFail(string message)
{
_caughtException.ShouldNotBeNull();
_caughtException.Message.ShouldContain(message);
}
}
A: BDD adds value when requirements are complex and involve multiple stakeholders, when business rules change frequently and need living documentation, and when QA teams write or review scenarios in plain language. BDD adds overhead for small teams where developers own the full stack (the Gherkin layer may be redundant), for highly technical/infrastructure code where scenarios feel forced, and when step definitions become a maintenance burden larger than the tests themselves.
Testing Anti-Patterns (Brittle Tests, Testing Implementation Details)
A: The Liar (tests with no meaningful assertions), brittle tests that verify implementation details, the Giant (one test covering too many behaviors), excessive setup indicating too many dependencies, shared mutable state causing order-dependent failures, and copy-paste tests that become a maintenance burden. The cure is testing behavior over implementation, keeping tests focused, and using builders/fixtures to reduce duplication.
A: A brittle test breaks when you refactor production code without changing behavior. The most common cause is testing implementation details rather than observable outcomes:
// BAD: testing the exact method call sequence - any refactoring breaks these tests
mockRepo.Verify(r => r.OpenConnection(), Times.Once);
mockRepo.Verify(r => r.BeginTransaction(), Times.Once);
mockRepo.Verify(r => r.SaveAsync(order), Times.Once);
mockRepo.Verify(r => r.CommitTransaction(), Times.Once);
mockRepo.Verify(r => r.CloseConnection(), Times.Once);
// BETTER: test the observable outcome
var savedOrder = await repo.GetByIdAsync(order.Id);
savedOrder.ShouldNotBeNull();
savedOrder.Status.ShouldBe(OrderStatus.Confirmed);
A: Testing implementation details also shows up as asserting on private state rather than public behavior:
// BAD: asserting on an internal data structure
// (reaching a private field via reflection or InternalsVisibleTo)
Assert.Equal(3, sut._internalCache.Count); // couples the test to internals
// BETTER: assert on public behavior
var result = sut.GetAllCachedItems();
result.Count.ShouldBe(3);
// BAD: verifying that a specific private method was called
// (this test will break if you rename or restructure the private method)
// BETTER: verify the external effect of calling the public method
A: An inverted test pyramid with many end-to-end tests and few unit tests. It results in slow feedback, flaky CI, and hard-to-diagnose failures:
Correct (pyramid):            Anti-pattern (ice cream cone):

        /\      E2E             __________    E2E
       /  \                    |__________|
      /    \    Integration    |          |   Integration
     /______\                  |__________|
    /        \  Unit               |  |       Unit
   /__________\                    |__|

A: A test that passes but does not actually verify behavior. The assertions are missing or too weak:
// BAD: The Liar โ test passes but verifies nothing
[Fact]
public async Task ProcessOrder_DoesNotThrow()
{
var sut = new OrderProcessor(new Mock<IOrderRepository>().Object);
await sut.ProcessAsync(new Order("o1", 100m));
// No assertions! This always passes even if the code is wrong.
}
// GOOD: actually verify the outcome
[Fact]
public async Task ProcessOrder_ValidOrder_PersistsWithConfirmedStatus()
{
var repo = new FakeOrderRepository();
var sut = new OrderProcessor(repo);
await sut.ProcessAsync(new Order("o1", 100m));
var saved = await repo.GetByIdAsync("o1");
saved.ShouldNotBeNull();
saved.Status.ShouldBe(OrderStatus.Confirmed);
}
A: Verifying behavior you configured on the mock rather than on the system under test:
// BAD: you are testing Moq, not your code
var mock = new Mock<ICalculator>();
mock.Setup(c => c.Add(2, 3)).Returns(5);
Assert.Equal(5, mock.Object.Add(2, 3)); // This tests Moq itself
// GOOD: test your code that USES the calculator
var calc = new Mock<ICalculator>();
calc.Setup(c => c.Add(It.IsAny<int>(), It.IsAny<int>())).Returns(10);
var sut = new InvoiceService(calc.Object);
var invoice = sut.CalculateTotal(items);
invoice.Total.ShouldBe(10);
- Tests that hit real databases, networks, or file systems without justification - use fakes or Testcontainers.
- Tests that use `Thread.Sleep` or `Task.Delay` for synchronization - use `TaskCompletionSource` or `SemaphoreSlim`.
- Tests that boot the entire application when a unit test would suffice - push most coverage to unit tests.
- Tests that create expensive resources per test instead of sharing via fixtures - use `IClassFixture<T>`.
A: Fifty lines of Arrange for one line of Act signals that the system under test has too many dependencies. It is a design smell; consider breaking the class into smaller, focused components. In the meantime, use builders, AutoFixture, or shared factory methods to reduce boilerplate.
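A minimal test data builder sketch; the Order record and its fields are hypothetical, shown only to illustrate the pattern:

```csharp
// Test data builder: sensible defaults, override only what the test cares about.
public class OrderBuilder
{
    private string _id = "order-1";
    private decimal _amount = 100m;
    private string _currency = "USD";

    public OrderBuilder WithId(string id) { _id = id; return this; }
    public OrderBuilder WithAmount(decimal amount) { _amount = amount; return this; }
    public OrderBuilder WithCurrency(string currency) { _currency = currency; return this; }

    public Order Build() => new(_id, _amount, _currency);
}

// Hypothetical domain type used by the builder.
public record Order(string Id, decimal Amount, string Currency);
```

A zero-amount validation test then reads `new OrderBuilder().WithAmount(0m).Build()`, with every irrelevant field defaulted instead of spelled out inline.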
A: Delete tests that test deleted features, test implementation details that change with every refactor, duplicate other tests without adding coverage, test third-party library behavior (that is their responsibility), or are permanently flaky despite attempts to fix them. Dead tests erode trust in the suite and slow CI. Regularly prune tests during refactoring sessions.
Additional Interview Questions
A: The testing pyramid has many fast unit tests at the base, fewer integration tests in the middle, and a small number of end-to-end tests at the top. This structure optimizes for fast feedback (unit tests run in milliseconds), targeted wiring verification (integration tests), and confidence in critical user journeys (E2E). Inverting the pyramid (the ice cream cone) results in slow CI, flaky tests, and hard-to-diagnose failures.
| Consideration | Classicist | Mockist |
|---|---|---|
| Refactoring resilience | High | Lower |
| Design pressure | Moderate | High (pushes small classes) |
| Setup complexity | Can be higher (real objects) | Can be higher (mock configuration) |
| Best for | Domain logic, algorithms | Interaction-heavy orchestration |
A: I use classicist (real objects, fakes) for domain logic and algorithms where state verification is natural and refactoring resilience matters. I use mockist (mocks, interaction verification) for orchestration layers and infrastructure boundaries where verifying that the right calls happened is the core behavior. Most production teams blend both styles depending on the layer they are testing.
// Classicist: use a real in-memory repository
[Fact]
public async Task PlaceOrder_ValidOrder_PersistsToRepository()
{
var repo = new FakeOrderRepository();
var sut = new OrderService(repo, new RealPriceCalculator());
await sut.PlaceOrderAsync(new Order("o1", "AAPL", 10));
var saved = await repo.GetByIdAsync("o1");
saved.ShouldNotBeNull();
saved.Status.ShouldBe(OrderStatus.Placed);
}
// Mockist: verify interactions with mocked dependencies
[Fact]
public async Task PlaceOrder_ValidOrder_CallsRepositoryAndCalculator()
{
var repo = new Mock<IOrderRepository>();
var calc = new Mock<IPriceCalculator>();
calc.Setup(c => c.Calculate(It.IsAny<Order>())).Returns(150m);
var sut = new OrderService(repo.Object, calc.Object);
await sut.PlaceOrderAsync(new Order("o1", "AAPL", 10));
repo.Verify(r => r.SaveAsync(It.Is<Order>(o => o.Id == "o1")), Times.Once);
calc.Verify(c => c.Calculate(It.IsAny<Order>()), Times.Once);
}
A: Abstract the clock behind an interface like ISystemClock or TimeProvider (introduced in .NET 8). Inject it as a dependency. In tests, provide a fake clock that returns a fixed or controlled time. This makes tests deterministic.
public class FakeTimeProvider : TimeProvider
{
private DateTimeOffset _now;
public FakeTimeProvider(DateTimeOffset startTime) => _now = startTime;
public override DateTimeOffset GetUtcNow() => _now;
public void Advance(TimeSpan duration) => _now = _now.Add(duration);
}
// Usage in a test
[Fact]
public void TokenIsExpired_WhenCurrentTimeExceedsExpiry()
{
var clock = new FakeTimeProvider(new DateTimeOffset(2025, 6, 15, 12, 0, 0, TimeSpan.Zero));
var token = new AuthToken(
expiresAt: new DateTimeOffset(2025, 6, 15, 11, 0, 0, TimeSpan.Zero));
var sut = new TokenValidator(clock);
sut.IsExpired(token).ShouldBeTrue();
}
[Fact]
public void TokenIsNotExpired_WhenCurrentTimeBeforeExpiry()
{
var clock = new FakeTimeProvider(new DateTimeOffset(2025, 6, 15, 10, 0, 0, TimeSpan.Zero));
var token = new AuthToken(
expiresAt: new DateTimeOffset(2025, 6, 15, 11, 0, 0, TimeSpan.Zero));
var sut = new TokenValidator(clock);
sut.IsExpired(token).ShouldBeFalse();
}
A: Property-based testing generates hundreds of random inputs and verifies that invariants hold for all of them. It is more effective for mathematical operations (commutativity, associativity), serialization roundtrips, parsers, sorting algorithms, and any code with clear invariants. FsCheck is the primary .NET library.
using FsCheck;
using FsCheck.Xunit;
public class SortingProperties
{
[Property]
public Property Sort_PreservesLength(List<int> input)
{
var sorted = input.OrderBy(x => x).ToList();
return (sorted.Count == input.Count).ToProperty();
}
[Property]
public Property Sort_OutputIsOrdered(List<int> input)
{
var sorted = input.OrderBy(x => x).ToList();
var isOrdered = sorted.Zip(sorted.Skip(1), (a, b) => a <= b).All(x => x);
return isOrdered.ToProperty();
}
}
A: Maximize unit tests (milliseconds each), minimize integration tests to critical wiring paths, parallelize test execution (xUnit does this by default), use in-memory fakes instead of real databases where possible, avoid Thread.Sleep/Task.Delay, share expensive fixtures with IClassFixture, and run heavy integration suites on separate CI stages rather than on every push.
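As an illustration of fixture sharing, xUnit creates one IClassFixture<T> instance per test class rather than per test; DatabaseFixture here is a hypothetical wrapper around an expensive resource:

```csharp
using System;
using Xunit;

// Created once for the whole test class, disposed after its last test.
public class DatabaseFixture : IDisposable
{
    public DatabaseFixture()
    {
        // e.g., start a container or open a connection pool once
    }

    public void Dispose()
    {
        // tear down the shared resource once
    }
}

public class OrderRepositoryTests : IClassFixture<DatabaseFixture>
{
    private readonly DatabaseFixture _db;

    // xUnit injects the shared fixture instance via the constructor.
    public OrderRepositoryTests(DatabaseFixture db) => _db = db;

    [Fact]
    public void SavedOrder_CanBeReadBack()
    {
        // every test in this class reuses _db instead of creating its own
    }
}
```

For sharing across multiple test classes, the same idea scales up to xUnit collection fixtures.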
A: Use the Builder pattern or AutoFixture to construct objects with sensible defaults and override only what matters for each test. Centralize builders alongside the domain model so they evolve together. Avoid constructing complex object graphs inline in every test.
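A short AutoFixture sketch; the Customer type and its IsPremium property are hypothetical:

```csharp
using AutoFixture;
using Xunit;

public class CustomerDiscountTests
{
    [Fact]
    public void PremiumFlag_CanBeOverridden()
    {
        var fixture = new Fixture();

        // Every writable property receives an auto-generated value.
        var anyCustomer = fixture.Create<Customer>();

        // Override only the value this test actually cares about.
        var premium = fixture.Build<Customer>()
            .With(c => c.IsPremium, true)
            .Create();

        Assert.True(premium.IsPremium);
    }
}

// Hypothetical domain type used above.
public class Customer
{
    public string Name { get; set; } = "";
    public bool IsPremium { get; set; }
}
```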
A: Use unique resource identifiers (database names, queue topics, blob prefixes) per test run, isolate shared state through fixtures, and ensure teardown cleans resources. Mark collection fixtures to avoid serial bottlenecks and rely on containerized dependencies to avoid cross-test interference.
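A minimal sketch of per-run resource naming, assuming a Testcontainers-style helper (TestDatabase is hypothetical):

```csharp
using System;
using System.Threading.Tasks;

// Unique names per run keep parallel test executions from colliding.
var runId = Guid.NewGuid().ToString("N");

var databaseName = $"orders_test_{runId}";
var topicName = $"orders-events-{runId}";

// Hypothetical fixture that provisions, and on dispose drops, the named database.
await using var db = await TestDatabase.CreateAsync(databaseName);
```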
A: Attach in-memory exporters for OpenTelemetry during integration tests, trigger key user journeys, and assert on emitted spans/metrics/logs (names, attributes, and error flags). This ensures dashboards and alerts stay trustworthy without requiring external telemetry backends.
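A sketch of the in-memory trace exporter approach, using the OpenTelemetry.Exporter.InMemory package; the ActivitySource and span names are assumptions:

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using OpenTelemetry;
using OpenTelemetry.Trace;
using Xunit;

var exportedSpans = new List<Activity>();

// Capture spans in memory instead of shipping them to a real backend.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("MyApp.Orders")          // hypothetical ActivitySource name
    .AddInMemoryExporter(exportedSpans)
    .Build();

// ... trigger the user journey under test here ...

tracerProvider.ForceFlush();

// Assert on the emitted telemetry (names, attributes, error flags).
Assert.Contains(exportedSpans, span => span.DisplayName == "PlaceOrder");
```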
Summary
| Topic | Key Takeaway |
|---|---|
| Red-Green-Refactor | Discipline of small steps: fail, pass, improve |
| TDD vs Test-First vs Test-Last | TDD drives design; test-last is acceptable with discipline |
| xUnit | Modern default; [Fact]/[Theory], constructor lifecycle, parallel by default |
| AAA Pattern | Arrange-Act-Assert keeps tests readable and diagnosable |
| Mocking (Moq / NSubstitute) | Control inputs (stubs) and verify outputs (mocks) |
| Test Doubles | Know dummy/stub/mock/spy/fake; interviewers test this vocabulary |
| Integration Testing | WebApplicationFactory for APIs; Testcontainers for real infra |
| Async Testing | Always use async Task; never .Result or .Wait() |
| Code Coverage | Target risk, not percentages; 100% is a vanity metric |
| BDD / SpecFlow | Living documentation for complex business rules |
| Anti-Patterns | Test behavior, not implementation; keep tests fast and focused |
| Property-Based Testing | Verify invariants over random inputs with FsCheck |
| Mutation Testing | Stronger quality signal than coverage; use Stryker.NET |