
TDD Fundamentals: Comprehensive Study Notes

"TDD is not about testing. It is about design, confidence, and sustainable pace."

These notes cover the full landscape of Test-Driven Development and testing practices in .NET, from foundational cycles to advanced techniques. Use them to prepare for senior-level interview discussions where you must demonstrate depth, not just definitions.

Related: TDD Index | Testing Strategies | Dependency Inversion (DIP)


Red-Green-Refactor Cycle

The core loop of TDD is three discrete steps repeated in tight iterations:

  1. Red: Write a small test that describes the next increment of behavior. Run it. It must fail for the right reason (a failing assertion on missing behavior, not a compile error or a mistaken assertion).
  2. Green: Write the simplest code that makes the test pass. Do not over-engineer. Hardcoding a return value is acceptable if the test allows it.
  3. Refactor: Improve the structure of production and test code while keeping all tests green. Extract methods, rename variables, remove duplication.
   RED              GREEN            REFACTOR
   Write a          Write just       Improve design,
   failing test --> enough code --> remove duplication --> (repeat)
                    to pass          tests stay green
// === STEP 1: RED - Write a test for a calculator that adds two numbers ===
[Fact]
public void Add_TwoPositiveNumbers_ReturnsSum()
{
    var calc = new Calculator();
    var result = calc.Add(2, 3);
    Assert.Equal(5, result);
}
// This fails because Calculator does not exist yet.

// === STEP 2: GREEN - Simplest implementation ===
public class Calculator
{
    public int Add(int a, int b) => a + b;
}
// Test passes. Move on.

// === STEP 3: REFACTOR - Nothing to improve yet. Add next behavior: ===

// RED: new test
[Fact]
public void Add_NegativeAndPositive_ReturnsCorrectSum()
{
    var calc = new Calculator();
    Assert.Equal(-1, calc.Add(-3, 2));
}
// GREEN: already passes with current implementation - no new code needed.

// RED: drive out subtraction
[Fact]
public void Subtract_TwoNumbers_ReturnsDifference()
{
    var calc = new Calculator();
    Assert.Equal(4, calc.Subtract(7, 3));
}

// GREEN: add Subtract
public class Calculator
{
    public int Add(int a, int b) => a + b;
    public int Subtract(int a, int b) => a - b;
}

// REFACTOR: extract common pattern if needed, rename for clarity, etc.
Q: What is the Red-Green-Refactor cycle, and why is the order important?

A: Red means writing a failing test first to define expected behavior. Green means writing the minimum code to pass. Refactor means improving design while tests stay green. The order matters because Red ensures you write only necessary code, Green prevents over-engineering, and Refactor maintains quality. Skipping any phase leads to either untested code, gold-plating, or accumulating technical debt.

Q: What are the most common mistakes in the Red-Green-Refactor cycle?

A: Writing too large a test in the Red phase (covering multiple behaviors at once), skipping the Green phase and jumping to the "real" implementation immediately, skipping Refactor entirely so passing tests accumulate over messy code, and refactoring while tests are red, which removes the safety net.

Q: How small should the steps be in the Red phase?

A: Each test should describe a single behavioral increment. If you find yourself writing a test that requires implementing more than a few lines of production code, the step is too large. Break it down. A useful heuristic: each Red-Green cycle should take 1-5 minutes.

Q: Can you hardcode a return value in the Green step?

A: Yes. If the test only checks one specific case, hardcoding is the simplest code that passes. The next test will force you to generalize. This is called "triangulation": using multiple examples to drive out the real algorithm. It keeps you honest about never writing code that a test does not justify.

// First test: Add(2, 3) == 5
// Green (hardcoded):
public int Add(int a, int b) => 5; // passes!

// Second test forces generalization: Add(1, 1) == 2
// Green (real):
public int Add(int a, int b) => a + b; // now it must be general

TDD vs Test-First vs Test-Last

These three approaches differ in when tests are written and how they influence design.

Approach | When Tests Are Written | Design Influence | Feedback Speed
Test-Driven (TDD) | Before production code, one behavior at a time | High: tests shape the API and dependencies | Immediate
Test-First | Before production code, but often in larger batches | Moderate: tests verify a pre-planned design | Fast
Test-Last | After production code is complete | Low: tests are retrofitted onto the existing design | Delayed
Q: What is the difference between TDD and Test-First development?

A: In TDD, tests are written one at a time and each test drives a single behavior increment. The design emerges from the pressure of making code testable. In Test-First, you write a suite of tests before implementing, often based on a specification; it is less iterative. TDD gives tighter feedback loops. Test-First risks writing tests that reflect assumptions that change during implementation.

Q: When is Test-Last acceptable?

A: Test-Last is acceptable for exploratory work, spikes, or prototypes where the design is highly uncertain. It is also common in legacy codebases where retrofitting tests is the only practical option. The risk is that code written without tests in mind is often hard to test (tight coupling, static dependencies, hidden state). The key is that you still ship code with high-confidence test coverage regardless of the order.

Q: How does TDD influence software design?

A: TDD creates strong pressure toward dependency injection, loose coupling, small functions, and clear interfaces. Code that is hard to test (static dependencies, deep inheritance, hidden global state) resists TDD. Over time, TDD pushes you toward the same design principles (SOLID, DI, composition over inheritance) that lead to maintainable architectures.

// Without TDD pressure - hard to test:
public class OrderProcessor
{
    public void Process(Order order)
    {
        var db = new SqlConnection("connstring"); // hidden dependency
        var now = DateTime.UtcNow;                // non-deterministic
        Logger.Log("Processing");                 // static call
    }
}

// With TDD pressure - testable:
public class OrderProcessor
{
    private readonly IOrderRepository _repo;
    private readonly TimeProvider _clock;
    private readonly ILogger<OrderProcessor> _logger;

    public OrderProcessor(IOrderRepository repo, TimeProvider clock, ILogger<OrderProcessor> logger)
    {
        _repo = repo;
        _clock = clock;
        _logger = logger;
    }

    public async Task ProcessAsync(Order order, CancellationToken ct)
    {
        order.ProcessedAt = _clock.GetUtcNow();
        await _repo.SaveAsync(order, ct);
        _logger.LogInformation("Processed order {OrderId}", order.Id);
    }
}

Unit Testing with xUnit in .NET

Q: What are the key concepts of xUnit?

A: xUnit is the most widely used test framework in modern .NET. [Fact] marks a test with no parameters. [Theory] with [InlineData] creates parameterized / table-driven tests. The constructor runs before each test (replaces [SetUp]). IDisposable.Dispose() runs after each test (replaces [TearDown]). IClassFixture<T> shares expensive setup across tests in a class. ICollectionFixture<T> shares setup across multiple test classes. Tests run in parallel by default.

public class OrderValidatorTests
{
    // [Fact] marks a test with no parameters
    [Fact]
    public void Validate_NullOrder_ThrowsArgumentNullException()
    {
        var validator = new OrderValidator();
        Assert.Throws<ArgumentNullException>(() => validator.Validate(null!));
    }

    // [Theory] + [InlineData] for parameterized / table-driven tests
    [Theory]
    [InlineData(0, false)]
    [InlineData(-1, false)]
    [InlineData(100, true)]
    [InlineData(1, true)]
    public void Validate_Amount_ReturnsExpected(decimal amount, bool expected)
    {
        var order = new Order { Amount = amount };
        var validator = new OrderValidator();

        var result = validator.Validate(order);

        Assert.Equal(expected, result);
    }
}
Q: What is the difference between [Fact] and [Theory] in xUnit?

A: [Fact] is a test that takes no parameters and runs once. [Theory] is a parameterized test that runs once per data set. Data is supplied via [InlineData] (inline values), [MemberData] (method/property returning IEnumerable<object[]>), or [ClassData] (a class implementing IEnumerable<object[]>). Use [Theory] when you want to test the same logic with many different inputs.

// [MemberData] example - more complex data than InlineData can handle
public class DiscountCalculatorTests
{
    public static IEnumerable<object[]> DiscountScenarios =>
        new List<object[]>
        {
            new object[] { CustomerType.Regular, 100m, 5m },
            new object[] { CustomerType.Premium, 100m, 20m },
            new object[] { CustomerType.VIP, 100m, 30m },
        };

    [Theory]
    [MemberData(nameof(DiscountScenarios))]
    public void CalculateDiscount_ReturnsExpected(
        CustomerType type, decimal price, decimal expectedDiscount)
    {
        var calc = new DiscountCalculator();
        var result = calc.Calculate(type, price);
        Assert.Equal(expectedDiscount, result);
    }
}
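[ClassData], mentioned above, moves the data source into its own reusable class. A minimal sketch, reusing the hypothetical DiscountCalculator and CustomerType from the [MemberData] example:

```csharp
using System.Collections;
using System.Collections.Generic;
using Xunit;

// [ClassData] example: the data source is a class implementing IEnumerable<object[]>
public class DiscountData : IEnumerable<object[]>
{
    public IEnumerator<object[]> GetEnumerator()
    {
        yield return new object[] { CustomerType.Regular, 100m, 5m };
        yield return new object[] { CustomerType.Premium, 100m, 20m };
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

public class DiscountCalculatorClassDataTests
{
    [Theory]
    [ClassData(typeof(DiscountData))]
    public void CalculateDiscount_ReturnsExpected(
        CustomerType type, decimal price, decimal expectedDiscount)
    {
        var calc = new DiscountCalculator();
        Assert.Equal(expectedDiscount, calc.Calculate(type, price));
    }
}
```

The class form is useful when several test classes share the same scenarios, since the data source can be reused across [Theory] methods.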
Q: How does xUnit handle test lifecycle (setup/teardown)?

A: xUnit uses the constructor for per-test setup and IDisposable.Dispose() for per-test teardown. For shared expensive resources (like a database connection), implement IClassFixture<T>; the fixture is created once and shared across all tests in the class. For sharing across multiple classes, use ICollectionFixture<T> with a [Collection] attribute. This is more explicit than NUnit's [SetUp]/[TearDown] attributes.

// Per-test setup via constructor
public class AccountServiceTests : IDisposable
{
    private readonly AccountService _sut;
    private readonly FakeAccountRepository _repo;

    public AccountServiceTests()
    {
        // Runs before EACH test
        _repo = new FakeAccountRepository();
        _sut = new AccountService(_repo);
    }

    [Fact]
    public void Deposit_PositiveAmount_IncreasesBalance()
    {
        _sut.Deposit("acct-1", 100m);
        Assert.Equal(100m, _repo.GetBalance("acct-1"));
    }

    public void Dispose()
    {
        // Runs after EACH test - clean up resources
        _repo.Clear();
    }
}

// Shared fixture across all tests in the class
public class DatabaseFixture : IAsyncLifetime
{
    public AppDbContext DbContext { get; private set; } = null!;

    public async Task InitializeAsync()
    {
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase($"TestDb-{Guid.NewGuid()}")
            .Options;
        DbContext = new AppDbContext(options);
        await DbContext.Database.EnsureCreatedAsync();
    }

    public async Task DisposeAsync()
    {
        await DbContext.DisposeAsync();
    }
}

public class ProductRepositoryTests : IClassFixture<DatabaseFixture>
{
    private readonly AppDbContext _db;

    public ProductRepositoryTests(DatabaseFixture fixture)
    {
        _db = fixture.DbContext;
    }

    [Fact]
    public async Task Add_Product_PersistsToDatabase()
    {
        _db.Products.Add(new Product("Widget", 9.99m));
        await _db.SaveChangesAsync();

        var count = await _db.Products.CountAsync();
        Assert.True(count >= 1);
    }
}
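ICollectionFixture<T>, mentioned earlier, extends the same idea across multiple test classes. A sketch reusing the DatabaseFixture above (note that classes in the same collection no longer run in parallel with each other):

```csharp
using Xunit;

// 1. A definition class associates the fixture with a collection name.
//    It carries no code of its own.
[CollectionDefinition("Database collection")]
public class DatabaseCollection : ICollectionFixture<DatabaseFixture>
{
}

// 2. Every test class tagged with the collection name receives the SAME
//    fixture instance, created once for the whole collection.
[Collection("Database collection")]
public class OrderRepositoryTests
{
    private readonly AppDbContext _db;
    public OrderRepositoryTests(DatabaseFixture fixture) => _db = fixture.DbContext;
}

[Collection("Database collection")]
public class CustomerRepositoryTests
{
    private readonly AppDbContext _db;
    public CustomerRepositoryTests(DatabaseFixture fixture) => _db = fixture.DbContext;
}
```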
Q: How do xUnit and NUnit compare?

A:

Feature | xUnit | NUnit
Test method attribute | [Fact], [Theory] | [Test], [TestCase]
Setup/teardown | Constructor / IDisposable | [SetUp] / [TearDown]
Parameterized tests | [InlineData], [MemberData], [ClassData] | [TestCase], [TestCaseSource]
Parallel by default | Yes | No (opt-in)
Assertion style | Assert.Equal, Assert.Throws | Assert.That (constraint model)
Shared fixtures | IClassFixture<T> | [OneTimeSetUp] / [OneTimeTearDown]

Q: What assertion libraries can you use with xUnit?

A: Both FluentAssertions and Shouldly provide fluent, readable assertion syntax:

// FluentAssertions
result.Should().Be(5);
order.Should().NotBeNull();
action.Should().Throw<InvalidOperationException>()
      .WithMessage("*insufficient*");

// Shouldly
result.ShouldBe(5);
order.ShouldNotBeNull();
Should.Throw<InvalidOperationException>(() => action());

Arrange-Act-Assert (AAA) Pattern

AAA is the standard structure for unit tests. Each test has three clearly separated phases.

Q: What is the Arrange-Act-Assert pattern, and what happens if you violate it?

A: AAA structures tests into setup (Arrange), invoking the behavior (Act), and verifying outcomes (Assert). Violating it (multiple Act steps, assertions mixed with arrangement, or conditional logic) makes tests harder to read, debug, and maintain. When a test fails, AAA makes it immediately clear whether the problem is in setup, execution, or the assertion itself.

[Fact]
public async Task TransferFunds_SufficientBalance_DebitsCreditsBothAccounts()
{
    // Arrange: set up the system under test and its dependencies
    var sourceAccount = new Account("src", balance: 500m);
    var targetAccount = new Account("tgt", balance: 100m);
    var repo = new InMemoryAccountRepository(sourceAccount, targetAccount);
    var sut = new TransferService(repo);

    // Act: invoke the behavior under test
    await sut.TransferAsync("src", "tgt", amount: 200m);

    // Assert: verify the expected outcome
    var updatedSource = await repo.GetAsync("src");
    var updatedTarget = await repo.GetAsync("tgt");
    updatedSource.Balance.ShouldBe(300m);
    updatedTarget.Balance.ShouldBe(300m);
}
Q: What are the rules of AAA?

A:
  • One Act per test: multiple Act steps mean you are testing multiple behaviors, so split them.
  • Minimize Arrange: use builders or AutoFixture to reduce boilerplate.
  • Explicit Assert: assert on the outcome that matters, not on implementation details.
  • No conditional logic in tests: tests should be linear, with no if, switch, or loops.

Q: What naming conventions should tests follow?

A: Use names that describe the scenario and expected outcome:

MethodUnderTest_Scenario_ExpectedBehavior
// or
Given_Scenario_When_Action_Then_ExpectedResult

Examples:

  • Withdraw_InsufficientFunds_ThrowsOverdraftException
  • CalculateDiscount_PremiumCustomer_Returns20Percent
  • GivenExpiredToken_WhenAuthenticate_ThenReturnsUnauthorized

Q: How do you reduce Arrange boilerplate across many tests?

A: Use the Builder pattern or AutoFixture to construct objects with sensible defaults and override only what matters for each test. Centralize builders alongside the domain model so they evolve together.

public class OrderBuilder
{
    private string _id = "default-id";
    private decimal _amount = 100m;
    private OrderStatus _status = OrderStatus.Pending;

    public OrderBuilder WithId(string id) { _id = id; return this; }
    public OrderBuilder WithAmount(decimal amount) { _amount = amount; return this; }
    public OrderBuilder WithStatus(OrderStatus status) { _status = status; return this; }
    public Order Build() => new Order(_id, _amount, _status);
}

// Usage in tests - only override what matters
[Fact]
public void Validate_ZeroAmount_ReturnsFalse()
{
    var order = new OrderBuilder().WithAmount(0).Build();
    var validator = new OrderValidator();

    var result = validator.Validate(order);

    result.ShouldBeFalse();
}
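AutoFixture, the alternative mentioned above, auto-generates the values you do not care about so the test states only its one interesting fact. A minimal sketch, assuming the AutoFixture package and an Order type with a settable Amount property (the Build/With/Create calls are AutoFixture's standard customization API):

```csharp
using AutoFixture;
using Shouldly;
using Xunit;

public class OrderValidatorAutoFixtureTests
{
    [Fact]
    public void Validate_ZeroAmount_ReturnsFalse()
    {
        var fixture = new Fixture();

        // Build<T>() pins only the property under test;
        // every other member is filled with generated values.
        var order = fixture.Build<Order>()
                           .With(o => o.Amount, 0m)
                           .Create();

        var validator = new OrderValidator();

        validator.Validate(order).ShouldBeFalse();
    }
}
```

Compared with a hand-written builder, AutoFixture removes the maintenance cost of defaults, at the price of less control over the generated values.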

Mocking with Moq and NSubstitute

Mocking frameworks create test doubles for interfaces so you can isolate the system under test.

Q: How do you use Moq to create mocks in C#?

A: Moq creates mock objects from interfaces. Use Setup() to configure return values (stubs) and Verify() to assert interactions (mocks). The .Object property gives you the concrete instance to inject.

public class NotificationServiceTests
{
    private readonly Mock<IEmailSender> _emailSender = new();
    private readonly Mock<IUserRepository> _userRepo = new();
    private readonly NotificationService _sut;

    public NotificationServiceTests()
    {
        _sut = new NotificationService(_emailSender.Object, _userRepo.Object);
    }

    [Fact]
    public async Task NotifyUser_ActiveUser_SendsEmail()
    {
        // Arrange: stub the repository
        _userRepo.Setup(r => r.GetByIdAsync("u1"))
                 .ReturnsAsync(new User("u1", "alice@test.com", isActive: true));

        // Act
        await _sut.NotifyAsync("u1", "Hello");

        // Assert: verify the email sender was called
        _emailSender.Verify(
            s => s.SendAsync("alice@test.com", "Hello", It.IsAny<CancellationToken>()),
            Times.Once);
    }

    [Fact]
    public async Task NotifyUser_InactiveUser_DoesNotSendEmail()
    {
        _userRepo.Setup(r => r.GetByIdAsync("u2"))
                 .ReturnsAsync(new User("u2", "bob@test.com", isActive: false));

        await _sut.NotifyAsync("u2", "Hello");

        _emailSender.Verify(
            s => s.SendAsync(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<CancellationToken>()),
            Times.Never);
    }
}
Q: What are the key features of Moq?

A: Setup() / ReturnsAsync() configure return values. Verify() / Times assert that methods were called with expected arguments. It.IsAny<T>() and It.Is<T>(predicate) are argument matchers. Callback() captures arguments for deeper inspection. MockBehavior.Strict throws on unexpected calls (use sparingly; it creates brittle tests).
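A minimal sketch of MockBehavior.Strict, using the IEmailSender interface from the earlier examples: any call without a matching Setup throws a MockException, which is exactly why strict mocks couple tests tightly to the implementation.

```csharp
using System.Threading;
using System.Threading.Tasks;
using Moq;
using Xunit;

public class StrictMockTests
{
    [Fact]
    public async Task StrictMock_UnexpectedCall_Throws()
    {
        // Strict behavior: every invoked member must have been Setup first
        var emailSender = new Mock<IEmailSender>(MockBehavior.Strict);

        // SendAsync was never configured, so the strict mock rejects the call
        await Assert.ThrowsAsync<MockException>(
            () => emailSender.Object.SendAsync("a@test.com", "Hi", CancellationToken.None));
    }
}
```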

Q: How does NSubstitute differ from Moq?

A: NSubstitute uses a more natural syntax without .Object indirection:

public class NotificationServiceNSubTests
{
    private readonly IEmailSender _emailSender = Substitute.For<IEmailSender>();
    private readonly IUserRepository _userRepo = Substitute.For<IUserRepository>();
    private readonly NotificationService _sut;

    public NotificationServiceNSubTests()
    {
        _sut = new NotificationService(_emailSender, _userRepo);
    }

    [Fact]
    public async Task NotifyUser_ActiveUser_SendsEmail()
    {
        // Arrange
        _userRepo.GetByIdAsync("u1")
                 .Returns(new User("u1", "alice@test.com", isActive: true));

        // Act
        await _sut.NotifyAsync("u1", "Hello");

        // Assert
        await _emailSender.Received(1)
            .SendAsync("alice@test.com", "Hello", Arg.Any<CancellationToken>());
    }

    [Fact]
    public async Task NotifyUser_InactiveUser_DoesNotSendEmail()
    {
        _userRepo.GetByIdAsync("u2")
                 .Returns(new User("u2", "bob@test.com", isActive: false));

        await _sut.NotifyAsync("u2", "Hello");

        await _emailSender.DidNotReceive()
            .SendAsync(Arg.Any<string>(), Arg.Any<string>(), Arg.Any<CancellationToken>());
    }
}
Feature | Moq | NSubstitute
Create mock | new Mock<T>() | Substitute.For<T>()
Access mock instance | .Object | direct reference
Setup return | .Setup().Returns() | .Returns() directly
Verify call | .Verify(expr, Times.Once) | .Received(1).Method()
Argument matching | It.IsAny<T>() | Arg.Any<T>()
Syntax feel | Explicit and verbose | Concise and natural
Q: When would you choose NSubstitute over Moq?

A: NSubstitute has a more concise, natural syntax without the .Object indirection. It is easier for teams to read and onboard onto. Moq offers MockBehavior.Strict and more granular verification options. I choose NSubstitute for greenfield projects where readability is paramount and Moq when I need strict verification or am joining a team already using it.

Q: How do you capture arguments in Moq for deeper inspection?

A: Use the Callback() method to capture arguments passed to a mocked method:

[Fact]
public async Task ProcessOrder_SetsTimestampBeforeSaving()
{
    Order? capturedOrder = null;
    var repo = new Mock<IOrderRepository>();
    repo.Setup(r => r.SaveAsync(It.IsAny<Order>()))
        .Callback<Order>(order => capturedOrder = order)
        .Returns(Task.CompletedTask);

    var sut = new OrderProcessor(repo.Object);
    await sut.ProcessAsync(new Order("o1", 100m));

    capturedOrder.ShouldNotBeNull();
    capturedOrder.ProcessedAt.ShouldNotBe(default);
    capturedOrder.ProcessedAt.ShouldBeLessThanOrEqualTo(DateTimeOffset.UtcNow);
}

Test Doubles (Stubs, Mocks, Fakes, Spies, Dummies)

Understanding the taxonomy of test doubles is a common interview topic. Each serves a different purpose.

Q: What is the difference between a stub and a mock?

A: A stub provides canned answers to calls made during a test; it controls indirect inputs. A mock verifies that specific interactions occurred; it checks indirect outputs. In Moq terms, Setup().Returns() creates a stub; Verify() creates a mock. The distinction matters because over-reliance on mocks couples tests to implementation, while stubs keep tests focused on outcomes.

Q: What is a Dummy object?

A: A Dummy is an object passed to satisfy a parameter but never actually used:

// The logger is required by the constructor but irrelevant to this test
var dummyLogger = new Mock<ILogger<OrderService>>().Object;
var sut = new OrderService(realRepo, dummyLogger);
Q: What is a Stub?

A: A Stub returns predetermined data. It controls indirect inputs to the system under test:

// Stub: always returns a fixed exchange rate
var stubRateProvider = new Mock<IExchangeRateProvider>();
stubRateProvider.Setup(r => r.GetRate("USD", "EUR")).Returns(0.85m);

var sut = new CurrencyConverter(stubRateProvider.Object);
var result = sut.Convert(100m, "USD", "EUR");
Assert.Equal(85m, result);
Q: What is a Mock?

A: A Mock verifies that specific interactions occurred. It focuses on behavior verification:

// Mock: verify that the audit log was written
var mockAuditLog = new Mock<IAuditLog>();
var sut = new PaymentProcessor(mockAuditLog.Object);

sut.ProcessPayment(payment);

mockAuditLog.Verify(a => a.RecordAsync(
    It.Is<AuditEntry>(e => e.Action == "PaymentProcessed")),
    Times.Once);
Q: What is a Spy?

A: A Spy records calls for later inspection. Useful when you need to verify complex interaction sequences:

// Manual spy implementation
public class SpyEmailSender : IEmailSender
{
    public List<(string To, string Body)> SentEmails { get; } = new();

    public Task SendAsync(string to, string body, CancellationToken ct)
    {
        SentEmails.Add((to, body));
        return Task.CompletedTask;
    }
}

// Usage
[Fact]
public async Task NotifyAll_SendsEmailToEachActiveUser()
{
    var spy = new SpyEmailSender();
    var users = new[] {
        new User("u1", "alice@test.com", isActive: true),
        new User("u2", "bob@test.com", isActive: true)
    };
    var sut = new NotificationService(spy, new FakeUserRepository(users));

    await sut.NotifyAllAsync("Welcome");

    Assert.Equal(2, spy.SentEmails.Count);
    Assert.Equal("alice@test.com", spy.SentEmails[0].To);
    Assert.Equal("bob@test.com", spy.SentEmails[1].To);
}
Q: What is a Fake?

A: A Fake is a working implementation with simplified behavior. Not suitable for production but functionally correct for testing:

// Fake: in-memory repository with real collection behavior
public class FakeOrderRepository : IOrderRepository
{
    private readonly Dictionary<string, Order> _store = new();

    public Task SaveAsync(Order order)
    {
        _store[order.Id] = order;
        return Task.CompletedTask;
    }

    public Task<Order?> GetByIdAsync(string id)
    {
        _store.TryGetValue(id, out var order);
        return Task.FromResult(order);
    }

    public Task<IReadOnlyList<Order>> GetAllAsync()
        => Task.FromResult<IReadOnlyList<Order>>(_store.Values.ToList());
}
Q: How do you summarize all test double types?

A:

Double | Purpose | Verifies behavior? | Has logic?
Dummy | Fill parameters | No | No
Stub | Provide canned answers | No | Minimal
Mock | Verify interactions | Yes | No
Spy | Record interactions | Yes (after the fact) | Minimal
Fake | Lightweight substitute | No (but could) | Yes


Integration Testing in ASP.NET Core (WebApplicationFactory)

Q: What is WebApplicationFactory and when do you use it?

A: WebApplicationFactory<TEntryPoint> boots your ASP.NET Core application in-memory with the real middleware pipeline, DI container, and routing. You use it for integration tests that verify HTTP behavior end-to-end without a real network. You can swap specific services (database, external APIs) in ConfigureServices while keeping everything else production-like. This catches wiring bugs that unit tests miss.

public class ProductsApiTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;
    private readonly WebApplicationFactory<Program> _factory;

    public ProductsApiTests(WebApplicationFactory<Program> factory)
    {
        _factory = factory;
        _client = factory.WithWebHostBuilder(builder =>
        {
            builder.ConfigureServices(services =>
            {
                // Replace real database with in-memory
                services.RemoveAll<DbContextOptions<AppDbContext>>();
                services.AddDbContext<AppDbContext>(opts =>
                    opts.UseInMemoryDatabase("TestDb"));

                // Replace external HTTP client with a stub
                services.RemoveAll<IPaymentGateway>();
                services.AddSingleton<IPaymentGateway>(new StubPaymentGateway());
            });
        }).CreateClient();
    }

    [Fact]
    public async Task GetProducts_ReturnsOkWithJsonArray()
    {
        var response = await _client.GetAsync("/api/products");

        response.StatusCode.ShouldBe(HttpStatusCode.OK);
        var products = await response.Content.ReadFromJsonAsync<List<ProductDto>>();
        products.ShouldNotBeNull();
    }

    [Fact]
    public async Task CreateProduct_ValidPayload_Returns201AndLocationHeader()
    {
        var payload = new { Name = "Widget", Price = 9.99 };
        var content = JsonContent.Create(payload);

        var response = await _client.PostAsync("/api/products", content);

        response.StatusCode.ShouldBe(HttpStatusCode.Created);
        response.Headers.Location.ShouldNotBeNull();
    }
}
Q: How do you create a custom WebApplicationFactory for shared test configuration?

A: Subclass WebApplicationFactory<Program> and override ConfigureWebHost to swap services once for all tests that use the factory:

public class CustomApiFactory : WebApplicationFactory<Program>
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.UseEnvironment("Testing");

        builder.ConfigureServices(services =>
        {
            // Swap real dependencies for test doubles
            services.RemoveAll<DbContextOptions<AppDbContext>>();
            services.AddDbContext<AppDbContext>(opts =>
                opts.UseInMemoryDatabase($"TestDb-{Guid.NewGuid()}"));

            services.RemoveAll<IMessageBus>();
            services.AddSingleton<IMessageBus, FakeMessageBus>();
        });
    }
}

// Usage โ€” all tests in this class share the same factory
public class OrdersApiTests : IClassFixture<CustomApiFactory>
{
    private readonly HttpClient _client;

    public OrdersApiTests(CustomApiFactory factory)
    {
        _client = factory.CreateClient();
    }

    [Fact]
    public async Task PlaceOrder_ValidOrder_ReturnsAccepted()
    {
        var order = new { Symbol = "AAPL", Quantity = 10, Side = "Buy" };
        var response = await _client.PostAsJsonAsync("/api/orders", order);
        response.StatusCode.ShouldBe(HttpStatusCode.Accepted);
    }
}
Q: How do you test middleware in isolation using TestServer?

A: TestServer gives you lower-level control when you need to test middleware or handlers directly:

[Fact]
public async Task RateLimitingMiddleware_ExceedsLimit_Returns429()
{
    using var host = await new HostBuilder()
        .ConfigureWebHost(builder =>
        {
            builder.UseTestServer();
            builder.ConfigureServices(services =>
            {
                services.AddRateLimiting(opts => opts.MaxRequestsPerMinute = 2);
            });
            builder.Configure(app =>
            {
                app.UseRateLimiting();
                app.MapGet("/", () => "OK");
            });
        })
        .StartAsync();

    var client = host.GetTestClient();

    // First two requests succeed
    (await client.GetAsync("/")).StatusCode.ShouldBe(HttpStatusCode.OK);
    (await client.GetAsync("/")).StatusCode.ShouldBe(HttpStatusCode.OK);

    // Third request is rate-limited
    (await client.GetAsync("/")).StatusCode.ShouldBe(HttpStatusCode.TooManyRequests);
}
Q: How do you use Testcontainers for realistic integration tests?

A: When in-memory fakes are insufficient, use Testcontainers to spin up real infrastructure in Docker:

public class PostgresIntegrationTests : IAsyncLifetime
{
    private readonly PostgreSqlContainer _postgres = new PostgreSqlBuilder()
        .WithImage("postgres:16-alpine")
        .Build();

    public async Task InitializeAsync() => await _postgres.StartAsync();
    public async Task DisposeAsync() => await _postgres.DisposeAsync();

    [Fact]
    public async Task Repository_SaveAndRetrieve_RoundTrips()
    {
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseNpgsql(_postgres.GetConnectionString())
            .Options;

        await using var ctx = new AppDbContext(options);
        await ctx.Database.MigrateAsync();

        ctx.Products.Add(new Product("Widget", 9.99m));
        await ctx.SaveChangesAsync();

        var loaded = await ctx.Products.FirstAsync();
        loaded.Name.ShouldBe("Widget");
    }
}
Q: What is the difference between integration tests and end-to-end tests?

A: Integration tests verify that components work together within the application boundary (DI wiring, middleware, database access), typically using WebApplicationFactory or Testcontainers with swapped external dependencies. End-to-end tests verify complete user journeys across the entire deployed system, including real external services, UI, and infrastructure. Integration tests run in seconds; E2E tests run in minutes and are more prone to flakiness.


Testing Async Code

Q: How do you test async code without introducing flakiness?

A: Use async Task test methods with await instead of .Result or .Wait(), which can deadlock. For timeouts, use CancellationTokenSource with pre-cancellation or short timeouts. Avoid Task.Delay for synchronization; use TaskCompletionSource or SemaphoreSlim to signal between test and production code deterministically. Mock async dependencies with ReturnsAsync() or .Returns(Task.FromResult(...)).

// Basic async test
[Fact]
public async Task GetUserAsync_ExistingId_ReturnsUser()
{
    // Arrange
    var repo = new Mock<IUserRepository>();
    repo.Setup(r => r.GetByIdAsync("u1"))
        .ReturnsAsync(new User("u1", "Alice"));
    var sut = new UserService(repo.Object);

    // Act
    var user = await sut.GetUserAsync("u1");

    // Assert
    user.ShouldNotBeNull();
    user.Name.ShouldBe("Alice");
}
Q: How do you test that async code throws the correct exception?

A: Use Assert.ThrowsAsync<T>() which properly awaits the task and captures the exception:

[Fact]
public async Task GetUserAsync_NonExistentId_ThrowsNotFoundException()
{
    var repo = new Mock<IUserRepository>();
    repo.Setup(r => r.GetByIdAsync("missing"))
        .ReturnsAsync((User?)null);
    var sut = new UserService(repo.Object);

    await Assert.ThrowsAsync<NotFoundException>(
        () => sut.GetUserAsync("missing"));
}
Q: How do you test CancellationToken propagation?

A: Create a pre-cancelled CancellationTokenSource and verify the code throws OperationCanceledException:

[Fact]
public async Task ProcessAsync_CancellationRequested_ThrowsOperationCanceled()
{
    var cts = new CancellationTokenSource();
    cts.Cancel(); // pre-cancel

    var sut = new DataProcessor();

    await Assert.ThrowsAsync<OperationCanceledException>(
        () => sut.ProcessAsync(cts.Token));
}

// Also verify that the token is passed through to dependencies
[Fact]
public async Task ProcessAsync_PassesCancellationTokenToRepository()
{
    var repo = new Mock<IDataRepository>();
    var sut = new DataProcessor(repo.Object);
    var cts = new CancellationTokenSource();

    await sut.ProcessAsync(cts.Token);

    repo.Verify(r => r.LoadAsync(cts.Token), Times.Once);
}
Q: How do you test timeout behavior?

A:

[Fact]
public async Task SlowOperation_ExceedsTimeout_ThrowsTimeoutException()
{
    var slowDependency = new Mock<IExternalService>();
    slowDependency.Setup(s => s.CallAsync(It.IsAny<CancellationToken>()))
                  .Returns(async (CancellationToken ct) =>
                  {
                      await Task.Delay(TimeSpan.FromSeconds(30), ct);
                      return "result";
                  });

    var sut = new ResilientCaller(slowDependency.Object,
        timeout: TimeSpan.FromMilliseconds(100));

    await Assert.ThrowsAsync<TimeoutException>(
        () => sut.CallWithTimeoutAsync(CancellationToken.None));
}
Q: What are the common async testing pitfalls?

A:

  • Never use .Result or .Wait() in tests; they can deadlock. Use async Task test methods instead.
  • Never use async void in test methods; the framework cannot catch exceptions.
  • Use ConfigureAwait(false) in library code, but it is unnecessary in test code.
  • Avoid Task.Delay for synchronization; use SemaphoreSlim, TaskCompletionSource, or ManualResetEventSlim.

// BAD: deadlock risk
[Fact]
public void GetUser_Bad_DeadlockRisk()
{
    var result = _sut.GetUserAsync("u1").Result; // DEADLOCK in some contexts
    Assert.NotNull(result);
}

// GOOD: async all the way
[Fact]
public async Task GetUser_Good_AsyncAllTheWay()
{
    var result = await _sut.GetUserAsync("u1");
    Assert.NotNull(result);
}

// Using TaskCompletionSource for deterministic synchronization
[Fact]
public async Task BackgroundWorker_ProcessesItemWhenSignaled()
{
    var tcs = new TaskCompletionSource<bool>();
    var sut = new BackgroundWorker(onComplete: () => tcs.SetResult(true));

    sut.Enqueue("item-1");

    var completed = await Task.WhenAny(tcs.Task, Task.Delay(5000));
    Assert.Equal(tcs.Task, completed); // ensure it completed, not timed out
}

Code Coverage Strategies

Q: What does code coverage actually measure?

A: Code coverage tools (Coverlet, dotCover, OpenCover) report what percentage of code was executed during test runs:

  • Line coverage: was each line executed?
  • Branch coverage: was each if/else/switch path taken?
  • Method coverage: was each method called?

Q: Why is 100% code coverage not always desirable?

A: Chasing 100% leads to testing trivial code (DTOs, auto-properties), testing framework behavior, and writing shallow tests that execute lines without meaningful assertions. It creates maintenance overhead and false confidence. A better approach is targeting coverage on high-risk code (auth, payments, domain rules), using branch coverage over line coverage, and supplementing with mutation testing.

// This code has 100% line coverage but tests nothing meaningful
[Fact]
public void Constructor_SetsProperties()
{
    var dto = new OrderDto { Id = "1", Amount = 50 };
    Assert.Equal("1", dto.Id);
    Assert.Equal(50, dto.Amount);
}
// Testing auto-properties on a DTO adds maintenance cost without value.
Q: What is a practical code coverage strategy?

A:

  • Target risk, not percentages. Focus coverage on critical paths: authentication, payment processing, domain rules, error handling.
  • 70-80% line coverage is a healthy starting point for most teams; the last 20% often requires disproportionate effort.
  • Branch coverage matters more than line coverage for complex logic with many conditional paths.
  • Use coverage as a diagnostic tool, not a target. Low coverage in a file signals it may be untested; high coverage does not guarantee correctness.
  • Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."

Q: What is mutation testing, and how does it improve on code coverage?

A: Mutation testing introduces small changes (mutants) to production code, such as changing > to >= or replacing a return value, and then runs your tests. If the tests still pass, the mutant survived, revealing a gap in your assertions. The mutation score (killed/total mutants) is a stronger quality metric than line coverage because it measures whether tests actually detect behavioral changes, not just whether code was executed.

// Production code
public decimal CalculateDiscount(decimal price, bool isPremium)
{
    if (isPremium)
        return price * 0.20m; // 20% discount
    return price * 0.05m;     // 5% discount
}

// A mutation tool might change 0.20m to 0.21m.
// If no test fails, the mutant survives: you have a gap in your assertions.
// Good tests would catch this:
[Theory]
[InlineData(100, true, 20)]   // catches mutation of 0.20m
[InlineData(100, false, 5)]   // catches mutation of 0.05m
public void CalculateDiscount_ReturnsExactDiscount(
    decimal price, bool isPremium, decimal expected)
{
    var sut = new PriceService();
    sut.CalculateDiscount(price, isPremium).ShouldBe(expected);
}
Q: What are better coverage metrics than line coverage?

A:

  • Mutation score (Stryker.NET) is a stronger indicator of test quality.
  • Coverage of changed lines in PRs is more actionable than overall project coverage.
  • Test-to-code ratio combined with defect rates gives a more holistic view.


Stryker.NET configuration example:

{
  "stryker-config": {
    "project": "MyApp.csproj",
    "test-projects": ["MyApp.Tests.csproj"],
    "reporters": ["html", "progress"],
    "mutate": ["src/**/*.cs"],
    "thresholds": {
      "high": 80,
      "low": 60,
      "break": 50
    }
  }
}

BDD with SpecFlow and Gherkin

Q: How does BDD with SpecFlow differ from regular unit testing?

A: BDD uses Gherkin (Given/When/Then) scenarios written in natural language, making them readable by non-technical stakeholders. Step definitions bind scenarios to code. BDD excels when requirements are complex and involve business collaboration. The key difference is that BDD scenarios serve as living documentation of business rules, while unit tests focus on technical correctness.

Feature: Account Transfers
  As a bank customer
  I want to transfer funds between accounts
  So that I can manage my money conveniently

  Scenario: Successful transfer with sufficient balance
    Given an account "A" with balance $500
    And an account "B" with balance $100
    When I transfer $200 from "A" to "B"
    Then account "A" should have balance $300
    And account "B" should have balance $300

  Scenario: Transfer fails with insufficient balance
    Given an account "A" with balance $50
    And an account "B" with balance $100
    When I transfer $200 from "A" to "B"
    Then the transfer should fail with "Insufficient funds"
    And account "A" should have balance $50
    And account "B" should have balance $100

  Scenario Outline: Various transfer amounts
    Given an account "A" with balance $<start>
    When I transfer $<amount> from "A" to "B"
    Then account "A" should have balance $<remaining>

    Examples:
      | start | amount | remaining |
      | 1000  | 100    | 900       |
      | 500   | 500    | 0         |
      | 300   | 50     | 250       |
Q: How do you implement SpecFlow step definitions in C#?

A:

[Binding]
public class TransferSteps
{
    // One shared in-memory repository: accounts created in the Given steps must
    // be visible to the TransferService, so both use the same instance.
    private readonly InMemoryAccountRepository _accounts = new();
    private readonly TransferService _sut;
    private Exception? _caughtException;

    public TransferSteps() => _sut = new TransferService(_accounts);

    [Given(@"an account ""(.*)"" with balance \$(.*)")]
    public void GivenAnAccountWithBalance(string name, decimal balance)
    {
        _accounts.Add(new Account(name, balance));
    }

    [When(@"I transfer \$(.*) from ""(.*)"" to ""(.*)""")]
    public async Task WhenITransfer(decimal amount, string from, string to)
    {
        try
        {
            await _sut.TransferAsync(from, to, amount);
        }
        catch (Exception ex)
        {
            _caughtException = ex;
        }
    }

    [Then(@"account ""(.*)"" should have balance \$(.*)")]
    public void ThenAccountShouldHaveBalance(string name, decimal expected)
    {
        _accounts.Get(name).Balance.ShouldBe(expected);
    }

    [Then(@"the transfer should fail with ""(.*)""")]
    public void ThenTheTransferShouldFail(string message)
    {
        _caughtException.ShouldNotBeNull();
        _caughtException.Message.ShouldContain(message);
    }
}
Q: When does BDD add value versus overhead?

A: BDD adds value when requirements are complex and involve multiple stakeholders, when business rules change frequently and need living documentation, and when QA teams write or review scenarios in plain language. BDD adds overhead for small teams where developers own the full stack (the Gherkin layer may be redundant), for highly technical/infrastructure code where scenarios feel forced, and when step definitions become a maintenance burden larger than the tests themselves.


Testing Anti-Patterns (Brittle Tests, Testing Implementation Details)

Q: What are the most common testing anti-patterns?

A: The Liar (tests with no meaningful assertions), brittle tests that verify implementation details, the Giant (one test covering too many behaviors), excessive setup indicating too many dependencies, shared mutable state causing order-dependent failures, and copy-paste tests that become a maintenance burden. The cure is testing behavior over implementation, keeping tests focused, and using builders/fixtures to reduce duplication.

Q: What makes a test brittle?

A: A brittle test breaks when you refactor production code without changing behavior. The most common cause is testing implementation details rather than observable outcomes:

// BAD: testing exact method call sequence; any refactoring breaks these tests
mockRepo.Verify(r => r.OpenConnection(), Times.Once);
mockRepo.Verify(r => r.BeginTransaction(), Times.Once);
mockRepo.Verify(r => r.SaveAsync(order), Times.Once);
mockRepo.Verify(r => r.CommitTransaction(), Times.Once);
mockRepo.Verify(r => r.CloseConnection(), Times.Once);

// BETTER: test the observable outcome
var savedOrder = await repo.GetByIdAsync(order.Id);
savedOrder.ShouldNotBeNull();
savedOrder.Status.ShouldBe(OrderStatus.Confirmed);
Q: What does "testing implementation details" look like?

A:

// BAD: asserting on an internal data structure
// (a private field, reachable only via reflection or InternalsVisibleTo)
Assert.Equal(3, sut._internalCache.Count); // accessing a private field

// BETTER: assert on public behavior
var result = sut.GetAllCachedItems();
result.Count.ShouldBe(3);

// BAD: verifying that a specific private method was called
// (this test will break if you rename or restructure the private method)

// BETTER: verify the external effect of calling the public method
Q: What is the ice cream cone anti-pattern?

A: An inverted test pyramid: many end-to-end tests and few unit tests. It produces slow feedback, flaky CI, and hard-to-diagnose failures:

    Correct (pyramid):          Anti-pattern (ice cream cone):

         /\  E2E                    __________  E2E
        /  \                       |__________|
       /    \  Integration         |          |  Integration
      /______\                     |          |
     /        \  Unit              |__________|  Unit
    /____________\                     |  |
Q: What is The Liar anti-pattern?

A: A test that passes but does not actually verify behavior. The assertions are missing or too weak:

// BAD: The Liar โ€” test passes but verifies nothing
[Fact]
public async Task ProcessOrder_DoesNotThrow()
{
    var sut = new OrderProcessor(new Mock<IOrderRepository>().Object);
    await sut.ProcessAsync(new Order("o1", 100m));
    // No assertions! This always passes even if the code is wrong.
}

// GOOD: actually verify the outcome
[Fact]
public async Task ProcessOrder_ValidOrder_PersistsWithConfirmedStatus()
{
    var repo = new FakeOrderRepository();
    var sut = new OrderProcessor(repo);

    await sut.ProcessAsync(new Order("o1", 100m));

    var saved = await repo.GetByIdAsync("o1");
    saved.ShouldNotBeNull();
    saved.Status.ShouldBe(OrderStatus.Confirmed);
}
Q: What is the "testing the mock" anti-pattern?

A: Verifying behavior you configured on the mock rather than on the system under test:

// BAD: you are testing Moq, not your code
var mock = new Mock<ICalculator>();
mock.Setup(c => c.Add(2, 3)).Returns(5);
Assert.Equal(5, mock.Object.Add(2, 3)); // This tests Moq itself

// GOOD: test your code that USES the calculator
var calc = new Mock<ICalculator>();
calc.Setup(c => c.Add(It.IsAny<int>(), It.IsAny<int>())).Returns(10);
var sut = new InvoiceService(calc.Object);

var invoice = sut.CalculateTotal(items);
invoice.Total.ShouldBe(10);
Q: What causes slow tests and how do you fix them?

A:

  • Tests that hit real databases, networks, or file systems without justification: use fakes or Testcontainers.
  • Tests that use Thread.Sleep or Task.Delay for synchronization: use TaskCompletionSource or SemaphoreSlim.
  • Tests that boot the entire application when a unit test would suffice: push most coverage to unit tests.
  • Tests that create expensive resources per test instead of sharing them via fixtures: use IClassFixture<T>.

Q: What does excessive setup signal?

A: Fifty lines of Arrange for one line of Act signals that the system under test has too many dependencies. It is a design smell; consider breaking the class into smaller, focused components. In the meantime, use builders, AutoFixture, or shared factory methods to reduce boilerplate.
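One way to shrink the Arrange block is a test data builder. A minimal sketch; OrderBuilder, the Order shape, and its defaults are illustrative, not from these notes:

```csharp
// Test data builder: sensible defaults, override only what the test cares about.
public class OrderBuilder
{
    private string _id = "o1";
    private decimal _amount = 100m;
    private OrderStatus _status = OrderStatus.Pending;

    public OrderBuilder WithId(string id) { _id = id; return this; }
    public OrderBuilder WithAmount(decimal amount) { _amount = amount; return this; }
    public OrderBuilder WithStatus(OrderStatus status) { _status = status; return this; }

    public Order Build() => new Order(_id, _amount, _status);
}

// In the test, only the detail that matters stands out:
var zeroAmountOrder = new OrderBuilder().WithAmount(0m).Build();
```

Keep builders next to the domain model so they evolve together with it.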

Q: When should you delete a test?

A: Delete tests that test deleted features, test implementation details that change with every refactor, duplicate other tests without adding coverage, test third-party library behavior (that is their responsibility), or are permanently flaky despite attempts to fix them. Dead tests erode trust in the suite and slow CI. Regularly prune tests during refactoring sessions.


Additional Interview Questions

Q: Explain the testing pyramid and why it matters.

A: The testing pyramid has many fast unit tests at the base, fewer integration tests in the middle, and a small number of end-to-end tests at the top. This structure optimizes for fast feedback (unit tests run in milliseconds), targeted wiring verification (integration tests), and confidence in critical user journeys (E2E). Inverting the pyramid (the ice cream cone) results in slow CI, flaky tests, and hard-to-diagnose failures.

Q: How do you decide between classicist and mockist TDD?

A: I use classicist (real objects, fakes) for domain logic and algorithms where state verification is natural and refactoring resilience matters. I use mockist (mocks, interaction verification) for orchestration layers and infrastructure boundaries where verifying that the right calls happened is the core behavior. Most production teams blend both styles depending on the layer they are testing.

| Consideration | Classicist | Mockist |
| --- | --- | --- |
| Refactoring resilience | High | Lower |
| Design pressure | Moderate | High (pushes small classes) |
| Setup complexity | Can be higher (real objects) | Can be higher (mock configuration) |
| Best for | Domain logic, algorithms | Interaction-heavy orchestration |

// Classicist: use a real in-memory repository
[Fact]
public async Task PlaceOrder_ValidOrder_PersistsToRepository()
{
    var repo = new FakeOrderRepository();
    var sut = new OrderService(repo, new RealPriceCalculator());

    await sut.PlaceOrderAsync(new Order("o1", "AAPL", 10));

    var saved = await repo.GetByIdAsync("o1");
    saved.ShouldNotBeNull();
    saved.Status.ShouldBe(OrderStatus.Placed);
}

// Mockist: verify interactions with mocked dependencies
[Fact]
public async Task PlaceOrder_ValidOrder_CallsRepositoryAndCalculator()
{
    var repo = new Mock<IOrderRepository>();
    var calc = new Mock<IPriceCalculator>();
    calc.Setup(c => c.Calculate(It.IsAny<Order>())).Returns(150m);

    var sut = new OrderService(repo.Object, calc.Object);
    await sut.PlaceOrderAsync(new Order("o1", "AAPL", 10));

    repo.Verify(r => r.SaveAsync(It.Is<Order>(o => o.Id == "o1")), Times.Once);
    calc.Verify(c => c.Calculate(It.IsAny<Order>()), Times.Once);
}
Q: How do you test code that depends on the current time?

A: Abstract the clock behind an interface like ISystemClock, or use the TimeProvider abstraction introduced in .NET 8. Inject it as a dependency; in tests, provide a fake clock that returns a fixed or controlled time. This makes tests deterministic. (Microsoft also ships a ready-made FakeTimeProvider in the Microsoft.Extensions.TimeProvider.Testing package; a hand-rolled version is shown here for clarity.)

public class FakeTimeProvider : TimeProvider
{
    private DateTimeOffset _now;
    public FakeTimeProvider(DateTimeOffset startTime) => _now = startTime;
    public override DateTimeOffset GetUtcNow() => _now;
    public void Advance(TimeSpan duration) => _now = _now.Add(duration);
}

// Usage in a test
[Fact]
public void TokenIsExpired_WhenCurrentTimeExceedsExpiry()
{
    var clock = new FakeTimeProvider(new DateTimeOffset(2025, 6, 15, 12, 0, 0, TimeSpan.Zero));
    var token = new AuthToken(
        expiresAt: new DateTimeOffset(2025, 6, 15, 11, 0, 0, TimeSpan.Zero));

    var sut = new TokenValidator(clock);

    sut.IsExpired(token).ShouldBeTrue();
}

[Fact]
public void TokenIsNotExpired_WhenCurrentTimeBeforeExpiry()
{
    var clock = new FakeTimeProvider(new DateTimeOffset(2025, 6, 15, 10, 0, 0, TimeSpan.Zero));
    var token = new AuthToken(
        expiresAt: new DateTimeOffset(2025, 6, 15, 11, 0, 0, TimeSpan.Zero));

    var sut = new TokenValidator(clock);

    sut.IsExpired(token).ShouldBeFalse();
}
Q: What is property-based testing, and when is it more effective than example-based testing?

A: Property-based testing generates hundreds of random inputs and verifies that invariants hold for all of them. It is more effective for mathematical operations (commutativity, associativity), serialization roundtrips, parsers, sorting algorithms, and any code with clear invariants. FsCheck is the primary .NET library.

using FsCheck;
using FsCheck.Xunit;

public class SortingProperties
{
    [Property]
    public Property Sort_PreservesLength(List<int> input)
    {
        var sorted = input.OrderBy(x => x).ToList();
        return (sorted.Count == input.Count).ToProperty();
    }

    [Property]
    public Property Sort_OutputIsOrdered(List<int> input)
    {
        var sorted = input.OrderBy(x => x).ToList();
        var isOrdered = sorted.Zip(sorted.Skip(1), (a, b) => a <= b).All(x => x);
        return isOrdered.ToProperty();
    }
}
Q: How do you keep tests fast in a large codebase?

A: Maximize unit tests (milliseconds each), minimize integration tests to critical wiring paths, parallelize test execution (xUnit does this by default), use in-memory fakes instead of real databases where possible, avoid Thread.Sleep/Task.Delay, share expensive fixtures with IClassFixture, and run heavy integration suites on separate CI stages rather than on every push.
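The fixture-sharing point can be sketched as follows; DatabaseFixture and OrderRepositoryTests are illustrative names:

```csharp
using System;
using Xunit;

// Shared fixture: xUnit creates it once per test class and disposes it after
// the last test, so an expensive resource is not rebuilt for every test.
public class DatabaseFixture : IDisposable
{
    public DatabaseFixture()
    {
        // Expensive one-time setup, e.g. starting a Testcontainers database.
    }

    public void Dispose()
    {
        // One-time teardown after all tests in the class have run.
    }
}

public class OrderRepositoryTests : IClassFixture<DatabaseFixture>
{
    private readonly DatabaseFixture _db;

    // xUnit injects the single shared instance into each test class instance.
    public OrderRepositoryTests(DatabaseFixture db) => _db = db;
}
```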

Q: How do you handle test data setup for complex domain objects?

A: Use the Builder pattern or AutoFixture to construct objects with sensible defaults and override only what matters for each test. Centralize builders alongside the domain model so they evolve together. Avoid constructing complex object graphs inline in every test.
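With AutoFixture the same idea looks like this; Order and Customer are illustrative types with writable properties:

```csharp
using AutoFixture;

// AutoFixture fills in everything the test does not care about.
var fixture = new Fixture();

// Anonymous values for irrelevant properties, an explicit value for the one that matters:
var order = fixture.Build<Order>()
                   .With(o => o.Amount, 100m)
                   .Create();

// Or a fully anonymous object when no particular property matters:
var customer = fixture.Create<Customer>();
```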

Q: How do you keep integration tests parallelizable without flakiness?

A: Use unique resource identifiers (database names, queue topics, blob prefixes) per test run, isolate shared state through fixtures, and ensure teardown cleans resources. Mark collection fixtures to avoid serial bottlenecks and rely on containerized dependencies to avoid cross-test interference.
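A sketch of per-collection isolation with unique resource names, assuming xUnit collection fixtures; PostgresFixture and the collection name are illustrative:

```csharp
using System;
using System.Threading.Tasks;
using Xunit;

// One shared fixture per collection; tests in different collections run in parallel.
[CollectionDefinition("postgres")]
public class PostgresCollection : ICollectionFixture<PostgresFixture> { }

public class PostgresFixture : IAsyncLifetime
{
    // A unique database name per run prevents cross-test interference.
    public string DatabaseName { get; } = $"tests_{Guid.NewGuid():N}";

    public Task InitializeAsync() => Task.CompletedTask; // e.g. start container, create DB
    public Task DisposeAsync() => Task.CompletedTask;    // e.g. drop DB, stop container
}

[Collection("postgres")]
public class OrderPersistenceTests
{
    private readonly PostgresFixture _pg;
    public OrderPersistenceTests(PostgresFixture pg) => _pg = pg;
}
```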

Q: How do you validate observability instrumentation through tests?

A: Attach in-memory exporters for OpenTelemetry during integration tests, trigger key user journeys, and assert on emitted spans/metrics/logs (names, attributes, and error flags). This ensures dashboards and alerts stay trustworthy without requiring external telemetry backends.
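A sketch of the tracing side, assuming the OpenTelemetry.Exporter.InMemory package; the ActivitySource name "MyApp.Orders" and the "PlaceOrder" span are illustrative:

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using OpenTelemetry;
using OpenTelemetry.Trace;
using Xunit;

// Capture spans in a plain list instead of exporting to a telemetry backend.
var exportedSpans = new List<Activity>();

using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("MyApp.Orders")          // must match the app's ActivitySource name
    .AddInMemoryExporter(exportedSpans)
    .Build();

// ... trigger the user journey under test, then assert on the captured spans:
var span = Assert.Single(exportedSpans, a => a.DisplayName == "PlaceOrder");
Assert.Equal(ActivityStatusCode.Ok, span.Status);
```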


Summary

| Topic | Key Takeaway |
| --- | --- |
| Red-Green-Refactor | Discipline of small steps: fail, pass, improve |
| TDD vs Test-First vs Test-Last | TDD drives design; test-last is acceptable with discipline |
| xUnit | Modern default; [Fact]/[Theory], constructor lifecycle, parallel by default |
| AAA Pattern | Arrange-Act-Assert keeps tests readable and diagnosable |
| Mocking (Moq / NSubstitute) | Control inputs (stubs) and verify outputs (mocks) |
| Test Doubles | Know dummy/stub/mock/spy/fake; interviewers test vocabulary |
| Integration Testing | WebApplicationFactory for APIs; Testcontainers for real infra |
| Async Testing | Always use async Task; never .Result or .Wait() |
| Code Coverage | Target risk, not percentages; 100% is a vanity metric |
| BDD / SpecFlow | Living documentation for complex business rules |
| Anti-Patterns | Test behavior, not implementation; keep tests fast and focused |
| Property-Based Testing | Verify invariants over random inputs with FsCheck |
| Mutation Testing | Stronger quality signal than coverage; use Stryker.NET |