Performance Tests vs. Correctness Tests

I need to draw a distinction here right at the beginning between correctness tests (is the code executing as intended) and performance tests (is the code executing within the required time limits). These are two very different kinds of tests and they require very different test environments. It’s not an exaggeration to say that if you’re trying to test for SQL query performance in an environment designed for correctness, you’re wasting your time.

Whether Query A executes faster than Query B can depend on the number of CPUs, the amount of available memory, the amount of on-disk fragmentation, the MAXDOP setting of the server, and what other database activity is occurring. It’s entirely possible for Query A to run faster on a local 2 vCPU SQL Server container with MAXDOP=0, and for Query B to run faster on a 16 vCPU Azure SQL Hyperscale database with MAXDOP=8.

Setting up an environment for performance testing SQL queries is outside the scope of this series, but keep this inherent limitation in mind before you start trying to hand-tune LINQ code.

Local Development Lifecycle

Let’s start with the basic assumptions:

  • Your stack consists of one or more C# APIs or function apps communicating with a SQL database
  • Migrations are applied by grate executing raw SQL scripts exported from EF Core
  • All dependencies can be run locally via Docker Compose, Aspire, or similar
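As a sketch of the last assumption, a throwaway local SQL Server can be started with a single container. The container name, password, and image tag here are placeholders, not recommendations:

```shell
# Start a disposable SQL Server 2022 container for local development.
# MSSQL_SA_PASSWORD is a placeholder; use a locally-generated secret.
docker run -d \
  --name sql \
  -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=Your_strong_passw0rd" \
  -p 1433:1433 \
  mcr.microsoft.com/mssql/server:2022-latest
```

In a Docker Compose or Aspire setup the same container would be declared as a service rather than run by hand, but the environment variables and port are the same.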

Note we’re ignoring dependencies like queues, caches, etc., as this series is about database migrations.

Per what I said in the last article in this series (Learn Your Tools), knowing how to configure and orchestrate all of the local dependencies in the stack is a required skill. You need to know grate, SQL Server, and your orchestration/containerization platform, and know them well.

Bringing up the local development environment looks like this:

  1. The containerized SQL Server starts. There may be an existing database from a previous test run, or this may be an ex nihilo environment.
  2. The orchestrator invokes grate to apply the existing migrations.
  3. grate drops the target database. This can be done either by the --drop CLI option or a custom SQL script in the dropDatabase directory.
  4. grate creates the target database via the --adminconnectionstring CLI option.
  5. grate runs all the migrations on the filesystem
  6. grate populates the database with a well-defined set of sample data. This sample data is only used for local testing.
  7. Automated unit tests and/or manual exploratory testing are run by the developer.
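A minimal script layout supporting the steps above might look like this. The directory names follow grate’s folder conventions; the script names are hypothetical:

```text
db/
├── dropDatabase/                        # optional custom drop script (step 3)
├── up/                                  # one-time migration scripts, run in order (step 5)
│   ├── 0001_create_tables.sql
│   ├── 0002_add_indexes.sql
│   └── 9999_sample_data.env.LOCAL.sql   # sample data, local environments only (step 6)
└── views/                               # anytime scripts, re-run whenever they change
```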

Note this means that every time migrations are applied, the entire target database is dropped and recreated ex nihilo. This is by design. If the target database is not in a known state before any tests are executed, the results cannot be trusted, and the tests become fragile and unreliable.

Dropping and recreating the entire database from scratch on every migration run also prevents a problem inherent in iterative development of migrations: grate records a hash of every one-time script it runs, so if a one-time migration script is changed, on the second and subsequent migration runs grate will fail the entire transaction because it interprets the changed file as a corrupted one.

grate Environments

Obviously we don’t want the target database to be dropped and recreated in a QA or UAT environment (and certainly not in Production!). Similarly, we don’t necessarily want large quantities of sample data in QA, UAT or Prod. This is where grate’s environment-specific scripts feature comes in handy.

In short, naming a migration file somethingorother.env.LOCAL.sql means the script will only run if --environment=LOCAL is passed to grate as a CLI option. SQL scripts that do not have .env. somewhere in the filename will run regardless of the --environment option.
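The filename convention can be illustrated with a small shell sketch. This is a simplification of grate’s actual matching logic, written only to show the decision rule: files with no .env. token always run, and files with one run only for the matching environment.

```shell
#!/bin/sh
# Simplified sketch (NOT grate's real implementation) of the
# environment-specific script filter described above.
should_run() {
  file="$1"
  env="$2"
  case "$file" in
    *.env.*)
      # Environment-specific script: run only if the token matches.
      case "$file" in
        *".env.${env}."*) echo yes ;;
        *)                echo no  ;;
      esac
      ;;
    *)
      # No .env. token: runs in every environment.
      echo yes
      ;;
  esac
}

should_run "0001_create_tables.sql" "LOCAL"         # yes: no .env. token
should_run "9999_sample_data.env.LOCAL.sql" "LOCAL" # yes: token matches
should_run "9999_sample_data.env.LOCAL.sql" "UAT"   # no: token does not match
```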

So, in the local development orchestrator grate can be run with the following options:

grate \
   --environment=LOCAL \
   --drop \
   --connectionstring 'Server={{ SQL Server local address }};Database=myLocalDatabaseName;User Id=myUsername;Password=myPassword;' \
   --adminconnectionstring 'Server={{ SQL Server local address }};Database=master;User Id=myUsername;Password=myPassword;'

and a script in the up directory named 9999_sample_data.env.LOCAL.sql containing all the sample data necessary for the test suite.

The value of {{ SQL Server local address }} will depend on how you are hosting SQL Server. If it’s a Developer Edition instance installed directly on the OS, it will be localhost; if you’re using Docker Compose or Aspire, it will be the service name (probably something like sql). The important thing is that --adminconnectionstring connects to the master database, not the target database.
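The sample data script mentioned above might look like the following sketch. The Customers table and its columns are invented for illustration; your real sample data will mirror your own schema:

```sql
-- 9999_sample_data.env.LOCAL.sql
-- Runs only when --environment=LOCAL, thanks to the .env.LOCAL. token.
-- As a one-time script in up/, it runs once per (re)created database,
-- so plain INSERTs are sufficient; no idempotency guards are needed.
INSERT INTO dbo.Customers (Id, Name, Email)
VALUES
    (1, N'Test Customer A', N'a@example.test'),
    (2, N'Test Customer B', N'b@example.test');
```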