
Data Access in Fail Tracker–Unit Testing


In my last post, I described the simple repository model that Fail Tracker uses to abstract LINQ to NHibernate, which handles all of the application's data access.  One reason I chose to implement an abstraction around NHibernate's ISession interface was to facilitate Test-Driven Development, a practice that isn't really feasible against LINQ to NHibernate directly, since it's implemented as an unmockable extension method.  While the abstraction made data access mockable, testing would still have been painful if it weren't for a base SpecsFor context that handles all the heavy lifting.

Mocking The Hard Way

Recall that the IRepository interface used by Fail Tracker has only two methods:
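Roughly speaking, it looks something like this (a sketch based on the description above; the exact signatures in Fail Tracker may differ slightly):

using System.Linq;

public interface IRepository<T>
{
    //Persists a new or updated entity via the underlying NHibernate ISession.
    void Save(T entity);

    //Exposes the entities as an IQueryable for LINQ queries.
    IQueryable<T> Query();
}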

There’s not much to do when mocking Save; we typically just check that it’s called with the correct object.  It’s the Query method that can be a bit of a pain.  Depending on the type of repository, we might need to set up one or more projects, users, or issues.  One approach is to set up custom test data for each test fixture, like so:
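(The example below is a sketch of the idea using SpecsFor's auto-mocking GetMockFor helper; DashboardController and the Issue properties are illustrative placeholders rather than Fail Tracker's actual types.)

using System.Collections.Generic;
using System.Linq;
using SpecsFor;

public class when_viewing_the_dashboard : SpecsFor<DashboardController>
{
    protected override void Given()
    {
        //Hand-rolled test data, repeated (with variations) in every fixture.
        var issues = new List<Issue>
        {
            new Issue { Title = "First issue" },
            new Issue { Title = "Second issue" }
        };

        //Query returns IQueryable, so wrap the in-memory list with AsQueryable().
        GetMockFor<IRepository<Issue>>()
            .Setup(r => r.Query())
            .Returns(issues.AsQueryable());
    }

    //...When() and the actual specs omitted.
}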

Since Query returns IQueryable, we can set up our mock to return an IEnumerable that’s been wrapped as an IQueryable.

While this will work on a small scale, it isn’t ideal.  First, you’re going to have a lot of duplicated setup code across your fixtures.  Second, your test data will vary enough from fixture to fixture that you won’t be able to easily remember what it looks like for any given one; you’ll have to dig in and re-read the setup whenever you revisit a fixture.  Let’s look at a better approach that uses a custom SpecsFor base class.

Mocking The Reusable Way

As mentioned, one downside to setting up mocks for each fixture individually is the repetition of code.  Even if you aren’t setting up exactly the same data, the code will still be largely identical.  Another downside is that the variance in test data obscures the context and forces you to dig into it any time you need to alter a test.  We can solve both of these issues by creating a custom SpecsFor base class and establishing consistent test data that we’ll leverage in all of our test cases.  Here’s the custom SpecsFor base class for Fail Tracker specs that require test data:
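(The version below is a sketch of the idea; the class name and test-data details are illustrative rather than Fail Tracker's exact code.)

using System.Linq;
using SpecsFor;

public abstract class SpecsWithTestData<T> : SpecsFor<T> where T : class
{
    //Well-known objects that every derived fixture can rely on.
    protected User TestUser;
    protected Project TestProject;
    protected Issue TestIssue;

    protected override void Given()
    {
        base.Given();

        TestUser = new User { Name = "Test User" };
        TestProject = new Project { Name = "Test Project" };
        TestIssue = new Issue { Title = "Test Issue", Project = TestProject };

        //Wire the consistent data into the mocked repositories in one place.
        GetMockFor<IRepository<User>>()
            .Setup(r => r.Query())
            .Returns(new[] { TestUser }.AsQueryable());

        GetMockFor<IRepository<Project>>()
            .Setup(r => r.Query())
            .Returns(new[] { TestProject }.AsQueryable());

        GetMockFor<IRepository<Issue>>()
            .Setup(r => r.Query())
            .Returns(new[] { TestIssue }.AsQueryable());
    }
}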

To utilize this data, all the test fixtures need to do is inherit from this custom class instead of the default SpecsFor<T> class:
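(Again, a sketch; the controller, action, and assertion below are placeholders.)

using System.Web.Mvc;
using NUnit.Framework;

public class when_viewing_the_issue_list : SpecsWithTestData<IssueController>
{
    private ViewResult _result;

    protected override void When()
    {
        //TestIssue, TestProject, and TestUser are already wired up by the base class.
        _result = (ViewResult)SUT.Index();
    }

    [Test]
    public void then_it_returns_the_issue_list_view()
    {
        Assert.That(_result, Is.Not.Null);
    }
}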

Another nice benefit: if my domain model changes in the future, I don’t have to hunt through setup code across dozens of test fixtures.  I can go straight to my custom SpecsFor base class and update my test data there, leaving my test fixtures unchanged.

What’s Next

I have one more post planned around Fail Tracker’s data access code, and it’s a doozy: row-level security.  In the next post, I’ll show you how you can easily implement row-level security by leveraging the decorator pattern.

