Lately I've been running into a rather interesting problem. I'll go write out a new test, and I won't get one test that fails -- I'll get ... 30. At first my reaction was "oh !!@$%, I broke something" -- not the case. Well, I digress, I did mess something up: I changed the value on one thing, and that caused the next pile of tests to fail.
Some could argue "that's bad design," blah blah -- yeah, that's great, but it still wouldn't address the real problem: bad test data setup. So I've set up what I've begun to call a "sanity" test. At first I meant sanity to mean sanitary, but lately sanity means "keeps you from going insane." So how do I do this? Easy, and it really does help in the long run when something is broken.
First off, most tests have some kind of setup. Take this base class, for example ...
public abstract class TestBase
{
    protected TestClassThatNeedsSomeLovin testClass;
    protected const string dummyString = "some value";
    protected const int dummyInt = 1;
    protected DateTime dummyDate = DateTime.Today;

    [SetUp]   // NUnit-style attributes assumed; swap in your framework's equivalents
    public virtual void SetupData() { testClass = new TestClassThatNeedsSomeLovin(); }

    [Test]    // one cheap test that only checks the shared test data itself
    public void SanityTest() { Assert.IsNotNull(testClass); }
}
And that's it for the "base" class for tests. This particular one has special cases, so it made sense to do this; otherwise, pound out your test data right there in SetupData, done and done. Now, every time I have a pile of tests that fail for some unknown reason, I can just run my sanity test to make sure "yeah, it's valid as it sits" -- meaning all my test data is correct and valid. Otherwise it's kind of silly to be chasing after 30 failed tests when you forgot to change your dummyString to dummyEmail :-)
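To make that concrete, here's a minimal sketch of what a derived fixture might look like. The base-class name TestBase, the fixture name LovinTests, and the NUnit attributes are all my assumptions for illustration -- the point is just that the sanity test comes along for free in every fixture that inherits the base:

```csharp
using NUnit.Framework;

[TestFixture]
public class LovinTests : TestBase  // inherits SanityTest from the base (name assumed)
{
    // Special-case data goes on top of the shared setup.
    [SetUp]
    public override void SetupData()
    {
        base.SetupData();
    }

    [Test]
    public void DoesSomethingWithTheDummyData()
    {
        // If SanityTest is green but this is red, the bug is in this
        // test, not in the shared test data.
        Assert.AreEqual("some value", dummyString);
    }
}
```

If the shared data ever goes bad, the sanity test goes red right alongside everything else, and it tells you exactly where to look first.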