Tell me if this sounds familiar.

  • Something breaks, somewhere in the code.
  • It's reported, usually in some kind of panic.
  • Emails are exchanged.
  • "All hands on deck."
  • The problem is identified.
  • It had nothing to do with your product.

Working with a team, and noticing a lot of things being logged left and right, I mentioned how testing can build all kinds of confidence in a system. From small unit tests to full-on Selenium tests that mimic a user, these pieces all work together to get you feedback sooner. One part of this that is far too often overlooked is logging statements. This team was no different: they treated logging as a "we have to," without looking deeper into how those statements could be made more meaningful, and how they could tighten up debugging when something goes seriously wrong long after the details have been forgotten.

Why do you bother with logging statements in the first place?

"To know what happened"

Logging statements aren't something to overlook. They're code, real code, written for a human to read at some later time. The better that code is, and if it has a test around it, the faster I can find a logged message and the code associated with it. Maybe even fix the problem quickly and safely.

I went around the internet to see why so many devs dismiss this as well. The answer is simple, really: it's hard to do in most frameworks. Why? Older frameworks lean on static loggers (looking at you, Java), which makes it much harder to mock the logger and capture what was logged. On top of that, log statements just aren't seen as a valuable part of development, even in the almighty enterprise.
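
If the logger is injected rather than reached for statically, a test can hand in a fake that simply captures everything it's given. Here's a minimal sketch of that idea; ILogger, FakeLogger, and the Messages list are names I'm making up for illustration, not any particular framework's API.

using System.Collections.Generic;

// A small logging interface the production code depends on,
// instead of calling a static logger it can never swap out.
public interface ILogger {
  void Log(string message);
}

// The test double: it writes nothing anywhere, it just remembers
// every message so a test can assert on what was (or wasn't) logged.
public class FakeLogger : ILogger {
  public List<string> Messages { get; } = new List<string>();

  public void Log(string message) {
    Messages.Add(message);
  }
}

The class under test takes an ILogger in its constructor; production wires in the real logger, and the tests hand it a FakeLogger and inspect Messages afterwards.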

Are you sure you logged what you think you logged?

Prove it with a test!

Most logging statements I see go something like this...

_logger.log("Attempting the impossible");
try {
  _logger.log("here we go " + User.Name);
  _service.doStuff();
   _logger.log("yay " + objectOfSomeKind.Property + " did its thing and all is well!";
}
catch Exception ex {
  _logger.log("This thing went boom", objectOfSomeOtherKind.Property + " : failed in every way " + ex.Message)";
}
  _logger.log("All done!");

A simple test is to verify those logging statements ARE ACTUALLY LOGGING THE THING YOU WANT TO LOG. So many times a team goes back, writes the test, and finds out it isn't, or that the statement isn't particularly useful. A decent test does the easy stuff first: cover the successful path. Maybe it looks like this...

[Test]
public void process_logger_contains_4_messages(){
  _logger.Messages.Count.ShouldEqual(4);
}

... 

[Test]
public void process_logger_contains_message_about_finishing(){
  _logger.Messages.Contains("All done!").ShouldEqual(true);
}
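
The failure path deserves the same treatment. The sketch below assumes the FakeLogger from earlier is sitting in _logger, that _processor is the class under test, and that _service is a stub with a made-up ThrowOnDoStuff switch, all wired up in the test setup (which the snippets above also leave out).

[Test]
public void process_logger_contains_failure_message_when_service_throws(){
  // Force the collaborator to blow up so the catch block actually runs.
  _service.ThrowOnDoStuff = true;

  _processor.Process();

  // The failure should leave a trace in the log, not vanish silently.
  _logger.Messages.Exists(m => m.Contains("This thing went boom")).ShouldEqual(true);
}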

Not only does this prove the logging works, and works the way you think it does, it also calls its usefulness into question. Does it really matter if it logs that it completed a thing? Do we need four different logging statements, or would a single summary at the end be enough? What about that exception: can we do better, or include some extra information (or even a specific exception type) that will actually help when something goes wrong? These questions are important, and if they're not, why are you writing log statements in the first place?!
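
On that last question, the catch block is usually where a little extra effort pays off the most. Here's one way it could look, again only as a sketch: StuffProcessingException is a made-up custom exception type, and the rest of the names come from the example above.

try {
  _service.doStuff();
}
catch (Exception ex) {
  // Log the context you'll actually want when reading this later:
  // which user, which record, and what the underlying error was.
  _logger.Log("doStuff failed for " + User.Name + " on " +
              objectOfSomeOtherKind.Property + ": " + ex.Message);

  // Wrap it in a specific exception so callers can handle this failure
  // deliberately, keeping the original as the inner exception instead of
  // losing the stack trace.
  throw new StuffProcessingException("doStuff failed for " + User.Name, ex);
}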

It also helps find the code faster

Adding these tests documents what you were thinking at that point in time. If an issue comes up and one of these statements shows up in a log, it's easy to do a search for the message, find the tests, and run them. Those tests carry a good set of assumptions with them too: you tested some things and not others? Perhaps the untested ones weren't design considerations, or weren't important at the time. Now I have context, and I can go back and ask "hey, this broke, but it doesn't look like any of it mattered at the time... can you tell me more about it and whether we care?"