The Misunderstood Single Responsibility Principle
The Single Responsibility Principle (SRP) is the first of the SOLID design principles, which have been highly influential in software engineering since Robert Martin ("Uncle Bob") introduced them in 2000. Unfortunately, this particular principle is often misunderstood, and when coupled with blind faith, that misunderstanding can engender overly simplistic thinking and design mistakes.
The principle is often framed as "each software module should have one and only one reason to change." The name then seems to suggest that a module should have a "single responsibility." That seems easy enough to understand: the module should only do one thing, right? No. There's a different rule of thumb for that: a function should only do one thing. Have you ever held your nose while naming a function something like "ImportDataAndLoadIntoDatabase"? We know this instinctively: a function should have a single, well, function.
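To make the "function should do one thing" rule concrete, here's a minimal sketch (the names are illustrative, riffing on the hypothetical function above, and the `db` object is assumed to expose an `insert()` method):

```python
# A name like "import_data_and_load_into_database" is a smell: it admits
# the function has two jobs. Splitting it gives each piece one function.

def import_data(path):
    """Job 1: parse raw lines from a file into records."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def load_into_database(records, db):
    """Job 2: persist parsed records; db is anything with an insert() method."""
    for record in records:
        db.insert(record)

def import_data_and_load_into_database(path, db):
    """The original two-job function becomes a thin composition."""
    load_into_database(import_data(path), db)
```

Each half can now be tested and reused independently, which is the practical payoff of the rule.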
The SRP is a license to sometimes forgive yourself for what feels like breaking cohesion. The principle actually means that the code in a given module must be owned by one and only one ultimate business owner (or functional unit); if that isn't true, you have to break it up. Uncle Bob uses an example (in writing and in talks) where the CFO and COO both depend on the code that calculates employee hours. Calculating hours is simple math that everyone should agree on, right? Then one executive requests a change to that code and it breaks the other's business rule. The hours calculation cannot be shared: that piece of code should have a Finance version and an Operations version, even though that violates our DRY and cohesion sensibilities. This is very different from the way I have seen most people apply the SRP.
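A hedged sketch of that failure mode, with illustrative names (Uncle Bob's example is about payroll; the `Timecard` type and function names here are my own invention):

```python
from dataclasses import dataclass

@dataclass
class Timecard:
    hours: float

# BEFORE: one calculation serving two owners. If the COO asks to cap
# hours or exclude breaks here, the CFO's payroll numbers silently
# change too -- two "reasons to change" in one piece of code.
def regular_hours(timecards):
    return sum(tc.hours for tc in timecards)

# AFTER: duplicate the trivial math so each owner controls their copy.
# It offends DRY, but now a Finance change cannot break Operations.
def regular_hours_for_payroll(timecards):      # owned by Finance (CFO)
    return sum(tc.hours for tc in timecards)

def regular_hours_for_scheduling(timecards):   # owned by Operations (COO)
    return sum(tc.hours for tc in timecards)
```

The two copies start out identical; the point is that they are now free to diverge when their owners' rules diverge.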
While I don't need to apply it every day, the SRP does align with my experience. I worked on a Transportation Management System, and we had an object that represented truck movements. It was operational in nature - you'd assign a driver, arrival and departure times for shipping locations, etc. There was another version of that object that was used for paying the drivers. The objects were one-to-one, with a lot of apparent duplication. It really drove me crazy! Several times I tried to treat them uniformly with the same code, and it inevitably failed. And guess what? One object belonged to Operations and one belonged to (you guessed it) Finance. It was exactly the thing Uncle Bob was talking about. While the SRP is true, it's sometimes hard to apply proactively, as who (or which group) is the "ultimate owner" of something can be squishy and take time to become clear, especially if you're building a system from scratch with many stakeholders. From another angle, I see the SRP as a natural consequence of DDD principles: it's unsafe to have a model that is shared across bounded contexts.
Unfortunately, I've seen many people follow the incorrect understanding of the SRP with blind faith. Often people who like tiny classes will use it to justify making many more, even tinier classes. What starts as a simple, easy-to-understand class ends up broken into many abstractions that are now harder to understand collectively. Regardless of whether you like that style, the SRP on its own will not drive you there. I recently read Adaptive Code (2nd Edition, Microsoft Press), which, as of today, has a 4.7/5 rating on Amazon with 123 ratings. There is a whole chapter on the SRP. And throughout, it incorrectly explains the SRP as relating to classes doing too many things and justifies design decisions on that basis alone. For cohesion reasons, classes shouldn't do too many things, so the design decisions aren't always bad. But, as an example, a class with 2 pages of code is broken into ~12 classes and interfaces, all in the name of the misunderstood SRP. And this is a well-respected book by an accomplished author, not some random blog post.
We have to encourage emerging engineers to understand the reasoning behind these principles, to wrestle with them, to apply them when they make sense and reject them when they don't. It's also instructive and fascinating to go back to the sources. The SOLID principles are rooted in core design principles that pre-date our generation -- like the Parnas paper that introduces the idea of information-hiding and the book where Constantine introduces the ideas of coupling and cohesion. Uncle Bob credits these works with influencing his thinking. In his talks, usually after his requisite, random physics ramblings, he always makes an effort to connect the present with the past. That's a great thing. But there aren't many Uncle Bobs around. As an industry, we don't do a good job of passing down experience across generations. What can we do about that? While it's an exaggeration to say that "there is nothing new under the sun", it's amazing how often we wrestle with old problems and are unaware that we can stand on the shoulders of those that came before us.
Why do we as engineers like these design principles? (I know I do.) Aside from their heuristic utility, I think it's because they provide what feel like constraints for what would otherwise be an unbounded solution space. They reduce anxiety and introduce some semblance of determinism where none can be found. Ordinarily we're a rebellious lot and hate constraints (try telling someone they must use vim or emacs). But constraints are our friend when we're facing the abyss of an open-ended problem, so we reach for them. But we sometimes reach too far. Experience-based observations become principles, which become rules of thumb, which become rules proper that, finally, ossify into law. Let's be careful here. Physics aside, the only absolute law I've found in software engineering is that there are no laws to be found. After all, we're building castles out of bits.