Defending Dependency Injection

August 15, 2007

A few days ago, I read an article on Jacob Proffitt's blog (Scruffy-Looking Cat Herder) about dependency injection. In it, Jacob questions whether dependency injection (DI) offers any real benefit beyond improved testability. I challenged his stance in a comment, and he replied. I highly suggest that you read the article and the comments, because he's posing a very good question.

First, a couple of definitions, in case you’re not familiar:

  1. Cohesion is a measure of how closely related the code within a given logical segment is. Highly-cohesive code is better code. (See also the Single Responsibility Principle, which describes the optimal level of cohesion in object-oriented software.)
  2. Coupling is a measure of how much a given logical segment of code relies on other segments. Loosely-coupled code is better code.

Essentially, Jacob asserts that dependency injection raises the coupling in your code by pushing knowledge of a type's dependencies outside of the type itself. As he says:

How can you say that dependency injection…creates loosely coupled units that can be reused easily when the whole point of DI is to require the caller to provide the callee’s needs? That’s an increase in coupling by any reasonable assessment.

This was precisely my first thought when I was originally exposed to the idea of dependency injection, because it goes directly against what we’ve been taught about object-oriented programming. The principle of encapsulation, one of the pillars of object-oriented design, states that if information doesn’t need to be exposed publicly from a class, it should be hidden from any code that consumes it. Encapsulation of data and logic is great, because it reduces the surface area of your classes, making it easier to understand how they work, and more difficult to break them. Here’s the secret that makes dependency injection worthwhile:

When it comes to dependencies, encapsulation is bad.

Consider the following code:

public abstract class Engine { ... }
public class V6Engine : Engine { ... }

public class Car
{
  private readonly Engine _engine;
  public Car()
  {
    _engine = new V6Engine();
  }
}

public class Program
{
  public static void Main()
  {
    // Car will always have a V6 engine.
    Car car = new Car();
  }
}

There's nothing "wrong" with this code. It works fine, and shows good use of polymorphism. However, your Car type, by definition, will always have a V6 engine. What happens if you need to create a car with a four-cylinder engine? You have to modify the implementation of the Car type. And what if it was implemented by a third party and you don't have the source?

Contrast the previous snippet with this one:

public abstract class Engine { ... }
public class V6Engine : Engine { ... }

public class Car
{
  private readonly Engine _engine;
  public Car(Engine engine)
  {
    _engine = engine;
  }
}

public class Program
{
  public static void Main()
  {
    Car v6car = new Car(new V6Engine());
    Car acuraTsx = new Car(new VTECEngine());
  }
}

Now we can create all sorts of cars with all sorts of engines. If you're a GoF fan, this is actually the Strategy pattern. Dependency injection, from my perspective, is basically the Strategy pattern used en masse.

However, as Jacob has pointed out, all we've done is push the requirement for creating an Engine out into the code that consumes our Car class. Jacob is 100% correct in saying that this increases your coupling. Now, instead of Car being the only type coupled to Engine, both Car and Program are essentially coupled to Engine, because to create a Car, you must first create an Engine.

This is why dependency injection frameworks like Ninject, Castle Windsor, and StructureMap exist: they fix this coupling problem by washing your code clean of the dependency resolution logic. In addition, they provide a deterministic point, in code or a mapping file, that describes how the types in your code are wired together.
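
As a rough sketch of what that looks like in practice (I'm borrowing the module/kernel syntax from an early Ninject build here; CarModule is just an illustrative name, and the exact API may differ):

public class CarModule : StandardModule
{
  public void Load()
  {
    // The binding between Engine and V6Engine lives in exactly one place.
    Binder.Bind<Engine, V6Engine>();
  }
}

public class Program
{
  public static void Main()
  {
    // Program no longer creates (or even mentions) an Engine; the kernel
    // resolves Car's constructor dependency from the binding above.
    // (This assumes the kernel can resolve the concrete Car type directly.)
    IKernel kernel = new StandardKernel(new CarModule());
    Car car = kernel.Get<Car>();
  }
}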

This is the basis for my assertion that dependency injection leads to loosely-coupled and highly-cohesive code. Once you start writing code that relies on a DI framework, the cost of wiring objects together falls to nearly zero. As a consequence, hitting the goal of Single Responsibility becomes dramatically simpler. Put another way, you are less likely to leave cross-cutting logic in a type where it doesn't belong. Your MessagingService needs configuration? No problem, write a ConfigurationService and add a dependency to it. Better yet, make your MessagingService dependent on an IConfigurationService interface, and then later, when you're reading configuration from a database rather than an XML file, you won't have to go through each of your services and rewrite their configuration logic.
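
Here's a rough sketch of that last point (GetSetting is a made-up member; the point is the shape of the dependency, not the interface itself):

public interface IConfigurationService
{
  string GetSetting(string key);
}

public class XmlConfigurationService : IConfigurationService { ... }
public class SqlConfigurationService : IConfigurationService { ... }

public class MessagingService
{
  private readonly IConfigurationService _configuration;

  // MessagingService declares what it needs, but has no idea (and doesn't care)
  // whether its settings come from an XML file, a database, or somewhere else.
  public MessagingService(IConfigurationService configuration)
  {
    _configuration = configuration;
  }
}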

Jacob also asks why we don't just use a Factory pattern (much like the provider model in ADO.NET, which he uses as an example in his article). Factory patterns are great for small implementations, but like dependency-injection-by-hand, they can get extremely cumbersome in larger projects. Abstract Factories are unwieldy at best, and relying on a bunch of static Factory Methods gives your code "static cling" (to steal a phrase from Bob Lee): static methods are the ultimate in concreteness, and make it vastly more difficult to alter your code.
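
For the record, this is the kind of static Factory Method I'm talking about (the factory itself is hypothetical):

public static class EngineFactory
{
  // A static Factory Method: every caller is hard-wired to this exact class.
  // You can't substitute a different factory without editing each call site,
  // which is the "static cling" Bob Lee describes.
  public static Engine CreateEngine()
  {
    return new V6Engine();
  }
}

public class Car
{
  private readonly Engine _engine;

  public Car()
  {
    _engine = EngineFactory.CreateEngine();
  }
}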

I mentioned in the comment on his article that I saw the "light" when it came to dependency injection, and Jacob asked me to share the light with him. Here it is: simply put, dependency injection makes your code easier to change. That's why it's so popular in Agile crowds, whose whole development practice is geared toward responding quickly to change.

Is it a silver bullet? No. Is it the only way to make code easy to change? No. Is it the best way? Maybe not. But it’s the best I’ve seen so far.

36 Comments
  1. Harry permalink

    Can you not end up with some hideous looking super complex config file just for wiring your objects together?

  2. Funny you should say that. As a project grows in size, the XML mapping file can get unwieldy as well. That’s exactly why I wrote Ninject. :) It lets you define your map using a DSL (domain-specific language) rather than XML.

    Still, I’d prefer a singular point of reference (even if it’s XML) to having to dig through a bunch of types’ constructors.

  3. Nice reply. Thanks for taking the time to explain. I think you need a better example, though.

    Here’s the problem: in your example, the engine of your car should be a public property anyway. Creating an example of a bad class and fixing it with Dependency Injection isn’t that convincing.

    Your assertion that “dependency injection makes your code easier to change” is revealing. It means that you are applying dependency injection in cases where you expect classes that use your object will need to alter the dependency in some way. This is an advantage over the factory pattern only if you have multiple cases in your execution context that need to have differences in the dependency. If this is the case, then I question the encapsulation in the first place as I have with your example above. In other words, it isn’t a valid “dependency” at all and *should* be exposed. In such cases, I’d prefer making it a property to begin with and maybe supply a constructor that allows you to alter it on instantiation if that’s a useful advantage.

    So I guess my counter is that when classes need to be able to configure a dependency that much, it shouldn't be encapsulated in the first place. Ideally, it should be a public (or whatever scope makes sense) property.

  4. @Jacob:
    I hate examples. I can never come up with good ones. :)
    You're exactly right that the Engine should be exposed as a public property. I took a shortcut because I'm lazy and didn't want to create the property in each code snippet. :) There are actually several types of dependency injection... I used constructor injection, whereas you're suggesting property injection. Neither is really "better" than the other, although in this case I would argue that constructor injection with a read-only property exposing the Engine would be the best way (since a car without an engine is pretty useless); there's a quick sketch at the end of this comment. Injecting is injecting, no matter how the dependency gets into the object.
    When I say "code should be written to change", I'm not suggesting that the dependent object is changing the dependency. I'm suggesting that eventually you will want to alter how your application works, and the "bindings" between types (to use a term from Ninject) that appear when you use dependency injection provide an integration point that's easy to alter. If you design your application well, you can easily alter how it works by "re-gluing" the type bindings as necessary.
    Consider again the configuration example, and you'll see where I'm going with it. Let's say your app starts out by reading its configuration from an XML file, and then you realize you want to store it in a database. If you've been diligent about separation of concerns, you don't need to go through each object that receives configuration information and alter it. Instead, you can just create a new implementation of your IConfigurationService, alter your dependency injection framework's type bindings (or XML mappings), and your code should work with the new infrastructure.
    If you've got some time and are interested in learning more about dependency injection, check out Bob Lee's talk on Guice, Google's dependency injection framework. Bob's got some great ideas, some of which I pilfered and put in Ninject. :)
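    Here's the quick sketch I promised (the read-only property is the only change from the snippet in the post):

    public class Car
    {
      private readonly Engine _engine;

      public Car(Engine engine)
      {
        _engine = engine;
      }

      // The engine is still visible to callers, but only the constructor can set it.
      public Engine Engine
      {
        get { return _engine; }
      }
    }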

  5. Eric Tatgenhorst permalink

    Nice job dingletwitter.

  6. @Eric: Who invited you? Get back in your cage! :)

  7. Guice is interesting. Similar to the Castle Windsor product you mentioned in your comment on my blog post. I wasn’t thrilled that they started the talk with a bunch of straw-man arguments and it’s interesting that they gripe about factories leading to static-cling when their own framework requires that you use their static method to instantiate all your classes. Talk about cling! Indeed, you could view Guice as the proverbial factory factory they talk about with such derision. The talk was also revealing in how often they reference Unit Testing. As I said in my post, I think the popularity of DI is because it allows you to unit test without invoking buried dependencies. If there were no other way to unit test without invoking those objects, DI would probably be worth it. Since that isn’t the case in .Net (at least since TypeMock was released), I’m not seeing much point. Yeah, a DI framework makes some interesting things possible, but it does so at a complexity and design cost I’d have a hard time justifying.

    I don’t buy the “easier to change” argument in a maintenance context, either. Having a DI framework means you update changes to a dependency in exactly as many places as you would without DI. That’s the advantage of encapsulation in the first place. Not having a DI framework means you update every object that uses your class instead of just the class itself. That’s a net loss of maintainability in my book. Further, even in a DI framework any project that uses the nifty feature of enabling multiple bindings in a single execution context will require some serious review to ensure that all the bindings are updated and correct–i.e. a scaled-down version of the problem you have with manual DI and for the same reason (because you break encapsulation).

    In all, it’s still looking like a fad outside of its use in most mock object implementations. Since I don’t have to use DI for my mock objects, though, I’m thinking it’s a fad I’ll pass on.

  8. I don’t agree with your assertion that the availability of TypeMock in .NET means that dependency injection isn’t useful. First off, unless I’m mistaken, JMock was the first mock object framework. Like lots of things in .NET, both type mocking and dependency injection came from Java. (Again, I might be wrong, but I’m relatively certain.) Secondly, I don’t see type mocking and dependency injection as competing concepts — in fact, as you can see with Rhino Mocks, they can be complementary.
    Also, it’s true that Guice has a static factory method, but it’s only used to initiate the binding mechanism and create your Injector. Once you have the Injector, you’re static-free.
    I also disagree with your statement that DI doesn’t aid maintainability. I’ll revisit again my configuration example. If you create an IConfigurationService that provides configuration to several different consuming types, you’ll have a single binding between that interface and your concrete implementation. With Ninject, this would be:
    Binder.Bind<IConfigurationService, XmlConfigurationService>();
    With that single line of code, you’re saying that whenever the framework sees that a type needs an IConfigurationService, it will instead provide an instance of XmlConfigurationService. Then, later, if you decide to create a SqlConfigurationService, you change that one line to:
    Binder.Bind<IConfigurationService, SqlConfigurationService>();
    Immediately, all of the types that consumed the IConfigurationService dependency will be injected with the new implementation. This example is actually directly from the project I’m working on (although it was a question of loading a local configuration file vs. getting one from a server). I have about a dozen consuming services that require configuration (so far), and it was very easy to switch between the two.
    In Ninject’s (and Guice’s) case, the bindings are not just raw metadata like XML files… they’re actual code. This means that you can do even more exotic things, like delaying the binding until runtime, and checking some sort of environmental information before deciding on what implementation to use. For example, if you were able to contact a web service to request configuration information, you could bind to a RemoteConfigurationService, whereas if you couldn’t communicate with the server, you could fall back on a LocalConfigurationService.
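    Sketched with the same Binder.Bind syntax (ConfigurationModule and CanReachConfigurationServer are made-up names, just for illustration):

    public class ConfigurationModule : StandardModule
    {
      public void Load()
      {
        // Decide at startup which implementation to bind, based on the environment.
        if (CanReachConfigurationServer())
          Binder.Bind<IConfigurationService, RemoteConfigurationService>();
        else
          Binder.Bind<IConfigurationService, LocalConfigurationService>();
      }

      private bool CanReachConfigurationServer() { ... }
    }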
    Dependency injection is undeniably trendy, but then again so are TDD and mock objects. ;) Software developers are pretty pragmatic, and I would say that usually when something is trendy, it’s that way for a reason. From experience, I can say decisively that dependency injection, if embraced fully and used correctly, can dramatically decrease the complexity of a project, and significantly speed development.

  9. Jonathan Allen permalink

    > What if it was implemented by a third party and you don’t have the source?

    Then you don’t have a choice. We can’t even have this conversation unless you control the code that you are considering using dependency injection with.

  10. DI is, in my opinion, a very important technique for languages such as C# or Java to alleviate tight type dependencies. For instance, one use for DI besides testing is to solve the "singleton" problem without introducing global variables (the singleton itself) and global dependencies.
    DI is not needed in dynamic languages like Ruby, where you can solve the coupling by other means
    (see a blog post I wrote recently: http://www.rgoarchitects.com/blog/PermaLink,guid,e14288ca-aaa7-4333-a3e2-526596d6a1b5.aspx)

    Arnon

  11. @Jonathan: That’s not true. If dependency injection is implemented properly in the third-party types, you can still inject dependencies into them without modifying their source. You can definitely do it by hand, and many of the DI frameworks (Spring.NET, and I believe also Castle Windsor) allow you to wire objects together without altering their class definition. This is because they are zero-impact, and rely on an XML file for mapping.

    Sadly, Ninject only does constructor injection automatically, without being guided by an [Inject] attribute, so it isn’t currently among the group of DI frameworks that can wire without help. Of course, I plan to fix that… :)

    @Arnon: You’re right, dynamic languages like Ruby have much less need for dependency injection, because they have tricks like dynamic inheritance and duck-typing. I would argue, though, it’s really because “concrete” types in Ruby are actually made out of something closer to soup than concrete. :)

  12. You misunderstand TypeMock. That's not a technique, it's a product (http://typemock.com/). It achieves mocking without dependency injection by using reflection to get at internal objects.

    As I said, I can see the utility of these complex injection frameworks. I just don’t think it’s worth the framework dependency and implementation for what I see as a minimal payoff. Now, my development tends to be sub-enterprise level so I freely admit I’d review that opinion should that change. As such, it’s likely that we’re talking past each other because it sounds like we’re working at different complexity levels. Even so, I suspect the utility of a dependency injection framework isn’t really the dependency injection as much as it is an easy way to implement what I’d consider to be a provider pattern.

  13. @Jacob: I’m familiar with TypeMock, and NMock, and the like. You’ll find that Rhino Mocks — and actually, all dependency injection frameworks — use reflection as well. In fact, Ninject lets you do dependency injection directly into private fields without having to expose them as properties. (This has its own detriments, though.)

    I think that we're really talking about the same thing; the argument is just about whether to use an abstract factory (the "provider" pattern you're referencing) or a dependency injection framework to do the actual construction of the objects.

  14. First off, let me clarify. While every mock framework uses reflection to detect the members of the type they are mocking, TypeMock is, as far as I’ve been able to determine, unique in that it uses reflection to *intercept the dependency call* and replace it with the mock object. i.e. TypeMock works without ever feeding the mocked object to the class you are testing. Here’s a real-life TypeMock example mocking a strongly-typed Ado.Net table adapter. I have a static method (ooh, ick) that takes a text file and returns a dataset. Problem is, the method hits the database to pull customer store information for each EDI invoice line item. I want to mock that table fill and the data table returned.

    // The following is needed so that we can fake the read for stores later. This is done with
    // dsMock.ExpectGetAlways("Stores", store);
    EDIData.StoresDataTable store = new EDIData.StoresDataTable();
    EDIData.StoresRow row = store.NewStoresRow();
    row.custnmbr = "custnbr";
    row.storenmbr = 1234;
    store.AddStoresRow(row);

    // Create mock objects and set expectations.
    MockManager.Init();
    Mock taMock = MockManager.MockAll();
    Mock dsMock = MockManager.MockAll();
    taMock.ExpectSet("ClearBeforeFill");
    taMock.AlwaysReturn("FillByUl", 1);
    dsMock.ExpectGetAlways("Stores", store);

    // Hit the method
    EDIData ds = EDIRead.ReadEDIFile(UnitTestHelper.GetTestFile("EDI850.txt"));

    This is the *entire* unit test. No setup, no teardown. Notice that even though I’m testing a static method I’m only feeding it a string path to the EDI file. Mocking all Fill calls on the table adapter and replacing all property reads on the datatable “Stores” with my generated test data table means that no reads are actually made to the database (the point of mocking) and it’s done without needing to feed anything to the class using it. *That’s* what makes it so that I don’t need DI to unit test things with external dependencies or go through enormous contortions to make those tests work. That’s seriously cool reflection-fu.

    This functionality is unique to TypeMock as far as I can tell. All other mock frameworks I’m aware of need DI to insert their mocked objects into the test, including Rhino Mocks.

    As for the rest, I think you’re right that we’re talking about the same thing. I wonder if there are any “provider frameworks” that are as slick as Castle Windsor (or Ninject)? Dependency Injection is sexy right now, but I’d be willing to bet that most of what these slick frameworks are being used for outside of mock objects used for unit testing is more in line with being able to easily define and use providers for external services.

  15. I have to admit, my knowledge of the specifics of each mock object library is lacking. I’ve used Rhino Mocks, but I generally create static mocks for my tests. (Not suggesting that that’s a good practice. I guess I’m just lazy. :)

    I guess I’m just confused why you shy away from dependency injection, and yet you’re looking for a “provider framework”. What is it exactly that you’re looking for in such a framework?

    One of the major reasons I was slow to adopt dependency injection is that most of the frameworks out there (I’m looking at you, Spring.NET) are gigantic and add a massive amount of framework bloat to your project. You literally need to be creating an “enterprise” software app in order to make it worthwhile. This is actually one of the main reasons I created Ninject. The current Subversion build weighs in somewhere around 108KB — easily small enough to use in any project. In fact, I have a version built for the compact framework that I’m using for one of my projects at work.

  16. Ah. The thing is that I’m *not* looking for a provider framework. I just think one would be cool and that if it existed it’d provide all the non-unit-testing advantages you get from a DI framework without all the DI.

    What I *am* looking for is something I can use to mock dependent service calls so I don't actually hit my database during a unit test. In exploring mock object frameworks, I was dismayed that in order to use mock objects at all, it seemed that I would be forced to use dependency injection, which would mean changing how I architect my objects. Oh, and it would then require that I rewrite all the current objects that I want to mock.

    That’s a huge suck and everywhere I turned, people were feeding me the line that dependency injection is good design in its own right. This attitude is on full display in Ayende Rahien’s post today on mocking Linq where he inserts a snide “(Best Practices, anyone?)” in his call to arms against the Linq team for not auto-generating and using interfaces so that he can easily mock them. In my opinion, his so-called “untestable practices” are a failing of mock tools, not of the framework. This attitude is why I said, “I do wish that people would admit that DI doesn’t have compelling applicability outside of Unit Testing” in my original post.

    So. My conclusion thus far is that DI frameworks would be useful if you needed a provider framework and if you needed to mock objects in unit tests and didn’t have something that can work without DI. Since I a) don’t need a provider framework and b) have something that can mock without DI, I’m thinking DI is pain for no gain.

    Also, I’m thinking people who call DI a Best Practice at this point are… a bit premature…

  17. Honestly, I have to say that I don’t understand why you’re so convinced that DI isn’t worth the effort. As for the “premature” comments, DI is not a “new” idea by any stretch… lightweight DI has been around in Java ever since Interface21 got tired of EJB and decided to write Spring. StructureMap was (I believe) the first for .NET, followed by Spring.NET and Castle Windsor. All of these projects are very robust and battle-hardened, and the design principles behind them are well-tested.

    Even if you think that DI is only useful for testing, you have to decide if you’re willing to design your software in a way that makes it easy to test. I think that before you pass judgment on DI and on its proponents, you should try it yourself. Start slow, use it in a smaller project, and I guarantee that if you let it, it can change the way you think about software.

  18. With TypeMock, my software is easy to test without having to implement DI. Why go through all the hoops of DI when I don’t have to? That’s kind of my question all along. That said, in a blog post yesterday I admit that I’m tilting at windmills here by being a lone voice bucking a popular trend.

  19. Let’s ignore testability for a moment. I still think you’re ignoring the best advantage that DI provides you, and that’s SOA. Using DI makes it much more natural (and therefore much easier) to implement your application as a collection of loosely-coupled services. (Note: in this case I’m using the word “service” to refer to a “reusable collection of logic”, not necessarily anything like a web service.) It accomplishes this by lowering the cost of wiring different objects together. Since it’s easier to connect objects, it’s easier to separate functionality. And, since it’s easier to separate functionality, it’s easier to write more cohesive code.
    Now, let’s consider the DI vs. provider model argument. The real benefit of DI over a provider model (abstract factory) is the ability to wire up multiple levels of the object graph at once. With an abstract factory, you can get different implementations for a specific dependency. However, wiring the dependencies of the dependencies (and so on) is not part of the equation, unless you have a bunch of abstract factories.
    Consider again my IConfigurationService example. In my project at work, I have an implementation of a RemoteConfigurationService that loads information from a remote server. (We use it to control the configuration of multiple handheld PCs that communicate over a wireless network with the server.) This implementation of IConfigurationService needs a web service to load the configuration information from. So, I created an IRemoteService interface (bad name, but bear with me) and an implementation (RemoteWebServiceAdapter) that worked as a facade over a WSDL-generated web service proxy.
    Let’s say I was using a provider model like the one in ADO.NET. I would pass in some sort of identifier (probably a string like “remote”) into a factory method, and receive a RemoteConfigurationService implementation back. But remember, RemoteConfigurationService needs an IRemoteService to load the configuration from. How would the provider model know which implementation of IRemoteService to give me? You have two options:
    1. Hard-code your factory to always return a RemoteWebServiceAdapter. In so doing, you couple the RemoteConfigurationService to the RemoteWebServiceAdapter so tightly that you might as well not even bother with the IRemoteService interface.
    2. Create another factory for IRemoteService. What happens when you have a dozen services? You need a dozen factories. Talk about framework bloat! Not to mention you should be spending your time doing something other than writing factories… :)
    If I was using Ninject rather than a collection of factories, all I’d need to do is tag a constructor with [Inject] and then add parameters as necessary. (Actually, with the Subversion build, you don’t even need the [Inject] attribute!)
    This is literally all the code you need to set this situation up:

    public class ServiceModule : StandardModule
    {
      public void Load()
      {
        Binder.Bind<IConfigurationService, RemoteConfigurationService>();
        Binder.Bind<IRemoteService, RemoteWebServiceAdapter>();
      }
    }

    public static class Program
    {
      public static void Main()
      {
        IKernel kernel = new StandardKernel(new ServiceModule());
        IConfigurationService configService = kernel.Get<IConfigurationService>();
        // configService will be an instance of RemoteConfigurationService
      }
    }

    public class RemoteConfigurationService : IConfigurationService
    {
      [Inject]
      public RemoteConfigurationService(IRemoteService remoteService)
      {
        // remoteService will be an instance of RemoteWebServiceAdapter
      }
    }

    public class RemoteWebServiceAdapter : IRemoteService
    { ... }

    That's where DI shines... the bindings/mappings that you define between types in your application create a pathway that a DI framework can follow. Then, when you request an IConfigurationService, it can figure out that you also need a RemoteWebServiceAdapter.

  20. Excellent clarification. Let me ruminate out loud here. The difference is that (at the heart of DI) an object bound to an interface can have a public member whose signature makes a second binding obvious, whether due to an attribute on the member or due to whatever rules the DI framework uses to couple an interface in a parameter with an object. In contrast, a provider that has a dependency on another provider is likely to have that dependency buried somewhere in the object, even if it's publicly exposed as a property or even a constructor parameter. That seems to be what you are saying here, and it makes sense.

    Interesting. I'll have to consider the utility there. I don't deny that an explicit link is more developer- and maintenance-friendly. Further, there are DI tools that let you easily create and link your own interfaces in DI frameworks, tools that just don't exist for providers. Both attempt to couple objects more loosely than is possible without them. DI takes it one step further, though, and provides for easier links back to the other objects it is managing.

    Frankly, I think the selling point is the tools to help manage the loose coupling. If a tool existed that evened that score so that you could create and use providers as easily, then I think I’d have both groups together in an evaluation of which I’d use to achieve the desired loose coupling. Assuming I had a need strong enough to justify the added overhead of the framework, of course.

    Here’s the thing I keep coming back to: if I don’t need that degree of loose coupling–if my projects are small enough that I don’t want the added hassle and would just as soon leave my objects tightly coupled–then I’d really like a mock object framework (which I *am* going to want on even small projects) that didn’t require this degree of architecture.

    Or, stated a different way, I’d like it to be easy for people to unit test without also having to digest a paradigm shift whose main non-testing benefit is much higher up the enterprise chain. A lower barrier to entry for solid unit testing means more people doing solid unit testing. Tools that make it possible to unit test without requiring such invasive architecture changes would make a much larger impact on the penetration of good unit testing habits than trying to beat developers over the head with a paradigm shift whose only advantage *to them* is unit testing.

  21. Pazu permalink

    Helpful discussion! Anyway, I have to say that DI / the provider pattern / plugins (or whatever you call it) is, in my eyes, just a direct consequence of coding against interfaces and the ability to create objects dynamically. Useful for plugins and non-short-sighted design. But for unit testing, which should be simple and elementary? I find that DI requires so much housekeeping and de-encapsulation that, for me, it is a real TDD showstopper! In light of this I am interested in TypeMock, since it might allow me to keep the TDD overhead low and clear a bar I haven't been able to clear until now.

  22. I think that DI and IoC are a fantastic way to build pluggable, component-based architectures.

    Great write-up, and I think you have a good handle on the value of DI.

    I think a great example is how Windsor can also control the lifecycle of your components. Additionally, Windsor as an IoC container brings in other aspects, such as event wiring, AOP interceptors, etc.

    What this means, Jacob, is that the lightweight container can do more than just 'create' your objects.

    Take a typical pub-sub setup. With an IoC container, you can register your publishers and subscribers in the configuration. If you need to add a new one, or remove one, you edit a config file.

    Windsor can create the objects, start up the publisher, and help facilitate the event handling.

    If you have ever done deployment releases of code or worked on a team, you know these types of code changes can introduce bugs.

    There is a lot of value here, and of course understanding how DI helps is important. Its value comes when your code is built with interfaces, where you can use polymorphism and interchange your components.

    Thanks again for the blog post.

  23. Kevin Wong permalink

    Okay, this is the part where I’m a dick, but if you read and understand the theory of DI and still don’t appreciate its elegance, power and simplicity, you’ve got bigger problems than unit testing. 20 years of industry study and practical experience have culminated in several key tenets of good software design, including: simplicity, separation of concerns, loose coupling, transparency. DI delivers on all of these.

    Simplicity:
    It seems Jacob is under the misguided notion that DI makes software MORE complex. I have architected 5 commercial projects using DI spanning various domains, and I have found the opposite to be true. If I’m writing a service and find that it needs a new dependency, I simply add a property and start using it. Consider what this means; the entirety of a service’s dependency code is a single property declaration statement. No simpler design can be imagined.
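
    Concretely, something like this (a sketch assuming an attribute-driven container along the lines of the [Inject] attribute mentioned above; ReportingService is a made-up name):

    public class ReportingService
    {
      // Declaring the dependency is the entire wiring burden on this class;
      // the container sets the property when it builds the service.
      [Inject]
      public IConfigurationService Configuration { get; set; }
    }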

    Separation of Concerns:
    DI separates and modularizes the concern of application object construction and wiring. This concern would otherwise be scattered throughout the code; not good. This is similar to the way AOP modularizes cross-cutting concerns. A large DI method or DI XML definition file might initially seem complex, but it’s all the same stuff: construction and wiring, i.e., it is highly cohesive. Those that use DI undoubtedly find it liberating to no longer have to worry about dependency management, as it’s all done in one place in one way. Thus, we are left to focus unfettered on our business problems.

    Loose Coupling:
    DI does not directly make code more loosely coupled, but it makes substantially easier practices that do, e.g., developing to interfaces.

    Transparency:
    The argument that DI breaks encapsulation is erroneous. Encapsulation refers to the hiding of design decisions that don’t need to be known to outsiders. If we’re operating at the level of a class, we NEED TO KNOW the types it depends on. DI makes these dependencies explicit and transparent. If the goal is encapsulation on a larger scope, one should look to technologies like OSGi.

    Not only does DI deliver on all of these, but I would argue that on average it does so in fewer lines of code. So, the question is: Why wouldn’t you use DI on every project?

  24. peter permalink

    Hi... you said: "Here's the secret that makes dependency injection worthwhile: when it comes to dependencies, encapsulation is bad."

    Please explain why??

    Thanx

  25. Great discussion, my .01 cents.

    I have had to reconsider DI, since I have noticed it breaks my notion of building reliable constructs, by allowing an outsider to have a reference to the internals.

    Of course, this "outsider" is your friendly DI tool / factory method injecting the parts, but it doesn't change the fact that every time I see this construct in code, I need to verify (at least mentally) that whoever constructed the parts can't use the references after injecting them. That is, I don't want to see my 'car' instance's 'engine' being started outside of the class.

    That can be solved by making defensive copies at assignment time inside the class, but that cannot be a generic solution, since some instance creation operations might simply be too expensive.

    Another thing that has been bothering me is that the use of DI effectively exposes implementation details in the class's interface.

    That is, the 'car' class having a method 'set(engine e)' unnecessarily reveals some of its secrets to the outside world. (You could call this an "old-skool O-O" viewpoint, one that respects the modelling/cognitive aspects more than is typical nowadays.)

  26. Great discussion. I think DI has great value in some cases, but I am wondering why nobody has raised the point of troubleshooting and post-production maintenance (I mean corrective maintenance here).

    What I mean by that is that maintenance teams are usually not the original developers of the product. So moving the creation of full object hierarchies (especially if you use a Domain Model) outside the normal encapsulation locations makes it much harder to debug and troubleshoot problems.

    Not to mention that extensive use of reflection and late binding prohibits static inspection of the code (at least without reading the XML or whatever container hosts the object dependencies).

    It seems that when people sell DI as a means to increase maintainability, they do not think of corrective maintenance!
