
Hey! This is my old blog, and is only preserved here for historical purposes. I'm now blogging at nate.io instead!

Ninject 2 Reaches Beta!

Since I published this post, I’ve moved the Ninject 2 source to Github. Both the downloads available here and the experimental branch in Subversion are outdated and will be removed eventually. Read more here.

A month or so ago, I started a spike of a new version of Ninject to test out some ideas that have been floating around in my head. I fully intended to throw it away and integrate what I’d learned into the existing Ninject codebase – but gradually the spike grew into what I’m now calling Ninject 2, which I’m proud to say has reached beta status!

Ninject 2 is essentially a rewrite of the original codebase, but that’s not to suggest that anything is particularly wrong with the  code that went into Ninject 1.x. When I started writing Ninject in early 2007, I was targeting .NET 2.0 and didn’t have the fancy toys that I have now – namely LINQ and expression trees. Ninject 2 takes full advantage of LINQ, and I’ve actually jokingly referred to it as “LINQ to IoC”.

Also, I felt that some parts of Ninject 1.x were over-designed, so the development of Ninject 2 has been driven by an obsession with simplicity. To give you an idea of how much simpler Ninject 2 is, the trunk build of Ninject 1.x weighs in at 177KB, and Ninject 2 is just about 80KB.

I’m going to be blogging about the design of Ninject 2 and the differences between it and Ninject 1.x quite a bit over the next few weeks, but here’s a quick overview:

Things that are in Ninject 2 that were not in Ninject 1.x:

  • Cache-and-collect lifecycle management: Rather than using binding behaviors, Ninject 2 uses a scoping system and leverages the garbage collector to reclaim instances. I’m very excited about this improvement, and it deserves a post unto itself to explain it.
  • Multi-injection: The kernel in Ninject 2 now has GetAll<T>() methods, and supports injection into targets of type IEnumerable<T>, List<T>, or arrays of T.
  • Constrained resolution: Rather than just declaring conditional bindings, constraining predicates can now flow into a resolution request from the point of injection. This creates a very powerful push/pull system for resolution, which also deserves its own post. :)
  • Common Service Locator support: Because of multi-injection, Ninject 2 now has full support for the CSL.
  • Optional modules: You can now register bindings directly on the kernel; modules are optional. If you register bindings in a module and then unload the module, its bindings will be un-registered. (A quick sketch of kernel-level registration and multi-injection follows this list.)
  • Automatic module scanning: Ninject 2 can scan directories for assemblies that contain modules, and load them into the kernel.
  • Simplified extension model: Internally, Ninject 2 relies on an inversion of control container of its own, reducing factory bloat and making extension of the core much simpler.
  • Mono support: I haven’t had a chance to test this very well, but it’s something I want to make sure Ninject 2 has.
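
To make a couple of these features concrete (module-free registration and multi-injection), here’s a minimal sketch. The fluent API shown here is based on later Ninject 2 builds, so the exact method names in the beta may differ slightly:

using System.Collections.Generic;
using Ninject;

public interface IWeapon { string Name { get; } }
public class Sword : IWeapon { public string Name { get { return "Sword"; } } }
public class Shuriken : IWeapon { public string Name { get { return "Shuriken"; } } }

public class Program
{
  public static void Main()
  {
    // Bindings can be registered directly on the kernel; no module is required.
    var kernel = new StandardKernel();
    kernel.Bind<IWeapon>().To<Sword>();
    kernel.Bind<IWeapon>().To<Shuriken>();

    // Multi-injection: GetAll<T>() returns one instance per matching binding.
    // Constructors that take IEnumerable<IWeapon>, List<IWeapon>, or IWeapon[]
    // get the same collection injected automatically.
    IEnumerable<IWeapon> weapons = kernel.GetAll<IWeapon>();
  }
}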

Things that were in Ninject 1.x that are not in Ninject 2:

  • Support for .NET 2.0: Since Ninject 2 relies on LINQ and expression trees, .NET 2.0 support is no longer possible.
  • Field injection: Ninject 2’s injection is now driven by expression trees, and in .NET 3.5 there is no way to set field values with an expression tree. Since this is a bad practice anyway, I decided to cut it.
  • Private member injection: Again, not supported by expression trees, and a bad practice, so it has been cut.
  • Behavior attributes: Since behaviors have been cut and replaced by GC-driven scopes, attributes like [Singleton] no longer make sense. I may re-add these if people scream loud enough.
  • Logging infrastructure: Cut because it wasn’t really useful anyway. Ninject doesn’t generate logging messages of its own anymore, but I’m looking into alternative sources of introspection.
  • AOP/Interception: This has been removed from the core for simplicity.
  • Compact framework support: This is missing from the beta, but I may re-add it for the official release.

A few notes:

  • The core namespace is now just Ninject instead of Ninject.Core.
  • The default scope is now singleton rather than transient.
  • Some extensions are missing (Messaging, in particular). They may re-appear before release.
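
Since the default scope is now singleton, bindings that need a fresh instance per request have to say so explicitly. Continuing the kernel sketch above, and assuming the fluent scope methods keep these names in the beta:

// Opt a binding back into the old transient (instance-per-request) behavior.
kernel.Bind<IWeapon>().To<Sword>().InTransientScope();

// Or state the singleton default explicitly, for readability.
kernel.Bind<IWeapon>().To<Sword>().InSingletonScope();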

Ninject 2 also has new constructor selection semantics:

  1. If a constructor has an [Inject] attribute, it is used. If multiple constructors have an [Inject] attribute, the behavior is undefined.
  2. If no constructors have an [Inject] attribute, Ninject will select the one with the most parameters.
  3. If no constructors are defined, Ninject will select the default parameterless constructor.
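
A quick illustration of those rules (IRepository and ILogger are just placeholder dependencies):

using Ninject;

public interface IRepository { }
public interface ILogger { }

public class OrderProcessor
{
  // Rule 1: a constructor marked with [Inject] wins, so Ninject 2 uses this one.
  [Inject]
  public OrderProcessor(IRepository repository) { }

  // Rule 2: if no constructor were marked, Ninject would pick this one instead,
  // since it has the most parameters.
  public OrderProcessor(IRepository repository, ILogger logger) { }
}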

Right now, the source is in an experimental branch on the Ninject subversion repository, but it will soon be moving to the trunk. You can also download the various builds here.

I’m interested in all feedback (good and bad), so please let me know what you think!

EventHandler Extension Method

I just wrote an interesting extension method, and thought I’d share it. If you’re using the EventHandler<T> delegate that was introduced back in .NET framework 2.0, this method might come in handy:

public static class ExtensionsForEventHandler
{
  public static void Raise<T>(this EventHandler<T> handler, object sender, T args)
    where T : EventArgs
  {
    EventHandler<T> evt = handler;
    if (evt != null) evt(sender, args);
  }
}

This helps you avoid having to repeat the boilerplate code that goes along with the typical event-firing pattern. With this extension method, you can do this instead:

public class StuffDoer
{
  public event EventHandler<StuffEventArgs> StuffHappened;

  public void DoStuff()
  {
    StuffHappened.Raise(this, new StuffEventArgs());
  }
}

Since extension methods can be called on references that are actually null, this will work even if no listeners have subscribed to the StuffHappened event.

I’m all about syntactic sugar, and extension methods provide an easy way to improve the readability of your code without too much hassle. (And you are writing your code so it’s easier for others to read, right? :)

ALT.NET is the Opposition Party

In case you don’t read any blogs or follow any .NET developers on Twitter, Oxite is a new open-source CMS recently released by Microsoft. It was offered as a "real world example" of a project written with the new ASP.NET MVC framework. Soon after it was released, lots of .NET developers (including myself) started calling out Oxite as an example of a very bad application. Many of the people bashing Oxite align themselves with the so-called "ALT.NET" movement. There quickly developed a backlash against people who were saying bad things about Oxite, and ALT.NET was the primary target.

Over the past few years, I’ve become more interested in politics. Last night, something occurred to me: ALT.NET is the opposition party. We’re just here to keep the "other side" (in this case, Microsoft and traditional .NET developers) honest. Calling them out on things doesn’t mean we think badly of them, it just means we disagree with their approach or their ideas. Just like opposing political parties, we might not agree on the solution, but we all want to work towards solving the problems that we believe exist.

When it comes to personal development, ego is blinding. I’m a strong believer in the idea that the only way to learn is to first accept that you don’t know. Personal feelings or attachment to a product can blind you from its shortcomings, and only by removing ego from the process can you take steps to improve it. That’s why two of the main pillars of agile development are peer review and shared ownership of code — they serve to redirect the personal attachment you feel to your product to a shared-team attachment. Then, if someone challenges you, it makes it easier to remember that we’re all working towards the same goal.

This isn’t to say that we shouldn’t take pride in our work — just that that pride shouldn’t blind us from the possibility that there are problems with the work that we create. I’m extremely proud of a lot of the code that I write, but if someone wants to disagree, by all means I want to hear it. Bear in mind also that I’m not saying that the Oxite developers are egotistical. They’ve actually taken the backlash against their product very well.

My point is that there’s a dramatic difference between a debate and an argument. A debate is devoid of personal attachment to the subject. Personal attacks are themselves egotistical ("I’m better/smarter than you!") and have no place in a debate. This is the same reason people find it difficult to discuss politics or religion with others… if you take a personal stake in what you’re debating, you let emotions in. Once you’re emotionally involved, it becomes more difficult to maintain a stance, and things quickly degrade into the equivalent of "you’re wrong and I’m right, because you’re a big doo-doo head".

To my knowledge, there wasn’t a single person that said anything derogatory about the developers of Oxite, only the software itself. That’s a very important distinction. I hope that everyone recognizes that we’re all working towards the same essential goal — to make ourselves better as software developers, and improve the industry as a whole.

ALT.NET has a public relations problem. We are a loose-knit group, and so we don’t have a single voice. Many of us disagree with each other. We also have a lot of passionate people in our ranks, and sometimes passionate people aren’t tactful in the way they approach a problem. These are all detriments to ALT.NET’s image, but benefits to the group as a whole — everyone is encouraged to speak and be heard.

Just remember, we’re only trying to keep the other side honest.

Generic Variance in C# 4.0

Although I’m not cool enough to actually go to PDC, I’ve been watching some of the things that have been announced. One of the things I’m most excited about is co- and contra-variance in generics, which is something that C# has lacked since generics were first introduced in 2.0. (Note: some of the examples here are pulled from the excellent description of new features released by Microsoft.)

In versions of C# prior to 4.0, generics were invariant. For example, consider this simple type definition:

public class Foo<T>
{
  //...
}

Since the generic type parameter T is not constrained, the compiler treats T as type object. That means that since a string is an object, a Foo<string> is functionally equivalent to a Foo<object>. However, because of generic invariance, an instance of Foo<string> cannot be assigned to a variable of type Foo<object>.
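
In other words, given the Foo<T> above, this assignment is rejected by the compiler:

Foo<string> strings = new Foo<string>();
Foo<object> objects = strings; // compile error: no implicit conversion between the two constructed types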

C# 4.0 introduces the ability to declare covariant and contravariant generic type parameters. Variance annotations are only allowed on interfaces and delegates, so a class like Foo<T> can’t be made variant directly, but an equivalent interface can. For example:

public interface IFoo<out T>
{
  //...
}

This interface is covariant in T, meaning that if you create an IFoo<string>, you can use it effectively as an IFoo<object>, since string is a subclass of object. The example given is the updated IEnumerable<T> interface that comes with the BCL in C# 4.0:

public interface IEnumerable<out T> : IEnumerable
{
  IEnumerator<T> GetEnumerator();
}
public interface IEnumerator<out T> : IEnumerator
{
  bool MoveNext();
  T Current { get; }
}

Since these interfaces are covariant in T, an IEnumerable<string> can be used as an IEnumerable<object>. (List<T> and IList<T> themselves remain invariant, since T also appears in input positions such as Add.) So you’ll be able to do this, which was previously impossible:

IEnumerable<string> strings = new List<string>();
IEnumerable<object> objects = strings;

Note, however, that you can only declare that your type is covariant for generic type parameters that appear in output positions — basically, return values.

In addition to the out keyword, you can also use the in keyword:

public interface IComparer<in T>
{
  int Compare(T left, T right);
}

This interface is contravariant in T, meaning that if you have an IComparer<object>, you can use it as though it were an IComparer<string>. Contravariance carries a similar restriction to covariance, in that contravariant type parameters can only be used in input positions (arguments) on the type.
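
For example, a comparer written against object can be handed to an API that expects an IComparer<string> (the ByHashCode comparer here is purely illustrative):

using System;
using System.Collections.Generic;

// A deliberately general comparer that works on any object.
public class ByHashCode : IComparer<object>
{
  public int Compare(object x, object y)
  {
    return x.GetHashCode().CompareTo(y.GetHashCode());
  }
}

public class Program
{
  public static void Main()
  {
    // Contravariance: an IComparer<object> is usable as an IComparer<string>,
    // because anything that can compare objects can compare strings.
    IComparer<string> comparer = new ByHashCode();

    string[] words = { "beta", "alpha", "gamma" };
    Array.Sort(words, comparer);
  }
}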

I didn’t quite understand variance until I considered the changes to the Func delegate:

public delegate TResult Func<in TArg, out TResult>(TArg arg);

Func is contravariant in TArg, and covariant in TResult. For example, consider this (less-than-useful and slightly-contrived) method:

public string Convert(object obj)
{
  return obj.ToString();
}

With the new variance rules, I can do this:

Func<object, string> func1 = Convert;
Func<string, object> func2 = func1;

The second assignment is the interesting one: since the delegate is contravariant in TArg, func2 can accept a string argument (a string is an object, so Convert can handle it), and since the delegate is covariant in TResult, the string that Convert returns can be used anywhere an object is expected.

This might seem a little overwhelming if you’re not a language geek like myself, but it basically means that things that seemed like they should work in previous versions (like the List<T> example above), now will just work. Eric Lippert also has a great series of posts about the topic.

If you’d like to tinker with C# 4.0 (and Visual Studio 2010), Microsoft has published a VPC image. Kudos to the C# and CLR teams for getting this stuff to work!

Be the Underdog

Last night, I watched my Cleveland Browns wipe the floor with the New York Giants. The Browns were 1-3, and if they didn’t win this game, their season was effectively over. The Giants, the reigning Super Bowl champions, came into the game undefeated at 5-0, and nobody gave the Browns a snowball’s chance in hell of winning. In response, the Browns beat them by three touchdowns, without punting or turning the ball over once. They had their problems, but it was a night of complete dominance for Cleveland.

Even if you’re not interested in sports, there’s a lesson to be learned. The Browns came into the game as major underdogs, with nothing to lose. They used it to their advantage, and their gameplan became aggressive and creative. They successfully ran trick plays like reverses and direct snaps, and the only successful execution of the UFO defense that I’ve seen… well, ever. The Giants didn’t see it coming, and by the time it hit them, there was nothing they could do but sit on the sidelines with puzzled looks on their faces.

37signals suggests that when developing a product, you should choose an enemy to compare with. By recognizing what’s wrong with the competition, you can find a niche that you can fill.

I’d take it one step further and see yourself as David, and choose a Goliath to attack. Embrace your role as underdog, unleash your creativity, and hold nothing back. Even if you’re already the industry leader, don’t get cocky — choose the company with the second-largest market share, and attack.

Most importantly, don’t allow yourself to be constrained by the way your competition or the industry-at-large thinks. That’s where real innovation comes from.

Custom Selection Heuristics in Ninject

I haven’t blogged about Ninject for a while, but that doesn’t mean that the project is entirely dormant. :) Version 1.5 will be coming out relatively soon, but I wanted to give a preview of a new feature that I think is pretty damn cool. It’s currently available in the Subversion trunk if you’d like to flex the bytes yourself.

Every dependency injection framework needs to have some sort of convention or marker to indicate dependencies — spots where the framework should resolve a value to inject. When I originally wrote Ninject, I made the decision to use [Inject] attributes as this marker. In fact, I still advocate the use of attributes to indicate dependencies, because I like the declarative nature. I’ve found that even if you don’t completely understand Ninject, or dependency injection in general, using attributes at least indicates that a value comes from somewhere. It can be a great conversation starter — something in your code that makes another developer ask questions when they see it.

Anyhow, I’ve found that the reliance on attributes was a major turnoff for some people. Ninject has always been (and will remain) opinionated software, but I’ll always add flexibility as long as it doesn’t compromise the core goals of ease of use and efficiency. As a result, I’ve been building in more control over these selection heuristics — the conditions Ninject uses to decide which members of a type should be considered injectable. The first step was the auto-wiring extension, which alters the heuristic to check whether a binding exists for a given member’s type — and if so, injects it.

Ninject 1.5 builds on this idea by moving to a collection of heuristics rather than just one. Now, when Ninject examines a member on a type to determine if it should be injected, it evaluates all of the related heuristics, and if any of them match, the member will be injected. There are now also two levels of heuristics:

  1. Global heuristics: As the name suggests, these apply to all bindings. You set them directly on the newly-created IMemberSelector component.
  2. Binding-level heuristics: These are set directly on a specific binding, so you can set up your own conventions for a specific type.

When examining a member, the member is first tested with the global heuristics, and then with the binding-level heuristics for the binding that is being used to activate the instance.

Heuristics are specific to the type of member they inspect. The types of members that Ninject considers are constructors, properties, methods, and fields, and each has a matching type of heuristic. When a property is considered for injection, Ninject will only test it using all applicable IHeuristic<PropertyInfo>s, but won’t consider IHeuristic<ConstructorInfo>s, for example.
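
As a rough sketch of the idea (I’m guessing at the exact contract here, so treat the interface shape and method name as illustrative rather than the real 1.5 API), a property heuristic boils down to a predicate over a PropertyInfo:

using System.Reflection;

// Illustrative only: the real IHeuristic<T> in the Ninject 1.5 trunk
// may declare a different method signature.
public interface IHeuristic<TMember>
{
  bool ShouldInject(TMember member);
}

// A heuristic that would inject any property whose type name ends in "Service".
public class ServicePropertyHeuristic : IHeuristic<PropertyInfo>
{
  public bool ShouldInject(PropertyInfo property)
  {
    return property.PropertyType.Name.EndsWith("Service");
  }
}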

The easiest way to control heuristics en masse is by defining your own IMemberSelector. By default, Ninject uses the StandardMemberSelector, which registers heuristics that look for the [Inject] attribute on the members — or, if you’ve overridden the attribute via KernelOptions, it will examine that. However, if you prefer a more conventions-based approach, you can use the ConventionMemberSelector:

public class OurKillerAppMemberSelector : ConventionMemberSelector
{
  public override void DeclareHeuristics()
  {
    InjectProperties(When.Property.Name.StartsWith("Foo"));
    InjectConstructors(When.Constructor.HasAttribute<MarkAttribute>());
    InjectMethods(m => m.Name.Length < 4);
  }
}

This will cause all properties whose names begin with Foo, and all constructors with the (fictional) [Mark] attribute, to be injected. Methods with names fewer than 4 characters long will also be injected, and as you can see, you can specify conditions using lambdas (predicates) instead of the conditional infrastructure.

If you want finer-grained control over your selection heuristics, you can also declare them on individual bindings as I said before:

Bind<IService>().To<ServiceImpl>()
  .InjectConstructors(c => c.Arguments.Length == 1)
  .InjectMethods(When.Method.Name == "Prepare")
  .InjectProperties(p => p.Name.EndsWith("Service"));

In this situation, when Ninject activates an instance of ServiceImpl, it will call the constructor that is declared with a single argument. It will then call the method named Prepare (resolving values for each of its arguments), and resolve and inject values into each property whose name ends in Service.

Hopefully now you can see why I’m so excited about the new selection heuristics in Ninject. Out of the box, Ninject will continue to rely on the [Inject] attribute for simplicity, but you now have an incredible amount of flexibility in setting up your application, your way.

Working from home

Today ended my sixth week working for Telligent. Like most of the other members of the product development team, I’m working remotely from home. It’s been a very interesting adjustment, and I figured I’d share my experiences.

First, the good stuff:

  1. You can control your working environment. For the first three weeks, I sat in an old cheapo Staples desk chair at my old uncomfortable desk, until I was able to barter with my wife to let me re-organize the office. I’m now sitting in a Herman Miller Mirra chair at a much more ergonomic desk from Ikea. I have a comfortable (cushioned) chair in my office where I can sit, relax, and think. I can play music through speakers without needing to worry about coworkers, and I can close my office door to talk on the phone, video-conference, or if it just gets too noisy. I’m a natural control freak when it comes to environment while I work, and after working in a “bullpen” environment (one big room without cubes), this is a huge improvement. Incidentally, if you’re wondering, yes, the Mirra is expensive, and I was skeptical too, but it’s worth every penny. It’s so comfortable that I actually look forward to sitting in it.
  2. You can’t beat the commute. I’ve been fortunate not to have to drive more than 35 minutes or so to work for any job I’ve held, but just knowing that I don’t have to get in the car and drive across town is a nice feeling. Not to mention the savings on gas is great, and I suspect that once the Ohio winter strikes, I’ll like the fact that I don’t have to leave the house even more! I’ve also found that my days seem longer, and once I get through the transition period I’m hoping to devote my newfound spare time to side projects like Ninject and Ideavine, as well as a couple of other things I have up my sleeve. Who knows, I might even blog more than once a month! :)
  3. You get to spend more time with your family. My wife is a graduate student, and she’s able to work from home most of the time. It’s nice to be able to spend additional time with her, even if it’s just having lunch or taking a break to talk with her for a few minutes every couple of hours. For others, being able to spend time with children is great. My wife and I don’t have any children, but we do have a couple of dogs, and it’s nice to be able to have them around while I work. They tend to cry sometimes, but they’re starting to learn that when I’m sitting at my desk, they need to relax and wait for me to pay attention to them.
  4. Working without pants is finally a reality. Just kidding. :)

Of course, working at home isn’t without some challenges:

  1. It’s easy to get distracted. Being at home brings with it a certain mindset. It’s important to keep in mind that when you’re working, you’re working, and while you can take short breaks, you need to stay on task or it’s easy to meander off. It’s easier to get off-task when you’re in the “at home” mindset. I think this is largely because of the transition — although I’m used to working on side projects and moonlighting consulting from home, I’m not used to doing it for my “day job”. I’ve noticed as I’ve gotten more accustomed to it, I’ve become more effective at remaining on task.
  2. It’s even easier to work too much. Since your office is your home, your home is also your office, and so you never really “punch out”. I’m also a natural workaholic — programming is more “fun” to me than “work” a lot of the time — so it’s easy for me to come back into the office after dinner and continue to work, even though I really should be done for the day. As part of my transition into telecommuting, I’m trying to draw stronger lines between work-time and off-time.
  3. Communication with the rest of the team is more difficult. Fortunately, Telligent has a pretty firm grasp on how to manage remote workers. Video conferencing makes a huge difference in terms of ease of communication — for whatever reason, seeing the other person’s face as they speak makes a dramatic difference both in the efficiency and quality of the conversation. We use a pretty lightweight Agile practice, and instead of daily stand-ups, we have daily video conferences via Tokbox. (The tool isn’t the best, but so far it’s better than any others that I’ve tried for multi-party video chat.) In terms of communication, the continuum is (with increasing efficiency): Twitter, email, IM, IRC, voice, video, then real-life.
  4. You become even more reliant on technology. Our cable internet was knocked out by the remnants of Hurricane Ike a few weeks ago, and was out for 3 days. We were fortunate enough not to lose power for more than a few minutes, but I was forced to go to my wife’s office at the University in order to work. When you telecommute, losing your internet connection is kind of like having your car break down — except you can’t get a rental or bum a ride to work. We’ve since invested in business-class cable, and while it doesn’t help with widespread outages, I at least get a guarantee of same-day or next-day service. At least I have my wife’s office to fall back on if need be — although I suppose I could always be a wifi leech at Panera. :)

All in all, I have to say that so far the benefits vastly outweigh the detriments. Maybe I’m biased, though, since I’m really interested in the products that Telligent makes, and am jazzed by the stuff I get to work on. (They actually pay me to do this stuff! Suckers. :D)
