
Hey! This is my old blog, and is only preserved here for historical purposes. I'm now blogging at instead!

Automagic Time Localization

One of the most often overlooked, yet easiest to fix, issues encountered when developing a hosted software application is the management of local time. First and foremost, you should be storing timestamps in your database in UTC. I don’t care if your users are all in the same time zone now, or your app is just a local deployment, or whatever other excuse you have for storing time as local time. Always store timestamps in UTC. Do it now, and pat yourself on the back a year from now when you realize whatever reason you had for storing it as local time was irrelevant or has changed. :)

Okay, so once you’ve got your timestamps stored in UTC, you need to localize them for each individual user, which means you need their offset from UTC. For example, Eastern Standard Time, my time zone, is UTC-5. The most common thing that I’ve seen is to provide users with a massive dropdown of a bajillion time zones, and ask them to pick which one they’re in.
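To make the round trip concrete, here’s a minimal C# sketch (illustrative only, not Zen’s actual code) of storing a timestamp in UTC and then localizing it with a user’s stored offset in hours:

```csharp
using System;

public static class TimeExample
{
    // Store timestamps in UTC...
    public static DateTime Capture()
    {
        return DateTime.UtcNow;
    }

    // ...and localize them per user by applying their stored offset in
    // hours (e.g. -5 for Eastern Standard Time).
    public static DateTime Localize(DateTime utc, double offsetHours)
    {
        return utc.AddHours(offsetHours);
    }
}
```

With a stored offset of -5, a timestamp of 17:00 UTC displays as noon for an EST user.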

Maybe it’s just a pet peeve, but I hate it when sites ask me to pick my time zone. What about users who travel often? What about daylight saving time? Do I make my users go in and change their time zones when DST starts and stops? Do I write extra code to figure out whether it’s DST in the user’s time zone? That’s pretty much impossible, at least in the US, because it’s not observed in Hawaii, Arizona (except in the Navajo Nation), and some counties in middle America.

While I was building Zen, I was determined to avoid making users choose their time zone if I possibly could. I realized that the client computer knows what time it is and what time zone it’s in, so it’s pretty simple to read it with JavaScript. Here’s how you can grab the client computer’s UTC offset:

new Date().getTimezoneOffset() / -60

Now, all that remains is to push that information back to the server and make sure that it’s updated often. Zen’s login form has a hidden input called timezoneOffset, which is set using jQuery when the page is displayed:

$(document).ready(function() {
  $("#timezoneOffset").val(new Date().getTimezoneOffset() / -60);
});

Now, when the form is posted, I read the UTC offset from the hidden input, and store it in the user’s record in the database. This means that each time the user logs in, their time zone information is updated to be current. This helps not only with the daylight saving time problem, but also ensures that Zen will respond to changes in the user’s computer clock. If a user travels from EST to PST, for example, and they change their computer clock, Zen will recognize the change and update the timestamps appropriately.

Then, each time I need to display a timestamp to a user, I call a method on my TimeHelper (a custom ASP.NET MVC view helper, like the built-in HtmlHelper and UrlHelper):

public class TimeHelper
{
  public User User { get; private set; }

  public TimeHelper(User user)
  {
    User = user;
  }

  public DateTime GetLocalTime(DateTime utc)
  {
    DateTime local = utc.AddHours(User.TimezoneOffset);
    return new DateTime(local.Ticks, DateTimeKind.Local);
  }
}

Just another trick to keep in your bag. :)

Zen and the Art of Project Management

Someone posted a message to the Ninject user group last week asking if anyone had heard from me, and if I was alright. The rumors of my demise have been greatly exaggerated! I’ve been next to silent on my blog and Twitter for a while now, and although I’ve alluded to it before, I’m finally ready to announce what I’ve been up to.

This is my last week at Telligent. I’m leaving the company to launch a startup with my wife Nicole.

Sometime last year, I was looking for an application that could support a lightweight agile development process. It seemed like all of the existing solutions were bloated with unnecessary features, difficult to use, and too expensive for what I needed. I heard similar grumblings from other developers – it seemed that there was a lack of simple, flexible, and cost-effective solutions available for lightweight project management.

Around the same time, I was introduced to the concept of lean software engineering. If you’re not familiar, lean software engineering draws inspiration from the Toyota Production System, and focuses on optimizing flow, reducing waste, and continually improving your process to become more efficient. The development process is organized around the use of a kanban board, which is a visual representation of your process and the status of each work item as it moves from concept to completion.

I was immediately hooked. For the past several years, I’ve been on a crusade to convince software developers that the majority of effort in software is spent changing software after its initial version. Therefore, the easier your application is to change and improve while maintaining a high level of quality, the more effective you will be at competing or satisfying your clients. Lean is very much in line with this way of thinking, and it’s the first time that a development methodology has really made sense to me.


Today, Nicole and I are proud to officially announce Zen, a lean project management solution. The system is geared around a web-based kanban board, on which you can hang cards representing work items. Here’s an example (scaled down for the screenshot):


The columns on your board can be customized to your project, and can be changed anytime. As a task passes each phase of completion, you drag its card to the next phase. When it reaches the right side of the board, it’s done. Cards can have different colors, you can tag them to organize them, and when things go wrong, you can mark them as “blocked”, indicating there’s a problem.

Zen’s not limited to just a kanban board, though. Collaboration is the key to making a process work, so when things happen in your project (like moving a card from one phase to another), Zen shoots notifications to the other members of your team. And, since nothing’s more annoying than having a service send you a million emails, Zen can notify you of changes not only via email, but Google Talk/XMPP, AIM, Windows Live Messenger, or ICQ. And since everyone has their own preferences on how they want to be updated, each user can customize how they want to be informed, including turning notifications on and off to create cones of silence. :)

Because it’s web-based, and because of its emphasis on messaging, Zen is particularly useful for distributed teams, where you can’t just hang a physical kanban board in the team room. Even if your team works in the same location, having a software application support your process can be very useful – Zen tracks key metrics like lead time and cycle time for you, and provides big visible charts to keep you motivated and on track.

Zen doesn’t force you to adopt a lean process. It’s designed to be as simple and straightforward as possible, while being flexible enough to customize it to match the way you work, and improve your process as you find ways to be more efficient. Zen lets you start with just a task board, and if you want, gradually add lean artifacts to your process at your leisure. Whether you’re a lean expert, or you just want a project management system that stays out of your way and lets you work, Zen is a great option.

Another way to think about it is that Zen is the Ninject of project management software. :)

Zen is still in limited closed beta, but we’re wrapping things up, and will be launching in July. In the meantime, check out our website, and sign up to be notified when we go live. If you have any questions at all, don’t hesitate to drop me a line on email (nate at enkari dot com), Twitter, or Google Talk / Windows Live Messenger (nkohari at gmail dot com).

Leaving Telligent is bittersweet; while I’m excited about Zen, Telligent has a lot of absolutely fantastic developers, and I’ll miss working with them. My time with the company has been memorable, and I wish them nothing but the best of luck. The good news is that the cat’s out of the bag, so I can start talking about all the cool technology that powers Zen, and some neat tricks I’ve picked up along the way. Between that and my obsession with lean/kanban, I’m going to start being more talkative again… I’ll leave it to you to decide if that’s a good thing or not. :)

Ninject 2 and Extensions

As I’ve mentioned before, while developing Ninject 2, I applied an obsessive minimization process to the development. Ninject 2 was largely a rewrite of Ninject 1, but using most of the same concepts – just re-written as necessary to be as small as absolutely possible. This minimization, along with leveraging stuff like LINQ and lambdas, has resulted in a very sleek and compact distribution, currently hovering around 82KB when compiled for release.

However, it’s also meant that I’ve had to cut features from the core, which in turn means that extensions are now of the utmost importance in Ninject 2. I’d like to adopt a model similar to the one used in jQuery, in that I handle development and maintenance of the core, and outsource the development of extensions to the community. This means if you have an idea for a cool extension to Ninject, you’ll just be able to write it, and publish it to the Ninject website (eventually).

This also means that I’m toying with different ideas for discovering and loading extensions when you spin up the kernel. I’ve tried a few things but keep gravitating towards an automatic extension loading solution, in which Ninject scans all of the DLLs in the directory containing Ninject.dll for assemblies marked with a NinjectExtensionAttribute. Because the CLI lacks a way to read assembly metadata without loading the assembly, this also means that Ninject has to spin up a separate AppDomain to scan the DLLs without loading everything into the primary AppDomain.
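The attribute-marking half of that idea can be sketched in a few lines; the attribute name comes from the post, but this standalone example is not Ninject’s actual loader:

```csharp
using System;
using System.Reflection;

[assembly: NinjectExtension]  // mark this example assembly for demonstration

// An assembly-level marker attribute, as described in the post.
[AttributeUsage(AttributeTargets.Assembly)]
public class NinjectExtensionAttribute : Attribute
{
}

public static class ExtensionScanner
{
    // A real loader would enumerate *.dll files alongside Ninject.dll and
    // inspect them in a throwaway AppDomain; here we simply check an
    // already-loaded assembly for the marker attribute.
    public static bool IsExtension(Assembly assembly)
    {
        return assembly.IsDefined(typeof(NinjectExtensionAttribute), false);
    }
}
```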

My main concern is that this might be too magical or “heavy” for Ninject. One obvious possibility is to create an option that will control automatic extension loading, which is used like this:

var settings = new NinjectSettings { LoadExtensions = true };
var kernel = new StandardKernel(settings, ...);

So all in all I pose two questions to you, dear reader:

  1. Is automatic extension loading too magical, or too heavy for Ninject?
  2. If it’s controlled by an option, should the default be on or off? That is, should you opt-in to or opt-out of extensions?

This and any other feedback, as always, is greatly appreciated.

Ninject + GitHub = Crazy Delicious

Last weekend, I was talking with Ivan Porto Carrero about some work he’s doing on Ninject to add IronRuby support (which looks friggin’ awesome, by the way). He convinced me to finally sack up and try GitHub. I’d toyed with Git before, but I was always hung up on the ugliness of the tooling under Windows. Source control is not something that I want glued together with duct tape and chewing gum, and that was the initial impression I got from the tools.

Boy, was I wrong.

I’ve used Subversion for years, and the relative maturity of the tools lulls you into a false sense of comfort and gets you to ignore the major issues with the underlying structure. While the Windows support for Git is kind of patched together, the tools themselves are extremely intelligent. The leap from Subversion to Git isn’t quite as big as the leap from VSS to Subversion was, but it’s definitely close. I’m now a complete Git convert, and I’ve even bought a commercial account on GitHub to move my closed-source work there also.

Anyhow, the source for Ninject 2 now has a new home on GitHub. This source tree will now represent the most-current version of the Ninject 2 source, and I’ll be removing the experimental branch from Subversion soon. The Ninject 1 source will remain in the Google Code repository for posterity. If you don’t want to bother with installing Git, you’ll always be able to grab a zip file of the latest Ninject source from:

If you’re interested in GitHub, Aaron Jensen just published a great post on hosting your projects there. I highly recommend you check it out.

Ninject and the Ms-PL

Someone from Microsoft (I’ve forgotten who, sorry!) approached me at ALT.NET Seattle a couple weeks ago and asked some questions about Ninject’s licensing. Ninject is licensed under the Apache License 2.0, and they expressed concern that they might not be able to get approval to use Ninject on their team inside Microsoft.

To fix this problem, I’d like to announce that both Ninject 1 and 2 are now dual-licensed under the existing Apache License and the Microsoft Public License. The licenses are essentially identical, but the Ms-PL contains an additional clause intended for patent protection.

So, what does this mean for existing Ninject users? Absolutely nothing!

Ninject is and will always be open source, and you can continue to use it in the same way you always have. However, if you’ve ever tried to advocate the use of Ninject in your company only to have management or your legal department shoot you down, you can now say it’s available under a Microsoft-approved open source license.

If you have any questions or concerns, please feel free to comment or post on the Ninject user group.

Cache-and-Collect Lifecycle Management in Ninject 2.0

Warning: unless you’re interested in the nuances of lifecycle management in inversion of control systems, this post might make your eyes glaze over. However, if you’re interested in new solutions to old, difficult problems, read on. :)

One of the most important features of any inversion of control framework is the re-use of instances within certain scopes. For example, you might define that a certain service is a singleton, that is, only a single instance of it may exist throughout your application, or you might mark it as one-per-request, meaning one instance of it would be created for each web request that your application receives.

As it is with most important features, lifecycle management is a complex problem, and one that’s very difficult to get correct. In Ninject 1, I assumed that this complexity was inevitable, and implemented a system of behaviors (SingletonBehavior, OnePerRequestBehavior, and so on). While this solution worked, it was difficult to customize and prone to errors.

Ninject 2 introduces a feature that I’ve been working on for a while, which I’m calling cache-and-collect lifecycle management. Instead of creating a “manager” that controls the re-use of instances (like behaviors in Ninject 1), Ninject 2 associates all instances with a “scope” object. This object is just a POCO – that is, it doesn’t need to know anything about Ninject or that it’s involved in lifecycle management.

Ninject 2 ships with the same four standard scopes (transient, singleton, one-per-thread, and one-per-request) that were available in Ninject 1. When Ninject receives a request, it has a callback associated with it that returns the scoping object for the request. The following table shows the correlation between the binding syntax and the callback that is used:

Binding syntax       Object returned from scope callback
InTransientScope()   n/a
InSingletonScope()   the kernel itself
InThreadScope()      System.Threading.Thread.CurrentThread
InRequestScope()     System.Web.HttpContext.Current
InScope(callback)    whatever the callback returns (custom)
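The idea behind the table can be sketched as a set of callbacks, each returning the object that instances will be cached on (an illustrative sketch, not Ninject’s internals):

```csharp
using System;

// Each scope is just a callback that answers: "what object should the
// activated instance be associated with for this request?"
public static class ScopeCallbacks
{
    // Transient: no scope object, so every request yields a new instance.
    public static readonly Func<object> Transient = () => null;

    // Singleton: the kernel itself serves as the scope object.
    public static Func<object> Singleton(object kernel)
    {
        return () => kernel;
    }

    // One-per-thread: the current thread is the scope object.
    public static readonly Func<object> Thread =
        () => System.Threading.Thread.CurrentThread;

    // One-per-request would return System.Web.HttpContext.Current.
}
```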

At this point, an example might help. Assume that you have a service IFoo, that is bound to request scope. When the first request occurs for IFoo, the scope callback returns the HttpContext for the current web request, and the resulting instance of IFoo is associated with it. If any subsequent requests for IFoo are made within the same web request, the same instance is returned. However, if a request for IFoo is received from a thread handling a different web request, the scope callback will return a different HttpContext object, resulting in a new instance of IFoo being activated and returned.

For each request that results in the creation of a new instance, the instance is associated with the object that is returned from the request’s scope callback. If the request has no scope callback (as in the case of transient services), no association takes place and Ninject will return a new instance for each request.

From there, the rules of cache-and-collect are simple:

  1. If the callback for a subsequent request returns an object that already has an instance of the requested type associated with it, it is re-used. (Assuming it was activated via the same binding.)
  2. When the scoping object is garbage collected, all instances associated with it are deactivated via Ninject.

Rule 2 was kind of tricky to implement. The garbage collector doesn’t fire any events when it’s run, and while you can register for callbacks when it’s executed, you have to mess with the settings on the runtime itself to get it to work. As a result, I had to get a little creative, and use a timer and WeakReferences to poll the GC to see when it is run. If you’re interested in the solution, you can see it here. Basically, every second, Ninject checks to see if the GC has run, and if so, it prunes its internal cache, deactivating any instances whose scopes have been collected.
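A stripped-down sketch of that cache (not Ninject’s actual code) shows the core mechanism: hold the scope through a WeakReference, and periodically prune entries whose scope has been collected:

```csharp
using System;
using System.Collections.Generic;

// Minimal cache-and-collect sketch: instances are associated with their
// scope object via WeakReference, and Prune() deactivates instances whose
// scope has been garbage collected.
public class ScopeCache
{
    private readonly List<KeyValuePair<WeakReference, IDisposable>> _entries =
        new List<KeyValuePair<WeakReference, IDisposable>>();

    public int Count
    {
        get { return _entries.Count; }
    }

    public void Remember(object scope, IDisposable instance)
    {
        _entries.Add(new KeyValuePair<WeakReference, IDisposable>(
            new WeakReference(scope), instance));
    }

    // In Ninject this runs on a timer, roughly once per second.
    public void Prune()
    {
        _entries.RemoveAll(delegate(KeyValuePair<WeakReference, IDisposable> entry)
        {
            if (entry.Key.IsAlive)
                return false;

            entry.Value.Dispose(); // "deactivate" the orphaned instance
            return true;
        });
    }
}
```

As long as the scope object is reachable, its instances stay cached; once it's collected, the next prune pass disposes them.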

Remember that you can use any object as a scope through the custom InScope() binding syntax. This is intended to replace the use of container hierarchies, which is a common pattern in other dependency injection frameworks, but something I refuse to implement in Ninject because of the complexity that comes with it.

Note that with cache-and-collect, instances are not guaranteed to be deactivated immediately when the scope terminates. For example, instances activated in request scope will not be collected immediately at the end of the web request, but when the HttpContext that was used to control the web request is garbage collected, the instances associated with it will be deactivated. However, they are guaranteed to eventually be disposed.

This just means that the normal rules apply to objects that hold scarce resources like file handles and database connections – either don’t couple the lifespan of the scarce resource to the lifespan of the object that holds it, or dispose of the holding object when you’re done with it!

Ninject does provide a way to get deterministic deactivation of your instances for custom scopes, if you’re willing to give up the POCO-ness of the scoping object. If your scope callback returns an object that implements INotifyWhenDisposed (an interface from Ninject), Ninject will immediately deactivate any instances associated with the object when you call the object’s Dispose() method.
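A plausible shape for that contract looks like the following; the exact interface definition here is an assumption based on the description above, not Ninject’s actual source:

```csharp
using System;

// Assumed shape of an INotifyWhenDisposed-style contract: IDisposable plus
// an event that fires when disposal happens.
public interface INotifyWhenDisposed : IDisposable
{
    event EventHandler Disposed;
}

// A hypothetical custom scope that trades POCO-ness for deterministic
// deactivation: disposing it signals anything cached on it.
public class UnitOfWorkScope : INotifyWhenDisposed
{
    public event EventHandler Disposed;

    public void Dispose()
    {
        EventHandler handler = Disposed;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}
```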

There’s one final way of handling scope in Ninject 2, through activation blocks. Blocks are a way to override the scope that was declared on a binding, and instead associate the activated instances with the block itself. Since the activation block that is returned implements INotifyWhenDisposed, any instances activated via the block are immediately deactivated when the block is disposed. The block object implements all of the same methods available on the kernel itself – such as Get() – but instead of executing them, it simply delegates any request it receives to the kernel that created it.

Here’s an example of using an activation block:

IKernel kernel = ...;

using (var block = kernel.BeginBlock())
{
  var foo = block.Get<IFoo>();
  var bar = block.Get<IBar>();
}

When your code hits the end of the using() block, the activation block is disposed, causing foo and bar to be deactivated. Note that you aren’t required to use activation blocks within using() constructs; as long as you hold an instance of the block and don’t dispose it, you can continue to activate instances through it and they will be associated with the block. Then when you’re done, dispose of the block, and your instances will be deactivated.

This is a problem that has been nagging me for a long time, and I’m pretty excited about this solution. If you have any feedback, or want to poke holes in the idea, by all means let me know!

Fast Late-Bound Invocation with Expression Trees

Note: After working with expression trees further, I’ve found that generating CIL by hand is dramatically faster than using expression trees. Still, this is an interesting concept, and I’ve kept this post here for posterity.

The implementation of Ninject has some solutions to interesting problems. One in particular is somewhat sticky: how do we call any method, without knowing what methods will be called, nor their signatures, until runtime? The easiest way to do this is via MethodInfo.Invoke(), but reflection-based invocation is extremely expensive in comparison to normal invocation. Fortunately, we can solve this problem through the use of anonymous delegates and runtime code generation.

In order to do this, we need some sort of late-binding system. In Ninject 1, I used DynamicMethod and System.Runtime.Emit to emit CIL at runtime. This solution worked well, but was very complex, difficult to understand, and didn’t support medium trust scenarios. Ninject 2 instead leverages expression trees to accomplish the same thing – and actually, under the hood, the solutions are identical, since expression trees are translated into CIL opcodes when you compile the Expression<TDelegate>. From a code perspective, however, using expression trees is a much cleaner solution because it offloads the heavy lifting to the types in the BCL.

Basically, what I’m talking about is taking any method and creating a delegate for it with this signature:

delegate object LateBoundMethod(object target, object[] arguments);

This is an open delegate, meaning it can be called on any instance of the type that declares the method that the delegate is bound to. The first parameter to the delegate, target, is the instance that the method will be called on. For example, if we create a LateBoundMethod delegate for String.StartsWith(), we can pass any string in as the first parameter.

The solution is surprisingly simple using expression trees:

using System;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

namespace FastDelegates
{
  public delegate object LateBoundMethod(object target, object[] arguments);

  public static class DelegateFactory
  {
    public static LateBoundMethod Create(MethodInfo method)
    {
      ParameterExpression instanceParameter = Expression.Parameter(typeof(object), "target");
      ParameterExpression argumentsParameter = Expression.Parameter(typeof(object[]), "arguments");

      MethodCallExpression call = Expression.Call(
        Expression.Convert(instanceParameter, method.DeclaringType),
        method,
        CreateParameterExpressions(method, argumentsParameter));

      Expression<LateBoundMethod> lambda = Expression.Lambda<LateBoundMethod>(
        Expression.Convert(call, typeof(object)),
        instanceParameter,
        argumentsParameter);

      return lambda.Compile();
    }

    private static Expression[] CreateParameterExpressions(MethodInfo method, Expression argumentsParameter)
    {
      return method.GetParameters().Select((parameter, index) =>
        Expression.Convert(
          Expression.ArrayIndex(argumentsParameter, Expression.Constant(index)),
          parameter.ParameterType)).ToArray();
    }
  }
}

When you call the Create() method, DelegateFactory creates an anonymous delegate that accepts loosely-typed parameters, casts them, and invokes the actual method that you specified.

You can use the DelegateFactory like this:

MethodInfo method = typeof(String).GetMethod("StartsWith", new[] { typeof(string) });
LateBoundMethod callback = DelegateFactory.Create(method);

string foo = "this is a test";
bool result = (bool) callback(foo, new[] { "this" });


Obviously this is a contrived example since we know the type at compile-time. However, if you don’t know what types or methods you’ll be using, this is a great way to avoid the expense of reflection. Once the delegate has been built, your code operates reflection-free no matter how many times you invoke the method.

We’re using this technique in Community Server REST futures, to bind a call to a REST action to a specific method on a controller that handles the request. Since these REST actions must support many successive calls, the use of these generated delegates dramatically increases our performance versus reflection-based invocation.

