Fernando Machado Píriz's Blog

Posts about digital transformation, enterprise architecture and related topics

Posts Tagged ‘.NET Framework 4.0’

A simple introduction to the Model View ViewModel pattern for building Silverlight and Windows Presentation Foundation applications

with 5 comments

Some time ago I was looking for a simple introduction to the Model View ViewModel pattern (also known as MVVM), and all the samples I was able to find were a bit too complicated for my taste. In this post I will try to present the MVVM pattern using the simplest possible example, focusing just on the key MVVM concepts. In other posts I will dive deeper into other closely related concepts, but let us start with the basics.

MVVM is a successor of another well-known and very successful pattern, the so-called Model View Controller (MVC). MVC was born (like many other well-known and successful patterns) in the Smalltalk world more than thirty years ago. With MVC the application is composed of three kinds of objects, with very clear and differentiated responsibilities:

  • The model. Usually there is only one model per application. The model is responsible for all of the application’s data and the related business logic.
  • The view or views. One or more representations of the application’s model for the end user. The view is responsible for showing data to end users and for allowing them to manipulate the application’s data.
  • The controller or controllers. Usually there is one controller per view, although it is not uncommon to see one controller per domain entity controlling multiple views. The controller is responsible for transferring data from the model to the associated view and vice versa. It is also responsible for implementing the view’s behavior in response to user actions.


With MVC each type of object is responsible for just one thing, which simplifies developing, understanding and testing the code. In addition, it is easy to replace views, or to have more than one view of the same data.

In the case of Silverlight and Windows Presentation Foundation (SL/WPF) applications, the .NET Framework provides the ability to use binding to transfer data to and from the view, so the controller is only responsible for implementing the view’s behavior. In this case, the controller is called a view model, leading to the MVVM pattern:

  • The model. Same as with MVC.
  • The view or views. Also the same as with MVC. Views are SL/WPF windows.
  • The view model. One view model per view. The view model is responsible for implementing the view’s behavior in response to user actions and for exposing the model’s data in an “easy-to-bind” way to the view.


MVVM shares all of the benefits of MVC, but it has an additional one: the simplification that results from using declarative binding to transfer data back and forth between the view model and the view.

I will illustrate how MVVM works with an example. The code I will be using in this post comes from my sample application Giving a Presentation (although you are encouraged to download the installation file and the source code of Giving a Presentation from CodePlex, there is no need to do so in order to follow this post).

Giving a Presentation is a simple application to change certain settings while delivering a presentation, e.g., hide desktop icons, remove or replace the desktop background, etc. The changes are reverted when the presentation ends.

The ability to change settings is provided by extensible parts and is not implemented in the application itself (more details in this post). So strictly speaking, the data Giving a Presentation deals with is just a boolean value indicating whether the user is giving a presentation or not: the user sets the value to true when the presentation starts, and sets it back to false when the presentation ends.

The model’s code is as follows. As promised, it is very simple; it has just a boolean property:

public class Model
{
    public bool GivingAPresentation { get; set; }
}

Here’s the view model’s code. It is also very simple. Like the model, it has just a boolean property, whose value is read from and written to an instance of the model.

public class MainViewModel
{
    private Model model;

    public bool GivingAPresentation
    {
        get { return model.GivingAPresentation; }
        set { model.GivingAPresentation = value; }
    }

    public MainViewModel()
    {
        this.model = new Model();
    }
}

The view (without any extensible part on it) looks like this:


The corresponding XAML code is:

<Window x:Class="GivingAPresentation.MainView"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:vm="clr-namespace:GivingAPresentation"
        Title="Giving a Presentation" Height="350" Width="525" Icon="/MvvmDemo;component/Images/Presentation.ico">
    <Window.Resources>
        <vm:MainViewModel x:Key="mainViewModel"/>
    </Window.Resources>
    <CheckBox Content="I'm currently giving a presentation"
              HorizontalAlignment="Left" VerticalAlignment="Top" Margin="12,12,0,0"
              IsChecked="{Binding Source={StaticResource mainViewModel}, Mode=TwoWay, Path=GivingAPresentation}" />
</Window>

Interestingly, there is no code-behind for this view[1] (furthermore, I have deleted the file MainView.xaml.cs from the project, and the solution still compiles and the application runs perfectly). Do you wonder why? Because there is no need for any code in the view’s code-behind class; the view’s behavior is coded in the view model, not in the view, remember?

The view is connected with the view model in three steps. First, an xmlns attribute in the Window node is used to alias the GivingAPresentation namespace as vm, with xmlns:vm="clr-namespace:GivingAPresentation". Second, an instance of the view model is created in the Window.Resources node; the name of the instance is declared with x:Key="mainViewModel". And finally, the IsChecked property of the view’s check box is bound to the view model’s GivingAPresentation property with IsChecked="{Binding Source={StaticResource mainViewModel}, Mode=TwoWay, Path=GivingAPresentation}".

And that’s it. WPF/SL does the magic of getting the check box’s state from the view model’s property value, which in turn is taken from the corresponding model’s property value. WPF/SL also does the magic of setting the view model’s property value whenever the user checks or unchecks the check box. The view model propagates the property value change to the corresponding property in the model. Through this chain, a change in the view is reflected in the model.

This example is perhaps too basic, but it has all the key components of the MVVM pattern, and clearly shows the basis for implementing WPF/SL applications using that pattern.

What is next? There are many things, very common in real-world scenarios, not addressed at all in this very simple demo; for example:

  1. Not only the view, but probably other objects as well, need to be aware of changes in the view model’s property values. That is where the INotifyPropertyChanged interface comes into play. By firing a PropertyChanged event every time a property’s value changes, other objects can react to those changes. It is worth mentioning that those objects are not known by the view model, but will be notified anyway, thanks to the .NET Framework’s events infrastructure.
  2. It is very common for the view to ask its view model to execute certain actions. This can be achieved by the view model exposing commands, and the view binding control’s actions (like click) to those commands.
  3. It is also very common in real-life applications to have interdependent views that need to communicate with each other. To keep those views, and their associated view models, independent from each other, view models can use messages to communicate.
  4. Who creates whom? In this simple demo the view declaratively creates the view model (in the view’s XAML code), and the view model creates the model. In large applications this approach is not always feasible, and you need other helper classes to manage object creation.
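To give item 1 a little shape, here is a hedged sketch (not taken from the Giving a Presentation code; class and property names are illustrative) of a view model property that raises PropertyChanged:

```csharp
using System.ComponentModel;

// Sketch only: a view model that notifies listeners when its
// boolean property changes.
public class NotifyingViewModel : INotifyPropertyChanged
{
    private bool givingAPresentation;

    public event PropertyChangedEventHandler PropertyChanged;

    public bool GivingAPresentation
    {
        get { return givingAPresentation; }
        set
        {
            if (givingAPresentation != value)
            {
                givingAPresentation = value;
                // Fire the event; subscribers are unknown to the view model.
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null)
                {
                    handler(this,
                        new PropertyChangedEventArgs("GivingAPresentation"));
                }
            }
        }
    }
}
```

With a view model like this, a TwoWay binding does not need to poll: the binding infrastructure subscribes to PropertyChanged and refreshes the bound control automatically.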

These aspects are really not part of the MVVM pattern, but they are needed to implement real-world applications using WPF/SL; that is why you will usually see them covered together with the MVVM pattern in the literature. I plan to cover them in future posts, just to keep the MVVM basics basic.
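As a hedged sketch of item 2 above, a minimal command class that a view model could expose and a button could bind to might look like this (the name RelayCommand is a common community convention, not something from this post):

```csharp
using System;
using System.Windows.Input;

// Sketch only: a command that forwards Execute to a delegate
// supplied by the view model.
public class RelayCommand : ICommand
{
    private readonly Action execute;

    public RelayCommand(Action execute)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        this.execute = execute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return true; // Always executable in this minimal sketch.
    }

    public void Execute(object parameter)
    {
        execute();
    }
}
```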

Hope you have enjoyed this explanation. Looking forward to seeing you soon. Bye.

[1] In this example. In Giving a Presentation’s real code, the view is responsible for presentation-related tasks, like showing the notification icon when minimized, which requires code behind the view.

Written by fmachadopiriz

June 10, 2010 at 12:41 am

Pex and Contracts: Better Together

with 3 comments

Pex is an automatic unit tests generator that seamlessly integrates with Visual Studio IDE. Contracts is an implementation of a concept (or technique) called Design by Contract (DBC). In this post I will show how to use Pex and Contracts together to improve code quality.

Some time ago I was working on an application where I needed to dynamically load, from certain assemblies, only the types implementing a certain interface. I will not go into details on how to solve the problem of dynamically loading types, but I will show how to use Pex and Contracts to generate test cases with high code coverage, using the class I created to accomplish the task as an example (by the way, I could have used the Managed Extensibility Framework to load the types, but in that case I would not have been able to write this post).

I implemented a Helper class that can find the types implementing a given interface by looking at all the types in an assembly. The class can also ensure that it is possible to create instances of those types, by looking for constructors among the instance methods. Then the class can invoke the constructor to create new instances of those types.

Here is the code of the Helper class I will be using in this post:

public class Helper
{
    public static IEnumerable<Type> LoadTypesFrom(
        string assemblyName, string interfaceName)
    {
        List<Type> result = new List<Type>();
        Assembly assembly = Assembly.LoadFrom(assemblyName);
        Type[] typesInAssembly = assembly.GetExportedTypes();
        foreach (Type type in typesInAssembly)
        {
            if (type.GetInterface(interfaceName) != null)
            {
                result.Add(type);
            }
        }
        return result;
    }

    public static IEnumerable<FileInfo> GetDllsIn(string path)
    {
        List<FileInfo> result = new List<FileInfo>();
        DirectoryInfo directoryInfo = new DirectoryInfo(path);
        foreach (FileInfo item in directoryInfo.GetFiles("*.dll"))
        {
            result.Add(item);
        }
        return result;
    }

    public static object CreateInstance(Type type)
    {
        ConstructorInfo ctor = type.GetConstructor(
            BindingFlags.Instance | BindingFlags.Public, null,
            CallingConventions.HasThis, Type.EmptyTypes, null);
        return ctor.Invoke(Type.EmptyTypes);
    }
}
The method LoadTypesFrom receives the name of an assembly, including the path, and the name of an interface, and returns a list of all types in the given assembly that implement the given interface.

The list is returned as IEnumerable<Type> and not as IList<Type> or ICollection<Type> to prevent Helper class clients from accidentally modifying the list by removing types or adding other types that might not implement the given interface.

The method works by loading the assembly from the given path with a call to Assembly.LoadFrom. Then it gets all the types in the assembly using Assembly.GetExportedTypes. Finally, it iterates over these types and determines whether each type implements the interface, using Type.GetInterface. The types that implement the interface are added to the result.

The method GetDllsIn receives a full path to a folder and returns a list of all files with .DLL extension in that folder. The list is also returned as an IEnumerable<FileInfo> for the same reasons mentioned for the previous method.

The method simply uses DirectoryInfo.GetFiles to get the list of files.

The CreateInstance method receives a type as a parameter and returns an instance of that type.

The method first looks for a constructor without parameters using Type.GetConstructor and then it creates an instance by calling the constructor using ConstructorInfo.Invoke.

We will create unit tests in Visual Studio (we first need to install Pex and Contracts; details on how to download and install both appear at the end of this post).

Here comes the magic. We right-click on the LoadTypesFrom method declaration, and then we click on the Run Pex menu item to invoke Pex.


Pex uses parameterized unit tests (PUTs). A PUT is simply a method that takes certain parameters, invokes the code being tested, and checks whether certain assertions are satisfied. For a PUT written in one of the .NET languages, Pex automatically produces a small test suite with high code coverage. Also, when a generated test fails, Pex can often suggest a fix. Pex runs and analyzes the code under test, learning about the program’s behavior by monitoring its execution, and generates new test cases with different behaviors.

When Pex runs for the first time, it finds an ArgumentNullException and shows what it has found:


We are off to a very good start. When programming Helper.LoadTypesFrom I did not realize that if assemblyName is null the method will not work; I should have checked for that. Fortunately, Pex tried to run LoadTypesFrom(null) and found the problem.

Typically we solve this by adding

if (assemblyName == null) throw new ArgumentNullException("assemblyName");

at the beginning of the method. It is a good programming practice to check that arguments are not null when the method cannot work with null values. But there are two things that are not properly solved even with this good practice:

  1. We cannot explicitly declare the prerequisite that the method requires a non-null argument. We can state that in the documentation, and we can also state in the documentation that an ArgumentNullException will be thrown if the argument is null (assuming other developers will read the documentation). But we cannot prevent a developer (not even ourselves, some time after writing the code) from forgetting that the argument cannot be null.
  2. We cannot get the program to stop compiling, or at least to give us a warning, if (by accident) we wrote LoadTypesFrom(null). Not only must we be able to state the preconditions of the method; the tools must also be able to interpret and validate those preconditions.

Here is where Contracts comes in.

The roots of this concept can be found in the fundamentals of software correctness proposed by C. A. R. Hoare and others a long, long time ago, who pursued ways to write programs that are correct, and to know that they are correct. More recently, Bertrand Meyer popularized the concept with the Eiffel programming language.

The key idea behind design by contract is that in order to determine whether a program is correct or not, we must first have a specification of what the program should do. Once we have that specification, deciding whether the program is correct is straightforward: if the program does what is specified, then it is correct; otherwise, it is not.

I do not want to bore you with details about design by contract. It is enough for now to say that, applied to the object-oriented programming paradigm, a program’s specification is given by:

  1. Preconditions. These are predicates (assertions that may be true or false) in the context of a method that must be satisfied in order to invoke the method. The responsibility for making the predicate true lies with the caller of the method: either it makes the predicate hold, or it cannot invoke the method. In return, the developer of the method benefits from knowing that the predicate is true when the method starts, and can build from that.
  2. Postconditions. These are predicates in the context of a method that are expected to be true when the method returns. The responsibility for making the predicate true is now on the developer of the method, who must code the method in such a way that the postcondition is satisfied at the end. In return, postconditions are a benefit to the caller: it knows that when the method returns, the predicate is true, and can continue from that fact.
  3. Invariants. These are predicates in the context of a class that are true throughout the entire life of each instance. Invariants act like preconditions and postconditions added at the beginning and the end of each instance method (and at the end of the constructors).
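The example in this post only exercises preconditions and postconditions, so here is a hedged sketch of all three assertion kinds on a hypothetical Counter class (the class and its names are mine, not part of the post's sample):

```csharp
using System.Diagnostics.Contracts;

// Sketch only: a counter whose specification says its value
// can never become negative.
public class Counter
{
    private int value;

    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        // Invariant: checked at the end of every public method and constructor.
        Contract.Invariant(value >= 0);
    }

    public int Value
    {
        get { return value; }
    }

    public void Increment()
    {
        // Postcondition: the value grows by exactly one.
        Contract.Ensures(Value == Contract.OldValue(Value) + 1);
        value++;
    }

    public void Decrement()
    {
        // Precondition: the caller may not decrement past zero.
        Contract.Requires(Value > 0);
        value--;
    }
}
```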

Microsoft Research’s Contracts project is not the first Microsoft attempt to add design by contract to C#; you have probably heard about the Spec# project.

The Contracts implementation is brilliant once you know all the restrictions that its design team had to overcome:

  1. They could not touch the CLR.
  2. They could not change the syntax of the language.
  3. They could not change compilers.
  4. They should make Contracts available in as many languages as possible.

The reasons for these restrictions come, in the first place, from the number of languages that the .NET Framework supports (do not forget there are other languages in addition to C# and Visual Basic); and secondly, from the fact that playing with the CLR can be a source of instability in the .NET Framework itself.

The solution was:

  1. Create a set of classes to implement preconditions, postconditions and invariants. The same classes can be used from any language.
  2. Integrate with Visual Studio to check assertions at design time. Violations of the assertions appear in the list of errors when compiling.
  3. Inject code at compile time to implement the behavior associated with the assertions. For example, a postcondition can be placed anywhere in the method, but in the generated code it always appears at the end.

The main class you use to implement design by contract is the Contract class. In Visual Studio 2008 and .NET Framework 3.5, in order to use this class you need to manually add the assembly Microsoft.Contracts.dll, located in %ProgramFiles%\Microsoft\Contracts\PublicAssemblies\v3.5, to the list of project references. In Visual Studio 2010 and .NET Framework 4.0, the assembly is already in the list of references by default. In either case, you should reference the namespace System.Diagnostics.Contracts with a using directive in the source files where you plan to use the Contract class. Finally, you must enable contract checking at design time and at runtime, using the Code Contracts tab in the project’s properties:


This entire introduction is needed to understand that the real way to specify that an argument cannot be null is:

public static IEnumerable<Type> LoadTypesFrom(
    string assemblyName, string interfaceName)
{
    Contract.Requires(assemblyName != null);
    Contract.Requires(interfaceName != null);
    // ...
}

The preconditions are declared with Contract.Requires. The argument of Requires is an assertion that may be true or false. If somewhere in the code there were a call like Helper.LoadTypesFrom(null, null), the compiler will flag it:


The message Contracts: Requires is false corresponds to the line

Helper.LoadTypesFrom(null, null);

while the message + location related to previous warning corresponds to the line

Contract.Requires(assemblyName != null);

How does Contracts affect test case generation with Pex?

When we run Pex again on the method LoadTypesFrom, more test cases appear, along with more clues for further completing the specification of the method:


The last error message suggests that we can add two new preconditions to LoadTypesFrom:

Contract.Requires(assemblyName.Length > 0);
Contract.Requires(interfaceName.Length > 0);


The last error message again suggests a precondition, but the assertion clause is more complex than the previous ones, because somehow we must specify that each and every one of the characters in the string is valid. This type of predicate requires what is called a universal quantifier: Contract.ForAll. The ForAll method is overloaded; the overload we are going to use requires an IEnumerable<T> and a Predicate<T>:

Contract.Requires(Contract.ForAll(Path.GetInvalidPathChars(),
    (Char c) => !assemblyName.Contains(c.ToString())));

The second argument is a lambda expression that is evaluated for each element of the first argument.

When we run Pex again, we see how the new test cases exercise the precondition we added, and again the last error message gives us clues about a new precondition:


The new preconditions are:

Contract.Requires(assemblyName.Trim() == assemblyName);
Contract.Requires(interfaceName.Trim() == interfaceName);

Pex generates even more test cases and, as before, there is a new error message:


When we add the precondition [1]


Pex generates eleven test cases without further messages.

Are we done? Not yet. Remember that the preconditions are obligations for those who invoke the method LoadTypesFrom. But what are the obligations of LoadTypesFrom itself? Or, put another way, where is it specified what LoadTypesFrom does?

It is time to add postconditions. The postconditions are added with Contract.Ensures:

Contract.Ensures(Contract.Result<IEnumerable<Type>>() != null);
Contract.Ensures(Contract.ForAll(Contract.Result<IEnumerable<Type>>(),
    (Type item) => item.GetInterface(interfaceName) != null));

The first postcondition specifies that the result is never null. This means that when no types are found in the given assembly, the method returns an empty list, not null [2].

The second postcondition also uses a universal quantifier. In prose, the postcondition specifies that all the types returned as a result implement the interface passed as argument, which is exactly what I told you when I described the method above!

That is the beauty of design by contract: it makes it possible to describe what a program (a method, in this case) does in an unequivocal, unambiguous and automatically verifiable way. In addition, as the assertions always accompany the code, they are easy to find and easy to keep updated.

But let us go back to Pex. To save the generated test cases, we select them, then we right-click on the selection, and then we choose Save…:


Pex shows us what it will do next, step by step:


After Pex makes the announced changes, the solution will have a new test project that we can compile and run like any other:


Before running the test cases generated by Pex, let us talk a little bit about code coverage.

One of the objectives of using Pex is to generate the smallest number of test cases that ensures the largest possible code coverage. The code coverage of test cases, or simply coverage, is the proportion of lines of code executed during test runs. If coverage is low, there are potentially some lines of code not reached during the tests that may contain bugs. We cannot say there are bugs, but we can say that if there are errors in the lines not executed, we will not find them.

Conversely, high coverage is no guarantee of better test cases or of the absence of errors, but if you let me choose, I prefer the largest possible coverage.

To measure coverage you first need to change the test run configuration. We click on Test, then click Edit Test Run Configurations, and then click on the desired configuration, which by default is Local Test Run:


Once there, we select Code Coverage from the list and mark the assembly whose coverage we want to measure in the Select artifacts to instrument list.

To run the test cases we click on Test, then on Run, and then we choose All Tests in Solution. At the end we get the result:


We do not know yet whether the LoadTypesFrom method works or not. To find out, we can write a test case that finds a class implementing an interface that we know for sure is in a certain assembly. We can add an interface and a class that implements that interface in the same place as the test cases generated by Pex, like these:

public interface Foo { }

public class Boo : Foo { }

Then we add a method like this:

public void TestLoadTypesFrom()
{
    IEnumerable<Type> result = Helper.LoadTypesFrom(
        Assembly.GetExecutingAssembly().Location, "Foo");
    bool booExists = false;
    foreach (Type type in result)
    {
        if (type.Equals(typeof(Boo)))
        {
            booExists = true;
        }
    }
    PexAssert.IsTrue(booExists);
}
Since the interface Foo and the class Boo are implemented together with the test cases, they are in the same assembly, the assembly running when the test case gets invoked. This assembly is obtained by calling Assembly.GetExecutingAssembly. So we can be sure that the method LoadTypesFrom should return Boo in the list. The foreach loop searches for the type Boo in the result, and PexAssert.IsTrue is used to display the result together with the results of the other test cases generated by Pex.



Pex is a Microsoft Research project that integrates directly into Visual Studio. It makes a systematic analysis of the code, looking for boundary values, exceptions, etc., and so finds interesting input and output values of the selected methods, which can be used to generate a small test suite with high code coverage.

Contracts is also a Microsoft Research project, to apply design by contract in .NET Framework languages.

Each one on its own is very interesting, but when they are used together, we can create better specifications with Contracts and better test cases with Pex. That is why I say: Pex and Contracts, better together!


The homepage of Pex at Microsoft Research is http://research.microsoft.com/en-us/projects/pex/

In MSDN DevLabs it is http://msdn.microsoft.com/en-us/devlabs/cc950525.aspx

The homepage of Contracts at Microsoft Research is http://research.microsoft.com/en-us/projects/contracts/

In MSDN DevLabs it is http://msdn.microsoft.com/en-us/devlabs/dd491992.aspx

[1] Some might say it is not correct to add this precondition, because it does not indicate a condition on the program but on a file that is external to the program. What we are stating is that the assembly must exist in order to extract types from it.

[2] It is debatable whether the result should be null instead of an empty list. Someone might say you are instantiating an object that is never used, which negatively impacts performance. I prefer the approach of returning the empty list, because I can always use the result in a foreach loop without fear that it might fail on a null reference; in this case, where the performance impact is minimal, that trade-off works better.

Written by fmachadopiriz

May 8, 2010 at 11:26 am

A Simple Introduction to the Managed Extensibility Framework

with 6 comments

It is very common for us developers to spend more time modifying existing applications than building new ones from scratch. New business requirements usually demand new features in software applications. Nowadays, the way those requirements are usually addressed ends when we generate, test, and deploy a new version of the application. Those activities very often affect the entire application, not only the new features added.

When requirements change, we developers must change the corresponding features in the application. After years and years of evolution, the source code implementing those features typically has undesired dependencies, and modifications can make the application unstable, or demand extensive regression testing to make sure new bugs have not been introduced. In this case, too, the story ends when we generate and deploy the entire application again.

These days businesses change more, and more often, to survive in a globalized and competitive world. And the software applications that make those businesses work must also change, and change faster.

Let’s suppose for a moment that every single piece of functionality of an application can be decomposed into a “part”. What we need is the ability to develop weakly coupled parts, independent not only in their structure, but also in their testing and deployment, and the ability to easily compose those parts into an application.

The software engineering principles required to solve this situation have been known for a long time, and many organizations apply them successfully. But those solutions are usually built on a case-by-case basis, and are not available to the general public like us.

Recently, some software development frameworks have appeared for developing applications in parts. The Managed Extensibility Framework (MEF) is one of them.

MEF defines the following roles:

  • Exported parts. They declare that they can be used to compose an application, and the contract they implement. They are independent units of development, compilation, and deployment. They are weakly coupled, not only to the other parts of the application they will compose, but also to the application itself, i.e., a part does not know the other parts, and does not necessarily know in which application it will be used.
  • Import points. They are variables that hold the imported part, or collections of imported parts, that must implement a specific contract. Parts are automatically created from the information contained in parts catalogs.
  • Parts catalogs. They contain the parts’ definitions: where the parts are and which contract they implement.
  • Parts containers. They hold part instances and perform the composition of parts.

I will show you how to develop a MEF application step by step. To make MEF easy to understand, the application must be simple, but it must offer multiple features that can be decomposed into parts.

The application in this example allows writing some text and then transforming it by applying different algorithms. Each algorithm offers a different piece of functionality, and that is what I will turn into parts. The application looks like this:


The drop down list shows available transformations:


The goal is to develop the user interface and each transformation independently of each other. There we go.

We declare the contract that parts must implement with the IFilter interface as follows:

public interface IFilter
{
    string Filter(string input);
}

Then we create a part (the ToUnicodeFilter transformation, for example) implementing the IFilter interface, and we declare it as exportable with the MEF attribute Export:

[Export(typeof(IFilter))]
public class ToUnicodeFilter : IFilter
{
    public string Filter(string input)
    {
        StringBuilder output = new StringBuilder();
        foreach (char item in input)
            output.AppendFormat(" U+{0:x4}", (int)item);
        return output.ToString();
    }
}

The transformation in this case converts each character of the input text into its corresponding Unicode representation.

We can have as many parts as we want, but all of them must implement the IFilter interface and be decorated with the Export attribute.
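For instance, a second, hypothetical part that upper-cases its input would follow exactly the same shape (ToUpperFilter is an illustrative name, not a filter shipped with the sample; IFilter is the interface declared above):

```csharp
using System.ComponentModel.Composition;

// Sketch only: another exported part implementing the same contract.
[Export(typeof(IFilter))]
public class ToUpperFilter : IFilter
{
    public string Filter(string input)
    {
        return input.ToUpper();
    }
}
```

Dropping the assembly containing a class like this into the Extensions folder would be enough for it to appear in the drop down list.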

Let’s start coding the composition of the parts into the application. We need a catalog to describe our parts. In this example all the assemblies containing parts will be in a folder called Extensions, so we use a DirectoryCatalog:

DirectoryCatalog catalog = new DirectoryCatalog("Extensions");

We also need a container to hold the part instances and to perform the composition. The container receives the catalog as a parameter, so it knows where to find the parts:

CompositionContainer container = new CompositionContainer(catalog);

We are almost there. What we need now is a point where the parts are imported. In this case there can be many transformations, so we declare an IList<IFilter> that we decorate with the ImportMany attribute:

[ImportMany(typeof(IFilter))]
private IList<IFilter> filters = new List<IFilter>();

The last thing we need to do is to compose the parts, indicating to the container where the import points are defined:

container.ComposeParts(this);
Then we can iterate over the list to populate the filters in the drop down list:

private void Window_Loaded(object sender, RoutedEventArgs e)
{
    foreach (IFilter filter in filters)
        comboBoxFilters.Items.Add(filter.GetType().Name);
}

When the user clicks the Apply button, the filter to execute is found in the list of filters by using the index of the element selected in the drop down list:

private void buttonApply_Click(object sender, RoutedEventArgs e)
{
    if (comboBoxFilters.SelectedIndex != -1)
    {
        int index = comboBoxFilters.SelectedIndex;
        IFilter filter = filters.ElementAt(index);
        textBoxOutput.Text = filter.Filter(textBoxInput.Text);
    }
}

Three assemblies are in play here. The first is where the IFilter interface is declared. The second is where the ToUnicodeFilter class is declared. The third is the application itself. The interesting thing is that in order to add another transformation, or to modify an existing one, the only thing we need to do is deploy the assembly to the Extensions folder. No other assemblies are affected, including the application assembly.

How does this magic work? When the ComposeParts method of the container is called, passing the application itself as a parameter, the container looks for all variables decorated with the ImportMany (or Import) attribute. In each case, the container also finds the interface required by the import point.

The container knows the catalog, because we passed it as a parameter to the constructor. Since the catalog has information about all parts, including which parts implement which interface, the container can find which part or parts implement the interface of each import point. As the catalog also knows where the assemblies implementing the parts are, the container can create instances of the appropriate part or parts and assign them to the variables corresponding to the import points.
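Putting the pieces together, a minimal sketch of the host's composition code could look like this (FilterHost is a hypothetical name; ComposeParts is an extension method from the System.ComponentModel.Composition namespace, and "Extensions" is the folder name chosen above):

```csharp
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IFilter { string Filter(string input); }

public class FilterHost
{
    // Import point: the container fills this list with every exported IFilter.
    [ImportMany]
    private IList<IFilter> filters = new List<IFilter>();

    public void Compose()
    {
        // 1. The catalog describes where the parts live.
        var catalog = new DirectoryCatalog("Extensions");

        // 2. The container matches exports to imports using the catalog.
        var container = new CompositionContainer(catalog);

        // 3. ComposeParts scans this instance for [Import]/[ImportMany]
        //    members and assigns matching part instances to them.
        container.ComposeParts(this);
    }
}
```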

Finally, the composition “builds itself”: the work is done by MEF, not by the developer. The developer only decorates types with Export and variables with Import or ImportMany, creates one or more catalogs and the container, and finally composes the parts. Simple, isn’t it?

Remember that the challenge was to develop weakly coupled parts, independent not only in structure but also in deployment, and to have those parts easily composed into an application. Goal achieved? I think so.

You can download the code for this sample application from here. The sample contains even more parts than the ones I am showing here.

See also MEF Home and MEF Overview.

In an upcoming post I will write on how to associate metadata to the parts, for example, to show a friendly name instead of the type name in the drop down list. See you later.

Written by fmachadopiriz

April 19, 2010 at 1:55 am

Covariance and contravariance made easy

leave a comment »

One of the most important new features in C# 4.0 and .NET Framework 4.0 is the introduction of covariance and contravariance into the language. I have spoken several times about covariance and contravariance while presenting what is new in C# 4.0 and .NET Framework 4.0, and also in the old blog, but I am not sure all attendees easily grasp these concepts. Until now I have started from the formal definitions of covariance and contravariance and then shown how to implement them in C#. That is probably not the best approach, so now I will try to explain these concepts in a different way. Here we go.

Take a look at the following classes, Animal and Cat inheriting from Animal:

class Animal { }
class Cat : Animal { }

The following declarations are valid in any C# version:

Cat kitty = new Cat();
Animal animal = kitty;

Every Cat is an Animal (that is what Cat inherits from Animal means), so I can assign the variable kitty to the variable animal. There is nothing new so far.

Look now at what happens when I try to do something similar with an enumerable of Animal and an enumerable of Cat:

IEnumerable<Cat> cats = new List<Cat>();
IEnumerable<Animal> animals = cats;

Every Cat is an Animal, so intuitively any enumerable of Cat is an enumerable of Animal, right? Wrong, at least for all compilers before C# 4.0; they say they cannot convert IEnumerable<Cat> into IEnumerable<Animal> and ask me if I am missing a cast.


I can add the cast, but it will be an unsafe cast: the program will compile, but I will get an InvalidCastException at runtime when the assignment executes.

Okay, you and I will agree that even if the C# compiler does not accept that every enumerable of Cat is also an enumerable of Animal, intuitively we accept that sentence, in the same way we accept that every Cat is an Animal.

C# 4.0 solves the conflict. The code fragment above happily compiles and does not generate any exception at runtime, matching our intuition.

Why does the same code that compiles in C# 4.0 not compile in previous versions?

Before C# 4.0, the IEnumerable<T> interface was declared as:

public interface IEnumerable<T> : IEnumerable

While in C# 4.0 it is declared as:

public interface IEnumerable<out T> : IEnumerable

Note the out keyword beside the T type parameter: it indicates that IEnumerable is covariant with respect to T. Generally speaking, given S<T>, S being covariant with respect to T implies that, if the assignment Y ← X is valid when X inherits from Y, then the assignment S<Y> ← S<X> is also valid.
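The out annotation is not limited to IEnumerable; you can apply it to your own generic interfaces as long as T appears only in output positions. A minimal sketch (IProducer and CatProducer are hypothetical names, reusing the Animal and Cat classes above):

```csharp
class Animal { }
class Cat : Animal { }

// T appears only as a return type, so the interface can be declared covariant.
interface IProducer<out T>
{
    T Produce();
}

class CatProducer : IProducer<Cat>
{
    public Cat Produce() { return new Cat(); }
}

class CovarianceDemo
{
    static void Main()
    {
        // Valid thanks to covariance: a producer of Cat is a producer of Animal.
        IProducer<Animal> producer = new CatProducer();
        Animal animal = producer.Produce();
    }
}
```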

Let’s take a look now at another code fragment involving actions (actions are delegates to functions with the form void Action<T>(T obj)) on Animal and Cat.

Action<Animal> doToAnimal = target => { Console.WriteLine(target.GetType()); };
Action<Cat> doToCat = doToAnimal;
doToCat(new Cat());

Once again, since every Cat is an Animal, an Action<Animal> that I can apply to an Animal should also be an Action<Cat> that I can apply to a Cat. Observe that even while the sentence is reasonable, the parameter types go the other way around compared to the previous case: there I was assigning an enumerable defined in terms of Cat to an enumerable defined in terms of Animal, while here I am assigning an action defined in terms of Animal to an action defined in terms of Cat.

Even if the sentence is reasonable, compilers before C# 4.0 do not like the assignment and fail with a message similar to the previous case: cannot convert an Action<Animal> into an Action<Cat>; in this case, with no question about a cast.


Once again C# 4.0 solves the problem, and the code fragment above does compile. Let’s see how actions are declared:

Before C# 4.0 the Action delegate was declared as follows:

public delegate void Action<T>(T obj);

While in C# 4.0 is now declared as:

public delegate void Action<in T>(T obj);

Observe now the in keyword beside the type parameter T: it indicates that the Action delegate is contravariant with respect to T. Generally speaking, given S<T>, S being contravariant with respect to T implies that, if the assignment Y ← X is valid when X inherits from Y, then the assignment S<X> ← S<Y> is also valid.
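Analogously, the in annotation works on your own generic interfaces whenever T appears only in input positions. A minimal sketch (IConsumer and AnimalConsumer are hypothetical names, again reusing Animal and Cat):

```csharp
class Animal { }
class Cat : Animal { }

// T appears only as a parameter type, so the interface can be declared contravariant.
interface IConsumer<in T>
{
    void Consume(T item);
}

class AnimalConsumer : IConsumer<Animal>
{
    public void Consume(Animal item) { /* can handle any Animal */ }
}

class ContravarianceDemo
{
    static void Main()
    {
        // Valid thanks to contravariance: a consumer of Animal can consume a Cat.
        IConsumer<Cat> consumer = new AnimalConsumer();
        consumer.Consume(new Cat());
    }
}
```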

Summarizing, covariance in C# 4.0 allows a method to return a result of a type derived from the type parameter defined in the interface. In this way, it allows assignments of generic types that, while intuitive, were not allowed before, such as IEnumerable<Animal> ← IEnumerable<Cat>, when Cat inherits from Animal.

Note that IEnumerable can only return instances; it does not receive instances as parameters. That is why the keyword to declare IEnumerable covariant with respect to T is out.

Analogously, contravariance allows a method to have parameters of a type that is an ancestor of the type specified as the type parameter in the delegate. This way, it also allows assignments that, while intuitive, were not possible before, like Action<Cat> ← Action<Animal>. Observe that in this case Action can only receive instances as parameters; it does not return instances. That is why the keyword to declare Action contravariant with respect to T is in.
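Both rules can be seen together in one short sketch, assuming the Animal and Cat classes from above (this only compiles when targeting C# 4.0 / .NET Framework 4.0 or later):

```csharp
using System;
using System.Collections.Generic;

class Animal { }
class Cat : Animal { }

class VarianceDemo
{
    static void Main()
    {
        // Covariance: IEnumerable<out T> lets a sequence of Cat
        // be used where a sequence of Animal is expected.
        IEnumerable<Cat> cats = new List<Cat> { new Cat() };
        IEnumerable<Animal> animals = cats;

        // Contravariance: Action<in T> lets an action on Animal
        // be used where an action on Cat is expected.
        Action<Animal> doToAnimal = a => Console.WriteLine(a.GetType().Name);
        Action<Cat> doToCat = doToAnimal;

        foreach (Animal animal in animals)
            doToCat((Cat)animal); // prints "Cat"
    }
}
```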

In this post I showed an example of covariance with a generic interface and an example of contravariance with a generic delegate; in an upcoming post I will show different examples and list which interfaces and delegates in .NET Framework 4.0 are covariant and which are contravariant.

Sample code can be downloaded from here. To test it in compilers earlier than C# 4.0 and see the difference, change the target framework in the project properties:


Hope you have enjoyed this post. See you.

Written by fmachadopiriz

April 16, 2010 at 3:31 am