Exploring Entity Framework Core 1.0.0 RTM Changes: Understanding a breaking change in the Update method behaviour between RC1 and RTM

It’s been a while since my last post but finally I’ve found some time to get this one together, albeit a shorter post this time around.

Outside of my day job, when time permits I like to code for the open source charity project, allReady. This ASP.NET Core web application has been developed during the betas of .NET Core through to RC1. Recently, with the help of the very knowledgeable Shawn Wildermuth, the project has been upgraded to run against the final 1.0.0 RTM version of .NET Core.

In this post I’m going to talk about one specific change in Entity Framework Core 1.0.0 between RC1 and RTM which caused some breaks in our code.

Before diving into the issue, I need to briefly explain the structure of our code. We have been working to move a lot of our database logic in allReady into Mediatr handlers. This has proven to be a great way to separate concerns and split up the logic. Our controllers can send messages (commands or queries) via Mediatr to perform actions against the database. The controllers have no dependencies on the database layers and are therefore nice and slim. If you want to read more about how we’ve used this pattern, I covered Mediatr in my previous blog post. For this post, we’ll be looking at the code in a particular handler. The code we’re looking at is not handler specific; I point it out just in case the class seems a little confusing as to where it fits into our project. We’ll focus in on a few specific lines of code within the handler.

In a number of places within our code we need to handle the creation or update of a record stored in the database. For example, we have the concept of Itineraries. In allReady an itinerary represents a series of work items (requests) that are grouped together in order to be worked on by volunteers.

In our .NET Core RC1 code base we had the following handler:

public class EditItineraryCommandHandlerAsync : IAsyncRequestHandler<EditItineraryCommand, int>
{
	private readonly AllReadyContext _context;

	public EditItineraryCommandHandlerAsync(AllReadyContext context)
	{
		_context = context;
	}

	public async Task<int> Handle(EditItineraryCommand message)
	{
		try
		{
			var itinerary = await GetItinerary(message) ?? new Itinerary();

			itinerary.Name = message.Itinerary.Name;
			itinerary.Date = message.Itinerary.Date;
			itinerary.EventId = message.Itinerary.EventId;

			_context.Update(itinerary);
			await _context.SaveChangesAsync().ConfigureAwait(false);

			return itinerary.Id;
		}
		catch (Exception)
		{
			// There was an error somewhere
			return 0;
		}
	}

	private async Task<Itinerary> GetItinerary(EditItineraryCommand message)
	{
		return await _context.Itineraries
			.SingleOrDefaultAsync(c => c.Id == message.Itinerary.Id)
			.ConfigureAwait(false);
	}
}

This handler is called from both the create and edit POST actions on our itinerary controller and is intended to handle both scenarios. Within the Handle method we first try to retrieve an existing itinerary based on the Id of the itinerary object being passed in as part of our message. If this does not return an existing itinerary we null coalesce and create a new empty Itinerary object. We then set the properties of our itinerary object based on those coming in via the message (populated by the user in the front end admin page). Then we call Update on the EF context, passing in the Itinerary object and finally call SaveChangesAsync to apply the changes to the database.

This is where things broke for us after beginning to use the RTM version of the EF Core library. In RC1 and prior, the Update method would check the value of the key property on the model. If it determined that the object couldn’t be an existing record (i.e. an Id of zero in our case), the Update method marked the object as Added in the DbContext change tracking; otherwise it was set as Modified.

Between RC1 and RTM, the Entity Framework team tightened up the behaviour of the Update method so that it only performs the action implied by its name. Any object passed in will be marked as Modified, even an object with an Id of zero. It’s up to the caller to call this method correctly.
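To illustrate the difference, here’s a simplified sketch (not our handler code; it just restates the behaviour described above):

var itinerary = new Itinerary();    // Id is 0 - this is a brand new record
_context.Update(itinerary);         // RC1: key value inspected, entity tracked as Added
                                    // RTM: always tracked as Modified, regardless of the key value
await _context.SaveChangesAsync();  // RTM: generates an UPDATE which affects no rows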

The resulting exception, thrown when calling SaveChangesAsync after adding a new record against the RTM code, is…

Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=527962 for information on understanding and handling optimistic concurrency exceptions.

Essentially this tells us that we sent in an object marked as modified and EF therefore expected to get a count of 1 row being modified against the database. However, since the id on our new object is zero (this is a new record), it won’t match any existing records in the database and as such, no records are actually updated.

That explains the break we experienced. On reflection, the change makes sense as it avoids any assumptions being made by EF about our intentions. We’re calling update, so it marks the object as modified. We’re expected to use the Add method for new objects.

So, with the problem understood, what do we do about it? There were a number of possible options that I considered when putting in a fix for this issue. I won’t go into great detail here since ultimately I was directed to a very sensible and simple option which I’ll share in a few minutes, but at a high level we could have…

  1. Moved from a single shared handler to two separate handlers, one specifically for creating and one specifically for editing an Itinerary. In that case each handler would know whether to call either Add or Update explicitly. Note that Update in this case would not need to be called since the object is already tracked by the context after we get it from the database – more on that later!
  2. Added some logic within our own code to check if the Id is zero and if so, assume we want to Add the object to the context instead of Update (there’s a rough sketch of this just after the list).
  3. Utilised a context extension we have in the project which tries to determine whether to call Add or whether to call Update based on the EntityState of the object. This is similar to option 2, but would allow shared use of similar logic.

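To make option 2 concrete, here’s a minimal sketch of what that check might have looked like. This isn’t code we shipped; it assumes an Id of zero always means a brand new record:

if (itinerary.Id == 0)
{
	_context.Add(itinerary);      // new record - tracked as Added
}
else
{
	_context.Update(itinerary);   // existing record - tracked as Modified
}

await _context.SaveChangesAsync().ConfigureAwait(false);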
All three would have worked to some extent, although not without some possible further issues we’d have needed to address. However after opening an issue on the EF core GitHub repository to try to understand the change, Arthur Vickers suggested a much cleaner solution for our case.

Arthur proposed the following replacement code:

var itinerary = await GetItinerary(message) ?? _context.Add(new Itinerary()).Entity;

itinerary.Name = message.Itinerary.Name;
itinerary.Date = message.Itinerary.Date;
itinerary.EventId = message.Itinerary.EventId;

await _context.SaveChangesAsync().ConfigureAwait(false);

It’s a small but elegant change which actually only touches two lines.

First change:

var itinerary = await GetItinerary(message) ?? _context.Add(new Itinerary()).Entity;

What this now does is first try to get an itinerary as we had before. If we have this then the itinerary variable is set and we can change the values of the properties as required. It’s important to realise at this point that since we queried the db for this object, it’s now already being tracked by the context. As such, if we adjust properties on the object, EF will detect these changes and mark them as modified. We therefore would not need to call Update which has a different intended use case.

If we don’t get an existing record back from the query, then we’re working with a new record. In that case, the code above adds a new empty itinerary object to the context. Add returns an EntityEntry and we can use the .Entity property of that to return the actual entity object (an Itinerary). It is assigned to our local variable and we can then set its properties. Since the Add method on the context has already been called it is already being tracked by the context with an EntityState of added.

Second change:

We can remove the _context.Update(itinerary); line entirely. Since we now have a correctly tracked entity in the context after our first line (either modified or added), we don’t need to try and attach it at this stage. We have re-ordered the logic a little, which makes things simpler and cleaner. We can just call SaveChangesAsync(), which will send SQL commands to add or update as necessary, based on the change tracking information.
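If you ever want to confirm what the change tracker has decided before saving, you can inspect the entry’s state (a quick diagnostic sketch, not part of the handler):

// EntityState.Added will produce an INSERT; modified properties on a tracked entity produce an UPDATE.
var state = _context.Entry(itinerary).State;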

In Summary

This issue highlighted for me personally that I still need to think carefully about how EF works under the covers. I’ve tried to read a lot on EF Core and feel I have a better understanding of how it works at a medium-to-high level. In this case, our code took advantage of behaviour in EF RC1 which was in reality hiding a bit of an issue in our code. I don’t think the code was “bad” exactly, just that, as we’ve explored, with a bit of thinking about the change tracking behaviour we could improve it. At the time of writing the original code, using Update for both the add and edit scenarios was valid, although perhaps a little naïve. We relied on EF correctly assessing our intention and marking the object with the correct state.

When working with EF I think it’s important to have a basic understanding of how the change tracking works and what it does for us. If we query for a record via the context, then that record starts being tracked. We don’t need to expressly call Update since the context is already aware of the object and the change tracker can manage any modified properties during SaveChanges.
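As a tiny sketch of that behaviour (illustrative only, using a hypothetical itinerary Id):

var itinerary = await _context.Itineraries.SingleAsync(i => i.Id == 5); // tracked from this point on
itinerary.Name = "Updated name";   // the change tracker detects the modified property
await _context.SaveChangesAsync(); // an UPDATE is issued for the changed column - no call to Update() required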

Next steps

There is certainly more for me to personally learn about EF and its API in general. For example, in this case I learned about the Entity property that EF exposes on an EntityEntry. Beyond the basics, EF Core exposes many ways to manage the tracking of entities and those do warrant exploration and experimentation to find the right performance vs complexity balance for each scenario.

The above code still has room for improvement as well. One thing that stands out is that we are performing a db query to get an object, in order to update it and save it. This is slightly inefficient in our case. When building the edit page, we’ve already queried for the object to set the form fields in the UI. On our post we’re then querying again, purely to attach the object in its current state to the context. A pattern I’ve started using elsewhere for a more performant update is to manually attach an object and mark its properties as modified without the need to query it first. In this case, it may be unnecessarily complex in order to remove a pretty light db query, but as always, it’s worth considering.
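For reference, that attach-and-mark-modified pattern looks something like the sketch below. This is a hedged example rather than code from the project; it assumes the record genuinely exists and ignores any concurrency concerns:

var itinerary = new Itinerary
{
	Id = message.Itinerary.Id, // key of the existing row - no query required
	Name = message.Itinerary.Name,
	Date = message.Itinerary.Date,
	EventId = message.Itinerary.EventId
};

_context.Attach(itinerary); // tracked as Unchanged
_context.Entry(itinerary).Property(i => i.Name).IsModified = true;
_context.Entry(itinerary).Property(i => i.Date).IsModified = true;
_context.Entry(itinerary).Property(i => i.EventId).IsModified = true;

await _context.SaveChangesAsync().ConfigureAwait(false); // the UPDATE sets only the three marked columns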

My thanks go out to Arthur Vickers for his response to my EF issue. It’s extremely helpful being able to reach out to the team directly as we all learn the nuances of the changes in the .NET core libraries.

Read More

CQRS with Mediatr and ASP.NET Core: Implementing basic CQRS with ASP.NET Core

I was first introduced to the Mediatr library when I started contributing to the allReady project. It is now being used quite extensively within that application. It has proven to be very useful in decoupling code and separating the concerns. Contributors to the project have recently worked through a good chunk of the codebase and moved many database commands and queries over to the Mediatr request/response pattern. This is allowing us to move away from a large data access wrapper to multiple handlers that clearly handle one function and which are much easier to maintain. This has led to smaller, more testable classes and made the code easier to read as a result.

CQRS Overview

Before going into Mediatr specifically I feel it’s worth briefly talking about Command Query Responsibility Segregation or CQRS for short. CQRS is a pattern that seeks to separate the code and models which perform query logic from the code and models which perform commands such as an insert or update. In each case the model to define the input and output usually differs. By separating the commands and queries it allows the input/output models to be more focused on the specific task they are performing. This makes testing the models simpler since they are less generalised and are therefore not bloated with additional code. Rather than returning an entire database model, a query response model will usually contain only a subset of a table’s fields and possibly data from many related objects, all needed to form a particular view. The input model for a query may be very small. Commands on the other hand will usually require larger input models which more closely map to a full database table and have slimmer response models. Commands may perform some business logic on the properties in order to validate the object before saving it into a database. By contrast the models used for a query will generally contain less business logic.

As with any pattern, there are pros and cons to consider. Some may feel that the complexity added by having to manage different models may outweigh the benefits of separating them. Also, as with all patterns, the concept can be taken too far and start to become a burden on productivity and readability of the code. Therefore the degree to which one uses the CQRS pattern should be governed by each use case. If it’s not providing value, then don’t use it!

Coming back to the allReady project, the approach taken there has been to separate the querying of data used to build the view models from the commands used to update the database. Queries occur far more often than commands, as each page load will need to build up a view model, often with calls to the database to pull in relevant data. By keeping the queries distinct from the commands we can manage the exact shape of the input as well as the size of the data being returned. Queries need to perform quickly since they have a direct effect on user experience and page load times. Keeping the models as slim as possible and only querying for the required database columns can help the overall performance.

Back to Mediatr

The Mediatr library provides us with a messaging solution and is a nice fit to help us introduce some concepts from the CQRS pattern into our code. In allReady it has allowed the team to greatly simplify the controllers and in many cases they now have a single dependency on Mediatr, which is injected by the built-in ASP.NET Core dependency injection. The MVC actions use Mediatr to send messages for the data they need to populate the view (queries) or to perform actions that update the database (commands).

Mediatr has the concept of handlers which are responsible for dealing with a query or command message. A handler is set up to handle a particular message which will contain the input needed for the command or query. A query message will usually need only a few properties, perhaps just an id of the object to query for. A command message may contain a more complete object with all of the model’s properties that need to be updated by the handler.

Using Mediatr with ASP.NET Core

Using Mediatr in an ASP.NET Core project is pretty straightforward. There are a couple of steps required in order to set things up.

Firstly we need to bring in the Mediatr package from NuGet. The quickest way is to use the package manager console by issuing the command “Install-Package MediatR”. At the time of writing the current version is 2.0.2.

Now that we have Mediatr added to our project we need to register its classes with the ASP.NET Core Dependency Injection (DI) container. The exact way you do this will depend on which DI container you are using. I’m going to show how I’ve got it working in ASP.NET Core with the default container. I ended up pretty much following a great Gist that I found. It got me started with registering Mediatr and its delegate factories, so all credit to the author.

Within the Startup.cs class ConfigureServices method I added the following code to register Mediatr.

services.AddScoped<IMediator, Mediator>();
services.AddTransient<SingleInstanceFactory>(sp => t => sp.GetService(t));
services.AddTransient<MultiInstanceFactory>(sp => t => sp.GetServices(t));
services.AddMediatorHandlers(typeof(Startup).GetTypeInfo().Assembly);

First I add the Mediatr component itself. There are also two delegate types for the Mediatr factories which must be registered. The final line calls an extension method which will look through the assembly and ensure that any class which implements IRequestHandler or IAsyncRequestHandler is registered. By reflecting through the assembly in this way we avoid having to manually map each handler in DI when we create it.

public static class MediatorExtensions
{
	public static IServiceCollection AddMediatorHandlers(this IServiceCollection services, Assembly assembly)
	{
		var classTypes = assembly.ExportedTypes.Select(t => t.GetTypeInfo()).Where(t => t.IsClass && !t.IsAbstract);

		foreach (var type in classTypes)
		{
			var interfaces = type.ImplementedInterfaces.Select(i => i.GetTypeInfo());

			foreach (var handlerType in interfaces.Where(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(IRequestHandler<,>)))
			{
				services.AddTransient(handlerType.AsType(), type.AsType());
			}

			foreach (var handlerType in interfaces.Where(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(IAsyncRequestHandler<,>)))
			{
				services.AddTransient(handlerType.AsType(), type.AsType());
			}
		}

		return services;
	}
}

The AddMediatorHandlers method first finds all class types in the assembly. It loops through each class and gets its interfaces. If any of the interfaces are an IRequestHandler or IAsyncRequestHandler then we add a transient mapping to the services collection.

If you need further details or samples for registering Mediatr with a different DI container I recommend you check out the wiki on Github which contains some setup guidance and links to samples.

Messages and Handlers

The pattern we’ve employed in allReady is to use the Mediatr handlers to return ViewModels needed by our actions. An action will send a message of the correct type to the Mediatr instance and expect a ViewModel in return. All of the logic to handle the DB queries which fetch the data needed to build up the view model are contained within the handler. We also use Mediatr to issue and handle commands for HTTP post/put/delete request actions. These actions will often need to update a record in the database. We send the created/updated object in the message and a handler picks it up, processes it and returns a success or failure result back to the action.

You can also chain Mediatr handlers by having a handler send out its own message, which allows you to compose queries to get the data you need. For example, if you have a handler which reads a user record from a database, this same user model may be needed as part of multiple view models. Rather than code the same database query each time within each handler, you can place your data access query inside a single handler. This handler can then return the user data to any other handler which sends a message for the user data. This allows us to adhere to the don’t repeat yourself principle by writing the code and logic only once. We can also test that logic to ensure it works as expected and be confident that everyone using it gets consistent responses.
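As a rough sketch of that chaining, a handler can take a dependency on IMediator and send its own query. The ProfilePageQuery and ProfilePageViewModel types here are hypothetical, invented purely for illustration; UserQuery is the request type shown a little later in this post:

public class ProfilePageQueryHandlerAsync : IAsyncRequestHandler<ProfilePageQuery, ProfilePageViewModel>
{
	private readonly IMediator _mediator;

	public ProfilePageQueryHandlerAsync(IMediator mediator)
	{
		_mediator = mediator;
	}

	public async Task<ProfilePageViewModel> Handle(ProfilePageQuery message)
	{
		// Reuse the existing UserQuery handler rather than duplicating its database logic here.
		var user = await _mediator.SendAsync(new UserQuery { Id = message.UserId });

		return new ProfilePageViewModel { User = user };
	}
}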

To create a request message in Mediatr you create a basic class marked as an implementation of the IRequest or IAsyncRequest interface. I try to use async methods for everything I do in ASP.NET Core so I’ll stick to async examples in this post. You can optionally specify the return type you expect from the handler. An async handler will return that object wrapped in a task which can be awaited.

Your message class will define all of the properties expected to be in the message. Here is an example of a basic message which will send an Id out and which expects the response from the handler to be a UserViewModel.

public class UserQuery : IAsyncRequest<UserViewModel>
{
	public int Id { get; set; }
}

With a request message defined we can now go ahead and create a handler that will respond to any messages of that type. We need to make our class implement the IRequestHandler or in my case IAsyncRequestHandler interface, defining the input and output types.

public class UserQueryHandlerAsync : IAsyncRequestHandler<UserQuery, UserViewModel>
{
    public async Task<UserViewModel> Handle(UserQuery message)
    {
        // Could query a db here and get the columns we need.
        
        var viewModel = new UserViewModel();
        viewModel.UserId = 100;
        viewModel.Username = "sgordon";
        viewModel.Forename = "Steve";
        viewModel.Surname = "Gordon";

        return viewModel;
    }
}

This interface defines a single method named Handle which returns a Task of your output type. It expects your request message object as its parameter.

In my example I’m simply newing up a UserViewModel object, setting its properties and returning it. In the real world this would be where I query the database using Entity Framework and build up my view model from the resulting data.
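For a sense of what that might look like, here’s a hedged sketch of the same handler backed by a database query. It assumes a DbContext with a Users DbSet whose entity exposes Id, UserName, Forename and Surname properties matching the view model; it isn’t the allReady code:

public class UserQueryHandlerAsync : IAsyncRequestHandler<UserQuery, UserViewModel>
{
	private readonly ApplicationDbContext _context;

	public UserQueryHandlerAsync(ApplicationDbContext context)
	{
		_context = context;
	}

	public async Task<UserViewModel> Handle(UserQuery message)
	{
		// Project straight into the view model so we only pull back the columns we need.
		return await _context.Users
			.Where(u => u.Id == message.Id)
			.Select(u => new UserViewModel
			{
				UserId = u.Id,
				Username = u.UserName,
				Forename = u.Forename,
				Surname = u.Surname
			})
			.SingleOrDefaultAsync();
	}
}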

I personally have been in the habit of keeping my request message and my response handler classes together in the same physical .cs file, but you can split them if you prefer. I’m normally keen on keeping one class to one file, but in this case since the two classes are very interrelated I’ve found it quicker to work when I can see both in the same file.

We now have everything wired up so finally it’s time to send a message from our controller.

public class UsersController : Controller
{
    private readonly IMediator _mediator;

    public UsersController(IMediator mediator)
    {
        if (mediator == null)
            throw new ArgumentNullException(nameof(mediator));

        _mediator = mediator;
    }

    [HttpGet]
    [Route("users/{userId}")]
    public async Task<IActionResult> UserDetails(int userId)
    {
        UserViewModel model = await _mediator.SendAsync(new UserQuery { Id = userId });

        if (model == null)
            return HttpNotFound();

        return View(model);
    }
}

The key thing to highlight here is the controller’s constructor accepting an IMediator object. This will be injected by the ASP.NET Core DI when the application runs. What’s very useful is that we can easily mock an IMediator and its response, which makes testing a breeze.

The UserDetails action itself expects a user id when it is called. This id gets bound from the route parameter by MVC.

The key line in the code above is where we send the mediator message. We do this by calling SendAsync on the IMediator object. We send a UserQuery object with the Id property set. This message will now be managed by Mediatr. It will locate the suitable handler, pass it the request message and return the response to our action.

As you can see, this has made our controller very light. The only code left is a basic check to return an appropriate not found response if the response to our Mediatr request is null. That won’t ever be true in my example, but in a real world app, if the database doesn’t find an object with the id provided, I return null instead of a UserViewModel. This is exactly how I like a controller to be: its single responsibility is to return an appropriate HTTP response to the user’s request. It doesn’t and shouldn’t need to know about our database or have any concerns with building up its view model directly.

Testing

Being good citizens we should always consider the testing process. Testing when using Mediatr and a CQRS style pattern is very simple. My approach has been to ensure that each handler has appropriate unit tests around the Handle method, testing the logic within. To do this we can new up a Mediatr handler in our test class, call the Handle method directly and run tests on the returned object to verify the result.

[Fact]
public async Task HandlerReturnsCorrectUserViewModel()
{ 
    var sut = new UserQueryHandlerAsync();
    var result = await sut.Handle(new UserQuery { Id = 100 });

    Assert.NotNull(result);
    Assert.Equal("Steve", result.Forename);
}

This is a bit of a contrived example, especially as my handler example really doesn’t perform any logic. However we can test for whatever is necessary on the returned result. You can check out the allReady code on Github to see some real examples of tests around the handlers used there. In those cases we often use an in memory Entity Framework DbContext object so that we can test the handler’s EF query returns the expected data from a known set of test data.

We can also test the controllers very easily by passing in a mock of the IMediator.

[Fact]
public async Task UserDetails_SendsQueryWithTheCorrectUserId()
{
    const int userId = 1;
    var mediator = new Mock<IMediator>();
    var sut = new UsersController(mediator.Object);

    await sut.UserDetails(userId);

    mediator.Verify(x => x.SendAsync(It.Is<UserQuery>(y => y.Id == userId)), Times.Once);
}

We create a mock IMediator using Moq and pass that in when instantiating a controller. Here I’ve called the UserDetails action with an Id and verified that a query has been sent to the mediator containing that Id.

If necessary you can set up your IMediator mock so that you define the data that is returned in response to a message. This can be useful if you want to validate your action’s behaviour against different responses. You can mock up the response object using code such as…

var user = new UserViewModel
{
    UserId = 100,
    Username = "sgordon",
    Forename = "Steve",
    Surname = "Gordon"
};

var mediator = new Mock<IMediator>();
mediator.Setup(x => x.SendAsync(It.IsAny<UserQuery>())).ReturnsAsync(user);

If your controller performs any logic based on the returned object you can now easily specify the different scenarios to test that. Something I often do is to write a test that verifies that when the Mediatr response is null the action sends an HttpNotFound result. In a simple example that can be done in the following way…

[Fact]
public async Task UserDetailsReturnsHttpNotFoundResultWhenUserIsNull()
{
    var mediator = new Mock<IMediator>();

    var sut = new UsersController(mediator.Object);

    var result = await sut.UserDetails(It.IsAny<int>());

    Assert.IsType<HttpNotFoundResult>(result);
}

Summing Up

I’ve really taken to the pattern that Mediatr allows us to easily implement. It’s a personal choice of course, but my view is that it keeps my controllers clean and allows me to create handlers that have a single responsibility. It keeps things nicely separated as nothing is too tightly bound together. I can easily change the behaviour of a handler and as long as it still returns the correct object type my controllers never care.

As I’ve shown, the testing process is pretty nice and if we ensure each handler is tested as well as the controllers, then we have good coverage of the behaviours we expect from the classes. A big bonus is that it already supports ASP.NET Core and is pretty simple to set up with the built-in DI container.

Mediatr also supports a publisher/subscriber pattern which I’ve yet to need in my code. It’s worth taking a look at though if you need multiple handlers to respond when an event occurs, and it’s something I plan to look into at some point.

I highly recommend trying out the Mediatr library and reviewing the pattern being used on the allReady project. It takes little time to set up and quickly becomes a comfortable flow when writing code. It’s made me think about what my models are involved in and helped me keep them focused and more robust.

NOTE: This post was written based on RC1 of ASP.NET Core and may not be current by the time RC2 and RTM are released.

Read More

Extending the ASP.NET Core 1.0 Identity SignInManager: Adding basic user auditing to ASP.NET Core

So far I have written a couple of posts in which I dive into the code for the ASP.NET Core 1.0 Identity library. In this post I want to do something a little more practical and look at extending the default identity functionality. I’m working on a project at the moment which will be very reliant on a strong user management system. As I move forward with that and build up the requirements I will need to handle things not currently available in the Identity library. Something missing from the current Identity library is user security auditing, an important feature for many real world applications where compliance auditors may expect such information to be available.

Before going further, please note that this code is not final, production ready code. At this stage I want to prove my concept and meet some initial requirements that I have. I expect I’ll end up extending and refactoring this code as my project develops. Also, at the time of writing ASP.NET Core 1.0 is at release candidate 1. We can expect some changes in RC2 and RTM which may require this code to be adjusted. Feel free to do so, but copy and paste at your own risk!

At this stage in my project, my immediate requirement is to store successful login, failed login and logout events in an audit table within my database. I would like to collect the visitor’s IP address also. This data might be useful after some kind of security breach; for example to review who was logged into the system as well as where from. It would also allow for some analysis of who is using the application and how often / at what times of day. Such data may prove useful to plan upgrades or to encourage more use of the application. Remember that if you record this information, particularly within a public facing SaaS style application, you may well need to include details of what data you’re recording and why in your privacy policy.

I could implement this auditing functionality within my controllers. For example I could update the Login action on the Account controller to write into an audit table directly. However I don’t really like that solution. If anyone implements a new controller/action to handle login or logout then they would need to remember to also add code to update the audit records. It makes the Login action method more responsible than it should be for performing the audit logic, when really this belongs deeper in the application.

If we take a look at the Login action on the Account controller we can see that it calls into an instance of a SignInManager. In a default MVC application this is setup in the dependency injection container by the call to AddIdentity within the Startup.cs class. The SignInManager provides the default implementations of sign in and sign out logic. Therefore this is a better candidate in which to override some of those methods to include my additional auditing code. This way, any calls to the sign in manager, from any controller/action will run my custom auditing code. If I need to change or extend my audit logic I can do so in a single class which is ultimately responsible for handling that activity.

Before doing anything with the SignInManager I needed to define a database model to store my audit records. I added a UserAudit class which defines the columns I want to store:

public class UserAudit
{
	[Key]
	public int UserAuditId { get; private set; }

	[Required]
	public string UserId { get; private set; }

	[Required]
	public DateTimeOffset Timestamp { get; private set; } = DateTime.UtcNow;

	[Required]
	public UserAuditEventType AuditEvent { get; set; }

	public string IpAddress { get; private set; }   

	public static UserAudit CreateAuditEvent(string userId, UserAuditEventType auditEventType, string ipAddress)
	{
		return new UserAudit { UserId = userId, AuditEvent = auditEventType, IpAddress = ipAddress };
	}
}

public enum UserAuditEventType
{
	Login = 1,
	FailedLogin = 2,
	LogOut = 3
}

In this class I’ve defined an Id column (which will be the primary key for the record), a column which will store the user Id string, a column to store the date and time of the audit event, a column for the UserAuditEventType which is an enum of the 3 available events I will be auditing, and finally a column to store the user’s IP address. Note that I’ve made the UserAuditId a basic auto-generated integer for simplicity in this post; however, in my final code I’m very likely going to use fluent mappings to make a composite primary key based on the user id and the timestamp instead.
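That composite key change isn’t part of the code in this post, but as a rough sketch (my assumption of how it would look, using the EF Core fluent API; namespaces differ slightly between RC1 and RTM) it would go in the DbContext like this:

protected override void OnModelCreating(ModelBuilder builder)
{
	base.OnModelCreating(builder);

	// Hypothetical composite key replacing the [Key] attribute on UserAuditId.
	builder.Entity<UserAudit>()
		.HasKey(a => new { a.UserId, a.Timestamp });
}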

I’ve also included a static method within the class which creates a new audit event record by taking in the user id, event type and the IP address. For a class like this I prefer this approach versus exposing the property setters publicly.

Now that I have a class which represents the database table I can add it to the entity framework DbContext:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
	public DbSet<UserAudit> UserAuditEvents { get; set; }
}

At this point, I have a new table defined in code which needs to be physically created in my database. I will do this by creating a migration and applying it to the database. As of ASP.NET Core 1.0 RC1 this can be done by opening a command prompt from my project directory and then running the following two commands:

dnx ef migrations add "UserAuditTable"

dnx ef database update
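As an aside, for anyone following along on the RTM tooling rather than RC1, my understanding is that the equivalent dotnet CLI commands are:

dotnet ef migrations add UserAuditTable

dotnet ef database update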

This creates a migration which will create the table within my database and then runs the migration against the database to actually create it. This leaves me ready to implement the logic which will create audit records in that new table. My first job is to create my own SignInManager which inherits from the default SignInManager. Here’s what that class looks like before we extend the functionality:

public class AuditableSignInManager<TUser> : SignInManager<TUser> where TUser : class
{
	public AuditableSignInManager(UserManager<TUser> userManager, IHttpContextAccessor contextAccessor, IUserClaimsPrincipalFactory<TUser> claimsFactory, IOptions<IdentityOptions> optionsAccessor, ILogger<SignInManager<TUser>> logger)
		: base(userManager, contextAccessor, claimsFactory, optionsAccessor, logger)
	{
	}
}

I define my own class with its constructor inheriting from the base SignInManager class. This class is generic and requires the type representing the user to be provided. I also have to implement a constructor, accepting the components which the original SignInManager needs to be able to function. I pass these objects into the base constructor.

Before I implement the logic and override some of the SignInManager’s methods I need to register this custom SignInManager class with the dependency injection framework. After checking out a few sources I found that I could simply register this after the AddIdentity services extension in my Startup.cs class. This will then replace the SignInManager previously registered by the Identity library.

Here’s what my ConfigureServices method looks like with this code added:

public void ConfigureServices(IServiceCollection services)
{
	// Add framework services.
	services.AddEntityFramework()
		.AddSqlServer()
		.AddDbContext<ApplicationDbContext>(options =>
			options.UseSqlServer(Configuration["Data:DefaultConnection:ConnectionString"]));

	services.AddIdentity<ApplicationUser, IdentityRole>()
		.AddEntityFrameworkStores<ApplicationDbContext>()
		.AddDefaultTokenProviders()
		.AddUserManager<AuditableUserManager<ApplicationUser>>();

	services.AddScoped<SignInManager<ApplicationUser>, AuditableSignInManager<ApplicationUser>>();

	services.AddMvc();

	// Add application services.
	services.AddTransient<IEmailSender, AuthMessageSender>();
	services.AddTransient<ISmsSender, AuthMessageSender>();
}

The important line is services.AddScoped<SignInManager<ApplicationUser>, AuditableSignInManager<ApplicationUser>>(); where I specify that whenever a class requires a SignInManager<ApplicationUser>, the DI container will return our custom AuditableSignInManager<ApplicationUser> class. This is where dependency injection really makes life easier, as I don’t have to update multiple classes with concrete instances of the SignInManager. This one change in my Startup.cs file will ensure that all dependent classes get my custom SignInManager.

Going back to my AuditableSignInManager I can now make some changes to implement the auditing logic I require.

public class AuditableSignInManager<TUser> : SignInManager<TUser> where TUser : class
{
	private readonly UserManager<TUser> _userManager;
	private readonly ApplicationDbContext _db;
	private readonly IHttpContextAccessor _contextAccessor;

	public AuditableSignInManager(UserManager<TUser> userManager, IHttpContextAccessor contextAccessor, IUserClaimsPrincipalFactory<TUser> claimsFactory, IOptions<IdentityOptions> optionsAccessor, ILogger<SignInManager<TUser>> logger, ApplicationDbContext dbContext)
		: base(userManager, contextAccessor, claimsFactory, optionsAccessor, logger)
	{
		if (userManager == null)
			throw new ArgumentNullException(nameof(userManager));

		if (dbContext == null)
			throw new ArgumentNullException(nameof(dbContext));

		if (contextAccessor == null)
			throw new ArgumentNullException(nameof(contextAccessor));

		_userManager = userManager;
		_contextAccessor = contextAccessor;
		_db = dbContext;
	}

	public override async Task<SignInResult> PasswordSignInAsync(TUser user, string password, bool isPersistent, bool lockoutOnFailure)
	{
		var result = await base.PasswordSignInAsync(user, password, isPersistent, lockoutOnFailure);

		var appUser = user as IdentityUser;

		if (appUser != null) // We can only log an audit record if we can access the user object and its ID
		{
			var ip = _contextAccessor.HttpContext.Connection.RemoteIpAddress.ToString();

			UserAudit auditRecord = null;

			switch (result.ToString())
			{
				case "Succeeded":
					auditRecord = UserAudit.CreateAuditEvent(appUser.Id, UserAuditEventType.Login, ip);
					break;

				case "Failed":
					auditRecord = UserAudit.CreateAuditEvent(appUser.Id, UserAuditEventType.FailedLogin, ip);
					break;
			}

			if (auditRecord != null)
			{
				_db.UserAuditEvents.Add(auditRecord);
				await _db.SaveChangesAsync();
			}
		}

		return result;
	}

	public override async Task SignOutAsync()
	{
		await base.SignOutAsync();

		var user = await _userManager.FindByIdAsync(_contextAccessor.HttpContext.User.GetUserId()) as IdentityUser;

		if (user != null)
		{
			var ip = _contextAccessor.HttpContext.Connection.RemoteIpAddress.ToString();

			var auditRecord = UserAudit.CreateAuditEvent(user.Id, UserAuditEventType.LogOut, ip);
			_db.UserAuditEvents.Add(auditRecord);
			await _db.SaveChangesAsync();
		}
	}
}

Let’s step through the changes.

Firstly I specify in the constructor that I will require an instance of the ApplicationDbContext, since we’ll directly need to work with the database to add audit records. Again, constructor injection makes this nice and simple as I can rely on the DI container to supply the appropriate object at runtime.

I’ve also added some private fields to store some of the objects the class receives when it is constructed. I need to access the UserManager, DbContext and IHttpContextAccessor objects in my overrides.

The default SignInManager defines its public methods as virtual, which means that since I’ve inherited from it, I can now supply overrides for those methods. I do exactly that to implement my auditing logic. The first method I override is the PasswordSignInAsync method, keeping the signature the same as the original base method. I await and store the result of the base implementation which will actually perform the sign in logic. The base method returns a SignInResult object with the result of the sign in attempt. Now that I have this result I can use it to perform some audit logging.

I cast the user object to an IdentityUser so that I can access its Id property. Assuming this cast succeeds I can go ahead and log an audit event. I get the remote IP from the context, then I inspect the result and call its ToString() method. I use a switch statement to generate an appropriate call to the CreateAuditEvent method, passing in the correct UserAuditEventType. If a UserAudit object has been created I then write it into the database via the DbContext that was injected into this class when it was constructed.

I have a very similar override for the SignOutAsync method as well. In this case though I have to get the user via the HttpContext and use the UserManager to get the IdentityUser based on their user id. I can then write a logout audit record into the database. Running my application at this stage, performing some logins, some login attempts with an incorrect password and a logout, I can check my database and see the audit records being stored.


Summing Up

Whilst not yet fully featured, this blog post hopefully demonstrates the initial steps that we can follow to quite easily extend and override the ASP.NET Core Identity SignInManager class with our own implementation. I expect to be refactoring and extending this code further as my requirements determine.

For example, while the correct place to call the auditing logic is from the SignInManager, I will likely create an AuditManager class which should have the responsibility to actually create and write the audit records. If I do this then I will still need my overridden SignInManager class which would require an injected instance of the AuditManager. As my audit needs grow, so will my AuditManager class and some code will likely get reused within that class.

Including an extra class at this stage would have made this post a bit more complex and have taken me away from my initial goal of showing how we can extend the functionality of the SignInManager class. I hope that this post and the code samples prove useful to others looking to do similar extensions to the default behaviour.

Read More

Contributing to allReady: A charity open source project from Humanitarian Toolbox

In this post I want to discuss a fantastic open source project called allReady that I highly encourage all ASP.NET developers to check out. I want to share my early experience with allReady and how I got started with contributing to an open source project for the first time.

What is allReady?

allReady is a project developed and managed by the charity organisation Humanitarian Toolbox. It is designed to assist management of community preparedness campaigns, bringing together the campaign organisers and volunteers to make managing the campaign easier and more efficient for all involved. It’s currently in a private preview release and is being trialled and tested by the American Red Cross with a campaign to install smoke alarms within homes in the Chicago area. Once the pilot is completed it will be available for many other important campaigns.

The project is developed using ASP.NET Core 1.0 (formerly ASP.NET 5) and uses Entity Framework Core (formerly EF7) for data access. Its live preview sites are hosted in Microsoft Azure.

To summarise the functionality; allReady is a web application which hosts campaigns and their associated activities. The public can view campaigns and volunteer to help with activities where they have the appropriate skills. Activities may have goals such as to install a certain number of smoke alarms in a given area by a certain date. Campaign organisers can assign tasks to the volunteers and track the progress of the activity that has taken place. By managing the tasks in this way it allows the most suitable resources to be aligned with the work required.

Contributing to allReady

I first heard about the project a few months ago on the DotNetRocks podcast, hosted by Carl Franklin and Richard Campbell and it sounded interesting. I headed over to the Humanitarian Toolbox website and their allReady GitHub repository to take a deeper look. Whilst I had played around with some features of ASP.NET Core and read/watched a fair amount about it, I’d yet to work with a full ASP.NET Core project. So I started by spending some time looking at the code on GitHub and working out how it was put together.

I then spent a bit of time looking through the issues, both closed ones and new ones to get a feel for the direction of the project and the type of work being done. It was clear to me at this point that I wanted to have a go at contributing, but having never even forked anything on GitHub I was a bit unsure of how to get started. I must admit it took me a few weeks before I decided to bite the bullet and have a go with my first contribution. I was a little intimidated to start using GitHub and jumping into an established project. Fortunately the GitHub readme document for allReady gave some good pointers and I spent a bit of time on Google learning how to fork and clone the repository so that I could work with the code.

I was going to spend some time in this post going through the more detailed steps of how to fork a repository and start contributing but in January Dave Paquette posted an extensive blog post covering this in fantastic detail. If you want a great introduction to GitHub and open source contributions I highly recommend that you start with Dave’s post.

It can be a bit daunting knowing where to start and opening your code up to public review – certainly that’s how I felt. I personally wasn’t sure if my code would be good enough and didn’t want to make any stupid mistakes, but after cloning down the code and playing around on my local machine I started to feel more confident about making some changes and trying my first pull request.

I took a look through the open issues and tried to find something small that I felt I could tackle for my first pull request. I wanted something reasonably simple to begin with while I learned the ropes. I found an issue requiring some UI text to display the password requirements for new account registrations so I started working on the code for that. I got it compiling locally, ran the tests and submitted my first pull request (PR). Well done me!

It was promptly reviewed by MisterJames (aka James Chambers) who welcomed me to the project. At this point it’s fair to say that I’d gone a bit off tangent with my PR and it wasn’t quite right for what was needed. Being brutally honest, it wasn’t great code either. James though was very kind in his feedback and did a good job of explaining that although it wasn’t quite right for the issue at hand, he’d be able to help me adjust it so that it was. An offer of some time to work remotely on the code wasn’t at all what I had expected and was very generous. James very quickly put some of my early fears to rest and I felt encouraged to continue contributing.

The importance of this experience should not be underestimated, since I’m sure that a lot of people may feel worried about making a pull request that is wrong either in scope or technically. My experience though quickly put me at ease and made it easy to continue contributing and learning as I went. The team working on allReady do a great job of welcoming and inducting new contributors to the project. While my code was not quite right for the requirement, I wasn’t made to feel rejected or humiliated and help was offered to make my code more suitable. If you’re looking for somewhere friendly to start out with open source, I can highly recommend allReady.

Since then I’ve made a total of 23 pull requests (PRs) on the project 22 of which are now closed and merged into the project. After my first PR I picked up some more issues that I felt I could tackle, including a piece of work to rename some of the entities and classes around a more relevant ubiquitous language. It’s really rewarding to have code which you’ve contributed make up part of an open source project and I feel it’s even better when it’s for such a good cause. As my experience with the codebase, including working with ASP.NET Core has developed I have been able to pick up larger and more complex issues. I continue to learn as I go and hopefully get better with each new pull request.

Challenges

Some people will be worried about starting out with open source contributions, but honestly I’ve had no bad experiences with the allReady project. I do want to discuss a couple of areas that could be deemed challenging and perhaps are putting others off from contributing. I hope in doing so I can set aside any concerns people may have.

The area that I found most technically challenging early on was working with Git and GitHub. I’d only recently been exposed to Git at work and hadn’t yet learned how to best use the commands and processes. I’d never worked with GitHub so that was brand new to me too. Rebasing was the area that at first was a bit confusing and daunting for me. This post isn’t intended to be a full git or rebasing tutorial but I did feel it’s worth briefly discussing what I learned in this area since others may be able to use this when getting started with allReady.

Rebasing 101

Rebasing allows us to take commits which have been made by other authors (or yourself on other branches) and replay our commits on top of them. It allows us to keep the base of our work up-to-date and ensure that any merge conflicts are handled before a pull request is submitted/accepted.

I follow the practice of creating a branch for each issue which I start work on. This allows me to keep that work separate and is also required in order to submit a pull request on GitHub. Given that a feature might take a number of days to complete, it’s likely that the project master branch will have moved on by the time you are ready to submit your PR. You could pull in the master branch changes and then merge them into your branch but this leads to a quite messy commit history and in the case of one of my PRs, didn’t work well at all. Rebasing is your friend here and by rebasing your issue branch on top of the up-to-date master you can ensure that the commit timeline is correct and that all of your changes work with the latest code. There are occasions where you’ll need to handle conflicts as the rebase occurs, but often a rebase can be a pretty simple exercise.

Dave Paquette’s post which I highlighted earlier covers all of this, so I recommend you read that for some great guidance first. I thought it might be useful to share my cheat sheet that I noted down a few months ago and which I personally found handy in the early days until I had memorised the flow of commands. Out of context these may not make sense to Git newcomers but hopefully after reading Dave’s guide you may find these a nice quick reference to have to hand.

git checkout master
git fetch htbox
git merge htbox/master
git checkout issue-branch-1
git rebase master
git push origin issue-branch-1 -f

To summarise what these do:

First I checkout the master branch and fetch any changes from the htbox remote, merging them into my master branch. This brings my local repository up-to-date with the project code on GitHub. When working with a GitHub project you’ll likely setup two remotes. One is to your forked GitHub repository (origin in my case) and one is to the main project repository (which I named htbox). The first three commands above will update my local master branch to reflect the current project master branch.

Next I checkout my issue branch, in which I have completed my work for the issue, and rebase it from my updated local master branch. This will rewind your issue branch’s changes, update with the master branch commits and then replay each of your branch’s commits onto the updated base (hence the term rebasing). If any of your commits conflict with the master’s changes then you will have to handle those merge conflicts before the rebase operation can continue.
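When a conflict does occur, the flow is the standard Git one (nothing allReady-specific): fix the conflicted files, stage them and continue the rebase, or abort if you want to get back to where you started.

git add <the files you fixed>
git rebase --continue

git rebase --abort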

Finally, once my issue branch is rebased, my feature re-tested to ensure that it still works as expected and the unit tests are all running green, I can push my branch up to my forked repository hosted on GitHub. I tend to push only my specific issue branch and will sometimes require the -f force flag to ensure that the remote fork takes all of my changes exactly as they appear locally. Forcing is most common in cases where I’m updating an existing PR and have had to rebase a second time based on more changes to the master branch.

This leaves me ready to submit a pull request or, if I already have a PR submitted for my branch, GitHub will update that existing PR with my new commits. The project team will be happy as this will make accepting and merging the PR an easier task after the rebase as any conflicts with the current master will have been resolved.

Whilst understanding the steps required to rebase was a learning curve for me, it was in the end, easier than I had feared it might be. Certainly if you’re familiar with Git before you start then you’ll have an easier time, but I wouldn’t let it put you off if you’re a complete newcomer. It is in fact a great chance to learn Git which will surely be useful in future projects.

Finding Time

Another challenge that I feel worth mentioning, since I’m sure many will consider it true for them too, is finding time to work on the project. Life is busy and personally finding time to work on the code isn’t always that easy for me. I don’t have children, so I do have more time than those with little ones to take care of, but outside of work I like to socialise with friends, run a side photography business with my wife, play sports and enjoy the outdoors. All things which consume most of my spare time. However I really enjoy being a part of this project and so when I do find myself with spare time, often early in the morning before work, during the weekend or sometimes even during my lunch break, I try to tackle an issue for allReady. There are a range of open issues, some large in scope, some smaller, so you can often find something that you can make time to work on. No one puts pressure on the completion of work and I believe everyone is very appreciative of any time people are able to contribute. I do recommend that you be realistic in what you can tackle, but certainly don’t be put off; pick up pieces of work as and when you can. If you do start something but find yourself out of time, I recommend you leave a short comment on the issue so that other contributors know what’s happening and when you might be able to pick it up again.

Sometimes, with larger issues it might make sense for them to be broken down into sub issues, so that PR’s can be submitted for smaller pieces of work. This allows the larger goals to be achieved but in a more manageable way. If you see something that you want to help with, leave a comment and start a discussion. Again the team are very approachable and quick to respond to any questions and comments you may have.

Time is valuable to us all and therefore it’s a great thing to donate when you can. Sharing a little time here and there on a project such as allReady can be really precious, and however small a contribution, it’s sure to be gratefully received.

Benefits

Having touched on a few possible challenges I wanted to move onto the benefits of contributing which I think far outweigh those challenges.

Firstly and in my opinion, most importantly, there is the fact that any contribution will be towards an application that will be helping others. If you have time to give an open source project, this is one which really does represent a very worthwhile cause. One of the goals of Humanitarian Toolbox is to allow those with software development skills to put their knowledge and experience directly towards charitable goals. It’s great to be able to use my software development skills in this way.

Secondly it’s a great learning experience for both new and experienced developers. With ASP.NET Core in RC1 currently and RTM perhaps only a few more months away this is a great opportunity to work with the new framework and to learn in a practical way. Personally I’ve learnt a lot along the way, including seeing the Mediatr library being used. I really like the command/query pattern for data access and I have already used it on a work project. There are a number of experienced developers on the team and I learn a lot from the code reviews on my pull requests and watching their commits.

Thirdly, it’s a very friendly project to be involved with. The team have been great and I’ve felt very welcomed and involved in the project. Some of the main contributors are now part of the .NET monsters on Channel 9. It’s great to work with people who really know their stuff. This makes it a great place to start out with open source contributions, even with no prior experience contributing on GitHub.

Code-a-thon

On the 20th February Humanitarian Toolbox held a code-a-thon at two physical locations in the US and Canada as well as some remote contributions from others on the project. I set aside my day to work on some issues from the UK. It was great being part of a wider event, even if remote. I recommend that you follow @htbox on twitter for news of any future events that you can take part in. If you’re close enough to take part physically then it looked like good fun during the live link up on Google Hangouts. As well as the allReady project people were contributing to other applications with charitable goals such as a missing children’s app for Minnesota. You can read more about the event in Rocky Lhotka’s blog post.

How you can help and get started?

No better way than to jump into the GitHub project and start contributing. Even non developers can get involved by helping test the application, raising any issues that they experience and providing suggestions for improvements. If you have C# and ASP.NET experience I’m sure you’ll quickly get up to speed after checking out the codebase. If you’re looking for good issues to ease in with then check out any tagged with the green jump-in label. Those are smaller, simpler issues that are great for newcomers to the project or to GitHub in general. Once you’ve done a few fixes and pull requests for those issues you’ll be ready to take a look at some of the more complex issues.

If you need help with getting started or are unsure of how to contribute then the team will be sure to offer help and advice along the way.

Summary of links

As I’ve mentioned and included quite a lot of links in this post, here’s a quick roundup and a few others I thought would be useful:

http://htbox.org

http://htbox.github.io/

http://www.davepaquette.com/archive/2016/01/24/Submitting-Your-First-Pull-request.aspx

https://github.com/htbox/allready

https://www.youtube.com/channel/UCMHQ4xrqudcTtaXFw4Bw54Q – Community Standup Videos

http://www.lhotka.net/weblog/HTBoxTwinCitiesCodeathonFeb2016Recap.aspx


How to Send Emails in ASP.NET Core 1.0

ASP.NET Core 1.0 is a reboot of the ASP.NET framework which can target the traditional full .NET framework or the new .NET Core framework. Together ASP.NET Core and .NET Core have been designed to work cross platform and to have a lighter, faster footprint compared to the current full .NET framework. Many of the .NET Core APIs are the same as they are in the full framework and the team have worked hard to try and keep things reasonably similar where it makes sense and is practical to do so. However, as a consequence of developing a smaller, more modular framework of dependent libraries and, most significantly, making the move to support cross platform development and hosting, some of the libraries have been lost. Take a look at this post from Immo Landwerth which describes the changes in more detail and discusses considerations for porting existing applications to .NET Core.

I’ve been working with ASP.NET Core for quite a few months now and generally I have enjoyed the experience. Personally I’ve hit very few issues along the way and expect to continue using the new framework going forward wherever possible. Recently though I did hit a roadblock on a project at work where I had a requirement to send email from within my web application. In the full framework I’d have used the SmtpClient class in the System.Net.Mail namespace. However in .NET Core this is not currently available to us.

Solutions available in the cloud world include services such as SendGrid, which, depending on the scenario, I can see being a very reasonable solution. For my personal projects and tests this would indeed be my preferred approach, since I don’t have to worry about maintaining and supporting an SMTP server. However at work we have SMTP systems in place and a specialised support team who manage them, so I ideally needed a solution to allow me to send emails directly, as we do in our traditional ASP.NET 4.x applications.

As with most coding challenges I jumped straight onto Google to see who else had had this requirement and how they solved the problem. However I didn’t find as many documented solutions as I was expecting to. Eventually I landed on this issue within the corefx repo on GitHub. That led me onto the MailKit library maintained by Jeffrey Stedfast, and it turned out to be a great solution for me as it has recently been updated to work on .NET Core.

In this post I will take you through how I got this working for the two scenarios I needed to tackle. Firstly sending mail directly via an SMTP relay and secondly the possibility to save the email message into an SMTP pickup folder. Both turned out to be pretty painless to get going.

Adding MailKit to your Project

The first step is to add the reference to the NuGet package for MailKit. I now prefer to use the project.json file directly to setup my dependencies. You’ll need to add the MailKit library – which is at version 1.3.0-beta6 at the time of writing this post – to your dependencies section in the project.json file.

On a vanilla ASP.NET Core web application, your dependencies section in project.json should end up looking something like this once MailKit has been added:
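As a rough sketch, the relevant part of project.json ends up with MailKit listed alongside whatever packages the template has already added for you (I’ve shown only the new entry here rather than guess at the template’s package versions):

"dependencies": {
    "MailKit": "1.3.0-beta6"
}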

Once you save the change VS should trigger a restore of the necessary NuGet packages and their dependencies.

Sending email via a SMTP server

I tested this solution in a default ASP.NET Core web application project which already includes an IEmailSender interface and a class AuthMessageSender which just needs implementing. It was an obvious choice for me to test the implementation using this class as DI is already hooked up for it. For this post I’ll show the bare bones code needed to get started with sending emails via an SMTP server.

To follow along, open up the MessageServices.cs file in your web application project.

We need three using statements at the top of the file.

using MailKit.Net.Smtp;
using MimeKit;
using MailKit.Security;

The SendEmailAsync method can now be updated as follows:

public async Task SendEmailAsync(string email, string subject, string message)
{
	// Build up the message: sender, recipient, subject and a plain text body
	var emailMessage = new MimeMessage();

	emailMessage.From.Add(new MailboxAddress("Joe Bloggs", "jbloggs@example.com"));
	emailMessage.To.Add(new MailboxAddress("", email));
	emailMessage.Subject = subject;
	emailMessage.Body = new TextPart("plain") { Text = message };

	// MailKit's SmtpClient (not System.Net.Mail) sends the message over SMTP
	using (var client = new SmtpClient())
	{
		// The domain presented to the server as the origin of the emails
		client.LocalDomain = "some.domain.com";
		await client.ConnectAsync("smtp.relay.uri", 25, SecureSocketOptions.None).ConfigureAwait(false);
		await client.SendAsync(emailMessage).ConfigureAwait(false);
		await client.DisconnectAsync(true).ConfigureAwait(false);
	}
}

First we declare a new MimeMessage object which will represent the email message we will be sending. We can then set some of its basic properties.

The MimeMessage has a “from” address list and a “to” address list that we can populate with our sender and recipient(s). For this example I’ve added a single new MailboxAddress for each. The basic constructor for the MailboxAddress takes in a display name and the email address for the mailbox. In my case the “to” mailbox takes the address which is passed into the SendEmailAsync method by the caller.

We then add the subject string to the email message object and then define the body. There are a couple of ways to build up the message body but for now I’ve used a simple approach to populate the plain text part using the message passed into the SendEmailAsync method. We could also populate an HTML body for the message if required.
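If we did want to include an HTML body alongside the plain text version, MimeKit’s BodyBuilder can construct the multipart content for us. A small sketch of that idea (the HTML content here is just a placeholder around the same message text):

var builder = new BodyBuilder
{
	TextBody = message,
	HtmlBody = "<p>" + message + "</p>"
};
emailMessage.Body = builder.ToMessageBody();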

That leaves us with a very simple email message object, just enough to form a proof of concept here. The final step is to send the message and to do that we use an SmtpClient. Note that this isn’t the SmtpClient from System.Net.Mail; it is part of the MailKit library.

We create an instance of the SmtpClient wrapped with a using statement to ensure that it is disposed of when we’re done with it. We don’t want to keep connections open to the SMTP server once we’ve sent our email. You can if required (and I have done in my code) set the LocalDomain used when communicating with the SMTP server. This will be presented as the origin of the emails. In my case I needed to supply the domain so that our internal testing SMTP server would accept and relay my emails.

We then asynchronously connect to the SMTP server. The ConnectAsync method can take just the URI of the SMTP server or, as I’ve done here, also a port and an SSL option. In my case, when testing with our local test SMTP server, no SSL was required so I specified this explicitly to make it work.
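If your SMTP server expects an encrypted connection and credentials, MailKit supports that too. A hedged sketch of what that might look like (the host, port and credentials are placeholders for whatever your environment uses):

await client.ConnectAsync("smtp.example.com", 587, SecureSocketOptions.StartTls).ConfigureAwait(false);
await client.AuthenticateAsync("smtpUsername", "smtpPassword").ConfigureAwait(false);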

Finally we can send the message asynchronously and then close the connection. At this point the email should have been fired off via the SMTP server.

Sending email via a SMTP pickup folder

As I mentioned earlier I also had a requirement to drop a message into a SMTP pickup folder running on the web server rather than sending it directly through the SMTP server connection. There may well be a better way to do this (I got it working in my test so didn’t dig any deeper) but what I ended up doing was as follows:

public async Task SendEmailAsync(string email, string subject, string message)
{
	var emailMessage = new MimeMessage();

	emailMessage.From.Add(new MailboxAddress("Joe Bloggs", "jbloggs@example.com"));
	emailMessage.To.Add(new MailboxAddress("", email));
	emailMessage.Subject = subject;
	emailMessage.Body = new TextPart("plain") { Text = message };

	using (StreamWriter data = System.IO.File.CreateText("c:\\smtppickup\\email.txt"))
	{
		emailMessage.WriteTo(data.BaseStream);
	}
}

The only real difference from my earlier code was the removal of the use of SmtpClient. Instead, after generating my email message object I create a StreamWriter which creates a text file in a local directory. I then used the MimeMessage.WriteTo method, passing in the base stream, so that the RFC822 email message file is created in my pickup directory. This is picked up and sent via the SMTP system.
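One thing to bear in mind with this quick test code is that it always writes to the same email.txt file, so each message overwrites the last. For a real pickup folder you’d want a unique file name per message, something along these lines (a sketch; the folder path is whatever your SMTP service is configured to watch and the .eml extension is just a common convention):

var fileName = System.IO.Path.Combine(@"c:\smtppickup", Guid.NewGuid().ToString("N") + ".eml");
using (var stream = System.IO.File.Create(fileName))
{
	emailMessage.WriteTo(stream);
}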

Summing Up

MailKit seems like a great library and it’s solved my immediate requirements. There are indications that the Microsoft team will be working on porting their own SmtpClient to support ASP.NET Core at some stage but it’s great that the community have solved the problem for those adopting / testing .NET Core now.


ASP.NET Core Identity Token Providers – Under the Hood Part 2: Introducing Token Providers

Next up for my series on ASP.NET Core Identity I was interested in how the Identity library provides a way to create tokens which validate actions such as when a user first registers and we need to confirm their email address. This post took a lot longer to write than I expected it to as there are a lot of potential areas to cover. In the interests of making it reasonably digestible I’ve decided to introduce tokens and specifically look at the registration email confirmation token flow in this post. Other posts may follow as I dig deeper into the code and its uses.

As with part 1 let me prefix this post with two important notes.

  1. I am not a security expert. This series of posts records my own dive into the ASP.NET Identity Core code, publicly available on GitHub, which I’ve done for my own self-interest to try and understand how it works and what is available to me as a developer. Do not assume everything I have interpreted to be 100% accurate or any code samples as suitable production code.
  2. This is written whilst reviewing source mostly from the 3.0.0-rc1 release tag. I may stray into more recent dev code if implementations have changed considerably, but will try to highlight when I do so. One very important point here is that at the time of writing this post Microsoft have announced a renaming strategy for ASP.NET 5. Due to the brand new codebase this is now being called ASP.NET Core 1.0 and the underlying .NET Core will be .NET Core 1.0. This is going to result in namespace changes. I’ve used the anticipated new namespaces here (and will update if things change again).

What are tokens and why do we need them?

Tokens are something that an application or service can issue to a user and which they can later hand back as a way to prove their identity and often their authorisation for an action. We can use tokens in various places where we need to provide a mechanism to confirm something about them, such as that a phone number or email address actually belongs to them. They can also be used in other ways; Slack for example uses tokens to provide a magic sign in link on mobile devices.

Because of these potential uses it’s very important that they be secure and trustworthy since they could present a security hole into your application if used incorrectly. Mechanisms need to be in place to expire old or used tokens to prevent someone else using them should they gain access to them. ASP.NET Identity Core provides some basic tokens via token providers for common tasks. These are used by the default ASP.NET Web Application MVC template for some of the account and user management tasks on the AccountController and ManageController.

Now that I’ve explained what a token is let’s look at how we generate one.

Token providers

To get a token or validate one we use a token provider. ASP.NET Core Identity defines an IUserTokenProvider interface which any token providers should implement. This interface has been kept very simple and defines three methods:

Task<string> GenerateAsync(string purpose, UserManager<TUser> manager, TUser user);

This method will generate a token for a given purpose and user. The token is returned as a string.

Task<bool> ValidateAsync(string purpose, string token, UserManager<TUser> manager, TUser user);

This method will validate a token from a user. It will return true or false, indicating whether the token is valid or not.

Task<bool> CanGenerateTwoFactorTokenAsync(UserManager<TUser> manager, TUser user);

This indicates whether the token from this provider can be used for two factor authentication.

You can register as many token providers into your project as necessary to support your requirements. By default IdentityBuilder has a method AddDefaultTokenProviders() which you can chain onto your AddIdentity call from the startup file in your project. This will register the 3 default providers as per the code below. Token providers need to be registered with the DI container so they can be injected when required.

public virtual IdentityBuilder AddDefaultTokenProviders()
{
	var dataProtectionProviderType = typeof(DataProtectorTokenProvider<>).MakeGenericType(UserType);
	var phoneNumberProviderType = typeof(PhoneNumberTokenProvider<>).MakeGenericType(UserType);
	var emailTokenProviderType = typeof(EmailTokenProvider<>).MakeGenericType(UserType);
	return AddTokenProvider(TokenOptions.DefaultProvider, dataProtectionProviderType)
		.AddTokenProvider(TokenOptions.DefaultEmailProvider, emailTokenProviderType)
		.AddTokenProvider(TokenOptions.DefaultPhoneProvider, phoneNumberProviderType);
}

This code makes use of the TokenOptions class which defines a few common provider names and maintains a dictionary of the available providers, the key of which is the provider name. The value is the type for the provider being registered. The code for AddTokenProvider is as follows.

public virtual IdentityBuilder AddTokenProvider(string providerName, Type provider)
{
	if (!typeof(IUserTokenProvider<>).MakeGenericType(UserType).GetTypeInfo().IsAssignableFrom(provider.GetTypeInfo()))
	{
		throw new InvalidOperationException(Resources.FormatInvalidManagerType(provider.Name, "IUserTokenProvider", UserType.Name));
	}
	Services.Configure<IdentityOptions>(options =>
	{
		options.Tokens.ProviderMap[providerName] = new TokenProviderDescriptor(provider);
	});
	Services.AddTransient(provider);
	return this; 
}

Here you can see that the provider is being added into the ProviderMap dictionary and then registered with the DI container.
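To put this into context, here’s roughly how the registration looks from the application side in ConfigureServices. This is a sketch based on the default web application template, so ApplicationUser, IdentityRole and ApplicationDbContext are simply the template’s own types:

services.AddIdentity<ApplicationUser, IdentityRole>()
	.AddEntityFrameworkStores<ApplicationDbContext>()
	.AddDefaultTokenProviders();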

Registration email confirmation

Now that I’ve covered what tokens are and how they are registered I think the best thing to do is to take a look at a token being generated and validated. I’ve chosen to step through the process which creates an email confirmation token. The user is sent a link to the ConfirmEmail action which includes the userId and their token as querystring parameters. The user clicks the link, which validates the token and then marks their email as confirmed.

Validating the email this way is good practice as it prevents people from registering with or adding mailboxes which do not belong to them. By sending a link to the email address requiring an action from the user before the email is activated, only the true owner of the mailbox can access the link and click it to confirm that they did indeed signup for the account. We are trusting the user’s action based on something secure we have provided to them. Because the tokens are encrypted they are protected against forgery.

Generating the token

ASP.NET Core Identity provides the classes necessary to generate the token to be issued to the user in their link. The actual use of the Identity system to request the token and to include it in the link is managed by the MVC site itself, calling into the Identity API as necessary.

In ASP.NET MVC projects the generation of the confirmation email is optional and is not enabled by default; the code is there, but commented out, within the AccountController. The UserManager class within Identity provides all the methods needed to call for the generation of a token and to validate it again later on. Once we have the token back from the Identity library we are then able to use that token when we send our activation email.
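For reference, the commented out code in the template’s Register action looks roughly like this (a sketch from memory rather than a verbatim copy, so treat the exact names and overloads as illustrative):

var code = await _userManager.GenerateEmailConfirmationTokenAsync(user);
var callbackUrl = Url.Action("ConfirmEmail", "Account", new { userId = user.Id, code = code }, protocol: HttpContext.Request.Scheme);
await _emailSender.SendEmailAsync(model.Email, "Confirm your account",
	"Please confirm your account by clicking this link: <a href=\"" + callbackUrl + "\">link</a>");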

In our example we can call GenerateEmailConfirmationTokenAsync(TUser user). We pass in the user for which the token will be generated.

public virtual Task<string> GenerateEmailConfirmationTokenAsync(TUser user)
{
	ThrowIfDisposed();
	return GenerateUserTokenAsync(user, Options.Tokens.EmailConfirmationTokenProvider, ConfirmEmailTokenPurpose);
}

GenerateUserTokenAsync requires the user, the name of the token provider to use (pulled from the Identity options) and the purpose for the token as a string. The ConfirmEmailTokenPurpose is a constant string defining the wording to use. In this case it is “EmailConfirmation”.

Each token is expected to carry a purpose so that they can be tied very closely to a specific action within your system. A token for one action would not be valid for another.

public virtual Task<string> GenerateUserTokenAsync(TUser user, string tokenProvider, string purpose)
{
	ThrowIfDisposed();
	if (user == null)
	{
		throw new ArgumentNullException("user");
	}
	if (tokenProvider == null)
	{
		throw new ArgumentNullException(nameof(tokenProvider));
	}
	if (!_tokenProviders.ContainsKey(tokenProvider))
	{
		throw new NotSupportedException(string.Format(CultureInfo.CurrentCulture, Resources.NoTokenProvider, tokenProvider));
	}

	return _tokenProviders[tokenProvider].GenerateAsync(purpose, this, user);
}

After the usual null checks, what this boils down to is checking through the dictionary of token providers available to the UserManager based on the tokenProvider parameter passed into the method. Once the provider is found, its GenerateAsync method is called.

At the moment all three of the default provider names defined on TokenOptions point at the default token provider, so by default it is the DataProtectorTokenProvider which gets called. Here is the TokenOptions class:

public class TokenOptions
{
	public static readonly string DefaultProvider = "Default";
	public static readonly string DefaultEmailProvider = "Email";
	public static readonly string DefaultPhoneProvider = "Phone";

	public Dictionary<string, TokenProviderDescriptor> ProviderMap { get; set; } = new Dictionary<string, TokenProviderDescriptor>();

	public string EmailConfirmationTokenProvider { get; set; } = DefaultProvider;
	public string PasswordResetTokenProvider { get; set; } = DefaultProvider;
	public string ChangeEmailTokenProvider { get; set; } = DefaultProvider;
}

NOTE: This setup is slightly confusing as it appears that certain token providers, although registered, would never be called based on the way the options are set up by default. This could be modified by changing the options, and I have tried to query why it is set up this way by default.
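If you did want, say, email confirmation to use the registered email token provider rather than the default one, it looks like you could point the relevant option at a different provider name. A sketch of what I mean (untested, and based purely on the properties shown above):

services.Configure<IdentityOptions>(options =>
{
	options.Tokens.EmailConfirmationTokenProvider = TokenOptions.DefaultEmailProvider;
});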

For now though let’s look at the GenerateAsync method on the DataProtectorTokenProvider.

public virtual async Task<string> GenerateAsync(string purpose, UserManager<TUser> manager, TUser user)
{
	if (user == null)
	{
		throw new ArgumentNullException(nameof(user));
	}
	var ms = new MemoryStream();
	var userId = await manager.GetUserIdAsync(user);
	using (var writer = ms.CreateWriter())
	{
		writer.Write(DateTimeOffset.UtcNow);
		writer.Write(userId);
		writer.Write(purpose ?? "");
		string stamp = null;
		if (manager.SupportsUserSecurityStamp)
		{
			stamp = await manager.GetSecurityStampAsync(user);
		}
		writer.Write(stamp ?? "");
	}
	var protectedBytes = Protector.Protect(ms.ToArray());
	return Convert.ToBase64String(protectedBytes);
}

This method uses a memory stream to build up a byte array with the following elements:

  1. The current UTC time (converted to ticks within the extension method)
  2. The user id
  3. The purpose if not null
  4. The user security stamp if supported by the current user manager. The security stamp is a Guid stored in the database against the user. It gets updated when certain actions take place within the Identity UserManager class and provides a way to invalidate old tokens when an account has changed. The security stamp is changed for example when we change the username or email address of a user. By changing the stamp we prevent the same token being used to confirm the email again since the security stamp within the token will no longer match the user’s current security stamp.

These are then passed to the Protect method on an injected IDataProtector. For this post, going into detail about data protectors will be a bit deep and take me quite far off track. I do plan to look at them more in the future but for now it’s sufficient to say that the data protection library defines a cryptographic API for protecting data. Identity leverages this API from its token providers to encrypt and decrypt the tokens it has generated.

The protected bytes are then base64 encoded and returned as the final token string.

Validating the token

Once the user clicks on the link in their confirmation email it will take them to the ConfirmEmail action in the AccountController. That action takes in the userId and the code (the protected token) from the link. It will then call the ConfirmEmailAsync method on the UserManager, which in turn calls a VerifyUserTokenAsync method. This method will get the appropriate token provider from the ProviderMap and call its ValidateAsync method.
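As a reminder of what this looks like from the MVC side, the ConfirmEmail action is roughly along these lines (a sketch rather than a verbatim copy of the template code):

public async Task<IActionResult> ConfirmEmail(string userId, string code)
{
	if (userId == null || code == null)
	{
		return View("Error");
	}
	var user = await _userManager.FindByIdAsync(userId);
	if (user == null)
	{
		return View("Error");
	}
	// ConfirmEmailAsync verifies the token and, if valid, marks the email as confirmed
	var result = await _userManager.ConfirmEmailAsync(user, code);
	return View(result.Succeeded ? "ConfirmEmail" : "Error");
}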

Let’s step through the code which validates the token on the DataProtectorTokenProvider.

public virtual async Task<bool> ValidateAsync(string purpose, string token, UserManager<TUser> manager, TUser user)
{
	try
	{
		var unprotectedData = Protector.Unprotect(Convert.FromBase64String(token));
		var ms = new MemoryStream(unprotectedData);
		using (var reader = ms.CreateReader())
		{
			var creationTime = reader.ReadDateTimeOffset();
			var expirationTime = creationTime + Options.TokenLifespan;
			if (expirationTime < DateTimeOffset.UtcNow)
			{
				return false;
			}

			var userId = reader.ReadString();
			var actualUserId = await manager.GetUserIdAsync(user);
			if (userId != actualUserId)
			{
				return false;
			}
			var purp = reader.ReadString();
			if (!string.Equals(purp, purpose))
			{
				return false;
			}
			var stamp = reader.ReadString();
			if (reader.PeekChar() != -1)
			{
				return false;
			}

			if (manager.SupportsUserSecurityStamp)
			{
				return stamp == await manager.GetSecurityStampAsync(user);
			}
			return stamp == "";
		}
	}
	// ReSharper disable once EmptyGeneralCatchClause
	catch
	{
		// Do not leak exception
	}
	return false;
}

The token is first converted from the base64 string representation to a byte array. It is then passed to the IDataProtector to be decrypted. Once again, the details of how this works are beyond the scope of this post. The decrypted contents are passed into a new memory stream to be read.

The creation time is first read out from the start of the token. The expiration time is calculated by taking the token creation time and adding the token lifespan defined in the DataProtectionTokenProviderOptions. By default this is set at 1 day. If the token has expired then the method returns false since it is no longer considered a valid token.

It then reads the userId string and compares it to the id of the user (this is based on the userId from the link they were sent in their email; the AccountController first uses that id to load up a user from the database). This ensures that the token belongs to the user who is attempting to use it.

It next reads the purpose and checks that it matches the purpose for the validation that is occurring (this will be passed into the method by the caller). This ensures a token is valid against only a specific function.

It then reads in the security stamp and stores it in a local variable for use in a few moments.

It then calls PeekChar which tries to get (but not advance) the next character from the token. Since we should be at the end of the stream here it checks for -1 which indicates no more characters are available. Any other value indicates that this token has extra data and is therefore not valid.

Finally, if security stamps are supported by the current user manager the security stamp for the user is retrieved from the user store and compared to the stamp it read from the token. Assuming they match then we can now confirm that the token is indeed valid and return that response to the caller.

Other token providers

In addition to the DataProtectorTokenProvider there are other providers defined within the Identity namespace. As far as I can tell these are not yet used, based on the way the options are set up. I have actually queried this in an issue on the Identity repo. It may still be an interesting exercise for me to dig into how they work and how they differ from the DataProtectorTokenProvider. There is also the concept of an SMS verification token in the default ManageController for a default MVC application, which doesn’t use a token provider directly.

It would also be quite simple to implement your own token provider if you need to implement some additional functionality or store additional data within the token.
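As a very rough outline of what that would involve, based on the IUserTokenProvider<TUser> methods described above, a custom provider might start out something like this (purely illustrative; the real work is in generating and validating something cryptographically sensible):

public class MyCustomTokenProvider<TUser> : IUserTokenProvider<TUser> where TUser : class
{
	public Task<string> GenerateAsync(string purpose, UserManager<TUser> manager, TUser user)
	{
		// Build and return a token tied to this purpose and user
		throw new NotImplementedException();
	}

	public Task<bool> ValidateAsync(string purpose, string token, UserManager<TUser> manager, TUser user)
	{
		// Check the incoming token against the purpose and user
		throw new NotImplementedException();
	}

	public Task<bool> CanGenerateTwoFactorTokenAsync(UserManager<TUser> manager, TUser user)
	{
		// Return true if tokens from this provider can be used for two factor authentication
		return Task.FromResult(false);
	}
}

It would then be registered with a name of your choosing via AddTokenProvider, in the same way as the default providers we saw earlier.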


ASP.NET Identity Core 1.0.0 (Under the Hood) Part 1. Where did my salt column go?

Let me prefix this post / series of blog posts with two important notes.

  1. I am not a security expert. This series of posts records my own dive into the ASP.NET Identity Core code, publicly available on GitHub which I’ve done for my own self-interest to try and understand how it works and what is available to me as a developer. Do not assume everything I have interpreted to be 100% accurate or any code samples as suitable copy and paste production code.
  2. This is written whilst reviewing source mostly from the 3.0.0-rc1 release tag. I may stray into more recent dev code if implementations have changed considerably, but will try to highlight when I do so. One very important point here is that at the time of writing this post Microsoft have announced a renaming strategy for ASP.NET 5. Due to the brand new codebase this is now being called ASP.NET Core 1.0 and the underlying .NET Core will be .NET Core 1.0. This is going to result in namespace changes. I’ve used the anticipated new namespaces here (and will update if things change again).

Goals of this post / series

My personal reasons for digging into this code in the first place are simply to learn and further my own understanding. I’ll be investigating the sections that interest me and summarising as I go. I’m not sure where my investigations will take me, so this may be one post or many. I’m blogging about my findings as I hope they may be of interest to others wanting to know more about what happens when using this library in your projects. I’m going to try to edit this together in a readable fashion but I do expect it may be a little jumpy as I focus on the parts of the code I personally wanted to know more about. If you’re reading this blog post and see errors (and I expect there will be some) then please comment or contact me so I can update as appropriate.

Password Basics

I feel that before I go further I should try to explain a few key terms and concepts around password security. There are many more detailed resources which discuss security that go deeper into this but this primer should help when reading the rest of the post. If you know all of this then you may want to skip down to the next section below where I’ll start looking at the ASP.NET Identity code specifically.

With any login system, at a minimum we need to store two key pieces of information. A username and a password. The username uniquely identifies the user within your application and the password is a piece of information that only they should hold, used to verify to the application that they are, who they claim to be. This is the concept of authentication within a software application. In order for our applications to work with users we need to store these pieces of information somewhere so that when the user provides them, the application can look them up and compare them. Very commonly this will involve a database store and in ASP.NET that will likely be an SQL database table.

A simple solution would be to store the username as a string value (VARCHAR) and perhaps do the same for the password. This meets the initial requirement as we can now compare these against what a user provides us and if they match, open the door to the application. But there is a lot more to be considered and the most important element is securing that password from malicious intent. The problem being faced is how to store the password in such a way that it’s not plainly readable to anyone who has, or gains access to the database in which it has been stored. It’s well accepted that people will take the path of least complexity with passwords and often use the same password in many places. They will also likely use the same username where they can (especially if that username is their email address). So the risks go way beyond just your application should a password be compromised. It could well open the door to many other places where your user has also registered.

While some might assume that the database is secure and provides all the safety one needs for their user’s data, recent and numerous data exposures have proven that there are many people out there who can probably get to the database if they want to. I’m not planning to dive into the ways and means of that now but if you want more info check out posts from the likes of Troy Hunt who cover some other security aspects to consider. Even if the database is not compromised storing the passwords in plain text would still be a very poor decision since your internal staff also should not have access to them. A password should be as a user expects and be a secret to themselves only. They should be able to expect that you will take due care and responsibility when working with such a piece of data.

So we now want to store the password in a form that is secure from anyone who is looking at the database table, but we also need to be able to verify the user at a later time by comparing their input, to the password stored for their account. A solution could be reversible encryption where we encrypt the password data when we save it to the database and then decrypt it later to compare it with user input. However that also has concerns and issues, again slightly outside the scope of my intentions for this blog post. The short version being that anything reversible still presents a fair degree of insecurity when it comes to passwords.

This is where the concept of a one way hash comes into the picture. A hash is a mathematically repeatable process whereby we can take the password, put it through hashing and then store that data, instead of a plain text password. The word “repeatable” is important here. The reason hashing works is that if you provide the same text and put it through the same hashing process, the result will always be the same. At login we take the provided password, hash it (using the same algorithm used when the password was first saved to the database) and then compare the hashed value against the stored value. If they match, we have authenticated the user.

The advantages are that if the database is compromised, the hashed value is more secure than plain text. The hash cannot be reversed to generate the original plain text password either. But we’re still not done. With the advent of faster processing, even hashes are breakable. For a start, users commonly use simple and guessable passwords. Attackers can take advantage of that by pre-calculating hash values for common passwords, using the same algorithms that an application might use to create a mapping between potential passwords and hashed passwords. They could then use this against a compromised database and quickly work out many of the passwords being used.

We therefore must again go a step further and try to secure against this. So finally we come to the concept of a salt which adds more complexity and uniqueness to the passwords being hashed. Think of an example where two users choose to use the same exact password for your system. Those passwords when hashed will be identical in the database. This could be valuable information to hackers. Therefore to make things more unique, we use a random set of characters, called a salt. This salt is added to the password so that we get a unique password and therefore unique hash for each user, even if they use the same passwords. This mainly helps defend from the dictionary based attacks as hackers would have to calculate the lookup table once per user with their salt.

Finally we have hashing iterations. Hashing once is a start but with modern computer power it’s pretty fast to generate a lookup table. Iterations help somewhat by making the process more computationally expensive and time consuming. The aim of which is to make hacking a password database too much work vs the potential reward.
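To make the salt and iteration ideas a little more concrete before we look at the Identity code itself, here’s a tiny standalone sketch using the framework’s Rfc2898DeriveBytes type (PBKDF2) from System.Security.Cryptography. It isn’t what Identity calls internally; it just shows the mechanics:

private static string HashPassword(string password, byte[] salt, int iterations)
{
	// PBKDF2: derive a 32 byte hash from the password, salt and iteration count
	using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
	{
		return Convert.ToBase64String(pbkdf2.GetBytes(32));
	}
}

Call it twice with the same password, salt and iteration count and the output matches, which is what lets us compare a login attempt against the stored value. Give each user a different random salt and two identical passwords no longer produce identical hashes.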

ASP.NET Identity and Password Hashing

In Identity 2.0 the hashed password and the salt were stored in separate columns in the database. It was only when I looked at a database for a new ASP.NET Core website that I noticed that there was no salt column and I wondered what was happening (hence this blog post)!

So let’s follow the flow of a registering user, with a focus on the password, to understand what ASP.NET Core Identity 1.0.0 is doing to securely save it to the database (and answer my question – where did the salt column go!)

We start in the AccountController with a call to CreateAsync in the Microsoft.AspNet.Identity.UserManager class passing in an IdentityUser object and the password to be hashed and saved.

After running through any password validators we call on an instance of IPasswordHasher<TUser> (which is injected into the UserManager when it is first constructed).

The PasswordHasher is constructed with an instance of PasswordHasherOptions. The PasswordHasherOptions class defines the settings used for the password hashing. By default the compatibility mode is set to Identity V3 mode, but if you needed to support V2 and below this can be set in the options. There is also a setting for the iterations to use when creating the hash, which by default is set to 10,000. Finally this class defines and creates a static instance of the RandomNumberGenerator object found in mscorlib.
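If you did need to tweak these settings, they can be configured through the options system in the usual way. A sketch of what that might look like in ConfigureServices (the values here are just examples):

services.Configure<PasswordHasherOptions>(options =>
{
	options.IterationCount = 20000;
	options.CompatibilityMode = PasswordHasherCompatibilityMode.IdentityV3;
});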

UserManager.CreateAsync calls the HashPassword method.

The HashPassword method checks the compatibility mode in the options and flows through the appropriate method to handle either V2 or V3 identity. We’ll look at the V3 flow here using HashPasswordV3.

Here is the code…

private byte[] HashPasswordV3(string password, RandomNumberGenerator rng)
{
    return HashPasswordV3(password, rng,
    prf: KeyDerivationPrf.HMACSHA256,
    iterCount: _iterCount,
    saltSize: 128 / 8,
    numBytesRequested: 256 / 8);
}

private static byte[] HashPasswordV3(string password, RandomNumberGenerator rng, KeyDerivationPrf prf, int iterCount, int saltSize, int numBytesRequested)
{
    // Produce a version 3 (see comment above) text hash.
    byte[] salt = new byte[saltSize];
    rng.GetBytes(salt);
    byte[] subkey = KeyDerivation.Pbkdf2(password, salt, prf, iterCount, numBytesRequested);

    var outputBytes = new byte[13 + salt.Length + subkey.Length];
    outputBytes[0] = 0x01; // format marker
    WriteNetworkByteOrder(outputBytes, 1, (uint)prf);
    WriteNetworkByteOrder(outputBytes, 5, (uint)iterCount);
    WriteNetworkByteOrder(outputBytes, 9, (uint)saltSize);
    Buffer.BlockCopy(salt, 0, outputBytes, 13, salt.Length);
    Buffer.BlockCopy(subkey, 0, outputBytes, 13 + saltSize, subkey.Length);
    return outputBytes;
}

Breaking down the parameters of the main overloaded method:

  1. A string representing the password to be hashed
  2. An instance of a RandomNumberGenerator (defined in the mscorlib assembly). I’m not going to delve into how that code works here but the CoreCLR project on GitHub contains the code if you want to take a look for yourself.
  3. A choice of the PRF (pseudorandom function family) to use for the key derivation. In this case HMACSHA256 is the standard for Identity V3
  4. An integer representing the iteration count to use when hashing (coming from the PasswordHasherOptions)
  5. The size (number of bytes) to be used for the salt – 128 bits / 8 to calculate the byte size of 16
  6. The number of bytes for the hashed password – 256 bits / 8 to calculate the byte size of 32

A new byte array is created to hold the 16 byte salt which is generated with a call to GetBytes(salt) on the RandomNumberGenerator. This populates the byte array with 16 random bytes. Then a hashed password is created using PBKDF2 (Password-Based Key Derivation Function 2) key derivation, taking the password, the salt, the PRF of HMACSHA256, the number of iterations to use and the number of bytes required as the length of the hashed password.

We now have a hashed password and the random salt used to generate that password. In earlier versions of Identity I saw these being saved into two separate columns inside the database. However with Identity 3.0 / ASP.NET Core Identity 1.0 there is only the PasswordHash column. To save the salt and hashed password together, a new byte array is created. It is the size of the salt + hashed password + 13 extra bytes used to store some metadata about the hashing which took place. These form a single byte array which will be base64 encoded as a string to save into the database.

Construction of the Final Byte Array

The first byte is a format marker and for Identity 3.0 / ASP.NET Core 1.0.0 this is set to 1 (defined in hex).

The next 4 bytes store the PRF that was used. This is the value of the enum available in Microsoft.AspNetCore.Cryptography.KeyDerivation. This belongs in the ASP.NET DataProtection assembly (source on GitHub)

The next 4 bytes store the iteration count used to generate the hash

The next 4 bytes store the salt size

Notice with each of these items there is a call to WriteNetworkByteOrder, a helper method within the PasswordHasher.


private static void WriteNetworkByteOrder(byte[] buffer, int offset, uint value)
{
    buffer[offset + 0] = (byte)(value >> 24);
    buffer[offset + 1] = (byte)(value >> 16);
    buffer[offset + 2] = (byte)(value >> 8);
    buffer[offset + 3] = (byte)(value >> 0);
}

What this is doing in short is splitting the 32 bit integer into the component bytes using bitwise shifting and then storing those in big-endian (network byte) order into the array.
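When the hash is later verified, the PasswordHasher reads these values back out using an equivalent helper which reassembles the integer from the four bytes. It looks something like this (reconstructed from memory, so treat it as illustrative):

private static uint ReadNetworkByteOrder(byte[] buffer, int offset)
{
	return ((uint)buffer[offset + 0] << 24)
		| ((uint)buffer[offset + 1] << 16)
		| ((uint)buffer[offset + 2] << 8)
		| ((uint)buffer[offset + 3]);
}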

HashPasswordV3 then copies in the bytes from the salt and the password hash to create the final byte array which is returned to the main HashPassword method. This is converted to a base64 encoded string, and that string is then written to the database.

And there we have it. That’s a fairly deep dive of how passwords are hashed inside ASP.NET Core 1.0.0 identity and then stored in the database.
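If you want to see the round trip for yourself outside of the full Identity pipeline, the hasher can also be used directly. A small sketch (ApplicationUser here stands in for whatever user type your project uses):

var hasher = new PasswordHasher<ApplicationUser>();
var user = new ApplicationUser();

var hashed = hasher.HashPassword(user, "Pa$$w0rd!");
var result = hasher.VerifyHashedPassword(user, hashed, "Pa$$w0rd!");

// result is PasswordVerificationResult.Success when the supplied password matches the stored hash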


Introducing My Blog

I’ve been thinking about creating a blog on and off for a year or so. I like the idea of sharing what I learn and if nothing else it gives me somewhere to write stuff down that I might later need. I’ve had a blog before and finding the time and discipline to keep it going was tough. I hope I do better this time around.

It’s mostly going to be about ASP.NET and C# since those are what I work with day to day. It’s an exciting time with the nearing release of ASP.NET Core 1.0 and I’ll be mostly investigating the code and concepts for that product as I use it for new projects. I enjoy learning new things and I want to get as good a grip as possible on how things work and best ways to use this technology. I’ll cover the things I learn as I go.

As far as intros go, I think that’ll do. We’ll see how well I keep this up and if I find the time to post.
