Tuesday, August 18, 2009

Turn your actions into feeds on ASP.net MVC

With this implementation you can re-use parts of your content Controller (the database query, for example) and render it as an RSS and/or Atom feed.

Let's say you have an action like this (extracted from DotNetBurner’s source):

       [AcceptVerbs (HttpVerbs.Get | HttpVerbs.Head), CompressFilter]
       public ActionResult Upcoming (int? page, string sort)
       {
           var viewData = GetStoryListViewData<StoryListByCategoryData> (page);
           int sortDays = GetSortDays (sort);

           viewData.Stories = Repository.GetUpcomingStories (CurrentUserId, sortDays, SiteSettings.Settings.UpcomingMaxDays,
                                                             CalculateStartIndex (page), StorySettings.Settings.StoryPerPage);

           viewData.Category = Strings.Upcoming;
           viewData.CategoryDisplayName = Strings.Upcoming;
           viewData.SortDays = sortDays;

           return View ("Upcoming", viewData);
       }

This action returns the upcoming stories. To turn it into a feed, I just implemented this new action:

    [AcceptVerbs (HttpVerbs.Get | HttpVerbs.Head), CompressFilter, OutputCache (CacheProfile = "Feeds")]
    public ActionResult UpcomingFeed (string sort, string feedType, int? feedCount)
    {
        return Content (Feed ("Upcoming", null, sort, feedType, feedCount), "application/xml", Encoding.UTF8);
    }

    string Feed (string feed, int? id, string sort, string feedType, int? feedCount)
    {
        int sortDays = GetSortDays (sort);

        if (!id.HasValue)
            id = 0;

        if (!feedCount.HasValue || feedCount == 0)
            feedCount = StorySettings.Settings.StoryPerFeed;

        if (feedCount > 200)
            feedCount = 200;

        var categories = Repository.GetCategories ();

        IEnumerable<Story> stories = Repository.GetUpcomingStories (CurrentUserId, sortDays, SiteSettings.Settings.UpcomingMaxDays, 1, (int)feedCount);

        FeedType type = "rss".Equals (feedType, StringComparison.InvariantCultureIgnoreCase) ? FeedType.RSS20 : FeedType.ATOM10;

        using (var memoryStream = new MemoryStream ())
        {
            FeedHelper.GenerateFeed (stories, categories, memoryStream, type, Url);

            return Encoding.UTF8.GetString (memoryStream.ToArray ());
        }
    }

The Feed method is simplified here for the sake of the example, but it can take the “feed” parameter into account to return different feeds (for categories, tags, etc.). To generate the feed, I implemented a FeedHelper class (you will probably have to tune it to your needs):

   public enum FeedType
   {
       RSS20,
       ATOM10
   }

   public static class FeedHelper
   {
       public static void GenerateFeed (IEnumerable<Story> stories,
                                        IEnumerable<Category> categories,
                                        Stream output,
                                        FeedType type,
                                        UrlHelper urlHelper)
       {
           // Create an XmlWriter to write the feed into it
           using (XmlWriter writer = XmlWriter.Create (output))
           {
               // Set the feed properties (the site title doubles as the
               // feed description here; adjust to your settings)
               SyndicationFeed feed = new SyndicationFeed (SiteSettings.Settings.Title,
                                                           SiteSettings.Settings.Title,
                                                           new Uri (SiteSettings.Settings.Address));

               feed.LastUpdatedTime = DateTime.UtcNow;
               feed.Language = SiteSettings.Settings.Language;

               feed.Authors.Add (new SyndicationPerson (SiteSettings.Settings.AdminMail));

               // Add categories
               foreach (var category in categories)
                   foreach (var subcat in category.SubCategories)
                       feed.Categories.Add (new SyndicationCategory (subcat.Name));

               // Set generator
               feed.Generator = SiteSettings.Settings.Title;

               string siteUrl = SiteSettings.Settings.Address;

               // Add post items
               List<SyndicationItem> items = new List<SyndicationItem> ();

               foreach (var story in stories)
               {
                   string url = String.Concat (siteUrl,
                                               urlHelper.RouteUrl (new RouteValueDictionary
                                                                   {
                                                                       {"id", story.StoryId},
                                                                       {"title", story.Title.ConvertToUrlPath ()}
                                                                   }));

                   string voteButton = "{0}<br>".FormatWith (SwissKnife.GetVoteButtonFor (story.Url));

                   TextSyndicationContent content =
                       SyndicationContent.CreateHtmlContent (String.Concat ("<div><p>", story.Description, "</p>",
                                                                            voteButton, "</div>"));

                   SyndicationItem item = new SyndicationItem (story.Title,
                                                               content,
                                                               new Uri (url),
                                                               url, // the story url doubles as the item id
                                                               new DateTimeOffset (story.PostedOn));
                   item.PublishDate = story.PostedOn;
                   item.Categories.Add (new SyndicationCategory (story.CategoryName));

                   items.Add (item);
               }

               feed.Items = items;

               // Write the feed to output
               if (type == FeedType.RSS20)
               {
                   Rss20FeedFormatter rssFormatter = new Rss20FeedFormatter (feed);
                   rssFormatter.WriteTo (writer);
               }
               else
               {
                   Atom10FeedFormatter atomFormatter = new Atom10FeedFormatter (feed);
                   atomFormatter.WriteTo (writer);
               }
           }
       }
   }
This uses System.ServiceModel.Syndication to create a feed from the same database query that is used to render the site content. With this approach, I enabled DotNetBurner to have feeds in less than four hours.
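If you want to try System.ServiceModel.Syndication in isolation before wiring it into a controller, here is a minimal, self-contained sketch (the feed title, URLs and item content are placeholder values; it needs a reference to the System.ServiceModel.Syndication assembly, which ships with .NET 3.5 and is a NuGet package on newer runtimes):

```csharp
using System;
using System.IO;
using System.ServiceModel.Syndication;
using System.Text;
using System.Xml;

public static class Program
{
    public static void Main()
    {
        // Build a feed with a single item (placeholder values).
        var feed = new SyndicationFeed("Example title", "Example description",
                                       new Uri("http://example.com/"));
        feed.Items = new[]
        {
            new SyndicationItem("A story", "The story body",
                                new Uri("http://example.com/story/1"))
        };

        // Serialize it as RSS 2.0, the same way FeedHelper does.
        using (var stream = new MemoryStream())
        {
            using (var writer = XmlWriter.Create(stream))
            {
                new Rss20FeedFormatter(feed).WriteTo(writer);
            }

            string xml = Encoding.UTF8.GetString(stream.ToArray());
            Console.WriteLine(xml.Contains("<rss"));
        }
    }
}
```

Swapping Rss20FeedFormatter for Atom10FeedFormatter is the only change needed to emit Atom instead.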

Drop me a comment if you implement this in a similar time :)

Tuesday, August 11, 2009

Lazy loading a database query to improve performance of caching on ASP.net MVC

This article explains an approach to lazy load a database query in an ASP.net MVC controller, so that the query does not execute in the controller itself but only in the View, and only when the View is not cached. So, if the View is cached, the database is not queried at all.

With ASP.net MVC you obviously have a Model, a View and a Controller. But the workflow of a page execution is not really M-V-C, because it doesn't happen in that order. Skipping a few steps to keep the explanation simple, let's start with the Controller. In the controller it's common to query the database through the Model and fill the view model with the data. In the view, you usually have an HTML helper that takes the view model and renders it as HTML.

There lies a problem. If you use caching inside your HTML helper, the database is still queried, because the query is issued in the Controller, which runs before the (cached) View.

To understand this, I recommend taking a look at a previous article: http://tsayao.blogspot.com/2009/07/caching-html-helpers-on-aspnet-mvc.html.

The solution is pretty simple: lazy load the data in the Controller.

This way, the database is not queried in the Controller; the query is deferred until the enumeration is requested in the View. So, if the HTML helper in the View is cached, the query is never executed!

First, let's implement a simple LazyCollection class:

using System.Collections;
using System.Collections.Generic;

namespace DotNetBurner
{
  public class LazyCollection<T> : IEnumerable<T>
  {
      public delegate IEnumerable<T> LazyCallback();

      private readonly LazyCallback _callBack;
      private IEnumerable<T> _items;
      private bool _wasCalled = false;

      public LazyCollection(LazyCallback callBack)
      {
          _callBack = callBack;
      }

      IEnumerator IEnumerable.GetEnumerator()
      {
          CallCallBack();
          return _items.GetEnumerator();
      }

      public IEnumerator<T> GetEnumerator()
      {
          CallCallBack();
          return _items.GetEnumerator();
      }

      // Executes the callback (the database query) on first use only
      void CallCallBack()
      {
          lock (this)
          {
              if (!_wasCalled)
                  _items = _callBack();

              _wasCalled = true;
          }
      }
  }
}
The limitation of this example is that your database query method must return a collection that implements IEnumerable<T>. Something like this:

public virtual IEnumerable<Tag> GetTags(int top)
{
  List<Tag> tags = new List<Tag>();

  using (SqlConnection conn = new SqlConnection(_connStr))
  using (SqlCommand cmd = new SqlCommand("sp_tag_get_tops", conn))
  {
      cmd.CommandType = CommandType.StoredProcedure;
      cmd.Parameters.AddWithValue("top", top);

      conn.Open();

      using (SqlDataReader reader = cmd.ExecuteReader())
      {
          while (reader.Read())
          {
              Tag tag = new Tag
                        {
                            StoryCount = reader.GetInt32(reader.GetOrdinal("Count")),
                            TagId = reader.GetInt32(reader.GetOrdinal("TagId")),
                            Name = reader.GetString(reader.GetOrdinal("Name"))
                        };

              tags.Add(tag);
          }
      }
  }

  return tags;
}

On the Controller, you can do something like this:

viewData.Tags = new LazyCollection<Tag>(() => Repository.GetTags(StorySettings.Settings.TopTags));

This way, the database is only queried when you use the view model. In this case, if you never enumerate ViewData.Model.Tags (which happens when you cache the view part that touches it), no query is issued to the database. This example uses the SqlClient directly, but I think you can extend it to use Linq to Entities.
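To see the laziness in action outside a web application, here is a compact, self-contained variant of LazyCollection (a sketch using Func<IEnumerable<T>> instead of the delegate type above), with a counter standing in for the database query:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Minimal variant of LazyCollection<T>, used to show that the callback
// (and therefore the database query) only runs on first enumeration.
public class LazyCollection<T> : IEnumerable<T>
{
    private readonly Func<IEnumerable<T>> _callback;
    private readonly object _sync = new object();
    private IEnumerable<T> _items;
    private bool _wasCalled;

    public LazyCollection(Func<IEnumerable<T>> callback)
    {
        _callback = callback;
    }

    public IEnumerator<T> GetEnumerator()
    {
        lock (_sync)
        {
            if (!_wasCalled)
            {
                _items = _callback();
                _wasCalled = true;
            }
        }

        return _items.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}

public static class Program
{
    public static void Main()
    {
        int queries = 0;
        var tags = new LazyCollection<string>(() => { queries++; return new[] { "mvc", "sql" }; });

        Console.WriteLine(queries); // nothing queried yet

        foreach (var tag in tags) { } // first enumeration triggers the callback
        foreach (var tag in tags) { } // second enumeration reuses the result

        Console.WriteLine(queries); // the callback ran exactly once
    }
}
```

Constructing the collection costs nothing; only the first foreach pays for the query.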

Saturday, August 1, 2009

Using SQL Server 2005+ index included columns to improve performance of an ASP.net resource

This post shows a real scenario of using included columns on SQL Server 2005/2008 to improve the response time of an ASP.net resource.

When developing DotNetBurner, I wanted to make sure the site's performance was the best it could be. Toward this goal, I used indexes with included columns to improve the response time of the "burn!" count image, which shows how many "burns" (votes) a particular link has. To retrieve this vote count, the database must be queried, like this:

SELECT VoteCount
FROM Story
WHERE UrlHash = @urlHash

The query result is filtered by the UrlHash column, which is an MD5 hash of the URL. Assuming that we have thousands of stories, it's recommended to create an index on the UrlHash column.

The following queries will create a Story table and populate it with some items, so we can query it and get the execution plan:


CREATE TABLE Story (
 StoryId INT PRIMARY KEY,
 Description NVARCHAR(2000),
 Url VARCHAR(255),
 UrlHash UNIQUEIDENTIFIER, /* This is not MD5, but can be used for this example */
 VoteCount INT
)

-- Inserts the first story, so we know the UrlHash
INSERT INTO Story (StoryId, Description, Url, UrlHash, VoteCount)
VALUES(1, 'This is a story description where the quick brown fox jumped over the lazy dog which had a tattoo written lorem ipsum dolor.',
    'http://www.someurl.com', '5946D715-38FA-42C5-8056-F8F0C86B8AE8', 2)

-- Inserts 9998 more stories
DECLARE @count INT
SET @count = 2

WHILE @count < 10000
BEGIN
 INSERT INTO Story (StoryId, Description, Url, UrlHash, VoteCount)
 VALUES(@count, 'This is a story description where the quick brown fox jumped over the lazy dog which had a tattoo written lorem ipsum dolor.',
     'http://www.someurl.com', NEWID(), @count + 10)

 SET @count = @count + 1
END


-- Creates an index on the UrlHash column
CREATE INDEX IDX_Story_1 ON Story(UrlHash)

SELECT VoteCount
FROM Story
WHERE UrlHash = '5946D715-38FA-42C5-8056-F8F0C86B8AE8'

This is the resulting execution plan (with an index on the UrlHash column):

[Image: SQL Server execution plan]

Looking at the execution plan, you will see a key lookup. I bet I can reduce it to a single index seek :)



-- Replaces the index with one that includes the VoteCount column
DROP INDEX IDX_Story_1 ON Story
CREATE INDEX IDX_Story_1 ON Story(UrlHash) INCLUDE (VoteCount)

SELECT VoteCount
FROM Story
WHERE UrlHash = '5946D715-38FA-42C5-8056-F8F0C86B8AE8'

Now we have replaced the index with one that includes the VoteCount column. Let's look at the execution plan:

[Image: SQL Server execution plan]

Now this executes faster and uses fewer resources. This happens because, with the included column, all the values the query needs (UrlHash and VoteCount) can be found in the index, so SQL Server doesn't need to look up the value of VoteCount in the clustered index.

You could, of course, just add VoteCount directly to the index key, like this:

CREATE UNIQUE INDEX IDX_Story_1 ON Story(UrlHash, VoteCount)

This would work, but it would also use more disk space, since the index key would be bigger. With the INCLUDE option, SQL Server stores the value only at the leaf level of the index, because the value is only returned, never filtered on.

Thursday, July 30, 2009

Using the PIVOT operator on SQL Server 2005+

The PIVOT operator is used to transform rows into columns using an aggregate function. One big limitation is that you have to know the column names in advance, but it's still useful in some cases, like when you want to aggregate something by hour (for example).

About the limitation of having to know the column names: this is probably a design decision, because otherwise someone could write a query that produces too many columns and causes all sorts of problems, like exceeding the column limit or returning columns the application is not prepared to receive.

To keep the example easy to understand, we will create a simple Authentication table:

IF OBJECT_ID('Authentication') IS NOT NULL
 DROP TABLE Authentication

CREATE TABLE Authentication (
 AuthenticationId INT,
 Date DATETIME,
 DocumentId UNIQUEIDENTIFIER
)

INSERT INTO Authentication (AuthenticationId, Date, DocumentId) VALUES (1, DATEADD(hh, +1, GETDATE()), NEWID())
INSERT INTO Authentication (AuthenticationId, Date, DocumentId) VALUES (2, DATEADD(hh, +1, GETDATE()), NEWID())
INSERT INTO Authentication (AuthenticationId, Date, DocumentId) VALUES (3, DATEADD(hh, +2, GETDATE()), NEWID())
INSERT INTO Authentication (AuthenticationId, Date, DocumentId) VALUES (4, DATEADD(hh, +3, GETDATE()), NEWID())
INSERT INTO Authentication (AuthenticationId, Date, DocumentId) VALUES (5, DATEADD(hh, +4, GETDATE()), NEWID())
INSERT INTO Authentication (AuthenticationId, Date, DocumentId) VALUES (6, DATEADD(hh, +4, GETDATE()), NEWID())
INSERT INTO Authentication (AuthenticationId, Date, DocumentId) VALUES (7, DATEADD(hh, +5, GETDATE()), NEWID())
INSERT INTO Authentication (AuthenticationId, Date, DocumentId) VALUES (8, DATEADD(hh, +6, GETDATE()), NEWID())
INSERT INTO Authentication (AuthenticationId, Date, DocumentId) VALUES (9, DATEADD(hh, +7, GETDATE()), NEWID())
INSERT INTO Authentication (AuthenticationId, Date, DocumentId) VALUES (10, DATEADD(hh, +7, GETDATE()), NEWID())
INSERT INTO Authentication (AuthenticationId, Date, DocumentId) VALUES (11, DATEADD(hh, +8, GETDATE()), NEWID())

Now we have some authentications. Let's say a manager wants a report of authentications by hour, to know how many people come to the company stores to pay bills (assuming some kind of authentication is done when receiving payments). The information can be obtained with a query like this:

SELECT [0], [1], [2], [3], [4], [5], [6], [7], [8], [9], [10],
  [11], [12], [13], [14], [15], [16], [17], [18], [19],
  [20], [21], [22], [23]
FROM (SELECT AuthenticationId, DATEPART(HOUR, Date) AS Hour FROM Authentication) AS TBL
PIVOT
 (COUNT(AuthenticationId)
 FOR Hour IN
 ([0], [1], [2], [3], [4], [5], [6], [7], [8], [9], [10],
  [11], [12], [13], [14], [15], [16], [17], [18], [19],
  [20], [21], [22], [23])
 ) AS PVT

The result shows the authentication count for each hour. You can copy it and paste it into Excel, and maybe generate some charts (assuming your use case has more data and more rows).

Tracking UPDATE changes on SQL Server 2005+ with the OUTPUT clause

It is common to have to execute a mass update on some table, for example to update customers' credit classifications. When such updates are executed, you may want to know what has changed, and maybe log it. There are several ways to do this, but I recently found one that makes it easier: the OUTPUT clause.

First, we create a Customers table for the example and populate it with some data:

IF OBJECT_ID('Customers') IS NOT NULL
 DROP TABLE Customers

CREATE TABLE Customers (
 CustomerId INT,
 [Name] VARCHAR(30),
 Income MONEY,
 Classification CHAR(1)
)

INSERT INTO Customers (CustomerId, [Name], Income, Classification) VALUES (1, 'JOHN DOE', 5000, 'C')
INSERT INTO Customers (CustomerId, [Name], Income, Classification) VALUES (1, 'JANE DOE', 6000, 'B')
INSERT INTO Customers (CustomerId, [Name], Income, Classification) VALUES (1, 'JOHN SMITH', 6500, 'B')
INSERT INTO Customers (CustomerId, [Name], Income, Classification) VALUES (1, 'HOMELESS JOHN', 100, 'Z')

Now, let's update the data (imagine that there are thousands of rows):

UPDATE Customers
SET Classification = 'C'
OUTPUT INSERTED.CustomerId,
    INSERTED.Name,
    DELETED.Classification AS FromClassification,
    INSERTED.Classification AS ToClassification
WHERE Income BETWEEN 5000 AND 7000

The OUTPUT clause produces the following result:

CustomerId  Name                           FromClassification ToClassification
----------- ------------------------------ ------------------ ----------------
1           JOHN DOE                       C                  C
1           JANE DOE                       B                  C
1           JOHN SMITH                     B                  C

If you want to persist the values:

IF OBJECT_ID('CreditChangeLog') IS NOT NULL
 DROP TABLE CreditChangeLog

CREATE TABLE CreditChangeLog (
 CustomerId INT,
 FromClassification CHAR(1),
 ToClassification CHAR(1)
)

UPDATE Customers
SET Classification = 'C'
OUTPUT INSERTED.CustomerId,
    DELETED.Classification AS FromClassification,
    INSERTED.Classification AS ToClassification
    INTO CreditChangeLog
WHERE Income BETWEEN 5000 AND 7000

SELECT * FROM CreditChangeLog

This will save the changed values on the CreditChangeLog table.
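From C#, an UPDATE with an OUTPUT clause behaves like a query: the changed rows come back through a data reader. Here is a sketch under the assumption that the Customers table above exists and that a valid connection string is supplied (it is not runnable as-is without a database):

```csharp
using System;
using System.Data.SqlClient;

public static class Program
{
    public static void Main()
    {
        // Same update as above; OUTPUT makes it return a result set.
        const string sql =
            "UPDATE Customers SET Classification = 'C' " +
            "OUTPUT INSERTED.CustomerId, DELETED.Classification, INSERTED.Classification " +
            "WHERE Income BETWEEN 5000 AND 7000";

        // Placeholder connection string; replace with your own.
        using (var conn = new SqlConnection("Data Source=.;Initial Catalog=Test;Integrated Security=True"))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();

            // ExecuteReader (not ExecuteNonQuery) so the OUTPUT rows can be read.
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1} -> {2}",
                                      reader.GetInt32(0),   // CustomerId
                                      reader.GetString(1),  // old classification
                                      reader.GetString(2)); // new classification
                }
            }
        }
    }
}
```

This is handy when the application needs to react to the changes (e.g. invalidate caches) without a second round trip.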

Monday, July 27, 2009

Avoiding code execution when debugging or testing an ASP.net application

It is common to have code that is part of some workflow that you don’t want to execute when you are testing. For example, if your code sends an e-mail to some users, you don’t want to flood your users’ mailboxes with e-mails originated from your tests. Unless you are testing whether sending the e-mail works, you can do this to test the rest of your code:

        static void SendMail(string to, string subject, string body)
        {
#if !DEBUG
           using (var message = new MailMessage())
           {
               message.From = new MailAddress(SiteSettings.Settings.AdminMail);
               message.To.Add(new MailAddress(to));
               message.Subject = subject;
               message.Body = body;
               message.BodyEncoding = Encoding.UTF8;
               message.SubjectEncoding = Encoding.UTF8;

               var mailClient = new SmtpClient();
               mailClient.Send(message);
           }
#endif
        }
The #if !DEBUG preprocessor directive ensures that the code is not compiled into the assembly, so it never executes. Just remember to set your build configuration to “Debug” in Visual Studio.
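A related sketch using the [Conditional] attribute: it removes the call sites of a method when the named compilation symbol is not defined, so callers need no #if blocks. Since [Conditional] cannot negate a symbol, this uses a made-up SEND_MAIL symbol that you would define only for the builds that should really send mail:

```csharp
using System;
using System.Diagnostics;

public static class Mailer
{
    // The method body is always compiled, but every call to it is
    // stripped by the compiler unless SEND_MAIL is defined for the
    // build (e.g. csc /define:SEND_MAIL Program.cs).
    [Conditional("SEND_MAIL")]
    public static void SendMail(string to, string subject, string body)
    {
        Console.WriteLine("Sending '{0}' to {1}", subject, to);
    }
}

public static class Program
{
    public static void Main()
    {
        // Without /define:SEND_MAIL this call disappears at compile
        // time, so a default build prints nothing.
        Mailer.SendMail("user@example.com", "Hi", "Hello there");
    }
}
```

Unlike #if, the signature stays visible to all configurations, so the code always compiles even when the symbol is undefined.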

Wednesday, July 22, 2009

Caching HTML helpers on ASP.net MVC with optional SQL Cache Dependency

When thinking about performance, one of the first things that comes to mind is caching. Caching saves server resources by storing the result the first time it’s processed. In ASP.net MVC it’s common to use HTML helpers, which are just methods that return a resulting HTML string. We can combine both and cache the resulting HTML the first time the method is called, roundhouse-kicking the performance of the page.

To achieve this, we first create a CacheExtensions class to hold our caching methods:

public static class CacheExtensions
{
   public static bool TryGet<T> (this Cache cache, string key, out T value)
   {
       var obj = cache.Get (key);

       if (obj != null)
       {
           value = (T)obj;
           return true;
       }

       value = default(T);

       return false;
   }

   public static void CacheData<T> (this Cache cache, string key, string profileKey, T data)
   {
       var dc = DataCacheSettings.GetDataCacheProfile (profileKey);

       if (dc == null)
           throw new ArgumentException ("Data cache profile {0} not found".FormatWith (profileKey));

       if (Equals (data, default(T)))
           return;

       if (dc.Enabled)
       {
           if (dc.UseDependency)
           {
               if (string.IsNullOrEmpty (dc.SqlDependency))
                   throw new InvalidOperationException ("sqlDependency must be set when dependency is enabled.");

               var sqlDepInfo = dc.SqlDependency.Split (new char[] {':'}, StringSplitOptions.RemoveEmptyEntries);

               if (sqlDepInfo.Length != 2)
                   throw new InvalidOperationException ("sqlDependency option must obey the Database:Table format");

               cache.Insert (key, data,
                             new SqlCacheDependency (sqlDepInfo[0], sqlDepInfo[1]));
           }
           else
           {
               cache.Insert (key, data, null,
                             Cache.NoAbsoluteExpiration,
                             TimeSpan.FromMinutes (dc.Duration));
           }
       }
   }
}

The TryGet method checks if the cache key exists and assigns its value to the out parameter, returning true if there was a matching key and false if the cached item didn’t exist.

The CacheData method caches the data under the provided key. This method supports cache profiles and SQL Server cache dependencies, which I will explain later in this article.

To use it:

public static class HtmlHelperExtensionsWithCache
{
   public static string Hello (this HtmlHelper helper, string name)
   {
       string key = String.Concat ("hello-", name);
       string ret;

       if (Cache.TryGet (key, out ret))
           return ret;

       // if you have to format some html, use StringBuilder
       ret = String.Format ("Hello {0}, have a nice day!", name);

       Cache.CacheData (key, "Html", ret);

       return ret;
   }
}

This extension method acts as an HTML helper and caches the message per user name. The “Html” argument to CacheData reads the “Html” cache profile from web.config, which is configured this way (for example):

   <!-- duration in minutes -->
       <add priority="High" duration="30" name="Html" />
       <add priority="Normal" duration="10" name="User" />
       <add priority="AboveNormal" duration="15" name="StoryList" usedependency="true" sqldependency="DotNetBurner:Story" />
       <add priority="BelowNormal" duration="15" name="VoteButton" />
       <add priority="Normal" duration="60" name="Planet" />
       <add priority="Low" duration="1440" name="StaticResource" />

Configuration properties:

name: the profile name (in this case, “Html”).
duration: the cache duration, in minutes.
priority: the cache priority (the same values used in the ASP.net output cache configuration); lower priorities are likely to be evicted first.
usedependency: whether SQL cache dependency is enabled.
sqldependency: follows the format <dependency configuration name>:<table name>, where the dependency configuration name is also configured in web.config, as follows:

<sqlCacheDependency enabled="true" pollTime="5000">
       <databases>
           <add name="DotNetBurner" connectionStringName="DotNetBurnerConnectionString" />
       </databases>
</sqlCacheDependency>

This will check the database every 5 seconds. If the table changed, the dependency is invalidated, which means that every cache entry that depends on it is invalidated too.

This is used on DotNetBurner and it’s working very well so far. I used this caching technique wherever I could (except on user-dependent data) and the site is very fast, even for Chuck Norris.
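The TryGet/CacheData flow above can be reduced to a runnable sketch, with a Dictionary standing in for the ASP.net Cache object (profiles and SQL dependencies left out), to show that the expensive part runs only once per key:

```csharp
using System;
using System.Collections.Generic;

public static class Program
{
    // Stands in for the ASP.net Cache in this sketch.
    static readonly Dictionary<string, string> Cache = new Dictionary<string, string>();
    static int renders = 0;

    static string Hello(string name)
    {
        string key = "hello-" + name;
        string ret;

        // Stands in for Cache.TryGet: a hit skips the work entirely.
        if (Cache.TryGetValue(key, out ret))
            return ret;

        renders++; // stands in for the expensive HTML rendering
        ret = String.Format("Hello {0}, have a nice day!", name);

        // Stands in for Cache.CacheData.
        Cache[key] = ret;

        return ret;
    }

    public static void Main()
    {
        Console.WriteLine(Hello("John"));
        Console.WriteLine(Hello("John")); // served from the cache
        Console.WriteLine(renders);       // the expensive part ran once
    }
}
```

In the real helper, the Dictionary is replaced by the ASP.net Cache, which adds expiration, priorities and the SQL dependency invalidation described above.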