How to set up the Semantic Logging Application Block (SLAB)

Event Logging

Logging errors and information is a useful, if not absolutely necessary, tool for building and managing applications. Microsoft's Enterprise Library includes the Semantic Logging Application Block (SLAB), which writes consistently formatted log entries to destinations such as a SQL database. This article will walk you through setting up SLAB to log information to your SQL Server database.

Installing Semantic Logging Application Block:
Install the NuGet package for ‘Semantic Logging Application Block – SQL Server Sink’ (package ID EnterpriseLibrary.SemanticLogging.Database); it will also pull in the core Semantic Logging package as a dependency.
Adding an EventSource class to your application:
Now you need to create a class that derives from EventSource, which I have done in the ‘LogWriter’ class below. I’ve also created a ‘Logger’ class to make it easier to deal with LogWriter; it takes care of some repetitive work for me. I can just pass a raw Exception to Logger, and it will use LogWriter to write it to the Traces table in my database. Nice and easy.

You can add this to your Lib folder in your UI (main process project):

using System;
using System.Diagnostics;
using System.Diagnostics.Tracing;

namespace MySolution.WebApi.Lib
{
    public class Logger
    {
        public static readonly Logger Log = new Logger();
        public void Error(Exception ex)
        {
            string message = ex.Message;
            string innerMessage = "";
            if (ex.InnerException != null)
                innerMessage = ex.InnerException.Message;

            StackTrace st = new StackTrace();
            string methName = st.GetFrame(1).GetMethod().Name;
            string stack = st.ToString();
            LogWriter.Log.Error(message, innerMessage, methName, stack);

        }
        public void Error(Exception ex, string addedMessage)
        {
            string message = addedMessage + " :: " + ex.Message;
            string innerMessage = "";
            if (ex.InnerException != null)
                innerMessage = ex.InnerException.Message;

            StackTrace st = new StackTrace();
            string methName = st.GetFrame(1).GetMethod().Name;
            string stack = st.ToString();
            LogWriter.Log.Error(message, innerMessage, methName, stack);

        }
        public void Error(string message)
        {
            StackTrace st = new StackTrace();
            string methName = st.GetFrame(1).GetMethod().Name;
            string stack = st.ToString();
            LogWriter.Log.Error(message, "", methName, stack);
        }

        public void Critical(string message)
        {
            StackTrace st = new StackTrace();
            string methName = st.GetFrame(1).GetMethod().Name;
            string stack = st.ToString();
            LogWriter.Log.Critical(message, methName, stack);
        }

        public void Warning(string message)
        {
            StackTrace st = new StackTrace();
            string methName = st.GetFrame(1).GetMethod().Name;
            string stack = st.ToString();
            LogWriter.Log.Warning(message, methName, stack);
        }
        public void Information(string message)
        {
            StackTrace st = new StackTrace();
            string methName = st.GetFrame(1).GetMethod().Name;
            string stack = st.ToString();
            LogWriter.Log.Information(message, methName, stack);
        }
    }

    public class LogWriter : EventSource
    {
        public static readonly LogWriter Log = new LogWriter();

        [Event(1000, Message = "{0}", Level = EventLevel.Critical)]
        public void Critical(string message, string method, string stack)
        {
            object[] parms = new object[] { message, method, stack };
            if (IsEnabled()) WriteEvent(1000, parms);
        }
        [Event(1001, Message = "{0}", Level = EventLevel.Error)]
        public void Error(string message, string innerExceptionMessage, string method, string stack)
        {
            object[] parms = new object[] { message, innerExceptionMessage, method, stack };
            if (IsEnabled()) WriteEvent(1001, parms);
        }

        [Event(1002, Message = "{0}", Level = EventLevel.Warning)]
        public void Warning(string message, string method, string stack)
        {
            object[] parms = new object[] { message, method, stack };
            if (IsEnabled()) WriteEvent(1002, parms);
        }

        [Event(1003, Message = "{0}", Level = EventLevel.Informational)]
        public void Information(string message, string method, string stack)
        {
            object[] parms = new object[] { message, method, stack };
            if (IsEnabled()) WriteEvent(1003, parms);
        }
    }
}
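
With those two classes in place, logging from anywhere in the main project is a one-liner. Here is a quick illustration (the class and the work inside the try block are placeholders, not part of the real project):

using System;
using MySolution.WebApi.Lib;

public class SampleUsage
{
    public void DoWork()
    {
        Logger.Log.Information("Starting DoWork.");
        try
        {
            // ... work that might throw ...
        }
        catch (Exception ex)
        {
            // The message, inner exception message, calling method name, and stack
            // all end up in the Traces table via LogWriter.
            Logger.Log.Error(ex, "DoWork failed");
            throw;
        }
    }
}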

Wiring your app to use SLAB:
Now that you have your EventSource classes set up, you need to wire them up when your application starts.
In Global.asax add the following ‘using’ statements:

using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;
using System.Diagnostics.Tracing;

and add the call to SetupSemanticLoggingApplicationBlock() (shown below) to your Application_Start function in Global.asax:

protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            GlobalConfiguration.Configure(WebApiConfig.Register);
            FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
            RouteConfig.RegisterRoutes(RouteTable.Routes);
            BundleConfig.RegisterBundles(BundleTable.Bundles);

            SetupSemanticLoggingApplicationBlock();

        }

Then add this function to your Global.asax:

protected void SetupSemanticLoggingApplicationBlock()
        {
            //EventTracing setup
            string logConnString =
                System.Configuration.ConfigurationManager.ConnectionStrings["LoggingConnString"].ToString();

            var sqlListener1 = SqlDatabaseLog.CreateListener("UI", logConnString);
            var sqlListener2 = SqlDatabaseLog.CreateListener("Infrastructure", logConnString);
            var sqlListener3 = SqlDatabaseLog.CreateListener("Service", logConnString);

            //get web.config value for logging level
            bool critical = System.Configuration.ConfigurationManager.AppSettings["Logging_LogCritical"].ToLower() ==
                           "true";
            bool error = System.Configuration.ConfigurationManager.AppSettings["Logging_LogError"].ToLower() ==
                           "true";
            bool warning = System.Configuration.ConfigurationManager.AppSettings["Logging_LogWarning"].ToLower() ==
                           "true";
            bool info = System.Configuration.ConfigurationManager.AppSettings["Logging_LogInformation"].ToLower() ==
                          "true";

            //Enable the level of logging based on settings in web.config
            if (critical)
            {
                sqlListener1.EnableEvents(LogWriter.Log, EventLevel.Critical);
                //uncomment if you add logging to your sub projects
                //sqlListener2.EnableEvents(MySolution.Infrastructure.InfrLogger.Log, EventLevel.Critical);
                //sqlListener3.EnableEvents(MySolution.Service.SvcLogger.Log, EventLevel.Critical);
            }
            if (error)
            {
                sqlListener1.EnableEvents(LogWriter.Log, EventLevel.Error);
                //uncomment if you add logging to your sub projects  
                //sqlListener2.EnableEvents(MySolution.Infrastructure.InfrLogger.Log, EventLevel.Error);
                //sqlListener3.EnableEvents(MySolution.Service.SvcLogger.Log, EventLevel.Error);
            }
            if (warning)
            {
                sqlListener1.EnableEvents(LogWriter.Log, EventLevel.Warning);
                //uncomment if you add logging to your sub projects                
                //sqlListener2.EnableEvents(MySolution.Infrastructure.InfrLogger.Log, EventLevel.Warning);
                //sqlListener3.EnableEvents(MySolution.Service.SvcLogger.Log, EventLevel.Warning);
            }
            if (info)
            {
                sqlListener1.EnableEvents(LogWriter.Log, EventLevel.Informational);
                //sqlListener2.EnableEvents(MySolution.Infrastructure.InfrLogger.Log, EventLevel.Informational);
                //sqlListener3.EnableEvents(MySolution.Service.SvcLogger.Log, EventLevel.Informational);
            }
        }

Adding EventSource classes in sub projects:
Any of the other projects referenced by this main project can have logging as well; just add the following file to the Lib folder (or any folder, really) of each of those projects. Make sure the class name is unique in each project (I name mine SvcLogger, InfrLogger, etc.), and give the events in each class unique event IDs (the ones above use 100x, these use 200x, and a third project’s would need to be 300x).

So here is the sample from my Service project logger. This is all I have to add to the project to do logging in it:

using System;
using System.Diagnostics;
using System.Diagnostics.Tracing;

namespace MySolution.Service
{
    public class SvcLogger : EventSource
    {
        public static readonly SvcLogger Log = new SvcLogger();

        public void Error(string message)
        {
            StackTrace st = new StackTrace();
            string methName = st.GetFrame(1).GetMethod().Name;
            string stack = st.ToString();
            DoError(message, methName, stack);
        }

        public void Critical(string message)
        {
            StackTrace st = new StackTrace();
            string methName = st.GetFrame(1).GetMethod().Name;
            string stack = st.ToString();
            DoCritical(message, methName, stack);
        }

        public void Warning(string message)
        {
            StackTrace st = new StackTrace();
            string methName = st.GetFrame(1).GetMethod().Name;
            string stack = st.ToString();
            DoWarning(message, methName, stack);
        }

        public void Information(string message)
        {
            StackTrace st = new StackTrace();
            string methName = st.GetFrame(1).GetMethod().Name;
            string stack = st.ToString();
            DoInformation(message, methName, stack);
        }

        [Event(2000, Message = "{0}", Level = EventLevel.Critical)]
        private void DoCritical(string message, string method, string stack)
        {
            object[] parms = new object[] { message, method, stack };
            if (IsEnabled()) WriteEvent(2000, parms);
        }
        [Event(2001, Message = "{0}", Level = EventLevel.Error)]
        private void DoError(string message, string method, string stack)
        {
            object[] parms = new object[] { message, method, stack };
            if (IsEnabled()) WriteEvent(2001, parms);
        }

        [Event(2002, Message = "{0}", Level = EventLevel.Warning)]
        private void DoWarning(string message, string method, string stack)
        {
            object[] parms = new object[] { message, method, stack };
            if (IsEnabled()) WriteEvent(2002, parms);
        }

        [Event(2003, Message = "{0}", Level = EventLevel.Informational)]
        private void DoInformation(string message, string method, string stack)
        {
            object[] parms = new object[] { message, method, stack };
            if (IsEnabled()) WriteEvent(2003, parms);
        }

    }
}

Set up your database:
You will need to add the Traces table to your database. You will find a script called ‘CreateSemanticLoggingDatabaseObjects.sql’ in your [Solution Folder]\packages\EnterpriseLibrary.SemanticLogging.Database.2.0.1406.1\scripts folder (the version number will match the package you installed). Run this script against the database you wish to record your logs in.

Set up your app to point to your database:
In the function we added to Global.asax, we specified our connection string as

 string logConnString =
                System.Configuration.ConfigurationManager.ConnectionStrings["LoggingConnString"].ToString();

Now you need to add that connection string to the connectionStrings section of your web.config. Add:

 
    <add name="LoggingConnString" connectionString="Data Source=(local);initial catalog=MyDatabase;Integrated Security=True" providerName="System.Data.SqlClient" />
   

A note about error handling:
In every project except the UI (top level), I normally want to log the error and then rethrow it up to the calling function. To make sure that I preserve the call stack history, I do it as follows:

            catch (Exception ex)
            {
                SvcLogger.Log.Error(ex.Message);
                throw;  //NOT throw ex; 
            }

instead of the following, because ‘throw ex;’ resets the stack trace and loses the existing stack information.

            catch (Exception ex)
            {
                SvcLogger.Log.Error(ex.Message);
                throw ex;  //this throws as if the error starts now.
            }

At the UI level I do the following, in order to make sure that sensitive info is never shown in the UI. The generic message isn’t very helpful when you are debugging, but it is also not helpful to a hacker trying to gain insight into your code.

            catch (Exception ex)
            {
                Logger.Log.Error(ex.Message);
                throw new Exception("Error deleting item. Review logs for info."); //don't pass sensitive info
            }

You will also need to set the logging level in your web.config app settings. If you enable any level, that level and all levels more severe will be logged. For instance, if I set ‘Warning’ to true, then Warning, Error, and Critical are all logged (as in the sample below), while Information-level items are not.

 <!--######### Semantic Logging settings #########-->
    <!-- If you enable any level of Logging, it will enable that level and any level more severe. Enabling 'Warning' would enable 'Critical' and 'Error' automatically. -->
    <add key="Logging_LogCritical" value="False" />
    <add key="Logging_LogError" value="False" />
    <add key="Logging_LogWarning" value="True" /> 
    <add key="Logging_LogInformation" value="False" />
    <!--######### end Semantic Logging settings #########-->

That should get you going with an easy-to-set-up, easy-to-manage logging system in your application. Enjoy.

Developing SSRS reports without buying Visual Studio

I have been trying to find a way to develop SSRS reports without requiring a Visual Studio license, because we have a group of people I work with who don’t need Visual Studio for anything else but have been trained to create SSRS reports and have SQL experience. I had expected that Visual Studio Express would most likely work, but I had no success with that.

In the last 2 weeks, Microsoft has released the new Visual Studio 2013 Community. This is a free version that is basically the equivalent of Visual Studio 2013 Professional. If you qualify for it, and most people probably do, then I highly suggest it. You can get it at http://www.visualstudio.com/en-us/downloads/download-visual-studio-vs#d-community. VS Express is being replaced by the Community edition. This will be a great option if you are an individual user, or in a business situation where there are 5 or fewer developers. The limitations set on businesses for using this product are:

  1. It can be used by up to 5 developers without restrictions if you have fewer than 250 PCs and less than $1M in annual revenue.
  2. It can be used by an unlimited number of users for classroom learning or open source development, even if you have more than 250 PCs or more than $1M in revenue.

Those restrictions make it impossible for us to use Visual Studio 2013 Community. If you are in a business that exceeds 250 PCs  or has more than $1M annual revenue and you still need a free development environment for SSRS development, I’ve found a solution by using Visual Studio 2013 Integrated Shell and installing BIDS.

This is completely free and easy to setup.

1) Download and install Visual Studio 2013 Isolated Shell (required before installing the Integrated Shell) – http://www.microsoft.com/en-us/download/details.aspx?id=40764

2) Download and install Visual Studio 2013 Integrated Shell – http://www.microsoft.com/en-us/download/details.aspx?id=40777

3) Download and install SSDT-BI (SQL Server Data Tools – Business Intelligence, the successor to BIDS) for VS 2013 – http://www.microsoft.com/en-us/download/details.aspx?id=42313

Cheers!

Hiding API Documentation in ASP.net

One really nice feature of ASP.Net Web API is the automated help-page documentation; however, you might not always want to show all of your controllers or actions there. If you don’t want a controller or action to show up, put this attribute on the controller or action:

[ApiExplorerSettings(IgnoreApi = true)]

You will also need to add the following to your using statements:

using System.Web.Http.Description;
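
For example (these controllers are hypothetical, just to show where the attribute can go):

using System.Web.Http;
using System.Web.Http.Description;

[ApiExplorerSettings(IgnoreApi = true)]      // hides every action on this controller from the help pages
public class InternalToolsController : ApiController
{
    public IHttpActionResult Get()
    {
        return Ok();
    }
}

public class OrdersController : ApiController
{
    [ApiExplorerSettings(IgnoreApi = true)]  // hides only this action
    public IHttpActionResult Get(int id)
    {
        return Ok();
    }
}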

Lots of .Net news last week

VS 2013 Update 4 was released (mine is installing right now), so I can’t wait to get at it.

VS 2015 Preview was released (previously known as Visual Studio ‘14’)

NEW: Visual Studio Community was released – This one seems to be huge! – This is basically VS 2013 Professional with Update 4, and it is absolutely free to individual developers. For developers in a small or medium business it is completely free for up to 5 developers. For large businesses it can only be used for classroom learning or open source development. This is great news for small businesses and individual developers. (http://channel9.msdn.com/coding4fun/blog/Visual-Studio-2013-Community-Professional-development-for-free)

New tools for TFS – Agile team tool improvements

New video announcement of this release: http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Visual-Studio-2013-Update-4

How to keep ASP.Net sites from taking so long to load

If you’ve deployed a .Net website, you might find that when you visit your website it will take about 20 seconds to load, which seems more like minutes as you sit there waiting and nothing seems to be happening. This is common with .Net sites and IIS because IIS will unload any applications that haven’t been used recently. By default if your site has not been accessed in the last 20 minutes, then IIS can unload it from memory to free up space for other operations.

To prevent this, I like to set up a script on the server and a scheduled task that runs it every few minutes to load a page from each of my websites. This doesn’t do anything major, but it causes IIS to reset its counter for how long it has been since the website was last accessed. I run the script every 5 minutes (it could be every 15, or any value under 20, but I like 5 since my server doesn’t have much else to do except make sure these websites are available).

In this sample we’ll assume I have 4 websites in IIS that I want to keep active. You can modify this for more or fewer sites as needed:

First create a vbs file and put it somewhere on your computer. I’ll put it in C:/tasks/

' List each website you want to keep warm
sites = Array("http://www.Site1.com", _
              "http://www.Site2.com", _
              "http://www.site3.com", _
              "http://www.site4.com")

' Request one page from each site so IIS resets its idle timer
For Each Url In sites
    Set xml = CreateObject("Microsoft.XMLHTTP")
    xml.Open "GET", Url, False
    xml.Send
    Set xml = Nothing
Next

Save this file as C:/tasks/siteRefresh.vbs

Next create another file in the same folder with this content:

siteRefresh.vbs

Yes, that is all. It just calls your .vbs script, so you only need the name of the first file, as long as both files are in the same folder. Save this one as C:/tasks/siteRefresh.bat

Now open the Windows Task Scheduler (I’m using Windows Server) by going to Start / Control Panel / Administrative Tools / Task Scheduler. Click ‘Task Scheduler Library’ in the tree on the left side of the window, then click ‘Create Task’ in the Actions pane on the right.
Set the name to “Site Refresher” and make sure ‘Run whether user is logged on or not’ is selected.
Click the ‘Triggers’ tab, and click ‘New’ to create a trigger.
Set your trigger to Daily, then at the bottom of the screen check the box for ‘Repeat task every’, set the interval to 5 minutes, and set the duration to ‘1 day’. Click OK.
Open the ‘Actions’ tab, click ‘New’, and select ‘Start a program’. Use the Browse button to browse to your .bat file; I’m selecting ‘C:/tasks/siteRefresh.bat’. Then click ‘OK’.

You are finished. As you close out of the window you will be asked to enter your user name and password.

Now your sites will get loaded every 5 minutes and those long 20 second load times should not occur anymore.

Using OAuth with MVC5 WebApi

For the last couple of weeks, I’ve been involved with a project which includes several sub-projects that will need to communicate with each other, and with a credit card processor (Authorize.net). In addition, other applications will call into this one to handle payment processing business logic. In developing this project, I’m focused on making sure the Web API is more RESTful than the RPC-style Web API projects I’ve worked on previously. Also, since this API will handle customer and payment information, it needs to be very secure. My research and work has been interesting, and since good documentation and explanations have been hard to find, I’ll lay out what I’ve found to be most helpful.

My goal has been to utilize the OWIN and OAuth utilities already contained in .Net MVC5 and then to layer best practices on top of that. With that in mind, I found a great video describing OAuth at http://vimeo.com/user22258446/review/79095048/9a4d62f61c . This article builds on the information I learned in that video by reviewing what I felt was most important, and filling in a few gaps.

There are a  few points to keep in mind about OAuth:

  1. There are 3 entities involved in the OAuth process:
    1. The ‘Client’ (a browser, a server application, an iPhone app, etc.)
    2. The ‘Authentication Server’ (verifies the identity (username and password) and delivers a token to be used).
    3. The ‘Resource Server’ (the application that needs to be secured and does the work we are interested in).
  2. The ‘Client’ is interested in communicating with the ‘Resource Server’.
  3. The ‘Resource Server’ allows the ‘Authentication Server’ to be responsible for authentication (verifying a user is who they say they are).
  4. The ‘Authentication Server’ might be Google, Facebook, Twitter, your employer’s service, or it may be included in the ‘Resource Server’ project.
    1. In an ASP.Net MVC5 project set up for ‘Individual Accounts’, your application will by default act as both ‘Authentication Server’ and ‘Resource Server’, but it is important to understand that these are still 2 separate roles. If you do that, you could change the ‘Authentication Server’ at any time to be some other service, or you could add additional ‘Authentication Servers’ (e.g. an app could allow people to log in using AD, Google, and Facebook identities).
  5. ‘Authentication Servers’ create access tokens that are used to prove identity to a ‘Resource Server’. The Client gives a username and password to the ‘Authentication Server’, and the ‘Authentication Server’ returns a token to the client. The token has an expiration date, such as 14 days after the day it was issued. The client can save the token and use it every time it accesses the ‘Resource Server’ until the token becomes invalid or expires. Then the Client repeats the process to get a new token, and uses that one until it expires.
  6. The token becomes proof that you are an authenticated user so it is important that tokens are protected as if they are a username and password.  Use SSL (!!!) when transmitting the token from the Client to the Resource Server on every single trip!
  7. Use SSL on every single trip!
  8. If you are wondering why passing a token beats passing a username and password on each trip, it is because the token has an expiration and can be voided by the ‘Authentication Server’. So if someone gets your token, it is only valid for a limited time. Also, if you reuse the same password at other services (which you don’t, of course), a stolen token doesn’t let them try to log in as you there.

 

Let’s look at a few images to get a visual concept of how Clients (human or machine) authenticate, and then communicate with the ‘Resource Server’.

 

Ideally, you would completely separate the Authentication Server from the Resource Server. The resource application would still be responsible for authorization (what you are allowed to access, what roles you belong to, etc.). This arrangement is common if you allow people to log in using their Google account or Facebook account. The process looks like this:

3 Party Authentication Process

 

We are going to combine the Authentication Server and Resource Server into the same application. The same processes occur but we only need a single website (or web service) to act as both Authentication Server and Resource Server.  It will look like this:

Two party authentication

You can start your project like this, and later you can add more Authentication Servers; for instance, you could add Google and Facebook authentication. I’ll save that for another article.

For this sample, we’ll make it a Payment Processor application. The scenario is that this application will receive calls from other applications. One of those applications is a website that customers access to manage their accounts, and the other is an application that customer service employees use to manage customer accounts. Both applications will need to be able to perform CRUD operations on:

  1. Customers
  2. PaymentProfiles (credit cards or bank accounts for customers)
  3. Transactions (payments, refunds, voids)

This application will interact  with a local database, and with Authorize.Net on the back end. I’m not including the Authorize.Net code in this article, but may write another article later for that.  Let’s start by creating our app.

Create a new ASP.Net Web Application.

Creating WebApi 1

Select ASP.Net Web Application, give it a name and a folder, and click OK.

Creating webApi 2

 

After you click OK, select Web API, then click the Change Authentication button. Make sure that you have it set to ‘Individual Accounts’.

Individual Accounts setting

Click Ok on both of those windows to create your new Web API project.

 

Now your project has been created, and if you click debug you can run it and it will display the generic looking home page. The application has a Home page and a Help page, which describes the available API functions.

New WebApi

Customizing your application:

You could leave the default connection string in your web.config, and use a SQL Express database if you like, but I’m going to create a new database on my local SQL 2008 server and point the connection string to that. I opened SQL 2008’s SSMS interface and created a new, empty database named ‘PaymentProcAPI’. Then I edited my connection string as follows:

Connection String

 

 

Adding A User Account to the database

Let’s create a little throw-away website just to add a user account to our database. I’m assuming you are familiar with creating a website in VS 2013, so I’ll run through this quickly.

  1. Right-click your solution, click ‘Add’ > ‘New Project’, and then select ‘ASP.Net Web Application’. Enter a name of ‘UserRegistrationSite’. Click OK.
  2. Select MVC, and click Change Authentication. Select ‘Individual Accounts’ and click OK. Click OK again to close the window. Your new application is created.
  3. Copy the connection string from the web.config in your PaymentProcessorAPI project and replace the connection string in this new website with it. Both sites must be hitting the same database. Run this application and click the Login button on the header.
  4. Click the ‘Register’ link
  5. Add a user account.
  6. Once your user account is created you are finished with this website for this exercise.

For my user I’ve created Marctalcott@marctalcott.com as my user account; you will have created a user with the name of your choice. Now if I look in the database that I set up, I see that a set of tables has been added, and there is a row in the AspNetUsers table for my new user. The user has a GUID Id, Email, UserName, PasswordHash, and other fields. You can check your database to see that this has all worked properly.

 

UserTablesAddedToDb

 

Then I added some model classes in my Models folder. You can go ahead and add these to your Models folder:

    public class CustomerProfile
    {
        public long Id { get; set; }
        public string First { get; set; }
        public string Last { get; set; }
        public string Address { get; set; }
        public string City { get; set; }
        public string State { get; set; }
        public string Email { get; set; }
    }

    public class PaymentProfile
    {
        public long Id { get; set; }
        public long CustomerId { get; set; }
        public string PaymentType { get; set; } //Bank or Card
        public string CardType { get; set; }    //Visa, MC, Diners, etc
        public string LastFour { get; set; }
        public DateTime AddedDate { get; set; }
        public DateTime ExpirationDate { get; set; }
    }

    public class Transaction
    {
        public long Id { get; set; }
        public long CustomerId { get; set; }
        public long PaymentProfileId { get; set; }
        public DateTime ProcessingDate {get;set;}
        public decimal Amount { get; set; }
        public bool Accepted { get; set; }    
    }

Rather than writing code in your UI (controller classes) that interacts directly with your database or with Authorize.Net and the database, you should separate that work into other classes. I’ve created a ‘Services’ folder and 3 service classes in my project. For this sample we’ll write out the Services/CustomerProfileService class with basic CRUD functions.
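
Since the service body itself isn’t the focus here, below is a minimal in-memory sketch of CustomerProfileService that matches the calls the controller below makes (Get, Get(id), Insert, Update, Delete). It is only a stand-in so the sample compiles and runs; a real implementation would talk to the PaymentProcAPI database and Authorize.Net.

using System.Collections.Generic;
using System.Linq;
using PaymentProcessorApi.Models;

namespace PaymentProcessorApi.Services
{
    public class CustomerProfileService
    {
        // In-memory store (not thread-safe); replace with real data access.
        private static readonly List<CustomerProfile> _customers = new List<CustomerProfile>();

        public List<CustomerProfile> Get()
        {
            return _customers.ToList();
        }

        public CustomerProfile Get(long id)
        {
            return _customers.FirstOrDefault(c => c.Id == id);
        }

        public void Insert(CustomerProfile customer)
        {
            customer.Id = _customers.Count == 0 ? 1 : _customers.Max(c => c.Id) + 1;
            _customers.Add(customer);
        }

        public void Update(CustomerProfile customer)
        {
            CustomerProfile existing = Get(customer.Id);
            if (existing == null) throw new KeyNotFoundException("Customer not found.");
            existing.First = customer.First;
            existing.Last = customer.Last;
            existing.Address = customer.Address;
            existing.City = customer.City;
            existing.State = customer.State;
            existing.Email = customer.Email;
        }

        public void Delete(long id)
        {
            _customers.RemoveAll(c => c.Id == id);
        }
    }
}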

Next I added the following empty Web API controllers to my Controllers folder.

  • PaymentProfileController
  • CustomerProfileController
  • TransactionController

To add the controllers, right-click your Controllers folder, click ‘Add’ > ‘Controller…’, select ‘Web API 2 Controller – Empty’, and click ‘Add’.

Now let’s create some REST functions in our API controllers. Here is the code for the CustomerProfileController:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Threading.Tasks;
using PaymentProcessorApi.Models;
using PaymentProcessorApi.Services;

namespace PaymentProcessorApi.Controllers
{
     
    public class CustomerProfileController : ApiController
    {
        private CustomerProfileService _svc;
        public CustomerProfileController()
        {
            //in real app I would use IOC injection
            _svc = new CustomerProfileService();
        }

        public List<CustomerProfile> Get()
        {
            return _svc.Get();
        }

        public CustomerProfile Get(long id)
        {
            CustomerProfile c = _svc.Get(id);
            if (c == null)
            {
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }
            return c;
        }

        [HttpPost]
        public async Task<IHttpActionResult> Post(CustomerProfile customer)
        {
            try
            {
                _svc.Insert(customer);
            }
            catch (Exception ex)
            {
                 return BadRequest("Failed to insert.");
            }
            return Ok();
        }

        [HttpPut]
        public async Task<IHttpActionResult> Put(CustomerProfile customer)
        {
            try
            {
                _svc.Update(customer);
            }
            catch (Exception ex)
            {
                return BadRequest("Failed to update.");
            }
            return Ok();
        }

        [HttpDelete]
        public async Task<IHttpActionResult> Delete(CustomerProfile customer)
        {
            try
            {
                _svc.Delete(customer.Id);
            }
            catch (Exception ex)
            {
                return BadRequest("Failed to delete.");
            }
            return Ok();
        }
    }
}

If you run the application now in debug, you can navigate to your CustomerProfile controller at http://localhost:[your port here]/api/CustomerProfile. Since it is a GET, it performs the default GET action in your controller, which gets all of the customers and displays them in your browser. If you add a 1, 2, or 3 after the last slash in the URL, it will display the customer profile for that customer.

Get with a browser

To test the Web API’s other functionality from a client, I’ve set up Postman in Chrome. Postman allows us to simulate a client that makes requests, and we can pass values in the body or headers. If you don’t have Postman, you can Google it and install it; it is free and is a Chrome plug-in. Once you have it installed:

  1. Run your application locally in debug.
  2. Open a new tab in your browser while it is running
  3. Click the Apps button near the top of your browser window.
  4. Select Postman – REST Client

AppsButton

In this screenshot, you can see that I’m calling the Get action of the CustomerProfileController, which returns all of the customers. Keep in mind we are not securing this controller at all at this time.

Get_postman

So let’s secure the controller by adding the [Authorize] attribute to the CustomerProfileController.

Require Authorization on the controller
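
Since the screenshot isn’t reproduced here, the change is just the attribute on the class declaration of the controller we wrote earlier (the actions themselves don’t change):

using System.Web.Http;

namespace PaymentProcessorApi.Controllers
{
    [Authorize]   // every action on this controller now requires an authenticated caller
    public class CustomerProfileController : ApiController
    {
        // ... the actions from the listing above stay exactly as they are ...
    }
}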

 

Now if you run the same GET request, it will fail with a 401 Unauthorized error.

 

Get Fails Without Token

 

We now must log in and get a token; then we can pass that token along with our requests. To log in, set the URL to the /Token endpoint (http://localhost:[your port here]/Token), change the request type to ‘POST’, and set the following body parameters:

grant_type = password, username = [your user name], password = [your password], and click ‘Send’. If everything works, you will get an access token back. In the image below, I’ve received a bearer token that is valid for 14 days.

Postman To Get Token

Now we’re getting somewhere. We have a token, and our controller has been restricted to authorized users. I’ve logged in, so I’m authorized; I just need to make requests that include the bearer token so I can prove it. Here is how we do that.

In the Header we add:

Authorization = Bearer [paste in your token from previous request]

Now set the URL to the /api/CustomerProfile action, set the request type to GET, and click the Send button. It works!!! We’ve proved our identity because we used our bearer token, so the response is successful and contains the customer profiles.

Getting With Token Works
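
For reference, the same two steps can be done from C# code instead of Postman. This is only a sketch: the port, user name, and password are placeholders, and it assumes Json.NET is available on the client for parsing the token response.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public class TokenClientSample
{
    public static async Task RunAsync()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:12345/") })
        {
            // Step 1: POST grant_type=password to the /Token endpoint to get a bearer token.
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "password" },
                { "username", "Marctalcott@marctalcott.com" },
                { "password", "your-password-here" }
            });
            HttpResponseMessage tokenResponse = await client.PostAsync("Token", form);
            tokenResponse.EnsureSuccessStatusCode();
            string json = await tokenResponse.Content.ReadAsStringAsync();
            string accessToken = (string)JObject.Parse(json)["access_token"];

            // Step 2: call the secured controller with an 'Authorization: Bearer <token>' header.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);
            HttpResponseMessage customers = await client.GetAsync("api/CustomerProfile");
            Console.WriteLine(await customers.Content.ReadAsStringAsync());
        }
    }
}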

 

 

Here is a PUT call to update a customer profile. It includes the bearer token in the header just as the previous request did, so it is allowed to update the customer.

Put with token

A few details about how things work:

When our application starts, the Startup.cs file runs. You can find it in your application’s root. It is a partial class, and the other part is in the App_Start folder’s Startup.Auth.cs file. Check out the ConfigureAuth(IAppBuilder app) function to see how the tokens are set up.

public void ConfigureAuth(IAppBuilder app)
        {
            // Configure the db context and user manager to use a single instance per request
            app.CreatePerOwinContext(ApplicationDbContext.Create);
            app.CreatePerOwinContext(ApplicationUserManager.Create);

            // Enable the application to use a cookie to store information for the signed in user
            // and to use a cookie to temporarily store information about a user logging in with a third party login provider
            app.UseCookieAuthentication(new CookieAuthenticationOptions());
            app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);

            // Configure the application for OAuth based flow
            PublicClientId = "self";
            OAuthOptions = new OAuthAuthorizationServerOptions
            {
                TokenEndpointPath = new PathString("/Token"),
                Provider = new ApplicationOAuthProvider(PublicClientId),
                AuthorizeEndpointPath = new PathString("/api/Account/ExternalLogin"),
                AccessTokenExpireTimeSpan = TimeSpan.FromDays(14),
                AllowInsecureHttp = true
            };

            // Enable the application to use bearer tokens to authenticate users
            app.UseOAuthBearerTokens(OAuthOptions);

            // Uncomment the following lines to enable logging in with third party login providers
            //app.UseMicrosoftAccountAuthentication(
            //    clientId: "",
            //    clientSecret: "");

            //app.UseTwitterAuthentication(
            //    consumerKey: "",
            //    consumerSecret: "");

            //app.UseFacebookAuthentication(
            //    appId: "",
            //    appSecret: "");

            //app.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions()
            //{
            //    ClientId = "",
            //    ClientSecret = ""
            //});
        }

Notice the ‘AllowInsecureHttp = true’ line. When you publish your application, you need to set that to false in order to require SSL for all exchanges. Without SSL on your site, your tokens are sent as plain text; they can be stolen and used by impostors, which is almost the equivalent of using a login page that doesn’t have SSL. You can also set the expiration length for tokens; by default it is set to 14 days.

We’ve successfully created a new web api application that uses OAuth. We’ve created a user account, and logged in as our user. We’ve then accessed our secured controller actions by passing a bearer token in the header of our requests.

 

If you would like the sample application source code, it is available here: OAuthSampleApplication

 

Zengenda – Task Management

If you ever find yourself wondering ‘What am I supposed to work on now?’, and it isn’t because you’ve completed everything you aspire to do, then you understand the reason for Zengenda. Zengenda is my personal passion project. Most of us have many tasks and projects that we intend to do. Some are simple, some complex; some can only be done after something else has been completed, or when we are at a particular location, or require a certain person to assist us. If a task is simple and we can do it now (and we aren’t procrastinators), then we can complete it and never worry about it again, but I’ve found that most of mine either involve many steps or must be delayed because of a dependence on something I can’t control at this minute.

Post it notes

My mother makes lists, and has notepads and post-its all over the place that she uses to keep on top of to-do lists. I’ve tried keeping lists the way that she does, but it never worked out for me. I would be at the grocery store and my grocery list was at home on the kitchen counter, or I would be standing in line at the book store and have a great idea about how to fix a piece of code I had been working on, but have nothing to write a note on. What I need is an app that I can reach from my phone or tablet when I’m out so I can make a quick note, and that I can access from my laptop when I need to work more extensively on my project management.

Where I work, we have an extensive Incident Management system in which customers (generally co-workers) can enter requests, which are queued up in an elaborate ‘to-do’ list for me. This is great at work, but it still leaves the rest of my life without a solution, and since I’m usually working on several ‘passion projects’, that leaves a lot of need for organization.

So that explains the pain points that led me to create Zengenda. I have downloaded and used several task management systems (free and paid), but all are lacking in what seem to be basic requirements.

So what are the basic requirements?

  1. Enter tasks and ideas very easily and quickly when they are thought of.
  2. Have all new tasks go to an Inbox when entered.
  3. Designate entries as ‘Tasks’, ‘RFCs’, ‘Incidents’, ‘Ideas’, or any custom type.
  4. Organize items into folders (Work, Personal, Killer App Project, etc.)
  5. Create hierarchies of lists. I need to add subtasks if it is a complex task.
  6. Set dates for tasks:
    • Start date
    • Due date
    • Review date (a date to review and think about this task again)
  7. Set Status (Active, On Hold, Completed, Cancelled, etc.)
  8. Set Priority (High, Med, Low, etc)
  9. Set Context (Mall, Grocery store, Phone, Bob (what resource I need to complete it))
  10. View tasks grouped and ordered:
    • By parent-child relationships (hierarchical)
    • By Start date (see what I’m supposed to start in the coming days)
    • By Due date (see what I need to finish in the coming days)
    • By Review date (every day I can quickly review a few items that had Today as the review date).
    • By updated or completed (a list of all the items I’ve worked on in the past week, which is useful in meetings where I need to keep my boss updated on what I’ve been doing).
  11. Add unlimited notes to tasks.
  12. Add unlimited attachments to tasks (PDF’s, images, etc)

Those are the basics, and after working on implementing them, more requirements are becoming evident. This project has been exciting because it seems to be such a common and basic need. I’ve been using Zengenda as I build it, in order to keep up with Bug Lists, Features to Add, other personal projects like moving, remodeling projects, and work projects.

I hope that once I make Zengenda available, you will find it useful for you, and I look forward to hearing your feedback and ideas for improving and simplifying Zengenda. Thank you!