Friday, January 29, 2010

Agile ADO.Net Persistence Layer Download is on CodePlex

I’ve had quite a few requests for sample code, so I pulled the AAPL code out of my app, put it in a stripped-down ASP.Net MVC sample app, and published it on CodePlex.  You will need Visual Studio 2008, ASP.Net MVC Framework version 1.0, and Sql Server 2008 to run the sample app.  A db backup file is included in the zip and in the source code. You can download the code at:

http://aapl.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=39653

Monday, January 25, 2010

Agile ADO.Net Persistence Layer Part 4: Writing data access for new data shapes

I recently decided that I needed to take a fresh look at how to build a persistence architecture that would provide the flexibility of an ORM, embrace change instead of resisting it (making maintenance easier), and use ADO.Net.  I started building from the ground up, threw away any best practices that resulted in friction, and Agile ADO.Net Persistence Layer is the result.  Part 1 in the series can be found here: Agile ADO.Net Persistence Layer Overview.

What’s a new data shape?

Has this ever happened to you?  You have a persistence architecture that works beautifully.  It’s effortless to get and save entities; everything works great.  Then you get a requirement for a search results page that will display data from your entity.  Fine.  But then they also want the grid to display fields from another entity entirely.  It’s not even in the same aggregate!  You find yourself thinking things like “what a stupid requirement, why would anyone want to do that?”  But the real problem isn’t with the requirement.  The problem is that your beautiful architecture is inflexible. It works great as long as you’re dealing with nicely segmented entities, but as soon as you need to handle data that is a composite of fields from two different entities, things fall apart.  The worst part of it is, in the back of your mind you’re thinking “it would be so easy to just write this in TSQL”.

Another scenario: you once again have your beautiful architecture.  This time you need to add a couple of fields to an entity.  It’s a pretty simple change, but you’re dreading it because you know that in order to make this simple change, first you have to modify all of the CRUD sprocs, then you need to modify the method signatures on every data access method for this entity in the DAL, then you need to modify the parser (or data mapper methods), then you need to modify the BAL methods that call the modified DAL methods, then you need to modify the entity class itself.  Once again, the architecture is inflexible.  Changes are painful.  I call this friction.

Changes like this happen all the time.  The majority of our time as developers is spent coding changes to existing systems.  Perversely, we usually design architectures to be really easy to use for green field projects, but really hard to use when we need to make changes to those same systems. Making it easy to deal with change was my main objective when coming up with this architecture.  In this post I’m going to go over solutions to the two scenarios above using Agile ADO.Net Persistence Layer.

Scenario 1: Adding fields to an existing entity

Just to refresh your memory, our solution looks like the screenshot below. Common contains a DataShapes folder that holds the class definition for every DTO (Data Transfer Object) used in our BAL.  For this scenario we’re going to be working with the Category DTO.  Our BAL contains a Services folder with a service class that provides business logic and persistence logic for each aggregate (that’s service in the DDD sense, not a web service).  We’ll be working with the CategoryService.

[Screenshot: solution structure. The Common project’s DataShapes folder holds the DTO classes, and the BAL project’s Services folder holds the service classes.]

So, our Category data shape is a straight DTO. It has no methods, just properties.  It looks like this.

public class Category
{
    public Guid CategoryGuid { get; set; }
    public string CategoryName { get; set; }
    public string CategoryKey { get; set; }

    public Category()
    {
        CategoryGuid = NullValues.NullGuid;
        CategoryName = NullValues.NullString;
        CategoryKey = NullValues.NullString;
    }
}

Our CategoryService class contains business logic and data access methods.  Each data access method does three things: it defines a query, it selects a data shape to return, and it passes both the query and the data shape to our DAO (Data Access Object), which executes the query and maps the results to the data shape.  Our data access methods in the CategoryService look like this.  You can see that both methods return a List<Category>.

public List<Category> GetAllCategories()
{
    string query = @"SELECT *
                    FROM Category
                    ORDER BY CategoryName";
    SqlDao dao = SharedSqlDao;
    SqlCommand command = dao.GetSqlCommand(query);
    return dao.GetList<Category>(command);
}

public List<Category> GetAllCategoriesInUse()
{
    string query = @"SELECT DISTINCT c.*
                    FROM Category c
                    JOIN PostCategory p ON p.CategoryGuid = c.CategoryGuid
                    ORDER BY CategoryName";
    SqlDao dao = SharedSqlDao;
    SqlCommand command = dao.GetSqlCommand(query);
    return dao.GetList<Category>(command);
}

Now for our simple change.  We forgot to add a CreatedUtc field to our Category entity to store the date when a category was created.  Crap.  We already have a bunch of code written that uses these classes.  How hard is it going to be to make this change without breaking any of our data access methods and UI code?  Well, the first step is obvious: we need to add CreatedUtc to the database table.  So we do that.  Now for the tough part.  Do we need to modify any sprocs? No.  Do we need to modify any DAL methods? No.  How about the data access methods in our service class, we need to change those, right? Nope. The queries defined in the data access methods use “SELECT *” so they’ll pick up any new fields added to the Category table.  How about the DAO, surely we need to change some mapping code there?  Nope, it uses a little reflection to automatically map query results to a data shape based on field name.  As long as we use CreatedUtc for both our database column name and our Category DTO field name, no code changes are needed.  So what do we need to do?? Just one thing: we need to add CreatedUtc to our Category data shape like this.

public class Category
{
    public Guid CategoryGuid { get; set; }
    public DateTime CreatedUtc { get; set; }
    public string CategoryName { get; set; }
    public string CategoryKey { get; set; }

    public Category()
    {
        CategoryGuid = NullValues.NullGuid;
        CreatedUtc = NullValues.NullDateTime;
        CategoryName = NullValues.NullString;
        CategoryKey = NullValues.NullString;
    }
}

Done.  You have to admit, even for a simple change, that was pretty easy.  Just add the new field to the Category DTO, add the new column to the table, and the persistence layer handles the rest.  Now I know some of you are thinking “But what if I don’t want to use the same names for my columns and DTO fields” or “No way would I ever use reflection in my DAL, it’s too slow”.  Those are valid concerns, and I have an easy way to tune the DAO to address both of those issues, but the nice part is that if you’re willing to work within these constraints, your development can go quickly and with very little friction.
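The mapping code itself ships with the AAPL download and gets its own post later in the series, but to make the idea concrete, here is a rough sketch of the kind of name-based reflection mapping described above. Treat it as an illustration under stated assumptions (a connectionString field on the DAO, plus the usual System.Data.SqlClient and System.Reflection namespaces), not the exact AAPL implementation.

// Illustrative sketch only, not the exact AAPL code: map each row to a T by matching
// column names to property names. The real DAO adds null handling, caching, and tuning options.
public List<T> GetList<T>(SqlCommand command) where T : new()
{
    List<T> list = new List<T>();
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        command.Connection = connection;
        connection.Open();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                T item = new T();
                for (int i = 0; i < reader.FieldCount; i++)
                {
                    // Only copy columns whose names match a property on the DTO; extra columns are ignored.
                    PropertyInfo property = typeof(T).GetProperty(reader.GetName(i));
                    if (property != null && !reader.IsDBNull(i))
                    {
                        property.SetValue(item, reader.GetValue(i), null);
                    }
                }
                list.Add(item);
            }
        }
    }
    return list;
}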

Scenario 2: Adding a new data shape

Now let’s look at the case where we need a new composite data shape that consists of data from two different aggregates.  Our site has Categories, Blogs, and BlogPosts. The customer tells us that we need to display a list of the most popular categories along with a count of how many BlogPosts are in each Category. This doesn’t really fit with how we did our initial modeling for this project.  A BlogPost contains a collection of Categories, not the other way around.  Still, the data model will support it; we just have a simple join table between the Category table and the BlogPost table.  There’s no reason we can’t write a query that uses the relationship in the other direction.

[Diagram: data model showing the Category and BlogPost tables linked by the PostCategory join table.]

So, our solution is to add a new data shape called CategoryWithPostCount that will contain the category data plus a count of the blog posts that use that category.  Because we’re using an architecture that anticipates and embraces change, adding this new data shape will be a simple two-step process.  First, we create the CategoryWithPostCount DTO and add it to our Common/DataShapes folder.

public class CategoryWithPostCount
{
    public Guid CategoryGuid { get; set; }
    public string CategoryName { get; set; }
    public string CategoryKey { get; set; }
    public int PostCount { get; set; }

    public CategoryWithPostCount()
    {
        CategoryGuid = NullValues.NullGuid;
        CategoryName = NullValues.NullString;
        CategoryKey = NullValues.NullString;
        PostCount = NullValues.NullInt;
    }
}

Second, we open up our CategoryService class and add a new GetTopCategoryList data access method.  Remember that our framework does most of the heavy lifting for us.  All we need to do is define the query, pick a data shape, and then pass both to the DAO which will automatically map them by field name.  Writing the method is easy.  We fire up Sql Server Management Studio, write and test our query, copy one of the other data access methods, and paste our new query into it.  Here’s the result.

public List<CategoryWithPostCount> GetTopCategoryList(int pageSize)
{
    string query = string.Format(
                    @"SELECT TOP {0} c.*, q1.PostCount
                    FROM Category c
                    JOIN (
                        SELECT c.CategoryGuid, COUNT(pc.PostGuid) AS PostCount
                        FROM PostCategory pc
                        JOIN Category c ON c.CategoryGuid = pc.CategoryGuid
                        GROUP BY c.CategoryGuid
                        ) q1 ON q1.CategoryGuid = c.CategoryGuid
                    ORDER BY q1.PostCount DESC
                    ", pageSize);
    SqlDao dao = SharedSqlDao;
    SqlCommand command = dao.GetSqlCommand(query);
    return dao.GetList<CategoryWithPostCount>(command);
}

That’s it, we’re done.  A brand new data shape, complete with data access method, ready to use in minutes.  When it’s this easy to add new data shapes, it really frees you up to work in a much more agile way.  You don’t need to worry about getting entity classes designed exactly right the first time, or about missing pieces of data like a count or a created date.  If you do miss something, the architecture is designed to make it painless to add that stuff as it’s needed.  And none of this is hard, none of it is magic; we’ve just made some decisions and accepted some limitations (like requiring DTO field names and table column names to match) that allow us to take the friction out of design changes.

Next time we’ll get into the DAO and the associated DataMapper classes.

Tuesday, January 19, 2010

Agile ADO.Net Persistence Layer: Part 3 Service Class Single<DTO> Data Access Method

When I say data access methods, I’m talking about the methods my UI is going to call whenever it needs to get data.  When my Posts controller needs a list of BlogPosts to display, it’s going to call a BAL method like GetAllBlogPosts() or GetAllBlogPostsForCategory().  Last time I mentioned (over and over) that I like to keep things simple.  When I need to get or save data, I don’t want to have to search through 3 different classes just to find the one with the method I need.  Instead, I’m putting all my persistence logic for a given aggregate in just one place, a service class.  This is not a web service.  I’m using service in the Domain Driven Design sense here. That means I have a BlogService class that is my one stop shop for all persistence that has to do with Blogs, BlogPosts, SubmittedBlogUrls, and anything else that falls within the Blog aggregate. Here is what my BlogService class looks like.  You can see that it’s mostly “Get” data access methods.

[Screenshot: the BlogService class, which consists mostly of “Get” data access methods for the Blog aggregate.]

What’s an Aggregate?

I keep using the word aggregate.  If you’re not familiar with the term, it just means a group of entities that all share the same persistence class (whether that’s a repository, a service, or something else).  This is a key concept in Domain Driven Design. If you want to know more, I’d recommend picking up Eric Evans’ book or Jimmy Nilsson’s book on DDD.  For now, all you need to know is that a BlogPost can never exist without a Blog, so there’s no point in BlogPost having its own persistence class.  In fact, if we do break BlogPost out into its own persistence class, it will lead to problems down the road due to BlogPost’s dependency on Blog.  What’s the solution?  We put data access methods for both Blog and BlogPost in the same persistence class and call it an aggregate.  That is why BlogService has methods for both Blog and BlogPost entities.
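In case the screenshot above doesn’t come through, here’s a rough, hypothetical outline of the kind of method list I mean. The exact names live in the screenshot and the download; the stubs below are just illustrative, showing Blog and BlogPost data access living side by side in one class.

// Hypothetical outline only; the real method list is in the BlogService screenshot above.
// The point: Blog and BlogPost belong to the same aggregate, so their data access lives in one class.
public class BlogService
{
    // Blog entity
    public Blog GetBlog(Guid blogGuid) { throw new NotImplementedException(); }
    public void Save(Blog blog) { throw new NotImplementedException(); }

    // BlogPost entity, in the same service, because a BlogPost never exists without a Blog
    public BlogPost GetBlogPost(Guid postGuid) { throw new NotImplementedException(); }
    public DataPage<BlogPost> GetPageOfBlogPosts(int pageSize, int pageIndex, BlogPostSortOption sortBy) { throw new NotImplementedException(); }
    public void Save(BlogPost newPost) { throw new NotImplementedException(); }
}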

What type of data will data access methods return?

We covered this last post, but to recap: all data will be returned as a Data Transfer Object (DTO).  The DTOs are all defined in our Common assembly in the DataShapes folder.  Our BAL will return data in one of the following four formats.

  • a single DTO
  • a List<DTO>
  • a DataPage<DTO>
  • a string value

For more see last week’s post Agile ADO.Net Persistence Layer: Part 2 Use DTOs.

A simple Single<DTO> data access method

Let’s look at the simplest possible data access method.  GetBlogPost() takes a postGuid as a parameter, defines the query to find the BlogPost entity for that postGuid, and then returns the result as a single BlogPost DTO.  Here’s the complete method.

public BlogPost GetBlogPost(Guid postGuid)
{
    string query = @"SELECT p.*, s.Score
                    FROM [dbo].[BlogPost] p
                    LEFT JOIN [dbo].[BlogPostReputationScore] s ON s.PostGuid = p.PostGuid
                    WHERE p.PostGuid = @PostGuid";
    SqlDao dao = new SqlDao();
    SqlCommand command = dao.GetSqlCommand(query);
    command.Parameters.Add(dao.CreateParameter("@PostGuid", postGuid));
    return dao.GetSingle<BlogPost>(command);
}

The first thing you’ll notice is that this isn’t a lot of code.  All we’re really doing here is defining a parameterized TSQL query, wrapping that query up in a SqlCommand, and then passing the command and our desired return type off to a Data Access Object (DAO) that automagically executes the command and maps the results to our desired type.  It may seem counterintuitive to write code like this when we haven’t even written the DAO yet, but that’s exactly how I did it when I wrote this code for the very first time.  I decided that my data access methods should be very simple.  I would start with the query and the DTO type that I wanted it to return, then I would pass them both to some type of helper class that would handle the details of running the query and figuring out how to map the query results to the properties of my DTO.  By using this top down approach, I gave myself a very clear picture of how I needed my DAO to behave.

What’s a DAO (Data Access Object)?

By looking at the query logic above, you can see that I have this thing called a DAO or Data Access Object.  This is a class that encapsulates the helper logic for working with my database.  The DAO handles things like creating parameters and getting a connection, and most importantly it implements methods to return my four main data formats: GetSingle<DTO>, GetList<DTO>, GetDataPage<DTO>, and GetStringValue(). The DAO and its associated DataMappers are where you’ll find the special sauce that makes this architecture work.  We’ll get into their implementation later on.
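We’ll dig into the implementation in a later post, but for reference, here is roughly the public surface the data access methods in this series rely on. The signatures are inferred from the calling code, so treat this as a sketch rather than the exact class from the download (the paging arguments on GetDataPage in particular are a guess).

// Sketch of the DAO surface as used by the service classes; bodies are stubbed out here.
public class SqlDao
{
    // Wraps a TSQL string in a SqlCommand (the DAO owns connection handling).
    public SqlCommand GetSqlCommand(string query) { throw new NotImplementedException(); }

    // Builds a SqlParameter for the given name and value.
    public SqlParameter CreateParameter(string name, object value) { throw new NotImplementedException(); }

    // The four return shapes used throughout the BAL.
    public T GetSingle<T>(SqlCommand command) where T : new() { throw new NotImplementedException(); }
    public List<T> GetList<T>(SqlCommand command) where T : new() { throw new NotImplementedException(); }
    public DataPage<T> GetDataPage<T>(SqlCommand command, int pageSize, int pageIndex) where T : new() { throw new NotImplementedException(); }
    public string GetStringValue(SqlCommand command) { throw new NotImplementedException(); }
}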

A BAL that embraces change

It’s easy to look at the simple code above and miss something that I think is very important.  In fact, that thing is the whole reason I wrote this framework.  That simple data access method above is the blueprint for a flexible persistence layer that makes changing your entities and associated persistence code easy and almost painless.  It sets up a simple three-step process for all data access in your application (a minimal end-to-end sketch follows the list below).

  1. Define a DTO in the exact data shape that you’re looking for.  That means create a DTO property for each data field that you need out of the database.
  2. Define a query that gets the data.  It can be as simple or as complex as you like.  You can develop it in Sql Server Management Studio.  You can easily optimize it.  Use whatever process or tools work for you. When you’re done just paste the query into your data access method.
  3. Pass both the query and your DTO to the DAO and it will automatically handle field mappings and pass the results back in the data shape you requested.
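To make the recipe concrete, here’s a small hypothetical example walking the three steps. The RecentPostTitle DTO, the GetRecentPostTitles method, and the CreatedUtc column on BlogPost are all made up for illustration; they aren’t in the sample app.

// Step 1: a DTO shaped exactly like the data we want back (hypothetical).
public class RecentPostTitle
{
    public Guid PostGuid { get; set; }
    public string PostTitle { get; set; }
}

// Steps 2 and 3: define the query, pick the shape, and hand both to the DAO.
public List<RecentPostTitle> GetRecentPostTitles(int count)
{
    // A CreatedUtc column on BlogPost is assumed here purely for the example.
    string query = string.Format(
                    @"SELECT TOP {0} PostGuid, PostTitle
                    FROM BlogPost
                    ORDER BY CreatedUtc DESC", count);
    SqlDao dao = new SqlDao();
    SqlCommand command = dao.GetSqlCommand(query);
    return dao.GetList<RecentPostTitle>(command);
}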

This is a very powerful way to work.  I can’t count the number of times that I’ve worked with an architecture where I dreaded any changes because I knew that any data fields added would require me to modify a sproc, a DAL method, a BAL method, parsing logic, and an entity class; it all adds up to a lot of friction that resists any change.  This BAL design embraces change.  It’s written with the attitude that we know change is going to happen, so we’re going to give you as few things as possible to modify, and make sure we don’t have any cross cutting dependencies, so that you can make changes easily.

Next time, more on the service classes.

Next Post: Agile ADO.Net Persistence Layer Part 4: Writing data access for new data shapes

Wednesday, January 13, 2010

Agile Ado.Net Persistence Layer Part 2: Use DTOs (Data Transfer Objects)

What container are we going to use to pass data between the layers of our application?  The usual answers I hear are either DataTables/DataSets or full business objects.  I don’t like either of those options. DataSets and DataTables come with significant overhead and they don’t contain strongly typed data.  Business objects do contain strongly typed data, but they typically carry a lot of extra business logic that I don’t need, and they may even contain persistence logic.  I don’t want any of that.  I want the lightest weight, simplest possible container that will give me strongly typed data, and that container is a Data Transfer Object (DTO). DTOs are simple classes that contain only properties.  They have no real methods, just mutators and accessors for their data. Below is a class diagram for several DTO classes.  You’ll notice that each one has only properties and a constructor, no other logic at all.

[Class diagram: several DTO classes, each containing only properties and a constructor.]

So I’m going to have a strongly typed DTO class for each “data shape” that needs to go in or out of my BAL.  This concept of a data shape is something that will come up again and it’s actually pretty central to how I’ve designed this architecture.  For now just know that a data shape is a DTO.
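To make that concrete, here’s what a representative DTO looks like. The field list below is pieced together from code elsewhere in this series, so take it as illustrative rather than the exact class from the download.

// Representative DTO: properties, a constructor that sets null markers, and nothing else.
public class BlogPost
{
    public Guid PostGuid { get; set; }
    public Guid BlogGuid { get; set; }
    public string PostTitle { get; set; }
    public string PostUrl { get; set; }
    public string PostSummary { get; set; }
    public int Score { get; set; }

    public BlogPost()
    {
        PostGuid = NullValues.NullGuid;
        BlogGuid = NullValues.NullGuid;
        PostTitle = NullValues.NullString;
        PostUrl = NullValues.NullString;
        PostSummary = NullValues.NullString;
        Score = NullValues.NullInt;
    }
}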

DTO, List<DTO>, DataPage<DTO>, and string

Sometimes getting a single DTO will be good enough. For example, when I want to get a single BlogPost, code like the following works just fine.

BlogService service = new BlogService();
BlogPost latestPost = service.GetMostRecentBlogPost();

But most of the time I’m going to be dealing with collections of objects.  The collection of choice is a generic list, List<T>, where T is my DTO type.  So if I need to get a collection of all BlogPosts, that data would come back to me as a List<BlogPost> as shown below.

BlogService service = new BlogService();
List<BlogPost> list = service.GetListOfAllBlogPosts(BlogPostSortOption.ByDate);

So between single DTO and List<DTO>, we’ve got most of our data needs covered but there’s still a couple more.  What about paging?  Take the example above.  There’s no way that I’m ever going to get a list of all BlogPosts and display it all at once.  I’m going to break it up into pages.  What I need is a generic DataPage<T> class that I can use to encapsulate a single page of data along with some metadata like PageSize and PageIndex. The DataPage class is defined in our Common assembly.

public class DataPage<T>
{
    public List<T> Data = new List<T>();
    public int RecordCount = 0;
    public int PageSize = 20;
    public int PageIndex = 0;
    public int PageCount
    {
        get
        {
            return (RecordCount == 0 || PageSize == 0) ? 0
                 : (RecordCount + PageSize - 1) / PageSize;
        }
    }
}

It’s a pretty simple class.  Note the very concise algorithm for calculating PageCount; it’s just integer ceiling division, so for example 45 records with a page size of 20 gives (45 + 20 - 1) / 20 = 3 pages.  That isn’t mine, I picked it up on some website but I can’t remember where.  Anyway, it works really well.  So now that we have our DataPage<T> defined in the Common assembly, if we want to divide our BlogPosts up into pages of 20, and we want to get the third page, we’ll use code like this.

BlogService service = new BlogService();
int pageSize = 20;
int pageIndex = 2;
DataPage<BlogPost> page = service.GetPageOfAllBlogPosts(pageSize, pageIndex, BlogPostSortOption.ByDate);
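And because the metadata rides along with the rows, the UI can build a pager straight from the page object. A quick hypothetical follow-on:

// The page object carries the rows plus the paging metadata, so no extra query is needed for a pager.
foreach (BlogPost post in page.Data)
{
    // render post.PostTitle ...
}
bool hasNextPage = page.PageIndex < page.PageCount - 1;
int totalPages = page.PageCount;   // derived from RecordCount and PageSize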

We’ll see later on that having this simple DataPage<T> class really makes paging easy to deal with.  Now we’ve almost got all of our bases covered, but there’s one more input/output type that we need to consider: a simple string value.  At some point we’re going to want to get a simple value out of our persistence layer.  If we don’t think of it ahead of time we’ll be tempted to do something silly like create a DTO class that contains a single string field.  Not that I would ever do something like that … at least not again.  Anyway, we will want to get a simple value at some point. Let’s just plan for it now and require that our persistence layer has the capability to return string values.
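To show what I mean, a data access method that returns a string looks just like the DTO-based ones, except it ends in the DAO’s GetStringValue call (covered in Part 3). The method name and query below are hypothetical:

// Hypothetical example of pulling a single value back as a string.
public string GetBlogPostCount(Guid blogGuid)
{
    string query = @"SELECT COUNT(*)
                    FROM BlogPost
                    WHERE BlogGuid = @BlogGuid";
    SqlDao dao = new SqlDao();
    SqlCommand command = dao.GetSqlCommand(query);
    command.Parameters.Add(dao.CreateParameter("@BlogGuid", blogGuid));
    return dao.GetStringValue(command);
}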

Next time, a look at Service classes and query logic

So, we haven’t written any actual persistence code yet, but we know a lot about how our persistence layer is going to behave.  We know that all data going in or out of our BAL is going to be in the shape of a single DTO, a List<DTO>, a DataPage<DTO>, or a string value.  We also have some sample code that demonstrates how we expect our service classes to work.  By the way, this isn’t just a case of hindsight being 20/20.  When designing code I’ll typically write the consuming code first, because it really guides my development of the lower level framework code and sets up requirements for how that code will need to behave.  Next time we’ll continue this top down approach and write a service class.

Next Post: Agile ADO.Net Persistence Layer Part 3: Service Class Single<DTO> Data Access Method

Saturday, January 9, 2010

Agile Ado.Net Persistence Layer Part 1: Design Overview

Last year I did a blog post series on how to design a High Performance DAL using Ado.Net.  Judging by the response I’ve gotten from that series, there must be a lot of developers out there who believe that even with the availability of LINQ, Entity Framework, and a host of other ORM technologies, Ado.Net is still your best option when designing a persistence layer.  BTW, I’m one of them.

After that series I started digging into Entity Framework and LINQ, and I was impressed by how effortless those technologies made certain parts of application development.  Once the EF or LINQ mappings were in place, I found myself writing much less code and focusing more on the business logic of my application.  I also found myself driven right back to ADO.Net whenever I struggled with how to do something that I already knew how to do in T-SQL, or fought errors resulting from an attached data context object that I really didn’t want in the first place.

So, I found myself back with ADO.Net, but I didn’t want to give up the ease of development and coding efficiencies that I got from the ORMs.  I decided to take a fresh look at how to design an ADO.Net persistence layer.  I started at the top (the application layer) and thought about how I want my app code to consume business logic, and then I worked down from there. I incorporated many of the best practices that I’ve used over the years, but I also looked critically at each one, and whenever I found that something was slowing me down or leading to duplicate code, I threw it out.  The resulting architecture is quick to develop on, testable, easily maintainable, and can be easily optimized for performance.  This series of posts will detail the entire design, from application code to database.

A peek at the final design

I always find it’s easier to follow along if I have some idea where I’m going, so this is a quick look at where we’re headed.  We’re going to cover the entire architecture for a simple blog aggregator called RelevantAssertions.com.  RelevantAssertions is an Asp.Net MVC application that uses our new Agile Ado.Net Persistence Layer.  We have 4 projects in the RA solution: WebUI, Tests, Common, and BAL.  Here’s a quick look.

[Screenshot: the RelevantAssertions solution with its four projects: WebUI, Tests, Common, and BAL.]

WebUI

WebUI is our Asp.Net MVC application, that’s our application layer.  This contains all UI and presentation logic, but it contains absolutely no business logic.

Common

Common contains classes that we need at all layers of our code. The DataShapes folder is where we define all of our DTO classes.

Tests

This project contains all of our automated tests for both the BAL and the UI.

BAL

I know it’s probably more correct to say BLL, but I like the term BAL.  It just sounds better. This is the project where everything interesting happens.  The main workhorses of the BAL are the service classes.  These are not web services.  They are service classes in the DDD sense.  The service classes are going to be the one stop shop where our application code goes to do whatever it needs to do.  The service classes will also contain query logic, that’s right, I said query logic.  Behind the scenes the service classes will use DAOs (Data Access Objects), Data Mappers, and Persisters to do their thing in an efficient object oriented way, but the only classes our application code will use directly are the services.

What, no DAL??

You’ll notice that there is no DAL.  It seems a little strange to have an architecture that focuses on ADO.Net but doesn’t have a DAL, but there’s a reason.  Usually, the DAL is where I’d put my query logic, mappings to query results, and any other database specific code.  The DAL would allow me to keep all of my TSQL and ADO.Net code separated from the rest of my application, and this separation provided me with some important benefits like:

1) Separation is its own virtue; that’s just the right way to do it.
2) I wouldn’t have leakage of db or query logic into my business logic.
3) I could easily swap out SQL Server with another database if needed.
4) It encapsulates code that would otherwise be repeated.
5) We need to hide TSQL from programmers, it scares them.
6) It’s fun to make changes to 3 layers of code every time I add a new data member to an entity class.

At least that’s what I was always taught.  But after working with more ORM oriented architectures and the Domain Driven Design way of doing things, I started to look at things differently. Let’s look at some of these benefits (at least the ones that aren’t sarcastic).

I’ve never met anyone who’s ever switched out their database

YAGNI means You Ain’t Gonna Need It. The idea is that we spend a lot of time building stuff that we don’t really need. We build it because it seems like the architecturally correct way to do it, or we think we’ll need the feature one day, or maybe we’re just used to doing it that way.  Whatever the reason, the result is that we spend a lot of time coding features that are never used, and that’s not good.  After doing this for 14 years or so, I’ve never, ever, run into a single project where they’ve decided “hey, let’s trash the years of investment we’ve made in SQL Server and switch over to MySQL” or any other database.  Now I am aware that a db switch is likely if you’re writing a product that clients install onsite and it has to work with whatever their environment is, but for 99% of .Net developers this is just never going to happen.  I call YAGNI on this one.

Query logic IS business logic

One of the biggest gripes I had when I started investigating LINQ, EF, and Hibernate (yes, I was looking at Java code) architectures is that they had query logic in their repository classes.  Now the query logic was written in LINQ, or EntitySQL, or some other abstracted query language, but it was still query logic.  Blasphemy!!  You can’t put query logic in a BAL class!  That stuff has to be abstracted away in the DAL or it will contaminate the rest of the application architecture! Our layered architecture is being violated!  Worlds are colliding! It’ll be chaos!! Then I started to notice something: it’s really easy to develop business logic when you include queries in the BAL.  In the past I would put my queries in a sproc, then I would write a DAL wrapper for the sproc, and a BAL wrapper for the DAL method.  Then, if the query changed, or if I needed an identical query but with a slightly different parameter list, I would write a new sproc, then write a new DAL wrapper, then write yet another BAL wrapper method for the DAL wrapper method.  By the time all was said and done I would have this crazy duplication of methods across all layers of my application, including my database!  And don’t even get me started on the crazy designs that I implemented to try and pass query criteria structures (basically the stuff that goes in the WHERE clause) between my BAL and my DAL.  I came up with these crazy layers of abstraction that basically existed so that I wouldn’t have to create a simple TSQL WHERE clause in my BAL.  Then there’s the problem of handling sorting and data paging, which required even more DAL methods, and each of these DAL methods had corresponding wrapper methods in the BAL that did nothing but pass the call through to the DAL!  Why?? I was doing the right thing by separating my business logic from my query logic, why was it so painful?   The answer I finally arrived at is simply that query logic is business logic.  I’d been putting a separation where no separation belonged.

All real programmers know TSQL

I’ve heard the argument that TSQL is too hard for programmers so we’re going to create something much easier for programmers to use like LINQ or EF.  The problem is that these tools require almost exactly the same syntax as TSQL but they put an extra layer of stuff in there that can break and a data context (or session for you nHibernate folks) that throws errors whenever you try to save a complex object graph.  How did this attitude that TSQL is a problem for programmers gain any traction?  Have you ever met a real programmer who can’t write TSQL?  And if you did meet such a person, would you let them touch your business layer code?  Why would we ever want to abstract TSQL away?  It’s the perfect DSL for accessing SQL Server data and every programmer in the world is already familiar with it.

Using good object oriented design and encapsulating data access code is a good thing

I fully believe this one, but once we decide that query logic is business logic and that we don’t need to hide TSQL from programmers, there’s no reason to put our well designed object oriented data access code in a separate project and call it a DAL.  I decided to just put it in a Persistence folder in my BAL and now I have one less DLL to worry about.

So, that’s some of what I was thinking when I made the decisions that I did.  It made sense to me.  I’m sure it won’t make sense to everyone, but I do think that it resulted in a very usable architecture.  Before I wrap up for today, I want to look at one more thing.

The target application code experience. What will it be like to use?

When I’m writing code in my application layer, consuming the logic that is provided through my BAL service classes, what does that code look like?  Well, I know a couple of examples of code I don’t want it to look like.

I’ve been in a few environments where there were huge libraries of BAL classes, any of which could contain the logic I want.  I would often have to resort to a solution wide text search looking for sproc names or keywords that might exist in the method that I needed. I don’t want that.  I want everything I need to be in one easy to find place.

I’ve also seen a practice that’s common in the DDD (Domain Driven Design) crowd where you need to go to a factory class to create a new entity, a repository class to get an entity from the database, and a service class if you have complex logic that involves more than one entity, while saving entities is a toss-up between the repository and a separate service class. There may be a good reason to use that kind of class design inside of the BAL, but when I’m writing code in my application layer, I don’t want to have to worry about which of 4 different classes I’m going to use.  So again, I’m a simple guy: when I need to get, save, or validate a BlogPost entity, I want a single service class that I can go to for everything. My app code should look something like this.

// instantiate service classes
BlogService blogService = new BlogService();
CategoryService categoryService = new CategoryService();

// Get data shaped as lists and pages of our DTOs
DataPage<BlogPost> page = blogService.GetPageOfBlogPosts(pageSize.Value, pageIndex.Value, sortBy);
List<Category> categoryList = categoryService.GetTopCategoryList(30);

// create and save a new BlogPost
BlogPost newPost = new BlogPost();
newPost.BlogGuid = blog.BlogGuid;
newPost.PostTitle = item.Title.Text;
newPost.PostUrl = item.Links[0].Uri.AbsoluteUri;
newPost.PostSummary = item.Summary.Text;
newPost.Score = 0;
blogService.Save(newPost);

Next time we’ll focus less on discussion and more on code.  We’ll look at DTO classes and the 4 main data shapes that will go into and come out of our BAL: DTO, List<DTO>, DataPage<DTO>, and String. 

Next Post:  Agile ADO.Net Persistence Layer Part 2: Use DTOs