Friday, November 30, 2007

Naive Bayesian Models (Part 1)

[This post is part of a series where I'm exploring how to add data mining functionality to the SQL language; this is an extension of my most recent book Data Analysis Using SQL and Excel. The first post is available here.]

The previous posts have shown how to extend SQL Server to support some basic modeling capabilities. This post and the next post add a new type of model, the naive Bayesian model, which is actually quite similar to the marginal value model discussed earlier.

This post explains some of the mathematics behind the model. A more thorough discussion is available in my book Data Analysis Using SQL and Excel.

What Does A Naive Bayesian Model Do?
A naive Bayesian model calculates a probability by combining summary information along different dimensions.

This is perhaps best illustrated by an example. Say that we have a business where 55% of customers survive for the first year. Say that male customers have a 60% probability of remaining a customer after one year and that California customers have an 80% probability. What is the probability that a male customer from California will survive the first year?

The first thing to note is that the question has no correct answer; perhaps men in California are quite different from men elsewhere. The answer could be any number between 0% and 100%.

The second thing to note is the structure of the problem. We are looking for a probability for the intersection of two dimensions (gender and state). To solve this, we have:
  • The overall probability for the population (55%).
  • The probability along each dimension (60% and 80%).
The naive Bayesian model combines this information by making an assumption (which may or may not be true). In this case, the answer is that a male from California has an 83.1% probability of surviving the first year.

The naive Bayesian model can handle any number of dimensions. However, it is always calculating a probability using information about the probabilities along each dimension individually.

Probabilities and Likelihoods
The value of 83.1% may seem surprising. Many people's intuition would put the number between 60% and 80%. Another way of looking at the problem, though, might make this clearer. Being male makes a customer more likely to stay for a year. Being from California makes a customer even more likely to stay. Combining the information on the two dimensions should produce something stronger than either dimension individually.

It is one thing to explain this in words. Modeling and data mining requires explaining things with formulas. The problem is about probabilities, but the solution uses a related concept.

The likelihood has a simple formula: likelihood = p / (1-p), where p is the probability. That is, it is the ratio of the probability of something happening to its not happening. Where the probability varies from 0% to 100%, the likelihood varies from zero to infinity. Also, given a likelihood, the probability is easily calculated: p = 1 - (1/(1+likelihood)).

The likelihood is also known as the odds. When we say something has 1 in 9 odds, we mean that it happens one time for every nine times it does not happen. Another way of saying this is that the probability is 10%.
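Since this conversion comes up repeatedly, it is worth checking the arithmetic. Here is a small sketch of the two formulas (in Python rather than T-SQL, purely for illustration):

```python
def likelihood(p):
    # likelihood (odds) = p / (1 - p)
    return p / (1.0 - p)

def probability(l):
    # probability = 1 - 1 / (1 + likelihood)
    return 1.0 - 1.0 / (1.0 + l)

# 1-in-9 odds is a likelihood of 1/9, which is a 10% probability
print(probability(1.0 / 9.0))   # approximately 0.1
```

The two functions are inverses of each other, which is easy to verify by converting a probability to a likelihood and back.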

For instance, the following are the likelihoods for the simple problem being discussed:
  • overall likelihood (p = 55%) = 1.22;
  • male likelihood (p = 60%) = 1.50; and,
  • California likelihood (p = 80%) = 4.00.
Notice that the likelihoods vary more dramatically than the probabilities. That is, 80% is just a bit more than 60%, but 4.0 is much larger than 1.5.

The Naive Bayesian Formula
The formula for the naive Bayesian model uses one more concept, the likelihood ratio. This is the ratio of any given likelihood to the overall likelihood. This ratio also varies from zero to infinity. When the likelihood ratio is greater than one, then something is more likely to occur than on average for everyone (such as the case with both males and Californians).

The formula for the naive Bayesian model says the following: the likelihood of something occurring along multiple dimensions is the overall likelihood times the likelihood ratio along each dimension.

For the example, the formula produces: 1.22*(1.5/1.22)*(4.0/1.22)=4.91. When converted back to a probability this produces 83.1%.
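The whole calculation fits in a few lines. The following sketch (Python rather than T-SQL, purely for illustration) reproduces the example's numbers:

```python
def likelihood(p):
    return p / (1.0 - p)

def probability(l):
    return 1.0 - 1.0 / (1.0 + l)

overall = likelihood(0.55)            # about 1.22
dimensions = [likelihood(0.60),       # male: 1.50
              likelihood(0.80)]       # California: 4.00

result = overall
for l in dimensions:
    result *= l / overall             # multiply by each likelihood ratio

print(round(result, 2))               # about 4.91
print(round(probability(result), 3))  # about 0.831
```

A dimension with missing information is handled by simply leaving its likelihood ratio out of the loop.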

What Does the Naive Assumption Really Mean?
The "Bayesian" in "naive Bayesian" refers to a basic probability formula devised by Rev. Thomas Bayes in the early 1700s. This probability formula is used to devise the formula described above.

The "naive" in naive Bayesian refers to a simple assumption. This is the assumption that the information along the two dimensions is independent. This is the same assumption that we made for the marginal value model. In fact, the two models are very similar. Both combine information along dimensions into a single value. In the first case, it is counts. In the second case, it is probabilities.

In the real world, it is unusual to find dimensions that are truly independent. However, the naive Bayesian approach can still work well in practice. Often, we do not need the actual probabilities. It is sufficient to have relative measures (males from California are better risks than females from Nevada, for instance).

If we further analyzed the data or did a test and learned that males from California really survived at only a 40% rate instead of 83.1%, then this fact would be evidence that state and gender are not independent. The solution is simply to replace state and gender by a single category that combines the two: California-male, California-female, Nevada-male, and so on.

One of the nice features of these models is that they can use a large number of features of the data and readily handle missing information (the likelihood value for a dimension that is missing is simply not included in the equation). This makes them feasible for some applications such as classifying text, which other techniques do not work so well on. It also makes it possible to calculate a probability for a combination of dimensions which has never been seen before -- made possible by the naive assumption.

The next posting contains the code for a basic naive Bayesian model in SQL Server.

Saturday, November 24, 2007

Managing SQL Server Extensions (Functions, Types, Etc.)

[This post is part of a series where I'm exploring how to add data mining functionality to the SQL language; this is an extension of my most recent book Data Analysis Using SQL and Excel. The first post is available here.]

Up to now, I have discussed adding functions, types, and aggregates into SQL Server. The code is created in C# and loaded into SQL Server as an assembly. Placing the code into SQL Server has four steps:
  1. Deleting all objects already defined in the assembly (if any).
  2. Deleting the assembly (if present).
  3. Loading a new version of the assembly.
  4. Redefining the objects in the assembly.
For readers who are familiar with the process of compiling and linking code, this process is similar to linking. The references in the assembly have to be "linked" into the database, so the database knows what the references refer to.

I am doing this process manually for two reasons. First, because this is how I originally set up this project for adding data mining functionality into SQL Server (even though Visual Studio does have options for doing this automatically). Second, this approach provides an opportunity to start to understand how SQL Server manages user defined types.

This post discusses how to manage the first of these steps automatically. That is, it describes how to delete all objects in a database referenced by a particular assembly.

A Common Error
The following code drops the user-defined aggregate CreateBasicMarginalValueModel():

DROP AGGREGATE CreateBasicMarginalValueModel

This expression is quite simple. However, if it is executed twice, it returns the error:

Msg 3701, Level 11, State 5, Line 1
Cannot drop the aggregate function 'createbasicmarginalvaluemodel', because it does not exist or you do not have permission.

This is inelegant, because it means that we cannot run the same code to drop a function twice in a row. Even if it works the first time, the same code returns an error the second time it runs. Furthermore, when we see this error, we do not know whether the problem is the non-existence of the object or inadequate database permissions.

To fix this, we use the T-SQL IF construct:

IF OBJECT_ID('CreateBasicMarginalValueModel') IS NOT NULL
....DROP AGGREGATE CreateBasicMarginalValueModel

This simply says that if the object exists, then drop the aggregate. However, this is inelegant, because it mentions the name of the aggregate function twice, once in the "if" clause and once when dropping it. In addition, we do not want to have to explicitly mention every object by name, since we may not know which objects in the assembly were actually declared in the database.

Handling Dependencies
Another problem occurs when we try to drop a type. The following statement:

IF (SELECT COUNT(*) FROM sys.types
....WHERE name = 'BasicMarginalValueModel') > 0
....DROP TYPE BasicMarginalValueModel

returns the enigmatic error:

Msg 3732, Level 16, State 1, Line 3
Cannot drop type 'BasicMarginalValueModel' because it is currently in use.

This error does not, unfortunately, tell us who or what is using the type. In this case, it is the set of functions, procedures, aggregates, and other types that use the type as an argument or return value. We have to remove all these objects before we can remove the type.

In general, we need to remove functions, aggregates, and procedures before we remove types. This ensures that the types have no dependencies on them, so they can be removed cleanly from the database.

This problem with dependencies is actually an advantage. It ensures that code loaded into the database all refers to the proper set of definitions. If a function uses a type, we cannot simply replace the type. We need to drop the function, drop the type, and then re-declare the type and function. This ensures that the function refers to the proper code when using the type.

Finding All User-Defined Functions in an Assembly

The first step in removing a certain class of objects, say functions, is to find all of them in the database. They are conveniently located in the sys.objects table, so the following query returns all user-defined functions:

FROM sys.objects o
WHERE o.type in ('FS', 'FT')

The sys.objects table contains references to many different objects in the database (almost everything except user defined types). The specific type abbreviations 'FS' and 'FT' refer to scalar functions and table functions, respectively.

The only problem with this code fragment is that it returns all user-defined functions; there might be user-defined functions from other assemblies. What we really want are only the user-defined functions in the "ud" assembly. To find these, we have to use two more reference tables. To get all the functions in "ud", the query looks like:

FROM sys.objects o JOIN
.....sys.assembly_modules am
.....ON o.object_id = am.object_id JOIN
.....sys.assemblies a
.....ON am.assembly_id = a.assembly_id
WHERE = 'ud' and
......o.type in ('FS', 'FT')

This finds all user defined functions only in the desired assembly. The code for procedures and aggregates is quite similar. The only difference is that the type in the WHERE clause matches 'PC' and 'AF', respectively.

User defined types are somewhat different. They are stored in the table sys.types, rather than sys.objects. The query to find all of them is similar, requiring looking up assembly information in additional tables:

FROM sys.types t JOIN
.....sys.type_assembly_usages tau
.....ON t.user_type_id = tau.user_type_id JOIN
.....sys.assemblies a
.....ON tau.assembly_id = a.assembly_id
WHERE = 'ud'

Although the query is somewhat different, it returns the name of the user defined types in the given assembly.

Deleting All User Defined Functions
Going from a query that returns a list of user-defined functions (or whatever) to actions on those functions (such as dropping them) requires using the T-SQL command language. In particular, we need to define cursors on the query, so we can do something to each row.

Code that uses cursors has the following structure:


DECLARE @name VARCHAR(2000)
DECLARE the_cursor CURSOR FOR <query>

OPEN the_cursor
....FETCH NEXT FROM the_cursor INTO @name
....WHILE @@fetch_status = 0
........<do something with @name>
........FETCH NEXT FROM the_cursor INTO @name

CLOSE the_cursor
DEALLOCATE the_cursor

The first two lines of the code declare two variables. The first is a standard scalar variable, which is used to store each value returned by the query. The second is a cursor, which is used to cycle through the rows. Notice that the cursor variable is not preceded by an at sign.

Most of the remaining code is the framework used to manage the cursor. It is important to handle cursors correctly. A simple mistake -- such as leaving out the FETCH NEXT FROM -- can result in an infinite loop. We do not want that to happen.

Opening the cursor runs the query, and the FETCH NEXT statement gets the next value, which is placed in the local variable @name. When there are no more values, the cursor is closed and deallocated.

The full code for dropping all functions is a bit longer, because the query and action portions are filled in:

DECLARE @function_name VARCHAR(2000)
DECLARE function_cursor CURSOR FOR
....FROM sys.objects o JOIN
.........sys.assembly_modules am
.........ON o.object_id = am.object_id JOIN
.........sys.assemblies a
.........ON am.assembly_id = a.assembly_id
....WHERE = 'ud' AND
..........o.type in ('FS', 'FT')

OPEN function_cursor
....FETCH NEXT FROM function_cursor INTO @function_name
....WHILE @@fetch_status = 0
........EXEC('DROP FUNCTION '+@function_name)
........FETCH NEXT FROM function_cursor INTO @function_name

CLOSE function_cursor
DEALLOCATE function_cursor

The cursor is defined over the query that returns all the functions in the "ud" assembly. For each of these functions, the action is to drop the function. The action uses the EXEC() function rather than just the DROP FUNCTION statement. The EXEC() function takes a string as an argument and executes the string as a T-SQL statement. This makes it possible to incorporate the name of the function into the command.

The Full Code
The code for dropping aggregates, procedures, and types follows the same structure as the code for dropping functions. The only differences are to the query that defines the cursor and the string passed to the EXEC() function (DROP AGGREGATE, DROP PROCEDURE, or DROP TYPE).

The only important aspect to the code is that types need to be dropped last, because of the dependency problem.
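The ordering itself is simple enough to sketch. The following Python fragment (with hypothetical object names, for illustration only) generates DROP statements with the types last:

```python
# hypothetical assembly objects: (name, kind) pairs
objects = [
    ("MarginalValues", "FUNCTION"),
    ("CreateBasicMarginalValueModel", "AGGREGATE"),
    ("BasicMarginalValueModel", "TYPE"),
]

# types sort last, so nothing still depends on them when they are dropped
drop_order = {"FUNCTION": 0, "AGGREGATE": 0, "PROCEDURE": 0, "TYPE": 1}

statements = ["DROP %s %s" % (kind, name)
              for name, kind in sorted(objects, key=lambda o: drop_order[o[1]])]
for statement in statements:
    print(statement)
```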

This entry does not include the T-SQL code for this example. The next entry discusses naive Bayesian models. The entry after that will include code that has these enhancements.

Saturday, November 10, 2007

Marginal Value Models: C# Table Valued Functions (Part 3)

[This post is part of a series where I'm exploring how to add data mining functionality to the SQL language; this is an extension of my most recent book Data Analysis Using SQL and Excel. The first post is available here.]

The previous two posts introduce marginal value models. Underlying these models is a table of values. This post discusses how this table can be returned in SQL Server. In other words, this post discusses table valued functions.

For reference, the files associated with the model are available at:
The first two files contain the DLL and SQL code for loading functionality into SQL Server. The third file contains the source code for the functionality. These files are slightly different from the previous blog3 files, since I fixed some errors in them.

What Are Table Valued Functions?
In earlier posts, I introduced user-defined functions in SQL. These functions have all been scalar functions, whether implemented as user defined functions or as methods in a user defined type. SQL Server also supports user-defined table valued functions. The purpose here is to return all the values in a BasicMarginalValueModel.

The following T-SQL code shows an example for this:

DECLARE @mod dbo.BasicMarginalValueModel

SELECT @mod = ud.dbo.CreateBasicMarginalValueModel(arg)
FROM (SELECT TOP 100 ...(zc.hhmedincome, 1) as arg
......FROM sqlbook..zipcensus zc) zc

SELECT m.mvme.ToString()
FROM ud.dbo.MarginalValues(@mod) m

The first statement declares a variable called @mod as a BasicMarginalValueModel. The second assigns this variable a value, using the first 100 rows of the table zipcensus (provided on the companion page to the book Data Analysis Using SQL and Excel).

The third statement calls the table valued function MarginalValues(). This function returns the values stored in each cell of the model. So, if there are two dimensions and each has ten values, then this returns twenty rows. Of course, because each row contains more than one value (a string and a number), a new data type is needed to store these values. This data type is called MarginalValueModelElement. The attached files contain the definitions for these functions and types.

A second table valued function is also defined for the type. This function is called AllCells() and it returns all combinations of the cells. So, if there are ten values along two dimensions, this function returns one hundred rows, one for each combination of the two values. This function also shows that it is possible to have more than one table valued function within a given model.

Defining Table Valued Functions in T-SQL
Table valued functions have to be declared in T-SQL. The definition is an extension of the definition of scalar valued functions.

The MarginalValues() function returns a specific type, so this needs to be declared. This is simply:

CREATE TYPE MarginalValueModelElement
EXTERNAL NAME ud.MarginalValueModelElement


The function itself uses the code:

CREATE FUNCTION MarginalValues(@arg BasicMarginalValueModel)
RETURNS TABLE (mvme MarginalValueModelElement)
AS EXTERNAL NAME ud.BasicMarginalValueModel.InitMarginalValueEnumerator

First, notice that table valued functions use the same keyword as scalar functions. The difference is the use of RETURNS TABLE rather than just RETURNS. After this keyword comes the table definition. Table valued functions can only return tables with one column. I am not sure if this is a limitation of SQL Server or a C# limitation (table valued functions are implemented as enumerators in C#).

Second, notice that the table valued function is actually defined within the type BasicMarginalValueModel. Scalar functions defined within a type do not need explicit declarations; however, table functions do. Although the function is defined within the type, it is defined as static, so it still needs to take the model as an argument. In fact, all user-defined functions declared explicitly in SQL Server must be static, both scalar and table functions.

Notice that the function definition defines the name of the column as mvme. In the previous code, this column name is used to access values.

Within SQL Server, scalar functions and table valued functions are stored separately. After loading blog3enum.dll using blog3enum-load.sql (two files mentioned at the top of this post), the following are in SQL Server:

(I apologize for the small size of this image; I do not know how to make it larger.)
Notice that SQL Server has separate areas for scalar functions and table-valued functions. I find this ironic, since the metadata stores them in the same way.

The Primitives for Implementing Them in C#
The C# code for table valued functions is basically the code for user defined enumerators. A user defined enumerator is something that you use for the foreach statement.

There are three steps for creating a user-defined enumerator in C#:
  1. Declare the class to be an instance of System.Collections.IEnumerable.
  2. Declare the two enumeration functions.
  3. Declare the enumeration class that does all the work.
The next three sections discuss these in a bit more detail.

IEnumerable and IEnumerator

Declaring a table valued function requires declaring a user defined enumeration, and this in turn requires using two underlying interfaces. The distinction between these two interfaces is a bit subtle and confusing, although the ideas are not really difficult.

The first interface is IEnumerable. This interface says "hey, I'm a class that supports foreach". We need it, because such classes are actually what table-valued functions are. And this makes sense. A table valued function has a bunch of rows that are returned one-by-one. The foreach clause does the same thing in C#.

The second interface is IEnumerator, which we will see used below. This interface is not part of the external declaration. Instead, it is used in the bowels of the foreach. It maintains the state needed to fetch the next value.

I would like to add one more comment about table valued functions. Unlike aggregation functions, they do not seem to support a parallel interface. This is unfortunate, since this limits the scalability of code using them.

Declaring SQL Table Functions
Two functions are needed to define a table valued function. The first is the enumeration function and the second is a helper function that "fills" a row. These two functions are defined as follows:

[SqlFunction(FillRowMethodName = "BVMMEnumeratorFillRow")]
public static BVMMElementEnumerator
....InitMarginalValueEnumerator (BVMM csm)
{
....return new BVMMElementEnumerator(csm);
} // InitMarginalValueEnumerator()

public static void
....BVMMEnumeratorFillRow (Object row,
........out MarginalValueModelElement mvme)
{
....mvme = (MarginalValueModelElement)row;
} // BVMMEnumeratorFillRow()

(In this code, I have used BVMM for BasicMarginalValueModel so the code formats more easily.)

The first of these functions is the reference used in the CREATE FUNCTION statement. This uses a compiler directive, specific to the SQL Server interface. This directive simply says that the function to call to retrieve each row is called BVMMEnumeratorFillRow. Not surprisingly, this is the other function.

The first function returns the enumerator. This is a special class that stores state between calls to the enumerator. This is discussed in the next section.

The underlying C# routines that do the enumerations use very general code that works in terms of objects and that has nothing to do with SQL Server. The interface to SQL Server uses the fill-row routine, which simply copies the appropriate values into the row, and this is handled by casting the object to the appropriate type.

Defining the Enumeration Class
The enumeration class is the most complex part of the definition. However, in this case, the code is rather simple, because it accesses an underlying enumerator used for the Dictionary class.

First, a word about the class used for the MarginalValues() SQL function. It is called BasicMarginalValueModelElementEnumerator. The connection between the function in SQL and this class is not readily apparent. It requires looking at the C# code that defines the C# function used to define MarginalValues(). This function is called InitMarginalValueEnumerator() and it creates an instance of this enumeration class.

So, the class must be defined to inherit from System.Collections.IEnumerator; this sets it up to have the appropriate interface for an enumeration.

This class contains the following elements:
  • A private member to store the state. This is an instance of the class System.Collections.IEnumerator.
  • A constructor, which assigns the enumerator from the dictionary to the private member.
  • A MoveNext() function that goes to the next element in the list. This simply calls the dictionary function.
  • A Reset() function that starts over at the beginning.
  • A Current member that returns the current value of the enumerator as an object. It is this object that is then copied into the row, using the fill function.
All of these are defined in terms of the enumeration for the Dictionary class, so the code itself is quite simple. Note that everything in this class is set up only for the enumeration and not for SQL code. The class has no SQL-specific compiler directive, or functions like Write() and Read(). It is the fill-row function that takes the value returned by the enumerator and transfers the value into the SQL Server world.
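For readers who do not write C#, the pattern is just a thin wrapper around an existing iterator. A loose Python analogue (not the actual SQL Server interface) looks like this:

```python
class MarginalValueEnumerator:
    """Wraps a dictionary's own iterator, much as the C# class wraps the
    Dictionary enumerator through MoveNext(), Reset(), and Current."""

    def __init__(self, marginals):
        self._marginals = marginals
        self.reset()

    def reset(self):
        # like Reset(): start over at the beginning
        self._iter = iter(self._marginals.items())

    def __iter__(self):
        return self

    def __next__(self):
        # like MoveNext() followed by reading Current
        return next(self._iter)

for key, value in MarginalValueEnumerator({"0:upper": 40.0, "total:": 100.0}):
    print(key, value)
```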

The AllCells() enumeration function provides a more complicated example. In this case, the calculations are done explicitly, because there is no underlying type to support the functionality.

Table Valued Functions and Modeling
Table valued functions are a very powerful feature of SQL Server. However, they are ancillary to my goal, which is to understand how to extend the SQL language to support data mining concepts such as modeling.

They do have one very large short-coming, which is the fact that their interface does not support parallel scalability. This is significant, because my choice of SQL is partly due to its scalability. Remember that the user defined aggregation functions include a Merge() method which does support parallelism. There is no corresponding capability for table valued functions.

The preceding three posts have been a detailed exposition on how to incorporate one type of model into SQL Server. The first explained the model; the second explained the C# code, and this, the third, explains user defined functions.

Much of this has been preparatory. The basic marginal value model is more useful as an example than as a modeling tool. The next post is about making the T-SQL load script a bit simpler. It will then be followed by the description of another type of model. Naive Bayesian models are quite powerful and useful, and actually quite similar to marginal value models.

Friday, November 2, 2007

Marginal Value Models: Overview of C# Code (Part 2)

[This post is part of a series where I'm exploring how to add data mining functionality to the SQL language; this is an extension of my most recent book Data Analysis Using SQL and Excel. The first post is available here.]

Marginal value models are a very simple type of model that calculates expected values along dimensions. The previous post explains them in more detail.

This posting discusses C# coding issues in implementing the models. The next post discusses one particular aspect, which is the ability to return the marginal values created by the model.

For reference, the files associated with the model are available at:
The first two files contain the DLL and SQL code for loading functionality into SQL Server. The third file contains the source code for the functionality.

Overview of Model and Classes

The marginal value model does a very simple calculation. For each dimension, the marginal value model remembers the counts for all values along all dimensions. The goal is to calculate an expected value for a combination of dimensions, which involves the following steps:
  1. Divide the count for each value by the total count. This gets a p-value for each value.
  2. Multiply all the p-values together.
  3. Multiply the result by the total count.
The result is the expected value. The rest of this post discusses the implementation in C#, starting with the model itself, then the code to create it.
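As a quick illustration of the three steps, here is a sketch with made-up counts (in Python rather than C#, purely for illustration):

```python
total = 100.0                      # total count of customers
counts = {"male": 60.0,            # made-up counts along two dimensions
          "california": 20.0}

expected = total                   # step 3 folded in up front
for count in counts.values():
    expected *= count / total      # steps 1 and 2: multiply the p-values

print(expected)                    # 100 * 0.6 * 0.2, roughly 12
```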

Defining BasicMarginalValueModel

The model is stored as a class. The following declaration defines a class for a model:

[Serializable]
[Microsoft.SqlServer.Server.SqlUserDefinedType(Format.UserDefined,
....MaxByteSize = 8000)]
public class BasicMarginalValueModel :
....INullable, IBinarySerialize, System.Collections.IEnumerable

This definition includes several compiler directives needed for the interface to SQL Server. The first, Serializable, means that the data in the model can be written to and read from, essentially, a file. In English, this implies that the methods Write() and Read() are defined.

The next directive specifies information about the type for the compiler. The maximum size of the type is 8,000 bytes. This is a SQL Server limit, alas. Also, remember that it applies to the Write() version of the model, not to the actual size in memory. The compiler option Format.UserDefined says that we are using our own types, so we need to write our own Write() and Read() routines. SQL Server can serialize just a few simple types automatically; however, writing code using only such values is a great limitation.

As a comment on this approach: it turns out that much of what we are doing -- putting values in and out of memory, defining NULL and so on -- is the type of work done by compilers. Fortunately, much of this work is rather mindless and easy. So after doing it once, it is easy to do it again for the next type.

The class itself inherits from three different interfaces; the first two are described in this entry. The third is described in the next one because it introduces a special type of functionality. The first in the list is the INullable interface, which enables the value to be NULL. In practice, this means that the following code fragment is in the class:

public bool isNull;

public bool IsNull
{
....get
....{
........return isNull;
....} // get
} // IsNull

public static BasicMarginalValueModel Null
{
....get
....{
........BasicMarginalValueModel bmvm = new BasicMarginalValueModel();
........bmvm.isNull = true;
........return bmvm;
....} // get
} // Null

This code defines the NULL value for the class (this is something that has the type of the class and the value of NULL) and the IsNull property, required by the INullable class. There is little reason to vary this code. Personally, I think the INullable class could just implement it. I suppose the flexibility is there, though, so the boolean variable isNull does not have to be a member of the class.

The IBinarySerialize parent class requires the Read() and Write() functions.

Members of BasicMarginalValueModel

In order to be used, the model must contain the count for each value along each dimension. This table is, in fact, all that is needed for the model. The dimension values are assumed to be strings; the value being stored is the p-value, which is a double. In C#, the appropriate data structure is a dictionary, a built-in data structure which in common parlance is better known as a hash table. Perhaps the biggest strength of C# is the wealth of its built in container classes, so use them liberally.

The first step in using a dictionary is to tell C# where the definition is by including the following line at the top of the file:

using System.Collections.Generic;

The "using" clause is similar to an "#include" in the sense that both bring in outside definitions. However, "using" provides much more detail to the compiler, including compiler directives and definitions.

The dictionary class is generic. We have to tell it the types that it is storing. The following code describes the dictionary as we want to use it:

public System.Collections.Generic.Dictionary<string, double>
.... marginals;

This syntax says to use the generic dictionary definition, where the key is a string (this is the thing being looked up) and the value is a double (this is the p-value), to define the class variable marginals.

The dictionary uses a trick to store all values along all dimensions. A potential problem is that a given value might be valid for different dimensions. Instead of the key simply being the value, it is a composite key consisting of the dimension number (starting from zero) followed by a colon and then the value. So, the value "upper" for the first dimension would be stored in the key "0:upper". One additional entry in the dictionary is also defined. The value "total:" represents the total count along all dimensions.
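A plain Python dictionary mimics the scheme nicely (the keys and counts here are made up for illustration):

```python
marginals = {
    "total:": 100.0,   # the total count along all dimensions
    "0:upper": 40.0,   # value "upper" on dimension 0
    "0:lower": 60.0,
    "1:upper": 25.0,   # the same value on dimension 1 does not collide
    "1:lower": 75.0,
}

def make_key(dimension, value):
    # composite key: dimension number, a colon, then the value
    return "%d:%s" % (dimension, value)

print(marginals[make_key(0, "upper")])   # 40.0
```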

By the way, the creation of the key and the parsing of the dimension and value from the key should probably be separate private functions in the class. However, this code does not implement them this way.

The only additional members of the class are isNull and numdimensions.

Notes on Methods in BasicMarginalValueModel
In addition to the standard methods of the class, there are several additional methods. Most are needed to return the values in the model, which is discussed in the next post. However, two of them, Score() and ChiSquared(), are intended to be accessed from SQL. The advantage of putting these in the model class is that they can be called directly from SQL without having to define them using CREATE FUNCTION.

Both these functions call an internal function _Score() to do the calculation. Unfortunately, C# and SQL Server do not do a good job with function overloading, so this function is simply given a different name. That is, if Score() (or any other function) is overloaded, then it generates an error in SQL Server.

The Write() and Read() functions have obvious definitions, with two small caveats. First, the number of items in the dictionary is written out, then the number of dimensions, and then each dictionary entry. The item count is needed so that Read() knows when it is finished; it controls the read loop.

In addition, there is the danger that "total:" will be defined twice during Read(): once with the value of zero when an instance of the class is created, and once when the dictionary entries are read back. To prevent this, the entry is removed from the dictionary before reading. This step is not strictly necessary, because the entry happens not to be there; however, it is a good precaution.
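The ordering can be sketched as follows (a Python illustration of the read/write protocol, not the actual binary format; the function names are hypothetical):

```python
def write_model(entries, num_dimensions):
    """Serialize: item count first, then dimension count, then each entry."""
    parts = [str(len(entries)), str(num_dimensions)]
    for key, value in entries.items():
        parts.append("%s=%r" % (key, value))
    return "\n".join(parts)

def read_model(text):
    """Deserialize; the leading item count tells the loop when to stop."""
    lines = text.split("\n")
    count, num_dimensions = int(lines[0]), int(lines[1])
    entries = {}
    for i in range(count):
        key, _, value = lines[2 + i].partition("=")
        entries[key] = float(value)
    return entries, num_dimensions
```

A round trip through write_model() and read_model() reproduces the original dictionary, which is exactly the property the Write()/Read() pair must have.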

Implementation of CreateBasicMarginalValueModel
Creating an instance of a marginal value model requires an aggregation. Such aggregations make use of the following compiler directives and parent classes:

[Serializable]
[SqlUserDefinedAggregate(Format.UserDefined,
    IsInvariantToNulls = true,
    IsInvariantToDuplicates = false,
    IsInvariantToOrder = false,
    MaxByteSize = 8000)]
public class CreateBasicMarginalValueModel : IBinarySerialize

The first compiler directive specifies that this is a serializable class with Write() and Read() methods. The second specifies various features of the aggregation. For instance, IsInvariantToNulls means that adding in a NULL value does not change the aggregation (think of the difference between COUNT(*) and COUNT(〈column〉)).

Members and Methods of CreateBasicMarginalValueModel

The class itself contains one member, an instance of BasicMarginalValueModel. This is updated in the Accumulate() and Merge() methods. Accumulate() updates the value for a dimension, either by adding it to the dictionary (if it does not exist) or by incrementing the value stored in the dictionary.

Merge() actually does the same thing, just with two different dictionaries. Recall that Merge() is used to support parallelism. Two different processors might aggregate different chunks of data, which are then combined using this function.
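The logic of Accumulate() and Merge() can be sketched as follows (in Python for concision; the names are illustrative):

```python
def accumulate(counts, key, count=1.0):
    """Add a value for a dimension: insert if absent, else increment."""
    counts[key] = counts.get(key, 0.0) + count

def merge(counts, other):
    """Combine two partial aggregations, e.g. from two parallel threads."""
    for key, count in other.items():
        accumulate(counts, key, count)
```

Because merging just replays one dictionary's counts into the other, the result is the same regardless of how SQL Server splits the rows across threads.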

Because the aggregation value needs to be passed between SQL Server and C#, the serialization routines need to be defined. However, these are trivial, because they call the corresponding routines for the one member (which are the routines defined for BasicMarginalValueModel).

About MarginalValueArgs
The argument to CreateBasicMarginalValueModel requires both a value and an associated count (because aggregation functions only take one argument, the value and count need to be combined into a single type). This definition is very similar to WeightedValue described in an earlier posting.

There is a creation function associated with MarginalValueArgs. This is standard whenever adding a type: an associated function is needed to create instances of the type.

The next posting describes one additional feature of the basic marginal value model. This feature is the ability to list all the values in the model, and it introduces the idea of a table-valued function. Such a function is yet another useful extension of SQL Server.

Thursday, October 25, 2007

Marginal Value Models: Building and Using Data Mining Models in SQL Server (Part 1)

[This post is part of a series where I'm exploring how to add data mining functionality to the SQL language; this is an extension of my most recent book Data Analysis With SQL and Excel. The first post is available here.]

A marginal value model is a very simple type of model. However, it gives a good example of how to implement a data mining model using SQL Server extensions. Recall that the model itself produces an expected value, in the same way as the chi-square calculation. This expected value is the model estimate.

The inputs to the model are the dimensions for the estimate, and these are necessarily categorical variables (strings) that take on, preferably, just a handful of values. As a note, the version described here has limits on the total number of values along all dimensions; these limits are imposed by SQL Server and discussed at the end of this post.

This post describes how to use the model. The next post describes how it is implemented. The third post describes some additional useful technical details.

This posting has three files attached:
As explained in the first posting, the first two files are a DLL containing the functionality and a T-SQL script that loads it into the database. The third file contains the code.

BasicMarginalValueModel Type
The marginal value model itself is a data type in SQL Server. The data type BasicMarginalValueModel is implemented as a C# class containing all the information that describes the model as well as various functions, such as:
  • ToString(), which converts the information describing the model to a string.
  • Parse(), which parses a string containing information describing the model.
  • Write(), which is like ToString() except the format is binary instead of character.
  • Read(), which is like Parse() except the format is binary instead of character.
Actually, there is a subtle difference between the ToString()/Parse() and Write()/Read() pairs. Write() and Read() are called implicitly whenever SQL Server uses the type. On the other hand, ToString() and Parse() are only used by SQL Server when reading and writing BasicMarginalValueModel values to or from text. However, ToString() is very handy for seeing what is happening, so it is also called manually.

What does the model "look" like? It looks like pairs of values. So, if the model contained 50 states and three region types ("urban", "rural", and "mixed"), then an instance of the model would contain up to 53 such key-value pairs. The key combines the dimension number (0 for state, 1 for region type) and the dimension value (a particular state or region type); the value is the count associated with that key.

An Aggregation, Another Type, and a Function
Having a type is useful, but how do we create values of the type? The answer is simple, the CreateBasicMarginalValueModel aggregation function. This aggregation adds up the counts on all dimensions.

Ideally, the aggregation function could be called as:

SELECT ud.dbo.CreateBasicMarginalValueModel(dim1, dim2, . . ., value)

However, this is not possible, because aggregation functions can only take one argument. The data type MarginalValueArgs stores one or more dimensions along with a value. The value would typically be 1; however, it is also possible to create the models on summarized or partially summarized data.

This type has a creation function associated with it, MarginalValueArgs1(). This takes the first dimension and the value. To add more dimensions, the type defines a function AddDim(). The second, third, and so forth dimensions can be added using this function.

Defining functions in user defined types is highly recommended, except for the issue of performance. Once you have a value of the type, the functions are accessible; they do not need to be defined separately in SQL Server, because they come automatically with the type. However, calling such a function requires shuffling the type data back and forth between SQL Server and the DLL, which reduces performance.

What Creating a Model Looks Like
The following code shows one way to create a model using state and region type as two dimensions:

SELECT ud.dbo.CreateBasicMarginalValueModel(arg).ToString()
FROM (SELECT ud.dbo.MarginalValueArgs1(state, 1).
             AddDim(regtype) as arg
      FROM (SELECT zc.*,
                   (CASE WHEN purban = 1 THEN 'urban'
                         WHEN purban = 0 THEN 'rural'
                         ELSE 'mixed' END) as regtype
            FROM sqlbook..zipcensus zc) zc) zc

There are three layers of queries. The innermost query defines the region type. The next level defines the inputs into the model creation routine. Notice that the function MarginalValueArgs1() defines the first dimension on state and the AddDim() function defines the second.

Although this is useful for illustration, the model only exists long enough for us to see it using the ToString() function. When the query stops executing, the model is no longer accessible.

The following code assigns the model to a variable. The model can then be referenced in multiple select statements. Note that for this to work, the current database must be "ud", because that is where the data types are defined. Currently, it is not possible to define variables using data types defined in other databases.

DECLARE @model BasicMarginalValueModel
SET @model =
    (SELECT ud.dbo.CreateBasicMarginalValueModel(arg)
     FROM (SELECT ud.dbo.MarginalValueArgs1(state, 1).
                  AddDim(regtype) as arg
           FROM (SELECT zc.*,
                        (CASE WHEN purban = 1 THEN 'urban'
                              WHEN purban = 0 THEN 'rural'
                              ELSE 'mixed' END) as regtype
                 FROM sqlbook..zipcensus zc) zc) zc)

SELECT @model.ToString()

In this case, the @model variable is accessible for the subsequent statements, but it does not persist. However, because the model is just a variable with a complicated type, it could also be stored in a table.

Scoring a Model
The process of scoring is simply applying the model to a given set of values. For instance, the following query scores all the rows in the zc table:

SELECT @model.Score(arg)
FROM (SELECT ud.dbo.MarginalValueArgs1(state, 1).
             AddDim(regtype) as arg
      FROM (SELECT zc.*,
                   (CASE WHEN purban = 1 THEN 'urban'
                         WHEN purban = 0 THEN 'rural'
                         ELSE 'mixed' END) as regtype
            FROM sqlbook..zipcensus zc) zc) zc

That is, the dimensions are bundled together into MarginalValueArgs and passed to the model for scoring.

The model can also be used to calculate the chi-squared value (which is probably the most useful thing to do with such a model). This is simply another function in the BasicMarginalValueModel.

SELECT @model.ChiSquared(arg)
FROM (SELECT state, regtype, count(*) as cnt,
             ud.dbo.MarginalValueArgs1(state, count(*)).
                    AddDim(regtype) as arg
      FROM (SELECT zc.*,
                   (CASE WHEN purban = 1 THEN 'urban'
                         WHEN purban = 0 THEN 'rural'
                         ELSE 'mixed' END) as regtype
            FROM sqlbook..zipcensus zc) zc
      GROUP BY state, regtype
     ) zc

Limits on the Model

The interface between C# and SQL Server limits a serialized value of the type to 8,000 bytes, which severely restricts the size of the model. In future postings, I'll suggest an alternative implementation that gets around this limit.

The next posting discusses the C# implementation and the one after that extensions to the model.

Saturday, October 20, 2007

Marginal Value Models: Explanation

This posting describes a very simple type of model used when the target of the model is numeric and all the inputs are categorical variables. This posting explains the concepts behind the models. The next posting has the code associated with them.

I call these models marginal value models. In statistics, the term "marginal" means that we are looking at only one variable at a time. Marginal value models calculate the contribution from each variable, and then combine the results into an expected value.

The chi-square test operates in a similar fashion, but takes the process one step further. The chi-square test compares the actual value to the expected value to determine whether they are sufficiently close to be due to small random variations -- or far enough apart to be suspicious. Both marginal value models and the chi-square test are discussed in more detail in my most recent book Data Analysis Using SQL and Excel. Here the emphasis is a bit different; the focus is on implementing this type of model as an extension to SQL Server.

What are the Marginal Values?

For the purposes of this discussion, the marginal values are the values summarized along one of the dimensions. For instance, if we are interested in the population of different parts of the United States, we might have the population for each state. The following query summarizes this information based on a table of zip code summaries (available on the companion web site to "Data Analysis Using SQL and Excel"):

SELECT state, AVG(medincome), SUM(population)
FROM zipcensus
GROUP BY state

The resulting histogram (not shown here) displays the population distribution along this dimension.

What if we also know the population for urban, rural, and mixed areas of the country? These might have the following values:

MIXED 148,595,327
RURAL 27,240,454
URBAN 109,350,778

Given this information about population along two dimensions, how can we combine the information to estimate, say, the rural population of New York?

Those familiar with the chi-square test will recognize this as the question of the expected value. In this situation, the expected value is the total population of the state times the total population of the area category divided by the total population in the United States. That is, it is the row total times the column total divided by the grand total.

For rural Alabama, this results in the following calculation: 4,446,124 * 27,240,454 / 285,186,559 ≈ 424,685. This provides an estimate calculated by combining the information summarized along each dimension.
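The arithmetic is easy to check (a Python sketch of the expected-value formula, using the numbers above):

```python
def expected_value(row_total, column_total, grand_total):
    """Expected cell count: row total times column total over grand total."""
    return row_total * column_total / grand_total

# Rural Alabama: state population, rural population, US population
alabama_rural = expected_value(4446124, 27240454, 285186559)  # about 424,685
```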

Is this estimate accurate? That is quite another question. If the two dimensions are statistically independent, then the estimate is quite accurate. If there is an interaction effect, then the estimate is not accurate. However, if all we have are summaries along the dimensions, then this might be the best that we can do.

Combining Values Along More Than Two Marginal Dimensions

The formula for the expected value can be easily extended to multiple dimensions. The idea is to multiply ratios rather than counts. The two-dimension case can be thought of as the product of the following three numbers:

  • The proportion of the population along dimension 1.
  • The proportion of the population along dimension 2.
  • The total population.

That is, we are multiplying proportions (or probabilities, if you prefer). The idea is that the "probability" of being in Alabama is the population of Alabama divided by the population of the country. The "probability" of being rural is the rural population divided by the population of the country. The "probability" of both is the product. To get the count, we multiply the "joint probability" by the population of the country.

This is easily extended to multiple dimensions. The overall "probability" is the product of the "probabilities" along each dimension. To get the count, we then have to multiply by the overall population. Mathematically, the idea is to combine the distributions along each dimension, assuming statistical independence. The term "probability" appears in quotes -- it is almost a philosophical question whether "probabilities" are the same as "proportions", and that is not the subject of this posting.

This formulation of the problem is quite similar to naive Bayesian models. The only difference is that here we are working with counts and naive Bayesian models work with ratios. I will return to naive Bayesian models in later postings.

Combining Things That Aren't Counts

Certain things are not counts, but can be treated as counts for the purpose of calculating expected values. The key idea is that the overall totals must be the same (or at least quite close).

For example, the census data contains the proportion of the population that has a college degree. What if we wanted to estimate this proportion for the urban population in New York?

What we need for the marginal value model to work is simply the ability to count things up along the dimensions. In this case, we are tempted to use the proportion of the population of interest (since that is the data we have and what the question ultimately asks for).

However, we cannot use proportions, because they do not "add up" to the same total along each dimension. That is, if we take the sum of the proportions in each state, the total will be quite different from the sum of the proportions for urban, rural, and mixed. If for no other reason, adding up fifty numbers (or so) is unlikely to produce the same result as adding up three.

Fortunately, there is a simple solution. Multiply the proportion by the appropriate population in each group, to get the number of college educated people in each group. This number adds up appropriately along each dimension, so we can use it in the formulas described above.

In the end, we get the number of people in, say, rural Alabama who have a college education. We can then divide by the estimate for the population, and arrive at an answer to the question.

This method works with other numbers of interest, such as the average income. The idea would be to multiply the average income times the population to get dollars. Dollars then add up along the dimensions, and we can calculate the appropriate values in each group.
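The rate-to-count conversion can be sketched as follows (a Python illustration; the argument names are made up, and real inputs would come from the census table):

```python
def proportion_estimate(state_pop, state_rate, area_pop, area_rate,
                        total_pop, total_rate):
    """Estimate a cell's proportion: convert rates to counts, apply the
    marginal-value formula, then convert back to a rate."""
    state_count = state_pop * state_rate   # e.g. college grads in the state
    area_count = area_pop * area_rate      # college grads in the area type
    total_count = total_pop * total_rate   # college grads overall
    cell_count = state_count * area_count / total_count
    cell_pop = state_pop * area_pop / total_pop
    return cell_count / cell_pop
```

A sanity check on the design: if the rate is identical everywhere, the estimate for any cell should be that same rate.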

Chi-Square Test

The final topic in this posting is the calculation of the chi-square value, using the marginal value model. The chi-square value is simply:

chi-square value = sum((actual - expected)^2/expected)

The value can be used as a measure of how close the observed data is to the expected values. In other words, it is a measure of how statistically independent the dimensions are. Higher values suggest interdependencies. Values closer to 0 mean that the dimensions are independent.
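The calculation can be sketched as follows (in Python; actual and expected values would come from the model and the data):

```python
def chi_squared(actual, expected):
    """Sum of (actual - expected)^2 / expected over all cells."""
    return sum((a - e) ** 2 / e for a, e in zip(actual, expected))
```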

This posting describes the background for marginal value models. The next posting describes how to add them into SQL Server.

Sunday, October 14, 2007

Two More Useful Aggregate Functions: MinOf() and MaxOf()

The overall purpose of this blog is to investigate adding data mining functionality into SQL Server (see the first post for a more detailed explanation). We have not yet arrived at adding real data mining functionality, since this requires being comfortable with .NET, C#, and extending SQL Server.

This post offers two more aggregation functions that provide a flavor for how to think about adding analytic capabilities. These functions return the value in one column when the value of another column is at a minimum or maximum. I call the functions MinOf() and MaxOf(). As a brief aside, my most recent book Data Analysis Using SQL and Excel describes various other techniques for getting this information in SQL without adding new functions into the database. Unfortunately, none of the methods is actually elegant.

The attached files contain the source code as well as a DLL and SQL script for loading functionality into the database. These files are:
Note that these files contain all the functionality in the blog1 files as well as the new functionality here. (The earlier post Weighted Average: An Example of Enhancing SQL Server Functionality explains how to load the functionality into the database.)

Thinking About the Problem
A good place to start is to think about what the code would ideally look like. The functions would look like:

SELECT 〈whatever〉, MINOF(〈value〉, 〈min-column〉), MAXOF(〈value〉, 〈max-column〉)
FROM 〈table〉
GROUP BY 〈whatever〉

This construct could be used, for instance, to find the first product purchased by each customer. Or, the most recent amount spent for each customer.

Alas, we cannot extend SQL Server to support such functions, because aggregation functions can only take one argument. This means that we have to add a new type, ValuePair, to handle the two arguments. But, even more alas, for the function to be really useful, the two elements of ValuePair must be able to take on any type (for simplicity, we limit this to any built-in basic type). That means that we need yet another user defined type, AnyType. I suppose these could be compressed into a single type that took pairs of any type. However, it is much cleaner to break the code into these pieces.

The result is that the above code instead looks like:

SELECT 〈whatever〉, MINOF(vp).ToDouble(), MAXOF(vp).ToDouble()
FROM (SELECT t.*,
             ud.dbo.〈value-pair-creation〉(ud.dbo.AnyDouble(〈value〉),
                                          ud.dbo.AnyDateTime(〈min-column〉)) as vp
      FROM 〈table〉 t) t
GROUP BY 〈whatever〉

The variable vp becomes an instance of ValuePair for each row. In this case, it consists of a floating point value (which is the value returned) and a datetime column (over which the minimum or maximum is taken). Of course, there are "Any" functions for all the built-in types.
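The underlying logic of MinOf() can be sketched as follows (in Python for concision; the real implementation works on AnyType values and follows SQL comparison rules, and MaxOf() is symmetric):

```python
def min_of(pairs):
    """Given (value, key) pairs, return the value whose key is smallest.
    Pairs with a NULL key never win a comparison; an empty or all-NULL
    group yields None, matching SQL's NULL result."""
    best_value, best_key = None, None
    for value, key in pairs:
        if key is None:
            continue
        if best_key is None or key < best_key:
            best_value, best_key = value, key
    return best_value
```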

What The Solution Looks Like
The solution consists of two user defined types, two aggregation functions, and various support functions:
  • AnyType which represents any SQL type;
  • ValuePair which contains two AnyTypes;
  • MinOf() and MaxOf() aggregation functions; and,
  • Various functions to create instances of AnyType and ValuePair.
The trickiest of these is the AnyType type; the remainder are quite simple. For instance, ValuePair has three members:
  • isNull (the null flag);
  • value1 (of type AnyType); and
  • value2 (of type AnyType).
along with the appropriate member functions for a user defined type. It also has methods for accessing the two values, called Value1 and Value2.

MinOf() and MaxOf() are aggregation functions. Each contains two private members of type AnyType: the current extreme (minimum or maximum) comparison value and the value associated with it.

Adding the AnyType Type
The AnyType type needs to store virtually any type allowed in SQL. Internally, it has a structure with the following members:

private struct union_values
{
    public Byte value_int8;
    public Int16 value_int16;
    public Int32 value_int32;
    public Int64 value_int64;
    public float value_single;
    public double value_double;
    public String value_string;
    public Decimal value_decimal;
    public DateTime value_datetime;
}

This would be better as the equivalent of a C union rather than a C struct, since a union would use less space in memory. However, for the ToString(), Parse(), Write() and Read() methods, only the one actual value is input or output. Another member is an enumerated type, defined as follows:

private enum datatype { . . . }

Each possible type has a member for returning a particular value, such as ToTinyInt(), ToSmallInt(), and so on. These are all accessible from the SQL side. Each type also has an overloaded constructor. The constructors are not accessible from SQL.

Finally, AnyType redefines the "<" and ">" operators. This is needed for the comparisons in MinOf() and MaxOf(). These are complicated by the fact that the two arguments can be of any type. The comparisons follow the rules of SQL: if either value is NULL, then the comparison returns false. Only numerics can be compared to each other, so an int8 can be compared to a double, but not to a character string or a datetime.
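These comparison rules can be sketched as follows (a Python approximation of the semantics; the real code overloads the C# "<" and ">" operators on AnyType):

```python
def sql_less_than(a, b):
    """SQL-style '<': NULL compares false; numerics compare across numeric
    types; otherwise only identical types can be compared."""
    if a is None or b is None:
        return False                     # NULL never compares true
    numeric = (int, float)
    if isinstance(a, numeric) and isinstance(b, numeric):
        return a < b                     # e.g. int8 versus double
    if type(a) is type(b):
        return a < b                     # e.g. string versus string
    raise TypeError("incompatible types for comparison")
```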

Creation Functions for AnyType and ValuePair
The following creation functions take a value of a particular type and return an AnyType:
  • AnyTinyInt()
  • AnySmallInt()
  • AnyInt()
  • AnyBigInt()
  • AnyReal()
  • AnyDouble()
  • AnyDecimal()
  • AnyDateTime()
  • AnyString()
Adding an additional value type is quite simple. The following need to be modified:
  • The union_values struct in AnyType needs to store the new type.
  • A new constructor needs to be added for the new value.
  • A new conversion function (To〈type〉()).
  • Modifications to ToString(), Parse(), Write(), and Read().
  • Modify the ">" and "<" operators.
ValuePair, in turn, has a creation function that takes two arguments of AnyType.

The next post moves in a different direction, by talking about a particular type of data mining model, the marginal value model. The first post discusses how the model works rather than how it is implemented.

Tuesday, October 9, 2007

Weighted Average Continued: C# Code

The previous post described how to load the function WAVG() into SQL Server. This post describes the code that generates the DLL.

This discussion assumes that the reader is familiar with C# or object oriented languages similar to C#, such as C++ or java. That said, the code itself is probably readable by most people who are familiar with object-oriented programming practices.

This discussion is composed of five parts:
  • Very basic discussion of Microsoft Visual Studio;
  • Overview of the code and auxiliary modules;
  • Code for Adding CreateWeightedValue() Function;
  • Code for Adding WAvg() Aggregation Function; and,
  • Code for Adding WeightedValue Type.

The last three specifically describe code. These are ordered by difficulty, since it is easiest to add a user defined function, then an aggregation, and then a type (at least in terms of the volume of code produced). The code containing these is available here.

Overview of Microsoft Visual Studio
Microsoft Visual Studio is the application used to develop C# code (as well as code in other languages) using the .NET framework.

Visual Studio divides work into units called projects. These consist of one or more sets of files containing programming code, and they produce something. This something could be many things:
  • A windows application;
  • A ".exe" file executed from the command line;
  • A library to be shared among other projects;
  • A dynamic load library (DLL);
  • A device driver;
  • and so on.
The thing that we want to create is a DLL, since this can be loaded as an assembly into SQL Server.

For the purposes of this example, I have created a new project called blog in the directory c:\gordon\c-sharp\UserDefinedFunctions.

Creating such a project automatically opens a code file. The source code for WAvg() can be copied and placed into this file. After the code is in place, the DLL is built by going to the Build-->Build Blog menu option. Any errors appear at the bottom of the screen. Visual Studio does a good job of catching errors during the compilation process.

Once the project has been built, the DLL is conveniently located in the path \blog\blog\bin\debug\blog.dll. It can be loaded into SQL Server from this location, copied to a more convenient location, and even emailed or moved onto another computer.

Obviously, there is much more to say about Visual Studio. For that, I recommend Microsoft documentation or simply playing with the tool.

Overview of Code and Auxiliary Modules
Converting C# code into a DLL that can be loaded into SQL Server is a tricky process. In particular, .NET has to be sure that streams of bits represent the same thing in both C# and SQL Server. This is a real issue: for instance, database values can be NULL by default, and yet NULL is not part of any native C# numeric type. To support compatibility between the systems, the code includes various using clauses and compiler directives.

However, the bulk of the C# code for this project consists of three class definitions. The class WeightedValue defines the type weighted value, which holds a numeric value and a numeric weight (as C# doubles). The class WAvg defines the aggregation function. Finally, the CreateWeightedValue() function is a member of another class, UserDefinedFunctions. Note that the names of the first two classes match the names of the type and aggregation function respectively. The name of the third class is arbitrary, but carefully chosen to convey the notion that it contains user defined functions.

The beginning of the C# library consists of a series of "using" steps. These specify additional modules used by C#, and are similar to the "#include" preprocessor directive in C and C++ code. For instance, this code has the following references:

using System;
using System.IO;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

The first two specify various system classes that are commonly used. The last specifies classes used specifically for interfacing to SQL Server.

The third is the most interesting, because it defines the classes that contain data going between SQL Server and C#. These are SQL data types. For instance, FLOAT in SQL corresponds to SqlDouble in C#. Basically, the C# classes encapsulate the basic type with a NULL flag.

However, there are some subtleties when passing data back and forth. "CHAR()" is not supported, although "NCHAR()" is. Fortunately, SQL Server automatically converts between these types.

More insidious is the fact that the lengths of strings, numerics, decimals, and money all have to be specified. So, I have not figured out how to create a function that takes arbitrary numeric values. User defined functions can only take numerics of a given length. Of course, we could define our own numeric value that never overflows. More typically, though, we simply declare functions to take a FLOAT. This is sufficient for most purposes and gets passed to C# as SqlDouble. For characters, we define them to take some long character value, such as NVARCHAR(2000), which is converted to SqlString.

More complete matching tables are available in Microsoft documentation.

In addition to the using statements, there are also compiler directives. These are applied to classes and to members of classes, as we will see below.

Code for Adding CreateWeightedValue() Function
The following code provides the full definition for the CreateWeightedValue() function.
public partial class UserDefinedFunctions
{
    [SqlFunction]
    public static WeightedValue
    CreateWeightedValue(SqlDouble val, SqlDouble wgt)
    {
        if (val.IsNull || wgt.IsNull)
            return WeightedValue.Null;
        return new WeightedValue(val, wgt);
    } // CreateWeightedValue()
} // UserDefinedFunctions

This code defines a class called UserDefinedFunctions. The partial keyword simply means that the class definition may be split over several source code files; in this case, it is not.

The function itself is a static member of this class, so it can be called without an instance of the class being created. In fact, the function is going to be called from SQL Server. The function itself starts with a compiler directive that specifies that this is, in fact, a SQL Server function.

The remainder of the function declaration specifies the arguments and the return type. The code in the body is quite simple.

Recall from the last posting that this function is added into SQL Server using the following code:

CREATE FUNCTION CreateWeightedValue (@val float, @wgt float)
RETURNS WeightedValue as
EXTERNAL NAME ud.UserDefinedFunctions.CreateWeightedValue

This shows the correspondence between the SQL Server and C# language elements.

Code for Adding WAvg() Aggregation Function
The code for an aggregation is more complicated than the code for a single function, so the entire code is not included here. An aggregation class has the following structure:
[Serializable]
[SqlUserDefinedAggregate(Format.UserDefined)]
public class WAvg : IBinarySerialize
{
    private double sum;
    private double cnt;

    public void Init ()
    { . . . }

    public void Accumulate (WeightedValue value)
    { . . . }

    public void Merge (WAvg other)
    { . . . }

    public SqlDouble Terminate ()
    { . . . }

    public void Write (BinaryWriter w)
    { . . . }

    public void Read (BinaryReader r)
    { . . . }
} // WAvg

The way this aggregation works is quite simple. It maintains a running weighted sum and a sum of weights. When finished, it returns the weighted sum divided by the sum of the weights. Notice that an aggregation function is really a class that contains data, along with some methods on that class.

More interesting is the definition itself. First, the function has two compiler directives. The first is [Serializable]. This directive means that the data in the class can be passed back and forth between SQL Server and C#. In particular, it means that there are two special functions that are going to be defined, Write() and Read(). These are never called explicitly, but are part of the interface. These functions "write" the data to memory and then "read" it back . . . the data is written in one process (SQL Server or C#) and then read back in the other.
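The mechanism can be illustrated outside SQL Server. The following is a minimal sketch of such a Write()/Read() pair for a class holding the aggregate's two running values; the class and field names here are illustrative, not the original code's.

```csharp
using System.IO;

// Sketch of the Write()/Read() serialization pair: the aggregate's state is
// written to a binary stream in one process and reconstructed in another.
public class SerializableState
{
    public double sum;    // running sum of value * weight
    public double sumwgt; // running sum of the weights

    public void Write(BinaryWriter w)
    {
        w.Write(sum);
        w.Write(sumwgt);
    }

    public void Read(BinaryReader r)
    {
        sum = r.ReadDouble();
        sumwgt = r.ReadDouble();
    }
}
```

A round trip through a MemoryStream restores the state exactly, which is all SQL Server requires of the pair.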

The second compiler directive specifies that the class is for an aggregation function. Because the type has "complicated" data, C# does not know how to read and write it automatically. This is true of almost all data types, so the format is typically UserDefined. Other options are available as well; they are explained in the Microsoft documentation.

The class itself implements the IBinarySerialize interface. This is also part of the serialization mechanism. Once this inheritance is set up, the Write() and Read() functions must be defined or else there is a compiler error.

The other four functions in the interface are actually useful for doing the aggregation. They are not called explicitly, but are used by SQL Server to do the work. The first is Init(), which initializes the values in the class to start a new aggregation. In this case, it sets the sum and the weight to zero.

The function Accumulate() adds in another value. Unfortunately, the accumulation function can only take one argument, which is why we need to create a special type that contains two values. In this case, it simply increments the sum and weight values.

The third function, Merge(), is probably the least obvious of the four. This function merges two aggregation values. Why would this ever happen? The reason is parallelism. SQL Server might separate the aggregation into multiple threads for performance reasons, and Merge() brings the intermediate results together. One super nice thing about this structure is that we get the benefits of parallel, multi-threaded performance without really having to think about it. A very nice thing indeed.

The final function Terminate() is the most harshly named of the four. It returns the final value, in this case as a SQL floating point value (which is equivalent to a C# double).
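To make the flow of the four functions concrete, here is a minimal stand-alone sketch of the same logic as a plain C# class. The SQL Server attributes, the WeightedValue argument type, and the serialization plumbing are stripped out so the sketch can run anywhere; the member names are illustrative.

```csharp
// Stand-alone sketch of the WAvg aggregation logic.
// Only the Init/Accumulate/Merge/Terminate flow is shown.
public class WAvgSketch
{
    private double sum;    // running sum of value * weight
    private double sumwgt; // running sum of the weights

    // Called to start a new aggregation.
    public void Init()
    {
        sum = 0.0;
        sumwgt = 0.0;
    }

    // In the real aggregate the argument is a single WeightedValue;
    // here the two components are passed directly.
    public void Accumulate(double value, double weight)
    {
        sum += value * weight;
        sumwgt += weight;
    }

    // Combines partial results that SQL Server computed in parallel.
    public void Merge(WAvgSketch other)
    {
        sum += other.sum;
        sumwgt += other.sumwgt;
    }

    // Returns the final weighted average.
    public double Terminate()
    {
        return sumwgt == 0.0 ? 0.0 : sum / sumwgt;
    }
}
```

For example, accumulating the pairs (3, 1) and (5, 3) yields (3*1 + 5*3) / (1 + 3) = 4.5.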

Code for Adding WeightedValue Type
The final section is for adding in the type. This is similar to the aggregation class, although slightly different.

The following shows the various functions in the user defined type.

public class WeightedValue : INullable, IBinarySerialize
{
private bool isNull;
private double value;
private double weight;

public WeightedValue ()
{ . . . }

public bool IsNull
{ . . . }

public static WeightedValue Null
{ . . . }

public override string ToString ()
{ . . . }

public static WeightedValue Parse (SqlString s)
{ . . . }

public void Write (BinaryWriter w)
{ . . . }

public void Read (BinaryReader r)
{ . . . }

public WeightedValue (SqlDouble val, SqlDouble wgt)
{ . . . }

public double Value
{ . . . }

public double Weight
{ . . . }

} // WeightedValue

Although the aggregation function and user defined type are both defined as classes, they do have some fundamental differences. The biggest one is that the type is actually inside SQL Server. This means that the various methods are all available -- even methods not required by the API. This can be a handy way of adding functionality, without bothering to create new user defined functions.

Certain functions are required for a user defined type. First, a constructor is needed that takes no arguments; it typically initializes the instance to the NULL value. Note that C# does not require destructors.

In addition, Null and IsNull are also required. Their absence is caught by the compiler, because the user defined type implements INullable.

The functions Write() and Read() are also required. These behave the same way as for the aggregation function.

Finally, the functions ToString() and Parse() are required. These are very similar to Write() and Read(), except that they write the values out to character strings. These strings are intended for output to and input from human beings, preferably in a somewhat understandable format. The function ToString() is particularly useful because it can be used in a query to see what is stored in a value of the type.
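As a sketch of what such a pair might look like, here is a stand-alone ToString()/Parse() pair for a value-weight pair. The "value:weight" text layout is an assumption for illustration; the original code's format is not shown here.

```csharp
using System;
using System.Globalization;

// Illustrative ToString()/Parse() pair for a value-weight pair.
// The "value:weight" text format is an assumption, not the original's.
public class WeightedValueSketch
{
    public double Value;
    public double Weight;

    // Converts the pair to a human-readable string.
    public override string ToString()
    {
        return Value.ToString(CultureInfo.InvariantCulture) + ":" +
               Weight.ToString(CultureInfo.InvariantCulture);
    }

    // Converts the string back into the type.
    public static WeightedValueSketch Parse(string s)
    {
        string[] parts = s.Split(':');
        return new WeightedValueSketch {
            Value = double.Parse(parts[0], CultureInfo.InvariantCulture),
            Weight = double.Parse(parts[1], CultureInfo.InvariantCulture)
        };
    }
}
```

The key property is that Parse() inverts ToString(): a value can be printed, stored, and later reconstructed from its text form.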

This user defined type makes all its elements private, and creates separate accessors for each one (these are called Weight and Value). This is not a requirement. However, it can be good programming practice for protecting members of a class.

The next post extends these ideas for another useful aggregation function, one that returns the value of one column when the value in another is minimized.

Sunday, September 30, 2007

Weighted Average: An Example of Enhancing SQL Server Functionality

This entry discusses how to enhance SQL Server functionality by adding in a simple aggregation function, the weighted average. Ideally, we would want to use the function as follows:

SELECT AVG(purban), WAVG(purban, population)
FROM zipcensus zc

(The zipcensus table contains summaries of census information by zip code and is available at the companion page for Data Analysis Using SQL and Excel.)

This contains two unfortunatelies. First, SQL does not contain a weighted average function, so we have to define one ourselves. Second, aggregation functions cannot take two arguments, so we have to work around this. The call looks like this in practice:
SELECT AVG(purban), ud.dbo.WAVG(wval)
FROM (SELECT zc.*,
             ud.dbo.CreateWeightedValue(purban, population) as wval
      FROM zipcensus zc) zc

This posting references two files. The first (blog1-load.sql) loads the new objects into the database; the second contains the code for the new objects.

How Wavg is Implemented
The Wavg() aggregation function is implemented as a user-defined aggregate in blog1.dll. As we'll see in the next posting, an aggregation is really a class in C# (or other .NET language) that has certain specific characteristics.

The weighted average requires two values -- the value and the weight -- to do the calculation. Unfortunately, SQL Server does not allow aggregations to take more than one argument, even user defined aggregations. Fortunately, we can define a new type, WeightedValue, that contains these two elements. This type defines the following methods (attributes and functions):
  • Null (an attribute) is the NULL value for the type.
  • Value (an attribute) returns the value component.
  • Weight (an attribute) returns the weight component.
  • ToString() (a function) converts it to a string.
  • Parse() (a function) converts the string back into the type.
Note that once the type is defined, it can be used as a column in a table or as a T-SQL variable.

The type itself does not define a function that creates the type. For this, we need a user defined function, CreateWeightedValue(). This function takes two arguments, which are both FLOATs -- one for the value and one for the weight. SQL Server converts any numeric argument to a FLOAT, so this function can take integers, floats, numerics, and so on as arguments.

Assemblies, Functions, Types and All That Jazz
SQL Server loads DLLs (dynamic link libraries) into an object called an assembly. The code for the DLL is actually placed in the database. Fortunately, the .NET architecture makes it possible to load the same code, even if the code is written on a different machine from the one where it is being run. The DLL does not actually contain machine code, but an intermediate language that is compiled just-in-time as it runs.

An assembly can contain objects such as the following:
  • User defined functions (such as CreateWeightedValue());
  • User defined types (such as WeightedValue());
  • User defined aggregates (such as Wavg());
  • User defined procedures; and,
  • User defined table functions.
User defined types and aggregates are defined as classes in C#. User defined functions and procedures are defined as methods within a single class. For convenience, all user defined functions are defined in a class called UserDefinedFunctions.

Loading an Assembly The First Time

Although an assembly contains definitions for the objects in it, creating an assembly does not load the objects. The following are the steps for loading an assembly:

(1) Create the assembly using the CREATE ASSEMBLY statement:

CREATE ASSEMBLY ud
FROM 'c:\\gordon\blog\blog1.dll'
WITH PERMISSION_SET = SAFE

This loads the DLL stored in the location "c:\\gordon\blog\blog1.dll" into the database.

The clause "WITH PERMISSION_SET = SAFE" simply says that the DLL is very well-behaved and should not introduce any problems. This is good, because you need special administrator privileges to load unsafe code into the database. And, you cannot fool SQL Server (at least not easily): it validates the code to see whether any unsafe features (such as unions or pointers or file I/O) are being used.

(2) Define the objects in the DLL:

CREATE TYPE WeightedValue
EXTERNAL NAME ud.WeightedValue;


CREATE FUNCTION CreateWeightedValue (@val float, @wgt float)
RETURNS WeightedValue as
EXTERNAL NAME ud.UserDefinedFunctions.CreateWeightedValue

CREATE AGGREGATE WAvg (@val WeightedValue)
RETURNS float
EXTERNAL NAME ud.WAvg;

All of these use the EXTERNAL NAME option for the CREATE command. This "links" the user defined object in SQL to the code that defines it. The external name is of the form assembly.classname (for types and aggregates) or assembly.classname.methodname (for functions).

Loading an Assembly Again
Unfortunately, once an assembly is loaded and the objects defined, the following command returns an error:

DROP ASSEMBLY ud
The error is:

DROP ASSEMBLY failed because 'ud' is referenced by object 'CreateWeightedValue'.

Oops. SQL Server is smart enough to know that objects in the database rely on the assembly. Alas, it is not smart enough to drop them. For that, we need code such as the following:


IF OBJECT_ID('WAvg') IS NOT NULL
DROP AGGREGATE WAvg

IF OBJECT_ID('CreateWeightedValue') IS NOT NULL
DROP FUNCTION CreateWeightedValue

IF TYPE_ID('WeightedValue') IS NOT NULL
DROP TYPE WeightedValue

IF (SELECT COUNT(*) FROM sys.assemblies WHERE UPPER(name) = 'UD') > 0
DROP ASSEMBLY ud

This code checks to see whether each object exists before deleting it, which prevents errors from occurring. Notice that the TYPE is dropped after the AGGREGATE and FUNCTION. This is because the TYPE is used by them, and cannot be dropped while they reference it. All this code is in blog1-load.sql.

Using Wavg()

As mentioned at the beginning of the post, the right way to use this function is as follows:

SELECT AVG(purban), ud.dbo.WAVG(wval)
FROM (SELECT zc.*,
             ud.dbo.CreateWeightedValue(purban, population) as wval
      FROM zipcensus zc) zc

The innermost subquery reads from the table zipcensus, returning all the values in the row, plus a new column containing the weighted value. This is passed to the outermost query and used as the argument to the WAVG function.

By the way, the function can be used on any numeric type (integer, float, numeric, and so on), because SQL Server will convert all numeric values to FLOAT (the type of the arguments to CreateWeightedValue()).

We can verify that weighted value works, using code such as:

SELECT wval.Value, wval.Weight, wval.ToString()
FROM (SELECT ud.dbo.CreateWeightedValue(3, 4.5) as wval) a

This returns the values placed into the function.

Although this example does not require the functionality, user defined types can be placed in tables and used as variables in T-SQL code.

Friday, September 28, 2007

.NET Architecture for Extending SQL Server

I have chosen the .NET architecture for this exploration primarily because it is convenient -- I have ready access to it. It is worth pointing out that many of the ideas are independent of .NET and should be implementable in any database that has comparable facilities for extending functionality.

This post is divided into three sections: the good things about .NET; the bad things about .NET; and a brief overview of the .NET architecture for my purposes.

Advantages of .NET
.NET is the Microsoft architecture for developing multi-faceted applications. This is what makes it possible to extend the database to include data mining and statistics functionality. .NET ties together SQL Server, components of the operating system, and various programming languages (C# is the primary one, although other languages such as Visual Basic and C++ are also supported).

The biggest strength of .NET is that it supports much of what I want to do. It enables extending SQL Server with my own data types, functions, procedures, and aggregations. In addition, SQL Server is a very reasonable database for approaching analysis of moderately large sized data sets.

One big advantage of .NET is that it is single source. That is, all components are developed by Microsoft, which provides evidence that they can work together. This is a big advantage, because I want to think about databases and functionality, rather than APIs (application programming interfaces), version compatibility, installing multiple applications, and so on.

A second advantage of .NET is that it is readily available with the SQL Server database, giving it a standard database platform to interact with.

The C# language provided with .NET is a powerful, object-oriented language. I view it as a dialect of C++, probably because I've forgotten many of the nuances of C++. The syntax is similar, although .NET provides many more built-in classes.

A fourth advantage is that .NET makes it possible to distribute both the source code and the compiled code (as dynamic link libraries, better known as DLLs) regardless of the platform. Of course, .NET only runs on Microsoft operating systems. For true operating system independence, I would need to use a platform such as Java.

Less important but still notable is the fact that it has a consistent documentation interface (both on the web and with the product) and has many books written about it. For my purposes, I have extensively used C# 2005 (published by Wrox and available here) as well as the online and product documentation.

Disadvantages of .NET
.NET is sufficiently powerful for extending the functionality of a database to include data mining and statistical functionality. However, it does have some challenges.

Of course, this approach only works with SQL Server on Microsoft platforms. It is not a general purpose solution, although I do think the ideas should be readily extendible to other platforms.

In addition, there are multiple technical issues that I need to work around:

  • The maximum size of a user defined data type is 8000 bytes.
  • SQL Server does not allow overloading of function names. (That is, I cannot define two functions with the same name where one takes an integer argument and one takes a character argument; I have to give the two functions different names.)
  • SQL Server does not allow updating or modifying the database within a user defined function.
  • SQL Server does not allow aggregations to take more than one argument.
  • SQL Server does not allow functions or aggregations to take variable numbers of arguments.
  • C# has quite a few compiler directives which are needed for building libraries for SQL Server.
  • In order to be loaded as "safe" code into the database, C# is not allowed to use unions, pointers, modify the database, or do file I/O.
  • Loading assemblies (DLLs) into SQL Server is a multi-part process. The assembly is loaded and then individual elements have to be defined (although this is automated using Microsoft Visual Studio).
  • C# is not optimized for processing performance. That said, the performance seems quite reasonable to me.
  • Function calls in SQL Server seem to require a three-part name, such as ud.dbo.WAVG(). This is a minor irritation.
In many cases, there are ways around these limitations.

Overview of .NET
From my perspective, there are four components to the .NET framework:

  • SQL Server;

  • SQL Server Management Studio (for accessing the database);

  • C#; and,

  • Visual Studio (for writing C# code).
Ironically, though, the most important part is the hidden fifth component that ties these together, the .NET framework. The following picture captures my understanding of the relationships between these components. In actual fact, both SQL Server Management Studio and Visual Studio talk through the .NET interface. However, I am concerned with the interface between the database and C# for adding new functionality into the database.

Note that there are two ways that C# code can interact with the database. A C# application has the full ability to SELECT, UPDATE, and INSERT items in the database, as well as to change database parameters, create new databases, and so on (so long as the user has the appropriate rights). Much database programming is done through such applications (which can also be written in other languages). That is not what is happening here.

The second method is that C# can extend the functionality of the database through the creation of user defined types, functions, procedures, and aggregations. This is what I am interested in. Alas, we are not able to change the syntax of SQL. However, these elements are great building blocks for what I want to do.

Extending the functionality is simply a matter of doing the following:

  1. Create a project in Microsoft Visual Studio to produce a DLL.

  2. Load the DLL into SQL Server as an assembly.

  3. Define the elements in the assembly.

(These last two steps can be automated when your version of Visual Studio supports SQL Server assemblies.)

The project in Visual Studio is simply a set of C# classes, with a few well placed compiler directives. Steps (2) and (3) can be done automatically, but I have a script that does the work.

The next post assumes that the Visual Studio and SQL Server environment is available and describes how to add some useful functions.

Wednesday, September 26, 2007

Extending SQL Server to Support Some Statistical and Data Mining Functionality

My most recent book, Data Analysis Using SQL and Excel (order here), is about combining the power of databases and Excel for data analysis purposes. From working on that book, I have come to feel that SQL and data mining are natural allies, since both are about making sense of large amounts of data.

A surprising observation (at least to me) is that SQL operations are analogous to data mining operations. In many ways, aggregating data -- summarizing it along dimensions -- is similar to building models, since both are about capturing underlying structure in the data. And, in some cases, joining tables is similar to scoring models, since joining takes information from one row and "adds in" new information.

This idea has intrigued me since finishing the final draft. So, I decided to embark on an adventure. This adventure is to extend SQL functionality to include various types of models. My goal is to make data mining functionality a natural part of using SQL. Okay, that is a bit ambitious, because any SQL extension tends to look "grafted" onto the basic language. However, it is possible to add the concept of a "statistical model" to SQL and see where that goes.

The purpose of this blog is to capture the interesting ideas that I learn and put them in one place. I have already learned a lot about SQL, statistics, C#, and .NET programming by starting this endeavor. In addition, I also want to make the code available to other people who might find it useful.

For various reasons that I discuss in my first technical post, I have decided to implement this scenario using .NET (that is, C# and Microsoft SQL Server). By the way, this is not because of a great love for Microsoft development environments; I have very painful memories of trying to use very buggy release versions of Microsoft Visual C++ in the late 1980s. I am learning this environment "as I go", since I had never programmed in C# before April of this year.

I already have some ideas for upcoming posts:
  • Introduction to .NET for Extending SQL Server
  • Adding A Useful Function: Weighted Averages
  • Two More Useful Functions: MinOF and MaxOF
  • What is a Marginal Value Model?
  • Implementing A Basic Marginal Value Model
  • What is a Linear Regression Model?
  • Implementing A Linear Regression Model
  • Model Management and the Marginal Value Model
  • What is a Naive Bayesian Model?
  • Implementing a Naive Bayesian Model
  • What is a Survival Model?
  • Implementing a Survival Model
I do not have a schedule in mind, but this is an adventure and I'm very curious where it will lead.