Saturday, October 1, 2011

The Average Hotel Does Not Get The Average Rating

The millions of travelers who review hotels, restaurants, and other attractions on TripAdvisor also supply a numeric rating by clicking one of five circles ranging from 1 for "terrible" to 5 for "excellent." On the whole, travelers are pretty kind. The average review rating for hotels and other lodgings is over 3.9. The median score is 4, and since that middle review is lost somewhere in a huge pile of 4-ratings, well over half of hotel reviews give a 4 or 5 rating.

So with such kind reviewers, most hotels must have a rating over 4 and hoteliers must all love us, right? Actually, no. The average of all hotel ratings is 3.6. Here's why: some large, frequently-reviewed hotels have thousands of reviews. It is hardly surprising that the Bellagio in Las Vegas has about 250 times more reviews than, say, the Cambridge Gateway Inn, an unloved motel in Cambridge, Massachusetts. It may or may not be surprising that these oft-reviewed properties tend to be well-liked by our reviewers. Surprising or not, it's true: the hotels with the most reviews have a higher average rating than the long tail of hotels, motels, B&Bs, and inns with only a handful of reviews each.
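
A hypothetical two-hotel example shows how the two averages diverge. Suppose one hotel has 1,000 reviews averaging 4.2 and another has just 10 reviews averaging 2.0. The average review rating is (1,000 × 4.2 + 10 × 2.0) ÷ 1,010 ≈ 4.18, while the average hotel rating is (4.2 + 2.0) ÷ 2 = 3.1. Weighting by review count pulls the first average toward the popular hotel's score; the per-hotel average gives both hotels an equal say.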

The chart compares the distribution of user review scores with the distribution of hotel average scores.

For the curious, here are the top 10 hotels on TripAdvisor by number of reviews:


Luxor Las Vegas
Majestic Colonial Punta Cana
Bellagio Las Vegas
MGM Grand Hotel and Casino
Excellence Punta Cana
Flamingo Hotel & Casino
Venetian Resort Hotel Casino
Hotel Pennsylvania New York
Excalibur Hotel & Casino
Treasure Island - TI Hotel & Casino

Not all of these are beloved by TripAdvisor users. The Hotel Pennsylvania drags the average down, since it receives more 1-ratings than any other score. Despite that, as a group these hotels have a higher-than-average score. The moral of the story is that you can't extrapolate from one level of aggregation to another without knowing how much weight to give each unit. In the last US presidential election, the average state voted Republican, but the average voter voted Democrat.

Tuesday, August 23, 2011

Common Table Expressions

It's been a while since I posted. My new role at TripAdvisor has been keeping me pretty busy! My first post after a long absence is about a feature of SQL that I have recently fallen in love with. Usually, I leave it to Gordon to write about SQL since he is an expert in that field, but this particular feature is one that he does not write about in Data Analysis Using SQL and Excel. The feature is called common table expressions or, more simply, the WITH statement.

Common table expressions allow you to name a bunch of useful subqueries before using them in your main query. I think of common table expressions as subqueries because that is what they usually replace in my code, but they are actually a lot more convenient than subqueries because they aren't "sub". They are there at the top level, so your main query can refer to them as many times as you like anywhere in the query. In that way, they are more like temporary tables or views. Unlike tables and views, however, you don't have to be granted permission to create them, and you don't have to remember to clean them up when you are done. Common table expressions last only as long as the query is running.
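
Here is a minimal sketch of the syntax, using a hypothetical reviews table with one row per review (the table and column names are mine, just for illustration). Each name defined in the WITH clause is visible to everything that follows it, including the main query:

with hotel_ratings as (
    select hotel_id, avg(rating) as avg_rating, count(*) as num_reviews
    from reviews /* hypothetical table: one row per review */
    group by hotel_id
),
busy_hotels as (
    /* the second expression can refer to the first */
    select hotel_id from hotel_ratings where num_reviews >= 100
)
select hr.hotel_id, hr.avg_rating
from hotel_ratings hr join
     busy_hotels bh on hr.hotel_id = bh.hotel_id
order by hr.avg_rating desc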

An example will help show why common table expressions are so useful. Suppose (because it happens to be true) that I have a complicated query that returns a list of hotels along with various metrics. These could be as simple as the number of rooms, or the average daily rate, or the average rating by our reviewers, or it could be a complex expression to produce a model score. For this purpose, it doesn't matter what the metric is; what matters is that I want to compare "similar" properties, for some definition of similar. The first few rows returned by my complicated query look something like this:

[sample rows: id, hotel, ranking, feature, metric1, metric2]

Similar hotels have the same value of feature and similar ranking. In fact, I want to compare each hotel with four others: The one with matching feature that is next above it in rank, the one with matching feature that is next below it in rank, the one with non-matching feature that is next above it in rank, and the one with non-matching feature that is next below it in rank. Of course, for any one hotel, some of these neighbors may not exist. The top ranked hotel has no neighbors above it, for instance.

My final query involves joining the result pictured above with itself four times using non-equi joins, but for simplicity, I'll leave out the matching and non-matching features bit and simply compare each hotel to the one above and below it in rank. The ranking column is dense, so I can use equi joins on ranking=ranking+1 and ranking=ranking-1 to achieve this. Here is the query:

with ranks (id, hotel, ranking, feature, metric1, metric2)
    as (select . . .) /* complicated query to get rankings */
select r0.id, r0.hotel, 
    r0.metric1 as m1_self, r1.metric1 as m1_up, r2.metric1 as m1_down
from ranks r0 /* each hotel */ left join
      ranks r1 on r0.ranking=r1.ranking+1 /* the one above */ left join
      ranks r2 on r0.ranking=r2.ranking-1 /* the one below */
order by r0.ranking

The common table expression gives my complicated query the name ranks. In the main query, ranks appears three times with aliases r0, r1, and r2. The outer joins ensure that I don't lose a hotel just because it is missing a neighbor above or below. The query result looks like this:

[query result: id, hotel, m1_self, m1_up, m1_down]

The Hotel Commonwealth has the highest score, a 99, so there is nothing above it. In this somewhat contrived example, the hotel below it is the Lenox with a score of 98, and so on down the list. To write this query using subqueries, I would have had to repeat the subquery three times, which would not only be ugly, it would risk actually running the subquery three times, since the query analyzer might not notice that the three copies are identical.
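
For the curious, here is a sketch of the full four-neighbor version described above, restoring the matching and non-matching feature comparison. Because rankings within a feature group are no longer consecutive, each neighbor is found with a correlated subquery; this is one way to write it, not necessarily how my production query looks:

with ranks (id, hotel, ranking, feature, metric1, metric2)
    as (select . . .) /* complicated query to get rankings */
select r0.id, r0.hotel, r0.metric1 as m1_self,
    mu.metric1 as m1_match_up, md.metric1 as m1_match_down,
    nu.metric1 as m1_other_up, nd.metric1 as m1_other_down
from ranks r0 /* each hotel */ left join
     ranks mu /* matching feature, next above */
       on mu.feature = r0.feature
      and mu.ranking = (select max(x.ranking) from ranks x
                        where x.feature = r0.feature
                          and x.ranking < r0.ranking) left join
     ranks md /* matching feature, next below */
       on md.feature = r0.feature
      and md.ranking = (select min(x.ranking) from ranks x
                        where x.feature = r0.feature
                          and x.ranking > r0.ranking) left join
     ranks nu /* non-matching feature, next above */
       on nu.feature <> r0.feature
      and nu.ranking = (select max(x.ranking) from ranks x
                        where x.feature <> r0.feature
                          and x.ranking < r0.ranking) left join
     ranks nd /* non-matching feature, next below */
       on nd.feature <> r0.feature
      and nd.ranking = (select min(x.ranking) from ranks x
                        where x.feature <> r0.feature
                          and x.ranking > r0.ranking)
order by r0.ranking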

Sunday, May 22, 2011

JMP Webcast: Measuring What Matters

On Tuesday, May 24 at 1:00pm Eastern Daylight Time, I will be presenting a webcast on behalf of JMP, a visual data exploration and mining tool. The main theme of the talk is that companies tend to manage to metrics, so it is very important that the metrics are well-chosen. I will illustrate this with a small case study from the world of on-line retailing recommendations. A secondary theme is the importance of careful data exploration in preparation for modeling--a task JMP is well-suited to.

-Michael

Register.

Tuesday, May 17, 2011

Michael Berry announces a new position

Hello Readers,

As some of you will already have heard, I have accepted the position of Business Intelligence Director at TripAdvisor for Business--the part of TripAdvisor that sells products and services to businesses rather than consumers. The largest part of T4B, as this side of the business is called internally, is selling direct links to hotel web sites that appear right next to the hotel reviews on TripAdvisor.com. Subscribers are also able to make special offers ("free parking", "20% off", "a free bottle of wine with your meal", . . .) directly on the TripAdvisor site. Another T4B product is listings for vacation rental properties. There is a lot of data, and a lot of questions to be answered!

I will continue to contribute to this blog and I will continue to work with Gordon and Brij on the data mining courses that Data Miners produces. TripAdvisor is based in Newton, Massachusetts--not far from my home in Cambridge. It will be novel going home every night after work!

-Michael

Friday, April 1, 2011

Data Mining Techniques 3rd Edition

Gordon and I spent much of the last year writing the third edition of Data Mining Techniques and now, at last, I am holding the finished product in my hand. In the 14 years since the first edition came out, our knowledge has increased by a factor of at least 10 while the page count has only doubled, so I estimate the information density has increased by a factor of five! I hope reviewers will agree that our writing skills have also improved with time and practice. In short, I'm very proud of our latest effort and I hope our readers will continue to find it useful for the next 14 years!


Table of Contents
Chapter 1 What Is Data Mining and Why Do It? 1
Chapter 2 Data Mining Applications in Marketing and Customer Relationship Management 27
Chapter 3 The Data Mining Process 67
Chapter 4 Statistics 101: What You Should Know About Data 101
Chapter 5 Descriptions and Prediction: Profiling and Predictive Modeling 151
Chapter 6 Data Mining Using Classic Statistical Techniques 195
Chapter 7 Decision Trees 237
Chapter 8 Artificial Neural Networks 283
Chapter 9 Nearest Neighbor Approaches: Memory-Based Reasoning and Collaborative Filtering 323
Chapter 10 Knowing When to Worry: Using Survival Analysis to Understand Customers 359
Chapter 11 Genetic Algorithms and Swarm Intelligence 399
Chapter 12 Tell Me Something New: Pattern Discovery and Data Mining 431
Chapter 13 Finding Islands of Similarity: Automatic Cluster Detection 461
Chapter 14 Alternative Approaches to Cluster Detection 501
Chapter 15 Market Basket Analysis and Association Rules 537
Chapter 16 Link Analysis 583
Chapter 17 Data Warehousing, OLAP, Analytic Sandboxes, and Data Mining 615
Chapter 18 Building Customer Signatures 657
Chapter 19 Derived Variables: Making the Data Mean More 695
Chapter 20 Too Much of a Good Thing? Techniques for Reducing the Number of Variables 737
Chapter 21 Listen Carefully to What Your Customers Say: Text Mining 777
Index 823

Tuesday, March 22, 2011

How to calculate R-squared for a decision tree model

A client recently wrote to us saying that she liked decision tree models, but for a model to be used at her bank, the risk compliance group required an R-squared value for the model and her decision tree software doesn't supply one. How should she fill in the blank? There is more than one possible answer.

Start with the definition of R-squared for regular (ordinary least squares) regression. There are three common ways of describing it. For OLS they all describe the same calculation, but they suggest different ways of extending the definition to other models. The calculation is 1 minus the ratio of the sum of the squared residuals to the sum of the squared differences of the actual values from their average value.
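
In symbols, with actual values $y_i$, predictions $\hat{y}_i$, and mean $\bar{y}$ of the actuals:

$$ R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} $$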

Divide the numerator and denominator of that ratio by the number of observations and the denominator becomes the variance of the actual values while the numerator becomes the variance of the residuals. So one way of describing R-squared is as the proportion of variance explained by the model.

A second way of describing the same ratio is that it shows how much better the model is than the null model, which consists of ignoring the explanatory variables and always predicting the average. (If you are always going to guess the same value, the average is the value that minimizes the squared error.)

Yet a third way of thinking about R-squared is that it is the square of the correlation r between the predicted and actual values. (That, of course, is why it is called R-squared.)

Back to the question about decision trees: When the target variable is continuous (a regression tree), there is no need to change the definition of R-squared. The predicted values are discrete, but everything still works.

When the target is a binary outcome, you have a choice. You can stick with the original formula. In that case, the predicted values are discrete with values between 0 and 1 (as many distinct estimates as the tree has leaves) and the actuals are either 0 or 1. The average of the actuals is the proportion of ones (i.e. the overall probability of being in class 1).  This method is called Efron's pseudo R-squared.
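
As a sketch, here is that calculation in SQL, assuming a hypothetical scores table with one row per case, an actual column that is 0 or 1, and a predicted column holding the leaf proportion from the tree (the table and column names are mine):

with base as (
    select avg(actual * 1.0) as avg_actual from scores
)
select 1 - sum((s.actual - s.predicted) * (s.actual - s.predicted))
         / sum((s.actual - b.avg_actual) * (s.actual - b.avg_actual))
       as pseudo_r_squared
from scores s cross join base b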

Alternatively, you can say that the job of the model is to classify things. The null model would be to always predict the most common class. A good pseudo R-squared then measures how much better your model does: in other words, the ratio of the proportion correctly classified by your model to the proportion in the most common class.
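
Under the same hypothetical scores table, and assuming (my choice, not a universal rule) that a leaf proportion of 0.5 or more classifies a case as a 1, that ratio might be computed like this:

with accuracy as (
    select avg(case when (predicted >= 0.5 and actual = 1)
                      or (predicted <  0.5 and actual = 0)
                    then 1.0 else 0.0 end) as prop_correct,
           avg(actual * 1.0) as prop_ones
    from scores
)
select prop_correct /
       case when prop_ones >= 0.5 then prop_ones
            else 1 - prop_ones end as pseudo_r_squared
from accuracy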

There are many other pseudo R-squares described on a page put up by the statistical consulting services group at UCLA.

Friday, March 11, 2011

Upcoming talks and classes

Michael will be doing a fair amount of teaching and presenting over the next several weeks:

March 16-18 Data Mining Techniques: Theory and Practice at SAS Institute in Chicago.

March 29 Applying Survival Analysis to Forecasting Subscriber Levels at the New England Statistical Association Meeting.

April 7 Predictive Modeling for the Non-Statistician at the TDWI conference in Washington, DC.