The objective is to identify each of a large number of black-and-white rectangular pixel displays as one of the 26 capital letters in the English alphabet. The character images were based on 20 different fonts and each letter within these 20 fonts was randomly distorted to produce a file of 20,000 unique stimuli. Each stimulus was converted into 16 primitive numerical attributes (statistical moments and edge counts) which were then scaled to fit into a range of integer values from 0 through 15. We typically train on the first 16000 items and then use the resulting model to predict the letter category for the remaining 4000. See the article cited above for more details.
Attribute Information:
letter - capital letter (26 values from A to Z)
x-box - horizontal position of box (integer)
y-box - vertical position of box (integer)
width - width of box (integer)
high - height of box (integer)
onpix - total number of "on" pixels (integer)
x-bar - mean x of on pixels in box (integer)
y-bar - mean y of on pixels in box (integer)
x2bar - mean x variance (integer)
y2bar - mean y variance (integer)
xybar - mean x y correlation (integer)
x2ybr - mean of x * x * y (integer)
xy2br - mean of x * y * y (integer)
x-ege - mean edge count left to right (integer)
xegvy - correlation of x-ege with y (integer)
y-ege - mean edge count bottom to top (integer)
yegvx - correlation of y-ege with x (integer)
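The train/test protocol above (first 16,000 items for training, remaining 4,000 for evaluation) can be sketched as follows. This is a minimal illustration using synthetic stand-in data with the same shape as the dataset (20,000 rows, 16 integer attributes in 0–15, one of 26 letter labels) and a simple nearest-centroid baseline, not the method from the original article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 20,000 stimuli: 16 integer attributes
# scaled to 0..15, plus a class label 0..25 standing for 'A'..'Z'.
n, d, n_classes = 20_000, 16, 26
X = rng.integers(0, 16, size=(n, d)).astype(float)
y = rng.integers(0, n_classes, size=n)

# Split as described: train on the first 16,000, test on the last 4,000.
X_train, y_train = X[:16_000], y[:16_000]
X_test, y_test = X[16_000:], y[16_000:]

# Nearest-centroid classifier: predict the class whose mean feature
# vector is closest in Euclidean distance.
centroids = np.stack([X_train[y_train == c].mean(axis=0)
                      for c in range(n_classes)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y_test).mean()
```

On the real data one would read the 20,000 rows from the dataset file instead of generating them; the split and evaluation logic is unchanged.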
This is an implementation of Leo Breiman's ringnorm example [1]. It is a 20-dimensional, two-class classification problem in which each class is drawn from a multivariate normal distribution.
Class 1 has mean zero and covariance 4 times the identity.
Class 2 has mean (a, a, ..., a) and unit covariance, where a = 2/sqrt(20).
Breiman reports the theoretical expected misclassification rate as 1.3%. Using CART with 300 training examples, he found an error rate of 21.4%.
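A sample with the two class distributions above can be generated as follows. This sketch uses 150 examples per class to match Breiman's 300-example training set; the per-class balance is an assumption of this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
a = 2 / np.sqrt(d)
n_per_class = 150  # 300 training examples total, split evenly (assumption)

# Class 1: mean zero, covariance 4 * I (i.e., standard deviation 2
# in every dimension, independent coordinates).
x1 = rng.normal(loc=0.0, scale=2.0, size=(n_per_class, d))

# Class 2: mean (a, a, ..., a), unit covariance.
x2 = rng.normal(loc=a, scale=1.0, size=(n_per_class, d))

X = np.vstack([x1, x2])
y = np.repeat([0, 1], n_per_class)
```

Because both covariances are multiples of the identity, each coordinate can be drawn independently, which is why `scale` (a standard deviation) suffices here instead of a full covariance matrix.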