In many machine vision and image processing algorithms, simplifying assumptions are made about the uniformity of intensities in local image regions. However, images of real objects often do not exhibit regions of uniform intensities. Extraction of effective features of objects is an important area of research in the intelligent processing of image data. Texture analysis is one of the fundamental aspects of human vision by which we discriminate between surfaces and objects. In a similar manner, computer vision can take advantage of the cues provided by surface texture to distinguish and recognize objects. In computer vision, texture analysis may be used alone or in combination with other sensed features (e.g. color, shape, or motion) to perform the task of recognition.
Overview of Texture Analysis
Texture analysis refers to the characterization of regions in an image by their texture content. It attempts to quantify intuitive qualities described by terms such as rough, smooth, silky, or bumpy as a function of the spatial variation in pixel intensities; in this sense, roughness or bumpiness refers to variation in the intensity values, or gray levels. Texture analysis is used in many applications, including remote sensing, automated inspection, and medical image processing. It can be used to find texture boundaries, a process called texture segmentation, and it is particularly helpful when the objects in an image are characterized more by their texture than by their intensity, so that traditional thresholding techniques cannot be applied effectively.
Texture analysis is concerned mainly with feature extraction and image coding. Feature extraction identifies and selects a set of distinguishing and sufficient features to characterize a texture. Image coding derives a compact texture description from the selected features. By representing a complex texture with a small number of measurable features or parameters, texture analysis achieves a great reduction in dimensionality and enables automated texture processing.
The main aim of texture analysis is to complement the tonal information of an image, since texture is a major image property that carries important information about the structural arrangement of features in the image. Texture is typically measured statistically, using a window moved across the image.
Significance of Texture Analysis
Generally speaking, textures are complex visual patterns composed of entities, or sub-patterns, that have characteristic brightness, color, slope, size, and so on. Texture can thus be regarded as a similarity grouping in an image. The local sub-pattern properties give rise to the perceived lightness, uniformity, density, roughness, regularity, linearity, frequency, phase, directionality, coarseness, randomness, fineness, smoothness, granulation, and so on, of the texture as a whole.
There are four major issues in texture analysis:
- Feature extraction: to compute a characteristic of a digital image able to numerically describe its texture properties;
- Texture discrimination: to partition a textured image into regions, each corresponding to a perceptually homogeneous texture (leads to image segmentation);
- Texture classification: to determine to which of a finite number of physically defined classes (such as normal and abnormal tissue) a homogeneous texture region belongs;
- Shape from texture: to reconstruct 3D surface geometry from texture information.
Texture analysis plays an important role in many image analysis applications. In industrial visual inspection, texture information can be used to enhance the accuracy of measurements. Texture methods are also used in medical image analysis, biometric identification, remote sensing, content-based image retrieval, document analysis, texture synthesis, and model-based image coding.
Texture is an important spatial feature for identifying objects. As an intrinsic property of object surfaces, texture is used by the visual perception system to understand a scene; texture analysis is therefore an important component of image processing.
Texture Analysis Example
In areas with smooth texture, the range of values in the neighborhood around a pixel is small; in areas of rough texture, the range is larger. Similarly, the standard deviation of the pixels in a neighborhood indicates the degree of variability of pixel values in that region. MATLAB's Image Processing Toolbox provides filtering functions based on these statistics: rangefilt, stdfilt, and entropyfilt.
The functions all operate in a similar way: they define a neighborhood around the pixel of interest, calculate the statistic for that neighborhood, and use that value as the value of the pixel of interest in the output image.
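This moving-window mechanism can be sketched in Python with NumPy and SciPy (rather than the MATLAB toolbox functions the text describes); `local_statistic` is an illustrative name, not a library function:

```python
import numpy as np
from scipy.ndimage import generic_filter

def local_statistic(image, statistic, size=3):
    """Slide a size-by-size window over the image and replace each pixel
    with the given statistic of its neighborhood (borders padded symmetrically)."""
    return generic_filter(np.asarray(image, dtype=float), statistic, size=size, mode="reflect")

# Two flat regions separated by a sharp boundary: the local range is 0 inside
# either flat region and jumps near the boundary between them.
img = np.array([[1, 1, 1, 9, 9, 9],
                [1, 1, 1, 9, 9, 9],
                [1, 1, 1, 9, 9, 9]], dtype=float)

smooth_vs_rough = local_statistic(img, np.ptp)  # np.ptp = max - min of each window
```

Any scalar-valued statistic (e.g. `np.std`) can be substituted for `np.ptp`, which is how the three toolbox functions differ from one another.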
This example shows how the function operates on a simple array.
A = [1 2 3 4 5; 6 7 8 9 10; 11 12 13 14 15; 16 17 18 19 20]
B = rangefilt(A)
The following figure shows how the value of an element in the output image is calculated from its neighborhood. By default, the function uses a 3-by-3 neighborhood, but you can specify neighborhoods of different shapes and sizes.
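For readers working outside MATLAB, the same range filter can be approximated with SciPy's morphological filters; the assumption here is that `mode="reflect"` reproduces rangefilt's symmetric border padding:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

A = np.array([[1, 2, 3, 4, 5],
              [6, 7, 8, 9, 10],
              [11, 12, 13, 14, 15],
              [16, 17, 18, 19, 20]])

# Local range = local maximum - local minimum over each 3-by-3 neighborhood,
# with symmetric ("reflect") padding at the image borders.
B = maximum_filter(A, size=3, mode="reflect") - minimum_filter(A, size=3, mode="reflect")
```

Note how the corner and edge values are smaller than the interior ones: the mirrored padding narrows the effective spread of values there.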
Figure 1: Determining pixel values in the range-filtered output image.
The stdfilt and entropyfilt functions operate similarly, defining a neighborhood around the pixel of interest and calculating the statistic for the neighborhood to determine the pixel value in the output image. The stdfilt function calculates the standard deviation of all the values in the neighborhood.
The entropyfilt function calculates the entropy of the neighborhood and assigns that value to the output pixel. By default, the entropyfilt function defines a 9-by-9 neighborhood around the pixel of interest. To calculate the entropy of an entire image, use the entropy function.
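A rough Python equivalent of these two filters can again be built on `generic_filter`; this is a sketch under two stated assumptions: the neighborhood histogram uses 256 gray levels (as for uint8 images), and `ddof=1` is passed to `np.std` because MATLAB's stdfilt normalizes by N-1:

```python
import numpy as np
from scipy.ndimage import generic_filter

def neighborhood_entropy(values):
    """Shannon entropy (in bits) of the gray-level histogram of one neighborhood."""
    counts = np.bincount(values.astype(np.uint8), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def entropy_filter(image, size=9):
    """Local entropy image, roughly analogous to entropyfilt (default 9-by-9 window)."""
    return generic_filter(np.asarray(image, dtype=float), neighborhood_entropy,
                          size=size, mode="reflect")

def std_filter(image, size=3):
    """Local standard deviation, roughly analogous to stdfilt (sample std, ddof=1)."""
    return generic_filter(np.asarray(image, dtype=float),
                          lambda v: np.std(v, ddof=1), size=size, mode="reflect")

# A flat region carries no texture, so its local entropy and std are zero;
# a binary checkerboard alternates between two gray levels, giving close to 1 bit.
flat = np.full((12, 12), 7.0)
checker = (np.indices((12, 12)).sum(axis=0) % 2) * 255.0
```

The 255-valued checkerboard illustrates the point of entropy over range: its local range is maximal everywhere, yet its entropy stays near 1 bit because only two gray levels occur.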