The division of an image into meaningful structures, image segmentation, is often an essential step in image analysis, object representation, visualization, and many other image processing tasks. Segmentation partitions an image into distinct regions, each containing pixels with similar attributes. To be meaningful and useful for image analysis and interpretation, the regions should relate strongly to the depicted objects or features of interest. Meaningful segmentation is the first step from low-level image processing, which transforms a greyscale or colour image into one or more other images, to high-level image description in terms of features, objects, and scenes. The success of image analysis depends on the reliability of segmentation, but accurately partitioning an image is generally a very challenging problem.
Image segmentation is the division of an image into regions or categories, which correspond to different objects or parts of objects. Every pixel in an image is allocated to one of a number of these categories. A good segmentation is typically one in which:
- Pixels in the same category have similar greyscale or multivariate values and form a connected region;
- Neighbouring pixels in different categories have dissimilar values.
The goal of image segmentation is to cluster pixels into salient image regions, i.e., regions corresponding to individual surfaces, objects, or natural parts of objects. Segmentation could be used for object recognition, occlusion boundary estimation within motion or stereo systems, image compression, image editing, or image database look-up.
Image segmentation is the process of partitioning a digital image into multiple segments. The goal of segmentation is to simplify and change the representation of an image into something that is more meaningful and easier to analyze. Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity or texture. Adjacent regions are significantly different with respect to the same characteristic.
Figure 1: Image segmentation example
The main goal of segmentation is to divide an image into parts having a strong correlation with areas of interest in the image. All image processing operations generally aim at a better recognition of objects of interest, i.e., at finding suitable local features that distinguish them from other objects and from the background. The next step is to check each individual pixel to see whether it belongs to an object of interest or not. This operation is called segmentation and produces a binary image: a pixel has the value one if it belongs to the object and zero otherwise. Segmentation is the operation at the threshold between low-level image processing and image analysis. After segmentation, it is known which pixel belongs to which object. The image is partitioned into regions, and the discontinuities between them form the region boundaries. The different types of segmentation are:
Pixel-Based Segmentation: Point-based or pixel-based segmentation is conceptually the simplest approach: each pixel is classified independently, typically by comparing its grey value to a threshold.
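As a minimal sketch of this idea, the snippet below classifies each pixel of a toy greyscale image independently by its own value (the image values and the threshold of 128 are illustrative assumptions, not prescribed by the text):

```python
import numpy as np

# Toy 4x4 greyscale image (values 0-255); both the values and the
# threshold of 128 are illustrative assumptions.
image = np.array([
    [ 10,  20, 200, 210],
    [ 15,  25, 220, 230],
    [ 12,  18, 205, 215],
    [ 11,  22, 201, 211],
], dtype=np.uint8)

# Classify every pixel independently by its own grey value:
# 1 = object, 0 = background.
binary = (image >= 128).astype(np.uint8)
```

The result is exactly the binary image described above: each pixel is one if it belongs to the object and zero otherwise, with no reference to its neighbours.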
Edge-Based Segmentation: Even with perfect illumination, pixel-based segmentation results in a bias in the size of segmented objects when the objects show variations in their gray values. Darker objects become too small, brighter objects too large. The size variations result from the fact that the gray values at the edge of an object change only gradually from the background to the object value. No bias in size occurs if we take the mean of the object and background gray values as the threshold. However, this approach is only possible if all objects show the same gray value or if we apply a different threshold for each object. An edge-based segmentation approach can be used to avoid a bias in the size of the segmented object without using a complex thresholding scheme. Edge-based segmentation is based on the fact that the position of an edge is given by an extreme of the first-order derivative or a zero crossing of the second-order derivative.
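The derivative criterion can be illustrated on a one-dimensional grey-value profile across an edge (the profile values below are invented for illustration):

```python
import numpy as np

# A 1-D grey-value profile crossing an edge: background ~10, object ~200.
# The values are invented for illustration.
profile = np.array([10, 10, 12, 60, 150, 198, 200, 200], dtype=float)

# First-order derivative: the edge lies at its extremum (steepest change).
d1 = np.diff(profile)
edge_at_extremum = int(np.argmax(np.abs(d1)))  # index of the steepest step

# Second-order derivative: the edge lies at its zero crossing,
# i.e. where consecutive second differences change sign.
d2 = np.diff(profile, n=2)
zero_crossings = np.where(np.sign(d2[:-1]) != np.sign(d2[1:]))[0]
```

Both criteria locate the edge in the middle of the gradual grey-value transition rather than at a fixed grey level, which is why the resulting object size does not depend on how bright the object is.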
Segmentation is one of the most important steps in image processing. It divides an entire image into several parts that are more meaningful and easier to process further; rejoined, these parts cover the entire image. Segmentation may also depend on various features contained in the image, such as colour or texture. Before de-noising an image, it is segmented to recover the original image. The main aim of segmentation is to reduce the information for easy analysis. There are three general approaches to segmentation, termed thresholding, edge-based methods, and region-based methods.
In thresholding, pixels are allocated to categories according to the range of values in which a pixel lies. For example, pixels with values less than 128 can be placed in one category and the rest in the other, and the boundaries between adjacent pixels in different categories can then be superimposed in white on the original image. Such a threshold can successfully segment an image into its predominant region types.
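A minimal sketch of this thresholding-and-overlay step, on a toy NumPy image (the image values follow no real data, and the one-sided rule that marks only one pixel of each differing pair as boundary is a simplification chosen for brevity):

```python
import numpy as np

# Toy greyscale image; the values and the threshold of 128 are illustrative.
image = np.array([
    [ 10,  20, 200, 210],
    [ 15,  25, 220, 230],
    [ 12,  18, 205, 215],
    [ 11,  22, 201, 211],
], dtype=np.uint8)

# Allocate pixels to two categories by the 128 threshold.
labels = (image >= 128).astype(np.uint8)

# Mark a pixel as boundary when its right-hand or lower neighbour falls
# in a different category (one-sided rule, for simplicity).
boundary = np.zeros(labels.shape, dtype=bool)
boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]

# Superimpose the category boundaries in white on the original image.
overlay = image.copy()
overlay[boundary] = 255
```

Here the boundary runs down the single column where the two categories meet, and those pixels are painted white while the rest of the image is unchanged.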
In edge-based segmentation, an edge filter is applied to the image, pixels are classified as edge or non-edge depending on the filter output, and pixels which are not separated by an edge are allocated to the same category.
Finally, region-based segmentation algorithms operate iteratively by grouping together pixels which are neighbours and have similar values and splitting groups of pixels which are dissimilar in value.
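The merging half of this idea can be sketched as a simple region-growing routine; the nested-list image, the seed position, and the tolerance of 20 grey levels are all illustrative assumptions, and the splitting half of split-and-merge is omitted for brevity:

```python
from collections import deque

# Toy greyscale image as nested lists; two flat intensity regions.
image = [
    [10, 11, 10, 200],
    [12, 10, 11, 210],
    [11, 12, 200, 205],
]

def region_grow(img, seed, tol=20):
    """Grow a region from `seed`, repeatedly absorbing 4-connected
    neighbours whose value is within `tol` of the seed value
    (an illustrative similarity rule)."""
    h, w = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Grow from the top-left pixel: the dark background region is collected,
# while the bright pixels on the right are excluded.
region = region_grow(image, (0, 0))
```

The iteration stops when no neighbouring pixel is similar enough to join, leaving the bright object pixels in a separate region, exactly the grouping behaviour the text describes.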