Understanding of Digital Images
/ December 28, 2017

Digital images are the most common and convenient means of conveying or transmitting information. An image is worth a thousand words. Pictures concisely convey information about positions, sizes and inter-relationships between objects. They portray spatial information that we can recognize as objects. Human beings are good at deriving information from such images because of our innate visual and mental abilities; about 75% of the information received by humans is in pictorial form.

Digital Image Overview

Digital images are made of picture elements called pixels. Typically, pixels are organized in an ordered rectangular array. The size of an image is determined by the dimensions of this pixel array: the image width is the number of columns, and the image height is the number of rows. Thus the pixel array is a matrix of M columns × N rows. To refer to a specific pixel within the image matrix, we define its coordinates x and y. The coordinate system of image matrices defines x as increasing from left to right and y as increasing from top to bottom. Compared to normal mathematical convention, the origin is in the top-left corner and the y coordinate is flipped. Why is…
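A minimal sketch of these conventions, using plain Python lists as a stand-in for a real image buffer (the `get_pixel` helper and the sample values are illustrative only):

```python
# A toy image as a matrix of M columns x N rows, stored row by row.
# The origin is the top-left pixel; x grows rightward, y grows downward.
M, N = 3, 2  # width (columns), height (rows)
image = [[10 * y + x for x in range(M)] for y in range(N)]

def get_pixel(img, x, y):
    """Return the pixel at image coordinate (x, y): row y, column x."""
    return img[y][x]

print(get_pixel(image, 2, 1))  # bottom-right pixel -> 12
```

Note the index order: because the outer list holds rows, the pixel at image coordinate (x, y) lives at `img[y][x]`, not `img[x][y]` — exactly the flip described above.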

Introduction of Data Compression
/ December 27, 2017

Data compression is used just about everywhere. Data compression involves the development of a compact representation of information. Most representations of information contain large amounts of redundancy, which can exist in various forms. Internet users who download or upload files from/to the web, or use email to send or receive attachments, will most likely have encountered files in compressed format.

Data Compression Overview

With the expanding use of computers in various disciplines, the number of data processing applications is also increasing, requiring the processing and storage of large volumes of data. Data compression is primarily a branch of information theory which deals with techniques for minimizing the amount of data to be transmitted and stored. Data compression is often referred to as coding, where coding is a very general term encompassing any special representation of data which satisfies a given need. Information theory is defined as the study of efficient coding and its consequences, such as the speed of transmission.

What is Data Compression?

Today, with the growing demands of information storage and data transfer, data compression is becoming increasingly important. Compression is the process of encoding data more efficiently to achieve a reduction in file size. One type of compression…
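As one concrete illustration of removing redundancy, here is a minimal run-length encoding sketch in Python (the helper names are illustrative, not from any particular library):

```python
def rle_encode(data):
    """Run-length encode a string: 'aaab' -> [('a', 3), ('b', 1)]."""
    encoded = []
    for ch in data:
        if encoded and encoded[-1][0] == ch:
            # Same symbol as the previous run: extend its count.
            encoded[-1] = (ch, encoded[-1][1] + 1)
        else:
            # New symbol: start a fresh run.
            encoded.append((ch, 1))
    return encoded

def rle_decode(encoded):
    """Invert rle_encode, reproducing the original string exactly."""
    return "".join(ch * count for ch, count in encoded)

message = "aaaabbbcca"
packed = rle_encode(message)
print(packed)  # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
assert rle_decode(packed) == message
```

Run-length coding is lossless: the decoder recovers the input bit-for-bit, and it only wins when the data actually contains repeated runs — a reminder that compression exploits redundancy rather than creating savings from nothing.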

What is Pattern Recognition
/ December 26, 2017

One of the most important capabilities of mankind is learning by experience, by our endeavors, by our faults. By the time we reach the age of five, most of us are able to recognize digits and characters, whether big or small, uppercase or lowercase, rotated or tilted. We are able to recognize a character even if it is on mutilated paper, partially occluded, or set against a cluttered background. Looking at the history of the human search for knowledge, it is clear that humans are fascinated with recognizing patterns in nature, understanding them, and attempting to relate patterns to a set of rules. Informally, a pattern is defined by the common denominator among the multiple instances of an entity. Pattern recognition has its origins in engineering, whereas machine learning grew out of computer science. Pattern recognition is concerned with the design and development of systems that recognize patterns in data. The purpose of a pattern recognition program is to analyze a scene in the real world and to arrive at a description of the scene which is useful for the accomplishment of some task.

Introduction of Pattern Recognition

Pattern recognition is a mature but exciting and fast-developing field,…

What is Fingerprint Recognition
/ December 25, 2017

Fingerprint recognition is one of the most well-known and publicized biometrics. Because of their uniqueness and consistency over time, fingerprints have been used for identification for over a century, more recently becoming automated (i.e. a biometric) due to advancements in computing capabilities. Fingerprint identification is popular because of the inherent ease of acquisition, the numerous sources (ten fingers) available for collection, and their established use and collection by law enforcement and immigration.

Introduction of Fingerprint Recognition

Fingerprint recognition is one of the most popular and accurate biometric technologies, and one of the oldest methods of identification using biometric traits. A large number of archaeological artifacts and historical items show signs of human fingerprints on stone. Ancient people were aware of the individuality of fingerprints, but they had no scientific methods for establishing that individuality. Fingerprints have remarkable permanence and uniqueness over time. Fingerprints offer more secure and reliable personal identification than passwords, ID cards or keys can provide. For example, computers and mobile phones equipped with fingerprint sensing devices are being used to replace ordinary password protection methods. Finger-scan technology is the most widely deployed biometric technology, with a…

Introduction of HopField Neural Network
/ December 22, 2017

Human beings have long pondered the reasons for human capabilities and incapabilities. Successful attempts have been made to design and develop systems that emulate human capabilities or help overcome human incapabilities. The human brain, which has taken millions of years to evolve to its present architecture, excels at tasks such as vision, speech, information retrieval and complex pattern recognition, all of which are extremely difficult tasks for conventional computers. A number of mechanisms have been identified which seem to enable the human brain to handle various problems. These mechanisms include association, generalization and self-organization. A brain-like computational technique, the Hopfield Neural Network, is explained here.

Working of Hopfield Neural Network

A neural network (or, more formally, artificial neural network) is a mathematical or computational model inspired by the structure and functional aspects of biological neural networks. It consists of an interconnected group of artificial neurons. The original inspiration for the term artificial neural network came from examination of central nervous systems and their neurons, axons, dendrites and synapses, which constitute the processing elements of biological neural networks. One of the milestones for the current renaissance in the field of neural networks was the associative model proposed by Hopfield at…
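To make the associative-memory idea concrete, here is a minimal Hopfield-style sketch in Python, using Hebbian learning on bipolar (+1/−1) patterns. The pattern, network size and synchronous update schedule are illustrative assumptions, not a full treatment:

```python
def train(patterns):
    """Hebbian rule: W[i][j] = sum over patterns of p[i]*p[j], zero diagonal."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    """Repeatedly set each unit to the sign of its weighted input."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

stored = [1, -1, 1, -1, 1, -1]   # a single bipolar pattern
W = train([stored])
noisy = [1, -1, -1, -1, 1, -1]   # the same pattern with one bit flipped
print(recall(W, noisy))          # [1, -1, 1, -1, 1, -1]
```

Starting from a corrupted state, the update dynamics settle back onto the stored pattern — the "association" mechanism mentioned above, realized as convergence to a fixed point of the network.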

Introduction of Image Classification
/ December 21, 2017

Classification includes a broad range of decision-theoretic approaches to the identification of images (or parts thereof). All classification algorithms are based on the assumption that the image in question depicts one or more features (e.g., geometric parts in the case of a manufacturing classification system, or spectral regions in the case of remote sensing) and that each of these features belongs to one of several distinct and exclusive classes. The classes may be specified a priori by an analyst (as in supervised classification) or automatically clustered (as in unsupervised classification) into sets of prototype classes, where the analyst merely specifies the number of desired categories. Classification and segmentation (clustering) have closely related objectives, as the former is another form of component labeling that can result in segmentation of various features in a scene.

Definition of Image Classification

Image classification is the process of assigning land cover classes to pixels. Image classification refers to the task of extracting information classes from a multiband raster image. The resulting raster from image classification can be used to create thematic maps. Depending on the interaction between the analyst and the computer during classification, there are two types of classification:…
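As a toy illustration of supervised classification, the sketch below assigns each pixel to the class whose mean spectral vector is nearest (a minimum-distance classifier). The band values and class names are made up for illustration:

```python
def train_class_means(samples):
    """samples: {class_name: [pixel, ...]} -> {class_name: mean band vector}."""
    means = {}
    for name, pixels in samples.items():
        bands = len(pixels[0])
        means[name] = [sum(p[b] for p in pixels) / len(pixels)
                       for b in range(bands)]
    return means

def classify(pixel, means):
    """Assign the class with the nearest mean (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(means, key=lambda name: dist2(pixel, means[name]))

# Hypothetical training pixels with two spectral bands each.
training = {
    "water":      [[10, 20], [12, 18]],
    "vegetation": [[80, 120], [84, 116]],
}
means = train_class_means(training)
print(classify([11, 19], means))   # water
print(classify([82, 118], means))  # vegetation
```

Here the analyst supplies labeled training pixels (the supervised case); in the unsupervised case the class means would instead be discovered by clustering, with the analyst specifying only the number of categories.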

What is Machine Vision
/ December 20, 2017

Vision plays a fundamental role for living beings by allowing them to interact with the environment in an effective and efficient way. Where human vision is best for qualitative interpretation of a complex, unstructured scene, machine vision excels at quantitative measurement of a structured scene because of its speed, accuracy, and repeatability. For example, on a production line, a machine vision system can inspect hundreds, or even thousands, of parts per minute. A machine vision system built around the right camera resolution and optics can easily inspect object details too small to be seen by the human eye.

What is Machine Vision?

Machine vision (also called “industrial vision” or “vision systems”) is the use of digital sensors (wrapped in cameras with specialized optics) that are connected to processing hardware and software algorithms to visually inspect pretty much anything. Machine vision is a true multi-disciplinary field, encompassing computer science, optics, mechanical engineering, and industrial automation. While historically the tools of machine vision were focused on manufacturing, that’s quickly changing, spreading into medical applications, research, and even movie making. Machine vision is the technology to replace or complement manual inspections and measurements with digital cameras and image processing. The technology is used…

What is Biometrics Authentication
/ December 18, 2017

One of our highest priorities in the world of information security is confirmation that a person accessing sensitive, confidential, or classified information is authorized to do so. Such access is usually accomplished by a person proving their identity by some means or method of authentication. Biometrics is a field of technology which has been, and is being, used to identify individuals based on some physical attribute. Biometric technology is used for automatic personal recognition based on biological traits (fingerprint, iris, face, palm print, hand geometry, vascular pattern, voice) or behavioral characteristics (gait, signature, typing pattern). Fingerprinting is the oldest of these methods and has been utilized for over a century by law enforcement officials, who use these distinctive characteristics to keep track of criminals.

Basic Overview of Biometrics Authentication

In this computer-driven era, identity theft and the loss or disclosure of data and related intellectual property are growing problems. We each have multiple accounts and use multiple passwords on an ever-increasing number of computers and Web sites. Maintaining and managing access while protecting both the user’s identity and the computer’s data and systems has become increasingly difficult. Central to all security is the concept of authentication –…

Introduction of Medical Image Processing or Medical Imaging
/ December 7, 2017

Present-day technological developments in imaging and vision have brought many changes to medical diagnosis, treatment planning and treatment verification procedures. Diagnostic precision and speed have improved drastically, as have non-invasive clinical procedures. In modern medicine, medical imaging has undergone major advancements. Today, this ability to acquire information about the human body has many useful clinical applications. Over the years, different sorts of medical imaging have been developed, each with their own advantages and disadvantages.

Overview of Medical Imaging

Aside from Superman with his x-ray vision, people generally can’t look at a sick person and instantly figure out the problem. Most medical issues occur inside the body, so making a diagnosis can be a challenge. Medical imaging has made that challenge far easier over the last century. Medical imaging is the technique of producing visual representations of areas inside the human body to diagnose medical problems and monitor treatment. It has had a huge impact on public health. Medical imaging is the visualization of body parts, tissues, or organs for use in clinical diagnosis, treatment and disease monitoring. Imaging techniques encompass the fields of radiology, nuclear medicine, optical imaging and image-guided intervention. Medical imaging refers to several…

What is ACO (Ant Colony Optimization) Algorithm
/ November 29, 2017

There are ever-increasing efforts in searching for and developing algorithms that can find solutions to combinatorial optimization problems. In this context, the Ant Colony Optimization metaheuristic takes inspiration from biology and proposes ever more efficient algorithm variants.

Ant Colony Optimization (ACO): Overview

Ant Colony Optimization (ACO) is a paradigm for designing metaheuristic algorithms for combinatorial optimization problems. The essential trait of ACO algorithms is the combination of a priori information about the structure of a promising solution with a posteriori information about the structure of previously obtained good solutions. ACO is a class of algorithms whose first member, called Ant System, was initially proposed by Colorni, Dorigo and Maniezzo. The main underlying idea, loosely inspired by the behavior of real ants, is that of a parallel search over several constructive computational threads based on local problem data and on a dynamic memory structure containing information on the quality of previously obtained results. The collective behavior emerging from the interaction of the different search threads has proved effective in solving combinatorial optimization (CO) problems. More specifically, we can say that “Ant Colony Optimization (ACO) is a population-based, general search technique for the solution of difficult combinatorial problems which is…
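The idea of constructive search threads guided by a shared pheromone memory can be sketched on a tiny travelling salesman instance. This follows the basic Ant System scheme; the parameter values and the four-city distance matrix are illustrative assumptions:

```python
import random

def tour_length(tour, dist):
    """Length of a closed tour over the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def ant_colony_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
                   rho=0.5, q=1.0, seed=0):
    """Return (best_tour, best_length) for a symmetric distance matrix."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            # Each ant builds a tour city by city, choosing the next city
            # with probability ~ pheromone^alpha * (1/distance)^beta.
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in choices]
                tour.append(rng.choices(choices, weights=weights)[0])
            tours.append(tour)
        # Evaporate, then deposit pheromone proportional to tour quality.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour in tours:
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len

# Four cities on a square: the shortest closed tour walks the perimeter.
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
tour, length = ant_colony_tsp(dist)
print(length)  # the perimeter tour has length 4
```

The pheromone matrix `tau` is the "dynamic memory structure" from the description above: evaporation forgets stale information, while deposits reinforce edges that appeared in short tours, biasing later ants toward promising solution structure.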
