Understanding Digital Images
Image Processing, Technology & Science / December 28, 2017

Digital images are the most common and convenient means of conveying or transmitting information. An image is worth a thousand words: pictures concisely convey information about positions, sizes, and inter-relationships between objects, and they portray spatial information that we can recognize as objects. Human beings are good at deriving information from such images because of our innate visual and mental abilities; about 75% of the information humans receive is in pictorial form.

Digital Image Overview

Digital images are made of picture elements called pixels. Typically, pixels are organized in an ordered rectangular array. The size of an image is determined by the dimensions of this pixel array: the image width is the number of columns, and the image height is the number of rows, so the pixel array is a matrix of M columns × N rows. To refer to a specific pixel within the image matrix, we give its coordinates x and y. The coordinate system of image matrices defines x as increasing from left to right and y as increasing from top to bottom. Compared to the usual mathematical convention, the origin is in the top left corner and the y coordinate is flipped. Why is…
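To make the coordinate convention concrete, here is a minimal sketch in Python (using NumPy, an assumption on my part, not a library the article names): a pixel at column x and row y is addressed as image[y, x], with the origin at the top left.

```python
import numpy as np

# A small 4-row x 6-column grayscale image: M = 6 columns (width), N = 4 rows (height).
image = np.zeros((4, 6), dtype=np.uint8)

# Set the pixel at x = 5 (rightmost column), y = 0 (top row) to white.
# Note the [row, column] = [y, x] ordering: y increases downward from the
# top-left origin, which is why indices look "flipped" versus math convention.
image[0, 5] = 255

height, width = image.shape   # N rows, M columns
print(width, height)          # 6 4
```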

Rule-Based Systems

Knowledge is practical or theoretical understanding of a subject or domain, and those who possess knowledge are called experts. The human mental process is internal, and it is too complex to be represented as an algorithm. However, most experts are capable of expressing their knowledge in the form of rules for problem solving, and rules are a popular paradigm for representing knowledge. A rule-based expert system is one whose knowledge base contains the domain knowledge coded in the form of rules.

Overview of Rule-Based Systems

Instead of representing knowledge in a relatively declarative, static way (as a collection of things that are true), rule-based systems represent knowledge as a set of rules that tell you what you should do or what you could conclude in different situations. A rule-based system consists of a set of IF-THEN rules, a set of facts, and an interpreter that controls the application of the rules, given the facts. Rule-based systems (also known as production systems or expert systems) are the simplest form of artificial intelligence. A rule-based system uses rules as the knowledge representation for knowledge coded into the system. The definitions of rule-based system depend almost entirely on expert systems, which…
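As a concrete illustration, here is a minimal forward-chaining interpreter sketched in Python. The rules and facts are invented for the example; the article does not prescribe any particular implementation.

```python
# Minimal rule-based (forward-chaining) sketch: rules are (IF-conditions, THEN-fact)
# pairs, and the interpreter keeps firing rules until no new facts can be derived.
rules = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected"},          "recommend_rest"),
]

facts = {"has_fever", "has_cough"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        # Fire the rule if all its IF-conditions hold and the THEN-fact is new.
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'has_fever', 'has_cough', 'flu_suspected', 'recommend_rest'}
```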

Introduction to Data Compression

Data compression is used just about everywhere. It involves the development of a compact representation of information, and most representations of information contain large amounts of redundancy, which can exist in various forms. Internet users who download or upload files from or to the web, or who use email to send or receive attachments, will most likely have encountered files in compressed format.

Data Compression Overview

With the expanding use of computers in various disciplines, the number of data processing applications is also increasing, and these require the processing and storage of large volumes of data. Data compression is primarily a branch of information theory that deals with techniques for minimizing the amount of data to be transmitted and stored. Data compression is often referred to as coding, where coding is a very general term encompassing any special representation of data that satisfies a given need; information theory is, in essence, the study of efficient coding and its consequences.

What is Data Compression?

Today, with the growing demands of information storage and data transfer, data compression is becoming increasingly important. Compression is the process of encoding data more efficiently to achieve a reduction in file size. One type of compression…
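Run-length encoding is one of the simplest illustrations of removing redundancy; the sketch below is a generic textbook example in Python, not a scheme the article specifies.

```python
from itertools import groupby

def rle_encode(text):
    """Run-length encoding: collapse each run of repeated characters
    into a (character, count) pair."""
    return [(char, len(list(run))) for char, run in groupby(text)]

def rle_decode(pairs):
    """Invert the encoding by expanding each pair back into a run."""
    return "".join(char * count for char, count in pairs)

data = "AAAABBBCCD"
encoded = rle_encode(data)
print(encoded)                      # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
assert rle_decode(encoded) == data  # lossless: decoding restores the input
```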

What is Pattern Recognition

One of the most important capabilities of mankind is learning from experience, from our endeavors, and from our faults. By the age of five, most of us are able to recognize digits and characters, whether big or small, uppercase or lowercase, rotated or tilted. We can recognize a character even if it is on a mutilated paper, partially occluded, or set against a cluttered background. Looking at the history of the human search for knowledge, it is clear that humans are fascinated with recognizing patterns in nature, understanding them, and attempting to distill those patterns into sets of rules. Informally, a pattern is defined by the common denominator among the multiple instances of an entity. Pattern recognition has its origins in engineering, whereas machine learning grew out of computer science. Pattern recognition is concerned with the design and development of systems that recognize patterns in data. The purpose of a pattern recognition program is to analyze a scene in the real world and to arrive at a description of the scene which is useful for the accomplishment of some task.

Introduction to Pattern Recognition

Pattern Recognition is a mature but exciting and fast developing field,…
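A nearest-neighbor classifier is perhaps the simplest concrete example of a system that recognizes patterns in data; the tiny sketch below (plain NumPy, with made-up 2-D feature vectors) assigns a new sample the label of its closest stored example.

```python
import numpy as np

# Stored examples: 2-D feature vectors with known class labels.
train_x = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [4.8, 5.2]])
train_y = np.array(["A", "A", "B", "B"])

def nearest_neighbor(sample):
    """Label an unknown sample with the class of its closest training example."""
    distances = np.linalg.norm(train_x - sample, axis=1)
    return train_y[np.argmin(distances)]

print(nearest_neighbor(np.array([0.9, 1.1])))  # "A"
print(nearest_neighbor(np.array([5.1, 4.9])))  # "B"
```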

What is Fingerprint Recognition

Fingerprint recognition is one of the most well-known and publicized biometrics. Because of their uniqueness and consistency over time, fingerprints have been used for identification for over a century, more recently becoming automated (i.e., a biometric) due to advancements in computing capabilities. Fingerprint identification is popular because of the inherent ease of acquisition, the numerous sources (ten fingers) available for collection, and their established use and collection by law enforcement and immigration.

Introduction to Fingerprint Recognition

Fingerprint recognition is one of the most popular and accurate biometric technologies, and fingerprint identification is one of the oldest methods of identification by biometric traits. A large number of archaeological artifacts and historical items show the signs of human fingerprints on stone. Ancient people were aware of the individuality of fingerprints, but they were not aware of scientific methods for establishing that individuality. Fingerprints have remarkable permanence and uniqueness throughout time, and they offer more secure and reliable personal identification than passwords, ID cards, or keys can provide. For example, computers and mobile phones equipped with fingerprint sensing devices are being adopted to replace ordinary password protection. Finger-scan technology is the most widely deployed biometric technology, with a…

Information Retrieval System and Applications

Information retrieval (IR) is the field of computer science that deals with the processing of documents containing free text, so that they can be rapidly retrieved based on keywords specified in a user's query. The effectiveness of IR systems is measured by comparing performance on a common set of queries and documents. The meaning of the term IR can be very broad: just getting a credit card out of your wallet so that you can type in the card number is a form of information retrieval. As an academic field of study, however, it is defined more narrowly.

What is Information Retrieval?

Information retrieval is generally considered a subfield of computer science that deals with the representation, storage, and access of information; it is concerned with the organization and retrieval of information from large document collections. Information Retrieval (IR) is the science of searching for information within relational databases, documents, text, multimedia files, and the World Wide Web. Information retrieval is accomplished by means of an information retrieval system and is performed manually or with the use of mechanization or automation; human beings remain indispensable in information retrieval. Depending on the character of the information contained in the…
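Keyword-based retrieval is usually built on an inverted index that maps each term to the documents containing it; here is a minimal sketch in Python (the toy documents and the AND-style query semantics are illustrative assumptions, not the article's).

```python
from collections import defaultdict

docs = {
    1: "data compression reduces file size",
    2: "image compression is a form of data compression",
    3: "pattern recognition finds structure in data",
}

# Build the inverted index: term -> set of document IDs containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Return documents containing ALL query terms (boolean AND)."""
    terms = query.lower().split()
    results = index[terms[0]].copy()
    for term in terms[1:]:
        results &= index[term]
    return results

print(search("data compression"))  # {1, 2}
```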

Introduction to Brain-Computer Interfaces

As the power of modern computers grows alongside our understanding of the human brain, we move ever closer to making some pretty spectacular science fiction into reality. Imagine transmitting signals directly to someone's brain that would allow them to see, hear or feel specific sensory inputs. Consider the potential to manipulate computers or machinery with nothing more than a thought. It isn't just about convenience: for severely disabled people, development of a brain-computer interface (BCI) could be the most important technological breakthrough in decades. In this article, we'll learn all about how BCIs work, their limitations and where they could be headed in the future.

What is Brain Computer Interface?

Brain-computer interface technology represents a rapidly growing field of research with many application systems. Its contributions in medical fields range from prevention to neuronal rehabilitation for serious injuries. Brain-computer interface (BCI) technology is a powerful communication tool between users and systems; it does not require any external devices or muscle intervention to issue commands and complete the interaction.

Definition of Brain Computer Interface

A BCI is a computer-based system that acquires brain signals, analyzes them, and translates them into commands that are relayed to an output device to carry out a…
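The acquire-analyze-translate loop can be sketched very simply: the toy Python pipeline below bandpass-filters a signal, measures its band power, and thresholds it into a command. The sampling rate, filter band, threshold, and command names are all invented for illustration; real BCI pipelines are far more involved.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250  # sampling rate in Hz (typical for consumer EEG; an assumption here)

def band_power(signal, low, high):
    """Bandpass-filter the signal and return its mean power in that band."""
    b, a = butter(4, [low, high], btype="band", fs=fs)
    filtered = filtfilt(b, a, signal)
    return np.mean(filtered ** 2)

def translate(signal, threshold=0.01):
    """Map alpha-band (8-12 Hz) power to a binary command (toy rule)."""
    return "SELECT" if band_power(signal, 8, 12) > threshold else "IDLE"

t = np.arange(0, 2, 1 / fs)
alpha_burst = 0.3 * np.sin(2 * np.pi * 10 * t)   # strong 10 Hz activity
noise = 0.05 * np.random.randn(t.size)
print(translate(alpha_burst + noise))             # "SELECT"
print(translate(noise))                           # "IDLE"
```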

Introduction to the Hopfield Neural Network

Human beings have wondered for ages about the reasons for human capabilities and incapabilities. Successful attempts have been made to design and develop systems that emulate human capabilities or help overcome human incapabilities. The human brain, which has taken millions of years to evolve to its present architecture, excels at tasks such as vision, speech, information retrieval, and complex pattern recognition, all of which are extremely difficult tasks for conventional computers. A number of mechanisms have been identified which seem to enable the human brain to handle such problems; these mechanisms include association, generalization, and self-organization. A brain-like computational technique, namely the Hopfield Neural Network, is explained here.

Working of the Hopfield Neural Network

A neural network (or, more formally, artificial neural network) is a mathematical or computational model inspired by the structure and functional aspects of biological neural networks. It consists of an interconnected group of artificial neurons. The original inspiration for the term Artificial Neural Network came from examination of central nervous systems and their neurons, axons, dendrites, and synapses, which constitute the processing elements of biological neural networks. One of the milestones for the current renaissance in the field of neural networks was the associative model proposed by Hopfield at…
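To make the associative-recall idea concrete, here is a minimal Hopfield network sketch in Python: bipolar (+1/-1) patterns are stored with the Hebbian rule, and a noisy input is driven back to the nearest stored pattern by repeated sign updates. The patterns and sizes are made up for illustration.

```python
import numpy as np

def train(patterns):
    """Build the weight matrix as a sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Synchronously update until the state stops changing (a fixed point)."""
    for _ in range(steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1      # break ties toward +1
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])     # first pattern with last bit flipped
print(recall(W, noisy))                     # recovers [ 1 -1  1 -1  1 -1]
```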

Introduction to Image Classification

Classification includes a broad range of decision-theoretic approaches to the identification of images (or parts thereof). All classification algorithms are based on the assumption that the image in question depicts one or more features (e.g., geometric parts in the case of a manufacturing classification system, or spectral regions in the case of remote sensing) and that each of these features belongs to one of several distinct and exclusive classes. The classes may be specified a priori by an analyst (as in supervised classification) or automatically clustered (as in unsupervised classification) into sets of prototype classes, where the analyst merely specifies the number of desired categories. Classification and segmentation (clustering) have closely related objectives, as the former is another form of component labeling that can result in segmentation of various features in a scene.

Definition of Image Classification

Image classification is the process of assigning land cover classes to pixels; it refers to the task of extracting information classes from a multiband raster image. The resulting raster from image classification can be used to create thematic maps. Depending on the interaction between the analyst and the computer during classification, there are two types of classification:…
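As a toy illustration of unsupervised classification, the sketch below clusters the pixels of a single-band image into a user-chosen number of classes with a bare-bones k-means loop (plain NumPy; the pixel values and class count are invented for the example).

```python
import numpy as np

def kmeans_classify(band, k=2, iters=20):
    """Unsupervised classification of a single-band image: cluster pixel
    values into k classes and return a raster of class labels."""
    values = band.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(0)
    centers = values[rng.choice(len(values), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest class center...
        labels = np.argmin(np.abs(values - centers.T), axis=1)
        # ...then move each center to the mean of its assigned pixels.
        centers = np.array([[values[labels == c].mean()] for c in range(k)])
    return labels.reshape(band.shape)

band = np.array([[ 10,  12, 200],
                 [  9, 205, 210],
                 [ 11,  13, 198]])
print(kmeans_classify(band, k=2))   # dark pixels vs. bright pixels
```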

What is Machine Vision
Image Processing, Technology & Science / December 20, 2017

Vision plays a fundamental role for living beings by allowing them to interact with the environment in an effective and efficient way. Where human vision is best for qualitative interpretation of a complex, unstructured scene, machine vision excels at quantitative measurement of a structured scene because of its speed, accuracy, and repeatability. For example, on a production line, a machine vision system can inspect hundreds, or even thousands, of parts per minute. A machine vision system built around the right camera resolution and optics can easily inspect object details too small to be seen by the human eye.

What is Machine Vision?

Machine vision (also called "industrial vision" or "vision systems") is the use of digital sensors (wrapped in cameras with specialized optics) connected to processing hardware and software algorithms to visually inspect pretty much anything. Machine vision is a true multi-disciplinary field, encompassing computer science, optics, mechanical engineering, and industrial automation. While historically the tools of machine vision were focused on manufacturing, that is quickly changing, spreading into medical applications, research, and even movie making. Machine vision is the technology used to replace or complement manual inspections and measurements with digital cameras and image processing. The technology is used…
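A typical entry-level inspection task is counting parts in an image by thresholding and contour detection; the sketch below uses OpenCV (an assumption on my part, since the article names no library), with a synthetic image standing in for a camera frame.

```python
import cv2
import numpy as np

# Synthetic "camera frame": a dark conveyor with two bright circular parts.
frame = np.zeros((200, 300), dtype=np.uint8)
cv2.circle(frame, (80, 100), 30, 255, -1)
cv2.circle(frame, (200, 100), 30, 255, -1)

# Threshold to separate bright parts from the dark background...
_, binary = cv2.threshold(frame, 127, 255, cv2.THRESH_BINARY)

# ...then find the outer contour of each part and count them
# (two return values is the OpenCV 4 signature).
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"parts detected: {len(contours)}")   # parts detected: 2

# A simple pass/fail rule: reject a part whose area is out of spec.
for c in contours:
    area = cv2.contourArea(c)
    print("PASS" if 2500 < area < 3200 else "FAIL", area)
```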
