Introduction to Image De-Noising
Image Processing, Technology & Science / November 20, 2017

Any form of signal processing that takes an image as input and produces an image (or a set of characteristics or parameters of an image) as output is called image processing. In image processing we work in two domains: the spatial domain and the frequency domain. The spatial domain refers to the image plane itself, and image processing methods in this category are based on direct manipulation of the pixels in an image. The frequency domain, by contrast, analyses the image as a mathematical function or signal with respect to frequency rather than time.

Image Denoising: Overview

The search for efficient image denoising methods is still a valid challenge, at the crossing of functional analysis and statistics. Image denoising refers to the recovery of a digital image that has been contaminated by noise. The presence of noise in images is unavoidable; it may be introduced during the image formation, recording, or transmission phase. Further processing of the image often requires that the noise be removed, or at least reduced. Even a small amount of noise is harmful when high accuracy is required. The noise can be of different types; the most common is additive white Gaussian noise (AWGN). An image denoising procedure takes a noisy image as input…
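To make the spatial-domain idea concrete, here is a minimal denoising sketch in Python (our illustration, not from the original article): it corrupts a synthetic image with AWGN and smooths it with a Gaussian filter. It assumes NumPy and SciPy are installed; the gradient image and noise level are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Hypothetical clean image: a smooth gradient (stand-in for a real photo).
clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))

# Contaminate it with additive white Gaussian noise (AWGN).
sigma_noise = 0.1
noisy = clean + rng.normal(0.0, sigma_noise, clean.shape)

# Spatial-domain denoising: smooth with a Gaussian kernel. The kernel
# width trades noise suppression against blurring of image detail.
denoised = gaussian_filter(noisy, sigma=1.5)

mse = lambda a, b: np.mean((a - b) ** 2)
print(f"MSE noisy:    {mse(noisy, clean):.5f}")
print(f"MSE denoised: {mse(denoised, clean):.5f}")
```

The kernel width `sigma` is the key design choice here: larger values suppress more noise but also blur more image detail.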

What is Ensemble Learning

Ensemble learning typically refers to methods that generate several models and combine them to make a prediction, in either classification or regression problems. This approach has been the object of a significant amount of research in recent years, and good results have been reported. This section introduces the basics of ensemble learning for classification.

Ensemble Learning: Overview

Ensemble learning is a machine learning paradigm in which multiple learners are trained to solve the same problem. In contrast to ordinary machine learning approaches, which try to learn one hypothesis from the training data, ensemble methods try to construct a set of hypotheses and combine them. An ensemble contains a number of learners, usually called base learners. The generalization ability of an ensemble is usually much stronger than that of its base learners. Ensemble learning is appealing because it is able to boost weak learners, which are only slightly better than random guessing, into strong learners that can make very accurate predictions. For this reason, "base learners" are also referred to as "weak learners". It is noteworthy, however, that although most theoretical analyses work with weak learners, the base learners used in practice are not necessarily weak, since using not-so-weak base learners often…
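As one concrete way to combine base learners (our example, not the article's), the sketch below uses scikit-learn's VotingClassifier to majority-vote three simple classifiers on synthetic data; the dataset and the particular learners are arbitrary illustrative choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification problem, split into train and test sets.
X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Three base learners; each may be only moderately accurate on its own.
base_learners = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
    ("nb", GaussianNB()),
]

# Hard voting: the ensemble predicts the majority class of its base learners.
ensemble = VotingClassifier(estimators=base_learners, voting="hard")

for name, model in base_learners + [("ensemble", ensemble)]:
    model.fit(X_tr, y_tr)
    print(f"{name:>8}: {model.score(X_te, y_te):.3f}")
```

With reasonably diverse base learners, the combined vote typically matches or beats the best individual model, which is the practical appeal of the paradigm.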

An Introduction to Object Recognition
Image Processing, Technology & Science / November 18, 2017

Object Recognition: A Computer Vision Perspective

Computer vision is the ability of machines to see and understand what is in their surroundings. This field contains methods for acquiring, processing, and analyzing images in order to extract important information for use by artificial systems. Object recognition in computer vision is the task of finding a given object in an image or video sequence. It is a fundamental vision problem. Humans recognize a huge number of objects in images with little effort, even when the objects vary in viewpoint, appear at many different sizes and scales, or are translated or rotated. Object recognition is an important task in image processing and computer vision.

Object Recognition: Overview

Object recognition plays an important role in computer vision. It is indispensable for many applications in the area of autonomous systems and industrial control. An object recognition system finds objects in the real world from an image of the world, using object models that are known a priori. With a simple glance at an object, humans are able to tell its identity or category despite the appearance variation due to changes in pose, illumination, texture, deformation, and…
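One classical, if limited, way to find a known object model in an image is template matching. The sketch below uses OpenCV purely for illustration; the file names are hypothetical, and, unlike human vision, this method is not invariant to the viewpoint, scale, and rotation changes discussed above.

```python
import cv2

# Hypothetical file names; substitute your own scene and object images.
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("object_model.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the scene and score the match at every position.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

h, w = template.shape
print(f"best match score: {best_score:.3f} at {best_loc}")
if best_score > 0.8:  # arbitrary confidence threshold for illustration
    top_left = best_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)
    print(f"object found in region {top_left} .. {bottom_right}")
```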

Introduction to Human-Computer Interaction

Utilizing computers has always raised the question of interfacing. The methods by which humans interact with computers have travelled a long way. The journey still continues: new designs of technologies and systems appear more and more every day, and research in this area has been growing very fast in the last few decades. The growth in the Human-Computer Interaction (HCI) field has not only been in the quality of interaction; it has also branched in different directions over its history. Instead of designing regular interfaces, the different research branches have focused on the concepts of multimodality rather than unimodality, intelligent adaptive interfaces rather than command/action-based ones, and active rather than passive interfaces.

Human-Computer Interaction (HCI): Overview

Human-Computer Interaction (HCI) involves the planning and design of the interaction between users and computers. These days, smaller devices are used to improve the technology. The most important advantage of computer vision in this setting is the freedom it offers: the user can interact with the computer without wires or intermediary devices. Recently, user interfaces have been built to capture the motion of our hands. Researchers have developed techniques to track the movements of the hand and fingers through a webcam to establish an interaction…
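As a rough sketch of the webcam-based interaction idea (our illustration, not the article's method), the following Python snippet uses OpenCV background subtraction to localize a moving hand in the video feed; taking the largest moving contour as the hand is a crude simplifying assumption.

```python
import cv2

# Capture frames from the default webcam (device 0).
cap = cv2.VideoCapture(0)

# Background subtraction is one simple way to isolate a moving hand.
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Assumption: the largest moving contour in the frame is the hand.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(hand)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hand tracking sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```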

How a Question Answering System Works

The majority of all human knowledge is represented solely in natural language. This knowledge is accessible to humans, who can understand natural language texts and answer questions about them, but it is not accessible to machines in the same measure. In this section we list some basics of question answering.

Question Answering System: Overview

A reliable, high-quality question answering (QA) system would be of great use in various fields. Just imagine a doctor being able to provide a diagnosis in a matter of seconds, simply by asking a computer a couple of questions about symptoms, or a programmer finding the right command straight away, without needing to read extensive manuals or documentation. There are also less specialized tasks, common in our daily life: looking for a cooking recipe, finding out how to treat a plant, looking up common knowledge or an equation, or asking for directions to the nearest restaurant. Today, we usually do a web search to find an answer to any of these questions. It is certainly significantly faster than looking for an answer in books, but still, it's far from perfect. Web search engines usually require us to insert our question (or query) not in a…
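A toy way to see the retrieval side of QA (our sketch, not a description of any real system) is to rank candidate answer passages by TF-IDF similarity to the question; the passages below are made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A toy "knowledge base" of candidate answer passages (invented examples).
passages = [
    "Water a basil plant every two to three days and keep it in sunlight.",
    "To treat a fever, rest and drink plenty of fluids.",
    "A quadratic equation ax^2 + bx + c = 0 is solved by the quadratic formula.",
]

question = "How often should I water a basil plant?"

# Represent the question and passages in the same TF-IDF vector space.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(passages + [question])
passage_vecs, question_vec = matrix[:-1], matrix[-1]

# Return the passage most similar to the question as the "answer".
scores = cosine_similarity(question_vec, passage_vecs).ravel()
best = scores.argmax()
print(f"score={scores[best]:.2f}: {passages[best]}")
```

Real QA systems add much more on top of retrieval (question analysis, answer extraction, ranking), but this word-overlap baseline shows why keyword-style web search works as well as it does, and where it falls short.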

Introduction to Linear Discriminant Analysis (LDA)

LDA is widely used to find linear combinations of features while preserving class separability. Unlike PCA, LDA tries to model the differences between classes. Classic LDA is designed to take only two classes into account: specifically, it requires data points from different classes to be far from each other, while points from the same class are close together. Consequently, LDA obtains a discriminative projection vector for each class. Multi-class LDA algorithms, which can handle more than two classes, are more commonly used.

Linear Discriminant Analysis (LDA) is most commonly used as a dimensionality reduction technique in the pre-processing step for pattern classification and machine learning applications. The goal is to project a dataset onto a lower-dimensional space with good class separability, in order to avoid overfitting (the "curse of dimensionality") and also to reduce computational costs.

Summarizing the LDA approach in 5 steps

Listed below are the 5 general steps for performing a linear discriminant analysis; we will explore them in more detail in the following sections, and a code sketch of the procedure follows the list.

1. Compute the \( d \)-dimensional mean vectors for the different classes from the dataset.
2. Compute the scatter matrices (the between-class scatter matrix \( S_B \) and the within-class scatter matrix \( S_W \)).
3. Compute the eigenvectors \( e_1, e_2, \dots, e_d \) and corresponding eigenvalues \( \lambda_1, \lambda_2, \dots, \lambda_d \) of \( S_W^{-1} S_B \).
4. Sort the eigenvectors by decreasing eigenvalues…
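Here is a minimal NumPy sketch of the steps above, ending with the standard projection onto the selected eigenvectors. The use of the Iris dataset and the choice of k = 2 components are illustrative assumptions, not part of the original text.

```python
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)   # 3 classes, d = 4 features
d = X.shape[1]
overall_mean = X.mean(axis=0)

# Step 1: per-class mean vectors.
means = {c: X[y == c].mean(axis=0) for c in np.unique(y)}

# Step 2: within-class (S_W) and between-class (S_B) scatter matrices.
S_W = np.zeros((d, d))
S_B = np.zeros((d, d))
for c, mean_c in means.items():
    X_c = X[y == c]
    S_W += (X_c - mean_c).T @ (X_c - mean_c)
    diff = (mean_c - overall_mean).reshape(-1, 1)
    S_B += len(X_c) * (diff @ diff.T)

# Step 3: eigen-decomposition of S_W^{-1} S_B.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)

# Step 4: sort eigenvectors by decreasing eigenvalue and keep the top k.
order = np.argsort(eigvals.real)[::-1]
k = 2
W = eigvecs.real[:, order[:k]]

# Final step: project the data onto the new k-dimensional subspace.
X_lda = X @ W
print(X_lda.shape)  # (150, 2)
```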

An Example of Principal Component Analysis
Image Processing, Technology & Science / November 14, 2017

Principal component analysis is a quantitatively rigorous method for simplifying a dataset. The method generates a new set of variables, called principal components. Each principal component is a linear combination of the original variables. All the principal components are orthogonal to each other, so there is no redundant information. The principal components as a whole form an orthogonal basis for the space of the data.

Principal Component Analysis: Introduction

PCA is an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate, the second greatest variance on the second coordinate, and so on. Eigenfaces, a method built on Principal Component Analysis (PCA), finds the minimum mean squared error linear subspace that maps the original N-dimensional data space into an M-dimensional feature space. By doing this, Eigenfaces (where typically M << N) achieves dimensionality reduction by using the M eigenvectors of the covariance matrix corresponding to the largest eigenvalues. The resulting basis vectors are obtained by finding the optimal basis vectors that maximize the total variance of the projected data (i.e. the set of basis vectors that best describe…
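The following NumPy sketch (an illustration we added, with made-up data) performs PCA exactly as described: center the data, eigen-decompose the covariance matrix, and keep the M eigenvectors with the largest eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in N = 5 dimensions with correlated features.
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))

# Center the data, then eigen-decompose the covariance matrix.
X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: covariance is symmetric

# Keep the M eigenvectors with the largest eigenvalues (M << N).
order = np.argsort(eigvals)[::-1]
M = 2
basis = eigvecs[:, order[:M]]

# Project onto the M-dimensional subspace of maximum variance.
X_proj = X_centered @ basis
print(X_proj.shape)                    # (200, 2)
print(eigvals[order] / eigvals.sum())  # variance explained per component
```

The printed variance ratios show how much of the total variance each principal component captures, which is the usual guide for choosing M.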

An Introduction to the Semantic Web

The current WWW has a huge amount of data that is often unstructured and usually only human-understandable. The Semantic Web aims to address this problem by providing machine-interpretable semantics, giving greater machine support to the user. The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. The Semantic Web will provide intelligent access to heterogeneous, distributed information, enabling software products to mediate between user needs and the information sources available.

Figure 1: Semantic Web Structure

The Internet contains more than 10 billion static pages of information used by more than 1,000 million users spread over the world. It is difficult to access and maintain this enormous amount of data using natural language. It is rather difficult to bridge the gap between the available information and the techniques used for accessing it. The web's content is increasing at a very fast rate, and it is difficult for search engines to cope with it despite new search techniques. The Semantic Web establishes machine-understandable Web resources. Researchers in this area plan to accomplish this by creating ontologies and logic mechanisms and replacing HTML…
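To show what machine-interpretable statements look like in practice, here is a small sketch using the rdflib Python library; the namespace and the facts are hypothetical examples we made up, not content from the article.

```python
from rdflib import Graph, Literal, Namespace, RDF

# A hypothetical namespace for our example resources.
EX = Namespace("http://example.org/")

g = Graph()
author = EX.TimBernersLee

# Machine-interpretable statements as triples: (subject, predicate, object).
g.add((author, RDF.type, EX.Person))
g.add((author, EX.proposed, EX.SemanticWeb))
g.add((author, EX.name, Literal("Tim Berners-Lee")))

# Serialize as Turtle, a common Semantic Web exchange format.
print(g.serialize(format="turtle"))
```

Because every statement is an explicit triple with well-defined identifiers, software can query and reason over such data directly, which is exactly the machine support the Semantic Web aims for.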

How a Neural Network Works Using a Simple Example

The simplest definition of a neural network, more properly referred to as an 'artificial' neural network (ANN), is provided by the inventor of one of the first neurocomputers, Dr. Robert Hecht-Nielsen. He defines a neural network as: "…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."

ANNs are processing devices (algorithms or actual hardware) that are loosely modeled on the neuronal structure of the mammalian cerebral cortex, but on much smaller scales. A large ANN might have hundreds or thousands of processor units, whereas a mammalian brain has billions of neurons, with a corresponding increase in the magnitude of their overall interaction and emergent behavior. Although ANN researchers are generally not concerned with whether their networks accurately resemble biological systems, some are. For example, researchers have accurately simulated the function of the retina and modeled the eye rather well.

Figure 1: Neural Network Example

Although the mathematics involved in neural networking is not a trivial matter, a user can rather easily gain at least an operational understanding of a network's structure and function. Neural networks are typically organized in layers. Layers are made up of a…
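To give an operational feel for layers, weights, and training, here is a small self-contained NumPy network (our example): one hidden layer learning XOR by gradient descent. The layer sizes, learning rate, and iteration count are arbitrary choices, and some random seeds may need more iterations to converge.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic example a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for one hidden layer of 4 units and one output unit.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: each layer's output feeds the next layer.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: gradient descent on the squared error.
    grad_out = (output - y) * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hid
    b1 -= lr * grad_hid.sum(axis=0)

print(output.round(2))  # should approach [[0], [1], [1], [0]]
```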

What is Image Retrieval

An image retrieval system can be defined as a system for searching, browsing, and retrieving images from massive databases of digital images. Conventional techniques of retrieving images make use of added metadata, namely captions and keywords, to annotate the images. Image search, however, is a dedicated search technique used specifically to find images: the user provides a query image, and the system returns images similar to the query image.

Image Retrieval Architecture

Image retrieval has been adopted by most of the major search engines, including Google, Yahoo!, and Bing. A large number of image search engines mainly employ the text surrounding the images and the image names to index the images, because there are only two main places where text can be attached to an image: first in the title (the name of the image), and second in tags, which are proposed and implemented using Web 2.0 concepts. Most of the time, users pose queries in text form to search for content on any search engine.

Figure 1: General Image Retrieval System

However, this limits the capability of the search engines in retrieving semantically related images for a given query. On…
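As an illustration of content-based retrieval, where the query is an image rather than text, the sketch below ranks a hypothetical database by color-histogram similarity; the random images are stand-ins for real photos, and the histogram features are a deliberately simple choice.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Per-channel intensity histogram, normalized to sum to 1."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)

# Hypothetical database of RGB images (random stand-ins for real photos).
database = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(5)]
query = database[2]  # query with an image that is in the database

# Rank database images by L1 histogram distance to the query image.
q_hist = color_histogram(query)
distances = [np.abs(color_histogram(img) - q_hist).sum() for img in database]
ranking = np.argsort(distances)
print("most similar images, best first:", ranking)  # image 2 ranks first
```

Richer feature representations (texture, shape, learned embeddings) slot into the same architecture: extract features from the query, compare against indexed features, and return the closest matches.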
