Information Retrieval System and Applications
/ December 24, 2017

Information retrieval (IR) is the field of computer science that deals with processing documents containing free text so that they can be rapidly retrieved based on keywords specified in a user’s query. The effectiveness of IR systems is measured by comparing performance on a common set of queries and documents. The term IR can be understood very broadly: even getting a credit card out of your wallet so that you can type in the card number is a form of information retrieval. As an academic field of study, however, information retrieval is generally considered a subfield of computer science that deals with the representation, storage, and access of information; it is concerned with the organization and retrieval of information from large document collections. IR is the science of searching for information within relational databases, documents, text, multimedia files, and the World Wide Web. Retrieval is accomplished by means of an information retrieval system and is performed manually or with the use of mechanization or automation. Human beings remain indispensable in information retrieval. Depending on the character of the information contained in the…
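The keyword-based retrieval described above can be sketched with a minimal inverted index: map each term to the set of documents containing it, then intersect the sets for the query terms. The documents, ids, and query below are invented examples, not from any real system.

```python
from collections import defaultdict

# Toy document collection (illustrative examples only).
docs = {
    0: "information retrieval deals with free text documents",
    1: "databases store structured records",
    2: "retrieval of text from large document collections",
}

# Build the inverted index: term -> set of document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Return ids of documents containing every query keyword."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index[terms[0]].copy()
    for term in terms[1:]:
        result &= index[term]
    return result

print(sorted(search("text retrieval")))  # documents containing both terms
```

Real IR systems add ranking (e.g. term weighting) on top of this boolean matching, but the index structure is the same idea.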

Introduction to the Hopfield Neural Network
/ December 22, 2017

Human beings have long wondered about the reasons for human capabilities and incapabilities. Successful attempts have been made to design and develop systems that emulate human capabilities or help overcome human limitations. The human brain, which has taken millions of years to evolve to its present architecture, excels at tasks such as vision, speech, information retrieval, and complex pattern recognition, all of which are extremely difficult for conventional computers. A number of mechanisms have been identified that seem to enable the human brain to handle such problems. These mechanisms include association, generalization, and self-organization. A brain-inspired computational technique, the Hopfield neural network, is explained here. Working of the Hopfield Neural Network A neural network (more formally, an artificial neural network) is a mathematical or computational model inspired by the structure and functional aspects of biological neural networks. It consists of an interconnected group of artificial neurons. The original inspiration for the term came from examination of central nervous systems and their neurons, axons, dendrites, and synapses, which constitute the processing elements of biological neural networks. One of the milestones for the current renaissance in the field of neural networks was the associative model proposed by Hopfield at…
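The associative mechanism mentioned above can be sketched as a tiny discrete Hopfield network: Hebbian weights store a pattern, and repeated sign updates pull a corrupted input back to it. The stored pattern, network size, and update schedule below are illustrative assumptions, not from the article.

```python
# Minimal discrete Hopfield network sketch with bipolar (+1/-1) units.
def train(patterns):
    """Hebbian learning: W[i][j] = sum over patterns of x_i * x_j, zero diagonal."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=10):
    """Synchronous updates: each unit takes the sign of its weighted input."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

stored = [1, -1, 1, -1, 1, -1]   # the pattern the network memorizes
W = train([stored])
noisy = [1, -1, -1, -1, 1, -1]   # same pattern with one unit flipped
print(recall(W, noisy))          # converges back to the stored pattern
```

This is the associative behaviour the excerpt refers to: a partial or corrupted cue retrieves the complete stored memory.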

An Introduction to Data Visualization
/ December 14, 2017

A picture is worth a thousand words – especially when we are trying to understand and discover insights from data. Visuals are especially helpful when we’re trying to find relationships among hundreds or thousands of variables to determine their relative importance – or if they are important at all. Regardless of how much data we have, one of the best ways to discern important relationships is through advanced analysis and high-performance data visualization. If sophisticated analyses can be performed quickly, even immediately, and results presented in ways that showcase patterns and allow querying and exploration, people across all levels in our organization can make faster, more effective decisions. Data Visualization : Definition Data visualizations are surprisingly common in our everyday life, but they often appear in the form of well-known charts and graphs. A combination of multiple visualizations and bits of information are often referred to as infographics. Data visualizations can be used to discover unknown facts and trends. You may see visualizations in the form of line charts to display change over time. Bar and column charts are useful when observing relationships and making comparisons. Pie charts are a great way to show parts-of-a-whole. And maps are the best way…
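As a minimal sketch of the comparison role of bar charts described above, here is a tiny text-based bar chart that scales each value to a row of marks; the sales figures and labels are invented examples.

```python
# Toy data: category -> value (invented example).
sales = {"North": 42, "South": 17, "East": 30, "West": 8}

def bar_chart(data, width=40):
    """Render a horizontal text bar chart, scaling bars to the largest value."""
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(value / peak * width)
        lines.append(f"{label:<6} | {bar} {value}")
    return "\n".join(lines)

print(bar_chart(sales))
```

Even in this crude form, relative magnitudes are easier to compare at a glance than in the raw numbers, which is the point the excerpt makes about bar and column charts.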

Introduction to Bioinformatics
/ December 13, 2017

Quantitation and quantitative tools are indispensable in modern biology. Most biological research involves application of some type of mathematical, statistical, or computational tools to help synthesize recorded data and integrate various types of information in the process of answering a particular biological question. Bioinformatics involves the use of computers to collect, organize and use biological information to answer questions in fields like evolutionary biology. Definition of Bioinformatics Bioinformatics is an interdisciplinary research area at the interface between computer science and biological science. A variety of definitions exist in the literature and on the World Wide Web; some are more inclusive than others. Bioinformatics involves the technology that uses computers for storage, information retrieval, manipulation, and distribution of information related to biological macromolecules such as DNA, RNA, and proteins. The emphasis here is on the use of computers because most of the tasks in genomic data analysis are highly repetitive or mathematically complex. The use of computers is absolutely indispensable in mining genomes for information gathering and knowledge building. Bioinformatics is the science of developing computer databases and algorithms for the purpose of speeding up and enhancing biological research. It can be defined more specifically, “Bioinformatics combines the latest technology with biological…

What is Knowledge Graph
/ December 9, 2017

Knowledge graphs are large networks of entities and their semantic relationships. They are a powerful tool that changes the way we do data integration, search, analytics, and context-sensitive recommendations. Knowledge graphs have been successfully utilized by the large Internet tech companies, with prominent examples such as the Google Knowledge Graph. Open knowledge graphs such as Wikidata make community-created knowledge freely accessible. Overview of Knowledge Graphs The World Wide Web is a vast repository of knowledge, with data present in multiple modalities such as text, videos, images, structured tables, etc. However, most of the data is present in unstructured formats, and extracting information in a structured, machine-readable form is still a very difficult task. Knowledge graphs aim at constructing large repositories of structured knowledge that can be understood by machines. Such knowledge graphs are being used to improve the relevance and quality of search in engines like Google and Bing. Knowledge graphs are also used by applications like Google Now, Microsoft Cortana, and Apple Siri, which are capable of understanding natural-language queries, answering questions, making recommendations, and so on. The construction of knowledge graphs is thus a major step towards making intelligent personalized machines. Web…
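The "structured, machine-readable" form mentioned above is commonly a set of subject–predicate–object triples. Here is a minimal sketch of that representation with pattern-matching queries; the facts listed are small illustrative examples, not an actual knowledge-graph dump.

```python
# A knowledge graph as (subject, predicate, object) triples (toy examples).
triples = [
    ("Google", "operates", "Knowledge Graph"),
    ("Wikidata", "is_a", "open knowledge graph"),
    ("Knowledge Graph", "improves", "search relevance"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(s="Google"))   # every stored fact about Google
print(query(p="is_a"))     # every type assertion
```

Production triple stores index each position for scale, but the query model — match a partially specified triple — is the same.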

What is Deep Learning
/ December 6, 2017

Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts, or products with users’ interests, and select relevant search results. Increasingly, these applications make use of a class of techniques called deep learning. Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. Deep Learning Overview Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained using a large set of labeled data and neural network architectures that contain many layers. Deep learning is a subset of machine learning. Usually, when people use the term deep learning, they are referring to deep artificial neural networks, and somewhat less frequently to deep reinforcement learning….
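As a minimal illustration of why the "many layers" above matter, here is a tiny two-layer network with hand-set (not learned) weights computing XOR, a function no single linear layer can represent. The weights, step activation, and network shape are illustrative assumptions; a real deep model learns its weights from labeled data.

```python
def step(x):
    """A hard threshold activation: fires (1) when input is positive."""
    return 1 if x > 0 else 0

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum plus bias, then activation."""
    return [step(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(x1, x2):
    # Hidden layer: one unit fires for "at least one input on" (OR-like),
    # the other for "both inputs on" (AND-like).
    hidden = layer([x1, x2], [[1, 1], [1, 1]], [-0.5, -1.5])
    # Output layer combines them: OR and not AND, i.e. XOR.
    (out,) = layer(hidden, [[1, -1]], [-0.5])
    return out

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Stacking layers lets the network build intermediate features (here OR and AND) and compose them, which is the core idea deep architectures scale up.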

Social Network Analysis
/ November 22, 2017

Network analysis is still a growing field with a great deal of opportunity for new and transformative contributions. The term social network refers to the articulation of a social relationship, official or achieved, among individuals, families, households, villages, communities, regions, and so on. Each of these can play a dual role, acting both as a unit or node of a social network and as a social actor. Social Network Analysis : Definition Social network theory views a network as a group of actors who are connected by a set of relationships. Social networks develop when actors meet and form some kind of relation with each other. These can be of an informal as well as of a formal nature. The actors are often people, but they can also be nations, organizations, objects, etc. Social Network Analysis (SNA) focuses on patterns of relations between these actors. It seeks to describe networks of relations as fully as possible. This includes teasing out the prominent patterns in such networks, tracing the flow of information through them, and discovering what effects these relations and networks have on people and organizations. It can therefore be used to study network patterns of organizations, ideas, and people that are connected…
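The actors-and-relations view above maps directly onto a graph. Here is a minimal sketch that stores ties in an adjacency list and computes degree centrality, one simple SNA measure of an actor's prominence; the actors and ties are invented examples.

```python
from collections import defaultdict

# Undirected ties between actors (toy example).
ties = [("Ana", "Ben"), ("Ana", "Cara"), ("Ben", "Cara"), ("Cara", "Dan")]

# Adjacency list: actor -> set of directly tied actors.
network = defaultdict(set)
for a, b in ties:
    network[a].add(b)
    network[b].add(a)

def degree_centrality(net):
    """Fraction of the other actors each actor is directly tied to."""
    n = len(net)
    return {actor: len(neigh) / (n - 1) for actor, neigh in net.items()}

print(degree_centrality(network))  # Cara, tied to everyone, scores 1.0
```

Richer SNA measures (betweenness, closeness, flows) build on exactly this representation of actors and relations.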

What is Ensemble Learning
/ November 19, 2017

Ensemble learning typically refers to methods that generate several models which are combined to make a prediction, in either classification or regression problems. This approach has been the object of a significant amount of research in recent years, and good results have been reported. This section introduces the basics of ensemble learning for classification. Ensemble Learning : Overview Ensemble learning is a machine learning paradigm where multiple learners are trained to solve the same problem. In contrast to ordinary machine learning approaches, which try to learn one hypothesis from the training data, ensemble methods try to construct a set of hypotheses and combine them. An ensemble contains a number of learners, usually called base learners. The generalization ability of an ensemble is usually much stronger than that of its base learners. Ensemble learning is appealing because it is able to boost weak learners, which are only slightly better than random guessing, into strong learners that can make very accurate predictions. For this reason, “base learners” are also referred to as “weak learners”. It is noteworthy, however, that although most theoretical analyses work on weak learners, base learners used in practice are not necessarily weak, since using not-so-weak base learners often…
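The combine-the-hypotheses idea above can be sketched with the simplest combination rule, majority voting. The three rule-based "base learners" and the data points below are invented stand-ins; in practice each base learner would be a trained model.

```python
from collections import Counter

# Three toy base learners classifying a 2-feature point (invented rules).
def clf_a(x): return 1 if x[0] > 0.5 else 0
def clf_b(x): return 1 if x[1] > 0.5 else 0
def clf_c(x): return 1 if x[0] + x[1] > 1.0 else 0

def majority_vote(classifiers, x):
    """Combine base-learner predictions by taking the most common label."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

ensemble = [clf_a, clf_b, clf_c]
# Individual learners disagree on this point; the vote settles it.
print(majority_vote(ensemble, (0.9, 0.2)))
```

The ensemble is only as good as the diversity of its members: if all three rules made the same mistakes, voting would not help, which is why base-learner diversity is central to the theory the excerpt mentions.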

Community Detection : Unsupervised Learning
/ November 9, 2017

Advances in technology and computation have provided the possibility of collecting and mining a massive amount of real-world data. Mining such “big data” allows us to understand the structure and function of real systems and to find unknown and interesting patterns. This section provides a brief overview of community structure. Introduction to Community Detection In today’s interconnected world, with the rise of online social networks, graph mining and community detection have become thoroughly topical. Understanding the formation and evolution of communities is a long-standing research topic in sociology, in part because of its fundamental connections with the studies of urban development, criminology, social marketing, and several other areas. With the increasing popularity of online social network services like Facebook, the study of community structures assumes even more significance. Identifying and detecting communities is not only of particular importance but has immediate applications. For instance, for effective online marketing, such as placing online ads or deploying viral marketing strategies [10], identifying communities in a social network can often lead to more accurate targeting and better marketing results. Although online user profiles or other semantic information are helpful for discovering user segments, this kind of information is often at a coarse-grained level…
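One simple way to make the community idea concrete is label propagation: every node starts in its own community and repeatedly adopts the most common label among its neighbours. The sketch below uses an invented toy graph (two triangles joined by one bridge edge) and breaks ties by the largest label so the run is deterministic; real implementations randomize update order.

```python
from collections import Counter

edges = [(0, 1), (1, 2), (0, 2),   # first triangle
         (3, 4), (4, 5), (3, 5),   # second triangle
         (2, 3)]                   # single bridge between the two

# Adjacency list for the undirected graph.
neighbours = {n: set() for n in range(6)}
for a, b in edges:
    neighbours[a].add(b)
    neighbours[b].add(a)

labels = {n: n for n in neighbours}      # each node its own community
for _ in range(10):                      # a few full sweeps suffice here
    for node in sorted(neighbours):
        counts = Counter(labels[m] for m in neighbours[node])
        best = max(counts.values())
        # Deterministic tie-break: prefer the largest label.
        labels[node] = max(l for l, c in counts.items() if c == best)

print(labels)  # nodes 0-2 share one label, nodes 3-5 another
```

The two triangles are internally dense and only weakly linked, so propagation settles into two stable labels — the intuitive notion of community structure the excerpt describes.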

Apriori Algorithm: Example and Algorithm Description
/ November 2, 2017

With the quick growth of e-commerce applications, vast quantities of data accumulate in months, not years. Data mining, also known as Knowledge Discovery in Databases (KDD), is used to find anomalies, correlations, patterns, and trends in order to predict outcomes. The Apriori algorithm is a classical algorithm in data mining. It is used for mining frequent itemsets and the relevant association rules, and it is devised to operate on a database containing a large number of transactions, for instance, items bought by customers in a store. It is very important for effective Market Basket Analysis, and it helps customers purchase their items with more ease, which increases the sales of the markets. It has also been used in the field of healthcare for the detection of adverse drug reactions: it produces association rules that indicate which combinations of medications and patient… Figure 1: Apriori algorithm example application. Apriori Algorithm : Overview One of the first algorithms to evolve for frequent itemset and association rule mining was Apriori. Two major steps of the Apriori algorithm are the join and prune steps. The join step is used to construct new candidate sets. A candidate itemset is basically an itemset that could be either frequent or…
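The join and prune steps described above can be sketched compactly over a toy transaction database; the transactions and the support threshold below are invented examples.

```python
from itertools import combinations

# Toy transaction database (invented market-basket example).
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
]
MIN_SUPPORT = 2  # minimum number of transactions containing an itemset

def support(itemset):
    """Count the transactions that contain every item in the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def apriori():
    items = {item for t in transactions for item in t}
    frequent = [frozenset([i]) for i in items
                if support(frozenset([i])) >= MIN_SUPPORT]
    all_frequent = list(frequent)
    k = 2
    while frequent:
        # Join step: union pairs of frequent (k-1)-itemsets into k-item candidates.
        candidates = {a | b for a, b in combinations(frequent, 2) if len(a | b) == k}
        # Prune step: a candidate survives only if every (k-1)-subset is frequent
        # (the Apriori property: subsets of a frequent itemset are frequent).
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        frequent = [c for c in candidates if support(c) >= MIN_SUPPORT]
        all_frequent.extend(frequent)
        k += 1
    return all_frequent

for itemset in apriori():
    print(sorted(itemset), support(itemset))
```

On this database all three items and all three pairs are frequent, while the full triple appears in only one transaction and is correctly rejected by the support check after surviving the prune step.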
