What is Feature Selection and Its Techniques?
/ July 27, 2018

Suppose we face a real-life problem such as driving a nail into a wall. To complete the task we need a nail and a hammer; this selection of tools according to the problem is analogous to feature selection in machine learning. The reduction of a problem depends on the selection of the variables included in the process, and thus feature selection plays an important role in building a model for a problem. This article covers the following topics: Introduction, Techniques of Feature Selection, and Advantages and Uses of Feature Selection. Introduction to Feature Selection In machine learning, feature selection is the process of selecting the essential and useful variables for a particular problem's model, and it largely determines the complexity and performance of that model: if the features are not selected properly, the model will be complex, slow, and bulky, and its performance will suffer. Feature selection compares and selects the best features that are relevant to the associated problem and removes the unwanted features according to…
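As a sketch of the filter-style selection idea described above, each feature can be scored by its absolute correlation with the target and only the top-scoring ones kept. The feature names and data below are made up purely for illustration; real pipelines typically use library routines such as scikit-learn's `SelectKBest`.

```python
# Minimal filter-based feature selection sketch (illustrative data):
# score each feature by |Pearson correlation| with the target, keep top k.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0  # constant column scores 0

def select_k_best(features, target, k):
    """Return the names of the k features most correlated with the target."""
    scores = {name: abs(pearson(col, target)) for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

features = {
    "relevant": [1, 2, 3, 4, 5],   # tracks the target closely
    "noisy":    [5, 1, 4, 2, 3],   # weak relationship with the target
    "constant": [7, 7, 7, 7, 7],   # carries no information at all
}
target = [2, 4, 6, 8, 10]
print(select_k_best(features, target, 2))  # the constant feature is dropped
```

Dropping the zero-information "constant" column this way is exactly the kind of pruning that keeps the model from becoming slow and bulky.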

What is Statistical Learning?
/ July 10, 2018

Statistical learning (SL) is the third mainstream in machine learning research. The main goal of statistical learning theory is to provide a framework for studying the problem of inference, that is, of gaining knowledge, making predictions, making decisions, or constructing models from a set of data. This article provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged over the past twenty years in fields ranging from biology to finance to marketing to astrophysics. Basic Overview of Statistical Learning Statistical learning refers to a vast set of tools for modeling and understanding complex datasets. It is a recently developed area of statistics that blends with parallel developments in computer science and, in particular, machine learning. The field encompasses many methods, such as the lasso and sparse regression, classification and regression trees, and boosting and support vector machines. These tools can be classified as supervised or unsupervised. Broadly speaking, supervised SL involves building a statistical model for predicting, or estimating, an output based on one or more inputs. Problems of this nature occur in fields as…

What is Web Personalization and Its Needs and Benefits?
/ June 26, 2018

In recent decades the World Wide Web has become the biggest and most popular means of communication and information dissemination. The World Wide Web is a huge repository of web pages and links, and the volume of information available on the internet is increasing exponentially, with approximately one million pages added daily. The existence of such an abundance of information, in combination with the dynamic and heterogeneous nature of the web, makes web site exploration a difficult process for the average end user. So despite the fact that users are provided with more information and service options, the average end user may not get only the "right" information, the information they are actually interested in. With the explosive growth of the number and complexity of information resources and the advent of e-services, it has become more difficult to access relevant information from the Web. Definition of Web Personalization Web personalization is the process of creating customized experiences for visitors to a website. Rather than providing a single, broad experience, web personalization allows companies to present visitors with unique experiences tailored to their needs and desires. Personalization is by no means a new concept. Waiters will often greet…

What is Part of Speech Tagging in Natural Language Processing?
/ June 1, 2018

With the advancement of technology, the demand for Natural Language Processing (NLP) is also increasing, and it has become very important to extract the correct information from huge collections of data on the basis of queries and keywords alone. Sometimes a user tries to search for data with the help of a query and gets unimportant or irrelevant data instead of the correct data. The aim of Natural Language Processing is to facilitate the interaction between human and machine. Many challenges in NLP involve natural language understanding, that is, enabling computers to derive meaning from human or natural language input, while others involve natural language generation. One building block of language understanding is Part of Speech tagging. The basic setting is human-computer interaction, which allows users to interact with computers using their everyday language. Basic Overview of Part of Speech Tagging A Part of Speech tagger (POS tagger) is a piece of software that reads text in some language and assigns a part of speech, such as noun, verb, or adjective, to each word (and other token), although computational applications generally use more fine-grained POS tags like 'noun-plural'. Tagging Automatic assignment of descriptors to the given tokens is called tagging. The descriptor is called a tag. The tag may…
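A toy illustration of the tagging idea described above: look each token up in a lexicon and fall back on simple suffix rules. The tiny lexicon and the rules here are invented for the example, not taken from any real tagger; production systems use trained statistical or neural taggers (e.g. NLTK's `pos_tag`).

```python
# Toy dictionary-plus-suffix POS tagger (lexicon and rules are made up).
LEXICON = {"the": "DET", "a": "DET", "dog": "NOUN",
           "runs": "VERB", "quickly": "ADV"}

def tag(tokens):
    """Assign a coarse POS tag to every token in the list."""
    tagged = []
    for tok in tokens:
        word = tok.lower()
        if word in LEXICON:
            tagged.append((tok, LEXICON[word]))   # known word: use lexicon
        elif word.endswith("ly"):
            tagged.append((tok, "ADV"))           # adverbs often end in -ly
        elif word.endswith("ing"):
            tagged.append((tok, "VERB"))          # crude gerund/participle rule
        else:
            tagged.append((tok, "NOUN"))          # default (open-class) tag
    return tagged

print(tag("The dog runs quickly".split()))
```

Even this crude sketch shows the core interface of a tagger: tokens in, (token, tag) pairs out.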

What is a Confusion Matrix in Machine Learning?
/ March 30, 2018

What is a Confusion Matrix? In the field of machine learning, and specifically the problem of statistical classification, a confusion matrix, also known as an error matrix, represents information about the actual and classified cases produced by a classification system. The performance of such a system is commonly evaluated by examining the correct and incorrect pattern classifications. A confusion matrix is a table that is often used to describe the performance of a classification model (or "classifier") on a set of test data for which the true values are known; it allows the visualization of the performance of an algorithm. A confusion matrix (Kohavi and Provost, 1998) contains information about actual and predicted classifications done by a classification system, and the performance of such systems is commonly evaluated using the data in the matrix. The following table shows the confusion matrix for a two-class classifier. It allows easy identification of confusion between classes, e.g. when one class is commonly mislabeled as the other. Most performance measures are computed from the confusion matrix. The entries in the confusion matrix have the following meaning in the context of our study: TN is the number of correct predictions that an instance is negative, FP is the number of incorrect predictions…
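The TN/FP/FN/TP counts described above can be computed directly from paired label vectors. A minimal sketch, assuming binary labels where 1 marks the positive class; the example vectors are made up:

```python
# Count the four confusion-matrix cells for a binary classifier.
def confusion_matrix(actual, predicted):
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # true positives
    tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))  # true negatives
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # false positives
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # false negatives
    return tp, tn, fp, fn

actual    = [1, 0, 1, 1, 0, 0, 1, 0]   # ground-truth test labels (toy data)
predicted = [1, 0, 0, 1, 0, 1, 1, 0]   # classifier output
tp, tn, fp, fn = confusion_matrix(actual, predicted)

# Most performance measures derive from these four cells:
accuracy = (tp + tn) / len(actual)
print(tp, tn, fp, fn, accuracy)  # 3 3 1 1 0.75
```

Precision (`tp / (tp + fp)`), recall (`tp / (tp + fn)`), and the rest follow from the same four counts.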

How to Implement the ID3 Algorithm Using Python
/ March 29, 2018

Introduction The ID3 decision tree algorithm is the first of a series of algorithms created by Ross Quinlan to generate decision trees. The decision tree is one of the most powerful and popular algorithms. The decision-tree algorithm falls under the category of supervised learning algorithms, and it works for both continuous and categorical output variables. ID3 is a classification algorithm which, for a given set of attributes and class labels, generates the model/decision tree that categorizes a given input to a specific class label ​$$C_k \in \{C_1, C_2, C_3, \ldots, C_k\}$$​. The algorithm follows a greedy approach by selecting the attribute that yields maximum information gain ​$$(IG)$$​ or minimum entropy ​$$(H)$$​. The algorithm then splits the dataset ​$$(S)$$​ recursively on the remaining unused attributes until it reaches the stopping criterion (no further attributes to split on). The non-terminal nodes in the decision tree represent the selected attributes upon which the splits occur, and the terminal nodes represent the class labels. ID3 Characteristics ID3 does not guarantee an optimal solution; it can get stuck in local optima. It uses a greedy approach by selecting the best attribute to split the dataset on at each iteration (one improvement that can be made to the algorithm is to use backtracking during…
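The entropy and information-gain computations at the heart of ID3's greedy attribute choice can be sketched as follows. The toy dataset and attribute layout below are invented for illustration; a full implementation would recurse on the winning attribute to build the tree.

```python
# Entropy H(S) and information gain IG(S, A) as used by ID3's split choice.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """IG = H(S) minus the size-weighted entropy of each attribute-value subset."""
    n = len(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(sub) / n * entropy(sub) for sub in subsets.values())
    return entropy(labels) - remainder

# Toy dataset: attribute 0 = outlook, attribute 1 = windy (made up).
rows   = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
labels = ["no", "no", "yes", "yes"]

print(information_gain(rows, labels, 0))  # outlook splits the labels perfectly
print(information_gain(rows, labels, 1))  # windy tells us nothing
```

ID3 would greedily pick attribute 0 here, since it drives the subset entropies to zero.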

How to Implement the ID3 Decision Tree Algorithm Using Java
/ March 28, 2018

The development of information technology has generated a large number of databases and huge amounts of data in various areas. Research in databases and information technology has given rise to approaches for storing and manipulating this precious data for further decision making. The decision tree is a powerful and popular tool for classification and prediction. Decision trees represent rules. A decision tree is a predictive model that, as its name implies, can be viewed as a tree. Specifically, each branch of the tree is a classification question, and the leaves of the tree are partitions of the dataset with their classifications. A decision tree is a classifier in the form of a tree structure, where each node is either: a leaf node, which indicates the value of the target attribute (class) of examples, or a decision node, which specifies some test to be carried out on a single attribute value, with one branch and sub-tree for each possible outcome of the test. The ID3 algorithm is primarily used for decision making. The ID3 (Iterative Dichotomiser 3) algorithm, invented by Ross Quinlan, is used to generate a decision tree from a dataset. There are different implementations of decision trees. The major ones are ID3: Iterative Dichotomiser was the very first implementation…

Social Media Analytics and How Social Media Analytics Works
/ March 19, 2018

In the decade since social networking was born, we have seen the power of platforms that unite humanity. Across our professional and personal lives, social platforms have truly changed the world. Social media has been the tool to ignite revolutions and elections, deliver real-time news, connect people and interests, and of course, drive commerce. Social media plays a significant role in today's networked society. It has affected the online interaction between users, who share a lot of personal details and information online. The dynamic nature of social media data is a significant challenge, as social media sites evolve continuously and rapidly. Social media is growing rapidly, and it offers something for everyone. Overview of Social Media Analytics Social media analytics is an on-demand offering that integrates, archives, analyzes, and reports on the effects of online conversations occurring across professional, consumer-generated, and social network media sites. As a result of the intelligence gleaned from this process, organizations can understand the effects online conversations are having on specific aspects of their business operations. Social media analytics (SMA) refers to the approach of collecting data from social media sites and blogs and evaluating that data to make business decisions. This process goes beyond the…

What is Blockchain Technology and How Is It Useful to Us?
/ March 18, 2018

Blockchain is being termed the fifth disruptive innovation in computing. Blockchain technology, or distributed, secure ledger technology, has gained much attention in recent years. This article presents the blockchain technology literature and its applications. A very significant plus of blockchain technology is that it solves two of the most dreaded problems of currency-based transactions, which have so long necessitated a third party to validate transactions. Blockchain Overview Blockchain technology is a sophisticated, interesting, and emerging technology. It provides a reliable way of confirming the party submitting a record to the blockchain, the time and date of its submission, and the contents of the record at the time of submission, eliminating the need for third-party intermediaries in certain situations. However, it is important to consider that blockchain technology does not verify or address the reliability or accuracy of the contents; additionally, blockchain technology provides no storage for the records themselves, but instead stores their hashes. A blockchain is an electronic ledger of digital records, events, or transactions that are cryptographically hashed, authenticated, and maintained through a "distributed" or "shared" network of participants using a group consensus protocol. Much like a checkbook is a ledger…
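A minimal sketch of the hash-chained ledger idea described above, using SHA-256 from Python's standard library. The record contents are made up, and a real blockchain adds timestamps, signatures, and a consensus protocol on top; the point here is only how linking each block to the previous hash makes tampering detectable.

```python
# Toy hash chain: each block stores its record, the previous block's hash,
# and the SHA-256 hash of both together.
import hashlib
import json

def make_block(record, prev_hash):
    """Build a block whose hash covers both the record and the previous hash."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash and check each prev-hash link."""
    for i, block in enumerate(chain):
        if block["hash"] != make_block(block["record"], block["prev"])["hash"]:
            return False                      # record or link was altered
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                      # chain link is broken
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
print(verify(chain))                          # True

chain[1]["record"] = "Alice pays Bob 500"     # tamper with a stored record
print(verify(chain))                          # False: the hash no longer matches
```

Because every block's hash depends on the previous hash, changing any historical record invalidates all later links, which is what makes the ledger tamper-evident.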

What is Visual Analytics
/ February 9, 2018

We are living in a world that faces a rapidly increasing amount of data to be dealt with on a daily basis. In the last decade, the steady improvement of data storage devices and of the means to create and collect data has influenced our way of dealing with information: most of the time, data is stored without filtering or refinement for later use. Virtually every branch of industry or business, and any political or personal activity, nowadays generates vast amounts of data. Making matters worse, the possibilities to collect and store data increase at a faster rate than our ability to use it for making decisions. However, in most applications, raw data has no value in itself; instead, we want to extract the information contained in it. Overview Generally, large-scale organizations have large amounts of data and information to process. They need strong procedures and techniques to collect, analyze, process, and visualize the data in order to obtain the required results and to make the right decisions toward their long-term goals and objectives. Several software packages and tools relating to big data analytics and visual analytics are being used by companies in order to…
