# Introduction to Decision Trees

August 11, 2017

## Decision Tree: Overview

Among supervised data mining techniques, decision trees are among the most popular methods for classification and prediction. The training samples are organized in a tree data structure: the internal nodes of the tree represent attributes of the data set, and the edges represent the possible values of those attributes. The leaf nodes contain the decisions produced by a decision tree algorithm (e.g. C4.5, ID3, CART).
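The structure described above (attribute nodes, value-labeled edges, decision leaves) can be sketched as a minimal Python data type. The attribute name and values below are illustrative, not taken from any particular data set:

```python
# Minimal node types for the tree structure described above:
# an internal node tests one attribute and has one child per value;
# a leaf node stores a decision.
class Leaf:
    def __init__(self, decision):
        self.decision = decision

class Node:
    def __init__(self, attribute, children):
        self.attribute = attribute  # attribute tested at this node
        self.children = children    # {attribute value: child node}

def classify(node, sample):
    """Follow the edge matching the sample's attribute value until a leaf."""
    while isinstance(node, Node):
        node = node.children[sample[node.attribute]]
    return node.decision

# A tiny illustrative tree: one attribute, two possible values.
root = Node("Outlook", {"sunny": Leaf("yes"), "rain": Leaf("no")})
print(classify(root, {"Outlook": "sunny"}))  # yes
```

Each prediction is a single root-to-leaf walk, so classifying a sample costs only one attribute lookup per level of the tree.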

### Example

An example of a decision tree is given in Figure 1.

Figure 1: Decision tree example

Figure 1 shows a tree that encodes decisions. The decision labels (yes or no) are placed in the leaf nodes, while the internal nodes (Humidity, Outlook, and Wind) are attributes available in the data set. Together, these two components help explain the relationships among the attributes. Such a tree can also be converted into IF-THEN rules. For the example above, one rule can be defined as:

IF (Outlook = sunny AND Humidity = normal) THEN decision = yes
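The rule above corresponds to one root-to-leaf path; the complete rule set comes from enumerating every such path. A small sketch in Python, where the nested-dictionary tree literal is an assumed reconstruction of Figure 1 (the exact attribute values in the figure may differ):

```python
# Tree as nested dictionaries: an internal node maps an attribute name
# to {attribute value: subtree}; a leaf is simply the decision string.
tree = {
    "Outlook": {
        "sunny": {"Humidity": {"high": "no", "normal": "yes"}},
        "overcast": "yes",
        "rain": {"Wind": {"strong": "no", "weak": "yes"}},
    }
}

def to_rules(node, conditions=()):
    """Enumerate every root-to-leaf path as one IF ... THEN rule."""
    if not isinstance(node, dict):  # leaf: emit the accumulated rule
        body = " AND ".join(f"{a} = {v}" for a, v in conditions)
        return [f"IF ({body}) THEN decision = {node}"]
    attribute = next(iter(node))
    rules = []
    for value, subtree in node[attribute].items():
        rules += to_rules(subtree, conditions + ((attribute, value),))
    return rules

for rule in to_rules(tree):
    print(rule)
```

This tree yields five rules, one per leaf, including the rule stated above.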

The following are the key advantages of decision trees:

1. Decision trees are simple to understand and construct, even after only a brief exploration of the data.
2. They require little data preparation. Other techniques often demand data normalization, creation of dummy variables, and removal of blank values.
3. They can handle both numerical and categorical data. Many other techniques are specialized to data sets with a single type of variable: neural networks, for example, typically handle only numerical values, while relation rules deal only with nominal variables.
4. They use a white-box model. If a given condition is observable in the model, the explanation for that scenario is easily expressed by Boolean logic.
5. The model can be validated with statistical tests, which makes it possible to account for its reliability.
6. They are robust, producing reasonable results even when their assumptions are somewhat violated by the true model that generated the data.
7. They are time-efficient even on large data sets; large volumes of data can be analyzed with standard computing resources.

The following are the key disadvantages:

1. Decision-tree learning is based on heuristic algorithms, which offer no guarantee of returning the globally optimal decision tree.
2. Decision-tree learners can produce overly complex trees that fail to generalize beyond the training data. This is known as overfitting, and mechanisms such as pruning are needed to avoid it.
3. Decision trees struggle to express some complex concepts, such as XOR, parity, and multiplexer problems, for which very large trees are generated. This can be overcome either by changing the representation of the problem domain or by using learning algorithms based on more expressive representations.
4. For categorical variables with many levels, the information gain in a decision tree is biased in favor of the attributes with more levels.
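The last disadvantage can be made concrete: plain information gain, as used by ID3, scores an attribute higher the more distinct values it has, to the point where a unique identifier looks like a perfect split. A minimal sketch on a made-up toy table (the `id` and `windy` attributes are purely illustrative):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Entropy reduction achieved by splitting the rows on one attribute."""
    total = len(rows)
    remainder = 0.0
    for value in {row[attribute] for row in rows}:
        subset = [lab for row, lab in zip(rows, labels) if row[attribute] == value]
        remainder += len(subset) / total * entropy(subset)
    return entropy(labels) - remainder

# Toy data: "id" is unique per row; "windy" is a genuine two-level attribute.
rows = [
    {"id": 1, "windy": "yes"}, {"id": 2, "windy": "yes"},
    {"id": 3, "windy": "no"},  {"id": 4, "windy": "no"},
]
labels = ["no", "yes", "no", "no"]

# The unique "id" attribute achieves the maximal possible gain (the full
# data set entropy) despite carrying no predictive information at all.
print(round(information_gain(rows, labels, "id"), 3))     # 0.811
print(round(information_gain(rows, labels, "windy"), 3))  # 0.311
```

This bias is why C4.5 replaced raw information gain with the gain ratio, which normalizes by the entropy of the split itself.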

