It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false. In the definition above, many-valued logic is exactly what the different levels of hotness or coldness of a coffee convey. Trust me, I will come back to it!
As I said before, fuzzy logic is no misnomer. Sounds confusing? Let me explain.
Till now, we have seen that fuzzy logic is a multi-valued logic. But we should be aware of the fact that there is no clear distinction between its different truth levels. So, one might ask: if we cannot portray the different levels of fuzzy logic clearly, then how are we supposed to represent fuzzy logic and extract conclusions using it? This is where the membership function of fuzzy logic comes into the picture. First, consider a classic (crisp) set: for any element, we can clearly say that it either exists in the set or it does not. So, as you can see, there is a pretty clear principle behind the membership of an element in a classic set. In other words, we can say that the membership function of a classic set consists of only two values, 1 and 0.
What about a fuzzy set? Well then, is fuzzy logic doomed? Well, just hear me out before you conclude this! A fuzzy set can be described as a set for which elements do not have the simple property of either being in the set or out of it. An element can be partially in the set as well! See, I told you I would come back to it. So, the membership function in the case of fuzzy logic is not a simple set consisting of 1 and 0.
For fuzzy logic, the membership function is continuous between 0 and 1, inclusive. The former denotes that the element is not a part of the fuzzy set, whereas the latter denotes that the element completely belongs to the fuzzy set.
Any other value between 0 and 1 denotes that the element is partially in the set. The membership function, in the case of fuzzy logic, represents the degree of truth. Now, let us discuss the types of fuzzy sets. There are basically two types: Type-1 fuzzy sets (T1 FS) and Type-2 fuzzy sets. Note that the membership function we have been talking about till now was for a T1 FS. After covering the prerequisites, let us now discuss an important application of fuzzy logic: text summarization.
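To make the degree-of-truth idea concrete, here is a minimal sketch of a T1 membership function in Python. The temperature thresholds (40 and 70 degrees Celsius) are assumptions chosen purely for illustration:

```python
def hot_membership(temp_c):
    """Degree to which a coffee temperature counts as 'hot'.

    Returns 0.0 (not in the fuzzy set at all), 1.0 (completely in it),
    or a partial degree of truth in between.
    """
    if temp_c <= 40:   # assumed lower threshold: definitely not hot
        return 0.0
    if temp_c >= 70:   # assumed upper threshold: completely hot
        return 1.0
    return (temp_c - 40) / 30   # partial membership in between

hot_membership(30)   # -> 0.0  (cold coffee is not in the "hot" set)
hot_membership(55)   # -> 0.5  (lukewarm coffee is partially in the set)
hot_membership(80)   # -> 1.0  (fully in the set)
```

A classic set would force the same function to return only 0 or 1; the linear ramp in the middle is exactly the "partially in the set" region.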
As discussed in my previous article, automatic summarization is the process of shortening a text document with software, in order to create a summary with the major points of the original document. A data mining subsystem fetches data from the data repository managed by database or data warehouse systems and performs data mining on that data.
It then stores the mining result either in a file or in a designated place in a database or in a data warehouse. The data mining subsystem is treated as one functional component of an information system. Data mining query languages can be designed to support ad hoc and interactive data mining. One such language, DMQL, provides commands for specifying primitives. DMQL can work with databases and data warehouses as well, and it can be used to define data mining tasks.
In particular, we examine how to define data warehouses and data marts in DMQL. Here we will discuss the syntax for characterization, discrimination, association, classification, and prediction.
DMQL also has a syntax that allows users to specify the display of discovered patterns in one or more forms. For example, suppose you would like to know the percentage of customers having a certain characteristic. In particular, you are only interested in purchases made in Canada and paid with an American Express credit card.
You would like to view the resulting descriptions in the form of a table.

Classification and prediction are two forms of data analysis that can be used to extract models describing important classes or to predict future data trends.
Classification models predict categorical class labels, whereas prediction models predict continuous-valued functions. For example, we can build a classification model to categorize bank loan applications as either safe or risky, or a prediction model to predict the expenditures in dollars of potential customers on computer equipment given their income and occupation. A bank loan officer wants to analyze the data in order to know which loan applicants are risky and which are safe.
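The two tasks can be sketched with a toy nearest-neighbor model in Python. All of the figures and features below (income, years employed, spend amounts) are invented purely for illustration:

```python
def nearest_neighbor(train, query):
    """Return the training tuple whose features are closest to the query."""
    return min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], query)))

# Classification: predict a categorical label ("safe"/"risky") for a loan applicant
# described by (income, years employed).
loans = [((30_000, 2), "risky"), ((90_000, 10), "safe"), ((60_000, 5), "safe")]
label = nearest_neighbor(loans, (85_000, 8))[1]   # -> "safe"

# Numeric prediction: predict a continuous value (expected spend in dollars)
# from income alone.
spend = [((40_000,), 800.0), ((80_000,), 2100.0), ((60_000,), 1400.0)]
value = nearest_neighbor(spend, (75_000,))[1]     # -> 2100.0
```

The same lookup mechanism serves both tasks; only the type of the target changes, which is exactly the categorical-versus-continuous distinction in the text.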
A marketing manager at a company needs to predict whether a customer with a given profile will buy a new computer. In both of the above examples, a model or classifier is constructed to predict categorical labels. These labels are "risky" or "safe" for the loan application data and "yes" or "no" for the marketing data. Now suppose the marketing manager instead needs to predict how much a given customer will spend during a sale at his company.
In this example, we need to predict a numeric value. Therefore the data analysis task is an example of numeric prediction. In this case, a model or predictor will be constructed that predicts a continuous-valued function, or ordered value. With the help of the bank loan application that we have discussed above, let us understand the working of classification. The classifier is built from the training set, made up of database tuples and their associated class labels. Each tuple that constitutes the training set belongs to a predefined class, determined by its class label attribute. These tuples can also be referred to as samples, objects, or data points.
In this step, the classifier is used for classification. Here, the test data is used to estimate the accuracy of the classification rules. The classification rules can be applied to new data tuples if the accuracy is considered acceptable.

The major issue is preparing the data for classification and prediction. Noise is removed by applying smoothing techniques, and the problem of missing values is solved by replacing a missing value with the most commonly occurring value for that attribute. Correlation analysis is used to know whether any two given attributes are related. Normalization involves scaling all values for a given attribute in order to make them fall within a small specified range.
Normalization is used when, in the learning step, neural networks or methods involving distance measurements are used.
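Two of those preparation steps can be sketched in a few lines of Python — filling a missing value with the attribute's most common value, and min-max normalization into a small specified range. The income figures are invented for illustration:

```python
from collections import Counter

def fill_missing(values):
    """Replace None with the most commonly occurring value for the attribute."""
    mode = Counter(v for v in values if v is not None).most_common(1)[0][0]
    return [mode if v is None else v for v in values]

def min_max_normalize(values, new_min=0.0, new_max=1.0):
    """Scale all values of an attribute to fall within [new_min, new_max]."""
    lo, hi = min(values), max(values)
    return [new_min + (v - lo) * (new_max - new_min) / (hi - lo) for v in values]

incomes = fill_missing([50_000, None, 30_000, 50_000])  # -> [50000, 50000, 30000, 50000]
scaled  = min_max_normalize([30_000, 50_000, 70_000])   # -> [0.0, 0.5, 1.0]
```

After min-max scaling, attributes measured on very different scales (say, income in dollars and age in years) contribute comparably to a distance measurement, which is why normalization matters for the methods mentioned above.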
For this purpose, we can use concept hierarchies. The accuracy of a classifier refers to how well it predicts the class label of new data, and the accuracy of a predictor refers to how well it can guess the value of the predicted attribute for new data. A decision tree is a structure that includes a root node, branches, and leaf nodes. Each internal node denotes a test on an attribute, each branch denotes the outcome of a test, and each leaf node holds a class label. The topmost node in the tree is the root node.
Each internal node represents a test on an attribute. Each leaf node represents a class. A machine learning researcher named J. Ross Quinlan developed a decision tree algorithm known as ID3. Later, he presented C4.5, the successor of ID3.
ID3 and C4.5 adopt a greedy approach. In these algorithms, there is no backtracking; the trees are constructed in a top-down, recursive, divide-and-conquer manner. Tree pruning is performed in order to remove anomalies in the training data due to noise or outliers. The pruned trees are smaller and less complex.
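The heart of that greedy, top-down construction is the attribute-selection measure; ID3 uses information gain. Here is a small Python sketch of the computation on an invented loan dataset (C4.5 refines this into gain ratio, which is not shown):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, attr):
    """Information gain of splitting `rows` on column `attr` (class label is last)."""
    base = entropy([r[-1] for r in rows])
    parts = {}
    for r in rows:
        parts.setdefault(r[attr], []).append(r[-1])
    remainder = sum(len(p) / len(rows) * entropy(p) for p in parts.values())
    return base - remainder

# Toy loan data: (income_level, has_job, class)
data = [("low", "no", "risky"), ("low", "yes", "risky"),
        ("high", "yes", "safe"), ("high", "no", "safe")]
info_gain(data, 0)  # income_level perfectly separates the classes -> 1.0
info_gain(data, 1)  # has_job tells us nothing about the class   -> 0.0
```

At each node, the algorithm greedily picks the attribute with the highest gain, splits the tuples on it, and recurses on each partition — with no backtracking, exactly as described above.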
Bayesian classification is based on Bayes' theorem. Bayesian classifiers are statistical classifiers. Bayesian classifiers can predict class membership probabilities, such as the probability that a given tuple belongs to a particular class. Bayesian belief networks specify joint conditional probability distributions.
A belief network allows class conditional independencies to be defined between subsets of variables. An arc in such a network represents causal knowledge. For example, lung cancer is influenced by a person's family history of lung cancer, as well as by whether or not the person is a smoker. It is worth noting that the variable PositiveXray is independent of whether the patient has a family history of lung cancer or is a smoker, given that we know the patient has lung cancer. In a rule-based (IF-THEN) classifier, the antecedent part (the condition) consists of one or more attribute tests, and these tests are logically ANDed.
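Bayes' theorem itself is a one-liner. Here is a sketch using the smoker/lung-cancer example; all of the probabilities are made-up numbers for illustration, not medical data:

```python
def bayes_posterior(prior, likelihood, evidence):
    """Bayes' theorem: P(class | data) = P(data | class) * P(class) / P(data)."""
    return likelihood * prior / evidence

# Illustrative (invented) numbers:
p_cancer = 0.01              # prior P(cancer)
p_smoker_given_cancer = 0.8  # likelihood P(smoker | cancer)
p_smoker = 0.25              # evidence P(smoker) in the population

posterior = bayes_posterior(p_cancer, p_smoker_given_cancer, p_smoker)
round(posterior, 3)          # -> 0.032, i.e. P(cancer | smoker)
```

A naive Bayesian classifier applies this update once per attribute, multiplying the likelihoods under an attribute-independence assumption; a belief network instead encodes exactly which conditional independencies hold.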
We do not need to generate a decision tree first. In this algorithm, each rule for a given class covers many of the tuples of that class. As per the general strategy, the rules are learned one at a time. Each time a rule is learned, the tuples covered by the rule are removed, and the process continues for the remaining tuples.
This is because the path to each leaf in a decision tree corresponds to a rule. The following is the sequential learning algorithm, where rules are learned for one class at a time. When learning a rule for a class Ci, we want the rule to cover all the tuples from class Ci only, and no tuple from any other class. The assessment of quality is made on the original set of training data. The rule may perform well on the training data but less well on subsequent data. That is why rule pruning is required.
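The one-rule-at-a-time loop can be sketched in Python. The rule representation here (a dict of attribute index → required value, with the tests implicitly ANDed) and the greedy scoring heuristic are simplifications of my own, and no rule pruning is shown:

```python
def covers(rule, row):
    """A rule is a dict {attribute index: required value}; tests are ANDed."""
    return all(row[i] == v for i, v in rule.items())

def learn_one_rule(rows, target):
    """Greedily add attribute tests until the rule covers only `target` tuples."""
    rule = {}
    while any(covers(rule, r) and r[-1] != target for r in rows):
        candidates = [{**rule, i: r[i]} for r in rows
                      for i in range(len(r) - 1) if i not in rule]
        # pick the test covering the most target and fewest non-target tuples
        rule = max(candidates,
                   key=lambda c: sum(covers(c, r) and r[-1] == target for r in rows)
                               - sum(covers(c, r) and r[-1] != target for r in rows))
    return rule

def sequential_covering(rows, target):
    """Learn rules one at a time, removing the tuples each new rule covers."""
    rules, remaining = [], list(rows)
    while any(r[-1] == target for r in remaining):
        rule = learn_one_rule(remaining, target)
        rules.append(rule)
        remaining = [r for r in remaining if not covers(rule, r)]
    return rules

loans = [("low", "no", "risky"), ("high", "yes", "safe"), ("high", "no", "safe")]
sequential_covering(loans, "safe")   # -> [{0: "high"}]
```

On this toy data a single test on attribute 0 covers every "safe" tuple and no "risky" one, so one rule suffices; on real data the outer loop would keep emitting rules until the class is exhausted, and each learned rule would then be pruned against held-out data.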