
                                         Accuracy = Number of correctly classified instances / Total number of instances

               •  Confusion matrix: A confusion matrix provides a breakdown of correct and incorrect classifications for each class.

                                                        Predicted Negative      Predicted Positive
                                    Actual Negative     True Negative (TN)      False Positive (FP)
                                    Actual Positive     False Negative (FN)     True Positive (TP)


               •    Precision: Precision measures the accuracy of positive predictions. It is the ratio of correctly predicted positive
                  observations to the total predicted positives.

                                         Precision = TP / (TP + FP)

                  Precision is most appropriate to use when the cost of false positives is high (a small worked example follows this list).
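
               To see how these formulas work on actual numbers, here is a small worked sketch in Python. The counts (TN = 50, FP = 10, FN = 5, TP = 35) are made-up values chosen only for this illustration.

                    # Assumed counts from a hypothetical confusion matrix (illustrative values only)
                    TN, FP, FN, TP = 50, 10, 5, 35

                    # Accuracy = correctly classified instances / total instances
                    accuracy = (TP + TN) / (TP + TN + FP + FN)

                    # Precision = TP / (TP + FP)
                    precision = TP / (TP + FP)

                    print("Accuracy:", accuracy)     # 0.85
                    print("Precision:", precision)   # 0.7777... (35 / 45)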
               Before evaluating the metrics, you first need to import the metrics module using the following code:

                   from sklearn import metrics
               In Python, the metrics.precision_score() method from the sklearn library is used to calculate the precision of a
               classification model.
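
               As a quick illustration of this method, the snippet below uses made-up binary labels (these values are not taken from the IRIS dataset):

                    from sklearn import metrics

                    # Hypothetical true and predicted labels (illustrative only)
                    y_true = [0, 1, 1, 0, 1, 0, 1, 1]
                    y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

                    # precision_score() returns TP / (TP + FP) for the positive class (label 1)
                    print(metrics.precision_score(y_true, y_pred))   # 0.8 (4 correct out of 5 predicted positives)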
              Let us now evaluate the metrics.

                Program 63: To evaluate the metrics of the IRIS dataset after KNN classification

                   from sklearn.model_selection import train_test_split
                   from sklearn.datasets import load_iris

                   from sklearn.neighbors import KNeighborsClassifier
                   from sklearn import metrics
                   # load dataset
                   iris = load_iris()

                   # separate the data into features and target
                   X = iris.data
                   y = iris.target

                    # Split the data: 80% for training, 20% for testing
                    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

                   # Create a KNN classifier with 3 neighbors
                   knn = KNeighborsClassifier(n_neighbors=3)
                   # Train the KNN classifier on the training data
                   knn.fit(X_train, y_train)
                   # Use the trained classifier to make predictions on the test data

                   y_pred = knn.predict(X_test)
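
               The listing above stops at the prediction step. A minimal sketch of how the metrics could then be evaluated is given below; the average='macro' argument is an assumption made here because IRIS has three classes, and is not part of the original program.

                    # Illustrative continuation (not the textbook listing): evaluate the predictions
                    print("Accuracy:", metrics.accuracy_score(y_test, y_pred))

                    # Confusion matrix for the three IRIS classes
                    print("Confusion matrix:\n", metrics.confusion_matrix(y_test, y_pred))

                    # IRIS is multiclass, so precision_score() needs an averaging strategy such as 'macro'
                    print("Precision:", metrics.precision_score(y_test, y_pred, average='macro'))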

