Before using the KNN classifier, you first need to import the KNeighborsClassifier class using the following code:
from sklearn.neighbors import KNeighborsClassifier
Let us now add a KNN classifier.
Program 62: To add a KNN classifier
# Import necessary libraries
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
# Load the iris dataset
iris = load_iris()
X = iris.data # Features
y = iris.target # Target labels
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
# Create a KNN classifier with 3 neighbors
knn = KNeighborsClassifier(n_neighbors=3)
# Train the KNN classifier on the training data
knn.fit(X_train, y_train)
# Use the trained classifier to make predictions on the test data
y_pred = knn.predict(X_test)
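The array y_pred now holds the predicted class labels for the test set. As a quick check (a small continuation of Program 62, not part of the original program), you can print the first few predictions alongside the actual labels:
# Compare the first few predictions with the actual test labels
print("Predicted labels:", y_pred[:5])
print("Actual labels:", y_test[:5])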
Evaluation Metrics
When working with machine learning models, evaluating their performance using appropriate metrics is crucial to
understand how well the model is performing and to make informed decisions about its effectiveness. Metrics evaluate
how well the model makes predictions, allowing us to better understand its usefulness and identify areas for improvement.
Some important uses of metrics are as follows:
• Model evaluation: Metrics assist in determining how well a model works on a specific dataset. Accuracy, precision, recall, F1-score, and AUC-ROC are some of the most commonly used evaluation metrics.
• Comparison: Metrics enable the comparison of many models or algorithms to identify which one performs best for a given task (a short sketch of such a comparison follows this list).
• Validation: During model construction, metrics are used to assess the model’s performance on distinct training and test sets to ensure that it generalises well to new data.
• Optimisation: Metrics assist hyperparameter tuning and feature selection to optimise model performance.
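For instance, the comparison described above can be carried out on the train/test split from Program 62. The following sketch is an illustrative addition (it assumes the variables X_train, X_test, y_train, and y_test from Program 62 are still available) that compares two KNN models with different values of n_neighbors:
# Train two KNN models with different numbers of neighbours
knn_3 = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
knn_7 = KNeighborsClassifier(n_neighbors=7).fit(X_train, y_train)
# score() returns the mean accuracy on the test data
print("Accuracy with 3 neighbours:", knn_3.score(X_test, y_test))
print("Accuracy with 7 neighbours:", knn_7.score(X_test, y_test))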
There are several metrics you can use to evaluate a model. Some commonly used evaluation metrics are as follows:
• Accuracy: This metric measures the proportion of correctly classified instances out of the total instances. In general, an accuracy of 1.0 (100%) indicates perfect classification, meaning that all instances were classified correctly. Conversely, an accuracy of 0.0 (0%) indicates that none of the instances were classified correctly. A short sketch of computing accuracy follows this item.
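As a minimal sketch (assuming the y_test and y_pred variables from Program 62 are available), accuracy can be computed with scikit-learn's accuracy_score function:
# Import the accuracy metric from scikit-learn
from sklearn.metrics import accuracy_score
# Proportion of test instances that were classified correctly
print("Accuracy:", accuracy_score(y_test, y_pred))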

