
Importance of Model Evaluation

Model evaluation is the process of applying various metrics to assess a machine learning model's performance. An AI model improves over time with constructive feedback. This is an iterative process: you build the model, evaluate its performance using appropriate metrics, refine it based on the feedback, and repeat until the desired accuracy is achieved. It is similar to tuning a musical instrument, where you regularly check its sound quality, make adjustments, and fine-tune until the melody is harmonious and meets the desired standard.
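The build-evaluate-refine cycle described above can be sketched in a few lines of Python. This is a minimal toy illustration, not a real training algorithm: the one-parameter threshold "model", the `refine` step, and all names here are assumptions introduced for illustration only.

```python
def accuracy(model, data):
    """Fraction of labelled samples the model classifies correctly."""
    correct = sum(1 for x, label in data if model(x) == label)
    return correct / len(data)

def refine(threshold):
    # Placeholder refinement step: nudge the model parameter.
    return threshold + 1

def train_until(data, target_accuracy=0.9, max_rounds=10):
    """Build, evaluate, refine, and repeat until accuracy is good enough."""
    threshold = 0
    score = 0.0
    for _ in range(max_rounds):
        # "Build" a toy model: predict 1 when the input exceeds the threshold.
        model = lambda x, t=threshold: 1 if x > t else 0
        score = accuracy(model, data)          # evaluate
        if score >= target_accuracy:           # good enough? stop
            break
        threshold = refine(threshold)          # refine and repeat
    return threshold, score

# Toy data: label is 1 whenever x > 3.
samples = [(x, 1 if x > 3 else 0) for x in range(8)]
best_threshold, best_score = train_until(samples)
print(best_threshold, best_score)  # 3 1.0
```

Each pass through the loop mirrors one lap of the cycle in the text: build, evaluate with a metric (accuracy here), refine, and repeat until the desired standard is met.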

Some of the advantages of evaluating a model are as follows:
   • Evaluation ensures that the model is operating correctly and optimally.
   • Evaluation helps us understand how well the model achieves its goals.
   • Evaluation helps determine what works well and what could be improved in the model.



Human Learning Process:   You learn a subject → You take a test → You assess the result → You strive for better results.

Machine Learning Process:   Training the model with training data → Testing the model with testing data → Evaluating the model → Fine-tuning the model for better performance.

              Need for Model Evaluation

Model evaluation is like giving the AI model a performance review. It helps identify its strengths, pinpoint weaknesses, and determine how well it fits the task at hand. This feedback acts as a guide to refine and improve the model, making it more reliable and trustworthy. The process is continuous, just like fine-tuning a skill to achieve the best results. Depending on the type and purpose of the model being evaluated, there are different evaluation techniques, such as the Train-Test split and the Confusion Matrix.
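To make the Confusion Matrix mentioned above concrete, here is a minimal sketch for a binary classifier. The function name and the toy data are assumptions for illustration; libraries such as scikit-learn provide a ready-made `confusion_matrix`.

```python
def confusion_matrix(actual, predicted):
    """Tally the four outcomes of binary classification:
    TP (true positive), TN (true negative),
    FP (false positive), FN (false negative)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn}

# Toy labels: what actually happened vs. what the model predicted.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]
cm = confusion_matrix(actual, predicted)
print(cm)  # {'TP': 3, 'TN': 3, 'FP': 1, 'FN': 1}
```

From these four counts, metrics such as accuracy can be derived: here (TP + TN) / total = 6 / 8 = 0.75.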


                       Splitting the Training Set Data for Evaluation

Splitting the training set data is a crucial step in model evaluation: it creates distinct datasets for training, validation, and testing, allowing a systematic assessment of the model's performance. Let's look at the Train-Test split in detail.
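The Train-Test split can be sketched with the standard library alone. This is a minimal illustration under assumed names and an assumed 80/20 split ratio; in practice, libraries such as scikit-learn offer a ready-made `train_test_split`.

```python
import random

def train_test_split(data, test_ratio=0.2, seed=42):
    """Shuffle the data, then split it into training and testing subsets.

    Shuffling first ensures both subsets are representative of the whole;
    the seed makes the split reproducible.
    """
    rng = random.Random(seed)
    shuffled = data[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    split_point = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:split_point], shuffled[split_point:]

# Toy dataset: 10 (feature, label) samples, split 80/20.
samples = [(i, i % 2) for i in range(10)]
train, test = train_test_split(samples)
print(len(train), len(test))  # 8 2
```

The model is then trained only on `train` and evaluated only on `test`, so the evaluation measures performance on data the model has never seen.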

