Page 162 - Artificial Intellegence_v2.0_Class_12

Link: https://dialogflow.cloud.google.com/ (This is the free version)
Botsify: Botsify is a chatbot development platform that allows users to create and deploy chatbots for various messaging platforms and websites. It offers a user-friendly interface and provides features to build, customize, and manage chatbots effectively.
Link: https://botsify.com/


                      Brainy Fact


Contrary to popular belief, one of the early adopters of ML is Israel (63%), followed by the Netherlands (57%) and then the United States (56%). (Business Broadway Survey, February 2021)



        Evaluation
Once a model has been created and trained, it must be properly tested to measure its efficiency and performance. Accordingly, the model is evaluated using Testing Data (which was set aside from the acquired dataset during the Data Acquisition stage), and its efficiency is assessed.
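The idea above can be sketched in plain Python. The "trained" model below is a hypothetical stand-in (a simple threshold rule), and the test set is illustrative; the point is only that the model is scored on data it never saw during training.

```python
# Minimal sketch: evaluating a trained model on held-out Testing Data.
# The classifier and test set here are hypothetical, for illustration only.

def predict(features):
    # Stand-in for a trained classifier: flags a sample as class 1
    # if its score feature exceeds a threshold "learned" during training.
    return 1 if features["score"] > 0.5 else 0

# Testing Data: feature dicts paired with true labels, set aside earlier.
test_set = [
    ({"score": 0.9}, 1),
    ({"score": 0.2}, 0),
    ({"score": 0.7}, 1),
    ({"score": 0.4}, 1),  # the model will get this one wrong
]

correct = sum(1 for x, y in test_set if predict(x) == y)
accuracy = correct / len(test_set)
print(f"Accuracy on test data: {accuracy:.2f}")  # 3 of 4 correct -> 0.75
```

Because the model never saw these examples, the score is an honest estimate of how it will behave on new data.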

The set of metrics will differ depending on the problem you are working on. For regression problems, for example, Mean Squared Error (MSE) or Mean Absolute Error (MAE) is commonly used. For a balanced dataset, on the other hand, accuracy may be a useful choice for evaluating a classification model. Imbalanced datasets call for more advanced metrics; in such cases, the F1 score is useful.
A separate validation dataset is used for evaluation during training. It monitors how well the model generalises, helping to avoid bias and overfitting.
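The metrics named above can be written out in a few lines of plain Python. The sample values are illustrative; in practice a library such as scikit-learn provides tested versions of these functions.

```python
# Plain-Python sketches of the evaluation metrics discussed above.

def mse(y_true, y_pred):
    # Mean Squared Error: average of squared differences (regression).
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    # Mean Absolute Error: average of absolute differences (regression).
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred):
    # F1 score: harmonic mean of precision and recall, useful for
    # imbalanced classification datasets.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Regression example
print(mse([3.0, 5.0], [2.0, 7.0]))  # (1 + 4) / 2 = 2.5
print(mae([3.0, 5.0], [2.0, 7.0]))  # (1 + 2) / 2 = 1.5

# Classification example (class 1 is the rare, interesting class)
print(f1_score([1, 0, 0, 0, 1], [1, 0, 0, 1, 0]))  # 0.5
```

Note how F1 ignores the many correct predictions of the majority class (the true negatives), which is exactly why it stays informative when accuracy does not.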

        There are a few other things considered during this stage too:
•   The volume of test data can be huge, which adds to the complexity of handling it.
        •   Human biases in picking test data might have a negative impact on the testing phase; thus, data validation is critical.
•   The testing team should put the AI and ML algorithms through rigorous testing, maintaining model validity and keeping the model's learning ability and the algorithm's efficacy in mind.
        •   As the system may deal with sensitive data, regulatory compliance and security testing are essential.

        •   Also, due to the sheer volume of data, performance testing is critical.
        •   If the AI solution requires data from other systems, systems integration testing is critical.
•   Test data should cover all relevant subsets of the training data, i.e., the data you will use to train the AI system.
        •   The team involved in testing must develop test suites to aid in the validation of the ML models.
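The last point above can be sketched as a tiny test suite: a handful of assertions that a model must pass before it is accepted. The model, the sample data, and the 0.70 accuracy threshold below are all hypothetical.

```python
# A minimal test-suite sketch for validating an ML model before release.
# The "model", test data, and accuracy threshold are illustrative only.

def model(x):
    # Stand-in for a trained model: predicts class 1 for non-negative inputs.
    return 1 if x >= 0 else 0

def test_known_cases():
    # Sanity checks on individual, well-understood inputs.
    assert model(5) == 1
    assert model(-3) == 0

def test_accuracy_threshold():
    # The model must reach a minimum accuracy on held-out data.
    test_set = [(4, 1), (-2, 0), (1, 1), (-1, 1)]  # last label is noisy
    correct = sum(1 for x, y in test_set if model(x) == y)
    assert correct / len(test_set) >= 0.70  # here: 3/4 = 0.75

test_known_cases()
test_accuracy_threshold()
print("all model validation tests passed")
```

In a real project these functions would live in a test runner such as pytest and be re-run automatically whenever the model is retrained.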



                      Brainy Fact

              Training Dataset vs Test Dataset vs Validation Dataset
              Training Dataset: The largest percentage of the original dataset used to train the models.
              Validation Dataset: A subset of the data used to offer an unbiased evaluation of a model's fit on the
              training dataset while tuning model hyperparameters. The models are assessed using the validation
              set to identify the best model.
Test Dataset: A subset of the data used to determine whether the final AI model, which was chosen
in the previous stage, can generalise successfully to fresh, unseen data. Ideally, no data point
should appear in more than one of the training, validation, and testing sets.
              A common split of training, validation, and testing sets is generally 50:25:25.
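The 50:25:25 split described above can be sketched in a few lines of Python. The dataset of 100 dummy records is illustrative; shuffling before slicing keeps any ordering in the original data from biasing the sets.

```python
import random

# Sketch: splitting a dataset 50:25:25 into training, validation, and
# test sets so that no record appears in more than one set.
records = list(range(100))   # stand-in for 100 data records
random.seed(42)              # fixed seed so the split is repeatable
random.shuffle(records)      # shuffle first to avoid order bias

n = len(records)
train = records[: n // 2]                   # first 50% -> training set
validation = records[n // 2 : 3 * n // 4]   # next 25%  -> validation set
test = records[3 * n // 4 :]                # final 25% -> test set

print(len(train), len(validation), len(test))  # 50 25 25
```

Because each record lands in exactly one slice, the three sets are guaranteed to be disjoint, which is the property the fact box asks for.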



              160     Touchpad Artificial Intelligence (Ver. 2.0)-XII