    # the square root of the mean of the squared differences is the RMSE
    mean_diff_sq = diff_sq.mean()
    rmse_val = np.sqrt(mean_diff_sq)
    return rmse_val

print("predicted values are: " + str(["%.4f" % i for i in y_pred]))
print("actual values are: " + str(["%.4f" % i for i in y_true]))

rmse_val = rmse(y_pred, y_true)
print("RMSE Error is: " + str(rmse_val))
Output:
predicted values are: ['...', '...', '...']
actual values are: ['...', '...', '...']
RMSE Error is: ...
If you get a large RMSE value, you will most likely need to alter your features or tweak your hyperparameters.
Mean Absolute Percentage Error (MAPE)
The accuracy of a forecasting technique is determined by the mean absolute percentage error (MAPE). It represents the average of the absolute percentage errors of each entry in a dataset. Large data sets may typically be effectively analysed using MAPE, which requires that the dataset values be other than zero.
MAPE is significant because it may assist a company in creating more precise projections for upcoming projects.
MAPE = (1/n) * Σ(|actual value - predicted value| / |actual value|) * 100
Since MAPE displays the error as a percentage, it is simple to comprehend. For instance, a MAPE of 10% indicates that, on average, the predicted values differ from the actual values by 10%. Additionally, when using absolute percentage errors, the problem of positive and negative errors cancelling each other out is eliminated. The forecast is more accurate with smaller values of MAPE.
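As a minimal sketch of this formula in code (the mape() helper name and the sample values here are illustrative, not taken from the text), assuming NumPy is available:

import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error, expressed in percent.
    # The actual values must be non-zero because they appear in the denominator.
    y_true = np.array(y_true, dtype=float)
    y_pred = np.array(y_pred, dtype=float)
    return np.mean(np.abs(y_true - y_pred) / np.abs(y_true)) * 100

actual    = [100, 200, 300]   # illustrative actual values
predicted = [110, 190, 330]   # illustrative forecasts
print("MAPE is:", mape(actual, predicted))   # average of 10%, 5%, 10% = 8.33%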
Limitation of MAPE
The MAPE penalises negative errors with greater intensity than positive ones. So, it will choose a method whose values are by default too low when comparing the accuracy of prediction methods.
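A small worked example makes this asymmetry concrete: with the same absolute error of 50, an actual value of 100 and a forecast of 50 give a percentage error of 50%, whereas an actual value of 50 and a forecast of 100 give 100%. The over-forecast is penalised twice as heavily because the smaller actual value sits in the denominator, so MAPE tends to favour methods that forecast too low.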
Hyperparameters
Hyperparameters are parameters whose values govern the learning process. They also determine the values of the model parameters learned by a learning algorithm. They are 'top level' parameters that regulate the learning process and the model parameters that come from it, as the prefix 'hyper' suggests. Since the model cannot modify its values during learning/training, hyperparameters are said to be external to the model. Some examples of hyperparameters are:
• The ratio of the train-test split
• The optimisation algorithm's learning rate (e.g. gradient descent)
• In a neural network, the activation function selected (e.g. Sigmoid, ReLU, Tanh)
• The loss function that the model will employ
• A neural network's number of hidden layers
• The number of iterations (epochs) required to train a neural network
• A clustering task's number of clusters
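As a minimal sketch (assuming scikit-learn is available; the random data below is purely illustrative), several of the hyperparameters listed above appear directly as values chosen before training begins:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical data just for illustration: 200 samples, 4 features, 2 classes
X = np.random.rand(200, 4)
y = np.random.randint(0, 2, size=200)

# Hyperparameter: ratio of the train-test split (here 80:20)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Hyperparameters of the neural network are fixed before training starts
model = MLPClassifier(
    hidden_layer_sizes=(16, 8),   # number and size of hidden layers
    activation='relu',            # activation function (ReLU)
    learning_rate_init=0.01,      # learning rate for the optimiser
    max_iter=300                  # number of training iterations (epochs)
)
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))

The model parameters (the network's weights) are learned by fit(), while the values passed above are set by hand before training, which is what makes them hyperparameters.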