[Figure: Input Feature Map → ReLU → Rectified Feature Map. Black = negative values, white = positive values; after ReLU, only non-negative values remain.]
In the resulting feature map after applying ReLU:
When the ReLU activation function is applied, it replaces every negative value with zero, flattening the regions where there is no significant change or where the pixel values fall below zero. As a result, only the positive values are kept, and the transitions between dark and light areas become more defined, enhancing the edges and features in the feature map.
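To make this concrete, here is a minimal sketch of ReLU applied to a small feature map using NumPy. The matrix values are invented for illustration; only the ReLU operation itself (element-wise max(x, 0)) comes from the text.

```python
import numpy as np

# Hypothetical 3x3 feature map; negative entries correspond to the
# dark (black) regions in the figure, positive entries to the light ones.
feature_map = np.array([
    [-3,  5, -1],
    [ 2, -7,  4],
    [-2,  6, -5],
])

# ReLU keeps positive values unchanged and replaces every negative value with 0.
rectified = np.maximum(feature_map, 0)

print(rectified)
# [[0 5 0]
#  [2 0 4]
#  [0 6 0]]
```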
Pooling Layer
This layer reduces the dimensions of the input image while still retaining the important features. This helps make the representation more resistant to small transformations, distortions and translations. All of this reduces the number of parameters and computation in the network, making it more manageable and improving the efficiency of the whole system.
For example, if an image of an animal is given as an input to the CNN, then by retaining just the shape of the eyes, ears and face it is easy to identify the animal. Keeping all the features would increase the processing time and cause the model to become more complex and prone to overfitting.
[Figure: A 4×4 feature map
13  25  45   4
11  19  17  26
36 110  86  10
79 115  19  21
Max pooling (2×2) yields 25, 45 / 115, 86; average pooling yields 17, 23 / 85, 34.]
There are two types of pooling:
• Max Pooling: Max Pooling is the most commonly used method. It selects the maximum value within the current window of the feature map, preserving the strongest detected features.
• Average Pooling: Average Pooling computes the average value of the current window and thus downsamples the feature map (see the worked sketch after this list).
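As a worked example, the sketch below reproduces the figure values: 2×2 max pooling and 2×2 average pooling applied with NumPy to the 4×4 feature map shown above. The stride of 2 (non-overlapping windows) is an assumption, since the figure does not state it.

```python
import numpy as np

# The 4x4 feature map from the pooling figure above.
feature_map = np.array([
    [ 13,  25,  45,   4],
    [ 11,  19,  17,  26],
    [ 36, 110,  86,  10],
    [ 79, 115,  19,  21],
])

# Split the map into non-overlapping 2x2 windows (stride 2 is assumed).
# Resulting shape: (row block, column block, 2, 2).
windows = feature_map.reshape(2, 2, 2, 2).swapaxes(1, 2)

max_pooled = windows.max(axis=(2, 3))    # [[ 25  45]
                                         #  [115  86]]
avg_pooled = windows.mean(axis=(2, 3))   # [[17. 23.]
                                         #  [85. 34.]]

print(max_pooled)
print(avg_pooled)
```

Max pooling keeps the strongest response in each window (25, 45, 115, 86), while average pooling smooths each window down to its mean (17, 23, 85, 34).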
[Figure: Pooling (max, sum) applied to the rectified feature map, which contains only non-negative values.]

