
A convolutional neural network consists of the following layers:
                 • Convolution Layer
                 • Rectified Linear Unit (ReLU)
                 • Pooling Layer
                 • Fully Connected Layer
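
As a minimal sketch of how these four layers can be stacked (assuming TensorFlow/Keras; the filter count, image shape, and number of output classes are illustrative and not taken from this chapter):

from tensorflow.keras import layers, models

# Illustrative CNN: convolution with ReLU activation, pooling, then a fully connected layer
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),               # a 64x64 RGB image (assumed size)
    layers.Conv2D(16, (3, 3), activation="relu"),  # convolution layer + ReLU
    layers.MaxPooling2D((2, 2)),                   # pooling layer
    layers.Flatten(),                              # flatten the feature maps into a vector
    layers.Dense(10, activation="softmax"),        # fully connected (output) layer
])
model.summary()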
              Convolution Layer


The convolutional layer is the first layer in a Convolutional Neural Network (CNN) and plays a crucial role in processing visual data such as images. Its main objective is to extract key features from the input image, starting with low-level features like edges, textures, colours, and gradients. These features serve as the building blocks for the network to understand the content of the image.

In a CNN, this layer is not limited to a single convolution operation. As the network deepens, additional convolutional layers are added, each progressively capturing more complex patterns or high-level features such as shapes, objects, and contextual patterns. This enables the CNN to recognise increasingly sophisticated and abstract features, ultimately allowing it to understand entire images.

The convolution operation applies multiple kernels to an image, each extracting a different feature. The result of this process is called a feature map (or activation map), which highlights the important features detected by the kernels.
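
As a rough illustration (using NumPy; the image values and the vertical-edge kernel below are made up for this example), a single kernel can be slid over an image to build a feature map:

import numpy as np

def convolve2d(image, kernel):
    # Slide the kernel over the image (stride 1, no padding) and sum the element-wise products
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            feature_map[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return feature_map

# A 6x6 grayscale image with a vertical edge down the middle (illustrative values)
image = np.array([[10, 10, 10, 0, 0, 0]] * 6, dtype=float)

# A 3x3 kernel that responds strongly to vertical edges
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

print(convolve2d(image, kernel))  # 4x4 feature map; large values mark where the edge was detected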

              The feature map has several key functions:
   • Image Size Reduction: It reduces the amount of data to be processed, often with the help of the pooling layers that follow the convolutional layers, making the image easier and faster to process while retaining crucial information.
   • Focus on Relevant Features: It helps the network focus on the most important features needed for further image processing. For example, in plant disease detection, the model may only need to focus on the patterns on the leaves (such as discolouration or spots) rather than analysing the entire plant. By emphasising these specific features of the leaves, the model can efficiently and accurately detect diseases like fungal infections or bacterial blight, even in a crowded field of plants.

[Figure: An input image and the feature map produced by convolution (Input, Feature map)]

              Rectified Linear Unit Function

After the convolutional layer generates the feature map, the next step is to pass it through the Rectified Linear Unit (ReLU) layer.
ReLU is an activation function that introduces non-linearity into the model. Its primary job is to remove negative values from the feature map by setting them to zero while leaving positive values unchanged; in other words, ReLU(x) = max(0, x).
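
A minimal sketch of this step (using NumPy; the feature-map values are made up), applying ReLU element-wise as max(0, x):

import numpy as np

# A small feature map containing some negative values (illustrative numbers)
feature_map = np.array([[ 4.0, -2.5,  0.0],
                        [-1.2,  3.3, -0.7],
                        [ 2.1, -4.4,  5.6]])

# ReLU sets every negative value to zero and leaves positive values unchanged
activated = np.maximum(0, feature_map)
print(activated)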

