Exercise (Practical)
B. Long answer type questions.
1. Explain the task used in Computer Vision applications for multiple objects.
2. How does Computer Vision play a key role in retail stores?
3. What is a Computer Vision task for a single object?
4. Enlist two smartphone apps that utilise Computer Vision technology. How have these apps improved your efficiency or convenience in daily tasks? [CBSE Handbook]
5. What do you mean by RGB images?
C. Competency-based/Application-based questions. (21st Century Skills: #Critical Thinking, #Flexibility)
1. An autonomous drone is delivering packages in a rural area. The drone uses Computer Vision to navigate safely around trees and power lines. Which of the following technologies ensures the drone’s safe flight?
   a. The drone uses GPS to avoid collisions with trees and power lines.
   b. The drone uses cameras and sensors to detect obstacles, and Computer Vision processes the data to adjust its flight path in real time.
   c. The drone relies on a manual operator to avoid obstacles.
   d. The drone uses radar to detect objects and adjust its flight path.
2. You are tasked with developing a Computer Vision system for a self-driving car company. The system needs to accurately detect and classify various objects on the road to ensure safe navigation. Imagine you’re working on improving the object detection algorithm for the self-driving car’s Computer Vision system. During testing, you notice that the system occasionally misclassifies pedestrians as cyclists, especially in low-light conditions. How would you approach addressing this issue? What steps would you take to enhance the accuracy of pedestrian detection while ensuring the system’s overall performance and reliability on the road? [CBSE Handbook] (An illustrative sketch follows this set of questions.)
3. “Imagine you’re a researcher tasked with improving workplace safety in a manufacturing environment. You decide to employ Computer Vision technology to enhance safety measures.” [CBSE Handbook]
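For question 2 above, one hedged, illustrative direction (not part of the original question) is to augment the training data with darkened copies of pedestrian images, so the detector sees more low-light examples before it is retrained and re-evaluated. The sketch below assumes Python with the Pillow library; the function name, file paths and brightness factor are placeholder choices, not values from the textbook.

# Illustrative sketch only: create low-light training samples by darkening
# existing pedestrian images. Assumes Pillow is installed; paths are placeholders.
from PIL import Image, ImageEnhance

def make_low_light_copy(src_path: str, dst_path: str, factor: float = 0.4) -> None:
    """Save a darkened copy of an image; a factor below 1.0 reduces brightness."""
    image = Image.open(src_path)
    darkened = ImageEnhance.Brightness(image).enhance(factor)
    darkened.save(dst_path)

# Hypothetical usage:
# make_low_light_copy("pedestrian_001.jpg", "pedestrian_001_dark.jpg", factor=0.3)

In an answer, this kind of data augmentation would typically be combined with collecting more labelled night-time footage and testing the retrained model separately on low-light images.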
Assertion and Reasoning questions.
Direction: Questions 4-5 consist of two statements – Assertion (A) and Reasoning (R). Answer these questions by selecting the appropriate option given below:
a. Both A and R are correct and R is the correct explanation of A.
b. Both A and R are correct but R is NOT the correct explanation of A.
c. A is correct but R is incorrect.
d. A is incorrect but R is correct.
4. Assertion (A): Object detection is a more complex task than image classification because it involves identifying both the presence and location of objects in an image.
   Reasoning (R): Object detection algorithms need to not only classify the objects present in an image but also accurately localise them by determining their spatial extent. [CBSE Handbook]
5. Assertion (A): Grayscale images consist of shades of gray ranging from black to white, where each pixel is represented by a single byte, and the size of the image is determined by its height multiplied by its width.
   Reasoning (R): Grayscale images are represented using three intensities per pixel, typically ranging from 0 to 255. [CBSE Handbook]
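As a quick arithmetic check related to question 5 above (and to the RGB question in Part B), the storage claim in the Assertion can be verified directly: a grayscale pixel holds a single intensity value, while an RGB pixel holds three. A minimal sketch, assuming Python with NumPy installed; the 4 × 5 image dimensions are arbitrary examples.

# Minimal sketch: compare per-pixel storage of grayscale vs RGB images.
# Assumes NumPy is installed; the 4 x 5 size is an arbitrary example.
import numpy as np

height, width = 4, 5
grayscale = np.zeros((height, width), dtype=np.uint8)     # one intensity per pixel
rgb = np.zeros((height, width, 3), dtype=np.uint8)        # three values (R, G, B) per pixel

print(grayscale.nbytes)   # 20 -> height * width bytes
print(rgb.nbytes)         # 60 -> height * width * 3 bytes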
Solved Questions
SECTION A (Objective Type Questions)
Tick (✓) the correct option.
1. Neural Networks are a series of algorithms used to recognise ……………… in raw data.
   a. Visible patterns   b. Hidden patterns
   c. Simple shapes   d. Random noise
2. The Pooling Layer reduces the ……………… of the input image while retaining the important features.
   a. Quality   b. Dimensions
   c. Colour depth   d. Contrast
3. ……………… selects the maximum value of the current image view and helps preserve the maximum detected features.
   a. Average Pooling   b. Mean Pooling
   c. Max Pooling   d. Min Pooling
4. ……………… helps make the model more robust to variations in the input image.
   a. Rectified Linear Unit   b. CNN
   c. Convolution Layer   d. Pooling
5. No-Code AI tools provide which type of interface?
   a. Command-line interface   b. Drag-and-drop visual interface
   c. Text-based programming environment   d. None of these
6. Teachable Machine is built on which JavaScript library?
   a. TensorFlow.js   b. Game development
   c. Web development   d. Hardware programming
7. In which layer are the final outputs of a CNN predicted?
   a. Convolution Layer   b. Rectified Linear Unit (ReLU) Layer
   c. Pooling Layer   d. Fully Connected Layer
8. What is the result of applying ReLU to the feature map?
   a. All the feature map values are doubled
   b. All negative values are turned to zero and positive values are kept unchanged
   c. The feature map is resized
   d. The image is sharpened
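The behaviour asked about in questions 2, 3 and 8 of Section A can be seen on a tiny example: ReLU turns the negative values of a feature map to zero while keeping positive values unchanged, and max pooling keeps only the largest value in each window, reducing the dimensions while preserving the strongest detected features. A minimal sketch, assuming Python with NumPy; the 4 × 4 values are made up for illustration.

# Illustrative sketch: ReLU and 2x2 max pooling on a tiny, made-up feature map.
# Assumes NumPy is installed; the values are arbitrary, not textbook data.
import numpy as np

feature_map = np.array([[ 1, -2,  3, -4],
                        [-5,  6, -7,  8],
                        [ 9, -1,  2, -3],
                        [-6,  4, -8,  5]])

# ReLU: negative values become zero, positive values stay the same.
relu_out = np.maximum(feature_map, 0)

# 2x2 max pooling: take the maximum of each 2x2 block,
# halving the height and width of the feature map.
pooled = relu_out.reshape(2, 2, 2, 2).max(axis=(1, 3))

print(relu_out)
print(pooled)   # 2x2 output: [[6, 8], [9, 5]]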

