Q. In the context of neural networks, what does 'epoch' refer to?
A. A single pass through the training dataset
B. The number of layers in the network
C. The learning rate adjustment
D. The size of the training batch
Solution
An epoch refers to one complete pass through the entire training dataset during the training process.
Correct Answer: A — A single pass through the training dataset
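For illustration, a minimal PyTorch-style training loop over toy random data (the model, data, and names such as loader are placeholder assumptions): each iteration of the outer loop is one epoch, i.e. one full pass over every batch in the training set.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(256, 10)          # toy inputs (placeholder data)
y = torch.randn(256, 1)           # toy targets
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(5):            # 5 epochs = 5 full passes over the data
    for xb, yb in loader:         # one pass over all batches = one epoch
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()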
Q. In the context of neural networks, what does 'overfitting' mean?
A. The model performs well on training data but poorly on unseen data
B. The model is too simple to capture the underlying patterns
C. The model has too few parameters
D. The model is trained on too much data
Solution
Overfitting occurs when a model learns the training data too well, including noise, leading to poor generalization.
Correct Answer: A — The model performs well on training data but poorly on unseen data
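A simple way to see the same effect outside of neural networks is a high-degree polynomial fit: with enough capacity the model nearly memorizes the noisy training points, so training error is tiny while error on unseen points is much larger. A rough NumPy sketch with toy data (the degree and noise level are arbitrary choices for illustration):

import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=10)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

# A degree-9 polynomial has enough capacity to pass through every noisy
# training point (near-zero training error) but generalizes poorly.
coeffs = np.polyfit(x_train, y_train, deg=9)
train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(f"train MSE: {train_err:.4f}  test MSE: {test_err:.4f}")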
Q. What is the purpose of batch normalization in neural networks?
A. To increase the number of training epochs
B. To normalize the input features
C. To stabilize and accelerate training
D. To reduce the size of the model
Solution
Batch normalization helps stabilize and accelerate the training process by normalizing the inputs to each layer.
Correct Answer: C — To stabilize and accelerate training
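A minimal sketch of where batch normalization typically sits in a PyTorch model (the layer sizes are arbitrary assumptions): BatchNorm1d normalizes each hidden feature over the current mini-batch before the activation, which tends to stabilize and speed up training.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.BatchNorm1d(64),   # normalize the 64 hidden features per mini-batch
    nn.ReLU(),
    nn.Linear(64, 1),
)

x = torch.randn(32, 20)   # batch of 32 examples, 20 input features
out = model(x)
print(out.shape)          # torch.Size([32, 1])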
Q. What is the purpose of dropout in neural networks?
A. To increase the learning rate
B. To prevent overfitting
C. To enhance feature extraction
D. To reduce computational cost
Solution
Dropout is a regularization technique used to prevent overfitting by randomly dropping units during training.
Correct Answer: B — To prevent overfitting
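A minimal PyTorch sketch (arbitrary layer sizes): Dropout zeroes each activation with probability p while the model is in training mode and is disabled in evaluation mode.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),    # drop roughly half the activations each forward pass
    nn.Linear(64, 1),
)

x = torch.randn(8, 20)
model.train()             # dropout active during training
y_train_mode = model(x)
model.eval()              # dropout disabled for inference
y_eval_mode = model(x)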
Q. What is the role of the loss function in a neural network?
A. To measure the accuracy of predictions
B. To calculate the gradients for backpropagation
C. To initialize the weights
D. To determine the architecture of the network
Solution
The loss function quantifies how far the network's predictions are from the actual target values; its gradients with respect to the weights are computed during backpropagation and drive the weight updates.
Correct Answer: B — To calculate the gradients for backpropagation
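A short PyTorch sketch with a toy linear model: the loss is a scalar measuring prediction error, and calling backward() on it computes the gradients that backpropagation uses to update the weights. Sizes and data here are placeholder assumptions.

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()                      # the loss function

x = torch.randn(16, 10)
target = torch.randn(16, 1)

loss = loss_fn(model(x), target)            # scalar measuring prediction error
loss.backward()                             # gradients of the loss w.r.t. the weights
print(model.weight.grad.shape)              # torch.Size([1, 10])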
Q. What is the role of the output layer in a neural network?
A. To process input data
B. To extract features
C. To produce the final predictions
D. To apply regularization
Solution
The output layer produces the final predictions of the neural network based on the learned features.
Correct Answer: C — To produce the final predictions
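A minimal PyTorch sketch, assuming a 3-class classification setup with arbitrary sizes: the hidden layers transform the input, and the final Linear layer is the output layer that produces one score per class.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 3),          # output layer: 3 class scores
)

x = torch.randn(4, 20)
logits = model(x)
preds = logits.argmax(dim=1)   # final predicted class per example
print(preds)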
Q. Which of the following describes a convolutional neural network (CNN)?
A. A network designed for sequential data
B. A network that uses convolutional layers for image processing
C. A network that only uses fully connected layers
D. A network that does not require any training
Solution
CNNs are specifically designed to process and analyze visual data using convolutional layers.
Correct Answer: B — A network that uses convolutional layers for image processing
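A tiny PyTorch CNN sketch (shapes chosen for 28x28 grayscale images as an assumption): convolutional layers slide learned filters over the image, and a final fully connected layer maps the resulting features to class scores.

import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 1 input channel -> 8 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                  # 10 class scores
)

images = torch.randn(4, 1, 28, 28)               # batch of 4 grayscale images
print(cnn(images).shape)                         # torch.Size([4, 10])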
Q. Which of the following is a common activation function used in hidden layers of neural networks?
A. Softmax
B. ReLU
C. Mean Squared Error
D. Cross-Entropy
Solution
ReLU (Rectified Linear Unit) is commonly used in hidden layers due to its simplicity and effectiveness.
Correct Answer: B — ReLU
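ReLU is simple enough to write directly; a NumPy sketch:

import numpy as np

def relu(x):
    # ReLU: pass positive values through, clamp negatives to zero.
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))
# [0.  0.  0.  1.5 3. ]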
Q. Which of the following is a common loss function used for regression tasks in neural networks?
A. Binary Cross-Entropy
B. Categorical Cross-Entropy
C. Mean Squared Error
D. Hinge Loss
Solution
Mean Squared Error (MSE) is commonly used as a loss function for regression tasks.
Correct Answer: C — Mean Squared Error
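A NumPy sketch of MSE on toy values: it averages the squared differences between predictions and targets.

import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: average of squared prediction errors.
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])
print(mse(y_true, y_pred))   # 0.375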
Q. Which of the following is a common optimization algorithm used in training neural networks?
A. K-Means
B. Gradient Descent
C. Principal Component Analysis
D. Support Vector Machine
Solution
Gradient Descent is a widely used optimization algorithm for minimizing the loss function in neural networks.
Correct Answer: B — Gradient Descent
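A toy NumPy sketch of gradient descent on a one-parameter least-squares problem (the learning rate and iteration count are arbitrary choices): the weight is repeatedly nudged in the direction opposite to the gradient of the loss.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                               # true weight is 2.0

w = 0.0                                   # initial guess
lr = 0.01                                 # learning rate
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)   # d(MSE)/dw
    w -= lr * grad                        # step opposite to the gradient
print(round(w, 3))                        # converges close to 2.0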
Q. Which of the following optimizers is commonly used in training neural networks?
A. Stochastic Gradient Descent
B. K-Means
C. Principal Component Analysis
D. Support Vector Machine
Solution
Stochastic Gradient Descent (SGD) is a widely used optimizer for training neural networks.
Correct Answer: A — Stochastic Gradient Descent
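A minimal sketch of one SGD update in PyTorch (toy model and random batch; the momentum value is just an illustrative choice): SGD updates the weights after each mini-batch using the gradient of the loss on that batch.

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = nn.MSELoss()(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()                 # one SGD update of the weights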
Q. Which of the following techniques is used to prevent overfitting in neural networks?
A. Increasing the learning rate
B. Using dropout layers
C. Reducing the number of layers
D. Using a larger batch size
Solution
Dropout layers randomly deactivate neurons during training, which helps prevent overfitting.
Correct Answer: B — Using dropout layers
Q. Which optimization algorithm is commonly used to minimize the loss function in neural networks?
A. Gradient Descent
B. K-Means
C. Principal Component Analysis
D. Random Forest
Solution
Gradient Descent is the most commonly used optimization algorithm for minimizing the loss function in neural networks.
Correct Answer: A — Gradient Descent
Q. Which type of neural network is specifically designed for image processing?
A. Recurrent Neural Network
B. Convolutional Neural Network
C. Generative Adversarial Network
D. Feedforward Neural Network
Solution
Convolutional Neural Networks (CNNs) are specifically designed for processing and analyzing visual data.
Correct Answer: B — Convolutional Neural Network