Q. What is overfitting in the context of neural networks?
A. When the model performs well on training data but poorly on unseen data
B. When the model has too few parameters
C. When the model is too simple
D. When the model learns too slowly
Solution
Overfitting occurs when a model learns the training data too well, capturing noise instead of the underlying pattern, so it generalizes poorly to unseen data.
Correct Answer: A — When the model performs well on training data but poorly on unseen data
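For illustration, a minimal self-contained sketch (PyTorch assumed; the random toy dataset and layer sizes are made up for the example): because the labels are pure noise, an oversized network can only memorize them, so training accuracy ends up high while accuracy on unseen data stays near chance.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Hypothetical toy data: 20 random features, random binary labels (pure noise).
X_train, y_train = torch.randn(64, 20), torch.randint(0, 2, (64,))
X_test,  y_test  = torch.randn(64, 20), torch.randint(0, 2, (64,))  # unseen data

# An oversized network relative to the 64 training samples.
model = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for _ in range(500):  # train long enough to memorize the noise
    opt.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    opt.step()

def accuracy(X, y):
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()

print("train accuracy:", accuracy(X_train, y_train))  # high: the noise is memorized
print("test accuracy: ", accuracy(X_test, y_test))    # near 0.5, i.e. chance level
```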
Q. What is the purpose of using a validation set during training of a neural network?
A. To train the model
B. To evaluate the model's performance during training
C. To test the model after training
D. To optimize the learning rate
Solution
The validation set is used to monitor the model's performance during training and to tune hyperparameters.
Correct Answer: B — To evaluate the model's performance during training
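For illustration, a minimal sketch (PyTorch assumed; the synthetic dataset, split sizes, and the "best_model.pt" filename are made up for the example): the validation split is never used to update weights, only to monitor progress and select the best checkpoint.

```python
import torch
from torch.utils.data import TensorDataset, random_split, DataLoader

# Hypothetical toy dataset: 1000 samples, 20 features, 2 classes.
X, y = torch.randn(1000, 20), torch.randint(0, 2, (1000,))
train_set, val_set = random_split(TensorDataset(X, y), [800, 200])  # 80/20 split
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

model = torch.nn.Sequential(torch.nn.Linear(20, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
loss_fn = torch.nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

best_val = float("inf")
for epoch in range(10):
    model.train()
    for xb, yb in train_loader:          # weights are updated only on training batches
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    model.eval()
    with torch.no_grad():                # validation pass: evaluation only
        val_loss = sum(loss_fn(model(xb), yb).item() for xb, yb in val_loader) / len(val_loader)
    if val_loss < best_val:              # keep the checkpoint that generalizes best
        best_val = val_loss
        torch.save(model.state_dict(), "best_model.pt")
```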
Q. What is the role of the hidden layers in a neural network?
A. To provide input data
B. To perform computations and extract features
C. To produce the final output
D. To initialize weights
Solution
Hidden layers perform computations and extract features from the input data, enabling the network to learn complex representations.
Correct Answer: B — To perform computations and extract features
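For illustration, a minimal sketch (PyTorch assumed; the 784-10 sizing is an arbitrary example, e.g. flattened 28x28 images with 10 classes): the two middle Linear+ReLU blocks are the hidden layers that turn raw inputs into intermediate features before the output layer produces the final scores.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),  # hidden layer 1: computes features from the raw input
    nn.ReLU(),
    nn.Linear(256, 64),   # hidden layer 2: combines those features further
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer: produces the final class scores
)

x = torch.randn(1, 784)   # e.g. one flattened 28x28 image
scores = model(x)         # shape (1, 10)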
Q. Which of the following techniques can help prevent overfitting in neural networks?
A. Increasing the learning rate
B. Using dropout
C. Reducing the number of layers
D. Using a linear activation function
Solution
Dropout is a regularization technique that randomly sets a fraction of input units to zero during training to prevent overfitting.
Correct Answer: B — Using dropout
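For illustration, a minimal sketch (PyTorch assumed; layer sizes and the dropout rate are arbitrary): during training each hidden unit is zeroed with probability p=0.5, so the network cannot rely on any single unit; in evaluation mode dropout is disabled.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),    # active only in model.train() mode
    nn.Linear(256, 10),
)

x = torch.randn(4, 784)
model.train()
train_out = model(x)      # dropout applied: random units zeroed each forward pass
model.eval()
eval_out = model(x)       # dropout disabled: deterministic output
```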
Q. Which optimization algorithm is commonly used to update weights in neural networks?
A. K-means
B. Stochastic Gradient Descent
C. Principal Component Analysis
D. Random Forest
Solution
Stochastic Gradient Descent (SGD) is a popular optimization algorithm used to update weights in neural networks based on the gradient of the loss function.
Correct Answer: B — Stochastic Gradient Descent
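For illustration, a minimal sketch of one SGD step (PyTorch assumed; the mini-batch is random toy data): compute the loss, backpropagate the gradients, and let the optimizer update each weight as w <- w - lr * grad.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(32, 10)    # hypothetical mini-batch of 32 samples
y = torch.randn(32, 1)

opt.zero_grad()            # clear gradients from the previous step
loss = loss_fn(model(x), y)
loss.backward()            # compute dLoss/dWeight via backpropagation
opt.step()                 # apply the gradient-descent update to the weights
```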