Q. In the context of evaluation metrics, what does recall measure?
A. The ability of a model to identify all relevant instances
B. The ability of a model to avoid false positives
C. The overall accuracy of the model
D. The balance between precision and recall

Solution: Recall, also known as sensitivity, measures the proportion of actual positives that are correctly identified.
Correct Answer: A — The ability of a model to identify all relevant instances
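
A minimal sketch of the recall computation, assuming scikit-learn is available; the toy labels are made up for illustration:

    # Recall = TP / (TP + FN): the share of actual positives the model finds.
    from sklearn.metrics import recall_score

    y_true = [1, 1, 1, 0, 0, 1]   # 4 actual positives
    y_pred = [1, 0, 1, 0, 0, 1]   # the model finds 3 of them (1 false negative)

    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    print(tp / (tp + fn))                 # 0.75
    print(recall_score(y_true, y_pred))   # 0.75
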
Q. What does precision indicate in a classification task?
A. The ratio of true positives to the sum of true positives and false negatives
B. The ratio of true positives to the sum of true positives and false positives
C. The ratio of true negatives to the sum of true negatives and false positives
D. The overall correctness of the model

Solution: Precision measures the accuracy of positive predictions, calculated as true positives divided by the sum of true positives and false positives.
Correct Answer: B — The ratio of true positives to the sum of true positives and false positives
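
A minimal sketch of the precision computation, again assuming scikit-learn and made-up labels:

    # Precision = TP / (TP + FP): of everything flagged positive,
    # how much was actually positive.
    from sklearn.metrics import precision_score

    y_true = [1, 0, 1, 0, 0, 1]
    y_pred = [1, 1, 1, 0, 1, 1]   # 5 positive predictions, 2 are wrong

    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

    print(tp / (tp + fp))                    # 0.6
    print(precision_score(y_true, y_pred))   # 0.6
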
Q. What is the main advantage of using F1 Score over accuracy?
A. It considers both precision and recall
B. It is easier to interpret
C. It is always higher than accuracy
D. It is not affected by class imbalance

Solution: F1 Score balances precision and recall, making it more informative than accuracy when classes are imbalanced or when false positives and false negatives carry different costs.
Correct Answer: A — It considers both precision and recall
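
A minimal sketch of the F1 computation as the harmonic mean of precision and recall (scikit-learn assumed, toy labels made up):

    from sklearn.metrics import f1_score, precision_score, recall_score

    y_true = [1, 1, 1, 1, 0, 0]
    y_pred = [1, 1, 0, 0, 1, 0]

    p = precision_score(y_true, y_pred)   # 2/3
    r = recall_score(y_true, y_pred)      # 2/4
    print(2 * p * r / (p + r))            # harmonic mean, about 0.571
    print(f1_score(y_true, y_pred))       # same value
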
Q. What is the main drawback of using accuracy as a performance metric?
A. It does not consider false positives and false negatives
B. It is difficult to calculate
C. It is only applicable to binary classification
D. It requires a large dataset

Solution: Accuracy can be misleading, especially on imbalanced datasets, as it does not account for the distribution of classes.
Correct Answer: A — It does not consider false positives and false negatives
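
A minimal sketch of the drawback, using a hypothetical 99:1 dataset: a model that always predicts the majority class scores 99% accuracy while missing every positive.

    from sklearn.metrics import accuracy_score, recall_score

    y_true = [0] * 99 + [1]   # only 1% of samples are positive
    y_pred = [0] * 100        # always predict the majority class

    print(accuracy_score(y_true, y_pred))   # 0.99, looks excellent
    print(recall_score(y_true, y_pred))     # 0.0, misses the only positive
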
Q. Which evaluation metric is most appropriate for a multi-class classification problem?
A. Accuracy
B. F1 Score
C. Log Loss
D. All of the above

Solution: All of these metrics can be used to evaluate multi-class classification problems, depending on the specific context and requirements.
Correct Answer: D — All of the above
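
A minimal sketch showing all three metrics applied to a hypothetical 3-class problem (scikit-learn assumed); note that log loss additionally needs predicted probabilities:

    from sklearn.metrics import accuracy_score, f1_score, log_loss

    y_true = [0, 1, 2, 2, 1]
    y_pred = [0, 1, 2, 1, 1]
    y_prob = [[0.8, 0.1, 0.1],   # per-class predicted probabilities,
              [0.2, 0.7, 0.1],   # required by log loss
              [0.1, 0.2, 0.7],
              [0.2, 0.5, 0.3],
              [0.1, 0.8, 0.1]]

    print(accuracy_score(y_true, y_pred))             # 0.8
    print(f1_score(y_true, y_pred, average="macro"))  # per-class F1, averaged
    print(log_loss(y_true, y_prob))                   # penalizes confident mistakes
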
Q. Which evaluation metric is most useful for a model predicting rare events?
A. Accuracy
B. Recall
C. Precision
D. F1 Score

Solution: Recall is crucial for rare event prediction as it focuses on capturing as many positive instances as possible.
Correct Answer: B — Recall
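
A minimal sketch with hypothetical fraud-style data: two models with identical accuracy can differ sharply in recall, which is the difference that matters when positives are rare.

    from sklearn.metrics import accuracy_score, recall_score

    y_true  = [0] * 96 + [1] * 4                 # 4 rare events in 100 samples
    model_a = [0] * 96 + [1, 1, 0, 0]            # misses half the rare events
    model_b = [0] * 94 + [1, 1] + [1, 1, 1, 1]   # 2 false alarms, catches all 4

    print(accuracy_score(y_true, model_a), recall_score(y_true, model_a))  # 0.98 0.5
    print(accuracy_score(y_true, model_b), recall_score(y_true, model_b))  # 0.98 1.0
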
Q. Which metric is best used when dealing with imbalanced datasets?
A. Accuracy
B. Precision
C. Recall
D. F1 Score

Solution: F1 Score is the harmonic mean of precision and recall, making it a better metric for imbalanced datasets.
Correct Answer: D — F1 Score
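
A minimal sketch on a hypothetical 90:10 dataset: the F1 Score exposes a weak classifier that accuracy flatters.

    from sklearn.metrics import accuracy_score, f1_score

    y_true = [0] * 90 + [1] * 10
    y_pred = [0] * 90 + [1, 1] + [0] * 8   # finds only 2 of 10 positives

    print(accuracy_score(y_true, y_pred))   # 0.92, looks fine
    print(f1_score(y_true, y_pred))         # about 0.33, reveals the problem
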
Q. Which metric is most appropriate for evaluating a model's performance on a multi-class classification problem?
A. Accuracy
B. Precision
C. F1 Score
D. Macro F1 Score

Solution: Macro F1 Score calculates the F1 Score for each class independently and averages them, making it suitable for multi-class problems.
Correct Answer: D — Macro F1 Score
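
A minimal sketch of macro averaging (scikit-learn assumed): the F1 Score is computed per class and averaged with equal weight, so a neglected minority class drags the score down.

    from sklearn.metrics import f1_score

    y_true = [0, 0, 0, 0, 1, 1, 2]
    y_pred = [0, 0, 0, 0, 1, 1, 0]   # class 2 is never predicted

    print(f1_score(y_true, y_pred, average=None, zero_division=0))
    # per-class F1: roughly [0.889, 1.0, 0.0]
    print(f1_score(y_true, y_pred, average="macro", zero_division=0))
    # unweighted mean of the three, about 0.63
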
Q. Which metric is used to evaluate the performance of regression models?
A. Confusion Matrix
B. Mean Absolute Error
C. Precision
D. Recall

Solution: Mean Absolute Error (MAE) measures the average magnitude of errors in a set of predictions, without considering their direction.
Correct Answer: B — Mean Absolute Error
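
A minimal sketch of MAE with made-up regression targets, cross-checked against scikit-learn:

    from sklearn.metrics import mean_absolute_error

    y_true = [3.0, -0.5, 2.0, 7.0]
    y_pred = [2.5,  0.0, 2.0, 8.0]

    # Average of the absolute errors, direction ignored.
    print(sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true))  # 0.5
    print(mean_absolute_error(y_true, y_pred))                            # 0.5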