Q. In a confusion matrix, what does the term 'specificity' refer to?
- A. True Positive Rate
- B. False Positive Rate
- C. True Negative Rate
- D. False Negative Rate
Solution: Specificity is the True Negative Rate, indicating the proportion of actual negatives that are correctly identified.
Correct Answer: C — True Negative Rate
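To make this concrete, here is a minimal sketch (assuming scikit-learn is available, which the question does not require) that derives specificity from the entries of a binary confusion matrix:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1, 0, 1]

# For binary labels, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

specificity = tn / (tn + fp)  # True Negative Rate
print(specificity)            # 3 / (3 + 1) = 0.75
```

Recall (sensitivity) is the mirror image on the positive class: tp / (tp + fn).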
Q. In the context of classification, what does precision measure?
- A. The ratio of true positives to total predicted positives
- B. The ratio of true positives to total actual positives
- C. The overall accuracy of the model
- D. The ratio of false positives to total predicted positives
Solution: Precision measures the ratio of true positives to the total predicted positives, indicating the accuracy of positive predictions.
Correct Answer: A — The ratio of true positives to total predicted positives
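A worked example of the formula precision = TP / (TP + FP), computed by hand and cross-checked with scikit-learn (an assumed dependency, not part of the question):

```python
from sklearn.metrics import precision_score

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1]

# Of everything predicted positive, how much was actually positive?
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # 2
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # 2
print(tp / (tp + fp))                   # 0.5
print(precision_score(y_true, y_pred))  # 0.5, same result
```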
Q. In the context of regression, what does R-squared indicate?
- A. The proportion of variance explained by the model
- B. The average error of predictions
- C. The correlation between predicted and actual values
- D. The number of features used in the model
Solution: R-squared indicates the proportion of variance in the dependent variable that can be explained by the independent variables in the model.
Correct Answer: A — The proportion of variance explained by the model
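As a sketch, R^2 = 1 - SS_res / SS_tot can be computed by hand and compared with scikit-learn's r2_score (scikit-learn is an assumption here, not something the question specifies):

```python
from sklearn.metrics import r2_score

y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.3, 7.1, 8.8]

mean_y = sum(y_true) / len(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # unexplained variation
ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total variation
print(1 - ss_res / ss_tot)       # 0.991
print(r2_score(y_true, y_pred))  # same value
```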
Q. What does a high precision indicate in a classification model?
- A. A high number of true positives compared to false positives
- B. A high number of true positives compared to false negatives
- C. A high overall accuracy
- D. A high number of true negatives
Solution: High precision indicates a high number of true positives relative to false positives, meaning that when the model predicts the positive class, it is usually correct.
Correct Answer: A — A high number of true positives compared to false positives
Q. What does a high value of Matthews Correlation Coefficient (MCC) indicate?
- A. Poor model performance
- B. Random predictions
- C. Strong correlation between predicted and actual classes
- D. High false positive rate
Solution: A high MCC value indicates a strong correlation between predicted and actual classes, reflecting better model performance.
Correct Answer: C — Strong correlation between predicted and actual classes
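For illustration, a minimal sketch using scikit-learn's matthews_corrcoef (the library is assumed, not required by the question):

```python
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]

# MCC ranges from -1 (total disagreement) through 0 (no better than chance) to +1 (perfect).
# Here TP=3, TN=3, FP=1, FN=1, so MCC = (3*3 - 1*1) / sqrt(4*4*4*4) = 0.5.
print(matthews_corrcoef(y_true, y_pred))  # 0.5
```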
Q. What does a high value of R-squared indicate?
- A. Poor model fit
- B. Good model fit
- C. High bias
- D. High variance
Solution: A high R-squared value indicates that a large proportion of the variance in the dependent variable is predictable from the independent variables.
Correct Answer: B — Good model fit
Q. What does ROC AUC measure?
- A. The area under the Receiver Operating Characteristic curve
- B. The accuracy of the model
- C. The precision of the model
- D. The recall of the model
Solution: ROC AUC measures the area under the Receiver Operating Characteristic curve, indicating the model's ability to distinguish between classes.
Correct Answer: A — The area under the Receiver Operating Characteristic curve
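A small sketch (assuming scikit-learn) showing that AUC depends only on how well the scores rank positives above negatives:

```python
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]  # predicted probability of the positive class

# AUC equals the probability that a randomly chosen positive outranks a
# randomly chosen negative: here 3 of the 4 positive/negative pairs are
# ordered correctly, so AUC = 0.75. 1.0 is perfect ranking, 0.5 is random.
print(roc_auc_score(y_true, y_score))  # 0.75
```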
Q. What does ROC stand for in the context of evaluation metrics?
- A. Receiver Operating Characteristic
- B. Randomized Output Curve
- C. Relative Operating Curve
- D. Receiver Output Classification
Solution: ROC stands for Receiver Operating Characteristic, which is a graphical representation of a classifier's performance.
Correct Answer: A — Receiver Operating Characteristic
Q. What does ROC stand for in the context of model evaluation?
- A. Receiver Operating Characteristic
- B. Receiver Output Curve
- C. Rate of Classification
- D. Random Output Curve
Solution: ROC stands for Receiver Operating Characteristic, which is a graphical representation of a classifier's performance.
Correct Answer: A — Receiver Operating Characteristic
Q. What does the F1 score represent in model evaluation?
- A. The harmonic mean of precision and recall
- B. The average of precision and recall
- C. The ratio of true positives to total predicted positives
- D. The ratio of true positives to total actual positives
Solution: The F1 score is the harmonic mean of precision and recall, providing a balance between the two metrics.
Correct Answer: A — The harmonic mean of precision and recall
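Concretely, with precision P and recall R, F1 = 2PR / (P + R). A sketch (scikit-learn assumed) comparing the hand calculation with f1_score:

```python
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]

precision = 3 / 4  # TP=3, FP=1
recall    = 3 / 4  # TP=3, FN=1
print(2 * precision * recall / (precision + recall))  # 0.75
print(f1_score(y_true, y_pred))                       # 0.75, same result
```

Because it is a harmonic mean, F1 drops sharply when either precision or recall is low, unlike a simple average.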
Q. What does the term 'overfitting' refer to in model evaluation?
- A. Model performs well on training data but poorly on unseen data
- B. Model performs poorly on both training and unseen data
- C. Model performs well on unseen data but poorly on training data
- D. Model has high bias
Solution: Overfitting occurs when a model learns the training data too well, resulting in poor generalization to unseen data.
Correct Answer: A — Model performs well on training data but poorly on unseen data
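A common way to spot overfitting is to compare scores on the training data with scores on held-out data. The sketch below uses scikit-learn and a synthetic dataset purely for illustration; none of these choices come from the question itself:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained decision tree can memorise the training set.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # typically 1.0
print("test accuracy: ", model.score(X_test, y_test))    # usually noticeably lower
```

A large gap between the two scores is the practical signature of overfitting.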
Q. What is the purpose of the Area Under the Curve (AUC) in ROC analysis?
- A. To measure the accuracy of the model
- B. To evaluate the model's performance across all classification thresholds
- C. To determine the model's precision
- D. To assess the model's recall
Solution: AUC measures the model's performance across all classification thresholds, indicating its ability to distinguish between classes.
Correct Answer: B — To evaluate the model's performance across all classification thresholds
Q. What is the purpose of the Area Under the ROC Curve (AUC-ROC)?
- A. To measure the accuracy of a model
- B. To evaluate the trade-off between true positive rate and false positive rate
- C. To calculate the precision of a model
- D. To determine the model's training time
Solution: AUC-ROC evaluates the trade-off between the true positive rate and false positive rate across different thresholds.
Correct Answer: B — To evaluate the trade-off between true positive rate and false positive rate
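To see the trade-off itself, scikit-learn's roc_curve (an assumed tool, not named in the question) returns one (FPR, TPR) point per threshold; AUC-ROC is the area under the curve traced by those points:

```python
from sklearn.metrics import roc_curve

y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]  # predicted probability of the positive class

# Lowering the threshold raises the true positive rate but also the false positive rate.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  FPR={f:.2f}  TPR={t:.2f}")
```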
Q. Which evaluation metric is most appropriate for regression tasks?
- A. Accuracy
- B. Mean Absolute Error (MAE)
- C. F1 Score
- D. Precision
Solution: Mean Absolute Error (MAE) is commonly used for evaluating regression tasks as it measures the average magnitude of errors.
Correct Answer: B — Mean Absolute Error (MAE)
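A worked example of MAE = mean(|actual - predicted|), cross-checked with scikit-learn (assumed to be installed):

```python
from sklearn.metrics import mean_absolute_error

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
print(mae)                                  # (0.5 + 0.5 + 0.0 + 1.0) / 4 = 0.5
print(mean_absolute_error(y_true, y_pred))  # 0.5, same result
```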
Q. Which metric is best suited for imbalanced classification problems?
- A. Accuracy
- B. Precision
- C. Recall
- D. F1 Score
Solution: The F1 Score is preferred in imbalanced classification problems as it considers both precision and recall.
Correct Answer: D — F1 Score
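The sketch below shows why accuracy is misleading under class imbalance: a model that always predicts the majority class looks accurate but has an F1 of zero. (scikit-learn is assumed; zero_division=0 just silences the undefined-precision warning in recent versions.)

```python
from sklearn.metrics import accuracy_score, f1_score

# 9 negatives, 1 positive; the "model" always predicts negative.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

print(accuracy_score(y_true, y_pred))             # 0.9 -- looks good
print(f1_score(y_true, y_pred, zero_division=0))  # 0.0 -- reveals the problem
```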
Q. Which metric is best suited for imbalanced datasets?
- A. Accuracy
- B. F1 Score
- C. Mean Squared Error
- D. Log Loss
Solution: The F1 Score is more informative than accuracy for imbalanced datasets as it considers both false positives and false negatives.
Correct Answer: B — F1 Score
Q. Which metric is used to evaluate the performance of a classification model that outputs probabilities?
- A. Accuracy
- B. Log Loss
- C. F1 Score
- D. Mean Absolute Error
Solution: Log Loss evaluates a classification model that outputs probabilities, penalizing confident but incorrect predictions most heavily.
Correct Answer: B — Log Loss
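A minimal sketch of the formula log loss = -mean(y*log(p) + (1-y)*log(1-p)), cross-checked with scikit-learn's log_loss (an assumed dependency):

```python
import math
from sklearn.metrics import log_loss

y_true = [1, 0, 1]
y_prob = [0.9, 0.2, 0.6]  # predicted probability of the positive class

manual = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
              for y, p in zip(y_true, y_prob)) / len(y_true)
print(manual)                    # ~0.28
print(log_loss(y_true, y_prob))  # same value
```

Note how the third example (a true positive given only 0.6) contributes the most to the loss; a confident wrong prediction such as p = 0.01 for a true positive would cost far more still.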
Q. Which metric would you use to evaluate a multi-class classification model?
- A. F1 Score
- B. Precision
- C. Macro-averaged F1 Score
- D. Mean Squared Error
Solution: The macro-averaged F1 Score is suitable for evaluating multi-class classification models as it averages the F1 scores across all classes.
Correct Answer: C — Macro-averaged F1 Score
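For a concrete three-class example (scikit-learn assumed), macro averaging computes F1 per class and then takes the unweighted mean, so small classes count as much as large ones:

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# Per-class F1 here is 0.8, 0.0, 0.0, so the macro average is about 0.27.
print(f1_score(y_true, y_pred, average='macro'))
```

A weighted average (average='weighted') would instead weight each class by its number of true instances.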