Evaluation Metrics

Q. In binary classification, what does high recall indicate?
  • A. The model is good at identifying negative cases
  • B. The model is good at identifying positive cases
  • C. The model has a high number of false positives
  • D. The model has a high number of false negatives
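For a hands-on check of the definition, here is a minimal sketch using scikit-learn's recall_score (scikit-learn is assumed to be installed; the labels are made-up illustration data):

```python
from sklearn.metrics import recall_score

# Toy ground-truth labels and predictions (1 = positive, 0 = negative).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Recall = TP / (TP + FN): of all actual positives, how many were found.
print(recall_score(y_true, y_pred))  # 3 TP, 1 FN -> 0.75
```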
Q. In the context of evaluation metrics, what is a confusion matrix?
  • A. A table used to describe the performance of a classification model
  • B. A method to visualize the ROC curve
  • C. A technique to calculate the AUC
  • D. A way to measure the variance in predictions
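The same toy labels can be used to build a confusion matrix with scikit-learn (a minimal sketch; the data are illustrative):

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Rows are actual classes, columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))
# [[3 1]
#  [1 3]]
```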
Q. In which scenario would you prefer using the Matthews correlation coefficient?
  • A. When dealing with binary classification problems
  • B. When evaluating multi-class classification problems
  • C. When the dataset is highly imbalanced
  • D. All of the above
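A minimal sketch of why MCC is informative on imbalanced data, using scikit-learn's matthews_corrcoef on a made-up example:

```python
from sklearn.metrics import matthews_corrcoef

# An imbalanced toy set: 9 negatives, 1 positive.
y_true = [0] * 9 + [1]
y_pred = [0] * 10          # A lazy "always negative" model.

# Accuracy would look great (0.9), but MCC exposes the failure.
print(matthews_corrcoef(y_true, y_pred))  # 0.0
```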
Q. What does a high value of R-squared indicate in regression analysis?
  • A. The model explains a large proportion of the variance in the dependent variable
  • B. The model has a high number of features
  • C. The model is overfitting the training data
  • D. The model is underfitting the training data
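A quick illustration of R-squared with scikit-learn's r2_score (made-up regression values):

```python
from sklearn.metrics import r2_score

y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.1, 7.2, 8.9]

# R^2 = 1 - SS_res / SS_tot; a value near 1 means the model
# explains most of the variance in y_true.
print(r2_score(y_true, y_pred))  # 1 - 0.10/20 = 0.995
```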
Q. What does accuracy measure in a classification model?
  • A. The proportion of true results among the total number of cases examined
  • B. The ability of the model to predict positive cases only
  • C. The average error of the predictions
  • D. The time taken to train the model
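A minimal sketch of computing accuracy with scikit-learn on illustrative labels:

```python
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]

# Accuracy = (TP + TN) / total cases examined.
print(accuracy_score(y_true, y_pred))  # 4 correct out of 6 -> ~0.667
```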
Q. What does precision indicate in a confusion matrix?
  • A. The ratio of true positives to the total predicted positives
  • B. The ratio of true positives to the total actual positives
  • C. The overall correctness of the model
  • D. The ability to identify all relevant instances
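For comparison with recall above, a minimal sketch of precision on the same made-up labels:

```python
from sklearn.metrics import precision_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Precision = TP / (TP + FP): of everything predicted positive,
# how much actually was positive.
print(precision_score(y_true, y_pred))  # 3 TP, 1 FP -> 0.75
```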
Q. What does recall measure in a classification model?
  • A. The ratio of true positives to the total actual positives
  • B. The ratio of true positives to the total predicted positives
  • C. The ratio of true negatives to the total actual negatives
  • D. The ratio of false negatives to the total actual positives
Q. What does recall measure in a classification task?
  • A. The ratio of true positives to the total actual positives
  • B. The ratio of true positives to the total predicted positives
  • C. The overall accuracy of the model
  • D. The number of false negatives
Q. What does the AUC represent in the context of the ROC curve?
  • A. The area under the curve, indicating the model's ability to distinguish between classes
  • B. The average of the true positive rates
  • C. The total number of false positives
  • D. The accuracy of the model
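A minimal sketch of AUC with scikit-learn's roc_auc_score; the scores below are made-up predicted probabilities:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
# Predicted probabilities for the positive class.
y_score = [0.1, 0.4, 0.35, 0.8]

# AUC = probability that a random positive is ranked above a
# random negative; 1.0 is perfect separation, 0.5 is chance.
print(roc_auc_score(y_true, y_score))  # 0.75
```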
Q. What is the main advantage of using the F1 Score over accuracy?
  • A. It considers both precision and recall
  • B. It is easier to interpret
  • C. It is always higher than accuracy
  • D. It is less sensitive to class imbalance
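A minimal sketch showing that F1 combines precision and recall (same illustrative labels as above):

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# F1 is the harmonic mean of precision and recall, so it
# penalizes models that are strong in only one of the two.
p = precision_score(y_true, y_pred)  # 0.75
r = recall_score(y_true, y_pred)     # 0.75
print(f1_score(y_true, y_pred))      # 2*p*r / (p + r) = 0.75
```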
Q. What is the purpose of the ROC curve?
  • A. To visualize the trade-off between sensitivity and specificity
  • B. To measure the accuracy of a regression model
  • C. To determine the optimal threshold for classification
  • D. Both A and C
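A minimal sketch of plotting a ROC curve with scikit-learn and matplotlib (both assumed installed; the scores are illustrative):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

# roc_curve sweeps the decision threshold and returns the
# false positive rate, true positive rate, and the thresholds,
# which is also how one can pick an operating point.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
plt.plot(fpr, tpr, marker="o")
plt.xlabel("False positive rate (1 - specificity)")
plt.ylabel("True positive rate (sensitivity)")
plt.show()
```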
Q. Which metric is best used for imbalanced datasets?
  • A. Accuracy
  • B. F1 Score
  • C. Mean Squared Error
  • D. R-squared
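A made-up imbalanced example sketching why accuracy misleads while F1 does not:

```python
from sklearn.metrics import accuracy_score, f1_score

# 95 negatives and 5 positives: a heavily imbalanced toy set.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100         # Model that ignores the minority class.

print(accuracy_score(y_true, y_pred))            # 0.95 -- misleadingly high
print(f1_score(y_true, y_pred, zero_division=0)) # 0.0  -- reveals the problem
```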
Q. Which metric is used to evaluate regression models?
  • A. F1 Score
  • B. Mean Absolute Error
  • C. Precision
  • D. Recall
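A minimal sketch of Mean Absolute Error with scikit-learn (made-up regression values):

```python
from sklearn.metrics import mean_absolute_error

y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.5, 5.0, 8.0, 8.5]

# MAE = mean of |y_true - y_pred|: the average absolute error,
# in the same units as the target variable.
print(mean_absolute_error(y_true, y_pred))  # (0.5+0+1+0.5)/4 = 0.5
```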