Q. In a binary classification problem, what does a high recall indicate?
A. High true positive rate
B. High false positive rate
C. Low true negative rate
D. Low false negative rate
Solution
High recall indicates that the model correctly identifies a large proportion of actual positive cases; recall is defined as TP / (TP + FN), which is exactly the true positive rate.
Correct Answer: A — High true positive rate
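To make the definition concrete, here is a minimal sketch with made-up labels, assuming scikit-learn is available:

```python
from sklearn.metrics import recall_score

y_true = [1, 1, 1, 1, 0, 0]  # four actual positives
y_pred = [1, 1, 1, 0, 0, 1]  # three found, one missed (a false negative)

# recall = TP / (TP + FN) = 3 / (3 + 1)
print(recall_score(y_true, y_pred))  # 0.75
```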
Q. In a multi-class classification problem, which metric can be used to evaluate the performance across all classes?
A. Micro F1 Score
B. Mean Absolute Error
C. Precision
D. Recall
Solution
The Micro F1 Score pools true positives, false positives, and false negatives across all classes before computing F1, making it suitable for evaluating performance across all classes at once.
Correct Answer: A — Micro F1 Score
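A short sketch on toy multi-class labels, assuming scikit-learn is available; the macro average is shown for contrast:

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]

# Micro: pool TP/FP/FN across classes, then compute F1 once.
print(f1_score(y_true, y_pred, average="micro"))  # 4 of 6 correct ≈ 0.67
# Macro: compute F1 per class, then take the unweighted mean.
print(f1_score(y_true, y_pred, average="macro"))
```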
Q. In the context of regression, which metric measures the average squared difference between predicted and actual values?
A. F1 Score
B. Mean Absolute Error
C. Mean Squared Error
D. Precision
Solution
Mean Squared Error (MSE) measures the average of the squares of the errors, indicating the quality of a regression model.
Correct Answer: C — Mean Squared Error
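A quick worked example with made-up values, assuming scikit-learn is available:

```python
from sklearn.metrics import mean_squared_error

y_true = [3.0, 5.0, 2.5]
y_pred = [2.5, 5.0, 4.0]

# (0.5**2 + 0**2 + 1.5**2) / 3 = 2.5 / 3 ≈ 0.833
print(mean_squared_error(y_true, y_pred))
```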
Q. What does a high value of precision indicate in a classification model?
A. High true positive rate
B. Low false positive rate
C. High false negative rate
D. Low true negative rate
Solution
High precision indicates that a large proportion of positive identifications were actually correct; since precision is TP / (TP + FP), a high value means few false positives among the predicted positives.
Correct Answer: B — Low false positive rate
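A minimal sketch with toy labels, assuming scikit-learn is available:

```python
from sklearn.metrics import precision_score

y_true = [1, 1, 0, 0, 0, 1]
y_pred = [1, 1, 1, 0, 0, 0]  # two TPs, one FP, one FN

# precision = TP / (TP + FP) = 2 / (2 + 1)
print(precision_score(y_true, y_pred))  # ≈ 0.67
```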
Q. What does ROC AUC measure in a classification model?
A. The area under the Receiver Operating Characteristic curve
B. The average precision of the model
C. The total number of true positives
D. The mean error of predictions
Solution
ROC AUC measures the area under the ROC curve, indicating the model's ability to distinguish between classes.
Correct Answer: A — The area under the Receiver Operating Characteristic curve
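A small sketch with toy scores, assuming scikit-learn is available; note that ROC AUC is computed from ranked scores or probabilities, not hard class labels:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]  # predicted probabilities for class 1

# 1.0 = perfect separation of classes, 0.5 = no better than chance
print(roc_auc_score(y_true, y_scores))  # 0.75
```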
Q. What does ROC AUC stand for in model evaluation?
A. Receiver Operating Characteristic Area Under Curve
B. Regression Output Curve Area Under Control
C. Randomized Output Classification Area Under Curve
D. Receiver Output Classification Area Under Control
Solution
ROC AUC stands for Receiver Operating Characteristic Area Under Curve, measuring the model's ability to distinguish between classes.
Correct Answer: A — Receiver Operating Characteristic Area Under Curve
Q. What does the Area Under the ROC Curve (AUC-ROC) represent?
A. Model accuracy
B. Probability of false positives
C. Trade-off between sensitivity and specificity
D. Model complexity
Solution
AUC-ROC summarizes the trade-off between sensitivity (the true positive rate) and specificity (1 - false positive rate) across all classification thresholds.
Correct Answer: C — Trade-off between sensitivity and specificity
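To see the trade-off directly, the sketch below (toy scores, assuming scikit-learn is available) prints the (FPR, TPR) point that each candidate threshold produces:

```python
from sklearn.metrics import roc_curve

y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

fpr, tpr, thresholds = roc_curve(y_true, y_scores)
for f, t, th in zip(fpr, tpr, thresholds):
    # specificity = 1 - FPR at each candidate threshold
    print(f"threshold={th:.2f}  TPR={t:.2f}  FPR={f:.2f}")
```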
Q. What does the F1 Score evaluate in a classification model?
A. The balance between precision and recall
B. The overall accuracy of the model
C. The speed of the model
D. The number of false positives
Solution
The F1 Score is the harmonic mean of precision and recall, providing a balance between the two.
Correct Answer: A — The balance between precision and recall
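A sketch with toy labels, assuming scikit-learn is available, computing the harmonic mean by hand and checking it against f1_score:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 0, 0, 0, 1]
y_pred = [1, 1, 1, 0, 0, 0]

p = precision_score(y_true, y_pred)  # 2/3
r = recall_score(y_true, y_pred)     # 2/3
print(2 * p * r / (p + r))           # harmonic mean of the two
print(f1_score(y_true, y_pred))      # same value
```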
Q. What evaluation metric is commonly used to assess the performance of a classification model?
A. Accuracy
B. Mean Squared Error
C. Silhouette Score
D. R-squared
Solution
Accuracy measures the proportion of correct predictions among the total number of cases examined.
Correct Answer: A — Accuracy
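A minimal sketch with made-up labels, assuming scikit-learn is available:

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# correct predictions / total predictions = 4 / 5
print(accuracy_score(y_true, y_pred))  # 0.8
```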
Q. What is the purpose of using cross-validation in model evaluation?
A. To increase training time
B. To reduce overfitting
C. To improve model complexity
D. To increase dataset size
Solution
Cross-validation helps to assess how the results of a statistical analysis will generalize to an independent dataset, thus reducing overfitting.
Correct Answer: B — To reduce overfitting
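A minimal 5-fold cross-validation sketch, assuming scikit-learn is available; the dataset and model here are just convenient stand-ins:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Each of the 5 folds is held out once, so every score comes from
# data the model never saw during training.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean(), scores.std())  # average held-out accuracy and spread
```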
Q. What is the significance of the confusion matrix in model evaluation?
A. It shows the distribution of data
B. It summarizes the performance of a classification model
C. It calculates the mean error
D. It visualizes the training process
Solution
The confusion matrix summarizes the performance of a classification model by showing true positives, false positives, true negatives, and false negatives.
Correct Answer: B — It summarizes the performance of a classification model
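A small sketch with toy labels, assuming scikit-learn is available:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 0, 0, 1]
y_pred = [1, 1, 1, 0, 0, 0]

# Rows are actual classes, columns are predicted classes; for the
# binary case the layout is [[TN, FP], [FN, TP]].
print(confusion_matrix(y_true, y_pred))  # [[2 1] [1 2]]
```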
Q. Which evaluation metric is best for assessing clustering algorithms?
A. Accuracy
B. Silhouette Score
C. Mean Squared Error
D. F1 Score
Solution
Silhouette Score is used to evaluate clustering algorithms by measuring how similar an object is to its own cluster compared to other clusters.
Correct Answer: B — Silhouette Score
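A minimal sketch on synthetic blobs, assuming scikit-learn is available; the score ranges from -1 to 1, and well-separated clusters push it toward 1:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic data with three well-separated clusters.
X, _ = make_blobs(n_samples=200, centers=3, random_state=42)
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

# High when points sit closer to their own cluster than to the nearest
# other cluster.
print(silhouette_score(X, labels))
```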
Q. Which evaluation metric is best for measuring the performance of a clustering algorithm?
A. Accuracy
B. Silhouette Score
C. Mean Squared Error
D. F1 Score
Solution
Silhouette Score measures how similar an object is to its own cluster compared to other clusters, making it suitable for clustering evaluation.
Correct Answer: B — Silhouette Score
Q. Which evaluation metric is commonly used for binary classification problems?
A. Mean Squared Error
B. Accuracy
C. Silhouette Score
D. R-squared
Solution
Accuracy is a common evaluation metric for binary classification, measuring the proportion of correct predictions.
Correct Answer: B — Accuracy
Q. Which evaluation metric is used to assess the performance of a recommendation system?
A. Root Mean Squared Error
B. F1 Score
C. Mean Average Precision
D. Silhouette Score
Solution
Mean Average Precision (MAP) is commonly used to evaluate recommendation systems because it rewards ranking relevant items near the top of the recommended list.
Correct Answer: C — Mean Average Precision
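MAP is typically computed per user and then averaged; the sketch below is a hand-rolled illustration in plain Python, and the function name average_precision is illustrative, not a library API:

```python
def average_precision(recommended, relevant):
    """Precision at each rank where a relevant item appears, averaged
    over the number of relevant items."""
    hits, score = 0, 0.0
    for rank, item in enumerate(recommended, start=1):
        if item in relevant:
            hits += 1
            score += hits / rank
    return score / max(len(relevant), 1)

# Two users: MAP is the mean of their per-user average precisions.
ap1 = average_precision(["a", "b", "c"], {"a", "c"})  # (1/1 + 2/3) / 2
ap2 = average_precision(["x", "y", "z"], {"y"})       # (1/2) / 1
print((ap1 + ap2) / 2)
```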
Q. Which evaluation metric is used to measure the performance of regression models?
A. F1 Score
B. Mean Absolute Error
C. Confusion Matrix
D. ROC Curve
Solution
Mean Absolute Error measures the average magnitude of errors in a set of predictions, without considering their direction.
Correct Answer: B — Mean Absolute Error
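A quick worked example with made-up values, assuming scikit-learn is available; because the errors are not squared, MAE is less sensitive to outliers than MSE:

```python
from sklearn.metrics import mean_absolute_error

y_true = [3.0, 5.0, 2.5]
y_pred = [2.5, 5.0, 4.0]

# (|0.5| + |0.0| + |1.5|) / 3 ≈ 0.67
print(mean_absolute_error(y_true, y_pred))
```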
Q. Which metric is best suited for evaluating a model on imbalanced datasets?
A. F1 Score
B. Accuracy
C. Precision
D. Recall
Solution
F1 Score is the harmonic mean of precision and recall; unlike accuracy, it is not inflated by a dominant majority class, making it suitable for imbalanced datasets.
Correct Answer: A — F1 Score
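A sketch showing why this matters, assuming scikit-learn is available: on a 95/5 split, a model that always predicts the majority class scores high accuracy but zero F1:

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0] * 95 + [1] * 5   # only 5% positive class
y_pred = [0] * 100            # always predict the majority class

print(accuracy_score(y_true, y_pred))              # 0.95, looks great
print(f1_score(y_true, y_pred, zero_division=0))   # 0.0, exposes the failure
```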
Q. Which metric is most appropriate for evaluating a multi-class classification model?
A. Confusion Matrix
B. Mean Absolute Error
C. F1 Score
D. Precision
Solution
A confusion matrix provides a comprehensive view of the performance of a multi-class classification model.
Correct Answer: A — Confusion Matrix
Q. Which metric would be most appropriate for evaluating a model in an imbalanced classification scenario?
A. Accuracy
B. F1 Score
C. Mean Squared Error
D. R-squared
Solution
F1 Score is more appropriate in imbalanced classification scenarios as it considers both precision and recall.
Correct Answer: B — F1 Score
Q. Which metric would you use to evaluate a clustering algorithm's performance?
A. Silhouette Score
B. Mean Squared Error
C. F1 Score
D. Log Loss
Solution
Silhouette Score measures how similar an object is to its own cluster compared to other clusters.
Correct Answer: A — Silhouette Score
Q. Which metric would you use to evaluate a recommendation system's performance?
A. Mean Squared Error
B. Precision at K
C. F1 Score
D. Silhouette Score
Solution
Precision at K evaluates how many of the top K recommended items are relevant, making it suitable for recommendation systems.
Correct Answer: B — Precision at K
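A hand-rolled sketch in plain Python; precision_at_k here is an illustrative helper, not a library function:

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

recommended = ["a", "b", "c", "d", "e"]  # ranked best-first
relevant = {"a", "c", "e"}

print(precision_at_k(recommended, relevant, 3))  # 2 of top 3 ≈ 0.67
```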