Q. In a multi-class classification problem, which metric can be used to evaluate the model's performance across all classes?
A. Macro F1 Score
B. Mean Squared Error
C. Accuracy
D. Log Loss
Solution
Macro F1 Score calculates the F1 Score for each class independently and then takes the average, providing a balanced evaluation across all classes.
Correct Answer: A — Macro F1 Score
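As a minimal sketch (assuming scikit-learn and toy multi-class labels), the macro-averaged F1 can be computed like this:

```python
# Macro F1: compute F1 per class, then take the unweighted mean across classes.
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 2, 1, 0, 2, 1]   # hypothetical 3-class labels
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]   # hypothetical predictions

print(f"Macro F1: {f1_score(y_true, y_pred, average='macro'):.3f}")
```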
Q. In evaluating clustering algorithms, which metric assesses the compactness of clusters?
A. Silhouette Score
B. Accuracy
C. F1 Score
D. Mean Squared Error
Solution
The Silhouette Score measures how similar a point is to its own cluster (cohesion, i.e. compactness) compared to other clusters (separation).
Correct Answer: A — Silhouette Score
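A small sketch (toy 2-D blobs, scikit-learn assumed) of computing the silhouette score for a clustering:

```python
# Silhouette score: compares intra-cluster cohesion with separation from the
# nearest other cluster; values near 1 indicate compact, well-separated clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)),    # blob around (0, 0)
               rng.normal(5, 1, (50, 2))])   # blob around (5, 5)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print(f"Silhouette: {silhouette_score(X, labels):.3f}")
```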
Q. In the context of a confusion matrix, what does precision measure?
A. True positive rate
B. False positive rate
C. Correct positive predictions out of total positive predictions
D. Correct predictions out of total predictions
Solution
Precision measures the ratio of true positive predictions to the total predicted positives, indicating the accuracy of positive predictions.
Correct Answer: C — Correct positive predictions out of total positive predictions
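A hedged worked example (toy binary labels, scikit-learn assumed) of precision = TP / (TP + FP):

```python
# Precision: the share of predicted positives that are actually positive.
from sklearn.metrics import precision_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]   # 3 true positives, 2 false positives

print(precision_score(y_true, y_pred))  # 3 / (3 + 2) = 0.6
```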
Q. In the context of a confusion matrix, what does the term 'True Positive' refer to?
A. Correctly predicted positive cases
B. Incorrectly predicted positive cases
C. Correctly predicted negative cases
D. Incorrectly predicted negative cases
Solution
True Positive refers to the cases where the model correctly predicts the positive class.
Correct Answer: A — Correctly predicted positive cases
Q. What does a confusion matrix provide in model evaluation?
A. A summary of prediction errors
B. A graphical representation of data distribution
C. A measure of model training time
D. A list of features used in the model
Solution
A confusion matrix provides a summary of prediction errors by showing the counts of true positives, true negatives, false positives, and false negatives.
Correct Answer: A — A summary of prediction errors
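A minimal sketch (toy binary labels, scikit-learn assumed) showing how the confusion matrix tabulates the four outcome counts:

```python
# Binary confusion matrix layout in scikit-learn: [[TN, FP], [FN, TP]].
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TN={tn}, FP={fp}, FN={fn}, TP={tp}")  # TN=2, FP=2, FN=1, TP=3
```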
Q. What does a high AUC (Area Under the Curve) value indicate in a ROC curve?
A. Poor model performance
B. Model is random
C. Good model discrimination
D. Model is overfitting
Solution
A high AUC value indicates that the model has good discrimination ability between classes.
Correct Answer: C — Good model discrimination
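A quick sketch (toy scores, scikit-learn assumed); note that roc_auc_score takes predicted scores or probabilities rather than hard class labels:

```python
# AUC close to 1.0 means the model ranks positives above negatives well;
# 0.5 corresponds to random guessing.
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]   # hypothetical predicted probabilities

print(f"AUC: {roc_auc_score(y_true, y_score):.3f}")  # ~0.889
```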
Q. What does a high AUC value in ROC analysis indicate?
A. Poor model performance
B. Model is not useful
C. Good model discrimination ability
D. Model is overfitting
Solution
A high AUC (Area Under the Curve) value indicates that the model has good discrimination ability between the positive and negative classes.
Correct Answer: C — Good model discrimination ability
Q. What does the ROC curve represent in model evaluation?
A. Relationship between precision and recall
B. Trade-off between true positive rate and false positive rate
C. Model training time vs accuracy
D. Data distribution visualization
Solution
The ROC curve illustrates the trade-off between the true positive rate (sensitivity) and the false positive rate at various threshold settings.
Correct Answer: B — Trade-off between true positive rate and false positive rate
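A minimal sketch (toy scores, scikit-learn assumed): roc_curve returns the FPR/TPR pairs obtained by sweeping the decision threshold, which are exactly the points of the ROC curve:

```python
# Each (FPR, TPR) pair is one point on the ROC curve at a given threshold.
from sklearn.metrics import roc_curve, auc

y_true  = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(list(zip(fpr, tpr)))          # the trade-off points along the curve
print(f"AUC: {auc(fpr, tpr):.3f}")  # area under those points
```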
Q. What is the significance of the AUC in ROC analysis?
A. It measures the model's training time
B. It indicates the model's accuracy
C. It represents the probability that a randomly chosen positive instance is ranked higher than a randomly chosen negative instance
D. It shows the number of features used in the model
Solution
The AUC (Area Under the Curve) quantifies the model's overall ability to discriminate between positive and negative classes; equivalently, it is the probability that a randomly chosen positive instance receives a higher score than a randomly chosen negative instance.
Correct Answer: C — It represents the probability that a randomly chosen positive instance is ranked higher than a randomly chosen negative instance
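A small check (toy data, scikit-learn assumed) that the AUC equals this pairwise ranking probability, counting ties as one half:

```python
# Brute-force the probability that a positive outscores a negative and compare
# it with roc_auc_score; the two values coincide.
from itertools import product
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]

pos = [s for y, s in zip(y_true, y_score) if y == 1]
neg = [s for y, s in zip(y_true, y_score) if y == 0]
pairwise = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg)) / (len(pos) * len(neg))

print(pairwise, roc_auc_score(y_true, y_score))  # both ~0.889
```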
Q. Which evaluation metric is best for a model predicting customer churn?
A. Mean Squared Error
B. F1 Score
C. R-squared
D. Log Loss
Solution
The F1 Score is suitable for customer churn prediction as it balances precision and recall, which is important in identifying customers who are likely to churn.
Correct Answer: B — F1 Score
Q. Which evaluation metric is best for regression tasks?
A. Accuracy
B. Mean Absolute Error
C. F1 Score
D. Recall
Solution
Mean Absolute Error is commonly used to evaluate the performance of regression models; Accuracy, F1 Score, and Recall are classification metrics.
Correct Answer: B — Mean Absolute Error
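A minimal regression sketch (toy values, scikit-learn assumed): MAE is the average absolute difference between the predictions and the true targets:

```python
# MAE = mean(|y_true - y_pred|), in the same units as the target.
from sklearn.metrics import mean_absolute_error

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

print(mean_absolute_error(y_true, y_pred))  # (0.5 + 0.5 + 0.0 + 1.0) / 4 = 0.5
```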
Q. Which evaluation metric is most appropriate for imbalanced classification problems?
A. Accuracy
B. F1 Score
C. Mean Squared Error
D. R-squared
Solution
The F1 Score is a better evaluation metric for imbalanced classification problems as it considers both precision and recall, providing a balance between the two.
Correct Answer: B — F1 Score
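A hedged illustration (toy imbalanced labels, scikit-learn assumed) of why accuracy is misleading on imbalanced data while F1 is not:

```python
# A classifier that always predicts the majority class scores 95% accuracy
# on this data yet finds no positives at all; F1 exposes the failure.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0] * 95 + [1] * 5     # 95% negatives, 5% positives
y_pred = [0] * 100              # always predict the majority class

print(accuracy_score(y_true, y_pred))  # 0.95
print(f1_score(y_true, y_pred))        # 0.0
```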
Q. Which evaluation metric is particularly useful for ranking predictions?
A. Accuracy
B. Mean Absolute Error
C. Mean Squared Error
D. Normalized Discounted Cumulative Gain (NDCG)
Solution
Normalized Discounted Cumulative Gain (NDCG) is used to evaluate the quality of ranked lists, making it suitable for ranking tasks.
Correct Answer: D — Normalized Discounted Cumulative Gain (NDCG)
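A minimal sketch (toy graded relevances, scikit-learn assumed): ndcg_score compares the ranking induced by the model's scores against the true relevance grades:

```python
# NDCG rewards placing highly relevant items near the top of the ranking.
from sklearn.metrics import ndcg_score

true_relevance  = [[3, 2, 3, 0, 1]]            # graded relevance of 5 items for one query
predicted_score = [[0.9, 0.8, 0.1, 0.2, 0.7]]  # hypothetical model scores

print(f"NDCG@3: {ndcg_score(true_relevance, predicted_score, k=3):.3f}")
```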
Q. Which metric would be most appropriate for evaluating a model in a highly imbalanced dataset?
A. Accuracy
B. Precision
C. Recall
D. F1 Score
Solution
F1 Score is appropriate for imbalanced datasets as it considers both precision and recall.
Correct Answer: D — F1 Score
Q. Which metric would you use to evaluate a recommendation system?
A. Mean Squared Error
B. Precision at K
C. F1 Score
D. Recall
Solution
Precision at K measures the fraction of the top K recommended items that are actually relevant, making it a common metric for evaluating recommendation systems.
Correct Answer: B — Precision at K
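A hedged sketch of Precision@K with hypothetical item IDs (the helper function below is illustrative, not a library API):

```python
# Precision@K: the fraction of the top-K recommended items that are relevant.
def precision_at_k(recommended, relevant, k):
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

recommended = ["item_a", "item_b", "item_c", "item_d", "item_e"]  # ranked output
relevant    = {"item_a", "item_c", "item_f"}                      # ground-truth likes

print(precision_at_k(recommended, relevant, k=5))  # 2 relevant in top 5 -> 0.4
```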