Q. In a binary classification problem, what does a high precision indicate?
A. High true positive rate
B. Low false positive rate
C. High true negative rate
D. Low false negative rate

Solution: High precision indicates that when the model predicts the positive class, it is correct most of the time, meaning it produces few false positives.

Correct Answer: B — Low false positive rate
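The link between precision, recall, and the underlying confusion counts can be made concrete with a small sketch; the counts below are made up for illustration:

```python
def precision(tp: int, fp: int) -> float:
    """Precision = TP / (TP + FP): of all positive predictions, how many were correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Recall = TP / (TP + FN): of all actual positives, how many were found."""
    return tp / (tp + fn)

# Illustrative counts: 90 true positives, 10 false positives, 40 false negatives.
print(precision(90, 10))  # 0.9 — high precision means few false positives
print(recall(90, 40))     # ~0.692 — many actual positives were still missed
```

Note how the same model can have high precision and mediocre recall at once, which is exactly the scenario in the next question.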
Q. In a case study, if a model has high precision but low recall, what does this indicate?
A. The model is good at identifying positive cases but misses many.
B. The model is poor at identifying positive cases.
C. The model has balanced performance.
D. The model is overfitting.

Solution: High precision and low recall indicate that the predictions the model does make for the positive class are usually correct, but it fails to capture many actual positives.

Correct Answer: A — The model is good at identifying positive cases but misses many.
Q. In a case study, if a model's precision is 0.9 and recall is 0.6, what is the F1 score?
A. 0.72
B. 0.75
C. 0.80
D. 0.85

Solution: The F1 score is the harmonic mean of precision and recall: 2 × (precision × recall) / (precision + recall) = 2 × (0.9 × 0.6) / (0.9 + 0.6) = 1.08 / 1.5 = 0.72.

Correct Answer: A — 0.72
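The arithmetic can be checked directly; a minimal sketch:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.9, 0.6))  # 0.72
```

The harmonic mean punishes imbalance: f1_score(0.9, 0.1) is about 0.18, far below the arithmetic mean of 0.5.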
Q. In a regression case study, which metric would best evaluate the model's prediction error?
A. Confusion Matrix
B. R-squared
C. Precision
D. Recall

Solution: R-squared is a common metric for evaluating goodness of fit in regression models, indicating how well the model explains the variability of the data. The other options apply to classification, not regression.

Correct Answer: B — R-squared
Q. In the context of model evaluation, what does 'overfitting' refer to?
A. Model performs well on training data but poorly on unseen data
B. Model performs equally on training and test data
C. Model is too simple to capture the underlying trend
D. Model has high bias

Solution: Overfitting occurs when a model learns the training data too well, capturing noise and failing to generalize to new data.

Correct Answer: A — Model performs well on training data but poorly on unseen data
Q. What does a confusion matrix provide?
A. A summary of prediction results
B. A graphical representation of data
C. A method for feature selection
D. A way to visualize neural network layers

Solution: A confusion matrix provides a summary of prediction results, showing true positives, false positives, true negatives, and false negatives.

Correct Answer: A — A summary of prediction results
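Tallying those four counts takes only a few lines; this sketch assumes binary 0/1 labels:

```python
def confusion_counts(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary labels 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# Five predictions against five true labels (made up for illustration).
print(confusion_counts([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))  # (2, 1, 1, 1)
```

Precision, recall, and accuracy are all simple ratios over these four counts, which is why the confusion matrix is the starting point for most classification metrics.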
Q. What does a high ROC AUC score indicate?
A. The model has a high false positive rate.
B. The model performs well in distinguishing between classes.
C. The model is overfitting.
D. The model has low precision.

Solution: A high ROC AUC score indicates that the model is effective at distinguishing between the positive and negative classes.

Correct Answer: B — The model performs well in distinguishing between classes.
Q. What does the ROC curve represent?
A. Relationship between precision and recall
B. Trade-off between true positive rate and false positive rate
C. Model training time vs accuracy
D. Data distribution visualization

Solution: The ROC curve illustrates the trade-off between the true positive rate and the false positive rate at various threshold settings.

Correct Answer: B — Trade-off between true positive rate and false positive rate
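The area under that curve has a useful probabilistic reading: it equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A brute-force sketch of that pairwise definition, with made-up scores:

```python
def roc_auc(y_true, scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly; ties count half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([1, 1, 0, 0], [0.9, 0.7, 0.4, 0.2]))  # 1.0 — perfect separation
print(roc_auc([1, 0, 1, 0], [0.9, 0.8, 0.3, 0.2]))  # 0.75 — one pair misordered
```

A score of 0.5 corresponds to random ranking, which is why the ROC diagonal is the baseline.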
Q. What is the primary purpose of evaluation metrics in machine learning?
A. To improve model training speed
B. To assess model performance
C. To increase data size
D. To reduce overfitting

Solution: Evaluation metrics are used to assess how well a machine learning model performs on a given task.

Correct Answer: B — To assess model performance
Q. What is the purpose of cross-validation in model evaluation?
A. To increase the size of the dataset
B. To ensure the model is not overfitting
C. To visualize model performance
D. To reduce training time

Solution: Cross-validation helps detect overfitting by repeatedly training the model on one subset of the data and validating it on a held-out subset, so performance estimates do not depend on a single train/test split.

Correct Answer: B — To ensure the model is not overfitting
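The "different subsets" idea can be sketched as a plain k-fold index split, with no library assumed:

```python
def kfold_indices(n: int, k: int):
    """Yield (train_idx, val_idx) pairs for k roughly equal folds over n samples."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, val
        start += size

for train, val in kfold_indices(6, 3):
    print(train, val)
# Every sample appears in exactly one validation fold,
# so each prediction is made by a model that never saw that sample.
```

Averaging a metric over the k validation folds gives a more honest performance estimate than a single split.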
Q. Which evaluation metric is best for a multi-class classification problem?
A. Accuracy
B. F1 Score
C. Log Loss
D. All of the above

Solution: All of the listed metrics (Accuracy, F1 Score, and Log Loss) can be used to evaluate multi-class classification problems, each providing a different insight into model performance.

Correct Answer: D — All of the above
Q. Which evaluation metric is best suited for imbalanced classification problems?
A. Accuracy
B. F1 Score
C. Mean Squared Error
D. R-squared

Solution: The F1 Score is better suited to imbalanced datasets because it considers both precision and recall; Mean Squared Error and R-squared are regression metrics.

Correct Answer: B — F1 Score
Q. Which evaluation metric is most sensitive to class imbalance?
A. Accuracy
B. Precision
C. Recall
D. F1 Score

Solution: Accuracy can be misleading on imbalanced datasets, as it may give a false sense of model performance by favoring the majority class.

Correct Answer: A — Accuracy
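A toy example makes this concrete: a model that always predicts the majority class scores high accuracy while never finding a single positive. The data below is made up for illustration:

```python
y_true = [0] * 95 + [1] * 5   # 95% negatives, 5% positives
y_pred = [0] * 100            # degenerate "always predict majority" model

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / sum(y_true)

print(accuracy)  # 0.95 — looks excellent
print(recall)    # 0.0  — the model never detects a positive case
```

This is why imbalance-aware metrics such as the F1 Score are preferred in the questions above.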
Q. Which metric would be most appropriate for evaluating a regression model?
A. Accuracy
B. F1 Score
C. Mean Absolute Error
D. Confusion Matrix

Solution: Mean Absolute Error (MAE) is a common metric for evaluating regression models, measuring the average magnitude of errors in a set of predictions without considering their direction.

Correct Answer: C — Mean Absolute Error
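"Average magnitude of errors without considering direction" translates directly into code; the numbers below are made up:

```python
def mean_absolute_error(y_true, y_pred):
    """Average of |actual - predicted|; under- and over-predictions count equally."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Errors are -0.5, +0.5, and +2.0; MAE averages their magnitudes.
print(mean_absolute_error([3.0, 5.0, 2.0], [2.5, 5.5, 4.0]))  # 1.0
```

Unlike Mean Squared Error, MAE does not square the residuals, so a single large error influences it less.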
Q. Which metric would be most useful for evaluating a model on a highly imbalanced dataset?
A. Accuracy
B. F1 Score
C. Mean Absolute Error
D. Root Mean Squared Error

Solution: The F1 Score is more informative than accuracy on imbalanced datasets, as it considers both false positives and false negatives.

Correct Answer: B — F1 Score
Q. Which metric would you use to evaluate a regression model's performance?
A. Accuracy
B. F1 Score
C. Mean Absolute Error
D. Confusion Matrix

Solution: Mean Absolute Error (MAE) is commonly used to evaluate the performance of regression models; the other options are classification tools.

Correct Answer: C — Mean Absolute Error
Q. Which of the following is NOT a common evaluation metric for classification models?
A. Precision
B. Recall
C. Mean Squared Error
D. F1 Score

Solution: Mean Squared Error (MSE) is primarily used for regression models, while Precision, Recall, and F1 Score are metrics for evaluating classification models.

Correct Answer: C — Mean Squared Error