Q. In classification problems, what does the F1 Score represent?
A. The harmonic mean of precision and recall
B. The average of precision and recall
C. The total number of true positives
D. The ratio of true positives to total predictions
Solution
The F1 Score is the harmonic mean of precision and recall, providing a balance between the two metrics.
Correct Answer: A — The harmonic mean of precision and recall
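As a quick illustration of the formula, the F1 Score can be computed from precision and recall in a few lines of plain Python (the input values below are made up for illustration):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Made-up example: precision 0.75, recall 0.60
f1 = f1_score(0.75, 0.60)
print(round(f1, 4))  # the harmonic mean sits below the arithmetic mean (0.675)
```

Note that the harmonic mean punishes imbalance: a model with precision 1.0 but recall 0.1 gets an F1 of only about 0.18, not the 0.55 an arithmetic average would suggest.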
Q. In classification tasks, what does precision measure?
A. True positives over total positives
B. True positives over total predicted positives
C. True positives over total actual positives
D. True negatives over total negatives
Solution
Precision measures the ratio of true positives to the total predicted positives, indicating the accuracy of positive predictions.
Correct Answer: B — True positives over total predicted positives
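In terms of confusion-matrix counts, precision is TP / (TP + FP). A minimal sketch in plain Python, using made-up counts:

```python
def precision(tp: int, fp: int) -> float:
    """True positives over total predicted positives (TP + FP)."""
    predicted_positives = tp + fp
    return tp / predicted_positives if predicted_positives else 0.0

# Made-up counts: 30 correct positive predictions, 10 false alarms
print(precision(30, 10))  # 30 / (30 + 10) = 0.75
```

Swapping the denominator to TP + FN would instead give recall (true positives over total actual positives), which is why options B and C differ.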
Q. What does a high value of AUC-ROC indicate?
A. Poor model performance
B. Model is overfitting
C. Good model discrimination
D. Model is underfitting
Solution
A high value of AUC-ROC indicates good model discrimination ability between positive and negative classes.
Correct Answer: C — Good model discrimination
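AUC-ROC can be read as the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A brute-force sketch of that interpretation, over made-up classifier scores:

```python
def auc_from_scores(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs ranked correctly; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Made-up scores: positives are mostly, but not always, ranked above negatives
print(auc_from_scores([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8 of 9 pairs correct
```

An AUC of 0.5 means the ranking is no better than chance; 1.0 means every positive outranks every negative.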
Q. What does AUC stand for in the context of ROC analysis?
A. Area Under the Curve
B. Average Utility Coefficient
C. Algorithmic Uncertainty Calculation
D. Area Under Classification
Solution
AUC stands for Area Under the Curve, which quantifies the overall ability of the model to discriminate between positive and negative classes.
Correct Answer: A — Area Under the Curve
Q. What does RMSE stand for in evaluation metrics?
A. Root Mean Square Error
B. Relative Mean Square Error
C. Root Mean Squared Estimation
D. Relative Mean Squared Estimation
Solution
RMSE stands for Root Mean Square Error, which measures the average magnitude of the errors between predicted and observed values.
Correct Answer: A — Root Mean Square Error
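The name spells out the computation: square the errors, take the mean, then the root. A minimal sketch with made-up values:

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Square Error between observed and predicted values."""
    squared_errors = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Made-up observations vs predictions; errors are 1, 0, and -2
print(rmse([3.0, 5.0, 2.0], [2.0, 5.0, 4.0]))  # sqrt((1 + 0 + 4) / 3)
```

Because the errors are squared before averaging, RMSE is expressed in the same units as the target variable, which makes it easy to interpret.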
Q. What is the main advantage of using cross-validation?
A. It increases the training dataset size
B. It helps in hyperparameter tuning
C. It provides a more reliable estimate of model performance
D. It reduces overfitting
Solution
Cross-validation provides a more reliable estimate of model performance by using different subsets of the data for training and validation.
Correct Answer: C — It provides a more reliable estimate of model performance
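The mechanics behind this are simple: partition the sample indices into k folds and let each fold serve once as the validation set. A bare-bones sketch of that split (no shuffling, for clarity):

```python
def k_fold_indices(n_samples: int, k: int):
    """Split sample indices into k folds; each fold serves once as validation."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    folds = []
    for i in range(k):
        start = i * fold_size
        # The last fold absorbs any remainder when n_samples % k != 0
        end = start + fold_size if i < k - 1 else n_samples
        val = indices[start:end]
        train = indices[:start] + indices[end:]
        folds.append((train, val))
    return folds

# 6 samples, 3 folds: every sample is validated exactly once
for train, val in k_fold_indices(6, 3):
    print(train, val)
```

Averaging the metric across the k validation folds is what yields the more reliable estimate: no single lucky or unlucky split dominates the result.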
Q. What is the main purpose of using cross-validation in model evaluation?
A. To increase training time
B. To reduce overfitting
C. To improve model complexity
D. To enhance data size
Solution
Cross-validation helps guard against overfitting by repeatedly validating the model on data held out from training, so that reported performance reflects unseen data rather than memorized training examples.
Correct Answer: B — To reduce overfitting
Q. What is the primary goal of using evaluation metrics in machine learning?
A. To improve model accuracy
B. To compare different models
C. To understand model behavior
D. All of the above
Solution
Evaluation metrics serve all of these goals: improving model accuracy, comparing different models, and understanding model behavior.
Correct Answer: D — All of the above
Q. What is the purpose of the R-squared metric?
A. To measure the accuracy of classification
B. To indicate the proportion of variance explained by the model
C. To calculate the error rate
D. To evaluate clustering performance
Solution
R-squared indicates the proportion of variance in the dependent variable that can be explained by the independent variables in a regression model.
Correct Answer: B — To indicate the proportion of variance explained by the model
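Concretely, R-squared is one minus the ratio of residual variance to total variance: R² = 1 − SS_res / SS_tot. A minimal sketch with made-up regression outputs:

```python
def r_squared(y_true, y_pred):
    """1 - SS_res / SS_tot: proportion of variance explained by the model."""
    mean_y = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)       # variance around the mean
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # unexplained residuals
    return 1 - ss_res / ss_tot

# Made-up targets and predictions from a near-perfect fit
print(r_squared([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))
```

A model that always predicts the mean of the targets scores 0; a perfect fit scores 1. Note that R² can go negative when a model does worse than predicting the mean.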
Q. Which evaluation metric is best for assessing the performance of a regression model?
A. Accuracy
B. F1 Score
C. Mean Absolute Error
D. Confusion Matrix
Solution
Mean Absolute Error (MAE) is commonly used to assess the performance of regression models as it measures the average absolute errors.
Correct Answer: C — Mean Absolute Error
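MAE is just the average of the absolute differences between predictions and targets. A minimal sketch, using made-up house prices in thousands:

```python
def mae(y_true, y_pred):
    """Mean Absolute Error: average absolute difference between truth and prediction."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Made-up house prices (in thousands): errors of 10, 10, and 30
print(mae([200.0, 350.0, 500.0], [210.0, 340.0, 530.0]))  # (10 + 10 + 30) / 3
```

Like RMSE, MAE is in the units of the target, so "the model is off by about $16.7k on average" is a direct reading of the number.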
Q. Which evaluation metric is most appropriate for a regression model predicting house prices?
A. Accuracy
B. F1 Score
C. Mean Absolute Error
D. Precision
Solution
Mean Absolute Error (MAE) is most appropriate for regression models predicting continuous values like house prices.
Correct Answer: C — Mean Absolute Error
Q. Which metric is NOT typically used for evaluating regression models?
A. R-squared
B. Mean Absolute Error
C. Precision
D. Mean Squared Error
Solution
Precision is not typically used for evaluating regression models; it is a metric for classification tasks.
Correct Answer: C — Precision
Q. Which metric is used to evaluate the performance of a binary classification model?
A. Mean Squared Error
B. F1 Score
C. R-squared
D. Mean Absolute Error
Solution
F1 Score is used to evaluate the performance of binary classification models, balancing precision and recall.
Correct Answer: B — F1 Score
Q. Which metric is used to evaluate the performance of a model in terms of its ability to distinguish between classes?
A. Confusion Matrix
B. Mean Squared Error
C. R-squared
D. Log Loss
Solution
Log Loss measures the performance of a classification model whose output is a probability value between 0 and 1, evaluating its ability to distinguish between classes.
Correct Answer: D — Log Loss
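Binary log loss is the average negative log-likelihood the model assigns to the true labels. A minimal sketch over made-up labels and predicted probabilities:

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Average negative log-likelihood of the true labels (binary case)."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Confident correct predictions give low loss; confident wrong ones are punished hard
print(log_loss([1, 0, 1], [0.9, 0.1, 0.8]))
```

Unlike accuracy, log loss rewards well-calibrated probabilities: predicting 0.99 on a negative example costs far more than predicting 0.6.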
Q. Which metric would you use to evaluate a model's performance on imbalanced classes?
A. Accuracy
B. F1 Score
C. Mean Squared Error
D. R-squared
Solution
F1 Score is preferred for evaluating models on imbalanced classes as it considers both precision and recall, whereas accuracy can look high simply by predicting the majority class.
Correct Answer: B — F1 Score
Q. Which evaluation metric for regression models is sensitive to outliers?
A. Mean Absolute Error
B. Mean Squared Error
C. R-squared
D. Root Mean Squared Error
Solution
Mean Squared Error (MSE) is sensitive to outliers because it squares each error, so a single large error can dominate the average.
Correct Answer: B — Mean Squared Error
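The effect is easy to see side by side: on made-up data with one large miss, MSE inflates far more than MAE does, because squaring magnifies large errors.

```python
def mse(y_true, y_pred):
    """Mean Squared Error: average of squared differences."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean Absolute Error: average of absolute differences."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true  = [1.0, 2.0, 3.0, 4.0]
clean   = [1.1, 2.1, 2.9, 4.1]    # small errors everywhere
outlier = [1.1, 2.1, 2.9, 14.0]   # one large miss (error of 10)

print(mse(y_true, clean), mse(y_true, outlier))  # MSE jumps from 0.01 to ~25
print(mae(y_true, clean), mae(y_true, outlier))  # MAE grows only linearly
```

This is why MAE is often preferred when the data contains outliers that should not dominate the evaluation, and MSE (or RMSE) when large errors really are disproportionately bad.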