Q. How does a Random Forest handle missing values?
A. It cannot handle missing values.
B. It uses mean imputation.
C. It uses a surrogate split.
D. It drops the entire dataset.
Solution
Random Forests built on CART-style trees can handle missing values with surrogate splits: backup splitting rules used when the primary split's feature is missing. Note that not every implementation supports this; scikit-learn's forests, for example, generally require imputation instead.
Correct Answer: C — It uses a surrogate split.
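Since surrogate splits are absent from many popular libraries, a common practical workaround is to impute missing values before fitting. A minimal sketch, assuming scikit-learn is available (the data values are hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

# Tiny toy dataset with NaN entries (hypothetical numbers).
X = np.array([[1.0, 2.0], [np.nan, 3.0], [2.0, np.nan], [3.0, 1.0]])
y = np.array([0, 1, 1, 0])

# scikit-learn's forests do not use surrogate splits, so fill each
# NaN with the column mean before fitting the forest.
model = make_pipeline(
    SimpleImputer(strategy="mean"),
    RandomForestClassifier(n_estimators=10, random_state=0),
)
model.fit(X, y)
print(model.predict(X))
```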
Q. In a Decision Tree, what does the term 'node' refer to?
A. A point where a decision is made.
B. The final output of the tree.
C. The data used to train the model.
D. The overall structure of the tree.
Solution
A 'node' in a Decision Tree is where a decision is made based on the feature values.
Correct Answer: A — A point where a decision is made.
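To make the idea concrete, here is a short scikit-learn sketch that prints a small tree: each line containing a "<=" test is an internal (decision) node, and lines ending in a class label are leaves.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Internal (decision) nodes appear as "feature <= threshold" tests;
# leaf nodes carry the predicted class.
print(export_text(tree, feature_names=["sepal_len", "sepal_wid",
                                       "petal_len", "petal_wid"]))
```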
Q. In Random Forests, how are individual trees typically trained?
A. On the entire dataset.
B. On a random subset of the data.
C. Using only the most important features.
D. With no data at all.
Solution
Individual trees in Random Forests are trained on bootstrap samples (random subsets of the data drawn with replacement), which helps to create diversity among the trees.
Correct Answer: B — On a random subset of the data.
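The "random subset" is usually a bootstrap sample: n rows drawn with replacement, so some rows repeat and others are left out entirely. A small NumPy sketch of that sampling step:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows = 10  # stand-in for the training-set size

# Draw a bootstrap sample: n_rows indices chosen WITH replacement,
# so some rows appear more than once.
bootstrap_idx = rng.choice(n_rows, size=n_rows, replace=True)

# Rows never drawn are "out-of-bag" and can be used for validation.
oob_idx = np.setdiff1d(np.arange(n_rows), bootstrap_idx)
print(bootstrap_idx, oob_idx)
```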
Q. In which scenario would you prefer using a Decision Tree over a Random Forest?
A. When interpretability is crucial.
B. When you have a very large dataset.
C. When you need high accuracy.
D. When computational resources are limited.
Solution
Decision Trees are easier to interpret, making them preferable when interpretability is crucial.
Correct Answer: A — When interpretability is crucial.
Q. What does 'bagging' refer to in the context of Random Forests?
A. A method to combine multiple models.
B. A technique to select features.
C. A way to visualize trees.
D. A process to clean data.
Solution
'Bagging' (bootstrap aggregating) trains multiple models on bootstrap samples of the data and combines their predictions, improving overall performance and reducing variance.
Correct Answer: A — A method to combine multiple models.
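A minimal sketch of bagging with scikit-learn's BaggingClassifier; the dataset and hyperparameters here are illustrative, not prescriptive:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging = Bootstrap AGGregating: each tree is trained on a bootstrap
# sample, and their predictions are combined by majority vote.
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                        random_state=0)
bag.fit(X_tr, y_tr)
print(bag.score(X_te, y_te))
```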
Q. What is a potential drawback of using a single Decision Tree?
A. They are very fast to train.
B. They can easily handle large datasets.
C. They are prone to overfitting.
D. They require extensive preprocessing.
Solution
A single Decision Tree is prone to overfitting, especially if it is deep and complex.
Correct Answer: C — They are prone to overfitting.
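The overfitting is easy to demonstrate: with label noise in synthetic data, an unconstrained tree memorizes the training set but generalizes worse (settings here are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# flip_y=0.2 injects label noise, so memorization cannot generalize.
X, y = make_classification(n_samples=200, flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# No depth limit: the tree splits until every training leaf is pure.
deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("train:", deep.score(X_tr, y_tr))  # 1.0: memorized the data
print("test: ", deep.score(X_te, y_te))  # noticeably lower on unseen data
```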
Q. What is a primary advantage of using Random Forests over Decision Trees?
A. Random Forests are easier to interpret.
B. Random Forests reduce the risk of overfitting.
C. Random Forests require less data.
D. Random Forests are faster to train.
Solution
Random Forests combine multiple decision trees to reduce overfitting, making them more robust than a single Decision Tree.
Correct Answer: B — Random Forests reduce the risk of overfitting.
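A side-by-side sketch on noisy synthetic data; on most splits the forest's test score beats the single tree's, though this is not guaranteed for every dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, flip_y=0.2,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Averaging many decorrelated trees cancels much of the variance
# that makes a single deep tree overfit.
print("tree:  ", tree.score(X_te, y_te))
print("forest:", forest.score(X_te, y_te))
```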
Q. What is the purpose of feature importance in Random Forests?
A. To reduce the number of trees.
B. To identify the most influential features.
C. To visualize the tree structure.
D. To increase the model's complexity.
Solution
Feature importance helps to identify which features have the most influence on the predictions made by the model.
Correct Answer: B — To identify the most influential features.
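In scikit-learn this is exposed as the fitted forest's `feature_importances_` attribute. A short sketch on synthetic data with a few informative features and the rest pure noise (the setup is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 3 informative features plus 7 noise features (illustrative setup).
X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Importances sum to 1; larger values mark more influential features.
for i, imp in enumerate(rf.feature_importances_):
    print(f"feature {i}: {imp:.3f}")
```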
Q. What is the role of 'feature importance' in Random Forests?
A. To determine the number of trees in the forest.
B. To identify which features are most influential in making predictions.
C. To evaluate the model's performance.
D. To select the best hyperparameters.
Solution
Feature importance in Random Forests helps to identify which features are most influential in making predictions, guiding feature selection and model interpretation.
Correct Answer: B — To identify which features are most influential in making predictions.
Q. What metric is often used to evaluate the performance of a Decision Tree?
A. Mean Squared Error.
B. Accuracy.
C. F1 Score.
D. Confusion Matrix.
Solution
Accuracy is a common metric used to evaluate the performance of classification models like Decision Trees.
Correct Answer: B — Accuracy.
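Accuracy is simply the fraction of predictions that match the true labels; scikit-learn's `accuracy_score` computes it directly:

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# Accuracy = correct predictions / total predictions.
print(accuracy_score(y_true, y_pred))  # 4 of 5 correct -> 0.8
```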
Q. Which evaluation metric is commonly used to assess the performance of a Decision Tree classifier?
A. Mean Squared Error.
B. Accuracy.
C. Silhouette Score.
D. Log Loss.
Solution
Accuracy is a common evaluation metric for classifiers, including Decision Trees, as it measures the proportion of correctly predicted instances.
Correct Answer: B — Accuracy.
Q. Which of the following is a common use case for Decision Trees?
A. Image recognition.
B. Customer segmentation.
C. Natural language processing.
D. Time series forecasting.
Solution
Decision Trees are commonly used for customer segmentation due to their ability to handle categorical data effectively.
Correct Answer: B — Customer segmentation.
Q. Which of the following is a common use case for Random Forests?
A. Image recognition.
B. Time series forecasting.
C. Spam detection.
D. All of the above.
Solution
Random Forests can be applied to various tasks, including image recognition, time series forecasting, and spam detection.
Correct Answer: D — All of the above.
Q. Which of the following is NOT a characteristic of Random Forests?
A. They use multiple decision trees.
B. They are less prone to overfitting.
C. They can handle missing values.
D. They always provide the best accuracy.
Solution
While Random Forests are robust, they do not always guarantee the best accuracy for every dataset.
Correct Answer: D — They always provide the best accuracy.
Q. Which of the following scenarios is best suited for using Random Forests?
A. When interpretability is crucial.
B. When the dataset is small and simple.
C. When there are many features and complex interactions.
D. When the output is a continuous variable only.
Solution
Random Forests are well-suited for datasets with many features and complex interactions due to their ensemble nature.
Correct Answer: C — When there are many features and complex interactions.