Decision Trees and Random Forests

Q. How does Random Forest handle missing values?
  • A. It cannot handle missing values
  • B. It ignores missing values completely
  • C. It uses imputation techniques
  • D. It can use surrogate splits
Q. In a Random Forest, what is the purpose of using multiple Decision Trees?
  • A. To increase the model's complexity
  • B. To reduce overfitting and improve accuracy
  • C. To simplify the model
  • D. To ensure all trees are identical
Q. In which scenario would you prefer using a Random Forest over a single Decision Tree?
  • A. When interpretability is the main concern
  • B. When you have a small dataset
  • C. When you need higher accuracy and robustness
  • D. When computational resources are limited
Q. What does pruning refer to in the context of Decision Trees?
  • A. Adding more nodes to the tree
  • B. Removing nodes to reduce complexity
  • C. Increasing the depth of the tree
  • D. Changing the splitting criterion
Q. What does the term 'bagging' refer to in the context of Random Forests?
  • A. Using a single Decision Tree for predictions
  • B. Training each model on a bootstrap sample and combining their predictions
  • C. Randomly selecting features for each tree
  • D. Aggregating predictions by averaging
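The two ingredients of bagging (bootstrap aggregating) can be sketched with the standard library alone; this is a minimal illustration of the idea, not how any particular forest library implements it:

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    """Draw a sample of the same size WITH replacement (the 'bootstrap' half)."""
    return [rng.choice(data) for _ in data]

def majority_vote(predictions):
    """Aggregate classifier outputs by taking the most common label (the 'aggregating' half)."""
    return Counter(predictions).most_common(1)[0][0]

rng = random.Random(42)
data = list(range(10))
sample = bootstrap_sample(data, rng)
print(len(sample) == len(data))              # True: same size, drawn with replacement
print(majority_vote(["cat", "dog", "cat"]))  # cat
```

Because each tree sees a different bootstrap sample, the trees disagree in different places, and aggregating their votes averages away much of that variance.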
Q. What is a key feature of Random Forests that helps in feature selection?
  • A. It uses all features for every tree
  • B. It randomly selects a subset of features for each split
  • C. It eliminates all features with low variance
  • D. It requires manual feature selection
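The per-split feature subsampling described in option B can be sketched as follows; the square-root subset size is a common default for classification forests, though libraries expose it as a tunable parameter:

```python
import math
import random

def feature_subset(n_features, rng):
    """Pick a random subset of about sqrt(n_features) feature indices,
    re-drawn at every split so different trees consider different features."""
    k = max(1, int(math.sqrt(n_features)))
    return rng.sample(range(n_features), k)

rng = random.Random(0)
subset = feature_subset(16, rng)
print(len(subset))  # 4
```

Features that keep getting chosen for high-quality splits across many trees accumulate importance, which is why forests give a useful (if rough) feature-selection signal for free.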
Q. What is a potential drawback of using Decision Trees?
  • A. They are very fast to train
  • B. They can easily overfit the training data
  • C. They require no feature selection
  • D. They are not interpretable
Q. What is a primary advantage of using Decision Trees?
  • A. They require a lot of data preprocessing
  • B. They are easy to interpret and visualize
  • C. They always provide the best accuracy
  • D. They cannot handle categorical data
Q. What is the main purpose of pruning in Decision Trees?
  • A. To increase the depth of the tree
  • B. To reduce the size of the tree and prevent overfitting
  • C. To improve the interpretability of the tree
  • D. To enhance the training speed
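One standard way to formalize the pruning objective above is cost-complexity pruning (the criterion used by CART), which trades training error against tree size:

```latex
R_\alpha(T) = R(T) + \alpha\,|\tilde{T}|
```

Here $R(T)$ is the tree's training error, $|\tilde{T}|$ is the number of leaves, and $\alpha \ge 0$ is a complexity penalty; larger $\alpha$ removes more nodes, shrinking the tree and reducing overfitting.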
Q. What technique does Random Forest use to create diverse trees?
  • A. Bagging
  • B. Boosting
  • C. Stacking
  • D. Clustering
Q. Which evaluation metric is commonly used to assess the performance of a classification model like Decision Trees?
  • A. Mean Absolute Error
  • B. Accuracy
  • C. R-squared
  • D. Silhouette Score
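Accuracy is simply the fraction of predictions that match the true labels; a minimal sketch:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```

The distractors belong elsewhere: Mean Absolute Error and R-squared are regression metrics, and the Silhouette Score evaluates clustering.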
Q. Which of the following is a common criterion for splitting nodes in Decision Trees?
  • A. Mean Squared Error
  • B. Gini Impurity
  • C. Euclidean Distance
  • D. Cosine Similarity
Q. Which of the following statements is true about Decision Trees?
  • A. They can only be used for regression tasks
  • B. They can handle both categorical and numerical data
  • C. They require normalization of data
  • D. They are always the best choice for any dataset
Q. Which of the following statements is true about Random Forests?
  • A. They are always less accurate than a single Decision Tree
  • B. They can only be used for regression tasks
  • C. They improve accuracy by averaging multiple trees
  • D. They require fewer computational resources than a single tree