Decision Trees and Random Forests - Numerical Applications

Q. How does a Random Forest improve upon a single Decision Tree?
  • A. By using a single model for predictions
  • B. By averaging the predictions of multiple trees
  • C. By increasing the depth of each tree
  • D. By using only the most important features
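The improvement comes from aggregation: each tree votes, and the forest returns the majority class (or, for regression, the mean of the per-tree outputs). A minimal sketch of the voting step, using a hypothetical `forest_predict` helper:

```python
from collections import Counter

def forest_predict(tree_predictions):
    """Aggregate per-tree class predictions by majority vote.
    For regression, the mean of per-tree outputs is used instead."""
    return Counter(tree_predictions).most_common(1)[0][0]

# Three hypothetical trees vote on one sample:
votes = ["cat", "dog", "cat"]
print(forest_predict(votes))  # -> cat
```

Averaging many decorrelated trees cancels out the individual trees' errors, which is why option B is the key mechanism.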
Q. In a Random Forest, what is the purpose of bootstrapping?
  • A. To reduce overfitting
  • B. To increase the number of features
  • C. To create multiple subsets of data for training
  • D. To improve model interpretability
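Bootstrapping means each tree trains on a resample of the data, drawn with replacement and the same size as the original set. A minimal sketch with a hypothetical `bootstrap_sample` helper:

```python
import random

def bootstrap_sample(data, rng):
    """Draw len(data) examples with replacement -- each tree in the
    forest is fit on a different such resample of the training set."""
    return [rng.choice(data) for _ in range(len(data))]

rng = random.Random(0)
sample = bootstrap_sample(list(range(10)), rng)
print(sample)  # same length as the data; duplicates are expected
```

Because each resample omits roughly a third of the original rows, the trees see different data and disagree in useful ways.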
Q. In Random Forests, how are the trees typically constructed?
  • A. Using all features for each split.
  • B. Using a random subset of features for each split.
  • C. Using only the most important feature.
  • D. Using a fixed number of features for all trees.
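The per-split feature sampling (option B) can be sketched as follows; the `candidate_features` helper and the sqrt heuristic (a common default for classification) are illustrative assumptions:

```python
import math
import random

def candidate_features(n_features, rng):
    """At each split, consider only a random subset of features;
    sqrt(n_features) is a common choice for classification."""
    k = max(1, int(math.sqrt(n_features)))
    return rng.sample(range(n_features), k)

rng = random.Random(42)
print(candidate_features(16, rng))  # 4 distinct feature indices out of 16
```

Restricting each split to a random feature subset decorrelates the trees, which makes their averaged prediction more stable.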
Q. What does the Gini impurity measure in a Decision Tree?
  • A. The accuracy of the model.
  • B. The likelihood of misclassifying a randomly chosen element.
  • C. The depth of the tree.
  • D. The number of features used.
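Gini impurity is 1 minus the sum of squared class proportions, i.e. the probability of mislabeling a random element if it were labeled according to the node's class distribution. A minimal pure-Python sketch:

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum(p_i^2) over the class proportions p_i."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

print(gini([0, 0, 1, 1]))  # 0.5 -- maximally impure for two classes
print(gini([1, 1, 1, 1]))  # 0.0 -- pure node
```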
Q. What is a key characteristic of Random Forests compared to a single Decision Tree?
  • A. They are less prone to overfitting.
  • B. They require more computational resources.
  • C. They can only handle binary classification.
  • D. They are always more interpretable.
Q. What is the Gini impurity used for in Decision Trees?
  • A. To measure the accuracy of the model
  • B. To determine the best split at each node
  • C. To evaluate the performance of Random Forests
  • D. To select features for the model
Q. What is the main purpose of feature importance in Random Forests?
  • A. To reduce the number of trees in the forest.
  • B. To identify which features contribute most to the predictions.
  • C. To increase the depth of the trees.
  • D. To ensure all features are used equally.
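In scikit-learn, for example, a fitted forest exposes `feature_importances_`, which ranks features by how much they contribute to impurity reduction across all trees (the dataset and hyperparameters here are just illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# feature_importances_ sums to 1; higher values contribute more to splits
for name, imp in zip(load_iris().feature_names, model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```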
Q. What is the main purpose of pruning a Decision Tree?
  • A. To increase the depth of the tree
  • B. To reduce the size of the tree and prevent overfitting
  • C. To improve the training speed
  • D. To enhance feature selection
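One concrete pruning mechanism is scikit-learn's cost-complexity pruning via `ccp_alpha` (the alpha value below is an arbitrary illustration, not a recommended setting):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
full = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X, y)

# Pruning trades a little training fit for a smaller, less overfit tree
print(full.tree_.node_count, pruned.tree_.node_count)
```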
Q. How is the maximum depth of a Decision Tree determined?
  • A. It is always fixed.
  • B. It can be controlled by hyperparameters.
  • C. It is determined by the number of features.
  • D. It is irrelevant to the model's performance.
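In scikit-learn, for instance, the depth cap is the `max_depth` hyperparameter; left unset, the tree grows until its leaves are pure (the dataset here is illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(tree.get_depth())  # capped at 2 regardless of the data
```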
Q. What type of learning does a Decision Tree primarily use?
  • A. Unsupervised Learning
  • B. Reinforcement Learning
  • C. Supervised Learning
  • D. Semi-supervised Learning
Q. Which algorithm is primarily used for regression tasks in Decision Trees?
  • A. CART (Classification and Regression Trees)
  • B. ID3
  • C. C4.5
  • D. K-Means
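CART handles regression by choosing splits that minimize within-node variance, and each leaf predicts the mean of its training targets. A toy example using scikit-learn's `DecisionTreeRegressor` (the data is made up for illustration):

```python
from sklearn.tree import DecisionTreeRegressor

X = [[1.0], [2.0], [3.0], [4.0]]
y = [1.0, 2.0, 3.0, 4.0]

# Splits minimize squared error; leaves predict the mean target value
reg = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(reg.predict([[2.5]]))
```

ID3 and C4.5, by contrast, were designed for classification, and K-Means is a clustering algorithm, not a tree method.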
Q. Which algorithm is typically faster for making predictions, Decision Trees or Random Forests?
  • A. Decision Trees
  • B. Random Forests
  • C. Both are equally fast
  • D. It depends on the dataset size
Q. Which algorithm is typically faster for making predictions?
  • A. Decision Trees
  • B. Random Forests
  • C. Support Vector Machines
  • D. Neural Networks
Q. Which of the following is a disadvantage of Decision Trees?
  • A. They can handle both numerical and categorical data
  • B. They are prone to overfitting
  • C. They are easy to interpret
  • D. They require less data