Q. Which of the following statements is true about AVL and Red-Black Trees?
A. AVL trees are faster for search operations than Red-Black trees
B. Red-Black trees are always more balanced than AVL trees
C. Both trees have the same height for n nodes
D. AVL trees require more memory than Red-Black trees
Solution: AVL trees are generally faster for search operations because their stricter balancing keeps the tree shorter; it is Red-Black trees, not AVL trees, that carry the additional per-node color attribute.
Correct Answer: A — AVL trees are faster for search operations than Red-Black trees
Q. Which of the following statements is true about AVL trees?
A. They are always complete binary trees.
B. They can have duplicate values.
C. They are more rigidly balanced than Red-Black trees.
D. They require more memory than binary search trees.
Solution: AVL trees are more rigidly balanced than Red-Black trees, which helps in maintaining faster search times.
Correct Answer: C — They are more rigidly balanced than Red-Black trees.
Q. Which of the following statements is true about BFS?
A. BFS can be implemented using a stack.
B. BFS is not suitable for finding shortest paths.
C. BFS explores nodes level by level.
D. BFS is faster than DFS in all cases.
Solution: BFS explores nodes level by level, making it suitable for finding the shortest path in unweighted graphs.
Correct Answer: C — BFS explores nodes level by level.
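The level-by-level behavior is easy to see in code. Below is a minimal Python sketch (the graph and node names are illustrative): a queue guarantees that every node at depth d is dequeued before any node at depth d+1.

```python
from collections import deque

def bfs_levels(graph, start):
    """Traverse level by level; returns {node: depth} for reachable nodes."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in depth:
                depth[neighbor] = depth[node] + 1
                queue.append(neighbor)
    return depth

# Hypothetical unweighted graph as an adjacency list.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}
```

Because the recorded depth equals the number of edges from the start, the same traversal yields shortest path lengths in unweighted graphs.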
Q. Which of the following statements is true about binary search?
A. It can be used on unsorted arrays
B. It requires a sorted array
C. It is slower than linear search
D. It can only find unique elements
Solution: Binary search requires a sorted array to function correctly.
Correct Answer: B — It requires a sorted array
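A minimal sketch of the idea (array contents are illustrative): the halving of the search range below is only valid because the array is sorted.

```python
def binary_search(sorted_arr, target):
    """Return an index of target in sorted_arr, or -1; assumes sorted input."""
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            lo = mid + 1  # target can only be in the right half
        else:
            hi = mid - 1  # target can only be in the left half
    return -1
```

On an unsorted array the "discard half" step is unjustified, which is why option A fails.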
Q. Which of the following statements is true about Decision Trees?
A. They can only be used for regression tasks
B. They can handle both categorical and numerical data
C. They require normalization of data
D. They are always the best choice for any dataset
Solution: Decision Trees can handle both categorical and numerical data, making them versatile for various types of datasets.
Correct Answer: B — They can handle both categorical and numerical data
Q. Which of the following statements is true about DFS?
A. It can be implemented using a queue.
B. It is not suitable for finding shortest paths.
C. It always uses less memory than BFS.
D. It visits nodes in level order.
Solution: DFS is not suitable for finding shortest paths in unweighted graphs, as it does not explore all neighbors at the current depth before going deeper.
Correct Answer: B — It is not suitable for finding shortest paths.
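To make the contrast with BFS concrete, here is a small recursive DFS sketch (the graph is illustrative). DFS returns *a* path, but because it commits to the first branch it sees, that path need not be the shortest:

```python
def dfs_path(graph, start, goal, visited=None):
    """Return a path from start to goal via DFS -- not necessarily shortest."""
    if visited is None:
        visited = set()
    visited.add(start)
    if start == goal:
        return [start]
    for neighbor in graph[start]:
        if neighbor not in visited:
            sub = dfs_path(graph, neighbor, goal, visited)
            if sub:
                return [start] + sub
    return None

# Illustrative graph: DFS takes the long route A-B-C-D even though edge A-D exists.
graph = {"A": ["B", "D"], "B": ["C"], "C": ["D"], "D": []}
```

Here DFS returns the three-edge path A-B-C-D although the direct edge A-D exists; BFS would find the one-edge path.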
Q. Which of the following statements is true about Dijkstra's algorithm?
A. It can handle negative weight edges.
B. It always finds the shortest path.
C. It can be used for directed graphs only.
D. It requires a complete graph.
Solution: Dijkstra's algorithm always finds the shortest path provided all edge weights are non-negative; with negative weights its greedy choice can be wrong, which is why option A is false.
Correct Answer: B — It always finds the shortest path.
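A minimal heap-based sketch (the graph and weights are illustrative); note the non-negative-weight assumption, without which the greedy "pop the closest node" order is no longer safe:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; assumes non-negative edge weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter route was already found
        for neighbor, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Illustrative weighted graph: adjacency list of (neighbor, weight) pairs.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
```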
Q. Which of the following statements is true about dynamic programming?
A. It is only applicable to optimization problems
B. It can be used for both optimization and counting problems
C. It is always faster than greedy algorithms
D. It requires a sorted input
Solution: Dynamic programming can be applied to both optimization and counting problems, making it versatile in problem-solving.
Correct Answer: B — It can be used for both optimization and counting problems
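Both uses can be illustrated with the classic coin-change problem, sketched below: the same table-filling idea answers a counting question (how many ways) and an optimization question (fewest coins).

```python
def count_ways(coins, amount):
    """Counting problem: number of ways to make `amount` from `coins`."""
    ways = [1] + [0] * amount  # one way to make 0: use no coins
    for coin in coins:  # iterate coins outermost so orderings are not re-counted
        for a in range(coin, amount + 1):
            ways[a] += ways[a - coin]
    return ways[amount]

def min_coins(coins, amount):
    """Optimization problem: fewest coins summing to `amount` (or None)."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a and best[a - coin] + 1 < best[a]:
                best[a] = best[a - coin] + 1
    return best[amount] if best[amount] < INF else None
```

Same subproblem structure, different combining rule: sum for counting, min for optimization.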
Q. Which of the following statements is true about hierarchical clustering?
A. It requires the number of clusters to be specified in advance
B. It can produce a hierarchy of clusters
C. It is always faster than K-means
D. It only works with numerical data
Solution: Hierarchical clustering can produce a hierarchy of clusters, allowing for different levels of granularity.
Correct Answer: B — It can produce a hierarchy of clusters
Q. Which of the following statements is true about K-means clustering?
A. It can only be applied to large datasets
B. It is sensitive to the initial placement of centroids
C. It guarantees finding the global optimum
D. It can handle categorical data directly
Solution: K-means is sensitive to the initial placement of centroids, which can affect the final clustering result.
Correct Answer: B — It is sensitive to the initial placement of centroids
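The sensitivity to initialization can be demonstrated with a tiny pure-Python sketch of Lloyd's algorithm (the points are illustrative): four corners of a rectangle, where two different starting centroids converge to two different local optima.

```python
import math

def kmeans(points, centroids, iters=20):
    """Plain Lloyd's algorithm in 2D; the result depends on the initial centroids."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:  # assignment step: nearest centroid
            i = min(range(len(centroids)),
                    key=lambda i: math.dist(p, centroids[i]))
            clusters[i].append(p)
        centroids = [  # update step: mean of each cluster
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

# Four corners of a rectangle: two inits converge to different local optima.
points = [(0, 0), (0, 4), (6, 0), (6, 4)]
split_vertical = kmeans(points, [(0, 0), (6, 0)])    # left vs right halves
split_horizontal = kmeans(points, [(0, 0), (0, 4)])  # bottom vs top halves
```

The vertical split has lower within-cluster variance, yet the run started from the horizontal init never escapes its local optimum; this is why practical implementations use multiple restarts or k-means++ seeding.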
Q. Which of the following statements is true about LL and LR parsers?
A. LL parsers are more powerful than LR parsers.
B. LR parsers can handle all LL grammars.
C. LL parsers can handle all LR grammars.
D. Both LL and LR parsers are equivalent in power.
Solution: Every LL grammar is also an LR grammar, so LR parsers can handle all LL grammars; the converse does not hold, since some LR grammars cannot be parsed by any LL parser.
Correct Answer: B — LR parsers can handle all LL grammars.
Q. Which of the following statements is true about Random Forests?
A. They are always less accurate than a single Decision Tree
B. They can only be used for regression tasks
C. They improve accuracy by averaging multiple trees
D. They require more computational resources than a single tree
Solution: Random Forests improve accuracy by averaging the predictions of multiple trees, which helps to reduce variance.
Correct Answer: C — They improve accuracy by averaging multiple trees
Q. Which of the following statements is true about Red-Black trees?
A. They are always perfectly balanced
B. They can have a height of up to 2*log(n+1)
C. They require more memory than AVL trees
D. They are not suitable for dynamic datasets
Solution: A Red-Black tree with n nodes has height at most 2*log2(n+1), which allows for efficient operations even though the tree is not perfectly balanced.
Correct Answer: B — They can have a height of up to 2*log(n+1)
Q. Which of the following statements is true about the Bellman-Ford algorithm?
A. It can handle negative weight edges
B. It is faster than Dijkstra's algorithm for all graphs
C. It only works on directed graphs
D. It cannot detect negative weight cycles
Solution: The Bellman-Ford algorithm can handle graphs with negative weight edges and can also detect negative weight cycles.
Correct Answer: A — It can handle negative weight edges
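A compact sketch (the edge list is illustrative): V-1 rounds of edge relaxation tolerate negative edge weights, and one extra pass detects a negative cycle.

```python
def bellman_ford(num_vertices, edges, source):
    """Shortest distances allowing negative edges.

    Returns (dist, has_negative_cycle); edges are (u, v, weight) triples.
    """
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):  # relax every edge V-1 times
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement implies a negative cycle.
    has_cycle = any(dist[u] != INF and dist[u] + w < dist[v]
                    for u, v, w in edges)
    return dist, has_cycle

# Illustrative graph with a negative edge but no negative cycle.
edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 2)]
dist, cycle = bellman_ford(4, edges, 0)
```

The negative edge (1, 2, -3) makes the route 0-1-2 cheaper than the direct edge 0-2, which Dijkstra's greedy strategy would miss.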
Q. Which of the following statements is true about the height of an AVL tree?
A. It can be greater than log(n)
B. It is always less than or equal to 1.44 log(n)
C. It is always equal to log(n)
D. It can be less than log(n)
Solution: The height of an AVL tree is guaranteed to be at most approximately 1.44*log2(n), ensuring efficient operations.
Correct Answer: B — It is always less than or equal to 1.44 log(n)
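The bound can be checked numerically. The sparsest AVL tree of height h has N(h) = N(h-1) + N(h-2) + 1 nodes (a Fibonacci-like recurrence), and even these worst cases satisfy the ~1.44*log2(n) bound; the sketch below verifies the sharper form h <= 1.4405*log2(n+2) - 0.3277 for small heights.

```python
import math

def min_nodes(h):
    """Fewest nodes an AVL tree of height h can have (Fibonacci-like recurrence)."""
    a, b = 1, 2  # minimal node counts for heights 0 and 1
    for _ in range(h):
        a, b = b, a + b + 1
    return a

# Worst-case check: even the sparsest AVL tree of height h satisfies
# h <= 1.4405 * log2(n + 2) - 0.3277, i.e. roughly h <= 1.44 * log2(n).
ok = all(h <= 1.4405 * math.log2(min_nodes(h) + 2) - 0.3277
         for h in range(1, 30))
```

Since the minimal node count grows like a Fibonacci sequence, height grows at most logarithmically in n with constant 1/log2(phi) ≈ 1.44.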
Q. Which of the following statements is true regarding BFS?
A. It can be implemented using a stack
B. It can find the shortest path in weighted graphs
C. It uses a queue for traversal
D. It is faster than DFS in all cases
Solution: BFS uses a queue for traversal, allowing it to explore all neighbors at the present depth before moving on.
Correct Answer: C — It uses a queue for traversal
Q. Which of the following statements is true regarding K-means clustering?
A. It can only be applied to spherical clusters
B. It is sensitive to the initial placement of centroids
C. It guarantees finding the global optimum
D. It can handle categorical data directly
Solution: K-means clustering is sensitive to the initial placement of centroids, which can affect the final clustering results.
Correct Answer: B — It is sensitive to the initial placement of centroids
Q. Which of the following statements is true regarding the balancing of AVL trees?
A. They require fewer rotations than Red-Black trees
B. They are always balanced after every insertion
C. They can become unbalanced after deletion
D. They do not require balancing at all
Solution: AVL trees can become unbalanced after deletion, requiring rebalancing through rotations.
Correct Answer: C — They can become unbalanced after deletion
Q. Which of the following statements is true regarding the time complexity of DFS?
A. O(V + E)
B. O(V^2)
C. O(E log V)
D. O(V log V)
Solution: The time complexity of DFS is O(V + E), since with an adjacency-list representation each vertex and each edge is processed at most once.
Correct Answer: A — O(V + E)
Q. Which of the following techniques can be used to address multicollinearity?
A. Feature selection
B. Regularization techniques like Lasso
C. Principal Component Analysis (PCA)
D. All of the above
Solution: All of the listed techniques can help address multicollinearity in linear regression models.
Correct Answer: D — All of the above
Q. Which of the following techniques can be used to address overfitting in linear regression?
A. Increasing the number of features
B. Using regularization techniques like Lasso or Ridge
C. Decreasing the size of the training dataset
D. Ignoring outliers
Solution: Regularization techniques like Lasso or Ridge can help to reduce overfitting in linear regression models.
Correct Answer: B — Using regularization techniques like Lasso or Ridge
Q. Which of the following techniques can be used to assess the linearity assumption in linear regression?
A. Residual plots
B. Box plots
C. Heat maps
D. Pie charts
Solution: Residual plots (residuals against fitted values or a predictor) are used to assess the linearity assumption: a systematic pattern in the residuals suggests the relationship is not linear.
Correct Answer: A — Residual plots
Q. Which of the following techniques can be used to handle imbalanced datasets in classification?
A. Data augmentation
B. Feature scaling
C. Cross-validation
D. Resampling methods
Solution: Resampling methods, such as oversampling the minority class or undersampling the majority class, can help address imbalanced datasets.
Correct Answer: D — Resampling methods
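A minimal random-oversampling sketch (the data is illustrative): minority-class samples are duplicated at random until every class has as many samples as the largest one.

```python
import random

def oversample(samples, labels, seed=0):
    """Random oversampling: duplicate minority-class samples until classes balance."""
    rng = random.Random(seed)  # seeded for reproducibility
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_x, out_y = [], []
    for y, group in by_class.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        out_x.extend(group + extra)
        out_y.extend([y] * target)
    return out_x, out_y

# Imbalanced toy data: four samples of class 0, one of class 1.
x, y = oversample([1, 2, 3, 4, 5], [0, 0, 0, 0, 1])
```

Undersampling the majority class, or synthetic approaches such as SMOTE, follow the same balancing idea from the opposite direction.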
Q. Which of the following techniques can be used to handle missing values in Decision Trees?
A. Imputation
B. Ignoring missing values
C. Using a separate category for missing values
D. All of the above
Solution: All of the mentioned techniques can be used to handle missing values in Decision Trees, depending on the context.
Correct Answer: D — All of the above
Q. Which of the following techniques can be used to improve a linear regression model?
A. Adding more irrelevant features
B. Feature scaling
C. Using a more complex model
D. Ignoring outliers
Solution: Feature scaling can help improve a linear regression model, especially when predictors are on very different scales and regularization or gradient-based optimization is used.
Correct Answer: B — Feature scaling
Q. Which of the following techniques can be used to improve the performance of a classification model?
A. Feature scaling
B. Data augmentation
C. Hyperparameter tuning
D. All of the above
Solution: All of the above techniques can be used to improve the performance of a classification model by enhancing data quality and model training.
Correct Answer: D — All of the above
Q. Which of the following techniques can help in reducing overfitting?
A. Feature scaling
B. Regularization
C. Data augmentation
D. All of the above
Solution: All of the mentioned techniques can help mitigate overfitting in machine learning models.
Correct Answer: D — All of the above
Q. Which of the following techniques can help prevent overfitting in linear regression?
A. Increasing the number of features
B. Using regularization techniques like Lasso or Ridge
C. Decreasing the size of the training set
D. Ignoring outliers
Solution: Regularization techniques like Lasso or Ridge can help prevent overfitting by adding a penalty for larger coefficients.
Correct Answer: B — Using regularization techniques like Lasso or Ridge
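The shrinkage effect is visible even in the smallest case. For a one-feature, no-intercept model the ridge solution has a closed form, sketched below (the data is illustrative): the penalty term lam in the denominator pulls the coefficient toward zero.

```python
def ridge_slope(xs, ys, lam):
    """Closed-form ridge estimate for a no-intercept, one-feature model:
    minimizing sum((y - w*x)^2) + lam*w^2 gives w = sum(x*y) / (sum(x^2) + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]              # true slope 2, no noise
w_ols = ridge_slope(xs, ys, 0.0)  # lam = 0 recovers ordinary least squares
w_ridge = ridge_slope(xs, ys, 10.0)  # positive penalty shrinks the coefficient
```

Lasso uses an absolute-value penalty instead, which can shrink coefficients all the way to exactly zero.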
Q. Which of the following techniques can help prevent overfitting in neural networks?
A. Increasing the learning rate
B. Using dropout
C. Reducing the number of layers
D. Using a linear activation function
Solution: Dropout is a regularization technique that randomly sets a fraction of input units to zero during training to prevent overfitting.
Correct Answer: B — Using dropout
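A sketch of (inverted) dropout in plain Python (sizes and rates are illustrative): each unit is zeroed with probability p during training, and survivors are scaled by 1/(1-p) so the expected activation matches evaluation mode, where the layer is a no-op.

```python
import random

def dropout(activations, p, training, seed=None):
    """Inverted dropout: zero each unit with probability p during training,
    scaling survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return list(activations)  # evaluation mode: identity
    rng = random.Random(seed)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

Because each forward pass sees a different random subnetwork, no single unit can be relied on exclusively, which discourages co-adaptation and reduces overfitting.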
Q. Which of the following techniques can help prevent overfitting in supervised learning?
A. Increasing the complexity of the model
B. Using more training data
C. Reducing the number of features
D. All of the above
Solution: Using more training data can help prevent overfitting by providing a more comprehensive representation of the underlying data distribution.
Correct Answer: B — Using more training data