Clustering Methods: K-means, Hierarchical - Advanced Concepts

Q. In K-means clustering, what happens if the initial centroids are poorly chosen?
  • A. The algorithm will always converge to the global minimum
  • B. The algorithm may converge to a local minimum
  • C. The algorithm will not run
  • D. The clusters will be perfectly formed
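The answer here turns on K-means' sensitivity to where the centroids start. Below is a minimal sketch, assuming scikit-learn and a synthetic make_blobs dataset (neither appears in the question itself), that runs K-means with a single random initialization per seed and compares the final inertia; spread in those values indicates runs that converged to different local minima.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with a known structure: 5 well-separated blobs (assumption).
X, _ = make_blobs(n_samples=500, centers=5, cluster_std=1.0, random_state=0)

# One random initialization per run. Poorly placed starting centroids can
# leave K-means stuck in a local minimum, visible as a higher final inertia
# (within-cluster sum of squared distances).
for seed in range(5):
    km = KMeans(n_clusters=5, init="random", n_init=1, random_state=seed)
    km.fit(X)
    print(f"seed={seed}  inertia={km.inertia_:.1f}")

# init="k-means++" with n_init > 1 (scikit-learn's defaults) mitigates this
# by spreading out the initial seeds and keeping the best of several runs.
```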
Q. What is a key advantage of hierarchical clustering over K-means?
  • A. It requires fewer computations
  • B. It does not require the number of clusters to be specified in advance
  • C. It is always more accurate
  • D. It can only handle small datasets
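A short sketch of why hierarchical clustering does not need the cluster count up front, assuming SciPy and a synthetic dataset: linkage builds the full merge tree once, and different numbers of clusters are read off afterwards simply by cutting the dendrogram at different levels.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=4, random_state=0)

# Build the entire merge hierarchy once; no cluster count is specified here.
Z = linkage(X, method="ward")

# The number of clusters is chosen only when the finished tree is cut.
for k in (2, 3, 4, 6):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(f"k={k}: cluster sizes = {np.bincount(labels)[1:]}")
```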
Q. What is the main difference between agglomerative and divisive hierarchical clustering?
  • A. Agglomerative starts with individual points, while divisive starts with one cluster
  • B. Agglomerative is faster than divisive
  • C. Divisive clustering is more commonly used than agglomerative
  • D. There is no difference; they are the same
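To make the agglomerative (bottom-up) direction concrete, here is a sketch using scikit-learn's AgglomerativeClustering (an assumed library choice); the divisive (top-down) direction is only described in comments, since mainstream Python libraries rarely implement it.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=42)

# Agglomerative = bottom-up: every point starts as its own cluster, and the
# two closest clusters are merged repeatedly until n_clusters remain.
agg = AgglomerativeClustering(n_clusters=3, linkage="average")
labels = agg.fit_predict(X)
print("cluster sizes:", [list(labels).count(c) for c in sorted(set(labels))])

# Divisive works the other way around: start with all points in one cluster
# and recursively split it (e.g. the DIANA algorithm).
```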
Q. What is the main purpose of using distance metrics in clustering algorithms?
  • A. To determine the number of clusters
  • B. To measure the similarity or dissimilarity between data points
  • C. To visualize the clusters formed
  • D. To optimize the performance of the algorithm
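A small illustration of what a distance metric does in this setting: it maps a pair of data points to a single dissimilarity number, and changing the metric changes which points the clustering algorithm treats as close. The sketch assumes scikit-learn's pairwise_distances and a toy set of points chosen only for readability.

```python
import numpy as np
from sklearn.metrics import pairwise_distances

# Three toy points (assumption: values picked to keep the matrices readable).
X = np.array([[1.0, 0.0],
              [3.0, 4.0],
              [6.0, 1.0]])

# Same points, three different notions of dissimilarity.
for metric in ("euclidean", "manhattan", "cosine"):
    D = pairwise_distances(X, metric=metric)
    print(metric)
    print(np.round(D, 2))
```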
Q. What is the main purpose of using the silhouette coefficient in clustering?
  • A. To measure the distance between clusters
  • B. To evaluate the compactness and separation of clusters
  • C. To determine the number of clusters
  • D. To visualize the clusters
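For the silhouette coefficient, each point's score compares its mean intra-cluster distance a against its mean distance to the nearest other cluster b, as s = (b - a) / max(a, b), so the average score reflects both compactness and separation. A brief sketch, assuming scikit-learn and synthetic data, that scores several candidate cluster counts:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=400, centers=4, random_state=1)

# A higher mean silhouette means tighter clusters that are better separated.
for k in (2, 3, 4, 5, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(X)
    print(f"k={k}: silhouette = {silhouette_score(X, labels):.3f}")
```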
Q. Which clustering method is more suitable for discovering non-linear relationships in data?
  • A. K-means clustering
  • B. Hierarchical clustering
  • C. DBSCAN
  • D. Gaussian Mixture Models
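For the last question, a compact comparison on scikit-learn's make_moons toy data (an assumed example dataset): the two interleaved half-moons form a non-linear, non-convex shape that centroid-based K-means tends to split incorrectly, while density-based DBSCAN, with a suitably chosen eps, typically recovers them.

```python
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_moons
from sklearn.metrics import adjusted_rand_score

# Two interleaved half-moons: a classic non-linear cluster shape.
X, y_true = make_moons(n_samples=400, noise=0.05, random_state=0)

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
db_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

# Adjusted Rand Index against the true moon labels (1.0 = perfect recovery).
print("K-means ARI:", round(adjusted_rand_score(y_true, km_labels), 3))
print("DBSCAN  ARI:", round(adjusted_rand_score(y_true, db_labels), 3))
```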