Artificial Intelligence & ML

Clustering Methods: K-means, Hierarchical - Problem Set
Q. If a dataset has 200 points and you apply K-means clustering with K=4, how many points will be assigned to each cluster on average?
  • A. 50
  • B. 40
  • C. 60
  • D. 30
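A quick check of the arithmetic behind the question above: 200 points split across K=4 clusters gives an average of 200 / 4 = 50 points per cluster, although the actual sizes usually differ. A minimal sketch, assuming scikit-learn and synthetic data (not part of the original question):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # 200 synthetic points, then K-means with K=4
    X, _ = make_blobs(n_samples=200, centers=4, random_state=0)
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

    # Average cluster size is n_samples / K = 200 / 4 = 50,
    # but individual cluster sizes typically deviate from that average.
    sizes = np.bincount(km.labels_)
    print("cluster sizes:", sizes, "average:", sizes.mean())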
Q. If the distance between two clusters in hierarchical clustering is defined as the maximum distance between points in the clusters, what linkage method is being used?
  • A. Single linkage
  • B. Complete linkage
  • C. Average linkage
  • D. Centroid linkage
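For reference on the question above: complete (maximum) linkage defines the distance between two clusters as the largest pairwise distance between their members, single linkage uses the smallest, and average linkage uses the mean. A minimal NumPy sketch with illustrative data:

    import numpy as np

    # Two small clusters of 2-D points (illustrative values)
    cluster_a = np.array([[0.0, 0.0], [1.0, 0.0]])
    cluster_b = np.array([[4.0, 0.0], [5.0, 1.0]])

    # All pairwise Euclidean distances between the two clusters
    pairwise = np.linalg.norm(cluster_a[:, None, :] - cluster_b[None, :, :], axis=-1)

    print("single linkage  :", pairwise.min())   # smallest pairwise distance
    print("complete linkage:", pairwise.max())   # largest pairwise distance
    print("average linkage :", pairwise.mean())  # mean pairwise distance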
Q. In a K-means clustering algorithm, if you have 5 clusters and 100 data points, how many centroids will be initialized?
  • A. 5
  • B. 100
  • C. 50
  • D. 10
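The number of centroids in K-means always equals K (5 here), regardless of how many data points there are. A minimal sketch of random centroid initialization, assuming plain NumPy:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))   # 100 data points, 2 features
    K = 5                           # number of clusters requested

    # K-means initializes exactly K centroids, here by sampling K data points
    init_idx = rng.choice(len(X), size=K, replace=False)
    centroids = X[init_idx]
    print(centroids.shape)          # (5, 2): one centroid per cluster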
Q. In hierarchical clustering, what does 'agglomerative' mean?
  • A. Clusters are formed by splitting larger clusters
  • B. Clusters are formed by merging smaller clusters
  • C. Clusters are formed randomly
  • D. Clusters are formed based on a predefined distance
Q. In hierarchical clustering, what does 'agglomerative' refer to?
  • A. A method that starts with all points as individual clusters
  • B. A method that requires the number of clusters to be predefined
  • C. A technique that merges clusters based on distance
  • D. A type of clustering that uses a centroid
Q. In hierarchical clustering, what does agglomerative clustering do?
  • A. Starts with all data points as individual clusters and merges them
  • B. Starts with one cluster and splits it into smaller clusters
  • C. Randomly assigns data points to clusters
  • D. Uses a predefined number of clusters
Q. In hierarchical clustering, what does the term 'dendrogram' refer to?
  • A. A type of data point
  • B. A tree-like diagram that shows the arrangement of clusters
  • C. A method of calculating distances
  • D. A clustering algorithm
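A dendrogram is the tree-like diagram produced by agglomerative clustering, with merges drawn at the height of the linkage distance. A minimal sketch, assuming SciPy and matplotlib are available:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram

    # Small synthetic dataset with two well-separated groups
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.5, (10, 2)), rng.normal(5, 0.5, (10, 2))])

    # Agglomerative clustering with complete linkage, then plot the merge tree
    Z = linkage(X, method="complete")
    dendrogram(Z)
    plt.title("Dendrogram (complete linkage)")
    plt.show()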
Q. In hierarchical clustering, what does the term 'linkage' refer to?
  • A. The method of assigning clusters to data points
  • B. The distance metric used to measure similarity
  • C. The strategy for merging clusters
  • D. The number of clusters to form
Q. In hierarchical clustering, what is agglomerative clustering?
  • A. A bottom-up approach to cluster formation
  • B. A top-down approach to cluster formation
  • C. A method that requires prior knowledge of clusters
  • D. A technique that uses K-means as a base
Q. In hierarchical clustering, what is the difference between agglomerative and divisive methods?
  • A. Agglomerative starts with individual points, divisive starts with one cluster
  • B. Agglomerative merges clusters, divisive splits clusters
  • C. Both A and B
  • D. Neither A nor B
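Agglomerative clustering works bottom-up (repeatedly merging the closest clusters), while divisive clustering works top-down (repeatedly splitting). scikit-learn ships the agglomerative variant; a minimal sketch:

    from sklearn.cluster import AgglomerativeClustering
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=60, centers=3, random_state=0)

    # Bottom-up (agglomerative) clustering: every point starts as its own
    # cluster and the closest pairs of clusters are merged repeatedly.
    agg = AgglomerativeClustering(n_clusters=3, linkage="average").fit(X)
    print(agg.labels_[:10])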
Q. In hierarchical clustering, what does a dendrogram provide?
  • A. A visual representation of the clustering process
  • B. A table of cluster centroids
  • C. A list of data points in each cluster
  • D. A summary of the clustering algorithm's performance
Q. In hierarchical clustering, how are clusters formed under the agglomerative approach?
  • A. Clusters are formed by splitting larger clusters
  • B. Clusters are formed by merging smaller clusters
  • C. Clusters are formed randomly
  • D. Clusters are formed based on a predefined number
Q. In K-means clustering, what happens if K is set too high?
  • A. Clusters become too large
  • B. Overfitting occurs
  • C. Underfitting occurs
  • D. No effect
Q. In which scenario would hierarchical clustering be preferred over K-means?
  • A. When the number of clusters is known
  • B. When the dataset is very large
  • C. When a hierarchy of clusters is desired
  • D. When the data is strictly numerical
Q. In which scenario would you use reinforcement learning?
  • A. When you have labeled data for training
  • B. When the model needs to learn from interactions with an environment
  • C. When you want to cluster data points
  • D. When you need to predict a continuous outcome
Q. What does the term 'feature engineering' refer to?
  • A. The process of selecting a model
  • B. The process of creating new input features from existing data
  • C. The process of tuning hyperparameters
  • D. The process of evaluating model performance
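Feature engineering means deriving new input features from the data you already have. A minimal pandas sketch; the column names are hypothetical, chosen only for illustration:

    import pandas as pd

    # Hypothetical raw data: order totals, item counts, and signup dates
    df = pd.DataFrame({
        "order_total": [120.0, 35.5, 80.0],
        "n_items": [4, 1, 2],
        "signup_date": pd.to_datetime(["2021-01-10", "2022-06-01", "2020-03-15"]),
    })

    # New features created from the existing columns
    df["avg_item_price"] = df["order_total"] / df["n_items"]
    df["account_age_days"] = (pd.Timestamp("2023-01-01") - df["signup_date"]).dt.days
    print(df[["avg_item_price", "account_age_days"]])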
Q. What is a common application of clustering in marketing?
  • A. Predicting customer behavior
  • B. Segmenting customers into distinct groups
  • C. Optimizing supply chain logistics
  • D. Forecasting sales trends
Q. What is a common application of clustering in real-world scenarios?
  • A. Spam detection in emails
  • B. Predicting stock prices
  • C. Image classification
  • D. Customer segmentation
Q. What is a common application of K-means clustering in the real world?
  • A. Image segmentation
  • B. Spam detection
  • C. Sentiment analysis
  • D. Time series forecasting
Q. What is a key advantage of using hierarchical clustering over K-means?
  • A. It requires less computational power
  • B. It does not require the number of clusters to be specified in advance
  • C. It is always more accurate
  • D. It can handle larger datasets
Q. What is a key characteristic of DBSCAN compared to K-means?
  • A. It requires the number of clusters to be specified
  • B. It can find clusters of arbitrary shape
  • C. It is faster than K-means for all datasets
  • D. It uses centroids to define clusters
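DBSCAN is density-based: instead of a cluster count it takes a neighborhood radius and a density threshold, which lets it recover non-convex clusters that K-means cannot. A minimal scikit-learn sketch on synthetic "moons" data (parameter values are illustrative):

    from sklearn.cluster import DBSCAN
    from sklearn.datasets import make_moons

    # Two interleaving half-moons: clusters with non-convex shapes
    X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

    # DBSCAN needs a radius (eps) and a density threshold (min_samples)
    # rather than K; the label -1 marks points treated as noise.
    db = DBSCAN(eps=0.2, min_samples=5).fit(X)
    print("labels found:", sorted(set(db.labels_)))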
Q. What is overfitting in machine learning?
  • A. When a model performs well on training data but poorly on unseen data
  • B. When a model is too simple to capture the underlying trend
  • C. When a model is trained on too little data
  • D. When a model has too many features
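Overfitting shows up as a gap between training and test performance. A minimal scikit-learn sketch with an intentionally unconstrained decision tree (dataset and settings are illustrative):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # An unconstrained tree can memorize the training set
    tree = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X_tr, y_tr)
    print("train accuracy:", tree.score(X_tr, y_tr))  # typically close to 1.0
    print("test accuracy: ", tree.score(X_te, y_te))  # noticeably lower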
Q. What is the effect of outliers on K-means clustering?
  • A. They have no effect on the clustering results
  • B. They can significantly distort the cluster centroids
  • C. They improve the clustering accuracy
  • D. They help in determining the number of clusters
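Because each K-means centroid is the mean of its assigned points, a single extreme outlier can drag the centroid well away from the bulk of the cluster. A small NumPy illustration:

    import numpy as np

    cluster = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1], [1.1, 1.0]])
    print("centroid without outlier:", cluster.mean(axis=0))

    # Add one extreme outlier and recompute the mean-based centroid
    with_outlier = np.vstack([cluster, [[50.0, 50.0]]])
    print("centroid with outlier:   ", with_outlier.mean(axis=0))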
Q. What is the main advantage of hierarchical clustering over K-means?
  • A. It does not require the number of clusters to be specified in advance
  • B. It is faster and more efficient
  • C. It can handle larger datasets
  • D. It is less sensitive to outliers
Q. What is the main advantage of using hierarchical clustering over K-means?
  • A. It is faster and more efficient
  • B. It does not require the number of clusters to be specified
  • C. It can handle large datasets better
  • D. It is less sensitive to outliers
Q. Which of the following is commonly used to determine the optimal number of clusters in K-means?
  • A. Silhouette score
  • B. Elbow method
  • C. Both A and B
  • D. None of the above
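Both techniques from the question above are easy to compute: the elbow method inspects how inertia falls as K grows, and the silhouette score measures how well-separated the clusters are. A minimal scikit-learn sketch evaluating a range of K values:

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score

    X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

    for k in range(2, 8):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        # Elbow method: look for the K where inertia stops dropping sharply.
        # Silhouette score: higher is better (range -1 to 1).
        print(k, round(km.inertia_, 1), round(silhouette_score(X, km.labels_), 3))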
Q. What is the main difference between K-means and hierarchical clustering?
  • A. K-means is a partitional method, while hierarchical is a divisive method
  • B. K-means requires the number of clusters to be defined, while hierarchical does not
  • C. K-means can only be used for numerical data, while hierarchical can handle categorical data
  • D. K-means is faster than hierarchical clustering for small datasets
Q. What is the main difference between K-means and K-medoids clustering?
  • A. K-means uses centroids, while K-medoids uses actual data points
  • B. K-medoids is faster than K-means
  • C. K-means can only handle numerical data, while K-medoids can handle categorical data
  • D. K-medoids requires the number of clusters to be specified, while K-means does not
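K-medoids represents each cluster by an actual member (the medoid) rather than by a mean, which makes it less sensitive to outliers. K-medoids is not part of core scikit-learn; below is only a minimal NumPy sketch of the medoid step on one cluster, with illustrative data:

    import numpy as np

    # Points already assigned to one cluster (illustrative values)
    cluster = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1], [10.0, 10.0]])

    # K-means representative: the mean, which need not be a real data point
    centroid = cluster.mean(axis=0)

    # K-medoids representative: the member minimizing total distance to the rest
    dists = np.linalg.norm(cluster[:, None, :] - cluster[None, :, :], axis=-1)
    medoid = cluster[dists.sum(axis=1).argmin()]

    print("centroid:", centroid)   # pulled toward the outlier
    print("medoid:  ", medoid)     # an actual data point from the cluster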
Q. What is the main difference between supervised and unsupervised learning?
  • A. Supervised learning uses labeled data, unsupervised does not
  • B. Unsupervised learning is faster than supervised learning
  • C. Supervised learning is only for classification tasks
  • D. Unsupervised learning requires more data
Q. What is the main function of an activation function in a neural network?
  • A. To initialize weights
  • B. To introduce non-linearity into the model
  • C. To optimize the learning rate
  • D. To reduce the number of layers
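An activation function applies a non-linear transform to each neuron's weighted sum; without one, a stack of layers collapses into a single linear map. A minimal NumPy sketch of two common choices, ReLU and sigmoid:

    import numpy as np

    def relu(z):
        # Non-linear: zeroes out negative pre-activations
        return np.maximum(0.0, z)

    def sigmoid(z):
        # Non-linear: squashes values into the interval (0, 1)
        return 1.0 / (1.0 + np.exp(-z))

    z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print("relu:   ", relu(z))
    print("sigmoid:", np.round(sigmoid(z), 3))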