Q. In lexical analysis, what is a 'token'?
-
A.
A sequence of characters in the source code
-
B.
A data structure representing a keyword or identifier
-
C.
A type of error in the source code
-
D.
A part of the syntax tree
Solution
A token is a data structure that represents a keyword, identifier, operator, or other meaningful element of the source code, typically pairing a token type with an attribute value.
Correct Answer:
B
— A data structure representing a keyword or identifier
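As a sketch (the names are illustrative, not from any particular compiler), a token is often represented as a small record pairing a token type with the matched text:

```python
from collections import namedtuple

# A token pairs a token type with an attribute value (here, the matched text).
Token = namedtuple("Token", ["type", "value"])

keyword = Token("KEYWORD", "while")
ident = Token("IDENTIFIER", "counter")

print(keyword)  # Token(type='KEYWORD', value='while')
```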
Q. What is a 'lexeme' in the context of lexical analysis?
-
A.
The smallest unit of meaning in a programming language
-
B.
The actual sequence of characters that matches a token
-
C.
A type of syntax error
-
D.
A representation of a variable in the symbol table
Solution
A lexeme is the actual sequence of characters in the source code that matches a token.
Correct Answer:
B
— The actual sequence of characters that matches a token
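To make the distinction concrete (a minimal sketch using Python's `re` module): the lexeme is the raw character sequence in the source, and the token is the classified record built from it:

```python
import re

IDENTIFIER = re.compile(r"[A-Za-z_]\w*")

source = "count = count + 1"
m = IDENTIFIER.match(source)
lexeme = m.group(0)             # the matched characters: "count"
token = ("IDENTIFIER", lexeme)  # the token built from that lexeme
print(lexeme, token)
```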
Q. Which of the following best defines a 'lexeme'?
-
A.
The smallest unit of a program
-
B.
A sequence of characters that matches a token
-
C.
A type of syntax error
-
D.
A part of the symbol table
Solution
A lexeme is a sequence of characters in the source code that matches the pattern for a token defined in the lexical analyzer's specification.
Correct Answer:
B
— A sequence of characters that matches a token
Q. What is the output of a lexical analyzer typically used for?
-
A.
To generate machine code
-
B.
To create a syntax tree
-
C.
To feed into the parser
-
D.
To optimize the code
Solution
The output of a lexical analyzer, which consists of tokens, is typically fed into the parser for further processing.
Correct Answer:
C
— To feed into the parser
Q. What is the output of a lexical analyzer when it encounters an unrecognized character?
-
A.
A token
-
B.
A syntax tree
-
C.
An error message
-
D.
A symbol table entry
Solution
When a lexical analyzer encounters an unrecognized character, it typically outputs an error message indicating the issue.
Correct Answer:
C
— An error message
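A minimal sketch of this behavior (the token set and error format are illustrative): the scanner reports any character that matches no token pattern and then continues:

```python
import re

# Illustrative token patterns: numbers, identifiers, operators, whitespace.
TOKEN = re.compile(r"\d+|[A-Za-z_]\w*|[+\-*/=]|\s+")

def scan(src):
    tokens, errors, pos = [], [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if m:
            tokens.append(m.group())
            pos = m.end()
        else:
            # No pattern matches: report the unrecognized character.
            errors.append(f"unrecognized character {src[pos]!r} at position {pos}")
            pos += 1
    return tokens, errors

tokens, errors = scan("x = 1 @ 2")
print(errors)  # one error message for the '@'
```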
Q. What is the purpose of a finite automaton in lexical analysis?
-
A.
To parse the syntax of the code
-
B.
To recognize patterns in the input stream
-
C.
To generate machine code
-
D.
To optimize the code
Solution
A finite automaton is used to recognize patterns in the input stream, which helps in identifying tokens.
Correct Answer:
B
— To recognize patterns in the input stream
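As an illustration, here is a small hand-coded deterministic finite automaton (a sketch, not generated by any tool) that recognizes identifiers: a letter or underscore followed by letters, digits, or underscores:

```python
def accepts_identifier(s):
    # States: 0 = start, 1 = inside an identifier.
    state = 0
    for ch in s:
        if state == 0 and (ch.isalpha() or ch == "_"):
            state = 1
        elif state == 1 and (ch.isalnum() or ch == "_"):
            state = 1
        else:
            return False  # no valid transition: reject
    return state == 1     # accept only if we ended in the identifier state

print(accepts_identifier("foo_1"))  # True
print(accepts_identifier("1foo"))   # False
```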
Q. What is the purpose of a symbol table in the context of lexical analysis?
-
A.
To store the tokens generated
-
B.
To keep track of variable names and their attributes
-
C.
To optimize the code
-
D.
To parse the syntax tree
Solution
The symbol table is used to keep track of variable names, their types, and other attributes during the compilation process.
Correct Answer:
B
— To keep track of variable names and their attributes
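A minimal sketch (the attribute names are illustrative): a symbol table can be as simple as a dictionary mapping each name to its recorded attributes:

```python
# Symbol table: name -> attributes recorded during compilation.
symbol_table = {}

def declare(name, type_, line):
    symbol_table[name] = {"type": type_, "declared_at": line}

def lookup(name):
    return symbol_table.get(name)

declare("count", "int", line=3)
declare("ratio", "float", line=7)

print(lookup("count"))  # {'type': 'int', 'declared_at': 3}
```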
Q. What is the role of regular expressions in lexical analysis?
-
A.
To define the grammar of the programming language
-
B.
To specify the syntax of the tokens
-
C.
To generate intermediate code
-
D.
To optimize the parsing process
Solution
Regular expressions are used to specify the syntax of the tokens that the lexical analyzer recognizes.
Correct Answer:
B
— To specify the syntax of the tokens
Q. Which of the following best describes 'tokenization'?
-
A.
The process of generating machine code
-
B.
The process of converting source code into tokens
-
C.
The process of optimizing code
-
D.
The process of parsing syntax
Solution
Tokenization is the process of converting source code into tokens that can be processed by the compiler.
Correct Answer:
B
— The process of converting source code into tokens
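The process can be sketched in a few lines (the token set is illustrative, and a real scanner would also report input matched by no pattern):

```python
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),  # whitespace is matched but discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    tokens = []
    for m in MASTER.finditer(src):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("x = 42 + y"))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]
```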
Q. Which of the following best describes the relationship between a lexer and a parser?
-
A.
The lexer generates machine code, while the parser checks syntax
-
B.
The lexer produces tokens, which the parser uses to build a syntax tree
-
C.
The lexer and parser perform the same function
-
D.
The parser generates tokens, while the lexer checks syntax
Solution
The lexer produces tokens that the parser uses to build a syntax tree, making them complementary components in the compilation process.
Correct Answer:
B
— The lexer produces tokens, which the parser uses to build a syntax tree
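A toy illustration of the hand-off (both the lexer and the parser here are simplified stand-ins): the lexer's token stream is exactly what the parser consumes to build a tree:

```python
def lex(src):
    # Stand-in lexer: produces (type, value) tokens for a tiny language.
    tokens = []
    for word in src.split():
        kind = "NUMBER" if word.isdigit() else "OP"
        tokens.append((kind, word))
    return tokens

def parse_addition(tokens):
    # Stand-in parser for "NUMBER OP NUMBER": builds a tuple-shaped tree.
    left, op, right = tokens
    return (op[1], left[1], right[1])

tree = parse_addition(lex("1 + 2"))
print(tree)  # ('+', '1', '2')
```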
Q. Which of the following is a common error detected by a lexical analyzer?
-
A.
Syntax errors
-
B.
Type errors
-
C.
Unrecognized characters
-
D.
Semantic errors
Solution
A lexical analyzer can detect unrecognized characters that do not match any defined tokens.
Correct Answer:
C
— Unrecognized characters
Q. Which of the following is a common technique used in lexical analysis?
-
A.
Recursive descent parsing
-
B.
Finite state machines
-
C.
Dynamic programming
-
D.
Backtracking
Solution
Finite state machines are commonly used in lexical analysis to recognize tokens whose patterns are specified by regular expressions.
Correct Answer:
B
— Finite state machines
Q. Which of the following is NOT a typical output of a lexical analyzer?
-
A.
Tokens
-
B.
Symbol table
-
C.
Abstract syntax tree
-
D.
Error messages
Solution
An abstract syntax tree is typically produced by the parser, not the lexical analyzer.
Correct Answer:
C
— Abstract syntax tree
Q. Which of the following tools is commonly used to implement a lexical analyzer?
-
A.
Yacc
-
B.
Lex
-
C.
Bison
-
D.
ANTLR
Solution
Lex is a tool specifically designed for generating lexical analyzers; Yacc and Bison, by contrast, are parser generators.
Correct Answer:
B
— Lex