# Data Structure Exam Quiz!

Approved & Edited by ProProfs Editorial Team | Written by MD Bakhtiar (Community Contributor)
Questions: 27

• 1.

### In the case of Union by Rank, the rank is increased based on the rank of the tree with:

• A.

Higher rank

• B.

Lower rank

• C.

None of the above

C. None of the above
Explanation
In Union by Rank, the rank of the merged tree does not simply follow the higher-rank or the lower-rank tree: the rank increases only when the two trees being merged have equal rank, in which case the rank of the resulting root grows by one; otherwise the root of higher rank keeps its rank unchanged. The rank of a tree is an upper bound on its height, so this rule keeps merged trees shallow and leads to efficient operations in the Union-Find data structure.
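The rule above can be sketched as a minimal DSU with union by rank (a sketch only; class and method names are illustrative):

```python
class DSU:
    """Disjoint Set Union with union by rank (no path compression here)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n          # upper bound on each root's tree height

    def find(self, x):
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra          # make ra the root of higher (or equal) rank
        self.parent[rb] = ra         # attach the lower-rank root under ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1       # rank grows only on an equal-rank merge
```

Note that the rank of the surviving root changes only in the equal-rank case, which is why neither "higher rank" nor "lower rank" is the right answer.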


• 2.

### Which of the following are true for the amortized time complexity of DSU having N nodes after applying path compression and union by size?

• A.

(option shown as an image; not recoverable)

• B.

Proportional to Ackermann Function

• C.

(option shown as an image; not recoverable)

• D.

(option shown as an image; not recoverable)

• E.

None of the above

A.
D.
Explanation
The amortized time complexity of DSU (Disjoint Set Union) with N nodes after applying path compression and union by size is proportional to the inverse Ackermann function. This function grows very slowly and is considered to be nearly constant for all practical values of N. Therefore, the amortized time complexity of DSU is very efficient and can be considered as nearly constant.
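The two optimizations the question names can be sketched together (a minimal illustration; names are not from any particular library):

```python
class DSU:
    """DSU with path compression and union by size; amortized cost per
    operation is proportional to the inverse Ackermann function."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:                # path compression:
            self.parent[x], x = root, self.parent[x] # re-point each visited node at the root
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra              # attach the smaller tree under the larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True
```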


• 3.

### You are given a list of N elements. You will be asked Q queries. Each query will ask you to print the minimum value in a range [L, R]. Which of the following data structures can be used to provide the fastest query answers (Here, N and Q are both in the range of 10^5)?

• A.

Segment Tree

• B.

Fenwick Tree

• C.

Disjoint Set Union

C. Disjoint Set Union
Explanation
Since all Q queries are given up front, they can be answered offline with a DSU-based technique (often called Arpa's trick) at nearly constant amortized cost per query, which is faster than a Segment Tree's O(log N) per query. A standard Fenwick Tree supports only prefix minimum queries, not arbitrary ranges.
• 4.

### Consider that you are participating in a programming contest. It’s almost the end of the contest. You have one last problem to solve. The problem involves performing an efficient range sum query in a 1D array. The queries contain both range updates and printing the range sum. You only have enough time to implement and debug one of the following algorithms. Which one do you choose?

• A.

Segment Tree

• B.

Fenwick Tree

• C.

Disjoint Set Union

A. Segment Tree
Explanation
A Segment Tree with lazy propagation handles both range updates and range sum queries directly, each in O(log N) time, which makes it the safest single implementation under contest time pressure. A standard Fenwick Tree supports point updates with prefix sums; extending it to range updates combined with range sums requires a less common two-tree construction. A Disjoint Set Union does not answer range sum queries at all.
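A lazy-propagation segment tree for range add and range sum can be sketched as follows (a simplified illustration, not contest-hardened code):

```python
class LazySegTree:
    """Segment tree with lazy propagation: range add and range sum in O(log N)."""
    def __init__(self, data):
        self.n = len(data)
        self.sum = [0] * (4 * self.n)
        self.lazy = [0] * (4 * self.n)   # pending "add" not yet pushed to children
        self._build(1, 0, self.n - 1, data)

    def _build(self, node, lo, hi, data):
        if lo == hi:
            self.sum[node] = data[lo]
            return
        mid = (lo + hi) // 2
        self._build(2 * node, lo, mid, data)
        self._build(2 * node + 1, mid + 1, hi, data)
        self.sum[node] = self.sum[2 * node] + self.sum[2 * node + 1]

    def _push(self, node, lo, hi):
        if self.lazy[node]:
            mid = (lo + hi) // 2
            for child, clo, chi in ((2 * node, lo, mid), (2 * node + 1, mid + 1, hi)):
                self.lazy[child] += self.lazy[node]
                self.sum[child] += self.lazy[node] * (chi - clo + 1)
            self.lazy[node] = 0

    def update(self, l, r, delta, node=1, lo=0, hi=None):
        """Add delta to every element in [l, r]."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:          # node fully covered: record lazily
            self.sum[node] += delta * (hi - lo + 1)
            self.lazy[node] += delta
            return
        self._push(node, lo, hi)
        mid = (lo + hi) // 2
        self.update(l, r, delta, 2 * node, lo, mid)
        self.update(l, r, delta, 2 * node + 1, mid + 1, hi)
        self.sum[node] = self.sum[2 * node] + self.sum[2 * node + 1]

    def query(self, l, r, node=1, lo=0, hi=None):
        """Sum of the elements in [l, r]."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return 0
        if l <= lo and hi <= r:
            return self.sum[node]
        self._push(node, lo, hi)
        mid = (lo + hi) // 2
        return (self.query(l, r, 2 * node, lo, mid)
                + self.query(l, r, 2 * node + 1, mid + 1, hi))
```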


• 5.

### Which of the following range queries does a single Fenwick Tree support?

• A.

Multiplication

• B.

GCD

• C.

Matrix Sum

• D.

Minimum

• E.

None of the above

A. Multiplication
C. Matrix Sum
Explanation
A Fenwick Tree, also known as a Binary Indexed Tree (BIT), efficiently supports point updates and prefix queries, with prefix sums being the classic use; arbitrary range queries then require the combining operation to be invertible. Multiplication qualifies because a prefix product can be divided out (given an invertible modulus), and matrix sums work because the structure generalizes to two dimensions (a 2D BIT over matrix entries). GCD and minimum are not invertible, so a single standard Fenwick Tree cannot answer arbitrary range queries for them, although prefix-only or specialized variants exist.
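As a sketch of the baseline sum case (the prefix-product variant works the same way with multiplication under an invertible modulus; names are illustrative):

```python
class Fenwick:
    """Binary Indexed Tree over a 1-indexed array: point update, prefix query."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def update(self, i, delta):
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i              # jump to the next node responsible for i

    def prefix(self, i):
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i              # strip the lowest set bit
        return s

    def range_sum(self, l, r):
        # invertibility of + is what makes the arbitrary range possible
        return self.prefix(r) - self.prefix(l - 1)
```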


• 6.

### In practice, the implementation of Fenwick Tree is similar to that of:

• A.

Segment Tree

• B.

Binary Heap

• C.

Binary Search Tree

• D.

AVL Tree

• E.

None of the above

B. Binary Heap
Explanation
The implementation of a Fenwick Tree resembles that of a Binary Heap in that both are implicit data structures: the tree is stored in a flat array, and parent-child relationships are computed from array indices by simple arithmetic (bit operations for the Fenwick Tree, index doubling for the heap) rather than stored as pointers. The two structures serve different purposes, prefix queries versus extracting the minimum or maximum, but in practice their array-based layouts are similar.


• 7.

### Each integer can be represented as a sum of integer powers of:

• A.

0

• B.

1

• C.

2

• D.

3

B. 1
C. 2
Explanation
Every positive integer is trivially a sum of powers of 1, since every power of 1 equals 1 (for example, 5 = 1 + 1 + 1 + 1 + 1). Every positive integer is also a sum of distinct powers of 2; this is exactly its binary representation (for example, 5 = 4 + 1 and 6 = 4 + 2). Powers of 0 are all 0 (apart from the ambiguous 0^0), so they cannot build nonzero integers, and with distinct powers of 3 even a number as small as 2 cannot be represented.
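The distinct powers-of-two decomposition is the one a Fenwick Tree exploits through the lowest-set-bit trick; a minimal sketch (function name illustrative):

```python
def powers_of_two(n):
    """Decompose a positive integer into the distinct powers of two
    given by its binary representation, smallest first."""
    parts = []
    while n:
        low = n & -n          # lowest set bit, e.g. 12 & -12 == 4
        parts.append(low)
        n -= low
    return parts
```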


• 8.

### Consider that we have 10^6 strings. We want to find out the number of unique strings using the polynomial hash function with m = 10^9 + 9. In that case, there will definitely be a collision. To avoid the problem without increasing much time complexity, we can:

• A.

Use a larger m

• B.

Compare the strings if two hash values match

• C.

Use the same hash function twice, each time with different p

• D.

None of the above

A. Use a larger m
C. Use the same hash function twice, each time with different p
Explanation
To avoid collisions, we can use a larger m: with 10^6 strings and m = 10^9 + 9, a collision is almost certain by the birthday paradox, whereas a much larger modulus makes one unlikely. Alternatively, we can hash each string twice with different bases p and treat the pair of values as the hash; a collision then requires both components to collide at once, which is extremely unlikely. Comparing the strings whenever two hash values match would guarantee correctness, but since many hash matches are expected here, the extra character-by-character comparisons would noticeably increase the running time.
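Double hashing can be sketched like this (the bases 31 and 53 and the character mapping are illustrative choices, not fixed by the question):

```python
M = 10**9 + 9

def poly_hash(s, p, m=M):
    """Polynomial hash of a lowercase string: sum of value(c) * p^i mod m,
    mapping 'a' -> 1, 'b' -> 2, and so on."""
    h, pw = 0, 1
    for ch in s:
        h = (h + (ord(ch) - ord('a') + 1) * pw) % m
        pw = pw * p % m
    return h

def double_hash(s):
    """Pair of hashes with different bases p: a collision now requires both
    components to collide simultaneously, which is far less likely."""
    return (poly_hash(s, 31), poly_hash(s, 53))
```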


• 9.

### Given a string of length N, and a pattern of length M to look for, what is the time for pre-processing in the Rabin-Karp Algorithm?

(Options A through I were complexity expressions shown as images and are not recoverable.)

D. O(M)
Explanation
The time for pre-processing in the Rabin-Karp Algorithm is O(M), where M is the length of the pattern to look for. This is because the algorithm calculates the hash value of the pattern only once and then calculates the hash value of each substring of length M in the string. Therefore, the time complexity of the pre-processing step is linear with respect to the length of the pattern.


• 10.

### Given a string of length N, and a pattern of length M to look for, the time complexity of Rabin-Karp in the worst case:

(Options A through E were complexity expressions shown as images and are not recoverable.)

A. O((N-M+1)M)
Explanation
The worst-case time complexity of the Rabin-Karp algorithm is O((N-M+1)M), where N is the length of the text and M is the length of the pattern. The algorithm compares the pattern's hash against the hash of each of the N-M+1 windows of length M, and every hash match must be verified by an O(M) character comparison. In the worst case, such as text "aaa...a" and pattern "aa...a", every window matches, so all N-M+1 windows cost O(M) each.
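A sketch of Rabin-Karp showing the O(M) preprocessing, the rolling update, and the verification step that produces the worst case above (the base p and modulus m are illustrative):

```python
def rabin_karp(text, pattern, p=31, m=10**9 + 9):
    """Return the start indices of all occurrences of pattern in text."""
    n, k = len(text), len(pattern)
    if k > n:
        return []
    def val(ch):
        return ord(ch) + 1                # avoid mapping any character to 0
    pk = pow(p, k - 1, m)                 # p^(k-1): weight of the leftmost char
    ph = wh = 0
    for i in range(k):                    # O(M) preprocessing of both hashes
        ph = (ph * p + val(pattern[i])) % m
        wh = (wh * p + val(text[i])) % m
    hits = []
    for i in range(n - k + 1):
        if wh == ph and text[i:i + k] == pattern:   # verify on hash match
            hits.append(i)
        if i + k < n:                     # roll the window one step right
            wh = ((wh - val(text[i]) * pk) * p + val(text[i + k])) % m
    return hits
```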


• 11.

### Why is it best to use a prime number as a mod in a hashing function?

• A.

Minimize collisions

• B.

Have more common factors with the hash value

• C.

Higher Number of Slots in Hash Table

• D.

Higher chance of existence of inverse

A. Minimize collisions
C. Higher Number of Slots in Hash Table
D. Higher chance of existence of inverse
Explanation
Using a prime modulus helps minimize collisions: a prime shares no common factors with typical key patterns or hash bases, so hash values are spread more evenly across the table instead of clustering on the divisors of m. Compared with a convenient composite choice such as a power of two, where effectively only the low-order bits of a key matter, a prime of similar size makes use of all m slots of the table. Finally, modulo a prime every nonzero value has a multiplicative inverse, which matters for techniques such as polynomial hashing where hash values need to be divided.


• 12.

### Which of the following can be considered as a collision in the Hash table?

• A.

Two key-value pairs that have equal keys but different values

• B.

Two key-value pairs that have different keys and hash to different indices

• C.

Two key-value pairs that have different keys but hash to the same index

• D.

Two key-value pairs that have equal keys but hash to different indices

C. Two key-value pairs that have different keys but hash to the same index
Explanation
When two key-value pairs have different keys but hash to the same index in a hash table, it is considered a collision. This means that both pairs are mapped to the same location in the hash table, which can lead to conflicts when trying to retrieve or store the values associated with those keys. Collisions can be resolved using various techniques such as chaining or open addressing to handle multiple values at the same index.
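Chaining, mentioned above, can be sketched with a toy table (class and method names are illustrative):

```python
class ChainedHashTable:
    """Toy hash table using separate chaining: colliding keys (different keys,
    same index) share a bucket and are told apart by comparing the keys."""
    def __init__(self, slots=8):
        self.buckets = [[] for _ in range(slots)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # equal key: overwrite, not a collision
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key; may share the bucket

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None
```

With three keys forced into two slots, at least one bucket must hold two entries, yet lookups still return the right values because keys are compared inside the bucket.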


• 13.

### Consider that, you have a hash function that converts each character of a string using a formula, f(x), and then concatenates them to find the hash value. Which of the following formulae will you use to convert each character to ensure no collision (Here, x denotes the ASCII value of the character)?

• A.

F(x) = (3x%5)+3

• B.

F(x) = 5x+12

• C.

F(x) = x/3+1

B. F(x) = 5x+12
Explanation
f(x) = 5x + 12 is a strictly increasing linear function, so distinct ASCII values always map to distinct outputs and no collision is possible. By contrast, f(x) = (3x % 5) + 3 maps every character to one of only five values, which guarantees collisions, and f(x) = x/3 + 1 uses integer division, so up to three consecutive ASCII values (for example 96, 97, and 98) map to the same output.
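One way to check the three formulas is to count collisions over the printable ASCII range; a small sketch (the helper name is illustrative):

```python
def count_collisions(f, domain=range(32, 127)):
    """Count how many printable-ASCII inputs share an output under f."""
    seen = {}
    for x in domain:
        seen.setdefault(f(x), []).append(x)
    return sum(len(xs) - 1 for xs in seen.values())
```

On this domain, only the linear formula f(x) = 5x + 12 produces zero collisions.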


• 14.

### String hashing performs worse than the brute force algorithm when doing a single comparison.

• A.

True

• B.

False

A. True
Explanation
String hashing performs worse than the brute force algorithm for a single comparison because hashing must always process every character of both strings to compute their hash values, with modular multiplications on top, whereas a direct character-by-character comparison involves only cheap equality checks and can stop at the first mismatch. Hashing pays off when a string is compared many times, since its hash is computed once and reused.


• 15.

### While calculating the hash of a string using Polynomial Hash Function, we map each character to 0, 1, 2, … and so on.

• A.

True

• B.

False

B. False
Explanation
The statement is false because in a polynomial hash function we deliberately avoid mapping any character to 0. Characters are instead mapped to 1, 2, 3, and so on (for example, a -> 1, b -> 2 for lowercase strings), because if 'a' mapped to 0 then the strings "a", "aa", and "aaa" would all hash to the same value.
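A tiny sketch shows why the mapping starts at 1 rather than 0 (the base p = 31 and the function name are illustrative):

```python
def poly_hash_mapped(s, a_value, p=31, m=10**9 + 9):
    """Polynomial hash of a lowercase string, mapping 'a' -> a_value,
    'b' -> a_value + 1, and so on."""
    h, power = 0, 1
    for ch in s:
        h = (h + (ord(ch) - ord('a') + a_value) * power) % m
        power = power * p % m
    return h
```

With 'a' mapped to 0, appending more copies of 'a' never changes the hash; starting the mapping at 1 removes that collision.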


• 16.

### To reduce collision, the hash function should derive a hash value depending on the pattern that might exist in the strings.

• A.

True

• B.

False

B. False
Explanation
The statement is false because to reduce collision, the hash function should derive a hash value that minimizes the chances of different inputs producing the same hash value. The hash function should distribute the hash values evenly across the hash table, regardless of any patterns that might exist in the strings. This helps to ensure that the strings are evenly distributed and collisions are minimized.


• 17.

### We can use the polynomial hash function for calculating the rolling hash given modulo is prime.

• A.

True

• B.

False

A. True
Explanation
The polynomial hash function is commonly used for calculating the rolling hash in string matching algorithms. When the modulo used in the hash function is prime, it helps to reduce the chances of collisions and ensures a more even distribution of hash values. This makes it a suitable choice for implementing rolling hash algorithms, where we need to efficiently calculate the hash value of a sliding window of characters in a string. Therefore, the statement that we can use the polynomial hash function for calculating the rolling hash given the modulo is prime is true.


• 18.

### Since Fenwick Tree stores prefix sum, we cannot get the sum for a specific range that doesn’t involve prefix.

• A.

True

• B.

False

B. False
Explanation
The given statement is false. Although a Fenwick Tree natively answers prefix sum queries, the sum of any range [L, R] is obtained as the difference of two prefix sums: sum(1..R) - sum(1..L-1). Arbitrary range sums are therefore available, and the statement is incorrect.


• 19.

### Fenwick Tree is slower than the prefix array in answering range sum queries.

• A.

True

• B.

False

A. True
Explanation
The Fenwick Tree is slower than a prefix sum array at answering range sum queries: a Fenwick query costs O(log N), while a prefix array answers any range sum in constant time by subtracting two precomputed prefix sums. The prefix array only wins when the data is static, however, since a single point update costs O(N) there versus O(log N) in a Fenwick Tree. For queries alone, the statement is true.


• 20.

### We can use Fenwick Tree to keep track of prefix GCDs.

• A.

True

• B.

False

B. False
Explanation
Fenwick Tree, also known as Binary Indexed Tree (BIT), relies on the combined operation being invertible, as addition is, so that stored partial results can be corrected when an element changes. GCD is not invertible: knowing gcd(a, b) and a does not recover b, so a changed element cannot be removed from the stored values. A Fenwick Tree therefore cannot, in general, maintain prefix GCDs under updates, and the statement is false.


• 21.

### In DSU, two elements having the same representatives might belong to different sets.

• A.

True

• B.

False

B. False
Explanation
In DSU (Disjoint Set Union), two elements having the same representatives cannot belong to different sets. The representatives in DSU are chosen in such a way that all elements in the same set have the same representative. Therefore, if two elements have the same representative, it implies that they belong to the same set. Hence, the given statement is false.


• 22.

### In Kruskal’s Algorithm, DSU is used to find the edge with the lowest weight.

• A.

True

• B.

False

B. False
Explanation
In Kruskal's Algorithm, DSU (Disjoint Set Union) is not used to find the edge with the lowest weight. DSU is used to efficiently check whether adding an edge will create a cycle in the minimum spanning tree being constructed. The edge with the lowest weight is typically found using a priority queue or by sorting the edges in ascending order of their weights. Therefore, the statement is false.


• 23.

### Path compression in a tree with N nodes transforms find_set() operation from having time complexity ________ to ________

Answer: O(N) (before), O(log N) (after)
Explanation
Path compression transforms find_set() from O(N) worst-case time (walking a degenerate chain of N nodes) to O(log N) amortized time. During each find_set() call, every node visited on the path is re-pointed directly at the root, flattening the tree, so subsequent finds on those nodes take near-constant time. Combined with union by size or rank, the amortized cost drops further, to the inverse Ackermann bound.


• 24.

### The depth of any tree with N nodes if we use union by rank is ________

Answer: O(log N)
Explanation
The depth of any tree with N nodes under union by rank is O(log N). Union by rank always attaches the root of the smaller-rank tree to the root of the larger-rank tree, and the rank (an upper bound on height) grows only when two equal-rank trees merge. A tree of rank r therefore contains at least 2^r nodes, so with N nodes the depth is bounded by log2(N).


• 25.

### Consider that in a rolling hash, we have a window of size 23. We shift the window by removing the first character and adding a new character at the end. The old and the new window have ________ characters in common.

22
Explanation
In a rolling hash with a window size of 23, when we shift the window by removing the first character and adding a new character at the end, the old and new window have 22 characters in common. This is because only one character is being replaced, so the remaining 22 characters in the window remain the same.


• 26.

### If our string contains only lowercase English characters, they can be considered to be in the base ________

26
Explanation
The English alphabet consists of 26 lowercase letters. Therefore, if our string contains only lowercase English characters, we can consider them to be in the base 26. Each character can be represented by a unique number from 1 to 26, where 'a' is 1, 'b' is 2, 'c' is 3, and so on.


• 27.

### If two hash values match, their corresponding values might not match. This can be explained using the ________