# Computer Data Structures and Algorithms 1 (TTA)

Approved & Edited by ProProfs Editorial Team | Written by Harry.hot02 (Community Contributor)
Questions: 25 | Attempts: 117

INSTRUCTIONS
1. Number of questions: 25
2. Time limit: 15 minutes
3. Pass mark: 30%
4. Questions per page: 1
5. Each question carries 1 mark
6. Negative marking per question: 0.25
7. You may go back, skip, and change your answers
8. You may print out your result and certificate
9. You may print out your response sheet with the correct answer key and explanations
10. Your result, certificate, and response sheet will be sent to your email ID at the end of the online test


• 1.

### Two main measures for the efficiency of an algorithm are

• A.

Processor and memory

• B.

Complexity and capacity

• C.

Time and space

• D.

Data and space

C. Time and space
Explanation
The efficiency of an algorithm is typically measured in terms of the time it takes to execute and the amount of memory space it requires. Time refers to the number of operations or steps the algorithm takes to complete, while space refers to the amount of memory or storage it uses. Therefore, "Time and space" is the correct answer as it accurately represents the two main measures for algorithm efficiency.


• 2.

### The time factor when determining the efficiency of an algorithm is measured by

• A.

Counting micro seconds

• B.

Counting the number of Key operations

• C.

Counting the number of statements

• D.

Counting the kilobytes of algorithm

B. Counting the number of Key operations
Explanation
The efficiency of an algorithm is determined by measuring the time factor, which is commonly done by counting the number of key operations. Key operations refer to the fundamental operations that are performed in the algorithm, such as comparisons, assignments, and arithmetic operations. By counting the number of key operations, we can get an idea of how efficiently the algorithm is performing and compare it to other algorithms. Counting micro seconds, the number of statements, or the kilobytes of the algorithm may not accurately reflect the time factor or efficiency of the algorithm.


• 3.

### The space factor when determining the efficiency of an algorithm is measured by

• A.

Counting the maximum memory needed by the algorithm

• B.

Counting the minimum memory needed by the algorithm

• C.

Counting the average memory needed by the algorithm

• D.

Counting the maximum disk space needed by the algorithm

A. Counting the maximum memory needed by the algorithm
Explanation
The space factor when determining the efficiency of an algorithm is measured by counting the maximum memory needed by the algorithm. This means that the efficiency of the algorithm is evaluated based on the maximum amount of memory it requires to execute. By considering the maximum memory needed, we can assess how efficiently the algorithm utilizes memory resources. This measurement helps in understanding the space complexity of the algorithm and comparing it with other algorithms to determine their efficiency in terms of memory usage.


• 4.

### Which of the following cases does not exist in complexity theory?

• A.

Best case

• B.

Worst case

• C.

Average case

• D.

Null case

D. Null case
Explanation
In complexity theory, the best case refers to the scenario where an algorithm performs optimally and achieves the lowest possible time or space complexity. The worst case refers to the scenario where an algorithm performs the least efficiently and has the highest time or space complexity. The average case refers to the scenario where an algorithm performs with an average level of efficiency. However, the null case does not exist in complexity theory as it does not represent any specific scenario or input for an algorithm.


• 5.

### The worst case occurs in a linear search algorithm when

• A.

Item is somewhere in the middle of the array

• B.

Item is not in the array at all

• C.

Item is the last element in the array

• D.

Item is the last element in the array or is not there at all

D. Item is the last element in the array or is not there at all
Explanation
The worst case in a linear search algorithm occurs when the item being searched for is either the last element in the array or not present in the array at all. In both cases, the algorithm would have to iterate through all the elements in the array before determining that the item is not there or finding it as the last element. This results in the maximum possible number of comparisons and the worst time complexity for the linear search algorithm.


• 6.

### The average case occurs in a linear search algorithm when

• A.

When item is somewhere in the middle of the array

• B.

When item is not in the array at all

• C.

When the item is the last element in the array

• D.

When item is the last element in the array or is not there at all

A. When item is somewhere in the middle of the array
Explanation
The average case in a linear search algorithm occurs when the item being searched for is somewhere in the middle of the array. This means that on average, the algorithm will have to iterate through approximately half of the array before finding the item. In the other scenarios mentioned, such as when the item is not in the array at all or when it is the last element, the algorithm may have to iterate through the entire array before determining that the item is not present.
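The "about half the array" intuition can be checked empirically. Below is a minimal Python sketch (the quiz contains no code, so all names are illustrative): averaged over all equally likely positions, the search makes roughly (n + 1) / 2 comparisons, while a missing item forces all n, matching the worst case from the previous question.

```python
def linear_search(items, target):
    """Scan left to right; return (index, number of comparisons made)."""
    for i, value in enumerate(items):
        if value == target:
            return i, i + 1          # found after i + 1 comparisons
    return -1, len(items)            # absent: every element was compared

data = list(range(10))               # n = 10, each target equally likely
total = sum(linear_search(data, t)[1] for t in data)
average = total / len(data)          # (1 + 2 + ... + 10) / 10 = 5.5 = (n + 1) / 2
worst = linear_search(data, 99)[1]   # absent item: all n = 10 comparisons
```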


• 7.

### The complexity of the average case of an algorithm is

• A.

Much more complicated to analyze than that of the worst case

• B.

Much simpler to analyze than that of the worst case

• C.

Sometimes more complicated and sometimes simpler than that of the worst case

• D.

None of these

A. Much more complicated to analyze than that of the worst case
Explanation
The complexity of the average case of an algorithm is much more complicated to analyze than that of the worst case. This is because the average case takes into account all possible inputs and their probabilities, whereas the worst case only considers the input that would result in the maximum amount of operations. Analyzing the average case requires considering a wider range of inputs and their likelihoods, making it more complex.


• 8.

### The complexity of the linear search algorithm is

• A.

O(n)

• B.

O(log n)

• C.

O(n²)

• D.

O(n log n)

A. O(n)
Explanation
The complexity of linear search algorithm is O(n) because in the worst case scenario, the algorithm needs to iterate through each element of the input list or array until it finds the desired element. This means that the time it takes to complete the search grows linearly with the size of the input.


• 9.

### The complexity of the binary search algorithm is

• A.

O(n)

• B.

O(log n)

• C.

O(n²)

• D.

O(n log n)

B. O(log n)
Explanation
The correct answer is O(log n) because binary search algorithm divides the search space in half with each iteration, effectively reducing the number of elements to search by half. This logarithmic time complexity makes binary search very efficient for large datasets.
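The halving step can be sketched in a few lines of Python (illustrative; the quiz itself contains no code). Each iteration discards half of the remaining interval, so at most about log₂ n comparisons are needed:

```python
def binary_search(sorted_items, target):
    """Halve the search interval each step: O(log n) comparisons."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1             # target can only be in the right half
        else:
            hi = mid - 1             # target can only be in the left half
    return -1                        # interval is empty: not present
```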


• 10.

### The complexity of the Bubble sort algorithm is

• A.

O(n)

• B.

O(log n)

• C.

O(n²)

• D.

O(n log n)

C. O(n²)
Explanation
The complexity of the Bubble sort algorithm is O(n²) because it involves comparing and swapping adjacent elements in a list multiple times until the entire list is sorted. In the worst-case scenario, where the list is in reverse order, Bubble sort requires n-1 passes over the n elements, each pass making up to n-1 comparisons. This results in a time complexity of O(n²), making it inefficient for large lists.
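A minimal Python sketch of the passes described above (names are illustrative, not from the quiz):

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs: up to n - 1 passes
    of up to n - 1 comparisons each, hence O(n^2) in the worst case."""
    a = list(items)
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):   # the last i elements are already in place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:              # a clean pass means the list is sorted
            break
    return a
```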


• 11.

### The complexity of the merge sort algorithm is

• A.

O(n)

• B.

O(log n)

• C.

O(n²)

• D.

O(n log n)

D. O(n log n)
Explanation
The complexity of the merge sort algorithm is O(n log n) because it divides the input array into two halves, recursively sorts each half, and then merges the two sorted halves. The merging step takes O(n) time, and the recursion occurs log n times since the array is divided in half each time. Therefore, the overall time complexity is O(n log n).
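The divide-and-merge structure can be sketched as follows (a Python illustration; the quiz shows no code of its own). The recursion halves the input, giving about log n levels, and each level's merges touch every element once, giving O(n) work per level:

```python
def merge_sort(items):
    """Split in half (about log n levels); each level is merged in O(n)."""
    if len(items) <= 1:
        return list(items)           # a 0- or 1-element list is already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # O(n) merge of two sorted runs
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]     # append whichever run remains
```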


• 12.

### The indirect change of the values of a variable in one module by another module is called

• A.

internal change

• B.

inter-module change

• C.

side effect

• D.

Side-module update

C. side effect
Explanation
When the values of a variable in one module are changed indirectly by another module, it is referred to as a "side effect." This means that the action or behavior of one module has an unintended impact on another module, resulting in a change in the variable's value. Side effects can occur when modules interact and share data, and they can sometimes lead to unexpected or undesired consequences in the program's execution.
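A tiny Python illustration of the idea (the shared dictionary stands in for state shared between modules; all names are invented for the example):

```python
counter = {"calls": 0}               # state shared between "modules"

def log_message(message):
    """Returns a formatted string, but also silently mutates `counter`.
    The hidden write to shared state is the side effect."""
    counter["calls"] += 1
    return f"[{counter['calls']}] {message}"

log_message("first call")
log_message("second call")
# counter["calls"] is now 2, changed indirectly by code elsewhere
```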


• 13.

### Which of the following data structures is not a linear data structure?

• A.

Arrays

• B.

Linked lists

• C.

Both of these

• D.

None of these

D. None of these
Explanation
The given question asks for a data structure that is not linear. Both arrays and linked lists are examples of linear data structures, as they store data elements in a sequential manner. Therefore, the correct answer is "none of these" because both arrays and linked lists are linear data structures.


• 14.

### Which of the following data structures is a linear data structure?

• A.

Trees

• B.

Graphs

• C.

Arrays

• D.

None of these

C. Arrays
Explanation
Arrays are a linear data structure because they store elements in a sequential manner, where each element is accessed using its index. The elements in an array are stored contiguously in memory, allowing for efficient access and traversal. Unlike trees and graphs, which have a hierarchical or non-linear structure, arrays have a simple and straightforward organization. Therefore, arrays are the correct answer as a linear data structure.


• 15.

### The operation of processing each element in the list is known as

• A.

Sorting

• B.

Merging

• C.

Inserting

• D.

Traversal

D. Traversal
Explanation
Traversal is the correct answer because it refers to the process of accessing each element in a list or data structure, usually in a sequential manner. This operation does not involve any specific sorting, merging, or inserting of elements, but rather focuses on visiting and examining each item individually. Traversal is commonly used in algorithms and data structures to perform various operations on the elements, such as searching, printing, or modifying them.


• 16.

### Finding the location of the element with a given value is:

• A.

Traversal

• B.

Search

• C.

Sort

• D.

None of the above

B. Search
Explanation
The correct answer is "Search" because finding the location of an element with a given value involves searching through a data structure or collection to locate the desired element. Traversal refers to the process of accessing each element in a data structure, while sorting involves arranging elements in a specific order. None of the above options accurately describe the process of finding the location of an element with a given value.


• 17.

### Arrays are the best data structures

• A.

for relatively permanent collections of data

• B.

When the size of the structure and the data in the structure are constantly changing

• C.

For both of these situations

• D.

For none of these situations

A. for relatively permanent collections of data
Explanation
Arrays are best data structures for relatively permanent collections of data because arrays have a fixed size and are able to efficiently store and access elements at specific indices. This makes them suitable for situations where the size of the structure and the data in the structure are not constantly changing. Arrays provide direct access to elements, allowing for fast retrieval and modification of data. However, if the size of the structure or the data in the structure is constantly changing, other data structures like linked lists may be more appropriate.


• 18.

### Linked lists are best suited

• A.

for relatively permanent collections of data

• B.

When the size of the structure and the data in the structure are constantly changing

• C.

For both of these situations

• D.

For none of these situations

B. When the size of the structure and the data in the structure are constantly changing
Explanation
Linked lists are best suited for situations where the size of the structure and the data in the structure are constantly changing. This is because linked lists allow for efficient insertion and deletion of elements at any position, without the need to reallocate memory or shift existing elements. The dynamic nature of linked lists makes them ideal for scenarios where the size of the data structure is unpredictable or frequently modified.
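The O(1) insertion that makes linked lists attractive can be shown in a short Python sketch (names are illustrative; the quiz contains no code). Inserting only relinks two pointers; no elements are shifted and no memory is reallocated:

```python
class Node:
    """One cell of a singly linked list."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

def insert_after(node, value):
    """O(1) insertion: relink two pointers; nothing is shifted."""
    node.next = Node(value, node.next)

def to_list(node):
    """Collect the values by following the links."""
    out = []
    while node is not None:
        out.append(node.value)
        node = node.next
    return out

head = Node(1, Node(3))
insert_after(head, 2)                # list is now 1 -> 2 -> 3
```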


• 19.

### An array declaration need not give, implicitly or explicitly, information about

• A.

the name of array

• B.

the data type of array

• C.

the first data from the set to be stored

• D.

the index set of the array

C. the first data from the set to be stored
Explanation
When declaring an array, it is not necessary to provide information about the first data from the set to be stored. The declaration only needs to include the name of the array, the data type of the array, and the index set of the array. The first data from the set to be stored can be assigned later when initializing the array or when assigning values to specific indices of the array.


• 20.

### The elements of an array are stored successively in memory cells because

• A.

This way the computer needs to keep track of only the address of the first element; the addresses of the other elements can be computed from it

• B.

The architecture of computer memory does not allow arrays to be stored other than serially

• C.

Both of these

• D.

None of these

A. This way the computer needs to keep track of only the address of the first element; the addresses of the other elements can be computed from it
Explanation
When the elements of an array are stored successively in memory cells, the computer only needs to keep track of the address of the first element. By knowing the address of the first element, the computer can easily calculate the addresses of the other elements in the array by using the size of each element. This allows for efficient memory management and access to array elements.
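The address arithmetic behind this is simple enough to write down (a Python sketch; the concrete base address and element size are hypothetical, chosen only for the example):

```python
def element_address(base_address, index, element_size):
    """Address of a[index] = base + index * size: knowing only the base
    address (plus the element size) is enough to locate every element."""
    return base_address + index * element_size

# A hypothetical array of 4-byte integers starting at address 1000:
addr = element_address(1000, 3, 4)   # a[3] would live at 1012
```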


• 21.

### The memory address of the first element of an array is called

Base address
Explanation
The memory address of the first element of an array is called the base address. This is because the base address serves as the starting point or foundation for accessing the elements of the array. It is the reference point from which the positions of other elements in the array are calculated.


• 22.

### Which of the following data structures are indexed structures?

• A.

linear arrays

• B.

Linked lists

• C.

Both

• D.

None

A. linear arrays
Explanation
Linear arrays are indexed structures because elements in a linear array can be accessed directly using their index values. Each element in the array is assigned a unique index starting from 0, allowing for efficient and direct access to any element in the array. On the other hand, linked lists are not indexed structures as each element in a linked list only contains a reference to the next element, making it necessary to traverse the list sequentially to access a specific element.


• 23.

### Which of the following is not the required condition for binary search algorithm?

• A.

The list must be sorted

• B.

There should be direct access to the middle element in any sublist

• C.

There must be a mechanism to delete and/or insert elements in the list

• D.

None of these

C. There must be a mechanism to delete and/or insert elements in the list
Explanation
Binary search algorithm requires the list to be sorted and there should be direct access to the middle element in any sublist. However, it does not require a mechanism to delete and/or insert elements in the list. The algorithm is based on repeatedly dividing the search space in half, so it only needs to access and compare elements in the list, not modify or update them.


• 24.

### Which of the following is not a limitation of binary search algorithm?

• A.

Must use a sorted array

• B.

The requirement of a sorted array is expensive when a lot of insertions and deletions are needed

• C.

There must be a mechanism to access the middle element directly

• D.

The binary search algorithm is not efficient when the data elements are more than 1000

D. The binary search algorithm is not efficient when the data elements are more than 1000
Explanation
Option D is not a genuine limitation: binary search is in fact at its most useful on large datasets, because its O(log n) running time grows very slowly as the number of elements increases. The other three options describe real constraints of the algorithm: the array must be sorted, keeping it sorted is expensive when many insertions and deletions are needed, and there must be direct access to the middle element of any sublist.


• 25.

### Two-dimensional arrays are also called

• A.

Table arrays

• B.

matrix arrays

• C.

Both

• D.

None

C. Both
Explanation
Two-dimensional arrays store elements in rows and columns, so they are commonly referred to as both table arrays and matrix arrays.