DT PH II Practice 1

By Arafatkazi, Community Contributor
  • 1. 

    Data when processed becomes Information

    • True

    • False

    Correct Answer
    A. True
About This Quiz

DT PH II Practice 1 is designed to assess understanding of data management principles, focusing on data quality, processing, and tools. It evaluates the transition of data to information and emphasizes best practices in data quality management.

DT PH II Practice 1 - Quiz

Quiz Preview

  • 2. 

    Data quality audit provides traceability between original and corrected values.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Data quality audit is a process that ensures the accuracy and reliability of data. It involves examining data for errors, inconsistencies, and completeness. By conducting a data quality audit, organizations can trace the origin of data and compare it with the corrected values. This helps in identifying the source of errors and discrepancies, enabling organizations to make necessary corrections and improvements. Therefore, the statement that data quality audit provides traceability between original and corrected values is true.
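    As an illustration of that traceability, here is a minimal sketch (the record ID, field name, and rule label are hypothetical) that logs every correction next to the value it replaced:

```python
# Minimal sketch of a data quality audit trail: each correction is logged with
# the original value it replaced, so original -> corrected stays traceable.
from datetime import datetime, timezone

audit_trail = []   # in a real system this would be an audit table, not a list

def correct_value(record_id, field, original, corrected, rule):
    """Apply a correction and record it for later audit."""
    audit_trail.append({
        "record_id": record_id,
        "field": field,
        "original": original,
        "corrected": corrected,
        "rule": rule,                     # which cleansing rule fired
        "corrected_at": datetime.now(timezone.utc).isoformat(),
    })
    return corrected

corrected = correct_value(101, "country", "U.S.A", "US", rule="ISO country-code standardisation")
print(audit_trail[0]["original"], "->", audit_trail[0]["corrected"])   # U.S.A -> US
```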


  • 3. 

    Tracing involves audit trails between deleted and surviving customers

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Tracing refers to the process of establishing connections or links between deleted customers and the ones that still exist. It involves creating an audit trail to track the activities and interactions of these customers. Therefore, the statement "Tracing involves audit trails between deleted and surviving customers" is true.


  • 4. 

    Data masking and mask pattern analysis are used in substituting string patterns

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Data masking and mask pattern analysis are indeed used in substituting string patterns. Data masking is a technique used to protect sensitive data by replacing it with fictitious but realistic data. It helps to ensure that the original data is not exposed to unauthorized individuals. Mask pattern analysis, on the other hand, involves identifying and analyzing patterns in the masked data to ensure that it follows the desired format and structure. Both of these techniques are commonly employed in data security and privacy measures.
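    A minimal sketch of both ideas using only Python's re module (the patterns and sample values are illustrative, not taken from any particular masking tool):

```python
import re

def mask_card_number(value: str) -> str:
    """Data masking: replace the first 12 digits of a 16-digit card number."""
    return re.sub(r"\b(\d{12})(\d{4})\b", lambda m: "X" * 12 + m.group(2), value)

def mask_pattern(value: str) -> str:
    """Mask pattern analysis: reduce a string to its character-class pattern."""
    return re.sub(r"[A-Za-z]", "A", re.sub(r"\d", "9", value))

print(mask_card_number("4111111111111111"))   # XXXXXXXXXXXX1111
print(mask_pattern("AB-1234"))                # AA-9999
```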


  • 5. 

    Customer merging is matching the best attribute into the surviving records from duplicate records

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Customer merging is the process of combining or consolidating duplicate customer records into a single, accurate record. This involves identifying and matching the best attributes or information from each duplicate record and merging them into the surviving record. By doing so, businesses can eliminate duplicate data, improve data quality, and ensure that customer information is up to date and accurate. Therefore, the statement "Customer merging is matching the best attribute into the surviving records from duplicate records" is true.
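    A minimal sketch of that survivorship idea (the field names and the "latest non-empty value wins" rule are illustrative):

```python
duplicates = [
    {"name": "J. Smith",   "phone": "",         "email": "js@example.com", "updated": "2023-01-10"},
    {"name": "John Smith", "phone": "555-0101", "email": "",               "updated": "2023-06-02"},
]

def merge_best(records, fields):
    """Build the surviving record: per field, keep the latest non-empty value."""
    ordered = sorted(records, key=lambda r: r["updated"])   # oldest first
    survivor = {}
    for field in fields:
        for rec in ordered:            # later records overwrite earlier ones,
            if rec.get(field):         # but only with non-empty values
                survivor[field] = rec[field]
    return survivor

print(merge_best(duplicates, ["name", "phone", "email"]))
# {'name': 'John Smith', 'phone': '555-0101', 'email': 'js@example.com'}
```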


  • 6. 

    Data quality (MDM) involves avoiding overheads while preparing the DW.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Data quality (MDM) is indeed important in avoiding overheads while preparing the data warehouse (DW). Data quality refers to the accuracy, completeness, consistency, and reliability of data, and it plays a crucial role in ensuring that the data used in the DW is reliable and trustworthy. By implementing Master Data Management (MDM) practices, organizations can improve data quality by ensuring that master data is accurate, consistent, and up-to-date. This, in turn, helps to avoid unnecessary costs and inefficiencies associated with poor data quality, ultimately leading to a more effective and efficient data warehouse.


  • 7. 

    DataStage is an ETL tool

    • True

    • False

    Correct Answer
    A. True
    Explanation
    DataStage is indeed an ETL (Extract, Transform, Load) tool. ETL tools are used to extract data from various sources, transform it into a suitable format, and load it into a target system or database. DataStage is specifically designed for this purpose, allowing users to create data integration jobs that extract data from different sources, apply transformations, and load it into a target database or data warehouse. Therefore, the correct answer is true.


  • 8. 

    If a primary key uses multiple columns to identify a record then it is known as compound key

    • True

    • False

    Correct Answer
    A. True
    Explanation
    A compound key is used when multiple columns are combined to uniquely identify a record in a database table. This is useful when a single column cannot uniquely identify a record. Therefore, if a primary key uses multiple columns, it is known as a compound key. Hence, the given statement is true.
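    A small sketch of the idea outside SQL (the table and column names are made up): an order line is only unique when order_id and line_no are taken together.

```python
order_lines = {}            # compound key (order_id, line_no) -> row

def insert_line(order_id, line_no, product, qty):
    key = (order_id, line_no)           # two columns together identify the row
    if key in order_lines:
        raise ValueError(f"duplicate compound key {key}")
    order_lines[key] = {"product": product, "qty": qty}

insert_line(1001, 1, "Widget", 3)
insert_line(1001, 2, "Gadget", 1)       # same order, different line -> allowed
print(order_lines[(1001, 2)])           # {'product': 'Gadget', 'qty': 1}
```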


  • 9. 

    The default sort for an attribute can be set in the attribute definition itself.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    The given statement is true because when defining an attribute, it is possible to specify the default sorting order for that attribute. This allows for automatic sorting of data based on the attribute without the need for additional sorting instructions.


  • 10. 

An expression combining two different fact columns in a table (e.g., sales – discount) can be set as a fact expression

    • True

    • False

    Correct Answer
    A. True
    Explanation
    In a table, it is possible to combine two different fact columns, such as sales and discount, into a single fact expression. This can be done to calculate the net sales amount after applying the discount. Therefore, the statement is true.


  • 11. 

Tablespaces span across containers, and tables can span across tablespaces

    • True

    • False

    Correct Answer
    A. True
    Explanation
    This statement is true because tablespaces in a database can span across multiple containers. A container is a physical storage unit that can be a file or a disk. By spanning across multiple containers, tablespaces can utilize the available storage space efficiently. Additionally, tables within a database can also span across multiple tablespaces. This allows for better management of data and enables partitioning and distribution of tables across different tablespaces based on specific requirements.


  • 12. 

Evaluate data quality before building a fully fledged data warehouse

    • True

    • False

    Correct Answer
    A. True
    Explanation
    From Data Quality


  • 13. 

    Customer matching is done with Fuzzy and intelligent logic.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Customer matching is done with fuzzy and intelligent logic, which means that it is not a straightforward and exact process. Fuzzy logic allows for a degree of uncertainty and imprecision in the matching process, taking into account similarities and patterns rather than strict criteria. Intelligent logic implies that the matching system is capable of learning and adapting over time, becoming more accurate and efficient in identifying the right customers for a particular product or service. Therefore, the statement "Customer matching is done with fuzzy and intelligent logic" is true.
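    A minimal sketch of fuzzy matching using the standard-library difflib (real matching engines layer phonetic and probabilistic rules on top of this):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Case-insensitive similarity score between 0.0 and 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

candidates = ["John Smith", "Jon Smyth", "Jane Doe"]
query = "Johnn Smith"                      # misspelled input

scores = sorted(((similarity(query, c), c) for c in candidates), reverse=True)
print(scores[0])   # highest score is 'John Smith', despite the typo
```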


  • 14. 

    MDM is maintained at organizational level

    • True

    • False

    Correct Answer
    A. True
    Explanation
    The statement "MDM is maintained at organizational level" is true. Master Data Management (MDM) refers to the process of creating and managing a single, consistent, and accurate version of an organization's critical data. MDM is typically implemented and maintained at the organizational level to ensure that all departments and systems within the organization have access to and use the same reliable data. By centralizing the management of master data, organizations can improve data quality, reduce data inconsistencies, and enhance decision-making processes.


  • 15. 

Which tool extracts data from textual sources?

    • Conversion

    • Mark-Up

    • Extraction

    Correct Answer
    A. Extraction
    Explanation
    Extraction is the correct answer because it refers to the process of retrieving or extracting data from textual sources. This tool is used to gather information from various text-based documents, such as websites, articles, reports, or social media posts. Extraction tools typically analyze the text and identify relevant data based on specific criteria or patterns. This extracted data can then be further processed, analyzed, or stored for various purposes such as data mining, business intelligence, or research.
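    A minimal sketch of rule-based extraction from free text (the patterns and sample sentence are illustrative):

```python
import re

text = "Invoice INV-2041 issued on 2024-03-21 to contact sales@example.com."

invoice_ids = re.findall(r"\bINV-\d+\b", text)
dates       = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text)
emails      = re.findall(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}", text)

print(invoice_ids, dates, emails)
# ['INV-2041'] ['2024-03-21'] ['sales@example.com']
```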


  • 16. 

    Crosswalk allows metadata created by one user to be used by another

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Crosswalk allows metadata created by one user to be used by another. This means that if one user creates metadata for a specific purpose, another user can access and utilize that metadata for their own purposes. This allows for the sharing and reusability of metadata, promoting collaboration and efficiency among users.


  • 17. 

    Cache size can be changed in DS Administrator

    • True

    • False

    Correct Answer
    A. True
    Explanation
    The given statement is true because in DS Administrator, the cache size can be modified or adjusted. The DS Administrator is a tool used for managing and configuring various aspects of a system, including the cache. By accessing the DS Administrator, users can change the cache size to optimize performance and storage capacity based on their specific needs and requirements.


  • 18. 

Hierarchies in MicroStrategy are

    • System Hierarchy

    • User Hierarchy

    • None

    Correct Answer(s)
    A. System Hierarchy
    A. User Hierarchy
    Explanation
    In MicroStrategy, hierarchies are used to organize and structure data in a logical manner. The two types of hierarchies mentioned, System Hierarchy and User Hierarchy, are commonly used in MicroStrategy. System Hierarchy refers to the default hierarchy created by the system based on the attributes and their relationships in the data model. User Hierarchy, on the other hand, allows users to create their own custom hierarchies based on their specific needs and preferences. Therefore, the correct answer includes both System Hierarchy and User Hierarchy as the types of hierarchies in MicroStrategy.


  • 19. 

    Rule repository contains Databases or Flat Files

    • True

    • False

    Correct Answer
    A. True
    Explanation
    The rule repository contains databases or flat files. This means that the repository is used to store and manage rules, which can be stored in either a database or a flat file format. This allows for easy access, retrieval, and management of the rules within the repository. Therefore, the statement "True" is correct.


  • 20. 

    Bad quality data affects concurrency and performance.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Bad quality data refers to data that is inaccurate, incomplete, inconsistent, or outdated. When dealing with bad quality data, it can lead to issues with concurrency and performance. Concurrency refers to the ability of multiple users to access and manipulate data at the same time. If the data is of poor quality, it can cause conflicts and inconsistencies when multiple users try to access and modify it simultaneously. This can lead to data corruption and hinder the overall performance of the system. Therefore, it is true that bad quality data affects concurrency and performance.


  • 21. 

    Types of BI Metadata

    • A. OLAP Metadata

    • B. Reporting Metadata

    • C. Data Mining Metadata

    Correct Answer(s)
    A. OLAP Metadata
    B. Reporting Metadata
    C. Data Mining Metadata
    Explanation
    The correct answer is a, b, and c because these are all types of BI metadata. OLAP metadata refers to the metadata used in online analytical processing, which involves analyzing multidimensional data. Reporting metadata is used in generating reports and includes information about data sources, report layouts, and filters. Data mining metadata is used in the process of discovering patterns and relationships in large datasets. These three types of metadata are essential components of a business intelligence system, as they help in organizing and understanding data for analysis and reporting purposes.


  • 22. 

    Household matching is for 

    • Business

    • Product

    • Customer

    • None of the above.

    Correct Answer
    A. Customer
    Explanation
    Household matching refers to the process of matching customer data with household data to identify and group individuals who belong to the same household. This is done to gain a better understanding of customer behavior, preferences, and demographics, which can be valuable for businesses in targeting their marketing efforts and providing personalized experiences. Therefore, the correct answer is customer as household matching is primarily focused on identifying and analyzing customers within a household.
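    A minimal sketch of the idea, grouping customers into households by a crudely normalised address key (the names and addresses are invented):

```python
from collections import defaultdict

customers = [
    {"name": "Asha Rao",   "address": "12 Park Lane, Pune"},
    {"name": "Vikram Rao", "address": "12, park lane , Pune"},
    {"name": "Li Wei",     "address": "7 Hill Road, Mumbai"},
]

def address_key(addr: str) -> str:
    """Crude normalisation: lowercase, drop commas and extra spaces."""
    return " ".join(addr.lower().replace(",", " ").split())

households = defaultdict(list)
for c in customers:
    households[address_key(c["address"])].append(c["name"])

for key, members in households.items():
    print(key, "->", members)
# '12 park lane pune' -> ['Asha Rao', 'Vikram Rao']   (one household)
# '7 hill road mumbai' -> ['Li Wei']
```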


  • 23. 

    Reports can run with only attributes on the template (and no metrics).

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Reports can run with only attributes on the template (and no metrics) because attributes provide the dimensions or categories by which data is organized, while metrics provide the quantitative measures or calculations based on those dimensions. By using attributes alone, the report can still display and analyze data based on different categories or dimensions without any specific calculations or quantitative measures. This allows for a more descriptive and categorical analysis of the data.


  • 24. 

    Different tablespaces have different page sizes

    • True

    • False

    Correct Answer
    A. True
    Explanation
    In a database management system, a tablespace is a logical storage container that holds various database objects such as tables, indexes, and views. Each tablespace can have its own specific page size, which determines the size of the data blocks used to store data on disk. This allows for flexibility in optimizing storage and performance based on the specific needs of different database objects. Therefore, it is true that different tablespaces can have different page sizes.


  • 25. 

Block indexes on multiple columns produce

    • Multidimensional Clusters

    • Clusters

    • Blocks

    Correct Answer
    A. Multidimensional Clusters
    Explanation
    The correct answer is multidimensional Clusters. When block indexes are created for multiple columns, it allows for the creation of multidimensional clusters. This means that the data is organized and stored in a way that allows for efficient retrieval and analysis of data across multiple dimensions. This can be particularly useful in situations where data needs to be analyzed and compared across different attributes or variables. By using multidimensional clusters, it becomes easier to navigate and query the data, leading to improved performance and accuracy in data analysis.


  • 26. 

    Trillium server process requires 

    • Input Structure(DLL file)

    • Output structure (DLL file)

    • Parameter file (PAR file)

    Correct Answer(s)
    A. Input Structure(DLL file)
    A. Output structure (DLL file)
    A. Parameter file (PAR file)
    Explanation
    The Trillium server process requires an Input Structure (DLL file), an Output Structure (DLL file), and a Parameter file (PAR file). These files are necessary for the Trillium server process to function properly. The Input Structure (DLL file) contains the necessary data and instructions for the server process to process the input data. The Output Structure (DLL file) defines the format and structure of the output data generated by the server process. The Parameter file (PAR file) contains the configuration settings and parameters that govern the behavior of the server process.


  • 27. 

    In which of the following stages a job cannot be run?

    • Abort

    • Compiled

    • Compiled

    Correct Answer
    A. Abort
    Explanation
    The stage where a job cannot be run is the "Abort" stage. This is because when a job is aborted, it is forcefully terminated and cannot be executed further. The other stages mentioned, namely "Compiled" and "Compiled", do not necessarily imply that a job cannot be run. However, it is worth noting that the repetition of "Compiled" in the options may indicate an error or incomplete question.


  • 28. 

    Types of actions in hierarchy display

    • Locked

    • Limited

    • Entry point

    • Filtered

    • All the options

    Correct Answer
    A. All the options
    Explanation
    The correct answer is "All the options" because the question is asking about the types of actions in a hierarchy display. The options listed - Locked, Limited, Entry point, and Filtered - are all valid types of actions that can be present in a hierarchy display. Therefore, the answer is that all of the options listed are types of actions in a hierarchy display.


  • 29. 

    Data quality does not refer to

    • Volume

    • Accuracy

    • Consistency

    • Integrity

    Correct Answer
    A. Volume
    Explanation
    Data quality refers to the accuracy, consistency, and integrity of the data. It ensures that the data is reliable, complete, and free from errors or inconsistencies. However, volume does not fall under the category of data quality. Volume refers to the amount or quantity of data, and while it is important to manage and analyze large volumes of data effectively, it is not directly related to the quality of the data itself.


  • 30. 

    The rules of cleansing are embedded in Trillium’s 

    • Parameter file (PAR).

    • Output structure (DLL file)

    • Input structure (DLL file)

    Correct Answer
    A. Parameter file (PAR).
    Explanation
    The correct answer is the Parameter file (PAR). The explanation for this is that the rules of cleansing are embedded in the Parameter file (PAR). This means that the Parameter file contains the specific instructions and guidelines for how data should be cleansed. It likely includes information on what types of data should be removed or corrected, as well as any specific algorithms or processes that should be followed. The Output structure (DLL file) and Input structure (DLL file) are not directly related to the rules of cleansing, so they are not the correct answer.


  • 31. 

Reasons for poor quality of data

    • Careless / Inaccurate data entry

    • No stringent rules or processes followed to validate the data entry

    • Lack of Master Data Management strategy

    Correct Answer(s)
    A. Careless / Inaccurate data entry
    A. No stringent rules or processes followed to validate the data entry
    A. Lack of Master Data Management strategy
    Explanation
    The poor quality of data can be attributed to several factors. One reason is careless or inaccurate data entry, where individuals responsible for inputting data may make mistakes or not pay attention to detail. Another factor is the absence of stringent rules or processes to validate the data entry, which allows for errors to go unnoticed. Additionally, the lack of a Master Data Management strategy contributes to poor data quality as there is no systematic approach to ensure data accuracy, consistency, and integrity.


  • 32. 

    A filter qualification can combine

    • Attribute qualification and metric qualification only

    • Attribute qualification and report as filter only

    • Attribute qualification, report as filter and relationship filter only

    • Attribute qualification, metric qualification and relationship filter only

    • Attribute qualification, metric qualification, report as filter and relationship in any combination

    Correct Answer
    A. Attribute qualification, metric qualification, report as filter and relationship in any combination
    Explanation
    A filter qualification can combine attribute qualification, metric qualification, report as filter, and relationship in any combination. This means that a filter can be created using one or more attributes, metrics, reports, and relationships. It allows for flexibility in filtering data based on specific attributes, metrics, reports, and relationships, enabling more precise and customized data analysis.


  • 33. 

Updating the schema is required when we make changes in __________________

    • Attributes

    • Facts

    • Hierarchies

    • All the options

    Correct Answer
    A. All the options
    Explanation
    Updating the schema is required when we make changes in attributes, facts, and hierarchies. This is because these components are essential for defining the structure and organization of a database. When any modifications are made to these elements, the schema needs to be updated to reflect these changes accurately. Therefore, updating the schema is necessary when changes are made to any of these options.


  • 34. 

    Default page size in DB2?

    • 4 KB

    • 2 KB

    • 8 KB

    • 16 KB

    Correct Answer
    A. 4 KB
    Explanation
    The default page size in DB2 is 4 KB. This means that the data in DB2 is stored in pages, and each page has a size of 4 KB. This page size is commonly used because it strikes a balance between efficient storage and efficient retrieval of data. Smaller page sizes would result in more pages and potentially slower performance, while larger page sizes would result in wasted space if the data does not fill up the entire page. Therefore, 4 KB is a commonly used default page size in DB2.


  • 35. 

The number of CPUs supported by DB2 Enterprise Edition

    • 2 CPU

    • 4 CPU

    • 18 CPU

    • No Limit

    Correct Answer
    A. No Limit
    Explanation
    The given answer "No Limit" suggests that there is no maximum or set limit on the number of CPUs that can be used in DB2 Enterprise edition. This means that users can utilize as many CPUs as they require, based on their specific needs and system capabilities.


  • 36. 

    Survivorship is a concept used in 

    • Data de-duplication

    • Cleansing

    • Enrichment

    • None

    Correct Answer
    A. Data de-duplication
    Explanation
    Survivorship is a concept used in data de-duplication. Data de-duplication is the process of identifying and removing duplicate data entries from a dataset. Survivorship refers to the process of selecting the most accurate and reliable data entry among the duplicates to be retained in the dataset, while discarding the rest. This ensures that only the most relevant and correct information is retained, improving data quality and reducing storage space requirements.


  • 37. 

    Not a DB2 license method

    • User

    • CPU

    • Memory

    Correct Answer
    A. Memory
    Explanation
    DB2 is typically licensed either per authorized user or by processor (CPU) capacity, so both User and CPU are valid licensing methods. Memory is not a DB2 licensing metric, which is why it is the correct answer.


  • 38. 

During which of the following operations is data not modified?

    • Data profiling

    • Data cleansing

    • Data enrichment

    Correct Answer
    A. Data profiling
    Explanation
    During data profiling, the focus is on analyzing and understanding the data, rather than modifying it. Data profiling involves examining the quality, structure, and content of the data to gain insights and identify any issues or anomalies. This process helps in understanding the data's characteristics, such as its completeness, accuracy, and consistency. Unlike data cleansing and data enrichment, data profiling does not involve making changes or additions to the data. Instead, it aims to provide a comprehensive overview of the data, enabling better decision-making and data management.
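    A minimal sketch of profiling a single column: the values are only inspected (counts, nulls, distinct values, range), never changed. The sample column is illustrative.

```python
ages = [34, 29, None, 41, 29, None, 57]   # illustrative column with missing values

profile = {
    "row_count":      len(ages),
    "null_count":     sum(1 for v in ages if v is None),
    "distinct_count": len({v for v in ages if v is not None}),
    "min":            min(v for v in ages if v is not None),
    "max":            max(v for v in ages if v is not None),
}
print(profile)
# {'row_count': 7, 'null_count': 2, 'distinct_count': 4, 'min': 29, 'max': 57}
```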


  • 39. 

    Metadata should be maintained even when 

    • Base resource changes

• If two sources merge together

    • Base source is deleted

    Correct Answer(s)
    A. Base resource changes
    A. If two sources merge together
    A. Base source is deleted
    Explanation
    Metadata should be maintained even when the base resource changes because the metadata provides important information about the resource, such as its origin, format, and any restrictions or permissions associated with it. This ensures that the metadata remains accurate and up-to-date, allowing users to effectively search, retrieve, and use the resource. Similarly, when two sources merge together, it is important to maintain the metadata from both sources to preserve the integrity and completeness of the merged data. Lastly, even if the base source is deleted, the metadata should still be retained to provide historical context and reference for any data or resources that were derived from or linked to the base source.


  • 40. 

Are multiple selections possible in DataStage? 

    • Not Possible

    • Yes Possible

    Correct Answer
    A. Yes Possible
    Explanation
    Multiple selections are possible in DataStage. This means that users can select and process multiple data elements or records simultaneously. This allows for efficient and streamlined data processing, as it eliminates the need for repetitive manual selection and processing of individual data elements. Users can select multiple data elements based on specific criteria or conditions, and perform actions such as transformation, filtering, or integration on the selected data elements as a group.


  • 41. 

Clean-up will not affect which phase?

    • Acquisition

    • Application

    • Cleanup

    • None

    Correct Answer
    A. Acquisition
    Explanation
    The question is asking which phase will not be affected by the clean-up. Clean-up is a process of removing unnecessary or unwanted elements. In the context of the given options, acquisition refers to the phase of obtaining or acquiring something. Clean-up is not related to the acquisition phase, as it focuses on organizing and removing unnecessary elements rather than obtaining something new. Therefore, the clean-up will not affect the acquisition phase.


  • 42. 

    The maximum number of attributes that can be set as parent to another attribute is

    • No Limit

    • One

    • Two

    • Three

    Correct Answer
    A. No Limit
    Explanation
    There is no limit to the number of attributes that can be set as a parent to another attribute. This means that an attribute can have any number of parent attributes.


  • 43. 

Data cleansing and standardization will be taken care of by

    • Data Quality Tools

    • Data Profiling Tools

    • Metadata Tools

    Correct Answer
    A. Data Quality Tools
    Explanation
    Data quality tools are specifically designed to identify and correct errors, inconsistencies, and inaccuracies in data. They help in cleansing and standardizing the data by removing duplicate entries, validating data against predefined rules, and ensuring data integrity. These tools can also perform various data enrichment techniques to enhance the overall quality of the data. Therefore, it is logical to conclude that data quality tools will be responsible for data cleansing and standardization.


  • 44. 

    The best practice in data quality is 

    • Fixing data quality issues in ETL

    • Fixing data quality issues in ODS

    • Fixing data quality issues in Source

    • Fixing data quality issues in DW

    Correct Answer
    A. Fixing data quality issues in Source
    Explanation
    From Data Quality


  • 45. 

    During the de-duplication process

    • Delete the original values since they consume space

    • Keep the original values in trail tables

    • Do not disturb the original values and place the new values in new tables

    • None

    Correct Answer
    A. Keep the original values in trail tables
    Explanation
    During the de-duplication process, the original values are kept in trail tables. This means that instead of deleting the original values or placing new values in new tables, the original values are preserved. This allows for a record of the original values to be maintained while still removing any duplicate entries. Keeping the original values in trail tables can be useful for auditing purposes or for historical reference.


  • 46. 

Steps to avoid poor quality data

    • Set stringent rules in validation process; if not, then in ETL process

    • De-duplication

    • Provide feedback about quality of data to source and ask source to correct and resend them

    Correct Answer(s)
    A. Set stringent rules in validation process; if not, then in ETL process
    A. De-duplication
    A. Provide feedback about quality of data to source and ask source to correct and resend them
    Explanation
    The answer suggests three steps to avoid poor quality data. The first step is to set stringent rules in the validation process, and if not possible, then in the ETL (Extract, Transform, Load) process. This ensures that data is thoroughly checked and validated before being used. The second step is de-duplication, which involves removing any duplicate or redundant data entries. This helps in maintaining data integrity and accuracy. The third step is to provide feedback about the quality of data to the source and request them to correct and resend the data. This ensures that the source takes responsibility for the quality of the data they provide.
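    A minimal sketch of the first and third steps: stringent validation rules reject bad rows, and the rejects (with reasons) are what gets fed back to the source. The field names and rules below are illustrative.

```python
import re

def validate(row):
    """Return a list of rule violations; an empty list means the row is accepted."""
    errors = []
    if not row.get("customer_id"):
        errors.append("missing customer_id")
    if not re.fullmatch(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}", row.get("email", "")):
        errors.append("invalid email")
    if not (0 < row.get("age", 0) < 120):
        errors.append("age out of range")
    return errors

rows = [
    {"customer_id": "C1", "email": "a@b.com",      "age": 30},
    {"customer_id": "",   "email": "not-an-email", "age": 250},
]
accepted = [r for r in rows if not validate(r)]
rejected = [(r, validate(r)) for r in rows if validate(r)]   # feed back to source
print(len(accepted), "accepted;", "rejected reasons:", rejected[0][1])
```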


  • 47. 

Order of execution in DataStage is

    • Stage variable then Constraints then Derivations

    • Derivations then Stage variable then Constraints

    • Constraints then Derivations then Stage variable

    Correct Answer
    A. Stage variable then Constraints then Derivations
    Explanation
    The correct answer is "Stage variable then Constraints then Derivations." In Datastage, the order of execution is important for proper data processing. Stage variables are evaluated first, as they are used to store intermediate values during the data transformation process. Constraints are then applied to filter the data based on certain conditions. Finally, derivations are performed to calculate new values or modify existing ones. This sequence ensures that the stage variables are available for use in constraints and derivations, allowing for accurate data manipulation.


  • 48. 

    In two tier architecture, how many ODBC connections are there? 

    • 2

    • 5

    • 1

    • No such rules

    Correct Answer
    A. 2
    Explanation
    In a two-tier (direct) setup, such as MicroStrategy without an Intelligence Server, the client application connects straight to the databases rather than going through a middle tier. Two ODBC connections are therefore needed: one to the metadata repository and one to the data warehouse. Hence the answer is 2.


  • 49. 

Fetching data from hard disk to buffer pool is known as pre-fetching

    • True

    • False

    Correct Answer
    A. True
    Explanation
    The statement is true because pre-fetching refers to the process of fetching data from the hard disk to the buffer pool in advance, anticipating that it will be needed in the near future. This helps to improve the overall performance of the system by reducing the time required to access the data when it is actually needed.
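    A toy sketch of the idea (the tiny page size and fake in-memory "disk" are illustrative; real DB2 prefetchers are far more sophisticated):

```python
import io

PAGE_SIZE = 4           # tiny pages just for the demo (DB2's default is 4 KB)
PREFETCH_PAGES = 3      # how many pages to read ahead on every miss

disk = io.BytesIO(b"ABCDEFGHIJKLMNOP")    # stand-in for a table on disk
buffer_pool = {}                          # page number -> page contents

def read_page(page_no):
    if page_no not in buffer_pool:                    # miss: go to "disk"
        disk.seek(page_no * PAGE_SIZE)
        for i in range(PREFETCH_PAGES):               # pre-fetch ahead in one pass
            data = disk.read(PAGE_SIZE)
            if not data:
                break
            buffer_pool[page_no + i] = data
    return buffer_pool[page_no]

print(read_page(0))          # b'ABCD' -- fetched from disk, pages 0-2 pre-fetched
print(1 in buffer_pool)      # True    -- the next page is already in memory
```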


Quiz Review Timeline (Updated): Mar 21, 2024


  • Current Version
  • Mar 21, 2024
    Quiz Edited by
    ProProfs Editorial Team
  • Jan 24, 2017
    Quiz Created by
    Arafatkazi