Metadata And Data Quality

By Arafatkazi, Community Contributor
  • 1. 

    Metadata is stored in a repository.

    • True

    • False
About This Quiz


Metadata is data that provides information about other data. Three distinct types of metadata exist: descriptive metadata, structural metadata, and administrative metadata. Metadata should be correct, complete, and relevant. Take the quiz to revise what you know about metadata and how to ensure it is of good quality. All the best!

Metadata And Data Quality - Quiz

Quiz Preview

  • 2. 

    RDBMS metadata is referred to as a catalog.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    In a relational database management system (RDBMS), metadata refers to the information about the database structure, such as the names and types of tables, columns, indexes, and constraints. This metadata is commonly referred to as a catalog. Therefore, the statement that RDBMS metadata is referred to as a catalog is true.


  • 3. 

    Data masking and mask pattern analysis are used in substituting string patterns

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Data masking and mask pattern analysis are indeed used in substituting string patterns. Data masking is a technique used to protect sensitive data by replacing it with fictitious data or masking characters. This helps in preventing unauthorized access to sensitive information. Mask pattern analysis, on the other hand, involves the identification and analysis of specific patterns within the data that need to be masked. By using these techniques, organizations can ensure the privacy and security of their data while still being able to use it for various purposes.
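
As a concrete illustration of substituting string patterns, here is a minimal Python sketch of rule-based masking; the regular expressions, mask strings, and sample record are assumptions for demonstration only, not the behavior of any particular masking tool.

```python
import re

# Illustrative masking rules: each sensitive pattern is replaced by a fixed mask.
# The patterns and mask strings here are assumptions for demonstration.
MASK_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "XXX-XX-XXXX"),   # SSN-like pattern
    (re.compile(r"\b\d{16}\b"), "################"),          # 16-digit card number
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<masked-email>"),
]

def mask_record(text: str) -> str:
    """Substitute every sensitive string pattern with its mask."""
    for pattern, mask in MASK_RULES:
        text = pattern.sub(mask, text)
    return text

if __name__ == "__main__":
    row = "Customer 123-45-6789 paid with 4111111111111111, contact joe@example.com"
    print(mask_record(row))
    # -> Customer XXX-XX-XXXX paid with ################, contact <masked-email>
```

Real masking tools pair such substitution rules with mask pattern analysis, which first profiles the data to discover which patterns need masking.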


  • 4. 

    Source system metadata is acquired during source system analysis.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    During the source system analysis, the process involves gathering metadata from the source system. This metadata includes information about the structure, format, and content of the data in the source system. It helps in understanding the data sources, their relationships, and the quality of the data. Therefore, it is true that source system metadata is acquired during the source system analysis.


  • 5. 

    Metadata should be merged if two sources are merged together.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    When two sources are merged together, it is important to merge their metadata as well. Metadata provides information about the data, such as its source, format, and other relevant details. By merging the metadata, the combined dataset will have a comprehensive and accurate representation of the merged sources. This ensures that the merged data is properly organized, searchable, and can be effectively used for analysis or other purposes. Therefore, merging metadata is necessary to maintain data integrity and maximize the usefulness of the merged dataset.


  • 6. 

    Metadata history should be maintained for changes even if the base source is deleted.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Maintaining metadata history for changes even if the base source is deleted is important because it allows for traceability and accountability. Even if the original source is no longer available, having a record of the metadata history can help in understanding the context and rationale behind the changes made. This can be useful for auditing purposes, data governance, and ensuring data integrity. Additionally, it provides a historical reference that can be valuable for future analysis or decision-making.


  • 7. 

    Data quality audit provides traceability between original and corrected values

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Data quality audit involves examining and evaluating the accuracy and reliability of data. One aspect of this process is to ensure traceability between the original and corrected values. This means that any changes or corrections made to the data can be traced back to their original values, allowing for transparency and accountability. Therefore, the statement that data quality audit provides traceability between original and corrected values is true.


  • 8. 

    Customer merging is merging the best attributes from duplicate records into the surviving record.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Customer merging is the process of combining duplicate records by selecting the best attribute from each duplicate and merging it into the surviving record. This helps to eliminate duplicate data and ensure that the most accurate and complete information is retained in the system. Therefore, the given statement is true.
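
To make the survivorship idea concrete, here is a minimal Python sketch assuming a "best attribute" rule of "longest non-empty value"; the records, field names, and the rule itself are illustrative assumptions rather than how any specific MDM tool works.

```python
# Minimal survivorship sketch: for each attribute, the "best" value from the
# duplicate records (here simply the longest non-empty one) is carried into
# the surviving record. Records and the "best value" rule are assumptions.
duplicates = [
    {"id": 101, "name": "J. Smith",   "phone": "",               "city": "Pune"},
    {"id": 102, "name": "John Smith", "phone": "+91-9876543210", "city": ""},
    {"id": 103, "name": "John S.",    "phone": "",               "city": "Pune, MH"},
]

def merge_duplicates(records, survivor_id):
    survivor = next(r for r in records if r["id"] == survivor_id).copy()
    for field in ("name", "phone", "city"):
        candidates = [r[field] for r in records if r[field]]
        if candidates:
            survivor[field] = max(candidates, key=len)  # pick the most complete value
    return survivor

print(merge_duplicates(duplicates, survivor_id=101))
# {'id': 101, 'name': 'John Smith', 'phone': '+91-9876543210', 'city': 'Pune, MH'}
```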


  • 9. 

    When metadata is hierarchically structured, it is called an ontology.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    When metadata is hierarchically structured, it means that the metadata is organized in a hierarchical manner, with different levels of information and relationships between them. This type of structure is commonly found in ontologies, which are formal representations of knowledge that capture concepts, relationships, and properties within a specific domain. Therefore, when metadata is hierarchically structured, it can be considered as an ontology.


  • 10. 

    Which tool is used to extract metadata from a textual source?

    • Extraction tool

    • Mark-Up tool

    • Conversion tool

    • Templates

    Correct Answer
    A. Extraction tool
    Explanation
    An extraction tool is used to extract metadata from textual sources. This tool is specifically designed to identify and extract relevant information from unstructured or semi-structured data. It helps in organizing and categorizing the extracted metadata, making it easier to analyze and utilize for various purposes such as data integration, data mining, or information retrieval.


  • 11. 

    Customer name matching is done by fuzzy and intelligent logic 

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Customer name matching is a process that involves comparing and matching customer names based on their similarity or closeness. This is typically done using fuzzy and intelligent logic algorithms that take into account various factors such as spelling variations, abbreviations, phonetic similarity, and other patterns. These algorithms are designed to intelligently identify and match customer names even if there are slight differences or variations in the way they are written or recorded. Therefore, the statement "Customer name matching is done by fuzzy and intelligent logic" is true.
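
A minimal sketch of fuzzy name matching using only the Python standard library is shown below; production data-quality tools add phonetic codes, nickname tables, and token reordering, and the 0.8 threshold here is an assumption.

```python
from difflib import SequenceMatcher

# Fuzzy name-matching sketch: similar spellings score close to 1.0.
def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def is_same_customer(name1: str, name2: str, threshold: float = 0.8) -> bool:
    # The 0.8 cut-off is an assumption; real tools tune this per data set.
    return similarity(name1, name2) >= threshold

print(is_same_customer("Jonathan Smith", "Jonathon Smith"))   # True  (spelling variation)
print(is_same_customer("Jonathan Smith", "Priya Sharma"))     # False
```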


  • 12. 

    Which tool is used to convert the format of metadata from one format to another?

    • Mark-up tool

    • Conversion Tool

    • Templates Tool

    • Extraction Tool

    Correct Answer
    A. Conversion Tool
    Explanation
    A conversion tool is used to convert the format of metadata from one format to another. This tool allows for the seamless transformation of metadata, ensuring compatibility and consistency across different systems and platforms. It simplifies the process of migrating data from one system to another by automatically converting the metadata format, saving time and effort. This tool is essential in maintaining data integrity and ensuring that metadata can be effectively utilized and understood in different environments.
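
To illustrate what a conversion tool does, here is a minimal sketch that converts one metadata record from XML to JSON; the element names and values are hypothetical, and a real conversion tool would handle full schemas rather than a single record.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical metadata arriving in XML form.
source_xml = """
<dataset>
  <title>Customer Master</title>
  <owner>Sales Ops</owner>
  <updated>2016-11-02</updated>
</dataset>
"""

# Convert the XML elements into an equivalent JSON document.
root = ET.fromstring(source_xml)
as_json = json.dumps({child.tag: child.text for child in root}, indent=2)
print(as_json)
# {
#   "title": "Customer Master",
#   "owner": "Sales Ops",
#   "updated": "2016-11-02"
# }
```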


  • 13. 

    Metadata storage formats are 

    • Human-readable (XML)

    • Non-human-readable (binary)

    • Both XML and Binary

    • None

    Correct Answer
    A. Both XML and Binary
    Explanation
    The correct answer is Both XML and Binary. This means that metadata storage formats can be both in human-readable form (such as XML) and in non-human-readable form (such as binary). XML allows for easy interpretation and editing by humans, while binary formats are more efficient for storing large amounts of data and are not easily readable by humans.
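
The difference between the two storage formats can be seen in a small sketch that writes the same metadata record both ways; the field names are assumptions, and pickle merely stands in for whatever binary serialization a real repository would use.

```python
import pickle
import xml.etree.ElementTree as ET

# The same metadata record stored two ways. Field names are assumptions.
meta = {"table": "CUSTOMER", "column": "EMAIL", "type": "VARCHAR(120)", "nullable": "N"}

# Human-readable form (XML).
root = ET.Element("column_metadata")
for key, value in meta.items():
    ET.SubElement(root, key).text = value
xml_bytes = ET.tostring(root, encoding="utf-8")
print(xml_bytes.decode())          # readable markup

# Non-human-readable form (binary blob).
binary_blob = pickle.dumps(meta)
print(binary_blob[:20])            # opaque bytes: compact, but not readable
```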


  • 14. 

    Metadata storage types are

    • Internal Storage

    • External Storage

    • External Storage & Internal Storage

    • None

    Correct Answer
    A. External Storage & Internal Storage
    Explanation
    The correct answer is "External Storage & Internal Storage" because metadata storage can be done in both internal and external storage. Internal storage refers to the storage space within the device itself, such as the device's memory or hard drive. External storage, on the other hand, refers to storage devices that are connected to the device externally, such as USB drives or SD cards. Therefore, metadata can be stored in either the internal storage or external storage depending on the specific requirements and preferences of the system or application.


  • 15. 

    Metadata can be classified based on which factor?

    • Mutability

    • Logical Function

    • Content

    • All the options

    Correct Answer
    A. All the options
    Explanation
    Metadata can be classified based on various factors, including mutability, logical function, and content. Mutability refers to whether the metadata can be modified or not. Logical function refers to the purpose or role of the metadata within a system. Content classification involves categorizing metadata based on the type of information it represents. Therefore, all the given options are valid factors for classifying metadata.


  • 16. 

    The success factor of a crosswalk depends on ______________________.

    • Similarity between Metadata

    • Granularity of Elements

    • Compatibility of the Contents

    • All the Options

    Correct Answer
    A. All the Options
    Explanation
    The success factor of Crosswalk depends on all the options provided, which include similarity between metadata, granularity of elements, and compatibility of the contents. This means that for Crosswalk to be successful, it is important for the metadata to be similar, the elements to have the appropriate level of granularity, and for the contents to be compatible. All of these factors contribute to the effectiveness and efficiency of Crosswalk, ensuring that it can accurately and seamlessly connect different systems or platforms.
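
A crosswalk is, at its core, an element-by-element mapping between two metadata schemas, as in the minimal sketch below; both schemas and the mapping are hypothetical, and the dropped element shows what happens when granularity or content compatibility is missing.

```python
# Tiny crosswalk sketch: a field-by-field mapping between two metadata
# schemas. Both schemas here are hypothetical.
CROSSWALK = {
    # source element  ->  target element
    "dc:title":           "doc_name",
    "dc:creator":         "author",
    "dc:date":            "published_on",
    # "dc:coverage" has no target element of matching granularity, so it is dropped.
}

def crosswalk(source_record: dict) -> dict:
    return {CROSSWALK[k]: v for k, v in source_record.items() if k in CROSSWALK}

record = {"dc:title": "Q3 Sales Report", "dc:creator": "A. Kazi", "dc:coverage": "APAC"}
print(crosswalk(record))   # {'doc_name': 'Q3 Sales Report', 'author': 'A. Kazi'}
```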


  • 17. 

    Rules for cleansing are embedded in 

    • Parameter file

    • Input File

    • Output File

    Correct Answer
    A. Parameter file
    Explanation
    The rules for cleansing are embedded in the parameter file. This file contains the specific instructions and guidelines for cleaning and transforming the data. It defines the actions to be taken, such as removing duplicates, correcting errors, and standardizing formats. The parameter file serves as a reference for the data cleansing process, ensuring consistency and accuracy in the resulting cleaned data.
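
The idea of rules living in a parameter file rather than in code can be sketched as follows; the file content, keys, and rule syntax are assumptions for illustration and are not the parameter format of Trillium or any other specific tool.

```python
import json
import re

# Hypothetical parameter-file content holding the cleansing rules.
PARAMETER_FILE_CONTENT = """
{
  "trim_whitespace": true,
  "uppercase_fields": ["country_code"],
  "strip_pattern": "[^A-Za-z0-9 @.-]"
}
"""

def load_rules(text: str) -> dict:
    return json.loads(text)

def cleanse(record: dict, rules: dict) -> dict:
    out = {}
    for field, value in record.items():
        if rules.get("trim_whitespace"):
            value = value.strip()
        value = re.sub(rules["strip_pattern"], "", value)       # drop invalid characters
        if field in rules.get("uppercase_fields", []):
            value = value.upper()
        out[field] = value
    return out

rules = load_rules(PARAMETER_FILE_CONTENT)
print(cleanse({"name": "  Anita Rao!! ", "country_code": "in"}, rules))
# {'name': 'Anita Rao', 'country_code': 'IN'}
```

Keeping the rules in the parameter file means the cleansing behavior can change without touching the code that applies it.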


  • 18. 

    Data quality does not refer to

    • Accuracy

    • Consistency

    • Integrity

    • Uniqueness

    • Volume

    Correct Answer
    A. Volume
    Explanation
    Data quality refers to the accuracy, consistency, integrity, and uniqueness of data. Volume, on the other hand, refers to the amount of data that is being stored or processed. While volume is an important aspect of data management, it is not directly related to data quality. Therefore, volume does not fall under the category of factors that data quality refers to.


  • 19. 

    Standards defined for a metadata repository are

    • ISO 11179

    • ANSI X3.285

    • Both ANSI X3.285 and ISO 11179

    • None

    Correct Answer
    A. Both ANSI X3.285 and ISO 11179
    Explanation
    Both ANSI X3.285 and ISO 11179 are standards that define the requirements and guidelines for a metadata repository. A metadata repository is a centralized system that stores and manages metadata, which provides information about data and its attributes. These standards ensure that the metadata repository follows a consistent structure, format, and naming conventions, making it easier for organizations to share, exchange, and understand metadata across different systems and platforms. By adhering to these standards, organizations can improve data quality, data integration, and data governance processes.


  • 20. 

    Data when processed becomes information.

    • False

    • True

    Correct Answer
    A. True
    Explanation
    When data is processed, it undergoes a series of operations such as sorting, organizing, and analyzing, which transforms it into meaningful and useful information. Raw data on its own may not hold any significance or convey any message, but once processed, it becomes valuable and provides insights that can be used for decision-making and understanding various phenomena. Therefore, the statement "Data when processed becomes information" is true.


  • 21. 

    Data cleansing and standardization are taken care of by

    • Data Profiling Tools

    • Data Quality Tools

    • Metadata Tools

    Correct Answer
    A. Data Quality Tools
    Explanation
    Data cleansing and standardization involve the process of identifying and correcting or removing errors, inconsistencies, and inaccuracies in data. This is typically done to ensure data accuracy, completeness, and consistency. Data Quality Tools are specifically designed to handle these tasks by providing functionalities such as data profiling, data cleansing, and data standardization. These tools can automatically identify and fix data issues, validate data against predefined rules, and enforce data quality standards. Therefore, it is logical to conclude that Data Quality Tools will be responsible for data cleansing and standardization.


  • 22. 

    OLAP metadata depends on

    • DW

    • RDBMS

    • Both DW and RDBMS

    • Does not depend on DW and RDBMS

    Correct Answer
    A. Both DW and RDBMS
    Explanation
    OLAP metadata depends on both Data Warehouses (DW) and Relational Database Management Systems (RDBMS). Data Warehouses are specifically designed to store and manage large amounts of data for OLAP purposes. They provide a structured and optimized environment for storing and querying data. On the other hand, RDBMS is responsible for managing the structured data within the Data Warehouse and providing efficient data retrieval and manipulation capabilities. Therefore, both DW and RDBMS play crucial roles in supporting and managing the metadata required for OLAP operations.


  • 23. 

    Which tool is used to generate XML, SGML, and DTD?

    • Markup tool

    • Conversion Tool

    • Extraction Tool

    • Templates Tool

    Correct Answer
    A. Markup tool
    Explanation
    A markup tool is used to generate XML, SGML, and DTD. Markup refers to the process of adding tags or annotations to a document to define its structure and format. XML (Extensible Markup Language), SGML (Standard Generalized Markup Language), and DTD (Document Type Definition) are all markup languages used to define the structure and format of documents. Therefore, a markup tool is the correct tool to use for generating XML, SGML, and DTD.


  • 24. 

    In MDM, master data is held in which of the following components?

    • Master Data Store

    • Service for managing master data

    • Enterprise Service Bus

    • None of the Options

    Correct Answer
    A. Master Data Store
    Explanation
    The correct answer is Master Data Store. In MDM, master data holds the components that are stored in the Master Data Store. This store is responsible for managing and storing the master data, which includes important information about key entities such as customers, products, employees, and suppliers. The Master Data Store acts as a central repository for all the master data, ensuring consistency and accuracy across different systems and applications within an organization. It allows for efficient data management, access, and sharing, enabling better decision-making and improving overall business processes.


  • 25. 

    Layers of metadata are

    • Symbolic Layer

    • Logical layer

    Correct Answer(s)
    A. Symbolic Layer
    A. Logical layer
    Explanation
    The layers of metadata are the symbolic layer and the logical layer. The symbolic layer refers to the metadata that is represented through symbols or codes, such as file names or identifiers. This layer helps in identifying and locating specific data or resources. On the other hand, the logical layer refers to the metadata that provides a logical structure or organization to the data, such as data relationships, data definitions, or data models. These layers work together to ensure efficient data management and retrieval.


  • 26. 

    Select the correct options for RDBMS metadata

    • Exists as a part of the database

    • Referred to as a catalog

    • Provides information about foreign keys and primary keys

    • Provides information about views and indexes

    Correct Answer(s)
    A. Exists as a part of the database
    A. Referred to as a catalog
    A. Provides information about foreign keys and primary keys
    A. Provides information about views and indexes
    Explanation
    RDBMS metadata exists as a part of the database and is referred to as a catalog. It provides information about foreign keys and primary keys, as well as information about views and indexes.
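
A minimal sketch of reading this catalog follows, using SQLite, whose catalog is exposed through the sqlite_master table and PRAGMA statements; other RDBMSs expose comparable views such as INFORMATION_SCHEMA, and the sample table is an assumption.

```python
import sqlite3

# Create a throwaway in-memory database with one table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        email       TEXT
    )
""")

# Table-level metadata read from the catalog.
for name, sql in conn.execute("SELECT name, sql FROM sqlite_master WHERE type='table'"):
    print(name)   # customer
    print(sql)    # the stored CREATE TABLE statement

# Column-level metadata (name, type, primary-key flag).
for cid, col, coltype, notnull, default, pk in conn.execute("PRAGMA table_info(customer)"):
    print(col, coltype, "PK" if pk else "")
```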


  • 27. 

    Poor data quality will do which of the following?

    • Will affect the performance of the ETL

    • Will affect the performance of the Reporting

    Correct Answer
    A. Will affect the performance of the ETL
    Explanation
    Poor data quality can have a significant impact on the performance of the ETL (Extract, Transform, Load) process. When the data being extracted is of poor quality, it may contain errors, inconsistencies, or missing values, which can lead to issues during the transformation and loading stages. This can result in delays, errors, and inefficiencies in the ETL process, ultimately affecting its overall performance.


  • 28. 

    Different data cleansing operations are

    • Removing invalid character

    • Correcting data format

    • Identifying and removing duplicate records

    • Building data quality and reprocessing feedback with source system.

    Correct Answer(s)
    A. Removing invalid character
    A. Correcting data format
    A. Identifying and removing duplicate records
    A. Building data quality and reprocessing feedback with source system.
    Explanation
    The correct answer includes four different data cleansing operations. The first operation is removing invalid characters from the data. This is important because invalid characters can cause issues in data processing and analysis. The second operation is correcting data format, which involves ensuring that the data is in the correct format and structure for analysis. The third operation is identifying and removing duplicate records, as duplicates can skew the results and cause inaccuracies. Finally, building data quality and reprocessing feedback with the source system is crucial for maintaining and improving the overall data quality and integrity.
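
Three of these operations can be sketched in a few lines of Python; the field names, date formats, and sample rows are assumptions, and feedback to the source system is omitted because it depends on the source interface.

```python
import re
from datetime import datetime

# Hypothetical raw rows needing cleansing.
raw_rows = [
    {"name": "Ana#Lopez", "joined": "10/02/2017"},
    {"name": "Ana Lopez", "joined": "2017-02-10"},   # duplicate once cleansed
    {"name": "  Priya  Nair!!", "joined": "15/01/2016"},
]

def cleanse(row):
    name = re.sub(r"[^A-Za-z ]", " ", row["name"])        # remove invalid characters
    name = re.sub(r"\s+", " ", name).strip()
    joined = row["joined"]
    if "/" in joined:                                      # correct the date format
        joined = datetime.strptime(joined, "%d/%m/%Y").strftime("%Y-%m-%d")
    return {"name": name, "joined": joined}

seen, clean_rows = set(), []
for row in map(cleanse, raw_rows):
    key = (row["name"], row["joined"])
    if key not in seen:                                    # remove duplicate records
        seen.add(key)
        clean_rows.append(row)

print(clean_rows)
# [{'name': 'Ana Lopez', 'joined': '2017-02-10'}, {'name': 'Priya Nair', 'joined': '2016-01-15'}]
```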


  • 29. 

    Rule repository contains 

    • Database or Flat File

    • Database or Excel

    • Database only

    • Excel and Flat File

    Correct Answer
    A. Database or Flat File
    Explanation
    The rule repository can contain either a database or a flat file. This means that the rules can be stored and accessed from either a traditional database system or a flat file system. This allows for flexibility in choosing the storage method based on the specific requirements and preferences of the organization.


  • 30. 

    Trillium is

    • Data Quality Tool

    • Analytical Tool

    • ETL Tool

    • None

    Correct Answer
    A. Data Quality Tool
    Explanation
    Trillium is a data quality tool. Data quality tools are used to ensure the accuracy, completeness, and consistency of data. They help identify and correct errors, validate data against predefined rules, and standardize data formats. Trillium specifically focuses on improving data quality by providing features such as data profiling, data cleansing, and data enrichment. It helps organizations maintain high-quality data, which is essential for making informed business decisions and improving overall operational efficiency.


  • 31. 

    How can poor data quality be controlled?

    • Provide feedback about quality of data to source and ask source to correct and resend them.

    • De-duplication

    • Set stringent rules in validation process; if not, then in ETL process

    • All the options

    Correct Answer
    A. All the options
    Explanation
    The correct answer is "All the options". This means that all of the mentioned options - providing feedback to the source, de-duplication, and setting stringent rules in the validation or ETL process - can be used to control the poor quality of data. Each option tackles a different aspect of data quality control, allowing for a comprehensive approach to ensure data accuracy and reliability.


  • 32. 

    Different groups working in the same project must follow 

    • Agreed standards of manipulating same standards.

    • Compatible methods of collecting Metadata.

    Correct Answer(s)
    A. Agreed standards of manipulating same standards.
    A. Compatible methods of collecting Metadata.
    Explanation
    Different groups working in the same project must follow agreed standards of manipulating the same standards in order to ensure consistency and interoperability. This means that all groups should use the same set of rules and procedures when manipulating the project's standards, ensuring that there is a unified approach throughout the project. Additionally, they must also use compatible methods of collecting metadata, which refers to the process of gathering and organizing information about the project. This ensures that the metadata collected by different groups can be easily shared and integrated, further promoting collaboration and efficiency in the project.


  • 33. 

    Different data cleansing operations are

    • Removing invalid character

    • Correcting data format

    • Identifying and removing duplicate records

    • Building data quality and reprocessing feedback with source system.

    Correct Answer(s)
    A. Removing invalid character
    A. Correcting data format
    A. Identifying and removing duplicate records
    A. Building data quality and reprocessing feedback with source system.
    Explanation
    The correct answer includes four different data cleansing operations. The first operation is removing invalid characters from the data. This is important because invalid characters can cause errors or inconsistencies in the data. The second operation is correcting the data format. This involves ensuring that the data is in the correct format, such as converting dates to a standardized format. The third operation is identifying and removing duplicate records. Duplicate records can skew analysis and cause inaccuracies in the data. The fourth operation is building data quality and reprocessing feedback with the source system. This involves continuously improving the data quality and providing feedback to the source system for further improvement.


  • 34. 

    Vendors of metadata repositories are

    • Oracle Enterprise Metadata manager(EMM)

    • SAS Metadata Repositories

    • Masai technologies M:Scan and M:Grid

    • InfoLibrarian Metadata Integration Framework

    • Data Foundation Metadata Registry

    Correct Answer(s)
    A. Oracle Enterprise Metadata manager(EMM)
    A. SAS Metadata Repositories
    A. Masai technologies M:Scan and M:Grid
    A. InfoLibrarian Metadata Integration Framework
    A. Data Foundation Metadata Registry
    Explanation
    The answer lists various vendors of metadata repositories, including Oracle Enterprise Metadata manager (EMM), SAS Metadata Repositories, Masai technologies M:Scan and M:Grid, InfoLibrarian Metadata Integration Framework, and Data Foundation Metadata Registry. These vendors offer different solutions for managing metadata, which is essential for organizing and understanding data within an organization. These repositories provide a centralized location for storing and accessing metadata, allowing users to easily search, analyze, and govern their data assets. By using these repositories, organizations can improve data quality, enhance data governance, and enable better decision-making processes.


  • 35. 

    Select Metadata creation tools

    • Templates

    • Mark-Up tool

    • Extraction

    • Conversion

    Correct Answer(s)
    A. Templates
    A. Mark-Up tool
    A. Extraction
    A. Conversion
    Explanation
    The correct answer includes four options: templates, mark-up tool, extraction, and conversion. These are all tools used in metadata creation. Templates provide a pre-designed structure for organizing metadata. A mark-up tool is used to add tags or labels to specific elements of a document or data. Extraction involves extracting relevant information from a source and converting it into metadata. Conversion refers to the process of transforming data into a different format or structure. These tools are essential for creating and organizing metadata effectively.


  • 36. 

    Evaluate data quality

    • Before building a full-fledged data warehouse

    • While building a full-fledged data warehouse

    • After building the data warehouse

    • Any time

    Correct Answer
    A. Before building a full-fledged data warehouse
    Explanation
    The correct answer is "Before building a full-fledged data warehouse" because evaluating data quality before building the data warehouse ensures that the data being used is accurate, complete, and reliable. This evaluation helps identify any inconsistencies or errors in the data, allowing for necessary corrections and improvements to be made before the data warehouse is built. By doing so, the data warehouse is built on a solid foundation, resulting in better decision-making and analysis in the future.


  • 37. 

    Which of the following is correct?

    • Only US has a detailed address level

    • Some countries have detailed address level

    • No countries have detailed address level

    • All countries have detailed address level

    Correct Answer
    A. Some countries have detailed address level
    Explanation
    The correct answer is "Some countries have detailed address level." This means that not all countries have a detailed address level, but there are some countries that do. This suggests that the level of detail in addresses varies from country to country, with some having more detailed information than others.


  • 38. 

    Reasons for data quality issues are

    • Inaccurate entry of data

    • Lack of proper validation

    • Lack of MDM strategy

    Correct Answer(s)
    A. Inaccurate entry of data
    A. Lack of proper validation
    A. Lack of MDM strategy
    Explanation
    The answer states that the reasons for data quality issues are inaccurate entry of data, lack of proper validation, and lack of MDM (Master Data Management) strategy. This means that the data quality issues can occur when data is entered incorrectly or with errors, when there is insufficient validation to ensure the accuracy and integrity of the data, and when there is no proper strategy in place to manage and maintain the master data. These factors can lead to inconsistencies, errors, and unreliable data, affecting the overall quality of the data.


  • 39. 

    Master Data Management (MDM) is implemented

    • To track the duplicate value of records.

    • To store data

    • To validate DW

    • None

    Correct Answer
    A. To track the duplicate value of records.
    Explanation
    Master Data Management (MDM) is implemented to track the duplicate value of records. MDM helps in identifying and eliminating duplicate data entries in a database, ensuring data accuracy and consistency. By tracking duplicate records, MDM enables organizations to maintain a single, reliable version of data across different systems and applications. This helps in improving data quality, reducing errors, and enhancing overall operational efficiency.


  • 40. 

    Household matching refers to

    • Customer belongs to same family or same house

    • Customer’s duplicate record.

    • Both customer’s duplicate record and customer belongs to same family or same house

    • None

    Correct Answer
    A. Customer belongs to same family or same house
    Explanation
    The correct answer is "customer belongs to same family or same house." This means that household matching refers to identifying customers who are part of the same family or live in the same house. This can be useful for various purposes, such as identifying potential duplicate records or analyzing customer behavior within a household.
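
A minimal sketch of household matching is to group customers on a normalized address key, as below; the records and the crude normalization rule are assumptions, and real tools use much richer address standardization.

```python
from collections import defaultdict

# Hypothetical customer records; two live at the same house.
customers = [
    {"id": 1, "name": "A. Rao",  "address": "12 MG Road, Pune 411001"},
    {"id": 2, "name": "S. Rao",  "address": "12, M.G. Road, Pune-411001"},
    {"id": 3, "name": "K. Iyer", "address": "7 Park St, Kolkata 700016"},
]

def household_key(address: str) -> str:
    # Crude normalization: lowercase and keep only letters and digits.
    return "".join(ch for ch in address.lower() if ch.isalnum())

households = defaultdict(list)
for c in customers:
    households[household_key(c["address"])].append(c["id"])

print(dict(households))
# {'12mgroadpune411001': [1, 2], '7parkstkolkata700016': [3]}
```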


  • 41. 

    The frequency of data count is obtained in

    • Data profiling

    • Data cleansing

    • Data management

    Correct Answer
    A. Data profiling
    Explanation
    Data profiling is the process of analyzing and understanding the content and structure of data. It involves examining the data to identify patterns, inconsistencies, and anomalies. By doing so, the frequency of data count can be obtained, which helps in gaining insights into the data and ensuring its quality. Data cleansing, on the other hand, focuses on removing errors and inconsistencies from the data. Data management involves activities related to organizing, storing, and maintaining the data. Therefore, the most appropriate option for obtaining the frequency of data count is data profiling.
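
Obtaining a frequency count of a column's values is the most basic profiling step, as in this minimal sketch; the sample column values are assumptions.

```python
from collections import Counter

# Hypothetical column extract to be profiled.
country_column = ["IN", "IN", "US", "in", "", "US", "IN", "XX"]

# Frequency count of the column's values.
freq = Counter(country_column)
print(freq.most_common())
# [('IN', 3), ('US', 2), ('in', 1), ('', 1), ('XX', 1)]
# The profile already suggests cleansing rules: standardize case,
# reject blanks, and investigate the unknown code 'XX'.
```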


  • 42. 

    Select the correct options for metadata

    • Metadata must be adapted if the base resource has changed

    • Metadata should be merged if two sources are merged together

    • Metadata creator should be trained

    • It is not necessary to merge metadata if sources are merged

    Correct Answer(s)
    A. Metadata must be adapted if the base resource has changed
    A. Metadata should be merged if two sources are merged together
    A. Metadata creator should be trained
    Explanation
    Metadata must be adapted if the base resource has changed because metadata provides information about the resource, and if the resource has changed, the metadata needs to reflect those changes. Metadata should be merged if two sources are merged together because when multiple sources are combined, their respective metadata should also be merged to provide a comprehensive view. Metadata creator should be trained because creating accurate and relevant metadata requires knowledge and expertise. It is not necessary to merge metadata if sources are merged is incorrect because merging sources often requires merging their corresponding metadata as well.


  • 43. 

    Which of the following is not an IBM product?

    • Meta stage

    • Quality Stage

    • Profile Stage

    • Analysis stage

    Correct Answer
    A. Analysis stage
    Explanation
    Meta Stage (MetaStage), Quality Stage (QualityStage), and Profile Stage (ProfileStage) are metadata-management and data-quality products in IBM's (formerly Ascential's) Information Server family, whereas there is no IBM product called "Analysis Stage." Therefore, Analysis stage is the option that is not an IBM product.


  • 44. 

    Individual Matching

    • Identifies customer’s duplicate record.

    • Indicates customer belongs to same family or same house.

    • Both Indicates customer belongs to same family or same house and Identifies customer’s duplicate record.

    • None

    Correct Answer
    A. Identifies customer’s duplicate record.
    Explanation
    The correct answer is "Identifies customer's duplicate record." This means that the individual matching process is used to identify if a customer's record already exists in the system, indicating a potential duplicate entry. It does not necessarily indicate if the customer belongs to the same family or house, as mentioned in the other option.


  • 45. 

    Trillium server requires

    • Input file(DDL)

    • Input file(DDL)

    • Parameter file(.PAR)

    Correct Answer(s)
    A. Input file(DDL)
    A. Input file(DDL)
    A. Parameter file(.PAR)
    Explanation
    The Trillium server requires two input files in DDL format and one parameter file in .PAR format. The input files in DDL format contain the data that needs to be processed by the Trillium server. The parameter file in .PAR format contains the configuration settings and instructions for the Trillium server on how to process the input files. These files are necessary for the Trillium server to successfully perform its data processing tasks.


  • 46. 

    Best practice in Data Quality is 

    • Fix data quality in source system.

    • Fix data quality in Data Warehouse

    • Both Fix data quality in source system and Fix data quality in Data Warehouse

    • None of the Options

    Correct Answer
    A. Fix data quality in source system.
    Explanation
    The best practice in data quality is to fix data quality issues in the source system. This means addressing and resolving any inaccuracies, inconsistencies, or errors at the point of data entry or creation. By fixing data quality in the source system, it ensures that the data being captured is accurate, reliable, and consistent from the beginning. This approach helps to prevent downstream issues and ensures that the data stored in the data warehouse or other systems is of high quality.


  • 47. 

    During the de-duplication process

    • Delete the original values since they consume space

    • Keep the original values in trail tables

    • Do not disturb the original values and place the new values in new tables

    Correct Answer
    A. Keep the original values in trail tables
    Explanation
    During the de-duplication process, the original values are kept in trail tables. This means that instead of deleting the original values or replacing them with new values in new tables, the original values are preserved in separate tables. This allows for a record of the original values to be maintained while still removing any duplicate entries from the main tables. Keeping the original values in trail tables can be useful for auditing purposes or for reference in case the new values need to be reverted back to the original ones.
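
A minimal sketch of this practice, assuming simple in-memory tables, is shown below: every original row is copied to a trail table before the de-duplication pass, so the main table keeps only survivors while the trail preserves the originals for audit.

```python
# Hypothetical main table with a case-only duplicate.
customer = [
    {"id": 1, "email": "a.rao@example.com"},
    {"id": 2, "email": "A.RAO@example.com"},   # duplicate of id 1 (case only)
    {"id": 3, "email": "k.iyer@example.com"},
]

# Preserve every original row in a trail table before de-duplication.
customer_trail = [dict(row, action="pre-dedup") for row in customer]

# De-duplicate the main table on a normalized email key.
seen, survivors = set(), []
for row in customer:
    key = row["email"].lower()
    if key not in seen:
        seen.add(key)
        survivors.append(row)

customer = survivors
print(len(customer), "surviving rows,", len(customer_trail), "rows preserved in trail")
# 2 surviving rows, 3 rows preserved in trail
```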


  • 48. 

    Clean-up will not have an effect on which phase?

    • Acquisition

    • Application

    • Cleanup

    • None

    Correct Answer
    A. Acquisition
  • 49. 

    What are the uses of front-room metadata?

    • Label Screen and Reporting

    • Act on data present on DW

    • Used in ETL

    • Bring OLTP data in DW

    Correct Answer(s)
    A. Label Screen and Reporting
    A. Act on data present on DW
    Explanation
    Front-room metadata is used for labels, screens, and reporting. It helps in organizing and categorizing data in the data warehouse (DW), allows users to easily identify and understand the data present in the DW, and enables users to act on and make decisions based on that data. Driving the Extract, Transform, Load (ETL) process and bringing OLTP data into the DW are, by contrast, the role of back-room metadata.


Quiz Review Timeline (Updated): Jan 4, 2024


  • Current Version
  • Jan 04, 2024
    Quiz Edited by
    ProProfs Editorial Team
  • Feb 10, 2017
    Quiz Created by
    Arafatkazi