Dt pH II Practice 1

Questions and Answers
  • 1. 

    Data, when processed, becomes information

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    From Data Quality

  • 2. 

    The best practice in data quality is 

    • A.

      Fixing data quality issues in ETL

    • B.

      Fixing data quality issues in ODS

    • C.

      Fixing data quality issues in Source

    • D.

      Fixing data quality issues in DW

    Correct Answer
    C. Fixing data quality issues in Source
    Explanation
    From Data Quality

  • 3. 

    Evaluate data quality before building a fully-fledged data warehouse

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    From Data Quality

  • 4. 

    Data quality does not refer to

    • A.

      Volume

    • B.

      Accuracy

    • C.

      Consistency

    • D.

      Integrity

    Correct Answer
    A. Volume
    Explanation
    Data quality refers to the accuracy, consistency, and integrity of the data. It ensures that the data is reliable, complete, and free from errors or inconsistencies. However, volume does not fall under the category of data quality. Volume refers to the amount or quantity of data, and while it is important to manage and analyze large volumes of data effectively, it is not directly related to the quality of the data itself.

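    To make the distinction concrete, here is a minimal Python sketch of accuracy, consistency, and integrity checks on a single record; the field names, formats, and reference list are assumptions made for illustration, not from any particular tool.

      # Hypothetical data quality checks; volume is deliberately not a dimension here.
      import re

      record = {"customer_id": "C001", "age": "34", "country": "IN", "order_total": "-50"}

      def check_accuracy(rec):
          # Accuracy: values conform to the expected formats and ranges.
          return bool(re.fullmatch(r"C\d{3}", rec["customer_id"])) and rec["age"].isdigit()

      def check_consistency(rec):
          # Consistency: related values do not contradict business logic.
          return float(rec["order_total"]) >= 0

      def check_integrity(rec, valid_countries=("IN", "US", "UK")):
          # Integrity: codes resolve against reference (master) data.
          return rec["country"] in valid_countries

      print(check_accuracy(record), check_consistency(record), check_integrity(record))
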
  • 5. 

    Which is not a data quality tool?

    • A.

      Quality Stage

    • B.

      Trillium

    • C.

      DataStage

    • D.

      All the options

    Correct Answer
    C. DataStage
    Explanation
    DataStage is an ETL tool from IBM, not a data quality tool.

  • 6. 

    The rule repository contains databases or flat files

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    The rule repository contains databases or flat files. This means that the repository is used to store and manage rules, which can be stored in either a database or a flat file format. This allows for easy access, retrieval, and management of the rules within the repository. Therefore, the statement "True" is correct.

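    As a small illustration of the flat-file case, the hedged Python sketch below stores validation rules in a JSON file and reads them back; the file name and rule fields are hypothetical.

      # Hypothetical flat-file rule repository (JSON); a database table could hold the same rows.
      import json

      rules = [
          {"rule_id": "R1", "column": "email", "check": "not_null"},
          {"rule_id": "R2", "column": "age", "check": "range", "min": 0, "max": 120},
      ]

      with open("rule_repository.json", "w") as f:   # persist the rules to the flat file
          json.dump(rules, f, indent=2)

      with open("rule_repository.json") as f:        # retrieve and manage them later
          loaded_rules = json.load(f)
      print(len(loaded_rules), "rules loaded")
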
  • 7. 

    Which of the following is not an IBM product?

    • A.

      Meta stage

    • B.

      Quality Stage

    • C.

      Profile Stage

    • D.

      Analysis stage

    Correct Answer
    D. Analysis stage
    Explanation
    The Analysis stage is not an IBM product. The other options, Meta stage, Quality Stage, and Profile Stage, are all IBM products used in data integration and data quality management. However, the Analysis stage does not correspond to any known IBM product in this context.

  • 8. 

    Data quality audit provides traceability between original and corrected values.

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Data quality audit is a process that ensures the accuracy and reliability of data. It involves examining data for errors, inconsistencies, and completeness. By conducting a data quality audit, organizations can trace the origin of data and compare it with the corrected values. This helps in identifying the source of errors and discrepancies, enabling organizations to make necessary corrections and improvements. Therefore, the statement that data quality audit provides traceability between original and corrected values is true.

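    A minimal sketch of that traceability, assuming a simple in-memory audit table rather than any specific product's audit format:

      # Hypothetical audit trail: every correction is recorded next to its original value.
      audit_trail = []

      def correct_value(record_id, field, original, corrected):
          audit_trail.append({"record_id": record_id, "field": field,
                              "original": original, "corrected": corrected})
          return corrected

      city = correct_value(101, "city", "Bangalore ", "Bengaluru")
      for entry in audit_trail:
          print(entry)   # original and corrected values remain linked for auditing
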
  • 9. 

    Bad quality data affects concurrency and performance.

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Bad quality data refers to data that is inaccurate, incomplete, inconsistent, or outdated. When dealing with bad quality data, it can lead to issues with concurrency and performance. Concurrency refers to the ability of multiple users to access and manipulate data at the same time. If the data is of poor quality, it can cause conflicts and inconsistencies when multiple users try to access and modify it simultaneously. This can lead to data corruption and hinder the overall performance of the system. Therefore, it is true that bad quality data affects concurrency and performance.

  • 10. 

    Tracing involves audit trails between deleted and surviving customers

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Tracing refers to the process of establishing connections or links between deleted customers and the ones that still exist. It involves creating an audit trail to track the activities and interactions of these customers. Therefore, the statement "Tracing involves audit trails between deleted and surviving customers" is true.

  • 11. 

    Survivorship is a concept used in 

    • A.

      Data de-duplication

    • B.

      Cleansing

    • C.

      Enrichment

    • D.

      None

    Correct Answer
    A. Data de-duplication
    Explanation
    Survivorship is a concept used in data de-duplication. Data de-duplication is the process of identifying and removing duplicate data entries from a dataset. Survivorship refers to the process of selecting the most accurate and reliable data entry among the duplicates to be retained in the dataset, while discarding the rest. This ensures that only the most relevant and correct information is retained, improving data quality and reducing storage space requirements.

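    A hedged sketch of survivorship during de-duplication, using a made-up completeness score to choose which duplicate survives:

      # Hypothetical survivorship rule: keep the most complete duplicate record.
      duplicates = [
          {"id": 1, "name": "A. Kumar",   "phone": None,      "email": "ak@example.com"},
          {"id": 2, "name": "Arun Kumar", "phone": "9999999", "email": "ak@example.com"},
      ]

      def completeness(rec):
          # Count non-empty attributes; richer records score higher.
          return sum(1 for v in rec.values() if v)

      survivor = max(duplicates, key=completeness)
      print("surviving record:", survivor["id"])
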
  • 12. 

    Data masking and mask pattern analysis are used in substituting string patterns

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Data masking and mask pattern analysis are indeed used in substituting string patterns. Data masking is a technique used to protect sensitive data by replacing it with fictitious but realistic data. It helps to ensure that the original data is not exposed to unauthorized individuals. Mask pattern analysis, on the other hand, involves identifying and analyzing patterns in the masked data to ensure that it follows the desired format and structure. Both of these techniques are commonly employed in data security and privacy measures.

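    For example, pattern-based masking can be sketched with regular expressions; the patterns and replacement characters below are assumptions, not a specific product's masking rules.

      # Hypothetical string-pattern masking with regular expressions.
      import re

      text = "Card 4111-1111-1111-1234, phone 98765-43210"

      masked = re.sub(r"\d{4}-\d{4}-\d{4}-(\d{4})", r"XXXX-XXXX-XXXX-\1", text)  # keep last 4 digits
      masked = re.sub(r"\d{5}-\d{5}", "XXXXX-XXXXX", masked)                      # mask the phone pattern

      print(masked)
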
  • 13. 

    Customer merging is carrying the best attributes from duplicate records into the surviving record

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Customer merging is the process of combining or consolidating duplicate customer records into a single, accurate record. This involves identifying and matching the best attributes or information from each duplicate record and merging them into the surviving record. By doing so, businesses can eliminate duplicate data, improve data quality, and ensure that customer information is up to date and accurate. Therefore, the statement "Customer merging is matching the best attribute into the surviving records from duplicate records" is true.

  • 14. 

    Customer matching is done with Fuzzy and intelligent logic.

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Customer matching is done with fuzzy and intelligent logic, which means that it is not a straightforward and exact process. Fuzzy logic allows for a degree of uncertainty and imprecision in the matching process, taking into account similarities and patterns rather than strict criteria. Intelligent logic implies that the matching system is capable of learning and adapting over time, becoming more accurate and efficient in identifying the right customers for a particular product or service. Therefore, the statement "Customer matching is done with fuzzy and intelligent logic" is true.

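    A minimal fuzzy-matching sketch using Python's standard difflib; the 0.85 similarity threshold is an arbitrary assumption.

      # Hypothetical fuzzy customer matching based on a similarity ratio.
      from difflib import SequenceMatcher

      def similarity(a, b):
          return SequenceMatcher(None, a.lower(), b.lower()).ratio()

      pairs = [("Jon Smith", "John Smith"), ("Jon Smith", "Jane Doe")]
      for a, b in pairs:
          score = similarity(a, b)
          print(a, "~", b, "->", round(score, 2), "match" if score >= 0.85 else "no match")
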
  • 15. 

    Data quality (MDM) involves avoiding overheads while preparing the DW.

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Data quality (MDM) is indeed important in avoiding overheads while preparing the data warehouse (DW). Data quality refers to the accuracy, completeness, consistency, and reliability of data, and it plays a crucial role in ensuring that the data used in the DW is reliable and trustworthy. By implementing Master Data Management (MDM) practices, organizations can improve data quality by ensuring that master data is accurate, consistent, and up-to-date. This, in turn, helps to avoid unnecessary costs and inefficiencies associated with poor data quality, ultimately leading to a more effective and efficient data warehouse.

  • 16. 

    During which of these operations is data not modified?

    • A.

      Data profiling

    • B.

      Data cleansing

    • C.

      Data enrichment

    • D.

      None

    Correct Answer
    A. Data profiling
    Explanation
    Data profiling is the process of analyzing and understanding the structure, content, and quality of data. It involves examining the data to identify patterns, inconsistencies, and anomalies. During data profiling, the data itself is not modified or changed in any way. Instead, it focuses on gathering information about the data, such as its type, format, and distribution. This analysis helps in understanding the data better and making informed decisions about data cleansing or enrichment processes.

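    A small read-only profiling sketch over a hypothetical column: it gathers counts and frequencies but never modifies the input.

      # Hypothetical read-only profiling: the input list is never changed.
      from collections import Counter

      country_column = ["IN", "US", "IN", None, "UK", "IN", None]

      profile = {
          "row_count": len(country_column),
          "null_count": sum(1 for v in country_column if v is None),
          "value_frequencies": Counter(v for v in country_column if v is not None),
      }
      print(profile)
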
  • 17. 

    MDM is maintained at the organizational level

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    The statement "MDM is maintained at organizational level" is true. Master Data Management (MDM) refers to the process of creating and managing a single, consistent, and accurate version of an organization's critical data. MDM is typically implemented and maintained at the organizational level to ensure that all departments and systems within the organization have access to and use the same reliable data. By centralizing the management of master data, organizations can improve data quality, reduce data inconsistencies, and enhance decision-making processes.

  • 18. 

    During the de-duplication process

    • A.

      Delete the original values since they consume space

    • B.

      Keep the original values in trail tables

    • C.

      Do not disturb the original values and place the new values in new tables

    • D.

      None

    Correct Answer
    B. Keep the original values in trail tables
    Explanation
    During the de-duplication process, the original values are kept in trail tables. This means that instead of deleting the original values or placing new values in new tables, the original values are preserved. This allows for a record of the original values to be maintained while still removing any duplicate entries. Keeping the original values in trail tables can be useful for auditing purposes or for historical reference.

  • 19. 

    Data cleansing and standardization are taken care of by

    • A.

      Data Profiling Tools

    • B.

      Data Quality Tools

    • C.

      Metadata Tools

    • D.

      ETL Tool

    Correct Answer
    B. Data Quality Tools
    Explanation
    Data quality tools are responsible for ensuring the accuracy, completeness, and consistency of data. They perform various tasks such as data cleansing and standardization, which involve identifying and correcting errors, inconsistencies, and duplicates in the data. These tools also validate and verify data against predefined rules and standards to ensure its quality. Therefore, data quality tools are the most suitable option for handling data cleansing and standardization tasks.

  • 20. 

    What is the language used in a data quality tool?

    • A.

      C

    • B.

      JAVA

    • C.

      C#

    • D.

      COBOL

    Correct Answer
    A. C
    Explanation
    The correct answer is C. The C programming language is commonly used in data quality tools. C is a powerful and efficient language that allows for low-level programming and direct memory manipulation, making it well-suited for tasks such as data processing and analysis. Many data quality tools are written in C or have components written in C to optimize performance and ensure accurate and reliable data management.

  • 21. 

    Household matching is for 

    • A.

      Business

    • B.

      Product

    • C.

      Customer

    • D.

      None of the above.

    Correct Answer
    C. Customer
    Explanation
    Household matching refers to the process of matching customer data with household data to identify and group individuals who belong to the same household. This is done to gain a better understanding of customer behavior, preferences, and demographics, which can be valuable for businesses in targeting their marketing efforts and providing personalized experiences. Therefore, the correct answer is customer as household matching is primarily focused on identifying and analyzing customers within a household.

  • 22. 

    The Trillium server process requires

    • A.

      Input Structure(DLL file)

    • B.

      Output structure (DLL file)

    • C.

      Parameter file (PAR file)

    Correct Answer(s)
    A. Input Structure(DLL file)
    B. Output structure (DLL file)
    C. Parameter file (PAR file)
    Explanation
    The Trillium server process requires an Input Structure (DLL file), an Output Structure (DLL file), and a Parameter file (PAR file). These files are necessary for the Trillium server process to function properly. The Input Structure (DLL file) contains the necessary data and instructions for the server process to process the input data. The Output Structure (DLL file) defines the format and structure of the output data generated by the server process. The Parameter file (PAR file) contains the configuration settings and parameters that govern the behavior of the server process.

  • 23. 

    The rules of cleansing are embedded in Trillium’s 

    • A.

      Parameter file (PAR).

    • B.

      Output structure (DLL file)

    • C.

      Input structure (DLL file)

    Correct Answer
    A. Parameter file (PAR).
    Explanation
    The correct answer is the Parameter file (PAR). The explanation for this is that the rules of cleansing are embedded in the Parameter file (PAR). This means that the Parameter file contains the specific instructions and guidelines for how data should be cleansed. It likely includes information on what types of data should be removed or corrected, as well as any specific algorithms or processes that should be followed. The Output structure (DLL file) and Input structure (DLL file) are not directly related to the rules of cleansing, so they are not the correct answer.

  • 24. 

    Trillium source 

    • A.

      Flat files, fixed width

    • B.

      Flat file, comma separated

    • C.

      ODBC connection

    • D.

      All

    Correct Answer
    A. Flat files, fixed width
    Explanation
    The correct answer is "Flat files, fixed width". This means that the Trillium source can be obtained from flat files that have a fixed width format. This format is used to store data where each field has a specific length, and the data is aligned accordingly. The fixed width format is commonly used when the data needs to be imported or exported into systems that require a specific layout.

  • 25. 

    Basic Functionalities of Trillium

    • A.

      Data Profiling

    • B.

      Data Quality

    • C.

      Data Enrichment

    • D.

      Data Volume

    Correct Answer(s)
    A. Data Profiling
    B. Data Quality
    C. Data Enrichment
    Explanation
    Trillium offers several basic functionalities, including data profiling, data quality, and data enrichment. Data profiling involves analyzing and understanding the characteristics and quality of data. Data quality refers to the accuracy, completeness, consistency, and reliability of data. Data enrichment involves enhancing the existing data with additional information to provide more insights and value. These functionalities are essential for organizations to ensure that their data is reliable, accurate, and useful for decision-making purposes.

  • 26. 

    Frequency counts of data values are obtained in

    • A.

      Data profiling

    • B.

      Data cleansing

    • C.

      Data management

    Correct Answer
    A. Data profiling
    Explanation
    Data profiling involves analyzing and examining the data to understand its structure, content, and quality. By conducting data profiling, the frequency of data count can be obtained. This process helps in identifying the patterns, inconsistencies, and anomalies within the data, allowing organizations to gain insights and make informed decisions. Data cleansing, on the other hand, focuses on removing or correcting errors, duplicates, and inconsistencies in the data. Data management refers to the overall process of collecting, storing, organizing, and maintaining data.

  • 27. 

    Clean-up will not affect which phase?

    • A.

      Acquisition

    • B.

      Application

    • C.

      Cleanup

    • D.

      None

    Correct Answer
    A. Acquisition
    Explanation
    The question is asking which phase will not be affected by the clean-up. Clean-up is a process of removing unnecessary or unwanted elements. In the context of the given options, acquisition refers to the phase of obtaining or acquiring something. Clean-up is not related to the acquisition phase, as it focuses on organizing and removing unnecessary elements rather than obtaining something new. Therefore, the clean-up will not affect the acquisition phase.

  • 28. 

    Reasons for poor quality of data

    • A.

      Careless / Inaccurate data entry

    • B.

      No stringent rules or processes followed to validate the data entry

    • C.

      Lack of Master Data Management strategy

    Correct Answer(s)
    A. Careless / Inaccurate data entry
    B. No stringent rules or processes followed to validate the data entry
    C. Lack of Master Data Management strategy
    Explanation
    The poor quality of data can be attributed to several factors. One reason is careless or inaccurate data entry, where individuals responsible for inputting data may make mistakes or not pay attention to detail. Another factor is the absence of stringent rules or processes to validate the data entry, which allows for errors to go unnoticed. Additionally, the lack of a Master Data Management strategy contributes to poor data quality as there is no systematic approach to ensure data accuracy, consistency, and integrity.

  • 29. 

    Steps to avoid poor-quality data

    • A.

      Set stringent rules in validation process; if not, then in ETL process

    • B.

      De-duplication

    • C.

      Provide feedback about quality of data to source and ask source to correct and resend them

    Correct Answer(s)
    A. Set stringent rules in validation process; if not, then in ETL process
    B. De-duplication
    C. Provide feedback about quality of data to source and ask source to correct and resend them
    Explanation
    The answer suggests three steps to avoid poor quality data. The first step is to set stringent rules in the validation process, and if not possible, then in the ETL (Extract, Transform, Load) process. This ensures that data is thoroughly checked and validated before being used. The second step is de-duplication, which involves removing any duplicate or redundant data entries. This helps in maintaining data integrity and accuracy. The third step is to provide feedback about the quality of data to the source and request them to correct and resend the data. This ensures that the source takes responsibility for the quality of the data they provide.

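    A hedged sketch of the first step, applying stringent validation rules at load time; the rule set and field names are illustrative assumptions.

      # Hypothetical load-time validation: reject rows that break the stringent rules.
      import re

      rules = {
          "email": lambda v: bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v or "")),
          "age":   lambda v: v is not None and 0 <= v <= 120,
      }

      rows = [{"email": "a@b.com", "age": 30}, {"email": "bad", "age": 200}]
      accepted = [r for r in rows if all(check(r.get(col)) for col, check in rules.items())]
      rejected = [r for r in rows if r not in accepted]

      print(len(accepted), "accepted;", len(rejected), "sent back to the source for correction")
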
  • 30. 

    Data cleansing and standardization are taken care of by

    • A.

      Data Quality Tools

    • B.

      Data Profiling Tools

    • C.

      Metadata Tools

    Correct Answer
    A. Data Quality Tools
    Explanation
    Data quality tools are specifically designed to identify and correct errors, inconsistencies, and inaccuracies in data. They help in cleansing and standardizing the data by removing duplicate entries, validating data against predefined rules, and ensuring data integrity. These tools can also perform various data enrichment techniques to enhance the overall quality of the data. Therefore, it is logical to conclude that data quality tools will be responsible for data cleansing and standardization.

  • 31. 

    During which of these operations is data not modified?

    • A.

      Data profiling

    • B.

      Data cleansing

    • C.

      Data enrichment

    Correct Answer
    A. Data profiling
    Explanation
    During data profiling, the focus is on analyzing and understanding the data, rather than modifying it. Data profiling involves examining the quality, structure, and content of the data to gain insights and identify any issues or anomalies. This process helps in understanding the data's characteristics, such as its completeness, accuracy, and consistency. Unlike data cleansing and data enrichment, data profiling does not involve making changes or additions to the data. Instead, it aims to provide a comprehensive overview of the data, enabling better decision-making and data management.

  • 32. 

    Which tool extracts data from textual sources

    • A.

      Conversion

    • B.

      Mark-Up

    • C.

      Extraction

    Correct Answer
    C. Extraction
    Explanation
    Extraction is the correct answer because it refers to the process of retrieving or extracting data from textual sources. This tool is used to gather information from various text-based documents, such as websites, articles, reports, or social media posts. Extraction tools typically analyze the text and identify relevant data based on specific criteria or patterns. This extracted data can then be further processed, analyzed, or stored for various purposes such as data mining, business intelligence, or research.

  • 33. 

    Metadata can be classified based on 

    • A.

      Content

    • B.

      Mutability

    • C.

      Logical function

    • D.

      Transformation

    • E.

      Partition mapping

    Correct Answer(s)
    A. Content
    B. Mutability
    C. Logical function
    Explanation
    Metadata can be classified based on various factors, including content, mutability, and logical function. Content refers to the type of information that the metadata describes, such as the title, author, or date of a document. Mutability refers to whether the metadata can be modified or not. Logical function refers to the purpose or role of the metadata within a system or application. These classifications help to organize and manage metadata effectively, allowing for easier retrieval and analysis of information.

  • 34. 

    What are the standards defined for metadata

    • A.

      ANSI X3.528

    • B.

      ISO/IEC 11179

    • C.

      ISO/IEC 11197

    • D.

      ANSI X3.825

    • E.

      ANSI X3.285

    Correct Answer(s)
    B. ISO/IEC 11179
    E. ANSI X3.285
    Explanation
    ISO/IEC 11179 and ANSI X3.285 are the standards defined for metadata. ISO/IEC 11179 provides guidelines and specifications for managing and registering metadata in a standardized manner. It defines various aspects of metadata, including its structure, content, and representation. ANSI X3.285, on the other hand, focuses on the syntax and semantics of metadata for the interchange of information. These standards ensure consistency and interoperability in the management and exchange of metadata across different systems and organizations.

  • 35. 

    Metadata storage formats

    • A.

      Human readable format (XML)

    • B.

      Non-human readable format (Binary)

    • C.

      Pdf

    • D.

      Text

    Correct Answer(s)
    A. Human readable format (XML)
    B. Non-human readable format (Binary)
    Explanation
    The given answer is correct because XML is a format that can be easily read and understood by humans. It uses tags to define elements and attributes to provide additional information about those elements. On the other hand, binary formats are not designed to be read by humans as they consist of binary data that is encoded in a way that is efficient for computers to process. Therefore, XML is a human-readable format, while binary formats are non-human readable.

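    A short sketch writing the same metadata once in a human-readable XML form and once in an opaque binary form; the element names are hypothetical.

      # Hypothetical metadata stored as human-readable XML and as opaque binary.
      import pickle
      import xml.etree.ElementTree as ET

      meta = {"title": "Sales DW", "owner": "BI team", "refresh": "daily"}

      root = ET.Element("metadata")
      for key, value in meta.items():
          ET.SubElement(root, key).text = value
      xml_bytes = ET.tostring(root)          # readable: <metadata><title>Sales DW</title>...
      binary_bytes = pickle.dumps(meta)      # not meant to be read by humans

      print(xml_bytes.decode())
      print(binary_bytes[:20], "...")
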
  • 36. 

    Metadata should be maintained even when 

    • A.

      Base resource changes

    • B.

      If two sources merge together

    • C.

      Base source is deleted

    Correct Answer(s)
    A. Base resource changes
    B. If two sources merge together
    C. Base source is deleted
    Explanation
    Metadata should be maintained even when the base resource changes because the metadata provides important information about the resource, such as its origin, format, and any restrictions or permissions associated with it. This ensures that the metadata remains accurate and up-to-date, allowing users to effectively search, retrieve, and use the resource. Similarly, when two sources merge together, it is important to maintain the metadata from both sources to preserve the integrity and completeness of the merged data. Lastly, even if the base source is deleted, the metadata should still be retained to provide historical context and reference for any data or resources that were derived from or linked to the base source.

  • 37. 

    Types of DW Metadata

    • A.

      Back Room

    • B.

      Front Room

    • C.

      Source System

    • D.

      Data Staging

    • E.

      RDBMS

    Correct Answer(s)
    A. Back Room
    B. Front Room
    C. Source System
    D. Data Staging
    E. RDBMS
    Explanation
    The given answer lists the different types of metadata in a data warehouse. The "Back Room" refers to the metadata that is stored in the back-end of the data warehouse system, such as data transformation rules and data lineage. The "Front Room" refers to the metadata that is exposed to the end users, such as data definitions and business glossaries. "Source System" metadata includes information about the data sources used in the data warehouse. "Data Staging" metadata pertains to the process of loading and transforming data from source systems to the data warehouse. "RDBMS" metadata refers to the metadata associated with the relational database management system used in the data warehouse.

  • 38. 

    Types of BI Metadata

    • A.

      OLAP Metadata

    • B.

      Reporting Metadata

    • C.

      Data Mining Metadata

    Correct Answer(s)
    A. OLAP Metadata
    B. Reporting Metadata
    C. Data Mining Metadata
    Explanation
    The correct answer is A, B, and C because these are all types of BI metadata. OLAP metadata refers to the metadata used in online analytical processing, which involves analyzing multidimensional data. Reporting metadata is used in generating reports and includes information about data sources, report layouts, and filters. Data mining metadata is used in the process of discovering patterns and relationships in large datasets. These three types of metadata are essential components of a business intelligence system, as they help in organizing and understanding data for analysis and reporting purposes.

  • 39. 

    Metadata creation tools

    • A.

      Templates

    • B.

      Mark-Up tools

    • C.

      Extraction tool

    • D.

      Conversion tool

    Correct Answer(s)
    A. Templates
    B. Mark-Up tools
    C. Extraction tool
    D. Conversion tool
    Explanation
    The given options are all different types of metadata creation tools. Templates are pre-designed structures or forms that help in organizing and standardizing metadata. Mark-Up tools are used to add metadata tags or labels to content. Extraction tools are used to extract metadata from various sources or documents. Conversion tools are used to convert metadata from one format to another.

  • 40. 

    Crosswalk allows metadata created by one user to be used by another

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    Crosswalk allows metadata created by one user to be used by another. This means that if one user creates metadata for a specific purpose, another user can access and utilize that metadata for their own purposes. This allows for the sharing and reusability of metadata, promoting collaboration and efficiency among users.

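    A hedged sketch of a crosswalk: a mapping table that translates one scheme's metadata field names into another's so the metadata can be reused; both schemes here are made up.

      # Hypothetical crosswalk between two metadata schemes.
      crosswalk = {"dc:title": "report_name", "dc:creator": "author", "dc:date": "published_on"}

      source_metadata = {"dc:title": "Q1 Revenue", "dc:creator": "Finance", "dc:date": "2017-01-24"}

      target_metadata = {crosswalk[field]: value
                         for field, value in source_metadata.items()
                         if field in crosswalk}
      print(target_metadata)
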
  • 41. 

    DataStage is an ETL tool

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    DataStage is indeed an ETL (Extract, Transform, Load) tool. ETL tools are used to extract data from various sources, transform it into a suitable format, and load it into a target system or database. DataStage is specifically designed for this purpose, allowing users to create data integration jobs that extract data from different sources, apply transformations, and load it into a target database or data warehouse. Therefore, the correct answer is true.

  • 42. 

    Order of execution in DataStage is

    • A.

      Stage variable then Constraints then Derivations

    • B.

      Derivations then Stage variable then Constraints

    • C.

      Constraints then Derivations then Stage variable

    Correct Answer
    A. Stage variable then Constraints then Derivations
    Explanation
    The correct answer is "Stage variable then Constraints then Derivations." In Datastage, the order of execution is important for proper data processing. Stage variables are evaluated first, as they are used to store intermediate values during the data transformation process. Constraints are then applied to filter the data based on certain conditions. Finally, derivations are performed to calculate new values or modify existing ones. This sequence ensures that the stage variables are available for use in constraints and derivations, allowing for accurate data manipulation.

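    The sketch below mimics that per-row order in plain Python (stage variables, then the constraint, then output derivations); it is only an analogy, not DataStage code, and the column names and 18% tax figure are arbitrary.

      # Analogy only: per-row evaluation order of a transformer-like stage.
      rows = [{"qty": 3, "price": 10.0}, {"qty": 0, "price": 5.0}]

      for row in rows:
          # 1. Stage variables: intermediate values are computed first.
          sv_total = row["qty"] * row["price"]
          # 2. Constraint: decides whether the row passes to the output link.
          if sv_total <= 0:
              continue
          # 3. Derivations: output columns calculated from inputs and stage variables.
          output = {"qty": row["qty"], "total_with_tax": round(sv_total * 1.18, 2)}
          print(output)
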
  • 43. 

    Which of the following databases is used in the DataStage repository?

    • A.

      Universe

    • B.

      Oracle

    • C.

      Sybase

    • D.

      MS-SQL Server

    Correct Answer
    A. Universe
    Explanation
    Universe (IBM UniVerse) is the correct answer because the DataStage server repository is held in a UniVerse database. DataStage stores its project metadata, such as job designs, table definitions, and routines, in this built-in UniVerse repository rather than in an external database such as Oracle, Sybase, or MS-SQL Server.

  • 44. 

    Which client tool is used to schedule, run and validate the job?

    • A.

      DataStage Director

    • B.

      DataStage Manager

    • C.

      DataStage Administrator

    • D.

      DataStage Manager Roles

    Correct Answer
    A. DataStage Director
    Explanation
    DataStage Director is the correct answer because it is a client tool used in IBM InfoSphere DataStage to schedule, run, and validate jobs. It provides a graphical interface that allows users to manage and monitor DataStage jobs, view job logs, and troubleshoot any issues that may arise during job execution. DataStage Director also allows users to schedule jobs to run at specific times or intervals, ensuring that data integration processes are executed in a timely and efficient manner.

  • 45. 

    Which client tool is used to create or move the projects in DataStage?

    • A.

      DataStage Designer

    • B.

      DataStage Director

    • C.

      DataStage Manager

    • D.

      DataStage Administrator

    Correct Answer
    D. DataStage Administrator
    Explanation
    The DataStage Administrator client tool is used to create or move projects in DataStage. It provides features for creating and configuring projects, setting project-wide defaults and environment variables, tuning options such as cache size, and controlling access and security settings. Job scheduling, execution, and monitoring, by contrast, are handled in DataStage Director.

  • 46. 

    Which client tool is used to import and export components? 

    • A.

      DS Manager

    • B.

      DataStage Director

    • C.

      DataStage Designer

    • D.

      DataStage Administrator

    Correct Answer
    A. DS Manager
    Explanation
    DS Manager is the correct answer because it is the client tool used for managing the DataStage repository, which includes importing and exporting components such as jobs, table definitions, and routines. DataStage Director is used for job monitoring and execution, DataStage Designer is used for designing and developing DataStage jobs, and DataStage Administrator is used for managing and configuring DataStage projects and resources.

  • 47. 

    Cache size can be changed in DS Administrator

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    The given statement is true because in DS Administrator, the cache size can be modified or adjusted. The DS Administrator is a tool used for managing and configuring various aspects of a system, including the cache. By accessing the DS Administrator, users can change the cache size to optimize performance and storage capacity based on their specific needs and requirements.

  • 48. 

    When we import a job, the job will be in which state?

    • A.

      State while exported

    • B.

      Aborted state

    • C.

      Not compiled state

    Correct Answer
    C. Not compiled state
    Explanation
    When a job is imported, it will be in the "Not compiled" state. This means that the job has not been compiled or validated yet. In order to run the job successfully, it needs to be compiled first to check for any errors or issues. Therefore, when a job is imported, it is initially in the "Not compiled" state until it is compiled and validated.

  • 49. 

    If a primary key uses multiple columns to identify a record then it is known as compound key

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    A compound key is used when multiple columns are combined to uniquely identify a record in a database table. This is useful when a single column cannot uniquely identify a record. Therefore, if a primary key uses multiple columns, it is known as a compound key. Hence, the given statement is true.

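    A small sketch of a compound key in plain Python, indexing records by a tuple of two columns; the table and column names are hypothetical.

      # Hypothetical compound key: (order_id, line_no) together identify a record.
      order_lines = {
          ("SO1001", 1): {"item": "Widget", "qty": 5},
          ("SO1001", 2): {"item": "Gadget", "qty": 2},
          ("SO1002", 1): {"item": "Widget", "qty": 1},
      }

      # Neither order_id nor line_no is unique on its own, but the pair is.
      print(order_lines[("SO1001", 2)])
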
  • 50. 

    Which symbol is used in defining the parameter in DataStage? 

    • A.

      $

    • B.

      #

    • C.

      @

    • D.

      &&

    Correct Answer
    A. $
    Explanation
    The symbol "$" is used in defining the parameter in DataStage. This symbol is commonly used to represent a parameter or variable in many programming languages. In DataStage, parameters are often used to pass values between different stages or jobs, allowing for greater flexibility and reusability of code. By using the "$" symbol, DataStage recognizes that the value following it is a parameter and should be treated as such.
