Dovetail Phase 2 (All Topics)

Reviewed by Editorial Team
By MishraAkshay
Community Contributor
Quizzes Created: 1 | Total Attempts: 1,295
  • 1. 

    The user has a report with “Year” and “Expenditure” displayed. The user wants to see the monthly expenditure for each year and decides to drill from “Year” to “Month”, but wants to keep both “Year” and “Month” in the report after drilling. Which option should be used during drilling to achieve this?

    • Keep the parent while drilling

    • Keep the child while drilling

    Correct Answer
    A. Keep the parent while drilling
    Explanation
    Drilling with the “Keep parent while drilling” option retains the parent attribute (“Year”) in the report alongside the drilled-to child attribute (“Month”).
About This Quiz

The 'Dovetail Phase 2 (All Topics)' quiz assesses knowledge across various database topics, including multidimensional clusters, node partitioning, block indexes, data prefetching, and statistics updates. It is designed to test and reinforce understanding of core database management principles.


Quiz Preview

  • 2. 

    During job run in the designer, the green link indicates

    • Success

    • Failure

    • Error

    • None

    Correct Answer
    A. Success
    Explanation
    The green link indicates success during a job run in the designer. This means that the job has completed without any errors or failures.


  • 3. 

    Different table spaces have different page sizes

    • True

    • False

    Correct Answer
    A. True
    Explanation
    In a database, a tablespace is a logical storage unit that contains tables, indexes, and other database objects. Each tablespace can have its own page size, which determines the size of the data blocks used to store data within the tablespace. This allows for flexibility in managing different types of data and optimizing storage efficiency. Therefore, it is true that different table spaces can have different page sizes.


  • 4. 

    Updating the schema is necessary when there is a change in

    • Fact

    • Attributes

    • Both the facts and attributes

    • None

    Correct Answer
    A. Both the facts and attributes
    Explanation
    When there is a change in both the facts and attributes of a schema, updating the schema becomes necessary. A schema is a blueprint or structure that defines the organization and relationships of data in a database. Facts are the actual data stored in the database, while attributes are the characteristics or properties of the data. If either the facts or attributes change, the schema needs to be updated to reflect these changes and ensure the integrity and consistency of the data.


  • 5. 

    Where do log files exist?

    • Ds directory

    • D:

    • C:

    • None

    Correct Answer
    A. Ds directory
    Explanation
    Log files exist in the "ds directory".


  • 6. 

    Is training required for creating metadata?

    • Yes

    • No

    Correct Answer
    A. Yes
    Explanation
    Training is required for creating metadata because metadata involves organizing and describing data in a standardized and consistent manner. It requires knowledge and understanding of the content, structure, and context of the data. Training helps individuals learn how to properly classify, tag, and annotate data to ensure accurate and meaningful metadata. Without training, there is a risk of inconsistent or incorrect metadata, which can lead to difficulties in searching, retrieving, and managing data effectively.


  • 7. 

    Good metadata must

    • Use standard terminologies

    • Mandatory elements must not be missed

    • Support archiving

    • All

    Correct Answer
    A. All
    Explanation
    A good metadata must use standard terminologies, ensure that mandatory elements are not missed, and support archiving. This means that it should follow established conventions and vocabularies, include all the necessary information required, and be able to be preserved for long-term access and retrieval.


  • 8. 

    Data quality does not refer to

    • Accuracy

    • Consistency

    • Integrity

    • Volume

    Correct Answer
    A. Volume
    Explanation
    Data quality refers to the accuracy, consistency, and integrity of data. It ensures that the data is correct, reliable, and free from errors. However, volume is not a factor that determines data quality. While the volume of data can be important for certain analyses or applications, it does not directly impact the quality of the data itself. Therefore, volume is not considered as a factor when evaluating data quality.


  • 9. 

    MDM services help in

    • Creation

    • Validation

    • Updation

    • Deletion

    • All

    Correct Answer
    A. All
    Explanation
    The correct answer is "All". MDM services help in all of the mentioned actions, including creation, validation, updation, and deletion. MDM, or Master Data Management, is a process that ensures consistent and accurate master data across an organization. It involves creating new data, validating existing data, updating outdated data, and deleting irrelevant data. By performing all of these actions, MDM services help maintain the integrity and quality of master data within an organization.


  • 10. 

    Tablespaces span across containers, and tables can span across tablespaces

    • True

    • False

    Correct Answer
    A. True
    Explanation
    This statement is true because tablespaces in a database can span across multiple containers, which are physical storage units. This allows for better organization and allocation of storage space. Additionally, tables within a database can also span across multiple tablespaces, providing flexibility in managing and distributing data within the database.


  • 11. 

     Data cleansing rules

    • Audit Data

    • Filter Data

    • Correct Data

    • All

    Correct Answer
    A. All
    Explanation
    The correct answer is "All" because data cleansing rules involve auditing, filtering, and correcting data. When cleaning data, it is important to first audit the existing data to identify any errors or inconsistencies. Then, filtering can be done to remove any irrelevant or duplicate data. Finally, the identified errors can be corrected to ensure the data is accurate and reliable. Therefore, all of these steps are essential in the data cleansing process.

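The three steps named in the explanation can be sketched in code. The field names and rules below are invented for illustration and are not taken from any specific cleansing tool:

```python
# A minimal sketch of the three data-cleansing steps: audit, filter, correct.

def audit(rows):
    """Audit: count records with a missing or malformed 'age' field."""
    return sum(1 for r in rows if not str(r.get("age", "")).isdigit())

def filter_rows(rows):
    """Filter: drop exact duplicate records."""
    seen, kept = set(), []
    for r in rows:
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            kept.append(r)
    return kept

def correct(rows):
    """Correct: normalize name casing and strip stray whitespace."""
    return [{**r, "name": r["name"].strip().title()} for r in rows]

raw = [
    {"name": " alice ", "age": "30"},
    {"name": " alice ", "age": "30"},   # exact duplicate, removed by filter
    {"name": "BOB", "age": "n/a"},      # malformed age, flagged by audit
]
issues = audit(raw)                     # 1 problem record found
clean = correct(filter_rows(raw))       # 2 deduplicated, normalized records
```

Auditing first gives a picture of data quality before any records are altered; filtering and correction then produce the cleansed set.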

  • 12. 

    Household matching is for

    • Business

    • Product

    • Customer

    • None of the above

    Correct Answer
    A. Customer
    Explanation
    Household matching is a process used to match customer data with household data in order to gain a better understanding of customer behavior and preferences. By identifying households, businesses can target their marketing efforts more effectively and provide personalized offers and recommendations to customers. Therefore, the correct answer for this question is "Customer" as household matching is primarily used for understanding customer data.


  • 13. 

    What is the technology used to match entities like “Bill” as short for “William” and “CNN” as an abbreviation for “Cable News Network”?

    • Name Match

    • Fuzzy Match

    • Spelling Match

    • Like Match

    Correct Answer
    A. Fuzzy Match
    Explanation
    Fuzzy match is the technology used to match entities like "Bill" as short for "William" and "CNN" as an abbreviation for "Cable News Network". Fuzzy matching algorithms are designed to find matches between strings that are similar but not exactly the same. In this case, the algorithm would identify the similarity between "Bill" and "William" and "CNN" and "Cable News Network" based on their phonetic or semantic similarities, allowing for a fuzzy match to be made.

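As a rough illustration of the idea (the nickname and abbreviation tables below are invented; production matching tools ship far larger curated dictionaries), fuzzy matching can combine alias expansion with a string-similarity score:

```python
from difflib import SequenceMatcher

# Hypothetical lookup tables for known aliases.
NICKNAMES = {"bill": "william", "bob": "robert"}
ABBREVIATIONS = {"cnn": "cable news network"}

def fuzzy_match(a, b, threshold=0.8):
    a, b = a.lower(), b.lower()
    # Expand known nicknames/abbreviations before scoring similarity.
    a = NICKNAMES.get(a, ABBREVIATIONS.get(a, a))
    b = NICKNAMES.get(b, ABBREVIATIONS.get(b, b))
    return SequenceMatcher(None, a, b).ratio() >= threshold

print(fuzzy_match("Bill", "William"))            # True: nickname expansion
print(fuzzy_match("CNN", "Cable News Network"))  # True: abbreviation expansion
```

After expansion, "Bill" and "William" compare as identical strings, which is why the pair scores above the threshold.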

  • 14. 

    An expression combining two different fact columns in a table (e.g., sales – discount) can be set as a fact expression

    • True

    • False

    Correct Answer
    A. True
    Explanation
    In a table, a fact column represents a measurable quantity or value, such as sales or discount. When we combine two different fact columns, such as sales and discount, in an expression, it can be considered a fact expression. This expression would represent a calculation or relationship between the two fact columns, providing additional insights or analysis. Therefore, the given statement is true.

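As a minimal sketch (the column names and values are invented), a fact expression like "sales – discount" is just a row-by-row calculation over two fact columns:

```python
# Each row carries two fact columns; the fact expression derives a third
# value from them when the fact is evaluated.
rows = [
    {"sales": 100.0, "discount": 10.0},
    {"sales": 250.0, "discount": 25.0},
]
net_sales = [r["sales"] - r["discount"] for r in rows]  # the "sales - discount" fact
```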

  • 15. 

    A DataStage job consists of

    • Links

    • Stages

    • Both

    • None

    Correct Answer
    A. Both
    Explanation
    A DataStage job consists of both links and stages. Links are used to connect the stages and define the flow of data between them. Stages, on the other hand, are the building blocks of a DataStage job and perform various operations such as data extraction, transformation, and loading. Therefore, both links and stages are essential components of a DataStage job.


  • 16. 

    When creating databases or users, specifying the parent is necessary

    • True

    • False

    Correct Answer
    A. True
    Explanation
    When creating a database or users in a system, specifying its parent is necessary because it helps in organizing and managing the hierarchy and relationships between different entities. By specifying the parent, it becomes easier to understand the context and dependencies of the database or user within the system. This information is crucial for effective administration and access control, as well as for maintaining data integrity and consistency. Therefore, it is important to specify the parent when creating a database or user.


  • 17. 

    A fact table in the centre surrounded by dimension tables which are further split up into more dimension tables is called a

    • Star schema

    • Snowflake schema

    • None

    • Both

    Correct Answer
    A. Snowflake schema
    Explanation
    A snowflake schema is a type of database schema in which the dimension tables are further normalized into multiple levels of dimension tables. In this schema, the fact table is at the center, surrounded by dimension tables that are split up into additional dimension tables. This design allows for more efficient storage and retrieval of data, as well as better data integrity and flexibility in querying the database.

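A toy snowflake layout makes the "dimension split into further dimensions" point concrete (the table contents below are invented). Resolving a fact row takes one extra lookup compared with a star schema, because the Product dimension is itself normalized into a Category table:

```python
# Fact table at the centre.
fact_sales = [{"product_id": 1, "amount": 500}]

# First-level dimension, normalized: it references a further dimension table.
dim_product = {1: {"name": "Laptop", "category_id": 10}}

# Second-level dimension split off from Product -- the "snowflaking".
dim_category = {10: {"name": "Electronics"}}

row = fact_sales[0]
product = dim_product[row["product_id"]]       # lookup 1: fact -> dimension
category = dim_category[product["category_id"]]  # lookup 2: dimension -> sub-dimension
```

In a star schema the category name would simply be a denormalized column inside `dim_product`, trading storage for fewer joins.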

  • 18. 

    There is an option “Generate report” in DataStage Designer.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    The statement is true because DataStage Designer does have an option called "Generate report." This option allows users to generate reports based on the data and transformations created in DataStage Designer. This feature is useful for analyzing and documenting the data integration processes in DataStage.


  • 19. 

    What is meant by pre-fetching?           

    • Fetching data from hard disk to buffer pool

    • Fetching data from page to buffer pool

    • None

    Correct Answer
    A. Fetching data from hard disk to buffer pool
    Explanation
    Pre-fetching refers to the process of retrieving data from the hard disk and storing it in the buffer pool before it is actually needed. This is done in order to improve performance by reducing the time it takes to access the data when it is required. By pre-fetching data from the hard disk to the buffer pool, the system can anticipate future data needs and have it readily available, minimizing the delay in retrieving it from the slower hard disk.

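A small simulation (page contents and the prefetch quantity are invented) shows why prefetching cuts down on disk trips during a sequential scan: one trip to disk fills the buffer pool with several pages at once:

```python
# Simulated "disk" of 100 pages and an initially empty buffer pool.
DISK = {page: f"data-{page}" for page in range(100)}
buffer_pool = {}
disk_reads = []          # record each trip to disk

def read_page(page, prefetch=4):
    if page not in buffer_pool:
        disk_reads.append(page)
        # Prefetch: pull the requested page AND the next pages in one trip.
        for p in range(page, min(page + prefetch, 100)):
            buffer_pool[p] = DISK[p]
    return buffer_pool[page]

for p in range(8):       # a sequential scan of pages 0..7
    read_page(p)
```

Eight page reads cost only two disk trips (at pages 0 and 4); the remaining six requests are satisfied from the buffer pool.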

  • 20. 

    What is ASLheapsz?

    • It is the Communication buffer between the local application and its associated Agent.

    • Error Agent

    • Local Error Handler

    Correct Answer
    A. It is the Communication buffer between the local application and its associated Agent.
    Explanation
    ASLheapsz refers to the communication buffer that facilitates the exchange of information between a local application and its associated Agent. This buffer allows for seamless communication, enabling the local application to send and receive data to and from the Agent. It plays a crucial role in ensuring smooth and efficient communication between the two entities.


  • 21. 

    An attribute with more than one ID column is called a

    • Compound attribute

    • Multiple attribute

    • Common Attribute

    Correct Answer
    A. Compound attribute
    Explanation
    A compound attribute refers to a single attribute that consists of multiple sub-attributes. In this case, the attribute being referred to is an ID column, and it is stated that there is more than one ID column. This suggests that the ID column is composed of multiple sub-attributes, making it a compound attribute.


  • 22. 

    Schema updation can be done by

    • Stop and start the MicroStrategy Intelligence Server

    • Disconnect and reconnect to the project source

    • All

    • Manually update the schema

    Correct Answer
    A. All
    Explanation
    The correct answer is "All". This means that schema updation can be done by any of the mentioned methods, including stopping and starting the MicroStrategy Intelligence Server, disconnecting and reconnecting to the project source, and manually updating the schema.


  • 23. 

    A fact can have different expressions based on the table against which it is evaluated.

    • True

    • False

    Correct Answer
    A. True
    Explanation
    This statement is true because a fact can be expressed in different ways depending on the context or perspective from which it is evaluated. Different tables or frameworks can provide different interpretations or representations of the same fact. Therefore, the expression of a fact can vary based on the table or framework used for evaluation.


  • 24. 

    The maximum number of attributes that can be set as parent to another attribute is

    • 2

    • 5

    • 7

    • No Limit

    Correct Answer
    A. No Limit
    Explanation
    There is no limit to the number of attributes that can be set as parents to another attribute. This means that an attribute can have any number of parent attributes.


  • 25. 

    Avg(Sum(Fact1) {~+, month+}) {~+, quarter+} is an example of 

    • Simple Metric

    • Compound Metric

    • Nested Metric

    Correct Answer
    A. Nested Metric
    Explanation
    The given expression Avg(Sum(Fact1) {~+, month+}) {~+, quarter+} is an example of a Nested Metric. This is because it involves multiple levels of aggregation and grouping. The inner expression Sum(Fact1) {~+, month+} calculates the sum of the metric Fact1 at the month level, and the outer expression Avg() further aggregates this result at the quarter level. The use of multiple levels of aggregation and grouping makes it a nested metric.

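The two-level aggregation in the metric can be sketched in plain Python (the sample figures are invented): the inner `Sum(Fact1)` produces one total per month, and the outer `Avg` averages those monthly totals within each quarter:

```python
# Monthly sums, i.e. the result of the inner Sum(Fact1) {~+, month+}.
monthly = {("Q1", "Jan"): 10, ("Q1", "Feb"): 20, ("Q1", "Mar"): 30,
           ("Q2", "Apr"): 40, ("Q2", "May"): 60}

# Outer aggregation: Avg(...) {~+, quarter+} over the monthly sums.
quarters = {}
for (quarter, _month), total in monthly.items():
    quarters.setdefault(quarter, []).append(total)
avg_by_quarter = {q: sum(v) / len(v) for q, v in quarters.items()}
```

Because one aggregation function is applied to the output of another at a different level, the metric is nested rather than simple or compound.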

  • 26. 

    In which of the following stages a job cannot be run?

    • Abort

    • Compiled

    • Reset

    • None

    Correct Answer
    A. Abort
    Explanation
    In the Abort stage, a job cannot be run because it is intentionally terminated or cancelled before it can be executed. This stage usually occurs when there is an error or issue that prevents the job from running successfully. Therefore, the job cannot proceed further and cannot be run in the Abort stage.


  • 27. 

    Which tool extracts data from a textual source?

    • Conversion

    • Mark-Up

    • Extraction

    • None

    Correct Answer
    A. Extraction
    Explanation
    Extraction is the correct answer because it refers to the process of extracting data from a textual source. This can involve using specific tools or techniques to extract relevant information from text documents, websites, or other sources. Extraction is commonly used in data mining, natural language processing, and information retrieval to gather data and transform it into a structured format that can be analyzed or used for further processing.


  • 28. 

    Which of the following types of mapping allows the engine to perform joins on dissimilar column names?

    • Implicit expression

    • Derived expression

    • Simple expression

    • Heterogeneous mapping

    Correct Answer
    A. Heterogeneous mapping
    Explanation
    Heterogeneous mapping allows the engine to perform joins on dissimilar column names. This type of mapping is used when there are columns with different names in the tables that need to be joined. It allows the engine to match and join the columns based on their data and not just their names. This is useful when working with databases that have inconsistent naming conventions or when integrating data from different sources.

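The situation heterogeneous mapping covers can be sketched as a join between two tables whose key columns carry different names (`cust_id` vs. `customer_no`; all names and values below are invented):

```python
# Two tables keyed on the same logical entity under different column names.
orders = [{"cust_id": 1, "total": 99}, {"cust_id": 2, "total": 45}]
customers = [{"customer_no": 1, "name": "Acme"}, {"customer_no": 2, "name": "Beta"}]

# The mapping declares that the two differently named columns mean the same
# thing, so the engine can join on them despite the name mismatch.
lookup = {c["customer_no"]: c["name"] for c in customers}
joined = [{"name": lookup[o["cust_id"]], "total": o["total"]} for o in orders]
```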

  • 29. 

    Database partition is known as

    • Leaf

    • Node

    • Root

    Correct Answer
    A. Node
    Explanation
    In a database, partitioning refers to the process of dividing a large database into smaller, more manageable parts called partitions. Each partition is then stored on a separate storage device or server. In this context, a "node" refers to a unit or component in a distributed database system that stores and manages a partition of the database. Therefore, the correct answer is "Node" because it represents a partition in a database.


  • 30. 

    What does page cleaner do?

    • Data from the buffer pool is written to the disk

    • No Buffer

    • Data from the buffer pool is written to the page

    Correct Answer
    A. Data from the buffer pool is written to the disk
    Explanation
    The page cleaner is responsible for writing data from the buffer pool to the disk. The buffer pool is a cache that holds frequently accessed data, and the page cleaner ensures that any changes made to this data are persisted to the disk. This process helps to prevent data loss in the event of a system failure or shutdown. By regularly writing the buffered data to the disk, the page cleaner helps to maintain data integrity and ensure that the most up-to-date information is stored persistently.

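A toy simulation (the page layout and dirty-flag scheme are invented for illustration) captures the page cleaner's job: write dirty buffer-pool pages back to disk so the two copies converge:

```python
# "Disk" and a buffer pool where page 1 has been modified (dirty).
disk = {1: "old", 2: "old"}
buffer_pool = {1: {"data": "new", "dirty": True},
               2: {"data": "old", "dirty": False}}

def page_cleaner():
    """Flush dirty pages from the buffer pool to disk."""
    flushed = []
    for page_no, page in buffer_pool.items():
        if page["dirty"]:
            disk[page_no] = page["data"]   # write buffer contents to disk
            page["dirty"] = False
            flushed.append(page_no)
    return flushed

flushed = page_cleaner()
```

Only the dirty page is written; clean pages need no I/O, which is what makes asynchronous page cleaning cheap relative to flushing everything.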

  • 31. 

    A database partition is not given complete control of Hardware resource  

    • In Logical Partition

    • In Primary Partition

    • In Secondary Partition

    Correct Answer
    A. In Logical Partition
    Explanation
    A database partition is not given complete control of hardware resources in a logical partition. In a logical partition, the hardware resources are shared among multiple partitions, including the database partition. This means that the database partition does not have exclusive control over the hardware resources and may have to compete with other partitions for their usage. This can impact the performance and efficiency of the database partition as it may not be able to utilize the hardware resources to their full potential.


  • 32. 

    Schema objects are  

    • Facts, Attributes

    • Hierarchies

    • Transformation

    • Partition mapping

    • All

    Correct Answer
    A. All
    Explanation
    The correct answer is "All". In MicroStrategy, schema objects include facts, attributes, hierarchies, transformations, and partition mappings. Schema objects describe the logical model that maps report requests onto the warehouse tables, so all of the listed options are schema objects.


  • 33. 

    Types of partition mapping?

    • Server Level Partitioning

    • Application Level Partitioning

    • Both

    Correct Answer
    A. Both
    Explanation
    The correct answer is "Both" because there are two types of partition mapping: server level partitioning and application level partitioning. Server level partitioning involves dividing data across multiple servers or nodes, while application level partitioning involves dividing data within a single server or node. Therefore, both types of partition mapping are valid and can be used depending on the specific requirements and architecture of the system.


  • 34. 

    UNIX command to run a DataStage job

    • Ds job

    • Ds/job

    • None

    Correct Answer
    A. Ds job
    Explanation
    The correct answer is "ds job", referring to the dsjob command-line utility, which is used to run a DataStage job from UNIX (for example, dsjob -run project jobname).


  • 35. 

    Are multiple selections possible in DataStage?

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Multiple selections are possible in DataStage. This means that users can select and process multiple data sets or sources simultaneously within the DataStage environment. The ability to make multiple selections allows for efficient and streamlined data integration and processing, enabling users to handle large volumes of data more effectively.


  • 36. 

    A project source can have how many projects?

    • 1

    • 5

    • 2

    • Many

    Correct Answer
    A. Many
    Explanation
    The answer "Many" suggests that a project source can have an unlimited number of projects. This means that there is no specific limit or restriction on the number of projects that can be associated with a project source.


  • 37. 

    Order of execution in DataStage

    • Stage variable-> Constraints-> Derivations

    • Stage variable->Derivations-> Constraints

    • Derivations-> Constraints->Stage variable

    • None

    Correct Answer
    A. Stage variable-> Constraints-> Derivations
    Explanation
    The correct answer is "Stage variable-> Constraints-> Derivations". In DataStage, the order of execution is important to ensure that the data is processed correctly. Stage variables are typically used to store intermediate values during the data transformation process. Constraints are used to define rules or conditions that must be met for the data to be processed. Derivations are transformations applied to the data. Therefore, the correct order of execution is to first process the stage variables, then apply any constraints, and finally perform the derivations on the data.

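The per-row evaluation order can be sketched as a single function (the column names, constraint, and derivation below are invented): stage variables are computed first, the constraint then decides whether the row passes, and only then are the output derivations evaluated:

```python
def process_row(row):
    # 1. Stage variable: an intermediate value computed once per row.
    sv_net = row["amount"] - row["discount"]
    # 2. Constraint: only rows with a positive net amount flow through.
    if sv_net <= 0:
        return None
    # 3. Derivations: output columns derived from the stage variable.
    return {"net": sv_net, "taxed": round(sv_net * 1.2, 2)}

rows = [{"amount": 100, "discount": 20},   # passes the constraint
        {"amount": 10, "discount": 15}]    # rejected: net is negative
out = [r for r in (process_row(x) for x in rows) if r is not None]
```

Computing the stage variable first means the same intermediate value can feed both the constraint and several derivations without being recalculated.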

  • 38. 

    DataStage is a

    • Reporting Tool

    • ETL tool

    • Analysis Tool

    • MetadataTool

    Correct Answer
    A. ETL tool
    Explanation
    DataStage is an ETL (Extract, Transform, Load) tool. ETL tools are used to extract data from various sources, transform it into a suitable format, and load it into a target database or data warehouse. DataStage specifically focuses on these tasks, allowing users to design and manage data integration processes. It provides a graphical interface for designing workflows and transformations, making it easier to extract, transform, and load data from different systems and formats. Therefore, DataStage is primarily known as an ETL tool.


  • 39. 

    Is market basket analysis a BI & DW solution?

    • True

    • False

    Correct Answer
    A. True
    Explanation
    Market basket analysis is a BI (Business Intelligence) and DW (Data Warehousing) solution. It is a technique used to identify associations and relationships between items that are frequently purchased together by customers. This analysis helps businesses understand customer behavior, improve product placement, optimize pricing strategies, and enhance cross-selling and upselling opportunities. By analyzing transactional data, market basket analysis provides valuable insights that can be used to make informed business decisions and drive growth.


  • 40. 

    Which is not a DB2 licensing method?

    • CPU

    • User

    • Memory

    Correct Answer
    A. Memory
    Explanation
    DB2 licensing is typically based on processor capacity (CPU) or on the number of authorized users. Memory is not a metric used for DB2 licensing, so it is the correct answer.


  • 41. 

    OLAP services

    • Report objects

    • View filters

    • Derived metrics

    • All

    Correct Answer
    A. All
    Explanation
    The correct answer is "All" because all of the mentioned options (report objects, view filters, derived metrics) are part of OLAP services. OLAP services are used for analyzing multidimensional data and these components are essential for performing various operations and calculations on the data. Therefore, selecting "All" implies that all of these components are included in OLAP services.


  • 42. 

    In two tier architecture, how many ODBC connections are there?

    • 2

    • 4

    • 3

    • 1

    Correct Answer
    A. 2
    Explanation
    In a two-tier configuration, the client connects directly to the databases, which requires two ODBC connections: one to the metadata repository, which stores the project definitions, and one to the data warehouse, which stores the business data. Therefore, there are two ODBC connections in a two-tier architecture.


  • 43. 

    In a hashed file, if you add a row with a duplicate value in the key column,

    • Latest row is retained.

    • First row is retained.

    • No row is retained.

    Correct Answer
    A. Latest row is retained.
    Explanation
    When a row is written to a hashed file with a key value that already exists, the latest row is retained. This means that if multiple rows share the same key value, only the most recent row is kept in the file; earlier rows with that key are overwritten, ensuring that only the latest information is stored.

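The overwrite behaviour is the same as a key-value store, which a hashed file essentially is. A minimal sketch (the key and row contents are invented):

```python
# A hashed file behaves like a dict keyed on the key column: writing a second
# row with the same key replaces the first, so the latest row is retained.
hashed_file = {}

def write_row(key, row):
    hashed_file[key] = row   # same key: the previous row is overwritten

write_row("CUST-1", {"name": "Alice", "city": "Pune"})
write_row("CUST-1", {"name": "Alice", "city": "Mumbai"})  # duplicate key
```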

  • 44. 

    DS server in UNIX can be started by

    • DSSTART

    • ../univ

    • ./univ

    • None

    Correct Answer
    A. DSSTART
    Explanation
    The correct answer is DSSTART. This is because DSSTART is a command used to start the DS server in UNIX. The other options, "../univ", "./univ", and "None", are not valid commands for starting the DS server.


  • 45. 

    When we import a job, the job will be in which state?

    • Not compiled state

    • Aborted state

    • State while exported

    • None

    Correct Answer
    A. Not compiled state
    Explanation
    When we import a job, the job will be in the "Not compiled state". This means that the job has been imported but has not yet been compiled or executed. The job is not ready to be run until it is compiled, which involves checking for any errors or issues in the code. Therefore, when a job is imported, it initially remains in the not compiled state until further action is taken.


  • 46. 

    What should happen if two sources merge together?

    • Metadata must merge together

    • Metadata must not merge together

    • Duplication

    • None

    Correct Answer
    A. Metadata must merge together
    Explanation
    When two sources merge together, it is important for their metadata to also merge. Metadata refers to the information about the data, such as its description, format, source, and other relevant details. By merging the metadata, it ensures that all the necessary information from both sources is combined and consolidated. This helps in maintaining data integrity, avoiding duplication, and ensuring that the merged data is properly organized and documented.


  • 47. 

    Which one is false?

    • Data masking and mask pattern analysis are used in string substitution.

    • Postal department contains Residential address.

    • Global data router is used to scan specific address.

    • Router is to determine the country to which the addresses belong.

    • Only USA has detailed address level

    • Evaluate data quality before building a fully fledged data warehouse.

    Correct Answer
    A. Only USA has detailed address level
    Explanation
    The statement "Only USA has detailed address level" is false because many countries have detailed address levels, not just the USA.


  • 48. 

    “You can have multiple jobs with the same name”. Which of the following options is true about the above statement?

    • Yes, it is possible in case of PX jobs

    • It is not possible in case of server jobs

    • Not possible

    • Yes, if they exist in different category

    Correct Answer
    A. Yes, if they exist in different category
    Explanation
    The statement is suggesting that having multiple jobs with the same name is possible, but only if they exist in different categories. This means that if two jobs have the same name but belong to different categories, it is acceptable to have them both. However, if they belong to the same category, it is not possible to have multiple jobs with the same name.


  • 49. 

    MPP –

    • Massively Parallel Processing

    • Maximum Parallel Processing

    • Minimum Parallel Processing

    • Marginal Parallel Processing

    Correct Answer
    A. Massively Parallel Processing
    Explanation
    Massively Parallel Processing (MPP) refers to a computing architecture that uses multiple processors to perform tasks simultaneously. It allows for the efficient processing of large amounts of data by dividing the workload into smaller tasks that can be executed in parallel. This approach significantly speeds up data processing and analysis, making it suitable for applications that require high-performance computing and handling big data. Therefore, the given answer, Massively Parallel Processing, accurately describes the concept and its significance in computing.

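The divide-and-conquer idea behind MPP can be illustrated with a toy parallel computation (the workload and chunking scheme are invented; a real MPP database partitions data across nodes, not just processes):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker computes the sum of squares for its own partition."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the workload into partitions, one batch per worker process.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        # Partitions are processed simultaneously; partial results combined.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # → 332833500
```

The final sum is identical to the serial computation; the speedup comes purely from the partitions being processed in parallel.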

Quiz Review Timeline (Updated): Mar 21, 2023


  • Current Version
  • Mar 21, 2023
    Quiz Edited by
    ProProfs Editorial Team
  • Sep 29, 2016
    Quiz Created by
    MishraAkshay