Dovetail Phase 2 (All Topics)

By MishraAkshay, Community Contributor | Quizzes Created: 1 | Attempts: 1,295 | Questions: 200

1. During a job run in the Designer, the green link indicates

Explanation

The green link indicates success during a job run in the designer. This means that the job has completed without any errors or failures.

About This Quiz
Dovetail Phase 2 (All Topics) - Quiz

The 'Dovetail Phase 2 (All Topics)' quiz assesses knowledge across various database topics, including multidimensional clusters, node partitioning, block indexes, data prefetching, and statistics updates. It is designed to test and reinforce understanding of core database management principles.

2. The user has a report with "Year" and "Expenditure" displayed.  The user wants to see the monthly expenditure for each year and decides to drill from "Year" to "Month".  But the user wants to see both "Year" and "Month" in the report after drilling.  What is the option to be used during drilling to achieve the desired behavior?

Explanation

By selecting the option "Keep the parent while drilling," the user ensures that both the "Year" and "Month" columns will be displayed in the report after drilling. This means that the user will still be able to view the original level of aggregation (the parent level) while also seeing the drilled-down level of detail (the child level). This option allows for a comprehensive view of the monthly expenditure for each year without losing the context of the overall yearly expenditure.

3. MDM services help in

Explanation

The correct answer is "All". MDM services help in all of the mentioned actions, including creation, validation, updation, and deletion. MDM, or Master Data Management, is a process that ensures consistent and accurate master data across an organization. It involves creating new data, validating existing data, updating outdated data, and deleting irrelevant data. By performing all of these actions, MDM services help maintain the integrity and quality of master data within an organization.

4. Different table spaces have different page sizes

Explanation

In a database, a tablespace is a logical storage unit that contains tables, indexes, and other database objects. Each tablespace can have its own page size, which determines the size of the data blocks used to store data within the tablespace. This allows for flexibility in managing different types of data and optimizing storage efficiency. Therefore, it is true that different table spaces can have different page sizes.

5. Is training required for creating metadata?

Explanation

Training is required for creating metadata because metadata involves organizing and describing data in a standardized and consistent manner. It requires knowledge and understanding of the content, structure, and context of the data. Training helps individuals learn how to properly classify, tag, and annotate data to ensure accurate and meaningful metadata. Without training, there is a risk of inconsistent or incorrect metadata, which can lead to difficulties in searching, retrieving, and managing data effectively.

6. Updating the schema is necessary when there is a change in

Explanation

When there is a change in both the facts and attributes of a schema, updating the schema becomes necessary. A schema is a blueprint or structure that defines the organization and relationships of data in a database. Facts are the actual data stored in the database, while attributes are the characteristics or properties of the data. If either the facts or attributes change, the schema needs to be updated to reflect these changes and ensure the integrity and consistency of the data.

7. Good metadata must

Explanation

Good metadata must use standard terminology, ensure that mandatory elements are not missed, and support archiving. This means that it should follow established conventions and vocabularies, include all the necessary information, and be able to be preserved for long-term access and retrieval.

8. Where do log files exist?

Explanation

Log files exist in the "ds directory".

9. Data quality does not refer to

Explanation

Data quality refers to the accuracy, consistency, and integrity of data. It ensures that the data is correct, reliable, and free from errors. However, volume is not a factor that determines data quality. While the volume of data can be important for certain analyses or applications, it does not directly impact the quality of the data itself. Therefore, volume is not considered as a factor when evaluating data quality.

10. Tablespaces span across containers, and tables can span across tablespaces

Explanation

This statement is true because tablespaces in a database can span across multiple containers, which are physical storage units. This allows for better organization and allocation of storage space. Additionally, tables within a database can also span across multiple tablespaces, providing flexibility in managing and distributing data within the database.

11. What is the technology used to match entities like "Bill" as short for "William" and "CNN" as an abbreviation for "Cable News Network"?

Explanation

Fuzzy match is the technology used to match entities like "Bill" as short for "William" and "CNN" as an abbreviation for "Cable News Network". Fuzzy matching algorithms are designed to find matches between strings that are similar but not exactly the same. In this case, the algorithm would identify the similarity between "Bill" and "William" and "CNN" and "Cable News Network" based on their phonetic or semantic similarities, allowing for a fuzzy match to be made.
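
As an illustration only (not the specific matching engine the quiz refers to), here is a minimal fuzzy-match sketch in Python using the standard-library difflib, with a small hypothetical alias table for nicknames and abbreviations:

```python
from difflib import SequenceMatcher

# Hypothetical alias table bridging nicknames and abbreviations.
ALIASES = {"bill": "william", "cnn": "cable news network"}

def fuzzy_match(a: str, b: str, threshold: float = 0.6) -> bool:
    """Return True when two entity names are close enough to be treated as the same."""
    a, b = a.lower().strip(), b.lower().strip()
    # Expand known nicknames/abbreviations before scoring similarity.
    a, b = ALIASES.get(a, a), ALIASES.get(b, b)
    return SequenceMatcher(None, a, b).ratio() >= threshold

print(fuzzy_match("Bill", "William"))                          # True, via the alias table
print(fuzzy_match("Cable News Netwrk", "Cable News Network"))  # True, via string similarity
```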

12. Household matching is for

Explanation

Household matching is a process used to match customer data with household data in order to gain a better understanding of customer behavior and preferences. By identifying households, businesses can target their marketing efforts more effectively and provide personalized offers and recommendations to customers. Therefore, the correct answer for this question is "Customer" as household matching is primarily used for understanding customer data.

13.  Data cleansing rules

Explanation

The correct answer is "All" because data cleansing rules involve auditing, filtering, and correcting data. When cleaning data, it is important to first audit the existing data to identify any errors or inconsistencies. Then, filtering can be done to remove any irrelevant or duplicate data. Finally, the identified errors can be corrected to ensure the data is accurate and reliable. Therefore, all of these steps are essential in the data cleansing process.

14. A DataStage job consists of

Explanation

A DataStage job consists of both links and stages. Links connect the stages and define the flow of data between them. Stages, on the other hand, are the building blocks of a DataStage job and perform various operations such as data extraction, transformation, and loading. Therefore, both links and stages are essential components of a DataStage job.

15. There is an option "Generate report" in DataStage Designer.

Explanation

The statement is true because DataStage Designer does have an option called "Generate report." This option allows users to generate reports based on the data and transformations created in DataStage Designer. This feature is useful for analyzing and documenting the data integration processes in DataStage.

16. When creating database or users, specifying its parent is necessary

Explanation

When creating a database or users in a system, specifying its parent is necessary because it helps in organizing and managing the hierarchy and relationships between different entities. By specifying the parent, it becomes easier to understand the context and dependencies of the database or user within the system. This information is crucial for effective administration and access control, as well as for maintaining data integrity and consistency. Therefore, it is important to specify the parent when creating a database or user.

17. An expression combining two different fact columns in a table (e.g., sales – discount) can be set as a fact expression

Explanation

In a table, a fact column represents a measurable quantity or value, such as sales or discount. When we combine two different fact columns, such as sales and discount, in an expression, it can be considered a fact expression. This expression would represent a calculation or relationship between the two fact columns, providing additional insights or analysis. Therefore, the given statement is true.

18. A fact table in the centre surrounded by dimension tables which are themselves split up into further dimension tables is called

Explanation

A snowflake schema is a type of database schema in which the dimension tables are further normalized into multiple levels of dimension tables. In this schema, the fact table is at the center, surrounded by dimension tables that are split up into additional dimension tables. This design allows for more efficient storage and retrieval of data, as well as better data integrity and flexibility in querying the database.

19. Avg(Sum(Fact1) {~+, month+}) {~+, quarter+} is an example of 

Explanation

The given expression Avg(Sum(Fact1) {~+, month+}) {~+, quarter+} is an example of a Nested Metric. This is because it involves multiple levels of aggregation and grouping. The inner expression Sum(Fact1) {~+, month+} calculates the sum of the metric Fact1 at the month level, and the outer expression Avg() further aggregates this result at the quarter level. The use of multiple levels of aggregation and grouping makes it a nested metric.
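
A rough Python equivalent of this two-level calculation, using hypothetical sample rows (in practice the MicroStrategy engine generates the SQL): the inner pass sums Fact1 by month, and the outer pass averages those monthly sums by quarter.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (quarter, month, fact1) rows.
rows = [("Q1", "Jan", 10), ("Q1", "Jan", 5), ("Q1", "Feb", 20),
        ("Q1", "Mar", 30), ("Q2", "Apr", 40), ("Q2", "May", 60)]

# Inner level: Sum(Fact1) grouped by month (the quarter is kept for the outer pass).
monthly = defaultdict(int)
for quarter, month, fact1 in rows:
    monthly[(quarter, month)] += fact1

# Outer level: Avg(...) of the monthly sums, grouped by quarter.
quarterly = defaultdict(list)
for (quarter, _month), month_total in monthly.items():
    quarterly[quarter].append(month_total)

print({q: mean(totals) for q, totals in quarterly.items()})
# Q1: average of 15, 20, 30 -> about 21.7; Q2: average of 40, 60 -> 50
```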

20. More than one ID column 

Explanation

In MicroStrategy, a compound attribute is an attribute whose ID form is made up of more than one ID column. When more than one ID column is required to uniquely identify the attribute's elements, the attribute is defined as a compound attribute.

21. Which of the following types of mapping allows the engine to perform joins on dissimilar column names?

Explanation

Heterogeneous mapping allows the engine to perform joins on dissimilar column names. This type of mapping is used when there are columns with different names in the tables that need to be joined. It allows the engine to match and join the columns based on their data and not just their names. This is useful when working with databases that have inconsistent naming conventions or when integrating data from different sources.

22. A schema update can be done by

Explanation

The correct answer is "All". This means that schema updation can be done by any of the mentioned methods, including stopping and starting the microstrategy intelligence server, disconnecting and reconnecting to the project source, and manually updating the schema.

23. Which tool extracts data from a textual source?

Explanation

Extraction is the correct answer because it refers to the process of extracting data from a textual source. This can involve using specific tools or techniques to extract relevant information from text documents, websites, or other sources. Extraction is commonly used in data mining, natural language processing, and information retrieval to gather data and transform it into a structured format that can be analyzed or used for further processing.

24. What is ASLheapsz?

Explanation

ASLheapsz refers to the communication buffer that facilitates the exchange of information between a local application and its associated Agent. This buffer allows for seamless communication, enabling the local application to send and receive data to and from the Agent. It plays a crucial role in ensuring smooth and efficient communication between the two entities.

25. A fact can have different expressions based on the table against which it is evaluated.

Explanation

This statement is true because a fact can be expressed in different ways depending on the context or perspective from which it is evaluated. Different tables or frameworks can provide different interpretations or representations of the same fact. Therefore, the expression of a fact can vary based on the table or framework used for evaluation.

26. In which of the following stages a job cannot be run?

Explanation

In the Abort stage, a job cannot be run because it is intentionally terminated or cancelled before it can be executed. This stage usually occurs when there is an error or issue that prevents the job from running successfully. Therefore, the job cannot proceed further and cannot be run in the Abort stage.

27. The maximum number of attributes that can be set as parent to another attribute is

Explanation

There is no limit to the number of attributes that can be set as parents to another attribute. This means that an attribute can have any number of parent attributes.

28. What is meant by pre-fetching?           

Explanation

Pre-fetching refers to the process of retrieving data from the hard disk and storing it in the buffer pool before it is actually needed. This is done in order to improve performance by reducing the time it takes to access the data when it is required. By pre-fetching data from the hard disk to the buffer pool, the system can anticipate future data needs and have it readily available, minimizing the delay in retrieving it from the slower hard disk.

29. A database partition is not given complete control of hardware resources

Explanation

A database partition is not given complete control of hardware resources in a logical partition. In a logical partition, the hardware resources are shared among multiple partitions, including the database partition. This means that the database partition does not have exclusive control over the hardware resources and may have to compete with other partitions for their usage. This can impact the performance and efficiency of the database partition as it may not be able to utilize the hardware resources to their full potential.

30. UNIX command to run a DataStage job

Explanation

The correct answer is "ds job". This is the UNIX command that is used to run a Datastage job.

31. Schema objects are  

Explanation

The correct answer is "All". This means that schema objects include all of the options listed: facts, attributes, hierarchies, transformation, and partition mapping. In database management, a schema is a logical container for organizing and grouping related database objects. These objects can include tables, views, indexes, procedures, and more. So, all of these options are valid examples of schema objects.

32. Types of partition mapping?

Explanation

The correct answer is "Both" because there are two types of partition mapping: server level partitioning and application level partitioning. Server level partitioning involves dividing data across multiple servers or nodes, while application level partitioning involves dividing data within a single server or node. Therefore, both types of partition mapping are valid and can be used depending on the specific requirements and architecture of the system.

33. Database partition is known as

Explanation

In a database, partitioning refers to the process of dividing a large database into smaller, more manageable parts called partitions. Each partition is then stored on a separate storage device or server. In this context, a "node" refers to a unit or component in a distributed database system that stores and manages a partition of the database. Therefore, the correct answer is "Node" because it represents a partition in a database.

34. What does page cleaner do?

Explanation

The page cleaner is responsible for writing data from the buffer pool to the disk. The buffer pool is a cache that holds frequently accessed data, and the page cleaner ensures that any changes made to this data are persisted to the disk. This process helps to prevent data loss in the event of a system failure or shutdown. By regularly writing the buffered data to the disk, the page cleaner helps to maintain data integrity and ensure that the most up-to-date information is stored persistently.

35. Order of execution in DataStage

Explanation

The correct answer is "Stage variable-> Constraints-> Derivations". In datastage, the order of execution is important to ensure that the data is processed correctly. Stage variables are typically used to store intermediate values during the data transformation process. Constraints are used to define rules or conditions that must be met for the data to be processed. Derivations are transformations applied to the data. Therefore, the correct order of execution is to first process the stage variables, then apply any constraints, and finally perform the derivations on the data.

36. DataStage is a

Explanation

DataStage is an ETL (Extract, Transform, Load) tool. ETL tools are used to extract data from various sources, transform it into a suitable format, and load it into a target database or data warehouse. DataStage specifically focuses on these tasks, allowing users to design and manage data integration processes. It provides a graphical interface for designing workflows and transformations, making it easier to extract, transform, and load data from different systems and formats. Therefore, DataStage is primarily known as an ETL tool.

37. Are multiple selections possible in DataStage?

Explanation

Multiple selections are possible in DataStage. This means that users can select and process multiple data sets or sources simultaneously within the DataStage environment. The ability to make multiple selections allows for efficient and streamlined data integration and processing, enabling users to handle large volumes of data more effectively.

38. A project source can have how many projects?

Explanation

The answer "Many" suggests that a project source can have an unlimited number of projects. This means that there is no specific limit or restriction on the number of projects that can be associated with a project source.

39. Is market basket analysis a BI & DW solution?

Explanation

Market basket analysis is a BI (Business Intelligence) and DW (Data Warehousing) solution. It is a technique used to identify associations and relationships between items that are frequently purchased together by customers. This analysis helps businesses understand customer behavior, improve product placement, optimize pricing strategies, and enhance cross-selling and upselling opportunities. By analyzing transactional data, market basket analysis provides valuable insights that can be used to make informed business decisions and drive growth.

40. OLAP services

Explanation

The correct answer is "All" because all of the mentioned options (report objects, view filters, derived metrics) are part of OLAP services. OLAP services are used for analyzing multidimensional data and these components are essential for performing various operations and calculations on the data. Therefore, selecting "All" implies that all of these components are included in OLAP services.

41. What should happen if two sources merge together?

Explanation

When two sources merge together, it is important for their metadata to also merge. Metadata refers to the information about the data, such as its description, format, source, and other relevant details. By merging the metadata, it ensures that all the necessary information from both sources is combined and consolidated. This helps in maintaining data integrity, avoiding duplication, and ensuring that the merged data is properly organized and documented.

42. Which one is false?

Explanation

The statement "Only USA has detailed address level" is false because many countries have detailed address levels, not just the USA.

43. "You can have multiple jobs with the same name". Which of the following options is true about the above statement?

Explanation

The statement is suggesting that having multiple jobs with the same name is possible, but only if they exist in different categories. This means that if two jobs have the same name but belong to different categories, it is acceptable to have them both. However, if they belong to the same category, it is not possible to have multiple jobs with the same name.

44. When we import a job, the job will be in which state?

Explanation

When we import a job, the job will be in the "Not compiled state". This means that the job has been imported but has not yet been compiled or executed. The job is not ready to be run until it is compiled, which involves checking for any errors or issues in the code. Therefore, when a job is imported, it initially remains in the not compiled state until further action is taken.

45. In two tier architecture, how many ODBC connections are there?

Explanation

In a two-tier architecture, there are two ODBC connections. In this architecture the client communicates directly with the databases, without an intermediate server tier: one ODBC connection is made to the metadata repository, and the other is made to the data warehouse, allowing the client to build queries and retrieve the requested data. Therefore, there are two ODBC connections in a two-tier architecture.

46. In a hashed file, if you add a row whose value in the key column duplicates an existing key,

Explanation

When a row with a duplicate key value is added to a hashed file, the latest row is retained. This means that if there are multiple rows with the same key value, only the most recent row will be kept in the file. The earlier rows with the same key value are overwritten by the new row, ensuring that only the latest information is stored in the file.
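
A minimal illustration of this overwrite behaviour, using a Python dict as a stand-in for the hashed file (the key and column names are invented):

```python
# A hashed file behaves like a dict keyed on the key column: writing a second
# row with the same key silently replaces the earlier one.
hashed_file = {}

incoming_rows = [
    {"cust_id": 101, "city": "Pune"},
    {"cust_id": 102, "city": "Chennai"},
    {"cust_id": 101, "city": "Mumbai"},   # duplicate key -> overwrites the first row
]

for row in incoming_rows:
    hashed_file[row["cust_id"]] = row

print(hashed_file[101])   # {'cust_id': 101, 'city': 'Mumbai'} -- the latest row survives
```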

47. DS server in UNIX can be started by

Explanation

The correct answer is DSSTART. This is because DSSTART is a command used to start the DS server in UNIX. The other options, ".. /univ", "./univ", and "None", are not valid commands for starting the DS server.

48. Not a DB2 licence method

Explanation

DB2 is typically licensed per processor (CPU) or per authorized user, so CPU and User are valid licensing methods. Memory is not a licensing metric; it only affects how much data can be cached and processed by the database. Therefore, Memory is not a DB2 license method.

49. Which of the following is correct?

Explanation

The correct answer is "Some countries have detailed address level." This means that not all countries have a detailed address level, but there are some countries that do. This implies that the level of address detail varies from country to country, and it is not a universal standard across all nations.

50. MPP –

Explanation

Massively Parallel Processing (MPP) refers to a computing architecture that uses multiple processors to perform tasks simultaneously. It allows for the efficient processing of large amounts of data by dividing the workload into smaller tasks that can be executed in parallel. This approach significantly speeds up data processing and analysis, making it suitable for applications that require high-performance computing and handling big data. Therefore, the given answer, Massively Parallel Processing, accurately describes the concept and its significance in computing.

51. % tracing of ETL and the ideal percentage of rejected rows in ETL should be

Explanation

The ideal percentage of rejected rows in ETL should be 0. This means that there should be no rejected rows during the ETL process. Rejected rows are typically a result of data quality issues or inconsistencies, and having a high percentage of rejected rows can indicate problems with the data source or the ETL process itself. Therefore, the goal is to have a rejection rate of 0, indicating that all rows are successfully processed and loaded into the target system.

52.  A filter qualification can combine

Explanation

A filter qualification can combine attribute qualification, metric qualification, report as filter, and relationship in any combination. This means that when creating a filter qualification, you can include attributes, metrics, reports, and relationships to define the criteria for the filter. This allows for more flexibility in filtering and narrowing down the data based on various dimensions and measures.

53. What is GIGO? 

Explanation

GIGO stands for Garbage In Garbage Out. This term is often used in computer science and information technology to describe the concept that the quality of output is determined by the quality of input. In other words, if you input incorrect or irrelevant data into a system, the output will also be incorrect or irrelevant. This principle emphasizes the importance of ensuring accurate and reliable input in order to obtain meaningful and useful output.

54. The number of CPUs used in DB2 Enterprise Edition

Explanation

The DB2 Enterprise edition does not have any restriction on the number of CPUs that can be used. This means that there is no limit or maximum number of CPUs specified for this edition. Users can utilize as many CPUs as they require for their specific needs and workload without any limitations.

55. Data Masking and Mask Pattern analysis are employed in

Explanation

Data Masking and Mask Pattern analysis are employed in Substituting String Pattern. This means that data masking techniques are used to substitute sensitive information in a string pattern with masked or encrypted values. Mask pattern analysis involves analyzing the pattern of the data to determine the appropriate masking technique to be applied. This ensures that sensitive data is protected while maintaining the integrity and structure of the data.

56. A Data warehouse:

Explanation

A data warehouse is a large and centralized repository of data that is collected from various sources and organized in a way that facilitates analysis and reporting. It is designed to support strategic decision-making by providing a comprehensive and historical view of the organization's data. By storing and integrating data from different systems and departments, a data warehouse enables executives and managers to analyze trends, identify patterns, and make informed decisions that can have long-term impacts on the organization's goals and objectives. Therefore, the correct answer is that a data warehouse is used to take strategic decisions.

57. Role of DS administrator

Explanation

The role of a DS administrator involves managing user privileges, setting environment variables, changing the size of the cache, creating security levels for projects, exporting project components, viewing the project list, and adding or deleting projects. These tasks are important for maintaining and controlling the DataStage environment, ensuring that users have the necessary access and resources, and managing the overall project workflow.

58. Internal storage causes high redundancy 

Explanation

Internal storage refers to the storage space within a computer or electronic device. When data is stored internally, there is a tendency for high redundancy, meaning that multiple copies of the same data may be stored. This redundancy can occur due to various reasons such as backup processes, file versioning, or system requirements. High redundancy can lead to wasted storage space and can also make data management more complex. Therefore, the statement "Internal storage causes high redundancy" is true.

59. Crosswalk: 

Explanation

This answer correctly identifies all the important aspects of crosswalks. Crosswalks are important for virtual collections because they allow for the organization and retrieval of metadata created by different users. They also involve the mapping of elements from one schema to another, which can be complex when trying to map from less granularity to more granularity. Crosswalks are expected to act as a whole entity, similar to a single search engine, and require labor-intensive development and maintenance.

60. A container is not a

Explanation

A container is not considered a memory. Memory refers to the physical or virtual storage space where data and instructions are stored for processing by a computer system. A container, on the other hand, is a software unit that encapsulates and isolates applications and their dependencies, providing a consistent and portable environment for running them. While containers may utilize memory resources, they are not synonymous with memory itself.

61. Metadata uses and needs

Explanation

Metadata plays a crucial role in ensuring the survival and accessibility of resources in the future. By sharing resources across users and various tools, metadata helps to bridge semantic gaps and enhance collaboration. It also speeds up and enriches searching of resources, making it easier for users to find the information they need. Additionally, metadata provides additional information about the data it describes, giving users a better understanding of the content and context. Therefore, all of these options correctly describe the uses and needs of metadata.

62. How will a default user be notified while logging in to DataStage Administrator?

Explanation

A default user will be notified while logging in to DataStage Administrator.

63. Default page size in DB2?

Explanation

The default page size in DB2 is 4 KB. This means that the database system allocates storage in units of 4 KB for storing data and indexes. This page size is commonly used because it strikes a balance between efficient storage and performance. Larger page sizes can reduce the overhead of managing storage, but they may also result in wasted space if the data being stored is smaller than the page size. Smaller page sizes can be more efficient for small amounts of data, but they may also increase the overhead of managing storage.

64. Level prompt –

Explanation

A level prompt lets the user set the dimensionality of a metric at run time, that is, the attribute level at which the metric is calculated. The dimensionality of a metric refers to the dimensions or attributes over which it is aggregated: a higher dimensionality takes more attributes into account and yields a more detailed analysis, while a lower dimensionality gives a simpler, more aggregated view. The dimensionality of a metric is therefore important in determining what the metric actually measures.

65. Which is not a data quality tool?

Explanation

DataStage is not a data quality tool. It is actually an ETL (Extract, Transform, Load) tool used for data integration and transformation. Data quality tools, on the other hand, are specifically designed to assess and improve the quality of data, ensuring accuracy, completeness, consistency, and reliability. Examples of data quality tools include First Logic and Trillium, which are widely used in the industry for data cleansing, profiling, and standardization.

66. A table in a RDBMS must have which of the following options?

Explanation

A table in a Relational Database Management System (RDBMS) must have at least one row and one column. This is because a table is a collection of related data organized in rows and columns, where each row represents a record and each column represents a specific attribute or field. Without at least one row and one column, there would be no data to store or retrieve from the table, making it essentially useless.

67. OLAP services

Explanation

The correct answer includes three options: report objects, derived metrics, and view filters. These are all components or features commonly found in OLAP services. Report objects refer to the various elements that can be included in a report, such as tables, charts, and graphs. Derived metrics are calculated measures that are derived from existing data in the OLAP cube. View filters allow users to apply specific filters to the data being viewed in order to focus on specific subsets of information. Therefore, all three options are relevant to OLAP services.

68. What is the language used in a data quality tool?

Explanation

UNIX Shell Scripting is the language used in a data quality tool. This language is commonly used for automating tasks and manipulating data in UNIX-based systems. It provides a powerful and flexible way to write scripts that can process, validate, and clean data in a data quality tool. By using UNIX Shell Scripting, users can easily automate data quality processes and perform various operations on the data, such as filtering, sorting, and transforming, to ensure its accuracy and consistency.

69. Default drill path –

Explanation

The correct answer is System Hierarchy because the term "default drill path" refers to the predefined path or sequence followed when navigating through a system or data hierarchy. In this context, the System Hierarchy refers to the hierarchical structure of the system, which is the default path followed when drilling down or navigating through different levels of the system. It is the primary hierarchy that determines the organization and relationship between different components or levels within the system.

70. Two types of hierarchies available in MicroStrategy are –  

Explanation

MicroStrategy offers two types of hierarchies: System hierarchy and User hierarchy. The System hierarchy is predefined by the system and cannot be modified by users. It represents the logical structure of the data and is used for organizing and navigating through the data. On the other hand, the User hierarchy is created by users and can be customized to meet specific reporting requirements. It allows users to define their own logical structure for the data, providing flexibility and customization options. Therefore, the correct answer is System hierarchy and User hierarchy.

71. Default administrator in UNIX is

72. Grouping of attributes which can be displayed, ordered, unordered –

73. Basic functionalities of Trillium

Explanation

The correct answer is "All". The given options list the basic functionalities of Trillum, which include data profiling, data discovery, data monitoring, data governance, data quality (including data cleansing and standardization, de-duplication and identifying relationships, address verification), and data enrichment (including postal certifications, latitude or longitude for precise location, and appending information from an outside source). Therefore, the correct answer is that all of these functionalities are included in Trillum.

74. Which is true?

Explanation

A database partition is a division of a database into separate parts for the purpose of improving performance, scalability, and availability. Each partition contains its own data, index, config files, and transaction logs. This allows for better organization and management of the database, as well as faster access to the data.

75. Which option in the metric editor allows the user to calculate a metric 6 months prior to the supplied month value?

Explanation

The option "Tranformation" in the metric editor allows the user to calculate a metric 6 months prior to the supplied month value. This suggests that the "Tranformation" option includes a function or feature that enables the user to manipulate the data and perform calculations based on a specified time period. This could involve transforming the data in a way that allows for the calculation of metrics from a previous time period, such as 6 months prior to the supplied month value.

76. Role of DS Manager

Explanation

The role of a DS Manager involves various tasks such as creating and moving projects, creating new objects, exporting or importing repository components, and conducting usage analysis. These tasks are essential for managing and organizing data within the system. By creating and moving projects, the DS Manager can effectively categorize and allocate resources. Creating new objects allows for the customization and addition of data elements. Exporting or importing repository components enables the transfer of data between systems. Lastly, conducting usage analysis helps in understanding the patterns and trends of data usage, aiding in decision-making processes.

77. Details of locks held by transactions that are recorded in buffer pool area is called 

Explanation

The details of locks held by transactions that are recorded in the buffer pool area are referred to as a "Locklist." The locklist contains information about the locks acquired by each transaction, such as the type of lock (shared or exclusive), the object being locked, and the transaction ID. This information is crucial for managing concurrency control and ensuring data consistency in a multi-user database system.

78. One key function of Auditing data cleansing rules is:

Explanation

Auditing data cleansing rules is important in order to provide traceability on the cleansed value to the original value sent by the source. This means that the auditing process keeps track of the changes made during the data cleansing operation, allowing for a clear understanding of the transformations that occurred and ensuring that the cleansed data can be traced back to its original form. This helps to maintain data integrity and transparency, as well as enabling any necessary investigations or analysis of the data.

79. Role of DS Director

Explanation

The role of the DS Director is to validate, schedule, run, and monitor datastage server jobs and parallel jobs. They have the ability to view logs, print logs, see the time elapsed, and reset jobs. This allows them to ensure that the jobs are running smoothly and efficiently, and to troubleshoot any issues that may arise during the process.

80. Which of the following is responsible for the MOLAP functionality of MicroStrategy –

Explanation

The analytical engine is responsible for the MOLAP (Multidimensional Online Analytical Processing) functionality of MicroStrategy. MOLAP is a type of database technology that enables fast and efficient analysis of large amounts of data. The analytical engine within MicroStrategy processes and organizes data in a multidimensional format, allowing users to easily navigate and analyze data from different perspectives. This engine performs calculations, aggregations, and other operations necessary for generating reports and visualizations in MOLAP cubes. It plays a crucial role in providing users with a powerful and interactive analytical experience within the MicroStrategy platform.

81. Row level math calculation and virtual attributes is possible with 

Explanation

A consolidation allows for row-level math calculations and acts as a virtual attribute. In MicroStrategy, a consolidation groups attribute elements into named rows on a report and lets calculations be performed between those elements at the row level, effectively creating derived, virtual attribute elements. Consolidations are commonly used in financial reporting and analysis, where elements such as departments or subsidiaries need to be combined and compared on the same report.

82. Which of the following database is used in DS repository?

Explanation

The correct answer is Universe because Universe is a database used in DS (DataStage) repository. DataStage is an ETL (Extract, Transform, Load) tool that is used for data integration and transformation. The DS repository stores metadata about the data sources, transformations, and jobs in DataStage. Universe is one of the databases that can be used as the underlying database for the DS repository.

83. Which of the following statements is true?

Explanation

All of the listed options are true. More than one unique index can be created for a table, allowing for multiple unique constraints on different sets of columns. Additionally, more than one column can be included in a unique index, allowing for uniqueness to be enforced across multiple columns. It is not mandatory to define a primary key constraint for a unique index, as a unique index can enforce uniqueness without being the primary key of the table. Therefore, all of the listed options are true.

84. Block indexes for multiple columns produces

Explanation

Block indexes for multiple columns produce Multidimensional Clusters. Multidimensional Clusters are used to optimize queries that involve multiple columns by organizing the data in a way that allows for efficient retrieval based on different combinations of column values. This type of index is particularly useful for queries that involve range queries or queries that have multiple filter conditions. By using multidimensional clustering, the database can quickly locate and retrieve the required data, improving query performance.

85. Command to invoke the Administrator?

Explanation

The correct answer is "dsadmin". This is the command that can be used to invoke the administrator. The other options, ".. /admin" and "./dsadmin", are not valid commands to invoke the administrator. The option "None" implies that there is no command to invoke the administrator, which is incorrect.

86. The best practice in data quality is

Explanation

Fixing data quality issues in the source is considered the best practice in data quality. This involves identifying and resolving any data quality issues at the point of origin, before it is loaded into the ETL (Extract, Transform, Load) process, ODS (Operational Data Store), or data warehouse (DW). By addressing data quality issues at the source, it ensures that the data is accurate, consistent, and reliable from the beginning, leading to better decision-making and analysis downstream.

87. Default level of metrics –

Explanation

The default level of metrics is set at the Report Level. This means that when you create a report, the metrics included in that report will be displayed at the report level by default. This allows you to view and analyze the metrics in the context of the entire report.

88. Types of prompts 

Explanation

The given answer lists the different types of prompts. A level prompt is used to select a level or hierarchy in a system. An object prompt allows the selection of specific objects or entities. A value prompt allows the input of a specific value or range. A filter definition prompt is used to define filters or conditions for data retrieval.

89. Cluster of SMP is

Explanation

The correct answer is MPP. This is because MPP stands for Massively Parallel Processing, which refers to a type of computing architecture that uses multiple processors to perform tasks in parallel. In the context of the given options, MPP is the only one that is related to clustering and processing data in a parallel manner. SMPP and SSMP are not commonly used acronyms, and SMPP is repeated twice in the options.

90. If a lookup table stores the details of only one attribute, it is called

Explanation

When a lookup table stores the details of only one attribute, the data is organized in a structured and efficient manner. This is known as normalization, where each attribute is stored in a separate table to eliminate redundancy and improve data integrity. Normalization helps in reducing data duplication and ensures that the data is consistent and accurate.

91. If NUM_IOCLEANERS is 0, then _____ are started

Explanation

If the NUM_IOCLEANERS configuration parameter is set to 0, it means that no page cleaners are available. Therefore, no page cleaners will be started.

92. Select the true statements.

Explanation

The given answer is correct because it accurately identifies the true statements. Customer matching is indeed done with fuzzy and intelligent logic, which helps in identifying and linking similar customer records. Data quality is important in preparing the data warehouse (DW) as it helps in avoiding unnecessary overheads and ensures that the data is accurate and reliable. Tracing involves creating audit trails between deleted and surviving customers, which helps in tracking changes and maintaining data integrity. Data quality audit provides traceability between original and corrected values, ensuring that any errors or discrepancies in the data are identified and corrected.

93. If a primary key uses multiple columns to identify a record then it is known as 

Explanation

A compound key is used when multiple columns are combined to uniquely identify a record in a database table. This is different from a single-column primary key, where only one column is used for identification. A compound key is useful when a single column cannot uniquely identify a record, but the combination of multiple columns can. Therefore, the correct answer is compound key.
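
A small Python illustration of the idea, using a dict keyed on an (order_id, line_no) tuple as a stand-in for a compound key (the table and column names are hypothetical):

```python
# Neither order_id nor line_no is unique on its own, but the pair is.
order_lines = {}

order_lines[(1001, 1)] = {"item": "keyboard", "qty": 2}
order_lines[(1001, 2)] = {"item": "mouse", "qty": 1}
order_lines[(1002, 1)] = {"item": "monitor", "qty": 1}

# order_id 1001 repeats and line_no 1 repeats, yet every (order_id, line_no)
# combination identifies exactly one row -- that pair is the compound key.
print(order_lines[(1001, 2)])    # {'item': 'mouse', 'qty': 1}
```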

94. If you want to filter on more than one attribute you will use

Explanation

The correct answer is "Joint element list" because when you want to filter on more than one attribute, you need to combine or join the elements from multiple lists into a single list. This joint element list will contain all the elements that satisfy the filtering conditions for each attribute.

95. What is the default connection timed out in data stage?

Explanation

The default connection timed out in data stage is 86400. This means that if there is no activity on a connection for a period of 86400 seconds (24 hours), the connection will be automatically closed.

96. Number of input links for a transformer

Explanation

The correct answer is "streamed input link" because the number of input links for a transformer is the same as that of the output link. This means that the transformer receives input data from a single streamed input link and processes it to produce output data on the same streamed input link. Therefore, there are no additional input links for the transformer.

97. The frequency count of data is obtained by

Explanation

Data profiling is the process of analyzing and examining data from various sources to understand its structure, content, and quality. It involves collecting statistics and information about the data, such as the frequency of data counts, to gain insights and make informed decisions. Therefore, data profiling is the most appropriate choice for obtaining the frequency of data count.
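
A minimal profiling sketch in Python using collections.Counter on a hypothetical column, showing how the frequency counts are gathered:

```python
from collections import Counter

# Hypothetical column being profiled.
country_column = ["IN", "US", "IN", "UK", "IN", "US", None, "IN"]

freq = Counter(country_column)
total = len(country_column)

# Frequency of each distinct value, highest first.
for value, count in freq.most_common():
    print(f"{value!r}: {count} of {total} rows")
```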

98. Converting non standard data into standardized format is taken care by which module – 

Explanation

The correct answer is "Converter". The Converter module is responsible for converting non-standard data into a standardized format. It takes care of transforming data from one format to another, ensuring that it is consistent and compatible with the desired format. This module plays a crucial role in data integration and data processing tasks, ensuring that data can be effectively utilized and analyzed.

99. MDM is built

Explanation

MDM (Master Data Management) is a process that involves creating and managing a single, consistent, and accurate version of master data within an organization. Building MDM before the ETL (Extract, Transform, Load) process ensures that the data being loaded into the data warehouse is clean, standardized, and reliable. By establishing MDM before building the data warehouse, organizations can ensure that the data being stored and analyzed is of high quality and can be trusted for decision-making purposes.

100. Various components of a transformation are

Explanation

The given answer correctly lists the various components of a transformation, which include Member Attributes, Member Tables, Member Expressions, and Mapping Type. These components are essential in defining and executing a transformation process. Member Attributes refer to the specific characteristics or properties of a member in a transformation. Member Tables are the tables that store the member data. Member Expressions are used to manipulate or transform the member data. Mapping Type determines the type of mapping or relationship between the source and target data. Together, these components play a crucial role in achieving successful transformations.

101. Change logs or histories are needed for

Explanation

Change logs or histories are needed for both access restriction and when the base source is deleted. Change logs or histories provide a record of all the modifications made to a system or source code. This is crucial for access restriction because it allows administrators to track and monitor user activity, ensuring that only authorized individuals can access certain resources. Additionally, change logs or histories are necessary when the base source is deleted because they provide a backup and recovery mechanism. By having a record of all changes made, it becomes easier to restore the system or source code to a previous state if necessary.

102. Viewing desktop report is possible in which of the following office tools?

103. ODS is for: a) Tactical decisions, b) Strategic decisions, c) Takes less time than DW, d) Subset of DW

Explanation

The correct answer is a, c are correct. ODS (Operational Data Store) is used for tactical decisions as it provides real-time or near real-time data for day-to-day operations. It also takes less time than a Data Warehouse (DW) to process and store data. ODS is a subset of DW, as it contains a smaller amount of data and is designed to support operational activities. Therefore, options a and c are correct.

104. What is meant by Rebalancing of extents?

Explanation

Rebalancing of extents refers to the process of moving extents between containers. This is done to optimize the distribution of data across different storage containers, ensuring that the data is evenly distributed and maximizing the performance of the system. By moving extents between containers, the system can balance the workload and prevent any single container from becoming overloaded. This helps to improve the overall efficiency and reliability of the storage system.

105. Metric formula can be reused by saving them as

Explanation

A nested formula refers to a formula that is embedded within another formula. It allows for the reuse of metric formulas by incorporating them within different calculations. This can be useful when multiple calculations require the same underlying metric formula. By saving the metric formula as a nested formula, it can be easily referenced and utilized in various calculations without the need for duplication.

106. Output of hash file - 

Explanation

The output of the hash file is "Not sorted" because it does not follow any specific order or arrangement. A hash file is typically used for quick data retrieval and does not guarantee any particular order of the stored data. Therefore, the output is not sorted and can be accessed randomly.

107. Metadata storage formats

Explanation

This question is asking about the types of metadata storage formats. The correct answer is "Both" because there are both human readable formats (such as XML) and non-human readable formats (such as binary) used for storing metadata. This means that metadata can be stored in a format that can be easily understood and interpreted by humans, as well as in a format that is optimized for efficient storage and processing by machines.

108. Customer Data Management (CDM) applies the principles of

Explanation

Customer Data Management (CDM) refers to the process of collecting, organizing, and maintaining customer data to ensure its accuracy and consistency across various systems and channels. MDM, or Master Data Management, is a discipline within CDM that focuses on creating a single, reliable, and consistent version of customer data that can be shared across different applications and departments within an organization. Therefore, MDM is the correct answer as it aligns with the principles of CDM by ensuring the accuracy and consistency of customer data.

109. Which among the following is not a part of Geocoding?

Explanation

Geocoding is the process of converting addresses or place names into geographic coordinates, such as latitude and longitude. It involves adding location information based on coordinates or codes. The options "Adding 5+4 Zip code format," "Adding Latitude / Longitude information," and "Providing a code value for referencing the geographical location with respect to North Pole" all relate to geocoding as they involve adding specific location data. However, "Adding Census information" does not directly pertain to geocoding, as it refers to demographic data rather than the coordinates or codes used in geocoding.

110. To change license details, the user must have which of the following roles?

Explanation

The correct answer is Production manager. This role is responsible for managing the production process, which includes overseeing the licensing of software and ensuring that the license details are up to date. Developers may be involved in creating and implementing the software, but they may not have the authority to change license details. The Operator role may have access to the software but may not have the necessary permissions to modify license details. Therefore, the Production manager role is the most appropriate choice for changing license details.

111. Which mapping stores the project information – 

Explanation

Metadata mapping is the mapping that stores project information. Metadata refers to the data that provides information about other data, in this case, it includes information about the project. This mapping is responsible for organizing and managing the metadata of the project, such as project name, description, version, and other relevant details. By using metadata mapping, project information can be easily accessed and utilized for various purposes, such as project management, documentation, and analysis.

112. Metadata creation tools

Explanation

The correct answer includes four different types of metadata creation tools: templates, conversion tools, extraction tools, and mark-up tools. Templates are pre-designed formats that can be used to create consistent and standardized metadata. Conversion tools are used to convert metadata from one format to another. Extraction tools are used to extract metadata from various sources such as documents or websites. Mark-up tools are used to add metadata to content using specific markup languages. These four types of tools are commonly used in the process of creating and managing metadata.

113. Survivorship is a concept used in

Explanation

Survivorship is a concept used in data deduplication. Data deduplication is a process of identifying and eliminating duplicate data within a dataset. Survivorship refers to the process of determining which version of duplicated data should be retained or considered as the most accurate or up-to-date. It helps in improving data quality by ensuring that only the most reliable and relevant data is retained, thus reducing storage costs and improving data analysis accuracy.
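
A minimal survivorship sketch in Python with hypothetical records; the rule here is simply "the most recently updated record wins", whereas real MDM tools support much richer survivorship rules:

```python
from datetime import date

# Duplicate customer records, e.g. collected from different source systems.
records = [
    {"cust_id": 7, "email": "a@old.example", "updated": date(2022, 1, 10)},
    {"cust_id": 7, "email": "a@new.example", "updated": date(2023, 6, 2)},
    {"cust_id": 9, "email": "b@only.example", "updated": date(2021, 3, 5)},
]

survivors = {}
for rec in records:
    key = rec["cust_id"]
    # Survivorship rule: keep the record with the latest update date.
    if key not in survivors or rec["updated"] > survivors[key]["updated"]:
        survivors[key] = rec

print(sorted(survivors.values(), key=lambda r: r["cust_id"]))
# cust_id 7 keeps the 2023 record; the 2022 duplicate is discarded.
```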

114. In DataStage, system variables are prefixed with

Explanation

In DataStage, system variables are prefixed with the "@" symbol.

115. Metric level – group set to NONE is not applicable for 

Explanation

The given question states that the metric level with the group set to NONE is not applicable for Transformation. This suggests that the metric level and group set to NONE can be applicable for Transaction and Duplication. Therefore, the correct answer is Transformation.

116. Implicit facts or attributes are 

Explanation

Implicit facts or attributes are not based on a physical column in the warehouse; they are defined by an expression that is simply a constant (for example, the literal 1 that is often used for counting rows). Because they exist only as expressions rather than stored columns, they are described as virtual or constant.

Submit
117. Parameters for an individual database are stored in which config file?

Explanation

Parameters for an individual database are stored in the SQLDBCONF configuration file, which is kept in that database's directory; instance-wide parameters are held separately in the database manager configuration file.

Submit
118. True about active and passive stage?

Explanation

The statement "Both" is the correct answer because it states that both the Aggregator and Transformer stages are active stages, while the ODBC, Universe, ORACLE, Hashed file and Sequential file, and Inter-Process stages are passive stages. This means that the Aggregator and Transformer stages actively perform operations on the data, while the other stages simply receive and pass along the data without actively manipulating it.

Submit
119. If a user would want to list the top 10 revenue values by region what type of filter is to be used?

Explanation

To list the top 10 revenue values by region, a user would need to use a metric qualification filter. This type of filter allows the user to filter and sort data based on a specific metric, in this case, the revenue values. By applying a metric qualification filter, the user can specify the condition to display only the top 10 revenue values, providing a clear and concise list of the highest revenue values by region.
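As a rough sketch of what such a qualification generates (the table and column names below are illustrative, not taken from the question), the ranking is typically pushed into the SQL along these lines:

  SELECT region_name, total_revenue
  FROM (
      SELECT r.region_name,
             SUM(f.revenue) AS total_revenue,
             RANK() OVER (ORDER BY SUM(f.revenue) DESC) AS revenue_rank
      FROM   sales_fact f
             JOIN lu_region r ON r.region_id = f.region_id
      GROUP  BY r.region_name
  ) AS ranked
  -- keep only the 10 highest-revenue regions
  WHERE revenue_rank <= 10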

Submit
120. Not a valid type of locking

Explanation

Extent-level locking is not a valid type of locking in DB2. Locks are acquired at the row, table, and table space levels (and at the block level for MDC tables), but there is no extent-level lock, so "extent level locking" is the option that is not a valid type of locking.

Submit
121. OR clauses and OUTER joins

Explanation

OR clauses and OUTER joins can decrease performance in DB2 because they involve additional processing and comparisons. OR clauses can result in slower query execution as the database needs to evaluate multiple conditions. OUTER joins also require more processing as they involve matching records from two tables, including those that do not have corresponding values. This can lead to increased query execution time and decreased performance.
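A common workaround, shown here only as a sketch with illustrative table and column names, is to rewrite an OR that spans different columns as a UNION of two simpler predicates, each of which can use its own index; whether this actually helps depends on the data and on the plan the optimizer chooses:

  -- Original form: the OR across two different columns can prevent efficient index use
  SELECT order_id
  FROM   orders
  WHERE  cust_id = 1001 OR ship_region = 'EMEA';

  -- Possible rewrite: each branch can be satisfied by its own index,
  -- and UNION removes rows that match both predicates
  SELECT order_id FROM orders WHERE cust_id = 1001
  UNION
  SELECT order_id FROM orders WHERE ship_region = 'EMEA';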

Submit
122. Front room metadata

Explanation

The correct answer is "Both" because the front room metadata can be used for both queries and report definition. It provides information about the structure and properties of the front room, which can be used to retrieve data and generate reports.

Submit
123. In which type of filter the SQL is not changed?

Explanation

A view filter is applied to the report's result set after the data has already been retrieved from the warehouse. Because it only restricts which of the returned rows are displayed, the SQL that was generated and executed against the warehouse remains unchanged. Therefore, with a view filter the SQL is not changed.

Submit
124. Reports can run with only attributes on the template (and no metrics)

Explanation

Reports can run with only attributes on the template (and no metrics) because attributes provide the dimensions or categories by which the data is organized and filtered, while metrics provide the numerical values calculated over those dimensions. If no metrics are included, the report simply contains no numeric columns, but it can still be generated and will display the elements of the selected attributes.

Submit
125. Which is incorrect?

Explanation

The given options are all acronyms related to different concepts. ESB stands for Enterprise Service Bus, which is a software architecture used for integrating different applications. SOA stands for Service Oriented Architecture, which is an architectural style for building software applications. NCOA stands for National Change of Address, which is a service used to update changes in address. DSA stands for Data Source Architecture, which is not a commonly known or recognized term in the field of technology or software development. Therefore, DSA is the incorrect option.

Submit
126. Transformer can be created without an input link?

Explanation

It is true that a Transformer can be created without an input link. A Transformer stage normally receives rows on an input link, applies derivations, and writes the results to one or more output links, but DataStage allows a Transformer to be defined with no input link, in which case its output columns are derived from constants, stage variables, or routine calls rather than from incoming rows.

Submit
127. From where to import DS jobs?

Explanation


Submit
128. Rule repository?  

Explanation

A rule repository is a database or flat file where rules are stored. It allows for the modification of rules when there is a change in the source data patterns. Additionally, it allows for the addition of new but similar rules without changing the underlying code. It also enables the reuse and standardization of rules across multiple processes that handle similar data.

Submit
129. Default file format for datastage components to be exported is

Explanation

The default file format for datastage components to be exported is .dsx. This file format is specific to DataStage and is used for exporting and importing DataStage components such as jobs, stages, and routines. The .dsx file format ensures that the exported components can be easily imported back into DataStage without any compatibility issues.

Submit
130. Types of actions in hierarchy display

Explanation

The given answer lists the types of actions in a hierarchy display. "Locked" refers to actions that are completely restricted and cannot be accessed or modified. "Limited" indicates actions that have some restrictions or limitations in terms of usage or availability. "Entry point" refers to actions that serve as the starting point or gateway to access other actions or features. "Filtered" implies that certain actions are displayed or made available based on specific filters or criteria.

Submit
131. If the user wants a particular attribute to be displayed in the report output, include the attribute in

Explanation

The attribute that the user wants to be displayed in the report output should be included in the Report display form. This form is specifically designed to display the attributes that will be included in the report output. By including the attribute in this form, the user ensures that it will be visible in the final report.

Submit
132. True about consolidation

Explanation

Consolidation refers to the process of combining or merging data from different sources or levels. In this context, the statement "Elements of the same attribute" is true because consolidation involves combining elements that belong to the same attribute. Additionally, "Attribute elements from different levels" is also true as consolidation can involve merging elements from different levels within the same attribute. "Existing consolidation elements" refers to elements that have already been consolidated and are included in the current consolidation process. "Elements from any other consolidation in the project" indicates that elements from other consolidations within the project can also be included. However, "Elements from unrelated attributes" is not true as consolidation specifically involves elements from the same attribute.

Submit
133. Terminology used to describe augmenting of entities with data from third party sources

Explanation

Enrichment refers to the process of augmenting entities with data from third party sources. This involves enhancing the existing data by adding additional information or attributes obtained from external sources. It helps to improve the quality and depth of the data by supplementing it with relevant and valuable information. Enrichment can include various techniques such as data matching, cleansing, and integrating data from different sources to provide a more comprehensive and accurate representation of the entities.

Submit
134. Which of the following tables provides data of an attribute ID and description columns?

Explanation

A lookup table is a table that provides data of an attribute ID and description columns. It is commonly used to store reference data and allows for easy and efficient retrieval of information based on the ID. In this case, the other options (Partition mapping table, Aggregate table, and Fact table) do not specifically provide data of an attribute ID and description columns, making the lookup table the correct answer.
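As a minimal sketch (the attribute and column names are illustrative), a lookup table for a "Region" attribute would typically hold just the ID and its description:

  CREATE TABLE lu_region (
      region_id   INTEGER     NOT NULL PRIMARY KEY,  -- attribute ID column
      region_desc VARCHAR(50) NOT NULL               -- attribute description column
  );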

Submit
135. Maximum number of tables that can be loaded in single fastload job is

Explanation


Submit
136. Which of the following is not an IBM product?

Explanation

The correct answer is "Analysis stage" because all the other options mentioned - Meta stage, Quality Stage, and Profile Stage - are actual IBM products. However, "Analysis stage" is not a recognized IBM product.

Submit
137. Trillium source

Explanation

The correct answer is "flat files, fixed width" because Trillium source can read data from flat files that have a fixed width format. This means that each field in the file has a predetermined length, and the data is aligned accordingly. This format is commonly used when the data needs to be imported into a database or processed by a system that requires a specific field length.

Submit
138. The transformation information is stored in a table as part of the warehouse in

Explanation

The correct answer is "Both" because in data warehousing, transformation information can be stored in a table as part of the warehouse, as well as in expression-based transformations. Table-based transformations involve using lookup tables or reference tables to transform data, while expression-based transformations involve using expressions or formulas to manipulate and transform data. Therefore, both methods can be used to store transformation information in a data warehouse.

Submit
139. Which table used to resolve many to many relationship between attributes

Explanation

A LookUp Table is used to resolve many-to-many relationships between attributes. This table acts as a bridge between two entities, allowing multiple instances of one entity to be associated with multiple instances of another entity. It contains foreign keys from both entities and provides a way to establish connections between them. By using a LookUp Table, we can efficiently manage and query data related to the many-to-many relationship, ensuring data integrity and reducing redundancy.
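As an illustrative sketch (the entity names are hypothetical), the relationship table holds one row per valid pairing and carries foreign keys to both lookup tables:

  CREATE TABLE lu_customer (
      cust_id   INTEGER      NOT NULL PRIMARY KEY,
      cust_name VARCHAR(100)
  );
  CREATE TABLE lu_account (
      acct_id   INTEGER      NOT NULL PRIMARY KEY,
      acct_desc VARCHAR(100)
  );
  -- Relationship (lookup) table resolving the many-to-many association
  CREATE TABLE rel_customer_account (
      cust_id INTEGER NOT NULL REFERENCES lu_customer (cust_id),
      acct_id INTEGER NOT NULL REFERENCES lu_account (acct_id),
      PRIMARY KEY (cust_id, acct_id)
  );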

Submit
140. Page cleaner process is 

Explanation

The page cleaner process is asynchronous, meaning it operates independently of other processes and does not require immediate or direct interaction with them. This allows the page cleaner to run in the background, freeing up system resources and improving overall system performance. Asynchronous processes are often used for tasks that can be performed independently and do not need to be synchronized with other processes.

Submit
141. During which of the operations data is not modified

Explanation

During the data profiling operation, data is not modified. Data profiling involves analyzing and assessing the quality, structure, and content of the data. It aims to understand the data's characteristics, such as completeness, accuracy, consistency, and uniqueness, without making any changes to the data itself. This process helps in identifying data quality issues, patterns, and relationships within the data, which can then be used to make informed decisions about data cleansing or enrichment operations.

Submit
142. Concerns about Metadata

Explanation

The concerns about metadata include the fact that it is too expensive and time-consuming to implement. Additionally, it is considered too complicated, as it requires expertise and technical knowledge to properly categorize and manage metadata. Furthermore, the subjective nature of metadata means that it depends on the specific content being described, making it difficult to create standardized metadata across different types of content. Lastly, there is no end to metadata, meaning that as new content is created, more metadata needs to be generated and managed.

Submit
143. What feature in MicroStrategy allows a user to have attributes and metrics as part of the report definition but not as part of the final report display?

Explanation

The correct answer is "Report Object." In MicroStrategy, a Report Object allows users to include attributes and metrics as part of the report definition, but these objects are not displayed in the final report. This feature is useful when users want to include additional information for analysis purposes without cluttering the report display. The Report Object can be used to organize and structure the report, making it easier to understand and navigate.

Submit
144. Which is false?

Explanation

The statement that "Pagesize does not impact performance" is false. Pagesize refers to the amount of data that is read or written in a single I/O operation. A larger pagesize can reduce the number of I/O operations required, leading to improved performance in certain scenarios. However, it is important to note that the optimal pagesize depends on the specific workload and database system being used.

Submit
145. Large updates and transactions are not suitable for

Explanation

Large updates and transactions are not suitable for block index and clustered index.

Block index is a data structure used in databases to organize and efficiently access blocks of data. It is not ideal for large updates and transactions because it requires a lot of disk I/O operations to update the index, which can be time-consuming and inefficient.

Clustered index is a type of index that determines the physical order of data in a table. It is not suitable for large updates and transactions because it requires rearranging the data on disk to maintain the physical order, which can be costly in terms of time and resources.

Therefore, both block index and clustered index are not recommended for handling large updates and transactions.

Submit
146. Which of the following does not require a statistics update after the operation?

Explanation

A table backup does not require a statistics update after the operation because a backup is simply a copy of the existing table data. It does not involve any changes or modifications to the data or its structure. The statistics, which provide information about the distribution and characteristics of the data, are not affected by the backup process. Therefore, there is no need to update the statistics after performing a table backup.

Submit
147. Effects of locking –    

Explanation

Locking is a mechanism used to control access to shared resources in a concurrent environment. When multiple threads or processes try to access the same resource simultaneously, locking ensures that only one thread or process can access it at a time, preventing data corruption or inconsistency. This improves concurrency by allowing multiple threads to execute concurrently without interfering with each other. However, the process of acquiring and releasing locks introduces overhead, which can degrade performance. Therefore, while locking improves concurrency, it can also have a negative impact on performance.

Submit
148. This type of filter gets applied to the report when the SQL is generated and executed against the warehouse

Explanation

A report filter is a type of filter that is applied to the report when the SQL is generated and executed against the warehouse. This means that the filter is applied directly to the data that is being retrieved from the database, allowing for more efficient and accurate filtering of the report results. By applying the filter at the report level, only the relevant data is included in the report, making it easier for users to analyze and interpret the information.

Submit
149. Trillium server process requires –

Explanation

The Trillium server process requires all of the mentioned components - Input Structure (DLL file), Output structure (DLL file), and Parameter file (PAR file). Each of these components plays a crucial role in the functioning of the Trillium server process. The Input Structure (DLL file) is responsible for providing the necessary input data to the server process. The Output structure (DLL file) defines the format in which the processed data will be outputted. The Parameter file (PAR file) contains the configuration settings and rules that govern the data processing. Therefore, all of these components are necessary for the Trillium server process to operate correctly.

Submit
150. During the de-duplication process

Explanation

During the de-duplication process, the original values are kept in trail tables. This means that instead of deleting or disturbing the original values, they are stored separately in trail tables. This allows for a record of the original values to be maintained while the de-duplication process is carried out. This can be useful for various reasons such as auditing, historical analysis, or reverting back to the original values if needed.

Submit
151. Difference between the job status 'Finished' and 'Finished(see log)'?

Explanation

In the case of "Finished(see log)", the job has finished but there were some warnings during the process. This means that the job may have completed successfully, but there were some issues or potential errors that occurred. On the other hand, in the case of just "Finished", there are no mentioned warnings or rejected rows, indicating that the job completed without any issues or errors. Therefore, the main difference between the two is the presence of warnings in the latter case.

Submit
152. How is trillium invoked in datastage?

Explanation

The correct answer is "Advanced external procedure." In DataStage, trillium is invoked using advanced external procedures. These procedures allow for the integration of Trillium Software's data quality capabilities within DataStage. By using advanced external procedures, users can perform various data quality operations such as data cleansing, standardization, and matching, enhancing the overall data quality and accuracy of the data processed in DataStage.

Submit
153. Optimized storage or space utilization can be achieved in 

Explanation

Optimized storage or space utilization can be achieved in non-human readable format. This means that by using a format that is not easily readable by humans, such as binary or compressed formats, data can be stored in a more efficient manner, taking up less space. This can be particularly useful when dealing with large amounts of data or when storage resources are limited. By sacrificing human readability, the data can be stored in a more compact form, allowing for better storage efficiency.

Submit
154. Formula automatically got updated fact column by which metrics

Explanation

The correct answer is "Smart" because the term "Smart" is commonly used to describe formulas or functions that automatically update or recalculate values based on changes in other cells or data. In this context, the formula is likely designed to update the fact column with metrics based on certain conditions or calculations. The other options (Nested, Derived, and Compound) do not specifically imply this automatic updating behavior.

Submit
155. Master data store -

Explanation

The master data store is responsible for storing unique entries for all applications. This means that it contains information that is specific to each application and is not duplicated across multiple applications. The purpose of having unique entries for all applications is to ensure data integrity and avoid redundancy. This allows each application to have its own set of data that is relevant and specific to its needs, without interfering with the data of other applications.

Submit
156. Which feature of LOAD makes it faster than IMPORT or SQL INSERT?

Explanation

The reason why writing formatted data pages directly to the hard disk makes LOAD faster than IMPORT or SQL INSERT is because it eliminates the need for intermediate steps or buffering. When using LOAD, the data is directly written to the disk in a formatted manner, which reduces the time and resources required for data processing. In contrast, IMPORT and SQL INSERT may involve additional steps such as parsing, buffering, or intermediate commits, which can slow down the overall process.

Submit
157. Client requests are serviced by

Explanation

Coordinator agents are responsible for servicing client requests. They act as intermediaries between the clients and the server, coordinating and managing the communication between them. They handle tasks such as routing requests, managing connections, and ensuring the proper execution of client requests. Client threads, on the other hand, are responsible for executing specific tasks within the client application, while client agents are not typically involved in servicing client requests. Therefore, the correct answer is coordinator agents.

Submit
158. Database statistics are used by the optimizer to estimate the costs of alternative access plans for each query. Collection of these database statistics represents which of the following activity?

Explanation

Database statistics are used by the optimizer to estimate the costs of alternative access plans for each query. This means that the database statistics need to be up-to-date and accurate in order for the optimizer to make informed decisions about the most efficient access plan for a given query. Therefore, regularly scheduling the collection of these database statistics is necessary to ensure that they are always current and reflect the actual state of the database.
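In DB2, for instance, this regularly scheduled activity typically issues RUNSTATS at fixed intervals; a sketch using the ADMIN_CMD procedure (the table name is illustrative):

  -- Refresh table, distribution, and index statistics for the optimizer
  CALL SYSPROC.ADMIN_CMD(
      'RUNSTATS ON TABLE sales.orders WITH DISTRIBUTION AND DETAILED INDEXES ALL'
  );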

Submit
159. What are the standards defined for metadata(Choose 2)

Explanation

ISO/IEC 11179 and ANSI X3.285 are the standards defined for metadata. ISO/IEC 11179 is an international standard that provides guidelines for the development and management of metadata registries, defining concepts, data elements, and value domains. ANSI X3.285 is a related American national standard concerned with the specification and standardization of data elements. Both standards play a crucial role in ensuring consistency, interoperability, and quality of metadata across different systems and organizations.

Submit
160. Which symbol is used in defining the parameter in datastage?

Explanation

The symbol "#" is used in defining the parameter in Datastage.

Submit
161.  iconv is used for?

Explanation

The iconv program converts text from one character encoding to another; for example, "iconv -f ISO-8859-1 -t UTF-8" converts its input from the ISO-8859-1 encoding to UTF-8. In DataStage BASIC, the Iconv function plays a similar conversion role, turning user input (such as dates) into DataStage's internal format.

Submit
162. "Statistics can be collected anytime"What is true about this statement? 

Explanation

This statement is true because collecting statistics can have an impact on performance. When statistics are collected, it involves analyzing and processing data, which can consume system resources and potentially slow down the performance of the database. Therefore, it is not something that can be done anytime without considering the potential impact on performance.

Submit
163. Fast export is used to export all the data 

Explanation

Fast export is not used to export all the data. It is a utility in Teradata that is used to efficiently export a large amount of data from a Teradata database to an external file. It is designed to export data in a highly optimized and parallel manner, making it suitable for exporting large datasets. However, it does not export all the data in the database, but rather allows for selective exporting based on specific criteria or conditions. Therefore, the correct answer is False.

Submit
164. What is true about stage hash files?

Explanation

Stage hash files use a combination of hash algorithms and sequential scanning to process data files and overflow files. The data files are scanned using hash algorithms, which allows for quick access and retrieval of specific data records. On the other hand, the overflow files are processed in a sequential manner, meaning that the records are read and processed one after the other in the order they appear in the file. This combination of hash algorithms and sequential scanning ensures efficient and accurate processing of data in stage hash files.

Submit
165. Which of the following is true?

Explanation

"Tables can span across tablespaces" is the true statement: a single table's data does not have to live in one table space, since its regular data, indexes, and large objects can be placed in different table spaces (and a partitioned table can spread its ranges across several). This allows for better organization and management of data within a database and better distribution of data across different storage devices.

Submit
166. Which module is used for parsing name and address fields –

Explanation

The correct answer is Customer Data Parser. This module is used specifically for parsing name and address fields. It is designed to extract and interpret information from customer data, allowing for easy organization and analysis of name and address details. The Client Data Parser, on the other hand, may serve a different purpose or focus on parsing other types of data. Therefore, the correct module for parsing name and address fields is the Customer Data Parser.

Submit
167. Statistics update is done 

Explanation

The statistics update is done once a week, but the frequency may vary depending on how often the table is updated. This means that if the table is updated more frequently, the statistics will also be updated more often to ensure accurate and up-to-date information.

Submit
168. A datastage project can be created in

Explanation


Submit
169. Metadata is

Explanation

The correct answer is "Both". Metadata refers to data about data, which includes information about the structure, content, and context of data. It can be used to organize and categorize data in a hierarchical manner, similar to how an ontology arranges concepts in a hierarchical structure. Therefore, metadata can serve as an ontology when it is hierarchically arranged.

Submit
170. An MDM solution is successful:
  1. If all master data element touch points are integrated to master data store
  2. If master data store is integrated to DW
  3. If MDM solution is implemented with a package solution
  4. None of the listed options
 

Explanation

The correct answer is option 3: an MDM solution is successful if it is implemented with a package solution. This means the MDM solution is built on pre-built software or a commercial off-the-shelf (COTS) product rather than being custom-built. Implementing an MDM solution with a package solution can provide benefits such as faster implementation time, lower costs, and access to pre-built functionality and features.

Submit
171. Command to stop Datastage server on UNIX is

Explanation

The correct answer is "/uv –admin –stop" because it is the command to stop the Datastage server on UNIX. The "-admin" flag indicates that the command is being run as an administrator, and the "-stop" flag specifies that the server should be stopped. The "/uv" portion of the command is the specific command used to interact with the Datastage server.

Submit
172. Microstrategy is a _______ tool

Explanation

Microstrategy is a ROLAP (Relational Online Analytical Processing) tool. ROLAP is a type of OLAP (Online Analytical Processing) that uses a relational database management system (RDBMS) to store and manage data. Unlike MOLAP (Multidimensional Online Analytical Processing), which stores data in a multidimensional cube, ROLAP retrieves data directly from the relational database when needed. Therefore, Microstrategy being a ROLAP tool means that it uses a relational database to perform online analytical processing tasks.

Submit
173. Query against xml data is called

Explanation

XQuery is the correct answer because it is a query language specifically designed for querying XML data. It allows users to extract and manipulate data from XML documents using a combination of XPath syntax and extended functionality. XQuery provides a powerful and flexible way to retrieve information from XML databases or other XML data sources.
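In DB2 pureXML, for example, an XQuery expression can be embedded in SQL through the XMLQUERY function; the sketch below assumes an illustrative customer table with an XML column named info:

  SELECT XMLQUERY('$d/customerinfo/name' PASSING info AS "d") AS customer_name
  FROM   customer
  WHERE  cust_id = 1001;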

Submit
174. Update schema, updates the information stored in _________

Explanation

The correct answer is metadata database. When we update the schema, we are modifying the structure and organization of a database. This includes changes to tables, columns, relationships, and other database objects. The metadata database stores information about the database schema, such as the definitions of tables, columns, constraints, and indexes. Therefore, updating the schema involves making changes to the metadata database to reflect the updated information and ensure the database remains consistent and accurate.

Submit
175. Which of the following logging strategies allow roll-forward recovery?

Explanation

Log Retention (archive) logging allows roll-forward recovery because the log files are retained and archived rather than reused. After restoring a backup, the database can be rolled forward through the retained logs to reapply committed transactions up to a point in time or to the end of the logs. With circular logging, log files are overwritten in a circular fashion once they are no longer needed for crash recovery, so the logs required to roll forward past a backup are not kept and roll-forward recovery is not possible. Therefore, the correct answer is Log Retention logging.
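In DB2 this is typically enabled by pointing LOGARCHMETH1 at an archive target (after which a fresh backup is required); a sketch with an illustrative database name and path, assuming the ADMIN_CMD procedure is used rather than the command line processor:

  CALL SYSPROC.ADMIN_CMD(
      'UPDATE DB CFG FOR salesdb USING LOGARCHMETH1 DISK:/db2/archive_logs'
  );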

Submit
176. Most important component of reject data capture

Explanation

The most important component of reject data capture is capturing the reason for rejection. This allows for a thorough analysis of why the data was rejected, which can help in identifying and resolving any underlying issues or errors. By capturing the reason for rejection, organizations can gain valuable insights into their data quality and make necessary improvements to prevent future rejections.

Submit
177. Report view mode

Explanation

The given correct answer lists the different view modes available in a report. These modes include Grid, Graph, SQL, and Grid graph mode. Grid mode displays the data in a tabular format, Graph mode represents the data visually using charts and graphs, SQL mode allows users to write custom SQL queries, and Grid graph mode combines both the tabular and visual representations of the data.

Submit
178. Types of DW Metadata

Explanation

The correct answer is Back Room and Front Room. In a data warehouse, metadata is classified into different types based on the location and purpose. The back room metadata refers to the technical metadata that is used by IT professionals to manage and maintain the data warehouse. It includes information about data sources, data transformations, and data loading processes. On the other hand, the front room metadata refers to the business metadata that is used by business users to understand and analyze the data in the data warehouse. It includes information about data definitions, business rules, and data lineage.

Submit
179. Which index is Physically ordered

Explanation

A clustering index is physically ordered in the sense that the table's rows are kept (approximately) in the order of the index key on disk. Block indexes are used with multidimensional clustering (MDC) tables, where the data itself is physically organized into blocks by the dimension values, so access through a block index also follows the physical layout. A regular tree (B-tree) index, by contrast, only provides a logical search structure and imposes no physical ordering on the data. Therefore, the correct answer is Block Index and Cluster Index.
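In DB2 terms, the two cases correspond roughly to the DDL below (object names are illustrative):

  -- Clustering index: new rows are placed near existing rows with similar key values
  CREATE INDEX ix_orders_date ON orders (order_date) CLUSTER;

  -- MDC table: data is physically organized into blocks by the dimension columns,
  -- and block indexes on those dimensions are created automatically
  CREATE TABLE sales_mdc (
      region     VARCHAR(20),
      sale_month INTEGER,
      amount     DECIMAL(12,2)
  ) ORGANIZE BY DIMENSIONS (region, sale_month);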

Submit
180. Which among the following is not a public object?

Explanation

Transformation is not a public object because, in MicroStrategy, transformations are schema objects (alongside attributes, facts, and hierarchies), typically used for time-based comparisons such as year-over-year analysis. Metrics, consolidations, and filters, on the other hand, are public (application) objects that can be created and used directly by report designers.

Submit
181. The rules of cleansing are embedded in

Explanation

The correct answer is Trillium's parameter file (PAR). The explanation for this is that the rules of cleansing, which determine how data is cleaned and standardized, are embedded in Trillium's parameter file. This file contains the specific instructions and configurations for the cleansing process. The Input Str file and Output Str file are related to the input and output of data, but they do not contain the rules for cleansing. Therefore, the parameter file is the correct answer.

Submit
182. DSN for ODBC Stage in DataStage​

Explanation

The correct answer is "should be created in server". This is because the DSN (Data Source Name) for ODBC (Open Database Connectivity) stage in DataStage needs to be created on the server where the DataStage job is running. The DSN is used to establish a connection between the DataStage job and the database server, allowing the job to read or write data from/to the database. Creating the DSN on the server ensures that the job can access the necessary database resources and perform the required operations.

Submit
183. The number of rows to be transferred in one call between Datastage and Oracle before they are written is called

Explanation

The number of rows to be transferred in one call between Datastage and Oracle before they are written is referred to as the array size. This determines how many rows can be processed and transferred at a time, optimizing the data transfer process and improving efficiency.

Submit
184. OCI =>

Explanation

The correct answer is Oracle Call Interface. OCI is a programming interface that allows applications to interact with Oracle databases. It provides a set of functions and utilities for performing various database operations, such as executing SQL statements, fetching data, and managing transactions. OCI is commonly used by developers to build high-performance and scalable applications that access Oracle databases.

Submit
185. Master Data Repository is an ideal source for

Explanation

The Master Data Repository is an ideal source for all OLTP data. This means that it contains both dimension elements and transaction elements. Dimension elements refer to the different attributes or characteristics that describe the data, such as customer names, product codes, or geographical locations. Transaction elements, on the other hand, are the actual data entries or records that capture specific events or transactions, such as sales orders, purchase orders, or inventory movements. Therefore, the Master Data Repository serves as a comprehensive source for all types of data in an OLTP system.

Submit
186. Data cleansing and standardization will be taken care by

Explanation

Data cleansing and standardization refer to the process of identifying and correcting or removing errors, inconsistencies, and inaccuracies in data. Data profiling tools help in analyzing and understanding the structure, content, and quality of data. Metadata tools are used to manage and organize metadata, which provides information about the data. However, data quality tools are specifically designed to ensure the accuracy, completeness, consistency, and reliability of data, making them the most suitable for handling data cleansing and standardization tasks.

Submit
187. What are the formatting properties applicable to fact objects in a report?

Explanation

The given answer is "None of the above" because the options provided do not accurately represent the formatting properties applicable to fact objects in a report. The correct formatting properties for fact objects in a report typically include font/color formatting, alignment formatting, and number formatting. Border formatting and positional formatting are not specifically applicable to fact objects in a report.

Submit
188. In which of the following cases should Change Logs or Histories be implemented? (Choose two)

Explanation

Change Logs or Histories should be implemented when the base resource is deleted and when access restrictions are enabled. Implementing Change Logs or Histories in these cases allows for tracking and recording any changes or modifications made to the base resource. This can be helpful for auditing purposes, ensuring accountability, and maintaining a record of all actions taken. Additionally, tracking access restrictions enables monitoring and managing user permissions, ensuring data security and compliance.

Submit
189. Which of the following options use database statistics?

Explanation

Database statistics are used by the query compiler to optimize query execution. The query compiler analyzes the statistics to determine the most efficient execution plan for a given query. This involves evaluating factors such as the size of the tables involved, the distribution of data, and the selectivity of the predicates. By utilizing database statistics, the query compiler can generate a plan that minimizes the amount of I/O and CPU resources required, resulting in faster and more efficient query execution.

Submit
190. DBAs can control the physical location of the table on the disks in

Explanation

DBAs can control the physical location of a table on disk with DMS (Database Managed Space) table spaces. With DMS, the DBA explicitly defines the containers (files or raw devices) the table space uses, and a table's data, indexes, and long data can even be directed to different table spaces, allowing for better performance and optimization. With SMS (System Managed Space) table spaces, storage is managed through the operating system's file system, so this level of control over physical placement is not available.
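A minimal DMS sketch, with illustrative paths and sizes (container sizes are given in pages):

  CREATE TABLESPACE ts_sales
      MANAGED BY DATABASE
      USING (FILE '/db2/data/ts_sales_01.dat' 25600,
             FILE '/db2/data2/ts_sales_02.dat' 25600);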

Submit
191. Grant permission can be given by?

Explanation

Grant permission can be given by a DS Manager. A DS Manager is responsible for managing and overseeing the permissions and access levels within a system or application. They have the authority to grant or revoke permissions to users, developers, and administrators based on the requirements and policies of the organization.

Submit
192. Multi Dimensional Clusters are  Beneficial For

Explanation

Multi Dimensional Clusters are beneficial for OLAP (Online Analytical Processing). OLAP involves analyzing large volumes of data from multiple dimensions, such as time, geography, and product. Multi-dimensional clusters help in organizing and structuring this data in a way that allows for efficient and fast retrieval of information. By grouping similar data points together based on their attributes, multi-dimensional clusters enable OLAP systems to perform complex queries and aggregations more quickly, improving the overall performance and responsiveness of the analytical process.

Submit
193. Export file format in DS is 

Explanation

The correct answer is "Both" because in DataStage (DS), you have the option to export files in two different formats, DSX and XML. DSX is a proprietary file format used by DataStage for exporting job designs, while XML is a widely used file format for data interchange. Therefore, you can choose to export files in either DSX or XML format depending on your requirements.

Submit
194. Different types of drilling

Explanation

The correct answer is "Drill down the hierarchy" because it refers to the action of navigating from a higher level to a lower level in a hierarchical structure. This type of drilling involves exploring more detailed or specific information within a certain category or branch of the hierarchy. "Drill up the hierarchy" would involve navigating from a lower level to a higher level, while "across" and "template" do not pertain to drilling in a hierarchical structure. The options "Both" and "None" are not relevant to the types of drilling.

Submit
195. The number of rows that are written to the database before they are committed is called

Explanation

The term "transaction size" refers to the number of rows that are written to the database before they are committed. This means that when a certain number of rows have been processed and written to the database, they are then permanently saved or committed. This concept is important in database management as it helps in optimizing performance and ensuring data integrity.

Submit
196. Data identification and de-duplication processes are in-built in the 

Explanation

The correct answer is data entry or middleware applications. Data identification and de-duplication processes are typically built into data entry or middleware applications. These applications are designed to handle large volumes of data and ensure data quality by identifying and removing duplicate entries. This helps to maintain data integrity and accuracy, which is essential for effective data management.

Submit
197. You have data for an organization at different locations. How would you go about maintaining metadata?

Explanation

To maintain metadata for an organization at different locations, the best approach would be to create metadata at each location and then merge them periodically. This means that metadata specific to each location would be created and managed separately, and then periodically combined into a central metadata system. This approach ensures that the unique metadata needs of each location are addressed while still maintaining a centralized system for overall organization-wide metadata management.

Submit
198. True for Chngpgs_thresh:

Explanation

This answer is correct because it accurately explains that for heavy update transactions, the value of Chngpgs_thresh should be decreased below the default value. This is because heavy update transactions tend to result in a higher percentage of changed pages in the buffer pool. By decreasing the Chngpgs_thresh value, the asynchronous page cleaners will be started at a lower threshold, ensuring that the buffer pool is cleaned more frequently and efficiently.
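As a hedged example of lowering this threshold for an update-heavy database (the database name and value are illustrative; the same change can be made from the command line processor):

  CALL SYSPROC.ADMIN_CMD('UPDATE DB CFG FOR salesdb USING CHNGPGS_THRESH 40');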

Submit
199. Thresholds can be applied to

Explanation

Thresholds can be applied to metrics. Thresholds are used to set specific values or ranges that determine whether a metric is considered acceptable or not. By applying thresholds to metrics, organizations can monitor performance and identify any deviations from the desired targets. This allows them to take corrective actions and ensure that the metrics are within the desired range. Applying thresholds to other options like filters, consolidations, attributes, or reports would not make sense in the context of monitoring and evaluating performance metrics.

Submit
200. Data quality tools

Explanation

The given answer lists well-known data quality tools: Trillium from Harte-Hanks, FirstLogic from Business Objects, QualityStage from IBM, Informatica Data Quality from Informatica, and dfPower from DataFlux. These tools are all widely used in the industry for improving the quality of data.

Submit