1.
What data is used in the First Contact Resolution Solution?
Correct Answer
E. All of the above
Explanation
The First Contact Resolution Solution uses all of the listed data sources. CTI data from ACD, IVR, and PBX systems provides information about call routing and customer interactions. Transactional data from Billing systems and CRM gives insight into customer purchases and account activity. Interactions Content data from Text, Speech, and Desktop analytics helps analyze the content and context of customer interactions. Finally, customer-related data from external sources such as Social Media adds further detail about customer preferences and behavior. All of these sources therefore feed the First Contact Resolution Solution.
2.
What processing is done in the First Contact Resolution Solution?
(Multiple answers)
Correct Answer(s)
A. Accurate and automated contact reasoning
B. True customer identifier
D. Sophisticated Repeat Contact Sequencing engines
E. Measurement down to the individual level
Explanation
The First Contact Resolution Solution involves several processing steps. Accurate and automated contact reasoning analyzes the nature of each contact and determines the best course of action. A true customer identifier accurately identifies the customer and retrieves their relevant information. Sophisticated Repeat Contact Sequencing engines manage and prioritize multiple contacts from the same customer. Finally, measurement down to the individual level allows the performance of each customer interaction to be tracked and analyzed.
3.
What are the main characteristics of the Big Data Interaction HUB?
Correct Answer(s)
A. Structured and unstructured data
C. Cloud ready
D. Real Time support
E. Customer Entity
Explanation
The main characteristics of the Big Data Interaction HUB include the ability to handle both structured and unstructured data, being cloud ready, providing real-time support, and having a customer entity. This means that the HUB is capable of processing and analyzing different types of data, can be easily integrated into cloud environments, supports real-time data processing and analysis, and has a focus on customer-related information and interactions.
4.
What are BigData 4 V's?
(Multiple answers)
Correct Answer(s)
A. Volume
B. Variety
C. Value
E. Velocity
Explanation
The 4 V's of Big Data are Volume, Variety, Value, and Velocity. Volume refers to the vast amount of data being generated, Variety refers to the different types and formats of data, Value refers to the insights and value that can be derived from the data, and Velocity refers to the speed at which data is being generated and needs to be processed.
5.
Hadoop technology is based on two main elements:
(multiple answers)
Correct Answer(s)
A. HDFS - Highly distributed file system
D. Map/Reduce
Explanation
Hadoop technology is based on two main elements: HDFS (the Hadoop Distributed File System) and Map/Reduce. HDFS is a distributed file system that stores large datasets across multiple machines, providing high fault tolerance and scalability. Map/Reduce is a programming model for parallel processing across a cluster of computers: it divides the input data into smaller chunks and processes them in parallel, greatly improving the performance of data-processing tasks. Together, HDFS and Map/Reduce form the core components of the Hadoop ecosystem, enabling the efficient storage and processing of big data.
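The map/shuffle/reduce flow described above can be illustrated with a toy, single-process word count. This is a minimal sketch of the programming model, not Hadoop itself; the function names and the in-memory "shuffle" are invented for the example.

```python
from collections import defaultdict

def map_phase(chunk):
    """Map: emit a (word, 1) pair for every word in one chunk of input."""
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: aggregate all values emitted for one key."""
    return key, sum(values)

# The input is split into chunks; on a real cluster each chunk would be
# mapped on a different machine in parallel.
chunks = ["big data big", "data hadoop"]
mapped = [pair for chunk in chunks for pair in map_phase(chunk)]
result = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(result)  # {'big': 2, 'data': 2, 'hadoop': 1}
```

The same three-stage structure scales out because neither map calls on different chunks nor reduce calls on different keys depend on each other.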
6.
Hadoop can be used for:
(Multiple answers)
Correct Answer(s)
A. Data storage
B. Data processing by MapReduce
C. Data replication
E. Data mining and exploration
Explanation
Hadoop can be used for data storage, as it is designed to handle large volumes of data and distribute it across a cluster of computers. It can also process data using the MapReduce framework, which allows for parallel processing and efficient analysis of large datasets. Hadoop supports data replication, which ensures data availability and fault tolerance. Additionally, Hadoop can be used for data mining and exploration, as it provides tools and libraries for analyzing and extracting insights from large datasets.
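The replication point above can be sketched with a toy placement model: each block is stored on several nodes (HDFS defaults to a replication factor of 3), so losing a single node does not lose any data. The node names and round-robin placement policy below are invented for the illustration and are not HDFS's actual placement algorithm.

```python
NODES = ["node1", "node2", "node3", "node4"]
REPLICATION_FACTOR = 3  # HDFS's default

def place_block(block_id, nodes=NODES, factor=REPLICATION_FACTOR):
    """Assign a block to `factor` distinct nodes (toy round-robin policy)."""
    start = block_id % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(factor)]

# Place three blocks across the cluster.
placements = {b: place_block(b) for b in range(3)}

def surviving_copies(block_id, failed_node):
    """Copies of a block that remain readable after one node fails."""
    return [n for n in placements[block_id] if n != failed_node]

# Even if node1 fails, every block still has at least two live copies.
assert all(len(surviving_copies(b, "node1")) >= 2 for b in placements)
```

This is the fault-tolerance trade-off Hadoop makes: it spends storage (three copies of every block) to keep data available through individual machine failures.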
7.
How is NICE using BigData?
Correct Answer
B. NICE is using BigData technology to offer new solutions
Explanation
NICE is utilizing BigData technology to provide innovative solutions. This suggests that NICE is actively involved in the field of BigData and is using it to develop and offer new products or services to its customers.
8.
What is Ocean?
Correct Answer
B. Project name for NICE BigData analytics platform
Explanation
The correct answer is "Project name for NICE BigData analytics platform". This suggests that "Ocean" is a project name specifically associated with the NICE BigData analytics platform. It is not referring to any other technology or platform such as IBM's BigData technology, Hadoop, or Social BigData technology.
9.
The main reason for NICE's strategic partnership with IBM is
Correct Answer
D. All of the above
Explanation
The main reason for NICE's strategic partnership with IBM is that IBM has a comprehensive strategy for Big Data, offers BigInsights (its Hadoop distribution), and has a strong reputation with IT organizations. These factors make IBM a suitable partner for NICE's Big Data needs and lend credibility to the technology solutions involved.
10.
The benefits of Hadoop over Relational DB are:
(multiple answers)
Correct Answer(s)
B. It is cost effective
C. It can grow linearly
D. It is good for structured and unstructured data
Explanation
Hadoop offers several benefits over Relational DB. Firstly, it is cost-effective as it can be deployed on commodity hardware, eliminating the need for expensive specialized hardware. Secondly, it can grow linearly by adding more nodes to the cluster, allowing for scalability and handling large amounts of data. Lastly, Hadoop is suitable for both structured and unstructured data, making it versatile and capable of handling a wide range of data types.