# Deep Learning With Keras - Part I

| By Alireza Akhavan
Questions: 17 | Attempts: 1,177


https://github.com/alireza-akhavan/SRU-deeplearning-workshop/

• 2.

### The avg pooling layer has fewer parameters than max pooling layers.

• A.

True

• B.

False

B. False
Explanation
False: pooling layers have no trainable parameters at all.


• 3.

### In the KNN algorithm, a larger value of K always increases accuracy but reduces the algorithm's execution speed.

• A.

True

• B.

False

B. False
Explanation
Increasing K does not always improve accuracy. A larger K reduces the influence of noise, but it can also over-smooth the decision boundary and wash out local structure, and it increases the computational cost of each prediction. So the claim that a larger K always improves accuracy is false.


• 4.

### If we inspect the neurons of a deep convolutional network after training, some neurons are interpretable and some are not. However, these neurons are equally important, with no priority over one another.

• A.

True

• B.

False

A. True
Explanation
More information:
https://t.me/class_vision/159


• 5.

### Usually, as we move deeper into the hierarchy of a convolutional neural network, the height and width of each layer's activation decrease, while the depth (number of channels) increases.

• A.

True

• B.

False

A. True
Explanation
As we progress through the layers of a convolutional neural network, the length and width of the activation of each layer usually decrease, while the depth or number of channels of each layer increases. This is because the convolutional layers apply filters to the input data, which reduces its spatial dimensions but increases its depth or number of channels. This allows the network to extract more complex and abstract features from the input data as we go deeper into the network. Therefore, the given answer is correct.


• 6.

### Suppose the input is a 300x300 color (RGB) image. (a) If we do not use a convolutional network, the first hidden layer has 100 neurons, and each neuron is fully connected to the input layer, how many parameters will this layer have (including biases)?

• A.

27000100

• B.

900

• C.

100

• D.

90100

A. 27000100
Explanation
(300 x 300 x 3) x 100 weights + 100 biases = 27,000,100
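As a sanity check, the count can be reproduced in a few lines of plain Python; a Keras `Dense(100)` layer on the flattened image would report the same number in `model.summary()`:

```python
# Fully connected layer parameter count: one weight per input value per
# neuron, plus one bias per neuron.
inputs = 300 * 300 * 3   # flattened 300x300 RGB image
neurons = 100
params = inputs * neurons + neurons
print(params)  # 27000100
```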


• 7.

### Suppose the input is a 300x300 color (RGB) image. If we use a convolutional layer with 100 filters of size 5x5, how many parameters will this hidden layer have?

• A.

27000100

• B.

7600

• C.

90000

• D.

2500

B. 7600
Explanation
(5 x 5 x 3) x 100 weights + 100 biases = 7,600
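The same arithmetic as a short Python sketch; note that, unlike the fully connected case, the count does not depend on the 300x300 image size:

```python
# Convolutional layer parameter count: each filter has
# kernel_h x kernel_w x in_channels weights, plus one bias per filter.
kernel, in_channels, filters = 5, 3, 100
params = (kernel * kernel * in_channels) * filters + filters
print(params)  # 7600
```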


• 8.

### Suppose the input volume of a layer is 16x63x63. If it is convolved with 32 filters of size 7x7, stride 2, and no padding, what will the output volume be?

• A.

29x29x63

• B.

7x7x32

• C.

29x29x32

• D.

7x7x1

C. 29x29x32
Explanation
(63 – 7) / 2 + 1 = 29 => 29 x 29 x 32
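The formula above can be sketched as a tiny helper (the function name is just for illustration):

```python
# Output spatial size of a convolution without padding:
# floor((n - k) / stride) + 1, applied to each spatial dimension.
def conv_out(n, k, stride):
    return (n - k) // stride + 1

side = conv_out(63, k=7, stride=2)
print(side)  # 29, so the output volume is 29 x 29 x 32 (one channel per filter)
```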


• 9.

### Suppose the input volume of a layer is 16x32x32. If max pooling with stride 2 and filter size 2 is applied to it, what will the output volume be?

• A.

8x32x32

• B.

16x16x1

• C.

16x32x1

• D.

16x16x16

D. 16x16x16
Explanation
When a max pool operation is applied with a stride of 2 and a filter size of 2 on a 16x32x32 input volume, the output volume will have a size of 16x16x16. This is because the filter moves across the input volume in steps of 2, taking the maximum value within each 2x2 region and creating a new output element. The resulting output volume will have a reduced spatial dimension of 16x16, while the depth remains the same at 16.
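A minimal sketch of the same calculation (helper name chosen for illustration):

```python
# Max pooling shrinks spatial dimensions by the same formula as
# convolution, but the channel count passes through unchanged.
def pool_out(n, k, stride):
    return (n - k) // stride + 1

side = pool_out(32, k=2, stride=2)
print(side)  # 16, so a 16x32x32 input becomes 16x16x16
```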


• 10.

### Which of the following is true about convolutional neural networks (CNNs)?

• A.

CNNs can be applied only to image and text data.

• B.

CNNs can be applied to any 2D or 3D array of data.

• C.

CNNs can be applied only to text and speech data.

• D.

CNNs can be applied only to image data.

B. CNNs can be applied to any 2D or 3D array of data.
Explanation
CNNs can be applied on any 2D and 3D array of data, not just limited to image and text data. This is because CNNs are designed to effectively capture spatial and temporal dependencies in data, making them suitable for various applications such as computer vision, natural language processing, and speech recognition. By using convolutional layers, pooling layers, and fully connected layers, CNNs can learn and extract meaningful features from different types of data, enabling them to perform tasks like image classification, object detection, and sentiment analysis.


• 11.

### Which of the following units can be part of a convolutional neural network?

• A.

Dropout

• B.

Softmax

• C.

Maxpooling

• D.

Relu

• E.

All of the above

E. All of the above
• 12.

### Why do we use activation functions (e.g., relu or sigmoid)?

• A.

To increase the network's dimensionality

• B.

To introduce non-linearity

• C.

To make the network linear

• D.

To reduce the network's size

B. To introduce non-linearity
Explanation
Activation functions like ReLU or sigmoid are used to introduce non-linearity in neural networks. Without activation functions, the neural network would only be able to learn and represent linear relationships between the input and output. However, in many real-world problems, the relationships are non-linear. Activation functions allow the neural network to learn and represent complex non-linear relationships, making it more powerful and capable of solving a wider range of problems.
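A small NumPy sketch makes this concrete: without an activation, stacked linear layers collapse into a single linear layer (the weight shapes here are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))          # a batch of 4 inputs
W1, b1 = rng.standard_normal((3, 5)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((5, 2)), rng.standard_normal(2)

# Two "layers" with no activation in between...
deep = (x @ W1 + b1) @ W2 + b2

# ...are exactly equivalent to one linear layer.
W, b = W1 @ W2, b1 @ W2 + b2
shallow = x @ W + b
assert np.allclose(deep, shallow)

# With a ReLU in between, the equivalence breaks, and the network
# can represent non-linear functions.
relu = lambda z: np.maximum(z, 0.0)
nonlinear = relu(x @ W1 + b1) @ W2 + b2
print(np.allclose(nonlinear, shallow))
```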


• 13.

### Which function cannot be used as an activation function?

• A.

Sigmoid()

• B.

Tanh()

• C.

Sin()

• D.

relu()

C. Sin()
Explanation
The sin() function is unsuitable as an activation function because it is periodic and non-monotonic: infinitely many different inputs map to the same output, which makes gradient-based optimization unstable. Common activation functions such as sigmoid, tanh, and ReLU are monotonic, which avoids this ambiguity.


• 14.

### What is an activation function?

• A.

A function that models a phenomenon or process

• B.

A function that triggers a neuron and generates the output

• C.

A function to normalize the output

• D.

None of the above

• E.

All of the above

B. A function that triggers a neuron and generates the output
Explanation
An activation function determines the output of a neuron in a neural network: it takes the weighted sum of the neuron's inputs and applies a (typically non-linear) transformation, producing an output that is passed on to the next layer.


• 15.

### Which of the following is false about convolutional networks?

• A.

Fully connects to all neurons in all the layers

• B.

Connects only to neurons in a local region (kernel size) of the input image

• C.

Builds feature maps hierarchically in every layer

• D.

Inspired by the human visual system

A. Fully connects to all neurons in all the layers
Explanation
Convolutional neural networks (CNNs) do not fully connect to all neurons in all the layers. Instead, they connect only to neurons in the local region (kernel size) of the input image. This local connectivity allows CNNs to efficiently extract features from images. Additionally, CNNs build feature maps hierarchically in every layer, meaning that they learn and extract increasingly complex features as they go deeper into the network. CNNs are inspired by the human visual system, which also processes visual information in a hierarchical and localized manner.


• 16.

### What does "stride" in max pooling mean?

• A.

The number of pixels the kernel should add.

• B.

The number of pixels the kernel should be moved by.

• C.

The size of the kernel.

• D.

The number of pixels the kernel should remove.

B. The number of pixels the kernel should be moved by.
Explanation
In maxpooling, "strides" refers to the number of pixels that the kernel should be moved by. This means that the kernel will move a certain number of pixels horizontally and vertically to cover the input image or feature map. By adjusting the stride value, we can control the amount of overlap between the kernel's receptive fields and the amount of downsampling that occurs in the output.
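The effect of the stride on downsampling can be sketched with the standard output-size formula (helper name chosen for illustration):

```python
# Stride controls how far the pooling window moves at each step.
# Output size without padding: floor((n - k) / stride) + 1.
def pool_out(n, k, stride):
    return (n - k) // stride + 1

for stride in (1, 2, 4):
    print(stride, pool_out(32, k=2, stride=stride))
# stride 1 -> 31 (heavy overlap), stride 2 -> 16, stride 4 -> 8
```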


• 17.

### Which of the following is true about "VALID" and "SAME" padding?

• A.

Size of Input Image is reduced for "VALID" padding.

• B.

Size of Input Image is Increased for "VALID" padding.

• C.

Size of Input Image is reduced for "SAME" padding.

• D.

Size of Input Image is Increased for "SAME" padding.

A. Size of Input Image is reduced for "VALID" padding.
Explanation
With "VALID" padding, no padding is added: the convolution only covers positions where the kernel fits entirely inside the input, so the output is smaller than the input. With "SAME" padding, the input is padded so that (for stride 1) the output has the same spatial size as the input. Therefore, the output size is reduced when using "VALID" padding.
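A minimal sketch of the two output-size rules, assuming the TensorFlow/Keras padding conventions (helper names are illustrative):

```python
import math

def valid_out(n, k, stride=1):
    # "VALID": no padding; the kernel must fit entirely inside the input.
    return math.floor((n - k) / stride) + 1

def same_out(n, stride=1):
    # "SAME": the input is padded so the output size is ceil(n / stride).
    return math.ceil(n / stride)

print(valid_out(300, 5))  # 296: output shrinks under VALID
print(same_out(300))      # 300: output size preserved under SAME (stride 1)
```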



• Current Version: Mar 21, 2023, Quiz Edited by ProProfs Editorial Team
• Feb 11, 2019, Quiz Created by Alireza Akhavan
