Dropout
Regularization
Batch normalization
Adding more layers
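These options all concern overfitting: dropout, regularization, and batch normalization are used to reduce it, while adding more layers only increases model capacity. As an illustration, a minimal NumPy sketch of inverted dropout (function name and parameters here are illustrative, not from the quiz):

```python
import numpy as np

def inverted_dropout(activations, p_drop=0.5, training=True, rng=None):
    """Zero each unit with probability p_drop during training, then rescale
    survivors by 1/(1 - p_drop) so the expected activation is unchanged."""
    if not training or p_drop == 0.0:
        return activations  # dropout is the identity at test time
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)
```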
Multi-layer perceptron
Recurrent neural network
Convolutional neural network
Perceptron
I only
II only
I & II only
I, II and III
g(z)(1-g(z))
g(z)(1+g(z))
-g(z)(1+g(z))
g(z)(g(z)-1)
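Assuming g(z) denotes the logistic sigmoid (the question text itself is not shown here), its derivative can be worked out directly:

$$
g(z) = \frac{1}{1 + e^{-z}}, \qquad
g'(z) = \frac{e^{-z}}{(1 + e^{-z})^{2}}
      = \frac{1}{1 + e^{-z}} \cdot \frac{e^{-z}}{1 + e^{-z}}
      = g(z)\bigl(1 - g(z)\bigr)
$$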
Always finds the globally optimal solution
Finds a locally optimal solution which may be globally optimal
Never finds the globally optimal solution
Finds a locally optimal solution which is never globally optimal
Rate this question:
Is an ensemble of some selected hypotheses in the hypothesis space
Is an ensemble of all the hypotheses in the hypothesis space
Is the hypothesis that gives best result on test instances
None of the above
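The question text is not shown, but these options match the textbook definition of the Bayes optimal classifier: the ensemble of all hypotheses in the hypothesis space H, each weighted by its posterior probability given the training data D:

$$
c^{*} = \arg\max_{c \in C} \sum_{h \in H} P(c \mid h)\, P(h \mid D)
$$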
It can learn an OR function
It can learn an AND function
The separating hyperplane obtained depends on the order in which the points are presented during training.
For a linearly separable problem, there exists some initialization of the weights which might lead to non-convergent cases.
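For reference, a minimal perceptron training loop in Python (names are illustrative). The final weight vector depends on the order in which points are visited, while the perceptron convergence theorem guarantees convergence on linearly separable data for any initialization:

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Classic perceptron rule; labels y must be in {-1, +1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # fold the bias in as an extra input
    w = np.zeros(Xb.shape[1])                  # any init converges on separable data
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):              # visiting order shapes the final hyperplane
            if yi * (w @ xi) <= 0:             # misclassified (or on the boundary)
                w += yi * xi
                mistakes += 1
        if mistakes == 0:                      # converged: all points classified correctly
            break
    return w
```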
Option 1
Option 2
Option 3
Option 4
In batch gradient descent, we update the weights and biases of the neural network after a forward pass over each training example.
In batch gradient descent, we update the weights and biases of the neural network after a forward pass over all the training examples.
Each step of stochastic gradient descent takes more time than each step of batch gradient descent.
None of these three options is correct
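To make the contrast concrete: batch gradient descent computes the gradient over the whole training set before each update, while stochastic gradient descent updates after every single example. A minimal sketch, with a hypothetical grad_loss(w, x, y) supplied by the caller:

```python
import numpy as np

def batch_gd_step(w, X, y, grad_loss, lr=0.1):
    """Batch GD: one weight update from the gradient averaged over ALL examples."""
    g = np.mean([grad_loss(w, xi, yi) for xi, yi in zip(X, y)], axis=0)
    return w - lr * g

def sgd_epoch(w, X, y, grad_loss, lr=0.1):
    """SGD: one cheap weight update after EACH example; many updates per pass."""
    for xi, yi in zip(X, y):
        w = w - lr * grad_loss(w, xi, yi)
    return w
```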