Practice Free H13-311_V3.5 Exam Online Questions
Which of the following is an incorrect description of backpropagation?
- A . The learning process of the backpropagation algorithm consists of a forward propagation process and a backpropagation process.
- B . The backpropagation algorithm is a learning algorithm suitable for multi-layer neural networks, which is based on the gradient descent method.
- C . The backpropagation phase sends training inputs to the network to obtain a stimulus response.
- D . The process is iterated until the response of the network to the input reaches the predetermined target range.
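For reference only, here is a minimal NumPy sketch (an illustrative addition, not part of the exam material) of the forward-propagation and gradient-descent update loop that the options above describe; the toy data, weights, and learning rate are assumptions chosen for demonstration.

```python
import numpy as np

# Minimal sketch: forward propagation plus a gradient-descent weight update
# for a single sigmoid neuron (illustrative only).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2])   # training input (assumed)
y_true = 1.0                # target output (assumed)
w = np.zeros(2)             # weights
b = 0.0                     # bias
lr = 0.1                    # learning rate (assumed)

for _ in range(100):        # iterate until the response approaches the target
    # Forward propagation: compute the network's response to the input
    y_pred = sigmoid(np.dot(w, x) + b)
    # Backpropagation: gradient of the squared error w.r.t. the weights
    grad = (y_pred - y_true) * y_pred * (1.0 - y_pred)
    w -= lr * grad * x      # gradient-descent update
    b -= lr * grad
```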
In order for a machine to be intelligent, it must be knowledgeable. Therefore, there is a research field in artificial intelligence that mainly studies how computers automatically acquire knowledge and skills to achieve self-improvement.
What is the branch of this research called?
- A . Expert system
- B . Machine learning
- C . Neural Network
- D . Natural language processing
TensorFlow is Google’s first-generation proprietary machine learning system.
- A . True
- B . False
Deep learning makes it easy to derive simple mathematical functions from a large amount of high-dimensional data to describe complex decision interfaces.
- A . True
- B . False
Grid search is a method of parameter adjustment.
- A . TRUE
- B . FALSE
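As an illustrative, hedged sketch (not part of the exam material), grid search is commonly run with scikit-learn's GridSearchCV; the estimator and parameter grid below are assumptions chosen purely for demonstration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate hyperparameter values to search exhaustively (illustrative grid)
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}

# Grid search tries every combination and keeps the best one by cross-validation
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```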
Sigmoid, tanh, and softsign activation functions cannot avoid vanishing gradient problems when the network is deep.
- A . TRUE
- B . FALSE
A
Explanation:
Activation functions like Sigmoid, tanh, and softsign suffer from the vanishing gradient problem when used in deep networks. This happens because, in these functions, gradients become very small as the input moves away from the origin (either positively or negatively). As a result, the weights of the earlier layers in the network receive very small updates, hindering the learning process in deep networks. This is one reason why activation functions like ReLU, which avoid this issue, are often preferred in deep learning.
Reference: Huawei HCIA-AI Certification, Deep Learning Overview – Activation Functions and Vanishing Gradient Problem.
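To make this concrete, the short numerical sketch below (an illustrative addition, not from the certification material) evaluates the sigmoid gradient at a few input values and shows how quickly it decays away from the origin.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# The gradient peaks at 0.25 at the origin and decays toward zero as the
# input grows in magnitude, so stacking many such layers multiplies many
# small numbers together and the early layers barely get updated.
for z in [0.0, 2.0, 5.0, 10.0]:
    print(z, sigmoid_grad(z))   # 0.25, ~0.105, ~0.0066, ~0.000045
```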
Which of the following are callback options provided by MindSpore?
- A . SummaryCollector
- B . TrainStep
- C . ModelCheckpoint
- D . LossMonitor
A, C, D
Explanation:
MindSpore provides several callback functions that can be used to monitor, modify, or control the behavior of the training process. These include:
SummaryCollector: Collects summaries such as loss and accuracy for visualization and monitoring.
ModelCheckpoint: Saves model parameters during or after training.
LossMonitor: Monitors the loss values during training and terminates training if the loss becomes NaN or INF.
TrainStep is not a callback but rather a fundamental step in training.
Reference: Huawei HCIA-AI Certification, AI Development Framework – MindSpore Callback Functions.
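For context, here is a minimal usage sketch of these callbacks. It assumes a `model` (a mindspore.train.Model) and a `train_ds` dataset have already been built elsewhere, and the exact import paths can vary between MindSpore versions.

```python
# Sketch only: assumes `model` (mindspore.train.Model) and `train_ds` already exist;
# import paths may differ slightly between MindSpore versions.
from mindspore.train.callback import (
    ModelCheckpoint, CheckpointConfig, LossMonitor, SummaryCollector
)

ckpt_cfg = CheckpointConfig(save_checkpoint_steps=100, keep_checkpoint_max=5)
callbacks = [
    ModelCheckpoint(prefix="net", directory="./ckpt", config=ckpt_cfg),  # save parameters
    LossMonitor(per_print_times=50),            # print loss values during training
    SummaryCollector(summary_dir="./summary"),  # collect data for visualization
]

model.train(10, train_ds, callbacks=callbacks)  # 10 epochs (illustrative)
```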
Which of the following options is not a session mode used by TensorFlow?
- A . Explicitly call the session generation function
- B . Explicitly call the session close function
- C . Through the Python context manager
- D . Multiple POST queries
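For reference, the TensorFlow 1.x-style sketch below (written against the tf.compat.v1 API so it also runs under TensorFlow 2) illustrates the three legitimate session modes listed above; the constants are illustrative only.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # needed when running under TensorFlow 2

a = tf.constant(3.0)
b = tf.constant(4.0)
total = a + b

# Modes A/B: explicitly create the session, then explicitly close it
sess = tf.compat.v1.Session()
print(sess.run(total))
sess.close()

# Mode C: a Python context manager closes the session automatically
with tf.compat.v1.Session() as sess:
    print(sess.run(total))
```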
It is fine to pass in only one image when calling the face comparison service.
- A . TRUE
- B . FALSE
What are the commonly used gradient descent optimization functions? (Multiple Choice)
- A . Stochastic gradient descent
- B . Adadelta
- C . Adagrad
- D . Momentum
- E . RMSProp
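As an illustrative sketch (not part of the exam material), each of these optimizers is exposed by tf.keras; the learning rates and the toy model below are assumptions chosen for demonstration only.

```python
import tensorflow as tf

# Common gradient-descent optimization functions in tf.keras (values are illustrative)
optimizers = {
    "sgd": tf.keras.optimizers.SGD(learning_rate=0.01),                     # stochastic gradient descent
    "momentum": tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),  # SGD with momentum
    "adagrad": tf.keras.optimizers.Adagrad(learning_rate=0.01),
    "adadelta": tf.keras.optimizers.Adadelta(learning_rate=1.0),
    "rmsprop": tf.keras.optimizers.RMSprop(learning_rate=0.001),
}

# Toy model just to show how an optimizer is plugged into training
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=optimizers["rmsprop"], loss="mse")
```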