Practice Free H13-311_V3.5 Exam Online Questions
TensorFlow is an end-to-end open-source platform for machine learning and deep learning.
- A . TRUE
- B . FALSE
Which of the following descriptions of expectation and variance is incorrect?
- A . Expectation reflects the average level of random variable values
- B . The variance reflects the degree of deviation between the random variable and its mathematical expectation
- C . Expectation and variance are both numerical characteristics of random variables
- D . The greater the expectation, the smaller the variance
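Expectation and variance are independent numerical characteristics: shifting a distribution changes its mean but not its spread. A minimal NumPy check (not part of the original question; NumPy is an assumption here) illustrates this:

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # mean 2.0
y = x + 100.0                             # mean 102.0, identical spread

print(np.mean(x), np.var(x))  # 2.0 2.0
print(np.mean(y), np.var(y))  # 102.0 2.0  (larger expectation, same variance)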
Which of the following machine learning strategies is not an ensemble learning strategy?
- A . Boosting
- B . Stacking
- C . Bagging
- D . Marking
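Boosting, Stacking, and Bagging are all standard ensemble strategies. A minimal sketch using scikit-learn (an assumption; the question does not name any library) shows the three real strategies side by side:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, StackingClassifier

X, y = load_iris(return_X_y=True)

# Bagging: base learners trained on bootstrap samples, predictions averaged or voted.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10).fit(X, y)

# Boosting: base learners trained sequentially, each focusing on earlier errors.
boosting = AdaBoostClassifier(n_estimators=10).fit(X, y)

# Stacking: a meta-learner combines the base learners' predictions.
stacking = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),
).fit(X, y)

print(bagging.score(X, y), boosting.score(X, y), stacking.score(X, y))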
A feature is a dimension that describes a characteristic of a sample.
Regarding the interpretability of features in traditional machine learning and deep learning, which of the following statements is correct?
- A . Features are highly interpretable in traditional machine learning but weakly interpretable in deep learning
- B . Features are weakly interpretable in traditional machine learning but highly interpretable in deep learning
- C . Features are weakly interpretable in both traditional machine learning and deep learning
- D . Features are highly interpretable in both traditional machine learning and deep learning
Huawei Machine Learning Service (MLS) is a one-stop platform that supports the entire data analysis process.
Which of the following is not a feature of MLS?
- A . A rich library of machine learning algorithms.
- B . The machine learning process is intuitive and easy to use.
- C . Distributed and scalable big data computing engine.
- D . Supports the R language but not the Python language.
When you use MindSpore to execute the following code, which of the following is the output?
from mindspore import ops
import mindspore
shape = (2, 2)
ones = ops.Ones()
output = ones(shape, dtype=mindspore.float32)
print(output)
- A . [[1 1]
      [1 1]]
- B . [[1. 1.]
      [1. 1.]]
- C . 1
- D . [[1. 1. 1. 1.]]
B
Explanation:
In MindSpore, using ops.Ones() with a specified shape and dtype=mindspore.float32 will create a tensor of ones with floating-point values. The output will be a 2×2 matrix filled with 1.0 values. The floating-point format (with a decimal point) ensures that the output is in the form of [[1. 1.], [1. 1.]].
Reference: Huawei HCIA-AI Certification, Tensor Operations in MindSpore.
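As a quick check, the shape and element type of the result can be inspected directly (a sketch assuming a working MindSpore installation; the shape and type are passed positionally here, matching the operator's documented inputs):

from mindspore import ops
import mindspore

ones = ops.Ones()
output = ones((2, 2), mindspore.float32)  # shape and element type passed positionally

print(output.shape)  # (2, 2)
print(output.dtype)  # Float32
print(output)        # [[1. 1.]
                     #  [1. 1.]]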
Principal Component Analysis (PCA) is a statistical method. A set of possibly correlated variables is transformed, via an orthogonal transformation, into a set of linearly uncorrelated variables. The transformed variables are called the principal components.
- A . True
- B . False
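As a quick illustration that the orthogonal transformation yields linearly uncorrelated components, here is a minimal sketch using NumPy and scikit-learn (both are assumptions; the question does not reference any library):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x = rng.normal(size=200)
data = np.column_stack([x, 2 * x + rng.normal(scale=0.5, size=200)])  # two correlated variables

# The orthogonal transformation produces linearly uncorrelated components.
components = PCA(n_components=2).fit_transform(data)

# Off-diagonal covariance of the components is zero up to numerical precision.
print(np.round(np.cov(components, rowvar=False), 6))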
Which of the following are correct descriptions of creating a function in Python? (Multiple choice)
- A . A function definition starts with the def keyword, followed by the function name and parentheses.
- B . Parameters are placed inside the parentheses.
- C . The function body starts after a colon and must be indented. (Right Answers)
- D . The result is returned with return, which ends the function.
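A minimal function definition illustrating all four points:

def add(a, b):        # def keyword, function name, parameters in parentheses
    result = a + b    # the body follows the colon and is indented
    return result     # return hands back the result and ends the function

print(add(2, 3))      # 5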
In TensorFlow 2.0, which of the following methods can be used to merge tensors?
- A . join
- B . concat
- C . split
- D . unstack
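Among the options, concat merges tensors, while split and unstack take them apart. A minimal sketch, assuming TensorFlow 2.x is installed:

import tensorflow as tf

a = tf.ones([2, 3])
b = tf.zeros([2, 3])

# tf.concat merges tensors along an existing axis.
merged = tf.concat([a, b], axis=0)
print(merged.shape)  # (4, 3)

# tf.split and tf.unstack do the opposite: they break a tensor apart.
parts = tf.split(merged, num_or_size_splits=2, axis=0)
print(len(parts), parts[0].shape)  # 2 (2, 3)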
All kernels of the same convolutional layer in a convolutional neural network share a weight.
- A . TRUE
- B . FALSE
B
Explanation:
In a convolutional neural network (CNN), each kernel (also called a filter) in the same convolutional layer does not share weights with other kernels. Each kernel is independent and learns different weights during training to detect different features in the input data. For instance, one kernel might learn to detect edges, while another might detect textures.
However, the same kernel’s weights are shared across all spatial positions it moves across the input feature map. This concept of weight sharing is what makes CNNs efficient and well-suited for tasks like image recognition.
Thus, the statement that all kernels share weights is false.
Reference: HCIA-AI Deep Learning Overview: detailed description of CNNs, focusing on kernel operations and the weight-sharing mechanism within a single kernel, but not across different kernels.
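A quick way to see the independence of the kernels (a sketch using tf.keras, which the reference itself does not mention; the layer sizes below are illustrative assumptions): the kernel weight tensor of a convolutional layer contains one separate filter per output channel, while each individual filter is reused across all spatial positions.

import tensorflow as tf

# A convolutional layer with 8 kernels (filters), each 3x3 over 3 input channels.
layer = tf.keras.layers.Conv2D(filters=8, kernel_size=3)
layer.build(input_shape=(None, 32, 32, 3))

kernel, bias = layer.get_weights()
print(kernel.shape)  # (3, 3, 3, 8): eight independent 3x3x3 kernels, not one shared set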