交叉熵 - translation into English

cross-entropy (交叉熵)
cross entropy (交叉熵)

Examples of 交叉熵 used in Chinese and their translations into English

从随机量子线路采样是量子计算机的优秀校正基准,称为交叉熵基准测试。
Sampling from random quantum circuits is an excellent calibration benchmark for quantum computers, which we call cross-entropy benchmarking.
最后,我们讲解逻辑回归、交叉熵优化准则及其以第一和第二阶方法来求解的方案。
Finally, we present logistic regression, the cross-entropy optimization criterion and its solution through first- and second-order methods.
请记住,交叉熵涉及对数,它是在softmax层的输出上计算的。
Remember that the cross-entropy involves a log, computed on the output of the softmax layer.
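The point above, that cross-entropy takes a log of the softmax output, is easy to get wrong numerically. A minimal sketch in plain NumPy (function names are my own, not from the source), using the usual max-shift trick for a stable log-softmax:

```python
import numpy as np

def log_softmax(z):
    # Shift by the max before exponentiating so exp() cannot overflow.
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def cross_entropy(logits, target_index):
    # Cross-entropy = minus the log of the softmax probability of the true class.
    return -log_softmax(logits)[target_index]

logits = np.array([2.0, 1.0, 0.1])
loss = cross_entropy(logits, 0)
```

Computing `log(softmax(z))` as two separate steps would first squash probabilities toward 0 and then take `log(0)`; fusing them as above avoids that.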
令人振奋的是交叉熵代价函数给了我们类似的或者更好的结果。
It's encouraging that the cross-entropy cost gives us similar or better results than the quadratic cost.
交叉熵定义为.
Cross-entropy loss is defined as.
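The example above trails off before the definition. The standard formula it refers to, for a true distribution $p$ and a model distribution $q$, is:

```latex
H(p, q) = -\sum_{x} p(x)\,\log q(x)
```

When $p$ is a one-hot label for class $y$, this reduces to $-\log q(y)$, the negative log-probability the model assigns to the correct class.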
这里采用了交叉熵(cross-entropy)来作为cost function。
Cross-entropy is used as the cost function.
因此,我们使用交叉熵作为损失函数。
Therefore, we use cross-entropy as the loss function.
交叉熵方法是一种蒙特卡洛方法,主要用来优化和重要性采样。
The cross-entropy (CE) method is a Monte Carlo method for optimization and importance sampling.
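As a sketch of the idea behind the CE method: repeatedly sample candidates from a parametric distribution, keep an elite fraction, and refit the distribution to the elites. A toy 1-D minimization (the objective and all constants here are my own illustration, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy objective to minimize; its minimum is at x = 3.
    return (x - 3.0) ** 2

mu, sigma = 0.0, 5.0          # initial sampling distribution N(mu, sigma)
for _ in range(30):
    samples = rng.normal(mu, sigma, size=100)
    elite = samples[np.argsort(f(samples))[:10]]   # keep the 10 best candidates
    # Refit the Gaussian to the elites (small floor keeps sigma nonzero).
    mu, sigma = elite.mean(), elite.std() + 1e-6
```

After a few iterations the sampling distribution collapses around the optimum, so `mu` approaches 3.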
损失函数的选择(这里是“交叉熵(cross-entropy)”)将在后面解释。
The choice of a loss function (here, "cross-entropy") is explained later.
在这个图像中,交叉熵被表示为一个具有两个权重的函数。
In this picture, cross-entropy is represented as a function of two weights.
一个具体的例子:最小化平均交叉熵是训练神经网络分类图像的标准方法。
A concrete example: minimizing the average cross-entropy error is a standard way to train neural networks to classify images.
为了计算交叉熵,我们首先需要添加一个新的占位符用于输入正确值:
To implement cross-entropy, we first need to add a new placeholder for inputting the correct answers:
您将注意到,测试和训练数据的交叉熵曲线在数千次迭代后开始分离。
You will have noticed that the cross-entropy curves for the test and training data start to diverge after a couple of thousand iterations.
训练包含14次交叉熵(cross-entropy)训练,然后使用增强型MMI(Maximum Mutual Information)标准进行1次随机梯度下降(SGD)序列训练。
Training consists of 14 passes of cross-entropy followed by 1 pass of Stochastic Gradient Descent (SGD) sequence training using the boosted MMI (Maximum Mutual Information) criterion.
因此我现在深入讨论交叉熵就是因为这是一种开始理解神经元饱和和如何解决这个问题的很好的实验。
And so I have discussed the cross-entropy at length because it's a good laboratory to begin understanding neuron saturation and how it may be addressed.
交叉熵损失函数使用梯度下降进行优化。
Cross-entropy loss functions are optimized using gradient descent.
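As a minimal illustration of that statement, here is hand-rolled gradient descent on the binary cross-entropy for a toy 1-D logistic regression (the data, learning rate, and variable names are all hypothetical, chosen only to make the sketch self-contained):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy separable data: label is 1 exactly when x > 0.
x = rng.normal(size=200)
y = (x > 0).astype(float)

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # sigmoid prediction
    # Gradient of the mean binary cross-entropy w.r.t. w and b.
    grad_w = np.mean((p - y) * x)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# Numerically stable binary cross-entropy computed from the logits.
z = w * x + b
loss = np.mean(np.maximum(z, 0) - y * z + np.log1p(np.exp(-np.abs(z))))
```

The gradient `(p - y) * x` is what makes cross-entropy pleasant to optimize: unlike the quadratic cost, its gradient does not vanish when the sigmoid saturates on a wrong answer.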
在分类树中我们使用交叉熵和基尼指数。
In classification trees, we use cross-entropy and the Gini index.