This has been a very popular research direction in recent years, and a large number of algorithms, models, and loss functions have been proposed.
When we minimize it, we may also call it the cost function, loss function, or error function.
Leon Bottou published some useful tables of activation functions, loss functions, and their corresponding derivatives.
To extend the SVM to cases in which the data are not linearly separable, we introduce the hinge loss function.
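The hinge loss mentioned above can be sketched in a few lines. This is a minimal illustration, not any particular library's implementation; the function name and the label convention (labels in {-1, +1}) are assumptions.

```python
import numpy as np

def hinge_loss(y_true, scores):
    """Mean hinge loss; y_true holds labels in {-1, +1}, scores are raw margins."""
    return np.mean(np.maximum(0.0, 1.0 - y_true * scores))

# A correctly classified point with margin >= 1 contributes zero loss,
# so only the second (misclassified) point adds to the average.
print(hinge_loss(np.array([1, -1]), np.array([2.0, -0.5])))  # 0.25
```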
To this end, we provide flexible APIs within which users can define and plug in their own customized loss functions, scoring functions, and metrics.
We want our outputs to be in the same format as our inputs so that we can compare our results using the loss function.
TensorFlow provides optimizers that slowly change each variable so as to minimize the loss function.
Machine learning also has intimate ties to optimization: many learning problems are formulated as the minimization of some loss function on a training set of examples.
This agreement is robust across different architectures, optimization methods, and loss functions.
At each stage, the decision tree hm(x) is chosen to minimize a loss function L given the current model Fm-1(x).
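One such boosting stage can be sketched as follows. This is a toy illustration under assumed simplifications: squared-error loss (so the negative gradient is just the residual) and a hypothetical "tree" that predicts the mean residual everywhere; the function names and learning rate are not from the original text.

```python
import numpy as np

def boosting_stage(F_prev, y, fit_tree, lr=0.1):
    """One stage of gradient boosting with squared-error loss:
    hm fits the negative gradient (the residuals) of L given Fm-1."""
    residuals = y - F_prev          # -dL/dF for squared error
    h = fit_tree(residuals)         # hm chosen to reduce L
    return F_prev + lr * h

# Hypothetical base learner: a "tree" that predicts the mean residual.
y = np.array([1.0, 2.0, 3.0])
F = np.zeros(3)
for _ in range(50):
    F = boosting_stage(F, y, lambda r: np.full_like(r, r.mean()))
print(np.round(F, 2))  # predictions approach the mean of y
```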
Choose hyperparameters (e.g., in the case of deep learning, this includes choosing an architecture, loss function, and optimizer).
Notice that in our hidden layer we added an l1 activity regularizer, which will apply a penalty to the loss function during the optimization phase.
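The effect of such a penalty can be sketched directly. This is a minimal illustration of how an l1 activity penalty augments a base loss during optimization; the function name and coefficient are assumptions, not the API of any particular framework.

```python
import numpy as np

def l1_penalized_loss(base_loss, activations, l1=0.01):
    """Add an l1 activity penalty (sum of absolute activations) to the loss."""
    return base_loss + l1 * np.sum(np.abs(activations))

# The penalty grows with the magnitude of the activations,
# pushing the optimizer toward sparse hidden-layer outputs.
print(l1_penalized_loss(0.5, np.array([0.2, -0.3, 0.0])))  # 0.505
```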
The result is a 3x2 matrix, dLoss/dW2, which will update the original W2 values in a direction that minimizes the loss function.
"Training" the neural network actually means using training images and labels to adjust the weights and biases so as to minimize the cross-entropy loss function.
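The cross-entropy loss being minimized can be sketched as follows. This is a minimal illustration assuming the network's outputs are already class probabilities and the labels are integer class indices; the function name is an assumption.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean cross-entropy: negative log of the probability
    each example assigns to its true class."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

probs = np.array([[0.7, 0.2, 0.1],   # true class 0
                  [0.1, 0.8, 0.1]])  # true class 1
print(cross_entropy(probs, np.array([0, 1])))  # ~0.29
```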
Next, we calculate the slope of the loss function with respect to our weights and biases.
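That slope-then-update loop can be sketched for the simplest possible model. This is a toy illustration, assuming a single linear neuron trained by gradient descent on mean-squared error; all names, data, and hyperparameters here are invented for the example.

```python
import numpy as np

# Synthetic data from a known linear rule, so convergence is easy to check.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
b = 0.0
lr = 0.1
for _ in range(200):
    err = (X @ w + b) - y
    # Slope of the mean-squared-error loss with respect to weights and bias.
    grad_w = 2 * X.T @ err / len(y)
    grad_b = 2 * err.mean()
    # Step against the slope to reduce the loss.
    w -= lr * grad_w
    b -= lr * grad_b

print(np.round(w, 2))  # recovers values close to true_w
```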