The distance between the parameters of the model is then regularized in order to encourage the parameters to be similar.
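Such a distance penalty can be sketched as a term added to the training loss; the squared-L2 form, the `lam` weight, and the function name below are illustrative assumptions, not taken from the source.

```python
def distance_penalty(params_a, params_b, lam=0.01):
    """Squared-L2 distance between two parameter vectors, scaled by lam.
    Adding this term to the training loss encourages the two sets of
    parameters to stay similar (an assumed, illustrative form)."""
    return lam * sum((a - b) ** 2 for a, b in zip(params_a, params_b))

# The penalty vanishes when the parameters are identical and grows
# quadratically as they drift apart.
print(distance_penalty([1.0, 2.0], [1.0, 2.0]))  # 0.0
```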
A common choice for finding the model parameters is to minimize the regularized training error given by.
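The equation the sentence refers to is not included in the text; a typical form of a regularized training error, in which the loss $L$, the regularizer $R$, and the weight $\alpha$ are all assumed for illustration, is:

```latex
E(w) \;=\; \frac{1}{n}\sum_{i=1}^{n} L\bigl(y_i,\, f(x_i; w)\bigr) \;+\; \alpha\, R(w)
```

Here $f(x_i; w)$ is the model's prediction on example $x_i$ under parameters $w$, and the second term penalizes complex parameter settings.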
The operation of adjusting a model's parameters during training, typically within a single iteration of gradient descent.
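A single such iteration can be sketched as follows; the learning rate of 0.1 and the toy objective are illustrative assumptions.

```python
def gradient_descent_step(params, grads, lr=0.1):
    """One iteration of gradient descent: move each parameter a small
    step against its gradient."""
    return [p - lr * g for p, g in zip(params, grads)]

# Minimizing f(w) = w0**2 + w1**2, whose gradient at w is 2*w.
w = [1.0, -2.0]
for _ in range(50):
    grads = [2 * p for p in w]
    w = gradient_descent_step(w, grads)
# After repeated steps, w is close to the minimizer [0.0, 0.0].
```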
Such criteria do not take account of the uncertainty in the model parameters, however, and in practice they tend to favour overly simple models.
Not only can it optimize the model parameters, it can also iteratively make guesses about the missing data.
In the end, "no parameters of the model are revealed, and all input data (the drugs, targets, and interactions) are kept private."
This exploration is facilitated both through the modification of model parameters and through the modification of model relationships.
A Bayesian neural network for multi-task learning is proposed, placing a prior on the model parameters to encourage similar parameters across tasks.
The term word embeddings was originally coined by Bengio et al. in 2003, who trained them jointly with the model's parameters in a neural language model.
Subsequently, it summarises the changes as a small update, typically containing the model parameters and corresponding weights.
However, neural network models are also becoming larger and larger, which is reflected in the computation required for their parameters.
Memory, typically dynamic random access memory (DRAM), is required to store input data and model weight parameters, and to perform other functions during both inference and training.
This can be motivated from the perspective of information geometry (Amari, 1998), which considers the differential geometry of the space of model parameters.
Optimization of model parameters.
Appendix 2: Model parameters.
Allow optimization of model parameters.
Now we know the model parameters.
We explored sources of heterogeneity using the HSROC model parameters.