LANGUAGE MODEL in Chinese translation

['læŋgwidʒ 'mɒdl]
language model

Examples of using Language model in English and their translations into Chinese

The challenge then is to obtain enough data and compute to train such a language model.
接下来的挑战是获取足够的数据和计算力来训练这样的语言模型。
As we start to better understand how to pre-train and initialize our models, pre-trained language model embeddings are poised to become more effective.
随着我们开始更好地理解如何预训练和初始化模型,预训练的语言模型嵌入将变得更加有效。
RNNs have been demonstrated by many people on the internet who created amazing models that can represent a language model.
网上已经有很多人展示了RNN的能力:他们创造出的惊人模型能够表示一种语言模型。
For example, we trained a large language model on 15x more data, which generated a language model containing 19.2 billion N-grams within a few hours.
例如,我们用15倍的数据训练了一个大型语言模型,它在几小时内生成了一个包含192亿N-grams的语言模型。
These days, Cohen says that Google uses 230 billion search queries to train the language model used by Google's speech recognizer.
科恩说,如今Google使用2300亿个搜索查询来训练其语音识别器所使用的语言模型。
Pretraining a language model was first proposed in 2015[34], but it remained unclear whether a single pretrained language model was useful for many tasks.
预训练语言模型最初是在2015年[34]提出的,但是人们一直不清楚单个预训练语言模型是否对多个任务有用。
Word sense disambiguation (left) and POS tagging (right) results of first and second layer bidirectional language model compared to baselines (Peters et al., 2018).
与基线相比,第一层和第二层双向语言模型的词义消歧(左)和词性标注(右)结果(Peters et al., 2018)。
In collaboration with Stanford University and the University of Massachusetts, Google has been working on an AI technique called a "recurrent neural network language model" (RNNLM).
谷歌一直在与斯坦福大学和马萨诸塞大学合作,研究一种被称为“循环神经网络语言模型”(RNNLM)的AI技术。
Their model maximizes what we have described above as the prototypical neural language model objective (for simplicity, the regularization term has been omitted).
他们的模型的最大化目标函数就是我们在上文中介绍的典型神经语言模型的目标(为了简洁,我们省略了正则化(regularization)项)。
Their model maximizes what we have described above as the prototypical neural language model objective (we omit the regularization term for simplicity).
他们的模型的最大化目标函数就是我们在上文中介绍的典型神经语言模型的目标(为了简洁,我们省略了正则化(regularization)项)。
Its GPT-2 product is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages.
其GPT-2产品是一个基于transformer的大型语言模型,包含15亿参数,在一个由800万网页组成的数据集上训练而成。
Despite the unnerving nature of these strange ad libs, however, the language model isn't perfect, which the team freely acknowledges.
然而,尽管这些奇怪的即兴发挥令人不安,这个语言模型并不完美,该团队也坦率地承认了这一点。
Designed to identify valuable information in conversations, it interprets a user's goals and distills valuable information from sentences, for a high quality, nuanced language model.
它旨在识别对话中有价值的信息,解读用户的目标并从句子中提取有价值的信息,从而获得高质量、细致的语言模型。
Character-based RNN language model.
基于字符的RNN语言模型。
What is a Language Model?
语言模型是什么?
What is the Language Model?
语言模型是什么?
Data_iter_random Language Model Data Set.
Data_iter_random语言模型数据集.
Training and Evaluation of the Transformer-XL language model.
Transformer-XL语言模型的训练与评估。
This language model follows a mirror-based design; see.
语言模型使用基于镜像的设计;请参阅.
The notion of a language model is inherently probabilistic.
语言模型的概念本质上是概率性的。
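The last example pair states that a language model is inherently probabilistic. As a minimal sketch of what that means (the toy corpus and `prob` helper below are hypothetical illustrations, not from any of the cited sources), a bigram model estimates the probability of each word given the previous one from simple counts:

```python
from collections import Counter

# Hypothetical toy corpus for illustration only.
corpus = "the cat sat on the mat the cat ate".split()

# Count each adjacent word pair, and each word in a "previous word" position.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def prob(prev, word):
    """P(word | prev), estimated from bigram counts."""
    if unigrams[prev] == 0:
        return 0.0
    return bigrams[(prev, word)] / unigrams[prev]

print(prob("the", "cat"))  # P("cat" | "the") = 2/3: "the" appears 3 times, followed by "cat" twice
```

Real language models add smoothing for unseen pairs and longer contexts (N-grams, or neural networks as in the examples above), but the underlying idea is the same: assign a probability to the next word given what came before.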