EMBEDDINGS - Translation into Chinese

Examples of using Embeddings in English and their translations into Chinese

Natural Language data can also have multiple channels, in the form of different types of embeddings for example.
自然语言数据也可以有多个通道,例如以不同类型的嵌入形式。
Such tactics are essential since, according to Bolukbasi, "word embeddings not only reflect stereotypes but can also amplify them."
这样的策略是至关重要的,因为根据Bolukbasi的说法,「词嵌入不仅反映了刻板印象,同时还会扩大它们」。
For training and testing, the researchers used a dataset of publicly available word embeddings, called FASTTEXT, with 110 language pairs.
对于模型训练和模型测试,研究人员使用了一个公开的词嵌入数据集FASTTEXT,其中包含110个语言对。
Finally, if we manage to find common embeddings for our images and our words, we could use them to do text to image search!
最后,如果我们设法为图像和单词找到共同的嵌入,我们就可以用它们来进行文本到图像的搜索!
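As a hedged illustration of that last example, here is a minimal sketch of text-to-image search in a shared embedding space, assuming both modalities have already been encoded into the same vector space (all vectors below are random stand-ins, not real encoder outputs):

```python
import numpy as np

# Hypothetical shared embedding space; a real system would produce
# these vectors with trained image and text encoders.
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(1000, 512))          # one row per image
image_embs /= np.linalg.norm(image_embs, axis=1, keepdims=True)

text_emb = rng.normal(size=512)                    # embedded query text
text_emb /= np.linalg.norm(text_emb)

# With unit vectors, cosine similarity reduces to a dot product;
# the best-matching images are simply the highest-scoring rows.
scores = image_embs @ text_emb
top5 = np.argsort(-scores)[:5]
print(top5, scores[top5])
```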
You can now use Amazon SageMaker's BlazingText implementation of the Word2Vec algorithm to generate word embeddings from a large number of documents.
您现在可以使用 Amazon SageMaker 中 Word2Vec 算法的 BlazingText 实现,从大量文档中生成词嵌入。
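BlazingText itself runs as a managed SageMaker training job; as a rough local analogue, here is how the same skip-gram Word2Vec training looks with gensim (a deliberate library swap, and the toy corpus is invented):

```python
from gensim.models import Word2Vec

# Toy corpus: one tokenized sentence per list entry (invented data).
corpus = [
    ["word", "embeddings", "capture", "meaning"],
    ["embeddings", "map", "words", "to", "vectors"],
]

# sg=1 selects skip-gram, the Word2Vec variant BlazingText also accelerates.
model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                 min_count=1, sg=1, workers=2)

vec = model.wv["embeddings"]   # 100-dimensional word vector
print(vec.shape)
```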
As we start to better understand how to pre-train and initialize our models, pre-trained language model embeddings are poised to become more effective.
随着我们开始对预训练和模型初始化的方式有更好的了解,预训练的语言模型嵌入将会变得更加有效。
Dropout layers first gained popularity through their use in CNNs, but have since been applied to other layers, including input embeddings or recurrent networks.
Dropout层最初是通过在CNN中的使用而流行起来的,但后来被应用到其他层,包括输入嵌入或循环网络。
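A minimal PyTorch sketch of that idea, applying dropout directly to the input embeddings (all sizes here are arbitrary assumptions):

```python
import torch
import torch.nn as nn

# Dropout applied to input embeddings, as described above.
embed = nn.Embedding(num_embeddings=10_000, embedding_dim=128)
drop = nn.Dropout(p=0.1)

tokens = torch.randint(0, 10_000, (4, 32))   # (batch, seq_len)
x = drop(embed(tokens))                      # zeros out 10% of activations
print(x.shape)                               # torch.Size([4, 32, 128])
```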
Chris Manning and Richard Socher have put a lot of effort into developing compositional models that combine neural embeddings with more traditional parsing approaches.
Chris Manning 和 Richard Socher 已经投入了大量精力来开发组合模型,将神经嵌入与更传统的句法分析方法结合起来。
To validate that the model outputs high uncertainty for OOV, we took a validation set and switched all the advertisers' embeddings to OOV.
为了验证模型输出OOV的高度不确定性,我们采用了验证集并将所有广告客户嵌入向量转换为OOV。
Using the same word embeddings as above, for instance, here are the three nearest neighbors for each word and the corresponding angles.
例如,使用与上文相同的词嵌入,下面列出每个词的三个最近邻及相应的夹角。
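A small sketch of such a nearest-neighbor query with numpy, recovering the angle from the cosine similarity (the vocabulary and vectors are hypothetical stand-ins, not the embeddings referenced above):

```python
import numpy as np

# Hypothetical embedding table: word -> unit vector (random stand-ins).
rng = np.random.default_rng(1)
vocab = ["king", "queen", "apple", "orange", "car"]
vectors = rng.normal(size=(len(vocab), 50))
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

def nearest(word, k=3):
    i = vocab.index(word)
    sims = vectors @ vectors[i]          # cosine similarity to every word
    sims[i] = -np.inf                    # exclude the query word itself
    order = np.argsort(-sims)[:k]
    # angle in degrees is the arccosine of the cosine similarity
    return [(vocab[j], np.degrees(np.arccos(np.clip(sims[j], -1, 1))))
            for j in order]

print(nearest("king"))
```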
The learned similarity of the embeddings will be helpful for tasks like fraud detection.
所学的embedding相似性将有助于执行欺诈检测等任务。
We're seeing multilingual embeddings perform better for English, German, French, and Spanish, and for languages that are closely related.
我们看到,多语言词嵌入对英语、德语、法语和西班牙语,以及彼此关系密切的语言,有更好的表现。
Leveraging knowledge from unlabeled data via pre-trained embeddings is an instance of transfer learning.
通过预训练的嵌入来利用未标注数据的知识是迁移学习的一个实例。
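A hedged sketch of that transfer-learning pattern in PyTorch: initializing an embedding layer from pre-trained vectors (random stand-ins here for, e.g., FastText vectors) and optionally freezing it:

```python
import torch
import torch.nn as nn

# Stand-in for a matrix of pre-trained word vectors (vocab 10k, dim 300).
pretrained = torch.randn(10_000, 300)

# freeze=True keeps the transferred knowledge fixed during fine-tuning;
# set freeze=False to let the downstream task adapt the vectors.
embed = nn.Embedding.from_pretrained(pretrained, freeze=True)

tokens = torch.randint(0, 10_000, (2, 16))
print(embed(tokens).shape)   # torch.Size([2, 16, 300])
```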
Word and sentence embeddings have become an essential part of any Deep-Learning-based natural language processing systems.
词语和句子的嵌入已经成为了任何基于深度学习的自然语言处理系统必备的组成部分。
Second, the inputs of the BiLSTM-CRF model are those embeddings and the outputs are the predicted labels for the words in sentence $x$.
其次,BiLSTM-CRF 模型的输入是上述 embeddings,输出是句子 $x$ 中每个单词的预测标签。
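A simplified PyTorch sketch of the BiLSTM half of that model: embeddings in, per-token label scores (emissions) out. The CRF layer that decodes the final label sequence is omitted, and all sizes are assumptions:

```python
import torch
import torch.nn as nn

num_labels, dim, hidden = 9, 128, 64

lstm = nn.LSTM(input_size=dim, hidden_size=hidden,
               bidirectional=True, batch_first=True)
to_labels = nn.Linear(2 * hidden, num_labels)

x = torch.randn(1, 20, dim)        # embeddings for a 20-token sentence x
out, _ = lstm(x)                   # (1, 20, 2 * hidden)
emissions = to_labels(out)         # per-token scores for each label
print(emissions.shape)             # torch.Size([1, 20, 9])
```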
These learned embeddings are then successfully applied to another task- recommending potentially interesting documents to users, trained based on clickstream data.
然后,这些学到的嵌入被成功应用于另一项任务:基于点击流数据训练,向用户推荐可能感兴趣的文档。
As word embeddings are a key building block of deep learning models for NLP, word2vec is often assumed to belong to the same group.
正因为词嵌入模型是自然语言处理中深度学习的一个关键的模块,word2vec通常也被归于深度学习。
Word embeddings drastically improve tasks like text classification, named entity recognition, and machine translation.
单词embedding大大改进了诸如文本分类、命名实体识别、机器翻译等任务。
Similar images will have similar embeddings, meaning a high cosine similarity between embeddings.
相似的图像将具有相似的嵌入,即嵌入之间的余弦相似度很高。
Presents a CNN architecture to predict hashtags for Facebook posts, while at the same time generating meaningful embeddings for words and sentences.
用 CNN 预测 Facebook 帖子的标签,同时为词语和句子生成有意义的嵌入。