FLINK in English translation

Examples of using Flink in Chinese and their translations into English

Checkpoint的主要目标是充当Flink中的恢复机制,确保能从潜在的故障中恢复。
Checkpoints' primary objective is to act as a recovery mechanism in Apache Flink, ensuring a fault-tolerant processing framework that can recover from potential job failures.
这保证了FlinkRESTAPI的稳定性,因此可以在Flink中针对稳定的API开发第三方应用程序。
This guarantees the stability of the Flink REST API, so third-party applications can be developed against stable APIs in Flink.
Flink支持的最高级语言是SQL,它在语义上类似于表API,并将程序表示为SQL查询表达式。
The highest-level language supported by Flink is SQL, which is semantically similar to the Table API and represents programs as SQL query expressions.
在过去几年中,我们对Flink的checkpoint机制有过深入的描述,这是Flink有能力提供Exactly-Once语义的核心。
Over the past few years, we have written in depth about Flink's checkpointing, which is at the core of Flink's ability to provide exactly-once semantics.
这些项目没有一个能和Flink在开源社区的规模上相提并论。
And none of these projects have been able to attract an open source community comparable to the Flink community.
最新版本包括超过420个已解决的问题以及Flink的一些新增内容,About云将在本文的以下部….
The latest release includes more than 420 resolved issues and some new additions to Flink, which will be described in the following sections of this article.
Flink作业刚开始会处于created状态,然后切换到running状态,当所有任务都执行完之后会切换到finished状态。
A Flink job is initially in the created state, then switches to running, and once all of its tasks have finished executing it switches to the finished state.
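The happy-path lifecycle described above can be sketched as a small state machine. This is an illustrative model only, not Flink's actual `JobStatus` enum, which also covers failure, cancellation, and restart states:

```java
// Minimal sketch of the created -> running -> finished lifecycle.
enum JobState { CREATED, RUNNING, FINISHED }

public class JobLifecycle {
    private JobState state = JobState.CREATED;   // a job starts in the created state

    public void start() {
        if (state != JobState.CREATED) {
            throw new IllegalStateException("only a created job can start running");
        }
        state = JobState.RUNNING;                // switches to running
    }

    public void finishAllTasks() {
        if (state != JobState.RUNNING) {
            throw new IllegalStateException("only a running job can finish");
        }
        state = JobState.FINISHED;               // once all tasks have completed
    }

    public JobState getState() { return state; }

    public static void main(String[] args) {
        JobLifecycle job = new JobLifecycle();
        job.start();
        job.finishAllTasks();
        System.out.println(job.getState());      // FINISHED
    }
}
```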
DataArtisans的MikeWinters回顾了Flink在2016年取得的成就,但并未使用“机器学习”这个词。
data Artisans' Mike Winters reviewed Flink's accomplishments in 2016 without using the words “machine learning.”
在过去五个月的时间里,Flink社区共解决了超过780个issues。
Over the past five months, the Apache Flink community has resolved more than 780 issues.
ApacheFlink的数据流编程模型在有限和无限数据集上提供单次事件(event-at-a-time)处理。
Apache Flink's dataflow programming model provides event-at-a-time processing on both finite and infinite datasets.
如果不通过这些接口调用Flink程序,那么程序运行环境为本地环境。
If the Flink program is not invoked through these interfaces, it runs in a local environment.
当代码在DataflowSDK中被实现后,就可以运行在多个后端,如Flink和Spark。
Once the code is implemented with the Dataflow SDK, it can run on multiple backends such as Flink and Spark.
使用connector并不是唯一可以使数据进入或者流出Flink的方式。
Using a connector isn't the only way to get data in and out of Flink.
在Flink1.7.0,我们更关注实现快速数据处理以及以无缝方式为Flink社区构建数据密集型应用程序。
In Flink 1.7.0, we focused on enabling fast data processing and on building data-intensive applications for the Flink community in a seamless way.
这是ApacheFlink1.7.0的一个重要补充,它为FlinkSQL提供了MATCH_RECOGNIZE标准的初始支持。
This is an important addition to Apache Flink 1.7.0, which provides initial support for the MATCH_RECOGNIZE standard for Flink SQL.
因为包装了最佳实践,dataArtisansPlatformApplicationManager不仅适用于生产部署:它也适用于开始应用Flink
Since it encodes best practices, the data Artisans Platform Application Manager isn't just for production deployments: it's good for getting started with Flink as well.
保存点可以在不丢失应用程序状态的情况下对Flink程序或Flink群集进行更新。
Savepoints enable updates to a Flink program or a Flink cluster without losing the application's state.
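The savepoint workflow above can be sketched with Flink's command-line client. The job ID, savepoint directory, and jar name below are placeholders, and exact flag spellings may vary by Flink version, so treat this as a hedged outline rather than a copy-paste recipe:

```shell
# Trigger a savepoint for a running job (job ID and target path are placeholders).
flink savepoint a1b2c3d4e5f6 hdfs:///flink/savepoints

# Stop the job, then upgrade the application jar or the cluster itself.
flink cancel a1b2c3d4e5f6

# Resume the upgraded program from the savepoint, restoring its state.
flink run -s hdfs:///flink/savepoints/savepoint-a1b2c3 myjob.jar
```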
会议的另一个热门话题是流式SQL,我们将继续在Flink中添加更多的SQL支持和TableAPI的支持。
Another popular topic of conversation at the conference was streaming SQL, and we are continuing to add more SQL and Table API support in Flink.
但是,Flink还可以访问Hadoop的分布式文件系统(HDFS)来读取和写入数据,以及Hadoop的下一代资源管理器(YARN)来配置群集资源。
However, Flink can also access Hadoop's distributed file system (HDFS) to read and write data, and Hadoop's next-generation resource manager (YARN) to provision cluster resources.
另一方面,Flink作为一个流引擎,从一开始就必须面对这个问题,并将托管状态作为一个通用的解决方案引入。
On the other hand, Flink, as a streaming engine, had to face this problem from the beginning, and it introduced managed state as a general solution.