Weak learners are most often used in compositions such as Random Forest or gradient boosting. Methods like decision trees, random forests, and gradient boosting are popularly used in all kinds of data science problems, and boosting algorithms in particular tend to perform well in competitions such as Kaggle, AV Hackathon, and CrowdAnalytix.

While bagging and boosting are both ensemble methods, they approach the problem from opposite directions. Bagging uses complex base models and tries to "smooth out" their predictions, while boosting uses simple base models and tries to "boost" their aggregate complexity. In boosting, the models are constructed sequentially: each subsequent tree aims to reduce the errors of the prior tree, so each model is built to correct the misclassifications of the previous one. From this perspective boosting bears a resemblance to bagging and other committee-based approaches (Section 8.8).

Boosting is a commonly used and effective statistical learning method. The base model is trained iteratively, and in each round the weights of the training examples are modified according to the prediction errors of the previous round: boosting lays more focus on examples which are misclassified, or have higher errors, under the preceding weak rules. An immediate question that should pop into your mind is, "How does boosting identify weak rules?" Boosting can also be viewed as a way of fitting an additive expansion in a set of elementary "basis" functions. As seen in Section 10.1, boosting decision trees improves their accuracy, often dramatically.

AdaBoost was the first really successful boosting algorithm developed for binary classification. We begin by describing the most popular boosting algorithm, due to Freund and Schapire (1997), called "AdaBoost.M1."
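The iterative reweighting described above can be sketched as a minimal AdaBoost.M1 implementation using decision stumps as the weak learners. This is an illustrative sketch, not a library API: the function names (`train_stump`, `adaboost`, `predict`) and the `(feature, threshold, polarity)` stump representation are assumptions made for the example.

```python
import numpy as np

def train_stump(X, y, w):
    # Exhaustively pick the (feature, threshold, polarity) stump that
    # minimizes the weighted classification error under weights w.
    n, d = X.shape
    best, best_err = (0, 0.0, 1), np.inf
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, thr, pol)
    return best, best_err

def stump_predict(stump, X):
    j, thr, pol = stump
    return np.where(pol * (X[:, j] - thr) >= 0, 1, -1)

def adaboost(X, y, rounds=10):
    # AdaBoost.M1 with labels in {-1, +1}: weights start uniform, and each
    # round up-weights the examples the current weak rule misclassified.
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        stump, err = train_stump(X, y, w)
        err = max(err, 1e-10)                    # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)    # vote weight of this round
        pred = stump_predict(stump, X)
        w *= np.exp(-alpha * y * pred)           # raise mistakes, lower hits
        w /= w.sum()                             # renormalize to a distribution
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    # Weighted majority vote of all weak rules.
    score = sum(a * stump_predict(s, X) for a, s in ensemble)
    return np.where(score >= 0, 1, -1)

# Toy usage: a 1-D, linearly separable dataset.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
ensemble = adaboost(X, y, rounds=5)
```

The key line is the weight update `w *= np.exp(-alpha * y * pred)`: examples where `pred` disagrees with `y` have their weight multiplied by `exp(alpha) > 1`, which is exactly how boosting "lays more focus" on previously misclassified examples.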