Machine Learning: 5.4 Stacking


Contents
        • Multi-layer Stacking
        • Overfitting in Multi-layer Stacking

  • Combine multiple base learners to reduce variance

    • Base learners can be different model types
    • Linearly combine the base learners' outputs with learned parameters
  • Widely used in competitions

  • Bagging vs. stacking

    • Bagging: bootstrap samples to get diversity

    • Stacking: different types of models extract different features
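The linear combination above can be sketched in a few lines of numpy. This is a minimal illustration, not the course's code: the toy data, the two base learners (closed-form ridge and a k-NN regressor, chosen only to show two different model types), and the least-squares combiner are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data: y = sin(x) + noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

def fit_ridge(X, y, lam=1e-3):
    # Closed-form ridge regression on [1, x] features.
    Phi = np.hstack([np.ones((len(X), 1)), X])
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
    return lambda Xn: np.hstack([np.ones((len(Xn), 1)), Xn]) @ w

def fit_knn(X, y, k=5):
    # k-nearest-neighbour regressor: a different model type, for diversity.
    def predict(Xn):
        d = np.abs(Xn[:, :1] - X[:, 0][None, :])  # pairwise distances
        idx = np.argsort(d, axis=1)[:, :k]
        return y[idx].mean(axis=1)
    return predict

# Train base learners on one half, learn combination weights on the other.
X_tr, y_tr, X_val, y_val = X[:100], y[:100], X[100:], y[100:]
base = [fit_ridge(X_tr, y_tr), fit_knn(X_tr, y_tr)]

# Stack validation-set predictions as features for the linear combiner.
P = np.column_stack([f(X_val) for f in base])
w, *_ = np.linalg.lstsq(P, y_val, rcond=None)  # learned combination weights

def stacked_predict(Xn):
    # Stacked output: learned linear combination of base learner outputs.
    return np.column_stack([f(Xn) for f in base]) @ w
```

Because the pure-ridge solution (weights 1 and 0) is inside the combiner's search space, the learned combination can do no worse than the single base learner on the data it was fit on.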

Multi-layer Stacking
  • Stacking base learners in multiple levels to reduce bias
    • Can use a different set of base learners at each level
  • Upper levels (e.g. L2) are trained on the outputs of the level below (e.g. L1)
    • Concatenating the original inputs with the lower level's outputs helps
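A minimal two-level sketch of the idea, assuming hypothetical data and learners (a least-squares linear model and a depth-1 decision stump as two different model types). For simplicity it trains both levels on the same data, which is exactly the overfitting risk the next section addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data with a nonlinear component.
X = rng.normal(size=(300, 2))
y = X[:, 0] ** 2 + X[:, 1] + 0.1 * rng.normal(size=300)

def fit_linear(X, y):
    # Least-squares linear model with a bias term.
    Phi = np.hstack([np.ones((len(X), 1)), X])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda Xn: np.hstack([np.ones((len(Xn), 1)), Xn]) @ w

def fit_stump(X, y):
    # Depth-1 decision stump on feature 0: a second model type.
    t = np.median(X[:, 0])
    left, right = y[X[:, 0] <= t].mean(), y[X[:, 0] > t].mean()
    return lambda Xn: np.where(Xn[:, 0] <= t, left, right)

# Level-1 (L1) base learners trained on the raw inputs.
l1 = [fit_linear(X, y), fit_stump(X, y)]
P1 = np.column_stack([f(X) for f in l1])

# Level-2 (L2) input: L1 outputs concatenated with the original features.
X2 = np.hstack([P1, X])
l2 = fit_linear(X2, y)

def predict(Xn):
    # Run L1, concatenate with the original inputs, then run L2.
    P = np.column_stack([f(Xn) for f in l1])
    return l2(np.hstack([P, Xn]))
```

Since the L2 feature set contains the original inputs, the stacked model can fall back to the plain linear fit, so its training error is never worse than that baseline.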
Overfitting in Multi-layer Stacking
  • Train learners at different levels on different data to alleviate
    overfitting

  • Split the training data into A and B; train the L1 learners on A, then run inference on B to generate training data for the L2 learners

  • Repeated k-fold bagging:

    • Train k models as in k-fold cross validation
    • Combine predictions of each model on out-of-fold data
    • Repeat steps 1 and 2 n times; average the n predictions of each example to form the training data for the next level
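The steps above can be sketched as follows. This is a minimal numpy version under assumed choices: a hypothetical linear base learner, toy data, and simple `permutation`/`array_split` folds in place of a library splitter. Each model predicts only on its held-out fold, and the out-of-fold predictions are averaged over the repeats.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data with a known linear relationship.
X = rng.normal(size=(120, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=120)

def fit_linear(X, y):
    # Least-squares linear model with a bias term.
    Phi = np.hstack([np.ones((len(X), 1)), X])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda Xn: np.hstack([np.ones((len(Xn), 1)), Xn]) @ w

def repeated_kfold_oof(X, y, fit, k=5, n_repeats=3, seed=0):
    """Out-of-fold predictions, averaged over n_repeats shuffled k-fold splits."""
    rng = np.random.default_rng(seed)
    total = np.zeros(len(y))
    for _ in range(n_repeats):
        idx = rng.permutation(len(y))
        folds = np.array_split(idx, k)
        oof = np.empty(len(y))
        for fold in folds:
            train = np.setdiff1d(idx, fold)   # all examples outside this fold
            model = fit(X[train], y[train])
            oof[fold] = model(X[fold])        # predict on the held-out fold only
        total += oof
    # The next level trains on these averaged out-of-fold predictions.
    return total / n_repeats

oof = repeated_kfold_oof(X, y, fit_linear)
```

Every example's prediction comes from models that never saw it, so the next level's training data is not contaminated by the lower level's training-set fit; averaging over the n repeats reduces the variance introduced by any single random split.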

Original article: https://54852.com/langs/578295.html