machine learning - How to use weak learners in AdaBoost?


I'm using AdaBoost, and I have a question about the weak learners. In the AdaBoost algorithm below, in step (2), can I use a different algorithm on each round? For example, when k = 1 use kNN, when k = 2 use an SVM, and when k = 3 use a decision tree? Or should I use a single algorithm in every iteration of the for loop?

(1) initialize the weight of each tuple in D to 1/d;
(2) for i = 1 to k do  // for each round:
(3)   sample D with replacement according to the tuple weights to obtain Di;
(4)   use training set Di to derive a model, Mi;
(5)   compute error(Mi), the error rate of Mi (Eq. 8.34);
(6)   if error(Mi) > 0.5 then
(7)     go back to step 3 and try again;
(8)   endif
(9)   for each tuple in Di that was correctly classified do
(10)    multiply the weight of the tuple by error(Mi) / (1 - error(Mi));  // update weights
(11)  normalize the weight of each tuple;
(12) endfor
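For concreteness, here is a minimal Python sketch of that resampling loop (my own illustration, not any library's API). The make_model(i) callback is a hypothetical hook that hands back a fresh classifier for round i, which is exactly the place where a different algorithm per round could be plugged in:

import numpy as np

def adaboost_resampling(X, y, make_model, k=10, seed=0):
    """Sketch of the resampling AdaBoost loop above.

    X, y are numpy arrays; make_model(i) returns a fresh, unfitted
    classifier for round i (so each round may use a different algorithm).
    """
    rng = np.random.default_rng(seed)
    d = len(X)
    w = np.full(d, 1.0 / d)              # (1) each tuple starts at weight 1/d
    models, betas = [], []
    i, retries = 0, 0
    while i < k:                         # (2) for each round
        idx = rng.choice(d, size=d, replace=True, p=w)   # (3) sample Di by weight
        m = make_model(i).fit(X[idx], y[idx])            # (4) derive model Mi
        miss = m.predict(X) != y
        err = w[miss].sum()              # (5) weighted error rate of Mi
        if err > 0.5:                    # (6)-(7) reject Mi, resample and retry
            retries += 1
            if retries > 10:             # safeguard: the pseudocode would loop forever here
                break
            continue
        retries = 0
        err = max(err, 1e-10)            # avoid division by zero when err == 0
        beta = err / (1.0 - err)
        w[~miss] *= beta                 # (9)-(10) shrink weights of correct tuples
        w /= w.sum()                     # (11) normalize the weights
        models.append(m)
        betas.append(beta)               # classifier i later votes with weight log(1/beta)
        i += 1
    return models, betas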

AdaBoost is typically used with weak learners, such as short decision trees. You can use more complicated learners, but in that case AdaBoost might not be the best choice for combining their results.
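For example, the classic weak learner is a decision stump, i.e. a depth-1 tree. A minimal scikit-learn sketch (the keyword is estimator in scikit-learn >= 1.2; older releases call it base_estimator):

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Depth-1 trees ("stumps") are the classic AdaBoost weak learner.
clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # base_estimator= on older versions
    n_estimators=50,
    random_state=0,
)
clf.fit(X, y)
print(clf.score(X, y))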

Most implementations (like scikit-learn's AdaBoostClassifier) assume you are using the same learner at each step, but it shouldn't be difficult to change this.
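If you do want a different algorithm per round, as in your k = 1/2/3 example, one way is to pass a round-dependent factory to a hand-rolled loop like the resampling sketch above (make_model is my hypothetical hook, not part of scikit-learn; note that kNN and SVMs are not typical weak learners, so the ensemble may not benefit):

from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Cycle kNN -> SVM -> decision tree across rounds.
def make_model(i):
    choices = [
        lambda: KNeighborsClassifier(n_neighbors=3),
        lambda: SVC(kernel="linear"),
        lambda: DecisionTreeClassifier(max_depth=1),
    ]
    return choices[i % 3]()

models, betas = adaboost_resampling(X, y, make_model, k=9)

The resampling formulation also sidesteps the fact that KNeighborsClassifier.fit does not accept sample weights; scikit-learn's AdaBoostClassifier reweights rather than resamples, so it requires a base estimator that supports sample_weight.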

Also, this question might be better suited to https://stats.stackexchange.com/.

