Vaidas Armonas
Mar 18, 2021

Thanks for the pointer to Gradio! Looks neat :)

However, if I understood you correctly, you are misleading readers in this post - cross-validation does not eliminate the need for the `test set`. All it does is replace a single validation set with a technique that estimates the performance of a set of parameters better, but it does so by training a set of **different** models, not a single model. That's why, when using the cross-validation approach, once the best parameters are selected, you should train a final model on the whole training dataset and evaluate it on the **test set** to get the "real world" performance measurement.
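
To make that workflow concrete, here is a minimal sketch, assuming scikit-learn and a toy dataset (nothing from the original post): hyperparameters are picked with cross-validation on the training data only, the best configuration is refit on the full training set, and the untouched test set gives the "real world" estimate.

```python
# Minimal sketch, assuming scikit-learn and an illustrative dataset
# (not taken from the original post).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set that is never touched during model selection.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Cross-validation on the training data only: each parameter setting is
# scored by training several *different* models on different folds.
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5]},
    cv=5,
)
search.fit(X_train, y_train)

# GridSearchCV refits the best configuration on the whole training set
# (refit=True by default), so best_estimator_ is the final model.
print("CV estimate:", search.best_score_)
print("Test-set ('real world') score:", search.best_estimator_.score(X_test, y_test))
```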

Thanks for putting this into a single post!

All the best!
