Machine Learning Basics
English
Elementary
Description: An introduction to the basics of machine learning theory, laying out the common concepts and techniques involved.
Questions: 30
Per question: 30 sec
Average time: 4:01
Contest Score: Disqualified
Community Rating: 3.4
Participants: 12
I insist that you provide evidence of a "word-for-word" copy. I agreed about the unfortunate mistakes, but not this.
English is not my native language, so of course I checked some sentences to make them sound more correct. But your .pdf is neither a finished quiz nor even a textbook.
I think that nit-picking over "similar sentences" is not applicable here.
The original verdict was confirmed to be correct.
In addition to the above issues:
Not an original test. Questions and answers were copied from existing tests. E.g.:
#q2: "Although machine learning is an interesting concept, there are limited business applications in which it is useful. Is it true or false?" <= https://quizizz.com/admin/quiz/5d065e32d7b16a001b6fd2f8/machine-learning (see q.1),
#q7: "Which of the following is a technique frequently used in machine learning?" <= https://searchenterpriseai.techtarget.com/quiz/Quiz-Find-out-how-smart-you-are-about-machine-learning-and-AI (see q.7),
#q21: "In ensemble learning, you aggregate the predictions for weak learners, so that an ensemble of these models will give a better prediction than prediction of individual models. They don’t usually overfit and have high bias. Is it true?" <= https://www.analyticsvidhya.com/blog/2017/04/40-questions-test-data-scientist-machine-learning-solution-skillpower-machine-learning-datafest-2017/ (q.21 with minor modifications),
#q22: "Suppose we have a dataset which can be trained with 100% accuracy with help of a decision tree of depth 8. Depth 5 will have high bias and low variance. Is it correct?" <= https://www.analyticsvidhya.com/blog/2017/04/40-questions-test-data-scientist-machine-learning-solution-skillpower-machine-learning-datafest-2017/ (q.34 with minor modifications),
#q28: "What learning rate presented at the picture is the best for a neural network?" <= https://www.cs.toronto.edu/~lczhang/360/files/midterm20191.pdf (q2.a, p.4 with minor modifications).
- Q2: I think this is a subjective question.
- Q6: I think this is a subjective question; most of the answer options are limitations of ML.
- Q9: I don't understand why the correct answer is "Convolutional"; internal memory is an attribute of recurrent NNs. Maybe I misunderstood the meaning of "memory" in the question?
- Q16: I disagree with the answer: the consequence of underfitting is high bias; the consequence of overfitting is high variance.
- Q9: https://en.wikipedia.org/wiki/Recurrent_neural_network
- Q16: https://towardsdatascience.com/overfitting-vs-underfitting-a-complete-example-d05dd7e19765 and a quote on this page: "This is because an underfit model has low variance and high bias."
Q9: The key point here is not just "learnt memory" but "learnt memory of historical results".
A convolutional neural network (CNN) uses historical results to process new data. A CNN has no notion of order in time, and the only input it considers is the current example it has been exposed to.
Recurrent neural networks, by contrast, consider at each step the decisions they have made previously in time (an internal memory/state), not just historical results, i.e. a short-term memory.
"Recurrent networks are distinguished from feedforward networks by that feedback loop connected to their past decisions, ingesting their own outputs moment after moment as input."
Link: https://pathmind.com/wiki/lstm
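To make the distinction concrete, here is a minimal NumPy sketch of my own (an illustration, not code from the quiz): a convolution-style layer applies the same weights to each input independently, carrying no state between steps, while a recurrent layer feeds its previous hidden state back in at every step.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))  # a sequence of 5 time steps, 3 features each

# Convolution-style processing: the same kernel is applied to every window
# independently; nothing is carried over from one step to the next.
w_conv = rng.normal(size=3)
conv_out = np.array([x[t] @ w_conv for t in range(5)])
conv_rev = np.array([x[::-1][t] @ w_conv for t in range(5)])
# Reversing the sequence simply reverses the outputs: no notion of order in time.
assert np.allclose(conv_out[::-1], conv_rev)

# Recurrent processing: the hidden state h is the "short-term memory" --
# the output at step t depends on everything the network has seen before t.
w_in = rng.normal(size=(3, 4))
w_h = rng.normal(size=(4, 4))
h = np.zeros(4)
rnn_out = []
for t in range(5):
    h = np.tanh(x[t] @ w_in + h @ w_h)  # feedback loop: previous h re-enters here
    rnn_out.append(h.copy())
```

Feeding the reversed sequence into the recurrent loop would produce genuinely different hidden states, not just reversed ones, which is exactly the "feedback loop connected to past decisions" from the quote above.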
Q16+Q19: Yes, I completely mixed up underfitting and overfitting. Of course it's high bias. I should have double-checked before sending. Even in the explanation I write about "when model is too simple with regards to the data it's trying to model", i.e. a model that simple by definition cannot suffer from high variance.
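The underfitting/overfitting point can be checked numerically. A small sketch of my own (illustrative, not part of the quiz): fit a too-simple model (degree-0 polynomial) and a too-flexible one (degree-9) to many noisy samples of the same underlying function, then estimate squared bias and variance of the predictions across samples.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 20)
true_f = np.sin(2 * np.pi * x)  # the function we are trying to learn

def fit_predict(degree, seed):
    # Fit a polynomial of the given degree to one noisy sample of the data.
    r = np.random.default_rng(seed)
    y = true_f + r.normal(scale=0.3, size=x.size)
    coeffs = np.polyfit(x, y, degree)
    return np.polyval(coeffs, x)

for degree in (0, 9):
    preds = np.array([fit_predict(degree, s) for s in range(200)])
    bias2 = np.mean((preds.mean(axis=0) - true_f) ** 2)      # squared bias
    variance = np.mean(preds.var(axis=0))                     # variance
    print(f"degree={degree}: bias^2={bias2:.3f}, variance={variance:.3f}")
```

The degree-0 model (underfit) shows high squared bias and low variance; the degree-9 model (overfit) shows the opposite, matching the corrected answer for Q16.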
Still, I don't understand how this is a "learnt memory of historical results": if you consider two sliding windows on the same sample, separated by just a few points or pixels, the second sliding window does not learn from the first one.