In contrast to standard supervised learning, where the learner receives random training examples, an active learner can select its training examples itself. Examples chosen this way can be more ``informative'', and we show that the same model often needs fewer of them. In this work we present ways in which a learner can choose examples, along with criteria for evaluating their ``informativeness''. We compare different approaches and show that active learning can outperform the standard approach. We present some theoretical foundations of active learning and give criteria that guarantee its success. Finally, we present the results of our own experiments.
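To make the selection step concrete, here is a minimal sketch of pool-based uncertainty sampling, one common informativeness criterion: the learner queries the unlabeled example about which its current model is least certain. The toy one-weight logistic model and all names below are illustrative assumptions, not the method evaluated in this work.

```python
import math


def predict_proba(w, x):
    """Toy logistic model: P(y = 1 | x) under a single weight w (assumed for illustration)."""
    return 1.0 / (1.0 + math.exp(-w * x))


def most_uncertain(w, pool):
    """Return the pool example whose predicted probability is closest to 0.5,
    i.e. the one the current model is least certain about."""
    return min(pool, key=lambda x: abs(predict_proba(w, x) - 0.5))


# Usage: with w = 1.0, the example nearest the decision boundary is queried.
pool = [-3.0, -0.1, 2.0]
query = most_uncertain(1.0, pool)
```

In a full active-learning loop, the queried example would be labeled by an oracle, added to the training set, and the model retrained before the next query.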