In this thesis we explore different approaches to estimating the reliability of individual classification predictions made by classifiers based on supervised learning. In general, reliability denotes the ability to perform required functions under stated conditions; in machine learning it refers to accuracy, that is, the ability to provide accurate predictions. Since measures of reliability are not quantitatively defined, we can only construct estimates of them. Reliability estimates of individual predictions provide valuable information that can aid the assessment of individual predictions in practical use. For the purposes of this thesis we develop several methods for reliability estimation based on existing approaches: local methods and the variance of a bagged model. We test our methods on a range of available real-life and artificial datasets and compare them with methods based on inverse transduction. The methods were evaluated on 20 different datasets with 7 classification models, and the estimates were computed using 11 similarity measures. We applied three statistical methods to the results. We conclude that these tests do not give clear results, as Q-Q plots only vaguely support the computed correlations. Correlation tests indicate the potential of the LCV and BAGV estimates, which achieved the best average performance. The TRANS1 and CNK estimates also performed well, while the remaining estimates failed to excel.
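The bagged-variance idea behind the BAGV estimate can be illustrated with a minimal sketch: the reliability of a single prediction is judged by how much the class-probability predictions of the individual models in a bagging ensemble disagree. The function name and exact normalization below are our illustrative assumptions, not the thesis's formal definition:

```python
import numpy as np

def bagv(pred_probs):
    """Bagged-variance (BAGV) reliability estimate for one instance.

    pred_probs: array of shape (m, c) with the class-probability vectors
    predicted for a single instance by each of the m bootstrap models.
    Returns the mean squared deviation of the individual predictions from
    the averaged (bagged) prediction; higher variance suggests a less
    reliable prediction.
    """
    pred_probs = np.asarray(pred_probs, dtype=float)
    mean_pred = pred_probs.mean(axis=0)  # the bagged (averaged) prediction
    return float(np.mean(np.sum((pred_probs - mean_pred) ** 2, axis=1)))

# Models that agree closely -> low BAGV (more reliable prediction)
stable = [[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]]
# Models that disagree -> high BAGV (less reliable prediction)
unstable = [[0.90, 0.10], [0.20, 0.80], [0.60, 0.40]]
assert bagv(stable) < bagv(unstable)
```

In an actual experiment the probability vectors would come from models trained on bootstrap samples of the training set, and the estimate would be computed per test instance.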