Machine learning (ML) algorithms have grown in popularity in recent years, providing straightforward solutions to a wide range of applications, such as search engines, recommendation systems, robotics, and even self-driving cars. As the last example suggests, ML has been gaining ground in critical systems as well, including terrestrial, avionics, and space applications. However, such systems impose requirements that must be met, as faults in these scenarios are costly, disastrous, or both. First, these systems often have limited processing power on their embedded components. Second, radiation-induced faults are a major concern, especially at avionics altitudes and in space, yet they remain relevant even for ground-level applications. Addressing these requirements, this thesis evaluates the effects of radiation on edge implementations of prominent machine learning algorithms. Initially, FPGA implementations of the Support Vector Machine (SVM) algorithm were evaluated under both fast and thermal neutron radiation, particles that must be considered for ground and avionics applications. This evaluation was complemented by two different fault injection techniques to better understand the effects of faults on the systems. These tests showed that the implementations had a certain level of intrinsic fault tolerance. Following this work, implementations of three algorithms - Artificial Neural Networks (ANN), Random Forest (RF), and SVM - on off-the-shelf microcontrollers (STM32 Nucleo development boards) were evaluated under fast neutron radiation. Following the trend observed in the FPGA implementations, they also presented intrinsic fault tolerance. Furthermore, the RF implementation was able to tolerate all radiation-induced failures. This work was likewise complemented by a fault injection campaign.
The fault injection tool used was developed within the context of this thesis and, while currently restricted to STM32 platforms, could be extended to other boards.