Automation of biological image analysis is essential to accelerate biomedical research. The study of complex diseases such as neurodegenerative diseases calls for large amounts of data to build models towards precision medicine. Such data acquisition is feasible in the context of high-throughput screening, in which the quality of the results relies on the accuracy of the image analysis. Although state-of-the-art solutions for image segmentation employ deep learning approaches, the high cost of manual data curation hampers their adoption in current biomedical research laboratories. Here, we propose a pipeline that employs deep learning not only to conduct accurate segmentation but also to assist in the creation of high-quality datasets in a less time-consuming way for the experts. Weakly-labelled datasets are becoming a common alternative as a starting point for developing real-world solutions. Traditional approaches based on classical multimedia signal processing were employed to generate a pipeline specifically optimized for the high-throughput screening images of iPSC fused with the rosella biosensor. This pipeline produced good segmentation results, but with several inaccuracies. We employed the weakly-labelled masks produced by this pipeline to train a multiclass semantic segmentation CNN based on the U-net architecture. Since a strong imbalance was detected between the classes, we employed a class-sensitive cost function: the Dice coefficient. Next, we evaluated the accuracy of the weakly-labelled data against the trained network's segmentation using double-blind tests conducted by experts in cell biology with experience in this type of image, as well as traditional metrics that compare the quality of the segmentation against segmentations manually curated by cell biology experts. In all evaluations, the prediction of the neural network surpassed the quality of the weakly-labelled segmentation.
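The class-sensitive Dice cost mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the pipeline's actual training code: the function name and array layout are illustrative assumptions. Averaging the per-class Dice scores gives every class equal weight regardless of its pixel count, which is what counteracts the class imbalance.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Illustrative class-sensitive Dice loss: 1 - mean per-class Dice.

    pred, target: arrays of shape (num_classes, H, W); pred holds
    per-class probabilities, target a one-hot ground-truth mask.
    """
    axes = (1, 2)  # sum over spatial dimensions, keep the class axis
    intersection = np.sum(pred * target, axis=axes)
    denom = np.sum(pred, axis=axes) + np.sum(target, axis=axes)
    # eps keeps empty classes from producing a 0/0 division
    dice_per_class = (2.0 * intersection + eps) / (denom + eps)
    # Mean over classes weights rare classes equally with frequent ones
    return 1.0 - dice_per_class.mean()
```

A perfect prediction yields a loss of 0, and any disagreement on a rare class lowers that class's Dice score as strongly as a disagreement on a dominant class would.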
Another major obstacle that complicates the use of deep learning solutions in wet-lab environments is the lack of user-friendly tools for non-computational experts such as biologists. To complete our solution, we integrated the trained network into a GUI built in the MATLAB environment that requires no programming from the user. This integration allows semantic segmentation of microscopy images to be conducted in a few seconds. In addition, thanks to the patch-based approach, it can be employed on images of different sizes. Finally, human experts can correct potential inaccuracies of the prediction in a simple, interactive way; these corrections can be easily stored and employed to re-train the network to improve its accuracy. In conclusion, our solution addresses two important bottlenecks in translating leading-edge computer vision technologies to biomedical research: on the one hand, the effortless creation of high-quality datasets under expert supervision, taking advantage of the proven ability of our CNN solution to generalize from weakly-labelled inaccuracies; on the other hand, the ease of use provided by the GUI integration of our solution to both segment images and interact with the predicted output. Overall, this approach looks promising for fast adaptability to new scenarios.
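The patch-based approach that lets the network handle images of different sizes can be sketched as below. This is a schematic NumPy illustration, not the MATLAB implementation: the patch size and the `predict_patch` callable (standing in for the trained U-net) are hypothetical, and a real pipeline would typically also overlap patches to smooth seam artefacts.

```python
import numpy as np

def segment_by_patches(image, predict_patch, patch=256):
    """Tile an arbitrarily sized image into fixed-size patches, run the
    network on each tile, and stitch the label maps back together.

    predict_patch: placeholder for the trained network; maps a
    (patch, patch) tile to a (patch, patch) integer label map.
    """
    h, w = image.shape[:2]
    # Reflect-pad so both dimensions become multiples of the patch size
    ph, pw = (-h) % patch, (-w) % patch
    padded = np.pad(image, ((0, ph), (0, pw)), mode="reflect")
    out = np.zeros(padded.shape[:2], dtype=np.int64)
    for y in range(0, padded.shape[0], patch):
        for x in range(0, padded.shape[1], patch):
            tile = padded[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] = predict_patch(tile)
    return out[:h, :w]  # crop the padding away
```

Because only the fixed patch size ever reaches the network, the same trained model serves microscopy images of any resolution, which is what enables the GUI to accept images of different sizes.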