Medical image segmentation has advanced considerably in recent years but remains challenging, with many practical obstacles still to overcome. Its applications span many fields of medicine and several imaging modalities, each of which usually requires a tailored solution. Deep learning models have attracted much attention and have lately been recognized as the most successful approach to automated segmentation. In this work we demonstrate the versatility of this technique with a single deep learning architecture capable of performing segmentation on two very different imaging modalities: computed tomography and magnetic resonance. The developed model is fully convolutional, with an encoder-decoder structure and high-resolution pathways; it processes whole three-dimensional volumes at once and learns directly from the data which voxels belong to the regions of interest, localizing them against the background. Applied to two publicly available datasets, the model achieved equivalent results for both imaging modalities, as well as segmenting different organs in different anatomic regions with comparable success.
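
To make the architecture family concrete, the sketch below shows a minimal fully convolutional 3D encoder-decoder with a high-resolution skip pathway, in PyTorch. It is a hypothetical illustration of the general design described above, not the authors' exact model: the class name, layer widths, and depth are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn


class Simple3DSegNet(nn.Module):
    """Illustrative 3D encoder-decoder segmentation network.

    Hypothetical sketch of the architecture family only; layer sizes
    and depth are assumptions, not the paper's actual configuration.
    """

    def __init__(self, in_ch=1, n_classes=2, base=8):
        super().__init__()
        # High-resolution pathway: features at full voxel resolution.
        self.enc1 = nn.Sequential(nn.Conv3d(in_ch, base, 3, padding=1), nn.ReLU())
        # Encoder: stride-2 convolution halves each spatial dimension.
        self.down = nn.Conv3d(base, base * 2, 3, stride=2, padding=1)
        self.enc2 = nn.Sequential(nn.Conv3d(base * 2, base * 2, 3, padding=1), nn.ReLU())
        # Decoder: transposed convolution restores the original resolution.
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        # Fuse upsampled features with the high-resolution pathway.
        self.dec = nn.Sequential(nn.Conv3d(base * 2, base, 3, padding=1), nn.ReLU())
        # 1x1x1 head: per-voxel class scores (region of interest vs background).
        self.head = nn.Conv3d(base, n_classes, 1)

    def forward(self, x):
        hi = self.enc1(x)                            # full-resolution features
        lo = self.enc2(torch.relu(self.down(hi)))    # coarse encoder features
        up = self.up(lo)                             # back to full resolution
        fused = self.dec(torch.cat([hi, up], dim=1))
        return self.head(fused)                      # logits per voxel


# Usage: a whole volume passes through in one forward call, and the
# output keeps the input's spatial shape, one score map per class.
net = Simple3DSegNet()
volume = torch.randn(1, 1, 16, 16, 16)  # (batch, channel, D, H, W)
logits = net(volume)                     # shape (1, 2, 16, 16, 16)
```

Because the network is fully convolutional, the same weights apply to volumes of any size whose dimensions are divisible by the downsampling factor, which is what allows whole three-dimensional scans to be processed at once.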