Selection modulates gene sequence evolution in different ways, either by constraining potential changes to amino acid sequences (purifying selection) or by favoring new, adaptive genetic variants (positive selection). The number of nonsynonymous differences in a pair of protein-coding sequences can be used to quantify the mode and strength of selection. To control for regional variation in substitution rates, the proportionate number of nonsynonymous differences (d(N)) is divided by the proportionate number of synonymous differences (d(S)). The resulting ratio (d(N)/d(S)) is a widely used indicator of functional divergence, employed to identify particular genes that underwent positive selection. With the ever-growing amount of genome data, summary statistics such as mean d(N)/d(S) allow information on the mode of evolution to be gathered for entire species. Both applications hinge on the assumption that d(S) and mean d(S) (approximately, branch length) are neutral and adequately control for variation in substitution rates across genes and across organisms, respectively. Here we explore the validity of this assumption using empirical data based on whole-genome protein sequence alignments between human and 15 other vertebrate species, together with several simulation approaches. We find that d(N)/d(S) does not appropriately reflect the action of selection, as it is strongly influenced by its denominator (d(S)). Particularly for closely related taxa, such as human and chimpanzee, d(N)/d(S) can be misleading and is not an unadulterated indicator of selection. Instead, we suggest that inconsistencies in the behavior of d(N)/d(S) are to be expected and highlight the idea that this behavior may be inherent to taking the ratio of two randomly distributed variables that are nonlinearly correlated. New null hypotheses will be needed to adequately handle these nonlinear dynamics.
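The core statistical point above — that the ratio of two noisy, correlated quantities misbehaves even when selection is constant — can be illustrated with a minimal simulation. The sketch below (an assumption for illustration, not the paper's actual pipeline; site counts, branch-length distribution, and the Poisson substitution model are all simplifying choices) generates per-gene d(N) and d(S) under a single true omega and shows that the per-gene estimate d(N)/d(S) nonetheless varies systematically with d(S):

```python
import numpy as np

rng = np.random.default_rng(0)

n_genes = 20_000
omega = 0.2                      # true, constant dN/dS for every gene
syn_sites, nonsyn_sites = 300, 700  # hypothetical site counts per gene

# Per-gene branch lengths vary, mimicking regional mutation-rate variation.
t = rng.gamma(shape=2.0, scale=0.01, size=n_genes)

# Substitution counts are Poisson-distributed around their expectations.
syn_subs = rng.poisson(syn_sites * t)
nonsyn_subs = rng.poisson(nonsyn_sites * t * omega)

# Proportionate numbers of synonymous / nonsynonymous differences.
dS = syn_subs / syn_sites
dN = nonsyn_subs / nonsyn_sites

keep = dS > 0                    # the ratio is undefined when dS == 0
ratio = dN[keep] / dS[keep]

# Even with a constant true omega, the estimated per-gene ratio is
# upwardly biased and correlates with its own denominator dS.
r = np.corrcoef(dS[keep], ratio)[0, 1]
print(f"mean dN/dS = {ratio.mean():.3f} (true omega = {omega})")
print(f"corr(dS, dN/dS) = {r:.3f}")
```

Under these assumptions, the mean estimated ratio exceeds the true omega (a Jensen's-inequality effect of dividing by a noisy d(S)), and the per-gene ratio is negatively correlated with d(S) — the denominator dependence described in the abstract, arising with no change in selection at all.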