The goal of this paper is to investigate whether purely neuro-mimetic architectures are more efficient for signal compression than architectures that combine neuroscience with state-of-the-art models. We are motivated to produce spikes, using the Leaky Integrate-and-Fire (LIF) model, in order to compress images. Seeking to improve the memory efficiency of the LIF, we compare two quantization approaches: Neuro-inspired Quantization (NQ) and Conventional Quantization (CQ). We show that the purely neuro-mimetic architecture, which combines the LIF with NQ, is more efficient in terms of the rate-distortion trade-off, owing to the dynamic properties embedded in these neuro-mimetic models. To achieve this goal, we first study the dynamic properties of the recently proposed NQ, which is an intuitive way of counting the number of spikes. We show that the observation window and the resistance are the most important parameters of NQ, strongly influencing its behavior, which ranges from non-uniform to uniform. As a result, NQ is more flexible than CQ when applied to real data, while for the same bit rate it ensures higher reconstruction quality.
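To make the spike-counting idea concrete, the following is a minimal sketch, not the paper's actual implementation: a standard LIF neuron discretized with forward Euler, where the number of spikes emitted within an observation window T is taken as the quantized code of a constant input intensity. All parameter values (T, dt, tau, R, v_th) are hypothetical, chosen only for illustration; R and T correspond to the resistance and observation window highlighted above.

```python
def lif_spike_count(current, T=0.1, dt=1e-4, tau=0.01, R=1.0,
                    v_rest=0.0, v_th=1.0):
    """Drive a leaky integrate-and-fire neuron with a constant input
    `current` and count the spikes fired within an observation window T.

    The membrane potential follows dv/dt = (-(v - v_rest) + R*current) / tau
    (forward Euler); the neuron spikes and resets when v crosses v_th.
    All parameter values here are illustrative, not taken from the paper.
    """
    v = v_rest
    spikes = 0
    for _ in range(int(T / dt)):
        v += dt * (-(v - v_rest) + R * current) / tau
        if v >= v_th:
            spikes += 1
            v = v_rest  # reset after each spike
    return spikes

# Larger inputs fire more often: the spike count acts as a (generally
# non-uniform) quantizer of the input intensity, and changing R or T
# reshapes the input-to-count mapping.
counts = [lif_spike_count(i) for i in (0.5, 1.5, 3.0, 6.0)]
print(counts)
```

Note that an input whose steady-state potential R*current stays below v_th never spikes, so weak inputs map to the zero code; this saturation at the low end is one source of the non-uniform behavior the abstract refers to.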