We are developing a realtime, wearable, open-source speech-processing platform (OSP) that can be configured at compile time and run time by audiologists and hearing aid (HA) researchers to investigate advanced HA algorithms in lab and field studies. The goals of this contribution are to present the current system and to propose areas for enhancement and extension. We identify (i) basic and (ii) advanced features in commercial HAs and describe current signal processing libraries and reference designs for building a functional HA. We present the performance of this system and compare it with commercial HAs using the ANSI S3.22 standard, "Specification of Hearing Aid Characteristics." We then describe a wireless protocol stack for remote control of HA parameters and for uploading media and HA status for offline research. The proposed architecture enables advanced research on compensating for hearing loss by offloading processing from the ear-level assemblies, thereby eliminating the CPU bottleneck and the bottleneck of communication between the left and right HAs.