
Multiple-ν support vector regression based on spectral risk measure minimization

DOI: 10.1016/j.neucom.2012.09.002
  • Conditional Value-At-Risk
  • Spectral Risk Measure
  • Support Vector Regression
  • Kernel Methods
  • Computer Science
  • Mathematics


Abstract

Statistical learning theory justifies the ϵ-insensitive loss in support vector regression, but offers little guidance on choosing the critical hyper-parameter ϵ. Instead of predefining ϵ, ν-support vector regression selects ϵ automatically by making the fraction of deviations larger than ϵ asymptotically equal to ν. In stochastic programming terms, ν-support vector regression minimizes the conditional value-at-risk of the deviations, i.e. the expectation of the largest ν-fraction of deviations. This paper tackles the choice of the critical hyper-parameter ν in ν-support vector regression when the error term follows a complex distribution. Instead of a single ν, the paper takes ν to be a combination of multiple, finitely or infinitely many, candidate choices, so the cost function becomes a weighted sum of the component conditional value-at-risk measures associated with these base νs. The paper shows that this cost function can be represented as a spectral risk measure and that its minimization can be reformulated as a linear programming problem. Experiments on three artificial data sets show that multiple-ν support vector regression has a clear advantage over classical ν-support vector regression when the error terms follow mixed polynomial distributions, and experiments on 10 real-world data sets demonstrate that the new method outperforms both ϵ-support vector regression and ν-support vector regression.
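The two objects at the heart of the abstract, the conditional value-at-risk of the deviations and its weighted combination over several base νs, can be illustrated with a small empirical sketch. This is only an illustration of the risk measures, not the paper's linear programming formulation, and the function names and sample values are hypothetical.

```python
import numpy as np

def cvar(deviations, nu):
    """Empirical conditional value-at-risk at level nu in (0, 1]:
    the mean of the largest nu-fraction of absolute deviations."""
    d = np.sort(np.abs(np.asarray(deviations, dtype=float)))[::-1]
    k = max(1, int(np.ceil(nu * len(d))))  # number of largest deviations kept
    return d[:k].mean()

def spectral_risk(deviations, nus, weights):
    """Weighted sum of component CVaR measures over several base nus;
    with nonnegative weights summing to one, this is a spectral risk
    measure of the deviations, as in the abstract's cost function."""
    return sum(w * cvar(deviations, nu) for nu, w in zip(nus, weights))

# Hypothetical regression deviations |y - f(x)|:
devs = [1.0, 2.0, 3.0, 4.0]
print(cvar(devs, 0.5))                                  # mean of the two largest: 3.5
print(cvar(devs, 1.0))                                  # mean of all deviations: 2.5
print(spectral_risk(devs, [0.5, 1.0], [0.5, 0.5]))      # 0.5*3.5 + 0.5*2.5 = 3.0
```

With ν = 1 the measure reduces to the mean absolute deviation, while small ν concentrates on the worst errors; mixing several νs, as the paper proposes, interpolates between these regimes without committing to a single level.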
