Image-Based Rendering is the task of generating novel views of a scene from existing images. In this thesis, several new methods to solve this problem are presented. These methods are designed to fulfil specific goals such as scalability and interactive rendering performance. First, the theory of the Plenoptic Function is introduced as the mathematical foundation of image formation. Then a new taxonomy is introduced to categorise existing methods, and an extensive overview of known approaches is given. This is followed by a detailed analysis of the design goals and the requirements with regard to input data. It is concluded that, for perspectively correct image generation from sparse spatial sampling, geometry information about the scene is necessary. This conclusion leads to the design of three different Image-Based Rendering methods. The rendering results are analysed on different data sets. For this analysis, error metrics are defined to evaluate different aspects of the generated images.
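The Plenoptic Function mentioned above is commonly formulated, following Adelson and Bergen, as a seven-dimensional function giving the radiance observed along every ray; this standard parameterisation is shown here as background, and the exact form used in the thesis may differ:

```latex
P = P(\theta, \phi, \lambda, t, V_x, V_y, V_z)
```

Here $(\theta, \phi)$ is the viewing direction, $\lambda$ the wavelength, $t$ the time, and $(V_x, V_y, V_z)$ the position of the viewpoint. Image-Based Rendering methods can be understood as reconstructing samples of this function from a finite set of captured images.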