The subject of this paper is the modelling of the influence of non-minimum phase discrete-time system dynamics on the performance of norm optimal iterative learning control (NOILC) algorithms, with the intent of explaining the observed phenomenon and predicting its primary characteristics. It is established that performance in the presence of non-minimum phase plant zeros typically has two phases: an initial fast monotonic reduction of the L2 error norm (mean square error) followed by a very slow asymptotic convergence. Although the norm of the tracking error does eventually converge to zero, the practical implication over a finite number of trials is apparent convergence to a non-zero error. The source of this slow convergence is identified using the singular value distribution of the system's all-pass component. A predictive model of the onset of slow convergence is developed as a set of linear constraints and shown to be valid when the iteration time interval is sufficiently long. The results provide a good prediction of the magnitude of the error norm at which slow convergence begins. Formulae for this norm and the associated error time series are obtained, using Lagrangian techniques, for single-input single-output systems with several non-minimum phase zeros outside the unit circle. Numerical simulations are given to confirm the validity of the analysis.
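The two-phase behaviour summarised above can be sketched numerically with the standard lifted-system form of NOILC. In this sketch every specific choice (the plant, the trial length N, the weight w, and the reference) is hypothetical and chosen only for illustration; none of these values is taken from the paper.

```python
import numpy as np

# Hypothetical SISO plant G(z) = (z - 2) / (z^2 - 0.2 z): a discrete-time
# system with a non-minimum phase zero at z = 2 (outside the unit circle).
N = 60  # trial length in samples (illustrative choice)

# Impulse response (Markov parameters) from the difference equation
#   y[t] = 0.2 y[t-1] + u[t-1] - 2 u[t-2]
u = np.zeros(N + 1)
u[0] = 1.0
y = np.zeros(N + 1)
for t in range(1, N + 1):
    y[t] = 0.2 * y[t - 1] + u[t - 1] - (2.0 * u[t - 2] if t >= 2 else 0.0)
h = y[1:]  # first N Markov parameters (the plant has relative degree one)

# Lifted (lower-triangular Toeplitz) plant matrix: y = G u over one trial
G = np.zeros((N, N))
for i in range(N):
    G[i:, i] = h[: N - i]

# NOILC error recursion in lifted form: e_{k+1} = (I + G G^T / w)^{-1} e_k.
# The iteration matrix is symmetric with eigenvalues 1/(1 + s_i^2 / w) < 1,
# where s_i are the singular values of G, so the L2 error norm decreases
# monotonically from trial to trial.
w = 0.01                # weight on the input change between trials
r = np.ones(N)          # reference; a zero initial input gives e_0 = r
L = np.linalg.inv(np.eye(N) + G @ G.T / w)
e = r.copy()
norms = [np.linalg.norm(e)]
for k in range(200):
    e = L @ e
    norms.append(np.linalg.norm(e))

# Error directions associated with the very small singular values induced by
# the zero outside the unit circle contract only negligibly per trial, which
# is the slow second phase the analysis characterises.
print(f"e0={norms[0]:.3f}  e10={norms[10]:.3f}  e200={norms[-1]:.3e}")
```

Plotting `norms` against the trial index typically shows the fast initial reduction followed by a long, nearly flat plateau, i.e. apparent convergence to a non-zero error over any practical number of trials.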