Robust Learning of 2-D Separable Transforms for Next-Generation Video Coding

    •  Sezer, O.G.; Cohen, R.; Vetro, A., "Robust Learning of 2-D Separable Transforms for Next-Generation Video Coding", Data Compression Conference (DCC), March 2011.
      @inproceedings{Sezer2011mar,
        author = {Sezer, O.G. and Cohen, R. and Vetro, A.},
        title = {Robust Learning of 2-D Separable Transforms for Next-Generation Video Coding},
        booktitle = {Data Compression Conference (DCC)},
        year = 2011,
        month = mar,
        url = {}
      }
  • Research Areas: Digital Video, Multimedia

Owing to its simplicity of application and its compression efficiency, the Discrete Cosine Transform (DCT) plays a vital role in the development of video compression standards. For next-generation video coding, a new set of 2-D separable transforms has emerged as a candidate to replace the DCT. These separable transforms are learned from the residuals of each intra prediction mode and are hence termed Mode-Dependent Directional Transforms (MDDT). MDDT uses the Karhunen-Loeve Transform (KLT) to create sets of separable transforms from training data. Since the residuals after intra prediction exhibit structural similarities, transforms that exploit these correlations improve coding efficiency. However, the KLT is optimal only if the data follows a Gaussian distribution without outliers; due to the nature of the least-squares norm, outliers can arbitrarily skew the directions of the KLT components. In this paper, we address the robust learning of separable transforms by enforcing sparsity on the coefficients of the representations. With this new approach, it is possible to improve the intra-coding performance of H.264/AVC by up to 10.2% BD-rate. At no additional cost, the proposed techniques also provide up to 3.9% BD-rate improvement for intra coding compared to existing MDDT schemes.
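The baseline MDDT learning step, a separable KLT trained on residual blocks, can be sketched as follows. This is a minimal illustration, not the paper's method: the function name `separable_klt`, the random toy residuals, and the 4x4 block size are assumptions, and the sketch omits the sparsity-based robustification that is the paper's contribution.

```python
import numpy as np

def separable_klt(blocks):
    """Learn separable column/row KLT bases from N x N residual blocks.

    blocks: array of shape (num_blocks, N, N) of prediction residuals.
    Returns (C, R) so each block X is transformed as C.T @ X @ R.
    """
    # Column statistics: every column of every block is one sample vector.
    cols = np.concatenate([b.T for b in blocks], axis=0)  # (num*N, N)
    # Row statistics: every row of every block is one sample vector.
    rows = np.concatenate(list(blocks), axis=0)           # (num*N, N)

    def klt(samples):
        cov = np.cov(samples, rowvar=False)
        # eigh returns ascending eigenvalues; reorder descending so the
        # first basis vector captures the most residual energy.
        w, v = np.linalg.eigh(cov)
        return v[:, np.argsort(w)[::-1]]

    return klt(cols), klt(rows)

# Toy usage with random 4x4 blocks standing in for intra residuals
# (in MDDT, one such pair is learned per intra prediction mode).
rng = np.random.default_rng(0)
blocks = rng.standard_normal((100, 4, 4))
C, R = separable_klt(blocks)
X = blocks[0]
coeffs = C.T @ X @ R     # forward separable transform
X_rec = C @ coeffs @ R.T  # inverse; orthonormal bases reconstruct exactly
```

Because the learned bases are orthonormal, the inverse transform reconstructs each block exactly; coding gain comes from energy compaction of the coefficients, which the KLT maximizes only under the Gaussian, outlier-free assumption discussed above.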