WISE: Whitebox Image Stylization by Example-based Learning

¹Hasso Plattner Institute, University of Potsdam, Germany  ²Digitalmasterpieces GmbH, Germany
*equal contribution
In ECCV 2022

WISE transfers example styles to algorithmic stylization filters. It enables global and local editing by adjusting semantically meaningful (whitebox) parameters such as contour strength or splattering amount.

Abstract

Image-based artistic rendering can synthesize a variety of expressive styles using algorithmic image filtering. In contrast to deep learning-based methods, these heuristics-based filtering techniques can operate on high-resolution images, are interpretable, and can be parameterized according to various design aspects. However, adapting or extending these techniques to produce new styles is often a tedious and error-prone task that requires expert knowledge.

We propose a new paradigm to alleviate this problem: implementing algorithmic image filtering techniques as differentiable operations that can learn parametrizations aligned to certain reference styles. To this end, we present WISE, an example-based image-processing system that can handle a multitude of stylization techniques, such as watercolor, oil or cartoon stylization, within a common framework. By training parameter prediction networks for global and local filter parameterizations, we can simultaneously adapt effects to reference styles and image content, e.g., to enhance facial features.

Our method can be optimized in a style-transfer framework or learned in a generative-adversarial setting for image-to-image translation. We demonstrate that jointly training an XDoG filter and a CNN for postprocessing can achieve results comparable to a state-of-the-art GAN-based method.

Overview Video

Applications

Parametric Style Transfer

WISE can optimize the parameters of a variety of algorithmic stylization effects, such as watercolor or oil paint, to match reference styles. The results can then be further edited globally and locally by tuning parameters such as contour strength or bump-mapping scale.

Parametric style transfer for oil paint and watercolor in WISE
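In the simplest (global) setting, matching a reference style reduces to gradient-based optimization of a filter's scalar parameters. The sketch below is purely illustrative: it uses a toy sigmoid contrast filter and a hand-derived NumPy gradient loop in place of WISE's differentiable effect pipeline and perceptual style loss, and all names (`contrast_filter`, `p_ref`) are our own, not the paper's.

```python
import numpy as np

def contrast_filter(img, p):
    # Toy parametric filter: a sigmoid whose steepness p controls contrast.
    return 1.0 / (1.0 + np.exp(-p * (img - 0.5)))

rng = np.random.default_rng(0)
img = rng.random((32, 32))         # "content" image
p_ref = 6.0                        # hidden parameter of the reference style
ref = contrast_filter(img, p_ref)  # reference stylization to match

p = 1.0                            # initial parameter guess
lr = 10.0
for _ in range(500):
    out = contrast_filter(img, p)
    # Analytic gradient of the MSE loss w.r.t. p:
    # d(out)/dp = out * (1 - out) * (img - 0.5)
    dout_dp = out * (1 - out) * (img - 0.5)
    grad = np.mean(2 * (out - ref) * dout_dp)
    p -= lr * grad

print(f"recovered p = {p:.2f}")  # converges toward the reference value
```

In WISE, the same idea is applied to full stylization effects implemented as differentiable operations, with automatic differentiation replacing the manual gradient and a style loss replacing the per-pixel MSE.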

Image-to-Image Translation

WISE can train Parameter Prediction Networks (PPNs) to predict stylization parameters for image-to-image translation tasks. We demonstrate this by training a PPN for the extended difference-of-Gaussians (XDoG) [1] effect to match black-and-white hand-drawn portrait drawings in the APDrawing [2] dataset.

XDoG stylization with PPN parameter prediction in WISE
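For reference, the XDoG operator combines a sharpened difference of Gaussians with a soft thresholding ramp. Below is a minimal NumPy sketch of one common XDoG formulation; the separable blur helper and the parameter defaults are our own illustrative choices, not the WISE implementation:

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur built from 1D convolutions.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, img)
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, out)
    return out

def xdog(img, sigma=1.0, k=1.6, tau=0.98, eps=0.1, phi=10.0):
    # Sharpened difference of two Gaussians at scales sigma and k*sigma.
    d = gaussian_blur(img, sigma) - tau * gaussian_blur(img, k * sigma)
    # Soft ramp threshold: white where d >= eps, smooth dark edges elsewhere.
    return np.where(d >= eps, 1.0, 1.0 + np.tanh(phi * (d - eps)))
```

In the image-to-image translation setting, the PPN predicts parameters such as `eps` and `phi` per pixel from the input image, so the edge response can vary spatially, e.g., to emphasize facial features.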

As simple filters such as the XDoG do not have the capacity to reproduce arbitrary stylizations, we add a postprocessing network that is trained jointly with the PPN. This approach performs on par with GAN-based state-of-the-art methods while reducing model complexity and retaining a degree of editability.

BibTeX

@inproceedings{loetzsch2022wise,
  author    = {Lötzsch, Winfried and Reimann, Max and Büßemeyer, Martin and Semmo, Amir and Döllner, Jürgen and Trapp, Matthias},
  title     = {WISE: Whitebox Image Stylization by Example-based Learning},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2022},
}

Acknowledgements

Our work "WISE: Whitebox Image Stylization by Example-based Learning" was partially funded by the German Federal Ministry of Education and Research (BMBF) through grants 01IS15041 – “mdViPro” and 01IS19006 – “KI-Labor ITSE”.
