
An Unpaired Sketch-to-Photo Translation Model

Sketch-based image synthesis aims to generate a photo image given a sketch. It is a challenging task because sketches are drawn by non-professionals and only consist of strokes; they usually exhibit shape deformation and lack visual cues, i.e., colors and textures. Translation from sketch to photo thus involves changes in both shape and color, and existing methods cannot handle the task well, as they mostly focus on solving one translation. We show that the key to this task lies in decomposing the translation into two sub-tasks, shape translation and colorization. Accordingly, we propose a model consisting of two sub-networks, with each one tackling one sub-task. We also find that, when translating shapes, specific drawing styles affect the generated results significantly and may even lead to failure. To make the model more robust to drawing style variations, we design a data augmentation strategy and re-purpose an attention module, aiming to make our model pay less attention to drawing styles. Besides, a conditional module is adapted for color translation to improve diversity and increase users' control over the generated results. Both quantitative and qualitative comparisons are presented. Our model can synthesize high-quality sketches from photos inversely. We also demonstrate how these generated photos and sketches can benefit other applications, such as sketch-based image retrieval.

We call the task of generating a photo given a sketch Sketch-Based Image Synthesis (SBIS). (In practice, the synthesized image can take various visual formats, e.g., photo, cartoon, and so on; in this work, we focus on synthesizing photos.) The task is challenging for several reasons. First of all, sketches are drawn by amateurs; therefore, they generally deform in shape. Besides, since people have various drawing styles, sketches could appear different even when they correspond to the same object. Second, sketches lack visual cues, as they do not contain colors and most texture information. So translating a sketch to a photo involves changes in two aspects, shape and color. However, existing image-to-image translation methods mostly focus on one of them. Some works have shown impressive performance in style transfer or object transfiguration among photo images, while others can derive high-quality photo images from edge maps. Nevertheless, edge maps do not have the shape deformation problem, as they are extracted from photos. A prior method focuses on synthesizing natural photos of multiple classes based on sketches; unfortunately, the quality of its synthesized photos is far from satisfactory, and it relies on paired sketch/photo images and class labels. In this work, we propose a sketch-to-photo translation model which, for the first time, can synthesize photos according to sketches without paired data.
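To make the two-stage decomposition concrete, below is a minimal, hypothetical PyTorch sketch of such a pipeline: a shape-translation sub-network maps a sketch to a shape-corrected grayscale image, and a colorization sub-network conditioned on a style code (standing in for the conditional color module) produces the final photo. All class names, layer choices, and the 8-dimensional style code are illustrative assumptions and not the authors' actual architecture; the attention module and data augmentation strategy are omitted.

```python
# Hypothetical sketch of the shape-then-color decomposition; not the paper's real networks.
import torch
import torch.nn as nn

class ShapeTranslator(nn.Module):
    """Stage 1 (assumed): map a 1-channel sketch to a shape-corrected grayscale image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, sketch):
        return self.net(sketch)

class Colorizer(nn.Module):
    """Stage 2 (assumed): colorize the grayscale image, conditioned on a style code
    so that different codes yield different color renditions."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + style_dim, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, gray, style):
        # Broadcast the style code over the spatial dimensions and concatenate.
        b, _, h, w = gray.shape
        style_map = style.view(b, -1, 1, 1).expand(b, style.shape[1], h, w)
        return self.net(torch.cat([gray, style_map], dim=1))

class TwoStageSketch2Photo(nn.Module):
    """Chains the two sub-networks: shape translation first, then colorization."""
    def __init__(self):
        super().__init__()
        self.shape_net = ShapeTranslator()
        self.color_net = Colorizer()

    def forward(self, sketch, style):
        gray = self.shape_net(sketch)
        return self.color_net(gray, style)

if __name__ == "__main__":
    model = TwoStageSketch2Photo()
    sketch = torch.randn(1, 1, 128, 128)  # dummy 1-channel sketch
    style = torch.randn(1, 8)             # dummy color/style code
    photo = model(sketch, style)
    print(photo.shape)  # torch.Size([1, 3, 128, 128])
```

Separating the two stages lets each sub-network be trained and evaluated on its own sub-task, which is the core idea the abstract attributes to the proposed model.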
