The AStro dataset was collected to support deep learning techniques for interactive drawing media that learn from unlabeled media samples. We informally captured two style datasets with a hand-held mobile phone. Style 1 contains 146 scribbles made with common physical and digital drawing tools, such as paints and crayons. Style 2 contains 241 photos of eclectic materials, such as ribbons and beads. To obtain the patch datasets X1 and X2, we draw random patches from these images and augment them with standard image processing, expanding the diversity of the training set. As a proxy for user control, we also generate a synthetic geometry dataset of random splines, which is cut into a patch dataset G.
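A minimal sketch of the patch-extraction and augmentation step is shown below. The patch size, number of crops per image, folder names, and the particular flip/rotation augmentations are illustrative assumptions, not the exact settings used to build X1 and X2.

```python
# Sketch: cut random patches from raw style photos and apply simple augmentations.
# PATCH_SIZE, N_PATCHES_PER_IMAGE, and the folder layout are assumptions.
import random
from pathlib import Path
from PIL import Image, ImageOps

PATCH_SIZE = 128          # assumed patch resolution
N_PATCHES_PER_IMAGE = 50  # assumed number of random crops per photo

def random_patches(image_path, out_dir):
    """Cut random square patches from one style photo and save augmented copies."""
    image_path = Path(image_path)
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(N_PATCHES_PER_IMAGE):
        # Random crop location (assumes the photo is larger than the patch).
        x = random.randint(0, w - PATCH_SIZE)
        y = random.randint(0, h - PATCH_SIZE)
        patch = img.crop((x, y, x + PATCH_SIZE, y + PATCH_SIZE))
        # Standard augmentations: random horizontal flip and 90-degree rotations.
        if random.random() < 0.5:
            patch = ImageOps.mirror(patch)
        patch = patch.rotate(90 * random.randint(0, 3))
        patch.save(out_dir / f"{image_path.stem}_patch{i:03d}.png")

if __name__ == "__main__":
    for photo in Path("astro_raw/style1").glob("*.jpg"):  # assumed folder layout
        random_patches(photo, "patches/style1")
```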
The data is available here: astro_raw.zip contains the raw images, and astro_datasets.zip contains the augmented patch datasets that can be used directly for training. This data may only be used for research, evaluation, and non-commercial purposes and may not be redistributed. Permission is granted to distribute trained models only for research, evaluation, and non-commercial purposes, with appropriate terms attached. Exact license terms will be included soon. If you use this data, please make sure to cite this work.
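For reference, a small loader sketch for the extracted patches follows. It assumes the archive unpacks into folders of PNG patches; the actual directory layout inside astro_datasets.zip may differ.

```python
# Sketch: load extracted patches as float tensors in [0, 1] for training.
# The folder name "astro_datasets/style1" and the PNG layout are assumptions.
from pathlib import Path
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset, DataLoader

class PatchDataset(Dataset):
    """Loads image patches from a folder of PNG files."""
    def __init__(self, root):
        self.paths = sorted(Path(root).glob("*.png"))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = np.asarray(Image.open(self.paths[idx]).convert("RGB"), dtype=np.float32)
        return torch.from_numpy(img / 255.0).permute(2, 0, 1)  # HWC -> CHW

loader = DataLoader(PatchDataset("astro_datasets/style1"), batch_size=16, shuffle=True)
```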
To support softer edge losses, we automatically preprocess G into a tri-band dataset in which curve boundaries are marked as "uncertain". In addition, the style datasets are passed through image-processing heuristics to generate corresponding black-on-white strokes. So far these geometry datasets have only been used for evaluating methods, but they could also be useful for training.
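The sketch below illustrates both preprocessing steps under simple assumptions: the tri-band map is produced by eroding and dilating the rendered spline mask, and the black-on-white stroke comes from a plain grayscale threshold. The band width and threshold values are hypothetical, not the exact heuristics used to build the released data.

```python
# Sketch: (1) tri-band map with an "uncertain" band around curve boundaries,
# (2) a crude thresholding heuristic that turns a style patch into a
# black-on-white stroke. Band width and threshold are illustrative assumptions.
import numpy as np
from PIL import Image
from scipy.ndimage import binary_dilation, binary_erosion

def tri_band(spline_patch, band=2):
    """Return a map with 1 = stroke, 0 = background, 0.5 = uncertain boundary."""
    ink = np.asarray(spline_patch.convert("L")) < 128       # stroke pixels (dark)
    sure_ink = binary_erosion(ink, iterations=band)         # definitely stroke
    sure_bg = ~binary_dilation(ink, iterations=band)        # definitely background
    out = np.full(ink.shape, 0.5, dtype=np.float32)         # uncertain by default
    out[sure_ink] = 1.0
    out[sure_bg] = 0.0
    return out

def black_on_white(style_patch, thresh=200):
    """Heuristic: dark pixels of a style patch become a black stroke on white."""
    gray = np.asarray(style_patch.convert("L"))
    return Image.fromarray(np.where(gray < thresh, 0, 255).astype(np.uint8))
```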
If building on this data, please cite:
@article{shugrina2022neube,
  title={Neural Brushstroke Engine: Learning a Latent Style Space of Interactive Drawing Tools},
  author={Shugrina, Maria and Li, Chin-Ying and Fidler, Sanja},
  journal={ACM Transactions on Graphics (TOG)},
  volume={41},
  number={6},
  year={2022},
  publisher={ACM New York, NY, USA}
}