Style Transfer Generator for Dataset Testing Classification


  • Bayu Yasa Wedha, Universitas Pradita
  • Daniel Avian Karjadi, Universitas Pradita
  • Alessandro Enriqco Putra Bayu Wedha, Universitas Pradita
  • Handri Santoso, Universitas Pradita




Keywords: Generative Adversarial Networks, Convolutional Neural Network, Style Transfer, Image Dataset, Art Image


Generative Adversarial Networks (GANs) have developed rapidly since Ian Goodfellow introduced them in 2014, and especially since 2018. At the same time, datasets for supervised learning are often in short supply, and public datasets are frequently too small. This study expands an image dataset for supervised learning, but with an unusual approach: rather than collecting new photographs with a camera, new images are generated by augmenting existing ones. Camera photos are combined with painting images through style transfer, producing still images in a new style and making the dataset more diverse than camera photos alone. Style transfer has been studied extensively for producing artistic images, but it can also be used to generate images for dataset needs. Here, the style-transferred image dataset serves as the test set for Convolutional Neural Network (CNN) classification, which can also be used to detect specific objects or images. The dataset produced by style transfer is applied to the classification of goods-transport vehicles, i.e., trucks. Truck detection is very useful in transportation systems, where many trucks are currently modified to avoid road fees.
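Style-transfer methods in the Gatys et al. (2015) line match the Gram matrices of CNN feature maps between a generated image and a style image. A minimal sketch of that style loss is below; the random arrays stand in for VGG feature activations, and the function names are illustrative, not from the paper's code.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map of shape (C, H, W):
    channel-wise correlations, normalized by map size."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    g_gen = gram_matrix(gen_features)
    g_style = gram_matrix(style_features)
    return float(np.mean((g_gen - g_style) ** 2))

# Toy example: random "activations" in place of real VGG features.
rng = np.random.default_rng(0)
content = rng.standard_normal((64, 32, 32))
style = rng.standard_normal((64, 32, 32))

print(style_loss(content, style))  # positive: styles differ
print(style_loss(style, style))    # 0.0: identical styles
```

In a full pipeline this loss would be computed over several VGG layers and minimized with respect to the generated image's pixels, while a separate content loss keeps the photo's structure; the stylized outputs then form the augmented test set for the CNN classifier.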





How to Cite

Wedha, B. Y., Karjadi, D. A., Wedha, A. E. P. B., & Santoso, H. (2022). Style Transfer Generator for Dataset Testing Classification. Sinkron: Jurnal Dan Penelitian Teknik Informatika, 7(2), 448–454.
