Style Transfer Generator for Dataset Testing Classification
DOI: 10.33395/sinkron.v7i2.11375

Keywords: Generative Adversarial Networks, Convolutional Neural Network, Style Transfer, Image Dataset, Art Image

Abstract
The development of Generative Adversarial Networks (GANs) has been very rapid. First introduced by Ian Goodfellow in 2014, the field has accelerated since 2018. At the same time, the need for image datasets often exceeds what is available, and public datasets are sometimes too small. This study therefore expands an image dataset for supervised learning. The dataset under study is not captured directly with a camera; instead, it is produced by an augmentation process that generates new images from existing ones. By introducing changes during augmentation, the dataset becomes more diverse: it contains not only camera photographs but also images created by combining camera photos with paintings, producing new images in a different style. Most research on style transfer aims to produce artistic images, but style transfer can also be used to generate images for dataset needs. The style-transferred image dataset is then used as the test set for Convolutional Neural Network (CNN) classification. The classification targets goods-transport vehicles (trucks). Truck detection is very useful in transportation systems, where many trucks are currently modified to avoid road fees.
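The abstract does not include an implementation. As a rough illustration of the augmentation step only, the sketch below stylizes a camera photo with a painting image using the pre-trained Magenta arbitrary-image-stylization model from TensorFlow Hub; the model choice, file paths, and image size are assumptions, not the authors' published method.

import tensorflow as tf
import tensorflow_hub as hub

# Load a pre-trained arbitrary style transfer module (assumed model choice).
stylize = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
)

def load_image(path, size=(256, 256)):
    """Decode an image file to float32 in [0, 1] and add a batch axis."""
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    img = tf.image.resize(img, size)
    return img[tf.newaxis, ...]

# Hypothetical file paths: a camera photo of a truck and a painting image.
content = load_image("photos/truck_001.jpg")
style = load_image("paintings/style_01.jpg")

# The module returns a tuple whose first element is the stylized image batch.
stylized = stylize(tf.constant(content), tf.constant(style))[0]
tf.keras.utils.save_img("augmented/truck_001_stylized.jpg", stylized[0].numpy())

Stylized outputs generated this way would then be collected into a test set for the CNN truck classifier described in the abstract.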
License
Copyright (c) 2022 Bayu Yasa Wedha, Daniel Avian Karjadi, Alessandro Enriqco Putra Bayu Wedha, Handri Santoso
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.