Dataset Information

Fractional Wavelet-Based Generative Scattering Networks


ABSTRACT: Generative adversarial networks (GANs) and variational autoencoders (VAEs) provide impressive image generation from Gaussian white noise, but both are difficult to train, since they need a generator (or encoder) and a discriminator (or decoder) to be trained simultaneously, which can easily lead to unstable training. To solve or alleviate these synchronous training problems of GANs and VAEs, researchers recently proposed generative scattering networks (GSNs), which use wavelet scattering networks (ScatNets) as the encoder to obtain features (or ScatNet embeddings) and convolutional neural networks (CNNs) as the decoder to generate an image. The advantage of GSNs is that the parameters of ScatNets do not need to be learned; the disadvantage is that the representational ability of ScatNets is slightly weaker than that of CNNs. In addition, the dimensionality reduction method of principal component analysis (PCA) can easily lead to overfitting in the training of GSNs and, therefore, degrade the quality of generated images in the testing process. To further improve the quality of generated images while keeping the advantages of GSNs, this study proposes generative fractional scattering networks (GFRSNs), which use more expressive fractional wavelet scattering networks (FrScatNets) instead of ScatNets as the encoder to obtain features (or FrScatNet embeddings), and use CNN decoders similar to those of GSNs to generate an image. Additionally, this study develops a new dimensionality reduction method named feature-map fusion (FMF) to replace PCA and better retain the information of FrScatNets; it also discusses the effect of image fusion on the quality of the generated image. The experimental results obtained on the CIFAR-10 and CelebA datasets show that the proposed GFRSNs produce better generated images than the original GSNs on testing datasets. The experimental results of the proposed GFRSNs with deep convolutional GAN (DCGAN), progressive GAN (PGAN), and CycleGAN are also given.
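The encoder–decoder split described in the abstract — a fixed (non-learned) scattering-style encoder followed by a dimensionality reduction step — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random 3×3 filter bank is only a stand-in for real (fractional) wavelet filters, and `feature_map_fusion` is a hypothetical simplification (averaging across feature maps) of the paper's FMF method, whose exact formulation the abstract does not give.

```python
import numpy as np

def fixed_filter_bank_features(img, n_filters=4, seed=0):
    # Stand-in for a (Fr)ScatNet encoder: a *fixed* filter bank whose
    # parameters are never learned (real ScatNets use wavelet filters),
    # followed by a modulus nonlinearity as in scattering transforms.
    rng = np.random.default_rng(seed)
    filters = rng.standard_normal((n_filters, 3, 3))
    h, w = img.shape
    padded = np.pad(img, 1)
    feats = np.empty((n_filters, h, w))
    for k, f in enumerate(filters):
        for i in range(h):
            for j in range(w):
                feats[k, i, j] = np.sum(padded[i:i + 3, j:j + 3] * f)
    return np.abs(feats)

def pca_reduce(feats_batch, n_components=8):
    # PCA over flattened embeddings: the reduction used by the original
    # GSNs, which the abstract notes can lead to overfitting.
    X = feats_batch.reshape(len(feats_batch), -1)
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

def feature_map_fusion(feats):
    # Hypothetical stand-in for FMF: fuse the filter responses into one
    # map while preserving the spatial layout, instead of flattening
    # everything for PCA.
    return feats.mean(axis=0)
```

The contrast to note is that `pca_reduce` discards spatial structure (it operates on flattened vectors), whereas a fusion-style reduction keeps a spatial feature map that a CNN decoder can upsample directly.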

SUBMITTER: Wu J 

PROVIDER: S-EPMC8577828 | biostudies-literature |

REPOSITORIES: biostudies-literature

Similar Datasets

| S-EPMC9979468 | biostudies-literature
| S-EPMC9044199 | biostudies-literature
| S-EPMC10098854 | biostudies-literature
| S-EPMC8119494 | biostudies-literature
| S-EPMC11320552 | biostudies-literature
| S-EPMC10500083 | biostudies-literature
| S-EPMC10624695 | biostudies-literature
| S-EPMC8054014 | biostudies-literature
| S-EPMC7453563 | biostudies-literature
| S-EPMC7710330 | biostudies-literature