Big-Volume SliceGAN for Improving a Synthetic 3D Microstructure Image of Additive-Manufactured TYPE 316L Steel
Abstract
The reconstruction of three-dimensional (3D) microstructures can enhance our understanding of the properties of a material. Traditionally, serial sectioning [1,2] and tomography [3] have been used to generate 3D microstructure images. However, these
methods are time-consuming and require specialized equipment. Recently, Kench and
Cooper [4] introduced a new approach for efficient 3D microstructure reconstruction using
a generative adversarial network (GAN) called SliceGAN. There are two primary types of
image-generation algorithms: generative adversarial networks (GANs) [5] and variational
autoencoders [6]. SliceGAN produces a synthetic 3D image from a single two-dimensional
(2D) image for isotropic microstructures, or from three 2D images for anisotropic microstructures.
SliceGAN consists of three components: a 3D image generator (3D generator), a critic
(similar to a discriminator in conventional GAN [5]), and a slicer. The 3D generator creates
a 3D image from noise (latent variables), which is then sliced into three perpendicular
planes by the slicer. The critic compares the sliced images with 2D images cropped from
an original microstructure image, and updates the weight coefficients of the transpose-convolution
layers in the 3D generator accordingly. SliceGAN is implemented in the PyTorch
framework and runs on a high-performance graphics processing unit in general-purpose GPU (GPGPU) mode.
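The slicing step described above can be sketched as follows. This is a minimal illustration using NumPy rather than the authors' PyTorch code; the volume shape (3 channels, 64 × 64 × 64 voxels, channel-first layout) follows the architecture described below, and the function name is hypothetical:

```python
import numpy as np

# Hypothetical generated volume: 3 channels, 64^3 voxels (channel-first).
volume = np.random.rand(3, 64, 64, 64)

def slice_volume(vol):
    """Cut a (C, D, H, W) volume into stacks of 2D slices along the
    three perpendicular axes, mimicking what the SliceGAN slicer
    would pass to the critic for comparison with cropped 2D images."""
    c, d, h, w = vol.shape
    slices_x = np.stack([vol[:, i, :, :] for i in range(d)])  # planes normal to x
    slices_y = np.stack([vol[:, :, j, :] for j in range(h)])  # planes normal to y
    slices_z = np.stack([vol[:, :, :, k] for k in range(w)])  # planes normal to z
    return slices_x, slices_y, slices_z

sx, sy, sz = slice_volume(volume)
print(sx.shape, sy.shape, sz.shape)  # each stack: (64, 3, 64, 64)
```

For an isotropic microstructure, all three stacks would be compared against crops of the same 2D image; for an anisotropic one, each stack is matched to the 2D image of the corresponding plane.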
In the original SliceGAN architecture proposed by Kench and Cooper [4], 64 sets of
latent variables, each of size 4 × 4 × 4 voxels, were used. These latent variables were
processed by five transpose-convolution layers, yielding a 3D image of 64 × 64 × 64 voxels
with three channels. The 2D images sliced from the generated
3D image were compared with 2D images cropped from the original image using the critic
of a Wasserstein GAN with gradient penalty (WGAN-GP) [7]. The weight coefficients of the
3D generator were then updated based on the result. ...
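The growth from a 4 × 4 × 4 latent volume to 64 × 64 × 64 voxels can be checked with the standard transpose-convolution output-size formula, out = (in − 1)·s − 2·p + k, applied once per layer. The kernel/stride/padding schedule below is an assumption for illustration (k = 4, s = 2, p = 2, with p = 3 in the final layer so the output lands exactly on 64 voxels), not necessarily the published configuration:

```python
def tconv_out(size, kernel, stride, padding):
    """Output size of a transpose convolution along one spatial axis:
    out = (in - 1) * stride - 2 * padding + kernel."""
    return (size - 1) * stride - 2 * padding + kernel

# Assumed 5-layer schedule: (kernel, stride, padding) per layer.
layers = [(4, 2, 2), (4, 2, 2), (4, 2, 2), (4, 2, 2), (4, 2, 3)]

size = 4  # one axis of the 4 x 4 x 4 latent volume
for k, s, p in layers:
    size = tconv_out(size, k, s, p)
print(size)  # -> 64
```

The same calculation applies independently to each of the three spatial axes, so the five layers take the 4 × 4 × 4 latent input through 6, 10, 18, and 34 voxels per axis to the final 64 × 64 × 64 volume.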