
Resource description

An official Keras example, intended for deep learning and machine learning use; the code is straightforward.

Resource screenshot

Code snippet and file information

# -*- coding: utf-8 -*-
"""
Train an Auxiliary Classifier Generative Adversarial Network (ACGAN) on the
MNIST dataset. See https://arxiv.org/abs/1610.09585 for more details.

You should start to see reasonable images after ~5 epochs, and good images
by ~15 epochs. You should use a GPU, as the convolution-heavy operations are
very slow on the CPU. Prefer the TensorFlow backend if you plan on iterating,
as the compilation time can be a blocker using Theano.

Timings:

Hardware           | Backend | Time / Epoch
-------------------------------------------
 CPU               | TF      | 3 hrs
 Titan X (maxwell) | TF      | 4 min
 Titan X (maxwell) | TH      | 7 min

Consult https://github.com/lukedeo/keras-acgan for more information and
example output.
"""
from __future__ import print_function

from collections import defaultdict
try:
    import cPickle as pickle
except ImportError:
    import pickle
from PIL import Image

from six.moves import range

from keras.datasets import mnist
from keras import layers
from keras.layers import Input, Dense, Reshape, Flatten, Embedding, Dropout
from keras.layers import BatchNormalization
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import Conv2DTranspose, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import Adam
from keras.utils.generic_utils import Progbar
import numpy as np

np.random.seed(1337)
num_classes = 10


def build_generator(latent_size):
    # we will map a pair of (z, L), where z is a latent vector and L is a
    # label drawn from P_c, to image space (..., 28, 28, 1)
    cnn = Sequential()

    cnn.add(Dense(3 * 3 * 384, input_dim=latent_size, activation='relu'))
    cnn.add(Reshape((3, 3, 384)))

    # upsample to (7, 7, ...)
    cnn.add(Conv2DTranspose(192, 5, strides=1, padding='valid',
                            activation='relu',
                            kernel_initializer='glorot_normal'))
    cnn.add(BatchNormalization())

    # upsample to (14, 14, ...)
    cnn.add(Conv2DTranspose(96, 5, strides=2, padding='same',
                            activation='relu',
                            kernel_initializer='glorot_normal'))
    cnn.add(BatchNormalization())

    # upsample to (28, 28, ...)
    cnn.add(Conv2DTranspose(1, 5, strides=2, padding='same',
                            activation='tanh',
                            kernel_initializer='glorot_normal'))

    # this is the z space commonly referred to in GAN papers
    latent = Input(shape=(latent_size,))

    # this will be our label
    image_class = Input(shape=(1,), dtype='int32')

    cls = Flatten()(Embedding(num_classes, latent_size,
                              embeddings_initializer='glorot_normal')(image_class))

    # hadamard product between z-space and a class conditional embedding
    h = layers.multiply([latent, cls])

    fake_image = cnn(h)

    return Model([latent, image_class], fake_image)


def build_discriminator():
    # build a relatively
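
The snippet ends partway through build_discriminator. As a minimal, hypothetical usage sketch (not part of the original file), the generator defined above can be sampled class-conditionally as shown below; latent_size = 100 is an assumed value and must match whatever the full script uses.

# hypothetical usage sketch -- latent_size is an assumption, not taken from the snippet above
latent_size = 100
generator = build_generator(latent_size)

# one noise vector plus one integer class label in [0, num_classes)
noise = np.random.uniform(-1, 1, (1, latent_size))
sampled_label = np.array([[7]])

# forward pass only; the output has shape (1, 28, 28, 1) with values in [-1, 1]
# because of the generator's final tanh activation
fake_image = generator.predict([noise, sampled_label])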
