
Resource Description

LeNet5 source code with logistic regression, implemented in Python + Theano, with detailed comments so that newcomers can understand the role of each function and variable as far as possible. This source code also requires mlp.py, which can be downloaded for free from my resources: http://download.csdn.net/detail/niuwei22007/9170435
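For orientation, here is a minimal sketch of how the files might be laid out before running. Only mlp.py is named in the description; the other file names and the run command are assumptions.

# Illustrative layout (an assumption, not part of the original post):
#   project/
#       mlp.py                 # downloaded separately; provides the pieces imported below
#       convolutional_mlp.py   # the snippet in this post
#       mnist.pkl.gz           # MNIST in the Theano-tutorial pickle format
#
# python convolutional_mlp.py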

Resource Screenshot

Code Snippet and File Information

# -*- coding: utf-8 -*-
import os
import sys
import timeit

import numpy

import theano
import theano.tensor as T
from theano.tensor.signal import downsample
from theano.tensor.nnet import conv

# from logistic_sgd import LogisticRegression, load_data
from mlp import HiddenLayer, LR as LogisticRegression, load_data


class LeNetConvPoolLayer(object):
    """Convolutional layer + pooling layer."""

    def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):
        """
        rng, input: already described in the earlier MLP code.

        filter_shape: tuple or list of length 4
        filter_shape: (number of filters, number of input feature maps, filter height, filter width)

        image_shape: tuple or list of length 4
        image_shape: (batch size, number of input feature maps, image height, image width)

        poolsize: tuple or list of length 2
        poolsize: the downsampling (pooling) shape (#rows, #cols)
        """

        # Assert that image_shape[1] equals filter_shape[1]; as the definitions
        # above show, both stand for the number of input feature maps.
        assert image_shape[1] == filter_shape[1]
        self.input = input

        # prod() returns the product of the elements. If filter_shape = (2, 4, 3, 3),
        # then filter_shape[1:] = (4, 3, 3)
        # and prod(filter_shape[1:]) = 4*3*3 = 36
        fan_in = numpy.prod(filter_shape[1:])
        # each unit in the lower layer receives a gradient from:
        # "num output feature maps * filter height * filter width" /
        #   pooling size
        fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) /
                   numpy.prod(poolsize))
        # Initialize the weights W from a random uniform distribution.
        W_bound = numpy.sqrt(6. / (fan_in + fan_out))
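        # Illustrative numbers (an assumption, not from the original comments):
        # with filter_shape = (20, 1, 5, 5) and poolsize = (2, 2),
        # fan_in = 1*5*5 = 25, fan_out = 20*5*5/4 = 125,
        # so W_bound = sqrt(6 / 150) = 0.2.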
        self.W = theano.shared(
            numpy.asarray(
                rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
                dtype=theano.config.floatX
            ),
            borrow=True
        )

        # Each output feature map has a one-dimensional bias, initialized to 0.
        b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
        self.b = theano.shared(value=b_values, borrow=True)

        # Convolve the input feature maps with the filters.
        conv_out = conv.conv2d(
            input=input,
            filters=self.W,
            filter_shape=filter_shape,
            image_shape=image_shape
        )

        # Downsample each feature map with max-pooling.
        pooled_out = downsample.max_pool_2d(
            input=conv_out,
            ds=poolsize,
            ignore_border=True
        )

        # First broadcast the bias from a 1-D vector into a 4-D tensor of shape
        # (1, n_filters, 1, 1), then add it to the pooled output, and feed the
        # sum through the tanh nonlinearity to get this layer's output.
        self.output = T.tanh(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))

        # store parameters of this layer
        self.params = [self.W, self.b]

????????#?keep?track?of?model?input
        self.input = input


def evaluate_lenet5(learning_rate=0.1, n_epochs=200,
                    dataset='mnist.pkl.gz',
                    nkerns=[20, 50], batch_size=500):
    """ Demonstrates LeNet on the MNIST dataset.
    The dataset used in the experiment is MNIST.
    :type learning_rate: float
    :param learning_rate: learning rate used (factor for the stochastic
                          g
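Because the listing is cut off above, here is a minimal usage sketch (not part of the original file) showing how the LeNetConvPoolLayer class defined above might be instantiated. The batch size, image size, and filter shape are assumptions taken from the defaults of evaluate_lenet5 (batch_size=500, 28x28 MNIST images, nkerns[0]=20 filters of 5x5):

import numpy
import theano.tensor as T

rng = numpy.random.RandomState(23455)  # arbitrary seed

x = T.matrix('x')  # flattened images, one row per example
batch_size = 500
layer0_input = x.reshape((batch_size, 1, 28, 28))

# First conv + pool layer: 20 filters of 5x5 over a single input channel.
layer0 = LeNetConvPoolLayer(
    rng,
    input=layer0_input,
    image_shape=(batch_size, 1, 28, 28),  # (batch size, input maps, height, width)
    filter_shape=(20, 1, 5, 5),
    poolsize=(2, 2)
)
# layer0.output is a symbolic 4-D tensor of shape (500, 20, 12, 12).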
