Why? Because when p1 is the test sample, it is closest to p1; when p2 is the test sample, it is closest to p2... Every training sample, when fed back in as a test sample, is always closest to itself, so the output is 1.
The GRNN function in MATLAB is net = newgrnn(P,T,spread). We can learn a bit about GRNN from the MATLAB documentation:
P       R-by-Q matrix of Q input vectors
T       S-by-Q matrix of Q target class vectors
spread  Spread of radial basis functions (default = 1.0)
How to tune the spread parameter: "The larger the spread, the smoother the function approximation. To fit data very closely, use a spread smaller than the typical distance between input vectors. To fit the data more smoothly, use a larger spread." (What adjusting spread directly affects is the bias b.)
Testing the network on its own training samples, we will find that all the values are close to 1.
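To make this concrete, here is a minimal sketch (the data and spread values are made up for illustration): simulating on the training inputs essentially gives back the stored targets, since each input matches itself with a radial-basis activation of 1, and varying spread shows the tight-versus-smooth trade-off described above.

P = [1 2 3 4 5 6 7 8];                  % hypothetical training inputs
T = sin(P);                             % hypothetical training targets
net = newgrnn(P, T, 0.5);               % spread below the input spacing of 1

Y = sim(net, P);                        % test on the training samples:
                                        % Y comes back very close to T

x = 0:0.1:9;                            % a denser grid of new inputs
y_tight  = sim(newgrnn(P, T, 0.3), x);  % small spread: close, bumpy fit
y_smooth = sim(newgrnn(P, T, 2.0), x);  % large spread: smoother, flatter fit
plot(P, T, 'o', x, y_tight, '-', x, y_smooth, '--')
legend('data', 'spread = 0.3', 'spread = 2.0')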
Properties of GRNN:

newgrnn creates a two-layer network. The first layer has radbas neurons, and calculates weighted inputs with dist and net input with netprod. The second layer has purelin neurons, calculates weighted input with normprod, and net inputs with netsum. Only the first layer has biases.
newgrnn sets the first layer weights to P', and the first layer biases are all set to 0.8326/spread, resulting in radial basis functions that cross 0.5 at weighted inputs of +/- spread. The second layer weights W2 are set to T.

Precisely because GRNN's weights are copied straight from the data rather than trained, the no-training advantage shows in how fast it really is, and the fitted curve looks very natural. Its fit to the training samples is not as exact as the radial basis network's, but in practical tests its performance has even surpassed BP.
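As a quick check, here is a sketch (reusing the hypothetical P, T, and spread from the sketch above) that inspects these settings directly on the network object:

spread = 0.5;
net = newgrnn(P, T, spread);

isequal(net.IW{1,1}, P')    % first-layer weights are exactly P'
net.b{1}                    % every first-layer bias equals 0.8326/spread
isequal(net.LW{2,1}, T)     % second-layer weights W2 are exactly T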
PNN (Probabilistic Neural Network)
PNN is very similar to RBF and GRNN; the only difference is in the second layer's final output, which uses not a linear function (a weighted sum, i.e. a linear mapping) but a competitive layer (it simply takes the label of the sample with the largest value as the result)!
The MATLAB function is: net = newpnn(P,T,spread)
P       R-by-Q matrix of Q input vectors
T       S-by-Q matrix of Q target class vectors
spread  Spread of radial basis functions (default = 0.1)

If spread is near zero, the network acts as a nearest neighbor classifier. As spread becomes larger, the designed network takes into account several nearby design vectors.

newpnn creates a two-layer network. The first layer has radbas neurons, and calculates its weighted inputs with dist and its net input with netprod. The second layer has compet neurons, and calculates its weighted input with dotprod and its net inputs with netsum. Only the first layer has biases.
newpnn sets the first-layer weights to P', and the first-layer biases are all set to 0.8326/spread, resulting in radial basis functions that cross 0.5 at weighted inputs of +/- spread. The second-layer weights W2 are set to T.
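Putting it together, here is a minimal classification sketch (the data and spread are made up; ind2vec and vec2ind convert between class indices and the one-of-N target vectors that newpnn works with):

P  = [1 2 3 4 5 6 7];         % hypothetical training inputs
Tc = [1 1 2 2 3 3 3];         % hypothetical class index of each input
T  = ind2vec(Tc);             % one-of-N target vectors (S-by-Q)

net = newpnn(P, T, 0.1);      % small spread: near nearest-neighbor behavior

Y  = sim(net, [2.8 6.2]);     % compet layer puts a single 1 in each column
Yc = vec2ind(Y)               % winning class per test input; here [2 3]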