As shown in the figure above, only the position corresponding to input 4 produces an output of 1; all other outputs are 0. This is what the competitive layer does.
" [8 P" P V L }& D
With this mechanism, the neuron with the largest element of n1 wins, and the competitive layer outputs 1 at that position. If b1 = 0, i.e., no bias vector is applied, then a neuron wins only when its weight vector IW is closest to the input vector: its negative distance is then smallest in magnitude, which makes its element of n1 the largest, so it wins the competition. This is why negative distance is used.
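The negative-distance competition described above can be sketched in a few lines. This is a Python illustration of the principle, not the toolbox's actual negdist/compet code; `compet_layer` is a hypothetical name:

```python
import numpy as np

def compet_layer(IW, p, b=None):
    """One forward pass of a competitive layer (a sketch, not toolbox code).

    IW : (S, R) weight matrix, one row per neuron
    p  : (R,) input vector
    b  : optional (S,) bias vector; None corresponds to b1 = 0
    """
    # n1 = negative Euclidean distance from each weight row to the input
    n1 = -np.linalg.norm(IW - p, axis=1)
    if b is not None:
        n1 = n1 + b
    # The competitive transfer function outputs 1 for the largest n1, 0 elsewhere
    a1 = np.zeros(len(n1))
    a1[np.argmax(n1)] = 1.0
    return a1

IW = np.array([[0.0, 0.0],
               [1.0, 1.0],
               [0.0, 1.0]])
p = np.array([0.9, 1.1])
print(compet_layer(IW, p))  # the neuron whose weights are closest to p wins
```

Because the distances are negated, taking the maximum of n1 is exactly the same as taking the minimum distance, so the winner is the neuron closest to the input.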
As a result, each iteration does not update the weights of every neuron; it updates the weights of only one hidden-layer neuron. Take the output a1 above as an example: a1 has dimension S1-by-1 with exactly one entry equal to 1 and the rest 0, so only the neuron at the position of that 1 has its weights updated (only the winning neuron gets a chance to adjust its weights).
Iterating this way inevitably causes a problem: the winning neurons keep shrinking their error through these adjustments, so their chances of winning keep growing, until in the end only a subset of neurons ever wins while the rest never get trained and become "dead neurons". The remedy is to set a larger b1, giving the neglected neurons a chance to win the competition: for a given competitive neuron, a larger bias b1 increases its probability of winning on input p. Of course, b is also adjusted dynamically, through the learning function learncon: "Update the biases with the learning function learncon so that the biases of frequently active neurons become smaller, and biases of infrequently active neurons become larger."
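The idea behind such a conscience mechanism can be sketched as follows. This is a hypothetical Python illustration of the principle, not learncon's actual formula; `update_conscience` and its bias rule are assumptions for demonstration:

```python
import numpy as np

def update_conscience(freq, a1, lr=0.01):
    """Track each neuron's win frequency and derive a compensating bias.

    freq : running win frequency per neuron
    a1   : one-hot output of the competitive layer for this iteration
    """
    freq = (1 - lr) * freq + lr * a1       # running average of activity
    # Hypothetical bias rule: neurons winning less often than average
    # (1/S) get a positive bias; frequent winners get a negative one.
    b = lr * (1.0 / len(freq) - freq)
    return freq, b

S = 3
freq = np.full(S, 1.0 / S)                 # start from uniform activity
for _ in range(100):
    a1 = np.array([1.0, 0.0, 0.0])         # neuron 0 keeps winning
    freq, b = update_conscience(freq, a1)
print(b)  # neuron 0's bias has dropped; neurons 1 and 2 got larger biases
```

After many iterations in which neuron 0 always wins, its bias turns negative while the starved neurons' biases rise, which is exactly the "help the neglected neurons" effect described above.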
4 v4 C3 S" v" E" n( l5 p
How are the weights IW{1,1} initialized?
5 V6 H" u/ B* L! P9 zare initialized to the centers of the input ranges with the function midpoint.也就是说是取输入向量的中心。 , u. l4 ?, k6 B" { 2 S6 m& \5 R0 T( V M中心怎么理解呢?简单的说是最大最小值的平均值。' O# z7 K) l1 y6 V9 J: c0 ?
The midpoint function in MATLAB is called as: W = midpoint(S,PR)
S    Number of rows (neurons)
PR   R-by-Q matrix of input value ranges = [Pmin Pmax]

and returns an S-by-R matrix with rows set to (Pmin+Pmax)'/2.
Here the arguments mean: create S neurons, and PR is a matrix whose rows are the input ranges. Note the transpose in the returned result!
Input:
W = midpoint(5,[0 1; -2 2])

By the rule above, the result is a 5-by-2 matrix in which every row is [0.5 0] (0.5 is the midpoint of the range [0, 1], and 0 is the midpoint of [-2, 2]).
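The same computation can be reproduced in Python. This is a sketch mimicking the documented rule (Pmin+Pmax)'/2, not MATLAB's actual implementation:

```python
import numpy as np

def midpoint(S, PR):
    """Mimic MATLAB's midpoint: S neurons, PR has one [Pmin Pmax] row per input.

    Returns an S-by-R matrix; every row holds the midpoints of the R input
    ranges, which is where the transpose in (Pmin+Pmax)'/2 comes from.
    """
    PR = np.asarray(PR, dtype=float)
    centers = (PR[:, 0] + PR[:, 1]) / 2.0   # midpoint of each input range
    return np.tile(centers, (S, 1))         # repeat as S identical rows

W = midpoint(5, [[0, 1], [-2, 2]])
print(W)  # 5 rows, each [0.5, 0.0]: 0.5 = (0+1)/2 and 0.0 = (-2+2)/2
```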
The Kohonen rule allows the weights of a neuron to learn an input vector, and because of this it is useful in recognition applications.
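The core of the Kohonen rule, moving only the winning neuron's weight row a fraction lr of the way toward the input, can be sketched as follows. This is a Python illustration, not the toolbox's learnk code; `kohonen_update` is a hypothetical name:

```python
import numpy as np

def kohonen_update(IW, p, winner, lr=0.1):
    """Apply the Kohonen rule to the winning row only: dw = lr * (p - w)."""
    IW = IW.copy()
    IW[winner] += lr * (p - IW[winner])   # winner moves toward the input
    return IW

IW = np.array([[0.0, 0.0],
               [1.0, 1.0]])
p = np.array([0.8, 0.6])
winner = 1                                # assume neuron 1 won the competition
IW = kohonen_update(IW, p, winner, lr=0.5)
print(IW[1])  # [0.9, 0.8], halfway from [1, 1] toward [0.8, 0.6]
```

Note that the losing neuron's row is untouched, which is precisely the winner-take-all update described earlier.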
Thus, the neuron whose weight vector was closest to the input vector is updated to be even closer. The result is that the winning neuron is more likely to win the competition the next time a similar vector is presented, and less likely to win when a very different input vector is presented. As more and more inputs are presented, each neuron in the layer closest to a group of input vectors soon adjusts its weight vector toward those input vectors. Eventually, if there are enough neurons, every cluster of similar input vectors will have a neuron that outputs 1 when a vector in the cluster is presented, while outputting a 0 at all other times. Thus, the competitive network learns to categorize the input vectors it sees.

The function learnk is used to perform the Kohonen learning rule in this toolbox.

Drawbacks of competitive neural networks:
SOM Neural Networks
An SOM can learn to detect regularities in its input samples and the relationships among them, and it adapts the network to this information so that its later responses fit the inputs. The neurons of a competitive network learn to recognize groups of similar input vectors from the inputs; a self-organizing map likewise learns to recognize groups of similar input vectors, but in such a way that neurons lying close to each other in the network layer respond to similar input vectors. Unlike a competitive network, an SOM learns not only the distribution of the input vectors but also their topology; no single neuron is decisive for pattern classification, which instead requires the cooperation of multiple neurons.
A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map.
The self-organizing map describes a mapping from a higher dimensional input space to a lower dimensional map space. (It maps high-dimensional samples onto a lower-dimensional, typically two-dimensional, space.)
This architecture is like that of a competitive network, except no bias is used here. (It is very similar to the traditional competitive network, just without the bias vector.)
Instead of updating only the winning neuron, neurons close to the winning neuron are updated along with the winning neuron. (Rather than updating a single neuron per iteration, all neurons in the neighborhood of the winner are updated too.)
The training (iteration) process has two phases:

1. Ordering phase: neurons in the winner's neighborhood are updated as well.
2. Tuning phase: the neighborhood has shrunk, so only one neuron (the winner) is updated.
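The two phases can be sketched with a minimal SOM update loop. This is an illustrative Python sketch, not the toolbox's training code; the 3x3 grid, the radii, and the learning rates are assumptions chosen for demonstration:

```python
import numpy as np

def som_step(W, grid, p, lr, radius):
    """One SOM update: W (S, R) weights, grid (S, 2) map coordinates."""
    winner = np.argmin(np.linalg.norm(W - p, axis=1))   # closest weights win
    # all neurons within `radius` of the winner on the map are updated together
    d = np.linalg.norm(grid - grid[winner], axis=1)
    mask = (d <= radius).astype(float)
    return W + lr * mask[:, None] * (p - W)

rng = np.random.default_rng(0)
grid = np.array([[i, j] for i in range(3) for j in range(3)])  # 3x3 map
W = rng.random((9, 2))            # random initial weights in [0, 1)
X = rng.random((200, 2))          # 2-D training samples

for p in X:                       # ordering phase: large neighborhood,
    W = som_step(W, grid, p, lr=0.9, radius=2.0)        # neighbors move too
for p in X:                       # tuning phase: radius below grid spacing,
    W = som_step(W, grid, p, lr=0.02, radius=0.5)       # only the winner moves
print(W.round(2))
```

With the integer grid, a radius of 0.5 selects only the winner itself, so the tuning loop reduces to the plain competitive (winner-only) update, matching the description of the two phases above. A full implementation would also decay lr and radius gradually rather than switching them in one step.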