Simply put, the closer a test sample p is to a given training sample, i.e., the smaller the Euclidean distance between them, the larger the value the RBF outputs.
Suppose sample p closely resembles some training sample (one of the vectors stored in IW1; since the first-layer weights are set to P', each row of IW1 is one training sample), meaning the Euclidean distance dist is small. Then one entry of the output a1 (whose dimension is S1x1) will be large. After the second layer's weights and biases act on it, the linear classifier can separate it easily. That is my intuitive understanding.
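To make this concrete, here is a minimal sketch of that first-layer computation, assuming the Neural Network Toolbox functions radbas and dist (both appear in the doc excerpt quoted below); the data and variable names are made up for illustration:

P = rand(3, 10);                       % 10 training samples, 3 features each
p = P(:, 4) + 0.01 * randn(3, 1);      % a test sample close to training sample 4
spread = 1.0;
IW1 = P';                              % first-layer weights: one training sample per row
b1  = (0.8326 / spread) * ones(10, 1); % first-layer biases (see the doc excerpt below)
a1  = radbas(dist(IW1, p) .* b1);      % S1-by-1 output; the 4th entry will be close to 1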
The MATLAB function that creates an RBF network: net = newrbe(P,T,spread). Only one parameter, spread, needs to be tuned. Its arguments are:
P       RxQ matrix of Q R-element input vectors
T       SxQ matrix of Q S-element target class vectors
spread  Spread of radial basis functions (default = 1.0)
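A minimal usage sketch on toy data (the values here are made up; newrbe and sim are standard Neural Network Toolbox calls):

P = [1 2 3 4 5 6];           % 1x6: six one-element input vectors
T = [0 0 0 1 1 1];           % 1x6: six one-element targets
spread = 1.0;
net = newrbe(P, T, spread);  % exact-fit RBF network, one hidden neuron per sample
Y = sim(net, P);             % reproduces T (almost) exactly on the training inputs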
The larger the spread is, the smoother the function approximation will be. Too large a spread can cause numerical problems.
In other words, the larger the spread parameter, the smoother the RBF curve becomes; the outputs of the radial basis functions then differ less from one another, and the influence of every individual input is weakened.

The MATLAB documentation gives a very concise explanation of this algorithm:
newrbe creates a two-layer network. The first layer has radbas neurons, and calculates its weighted inputs with dist and its net input with netprod. The second layer has purelin neurons, and calculates its weighted input with dotprod and its net inputs with netsum. Both layers have biases.
newrbe sets the first-layer weights to P', and the first-layer biases are all set to 0.8326/spread, resulting in radial basis functions that cross 0.5 at weighted inputs of +/- spread. (That is, the default RBF obtained this way is symmetric about n = 0, and its value is about 0.5 when the net input is +/- 0.8326; on a plot, the curve crosses 0.5 at that point. With the bias set to 0.8326/spread, a weighted input, i.e., a distance of +/- spread, produces exactly this net input.)
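This crossing point is easy to verify numerically, since radbas(n) = exp(-n^2); the following sketch assumes only the radbas function itself:

spread = 2.0;
b = 0.8326 / spread;         % default first-layer bias
disp(radbas(b * spread))     % exp(-0.8326^2), about 0.5000: the 0.5 crossing
disp(radbas(b * [0 1 2 4])) % 1.0 at distance 0, about 0.5 at distance spread = 2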
The second-layer weights IW{2,1} and biases b{2} are found by simulating the first-layer outputs A{1} and then solving the following linear expression:
[W{2,1} b{2}] * [A{1}; ones] = T
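In MATLAB this solve is a single mrdivide call; here is a self-contained sketch with toy matrices (A1 and T below are made-up stand-ins for the first-layer outputs A{1} and the targets):

A1 = rand(5, 8);             % simulated first-layer outputs: S1 = 5 neurons, Q = 8 samples
T  = rand(2, 8);             % targets: S = 2 outputs
Q  = size(A1, 2);
Wb = T / [A1; ones(1, Q)];   % solves [W2 b2] * [A1; ones] = T via mrdivide
W2 = Wb(:, 1:end-1);         % corresponds to IW{2,1}
b2 = Wb(:, end);             % corresponds to b{2}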