Last edited by Colbie on 2020-3-18 09:58.
Kernel principal component analysis (KPCA) applied to dimensionality reduction, feature extraction, and fault detection. Main functions:
(1) Nonlinear principal component extraction for training and test data (dimensionality reduction, feature extraction)
(2) Computation of the SPE and T2 statistics and their control limits
(3) Fault detection
Reference:
Lee J M, Yoo C K, Choi S W, et al. Nonlinear process monitoring using kernel principal component analysis. Chemical Engineering Science, 2004, 59(1): 223-234.
1. KPCA model training (for fault detection):
(1) Obtain the training data (industrial process data should be standardized)
(2) Compute the kernel matrix
(3) Center the kernel matrix
(4) Eigenvalue decomposition
(5) Normalize the eigenvectors
(6) Select the number of principal components
(7) Compute the nonlinear principal components (i.e., the dimensionality-reduction or feature-extraction result)
(8) Compute the control limits of the SPE and T2 statistics
function model = kpca_train(X,options)
% DESCRIPTION
% Kernel principal component analysis (KPCA)
%
%       model = kpca_train(X,options)
%
% INPUT
%   X          Training samples (N*d)
%              N: number of samples
%              d: number of features
%   options    Parameter settings
%
% OUTPUT
%   model      KPCA model
%
% Created on 9th November, 2018, by Kepeng Qiu.

% Number of training samples
L = size(X,1);

% Compute the kernel matrix
K = computeKM(X,X,options.sigma);

% Center the kernel matrix
unit = ones(L,L)/L;
K_c = K-unit*K-K*unit+unit*K*unit;

% Solve the eigenvalue problem
[V,D] = eigs(K_c/L);
lambda = diag(D);

% Normalize the eigenvectors
V_s = V ./ sqrt(L*lambda)';

% Select the number of principal components and
% extract the nonlinear components
if options.type == 1 % fault detection
    dims = find(cumsum(lambda/sum(lambda)) >= options.cpc,1,'first');
else
    dims = options.dims;
end
mappedX = K_c*V_s(:,1:dims);

% Store the results
model.mappedX = mappedX;
model.V_s    = V_s;
model.lambda = lambda;
model.K_c    = K_c;
model.L      = L;
model.dims   = dims;
model.X      = X;
model.K      = K;
model.unit   = unit;
model.sigma  = options.sigma;

% Compute the thresholds
model.beta = options.beta; % corresponding probability (confidence level)
[SPE_limit,T2_limit] = comtupeLimit(model);
model.SPE_limit = SPE_limit;
model.T2_limit  = T2_limit;

end
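The helper `computeKM` called above is not included in the post. The following NumPy sketch shows what it presumably computes: a Gaussian (RBF) kernel matrix parameterized by a single width `sigma`. The exact scaling convention (dividing by `sigma` rather than `2*sigma^2`) is an assumption, so check it against the actual `computeKM` in the attachment.

```python
import numpy as np

def gaussian_kernel_matrix(X, Y, sigma):
    """Gaussian kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / sigma).

    Hypothetical stand-in for the post's computeKM(X, Y, sigma), which is
    not shown; the scaling by `sigma` is an assumption.
    """
    # Squared Euclidean distances via ||x - y||^2 = x.x - 2 x.y + y.y
    sq = (np.sum(X**2, axis=1)[:, None]
          - 2.0 * X @ Y.T
          + np.sum(Y**2, axis=1)[None, :])
    return np.exp(-sq / sigma)
```

With `X == Y` this yields the symmetric training kernel matrix `K` used above; with distinct arguments it yields the test kernel matrix `Kt` used in `kpca_test` below.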
2. KPCA testing:
(1) Obtain the test data (industrial process data should be standardized using the mean and standard deviation of the training data)
(2) Compute the kernel matrix
(3) Center the kernel matrix
(4) Compute the nonlinear principal components (i.e., the dimensionality-reduction or feature-extraction result)
(5) Compute the SPE and T2 statistics
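Step (1) can be sketched in NumPy as follows; the function name is ours, and Demo2 contains the equivalent commented-out MATLAB block. The key point is that the test data must be scaled with the *training* statistics, not its own.

```python
import numpy as np

def standardize_with_train_stats(X_train, Y_test):
    """Z-score both sets using the TRAINING mean and standard deviation.

    Hypothetical helper illustrating step (1); ddof=1 matches MATLAB's std.
    """
    mu = X_train.mean(axis=0)
    st = X_train.std(axis=0, ddof=1)
    return (X_train - mu) / st, (Y_test - mu) / st
```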
function [SPE,T2,mappedY] = kpca_test(model,Y)
% DESCRIPTION
% Compute the SPE statistic, T2 statistic, and the nonlinear components of Y
%
%       [SPE,T2,mappedY] = kpca_test(model,Y)
%
% INPUT
%   model      KPCA model
%   Y          Test samples (L*d)
%
% OUTPUT
%   SPE        SPE statistic
%   T2         Hotelling's T2 statistic
%   mappedY    Nonlinear components of Y
%
% Created on 9th November, 2018, by Kepeng Qiu.

% Number of test samples
L = size(Y,1);

% Compute the kernel matrix
Kt = computeKM(Y,model.X,model.sigma);

% Center the kernel matrix
unit = ones(L,model.L)/model.L;
Kt_c = Kt-unit*model.K-Kt*model.unit+unit*model.K*model.unit;

% Extract the nonlinear components
mappedY = Kt_c*model.V_s(:,1:model.dims);

% Compute Hotelling's T2 statistic
T2 = diag(mappedY/diag(model.lambda(1:model.dims))*mappedY');

% Compute the squared prediction error (SPE)
SPE = sum((Kt_c*model.V_s).^2,2)-sum(mappedY.^2,2);

end
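The control limits against which these statistics are compared are computed during training by `comtupeLimit`, which is not shown in the post. A hedged NumPy/SciPy sketch of the standard formulas from Lee et al. (2004): an F-distribution-based limit for T2 and a weighted chi-square approximation for SPE, fitted from the mean and variance of the training SPE values. The actual `comtupeLimit` in the attachment may differ in detail.

```python
import numpy as np
from scipy import stats

def control_limits(spe_train, n_samples, n_pcs, beta=0.90):
    """Control limits per Lee et al. (2004); sketch, not the post's code.

    T2 limit:  p(N-1)/(N-p) * F_{beta}(p, N-p)
    SPE limit: g * chi2_{beta}(h), with g = b/(2a), h = 2a^2/b,
               where a, b are the mean and variance of the training SPE.
    """
    N, p = n_samples, n_pcs
    t2_limit = p * (N - 1) / (N - p) * stats.f.ppf(beta, p, N - p)
    a = np.mean(spe_train)
    b = np.var(spe_train)
    g, h = b / (2 * a), 2 * a**2 / b
    spe_limit = g * stats.chi2.ppf(beta, h)
    return spe_limit, t2_limit
```

A sample is flagged as a fault when its SPE or T2 value exceeds the corresponding limit, which is what the Demo2 plots below visualize.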
3. Demo1: dimensionality reduction / feature extraction
(1) Source code
% Demo1: dimensionality reduction or feature extraction
% ---------------------------------------------------------------------%
clc
clear all
close all
addpath(genpath(pwd))

% 4 circles
load circledata
X = circledata;
for i = 1:4
    scatter(X(1+250*(i-1):250*i,1),X(1+250*(i-1):250*i,2))
    hold on
end

% Parameter settings
options.sigma = 5;    % kernel width
options.dims  = 2;    % output dimension
options.type  = 0;    % 0: dimensionality reduction or feature extraction
                      % 1: fault detection
options.beta  = 0.9;  % corresponding probability (for fault detection)
options.cpc   = 0.85; % principal contribution rate (for fault detection)

% Train KPCA model
model = kpca_train(X,options);

% Plot the extracted features
figure
for i = 1:4
    scatter(model.mappedX(1+250*(i-1):250*i,1), ...
        model.mappedX(1+250*(i-1):250*i,2))
    hold on
end
(2) Results (the original data and the extracted features, respectively)
4. Demo2: fault detection (tune the kernel width, principal contribution rate, and confidence level to improve detection performance)
(1) Source code
% Demo2: fault detection
% X: training samples
% Y: test samples
% Improve the performance of fault detection by adjusting the parameters:
%   1. options.sigma  % kernel width
%   2. options.beta   % corresponding probability (confidence level)
%   3. options.cpc    % principal contribution rate
% ---------------------------------------------------------------------%
clc
clear all
close all
addpath(genpath(pwd))

X = rand(200,10);
Y = rand(100,10);
Y(20:40,:) = rand(21,10)+3;
Y(60:80,:) = rand(21,10)*3;

% Normalization (if necessary)
% mu = mean(X);
% st = std(X);
% X = zscore(X);
% Y = bsxfun(@rdivide,bsxfun(@minus,Y,mu),st);

% Parameter settings
options.sigma = 16;   % kernel width
options.dims  = 2;    % output dimension
options.type  = 1;    % 0: dimensionality reduction or feature extraction
                      % 1: fault detection
options.beta  = 0.9;  % corresponding probability (for fault detection)
options.cpc   = 0.85; % principal contribution rate (for fault detection)

% Train KPCA model
model = kpca_train(X,options);

% Test new samples Y (vector or matrix)
[SPE,T2,mappedY] = kpca_test(model,Y);

% Plot the results
plotResult(model.SPE_limit,SPE);
plotResult(model.T2_limit,T2);
(2) Results (the SPE statistic and T2 statistic plots, respectively)
The attachment contains the source code for KPCA-based dimensionality reduction, feature extraction, and fault detection. Please point out any errors. Thanks.