
Using the GUI under Windows: first use Import to load the data into inputs and targets. Then click New Network and choose the architecture: select Feedforward Backprop, set the Number of Layers, set the number of nodes in each layer below, and choose the transfer functions (logsig, purelin, tansig). Finally, close the window; clicking View displays the network structure.
Then press Train, fill inputs and targets with the training input X and target Y, and set the parameters you want (for example the error goal) under Training Parameters. Press Train to start training. When it finishes, be sure to Export the network model into the command window (workspace). Then call it, for example:
y1 = sim(network1, x0);
plot(x, y, 'o', x0, y0, x0, y1, ':')
If you would rather have a program, it can look like this:
function BP
x = -1:0.01:1;
y = -1:0.01:1;
p = [x; y];                  % 2-row training input matrix
T = x.^2 + y.^2;             % training targets
x0 = -1:0.1:1;
y0 = -1:0.1:1;
p0 = [x0; y0];               % test inputs
T0 = x0.^2 + y0.^2;          % test targets
net = newff(minmax(p), [10, 1], {'logsig', 'purelin'});
net.trainParam.epochs = 10000;
net.trainParam.goal = 1e-6;
net = train(net, p, T);
figure
T1 = sim(net, p0);
plot(x, T, 'o', x0, T0, x0, T1, ':')
end
Or, as a more complete script:
clear
% input data matrices
p1 = zeros(1, 1000);
p2 = zeros(1, 1000);
% fill with random samples
for i = 1:1000
    p1(i) = rand;
    p2(i) = rand;
end
% two input variables, 1000 samples
p = [p1; p2];
% target (output) matrix; the relation to fit is a simple trigonometric function
t = cos(pi*p1) + sin(pi*p2);
% normalize the training inputs and targets
[pn, inputStr] = mapminmax(p);
[tn, outputStr] = mapminmax(t);
% create the BP network (two hidden layers: 200 and 10 neurons)
net = newff(pn, tn, [200, 10]);
% display progress every 10 epochs
net.trainParam.show = 10;
% maximum number of training epochs
net.trainParam.epochs = 5000;
% learning rate
net.trainParam.lr = 0.05;
% target training error
net.trainParam.goal = 1e-8;
% by default MATLAB holds out validation data and stops early if the
% validation error fails to improve for 6 consecutive checks; disable
% that data division so training runs to the goal or the epoch limit
net.divideFcn = '';
% train the network
net = train(net, pn, tn);
% after training, extract the network's weights w and biases b
netiw = net.iw;
netlw = net.lw;
netb = net.b;
w1 = net.iw{1,1};   % weights: input layer -> hidden layer 1
b1 = net.b{1};      % biases:  hidden layer 1
w2 = net.lw{2,1};   % weights: hidden layer 1 -> hidden layer 2
b2 = net.b{2};      % biases:  hidden layer 2
w3 = net.lw{3,2};   % weights: hidden layer 2 -> output layer
b3 = net.b{3};      % biases:  output layer
% with the default transfer functions the fitted mapping is
% y = w3*tansig(w2*tansig(w1*in + b1) + b2) + b3
% evaluate the formula on test data [x1; x2] (x1, x2 are your test
% inputs); normalize the input and de-normalize the output
in = mapminmax('apply', [x1; x2], inputStr);
y = w3*tansig(w2*tansig(w1*in + b1) + b2) + b3;
y1 = mapminmax('reverse', y, outputStr);
% verify the formula against the network's own output
out = sim(net, in);
out1 = mapminmax('reverse', out, outputStr);
Additional notes
I. Training functions
1. traingd
Name: Gradient descent backpropagation
Description: traingd is a network training function that updates weight and bias values according to gradient descent.
2. traingda
Name: Gradient descent with adaptive learning rate backpropagation
Description: traingda is a network training function that updates weight and bias values according to gradient descent with an adaptive learning rate. It returns a trained network (net) and the training record (tr).
3. traingdx (the default training function of newelm)
Name: Gradient descent with momentum and adaptive learning rate backpropagation
Description: traingdx is a network training function that updates weight and bias values according to gradient descent with momentum and an adaptive learning rate. It returns a trained network (net) and the training record (tr).
4. trainlm
Name: Levenberg-Marquardt backpropagation
Description: trainlm is a network training function that updates weight and bias values according to Levenberg-Marquardt optimization. It returns a trained network (net) and the training record (tr).
Note: use MATLAB's help command to see more training algorithms.
II. Learning functions
1. learngd
Name: Gradient descent weight and bias learning function
Description: learngd is the gradient descent weight and bias learning function. It returns the weight change dW and a new learning state.
2. learngdm
Name: Gradient descent with momentum weight and bias learning function
Description: learngdm is the gradient descent with momentum weight and bias learning function. It returns the weight change dW and a new learning state.
Note: use MATLAB's help command to see more learning functions.
III. The difference between training functions and learning functions
A learning function outputs the increments of the weights and biases; a training function outputs the trained network and the training record. During training, the training function repeatedly calls the learning function to correct the weights and biases, and ends training once the set number of epochs has been reached or the error computed by the performance function has dropped below the goal.
Put another way: a training function adjusts the weights and biases globally, aiming to minimize the overall error, while a learning function adjusts them locally, aiming to minimize the error of individual neurons.
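To make the distinction concrete, here is a minimal configuration sketch using the classic Neural Network Toolbox property names; the particular function choices are illustrative, not prescribed by this article:

```
% training function: drives the whole training loop (global adjustment)
net.trainFcn = 'traingdx';
% learning functions: drive the per-weight / per-bias updates (local adjustment)
net.inputWeights{1,1}.learnFcn = 'learngdm';
net.layerWeights{2,1}.learnFcn = 'learngdm';
net.biases{1}.learnFcn = 'learngdm';
```

During train(net, ...), the function named in net.trainFcn repeatedly invokes the learnFcn assigned to each weight and bias to compute the individual increments.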
The basic idea of BP is that learning consists of two passes: forward propagation of the signal and backward propagation of the error.
In the forward pass, an input sample enters at the input layer, is processed layer by layer through the hidden layers, and reaches the output layer. If the actual output of the output layer does not match the desired output (the teacher signal), the algorithm switches to the error backpropagation phase.
In the backward pass, the output error is propagated back through the hidden layers toward the input layer, apportioning the error among all the units of each layer; each unit's error signal then serves as the basis for correcting its weights.
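In symbols (the standard textbook formulation, not tied to this article's code): for an output unit the error signal comes directly from the target, for a hidden unit it is accumulated from the layer above, and each weight moves along its share of the gradient, scaled by the learning rate $\eta$:

```latex
\delta_j^{\text{out}} = f'(net_j)\,(t_j - y_j), \qquad
\delta_j^{\text{hidden}} = f'(net_j) \sum_k w_{kj}\,\delta_k, \qquad
\Delta w_{ji} = \eta\,\delta_j\,x_i
```

Here $f$ is the unit's transfer function (e.g. tansig or logsig above), $x_i$ its $i$-th input, $t_j$ the target, and $y_j$ the actual output.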
The genetic algorithm (GA), proposed by Professor Holland of the University of Michigan, is an effective method for solving complex combinatorial optimization problems. Its ideas come from Darwinian evolution and Mendelian genetics: it simulates the process of biological evolution to sift good solutions out of an enormous search space, and it is both efficient and highly robust. For these reasons the GA is widely used to solve the TSP and the MTSP.
A MATLAB program follows:
function [opt_rte, opt_brk, min_dist] = mtspf_ga(xy, dmat, salesmen, min_tour, pop_size, num_iter)
%%
% Example:
% n = 20;                    % number of cities
% xy = 10*rand(n,2);         % city coordinates (random here, or set your own)
% salesmen = 5;              % number of salesmen
% min_tour = 3;              % minimum number of cities each salesman visits
% pop_size = 80;             % population size
% num_iter = 200;            % number of iterations
% a = meshgrid(1:n);
% dmat = reshape(sqrt(sum((xy(a,:) - xy(a',:)).^2, 2)), n, n);
% [opt_rte, opt_brk, min_dist] = mtspf_ga(xy, dmat, salesmen, min_tour, ...
%     pop_size, num_iter);
%%
[N, dims] = size(xy);        % size of the city matrix
[nr, nc] = size(dmat);       % size of the distance matrix
n = N - 1;                   % number of cities, excluding the starting city
% initialization for the route / break-point selection
num_brks = salesmen - 1;
dof = n - min_tour*salesmen; % degrees of freedom for break placement
addto = ones(1, dof+1);
for k = 2:num_brks
    addto = cumsum(addto);
end
cum_prob = cumsum(addto)/sum(addto);
%% initialize the population
pop_rte = zeros(pop_size, n);          % population of routes
pop_brk = zeros(pop_size, num_brks);   % population of break-point sets
for k = 1:pop_size
    pop_rte(k,:) = randperm(n) + 1;
    pop_brk(k,:) = randbreaks();
end
% colors for plotting the route curves
clr = [1 0 0; 0 0 1; 0.67 0 1; 0 1 0; 1 0.5 0];
if salesmen > 5
    clr = hsv(salesmen);
end
%%
% GA-based MTSP
global_min = Inf;                          % initialize the shortest distance
total_dist = zeros(1, pop_size);
dist_history = zeros(1, num_iter);
tmp_pop_rte = zeros(8, n);                 % current route set
tmp_pop_brk = zeros(8, num_brks);          % current break-point set
new_pop_rte = zeros(pop_size, n);          % updated route set
new_pop_brk = zeros(pop_size, num_brks);   % updated break-point set
for iter = 1:num_iter
    % evaluate fitness
    for p = 1:pop_size
        d = 0;
        p_rte = pop_rte(p,:);
        p_brk = pop_brk(p,:);
        rng = [[1 p_brk+1]; [p_brk n]]';
        for s = 1:salesmen
            d = d + dmat(1, p_rte(rng(s,1)));       % leg from the depot
            for k = rng(s,1):rng(s,2)-1
                d = d + dmat(p_rte(k), p_rte(k+1));
            end
            d = d + dmat(p_rte(rng(s,2)), 1);       % leg back to the depot
        end
        total_dist(p) = d;
    end
    % find the best route in the population
    [min_dist, index] = min(total_dist);
    dist_history(iter) = min_dist;
    if min_dist < global_min
        global_min = min_dist;
        opt_rte = pop_rte(index,:);                % best route so far
        opt_brk = pop_brk(index,:);                % best break points so far
        rng = [[1 opt_brk+1]; [opt_brk n]]';       % break-point bookkeeping
        figure(1)
        for s = 1:salesmen
            rte = [1 opt_rte(rng(s,1):rng(s,2)) 1];
            plot(xy(rte,1), xy(rte,2), '.-', 'Color', clr(s,:))
            title(sprintf('cities = %d, salesmen = %d, total distance = %1.4f, iteration = %d', ...
                n+1, salesmen, min_dist, iter))
            hold on
            grid on
        end
        plot(xy(1,1), xy(1,2), 'ko')
        hold off
    end
    % genetic operators
    rand_grouping = randperm(pop_size);
    for p = 8:8:pop_size
        rtes = pop_rte(rand_grouping(p-7:p),:);
        brks = pop_brk(rand_grouping(p-7:p),:);
        dists = total_dist(rand_grouping(p-7:p));
        [ignore, idx] = min(dists);
        best_of_8_rte = rtes(idx,:);
        best_of_8_brk = brks(idx,:);
        rte_ins_pts = sort(ceil(n*rand(1,2)));
        I = rte_ins_pts(1);
        J = rte_ins_pts(2);
        for k = 1:8   % generate the new individuals
            tmp_pop_rte(k,:) = best_of_8_rte;
            tmp_pop_brk(k,:) = best_of_8_brk;
            switch k
                case 2   % flip operation
                    tmp_pop_rte(k,I:J) = fliplr(tmp_pop_rte(k,I:J));
                case 3   % swap operation
                    tmp_pop_rte(k,[I J]) = tmp_pop_rte(k,[J I]);
                case 4   % slide operation
                    tmp_pop_rte(k,I:J) = tmp_pop_rte(k,[I+1:J I]);
                case 5   % new break points
                    tmp_pop_brk(k,:) = randbreaks();
                case 6   % flip and new break points
                    tmp_pop_rte(k,I:J) = fliplr(tmp_pop_rte(k,I:J));
                    tmp_pop_brk(k,:) = randbreaks();
                case 7   % swap and new break points
                    tmp_pop_rte(k,[I J]) = tmp_pop_rte(k,[J I]);
                    tmp_pop_brk(k,:) = randbreaks();
                case 8   % slide and new break points
                    tmp_pop_rte(k,I:J) = tmp_pop_rte(k,[I+1:J I]);
                    tmp_pop_brk(k,:) = randbreaks();
                otherwise   % case 1 keeps the best of the 8 unchanged
            end
        end
        new_pop_rte(p-7:p,:) = tmp_pop_rte;
        new_pop_brk(p-7:p,:) = tmp_pop_brk;
    end
    pop_rte = new_pop_rte;
    pop_brk = new_pop_brk;
end
figure(2)
plot(dist_history, 'b', 'LineWidth', 2)
title('best solution over the iterations')
xlabel('iteration')
ylabel('best total distance')
disp('optimal route:')
disp(opt_rte)
disp('break points:')
disp(opt_brk)
% generate a random set of break points
    function breaks = randbreaks()
        if min_tour == 1   % with a minimum tour of 1, no spacing is forced
            tmp_brks = randperm(n-1);
            breaks = sort(tmp_brks(1:num_brks));
        else               % force each salesman to visit at least min_tour cities
            num_adjust = find(rand < cum_prob, 1) - 1;
            spaces = ceil(num_brks*rand(1, num_adjust));
            adjust = zeros(1, num_brks);
            for kk = 1:num_brks
                adjust(kk) = sum(spaces == kk);
            end
            breaks = min_tour*(1:num_brks) + cumsum(adjust);
        end
    end
end