How to generate deploy.prototxt for Caffe programmatically in Python


It is quite simple:

import caffe  # needed for caffe.NetSpec()
from caffe import layers as L, params as P

def custom_net(lmdb, batch_size):
    # define your own net!
    n = caffe.NetSpec()
    if lmdb is None:  # "deploy" flavor
        # assuming your data is of shape 3x224x224
        n.data = L.Input(input_param={'shape': {'dim': [1, 3, 224, 224]}})
    else:
        # keep this data layer for all networks
        n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB,
                                 source=lmdb, ntop=2,
                                 transform_param=dict(scale=1. / 255))
    # the other layers, common to all flavors: train/val/deploy...
    n.conv1 = L.Convolution(n.data, kernel_size=6, num_output=48,
                            weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=48,
                            weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv3 = L.Convolution(n.pool2, kernel_size=4, num_output=48,
                            weight_filler=dict(type='xavier'))
    n.pool3 = L.Pooling(n.conv3, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv4 = L.Convolution(n.pool3, kernel_size=2, num_output=48,
                            weight_filler=dict(type='xavier'))
    n.pool4 = L.Pooling(n.conv4, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.fc1 = L.InnerProduct(n.pool4, num_output=50,
                           weight_filler=dict(type='xavier'))
    # do you "drop" in deploy as well? up to you to decide...
    n.drop1 = L.Dropout(n.fc1, dropout_param=dict(dropout_ratio=0.5))
    n.score = L.InnerProduct(n.drop1, num_output=2,
                             weight_filler=dict(type='xavier'))
    if lmdb is None:
        n.prob = L.Softmax(n.score)
    else:
        # keep this loss layer for all networks apart from "deploy"
        n.loss = L.SoftmaxWithLoss(n.score, n.label)
    return n.to_proto()

Now call the function:

with open('net_deploy.prototxt', 'w') as f:
    f.write(str(custom_net(None, None)))

As you can see, there are two changes to the prototxt (conditioned on lmdb being None):

First, instead of a "Data" layer you declare an "Input" layer, which declares only "data" and no "label".

Second, the output layer changes: instead of a loss layer you have a prediction layer (see, e.g., this answer).
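For reference, the "deploy" flavor written by custom_net(None, None) should begin and end roughly like the fragment below. This is a sketch of the expected prototxt, not verified output; the exact field ordering can vary slightly between Caffe versions:

layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param {
    shape { dim: 1 dim: 3 dim: 224 dim: 224 }
  }
}
# ... conv/pool/fc layers as above ...
layer {
  name: "prob"
  type: "Softmax"
  bottom: "score"
  top: "prob"
}

The train/val flavor instead starts with a layer of type "Data" (carrying the LMDB source, batch_size, and transform_param) that also produces a "label" top, and ends with a "SoftmaxWithLoss" layer in place of "Softmax".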


