Low accuracy with multi-dimensional outputs in Keras


Solutions:

1. Check your data: make sure the data you feed the model is accurate. Noise or outliers in the data can drag down model accuracy.

2. Adjust the model: for multi-dimensional outputs, check whether your architecture is appropriate. You can try changing the number of layers, increasing the number of neurons, or switching activation functions. You can also tune the optimizer, for example the learning rate or the weight initialization, to improve performance (a sketch follows this list).
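As a concrete illustration of point 2, here is a minimal sketch of a model with a multi-dimensional (here 3-dimensional) regression output whose RMSprop learning rate has been lowered. The layer sizes, input dimension, output dimension, and learning rate are all illustrative assumptions, not values from the original question:

    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import RMSprop

    model = Sequential()
    model.add(Dense(64, activation='relu', input_dim=10))  # 10 input features (assumed)
    model.add(Dense(64, activation='relu'))
    model.add(Dense(3))  # linear 3-dimensional output for regression (assumed)

    # a smaller learning rate than the default often stabilises multi-output training
    model.compile(optimizer=RMSprop(lr=1e-4), loss='mse')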

To guard against overfitting, besides regularization and Dropout you can also augment the training data.

1. Augmentation is normally applied to the training set only, not to the validation or test set. If the model still performs poorly, you can additionally use test-time augmentation: run, say, 10 augmented copies of each test image through the model and average the predictions (see the sketch after this list).

2. Use cross-validation, for example 5-fold, which produces 5 models whose predictions are combined by voting.

3. Vary the training hyperparameters, and combine this with the two methods above.

4. For images, Keras can do the augmentation itself via ImageDataGenerator, as sketched below.
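A minimal sketch of points 1 and 4, using the same Keras 1-style API as the code later in this post; the augmentation parameters, directory name, image size, and the `tta_predict` helper are illustrative assumptions:

    import numpy as np
    from keras.preprocessing.image import ImageDataGenerator

    # augment the training set only (parameter values are assumptions)
    train_datagen = ImageDataGenerator(
            rescale=1./255,
            rotation_range=20,
            width_shift_range=0.1,
            height_shift_range=0.1,
            horizontal_flip=True)
    train_generator = train_datagen.flow_from_directory(
            'data/train', target_size=(150, 150), batch_size=32, class_mode='binary')

    # the validation/test set is only rescaled, never augmented
    test_datagen = ImageDataGenerator(rescale=1./255)

    # test-time augmentation: average the predictions over n_aug randomly
    # augmented copies of each image (x is assumed already rescaled to [0, 1],
    # and datagen is an ImageDataGenerator with geometric transforms only)
    def tta_predict(model, x, datagen, n_aug=10):
        preds = [model.predict(np.array([datagen.random_transform(img) for img in x]))
                 for _ in range(n_aug)]
        return np.mean(preds, axis=0)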

A slightly more refined approach is to use a network pretrained on a large dataset. Such a network has already learned features that are useful for most computer vision problems, and building on those features lets us reach higher accuracy.

We will use the VGG-16 network, trained on the ImageNet dataset, a model we have mentioned before. Because ImageNet contains many "cat" and "dog" classes, this model has already learned features relevant to our dataset. In fact, simply recording the original network's output, rather than the bottleneck features, would already be enough to solve our problem fairly well. But the method presented here generalizes better to similar problems, including classifying categories that do not appear in ImageNet.

The VGG-16 network structure (shown as a figure in the original post) is rebuilt layer by layer in the full code later in this post.

Our approach is as follows: we keep only the convolutional part of the network and throw away everything from the fully-connected layers up. We then run it once over our training and validation sets and record the outputs (the "bottleneck features", the last activation feature maps before the fully-connected layers) in two numpy arrays. Finally, we train a small fully-connected network on the recorded features.

The reason we store these features offline, rather than attaching our fully-connected model directly on top of the network, freezing the earlier layers, and training the whole thing, is computational efficiency. Running the VGG network is expensive, especially on CPU, so we only want to run it once. This is also why we do not apply data augmentation here.
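For reference, the alternative just described would look roughly like the sketch below, where `model` is the convolutional stack built in the code that follows and `top_model` is the small fully-connected classifier; it works, but every epoch re-runs the whole VGG convolutional stack:

    # slower alternative: attach the classifier and freeze the pretrained layers
    for layer in model.layers:
        layer.trainable = False  # do not update the convolutional weights
    model.add(top_model)  # then compile and train end-to-end as usual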

We will not repeat how to build the VGG-16 network; that was covered earlier, and it can also be found in the Keras examples. Instead, let us look at how to record the bottleneck features.

    generator = datagen.flow_from_directory(
            'data/train',
            target_size=(150, 150),
            batch_size=32,
            class_mode=None,  # this means our generator will only yield batches of data, no labels
            shuffle=False)  # our data will be in order, so all first 1000 images will be cats, then 1000 dogs

    # the predict_generator method returns the output of a model, given
    # a generator that yields batches of numpy data
    bottleneck_features_train = model.predict_generator(generator, 2000)
    # save the output as a Numpy array (the file must be opened in binary mode)
    np.save(open('bottleneck_features_train.npy', 'wb'), bottleneck_features_train)

    generator = datagen.flow_from_directory(
            'data/validation',
            target_size=(150, 150),
            batch_size=32,
            class_mode=None,
            shuffle=False)
    bottleneck_features_validation = model.predict_generator(generator, 800)
    np.save(open('bottleneck_features_validation.npy', 'wb'), bottleneck_features_validation)

Once the features are recorded, we can load them and train our fully-connected network:

    train_data = np.load(open('bottleneck_features_train.npy', 'rb'))
    # the features were saved in order, so recreating the labels is easy
    train_labels = np.array([0] * 1000 + [1] * 1000)

    validation_data = np.load(open('bottleneck_features_validation.npy', 'rb'))
    validation_labels = np.array([0] * 400 + [1] * 400)

    model = Sequential()
    model.add(Flatten(input_shape=train_data.shape[1:]))
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))

    model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])

    model.fit(train_data, train_labels,
              nb_epoch=50, batch_size=32,
              validation_data=(validation_data, validation_labels))
    model.save_weights('bottleneck_fc_model.h5')

Because the features are small, the model also trains quickly on CPU, about 1 s per epoch, and we end up with an accuracy of about 90%-91%. Such a good result is largely due to the features extracted by the pretrained VGG network.
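To reuse the trained classifier later, a minimal sketch: rebuild the same top model, then load the saved weights and predict on a saved feature array such as `validation_data` from above:

    model.load_weights('bottleneck_fc_model.h5')
    preds = model.predict(validation_data)  # per-sample probability of class 1 ("dog")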

The complete code is below:


    import os
    import sys

    import h5py
    import numpy as np
    from keras.preprocessing.image import ImageDataGenerator
    from keras.models import Sequential
    from keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D
    from keras.layers import Activation, Dropout, Flatten, Dense

    # Python 2 default-encoding workaround
    defaultencoding = 'utf-8'
    if sys.getdefaultencoding() != defaultencoding:
        reload(sys)
        sys.setdefaultencoding(defaultencoding)

    # path to the model weights file.
    weights_path = '../weights/vgg16_weights.h5'
    top_model_weights_path = 'bottleneck_fc_model.h5'
    # dimensions of our images.
    img_width, img_height = 150, 150

    train_data_dir = '../data/train'
    validation_data_dir = '../data/validation'
    nb_train_samples = 2000
    nb_validation_samples = 800
    nb_epoch = 50


    def save_bottlebeck_features():
        datagen = ImageDataGenerator(rescale=1./255)

        # build the VGG16 network (convolutional blocks only)
        model = Sequential()
        model.add(ZeroPadding2D((1, 1), input_shape=(3, img_width, img_height)))
        model.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_1'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_2'))
        model.add(MaxPooling2D((2, 2), strides=(2, 2)))

        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_1'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_2'))
        model.add(MaxPooling2D((2, 2), strides=(2, 2)))

        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_1'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_2'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_3'))
        model.add(MaxPooling2D((2, 2), strides=(2, 2)))

        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_1'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_2'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_3'))
        model.add(MaxPooling2D((2, 2), strides=(2, 2)))

        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_1'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_2'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_3'))
        model.add(MaxPooling2D((2, 2), strides=(2, 2)))

        # load the weights of the VGG16 network
        # (trained on ImageNet, won the ILSVRC competition in 2014)
        # note: when there is a complete match between your model definition
        # and your weight savefile, you can simply call model.load_weights(filename)
        assert os.path.exists(weights_path), 'Model weights not found (see "weights_path" variable in script).'
        f = h5py.File(weights_path, 'r')
        for k in range(f.attrs['nb_layers']):
            if k >= len(model.layers):
                # we don't look at the last (fully-connected) layers in the savefile
                break
            g = f['layer_{}'.format(k)]
            weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
            model.layers[k].set_weights(weights)
        f.close()
        print('Model loaded.')

        generator = datagen.flow_from_directory(
                train_data_dir,
                target_size=(img_width, img_height),
                batch_size=32,
                class_mode=None,
                shuffle=False)
        print('generator ok.')
        bottleneck_features_train = model.predict_generator(generator, nb_train_samples)
        print('predict ok.')
        np.save(open('bottleneck_features_train.npy', 'wb'), bottleneck_features_train)

        generator = datagen.flow_from_directory(
                validation_data_dir,
                target_size=(img_width, img_height),
                batch_size=32,
                class_mode=None,
                shuffle=False)
        bottleneck_features_validation = model.predict_generator(generator, nb_validation_samples)
        np.save(open('bottleneck_features_validation.npy', 'wb'), bottleneck_features_validation)
        print('save_bottlebeck_features ok')


    def train_top_model():
        train_data = np.load(open('bottleneck_features_train.npy', 'rb'))
        # integer division so the label counts stay ints under Python 2 and 3
        train_labels = np.array([0] * (nb_train_samples // 2) + [1] * (nb_train_samples // 2))

        validation_data = np.load(open('bottleneck_features_validation.npy', 'rb'))
        validation_labels = np.array([0] * (nb_validation_samples // 2) + [1] * (nb_validation_samples // 2))

        model = Sequential()
        model.add(Flatten(input_shape=train_data.shape[1:]))
        model.add(Dense(256, activation='relu'))
        model.add(Dropout(0.5))
        model.add(Dense(1, activation='sigmoid'))
        model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

        model.fit(train_data, train_labels,
                  nb_epoch=nb_epoch, batch_size=32,
                  validation_data=(validation_data, validation_labels))
        model.save_weights(top_model_weights_path)
        print('train_top_model ok')


    save_bottlebeck_features()
    train_top_model()

