Recently I have been working on YOLO object detection. Since there is very little material online about YOLO's classification pretraining and YOLO9000's joint training on combined datasets, I am writing up both parts based on my own experiments (this post covers YOLO's classification pretraining).
1. Data preparation
The 1000-class ImageNet image data.
Because ImageNet stores each class in its own folder with a specific name such as 'n00020287', we do not need to create separate label files for classification: it is enough that the path of each training image contains its own class label and does not contain the label of any other class.
Create the file list used for training, classf_list.txt.
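As a concrete illustration, here is a minimal Python sketch for generating classf_list.txt. The ImageNet root path and the output path are assumptions for illustration only; adjust both to your own machine. The key point, as noted above, is that every written path contains exactly one synset ID.
import os

# Assumed ImageNet root: one sub-folder per synset (e.g. n00020287/xxx.JPEG);
# adjust both paths to your own setup.
imagenet_root = "/home/research/disk2/wangshun/imagenet"
out_file = "coco/filelist/classf_list.txt"

with open(out_file, "w") as f:
    for synset in sorted(os.listdir(imagenet_root)):
        class_dir = os.path.join(imagenet_root, synset)
        if not os.path.isdir(class_dir):
            continue
        for name in sorted(os.listdir(class_dir)):
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                # Each written path contains exactly one synset ID,
                # which is how darknet infers the class during training.
                f.write(os.path.join(class_dir, name) + "\n")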
2. Creating the classification labels
Create new_label.txt, the list of all class labels, and new_name.txt, the list of the class names corresponding to those labels.
new_label.txt
new_name.txt (not needed for training, but at test time it lets darknet display the concrete class name)
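Both files can be produced at once. The Python sketch below assumes a synset-to-name mapping file called words.txt with lines in "synset_id<TAB>name" format; that file name and format are assumptions, so substitute whatever mapping you have on hand.
import os

imagenet_root = "/home/research/disk2/wangshun/imagenet"  # assumed path
synsets = sorted(d for d in os.listdir(imagenet_root)
                 if os.path.isdir(os.path.join(imagenet_root, d)))

# words.txt: lines of "n00020287<TAB>human-readable name" -- an assumed mapping file.
names = {}
with open("words.txt") as f:
    for line in f:
        wnid, name = line.rstrip("\n").split("\t", 1)
        names[wnid] = name

with open("data/new_label.txt", "w") as f_label, \
     open("data/new_name.txt", "w") as f_name:
    for wnid in synsets:
        f_label.write(wnid + "\n")                  # label used during training
        f_name.write(names.get(wnid, wnid) + "\n")  # name shown at test time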
3. Edit the cfg .data configuration file (classf.data)
classes=1000
train =/home/research/disk2/wangshun/yolo1700/darknet/coco/filelist/classf_list.txt
labels=data/new_label.txt
names=data/new_name.txt
backup=backup
top=5
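Before launching training, it can be worth checking that the paths referenced in classf.data resolve and that the number of labels matches classes=1000. This is only an optional convenience check, not part of darknet; a minimal Python sketch:
# Optional sanity check for classf.data; not part of darknet itself.
cfg = {}
with open("cfg/classf.data") as f:
    for line in f:
        if "=" in line:
            key, value = line.split("=", 1)
            cfg[key.strip()] = value.strip()

with open(cfg["labels"]) as f:
    n_labels = sum(1 for _ in f)
assert n_labels == int(cfg["classes"]), "classes does not match new_label.txt"

with open(cfg["train"]) as f:
    n_images = sum(1 for _ in f)
print("training images:", n_images, "  classes:", n_labels)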
4. Edit the network configuration file (classf.cfg)
[net]
#Training
batch=64
subdivisions=1
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
max_crop = 512
learning_rate=0.001
burn_in=1000
max_batches = 1000000000
policy=steps
steps=350000,500000,750000,1200000
scales=.1,.1,.1,.1
[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=32
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
#######
[convolutional]
batch_normalize=1
size=1
stride=1
pad=1
filters=128
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky
[convolutional]
filters=1000
size=1
stride=1
pad=1
activation=leaky
[avgpool]
[softmax]
groups = 1
[cost]
type=sse
Of course, the intermediate network layers here are my own modified network.
5. Training
./darknet classifier train cfg/classf.data cfg/classf.cfg -gpus 0,3 (choose the GPUs of your own machine)
6. Testing
./darknet classifier predict cfg/classf.data cfg/classf.cfg backup/classf.weights data/eagle.jpg
Of course, this is only the result of a test after just 2000 training iterations; it is merely a quick check, and training needs to continue.
That is all of this post on ImageNet classification pretraining with the darknet framework. I hope it serves as a useful reference, and I hope you will continue to support this blog.