Vehicle Type Recognition with the MAX78000
Tags: Embedded Systems, MAX78000
Author: forever123 (Central China Normal University)
Updated: 2024-01-11

1. Project Introduction

As traffic grows ever denser, road safety remains a constant concern. This project uses the image-recognition capability of the MAX78000 to distinguish between common vehicle types (bicycle, car, truck, motorcycle, bus), which can assist traffic management and make everyday life more convenient.

2. Design Approach

The first step was to find a dataset; I found a vehicle-recognition dataset on the Kaggle website. Next came setting up the environment: downloading the MSDK and using WSL2 with Ubuntu 22.04. I then modified the model, tuned the hyperparameters to suit my training set, and trained it; after training, I quantized and evaluated the result and generated a sample test file. Next, I adapted one of the YAML network descriptions in ai8x-synthesis to my dataset and synthesized C code for deployment on the MAX78000. Finally, I added camera and TFT display code to the project so the recognition result is shown on screen; the recognition result and a dump of the camera image can also be printed to the terminal. The overall flow is shown below:

(Overall workflow diagram)

3. Collecting Material

Images of buses, cars, motorcycles, and other vehicle types were collected from the Kaggle website as data.


Dataset URL: https://www.kaggle.com/datasets/iamsandeepprasad/vehicle-data-set?rvi=1

4. Pre-training Implementation

Store the collected dataset in the following layout:

data/vehicle
|-- test
|   |-- Car: .jpg images of cars
|   |-- Bus: .jpg images of buses
|   |-- Bike: .jpg images of bikes
|   |-- Motocycle: .jpg images of motorcycles
|   |-- Truck: .jpg images of trucks
|-- train
|   |-- Car: .jpg images of cars
|   |-- Bus: .jpg images of buses
|   |-- Bike: .jpg images of bikes
|   |-- Motocycle: .jpg images of motorcycles
|   |-- Truck: .jpg images of trucks

Dataset overview

Training set: 11,098 images (bicycles, cars, buses, motorcycles, trucks)

Test set: 3,220 images (bicycles, cars, buses, motorcycles, trucks)
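Once the images are arranged in the layout above, it is easy to verify the per-class counts. A minimal sketch (the path `data/vehicle/train` is an assumption matching the layout shown; only the standard library is used):

```python
import os

def count_images(split_dir):
    """Count .jpg files in each class folder of a train/ or test/ split."""
    counts = {}
    for cls in sorted(os.listdir(split_dir)):
        cls_dir = os.path.join(split_dir, cls)
        if os.path.isdir(cls_dir):
            counts[cls] = sum(1 for f in os.listdir(cls_dir)
                              if f.lower().endswith('.jpg'))
    return counts

# Example (hypothetical path):
# print(count_images('data/vehicle/train'))
```

Summing the returned counts should give 11,098 for the training split and 3,220 for the test split.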

Data-processing code

Create vehicle.py under the datasets directory of ai8x-training.

It is adapted from the cats-vs-dogs dataset-processing code: the labels are changed to the English names of the five vehicle classes I need (bicycle, car, bus, motorcycle, truck),

and the outputs entry at the end of the file is changed the same way so the result is one of these five classes.

"""
vehicle Datasets
"""
import os
import sys

import torch
from torch.utils.data import Dataset
from torchvision import transforms

import albumentations as album
import cv2

import ai8x


class vehicle(Dataset):

labels = ['car', 'motocycle', 'bus', 'truck', 'bike']
label_to_id_map = {k: v for v, k in enumerate(labels)} # 将标签中的成员与标识符对应(执行完后有label_to_id_map = {'car': 0,'bike': 1,'bus': 2}

label_to_folder_map = {'car': 'Car', 'motocycle': 'Motocycle', 'bus': 'Bus', 'truck': 'Truck', 'bike': 'Bike'} # 将car标签与Car文件夹对应

def __init__(self, root_dir, d_type, transform=None,
resize_size=(128, 128), augment_data=False):
self.root_dir = root_dir
self.data_dir = os.path.join(root_dir, 'vehicle', d_type) # 将root_dir/'vehicle'/d_type这个路径赋给data_dir

if not self.__check_vehicle_data_exist():
self.__print_download_manual()
sys.exit("Dataset not found!")

self.__get_image_paths()

self.album_transform = None
if d_type == 'train' and augment_data: #对数据的增强操作
self.album_transform = album.Compose([
album.GaussNoise(var_limit=(1.0, 20.0), p=0.25),
album.RGBShift(r_shift_limit=15, g_shift_limit=15, b_shift_limit=15, p=0.5),
album.ColorJitter(p=0.5),
album.SmallestMaxSize(max_size=int(1.2*min(resize_size))),
album.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.05, rotate_limit=15, p=0.5),
album.RandomCrop(height=resize_size[0], width=resize_size[1]),
album.HorizontalFlip(p=0.5),
album.Normalize(mean=(0.0, 0.0, 0.0), std=(1.0, 1.0, 1.0))])
if not augment_data or d_type == 'test':
self.album_transform = album.Compose([
album.SmallestMaxSize(max_size=int(1.2*min(resize_size))),
album.CenterCrop(height=resize_size[0], width=resize_size[1]),
album.Normalize(mean=(0.0, 0.0, 0.0), std=(1.0, 1.0, 1.0))])

self.transform = transform

def __check_vehicle_data_exist(self):
return os.path.isdir(self.data_dir)


def __get_image_paths(self):
self.data_list = []

for label in self.labels:
image_dir = os.path.join(self.data_dir, self.label_to_folder_map[label])
for file_name in sorted(os.listdir(image_dir)):
file_path = os.path.join(image_dir, file_name)
if os.path.isfile(file_path):
self.data_list.append((file_path, self.label_to_id_map[label]))

def __len__(self):
return len(self.data_list)

def __getitem__(self, index):
label = torch.tensor(self.data_list[index][1], dtype=torch.int64)

image_path = self.data_list[index][0]
print(image_path)
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

if self.album_transform:
image = self.album_transform(image=image)["image"]

if self.transform:
image = self.transform(image)

return image, label


def get_vehicle_dataset(data, load_train, load_test):

(data_dir, args) = data

transform = transforms.Compose([
transforms.ToTensor(),
ai8x.normalize(args=args),
])

if load_train:
train_dataset = vehicle(root_dir=data_dir, d_type='train',
transform=transform, augment_data=True)
else:
train_dataset = None

if load_test:
test_dataset = vehicle(root_dir=data_dir, d_type='test', transform=transform)
else:
test_dataset = None

return train_dataset, test_dataset


datasets = [
{
'name': 'vehicle',
'input': (3, 128, 128),
'output': ('car', 'motocycle', 'bus', 'truck', 'bike'),
'loader': get_vehicle_dataset,
},
]

The code above limits the image size to 128x128x3 and maps each class folder to its label.

The training images are augmented by the album.Compose pipeline shown above.

From top to bottom, the transforms do the following:

album.GaussNoise(var_limit=(1.0, 20.0), p=0.25) - adds Gaussian noise with a variance between 1.0 and 20.0, with 25% probability.

album.RGBShift(r_shift_limit=15, g_shift_limit=15, b_shift_limit=15, p=0.5) - shifts each of the R, G, and B channels by up to 15, with 50% probability.

album.ColorJitter(p=0.5) - applies color jitter with 50% probability.

album.SmallestMaxSize(max_size=int(1.2*min(resize_size))) - rescales the image so its smaller side is 1.2 times the target size.

album.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.05, rotate_limit=15, p=0.5) - shifts and scales the image by up to 5% and rotates it by up to 15 degrees, with 50% probability.

album.RandomCrop(height=resize_size[0], width=resize_size[1]) - randomly crops the image to the target height and width.

album.HorizontalFlip(p=0.5) - flips the image horizontally with 50% probability.

album.Normalize(mean=(0.0, 0.0, 0.0), std=(1.0, 1.0, 1.0)) - normalizes the image with the given mean and standard deviation.
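The geometry of the test-time pipeline (SmallestMaxSize followed by CenterCrop) can be sketched without albumentations. This is a nearest-neighbour approximation for illustration, not the library's exact interpolation:

```python
import numpy as np

def smallest_max_size(img, max_size):
    """Rescale so the smaller side equals max_size (nearest-neighbour)."""
    h, w = img.shape[:2]
    scale = max_size / min(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return img[rows][:, cols]

def center_crop(img, size):
    """Crop a size x size window from the centre."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

img = np.zeros((300, 480, 3), dtype=np.uint8)            # dummy landscape image
out = center_crop(smallest_max_size(img, int(1.2 * 128)), 128)
print(out.shape)   # (128, 128, 3)
```

Whatever the input aspect ratio, the output is always the 128x128x3 shape the model expects.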

Writing the model

This is vehicle.py, the model file for vehicle recognition, adapted from ai85cdnet:

from torch import nn

import ai8x


class vehicleNet(nn.Module):
    """
    Define CNN model for image classification.
    """
    def __init__(self, num_classes=5, num_channels=3, dimensions=(128, 128),
                 fc_inputs=16, bias=False, **kwargs):
        super().__init__()

        # AI85 Limits
        assert dimensions[0] == dimensions[1]  # Only square supported

        # Keep track of image dimensions so one constructor works for all image sizes
        dim = dimensions[0]

        self.conv1 = ai8x.FusedConv2dReLU(num_channels, 16, 3,
                                          padding=1, bias=bias, **kwargs)
        # padding 1 -> no change in dimensions -> 16x128x128

        pad = 2 if dim == 28 else 1
        self.conv2 = ai8x.FusedMaxPoolConv2dReLU(16, 32, 3, pool_size=2, pool_stride=2,
                                                 padding=pad, bias=bias, **kwargs)
        dim //= 2  # pooling -> 32x64x64
        if pad == 2:
            dim += 2  # padding 2 -> 32x32x32

        self.conv3 = ai8x.FusedMaxPoolConv2dReLU(32, 64, 3, pool_size=2, pool_stride=2, padding=1,
                                                 bias=bias, **kwargs)
        dim //= 2  # pooling -> 64x32x32

        self.conv4 = ai8x.FusedMaxPoolConv2dReLU(64, 32, 3, pool_size=2, pool_stride=2, padding=1,
                                                 bias=bias, **kwargs)
        dim //= 2  # pooling -> 32x16x16

        self.conv5 = ai8x.FusedMaxPoolConv2dReLU(32, 32, 3, pool_size=2, pool_stride=2, padding=1,
                                                 bias=bias, **kwargs)
        dim //= 2  # pooling -> 32x8x8

        self.conv6 = ai8x.FusedConv2dReLU(32, fc_inputs, 3, padding=1, bias=bias, **kwargs)

        self.fc = ai8x.Linear(fc_inputs*dim*dim, num_classes, bias=True, wide=True, **kwargs)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')

    def forward(self, x):  # pylint: disable=arguments-differ
        """Forward prop"""
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv5(x)
        x = self.conv6(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)

        return x


def vehiclenet(pretrained=False, **kwargs):
    """
    Constructs a vehicleNet model.
    """
    assert not pretrained
    return vehicleNet(**kwargs)


models = [
    {
        'name': 'vehiclenet',
        'min_input': 1,
        'dim': 2,
    },
]

The first layer of the model performs a convolution for initial feature extraction. The next four layers each combine max pooling with a convolution to extract further features while reducing the number of trainable parameters and the dimensionality of the convolutional feature maps. The sixth layer is another convolution, and the seventh is a fully connected layer that aggregates the features into the classifier output.
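The spatial-dimension bookkeeping in the constructor can be traced in a few lines (a sketch mirroring the `dim //= 2` updates in the model code; `trace_dims` is a hypothetical helper, not part of the project):

```python
def trace_dims(dim=128, fc_inputs=16, num_pool_layers=4):
    """Mirror the dim tracking in vehicleNet.__init__ for 128x128 inputs."""
    # conv1: padding 1, no pooling -> spatial size unchanged (128)
    for _ in range(num_pool_layers):   # conv2..conv5 each halve the feature map
        dim //= 2
    # conv6: padding 1, no pooling -> unchanged;
    # the fully connected layer then sees fc_inputs * dim * dim input values
    return dim, fc_inputs * dim * dim

print(trace_dims())   # (8, 1024)
```

So for a 128x128 input the final feature map is 16x8x8, giving the fc layer 1024 inputs.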

After many rounds of hyperparameter tuning, the best results came from 150 epochs, a learning rate (LR) of 0.002, and a gamma of 0.6 (gamma here refers to a parameter of Batch Normalization).

Training vehiclenet this way yields a recognition rate of 83.779%.

Writing the training script

python train.py --epochs 150 --optimizer Adam --lr 0.002 --wd 0 --deterministic --compress policies/schedule-vehicle.yaml --qat-policy policies/qat_policy_vehicle.yaml --model vehiclenet --dataset vehicle --confusion --param-hist --embedding --device MAX78000 "$@"

--deterministic: fixes the random seed so training results are reproducible

--epochs 150: number of training epochs

--optimizer Adam: the optimizer

--lr 0.002: the learning rate

--wd 0: weight decay

--model vehiclenet: model selection; models are defined in the models folder

--dataset vehicle: dataset name, as defined earlier in the dataset loader file

--device MAX78000: the target microcontroller

--qat-policy policies/qat_policy_vehicle.yaml: the quantization-aware-training policy

Start training

(Screenshot of the training run)

After the script finishes, the following files are generated:

(Screenshot of the generated checkpoint files)

Copy these files to the ai8x-synthesis/trained directory and run the quantization script:

python quantize.py trained/vehicleqat_best.pth.tar trained/vehicleqat_best-q.pth.tar --device MAX78000 -v

When it completes, the following file is generated:

(Screenshot of the quantized checkpoint)

Model evaluation

With the quantized model in hand, create an evaluation script under the ai8x-training/scripts/ directory to measure the recognition ability of the quantized model. The command is:

python train.py --model vehiclenet --dataset vehicle --confusion --evaluate --exp-load-weights-from ../ai8x-synthesis/trained/vehicleqat_best-q.pth.tar --device MAX78000 -8 "$@"

Generating the sample test file

Create a test file under the ai8x-training/tests/ directory, then copy the generated file to the ai8x-synthesis/tests/ directory for later use. The command is:

python train.py --model vehiclenet --dataset vehicle --save-sample 10 --confusion --evaluate --exp-load-weights-from ../ai8x-synthesis/trained/vehicleqat_best-q.pth.tar -8 --device MAX78000 "$@"

This generates the following file:

(Screenshot of the generated sample file)

Writing the YAML file

arch: vehiclenet
dataset: vehicle

# Define layer parameters in order of the layer sequence
layers:
  - pad: 1
    activate: ReLU
    out_offset: 0x1000
    processors: 0x0000000000000007
    data_format: HWC
    operation: Conv2d
    streaming: true
  - max_pool: 2
    pool_stride: 2
    pad: 1
    activate: ReLU
    out_offset: 0x2000
    processors: 0x000ffff000000000
    operation: Conv2d
    streaming: true
  - max_pool: 2
    pool_stride: 2
    pad: 1
    activate: ReLU
    out_offset: 0x0000
    processors: 0x00000000ffffffff
    operation: Conv2d
  - max_pool: 2
    pool_stride: 2
    pad: 1
    activate: ReLU
    out_offset: 0x2000
    processors: 0xffffffffffffffff
    operation: Conv2d
  - max_pool: 2
    pool_stride: 2
    pad: 1
    activate: ReLU
    out_offset: 0x0000
    processors: 0x00000000ffffffff
    operation: Conv2d
  - pad: 1
    activate: ReLU
    out_offset: 0x2000
    processors: 0xffffffff00000000
    operation: Conv2d
  - op: mlp
    flatten: true
    out_offset: 0x1000
    output_width: 32
    processors: 0x000000000000ffff
    activate: None


For guidance on writing the YAML file, see https://github.com/MaximIntegratedAI/MaximAI_Documentation/blob/master/Guides/YAML%20Quickstart.md
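One useful sanity check when adapting the YAML: each layer's `processors` mask should enable exactly as many processors as the layer has input channels. A quick sketch (masks copied from the YAML above; the expected channel counts come from the model: 3 RGB inputs, then 16/32/64/32/32 from the conv stack, and 16 fc_inputs for the mlp):

```python
layer_masks = [
    0x0000000000000007,  # layer 0: 3 input channels (RGB)
    0x000ffff000000000,  # layer 1: 16
    0x00000000ffffffff,  # layer 2: 32
    0xffffffffffffffff,  # layer 3: 64
    0x00000000ffffffff,  # layer 4: 32
    0xffffffff00000000,  # layer 5: 32
    0x000000000000ffff,  # mlp:     16 (fc_inputs)
]
in_channels = [3, 16, 32, 64, 32, 32, 16]

for i, (mask, ch) in enumerate(zip(layer_masks, in_channels)):
    # popcount of the 64-bit mask = number of enabled processors
    assert bin(mask).count('1') == ch, f"layer {i}: mask/channel mismatch"
print("all processor masks match their input channel counts")
```

A mismatch here is a common source of ai8xize.py errors when reusing a YAML from another network.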

Run the command that synthesizes the C code:

python ai8xize.py --test-dir $TARGET --prefix vehicle_demo --checkpoint-file trained/vehicle_5qat_best-q.pth.tar --config-file networks/vehicle.yaml --fifo --softmax $COMMON_ARGS "$@"

This finally yields a VS Code project.

(Screenshot of the generated project)

Explanation of key code in the final project

MXC_ICC_Enable(MXC_ICC0);

/* Switch to 100 MHz clock */
MXC_SYS_Clock_Select(MXC_SYS_CLOCK_IPO);
SystemCoreClockUpdate();

/* Enable peripheral, enable CNN interrupt, turn on CNN clock */
/* CNN clock: 50 MHz div 1 */
cnn_enable(MXC_S_GCR_PCLKDIV_CNNCLKSEL_PCLK, MXC_S_GCR_PCLKDIV_CNNCLKDIV_DIV1);

/* Configure P2.5, turn on the CNN Boost */
cnn_boost_enable(MXC_GPIO2, MXC_GPIO_PIN_5);

/* Bring CNN state machine into consistent state */
cnn_init();
/* Load CNN kernels */
cnn_load_weights();
/* Load CNN bias */
cnn_load_bias();
/* Configure CNN state machine */
cnn_configure();

This sets up the system clock, then enables, initializes, and configures the CNN accelerator.

void TFT_Print(char *str, int x, int y, int font, int length)
{
    // fonts id
    text_t text;
    text.data = str;
    text.len = length;
    MXC_TFT_PrintFont(x, y, font, &text, NULL);
}

TFT display function: str is the address of the text to display, x and y are the horizontal and vertical coordinates on the TFT, font selects the font, and length is the length of the data.

mxc_gpio_cfg_t tft_reset_pin = {MXC_GPIO0, MXC_GPIO_PIN_19, MXC_GPIO_FUNC_OUT, MXC_GPIO_PAD_NONE, MXC_GPIO_VSSEL_VDDIOH};
mxc_gpio_cfg_t tft_blen_pin = {MXC_GPIO0, MXC_GPIO_PIN_9, MXC_GPIO_FUNC_OUT, MXC_GPIO_PAD_NONE, MXC_GPIO_VSSEL_VDDIOH};
MXC_TFT_Init(MXC_SPI0, 1, &tft_reset_pin, &tft_blen_pin);
MXC_TFT_SetRotation(ROTATE_270);
MXC_TFT_ShowImage(0, 0, image_bitmap_1);
MXC_TFT_SetForeGroundColor(WHITE); // set chars to white

P0_19 is configured as the TFT reset pin and P0_9 as the TFT backlight (blen) pin. MXC_TFT_Init initializes the SPI bus and configures the TFT's D/C and CS pins. The three calls that follow rotate the display by 270°, show an image, and set the foreground color.

printf("Classification results:\n");
int c=0,f=0;
for (i = 0; i < CNN_NUM_OUTPUTS; i++) {
digs = (1000 * ml_softmax[i] + 0x4000) >> 15;
tens = digs % 10;
digs = digs / 10;
result[i] = digs;
printf("[%7d] -> Class %d %8s: %d.%d%%\r\n", ml_data[i], i, classes[i], result[i],
tens);
TFT_Print(buff, 150, 20 +i*20, font_1,
snprintf(buff, sizeof(buff), "%s (%d.%d%%\r)", classes[i], result[i],tens)); //是为了在TFT屏幕的(150,20)的位置纵坐标每隔20显示出分类结果
if(digs>50)
{ c++;
f=i;
}
printf("\n");
} //这段语句是为了判断是否存在分类结果的置信度是高于50%的
if(c<1)
{
printf("the result is Unknown");
TFT_Print(buff, 55, 160, font_2, snprintf(buff, sizeof(buff), "the result is Unknown "));
TFT_Print(buff, 55, 175, font_2, snprintf(buff, sizeof(buff), " "));
} //如果没有一个种类的结果是高于50%的则在串口和TFT屏都输出“the result is Unknown”
//TFT屏输出的语句为“the result is Unknown ”和“ ”的原因是若这行之前如果显示过其他语句,由于the result is Unknown的长度不够覆盖不了之前的语句,会出现残留的字母,故用空格来增大长度以覆盖之前的语句,” ”的原因也是如此。

else
{
printf("the result is %8s",classes[f]);
printf("\n");
TFT_Print(buff, 55, 160, font_2, snprintf(buff, sizeof(buff), "the result is %8s ",classes[f]));
TFT_Print(buff, 55, 175, font_2, snprintf(buff, sizeof(buff), " "));
if(f==3)//当识别结果为卡车时执行以下语句
{
for (i=0;i<4;i++)
{
MXC_Delay(1000000);
LED_Toggle(LED1);//LED灯每隔一秒闪一下
}
printf("Trucks have right of way, please yield!!!");//串口输出该语句,表示警示
TFT_Print(buff, 55, 160, font_2, snprintf(buff, sizeof(buff), "Trucks have right of way,"));//TFT屏显示以下两行语句
TFT_Print(buff, 55, 175, font_2, snprintf(buff, sizeof(buff), "please yield!!!"));
}
}
printf("\n");

The code above implements the display operations on the TFT and on the PC side.

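The fixed-point percentage conversion (`digs = (1000 * ml_softmax[i] + 0x4000) >> 15`) treats the softmax output as a Q15 value, where 32768 represents 1.0, and adds 0x4000 (half of 2^15) for rounding. A Python sketch of the same arithmetic (`q15_to_percent` is a hypothetical name for illustration):

```python
def q15_to_percent(q15):
    """Convert a Q15 softmax output (0..32768 = 0.0..1.0) to (percent, tenths)."""
    digs = (1000 * q15 + 0x4000) >> 15   # confidence in tenths of a percent, rounded
    return digs // 10, digs % 10

print(q15_to_percent(32768))  # (100, 0) -> printed as 100.0%
print(q15_to_percent(16384))  # (50, 0)  -> printed as 50.0%
```

This is why the firmware compares `digs > 50` after the division by 10: it is checking for a confidence above 50%.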

void capture_process_camera(void)
{
    uint8_t *raw;
    uint32_t imgLen;
    uint32_t w, h;

    int cnt = 0;
    uint8_t r, g, b;
    uint16_t rgb;
    int j = 0;

    uint8_t *data = NULL;
    stream_stat_t *stat;

    camera_start_capture_image();

    // Get the details of the image from the camera driver.
    camera_get_image(&raw, &imgLen, &w, &h);
    printf("W:%d H:%d L:%d \n", w, h, imgLen);

#if defined(TFT_ENABLE) && defined(BOARD_FTHR_REVA)
    // Initialize FTHR TFT for DMA streaming
    MXC_TFT_Stream(TFT_X_START, TFT_Y_START, w, h);
#endif

    // Get image line by line
    for (int row = 0; row < h; row++) {
        // Wait until camera streaming buffer is full
        while ((data = get_camera_stream_buffer()) == NULL) {
            if (camera_is_image_rcv()) {
                break;
            }
        }

        //LED_Toggle(LED2);
#ifdef BOARD_EVKIT_V1
        j = IMAGE_SIZE_X * 2 - 2; // mirror on display
#else
        j = 0;
#endif
        for (int k = 0; k < 4 * w; k += 4) {
            // data format: 0x00bbggrr
            r = data[k];
            g = data[k + 1];
            b = data[k + 2];
            //skip k+3

            // change the range from [0,255] to [-128,127] and store in buffer for CNN
            input_0[cnt++] = ((b << 16) | (g << 8) | r) ^ 0x00808080;
            // convert to RGB565 for display
            rgb = ((r & 0b11111000) << 8) | ((g & 0b11111100) << 3) | (b >> 3);
            data565[j] = (rgb >> 8) & 0xFF;
            data565[j + 1] = rgb & 0xFF;

#ifdef BOARD_EVKIT_V1
            j -= 2; // mirror on display
#else
            j += 2;
#endif
        }
#ifdef TFT_ENABLE

#ifdef BOARD_EVKIT_V1
        MXC_TFT_ShowImageCameraRGB565(TFT_X_START, TFT_Y_START + row, data565, w, 1);
#endif
#ifdef BOARD_FTHR_REVA
        tft_dma_display(TFT_X_START, TFT_Y_START + row, w, 1, (uint32_t *)data565);
#endif

#endif

        //LED_Toggle(LED2);
        // Release stream buffer
        release_camera_stream_buffer();
    }

    //camera_sleep(1);
    stat = get_camera_stream_statistic();

    if (stat->overflow_count > 0) {
        printf("OVERFLOW DISP = %d\n", stat->overflow_count);
        LED_On(LED2); // Turn on red LED if overflow detected
        while (1) {}
    }
}

Camera function

  • camera_start_capture_image(); starts capturing an image.
  • camera_get_image(&raw, &imgLen, &w, &h); fetches the image data and size information, storing the results in raw, imgLen, w, and h.
  • MXC_TFT_Stream(TFT_X_START, TFT_Y_START, w, h); initializes the FTHR TFT for DMA streaming. TFT_X_START and TFT_Y_START are the start coordinates of the camera view; I changed both to 20 so the camera view appears in the top-left corner of the TFT.
  • for (int row = 0; row < h; row++) is a loop that fetches the image line by line.
  • while ((data = get_camera_stream_buffer()) == NULL) waits until the camera streaming buffer is full.
  • if (camera_is_image_rcv()) { break; } exits the loop once the camera has received the image.
  • r = data[k]; g = data[k + 1]; b = data[k + 2]; reads the three RGB channel values from the memory data points to, storing them in r, g, and b.
  • input_0[cnt++] = ((b << 16) | (g << 8) | r) ^ 0x00808080; converts the RGB values and stores them in the input_0 array for the CNN.
  • rgb = ((r & 0b11111000) << 8) | ((g & 0b11111100) << 3) | (b >> 3); converts the RGB values into RGB565 format and stores the result in rgb.
  • data565[j] = (rgb >> 8) & 0xFF; data565[j + 1] = rgb & 0xFF; stores the RGB565 pixel value in the data565 array.
  • MXC_TFT_ShowImageCameraRGB565(TFT_X_START, TFT_Y_START + row, data565, w, 1); displays the RGB565 image line on the TFT.
  • tft_dma_display(TFT_X_START, TFT_Y_START + row, w, 1, (uint32_t *)data565); displays the RGB565 image line on the TFT using DMA.
  • release_camera_stream_buffer(); releases the camera stream buffer.
  • stat = get_camera_stream_statistic(); fetches the camera stream statistics.
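The two per-pixel conversions in the loop can be checked in isolation (a sketch; `pack_for_cnn` and `rgb888_to_rgb565` are hypothetical names, and r, g, b are 8-bit channel values):

```python
def pack_for_cnn(r, g, b):
    """Pack RGB into 0x00bbggrr and flip each channel's sign bit,
    mapping [0, 255] to signed [-128, 127] as the CNN input expects."""
    return ((b << 16) | (g << 8) | r) ^ 0x00808080

def rgb888_to_rgb565(r, g, b):
    """Keep the top 5/6/5 bits of R/G/B for the 16-bit TFT format."""
    return ((r & 0b11111000) << 8) | ((g & 0b11111100) << 3) | (b >> 3)

print(hex(pack_for_cnn(0, 0, 0)))            # 0x808080 (black -> -128 per channel)
print(hex(rgb888_to_rgb565(255, 255, 255)))  # 0xffff (white)
```

XORing with 0x00808080 flips bit 7 of each byte, which is equivalent to subtracting 128 from each unsigned channel when the result is reinterpreted as signed 8-bit.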

5. Peripheral Connections

The display used in this project is a TFT LCD driven by an ILI9341 controller, with a resolution of 320×240, communicating with the MAX78000 over SPI.

(Photo of the TFT display)

The pin connections are:

(Pinout table)

Besides the TFT, the project also uses the MAX78000 board's camera (to capture images for recognition), UART (to send data to the PC), and LED (to indicate status changes).

6. Recognition Results

The output depends on the per-class confidence: if one vehicle class has a confidence above 50%, that class is reported as the result; if no class exceeds 50%, "the result is Unknown" is printed.

Results on the TFT display

Initial screen

(Screenshot)

Motorcycle recognition

(Screenshot)

Truck recognition

When the result is a truck, the LED blinks and the warning shown below is displayed, reminding everyone that a truck has appeared on the road and to yield.

(Screenshot)

Bus recognition

(Screenshot)

Bicycle recognition

(Screenshot)

Car recognition

(Screenshot)

Output on the PC side

(Screenshot)

 

7、遇到的困难及解决办法

         搭建 Max78000 FTHR 板卡的开发环境_max78000开发环境-CSDN博客

  • 随着识别车型种类的增多,识别率开始下降,通过反复调整学习率LR、epoch、gamma的值,得到83.779%的识别率

8. Closing Remarks

I ran into many difficulties while completing this project, and learned a great deal. Before starting, I had no concept of embedded AI at all; now I have gone from a complete beginner in artificial intelligence to someone who at least understands its surface, and I picked up a lot about CNNs along the way. The power of neural networks is remarkable. In the future I plan to study artificial intelligence more deeply, combine it with the content of my own major, and keep up with the tide of the times.

 

 

 

 

Attachments

vehicle_training.zip - the trained model, the dataset-processing code, and the files generated by training

vehicle_demo.rar - the project code for vehicle recognition

Team: just me, forever~~~