MAX78000 Design Contest Report — From Getting Started to Giving Up: Convolution Processing of Images
MAX78000 python machine learning training camera convolution
Tags
Funpack activity
Image processing
MAX78000
Convolution
aramy
Updated 2023-01-31

MAX78000: a small module (66 × 23 mm) featuring the MAX78000, a micro VGA camera, a digital microphone, stereo audio I/O, a microSD card slot, 1 MB of QSPI RAM, an SWD debugger/programmer USB port, and a lithium-polymer battery charger. It also carries two user RGB LEDs, two user buttons, and expansion connectors compatible with the Adafruit Feather form factor. A separate JTAG connector can be used to program and debug the RISC-V core.

Getting started with the board: out of the box, it ships with a keyword-spotting demo. It recognizes the English words "Zero" through "Nine", plus "Go", "Stop", "Left", "Right", "Up", "Down", "On", and "Off". When it detects the word "Go", the demo enters digit-recognition mode, in which it blinks the LED the number of times the speaker says; that is, after you say "Six", the LED blinks six times. "Stop" returns it to normal mode. Status information is printed over the serial port.

Installing the tools. The vendor's toolchain is Eclipse-based; the online installation totals more than 4 GB, and many of the servers are overseas, so a stable network connection is needed. It includes MinGW, the GCC toolchains for the ARM and RISC-V processors, OpenOCD, and more. According to the vendor's documentation, VSCODE can also be used as the development environment. The installation directory contains the official examples, and they are fairly rich — from an introductory hello world through peripheral examples to CNN machine-learning examples. To run an official example, two changes are needed:
1. In the example's Makefile, change the board selection from "BOARD ?= EvKit_V1" to "BOARD ?= FTHR_RevA"; the board on hand is the FTHR_RevA model.

2. Adjust the Build command setting under C/C++ Build.
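When many examples need the same Makefile fix from step 1, it can be scripted. A minimal sketch, assuming the default `BOARD ?= ...` line sits at the start of a line as in the official examples (the function name and paths are illustrative):

```python
import re

def set_board(makefile_text: str, board: str = "FTHR_RevA") -> str:
    """Rewrite the 'BOARD ?= ...' assignment to select the given board."""
    return re.sub(r"^BOARD \?=.*$", f"BOARD ?= {board}",
                  makefile_text, flags=re.MULTILINE)

# Example: the default line shipped with an official example
print(set_board("BOARD ?= EvKit_V1"))  # BOARD ?= FTHR_RevA
```

Applied over each example's Makefile, this saves editing them one by one.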

Setting up the machine-learning environment. Following the documentation at https://github.com/MaximIntegratedAI, create a new Python environment with Anaconda (note that the environment should use Python 3.8.2) and install the required packages, paying close attention to the version numbers:

numpy>=1.22,<1.23
PyYAML>=5.1.1
scipy>=1.3.0
librosa>=0.7.2
Pillow>=7
shap>=0.34.0
tk>=0.1.0
torch==1.8.1
torchaudio==0.8.1
torchvision==0.9.1
tensorboard>=2.9.0,<2.10.0
protobuf>=3.20.1,<4.0
numba<0.50.0
opencv-python>=4.4.0
pytsmod>=0.3.3
h5py>=3.7.0
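The version pins above matter, so it helps to understand how a specifier like `numpy>=1.22,<1.23` is read: each comma-separated clause must hold. A small pure-Python sketch of that logic (illustrative only — pip and the `packaging` library implement the real, far more complete rules):

```python
import re

OPS = {"==": lambda a, b: a == b, ">=": lambda a, b: a >= b,
       "<=": lambda a, b: a <= b, ">": lambda a, b: a > b,
       "<": lambda a, b: a < b}

def version_tuple(v: str):
    """Turn '1.22.4' into (1, 22, 4) for tuple comparison."""
    return tuple(int(x) for x in v.split("."))

def satisfies(installed: str, spec: str) -> bool:
    """Check an installed version against a pin like 'numpy>=1.22,<1.23'."""
    rest = re.match(r"[A-Za-z0-9_.\-]+(.*)", spec).group(1)
    for part in filter(None, rest.split(",")):
        op, ver = re.match(r"(==|>=|<=|>|<)(.+)", part).groups()
        if not OPS[op](version_tuple(installed), version_tuple(ver)):
            return False
    return True

print(satisfies("1.22.4", "numpy>=1.22,<1.23"))  # True
print(satisfies("1.23.0", "numpy>=1.22,<1.23"))  # False
```

This makes clear why, e.g., torch must be exactly 1.8.1 while numpy may be any 1.22.x release.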

Following the article 《美信Maxim78000Evaluation Kit AI开发环境》 found online, I set up the Python environment under WIN10 and used ai8x-training to attempt a training run.


From getting started to giving up. When I received the board, my initial idea was to build a garbage-classification project around the onboard camera: capture images of waste items, classify them with a machine-learning model, and then drive a GPIO to simulate opening the right trash-bin lid. There are plenty of resources for this online, so I downloaded several gigabytes of garbage images as training data. Then I hit two problems. First, I never figured out the structure of the official face-recognition example: after grabbing frames from the camera, how is the machine-learning model actually invoked, and how are the images used for classification? Second, and more fatal, I have no GPU, and training on the CPU is far too slow — even the official example took more than 7 hours to train, which left me too few chances for trial and error.
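For intuition about what the CNN layers in those examples compute on image data, the core operation is a 2D convolution. A minimal pure-Python sketch — valid mode, no padding or stride, single channel; the function name and toy inputs are illustrative, not the accelerator's actual implementation:

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a 2D list 'image' with 'kernel'."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):          # slide the kernel over every
        row = []
        for x in range(iw - kw + 1):      # position where it fully fits
            acc = sum(image[y + j][x + i] * kernel[j][i]
                      for j in range(kh) for i in range(kw))
            row.append(acc)
        out.append(row)
    return out

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
k = [[1, 0], [0, 1]]  # 2x2 diagonal kernel
print(conv2d(img, k))  # [[7, 9, 11], [15, 17, 19], [23, 25, 27]]
```

A 2x2 kernel over a 4x4 image yields a 3x3 output; a CNN stacks many such filtered maps, which is exactly the workload the MAX78000's accelerator offloads from the CPU.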

2022-11-28 17:32:43,528 - Log file for this run: E:\MAX78000\ai8x-training\logs\2022.11.28-173243\2022.11.28-173243.log
2022-11-28 17:32:43,567 - Optimizer Type: <class 'torch.optim.sgd.SGD'>
2022-11-28 17:32:43,568 - Optimizer Args: {'lr': 0.1, 'momentum': 0.9, 'dampening': 0, 'weight_decay': 0.0001, 'nesterov': False}
2022-11-28 17:32:44,159 - Dataset sizes:
	training=54000
	validation=6000
	test=10000
2022-11-28 17:32:44,160 - Reading compression schedule from: policies/schedule.yaml
2022-11-28 17:32:44,163 - 

2022-11-28 17:32:44,164 - Training epoch: 54000 samples (256 per mini-batch)
2022-11-28 17:32:54,596 - Epoch: [0][   10/  211]    Overall Loss 2.298564    Objective Loss 2.298564                                        LR 0.100000    Time 1.043111    
2022-11-28 17:32:58,981 - Epoch: [0][   20/  211]    Overall Loss 2.265719    Objective Loss 2.265719                                        LR 0.100000    Time 0.739323    
2022-11-28 17:33:03,232 - Epoch: [0][   30/  211]    Overall Loss 2.199830    Objective Loss 2.199830                                        LR 0.100000    Time 0.634570    
2022-11-28 17:33:07,517 - Epoch: [0][   40/  211]    Overall Loss 2.076009    Objective Loss 2.076009                                        LR 0.100000    Time 0.582991    
2022-11-28 17:33:11,869 - Epoch: [0][   50/  211]    Overall Loss 1.934813    Objective Loss 1.934813                                        LR 0.100000    Time 0.553440    
2022-11-28 17:33:16,155 - Epoch: [0][   60/  211]    Overall Loss 1.809629    Objective Loss 1.809629                                        LR 0.100000    Time 0.532626    
2022-11-28 17:33:20,413 - Epoch: [0][   70/  211]    Overall Loss 1.684102    Objective Loss 1.684102                                        LR 0.100000    Time 0.517360    
2022-11-28 17:33:24,715 - Epoch: [0][   80/  211]    Overall Loss 1.568398    Objective Loss 1.568398                                        LR 0.100000    Time 0.506446    
2022-11-28 17:33:28,997 - Epoch: [0][   90/  211]    Overall Loss 1.472792    Objective Loss 1.472792                                        LR 0.100000    Time 0.497747    
2022-11-28 17:33:33,556 - Epoch: [0][  100/  211]    Overall Loss 1.384406    Objective Loss 1.384406                                        LR 0.100000    Time 0.493550    
2022-11-28 17:33:38,080 - Epoch: [0][  110/  211]    Overall Loss 1.313016    Objective Loss 1.313016                                        LR 0.100000    Time 0.489809    
2022-11-28 17:33:42,663 - Epoch: [0][  120/  211]    Overall Loss 1.250540    Objective Loss 1.250540                                        LR 0.100000    Time 0.487172    
2022-11-28 17:33:46,950 - Epoch: [0][  130/  211]    Overall Loss 1.191245    Objective Loss 1.191245                                        LR 0.100000    Time 0.482671    
2022-11-28 17:33:51,245 - Epoch: [0][  140/  211]    Overall Loss 1.133978    Objective Loss 1.133978                                        LR 0.100000    Time 0.478862    
2022-11-28 17:33:55,524 - Epoch: [0][  150/  211]    Overall Loss 1.083477    Objective Loss 1.083477                                        LR 0.100000    Time 0.475469    
2022-11-28 17:33:59,784 - Epoch: [0][  160/  211]    Overall Loss 1.038986    Objective Loss 1.038986                                        LR 0.100000    Time 0.472368    
2022-11-28 17:34:04,060 - Epoch: [0][  170/  211]    Overall Loss 0.998710    Objective Loss 0.998710                                        LR 0.100000    Time 0.469732    
2022-11-28 17:34:08,315 - Epoch: [0][  180/  211]    Overall Loss 0.961280    Objective Loss 0.961280                                        LR 0.100000    Time 0.467267    
2022-11-28 17:34:12,609 - Epoch: [0][  190/  211]    Overall Loss 0.927787    Objective Loss 0.927787                                        LR 0.100000    Time 0.465272    
2022-11-28 17:34:16,875 - Epoch: [0][  200/  211]    Overall Loss 0.896116    Objective Loss 0.896116                                        LR 0.100000    Time 0.463331    
2022-11-28 17:34:21,197 - Epoch: [0][  210/  211]    Overall Loss 0.866394    Objective Loss 0.866394    Top1 92.187500    Top5 100.000000    LR 0.100000    Time 0.461708    
2022-11-28 17:34:21,598 - Epoch: [0][  211/  211]    Overall Loss 0.863859    Objective Loss 0.863859    Top1 90.927419    Top5 99.798387    LR 0.100000    Time 0.461420    
2022-11-28 17:34:22,150 - --- validate (epoch=0)-----------
2022-11-28 17:34:22,150 - 6000 samples (256 per mini-batch)
2022-11-28 17:34:29,647 - Epoch: [0][   10/   24]    Loss 0.283329    Top1 91.328125    Top5 99.726562    
2022-11-28 17:34:31,309 - Epoch: [0][   20/   24]    Loss 0.281354    Top1 91.406250    Top5 99.687500    
2022-11-28 17:34:31,875 - Epoch: [0][   24/   24]    Loss 0.282949    Top1 91.433333    Top5 99.716667    
2022-11-28 17:34:32,415 - ==> Top1: 91.433    Top5: 99.717    Loss: 0.283

2022-11-28 17:34:32,416 - ==> Confusion:
[[564   0   4   2   1   1   8   1  21   3]
 [  0 665   6   3   2   0   7   3   2   0]
 [  1   0 525  15   5   1   4   9  24   2]
 [  0   1  10 557   0   3   0   2   9   1]
 [  3   2   5   2 503   1  10   8   9  22]
 [  1   2   2  15   1 438  17   3  30   9]
 [  0   1   3   0   5   1 602   0  18   1]
 [  0   4  21  29   2   2   0 547   4  16]
 [  1   1   4   3   3   2  13   1 550   6]
 [  2   3   9  12  15   6   3   7  23 535]]

2022-11-28 17:34:32,422 - ==> Best [Top1: 91.433   Top5: 99.717   Sparsity:0.00   Params: 71148 on epoch: 0]
2022-11-28 17:34:32,422 - Saving checkpoint to: logs\2022.11.28-173243\checkpoint.pth.tar
2022-11-28 17:34:32,483 - 
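The Top1 figure the log reports can be recomputed from the confusion matrix printed above: correct predictions lie on the diagonal, so Top1 = trace / total samples. A quick pure-Python check (matrix copied from the epoch-0 validation log):

```python
confusion = [
    [564,   0,   4,   2,   1,   1,   8,   1,  21,   3],
    [  0, 665,   6,   3,   2,   0,   7,   3,   2,   0],
    [  1,   0, 525,  15,   5,   1,   4,   9,  24,   2],
    [  0,   1,  10, 557,   0,   3,   0,   2,   9,   1],
    [  3,   2,   5,   2, 503,   1,  10,   8,   9,  22],
    [  1,   2,   2,  15,   1, 438,  17,   3,  30,   9],
    [  0,   1,   3,   0,   5,   1, 602,   0,  18,   1],
    [  0,   4,  21,  29,   2,   2,   0, 547,   4,  16],
    [  1,   1,   4,   3,   3,   2,  13,   1, 550,   6],
    [  2,   3,   9,  12,  15,   6,   3,   7,  23, 535],
]
# Diagonal entries are the correctly classified samples per class
correct = sum(confusion[i][i] for i in range(len(confusion)))
total = sum(sum(row) for row in confusion)
print(f"Top1: {100 * correct / total:.3f}")  # Top1: 91.433
```

This matches the logged `==> Top1: 91.433` over the 6000 validation samples; off-diagonal entries show which digit pairs the early model confuses.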

2022-11-28 17:34:32,483 - Training epoch: 54000 samples (256 per mini-batch)
2022-11-28 17:34:42,455 - Epoch: [1][   10/  211]    Overall Loss 0.303115    Objective Loss 0.303115                                        LR 0.100000    Time 0.996934    
2022-11-28 17:34:46,806 - Epoch: [1][   20/  211]    Overall Loss 0.292789    Objective Loss 0.292789                                        LR 0.100000    Time 0.715986    
2022-11-28 17:34:51,059 - Epoch: [1][   30/  211]    Overall Loss 0.288909    Objective Loss 0.288909                                        LR 0.100000    Time 0.619078    
2022-11-28 17:34:55,471 - Epoch: [1][   40/  211]    Overall Loss 0.286541    Objective Loss 0.286541                                        LR 0.100000    Time 0.574614    
2022-11-28 17:34:59,707 - Epoch: [1][   50/  211]    Overall Loss 0.290066    Objective Loss 0.290066                                        LR 0.100000    Time 0.544404    
2022-11-28 17:35:03,996 - Epoch: [1][   60/  211]    Overall Loss 0.282046    Objective Loss 0.282046                                        LR 0.100000    Time 0.525162    
2022-11-28 17:35:08,304 - Epoch: [1][   70/  211]    Overall Loss 0.272592    Objective Loss 0.272592                                        LR 0.100000    Time 0.511675    
2022-11-28 17:35:12,609 - Epoch: [1][   80/  211]    Overall Loss 0.265767    Objective Loss 0.265767                                        LR 0.100000    Time 0.501509    
2022-11-28 17:35:16,864 - Epoch: [1][   90/  211]    Overall Loss 0.259592    Objective Loss 0.259592                                        LR 0.100000    Time 0.493048    
2022-11-28 17:35:21,260 - Epoch: [1][  100/  211]    Overall Loss 0.254881    Objective Loss 0.254881                                        LR 0.100000    Time 0.487706    
2022-11-28 17:35:25,552 - Epoch: [1][  110/  211]    Overall Loss 0.253164    Objective Loss 0.253164                                        LR 0.100000    Time 0.482383    
2022-11-28 17:35:29,769 - Epoch: [1][  120/  211]    Overall Loss 0.249848    Objective Loss 0.249848                                        LR 0.100000    Time 0.477332    
2022-11-28 17:35:33,950 - Epoch: [1][  130/  211]    Overall Loss 0.246969    Objective Loss 0.246969                                        LR 0.100000    Time 0.472767    
2022-11-28 17:35:38,209 - Epoch: [1][  140/  211]    Overall Loss 0.243982    Objective Loss 0.243982                                        LR 0.100000    Time 0.469416    
2022-11-28 17:35:42,414 - Epoch: [1][  150/  211]    Overall Loss 0.240281    Objective Loss 0.240281                                        LR 0.100000    Time 0.466154    
2022-11-28 17:35:46,660 - Epoch: [1][  160/  211]    Overall Loss 0.237530    Objective Loss 0.237530                                        LR 0.100000    Time 0.463554    
2022-11-28 17:35:50,871 - Epoch: [1][  170/  211]    Overall Loss 0.233607    Objective Loss 0.233607                                        LR 0.100000    Time 0.461055    
2022-11-28 17:35:55,097 - Epoch: [1][  180/  211]    Overall Loss 0.231156    Objective Loss 0.231156                                        LR 0.100000    Time 0.458912    
2022-11-28 17:35:59,281 - Epoch: [1][  190/  211]    Overall Loss 0.229752    Objective Loss 0.229752                                        LR 0.100000    Time 0.456779    
2022-11-28 17:36:03,531 - Epoch: [1][  200/  211]    Overall Loss 0.228390    Objective Loss 0.228390                                        LR 0.100000    Time 0.455178    
2022-11-28 17:36:07,763 - Epoch: [1][  210/  211]    Overall Loss 0.226308    Objective Loss 0.226308    Top1 94.140625    Top5 100.000000    LR 0.100000    Time 0.453649    
2022-11-28 17:36:08,165 - Epoch: [1][  211/  211]    Overall Loss 0.226081    Objective Loss 0.226081    Top1 94.758065    Top5 99.395161    LR 0.100000    Time 0.453399    
2022-11-28 17:36:08,708 - --- validate (epoch=1)-----------
2022-11-28 17:36:08,708 - 6000 samples (256 per mini-batch)
2022-11-28 17:36:15,978 - Epoch: [1][   10/   24]    Loss 0.169041    Top1 94.960938    Top5 99.765625    
2022-11-28 17:38:31,130 - Epoch: [1][   20/   24]    Loss 0.173639    Top1 94.765625    Top5 99.785156    
2022-11-28 17:38:31,683 - Epoch: [1][   24/   24]    Loss 0.172894    Top1 94.750000    Top5 99.783333    
2022-11-28 17:38:32,225 - ==> Top1: 94.750    Top5: 99.783    Loss: 0.173

2022-11-28 17:38:32,226 - ==> Confusion:
[[595   0   1   0   0   0   4   0   5   0]
 [  1 666   8   1   4   0   2   5   0   1]
 [  5   0 546   5   2   1   1   9  14   3]
 [  3   0   8 556   0   1   1   3   5   6]
 [  2   0   4   1 528   0   5   2   3  20]
 [  6   0   5   7   1 462   6   2  19  10]
 [  6   0   0   0   2   1 610   0  12   0]
 [  0   1   9   6   3   0   0 588   3  15]
 [  4   0   0   0   5   0   9   1 561   4]
 [  5   0   0   7   6   4   1   7  12 573]]

2022-11-28 17:38:32,228 - ==> Best [Top1: 94.750   Top5: 99.783   Sparsity:0.00   Params: 71148 on epoch: 1]
2022-11-28 17:38:32,229 - Saving checkpoint to: logs\2022.11.28-173243\checkpoint.pth.tar
2022-11-28 17:38:32,239 - 

2022-11-28 17:38:32,240 - Training epoch: 54000 samples (256 per mini-batch)
2022-11-28 17:38:42,156 - Epoch: [2][   10/  211]    Overall Loss 0.162670    Objective Loss 0.162670                                        LR 0.100000    Time 0.991549    
2022-11-28 17:38:46,376 - Epoch: [2][   20/  211]    Overall Loss 0.163434    Objective Loss 0.163434                                        LR 0.100000    Time 0.706760    
2022-11-28 17:38:50,723 - Epoch: [2][   30/  211]    Overall Loss 0.169471    Objective Loss 0.169471                                        LR 0.100000    Time 0.616070    
2022-11-28 17:38:54,955 - Epoch: [2][   40/  211]    Overall Loss 0.171583    Objective Loss 0.171583                                        LR 0.100000    Time 0.567832    
2022-11-28 17:38:59,168 - Epoch: [2][   50/  211]    Overall Loss 0.175466    Objective Loss 0.175466                                        LR 0.100000    Time 0.538540    
2022-11-28 17:39:03,386 - Epoch: [2][   60/  211]    Overall Loss 0.177056    Objective Loss 0.177056                                        LR 0.100000    Time 0.519062    
2022-11-28 17:39:07,682 - Epoch: [2][   70/  211]    Overall Loss 0.175469    Objective Loss 0.175469                                        LR 0.100000    Time 0.506275    
2022-11-28 17:39:11,976 - Epoch: [2][   80/  211]    Overall Loss 0.172752    Objective Loss 0.172752                                        LR 0.100000    Time 0.496648    
2022-11-28 17:39:16,241 - Epoch: [2][   90/  211]    Overall Loss 0.171963    Objective Loss 0.171963                                        LR 0.100000    Time 0.488838    
2022-11-28 17:39:20,490 - Epoch: [2][  100/  211]    Overall Loss 0.171959    Objective Loss 0.171959                                        LR 0.100000    Time 0.482450    
2022-11-28 17:39:24,739 - Epoch: [2][  110/  211]    Overall Loss 0.171803    Objective Loss 0.171803                                        LR 0.100000    Time 0.477206    
2022-11-28 17:39:28,952 - Epoch: [2][  120/  211]    Overall Loss 0.171238    Objective Loss 0.171238                                        LR 0.100000    Time 0.472537    
2022-11-28 17:39:33,163 - Epoch: [2][  130/  211]    Overall Loss 0.171115    Objective Loss 0.171115                                        LR 0.100000    Time 0.468570    
2022-11-28 17:39:37,404 - Epoch: [2][  140/  211]    Overall Loss 0.171511    Objective Loss 0.171511                                        LR 0.100000    Time 0.465384    
2022-11-28 17:39:41,645 - Epoch: [2][  150/  211]    Overall Loss 0.170378    Objective Loss 0.170378                                        LR 0.100000    Time 0.462623    
2022-11-28 17:39:45,879 - Epoch: [2][  160/  211]    Overall Loss 0.169934    Objective Loss 0.169934                                        LR 0.100000    Time 0.460170    
2022-11-28 17:39:50,078 - Epoch: [2][  170/  211]    Overall Loss 0.170061    Objective Loss 0.170061                                        LR 0.100000    Time 0.457800    
2022-11-28 17:39:54,327 - Epoch: [2][  180/  211]    Overall Loss 0.168993    Objective Loss 0.168993                                        LR 0.100000    Time 0.455964    
2022-11-28 17:39:58,554 - Epoch: [2][  190/  211]    Overall Loss 0.167059    Objective Loss 0.167059                                        LR 0.100000    Time 0.454207    
2022-11-28 17:40:02,816 - Epoch: [2][  200/  211]    Overall Loss 0.166115    Objective Loss 0.166115                                        LR 0.100000    Time 0.452805    
2022-11-28 17:40:07,072 - Epoch: [2][  210/  211]    Overall Loss 0.164306    Objective Loss 0.164306    Top1 96.484375    Top5 100.000000    LR 0.100000    Time 0.451507    
2022-11-28 17:40:07,468 - Epoch: [2][  211/  211]    Overall Loss 0.164035    Objective Loss 0.164035    Top1 96.774194    Top5 99.798387    LR 0.100000    Time 0.451244    
2022-11-28 17:40:08,038 - --- validate (epoch=2)-----------
2022-11-28 17:40:08,038 - 6000 samples (256 per mini-batch)
2022-11-28 17:40:15,657 - Epoch: [2][   10/   24]    Loss 0.131910    Top1 96.445312    Top5 99.804688    
2022-11-28 17:40:17,330 - Epoch: [2][   20/   24]    Loss 0.134751    Top1 96.269531    Top5 99.882812    
2022-11-28 17:40:17,890 - Epoch: [2][   24/   24]    Loss 0.134910    Top1 96.166667    Top5 99.900000    
2022-11-28 17:40:18,431 - ==> Top1: 96.167    Top5: 99.900    Loss: 0.135

2022-11-28 17:40:18,431 - ==> Confusion:
[[595   0   3   0   0   1   5   0   1   0]
 [  0 685   2   0   1   0   0   0   0   0]
 [  1   1 560   2   1   4   2   7   6   2]
 [  0   2   7 557   0   8   1   3   5   0]
 [  0   1   4   0 546   1   5   1   0   7]
 [  1   0   3   2   2 503   6   0   1   0]
 [  3   2   0   0   1   3 619   0   3   0]
 [  0   3  10   0   4   3   0 599   2   4]
 [  2   2   1   0   3   8  10   4 549   5]
 [  4   6   1   4  13  12   1   8   9 557]]

2022-11-28 17:40:18,433 - ==> Best [Top1: 96.167   Top5: 99.900   Sparsity:0.00   Params: 71148 on epoch: 2]
2022-11-28 17:40:18,434 - Saving checkpoint to: logs\2022.11.28-173243\checkpoint.pth.tar
2022-11-28 17:40:18,462 - 

2022-11-28 17:40:18,462 - Training epoch: 54000 samples (256 per mini-batch)
2022-11-28 17:40:28,648 - Epoch: [3][   10/  211]    Overall Loss 0.135862    Objective Loss 0.135862                                        LR 0.100000    Time 1.018377    
2022-11-28 17:40:32,899 - Epoch: [3][   20/  211]    Overall Loss 0.138279    Objective Loss 0.138279                                        LR 0.100000    Time 0.721720    
2022-11-28 17:40:37,147 - Epoch: [3][   30/  211]    Overall Loss 0.138205    Objective Loss 0.138205                                        LR 0.100000    Time 0.622735    
2022-11-28 17:40:41,643 - Epoch: [3][   40/  211]    Overall Loss 0.137411    Objective Loss 0.137411                                        LR 0.100000    Time 0.579476    
2022-11-28 17:40:46,032 - Epoch: [3][   50/  211]    Overall Loss 0.141440    Objective Loss 0.141440                                        LR 0.100000    Time 0.551346    
2022-11-28 17:40:50,770 - Epoch: [3][   60/  211]    Overall Loss 0.143205    Objective Loss 0.143205                                        LR 0.100000    Time 0.538427    
2022-11-28 17:40:55,172 - Epoch: [3][   70/  211]    Overall Loss 0.143112    Objective Loss 0.143112                                        LR 0.100000    Time 0.524377    
2022-11-28 17:40:59,533 - Epoch: [3][   80/  211]    Overall Loss 0.142289    Objective Loss 0.142289                                        LR 0.100000    Time 0.513334    
2022-11-28 17:41:03,833 - Epoch: [3][   90/  211]    Overall Loss 0.140776    Objective Loss 0.140776                                        LR 0.100000    Time 0.504058    
2022-11-28 17:41:08,179 - Epoch: [3][  100/  211]    Overall Loss 0.141807    Objective Loss 0.141807                                        LR 0.100000    Time 0.497106    
2022-11-28 17:41:12,483 - Epoch: [3][  110/  211]    Overall Loss 0.144942    Objective Loss 0.144942                                        LR 0.100000    Time 0.491028    
2022-11-28 17:41:16,751 - Epoch: [3][  120/  211]    Overall Loss 0.144196    Objective Loss 0.144196                                        LR 0.100000    Time 0.485681    
2022-11-28 17:41:21,042 - Epoch: [3][  130/  211]    Overall Loss 0.143078    Objective Loss 0.143078                                        LR 0.100000    Time 0.481317    
2022-11-28 17:41:25,327 - Epoch: [3][  140/  211]    Overall Loss 0.140382    Objective Loss 0.140382                                        LR 0.100000    Time 0.477548    
2022-11-28 17:41:29,718 - Epoch: [3][  150/  211]    Overall Loss 0.138152    Objective Loss 0.138152                                        LR 0.100000    Time 0.474987    
2022-11-28 17:41:33,977 - Epoch: [3][  160/  211]    Overall Loss 0.136183    Objective Loss 0.136183                                        LR 0.100000    Time 0.471904    
2022-11-28 17:41:38,333 - Epoch: [3][  170/  211]    Overall Loss 0.134215    Objective Loss 0.134215                                        LR 0.100000    Time 0.469770    
2022-11-28 17:41:42,621 - Epoch: [3][  180/  211]    Overall Loss 0.133231    Objective Loss 0.133231                                        LR 0.100000    Time 0.467486    
2022-11-28 17:41:46,838 - Epoch: [3][  190/  211]    Overall Loss 0.132332    Objective Loss 0.132332                                        LR 0.100000    Time 0.465064    
2022-11-28 17:41:51,067 - Epoch: [3][  200/  211]    Overall Loss 0.132762    Objective Loss 0.132762                                        LR 0.100000    Time 0.462960    
2022-11-28 17:41:55,306 - Epoch: [3][  210/  211]    Overall Loss 0.132196    Objective Loss 0.132196    Top1 96.093750    Top5 100.000000    LR 0.100000    Time 0.461093    
2022-11-28 17:41:55,702 - Epoch: [3][  211/  211]    Overall Loss 0.132407    Objective Loss 0.132407    Top1 95.161290    Top5 99.798387    LR 0.100000    Time 0.460785    
2022-11-28 17:41:56,254 - --- validate (epoch=3)-----------
2022-11-28 17:41:56,254 - 6000 samples (256 per mini-batch)
2022-11-28 17:42:03,501 - Epoch: [3][   10/   24]    Loss 0.116774    Top1 96.484375    Top5 99.882812    
2022-11-28 17:42:05,133 - Epoch: [3][   20/   24]    Loss 0.110772    Top1 96.699219    Top5 99.921875    
2022-11-28 17:42:05,696 - Epoch: [3][   24/   24]    Loss 0.112113    Top1 96.583333    Top5 99.933333    
2022-11-28 17:42:06,243 - ==> Top1: 96.583    Top5: 99.933    Loss: 0.112

2022-11-28 17:42:06,244 - ==> Confusion:
[[596   0   1   0   1   0   1   0   1   5]
 [  0 682   4   1   0   0   0   1   0   0]
 [  1   2 569   2   1   0   0   7   2   2]
 [  0   0   7 559   2   3   0   6   3   3]
 [  0   1   3   0 547   2   0   3   1   8]
 [  3   1   1   1   2 500   3   0   3   4]
 [  4   5   2   0   4  12 598   0   5   1]
 [  0   3   3   3   3   0   0 611   0   2]
 [  5   2   5   0   5   3   2   3 552   7]
 [  2   3   0   3  13   3   0   9   1 581]]

2022-11-28 17:42:06,246 - ==> Best [Top1: 96.583   Top5: 99.933   Sparsity:0.00   Params: 71148 on epoch: 3]
2022-11-28 17:42:06,246 - Saving checkpoint to: logs\2022.11.28-173243\checkpoint.pth.tar
2022-11-28 17:42:06,251 - 

2022-11-28 17:42:06,251 - Training epoch: 54000 samples (256 per mini-batch)
2022-11-28 17:42:16,146 - Epoch: [4][   10/  211]    Overall Loss 0.105769    Objective Loss 0.105769                                        LR 0.100000    Time 0.989454    
2022-11-28 17:42:20,441 - Epoch: [4][   20/  211]    Overall Loss 0.112949    Objective Loss 0.112949                                        LR 0.100000    Time 0.709453    
2022-11-28 17:42:24,725 - Epoch: [4][   30/  211]    Overall Loss 0.117931    Objective Loss 0.117931                                        LR 0.100000    Time 0.615787    
2022-11-28 17:42:28,958 - Epoch: [4][   40/  211]    Overall Loss 0.116802    Objective Loss 0.116802                                        LR 0.100000    Time 0.567657    
2022-11-28 17:42:33,220 - Epoch: [4][   50/  211]    Overall Loss 0.114094    Objective Loss 0.114094                                        LR 0.100000    Time 0.539358    
2022-11-28 17:42:37,430 - Epoch: [4][   60/  211]    Overall Loss 0.111658    Objective Loss 0.111658                                        LR 0.100000    Time 0.519627    
2022-11-28 17:42:41,749 - Epoch: [4][   70/  211]    Overall Loss 0.114227    Objective Loss 0.114227                                        LR 0.100000    Time 0.507087    
2022-11-28 17:42:46,050 - Epoch: [4][   80/  211]    Overall Loss 0.115709    Objective Loss 0.115709                                        LR 0.100000    Time 0.497457    
2022-11-28 17:42:50,377 - Epoch: [4][   90/  211]    Overall Loss 0.114733    Objective Loss 0.114733                                        LR 0.100000    Time 0.490234    
2022-11-28 17:42:54,668 - Epoch: [4][  100/  211]    Overall Loss 0.114030    Objective Loss 0.114030                                        LR 0.100000    Time 0.484116    
2022-11-28 17:42:59,000 - Epoch: [4][  110/  211]    Overall Loss 0.114818    Objective Loss 0.114818                                        LR 0.100000    Time 0.479473    
2022-11-28 17:43:03,287 - Epoch: [4][  120/  211]    Overall Loss 0.115313    Objective Loss 0.115313                                        LR 0.100000    Time 0.475229    
2022-11-28 17:43:07,533 - Epoch: [4][  130/  211]    Overall Loss 0.115641    Objective Loss 0.115641                                        LR 0.100000    Time 0.471332    
2022-11-28 17:43:11,841 - Epoch: [4][  140/  211]    Overall Loss 0.117132    Objective Loss 0.117132                                        LR 0.100000    Time 0.468440    
2022-11-28 17:43:16,106 - Epoch: [4][  150/  211]    Overall Loss 0.116808    Objective Loss 0.116808                                        LR 0.100000    Time 0.465635    
2022-11-28 17:43:20,435 - Epoch: [4][  160/  211]    Overall Loss 0.116517    Objective Loss 0.116517                                        LR 0.100000    Time 0.463585    
2022-11-28 17:43:24,688 - Epoch: [4][  170/  211]    Overall Loss 0.116621    Objective Loss 0.116621                                        LR 0.100000    Time 0.461319    
2022-11-28 17:43:28,973 - Epoch: [4][  180/  211]    Overall Loss 0.115962    Objective Loss 0.115962                                        LR 0.100000    Time 0.459483    
2022-11-28 17:43:33,223 - Epoch: [4][  190/  211]    Overall Loss 0.116119    Objective Loss 0.116119                                        LR 0.100000    Time 0.457660    
2022-11-28 17:43:37,444 - Epoch: [4][  200/  211]    Overall Loss 0.116018    Objective Loss 0.116018                                        LR 0.100000    Time 0.455881    
2022-11-28 17:43:41,738 - Epoch: [4][  210/  211]    Overall Loss 0.115201    Objective Loss 0.115201    Top1 96.875000    Top5 100.000000    LR 0.100000    Time 0.454622    
2022-11-28 17:43:42,138 - Epoch: [4][  211/  211]    Overall Loss 0.115169    Objective Loss 0.115169    Top1 96.572581    Top5 100.000000    LR 0.100000    Time 0.454363    
2022-11-28 17:43:42,682 - --- validate (epoch=4)-----------
2022-11-28 17:43:42,683 - 6000 samples (256 per mini-batch)
2022-11-28 17:43:49,923 - Epoch: [4][   10/   24]    Loss 0.092024    Top1 96.875000    Top5 99.960938    
2022-11-28 17:43:51,585 - Epoch: [4][   20/   24]    Loss 0.092981    Top1 96.953125    Top5 99.980469    
2022-11-28 17:43:52,135 - Epoch: [4][   24/   24]    Loss 0.095692    Top1 96.933333    Top5 99.966667    
2022-11-28 17:43:52,670 - ==> Top1: 96.933    Top5: 99.967    Loss: 0.096

2022-11-28 17:43:52,670 - ==> Confusion:
[[592   0   3   1   0   2   4   0   3   0]
 [  0 681   5   0   0   0   1   1   0   0]
 [  0   2 567   1   0   0   1  11   4   0]
 [  1   0   5 563   0   3   0   8   3   0]
 [  0   1   2   0 535   0   2   7   0  18]
 [  2   1   0   1   0 500   6   0   5   3]
 [  8   1   1   0   1   2 616   0   2   0]
 [  0   4   7   0   1   0   0 610   0   3]
 [  1   1   5   1   1   1   6   1 562   5]
 [  1   1   1   0   6   4   2   6   4 590]]

2022-11-28 17:43:52,672 - ==> Best [Top1: 96.933   Top5: 99.967   Sparsity:0.00   Params: 71148 on epoch: 4]
2022-11-28 17:43:52,673 - Saving checkpoint to: logs\2022.11.28-173243\checkpoint.pth.tar
2022-11-28 17:43:52,677 - 

2022-11-28 17:43:52,677 - Training epoch: 54000 samples (256 per mini-batch)
2022-11-28 17:44:02,563 - Epoch: [5][   10/  211]    Overall Loss 0.105496    Objective Loss 0.105496                                        LR 0.100000    Time 0.988457    
2022-11-28 17:44:06,888 - Epoch: [5][   20/  211]    Overall Loss 0.114271    Objective Loss 0.114271                                        LR 0.100000    Time 0.710450    
2022-11-28 17:44:11,201 - Epoch: [5][   30/  211]    Overall Loss 0.109886    Objective Loss 0.109886                                        LR 0.100000    Time 0.617383    
2022-11-28 17:44:15,425 - Epoch: [5][   40/  211]    Overall Loss 0.109347    Objective Loss 0.109347                                        LR 0.100000    Time 0.568654    
2022-11-28 17:44:19,705 - Epoch: [5][   50/  211]    Overall Loss 0.105478    Objective Loss 0.105478                                        LR 0.100000    Time 0.540515    
2022-11-28 17:44:23,989 - Epoch: [5][   60/  211]    Overall Loss 0.109943    Objective Loss 0.109943                                        LR 0.100000    Time 0.521838    
2022-11-28 17:44:28,274 - Epoch: [5][   70/  211]    Overall Loss 0.115873    Objective Loss 0.115873                                        LR 0.100000    Time 0.508483    
2022-11-28 17:44:32,505 - Epoch: [5][   80/  211]    Overall Loss 0.115535    Objective Loss 0.115535                                        LR 0.100000    Time 0.497806    
2022-11-28 17:44:36,738 - Epoch: [5][   90/  211]    Overall Loss 0.116393    Objective Loss 0.116393                                        LR 0.100000    Time 0.489524    
2022-11-28 17:44:41,033 - Epoch: [5][  100/  211]    Overall Loss 0.117968    Objective Loss 0.117968                                        LR 0.100000    Time 0.483507    
2022-11-28 17:44:45,275 - Epoch: [5][  110/  211]    Overall Loss 0.117010    Objective Loss 0.117010                                        LR 0.100000    Time 0.478103    
2022-11-28 17:44:49,535 - Epoch: [5][  120/  211]    Overall Loss 0.116879    Objective Loss 0.116879                                        LR 0.100000    Time 0.473750    
2022-11-28 17:44:53,861 - Epoch: [5][  130/  211]    Overall Loss 0.116730    Objective Loss 0.116730                                        LR 0.100000    Time 0.470580    
2022-11-28 17:44:58,178 - Epoch: [5][  140/  211]    Overall Loss 0.114929    Objective Loss 0.114929                                        LR 0.100000    Time 0.467799    
2022-11-28 17:45:02,497 - Epoch: [5][  150/  211]    Overall Loss 0.115214    Objective Loss 0.115214                                        LR 0.100000    Time 0.465402    
2022-11-28 17:45:06,807 - Epoch: [5][  160/  211]    Overall Loss 0.114155    Objective Loss 0.114155                                        LR 0.100000    Time 0.463249    
2022-11-28 17:45:11,111 - Epoch: [5][  170/  211]    Overall Loss 0.114295    Objective Loss 0.114295                                        LR 0.100000    Time 0.461308    
2022-11-28 17:45:15,388 - Epoch: [5][  180/  211]    Overall Loss 0.114194    Objective Loss 0.114194                                        LR 0.100000    Time 0.459427    
2022-11-28 17:45:19,597 - Epoch: [5][  190/  211]    Overall Loss 0.112834    Objective Loss 0.112834                                        LR 0.100000    Time 0.457393    
2022-11-28 17:45:23,936 - Epoch: [5][  200/  211]    Overall Loss 0.112101    Objective Loss 0.112101                                        LR 0.100000    Time 0.456220    
2022-11-28 17:45:28,217 - Epoch: [5][  210/  211]    Overall Loss 0.111618    Objective Loss 0.111618    Top1 95.312500    Top5 100.000000    LR 0.100000    Time 0.454869    
2022-11-28 17:45:28,627 - Epoch: [5][  211/  211]    Overall Loss 0.111687    Objective Loss 0.111687    Top1 95.564516    Top5 100.000000    LR 0.100000    Time 0.454652    
2022-11-28 17:45:29,185 - --- validate (epoch=5)-----------
2022-11-28 17:45:29,186 - 6000 samples (256 per mini-batch)
2022-11-28 17:45:36,388 - Epoch: [5][   10/   24]    Loss 0.102077    Top1 97.070312    Top5 99.921875    
2022-11-28 17:45:38,022 - Epoch: [5][   20/   24]    Loss 0.105979    Top1 96.621094    Top5 99.960938    
2022-11-28 17:45:38,589 - Epoch: [5][   24/   24]    Loss 0.101846    Top1 96.750000    Top5 99.966667    
2022-11-28 17:45:39,128 - ==> Top1: 96.750    Top5: 99.967    Loss: 0.102

2022-11-28 17:45:39,128 - ==> Confusion:
[[596   0   1   0   0   0   5   0   1   2]
 [  1 678   2   0   1   1   2   3   0   0]
 [  2   0 562   2   2   0   0   3  13   2]
 [  1   0   4 558   0   9   0   1   6   4]
 [  0   0   2   0 544   1   2   1   3  12]
 [  1   0   1   1   0 508   3   0   2   2]
 [  2   1   0   0   2   2 619   0   5   0]
 [  0   5   9   8   8   4   0 578   0  13]
 [  1   0   0   1   1   1   8   0 567   5]
 [  0   1   0   1  10   3   0   1   4 595]]
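As a sanity check on the logged numbers: the reported Top1 is simply the trace of this confusion matrix (correct predictions, on the diagonal) divided by the total sample count. A minimal NumPy sketch, with the epoch-5 validation matrix copied from the log above:

```python
import numpy as np

# Epoch-5 validation confusion matrix from the log above
# (rows = true digit, columns = predicted digit).
conf = np.array([
    [596,   0,   1,   0,   0,   0,   5,   0,   1,   2],
    [  1, 678,   2,   0,   1,   1,   2,   3,   0,   0],
    [  2,   0, 562,   2,   2,   0,   0,   3,  13,   2],
    [  1,   0,   4, 558,   0,   9,   0,   1,   6,   4],
    [  0,   0,   2,   0, 544,   1,   2,   1,   3,  12],
    [  1,   0,   1,   1,   0, 508,   3,   0,   2,   2],
    [  2,   1,   0,   0,   2,   2, 619,   0,   5,   0],
    [  0,   5,   9,   8,   8,   4,   0, 578,   0,  13],
    [  1,   0,   0,   1,   1,   1,   8,   0, 567,   5],
    [  0,   1,   0,   1,  10,   3,   0,   1,   4, 595],
])

total = conf.sum()                    # 6000 validation samples
top1 = np.trace(conf) / total * 100   # diagonal = correctly classified
print(f"samples={total}  Top1={top1:.3f}")  # Top1=96.750, matching the log
```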


………………………………


2022-11-29 00:24:37,209 - Epoch: [193][  150/  211]    Overall Loss 0.040234    Objective Loss 0.040234                                        LR 0.000100    Time 0.556219    
2022-11-29 00:24:42,415 - Epoch: [193][  160/  211]    Overall Loss 0.040418    Objective Loss 0.040418                                        LR 0.000100    Time 0.553994    
2022-11-29 00:24:47,603 - Epoch: [193][  170/  211]    Overall Loss 0.041247    Objective Loss 0.041247                                        LR 0.000100    Time 0.551924    
2022-11-29 00:24:52,860 - Epoch: [193][  180/  211]    Overall Loss 0.040925    Objective Loss 0.040925                                        LR 0.000100    Time 0.550461    
2022-11-29 00:24:58,021 - Epoch: [193][  190/  211]    Overall Loss 0.041321    Objective Loss 0.041321                                        LR 0.000100    Time 0.548649    
2022-11-29 00:25:03,154 - Epoch: [193][  200/  211]    Overall Loss 0.041075    Objective Loss 0.041075                                        LR 0.000100    Time 0.546878    
2022-11-29 00:25:08,327 - Epoch: [193][  210/  211]    Overall Loss 0.040713    Objective Loss 0.040713    Top1 99.218750    Top5 100.000000    LR 0.000100    Time 0.545461    
2022-11-29 00:25:08,824 - Epoch: [193][  211/  211]    Overall Loss 0.040734    Objective Loss 0.040734    Top1 98.991935    Top5 100.000000    LR 0.000100    Time 0.545229    
2022-11-29 00:25:09,354 - --- validate (epoch=193)-----------
2022-11-29 00:25:09,355 - 6000 samples (256 per mini-batch)
2022-11-29 00:25:17,403 - Epoch: [193][   10/   24]    Loss 0.038751    Top1 99.023438    Top5 100.000000    
2022-11-29 00:25:20,020 - Epoch: [193][   20/   24]    Loss 0.042008    Top1 98.828125    Top5 99.980469    
2022-11-29 00:25:20,917 - Epoch: [193][   24/   24]    Loss 0.044995    Top1 98.800000    Top5 99.983333    
2022-11-29 00:25:21,448 - ==> Top1: 98.800    Top5: 99.983    Loss: 0.045

2022-11-29 00:25:21,449 - ==> Confusion:
[[599   0   1   0   0   0   2   0   1   2]
 [  0 686   0   0   0   0   0   2   0   0]
 [  0   0 579   1   0   0   1   3   1   1]
 [  0   0   3 576   0   0   0   2   1   1]
 [  0   1   0   0 555   0   1   0   0   8]
 [  0   1   0   2   0 508   5   0   2   0]
 [  1   2   0   0   3   0 625   0   0   0]
 [  0   3   3   1   0   0   0 618   0   0]
 [  0   0   1   0   1   1   2   0 578   1]
 [  1   2   0   0   3   1   0   2   2 604]]

2022-11-29 00:25:21,451 - ==> Best [Top1: 99.067   Top5: 100.000   Sparsity:0.00   Params: 71148 on epoch: 173]
2022-11-29 00:25:21,451 - Saving checkpoint to: logs\2022.11.28-173243\qat_checkpoint.pth.tar
2022-11-29 00:25:21,454 - 

2022-11-29 00:25:21,454 - Training epoch: 54000 samples (256 per mini-batch)
2022-11-29 00:25:32,195 - Epoch: [194][   10/  211]    Overall Loss 0.040329    Objective Loss 0.040329                                        LR 0.000100    Time 1.074028    
2022-11-29 00:25:37,387 - Epoch: [194][   20/  211]    Overall Loss 0.040542    Objective Loss 0.040542                                        LR 0.000100    Time 0.796570    
2022-11-29 00:25:42,581 - Epoch: [194][   30/  211]    Overall Loss 0.040781    Objective Loss 0.040781                                        LR 0.000100    Time 0.704151    
2022-11-29 00:25:47,757 - Epoch: [194][   40/  211]    Overall Loss 0.040673    Objective Loss 0.040673                                        LR 0.000100    Time 0.657492    
2022-11-29 00:25:52,961 - Epoch: [194][   50/  211]    Overall Loss 0.042351    Objective Loss 0.042351                                        LR 0.000100    Time 0.630075    
2022-11-29 00:25:58,191 - Epoch: [194][   60/  211]    Overall Loss 0.041670    Objective Loss 0.041670                                        LR 0.000100    Time 0.612213    
2022-11-29 00:26:03,403 - Epoch: [194][   70/  211]    Overall Loss 0.041011    Objective Loss 0.041011                                        LR 0.000100    Time 0.599198    
2022-11-29 00:26:08,571 - Epoch: [194][   80/  211]    Overall Loss 0.041417    Objective Loss 0.041417                                        LR 0.000100    Time 0.588863    
2022-11-29 00:26:13,754 - Epoch: [194][   90/  211]    Overall Loss 0.041389    Objective Loss 0.041389                                        LR 0.000100    Time 0.581024    
2022-11-29 00:26:18,946 - Epoch: [194][  100/  211]    Overall Loss 0.041455    Objective Loss 0.041455                                        LR 0.000100    Time 0.574843    
2022-11-29 00:26:24,120 - Epoch: [194][  110/  211]    Overall Loss 0.041739    Objective Loss 0.041739                                        LR 0.000100    Time 0.569613    
2022-11-29 00:26:29,298 - Epoch: [194][  120/  211]    Overall Loss 0.041621    Objective Loss 0.041621                                        LR 0.000100    Time 0.565297    
2022-11-29 00:26:34,447 - Epoch: [194][  130/  211]    Overall Loss 0.041431    Objective Loss 0.041431                                        LR 0.000100    Time 0.561422    
2022-11-29 00:26:39,634 - Epoch: [194][  140/  211]    Overall Loss 0.041545    Objective Loss 0.041545                                        LR 0.000100    Time 0.558364    
2022-11-29 00:26:44,810 - Epoch: [194][  150/  211]    Overall Loss 0.041820    Objective Loss 0.041820                                        LR 0.000100    Time 0.555634    
2022-11-29 00:26:49,948 - Epoch: [194][  160/  211]    Overall Loss 0.041720    Objective Loss 0.041720                                        LR 0.000100    Time 0.553015    
2022-11-29 00:26:55,135 - Epoch: [194][  170/  211]    Overall Loss 0.041144    Objective Loss 0.041144                                        LR 0.000100    Time 0.550997    
2022-11-29 00:27:00,327 - Epoch: [194][  180/  211]    Overall Loss 0.041093    Objective Loss 0.041093                                        LR 0.000100    Time 0.549231    
2022-11-29 00:27:05,496 - Epoch: [194][  190/  211]    Overall Loss 0.040998    Objective Loss 0.040998                                        LR 0.000100    Time 0.547525    
2022-11-29 00:27:10,677 - Epoch: [194][  200/  211]    Overall Loss 0.040770    Objective Loss 0.040770                                        LR 0.000100    Time 0.546055    
2022-11-29 00:27:15,845 - Epoch: [194][  210/  211]    Overall Loss 0.040823    Objective Loss 0.040823    Top1 98.828125    Top5 100.000000    LR 0.000100    Time 0.544653    
2022-11-29 00:27:16,335 - Epoch: [194][  211/  211]    Overall Loss 0.040913    Objective Loss 0.040913    Top1 98.588710    Top5 100.000000    LR 0.000100    Time 0.544397    
2022-11-29 00:27:16,871 - --- validate (epoch=194)-----------
2022-11-29 00:27:16,871 - 6000 samples (256 per mini-batch)
2022-11-29 00:27:25,000 - Epoch: [194][   10/   24]    Loss 0.046200    Top1 98.789062    Top5 100.000000    
2022-11-29 00:27:27,603 - Epoch: [194][   20/   24]    Loss 0.041708    Top1 98.886719    Top5 100.000000    
2022-11-29 00:27:28,506 - Epoch: [194][   24/   24]    Loss 0.042684    Top1 98.833333    Top5 100.000000    
2022-11-29 00:27:29,043 - ==> Top1: 98.833    Top5: 100.000    Loss: 0.043

2022-11-29 00:27:29,044 - ==> Confusion:
[[601   0   2   0   0   0   1   0   1   0]
 [  0 685   1   0   1   0   0   1   0   0]
 [  0   0 575   0   1   1   1   3   4   1]
 [  0   0   0 578   0   2   0   2   1   0]
 [  0   1   0   0 553   0   2   2   0   7]
 [  0   0   0   2   0 513   1   0   2   0]
 [  0   1   0   0   1   1 626   0   2   0]
 [  0   1   1   2   1   1   0 618   0   1]
 [  2   0   0   2   2   1   1   0 576   0]
 [  1   1   0   0   4   1   0   1   2 605]]

2022-11-29 00:27:29,046 - ==> Best [Top1: 99.067   Top5: 100.000   Sparsity:0.00   Params: 71148 on epoch: 173]
2022-11-29 00:27:29,046 - Saving checkpoint to: logs\2022.11.28-173243\qat_checkpoint.pth.tar
2022-11-29 00:27:29,050 - 

2022-11-29 00:27:29,050 - Training epoch: 54000 samples (256 per mini-batch)
2022-11-29 00:27:39,683 - Epoch: [195][   10/  211]    Overall Loss 0.041090    Objective Loss 0.041090                                        LR 0.000100    Time 1.063157    
2022-11-29 00:27:44,893 - Epoch: [195][   20/  211]    Overall Loss 0.040325    Objective Loss 0.040325                                        LR 0.000100    Time 0.792032    
2022-11-29 00:27:50,085 - Epoch: [195][   30/  211]    Overall Loss 0.043324    Objective Loss 0.043324                                        LR 0.000100    Time 0.701092    
2022-11-29 00:27:55,267 - Epoch: [195][   40/  211]    Overall Loss 0.042096    Objective Loss 0.042096                                        LR 0.000100    Time 0.655373    
2022-11-29 00:28:00,466 - Epoch: [195][   50/  211]    Overall Loss 0.041351    Objective Loss 0.041351                                        LR 0.000100    Time 0.628260    
2022-11-29 00:28:05,651 - Epoch: [195][   60/  211]    Overall Loss 0.041290    Objective Loss 0.041290                                        LR 0.000100    Time 0.609952    
2022-11-29 00:28:10,815 - Epoch: [195][   70/  211]    Overall Loss 0.039647    Objective Loss 0.039647                                        LR 0.000100    Time 0.596576    
2022-11-29 00:28:15,997 - Epoch: [195][   80/  211]    Overall Loss 0.039195    Objective Loss 0.039195                                        LR 0.000100    Time 0.586781    
2022-11-29 00:28:21,222 - Epoch: [195][   90/  211]    Overall Loss 0.039482    Objective Loss 0.039482                                        LR 0.000100    Time 0.579628    
2022-11-29 00:28:26,407 - Epoch: [195][  100/  211]    Overall Loss 0.040421    Objective Loss 0.040421                                        LR 0.000100    Time 0.573506    
2022-11-29 00:28:31,543 - Epoch: [195][  110/  211]    Overall Loss 0.040668    Objective Loss 0.040668                                        LR 0.000100    Time 0.568045    
2022-11-29 00:28:36,736 - Epoch: [195][  120/  211]    Overall Loss 0.040550    Objective Loss 0.040550                                        LR 0.000100    Time 0.563984    
2022-11-29 00:28:41,937 - Epoch: [195][  130/  211]    Overall Loss 0.040587    Objective Loss 0.040587                                        LR 0.000100    Time 0.560609    
2022-11-29 00:28:47,102 - Epoch: [195][  140/  211]    Overall Loss 0.040201    Objective Loss 0.040201                                        LR 0.000100    Time 0.557445    
2022-11-29 00:28:52,242 - Epoch: [195][  150/  211]    Overall Loss 0.039808    Objective Loss 0.039808                                        LR 0.000100    Time 0.554537    
2022-11-29 00:28:57,389 - Epoch: [195][  160/  211]    Overall Loss 0.040046    Objective Loss 0.040046                                        LR 0.000100    Time 0.552036    
2022-11-29 00:29:02,558 - Epoch: [195][  170/  211]    Overall Loss 0.039700    Objective Loss 0.039700                                        LR 0.000100    Time 0.549958    
2022-11-29 00:29:07,677 - Epoch: [195][  180/  211]    Overall Loss 0.039700    Objective Loss 0.039700                                        LR 0.000100    Time 0.547835    
2022-11-29 00:29:12,864 - Epoch: [195][  190/  211]    Overall Loss 0.039723    Objective Loss 0.039723                                        LR 0.000100    Time 0.546297    
2022-11-29 00:29:18,063 - Epoch: [195][  200/  211]    Overall Loss 0.040026    Objective Loss 0.040026                                        LR 0.000100    Time 0.544972    
2022-11-29 00:29:23,257 - Epoch: [195][  210/  211]    Overall Loss 0.039709    Objective Loss 0.039709    Top1 99.218750    Top5 100.000000    LR 0.000100    Time 0.543755    
2022-11-29 00:29:23,743 - Epoch: [195][  211/  211]    Overall Loss 0.039742    Objective Loss 0.039742    Top1 99.193548    Top5 100.000000    LR 0.000100    Time 0.543475    
2022-11-29 00:29:24,273 - --- validate (epoch=195)-----------
2022-11-29 00:29:24,273 - 6000 samples (256 per mini-batch)
2022-11-29 00:29:32,263 - Epoch: [195][   10/   24]    Loss 0.032679    Top1 99.335938    Top5 100.000000    
2022-11-29 00:29:34,916 - Epoch: [195][   20/   24]    Loss 0.035417    Top1 99.199219    Top5 100.000000    
2022-11-29 00:29:35,809 - Epoch: [195][   24/   24]    Loss 0.038478    Top1 99.066667    Top5 100.000000    
2022-11-29 00:29:36,337 - ==> Top1: 99.067    Top5: 100.000    Loss: 0.038

2022-11-29 00:29:36,337 - ==> Confusion:
[[600   0   1   1   0   0   1   0   0   2]
 [  0 684   2   0   0   0   0   2   0   0]
 [  0   0 582   1   0   0   0   1   1   1]
 [  0   0   1 578   0   1   0   2   0   1]
 [  0   2   0   0 557   0   0   1   0   5]
 [  1   1   0   2   1 511   2   0   0   0]
 [  0   0   0   0   1   0 630   0   0   0]
 [  0   2   1   0   0   0   0 622   0   0]
 [  0   0   1   3   0   2   0   1 577   0]
 [  0   2   1   0   3   1   0   2   3 603]]
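This epoch-195 matrix is the one behind the run's best Top1 of 99.067. Dividing each diagonal entry by its row sum gives per-class accuracy, which shows at a glance which digits the model still confuses. A small NumPy sketch (just a check on the log data, not part of the ai8x-training tooling):

```python
import numpy as np

# Epoch-195 validation confusion matrix from the log above.
conf = np.array([
    [600,   0,   1,   1,   0,   0,   1,   0,   0,   2],
    [  0, 684,   2,   0,   0,   0,   0,   2,   0,   0],
    [  0,   0, 582,   1,   0,   0,   0,   1,   1,   1],
    [  0,   0,   1, 578,   0,   1,   0,   2,   0,   1],
    [  0,   2,   0,   0, 557,   0,   0,   1,   0,   5],
    [  1,   1,   0,   2,   1, 511,   2,   0,   0,   0],
    [  0,   0,   0,   0,   1,   0, 630,   0,   0,   0],
    [  0,   2,   1,   0,   0,   0,   0, 622,   0,   0],
    [  0,   0,   1,   3,   0,   2,   0,   1, 577,   0],
    [  0,   2,   1,   0,   3,   1,   0,   2,   3, 603],
])

per_class = np.diag(conf) / conf.sum(axis=1) * 100  # accuracy per digit
overall = np.trace(conf) / conf.sum() * 100
print(f"overall Top1 = {overall:.3f}")              # 99.067, matching "Best"
print("weakest digit:", per_class.argmin(), f"({per_class.min():.2f}%)")
```

Here the weakest class is the digit 9, consistent with the off-diagonal mass in its row (mostly confused with 4 and 8).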

2022-11-29 00:29:36,339 - ==> Best [Top1: 99.067   Top5: 100.000   Sparsity:0.00   Params: 71148 on epoch: 195]
2022-11-29 00:29:36,340 - Saving checkpoint to: logs\2022.11.28-173243\qat_checkpoint.pth.tar
2022-11-29 00:29:36,343 - 

2022-11-29 00:29:36,343 - Training epoch: 54000 samples (256 per mini-batch)
2022-11-29 00:29:46,980 - Epoch: [196][   10/  211]    Overall Loss 0.043070    Objective Loss 0.043070                                        LR 0.000100    Time 1.063556    
2022-11-29 00:29:52,160 - Epoch: [196][   20/  211]    Overall Loss 0.040096    Objective Loss 0.040096                                        LR 0.000100    Time 0.790786    
2022-11-29 00:29:57,371 - Epoch: [196][   30/  211]    Overall Loss 0.040544    Objective Loss 0.040544                                        LR 0.000100    Time 0.700893    
2022-11-29 00:30:02,562 - Epoch: [196][   40/  211]    Overall Loss 0.042399    Objective Loss 0.042399                                        LR 0.000100    Time 0.655447    
2022-11-29 00:30:07,759 - Epoch: [196][   50/  211]    Overall Loss 0.042348    Objective Loss 0.042348                                        LR 0.000100    Time 0.628280    
2022-11-29 00:30:12,935 - Epoch: [196][   60/  211]    Overall Loss 0.041876    Objective Loss 0.041876                                        LR 0.000100    Time 0.609836    
2022-11-29 00:30:18,117 - Epoch: [196][   70/  211]    Overall Loss 0.041448    Objective Loss 0.041448                                        LR 0.000100    Time 0.596733    
2022-11-29 00:30:23,360 - Epoch: [196][   80/  211]    Overall Loss 0.041418    Objective Loss 0.041418                                        LR 0.000100    Time 0.587679    
2022-11-29 00:30:28,616 - Epoch: [196][   90/  211]    Overall Loss 0.041257    Objective Loss 0.041257                                        LR 0.000100    Time 0.580769    
2022-11-29 00:30:33,810 - Epoch: [196][  100/  211]    Overall Loss 0.040838    Objective Loss 0.040838                                        LR 0.000100    Time 0.574634    
2022-11-29 00:30:39,097 - Epoch: [196][  110/  211]    Overall Loss 0.040892    Objective Loss 0.040892                                        LR 0.000100    Time 0.570456    
2022-11-29 00:30:44,332 - Epoch: [196][  120/  211]    Overall Loss 0.041372    Objective Loss 0.041372                                        LR 0.000100    Time 0.566535    
2022-11-29 00:30:49,530 - Epoch: [196][  130/  211]    Overall Loss 0.041073    Objective Loss 0.041073                                        LR 0.000100    Time 0.562941    
2022-11-29 00:30:54,714 - Epoch: [196][  140/  211]    Overall Loss 0.040589    Objective Loss 0.040589                                        LR 0.000100    Time 0.559753    
2022-11-29 00:30:59,888 - Epoch: [196][  150/  211]    Overall Loss 0.040800    Objective Loss 0.040800                                        LR 0.000100    Time 0.556924    
2022-11-29 00:31:05,017 - Epoch: [196][  160/  211]    Overall Loss 0.040848    Objective Loss 0.040848                                        LR 0.000100    Time 0.554174    
2022-11-29 00:31:10,209 - Epoch: [196][  170/  211]    Overall Loss 0.040589    Objective Loss 0.040589                                        LR 0.000100    Time 0.552112    
2022-11-29 00:31:15,420 - Epoch: [196][  180/  211]    Overall Loss 0.040322    Objective Loss 0.040322                                        LR 0.000100    Time 0.550378    
2022-11-29 00:31:20,649 - Epoch: [196][  190/  211]    Overall Loss 0.040044    Objective Loss 0.040044                                        LR 0.000100    Time 0.548932    
2022-11-29 00:31:25,829 - Epoch: [196][  200/  211]    Overall Loss 0.040068    Objective Loss 0.040068                                        LR 0.000100    Time 0.547381    
2022-11-29 00:31:31,004 - Epoch: [196][  210/  211]    Overall Loss 0.039860    Objective Loss 0.039860    Top1 100.000000    Top5 100.000000    LR 0.000100    Time 0.545959    
2022-11-29 00:31:31,495 - Epoch: [196][  211/  211]    Overall Loss 0.039944    Objective Loss 0.039944    Top1 98.991935    Top5 100.000000    LR 0.000100    Time 0.545693    
2022-11-29 00:31:32,036 - --- validate (epoch=196)-----------
2022-11-29 00:31:32,036 - 6000 samples (256 per mini-batch)
2022-11-29 00:31:40,151 - Epoch: [196][   10/   24]    Loss 0.046594    Top1 98.632812    Top5 100.000000    
2022-11-29 00:31:42,771 - Epoch: [196][   20/   24]    Loss 0.044691    Top1 98.808594    Top5 100.000000    
2022-11-29 00:31:43,685 - Epoch: [196][   24/   24]    Loss 0.043413    Top1 98.816667    Top5 99.983333    
2022-11-29 00:31:44,215 - ==> Top1: 98.817    Top5: 99.983    Loss: 0.043

2022-11-29 00:31:44,215 - ==> Confusion:
[[602   1   0   0   0   0   1   0   0   1]
 [  0 684   2   0   1   1   0   0   0   0]
 [  0   0 586   0   0   0   0   0   0   0]
 [  0   0   1 576   0   2   0   1   3   0]
 [  1   0   0   0 556   0   0   0   0   8]
 [  1   0   0   3   0 507   4   0   3   0]
 [  0   1   0   0   2   1 625   0   2   0]
 [  0   2   4   2   1   0   0 616   0   0]
 [  1   0   1   1   1   4   1   0 573   2]
 [  1   2   0   0   2   2   0   1   3 604]]

2022-11-29 00:31:44,217 - ==> Best [Top1: 99.067   Top5: 100.000   Sparsity:0.00   Params: 71148 on epoch: 195]
2022-11-29 00:31:44,218 - Saving checkpoint to: logs\2022.11.28-173243\qat_checkpoint.pth.tar
2022-11-29 00:31:44,220 - 

2022-11-29 00:31:44,220 - Training epoch: 54000 samples (256 per mini-batch)
2022-11-29 00:31:54,919 - Epoch: [197][   10/  211]    Overall Loss 0.032844    Objective Loss 0.032844                                        LR 0.000100    Time 1.069839    
2022-11-29 00:32:00,154 - Epoch: [197][   20/  211]    Overall Loss 0.041889    Objective Loss 0.041889                                        LR 0.000100    Time 0.796520    
2022-11-29 00:32:05,366 - Epoch: [197][   30/  211]    Overall Loss 0.042166    Objective Loss 0.042166                                        LR 0.000100    Time 0.704649    
2022-11-29 00:32:10,563 - Epoch: [197][   40/  211]    Overall Loss 0.041843    Objective Loss 0.041843                                        LR 0.000100    Time 0.658390    
2022-11-29 00:32:15,759 - Epoch: [197][   50/  211]    Overall Loss 0.040065    Objective Loss 0.040065                                        LR 0.000100    Time 0.630634    
2022-11-29 00:32:20,948 - Epoch: [197][   60/  211]    Overall Loss 0.042284    Objective Loss 0.042284                                        LR 0.000100    Time 0.612014    
2022-11-29 00:32:26,195 - Epoch: [197][   70/  211]    Overall Loss 0.041101    Objective Loss 0.041101                                        LR 0.000100    Time 0.599525    
2022-11-29 00:32:31,429 - Epoch: [197][   80/  211]    Overall Loss 0.040270    Objective Loss 0.040270                                        LR 0.000100    Time 0.589997    
2022-11-29 00:32:36,601 - Epoch: [197][   90/  211]    Overall Loss 0.040462    Objective Loss 0.040462                                        LR 0.000100    Time 0.581911    
2022-11-29 00:32:41,870 - Epoch: [197][  100/  211]    Overall Loss 0.039729    Objective Loss 0.039729                                        LR 0.000100    Time 0.576399    
2022-11-29 00:32:47,128 - Epoch: [197][  110/  211]    Overall Loss 0.040009    Objective Loss 0.040009                                        LR 0.000100    Time 0.571798    
2022-11-29 00:32:52,387 - Epoch: [197][  120/  211]    Overall Loss 0.040253    Objective Loss 0.040253                                        LR 0.000100    Time 0.567965    
2022-11-29 00:32:57,592 - Epoch: [197][  130/  211]    Overall Loss 0.039340    Objective Loss 0.039340                                        LR 0.000100    Time 0.564314    
2022-11-29 00:33:02,843 - Epoch: [197][  140/  211]    Overall Loss 0.039311    Objective Loss 0.039311                                        LR 0.000100    Time 0.561499    
2022-11-29 00:33:08,056 - Epoch: [197][  150/  211]    Overall Loss 0.039410    Objective Loss 0.039410                                        LR 0.000100    Time 0.558812    
2022-11-29 00:33:13,255 - Epoch: [197][  160/  211]    Overall Loss 0.039413    Objective Loss 0.039413                                        LR 0.000100    Time 0.556381    
2022-11-29 00:33:18,473 - Epoch: [197][  170/  211]    Overall Loss 0.039128    Objective Loss 0.039128                                        LR 0.000100    Time 0.554335    
2022-11-29 00:33:23,676 - Epoch: [197][  180/  211]    Overall Loss 0.039173    Objective Loss 0.039173                                        LR 0.000100    Time 0.552445    
2022-11-29 00:33:28,906 - Epoch: [197][  190/  211]    Overall Loss 0.039230    Objective Loss 0.039230                                        LR 0.000100    Time 0.550895    
2022-11-29 00:33:34,131 - Epoch: [197][  200/  211]    Overall Loss 0.039786    Objective Loss 0.039786                                        LR 0.000100    Time 0.549471    
2022-11-29 00:33:39,356 - Epoch: [197][  210/  211]    Overall Loss 0.039715    Objective Loss 0.039715    Top1 98.046875    Top5 100.000000    LR 0.000100    Time 0.548187    
2022-11-29 00:33:39,829 - Epoch: [197][  211/  211]    Overall Loss 0.039615    Objective Loss 0.039615    Top1 98.991935    Top5 100.000000    LR 0.000100    Time 0.547829    
2022-11-29 00:33:40,361 - --- validate (epoch=197)-----------
2022-11-29 00:33:40,361 - 6000 samples (256 per mini-batch)
2022-11-29 00:33:48,380 - Epoch: [197][   10/   24]    Loss 0.034291    Top1 99.179688    Top5 100.000000    
2022-11-29 00:33:51,013 - Epoch: [197][   20/   24]    Loss 0.035368    Top1 99.179688    Top5 100.000000    
2022-11-29 00:33:51,902 - Epoch: [197][   24/   24]    Loss 0.036034    Top1 99.133333    Top5 100.000000    
2022-11-29 00:33:52,430 - ==> Top1: 99.133    Top5: 100.000    Loss: 0.036

2022-11-29 00:33:52,431 - ==> Confusion:
[[603   0   1   0   0   0   1   0   0   0]
 [  0 687   0   0   1   0   0   0   0   0]
 [  0   1 575   2   1   0   0   5   1   1]
 [  0   0   2 577   0   2   0   1   1   0]
 [  0   0   0   0 560   0   0   0   0   5]
 [  1   0   0   1   0 513   2   0   1   0]
 [  1   1   0   0   2   0 625   0   2   0]
 [  0   0   1   1   0   0   0 623   0   0]
 [  0   0   0   1   0   0   4   0 577   2]
 [  1   1   0   1   2   1   0   0   1 608]]

2022-11-29 00:33:52,433 - ==> Best [Top1: 99.133   Top5: 100.000   Sparsity:0.00   Params: 71148 on epoch: 197]
2022-11-29 00:33:52,433 - Saving checkpoint to: logs\2022.11.28-173243\qat_checkpoint.pth.tar
2022-11-29 00:33:52,437 - 

2022-11-29 00:33:52,437 - Training epoch: 54000 samples (256 per mini-batch)
2022-11-29 00:34:03,037 - Epoch: [198][   10/  211]    Overall Loss 0.040831    Objective Loss 0.040831                                        LR 0.000100    Time 1.059966    
2022-11-29 00:34:08,190 - Epoch: [198][   20/  211]    Overall Loss 0.039520    Objective Loss 0.039520                                        LR 0.000100    Time 0.787594    
2022-11-29 00:34:13,378 - Epoch: [198][   30/  211]    Overall Loss 0.038420    Objective Loss 0.038420                                        LR 0.000100    Time 0.698000    
2022-11-29 00:34:18,509 - Epoch: [198][   40/  211]    Overall Loss 0.040410    Objective Loss 0.040410                                        LR 0.000100    Time 0.651757    
2022-11-29 00:34:23,625 - Epoch: [198][   50/  211]    Overall Loss 0.041493    Objective Loss 0.041493                                        LR 0.000100    Time 0.623712    
2022-11-29 00:34:28,804 - Epoch: [198][   60/  211]    Overall Loss 0.040579    Objective Loss 0.040579                                        LR 0.000100    Time 0.606079    
2022-11-29 00:34:33,965 - Epoch: [198][   70/  211]    Overall Loss 0.040475    Objective Loss 0.040475                                        LR 0.000100    Time 0.593228    
2022-11-29 00:34:39,143 - Epoch: [198][   80/  211]    Overall Loss 0.040047    Objective Loss 0.040047                                        LR 0.000100    Time 0.583789    
2022-11-29 00:34:44,312 - Epoch: [198][   90/  211]    Overall Loss 0.039242    Objective Loss 0.039242                                        LR 0.000100    Time 0.576337    
2022-11-29 00:34:49,498 - Epoch: [198][  100/  211]    Overall Loss 0.039051    Objective Loss 0.039051                                        LR 0.000100    Time 0.570564    
2022-11-29 00:34:54,671 - Epoch: [198][  110/  211]    Overall Loss 0.038867    Objective Loss 0.038867                                        LR 0.000100    Time 0.565715    
2022-11-29 00:34:59,814 - Epoch: [198][  120/  211]    Overall Loss 0.038734    Objective Loss 0.038734                                        LR 0.000100    Time 0.561432    
2022-11-29 00:35:04,991 - Epoch: [198][  130/  211]    Overall Loss 0.038996    Objective Loss 0.038996                                        LR 0.000100    Time 0.558054    
2022-11-29 00:35:10,177 - Epoch: [198][  140/  211]    Overall Loss 0.039119    Objective Loss 0.039119                                        LR 0.000100    Time 0.555237    
2022-11-29 00:35:15,315 - Epoch: [198][  150/  211]    Overall Loss 0.038800    Objective Loss 0.038800                                        LR 0.000100    Time 0.552469    
2022-11-29 00:35:20,470 - Epoch: [198][  160/  211]    Overall Loss 0.038702    Objective Loss 0.038702                                        LR 0.000100    Time 0.550154    
2022-11-29 00:35:25,639 - Epoch: [198][  170/  211]    Overall Loss 0.038553    Objective Loss 0.038553                                        LR 0.000100    Time 0.548193    
2022-11-29 00:35:30,769 - Epoch: [198][  180/  211]    Overall Loss 0.038839    Objective Loss 0.038839                                        LR 0.000100    Time 0.546239    
2022-11-29 00:35:35,934 - Epoch: [198][  190/  211]    Overall Loss 0.039131    Objective Loss 0.039131                                        LR 0.000100    Time 0.544665    
2022-11-29 00:35:41,090 - Epoch: [198][  200/  211]    Overall Loss 0.039148    Objective Loss 0.039148                                        LR 0.000100    Time 0.543208    
2022-11-29 00:35:46,284 - Epoch: [198][  210/  211]    Overall Loss 0.038765    Objective Loss 0.038765    Top1 98.828125    Top5 100.000000    LR 0.000100    Time 0.542074    
2022-11-29 00:35:46,767 - Epoch: [198][  211/  211]    Overall Loss 0.038811    Objective Loss 0.038811    Top1 98.790323    Top5 100.000000    LR 0.000100    Time 0.541793    
2022-11-29 00:35:47,300 - --- validate (epoch=198)-----------
2022-11-29 00:35:47,300 - 6000 samples (256 per mini-batch)
2022-11-29 00:35:55,284 - Epoch: [198][   10/   24]    Loss 0.035620    Top1 99.101562    Top5 100.000000    
2022-11-29 00:35:57,868 - Epoch: [198][   20/   24]    Loss 0.042762    Top1 98.886719    Top5 100.000000    
2022-11-29 00:35:58,761 - Epoch: [198][   24/   24]    Loss 0.043785    Top1 98.850000    Top5 100.000000    
2022-11-29 00:35:59,290 - ==> Top1: 98.850    Top5: 100.000    Loss: 0.044

2022-11-29 00:35:59,290 - ==> Confusion:
[[601   0   2   0   0   0   1   0   1   0]
 [  0 684   0   0   1   0   0   3   0   0]
 [  0   2 579   0   1   0   0   3   1   0]
 [  0   0   2 576   0   2   0   1   2   0]
 [  0   1   1   0 559   0   0   0   0   4]
 [  0   0   0   1   0 513   2   0   2   0]
 [  0   1   0   0   2   2 625   0   1   0]
 [  0   2   3   0   1   0   0 619   0   0]
 [  0   0   2   3   0   1   2   0 575   1]
 [  1   1   0   1   4   2   0   3   3 600]]

2022-11-29 00:35:59,292 - ==> Best [Top1: 99.133   Top5: 100.000   Sparsity:0.00   Params: 71148 on epoch: 197]
2022-11-29 00:35:59,293 - Saving checkpoint to: logs\2022.11.28-173243\qat_checkpoint.pth.tar
2022-11-29 00:35:59,295 - 

2022-11-29 00:35:59,295 - Training epoch: 54000 samples (256 per mini-batch)
2022-11-29 00:36:09,907 - Epoch: [199][   10/  211]    Overall Loss 0.049309    Objective Loss 0.049309                                        LR 0.000100    Time 1.061063    
2022-11-29 00:36:15,067 - Epoch: [199][   20/  211]    Overall Loss 0.045252    Objective Loss 0.045252                                        LR 0.000100    Time 0.788492    
2022-11-29 00:36:20,240 - Epoch: [199][   30/  211]    Overall Loss 0.043656    Objective Loss 0.043656                                        LR 0.000100    Time 0.698067    
2022-11-29 00:36:25,401 - Epoch: [199][   40/  211]    Overall Loss 0.044106    Objective Loss 0.044106                                        LR 0.000100    Time 0.652530    
2022-11-29 00:36:30,563 - Epoch: [199][   50/  211]    Overall Loss 0.045120    Objective Loss 0.045120                                        LR 0.000100    Time 0.625268    
2022-11-29 00:36:35,687 - Epoch: [199][   60/  211]    Overall Loss 0.043667    Objective Loss 0.043667                                        LR 0.000100    Time 0.606428    
2022-11-29 00:36:40,813 - Epoch: [199][   70/  211]    Overall Loss 0.043323    Objective Loss 0.043323                                        LR 0.000100    Time 0.593029    
2022-11-29 00:36:45,963 - Epoch: [199][   80/  211]    Overall Loss 0.044458    Objective Loss 0.044458                                        LR 0.000100    Time 0.583265    
2022-11-29 00:36:51,112 - Epoch: [199][   90/  211]    Overall Loss 0.043754    Objective Loss 0.043754                                        LR 0.000100    Time 0.575672    
2022-11-29 00:36:56,255 - Epoch: [199][  100/  211]    Overall Loss 0.043181    Objective Loss 0.043181                                        LR 0.000100    Time 0.569527    
2022-11-29 00:37:01,418 - Epoch: [199][  110/  211]    Overall Loss 0.043208    Objective Loss 0.043208                                        LR 0.000100    Time 0.564681    
2022-11-29 00:37:06,581 - Epoch: [199][  120/  211]    Overall Loss 0.042748    Objective Loss 0.042748                                        LR 0.000100    Time 0.560643    
2022-11-29 00:37:11,725 - Epoch: [199][  130/  211]    Overall Loss 0.042272    Objective Loss 0.042272                                        LR 0.000100    Time 0.557087    
2022-11-29 00:37:16,852 - Epoch: [199][  140/  211]    Overall Loss 0.042143    Objective Loss 0.042143                                        LR 0.000100    Time 0.553912    
2022-11-29 00:37:22,042 - Epoch: [199][  150/  211]    Overall Loss 0.042305    Objective Loss 0.042305                                        LR 0.000100    Time 0.551585    
2022-11-29 00:37:27,176 - Epoch: [199][  160/  211]    Overall Loss 0.041828    Objective Loss 0.041828                                        LR 0.000100    Time 0.549200    
2022-11-29 00:37:32,331 - Epoch: [199][  170/  211]    Overall Loss 0.041595    Objective Loss 0.041595                                        LR 0.000100    Time 0.547213    
2022-11-29 00:37:37,568 - Epoch: [199][  180/  211]    Overall Loss 0.041523    Objective Loss 0.041523                                        LR 0.000100    Time 0.545901    
2022-11-29 00:37:42,684 - Epoch: [199][  190/  211]    Overall Loss 0.041367    Objective Loss 0.041367                                        LR 0.000100    Time 0.544087    
2022-11-29 00:37:47,835 - Epoch: [199][  200/  211]    Overall Loss 0.041561    Objective Loss 0.041561                                        LR 0.000100    Time 0.542639    
2022-11-29 00:37:52,985 - Epoch: [199][  210/  211]    Overall Loss 0.041556    Objective Loss 0.041556    Top1 98.828125    Top5 100.000000    LR 0.000100    Time 0.541319    
2022-11-29 00:37:53,473 - Epoch: [199][  211/  211]    Overall Loss 0.041561    Objective Loss 0.041561    Top1 98.790323    Top5 100.000000    LR 0.000100    Time 0.541060    
2022-11-29 00:37:54,003 - --- validate (epoch=199)-----------
2022-11-29 00:37:54,003 - 6000 samples (256 per mini-batch)
2022-11-29 00:38:01,956 - Epoch: [199][   10/   24]    Loss 0.036670    Top1 98.906250    Top5 100.000000    
2022-11-29 00:38:04,539 - Epoch: [199][   20/   24]    Loss 0.039807    Top1 98.925781    Top5 100.000000    
2022-11-29 00:38:05,419 - Epoch: [199][   24/   24]    Loss 0.038310    Top1 98.950000    Top5 100.000000    
2022-11-29 00:38:05,946 - ==> Top1: 98.950    Top5: 100.000    Loss: 0.038

2022-11-29 00:38:05,947 - ==> Confusion:
[[603   0   1   0   0   0   1   0   0   0]
 [  1 686   0   0   0   0   0   1   0   0]
 [  0   0 581   1   0   0   0   2   1   1]
 [  0   0   0 576   0   4   0   1   2   0]
 [  0   1   1   0 554   1   1   1   0   6]
 [  1   0   0   1   1 513   2   0   0   0]
 [  1   1   0   0   2   1 626   0   0   0]
 [  0   1   2   2   0   0   0 620   0   0]
 [  2   0   0   1   1   3   2   0 574   1]
 [  0   1   0   0   3   1   0   3   3 604]]

2022-11-29 00:38:05,949 - ==> Best [Top1: 99.133   Top5: 100.000   Sparsity:0.00   Params: 71148 on epoch: 197]
2022-11-29 00:38:05,949 - Saving checkpoint to: logs\2022.11.28-173243\qat_checkpoint.pth.tar
2022-11-29 00:38:05,951 - --- test ---------------------
2022-11-29 00:38:05,952 - 10000 samples (256 per mini-batch)
2022-11-29 00:38:13,849 - Test: [   10/   40]    Loss 0.023113    Top1 99.257812    Top5 100.000000    
2022-11-29 00:38:16,480 - Test: [   20/   40]    Loss 0.022107    Top1 99.335938    Top5 100.000000    
2022-11-29 00:38:19,102 - Test: [   30/   40]    Loss 0.020887    Top1 99.401042    Top5 99.986979    
2022-11-29 00:38:21,451 - Test: [   40/   40]    Loss 0.020720    Top1 99.370000    Top5 99.990000    
2022-11-29 00:38:21,984 - ==> Top1: 99.370    Top5: 99.990    Loss: 0.021

2022-11-29 00:38:21,984 - ==> Confusion:
[[ 976    0    1    1    0    0    2    0    0    0]
 [   0 1130    2    0    0    0    0    3    0    0]
 [   0    0 1028    0    0    0    0    3    1    0]
 [   0    0    0 1009    0    0    0    1    0    0]
 [   0    0    1    0  974    0    0    0    1    6]
 [   1    0    0    5    0  884    2    0    0    0]
 [   2    2    1    0    1    1  948    0    3    0]
 [   0    2    2    1    0    0    0 1022    0    1]
 [   0    0    3    1    1    1    0    0  966    2]
 [   0    0    0    0    5    1    0    2    1 1000]]

2022-11-29 00:38:21,993 - 
2022-11-29 00:38:21,993 - Log file for this run: E:\MAX78000\ai8x-training\logs\2022.11.28-173243\2022.11.28-173243.log

The final choice. I ultimately decided to give up on the machine-learning task and implement Task 4 instead: use the on-board camera to perform one or more of the following image-processing operations and display the processed result: noise filtering of a color image, frequency-domain filtering, brightness-based morphological processing (dilation/erosion), image background estimation, or color-based sparse optical flow. Until now I had always done image processing through mature libraries. That is convenient, but it leaves the low-level implementation hazy. This was a good opportunity to properly study image convolution: simply by changing the kernel you can blur, sharpen, or edge-detect an image.

Implementation: with the task chosen, time to get to work!
First, my understanding of convolution. Convolution is a basic operation in image processing, like looking at a picture through different filters: the information you care about is amplified, and the information you don't need is weakened or erased. The chosen image features then stand out clearly for later processing, and this is exactly what the convolutions inside a convolutional neural network do. Here I implemented edge extraction from the camera image using the Sobel, Prewitt, and Scharr operators.
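The neighborhood-times-kernel idea above can be sketched in a few lines of C. `conv3x3_at` is a hypothetical helper (not part of the project code) that applies one 3×3 kernel at a single pixel, treating out-of-range neighbors as zero, the same zero-padding scheme used later in `juanji`:

```c
#include <stdint.h>

/* Apply a 3x3 kernel k at pixel (x, y) of a w-by-h grayscale image.
 * Out-of-bounds neighbors contribute zero (zero padding).
 * Illustrative helper; names are not from the project code. */
static int conv3x3_at(const uint8_t *img, int w, int h, int x, int y,
                      const int k[9])
{
    int acc = 0;
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            int xx = x + dx, yy = y + dy;
            int pix = (xx < 0 || xx >= w || yy < 0 || yy >= h)
                          ? 0
                          : img[yy * w + xx];
            acc += pix * k[(dy + 1) * 3 + (dx + 1)];
        }
    }
    return acc;
}
```

With an identity kernel (only the center weight set to 1) the output reproduces the input pixel, which makes a handy sanity check before trying real edge kernels.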

1. Reading the image. The MAX78000 Feather board integrates a camera controlled over I2C. Following the ImgCapture example, the camera data is read into memory as a color bitmap in RGB565 format. Convolution operates on matrices, so the captured image must first be mapped into a 2-D matrix. There are two options: split the RGB565 data into three 2-D matrices (red, green, blue), convolve each, and merge the three results back into one image; or convert the RGB image to grayscale, giving a single 2-D matrix to convolve. I chose the second option and work on the grayscale image. The color-to-gray conversion uses the formula: Gray = 0.299*R + 0.587*G + 0.114*B

// Read the camera frame via DMA and store it in memory as grayscale
void read_grayimg_fromdma(uint32_t w, uint32_t h, uint8_t *imgdata) {
	uint8_t *data = NULL;
	uint16_t rgb;
	uint16_t r, g, b;
// Get image line by line
	for (int i = 0; i < h; i++) {
// Wait until camera streaming buffer is full
		while ((data = get_camera_stream_buffer()) == NULL) {
			if (camera_is_image_rcv()) {
				break;
			}
		}
		for (int j = 0; j < w; j++) {
			rgb = data[j * 2] * 256 + data[j * 2 + 1];
			r = (rgb & 0Xf800) >> 8;
			g = (rgb & 0X07e0) >> 3;
			b = (rgb & 0X001f) << 3;
			imgdata[i * w + j] = (uint8_t) ((r * 30 + g * 59 + b * 11 + 50)
					/ 100);	// integer form of Gray = 0.299*R + 0.587*G + 0.114*B
		}
// Release stream buffer
		release_camera_stream_buffer();
	}
//    utils_stream_image_row_to_pc(imgdata, IMAGE_XRES * IMAGE_YRES * 2);
}
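The RGB565 unpacking and the integer luminance approximation can be checked in isolation. `rgb565_to_gray` below is a hypothetical stand-alone version of the per-pixel math in `read_grayimg_fromdma` (same shifts, same weights), assuming the high byte arrives first as in the code above:

```c
#include <stdint.h>

/* Decode one big-endian RGB565 pixel and convert to 8-bit gray with
 * the same integer approximation used in read_grayimg_fromdma:
 *   Gray ~= (30*R + 59*G + 11*B + 50) / 100   (+50 rounds to nearest)
 * Hypothetical helper name, not from the project code. */
static uint8_t rgb565_to_gray(uint8_t hi, uint8_t lo)
{
    uint16_t rgb = (uint16_t)((hi << 8) | lo);
    uint16_t r = (rgb & 0xF800) >> 8; /* 5 red bits   -> 0..248 */
    uint16_t g = (rgb & 0x07E0) >> 3; /* 6 green bits -> 0..252 */
    uint16_t b = (rgb & 0x001F) << 3; /* 5 blue bits  -> 0..248 */
    return (uint8_t)((r * 30 + g * 59 + b * 11 + 50) / 100);
}
```

Note one side effect of this simple bit expansion: a full-white RGB565 pixel (0xFFFF) maps to 250 rather than 255, because without bit replication the 5- and 6-bit channels expand to at most 248 and 252.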


The capture resolution can be selected from several options, as in the example. In practice, however, dynamic memory allocation seems to fail once the frame exceeds 240×240; I have not solved this yet, so I use 240×240 for now.
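I did not track down the root cause, but the numbers are suggestive: the MAX78000 has 128 KB of SRAM, and two 8-bit frame buffers (source plus result) at 240×240 already consume 115,200 bytes, so larger frames plausibly exhaust the heap. A small defensive sketch (with a hypothetical helper name) that at least fails loudly instead of dereferencing NULL:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: allocate one 8-bit grayscale buffer and report
 * failure instead of returning an unchecked pointer. Two 240x240
 * buffers need 2 * 57600 = 115200 bytes, already close to the
 * MAX78000's 128 KB of SRAM -- a plausible (unverified) reason why
 * larger resolutions fail to allocate. */
static uint8_t *alloc_gray(uint32_t w, uint32_t h)
{
    uint8_t *buf = (uint8_t *)malloc((size_t)w * h);
    if (buf == NULL)
        printf("alloc_gray: failed to allocate %lu bytes\n",
               (unsigned long)((size_t)w * h));
    return buf;
}
```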
Only when I started hand-writing the convolution did I painfully learn that knowing is easier than doing: all sorts of things behaved differently from what I expected. To make the convolution code easier to test, I wrote a generator for a synthetic test image and ran the convolutions on it first.

// Generate a synthetic test image
void read_grayimg_test(uint32_t w, uint32_t h, uint8_t *imgdata) {
	for (int i = 0; i < h; i++) {
		for (int j = 0; j < w; j++) {
			imgdata[i * w + j] = 255 - (i / (h / 4)) * 64;
			if (j > (w / 2 - 10) && j < (w / 2 + 10))
				imgdata[i * w + j] = 255;
		}
	}
}

(figure: the generated test pattern)

2. The convolution. All kernels here are 3×3. For each pixel of the source image, take that pixel and its 3×3 neighborhood, multiply each element by the corresponding kernel entry, and accumulate the products; the sum becomes the new value at that position. Neighbors that fall outside the image border are zero-padded.

// Convolution with a 3x3 kernel; image borders are zero-padded
void juanji(uint32_t w, uint32_t h, uint8_t *imgdata, uint8_t *dealdata) {
	int sour[9];				// 3x3 source neighborhood
	int sob_dx[9] = { 1, 0, -1, 2, 0, -2, 1, 0, -1 };	// Sobel operator
	int sob_dy[9] = { -1, -2, -1, 0, 0, 0, 1, 2, 1 };
	int pre_dx[9] = { 1, 0, -1, 1, 0, -1, 1, 0, -1 };	// Prewitt operator
	int pre_dy[9] = { -1, -1, -1, 0, 0, 0, 1, 1, 1 };
	int sch_dx[9] = { -3, 0, 3, -10, 0, 10, -3, 0, 3 };	// Scharr operator
	int sch_dy[9] = { -3, -10, -3, 0, 0, 0, 3, 10, 3 };
	int newx = 0, newy = 0;		// initialized in case button holds an unexpected value
	uint16_t x, y;

	for (y = 0; y < h; y++) {
		for (x = 0; x < w; x++) {
			sour[4] = imgdata[y * w + x];
			if (x == 0) {
				sour[0] = 0;
				sour[3] = 0;
				sour[6] = 0;
				sour[4] = imgdata[y * w + x];
				sour[5] = imgdata[y * w + x + 1];
				if (y == 0) {
					sour[1] = 0;
					sour[2] = 0;
					sour[7] = imgdata[(y + 1) * w + x];
					sour[8] = imgdata[(y + 1) * w + x + 1];
				} else if (y == (h - 1)) {
					sour[1] = imgdata[(y - 1) * w + x];
					sour[2] = imgdata[(y - 1) * w + x + 1];
					sour[7] = 0;
					sour[8] = 0;
				} else {
					sour[1] = imgdata[(y - 1) * w + x];
					sour[2] = imgdata[(y - 1) * w + x + 1];
					sour[7] = imgdata[(y + 1) * w + x];
					sour[8] = imgdata[(y + 1) * w + x + 1];
				}

			} else if (x == (w - 1)) {
				sour[2] = 0;
				sour[5] = 0;
				sour[8] = 0;
				sour[4] = imgdata[y * w + x];
				sour[3] = imgdata[y * w + x - 1];
				if (y == 0) {
					sour[1] = 0;
					sour[0] = 0;
					sour[7] = imgdata[(y + 1) * w + x];
					sour[6] = imgdata[(y + 1) * w + x - 1];
				} else if (y == (h - 1)) {
					sour[1] = imgdata[(y - 1) * w + x];
					sour[0] = imgdata[(y - 1) * w + x - 1];
					sour[7] = 0;
					sour[6] = 0;
				} else {
					sour[1] = imgdata[(y - 1) * w + x];
					sour[0] = imgdata[(y - 1) * w + x - 1];
					sour[7] = imgdata[(y + 1) * w + x];
					sour[6] = imgdata[(y + 1) * w + x - 1];
				}
			} else {
				if (y == 0) {
					sour[1] = 0;
					sour[0] = 0;
					sour[2] = 0;
					sour[7] = imgdata[(y + 1) * w + x];
					sour[6] = imgdata[(y + 1) * w + x - 1];
					sour[8] = imgdata[(y + 1) * w + x + 1];
					sour[3] = imgdata[y * w + x - 1];
					sour[5] = imgdata[y * w + x + 1];
				} else if (y == (h - 1)) {
					sour[1] = imgdata[(y - 1) * w + x];
					sour[0] = imgdata[(y - 1) * w + x - 1];
					sour[3] = imgdata[y * w + x - 1];
					sour[5] = imgdata[y * w + x + 1];
					sour[7] = 0;
					sour[6] = 0;
					sour[8] = 0;
				} else {
					sour[1] = imgdata[(y - 1) * w + x];
					sour[0] = imgdata[(y - 1) * w + x - 1];
					sour[2] = imgdata[(y - 1) * w + x + 1];
					sour[7] = imgdata[(y + 1) * w + x];
					sour[6] = imgdata[(y + 1) * w + x - 1];
					sour[8] = imgdata[(y + 1) * w + x + 1];
					sour[3] = imgdata[y * w + x - 1];
					sour[5] = imgdata[y * w + x + 1];
				}

			}
			// multiply-accumulate the neighborhood with the selected kernel pair
//			dealdata[y * w + x] = matrixplus(sour, kernel);
			if (button == 2) {	// Prewitt operator
				newx = matrixplus(sour, pre_dx);
				newy = matrixplus(sour, pre_dy);
			}
			if (button == 1) {	// Scharr operator
				newx = matrixplus(sour, sch_dx);
				newy = matrixplus(sour, sch_dy);
			}
			if (button == 0) {	// Sobel operator
				newx = matrixplus(sour, sob_dx);
				newy = matrixplus(sour, sob_dy);
			}
			newx = (int) sqrt((double) (newx * newx + newy * newy));	// gradient magnitude, always >= 0
			if (newx > 255)	// clamp to the 8-bit range
				newx = 255;
			dealdata[y * w + x] = newx;
		}

	}
}
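`matrixplus` is called throughout `juanji` but its listing was omitted from the post. A minimal implementation consistent with how it is used (an assumption, not necessarily the author's exact code) is the element-wise multiply-accumulate of the 3×3 neighborhood and the kernel:

```c
/* Dot product of the 3x3 neighborhood with the 3x3 kernel, both laid
 * out as 9-element arrays in row-major order. Assumed reconstruction
 * of the matrixplus() used by juanji() above. */
int matrixplus(int *sour, int *kernel)
{
    int sum = 0;
    for (int i = 0; i < 9; i++)
        sum += sour[i] * kernel[i];
    return sum;
}
```

With this definition, each operator in `juanji` reduces to two 9-term dot products (one per gradient direction) followed by the magnitude computation.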

(figures: edge-extraction results of the three operators on the synthetic test image)
On the test image, the Prewitt and Sobel results are nearly identical, while Scharr differs somewhat. Next, the results on real camera scenes.
(figures: captured grayscale images with the edge-extraction result of each operator)
The left column in each figure is the captured grayscale image. Prewitt visibly extracts finer edges where the differences are subtle. To summarize the characteristics of the three operators: Sobel weights the neighborhood by distance, so it copes better with noisy images and localizes edges well, but the detected edges tend to be several pixels wide. Prewitt works well on images with gradual grayscale transitions; it is similar to Sobel but does not weight neighbors by their distance from the current pixel, differing only in the smoothing weights. Scharr addresses Sobel's inaccuracy when the gradient direction is close to horizontal or vertical: it enlarges the kernel weights to amplify the differences between pixel values when computing the x- or y-direction image derivative.

Postscript: machine learning is increasingly being deployed on edge devices, and this is an area I need to study hard. I have not yet mastered the MAX78000: image capture is slow, the image quality is modest, and I still cannot put the CNN examples to my own use; a long road of learning remains. Thanks to 电子森林 (eetree) for organizing this event, which both taught me new things and broadened my horizons!

Attachment: CameraIF.zip
Team: 瞎捣鼓小能手
Member: aramy, a hobbyist microcontroller tinkerer.