
AI Assessment

task1.1 {#task1-1}

For each explanation you give, provide the corresponding PaddlePaddle function and explain the meaning of its parameters as fully as possible. (A composite example follows the list below.)

  1. Explain the role of the convolution layer in a neural network.
  • A convolution adds the pixels surrounding the kernel's center point to the pixel under the center, each weighted in proportion to the kernel (i.e. the neighbors influence the center);
  • each kernel represents one feature and is used to extract that specific feature from the input data.
    paddle.nn.Conv2D()
  • in_channels: number of channels of the input feature map.
  • out_channels: number of channels of the output feature map.
  • kernel_size: size of the convolution kernel.
  • stride: stride of the convolution.
  • padding: amount of padding.
  2. Explain the role of the pooling layer in a neural network. It filters features, shrinking the spatial dimensions of the feature map.

    paddle.nn.MaxPool2D (max pooling), paddle.nn.AvgPool2D (average pooling)

    • kernel_size: size of the pooling window.
    • stride: stride of the pooling.
    • padding: amount of padding.
  3. Explain the role of the fully connected layer in a neural network. It flattens the multi-dimensional input into a one-dimensional vector, then applies a linear transform: multiply by a weight matrix, add a bias term, and pass the result through an activation function. The output can be used for classification or prediction.

    paddle.nn.Linear()

    • in_features: size of the input features.
    • out_features: size of the output features.
  4. Explain the role of the activation layer in a neural network. It turns the linear output of the fully connected layer into a non-linear one, so the model can fit the data better and perform better.

    paddle.nn.ReLU, paddle.nn.Sigmoid, paddle.nn.Tanh

    • No parameters.
  5. Explain the role of the Dropout layer in a neural network. Regularization: it randomly drops neurons, reducing the effective parameter count and model complexity, which mitigates overfitting.

    paddle.nn.Dropout

    • p: the drop probability, i.e. the fraction of neurons to zero out (the older fluid API called this dropout_prob).
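The following is a minimal sketch that wires all five layer types into one toy network; the 1×28×28 input shape and all layer sizes are illustrative assumptions, not part of the task:

```python
import paddle

# Toy network using each layer type discussed above.
net = paddle.nn.Sequential(
    paddle.nn.Conv2D(in_channels=1, out_channels=8, kernel_size=3, stride=1, padding=1),
    paddle.nn.ReLU(),                              # activation: adds non-linearity
    paddle.nn.MaxPool2D(kernel_size=2, stride=2),  # pooling: halves height and width
    paddle.nn.Flatten(),                           # flatten to a 1-D feature vector
    paddle.nn.Dropout(p=0.5),                      # randomly zero half of the features
    paddle.nn.Linear(in_features=8 * 14 * 14, out_features=10),  # fully connected
)

x = paddle.randn([1, 1, 28, 28])  # one fake grayscale image
print(net(x).shape)               # [1, 10]
```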

task2.1 {#task2-1}

In a single-layer perceptron, suppose there are two input features and one output.

The weights are w1 = 0.5 and w2 = -0.3, and the bias is b = 0.1.

The input features are x1 = 2 and x2 = -1.

Compute the output (assume the activation is a step function).
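Worked by hand: w1·x1 + w2·x2 + b = 0.5×2 + (−0.3)×(−1) + 0.1 = 1.4, which is ≥ 0, so the step function fires and the output is 1. The code below verifies this.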

```python
# Define the weights, bias, and input features
w1 = 0.5
w2 = -0.3
b = 0.1
x1 = 2
x2 = -1

# Activation: step function
def step_function(x):
    if x >= 0:
        return 1
    else:
        return 0

# Compute the weighted sum plus bias, then apply the step function
def perceptron_output(w1, w2, b, x1, x2):
    result = w1 * x1 + w2 * x2 + b
    return step_function(result)

# Output
output = perceptron_output(w1, w2, b, x1, x2)
print("Output:", output)
```

task2.2 (code implementation) {#task2-2(代码实现)}

Suppose you have a feedforward neural network with two hidden layers of 3 neurons each.

The input layer has 4 features and the output layer has 2 neurons.

All activation functions are ReLU.

Write a function that computes the output of the whole network, given the input and the network parameters.
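Given these sizes, the weight matrices have shapes 4×3 (input to hidden 1), 3×3 (hidden 1 to hidden 2), and 3×2 (hidden 2 to output), with bias rows of matching width; the implementation below hard-codes one such set of parameters.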

```python
import numpy as np

# Input and hard-coded network parameters:
# W1: 4x3 (input -> hidden 1), W2: 3x3 (hidden 1 -> hidden 2), W3: 3x2 (hidden 2 -> output)
IP = [[1, 2, 3, 4]]
W1 = [[0.32361505, -1.08053636, 1.94312927],
      [1.87599918, 0.09333858, -0.54394708],
      [0.31721595, 0.15919131, 0.75238423],
      [0.52593422, 0.39281801, 0.47263081]]
B1 = [[-0.95059911, -0.78673449, -0.27075615]]
W2 = [[-1.11562289, -1.05667625, 1.17330918],
      [1.15682221, 0.30359694, -0.32633211],
      [0.44536135, 0.31302144, -1.16012537]]
B2 = [[1.48936813, -0.50073038, -0.16725209]]
W3 = [[-0.06277751, -0.52103597],
      [-0.29022904, 1.55451498],
      [-0.80766273, -0.24495497]]
B3 = [[0.97822396, 1.32775756]]

# ReLU activation
def relu(x):
    return np.maximum(0, x)

def Feedforward_neural_network(IP, W1, B1, W2, B2, W3, B3):
    IP = np.array(IP)
    W1 = np.array(W1)
    B1 = np.array(B1)
    W2 = np.array(W2)
    B2 = np.array(B2)
    W3 = np.array(W3)
    B3 = np.array(B3)

    # Hidden layer 1: affine transform followed by ReLU
    out1 = np.dot(IP, W1) + B1
    in2 = relu(out1)
    # Hidden layer 2: affine transform followed by ReLU
    out2 = np.dot(in2, W2) + B2
    in3 = relu(out2)
    # Output layer: affine transform
    out3 = np.dot(in3, W3) + B3
    output = out3
    return output

# Run the forward pass
result = Feedforward_neural_network(IP, W1, B1, W2, B2, W3, B3)
print(result)
```

task3.1 (code implementation) {#task3-1(代码实现)}

Problem: linear regression model {#题目:线性回归模型}

Use the PaddlePaddle framework (Paddle, PyTorch, or TensorFlow are all fine, as is a hand-written linear regression network) to build a simple linear regression model, train it on the data in data.csv, and predict the value of a new data point.

Hints {#提示}

Use paddle.nn.Linear to create a linear layer with a single input and a single output.
Define a loss function and an optimizer.
Train the model to fit the data.
Run predictions on new data.
A minimal sketch of this recipe follows; the full script trained on data.csv comes after it.
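A minimal sketch of the hinted recipe on synthetic data (the line y = 3x + 0.5 and all hyperparameters are illustrative assumptions):

```python
import numpy as np
import paddle
import paddle.nn.functional as F

# Synthetic data: 100 points on the line y = 3x + 0.5
x = paddle.to_tensor(np.random.rand(100, 1).astype('float32'))
y = 3.0 * x + 0.5

layer = paddle.nn.Linear(in_features=1, out_features=1)  # single input, single output
opt = paddle.optimizer.SGD(learning_rate=0.1, parameters=layer.parameters())

for _ in range(200):
    loss = paddle.mean(F.square_error_cost(layer(x), y))  # squared-error loss
    loss.backward()
    opt.step()
    opt.clear_grad()

# Predict a new data point; should be close to 3 * 0.5 + 0.5 = 2.0
print(layer(paddle.to_tensor([[0.5]])).numpy())
```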

```python
import paddle
from paddle.nn import Linear
import paddle.nn.functional as F
import numpy as np
import os
import random
import pandas as pd


def load_data():
    # Load the data from file
    datafile = pd.read_csv("C:/Users/lxcqm/Desktop/人工智能考核/训练数据/data.csv")
    X = datafile["X"].values.astype("float32")
    Y = datafile["y"].values.astype("float32")
    data_matrix = np.column_stack((X, Y))

    # Make sure the data holds 100000 elements in total (50000 rows x 2 columns)
    num_rows = 50000
    num_cols = 2
    data_matrix = data_matrix.reshape((num_rows, num_cols))

    # Split the data into a training set and a test set:
    # 80% for training, 20% for testing, with no overlap between the two
    ratio = 0.8
    offset = int(data_matrix.shape[0] * ratio)
    training_data = data_matrix[:offset]
    test_data = data_matrix[offset:]

    # Compute the max and min of the training set
    maximums, minimums = training_data.max(axis=0), training_data.min(axis=0)

    # Record the normalization parameters so inference data can be normalized too
    global max_values
    global min_values
    max_values = maximums
    min_values = minimums

    # Normalize the data
    training_data = (training_data - min_values) / (maximums - minimums)
    test_data = (test_data - min_values) / (maximums - minimums)
    return training_data, test_data


# # Sanity-check the data loading
# training_data, test_data = load_data()
# print(training_data.shape)
# print(training_data)


class Regressor(paddle.nn.Layer):
    # self refers to the class instance itself
    def __init__(self):
        # Initialize the parent class
        super(Regressor, self).__init__()
        # One fully connected layer with input dimension 1 and output dimension 1
        self.fc = Linear(in_features=1, out_features=1)

    # Forward pass of the network
    def forward(self, inputs):
        x = self.fc(inputs)
        return x


# Instantiate the linear regression model
model = Regressor()
# Switch the model to training mode
model.train()
# Load the data
training_data, test_data = load_data()
# Optimizer: stochastic gradient descent (SGD) with learning rate 0.01
opt = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())

EPOCH_NUM = 10    # number of epochs
BATCH_SIZE = 100  # batch size

# Outer loop over epochs
for epoch_id in range(EPOCH_NUM):
    # Shuffle the training data before each epoch
    np.random.shuffle(training_data)
    # Split the training data into mini-batches of BATCH_SIZE samples
    mini_batches = [training_data[k:k + BATCH_SIZE] for k in range(0, len(training_data), BATCH_SIZE)]
    # print(mini_batches)
    # Inner loop over mini-batches
    for iter_id, mini_batch in enumerate(mini_batches):
        x = np.array(mini_batch[:, :-1])  # features of the current batch
        y = np.array(mini_batch[:, -1:])  # labels of the current batch
        # print(mini_batch, x, y)
        # Convert the numpy data to Paddle dynamic-graph tensors
        X = paddle.to_tensor(x)
        Y = paddle.to_tensor(y)

        # Forward pass
        predicts = model(X)

        # Compute the loss
        loss = F.square_error_cost(predicts, label=Y)
        avg_loss = paddle.mean(loss)
        if iter_id % 100 == 0:
            print("epoch: {}, iter: {}, loss is: {}".format(epoch_id, iter_id, avg_loss.numpy()))

        # Backward pass: compute the gradient of every parameter
        avg_loss.backward()
        # Update the parameters one step with the configured learning rate
        opt.step()
        # Clear the gradients for the next iteration
        opt.clear_grad()


def load_one_example():
    # Pick one random sample from the already loaded test set
    idx = np.random.randint(0, test_data.shape[0])
    # idx = -10
    x0, y0 = test_data[idx, 0], test_data[idx, -1]
    x0 = x0.reshape([1, -1])
    return x0, y0


# Save the model parameters
paddle.save(model.state_dict(), 'C:/Users/lxcqm/Desktop/model/model1.pdmodel')
print("Model saved; parameters stored at C:/Users/lxcqm/Desktop/model/model1.pdmodel")

# Argument: path of the saved parameter file
model_dict = paddle.load('C:/Users/lxcqm/Desktop/model/model1.pdmodel')
model.load_dict(model_dict)
model.eval()

# Load one sample from the test set
xn, yn = load_one_example()
# Convert the data to a dynamic-graph tensor
xn = paddle.to_tensor(xn)
predict = model(xn)
# print("xn", xn, "predict", predict, "yn", yn)

# De-normalize the prediction
predict = predict * (max_values[-1] - min_values[-1]) + min_values[-1]
# De-normalize the Y value
yn = yn * (max_values[-1] - min_values[-1]) + min_values[-1]
print("Inference result is {}, the corresponding Y is {}".format(predict.numpy(), yn))
```

task3.2 (code implementation) {#task3-2(代码实现)}

Problem: convolutional neural network (CNN) classification {#题目:卷积神经网络(CNN)分类}

Use the PaddlePaddle framework to build a simple convolutional neural network (CNN) that classifies the MNIST handwritten digit dataset.

(Paddle, PyTorch, TensorFlow, or another framework is fine.)

```python
import json
import gzip
import paddle
from paddle.vision.transforms import Normalize
from paddle.io import Dataset
import numpy as np
from PIL import Image
from paddle.vision.transforms import functional as F

# Image normalization; CHW means the layout is [channels, height, width].
# Subtract the mean and divide by the std:
# pixel values in [0, 255] minus 127.5 give [-127.5, 127.5],
# and dividing by 127.5 maps them to [-1, 1].
transform = Normalize(mean=[127.5], std=[127.5], data_format='CHW')


class MNISTDataset(Dataset):
    def __init__(self, datafile, mode='train', transform=None):
        super().__init__()
        self.mode = mode
        self.transform = transform
        print('loading mnist dataset from {} ......'.format(datafile))
        data = json.load(gzip.open(datafile))
        print('mnist dataset load done')
        # The file contains the train, validation, and test splits
        train_set, val_set, eval_set = data
        if mode == 'train':
            # training split
            self.imgs, self.labels = train_set[0], train_set[1]
        elif mode == 'valid':
            # validation split
            self.imgs, self.labels = val_set[0], val_set[1]
        elif mode == 'test':
            # test split
            self.imgs, self.labels = eval_set[0], eval_set[1]
        else:
            raise Exception("mode can only be one of ['train', 'valid', 'test']")

    def __getitem__(self, index):
        """Define how to fetch the sample at a given index."""
        data = self.imgs[index]
        label = self.labels[index]
        return self.transform(data), label

    def __len__(self):
        """Return the total number of samples."""
        return len(self.imgs)


datafile = 'D:/paddlepaddle/paddlepaddle手写数字/mnist.json.gz'
# Initialize the datasets
train_dataset = MNISTDataset(datafile, mode='train', transform=transform)
test_dataset = MNISTDataset(datafile, mode='test', transform=transform)
# print('train images: ', train_dataset.__len__(), ', test images: ', test_dataset.__len__())

# from matplotlib import pyplot as plt
#
# for data in train_dataset:
#     image, label = data
#     print('shape of image: ', image.shape)
#     plt.title(str(label))
#     plt.imshow(image[0])
#     break

# Define and initialize the data loaders
train_loader = paddle.io.DataLoader(train_dataset, batch_size=64, shuffle=True, num_workers=0)
test_loader = paddle.io.DataLoader(test_dataset, batch_size=64, shuffle=False, num_workers=0)
# print('step num:', len(train_loader))

# An earlier single-layer baseline, kept for reference:
# class MNIST_CNN(paddle.nn.Layer):
#     def __init__(self):
#         super(MNIST_CNN, self).__init__()
#         # One fully connected layer with output dimension 1
#         self.fc = paddle.nn.Linear(in_features=784, out_features=1)
#         # 28*28 = 784 pixels per image
#
#     def forward(self, inputs):
#         outputs = self.fc(inputs)
#         return outputs


class MNIST_CNN(paddle.nn.Layer):
    def __init__(self):
        super(MNIST_CNN, self).__init__()
        # Convolution and pooling layers
        self.conv1 = paddle.nn.Conv2D(in_channels=1, out_channels=16, kernel_size=3, stride=1, padding=1)
        self.pool1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.conv2 = paddle.nn.Conv2D(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=1)
        self.pool2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        # Input dimension of the first fully connected layer,
        # derived from the conv/pool output shape (32 channels of 7x7)
        self.fc_input_dim = 32 * 7 * 7
        # Fully connected layers
        self.fc1 = paddle.nn.Linear(in_features=self.fc_input_dim, out_features=128)
        self.fc2 = paddle.nn.Linear(in_features=128, out_features=10)  # 10 classes for MNIST

    def forward(self, inputs):
        x = paddle.to_tensor(inputs)
        x = paddle.reshape(x, [-1, 1, 28, 28])
        # x = paddle.unsqueeze(x, axis=1)  # add a channel dimension
        x = self.conv1(x)
        x = paddle.nn.functional.relu(x)
        x = self.pool1(x)
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.pool2(x)
        x = paddle.flatten(x, start_axis=1)
        x = self.fc1(x)
        x = paddle.nn.functional.relu(x)
        x = self.fc2(x)
        return x


model = MNIST_CNN()


def train(model):
    print('train:')
    model.train()
    # Optimizer: SGD with learning rate 0.001.
    # Each iteration, SGD computes the gradient on a randomly drawn
    # mini-batch and updates the model parameters with it.
    opt = paddle.optimizer.SGD(learning_rate=0.001, parameters=model.parameters())
    EPOCH_NUM = 5
    for epoch_id in range(EPOCH_NUM):
        print('epoch:', epoch_id)
        for batch_id, data in enumerate(train_loader()):
            images, labels = data
            images = paddle.to_tensor(images).astype('float32')
            labels = paddle.to_tensor(labels).astype('float32')
            images = paddle.reshape(images, [images.shape[0], images.shape[2] * images.shape[3]])

            # Forward pass
            predicts = model(images)

            # Convert labels to integers, then to one-hot encoding
            labels_int = paddle.cast(labels, dtype='int64')
            labels_onehot = paddle.nn.functional.one_hot(labels_int, num_classes=10)

            # Loss: squared error averaged over the batch
            loss = paddle.nn.functional.square_error_cost(predicts, labels_onehot)
            avg_loss = paddle.mean(loss)

            # Print the current loss every 200 batches
            if batch_id % 200 == 0:
                print("epoch: {}, batch: {}, loss is: {}".format(epoch_id, batch_id, avg_loss.numpy()))

            # Backward pass and parameter update
            avg_loss.backward()
            opt.step()
            opt.clear_grad()


# Create the model
print("create model:")
# Start the training process
train(model)

# # Save the model parameters
# paddle.save(model.state_dict(), 'C:/Users/lxcqm/Desktop/model/modelhn.pdmodel')
# print("Model saved; parameters stored at C:/Users/lxcqm/Desktop/model/modelhn.pdmodel")
#
# # Argument: path of the saved parameter file
# model_dict = paddle.load('C:/Users/lxcqm/Desktop/model/modelhn.pdmodel')
# model.load_dict(model_dict)
# model.eval()
```
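A design note on the loss: the script above trains a classifier with squared error against one-hot labels. Cross-entropy is the conventional choice for classification; a hedged sketch of the swap inside the training loop (assuming the loop variables from the script above):

```python
# Replace the one-hot + square_error_cost block with cross-entropy.
# paddle.nn.functional.cross_entropy applies softmax internally,
# expects integer class labels, and reduces to the mean by default.
labels_int = paddle.cast(labels, dtype='int64')
avg_loss = paddle.nn.functional.cross_entropy(predicts, labels_int)
```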

task3.2 (code implementation) {#task3-2(代码实现)-1}

Problem: convolutional neural network (CNN) classification {#题目:卷积神经网络(CNN)分类-1}

Use the PaddlePaddle framework to build a simple convolutional neural network (CNN) that classifies the MNIST handwritten digit dataset.

(Paddle, PyTorch, TensorFlow, or another framework is fine. This second version extends the script above with an Adam optimizer, a loss curve, and test-set evaluation.)

Hints {#提示-1}

Build a CNN with convolution, pooling, and fully connected layers.
Load the MNIST dataset; paddle.vision.datasets.MNIST can be used (a sketch of that route follows this list).
Define a loss function and an optimizer.
Train the model to classify the handwritten digits.
Evaluate the model's performance and run predictions.
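The built-in dataset route from the hint would look roughly like this (a sketch; the full script below instead loads a local mnist.json.gz through a custom Dataset):

```python
import paddle
from paddle.vision.transforms import Normalize

# Map pixel values from [0, 255] to [-1, 1]
transform = Normalize(mean=[127.5], std=[127.5], data_format='CHW')

# Downloads MNIST on first use; each sample is an (image, label) pair
train_dataset = paddle.vision.datasets.MNIST(mode='train', transform=transform)
test_dataset = paddle.vision.datasets.MNIST(mode='test', transform=transform)
train_loader = paddle.io.DataLoader(train_dataset, batch_size=64, shuffle=True)
```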

```python
import json
import gzip
import paddle
from paddle.vision.transforms import Normalize
from paddle.io import Dataset
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from paddle.vision.transforms import functional as F

# Image normalization; CHW means the layout is [channels, height, width]
transform = Normalize(mean=[127.5], std=[127.5], data_format='CHW')


class MNISTDataset(Dataset):
    def __init__(self, datafile, mode='train', transform=None):
        super().__init__()
        self.mode = mode
        self.transform = transform
        print('loading mnist dataset from {} ......'.format(datafile))
        data = json.load(gzip.open(datafile))
        print('mnist dataset load done')
        # The file contains the train, validation, and test splits
        train_set, val_set, eval_set = data
        if mode == 'train':
            # training split
            self.imgs, self.labels = train_set[0], train_set[1]
        elif mode == 'valid':
            # validation split
            self.imgs, self.labels = val_set[0], val_set[1]
        elif mode == 'test':
            # test split
            self.imgs, self.labels = eval_set[0], eval_set[1]
        else:
            raise Exception("mode can only be one of ['train', 'valid', 'test']")

    def __getitem__(self, index):
        """Define how to fetch the sample at a given index."""
        data = self.imgs[index]
        label = self.labels[index]
        return self.transform(data), label

    def __len__(self):
        """Return the total number of samples."""
        return len(self.imgs)


datafile = 'D:/paddlepaddle/paddlepaddle手写数字/mnist.json.gz'
# Initialize the datasets
train_dataset = MNISTDataset(datafile, mode='train', transform=transform)
test_dataset = MNISTDataset(datafile, mode='test', transform=transform)

# Define and initialize the data loaders
train_loader = paddle.io.DataLoader(train_dataset, batch_size=64, shuffle=True, num_workers=0)
test_loader = paddle.io.DataLoader(test_dataset, batch_size=64, shuffle=False, num_workers=0)
# train_loader = paddle.io.DataLoader(train_dataset, batch_size=128, shuffle=True, num_workers=0)
# test_loader = paddle.io.DataLoader(test_dataset, batch_size=128, shuffle=False, num_workers=0)


class MNIST_CNN(paddle.nn.Layer):
    def __init__(self):
        super(MNIST_CNN, self).__init__()
        # Convolution and pooling layers
        self.conv1 = paddle.nn.Conv2D(in_channels=1, out_channels=16, kernel_size=3, stride=1, padding=1)
        self.pool1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.conv2 = paddle.nn.Conv2D(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=1)
        self.pool2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        # Input dimension of the first fully connected layer,
        # derived from the conv/pool output shape (32 channels of 7x7)
        self.fc_input_dim = 32 * 7 * 7
        # Fully connected layers
        self.fc1 = paddle.nn.Linear(in_features=self.fc_input_dim, out_features=128)
        self.fc2 = paddle.nn.Linear(in_features=128, out_features=10)  # 10 classes for MNIST

    def forward(self, inputs):
        x = paddle.to_tensor(inputs)
        x = paddle.reshape(x, [-1, 1, 28, 28])
        # x = paddle.unsqueeze(x, axis=1)  # add a channel dimension
        x = self.conv1(x)
        x = paddle.nn.functional.relu(x)
        x = self.pool1(x)
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.pool2(x)
        x = paddle.flatten(x, start_axis=1)
        x = self.fc1(x)
        x = paddle.nn.functional.relu(x)
        x = self.fc2(x)
        return x


model = MNIST_CNN()


def train(model):
    print('train:')
    model.train()
    # Optimizers tried; Adam is the one in use:
    # opt = paddle.optimizer.SGD(learning_rate=0.001, parameters=model.parameters())
    # opt = paddle.optimizer.Momentum(learning_rate=0.001, momentum=0.9, parameters=model.parameters())
    # opt = paddle.optimizer.Adagrad(learning_rate=0.001, parameters=model.parameters())
    opt = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
    EPOCH_NUM = 40
    total_losses = []  # loss of every batch
    for epoch_id in range(EPOCH_NUM):
        print('epoch:', epoch_id)
        for batch_id, data in enumerate(train_loader()):
            images, labels = data
            images = paddle.to_tensor(images).astype('float32')
            labels = paddle.to_tensor(labels).astype('float32')
            images = paddle.reshape(images, [images.shape[0], images.shape[2] * images.shape[3]])

            # Forward pass
            predicts = model(images)

            # Convert labels to integers, then to one-hot encoding
            labels_int = paddle.cast(labels, dtype='int64')
            labels_onehot = paddle.nn.functional.one_hot(labels_int, num_classes=10)

            # Loss: squared error averaged over the batch
            loss = paddle.nn.functional.square_error_cost(predicts, labels_onehot)
            avg_loss = paddle.mean(loss)

            # Predictions and batch accuracy
            probs = paddle.nn.functional.softmax(predicts, axis=1)
            predictions = paddle.argmax(probs, axis=1)
            correct = paddle.sum(paddle.cast(paddle.equal(predictions, labels_int), dtype='float32'))
            accuracy = correct / images.shape[0] * 100

            if batch_id % 200 == 0:
                print("epoch: {}, batch: {}, loss is: {}, accuracy is: {}".format(
                    epoch_id, batch_id, avg_loss.numpy(), accuracy.numpy()))

            # Backward pass and parameter update
            avg_loss.backward()
            opt.step()
            opt.clear_grad()

            # Record the loss of this batch
            total_losses.append(avg_loss.numpy())

    # Plot the overall loss curve
    plt.plot(total_losses)
    plt.xlabel('Batch')
    plt.ylabel('Loss')
    plt.title('Training Loss Curve')
    plt.ylim(0, 0.1)  # restrict the y axis to [0, 0.1]
    plt.show()


# Create the model
print("create model:")
# Start the training process
train(model)

# Save the model parameters
paddle.save(model.state_dict(), 'C:/Users/lxcqm/Desktop/model/modelhn.pdmodel')
print("Model saved; parameters stored at C:/Users/lxcqm/Desktop/model/modelhn.pdmodel")

# Argument: path of the saved parameter file
model_dict = paddle.load('C:/Users/lxcqm/Desktop/model/modelhn.pdmodel')
model.load_dict(model_dict)
model.eval()

test_loader = paddle.io.DataLoader(test_dataset, batch_size=64, shuffle=False, num_workers=0)

# Run one test batch and display a sample image
for batch_id, data in enumerate(test_loader):
    images, labels = data
    images = paddle.to_tensor(images).astype('float32')
    labels = paddle.to_tensor(labels).astype('float32')
    images = paddle.reshape(images, [images.shape[0], images.shape[2] * images.shape[3]])
    # Forward pass
    predicts = model(images)

    #########################
    # Use the built-in MNIST test set to display one image
    test_dataset = paddle.vision.datasets.MNIST(mode='test')
    test_data0 = np.array(test_dataset[batch_id][0])
    test_label_0 = np.array(test_dataset[batch_id][1])

    # plt.figure("Image")  # window title
    plt.figure(figsize=(2, 2))
    plt.imshow(test_data0, cmap=plt.cm.binary)
    plt.axis('on')  # set to 'off' to hide the axes
    plt.title('image')
    plt.show()
    break

# test_loader = paddle.io.DataLoader(test_dataset, batch_size=64, shuffle=False, num_workers=0)
# Print the inference results
# print("Inference result is {}, the corresponding label is {}".format(predicts, labels))
predicts_np = predicts.numpy()
labels_np = labels.numpy()
print("Predicted per-class scores:")
print(predicts_np[0])
print("Actual label:", labels_np[0])

# Inference result vector
predictions = np.array(predicts_np[0])
# Pick the label with the highest score
predicted_label = np.argmax(predictions)
# print(predictions)
print("The model's most likely label:", predicted_label)

# for batch_id, data in enumerate(test_loader):
#     images, labels = data
#     images = paddle.to_tensor(images).astype('float32')
#     labels = paddle.to_tensor(labels).astype('float32')
#     images = paddle.reshape(images, [images.shape[0], images.shape[2] * images.shape[3]])

total_accuracy = 0.0
total_samples = 0
for batch_id, data in enumerate(test_loader()):
    images, labels = data
    images = paddle.to_tensor(images).astype('float32')
    labels = paddle.to_tensor(labels).astype('float32')
    images = paddle.reshape(images, [images.shape[0], images.shape[2] * images.shape[3]])

    # Forward pass
    predicts = model(images)

    # Convert labels to integers, then to one-hot encoding
    labels_int = paddle.cast(labels, dtype='int64')
    labels_onehot = paddle.nn.functional.one_hot(labels_int, num_classes=10)

    # Loss: squared error averaged over the batch
    loss = paddle.nn.functional.square_error_cost(predicts, labels_onehot)
    avg_loss = paddle.mean(loss)

    # Predictions and batch accuracy
    probs = paddle.nn.functional.softmax(predicts, axis=1)
    predictions = paddle.argmax(probs, axis=1)
    correct = paddle.sum(paddle.cast(paddle.equal(predictions, labels_int), dtype='float32'))
    accuracy = correct / images.shape[0] * 100

    if batch_id % 10 == 0:
        print(" batch: {}, loss is: {}, accuracy is: {}".format(batch_id, avg_loss.numpy(), accuracy.numpy()))

    # Accumulate the accuracy weighted by sample count
    total_accuracy += accuracy.numpy() * images.shape[0]
    total_samples += images.shape[0]

# Average accuracy over the whole test set
accuracy_total = total_accuracy / total_samples
print("- Total Accuracy: {:.5f}%".format(accuracy_total))
```
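For the evaluation bookkeeping, paddle.metric.Accuracy offers an alternative to the manual correct/total counters; a hedged sketch (assuming the model and test_loader defined above, and that labels arrive with the same shapes as in the script):

```python
# Accumulate test accuracy with paddle.metric.Accuracy
metric = paddle.metric.Accuracy()
metric.reset()
model.eval()
for images, labels in test_loader():
    images = paddle.to_tensor(images).astype('float32')
    images = paddle.reshape(images, [images.shape[0], images.shape[2] * images.shape[3]])
    logits = model(images)
    labels_int = paddle.reshape(paddle.cast(paddle.to_tensor(labels), 'int64'), [-1, 1])
    metric.update(metric.compute(logits, labels_int))  # per-batch correct counts
print("test accuracy: {:.4f}".format(metric.accumulate()))
```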
