Experiment 1: Implementing Logistic Regression from Scratch (Using Only Tensor- and NumPy-Related Libraries)



import torch
from IPython import display
from matplotlib import pyplot as plt 
from torch import nn
import numpy as np
import random

1. Generate the dataset

# number of features
num_inputs = 2
# number of examples
num_examples = 1000
true_w = [2.1, -3.0]
true_b = 1.3
# generate 1000*2 random numbers as the feature matrix
features = torch.tensor(np.random.normal(0, 1, (num_examples, num_inputs)), dtype=torch.float)
# generate the corresponding labels from w and b via the sigmoid
labels = 1 / (1 + torch.exp(-(true_w[0] * features[:, 0] + true_w[1] * features[:, 1] + true_b)))
# add noise
labels += torch.tensor(np.random.normal(0, 0.01, size=labels.size()), dtype=torch.float)
num0 = 0
num1 = 0
# threshold at 0.5 to turn the probabilities into binary labels
for i in range(num_examples):
    if labels[i] < 0.5:
        labels[i] = 0
        num0 += 1
    else:
        labels[i] = 1
        num1 += 1
# print(labels)
labels = labels.view(num_examples, 1)  # reshape labels into a 1000*1 matrix
# print(labels)

def use_svg_display():
    # render figures as vector graphics
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # set the figure size
    plt.rcParams['figure.figsize'] = figsize

set_figsize()
plt.scatter(features[:, 1].numpy(), labels.numpy(), 1)
<matplotlib.collections.PathCollection at 0x7fba09d012b0>


[Figure: scatter plot of the second feature against the binary labels (output_5_1.svg); original image link broken]
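The counters num0 and num1 kept in the loop above make a quick sanity check possible (a minimal sketch reusing those variables; the counts themselves were not recorded in the original run):

# inspect the class balance produced by the 0.5 threshold
print('examples with label 0:', num0)
print('examples with label 1:', num1)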

2. Read the data

num_inputs = 2

def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))  # [0, 1, ..., 998, 999]
    random.shuffle(indices)  # samples are read in random order
    for i in range(0, num_examples, batch_size):
        # the last batch may contain fewer than batch_size examples
        j = torch.LongTensor(indices[i: min(i + batch_size, num_examples)])
        yield features.index_select(0, j), labels.index_select(0, j)
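To verify the iterator, one batch can be drawn and its shape inspected (a minimal sketch using the definitions above; the shapes in the comments are what the construction implies, not recorded output):

# draw a single mini-batch and check its shape
X, y = next(data_iter(10, features, labels))
print(X.shape)  # expected: torch.Size([10, 2])
print(y.shape)  # expected: torch.Size([10, 1])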

3. Build the model by hand

w = torch.tensor(np.random.normal(0, 0.01, (num_inputs, 1)), dtype=torch.float32)
b = torch.zeros(1, dtype=torch.float32)
w.requires_grad_(requires_grad=True)
b.requires_grad_(requires_grad=True)
tensor([0.], requires_grad=True)
# build the logistic (sigmoid) model
def logistic_regression(x, w, b):
    return 1 / (1 + torch.exp(-(torch.mm(x, w) + b)))
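Before training, a quick forward pass (a minimal sketch with the freshly initialized w and b; the output values were not recorded in the original post) confirms that the model maps inputs to probabilities strictly between 0 and 1:

# forward a few samples through the untrained model
sample_probs = logistic_regression(features[:5], w, b)
print(sample_probs)  # a 5x1 tensor with every entry in (0, 1)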

4. Define the loss function

def bce_loss(y_hat, y):
    # binary cross-entropy; note the base-10 logarithm is used here,
    # which only rescales the standard natural-log BCE by a constant factor
    return -1 * (y * torch.log10(y_hat) + (1 - y) * torch.log10(1 - y_hat))
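One caveat: bce_loss returns NaN if y_hat ever reaches exactly 0 or 1, since log(0) is undefined. A common guard, shown here as a hypothetical variant (bce_loss_stable and eps are illustrative names, not part of the original post), clamps the predictions away from the boundaries:

def bce_loss_stable(y_hat, y, eps=1e-7):
    # clamp predictions into [eps, 1 - eps] to avoid taking log(0)
    y_hat = torch.clamp(y_hat, eps, 1 - eps)
    return -1 * (y * torch.log10(y_hat) + (1 - y) * torch.log10(1 - y_hat))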

5. Define the optimizer function

def sgd(params, lr, batch_size):
    for param in params:
        # update on .data so the step itself is not tracked by autograd;
        # dividing by batch_size averages the summed batch gradient
        param.data -= lr * param.grad / batch_size
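Because the training loop below sums the per-sample losses before calling backward(), dividing by batch_size inside sgd turns the summed gradient into a per-sample average. A tiny check of the update rule (p is an illustrative dummy parameter, not from the original post):

# one SGD step on a dummy parameter: expect 1 - 0.1 * 4 / 2 = 0.8
p = torch.tensor([1.0], requires_grad=True)
(p * 4.0).sum().backward()  # gradient of 4*p w.r.t. p is 4
sgd([p], 0.1, 2)
print(p.data)  # tensor([0.8000])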

6. Training

# hyperparameter initialization
lr = 0.03        # learning rate
num_epochs = 20  # number of training epochs
batch_size = 10
net = logistic_regression
loss = bce_loss
# training: each of the 20 epochs consumes every sample once in mini-batches;
# X is the feature batch and y is the label batch
for epoch in range(num_epochs):
    for X, y in data_iter(batch_size, features, labels):
        l = loss(net(X, w, b), y).sum()  # summed loss of the batch
        l.backward()                     # compute the gradients
        sgd([w, b], lr, batch_size)      # mini-batch stochastic gradient descent step
        w.grad.data.zero_()              # reset the parameter gradients
        b.grad.data.zero_()
    train_l = loss(net(features, w, b), labels)  # loss over the full training set
    print('epoch %d, loss %f' % (epoch + 1, train_l.mean().item()))
epoch 1, loss 0.112721
epoch 2, loss 0.108413
epoch 3, loss 0.104645
epoch 4, loss 0.101311
epoch 5, loss 0.098335
epoch 6, loss 0.095655
epoch 7, loss 0.093226
epoch 8, loss 0.091011
epoch 9, loss 0.088979
epoch 10, loss 0.087107
epoch 11, loss 0.085374
epoch 12, loss 0.083763
epoch 13, loss 0.082262
epoch 14, loss 0.080858
epoch 15, loss 0.079540
epoch 16, loss 0.078300
epoch 17, loss 0.077130
epoch 18, loss 0.076024
epoch 19, loss 0.074977
epoch 20, loss 0.073984
print(true_w, '\n', w)
[2.1, -3.0] 
 tensor([[ 1.9279],
        [-2.6933]], requires_grad=True)
print(true_b, '\n', b)
1.3 
 tensor([1.2855], requires_grad=True)
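The learned parameters track the true ones closely. To complement the loss values, a training-set accuracy check can be added (a minimal sketch; the 0.5 threshold mirrors the labeling step above, and the printed value is not a recorded result):

# classify with the learned parameters and measure training accuracy
with torch.no_grad():
    preds = (logistic_regression(features, w, b) >= 0.5).float()
    acc = (preds == labels).float().mean().item()
print('training accuracy: %.4f' % acc)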
