[Deep Learning] 1.2: A Simple Neural Network Implemented in Python

References:
[1] Zhihu column summary: https://zhuanlan.zhihu.com/p/21423252
[2] The "Neural Networks Demystified" video playlist on YouTube
The code below implements the neural network, but the weight matrices here are the transposes of the ones used in the YouTube videos: each matrix is stored as [to_layer, from_layer], and each example is a column vector.
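To make that convention concrete, here is a minimal shape check (the names W and x are hypothetical stand-ins for weights_input_to_hidden and one input example):

import numpy as np

# Shape check for the [to_layer, from_layer] weight convention used below.
W = np.zeros((2, 56))      # [hidden_nodes, input_nodes]
x = np.zeros((56, 1))      # one example as a column vector
print(np.dot(W, x).shape)  # (2, 1): one pre-activation per hidden node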
Choosing the parameters:
learning_rate: too large a step easily overshoots the optimum; too small a step converges slowly and easily gets trapped near a local optimum (see the sketch after this list).
hidden_nodes: too many hidden nodes tend to overfit; too few tend to underfit.
epochs: too many iterations tend to overfit and take longer to train; too few tend to underfit.
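To see the learning-rate trade-off in isolation, here is a small, purely illustrative sketch (not part of the network below) that runs plain gradient descent on f(x) = x**2, whose gradient is 2x:

# Hypothetical illustration of the learning-rate trade-off on f(x) = x**2,
# which has a single minimum at x = 0.
def gradient_descent(lr, steps=20, x=5.0):
    for _ in range(steps):
        x -= lr * 2 * x  # gradient of x**2 at x is 2x
    return x

print(gradient_descent(lr=1.1))   # step too large: the iterates diverge away from 0
print(gradient_descent(lr=0.01))  # step too small: after 20 steps still far from 0
print(gradient_descent(lr=0.3))   # moderate step: lands very close to the minimum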

Code:

import numpy as np

# Example: 56 input nodes, 2 hidden nodes, 1 output node.
# The hidden layer uses a sigmoid activation; the output layer is linear, f(x) = x.
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights; note the shapes: [to_layer, from_layer].
        self.weights_input_to_hidden = np.random.normal(
            0.0, self.hidden_nodes ** -0.5,
            (self.hidden_nodes, self.input_nodes))
        self.weights_hidden_to_output = np.random.normal(
            0.0, self.output_nodes ** -0.5,
            (self.output_nodes, self.hidden_nodes))
        self.lr = learning_rate

        # Activation function is the sigmoid function.
        def sigmoid(x):
            return 1 / (1 + np.exp(-x))
        self.activation_function = sigmoid

    def train(self, inputs_list, targets_list):
        # Convert the inputs list to a 2d column vector:
        # shape [feature_dimension, 1] = [56, 1]; the 1 means one example.
        inputs = np.array(inputs_list, ndmin=2).T
        targets = np.array(targets_list, ndmin=2).T  # [1, 1]

        ### Forward pass ###
        # Hidden layer
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)  # [hidden_nodes, 1] = [2, 1]
        hidden_outputs = self.activation_function(hidden_inputs)      # [hidden_nodes, 1] = [2, 1]
        # Output layer (linear, f(x) = x)
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)  # [output_nodes, 1]
        final_outputs = final_inputs  # [output_nodes, 1] = [1, 1]

        ### Backward pass ###
        # Output error
        output_errors = targets - final_outputs  # [output_nodes, 1]
        # Backpropagated error and sigmoid gradient at the hidden layer
        hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)  # [hidden_nodes, 1]
        hidden_grads = hidden_outputs * (1 - hidden_outputs)                    # [hidden_nodes, 1]

        # Update hidden-to-output weights with a gradient descent step.
        self.weights_hidden_to_output += np.dot(output_errors, hidden_outputs.T) * self.lr  # [output, hidden]
        # Update input-to-hidden weights with a gradient descent step.
        self.weights_input_to_hidden += np.dot(hidden_errors * hidden_grads, inputs.T) * self.lr  # [hidden, input]

    def run(self, inputs_list):
        # Run a forward pass through the network.
        inputs = np.array(inputs_list, ndmin=2).T
        # Hidden layer
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
        hidden_outputs = self.activation_function(hidden_inputs)
        # Output layer
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
        final_outputs = final_inputs
        return final_outputs
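As a usage sketch matching the 56-2-1 example above (the data here is random and purely hypothetical), one epoch means one pass over all training examples:

np.random.seed(42)

# Hypothetical smoke test: 100 random examples with 56 features and one target each.
X = np.random.rand(100, 56)
y = np.random.rand(100, 1)

network = NeuralNetwork(input_nodes=56, hidden_nodes=2,
                        output_nodes=1, learning_rate=0.1)

epochs = 50  # the iteration count discussed in the parameter notes above
for _ in range(epochs):
    for features, target in zip(X, y):
        network.train(features, target)

# run() returns shape [output_nodes, n_examples], so transpose to compare with y.
mse = np.mean((network.run(X).T - y) ** 2)
print("training MSE:", mse)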
