Writing code in Keras: the Dense layer output has an extra dimension — asking for help

Diagnosis and classification of one-dimensional bearing-fault data
I originally wrote a gated dilated convolution block in TensorFlow, followed by two bidirectional LSTM layers and two Dense layers, and that program runs fine. To make plotting easier, I rewrote the code in Keras. The gated dilated convolution is code I wrote myself; the TensorFlow version is as follows:

    def conv1d_block(self, x, filters, kernel_size, dr, pd='SAME', name="name"):
        """gated dilation conv1d layer"""
        glu = tf.sigmoid(v1d(x, filters, kernel_size, dilation_rate=dr, padding=pd))
        print('glu=', glu)
        conv = tf.tanh(v1d(x, filters, kernel_size, dilation_rate=dr, padding=pd))
        print('conv=', conv)
        conv3 = v1d(x, filters, kernel_size, dilation_rate=dr, padding=pd)
        cog = tf.multiply(conv, glu)
        cog1 = tf.multiply(1 - glu, conv3)
        outputs = tf.add(cog, cog1)
        print('outputs=', outputs)
        return outputs
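
For context (an editor's aside, not part of the original post): the block combines three parallel dilated 1-D convolutions of the same input through a sigmoid gate, roughly outputs = tanh(a) * sigmoid(b) + (1 - sigmoid(b)) * c, where v1d is assumed to be the author's conv1d-style helper. A minimal standalone sketch of the same gating, using tf.layers.conv1d as a stand-in and made-up hyperparameters:

    # Minimal sketch (editor's illustration; assumes TF 1.x graph mode,
    # 16 filters, kernel size 3, dilation rate 2, and a dummy 1-D input).
    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=(None, 1200, 1))  # dummy 1-D vibration signal

    def branch(inputs):
        # one dilated 1-D convolution branch; all three branches share hyperparameters
        return tf.layers.conv1d(inputs, filters=16, kernel_size=3,
                                dilation_rate=2, padding='same')

    glu = tf.sigmoid(branch(x))       # gate in (0, 1)
    conv = tf.tanh(branch(x))         # candidate features
    conv3 = branch(x)                 # linear branch
    outputs = tf.add(tf.multiply(conv, glu), tf.multiply(1 - glu, conv3))
    print(outputs)                    # (batch, steps, 16), i.e. three axes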

The Keras version of the code is as follows:

    def conv1d_block(input, filters, kernerl_size, strides, conv_padding, dr, name):
        """gated dilation conv1d layer"""
        glu = Conv1D(filters=filters, kernel_size=kernerl_size, strides=strides, padding=conv_padding, dilation_rate=dr)(input)
        glu = k.sigmoid(glu)
        print('glu=', glu)
        conv = Conv1D(filters=filters, kernel_size=kernerl_size, strides=strides, padding=conv_padding, dilation_rate=dr)(input)
        conv = k.tanh(conv)
        print('conv=', conv)
        conv3 = Conv1D(filters=filters, kernel_size=kernerl_size, strides=strides, padding=conv_padding, dilation_rate=dr)(input)
        weight_1 = Lambda(lambda x: x * 0.8)
        weight_2 = Lambda(lambda x: x * 0.2)
        weight_gru1 = weight_1(conv)
        weight_gru2 = weight_2(glu)
        weight_gru3 = weight_1(1 - glu)
        weight_gru4 = weight_2(conv3)
        cog = Multiply()([weight_gru1, weight_gru2])
        cog1 = Multiply()([weight_gru3, weight_gru4])
        outputs = Add()([cog, cog1])
        print('outputs=', outputs)
        return outputs

When the prints run, the Keras output is:

    inputs_01 (?, 1200, 1)
    glu= Tensor("Sigmoid:0", shape=(?, 1200, 16), dtype=float32)
    conv= Tensor("Tanh:0", shape=(?, 1200, 16), dtype=float32)
    outputs= Tensor("add_1/add:0", shape=(?, 1200, 16), dtype=float32)
    glu= Tensor("Sigmoid_1:0", shape=(?, 1200, 16), dtype=float32)
    conv= Tensor("Tanh_1:0", shape=(?, 1200, 16), dtype=float32)
    outputs= Tensor("add_2/add:0", shape=(?, 1200, 16), dtype=float32)
    glu= Tensor("Sigmoid_2:0", shape=(?, 1200, 16), dtype=float32)
    conv= Tensor("Tanh_2:0", shape=(?, 1200, 16), dtype=float32)
    outputs= Tensor("add_3/add:0", shape=(?, 1200, 16), dtype=float32)
    glu= Tensor("Sigmoid_3:0", shape=(?, 1200, 16), dtype=float32)
    conv= Tensor("Tanh_3:0", shape=(?, 1200, 16), dtype=float32)
    outputs= Tensor("add_4/add:0", shape=(?, 1200, 16), dtype=float32)
    con4 Tensor("add_4/add:0", shape=(?, 1200, 16), dtype=float32)
    lstm Tensor("lstmt1/transpose_1:0", shape=(?, ?, 100), dtype=float32)
    lstm2 Tensor("lstmt2/transpose_1:0", shape=(?, ?, 100), dtype=float32)
    megre============================================================ (?, ?, 200)
    merge1========================= Tensor("lambda_9/ExpandDims:0", shape=(?, ?, 200, 1), dtype=float32)
    dense1 Tensor("Dense1/Relu:0", shape=(?, 1200, 200, 200), dtype=float32)
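
An editor's aside on reading this log (not part of the original post): Keras's Dense layer only transforms the last axis and keeps every leading axis, so once merge1 expands the merged BiLSTM output to a 4-D tensor of shape (?, ?, 200, 1), Dense1 with 200 units produces (?, 1200, 200, 200). A minimal sketch of just that behaviour, with made-up layer names and shapes taken from the log:

    # Minimal sketch (editor's illustration): Dense acts on the last axis only,
    # so a 4-D input gives a 4-D output with the same leading axes.
    from keras.layers import Input, Dense, Lambda
    from keras import backend as K

    x = Input(shape=(1200, 200))                      # stands in for the merged BiLSTM output
    expanded = Lambda(lambda t: K.expand_dims(t))(x)  # (?, 1200, 200, 1), like merge1
    y = Dense(200, activation='relu')(expanded)       # kernel applied along the last axis
    print(y)                                          # shape=(?, 1200, 200, 200)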

The TensorFlow output is:

    input_shape= (1200,)
    glu= Tensor("Sigmoid:0", shape=(?, 1, 16), dtype=float32)
    conv= Tensor("Tanh:0", shape=(?, 1, 16), dtype=float32)
    outputs= Tensor("Add:0", shape=(?, 1, 16), dtype=float32)
    cov1 (?, 1, 16)
    glu= Tensor("Sigmoid_1:0", shape=(?, 1, 16), dtype=float32)
    conv= Tensor("Tanh_1:0", shape=(?, 1, 16), dtype=float32)
    outputs= Tensor("Add_1:0", shape=(?, 1, 16), dtype=float32)
    cov2 (?, 1, 16)
    glu= Tensor("Sigmoid_2:0", shape=(?, 1, 16), dtype=float32)
    conv= Tensor("Tanh_2:0", shape=(?, 1, 16), dtype=float32)
    outputs= Tensor("Add_2:0", shape=(?, 1, 16), dtype=float32)
    cov3 (?, 1, 16)
    glu= Tensor("Sigmoid_3:0", shape=(?, 1, 16), dtype=float32)
    conv= Tensor("Tanh_3:0", shape=(?, 1, 16), dtype=float32)
    layer_output (?, 200)
    (?, 10)
    self.layer_final_output Tensor("dense2/BiasAdd:0", shape=(?, 10), dtype=float32)

So besides the batch axis, the TensorFlow Dense output has only one dimension, but in Keras it has two, and the extra dimension is the length of the input. On top of that, when building the Keras Model and running it, the following error appears:

    Traceback (most recent call last):
      File "D:/1111/NEW Train/Gru-train/Keras-GRU.py", line 151, in <module>
        model = Model(inputs=inputs_01, outputs=Dense2)
      File "C:\ProgramData\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
        return func(*args, **kwargs)
      File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\network.py", line 93, in __init__
        self._init_graph_network(*args, **kwargs)
      File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\network.py", line 231, in _init_graph_network
        self.inputs, self.outputs)
      File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\network.py", line 1366, in _map_graph_network
        tensor_index=tensor_index)
      File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\network.py", line 1353, in build_map
        node_index, tensor_index)
      File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\network.py", line 1353, in build_map
        node_index, tensor_index)
      File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\network.py", line 1353, in build_map
        node_index, tensor_index)
      [Previous line repeated 4 more times]
      File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\network.py", line 1325, in build_map
        node = layer._inbound_nodes[node_index]
    AttributeError: 'NoneType' object has no attribute '_inbound_nodes'

This problem has been bothering me for many days and I still haven't solved it. I don't know whether the code block I wrote is at fault or whether something went wrong elsewhere (after this block, the Keras model is just two ordinary LSTM layers and two Dense layers). Any help would be much appreciated, thanks.
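
An editor's aside for readers who hit the same traceback: in the Keras 2.x functional API, "AttributeError: 'NoneType' object has no attribute '_inbound_nodes'" typically means that some tensor in the model graph was produced by a raw backend op rather than by a Keras layer, so Model cannot trace it back to an Input. In the block above, k.sigmoid(glu), k.tanh(conv) and the bare expression 1 - glu are all such raw tensors. This is not necessarily the only issue here, but the usual workaround is to keep every operation inside a layer (Activation or Lambda). A minimal sketch, which mirrors the arithmetic of the TensorFlow version and leaves out the 0.8/0.2 weighting Lambdas:

    # Minimal sketch (editor's illustration): every op is a Keras layer, so the
    # functional-API graph stays traceable and _inbound_nodes is always present.
    from keras.layers import Conv1D, Activation, Lambda, Multiply, Add

    def gated_conv1d_block(x, filters, kernel_size, strides, conv_padding, dr):
        glu = Conv1D(filters, kernel_size, strides=strides, padding=conv_padding,
                     dilation_rate=dr)(x)
        glu = Activation('sigmoid')(glu)                # layer, not k.sigmoid(...)
        conv = Conv1D(filters, kernel_size, strides=strides, padding=conv_padding,
                      dilation_rate=dr)(x)
        conv = Activation('tanh')(conv)                 # layer, not k.tanh(...)
        conv3 = Conv1D(filters, kernel_size, strides=strides, padding=conv_padding,
                       dilation_rate=dr)(x)
        one_minus_glu = Lambda(lambda t: 1.0 - t)(glu)  # layer, not the bare 1 - glu
        cog = Multiply()([conv, glu])
        cog1 = Multiply()([one_minus_glu, conv3])
        return Add()([cog, cog1])

Even with the graph fixed, the (?, 1200, 200, 200) shape would remain as long as the ExpandDims sits in front of Dense1, since Dense only maps the last axis; reproducing the (?, 200) tensor from the TensorFlow run would also need some reshaping or pooling before the Dense layers.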
