Difference between the conv4 algorithm in the Julia library Knet and tensorflow.nn.conv2d on the same data

Here is Knet's conv4, but I can't quite follow the source; could someone briefly explain the principle, or the process?
Here is tensorflow.nn.conv2d.

For the Julia part, the core of the source code is as follows.

The weights are identical in both frameworks, as follows:

julia> testw
3×3×3 Array{Float32,3}:
[:, :, 1] =
  0.480015    0.408547   -0.0651456
  0.310477    0.0502024  -0.403383 
 -0.0508717  -0.285228   -0.418516 

[:, :, 2] =
  0.550379    0.440075   -0.081387
  0.345739    0.0406322  -0.453501
 -0.0586349  -0.33067    -0.48503 

[:, :, 3] =
  0.429471    0.373467   -0.0613601
  0.27477     0.0386808  -0.367223 
 -0.0574682  -0.26225    -0.350097 

The data is as follows:

julia> partappledata
5×5×3 Array{Float64,3}:
[:, :, 1] =
 41.063  41.063  41.063  41.063  41.063
 41.063  41.063  41.063  41.063  41.063
 41.063  41.063  41.063  41.063  41.063
 42.063  42.063  42.063  42.063  42.063
 42.063  42.063  42.063  42.063  42.063

[:, :, 2] =
 55.221  55.221  55.221  55.221  55.221
 55.221  55.221  55.221  55.221  55.221
 55.221  55.221  55.221  55.221  55.221
 56.221  56.221  56.221  56.221  56.221
 56.221  56.221  56.221  56.221  56.221

[:, :, 3] =
 60.343  60.343  60.343  60.343  60.343
 60.343  60.343  60.343  60.343  60.343
 60.343  60.343  60.343  60.343  60.343
 61.343  61.343  61.343  61.343  61.343
 62.343  62.343  62.343  62.343  62.343

The Julia convolution call is as follows:

using Pkg; for p in ("Knet","ArgParse"); haskey(Pkg.installed(), p) || Pkg.add(p); end
using LinearAlgebra
using CUDAdrv
using Knet, MAT, ArgParse, CuArrays, Statistics

arraytype = Array{Float64}
to_host(ypred) = arraytype(ypred)   # copy a result from the GPU back to a host Array

data = zeros(5, 5, 3, 1)            # conv4 expects 4-D tensors: (W, H, C, N)
data[:, :, :, 1] = partappledata[:, :, :]
v = param(reshape(testw, 3, 3, 3, 1); atype=KnetArray{Float64})  # 4-D filter: (W, H, Cin, Cout)
x = param(data; atype=KnetArray{Float64})                        # use the 4-D data, not the 3-D partappledata
y = conv4(v, x; padding=1)          # padding=1 keeps the 5×5 spatial size
ydata = to_host(y)

julia> ydata[:,:,1,1]
5×5 Array{Float64,2}:
194.694      120.055      120.055      120.055       -4.38531
140.102        0.368195     0.368195     0.368195  -115.278  
142.784        2.84226      2.84226      2.84226   -114.264  
144.647        3.42023      3.42023      3.42023   -115.046  
  0.798015  -130.909     -130.909     -130.909     -171.574 

In TensorFlow:

import tensorflow as tf
import numpy as np
sess=tf.Session()


immatdata[:,:,:] =
array([[[ 41.06299973,  55.22100067,  60.3430481 ],
        [ 41.06299973,  55.22100067,  60.3430481 ],
        [ 41.06299973,  55.22100067,  60.3430481 ],
        [ 41.06299973,  55.22100067,  60.3430481 ],
        [ 41.06299973,  55.22100067,  60.3430481 ]],

       [[ 41.06299973,  55.22100067,  60.3430481 ],
        [ 41.06299973,  55.22100067,  60.3430481 ],
        [ 41.06299973,  55.22100067,  60.3430481 ],
        [ 41.06299973,  55.22100067,  60.3430481 ],
        [ 41.06299973,  55.22100067,  60.3430481 ]],

       [[ 41.06299973,  55.22100067,  60.3430481 ],
        [ 41.06299973,  55.22100067,  60.3430481 ],
        [ 41.06299973,  55.22100067,  60.3430481 ],
        [ 41.06299973,  55.22100067,  60.3430481 ],
        [ 41.06299973,  55.22100067,  60.3430481 ]],

       [[ 42.06299973,  56.22100067,  61.3430481 ],
        [ 42.06299973,  56.22100067,  61.3430481 ],
        [ 42.06299973,  56.22100067,  61.3430481 ],
        [ 42.06299973,  56.22100067,  61.3430481 ],
        [ 42.06299973,  56.22100067,  61.3430481 ]],

       [[ 42.06299973,  56.22100067,  62.3430481 ],
        [ 42.06299973,  56.22100067,  62.3430481 ],
        [ 42.06299973,  56.22100067,  62.3430481 ],
        [ 42.06299973,  56.22100067,  62.3430481 ],
        [ 42.06299973,  56.22100067,  62.3430481 ]]], dtype=float32)

wdata=
array([[[ 0.4800154 ,  0.55037946,  0.42947057],
        [ 0.4085474 ,  0.44007453,  0.373467  ],
        [-0.06514555, -0.08138704, -0.06136011]],

       [[ 0.31047726,  0.34573907,  0.27476987],
        [ 0.05020237,  0.04063221,  0.03868078],
        [-0.40338343, -0.45350131, -0.36722335]],

       [[-0.05087169, -0.05863491, -0.05746817],
        [-0.28522751, -0.33066967, -0.26224968],
        [-0.41851634, -0.4850302 , -0.35009676]]], dtype=float32)

w = wdata.reshape(3, 3, 3, 1)                    # conv2d needs a 4-D HWIO filter: (H, W, Cin, Cout)
data = np.zeros([1, 5, 5, 3], dtype=np.float32)  # NHWC input; dtype must match the filter
data[0, :, :, :] = immatdata[:, :, :]
part_conv = tf.nn.conv2d(data, w, strides=[1, 1, 1, 1], padding="SAME")
part_conv_data = sess.run(part_conv)

In [58]: part_conv_data[0,:,:,0]
Out[58]: 
array([[-168.01942444, -128.39241028, -128.39241028, -128.39241028,
           0.46919155],
       [-115.27775574,    0.36819839,    0.36819839,    0.36819839,
         140.10180664],
       [-117.4095459 ,   -1.93058014,   -1.93058014,   -1.93058014,
         139.05670166],
       [-119.1164856 ,   -2.76397705,   -2.76397705,   -2.76397705,
         139.79747009],
       [  -4.79425049,  122.31194305,  122.31194305,  122.31194305,
         198.7494812 ]], dtype=float32)

I want the TensorFlow and Julia results to match, because I am porting a Julia convolution implementation to TensorFlow for an experiment. I suspect one of two problems: the convolution mode, or floating-point precision.

If it is a difference in convolution mode, that can be adjusted. Comparing the two outputs, some entries repeat and the overall pattern looks mirrored, so my guess is that this is the difference between convolution and cross-correlation. However, I tried the function tensorflow.nn.convolution, whose docstring describes this, and the result was no different.
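
To make the suspected difference concrete, here is a minimal sketch of convolution versus cross-correlation on toy data (my own illustration; it uses scipy.signal, which the code above does not import):

import numpy as np
from scipy.signal import convolve2d, correlate2d

a = np.arange(25, dtype=np.float64).reshape(5, 5)   # toy 5x5 input
k = np.arange(9, dtype=np.float64).reshape(3, 3)    # toy 3x3 kernel

conv = convolve2d(a, k, mode="same")    # true convolution: the kernel is flipped
corr = correlate2d(a, k, mode="same")   # cross-correlation: no flip

# The two coincide once the kernel is rotated 180 degrees:
assert np.allclose(conv, correlate2d(a, np.rot90(k, 2), mode="same"))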

Going back to Julia, I originally wondered whether the mode parameter of conv4 was the cause, but setting mode=1 or mode=0 made no difference. The documentation says mode=0 corresponds to convolution and mode=1 to cross-correlation.

Experiments confirmed that the convolution in TensorFlow is cross-correlation: it produces the same result as Julia's conv4 with mode=1.
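
As a standalone sanity check (my own sketch, not from the original experiment, under the assumption that a single-channel case is representative), tf.nn.conv2d with SAME padding agrees with scipy.signal.correlate2d, i.e. it computes cross-correlation:

import numpy as np
import tensorflow as tf
from scipy.signal import correlate2d

a = np.random.rand(5, 5).astype(np.float32)
k = np.random.rand(3, 3).astype(np.float32)

out = tf.nn.conv2d(a.reshape(1, 5, 5, 1),   # NHWC input
                   k.reshape(3, 3, 1, 1),   # HWIO filter
                   strides=[1, 1, 1, 1], padding="SAME")
with tf.Session() as sess:
    res = sess.run(out)[0, :, :, 0]

assert np.allclose(res, correlate2d(a, k, mode="same"), atol=1e-4)
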
So the remaining question is how to turn cross-correlation into convolution.

Solved in the end: rotate every convolution kernel by 180 degrees. I am doing transfer learning, so I need to keep the existing pretrained kernels; see rot90, or implement the rotation yourself.
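
For concreteness, a minimal sketch of that fix applied to the filter bank above (it reuses wdata, data and sess from the TensorFlow snippet; the rotation itself is just np.rot90 over the two spatial axes):

import numpy as np

# Rotate every 3x3 slice of the HWIO filter bank by 180 degrees, i.e. flip both
# spatial axes; the channel axes (2, 3) are left untouched.
w_flipped = np.rot90(wdata.reshape(3, 3, 3, 1), k=2, axes=(0, 1))

# Cross-correlation with the rotated kernels equals convolution with the
# original ones, matching Knet's conv4 default (mode=0).
part_conv = tf.nn.conv2d(data, w_flipped, strides=[1, 1, 1, 1], padding="SAME")
part_conv_data = sess.run(part_conv)

np.rot90 with k=2 and axes=(0, 1) flips rows and columns in one call, so no per-filter loop is needed.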