Error when testing CUDAnative

Julia version: 1.4.0-rc1.0
After installing CUDA, testing CUDAnative fails with the errors below.
Has anyone run into the same problem? How can it be fixed?

julia> Pkg.test(["CUDAnative"])
    Testing CUDAnative
Status `/tmp/jl_CjB770/Manifest.toml`
  [621f4979] AbstractFFTs v0.5.0
  [79e6a3ab] Adapt v1.0.0
  [b99e7846] BinaryProvider v0.5.8
  [fa961155] CEnum v0.2.0
  [3895d2a7] CUDAapi v3.1.0
  [c5f51814] CUDAdrv v6.0.0
  [be33ccc6] CUDAnative v2.10.2
  [3a865a2d] CuArrays v1.7.2
  [864edb3b] DataStructures v0.17.9
  [0c68f7d7] GPUArrays v2.0.1
  [929cbde3] LLVM v1.3.3
  [1914dd2f] MacroTools v0.5.3
  [872c559c] NNlib v0.6.4
  [bac558e1] OrderedCollections v1.1.0
  [ae029012] Requires v0.5.2
  [a759f4b9] TimerOutputs v0.5.3
  [2a0f44e3] Base64 
  [8ba89e20] Distributed 
  [b77e0a4c] InteractiveUtils 
  [8f399da3] Libdl 
  [37e2e46d] LinearAlgebra 
  [56ddb016] Logging 
  [d6f4376e] Markdown 
  [de0858da] Printf 
  [9a3f8284] Random 
  [ea8e919c] SHA 
  [9e88b42a] Serialization 
  [6462fe0b] Sockets 
  [2f01184e] SparseArrays 
  [10745b16] Statistics 
  [8dfed614] Test 
  [4ec0a83e] Unicode 
┌ Warning: Incompatibility detected between CUDA and LLVM 8.0+; disabling debug info emission for CUDA kernels
└ @ CUDAnative ~/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:114
[ Info: Testing using device GeForce GTX 1060 (compute capability 6.1.0, 5.798 GiB available memory) on CUDA driver 10.2.0 and toolkit 10.2.89
┌ Warning: Incompatibility detected between CUDA and LLVM 8.0+; disabling debug info emission for CUDA kernels
└ @ CUDAnative ~/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:114
ERROR: LoadError: CUDA error: a PTX JIT compilation failed (code 218, ERROR_INVALID_PTX)
ptxas application ptx input, line 29; error   : Feature 'WMMA with floating point types' requires .target sm_70 or higher
ptxas application ptx input, line 29; error   : Modifier '.m16n16k16' requires .target sm_70 or higher
ptxas application ptx input, line 30; error   : Feature 'WMMA with floating point types' requires .target sm_70 or higher
ptxas application ptx input, line 30; error   : Modifier '.m16n16k16' requires .target sm_70 or higher
ptxas application ptx input, line 31; error   : Feature 'WMMA with floating point types' requires .target sm_70 or higher
ptxas application ptx input, line 31; error   : Modifier '.m16n16k16' requires .target sm_70 or higher
ptxas application ptx input, line 40; error   : Feature 'WMMA with floating point types' requires .target sm_70 or higher
ptxas application ptx input, line 40; error   : Modifier '.m16n16k16' requires .target sm_70 or higher
ptxas application ptx input, line 45; error   : Feature 'WMMA with floating point types' requires .target sm_70 or higher
ptxas application ptx input, line 45; error   : Modifier '.m16n16k16' requires .target sm_70 or higher
ptxas fatal   : Ptx assembly aborted due to errors
Stacktrace:
 [1] CUDAdrv.CuModule(::String, ::Dict{CUDAdrv.CUjit_option_enum,Any}) at /home/soliva/.julia/packages/CUDAdrv/b1mvw/src/module.jl:40
 [2] macro expansion at /home/soliva/.julia/packages/CUDAnative/hfulr/src/execution.jl:423 [inlined]
 [3] cufunction(::typeof(kernel), ::Type{Tuple{CuDeviceArray{Float16,2,CUDAnative.AS.Global},CuDeviceArray{Float16,2,CUDAnative.AS.Global},CuDeviceArray{Float32,2,CUDAnative.AS.Global},CuDeviceArray{Float32,2,CUDAnative.AS.Global}}}; name::Nothing, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /home/soliva/.julia/packages/CUDAnative/hfulr/src/execution.jl:360
 [4] cufunction(::Function, ::Type{T} where T) at /home/soliva/.julia/packages/CUDAnative/hfulr/src/execution.jl:360
 [5] top-level scope at /home/soliva/.julia/packages/CUDAnative/hfulr/src/execution.jl:179
 [6] include(::Module, ::String) at ./Base.jl:377
 [7] exec_options(::Base.JLOptions) at ./client.jl:288
 [8] _start() at ./client.jl:484
in expression starting at /home/soliva/.julia/packages/CUDAnative/hfulr/examples/wmma/high-level.jl:42
example = wmma/high-level.jl: Test Failed at /home/soliva/.julia/packages/CUDAnative/hfulr/test/examples.jl:34
  Expression: rv
Stacktrace:
 [1] macro expansion at /home/soliva/.julia/packages/CUDAnative/hfulr/test/examples.jl:34 [inlined]
 [2] macro expansion at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/Test/src/Test.jl:1186 [inlined]
 [3] (::var"#680#683"{String})() at /home/soliva/.julia/packages/CUDAnative/hfulr/test/examples.jl:20
┌ Warning: Incompatibility detected between CUDA and LLVM 8.0+; disabling debug info emission for CUDA kernels
└ @ CUDAnative ~/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:114
ERROR: LoadError: CUDA error: a PTX JIT compilation failed (code 218, ERROR_INVALID_PTX)
ptxas application ptx input, line 29; error   : Feature 'WMMA with floating point types' requires .target sm_70 or higher
ptxas application ptx input, line 29; error   : Modifier '.m16n16k16' requires .target sm_70 or higher
ptxas application ptx input, line 30; error   : Feature 'WMMA with floating point types' requires .target sm_70 or higher
ptxas application ptx input, line 30; error   : Modifier '.m16n16k16' requires .target sm_70 or higher
ptxas application ptx input, line 31; error   : Feature 'WMMA with floating point types' requires .target sm_70 or higher
ptxas application ptx input, line 31; error   : Modifier '.m16n16k16' requires .target sm_70 or higher
ptxas application ptx input, line 32; error   : Feature 'WMMA with floating point types' requires .target sm_70 or higher
ptxas application ptx input, line 32; error   : Modifier '.m16n16k16' requires .target sm_70 or higher
ptxas application ptx input, line 37; error   : Feature 'WMMA with floating point types' requires .target sm_70 or higher
ptxas application ptx input, line 37; error   : Modifier '.m16n16k16' requires .target sm_70 or higher
ptxas fatal   : Ptx assembly aborted due to errors
Stacktrace:
 [1] CUDAdrv.CuModule(::String, ::Dict{CUDAdrv.CUjit_option_enum,Any}) at /home/soliva/.julia/packages/CUDAdrv/b1mvw/src/module.jl:40
 [2] macro expansion at /home/soliva/.julia/packages/CUDAnative/hfulr/src/execution.jl:423 [inlined]
 [3] cufunction(::typeof(kernel), ::Type{Tuple{CuDeviceArray{Float16,2,CUDAnative.AS.Global},CuDeviceArray{Float16,2,CUDAnative.AS.Global},CuDeviceArray{Float32,2,CUDAnative.AS.Global},CuDeviceArray{Float32,2,CUDAnative.AS.Global}}}; name::Nothing, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /home/soliva/.julia/packages/CUDAnative/hfulr/src/execution.jl:360
 [4] cufunction(::Function, ::Type{T} where T) at /home/soliva/.julia/packages/CUDAnative/hfulr/src/execution.jl:360
 [5] top-level scope at /home/soliva/.julia/packages/CUDAnative/hfulr/src/execution.jl:179
 [6] include(::Module, ::String) at ./Base.jl:377
 [7] exec_options(::Base.JLOptions) at ./client.jl:288
 [8] _start() at ./client.jl:484
in expression starting at /home/soliva/.julia/packages/CUDAnative/hfulr/examples/wmma/low-level.jl:40
example = wmma/low-level.jl: Test Failed at /home/soliva/.julia/packages/CUDAnative/hfulr/test/examples.jl:34
  Expression: rv
Stacktrace:
 [1] macro expansion at /home/soliva/.julia/packages/CUDAnative/hfulr/test/examples.jl:34 [inlined]
 [2] macro expansion at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/Test/src/Test.jl:1186 [inlined]
 [3] (::var"#680#683"{String})() at /home/soliva/.julia/packages/CUDAnative/hfulr/test/examples.jl:20
Test Summary:                           | Pass  Fail  Total
CUDAnative                              |  522     2    524
  base interface                        |             No tests
  pointer                               |   20           20
  code generation                       |   92           92
  code generation (relying on a device) |    8            8
  execution                             |   77           77
  pointer                               |   41           41
  device arrays                         |   20           20
  CUDA functionality                    |  251          251
  examples                              |    6     2      8
    example = hello_world.jl            |    1            1
    example = pairwise.jl               |    1            1
    example = peakflops.jl              |    1            1
    example = reduce/verify.jl          |    1            1
    example = scan.jl                   |    1            1
    example = vadd.jl                   |    1            1
    example = wmma/high-level.jl        |          1      1
    example = wmma/low-level.jl         |          1      1
ERROR: LoadError: Some tests did not pass: 522 passed, 2 failed, 0 errored, 0 broken.
in expression starting at /home/soliva/.julia/packages/CUDAnative/hfulr/test/runtests.jl:8
ERROR: Package CUDAnative errored during testing
Stacktrace:
 [1] pkgerror(::String, ::Vararg{String,N} where N) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/Pkg/src/Types.jl:53
 [2] test(::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}; coverage::Bool, julia_args::Cmd, test_args::Cmd, test_fn::Nothing) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/Pkg/src/Operations.jl:1503
 [3] test(::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}; coverage::Bool, test_fn::Nothing, julia_args::Cmd, test_args::Cmd, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/Pkg/src/API.jl:316
 [4] test(::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/Pkg/src/API.jl:303
 [5] #test#68 at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/Pkg/src/API.jl:297 [inlined]
 [6] test at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/Pkg/src/API.jl:297 [inlined]
 [7] #test#67 at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/Pkg/src/API.jl:296 [inlined]
 [8] test(::Array{String,1}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/Pkg/src/API.jl:296
 [9] top-level scope at REPL[7]:1

I found an issue that has already been resolved:

https://github.com/JuliaGPU/CUDAnative.jl/issues/428

Are you on the latest version of CUDAnative? You could try `] up CUDAnative`, or remove the package and install the master branch from GitHub.
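
For example, in the Pkg REPL (press `]` at the `julia>` prompt; these are standard Pkg commands, shown here as a sketch):

pkg> up CUDAnative

or, to try the development version instead:

pkg> rm CUDAnative
pkg> add CUDAnative#master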

If the problem persists on the latest release or on master, you can reply in that issue. Remember to include your Julia version, the package versions, your GPU model, and the driver version.

By the way, does `using CUDAnative` itself throw errors? Anything that doesn't affect your actual usage can basically be ignored.
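
Here is a quick smoke test, as a minimal sketch assuming the CUDAnative 2.x kernel API (`@cuda`, `threadIdx`) together with CuArrays from the manifest above; if this runs without errors, the failing WMMA examples don't affect ordinary kernels:

julia> using CUDAnative, CuArrays

julia> function add_one!(x)          # trivial kernel: each thread increments one element
           i = threadIdx().x
           @inbounds x[i] += 1f0
           return nothing
       end

julia> a = CuArray(ones(Float32, 16));

julia> @cuda threads=16 add_one!(a)  # launch with 16 threads, one per element

julia> all(Array(a) .== 2f0)         # copy back to the host and verify
true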


A similar problem; reportedly it is caused by a mismatch between the architecture the code was compiled for and the one it actually runs on:

https://github.com/open-mmlab/mmdetection/issues/766

I asked the authors: this is just part of their test suite and can safely be ignored.
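
That matches the ptxas output above: the two failing wmma examples use Tensor Core WMMA instructions, which require compute capability 7.0 (sm_70, i.e. Volta) or newer, while the GeForce GTX 1060 is a Pascal card at 6.1. As a minimal sketch using the CUDAdrv API, you can confirm what your device reports:

julia> using CUDAdrv

julia> capability(CuDevice(0))   # GTX 1060 reports 6.1; WMMA needs at least 7.0
v"6.1.0"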