Julia gives different results on an AMD 5950X and an Intel 8300H, and I'm confused~~

As someone who only started using Julia recently, there is a problem that has been puzzling me. I ran the DynamicalSystems Julia package on my old gaming laptop (8300H) and on the 5950X machine I built this year, fed both the same time series, and got different results. What could be causing this?

Screenshots, please >…

Please post example code directly, together with the run results. Try to avoid screenshots so that others can run it.

Post an MWE, the simpler the better.

It could be an effect of random-number initialization.
Or it could be that the precision of the math libraries differs between the two platforms.

(Julia has recently been thinking about going back to the system libm again.)
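
As a quick illustration (not from the thread, and the inputs are arbitrary): you can print the exact bit patterns of a few elementary math calls on each machine. If any of them differ, small rounding differences can cascade through an iterative, threshold-based algorithm like PECUZAL and change its final output.

# A quick check (arbitrary inputs): print the exact bit pattern of a few math
# calls and compare the printed output between the two machines.
for (name, f, x) in (("sin", sin, 0.123456789), ("exp", exp, 1.23456789), ("log", log, 9.87654321))
    y = f(x)
    println(name, "(", x, ") = ", y, "   bits = ", bitstring(y))
end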

Sorry, this was my first post and I didn't know the conventions. Here are the code and the run results.
This is the code run on both the Intel and AMD platforms:

using SignalDecomposition
using DynamicalSystems, PyPlot

# Read the time series (one Float64 per line) from a text file
f = open("D:/Chaos/Test/Y.txt")
xtemp = readlines(f)
close(f)
x = map(vt -> parse(Float64, vt), xtemp)

# Split the series into a structured component s and a remainder r via manifold projection
m = 5
k = 30
Q = [2, 2, 2, 3, 3, 3, 3]
s, r = decompose(x, ManifoldProjection(m, Q, k))
err = nrmse(x, s)

theiler = estimate_delay(s, "mi_min") # estimate a Theiler window
T, Δt = length(s)-1, 1
t = 0:Δt:T
estimate_period(s, :zerocrossing, t)

Tmax = 700 # maximum possible delay
Y, τ_vals, ts_vals, Ls, εs = pecuzal_embedding(s; τs = 0:Tmax, w = theiler, econ = true)

The results on the 5950X platform:

julia> theiler = estimate_delay(s, "mi_min") # estimate a Theiler window
42

julia> estimate_period(s, :zerocrossing, t)
286.2608695652174

julia> Y, τ_vals, ts_vals, Ls, εs = pecuzal_embedding(s; τs = 0:Tmax , w = theiler, econ = true)
Initializing PECUZAL algorithm for univariate input...
Starting 1-th embedding cycle...
Starting 2-th embedding cycle...
Starting 3-th embedding cycle...
Starting 4-th embedding cycle...
Starting 5-th embedding cycle...
Starting 6-th embedding cycle...
Algorithm stopped due to increasing L-values. VALID embedding achieved ✓.
(6-dimensional Dataset{Float64} with 7313 points, [0, 507, 246, 687, 120, 358], ...)

And the results on the 8300H platform:

julia> theiler = estimate_delay(s, "mi_min") # estimate a Theiler window
42

julia> estimate_period(s, :zerocrossing, t)
286.2608695652174

julia> Y, τ_vals, ts_vals, Ls, εs = pecuzal_embedding(s; τs = 0:Tmax , w = theiler, econ = true)
Initializing PECUZAL algorithm for univariate input...
Starting 1-th embedding cycle...
Starting 2-th embedding cycle...
Starting 3-th embedding cycle...
Starting 4-th embedding cycle...
Starting 5-th embedding cycle...
Starting 6-th embedding cycle...
Algorithm stopped due to increasing L-values. VALID embedding achieved ✓.
(6-dimensional Dataset{Float64} with 7303 points, [0, 599, 248, 397, 697, 116], ...)

By the way, how do I upload my time-series file Y.txt?

So is this unsolvable? I built the 5950X machine precisely for productivity, so discovering this behaviour is frustrating. And from my practical comparison, the results from the 8300H platform actually suit my case better. Quite frustrating.

Is it long? If it's fairly long (> 1 KB), I think you could try to shrink the problem further.
Reproduce the issue with as little data as possible; that makes debugging easier.
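
For instance (a sketch reusing the variables from the code above; the prefix lengths are arbitrary), you could rerun the decomposition on progressively shorter prefixes of x and check whether the two machines already disagree there:

# Rerun the noise-reduction step on shorter and shorter prefixes of the series
# and note the smallest length at which the platform difference still appears.
for n in (500, 1000, 2000, 4000)
    x_small = x[1:min(n, length(x))]
    s_small, r_small = decompose(x_small, ManifoldProjection(m, Q, k))
    println("n = ", length(x_small), "   s_small[1:3] = ", s_small[1:3])
end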


(6-dimensional Dataset{Float64} with 7313 points
vs.
(6-dimensional Dataset{Float64} with 7303 points

Does this algorithm do some sampling of its own? The two datasets don't even have the same number of points.

Not sure. You could add the following at the very top of your code to fix the random seed:

using Random
Random.seed!(6452)

s, r = decompose(x, ManifoldProjection(m, Q, k))

Is the result s, r of this step identical on both machines? If not, roughly what order of magnitude is the maximum absolute difference?
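
One way to check (a sketch; the file names are made up): write s to a text file on each machine, copy both files to one place, and compare element-wise:

using DelimitedFiles

# On each machine, after computing s, dump it to a text file:
writedlm("s_5950x.txt", s)    # on the 5950X
# writedlm("s_8300h.txt", s)  # on the 8300H

# Then, with both files on one machine, compare them element-wise:
s_amd   = vec(readdlm("s_5950x.txt"))
s_intel = vec(readdlm("s_8300h.txt"))
println("max abs difference: ", maximum(abs.(s_amd .- s_intel)))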


For sharing longer plain text you can use pastebin.