Lux.jl
v1.6.2
```julia
import Pkg
Pkg.add("Lux")
```
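GPU acceleration is provided by separate backend packages (see the optional imports in the quickstart below), so install the one matching your hardware. A minimal sketch:

```julia
import Pkg
Pkg.add("LuxCUDA")  # NVIDIA GPUs; alternatively AMDGPU, Metal, or oneAPI
```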
Tip: If you are using a version of Lux.jl prior to v1, see the "Updating to v1" section for instructions on how to update.
| Package | Stable | Monthly Downloads | Total Downloads | Build Status |
| --- | --- | --- | --- | --- |
| Lux.jl | | | | |
| └ LuxLib.jl | | | | |
| └ LuxCore.jl | | | | |
| └ MLDataDevices.jl | | | | |
| └ WeightInitializers.jl | | | | |
| └ LuxTestUtils.jl | | | | |
| └ LuxCUDA.jl | | | | |
```julia
using Lux, Random, Optimisers, Zygote
# using LuxCUDA, AMDGPU, Metal, oneAPI # Optional packages for GPU support

# Seeding
rng = Random.default_rng()
Random.seed!(rng, 0)

# Construct the layer
model = Chain(Dense(128, 256, tanh), Chain(Dense(256, 1, tanh), Dense(1, 10)))

# Get the device determined by Lux
dev = gpu_device()

# Parameter and State Variables
ps, st = Lux.setup(rng, model) |> dev

# Dummy Input
x = rand(rng, Float32, 128, 2) |> dev

# Run the model
y, st = Lux.apply(model, x, ps, st)

# Gradients
## First construct a TrainState
train_state = Lux.Training.TrainState(model, ps, st, Adam(0.0001f0))

## We can compute the gradients using Training.compute_gradients
gs, loss, stats, train_state = Lux.Training.compute_gradients(AutoZygote(), MSELoss(),
    (x, dev(rand(rng, Float32, 10, 2))), train_state)

## Optimization
train_state = Training.apply_gradients!(train_state, gs) # or Training.apply_gradients (no `!` at the end)

# Both these steps can be combined into a single call
gs, loss, stats, train_state = Training.single_train_step!(AutoZygote(), MSELoss(),
    (x, dev(rand(rng, Float32, 10, 2))), train_state)
```
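Putting the pieces together, a full training loop is just `Training.single_train_step!` repeated over the data. Below is a minimal sketch; the `train!` helper, epoch count, and logging are illustrative assumptions, not part of the quickstart above:

```julia
# A minimal sketch of a training loop (hypothetical helper and epoch count).
# Wrapping the loop in a function avoids global-scope performance pitfalls.
function train!(train_state, x, y; epochs = 100)
    for epoch in 1:epochs
        # Computes gradients and updates the parameters in one call
        _, loss, _, train_state = Training.single_train_step!(
            AutoZygote(), MSELoss(), (x, y), train_state)
        epoch % 10 == 0 && println("Epoch $epoch: loss = $loss")
    end
    return train_state
end

train_state = train!(train_state, x, dev(rand(rng, Float32, 10, 2)))
```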
Find standalone usage examples in the examples directory. The documentation organizes the examples into appropriate categories.

For usage-related questions, please use GitHub Discussions, which allows questions and answers to be indexed. To report bugs, use GitHub Issues, or, better yet, send a pull request.
If you found this library to be useful in academic work, please cite:
```bibtex
@software{pal2023lux,
  author    = {Pal, Avik},
  title     = {{Lux: Explicit Parameterization of Deep Neural Networks in Julia}},
  month     = apr,
  year      = 2023,
  note      = {If you use this software, please cite it as below.},
  publisher = {Zenodo},
  version   = {v0.5.0},
  doi       = {10.5281/zenodo.7808904},
  url       = {https://doi.org/10.5281/zenodo.7808904}
}

@thesis{pal2023efficient,
  title  = {{On Efficient Training & Inference of Neural Differential Equations}},
  author = {Pal, Avik},
  year   = {2023},
  school = {Massachusetts Institute of Technology}
}
```
Also consider starring our GitHub repository.

This section is somewhat incomplete. You can contribute by helping to finish it.
Running the full test suite of Lux.jl takes a long time. Here is how to test only a portion of the code.
Each `@testitem` has corresponding `tags`, for example:
@testitem " SkipConnection " setup = [SharedTestSetup] tags = [ :core_layers ]
For example, let's consider the tests for `SkipConnection`:
@testitem " SkipConnection " setup = [SharedTestSetup] tags = [ :core_layers ] begin
...
end
We can test the group `SkipConnection` belongs to by testing `core_layers`. To do this, set the `LUX_TEST_GROUP` environment variable, or rename the tag to further narrow down the test scope:
```shell
export LUX_TEST_GROUP="core_layers"
```
Or directly modify the default test tags in `runtests.jl`:
```julia
# const LUX_TEST_GROUP = lowercase(get(ENV, "LUX_TEST_GROUP", "all"))
const LUX_TEST_GROUP = lowercase(get(ENV, "LUX_TEST_GROUP", "core_layers"))
```
But be sure to restore the default value `"all"` before submitting the code.
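Once the environment variable is set, the selected group can be run through the standard package test entry point. A minimal sketch, assuming `runtests.jl` reads `LUX_TEST_GROUP` from `ENV` as shown above:

```julia
# Runs only the test group selected via LUX_TEST_GROUP.
# From the Lux repo root with the project activated, `Pkg.test()` works too.
using Pkg
Pkg.test("Lux")
```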
Additionally, if you want to run a specific test based on the name of the test set, you can use TestEnv.jl as follows. First activate the Lux environment, then run:
```julia
using TestEnv; TestEnv.activate(); using ReTestItems;

# Assuming you are in the main directory of Lux
ReTestItems.runtests("tests/"; name = "NAME OF THE TEST")
```
For the `SkipConnection` tests, that would be:
```julia
ReTestItems.runtests("tests/"; name = "SkipConnection")
```
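If you want to match several related test items at once, the `name` keyword also accepts a `Regex` (per ReTestItems' documented `runtests` keyword; the pattern below is illustrative):

```julia
# Illustrative: run every test item whose name matches the pattern
ReTestItems.runtests("tests/"; name = r"SkipConnection")
```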