Lux.jl
```julia
import Pkg
Pkg.add("Lux")
```
**Tip:** If you are using a version of Lux.jl prior to v1, see the Updating to v1 section for instructions on how to upgrade.
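The upgrade itself goes through Pkg; a minimal sketch, assuming Lux is already in your active environment (the version-specific migration steps live in that section):

```julia
import Pkg
Pkg.update("Lux")   # fetch the newest release compatible with your environment
Pkg.status("Lux")   # confirm which version is now installed
```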
| Package | Stable Version | Monthly Downloads | Total Downloads | Build Status |
| --- | --- | --- | --- | --- |
| Lux.jl | | | | |
| └ LuxLib.jl | | | | |
| └ LuxCore.jl | | | | |
| └ MLDataDevices.jl | | | | |
| └ WeightInitializers.jl | | | | |
| └ LuxTestUtils.jl | | | | |
| └ LuxCUDA.jl | | | | |
```julia
using Lux, Random, Optimisers, Zygote
# using LuxCUDA, AMDGPU, Metal, oneAPI # Optional packages for GPU support

# Seeding
rng = Random.default_rng()
Random.seed!(rng, 0)

# Construct the layer
model = Chain(Dense(128, 256, tanh), Chain(Dense(256, 1, tanh), Dense(1, 10)))

# Get the device determined by Lux
dev = gpu_device()

# Parameter and State Variables
ps, st = Lux.setup(rng, model) |> dev

# Dummy Input
x = rand(rng, Float32, 128, 2) |> dev

# Run the model
y, st = Lux.apply(model, x, ps, st)

# Gradients
## First construct a TrainState
train_state = Lux.Training.TrainState(model, ps, st, Adam(0.0001f0))

## We can compute the gradients using Training.compute_gradients
gs, loss, stats, train_state = Lux.Training.compute_gradients(AutoZygote(), MSELoss(),
    (x, dev(rand(rng, Float32, 10, 2))), train_state)

## Optimization
train_state = Training.apply_gradients!(train_state, gs) # or Training.apply_gradients (no `!` at the end)

# Both these steps can be combined into a single call
gs, loss, stats, train_state = Training.single_train_step!(AutoZygote(), MSELoss(),
    (x, dev(rand(rng, Float32, 10, 2))), train_state)
```
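That last call extends naturally into a training loop. The sketch below is illustrative, reusing `x`, `dev`, and `train_state` from the snippet above; the `train_loop` helper, the fixed dummy target `y`, the epoch count, and the logging are our own choices, not part of the Lux API:

```julia
# Illustrative sketch: a minimal training loop around `single_train_step!`.
# `train_loop` is a hypothetical helper; `y` is a dummy target matching the
# model's 10-dimensional output for a batch of 2.
function train_loop(train_state, x, y; epochs = 100)
    for epoch in 1:epochs
        _, loss, _, train_state = Training.single_train_step!(
            AutoZygote(), MSELoss(), (x, y), train_state)
        epoch % 25 == 0 && println("Epoch $epoch: loss = $loss")
    end
    return train_state
end

y = dev(rand(rng, Float32, 10, 2))
train_state = train_loop(train_state, x, y)
```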
Look in the examples directory for self-contained usage examples. The documentation categorizes the examples into appropriate categories.

For usage-related questions, please use GitHub Discussions, which allows questions and answers to be indexed. To report bugs, use GitHub Issues, or better yet, send a pull request.

If you found this library useful in academic work, please cite:
```bibtex
@software{pal2023lux,
  author    = {Pal, Avik},
  title     = {{Lux: Explicit Parameterization of Deep Neural Networks in Julia}},
  month     = apr,
  year      = 2023,
  note      = {If you use this software, please cite it as below.},
  publisher = {Zenodo},
  version   = {v0.5.0},
  doi       = {10.5281/zenodo.7808904},
  url       = {https://doi.org/10.5281/zenodo.7808904}
}

@thesis{pal2023efficient,
  title  = {{On Efficient Training \& Inference of Neural Differential Equations}},
  author = {Pal, Avik},
  year   = {2023},
  school = {Massachusetts Institute of Technology}
}
```
Also consider starring our GitHub repository.

This section is somewhat incomplete. You can contribute by helping to complete it!
Running the full Lux.jl test suite takes a long time; here is how to test only part of the code.

Every `@testitem` has corresponding `tags`, for example:
```julia
@testitem "SkipConnection" setup = [SharedTestSetup] tags = [:core_layers]
```
For example, consider the tests for `SkipConnection`:

```julia
@testitem "SkipConnection" setup = [SharedTestSetup] tags = [:core_layers] begin
    ...
end
```
We can test the group that `SkipConnection` belongs to by testing `core_layers`. To do so, set the `LUX_TEST_GROUP` environment variable, or rename the tags to narrow the test scope further:

```bash
export LUX_TEST_GROUP="core_layers"
```
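The same group selection works from inside Julia, since the test runner reads `LUX_TEST_GROUP` from the environment; a sketch, assuming the suite is launched through `Pkg.test` (which passes the environment through):

```julia
# Sketch: pick a test group from within Julia, then launch the suite
ENV["LUX_TEST_GROUP"] = "core_layers"

import Pkg
Pkg.test("Lux")
```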
Or directly modify the default test tags in `runtests.jl`:

```julia
# const LUX_TEST_GROUP = lowercase(get(ENV, "LUX_TEST_GROUP", "all"))
const LUX_TEST_GROUP = lowercase(get(ENV, "LUX_TEST_GROUP", "core_layers"))
```

But make sure to restore the default value `"all"` before submitting the code.
Furthermore, if you want to run a specific test based on the name of the test set, you can use TestEnv.jl as follows. First activate the Lux environment, then run:

```julia
using TestEnv; TestEnv.activate(); using ReTestItems;

# Assuming you are in the main directory of Lux
ReTestItems.runtests("tests/"; name = "NAME OF THE TEST")
```
For the `SkipConnection` tests, that would be:

```julia
ReTestItems.runtests("tests/"; name = "SkipConnection")
```
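ReTestItems can also filter on the tags themselves rather than the testset name; a sketch, assuming the `tags` keyword of `ReTestItems.runtests`:

```julia
# Sketch: run every @testitem tagged :core_layers, without editing runtests.jl
ReTestItems.runtests("tests/"; tags = [:core_layers])
```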