nGPT pytorch
Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI. The question is whether they swept any loss in expressivity under the rug, but I will take it with good faith.

This type of network should also be studied in the context of continual learning and loss of plasticity.

An adaptation for vision transformers is here.
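Concretely, "learning on the hypersphere" means every vector the network handles, weight rows and hidden states alike, is kept at unit L2 norm, so dot products become cosine similarities (Luo et al. 2017) and the residual stream becomes an interpolation that is renormalized back onto the sphere (Loshchilov et al. 2024). A minimal sketch of that idea, purely my own illustration with made-up names (`NormLinear`, `hypersphere_residual`), not the library's actual modules:

```python
import torch
import torch.nn.functional as F
from torch import nn

def l2norm(t, dim = -1):
    # project onto the unit hypersphere
    return F.normalize(t, dim = dim)

class NormLinear(nn.Module):
    # a linear layer whose weight rows live on the hypersphere;
    # with a unit-norm input, each output entry is a cosine similarity in [-1, 1]
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim_out, dim_in))

    def forward(self, x):
        return F.linear(l2norm(x), l2norm(self.weight))

def hypersphere_residual(h, branch_out, alpha):
    # instead of h + branch_out, slide from h toward the (normalized) branch
    # output with a learned step size alpha, then renormalize onto the sphere
    return l2norm(h + alpha * (l2norm(branch_out) - h))
```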
$ pip install nGPT-pytorch
import torch
from nGPT_pytorch import nGPT

model = nGPT(
    num_tokens = 256,
    dim = 512,
    depth = 4,
    attn_norm_qk = True  # normalizing queries and keys is described as optional
)

x = torch.randint(0, 256, (2, 2048))

loss = model(x, return_loss = True)
loss.backward()

logits = model(x) # (2, 2048, 256)
Enwik8
$ python train.py
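train.py handles the actual enwik8 data pipeline; stripped to its essentials, such a character-level training loop might look like the sketch below (random bytes stand in for sampled enwik8 sequences, and the hyperparameters are placeholders, not the repo's settings). Note that nGPT also renormalizes its weight matrices back onto the hypersphere after each optimizer update; see the repo's source for how that is wired in.

```python
import torch
from torch.optim import Adam
from nGPT_pytorch import nGPT

model = nGPT(
    num_tokens = 256,   # byte-level vocabulary, as for enwik8
    dim = 512,
    depth = 4
)

optim = Adam(model.parameters(), lr = 1e-3)

for step in range(100):
    # random bytes as a stand-in for batches drawn from enwik8
    seq = torch.randint(0, 256, (2, 2048))

    loss = model(seq, return_loss = True)
    loss.backward()

    optim.step()
    optim.zero_grad()
```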
Citations

```bibtex
@inproceedings{Loshchilov2024nGPTNT,
    title   = {nGPT: Normalized Transformer with Representation Learning on the Hypersphere},
    author  = {Ilya Loshchilov and Cheng-Ping Hsieh and Simeng Sun and Boris Ginsburg},
    year    = {2024},
    url     = {https://api.semanticscholar.org/CorpusID:273026160}
}
```

```bibtex
@article{Luo2017CosineNU,
    title   = {Cosine Normalization: Using Cosine Similarity Instead of Dot Product in Neural Networks},
    author  = {Chunjie Luo and Jianfeng Zhan and Lei Wang and Qiang Yang},
    journal = {ArXiv},
    year    = {2017},
    volume  = {abs/1702.05870},
    url     = {https://api.semanticscholar.org/CorpusID:1505432}
}
```

```bibtex
@inproceedings{Zhou2024ValueRL,
    title   = {Value Residual Learning For Alleviating Attention Concentration In Transformers},
    author  = {Zhanchao Zhou and Tianyi Wu and Zhiyun Jiang and Zhenzhong Lan},
    year    = {2024},
    url     = {https://api.semanticscholar.org/CorpusID:273532030}
}
```