Bibliography

[LH17]

Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with warm restarts. 2017. URL: https://arxiv.org/abs/1608.03983, doi:10.48550/ARXIV.1608.03983.

[SWO21]

Sam Shleifer, Jason Weston, and Myle Ott. NormFormer: improved transformer pretraining with extra normalization. 2021. URL: https://arxiv.org/abs/2110.09456, doi:10.48550/ARXIV.2110.09456.

[VSP+17]

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. 2017. URL: https://arxiv.org/abs/1706.03762, doi:10.48550/ARXIV.1706.03762.

[XYH+20]

Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tie-Yan Liu. On layer normalization in the transformer architecture. 2020. URL: https://arxiv.org/abs/2002.04745, doi:10.48550/ARXIV.2002.04745.