Same algorithm as Part 3’s numpy version and Part 4’s Triton kernel. Same running state — running_max, running_sum, acc. Same per-tile update:
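A minimal numpy sketch of that per-tile update, assuming the tile's raw scores and its value rows are already computed (the function and argument names here are mine, not from Parts 3 or 4):

```python
import numpy as np

def online_softmax_update(running_max, running_sum, acc, scores_tile, v_tile):
    """One per-tile update of the online-softmax running state.

    running_max, running_sum: shape (q_len, 1)
    acc:                      shape (q_len, d)
    scores_tile:              raw scores for this tile, shape (q_len, tile_len)
    v_tile:                   value rows for this tile, shape (tile_len, d)
    """
    # New running maximum over everything seen so far plus this tile.
    tile_max = scores_tile.max(axis=-1, keepdims=True)
    new_max = np.maximum(running_max, tile_max)

    # Rescale the old state from the old maximum to the new one.
    correction = np.exp(running_max - new_max)

    # Exponentiate this tile's scores against the new maximum.
    p = np.exp(scores_tile - new_max)

    # Fold the tile into the running denominator and the weighted-value
    # accumulator.
    new_sum = running_sum * correction + p.sum(axis=-1, keepdims=True)
    new_acc = acc * correction + p @ v_tile
    return new_max, new_sum, new_acc
```

After the last tile, the attention output is `acc / running_sum`.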
We have one horrible disjuncture, at the 6 → 2 junction. I have one more hypothesis: a little bit of fine-tuning on those two layers is all we really need. Fine-tuned RYS models dominate the Leaderboard, and I suspect this junction is exactly what the fine-tuning fixes. And there's a great reason to do it this way: this method does not use extra VRAM! For all these experiments, I duplicated layers via pointers, so the layers are repeated without using more GPU memory; see the sketch below. Of course, we do need more compute and more KV cache, but that's a small price to pay for a verifiably better model. We can make actual copies of just layers 2 and 6 to 'fix' them, and keep the repeated layers 3-4-5 as virtual copies. If we fine-tuned all the layers, every virtual copy would turn into a real copy and use up more VRAM.
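For concreteness, here is a minimal PyTorch sketch of the pointer-based duplication, assuming a Hugging Face-style model whose decoder blocks live in an `nn.ModuleList` (the `model.model.layers` path and the `self_stack` helper are illustrative, not taken from the experiments):

```python
import torch.nn as nn

def self_stack(layers: nn.ModuleList, order: list[int]) -> nn.ModuleList:
    """Build a deeper stack by repeating existing layers by reference.

    Each entry in `order` indexes into `layers`; a repeated index reuses
    the same module object, so no new parameters (hence no extra VRAM)
    are allocated. Only compute and KV cache grow at inference time.
    """
    return nn.ModuleList(layers[i] for i in order)

# Hypothetical layout: layers 0..7, with 3-4-5 replayed once as virtual copies.
# model.model.layers = self_stack(model.model.layers,
#                                 [0, 1, 2, 3, 4, 5, 3, 4, 5, 6, 7])
```

Turning a virtual copy into a real one for fine-tuning is then a `copy.deepcopy` of that single layer, allocating fresh weights only where we actually want the parameters to diverge.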