Table of Contents
Solving Transformer by Hand: A Step-by-Step Math Example
Understanding Transformers: A Step-by-Step Math Example — Part 1
I understand that the transformer architecture may seem scary, and you might have encountered various explanations on…
This blog is incomplete; here is the complete version of it:
Understanding Transformers from Start to End — A Step-by-Step Math Example
We will be using a simple dataset and performing numerous matrix multiplications to solve the encoder and decoder parts… A small NumPy sketch of the input-preparation steps follows the outline below.
Inputs and Positional Encoding
Step 1 (Defining the data)
Step 2 (Finding the Vocab Size)
Step 3 (Encoding and Embedding)
Step 4 (Positional Embedding)
Encoder
Step 1 (Performing Single Head Attention)
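As a companion to the four input-preparation steps listed above, here is a minimal NumPy sketch: build a vocabulary from a toy sentence, look up randomly initialized word embeddings, and combine them with sinusoidal positional encodings (addition is the standard choice). The corpus, the 4-dimensional embedding size, and all variable names are invented here for illustration and are not taken from the post.

```python
import numpy as np

# Toy corpus and dimensions invented for illustration (not from the post).
corpus = ["when", "you", "play", "a", "game", "of", "thrones"]
vocab = {word: i for i, word in enumerate(sorted(set(corpus)))}  # Step 2: vocab size
d_model = 4

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), d_model))  # Step 3: random word embeddings

def positional_encoding(seq_len, d_model):
    # Step 4: the sinusoidal scheme from "Attention Is All You Need":
    # PE(pos, 2i) = sin(pos / 10000^(2i/d_model)); PE(pos, 2i+1) = cos(...)
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

ids = np.array([vocab[w] for w in corpus])                    # Step 3: encoding
x = embeddings[ids] + positional_encoding(len(ids), d_model)  # Step 4: encoder input
print(x.shape)  # (7, 4)
```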
Table of Contents
Step 1 — Defining our Dataset
Step 2 — Finding Vocab Size
Step 3 — Encoding
Step 4 — Calculating Embedding
Step 5 — Calculating Positional Embedding
Step 6 — Concatenating Positional and Word Embeddings
Step 7 — Multi Head Attention
Step 8 — Adding and Normalizing
Step 9 — Feed Forward Network
Step 10 — Adding and Normalizing Again
Step 11 — Decoder Part
Step 12 — Understanding Mask Multi Head Attention
Let’s do a simplified calculation:
Step 13 — Calculating the Predicted Word
Important Points
Conclusion
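Steps 7 and 12 above both revolve around the same computation, scaled dot-product attention, with the decoder adding a causal mask so no position can attend to the future. Here is a small self-contained sketch of that formula; the matrix sizes and random weights are placeholders, not values from the blog.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def single_head_attention(x, Wq, Wk, Wv, mask=None):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # Step 12: hide future positions
    return softmax(scores) @ V

rng = np.random.default_rng(0)
seq_len, d_model = 6, 4
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
causal = np.tril(np.ones((seq_len, seq_len), dtype=bool))      # lower-triangular mask
print(single_head_attention(x, Wq, Wk, Wv).shape)              # encoder style: (6, 4)
print(single_head_attention(x, Wq, Wk, Wv, mask=causal).shape) # decoder style: (6, 4)
```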
Transformer Architecture explained
Tokenization
Embedding
Positional encoding
Transformer block
Attention
The Softmax Layer
Post Training
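The "Transformer block" entry above bundles attention with the residual "Add & Norm" connections and a position-wise feed-forward network. A rough sketch of one encoder-style block follows, with layer norm's learned gain and bias omitted and all dimensions invented:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-5):
    # "Add & Norm": zero mean, unit variance per position (learned gain/bias omitted)
    return (x - x.mean(axis=-1, keepdims=True)) / np.sqrt(x.var(axis=-1, keepdims=True) + eps)

def self_attention(x, Wq, Wk, Wv):
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    return softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V

def transformer_block(x, attn_w, W1, b1, W2, b2):
    # Each sublayer is wrapped in a residual connection followed by layer norm.
    x = layer_norm(x + self_attention(x, *attn_w))  # attention sublayer
    ffn = np.maximum(0, x @ W1 + b1) @ W2 + b2      # two linear layers with a ReLU
    return layer_norm(x + ffn)                      # feed-forward sublayer

rng = np.random.default_rng(0)
d_model, d_ff, seq_len = 4, 8, 6
x = rng.normal(size=(seq_len, d_model))
attn_w = tuple(rng.normal(size=(d_model, d_model)) for _ in range(3))
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)
print(transformer_block(x, attn_w, W1, b1, W2, b2).shape)  # (6, 4)
```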
How GPT3 Works - Visualizations and Animations
The GPT-3 Architecture, on a Napkin
Original Diagrams
In / Out
Encoding
Embedding
Positional Encoding
Attention (Simplified)
Multi-Head Attention
Feed Forward
Add & Norm
Decoding
Full Architecture
References
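For the Multi-Head Attention entry above: each head attends over its own slice of the model dimension, and the heads are then concatenated and projected back. The sketch below assumes d_model is divisible by the number of heads; the weights are random placeholders rather than GPT-3's actual parameters.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def split_heads(m, n_heads):
    # (seq, d_model) -> (n_heads, seq, d_head): each head gets a slice of the features
    n, d = m.shape
    return m.reshape(n, n_heads, d // n_heads).transpose(1, 0, 2)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads):
    n, d = x.shape
    Q = split_heads(x @ Wq, n_heads)
    K = split_heads(x @ Wk, n_heads)
    V = split_heads(x @ Wv, n_heads)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d // n_heads)  # (heads, seq, seq)
    heads = softmax(scores) @ V                                # (heads, seq, d_head)
    concat = heads.transpose(1, 0, 2).reshape(n, d)            # re-join the heads
    return concat @ Wo                                         # output projection

rng = np.random.default_rng(0)
n, d, h = 6, 8, 2
x = rng.normal(size=(n, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) for _ in range(4))
print(multi_head_attention(x, Wq, Wk, Wv, Wo, h).shape)  # (6, 8)
```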
OpenAI GPT-3: Understanding the Architecture
Introduction
What are Language Models?
How does language modeling work?
OpenAI GPT-3 Architecture
Why is GPT-3 so powerful?
Use cases of GPT-3
Writing and translation
Generating code
Building machine learning models/code
How Can We Get Our Hands on the Model?
Limitations of OpenAI GPT-3
Future of GPT-3
Conclusion
References
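Relating to "How does language modeling work?" above: a language model maps a context to a probability distribution over the next token. Below is a deliberately tiny bigram sketch; the vocabulary and counts are invented, and GPT-3 replaces the count table with a 175B-parameter transformer, but the input/output contract is the same.

```python
import numpy as np

vocab = ["I", "like", "cats", "dogs", "."]
# Invented bigram counts: counts[i][j] = times vocab[j] followed vocab[i]
counts = np.array([
    [0, 9, 0, 0, 1],    # after "I"
    [0, 0, 5, 4, 1],    # after "like"
    [0, 0, 0, 0, 10],   # after "cats"
    [0, 0, 0, 0, 10],   # after "dogs"
    [10, 0, 0, 0, 0],   # after "."
], dtype=float)

def next_token_probs(word):
    row = counts[vocab.index(word)]
    return row / row.sum()  # P(next token | current token)

probs = next_token_probs("like")
print(dict(zip(vocab, probs.round(2))))  # distribution over next tokens
print(vocab[int(probs.argmax())])        # greedy next token: "cats"
```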
Solving Transformer by Hand: A Step-by-Step Math Example
I have already written a detailed blog on how transformers work using a very small sample of the dataset, which may well be my best blog ever, because it has elevated my profile and given me the motivation to write more. However, that blog is incomplete, as it only covers 20% of the transformer architecture and contains numerous calculation errors…