tf.keras.layers.GroupQueryAttention
Grouped Query Attention layer.
Inherits From: `Layer`, `Operation`
tf.keras.layers.GroupQueryAttention(
head_dim,
num_query_heads,
num_key_value_heads,
dropout=0.0,
use_bias=True,
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
**kwargs
)
This is an implementation of grouped-query attention introduced by
[Ainslie et al., 2023](https://arxiv.org/abs/2305.13245). Here
`num_key_value_heads` denotes the number of groups; setting
`num_key_value_heads` to 1 is equivalent to multi-query attention, and when
`num_key_value_heads` is equal to `num_query_heads` it is equivalent to
multi-head attention.

This layer first projects the `query`, `key`, and `value` tensors. Then,
`key` and `value` are repeated to match the number of heads of `query`.

The `query` is then scaled and dot-producted with the `key` tensors, and the
results are softmaxed to obtain attention probabilities. The `value` tensors
are then interpolated by these probabilities and concatenated back into a
single tensor.
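A minimal usage sketch for a self-attention setup; the tensor shapes and values below are illustrative assumptions, not part of the reference:

```python
import numpy as np
import tensorflow as tf

# 8 query heads share 2 key/value heads, i.e. 4 query heads per group.
layer = tf.keras.layers.GroupQueryAttention(
    head_dim=32, num_query_heads=8, num_key_value_heads=2
)

# Self-attention: pass the same tensor as query and value.
x = np.random.rand(4, 10, 64).astype("float32")  # (batch_dim, seq_len, feature_dim)
output = layer(query=x, value=x)
print(output.shape)  # (4, 10, 64) -- the output keeps the query's feature_dim
```

In this sketch, setting `num_key_value_heads=1` would reduce the layer to multi-query attention, while setting it to 8 (equal to `num_query_heads`) would make it equivalent to multi-head attention.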
| Args | |
| --- | --- |
| `head_dim` | Size of each attention head. |
| `num_query_heads` | Number of query attention heads. |
| `num_key_value_heads` | Number of key and value attention heads. |
| `dropout` | Dropout probability. |
| `use_bias` | Boolean, whether the dense layers use bias vectors/matrices. |
| `kernel_initializer` | Initializer for dense layer kernels. |
| `bias_initializer` | Initializer for dense layer biases. |
| `kernel_regularizer` | Regularizer for dense layer kernels. |
| `bias_regularizer` | Regularizer for dense layer biases. |
| `activity_regularizer` | Regularizer for dense layer activity. |
| `kernel_constraint` | Constraint for dense layer kernels. |
| `bias_constraint` | Constraint for dense layer biases. |
| Call arguments | |
| --- | --- |
| `query` | Query tensor of shape `(batch_dim, target_seq_len, feature_dim)`, where `batch_dim` is the batch size, `target_seq_len` is the length of the target sequence, and `feature_dim` is the feature dimension. |
| `value` | Value tensor of shape `(batch_dim, source_seq_len, feature_dim)`, where `batch_dim` is the batch size, `source_seq_len` is the length of the source sequence, and `feature_dim` is the feature dimension. |
| `key` | Optional key tensor of shape `(batch_dim, source_seq_len, feature_dim)`. If not given, `value` is used for both `key` and `value`, which is the most common case. |
| `attention_mask` | A boolean mask of shape `(batch_dim, target_seq_len, source_seq_len)` that prevents attention to certain positions. The mask specifies which query elements can attend to which key elements, where 1 indicates attention and 0 indicates no attention. Broadcasting can happen for the missing batch dimensions and the head dimension. |
| `return_attention_scores` | A boolean indicating whether the output should be `(attention_output, attention_scores)` if `True`, or `attention_output` if `False`. Defaults to `False`. |
| `training` | Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (no dropout). Falls back to the training mode of the parent layer/model, or `False` (inference) if there is no parent layer. |
| `use_causal_mask` | A boolean indicating whether to apply a causal mask to prevent tokens from attending to future tokens (e.g., used in a decoder Transformer). |
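A sketch of a cross-attention call that also returns the attention coefficients; the shapes below are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

layer = tf.keras.layers.GroupQueryAttention(
    head_dim=16, num_query_heads=4, num_key_value_heads=2
)

query = np.random.rand(2, 5, 32).astype("float32")  # (batch_dim, target_seq_len, feature_dim)
value = np.random.rand(2, 7, 32).astype("float32")  # (batch_dim, source_seq_len, feature_dim)

# Return the attention coefficients alongside the attention output.
output, scores = layer(query=query, value=value, return_attention_scores=True)
print(output.shape)  # (2, 5, 32)
print(scores.shape)  # (2, 4, 5, 7): (batch_dim, num_query_heads, target_seq_len, source_seq_len)

# For decoder-style self-attention, a causal mask can be requested instead:
causal_output = layer(query=query, value=query, use_causal_mask=True)
```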
| Returns | |
| --- | --- |
| `attention_output` | Result of the computation, of shape `(batch_dim, target_seq_len, feature_dim)`, where `target_seq_len` is the target sequence length and `feature_dim` is the query input's last dimension. |
| `attention_scores` | (Optional) attention coefficients of shape `(batch_dim, num_query_heads, target_seq_len, source_seq_len)`. |
| Attributes | |
| --- | --- |
| `input` | Retrieves the input tensor(s) of a symbolic operation. Only returns the tensor(s) corresponding to the *first time* the operation was called. |
| `output` | Retrieves the output tensor(s) of a layer. Only returns the tensor(s) corresponding to the *first time* the operation was called. |
Methods
from_config
[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/ops/operation.py#L191-L213)
@classmethod
from_config(
config
)
Creates a layer from its config.
This method is the reverse of `get_config`, capable of instantiating the
same layer from the config dictionary. It does not handle layer connectivity
(handled by Network), nor weights (handled by `set_weights`).
| Args | |
| --- | --- |
| `config` | A Python dictionary, typically the output of `get_config`. |

| Returns | |
| --- | --- |
| A layer instance. | |
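A short sketch of the round trip described above (the variable names are illustrative):

```python
import tensorflow as tf

layer = tf.keras.layers.GroupQueryAttention(
    head_dim=32, num_query_heads=8, num_key_value_heads=2
)

# get_config captures the constructor arguments; from_config rebuilds an
# equivalent layer, without its weights or any graph connectivity.
config = layer.get_config()
restored = tf.keras.layers.GroupQueryAttention.from_config(config)
```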
symbolic_call
[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/ops/operation.py#L58-L70)
symbolic_call(
*args, **kwargs
)