keras-attention-mechanism-master_2

Category: Artificial Intelligence / Neural Networks / Deep Learning
Development tool: Python
File size: 1167 KB
Downloads: 5
Upload date: 2020-07-04 10:21:48
Uploader: Healer_wwq
Description: Keras attention-mechanism example code (the archive contains the keras-attention-mechanism project: attention blocks for Dense and LSTM/GRU layers).

File list (size in bytes, date):
LICENSE (11357, 2020-01-06)
assets (0, 2020-01-06)
assets\1.png (45984, 2020-01-06)
assets\attention_1.png (215990, 2020-01-06)
assets\graph_multi_attention.png (437259, 2020-01-06)
assets\graph_single_attention.png (443997, 2020-01-06)
assets\lstm_after.png (47113, 2020-01-06)
assets\lstm_before.png (51615, 2020-01-06)
attention_dense.py (1865, 2020-01-06)
attention_lstm.py (3430, 2020-01-06)
attention_lstm_todimensions.py (3487, 2020-01-06)
attention_utils.py (3152, 2020-01-06)
requirements.txt (60, 2020-01-06)

# Keras Attention Mechanism

Simple attention mechanism implemented in Keras for the following layers:

- [x] **Dense (attention 2D block)**
- [x] **LSTM, GRU (attention 3D block)**


Example: Attention block

## Dense Layer

```
inputs = Input(shape=(input_dims,))
attention_probs = Dense(input_dims, activation='softmax', name='attention_probs')(inputs)
attention_mul = merge([inputs, attention_probs], output_shape=input_dims, name='attention_mul', mode='mul')
```

Let's consider this Hello World example:

- A vector *v* of 32 values is the input to the model (a simple feedforward neural network).
- *v[1]* = target.
- The target is binary (either 0 or 1).
- All the other values of the vector *v* (*v[0]* and *v[2:32]*) are purely random and do not contribute to the target.

We expect the attention to be focused on *v[1]* only, or at least strongly. We recap the setup with this drawing:

Attention Mechanism explained

The first two plots are samples taken randomly from the training set. The last plot is the attention vector that we expect: a high peak at index 1 and values close to zero everywhere else. Let's train this model and visualize the attention vector applied to the inputs:
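A rough sketch of the toy task and model, written against the Keras 2 API (`multiply` replaces the legacy `merge(..., mode='mul')` call shown above); it is not the repository's own code, which lives in `attention_utils.py` and `attention_dense.py`. The attention weights extracted at the end are what the next figure visualizes:

```
import numpy as np
from keras.layers import Dense, Input, multiply
from keras.models import Model

input_dims = 32
n_samples = 10000

# Toy data: every column is noise except column 1, which is a copy of the binary target.
x = np.random.standard_normal((n_samples, input_dims))
y = np.random.randint(0, 2, size=(n_samples, 1))
x[:, 1] = y[:, 0]

# Attention block: a softmax Dense layer outputs one weight per input feature,
# and the inputs are re-weighted by element-wise multiplication.
inputs = Input(shape=(input_dims,))
attention_probs = Dense(input_dims, activation='softmax', name='attention_probs')(inputs)
attention_mul = multiply([inputs, attention_probs], name='attention_mul')
hidden = Dense(64, activation='relu')(attention_mul)
output = Dense(1, activation='sigmoid')(hidden)

m = Model(inputs=inputs, outputs=output)
m.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
m.fit(x, y, epochs=10, batch_size=64, validation_split=0.2)

# Read the learned attention weights for one test sample.
probe = Model(inputs=m.input, outputs=m.get_layer('attention_probs').output)
attention_vector = probe.predict(x[:1]).flatten()
print(attention_vector)  # should show a sharp peak at index 1
```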

Attention Mechanism explained

We can clearly see that the network figures this out at inference time.

### Behind the scenes

The attention mechanism can be implemented in three lines with Keras:

```
inputs = Input(shape=(input_dims,))
attention_probs = Dense(input_dims, activation='softmax', name='attention_probs')(inputs)
attention_mul = merge([inputs, attention_probs], output_shape=input_dims, name='attention_mul', mode='mul')
```

We apply a `Dense - Softmax` layer with the same number of output units as the `Input` layer. The weight matrix of this attention layer has a shape of `input_dims x input_dims` here. Then we merge the `Input` layer with the attention layer by multiplying element-wise. Finally, the activation vector (a probability distribution) can be derived with:

```
attention_vector = get_activations(m, testing_inputs_1, print_shape_only=True)[1].flatten()
```

where `1` is the index of the attention layer in the model definition (the `Input` layer is indexed by `0`).

## Recurrent Layers (LSTM, GRU...)

### Application of attention at the input level

We consider the same example as the one used for the Dense layers. The attention column is now the 10th value, so we expect an attention spike around that index. There are two main ways to apply attention to recurrent layers (a combined sketch of both placements follows the figures below):

- Directly on the inputs (same as the Dense example above): `APPLY_ATTENTION_BEFORE_LSTM = True`

Attention vector applied on the inputs (before)

### Application of attention on the LSTM's output

- After the LSTM layer: `APPLY_ATTENTION_BEFORE_LSTM = False`

Attention vector applied on the output of the LSTM layer (after)
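Both placements rely on the same 3D attention block: a softmax computed over the time axis, followed by an element-wise multiply with the sequence it attends to. The sketch below is a reconstruction in that spirit rather than a verbatim copy of `attention_lstm.py`; the 32-unit LSTM and the `attention_3d_block` helper name are illustrative:

```
from keras.layers import Dense, Flatten, Input, LSTM, Permute, multiply
from keras.models import Model

TIME_STEPS = 20
INPUT_DIM = 2
APPLY_ATTENTION_BEFORE_LSTM = True  # False: attend to the LSTM's output sequence instead

def attention_3d_block(inputs, time_steps):
    # inputs: (batch, time_steps, features). Work on the transposed view so the
    # Dense softmax is computed over the time axis, one weight per time step and feature.
    a = Permute((2, 1))(inputs)
    a = Dense(time_steps, activation='softmax')(a)
    a_probs = Permute((2, 1), name='attention_vec')(a)
    return multiply([inputs, a_probs], name='attention_mul')

inputs = Input(shape=(TIME_STEPS, INPUT_DIM))
if APPLY_ATTENTION_BEFORE_LSTM:
    # Attention re-weights the raw inputs; the LSTM then consumes the weighted sequence.
    attended = attention_3d_block(inputs, TIME_STEPS)
    features = LSTM(32, return_sequences=False)(attended)
else:
    # The LSTM first emits one hidden state per time step (return_sequences=True),
    # and attention is applied to that sequence of hidden states.
    lstm_out = LSTM(32, return_sequences=True)(inputs)
    features = Flatten()(attention_3d_block(lstm_out, TIME_STEPS))

output = Dense(1, activation='sigmoid')(features)
model = Model(inputs=inputs, outputs=output)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()
```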

Both placements have their own advantages and disadvantages. One obvious advantage of applying the attention directly to the inputs is that we clearly understand this space. The high-dimensional space spanned by the LSTM can be trickier to interpret, although it shares its time steps with the inputs (`return_sequences=True` is used here).

### Attention for multi-dimensional time series

Sometimes the time series is N-dimensional, and it can be interesting to have one attention vector per dimension. Say we have a 2-D time series of 20 steps. Setting `SINGLE_ATTENTION_VECTOR = False` gives an attention vector of shape `(20, 2)`. If `SINGLE_ATTENTION_VECTOR` is set to `True`, the attention vector has shape `(20,)` and is shared across the input dimensions (a sketch of the shared variant follows the figures below).

- `SINGLE_ATTENTION_VECTOR = False`

Attention defined per time series (each TS has its own attention)

- `SINGLE_ATTENTION_VECTOR = True`

Attention shared across all the time series
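A hedged sketch of how the shared attention vector can be produced: the per-dimension attention is averaged across the input dimensions and then repeated, so every dimension is weighted by the same `(20,)` vector. The `Lambda`/`RepeatVector` recipe below follows that idea; names and sizes are illustrative rather than copied from the repository's scripts:

```
from keras import backend as K
from keras.layers import Dense, Input, Lambda, Permute, RepeatVector, multiply
from keras.models import Model

TIME_STEPS = 20
INPUT_DIM = 2
SINGLE_ATTENTION_VECTOR = True  # share one (TIME_STEPS,) attention vector across dimensions

def attention_3d_block(inputs, time_steps, input_dim, single_attention_vector):
    a = Permute((2, 1))(inputs)                     # (batch, input_dim, time_steps)
    a = Dense(time_steps, activation='softmax')(a)  # softmax over the time axis
    if single_attention_vector:
        # Collapse the per-dimension attention into a single (time_steps,) vector,
        # then repeat it so every input dimension receives identical weights.
        a = Lambda(lambda x: K.mean(x, axis=1), output_shape=(time_steps,), name='dim_reduction')(a)
        a = RepeatVector(input_dim)(a)              # back to (batch, input_dim, time_steps)
    a_probs = Permute((2, 1), name='attention_vec')(a)
    return multiply([inputs, a_probs], name='attention_mul')

inputs = Input(shape=(TIME_STEPS, INPUT_DIM))
attended = attention_3d_block(inputs, TIME_STEPS, INPUT_DIM, SINGLE_ATTENTION_VECTOR)
model = Model(inputs=inputs, outputs=attended)
model.summary()
```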

## Resources

- https://github.com/fchollet/keras/issues/1472
- http://distill.pub/2016/augmented-rnns/
