====== Attention Visualization ======

**Aliases**

**Intent**

Present the section of the input that the network used for classification.

**Motivation**

How can the network explain which subsets of data it used to perform its classification?

**Sketch**

//This section provides alternative descriptions of the pattern in the form of an illustration or an alternative formal expression. By looking at the sketch, a reader may quickly understand the essence of the pattern.//

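As a concrete illustration (not taken from any referenced implementation), the sketch below overlays a coarse attention map on an input image so a viewer can see which regions the network attended to. The image, the 7x7 attention grid, and the helper function are placeholders assumed for this example; in practice the attention map would come from the network itself.

<code python>
# Minimal sketch: blend a coarse attention map over the input image so a
# viewer can see which regions the network used for its prediction.
# The image and attention grid are random placeholders; in practice the
# attention map would come from the model (e.g. soft attention weights).
import numpy as np
import matplotlib.pyplot as plt

def overlay_attention(image, attention, alpha=0.5):
    """Upsample a coarse attention grid to image size and overlay it."""
    h, w = image.shape[:2]
    rows = np.linspace(0, attention.shape[0] - 1, h).astype(int)
    cols = np.linspace(0, attention.shape[1] - 1, w).astype(int)
    heatmap = attention[np.ix_(rows, cols)]                   # nearest-neighbour upsample
    heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
    plt.imshow(image)
    plt.imshow(heatmap, cmap="jet", alpha=alpha)               # semi-transparent heatmap
    plt.axis("off")
    plt.show()

image = np.random.rand(224, 224, 3)       # placeholder input image
attention = np.random.rand(7, 7)          # placeholder 7x7 attention weights
overlay_attention(image, attention)
</code>
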
**Discussion**

//This is the main section of the pattern, which goes into greater detail to explain the pattern. We leverage the vocabulary that we describe in the theory section of this book. We do not go into great detail in providing proofs, but rather reference their sources. How the motivation is addressed is expounded upon in this section. We also include additional questions that may be interesting topics for future research.//

**Known Uses**

//Here we review several projects or papers that have used this pattern.//

**Related Patterns**

//In this section we describe in a diagram how this pattern is conceptually related to other patterns. The relationships may be precise or fuzzy, so we provide further explanation of the nature of each relationship. We also describe other patterns that may not be conceptually related but work well in combination with this pattern.//

//Relationship to Canonical Patterns//

//Relationship to other Patterns//

**Further Reading**

//We provide here some additional external material that will help in exploring this pattern in more detail.//

**References**

//To aid in reading, we include the sources that are referenced in the text of the pattern.//

http://arxiv.org/abs/1502.03044 Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

https://arxiv.org/pdf/1610.02391v2.pdf Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization

http://gradcam.cloudcv.org/
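
For readers following the Grad-CAM reference above: the core idea is to weight the last convolutional feature maps by the spatially pooled gradients of the class score and keep the positive part as a localization map. The sketch below is a rough, assumed rendering of that idea in PyTorch, not the authors' implementation; the ResNet-18 backbone, the choice of layer4 as target layer, and the random input are placeholders.

<code python>
# Rough sketch of the Grad-CAM idea: pool the gradients of the class score
# over the last convolutional feature maps, use them as channel weights,
# and keep the positive weighted sum as a coarse localization heatmap.
# ResNet-18, layer4 and the random input below are placeholders.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
features, grads = {}, {}

def capture(module, inputs, output):
    features["act"] = output
    output.register_hook(lambda g: grads.update(act=g))       # grab gradients on backward

model.layer4.register_forward_hook(capture)                   # last conv block

image = torch.rand(1, 3, 224, 224)                            # placeholder preprocessed image
scores = model(image)
class_idx = scores.argmax(dim=1).item()
scores[0, class_idx].backward()                               # gradient of the chosen class score

weights = grads["act"].mean(dim=(2, 3), keepdim=True)         # global-average-pool the gradients
cam = F.relu((weights * features["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
print(cam.shape)                                              # (1, 1, 224, 224) heatmap over the input
</code>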