https://docs.google.com/a/codeaudit.com/document/d/1vqLebanMrNW0-CkQlHpgy6pxeq3lDrIUPr64hifIlis/edit?usp=sharing

====== Model Patterns ======

Neural Networks are typically described by what they are made of, that is, by the construction of their internal components. These components are defined as mathematical expressions that are parameterized by their weights. Fully connected feedforward networks, convolution networks and recurrent networks are all constructed from different mathematical expressions. The parameters of these expressions evolve incrementally through the learning process. In this chapter, we describe the different kinds of expressions that are currently in use in Deep Learning systems.
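
To make this concrete, here is a minimal sketch in plain NumPy (all names and shapes are illustrative, not taken from any particular framework): a fully connected layer written as the parameterized expression y = tanh(Wx + b), with one hand-derived gradient step showing how its parameters evolve.

<code python>
import numpy as np

# A fully connected layer as the expression y = tanh(W @ x + b),
# parameterized by the weight matrix W and the bias vector b.
rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(3, 4))
b = np.zeros(3)

def layer(x):
    return np.tanh(W @ x + b)

# The parameters evolve incrementally through learning: one gradient
# step on the squared loss 0.5 * ||y - target||^2 for a single example.
x = rng.normal(size=4)
target = np.array([0.5, -0.2, 0.1])

y = layer(x)
err = y - target                  # dL/dy
grad_pre = err * (1.0 - y ** 2)   # through tanh: d tanh(z)/dz = 1 - tanh(z)^2
W -= 0.1 * np.outer(grad_pre, x)  # dL/dW, learning rate 0.1
b -= 0.1 * grad_pre               # dL/db
</code>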

DL frameworks are predominantly specified in terms of layers, where each layer has several components with identical functionality. A layer is a collection of components of the same kind; for example, a convolution layer may consist of several components, including a convolution component, a pooling component and an activation function. These are all packaged up into a single layer, and the conventional API specifies the wiring of the layers rather than the wiring of each individual component. This convention aids brevity in expressing the NN structure.
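
As a sketch of this convention (hypothetical classes, not any real framework's API), the layer below packages a one-dimensional convolution component, a max-pooling component and a ReLU activation behind a single interface, so the network is wired layer by layer rather than component by component.

<code python>
import numpy as np

class ConvLayer:
    """One layer bundling convolution, pooling and activation components."""
    def __init__(self, kernel):
        self.kernel = kernel  # parameters of the convolution component

    def __call__(self, x):
        k = len(self.kernel)
        # Convolution component: 1-D valid cross-correlation.
        conv = np.array([x[i:i + k] @ self.kernel
                         for i in range(len(x) - k + 1)])
        # Pooling component: max-pooling with width and stride 2.
        pooled = conv.reshape(-1, 2).max(axis=1)
        # Activation component: ReLU.
        return np.maximum(pooled, 0.0)

# The API wires layers, not individual components.
network = [ConvLayer(np.array([0.25, 0.5, 0.25])),
           ConvLayer(np.array([1.0, -1.0]))]

signal = np.sin(np.linspace(0, 3, 20))
for layer in network:
    signal = layer(signal)  # lengths: 20 -> 9 -> 4
</code>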

It is important to note that other deep learning literature uses the word "representation" with the same meaning as Model. Unfortunately, representation is also used to describe the space of activations (i.e. the feature space). We avoid this confusion by using the term Model, which is less ambiguous. Although Models consist of real-valued (or conjugate-valued) weights, they are complex in how they are used as input parameters to the computational units (i.e. functions or operators) employed in a network. These parameters will be very heterogeneous, unlike the parameters in a simpler fully connected network. Therefore treating them as equivalent to some high-dimensional vector (actually a tensor, since the weights form matrices) makes matters more confusing. The Model goes by different names in other fields: in information theory it is called an encoding, in signal processing it is called a transform, and in computational geometry the word embedding is used.

Models are operators whose parameters are defined by their weights. Models are the operators that act on the Features. The Features are typically vectors and the Models are matrices (or, alternatively, tensors of rank 2). Features and Models can also be represented by higher-ranked tensors, which we do find in practice.
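
A minimal NumPy illustration of this terminology, with shapes chosen arbitrarily for the example: the Model is an operator (a matrix, i.e. a rank-2 tensor) acting on a Feature vector, and both generalize to higher-ranked tensors via tensor contraction.

<code python>
import numpy as np

feature = np.array([1.0, 2.0, -1.0])   # Feature: a rank-1 tensor (vector)
model = np.array([[0.5, 0.0, 1.0],     # Model: a rank-2 tensor (matrix)
                  [0.0, 1.0, 0.0]])
print(model @ feature)                 # the Model operating on the Feature

# Higher-ranked case: a batch of rank-2 Features acted on by a
# rank-4 Model through a tensor contraction.
features = np.ones((8, 4, 4))          # batch of 8 rank-2 Features
model4 = 0.1 * np.ones((3, 3, 4, 4))   # rank-4 Model
out = np.einsum('ijkl,bkl->bij', model4, features)
print(out.shape)                       # (8, 3, 3)
</code>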

{{http://main-alluviate.rhcloud.com/wp-content/uploads/2016/06/model.png}}

[[Fitness]]

[[Differentiable Layers]]

[[Activation]]

[[Gated Unit]]

[[Attention]] (Need to move to Collective Learning?)

[[Complex Parameters]] (Conjugate Model)

[[MaxOut]]

[[Pooling]]

[[Convolution]]

[[Structured Receptive Field]]

[[Generalized Convolution]]

[[Recurrent Layer]]

[[Passthrough]]

[[Filter Groups]]

[[Gain]]

[[In Layer Transform]]

[[Structured Matrix]]

[[Bottleneck Layer]]

[[Gaussian Model]]

[[Pointer Network]]

[[Binary Network]]

[[Multi-Grid]]

[[Timed Gate]]

[[Upscale Filter]]

[[Autoregressive Network]]

[[Discrete Model]]

[[Spiking Perceptron]]

[[Feedback Network]]

[[Tensor Network]]

[[Optimization Layer]]

[[Second Order Statistics]]

[[Overlapping Receptive Fields]]

[[Projection Network]]

[[Adversarial Generator-Encoder]]

[[Introspective Model]]

[[Phase Function]]

[[Recurrent Additive]]

[[Capsule Theory]]

[[Neural Automata]]

**References**

https://en.wikipedia.org/wiki/Measure-preserving_dynamical_system

http://arxiv.org/pdf/1509.08627v1.pdf (Semantics, Representations and Grammars for Deep Learning)