Distinguishing Conjoint and Independent Neural Tuning for Stimulus Features With fMRI Adaptation. Drucker, Kerr, Aguirre. Innovative Methodology 2009.

  1. “We describe an application of functional magnetic resonance imaging (fMRI) adaptation to distinguish between independent and conjoint neural representations of dimensions by examining the neural signal evoked by changes in one versus two stimulus dimensions and considering the metric of two-dimension additivity.”
  2. “Do different neurons represent individual stimulus dimensions or could one neuron be tuned to represent multiple dimensions?”
  3. “This study describes an application of functional magnetic resonance imaging (fMRI) to distinguish conjoint from independent representation of two stimulus dimensions within a spatially restricted population of neurons.”
  4. “If the recovery for a combined change is simply the additive combination of the recovery for each dimension in isolation, we take this as evidence for independent neural populations. When the neural recovery for a combined change is subadditive, this may reflect populations consisting of neurons that conjointly represent the two stimulus dimensions.”
  5. “These two possibilities could be distinguished directly by measuring the tuning of individual neurons. However, the signal obtained with BOLD fMRI averages the population neural response from a voxel, making this measurement unavailable.”
  6. “To distinguish conjoint and independent tuning in this case, we must measure the properties of the neural population using adaptation methods.”
  7. “In summary, we may distinguish between conjoint and independent tuning of neurons in a population by comparing the recovery from adaptation for combined transitions to that seen for isolated transitions along each stimulus dimension.”
  8. “In theory, one could conduct the test described earlier by measuring the BOLD fMRI response to three stimulus pairs: a pair that differs only in color, a pair that differs only in shape, and a pair that differs in both color and shape.”
    1. To make this robust though, more thorough sampling is needed
  9. Comparing predictions under Manhattan versus Euclidean distance helps figure out how neurons respond to stimuli with multiple dimensions (additive/independent or not)
  10. Another issue to address before interpreting the neural response is how linear changes in the stimuli map to changes in neural response — assumed to be nonlinear <I figure they measure this as they vary only 1 dimension?>
  11. They have a model to estimate the nonlinearity <trying to deal with varying forms of nonlinearities seems complex the way they do it; maybe there is a better way>
  12. “A different violation of the model assumptions occurs when the underlying neural representation is independent for the stimulus dimensions, but its neural instantiation is not aligned with the assumed dimensional axes of the study. For example, consider an experiment designed to examine the neural representation of rectangles. The stimulus space used in the experiment consists of rectangles that vary in height and width, and the experimenter models these two parameters. It may be the case, however, that a population of neurons actually has independent tuning for the sum and difference of height and width (roughly corresponding to area and aspect ratio)—a 45° rotation of the axes as modeled by the experimenter.”
  13. “In summary, when significant loading on the Euclidean contraction covariate is obtained in an experiment, an additional test is necessary to reject the possibility of independent, but misaligned, neural populations. Post hoc testing of the performance of the model under assumed rotations of the stimulus axes can distinguish between the independent, but rotated, and the conjointly tuned cases.”
  14. “Earlier, we considered how these concepts are related to receptive fields that are either linear or radially symmetric within a stimulus space. Intermediate receptive fields are possible, however, with oval shapes of varying elongation. In such cases the population would not be wholly independent, but instead represent one dimension to a greater extent than the other. These intermediate cases are considered readily within the framework of the Minkowski exponent that defines the representational space.”
  15. fMRI introduces additional problems related to the things it measures poorly
  16. “The test for a conjointly tuned neural population amounts to the measurement of variance attributable to the Euclidean contraction covariate. We consider here optimizations of the approach to maximize power for this test.”
  17. Instead of sampling stimuli from a grid of parameter space, they sample from nested octagons
    1. Helps keep points at a diagonal more evenly spaced than those that are up/down which is an issue in rectangular sampling. “the dioctagonal space increases the range of the Euclidean contraction covariate, thus improving power.”
  18. “Our selection of stimuli was motivated by the psychological study of integral and separable perceptual spaces. Some visual properties of objects are apprehended separately (e.g., color and shape), whereas other dimensions are perceived as a composite (e.g., saturation and brightness); these have been termed separable and integral dimensions (Shepard 1964). We hypothesized that integral perceptual dimensions are represented by populations of neurons that represent the dimensions conjointly, whereas separable dimensions are represented by independent neural populations; similar ideas have been proposed recently”
  19. First set of experiments are on “popcorn” and “moons”
  20. [screenshot omitted]
  21. There is evidence that the two dimensions used for both stimuli sets are perceptually independent
  22. <Maybe they don’t actually do an experiment on the stars that vary orange to red with differing numbers of points, and only use them as an example? That would be a bummer>
  23. “During separate fMRI scanning sessions … Subjects were required to monitor and report the position of a bisecting line, which was randomly tilted and shifted within preset limits … to maintain attention.”
  24. “For each subject, we identified within ventral occipitotemporal cortex voxels that showed recovery from adaptation to both stimulus axes for both stimulus spaces …Most voxels were concentrated around the right posterior fusiform sulcus, corresponding to ventral LOC…”
  25. For the popcorn stimuli they find the neural representation of the dimensions is not independent, but for the moons it may indeed be independent
  26. “Although a particular study may find independent tuning for a pair of stimulus dimensions, it does not automatically follow that neurons are therefore tuned “for” those axes. It remains possible that the dimensions selected for study are manifestations of some further, as yet unstudied, organizational scheme.”
  27. In discussion, mentions other features that are thought to be represented independently on a neural level
  28. “Our method amounts to using a linear model to test the metric of a space—an approach that has been considered problematic…we have argued by simulation for the validity of our model for two dimensions with 16 regularly spaced samples.”
    1. “Herein we have considered several types of nonlinearities and distortions that can exist in neural representation or recovery from adaptation. Although we find that the method is generally robust to these deviations, there naturally exists the possibility of further violations of the assumptions of the model that we have not evaluated.”
  29. “We envision the use of the metric estimation test to study the representation of stimulus properties across sensory cortical areas. By revealing the presence of independently tuned neural populations, the fundamental axes of perceptual representation might be identified. Interestingly, a given stimulus space may be represented conjointly in one region of cortex, but independently in another.”
    1. This is true of the visual system
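To pin down the additivity logic in points 4, 9, and 16, here is a quick numerical sketch of my own — `minkowski_recovery` and this simple form of the contraction covariate are toy stand-ins, not the paper's actual adaptation model or GLM:

```python
import numpy as np

def minkowski_recovery(dx, dy, p):
    """Predicted recovery from adaptation for a stimulus change (dx, dy),
    taken to be proportional to the Minkowski distance with exponent p."""
    return (abs(dx) ** p + abs(dy) ** p) ** (1.0 / p)

# Isolated and combined transitions of equal size along each dimension
dx, dy = 1.0, 1.0

# City-block metric (p = 1): recovery for a combined change is exactly the
# sum of the recoveries for the isolated changes -> independent populations
additive = minkowski_recovery(dx, 0, 1) + minkowski_recovery(0, dy, 1)
combined_p1 = minkowski_recovery(dx, dy, 1)

# Euclidean metric (p = 2): combined recovery is subadditive -> conjoint tuning
combined_p2 = minkowski_recovery(dx, dy, 2)

# A crude "Euclidean contraction" covariate: the difference between the
# city-block and Euclidean distances for a given transition. Significant
# loading on this term is the evidence against pure additivity.
euclidean_contraction = combined_p1 - combined_p2

print(additive)               # 2.0
print(combined_p1)            # 2.0 (additive)
print(combined_p2)            # ~1.414 (subadditive)
print(euclidean_contraction)  # ~0.586
```

Intermediate Minkowski exponents (point 14) interpolate between these two cases, which is why estimating p amounts to testing the metric of the space.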

Properties of Shape Tuning of Macaque Inferior Temporal Neurons Examined Using Rapid Serial Visual Presentation. De Baene, Premereur, Vogels. J Neurophysiology 2007.

  1. Examined macaque inferior temporal cortical neuron responses to parametrically defined shapes
  2. “we found that the large majority of neurons preferred extremes of the shape configuration, extending the results of a previous study using simpler shapes and a standard testing paradigm. A population analysis of the neuronal responses demonstrated that, in general, IT neurons can represent the similarities among the shapes at an ordinal level, extending a previous study that used a smaller number of shapes and a categorization task. However, the same analysis showed that IT neurons do not faithfully represent the physical similarities among the shapes.”
  3. Also, IT neurons adapt to stimulus distribution statistics
  4. “Single IT neurons can be strongly selective for object attributes such as shape, texture, and color, while remaining tolerant to some transformations such as object position and scale”
  5. Rapidly display images in succession without interstimulus break
  6. Other results also show that neurons seem to be tuned to respond most strongly when shapes from the extremes of the parameter space are presented
  7. “Because a high number of stimuli are presented repeatedly in RSVP, this paradigm might be more sensitive to adaptive effects than classical testing paradigms in which one stimulus is presented per trial after acquisition of fixation and the intertrial interval is relatively long”
  8. <Skipping experimental details and moving on to results>
  9. <Again,> Neuron responses were tuned to extremes of the parameter space and not normally or uniformly distributed
    1. They used a number of different shape classes, and all showed this effect
  10. There was “a good overall fit between physical and neural similarities.”
  11. Although they had the issue that some dimensions were more salient than others
  12. [screenshot omitted]
  13. Did a hierarchical clustering of shapes according to neural responses and different shape classes are always together (aside from one shape class that is split in half and has another shape class “inside” it)
  14. “One issue to consider regarding the interpretation of the observed stronger responses for extreme stimuli is that the employed stimuli are likely to be suboptimal for the tested IT neurons. The critical question here is why the extreme stimuli are less suboptimal than the other stimuli given the likely high-dimensional space in which IT neurons are tuned. A satisfactory answer to this important question will require a full description of the nature of the tuning functions of IT neurons as well as knowledge about the relative position and range of the stimulus set with respect to these tuning functions. The possibility cannot be excluded that IT neurons learn the stimulus statistics of the parametric shape spaces and thus that the observed tunings depend on the stimulation history and the specific stimulus spaces. Experiment 2 demonstrated that the responses of IT neurons can indeed be modified by changes in input statistics. These effects were small in comparison to the degree of monotonic tuning, but stimulus statistics might exert a more profound effect with more extensive daily repetition of the same stimulus spaces as is the common practice in single-cell recording experiments. The MDS results clearly show that IT neurons are more sensitive for some stimulus variations (e.g., indentation; stimulus sets 3 and 4) than for others. This is in agreement with previous studies using calibrated sets of shapes…”
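A toy simulation of the point in note 2 — ordinal agreement between physical and neural similarity can survive response nonlinearities that break faithful (metric) agreement. The grid, the random ramp tuning, and the tanh saturation are all my own stand-ins, not the paper's stimuli or analysis:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical 2-D parametric shape space sampled on a grid
params = np.array([(x, y) for x in np.linspace(-1, 1, 4)
                          for y in np.linspace(-1, 1, 4)])

# Toy IT-like population: each unit responds monotonically along a random
# direction (ramp tuning that peaks at the extremes of the shape space),
# passed through a saturating nonlinearity that distorts magnitudes
n_units = 50
dirs = rng.standard_normal((n_units, 2))
responses = np.tanh(2.0 * params @ dirs.T)  # stimuli x units

phys_dist = pdist(params)      # physical dissimilarities between shapes
neur_dist = pdist(responses)   # population-response dissimilarities

# Rank (ordinal) agreement stays high even though the saturation means the
# absolute distances are not faithfully preserved
rho, _ = spearmanr(phys_dist, neur_dist)
print(rho)
```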

Representation of object similarity in human vision: psychophysics and a computational model. Cutzu, Edelman. Vision Research 1997.

  1. Visual system is robust to illumination and perspective changes. We usually hold that we should be sensitive to changes in shape, but how do you study that in a well-principled way?
  2. References to earlier work that studied 2d shape change, here considering 3d
  3. 3 main ideas about how to make pose-independent shape classification, and there are ways to test which one seems to be what we do
  4. <Mostly interested in the way they generate their shape data and properties of it, so skipping most of the other stuff>
    1.  ex/ “theories such as Shepard’s law of generalization, Nosofsky’s GCM and Ashby’s GRT”
  5. The shapes were made-up “bodies” – in all they were 70-dimensional
  6. [screenshot omitted]
  7. “We remark that the nonlinearities in the image creation process led to a complicated relationship between the shape-space representation of an object and its appearance on the screen”

  8. “Many early studies relied on the estimation of subjective similarities between stimuli, through a process in which the observer had to provide a numerical rating of similarity when presented with a pair of stimuli. One drawback of this method is that many subjects do not feel comfortable when forced to rate similarity on a numerical scale. Another problem is the possibility of subjects modifying their internal similarity scale as the experiment progresses. We avoided these problems by employing two different methods for measuring subjective similarity: compare pairs of pairs (CPP) and delayed match to sample (DMTS).”

  9. <skipping different experimental designs, moving on to discussion>
  10. Running MDS on subject data puts points pretty much where they should be
  11. “The CPP experiments described above support the hypothesis of veridical representation of similarity, by demonstrating that it is possible to recover the true low-dimensional shape-space configuration of complex stimuli from proximity tables obtained from subjects who made forced-choice similarity judgments.”
  12. “It is important to realize that the major computational accomplishment in the experiments we have described so far is that of the human visual system and not of the MDS procedure used to analyze the data.”
  13. “The detailed recovery from subject data of complex similarity patterns imposed on the stimuli supports the notion of veridical representation of similarity, discussed in the introduction. Although our findings are not inconsistent with a two-stage scheme in which geometric reconstruction of individual stimuli precedes the computation of their mutual similarities, the computational model that accompanies these findings offers a more parsimonious account of the psychophysical data. Specifically, representing objects by their similarities to a number of reference shapes (as in the RBF model described in Section 6.2) allowed us to replicate the recovery of parameter-space patterns observed in human subjects, while removing the need for a prior reconstruction of the geometry of the objects.”
  14. “Assuming that perceptual similarities decrease monotonically with psychological space distances, multidimensional scaling algorithms derive the psychological space configuration of the stimulus points from the table of the observed similarities.”
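A sketch of the MDS-recovery logic in notes 10–11 (my own toy setup; the noise model and the use of sklearn's metric MDS are assumptions, not their procedure):

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial import procrustes
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)

# True low-dimensional "shape space" configuration, standing in for the
# parameter-space patterns imposed on the stimuli
true_config = rng.uniform(-1, 1, size=(20, 2))

# Proximity table as a subject might produce it: monotone in true distance,
# plus judgment noise (clipped so dissimilarities stay non-negative)
dissim = np.clip(pdist(true_config) + rng.normal(0, 0.05, size=190), 0, None)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
recovered = mds.fit_transform(squareform(dissim))

# Procrustes alignment removes rotation/reflection/scale before comparing
# the recovered configuration with the true one
_, _, disparity = procrustes(true_config, recovered)
print(disparity)  # small -> configuration recovered up to similarity transform
```

The point in note 12 carries over: the hard part is producing a proximity table whose ranks reflect the true configuration; the MDS step itself is routine.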

Perceptual-Cognitive Explorations of a Toroidal Set of Free-Form Stimuli. Shepard, Cermak. Cognitive Psychology 1973.

<I’m just going to post images because it explains the important stuff>

[screenshots omitted]

  1. People also tended to view shapes based on what object they were most similar to (classifying them based on whether they looked like a gingerbread man, for example)
    1. “a striking aspect of the subsets is their very marked variation in size and shape in the underlying two-dimensional toroidal surface.”
    2. So these clusters don’t match the earlier contour maps either in size or shape (they are not necessarily symmetric or convex, although they seem to be L/R symmetric mostly but not up/down)
    3. Sometimes a category formed two disconnected clusters
  2. “The general conclusions, here, seem to be the following: On the one hand, the underlying parameter space provides a very convenient framework for representing the groups into which Ss tend to sort the forms. Moreover this space is directly relevant in the sense that most of the forms sorted into any one group typically cluster together into one or two internally connected subsets in the space. But, on the other hand, the fact that the spatial representations of the spontaneously produced subsets vary greatly in size and shape and sometimes even consist of two or more widely separated clumps seems to establish that Experiment II taps a variety of cognitive functioning that was not operative in Experiment I. Just what forms will be seen as representing the same object apparently cannot be adequately explained solely in terms of the metric of perceptual proximity among the free forms themselves…”

  3. Each cluster can be further broken down into subclusters
  4. Parameter space is toroidal, so top links to bottom and side to side
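The wraparound in note 4 is easy to get wrong when computing proximities in the parameter space, so here is a minimal helper (my own; the unit period is an arbitrary choice):

```python
import numpy as np

def toroidal_distance(a, b, period=1.0):
    """Euclidean distance on a 2-D parameter space where the top edge wraps
    to the bottom and the left edge wraps to the right (a torus)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.abs(a - b) % period
    d = np.minimum(d, period - d)  # take the shorter way around each axis
    return float(np.hypot(*d))

# Points near opposite edges are actually close on the torus
print(toroidal_distance((0.05, 0.5), (0.95, 0.5)))  # 0.1, not 0.9
```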

On correlation and budget constraints in model-based bandit optimization with application to automatic machine learning. Hoffman, Shahriari, Freitas. AISTATS 2014.

  1. Consider noisy optimization with finite samples <not yet clear if this budget is imposed by the actor or the environment>
  2. “Bayesian approach places emphasis on detailed modelling, including the modelling of correlations among the arms. As a result, it can perform well in situations where the number of arms is much larger than the number of allowed function evaluation, whereas the frequentist counterpart is inapplicable.”
  3. “This paper draws connections between Bayesian optimization approaches and best arm identification in the bandit setting. It focuses on problems where the number of permitted function evaluations is bounded.”
  4. Applications include parameter selection for machine learning tasks
  5. “The paper also shows that one can easily obtain the same theoretical guarantees for the Bayesian approach that were previously derived in the frequentist setting [Gabillon et al., 2012].”
  6. A number of different criteria can be used in Bayesian land to select where to sample: “probability of improvement (PI), expected improvement (EI), Bayesian upper confidence bounds (UCB), and mixtures of these”
  7. Mentions work of Bubeck/Munos/Etal
  8. Tons of relevant references
  9. Also discussion in terms of simple regret
  10. But looks like they are also talking PACy
  11. Setting they consider includes GPs
  12. “As with standard Bayesian optimization with GPs, the statistics of … enable us to construct many different acquisition functions that trade-off exploration and exploitation. Thompson sampling in this setting also becomes straightforward, as we simply have to pick the maximum of the random sample from …, at one of the arms, as the next point to query.”
  13. Seems like they are really considering the finite arm case where arms have some covariance
  14. Used Bayes math to get upper and lower bounds among all arms, and then this is used to generate a bound on the simple regret
  15. “Intuitively this strategy will select either the arm minimizing our bound on the simple regret (i.e. J(t)) or the best “runner up” arm. Between these two, the arm with the highest uncertainty will be selected, i.e. the one expected to give us the most information.”
  16. The exploration parameter beta is chosen based on how often each arm is selected, so as to find something epsilon-optimal
    1. Regret bound is in terms of near-optimality
  17. “Here we should note that while we are using Bayesian methodology to drive the exploration of the bandit, we are analyzing this using frequentist regret bounds. This is a common practice when analyzing the regret of Bayesian bandit methods”
  18. Can do a derivation with Hoeffding or Bernstein bounds as well (leads to analysis of case of independent arms, bounded rewards)
  19. UGap vs BayesGap – bounds are pretty much the same
  20. Have a nice empirical section where they use data from 357 traffic sensors and try to find the location with the highest speed
    1. “By looking at the results, we quickly learn that techniques that model correlation perform better than the techniques designed for best arm identification, even when they are being evaluated in a best arm identification task.”
  21. Then they use it for optimizing parameters in scikit-learn
    1. “EI, PI, and GPUCB get stuck in local minima”
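A sketch of the selection rule quoted in note 15, assuming simple Gaussian posteriors per arm in place of the paper's GP machinery (the exact bound definitions in BayesGap differ in detail):

```python
import numpy as np

def bayesgap_pick(mu, sigma, beta=1.0):
    """One step of a BayesGap-style rule: from posterior means/stds over a
    finite set of arms, form confidence bounds, identify the candidate arm
    J(t) (minimizing a bound on simple regret) and the best runner-up, then
    query whichever of the two is more uncertain."""
    U = mu + beta * sigma  # upper confidence bounds
    L = mu - beta * sigma  # lower confidence bounds
    n = len(mu)
    # Gap bound for arm k: best upper bound among the OTHER arms minus L_k
    B = np.array([np.max(np.delete(U, k)) - L[k] for k in range(n)])
    J = int(np.argmin(B))                  # candidate arm J(t)
    others = np.delete(np.arange(n), J)
    j = int(others[np.argmax(U[others])])  # runner-up: highest upper bound
    # Explore the more informative (more uncertain) of the two
    return J if sigma[J] >= sigma[j] else j

mu = np.array([0.5, 0.45, 0.1])
sigma = np.array([0.05, 0.30, 0.02])
print(bayesgap_pick(mu, sigma))  # arm 1: close to the leader but very uncertain
```

With correlated arms (the GP case), every observation also tightens the bounds of unplayed arms, which is why the correlation-aware methods win in their sensor experiment.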

Active Model Selection. Madani, Lizotte, Greiner. UAI 2004

  1. Considers the case where there is a fixed budget
  2. Shown to be NP-Hard
  3. Consider some heuristics
  4. “We observe empirically that the simple biased-robin algorithm significantly outperforms the other algorithms in the case of identical costs and priors.”
  5. Formalize the problem in terms of coins.  You are given a set of coins with different biases, and a budget on the number of flips to sample.  Goal is to pick the coin with the highest bias for heads.  Actually consider the case where there are priors over the distributions for each coin, so considers Bayesian case
  6. “We address the computational complexity of the problem, showing that it is in PSPACE, but also NP-hard under different coin costs.”
  7. Metric is based on regret
  8. “A strategy may be viewed as a finite, rooted, directed tree where each leaf node is a special “stop” node, and each internal node corresponds to flipping a particular coin, whose two children are also strategy trees, one for each outcome of the flip”
    1. So naturally the total number of ways this can work out is exponential
  9. “We have observed that optimal strategies for identical priors typically enjoy a similar pattern (with some exceptions): their top branch (i.e., as long as the outcomes are all heads) consists of flipping the same coin, and the bottom branch (i.e., as long as the outcomes are all tails) consists of flipping the coins in a Round-Robin fashion”
  10. Update estimates on coins according to beta distribution
  11. “The proof reduces the Knapsack Problem to a special coins problem where the coins have different costs, and discrete priors with non-zero probability at head probabilities 0 and 1 only. It shows that maximizing the profit in the Knapsack instance is equivalent to maximizing the probability of finding a perfect coin, which is shown equivalent to minimizing the regret. The reduction reveals the packing aspect of the budgeted problem. It remains open whether the problem is NP-hard when the coins have unit costs and/or uni-modal distributions”
  12. “It follows that in selecting the coin to flip, two significant properties of a coin are the magnitude of its current mean, and the spread of its density (think “variance”), that is how changeable its density is if it is queried: if a coin’s mean is too low, it can be ignored by the above result, and if its density is too peaked (imagine no uncertainty), then flipping it may yield little or no information …However, the following simple, two coin example shows that the optimal action can be to flip the coin with the lower mean and lower spread!”
  13. Even if the Beta parameters of two coins are fixed, the Beta parameters of a third coin may require you to choose the first or second coin depending on their values
  14. Furthermore, “The next example shows that the optimal strategy can be contingent — i.e., the optimal flip at a given stage depends on the outcomes of the previous flips.”
  15. Although the optimal algorithm is contingent, an algorithm that is not contingent may only give up a little bit on optimality
  16. Discusses a number of heuristics including biased robin and interval estimation
  17. Gittins indices are simple and optimal, but only in the infinite horizon discounted case
    1. Discusses a hack to get it to work in the budgeted case (manipulating the discount based on the remaining budget)
  18. Goes on to empirical evaluation of heuristics
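A quick sketch of the biased-robin heuristic from notes 4 and 9 as I understand it (the final selection rule here — posterior mean under a uniform Beta prior — is my assumption):

```python
import random

def biased_robin(biases, budget, seed=0):
    """Biased-robin for the budgeted coin problem: keep flipping the current
    coin as long as it comes up heads; on tails, advance round-robin to the
    next coin. Posteriors are Beta(1 + heads, 1 + tails)."""
    rng = random.Random(seed)
    n = len(biases)
    heads = [0] * n
    tails = [0] * n
    k = 0
    for _ in range(budget):
        if rng.random() < biases[k]:
            heads[k] += 1          # success: stay on this coin
        else:
            tails[k] += 1
            k = (k + 1) % n        # failure: move to the next coin
    # Report the coin with the highest posterior mean under a uniform prior
    means = [(1 + h) / (2 + h + t) for h, t in zip(heads, tails)]
    return max(range(n), key=lambda i: means[i])

print(biased_robin([0.3, 0.7, 0.5], budget=500))
```

Note this strategy is non-contingent in the sense of note 15: which coin gets the next flip depends only on the last outcome, not the full history, yet it performed best empirically for identical costs and priors.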

Gaussian Process Dynamical Models. Wang, Fleet, Hertzmann. Nips 2006

  1. “A GPDM comprises a low-dimensional latent space with associated dynamics, and a map from the latent space to an observation space.”
  2. “We demonstrate the approach on human motion capture data in which each pose is 62-dimensional.”
  3. “we show that integrating over parameters in nonlinear dynamical systems can also be performed in closed-form. The resulting Gaussian Process Dynamical Model (GPDM) is fully defined by a set of lowdimensional representations of the training data, with both dynamics and observation mappings learned from GP regression.”
  4. As Bayesian nonparametrics, GPs are easier to use and overfit less
  5. “Despite the large state space, the space of activity-specific human poses and motions has a much smaller intrinsic dimensionality; in our experiments with walking and golf swings, 3 dimensions often suffice.”
  6. “The Gaussian Process Dynamical Model (GPDM) comprises a mapping from a latent space to the data space, and a dynamical model in the latent space…The GPDM is obtained by marginalizing out the parameters of the two mappings, and optimizing the latent coordinates of training data.”
  7. “[I]t should be noted that, due to the nonlinear dynamical mapping in (3), the joint distribution of the latent coordinates is not Gaussian. Moreover, while the density over the initial state may be Gaussian, it will not remain Gaussian once propagated through the dynamics.”
  8. Looks like all predictions are 1-step; it can specifically be set up to use more history to make it higher-order
  9. “In effect, the GPDM models a high probability “tube” around the data.”
  10. “Here we consider a simple online method for generating a new motion, called mean-prediction, which avoids the relatively expensive Monte Carlo sampling used above.”
  11. <Wordpress ate the rest of this post.  A very relevant paper I should follow up on.>
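To make the mean-prediction idea in note 10 concrete, here is a toy version using plain GP regression for the latent dynamics (a circle stands in for learned latent coordinates of a cyclic motion; this skips the GPDM's marginalization and latent-coordinate optimization entirely):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 2-D "latent" trajectory: a circle traversed twice, standing in for
# the low-dimensional latent coordinates of a cyclic motion like walking
t = np.linspace(0, 4 * np.pi, 100)
X = np.column_stack([np.cos(t), np.sin(t)])

# First-order dynamics model: GP regression from x_t to x_{t+1}
# (fixed kernel; optimizer disabled to keep the sketch fast and simple)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                              alpha=1e-4, optimizer=None)
gp.fit(X[:-1], X[1:])

# Mean-prediction: generate a new motion by iterating the posterior mean,
# avoiding the more expensive Monte Carlo sampling of the latent dynamics
x = X[0].copy()
rollout = [x]
for _ in range(100):
    x = gp.predict(x.reshape(1, -1))[0]
    rollout.append(x)
rollout = np.asarray(rollout)

# The rollout should stay inside the high-probability "tube" around the
# training data (here, near the unit circle)
radii = np.linalg.norm(rollout, axis=1)
print(radii.min(), radii.max())
```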

Surpassing Human-Level Face Verification Performance on LFW with GaussianFace. Lu, Tang. AAAI 2015

  1. First algorithm to beat human performance on the Labeled Faces in the Wild dataset
  2. This has traditionally been a difficult problem for a few reasons:
    1. Often algorithms try to use different face datasets to help training, but the faces in different datasets come from different distributions
    2. On the other hand, relying on only one dataset can lead to overfitting
    3. So it is necessary to be able to learn from multiple datasets with different distributions and generalize appropriately
  3. Most algorithms for face recognition fall into 2 categories:
    1. Extracting low-level features (through manually designed approaches, such as SIFT)
    2. Classification models (such as NNs)
  4. “Since most existing methods require some assumptions to be made about the structures of the data, they cannot work well when the assumptions are not valid. Moreover, due to the existence of the assumptions, it is hard to capture the intrinsic structures of data using these methods.”
  5. GaussianFace is based on Discriminative Gaussian Process Latent Variable Model
  6. The algorithm is extended to work from multiple data sources
    1. “From the perspective of information theory, this constraint aims to maximize the mutual information between the distributions of target-domain data and multiple source-domains data.”
  7. Because GPs are in the class of Bayesian nonparametrics, they require less tuning
  8. There are optimizations made to allow GPs to scale up for large data sets
  9. Model functions in two modes:
    1. Binary classifier
    2. Feature extractor
    3. “In the former mode, given a pair of face images, we can directly compute the posterior likelihood for each class to make a prediction. In the latter mode, our model can automatically extract high-dimensional features for each pair of face images, and then feed them to a classifier to make the final decision.”
  10. Earlier work on this dataset used the Fisher vector, which is derived from a Gaussian Mixture Model
  11. <I wonder if it’s possible to use multi-task learning to work on both the video and kinematic data?  Multi-task learning with GPs existed before this paper>
  12. Other work used conv nets to take faces from different perspectives and lighting to produce a canonical representation, other approach that explicitly models face in 3D and also used NNs, but these require engineering to get right
  13. “hyper-parameters [of GPs] can be learned from data automatically without using model selection methods such as cross validation, thereby avoiding the high computational cost.”
  14. GPs are also robust to overfitting
  15. “The principle of GP clustering is based on the key observation that the variances of predictive values are smaller in dense areas and larger in sparse areas. The variances can be employed as a good estimate of the support of a probability density function, where each separate support domain can be considered as a cluster…Another good property of Equation (7) is that it does not depend on the labels, which means it can be applied to the unlabeled data.”
    1. <I would say this is more of a heuristic than an observation, but I could see how it is a useful assumption to work from>
    2. Basically it just works from the density of the samples in the domain
    3. <Oh I guess I knew this already>
  16. “The Gaussian Process Latent Variable Model (GPLVM) can be interpreted as a Gaussian process mapping from a low dimensional latent space to a high dimensional data set, where the locale of the points in latent space is determined by maximizing the Gaussian process likelihood with respect to Z [the datapoints in their latent space].”
  17. “The DGPLVM is an extension of GPLVM, where the discriminative prior is placed over the latent positions, rather than a simple spherical Gaussian prior. The DGPLVM uses the discriminative prior to encourage latent positions of the same class to be close and those of different classes to be far”
  18. “In this paper, however, we focus on the covariance function rather than the latent positions.”
  19. “The covariance matrix obtained by DGPLVM is discriminative and more flexible than the one used in conventional GPs for classification (GPC), since they are learned based on a discriminative criterion, and more degrees of freedom are estimated than conventional kernel hyper-parameters”
  20. “From an asymmetric multi-task learning perspective, the tasks should be allowed to share common hyper-parameters of the covariance function. Moreover, from an information theory perspective, the information cost between target task and multiple source tasks should be minimized. A natural way to quantify the information cost is to use the mutual entropy, because it is the measure of the mutual dependence of two distributions”
  21. There is a weighing parameter that controls how much the other data sets contribute
  22. Optimize with scaled conjugate gradient
  23. Use anchor graphs to work around dealing with a large matrix they need to invert
  24. “For classification, our model can be regarded as an approach to learn a covariance function for GPC”
  25. <Not following the explanation for how it is used as a feature generator, I think it has to do with how close a point is to cluster centers>
  26. Other traditional methods work well here (like SVM, boosting, linear regression), but not as well as GP <Is this vanilla versions or on the GP features?>
  27. Works better as a feature extractor than other methods like k-means, tree, GMM
  28. “Deepface” was the next-best method
  29. It is only half-fair to say this beats human performance: human performance is better in the non-cropped scenario, and this result is for the cropped scenario.
    1. <My guess is that in the non-cropped scenario, machine performance conversely degrades even though human performance increases>
  30. Performance could be further increased, but memory is an issue, so better forms of sparsification for the large covariance matrix would be a win

Deep Learning. LeCun, Bengio, Hinton. Nature 2015

  1. “Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.”
  2. Previous machine learning methods traditionally relied on significant hand-engineering to process data into something the real learning algorithm could use
  3. “Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations.”
  4. Has allowed for breakthroughs in many different areas
  5. “We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data.”
    1. <really? they have a very different definition of very little than i do>
  6. “In practice, most practitioners use a procedure called stochastic gradient descent (SGD).”
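A minimal sketch of SGD on a toy linear-regression problem (the data and learning rate here are illustrative, not from the paper): parameters are nudged after every single example rather than after a full pass over the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + 1 plus a little noise.
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.1 * rng.normal(size=200)

w, b = 0.0, 0.0
lr = 0.1
for epoch in range(50):
    for i in rng.permutation(len(X)):   # one example at a time: "stochastic"
        err = (w * X[i, 0] + b) - y[i]
        w -= lr * err * X[i, 0]         # gradient of 0.5*err^2 w.r.t. w
        b -= lr * err                   # gradient of 0.5*err^2 w.r.t. b
# w and b should now be close to the true values 2 and 1.
```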
  7. Visual classifiers have to learn to be invariant to many things like background, shading, contrast, orientation, and zoom, but also have to be very sensitive to other things (for example, learning to distinguish a German shepherd from a wolf)
  8. “As long as the modules are relatively smooth functions of their inputs and of their internal weights, one can compute gradients using the backpropagation procedure.”
    1. Which is just an application of the chain rule for derivatives
  9. ReLUs are best for deep networks, can help remove the need for pre training
  10. There are theoretical results explaining why NNs rarely get stuck in poor local minima (especially large networks)
  11. The modern wave of deep NN work started in 2006, when pretraining was done by having each layer model the activity of the layer below
  12. The first major application of deep nets was speech recognition in 2009; by 2012 it was doing speech recognition on Android
  13. For small datasets, unsupervised pretraining is helpful
  14. Convnets for vision
  15. “There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers.”
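Three of those four ideas can be shown in a few lines of 1-D numpy (a toy sketch, not from the paper): the same small filter is applied at every local window (local connections + shared weights), and pooling keeps the strongest response in each region.

```python
import numpy as np

def conv1d_valid(x, w):
    """Local connections + shared weights: slide one small filter w
    over every local window of x."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

def max_pool(x, size=2):
    """Pooling: keep the strongest response per window, giving some
    invariance to small shifts of the input."""
    return np.array([x[i:i + size].max()
                     for i in range(0, len(x) - size + 1, size)])

x = np.array([0., 1., 0., 0., 1., 0., 0., 0.])
w = np.array([1., -1.])                 # a simple edge-like filter
feat = max_pool(conv1d_valid(x, w))     # detected edges, downsampled
```

Stacking many such conv + pool stages (the fourth idea, "many layers") yields progressively more abstract features.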
  16. “Recent ConvNet architectures have 10 to 20 layers of ReLUs, hundreds of millions of weights, and billions of connections between units. Whereas training such large networks could have taken weeks only two years ago, progress in hardware, software and algorithm parallelization have reduced training times to a few hours.”
  17. “The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast ‘intuitive’ inference that underpins effortless commonsense reasoning.”
  18. Machine translation and RNNs
  19. Regular RNNs are hard to train because gradients vanish or explode over long sequences; LSTMs fix these major problems
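The LSTM's trick can be sketched in one step of a gated cell (a minimal illustration with random weights; parameter names are mine, not the paper's): the additive cell update `c = f*c + i*g` is what lets information and gradients survive many steps, unlike a plain RNN's repeated squashing.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, p):
    """One LSTM step: gates decide what to forget, write, and expose."""
    z = np.concatenate([x, h])
    f = sigmoid(p["Wf"] @ z + p["bf"])   # forget gate
    i = sigmoid(p["Wi"] @ z + p["bi"])   # input gate
    g = np.tanh(p["Wg"] @ z + p["bg"])   # candidate cell values
    o = sigmoid(p["Wo"] @ z + p["bo"])   # output gate
    c = f * c + i * g                    # additive memory update
    h = o * np.tanh(c)                   # exposed hidden state
    return h, c

rng = np.random.default_rng(0)
nx, nh = 3, 4
p = {f"W{k}": rng.normal(size=(nh, nx + nh)) * 0.1 for k in "figo"}
p.update({f"b{k}": np.zeros(nh) for k in "figo"})
h = c = np.zeros(nh)
for t in range(5):                       # run over a short random sequence
    h, c = lstm_step(rng.normal(size=nx), h, c, p)
```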
  20. “Over the past year, several authors have made different proposals to augment RNNs with a memory module. Proposals include the Neural Turing Machine in which the network is augmented by a ‘tape-like’ memory that the RNN can choose to read from or write to88, and memory networks, in which a regular network is augmented by a kind of associative memory89. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions. Beyond simple memorization, neural Turing machines and memory networks are being used for tasks that would normally require reasoning and symbol manipulation. Neural Turing machines can be taught ‘algorithms’. Among other things, they can learn to output a sorted list of symbols when their input consists of an unsorted sequence in which each symbol is accompanied by a real value that indicates its priority in the list88. Memory networks can be trained to keep track of the state of the world in a setting similar to a text adventure game and after reading a story, they can answer questions that require complex inference90. In one test example, the network is shown a 15-sentence version of The Lord of the Rings and correctly answers questions such as “where is Frodo now?”89.”
  21. Although the focus now is mainly on supervised learning, they expect that unsupervised learning will become most important in the long term
  22. “Systems combining deep learning and reinforcement learning are in their infancy, but they already outperform passive vision systems99 at classification tasks and produce impressive results in learning to play many different video games100.”
  23. “Natural language understanding is another area in which deep learning is poised to make a large impact over the next few years. We expect systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time.”
