Name: GEOFFREY E HINTON
Papers: 220
Collaborators: 204
Citations: 40435
PageRank: 4753.73
Referrers: 66094
Referees: 1852
References: 1848
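The PageRank figures above, and the per-paper PageRank column below, are citation-graph scores rather than raw counts. As a rough illustration only, here is a minimal Python sketch of PageRank computed by power iteration over a toy citation graph; the damping factor of 0.85, the iteration count, and the toy graph are assumptions for illustration, not the profile site's actual scoring pipeline.

# Hypothetical sketch: PageRank by power iteration over a citation graph.
# The damping factor, iteration count, and toy graph are illustrative
# assumptions, not the profile site's actual method.

def pagerank(graph, damping=0.85, iters=50):
    """graph maps each paper to the list of papers it cites."""
    nodes = list(graph)
    n = len(nodes)
    rank = {p: 1.0 / n for p in nodes}
    for _ in range(iters):
        # Every paper first receives the uniform "teleport" share.
        new_rank = {p: (1.0 - damping) / n for p in nodes}
        for src, cited in graph.items():
            if cited:
                # A paper passes its rank, split evenly, to the papers it cites.
                share = damping * rank[src] / len(cited)
                for dst in cited:
                    new_rank[dst] += share
            else:
                # Dangling paper (cites nothing): spread its rank uniformly.
                for dst in nodes:
                    new_rank[dst] += damping * rank[src] / n
        rank = new_rank
    return rank

# Toy usage: C is cited by both A and B, so it ends up with the highest score.
toy_citations = {"A": ["C"], "B": ["C"], "C": []}
print(pagerank(toy_citations))

Real bibliometric PageRank variants differ in details such as time decay and venue weighting, so treat this only as intuition for why a paper's PageRank can diverge from its raw citation count.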
Title | Citations | PageRank | Year
Pix2seq: A Language Modeling Framework for Object Detection | 0 | 0.34 | 2022
Unsupervised Part Representation by Flow Capsules | 0 | 0.34 | 2021
Teaching with Commentaries | 0 | 0.34 | 2021
Deep learning for AI | 5 | 0.45 | 2021
CvxNet: Learnable Convex Decomposition | 7 | 0.42 | 2020
Imputer: Sequence Modelling via Imputation and Dynamic Programming | 0 | 0.34 | 2020
Big Self-Supervised Models are Strong Semi-Supervised Learners | 0 | 0.34 | 2020
A Simple Framework for Contrastive Learning of Visual Representations | 2 | 0.36 | 2020
Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions | 0 | 0.34 | 2020
The Next Generation of Neural Networks | 0 | 0.34 | 2020
Analyzing and Improving Representations with the Soft Nearest Neighbor Loss | 1 | 0.35 | 2019
Similarity of Neural Network Representations Revisited | 2 | 0.36 | 2019
When Does Label Smoothing Help? | 7 | 0.40 | 2019
Learning Sparse Networks Using Targeted Dropout | 1 | 0.36 | 2019
Cerberus: A Multi-headed Derenderer | 0 | 0.34 | 2019
Stacked Capsule Autoencoders | 0 | 0.34 | 2019
Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures | 0 | 0.34 | 2018
Who Said What: Modeling Individual Labelers Improves Classification | 12 | 0.65 | 2018
Illustrative Language Understanding: Large-Scale Visual Grounding With Image Search | 0 | 0.34 | 2018
Large scale distributed neural network training through online distillation | 18 | 0.68 | 2018
Matrix capsules with EM routing | 47 | 1.52 | 2018
DARCCC: Detecting Adversaries by Reconstruction from Class Conditional Capsules | 5 | 0.42 | 2018
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer | 95 | 3.02 | 2017
Distilling a Neural Network Into a Soft Decision Tree | 18 | 0.64 | 2017
Regularizing Neural Networks by Penalizing Confident Output Distributions | 42 | 1.24 | 2017
Attend, Infer, Repeat: Fast Scene Understanding with Generative Models | 0 | 0.34 | 2016
Layer Normalization | 0 | 0.34 | 2016
A Simple Way to Initialize Recurrent Networks of Rectified Linear Units | 94 | 3.55 | 2015
Distilling the Knowledge in a Neural Network | 696 | 21.80 | 2015
Guest Editorial: Deep Learning | 3 | 0.38 | 2015
Application of Deep Belief Networks for Natural Language Understanding | 83 | 2.32 | 2014
Grammar as a Foreign Language | 237 | 10.73 | 2014
Autoregressive product of multi-frame predictions can improve the accuracy of hybrid models | 0 | 0.34 | 2014
Dropout: a simple way to prevent neural networks from overfitting | 3420 | 130.13 | 2014
Where do features come from? | 6 | 0.42 | 2014
Tensor Analyzers | 0 | 0.34 | 2013
Modeling Documents with Deep Boltzmann Machines | 64 | 3.62 | 2013
Discovering Multiple Constraints that are Frequently Approximately Satisfied | 12 | 4.73 | 2013
New types of deep neural network learning for speech recognition and related applications: an overview | 181 | 8.00 | 2013
Modeling Natural Images Using Gated MRFs | 25 | 1.65 | 2013
Improving deep neural networks for LVCSR using rectified linear units and dropout | 83 | 5.62 | 2013
Speech recognition with deep recurrent neural networks | 454 | 23.81 | 2013
Efficient parametric projection pursuit density estimation | 1 | 1.32 | 2012
Deep Lambertian Networks | 1 | 0.35 | 2012
Conditional Restricted Boltzmann Machines for Structured Output Prediction | 36 | 2.24 | 2012
Visualizing non-metric similarities in multiple maps | 28 | 1.18 | 2012